Open Journal of Social Sciences, 2015, 3, 145-149
Published Online November 2015 in SciRes. http://www.scirp.org/journal/jss
http://dx.doi.org/10.4236/jss.2015.311019
How Different Is the Cognition towards
Dissertation between Candidates for
Mathematics Master Degree and Reviewers?
Zezhong Yang1, Haiyin Zhou2
1The School of Mathematics, Shandong Normal University, Jinan, China
2The Educational College, Shandong Normal University, Jinan, China
Received 26 October 2015; accepted 13 November 2015; published 16 November 2015
Copyright © 2015 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/
Abstract
This research focused on the differences in cognition towards the dissertation between candidates for Ed. M. in mathematics and reviewers. We showed 3 dissertations of different levels to 37 candidates and 5 reviewers and analyzed the mean scores on 55 items. The results indicated that most candidates' cognitions towards the dissertation were quite different from the reviewers', especially the cognitions on the literature review, question expression and analysis, research methods, research work, application of professional knowledge, results and conclusion. These candidates' cognitions were on the whole superficial and not in place; some of them were even inexact or false. Therefore, supervisors should introduce to candidates for Ed. M. in mathematics some fundamental, concrete and detailed knowledge about the dissertation in order to help them write a good dissertation.
Keywords
Ed. M. in Mathematics, Cognition Difference, Candidates, Dissertation
1. Introduction
In recent years, the quality of dissertations written by candidates for Ed. M. in mathematics in Mainland China has been generally not high [1]. What is the reason? Some researchers thought that the main reason was the unreasonable arrangement of supervisors and the lack of standards [2]. Some researchers claimed that it was insufficient learning and practice of research methods [3]. Some researchers argued that the imperfect cultivation system was the substantive reason [4]. Our preliminary research indicated that the superficialness of and deviation in candidates' cognitions should be a significant reason [5]. Then how big is the deviation? How different is the candidates' cognition from the reviewers'? To answer these questions, we conducted a further investigation. We showed 3 dissertations of different levels to candidates and reviewers and asked them to assess and rate them. Our aim was to find out the differences between candidates and reviewers in assessment, to ascertain the candidates' concrete deficiencies in their cognition towards the dissertation, and to try to find some strategies to help candidates.
2. Methodology
2.1. Instrument
We randomly chose 3 dissertations of Ed. M. in mathematics published by Shandong Normal University last year, which had been scored A, B and C respectively by reviewers, as the objects to be rated. The rating sheet was made by synthesizing the assessment sheets currently prevalent in Mainland China and reorganizing or rearranging their items, and it had 55 items in total. The highest score of each item was 5, and the lowest score was 0. We intended to provide candidates and reviewers with an unambiguous, detailed and comprehensive rating sheet so that they could rate accurately and expediently.
2.2. Participants
We randomly recruited 37 candidates for Ed. M. in mathematics from Shandong Normal University, Qufu Normal University, Ludong University, Qingdao University and Liaocheng University. They were all first-year postgraduates majoring in mathematics education, including 26 female candidates and 11 male candidates. Meanwhile, we recruited 5 reviewers from the above universities who had assessed and rated candidates' dissertations for over 5 years. Copies of the dissertations and the rating sheets were delivered to them by post. We asked them to post the sheets back within a month.
2.3. Data Collection
We received 42 rating sheets in total, of which 37 were from candidates and 5 were from reviewers. After rejecting 3 incomplete sheets from the candidates' sheets, 39 effective rating sheets were finally available.
2.4. Data Analysis
We analyzed all responses by examining the mean score of every item. To draw conclusions, we examined all data in numerical, graphical and tabular forms, while considering the relevant research literature.
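As a rough illustration of this step, the following Python sketch computes the mean score of every item for each group and each dissertation level. It is not the authors' actual code; the file name "ratings.csv", the column names and the data layout are all hypothetical assumptions made only for the example.

```python
# A minimal sketch of the per-item mean-score computation described above.
# Assumptions (not from the paper): ratings are stored in "ratings.csv" with
# one row per rating sheet, a "group" column ("candidate" or "reviewer"),
# a "dissertation" column ("A", "B" or "C"), and columns "item_1" ... "item_55"
# holding the 0-5 scores.
import pandas as pd

def item_means(path: str = "ratings.csv") -> pd.DataFrame:
    """Mean score of each item, split by dissertation level and rater group."""
    sheets = pd.read_csv(path)
    item_cols = [f"item_{i}" for i in range(1, 56)]  # the 55 rating items
    # Drop incomplete sheets, mirroring the rejection of 3 incomplete candidate sheets.
    complete = sheets.dropna(subset=item_cols)
    # Mean of every item for each (dissertation level, group) combination.
    return complete.groupby(["dissertation", "group"])[item_cols].mean()

if __name__ == "__main__":
    print(item_means().round(2))
```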
3. Results
3.1. Descriptive Statistics
The mean score of each item is shown in Table 1. From Table 1 we can see that most scores candidates gave to the A-level dissertation were greater than 4, while most scores reviewers gave were between 3 and 4. Most scores candidates gave to the B-level dissertation were between 3 and 4, while most scores reviewers gave were between 2 and 3. Most scores candidates gave to the C-level dissertation were between 3 and 4, while most scores reviewers gave were between 1 and 2. So, in general, most scores given by candidates were higher than those given by reviewers.
3.2. The Biggest Difference of Scores towards 3 Dissertations
Figure 1 displays the mean difference between candidates' and reviewers' scores on the 55 items for the 3 dissertations. From Figure 1 we can see that the mean difference of almost every item grew larger from the A-level dissertation to the C-level dissertation. Some mean differences for the C-level dissertation were over 3, or even 4, such as those of items 10, 13, 14, 15, 17, 36, etc. Only a few items' mean differences were not obvious, such as those of items 43, 52, 53 and 54. So the candidates' cognitions about the C-level dissertation in general should be quite different from the reviewers'.
3.3. The Most Controversial Items
Figure 2 displays the mean difference between the scores given by candidates and those given by reviewers for each item across all 3 dissertations. From Figure 2 we can see that the most controversial items were: 7. Does the literature review include the last research results? 9. Did the author summarize all previous results? 10. Did the author put forward a new question? 14. Is the introduction of methods clear? 15. Is the introduction of methods exact? 16. Is the selected method suitable for research? 17. Did the author explain why these methods were chosen adequately? 20. Is the analysis of questions in-depth? 25. Are results novel? 28. Is the conclusion reasonable? 32. Did the conclusion answer previous question? 36. Is the professional knowledge which was used rich? 38. Is the research work enough? 42. Is there innovation in research methods?
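To make the comparison behind Figures 1 and 2 concrete, the sketch below computes the candidate-minus-reviewer mean difference per item and level, and then ranks items by the difference averaged over the three dissertations. It reuses the hypothetical "ratings.csv" layout assumed in the Section 2.4 sketch, and selecting the top 14 items by this average is only an illustrative criterion, not necessarily the exact procedure the authors used.

```python
# A sketch of the mean-difference comparison illustrated by Figures 1 and 2.
# Assumptions (not from the paper): the same hypothetical "ratings.csv" layout
# as in the Section 2.4 sketch; top_n=14 mirrors the count of most
# controversial items reported in the Discussion.
import pandas as pd

ITEM_COLS = [f"item_{i}" for i in range(1, 56)]  # the 55 rating items

def mean_differences(path: str = "ratings.csv") -> pd.DataFrame:
    """Candidate-minus-reviewer mean score for every item and dissertation level."""
    sheets = pd.read_csv(path).dropna(subset=ITEM_COLS)
    means = sheets.groupby(["dissertation", "group"])[ITEM_COLS].mean()
    candidates = means.xs("candidate", level="group")
    reviewers = means.xs("reviewer", level="group")
    return candidates - reviewers  # rows: levels A/B/C, columns: item_1..item_55

def most_controversial(diffs: pd.DataFrame, top_n: int = 14) -> pd.Series:
    """Items with the largest difference averaged over the three dissertations."""
    overall = diffs.mean(axis=0)  # average the per-level differences per item
    return overall.sort_values(ascending=False).head(top_n)

if __name__ == "__main__":
    diffs = mean_differences()
    print(most_controversial(diffs))
```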
Table 1. Descriptive statistics of respondents.

Items | Candidates (A) | Reviewers (A) | Candidates (B) | Reviewers (B) | Candidates (C) | Reviewers (C)
1. Does the question have theoretical value? | – | 3.6 | 3.52 | 3 | 3.76 | 2.2
2. Does the question have practical value? | – | 4.6 | 4.08 | 3.8 | 4.94 | 3
3. Is it novel? | – | 3.8 | 4.18 | 1 | 3.96 | 2.2
4. Is the question suitable for Ed. M. candidates to research? | – | 3 | 3.82 | 3.8 | 4.22 | 3
5. Did the author introduce the background of question clearly? | – | 3.8 | 4.11 | 2.2 | 3.82 | 3
6. Is the literature review comprehensive? | – | 4.2 | 4.61 | 0.2 | 3.14 | 2.2
7. Does the literature review include the last research results? | – | 3.6 | 3.98 | 0.2 | 3.45 | 1
8. Did the author analyze previous researches? | – | 4.4 | 3.28 | 0.2 | 3.16 | 1
9. Did the author summarize all previous results? | – | 4.6 | 3.77 | 0.2 | 3.67 | 1
10. Did the author put forward a new question? | – | 3.8 | 3.06 | 0.2 | 4.56 | 1
11. Is the introduction of question clear? | – | 3.8 | 4.43 | 3 | 4.23 | 1
12. Did the introduction of question include the source of question? | 4.02 | 4.6 | 3.76 | 3 | 3.56 | 1
13. Did the introduction of question include details on the significance of the research? | 4.94 | 4.6 | 3.89 | 3 | 4.28 | 1
14. Is the introduction of methods clear? | – | 2.2 | 4.59 | 1 | 4.53 | 0.2
15. Is the introduction of methods exact? | – | 3 | 4.56 | 1 | 4.33 | 0.2
16. Is the selected method suitable for research? | – | 3.8 | 4.33 | 2.2 | 3.98 | 0.2
17. Did the author explain why these methods were chosen adequately? | 4.34 | 2.2 | 4.63 | 0.2 | 4.54 | 1
18. Are the definitions of relevant concepts clear? | – | 4.6 | 3.61 | 3.8 | 3.52 | 3
19. Is the analysis of questions comprehensive? | – | 3.8 | 3.84 | 3 | 3.85 | 2.2
20. Is the analysis of questions in-depth? | – | 3.8 | 4.85 | 2.2 | 4.59 | 2.2
21. Is the analysis of questions logical? | – | 3.8 | 3.84 | 2.2 | 4.01 | 1
22. Are all results clear? | – | 3.8 | 3.37 | 3 | 3.72 | 2.2
23. Are all results rational? | – | 3.8 | 2.93 | 3 | 3.72 | 1
24. Is the result enough? | – | 3 | 2.84 | 3 | 3.38 | 1
25. Are results novel? | – | 3 | 2.43 | 1 | 3.89 | 1
26. Are results believable? | – | 3.8 | 3.48 | 3 | 4.43 | 1
27. Is the conclusion clear? | – | 3.8 | 3.53 | 3 | 4.28 | 1
28. Is the conclusion reasonable? | – | 3 | 3.87 | 2.2 | 3.43 | 1
29. Did the conclusion have full exposition? | – | 3 | 3.27 | 3 | 3.41 | 1
30. Is the explanation of conclusion logical? | – | 3 | 3.02 | 2.2 | 3.35 | 1
31. Is the conclusion new? | – | 2.2 | 1.84 | 1 | 2.86 | 1
32. Did the conclusion answer previous question? | – | 3.8 | 2.85 | 1 | 3.94 | 1
33. Is the conclusion valuable? | – | 4.6 | 3.47 | 2.2 | 3.81 | 1
34. Did the author apply their professional knowledge? | – | 4.6 | 3.87 | 3 | 4.44 | 1
35. Is the application of professional knowledge proper? | – | 3.8 | 4.91 | 3 | 4.08 | 2.2
36. Is the professional knowledge which was used rich? | – | 3.8 | 4.83 | 2.2 | 4.68 | 1
37. Is the research work comprehensive? | – | 3 | 3.24 | 2.2 | 4.67 | 1
38. Is the research work enough? | – | 3 | 3.67 | 2.2 | 4.07 | 1
39. Is the research work believable? | – | 3.8 | 2.81 | 3 | 4.14 | 1
40. Are all methods used in research works mentioned in the previous part of method? | 4.98 | 3 | 2.87 | 3 | 2.92 | 1
41. Is the using of methods reasonable? | – | 3 | 3.82 | 3 | 3.94 | 1
42. Is there innovation in research methods? | – | 1 | 2.8 | 0.2 | 3.54 | 1
43. Is the arrangement clear? | – | 3.8 | 4.44 | 3 | 3.28 | 3
44. Is the arrangement reasonable? | – | 3.8 | 4.61 | 3 | 3.24 | 2.2
45. Is the arrangement of chapter logical? | – | 3.8 | 3.34 | 3.8 | 3.97 | 1
46. Is the language fluent? | – | 3.8 | 4.35 | 3.8 | 4.44 | 3
47. Are the tables and figures clear? | – | 3.8 | 4.49 | 3.8 | 4.46 | 2.2
48. Are the tables and figures right? | – | 3.8 | 3.86 | 3.8 | 4.15 | 3
49. Are the symbols and formula clear? | – | 3.8 | 3.66 | 3.8 | 3.95 | 2.2
50. Are the symbols and formula right? | – | 3.8 | 3.98 | 3.8 | 4.26 | 3
51. Are the title directory, abstract and key words standard? | – | 4.6 | 4.31 | 4.6 | 3.97 | 3
52. Is its printing standard? | – | 4.6 | 2.58 | 4.6 | 4.96 | 4.6
53. How is its binding? | – | 4.6 | 3.52 | 4.6 | 4.49 | 4.6
54. Is the reference standard? | – | 4.6 | 4.36 | 4.6 | 3.73 | 3
55. Is its appendix standard? | – | 4.6 | 4.47 | 4.6 | 3.76 | 0.2
SUM | – | 202.8 | 206.43 | 140.6 | 217.46 | 91
MEAN | – | 3.6873 | 3.7533 | 2.5564 | 3.9538 | 1.6545
Figure 1. The mean differences towards 3 dissertations.
Figure 2. The mean difference of all scores towards each item.
4. Discussion
Based on the results above, the scores that candidates gave on the 55 items for the 3 dissertations were usually higher than those given by reviewers, and they did not change much from the A-level dissertation to the C-level dissertation. It seemed that, in the present candidates' opinion, all passed and published dissertations were good. This indicated that there were many detailed differences in the cognition of dissertations between present candidates and reviewers. Most candidates' concrete cognitions towards the dissertation were obviously superficial and not in place, and some of them were even inexact; otherwise, the candidates could have judged the subtle differences between dissertations of different levels accurately.
There were 14 most controversial items in total. The scores that candidates gave for these 14 items were generally much higher than those given by reviewers. This phenomenon indicated that the biggest cognition differences towards dissertations between present candidates and reviewers lay mainly in the literature review, the question, research methods, research work, professional knowledge, results and conclusion. It seemed that most present candidates did not completely understand what a literature review or a research method is, and did not yet realize that a literature review must include the latest research results, summarize all previous results and finally put forward some new questions. It seemed that they did not know what a clear and exact introduction of research methods is, what a suitable method for a research is, or what innovation in research methods is, and they did not know why the research methods must be explained adequately or how to do so. It seemed that they did not know what a comprehensive or deep analysis of questions is, and did not know how to judge whether research results are novel or whether a conclusion has been reached reasonably. Concerning the research work, it seemed that the candidates could not judge whether the research work was enough; they might not even know what enough research is. If they had known all of the above well and exactly, the score differences would certainly not have been so big.
5. Conclusion
Even though only 3 dissertations were selected for rating and the participants were not so many, the results we obtained are still significant and reliable. Based on the results above, we know that most cognitions of candidates towards the dissertation were quite different from those of reviewers, especially the cognitions on the literature review, question expression and analysis, research methods, research work, application of professional knowledge, results and conclusion. These candidates' cognitions were superficial and not in place; some of them were even inexact. In their view, all passed and published dissertations were good. So it is necessary for supervisors, in the process of guiding candidates for Ed. M. in mathematics to write their dissertations, to let candidates know well the fundamental knowledge related to the above aspects. The supervisors must teach candidates some concrete and detailed criteria for dissertations. The supervisors should guide candidates to know what a clear and exact introduction of methods is, what a comprehensive analysis of questions is, what novel results are, what a reasonably reached conclusion is, what innovation in methods is, and so on. What is more, the supervisors should let candidates know how to do some of these things, such as how to introduce the question and the methods clearly and exactly, how to analyze the question comprehensively and how to reach a conclusion reasonably. Only knowing more and doing more can change and improve people's cognition, and this is even more so for current candidates for Ed. M. in mathematics. Only when the candidates have acquired more knowledge, especially the relevant detailed criteria for dissertations, and have learned how to do the research work, can their cognitions about the dissertation be improved so that they can write a good dissertation.
Funding
Supported by the project of research on enhancing the quality of full-time Master of Mathematics Education candidates' dissertations (SDYC14048).
References
[1] Hou, Zh.T. (2010) Research on Quality Assurance of Master of Education Dissertation: Reviews and Reflection. Academic Degrees & Graduate Education, 6, 40-44.
[2] Yang, Q.L. (2005) The Problems and Its Explanation Appeared in Practice of Master Education. Research in Educational Development, 6, 77-80.
[3] Li, G.F. and Yang, Z.P. (2011) The Problems, Reasons and Countermeasures of Master of Education Dissertations. Academic Degrees & Graduate Education, 2, 20-25.
[4] Zhang, D.Q. (2011) Mathematics Education for the Master Degree Thesis Writing Analysis of the Investigation. Journal of Mathematics Education, 6, 25-29.
[5] Yang, Z.Z. and Sun, D.D. (2015) Research on Full-Time Master of Mathematics Education Candidates' Cognition of Dissertation. Open Journal of Social Sciences, 3, 46-50. http://dx.doi.org/10.4236/jss.2015.310007