[1] R. E. Wright, “Student Evaluations of Faculty: Concerns Raised in the Literature, and Possible Solutions,” College Student Journal, Vol. 40, No. 2, 2006, pp. 417-422.
[2] Monash University, “Course Experience Questionnaire (CEQ),” Management Quality Unit (MQU), Retrieved 27 August 2010. http://opq.edu.au/mqu/evaluations/ags/ceq.html
[3] Stanford University, “Interpreting and Working with Your Course Evaluations,” The Center for Teaching and Learning, Retrieved 14 July 2010. http://ctl.stanford.edu/interpret.pdf
[4] The University of Sydney, “Reading the Student Course Experience Questionnaire Report,” Institute for Teaching & Learning, Retrieved 14 July 2010. http://www.itl.usyd.edu.au/sceq/reading1.htm
[5] The Australian National University, “Course Program Experience Questionnaire 2009,” Retrieved 14 July 2010. http://unistats.anu.edu.au/Pubs/Suveys/CEQ/2009%20CEQ%20-%20All%20Coursework.pdf
[6] McGill University, “Course Evaluation Questionnaires,” Teaching and Learning Services, Retrieved 14 July 2010. http://www.mcgill.ca/tls/courseevalutions/questionnaires/
[7] Princeton University, “Mid-Semester Course Evaluations,” The McGraw Center for Teaching and Learning, Retrieved 14 July 2010. http://www.princeton.edu/mcgraw/library/for-faculty/midcourseevals/Student-Rating-Form.pdf
[8] University of Washington, “Understanding IAS Course Summary Reports,” Office of Educational Assessment, Retrieved 14 July 2010. http://www.washington.edu/oea/service/course_eval/uw_seattle/course_reports.html
[9] J. Franklin, “Interpreting the Numbers: Using a Narrative to Help Others Read Student Evaluations of Your Teaching Accurately,” In: K. G. Lewis, Ed., Techniques and Strategies for Interpreting Student Evaluations (Special Issue), New Directions for Teaching and Learning, Vol. 87, 2001, pp. 85-100.
[10] M. Theall and J. Franklin, “Looking for Bias in All the Wrong Places: A Search for Truth or a Witch Hunt in Student Ratings of Instruction?” In: M. Theall, P. C. Abrami and L. A. Mets, Eds., The Student Ratings Debate: Are They Valid? How Can We Best Use Them? (Special Issue), New Directions for Institutional Research, Vol. 109, 2001, pp. 45-46.
[11] M. Theall, “Student Ratings: Myths vs. Research Evidence,” 2002, Retrieved 26 June 2008. http://studentratings.byu.edu/info/faculty/myths.asp
[12] F. Zabaleta, “The Use and Misuse of Student Evaluations of Teaching,” Teaching in Higher Education, Vol. 12, No. 1, 2007, pp. 55-76. doi:10.1080/13562510601102131
[13] P. Gravestock and E. Gregor-Greenleaf, “Student Course Evaluations: Research, Models and Trends,” Higher Education Quality Council of Ontario, Toronto, 2008.
[14] J. Carifio and R. J. Perla, “Ten Common Misunderstandings, Misconceptions, Persistent Myths and Urban Legends about Likert Scales and Likert Response Formats and Their Antidotes,” Journal of Social Sciences, Vol. 3, No. 3, 2007, pp. 106-116. doi:10.3844/jssp.2007.106.116
[15] Wikipedia, “Likert Scale,” Retrieved 6 February 2010. http://en.wikipedia.org/wiki/Likert_scale
[16] The George Washington University, “Course Evaluation,” Retrieved 14 July 2010. http://extend.unb.ca/oalp/oalp_course_eval.php
[17] M. Healey, K. M. O’Connor and P. Broadfoot, “Reflections on Engaging Students in the Process and Product of Strategy Development for Learning, Teaching, and Assessment: An Institutional Case Study,” International Journal for Academic Development, Vol. 15, No. 1, 2010, pp. 19-32. doi:10.1080/13601440903529877
[18] F. Campbell, J. Eland, A. Rumpus and R. Shacklock, “Hearing the Student Voice: Involving Students in Curriculum Design and Development,” Retrieved 31 May 2009. http://www2.napier.ac.uk/studentvoices/curriculumdownloadStudentVoice2009_final.pdf
[19] J. Case, “Alienation and Engagement: Exploring Students’ Experiences of Studying Engineering,” Teaching in Higher Education, Vol. 12, No. 1, 2007, pp. 119-133. doi:10.1080/13562510601102354
[20] K. A. Duffy and P. A. O’Neill, “Involving Students in Staff Development Activities,” Medical Teacher, Vol. 25, No. 2, 2003, pp. 191-194. doi:10.1080/0142159031000092616
[21] M. Fuller, J. Georgeson, M. Healey, A. Hurst, S. Riddell, H. Roberts and E. Weedon, “Enhancing the Quality and Outcomes of Disabled Students’ Learning in Higher Education,” Routledge, London, 2009.
[22] M. Yorke and B. Longden, “The First-Year Experience of Higher Education in the UK,” Higher Education Academy, York, 2008.
[23] K. Grace-Martin, “Can Likert Scale Data Ever Be Continuous?” 2008, Retrieved 19 June 2010. http://www.articlealley.com/print_670606_22.html
[24] K. R. Sundaram, S. N. Dwivedi and V. Sreenivas, “Medical Statistics: Principles & Methods,” BI Publications Pvt. Ltd., New Delhi, 2009.
[25] R. Göb, C. McCollin and M. F. Ramalhoto, “Ordinal Methodology in the Analysis of Likert Scales,” Quality & Quantity, Vol. 41, No. 5, 2007, pp. 601-626. doi:10.1007/s11135-007-9089-z
[26] W. E. Cashin, “Students Do Rate Different Academic Fields Differently,” In: M. Theall and J. Franklin, Eds., Student Ratings of Instruction: Issues for Improving Practice (Special Issue), New Directions for Teaching and Learning, Vol. 43, 1990, pp. 113-121.