Appraisal of Using Global Student Rating Items in Quality Management of Higher Education in Saudi Arabian University

Abstract

Academic institutions preparing for quality and academic accreditation adopt a range of evaluations. Each evaluation comprises closed items: a set of individual items on various aspects, followed by a global item measuring students' overall satisfaction with the subject of that evaluation. A common question among academic developers is: "Where should one start, with the results of the global items or with those of the individual items?" Through exploratory analysis of course evaluation survey (CES) data on courses in the nursing program of the University of Dammam, this article attempts to answer this question. In summary, for this program, which is in its developmental phase, decision making on required action plans can be expedited by using the results of the global items.

Share and Cite:

A. Rubaish, L. Wosornu and S. Dwivedi, "Appraisal of Using Global Student Rating Items in Quality Management of Higher Education in Saudi Arabian University," iBusiness, Vol. 4, No. 1, 2012, pp. 1-9. doi: 10.4236/ib.2012.41001.

Conflicts of Interest

The authors declare no conflicts of interest.


Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.