Investigating the Reliability and Validity of Self and Peer Assessment to Measure Medical Students’ Professional Competencies

Abstract

The use of peer assessment through a multisource feedback process has gained recognition as a reliable and valid method for assessing the characteristics of professionals and trainees. A total of 168 first-year medical students completed a 15-item questionnaire to self-assess their professional work habits and interpersonal abilities. Each student was asked to identify 8 first-year classmates to complete a corresponding 15-item peer assessment. Although the self- and peer-assessment questionnaires both had strong reliability (Cronbach’s α = 0.85 and 0.91, respectively), an exploratory factor analysis yielded a 3-factor and a 2-factor solution, respectively; the third factor comprised items related to students’ personal attributes. Self-assessment mean scores were significantly lower than peer-assessment scores on all 15 items (Cohen’s d = 0.27 to 1.39, p < 0.001). A decision study analysis found that 7 peer assessors were needed to achieve a generalizability coefficient of 0.70. The findings suggest some inconsistencies in the construct validity and stability of measures between the self- and peer-assessment instruments. Fostering students’ self-awareness of their strengths and limitations, however, is recommended as part of their development in a profession that emphasizes self-regulation.
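The statistics reported above (Cronbach’s α, Cohen’s d, and the decision-study generalizability coefficient) can all be computed from standard formulas. The sketch below shows minimal Python implementations of each; the variance components and score matrices are hypothetical illustrations, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

def cohens_d(a, b) -> float:
    """Cohen's d effect size using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

def g_coefficient(var_person: float, var_residual: float, n_raters: int) -> float:
    """Decision-study projection of the generalizability coefficient for a
    persons-crossed-with-raters design, averaging over n_raters raters."""
    return var_person / (var_person + var_residual / n_raters)

# Illustrative variance components: with these (assumed) values, 7 raters
# yield a generalizability coefficient of exactly 0.70.
print(round(g_coefficient(1.0, 3.0, 7), 2))  # → 0.7
```

The `g_coefficient` function makes explicit why adding raters improves dependability: the residual (rater/error) variance is divided by the number of raters, so the person variance accounts for a growing share of the observed-score variance.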

Share and Cite:

Donnon, T., McIlwrick, J., & Woloschuk, W. (2013). Investigating the Reliability and Validity of Self and Peer Assessment to Measure Medical Students’ Professional Competencies. Creative Education, 4, 23-28. doi: 10.4236/ce.2013.46A005.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Accreditation Council for Graduate Medical Education (ACGME) (2013). ACGME Outcome Project. URL (last checked 15 January 2013). http://www.acgme.org/acgmeweb/tabid/159/DataCollectionSystems/AccreditationDataSystem.aspx
[2] Allerup, P., Aspegren, K., Ejlersen, E., et al. (2007). Use of 360-degree assessment of residents in internal medicine in a Danish setting: A feasibility study. Medical Teacher, 29, 166-170. doi:10.1080/01421590701299256
[3] Arnold, L. (2002). Assessing professional behavior: Yesterday, today, and tomorrow. Academic Medicine, 77, 502-515. doi:10.1097/00001888-200206000-00006
[4] Bandiera, G., Sherbino, J., & Frank, J. R. (2006). The CanMEDS assessment tools handbook: An introductory guide to assessment methods for the CanMEDS competencies. Ottawa, ON: The Royal College of Physicians and Surgeons of Canada.
[5] Brennan, R. L. (2001). Generalizability theory. New York: Springer Verlag.
[6] Brinkman, W. B., Geraghty, S. R., Lanpher, B. P., et al. (2007). Effect of multisource feedback on resident communication skills and professionalism. Archives of Pediatrics & Adolescent Medicine, 161, 44-49. doi:10.1001/archpedi.161.1.44
[7] Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.
[8] Colthart, I., Bagnall, G., Evans, A., et al. (2008). The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME Guide No. 10. Medical Teacher, 30, 124-145. doi:10.1080/01421590701881699
[9] Dannefer, E. F., Henson, L. C., Bierer, S. B., et al. (2005). Peer assessment of professional competence. Medical Education, 39, 713-722. doi:10.1111/j.1365-2929.2005.02193.x
[10] Epstein, R. M., Dannefer, E. F., Nofziger, A., et al. (2004). Comprehensive assessment of professional competencies: The Rochester experiment. Teaching & Learning in Medicine, 16, 186-196. doi:10.1207/s15328015tlm1602_12
[11] Frank, J. R. (2005). The CanMEDS 2005 physician competency framework. Better standards. Better physicians. Better care. Ottawa, ON: The Royal College of Physicians and Surgeons of Canada.
[12] Lockyer, J. M., & Clyman, S. G. (2008). Multisource feedback (360-degree evaluation). In: E. S. Holmboe, & R. E. Hawkins (Eds.), Practical guide to the evaluation of clinical competence (pp. 75-83). Philadelphia, PA: Mosby Elsevier.
[13] Lurie, S. J., Nofziger, A. C., Meldrum, S., Mooney, C., & Epstein, R. M. (2006a). Effects of rater selection on peer assessment among medical students. Medical Education, 40, 1088-1097. doi:10.1111/j.1365-2929.2006.02613.x
[14] Lurie, S. J., Nofziger, A. C., Meldrum, S., Mooney, C., & Epstein, R. M. (2006b). Temporal and group-related trends in peer assessment amongst medical students. Medical Education, 40, 840-847. doi:10.1111/j.1365-2929.2006.02540.x
[15] Lurie, S. J., Meldrum, S., Nofziger, A. C., Sillin, L. F., Mooney, C. J., & Epstein, R. M. (2007a). Changes in self-perceived abilities among male and female medical students after the first year of clinical training. Medical Teacher, 29, 921-926. doi:10.1080/01421590701753559
[16] Lurie, S. J., Lambert, D. R., Nofziger, A. C., Epstein, R. M., & Grady-Weliky, T. A. (2007b). Relationship between peer assessment during medical school, Dean’s letter rankings, and ratings by internship directors. Journal of General Internal Medicine, 22, 13-16. doi:10.1007/s11606-007-0117-4
[17] Nofziger, A. C., Naumburg, E. H., Davis, B. J., Mooney, C. J., & Epstein, R. M. (2010). Impact of peer assessment on the professional development of medical students: A qualitative study. Academic Medicine, 85, 140-147. doi:10.1097/ACM.0b013e3181c47a5b
[18] Papadakis, M. A., Teherani, A., Banach, M. A., et al. (2005). Disciplinary action by medical boards and prior behavior in medical schools. New England Journal of Medicine, 353, 2673-2682. doi:10.1056/NEJMsa052596
[19] Violato, C., & Lockyer, J. (2006). Self and peer assessment of pediatricians, psychiatrists and medicine specialists: Implications for self-directed learning. Advances in Health Sciences Education, 11, 235-244. doi:10.1007/s10459-005-5639-0

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.