Assessment Methods Used during Clinical Years of Undergraduate Medical Education at Moi University School of Medicine, Kenya


Background: Assessment is the systematic collection, review, and use of information about educational programs undertaken to improve teaching and learning. It ensures quality in training programmes and motivates and directs students’ learning. Assessment is also used to verify achievement of training objectives, provide feedback to students, and support licensing, certification, and screening of students for advanced training. It is guided by several principles, including the aims of the assessment (why assess), who to assess, timing (when to assess), what to assess, methods (how to assess), and the criteria for determining the usefulness of the assessment. Objective: To describe the assessment methods used during the clinical years of the undergraduate programme at Moi University School of Medicine (MUSOM) and to determine students’ perspectives concerning the comprehensiveness, relevance, and objectivity of the various assessment methods and the challenges faced. Methodology: The study was carried out at MUSOM using a cross-sectional study design. Ten study participants were selected using convenience sampling. Data were collected using an interview guide and were analyzed using content analysis. Informed consent was obtained from the study participants. Results: The methods used during clinical years at MUSOM for assessing knowledge and its application include multiple-choice questions (MCQ), short answer questions (SAQ), modified essay questions (MEQ), long essay questions (LEQ), and the oral exam, whereas the methods for assessing clinical competence include the long case, short cases, objective structured clinical examinations (OSCE), and the logbook. Students felt that MCQs were comprehensive, objective, and relevant to the curriculum content. They reported that feedback was not provided after assessments. Conclusion: The assessment methods used at MUSOM during clinical years include MCQ, SAQ, MEQ, LEQ, short cases, long cases, and OSCE.
Students reported varied perceptions of the different assessment methods but favored MCQ and OSCE over other formats for assessing knowledge and clinical skills respectively.


Kipkulei, J. , Kangethe, S. , Boibanda, F. , Jepngetich, H. , Lotodo, T. and Kirinyet, J. (2022) Assessment Methods Used during Clinical Years of Undergraduate Medical Education at Moi University School of Medicine, Kenya. Health, 14, 296-305. doi: 10.4236/health.2022.143023.

1. Introduction

Assessment is defined by Banta & Palomba [1], as cited by Asani [2] as the systematic collection, review, and use of information about educational programs undertaken for improvement, learning, and development.

Assessment plays a key role in medical education. It ensures quality in training programmes and motivates and directs students’ learning [3] [4]. Assessment is also important to verify whether or not the objectives of the training are being met. Other aims of assessment include monitoring of the training program, feedback to students, licensing, certification, and screening of students for advanced training.

Several principles guide assessment, and they include the aims of the assessment (why assess), who to assess, timing (when to assess), what to assess, methods (how to assess), and the criteria for determining the usefulness of the assessment [2].

Assessment can be formative or summative. Formative assessment, also referred to as assessment for learning, takes place during instruction, for example at the end of a lesson or unit, and its main goal is feedback to students. Summative assessment, on the other hand, also referred to as assessment of learning or evaluation, occurs at the end of a course, term, semester, or year. The objectives of summative assessment include evaluation of a candidate’s overall performance and the course outcomes, certification, and licensing, among others [5].

What to assess must be clearly stated in every curriculum. Assessment can be carried out to determine the knowledge (cognitive), skills (psychomotor), and attitudes (affective) domains of learners, as per Bloom’s taxonomy of learning objectives [6]. Other areas of learning that need to be assessed include communication, teamwork, professionalism, clinical reasoning, and ethical issues [2].

There are various methods of assessment, and the usefulness of each method has to be determined in terms of its reliability, validity, educational impact, objectivity, acceptability, and feasibility [7] [8]. Education and practice in the health professions typically require multiple cognitive, psychomotor, and attitudinal/relational skills [7]. To assess all these attributes, assessment systems have to be comprehensive, sound, and robust.

During the last three decades, medical schools have faced a variety of challenges from society, patients, doctors, and students. These institutions have responded in several ways, including the development of new curricula, the introduction of new learning situations and new methods of assessment, and a realization of the importance of staff development.

As the practice of medicine evolves, the knowledge, skills, and attitudes required to provide patient care will continue to change. These changes will necessitate the restructuring of assessment systems to ensure that high-quality assessment programmes are implemented to fulfill health professions education’s needs and contract with society.

Furthermore, effective and efficient delivery of healthcare requires not only knowledge and technical skills but also analytical and communication skills, interdisciplinary care, counseling, and evidence- and system-based care. This calls for assessment systems that are comprehensive, sound, and robust [7].

The assessment program for the clinical years at MUSOM is block- or rotation-based, which in turn depends on the year of study. During the fourth year of study, the students undertake four blocks or rotations in what is referred to as junior clerkship. The rotations are in internal medicine, paediatrics and child health, reproductive health, and general surgery. In the fifth year of study, the students rotate through what are called special rotations in anaesthesiology and critical care, oral health, dermatology, ear, nose and throat (ENT) surgery, ophthalmology, orthopaedics and traumatology, and radiology and imaging. During the sixth year, the rotations are referred to as senior clerkship, and the students rotate in internal medicine, paediatrics and child health, reproductive health, mental health, and general surgery.

In each rotation, students’ assessment is divided into two main parts: continuous assessment tests (CAT) and the end-of-year examination (EYE). The CAT includes logbook assessment, written tests, and an end-of-rotation clinical practical assessment. The EYE comprises written tests and clinical practical assessments. The CAT accounts for 50% of the final grade, and the EYE accounts for the other 50% (Figure 1).
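The 50/50 weighting described above can be sketched as a small calculation. This is a hypothetical illustration of how such a weighted grade could be computed; the component names and scores are invented for the example and are not taken from the MUSOM curriculum.

```python
# Sketch of a rotation's final grade under the 50% CAT / 50% EYE split.
# Component names and scores below are illustrative assumptions only.

def final_grade(cat_scores, eye_scores):
    """Average each part (scores out of 100), then weight CAT and EYE at 50% each."""
    cat_mean = sum(cat_scores) / len(cat_scores)
    eye_mean = sum(eye_scores) / len(eye_scores)
    return 0.5 * cat_mean + 0.5 * eye_mean

# CAT components: logbook, written test, end-of-rotation clinical assessment
cat = [72, 65, 70]
# EYE components: written test, clinical practical assessment
eye = [60, 68]

print(round(final_grade(cat, eye), 1))  # → 66.5
```

Because each half is averaged before weighting, a strong clinical performance in the EYE cannot fully compensate for weak continuous assessment, and vice versa.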

The aim of this study is to report on the current assessment practice during clinical years at MUSOM. The objectives were to describe the assessment methods used and students’ perceptions of the comprehensiveness, relevance, and objectivity of the various assessment methods, as well as the challenges faced.

2. Methodology

The study employed a descriptive cross-sectional study design and was carried out at Moi University School of Medicine (MUSOM). MUSOM is located in Eldoret town and was established more than 30 years ago as the second medical school in Kenya. The school implements an innovative medical curriculum, the SPICES (Student-Centered, Integrated, Problem-based, Community-based, Electives, and Systematic) model, and it uses the Moi Teaching and Referral Hospital (MTRH) as its main teaching hospital and

Figure 1. Flow chart of the assessment program in years 4, 5, and 6 at MUSOM. CAT: Continuous Assessment Tests. EYE: End of Year Examination. MCQ: Multiple Choice Question. SAQ: Short Answer Question. LEQ: Long Essay Question. MEQ: Modified Essay Question. OSCE: Objective Structured Clinical Examination (Source: MUSOM curriculum).

various county hospitals and health centers in Western Kenya for its Community Based Education and Service (COBES) program.

The study subjects included a cohort of medical students in the fourth year, fifth year, and sixth year, and newly graduated medical interns. A purposive sampling technique was employed in selecting the study participants. The MBChB curriculum of the medical school was also scrutinized. Data were collected using a semi-structured interview guide. The areas covered by the guide included assessment methods; opinions on the preferred methods; the comprehensiveness, objectivity, and relevance to curriculum content of the various assessment methods; feedback; challenges encountered; and suggestions to improve the assessments.

Data were analyzed using content analysis. Written permission was obtained from the school administration, verbal informed consent was obtained from the study participants, and confidentiality was maintained.

3. Results

3.1. Assessment Methods at MUSOM

The assessment methods employed at MUSOM during the clinical years include written, oral, and clinical/practical examinations. The written examinations used are multiple choice questions (MCQs), short answer questions (SAQs), long essay questions (LEQs), and modified essay questions (MEQs), whereas the clinical skills assessments are the long case, short cases, and the objective structured clinical examination (OSCE). These formats are not applied uniformly across all the rotations/departments; for example, the OSCE is only used during the sixth-year end-of-year examinations in the departments of internal medicine and surgery.

3.2. Students’ Perspectives on the Assessment Methods Used at MUSOM

The students who were interviewed gave varied opinions on the comprehensiveness, objectivity, relevance to the curriculum content, and the challenges they face with the various assessment methods.

The students stated that each assessment method has its own merits and demerits. They mentioned that they preferred the MCQ assessment format as it “is comprehensive, objective and the questions were relevant to the curriculum content”. One student, however, reported that “some MCQs were beyond the scope of what we had covered in the curriculum”. The MCQ types used mainly at the school are type 2 and type 3, and most students preferred type 3 over type 2.

All the students interviewed favoured SAQ and MEQ over LEQ. The reason given for this preference was that SAQs covered wider content, whereas the LEQ, which in most cases consisted of only one question, could fall on an area the student was not well prepared to tackle, making the chance of failing the question high. The MEQ format was viewed favourably because it is structured, focused, based on clinical scenarios, and improves the chances of getting high marks, though one student pointed out that “one could fail the whole question if he does not know the answer to the first part of the question as the subsequent parts of the question are related to the initial question”.

The students pointed out that, though the long case reflected the real situation of clinical practice, it is associated with several shortcomings, including subjectivity in terms of the case assigned and the examiners, uncooperative patients, language barrier, unconducive environment, and intimidating, impatient, and harsh examiners.

When it comes to the short cases, the students were of the view that they were not standardized, as some students could be asked to take a brief history, others to perform a physical examination, and others to give a spot diagnosis. They also intimated that the cases subject them to a lot of anxiety and panic.

The OSCE was perceived by those who had experienced the method to be objective, and that it “allows one to demonstrate one’s knowledge and skills”. However, they observed that more time needs to be allocated per station and that the students need to be prepared early for the OSCE.

The participants stated that they rarely get feedback on their performance after the continuous assessment tests. They said that the only feedback is the return of their scripts.

The following are the ways students suggested for quality improvement of assessments during clinical years at MUSOM:

• The school to adopt only MCQ type 3 as this is the format being adopted for assessment of medical knowledge and its application worldwide.

• The long case and short cases to be standardized.

• A “mock” clinical examination to be introduced before the real examination.

4. Discussion

Written examinations mainly evaluate medical knowledge and its application, while clinical examinations aim to assess skills and attitudes. The written methods used at MUSOM include MCQs, SAQs, MEQs, LEQs, and the viva voce, while the methods for assessing skills and attitudes include short cases, the long case, the logbook, and the OSCE. Assessment methods during clinical years of undergraduate medical education follow Miller’s hierarchical model for the assessment of clinical competence [9]. This model starts with the assessment of the cognition domain and ends with the assessment of behaviour in practice, as illustrated in Figure 2. The assessment of cognition deals with knowledge and its application (knows, knows how), while the assessment of behaviour deals with the assessment of competence under controlled conditions (shows how) and the assessment of competence in practice, i.e. the assessment of performance (does).
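The correspondence between the assessment formats named in this article and Miller’s levels can be expressed as a simple lookup. This grouping is a sketch based on the text’s own description (written/oral formats at “knows / knows how”, observed clinical formats at “shows how”, workplace records at “does”); the data structure itself is only an illustration.

```python
# Illustrative mapping of the assessment methods discussed in the text
# onto Miller's levels. The groupings follow the article's narrative and
# are an assumption, not an authoritative classification.

MILLER_LEVELS = {
    "knows / knows how": ["MCQ", "SAQ", "MEQ", "LEQ", "oral exam"],
    "shows how": ["long case", "short cases", "OSCE"],
    "does": ["logbook (workplace performance)"],
}

def level_of(method):
    """Return the Miller level a given assessment method targets, or None."""
    for level, methods in MILLER_LEVELS.items():
        if method in methods:
            return level
    return None

print(level_of("OSCE"))  # → shows how
```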

The MCQ format was the preferred method of assessment, and this compares with other studies [11] [12]. Multiple-choice questions assess factual knowledge, recall, understanding, and interpretation, which correspond to Miller’s pyramid levels of “knows” and “knows how.” There are several formats of MCQs, and they are commonly used in both formative and summative assessments. The advantages of MCQs include coverage of a large portion of the syllabus, high reliability in scoring, ease of marking and scoring, suitability for marking by computer, less time needed for administration, and the ability to test a large sample of knowledge in a short period. On the other hand, the limitations of MCQs include the inability to assess other domains of learning, chiefly the psychomotor and affective domains; difficulty of writing, especially in certain content areas; failure to assess communication and writing skills; and the fact that students can guess the answers

Figure 2. Miller’s hierarchical model for the assessment of clinical competence, adapted from [10].

rightly and can perform well if questions are repeated [10] [13] [14].

All the participants favored SAQ and MEQ formats over LEQ. This compares with a study by Preston et al. [3], where the study subjects felt that SAQs were a more accurate reflection of what they had learnt. SAQs are an open-ended, semi-structured question format in which students are required to answer with only a few words, phrases, or numbers. They mainly test recall of knowledge, e.g. definitions, terms, facts, and figures. A structured, predetermined marking key is used to improve objectivity. Long essay questions, on the other hand, are used to assess the ability of students to process, summarize, evaluate, and apply information in new situations. Only a few questions are used, leading to low reliability; a structured marking scheme is used to improve objectivity. Modified essay questions are a special type of essay question in which a case is followed by a series of questions that must be answered in the sequence asked. This creates question interdependency: a student who answers the first question incorrectly is likely to answer the subsequent questions incorrectly too. The MEQ is used to assess students’ problem-solving skills, reasoning skills, and understanding of concepts, rather than recall of factual knowledge [10] [13].

The participants reported that both the short case and long case methods of clinical assessment reflected the real situation of clinical practice, though they pointed out several shortcomings associated with them. This finding concurs with that of a study by De Mel et al. [15], where most of the respondents perceived that both long and short case assessments were fair in assessing their knowledge and clinical skills. The long case is a format used in traditional clinical assessment. In the typical scenario, a student clerks a real patient for a stated period and is thereafter examined by a set of examiners. The advantages of this method include face validity and authenticity, since the student interacts with a real-life situation and is presented with a complete and realistic challenge; it also assesses the three major domains and communication skills. Its major drawback is poor reliability or reproducibility, mainly due to case specificity, variability in clinical scenarios among examinees, and differences among examiners [2].

A short case is another form of assessment of clinical competence. In this format, the students are asked to perform a physical examination on a real patient, with little knowledge of the patient’s history, and are then assessed on their examination technique and their ability to elicit physical signs and interpret them correctly. Several cases are used in any one assessment to increase the sample size. Studies on the validity and reliability of short case assessment are scarce; however, a study by Hijazi et al. [16] concluded that performance in the short cases is a better discriminator of competence than that in the long case. A shortcoming of short cases is the omission of history taking by the candidate [2].

Because of the drawbacks associated with short and long cases, many medical schools are moving towards the OSCE for clinical examination [15]. This format is yet to be fully embraced by MUSOM, where it is used by only two clinical departments; all the students who had experienced it felt that the OSCE is objective. The OSCE is a method in which students are assessed at several stations on focused clinical skills. Standardized patients (SPs), real patients, or simulators may be used at each station, where demonstration of specific skills can be observed and measured. Each student is exposed to the same stations and assessment, and a checklist or structured marking scheme is used by staff members to score the student [2] [10] [17]. The strengths of the OSCE include its reliability, owing to its multiple stations, multiple assessors, sufficient test time, and checklist; its high validity, because of blueprinting; and its coverage of a wide range of skills. Feedback is also possible, making it a very useful tool for formative assessment. The disadvantages of this format include its high cost and labour intensity. It can also place an emotional burden on real patients, and its acceptability is an issue among many faculty staff because of poor exposure to its principles and resistance to change [2] [10] [17].
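The checklist scoring that underpins the OSCE’s objectivity can be sketched in a few lines: each station has a fixed list of observable items, and the station score is simply the fraction of items the examiner ticks, so two examiners observing the same performance should produce the same score. The station name and checklist items below are hypothetical examples, not drawn from any MUSOM examination.

```python
# Minimal sketch of checklist-based OSCE station scoring.
# Station and checklist items are hypothetical illustrations.

station_checklists = {
    "BP measurement": ["washes hands", "positions cuff", "reads systolic",
                       "reads diastolic", "explains result"],
}

def score_station(station, ticked):
    """Score one station as the percentage of checklist items observed."""
    items = station_checklists[station]
    done = sum(1 for item in items if item in ticked)
    return 100 * done / len(items)

observed = {"washes hands", "positions cuff", "reads systolic"}
print(score_station("BP measurement", observed))  # → 60.0
```

Because the checklist removes examiner discretion over *what* counts, reliability shifts from the examiner’s judgement to the quality of the checklist itself, which is why blueprinting the stations matters.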

In our study, participants reported a lack of any form of feedback after continuous assessment tests. A similar finding has been reported by other studies [3] [18] [19]. Feedback during undergraduate medical education is critical and is one of the principles that lead to more effective learning [20]. According to Eraut [21], as cited by Al-Mously et al. [18], feedback “is an interactive process which aims to provide learners with an insight into their performance”. It assists learners to maximize their potential and professional development at different stages of training. Furthermore, studies have shown that medical students who received feedback tend to perform better on objective outcome measures than students who did not [22] [23]. Though it is crucial in the teaching and learning process, several factors hamper the provision of feedback, for example, lack of time for the tutors due to workload, a greater focus on assessment than on feedback, lack of a clear and structured feedback system embedded in the curriculum, and lack of empowerment of learners and teachers with the skills required to understand, accept, value, and act on feedback [24] [25].

5. Conclusions and Recommendations

MUSOM uses various methods for assessing knowledge and clinical skills during the clinical years. Some assessment methods are not uniformly applied across all the departments. The challenges encountered by the students during clinical assessments include an unconducive environment, language barrier, uncooperative patients, intimidating and harsh examiners, and lack of feedback after assessments.

This study recommends that the school adopt the OSCE across all the departments. Other innovative methods of assessment, for example the Objective Structured Long Examination Record (OSLER), could also be considered. These could alleviate some of the challenges raised by the students. Students should be given feedback after their assessment tests. Finally, a larger study needs to be conducted using both quantitative and qualitative methods, and, in addition, the perspectives of the teachers on clinical assessments should also be considered.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.


[1] Banta, T.W. and Palomba, C.A. (1999) Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. Higher and Adult Education Series.
[2] Asani, M. (2012) Assessment Methods in Undergraduate Medical Schools. Nigerian Journal of Basic and Clinical Sciences, 9, 53-60.
[3] Preston, R., Gratani, M., Owens, K., Roche, P., Zimanyi, M. and Malau-Aduli, B. (2020) Exploring the Impact of Assessment on Medical Students’ Learning. Assessment & Evaluation in Higher Education, 45, 109-124.
[4] Shumway, J.M. and Harden, R.M. (2009) AMEE Guide No. 25: The Assessment of Learning Outcomes for the Competent and Reflective Physician. Medical Teacher, 25, 569-584.
[5] Vageriya, V. (2018) Assessment and Evaluation—In Perspective of Medical Education. Nursing & Healthcare International Journal, 2, 1.
[6] Anderson, L.W., Krathwohl, D.R. and Bloom, B.S. (2000) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives.
[7] Norcini, J., Anderson, M.B., Bollela, V., Burch, V., Costa, M.J., Duvivier, R., Hays, R., Palacios Mackay, M.F., Roberts, T. and Swanson, D. (2018) 2018 Consensus Framework for Good Assessment. Medical Teacher, 40, 1102-1109.
[8] Van Der Vleuten, C.P. (1996) The Assessment of Professional Competence: Developments, Research and Practical Implications. Advances in Health Sciences Education: Theory and Practice, 1, 41-67.
[9] Miller, G.E. (1990) The Assessment of Clinical Skills/Competence/Performance. Academic Medicine: Journal of the Association of American Medical Colleges, 65, S63-S67.
[10] Al-Wardy, N.M. (2010) Assessment Methods in Undergraduate Medical Education. Sultan Qaboos University Medical Journal, 10, 203-209.
[11] Ibrahim, N.K., Al-Sharabi, B.M., Al-Asiri, R.A., Alotaibi, N.A., Al-Husaini, W.I., Al-Khajah, H.A., Rakkah, R.M. and Turkistani, A.M. (2015) Perceptions of Clinical Years’ Medical Students and Interns towards Assessment Methods Used in King Abdulaziz University, Jeddah. Pakistan Journal of Medical Sciences, 31, 757-762.
[12] Holzinger, A., Lettner, S., Steiner-Hofbauer, V. and Capan Melser, M. (2020) How to Assess? Perceptions and Preferences of Undergraduate Medical Students Concerning Traditional Assessment Methods. BMC Medical Education, 20, Article No. 312.
[13] Tabish Yed, A. (2008) Assessment Methods in Medical Education. International Journal of Health Sciences, 2, 3-7.
[14] Epstein, R. (2007) Assessment in Medical Education. The New England Journal of Medicine, 356, 387-396.
[15] De Mel, S., Jayarajah, U. and Seneviratne, S.A. (2018) Medical Undergraduates’ Perceptions on the End of Course Assessment in Surgery in a Developing Country in South Asia. BMC Research Notes, 11, Article No. 731.
[16] Hijazi, Z., Premadasa, I. and Moussa, M.A. (2002) Performance of Students in the Final Examination in Paediatrics: Importance of the “Short Cases”. Archives of Disease in Childhood, 86, 57-58.
[17] Wass, V., Van der Vleuten, C., Shatzer, J. and Jones, R. (2001) Assessment of Clinical Competence. The Lancet, 357, 945-949.
[18] Al-Mously, N., Nabil, N.M., Al-Babtain, S.A. and Fouad Abbas, M.A. (2014) Undergraduate Medical Students’ Perceptions on the Quality of Feedback Received during Clinical Rotations. Medical Teacher, 36, S17-S23.
[19] Brits, H., Bezuidenhout, J., van der Merwe, L.J. and Joubert, G. (2020) Assessment Practices in Undergraduate Clinical Medicine Training: What Do We Do and How We Can Improve? African Journal of Primary Health Care & Family Medicine, 12, 7.
[20] Harden, R.M. and Laidlaw, J.M. (2013) Be FAIR to Students: Four Principles That Lead to More Effective Learning. Medical Teacher, 35, 27-31.
[21] Eraut, M. (2006) Feedback. Learning in Health and Social Care, 5, 111-118.
[22] Wigton, R.S., Patil, K.D. and Hoellerich, V.L. (1986) The Effect of Feedback in Learning Clinical Diagnosis. Journal of Medical Education, 61, 816-822.
[23] Park, J.H., Son, J.Y., Kim, S. and May, W. (2011) Effect of Feedback from Standardized Patients on Medical Students’ Performance and Perceptions of the Neurological Examination. Medical Teacher, 33, 1005-1010.
[24] Agius, N.M. and Wilkinson, A. (2014) Students’ and Teachers’ Views of Written Feedback at Undergraduate Level: A Literature Review. Nurse Education Today, 34, 552-559.
[25] Kuhlmann Lüdeke, A., Guillén Olaya, J.F., Kuhlmann Lüdeke, A. and Guillén Olaya, J.F. (2020) Effective Feedback, an Essential Component of All Stages in Medical Education. Universitas Medica, 61, 32-46.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.