Creative Education
2013. Vol.4, No.10A, 48-53
Published Online October 2013 in SciRes (http://www.scirp.org/journal/ce) http://dx.doi.org/10.4236/ce.2013.410A008
Implementation of Objective Structured Clinical Examination
for Assessing Nursing Students’ Clinical Competencies:
Lessons and Implications*
Patricia Katowa-Mukwato, Lonia Mwape, Marjorie Kabinga-Makukula,
Prudencia Mweemba, Margaret C. Maimbolwa
Department of Nursing Sciences, School of Medicine, University of Zambia, Lusaka, Zambia
Email: patriciakatowamukwato@gmail.com
Received August 8th, 2013; revised September 8th, 2013; accepted September 15th, 2013
Copyright © 2013 Patricia Katowa-Mukwato et al. This is an open access article distributed under the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
Objective Structured Clinical Examination (OSCE) as a performance-based assessment method is a well
established student assessment tool. Its popularity in the assessment of clinical competence is well docu-
mented and prominent in situations where reliability and content validity are fundamental. In this paper,
we describe the implementation of OSCE in the Department of Nursing Sciences, University of Zambia, for the assessment of nursing students’ clinical competencies. The implementation process followed an eight-step approach from which several lessons were drawn and implications were generated. A major lesson was the need for adequate preparation of faculty and students, which is fundamental to ensuring the reliability of the examination and to minimizing stress and anxiety, respectively. Following the implementation, we acknowledged that OSCEs are suitable for testing clinical, technical and practical skills which may not be adequately assessed through traditional assessment methods, and that they can improve the validity and reliability of assessments. Nevertheless, careful consideration should be taken to avoid relying on OSCE as the only means of assessing clinical competencies.
Keywords: Clinical Competence; Objective Structured Clinical Examination; Assessment; Nursing
Students
Introduction
Clinical Competence is a complex concept and debates con-
tinue about the most appropriate definition and method of as-
sessment (Evans, 2008). Watson et al. (2002) suggest that competence is a nebulous concept defined in different ways by different people. Its relationship with other concepts such as capability, performance, proficiency and expertise makes it even more difficult to define. Earlier, Gonzi (1994) described three ways of understanding competence: 1) task-related skills, 2) generic attributes essential to performance, and 3) the bringing together of a range of general attributes such as knowledge, skills and attitudes appropriate for professional practice. Later, the Australian Nursing and Midwifery Council
(2005) described competence in a more holistic way as a com-
bination of skills, knowledge, attitudes, values and abilities that
underpin effective and/or superior performance in a profession.
The above definitions underscore the complexity and multi-
facetedness of clinical competence. The complex nature of cli-
nical competence consequently poses a challenge in isolating or
identifying suitable assessment methods that are able to meas-
ure all its attributes while maintaining validity, reliability
and objectivity. Affirming the challenges in assessing clinical
competence for nursing students, Levette-Jones et al. (2010)
stated that the challenge of validity, reliability, subjectivity and
bias in measuring clinical competence has confronted universi-
ties for many years.
Since its inception in 1978, the Department of Nursing Sciences at the University of Zambia utilized Direct Observation of Procedural Skills (DOPS) for the assessment of nursing students’ clinical competences for both formative and summative purposes. DOPS is a method for assessing procedural competence through direct observation by faculty (Holmboe & Hawkins, 2008). It was considered sufficient for the assessment of clinical competence as the Department solely admitted Registered Nurses with diplomas upgrading to a Bachelor’s degree. These
students were already practicing nurses and had been certified
competent to practice nursing by the regulatory body (General
Nursing Council of Zambia). Some students had also attained
post registration qualifications such as Midwifery, Operating
Theatre Nursing and Mental Health Nursing. Using DOPS, each student was assessed on one procedure deemed appropriate by the examining faculty. Selection of procedures was solely determined by the examiner as well as by the availability of patients requiring such a procedure, as opposed to curricular core competencies and examination blueprints. There was often a lack
of transparency about the objectives of the assessment and the
competencies required to succeed (Marwaha, 2011). In addition,
the lack of a clear marking system resulted in variability be-
tween examiners. Although DOPS was considered feasible and
*Competing Interest: The Authors declare that there is no competing inter-
est.
acceptable by faculty and students, its inherent characteristics meant that it failed to meet the principles of reliability, content validity and standardization.
In 2010 the Department of Nursing Sciences implemented a
competence-based curriculum and admitted the first cohort of
pre-service students. Consequently, it became necessary to re-
view the clinical assessment methods to facilitate the imple-
mentation of those techniques that are authentic for measuring
and enhancing clinical competence. As a result, in the 2012/
2013 academic year, the department implemented the Objective
Structured Clinical Examination (OSCE) for assessment of
clinical competence. OSCE is a comprehensive, systematic and
objective method of evaluation that involves an individual stu-
dent rotating through a number of practical and theoretical “sta-
tions” where they are assessed using set criteria (Bhat &
Anald, 2006; Gormley, 2011). Byrne and Smyth (2008) de-
scribed OSCE as an approach to students’ assessment in which
aspects of clinical competence are evaluated in a comprehen-
sive, consistent and structured manner, with close attention to
the objectivity of the process.
OSCE was originally conceptualized by Harden and Gleeson
at the University of Dundee in 1975 as a solution to the difficul-
ties of adequate sampling and standardization common with
traditional assessment methods (Wass et al., 2001; Turner &
Dankoski, 2008; Marwaha, 2011). In the seminal paper on
OSCEs, Harden et al. (1975) recommended OSCE as an alter-
native to traditional clinical testing methods due to its objectiv-
ity, reliability and ability to test multiple competencies in a
controlled standardized manner, thus eliminating non-candidate
variance in results (Marwaha, 2011). Although OSCE evolved
from medical education, it has been used extensively in nursing
worldwide (Shadia et al., 2010). It allows for testing of a wide
range of knowledge, skills and attitudes and can accommodate
large numbers of examinees in one examination session (Bhat
& Anald, 2006; Shadia et al., 2010). It is also accepted as a fit-
for-purpose instrument for measuring clinical reasoning skills
(Ahmed, 2009). When compared to traditional methods of cli-
nical skills assessment, OSCE has several advantages, and it
meets the two cardinal criteria of an effective assessment tool
vis-à-vis validity and reliability (Gormley, 2011; Turner & Dankoski, 2008; Auewarakul et al., 2005).
Validity refers to the extent to which an instrument measures
the construct of interest (Evans, 2008), while reliability is the
consistency of examinee scores over time (Turner & Dankoski,
2008). Although some studies have reported low reliability for
OSCE, various methods have been found to increase its reliability: a large number of stations (at least 10), a large number of raters, good standardization of patients, and adequate test length (at least 3 - 4 hours) to obtain a reliability of .85 - .90 (Turner &
Dankoski, 2008). Bhat and Anald (2006) assert that the ability
to test a wide range of knowledge, skills and attitudes in a sin-
gle OSCE helps to ensure content validity, which, according to Downing (2003), is the most essential of the three types of validity: content, criterion and construct validity.
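The effect of station count on reliability can be illustrated with the Spearman-Brown prophecy formula, which projects the reliability of an examination assembled from several comparable stations. The following Python sketch is purely illustrative; the single-station reliability of 0.35 is a hypothetical value, not a figure from this study.

    def spearman_brown(single_station_reliability, n_stations):
        """Predicted reliability of an examination of n parallel stations."""
        r = single_station_reliability
        return (n_stations * r) / (1 + (n_stations - 1) * r)

    # Hypothetical single-station reliability of 0.35 (illustrative only).
    for n in (1, 5, 10, 15, 20):
        print(f"{n:2d} stations -> predicted reliability {spearman_brown(0.35, n):.2f}")

Under this hypothetical value, the projection reaches the .85 - .90 band at roughly 10 - 15 stations, consistent with the recommendation of at least 10 stations cited above.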
Despite its positive attributes, the cost of implementing
OSCE is high in terms of personnel, facilities, finances and
time for examinees, Standardized Patients and faculty (Evans,
2008; Turner & Dankoski, 2008; Marwaha, 2011). Some stud-
ies have reported OSCE settings to be stressful or intimidating
for participants although none has compared the level of stress
to other forms of formal examinations (Evans, 2008). Notwith-
standing the cost and time constraints, OSCEs have the capacity
to improve the validity and reliability of assessments of many
aspects of clinical competence. There is also literature to sup-
port implementation of OSCE as its running costs are out-
weighed by the benefits. For example, it has been affirmed that
the running cost of OSCE is outweighed by the educational
benefits as well as student satisfaction (Selim et al., 2011). In
addition, OSCE has been supported as an appropriate method in
evaluating nursing clinical skills because of various advantages
such as improving student clinical performance, preparing
highly qualified and competent graduates, increasing decision
making abilities, and enhancing teaching level (El Darir & Abd
El Hamid, 2013). Furthermore, OSCEs have been reported to
be beneficial as they enhance skill acquisition through a hands-on approach and afford students the opportunity to practice in a safe, controlled environment (Evans, 2008).
Implementation Process
The implementation of OSCE was preceded by a series of in-
house planning and orientation meetings for faculty and Staff
Development Fellows (SDF) in the Department of Nursing
Sciences at the University of Zambia between January and
April 2013. Since OSCE was a new concept for most members
of staff and SDFs, initial meetings were designed to give an
overview of OSCE as an assessment tool, its characteristics in comparison with other clinical assessment tools, and its ability to meet the criteria of validity and reliability. Other meetings dealt with the design of the OSCE, including the types and numbers of stations.
Meetings were also held to discuss and agree on the steps to-
wards developing an OSCE.
In the literature, a variety of models have been used in the implementation of OSCE. For example, El Darir and Abd El Hamid (2013) utilized a three-step model: construction of OSCE schemes and clinical scenarios as the first phase, the actual conduct of the OSCE as the second phase, and evaluation as the third and last phase. In our case, eight steps were followed during the planning and implementation phases, as outlined in Table 1.
Following consensus on the outlined steps, four teams were constituted based on subject-matter expertise: clinical nursing, mental health and psychiatric nursing, maternal and child health nursing, and community health nursing.
Table 1.
OSCE planning and implementation steps.

Step 1: Identification of competencies to be assessed (drawn from the curriculum)
Step 2: Development of case scenarios based on identified competences
Step 3: Identification/modification/development of evaluation tools (checklists, rating scales, etc.)
Step 4: Identification of assessment sites (classrooms and clinical skills laboratory)
Step 5: Planning for resources (human, simulators/models, medical surgical supplies and stationery)
Step 6: Orientation of standardized patients, SPs (graduate students and support staff)
Step 7: Mock OSCE
Step 8: Implementation (actual conduct of OSCE)
The teams developed scenarios and checklists and identified required resources, both human and material, after which all developed scenarios and checklists were reviewed by a combined team drawn from the four subject areas. Given that this was the first OSCE, students were given an orientation on the structure of the examination, what they should expect, and what was expected of them.
To test the practicality of the OSCE, a mock examination was conducted three days prior to the date of the examination. The mock OSCE
was used to pilot some of the scenarios and checklists and de-
termine whether the time allocated was adequate for the per-
formance of different skills. The mock OSCE was also used to
determine inter-rater variability and provided a platform for
consensus building regarding the scoring system and allocation
of marks. During the mock OSCE, SDFs acted as students
while faculty acted as examiners and standardized patients.
Two days before the examination, some postgraduate students were oriented to act as standardized patients during the main OSCE. Finally, the first ever OSCE in the department was administered to 104 fourth-year nursing students during the 2012/2013 end-of-academic-year examination. OSCE was used to assess clinical competences in six courses: Medical Surgical Nursing, Community Health Nursing, Maternal and Child Health Nursing, Psychiatric and Mental Health Nursing, Paediatrics, and Operating Theatre Nursing.
Many variants exist in the implementation of OSCE. For
example, stations may be much longer and examiners may not
be present, with the marking being undertaken by the simulated
patients on whom the task was performed. In other situations, there may be stations at which multiple-choice questions are asked or at which other forms of written responses are required, while other stations require performance of a clinical procedure while being observed and evaluated by faculty using a standard checklist (Newble, 2004; El Darir & Abd El Hamid, 2013). In
the case of the Department of Nursing Sciences, seven stations were designed for each of the six courses. Four of the seven were marker stations while three were observer stations. Observer stations consisted of a task presented in a two- to three-sentence scenario and a request for appropriate action or performance, rated by an examiner using a predetermined checklist. Observer stations were used mainly for evaluating skills of a psychomotor nature, e.g., measuring vital signs, physical examination, and intramuscular drug administration. Marker stations consisted of presentation of data with a request for interpretation, documentation or appropriate clinical action (Amin & Hoon-Eng, 2003). Each station was allocated 10 minutes.
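The timetabling consequences of this design can be estimated with simple arithmetic. The sketch below, a minimal illustration in Python, assumes that students rotate in cohorts equal to the number of stations and that changeover time between stations is negligible; neither assumption is stated in the examination design.

    import math

    def osce_duration_minutes(students, stations=7, minutes_per_station=10, streams=2):
        """Estimate the running time of an OSCE circuit.

        A cohort of `stations` students occupies all stations at once and
        rotates every `minutes_per_station` minutes; streams run in parallel.
        """
        students_per_stream = math.ceil(students / streams)
        cohorts = math.ceil(students_per_stream / stations)
        return cohorts * stations * minutes_per_station

    # 104 students over two parallel streams of seven 10-minute stations.
    print(osce_duration_minutes(104))  # 560 minutes, i.e., over nine hours

Even under these optimistic assumptions, a single examination day is barely sufficient, which foreshadows the scheduling pressures discussed under Lessons Learnt.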
As opposed to DOPS, where each student is examined on
only one randomly selected skill by one examiner, with the
implementation of OSCE, for the first time in the Department
of Nursing Sciences, all students were examined on seven dif-
ferent core competencies as specified in the curriculum. In addition, each student was scored by three examiners on the three observer stations. This to a large extent eliminated the personal biases that usually arise from single examiners. Furthermore, the use of predetermined structured checklists and the involvement of examiners throughout the implementation process ensured ob-
jectivity which is the main tenet of OSCE (Wilkinson et al.,
2003).
Lessons Learnt
Preparation and Orientation of Students
Important lessons were drawn from the first experience of
conducting OSCE in the Department of Nursing Sciences. It
was clear at the end of the examination that clarity about the purpose of OSCE is of utmost importance for examiners and especially for students, considering that they were switching from the traditional kind of assessment. As this was the first time students were being assessed through an OSCE, the experience was very stressful for them. This was evidenced by the questions that the students asked during the orientation meeting. Similarly, on the
day of the examination, students exhibited stress as they
awaited the assessment process. This could have affected their
performance. As indicated earlier, studies have reported OSCE
settings to be stressful or intimidating for participants although
none has compared the level of stress to other forms of formal
examinations (Evans, 2008; Rennie & Main, 2006). Orientation of students was also done late; hence, students felt ill-prepared.
Cost of OSCE
OSCEs, being costly, require that ample time be allocated to the preparation process (Evans, 2008; Turner & Dankoski, 2008; Marwaha, 2011; Nulty et al., 2011). During the implementation, it was discovered that considerable time, material and human resources were required to conduct effective OSCEs. This was compounded by the large number of students that were examined. Additionally, there was a limited number of assessors and standardized patients. This could have affected the results and performance of students, as some students were assessed outside the scheduled examination time. This could have resulted in fatigue in both the students and the assessors, thereby affecting the validity and reliability of the examination.
Weighting of the Questions/Scenarios
All the stations were allocated equal marks despite the fact
that some skills on certain stations were more critical than oth-
ers. This resulted in a situation where a student who failed to perform a critical skill at one station but performed well in a less critical skill could still pass the overall examination after aggregation of scores. This means that a student may progress to the next level of training, or graduate, without that particular core competence.
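One possible remedy, sketched below in Python, is to attach weights to stations and to gate the overall result on designated critical stations. The station names, weights and the 50% pass mark are hypothetical illustrations, not values from the departmental marking scheme.

    # Aggregation with a critical-station gate (all values hypothetical).
    stations = {
        # name: (score out of 100, weight, is_critical)
        "drug administration": (40, 2.0, True),   # critical skill failed
        "vital signs":         (90, 1.0, False),
        "documentation":       (85, 1.0, False),
    }
    PASS_MARK = 50

    aggregate = (sum(score * w for score, w, _ in stations.values())
                 / sum(w for _, w, _ in stations.values()))
    critical_ok = all(score >= PASS_MARK
                      for score, _, critical in stations.values() if critical)

    print(f"aggregate = {aggregate:.1f}, pass = {aggregate >= PASS_MARK and critical_ok}")

Under plain aggregation this student passes with 63.8% despite failing the critical skill; the gate makes the missing core competence visible in the final result.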
Use of Checklists
Development of OSCE checklists, as well as scoring, is not as straightforward as was thought. It needs detailed discussion, attention and agreement, especially among examiners assessing the same skill in cases where students are divided into more than one stream. Assessors should be adequately prepared to ensure consistency in approach and inter-rater reliability (Evans, 2008). It is therefore important that checklists are standardized and comprehensive so that students are not disadvantaged. It was discovered
during the examination that some of the checklists were not exhaustive. This could have introduced some
bias as it prompted some assessors to begin probing for more
answers from the students. Therefore, thorough preparation and
pretesting of the checklists is required to avoid introducing bias
and subjectivity.
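Inter-rater consistency on a shared checklist can also be quantified during pretesting. The sketch below computes Cohen's kappa, a chance-corrected agreement statistic, for two assessors scoring the same performance on binary checklist items; the item scores are invented for illustration.

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters scoring binary (0/1) checklist items."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
        expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement by chance
        return (observed - expected) / (1 - expected)

    # Hypothetical scores on ten checklist items from one mock station.
    a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
    b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
    print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.52: moderate agreement

A kappa well below 1 on a mock OSCE station would signal the need for further discussion of the checklist before the live examination.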
Pre-Testing of OSCE
It is also recognized from our experience that it is important
to meticulously organize and pretest OSCE in order to uphold
reliability and validity. The duration, interconnectedness, num-
ber and order of OSCE stations need to be carefully examined
in order to ensure that the potentially competing requirements of validity and reliability are balanced, because all of these factors affect students (Rushforth, 2007).
Finally, it came to light that the process of OSCE becomes monotonous, especially when examiners are dealing with large numbers of students. At the same time, it is recognized that
changing examiners for a particular station may compromise
the objectivity of the assessment and disadvantage the students.
Implications
Implications for introducing OSCE in the Department of Nursing Sciences include the following:
OSCEs are very costly to implement. For example, the number of faculty required to assess a group of 104 students using DOPS, where each student is assessed on one procedure for 30 minutes, would have been 13. For the OSCE, where we had two streams running concurrently, each with seven stations, the number was almost doubled. In addition to faculty, we required six standardized patients to cover the three observer stations in each of the two streams. Furthermore, each stream required a coordinator and a timekeeper. Apart from faculty, there was also the cost of paying standardized patients. OSCEs are also time consuming in both preparation and actual administration. This cost implication entails a need to allocate adequate funds for the process, demanding changes in student examination fees and in the departmental examination budget.
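The staffing comparison above can be reproduced with simple arithmetic, as in the sketch below; the four-hour DOPS examining session per faculty member is our assumption for illustration, not a stated departmental rule.

    import math

    STUDENTS = 104

    # DOPS: one 30-minute procedure per student, assuming each examiner
    # covers a four-hour (240-minute) examining session.
    dops_examiners = math.ceil(STUDENTS * 30 / 240)      # 13

    # OSCE: one examiner per station per stream (a simplification),
    # plus standardized patients and per-stream support roles.
    streams, stations, observer_stations = 2, 7, 3
    osce_examiners = streams * stations                  # 14
    standardized_patients = streams * observer_stations  # 6
    support_staff = streams * 2  # coordinator and timekeeper per stream

    print(dops_examiners, osce_examiners, standardized_patients, support_staff)

On these assumptions, examiners, standardized patients and support roles together amount to roughly 24 people for the OSCE against 13 faculty for DOPS, before accounting for the additional preparation time.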
Introducing OSCE will require investment in identifying and
training simulated and/or standardized patients well in advance
of the assessment. It has been documented that, when well
trained, simulated patients cannot be distinguished from real
patients, are stable over time, and can provide accurate feedback
and assessments (Vu & Barrows, 1994; Newble, 2004).
There is an evident need to improve the organization of OSCEs in order to reduce their overall duration, although long examinations contribute to achieving high levels of reliability (Newble, 2004). Approaches for decreasing the practical challenges that accompany long examinations should be identified and addressed. Organizational issues, which may include the numbers of candidates versus examiners, venues, and resources, influence the quality of the assessments (Newble, 2004). Despite the above challenges, Newble (2004) recognized that traditional clinical examinations have serious limitations related to validity and reliability, thus supporting the use of OSCE as an alternative.
This paper further draws on Newble’s (2004) assertion and agrees that OSCE is particularly suitable for assessing numerous components of clinical competence. However, attitudinal and behavioural aspects of the student may not be fully assessed through this method, although certain attitudinal issues may be assessed if well trained standardized patients are used. This therefore requires the use of other methods of assessment that are best suited to assessing behavioural and attitudinal aspects of candidates, such as DOPS in the actual clinical environment.
Finally, OSCE must be integrated into the overall clinical evaluation system and used for both formative and summative assessment. Use of OSCE for formative assessment allows for provision of immediate feedback and serves as a teaching opportunity (Newble, 2004).
Discussion
OSCE as a performance-based assessment method is a well-established student assessment tool for many reasons: it is a competence-based, valid, practical and effective means of assessing clinical skills that are fundamental to the practice of nursing and other health-care-related professions (Association of American Medical Colleges, 2008; El Darir & Abd El Hamid, 2013). Its popularity as a major tool in the assessment of clinical competence is well documented, particularly in situations where reliability and content validity are fundamental for making the results of such assessments justifiable to both examinees and external agencies (Newble, 2004; Rushforth, 2007; Selim et al., 2011; El Darir & Abd El Hamid, 2013). The popularity of OSCE re-
sulted from concerns that were raised about the traditional cli-
nical and oral examinations used for assessing clinical compe-
tence (Rushforth, 2007; Holmboe & Hawkins, 2008; Levette-Jones et al., 2010; Marwaha, 2011; El Darir & Abd El Hamid, 2013). The concerns were triggered by the discovery of low correlations between examiners’ mark allocations, which resulted in unacceptable reliability. However, change in some parts of the world took long to occur due to a general lack of an alternative assessment method for clinical competence.
The introduction of OSCE has brought to clinical testing benefits comparable to those of objective written examinations in knowledge testing. The use of checklist-based marking has led to improved inter-rater consistency (Rushforth, 2007). Testing students’ performance on numerous stations has also increased the number and range of competencies that can be sampled, thus improving content validity (Downing, 2003; Bhat & Anald, 2006). It is these benefits of OSCE that outweigh its cost (Selim et al., 2011; El Darir & Abd El Hamid, 2013).
Regarding students’ opinion of OSCE, in a study reported by Turner and Dankoski (2008) assessing its validity, reliability and feasibility, the majority of students felt that they had been fairly marked. Similarly, OSCEs are regarded by most students as comprehensive, covering a range of knowledge and clinical competences, and as useful practical experience (Pierre et al., 2004). Similar assertions were made by El Darir and Abd El Hamid (2013), whose students reported a positive opinion of OSCE and identified a number of advantages over traditional evaluation: measuring of course objectives, enhancing teaching level, relating theory to practice, making examinations well developed, increasing decision-making abilities, and providing an enhanced method of evaluation. On the other hand, negative student opinions of OSCE have also been reported; for example, in a study by Muldoon, Biesty and Smith (2013), 57% of students either disagreed or strongly disagreed with the statement that OSCE reflected real-life clinical situations.
Limitations of the Study
There are two main limitations of this case study. Firstly, OSCE was being introduced for the first time and was used for summative assessment of clinical competencies. This could have affected both students and examiners in some unique way, as there was no prior experience with this type of testing during formative assessments. For students, introduction of a new assessment technique could have increased levels of anxiety, in addition to the usual anxiety associated with examinations, consequently affecting performance. Therefore, the obtained performance levels may not have reflected the actual levels of competence in the tested clinical skills. For examiners, switching from DOPS to OSCE could have affected inter-rater reliability. Despite the orientation, some examiners did not confine themselves to the structured checklist; they asked additional questions of some, but not all, students, as is the case in DOPS.
The second limitation was the use of untrained SPs. Where SPs were required, postgraduate students were used. Although the SPs were oriented to their role, the orientation was not adequate, and they varied their responses in some cases, thus infringing on the standardization that is a critical component of objectivity in OSCE.
Conclusion
OSCEs are suitable for testing clinical, technical and practical skills which may not be adequately assessed through traditional assessment methods, and they possess the ability to improve the validity and reliability of assessments of many aspects of clinical competence. OSCE should therefore be used as a method for assessing nursing students’ clinical competences; however, careful consideration should be taken to avoid relying on OSCE as the only means of assessing clinical competence. Hence, the use of other methods of assessment that are best suited to other aspects of clinical competence may be equally important in complementing OSCE. In addition, we
acknowledge that while OSCE as an assessment tool has been
widely researched and documented, research evidence specific
to nursing in Zambia is scanty. This therefore underscores the
critical need for more research in this area in Zambia.
Acknowledgements
The authors acknowledge the Department of Nursing Sci-
ences at the School of Medicine, University of Zambia, for facilitating the implementation of OSCE.
REFERENCES
Ahmed, C. N., & Abu Baker, R. (2009). Assessing nursing clinical
skills performance using objective structured clinical examination
(OSCE) for open distance learning students in Open University Ma-
laysia. Proceedings of the International Conference on Information,
Kuala Lumpur, 12-13 August.
Amin, Z., & Hoon-Eng, K. (2003). Basics in medical education. New
Jersey: World Scientific Publication.
Association of American Medical Colleges (2008). Recommendations
for clinical skills curricula for undergraduate medical education.
Auewarakul, C., Downing, S. M., Pradistuwan, R., & Jaturatamrong, U.
(2005). Item analysis to improve reliability for an internal medicine
OSCE. Advances in Health Science Education, 10, 105-113.
http://dx.doi.org/10.1007/s10459-005-2315-3
Australian Nursing and Midwifery Council (ANMC) (2005). National
competency standards for the registered nurse.
http://www.edcan.org
Bhat, M. S., & Anald, S. (2006). Objective structured clinical examina-
tion. Nursing Journal of India, 97, 14-16.
Byrne, E., & Smyth, S. (2008). Lecturers’ experiences and perceptions
of using objective structured clinical examination. Nurse Education
in Practice, 8, 283-289. http://dx.doi.org/10.1016/j.nepr.2007.10.001
Downing, S. M. (2003). Validity: On the meaningful interpretation of assessment data. Medical Education, 37, 830-837.
http://dx.doi.org/10.1046/j.1365-2923.2003.01594.x
El Darir, A. S., & Abd El Hamid, N. A. (2013). Objective structured
clinical examination versus traditional clinical students’ achievement at maternity nursing: A comparative approach. Journal of Dental and
Medical Sciences, 4, 63-68. http://dx.doi.org/10.9790/0853-0436368
Evans, A. (2008). Competence Assessment in Nursing: A summary of
literature published since 2000. EdCan. National Education Frame-
work, Cancer Nursing.
Gonzi, A. (1994). Competence based assessments in the professions in
Australia. Assessment in Education, 1, 27-44.
http://dx.doi.org/10.1080/0969594940010103
Gormley, G. (2011). Summative OSCEs in undergraduate medical education. Ulster Medical Journal, 80, 127-132.
Harden, R. M. G., Stevenson, M., Downie, W. W., & Wilson, G. M.
(1975). Assessment of clinical competence using objective structured
examination. British Medical Journal, 1, 447-451.
http://dx.doi.org/10.1136/bmj.1.5955.447
Holmboe, E. S., & Hawkins, R. E. (2008). Practical guide to the eval-
uation of clinical competence. Philadelphia: Mosby.
Levette-Jones, T., Gersbach, J., Arthur, C., & Roche, J. (2010). Imple-
menting a clinical competence assessment model that promotes cri-
tical reflection and ensures nursing graduates’ readiness for profes-
sional practice. Nurse Education in Practice, 11, 64-69.
http://dx.doi.org/10.1016/j.nepr.2010.07.004
Marwaha, S. (2011). Objective Structured Clinical Examinations (OSCEs), psychiatry and the clinical assessment of skills and competencies (CASC): Same evidence, different judgement. BMC Psychiatry, 11, 85. http://dx.doi.org/10.1186/1471-244X-11-85
Muldoon, K., Biesty, L., & Smith, V. (2013). “I found the OSCE very stressful”: Student midwives’ attitudes towards an objective structured clinical examination (OSCE). Nurse Education Today.
Newble, D. (2004). Techniques for measuring clinical competence: Ob-
jective structured clinical examinations. Medical Education, 38, 199-
203. http://dx.doi.org/10.1111/j.1365-2923.2004.01755.x
Nulty, D. D., Mitchell, M. L., Jeffrey, C. A., Henderson, A., & Groves, M. (2011). Best practice guidelines for use of OSCEs: Maximizing value for student learning. Nurse Education Today, 31, 145-151.
http://dx.doi.org/10.1016/j.nedt.2010.05.006
Pierre, R., Wierenga, A., Barton, M., Branday, J., & Christie, C. (2004). Students’ evaluation of an OSCE in paediatrics at the University of the West Indies, Jamaica. BMC Medical Education, 4, 1-7.
http://dx.doi.org/10.1186/1472-6920-4-22
Rennie, A. M., & Main, M. (2006). Student midwives’ views of the
objective structured clinical examination. British Journal of Mid-
wifery, 14, 602-607.
Rushforth, H. E. (2007). Objective structured clinical examination
(OSCE): Review of literature and implications for nursing education.
Nurse Education Today, 27, 481-490.
http://dx.doi.org/10.1016/j.nedt.2006.08.009
Selim, A. A., Ramadan, F. H., El-Gueneidy, M. M., & Gaafer, M. M.
(2011). Using OSCE in undergraduate psychiatry nursing education:
Is it reliable and valid? Nurse Education Today.
Shadia, A. E., Hanaa, A. E., Hewida, A. H., Nagwa, A. E. F., & Anas,
H. E. S. (2010). An introduction of OSCE versus traditional methods
in nursing education: Faculty capacity building and students perspec-
tives. Journal of American Science, 6, 1002-1014.
Turner, J. L., & Dankoski, M. E. (2008). Objective structured clinical
examination: A critical review. Family Medicine, 40, 574-578.
Vu, N. V., & Barrows, H. S. (1994). Use of standardized patients in
clinical assessments: Recent developments and measurement find-
ings. Educational Researcher, 23, 23-30.
Wass, V., Van der Vleuten, C., Shatzer, J., & Jones, R. (2001). As-
sessment of clinical competence. The Lancet, 357, 945-949.
http://dx.doi.org/10.1016/S0140-6736(00)04221-5
Watson, R., Stimpson, A., Topping, A., & Porock, D. (2002). Clinical
Competence in Nursing: A systematic review of literature. Journal of
Advanced Nursing, 39, 431-441.
http://dx.doi.org/10.1046/j.1365-2648.2002.02307.x
Wilkinson, T. J., Frampton, C. M., Thompson-Fawcett, M., & Egan, T. (2003). Objectivity in objective structured clinical examinations: Checklists are no substitute for examiner commitment. Academic
Medicine, 78, 219-223.
http://dx.doi.org/10.1097/00001888-200302000-00021