The Objective Structured Clinical Examination (OSCE) as a Determinant of Veterinary Clinical Skills
Margery H. Davis
Gominda G. Ponnamperuma
Sean McAleer
Vicki H.M. Dale
The Objective Structured Clinical Examination (OSCE) has become an excellent tool to evaluate many elements of a student’s
clinical skills, especially including communication with the patient (human medicine) or client (veterinary medicine); eliciting
clinical information from these conversations; some aspects of the physical examination; and many areas of clinical evaluation
and assessment. One key factor is that the examination can be structured to compare different students’ abilities.
The Objective Structured Clinical Examination (OSCE) has
been used in medicine for 30 years. Since its introduction at
Dundee Medical School, in Scotland, its use has spread
throughout the world. A recent unpublished literature
search carried out in Dundee identified more than 700
published articles on the OSCE. The utility of the OSCE in
medicine is undoubted—but does it have potential for
veterinary medicine?
The concept of OSCEs for veterinary education is not new.
It has been over a decade since the OSCE was recommended
as an alternative to traditional examinations in veterinary
education, which typically only stimulate factual recall.
Nevertheless, only recently has the OSCE been used in some
UK veterinary schools.
This article describes the OSCE and highlights its relevance
to veterinary education.
The exam consists of a number of stations, usually 14 to 18,
that candidates rotate through. At each station the candidate
is asked to carry out a specific task within an allotted period.
The exam is finished when all candidates have completed
all stations. Figure 1 shows the general format of a 15-station
OSCE. OSCEs with a larger or smaller number of stations
have a similar format.
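The rotation format in Figure 1 can be modeled in a few lines of code (an illustrative sketch, not from the article): with n stations and up to n candidates, candidate c sits at station (c + t) mod n in time slot t, so every candidate completes every station exactly once.

```python
def osce_rotation(n_stations: int, n_candidates: int):
    """Yield, for each time slot, the station occupied by each candidate.

    Illustrative model of the circuit in Figure 1: candidates shift one
    station per slot, wrapping around, until all have seen all stations.
    """
    for slot in range(n_stations):
        yield {c: (c + slot) % n_stations for c in range(n_candidates)}

# Each of 15 candidates visits each of 15 stations exactly once.
schedule = list(osce_rotation(15, 15))
visits = {c: {slots[c] for slots in schedule} for c in range(15)}
assert all(v == set(range(15)) for v in visits.values())
```

The modular shift is why the stations must form a closed circuit with no crossover: each candidate simply moves to the next station at each signal.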
The OSCE may need to be run on several sites at the same
time, or repeated, to accommodate the total number of
candidates. It is advisable to restrict the number of runs to
two for test-security reasons. The candidates for the second
run should be sequestered before the end of the first run;
they can be briefed while the first run is in progress. This
will prevent contamination of the second-run candidates
with exam information from first-run candidates.
All candidates go through the same stations. The OSCE
stations are selected to reflect the curriculum content and
course outcomes and are planned using an assessment
blueprint. The purpose of the blueprint is to ensure that all
outcomes are assessed and that the curriculum content is
appropriately sampled. An example of an exam blueprint
for the final-year small-animal clinical studies course at the
University of Glasgow, Scotland, is shown in Table 1. Ideally
there should be at least one checkmark in each column and
each row.
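The row-and-column check described above can be automated. The sketch below (outcome names, content areas, and the data structure are invented for illustration) flags any outcome row or content-area column that no station samples:

```python
# Hypothetical blueprint: rows are outcomes, columns are content areas,
# True marks a cell sampled by at least one station.
blueprint = {
    "communication skills": {"internal medicine": True, "soft tissue": False},
    "practical skill":      {"internal medicine": False, "soft tissue": True},
}

def blueprint_gaps(bp):
    """Return (rows, columns) of the blueprint with no checkmark."""
    rows = [r for r, cols in bp.items() if not any(cols.values())]
    areas = {a for cols in bp.values() for a in cols}
    cols_missing = [a for a in areas if not any(bp[r].get(a) for r in bp)]
    return rows, cols_missing

rows, cols = blueprint_gaps(blueprint)
assert not rows and not cols  # every outcome and content area is sampled
```

Running such a check before finalizing the station list guards against the under-sampling that undermines content validity.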
Each station is built around a specific task that the candidate
must accomplish within a standard period.
Most stations
last between five and 10 minutes, but stations of up to 20
minutes are not uncommon.
If a particular task cannot be accomplished in the standard
station time, linked or double stations may be employed.
Linked stations are sequential stations at the first of which
the candidate carries out a task or prepares for the second
station, where he or she builds on the findings of the first
station or answers questions related to it. Double stations
require doubling of resources, including examiners, and
take twice the standard station time.
Figure 1: The format of a 15-station OSCE
578 JVME 33(4) © 2006 AAVMC
Table 1: An assessment blueprint for an OSCE in veterinary medicine

Content areas (columns): Internal Medicine; Soft Tissue; Diagnostic Imaging; Anesthesia, Intensive Care, Fluid Therapy; Behavioral Problems; Caged Pets & Exotics; Vaccination, Parasite Control, Zoonoses; Legislation, Prescription Writing, Ethics.

Outcomes (rows): Communication skills (history taking; client explanation of a condition or investigation); Clinical exam or interpretation (on a live animal or cadaver, from photos or video); Practical skill (theater and surgical skills, urine exam, microscopy, fine-needle aspirate, lab techniques); Data and image interpretation (e.g., urinalysis, ECG).

A checkmark in a cell of the blueprint indicates that at least one station assesses that outcome within that content area.
A staggered start is required for both linked and double
stations, as candidates cannot start halfway through the task.
Candidates may be observed by one or more examiners
while carrying out the task at the station; this is a manned
station. Each examiner is equipped with a checklist or a
rating scale. The checklist lists the actions that the candidate
should carry out at the station. It should not be overly long,
as a long checklist becomes unwieldy to complete, so only
the key or core items, as identified by the subject experts,
should be included. Global rating scales may also be used,
either alongside or instead of checklists, and have been
shown to be more reliable than checklists.
Rating scales give predetermined descriptors for each point
on the scale, which provide an accurate description of
candidate behavior for a given rating.
The use of checklists and rating scales for each station is the
reason for the "objective" descriptor in the name "OSCE."
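A station's itemized marking scheme of this kind reduces to a simple weighted sum. The sketch below uses invented checklist items and weights purely for illustration:

```python
# Illustrative station checklist: (item, marks available). The item
# names and weights are invented for the example, not from the article.
checklist = [
    ("introduces self to client", 1),
    ("obtains presenting complaint", 2),
    ("checks vaccination history", 1),
    ("summarises and checks understanding", 2),
]

def score_station(observed: set[str]) -> float:
    """Percentage of available marks awarded for the observed items."""
    earned = sum(w for item, w in checklist if item in observed)
    total = sum(w for _, w in checklist)
    return 100 * earned / total

# A candidate observed doing only the first two items earns half marks.
assert score_station({"introduces self to client",
                      "obtains presenting complaint"}) == 50.0
```

Because every examiner applies the same items and weights, two examiners observing the same performance should converge on the same score, which is the point of the format.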
There may, however, be stations where the candidate is
unobserved (unmanned stations); here a paper-based answer
is required, or the candidate provides responses at the next
station (linked station). The candidates may be given
answer books for unmanned stations, which they carry
with them round the exam and submit for marking at the
end. Alternatively, they can complete an answer sheet at
each unmanned station and put it into a "post box" at the
station, usually a cardboard box with a posting slit that
allows input of answer sheets but not their removal.
The sampling of curriculum content and outcomes as
demonstrated by the blueprint, the use of checklists and
rating scales, and all candidates’ seeing the same patients
contribute to the objective nature of the exam; because a wide
range of clinical skills can be assessed within the OSCE
framework, the OSCE is a clinical exam.
The OSCE is unique among examination formats because it
provides a framework to assess the hands-on skills of the
candidate in a simulated setting. Hence, it assesses the
candidate at the level of competence, as opposed to assessing
skills in real-life settings, which is assessing at the level of
performance.6, 7
The following description of the OSCE considers the six
questions that must be addressed by any assessment: why;
what; when; where; by whom; and how.
Why?

Traditionally, clinical skills in medicine were assessed using
long and short cases.
In the long case, the candidate spends a fixed period of time
with a real patient, as opposed to a simulated patient, to
take a comprehensive clinical history and carry out a
physical examination. The examiners later assess the
candidate using oral questions based on the patient that
the candidate examined. Different candidates see different
patients, and different examiners assess different candi-
dates. As a result, the questions put to candidates vary.
Some candidates may be presented with an "easy" patient,
while others will be more "difficult"; that is, the level of
difficulty of the assessment varies. Appropriate sampling of
the curriculum is not possible with one long case, which
contributes to the lack of reliability and lack of content
validity of the results of this type of test.
In the short case, one or two examiners directly observe the
candidate carrying out a particular task (e.g., eliciting a
physical sign or examining an organ system). All candi-
dates, however, are assessed neither with the same patients
nor by the same examiners. Though sampling of the
curriculum content can be improved by using multiple
short cases, the variability of cases seen by the candidates
adversely affects test reliability.
In 1969 the fairness of long- and short-case exams for
individual candidates was queried.
In a landmark article,
Wilson and colleagues showed that an individual candidate
carrying out the same task was scored differently by
different examiners. The range of scores assigned to the
same candidate by different examiners viewing the same
actions (i.e., poor inter-rater reliability) rendered the assess-
ment unreliable.
The OSCE was introduced to overcome the above problems
pertaining to exam content variability, subjectivity, and
fairness of the long and the short case.
These issues were
addressed by structuring OSCE stations to ensure that all
candidates go through the same test material, or test
material of a similar level of difficulty, and are scored
using the same criteria (e.g., pre-validated checklists or
rating scales) by the same, or similar, examiners.
It has recently been recognized that some traditional
examinations in veterinary medicine lack the reliability
and objectivity of the OSCE, and therefore some schools in
the UK have implemented the OSCE as a fairer form of
assessment.
What?

What can be tested in the OSCE is limited only by the
imagination of the test designers. Stations can be developed
to test course outcomes such as clinical skills (e.g., chest
examination); practical procedures (e.g., urine testing);
patient investigations (e.g., interpretation of a chest radio-
graph); patient management (e.g., completing a prescription
sheet); communication skills (e.g., breaking bad news); IT
(information technology) skills (e.g., accessing a database);
and providing patient education (e.g., lifestyle advice).
Knowledge is invariably assessed by the OSCE.11, 12
Although marks are not usually given for answering
knowledge-based questions, without the underlying
knowledge and understanding the candidate cannot carry
out the instructions at each station. Critical thinking and
clinical judgment are other cognitive skills that can be
assessed in an OSCE. The OSCE can also be used to assess
attitudes and professionalism, but both of these can be more
conveniently assessed using the portfolio framework.
The OSCE has been described in a range of medical
specialties, including family medicine, general practice,
internal medicine, obstetrics and gynecology, and
emergency medicine. As basic training in medicine becomes
increasingly integrated,25, 26 so does its assessment, and the
OSCE framework supports integrated assessment.
In the UK, given the similarities between the General
Medical Council's "principles of professional practice"
and the Royal College of Veterinary Surgeons' "10 guiding
principles," it seems that the OSCE has as much relevance to
the assessment of professional skills in veterinary education
as it does in medical education.
Shown below are examples of OSCE stations for veterinary
students at the University of Glasgow, first introduced in the
2003/2004 academic session. Each example has four parts:
instructions to the candidate; instructions to the examiner;
checklist; and marking scheme. The second example also
includes an equipment list, an essential component when
equipment is required at a station.
The first example (Figure 2) assesses communication skills
related to a clinical scenario. The second example (Figure 3)
assesses candidate competence in a practical procedure.
When?

The OSCE can be used as a pre-test, an in-course test, or a post-test.
The pre-test assesses baseline student competence before the
course begins and can provide a student profile. Courses
can then be customized to meet individual student needs.
Comparing the results of the pre-test with those of
subsequent tests provides information about the course,
the student, and the teaching.
The OSCE has been a useful in-course assessment, from the
standpoint of both formative assessment (i.e., providing
feedback to the student) and course evaluation (i.e.,
providing feedback to the instructor). OSCE checklists can
be used by students for peer and self-assessment (i.e.,
formative assessment). Involving students in developing
OSCE checklists may be one way of helping them learn and
encouraging them to engage in self- and peer assessment. A
mid-course OSCE will also provide feedback to instructors
as to how successful their teaching has been and what areas
they should concentrate on in future.
The OSCE has been used mostly as a post-test in medical
education. Over the years its objective and structured nature
has ensured its appropriateness for summative assessment
purposes (i.e., where passing or failing the exam has
consequences for the candidate).
Within the continuum of medical education, the OSCE has
been used in undergraduate, postgraduate, and continuing
medical education.
In health professional education, a variety of professions
such as medicine, dentistry,32, 33 veterinary medicine,
nursing, midwifery, para-medical services, medical
laboratory technology, pharmacy, and radiography have
used the OSCE framework for their assessment purposes.
Relevant articles in this issue demonstrate the use of the
OSCE mainly as a post-test, with in-course (mock) tests used
to prepare students at various undergraduate levels in
veterinary medicine for their end-of-year examination.
Where?

Initially, OSCEs were held in hospital wards. However,
experiences such as patient cardiac arrests during the exam
have moved the OSCE away from the wards to Clinical
Skills Centres, empty outpatient departments (OPDs), or
specially designed test centers. What all these venues have
in common is that they can accommodate several stations,
such that candidates can rotate around the different stations
in sequence without overhearing the candidates at adjacent
stations. If the stations are set up within small cubicles,
they can simulate the real-life environment of a clinic or
OPD and ensure privacy for intimate procedures. Some test
centers have simulated emergency rooms or patient home
environments.
One UK veterinary school, the Royal Veterinary College in
London, already has its own clinical skills facility for
students to practice their skills, which doubles as an OSCE
venue.28, 38 Other UK veterinary schools are following this
example.
By whom?
Test designers, on-site examiners, site organizer(s), time-
keeper(s), secretarial staff for computerized collation of
results (e.g., data entry, handling optical mark reader), and
portering staff are all involved in the OSCE process.
Depending on the content tested at the station, OSCE
examiners may be clinicians, pre-clinical instructors, stan-
dardized or simulated patients, or other health care
professionals (e.g., nurses, paramedics, social workers).
While clinicians are needed for OSCE stations assessing
clinical skills, patients and other health professionals are
often used for stations assessing outcomes such as commu-
nication skills, empathy, attitudes, and professionalism.
There are two important prerequisites for all examiners:
they need to be trained in using the checklist or rating scale
and familiar with the topic assessed at the station.
How?

Box 1 outlines the steps to be followed in designing an
OSCE station. The guidelines for implementing an OSCE are
shown in Box 2.
Selecting a Topic for an OSCE Station
Clearly define the "clinical task" (e.g., interpreting an
x-ray) that is to be assessed by the station. Have a
clear idea of its place in the assessment blueprint (i.e.,
what are the content area(s) and outcome(s) that the
"exam task" is assessing?).
Identify a clinical scenario that will provide sufficient
information for the candidate to carry out the clinical
task, within the stipulated time limit.
Identify the different assessment tools (e.g., checklist,
rating scale, global rating) that can be used to assess
the candidate and choose the most suitable tool.
Developing an OSCE Station
Document the layout of the station (if needed with
clear diagrams).
Construct the question(s) that the candidate will be asked.
Develop instructions to the candidate to explain what
the candidate is required to do at the station.
Develop the checklist/rating scale and validate it,
with the help of one or more colleagues.
Figure 2: Example OSCE Station 1, from a fourth-year companion-animal studies examination
Figure 3: Example OSCE Station 2, from a final-year small-animal clinical-studies examination
The main advantages of the OSCE over other clinical exams
are that it provides

a framework to assess a representative sample of the
curriculum, in terms of content and outcomes, within
a reasonable period of time;

the same assessment material to all candidates;

an opportunity to score all candidates using the same
assessment criteria (i.e., the same pre-validated and
structured checklists and/or rating scales); and

an assessment that uses trained examiners.
Develop a script for the patient/standardized
patient/actor (if necessary).
Develop instructions to the patient/standardized patient.
Construct a marking scheme for the checklist, with
itemized marks for each step.
Design a marking sheet, paying attention to ease
of use.
Develop instructions to the examiner(s).
List the equipment needed for the station (e.g.,
mannequin, IV drip set, ophthalmoscope).
Test-run (pilot) the station.
Before the Exam
These are mainly responsibilities of the central examination
unit, in consultation with the examiners.
Agree the question weightings and pass mark with
the exam board (i.e., set standards).
Develop instructions to the supervisors, invigilators,
or exam implementers to

sequence the stations (i.e., assign the station numbers;
e.g., it may be convenient to place a hand-washing and
rest station after a station involving intimate
examination, and if two stations are linked, one
station should immediately follow the other).

configure the equipment/furniture.

identify how the arrows to and from each exam
station should be positioned. It is important that
the stations form a circuit, with no crossover points.

identify linked and double stations that require a
staggered start.

decide when to distribute answer books, candidate
instructions, and examiner checklists/rating scales,
and when to transport patients to the exam venue.

decide how and when the marking sheets will be
collected. It is sometimes useful to collect checklists
from examiners during the exam to facilitate data entry.

Draw up a map of the examination venue, indicating
the position of each exam station, with arrows
indicating the direction in which candidates should move.

Draw up a timetable for the whole exam.
Before the OSCE
Start early.
Design a blueprint.
Identify topics.
Identify the number of candidates to be assessed
and the number of venues and runs; allocate
candidates to venues/runs.
Fix time and date.
Book venues.
Route the exam.
Notify the candidates: when and where to turn up;
nature of the exam.
Appoint examiners and brief them (remember to
appoint stand-by examiners).
Brief support staff.
Appoint venue coordinators/site managers and
brief them.
Arrange resources for cases within stations and
portering the resources (equipment, real or
standardized patients, radiographs, etc.).
Prepare documentation: note the format of the exam
material within stations, draw up the exam site plan,
copy and collate paperwork, label candidate answer
books.
Provide a complete set of exam material for site
coordinators and brief them.
Set up the stations well in advance of the exam.
Remember to order refreshments for examiners,
candidates, and patients.
On the Day of the OSCE (Tasks for the Site Coordinator)
Arrive at least one hour before the exam is due to start.
Check each station for completeness.
Note candidate absentees.
Brief candidates.
If there is a second run, sequester the second-run
candidates by taking them in for briefing before the
end of the first run to prevent contamination.
Start exam and timer.
Oversee conduct of exam.
Collect scoring sheets.
Gather preliminary examiner feedback.
Oversee dismantling of the exam and safe return of equipment.
These advantages make the OSCE high in validity and
reliability. Thus, it is suitable for high-stakes summative
assessment.
The main disadvantage of the OSCE over the traditional
long case is that the OSCE does not allow assessment of the
candidate’s holistic approach to a clinical case or patient.
The OSCE is thus criticized for fragmentation of skills. It is
important that a learner’s ability to carry out a full history
and physical examination is assessed and that the trainee be
given feedback. The Clinical Evaluation Exercise (CEX) or
mini-CEX provides an instrument for such assessment.
The other main disadvantage is the cost of organizing an
OSCE. These costs mainly relate to the examiners’ and test
designers’ time and that of the simulated patients (SPs), if
paid SPs are used. If the rental of a test centre is necessary,
these costs may also be substantial. Organizing the
examination involves considerable effort. Studies on the
OSCE have shown, however, that the logistics are achiev-
able, even for national examinations.
Although the exam is
costly, its cost effectiveness is high in terms of the
information it provides.
Van der Vleuten's formula can be used to evaluate the
utility of the OSCE. The formula suggests that the utility of
an assessment is a function of its validity; reliability;
acceptability; educational impact; cost effectiveness; and
feasibility.
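Van der Vleuten's utility model is often sketched as a product of these criteria (the symbols and the equal weighting below are an illustrative reading, not notation from the article):

\[
U = V \times R \times A \times E \times C \times F
\]

where \(V\) is validity, \(R\) reliability, \(A\) acceptability, \(E\) educational impact, \(C\) cost effectiveness, and \(F\) feasibility. Reading the model multiplicatively makes the practical point that a near-zero score on any one criterion drives the overall utility toward zero, however strong the others are.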
If suitably designed to test candidate competence in a range
of curriculum outcomes, the OSCE is a valid assessment.
However, since it is conducted under simulated examina-
tion conditions, it does not provide valid information on the
candidate’s ability to perform the skill in real-life situations.
The performance level of Dutch general practitioners, for
example, has been shown to be lower than their competence
level as assessed via OSCE.
The reliability of an exam tends to be related to the length of
testing time.
The structured examination format, with
wide sampling, lasting approximately one-and-a-half to
two-and-a-half hours, makes the OSCE more reliable than
the long and short case formats.
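The link between testing time and reliability can be quantified with the Spearman-Brown prophecy formula, a standard psychometric result (not cited in the article): if an exam of unit length has reliability r, a version n times as long has predicted reliability nr / (1 + (n - 1)r). A short sketch:

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when a test is lengthened by `length_factor`.

    Standard Spearman-Brown prophecy formula, illustrating why a
    1.5- to 2.5-hour OSCE is more reliable than a single long case.
    """
    r = reliability
    return (length_factor * r) / (1 + (length_factor - 1) * r)

# A hypothetical one-hour OSCE with reliability 0.60, lengthened:
base = 0.60
for hours in (1.0, 1.5, 2.0, 2.5):
    print(f"{hours:.1f} h -> predicted reliability {spearman_brown(base, hours):.2f}")
```

Doubling a 0.60-reliability exam, for example, yields a predicted reliability of 0.75; the gains diminish as the test grows, which is why OSCEs settle at a few hours rather than continuing to add stations.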
Owing to its structured nature, which allows every
candidate to be tested under the same conditions, the
OSCE is highly acceptable to students, who appreciate its
fairness. It therefore has high face validity.
The OSCE requires the examiners to directly observe the
candidate carrying out clinical skills. Thus, the examination
highlights the importance of clinical skills to students. Since
the exam material represents a wide sample of the
curriculum and a vast range of skills can potentially be
assessed, students cannot risk ignoring clinical skills. It thus
has a positive educational impact.
The resources needed to conduct an OSCE are demanding.
The high cost, however, is justified by the information it
provides about the clinical competence of the candidate,
making the exam cost effective.
Designing an OSCE station is a skilled activity, and
organizing the examination involves considerable effort.
The returns, however, have proved the OSCE to be a
worthwhile exercise, as judged by the many institutions
from Europe, North and South America, Asia, Australasia,
and Africa reporting their OSCEs in the literature.
1 Weeks BR, Herron MA, Whitney MS. Pre-clinical
curricular alternatives: method for evaluating student
performance in the context of clinical proficiency. J Vet Med
Educ 20: 9–13, 1993.
2 Accreditation Council for Graduate Medical Education
[ACGME], American Board of Medical Specialties [ABMS].
Objective structured clinical examination (OSCE). In Toolbox
of Assessment Methods, version 1.1 <
Outcome/assess/Toolbox.pdf>. Accessed 10/03/06.
ACGME Outcomes Project, 2000, p7.
3 Newble D, Dawson B. Guidelines for assessing clinical
competence. Teach Learn Med 6: 213–220, 1994.
4 Selby C, Osman L, Davis M, Lee M. How to do it: set up
and run an objective structured clinical exam. Brit Med J 310:
1187–1190, 1995.
5 Hodges B, McIlroy JH. Analytical global OSCE ratings
are sensitive to level of training. Med Educ 37: 1012–1016, 2003.
6 Miller G. The assessment of clinical skills/competence/
performance. Acad Med 65: S63–S67, 1990.
7 Van der Vleuten CPM. Validity of final examinations in
undergraduate medical training. Brit Med J 321: 1217–1219, 2000.
8 Harden RM. How to ...Assess students: an overview.
Med Teach 1: 65–70, 1979.
9 Wilson GM, Lever R, Harden RMcG, Robertson JIS,
MacRitchie J. Examination of clinical examiners. The Lancet
January 4: 37–40, 1969.
10 Harden RM, Stevenson M, Downie WW, Wilson GM.
Assessment of clinical competence using objective
structured examination. Brit Med J 1: 447–451, 1975.
11 Coovadia HM, Moosa A. A comparison of traditional
assessment with the objective structured clinical examina-
tion (OSCE). S Afr Med J 67: 810–812, 1985.
12 Norman G. Editorial: inverting the pyramid. Adv Health
Sci Educ 10: 85–88, 2005.
13 Davis MH, Harden RM, Pringle S, Ledingham I.
Assessment and curriculum change: a study of outcome
[abstract]. In: Abstract book: The 7th Ottawa International
Conference on Medical Education and Assessment, 25–28 June
1996, Faculty of Medicine, University of Limburg. Maastricht:
Faculty of Medicine, University of Limburg/Dutch
Association for Medical Education, 1996:104.
14 Davis MH, Ponnamperuma GG. Portfolio assessment.
J Vet Med Educ 32: 279–284, 2005.
15 Chessman AW, Blue AV, Gilbert GE, Carey M,
Mainous AGIII. Assessing students’ communication and
interpersonal skills across evaluation settings. Fam Med 35:
643–648, 2003.
16 Kramer AW, Jansen KJ, Dusman H, Tan LH, van der
Vleuten CP, Grol RP. Acquisition of clinical skills in
post-graduate training for general practice. Brit J Gen Pract
53: 677–682, 2003.
17 Yudkowsky R, Alseidi A, Cintron J. Beyond fulfilling
the core competencies: an objective structured clinical
examination to assess communication and interpersonal
skills in a surgical residency. Current Surgery 61: 499–503, 2004.
18 Hafler JP, Connors KM, Volkan K, Bernstein HH.
Developing and evaluating a residents’ curriculum. Med
Teach 27: 276–282, 2005.
19 Auewarakul C, Downing SM, Praditsuwan R,
Jaturatamrong U. Item analysis to improve reliability for an
internal medicine undergraduate OSCE. Adv Health Sci Educ
10: 105–113, 2005.
20 Windrim R, Thomas J, Rittenberg D, Bodley J, Allen V,
Byrne N. Perceived educational benefits of objective
structured clinical examination (OSCE) development and
implementation by resident learners. J Obstet Gynaecol
Canada 26: 815–818, 2004.
21 Johnson G, Reynard K. Assessment of an objective
structured clinical examination (OSCE) for undergraduate
students in accident and emergency medicine. J Accid Emerg
Med 11: 223–226, 1994.
22 Park RS, Chibnall JT, Blaskiewicz RJ, Furman GE,
Powell JK, Mohr CJ. Construct validity of an
objective structured clinical examination (OSCE) in
psychiatry: associations with the clinical skills
examination and other indicators. Acad Psychiatr 28:
122–128, 2004.
23 Reddy S, Vijayakumar S. Evaluating clinical skills of
radiation oncology residents: parts I and II. Int J Cancer 90:
1–12, 2000.
24 Hanna MN, Donnelly MB, Montgomery CL, Sloan PA.
Perioperative pain management education: a short struc-
tured regional anesthesia course compared with traditional
teaching among medical students. Region Anesth Pain Med
30: 523–528, 2005.
25 General Medical Council [GMC]. Tomorrow's Doctors:
Recommendations on Undergraduate Medical Education.
London: GMC, 1993.
26 GMC. Tomorrow's Doctors: Recommendations on
Undergraduate Medical Education. London: GMC, 2003.
27 Quentin-Baxter M, Spencer JA, Rhind SM.
Working in parallel, learning in parallel? Vet Rec 157:
692–695, 2005.
28 GMC. Good Medical Practice. London: GMC, 2001.
29 Davis MH. OSCE: the Dundee experience. Med Teach 25:
255–261, 2003.
30 Taylor A, Rymer J. The new MRCOG Objective
Structured Clinical Examination: the examiners evaluation. J
Obstet Gynaecol 21: 103–106, 2001.
31 Harrison R. Revalidation: the real life OSCE. Brit Med J
325: 1454–1456, 2002.
32 Mossey PA, Newton JP, Stirrups DR. Scope of the OSCE
in the assessment of clinical skills in dentistry. Brit Dent J
190: 323–326, 2001.
33 Schoonheim-Klein M, Walmsley AD, Habets L, van der
Velden U, Manogue M. An implementation strategy for
introducing an OSCE into a dental school. Europ J Dent Educ
9: 143–149, 2005.
34 Bartfay WJ, Rombough R, Howse E, Leblanc R.
Evaluation: the OSCE approach in nursing education. Can
Nurs 100: 18–23, 2004.
35 Govaerts MJ, van der Vleuten CP, Schuwirth LW.
Optimising the reproducibility of a performance-based
assessment test in midwifery education. Adv Health Sci Educ
7: 133–145, 2002.
36 Rao SP, Bhusari PS. Evaluation of disability knowledge
and skills among leprosy workers. Indian J Leprosy 64:
99–104, 1992.
37 Sibbald D, Regehr G. Impact on the psychometric
properties of a pharmacy OSCE: using 1st-year
students as standardized patients. Teach Learn Med 15:
180–185, 2003.
38 Yamagishi BJ, Welsh PJK, Pead MJ. The first veterinary
clinical skills centre in the UK. Res Vet Sci 78 (Suppl A): 8–9, 2005.
39 Roberts J, Norman G. Reliability and learning from the
objective structured clinical examination. Med Educ 24:
219–223, 1990.
40 Norcini JJ, Blank LL, Duffy FD, Fortna GS. The
mini-CEX: a method for assessing clinical skills. Ann Intern
Med 138: 476–481, 2003.
41 Reznick R, Smee S, Rothman A, Chalmers A, Swanson D,
Dufresne L, Lacombe G, Baumber J, Poldre P,
Levasseur L, Cohen R, Mendez J, Patey P, Boudreau D,
Berard M. An objective structured clinical examination for
the licentiate: report of the pilot project of the Medical
Council of Canada. Acad Med 67: 487–494, 1992.
42 Van der Vleuten CPM. The assessment of professional
competence: developments, research and practical
implications. Adv Health Sci Educ 1: 41–67, 1996.
43 Rethans J, Sturmans F, Drop M, van der Vleuten C.
Assessment of performance in actual practice of general
practitioners by use of standardised patients. Brit J Gen Pract
41: 97–99, 1991.
44 Swanson DB. A measurement framework for
performance-based tests. In Hart IR, Harden RM, eds.
Further Developments in Assessing Clinical Competence
[proceedings of the international conference, June 27–30,
1987, Congress Centre, Ottawa, Canada]. Montreal:
Can-Heal Publications, 1987:13–45.
45 Pierre RB, Wierenga A, Barton M, Branday JM,
Christie CD. Student evaluation of an OSCE in paediatrics
at the University of the West Indies, Jamaica. BMC Med
Educ 4(22), 2004.
Margery Davis, MD, MBChB, FRCP, ILTM, is Professor of
Medical Education and Director of the Centre for Medical
Education at the University of Dundee, Tay Park House, 484
Perth Road, Dundee DD2 1LR Scotland, UK.
Gominda Ponnamperuma, MBBS, Dipl. Psychology, MMedEd,
is Lecturer in Medical Education at the Faculty of Medicine,
University of Colombo, P.O. Box 271, Kynsey Road, Colombo 8,
Sri Lanka, and Researcher at the Centre for Medical Education,
University of Dundee, Tay Park House, 484 Perth Road, Dundee
DD2 1LR Scotland, UK.
Sean McAleer, BSc, DPhil, ILTM, is Senior Lecturer in Medical
Education at the University of Dundee, Tay Park House, 484
Perth Road, Dundee DD2 1LR Scotland, UK.
Vicki H.M. Dale, BSc, MSc, ILTM, is an Educational
Technologist at the Faculty of Veterinary Medicine, University
of Glasgow, Glasgow G61 1QH Scotland, UK.
JVME 33(4) ß 2006 AAVMC 587