Conference Paper · PDF Available

Money makes the (medical assessment) world go round: How costly is an OSCE?

Authors: Craig Brown, Sarah Ross, Kieran Walsh, Jen Cleland

Abstract

Introduction: The OSCE is an important component of assessment and yet is undoubtedly expensive (Patricio et al. 2013). This study aims to evaluate the costs of one OSCE examination at one medical school; however, the message about the unrecognised cost of clinical exams is important for all.

Methods: In 2013 the University of Aberdeen held a two-day OSCE for 185 final-year students. The costs of the different stages of development and administration of this high-stakes OSCE were determined.

Results: This OSCE cost our institution £65,328. Question development and testing cost approximately £6,280, or £419 per question. Production costs were mostly examiner and patient training time, £8,154. The majority of costs arose in administering the examination, including consumables, catering and staff time, £52,504; the largest single expense was examiner time, £26,938. Post-examination costs, including administrative tasks, the exam board meeting and checking failed papers, came to £3,191. The most expensive station was depression history taking utilising actors, costing £5,105, an additional £2,981 over the standard question cost. The cheapest station was prescribing, with a total station cost of £2,760, an addition of only £636 to the standard cost. The total cost per student was approximately £293.

Discussion: Cost in medical education assessment is a complex area; differences across institutions lead to huge variations in cost. The use of volunteer patients in our institution incurs no cost, whereas other institutions pay up to £230 per patient. The main cost associated with the conduct of the OSCE is examiner time.

Conclusion: The OSCE is expensive to run. With students and regulators demanding formative as well as summative OSCEs, costs are significant. Further work will need to identify whether the OSCE is value for money and tie this to measures of utility: is it generalizable, valid, acceptable and economically feasible? (Walsh & Jaye 2013).

References:
Patricio MF, Juliao M, Fareleira F, Carneiro AV. 2013. Is the OSCE a feasible tool to assess competencies in undergraduate medical education? Med Teach 35:503-14.
Reznick R, Smee S, Baumber J, Cohen R, Rothman A, Blackmore D, Berard M. 1993. Guidelines for estimating the real cost of an objective structured clinical examination. Acad Med 68(7):513-7.
Walsh K, Jaye P. 2013. Cost and value in medical education. Educ Prim Care 24(6):391-3.
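For orientation, the per-question development figure quoted above follows from dividing the development cost by the number of questions; the count of 15 is an assumption taken from the 15 stations listed in the background table later in the poster, not a figure stated in the abstract itself:

\[
\frac{\pounds 6{,}280}{15} \approx \pounds 419 \text{ per question}
\]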
Money makes the (medical assessment) world go round
Craig Brown1, Sarah Ross1, Kieran Walsh2, Jen Cleland1
1. University of Aberdeen, Scotland
2. BMJ Learning, London
Introduction
Assessment is resource intensive
Many billions of pounds/dollars/euros are spent on “medical education” annually
The utility index (1) defines the usefulness of an assessment in terms of Reliability, Validity, Acceptability, Feasibility & Cost-Effectiveness
1. Adv Health Sci Edu 1996;1:41-67
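As a rough orientation, the utility index is often written as a multiplicative model (following the utility framework cited above); the simple unweighted form below uses the components listed on this slide and is an illustrative sketch, not a formula reported in this study:

\[
U = R \times V \times A \times F \times C
\]

where U = utility, R = reliability, V = validity, A = acceptability, F = feasibility and C = cost-effectiveness; in practice each factor may carry a context-specific weight.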
Background

Variable                           n
Students                           185
Stations                           15
Sites                              4
Runs per site                      6
Stations with examiner present     14
Simulated patients or actors       12
Station length (min)               8
Methods
Retrospective case-study cost analysis of a “high-stakes” final-year OSCE
Costs included:
Staff
Consumables & equipment
Travel & accommodation
Venue
Patient costs
Other additional costs
Costs were mapped to Reznick’s 4-phase process model (2): Development, Production, Administration (Conductance) and Post-exam reporting (see the aggregation sketch below)
2. Acad Med 1993;68(7):513-7
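As an illustration of how itemised costs map onto these four phases, here is a minimal aggregation sketch. It is written for this summary rather than taken from the study; the item-level figures (GBP) are those shown on the results slides that follow, so the computed sums may differ by a pound or two from the rounded slide totals, and the separately reported hidden costs are excluded.

```python
# Minimal sketch: itemised OSCE costs grouped by Reznick's four phases.
# Figures are the item-level GBP values from the results slides below;
# rounding means the computed sums may differ slightly from the slide totals.

costs_gbp = {
    "Development": {
        "Question writing": 3220,
        "Question review": 1458,
        "Question testing": 464,
        "Pre-exam meetings": 408,
        "Administrator organisation time": 1127,
    },
    "Production": {
        "Examiner online training": 6495,
        "Volunteer patient and actor training": 1414,
    },
    "Administration (Conductance)": {
        "Examiner / standby examiner time": 30669,
        "External examiners": 1359,
        "Catering": 2273,
        "Examiner accommodation/travel": 3727,
        "Site coordinators / support staff time": 7780,
        "Consumables": 1179,
        "Actors": 717,
    },
    "Post-exam reporting": {
        "Exam board": 1327,
        "Paper checking": 1107,
        "Data management": 614,
        "Secretarial costs": 142,
    },
}

# Per-phase totals
for phase, items in costs_gbp.items():
    print(f"{phase}: £{sum(items.values()):,}")

# Overall itemised total (hidden costs such as volunteer patients and
# accommodation/utilities are reported separately in the results)
grand_total = sum(sum(items.values()) for items in costs_gbp.values())
print(f"Overall (itemised, excluding hidden costs): £{grand_total:,}")
```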
Results
Development
Question writing £3,220
Question review £1,458
Question testing £464
Pre-exam meetings £408
Administrator organisation time £1,127
Total: £6,677 (€7,808 / $10,148)
Results
Production
Examiner online training (1hr/examiner) £6,495
Staff costs for volunteer patient and actor training £1,414
Total: £7,909 (€9,248 / $12,020)
Results
Administration (Conductance)
Examiner / standby examiner time £30,669
External examiners £1,359
Catering £2,273
Examiner accommodation/travel £3,727
Site coordinators/support staff time £7,780
Consumables £1,179
Actors £717
Total: £47,704 (€55,785 / $72,500)
Results
Post-exam reporting
Exam board (17 staff members, 1.5 hrs) £1,327
Paper checking £1,107
Data management £614
Secretarial costs £142
Total: £3,191 (€3,731 / $4,849)
Results
Hidden Costs
Accommodation & utilities £5,025
Volunteer patients £18,720
Total: £23,745 (€27,767 / $36,087)
Results
Total: £65,000 (€76,011 / $98,787)
Station by station analysis

Station (Focus)                                          Cost
Gout (Communication/Examination)                         £4,415
TIA (Communication/Examination)                          £4,425
AF (Communication/Data Interpretation)                   £4,417
Urinary stones (Communication/Procedural skill)          £4,532
Eye (Examination)                                        £4,680
Prescribing                                              £2,787
GI Bleed (Communication/Data Interpretation)             £4,418
Headache (Communication)                                 £4,415
Hypoglycaemia (Communication)                            £4,415
Acute abdomen (Examination)                              £4,418
Compartment Syndrome (Examination)                       £4,515
Pre-op assessment (Communication/Data Interpretation)    £4,416
Warfarin (Prescribing/Documentation)                     £4,329
Depression (Communication)                               £5,132
Respiratory (Communication/Procedural skill)             £4,415
Conclusion
The OSCE is expensive, with significant institutional variability in costs
The bulk of these costs are not modifiable in light of other metrics of exam utility
Providers should be prepared to assign significant financial resources to OSCE assessment
Cost-effectiveness should be considered by OSCE planners as a measure of its utility
Questions