A Core Competency–based Objective Structured Clinical Examination (OSCE) Can Predict Future Resident Performance

Department of Emergency Medicine, Emory University School of Medicine, Atlanta, GA, USA.
Academic Emergency Medicine. 10/2010; 17(Suppl 2):S67-71. DOI: 10.1111/j.1553-2712.2010.00894.x
Source: PubMed

ABSTRACT: This study evaluated the ability of an objective structured clinical examination (OSCE) administered in the first month of residency to predict future resident performance in the Accreditation Council for Graduate Medical Education (ACGME) core competencies.
Eighteen Postgraduate Year 1 (PGY-1) residents completed a five-station OSCE in the first month of postgraduate training. Performance was graded in each of the ACGME core competencies. At the end of 18 months of training, faculty evaluations of resident performance in the emergency department (ED) were used to calculate a cumulative clinical evaluation score for each core competency. The correlations between OSCE scores and clinical evaluation scores at 18 months were assessed on an overall level and in each core competency.
There was a statistically significant correlation between overall OSCE scores and overall clinical evaluation scores (R = 0.48, p < 0.05) and in the individual competencies of patient care (R = 0.49, p < 0.05), medical knowledge (R = 0.59, p < 0.05), and practice-based learning (R = 0.49, p < 0.05). No correlation was noted in the systems-based practice, interpersonal and communication skills, or professionalism competencies.
An OSCE administered early in residency can predict future postgraduate performance both overall and in specific core competencies. Used appropriately, such information can be a valuable tool for program directors in monitoring residents' progress and providing more tailored guidance.
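As a rough illustration only (not the authors' analysis code), the sketch below shows how a Pearson correlation of this kind could be computed in Python; the resident scores, scoring scales, and use of SciPy are assumptions for demonstration.

```python
# Illustrative sketch only: correlating month-1 OSCE scores with 18-month
# cumulative clinical evaluation scores for a PGY-1 cohort. All scores below
# are invented placeholders, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_residents = 18                                     # cohort size reported in the abstract
osce = rng.uniform(60, 95, size=n_residents)         # overall OSCE score per resident
clinical = 0.5 * osce + rng.normal(0, 8, size=n_residents)  # 18-month faculty evaluation score

r, p = stats.pearsonr(osce, clinical)                # Pearson correlation and two-sided p-value
print(f"r = {r:.2f}, p = {p:.3f}")                   # the study reports r = 0.48, p < 0.05 overall
```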

    • "A review of the literature on OSCE details that this assessment method originated from medical education, where it was initially developed during the 1970's to replace more subjective assessments such as 'long and short-cases'(Harden et al. 1975). Therefore, it has become embedded within medical training and has been subjected to further work on assessment reliability and validity (see Chesser et al. 2009; Wallenstein et al. 2010). However, for other healthcare disciplines, such as nursing, physiotherapy, midwifery, OSCE is a newer form of assessment for many students. "
    ABSTRACT: This study explored healthcare students' experience of an OSCE (Objective Structured Clinical Examination). The OSCE is a form of assessment in which the student demonstrates clinical skills and underpinning knowledge, usually under simulated conditions. It originated in medical education and is now being adopted by other disciplines of healthcare education. Because the OSCE is a new experience for most students, it is important that educators explore this assessment from the student's perspective. A literature review revealed a paucity of research in this area. Hermeneutic phenomenology was used as the study's underpinning methodology, and data were collected through semi-structured interviews with students. Analysis revealed three main themes: (1) anxiety about the OSCE, (2) preparation as a coping strategy, and (3) simulation as a further cause of anxiety. The recommendations for future practice are that students need to be supported appropriately, that preparation of students for an OSCE requires effective planning, and that simulation needs to be grounded in practice. The study concludes that students valued the OSCE as a worthwhile assessment; however, there are major concerns for students that need careful consideration by academic faculty developing this type of assessment.
    01/2012; 1(1). DOI: 10.7190/seej.v1i1.37
  • ABSTRACT: The objective was to critically appraise and highlight medical education research studies published in 2010 that were methodologically superior and whose outcomes were pertinent to teaching and education in emergency medicine (EM). A search of the English-language literature in 2010 querying PubMed, Scopus, Education Resources Information Center (ERIC), and PsycINFO identified 41 EM studies that used hypothesis-testing or observational investigations of educational interventions. Five reviewers independently ranked all publications based on 10 criteria, including four related to methodology, that were chosen a priori to standardize evaluation by reviewers. This method was used previously to appraise medical education research published in 2008 and 2009. Five medical education research studies met the a priori criteria for inclusion and are reviewed and summarized here. Compared with 2008 and 2009, the number of published educational research papers increased from 30 to 36 and then to 41. The number of funded studies remained fairly stable over the past 3 years at 13 (2008), 16 (2009), and 9 (2010). As in past years, research involving the use of technology accounted for a significant number of publications (34%), including three of the five highlighted studies. Forty-one EM educational studies published in 2010 were identified; this critical appraisal reviews and highlights five studies that met a priori quality indicators. Current trends and common methodologic pitfalls in the 2010 papers are noted.
    Academic Emergency Medicine. 10/2011; 18(10):1081-9. DOI: 10.1111/j.1553-2712.2011.01191.x
  • ABSTRACT: To determine whether a "lay" rater could assess clinical reasoning, interrater reliability was measured between physician and lay raters of patient notes written by medical students as part of an eight-station objective structured clinical examination. Seventy-five notes were rated on core elements of clinical reasoning by physician and lay raters independently, using a scoring guide developed by physician consensus. Twenty-five notes were rerated by a second physician rater as an expert control. Kappa statistics and simple percentage agreement were calculated in three areas: evidence for each diagnosis, evidence against each diagnosis, and diagnostic workup. Agreement between physician and lay raters for the top diagnosis was as follows: supporting evidence, 89% (κ = .72); evidence against, 89% (κ = .81); and diagnostic workup, 79% (κ = .58). Physician rater agreement was 83% (κ = .59), 92% (κ = .87), and 96% (κ = .87), respectively. Using a comprehensive scoring guide, interrater reliability between physician and lay raters was comparable with the reliability between two expert physician raters.
    American Journal of Surgery. 01/2012; 203(1):81-6. DOI: 10.1016/j.amjsurg.2011.08.003
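As a minimal, hypothetical illustration of the agreement statistics cited in the last abstract above, the sketch below computes simple percent agreement and Cohen's kappa for two raters; the rating categories, data, and use of scikit-learn are assumptions, not details from that study.

```python
# Illustrative sketch only: percent agreement and Cohen's kappa between a
# physician rater and a lay rater scoring the same notes. Ratings are invented.
from sklearn.metrics import cohen_kappa_score

physician = ["present", "present", "absent", "present", "absent", "present", "absent", "present"]
lay_rater = ["present", "absent",  "absent", "present", "absent", "present", "absent", "present"]

agreement = sum(a == b for a, b in zip(physician, lay_rater)) / len(physician)
kappa = cohen_kappa_score(physician, lay_rater)      # chance-corrected agreement
print(f"Agreement = {agreement:.0%}, kappa = {kappa:.2f}")
```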