Direct observed procedural skills assessment in the undergraduate setting

Clinical Skills Centre, University of Dundee, Ninewells Hospital, Dundee, UK.
The Clinical Teacher 08/2012; 9(4):228-32. DOI: 10.1111/j.1743-498X.2012.00582.x
Source: PubMed


Medical students are required to undertake procedural clinical skills training before they qualify as doctors, and an assessment of these skills is a critical element of their fitness to practise.
Challenges facing educators include the logistics of observation (i.e. who is best placed to assess students' competence?), inconclusive evidence about whether students in the workplace adhere to standards of practice, and the time required for an effective assessment.
In this article we discuss who is best placed to assess final-year medical students in the workplace. We explore the use of direct observed procedural skills (DOPS) to assess students undertaking procedural skills in a simulated workplace setting through tutor, peer and self-assessment. The DOPS tool has been used to assess foundation doctors, but can it also be used effectively to assess undergraduate medical students?
The main purpose of formative assessment in the simulated setting is to support student learning through the provision of feedback and debriefing. Used in this way, the DOPS tool can provide an insightful perspective on a student's performance of procedural clinical skills. Tutors can use the DOPS tool to guide their teaching practice by tailoring their lessons towards areas in which students require more guidance. The DOPS assessment tool also presents an opportunity to provide immediate and relevant feedback.

  • ABSTRACT: The authors report final-year ward simulation exercise (FYWSE) data from the University of Dundee Medical School. Faculty who designed this assessment intend for the final score to represent an individual senior medical student’s level of clinical performance. The results are included in each student’s portfolio as one source of evidence of the student’s capability as a practitioner, professional, and scholar. Our purpose in conducting this study was to illustrate how assessment designers who are creating assessments to evaluate clinical performance might develop propositions and then collect and examine various sources of evidence to construct and evaluate a validity argument. The data were from all 154 medical students who were in their final year of study at the University of Dundee Medical School in the 2010–2011 academic year. To the best of our knowledge, this is the first report on an analysis of senior medical students’ clinical performance while they were taking responsibility for the management of a simulated ward. Using multi-facet Rasch measurement and a generalizability theory approach, we examined various sources of validity evidence that the medical school faculty have gathered for a set of six propositions needed to support their use of scores as measures of students’ clinical ability. Based on our analysis of the evidence, we would conclude that, by and large, the propositions appear to be sound, and the evidence seems to support their proposed score interpretation. Given the body of evidence collected thus far, their intended interpretation seems defensible.
    Advances in Health Sciences Education 03/2015; DOI: 10.1007/s10459-015-9601-5 · 2.12 Impact Factor
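
    The abstract above mentions a generalizability theory approach alongside multi-facet Rasch measurement. As a purely illustrative sketch, and not the study's actual analysis or design, the following Python snippet estimates variance components and the relative (G) and absolute (Phi) coefficients for a hypothetical single-facet design in which every student is scored by the same set of raters:

    import numpy as np

    def g_study(scores):
        """Variance components and G coefficients for a fully crossed
        persons x raters design with one score per cell."""
        n_p, n_r = scores.shape
        grand = scores.mean()
        person_means = scores.mean(axis=1)
        rater_means = scores.mean(axis=0)

        # Mean squares from a two-way ANOVA without replication
        ms_p = n_r * np.sum((person_means - grand) ** 2) / (n_p - 1)
        ms_r = n_p * np.sum((rater_means - grand) ** 2) / (n_r - 1)
        resid = scores - person_means[:, None] - rater_means[None, :] + grand
        ms_pr = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1))

        # Expected-mean-square solutions for the variance components
        var_pr = ms_pr                           # person x rater interaction + error
        var_p = max((ms_p - ms_pr) / n_r, 0.0)   # student (object of measurement)
        var_r = max((ms_r - ms_pr) / n_p, 0.0)   # rater severity

        # Relative (G) and absolute (Phi) coefficients for n_r raters
        g_rel = var_p / (var_p + var_pr / n_r)
        phi = var_p / (var_p + var_r / n_r + var_pr / n_r)
        return var_p, var_r, var_pr, g_rel, phi

    # Hypothetical data: 6 students each scored by the same 3 raters
    rng = np.random.default_rng(0)
    scores = rng.normal(loc=70, scale=5, size=(6, 3))
    print(g_study(scores))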
  • ABSTRACT: Workplace-based assessment is commonplace, particularly in medicine. These assessments typically involve the assessment of a student conducting a consultation, or part thereof, on a real patient in an authentic clinical practice setting. In disciplines such as medicine, substantial work has been directed towards evaluating the processes and tools used to perform these assessments and understanding their educational impact. At present there is little literature on the tools used for workplace-based assessment in osteopathy, yet these assessments form part of the picture of a student’s capability. The current study presents data from a new workplace-based assessment tool for osteopathy, the mini Clinical Evaluation Exercise (mini-CEX), and uses these data to inform the broader implementation of the mini-CEX. The data presented here suggest that the mini-CEX in this cohort is feasible, efficient, acceptable to stakeholders, internally consistent, and able to differentiate between students at different stages of an osteopathic teaching program. Further research into the use of the mini-CEX in osteopathy is required, particularly focusing on its educational impact, the reliability of the tool, and its generalisability to clinical learning environments in other osteopathy teaching institutions.
    International Journal of Osteopathic Medicine 07/2015; DOI: 10.1016/j.ijosm.2015.07.002 · 1.20 Impact Factor
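
    The abstract above reports that mini-CEX scores were internally consistent. As a minimal illustrative sketch, using hypothetical domain scores rather than data from the study, internal consistency could be summarised with Cronbach's alpha as follows:

    import numpy as np

    def cronbach_alpha(item_scores):
        """item_scores: rows = assessed encounters, columns = mini-CEX domains."""
        k = item_scores.shape[1]
        item_variances = item_scores.var(axis=0, ddof=1)
        total_variance = item_scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical scores for 10 encounters across 6 domains on a 1-9 scale
    rng = np.random.default_rng(1)
    encounter_level = rng.normal(6.0, 1.0, size=(10, 1))   # shared signal per encounter
    scores = np.clip(encounter_level + rng.normal(0, 0.5, (10, 6)), 1, 9)
    print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")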