Article

Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31

Foundation for Advancement of International Medical Education and Research, Philadelphia 19104, USA.
Medical Teacher (Impact Factor: 1.68). 12/2007; 29(9):855-71. DOI: 10.1080/01421590701775453
Source: PubMed

ABSTRACT

Background: There has been concern that trainees are seldom observed, assessed, and given feedback during their workplace-based education. This has led to increasing interest in a variety of formative assessment methods that require observation and offer the opportunity for feedback.
Aims: To review some of the literature on the efficacy and prevalence of formative feedback, describe the common formative assessment methods, characterize the nature of feedback, examine the effect of faculty development on its quality, and summarize the challenges still faced.
Results: The research literature on formative assessment and feedback suggests that it is a powerful means of changing the behaviour of trainees. Several methods of workplace-based assessment have been developed, and there is preliminary evidence of their reliability and validity. A variety of factors enhance the efficacy of workplace-based assessment, including the provision of feedback that is consistent with the needs of the learner and focused on important aspects of the performance. Faculty play a critical role, and successful implementation requires that they receive training.
Conclusions: There is a need for formative assessment that offers trainees the opportunity for feedback. Several good methods exist, and feedback has been shown to have a major influence on learning. The critical role of faculty is highlighted, as is the need for strategies to enhance their participation and training.

  • "They are called Objective Structured Clinical Examinations, and virtually every medical school in the world uses them (Harden, Lilley, & Patricio, 2015). However, for a number of years now, assessment methods have been developed for use in the unstandardized real clinical environment (Norcini & Burch, 2007)."
    ABSTRACT: It has been such an honour to read the assessment papers in legal education that were written with an earlier paper of mine (C. P. Van der Vleuten & Schuwirth, 2005) as a frame of reference. The papers provide excellent insight into a number of assessment practices in different law schools. Particularly striking were the similarities between the issues discussed in the legal domain and those in my own domain, the field of medicine. The papers address notions of reflection, reflective practice, the importance of learning (and assessing) in context (either simulated or real), the development of professional competences, definitions of professional competence, the relevance of general skills (professionalism, ethics, values, altruism, empathy, client-centeredness, managing oneself and others at work), and new approaches to assessment (journals, portfolios, extracted examples of work, observation, think-aloud in practice, and holistic approaches to assessment). All these notions resonate completely with developments in the medical domain. For this contribution I summarize some recent developments in the medical domain that are relevant to all these topics: competency frameworks, assessment of performance in context, reflection, and programmatic assessment. This is meant merely as an informative mirror on what happens in this other domain.
    Full-text · Article · Jan 2016
  • "In clinical training programmes, performance evaluations through workplace-based assessments like the mini-clinical evaluation exercise (mini-CEX) are aimed at helping students improve their clinical performance (Norcini & Burch 2007)."
    ABSTRACT: Context: Narrative feedback documented in performance evaluations by the teacher, i.e. the clinical supervisor, is generally accepted to be essential for workplace learning. Many studies have examined factors that influence the usage of mini-clinical evaluation exercise (mini-CEX) instruments and the provision of feedback, but little is known about how these factors influence teachers' feedback-giving behaviour. In this study, we investigated teachers' use of the mini-CEX in performance evaluations to provide narrative feedback in undergraduate clinical training. Methods: We designed an exploratory qualitative study using an interpretive approach. Focusing on the usage of mini-CEX instruments in clinical training, we conducted semi-structured interviews to explore teachers' perceptions. Between February and June 2013, we interviewed 14 clinicians who participated as teachers during undergraduate clinical clerkships. Informed by concepts from the literature, we coded interview transcripts and iteratively reduced and displayed data using template analysis. Results: We identified three main themes of interrelated factors that influenced teachers' practice with regard to mini-CEX instruments: teacher-related factors, teacher-student interaction-related factors, and teacher-context interaction-related factors. Four issues pertinent to workplace-based performance evaluations (direct observation, the teacher-student relationship, verbal versus written feedback, and formative versus summative purposes) are presented to clarify how different factors interact with each other and influence teachers' feedback-giving behaviour. Embedding performance observation in clinical practice and establishing trustworthy teacher-student relationships in more longitudinal clinical clerkships were considered important in creating a learning environment that supports and facilitates the feedback exchange. Conclusion: Teachers' feedback-giving behaviour within the clinical context results from the interaction between personal, interpersonal, and contextual factors. Increasing insight into how teachers use mini-CEX instruments in daily practice may offer strategies for creating a professional learning culture in which feedback giving and seeking are enhanced.
    Full-text · Article · Mar 2015 · Medical Teacher
  • "A structured interview is used to elicit the trainee's rationalization and determinants for data gathering, problem solving, and patient management. Work-based assessment using the patient's chart attends to the base of Miller's pyramid: "knows" and "knows how", which conceptualize the recall of knowledge and the application of that knowledge to problem solving and clinical decisions, respectively (Miller 1990; Norcini & Burch 2007)."
    ABSTRACT: Objectives: The objective of this review is to summarize and critically appraise existing evidence on the use of chart-stimulated recall (CSR) and case-based discussion (CBD) as assessment tools for medical trainees. Methods: Medline, Embase, CINAHL, PsycINFO, the Educational Resources Information Centre (ERIC), Web of Science, and the Cochrane Central Register of Controlled Trials were searched for original articles on the use of CSR or CBD as an assessment method for trainees in all medical specialties. Results: Four qualitative and three observational non-comparative studies were eligible for this review. The number of patient-chart encounters needed to achieve sufficient reliability varied across studies. None of the included studies evaluated the content validity of the tool. Both trainees and assessors expressed a high level of satisfaction with the tool; however, inadequate training, differing interpretations of the scoring scales, and the skills needed to give feedback were identified as limitations on conducting the assessment. Conclusion: There is still no compelling evidence for the use of the patient's chart to evaluate medical trainees in the workplace. A body of evidence that is valid and reliable, and that documents the educational effect of using patients' charts to assess medical trainees, is needed.
    Full-text · Article · Feb 2015 · Medical Teacher
