Daily encounter cards facilitate competency-based feedback while leniency bias persists
Department of Medicine and the Wilson Centre for Research in Education, University of Toronto, and Department of Emergency Medicine, St. Michael's Hospital, Toronto, Ontario, Canada. Canadian Journal of Emergency Medicine
We sought to determine whether a novel competency-based daily encounter card (DEC), designed to minimize leniency bias and maximize independent competency assessments, could address the limitations of existing feedback mechanisms on an emergency medicine rotation.
Learners in 2 tertiary academic emergency departments (EDs) presented a DEC to their teachers after each shift. DECs included dichotomous categorical rating scales (i.e., "needs attention" or "area of strength") for each of the 7 CanMEDS roles or competencies and an overall global rating scale. Teachers were instructed to choose which of the 7 competencies they wished to evaluate on each shift. Results were analyzed using both staff and resident as the units of analysis.
Fifty-four learners submitted a total of 801 DECs, completed by 43 different teachers over 28 months. Teachers' patterns of selecting CanMEDS competencies to assess did not differ between the 2 sites. Teachers selected an average of 3 roles per DEC (range 0-7). Only 1.3% of competency ratings were marked "needs further attention." The frequency with which each competency was selected ranged from 25% (Health Advocate) to 85% (Medical Expert).
Teachers chose to direct feedback toward a breadth of competencies. They provided feedback on all 7 CanMEDS roles in the ED, yet demonstrated a marked leniency bias.
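The study's summary statistics (per-role selection rates and the share of ratings flagged) are simple tallies over cards. A minimal sketch, using hypothetical DEC records rather than the study's data:

```python
from collections import Counter

# Hypothetical DEC records: each card lists the CanMEDS roles the teacher
# chose to assess and which of those were flagged "needs further attention".
# (Illustrative data only; not the study's dataset.)
decs = [
    {"assessed": ["Medical Expert", "Communicator", "Manager"], "flagged": []},
    {"assessed": ["Medical Expert", "Health Advocate"], "flagged": ["Health Advocate"]},
    {"assessed": ["Medical Expert", "Scholar", "Professional"], "flagged": []},
]

# How often each role was selected, as a fraction of cards
selected = Counter(role for d in decs for role in d["assessed"])
selection_rate = {role: count / len(decs) for role, count in selected.items()}

# Leniency: share of individual competency ratings that were NOT flagged
n_ratings = sum(len(d["assessed"]) for d in decs)
n_flagged = sum(len(d["flagged"]) for d in decs)
leniency = 1 - n_flagged / n_ratings
```

With the toy records above, Medical Expert is selected on every card while Health Advocate appears once, mirroring the skew the study reports at much larger scale.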
- "A variety of instruments have been developed to evaluate residents' performance on the CanMEDS roles and to provide them with feedback (Norcini & Burch 2007). However, these instruments tend to generate feedback that is mainly focused on the Medical Expert role, often leaving the intrinsic roles behind (Bandiera & Lendrum 2008; Chou et al. 2008; Ginsburg et al. 2011). In addition, program directors expressed their dissatisfaction with the available evaluation instruments for the intrinsic roles, especially those for the Collaborator, Health Advocate and Manager roles (Chou et al. 2008)."
Residents benefit from regular, high quality feedback on all CanMEDS roles during their training. However, feedback mostly concerns Medical Expert, leaving the other roles behind. A feedback system was developed to guide supervisors in providing feedback on CanMEDS roles. We analyzed whether feedback was provided on the intended roles and explored differences in quality of written feedback.
In the feedback system, CanMEDS roles were assigned to five authentic situations: Patient Encounter, Morning Report, On-call, CAT, and Oral Presentation. Quality of feedback was operationalized as specificity and inclusion of strengths and improvement points. Differences in specificity between roles were tested with Mann-Whitney U tests with a Bonferroni correction (α = 0.003).
Supervisors (n = 126) provided residents (n = 120) with feedback 591 times. Feedback was provided on the intended roles, most frequently on Scholar (78%) and Communicator (71%), and least frequently on Manager (47%) and Collaborator (56%). Strengths (78%) were mentioned more often than improvement points (52%), which were lacking in 40% of the feedback on the Manager, Professional, and Collaborator roles. Feedback on Scholar was specific more frequently (p < 0.001), and feedback on Reflective Professional less frequently (p = 0.003).
Discussion and conclusion:
Assigning roles to authentic situations guides supervisors in providing feedback on different CanMEDS roles. We recommend additional supervisor training on how to observe and evaluate the roles.
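The specificity comparison in the abstract above (Mann-Whitney U tests with a Bonferroni-corrected α) can be sketched in plain Python. This is a simplified implementation: it uses the large-sample normal approximation without continuity or tie-variance corrections, the input data are hypothetical, and the comparison count used for the Bonferroni threshold is an illustrative assumption, not the study's exact number:

```python
import math

def rank(values):
    # Assign ranks 1..n, averaging ranks across ties
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(a, b):
    # U statistic plus a two-sided p-value from the normal approximation
    # (no continuity or tie correction; fine for a sketch, not for small n)
    r = rank(list(a) + list(b))
    r_a = sum(r[:len(a)])
    u_a = r_a - len(a) * (len(a) + 1) / 2
    u = min(u_a, len(a) * len(b) - u_a)
    mu = len(a) * len(b) / 2
    sigma = math.sqrt(len(a) * len(b) * (len(a) + len(b) + 1) / 12)
    p = math.erfc(abs((u - mu) / sigma) / math.sqrt(2))
    return u, p

# Bonferroni: divide the family-wise alpha by the number of comparisons.
# m = 15 is a hypothetical count that yields a threshold near the
# abstract's alpha = 0.003.
bonferroni_alpha = 0.05 / 15
```

A test is significant only if its raw p-value falls below `bonferroni_alpha`, which is what keeps the family-wise error rate at 0.05 across all pairwise role comparisons.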
- "Several studies have examined the reliability of ECs that are organized around ad hoc designs of physician competence (Al-Jarallah et al. 2005; Brennan and Norman 1997; Richards et al. 2007; Turnbull et al. 2000). Bandiera and Lendrum (2008) reported an analysis of an end-of-shift EC based on the CanMEDS framework. However, the ratings were based on a 2-point scale—'Satisfactory' or 'Needs Further Attention.'"
ABSTRACT: The purpose of this study was to determine the reliability of a computer-based encounter card (EC) to assess medical students during an emergency medicine rotation. From April 2011 to March 2012, multiple physicians assessed an entire medical school class during their emergency medicine rotation using the CanMEDS framework. At the end of an emergency department shift, an EC was scored (1-10) for each student on Medical Expert, 2 additional Roles, and an overall score. Analysis of 1,819 ECs (155 of 186 students) revealed the following: Collaborator, Manager, Health Advocate and Scholar were assessed on less than 25% of ECs. On average, each student was assessed 11 times with an inter-rater reliability of 0.6. The largest source of variance was rater bias. A D-study showed that a minimum of 17 ECs were required for a reliability of 0.7. There were moderate to strong correlations between all Roles and the overall score, and the factor analysis revealed all items loading on a single factor, accounting for 87% of the variance. The global assessment of the CanMEDS Roles using ECs has significant variance in estimates of performance, derived from differences between raters. Some Roles are seldom selected for assessment, suggesting that raters have difficulty identifying related performance. Finally, correlation and factor analyses demonstrate that raters are unable to discriminate among Roles and are basing judgments on an overall impression.
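The D-study projection in this abstract (11 ECs yield reliability 0.6; a minimum of 17 are needed for 0.7) is roughly reproduced by a Spearman-Brown calculation. This is a simplified stand-in for a full generalizability analysis, which would work from estimated variance components rather than a single reliability coefficient:

```python
def single_item_reliability(rel_k, k):
    # Invert Spearman-Brown: rel_k = k*r / (1 + (k-1)*r), solve for r
    return rel_k / (k - (k - 1) * rel_k)

def items_needed(rel_1, target):
    # Solve target = k*r / (1 + (k-1)*r) for k, given single-item r
    return target * (1 - rel_1) / (rel_1 * (1 - target))

# From the abstract: 11 ECs per student gave reliability 0.6
r1 = single_item_reliability(0.6, 11)   # implied per-shift reliability
k = items_needed(r1, 0.7)               # ECs needed to reach 0.7
```

With these numbers the per-shift reliability comes out to 0.12 and the projected requirement to about 17.1 ECs, consistent with the minimum of 17 reported from the D-study.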
- "The papers included in the final selection for this review described a variety of methods for identifying and defining these outcomes (Harden et al. 1999b). The authors also collectively promoted the concept of ''progression of competence,'' meaning that learners advance along a series of defined milestones on their way to the explicit outcome goals of training (theme 1a) (Lane and Ross 1994b; Bandiera & Lendrum 2008). This is articulated by Ben-David (1999): ''Outcome-based frameworks require a defined scheme of levels of progression towards the outcome.''"
ABSTRACT: Competency-based education (CBE) has emerged in the health professions to address criticisms of contemporary approaches to training. However, the literature offers no clear, widely accepted definition of CBE to further innovation, debate, and scholarship in this area.
To systematically review CBE-related literature in order to identify key terms and constructs to inform the development of a useful working definition of CBE for medical education.
We searched electronic databases and supplemented searches by using authors' files, checking reference lists, contacting relevant organizations and conducting Internet searches. Screening was carried out by duplicate assessment, and disagreements were resolved by consensus. We included any English- or French-language sources that defined competency-based education. Data were analyzed qualitatively and summarized descriptively.
We identified 15,956 records for initial relevancy screening by title and abstract. The full text of 1,826 records was then retrieved and assessed further for relevance. A total of 173 records were analyzed. We identified 4 major themes (organizing framework, rationale, contrast with time, and implementing CBE) and 6 sub-themes (outcomes defined, curriculum of competencies, demonstrable, assessment, learner-centred and societal needs). From these themes, a new definition of CBE was synthesized.
This is the first comprehensive systematic review of the medical education literature related to CBE definitions. The themes and definition identified should be considered by educators to advance the field.