We sought to determine whether a novel competency-based daily encounter card (DEC), designed to minimize leniency bias and maximize independent assessment of each competency, could address the limitations of existing feedback mechanisms on an emergency medicine rotation.
Learners in 2 tertiary academic emergency departments (EDs) presented a DEC to their teachers after each shift. DECs included a dichotomous categorical rating scale (i.e., "needs attention" or "area of strength") for each of the 7 CanMEDS roles or competencies and an overall global rating scale. Teachers were instructed to choose which of the 7 competencies they wished to evaluate on each shift. Results were analyzed using both the teacher (staff) and the learner (resident) as the unit of analysis.
Over 28 months, 54 learners presented a total of 801 DECs, which were completed by 43 different teachers. Teachers' patterns of selecting CanMEDS competencies for assessment did not differ between the 2 sites. Teachers selected an average of 3 roles per DEC (range 0-7). Only 1.3% of competency ratings were "needs attention." The frequency with which each competency was selected ranged from 25% (Health Advocate) to 85% (Medical Expert).
Teachers chose to direct feedback toward a breadth of competencies. They provided feedback on all 7 CanMEDS roles in the ED, yet demonstrated a marked leniency bias.
"Several studies have examined the reliability of ECs that are organized around ad hoc designs of physician competence (Al-Jarallah et al. 2005; Brennan and Norman 1997; Richards et al. 2007; Turnbull et al. 2000). Bandiera and Lendrum (2008) reported an analysis of an end-of-shift EC based on the CanMEDS framework. However the ratings were based on a 2-point scale—'Satisfactory' or 'Needs Further Attention.' "
ABSTRACT: The purpose of this study was to determine the reliability of a computer-based encounter card (EC) to assess medical students during an emergency medicine rotation. From April 2011 to March 2012, multiple physicians assessed an entire medical school class during their emergency medicine rotation using the CanMEDS framework. At the end of an emergency department shift, an EC was scored (1-10) for each student on Medical Expert, 2 additional Roles, and an overall score. Analysis of 1,819 ECs (155 of 186 students) revealed the following: Collaborator, Manager, Health Advocate and Scholar were assessed on less than 25% of ECs. On average, each student was assessed 11 times, with an inter-rater reliability of 0.6. The largest source of variance was rater bias. A D-study showed that a minimum of 17 ECs were required for a reliability of 0.7. There were moderate to strong correlations between all Roles and the overall score, and the factor analysis revealed all items loading on a single factor, accounting for 87% of the variance. The global assessment of the CanMEDS Roles using ECs has significant variance in estimates of performance, derived from differences between raters. Some Roles are seldom selected for assessment, suggesting that raters have difficulty identifying related performance. Finally, correlation and factor analyses demonstrate that raters are unable to discriminate among Roles and are basing judgments on an overall impression.
Advances in Health Sciences Education 01/2013; 18(5). DOI:10.1007/s10459-012-9440-6
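The reliability figures in the abstract above are consistent with a simple Spearman-Brown projection. Note that a D-study proper decomposes variance components from a generalizability analysis; the sketch below assumes parallel observations and is only an approximation of the published analysis, not a reproduction of it.

```python
def reliability_of_mean(r1, k):
    """Spearman-Brown prophecy: reliability of the mean of k parallel
    observations, given single-observation reliability r1."""
    return k * r1 / (1 + (k - 1) * r1)

def single_obs_reliability(R, k):
    """Invert Spearman-Brown: the single-observation reliability implied
    by a reliability of R for the mean of k observations."""
    return R / (k - (k - 1) * R)

# Reported: ~11 ECs per student yielded a reliability of 0.6.
r1 = single_obs_reliability(0.6, 11)   # implies 0.12 for a single EC
# Projected reliability at 17 ECs, the D-study's recommended minimum:
print(round(reliability_of_mean(r1, 17), 2))  # 0.7
```

Under this assumption a single EC carries very low reliability (about 0.12), which is why many independent observations are needed before per-student scores stabilize near the conventional 0.7 threshold.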
"The papers included in the final selection for this review described a variety of methods for identifying and defining these outcomes (Harden et al. 1999b). The authors also collectively promoted the concept of 'progression of competence,' meaning that learners advance along a series of defined milestones on their way to the explicit outcome goals of training (theme 1a) (Lane and Ross 1994b; Bandiera and Lendrum 2008). This is articulated by Ben-David (1999): 'Outcome-based frameworks require a defined scheme of levels of progression towards the outcome.' "
ABSTRACT: Competency-based education (CBE) has emerged in the health professions to address criticisms of contemporary approaches to training. However, the literature has no clear, widely accepted definition of CBE that furthers innovation, debate, and scholarship in this area.
To systematically review CBE-related literature in order to identify key terms and constructs to inform the development of a useful working definition of CBE for medical education.
We searched electronic databases and supplemented the searches by consulting authors' files, checking reference lists, contacting relevant organizations, and conducting Internet searches. Screening was carried out in duplicate, and disagreements were resolved by consensus. We included any English- or French-language sources that defined competency-based education. Data were analyzed qualitatively and summarized descriptively.
We identified 15,956 records for initial relevancy screening by title and abstract. The full text of 1,826 records was then retrieved and assessed further for relevance, and a total of 173 records were analyzed. We identified 4 major themes (organizing framework, rationale, contrast with time, and implementing CBE) and 6 sub-themes (outcomes defined, curriculum of competencies, demonstrable, assessment, learner-centred, and societal needs). From these themes, a new definition of CBE was synthesized.
This is the first comprehensive systematic review of the medical education literature related to CBE definitions. The themes and definition identified should be considered by educators to advance the field.
Medical Teacher 08/2010; 32(8):631-7. DOI:10.3109/0142159X.2010.500898