The role of assessment in competency-based medical education
Source available from: John Burkhardt
- "Competence is a holistic judgment that incorporates knowledge, skills and attitudes that are demonstrated in the context of practice and are influenced by the context of practice and learning. The complexities of CBE assessment and the practical and theoretical implications are receiving greater attention from educators, researchers and psychometricians (Rethans et al. 2002; Norcini 2005; Holmboe et al. 2010; Schuwirth & Ash 2013). From these analyses, several assessment issues emerge that should be carefully considered by any CBE program: Assessment data need to be collected frequently, even continuously. "
ABSTRACT: There is a growing demand for health sciences faculty with formal training in education. To address this need, the University of Michigan Medical School created a Master in Health Professions Education (UM-MHPE). The UM-MHPE is a competency-based education (CBE) program targeting working professionals; it is individualized and adapts to each learner's situation through personal mentoring. Critical to CBE is an assessment process that accurately and reliably determines a learner's competence in educational domains. The program's assessment method has two principal components: an independent assessment committee and a learner repository. Learners submit evidence of competence that is evaluated by three independent assessors. The assessments are presented to an Assessment Committee, which determines whether the submission provides evidence of competence. The learner receives feedback on the submission and, if needed, on the actions required to reach competency. During the program's first year, six learners presented 10 submissions for review. Assessing learners in a competency-based program has created challenges; setting standards for outcomes that are not readily quantifiable is difficult. However, we argue that this is a more authentic form of assessment and that the process could be adapted for use within most competency-based formats. While our approach is demanding, we document practical learning outcomes that assess competence.
- "However, our experience shows that students can be more dynamic, active, demanding, flexible, autonomous, critical, and responsible when they are supported by an appropriate learning tool. It is expected that further development of the e-Portfolio will improve the achievement of competence by use of this unique combination of quantitative and formative assessments [36,37]. "
ABSTRACT: Background: We evaluated a newly designed electronic portfolio (e-Portfolio) that provided quantitative evaluation of surgical skills. Medical students at the University of Seville used the e-Portfolio on a voluntary basis for evaluation of their performance in undergraduate surgical subjects. Methods: Our new web-based e-Portfolio was designed to evaluate surgical practical knowledge and skills targets. Students recorded each activity on a form, attached evidence, and added their reflections. Students self-assessed their practical knowledge using qualitative criteria (yes/no), and graded their skills according to complexity (basic/advanced) and participation (observer/assistant/independent). A numerical value was assigned to each activity, and the values of all activities were summed to obtain the total score. The application automatically displayed quantitative feedback. We performed qualitative evaluation of the perceived usefulness of the e-Portfolio and quantitative evaluation of the targets achieved. Results: Thirty-seven of 112 students (33%) used the e-Portfolio, of whom 87% reported that they understood the methodology of the portfolio. All students reported an improved understanding of their learning objectives resulting from the numerical visualization of progress, all students reported that the quantitative feedback encouraged their learning, and 79% of students felt that their teachers were more available because they were using the e-Portfolio. Only 51.3% of students reported that the reflective aspects of learning were useful. Individual students achieved a maximum of 65% of the total targets and 87% of the skills targets. The mean total score was 345 ± 38 points. For basic skills, 92% of students achieved the maximum score for participation as an independent operator, and all achieved the maximum scores for participation as an observer and assistant. For complex skills, 62% of students achieved the maximum score for participation as an independent operator, and 98% achieved the maximum scores for participation as an observer or assistant. Conclusions: Medical students reported that use of an electronic portfolio that provided quantitative feedback on their progress was useful when the number and complexity of targets were appropriate, but not when the portfolio offered only formative evaluations based on reflection. Students felt that use of the e-Portfolio guided their learning process by indicating knowledge gaps to themselves and their teachers.
- "As competency-based training systems evolve they will increasingly rely upon performance assessments that support defensible and reproducible decisions. (Holmboe et al. 2010) This bespeaks the need for a robust enterprise to establish the validity of such decisions and the scores that inform them (Schuwirth and van der Vleuten 2011; Boulet et al. 2011). "
ABSTRACT: Ongoing transformations in health professions education underscore the need for valid and reliable assessment. The current standard for assessment validation requires evidence from five sources: content, response process, internal structure, relations with other variables, and consequences. However, researchers remain uncertain regarding the types of data that contribute to each evidence source. We sought to enumerate the validity evidence sources and supporting data elements for assessments using technology-enhanced simulation. We conducted a systematic literature search including MEDLINE, ERIC, and Scopus through May 2011. We included original research that evaluated the validity of simulation-based assessment scores using two or more evidence sources. Working in duplicate, we abstracted information on the prevalence of each evidence source and the underlying data elements. Among 217 eligible studies only six (3 %) referenced the five-source framework, and 51 (24 %) made no reference to any validity framework. The most common evidence sources and data elements were: relations with other variables (94 % of studies; reported most often as variation in simulator scores across training levels), internal structure (76 %; supported by reliability data or item analysis), and content (63 %; reported as expert panels or modification of existing instruments). Evidence of response process and consequences were each present in <10 % of studies. We conclude that relations with training level appear to be overrepresented in this field, while evidence of consequences and response process are infrequently reported. Validation science will be improved as educators use established frameworks to collect and interpret evidence from the full spectrum of possible sources and elements.
Questions & Answers about this publication
- Could anyone refer me to some interesting empirical studies on the use of competency-based curricula in a university setting?
Any other empirical studies related to this issue would also be welcome.