Article

The role of assessment in competency-based medical education

American Board of Internal Medicine, USA.
Medical Teacher (Impact Factor: 1.68). 08/2010; 32(8):676-82. DOI: 10.3109/0142159X.2010.500704
Source: PubMed

ABSTRACT

Competency-based medical education (CBME), by definition, necessitates a robust and multifaceted assessment system. Assessment and the judgments or evaluations that arise from it are important at the level of the trainee, the program, and the public. When designing an assessment system for CBME, medical education leaders must attend to the context of the multiple settings where clinical training occurs. CBME further requires assessment processes that are more continuous and frequent, criterion-based, developmental, and work-based where possible; that use assessment methods and tools meeting minimum requirements for quality; that draw on both quantitative and qualitative measures and methods; and that involve the wisdom of group process in making judgments about trainee progress. Like all changes in medical education, CBME is a work in progress. Given the importance of assessment and evaluation for CBME, the medical education community will need more collaborative research to address several major challenges in assessment, including "best practices" in the context of systems and institutional culture and how best to train faculty to be better evaluators. Finally, we must remember that expertise, not competence, is the ultimate goal. CBME does not end with graduation from a training program, but should represent a career that includes ongoing assessment.

  • Source
    • "Competence is a holistic judgment that incorporates knowledge, skills and attitudes that are demonstrated in the context of practice and are influenced by the context of practice and learning. The complexities of CBE assessment and the practical and theoretical implications are receiving greater attention from educators, researchers and psychometricians (Rethans et al. 2002; Norcini 2005; Holmboe et al. 2010; Schuwirth & Ash 2013). From these analyses, several assessment issues emerge that should be carefully considered by any CBE program: Assessment data need to be collected frequently, even continuously. "
    ABSTRACT: There is a growing demand for health sciences faculty with formal training in education. Addressing this need, the University of Michigan Medical School created a Master in Health Professions Education (UM-MHPE). The UM-MHPE is a competency-based education (CBE) program targeting professionals; it is individualized and adapted to each learner's situation through personal mentoring. Critical to CBE is an assessment process that accurately and reliably determines a learner's competence in educational domains. The program's assessment method has two principal components: an independent assessment committee and a learner repository. Learners submit evidence of competence that is evaluated by three independent assessors. These assessments are presented to an Assessment Committee, which determines whether the submission provides evidence of competence. The learner receives feedback on the submission and, where necessary, the actions required to reach competency. During the program's first year, six learners presented 10 submissions for review. Assessing learners in a competency-based program has created challenges; setting standards for outcomes that are not readily quantifiable is difficult. However, we argue that this is a more genuine form of assessment and that the process could be adapted for use within most competency-based formats. While our approach is demanding, we document practical learning outcomes that assess competence.
    Full-text · Article · Jun 2015 · Medical Teacher
  • Source
    • "However, our experience shows that students can be more dynamic, active, demanding, flexible, autonomous, critical, and responsible when they are supported by an appropriate learning tool. It is expected that further development of the e-Portfolio will improve the achievement of competence by use of this unique combination of quantitative and formative assessments [36,37]. "
    ABSTRACT: Background: We evaluated a newly designed electronic portfolio (e-Portfolio) that provided quantitative evaluation of surgical skills. Medical students at the University of Seville used the e-Portfolio on a voluntary basis for evaluation of their performance in undergraduate surgical subjects. Methods: Our new web-based e-Portfolio was designed to evaluate surgical practical knowledge and skills targets. Students recorded each activity on a form, attached evidence, and added their reflections. Students self-assessed their practical knowledge using qualitative criteria (yes/no), and graded their skills according to complexity (basic/advanced) and participation (observer/assistant/independent). A numerical value was assigned to each activity, and the values of all activities were summed to obtain the total score. The application automatically displayed quantitative feedback. We performed qualitative evaluation of the perceived usefulness of the e-Portfolio and quantitative evaluation of the targets achieved. Results: Thirty-seven of 112 students (33%) used the e-Portfolio, of whom 87% reported that they understood the methodology of the portfolio. All students reported an improved understanding of their learning objectives resulting from the numerical visualization of progress, all reported that the quantitative feedback encouraged their learning, and 79% felt that their teachers were more available because they were using the e-Portfolio. Only 51.3% of students reported that the reflective aspects of learning were useful. Individual students achieved a maximum of 65% of the total targets and 87% of the skills targets. The mean total score was 345 ± 38 points. For basic skills, 92% of students achieved the maximum score for participation as an independent operator, and all achieved the maximum scores for participation as an observer and assistant. For complex skills, 62% of students achieved the maximum score for participation as an independent operator, and 98% achieved the maximum scores for participation as an observer or assistant. Conclusions: Medical students reported that use of an electronic portfolio that provided quantitative feedback on their progress was useful when the number and complexity of targets were appropriate, but not when the portfolio offered only formative evaluations based on reflection. Students felt that use of the e-Portfolio guided their learning process by indicating knowledge gaps to themselves and teachers. (A minimal, hypothetical sketch of this additive scoring appears after this entry.)
    Full-text · Article · May 2013 · BMC Medical Education
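
The additive scoring described in the abstract above can be made concrete with a short sketch. Everything below is illustrative only: the point values, and the choice to combine complexity and participation multiplicatively, are assumptions, since the paper reports only that each activity received a numerical value and that the values were summed into a total score (mean 345 ± 38 points).

```python
from dataclasses import dataclass

# Hypothetical point values; the paper does not publish its weighting.
COMPLEXITY_POINTS = {"basic": 5, "advanced": 10}
PARTICIPATION_POINTS = {"observer": 1, "assistant": 2, "independent": 3}

@dataclass
class Activity:
    complexity: str      # "basic" or "advanced"
    participation: str   # "observer", "assistant", or "independent"

def activity_score(activity: Activity) -> int:
    """Combine complexity and participation into one value (assumed multiplicative)."""
    return COMPLEXITY_POINTS[activity.complexity] * PARTICIPATION_POINTS[activity.participation]

def portfolio_total(activities: list[Activity]) -> int:
    """Sum the values of all recorded activities to obtain the total score."""
    return sum(activity_score(a) for a in activities)

if __name__ == "__main__":
    log = [Activity("basic", "independent"), Activity("advanced", "assistant")]
    print(portfolio_total(log))  # 5*3 + 10*2 = 35
```
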
  • Source
    • "As competency-based training systems evolve they will increasingly rely upon performance assessments that support defensible and reproducible decisions. (Holmboe et al. 2010) This bespeaks the need for a robust enterprise to establish the validity of such decisions and the scores that inform them (Schuwirth and van der Vleuten 2011; Boulet et al. 2011). "
    ABSTRACT: Ongoing transformations in health professions education underscore the need for valid and reliable assessment. The current standard for assessment validation requires evidence from five sources: content, response process, internal structure, relations with other variables, and consequences. However, researchers remain uncertain regarding the types of data that contribute to each evidence source. We sought to enumerate the validity evidence sources and supporting data elements for assessments using technology-enhanced simulation. We conducted a systematic literature search including MEDLINE, ERIC, and Scopus through May 2011. We included original research that evaluated the validity of simulation-based assessment scores using two or more evidence sources. Working in duplicate, we abstracted information on the prevalence of each evidence source and the underlying data elements. Among 217 eligible studies, only six (3%) referenced the five-source framework, and 51 (24%) made no reference to any validity framework. The most common evidence sources and data elements were: relations with other variables (94% of studies; reported most often as variation in simulator scores across training levels), internal structure (76%; supported by reliability data or item analysis), and content (63%; reported as expert panels or modification of existing instruments). Evidence of response process and of consequences was each present in <10% of studies. We conclude that relations with training level appear to be overrepresented in this field, while evidence of consequences and response process is infrequently reported. Validation science will be improved as educators use established frameworks to collect and interpret evidence from the full spectrum of possible sources and elements.
    Full-text · Article · May 2013 · Advances in Health Sciences Education

Questions & Answers about this publication

  • Jorge R Hernández added an answer in Competency-Based Education:
    Could anyone refer me to some interesting empirical studies on the use of competence-based curricula in a university?

    Any other empirical studies related to this issue would be interesting too.

    Jorge R Hernández

    See: http://whqlibdoc.who.int/php/WHO_PHP_68.pdf

    https://www.aamc.org/download/283598/data/powell-implementingtruecompetency-basedmeded.pdf 

    https://www.aamc.org/newsroom/reporter/april11/184286/competency-based_medical_education.html

    https://www.researchgate.net/profile/Jason_Frank/publication/45387549_The_role_of_assessment_in_competency-based_medical_education/links/09e41506449a82d2a3000000.pdf
