Article

Assessing professional competence: From methods to programs

Department of Educational Development and Research, University of Maastricht, Maastricht, The Netherlands.
Medical Education (Impact Factor: 3.2). 04/2005; 39(3):309-17. DOI: 10.1111/j.1365-2929.2005.02094.x
Source: PubMed

ABSTRACT

INTRODUCTION: We use a utility model to illustrate, firstly, that selecting an assessment method involves context-dependent compromises and, secondly, that assessment is not a measurement problem but an instructional design problem, comprising educational, implementation and resource aspects. In the model, assessment characteristics are weighted differently depending on the purpose and context of the assessment.

EMPIRICAL AND THEORETICAL DEVELOPMENTS: Of the characteristics in the model, we focus on reliability, validity and educational impact, and argue that they are not inherent qualities of any instrument. Reliability depends not on structuring or standardisation but on sampling. Key issues concerning validity are authenticity and integration of competencies. Assessment in medical education addresses complex competencies and thus requires quantitative and qualitative information from different sources as well as professional judgement. Adequate sampling across judges, instruments and contexts can ensure both validity and reliability. Despite the recognition that assessment drives learning, this relationship has been little researched, possibly because of its strong context dependence.

ASSESSMENT AS INSTRUCTIONAL DESIGN: When assessment should stimulate learning and requires adequate sampling, in authentic contexts, of the performance of complex competencies that cannot be broken down into simple parts, we need to shift from individual methods to an integral programme, intertwined with the education programme. Therefore, we need an instructional design perspective.

IMPLICATIONS FOR DEVELOPMENT AND RESEARCH: Programmatic instructional design hinges on a careful description and motivation of choices, whose effectiveness should be measured against the intended outcomes. We should not evaluate individual methods, but provide evidence of the utility of the assessment programme as a whole.
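One way to make the sampling argument concrete is the classical Spearman-Brown prophecy formula, which predicts how score reliability changes as more cases, judges or stations are sampled. The sketch below is an illustrative calculation, not material from the paper; the single-case reliability value is an assumed placeholder.

```python
# Illustrative sketch (not from the paper): reliability as a function of sampling.
# Spearman-Brown prophecy formula: rho_k = k * rho_1 / (1 + (k - 1) * rho_1),
# where rho_1 is the reliability of a single sampled case/judge and k is the
# number of cases sampled. rho_single_case below is an assumed placeholder.

def spearman_brown(rho_1: float, k: int) -> float:
    """Predicted reliability when the amount of sampling is multiplied by k."""
    return k * rho_1 / (1 + (k - 1) * rho_1)

rho_single_case = 0.25  # assumed reliability of one case/judge/station
for k in (1, 2, 4, 8, 16):
    print(f"{k:2d} cases sampled -> predicted reliability {spearman_brown(rho_single_case, k):.2f}")
```

With an assumed single-case reliability of 0.25, sampling 8 cases already pushes the predicted reliability above 0.70, however loosely each individual case is structured, which is the sense in which reliability depends on sampling across judges, instruments and contexts rather than on standardisation.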


Available from: Cees Van der Vleuten, Dec 09, 2015
    • "It must include the problems and difficulties found in this process. For this reason, a portfolio designed as a 'programmatic assessment' of an integrated clinical placement, as proposed by Van der Vleuten & Schuwirth (2005), has sufficient evidence of validity to support a specific interpretation of student scores around passing a clinical placement, although with modest precision in some competencies, which could be improved by focusing more on feedback and supervision (Roberts et al., 2014). Additionally, each month we hold a 'round' in which students present and discuss their cases together and share the problems they encountered, including ethical issues."
    ABSTRACT: Institutional assessment must be distinguished from the assessment of learning. Traditionally, assessment is reduced to institutional assessment: that is, to giving a mark based on the achievement of knowledge instead of focusing on the student's learning. However, I propose (as a reminder) that: 1) (formative) assessment is part of learning; and 2) reflective learning (and reflective skills) are part of assessment. This implies a process of continuous evaluation rather than summative evaluation through, for example, an exam or a similar procedure. I therefore agree with the idea that assessment "is not a measurement problem but an instructional design problem." To clarify what assessment is, we have to discuss several interlinked aspects (validity, reliability and fairness), which are connected to questions that must be answered: When is the assessment considered valid? How do we assess? What do we assess? Answers to these questions may include the need to provide space(s) and time(s) to reflect on learning (as a way of learning and as a skill to be acquired), which in turn implies a multiplicity of assessments and/or of reflection about learning. This should also include a variety of assessments: self-assessment, peer assessment, team assessment and (external) assessment. Last, but not least, reflection should be considered not only a skill but a part of learning. Reflection about learning is an exercise that promotes lifelong learning (including among future lawyers). Reflection about context and experience is the first step towards future professional action. The benefits of experiencing autonomy and reflection are the same in real or realistic environments, but the experience of responsibility requires a real environment.
    Article · Jan 2016
    • "The recognition that no single method in isolation is comprehensive or robust enough to measure the complex integration of knowledge and skills that constitute clinical competence has prompted a transition to multi-method competence assessment programmes within medical education (Van der Vleuten & Schuwirth, 2005). Participants in the current study advocated a similar approach to assessing CBT competence, involving multiple direct observations of therapist skill, knowledge-based assessments and examination of patient outcome."
    ABSTRACT: To offer insight into how cognitive-behavioural therapy (CBT) competence is defined, measured and evaluated and to highlight ways in which the assessment of CBT competence could be further improved, the current study utilizes a qualitative methodology to examine CBT experts' (N = 19) experiences of conceptualizing and assessing the competence of CBT therapists. Semi-structured interviews were used to explore participants' experiences of assessing the competence of CBT therapists. Interview transcripts were then analysed using interpretative phenomenological analysis in order to identify commonalities and differences in the way CBT competence is evaluated. Four superordinate themes were identified: (i) what to assess, the complex and fuzzy concept of CBT competence; (ii) how to assess CBT competence, selecting from the toolbox of assessment methods; (iii) who is best placed to assess CBT competence, expertise and independence; and (iv) pitfalls, identifying and overcoming assessment biases. Priorities for future research and ways in which the assessment of CBT competence could be further improved are discussed in light of these findings. Copyright © 2015 John Wiley & Sons, Ltd.
    Article · Apr 2015 · Clinical Psychology & Psychotherapy
    • "The means, standard deviations and reliability estimates are similar within each administration. The reliability estimates under all models are moderately high, ranging from 0.74 to 0.78, consistent with reliability for OSCE examinations such as the MCCQE Part II of two to four hours in length (Van der Vleuten & Schuwirth 2005). More importantly, the three simpler scoring models yielded scores that are as reliable "
    ABSTRACT: Background: Past research suggests that the use of externally applied scoring weights may not appreciably impact measurement qualities such as reliability or validity. Nonetheless, some credentialing boards and academic institutions apply differential scoring weights based on expert opinion about the relative importance of individual items or test components of Objective Structured Clinical Examinations (OSCEs). Aims: To investigate the impact of simplified scoring models that make little to no use of differential weighting on the reliability of scores and decisions on a high-stakes OSCE required for medical licensure in Canada. Method: We applied four weighting models of varying complexity to data from three administrations of the OSCE. We compared score reliability, pass/fail rates, correlations between the scores, and classification decision accuracy and consistency across the models and administrations. Results: Less complex weighting models yielded reliability and pass rates similar to those of the more complex weighting model. Minimal changes in candidates' pass/fail status were observed, and there were strong and statistically significant correlations between the scores for all scoring models and administrations. Classification decision accuracy and consistency were very high and similar across the four scoring models. Conclusions: Adopting a simplified weighting scheme for this OSCE did not diminish its measurement qualities. Instead of developing complex weighting schemes, experts' time and effort could be better spent on other critical test development and assembly tasks, with little to no compromise in the quality of scores and decisions on this high-stakes OSCE.
    Article · May 2014 · Medical Teacher
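The weighting-model comparison summarised in the Medical Teacher abstract above can be illustrated with a small simulation: score a set of simulated OSCE stations once with unit weights and once with expert-style differential weights, then compare the reliability (Cronbach's alpha) of the two composites and the correlation between them. Everything below is hypothetical and simulated; it is not the study's data or scoring models, only a sketch of why differential weighting often changes scores very little.

```python
# Hypothetical simulation (not the study's data or method): compare unit weights
# with differential station weights on reliability and on the resulting scores.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_stations = 500, 12

# Simulated station scores: a common ability factor plus station-specific noise.
ability = rng.normal(0.0, 1.0, (n_candidates, 1))
scores = 0.6 * ability + 0.8 * rng.normal(0.0, 1.0, (n_candidates, n_stations))

def cronbach_alpha(x: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x stations) score matrix."""
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

unit_weights = np.ones(n_stations)
expert_weights = rng.uniform(0.5, 2.0, n_stations)  # assumed "expert" weights

unit_total = scores @ unit_weights
weighted_total = scores @ expert_weights

print("alpha, unweighted stations:", round(cronbach_alpha(scores), 3))
print("alpha, weighted stations:  ", round(cronbach_alpha(scores * expert_weights), 3))
print("correlation between totals:", round(np.corrcoef(unit_total, weighted_total)[0, 1], 3))
```

In simulations of this kind the two composites typically correlate very highly and the alpha values barely differ, which is consistent with the abstract's conclusion that a simplified weighting scheme need not diminish measurement quality.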