Assessment in medical education - Replies

Department of Family Medicine, and the Rochester Center to Improve Communication in Health Care, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA.
New England Journal of Medicine (Impact Factor: 55.87). 02/2007; 356(4):387-96.
Source: PubMed
    • "In contrast, formative assessments provide corrective feedback in order to promote self-reflection and guide future learning and so require methods that provide in-depth feedback regarding specific aspects of competence. It has, therefore, been suggested that different assessment methods may be required for formative and summative purposes (Epstein, 2007), and results from this study concur. For example, self-assessment was not viewed as robust or accurate enough for summative purposes. "
    ABSTRACT: To offer insight into how cognitive-behavioural therapy (CBT) competence is defined, measured and evaluated, and to highlight ways in which the assessment of CBT competence could be further improved, the current study uses a qualitative methodology to examine CBT experts' (N = 19) experiences of conceptualizing and assessing the competence of CBT therapists. Semi-structured interviews were used to explore participants' experiences of assessing the competence of CBT therapists, and interview transcripts were then analysed using interpretative phenomenological analysis in order to identify commonalities and differences in the way CBT competence is evaluated. Four superordinate themes were identified: (i) what to assess, the complex and fuzzy concept of CBT competence; (ii) how to assess CBT competence, selecting from the toolbox of assessment methods; (iii) who is best placed to assess CBT competence, expertise and independence; and (iv) pitfalls, identifying and overcoming assessment biases. Priorities for future research and ways in which the assessment of CBT competence could be further improved are discussed in light of these findings. Copyright © 2015 John Wiley & Sons, Ltd.
    Clinical Psychology & Psychotherapy 04/2015; DOI:10.1002/cpp.1952 · 2.59 Impact Factor
    • "Assessments are powerful learning motivators for medical students (Newble and Jaeger, 1983; Krupat and Dienstag, 2009; Wood, 2009). The goal of any assessment type should be considered to determine what content will be included, the format of the assessment, the frequency of assessment, and the type of feedback provided (Epstein, 2007). Testenhanced learning improves content retention (Larsen et al., 2009; Butler, 2010; McDaniel et al., 2011) and has a positive effect on examination scores (Olde Bekkink et al., 2012). "
    ABSTRACT: The University of Oklahoma College of Medicine reduced gross anatomy from a full-semester, 130-hour course to a six-and-one-half-week, 105-hour course as part of a new integrated, systems-based pre-clinical curriculum. In addition to the reduction in contact hours, content from embryology, histology, and radiology was added to the course. The new curriculum incorporated best practices in the areas of regular assessment, feedback, clinical application, multiple teaching modalities, and professionalism. A comparison of the components of the traditional and integrated curricula, along with end-of-course evaluations and student performance, revealed that the new curriculum was just as effective, if not more so. This article also provides important lessons learned. Anat Sci Educ. © 2014 American Association of Anatomists.
    Anatomical Sciences Education 03/2015; 8(2). DOI:10.1002/ase.1476 · 2.98 Impact Factor
    • "Using multiple-choice questions (MCQs) in medical and nursing education is one of the most popular forms of checking knowledge and skills of the examinees. Despite certain critical opinions concerning the usefulness of the MCQs tests in the evaluation of clinical abilities [1] and the candidates' predispositions to become students [2], this method still presents many advantages as opposed to other tools used in didactics, such as short answer or essay-style questions. Well constructed MCQs allow not only to assess the ability to simply recall the memorised facts but also to measure the abilities of practical application of this knowledge so as to solve certain clinical problems [3]. "
    ABSTRACT: Introduction: Most multiple-choice tests comprise questions with four options per item. However, a number of academic teachers believe that a larger number of options per question would increase the variability of test results; greater discrimination capability is particularly important for selective examinations. In 2011 and 2013, the nursing entrance test at the Medical University of Warsaw (MUW) used 5-option items, an exception from the 4-option items used in 2009-2010 and 2012.
    Aim of study: To assess the impact of changing the number of options per multiple-choice question on the quality of the nursing entrance exams for the MA programme at MUW between 2009 and 2013.
    Materials and Methods: A total of 250 multiple-choice exam questions were analysed, including 150 4-option items (2009-2010 and 2012) and 100 5-option items (2011 and 2013). To compare the quality of the exams, the easiness, substitute differentiation power, and Pearson's linear correlation coefficient were established for each pool of questions. The comparison covered the variability of the results (coefficient of variation, range of scores, and interquartile range) as well as the average easiness and discriminating capacity of individual questions in consecutive versions of the exam. One-way analysis of variance (ANOVA) and the post-hoc Tukey honestly significant difference (HSD) test were used.
    Results: In 2011 and 2013, when the 5-option items were introduced, the difficulty of the exam expressed as the mean score was 24.3 and 25.7 points, respectively. These values are comparable to the result achieved in 2010 (25.6) but clearly different from those obtained in 2009 (30.2) and 2012 (31.5). Similar differences were observed for the coefficients of variation, which were similar in 2010, 2011, and 2013 (17.1%, 17.9% and 18.4%, respectively) and significantly different from those obtained in 2009 and 2012 (14.3% and 14.1%, respectively). Moreover, greater symmetry (skewness ≈ 0) in the frequency distribution of test scores was observed for the 5-option tests than for the 4-option tests. The reliability of the exam was variable, with Cronbach's α ranging between 0.429 and 0.559. No statistically significant differences were found in the discrimination capability of exams using 4- or 5-option items (ANOVA, P > 0.05). The 2011 exam (5 options) was, however, significantly more difficult than that of 2012 (4 options) (ANOVA, P = 0.0025; post-hoc Tukey HSD test, P < 0.01).
    Conclusions: Adding a fifth option to the test questions did not significantly improve the qualitative parameters of the nursing entrance exams at MUW; no significant increase in the exam's selective capacity or in the reliability of assessment was observed. It is recommended to keep 4-option items and to develop a good test plan for future editions of the exam.
    7th International Conference of Education, Research and Innovation, Seville, Spain; 11/2014
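    The item statistics named in the abstract above (easiness, discrimination, Cronbach's α, coefficient of variation) are standard classical test theory quantities. The sketch below is a minimal illustration of how they can be computed from a scored response matrix; the simulated data, the item_analysis function name, and the item-rest point-biserial form of discrimination are assumptions for demonstration, not material taken from the study.

    # Illustrative classical test theory item analysis (assumed example, not the study's code).
    # Assumes a 0/1-scored response matrix: rows = examinees, columns = MCQ items.
    import numpy as np

    def item_analysis(responses: np.ndarray):
        """Return per-item easiness and discrimination plus test-level statistics."""
        n_examinees, n_items = responses.shape
        totals = responses.sum(axis=1)

        # Easiness: proportion of examinees answering each item correctly.
        easiness = responses.mean(axis=0)

        # Discrimination: point-biserial correlation of each item with the
        # rest-of-test score (the item itself is excluded to avoid inflating the estimate).
        discrimination = np.array([
            np.corrcoef(responses[:, i], totals - responses[:, i])[0, 1]
            for i in range(n_items)
        ])

        # Cronbach's alpha: internal-consistency reliability of the whole test.
        item_variances = responses.var(axis=0, ddof=1).sum()
        total_variance = totals.var(ddof=1)
        alpha = n_items / (n_items - 1) * (1 - item_variances / total_variance)

        # Coefficient of variation of total scores, in percent (as compared across exam years).
        cv = totals.std(ddof=1) / totals.mean() * 100

        return easiness, discrimination, alpha, cv

    # Simulated data only: 200 examinees, 40 items of varying difficulty.
    rng = np.random.default_rng(0)
    scores = (rng.random((200, 40)) < rng.uniform(0.3, 0.9, size=40)).astype(int)
    easiness, discrimination, alpha, cv = item_analysis(scores)
    print(f"mean easiness {easiness.mean():.2f}, alpha {alpha:.3f}, CV {cv:.1f}%")

    In practice, items with moderate easiness and a positive item-rest discrimination are usually retained; the exact cut-offs are a local test-plan decision and are not specified in the abstract.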