
Assessment in medical education - Replies

Department of Family Medicine, and the Rochester Center to Improve Communication in Health Care, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA.
New England Journal of Medicine (Impact Factor: 54.42). 02/2007; 356(4):387-96.
Source: PubMed

Full-text available from: Ronald M Epstein, Jun 28, 2015
  • Source
    ABSTRACT: The University of Oklahoma College of Medicine reduced gross anatomy from a full-semester, 130-hour course to a six-and-one-half-week, 105-hour course as part of a new integrated, systems-based pre-clinical curriculum. In addition to the reduction in contact hours, content from embryology, histology, and radiology was added to the course. The new curriculum incorporated best practices in regular assessment, feedback, clinical application, multiple teaching modalities, and professionalism. A comparison of the components of the traditional and integrated curricula, along with end-of-course evaluations and student performance, revealed that the new curriculum was at least as effective as the old one. This article also provides important lessons learned. Anat Sci Educ. © 2014 American Association of Anatomists.
    Anatomical Sciences Education 03/2015; 8(2). DOI:10.1002/ase.1476 · 2.98 Impact Factor
  • Source
    ABSTRACT: Introduction: Most multiple-choice tests comprise questions with four options per item. However, some academic teachers believe that a larger number of options per question would increase the spread of test results. Greater discrimination capability is particularly important in selective examinations. In 2011 and 2013, the nursing entrance test at the Medical University of Warsaw (MUW) used 5-option items, in contrast to the 4-option items used in 2009-2010 and 2012. Aim of study: To assess the impact of changing the number of options per multiple-choice question on the quality of the nursing entrance exams for the MA programme at MUW between 2009 and 2013. Materials and Methods: A total of 250 multiple-choice exam questions were analysed, comprising 150 4-option items (2009-2010 and 2012) and 100 5-option items (2011 and 2013). To compare the quality of the exams, the easiness level, discrimination power, and Pearson's linear correlation coefficient were computed for each pool of questions. The comparison covered the spread of the results (coefficient of variation, range of scores, and interquartile range) as well as the average easiness and discriminating power of individual questions in consecutive versions of the exam. One-way analysis of variance (ANOVA) and the post-hoc Tukey honestly significant difference (HSD) test were used. Results: In 2011 and 2013, when the 5-option items were introduced, the difficulty of the exam, expressed as the mean score, amounted to 24.3 and 25.7 points, respectively. These values are comparable to the result achieved in 2010 (25.6) but clearly different from those obtained in 2009 (30.2) and 2012 (31.5). Similar differences were observed for the coefficients of variation, which were similar in 2010, 2011, and 2013 (17.1%, 17.9%, and 18.4%, respectively) and significantly different from those obtained in 2009 and 2012 (14.3% and 14.1%, respectively). Moreover, the frequency distributions of test scores were more symmetric (skewness ≈ 0) for the 5-option items than for the 4-option items. The reliability of the exam varied: Cronbach's α ranged between 0.429 and 0.559. No statistically significant differences were found in the discrimination capability of exams based on 4- versus 5-option items (ANOVA, P > 0.05). The 2011 exam (5 options) was, however, significantly more difficult than that of 2012 (4 options) (ANOVA, P = 0.0025; post-hoc Tukey HSD test, P < 0.01). Conclusions: Adding a fifth option to the test questions did not significantly improve the qualitative parameters of the nursing entrance exams at MUW; no significant increase in the discriminating capacity of the exam or the reliability of assessment was observed. It is recommended to retain 4-option items and to develop a sound test plan for future editions of the exam. http://library.iated.org/view/PANCZYK2014COM
    7th International Conference of Education, Research and Innovation, Seville, Spain; 11/2014
  • Source
    ABSTRACT: Competency assessment is a common paradigm in the healthcare environment, and this is particularly true within the nursing profession. Demonstration of competence is necessary to meet the requirements of healthcare organisations and is mandated for nurses by the Nursing and Midwifery Board of Australia. Within the nursing education sector, one approach to determining competence is the use of competency assessment tools. Despite their widespread use, ongoing concerns remain about the efficacy of competency assessment tools as a means to demonstrate ‘competency’ amongst enrolled and registered nurses in the clinical environment. The authors of this paper contend that competency assessment tools run a serious risk of being nothing more than a ‘quick-fix’ means of assessment used to demonstrate ‘nursing competence’ for key performance indicators and clinical governance, and to provide evidence for accreditation standards. On this premise, the authors propose an alternative to competency assessment tools that moves away from a ‘tick-box’ approach towards a ‘patient-centred’ competency model. This approach increases the reliability and validity of competency assessments, allows recognition of the knowledge, skills, and experience of individual nurses, offers a more satisfying and rewarding way for nurses to demonstrate ‘competency’, and demonstrates ‘real-life’ competency.
    Collegian Journal of the Royal College of Nursing Australia 11/2013; 22(1). DOI:10.1016/j.colegn.2013.10.005 · 0.84 Impact Factor
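The Warsaw entrance-exam abstract above reports per-item easiness and Cronbach's α for each pool of questions. A minimal sketch of how such item statistics are conventionally computed, using only the standard formulas (easiness as the proportion of correct answers; α = k/(k−1)·(1 − Σ item variances / variance of total scores)). The function names and the toy response matrix are hypothetical, not the authors' data or code:

```python
# Item-analysis sketch (illustrative only; names and data are hypothetical).
from statistics import mean, pvariance

def easiness(scores_per_item):
    """Proportion of examinees answering each item correctly (0..1)."""
    return [mean(item) for item in scores_per_item]

def cronbach_alpha(scores_per_item):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(scores_per_item)
    item_vars = sum(pvariance(item) for item in scores_per_item)
    totals = [sum(examinee) for examinee in zip(*scores_per_item)]
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Toy data: rows are items, columns are examinees (1 = correct, 0 = wrong).
items = [
    [1, 1, 0, 1, 0, 1],
    [1, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 0, 0],
    [0, 1, 0, 1, 0, 1],
]

print([round(e, 3) for e in easiness(items)])  # → [0.667, 0.5, 0.667, 0.5]
print(round(cronbach_alpha(items), 3))         # → 0.667
```

For dichotomously scored items, as here, this α is equivalent to the KR-20 coefficient; an abstract value such as 0.429-0.559 would indicate modest internal consistency for a selective exam.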