The testing effect on skills learning might last 6 months

Centre for Clinical Education, Copenhagen University and Capital Region of Denmark, Rigshospitalet, Denmark.
Advances in Health Sciences Education (Impact Factor: 2.71). 10/2009; 15(3):395-401. DOI: 10.1007/s10459-009-9207-x
Source: PubMed

ABSTRACT In a recent study we found that testing as a final activity in a skills course increases the learning outcome compared to spending an equal amount of time practicing. Whether this testing effect, measured as skills performance, can be demonstrated on a long-term basis is not known. The research question was: does testing as a final activity in a cardio-pulmonary resuscitation (CPR) skills course increase learning outcome when assessed after half a year, compared to spending an equal amount of time practicing? The study was an assessor-blinded randomised controlled trial. A convenience sample of 7th-semester medical students attending a mandatory CPR course was randomised to either an intervention course or a control course. Participants were taught in small groups. The intervention course included 3.5 h of skills training plus 30 min of skills testing; the practice-only control course lasted 4 h. Both groups were invited to a retention assessment of CPR skills half a year later. Participants included 89/180 (50%) of those invited to take part in the study. Mean performance score was 75.9 (SD 11.0) in the intervention group (N = 48) and 70.3 (SD 17.1) in the control group (N = 41), effect size 0.4. The difference between groups was not statistically significant, P = 0.06. This study suggests that testing as a final activity in a CPR skills course might have an effect on long-term learning outcome compared to spending an equal amount of time practicing the skills. Although the difference was not statistically significant, the identified effect size of 0.4 can have important clinical and educational implications.
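The reported effect size can be reproduced from the summary statistics above. A minimal sketch (assuming the control group comprised the remaining 41 of the 89 participants) of Cohen's d computed with a pooled standard deviation:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d between two independent groups, using the pooled SD."""
    pooled_var = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Intervention: mean 75.9, SD 11.0, N = 48; control: mean 70.3, SD 17.1, N = 41
d = cohens_d(75.9, 11.0, 48, 70.3, 17.1, 41)
print(round(d, 2))  # 0.4, matching the reported effect size
```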

  • ABSTRACT: To investigate the effect of automated testing and retraining on the cardiopulmonary resuscitation (CPR) competency level of emergency nurses. A software program was developed allowing automated testing followed by computer exercises based on the Resusci Anne Skills Station™ (Laerdal, Norway). Using this system, the CPR competencies of 43 emergency nurses (mean age 37 years, SD 11, 53% female) were assessed. Nurses passed the test if they achieved a combined score consisting of ≥70% compressions with depth ≥50 mm, ≥70% compressions with complete release (<5 mm), a mean compression rate between 100 and 120/min, and ≥70% bag-valve-mask ventilations between 400 and 1000 ml. Nurses failing the test received automated feedback and feedforward on how to improve. They could then either practise with computer exercises or take the test again without additional practice. Nurses were expected to demonstrate competency within two months and were retested 10 months after baseline. At baseline, 35/43 nurses failed the test. Seven of them did not attempt further testing or practice, and 7 others did not continue until competency, leaving 14/43 nurses not competent by the end of the training period. After ten months, 39 nurses were retested; twenty-four failed, the most common reason being incomplete release. Automated testing with feedback was effective in detecting nurses needing CPR retraining. Automated training and retesting improved skills to a predefined pass level. Since not all nurses trained until success, achieving CPR competence remains an important individual and institutional motivational challenge. Ten months after baseline the combined score showed important decay, highlighting the need for frequent assessments.
    Nurse Education in Practice 11/2014; DOI:10.1016/j.nepr.2014.11.012
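The combined pass score in the study above is a conjunction of four thresholds. A minimal sketch of that predicate (the function name and inputs are illustrative, not part of the Laerdal software):

```python
def passes_cpr_test(pct_depth_ok, pct_release_ok, mean_rate, pct_vent_ok):
    """Pass criteria from the study: >=70% compressions with depth >=50 mm,
    >=70% compressions with complete release (<5 mm residual),
    a mean compression rate between 100 and 120/min,
    and >=70% bag-valve-mask ventilations between 400 and 1000 ml.
    Each argument is an already-computed percentage (or the mean rate)."""
    return (pct_depth_ok >= 70
            and pct_release_ok >= 70
            and 100 <= mean_rate <= 120
            and pct_vent_ok >= 70)

print(passes_cpr_test(85, 90, 110, 75))  # True
print(passes_cpr_test(85, 60, 110, 75))  # False: incomplete release, the most common failure
```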
  • ABSTRACT: Healthcare providers demonstrate limited retention of knowledge and skills in the months following completion of a resuscitation course. Resuscitation courses are typically taught in a massed format (over 1-2 days); however, studies in educational psychology have suggested that spacing training may result in improved learning and retention. Our study explored the impact of spaced instruction compared to traditional massed instruction on learner knowledge and pediatric resuscitation skills. Medical students completed a pediatric resuscitation course in either a spaced or massed format. Four weeks after course completion, students completed a knowledge exam, and blinded observers used expert-developed checklists to assess student performance of three skills: bag-valve-mask ventilation (BVMV), intra-osseous insertion (IOI), and chest compressions (CC). Forty-five of 48 students completed the study protocol. Students in both groups had similar scores on the knowledge exam, spaced (37.8±6.1) vs. massed (34.3±7.6), p<0.09, and similar overall global rating scale scores for IOI, BVMV and CC; however, students in the spaced group performed critical procedural elements more frequently than those in the massed training group. CONCLUSION: Learner knowledge and performance of procedural skills in pediatric resuscitation taught in a spaced format is at least as good as with a massed format. Procedures learned in a spaced format may result in better retention of skills when compared to massed training.
    Resuscitation 12/2014; 88. DOI:10.1016/j.resuscitation.2014.12.003 · 3.96 Impact Factor
  • ABSTRACT: We measured the long-term retention of knowledge gained through selected American Academy of Neurology annual meeting courses and compared the effects of repeated quizzing (known as test-enhanced learning) and repeated studying on that retention. Participants were recruited from 4 annual meeting courses. All participants took a pretest. This randomized, controlled trial utilized a within-subjects design in which each participant experienced 3 different postcourse activities, each performed on different material. Each key information point from the course was randomized in a counterbalanced fashion among participants to one of the 3 activities: repeated short-answer quizzing, repeated studying, or no further exposure to the materials. A final test covering all information points from the course was taken 5.5 months after the course. Thirty-five participants across the 4 courses completed the study. Average score on the pretest was 36%. Performance on the final test showed that repeated quizzing led to significantly greater long-term retention relative to both repeated studying (55% vs 46%; t[34] = 3.28, SEM = 0.03, p = 0.01, d = 0.49) and no further exposure (55% vs 44%; t[34] = 3.16, SEM = 0.03, p = 0.01, d = 0.58). Relative to the pretest baseline, repeated quizzing helped participants retain almost twice as much of the knowledge acquired from the course as repeated studying or no further exposure. Annual meeting continuing medical education (CME) courses lead to long-term gains in knowledge, and when repeated quizzing is added, retention increases significantly. CME planners may consider adding repeated quizzing to increase the impact of their courses.
    Neurology 01/2015; 84(7). DOI:10.1212/WNL.0000000000001264 · 8.30 Impact Factor
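The counterbalanced within-subjects design in the study above can be illustrated with a simple rotation scheme. The function below is a hypothetical sketch of the idea only; the study's actual randomization procedure is not specified in this abstract:

```python
ACTIVITIES = ["repeated quizzing", "repeated studying", "no further exposure"]

def counterbalanced_assignment(n_points, participant_index):
    """Assign each information point to one of the three activities,
    rotating the mapping per participant (Latin-square style) so that,
    across participants, every point is studied under every activity."""
    offset = participant_index % 3
    return [ACTIVITIES[(i + offset) % 3] for i in range(n_points)]

# The same information point lands in a different activity for each participant:
for p in range(3):
    print(p, counterbalanced_assignment(6, p)[0])
```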