Article

The testing effect on skills learning might last 6 months.

Centre for Clinical Education, Copenhagen University and Capital Region of Denmark, Rigshospitalet, Denmark.
Advances in Health Sciences Education (Impact Factor: 2.71). 10/2009; 15(3):395-401. DOI: 10.1007/s10459-009-9207-x
Source: PubMed

ABSTRACT In a recent study we found that testing as a final activity in a skills course increases the learning outcome compared to spending an equal amount of time practicing. Whether this testing effect, measured as skills performance, can be demonstrated on a long-term basis is not known. The research question was: does testing as a final activity in a cardio-pulmonary resuscitation (CPR) skills course increase learning outcome when assessed after half a year, compared to spending an equal amount of time practicing? The study was an assessor-blinded randomised controlled trial. A convenience sample of 7th-semester medical students attending a mandatory CPR course was randomised to an intervention course or a control course. Participants were taught in small groups. The intervention course included 3.5 h of skills training plus 30 min of skills testing. The practice-only control course lasted 4 h. Both groups were invited to a retention assessment of CPR skills half a year later. Participants included 89/180 (50%) of those invited to participate in the study. Mean performance score was 75.9 (SD 11.0) in the intervention group (N = 48) and 70.3 (SD 17.1) in the control group, effect size 0.4. The difference between groups was not statistically significant, P = 0.06. This study suggests that testing as a final activity in a CPR skills course might have an effect on long-term learning outcome compared to spending an equal amount of time practicing the skills. Although the difference was not statistically significant, the identified effect size of 0.4 can have important clinical and educational implications.
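As a rough arithmetic check, the reported effect size of 0.4 is consistent with Cohen's d computed from the group means and standard deviations using a pooled standard deviation. A minimal sketch (the control group size of 41 is inferred from the 89 total participants minus the 48 in the intervention group):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Figures reported in the abstract: intervention 75.9 (SD 11.0, N=48),
# control 70.3 (SD 17.1, inferred N=41).
d = cohens_d(75.9, 11.0, 48, 70.3, 17.1, 41)  # ≈ 0.4
```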

    ABSTRACT: Testing has been shown to enhance retention of learned information beyond simple studying, a phenomenon known as test-enhanced learning (TEL). Research has shown that TEL effects are greater for tests that require the production of responses [e.g., short-answer questions (SAQs)] relative to tests that require the recognition of correct answers [e.g., multiple-choice questions (MCQs)]. High-stakes licensure examinations have recently differentiated MCQs that require the application of clinical knowledge (context-rich MCQs) from MCQs that rely on the recognition of "facts" (context-free MCQs). The present study investigated the influence of different types of educational activities (including studying, SAQs, context-rich MCQs and context-free MCQs) on later performance on a mock licensure examination. Fourth-year medical students (n = 224) from four Quebec universities completed four educational activities: one reading-based activity and three quiz-based activities (SAQs, context-rich MCQs, and context-free MCQs). We assessed the influence of the type of educational activity on students' subsequent performance in a mock licensure examination, which consisted of two types of context-rich MCQs: (1) verbatim replications of previous items and (2) items that tested the same learning objective but were new. Mean accuracy scores on the mock licensure exam were higher when intervening educational activities contained either context-rich MCQs (mean z-score = 0.40) or SAQs (M = 0.39) compared to context-free MCQs (M = -0.38) or study-only items (M = -0.42; all p < 0.001). Higher mean scores were only present for verbatim items (p < 0.001). The benefit of testing was observed when intervening educational activities required either the generation of a response (SAQs) or the application of knowledge (context-rich MCQs); however, this effect was only observed for verbatim test items.
These data provide evidence that context-rich MCQs and SAQs enhance learning through testing compared to context-free MCQs or studying alone. The extent to which these findings generalize beyond verbatim questions remains to be seen.
    Advances in Health Sciences Education 06/2014; · 2.71 Impact Factor
    ABSTRACT: Healthcare providers demonstrate limited retention of knowledge and skills in the months following completion of a resuscitation course. Resuscitation courses are typically taught in a massed format (over 1-2 days); however, studies in educational psychology have suggested that spacing training may result in improved learning and retention. Our study explored the impact of spaced instruction compared to traditional massed instruction on learner knowledge and pediatric resuscitation skills. Medical students completed a pediatric resuscitation course in either a spaced or massed format. Four weeks following course completion, students completed a knowledge exam, and blinded observers used expert-developed checklists to assess student performance of three skills: bag-valve-mask ventilation (BVMV), intra-osseous insertion (IOI), and chest compressions (CC). Forty-five of 48 students completed the study protocol. Students in both groups had similar scores on the knowledge exam, spaced (37.8±6.1) vs. massed (34.3±7.6) (p<0.09), and similar overall global rating scale scores for IOI, BVMV and CC; however, students in the spaced group performed critical procedural elements more frequently than those in the massed training group. CONCLUSION: Learner knowledge and performance of procedural skills in pediatric resuscitation taught in a spaced format is at least as good as learning in a massed format. Procedures learned in a spaced format may result in better retention of skills when compared to massed training. Copyright © 2014. Published by Elsevier Ireland Ltd.
    Resuscitation 12/2014; 88. · 3.96 Impact Factor
    ABSTRACT: To investigate the effect of automated testing and retraining on the cardiopulmonary resuscitation (CPR) competency level of emergency nurses. A software program was developed allowing automated testing followed by computer exercises based on the Resusci Anne Skills Station™ (Laerdal, Norway). Using this system, the CPR competencies of 43 emergency nurses (mean age 37 years, SD 11, 53% female) were assessed. Nurses passed the test if they achieved a combined score consisting of ≥70% compressions with depth ≥50 mm, ≥70% compressions with complete release (<5 mm), a mean compression rate between 100 and 120/min, and ≥70% bag-valve-mask ventilations between 400 and 1000 ml. Nurses failing the test received automated feedback and feedforward on how to improve. They could then either practise with computer exercises or take the test again without additional practise. Nurses were expected to demonstrate competency within two months, and they were retested 10 months after baseline. At baseline, 35/43 nurses failed the test. Seven of them did not attempt further testing/practise and 7 others did not continue until competency, leaving 14/43 nurses not competent by the end of the training period. After ten months, 39 nurses were retested. Twenty-four nurses failed, with incomplete release as the most common reason. Automated testing with feedback was effective in detecting nurses needing CPR retraining. Automated training and retesting improved skills to a predefined pass level. Since not all nurses trained until competent, achieving CPR competence remains an important individual and institutional motivational challenge. Ten months after baseline the combined score showed important decay, highlighting the need for frequent assessments. Copyright © 2014 Elsevier Ltd. All rights reserved.
    Nurse Education in Practice 11/2014;
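The combined pass criterion described in the abstract above can be written as a single predicate over four measurements. A minimal sketch, with illustrative parameter names (only the thresholds come from the abstract; the function and argument names are assumptions for this example):

```python
def passes_cpr_test(pct_depth_ok: float,
                    pct_release_ok: float,
                    mean_rate: float,
                    pct_ventilation_ok: float) -> bool:
    """Combined pass criterion from the automated CPR test:
    - ≥70% of compressions with depth ≥50 mm (pct_depth_ok)
    - ≥70% of compressions with complete release, <5 mm (pct_release_ok)
    - mean compression rate between 100 and 120/min (mean_rate)
    - ≥70% of ventilations between 400 and 1000 ml (pct_ventilation_ok)
    """
    return (pct_depth_ok >= 70
            and pct_release_ok >= 70
            and 100 <= mean_rate <= 120
            and pct_ventilation_ok >= 70)

# Example: a nurse meeting all four thresholds passes; failing any one fails.
print(passes_cpr_test(80, 75, 110, 72))  # True
print(passes_cpr_test(80, 60, 110, 72))  # False (incomplete release below 70%)
```

All four conditions must hold simultaneously, which matches the finding that incomplete release alone was the most common reason for failure at retest.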