The testing effect on skills learning might last 6 months

Centre for Clinical Education, Copenhagen University and Capital Region of Denmark, Rigshospitalet, Denmark.
Advances in Health Sciences Education (Impact Factor: 2.12). 10/2009; 15(3):395-401. DOI: 10.1007/s10459-009-9207-x
Source: PubMed


In a recent study we found that testing as a final activity in a skills course increases the learning outcome compared to spending an equal amount of time practicing. Whether this testing effect, measured as skills performance, can be demonstrated on a long-term basis is not known. The research question was: does testing as a final activity in a cardio-pulmonary resuscitation (CPR) skills course increase learning outcome when assessed after half a year, compared to spending an equal amount of time practicing? The study was an assessor-blinded randomised controlled trial. A convenience sample of 7th semester medical students attending a mandatory CPR course was randomised to an intervention course or a control course. Participants were taught in small groups. The intervention course included 3.5 h of skills training plus 30 min of skills testing. The practice-only control course lasted 4 h. Both groups were invited to a retention assessment of CPR skills half a year later. Participants included 89/180 (50%) of those invited to participate in the study. The mean performance score was 75.9 (SD 11.0) in the intervention group (N = 48) and 70.3 (SD 17.1) in the control group, effect size 0.4. The difference between groups was not statistically significant, P = 0.06. This study suggests that testing as a final activity in a CPR skills course might have an effect on long-term learning outcome compared to spending an equal amount of time practicing the skills. Although the difference was not statistically significant, the identified effect size of 0.4 can have important clinical and educational implications.
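The reported effect size of 0.4 can be reproduced from the summary statistics in the abstract as Cohen's d with a pooled standard deviation. A minimal sketch, assuming a control-group size of 41 (inferred from 89 total minus 48 in the intervention group; not stated explicitly in the abstract):

```python
from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Intervention: mean 75.9, SD 11.0, N = 48; control: mean 70.3, SD 17.1, N = 41
d = cohens_d(75.9, 11.0, 48, 70.3, 17.1, 41)
print(round(d, 2))  # 0.4, matching the reported effect size
```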

    • "The mean time to achieve success was 13 min, which can be explained by the fact that almost half of the nurses only performed repetitive formative tests and skipped the additional practice. Kromann and colleagues (2009a; 2009b) reported that testing on its own has a learning effect. But according to several investigators, the most powerful tool for improving learning consists of delivering individualised feedback and feedforward after a test (Dine et al., 2004; Hattie, 2009; Seethala et al., 2010; Andriessen et al., 2012). "
    ABSTRACT: To investigate the effect of automated testing and retraining on the cardiopulmonary resuscitation (CPR) competency level of emergency nurses. A software program was developed allowing automated testing followed by computer exercises based on the Resusci Anne Skills Station™ (Laerdal, Norway). Using this system, the CPR competencies of 43 emergency nurses (mean age 37 years, SD 11, 53% female) were assessed. Nurses passed the test if they achieved a combined score consisting of ≥70% compressions with depth ≥50 mm, ≥70% compressions with complete release (<5 mm), a mean compression rate between 100 and 120/min, and ≥70% bag-valve-mask ventilations between 400 and 1000 ml. Nurses failing the test received automated feedback and feedforward on how to improve. They could then either practise with computer exercises or take the test again without additional practice. Nurses were expected to demonstrate competency within two months, and they were retested 10 months after baseline. At baseline 35/43 nurses failed the test. Seven of them did not attempt further testing or practice and 7 others did not continue until competency, leaving 14/43 nurses not competent by the end of the training period. After ten months 39 nurses were retested. Twenty-four nurses failed, with incomplete release as the most common reason. Automated testing with feedback was effective in detecting nurses needing CPR retraining. Automated training and retesting improved skills to a predefined pass level. Since not all nurses trained until success, achieving CPR competence remains an important individual and institutional motivational challenge. Ten months after baseline the combined score showed important decay, highlighting the need for frequent assessments.
    Nurse Education in Practice 11/2014; 15(3). DOI:10.1016/j.nepr.2014.11.012
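The combined pass criterion in the abstract above amounts to a four-part predicate. A minimal sketch with thresholds taken from the text; the function name and argument layout are illustrative, not taken from the study's software:

```python
def passes_cpr_test(frac_deep, frac_released, mean_rate, frac_good_vent):
    """True if all four criteria of the combined score are met.

    frac_deep      -- fraction of compressions with depth >= 50 mm
    frac_released  -- fraction of compressions with complete release (< 5 mm)
    mean_rate      -- mean compression rate per minute
    frac_good_vent -- fraction of ventilations between 400 and 1000 ml
    """
    return (frac_deep >= 0.70
            and frac_released >= 0.70
            and 100 <= mean_rate <= 120
            and frac_good_vent >= 0.70)

print(passes_cpr_test(0.85, 0.90, 110, 0.75))  # True
print(passes_cpr_test(0.85, 0.60, 110, 0.75))  # False: incomplete release
```

All four criteria are conjunctive, so a single failure (such as the incomplete release reported as the most common reason at retest) fails the whole test.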
    • "Testing stations could also present an added value as an integral part of training, since testing has been shown to yield a powerful effect on retention which may be essential to consolidate newly acquired skills [12]. Adding a test as a final activity in a BLS course seems to have a stronger long-term learning impact as compared to spending an equal amount of time practising the same skills [13-15]. At a theoretical level, the training of continuous retrieval processes seems to account for the “testing effect”. "
    ABSTRACT: Current methods to assess Basic Life Support (BLS) skills (chest compressions and ventilations) require the presence of an instructor. This is time-consuming and introduces instructor bias. Since BLS skills testing is a routine activity, it is potentially suitable for automation. We developed a fully automated BLS testing station without an instructor by using innovative software linked to a training manikin. The goal of our study was to investigate the feasibility of adequate testing (effectiveness) within the shortest period of time (efficiency). As part of a randomised controlled trial investigating different compression depth training strategies, 184 medical students received an individual appointment for a retention test six months after training. An interactive Flash™ (Adobe Systems Inc., USA) user interface was developed to guide the students through the testing procedure after login, while Skills Station™ software (Laerdal Medical, Norway) automatically recorded compressions and ventilations and their duration ("time on task"). In a subgroup of 29 students the room entrance and exit time was registered to assess efficiency. To obtain a qualitative insight into the effectiveness, students' perceptions of the instructional organisation and the usability of the fully automated testing station were surveyed. During testing there was incomplete data registration for two students, and one student performed compressions only. The average time on task for the remaining 181 students was three minutes (SD 0.5). In the subgroup, the average overall time spent in the testing station was 7.5 minutes (SD 1.4). Mean scores were 5.3/6 (SD 0.5, range 4.0-6.0) for instructional organisation and 5.0/6 (SD 0.61, range 3.1-6.0) for usability. Students highly appreciated the automated testing procedure. Our automated testing station was an effective and efficient method to assess BLS skills in medical students. Instructional organisation and usability were judged to be very good. This method enables future formative assessment and certification procedures to be carried out without instructor involvement. Trial registration: B67020097543.
    BMC Medical Education 07/2012; 12(1):58. DOI:10.1186/1472-6920-12-58 · 1.22 Impact Factor
    • "We are aware that, in general, voluntary participants might introduce a positive selection bias, such that these volunteers would often outperform others [29]. However, as described in the literature [30,31], all of our study groups were exposed to this bias; therefore, they remain comparable. "
    ABSTRACT: Objective: The quality of external chest compressions (ECC) is of primary importance within basic life support (BLS). Recent guidelines delineate the so-called “4-step approach” for teaching practical skills within resuscitation training guided by a certified instructor. The objective of this study was to evaluate whether a “media-supported 4-step approach” for BLS training leads to practical performance equal to that of the standard 4-step approach. Materials and methods: After baseline testing, 220 laypersons were trained either using the widely accepted method for resuscitation training (the 4-step approach) or using a newly created “media-supported 4-step approach”, both of equal duration. In the latter approach, steps 1 and 2 were delivered via a standardised self-produced podcast, which included all of the information regarding the BLS algorithm and resuscitation skills. Participants were tested on manikins in the same mock cardiac arrest single-rescuer scenario prior to the intervention, after one week, and after six months with respect to ECC performance, and participants were surveyed about the approach. Results: Participants (age 23 ± 11, 69% female) reached comparable practical ECC performance in both groups, with no statistical difference. Even after six months, there was no difference detected in the quality of the initial assessment algorithm or in delay concerning initiation of CPR. Overall, at least 99% of the intervention group (n = 99; mean 1.5 ± 0.8; 6-point Likert scale: 1 = completely agree, 6 = completely disagree) agreed that the video provided an adequate introduction to BLS skills. Conclusions: The “media-supported 4-step approach” leads to practical ECC performance comparable to standard teaching, even with respect to retention of skills. Therefore, this approach could be useful in special educational settings where, for example, instructors' resources are sparse or large-group sessions have to be prepared.
    Scandinavian Journal of Trauma Resuscitation and Emergency Medicine 05/2012; 20(1):37. DOI:10.1186/1757-7241-20-37 · 2.03 Impact Factor