Assessment of BLS skills: Optimizing use of instructor and manikin measures

University of Dundee School of Medicine, Dundee, Tayside Centre for General Practice, Mackenzie Building, Kirsty Semple Way, Dundee DD2 4BF, United Kingdom.
Resuscitation (Impact Factor: 3.96). 03/2008; 76(2):233-43. DOI: 10.1016/j.resuscitation.2007.07.018
Source: PubMed

ABSTRACT: The primary objective of layperson CPR training is to ensure that learners achieve minimal competence to provide aid that improves the odds of survival of victims of out-of-hospital sudden cardiac arrest. During CPR courses, pronouncement of a learner's competence typically depends entirely on judgments made by an instructor; yet previous research strongly suggests that these judgments - particularly of chest compressions - are not sufficiently precise or accurate to ensure valid assessments. Comparisons of instructors' subjective assessments with objective data from recording manikins provide one means of understanding the magnitude and type of instructor errors in assessment.
Eight hundred and twenty-six laypersons between 40 and 70 years old participated in CPR training. Performance of five discrete skills was tested in a scenario format immediately afterward: assessing responsiveness, calling the emergency telephone number 911, delivering ventilations of adequate volume, demonstrating correct hand placement for compressions, and delivering compressions with adequate depth. Thirteen AHA-certified instructors assessed these five skills and rendered a global performance rating; sensor-equipped Resusci Anne manikins with SkillReporting software recorded ventilation and compression data.
Instructors' ratings of the ventilation skills were highly accurate, and their ratings of compressions were correct about 83% of the time overall; however, inadequate compression depth was rated adequate 55% of the time, and incorrect hand placement was rated adequate 49% of the time.
Instructors' judgments alone are not sufficient to determine learners' competence in performing compressions. Assessment, technology, and guidelines must be better aligned so that learners can receive accurate feedback.
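The study's core comparison - instructor pass/fail judgments checked against objective manikin recordings - amounts to measuring how often objectively inadequate performances are nevertheless rated adequate. A minimal sketch of that calculation follows; the threshold, variable names, and data are illustrative assumptions, not the study's actual dataset or scoring code.

```python
# Hypothetical sketch: instructor judgments vs. manikin-recorded depth.
# Each record: (manikin-measured mean compression depth in mm,
#               instructor rating: True = rated adequate).
ADEQUATE_DEPTH_MM = 38  # assumed adequacy threshold for illustration

records = [
    (42, True), (30, True), (25, True), (45, True), (33, False), (50, True),
]

# Performances that are objectively inadequate by the manikin's measure.
inadequate = [r for r in records if r[0] < ADEQUATE_DEPTH_MM]

# Of those, the ones the instructor nevertheless rated adequate -
# the error type the abstract reports at 55% for compression depth.
false_pass = [r for r in inadequate if r[1]]

false_pass_rate = len(false_pass) / len(inadequate)
print(f"{false_pass_rate:.0%} of inadequate performances rated adequate")
```

With the toy data above, 2 of 3 inadequate performances are misrated, so the sketch reports a 67% false-pass rate; the study's finding was that this rate reached 55% for compression depth with real instructors.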

  • ABSTRACT: To investigate the effect of automated testing and retraining on the cardiopulmonary resuscitation (CPR) competency of emergency nurses, a software program was developed that allowed automated testing followed by computer exercises based on the Resusci Anne Skills Station™ (Laerdal, Norway). Using this system, the CPR competencies of 43 emergency nurses (mean age 37 years, SD 11, 53% female) were assessed. Nurses passed the test if they achieved a combined score consisting of ≥70% of compressions with depth ≥50 mm, ≥70% of compressions with complete release (<5 mm), a mean compression rate between 100 and 120/min, and ≥70% of bag-valve-mask ventilations between 400 and 1000 ml. Nurses who failed the test received automated feedback and feedforward on how to improve. They could then either practise with computer exercises or retake the test without additional practice. Nurses were expected to demonstrate competency within two months and were retested 10 months after baseline.
    At baseline, 35/43 nurses failed the test. Seven of them did not attempt further testing or practice, and 7 others did not continue until competency, leaving 14/43 nurses not competent by the end of the training period. After ten months, 39 nurses were retested; 24 failed, with incomplete release the most common reason.
    Automated testing with feedback was effective in detecting nurses needing CPR retraining, and automated training and retesting improved skills to a predefined pass level. Since not all nurses trained until success, achieving CPR competence remains an important individual and institutional motivational challenge. Ten months after baseline the combined score showed substantial decay, highlighting the need for frequent assessments.
    Nurse Education in Practice 11/2014; DOI:10.1016/j.nepr.2014.11.012
  • Resuscitation 10/2012; 83:e38–e39. DOI:10.1016/j.resuscitation.2012.08.097 · 3.96 Impact Factor
  • ABSTRACT: Background: According to the literature, 40% of injuries affecting school-age children are sports related. The role of physical education students, as future teachers, seems highly important for protecting children's safety during sports classes.
    Purposes: To evaluate the level of basic life support (BLS) knowledge and skills in physical education students instructed with different methods.
    Methods: Second-year physical education students (n = 104, M age = 20 ± 0.6 years) were randomly assigned to three groups: experimental 1 (E1), experimental 2 (E2), and control (C). Group E1 students participated in a 2-hour BLS course based on computer-assisted presentations. Group E2 trainees practised the BLS algorithm in pairs during a 2-hour course. No manikins were used in either intervention group. Students in Group C were asked to learn the BLS algorithm on their own. All groups completed a 10-question multiple-choice test on BLS at the beginning and end of the experiment. After completing the course, participants performed BLS on a manikin.
    Results: The knowledge test results did not differ significantly among the groups before the experiment but differed substantially afterward (analysis of variance contrast analysis, p < .05). Regardless of the teaching method used, no significant differences were found among the students in preparatory BLS actions or cardiopulmonary resuscitation (CPR) performance on a manikin. The level of CPR performance was very low in all groups.
    Conclusions: Students in both intervention groups improved their BLS knowledge after the training. The teaching methods used in this study appeared ineffective for practical CPR skills. Access to a greater number of modern manikins should improve BLS training for physical education students, and ongoing consultation with emergency medicine experts on instructional methods is recommended for university teachers.
    Teaching and Learning in Medicine 07/2014; 26(3):252-257. DOI:10.1080/10401334.2014.910459 · 1.12 Impact Factor
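The combined pass criterion described in the automated-testing abstract above (≥70% of compressions with depth ≥50 mm, ≥70% with complete release <5 mm, mean rate 100-120/min, ≥70% of ventilations 400-1000 ml) reduces to a set of threshold checks. The sketch below illustrates it; the function name and example data are assumptions for illustration, not the Skills Station's actual scoring code, though the thresholds are taken from the abstract.

```python
def passes_combined_score(compressions, ventilations_ml, mean_rate_per_min):
    """Hypothetical pass check mirroring the abstract's combined score.

    compressions: list of (depth_mm, release_mm) pairs, one per compression.
    ventilations_ml: list of ventilation volumes in millilitres.
    """
    n = len(compressions)
    deep_enough = sum(1 for depth, _ in compressions if depth >= 50)
    full_release = sum(1 for _, release in compressions if release < 5)
    good_vents = sum(1 for v in ventilations_ml if 400 <= v <= 1000)
    return (
        deep_enough / n >= 0.70                        # >=70% with depth >=50 mm
        and full_release / n >= 0.70                   # >=70% released to <5 mm
        and 100 <= mean_rate_per_min <= 120            # mean rate 100-120/min
        and good_vents / len(ventilations_ml) >= 0.70  # >=70% vents 400-1000 ml
    )

# Example: 9/10 compressions deep enough, 9/10 fully released,
# mean rate 110/min, 4/5 ventilations in the 400-1000 ml range.
comps = [(52, 2)] * 8 + [(45, 2), (52, 8)]
vents = [500, 600, 700, 350, 800]
print(passes_combined_score(comps, vents, 110))  # → True
```

Because the criterion is conjunctive, a single failing component (for example, incomplete release, the most common failure reason reported at retest) is enough to fail the whole test.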