Article

Reliability and validity of a scoring instrument for clinical performance during Pediatric Advanced Life Support simulation scenarios.

Division of Pediatric Critical Care Medicine, Children's Hospital of Philadelphia, PA 19104, United States.
Resuscitation (Impact Factor: 3.96). 03/2010; 81(3):331-6. DOI: 10.1016/j.resuscitation.2009.11.011
Source: PubMed

ABSTRACT: To assess the reliability and validity of scoring instruments designed to measure clinical performance during simulated resuscitations requiring the use of Pediatric Advanced Life Support (PALS) algorithms.
Pediatric residents were invited to participate in an educational trial involving simulated resuscitations that employ PALS algorithms. Each subject participated in a session comprising four scenarios (asystole, dysrhythmia, respiratory arrest, shock). Video-recorded sessions were independently reviewed and scored by four raters using instruments designed to measure performance in terms of timing, sequence, and quality. Validity was assessed by two-factor analysis of variance with postgraduate year (PGY-1 versus PGY-2) as an independent variable. Reliability was assessed by calculation of overall interrater reliability (IRR) as well as a generalizability study to estimate variance components of individual measurement facets (scenarios, raters) and associated interactions.
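An overall IRR of this kind is typically an intraclass correlation computed from a two-way subjects-by-raters design. As an illustrative sketch only (the abstract does not specify the authors' exact formula), ICC(2,1) can be derived from the mean squares of a ratings matrix:

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects ICC(2,1) from an n_subjects x n_raters matrix.

    Partitions total sum of squares into subject (rows), rater (columns),
    and residual components, then forms the single-rater
    absolute-agreement intraclass correlation.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

With perfect agreement across raters the coefficient is 1.0; a constant rater offset (one rater scoring uniformly higher) lowers it, since ICC(2,1) penalizes absolute disagreement, not just inconsistency.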
Twenty subjects were scored by four raters. Based on a two-factor ANOVA, PGY-2s outperformed PGY-1s (p<0.05), and significant differences in difficulty existed between the four scenarios, with dysrhythmia scores the lowest. Overall IRR was high (0.81); most variance was attributable to subject (17%), scenario (13%), and the subject-by-scenario interaction (52%), while variance attributable to rater was minimal (1.4%).
The instruments assessed in this study measure clinical performance during PALS scenarios reliably and validly. Measurement error could be reduced further by adding scenarios, but adding raters for a given scenario would not improve reliability. Further studies should assess validity of measurement against actual clinical performance during resuscitations.
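The conclusion that more scenarios help but more raters do not follows directly from a generalizability (D-study) coefficient, in which each error variance component is divided by the number of conditions sampled for that facet. The sketch below is illustrative only: the component values loosely echo the reported pattern (subject 17%, subject-by-scenario interaction 52%, rater-related variance ~1.4%), and the residual term is a made-up placeholder, not a figure from the study.

```python
def g_coefficient(var_subject, var_sub_x_scen, var_sub_x_rater, var_resid,
                  n_scenarios, n_raters):
    """Generalizability coefficient for relative decisions.

    Only subject variance counts as signal; interaction and residual
    variance are averaged down by the number of scenarios and raters.
    """
    relative_error = (var_sub_x_scen / n_scenarios
                      + var_sub_x_rater / n_raters
                      + var_resid / (n_scenarios * n_raters))
    return var_subject / (var_subject + relative_error)

# Hypothetical components chosen to mirror the reported pattern.
baseline = g_coefficient(0.17, 0.52, 0.014, 0.10, n_scenarios=4, n_raters=1)
more_scenarios = g_coefficient(0.17, 0.52, 0.014, 0.10, n_scenarios=8, n_raters=1)
more_raters = g_coefficient(0.17, 0.52, 0.014, 0.10, n_scenarios=4, n_raters=4)
```

Because the dominant error term is the subject-by-scenario interaction, doubling the number of scenarios raises the coefficient substantially, while even quadrupling the raters barely moves it.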

    ABSTRACT: Each year an estimated 10 million newborns require assistance to initiate breathing, and about 900 000 die due to intrapartum-related complications. Further research is required in several areas concerning neonatal resuscitation, particularly in settings with limited resources where the highest proportion of intrapartum-related deaths occur. The aim of this study is to use CCD-camera recordings to evaluate resuscitation routines at a tertiary hospital in Nepal.
    BMC Pediatrics 09/2014; 14(1):233. · 1.92 Impact Factor
    ABSTRACT: IMPORTANCE: Resuscitation training programs use simulation and debriefing as an educational modality with limited standardization of debriefing format and content. Our study attempted to address this issue by using a debriefing script to standardize debriefings. OBJECTIVE: To determine whether use of a scripted debriefing by novice instructors and/or simulator physical realism affects knowledge and performance in simulated cardiopulmonary arrests. DESIGN: Prospective, randomized, factorial study. SETTING: The study was conducted from 2008 to 2011 at 14 Examining Pediatric Resuscitation Education Using Simulation and Scripted Debriefing (EXPRESS) network simulation programs. Interprofessional health care teams participated in 2 simulated cardiopulmonary arrests, before and after debriefing. PARTICIPANTS: We randomized 97 participants (23 teams) to nonscripted low-realism; 93 participants (22 teams) to scripted low-realism; 103 participants (23 teams) to nonscripted high-realism; and 94 participants (22 teams) to scripted high-realism groups. INTERVENTION: Participants were randomized to 1 of 4 arms: permutations of scripted vs nonscripted debriefing and high-realism vs low-realism simulators. MAIN OUTCOMES AND MEASURES: Percentage difference (0%-100%) in multiple choice question (MCQ) test (individual scores), Behavioral Assessment Tool (BAT) (team leader performance), and Clinical Performance Tool (CPT) (team performance) scores, postintervention vs preintervention comparison (PPC). RESULTS: There was no significant difference at baseline in nonscripted vs scripted groups for MCQ (P = .87), BAT (P = .99), and CPT (P = .95) scores. Scripted debriefing showed greater improvement in knowledge (mean [95% CI] MCQ-PPC, 5.3% [4.1%-6.5%] vs 3.6% [2.3%-4.7%]; P = .04) and team leader behavioral performance (median [interquartile range (IQR)] BAT-PPC, 16% [7.4%-28.5%] vs 8% [0.2%-31.6%]; P = .03).
The improvement in clinical performance during simulated cardiopulmonary arrests was not significantly different between groups (median [IQR] CPT-PPC, 7.9% [4.8%-15.1%] vs 6.7% [2.8%-12.7%]; P = .18). Level of physical realism of the simulator had no independent effect on these outcomes. CONCLUSIONS AND RELEVANCE: The use of a standardized script by novice instructors to facilitate team debriefings improves acquisition of knowledge and team leader behavioral performance during subsequent simulated cardiopulmonary arrests. Implementation of debriefing scripts in resuscitation courses may help to improve learning outcomes and standardize delivery of debriefing, particularly for novice instructors.
    JAMA Pediatrics 01/2013; 167:528-36. · 4.25 Impact Factor
    ABSTRACT: Background: Junior doctors are often the first responders to deteriorating patients in hospital. In the high-stakes and time-pressured context of acute care, the propensity for error is high. This study aimed to identify the main subject areas in which junior doctors' acute care errors occur, and cross-reference the errors with Reason's Generic Error Modelling System (GEMS). GEMS categorises errors according to the underlying cognitive processes, and thus provides insight into the causative factors. The overall aim of this study was to identify patterns in junior doctors' acute care errors in order to enhance understanding and guide the development of educational strategies. Methods: This observational study utilised simulated acute care scenarios involving junior doctors dealing with a range of emergencies. Scenarios and the subsequent debriefs were video-recorded. Framework analysis was used to categorise the errors according to eight inductively-developed key subject areas. Subsequently, a multi-dimensional analysis was performed which cross-referenced the key subject areas with an earlier categorisation of the same errors using GEMS. The numbers of errors in each category were used to identify patterns of error. Results: Eight key subject areas were identified: hospital systems, prioritisation, treatment, ethical principles, procedural skills, communication, situation awareness and infection control. There was a predominance of rule-based mistakes in relation to the key subject areas of hospital systems, prioritisation, treatment and ethical principles. In contrast, procedural skills, communication and situation awareness were more closely associated with skill-based slips and lapses. Knowledge-based mistakes were less frequent but occurred in relation to hospital systems and procedural skills. Conclusions: In order to improve the management of acutely unwell patients by junior doctors, medical educators must understand the causes of common errors.
Adequate knowledge alone does not ensure prompt and appropriate management and referral. The teaching of acute care skills may be enhanced by encouraging medical educators to consider the range of potential error types, and their relationships to particular tasks and subjects. Rule-based mistakes may be amenable to simulation-based training, whereas skill-based slips and lapses may be reduced using strategies designed to raise awareness of the interplay between emotion, cognition and behaviour.
    BMC Medical Education 01/2015; 15(1):3. · 1.41 Impact Factor