Article

Reliability and validity of a scoring instrument for clinical performance during Pediatric Advanced Life Support simulation scenarios.

Division of Pediatric Critical Care Medicine, Children's Hospital of Philadelphia, PA 19104, United States.
Resuscitation (Impact Factor: 3.96). 03/2010; 81(3):331-6. DOI: 10.1016/j.resuscitation.2009.11.011
Source: PubMed

ABSTRACT
AIM: To assess the reliability and validity of scoring instruments designed to measure clinical performance during simulated resuscitations requiring the use of Pediatric Advanced Life Support (PALS) algorithms.
METHODS: Pediatric residents were invited to participate in an educational trial involving simulated resuscitations employing PALS algorithms. Each subject participated in a session comprising four scenarios (asystole, dysrhythmia, respiratory arrest, shock). Video-recorded sessions were independently reviewed and scored by four raters using instruments designed to measure performance in terms of timing, sequence, and quality. Validity was assessed by two-factor analysis of variance (ANOVA) with postgraduate year (PGY-1 versus PGY-2) as an independent variable. Reliability was assessed by calculating overall interrater reliability (IRR) and by a generalizability study estimating the variance components of individual measurement facets (scenarios, raters) and their interactions.
RESULTS: Twenty subjects were scored by four raters. On two-factor ANOVA, PGY-2s outperformed PGY-1s (p<0.05), and the four scenarios differed significantly in difficulty, with dysrhythmia scores the lowest. Overall IRR was high (0.81). Most variance was attributable to subject (17%), scenario (13%), and the subject-by-scenario interaction (52%); variance attributable to rater was minimal (1.4%).
CONCLUSIONS: The instruments assessed in this study measure clinical performance during PALS scenarios in a reliable and valid manner. Measurement error could be reduced further by adding scenarios, but adding raters to a given scenario would not improve reliability. Further studies should assess the validity of these measurements against actual clinical performance during resuscitations.
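
The scenario-versus-rater tradeoff in the conclusions follows directly from the reported variance components. Below is a minimal decision-study (D-study) sketch in Python, assuming the variance not itemized in the abstract (roughly 16.6%) acts as residual error and the subject-by-rater interaction is negligible; the function and the projected numbers are illustrative, not the paper's published computation.

    # Minimal D-study sketch using the variance proportions reported above:
    # subject 17%, scenario 13%, subject x scenario 52%, rater 1.4%.
    # ASSUMPTIONS: the ~16.6% of variance not itemized in the abstract is
    # treated as residual error, and the subject x rater interaction is
    # taken as zero. Scenario and rater main effects do not contribute to
    # relative error, so they are omitted from the denominator.

    def g_coefficient(n_scenarios: int, n_raters: int) -> float:
        """Generalizability coefficient for relative decisions in a fully
        crossed subject x scenario x rater design."""
        var_subject = 0.170      # universe-score (true) variance
        var_subj_scen = 0.520    # subject x scenario interaction
        var_subj_rater = 0.000   # assumed negligible (not reported)
        var_residual = 0.166     # assumed: all variance not itemized above
        rel_error = (var_subj_scen / n_scenarios
                     + var_subj_rater / n_raters
                     + var_residual / (n_scenarios * n_raters))
        return var_subject / (var_subject + rel_error)

    # Doubling scenarios helps; doubling raters barely moves the needle:
    for n_s, n_r in [(4, 4), (8, 4), (4, 8)]:
        print(f"{n_s} scenarios x {n_r} raters -> G = {g_coefficient(n_s, n_r):.2f}")

Under these assumptions the projected coefficient rises from about 0.55 with four scenarios to 0.71 with eight, while doubling the raters leaves it essentially unchanged, mirroring the authors' conclusion.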

Related publications:

  • ABSTRACT: Each year an estimated 10 million newborns require assistance to initiate breathing, and about 900 000 die due to intrapartum-related complications. Further research is required in several areas concerning neonatal resuscitation, particularly in settings with limited resources, where the highest proportion of intrapartum-related deaths occur. The aim of this study is to use CCD-camera recordings to evaluate resuscitation routines at a tertiary hospital in Nepal.
    BMC Pediatrics 09/2014; 14(1):233 (Impact Factor: 1.92).
  • ABSTRACT: The process of developing checklists to rate clinical performance is essential for ensuring their quality; the authors therefore applied an integrative approach to designing checklists that evaluate clinical performance. The approach consisted of five predefined steps (carried out in 2012-2013). Step 1: On the basis of the relevant literature and their clinical experience, the authors drafted a preliminary checklist. Step 2: The authors sent the draft checklist to five experts, who reviewed it using an adapted Delphi technique. Step 3: After pilot testing, the authors devised three scoring categories for items. Step 4: To ensure the changes made after pilot testing were valid, the checklist was submitted to an additional Delphi review round. Step 5: To weight items for accurate performance assessment, 10 pediatricians rated all checklist items in terms of their importance on a scale from 1 (not important) to 5 (essential). The authors illustrate their approach using the example of a checklist for a simulation scenario of infant septic shock. The five-step approach proved an effective method for designing evaluation checklists, resulting in a valid, reliable tool of 33 items, most with three scoring categories. This approach integrates published evidence and the knowledge of domain experts. A robust development process is a necessary prerequisite of valid performance checklists. Establishing a widely recognized standard for developing evaluation checklists will likely support the design of appropriate measurement tools and move the field of performance assessment in health care forward.
    Academic Medicine: Journal of the Association of American Medical Colleges 05/2014 (Impact Factor: 2.34). An illustrative sketch of the item-weighting step appears after this list.
  • ABSTRACT: IMPORTANCE: Resuscitation training programs use simulation and debriefing as an educational modality, with limited standardization of debriefing format and content. Our study attempted to address this issue by using a debriefing script to standardize debriefings. OBJECTIVE: To determine whether use of a scripted debriefing by novice instructors and/or simulator physical realism affects knowledge and performance in simulated cardiopulmonary arrests. DESIGN: Prospective, randomized, factorial study design. SETTING: The study was conducted from 2008 to 2011 at 14 Examining Pediatric Resuscitation Education Using Simulation and Scripted Debriefing (EXPRESS) network simulation programs. Interprofessional health care teams participated in 2 simulated cardiopulmonary arrests, before and after debriefing. PARTICIPANTS: We randomized 97 participants (23 teams) to nonscripted low-realism; 93 participants (22 teams) to scripted low-realism; 103 participants (23 teams) to nonscripted high-realism; and 94 participants (22 teams) to scripted high-realism groups. INTERVENTION: Participants were randomized to 1 of 4 arms: permutations of scripted vs nonscripted debriefing and high-realism vs low-realism simulators. MAIN OUTCOMES AND MEASURES: Percentage difference (0%-100%) in multiple-choice question (MCQ) test scores (individual knowledge), Behavioral Assessment Tool (BAT) scores (team leader performance), and Clinical Performance Tool (CPT) scores (team performance), expressed as a postintervention vs preintervention comparison (PPC). RESULTS: There was no significant difference at baseline between nonscripted and scripted groups for MCQ (P = .87), BAT (P = .99), and CPT (P = .95) scores. Scripted debriefing showed greater improvement in knowledge (mean [95% CI] MCQ-PPC, 5.3% [4.1%-6.5%] vs 3.6% [2.3%-4.7%]; P = .04) and team leader behavioral performance (median [interquartile range (IQR)] BAT-PPC, 16% [7.4%-28.5%] vs 8% [0.2%-31.6%]; P = .03). Improvement in clinical performance during simulated cardiopulmonary arrests was not significantly different (median [IQR] CPT-PPC, 7.9% [4.8%-15.1%] vs 6.7% [2.8%-12.7%]; P = .18). Level of physical realism of the simulator had no independent effect on these outcomes. CONCLUSIONS AND RELEVANCE: The use of a standardized script by novice instructors to facilitate team debriefings improves acquisition of knowledge and team leader behavioral performance during subsequent simulated cardiopulmonary arrests. Implementation of debriefing scripts in resuscitation courses may help to improve learning outcomes and standardize delivery of debriefing, particularly for novice instructors.
    JAMA Pediatrics 01/2013; 167:528-36 (Impact Factor: 4.25).
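
The EXPRESS abstract above reports all three outcomes as PPC values. As a minimal sketch, assuming PPC is the simple difference between post- and pre-debriefing scores rescaled to 0-100% (the abstract gives only the name and range, not the formula), the metric might look like this; the 21-point CPT maximum in the example is hypothetical:

    # Sketch of the post- vs pre-intervention comparison (PPC) from the
    # EXPRESS abstract. ASSUMPTION: PPC is the simple post-minus-pre
    # difference after rescaling scores to a 0-100% range; the abstract
    # does not give the exact formula.

    def ppc(pre: float, post: float, max_score: float) -> float:
        """Percentage-point change from pre- to post-debriefing."""
        return 100.0 * (post - pre) / max_score

    # Hypothetical team: Clinical Performance Tool (CPT) score rises from
    # 12 to 14 points out of an assumed 21-point maximum.
    print(f"CPT-PPC = {ppc(pre=12, post=14, max_score=21):.1f}%")  # 9.5%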
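
Returning to the checklist-development abstract (Academic Medicine, 05/2014): the final weighting step lends itself to a short illustration. The sketch below assumes the mean expert importance rating (1-5) serves as each item's weight and that the three scoring categories map to 0/1/2 points; neither detail is stated in the abstract, and the checklist items shown are hypothetical.

    # Sketch of weighted checklist scoring per the five-step approach above.
    # ASSUMPTIONS: item weight = mean expert importance rating (1-5); the
    # three scoring categories map to 0/1/2 points; items are invented.
    from statistics import mean

    def weighted_score(expert_ratings: dict[str, list[int]],
                       performance: dict[str, int],
                       max_points_per_item: int = 2) -> float:
        """Return a 0-100% score: points earned per item, weighted by the
        mean expert importance rating of that item."""
        weights = {item: mean(r) for item, r in expert_ratings.items()}
        earned = sum(weights[i] * performance[i] for i in performance)
        possible = sum(w * max_points_per_item for w in weights.values())
        return 100.0 * earned / possible

    # Hypothetical three-item excerpt of an infant septic shock checklist:
    ratings = {"recognize shock": [5, 5, 4], "vascular access": [5, 4, 4],
               "fluid bolus": [5, 5, 5]}
    performance = {"recognize shock": 2, "vascular access": 1, "fluid bolus": 2}
    print(f"weighted score = {weighted_score(ratings, performance):.1f}%")  # 84.5%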