Technology-enhanced simulation for health professions education: a systematic review and meta-analysis.

Office of Education Research, Mayo Medical School, Rochester, Minnesota, USA.
JAMA: The Journal of the American Medical Association 09/2011; 306(9):978-88. DOI: 10.1001/jama.2011.1234
Source: PubMed

ABSTRACT: Context: Although technology-enhanced simulation has widespread appeal, its effectiveness remains uncertain. A comprehensive synthesis of evidence may inform the use of simulation in health professions education.
Objective: To summarize the outcomes of technology-enhanced simulation training for health professions learners in comparison with no intervention.
Data Sources: Systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011.
Study Selection: Original research in any language evaluating simulation compared with no intervention for training practicing and student physicians, nurses, dentists, and other health care professionals.
Data Extraction: Reviewers working in duplicate evaluated quality and abstracted information on learners, instructional design (curricular integration, distributing training over multiple days, feedback, mastery learning, and repetitive practice), and outcomes. We coded skills (performance in a test setting) separately for time, process, and product measures, and similarly classified patient care behaviors.
Results: From a pool of 10,903 articles, we identified 609 eligible studies enrolling 35,226 trainees. Of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design. We pooled effect sizes using random effects. Heterogeneity was large (I² > 50%) in all main analyses. In comparison with no intervention, pooled effect sizes were 1.20 (95% CI, 1.04-1.35) for knowledge outcomes (n = 118 studies), 1.14 (95% CI, 1.03-1.25) for time skills (n = 210), 1.09 (95% CI, 1.03-1.16) for process skills (n = 426), 1.18 (95% CI, 0.98-1.37) for product skills (n = 54), 0.79 (95% CI, 0.47-1.10) for time behaviors (n = 20), 0.81 (95% CI, 0.66-0.96) for other behaviors (n = 50), and 0.50 (95% CI, 0.34-0.66) for direct effects on patients (n = 32). Subgroup analyses revealed no consistent statistically significant interactions between simulation training and instructional design features or study quality.
Conclusions: In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
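The pooled effect sizes and the I² heterogeneity statistic reported above come from a random-effects meta-analysis. As a rough illustration of how such pooling works, the sketch below implements the DerSimonian-Laird estimator, a standard random-effects method; the review does not specify its exact estimator, and the data here are invented for demonstration, not taken from the study.

```python
# Illustrative DerSimonian-Laird random-effects pooling (hypothetical data).
from math import sqrt

def pool_random_effects(effects, variances):
    """Pool per-study effect sizes with DerSimonian-Laird random effects.

    Returns (pooled effect, 95% CI lower, 95% CI upper, I-squared in %).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = k - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, i2
```

An I² above 50%, as in all of the review's main analyses, means that more than half of the observed variability in effect sizes reflects between-study differences rather than sampling error, which is why a random-effects model (which adds the tau² term to each study's variance) is used instead of a fixed-effect one.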

  • ABSTRACT: The expectation that the primary function of systematic reviews in medical education is to guide the development of professional practice requires basic standards to make the reports of these reviews more useful to evidence-based practice and to allow for further meta-syntheses. However, medical education research is a field rather than a discipline, one that brings together multiple methodological and philosophical approaches and one that struggles to establish coherence because of this plurality. Gordon and Gibbs have entered the fray with their common framework for reporting systematic reviews in medical education independent of their theoretical or methodological focus, which raises questions regarding the specificity of medical education research and how their framework differs from other systematic review reporting frameworks. The STORIES (STructured apprOach to the Reporting In healthcare education of Evidence Synthesis) framework will need to be tested in practice and may need to be adjusted to accommodate emerging issues and concerns. Nevertheless, as systematic reviews fulfill a greater role in evidence-based practice, STORIES or its successors should provide an essential infrastructure through which medical education syntheses can be translated into medical education practice.
    BMC Medicine 10/2014.
  • ABSTRACT: Objectives: This study aimed to explore the benefits perceived by Omani undergraduate maternity nursing students regarding the effect of pre-clinical simulation-based training (PSBT) on clinical learning outcomes. Methods: This non-experimental quantitative survey was conducted between August and December 2012 among third-year baccalaureate nursing students at Sultan Qaboos University in Muscat, Oman. Voluntary participants were exposed to faculty-guided PSBT sessions using low- and medium-fidelity manikins, standardised scenarios and skill checklists on antenatal, intranatal, postnatal and newborn care and assessment. Participants answered a purposely designed self-administered questionnaire on the benefits of PSBT in enhancing learning outcomes. Items were categorised into six subscales: knowledge, skills, patient safety, academic safety, confidence and satisfaction. Scores were rated on a four-point Likert scale. Results: Of the 57 participants, the majority (95.2%) agreed that PSBT enhanced their knowledge. Most students (94.3%) felt that their patient safety practices improved and 86.5% rated PSBT as beneficial for enhancing skill competency. All male students and 97% of the female students agreed that PSBT enhanced their confidence in the safe holding of newborns. Moreover, 93% of participants were satisfied with PSBT. Conclusion: Omani undergraduate nursing students perceived that PSBT enhanced their knowledge, skills, patient safety practices and confidence levels in providing maternity care. These findings support the use of simulation training as a strategy to facilitate clinical learning outcomes in future nursing courses in Oman, although further research is needed to explore the objective impact of PSBT on learning outcomes.
    Sultan Qaboos University medical journal 02/2015; 15(1):494-500.
  • ABSTRACT: Non-technical skills (teamwork) assessment is used to improve competence during training for interprofessional trauma teams. We hypothesized that non-technical skills assessment is less reliable for large teams, and evaluated team size effects during teamwork training. Small teams (n = 5; 5-7 members) and large teams (n = 6; 8-9 members) participated in three simulation-based trauma team training scenarios. Following each scenario, teamwork was scored by participating trauma attending physicians (TA), non-participating critical care trauma nurses (CRN), and two expert teamwork debriefers (E), using the Trauma Nontechnical Skills Assessment tool (T-NOTECHS). Large-team scores by TA and CRN were higher than E scores (P < .003); small-team scores did not differ by rater. Small-team inter-observer agreement was substantial (ICC = 0.60); large-team agreement was low (ICC = 0.29). E and TA scores showed no concordance, whereas E and CRN scores showed poor concordance for large teams (ICC = 0.41, r = 0.53, P = .02). By contrast, correlation between E and TA (ICC = 0.52, r = 0.80, P < .001) as well as E and CRN (ICC = 0.57, r = 0.65, P < .01) for small teams was high. Team size should be considered in team-training design, and when using teamwork rating instruments such as T-NOTECHS for assessment of simulated or actual trauma teams. Modified rating scales and enhanced training for raters of large groups versus small groups may be warranted.
    Hawai'i Journal of Medicine & Public Health 11/2014; 73(11):358-61.
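The team-size study above quantifies inter-rater agreement with intraclass correlation coefficients (ICCs). As an illustration of what such a coefficient measures, the sketch below computes a one-way random-effects ICC(1,1), one common variant; the abstract does not specify which ICC form the authors used, and the rating data in the test are invented.

```python
# Illustrative one-way random-effects ICC(1,1) for inter-rater agreement
# (hypothetical ratings, not the T-NOTECHS study's data).

def icc_oneway(ratings):
    """ratings: list of [subject][rater] scores, same number of raters per subject.

    Returns ICC(1,1): 1.0 means perfect agreement; values near 0 mean the
    raters' scores vary as much within a subject as between subjects.
    """
    n = len(ratings)                      # number of subjects (e.g., teams)
    k = len(ratings[0])                   # raters per subject
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # between-subject and within-subject mean squares (one-way ANOVA)
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

On this scale, the study's small-team value (ICC = 0.60) indicates that most scoring variance comes from real differences between teams, while the large-team value (ICC = 0.29) indicates raters disagree almost as much as teams differ.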

