Technology-Enhanced Simulation for Health Professions Education: A Systematic Review and Meta-analysis

Office of Education Research, Mayo Medical School, Rochester, Minnesota, USA.
JAMA: The Journal of the American Medical Association, 09/2011; 306(9):978-988. DOI: 10.1001/jama.2011.1234


Context: Although technology-enhanced simulation has widespread appeal, its effectiveness remains uncertain. A comprehensive synthesis of evidence may inform the use of simulation in health professions education.
Objective: To summarize the outcomes of technology-enhanced simulation training for health professions learners in comparison with no intervention.
Data Sources: Systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011.
Study Selection: Original research in any language evaluating simulation compared with no intervention for training practicing and student physicians, nurses, dentists, and other health care professionals.
Data Extraction: Reviewers working in duplicate evaluated quality and abstracted information on learners, instructional design (curricular integration, distributing training over multiple days, feedback, mastery learning, and repetitive practice), and outcomes. We coded skills (performance in a test setting) separately for time, process, and product measures, and similarly classified patient care behaviors.
Results: From a pool of 10,903 articles, we identified 609 eligible studies enrolling 35,226 trainees. Of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design. We pooled effect sizes using random-effects models. Heterogeneity was large (I² > 50%) in all main analyses. In comparison with no intervention, pooled effect sizes were 1.20 (95% CI, 1.04-1.35) for knowledge outcomes (n = 118 studies), 1.14 (95% CI, 1.03-1.25) for time skills (n = 210), 1.09 (95% CI, 1.03-1.16) for process skills (n = 426), 1.18 (95% CI, 0.98-1.37) for product skills (n = 54), 0.79 (95% CI, 0.47-1.10) for time behaviors (n = 20), 0.81 (95% CI, 0.66-0.96) for other behaviors (n = 50), and 0.50 (95% CI, 0.34-0.66) for direct effects on patients (n = 32). Subgroup analyses revealed no consistent statistically significant interactions between simulation training and instructional design features or study quality.
Conclusions: In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
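
The Results report that effect sizes were pooled with a random-effects model and that heterogeneity was large (I² > 50%). As a minimal, illustrative sketch of how such pooling works (this is not the authors' analysis code, and the per-study values below are invented for demonstration), the following Python implements the standard DerSimonian-Laird random-effects estimator together with Cochran's Q and the I² heterogeneity statistic:

    import math

    def dersimonian_laird(effects, variances):
        # Pool per-study effect sizes with a DerSimonian-Laird random-effects
        # model; returns the pooled estimate, its 95% CI, and I^2.
        k = len(effects)
        w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
        fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
        # Cochran's Q: weighted squared deviations from the fixed-effect mean
        q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
        df = k - 1
        # Between-study variance tau^2, truncated at zero
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)
        # Random-effects weights fold tau^2 into each study's variance
        w_re = [1.0 / (v + tau2) for v in variances]
        pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
        se = math.sqrt(1.0 / sum(w_re))
        ci = (pooled - 1.96 * se, pooled + 1.96 * se)
        # I^2: percentage of total variability due to between-study heterogeneity
        i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
        return pooled, ci, i2

    # Hypothetical per-study standardized mean differences and their variances
    effects = [1.4, 0.9, 1.6, 0.7, 1.3]
    variances = [0.10, 0.08, 0.15, 0.05, 0.12]
    pooled, (lo, hi), i2 = dersimonian_laird(effects, variances)
    print("pooled ES = %.2f (95%% CI, %.2f-%.2f); I^2 = %.0f%%" % (pooled, lo, hi, i2))

The key design point is that the between-study variance tau² is added to every study's sampling variance before weighting, so when studies are heterogeneous the pooled estimate carries a wider confidence interval than a fixed-effect model would give.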

Cited by:

    • "Medical knowing, affective knowing and communication in the simulation room emerge from the students' ability to suspend disbelief and act as if the situation was real. Authenticity of a simulation is mentioned as an important part for learning (Cook et al., 2011)."
    06/2015; 5(2). DOI: 10.7577/pp.973
    • "First, we examined all studies identified in our earlier reviews of simulation-based training and assessment (Cook et al. 2011). In that study we had searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus using a search strategy previously reported in full (Cook et al. 2011), Table 2 Guiding issues for each inference in the validity argument "
    ABSTRACT: In order to construct and evaluate the validity argument for the Objective Structured Assessment of Technical Skills (OSATS), based on Kane's framework, we conducted a systematic review. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, Scopus, and selected reference lists through February 2013. Working in duplicate, we selected original research articles in any language evaluating the OSATS as an assessment tool for any health professional. We iteratively and collaboratively extracted validity evidence from included articles to construct and evaluate the validity argument for varied uses of the OSATS. Twenty-nine articles met the inclusion criteria, all focussed on surgical technical skills assessment. We identified three intended uses for the OSATS, namely formative feedback, high-stakes assessment and program evaluation. Following Kane's framework, four inferences in the validity argument were examined (scoring, generalization, extrapolation, decision). For formative feedback and high-stakes assessment, there was reasonable evidence for scoring and extrapolation. However, for high-stakes assessment there was a dearth of evidence for generalization aside from inter-rater reliability data and an absence of evidence linking multi-station OSATS scores to performance in real clinical settings. For program evaluation, the OSATS validity argument was supported by reasonable generalization and extrapolation evidence. There was a complete lack of evidence regarding implications and decisions based on OSATS scores. In general, validity evidence supported the use of the OSATS for formative feedback. Research to provide support for decisions based on OSATS scores is required if the OSATS is to be used for higher-stakes decisions and program evaluation.
    Advances in Health Sciences Education 02/2015; DOI: 10.1007/s10459-015-9593-1
    • "Acknowledging this body of comparative evidence, research is now required to evaluate the differing simulation approaches to determine the most applicable methods for each context (Cook et al., 2011). In particular, there is a scarcity of objective data regarding the effect of e-simulation which has prompted this report."
    ABSTRACT: Simulation-based education is one strategy that may be used to teach nursing students to recognize and manage patient deterioration.
    Clinical Simulation in Nursing 02/2015; 11(2):97-105. DOI: 10.1016/j.ecns.2014.10.010