Technology-Enhanced Simulation for Health Professions Education: A Systematic Review and Meta-analysis

Office of Education Research, Mayo Medical School, Rochester, Minnesota, USA.
JAMA: The Journal of the American Medical Association. 09/2011; 306(9):978-988. DOI: 10.1001/jama.2011.1234
Source: PubMed


Context: Although technology-enhanced simulation has widespread appeal, its effectiveness remains uncertain. A comprehensive synthesis of evidence may inform the use of simulation in health professions education.
Objective: To summarize the outcomes of technology-enhanced simulation training for health professions learners, in comparison with no intervention.
Data Sources: Systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011.
Study Selection: Original research in any language evaluating simulation, in comparison with no intervention, for training practicing and student physicians, nurses, dentists, and other health care professionals.
Data Extraction: Reviewers working in duplicate evaluated quality and abstracted information on learners, instructional design (curricular integration, distributing training over multiple days, feedback, mastery learning, and repetitive practice), and outcomes. We coded skills (performance in a test setting) separately for time, process, and product measures, and similarly classified patient care behaviors.
Results: From a pool of 10,903 articles, we identified 609 eligible studies enrolling 35,226 trainees. Of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design. We pooled effect sizes using random-effects models. Heterogeneity was large (I² > 50%) in all main analyses. In comparison with no intervention, pooled effect sizes were 1.20 (95% CI, 1.04-1.35) for knowledge outcomes (n = 118 studies), 1.14 (95% CI, 1.03-1.25) for time skills (n = 210), 1.09 (95% CI, 1.03-1.16) for process skills (n = 426), 1.18 (95% CI, 0.98-1.37) for product skills (n = 54), 0.79 (95% CI, 0.47-1.10) for time behaviors (n = 20), 0.81 (95% CI, 0.66-0.96) for other behaviors (n = 50), and 0.50 (95% CI, 0.34-0.66) for direct effects on patients (n = 32). Subgroup analyses revealed no consistent statistically significant interactions between simulation training and instructional design features or study quality.
Conclusions: In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors, and moderate effects for patient-related outcomes.
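The pooled estimates above come from a random-effects model. As a minimal sketch, assuming the commonly used DerSimonian-Laird estimator (the abstract does not name the specific estimator), the pooling and the I² heterogeneity statistic can be computed as follows; the effect sizes and variances here are hypothetical, not data from the review:

```python
import numpy as np

def random_effects_pool(y, v):
    """DerSimonian-Laird pooling of effect sizes y with sampling variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)           # fixed-effect mean
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = 1.0 / np.sqrt(np.sum(w_re))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # heterogeneity %
    return pooled, ci, i2

# Hypothetical per-study standardized mean differences and their variances.
g = [1.4, 0.9, 1.6, 1.1, 0.7]
var = [0.10, 0.08, 0.15, 0.05, 0.12]
est, (lo, hi), i2 = random_effects_pool(g, var)
print(f"pooled effect = {est:.2f}, 95% CI {lo:.2f}-{hi:.2f}, I^2 = {i2:.0f}%")
```

Subgroup interactions of the kind reported above are typically tested by pooling each subgroup separately and comparing the between-subgroup portion of Q against a chi-square distribution.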

Cited by:
    • "Although simulation training is becoming widely established within medical education, too little attention is often paid to its effects on motivation and the clinical context (Kneebone 2005). Technology-enhanced simulation training includes computer based simulators, high-fidelity and static mannequins and training with animals or cadavers (Cook et al. 2011). It provides learning opportunities for controlled skills practice, without harming the patient. "
    ABSTRACT: Simulation games are becoming increasingly popular in education, but more insight into their critical design features is needed. This study investigated the effects of the fidelity of open patient cases, used as an adjunct to an instructional e-module, on students' cognitive skills and motivation. We set up a three-group randomized post-test-only design: a control group working on an e-module; a cases group, combining the e-module with low-fidelity, text-based patient cases; and a game group, combining the e-module with a high-fidelity simulation game built on the same cases. Participants completed questionnaires on cognitive load and motivation. After a 4-week study period, blinded assessors rated students' cognitive emergency care skills in two mannequin-based scenarios. In total, 61 students participated and were assessed: 16 in the control group, 20 in the cases group, and 25 in the game group. Learning time was 2 hours longer for the cases and game groups than for the control group. Acquired cognitive skills did not differ between groups. The game group experienced higher intrinsic and germane cognitive load than the cases group (p = 0.03 and p = 0.01, respectively) and felt more engaged (p < 0.001). Students did not profit from working on open cases (as an adjunct to an e-module), which nonetheless challenged them to study longer. The e-module appeared to be very effective, while the high-fidelity game, although engaging, probably distracted students and impeded learning. Medical educators designing motivating and effective skills training for novices should align case complexity and fidelity with students' proficiency level. The relation between case fidelity, motivation, and skills development is an important field for further study.
    Advances in Health Sciences Education 10/2015; DOI:10.1007/s10459-015-9641-x
    • "Medical knowing, affective knowing and communication in the simulation room emerge from the students' ability to suspend disbelief and act as if the situation was real. Authenticity of a simulation is mentioned as an important part for learning (Cook et al., 2011 "

    06/2015; 5(2). DOI:10.7577/pp.973
    • "First, we examined all studies identified in our earlier reviews of simulation-based training and assessment (Cook et al. 2011). In that study we had searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus using a search strategy previously reported in full (Cook et al. 2011), Table 2 Guiding issues for each inference in the validity argument "
    ABSTRACT: We conducted a systematic review to construct and evaluate the validity argument for the Objective Structured Assessment of Technical Skills (OSATS), based on Kane's framework. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, Scopus, and selected reference lists through February 2013. Working in duplicate, we selected original research articles in any language evaluating the OSATS as an assessment tool for any health professional. We iteratively and collaboratively extracted validity evidence from the included articles to construct and evaluate the validity argument for varied uses of the OSATS. Twenty-nine articles met the inclusion criteria, all focused on surgical technical skills assessment. We identified three intended uses for the OSATS: formative feedback, high-stakes assessment, and program evaluation. Following Kane's framework, four inferences in the validity argument were examined (scoring, generalization, extrapolation, and decision). For formative feedback and high-stakes assessment, there was reasonable evidence for scoring and extrapolation. However, for high-stakes assessment there was a dearth of evidence for generalization aside from inter-rater reliability data, and an absence of evidence linking multi-station OSATS scores to performance in real clinical settings. For program evaluation, the OSATS validity argument was supported by reasonable generalization and extrapolation evidence. There was a complete lack of evidence regarding implications and decisions based on OSATS scores. In general, validity evidence supported the use of the OSATS for formative feedback. Research to provide support for decisions based on OSATS scores is required if the OSATS is to be used for higher-stakes decisions and program evaluation.
    Advances in Health Sciences Education 02/2015; DOI:10.1007/s10459-015-9593-1
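The generalization evidence discussed in this last abstract rests largely on inter-rater reliability. As a minimal sketch, assuming a two-way random-effects intraclass correlation, ICC(2,1), a common choice for rating-scale instruments like the OSATS (the review does not commit to a specific coefficient), with hypothetical scores:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n subjects) x (k raters) array of scores."""
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # between raters
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical OSATS-style global ratings: 6 examinees scored by 2 raters.
scores = [[3, 4], [5, 5], [2, 3], [4, 4], [5, 4], [3, 3]]
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```

A single-rater coefficient like this quantifies how much of the score variance is attributable to examinees rather than raters, which is precisely the generalization inference in Kane's framework.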