The Effects of Virtual Reality, Augmented Reality, and Mixed Reality as Training Enhancement Methods: A Meta-Analysis
Alexandra D. Kaplan, Jessica Cruit, University of Central Florida, Orlando, USA, Mica Endsley, SA Technologies, Gold Canyon, AZ, USA, Suzanne M. Beers, MITRE Corporation, Colorado Springs, CO, USA, Ben D. Sawyer, and P. A. Hancock, University of Central Florida, Orlando, USA
Objective: The objective of this meta-analysis is to explore the presently available, empirical findings on transfer of training from virtual (VR), augmented (AR), and mixed reality (MR) and determine whether such extended reality (XR)-based training is as effective as traditional training methods.
Background: MR, VR, and AR have already been used as training tools in a variety of domains. However, the question of whether or not these manipulations are effective for training has not been quantitatively and conclusively answered. Evidence shows that, while extended realities can often be time-saving and cost-saving training mechanisms, their efficacy as training tools has been debated.
Method: The current body of literature was examined and all qualifying articles pertaining to transfer of training from MR, VR, and AR were included in the meta-analysis. Effect sizes were calculated to determine the effects that XR-based factors, trainee-based factors, and task-based factors had on performance measures after XR-based training.
Results: Results showed that training in XR does not express a different outcome than training in a nonsimulated, control environment. It is equally effective at enhancing performance.
Conclusion: Across numerous studies in multiple fields, extended realities are as effective a training mechanism as the commonly accepted methods. The value of XR then lies in providing training in circumstances that exclude traditional methods, such as situations when danger or cost may make traditional training impossible.
Keywords: virtual environments, transfer of training, immersive environments, meta-analysis
THE PROMISE OF AUGMENTED
REALITY, VIRTUAL REALITY, AND MIXED
REALITY FOR TRAINING
Training is required in order for humans to develop necessary performance skills (see Holding, 1989). Learning protocols can be both expensive and time consuming. Thus, any advancement in technology or methodology that might reduce the cost, either in financial or in temporal terms, will be of great relevance to many individuals, organizations, and industries. For this reason, simulation-based training has shown promise and is gaining acceptance as a means to increase training efficiency (Hancock, 2009). Although simulation-based training can be delivered via portable tablets or conventional flat panel displays, we are witnessing an increasing use of augmented reality (AR) and virtual reality (VR) displays, and new mixed reality (MR) displays, which include both AR and VR. VR technology generally uses a headset, blocking out visual stimulus from the real world. AR allows users to see the real world, but overlays virtual elements. MR combines the two, including aspects of both the real and virtual world. Other definitions of the technology employed in those categories, as well as sources, are listed in Appendix A. Extended reality (XR) is the umbrella term that refers to these three different types of simulations. These technologies promise to reduce some of the costs associated with expensive training, especially where spatial information is important, such as in the field of aviation (Salas et al., 1998). XR training can also eliminate some of the risks inherent to high-level training by placing individuals in a simulation rather than a real-world dangerous situation.

Address correspondence to Alexandra D. Kaplan, Department of Psychology, University of Central Florida, 4111 Pictor Lane, Orlando, FL 32816, USA; e-mail: adkaplan@knights.ucf.edu

HUMAN FACTORS, Vol. 00, No. 0, Month XXXX, pp. 1. DOI: 10.1177/0018720820904229. Article reuse guidelines: sagepub.com/journals-permissions. Copyright © 2020, Human Factors and Ergonomics Society.
Training in XR promises to have benets
beyond simply supplementing traditional train-
ing protocols. Whether it is a time- saving, cost-
saving measure or not, the benets that exist
may still outweigh potential drawbacks. Even
low- delity VR contains aspects of the physical
world that cannot be replicated in the traditional
classroom settings (Kozak et al., 1993). One
can argue that a simulated battleeld has more
in common with a real battleeld than does a
classroom.
The potential also exists that XR might be
used to help people prepare for situations that
do not yet exist, or are not yet safe for humans,
and thus cannot be adequately prepared for in
situ, for example, prospective missions to Mars
(Hancock, 2017). XR allows for training in
locations and for events where there are no safe
and realistic parallels. Additionally, such simulations can be rapidly and efficiently updated as new information becomes available, unlike other full-fidelity built environments, which are much less malleable.
However, training in XR does not solve all of the problems that plague current training methods. While simulation may be the solution to some issues, it comes with its own set of caveats and concerns that have to be balanced against those of more traditional methods. One such caveat is the rate of technological innovation, which far exceeds the speed of designing, implementing, and testing a training regimen (Hancock & Hoffman, 2015). Therefore, by the time a simulator's efficacy as a training tool is fully tested, it is already out of date. The variability among the technologies in use makes it difficult (if not impossible) to empirically replicate an earlier investigation of XR-based training effectiveness.
In light of these concerns and the numerous benefits of XR-based training, it is important to assess the applicability of the training (in XR) to execution of the task (in the real world). The principle of “encoding specificity” indicates that when the learning environment is sufficiently different from the environment in which learning is subsequently measured, performance tends to suffer (Tulving & Thomson, 1973). This principle was further explored experimentally by Godden and Baddeley (1975), who found that scuba divers who memorized lists of words on dry land recalled those lists better above, rather than below, the surface of the ocean. This calls into question whether learning can fully transfer from practice to performance when the performance occurs in a different environment from training, such as is the case when XR is used. In the Godden and Baddeley (1975) example, what if rather than memorizing words, an individual was learning how to safely operate an underwater air tank? In such a situation, training on land for subsequent performance underwater could prove disastrous if the training did not transfer effectively. This same concern can be potentially extended to training in XR. The situations in which simulation-based training has the most benefit (i.e., risky, expensive, and/or unsafe conditions) also have the highest cost of failure when training proves inadequate.
Outcomes of Extended Reality and
Simulation-Based Training
Simulation-based training has already proven advantageous for the military. It has been shown that pilots who first trained in simulators required less in-flight training time before reaching an acceptable level of competence (Rantanen & Talleur, 2005). Simulators, as surrogates for many of the expensive and limited resources or dangerous situations encountered by the military, free up equipment (such as runways) that might be unavailable due to other operational demands and allow the training of dangerous operations (such as flight and air traffic control) in a safe manner. Additionally, training in simulation environments offers the possibility of immediate feedback (Haque & Srinivasan, 2006). Such immediacy promotes faster and more accurate training by letting the learner self-correct mistakes before the result of the error is propagated.
Training in XR appears to hold similar promise as a solution for many of the problems that currently make traditional training difficult and ineffective. Of course, the question of whether or not XR is a suitable medium for training is the subject of some debate. XR as a training platform lacks some of the haptic feedback that the real world offers. Additionally, the variability in visual quality of different XR products, lag and tracking problems, and the potential for simulator sickness are all sources of limitation that may diminish training efficacy. To that end, the success of XR-based training must be evaluated empirically across differing applied fields. To accomplish this is not a straightforward task. Learner capacities vary, and inherent individual differences have been shown to affect the transfer of training, whether from real or simulated sources (Blume et al., 2010). Additionally, the modes of delivery of virtual training vary in fidelity and quality. Both of these factors have significant effects on later performance measures.
The Transfer Effectiveness Ratio
One of the most benecial aspects of XR
assessment is that transfer of training from sim-
ulation to the target environment can be directly
measured. The transfer effectiveness ratio (TER)
determines the value of time spent training in a
simulator by calculating the efcacy of the (vir-
tual) training session (and see Roscoe, 1971).
The equation is as follows:
TER = (Yc − Yx) / Yc × 100
where Yc indicates the amount of time or number of trials it takes to train an individual on a specific task, and Yx indicates the time it takes to train someone who has already trained on a simulator to complete that same task to the same level of competence. Thus, a TER value of 0.5 indicates that training on a simulator can reduce the in-person training time by one-half. Using this formula, it is possible to specify numerically the time saved by training using simulation in general or a particular XR technology.
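As a simple illustration of the ratio defined above, the following sketch computes TER from hypothetical trial counts; the function name and the numbers are invented for demonstration and do not come from any study in this meta-analysis.

```python
def transfer_effectiveness_ratio(y_control: float, y_transfer: float, as_percent: bool = False) -> float:
    """TER = (Yc - Yx) / Yc, optionally scaled by 100 to express it as a percentage.

    y_control  (Yc): time or trials to reach criterion with conventional training alone.
    y_transfer (Yx): time or trials to reach the same criterion after prior simulator training.
    """
    ter = (y_control - y_transfer) / y_control
    return ter * 100 if as_percent else ter

# Hypothetical example: 20 in-flight hours to criterion without simulator pre-training,
# but only 10 hours after simulator pre-training -> TER = 0.5 (training time halved).
print(transfer_effectiveness_ratio(20, 10))                     # 0.5
print(transfer_effectiveness_ratio(20, 10, as_percent=True))    # 50.0
```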
However, not all training success factors can be measured in terms of time saved. Further, not all domains have the resources or ability to experiment in order to specify the efficacy of each particular set of simulation content and delivery mechanism that might be considered. How, then, can the efficacy of simulator-based (and specifically XR-based) training be explored? To answer this question we conducted a meta-analysis of the current empirical literature on the topic.
The Present Meta-Analysis
VR has previously been examined in meta-analysis. However, XR is, in general, so broad a topic, and training so important an area, that not all aspects of training in XR have been addressed in research, let alone in meta-analysis. One previous meta-analysis examined only the efficacy of surgical simulators (see Haque & Srinivasan, 2006), a vital but small area. Fletcher et al. (2017) examined a broader scope, analyzing the effectiveness of VR in training. However, their selection criteria were less stringent than for the meta-analysis we employ. Fletcher's analysis allowed articles where psychological flow and enjoyment during virtual training represented an outcome variable; additionally, articles were included in their assessment where performance was measured during the time in the virtual environment or with the help of virtual aids. In the present meta-analysis we employed stringent selection criteria in order to fill an important need for a tightly controlled, methodologically sound, and comprehensive meta-analysis. We only included articles if performance measurement took place after virtual training, but entirely in the nonvirtual world, to demonstrate training transfer. Our focus is narrower, but no less important; we look to determine the direct effect that training using XR has on real-world performance. These findings will serve to inform the design and application of training regimens using XR.
METHOD
Searching the Literature
A literature search was conducted in order to identify all published, peer-reviewed articles on the topic of training transfer from XR-based training. Search terms consisted of a primary phrase describing forms of XR, combined with a secondary group of phrases pertaining to training. All possible combinations of the search terms were used, and the terms are listed in Table 1.
The 15 search strings were each entered into a series of search engines (ProQuest, EbscoHost, and Google Scholar). All results were briefly examined to determine whether they met inclusion criteria. The search took place in February 2019 and included all articles published prior to that time. Additionally, prominent scholars in the area of XR were contacted and asked whether they had any relevant research approaching fruition, which might fit the criteria. Identified relevant articles (n = 130) were then examined more closely and rejected (n = 105) or included (n = 24) in the meta-analysis. One article was identified that was published after the initial search and was included in the analysis (Whitmer et al., 2019). This process is illustrated in Figure 1.
Inclusion Criteria
Articles met inclusion criteria if at least one of the reported outcome variables measured performance that took place after training in XR. Articles were also required to be published in a peer-reviewed journal, the proceedings of a conference, part of a dissertation or thesis, or a peer-reviewed technical report. Articles were not included if the population was under 18, such as elementary school-age students. Articles were also rejected for inclusion if the reported statistics did not provide sufficient information so as to determine an effect size. Suitable statistics in this analysis were r, d, F, t, or means and standard deviations. Finally, it was required that all articles involved training in MR, AR, or VR. Articles were not included if the outcome variable measured something other than performance after training, such as level of enjoyment or engagement. Additionally, the performance being measured had to take place in the real world. Experimental results were rejected if the outcome variable was performance with the aid of XR or performance in a simulation. Articles were required to include original empirical data. If a dissertation included a sample, and that same data were then later used in a refereed publication or conference proceedings paper, the sample was only included once in the final analysis. Determination of inclusion and subsequent coding of the statistical data in the articles were completed by two individuals.
If a study examined the appropriate variables but did not include sufficient statistical information to determine an effect size, the authors were contacted directly and asked to supply such information. If the authors did not respond with or could not supply the needed statistics, the article was not included. While there were a variety of different forms of XR used for training, all fell into one of the aforementioned three categories (AR, VR, or MR).
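For readers who wish to see how reported statistics of the kinds listed above (r, t, F, or means and standard deviations) can be converted to a common effect-size metric, the sketch below shows standard conversion formulas to Cohen's d for two-group comparisons. It is a minimal illustration of the general approach, not the authors' exact computation pipeline, which the article attributes to SPSS.

```python
import math

def d_from_means(m1, m2, sd1, sd2, n1, n2):
    """Cohen's d from two group means and standard deviations (pooled SD)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def d_from_t(t, n1, n2):
    """Cohen's d from an independent-samples t statistic."""
    return t * math.sqrt(1 / n1 + 1 / n2)

def d_from_f(f, n1, n2):
    """Cohen's d from a one-way, two-group F statistic (F = t^2).
    The sign (direction) must be assigned from the reported group means."""
    return d_from_t(math.sqrt(f), n1, n2)

def d_from_r(r):
    """Cohen's d from a point-biserial correlation r (equal group sizes assumed)."""
    return 2 * r / math.sqrt(1 - r**2)

# Hypothetical two-group example (values invented for illustration only).
print(round(d_from_means(82.0, 78.5, 9.0, 10.0, 30, 30), 2))
print(round(d_from_t(2.1, 30, 30), 2))
```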
Variables
The outcome variable, in all included studies,
was some dimension of performance taken after
training in XR had occurred. Predictor variables
fell into three general categories related to (a)
the simulation, such as immersiveness, (b) the
trainee, such as age, or (c) the task, such as task
difculty.
Immersiveness. Of the XR-related variables, one often-explored concept involved comparisons across differing virtual environments. For example, VR using a headset was considered more virtually immersive than desktop VR or AR. Each of these differing levels of immersion may have been compared to an entirely nonsimulated control condition (e.g., real-world training) or to a less immersive training tool, such as an interactive video or a simple instruction manual. Here, we call this variable “immersiveness,” and use the word to refer to any comparison between environments where one is more virtually immersive than the other. Despite the differences in level of immersiveness of the training environment, all studies included in this analysis measured final performance in the real world.

TABLE 1: Tabulation and Combination of Search Terms

Primary Term | Secondary Term
Virtual reality | Training
VR | Learning
Augmented reality | Encoding specificity
Mixed reality |
Simulation |
VR vs. control. A subset of the studies where
immersiveness was a factor compared training
in a fully immersive VR setting to training in a
nonvirtually immersive control. Such studies
were included both in the overall effect size
analysis of immersiveness and in their own specific subanalysis.
Pre/post training. The variable “pre/post” includes any comparison between an individual or a group's performance before an XR-based training intervention with performance after that same intervention. This variable examines the post-training improvement (or lack thereof). Regardless of whether or not performance improvement was the hypothesis of the original article, if performance was measured and sufficient statistical information was supplied, then the pre-training score was compared to the post-training score, but only for groups where the training was virtual.

Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram (Moher et al., 2009).
Task Type
The data were examined to determine the
direct effect on performance of each variable
described above. The data were also examined
with task type as a moderator. The three types of
task categories were as follows.
Cognitive tasks. Cognitive tasks included situations in which participants learned information that later had to be either remembered directly, such as in a test of recall, or utilized in a subsequent applied setting.
Physical tasks. Physical tasks involved some sort of bodily training, such as balance or aerobic activities. The emphasis here was on psychomotor skill assimilation.
Mixed tasks. Some tasks included combinations of both physical and cognitive requirements, such as a maintenance task that required participants to use learned physical skills while simultaneously recalling applicable procedural information (see Marras & Hancock, 2014).
Included Articles
Twenty- ve articles met the above- stated
criteria and so were included in the analysis.
Twenty- three articles examined the XR- related
factor of immersiveness. The majority of these
included at least one pairwise comparison
between a VR training condition and a con-
trol setting (k = 21). A number also examined
AR compared to a control setting (k = 5). One
examined training results after training in VR as
compared to AR (k = 1), and two studies looked
at different levels of AR (k = 2).
Twelve studies were included in the pre/
post comparison, one of which focused on
AR and the rest on VR. Only one article that
met our inclusion criteria examined predictor
variables other than XR (Bier et al., 2018).
This work included the effects of both age
and task difculty on performance after train-
ing in VR.
Data Analytic Strategy
Many of these articles examined both VR and
AR and most reported more than one pairwise
relationship between variables of interest. Thus,
multiple effect sizes were taken from each.
Some articles were included in the number of
studies (k) for multiple predictor variables if
that article reported enough data to determine
an effect size of two different variables (e.g., if
a study reported enough statistical information
to calculate an effect size for both immersiveness and task difficulty, then the two separate effect sizes were calculated). For variables where k = 1, only one article reported results in a method suitable for inclusion in the meta-analysis. The calculated Cohen's d is provided here, but, as data only come from one source, any confidence intervals surrounding d would not be meaningful and thus were not included.
Such information is included to illustrate what
is covered by the current research. Individual
effect sizes are listed in Appendix B.
A total of 176 effect sizes were included,
which were each converted to Cohen’s d and
weighted, based on the number of participants
included. Effects between similar pairs within
the same study were combined. Therefore, even
if any one particular study had several effect
sizes measuring the same variable, results were
aggregated in order that each study only had one overall effect size for each specific predictor. If an article had two separate studies, using two different samples, then two effect sizes were calculated. This was done so that the results from one sample would not disproportionately influence the outcome and to maintain independence.
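A minimal sketch of this within-study aggregation step, under the assumption that effect sizes for the same predictor are combined as a sample-size-weighted average (the article states that effects were weighted by the number of participants but does not give the exact formula):

```python
def combine_within_study(effects):
    """Collapse several (d, n) pairs from one study into a single weighted effect size."""
    total_n = sum(n for _, n in effects)
    return sum(d * n for d, n in effects) / total_n

# Invented example: three immersiveness comparisons reported by one hypothetical study.
print(round(combine_within_study([(0.30, 24), (0.10, 24), (-0.05, 24)]), 3))
```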
Although all dependent variables represented a performance outcome, the scales used to measure performance varied widely. In addition, the concept of “performance” itself varied between articles. Therefore, it was not possible to compare directly between studies. As a result we used a random-effects model when calculating the meta-analytic results. For each study a weighted value d was determined as the effect of the predictor variable on performance in that particular study. These weighted effect sizes were then used to determine the overall effect size and the associated 95% confidence intervals. SPSS was used to compute the effect sizes.
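As a rough illustration of the kind of computation involved, the sketch below pools a set of study-level Cohen's d values under a DerSimonian–Laird random-effects model and reports the overall d with a 95% confidence interval. It is a generic textbook implementation offered for orientation only; the article reports using SPSS, and the exact weighting scheme used there is not specified beyond weighting by sample size. All input values below are invented.

```python
import math

def random_effects_pool(ds, ns):
    """Pool per-study Cohen's d values (two groups of size n assumed per study)
    with a DerSimonian-Laird random-effects model; returns (d_pooled, ci_low, ci_high)."""
    # Approximate within-study variance of d for two equal groups of size n.
    v = [(2 / n) + d**2 / (4 * n) for d, n in zip(ds, ns)]
    w = [1 / vi for vi in v]                          # fixed-effect weights
    d_fe = sum(wi * di for wi, di in zip(w, ds)) / sum(w)
    q = sum(wi * (di - d_fe) ** 2 for wi, di in zip(w, ds))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ds) - 1)) / c)          # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]              # random-effects weights
    d_re = sum(wi * di for wi, di in zip(w_re, ds)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return d_re, d_re - 1.96 * se, d_re + 1.96 * se

# Invented per-study effect sizes and per-group sample sizes.
print(random_effects_pool([0.20, -0.10, 0.35, 0.05], [24, 40, 30, 55]))
```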
The effect sizes determined from this analysis were not intended to determine whether XR training was effective. Rather, results addressed the question of whether it was different from the other methods of training to which it was being compared. If the effect size of immersiveness is both significant and positive, it represents an improvement over traditional training. If the effect is both significant and negative, then the opposite is true. If a zero effect size falls within the 95% confidence interval, it would indicate here that all levels of immersiveness exert an equivalent (or similar) effect on performance outcome.
RESULTS
The effect sizes are reported in Table 2. The table also indicates the number of separate studies investigating each respective predictor (k). Some articles included more than one study. Weighted overall levels of d are included, as are 95% confidence intervals for each relationship. Analysis of the associated variable of immersiveness, as well as the subset analysis of VR compared to control, showed no significant difference between levels of performance post training, regardless of the virtual immersiveness. While the negative effect sizes (d = −.07 and d = −.13, respectively) indicate a slight decrement in training effectiveness when a virtual environment was used, the fact that the confidence interval included zero indicates that whether one trains in a virtual or a real setting, the results are essentially equivalent. In essence, these findings indicate that XR experiences are as effective as traditional training approaches.
Table 3 shows the effect size based on task
type. Results show that XR is a more suitable
medium for training on physical tasks (d = .36),
but otherwise the type of task learned in simulation does not have an effect on the performance
outcome.
The overall pattern of effect sizes is compared in Figure 2. Additionally, in Figure 2, potentially moderating influencers are divided by task type. While these “interaction” effects are interesting, we have to caution against relying excessively on these results at present.
TABLE 2: Overall Effect Sizes of the Associated Variables

Predictor | Number of Studies (k) | Cohen's d | 95% CI Lower Limit | 95% CI Upper Limit
Immersiveness | 23 | −.07 | −0.22 | +0.07
VR compared to control | 21 | −.13 | −0.27 | +0.02
Pre/post training | 12 | .09 | −1.05 | +1.23
Age | 1 | −.08 | |
Task difficulty | 1 | −.15 | |

VR, virtual reality.
TABLE 3: Effect Sizes by Task Type

Task Type | Number of Studies (k) | Cohen's d | 95% CI Lower Limit | 95% CI Upper Limit
Cognitive tasks | 9 | .01 | −0.24 | +0.27
Mixed tasks | 12 | −.07 | −0.31 | +0.17
Physical tasks | 8 | .36* | +0.01 | +0.70

* indicates a significant effect at the p < .05 level.
This is because, with the addition of each moderating factor, the number of applicable studies is smaller. Thus, there is less between-study variation in the calculations for the smaller number of studies. For forest plots showing the effects of individual studies by task type/predictor clusters, see Figures 4 through 6 in Appendix C.
Additional Analysis and Overlap Between
Conditions
Within each study, performance tended to be quite similar between immersiveness conditions. To that end, it was important to determine the similarity between the conditions beyond simply noting that for immersiveness variables confidence intervals included zero. For this reason, scores were compared in order to determine overlap. The data could not be compared directly, as each study used different scales to measure performance. So, the mean performance of each training condition within a study was converted to a z-score. The average z-score for each condition, as well as a 95% confidence interval, is shown below in Figure 3. As these z-scores came from the two conditions present in each evaluation, they are mirror images of each other. Only their sign and magnitude are meaningful; the value of the average z itself has no real-world meaning except in indicating the difference between scores of participants in each condition. The fact that average z-scores were so small in value serves to highlight the similarity between conditions. The mean z-score for the more immersive condition was lower than the less immersive and control conditions, yet the overlap between confidence intervals was large. These findings indicate that, though a more virtually immersive training condition results in slightly worse performance than a real training setting, the majority of individuals will show similar results after training, regardless of the level of virtual immersiveness.
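The within-study standardization described above can be illustrated as follows. This is a minimal sketch under the assumption that each condition mean is standardized against the study's grand mean and pooled standard deviation; the article does not spell out the exact formula, and the numbers used here are invented.

```python
import math
import statistics

def condition_z_scores(mean_a, mean_b, pooled_sd):
    """Standardize two condition means within one study against the study's pooled SD.
    (One plausible reading of the procedure; not a verbatim reproduction of the authors' code.)"""
    grand_mean = (mean_a + mean_b) / 2
    return (mean_a - grand_mean) / pooled_sd, (mean_b - grand_mean) / pooled_sd

def mean_with_ci(zs):
    """Average z-scores across studies with an approximate 95% confidence interval."""
    m = statistics.mean(zs)
    se = statistics.stdev(zs) / math.sqrt(len(zs))
    return m, m - 1.96 * se, m + 1.96 * se

# Invented example: per-study (more-immersive mean, less-immersive/control mean, pooled SD).
studies = [(74.0, 76.0, 10.0), (81.0, 80.0, 12.0), (62.0, 65.0, 9.0), (55.0, 54.0, 8.0)]
immersive_z = [condition_z_scores(a, b, sd)[0] for a, b, sd in studies]
print(mean_with_ci(immersive_z))
```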
DISCUSSION

The fact that zero could not be excluded from the confidence intervals relating to any of the present overall predictors indicates an apparent equivalency between XR training and traditional instructional techniques employed in situations such as a classroom. If we take an optimistic perspective, these results confirm that the use of VR, MR, and AR training procedures provides at least an equivalent performance result to that which is normally experienced in traditional instruction methods. If this is the case, and performance outcomes after XR training are not significantly different than outcomes after traditional training, then the previously enumerated benefits of XR training (such as safety, cost, and ease of implementing changes) make it, on the whole, a more valuable investment of time than traditional training methods. After all, if the performance outcome is essentially the same, the other benefits of XR training make it a superior option.

Figure 2. Forest plot of effect sizes by associated variable; 95% confidence intervals shown.
However, it must be acknowledged that XR has often been held out to offer superior training capacities (especially in the popular press and by various vendors). The results of the present meta-analysis indicate that the case for this proposition is at best “not proven.” Though one study did find that training in VR improved speed of a maintenance task, compared to a no-training control group (Ganier et al., 2014), the comparison of interest is not between XR and no training, but XR and traditional training. Overall performance following XR-based training is neither better nor worse than performance following traditional training.
Of course, the benet of training can be
measured in more than just performance
after- the- fact. Though it is beyond the quantita-
tive scope of this meta- analysis, the use of VR
in training has been shown to affect presence
and immersion, as well as the psychological
dimension of ow (see Lackey et al., 2016).
All of these are important factors to consider
in training beyond evaluating performance out-
come alone.
In addition to the mean level of the effect sizes noted in Figure 2, there proved to be unusually large confidence intervals, particularly concerning the pre/post variable. These ranges of variability mean that there were approximately equal numbers of effects reporting strong transfer and effects indicating negative transfer. This range may be viewed as disturbing. On the principle of “do no harm,” it is important to know that an imposed training regimen will not actually cause the trained individual to be less proficient than they would have been with a traditional training approach. At present, because of the associated degree of variability we cannot ensure that this is always so. It may be that XR provides significant performance benefits but equally we cannot rule out that such a manipulation may inhibit learning in some cases. Some studies found large positive
effects of XR training, but a few did nd nega-
tive effects (see Appendix C). The sources were
varied enough that it was not immediately clear
whether there were any commonalities between
Figure 3. Immersiveness variables and condence intervals by condition.
Month XXXX - Human Factors
10
those nding negative effects. To investigate
further would require a larger body of research
from which to draw conclusions. The fact that
XR- based training had the same level of success
as traditional training indicates that “encoding
specicity” (see Godden & Baddeley, 1975)
does not pose a problem for XR training. That
is, the virtual environments employed in XR
are clearly similar enough to the real environ-
ment that transfer can occur effectively. It can
therefore be accepted that any negative transfer
or otherwise poor performance after XR- based
training is not a result of XR itself being an
unsuitable medium, but a result of some other
factors such as delity or individual differences.
There are several possibilities as to why
this high degree of variability occurs. First, the
actual training tasks represented in the summation here were highly heterogeneous. While each
study examined a separate form of task, the two
main categories of tasks were physical (where
participants were required to practice or learn
some spatial, procedural task) and cognitive
(where participants acquired new information,
but did not need to use it in a physical sense).
Yet, even within these categories, there proved
to be large variations. For example, cognitive
tasks ranged from rote memorization of facts
about planets to conducting simulated medical
dissections. Physical tasks involved balancing
skills, as well as performing a maintenance task
similar to that which a factory worker might do
on the job.
While overall XR training was more successful on physical tasks than cognitive tasks, this finding was not consistent. One study included in the analysis of the physical tasks used XR in order to teach the recovery of balance to stroke patients (Lee et al., 2015). Results of this particular study found that a large number of participants performed worse in the follow-up assessment. Of course it is possible that the stroke patients were deteriorating in capacity over time, that is, a declining
baseline. On the other hand, a cognitive task
study where participants learned mathematics
showed that scores were consistently higher
after the VR training intervention (Bier et al.,
2018). However, due to the variability in the
literature, this question needs further study.
Many studies examining physical tasks did show actual performance improvement after XR training. Several of these studies included in the analysis involved special populations such as stroke victims, with the virtual training a method to improve their physical abilities and retrain them in lost skills. Studies on healthy populations have occasionally shown that even procedural tasks such as way-finding can benefit more from virtual training than from standard training (Goldiez et al., 2007). Such findings may have been washed out by the variability of the populations examined in the included studies. The literature does not yet support a more thorough examination of task or population differential as a subpredictor. However, these shortfalls can be rectified with future research. Table 4 shows each study by task type and population examined. Here, a typical population refers to any population where participants were not selected for any specific expertise or illness, but in nearly every case was an undergraduate or university sample.
In the analysis of cognitive tasks, one study involved adults with autism, and six involved samples from the general population (one of which had age restrictions; Bier et al., 2018). The mixed tasks involved four populations from medical school, two groups of experienced technicians, and five typical populations. In the examination of physical tasks, three studies involved typical populations, although one had an age restriction (Prasertsakul et al., 2018), and four involved medical samples with specific ailments. The disparity in populations examined in each task type is thus fairly clear.
One other caveat with respect to the nd-
ings of the present meta- analysis is the tech-
nology used in each study. While some of the
identied studies included information about
the specic model of VR or AR technology
used, not all did. Of the studies that did provide
information, many utilized different levels of
the XR platform (e.g., interactive video games,
full- motion simulators). This differentiation
might help explain some of the performance
differences. While the examined variable of
immersiveness addressed some of the differ-
ences between the degrees of virtuality, no
The effecTs of VR foR TRaining 11
such distinctions can be made in the case of
difference in quality. Not all XR technologies
are created equal, and to compare two stud-
ies using different XR systems may even be
inappropriate to some degree. Disparate tech-
nology makes it difcult to determine direct
effects of each training intervention with so
few studies being suitable for analysis (see
Hancock & Hoffman, 2015).
Finally, it is important to reiterate that results of the present meta-analysis, as are results from all such analyses, are constrained by the limits and extent of the existing body of literature. There were insufficient studies to examine many of the factors related to either the task or the learner. Nor was there enough information to examine the subject of training transfer from XR in sufficient depth for reliable conclusions to be reached. Further, there were insufficient numbers of studies on AR to analyze its effects as distinct from those of VR. The “count of studies” columns in Tables 2 and 3 reveal the surprising paucity of research in this vital area. This then is not simply a case of “more research is needed,” but a case in which more diverse research is needed. This may well be an issue involved with the impetus and constituencies to fund such research. Many organizations “sell” training but frequently do not provide robust quantitative evidence of the value of that training in their promotional literature. The goal here is not to simply point out shortcomings in the existing field of research, but to identify those points where future research should be conducted in order to best examine quantifiable antecedents of training efficacy in XR.

TABLE 4: Task Types and Populations

Citation | Task Type | Population
Andersen et al. (2016) | Mixed | Otorhinolaryngology residents
Andersen et al. (2018) | Mixed | Otorhinolaryngology residents
Bailey et al. (2017) | Mixed | Normal
Bier et al. (2018) | Cognitive | 27 older and 30 younger adults
Buttussi and Chittaro (2018) | Cognitive | Normal
Chan et al. (2011) | Physical | Normal; dancers
Ganier et al. (2014) | Mixed | Normal
Gavish et al. (2015) | Mixed | Experienced technicians
Gerson and Van Dam (2003) | Mixed | Medical residents
Gonzalez-Franco et al. (2016) | Mixed | Normal
Hamblin (2005) | Mixed | Normal
Kober et al. (2013) | Physical | Spatial disorientation population
Lee et al. (2015) | Physical | Stroke population
Ma et al. (2011) | Physical | Parkinson's population
Macchiarella (2004) | Cognitive | Normal
Madden et al. (2018) | Cognitive | Normal
Martín-Gutiérrez et al. (2010) | Mixed | Normal
Prasertsakul et al. (2018) | Physical | Adults age 40–60
Rose et al. (2000) | Physical | Normal
Smith et al. (2014) | Cognitive | Autistic adults
Valimont et al. (2007) | Cognitive | Normal
Wang et al. (2014) | Mixed | Medical students
Webel et al. (2013) | Mixed | Experienced technicians
Whitmer et al. (2019) | Cognitive | Normal
Yang et al. (2008) | Physical | Stroke population
LIMITATIONS AND CONCLUSIONS
Although the current literature is surprisingly sparse, posing some limitations for the present meta-analysis, our present results are not inconclusive. However, due to this present paucity, certain analyses cannot be effectively performed. For example, it might be useful and insightful to consider the ways in which the variables associated specifically with training per se are nested within those particularly focused upon the state of the technology in each of AR, VR, and XR. To date, insufficient information has been collected upon these combinations such that we may be confident of the outcome. As with this and other current shortfalls, we have highlighted important gaps in the literature that need to be addressed if this important field is to move forward. For any meaningful effects to be determined from future comprehensive studies (meta or otherwise), the questions raised in our present work must be addressed. One of the most pressing areas needing more research is individual differences; soldiers are a very different population from elderly stroke victims. Studies are needed that enable performance comparisons between populations by holding variables such as simulation platform, task, and performance measures constant and studying performance by different population groups. This is essential for understanding which differences in results can be attributed directly to the effect of population factors such as experience or comfort with XR technology.
The second critical, but varying inuence
is the technology in use. At present there is a
wide range of VR headsets and simulated envi-
ronments used in studies. These are potentially
of very different quality, although quality was
rarely reported in the methods of each study.
“Fidelity” is a word which was used with some
regularity, but in the absence of consideration
of how the affordances of a virtual environment
or of a simulation used met the needs of those
being trained. In this endeavor, a useful set of
dimensions have already been dened in Extent
of World Knowledge, Reproduction Fidelity, and
Extent of Presence Metaphor (see Milgram et al.,
1995). Researchers would do well to complete
any simulation studies multiple times with differ-
ent display technologies, especially when the dif-
ference in quality is already quantied (Hancock
et al., 2015).
The third area which requires more specification is the task designation. It may be that training sessions were entirely different as they were meant to train unique tasks. Indeed, the present body of evidence suggests that not all task types benefit equally and the physical/cognitive division may not be the most critical one. What makes tasks amenable to consistently efficacious XR training is presently not well understood. Indeed, all three of these factors, and any interactions between them, make it difficult to determine the effect of virtual training on later performance. What the literature does support, currently, is the fact that XR training has similar performance outcomes to traditional training. In the absence of any significant differences between XR and traditional training, there is a bright future in considering the many benefits that XR training promises.
APPENDIX A: Definitions of AR, VR, and MR

Type of Simulated Reality | Definition | Source
Augmented reality | Any system that has the following three characteristics: 1. Combines real and virtual; 2. Is interactive in real time; 3. Is registered in three dimensions | Azuma (1997)
Augmented reality | All cases in which the display of an otherwise real environment is augmented by means of virtual (computer graphic) objects | Milgram and Kishino (1994)
Augmented reality | Augmenting natural feedback to the operator with simulated cues | Milgram et al. (1995)
Augmented reality | The enhancement of the real world by a virtual world, which subsequently provides additional information | Feiner et al. (1993)
Augmented reality | AR displays are those in which the image is of a primarily real environment, which is enhanced, or augmented, with computer-generated imagery | Drascic and Milgram (1996)
Virtual reality | VR can be defined as a three-dimensional computer-generated environment, updating in real time, and allowing human interaction through various input/output devices | Boud et al. (1999)
Virtual reality | Strictly the term virtual reality describes something that is “real in effect although not in fact” [virtual] and which “can be considered capable of being considered fact for some purposes” [reality]. A virtual environment, put simply, is an environment other than the one in which the participant is actually present; more usefully it is a computer-generated model, where a participant can interact intuitively in real time with the environment | Wilson (1997)
Virtual reality | A “virtual reality” is defined as a real or simulated environment in which a perceiver experiences telepresence | Steuer (1992)
Virtual reality | Virtual reality is an alternate world filled with computer-generated images that respond to human movements. These simulated environments are usually visited with the aid of an expensive data suit which features stereophonic video goggles and fiber-optic data gloves | Greenbaum (1992)
Virtual reality | It is a new emergent mode of reality in its own right, that comes together with actual reality to construct an extended world of human experience | Yoh (2001)
Virtual reality | Virtual reality is a technology that convinces the participant that he or she is actually in another place by substituting the primary sensory input with data produced by a computer | Heim (1998)
Virtual reality | A computer-generated display that allows or compels the user (or users) to have a sense of being present in an environment other than the one they are actually in, and to interact with that environment | Schroeder (1996)
Mixed reality | Mixed reality refers to the class of all displays in which there is some combination of a real environment and virtual reality | Drascic and Milgram (1996)
Mixed reality | Mixed reality environment is one in which real-world and virtual world objects are presented together within a single display | Milgram et al. (1995)
APPENDIX B: Effect Sizes by Study

Source N Task Type Associated Variable Effect Size
Andersen etal. (2018) 37 Mixed Immersiveness −0.54
Andersen etal. (2018) 37 Mixed Immersiveness −0.55
Andersen etal. (2016) 40 Mixed Immersiveness −1.40
Andersen etal. (2016) 40 Mixed Immersiveness −1.12
Andersen etal. (2016) 40 Mixed Immersiveness −0.92
Andersen etal. (2016) 20 Mixed Pre/post 0.54
Andersen etal. (2016) 20 Mixed Pre/post 0.07
Andersen etal. (2016) 20 Mixed Pre/Ppost 0.47
Bailey etal. (2017) 83 Mixed Immersiveness 0.27
Bailey etal. (2017) 83 Mixed Immersiveness 0.06
Bier etal. (2018) 27 Cognitive Task difficulty −0.92
Bier etal. (2018) 30 Cognitive Task difficulty −1.28
Bier etal. (2018) 27 Cognitive Task difficulty 0.28
Bier etal. (2018) 30 Cognitive Task difficulty 0.19
Bier etal. (2018) 27 Cognitive Task difficulty −0.53
Bier etal. (2018) 30 Cognitive Task difficulty 0.88
Bier etal. (2018) 27 Cognitive Task difficulty −0.05
Bier etal. (2018) 30 Cognitive Task difficulty 0.26
Bier etal. (2018) 57 Cognitive Age 3.83
Bier etal. (2018) 57 Cognitive Age 0.61
Bier etal. (2018) 57 Cognitive Age −0.08
Bier etal. (2018) 57 Cognitive Age −0.23
Bier etal. (2018) 57 Cognitive Age −2.07
Bier etal. (2018) 57 Cognitive Age −0.77
Bier etal. (2018) 57 Cognitive Age −1.10
Bier etal. (2018) 57 Cognitive Age −0.87
Bier etal. (2018) 14 Cognitive Pre/post 0.98
Bier etal. (2018) 13 Cognitive Pre/post 0.06
Bier etal. (2018) 15 Cognitive Pre/post 1.42
Bier etal. (2018) 15 Cognitive Pre/post 0.85
Bier etal. (2018) 14 Cognitive Pre/post 0.58
Bier etal. (2018) 13 Cognitive Pre/post 0.19
Bier etal. (2018) 15 Cognitive Pre/Post −0.51
Bier etal. (2018) 15 Cognitive Pre/post −0.54
Bier etal. (2018) 14 Cognitive Pre/post −1.18
Bier etal. (2018) 13 Cognitive Pre/post −1.69
Bier etal. (2018) 15 Cognitive Pre/Post −1.77
Bier etal. (2018) 15 Cognitive Pre/Post −1.31
Bier etal. (2018) 14 Cognitive Pre/post −0.55
Bier etal. (2018) 13 Cognitive Pre/Post −0.18
Bier etal. (2018) 15 Cognitive Pre/post −0.39
Bier etal. (2018) 15 Cognitive Pre/post −0.08
Buttussi and Chittaro (2018) 96 Cognitive Immersiveness 0.12
Buttussi and Chittaro (2018) 96 Cognitive Immersiveness 1.00
Buttussi and Chittaro (2018) 96 Cognitive Immersiveness −0.88
Buttussi and Chittaro (2018) 96 Cognitive Pre/post 6.26
Buttussi and Chittaro (2018) 96 Cognitive Pre/post 5.62
Buttussi and Chittaro (2018) 96 Cognitive Pre/post 6.50
Chan etal. (2011) 8 Physical Immersiveness 1.65
Chan etal. (2011) 4 Physical Pre/post −2.07
Ganier etal. (2014) 42 Mixed Immersiveness 1.17
Ganier etal. (2014) 42 Mixed Immersiveness −1.14
Gavish etal. (2015) 20 Mixed Immersiveness 0.28
Gavish etal. (2015) 20 Mixed Immersiveness 0.28
Gavish etal. (2015) 20 Mixed Immersiveness −0.21
Gavish etal. (2015) 20 Mixed Immersiveness 0.00
Gerson and Van Dam (2003) 16 Mixed Immersiveness −1.12
Gonzalez- Franco etal. (2016) 24 Mixed Immersiveness −0.12
Gonzalez- Franco etal. (2016) 24 Mixed Immersiveness −0.58
Hamblin (2005) 18 Mixed Immersiveness 0.06
Hamblin (2005) 18 Mixed Immersiveness −1.67
Hamblin (2005) 18 Mixed Immersiveness −2.30
Hamblin (2005) 18 Mixed Immersiveness −0.34
Hamblin (2005) 18 Mixed Immersiveness −3.70
Hamblin (2005) 18 Mixed Immersiveness −2.13
Martín- Gutiérrez etal. (2010) 49 Mixed Immersiveness 0.63
Martín- Gutiérrez etal. (2010) 49 Mixed Immersiveness 0.51
Martín- Gutiérrez etal. (2010) 25 Mixed Pre/post 1.02
Martín- Gutiérrez etal. (2010) 25 Mixed Pre/post 1.27
Kober etal. (2013) 11 Physical Pre/post 0.21
Kober etal. (2013) 11 Physical Pre/post 2.92
Lee etal. (2015) 24 Physical Immersiveness 0.07
Lee etal. (2015) 24 Physical Immersiveness 0.07
Lee etal. (2015) 24 Physical Immersiveness 0.04
Lee etal. (2015) 24 Physical Immersiveness 0.04
Lee etal. (2015) 24 Physical Immersiveness 0.25
Lee etal. (2015) 24 Physical Immersiveness 0.26
Lee etal. (2015) 24 Physical Immersiveness 0.03
Lee etal. (2015) 24 Physical Immersiveness 0.03
Lee etal. (2015) 24 Physical Immersiveness 0.49
Lee etal. (2015) 12 Physical Pre/post −0.38
Lee etal. (2015) 12 Physical Pre/post −0.38
Lee etal. (2015) 12 Physical Pre/post −0.42
Lee etal. (2015) 12 Physical Pre/post −0.42
Lee etal. (2015) 12 Physical Pre/post −0.49
Lee etal. (2015) 12 Physical Pre/post −0.49
Lee etal. (2015) 12 Physical Pre/post −0.51
Lee etal. (2015) 12 Physical Pre/pst −0.51
Lee etal. (2015) 12 Physical Pre/post 1.41
Ma etal. (2011) 33 Physical Immersiveness −0.73
Ma etal. (2011) 33 Physical Immersiveness 0.45
Ma etal. (2011) 33 Physical Immersiveness −0.24
Ma etal. (2011) 33 Physical Immersiveness 0.00
Ma etal. (2011) 33 Physical Immersiveness −0.28
Ma etal. (2011) 33 Physical Immersiveness −0.53
Ma etal. (2011) 33 Physical Immersiveness −0.4
Ma etal. (2011) 33 Physical Immersiveness 0.46
Ma etal. (2011) 33 Physical Immersiveness 0.10
Ma etal. (2011) 33 Physical Immersiveness −0.76
Ma etal. (2011) 33 Physical Immersiveness −0.16
Ma etal. (2011) 33 Physical Immersiveness −0.16
Ma etal. (2011) 33 Physical Immersiveness −0.02
Ma etal. (2011) 33 Physical Immersiveness 0.26
Ma etal. (2011) 33 Physical Immersiveness −0.10
Ma etal. (2011) 33 Physical Immersiveness −0.08
Ma etal. (2011) 33 Physical Immersiveness −0.24
Ma etal. (2011) 33 Physical Immersiveness −0.73
Ma etal. (2011) 33 Physical Immersiveness −0.42
Ma etal. (2011) 33 Physical Pre/post −0.71
Ma etal. (2011) 33 Physical Pre/post 0.61
Ma etal. (2011) 33 Physical Pre/post −0.3
Ma etal. (2011) 33 Physical Pre/ppost 0.38
Ma etal. (2011) 33 Physical Pre/post 0.10
Ma etal. (2011) 33 Physical Pre/post −0.25
Ma etal. (2011) 33 Physical Pre/post 0.18
Ma etal. (2011) 33 Physical Pre/post −0.10
Ma etal. (2011) 33 Physical Pre/post −0.44
Ma etal. (2011) 33 Physical Pre/post −0.13
Ma etal. (2011) 33 Physical Pre/post −0.33
Ma etal. (2011) 33 Physical Pre/post 0.12
Ma etal. (2011) 33 Physical Pre/post 0.04
Ma etal. (2011) 33 Physical Pre/post 0.03
Ma etal. (2011) 33 Physical Pre/post 0.03
Ma etal. (2011) 33 Physical Pre/post −0.18
Ma etal. (2011) 33 Physical Pre/post 0.00
Ma etal. (2011) 33 Physical Pre/post −0.45
Ma etal. (2011) 33 Physical Pre/post −0.33
Macchiarella (2004) 96 Cognitive Immersiveness −0.05
Macchiarella (2004) 96 Cognitive Immersiveness −0.44
Macchiarella (2004) 96 Cognitive Immersiveness −0.82
Macchiarella (2004) 96 Cognitive Immersiveness −0.39
Macchiarella (2004) 96 Cognitive Immersiveness −0.76
Madden etal. (2018) 172 Cognitive Immersiveness 0.11
Madden etal. (2018) 172 Cognitive Immersiveness 0.24
Madden etal. (2018) 56 Cognitive Pre/post 1.48
Prasertsakul etal. (2018) 8 Physical Immersiveness 0.61
Prasertsakul etal. (2018) 8 Physical Immersiveness 0.23
Prasertsakul etal. (2018) 8 Physical Immersiveness 0.90
Prasertsakul etal. (2018) 8 Physical Immersiveness 0.23
Prasertsakul etal. (2018) 8 Physical Immersiveness −0.01
Prasertsakul etal. (2018) 8 Physical Immersiveness −0.05
Prasertsakul etal. (2018) 8 Physical Immersiveness 0.07
Prasertsakul etal. (2018) 8 Physical Immersiveness −0.02
Prasertsakul etal. (2018) 8 Physical Immersiveness 0.93
Prasertsakul etal. (2018) 8 Physical Immersiveness 1.37
Prasertsakul etal. (2018) 4 Physical Pre/post 0.11
Prasertsakul etal. (2018) 4 Physical Pre/post 0.42
Prasertsakul etal. (2018) 4 Physical Pre/post −0.57
Prasertsakul etal. (2018) 4 Physical Pre/post −0.32
Prasertsakul etal. (2018) 4 Physical Pre/post −0.26
Prasertsakul etal. (2018) 4 Physical Pre/post 0.11
Prasertsakul etal. (2018) 4 Physical Pre/Post −0.15
Prasertsakul etal. (2018) 4 Physical Pre/post −0.03
Prasertsakul etal. (2018) 4 Physical Pre/post −0.22
Prasertsakul etal. (2018) 4 Physical Pre/post 0.32
Rose etal. (2000) 100 Physical Immersiveness 0.17
Smith etal. (2014) 26 Cognitive Immersiveness 0.72
Smith etal. (2014) 16 Cognitive Pre/post 0.56
Valimont etal. (2007) 32 Cognitive Immersiveness 0.45
Valimont etal. (2007) 32 Cognitive Immersiveness 0.69
Valimont etal. (2007) 32 Cognitive Immersiveness 1.01
Valimont etal. (2007) 32 Cognitive Immersiveness 0.51
Valimont etal. (2007) 32 Cognitive Immersiveness 0.59
Valimont etal. (2007) 32 Cognitive Immersiveness 0.67
Wang etal. (2014) 16 Mixed Immersiveness 0.35
Wang etal. (2014) 16 Mixed Immersiveness −1.51
Webel etal. (2013) 20 Mixed Immersiveness −1.51
Whitmer et al. (2019) 41 Cognitive Immersiveness −0.89
Yang etal. (2008) 20 Physical Immersiveness 1.08
Yang etal. (2008) 20 Physical Immersiveness 0.67
Yang etal. (2008) 20 Physical Immersiveness −0.99
Yang etal. (2008) 20 Physical Immersiveness −0.89
Yang etal. (2008) 20 Physical Immersiveness 0.61
Yang etal. (2008) 20 Physical Immersiveness 1.00
Yang etal. (2008) 20 Physical Immersiveness 0.47
Yang etal. (2008) 20 Physical Immersiveness 0.13
APPENDIX C: Forest plot of individual studies by task type/predictor clusters
Figure 4. Studies involving cognitive tasks.
Figure 5. Studies involving mixed tasks.
Figure 6. Studies involving physical tasks.
KEY POINTS

• Performance after training in VR/AR is generally comparable to performance after training in a traditional setting.
• The population being trained, and the task being trained upon, can affect whether VR/AR is an effective medium for training.
• The field of research is too disparate to determine precisely which factors contribute to better training transfer from VR/AR.
ORCID iD

Alexandra D. Kaplan https://orcid.org/0000-0003-0051-0150
Mica Endsley https://orcid.org/0000-0002-2359-947X
P. A. Hancock https://orcid.org/0000-0002-4936-066X
REFERENCES
References marked with an asterisk indicate studies included in the
meta-analysis.
*Andersen, S. A. W., Konge, L., & Sørensen, M. S. (2018). The effect
of distributed virtual reality simulation training on cognitive load
during subsequent dissection training. Medical Teacher, 40,
684–689.
*Andersen, S. A. W., Mikkelsen, P. T., Konge, L., Cayé-Thomasen,
P., & Sørensen, M. S. (2016). Cognitive load in mastoidectomy
skills training: Virtual reality simulation and traditional dissection
compared. Journal of Surgical Education, 73, 45–50.
Azuma, R. T. (1997). A survey of augmented reality. Presence:
Teleoperators and Virtual Environments, 6, 355–385.
*Bailey, S. K. T., Johnson, C. I., Schroeder, B. L., & Marraffino, M. D. (2017). Using virtual reality for training maintenance procedures [Conference session]. Proceedings of the Interservice/Industry Training, Simulation & Education Conference (I/ITSEC), Orlando, FL.
*Bier, B., Ouellet, É., & Belleville, S. (2018). Computerized
attentional training and transfer with virtual reality: Effect of age
and training type. Neuropsychology, 32, 597–614.
Blume, B. D., Ford, J. K., Baldwin, T. T., & Huang, J. L. (2010).
Transfer of training: A meta- analytic review. Journal of
Management, 36, 1065–1105.
Boud, A. C., Haniff, D. J., Baber, C., & Steiner, S. J. (1999). Virtual
reality and augmented reality as a training tool for assembly
tasks [Conference session]. IEEE International Conference
on Information Visualization (Cat. No. PR00210), London,
England, 32–36.
*Buttussi, F., & Chittaro, L. (2018). Effects of different types of
virtual reality display on presence and learning in a safety
training scenario. IEEE Transactions on Visualization and
Computer Graphics, 24, 1063–1076.
*Chan, J. C. P., Leung, H., Tang, J. K. T., & Komura, T. (2011). A virtual
reality dance training system using motion capture technology.
IEEE Transactions on Learning Technologies, 4, 187–195.
Drascic, D., & Milgram, P. (1996, April). Perceptual issues in
augmented reality. Stereoscopic Displays and Virtual Reality
Systems III, 2653, 123–135.
Feiner, S., Macintyre, B., & Seligmann, D. (1993). Knowledge-based augmented reality. Communications of the ACM, 36, 53–62.
Fletcher, J. D., Belanich, J., Moses, F., Fehr, A., & Moss, J. (2017). Effectiveness of augmented reality & augmented virtuality [Conference session]. Proceedings of the MODSIMWORLD Conference, Virginia Beach, VA.
*Ganier, F., Hoareau, C., & Tisseau, J. (2014). Evaluation of
procedural learning transfer from a virtual environment to
a real situation: A case study on tank maintenance training.
Ergonomics, 57, 828–843.
*Gavish, N., Gutiérrez, T., Webel, S., Rodríguez, J., Peveri, M.,
Bockholt, U., & Tecchia, F. (2015). Evaluating virtual reality
and augmented reality training for industrial maintenance and
assembly tasks. Interactive Learning Environments, 23, 778–798.
*Gerson, L. B., & Van Dam, J. (2003). A prospective randomized
trial comparing a virtual reality simulator to bedside teaching for
training in sigmoidoscopy. Endoscopy, 35, 569–575.
Godden, D. R., & Baddeley, A. D. (1975). Context-dependent
memory in two natural environments: On land and underwater.
British Journal of Psychology, 66, 325–331.
Goldiez, B. F., Ahmad, A. M., & Hancock, P. A. (2007). Effects of augmented reality display settings on human wayfinding performance. IEEE Transactions on Systems, Man and Cybernetics, Part C, 37, 839–845.
*Gonzalez-Franco, M., Cermeron, J., Li, K., Pizarro, R., Thorn, J., Hutabarat, W., Hannah, P., Tiwari, A., & Bermell-Garcia, P. (2016). Immersive augmented reality training for complex manufacturing scenarios. arXiv:1602.01944.
Greenbaum, P. (1992, March). The lawnmower man. Film and Video,
9, 58–62.
*Hamblin, C. J. (2005). Learning transfer in virtual environments [Unpublished doctoral dissertation]. Wichita State University.
Hancock, P. A. (2009). The future of simulation. In D. Vincenzi, J.
Wise, M. Mouloua, & P. A. Hancock (Eds.), Human Factors in
Simulation and Training (pp. 169–186). CRC Press.
Hancock, P. A. (2017). On bored to Mars. Journal of Astro- Sociology,
2, 103–120.
Hancock, P. A., & Hoffman, R. R. (2015). Keeping up with intelligent
technology. IEEE Intelligent Systems, 30, 62–65.
Hancock, P. A., Sawyer, B. D., & Stafford, S. (2015). The effects of
display size on performance. Ergonomics, 58, 337–354.
Haque, S., & Srinivasan, S. (2006). A meta- analysis of the training
effectiveness of virtual reality surgical simulators. IEEE
Transactions on Information Technology in Biomedicine, 10,
51–58.
Heim, M. (1998). Virtual Realism. Oxford.
Holding, D. H. (Ed.). (1989). Human Skills. John Wiley & Sons
Incorporated.
*Kober, S., Wood, G., Hofer, D., Kreuzig, W., Kiefer, M., & Neuper,
C. (2013). Virtual reality in neurologic rehabilitation of spatial
disorientation. Journal of Neuroengineering and Rehabilitation,
10, 17–30.
Kozak, J. J., Hancock, P. A., Arthur, E. J., & Chrysler, S. T. (1993).
Transfer of training from virtual reality. Ergonomics, 36, 777–
784.
Lackey, S. J., Salcedo, J. N., Szalma, J. L., & Hancock, P. A. (2016). The stress and workload of virtual reality training: The effects of presence, immersion and flow. Ergonomics, 59, 1060–1072.
*Lee, H. Y., Kim, Y. L., & Lee, S. M. (2015). Effects of virtual reality-based training and task-oriented training on balance performance in stroke patients. Journal of Physical Therapy Science, 27, 1883–1888.
*Ma, H.-I., Hwang, W.-J., Fang, J.-J., Kuo, J.-K., Wang, C.-Y.,
Leong, I.-F., & Wang, T.-Y. (2011). Effects of virtual reality
training on functional reaching movements in people with
Parkinson’s disease: a randomized controlled pilot trial. Clinical
Rehabilitation, 25, 892–902.
*Macchiarella, N. D. (2004). Effectiveness of video-based augmented reality as a learning paradigm for aerospace maintenance training [Unpublished doctoral dissertation]. Nova Southeastern University.
*Madden, J. H., Won, A. S., Schuldt, J. P., Kim, B., Pandita, S., Sun, Y., Stone, T. J., & Holmes, N. G. (2018). Virtual reality as a teaching tool for moon phases and beyond [Conference session]. Proceedings of the Physics Education Research Conference, Washington, D.C.
Marras, W. S., & Hancock, P. A. (2014). Putting mind and body back
together: A human- systems approach to the integration of the
physical and cognitive dimensions of task design and operations.
Applied Ergonomics, 45, 55–60.
*Martín-Gutiérrez, J., Luís Saorín, J., Contero, M., Alcañiz, M., Pérez-López, D. C., & Ortega, M. (2010). Design and validation of an augmented book for spatial abilities development in engineering students. Computers & Graphics, 34, 77–91.
Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality
visual displays. Institute of Electronics, Information, and
Communication Engineers Transactions on Information and
Systems, 77, 1321–1329.
Milgram, P., Takemura, H., Utsumi, A., & Kishino, F. (1995, December). Augmented reality: A class of displays on the reality-virtuality continuum. In Telemanipulator and telepresence technologies (Vol. 2351, pp. 282–293).
Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred
reporting items for systematic reviews and meta- analyses: The
PRISMA statement. Annals of Internal Medicine, 151, 264–269.
*Prasertsakul, T., Kaimuk, P., Chinjenpradit, W., Limroongreungrat, W., & Charoensuk, W. (2018). The effect of virtual reality-based balance training on motor learning and postural control in healthy adults: A randomized preliminary study. Biomedical Engineering Online, 17, 124–141.
Rantanen, E. M., & Talleur, D. A. (2005, September). Incremental
transfer and cost effectiveness of groundbased ight trainers
in University aviation programs. Proceedings of the Human
Factors and Ergonomics Society Annual Meeting, 49, 764–
768.
Roscoe, S. N. (1971). Incremental transfer effectiveness. Human
Factors: The Journal of the Human Factors and Ergonomics
Society, 13, 561–567.
*Rose, F. D., Attree, E. A., Brooks, B. M., Parslow, D. M., & Penn,
P. R. (2000). Training in virtual environments: Transfer to real
world tasks and equivalence to real task training. Ergonomics,
43, 494–511.
Salas, E., Bowers, C. A., & Rhodenizer, L. (1998). It is not how much
you have but how you use it: Toward a rational use of simulation
to support aviation training. The International Journal of Aviation
Psychology, 8, 197–208.
Schroeder, R. (1996). Possible worlds: The social dynamic of virtual
reality technologies. Westview Press.
*Smith, M. J., Ginger, E. J., Wright, K., Wright, M. A., Taylor, J.
L., Humm, L. B., Olsen, D. E., Bell, M. D., & Fleming, M. F.
(2014). Virtual reality job interview training in adults with
autism spectrum disorder. Journal of Autism and Developmental
Disorders, 44, 2450–2463.
Steuer, J. (1992). Defining virtual reality: Dimensions determining telepresence. Journal of Communication, 42, 73–93.
Tulving, E., & Thomson, D. M. (1973). Encoding specicity and
retrieval processes in episodic memory. Psychological Review,
80, 352–373.
*Valimont, R. B., Gangadharan, S., Vincenzi, D., & Majoros, A.
(2007). The effectiveness of augmented reality as a facilitator of
information acquisition in aviation maintenance applications.
Journal of Aviation/Aerospace Education & Research, 16, 35–43.
*Wang, Z., Ni, Y., Zhang, Y., Jin, X., Xia, Q., & Wang, H. (2014). Laparoscopic varicocelectomy: Virtual reality training and learning curve. JSLS: Journal of the Society of Laparoendoscopic Surgeons, 18, e2014.00258–7.
*Webel, S., Bockholt, U., Engelke, T., Gavish, N., Olbrich, M., &
Preusche, C. (2013). An augmented reality training platform
for assembly and maintenance skills. Robotics and Autonomous
Systems, 61, 398–403.
*Whitmer, D. E., Ullman, D., & Johnson, C. I. (2019). Virtual reality training improves real-world performance on a speeded task. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 63, 1218–1222.
Wilson, J. R. (1997). Virtual environments and ergonomics: Needs
and opportunities. Ergonomics, 40, 1057–1077.
*Yang, Y.-R., Tsai, M.-P., Chuang, T.-Y., Sung, W.-H., & Wang, R.-Y. (2008). Virtual reality-based training improves community ambulation in individuals with stroke: A randomized controlled trial. Gait & Posture, 28, 201–206.
Yoh, M. S. (2001). The reality of virtual reality [Conference session]. Proceedings Seventh International Conference on Virtual Systems and Multimedia, Berkeley, CA, IEEE, 666–674.
Alexandra D. Kaplan is currently a graduate student at
the University of Central Florida. She obtained a mas-
ter’s in applied experimental and human factors psy-
chology in 2019 from the University of Central Florida.
Jessica Cruit is currently a research scholar at the
University of Central Florida. She received her PhD
in human factors from Embry- Riddle Aeronautical
University in 2016.
Mica Endsley is the president of SA Technologies, Inc., and Chair, Air Force Scientific Advisory Board Study Panel on 21st Century Training and Education Technologies. She received her PhD in industrial and systems engineering from the University of Southern California in 1990.
Suzanne M. Beers is the OSD Test and Evaluation Portfolio Manager at the MITRE Corporation and a member of the Air Force Scientific Advisory Board. She received her PhD in electrical engineering from the Georgia Institute of Technology in 1996.
Ben D. Sawyer is currently an assistant professor and
lab director at the University of Central Florida. He
received a PhD in applied experimental and human
factors psychology at the University of Central Florida
in 2015.
P. A. Hancock is currently a Provost Distinguished Research Professor at the University of Central Florida, and member of the Air Force Scientific Advisory Board. He received his PhD in human performance from the University of Illinois in 1983 and a DSc in human–machine systems in 2001 from Loughborough University in Loughborough, England.
Date received: August 25, 2019
Date accepted: January 7, 2020