ACM Reference Format
Slater, M., Spanlang, B., Corominas, D. 2010. Simulating Virtual Environments within Virtual Environments
as the Basis for a Psychophysics of Presence. ACM Trans. Graph. 29, 4, Article 92 (July 2010), 9 pages.
DOI = 10.1145/1778765.1778829 http://doi.acm.org/10.1145/1778765.1778829.
Simulating Virtual Environments within Virtual Environments as the Basis for a
Psychophysics of Presence
Mel Slater
ICREA - Universitat de Barcelona &
University College London
Bernhard Spanlang
Universitat Politècnica de Catalunya &
Universitat de Barcelona
David Corominas
Universitat de Barcelona
Figure 1: Overview of the scenario: (A) rendered with real-time dynamic shadows and reflections; (B) with Gouraud shading.
Abstract
A new definition of immersion with respect to virtual environment
(VE) systems has been proposed in earlier work, based on the con-
cept of simulation. One system (A) is said to be more immersive
than another (B) if A can be used to simulate an application as
if it were running on B. Here we show how this concept can be
used as the basis for a psychophysics of presence in VEs, the sen-
sation of being in the place depicted by the virtual environment
displays (Place Illusion, PI), and also the illusion that events occur-
ring in the virtual environment are real (Plausibility Illusion, Psi).
The new methodology involves matching experiments akin to those
in color science. Twenty participants first experienced PI or Psi in
the initial highest level immersive system, and then in 5 different
trials chose transitions from lower to higher order systems and de-
clared a match whenever they felt the same level of PI or Psi as they
had in the initial system. In each transition they could change the
type of illumination model used, or the field-of-view, or the display
type (powerwall or HMD) or the extent of self-representation by
an avatar. The results showed that the 10 participants instructed to
choose transitions to attain a level of PI corresponding to that in the
initial system tended to first choose a wide field-of-view and head-
mounted display, and then ensure that they had a virtual body that
moved as they did. The other 10 in the Psi group concentrated far
more on achieving a higher level of illumination realism, although
having a virtual body representation was important for both groups.
This methodology is offered as a way forward in the evaluation of
the responses of people to immersive virtual environments, a uni-
fied theory and methodology for psychophysical measurement.
CR Categories: H.5.1 [Information Interfaces and Presenta-
tion]: Multimedia Information Systems—Artificial, augmented,
and virtual realities H.1.2 [Models and Principles]: User/machine
systems—Human Factors I.3.7 [Computer Graphics]: Three-
Dimensional Graphics and Realism—Virtual Reality
Keywords: immersive virtual environments, presence, place illu-
sion, plausibility, response function, Markov Chain
e-mail: melslater@ub.edu
1 Introduction
Immersive virtual environments (IVE) are typically employed to
place people within representations of physical reality - for exam-
ple, for training, various forms of rehabilitation, design, and enter-
tainment [Brooks 1999]. Yet a largely unexplored possibility is to
use an IVE system to simulate what can be experienced when us-
ing another type of system. This idea was apparently first exploited
in an experimental study of presence in virtual environments [Slater
et al. 1994], where, within a VE delivered through a head-mounted display (HMD), the participant was able to select and put on a virtual HMD that transferred them to a deeper-level environment. In this paper we
show how this capability of IVEs, the possibility of simulating one
type of IVE system with another, can be used as the foundation for
a psychophysical approach to the long studied concept of presence
in virtual environments, introducing a new method that avoids the
problems of both questionnaire studies and purely physiological or
behavioral approaches to measurement.
The concept of ‘immersion’ has previously been regarded as a way
to describe the technological capabilities of a virtual reality system
- e.g., system A is more 'immersive' than system B, other things being equal, if A has a wider field-of-view than B, or, say, A can generate real-time shadows and reflections but B only 'Gouraud shading', or A has head-tracking but B does not [Slater and Wilbur
1997; Draper et al. 1998]. A recent review of the concept of immer-
sion has extended this approach to the idea of a partial order over the
class of IVE systems based on an immersion relation [Slater 2009].
The 'immersion' relation between systems A and B, denoted by A ⪰ B, occurs when A can be used to build an application in which a participant would experience a simulation of that application as if running in system B. By definition A is at a higher level of immersion than B if A ⪰ B but not B ⪰ A. For example, in principle
it is possible to simulate the experience of being in a 4-sided Cave
system using a wide field-of-view head-tracked head-mounted dis-
play (HMD): a virtual environment can be built that is delivered
through such a HMD where a participant enters into a virtual Cave,
sees a dynamic virtual body representation that is a likeness of him-
or herself from an egocentric viewpoint, and experiences a virtual
environment running in that virtual Cave. Similarly, it is possi-
ble using a HMD to simulate a virtual environment delivered by a
powerwall type of display or even a desktop system. We say ‘in
principle’ since clearly this is based on a series of abstractions -
ignoring aspects such as the weight of the HMD compared to the
weight of shutter glasses that might be used for a Cave, differences
in display resolution, brightness, and so on. Of course this requires
a multimodal system exploiting not just vision but the auditory and
especially haptic modalities. However, it is the case that given suf-
ficient resources, each of the above simulations would be feasible
with even today’s technology. Such examples can alternatively be
regarded as thought experiments but mostly they would be realiz-
able.
The immersion relation imposes a partial order over any set of
IVE systems. This is based on the physical properties of each sys-
tem and the corresponding set of computer programs that enable
its use. However, in order to understand the utility of a system
for particular applications we need also to be able to assess how
participants respond to applications that are built with it. The con-
cept of ‘presence’ has for many years been thought to provide a
ubiquitous measure of one aspect of the subjective experience of
being in a virtual environment that applies across different appli-
cations and systems. Presence refers to the illusion of being in the
scene displayed by the IVE system, a concept developed in the early
1990s, for example [Held and Durlach 1992; Sheridan 1992].
Since then the concept has become diffuse, and has been thought
of as applying to a very wide range of different types of subjec-
tive response to mediated experience [Lombard and Ditton 1997].
Moreover, there has never been a unified and generally accepted
approach to the measurement of presence; rather, a set of different methods has been used (questionnaire-based, behavioral, physiological), each with its own set of problems.
2 Measuring Presence
Since presence has been thought of as a subjective experience, elic-
iting the strength of the feeling of ‘being there’ using questionnaires
has been one obvious approach to measurement. The paradigm
that developed was to carry out experiments where particular as-
pects of the virtual environment were manipulated and the resulting
questionnaire responses were based on Likert scales regarding how
much the participant felt themselves to ‘be there’ (e.g., scoring 1
for ‘not at all’ and 7 for ‘very much’). For examples see [Witmer
et al. 2005; Lessiter et al. 2001; Schubert et al. 2001] with a re-
view in [Sanchez-Vives and Slater 2005]. However, this approach,
certainly when used alone, has several problems: it does not seem
to be able to distinguish between an experience in reality and vir-
tual reality [Usoh et al. 2000], the measurements may be unstable
[Freeman et al. 1999], it has problems in actually assessing the con-
cept itself [Slater 2004], and there are methodological problems in
analyzing subjective rating data as if it were interval or ratio data
[Gardner and Martin 2007; Slater and Garau 2007].
If a person feels that they are in the scenario depicted by the IVE
then they should exhibit behavioral and physiological responses
concomitant with that feeling - i.e., they should have physiologi-
cal responses and behaviors as if they were there. This is the basis
for the application of virtual environments to real-world situations
such as training, rehearsal or psychotherapy, since if the participant
does not to some extent act as if they were ‘there’ then nothing
much useful could be gained from their IVE experience in relation
to these applications. In this type of approach an experiment is
designed in order to elicit a clearly measurable physiological or be-
havioral response, and then assess how this changes under various
experimental conditions. The typical measure used is based on the
physiological characteristics of stress since this is relatively easy to
identify (using arousal as measured by increasing skin conductance
responses, increase in heart-rate and decrease in heart-rate variabil-
ity). Skin conductance and heart rate were used to examine people’s
stress response to a visual cliff in [Meehan et al. 2002; Slater et al.
2009], and also the impact of different levels of latency on the expe-
rience [Meehan et al. 2003]. The effect of social interaction in IVEs
has also been studied extensively based on physiological measures,
for example [Slater et al. 2006].
Although the use of behavioral and physiological responses as sur-
rogates for presence is methodologically sound, this avoids rather
than solves the problem of conceptualizing and measuring pres-
ence, since a situation must be set up in the virtual reality that
would cause stress or some other clearly measurable physiological
response. Not every application is amenable to that, and it does not
make sense to deliberately add a stressful event into a virtual envi-
ronment scenario solely for the purpose of measurement. This still
leaves open the issue of a ubiquitous measure that applies across
different types of application and system.
3 Deconstructing Presence
In one review of the concept it was argued that presence should
be defined as the extent to which participants respond realistically
to virtual events and situations [Sanchez-Vives and Slater 2005]
rather than as their sense of ‘being there’. In this approach ‘re-
sponse’ is considered as multilevel, from low level automatic phys-
iological responses, through non-conscious behavioral reflexes, vo-
litional behavioral responses, emotional responses through to high
level cognitive responses and thoughts (including the sensation of
being there). This response profile is what defines presence, and the
more that the measured responses point in the same direction, i.e.,
consistent responses that point to the participant treating the virtual
reality as if it were real, the greater the degree of presence.
This approach helps to solve the conceptual problem of definition
and of measurement. Presence is exhibited when people behave as
they would in reality, and the extent to which this occurs is measurable in principle. In other words presence is identified with its operationalization as a measurable property of the actions of
people within IVEs compared with their expected or actually ob-
served behavior within similar real-world settings. However, if we
regard the actions of people as the surface manifestation of a deeper
quality of subjective experience, and if this quality itself can some-
how be measured, then we would have the basis for constructing
a theory, one that might predict when people are likely to respond
realistically. An appropriate theoretical framework well integrated
with empirical studies is an essential requirement for progress in
this field, and the area of presence research has been dogged by the
lack of any theoretical framework that also implicitly includes how
presence itself might be measured.
Moreover, lumping everything into ‘being there’ misses another
very important aspect of people’s experience. In physical reality,
for example, you know very well that you are there, but you can
encounter events that are not what they appear to be. For example,
you enter a room and see a person standing at the far end, and you
wave to them. Later you realize that there was no person there but
that it was a shop dummy. This happens in IVEs - for example, a
person responds realistically to a virtual character for a while, un-
til realizing that the character engages in repetitive or inappropriate
behaviors [Garau et al. 2008] and the credibility of the virtual en-
vironment situation is lost. This plausibility is separable from the
sensation of being there.
[Slater 2009] put forward the thesis that there are two orthogonal
components of presence to consider. The first is Place Illusion (PI)
the original idea of the sensation of being in the place depicted by
the VE. This is a quale, a quality of our subjective experience, like
seeing the ‘redness’ of the color red. The second is Plausibility
(Psi), the illusion that what is apparently happening is really hap-
pening. Both of these are known by the participant to be illusions,
but knowing that they are illusions does not extinguish them. It was
argued that when there is PI and Psi then ‘response as if real’ is
likely to occur.
The physical basis of PI was argued to be sensorimotor contingen-
cies that correspond to those of physical reality. When a person
perceives by carrying out actions that result in changes in (multi-
sensory) perception much as in physical reality, then the simplest
hypothesis for the brain to adopt is that what is being perceived is
actually there - i.e., that the person is in the place depicted by the
IVE. The physical basis of Psi was postulated to be the extent to
which the system is programmed to produce correlations with the
behavior of the participant, how much events in the IVE refer per-
sonally to the participant, and the overall credibility of the scenario
(in particular in relation to how a similar situation might be in phys-
ical reality).
Another important point about the framework presented in [Slater
2009] was the fusion of PI and Psi in the notion of a ‘virtual body’.
When you wear a head-tracked HMD for example, and look down
towards your own body, what do you see? Unless a virtual body has
been programmed, you have no body. The act of looking at your-
self is a natural movement with concomitant changes in perception,
and what you see determines a critical aspect of the realness of the
situation. This applies also in projection type systems such as a
Cave. As you move around the virtual environment depicted in a
Cave, ideally you would see not only your own real body but also
shadows and reflections of your body in the VE.
4 PI, Psi and Color: An Analogy
In the perception of color there is a physical basis, which is the
actual wavelength distribution of the light emitted and/or reflected
from a surface patch. However, the sensation of color depends on a
number of complex perceptual mechanisms. In the tristimulus the-
ory of color the average person’s response to light can be computed
by integrals of the wavelength distribution times response functions
for each of the ‘red’, ‘green’ and ‘blue’ cones - e.g., [Fairchild
2005]. In practice these response functions are determined em-
pirically through color matching experiments. It is impossible to
know how someone really experiences the color ‘red’, but we do
know that typically normally sighted people agree on what is ‘red’.
Moreover individuals can match a target color with a test patch by
additively mixing three primaries in order to make the patch appear
to have the same color as the target. In such experiments people are
never asked questions such as ‘How red is this color on a scale of 1
to 7?’ (which has been the approach used in presence) but instead
they are asked to carry out an action (e.g., turning dials on three pro-
jectors) to produce a surface patch, which has the same sensation of
color as the target. When this type of experiment is repeated several
times over several people, empirically based response functions can
be constructed that then are used in agreed international standards
on the representation of color. This works because patches that emit
or reflect light that have quite different wavelength distributions can
nevertheless be perceived as the same color by an individual (these
are called metamers). A particular color sensation can be thought of
as an equivalence class over an infinite number of different wave-
length distributions.
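To make the tristimulus description above concrete, here is a minimal numerical sketch; the spectrum and the three response curves are invented Gaussians purely for illustration, not the standardized CIE colour-matching functions:

```python
import numpy as np

# Wavelength samples (nm) and a hypothetical spectral power distribution S(lambda).
wavelengths = np.linspace(400, 700, 61)
d_lambda = wavelengths[1] - wavelengths[0]
S = np.exp(-((wavelengths - 600) ** 2) / (2 * 40 ** 2))  # placeholder spectrum

# Placeholder response functions for the three channels (illustrative only).
def response(mu, sigma):
    return np.exp(-((wavelengths - mu) ** 2) / (2 * sigma ** 2))

r_bar, g_bar, b_bar = response(600, 35), response(550, 40), response(450, 25)

# Channel responses: numerical integrals of the spectrum times each response function.
R = float(np.sum(S * r_bar) * d_lambda)
G = float(np.sum(S * g_bar) * d_lambda)
B = float(np.sum(S * b_bar) * d_lambda)

# Two spectra with different wavelength distributions but the same (R, G, B)
# would be metamers for this observer model.
print(R, G, B)
```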
There are an infinite number of possible physical realizations of
any VE application, depending on variations in the hardware and
associated computer programs. Some differences between different
realizations will be important, with respect to the associated PI sen-
sation that an individual might feel. Other differences may have no
influence. A particular PI sensation can therefore also be thought of
as representing an equivalence class amongst the set of physical re-
alizations that result in the same sensation of PI. The same applies
to Psi. Here two realizations being in the same equivalence class
would mean that the ‘average participant’ would match them.
A set of VE systems can be organized into a partial order using the ⪰ relation. Consider a particular realization of an application in a system S_0 such that S_0 ⪰ S_j, j = 1, ..., m. This provides a basis for matching experiments. Suppose that an individual spends some time in the application realized with S_0, and is asked to become aware of the corresponding feeling of PI that is experienced in that system and application. This is like looking first at the target color. Now the experimental subject has access to a set of virtual 'buttons' that when selected effect transitions to the various S_j, j = 1, ..., m. At each S_j the participant can form an assessment, matching their sensation of PI in S_j with the feeling that they had while experiencing S_0. If they find that the feeling is the same, then for this subject with respect to this environment, there is an equivalence between PI experienced in S_j and S_0. We refer to that as a match. More to the point, if the subject is placed in some S_j and asked to choose transitions and stop whenever they achieve the same feeling of PI that they had in S_0 then we can observe the sequence of transitions
that they make, in order to understand which are the most impor-
tant. This can be repeated several times over several subjects, each
time allowing for a different sequence of choices. From this data
it would be possible to estimate the probability distributions of a
match over the different configurations.
In practice attention would focus on a particular set of properties of interest that characterize a VE system, [s_1, s_2, ..., s_m]. For example s_1 might refer to the average frame rate achievable over the application, s_2 the number of degrees of freedom of head-tracking, and so on. We use the convention that if there are two systems S_0 and S and s_0i ≥ s_i for the ith property, then S_0 can realize a level of this property that is equal to or higher (e.g., greater frame rate) than S. We also assume that if a system can achieve level s_0i of property i, and s_0i ≥ s_i, then it can also achieve level s_i. If we refer to s_ji as the realization of the ith property under system S_j, then the ordering over the set of systems of interest implies that s_0i ≥ s_ji, for all i, j. In other words S_0 can simulate the application of interest, and also it can simulate the application as if it were running on systems S_j, j = 1, ..., m. We carried out an experiment, described in the next sections, to illustrate this methodology.
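A minimal sketch of the componentwise ordering described in this section; the property names and numbers are invented for illustration and are not taken from the experiment:

```python
from typing import Sequence

def dominates(s0: Sequence[float], s: Sequence[float]) -> bool:
    """True if system s0 realizes every property at a level >= that of s,
    i.e. s0_i >= s_i for all i (the componentwise ordering used in the text)."""
    return all(a >= b for a, b in zip(s0, s))

# Hypothetical property vectors: [frame rate (Hz), head-tracking DOF, field-of-view (deg)].
S0 = [60, 6, 150]   # reference system
S1 = [60, 3, 60]    # e.g. a desktop-like system
S2 = [120, 6, 90]

print(dominates(S0, S1))  # True: S0 can simulate the application as if running on S1
print(dominates(S0, S2))  # False: S2 has a higher frame rate, so S0 and S2 are incomparable
# The relation is only a partial order: not every pair of systems is comparable.
```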
5 The Experiment
5.1 Recruitment
Twenty participants (10 of them males) were recruited by adver-
tisement through the university campus. Their average age was
27 ± 8 (S.D.) years. Only 2 had any prior experience of virtual
reality. None of the participants had any prior knowledge of the
experiment or the general research of the group.
They were exposed, in various ways to be described below, to a virtual environment that consisted of a room of 4 m × 4 m × 2.8 m with various objects in it, as shown in Figure 1.
5.2 Materials
Throughout the whole experiment a Fakespace Labs Wide5 HMD
was used, which has a field-of-view of 150° × 88° and an estimated 1600 × 1200 resolution. The software environment was XVR [Car-
rozzino et al. 2005], together with a hardware accelerated avatar
library (HALCA) [Gillies and Spanlang 2010]. The participant’s
head was tracked by an Intersense PC Tracker IS 900 system. The
joystick of the Intersense system was also used in one of the con-
ditions (see below). Tracking data was streamed to the VR system
via VRPN [Taylor et al. 2001] and used to turn the avatar’s head
and to adapt the viewpoint in the virtual environment according to
the participant’s head position and orientation.
The participants wore a tight fitting Velcro suit that had retroreflec-
tive markers attached that enabled our system to track the whole
body movements of our participants. The marker-based infrared
tracking system was a 12-camera Optitrack system from NaturalPoint (http://www.naturalpoint.com/optitrack) that could track, in our configuration, a volume of approximately 2.5 m (width) × 2.5 m (length) × 3 m (height). 2D marker infor-
mation was transferred from the cameras via USB to the Natural-
Point Arena motion capture software in which the dynamic skele-
tal configuration of the participants was reconstructed. The move-
ments are reconstructed at 100 Hz with millimeter accuracy. From
Arena the skeletal motion data was streamed to HALCA via the
NatNet protocol. The skeletal motion data was then mapped so that
the avatar posture matched that of our participant to a good degree.
The avatars (male and female) were from AXYZ-design (http://www.axyz-design.com).
Participants were asked to sit on the chair (which was located in
the center of the volume) but were allowed to make any movement
they wanted. The chair was also shown in the virtual environment
and registered in the same position as the real chair. They held the
joystick in their dominant hand throughout the experiment, but it
was only useful for the simulated powerwall condition described
below.
5.3 Properties
The property vector was S = [I, F, D, V], where I refers to the illumination model used (Gouraud shading, static global illumination, global illumination with dynamic changes), F the field-of-view (small or large), D the display type (simulated powerwall or HMD) and V the virtual body self-representation of the participant (none, static avatar, fully tracked avatar). It should be noted that only the Wide5 HMD was used throughout, and the system simulated each instance of S. Each of these properties is detailed below.
We call each instance of the property vector a configuration.
(I) Illumination
(I=0) Gouraud shading. In Gouraud shading mode the envi-
ronment was rendered without taking global illumination ef-
fects into account.
(I=1) Static global illumination. The environment was ren-
dered with view independent global illumination effects but
without dynamically changing shadows or reflections. This
illumination was achieved by a light tracing and texture bak-
ing approach from Mental Images in Autodesk Maya. If the
participant’s avatar was visible it did not cast any shadows and
was not reflected in the mirror.
(I=2) Dynamic global illumination. In this mode in addi-
tion to static shadows as described in the previous mode there
were dynamic soft shadows cast by the virtual character onto
the environment (using the GPU based percentage closer soft
shadows approach [Fernando 2005]) and the environment and
the participant’s avatar were reflected in the virtual mirror (us-
ing a stencil mirror approach [Kilgard 2000]).
(F) Field-of-view
The meaning of field-of-view depended on which of the two display
types were used (powerwall or HMD, see below).
(F=0) Small field-of-view. In the case of the powerwall the size was 1.25 m × 0.69 m. In the case of the HMD display the field-of-view was restricted to 60% of the full FOV.
(F=1) Large field-of-view. In the case of the powerwall the size was 2 m × 1.1 m. In the case of the HMD the full available FOV was used (150° × 88°).
(D) Display Type
(D=0) A simulated powerwall display. In the simulated pow-
erwall display the participant viewed the environment from
within a virtual viewing room that had a virtual back pro-
jected powerwall on one of its walls. The scenario room was
displayed on the virtual powerwall in stereo, and they were
seated about 1.35m away from it. Head-tracking was used
normally with respect to the viewing room, but the participant
could navigate through the environment displayed on the vir-
tual powerwall by using a joystick. The viewing room that
contained the powerwall was gray and illuminated by the vir-
tual powerwall.
(D=1) The head-mounted display. In the head mounted dis-
play mode the participant viewed the environment from a first
person view. The viewpoint was that of the avatar’s eyes. The
participant could look around the room using normal head-
movements.
(V) Virtual Body
(V=0) No virtual body. In this mode there was no avatar rep-
resentation.
(V=1) Static virtual body. In the static virtual body mode there
was an avatar that only rotated to match the direction that the
participant faced but otherwise did not move. The static avatar
appeared to be in a comfortable seated pose.
(V=2) Full body-tracked avatar. In the full body tracked avatar
mode the avatar’s pose was updated with the real-time whole
body tracking data of the optical tracking system.
Altogether there were 36 possible configurations: 3 types of illumination × 2 fields-of-view × 2 display types × 3 virtual body modes. Some examples are shown in Figures 2 and 3 and in the accompanying video.
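As a sanity check on the configuration space just described, the 36 configurations can be enumerated directly (a minimal sketch; the variable names are ours):

```python
from itertools import product

# Property levels as described in Section 5.3:
# I in {0,1,2} (Gouraud, static GI, dynamic GI), F in {0,1} (small, large FOV),
# D in {0,1} (simulated powerwall, HMD), V in {0,1,2} (no avatar, static, tracked).
configurations = list(product(range(3), range(2), range(2), range(3)))
assert len(configurations) == 36  # 3 x 2 x 2 x 3

FULL = (2, 1, 1, 2)  # the highest-level configuration, used as the reference system
print(configurations[:4])
```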
5.4 Procedures
When the experimental participants arrived at the laboratory they
were given an information sheet to read, and the experimental pro-
cedures were also explained to them verbally. They read and signed
an informed consent form; the experiment had been approved by
the institutional ethics review committee. They then were assisted
to put on the full body tracking suit and the HMD. The area of
the laboratory where they wore the body suit and HMD could be closed off from the rest of the laboratory by a black curtain, so that the participants would be in darkness once the experiment started.

Figure 2: Examples of the different scenario properties. (A) The participant wears a body suit. (B) Gouraud shading seen from the first-person perspective of the avatar. (C) Global illumination with a static avatar. (D) Global illumination with the avatar moving dynamically according to the tracking in (A).

Figure 3: The powerwall simulation. (A) Large screen with static avatar and global illumination. (B) Small screen and dynamic avatar with global illumination.
They were seated throughout. They put on the HMD and were left
to become accustomed to the displayed environment for 1.5 min-
utes. Then they were shown that it was possible to manipulate the
properties of the environment by changing each of the illumination,
field-of-view, display type and virtual body settings. They did not
change these settings themselves but the settings were verbally la-
beled, and the experimental operator changed them on request from
the subject. This continued until the participants were familiar with
all the possible settings and the transitions that they could make.
The verbal labels for transitions that they learned were: (I) ‘illumi-
nation’, (F) ‘display size’, (D) ‘navigation’, and (V) ‘avatar’. These
were taught in the order I, F, D and V, and no participant had trouble
learning them.
5.5 Transitions
After the period of acclimatization and training described above,
participants had 5 trials, each of which they started from a different
basic configuration, and then were encouraged to make transitions
and stop whenever they had reached a level of PI or Psi that they
felt was equivalent to that obtained in the full environment. The
starting configurations were as shown in Table 1. Whenever they
wished to make a transition to the next one, they would call out the
required transition using the previously learned transition labels.
Trial   Illumination (I)   Field of View (F)   Display Type (D)   Virtual Body (V)
1              1                  0                   0                 0
2              0                  0                   0                 1
3              0                  0                   0                 0
4              0                  0                   1                 0
5              0                  1                   0                 0

Table 1: The Basic Starting Conditions for the 5 Trials
In order to encourage participants to think carefully about their
transitions, and avoid the possibility that they would straight away
simply choose the full configuration [2,1,1,2] (which would have
made the problem trivial) we imposed the following rules:
• Transitions could only be in one direction - i.e., having chosen a higher level of one property they could not undo that and go backwards. For example, if they had made the transition from 'Gouraud shading' to 'static shadows' they could not later go back to 'Gouraud shading'. An additional reason for this was simplicity of the task, and also to limit the total number of actual transitions that would be possible.

• Only one-step transitions could be made. For example, they could not choose to jump directly from 'Gouraud shading' to 'dynamic shadows and reflections' but would need to first make a transition to 'static shadows'. (The resulting constrained move set is sketched in code after this list.)

• In order to avoid participants choosing transitions in a random order simply to get to the initial [2,1,1,2], we imposed a cost structure on the transitions. We told the subjects that they would start out with €10. Every transition would cost them €1. If they stopped too early, i.e., before they were in the PI or Psi state, they would lose €5. On the other hand, if they reached the desired state they would get a bonus of €5. We did not explain to them, and no subject actually asked, how we, the experimenters, would know which state they were in. The rule that we used in fact was that they lost the €5 if they stopped in less than 3 transitions (but this was not told to them). They were informed after each of the 5 trials only whether they had passed the test or not. Their final payment for the experiment was the maximum achieved amongst their 5 trials. The final payments in fact were €11 (3 subjects) and €12 (17 subjects).

• After the participants had chosen the configuration at which they had made a match they were asked to continue until they had completed 5 transitions.
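A minimal sketch of the constrained move set implied by these rules (one property at a time, never downwards, with the level caps of Section 5.3); the function name is ours:

```python
MAX_LEVEL = (2, 1, 1, 2)  # caps for [I, F, D, V] as in Section 5.3

def allowed_transitions(config):
    """Configurations reachable in one step under the rules above:
    exactly one property raised by one level, and never lowered."""
    moves = []
    for i, (level, cap) in enumerate(zip(config, MAX_LEVEL)):
        if level < cap:
            nxt = list(config)
            nxt[i] = level + 1
            moves.append(tuple(nxt))
    return moves

# From the Trial 3 start [0,0,0,0] there are four possible single-step upgrades.
print(allowed_transitions((0, 0, 0, 0)))
# From the full configuration no moves remain: it is absorbing.
print(allowed_transitions((2, 1, 1, 2)))  # []
```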
5.6 Experimental Design
While participants were experiencing the configuration [2,1,1,2]
(each property at the highest level) they were given one of the fol-
lowing two instructions:
(PI) Pay attention to your feeling that you are in that room that
you can see. Later we will ask you to try to get that feeling of
being in that room again.
(Psi) Pay attention to how real this feels. Later we will ask
you to try to get that feeling of reality again.
Half of the participants were given the instruction PI and the other
half Psi. Participants were assigned to one of the two groups by the order in which they arrived at the laboratory. Odd-numbered
subjects were assigned to the PI group and even numbers to the Psi
group.
Overall, participants spent 1.5 minutes first exploring the virtual
room in configuration [2,1,1,2]. Then they learned the various
possible transitions that they could make, normally taking 2-3 min-
utes for this.
Hence our experiment had one factor with two levels, PI or Psi, cor-
responding to the instruction that the participants had been given.
There were two different types of response (or dependent) variable.
The first was the 4-tuple [I, F, D, V] at which a participant declared a match (with PI or Psi). The second consisted of the transitions - i.e., the set of all transitions from configuration i (e.g., [1,1,0,1]) to another configuration j (e.g., [2,1,0,1]).
6 Results
6.1 Method of Analysis
We make the simplifying assumption that the results of the five tri-
als were statistically independent. There is not true independence
between these, however, since obviously the same person carried
out each of the 5 trials, and may have learned from trial to trial.
There are two reasons to suppose, however, that the independence
assumption may not have been violated. First, by design each trial
started from a different configuration, and therefore participants
were forced to think each trial anew. Second, empirically if we
let n_ij be the transition number at which the ith subject declared a match in the jth trial (i = 1, ..., 20; j = 1, ..., 5), we find no significant correlations between the columns of the matrix n. (The highest correlation is between trials 1 and 3 with r = 0.42, P = 0.06, the next highest is between trials 2 and 3 with r = 0.32, P = 0.17, and so
on). Hence although zero correlation between the trials does not
prove the strong requirement of their statistical independence, the
assumption is at least not contradicted empirically.
We follow two methods of analysis. First, we consider the configurations [I, F, D, V] at which participants declared a match in each of their 5 trials. From these we can estimate the joint probability distributions P(I = i, F = f, D = d, V = v | c) = p_c(i, f, d, v), where P(E | c) represents the probability of event E conditional on c (PI or Psi). From these probability distributions we can compute any marginal or other conditional distributions of interest. In particular we define π(i, f, d, v) = P(PI | i, f, d, v) and ψ(i, f, d, v) = P(Psi | i, f, d, v) as the conditional probabilities of a match being declared when the participant is experiencing configuration [i, f, d, v]. These can be computed using Bayes' Theorem from p_c(i, f, d, v).
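A minimal sketch of this estimation step, using made-up counts rather than the published data: p_c is estimated from the declared-match counts, and the conditional probabilities π and ψ reduce to matches divided by visits, which is the form the computation takes with empirical distributions (compare the Figure 4B example of 14 matches in 16 visits giving 0.875):

```python
from collections import Counter

# Hypothetical counts for one group (made-up stand-ins, not the published data):
# how often each configuration (I, F, D, V) was visited, and how often a match
# was declared there. In the experiment each group produced 50 stops in total
# (10 participants x 5 trials), which is the denominator for p_c.
visits  = Counter({(0, 1, 1, 0): 12, (0, 1, 1, 2): 9, (1, 1, 1, 2): 8})
matches = Counter({(0, 1, 1, 0): 2,  (0, 1, 1, 2): 5, (1, 1, 1, 2): 7})

total_matches = sum(matches.values())

# p_c(i, f, d, v): probability that a declared match falls at this configuration.
p_c = {cfg: m / total_matches for cfg, m in matches.items()}

# pi (or psi): conditional probability of a match given that the configuration
# is being experienced, estimated as matches / visits.
pi = {cfg: matches[cfg] / visits[cfg] for cfg in visits}

print(p_c)
print(pi)
```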
Second, we consider the transitions as a Markov Chain over the configurations of the system. That is, we assume that the probability of choosing a transition to any particular (allowable) configuration depends only on the current configuration, and not on prior history. Then using the results of all the transitions made by the subjects, we can estimate the transition matrix P_ij,c, the probability of a transition to configuration j given that the current configuration is i, where i and j range across the configurations and c is the condition of interest (PI or Psi). From the two resulting transition matrices it is easy to compute the probabilities of being in the various configurations after the successive transitions. Given the rules applied to the possible transitions, the configuration [2,1,1,2] is absorbing, since once that is reached no further transitions are possible.
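A minimal sketch of this second analysis, again with toy data (the real matrices are 36 × 36, estimated from 250 observed transitions per group): the transition matrix is estimated row-wise from counts, and propagating the start vector through successive multiplications gives the distribution after n transitions:

```python
import numpy as np
from itertools import product

# Index the 36 configurations (I, F, D, V).
configs = list(product(range(3), range(2), range(2), range(3)))
index = {cfg: k for k, cfg in enumerate(configs)}
n_cfg = len(configs)  # 36

# Hypothetical observed transitions for one group, as (from, to) configuration pairs.
observed = [
    ((0, 0, 0, 0), (0, 0, 1, 0)),
    ((0, 0, 1, 0), (0, 1, 1, 0)),
    ((0, 0, 0, 0), (0, 1, 0, 0)),
]

# Estimate the transition matrix row by row from counts. Rows that were never
# visited default to self-loops, which also makes [2,1,1,2] absorbing.
counts = np.zeros((n_cfg, n_cfg))
for a, b in observed:
    counts[index[a], index[b]] += 1

P = np.eye(n_cfg)
for r in range(n_cfg):
    total = counts[r].sum()
    if total > 0:
        P[r] = counts[r] / total

# Start vector u with all probability on [0,0,0,0]; u @ P^n is the distribution
# over configurations after n transitions.
u = np.zeros(n_cfg)
u[index[(0, 0, 0, 0)]] = 1.0

dist = u.copy()
for n in range(1, 6):
    dist = dist @ P
    print(n, dist[index[(0, 1, 1, 0)]])  # probability of being at (0,1,1,0) after n steps
```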
6.2 Probability Distributions
For any particular [i, f, d, v] we can estimate the probability of a
match in that configuration as the number of times that subjects
stopped in that configuration over the total number of stops. Each
subject carried out five trials, and stopped in each one. Hence the
denominator within each group (PI and Psi) is 50.
The two probability distributions p_c(i, f, d, v) (c = PI, Psi) are shown in Figure 4A (in A only configurations with at least one probability > 0.04 are shown). A Chi-Squared test on the difference between the two distributions shows that they are highly significantly different (P < 2.0 × 10^-6). (The Chi-Squared test
combined some frequencies together to avoid values of less than 5,
as is standard practice). The PI group chose the large display and
HMD together more often than the Psi group (88% compared to
60%), discussed later. The PI group’s most likely stopping config-
uration was to leave the illumination as Gouraud shading, but with
the full body tracked avatar. For the Psi group the most likely stop-
ping configuration was with no avatar, but with static shadows. The
next most likely stopping configuration was with dynamic illumi-
nation, a static avatar, with the small field-of-view HMD.
The participants chose their responses non-randomly under both
conditions. To see this, assume that the participants were choos-
ing their stopping configurations randomly. Then in Figure 4A
we should find a fairly uniform distribution amongst the stop-
ping configurations reached. In fact if we carry out a Chi-squared
test comparing each distribution with the theoretical uniform dis-
tribution, then random choice for the stopping configurations is
an inconceivable hypothesis (in both cases the significance level
P < 1.0 × 10^-9). In fact, were subjects making random choices
then we would have also expected the distributions for PI and Psi to
be similar, which is not the case.
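The two tests described above can be sketched as follows; the frequency vectors are invented placeholders for the pooled stopping frequencies of Figure 4A (each group contributed 50 stops in total):

```python
import numpy as np
from scipy.stats import chi2_contingency, chisquare

# Hypothetical stopping-frequency vectors over (pooled) configurations, one row per group.
freq_PI  = np.array([18, 9, 7, 6, 10])   # sums to 50
freq_Psi = np.array([5, 14, 12, 11, 8])  # sums to 50

# Test whether the PI and Psi stopping distributions differ.
chi2, p, dof, expected = chi2_contingency(np.vstack([freq_PI, freq_Psi]))
print(f"PI vs Psi: chi2={chi2:.2f}, p={p:.3g}")

# Test each group's distribution against the uniform (random-choice) hypothesis.
for name, freq in [("PI", freq_PI), ("Psi", freq_Psi)]:
    stat, p_uniform = chisquare(freq)  # default expected frequencies are uniform
    print(f"{name} vs uniform: chi2={stat:.2f}, p={p_uniform:.3g}")
```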
Figure 4B shows the probabilities π(i, f, d, v) and ψ(i, f, d, v), where only configurations with at least one n ≥ 10 and one p > 0
are shown. The meaning of Figure 4B is that, for example, for
those in the PI group, the configuration [1,1,1,2] was reached 16
times, and on 14 occasions the participant found a match on PI
compared to [2,1,1,2] leading to a probability of 0.875. It can
be seen that there are some very striking differences between the
two groups. For example, 89% of the 16 times that the Psi group
reached [2,0,1,1] there was a match for Psi, whereas there were
2 matches out of 4 in the PI group. There is a similar difference
between the two groups in [0,1,1,1] and [0,1,1,0].
6.3 Transitions
We constructed matrices of transition probabilities for the PI and
Psi groups. Each subject made 5 transitions in each of the 5 tri-
als leading to 250 transitions for each group. Recall that they were
asked to continue making transitions after they had declared their
match. Given the structure of the transitions possible these are
highly sparse matrices, with 58 and 61 non-zero entries respec-
tively. There are 36 possible configurations, but our main interest is
[0,0,0,0] (i.e., Gouraud shading, small field-of-view, powerwall,
and no avatar). From this start we consider the evolution of the
configurations reached for the two groups. Let u be the 1 × 36 vector corresponding to starting configuration [0,0,0,0], i.e., u has a 1 corresponding to this configuration and 0 elsewhere. Then if P is the transition matrix, the vectors uP^n (n = 1, 2, 3, ...) give the probability distributions over the configurations after n transitions.
Figure 5 shows the estimated probability distributions over the con-
figurations at each of the transitions. (Transition 6 would be to the
absorbing configuration [2,1,1,2]). There are early signs of the
difference between the two groups. At transition 1 all of the PI
group changed either the size of the display or changed from pow-
erwall to HMD. For the Psi group the transitions were 60% to the
HMD, 30% to the powerwall, but even at this early stage 10% to
improve the illumination to static shadows. By transition 3, consid-
ering the maxima of the distributions, the two groups are symmetric
- both have chosen the HMD and large display size, but the PI group has included the static avatar [0,1,1,1] and the Psi group the static shadows illumination [1,1,1,0]. At the fourth transition this symmetry is maintained ([0,1,1,2] compared to [2,1,1,0]) and again at the fifth transition ([1,1,1,2] compared with [2,1,1,1]). Given the cost structure of making transitions, the order is also important. The PI group tended to improve first the avatar to the best one and then turned attention to the illumination, whereas the Psi group tended to do the opposite.

Figure 4: Response functions. (A) p_c(i, f, d, v), the probability distributions over matching configurations. (B) π(i, f, d, v) and ψ(i, f, d, v), the conditional probabilities of a match in the given configuration. The pairs of numbers under the x-axis in B are the n's corresponding to the probabilities.
7 Discussion
Here we consider the claims of the theory outlined in Section 3.
Natural sensorimotor contingencies are important for PI. In this
experimental design participants could observe the virtual room on
a simulated powerwall manipulating their view by using a joystick,
or directly through the HMD changing their view by natural head
movements. Moreover, a wider field-of-view gives a better approx-
imation to natural sensorimotor contingencies than a more narrow
field-of-view - in the case of the direct HMD interface because head
movements would change the view in a way similar to physical real-
ity, whereas with a narrow field-of-view more head movements are
needed. In the PI group 88% chose to stop in a condition where both
HMD and wide field-of-view were chosen, compared with 60% in
the Psi group. This difference is significant (P < 4 × 10^-4, one-sided test of the hypothesis that the proportion for PI is greater
than for Psi). Additionally, considering the transitions shown in
Figure 5, by the second transition the probability that both large
field-of-view and HMD are chosen is 0.74 for the PI group and
0.49 for the Psi group. By the third transition these become 0.95
and 0.67 respectively. Finally, π(0,1,1,0) = 0.1905 (n = 42) whereas ψ(0,1,1,0) = 0.0588 (n = 34). These are the probabili-
ties that amongst all the times that the configuration [0,1,1,0] had
been reached (Gouraud shading, larger field-of-view and HMD, no
avatar) that a match had been chosen in that configuration. Again
the difference is significant (P = 0.035, one-sided test).
Correlations between self-actions and events are important for Psi.
In the scenario of this experiment there were no actual events ex-
cept those caused by the participants (i.e., body movement). For
these events to have counterparts in the virtual reality, the partici-
pant needs to have either a static body with reflections in the mirror
[2,,,1], or a dynamic body with or without reflections in the mir-
ror [,,,2]. In these cases, movements of the participant would
result in changes in the environment. For the cases with Gouraud
shading, ψ= 0.0179(n= 112),0.0455(n= 44),0.3571(n=
28) for the no avatar, static avatar and dynamic avatar respec-
tively. For the cases with static shadows, there is no change from
no avatar to static avatar (both are approximately 0.3), but again,
a large change to the dynamic avatar (ψ= 1, n = 8). Finally
in the case of illumination with shadows and reflections the values
are 0.1538 (n=26), 0.5789 (n=38) and 0.8571 (n=14) respectively.
(Each of these differences are significant, both P < 0.015). More-
over, ψ(0,1,1,2) is high, and so is ψ(2,0,1,1) (Figure 4B). Note
that for Psi, SCs are less important, some participants seemed will-
ing to sacrifice the larger display size (Figure 4, cases of the form
[,0,,]).
Illumination realism may be more important for Psi. We saw in the
discussion of the transitions (Figure 5) that those in the PI group
tended to first establish the wide field-of-view and HMD and then
gravitated towards obtaining an avatar, whereas those in the Psi
group gravitated more towards improving the illumination. Additionally, in
Figure 4B we can see that the second largest value of ψoccurs for
condition [2,0,1,1].
The fact that the illumination type was found to be less important
in the PI group is consistent with other evidence. In [Zimmons and
Panter 2003] a between-groups experiment assessed presence us-
ing questionnaires and physiological responses to the visual cliff
(the pit-room). Subjects experienced one of 5 types of illumination
model, ranging from Gouraud shading through to radiosity. Physio-
logical responses indicating stress increased significantly once sub-
jects saw the edge of the precipice over which they were virtually
standing. However, there were no significant differences between
the groups with respect to a presence rating scale nor with respect
to the physiological stress responses. In other words, illumination
realism apparently made no difference. However, in [Slater et al.
2009] it was found that on a presence rating scale there was a sig-
nificantly higher mean score for a group that experienced the pit
room with real-time ray tracing, compared with another group that
experienced it with only ray casting. This was also backed up with
physiological evidence showing a greater stress response for the ray
tracing group.
In this second experiment [Slater et al. 2009] participants were
endowed with a very simple virtual body, but one that only par-
tially moved in response to the participant’s overall movements
(e.g., leaning forward or swaying, and one arm movement). In
the case of the ray tracing group movements of the virtual body
were accompanied by real-time changes to shadows and reflec-
tions of that body in the environment. For the ray casting group
there were only static shadows. Note that both experiments used
a Virtual Research V8 HMD which has 60 degree diagonal FOV
- i.e., a small FOV compared to the current experiment. To con-
sider the experiment of [Zimmons and Panter 2003] we can com-
pare π(0,0,1,0) = 0.0714 (n = 28) (Gouraud shading, small FOV, HMD, no avatar) with π(1,0,1,0) = 0 (n = 12) (static shadows, approximately equivalent to radiosity). The difference between these two is not significant. However, for [Slater et al. 2009] we need to compare π(1,0,1,1) = 0 (n = 6) (static shadows, low FOV, HMD, static avatar) with π(2,0,1,1) = 0.5 (n = 4) (global
illumination). Of course the numbers in the second case are too
small to test significance, but they are at least consistent with the
results of the paper.
Additionally if we consider what might have happened if a large
field-of-view had been used in the first experiment, we find
π(0,1,1,0) = 0.1905 (n = 42) and π(1,1,1,0) = 0.5833 (n
= 24) (P = 0.0004). In other words, a specific prediction of our
method is that repeating the experiment described by [Zimmons and
Panter 2003] but using a field-of-view similar to that of the Wide5
HMD would result in a significant difference between the responses
to Gouraud shading compared with the radiosity-like illumination
method.
The virtual body is important for both PI and Psi. It is of great inter-
est that having a virtual body appears to be an important element for
PI. This was first suspected in the early days of presence research
and, as reported earlier, there was one experiment that looked at this issue. Evidence that it is of importance for Psi as well was discussed above, and also Figure 4B shows that one of the highest
probabilities for Psi is in the condition [0,1,1,2].
8 Conclusions
This paper is based on recent theoretical work described in [Slater
2009] that introduced the concept of immersion as a relation over
virtual reality systems, forming a partial order. The basis of immer-
sion is simulation, where one system may be used to simulate an
application as if it were running in another system. It is postulated
that probability distributions can be defined over a set of immersive
systems, that act as response functions, for sensations such as Place
Illusion and Plausibility. Each type of system can result in a certain
type of qualia, and the goal of the matching experiments is to find
the equivalent of ‘metamers’ in color science, i.e., configurations
that give rise to similar feelings, just as different wavelength distri-
butions can give rise to the same color sensation. We have shown
how this can be done with a relatively simple example, only manip-
ulating four aspects giving rise to 36 different abstract IVE systems.
We have shown that, depending on the matching criterion used (PI or Psi), the probability distributions are different, and that the re-
sults fit with some previous work, and also are consistent with the
theory put forward in [Slater 2009]. (Of course consistency does
not imply truth).
Figure 5: Probability distributions over the configurations after each transition for each of the PI and Psi groups.

Moreover, on the basis of the derived response functions π and ψ it would be possible to make predictions - one was made in this paper
about the effects of using a HMD with a larger field-of-view on
one experiment. We believe that this is the first empirically based
prediction that has ever been made in two decades of research into
the concept of presence in virtual environments. Many others could
be made on the basis of this methodology.
The caveats are that we are working with abstract systems - we did
not really use a powerwall, for example, but only a simulation of
one. We cannot know whether other factors such as changes in
resolution would cause any significant differences. However, the
advantage of our approach is that we can compare different sys-
tems deliberately abstracting away from many confounding effects.
It is not possible to do a scientific comparison of one physical sys-
tem with another, since too many factors change simultaneously,
not under experimental control. Moreover, the theory and associ-
ated methodology can make predictions, and these can be tested in
formal experimental studies with the physical systems under inves-
tigation. It should also be noted that we are not attempting here
to replace current methods; rather, we provide empirically based response functions π and ψ that describe how the 'average participant' might
respond. These can be used to make predictions. In particular stud-
ies, presence (PI and Psi) could still be measured using question-
naires and physiological and behavioral responses, and the criterion
of ‘response as if real’. The results of these experiments could then
be compared with predictions of the underlying theory. Finally, we
put forward this methodology as a program of research, where dif-
ferent laboratories could collaborate in order to form an agreed set
of probability distributions, based on a much wider sample of data
than one lab alone can gather, and using a number of different sys-
tems.
Acknowledgements
This work was developed under the FET project PRESENCCIA
(27731) and the ERC project TRAVERSE (227985).
References
BROOKS, F. P. 1999. What's real about virtual reality? IEEE Comput. Graph. Appl. 19, 6, 16–27.

CARROZZINO, M., TECCHIA, F., BACINELLI, S., CAPPELLETTI, C., AND BERGAMASCO, M. 2005. Lowering the development time of multimodal interactive application: the real-life experience of the XVR project. In Proceedings of the 2005 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology, 270–273.

DRAPER, J. V., KABER, D. B., AND USHER, J. M. 1998. Telepresence. Human Factors 40, 3, 354–375.

FAIRCHILD, M. D. 2005. Color Appearance Models, 2nd ed. Wiley-IS&T, Chichester, UK.

FERNANDO, R. 2005. Percentage-closer soft shadows. In SIGGRAPH '05: ACM SIGGRAPH 2005 Sketches, ACM, New York, NY, USA, 35.

FREEMAN, J., AVONS, S. E., PEARSON, D. E., AND IJSSELSTEIJN, W. A. 1999. Effects of sensory information and prior experience on direct subjective ratings of presence. Presence: Teleoperators and Virtual Environments 8, 1, 1–13.

GARAU, M., FRIEDMAN, D., WIDENFELD, H. R., ANTLEY, A., BROGNI, A., AND SLATER, M. 2008. Temporal and spatial variations in presence: Qualitative analysis of interviews from an experiment on breaks in presence. Presence: Teleoperators and Virtual Environments 17, 3, 293–309.

GARDNER, H. J., AND MARTIN, M. A. 2007. Analyzing ordinal scales in studies of virtual environments: Likert or lump it! Presence: Teleoperators and Virtual Environments 16, 4, 439–446.

GILLIES, M., AND SPANLANG, B. 2010. Comparing and evaluating real-time character engines for virtual environments. Presence: Teleoperators and Virtual Environments, in press.

HELD, R. M., AND DURLACH, N. I. 1992. Telepresence. Presence: Teleoperators and Virtual Environments 1, 1, 109–112.

KILGARD, M. 2000. Improving shadows and reflections via the stencil buffer. Tech. rep., NVIDIA.

LESSITER, J., FREEMAN, J., KEOGH, E., AND DAVIDOFF, J. 2001. A cross-media presence questionnaire: The ITC-Sense of Presence Inventory. Presence: Teleoperators and Virtual Environments 10, 3, 282–297.

LOMBARD, M., AND DITTON, T. 1997. At the heart of it all: The concept of presence. Journal of Computer-Mediated Communication 3, 2, online journal.

MEEHAN, M., INSKO, B., WHITTON, M., AND BROOKS, JR., F. P. 2002. Physiological measures of presence in stressful virtual environments. ACM Transactions on Graphics 21, 3, 645–652.

MEEHAN, M., RAZZAQUE, S., WHITTON, M. C., AND BROOKS, JR., F. P. 2003. Effect of latency on presence in stressful virtual environments. In VR '03: Proceedings of the IEEE Virtual Reality 2003, IEEE Computer Society, Washington, DC, USA, 141–148.

SANCHEZ-VIVES, M. V., AND SLATER, M. 2005. From presence to consciousness through virtual reality. Nature Reviews Neuroscience 6, 4, 332–339.

SCHUBERT, T., FRIEDMANN, F., AND REGENBRECHT, H. 2001. The experience of presence: Factor analytic insights. Presence: Teleoperators and Virtual Environments 10, 3, 266–282.

SHERIDAN, T. 1992. Musings on telepresence and virtual presence. Presence: Teleoperators and Virtual Environments 1, 1, 120–126.

SLATER, M., AND GARAU, M. 2007. The use of questionnaire data in presence studies: Do not seriously Likert. Presence: Teleoperators and Virtual Environments 16, 4, 447–456.

SLATER, M., AND WILBUR, S. 1997. A framework for immersive virtual environments (FIVE): Speculations on the role of presence in virtual environments. Presence: Teleoperators and Virtual Environments 6, 6, 603–617.

SLATER, M., USOH, M., AND STEED, A. 1994. Depth of presence in immersive virtual environments. Presence: Teleoperators and Virtual Environments 3, 2, 130–144.

SLATER, M., ANTLEY, A., DAVISON, A., SWAPP, D., GUGER, C., BARKER, C., PISTRANG, N., AND SANCHEZ-VIVES, M. V. 2006. A virtual reprise of the Stanley Milgram obedience experiments. PLoS ONE 1 (doi:10.1371/journal.pone.0000039).

SLATER, M., KHANNA, P., MORTENSEN, J., AND YU, I. 2009. Visual realism enhances realistic response in an immersive virtual environment. IEEE Computer Graphics and Applications 29, 3, 76–84.

SLATER, M. 2004. How colorful was your day? Why questionnaires cannot assess presence in virtual environments. Presence: Teleoperators and Virtual Environments 13, 4, 484–493.

SLATER, M. 2009. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos Trans R Soc Lond B Biol Sci 364, 1535 (Dec), 3549–3557.

TAYLOR, II, R. M., HUDSON, T. C., SEEGER, A., WEBER, H., JULIANO, J., AND HELSER, A. T. 2001. VRPN: A device-independent, network-transparent VR peripheral system. In VRST '01: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, ACM, New York, NY, USA, 55–61.

USOH, M., CATENA, E., ARMAN, S., AND SLATER, M. 2000. Using presence questionnaires in reality. Presence: Teleoperators and Virtual Environments 9, 5, 497–503.

WITMER, B. G., JEROME, C. J., AND SINGER, M. J. 2005. The factor structure of the presence questionnaire. Presence: Teleoperators and Virtual Environments 14, 3, 298–312.

ZIMMONS, P., AND PANTER, A. 2003. The influence of rendering quality on presence and task performance in a virtual environment. In VR '03: Proceedings of the IEEE Virtual Reality 2003, IEEE Computer Society, Washington, DC, USA, 293–293.