Betty J. Mohler*
Max Planck Institute for
Biological Cybernetics
72076 Tübingen, Germany
Sarah H. Creem-Regehr
Department of Psychology
University of Utah
William B. Thompson
School of Computing
University of Utah
Heinrich H. Bülthoff
Max Planck Institute for Biological
Cybernetics
Department of Brain and
Cognitive Engineering
Korea University
Presence, Vol. 19, No. 3, June 2010, 230–242
©2010 by the Massachusetts Institute of Technology
The Effect of Viewing a Self-Avatar on Distance Judgments in an HMD-Based Virtual Environment
Abstract
Few HMD-based virtual environment systems display a rendering of the user’s own
body. Subjectively, this often leads to a sense of disembodiment in the virtual world.
We explore the effect of being able to see one’s own body in such systems on an
objective measure of the accuracy of one form of space perception. Using an action-
based response measure, we found that participants who explored near space while
seeing a fully-articulated and tracked visual representation of themselves subsequently
made more accurate judgments of absolute egocentric distance to locations ranging
from 4 m to 6 m away from where they were standing than did participants who
saw no avatar. A nonanimated avatar also improved distance judgments, but by a
lesser amount. Participants who viewed either animated or static avatars positioned
3 m in front of their own position made subsequent distance judgments with similar
accuracy to the participants who viewed the equivalent animated or static avatar
positioned at their own location. We discuss the implications of these results on
theories of embodied perception in virtual environments.
1 Introduction
The classic view of visual space perception involves what is sometimes
called inverse optics, in which geometric analysis is used to infer the structure
of the world that is likely to have generated the sensed view of the world. An
alternate approach, arising from converging research in psychology and neuro-
science, considers the viewer’s body as central to the act of perceiving (Wilson,
2002; Barsalou, 2008; Proffitt, 2006). This body-based approach to perception
is sometimes referred to as embodied perception. The importance of body-based
perception in immersive virtual environments (IVEs) has been recognized for
some time (Slater & Usoh, 1994; Biocca, 1997a, 1997b; Hillis, 1999). How
the body is represented in IVEs has significance for how a user may perceive,
think, and interact within the environment. However, only recently has IVE
technology advanced sufficiently to easily allow incorporation of a user’s own
body into a high fidelity virtual simulation.
Explicit awareness of body-based information can potentially provide two
types of information that might be useful in spatial perception within the con-
text of locations beyond the body itself.
*Correspondence to betty.mohler@tuebingen.mpg.de.
Awareness of the body may serve to ground or anchor
the body’s position in space. Visual information about
body position might serve to establish a frame of ref-
erence, particularly in situations in which the body’s
position is ambiguous or when cues about location con-
flict. A second function of body awareness is to provide
a metric for scaling of absolute dimensions of space. The
body may provide metric scaling information through
cues such as familiar size or through visual-motor
feedback obtained when moving the body.
In this paper, we examine how prior experience view-
ing a realistic human avatar affects subsequent spatial
judgments. Since the effect of viewing such an avatar
on space perception might be affected by the sense a
user has of body ownership or by other aspects of the
simulation tying the virtual body to the user, we consid-
ered both static avatars and animated avatars coupled to
the user’s own body motions, along with avatars colo-
cated with the viewer’s virtual world position and avatars
positioned in front of the viewing location in the virtual
world. Two control conditions were included, a sim-
ple marking on the floor colocated with the viewer’s
location but otherwise having no visual similarity to
a person, and lines indicating the location and height
corresponding to the displaced avatars, but otherwise
having no human characteristics.
The spatial judgment evaluated was absolute ego-
centric distance to floor locations several meters away
from the observation point. Egocentric distance refers
to the interval between the viewpoint and an environ-
mental location, which differs from exocentric distances
which are intervals between two environmental loca-
tions. Absolute distance refers to distances represented in
some absolute scale, rather than being relative to other
distances. Multiple studies of the accuracy of absolute
egocentric distance judgments in HMD-based virtual
environments have reported that distances appeared to
be compressed in such environments, at least for loca-
tions beyond a few meters (e.g., Loomis & Knapp,
2003; Thompson et al., 2004). One motivation for
the study presented in this paper was to determine
whether the presence of realistic avatars might reduce
or eliminate this perceptual distortion.
2 Background
Avatars are the digital representation of humans
in online or virtual environments (Bailenson & Blas-
covich, 2004). Avatars are now in common usage in
applications ranging from online games to IVEs. The
majority of avatars are used in third person perspec-
tive contexts, representing views of either the user at a
distance or other actors in a simulation. A substantial
body of research shows that users interact with avatars
representing other animate entities in a simulation in a
manner similar in important ways to their interactions
with real people (e.g., Durlach & Slater, 2000; Slater
et al., 2006; Zhang, Yu, & Smith, 2006), as long as the
avatars exhibit behavior that appears to respond to the
user’s actions appropriately, with gaze direction being
particularly important (Bailenson, Beall, & Blascovich,
2002).
Less common is the use of first person perspective,
first person avatars (sometimes called self-avatars),
which allow the user to see her or his own body. Slater,
Usoh, and Steed (1995), while investigating alternative
ways to control locomotion in a virtual environment,
found that the subjective rating of presence was
enhanced in some circumstances if participants also
reported a subjective association with a virtual body
that was rendered as part of the simulation. In a similar
experiment but with higher quality graphics and a more
complete coupling of user movement to avatar ani-
mation, Usoh et al. (1999) concluded that “presence
correlates highly with the degree of association with the
virtual body,” and argued that “presence gains can be
had from tracking all limbs.” Lok, Naik, Whitton, and
Brooks (2003) used a task that involved manipulating
real objects while viewing a virtual simulation of the
same objects and either generic or faithful self-avatars
of the user’s hands. They evaluated both task perfor-
mance and subjective presence and concluded that the
fidelity of the motion of the avatars was more impor-
tant for a “believable” self-avatar than the visual fidelity
of the avatar, though users indicated a preference for
the more self-accurate avatar. Finally, immersive virtual
environments can be manipulated in ways that gen-
erate a conflict between first person and third person
perspectives for a user’s avatar and can be used to inves-
tigate “out-of-body” experiences (Lenggenhager, Tadi,
Metzinger, & Blanke, 2007) in which the observer feels
and acts as if they are in the visual location of the avatar.
Only a few studies have explored whether rendering
parts of a user’s body from a first person perspective in
a visually immersive environment affects space percep-
tion. Draper (1995) investigated the effect of a self-
avatar on spatial orientation and distance estimation
tasks, with equivocal results. There was no effect of the
presence of the self-avatar on a search and replace task,
possibly due to overall high performance, and a complex
interaction between the avatar and height of targets in a
perceived reachability task. Mohler, Bülthoff, Thomp-
son, and Creem-Regehr (2008), in a preliminary study
to the work presented here, showed that prior experi-
ence with a tracked self-avatar improved the accuracy of
distance judgments in a virtual environment and that the
effect was not due to visually attending to the ground on
which the user was standing when an avatar was present.
Ries, Interrante, Kaeding, and Anderson (2008) found
similar results, though they identified two possible con-
founds with their methodology, one due to participants
in their control condition not wearing a motion capture
suit and the other due to participants being allowed to
walk through the virtual hallway before giving distance
judgments, allowing feedback which likely resulted in
calibration of responses. In addition, Williams, Johnson,
Shores, and Narasimham (2008) found that viewing a
rendering of one’s static feet decreased the foreshort-
ening of a bisection task within an HMD-based virtual
environment.
There is evidence from the perception community
to support the important role of visual representations
of body parts in judgments of spatial position. The
dominance of vision in body representation was demon-
strated over 40 years ago with the finding of visual cap-
ture of felt body position using distorting prisms (Hay,
Pick, & Ikeda, 1965). When presented with a conflict
between the visual and proprioceptive position of one’s
arm, we have a strong tendency to resolve the conflict
in favor of the visual position, feeling the arm to be
where it is seen. Similar visual capture effects have also
been induced with artificial limbs such as the rubber
hand illusion in which a visible fake hand is stroked
simultaneously with an unseen real hand, and a person
often feels and acts as if the stroking is occurring at the
location of the fake hand (Botvinick & Cohen, 1998).
This effect has also been demonstrated recently in virtual
environments (Slater et al., 2007; Slater, Perez-Marcos,
Ehrsson, & Sanchez-Vives, 2008).
There is, as yet, little perceptual research on the effects
of the visual presence of a person’s body on spatial
judgments beyond the reaching space. Creem-Regehr,
Willemsen, Gooch, and Thompson (2005) investigated
distance perception in real environments and showed
that viewing one’s feet and the immediately surround-
ing areas on the ground had no effect on the accuracy
of distance estimates. This experiment was done in a
real world setting rich in depth cues, and the differ-
ences in perceptual uncertainty between real and virtual
environments may well make observers rely on differ-
ent environmental and body-based information in the
two situations. Furthermore, factors such as the rel-
ative importance of rendering different body parts,
body tracking, and realism of rendering may also have
differential effects on performance within the IVE.
Rigorously evaluating the accuracy of distance per-
ception is quite difficult, since there is no direct way
to measure what someone “sees.” In the real world,
visually directed actions are often used as indicators
of the accuracy of space perception (Rieser, Ashmead,
Tayor, & Youngquist, 1990; Loomis, Silva, Fujita, &
Fukusima, 1992). Participants are presented with an
appropriately controlled visual stimulus and then asked
to perform tasks based on the visual information they
have been given. Visually directed actions are open-loop
tasks in which visual feedback is not provided. Thus it is
argued that the accuracy of the resulting actions reflects
the accuracy of the underlying perception. Visually
directed action tasks used to probe distance perception
include eyes-closed walking (Rieser et al.; Loomis et al.;
Fukusima, Loomis, & Silva, 1997), pointing (Loomis
et al.), and throwing (Eby & Loomis, 1987; Sahm,
Creem-Regehr, Thompson, & Willemsen, 2005) to or
toward a previously seen target.
Absolute distance perception over action space, as
indicated by performance on visually-directed action
tasks, is quite accurate in the real world (Rieser et al.,
1990; Loomis et al., 1992; Loomis, Da Silva, Philbeck,
& Fukusima, 1996). This is not true for absolute dis-
tance perception in HMD-based virtual environments,
where multiple studies have now reported that actions
are performed as if distances were perceived as 20%–50%
smaller than intended (e.g., Henry & Furness, 1993;
Loomis & Knapp, 2003; Thompson et al., 2004; Sahm
et al., 2005; Richardson & Waller, 2007; Waller &
Richardson, 2008). Much speculation and research has
been directed at the phenomenon of distance compres-
sion in IVEs. There is evidence that at least in isolation,
compression of distance judgments is not due to issues
involving binocular stereo (Willemsen, Gooch, Thomp-
son, & Creem-Regehr, 2008), or a variety of other
effects including restricted field of view (Knapp &
Loomis, 2004; Creem-Regehr et al., 2005), motion par-
allax (Beall, Loomis, Philbeck, & Fikes, 1995), or image
quality (Thompson et al., 2004). Physical properties
of an HMD may influence the effective scale of virtual
space, but this only partially accounts for the results that
have been observed (Willemsen, Colton, Creem-Regehr,
& Thompson, 2004). Other suggestions have been
made that cognitive effects such as expectations about
room size or explicit feedback about responses may
affect the scaling of actions in IVEs (Interrante, Ander-
son, & Ries, 2006a, 2006b; Interrante, Ries, Lindquist,
& Anderson, 2007; Foley, 2007; Richardson & Waller,
2005).
3 Experiment
The current work expands beyond Mohler
et al. (2008) and Ries et al. (2008) by exploring two
important questions relevant to the potential effect of
self-avatars on space perception in virtual environments:
(1) Is it important that the self-avatar accurately reflect
the user’s own body motions? and (2) Is it important
that the self-avatar be colocated with the user’s position
in the virtual environment?
The experiment was divided into two phases. Dur-
ing the initial exploration phase, participants visually
explored a virtual environment space immediately
around themselves. During the subsequent distance
judgment phase, participants performed eyes-closed
walking to previously seen targets, with a compari-
son between the actual target distance and the walked
distance used as a measure of the accuracy of their ego-
centric distance judgments. Six different conditions were
explored, involving variations in the exploration phase
but with all conditions using an identical distance judg-
ment phase. The exploration phases varied in terms of
(1) whether the avatar was static versus animated in cor-
respondence with the user’s movements compared to a
nonavatar location marker present, and (2) whether this
avatar or nonavatar location marker was located at the
participant’s body location in the virtual environment or
was displaced forward from that location.
3.1 Method
3.1.1 Participants. Forty-eight paid volunteers
participated in this experiment, eight in each condition.
Conditions were roughly balanced for gender. Partic-
ipants were drawn from the university community in
Tübingen, Germany, and compensated for their time at
the rate of 8 €/hour. Participants ranged in age from
19 to 43 years (mean 29.9). None had prior experience
with head mounted displays. All had normal or cor-
rected to normal vision and were screened for the ability
to fuse stereo displays (using the stereo fly test, Stereo
Optical Co., Inc.).
3.1.2 Stimuli and Apparatus. The study
was carried out in a fully tracked free-walking space,
11.9 m × 11.7 m in size and 8 m high. Users’ full-body
position and orientations were tracked using a 16 cam-
era Vicon MX13 optical tracking system and reflective
markers on the HMD and the user’s body. Each Vicon
camera had a resolution of 1280 × 1024 and the tracking
system had a frame rate of 120 Hz, with the tracking
latency ranging from 26 ms to 60 ms for this experi-
ment. In addition to updating the visual environment
as a function of users’ head movements, the tracking sys-
tem also allowed for the capturing of full body motion
data. While head motion data could be captured over
the whole of the area of the tracked space, technical
issues limited full body motion capture to a smaller area.
For this study, head, hands, torso, and feet were tracked
and inverse kinematics were used to fully articulate the
avatar. The virtual model/avatar was rendered using
Virtools from Dassault Systèmes. The 3D models of the
two different virtual environments used for the explo-
ration and distance judgment phases were developed
using 3DStudio Max. An animation-enabled avatar pur-
chased from Rocketbox Studios was used. Height, arm
span, and leg height of the avatar were scaled to match
the physical dimensions of each individual participant.
The visual display was an NVIS nVisor SX HMD, with
a 47° (horizontal) by 37° (vertical) field of view and
1280 × 1024 resolution in each eye, which yields a spatial
resolution of approximately 2.2 arc-minutes per pixel.
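As a rough check on that figure (assuming, as an approximation, that pixels subtend equal visual angles across the display):

    (47° × 60 arcmin/°) / 1280 pixels = 2820 / 1280 ≈ 2.2 arcmin per pixel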
3.1.3 Design. In the exploration phase, partic-
ipants saw a static avatar matched to their own body
dimensions, an animated avatar which moved consis-
tent with their own body motions and which matched
their own body dimensions, or a nonavatar location
marker made up of one or two lines. The avatars or
markers were placed either at the location of the par-
ticipant in the virtual world (colocated conditions), so
that they were visible when the participant looked down,
or were placed 3 m in front of the virtual world position
of the participant (displaced conditions). The colocated
nonavatar marker was a 0.5 m line on the floor indicating
where the participant was standing. The displaced
nonavatar marker was a 0.5 m line on the floor indicating
a location 3 m in front of where the participant
was standing, together with a vertical line matching
the height of the participant. For the displaced avatar
conditions, the avatar was facing away from the partici-
pant, so that only the back of the avatar was visible. No
avatar or nonavatar location marker was present for the
distance judgment phase. A between-subject, six con-
dition design was utilized: 3 (avatar/marker type) ×
2 (colocated or displaced), as shown in Figure 1.
3.1.4 Procedure. In all conditions, participants
began by putting on gloves, shoes, and a light back-
pack which provided targets for the motion tracker.
The backpack with the laptop was carried by the exper-
imenter. The participants were then guided into the
tracking space with their eyes closed, at which point they
were fitted with the head mounted display. With eyes
still closed, they spent approximately 30 s in a T pose
looking straight ahead toward one of the tracking space
walls, while a calibration procedure registered the loca-
tion of the virtual avatar to the real person and scaled the
avatar body segments to that of the actual person (see
Figures 2–3). During the T pose, the height, armspan,
and pelvis height were recorded. Given the overall and
pelvis height of the participant, a version of the avatar
with the appropriate ratio of pelvis height to overall
height was chosen. This ensured that the leg length would be accurate
after scaling the avatar’s height. Next, this avatar was
scaled in height and in width so that the overall height
and arm length was accurate. Finally, since inverse kine-
matics were used to animate the character, the virtual
hands and feet were always in the same absolute position
as the participant’s.
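The calibration just described amounts to a simple two-step fit: pick the base avatar whose leg proportion best matches the participant, then apply height and width scale factors. The sketch below is a minimal reading of that procedure, not the authors' code; the class, function names, and example numbers are illustrative assumptions.

```python
# Minimal sketch of the avatar-scaling step; names and variants are assumed.
from dataclasses import dataclass

@dataclass
class AvatarVariant:
    pelvis_to_height_ratio: float  # leg proportion of this base mesh
    base_height_m: float
    base_armspan_m: float

def scale_avatar(variants, height_m, armspan_m, pelvis_height_m):
    """Choose the variant whose leg proportion best matches the participant,
    then scale height (so overall and leg height are correct) and width
    (so arm length is correct)."""
    target_ratio = pelvis_height_m / height_m
    best = min(variants,
               key=lambda v: abs(v.pelvis_to_height_ratio - target_ratio))
    height_scale = height_m / best.base_height_m
    width_scale = armspan_m / best.base_armspan_m
    return best, height_scale, width_scale

# Example: a 1.72 m participant with a 1.75 m arm span and 0.95 m pelvis height.
variants = [AvatarVariant(0.50, 1.80, 1.80), AvatarVariant(0.55, 1.80, 1.80)]
print(scale_avatar(variants, 1.72, 1.75, 0.95))
```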
Once the setup was complete, participants opened
their eyes and were exposed to one of the six exploration
phase conditions, which lasted 5 min. All six exploration
phase conditions were done in an identical virtual room
that was 6 m × 7 m in size and 2.8 m high. Participants
were instructed to look down or forward and explore
the space immediately around where they were stand-
ing, but were not permitted to move from this location.
In all conditions, including those involving nonavatar
location markers, full body motion tracking was utilized.
These data were recorded to support post-experiment
analysis of gaze direction and body movement. For the
animated avatar conditions, real time motion tracking
was used to move the visually rendered avatar arms and
limbs in a manner consistent with the participant’s own
movements.
After the exploration phase, participants were pre-
sented with a different virtual world, consisting of a
simulated hallway 3.5 m × 15 m in size and 2.8 m high
(see Figure 4). In this new virtual world, they performed
a series of direct-blind walking tasks to targets placed
randomly at 4 m, 5 m, or 6 m from the participant in a
blocked random order where each distance was repeated
five times. On each trial, participants viewed the target
on the floor and were instructed to create a “good”
image of the target and the surrounding environment.
They then were instructed to close their eyes, the HMD
screens were blanked, and the participants walked with-
out visual feedback to the target location. Because the
distance walked in some of these trials exceeded the
range over which full body tracking was possible, the
distance judgment phase was done without a view of the
avatar or nonavatar location marker. (The implications of
this are explored in Section 3.3.)
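For concreteness, one plausible reading of the "blocked random order" above is five blocks that each contain the three target distances once, shuffled independently; the sketch below (function name and seeding are assumptions, not the authors' implementation) generates such a 15-trial sequence.

```python
# Sketch of one plausible blocked-random trial order (3 distances x 5 blocks).
import random

def blocked_trial_order(distances=(4.0, 5.0, 6.0), blocks=5, seed=None):
    rng = random.Random(seed)
    trials = []
    for _ in range(blocks):
        block = list(distances)
        rng.shuffle(block)      # randomize within the block
        trials.extend(block)    # 3 distances x 5 blocks = 15 trials
    return trials

print(blocked_trial_order(seed=1))  # e.g., [4.0, 6.0, 5.0, 6.0, ...]
```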
Figure 1. The six experimental conditions, varying in avatar/marker type (animated, static, line) and location (colocated or displaced). There were eight participants in each condition.
Figure 2. Virtual avatar in the virtual room used for the exploration phase.
Figure 3. User wearing HMD, light backpack, gloves, and shoes.
Figure 4. Virtual world where the target and hallway were viewed before the direct blind-walking tasks were performed. The textures in this virtual world differed in scale from those used in the exploration phase.
Figure 5. Direct blind-walking results averaged across subjects (eight per condition) for 15 trials of direct blind-walking to targets on the ground plane, performed after the exploration phase. Error bars represent one standard error.
3.2 Results
Distance estimations increased after the static
avatar experience relative to the line on the floor con-
trol, with an even larger effect after the animated avatar
experience, regardless of the location of the avatar (as
shown in Figure 5). A 2 (location: colocated vs.
displaced) × 3 (avatar condition: animated, static, line)
univariate ANOVA performed on the ratio of walked
distance to actual distance (averaged across 15 trials for
each subject) confirmed a main effect of avatar condition,
F(2, 48) = 74.74, p < .01, ηp² = .78, a marginal effect
of location (p = .06, ηp² = .08), and no avatar × location
interaction (p = .49, ηp² = .03). Partial eta squared (ηp²)
is used as an indication of effect size, reflecting the
amount of variance accounted for by the independent
variable; it shows a large effect for avatar condition and
small effects for location and the avatar × location
interaction. Scheffé post hoc tests confirmed significant
differences (p < .01 for all comparisons) between the
ratios of walked distance to actual distance in the
colocated conditions with the animated avatar
(mean = 0.964), static avatar (mean = 0.884), and
line on the floor (mean = 0.804). In the displaced
conditions, the effects were similar (p < .05 for all
comparisons) with the animated avatar (mean = 0.999),
static avatar (mean = 0.887), and line on the floor
(mean = 0.829). Table 1 reports the ratio of walked
distance to actual distance.

Table 1. Ratio of Walked Distance to Actual Distance, Reported for Three Different Distances for Six Different Conditions

Condition                       4 m    5 m    6 m    Average
1: Colocated animated avatar    0.93   0.98   0.98   0.96
2: Colocated static avatar      0.86   0.88   0.90   0.88
3: Colocated line on floor      0.79   0.80   0.82   0.80
4: Displaced animated avatar    0.99   1.01   0.99   1.00
5: Displaced static avatar      0.89   0.86   0.91   0.89
6: Displaced line on floor      0.80   0.83   0.85   0.83
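The dependent measure behind Table 1 and the ANOVA is straightforward to compute; a minimal sketch, with hypothetical trial data, follows (1.0 means veridical walking, values below 1.0 mean undershooting).

```python
# One accuracy score per subject: mean ratio of walked to actual distance.
import numpy as np

def subject_ratio(walked_m, actual_m):
    walked_m = np.asarray(walked_m, dtype=float)
    actual_m = np.asarray(actual_m, dtype=float)
    return float(np.mean(walked_m / actual_m))

# Example: a hypothetical subject undershooting each target by roughly 20%.
walked = [3.2, 4.0, 4.9] * 5
actual = [4.0, 5.0, 6.0] * 5
print(subject_ratio(walked, actual))  # ~0.80, like the line-on-floor rows
```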
Statistical analyses were also performed on the gaze
direction data to assess whether there were differences in
the time spent looking at locations near the body during
the six exploration conditions or during the interval
preceding walking in the distance estimation task (see
Table 2). Separate 2 (location: colocated vs. displaced)
× 3 (avatar condition: animated, static, line) univariate
ANOVAs were performed on time spent “looking
down” for the exploration and prewalking intervals.
For the exploration phase, there was an effect of location,
in which more time was spent looking down in
the colocated versus displaced condition, F(1, 42) =
275.94, p < .01, ηp² = .87. More importantly, there
was no effect of avatar condition (p = .62, ηp² = .02),
demonstrating that observers’ looking patterns were
similar regardless of the presence of the animated avatar,
static avatar, or line during exploration. For the prewalking
interval during the distance estimation task, there
was no effect of location (p = .36, ηp² = .02) or avatar
condition (p = .64, ηp² = .02).
Statistical analyses were also performed on the body
movement data to assess whether there were differences
in the amount of body movement as a function
of location or avatar condition (see Table 3 for means
and standard deviations). A 2 (location: colocated vs.
displaced) × 3 (avatar condition: animated, static, line)
univariate ANOVA was performed on the average speed
of limb movements (m/s) during the exploration interval,
separately for arm and leg movements. For the arm
movements, there was an effect of the avatar condition,
F(1, 42) = 1023.22, p < .01, ηp² = .98. Planned
contrasts revealed that more arm movements were made
when the avatar was animated versus static (p < .01), and
there was no difference in arm movements in the static
versus line condition (p = .85). Importantly, there was
no effect of location (p = .35, ηp² = .021), demonstrating
that arm movements were similar regardless of the
location of the avatar. We found the same effects for leg
movement, demonstrating an effect of avatar condition,
F(1, 42) = 137.08, p < .01, ηp² = .87 (animated
versus static, p < .01; static versus line, p = .27), and
no effect of location (p = .114, ηp² = .06). Together,
the body movement data provide additional information
about the tendency to move one’s body when provided
with visual feedback (or lack of feedback) from an avatar.
Significantly greater movement was seen in both ani-
mated conditions, with notably greater movement in
the arms compared to the legs. Also, it is important to
note that there was no difference in body movements
from the static to the line condition, although there was
a statistical difference in spatial judgments. This indicates
that body movements alone were not responsible for the
changes in spatial judgments.
Table 2. Gaze Direction Data for the Six Different Conditions
Condition Exploration phase Prewalking phase
1: Colocated animated avatar 69.54% (5.70) 17.11% (7.90)
2: Colocated static avatar 63.37% (9.09) 14.05% (4.45)
3: Colocated line on floor 64.00% (4.99) 11.97% (4.79)
4: Displaced animated avatar 28.25% (6.77) 14.03% (5.09)
5: Displaced static avatar 29.17% (9.27) 17.21% (3.02)
6: Displaced line on floor 31.13% (8.33) 16.12% (4.73)
Average percentage of time that the participants looked down (defined as when their
gaze direction intersected the floor within a 2.5 m radius of their standing point).
Standard deviations are in parentheses.
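The floor-intersection test in the Table 2 note can be expressed as a short ray-plane computation. The sketch below assumes a z-up coordinate frame with the floor at z = 0, which is a chosen convention rather than anything specified in the paper; the function name is likewise illustrative.

```python
# Classify one gaze sample as "looking down": the gaze ray must hit the
# floor within 2.5 m (horizontal) of the participant's standing point.
import numpy as np

def is_looking_down(head_pos, gaze_dir, stand_xy, radius_m=2.5):
    head_pos = np.asarray(head_pos, dtype=float)
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    if gaze_dir[2] >= 0:                    # level or upward gaze never hits the floor
        return False
    t = -head_pos[2] / gaze_dir[2]          # ray parameter where z reaches 0
    hit_xy = head_pos[:2] + t * gaze_dir[:2]
    return bool(np.linalg.norm(hit_xy - np.asarray(stand_xy)) <= radius_m)

head = (0.0, 0.0, 1.7)                       # eye height of 1.7 m
print(is_looking_down(head, (0.0, 0.6, -0.8), (0.0, 0.0)))  # True: hits ~1.3 m ahead
```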
Table 3. Body Movement Data for the Six Different Conditions Averaged over All Participants
Condition Hands Feet
1: Colocated animated avatar 0.310 (0.032) 0.090 (0.025)
2: Colocated static avatar 0.040 (0.012) 0.023 (0.003)
3: Colocated line on floor 0.036 (0.011) 0.014 (0.004)
4: Displaced animated avatar 0.290 (0.026) 0.120 (0.032)
5: Displaced static avatar 0.036 (0.011) 0.020 (0.006)
6: Displaced line on floor 0.034 (0.010) 0.016 (0.003)
Body movement is the average speed of movement (m/s) over the 5 min exploration
phase, reported for hands and feet. Standard deviations are in parentheses.
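One plausible way to obtain the speeds in Table 3 is total marker path length divided by elapsed time; the paper does not specify the exact pipeline (filtering, averaging over markers), so the following is only a sketch under that assumption.

```python
# Average speed (m/s) of one tracked marker, sampled at the 120 Hz frame rate.
import numpy as np

def mean_speed(positions_m, rate_hz=120.0):
    """positions_m has shape (n_frames, 3): one 3D position per frame."""
    positions_m = np.asarray(positions_m, dtype=float)
    step_lengths = np.linalg.norm(np.diff(positions_m, axis=0), axis=1)
    elapsed_s = (len(positions_m) - 1) / rate_hz
    return float(step_lengths.sum() / elapsed_s)
```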
3.3 Discussion
Consistent with Mohler et al. (2008) and Ries
et al. (2008), we have demonstrated that experience
with a self-avatar within an HMD virtual environment
has large effects on subsequent distance judgments.
Furthermore, we examined the importance of self body-
motion and colocation with the user’s physical position
in the VE. Our results suggest that although significant
increases in distance judgments occurred when the
avatar did not move with the user’s movements, greater
effects were found with accurate avatar movement.
Additional analyses of gaze behavior suggest that these
effects are not a result of differential attentional or view-
ing behavior among the avatar/marker conditions.
Interestingly, the effects of static and animated avatars
were essentially the same regardless of the visual location
of the avatar.
There are several possible accounts for changes in
spatial estimates as a function of the presence and move-
ment of a visual body including (1) the visual presence
of the body, (2) a frame of reference that grounds the
observer in the environment, and (3) perceptual-motor
feedback that informs absolute scale or perceived body
ownership. Each of these accounts is informed by the
present results. First, the comparison between the line
and avatar conditions suggests that the presence of the
body itself, regardless of its fidelity of motion, had an
influence on distance estimations. One explanation for
this outcome is that the human body serves as a familiar
size cue which may provide metric scaling information
for the virtual environment. Related to this, we found
generalized effects of the avatars regardless of whether
the avatar was colocated with the physical body. This
effect suggests that the changes in spatial estimates may
be more of a result of scaling by information of the body
rather than grounding the body in a location in the
environment. However, further work to discriminate
between effects associated with establishing the body as
a frame of reference versus scaling of spatial dimen-
sions is necessary. One approach would be to determine
whether judgments change as a result of the body in
an additive or multiplicative way as a function of dis-
tance. An additive change would support the account
of the body serving as a frame of reference, grounding
the observer in a location within the environment and
leading to a systematic shift in responses. In contrast,
a change in scaling should be manifested in a multi-
plicative effect. This type of analysis relies on regression
models requiring more data points at more distances and
is an avenue for future research.
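In code, the proposed additive-versus-multiplicative test reduces to comparing regression coefficients across conditions; the sketch below is illustrative only, since (as the authors note) three target distances are too few to fit such models reliably.

```python
# Regress walked distance on actual distance per condition. A frame-of-
# reference (additive) account predicts intercepts that differ between
# conditions; a rescaling (multiplicative) account predicts differing slopes.
import numpy as np

def fit_walked_vs_actual(actual_m, walked_m):
    slope, intercept = np.polyfit(np.asarray(actual_m, dtype=float),
                                  np.asarray(walked_m, dtype=float), deg=1)
    return intercept, slope
```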
Notably the largest difference in distance estimates
was found for the animated/tracked body part avatars,
suggesting that visual-motor feedback does contribute
in some way, either to a greater sense of body owner-
ship, increased information for scaling of the virtual
space, or both. Understanding the mechanisms under-
lying perceived body ownership with avatars could have
important implications for increasing the fidelity of
space perception in virtual environments. Several studies
with and without VE technology have begun to test the
parameters involved in self-identification of bodies and
body parts. With artificial hands such as in the rubber
hand illusion, it has been shown that parameters such
as the orientation of the hands and visual similarity to
the participant’s hands modulate the effects of the illu-
sion (Pavani, Spence, & Driver, 2000). Lenggenhager
et al. (2007) found that an “out of body” experience
created with a virtual body was not effective when that
body was replaced by a size-matched simple rectan-
gle volume. Slater et al. (2008) recently demonstrated
self-ownership of a virtual arm after stimulation with a
virtual ball, following the rubber hand illusion paradigm.
In this context of body ownership, the effects of the
displaced avatar are both intriguing and puzzling. The
introduction of a self-avatar does not necessarily imply
embodiment—that the user and avatar are experienced as
the same self. In the present study, the displaced avatar
is viewed as in front of the user with movements that
correspond to the user’s own movements. One line
of reasoning would suggest that if the user did expe-
rience himself or herself in the location of the avatar
(3 m in front of his or her physical position) then we
might expect the distance walked by the observer to
decrease rather than increase in the displaced avatar
condition, as the avatar was moved forward in space.
However, our results show an increase in distance esti-
mations, consistent with the effects of the colocated
animated condition. On the other hand, we do not rule
out the possibility of embodiment of the avatar. One
important factor may be the extent to which agency
is experienced in the avatar (Short & Ward, 2009).
Agency is the sense of control of one’s own body and
events in the external environment. Our displaced, ani-
mated condition directly linked the control of one’s own
body movement with the visual movement of the avatar.
These circumstances result in efferent feedback from
voluntary motor commands, afferent feedback from
signals such as proprioception indicating the state of
body position, as well as reafferent sensory feedback
dependent on one’s actions (Wexler & van Boxtel,
2005) from the visual avatar movement. The con-
trol over self-movement has been shown to increase
self-recognition (Tsakiris, Haggard, Franck, Mainy, &
Sirigu, 2005) and may have contributed to the similarity
of the effect to the colocated condition. More research
is needed with self-avatars viewed from first-person or
third-person perspectives to examine the roles of agency
and dynamic movement on one’s subjective embodi-
ment of an avatar as well as on spatial behavior.
Furthermore, the mechanisms underlying perceptual-
motor calibration with avatars and the subsequent
effects on space perception are unknown. Our previous
work has demonstrated that observers readily calibrate
locomotion after short adaptation periods of walking
(without avatars) which manipulate visual-motor pair-
ings of information for self-motion (Mohler et al.,
2007). Perceptual-motor calibration of body part move-
ment may be further tested using a similar approach in
which visual information for self-motion is mismatched
to actual limb motion. While the present results cannot
distinguish whether the effects of the animated avatar
are due to the presence of more information for scaling
versus a more salient sense of ownership of the avatar,
they do emphasize the overall importance of body-
motion. Future work which allows for the viewing of
animated avatars whose movements are dissociated from
the viewer’s own movements may differentiate between
these two explanations.
Finally, it is important to consider that our labora-
tory tracking constraints led to an experimental design
in which the avatar was experienced only during an
exploration phase which occurred in a different envi-
ronment from the distance judgment phase. Thus, we
have shown that avatar effects on distance estimations
do not require the presence of the avatar during the dis-
tance estimations and that avatar effects generalize from
one environment to another.
4 Conclusions
The utility of virtual environments for applica-
tions involving spatial behavior will likely increase when
perceptual experiences within virtual environments mir-
ror those of the real world. Our goal was to investigate
the influence of self-avatars on one type of spatial
judgment requiring absolute distance perception. We
found that the presence of an avatar changed the typ-
ical pattern of distance underestimation seen in many
HMD-based virtual environment studies. Users showed
a remarkable increase in distance estimations with avatar
experience, especially when the avatar was animated
in correspondence with their own real body move-
ments. These results are an important advance in our
understanding of the role of embodied perception in
virtual environments. At the same time, our results
introduce several new questions about the nature of self-
representation in virtual environments and its effects on
spatial perception.
Acknowledgments
This work was supported in part by NSF grant IIS-
0745131 and by the World Class University program
through the National Research Foundation of Korea
funded by the Ministry of Education, Science and Tech-
nology (R31-2008-000-10008-0). The authors wish to
thank Michael Weyel, Martin Breidt, Naima Laharnar,
Stephan Streuber, and Jennifer Campos for discussions
and experimental support.
References
Bailenson, J. N., Beall, A. C., & Blascovich, J. (2002). Gaze
and task performance in shared virtual environments.
Journal of Visualization and Computer Animation,13,
313–320.
Bailenson, J. N., & Blascovich, J. (2004). Avatars. In W. S.
Bainbridge (Ed.), Encyclopedia of human-computer inter-
action (pp. 64–68). Great Barrington, MA: Berkshire
Publishing Group.
Barsalou, L. W. (2008). Grounded cognition. Annual Review
of Psychology,59, 617–645.
Beall, A. C., Loomis, J. M., Philbeck, J. W., & Fikes, T. G.
(1995). Absolute motion parallax weakly determines visual
scale in real and virtual environments. In Proceedings of the
SPIE—The International Society for Optical Engineering
(Vol. 2411, pp. 288–297).
Biocca, F. (1997a). The cyborg’s dilemma: Embodiment
in virtual environments. In Proceedings of the 2nd Inter-
national Conference on Cognitive Technology, 12–26.
Biocca, F. (1997b). The cyborg’s dilemma: Progressive
embodiment in virtual environments. Journal of
Computer-Mediated Communication, 3(2), http://jcmc.indiana.edu/vol3/issue2/biocca2.html.
Botvinick, M., & Cohen, J. (1998). Rubber hands “feel”
touch that eyes see. Nature,391, 756.
Creem-Regehr, S. H., Willemsen, P., Gooch, A. A., &
Thompson, W. B. (2005). The influence of restricted
viewing conditions on egocentric distance perception:
Implications for real and virtual environments. Perception,
34(2), 191–204.
Draper, M. (1995). Exploring the influence of a virtual body on
spatial awareness. Unpublished master’s thesis, University of
Washington, Seattle, Washington.
Durlach, N., & Slater, M. (2000). Presence in shared vir-
tual environments and virtual togetherness. Presence:
Teleoperators and Virtual Environments, 9(2), 214–217.
Eby, D. W., & Loomis, J. M. (1987). A study of visually
directed throwing in the presence of multiple distance cues.
Perception & Psychophysics,41, 308–312.
Foley, J. M. (2007). Visually directed action: Learning to
compensate for perceptual errors (VSS abstract). Journal
of Vision,7(9), 416.
Fukusima, S. S., Loomis, J. M., & Silva, J. A. D. (1997).
Visual perception of egocentric distance as assessed by tri-
angulation. Journal of Experimental Psychology: Human
Perception and Performance,23(1), 86–100.
Hay, J. C., Pick, H. L., Jr., & Ikeda, K. (1965). Visual capture
produced by prism spectacles. Psychonomic Science,2, 215–
216.
Henry, D., & Furness, T. (1993). Spatial perception in virtual
environments: Evaluating an architectural application. In
Virtual Reality Annual International Symposium, 33–40.
Hillis, K. (1999). Digital sensations: Space, identity, and
embodiment in virtual reality. Minneapolis, MN: University
of Minnesota Press.
Interrante, V., Anderson, L., & Ries, B. (2006a). Distance
perception in immersive virtual environments, revisited. In
Proceedings of IEEE Virtual Reality, 3–10.
Interrante, V., Anderson, L., & Ries, B. (2006b). Presence,
rather than prior exposure, is the more strongly indicated
factor in the accurate perception of egocentric distances in
real world co-located immersive virtual environments. In
Proceedings of the 3rd Symposium on Applied Perception in
Graphics and Visualization (p. 157).
Interrante, V., Ries, B., Lindquist, J., & Anderson, L. (2007).
Elucidating the factors that can facilitate veridical spatial
perception in immersive virtual environments. In Pro-
ceedings of the IEEE Symposium on 3D User Interfaces,
11–18.
Knapp, J. M., & Loomis, J. M. (2004). Limited field of view
of head-mounted displays is not the cause of distance
underestimation in virtual environments. Presence: Tele-
operators and Virtual Environments,13(5), 572–577.
Lenggenhager, B., Tadi, T., Metzinger, T., & Blanke, O.
(2007). Video ergo sum: Manipulating bodily self-
consciousness. Science,317 (5841), 1096.
Lok, B., Naik, S., Whitton, M., & Brooks, J. F. P. (2003).
Effects of handling real objects and self-avatar fidelity on
cognitive task performance and sense of presence in vir-
tual environments. Presence: Teleoperators and Virtual
Environments,12(6), 615–628.
Loomis, J. M., Da Silva, J. A., Philbeck, J. W., & Fukusima,
S. S. (1996). Visual perception of location and distance.
Current Directions in Psychological Science,5(3), 72–77.
Loomis, J. M., & Knapp, J. (2003). Visual perception of ego-
centric distance in real and virtual environments. In L. J.
Hettinger & M. W. Haas (Eds.), Virtual and adaptive
environments (pp. 21–46). Mahwah, NJ: Erlbaum.
Loomis, J. M., Silva, J. A. D., Fujita, N., & Fukusima, S. S.
(1992). Visual space perception and visually directed action.
Journal of Experimental Psychology: Human Perception and
Performance,18 (4), 906–921.
Mohler, B. J., Bülthoff, H. H., Thompson, W. B., & Creem-
Regehr, S. H. (2008). A full-body avatar improves distance
judgments in virtual environments. In Proceedings of
the Symposium on Applied Perception in Graphics and
Visualization.
Mohler, B. J., Thompson, W. B., Creem-Regehr, S. H.,
Willemsen, P., Pick, H. L., Jr., & Rieser, J. J. (2007). Cal-
ibration of locomotion resulting from visual motion in a
treadmill-based virtual environment. ACM Transactions on
Applied Perception,4(1).
Pavani, F., Spence, C., & Driver, J. (2000). Visual capture of
touch: Out-of-the-body experiences with rubber gloves.
Psychological Science,11, 353–359.
Proffitt, D. R. (2006). Embodied perception and the econ-
omy of action. Perspectives on Psychological Science,1(2),
110–122.
Richardson, A. R., & Waller, D. (2005). The effect of feed-
back training on distance estimation in virtual environ-
ments. Applied Cognitive Psychology,19, 1089–1108.
Richardson, A. R., & Waller, D. (2007). Interaction with
an immersive virtual environment corrects users’ distance
estimates. Human Factors,49(3), 507–517.
Ries, B., Interrante, V., Kaeding, M., & Anderson, L. (2008).
The effect of self-embodiment on distance perception in
immersive virtual environments. In Proceedings of the ACM
Symposium on Virtual Reality Software and Technology, 167–
170.
Rieser, J. J., Ashmead, D. H., Tayor, C. R., & Youngquist,
G. A. (1990). Visual perception and the guidance of loco-
motion without vision to previously seen targets. Perception,
19, 675–689.
Sahm, C. S., Creem-Regehr, S. H., Thompson, W. B., &
Willemsen, P. (2005). Throwing versus walking as indica-
tors of distance perception in real and virtual environments.
ACM Transactions on Applied Perception,1(3), 35–45.
Short, F., & Ward, R. (2009). Virtual limbs and body space:
Critical features for the distinction between body space
and near-body space. Journal of Experimental Psychology:
Human Perception & Performance,35 (4), 1092–1103.
Slater, M., Antley, A., Davison, A., Swapp, D., Guger, C.,
Barker, C., et al. (2006). A virtual reprise of the Stanley
Milgram obedience experiments. PLoS ONE,1(1), 1–10.
Slater, M., Frisoli, A., Tecchia, F., Guger, C., Lotto, B., Steed,
A., et al. (2007). Understanding and realizing presence
in the Presenccia project. IEEE Computer Graphics and
Applications,27 (4), 90–93.
Slater, M., Perez-Marcos, D., Ehrsson, H. H., & Sanchez-
Vives, M. V. (2008). Towards a digital body: The virtual
arm illusion. Frontiers in Human Neuroscience,2, 1–8.
Slater, M., & Usoh, M. (1994). Body centered interaction
in immersive virtual environments. In N. M. Thalmann
& D. Thalmann (Eds.), Artificial life and virtual reality
(pp. 125–148). New York: John Wiley and Sons.
Slater, M., Usoh, M., & Steed, A. (1995). Taking steps: The
influence of a walking technique on presence in virtual real-
ity. ACM Transactions on Computer-Human Interaction,
2(3), 201–219.
Thompson, W. B., Willemsen, P., Gooch, A. A., Creem-
Regehr, S. H., Loomis, J. M., & Beall, A. C. (2004). Does
the quality of the computer graphics matter when judg-
ing distances in visually immersive environments? Presence:
Teleoperators and Virtual Environments, 13(5), 560–571.
Tsakiris, M., Haggard, P., Franck, N., Mainy, N., & Sirigu,
A. (2005). A specific role for efferent information in self-
recognition. Cognition,96, 215–231.
Usoh, M., Arthur, K., Whitton, M. C., Bastos, R., Steed, A.,
Slater, M., et al. (1999). Walking > walking-in-place >
flying in virtual environments. In Proceedings of the ACM
SIGGRAPH, 359–364.
Waller, D., & Richardson, A. R. (2008). Correcting dis-
tance estimates by interacting with immersive virtual
environments: Effects of task and available sensory infor-
mation. Journal of Experimental Psychology: Applied,14(1),
61–72.
Wexler, M., & van Boxtel, J. J. A. (2005). Depth perception
by the active observer. Trends in Cognitive Sciences,9(9),
431–438.
Willemsen, P., Colton, M. B., Creem-Regehr, S. H., &
Thompson, W. B. (2004). The effects of head-mounted
display mechanics on distance judgments in virtual envi-
ronments. In Proceedings of the 1st Symposium on Applied
Perception in Graphics and Visualization, 35–38.
Willemsen, P., Gooch, A. A., Thompson, W. B., & Creem-
Regehr, S. H. (2008). Effects of stereo viewing conditions
on distance perception in virtual environments. Presence:
Teleoperators and Virtual Environments,17 (1),
91–101.
Williams, B., Johnson, D., Shores, L., & Narasimham, G.
(2008). Distance perception in virtual environments. Pro-
ceedings of the 5th Symposium on Applied Perception in
Graphics and Visualization, 193.
Wilson, M. (2002). Six views of embodied cognition.
Psychonomic Bulletin & Review,9(4), 625–636.
Zhang, H., Yu, C., & Smith, L. B. (2006). An interactive vir-
tual reality platform for studying embodied social inter-
action. In Proceedings of the CogSci06 Symposium Toward
Social Mechanisms of Android Science. Retrieved from http://www.indiana.edu/dll/papers/zhang_android06.pdf.
... Several investigations identified solutions to distance underestimation in VR; some techniques that enhance distance estimation accuracy include introducing additional visual depth cues, enabling full avatar representations, increasing FOV size, and adding head-centric rest frames to the user [7,24,50,55,64,68]. While research is abundant on distance perception and reducing its underestimation, a central question motivating our research is whether participants' perception of distal judgments in VR matches their actual accuracy in spatial judgment tasks. ...
... Moreover, viewer body cues, avatar embodiment, and eye height characteristics also impact spatial judgment accuracy. Distance perception was more accurate when participants were represented with an avatar, or when they were shown a full-body avatar with character animations [21,53,55]. Using some rest frames improves near and mid-field VR distance estimation (i.e. ...
Article
Full-text available
Virtual Reality (VR) systems are widely used, and it is essential to know if spatial perception in virtual environments (VEs) is similar to reality. Research indicates that users tend to underestimate distances in VR. Prior work suggests that actual distance judgments in VR may not always match the users self-reported preference of where they think they most accurately estimated distances. However, no explicit investigation evaluated whether user preferences match actual performance in a spatial judgment task. We used blind walking to explore potential dissimilarities between actual distance estimates and user-selected preferences of visual complexities, VE conditions, and targets. Our findings show a gap between user preferences and actual performance when visual complexities were varied, which has implications for better visual perception understanding, VR applications design, and research in spatial perception, indicating the need to calibrate and align user preferences and true spatial perception abilities in VR
... While in the latter the negative stereotyped avatar was compared to a control condition also including an avatar but without such negative stereotypes (e.g., elderly versus young), the present control condition included no avatar to avoid any potential influence of avatar's characteristics on motor imagery performance. However, beyond the fact that the absence of an avatar implies no sense of embodiment, it more importantly implies the absence of bodily reference in the virtual environment, which might have a negative impact on both the accuracy of spatial perception (Mohler et al., 2010;Yadav and Kang, 2022) and motor performance (Berg et al., 2023). For this reason, Study 2 was designed to conceptually replicate Study 1, while addressing this limitation. ...
... The presence of body-related information anchors the body within a virtual environment and provides familiar metric cues for scaling space. In turn, this enables greater accuracy in spatial perception and motor performance (Berg et al., 2023;Mohler et al., 2010;Yadav and Kang, 2022). In Study 1, although the absence of bodily information in the control condition did not impact the spatial perception of the virtual environment as observed with the distance and the slope estimation tasks (see Supplementary material), an influence on spatial perception and/or motor imagery remains conceivable. ...
Article
The Proteus effect refers to the tendency for individuals to conform to the stereotypes related to the visual characteristics of the avatar used in a virtual environment. If the phenomenon has been widely observed, underlying mechanisms (e.g., self-perception, priming) and moderation factors, such as avatar embodiment, need confirmation. The sense of embodimentemerges when the properties of the avatar are processed in the same way as the properties of the biological body. The objective of the present study was, first, to investigate the effect of avatar embodiment on the Proteus effect related to the influence of an elderly avatar on motor imagery, and second, to examine the extent to which this relationship is explained by a change in self-perception. In two virtual reality studies, the agency and the self-location components of embodiment were manipulated through visuo-motor synchronization and visual perspective respectively. The time required to perform motor imagery displacements while being embodied (visuo-motor synchrony and first-person perspective) or not (visuo-motor asynchrony and/or third-person perspective) in an elderly avatar was measured. The results showed that the Proteus effect was not stronger the more participants embodied the elderly avatar, which does not support that embodiment moderates the Proteus effect. Moreover, analyses did not confirm that change in explicit selfperception mediates the relationship between embodiment and the Proteus effect. The Proteus effect is dis cussed in the light of the avatar identification process and the active-self account: crossover between these mechanisms could offer new insights into understanding the influence of avatars on individuals’ behavior.
... For instance, the VR condition completely removed the participants from their physical environment, requiring them to control a virtual, disembodied though "connected" hand using their actual hand to point at the target. Evaluated alone, this setup contains numerous features that may potentially affect performance accuracy, such as disembodiment (Gonzalez-Franco et al. 2019;Mohler et al. 2010), the discrepancy between participants' virtual and actual hands (Linkenauger et al. 2013), and, specific to online motor control, motion-to-photon latency (Warburton et al. 2022). In contrast, the AR condition with optical pass-through allowed participants to interact with virtual targets with their real hands while still having unperturbed visual access to the entire physical environment. ...
... Admittedly, MR systems introduced many additional perturbations to the human visual system that could affect distance perception in peri-personal space. For VR, these perturbations in the present study (and most VR interactions) include a disembodied and non-articulating hand (Gonzalez-Franco et al. 2019;Mohler et al. 2010), the size and minor orientation differences between the participants' physical hand and the virtual hand (Linkenauger et al. 2013), and the temporal delay between movement and rendering of the movement (Warburton et al. 2022). Similarly, for AR, although the participants used their physical hands to interact with virtual objects, issues such as the inappropriate occlusion of the hand (Bingham et al. 2001) and spatial drift (Miller et al. 2020) could also impact the measured motor performance in the current task. ...
Article
Full-text available
Mixed reality technologies, such as virtual (VR) and augmented (AR) reality, present promising opportunities to advance education and professional training due to their adaptability to diverse contexts. Distortions in the perceived distance in such mediated conditions, however, are well documented and have imposed nontrivial challenges that complicate and limit transferring task performance in a virtual setting to the unmediated reality (UR). One potential source of the distance distortion is the vergence-accommodation conflict—the discrepancy between the depth specified by the eyes’ accommodative state and the angle at which the eyes converge to fixate on a target. The present study involved the use of a manual pointing task in UR, VR, and AR to quantify the magnitude of the potential depth distortion in each modality. Conceptualizing the effect of vergence-accommodation offset as a constant offset to the vergence angle, a model was developed based on the stereoscopic viewing geometry. Different versions of the model were used to fit and predict the behavioral data for all modalities. Results confirmed the validity of the conceptualization of vergence-accommodation as a device-specific vergence offset, which predicted up to 66% of the variance in the data. The fitted parameters indicate that, due to the vergence-accommodation conflict, participants’ vergence angle was driven outwards by approximately 0.2°, which disrupted the stereoscopic viewing geometry and produced distance distortion in VR and AR. The implications of this finding are discussed in the context of developing virtual environments that minimize the effect of depth distortion.
... In VEs, users can represent body movements through avatar embodiment [1]. Various avatar representations have been reported to alter user experiences [2]. Accurate measurement of self-body and reflection in an avatar can enhance the sense of ownership of the body [3]. ...
Preprint
This study is the first to explore the interplay between haptic interaction and avatar representation in Shared Virtual Environments (SVEs). We focus on their combined effect on social presence and task-related scores in dyadic collaborations. In a series of experiments, participants performed a plate-control task with haptic interaction under four avatar representation conditions: avatars of both the participant and the partner were displayed, only the participant's avatar was displayed, only the partner's avatar was displayed, or no avatars were displayed. The study finds that avatar representation, especially of the partner, significantly enhances the perception of social presence, which haptic interaction alone does not fully achieve. In contrast, neither the presence nor the type of avatar representation impacts task performance or participants' force effort during the task, suggesting that haptic interaction provides sufficient cues for executing the task. These results underscore the significance of integrating both visual and haptic modalities to optimize remote collaboration experiences in virtual environments, ensuring effective communication and a strong sense of social presence.
... Users have been shown to behave differently with avatars of varying weight [50,51], height [67], age [40], and even race [2,62]. Indeed, perceptions of affordances within the virtual environment, such as object size perception [3,45], depth perception [15,43], and passability perception [8,9,66], are altered by both the inclusion of an avatar ...
Article
Full-text available
Control over an avatar in virtual reality can improve one's perceived sense of agency and embodiment towards that avatar. Yet, the relationship between control, agency, and embodiment remains unclear. This work aims to investigate two main ideas: (1) the effectiveness of currently used metrics in measuring agency and embodiment and (2) the effect of different levels of control on agency, embodiment, and cognitive performance. To do this, we conducted a between-participants user study with three conditions on agency (n = 57). Participants embodied an avatar with one of three types of control (i.e., Low: control over head only; Medium: control over head and torso; or High: control over head, torso, and arms) and completed a Stroop test. Our results indicate that the degree of control afforded to participants impacted their embodiment and cognitive performance but, as expected, could not be detected in the self-reported agency scores. Furthermore, our results offer further insight into the relationship between control and embodiment, suggesting potential uncanny-valley-like effects. Future work should aim to refine agency measures to better capture the effect of differing levels of control and consider other methodologies to measure agency.
... The visual modality has mainly been studied to enhance virtual embodiment due to its unique affordance for full-body illusions, which have far-reaching implications for our virtual interactions. Experiential aspects affecting embodiment include, for example, viewing a self-avatar (Mohler et al., 2010), a first-person perspective and realistic humanoid textures (Maselli and Slater, 2013), and low-latency motor actions such as body and eye movements (Skarbez et al., 2017). In addition, the level of detail of the self-representation could also affect embodiment (Fribourg et al., 2020). ...
Article
Full-text available
Enhancing the experience of virtual reality (VR) through haptic feedback could benefit applications from leisure to rehabilitation and training. Devices which provide more realistic kinesthetic (force) feedback appear to hold more promise than their simpler vibrotactile counterparts. However, our understanding of the effect of kinesthetic feedback on virtual embodiment is still limited due to the novelty of appropriate kinesthetic devices. To contribute to this line of research, we constructed a wearable system with state-of-the-art kinesthetic gloves for avatar full-body control, and conducted a between-subjects study involving an avatar self-touch task. We found that providing a kinesthetic sense of touch substantially strengthened the embodiment illusion in VR. We further explored the ability of these kinesthetic gloves to present virtual objects haptically. The gloves were found to provide useful haptic cues about the basic 3D structure and stiffness of objects for a discrimination task. This is one of the first studies to explore virtual embodiment by employing state-of-the-art kinesthetic gloves in full-body VR.
... Therefore, this underestimation phenomenon is influenced by various factors, including the design choices of the VEs, technological aspects, the distances themselves, and individual user characteristics. The visual composition of VEs, comprising textures, graphics, and avatars, can affect distance perception [17-25]. The realism of VEs is thus also an important aspect. ...
Article
Full-text available
Spatial perception plays a critical role in virtual worlds and real environments, as it can impact navigation abilities. To understand this influence, the present study investigated the effects of human characteristics and immersion levels on exocentric distance estimation in virtual environments. As a first step, a virtual environment was implemented for both desktop and Gear VR head-mounted displays. Afterward, the exocentric distance estimation skills of 229 university students were examined; 157 used the desktop display and 72 used the Gear VR. Using logistic regression and linear regression, the effects of these characteristics on the probability of an accurate estimate and on estimation time were investigated. According to the results, gender, weekly video game playtime, height, and display device had significant effects on the former, whereas dominant hand, weekly video game playtime, height, and display device had significant effects on the latter. The results also show that using the head-mounted display significantly decreased the likelihood of students estimating exocentric distances accurately; however, they were significantly faster with it. These findings can influence the development of more accessible and effective virtual environments in the future.
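For readers who want to see the shape of such an analysis, the following hedged sketch fits a logistic regression for estimate accuracy and a linear regression for estimation time using the predictors named above; the data file and all column names are hypothetical, not taken from the study.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("distance_estimates.csv")  # hypothetical per-trial data

# Probability of an accurate exocentric distance estimate (logistic regression).
acc_model = smf.logit(
    "accurate ~ C(gender) + playtime_hours + height_cm + C(display)",
    data=df,
).fit()
print(acc_model.summary())

# Estimation time, modeled with the factors reported as significant for it.
time_model = smf.ols(
    "estimation_time_s ~ C(dominant_hand) + playtime_hours + height_cm + C(display)",
    data=df,
).fit()
print(time_model.summary())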
Article
Maintaining balance in immersive virtual reality (VR) environments poses a significant challenge for users, particularly affecting those with pre-existing balance disorders. This study investigates the efficacy of multimodal feedback (comprising auditory, vibrotactile, and visual stimuli) in mitigating balance issues within VR. A sample of 68 participants, divided equally between individuals with balance deficits related to multiple sclerosis and those without, was evaluated. The research explored the impact of various feedback conditions on balance performance. The results demonstrated that the multimodal feedback condition significantly enhanced balance control compared to other conditions, with statistical analysis confirming this improvement (p < .001). These findings underscore the potential of integrated sensory feedback in addressing balance-related difficulties in VR, thereby improving the overall accessibility and user experience for individuals affected by balance impairments. This research contributes valuable insights into optimizing VR environments for enhanced stability and user comfort.
Conference Paper
Virtual Reality (VR) has emerged as a powerful tool in the industry, offering benefits such as enhanced learning outcomes, improved skill acquisition, and deeper engagement across various fields [1]. However, the absence of a standardized approach and guidelines for VR system operation poses a challenge when implementing interaction types and user representations in VR environments. This paper investigates the impact of controller visualization on precision, accuracy, and immersion within VR environments. By identifying the optimal and most comfortable user representation, which involves a combination of input devices and visual displays, we aim to address this challenge. Previous research has not sufficiently explored the interplay between these interaction methods and user ownership, task performance, and usability in immersive virtual environments for different tasks. To bridge this gap, we conducted a comprehensive user study comparing three VR user representations: visualized controllers with physical controllers, visualized animated hands with physical controllers, and visualized animated hands with hand tracking. Participants evaluated each setup, and we employed qualitative feedback and quantitative metrics, including task completion times and error rates, to assess their experiences. Our findings highlight that hand tracking may have limitations in terms of usability but excels in generating a strong sense of body ownership compared to the alternative options. Notably, our analysis revealed minimal or no significant difference between the use of visualized hands or controllers when controllers were employed as the input device. Thus, the choice of the best user representation largely depends on personal preference, while the most effective operation mode varies based on the specific task executed in the VR environment. By leveraging these insights, the industry can harness the full potential of VR technology, driving greater productivity, efficiency, and overall success.
Article
Full-text available
This article presents an interactive technique for moving through an immersive virtual environment (or “virtual reality”). The technique is suitable for applications where locomotion is restricted to ground level. The technique is derived from the idea that presence in virtual environments may be enhanced the stronger the match between proprioceptive information from human body movements and sensory feedback from the computer-generated displays. The technique is an attempt to simulate body movements associated with walking. The participant “walks in place” to move through the virtual environment across distances greater than the physical limitations imposed by the electromagnetic tracking devices. A neural network is used to analyze the stream of coordinates from the head-mounted display, to determine whether or not the participant is walking on the spot. Whenever it determines the walking behavior, the participant is moved through virtual space in the direction of his or her gaze. We discuss two experimental studies to assess the impact on presence of this method in comparison to the usual hand-pointing method of navigation in virtual reality. The studies suggest that subjective rating of presence is enhanced by the walking method provided that participants associate subjectively with the virtual body provided in the environment. An application of the technique to climbing steps and ladders is also presented.
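The paper's detector is a neural network over the head tracker's coordinate stream; as a much simpler stand-in illustrating the same idea, the sketch below flags walking in place when the vertical head position oscillates with step-like amplitude and rate over a sliding window. All thresholds, names, and the sampling rate are assumptions, not the paper's parameters.

from collections import deque

class WalkInPlaceDetector:
    # Crude oscillation detector over the HMD's vertical position stream.
    def __init__(self, rate_hz=60, window_s=1.0,
                 min_amplitude_m=0.02, min_crossings=2):
        self.samples = deque(maxlen=int(rate_hz * window_s))
        self.min_amplitude_m = min_amplitude_m
        self.min_crossings = min_crossings

    def update(self, head_y_m):
        # Feed one tracker sample; True means "user appears to walk in place".
        self.samples.append(head_y_m)
        if len(self.samples) < self.samples.maxlen:
            return False  # wait until the window is full
        window = list(self.samples)
        mean = sum(window) / len(window)
        amplitude = max(window) - min(window)
        # Mean crossings serve as a crude proxy for step frequency.
        crossings = sum((a - mean) * (b - mean) < 0
                        for a, b in zip(window, window[1:]))
        return (amplitude >= self.min_amplitude_m
                and crossings >= self.min_crossings)

When the detector fires, the application would translate the viewpoint along the gaze direction, as in the technique described above.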
Article
Full-text available
Numerous previous studies have suggested that distances appear to be compressed in immersive virtual environments presented via head-mounted display systems, relative to the real world. However, the principal factors responsible for this phenomenon have remained largely unidentified. In this paper we shed some new light on this intriguing problem by reporting the results of two recent experiments in which we assess egocentric distance perception in a high-fidelity, low-latency, immersive virtual environment that represents an exact virtual replica of the participant's concurrently occupied real environment. Under these novel conditions, we make the startling discovery that distance perception appears not to be significantly compressed in the immersive virtual environment, relative to the real world.
Article
Perception informs people about the opportunities for action and their associated costs. To this end, explicit awareness of spatial layout varies not only with relevant optical and ocular-motor variables, but also as a function of the costs associated with performing intended actions. Although explicit awareness is mutable in this respect, visually guided actions directed at the immediate environment are not. When the metabolic costs associated with walking an extent increase, perhaps because one is wearing a heavy backpack, hills appear steeper and distances to targets appear greater. When one is standing on a high balcony, the apparent distance to the ground is correlated with one's fear of falling. Perceiving spatial layout combines the geometry of the world with behavioral goals and the costs associated with achieving those goals.