Distance Judgments to On- and Off-Ground Objects in Augmented Reality
Carlos Salas Rosales*
Vanderbilt University, USA
Grant Pointon†
University of Utah, USA
Haley Adams‡
Vanderbilt University, USA
Jeanine Stefanucci§
University of Utah, USA
Sarah Creem-Regehr¶
University of Utah, USA
William B. Thompson||
University of Utah, USA
Bobby Bodenheimer**
Vanderbilt University, USA
ABSTRACT
Augmented reality (AR) technologies have the potential to provide
individuals with unique training and visualizations, but the effective-
ness of these applications may be influenced by users’ perceptions of
the distance to AR objects. Perceived distances to AR objects may be
biased if these objects do not appear to make contact with the ground
plane. The current work compared distance judgments of AR targets
presented on the ground versus off the ground when no additional
AR depth cues, such as shadows, were available to denote ground
contact. We predicted that without additional information for height
off the ground, observers would perceive the off-ground objects as
placed on the ground, but at farther distances. Furthermore, this
bias should be exaggerated when targets were viewed with one eye
rather than two. In our experiment, participants judged the absolute
egocentric distance to various cubes presented on or off the ground
with an action-based measure, blind walking. We found that ob-
servers walked farther for off-ground AR objects and that this effect
was exaggerated when participants viewed off-ground objects with
monocular vision compared to binocular vision. However, we also
found that the restriction of binocular cues influenced participants’
distance judgments for on-ground AR objects. Our results suggest
that distances to off-ground AR objects are perceived differently
than on-ground AR objects and that the elimination of binocular
cues further influences how users perceive these distances.
Keywords: Augmented reality, virtual environments, distance perception, depth cues
Index Terms: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Virtual Reality; J.4 [Computer Applications]: Social and Behavioral Sciences—Psychology
1 INTRODUCTION
Due to recent advances in optical see-through head mounted dis-
plays, augmented reality (AR) has begun to permeate the domains
of entertainment, education, and engineering. The improved quality
and accessibility of this technology has made AR devices, such as
the Microsoft HoloLens, targets for new application development.
To illustrate, AR solutions have already been developed to aid ar-
chitecture, design, and medical imaging [1, 5, 12, 22]. However, to
optimally use this technology, it is essential that we understand how
users perceive and interact with virtual objects in mediated reality.
Recent reviews demonstrate widespread study of human behavior
within AR over the last 15 years [2,10]. However, open questions
*e-mail: carlos.salas@vanderbilt.edu
†e-mail: grant.pointon@psych.utah.edu
‡e-mail: haley.a.adams@vanderbilt.edu
§e-mail: jeanine.stefanucci@psych.utah.edu
¶e-mail: sarah.creem@psych.utah.edu
||e-mail: thompson@cs.utah.edu
**e-mail: bobby.bodenheimer@vanderbilt.edu
remain about human spatial perception in augmented reality and the
depth cues that are used when virtual and real worlds coexist.
Our current work investigates the accuracy of observers’ distance
perception to virtual objects when the objects are presented on or
off the ground in the real world. The ability to perceive absolute
distances to objects in AR is critical to many applications that re-
quire an understanding of scale, such as environmental simulations
for training and architectural design. To act accurately, a person
must perceive spatial relationships of virtual objects in the same way
that they would be perceived in the physical world. Prior research
both in virtual environments (VEs) and AR has shown a tendency
to underestimate distances [11, 19, 33, 34, 36]. In AR, accurate positioning of virtual objects poses unique challenges, requiring computer vision input, visual markers, and virtual world anchors to build accurate representations of and references to the real world. For example, the
Microsoft HoloLens’ spatial mapping capabilities, although impres-
sive, require certain conditions to accurately visualize the real-world
environment and place virtual objects within it.
Although a variety of studies evaluating distance perception in
immersive technology have been conducted, little research has been
done to examine the effect that an object’s contact with the ground
has on depth perception within augmented reality. Examining this
effect is important, as previous real-world research has shown that
depth estimates can be overestimated for objects that appear to be
floating over a surface [26]. Since there is a demonstrated effect
of the ground plane on distance estimations in the real world [13],
it is important to consider similar ground contact effects on depth
perception for virtual objects in AR.
2 BACKGROUND
2.1 Distance Perception in Real, Virtual, and Augmented Environments
In this paper, we assess judgments of absolute egocentric distance to
targets either on the ground or off the ground. Egocentric distance is
defined as the distance between the viewpoint of the observer and
a specified target location. A variety of measures can be used to
assess absolute, egocentric distances, including but not limited to
verbal reports of the distance in a specified metric, visually matching
the distance with another extent, and walking without vision to the
previously viewed target (termed blind walking). The current paper
uses blind walking to measure egocentric distance perception, be-
cause judgments of absolute egocentric distance are accurate when
assessed with blind walking in the real world up to approximately 15
meters [30]. However, research over the past 20 years assessing par-
ticipants’ abilities to blind walk to targets in virtual environments has
shown underestimation of distance (see [7,29] for recent reviews).
This underestimation ranged from 40-80% of real-world distances, depending on the lab and the measure of distance perception, and
was generally present in most HMDs and tracking systems [7]. It
has only been with the recent advent of commodity-level HMDs
that distance underestimation in virtual environments has reduced.
Studies evaluating distance estimation in these devices via blind
walking have reported 10% or less underestimation [6,8,31].
Prior work assessing distance perception in AR showed mixed
results regarding accuracy of distance judgments. For example, one
study comparing blind walked estimates of the distance to objects
in a virtual hallway compared to a visually matched real hallway
with objects displayed through augmented reality found similar
underestimation (60% of real distances) of AR judgments and VR
judgments [14]. However, other work found that distance judgments
made in AR were similar to those made in the real world [16,25,33].
Evidence from a study using a video see-through AR device suggests
that stereoscopic cues and relative size cues can be particularly
important for accurate distance estimation in AR [18]. Improvements
in distance estimation can also sometimes be due to the training and
feedback given to participants in these augmented environments
[14, 34]. One particularly relevant study for the current work was
conducted by Swan and colleagues [32]. They seated observers
in a hallway and asked them to visually match distances between
approximately 5 and 45 meters. But, in addition to the targets (which
were always placed on the ground), the researchers also placed
referent objects in the hallway in either the upper field of view or
the lower field of view (i.e., on the ceiling or the ground). Distances
were underestimated up to 23 meters, but past 23 meters they were
overestimated. This finding was qualified by the location of the
referents. When the referent objects were in the upper field of view,
participants overestimated closer targets, but when the referents were
in the lower field of view, participants underestimated closer targets.
Such disparity in estimation could be due to the referents being on
or off the ground plane, even though the targets were all on the
ground plane. In the following section, we review studies that have
investigated the effects of ground contact for perceived location.
2.2 Visual Information for Object Contact with the Ground Plane and Distance Perception
Objects in the real world typically make contact with the ground
plane, or are set on surfaces that contact the ground. Gibson’s [13]
ground theory of perception argued for the importance of the ground
surface for terrestrial animals to perceive distance. In the absence of
information to the contrary, an object silhouetted against a ground
surface will appear to be in contact with the ground surface and at
a depth from the viewer corresponding to the occluded portion of
the ground [4]. The perceived distance may be influenced by an
illusory decrease in angular declination below the horizon, which
is an important visual cue for viewers standing on the ground that
inversely relates to distance estimates made for objects on the ground
surface [21]. Multiple different visual cues provide information
about the contact of objects with a ground plane. A lack of these
cues facilitates the ground contact effect for objects that are in fact
above ground level. Many of these same cues can provide explicit
non-contact information, thus inhibiting the ground contact effect.
When this occurs, not only will the object be seen to be floating above
the ground surface, but due to its 3-D position being constrained by
the line of sight, it will appear to be nearer to the viewer than the
occluded portion of the ground.
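To make this line-of-sight geometry concrete, the sketch below (in R, the language used for the paper's analyses) computes where an above-ground object would appear to sit if it were instead interpreted as resting on the ground, i.e., at the point where the line of sight through the object meets the floor. This is the same relationship used later to compute the predicted distances in Table 1; the eye height here is an illustrative assumption, not a value from the study.

```r
# If no cues signal non-contact, an object whose bottom is obj_height above
# the floor may be "grounded" at the point where the line of sight through
# it intersects the floor -- a point farther away than the object itself.
eye_height <- 1.6            # observer eye height in m (illustrative assumption)
obj_height <- 0.2            # height of the object's bottom above the floor, in m
obj_dist   <- c(3, 4.5, 6)   # actual egocentric distances, in m

projected_dist <- eye_height * obj_dist / (eye_height - obj_height)
round(projected_dist, 2)     # 3.43 5.14 6.86 -- all farther than the actual distances
```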
In many cases, shadows and interreflections (e.g., the result of
light reflecting from the object surface to the ground surface) provide
the strongest evidence for whether or not an object is in contact with
a support surface [17, 20]. Binocular stereo may also be able to
discriminate between contact and non-contact in two ways, but
few controlled studies have directly tested these potential effects.
One way is by indicating whether or not the three-dimensional
location of an object and a surface viewed immediately below are
compatible with contact. Alternatively, stereo might provide ordinal
or relative depth information about the relationship of the bottom of
an object and the surface immediately below in the view. Prior work
has shown that stereoscopic viewing, shadows, and interreflections
do influence participants’ judgments of distance between a virtual
block and a virtual table surface [15]. However, this study was
conducted in personal space rather than action space (spaces that
extend between 2–30 m), which is the focus of the current work [9].
Rand [26] demonstrated the importance of perceived ground location
on absolute distance judgments in action space in the real world by
placing targets on stands and then degrading viewing conditions so
that the stand could or could not be perceived. When the stand was
visible, viewers accurately judged distances to the targets. When
the stands were not detectable, participants overestimated the target
distance, consistent with the notion that they grounded the targets at
a location where they were visually projected, not where they were
physically located.
2.3 Overview of Current Study
In the current experiment, we varied the ground contact of augmented targets (cubes) and assessed absolute distance perception to those targets from an egocentric viewpoint. Our primary goal was to
determine whether distances to the AR cubes would be perceived
accurately and how ground contact or lack of it would affect these
judgments. Given prior work suggesting that stereoscopic informa-
tion can affect interpretations of object contact with a surface [15],
we also manipulated whether the cubes were viewed monocularly or
binocularly. Viewing the cubes monocularly will eliminate stereo-
scopic information for depth, which should interact with the pre-
sentation of the cubes as contacting the ground or not. Specifically,
without binocular information for the near ground surface, it should
be more difficult to localize the targets when off the ground, and
more likely that observers would perceive the off-ground objects at
the location where they visually intersect the ground (i.e., on the
ground but farther away). Thus, we predicted the following:
H1: Distances would be underestimated as demonstrated in prior
AR work.
H2: Distances would be judged to be farther for targets off the
ground versus on the ground.
H3: Distances to off-ground targets would be judged as farther
away with monocular compared to binocular viewing. Distance
judgments would not change as a function of viewing condition for
on-ground targets.
3 ASSESSING DISTANCE JUDGMENTS TO ON- AND OFF-GROUND OBJECTS
3.1 Calibration of Cube Placement
Traditional studies assessing depth perception often present targets to
observers in very sparse environments in order to control for relative
comparisons and alternate strategies for judging distance [23, 24].
However, this practice poses a problem for the HoloLens tracking
system, which relies on stable objects in the environment to accu-
rately and reliably place targets at correct distances on each trial.
In order to determine whether a somewhat sparse laboratory envi-
ronment would make reliable placement of targets difficult across
participants and trials, we conducted a preliminary study to assess
the placement accuracy and potential drift of our virtual targets with
the Microsoft HoloLens across two different laboratory spaces. The
studies took place at Vanderbilt University and the University of
Utah. Those participants run at Vanderbilt viewed the targets in
a second floor hallway with a railing on one side. Those partici-
pants run at Utah viewed the targets in a rectangular lab room (4m
x 9.5m) without furniture. Ten participants, distributed across the
two labs, viewed virtual cubes placed at 5 distances (4m–8m in 1m
increments). All participants had normal or corrected-to-normal
vision.
A custom AR environment was created for the HoloLens, which
weighs approximately 579 g and has a graphical field of view of 30° × 17° (i.e., graphical AR objects can appear within an eyebox of these dimensions). The environment was designed using Unity
(version 2017.4.4) on a Windows 10 laptop and run as a standalone
application. Participants used a Microsoft HoloLens to view virtual
cubes (20cm x 20cm x 20cm) at distances of 4m, 5m, 6m, 7m, and
8m from an augmented green line. All of the cubes were assigned
a marble texture and presented on the ground. A HoloLens clicker
was used to toggle the visibility of the cubes such that only one cube
was visible to the participant at any given point.
To try to increase accuracy and stability of cube placement, a
scan of the testing environment was done and uploaded into Unity
prior to building the application. After giving consent, participants
were asked to put on the HoloLens and align themselves directly
behind a virtually augmented green starting line on the floor. Once
in this starting position, participants’ next task was to align a real
world cube with each AR cube before and after walking in the
environment. If adjustments from the first alignment were necessary
after the participants walked in the environment, the experimenter
recorded the distance between the original and final placement of
the real world cube.
The sparseness of the environment mattered in terms of reliable
and accurate augmented cube placement. Mean drift for cubes in the
Vanderbilt environment was 0.15 cm (SD = 0.31 cm) whereas mean
drift distance for the Utah environment was 2.08 cm (SD = 3.10
cm). In addition, the program seemed more unstable in the Utah
environment. This discrepancy could be explained by the structure
of the hallway and railing in the Vanderbilt environment, which
likely provided the HoloLens’ sensors with more reliable reference
points. In contrast, the open and sparse Utah environment may
have hindered the HoloLens’ ability to accurately sense and map the
room.
In order to counteract this problem, visual features in the form
of chairs were added to the Utah environment along the walls. Two
additional participants were then run with these visual features in the
Utah environment. Mean drift for the cubes was reduced to 0.48 cm
(SD = 0.89 cm). Given these results, we determined that variability in distance judgments would dwarf slight placement issues with the HoloLens, rendering the effect of drift negligible. Thus, drift error in these environments was deemed tolerable for further experiments.
4 DISTANCE PERCEPTION EXPERIMENT
The results from the calibration study indicated that AR cubes can
be reliably placed at various distances with the HoloLens. Given
this, the current experiment manipulated distance to the cubes (3m,
4.5m, and 6m) and viewing condition (monocular or binocular) as
within-participants variables and presentation of the cubes as on or
off the ground as a between-participants variable.
4.1 Participants
Participants were recruited from the University of Utah Department
of Psychology participant pool. In total, we collected data on 33
participants, but 4 were excluded due to technical issues and 1
participant withdrew from the study. This left us with 14 participants
in the on-ground condition (8 male, mean age = 21.29 years, SD = 4.44) and 14 participants in the off-ground condition (7 male, mean age = 23.21 years, SD = 10.09). An independent samples t-test revealed that the participants
in the on- and off-ground conditions did not differ significantly in
height (p=0.628) or eye height (p=0.839).
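A minimal sketch of such a group check in R, with placeholder values standing in for the participants' measured heights (the actual measurements are not reported):

```r
set.seed(1)
height_on  <- rnorm(14, mean = 170, sd = 8)  # placeholder heights (cm), on-ground group
height_off <- rnorm(14, mean = 171, sd = 8)  # placeholder heights (cm), off-ground group
t.test(height_on, height_off)                # Welch two-sample t-test (R's default)
```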
4.2 Materials and Methods
Participants completed one of two conditions (on-ground or off-
ground targets) of the experiment in the same rectangular lab room
(4m x 9.5m) used for the calibration study (see Figures 1 and 2). All
the stimuli in the off-ground condition were displayed at 0.2m above
the ground plane. Within each condition, the participants viewed a
series of 18 cubes (20cm x 20cm x 20cm) presented at 3 distances
(3m, 4.5m, 6m). The three cube distances were repeated 3 times
in both the first and second block of trials. We used the same AR
cubes that were used in our calibration study. Trials were blocked
by viewing condition (monocular and binocular viewing). The order
of viewing conditions was counterbalanced across participants. The
experimental builds were programmed in Unity (Version 2017.4.4) on a Windows 10 laptop and run as independent applications on the Microsoft HoloLens.
Figure 1: An example of an AR cube trial as it was run in the real world laboratory space.
Figure 2: A participant viewing an AR cube with the HoloLens in the real world laboratory space.
After obtaining consent, participants’ stereo vision was assessed
with a random dot stereogram test. The experimenter then measured
eye dominance by asking participants to look through a small hole in
poster board and align their gaze to a target in the back of the room.
While aligned with the target, the participants closed one eye and
kept the other open. The eye with which participants could still see the target was considered their dominant eye. After
eye dominance was determined, the participant was given practice
with the blind walking procedure in a different part of the laboratory.
The experimenter stood beside the participant, offering an arm for support, and guided the participant, whose eyes were closed, to the other side of the room. The experimenter then led the participant back to the starting location, guiding them by the shoulders. Once the
participant felt comfortable with blind walking and being led back
to a location without vision, the participant put on the HoloLens
with or without an eyepatch on (to correspond to viewing condition)
and began the experiment.
On each trial, the participants first aligned their toes with the edge
of an augmented green bar displayed on the floor. They then used the
verbal command "advance" to generate a cube. Participants
were given as much time as they needed to view the cube and its
distance from them before they blind walked. When the participant
felt ready to blind walk, they used the verbal command "ready" to
make the cube disappear and then walked to where they previously
saw the cube with their eyes closed. When the participants stopped,
they remained at that location with their eyes closed while the ex-
perimenter measured the distance walked from the start line to their
toes. The experimenter then guided the participant back to the start
area in a circuitous pattern so that participants could not count steps
or use any other strategy to determine distance walked. After the
ninth trial, the start line changed color to indicate that the first block
of the experiment was complete. At this point, the experimenter
had participants either remove their eyepatch or place one on their
non-dominant eye. The second block of trials followed the same
procedure. At the end of the experiment, the experimenter collected
the participants’ height, eye height, and their responses to a brief
debriefing.
4.3 Data Analysis
In order to investigate the influence of cube distance, cube height,
and viewing condition, we analyzed our results with a mixed model
approach (see Appendix for model details). Mixed models are a
form of generalized regression techniques that can account for both
between-participant variability (in this case, the variability between
the two cube height conditions) and within-participant variability
(in this case, the variability within each participant across distance
and viewing condition). The mixed models we ran were comparable
to a repeated measures ANOVA (analysis of variance) with planned
comparisons, but with the added advantage of model specification.
This allowed us to include only the interactions that were hypothe-
sized a priori which also increased our power to detect differences.
Furthermore, they are well suited for nested experimental designs,
i.e., repeated measures designs [27].
We designed our model to test four planned comparisons. These
comparisons were 1) the difference in judged distance between on-
ground and off-ground cubes across all other conditions, i.e., H2:
the main effect of cube height; 2) whether the on/off ground manip-
ulation differed by distance (the interaction between distance and
cube height); 3) whether the on/off ground manipulation differed
by viewing condition, i.e., H3: the predicted effect of monocular
viewing on off-ground objects; and 4) the interaction between cube
height and viewing order. Our fourth comparison was included to
ensure that our results were not confounded by the order of viewing
condition. Given that these comparisons were planned and orthog-
onal to each other (i.e., they test different conceptual questions), a
correction for multiple comparisons was not necessary. We report
the results of these comparisons below along with their respective
unstandardized coefficients. The unstandardized coefficients (B)
represent the expected difference of each comparison in raw units
(cm). Practical significance is also estimated by reporting Cohen’s d,
which we calculated by dividing the unstandardized beta coefficients
by the standard deviation of our outcome variable [28].
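As a hedged illustration of this effect-size computation (not the authors' actual code), Cohen's d is the unstandardized coefficient divided by the outcome's standard deviation. The SD below is back-solved from the B = 33.5 and d = 0.26 reported in Section 4.4, since the trial-level SD itself is not reported:

```r
# Cohen's d as described above: a fixed-effect coefficient B (in cm)
# divided by the standard deviation of the blind-walked distances (in cm).
cohens_d <- function(B, outcome_sd) B / outcome_sd

cohens_d(B = 33.5, outcome_sd = 129)  # ~0.26, matching the cube height effect
```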
4.4 Results
H1: Distances to AR targets would be underestimated. As shown in Figure 3, both on-ground and off-ground targets were underestimated. On average, participants underestimated distances to on-ground cubes by 15% (Min = 13%, Max = 16%) and underestimated distances to off-ground cubes by 7% (Min = 6%, Max = 8%). Additionally, the figure shows that participants increased their judged distances as the actual distance of the cube increased. Specifically, participants walked on average about 130 cm farther to the 4.5 m cube (B = 128.92, SE = 6.19, t = 20.83, p < .001, d = 1.01) and about 270 cm farther to the 6 m cube (B = 272.64, SE = 6.19, t = 44.04, p < .001, d = 2.14) relative to the distance they walked to the 3 m cube.
H2: Distances would be judged to be farther for targets off the ground versus on the ground. Our analysis supported this prediction. We found a significant main effect of cube height (B = 33.5, SE = 15.45, t = 2.17, p = 0.04, d = 0.26), which indicated that participants blind walked, on average, 33.5 cm farther to off-ground cubes than to on-ground cubes across all conditions. We also found that cube height interacted with the 3 vs. 4.5 m distance effect (B = 17.74, SE = 8.754, t = 2.03, p < 0.04, d = 0.14). This indicates that the difference between the on- and off-ground conditions was greatest for the 4.5 m distance.

Table 1: Mean blind walked distances for each condition

                 On-ground                 Off-ground
Distance    Binocular   Monocular     Binocular   Monocular     Predicted Distance
300         256 (39.2)  247 (33.1)    272 (37.8)  279 (38.0)    342
450         386 (61.2)  374 (61.3)    419 (50.7)  426 (59.3)    513
600         531 (71.5)  517 (63.4)    546 (53.2)  570 (77.3)    684

Note: Values represent average blind walked distances in cm; values in parentheses represent standard deviations. The predicted distances represent how far the off-ground cubes would appear to be if they were interpreted to be on the ground, and were calculated with the following equation: PredictedDistance = (EyeHeight_avg × CubeDistance) / (EyeHeight_avg − CubeHeight).
H3: Distances to off-ground targets would be judged as farther away with monocular compared to binocular viewing. Distance judgments would not change as a function of viewing condition for on-ground targets. First, we did not find any interaction between cube height and viewing order. We also did not find a main effect of viewing order. Thus, the order in which the participants experienced the viewing conditions did not influence their behavior. Our main prediction was partially supported. Monocular versus binocular viewing led to opposite effects for on-ground and off-ground targets, as indicated by a significant two-way interaction between cube height and viewing condition (B = 24.56, SE = 7.15, t = 3.435, p < 0.001, d = 0.19). As predicted, participants in the off-ground condition judged distances as farther (by about 13 cm) for all of the cubes displayed off the ground in the monocular viewing condition compared to the binocular viewing condition (B = 12.81, SE = 5.05, t = 2.53, p = 0.01, d = 0.10). In addition, participants in the on-ground condition judged distances to be about 12 cm less in the monocular viewing condition compared to the binocular viewing condition (B = -11.75, SE = 5.05, t = -2.33, p = 0.02, d = -0.09), which did not support our hypothesis that viewing condition would have a null effect for on-ground targets.
5 DISCUSSION
The goal of this work was to examine the impact of varied ground
contact on egocentric distance to targets in AR. Prior work in VR
has shown underestimation of distance (though the extent of un-
derestimation has declined with the advent of new head-mounted
displays), but the few studies investigating distance perception in
AR have been mixed. We focused on the role of object contact
with the ground surface and the impact of stereoscopic information (or its absence). Our results showed an overall underestimation of blind
walked distance to targets across all conditions. As predicted, the
off-ground targets were judged to be farther away, possibly due to a
lack of information specifying that the objects were not in contact
with the ground. This explanation is supported by the finding that
with the reduced-cue condition of monocular viewing, off-ground
targets were judged to be even farther. We found the unexpected
additional result that targets on the ground were judged as closer
when viewed monocularly. Potential reasons for this finding are
discussed further below.
Our primary question was whether targets off the ground would
be perceived for their actual egocentric location above the ground,
or whether perceived distance would be based on their visual inter-
section with the ground. Previous work has demonstrated that when
there are insufficient cues to suggest that a target is off the ground,
distances will be perceived as if they are on the ground [13, 26].
Figure 3: [Line graph; x-axis: cube distance (cm); y-axis: average blind walk distance (cm); four conditions: binocular/monocular × on-/off-ground.] The figure shows the average distance participants walked to the 3 m, 4.5 m, and 6 m cubes in each combination of viewing and cube height conditions. Error bars indicate ±1 standard error.
Our current results are consistent with this effect. While there was
overall underestimation in judged distance, targets off the ground
were relatively overestimated compared to those on the ground,
suggesting that their location may have been perceived as on the
ground but farther away. Despite the current limitations in AR tech-
nology for shadows, future work should investigate whether any
rendered shadows might affect the perceived location of targets off
the ground. Methods for rendering shadows in AR to improve depth
perception have been recently developed [11] and could be applied
to our current question of ground contact and perception of depth.
In addition, varying the height of off-ground objects might also be
useful in terms of determining whether participants connect shadows
to objects correctly. Placing objects on the ceiling might also be
an important test for understanding how off ground locations and
restricted vertical field of view could affect perceived locations of
targets. Blind walking to targets on the ceiling in the real world has
been shown to be accurate [35], but this accuracy could be affected
by a more restricted vertical field of view in AR.
We found differences in distance estimation due to monocular versus binocular viewing. We had predicted that further reducing visual cues for depth with monocular viewing would affect off-ground targets in particular. While the off-ground targets were
perceived as farther away, we found that the on-ground targets were
also affected in that they were perceived as closer when viewed
monocularly. There is anecdotal evidence that strong differences in
spatial frequency, contrast, and chromaticity can lead to the appear-
ance of non-contact, but no study has formally tested this hypothesis.
However, this could have contributed to the effect observed in the
current study for monocular viewing of on-ground targets. Our
cubes were bright and textured in a way that made them stand in
stark contrast against the dark ground plane (a dark carpet with fairly
uniform texture). This large difference in appearance–particularly
in brightness, contrast, and texture–between the real world surface
and the virtual cube may have led to the cube being perceived as
hovering above the ground plane even though it was in contact with
the ground. In this case, the virtual cube on the ground would be
pushed forward perceptually.
Future work will explore the effect of object-background visual
similarity on the perception of surface contact and thus the on-
the-ground contact effect. Additional work should also examine
the generalizability of the monocular viewing effects (given the
somewhat small effect sizes) by testing distance judgments in varied
spatial environments and at different distances.
These results contribute to a growing body of work on depth
perception in AR—much of which previously has focused on near
locations—by using blind walking to assess perception of farther
distances in action space. We have established that, while view-
ers underestimate distances to AR objects relative to the intended
distance, their distance judgments systematically increased as cube
distance increased for both on- and off-ground targets. Although this may have been assumed based on previous VR findings, the challenges faced with accurately calibrating the AR system and placing objects so that they appear stable and grounded at farther distances make this first question necessary to address. The finding of underestimation in perceived distance may be a result of a combination of factors, including the severely reduced horizontal and vertical field of view of the HoloLens as well as the lack of information for ground contact, as discussed above. Both are areas in need of future
research.
Beyond the importance of establishing the accuracy of perceived
scale in AR spaces, studies of perception of distance may generally
inform the application and use of AR for more complex spatial tasks
such as navigation. If AR is to be used to facilitate spatial learning
while navigating, it is critical to understand how the locations of AR
features such as landmarks are perceived.
5.1 Study Limitations
Future work should attempt to replicate and extend the current results
given a few limitations in methodology and design. While our
findings suggest that AR targets off of the ground are perceived as
farther away, we only compared one off-ground height to the ground
condition. Therefore, we can only speculate that the height of
AR objects would continue to affect distance judgments at various
heights, particularly significant heights, as studied in Tlauka et
al. [37]. Furthermore, participants may have judged distances more
accurately if additional cube sizes or additional AR objects were
placed in the environment to act as relative size cues. The effect of
participants’ height was also not thoroughly investigated. Future
work could examine the exact relationship between participant
height, AR object height, and distance judgments.
6 CONCLUSION
This research demonstrates that targets portrayed off the ground
in AR may not be perceived to be in the same location as targets
displayed on the ground plane, especially if viewed in conditions
under which depth cues are reduced (such as monocular viewing).
Other cues to the location of targets, such as shadows, should be
added in the future to assess their contribution to accuracy of per-
ceived location of targets. The current experiment provides an initial
step toward understanding cues for perceiving depth in AR but also
highlights the need for further research in depth perception within
AR environments to assess contributions of other depth cues.
ACKNOWLEDGMENTS
The authors would like to thank Richard Paris for advice and help
during the project. This work was supported by the Office of Naval
Research under grant N00014-18-1-2964.
A STATISTICAL MODEL AND ANALYSES
The mixed models we ran were designed to test the influence of
cube height, viewing condition, and cube distance on participants’
blind walk behavior. As stated above, mixed models are a form of
generalized regression. In these models, continuous and categor-
ical predictors can be included at different levels within a nested
experimental design. Our model included predictors at the within-
subject level (e.g., viewing condition, viewing order, cube distance)
and at the between-subject level (i.e., cube height). Because all of
our predictors are categorical, the B estimates reported in the main text represent the expected difference between each level of the variable (e.g., the expected difference in distance walked for cubes presented on and off the ground). The standard errors represent the uncertainty surrounding each estimate in cm. The t and p values indicate the degree to which the fixed-effect estimates differ from 0, our null hypothesis. Cohen's d is also reported, which indicates the size of the effects we observed. All analyses were estimated with restricted maximum likelihood and were conducted in R using the lme4 package [3].
The equation below represents our primary model, where i represents a single observation and j represents a participant. This means that any predictor with the subscript ij indicates a variable that varies within participants, whereas a predictor with the subscript j indicates a variable that varies between participants. Each variable was entered as a categorical factor with 2 or 3 levels. Distance had 3 levels (3 m, 4.5 m, 6 m). Cube height (on-ground & off-ground), viewing condition (binocular & monocular), and viewing order (binocular first & monocular first) had two levels. We specified interactions between cube height and distance (γ11 & γ21), cube height and viewing condition (γ31), as well as cube height and viewing order (γ41). Additionally, we included a random intercept (μ0j) for each participant, which accounted for individual variability in blind walking behavior (i.e., the variability associated with repeated measures within each participant). These are not reported in the main text since they only control for within-subject variability and do not indicate relevant information about our experimental manipulations. Finally, contrast coding was used to allow for the planned comparisons described in Section 4.3.

BlindWalkDistance_ij = γ00 + γ01 · CubeHeight_j
                     + γ10 · Distance(3 vs. 4.5 m)_ij + γ11 · CubeHeight_j · Distance(3 vs. 4.5 m)_ij
                     + γ20 · Distance(3 vs. 6 m)_ij + γ21 · CubeHeight_j · Distance(3 vs. 6 m)_ij
                     + γ30 · ViewingCondition_ij + γ31 · CubeHeight_j · ViewingCondition_ij
                     + γ40 · ViewOrder_ij + γ41 · CubeHeight_j · ViewOrder_ij + μ0j
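A minimal lme4 sketch of this model, assuming a long-format data frame with one row per trial, follows. The data are simulated stand-ins (the study's trial-level data are not public), and all variable names are illustrative rather than the authors' actual code.

```r
library(lme4)

# Simulated stand-in data: 28 participants x 3 distances x 2 viewing
# conditions x 3 repetitions = 504 trials, mirroring the design.
set.seed(42)
dat <- expand.grid(
  participant = factor(1:28),
  distance    = factor(c("3m", "4.5m", "6m")),
  viewing     = factor(c("binocular", "monocular")),
  rep         = 1:3
)
dat$cube_height <- factor(ifelse(as.integer(dat$participant) <= 14,
                                 "on-ground", "off-ground"))  # between-subject
dat$view_order  <- factor(ifelse(as.integer(dat$participant) %% 2 == 0,
                                 "binocular-first", "monocular-first"))
dat$blind_walk_cm <- 250 + 130 * (dat$distance == "4.5m") +
  270 * (dat$distance == "6m") + rnorm(nrow(dat), sd = 40)

# Planned contrasts (Section 4.3) would be set on the factors here,
# e.g., via contrasts(dat$distance) <- ..., before fitting.

# Fixed effects: the four factors plus the three cube-height interactions
# specified in the equation above; random intercept per participant (mu_0j).
# REML = TRUE matches the restricted maximum likelihood estimation described.
m <- lmer(blind_walk_cm ~ cube_height * distance +
            cube_height * viewing +
            cube_height * view_order +
            (1 | participant),
          data = dat, REML = TRUE)
summary(m)
```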
REFERENCES
[1]
H. Bae, M. Golparvar-Fard, and J. White. High-precision vision-based
mobile augmented reality system for context-aware architectural, engi-
neering, construction and facility management (aec/fm) applications.
Visualization in Engineering, 1(1):3, Jun 2013. doi: 10.1186/2213-7459-1-3
[2]
Z. Bai and A. F. Blackwell. Analytic review of usability evaluation in
ismar. Interacting with Computers, 24(6):450–460, 2012.
[3]
D. Bates, M. Mächler, B. Bolker, and S. Walker. Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1):1–48, 2015. doi: 10.18637/jss.v067.i01
[4]
Z. Bian, M. L. Braunstein, and G. J. Andersen. The ground dominance
effect in the perception of 3–D layout. Perception & Psychophysics,
67(5):801–815, 2005.
[5]
W. Birkfellner, M. Figl, K. Huber, F. Watzinger, F. Wanschitz, J. Hum-
mel, R. Hanel, W. Greimel, P. Homolka, R. Ewers, and H. Bergmann.
A head-mounted operating binocular for augmented reality visualiza-
tion in medicine - design and initial evaluation. IEEE Transactions on
Medical Imaging, 21(8):991–997, Aug 2002. doi: 10.1109/TMI.2002.803099
[6]
L. E. Buck, M. K. Young, and B. Bodenheimer. A comparison of
distance estimation in hmd-based virtual environments with different
hmd-based conditions. ACM Trans. Appl. Percept., 15(3):21:1–21:15,
July 2018. doi: 10.1145/3196885
[7]
S. H. Creem-Regehr, J. K. Stefanucci, and W. B. Thompson. Chapter
six - perceiving absolute scale in virtual environments: How theory and
application have mutually informed the role of body-based perception.
In B. H. ROSS, ed., Psychology of Learning and Motivation, vol. 62
of Psychology of Learning and Motivation, pp. 195 – 224. Academic
Press, 2015. doi: 10.1016/bs.plm.2014.09.006
[8]
S. H. Creem-Regehr, J. K. Stefanucci, W. B. Thompson, N. Nash,
and M. McCardell. Egocentric distance perception in the Oculus Rift
(DK2). In Proceedings of the ACM SIGGRAPH Symposium on Applied
Perception, SAP ’15, pp. 47–50. ACM, New York, NY, USA, 2015.
doi: 10.1145/2804408.2804422
[9]
J. E. Cutting and P. M. Vishton. Perceiving layout and knowing dis-
tance: The integration, relative potency and contextual use of different
information about depth. In W. Epstein and S. Rogers, eds., Perception
of Space and Motion, pp. 69–117. Academic Press, New York, 1995.
[10]
A. Dey, M. Billinghurst, R. W. Lindeman, and J. E. Swan. A systematic
review of 10 years of augmented reality usability studies: 2005 to 2014.
Frontiers in Robotics and AI, 5:37, 2018. doi: 10.3389/frobt.2018.00037
[11]
C. Diaz, M. Walker, D. A. Szafir, and D. Szafir. Designing for depth
perceptions in augmented reality. In 2017 IEEE International Sympo-
sium on Mixed and Augmented Reality (ISMAR), pp. 111–122. IEEE,
2017.
[12] J. L. Gabbard and J. E. Swan II. Usability engineering for augmented
reality: Employing user-based studies to inform design. IEEE Transac-
tions on Visualization and Computer Graphics, 14(3):513–525, May
2008. doi: 10.1109/TVCG.2008.24
[13]
J. J. Gibson. The perception of the visual world. Greenwood Press,
Westport, Conn., 1950.
[14]
T. Y. Grechkin, T. D. Nguyen, J. M. Plumert, J. F. Cremer, and J. K.
Kearney. How does presentation method and measurement protocol
affect distance estimation in real and virtual environments? ACM
Trans. Appl. Percept., 7:26:1–26:18, July 2010. doi: 10.1145/1823738.1823744
[15]
H. H. Hu, A. A. Gooch, W. B. Thompson, B. E. Smits, J. J. Rieser, and
P. Shirley. Visual cues for imminent object contact in realistic virtual
environment. In Proceedings of the Conference on Visualization ’00,
VIS ’00, pp. 179–185. IEEE Computer Society Press, Los Alamitos,
CA, USA, 2000.
[16]
J. A. Jones, J. E. Swan, II, G. Singh, E. Kolstad, and S. R. Ellis. The
effects of virtual reality, augmented reality, and motion parallax on
egocentric depth perception. In Proceedings of the 5th symposium on
Applied perception in graphics and visualization, APGV ’08, pp. 9–14.
ACM, New York, NY, USA, 2008. doi: 10.1145/1394281.1394283
[17]
D. Kersten, P. Mamassian, and D. C. Knill. Moving cast shadows
induce apparent motion in depth. Perception, 26(2):171–192, 1997.
[18]
M. Kytö, A. Mäkinen, T. Tossavainen, and P. T. Oittinen. Stereoscopic depth perception in video see-through augmented reality within action space. Journal of Electronic Imaging, 23(1):011006, 2014.
[19]
J. M. Loomis and J. M. Knapp. Virtual and Adaptive Environments,
chap. Visual perception of egocentric distance in real and virtual envi-
ronments, pp. 21–46. Erlbaum, Mahwah, NJ, 2003.
[20]
C. Madison, W. Thompson, D. Kersten, P. Shirley, and B. Smits. Use
of interreflection and shadow for surface contact. Perception & Psy-
chophysics, 63(2):187–194, 2001.
[21]
T. L. Ooi, B. Wu, and Z. J. He. Distance determined by the angular
declination below the horizon. Nature, 414(6860):197–200, 2001.
[22]
K. Pentenrieder, C. Bade, F. Doil, and P. Meier. Augmented reality-
based factory planning - an application tailored to industrial needs. In
Proceedings of the 2007 6th IEEE and ACM International Symposium
on Mixed and Augmented Reality, ISMAR ’07, pp. 1–9. IEEE Com-
puter Society, Washington, DC, USA, 2007. doi: 10.1109/ISMAR.2007.4538822
[23]
J. W. Philbeck and J. M. Loomis. Comparison of two indicators of
perceived egocentric distance under full-cue and reduced-cue condi-
tions. Journal of Experimental Psychology: Human Perception and
Performance, 23(1):72, 1997.
[24]
J. W. Philbeck, A. J. Woods, J. Arthur, and J. Todd. Progressive loco-
motor recalibration during blind walking. Perception & Psychophysics,
70(8):1459–1470, 2008.
[25]
G. Pointon, C. Thompson, S. Creem-Regehr, J. Stefanucci, M. Joshi,
R. Paris, and B. Bodenheimer. Judging action capabilities in augmented
reality. In Proceedings of the 15th ACM Symposium on Applied Per-
ception, SAP ’18, pp. 6:1–6:8. ACM, New York, NY, USA, 2018. doi:
10.1145/3225153.3225168
[26]
K. M. Rand, M. R. Tarampi, S. H. Creem-Regehr, and W. B. Thompson.
The influence of ground contact and visible horizon on perception of
distance and size under severely degraded vision. Seeing and Perceiving, 25(5):425–447, 2012.
[27]
S. W. Raudenbush and A. S. Bryk. Hierarchical linear models: Appli-
cations and data analysis methods, vol. 1. Sage, 2002.
[28]
S. W. Raudenbush, B. Rowan, and J. K. Sang. A multilevel, multivariate
model for studying school climate with estimation via the em algo-
rithm and application to u.s. high-school data. Journal of Educational
Statistics, 16(4):295–330, 1991. doi: 10.3102/10769986016004295
[29]
R. S. Renner, B. M. Velichkovsky, and J. R. Helmert. The perception
of egocentric distances in virtual environments — A review. ACM Computing Surveys, 46(2):23:1–23:40, 2013.
[30]
J. J. Rieser, D. H. Ashmead, C. R. Talor, and G. A. Youngquist. Visual
perception and the guidance of locomotion without vision to previously
seen targets. Perception, 19:675–689, 1990.
[31]
Z. D. Siegel and J. W. Kelly. Walking through a virtual environment
improves perceived size within and beyond the walked space. Attention,
Perception, & Psychophysics, 79(1):39–44, 2017.
[32]
J. Swan, M. A. Livingston, H. S. Smallman, D. Brown, Y. Baillot,
J. L. Gabbard, and D. Hix. A perceptual matching technique for depth
judgments in optical, see-through augmented reality. In Proceedings of the IEEE Virtual Reality Conference, pp. 19–26, 2006.
[33]
J. E. Swan, A. Jones, E. Kolstad, M. A. Livingston, and H. S. Smallman.
Egocentric depth judgments in optical, see-through augmented reality.
IEEE Transactions on Visualization and Computer Graphics, 13(3):429–
442, 2007.
[34]
J. E. Swan, L. Kuparinen, S. Rapson, and C. Sandor. Visually per-
ceived distance judgments: Tablet-based augmented reality versus the
real world. International Journal of Human–Computer Interaction,
33(7):576–591, 2017.
[35]
W. B. Thompson, V. Dilda, and S. H. Creem-Regehr. Absolute distance
perception to locations off the ground plane. Perception, 36(11):1559–
1571, 2007.
[36]
W. B. Thompson, P. Willemsen, A. A. Gooch, S. H. Creem-Regehr,
J. M. Loomis, and A. C. Beall. Does the quality of the computer graph-
ics matter when judging distances in visually immersive environments?
Presence: Teleoperators and Virtual Environments, 13:560–571, 2004.
[37]
M. Tlauka, P. N. Wilson, M. Adams, C. Souter, and A. H. Young. An in-
vestigation into vertical bias effects. Spatial Cognition & Computation,
7(4):365–391, 2007. doi: 10.1080/13875860701684138
... In VEs presented by HMDs, distances are generally underestimated in action space (2 -30 m) [Cutting and Vishton 1995] and beyond [Adams et al. 2022;Buck et al. 2021Buck et al. , 2018Creem-Regehr et al. 2023;Kelly 2023;Rosales et al. 2019]. A recent meta-analysis [Kelly 2023] found that field-of-view (FOV), weight, and pixel density were technical factors of HMDs that contributed significantly to this underestimation, but there were still unexplained sources of variance. ...
Conference Paper
Full-text available
Most modern head-mounted displays (HMDs) do not support the full range of adult inter-pupillary distances (IPDs) (i.e., 45 – 80 mm) due to technological limitations. Prior work indicates that the mismatch between a user’s actual IPD and the IPD set in the HMD (“IPD mismatch”) can affect distance and size judgments in near space (0 – 2 m). Therefore, users with IPDs outside of the supported HMD IPD range may not perceive virtual environments (VEs) accurately. Across three experiments, we investigated whether IPD mismatch significantly affects peoples’ distance judgments at longer distances (4 – 7 m). In two of the experiments, we recruited participants with IPDs smaller than the minimum supported IPD of the HTC Vive Pro HMD. They estimated distances in action space using verbal estimation (Experiment 1) and blind walking (Experiment 2) measures in indoor VEs. We found that: (i) distances were underestimated in action space, and (ii) IPD mismatch had minimal to no effect on their distance judgments. In a third experiment, we investigated whether we could generalize our findings to participants with an IPD within the supported HMD IPD range. We were able to replicate our previous findings. Overall, our findings suggest that IPD mismatch in an HMD may not be a major factor in distance underestimation in action space in VEs.
... In the distance estimation task environment, participants estimated the egocentric distance of a virtual traffic cone target object placed at 4 m, 4.75 m, 5.5 m, 6.25 m, and 7 m in each trial. These distances were chosen because the majority of previous studies found people underestimate distances in action space [Adams et al. 2022;Buck et al. 2018;Creem-Regehr et al. 2023;Kelly 2022;Rosales et al. 2019]. A horizontal guideline was also rendered on the ground to represent the starting location of distance estimation. ...
Conference Paper
Full-text available
Omnidirectional treadmills provide one solution for locomoting through large virtual environments in confined physical spaces. Through two experiments, this paper evaluated locomotion on an omnidirectional treadmill (Cyberith Virtualizer Elite 2) by comparing it to natural walking in an open physical space. In Experiment 1, participants judged distances and completed a path integration task using the treadmill and natural walking. Participants walked further on the treadmill but had larger angular errors during path integration, potentially due to increased cybersickness. Experiment 2 varied path lengths during path integration and found that longer paths led to higher cybersickness scores but did not affect performance. The paper offers interpretations and suggestions for using omnidirectional treadmills in virtual reality.
... This indicates an overestimation of target distance relative to the reference line, which may reflect a partial misinterpretation of its greater height-in-the-field as a cue to distance 1,2 . This effect has been found for similar contexts in augmented reality 31 . ...
Article
Full-text available
Shadows in physical space are copious, yet the impact of specific shadow placement and their abundance is yet to be determined in virtual environments. This experiment aimed to identify whether a target’s shadow was used as a distance indicator in the presence of binocular distance cues. Six lighting conditions were created and presented in virtual reality for participants to perform a perceptual matching task. The task was repeated in a cluttered and sparse environment, where the number of cast shadows (and their placement) varied. Performance in this task was measured by the directional bias of distance estimates and variability of responses. No significant difference was found between the sparse and cluttered environments, however due to the large amount of variance, one explanation is that some participants utilised the clutter objects as anchors to aid them, while others found them distracting. Under-setting of distances was found in all conditions and environments, as predicted. Having an ambient light source produced the most variable and inaccurate estimates of distance, whereas lighting positioned above the target reduced the mis-estimation of distances perceived.
Article
The immersive augmented reality (AR) system necessitates precise depth registration between virtual objects and the real scene. Prior studies have emphasized the efficacy of surface texture in providing depth cues to enhance depth perception across various media, including the real scene, virtual reality, and AR. However, these studies predominantly focus on black-and-white textures, leaving a gap in understanding the effectiveness of colored textures.To address this gap and further explore texture-related factors in AR, a series of experiments were conducted to investigate the effects of different texture cues on depth perception using the perceptual matching method. Findings indicate that the absolute depth error increases with decreasing contrast under black-and-white texture. Moreover, textures with higher color contrast also contribute to enhanced accuracy of depth judgments in AR. However, no significant effect of texture density on depth perception was observed. The findings serve as a theoretical reference for texture design in AR, aiding in the optimization of virtual-real registration processes
Article
This research investigated how the similarity of the rendering parameters of background and foreground objects affected egocentric depth perception in indoor virtual and augmented environments. We refer to the similarity of the rendering parameters as visual ‘congruence’. Study participants manipulated the depth of a sphere to match the depth of a designated target peg. In the first experiment, the sphere and peg were both virtual, while in the second experiment, the sphere is virtual and the peg is real. In both experiments, depth perception accuracy was found to depend on the levels of realism and congruence between the sphere, pegs, and background. In Experiment 1, realistic backgrounds lead to overestimation of depth, but resulted in underestimation when the background was virtual, and when depth cues were applied to the sphere and target peg. In Experiment 2, background and target pegs were real but matched with the virtual sphere; in comparison to Experiment 1, realistically rendered targets prompted an underestimation and more accuracy with the manipulated object. These findings suggest that congruence can affect distance estimation and the underestimation effect in the AR environment resulted from increased graphical fidelity of the foreground target and background.
Article
Full-text available
Introduction: Augmented Reality (AR) systems are systems in which users view and interact with virtual objects overlaying the real world. AR systems are used across a variety of disciplines, i.e., games, medicine, and education to name a few. Optical See-Through (OST) AR displays allow users to perceive the real world directly by combining computer-generated imagery overlaying the real world. While perception of depth and visibility of objects is a widely studied field, we wanted to observe how color, luminance, and movement of an object interacted with each other as well as external luminance in OST AR devices. Little research has been done regarding the issues around the effect of virtual objects’ parameters on depth perception, external lighting, and the effect of an object’s mobility on this depth perception. Methods: We aim to perform an analysis of the effects of motion cues, color, and luminance on depth estimation of AR objects overlaying the real world with OST displays. We perform two experiments, differing in environmental lighting conditions (287 lux and 156 lux), and analyze the effects and differences on depth and speed perceptions. Results: We have found that while stationary objects follow previous research with regards to depth perception, motion and both object and environmental luminance play a factor in this perception. Discussion: These results will be significantly useful for developers to account for depth estimation issues that may arise in AR environments. Awareness of the different effects of speed and environmental illuminance on depth perception can be utilized when performing AR or MR applications where precision matters.
Article
Full-text available
Underestimation of egocentric distances in immersive virtual environments using various head-mounted displays (HMDs) has been a puzzling topic of research interest for several years. As more commodity-level systems become available to developers, it is important to test the variation of underestimation in each system since reasons for underestimation remain elusive. In this article, we examine several different systems in two experiments and comparatively evaluate how much users underestimate distances in each one. To observe distance estimation behavior, a standard indirect blind walking task was used. An Oculus Rift DK1, weighted Oculus Rift DK1, Oculus Rift DK1 with an artificially restricted field of view, Nvis SX60, Nvis SX111, Oculus Rift DK2, Oculus Rift consumer version (CV1), and HTC Vive were tested. The weighted and restricted field of view HMDs were evaluated to determine the effect of these factors on distance underestimation; the other systems were evaluated because they are popular systems that are widely available. We found that weight and field of view restrictions heightened underestimation in the Rift DK1. Results from these conditions were comparable to the Nvis SX60 and SX111. The Oculus Rift DK1 and CV1 possessed the least amount of distance underestimation, but in general, commodity-level HMDs provided more accurate estimates of distance than the prior generation of HMDs.
Article
Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.
Article
Distances tend to be underperceived in virtual environments (VEs) by up to 50%, whereas distances tend to be perceived accurately in the real world. Previous work has shown that allowing participants to interact with the VE while receiving continual visual feedback can reduce this underperception. Judgments of virtual object size have been used to measure whether this improvement is due to the rescaling of perceived space, but there is disagreement within the literature as to whether judgments of object size benefit from interaction with feedback. This study contributes to that discussion by employing a more natural measure of object size. We also examined whether any improvement in virtual distance perception was limited to the space used for interaction (1–5 m) or extended beyond (7–11 m). The results indicated that object size judgments do benefit from interaction with the VE, and that this benefit extends to distances beyond the explored space.
Conference Paper
The utility of mediated environments increases when environmental scale (size and distance) is perceived accurately. We present the use of perceived affordances (judgments of action capabilities) as an objective way to assess space perception in an augmented reality (AR) environment. The current study extends the previous use of this methodology in virtual reality (VR) to AR. We tested two locomotion-based affordance tasks. In the first experiment, observers judged whether they could pass through a virtual aperture presented at different widths and distances, and also judged the distance to the aperture. In the second experiment, observers judged whether they could step over a virtual gap on the ground. In both experiments, the virtual objects were displayed with the HoloLens in a real laboratory environment. We demonstrate that affordances for passing through and perceived distance to the aperture are similar in AR to those measured in the real world, but that judgments of gap-crossing in AR were underestimated. These differences across two affordances may result from the different spatial characteristics of the virtual objects (on the ground versus extending off the ground).
Article
PART I: THE LOGIC OF HIERARCHICAL LINEAR MODELING. Series Editor's Introduction to Hierarchical Linear Models. Series Editor's Introduction to the Second Edition. 1. Introduction. 2. The Logic of Hierarchical Linear Models. 3. Principles of Estimation and Hypothesis Testing for Hierarchical Linear Models. 4. An Illustration. PART II: BASIC APPLICATIONS. 5. Applications in Organizational Research. 6. Applications in the Study of Individual Change. 7. Applications in Meta-Analysis and Other Cases Where Level-1 Variances Are Known. 8. Three-Level Models. 9. Assessing the Adequacy of Hierarchical Models. PART III: ADVANCED APPLICATIONS. 10. Hierarchical Generalized Linear Models. 11. Hierarchical Models for Latent Variables. 12. Models for Cross-Classified Random Effects. 13. Bayesian Inference for Hierarchical Models. PART IV: ESTIMATION THEORY AND COMPUTATIONS. 14. Estimation Theory. Summary and Conclusions. References. Index. About the Authors.
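For readers unfamiliar with the model this book develops, the canonical two-level formulation can be sketched as follows. This is the standard textbook notation, not material quoted from the entry above.

```latex
% Canonical two-level hierarchical linear model (a sketch in standard notation).
% Level 1 (within group j): outcome for observation i in group j
Y_{ij} = \beta_{0j} + \beta_{1j} X_{ij} + r_{ij}, \qquad r_{ij} \sim N(0, \sigma^2)
% Level 2 (between groups): group-level predictors W_j model the coefficients
\beta_{0j} = \gamma_{00} + \gamma_{01} W_j + u_{0j}
\beta_{1j} = \gamma_{10} + \gamma_{11} W_j + u_{1j}
```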
Article
Does visually perceived distance differ when objects are viewed in augmented reality (AR), as opposed to the real world? What are the differences? These questions are theoretically interesting, and the answers are important for the development of many tablet- and phone-based AR applications, including mobile AR navigation systems. This paper presents a thorough literature review of distance judgment experimental protocols and results from several areas of perceptual psychology. In addition to distance judgments of real and virtual objects, the review also discusses previous work in measuring the geometry of virtual picture space, and considers how this work might be relevant to tablet AR. Then, the results of two experiments are presented, in which observers bisected egocentric distances of 15 and 30 meters in tablet-based AR and in the real world, in indoor corridor and outdoor field environments. In AR, observers bisected the distances to virtual humans, while in the real world, they bisected the distances to real humans. This is the first reported research that directly compares distance judgments of real and virtual objects in a tablet AR system. Four key findings were: (1) In AR, observers expanded midpoint intervals at 15 meters, but compressed midpoints at 30 meters. (2) Observers were accurate in the real world. (3) The environmental setting, corridor or open field, had no effect. (4) The picture perception literature is important in understanding how distances are likely judged in tablet-based AR. Taken together, these findings suggest the kinds of depth distortions that AR application developers should expect with mobile, and especially tablet-based, AR.
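To make the bisection measure concrete, the sketch below computes the signed midpoint error used to characterize expansion versus compression. The computation is generic; the conditions and values are hypothetical placeholders, not results from the paper.

```python
# Hypothetical bisection trials: (condition, full egocentric distance in m,
# judged midpoint distance in m). Illustrative values only.
trials = [
    ("AR, 15 m", 15.0, 8.1),
    ("AR, 30 m", 30.0, 13.9),
    ("Real, 15 m", 15.0, 7.5),
    ("Real, 30 m", 30.0, 15.1),
]

for condition, full_distance, judged_mid in trials:
    true_mid = full_distance / 2.0
    # Positive signed error: midpoint set beyond the geometric half
    # (an expanded near interval); negative: short of it (compressed).
    signed_error_pct = (judged_mid - true_mid) / true_mid * 100.0
    print(f"{condition}: judged {judged_mid:.1f} m vs true {true_mid:.1f} m "
          f"({signed_error_pct:+.1f}%)")
```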
Conference Paper
Perceiving an accurate sense of absolute scale is important for the utility of virtual environments (VEs). Research shows that absolute egocentric distances are underestimated in VEs compared to the same judgments made in the real world, but there are inconsistencies in the amount of underestimation. We examined two possible factors underlying this variation in the magnitude of distance underestimation. We compared egocentric distance judgments in a high-cost (NVIS SX60) and a low-cost (Oculus Rift DK2) HMD using both indoor and outdoor highly realistic virtual models. Performance matched the intended distance more accurately in the Oculus than in the NVIS, and regardless of the HMD, distances were underestimated more in the outdoor than in the indoor VE. These results suggest promise in the future use of consumer-level wide field-of-view HMDs for space perception research and applications, and highlight the importance of considering the environmental context as a factor in the perception of absolute scale within VEs.