Spatial Updating and Simulator Sickness during
Steering and Jumping in Immersive Virtual Environments
Tim Weißker*, André Kunert†, Bernd Fröhlich‡, Alexander Kulik§
Virtual Reality and Visualization Research, Bauhaus-Universität Weimar
ABSTRACT
Many recent head-mounted display applications and games imple-
ment a range-restricted variant of teleportation for exploring virtual
environments. This travel metaphor, referred to as jumping, only
allows teleportation to locations in the currently visible part of the
scene. In this paper, we present a formal description and classi-
fication scheme for teleportation techniques and its application to
the classification of jumping. Furthermore, we present the results
of a user study (N = 24) that compared jumping to the more con-
ventional steering with respect to spatial updating and simulator
sickness. Our results show that despite significantly faster travel
times during jumping, a majority of participants (75%) achieved sim-
ilar spatial updating accuracies in both conditions (mean difference
0.02°, σ = 5.05°). In addition, jumping induced significantly less
simulator sickness, which altogether justifies it as an alternative to
steering for the exploration of immersive virtual environments. How-
ever, application developers should be aware that spatial updating
during jumping may be impaired for individuals.
Index Terms: I.3.6 [Computer Graphics]: Methodology and
Techniques—Interaction techniques; H.1.2 [Models and Principles]:
User/Machine Systems—Human information processing
1 INTRODUCTION
The interactive exploration and understanding of large virtual en-
vironments such as buildings, cities or whole landscapes requires
travel. A straightforward and intuitive metaphor of travel is steering,
during which users continuously control the direction and speed of
movement. The resulting visual motion flow, however, contradicts
motion cues of the vestibular system. Users visually experience self-
motion, but they do not feel the corresponding acceleration. This is
considered one of several reasons for simulator sickness [19,21].
Direct teleportation to points of interest avoids these conflicting
cues, but the spatial understanding of the connecting routes may also
be impaired, which can negatively impact one’s spatial awareness of
the scene as a whole [5]. In particular, when the teleportation target
is located beyond vista space [24], the spatial relation between the
target and the origin cannot be traced, which leads to disorientation
in unknown environments without the help of additional mediators.
Thus, both steering and teleportation can have undesirable side
effects that should be carefully considered.
As an alternative, various recent head-mounted display appli-
cations and games implement jumping, which limits the range of
possible teleportation targets to vista space. Consequently, distant
destinations can only be reached with a sequence of jumps along a
route. In contrast to teleportation beyond vista space, the traveled
path between two locations can be integrated based on perceived
*e-mail: tim.weissker@uni-weimar.de
†e-mail: andre.kunert@uni-weimar.de
‡e-mail: bernd.froehlich@uni-weimar.de
§e-mail: kulik@uni-weimar.de
Figure 1: During steering, the user continuously perceives the scene
along the path to the destination. When teleporting, the scene is
viewed from two distinct points only. The jumping metaphor is located
in between. Each of the three techniques offers a different extent of
spatial information for path integration (indicated by red dots).
spatial information during the specification of intermediate jumping
targets. We refer to this process as forward path integration. From
this stance, jumping is an intermediate technique between steering
and regular teleportation (Figure 1).
In this paper, we explore the design space of teleportation tech-
niques in more detail, derive a comprehensive classification scheme
and use it for the classification of jumping. Furthermore, we report
on a user study that compares jumping and steering with respect
to their effects on spatial awareness and simulator sickness. For
our study task, we designed a parametric virtual city for generating
routes to be traveled by users in a head-mounted display. At the end
of each route, we measured spatial updating performance by asking
users to point back to the start.
Our work is motivated by the increasing use of jumping as a travel
metaphor in recent virtual reality games. Reduced symptoms of
simulator sickness are often mentioned as a reason for its popularity,
although recent work did not reveal significant differences between
steering, jumping and walking in place [7]. Furthermore, prior
work indicated disadvantages of passive teleportation techniques [5]
concerning spatial awareness. However, previous research did not
analyze the effects of active jumping on spatial awareness. Our work
bridges this gap and provides the following contributions:
• a formal description of the design space of teleportation tech-
  niques and the classification of various implementations, e.g.
  jumping as a range-restricted variation

• the design of a parametric spatial orientation task to measure
  spatial updating performance

• a statistical evaluation of spatial updating performances after
  steering and jumping and a follow-up analysis to reveal that
  a subset of 18 participants (75%) achieved similar spatial up-
  dating accuracies, indicated by a mean accuracy difference of
  0.02° (σ = 5.05°)

• statistical evidence that jumping induced significantly less
  simulator sickness symptoms and a follow-up analysis to reveal
  that, in contrast, a subset of 15 participants (62.5%) was
  similarly affected, indicated by a mean SSQ score difference
  of 0.75 (σ = 5.86)
Our results indicate that jumping is a viable alternative to steering
for exploring and understanding immersive virtual environments.
2 RELATED WORK
Spatial awareness is a complex cognitive construct and challeng-
ing to measure. To study effects of travel techniques on spatial
awareness, we analyze spatial orientation tasks in the literature and
motivate the evaluation of user performance in spatial updating
tasks. Furthermore, we review related work on the classification of
virtual travel techniques and discuss their limitations with respect to
the disambiguation of recent teleportation implementations.
2.1 Evaluation of Spatial Awareness
Bowman et al. [5] defined spatial awareness as “the ability of the
user to retain an awareness of her surroundings during and after
travel”. In a study, they measured the time after travel to find a
previously seen object in the scene as a quantification of spatial
awareness. A prerequisite of this approach is that the searched item
can be seen from both locations.
More generally, Siegel and White [32] proposed a tripartite divi-
sion of spatial knowledge into landmark, route and survey knowl-
edge. When learning the layout of a previously unknown environ-
ment, acquiring survey knowledge is the “key to successful wayfind-
ing” [12] as it allows humans to explicitly locate and orient them-
selves within a cognitive map of the environment. However, even
when this map is not yet present, the body can use a “process that
automatically keeps track of where relevant surrounding objects are
while we locomote, without much cognitive effort or mental load”,
which is Riecke’s definition of spatial updating [29, Section 12.2].
Spatial updating tests commonly require participants to estimate
the relative location of places in the scene after a series of active
or passive body movements (e.g. [4,8, 26,28]). Riecke et al. [30]
showed that optical flow information (like during steering in Virtual
Reality) can provide sufficient information to perform these tasks
without any vestibular cues. For a detailed overview of spatial up-
dating methodologies and further related studies, we refer to the
PhD theses of Riecke [29, Chapter 12] and Vuong [34, Chapter 1].
Generally, spatial updating seems to be a fundamental building block
for the acquisition of higher-level spatial skills like route and survey
knowledge. On a lower level, spatial updating builds on correct
judgements of distances and angles.
In earlier experiments on spatial updating in real and virtual
environments, triangle completion tasks were commonly applied
(e.g. [22, 23]). Fujita et al. [16] introduced an error model for these
task setups, which distinguishes three consecutive phases that are
potentially prone to errors: encoding during travel, mental spatial
reasoning after travel and the execution of the task. When the same
task is performed with different travel techniques, this corresponds
to different inputs in the encoding phase. We decided to use pointing
accuracy to the route’s origin as the measure of spatial updating
since the expected errors due to hand tremor and tracking noise in
the execution phase are smaller than those related to walking back,
where each step can introduce variance.
2.2 The Design Space of Travel Techniques
Steering and teleportation techniques mark the far ends of a pa-
rameter space between continuous visual motion (steering) and the
immediate change of location and orientation (teleportation). In
terms of steering, teleportation can be understood as travel at infinite
velocity [6]. Steering, on the other hand, can also be understood as
a sequence of teleports between infinitely close locations. In this
sense, teleportation in vista space (jumping) offers a compromise
between both extremes. The design space of travel techniques, how-
ever, involves many more parameters than the distance of sub-steps
and the travel velocity. Bowman et al., for example, suggested the
classification of travel techniques in terms of their methods for direc-
tion/target selection, and velocity/acceleration selection as well as
their input conditions [6]. Later, Bowman, Davis et al. extended this
approach with a more fine-grained decomposition of the target selec-
tion subtask and start/stop conditions [4]. Also, the classification by
metaphor was suggested [3,6]. Most existing steering techniques can
be unambiguously classified on this basis: as a combination of input
conditions with specific methods to control the motion direction and
the motion velocity or acceleration.
In addition to movement control (travel), navigation techniques
may offer mediators and visual effects to improve usability. Darken
and Peterson, for example, explored mediators as wayfinding
aids [11]. Fernandes and Feiner recently showed that a visual effect
like the dynamic reduction of the user’s field-of-view during travel
significantly reduces symptoms of simulator sickness [14]. Bow-
man et al. considered such interface extensions separately from the
classification of different steering techniques [3].
The mentioned taxonomies support the general classification of
teleportation as a technique disparate from steering, i.e. a specific
type of discrete target selection combined with infinite motion ve-
locity and an input condition to initiate the transition. However, they
do not seem appropriate for a fine-grained classification of the wide
range of teleportation techniques that have been suggested recently.
We considered an extension of existing taxonomies to account for
further relevant characteristics of teleportation, but the distinction
between travel direction (steering) and travel target (teleportation)
results in different interaction sequences that are not entirely com-
patible. In contrast to steering through a 3D scene, for example,
teleportation also allows transitions between two locations in image
space (e.g. blending between both views). Moreover, the selection
of a target location allows related information to be provided before
the transition occurs. In fact, teleportation techniques often build
on the creation or selection of visual references for the target loca-
tion [17, 20], which would be considered as auxiliary wayfinding
aids for most steering techniques. Therefore, we suggest distinguish-
ing steering and teleportation by metaphor, which is also motivated
by the different interaction goals of the user. When the experience of the
traveled route is of interest, steering techniques are more appropriate.
When, instead, only particular locations are relevant, teleportation
techniques should be considered. We propose a novel classification
scheme for teleportation techniques in the next section.
3 A CLASSIFICATION SCHEME FOR TELEPORTATION TECHNIQUES
Our classification scheme for teleportation techniques builds on
the decomposition of the teleportation process into four subsequent
stages: target specification, pre-travel information, transition and
post-travel feedback. Concrete implementations can be described as
a specific configuration of mechanisms for each stage (Figure 2).
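For illustration, the four-stage decomposition can be expressed as a small data model. The following sketch is our own illustrative encoding (not part of the paper's published code); the option names are assumptions drawn from the examples discussed in this section.

```python
from dataclasses import dataclass
from enum import Enum, auto

class TargetSpecification(Enum):
    NONE = auto()               # passive teleportation, no user involvement
    POINTING = auto()           # e.g. a straight or parabolic ray
    GAZE = auto()
    PREVIEW_SELECTION = auto()  # choosing among preview perspectives

class PreTravelInfo(Enum):
    NONE = auto()
    TARGET_MARKER = auto()
    AVATAR_PREVIEW = auto()
    PORTAL_VIEW = auto()

class Transition(Enum):
    INSTANT = auto()
    FADE_TO_BLACK = auto()
    PORTAL_MAXIMIZATION = auto()
    SPEEDED_MOTION = auto()

class PostTravelFeedback(Enum):
    NONE = auto()
    PATH_OVERVIEW = auto()
    ORIGIN_PORTAL = auto()

@dataclass
class TeleportationTechnique:
    """One concrete technique as a configuration of the four stages."""
    name: str
    target: TargetSpecification
    pre_travel: PreTravelInfo
    transition: Transition
    post_travel: PostTravelFeedback
    range_restricted: bool  # True for jumping (targets limited to vista space)

# Jumping as classified here: pointing-based target indication,
# no extra mediators, instant transition, range restricted to vista space.
jumping = TeleportationTechnique(
    "Jumping", TargetSpecification.POINTING, PreTravelInfo.NONE,
    Transition.INSTANT, PostTravelFeedback.NONE, range_restricted=True)
```

Other implementations from the literature fit the same scheme, e.g. Photoportals would combine `PORTAL_VIEW` pre-travel information with a `PORTAL_MAXIMIZATION` transition.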
3.1 Target Specification
The first stage of the teleportation process is the specification of
the target’s location and orientation. In some cases, this step is
implemented without any user involvement to control the variables
in formal user studies [1, 5].
Active target specification by the user, on the other hand, requires
an appropriate input method. This can be pointing with a tracked
input device in the simplest case, but other selection mechanisms
such as gaze [2], direct walking into a gallery portal [13] or the
selection among preview perspectives using dedicated hardware [20]
have been demonstrated.

Figure 2: Four stages of the teleportation process with common options for their implementation.
The scope of reachable targets can be unrestricted or restricted,
e.g. to vista space. This part of the taxonomy relates to the dis-
ambiguation of jumping and teleportation. With a restricted target
distance, several jumps must be performed to reach distant destina-
tions. Unrestricted target specification, on the other hand, requires
additional mediators to support selections beyond vista space (e.g. a
map or World-in-Miniature [33]). Once the target location is speci-
fied, the target orientation must be determined either explicitly or
implicitly. In many implementations, the users will maintain their
previous orientation in the scene. Choosing a target from a set of
preview perspectives also implies an orientation, but it may differ
from the one before the transition. In contrast, Pausch et al. allowed
users to explicitly control both target location and orientation in a
World-in-Miniature [27]. An example for an explicit orientation
mechanism from an egocentric perspective was implemented by
Bozgeyikli et al. [7] and can also be found in the game “The Gallery”
(Cloudhead Games Ltd.). Both approaches rely on the manipulation
of an orientation widget after the target location was specified.
3.2 Pre-Travel Information
In the second stage, the system may give the user additional infor-
mation about the teleportation to be performed. First, the visual
feedback given during target indication (e.g. pointing ray, gallery
preview, etc.) can already be considered pre-travel information. Ad-
ditional mediators can provide further information after the target
was successfully selected. Bolte et al., for instance, suggested gaze-
based placement of location markers that can be corrected before
the teleportation is applied [2]. Bakker et al., instead, used numbers
to indicate the target location and orientation [1]. The game “Spell
Fighter VR” (Kubold Games) shows an abstract avatar walking to
the target before the actual transition begins. Preview techniques
like the reorientation mechanism by Freitag et al. [15], Photoportals
by Kunert et al. [20] or the jumping technique used in the game
“Budget Cuts” (Neat Corp.) open a portal view to the indicated travel
target. This allows users to prepare for the destination and to apply
adjustments if they are not yet satisfied.
3.3 Transition
The transition stage is the core of teleportation, in which the actual
travel from the origin to the target happens. The simplest form is
the instant transition, in which the old view in one frame is directly
replaced by the new one in the next frame. Some games like “The
Lab” (Valve) implement fade-to-black transitions, which animate the
old view to a black screen, perform the teleportation and then fade
back into the new view. When portal views are used as pre-travel
information, their maximization can be used for a seamless transition as
suggested by Kunert et al. [20] and used in the game “Budget Cuts”
(Neat Corp.). Another approach is speeded motion transitions,
which move the camera very quickly from the origin to the target
location. Examples of this transition mode were implemented by
Bolte et al. [2] and in the game “Raw Data” (Survios).
3.4 Post-Travel Feedback
Additional information could be provided to improve the sense
of orientation and spatial awareness after the transition. So far, we
could not find implementation examples of such post-travel feedback,
but we believe that path visualizations in an overview, simple arrows
or even portal views to the teleportation origin could help the users
to maintain a sense of orientation and to recover if necessary.
4 AN EXPERIMENTAL TASK FOR THE EVALUATION OF SPATIAL UPDATING
One of the most common experimental tasks to measure spatial
updating performance requires participants to travel along a given
route and point to its origin after they have reached the terminal
location. In many studies, triangular route layouts were used since
they constitute the most elementary unit of any navigation path. Only
the lengths of two path segments (L1, L2) and their enclosing angle
α may vary. If the rotation control of the tested travel techniques does
not differ (as in our case), the angle can be kept constant to focus
on effects of varying segment lengths (e.g. α = ±90°). With a fixed
angle, however, the responses in triangle completion tests will be
prone to false positives. The maximum range of admissible pointing
angles is less than 90°, and if both path segments do not differ
extremely, the correct pointing direction towards the origin will be
close to 45° relative to the last path segment. Pointing responses
based on guessing may thus still be close to the correct answers.
As a result, we decided to add a third segment to the route layout
(see L1, L2 and L3 in Figure 3). This extension increases the range
of admissible pointing directions to a maximum range close to 180°.
Additionally, we used the simpler triangle task to obtain baseline
measurements of pointing accuracy to a hidden target location, which
is “just around the corner” (see LB1 and LB2 in Figure 3).
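The correct pointing direction at a route's end follows from simple vector geometry. The sketch below is our own reconstruction, not the study's code; it assumes both 90° turns go in the same direction (the mirrored layouts are symmetric), accumulates the three segments in 2D, and returns the angle between the final travel heading and the direction back to the start.

```python
import math

def point_back_angle(l1, l2, l3):
    """Angle (degrees) between the final travel heading and the
    direction back to the route's start, for a three-segment route
    with two 90-degree turns in the same direction."""
    # Walk the route in 2D: start at the origin heading along +y.
    x, y = 0.0, l1          # after segment L1
    x += l2                 # after a right turn and segment L2
    y -= l3                 # after another right turn and segment L3
    back = (-x, -y)         # vector from the end point to the start
    heading = (0.0, -1.0)   # final heading after the second turn
    dot = heading[0] * back[0] + heading[1] * back[1]
    return math.degrees(math.acos(dot / math.hypot(*back)))
```

Under this assumption the function reproduces the response angles γ of the route parametrizations in Table 1 up to rounding, e.g. exactly 90° for segment lengths of 120 m, 60 m and 120 m.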
4.1 Task Sequence
Each task starts with a measurement of baseline pointing accuracy in
the mentioned triangle completion task. Participants travel along the
segments LB1 and LB2. At the end of the second path segment (top
red circle in Figure 3), they are asked to point back to the start posi-
tion. LB1 and LB2 can be deliberately kept short to make this triangle
task very simple to complete, so the results can serve as a reference
measure of the maximally attainable accuracy. Thereafter, partici-
pants travel back to the start and continue along the route (L1, L2, L3).
At the end of the third path segment of this longer route (bottom
red circle in Figure 3), participants are asked again to point to the
start position. In the next step, participants are passively teleported
back to the start (facing along L1), and the task is repeated without
the initial baseline measurement. As a result, each trial involves
three different measures of spatial updating performance: baseline
accuracy, accuracy after traveling along the three path segments (first
point-to-start) and the repetition thereof (second point-to-start). In
all three pointing tasks, the angular mismatch between the correct
and the given response serves as a dependent variable.

Figure 3: Parametrizable route layout used in the spatial updating
trials. From the start point, a simple route (LB1, LB2) and a more
complex route (L1, L2, L3) emerge. At each end, subjects were asked
to point back to the start position. In our study, both routes could also
appear mirror-inverted.
4.2 Virtual Environment
The proposed route layout facilitates the creation of many individual
task instances by varying the lengths of the path segments. We
devised a scene generator that, given the lengths of the three seg-
ments, automatically creates urban virtual environments by placing
buildings and decorating objects. We made the Python code of
this generator publicly available on our website to facilitate repro-
duction¹. We use four different house models of a similar style in
combination with five differently colored textures. The houses are
placed with random gaps between them, and the streets are visually
enhanced by the random placement of trees, benches, lanterns and
cars. The corner points of the current route to be traveled are high-
lighted by cones with arrows on top to indicate the next intermediate
target. Once a cone is passed, it disappears such that its location and
thus also the distance of a segment cannot be estimated by looking
back. Figure 4 shows the virtual environment from a user perspective
at three exemplary moments during the experimental task.
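The published generator is considerably more elaborate, but its core placement step can be sketched as follows. This is a hypothetical simplification: the house widths and gap range are invented for illustration and are not the values used in the study.

```python
import random

def place_houses(street_length, house_widths=(8.0, 10.0, 12.0, 14.0),
                 gap_range=(1.0, 4.0), rng=None):
    """Place houses with random gaps along one side of a street.

    Returns a list of (x_start, width) tuples, where x runs along
    the street from 0 to street_length.
    """
    rng = rng or random.Random()
    placements, x = [], 0.0
    while True:
        width = rng.choice(house_widths)   # pick one of the house models
        if x + width > street_length:      # next house would overshoot
            break
        placements.append((x, width))
        x += width + rng.uniform(*gap_range)  # advance past a random gap
    return placements
```

Decorating objects (trees, benches, lanterns, cars) can be scattered with the same pattern; varying the per-trial segment lengths then yields a fresh street layout for every route.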
4.3 Distractor Task
User performance in spatial updating experiments depends on the
judgements of distances during travel. Montello [25] described three
complementary sources of information for perceiving the distances
of a motion path: the number of environmental features, the travel
time and the travel effort. Several pilot tests of the described exper-
imental task showed that people can actively focus on these cues
and develop distance judgement strategies based on counting. In the
case of steering, for instance, some people tried to count the time
needed to travel each street; others focused on counting the number
of houses. We incorporated a distractor task to avoid such strategies.
During travel in both conditions, participants are asked to listen to
and repeat two-digit numbers verbalized by the experimenter. Once
the answer is given, the next number follows. When the route’s end-
point is reached, the distraction stops such that the participant can
focus solely on completing the pointing task. This task is very easy
to fulfill without much cognitive effort. In a pilot test, we validated
the effectiveness of this task: users were no longer capable of pursuing
counting strategies when the distraction was present.
¹ http://www.uni-weimar.de/vr/steering-jumping
5 USER STUDY
Prior work of Bowman et al. showed that “the level of spatial aware-
ness was significantly decreased with the use of a jumping tech-
nique” in comparison to two other conditions that implemented
continuous movement between locations [6]. In all three conditions,
however, participants were moved passively and without any pre-
or post-travel information. It is not surprising that they lost spatial
awareness after the instant transition to an unknown location and
orientation. In most applications, however, users actively control
their virtual travel, e.g. by selecting target locations. We expected
that this deliberate selection of a travel target allows users to prepare
for the transition and maintain a certain level of spatial awareness.
Nevertheless, the continuous experience of the traveled route during
steering seems to offer more information in that regard. We con-
ducted a formal user study with 24 participants to investigate the
effects of user-controlled steering and jumping techniques on spatial
updating performance and simulator sickness. The experimental
task described in the previous section was used for this purpose.
5.1 Experimental Setup
The VR setup consisted of an HTC Vive² head-mounted display with
its lighthouse tracking system offering both position and orientation
tracking. The tracking space was approximately 3m x 3m in size,
and the cables were mounted to the ceiling to avoid tripping over
them. Input for both travel techniques was obtained using a Vive
handheld controller. The virtual content was rendered using the
Avango-Guacamole framework [31] with an update rate of 90Hz.
We measured an end-to-end latency of 27 ms without and 12.5 ms
with the prediction methods of OpenVR³ applied. Questionnaires
were completed on a regular 2D desktop workstation.
5.2 Conditions
For the initial evaluations carried out in this paper, we tried to keep
steering and jumping as simple as possible, so optional mediators
or visual effects were deliberately omitted. Travel movements were
always restricted to ground level along the streets that represented
the pre-defined routes. Collisions with decorating objects (cars,
trees, lamps and benches) were ignored.
Following the taxonomy of Bowman et al. [6], the Steering condi-
tion can be described as a combination of direction selection through
a continuous 3D pointing gesture and velocity control on a con-
tinuous range with a finger-operated lever on the Vive controller.
The maximum steering speed was set to 50 km/h. In a pilot test
comparing gaze-directed with pointing-directed steering, partici-
pants clearly preferred the latter because of the ability to freely look
around during travel.
The Jumping condition can be described according to the classifi-
cation scheme suggested in Section 3. We implemented a parabolic
ray for pointing-based target indication in vista space with implicit
orientation specification. The maximum reach of this ray was 180 m,
which allowed covering the distance of any straight street segment
in our study. Therefore, each path segment could potentially be
traveled with a single jump. The implemented transition was an
instant transition, and no additional pre- or post-travel information
were given to the user. We hypothesized that if significant effects
between steering and jumping exist, they will most likely become
visible when the difference between both techniques is maximal.
As a result, participants were instructed to use as large jumps as
possible in the Jumping condition.
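The landing point of a parabolic pointing ray can be obtained by intersecting a ballistic arc with the ground plane. The sketch below is our own approximation, not the study's implementation; the launch speed and gravity constants are illustrative, and the maximum reach would additionally be clamped in practice.

```python
import math

def parabolic_landing(origin, direction, speed=40.0, gravity=9.81):
    """Intersect a ballistic arc, launched from `origin` (x, y, z with
    y up) along the unit vector `direction`, with the ground plane
    y = 0. Returns the landing point, or None if the arc never lands."""
    vx, vy, vz = (speed * d for d in direction)
    # Solve origin_y + vy*t - 0.5*g*t^2 = 0 for the later (positive) root.
    a, b, c = -0.5 * gravity, vy, origin[1]
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return (origin[0] + vx * t, 0.0, origin[2] + vz * t)
```

Sampling the arc at intermediate times gives the visible curve of the ray; the returned point is where the target cone and the implicit orientation would be placed.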
5.3 Procedure
Initially, each participant signed an informed consent form and
provided basic demographic information. Thereafter, all participants
² http://www.vive.com
³ https://github.com/ValveSoftware/openvr
(a) Colored cones symbolize waypoints of the
current route. The green and blue cones mark the
start and a checkpoint. Arrows on top indicate the
direction to continue traveling.
(b) The end of a route is marked by a red cone.
When entering the surrounding area, the view
changes to the one shown in (c) and spatial updat-
ing is tested.
(c) Spatial updating of the start is tested by at-
taching an arrow to the controller and asking the
participant to indicate the straight-line path to the
expected position of the green cone.
Figure 4: Three user perspectives of the virtual environment during the spatial updating task (screenshots from the control monitor).
Table 1: Route parametrizations used in both conditions. The correct
pointing angle γ when arriving at the route's end is shown depending
on the three segment lengths L1, L2 and L3.

Route ID | γ    | L1    | L2    | L3
1        | 30°  | 177 m | 81 m  | 42 m
2        | 60°  | 138 m | 54 m  | 108 m
3        | 90°  | 120 m | 60 m  | 120 m
4        | 120° | 57 m  | 117 m | 126 m
5        | 150° | 39 m  | 81 m  | 180 m
tested both travel techniques subsequently (within-subjects design)
in counterbalanced order. Each test session involved three training
and five recorded spatial updating trials. For each trial, participants
were placed within a new virtual street layout as illustrated in Figures
3 and 4. During all trials, the first path segment of the initial triangle-
completion test (LB1) was fixed to 15 m while the second one (LB2)
varied between 15 m and 20 m. All three-segment routes of the
actual test (L1, L2, L3) had an overall length of 300 m and appeared
in a randomized order. The individual segment lengths for each
recorded trial are given in Table 1 together with the correct response
angles γ for pointing back to the start. Each test session concluded
with a Simulator Sickness Questionnaire (SSQ) [18]. Between both
sessions, participants took a break of five minutes. After completing
both conditions, participants filled in a concluding questionnaire on
subjective preferences with respect to different application cases and
received an expense allowance of 10 Euros.
5.4 Participants
In total, 24 participants (17 males, 7 females) aged between 19 and
38 years (M = 25.54, σ = 4.88) participated in the user study. All
of them were either students or employees of our university, with
half of them having a background in Computer Science. On a Likert
scale from 0 to 6, participants rated their previous experiences with
Virtual Reality rather low (Mode = 0, Mdn = 2).
5.5 Dependent Variables
In each trial, three errors were captured as measures of spatial up-
dating performance: the baseline error, the first point-to-start error
and the second point-to-start error. In addition, the travel times to
complete the routes along (L1, L2, L3) were captured. Each of these
values was measured during five consecutive trials per condition.
These repeated measures were averaged to single scores per user
and travel technique. From the Simulator Sickness Questionnaire
(SSQ), we derived scores on nausea (N), oculomotor disturbance
(O) and disorientation (D) as well as a total simulator sickness score
(T) as advised in [18].
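The SSQ scoring in [18] sums the 0-3 symptom ratings assigned to each factor and scales the resulting raw sums with fixed weights; a minimal sketch of this last step, using the standard Kennedy et al. weights:

```python
def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
    """Weighted SSQ scores from the unit-weighted (raw) factor sums
    of the 0-3 symptom ratings, following Kennedy et al. [18]."""
    return {
        "N": nausea_raw * 9.54,          # nausea
        "O": oculomotor_raw * 7.58,      # oculomotor disturbance
        "D": disorientation_raw * 13.92, # disorientation
        # total score: sum of the three raw factor sums, rescaled
        "T": (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74,
    }
```

Note that some symptoms load on more than one factor, so the raw sums are not disjoint; the mapping of the 16 items to factors is given in [18].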
5.6 Hypotheses
Based on findings from prior work (see Section 2.1) and the research
questions of this paper, we set up the following hypotheses:
H1: The travel time is lower for jumping than for steering.
As the implemented jumping technique allows covering large dis-
tances with just one jump and since participants were instructed to
complete the route with as few jumps as possible, it is reasonable to
assume that the routes are completed faster compared to steering.
H2: The baseline error is smaller than the other pointing errors.
The pointing task after traveling along (LB1, LB2) should be very
simple to complete, thus giving baseline measurements on how
accurate participants can become in solving spatial updating tasks
of this study. More specifically, it serves as a reference measure for
errors during the execution phase of our spatial updating task.
H3: Point-to-start errors are higher for jumping than for steering.
As illustrated in Figure 1, jumping allows the scene to be perceived
from fewer points compared to steering. As the main research question
of this paper, H3 investigates if this has negative effects on spatial
updating accuracy.
H4: Reported simulator sickness symptoms are higher for steering
than for jumping.
H4 aims at confirming one of the main motivations to implement
jumping techniques. Since jumping avoids conflicting motion cues
between the visual and the vestibular systems, the obtained simulator
sickness scores should be lower compared to steering.
Figure 5: Boxplots of pointing errors for each of the three pointing
tasks separated by travel technique (green: steering, blue: jump-
ing). Interquartile ranges (IQRs) are represented by boxes while the
whiskers show the full data ranges without outliers. Outliers (distance
to box > 1.5 · IQR) and extreme outliers (distance to box > 3 · IQR)
are indicated by circles and asterisks, respectively.
6 RESULTS AND EVALUATION
In this section, we evaluate and interpret the data of our user study
according to the given hypotheses. For this purpose, the means,
medians and standard deviations are abbreviated by M, Mdn and σ,
respectively. When analyzing data for normality, visual inspections
of the normal QQ-plots were used in combination with Shapiro-
Wilk tests. For effect sizes r, the threshold values 0.1 (small), 0.3
(medium) and 0.5 (large) introduced by Cohen [10] were applied.
N = 24 holds in all statistical tests and analyses.
6.1 Travel Times
The average travel times along (L1, L2, L3) for all participants were
non-normally distributed for both travel techniques. Hence, a
Wilcoxon signed-rank test was used for statistical comparison. The
travel time was significantly longer with the steering technique
(Mdn = 26.82 s, σ = 6.54 s) as compared to jumping (Mdn = 13.65 s,
σ = 9.77 s), W = 23, p < 0.001, r = 0.74. This result supports H1.
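As a sketch of this comparison, the following fragment runs the same kind of paired Wilcoxon signed-rank test on fabricated per-participant travel times (the numbers are illustrative only, not the study data):

```python
import numpy as np
from scipy import stats

def compare_travel_times(steering_s, jumping_s):
    # Paired, non-parametric comparison of per-participant travel times;
    # chosen because the data were non-normally distributed.
    w, p = stats.wilcoxon(steering_s, jumping_s)
    return {"W": float(w), "p": float(p),
            "mdn_steering": float(np.median(steering_s)),
            "mdn_jumping": float(np.median(jumping_s))}

# Fabricated example data for 8 participants (seconds per route):
steering = [25.1, 27.3, 30.2, 22.8, 26.9, 28.4, 24.5, 31.0]
jumping = [12.0, 14.2, 15.1, 11.8, 13.9, 16.0, 12.7, 14.4]
result = compare_travel_times(steering, jumping)
```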
6.2 Pointing Accuracy
Figure 5 shows the distributions of pointing errors separated by task
and travel technique. All errors were non-normally distributed.
6.2.1 Baseline Measurements
The average pointing error in the baseline task (triangle completion
after traveling along
(LB1,LB2)
) was compared individually against
the average pointing errors in the two more challenging spatial
updating tasks using Wilcoxon signed-rank tests with a Bonferroni-
corrected
α
-level of
α=0.025
. The baseline error (
Mdn =5.43
,
σ=3.62
) was significantly lower than both other pointing errors
(both W=299, p<0.001, r=0.869), which supports H2.
6.2.2 Accuracy by Travel Technique
The pointing accuracy was compared with a Wilcoxon signed-rank
test for each of the three subtasks and a Bonferroni-corrected
α-level of α = 0.017. No significant difference between steering and
jumping could be found in the baseline task (W = 188, p = 0.278,
r = 0.222), the first point-to-start task (W = 198, p = 0.17, r = 0.28)
and the second point-to-start task (W = 202, p = 0.137, r = 0.303).
As a result, H3 must be rejected. However, the effect sizes indicate
relevant differences between both techniques for individuals, which
is why a follow-up analysis was carried out.
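The Bonferroni corrections used here and in Section 6.2.1 simply divide the family-wise α by the number of tests in the family. A minimal sketch (the function names are ours):

```python
def bonferroni_alpha(alpha, n_tests):
    # Per-test significance level for a family of n_tests comparisons.
    return alpha / n_tests

def survives_correction(p_values, alpha=0.05):
    # Flags which tests remain significant under the corrected level.
    corrected = bonferroni_alpha(alpha, len(p_values))
    return [p < corrected for p in p_values]

# Three subtask comparisons -> 0.05 / 3 ≈ 0.017, the corrected level
# used above; none of the reported p-values survive it.
flags = survives_correction([0.278, 0.17, 0.137])
```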
For this purpose, the pointing accuracies of both point-to-start
repetitions were averaged to a single performance score per partici-
pant in order to compare the overall spatial updating performance
on the more complex routes. This seems reasonable since no signifi-
cant learning effects were observed between the first and the second
run (steering: W = 124, p = 0.458, r = 0.152; jumping: W = 103,
p = 0.417, r = 0.274). A scatterplot of the resulting performance
scores is given in Figure 6(a). The dotted diagonal line represents
no accuracy difference between steering and jumping, so accuracy
differences between both techniques increase with the distance of
a point to this line. The data points of most participants are closely
scattered around the diagonal line. However, six participants
(indicated in blue and orange) achieved notably lower accuracies in
the jumping condition (more than 10° worse). Overall, the mean
accuracy difference between jumping and steering was 5.09°
(σ = 10.57°). When excluding the six special cases, however, the
remaining data points (N = 18) are almost equally distributed around
the center line (mean accuracy difference between techniques: 0.02°).
Additionally, the standard deviation of the difference scores
(σ = 5.05°) is smaller than the average baseline error, which
indicates that similar spatial updating performances for both travel
techniques were achieved in this reduced sample.
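This per-participant follow-up amounts to computing difference scores and flagging participants above a threshold. A sketch with fabricated pointing errors in degrees; the 10° threshold matches the criterion above, while the data and function names are our own:

```python
import numpy as np

def accuracy_differences(jumping_err_deg, steering_err_deg, threshold_deg=10.0):
    # Difference scores (jumping minus steering) per participant and the
    # indices of participants pointing notably worse during jumping.
    diffs = np.asarray(jumping_err_deg) - np.asarray(steering_err_deg)
    flagged = [i for i, d in enumerate(diffs) if d > threshold_deg]
    return diffs, flagged

# Fabricated mean pointing errors in degrees for six participants:
jumping = [8.0, 12.5, 31.0, 9.1, 25.4, 7.7]
steering = [7.5, 11.9, 10.2, 9.6, 8.3, 8.1]
diffs, flagged = accuracy_differences(jumping, steering)
```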
6.3 Simulator Sickness
The four SSQ scores were non-normally distributed for both travel
techniques, so Wilcoxon signed-rank tests were performed. All three
scores on specific symptoms (N, O, D) as well as the total scores (T)
were significantly higher for steering than for jumping (N: W = 14,
p = 0.008, r = 0.539; O: W = 20.5, p = 0.007, r = 0.547; D: W = 25,
p = 0.013, r = 0.506; T: W = 36, p = 0.01, r = 0.529), which
supports H4. For a follow-up analysis on the impact of these results,
Figure 6(b) shows a per-participant scatterplot of the total simulator
sickness scores similar to the one of the spatial updating accuracies.
Overall, the mean difference score between jumping and steering
was 13.40 (σ = 20.39). Despite the significant result, only nine
participants (indicated in red and orange) were affected much more
strongly by simulator sickness during steering. When excluding these
cases, the remaining data points (N = 15) are almost equally
distributed around the diagonal line in Figure 6(b) with a small
standard deviation (mean difference between techniques: 0.75,
σ = 5.86), which indicates that participants in this reduced sample
coped with both conditions similarly well.
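The SSQ subscale and total scores follow the weighting scheme of Kennedy et al. [18]: raw symptom ratings (0-3) are summed per subscale and multiplied by fixed constants. A sketch of this scoring (constants as published by Kennedy et al.; the function name is ours):

```python
# Weighting constants from Kennedy et al. [18]; each symptom is rated
# 0-3 and summed into the nausea (N), oculomotor (O) and
# disorientation (D) subscales before weighting.
W_N, W_O, W_D, W_T = 9.54, 7.58, 13.92, 3.74

def ssq_scores(nausea_sum, oculomotor_sum, disorientation_sum):
    return {
        "N": nausea_sum * W_N,
        "O": oculomotor_sum * W_O,
        "D": disorientation_sum * W_D,
        "T": (nausea_sum + oculomotor_sum + disorientation_sum) * W_T,
    }
```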
6.4 Subjective Preferences
At the end of the study, participants reported their subjective prefer-
ences with respect to different application cases in a questionnaire
with 7-point Likert-scales ranging from 0 (strong preference for
steering) to 6 (strong preference for jumping). The frequencies of
given answers are shown in Figure 7. Most participants expressed
a clear preference for steering for the use case of freely exploring
unknown virtual environments. This trend is still present but less
strong when asked for the more suitable technique to solve the task
of the user study. A further question focusing on which technique
was more fun to use yielded a bimodal distribution at both ends of
the scale with a higher peak for steering than for jumping.
6.5 Discussion
Overall, we observed relatively low pointing errors in our spatial up-
dating experiment (all medians < 20°). This indicates good spatial
updating performance compared to the results of similar experi-
ments in the literature [34, Section 1.3.3]. We conclude that the
experimental task based on a three-segment route layout was solv-
able and not too demanding. However, pointing errors in the baseline
task were significantly smaller than all other measurements, so the
task was sensitive enough to reveal general effects of travel on the
mental representation of relative locations in the virtual environment.
Apparently, spatial updating errors accumulated during travel.
Figure 6: Per-participant scatterplots of the mean pointing errors over
both point-to-start repetitions (a) and the total SSQ scores (b). The
dotted diagonal lines represent no differences between steering and
jumping, so the differences increase as the distance of a point to the
line gets larger. Corresponding participant clusters are highlighted
with the same color in both scatterplots. (a) The six participants
represented by the blue and orange crosses pointed notably worse
during jumping (more than 10° less accurate). (b) The nine
participants represented by the red and orange crosses reported
notably more simulator sickness symptoms during steering (score at
least 20 points larger).
Figure 7: Frequencies of the answers given to the technique pref-
erence questions in the concluding questionnaire on a scale from 0
(strong preference for steering) to 6 (strong preference for jumping).
An inherent difference between the two conditions is that steering
requires control of direction and speed whereas jumping requires the
specification of a target. When comparing spatial updating perfor-
mances for both travel techniques, no significant differences could be
observed. Although participants traveled significantly faster with the
jumping technique and thus experienced the path for a shorter time,
a follow-up analysis on a per-participant level revealed that 18 users
(75%) achieved similar spatial updating performances. In contrast to
the comparisons of passive virtual travel by Bowman et al. [6], the
differences between continuous virtual motion and instant transitions
seem to affect spatial awareness much less if the travel is actively
controlled by the user. The remaining six participants, five of whom
stated to have no prior experience with VR, pointed more than 10°
less accurately after jumping. In four of these cases, the pointing
error was even above 30° (see Figure 6(a)). We therefore conclude
that integrating spatial information of the path during jumping can
be problematic, yet the number of severely affected users seems to
be smaller than generally expected.
The participants of our study reported significantly more symp-
toms of simulator sickness for steering. In contrast, Bozgeyikli
et al. [7] did not find any significant differences between steering,
jumping and walking in place. Nevertheless, a per-participant
follow-up analysis revealed that 15 users (62.5%) also experienced
similar simulator sickness symptoms in both conditions. We there-
fore conclude that replacing a steering technique with a jumping
technique in an application generally results in equal or less
simulator sickness.
When investigating the corresponding participant clusters in Fig-
ures 6(a) and 6(b), high simulator sickness does not seem to coincide
with inaccurate spatial updating. As a result, users who experience
more symptoms of simulator sickness with steering could use jump-
ing techniques instead, and users who have difficulties maintaining
spatial awareness during jumping could resort to steering. Only for
the two participants indicated in orange in Figure 6 was neither
technique ideal. Their lack of prior experience in VR could be an
explanation for this observation.
Most participants preferred steering over jumping, particularly
for the exploration of unknown virtual environments. Interestingly,
even some participants with more symptoms of simulator sickness
during steering seem to prefer this technique. The causes of this
observation are subject to future investigations.
7 CONCLUSION AND FUTURE WORK
Spatial awareness is an essential cognitive ability that helps humans
to avoid losing orientation in known and unknown environments.
Travel techniques in VR should support spatial awareness and mini-
mize the risk of simulator sickness. While teleportation beyond vista
space is known to impair spatial awareness, the results of our user
study indicate that restricting the range of a teleportation technique
to vista space helped many, but not all, participants in achieving
similar spatial updating performances to steering in our task. Future
work should find suitable measures for assisting the remaining users
having difficulties during jumping, e.g. by pre- and post-travel infor-
mation. Our results furthermore revealed significantly higher simu-
lator sickness scores during steering. However, also in this regard,
the impact is smaller than expected since
62.5%
of our participants
showed similar simulator sickness scores in both conditions.
In conclusion, the results of our study justify the implementation
of jumping as the default travel metaphor as done in many head-
mounted display applications and games. Nevertheless, we argue
that steering should not be excluded and always be offered as an
alternative, in particular because users seem to prefer the latter for
exploration tasks. An effective steering enhancement are the recently
proposed field-of-view restrictions by Fernandes and Feiner [14]
since they were shown to reduce simulator sickness. However, their
effects on spatial awareness are still unexplored.
For steering techniques, the influence of various mediators on
wayfinding performance was thoroughly investigated by Darken and
Peterson [11]. For teleportation techniques, the benefits of a map
mediator were illustrated [9], but the effects of further mediators and
visual effects have not been analyzed, although they are inherent
features of many proposed implementations. We believe that our
classification scheme for teleportation techniques offers a valuable
tool for formal experimental comparisons and future developments.
We made the Python code of our route generator publicly available
on our website to facilitate the reproduction of our experimental
results as well as follow-up studies.
ACKNOWLEDGMENTS
Our research has received funding from the German Federal Min-
istry of Education and Research (BMBF) under grant 031PT704X
(project Big Data Analytics) and grant 03PSIPT5A (project Prove-
nance Analytics). We thank the participants of our study as well as
the members and students of the Virtual Reality and Visualization
Research Group at Bauhaus-Universität Weimar.
REFERENCES
[1] N. H. Bakker, P. O. Passenier, and P. J. Werkhoven. Effects of head-slaved navigation and the use of teleports on spatial orientation in virtual environments. Human Factors: The Journal of the Human Factors and Ergonomics Society, 45(1):160–169, 2003.
[2] B. Bolte, G. Bruder, and F. Steinicke. Jumping Through Immersive Video Games. In SIGGRAPH Asia 2011 Posters, page 56, 2011.
[3] D. A. Bowman, E. Kruijff, J. J. LaViola, and I. Poupyrev. 3D User Interfaces: Theory and Practice. 2005.
[4] D. A. Bowman, E. T. Davis, L. F. Hodges, and A. N. Badre. Maintaining Spatial Orientation During Travel in an Immersive Virtual Environment. Presence: Teleoper. Virt. Environ., 8(6):618–631, 1999.
[5] D. A. Bowman, D. Koller, and L. F. Hodges. Travel in Immersive Virtual Environments: An Evaluation of Viewpoint Motion Control Techniques. In Proceedings of the 1997 Virtual Reality Annual International Symposium, pages 45–52, 1997.
[6] D. A. Bowman, D. Koller, and L. F. Hodges. A methodology for the evaluation of travel techniques for immersive virtual environments. Virtual Reality, 3(2):120–131, 1998.
[7] E. Bozgeyikli, A. Raij, S. Katkoori, and R. Dubey. Point & Teleport Locomotion Technique for Virtual Reality. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, pages 205–216, 2016.
[8] S. S. Chance, F. Gaunet, A. C. Beall, and J. M. Loomis. Locomotion Mode Affects the Updating of Objects Encountered During Travel: The Contribution of Vestibular and Proprioceptive Inputs to Path Integration. Presence, 7(2):168–178, 1998.
[9] D. Cliburn, S. Rilea, D. Parsons, P. Surya, and J. Semler. The Effects of Teleportation on Recollection of the Structure of a Virtual World. In M. Hirose, D. Schmalstieg, C. A. Wingrave, and K. Nishimura, editors, Joint Virtual Reality Conference of EGVE - ICAT - EuroVR, 2009.
[10] J. Cohen. A power primer. Psychological Bulletin, 112(1):155, 1992.
[11] R. P. Darken and B. Peterson. Spatial orientation, wayfinding, and representation. Handbook of virt. environ., pages 493–518, 2002.
[12] R. P. Darken and J. L. Sibert. Wayfinding strategies and behaviors in large virtual worlds. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 142–149, 1996.
[13] T. T. Elvins, D. R. Nadeau, R. Schul, and D. Kirsh. Worldlets: 3-D thumbnails for wayfinding in large virtual worlds. Presence: Teleoper. Virt. Environ., 10(6):565–582, 2001.
[14] A. S. Fernandes and S. K. Feiner. Combating VR sickness through subtle dynamic field-of-view modification. In 2016 IEEE Symposium on 3D User Interfaces, pages 201–210, 2016.
[15] S. Freitag, D. Rausch, and T. Kuhlen. Reorientation in virtual environments using interactive portals. In 2014 IEEE Symposium on 3D User Interfaces, pages 119–122, 2014.
[16] N. Fujita, R. L. Klatzky, J. M. Loomis, and R. G. Golledge. The encoding-error model of pathway completion without vision. Geographical Analysis, 25(4):295–314, 1993.
[17] M. Hachet, F. Decle, S. Knödel, and P. Guitton. Navidget for 3D interaction: Camera positioning and further uses. International Journal of Human-Computer Studies, 67(3):225–236, 2009.
[18] R. S. Kennedy, N. E. Lane, K. S. Berbaum, and M. G. Lilienthal. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness. The International Journal of Aviation Psychology, 3(3):203–220, 1993.
[19] E. M. Kolasinski. Simulator sickness in virtual environments. Technical report, DTIC Document, 1995.
[20] A. Kunert, A. Kulik, S. Beck, and B. Froehlich. Photoportals: Shared References in Space and Time. In Proceedings of the 17th ACM Conference on CSCW, pages 1388–1399, 2014.
[21] J. R. Lackner. Motion sickness: more than nausea and vomiting. Experimental Brain Research, 232(8):2493–2510, 2014.
[22] J. M. Loomis, R. L. Klatzky, R. G. Golledge, J. G. Cicinelli, J. W. Pellegrino, and P. A. Fry. Nonvisual navigation by blind and sighted: assessment of path integration ability. Journal of Experimental Psychology: General, 122(1):73, 1993.
[23] V. V. Marlinsky. Vestibular and vestibulo-proprioceptive perception of motion in the horizontal plane in blindfolded man III. Route inference. Neuroscience, 90(2):403–411, 1999.
[24] D. R. Montello. Scale and multiple psychologies of space. In European Conference on Spatial Information Theory, pages 312–321, 1993.
[25] D. R. Montello. The perception and cognition of environmental distance: Direct sources of information. In International Conference on Spatial Information Theory, pages 297–311, 1997.
[26] P. E. Napieralski, B. M. Altenhoff, J. W. Bertrand, L. O. Long, S. V. Babu, C. C. Pagano, and T. A. Davis. An evaluation of immersive viewing on spatial knowledge acquisition in spherical panoramic environments. Virtual Reality, 18(3):189–201, 2014.
[27] R. Pausch, T. Burnette, D. Brockway, and M. E. Weiblen. Navigation and Locomotion in Virtual Worlds via Flight into Hand-held Miniatures. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, pages 399–400, 1995.
[28] C. C. Presson and D. R. Montello. Updating after rotational and translational body movements: Coordinate structure of perspective space. Perception, 23(12):1447–1455, 1994.
[29] B. Riecke. How far can we get with just visual information? Path integration and spatial updating studies in Virtual Reality. PhD thesis, Eberhard Karls Universität Tübingen, 2003.
[30] B. E. Riecke, H. A. H. C. v. Veen, and H. H. Bülthoff. Visual Homing Is Possible Without Landmarks: A Path Integration Study in Virtual Reality. Presence, 11(5):443–473, 2002.
[31] S. Schneegans, F. Lauer, A. C. Bernstein, A. Schollmeyer, and B. Froehlich. guacamole - an extensible scene graph and rendering framework based on deferred shading. In IEEE 7th Workshop on Software Engineering and Architectures for Realtime Interactive Systems, pages 35–42, 2014.
[32] A. W. Siegel and S. H. White. The development of spatial representations of large-scale environments. Advances in Child Development and Behavior, 10:9–55, 1975.
[33] R. Stoakley, M. J. Conway, and R. Pausch. Virtual Reality on a WIM: Interactive Worlds in Miniature. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 265–272, 1995.
[34] J. Vuong. Investigating scene representation in human observers using a spatial updating task. PhD thesis, University of Reading, 2015.
... Teleportation has emerged as one of the most widely adopted forms of travel through immersive virtual environments as it minimizes the occurrence of sickness symptoms for many users [9,10,17,45]. However, the locations that can be reached using conventional teleportation techniques are limited to the vicinity of scene objects that can be intersected with the selection ray. ...
... However, the visual motion flow introduced by steering techniques is often considered a plausible cause of sickness symptoms as it contradicts the vestibular cues perceived by the user [34]. Teleportation-based techniques prevent these contradicting cues and have consequently been shown to mitigate sickness symptoms for a large proportion of users compared to steering [9,10,17,45]. Riecke et al. therefore extended pointing-directed flying with automatic teleports in the indicated direction when the user exceeded a certain velocity threshold [35]. Most implementations of teleportation, however, work without a continuous locomotion component and therefore require the initial selection of a ground-based target location to which the user will then be teleported. ...
... We did not formulate a hypothesis for the discomfort score since all techniques were based on teleportation as the core travel metaphor, which was shown to be favorable in terms of sickness [34,45]. Moreover, discomfort could also be affected by fear of heights, which we also expected to be similar across conditions due to the identical platform and teleport visualizations when in mid-air. ...
Article
Full-text available
Most prior teleportation techniques in virtual reality are bound to target positions in the vicinity of selectable scene objects. In this paper, we present three adaptations of the classic teleportation metaphor that enable the user to travel to mid-air targets as well. Inspired by related work on the combination of teleports with virtual rotations, our three techniques differ in the extent to which elevation changes are integrated into the conventional target selection process. Elevation can be specified either simultaneously, as a connected second step, or separately from horizontal movements. A user study with 30 participants indicated a trade-off between the simultaneous method leading to the highest accuracy and the two-step method inducing the lowest task load as well as receiving the highest usability ratings. The separate method was least suitable on its own but could serve as a complement to one of the other approaches. Based on these findings and previous research, we define initial design guidelines for mid-air navigation techniques.
... L OCOMOTION allows users to change their viewpoint in an immersive virtual environment (IVE) and is therefore part of most user interfaces for virtual reality (VR) systems. A locomotion method (LM) realises locomotion in VR and can lead to different advantages and disadvantages such as simulator sickness [1], [2] or disorientation [3]. Over the recent years the number of LMs has risen [4] to meet new requirements, due to new possibilities enabled by technical advances, or because of new insights how LMs affect users. ...
... More specific taxonomies focus on Through-The-Lens Techniques [60], Redirection Techniques [62], [63], Walking Interfaces [78], Walking Techniques for Incompatible Spaces [79], Infinite Walking Solutions [80], Walking-based Locomotion Techniques [81], and Teleportation [2]. ...
... [74] 7) Move (Moving, Movement, Motion, Motion-based), 8, [4], [65], [66], [67], [70], [73], [77], [81] 8) Input, 7, [2], [56], [64], [67], [72], [74], [81] 9) Physical, 6, [4], [56], [66], [68], [72], [77] 10) Environment (Environmental), 6, [56], [63], [66], [71], [72], [74] 11) Steering, 6, [66], [67], [68], [69], [72], [73] 12) Continuous, 6, [4], [56], [63], [66], [72], [74] 13) Gaze (Gaze-directed), 6, [56], [66], [69], [72], [73], [77] While Locomotion and Travel are often used synonymously in VR, they were not automatically clustered by WordNet. Overall, 17 taxonomies referenced Locomotion or Travel. ...
Article
Full-text available
The change of the user's viewpoint in an immersive virtual environment, called locomotion, is one of the key components in a virtual reality interface. Effects of locomotion, such as simulator sickness or disorientation, depend on the specific design of the locomotion method and can influence the task performance as well as the overall acceptance of the virtual reality system. Thus, it is important that a locomotion method achieves the intended effects. The complexity of this task has increased with the growing number of locomotion methods and design choices in recent years. Locomotion taxonomies are classification schemes that group multiple locomotion methods and can aid in the design and selection of locomotion methods. Like locomotion methods themselves, there exist multiple locomotion taxonomies, each with a different focus and, consequently, a different possible outcome. However, there is little research that focuses on locomotion taxonomies. We performed a systematic literature review to provide an overview of possible locomotion taxonomies and analysis of possible decision criteria such as impact, common elements, and use cases for locomotion taxonomies. We aim to support future research on the design, choice, and evaluation of locomotion taxonomies and thereby support future research on virtual reality locomotion.
... The factors influencing cybersickness are a primary topic of research, and include aspects of the VR stimulus and the individual. Examples of stimulus-based factors that affect sickness include the content (e.g., games, 3D videos [38]), the locomotion interface (e.g., joystick, teleporting [6,8,27,51]), exposure duration [45], and task workload [40]. Examples of individual factors include prior VR experience [10,19,28], motion sickness history [10,25,45], and gender 1 [10,28,45]. ...
... For example, field-of-view (FOV) reduction is a technique known to reduce cybersickness by reducing peripheral visual stimulation [12,46,53,54]. Similarly, the teleport interface [8,27,51] and related techniques [2] reduce cybersickness by reducing visual selfmotion. These mitigation techniques modify the visual experience of the user, which could moderate the gender difference in cybersickness. ...
Conference Paper
Full-text available
Cybersickness is a barrier to widespread adoption of virtual reality (VR). We summarize the literature and conclude that women experience more cybersickness than do men, but that the size of the gender effect is modest. We present a mediation and moderation framework for organizing existing research and proposing new questions about gender and cybersickness. A mediator causally connects gender and cybersickness, and a moderator changes the magnitude of the gender difference in cybersickness.
... The factors influencing cybersickness are a primary topic of research, and include aspects of the VR stimulus and the individual. Examples of stimulus-based factors that affect sickness include the content (e.g., games, 3D videos [38]), the locomotion interface (e.g., joystick, teleporting [6,8,27,51]), exposure duration [45], and task workload [40]. Examples of individual factors include prior VR experience [10,19,28], motion sickness history [10,25,45], and gender 1 [10,28,45]. ...
... For example, field-of-view (FOV) reduction is a technique known to reduce cybersickness by reducing peripheral visual stimulation [12,46,53,54]. Similarly, the teleport interface [8,27,51] and related techniques [2] reduce cybersickness by reducing visual selfmotion. These mitigation techniques modify the visual experience of the user, which could moderate the gender difference in cybersickness. ...
Preprint
Full-text available
Cybersickness is a barrier to widespread adoption of virtual reality (VR). We summarize the literature and conclude that women experience more cybersickness than do men, but that the size of the gender effect is modest. We present a mediation and moderation framework for organizing existing research and proposing new questions about gender and cybersickness. A mediator causally connects gender and cybersickness, and a moderator changes the magnitude of the gender difference in cybersickness.
... To reduce fatigue and control the study time at about one hour, we set the speed of movement to 4 m/s with constant acceleration, which is greater than typical walking speed (i.e., 1.4m/s). Given prior research indicating that using high-speed steering (e.g., 13 m/s in Weißker et al. [89]) to learn spatial navigation in a virtual environment has no negative effects on spatial updating accuracy, we believe our speed setting would be slow enough to assess spatial learning. Also, participants in the pilot study did not report motion sickness when navigating the virtual urban environment at the 4 m/s speed. ...
Article
Full-text available
Daily travel usually demands navigation on foot across a variety of different application domains, including tasks like search and rescue or commuting. Head-mounted augmented reality (AR) displays provide a preview of future navigation systems on foot, but designing them is still an open problem. In this paper, we look at two choices that such AR systems can make for navigation: 1) whether to denote landmarks with AR cues and 2) how to convey navigation instructions. Specifically, instructions can be given via a head-referenced display (screen-fixed frame of reference) or by giving directions fixed to global positions in the world (world-fixed frame of reference). Given limitations with the tracking stability, field of view, and brightness of most currently available head-mounted AR displays for lengthy routes outdoors, we decided to simulate these conditions in virtual reality. In the current study, participants navigated an urban virtual environment and their spatial knowledge acquisition was assessed. We experimented with whether or not landmarks in the environment were cued, as well as how navigation instructions were displayed (i.e., via screen-fixed or world-fixed directions). We found that the world-fixed frame of reference resulted in better spatial learning when there were no landmarks cued; adding AR landmark cues marginally improved spatial learning in the screen-fixed condition. These benefits in learning were also correlated with participants' reported sense of direction. Our findings have implications for the design of future cognition-driven navigation systems.
... A common approach is to avoid continuous visual representations of motion by requiring the user to teleport in the virtual environment (e.g., Bozgeyikli et al., 2016). Several variants on this theme have been explored, such as "viewpoint snapping" (Farmani and Teather 2018), "jumping" (Weissker et al., 2018), and "blinking" (Habgood et al., 2018). Some researchers have noted a reduced incidence of cybersickness when smoothed virtual motions were presented during the virtual traversal of terrain that would normally afford a bumpier trajectory (Dorado and Figueroa 2014). ...
Article
Full-text available
In this article, we discuss general approaches to the design of interventions that are intended to overcome the problem of cybersickness among users of head-mounted display (HMD) systems. We note that existing approaches have had limited success, and we suggest that this may be due, in part, to the traditional focus on the design of HMD hardware and content. As an alternative, we argue that cybersickness may have its origins in the user’s ability (or inability) to stabilize their own bodies during HMD use. We argue that HMD systems often promote unstable postural control, and that existing approaches to cybersickness intervention are not likely to promote improved stability. We argue that successful cybersickness interventions will be designed to promote stability in the control of the body during HMD use. Our approach motivates new types of interventions; we describe several possible directions for the development of such interventions. We conclude with a discussion of new research that will be required to permit our approach to lead to interventions that can be implemented by HMD designers.
... This allows the player to physically move in the environment within the limits of their play space. The player's movement area is virtually extended using a teleportation system and joystick-controlled smooth locomotion [61], as well as virtually rotated via joystick-controlled stepped rotation ("snap turn"). The experience uses the joysticks of the Quest 2's handheld controllers. ...
Article
Full-text available
Room-scale virtual reality (VR) affordance in movement and interactivity causes new challenges in creating virtual acoustic environments for VR experiences. Such environments are typically constructed from virtual interactive objects that are accompanied by an Ambisonic bed and an off-screen (“invisible”) music soundtrack, with the Ambisonic bed, music, and virtual acoustics describing the aural features of an area. This methodology can become problematic in room-scale VR as the player cannot approach or interact with such background sounds, contradicting the player’s motion aurally and limiting interactivity. Written from a sound designer’s perspective, the paper addresses these issues by proposing a musically inclusive novel methodology that reimagines an acoustic environment predominately using objects that are governed by multimodal rule-based systems and spatialized in six degrees of freedom using 3D binaural audio exclusively while minimizing the use of Ambisonic beds and non-diegetic music. This methodology is implemented using off-the-shelf, creator-oriented tools and methods and is evaluated through the development of a standalone, narrative, prototype room-scale VR experience. The experience’s target platform is a mobile, untethered VR system based on head-mounted displays, inside-out tracking, head-mounted loudspeakers or headphones, and hand-held controllers. The authors apply their methodology to the generation of ambiences based on sound-based music, sound effects, and virtual acoustics. The proposed methodology benefits the interactivity and spatial behavior of virtual acoustic environments but may be constrained by platform and project limitations.
... To teleport, the user selects a position (and sometimes an orientation) in the VE and is instantly repositioned at the selected location. The teleporting interface is popular in part because it is easy to use [3,14] and does not typically contribute to cybersickness [6,14,16,22]. ...
Preprint
Full-text available
Virtual environments (VEs) can be infinitely large, but movement of the virtual reality (VR) user is constrained by the surrounding real environment. Teleporting has become a popular locomotion interface to allow complete exploration of the VE. To teleport, the user selects the intended position (and sometimes orientation) before being instantly transported to that location. However, locomotion interfaces such as teleporting can cause disorientation. This experiment explored whether practice and feedback when using the teleporting interface can reduce disorientation. Participants traveled along two path legs through a VE before attempting to point to the path origin. Travel was completed with one of two teleporting interfaces that differed in the availability of rotational self-motion cues. Participants in the feedback condition received feedback about their pointing accuracy. For both teleporting interfaces tested, feedback caused significant improvement in pointing performance, and practice alone caused only marginal improvement. These results suggest that disorientation in VR can be reduced through feedback-based training.
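The pointing measure in this study can be illustrated with a small sketch of our own (not from the preprint): the error is the signed angle between the direction the participant points and the true direction from their position back to the path origin.

```python
import math

def signed_pointing_error(position, pointing_dir, origin):
    """Signed angular error in degrees between the pointed direction and
    the true direction from the participant's position to the path origin.
    Inputs are 2D (x, y); positive values are counter-clockwise errors."""
    true_dir = (origin[0] - position[0], origin[1] - position[1])
    err = math.degrees(math.atan2(pointing_dir[1], pointing_dir[0])
                       - math.atan2(true_dir[1], true_dir[0]))
    return (err + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
```

Wrapping the difference keeps, for example, a 350-degree discrepancy from being scored as worse than a 10-degree one; the feedback condition described above would report this value (or its magnitude) to the participant after each trial.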
... Intuitive locomotion is an essential part of VR research and its applications. Usually a controller is used for virtual locomotion [1,8,5], but more recent work uses other techniques such as vision-based tracking [7] or sensors that are attached to the body [6,4]. The controller-based methods are extensively researched and already in commercial use. ...
Preprint
Locomotion in Virtual Reality (VR) is an important part of VR applications. Many scientists are enriching the community with different variations that enable locomotion in VR. Some of the most promising methods are gesture-based and do not require additional handheld hardware. Recent work has focused mostly on user preference and performance of the different locomotion techniques, ignoring the learning effect that users go through while new methods are being explored. This work investigates whether and how quickly users can adapt to a hand gesture-based locomotion system in VR. Four different locomotion techniques are implemented and tested by participants. The goal of this paper is twofold: first, to encourage researchers to consider the learning effect in their studies; second, to provide insight into the learning effect of users in gesture-based systems.
Conference Paper
Full-text available
Real walking is the most natural method of navigation in virtual environments. However, physical space limitations often prevent or complicate its continuous use. Thus, many real walking interfaces, among them redirected walking techniques, depend on a reorientation technique that redirects the user away from physical boundaries when they are reached. However, existing reorientation techniques typically actively interrupt the user, or depend on the application of rotation gain that can lead to simulator sickness. In our approach, the user is reoriented using portals. While one portal is placed automatically to guide the user to a safe position, she controls the target selection and physically walks through the portal herself to perform the reorientation. In a formal user study we show that the method does not cause additional simulator sickness, and participants walk more than with point-and-fly navigation or teleportation, at the expense of longer completion times.
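The reorientation mechanic in this abstract hinges on mapping the user's pose from an entry portal to an exit portal as they walk through. The 2D sketch below is our own, hypothetical illustration of that pose mapping (it ignores the 180-degree mirror flip a physically plausible portal would add), not the authors' implementation.

```python
import math

def through_portal(user_pos, user_yaw_deg, entry, exit_):
    """Map a user pose from an entry portal to an exit portal.
    Each portal is given as (x, y, yaw_deg); walking through applies the
    relative rotation and translation between the two portal frames."""
    dyaw = exit_[2] - entry[2]
    r = math.radians(dyaw)
    # Express the user relative to the entry portal, then re-express
    # that offset in the exit portal's frame.
    ox, oy = user_pos[0] - entry[0], user_pos[1] - entry[1]
    nx = exit_[0] + math.cos(r) * ox - math.sin(r) * oy
    ny = exit_[1] + math.sin(r) * ox + math.cos(r) * oy
    return (nx, ny), (user_yaw_deg + dyaw) % 360.0
```

Because the virtual scene is remapped while the user physically walks through the portal, the reorientation needs no rotation gain, which is consistent with the study's finding that the method adds no simulator sickness.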
Article
Full-text available
Motion sickness is a complex syndrome that includes many features besides nausea and vomiting. This review describes some of these factors and points out that under normal circumstances, many cases of motion sickness go unrecognized. Motion sickness can occur during exposure to physical motion, visual motion, and virtual motion, and only those without a functioning vestibular system are fully immune. The range of vulnerability in the normal population varies about 10,000 to 1. Sleep deprivation can also enhance susceptibility. Systematic studies conducted in parabolic flight have identified velocity storage of semicircular canal signals (velocity integration) as a key factor in both space motion sickness and terrestrial motion sickness. Adaptation procedures that have been developed to increase resistance to motion sickness reduce this time constant. A fully adequate theory of motion sickness is not presently available. Limitations of two popular theories, the evolutionary and the ecological, are described. A sensory conflict theory can explain many but not all aspects of motion sickness elicitation. However, extending the theory to include conflicts related to visceral afferent feedback elicited by voluntary and passive body motion greatly expands its explanatory range. Future goals should include determining why some conflicts are provocative and others are not but instead lead to perceptual reinterpretations of ongoing body motion. The contribution of visceral afferents in relation to vestibular and cerebellar signals in evoking sickness also deserves further exploration. Substantial progress is being made in identifying the physiological mechanisms underlying the evocation of nausea, vomiting, and anxiety, and a comprehensive understanding of motion sickness may soon be attainable. Adequate anti-motion sickness drugs without adverse side effects are not yet available.
Article
Full-text available
Finding one's way to sites of interest on the Web can be problematic, and this difficulty has been recently exacerbated by widespread development of 3-D Web content and virtual-world browser technology using the Virtual Reality Modeling Language (VRML). Whereas travelers can often navigate 2-D Web sites based on textual and 2-D thumbnail image representations of the sites' content, finding one's way to destinations in 3-D environments is notoriously troublesome. Wayfinding literature provides clear support for the importance of landmarks in building a cognitive map and then using that map to navigate in a 3-D environment, be it real or virtual. Textual and 2-D image landmark representations, however, lack the depth and context needed for travelers to reliably recognize 3-D landmarks. This paper describes a novel 3-D thumbnail landmark affordance called a worldlet. Containing a 3-D fragment of a virtual world, worldlets offer travelers first-person, multi-viewpoint experience with faithful representations of potential destinations. To facilitate an investigation into the comparative advantages of landmark affordances for wayfinding, worldlet capture algorithms were designed, implemented, and incorporated into two VRML-based virtual environment browsers. Findings from a psychological experiment using one of these browsers revealed that, compared to textual and image guidebook usage, worldlet guidebook usage: nearly doubled the time subjects spent studying the landmarks in the guidebook, significantly reduced the time required for subjects to reach landmarks, and reduced backtracking to almost zero. These results support the hypothesis that worldlets facilitate traveler landmark knowledge, expedite wayfinding in large virtual environments, and enable skilled wayfinding.
Conference Paper
With the increasing popularity of virtual reality (VR) and new devices becoming available at relatively lower costs, more and more video games have been developed recently. Most of these games use first-person interaction techniques since they are more natural for head-mounted displays (HMDs). One of the most widely used interaction techniques in VR video games is locomotion, which is used to move the user's viewpoint in virtual environments. Locomotion is an important component of video games since it can have a strong influence on user experience. In this study, a new locomotion technique we called "Point & Teleport" is described and compared with two commonly used VR locomotion techniques, walk-in-place and joystick. In this technique, users simply point to where they want to be in the virtual world and are teleported to that position. As a major advantage, it is not expected to introduce motion sickness since it does not involve any visible translational motion. In this study, two VR experiments were designed and performed to analyze the Point & Teleport technique. In the first experiment, Point & Teleport was compared with the walk-in-place and joystick locomotion techniques. In the second experiment, a direction component was added to the Point & Teleport technique so that users could specify their desired orientation as well. 16 users took part in both experiments. Results indicated that Point & Teleport is a fun and user-friendly locomotion method, whereas the additional direction component degraded the user experience.
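The core of a point-and-teleport interface like the one described above is intersecting a pointing ray from the controller with the walkable ground. The following sketch assumes a flat ground plane and illustrative names; it is not the cited paper's implementation.

```python
def teleport_target(ray_origin, ray_dir, ground_y=0.0):
    """Intersect a pointing ray with the ground plane y = ground_y and
    return the teleport destination, or None when there is no forward hit."""
    dy = ray_dir[1]
    if abs(dy) < 1e-9:
        return None  # ray runs parallel to the ground plane
    t = (ground_y - ray_origin[1]) / dy
    if t <= 0.0:
        return None  # the ground plane lies behind the controller
    return (ray_origin[0] + t * ray_dir[0],
            ground_y,
            ray_origin[2] + t * ray_dir[2])
```

Returning None for upward or backward rays is what lets an implementation disable the teleport confirmation while the user points at the sky; the direction component studied in the second experiment would add a target yaw chosen at this hit point.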
Article
In this paper, we present guacamole, a novel open source software framework for developing virtual-reality applications. It features a lightweight scene graph combined with a versatile deferred shading pipeline. In our deferred renderer, the geometry processing is decoupled from subsequent shading stages. This allows us to use the same flexible materials for various geometry types. Materials consist of multiple programmable shading stages and user-defined attributes. In contrast to other deferred shading implementations, our renderer automatically infers and generates the necessary buffer configurations and shader programs. We demonstrate the extensibility of our pipeline by showing how we added rendering support for non-polygonal data such as trimmed NURBS and volume data. Furthermore, guacamole features many state-of-the-art post-processing effects such as ambient occlusion or volumetric light. Our framework is also capable of rendering on multiple GPUs for the support of multi-screen displays and multi-user applications.
Article
We report the results of an experiment conducted to examine the effects of immersive viewing on a common spatial knowledge acquisition task, spatial updating, in a spherical panoramic environment (SPE). A spherical panoramic environment, such as Google Street View, is an environment composed of spherical images captured at regular intervals in a real-world setting, augmented with virtual navigational aids such as paths, dynamic maps, and textual annotations. Participants navigated the National Mall area of Washington, DC, in Google Street View in one of two viewing conditions: a desktop monitor or a head-mounted display with a head orientation tracker. In an exploration phase, participants were first asked to navigate and observe landmarks on a pre-specified path. Then, in a testing phase, participants were asked to travel the same path and to rotate their view in order to look in the direction of the perceived landmarks at certain waypoints. The angular difference between participants' gaze directions and the landmark directions was recorded. We found no significant difference between the immersive and desktop viewing conditions in participants' accuracy of direction to landmarks, and no difference in their sense of presence scores. However, based on responses to a post-experiment questionnaire, participants in both conditions tended to use a cognitive or procedural technique to inform direction to landmarks. Taken together, these findings suggest that in both conditions, where participants experience travel based on teleportation between waypoints, the visual cues available in the SPE, such as street signs, buildings, and trees, seem to have a stronger influence in determining the directions to landmarks than egocentric cues such as the first-person perspective and natural head-coupled motion experienced in the immersive viewing condition.
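The dependent measure in this study, the angular difference between the participant's gaze direction and the direction to a landmark, can be sketched as the unsigned angle between two 3D vectors. This is our own illustration, not the authors' code.

```python
import math

def angular_difference(waypoint, gaze_dir, landmark):
    """Unsigned angle in degrees between the participant's gaze direction
    and the direction from the waypoint to the landmark (3D vectors)."""
    to_lm = [l - w for l, w in zip(landmark, waypoint)]
    dot = sum(g * t for g, t in zip(gaze_dir, to_lm))
    ng = math.sqrt(sum(g * g for g in gaze_dir))
    nt = math.sqrt(sum(t * t for t in to_lm))
    cos_a = max(-1.0, min(1.0, dot / (ng * nt)))  # clamp rounding error
    return math.degrees(math.acos(cos_a))
```

Averaging this value over waypoints and landmarks yields a per-participant accuracy score that can then be compared between the desktop and head-mounted viewing conditions.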
Conference Paper
Photoportals build on digital photography as a unifying metaphor for reference-based interaction in 3D virtual environments. Virtual photos and videos serve as three-dimensional references to objects, places, moments in time, and activities of users. Our Photoportals also provide access to intermediate or alternative versions of a scenario and allow the review of recorded task sequences that include life-size representations of the captured users. We propose to exploit such references to structure collaborative activities of collocated and remote users. Photoportals offer additional access points for multiple users and encourage mutual support through the preparation and provision of references for manipulation and navigation tasks. They support the pattern of territoriality with configurable space representations that can be used for private interaction, as well as be shared and exchanged with others.
Article
Novel user interfaces such as the Microsoft Kinect allow users to actively move their body in order to interact with immersive video games. Hence, users may navigate by using natural, multimodal methods of generating self-motions. For instance, [LaViola and Katzourin 2007] developed several body- and foot-based metaphors for hands-free navigation in IVEs, including a leaning technique for traveling short and medium distances and a floor-based world-in-miniature for traveling large distances. However, real walking is the most basic and intuitive way of moving and, therefore, keeping this ability is of great interest.
Article
Virtual Reality (also known as Virtual Environment or VE) technology shows many promising applications in areas of training, medicine, architecture, astronomy, data handling, teleoperation, and entertainment. A potential threat to using this technology is the mild to severe discomfort that some users experience during or after a VE session. Similar effects have been observed with flight and driving simulators. The simulator sickness literature forms a solid background for the study of sickness in virtual environments and many of the findings may be directly applicable. This report reviews literature concerning simulator sickness, motion sickness, and virtual environments. Forty factors that may be associated with simulator sickness in virtual environments are identified. These factors form three global categories: subject, simulator, and task. The known and predicted effects of these factors on sickness in VEs are discussed. A table summarizes the information presented in this report. The information can be used as a guide for future research concerning simulator sickness in virtual environments.