Immersive Well-Path Editing: Investigating the Added Value of Immersion
Kenny Gruchalla
BP Center for Visualization
Computer Science Department
University of Colorado at Boulder
gruchall@colorado.edu
Abstract
The benefits of immersive visualization are primarily
anecdotal; there have been few controlled user studies
that have attempted to quantify the added value of immer-
sion for problems requiring the manipulation of virtual
objects. This research quantifies the added value of im-
mersion for a real-world industrial problem: oil well-path
planning. An experiment was designed to compare human
performance between an immersive virtual environment
(IVE) and a desktop workstation. This work presents the
results of sixteen participants who planned the paths of
four oil wells. Each participant planned two well-paths
on a desktop workstation with a stereoscopic display and
two well-paths in a CAVE™-like IVE. Fifteen of the par-
ticipants completed well-path editing tasks faster in the
IVE than in the desktop environment. The increased speed
was complemented by a statistically significant increase
in correct solutions in the IVE. The results suggest that
an IVE allows for faster and more accurate problem solv-
ing in a complex three-dimensional domain.
1. Introduction
There is a common assumption that immersive virtual
environments provide an improved interface to view and
interact with three-dimensional structures over more tra-
ditional desktop graphics workstations [1]. After all,
an IVE differs greatly from traditional desktop graphics
workstations in that it provides users a three-dimensional
interface to view and interact with three-dimensional ob-
jects in a virtual world. This interface would seemingly
provide a more natural and intuitive means for viewing
and interacting with three-dimensional virtual worlds in
a variety of industrial settings. However, immersive tech-
nology has been slow to move outside the research labora-
tory and into industry. One of the main barriers in promot-
ing immersive technology to industry is that the benefits
are primarily anecdotal. The goal of this research was to
quantify the performance and usability of an IVE com-
pared to a desktop graphics workstation for a real-world
industrial task involving a complex three-dimensional do-
main. The planning of a new oil well-path through the ex-
isting wells of a mature oilfield is such a task. It requires
spatial understanding of a complex three-dimensional en-
vironment and the precise placement of objects within
that environment. The Immersive Drilling Planner (IDP)
is a software application capable of visualizing a mature
oilfield and editing a new path within that oilfield on both
a desktop environment and in an IVE. Although the user
interface is different in the two environments, the scene
and the dynamics of the scene are identical. This provides
a testbed that can be used to evaluate the added value
of immersion on a spatially complex real-world problem.
This paper describes an experiment designed to compare
an IVE with a stereoscopic desktop environment in the
performance and correctness of a well-path editing task.
2. Related Work
Most human performance virtual environment stud-
ies have focused on comparing various navigation and
manipulation techniques within the same virtual environ-
ment. Only a few studies have attempted to compare IVEs
with traditional desktop environments.
Ruddle, Payne, and Jones designed a virtual building
walk-through experiment to compare a helmet-mounted
display with a desktop monitor display [2]. Partici-
pants would learn the layout of large-scale virtual build-
ings through repeated navigation. Participants would nav-
igate two large virtual buildings, each consisting of sev-
enty rooms. A repeated-measures design was used, where
each participant navigated one building four times us-
ing the head-mounted display, and navigated the second
building four times using the desktop workstation. On
average, participants who were immersed in the virtual
environment using the helmet-mounted display navigated
the buildings twelve percent faster. The decreased time
was attributed to the participants utilizing the ability to
“look around” while they were moving when immersed,
as the participants spent eight percent more time station-
ary when using the desktop workstation. Participants
also developed a better understanding of the layout of the
building, as evidenced by their knowledge of relative dis-
tance between locations in the buildings.
Pausch, Proffitt, and Williams conducted a user study
comparing a search task between a head-tracked helmet-
mounted display and stationary helmet-mounted display
[3]. Participants were placed in the center of a virtual
room and instructed to search for a camouflaged target.
The study showed that when a target was present, there
was no significant performance improvement in the im-
mersed environment. However, when the target was not
present, participants in the immersed environment were
able to reach that conclusion substantially faster than the
participants using the stationary display. The study also
found a positive transfer of training effect from the im-
mersive environment to the stationary display, and a neg-
ative transfer of training effect from the stationary display
to the head-tracked environment.
Arns, Cook, and Cruz-Neira conducted a user study
comparing statistical data analysis on a desktop and an
IVE [4]. The experiment compared both identification
and interaction tasks. During the identification tasks, par-
ticipants were asked to identify clusters of data and iden-
tify the dimensionality of data. During the interaction
tasks, participants were asked to “brush” clusters, mark-
ing data points with colored glyphs. The results of the
study suggested that IVEs significantly improve produc-
tivity for structure and feature detection tasks in the anal-
ysis of highly dimensional data. Participants performed
almost twice as well when identifying clusters in the IVE,
with an eighty percent correct rate versus a forty-seven
percent rate on the desktop. Participants performed equally
well identifying the dimensionality in the two environ-
ments. The performance in the IVE was as good as or
better than the performance on the desktop in the visual-
ization task, but in the interaction tasks the desktop was
faster. Participants’ brushing times were lower on the
desktop than in the IVE. However, drawing any conclu-
sions is difficult, since the brushing times had a large stan-
dard deviation.
3. Immersive Drilling Planner
The IDP development was started at the B.P. Center
for Visualization in the fall of 2002 by Kenny Gruchalla
and Jonathan Marbach. The IDP was built on top of the
CAVELIB™ and Open Inventor libraries. The IDP capa-
bilities include interactive well planning integrated with
geological and geophysical data, visualizations of well
uncertainty, and design optimization for the development
of mature fields. The IDP was designed to operate in a va-
riety of visualization environments, including large screen
systems, immersive bench displays, and desktop worksta-
tions. To support both immersive environments and desk-
top workstations, two implementations of the IDP were
created. Both implementations share the same IDP code
base and identical scene graphs; the only difference is the
front-end user control that allows navigation through the
scene and the manipulation of the objects in the scene.
3.1. Well Planning Background
Modern drilling equipment can be controlled so that
a well can be drilled at a predetermined angle and di-
rected toward a predetermined target location. This type
of drilling is known as directional drilling [5]. The
most common use of directional drilling is in offshore
fields, where the expense of creating a drilling platform
is considerable. Offshore fields, particularly those un-
der deeper waters, must be exploited by a small number
of fixed platforms. Each platform is capable of tapping a
sector of the field through a cluster of wells. Directional
drilling is becoming increasingly common onshore in ur-
ban and environmentally sensitive areas, since exploit-
ing a field through this method has a much smaller en-
vironmental footprint than does exploiting the same field
through straight hole drilling [5].
Oilfields exploited by directional drilling can quickly
become a tortuous underground labyrinth of wells, creat-
ing a very complex spatial domain (see Figure 1). When
planning a new well in a mature field, the planner must
take special care that the new well does not collide with
any existing wells. A collision with an existing well can
cause a blow out, an uncontrolled flow of fluids up a well.
Blow outs can lead to fires and explosions resulting in the
loss of the drilling rig and possibly the loss of life [5].
One of the design goals of the IDP was to provide well
planners a way to plan a safe path for a new well in a
mature oilfield.
The IDP represents a well-path by its uncertainty sur-
face; this surface forms a volume that is known to con-
tain the well-path. The location of a real well-path cannot be
known with complete certainty. A position in a well-path
is determined by surveying instruments that are placed
down the drilled hole. The surveying instruments typi-
cally measure attitude and the length along a well-path.
As these readings are subject to error, there are uncertain-
ties in a well-path’s position that accumulate with depth.
These uncertainties can be visualized as an elliptical vol-
ume perpendicular to the well-path. Accumulating the er-
rors at each point along the well enables an uncertainty
surface to be constructed [5].

Figure 1. Snapshot of a virtual oilfield con-
structed from an actual well log dataset.
Mature oilfields can be very complex three-
dimensional structures.
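The paper does not give the IDP's underlying error model. As a rough illustration only, the following C++ sketch accumulates per-station survey error into a growing lateral uncertainty radius; the error constant and the single-radius simplification (a real model tracks a full error ellipse) are assumptions, not the IDP's method. Sweeping an ellipse of this scale perpendicular to the path would produce the uncertainty surface described above.

```cpp
// Illustrative sketch only: accumulate well-survey error into a growing
// uncertainty radius. The error constant and the single-radius
// simplification are assumptions, not the IDP's actual error model.
#include <cmath>
#include <cstdio>
#include <vector>

struct Station {
    double md;  // measured depth along the well-path (meters)
};

int main() {
    std::vector<Station> survey = {{0.0}, {500.0}, {1000.0}, {1500.0}, {2000.0}};

    const double attitudeErr = 0.005;  // radians of instrument error (assumed)

    double radius = 0.0;  // lateral uncertainty radius (meters)
    for (std::size_t i = 1; i < survey.size(); ++i) {
        double dMd = survey[i].md - survey[i - 1].md;
        // An attitude error of a radians over a segment of length dMd
        // displaces the computed position by roughly a * dMd; independent
        // per-station errors add in quadrature, so the uncertainty grows
        // monotonically with depth.
        radius = std::hypot(radius, attitudeErr * dMd);
        std::printf("MD %6.0f m: uncertainty radius ~ %.2f m\n",
                    survey[i].md, radius);
    }
    return 0;
}
```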
Although the direction and angle of the drill can be
controlled, the more curvature in a planned well-path the
more difficult the well will be to drill. In reality, a mul-
titude of geological, geographical, and physical factors
drive the complexity of a well, but currently the IDP only
provides a simple model: a weighted sum of curvature
along the well-path. The weight relates to the “sharp-
ness” of the curve; sharper curves have a higher weight
than softer curves. This complexity model provides the
planner with feedback during the planning process.
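The paper states only that complexity is a weighted sum of curvature in which sharper curves weigh more than softer ones; the exact weighting is not given. A minimal sketch of one plausible reading, with discrete curvature computed from the turn angle between successive path segments and a quadratic penalty so that sharpness is weighted superlinearly:

```cpp
// Sketch of a well-path complexity measure: a weighted sum of discrete
// curvature. The quadratic sharpness weight is an assumption; the paper
// says only that sharper curves weigh more than softer ones.
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3 &a, const Vec3 &b) {
    return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
}
static double norm(const Vec3 &v) {
    return std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
}
static double dot(const Vec3 &a, const Vec3 &b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

double pathComplexity(const std::vector<Vec3> &pts) {
    double complexity = 0.0;
    for (std::size_t i = 1; i + 1 < pts.size(); ++i) {
        Vec3 u = sub(pts[i], pts[i - 1]);
        Vec3 v = sub(pts[i + 1], pts[i]);
        double cosA = dot(u, v) / (norm(u) * norm(v));
        cosA = std::fmax(-1.0, std::fmin(1.0, cosA));  // clamp for acos
        double angle = std::acos(cosA);                // turn angle (rad)
        double kappa = angle / norm(v);                // discrete curvature
        complexity += kappa * kappa * norm(v);         // sharper => heavier
    }
    return complexity;
}

int main() {
    std::vector<Vec3> gentle = {{0,0,0}, {0,0,100}, {5,0,200}, {15,0,300}};
    std::vector<Vec3> sharp  = {{0,0,0}, {0,0,100}, {40,0,200}, {0,0,300}};
    std::printf("gentle: %g  sharp: %g\n",
                pathComplexity(gentle), pathComplexity(sharp));
    return 0;
}
```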
3.2. Immersive Design
The three-dimensional user interface is a critical com-
ponent of an immersive virtual environment's usability.
Bowman [6] has shown that immersive interaction tech-
niques based on natural and real-world metaphors often
exhibit serious usability problems. Therefore, careful
thought must go into the design of user interfaces and
interaction techniques for immersive applications. For-
tunately, a large body of work in the field of immersive
human-computer interaction exists. The design of the IDP
is based on many of the specific results and guidelines of
that work.
Navigation is the most universal user action in large-
scale immersive environments, and consequently several
implementations and user studies of immersive naviga-
tion techniques have been reported [7]. The IDP im-
plements a combination of two well-known techniques:
physical navigation and pointing. Physical navigation
maps a user’s physical movements, such as walking, into
corresponding motions in the virtual world. Physical nav-
igation is cognitively simple, requiring no special action
on the part of the user, and it has been shown to help users
maintain spatial awareness of their location in the scene
and the objects around them [8]. However, an oilfield
scaled to fit wholly within the physical boundaries of
the IVE would be unusably small. Therefore the point-
ing technique was used to help overcome the limitations
of physical navigation. In this technique, the direction
of motion depends upon the current orientation of the
user’s hand or hand held device [7]. User studies have
suggested that the pointing technique is well suited for
general-purpose applications that require speed and accu-
racy [9]. Using a combination of these techniques, an IDP
user can navigate the portion of the oilfield inside the IVE
by simply walking within the IVE. To reach areas of the
field outside of the bounds of the IVE, the user points the
wand in the direction of desired travel. Pressing forward
on the wand’s joystick will “drive” the user in the direc-
tion the wand is pointing. Pressing backwards on the
wand’s joystick will “drive” the user in the opposite direc-
tion. The joystick is pressure sensitive and the amount of
pressure exerted on it maps to the speed of travel. Press-
ing right or left on the joystick rotates the scene around
the user.
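A minimal per-frame sketch of the combined pointing/driving update described above; the wand pose, joystick ranges, and the speed and turn-rate constants are illustrative assumptions, not the IDP's actual parameters:

```cpp
// Per-frame update for pointing-based travel: translate along the wand's
// forward vector at a speed proportional to joystick pressure, and rotate
// the scene about the user for left/right deflection. All names and the
// maxSpeed/turnRate constants are illustrative assumptions.
#include <array>

using Vec3 = std::array<double, 3>;

struct NavState {
    Vec3 position{0, 0, 0};  // user's position in the virtual field
    double sceneYaw = 0.0;   // scene rotation about the user (radians)
};

void updateNavigation(NavState &nav, const Vec3 &wandForward,
                      double joyForward,  // -1..1, pressure sensitive
                      double joyRight,    // -1..1
                      double dt) {
    const double maxSpeed = 50.0;  // m/s at full pressure (assumed)
    const double turnRate = 0.8;   // rad/s at full deflection (assumed)

    // Forward pressure "drives" along the wand direction; backward
    // pressure drives opposite. Pressure magnitude maps to speed.
    for (int i = 0; i < 3; ++i)
        nav.position[i] += wandForward[i] * joyForward * maxSpeed * dt;

    // Left/right deflection rotates the scene around the user.
    nav.sceneYaw += joyRight * turnRate * dt;
}

int main() {
    NavState nav;
    Vec3 wandForward = {0.0, 0.0, -1.0};  // wand pointing "into" the scene
    updateNavigation(nav, wandForward, 0.5, 0.0, 1.0 / 60.0);
    return 0;
}
```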
Interaction with a virtual object involves selecting, po-
sitioning and rotating the object in the virtual environ-
ment. The IDP implements a variation of the ray-casting
technique that allows objects to be selected, positioned,
and rotated. In this variation, a virtual ray extends from
the wand and interactive objects are highlighted when in-
tersected by the virtual ray (see Figure 2). Once an inter-
active object is intersected, pressing and holding the lower
left wand button will select and drag the object. When the
object is selected with the lower left wand button, it is
effectively “speared” on the virtual ray. Then, wherever
the wand moves, the speared object follows. When the
user releases the wand’s lower left button, the object is
released at its current location. While an object is being
dragged, its orientation remains constant, only its posi-
tion is changed. Once an interactive object is intersected,
pressing and holding the lower right wand button will se-
lect and rotate the object. When the object is selected
with the lower right wand button, it will mimic the ori-
entation of the wand. When the user releases the wand’s
lower right button, the object is released at that orienta-
tion. While an object is being rotated, its position remains
constant, only its orientation is changed.
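A minimal sketch of this spearing behavior: dragging keeps the object at its captured distance along the ray without touching its orientation, while rotating makes the object mimic the wand's orientation without touching its position. The types and poses here are assumed placeholders, not the IDP's data structures.

```cpp
// Sketch of ray-casting interaction: "spear" an object on the wand's ray,
// then drag (position follows the ray, orientation fixed) or rotate
// (orientation mimics the wand, position fixed).
#include <array>

using Vec3 = std::array<double, 3>;
using Quat = std::array<double, 4>;  // orientation as w, x, y, z

struct WandPose { Vec3 origin; Vec3 dir; Quat orientation; };
struct Object   { Vec3 position; Quat orientation; };

// Distance along the ray at which the object was "speared"; captured
// once when the lower-left wand button is pressed. Assumes wand.dir is
// unit length.
double spearDistance(const WandPose &wand, const Object &obj) {
    double d = 0.0;
    for (int i = 0; i < 3; ++i)
        d += (obj.position[i] - wand.origin[i]) * wand.dir[i];
    return d;
}

// Dragging: the object follows the ray at its captured distance;
// its orientation is left untouched.
void drag(Object &obj, const WandPose &wand, double dist) {
    for (int i = 0; i < 3; ++i)
        obj.position[i] = wand.origin[i] + wand.dir[i] * dist;
}

// Rotating: the object mimics the wand's orientation;
// its position is left untouched.
void rotate(Object &obj, const WandPose &wand) {
    obj.orientation = wand.orientation;
}

int main() {
    WandPose wand{{0, 0, 0}, {0, 0, -1}, {1, 0, 0, 0}};
    Object well{{0, 0, -10}, {1, 0, 0, 0}};
    double d = spearDistance(wand, well);  // captured at button press
    wand.origin = {1, 0, 0};               // user moves the wand
    drag(well, wand, d);                   // object follows at distance d
    return 0;
}
```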
3.3. Desktop Design
The IVE version of the IDP could be run directly on
a desktop workstation using the CAVELIB™ simulator.
However, the simulator was designed as a tool to test
immersive applications, not as a production desktop in-
terface. Therefore, the Open Inventor examiner viewer
is used by the desktop implementation of the IDP as
the front-end user interface.

Figure 2. Photograph of an IDP user inter-
acting with the virtual oilfield inside the IVE
using the ray-casting technique.

The user can manipulate
their view of a scene by generating mouse click-and-drag
events in the render area (left mouse down rotates the
scene, middle mouse down pans the scene, and left and
middle mouse down zooms in and out of the scene). The
user can also manipulate a scene with three thumbwheel
widgets which control zooming and rotation about the X
and Y axes.
To interact with objects in the scene, Open Inventor
manipulators are used. The manipulators provide a means
to position and rotate three-dimensional objects in three-
dimensional space with a two-dimensional mouse. A han-
dle box manipulator is used to position interactive ob-
jects in the desktop version of the IDP. This manipulator
draws a bounding box around the interactive object. The
manipulator responds to click-and-drag mouse events by
translating the interactive object it surrounds. It also pro-
vides scaling functionality, which is not used in the IDP.
A trackball manipulator is used to rotate interactive ob-
jects in the desktop version of the IDP. This manipulator
wraps the interactive object with three circular stripes. It
responds to click-and-drag mouse events by rotating the
interactive object it surrounds. Clicking in an area be-
tween the stripes allows the user to rotate the object freely
in three dimensions; clicking on the stripes allows the user
to constrain rotation about the X, Y, or Z axis.
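As a concrete illustration of this pattern, here is a minimal Open Inventor/Xt sketch (a simplified, assumed setup, not the IDP source) showing how a manipulator slots into a scene graph as a drop-in transform node and how the examiner viewer supplies the click-and-drag camera controls described above:

```cpp
// Minimal Open Inventor sketch: an examiner viewer with a handle box
// manipulator. Simplified, assumed setup, not the IDP source code.
#include <Inventor/Xt/SoXt.h>
#include <Inventor/Xt/viewers/SoXtExaminerViewer.h>
#include <Inventor/manips/SoHandleBoxManip.h>
#include <Inventor/nodes/SoCube.h>
#include <Inventor/nodes/SoSeparator.h>

int main(int argc, char **argv) {
    Widget window = SoXt::init(argv[0]);  // initialize Inventor and Xt
    SoSeparator *root = new SoSeparator;
    root->ref();

    // A manipulator is a drop-in replacement for an SoTransform node:
    // dragging its handle box translates everything that follows it in
    // the graph. An SoTrackballManip could be inserted the same way to
    // provide the rotation stripes.
    root->addChild(new SoHandleBoxManip);
    root->addChild(new SoCube);  // stand-in for an interactive object

    SoXtExaminerViewer *viewer = new SoXtExaminerViewer(window);
    viewer->setSceneGraph(root);
    viewer->show();

    SoXt::show(window);
    SoXt::mainLoop();
    return 0;
}
```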
4. Experimental Design
The experiment consisted of four separate logged ex-
perimental tasks (denoted Task01, Task02, Task03, and
Task04) and a training task (denoted Task00). Each par-
ticipant performed the training task and two experimen-
tal tasks on the desktop and the training task and the
other two experimental tasks in the IVE. Participants were
given a time limit of ten minutes to complete each task.
The runs were counterbalanced in four-run experimen-
tal blocks to adjust for learning effects (see Table 1).
The independent variable was the environment: the head-
tracked stereoscopic IVE versus the stereoscopic desktop
environment. The dependent variables were the time to
complete the task and the correctness of the final well-
path.
The experimental tasks in this study involved editing
the path of a new well in a mature field. The same
dataset was used to construct the virtual mature field (see
Figure 1) for all the experimental tasks in this study.
Ninety well logs were used to construct the corresponding
ninety well-path uncertainty surfaces. A Landsat image of
the field was rendered above these uncertainty surfaces.
A roughly horizontal surface, representing a geological
property of the field’s reservoir, was rendered toward the
lower extents of the uncertainty surfaces. The objective of
each task was to edit the new path so that its uncertainty
surface did not intersect the uncertainty surface of any ex-
isting well while not exceeding a goal complexity value.
The path of the new well was edited using the pull point
method, which allows the participants to edit a region of
the well. The participants could define an edit region by
dragging two well sliders up and down the original path
of the new well. The participant could then change the
path within the edit region by moving or rotating the pull
point. As the pull point is manipulated, the edited path’s
uncertainty surface is updated in real-time.
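The paper does not detail how the pull point deforms the path. A minimal sketch, assuming the pull-point displacement is blended across the edit region with a smooth falloff that vanishes at the two slider positions; the cosine falloff is an illustrative assumption, not the IDP's actual deformation:

```cpp
// Sketch of pull-point editing: displace path points between two slider
// positions, blending the pull-point displacement with a smooth falloff
// so the path stays fixed outside the edit region. The cosine falloff is
// an illustrative assumption, not the IDP's actual deformation.
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

void applyPullPoint(std::vector<Vec3> &path,
                    std::size_t lo, std::size_t hi,  // slider indices, lo + 1 < hi
                    const Vec3 &displacement)        // pull-point motion
{
    const double pi = std::acos(-1.0);
    const std::size_t mid = (lo + hi) / 2;  // pull point at region center
    for (std::size_t i = lo + 1; i < hi; ++i) {
        // Blend weight: 0 at the sliders, 1 at the pull point.
        double t = (i <= mid)
                       ? double(i - lo) / double(mid - lo)
                       : double(hi - i) / double(hi - mid);
        double w = 0.5 - 0.5 * std::cos(t * pi);  // smooth falloff
        for (int k = 0; k < 3; ++k)
            path[i][k] += w * displacement[k];
    }
    // After each edit, the uncertainty surface would be rebuilt from the
    // new path, giving the real-time feedback described above.
}
```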
4.1. Participants
Nineteen unpaid participants were recruited from the
staff and students at the University of Colorado at Boulder
and from employees of several Colorado software firms.
The participants received no tangible benefit from partic-
ipation in the study. Two participants could not complete
the experiment due to hardware failures; the data from
these two incomplete runs are not included in the results.
Participants were organized into counterbalanced experi-
mental blocks of four. After disregarding the two incom-
plete runs, the remaining seventeen participants filled four
complete experimental blocks and part of a fifth. The fifth
experimental block, containing only a single run, has been
excluded, leaving sixteen participants in the analysis.
4.2. Apparatus
The IVE used for this study is located on the University
of Colorado campus at the B.P. Center for Visualization.
Table 1. Counterbalanced experimental design

                      1st Treatment                               2nd Treatment
Subject ID            Environment  1st Task  2nd Task  3rd Task   Environment  1st Task  2nd Task  3rd Task
s00, s04, s08, s12    IVE          Task00    Task01    Task02     Desktop      Task00    Task03    Task04
s01, s05, s09, s13    Desktop      Task00    Task01    Task02     IVE          Task00    Task03    Task04
s02, s06, s10, s14    Desktop      Task00    Task03    Task04     IVE          Task00    Task01    Task02
s03, s07, s11, s15    IVE          Task00    Task03    Task04     Desktop      Task00    Task01    Task02
The IVE at the B.P. Center for Visualization is a Mech-
dyne™ MD Flex™, which is a configurable large-screen
projection-based system. In the closed configuration, the MD
Flex™ is a 12'x12'x10' theater, resembling a CAVE™.
The MD Flex™ can be re-configured to a 36'x12'x10'
open configuration, or presentation mode. The closed
configuration provides a greater sense of immersion;
therefore, for the purposes of this study, only the closed
configuration was used. The MD Flex™ consists of four
walls: three rear-projected screens measuring 12'x10',
which form the right, back, and left walls of the IVE, and
the 12'x12' floor, which is projected from above.
The four display screens were driven by one Silicon
Graphics Incorporated (SGI) Origin 3800 computer with
four SGI InfiniteReality3 graphics pipes. Each pipe feeds
a Barco™ 909 projector. The projectors are capable of
up to 1600x1280 stereo resolution; however, due to other
hardware constraints, the resolution used for this study
was limited to 1024x768.
A three-dimensional effect was created inside the IVE
through active stereo projection. Participants wore in-
frared CrystalEyes™ LCD shutter glasses to view the
stereoscopic images. The sole interaction device used
in this study was a wired InterSense™ wand. The wand
is a hardware device that can be thought of as a three-
dimensional, six-degree-of-freedom mouse. The wand
has four buttons and a pressure-sensitive joystick. An In-
terSense™ VET 900 tracking system tracked the position
and orientation of the shutter glasses and the wand.
The desktop equipment used for this study is similar
to desktop computers found in many homes and offices.
The desktop equipment consisted of a 21-inch SGI moni-
tor, a 3-button mouse, and an SGI keyboard. Unlike those
in most homes and offices, the desktop interface in this
study was connected to an SGI Origin 3800 (the same ma-
chine used to drive the IVE). The monitor’s images, like
the screens in the immersive experiments, were driven by
an SGI InfiniteReality3 graphics pipe and constrained to
a resolution of 1024x768. The images produced on the
desktop were rendered in stereo, producing a stereoscopic
display when used in conjunction with a pair of Crys-
talEyes™ LCD shutter glasses. Unlike the immersive en-
vironment, the desktop environment did not include head
tracking.
4.3. Procedure
The experimental procedure (approved by an expe-
dited review by the University of Colorado Human Re-
search Committee) was conducted individually, one par-
ticipant at a time. Participants were greeted at the B.P.
Center for Visualization and given a brief tour of the fa-
cilities and a brief explanation of the experiment. Par-
ticipants were then asked to read and sign a Subject In-
formed Consent Form. Depending on the participant’s
position in the experimental block, the participant would
sit at the desktop or enter the IVE. While the experi-
menter read from a script explaining the environment’s
interface and the objective of the tasks, the participant ex-
plored the training task. The participant was encouraged
to explore the environment’s interface and the dynamics
of the well-path editing until they felt comfortable or until
the ten minute time limit was reached. After completing
the training task, the participants then performed the two
logged experimental tasks as assigned per their position in
the experimental block. Then the participant would per-
form the training task and two logged experimental tasks
in the other environment. Again, while performing the
training task, the experimenter would read from a script
describing the user interface in that environment. After
completing the second treatment, the participant was then
asked to complete a post-experiment questionnaire.
Each task would begin by presenting the participant
with a two-dimensional start dialog. This dialog provided
the time allotted for the task, the goal complexity of the
new well, and a start button. When the start button was
pressed, the dialog closed and the test applica-
tion began a timed log of the user's actions. All changes
to the user’s viewpoint (i.e., head and camera motion) and
all interactions (i.e., mouse and wand movements and but-
ton presses) were logged.
The participant began at a fixed starting position out-
side of the virtual field, then would navigate through the
field to the new well. Then, through a series of well slider
and pull point movements, the participant would edit the
path of the new well. A three-dimensional text readout
above the pull point provided the user with complexity
161
Proceedings of the 2004 Virtual Reality (VR’04)
1087-8270/04 $ 20.00 IEEE
value feedback. Once the participant believed that the
new path’s uncertainty surface did not intersect the un-
certainty surface of any existing well and that the new
path had a complexity value at or below the goal complex-
ity, the participant was instructed to complete the task by
closing the application. If the allotted time was reached,
the test application would terminate automatically.
4.4. Performance Measures
There were two performance measures per task: the
time to complete the task and the correctness of the final
well-path. The IDP maintained a timed log of the partici-
pant’s interactions with the virtual environment; the time
to complete the task was derived from the log. The final
well-path was reconstructed from the log to evaluate the
correctness of the participant’s solution. Any final well-
path whose uncertainty surface did not intersect with any
existing uncertainty surface and whose complexity did not
exceed the task’s goal complexity value was considered to
be correct.
5. Results
Comparing the number of correct solutions within the
participants shows a significant difference between the
two environments (see Figure 3). Of the sixteen partic-
ipants, nine had more correct solutions in the IVE, one
had more correct solutions in the desktop environment,
and six had the same number of correct solutions in the
two environments. The sign test shows a statistically
significant difference at the 0.05 significance level. Com-
paring the total solution time taken to complete two tasks
in the IVE with the two tasks in the desktop environment
provides a more significant result (see Figure 4). Of the
sixteen participants, only one participant took more time
in the IVE. The sign test shows this to be statistically
significant at the 0.001 significance level.
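Both sign-test results can be checked directly from the reported counts with an exact two-tailed binomial calculation (a sign test discards ties):

```latex
% Correct solutions: 9 participants better in the IVE vs. 1 better on
% the desktop (6 ties discarded), so n = 10 informative pairs:
p = 2\sum_{k=0}^{1}\binom{10}{k}\left(\tfrac{1}{2}\right)^{10}
  = \frac{2\,(1+10)}{1024} \approx 0.021 < 0.05.

% Solution time: 15 of 16 participants faster in the IVE, n = 16:
p = 2\sum_{k=0}^{1}\binom{16}{k}\left(\tfrac{1}{2}\right)^{16}
  = \frac{2\,(1+16)}{65536} \approx 0.0005 < 0.001.
```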
The results were also analyzed using an ANOVA.
The environment (IVE versus desktop) was treated as a
within-subject factor, and the environment order and the
task order were used as two between-subjects factors.
The environment order represents whether the first treat-
ment was in the IVE or on the desktop. The task or-
der represents which pair of tasks (Task01 and Task02 or
Task03 and Task04) occurred in the first treatment. The
ANOVA of total time spent on each pair of tasks (see Ta-
ble 2) shows a highly significant effect F(1,12)=54.740,
p<0.001 of the environment; however, an interaction effect
F(1,12)=12.519, p=0.004 between the environment and
the environment order is also present. A paired samples
t-test (see Table 3) shows that a significant effect of the
environment does exist. The ANOVA of the number of
correct solutions in each pair of tasks (see Table 4) shows
a significant effect F(1,12)=10.714, p=0.007 of environ-
ment.

Figure 3. Graph illustrating the number of
correct solutions for each participant in
each environment.

Figure 4. Graph illustrating the total accu-
mulated solution time for each participant
in each environment.
An analysis of individual tasks is difficult since the
tasks were not fully crossed; that is, Task02 always fol-
lowed Task01 and Task04 always followed Task03. How-
ever, it is clear that there are differences between the tasks
(see Figures 5 and 6). On average, the solution time in the
IVE was approximately 23% faster than in the desktop
environment for Task01. The number of correct solutions
for Task01 were similar in the two environments, with
seven correct solutions in the IVE and six correct solutions
in the desktop environment. The mean solution times for
Task02 were also nearly equal, with the desktop just 4%
faster than the IVE. However, the shorter mean solution
time on the desktop for Task02 was offset by a decrease
in correctness. Only three correct solutions were found
on the desktop for Task02 compared to seven correct so-
lutions in the IVE. Task03 had the largest difference in
mean solution times between the two environments. On
average, the Task03 solutions were found approximately
93% faster in the IVE than in the desktop environment.
The increased speed in the IVE did not correspond to a
decrease in correctness.
Table 2. ANOVA of solution time

Source                                           Type III Sum of Squares   df   Mean Square        F   Sig.
Environment                                                   282940.031    1    282940.031   54.740   .000
Environment x pair order                                       12285.281    1     12285.281    2.377   .149
Environment x environment order                                64710.031    1     64710.031   12.519   .004
Environment x pair order x environment order                    8944.531    1      8944.531    1.730   .213
Error                                                          62025.625   12      5168.802
Table 3. Paired samples t-test of paired differences: immersed time - desktop time

Environment Order        Mean   Std. Dev.   Std. Err. Mean   95% CI Lower   95% CI Upper        t   df   Sig. (2-tailed)
IVE first             -98.128     101.566           35.909       -183.036        -13.214   -2.733    7              .029
Desktop first        -278.000     116.068           41.036       -375.035       -180.965   -6.775    7              .000
Table 4. ANOVA of number of correct solutions

Source                                           Type III Sum of Squares   df   Mean Square        F   Sig.
Environment                                                        3.125    1         3.125   10.714   .007
Environment x pair order                                           1.125    1         1.125    3.857   .073
Environment x environment order                                    0.125    1         0.125     .429   .525
Environment x pair order x environment order                       0.125    1         0.125     .429   .525
Error                                                              3.500   12          .292
Figure 5. Graph of the number of correct
solutions by task.
There were seven correct Task03
solutions in the IVE and only four correct Task03 solu-
tions in the desktop environment. On average, the solu-
tion time in the IVE was approximately 26% faster than
in the desktop environment for Task04. There were six
correct solutions in the IVE and four correct solutions in the
desktop environment.
Participants’ written comments reflect the added value
of immersion. In a post-experiment questionnaire, all six-
teen participants indicated that the IVE provided a more
intuitive interface for the experimental tasks. Several par-
ticipants described being more confident in the correct-
ness of their solutions in the IVE.
Figure 6. Graph of the mean solution time by
task. Error bars show standard deviation.
6. Conclusions
Participants in this study were consistently able to
complete well-path editing tasks faster in the IVE than
in the desktop environment. The total solution time taken
by an individual participant to complete two tasks in the
IVE was, with one exception, faster than the total solu-
tion time taken by the same participant to complete the
two tasks in the desktop environment. Fifteen participants
had faster solution times in the IVE than in the desktop,
leaving a single participant with a faster accumulated
desktop solution time. This speed difference was shown to
be statistically significant.
Participants in this study had more accurate percep-
tions and judgments in the IVE, as evidenced by the num-
ber of correct solutions. Of the sixteen participants, nine
participants had more correct solutions in the IVE, one
participant had more correct solutions in the desktop en-
vironment, and six participants had an equal number of
correct solutions in the two environments. This was also
shown to be statistically significant.
The data suggest that IVEs may be more suitable for
certain types of problems. Notice in Figures 5 and 6 that
the number of correct solutions and the mean time for
Task01 are nearly equivalent for the two environments,
while the Task03 solutions have four times more errors
in the desktop environment and the mean solution time is
significantly slower in the desktop environment. The two
tasks are similar, but Task01 is less spatially complicated
than Task03 as there are fewer wells in the immediate
vicinity of the new Task01 well. A similar phenomenon
was observed during the pilot tests. Several initial pi-
lot tests involved spatially simple domains and failed to
show a significant difference between the two environ-
ments. These observations imply that the added value of
immersion may be correlated to the spatial complexity of
the problem. Clearly, there may be classes of spatial prob-
lems that would benefit from immersion.
This study showed that oil well-path planning can be
improved by immersion, but does not address why it was
improved. Were the benefits in the IVE a result of a bet-
ter understanding of the data? Were the benefits a result
of more natural navigation? Were the benefits a result of
more natural interaction with the virtual objects? Or were the
benefits a result of some combination of the three? There
have been studies [2] showing that navigation through a
three-dimensional world is improved by immersion, but
there are no controlled studies showing which types of
interactions are improved by immersion. A logical pro-
gression of this work would be to identify classes of prob-
lems that benefit from immersion, by constructing a tax-
onomy of user interactions that are faster, more precise,
and more accurate in an IVE and by constructing a taxon-
omy of spatial situations in which human understanding
is improved in an IVE.
This work is a controlled study designed to evaluate
the added value of immersion when interacting with vir-
tual three-dimensional objects. The results of this study
indicate that immersive technology can provide an im-
proved interface for solving real-world problems. Gen-
erally, not only were solutions found more quickly in the
IVE, but also the solutions were found with far fewer er-
rors. Increasing the speed and accuracy of an industrial
problem like oil well planning could save money, time,
and potentially lives.
7. Acknowledgments
The author would like to thank all the research par-
ticipants for volunteering their time, Clayton Lewis for
his guidance, Bill Oliver for his assistance with the sta-
tistical analysis, Jonathan Marbach for his countless con-
tributions to this work, and the staff at the BP Center for
Visualization for their support.
References
[1] van Dam, A., Forsberg, A.S., Laidlaw, D.H., LaViola, J.J., Jr., Simpson, R.M. (2000). Immersive VR for scientific visualization: A progress report. IEEE Computer Graphics and Applications, 20(6), 26-52.

[2] Ruddle, R.A., Payne, S.J., Jones, D.A. (1999). Navigating large-scale virtual environments: What differences occur between helmet-mounted and desk-top displays? Presence: Teleoperators and Virtual Environments, 8, 157-168.

[3] Pausch, R., Proffitt, D., Williams, G. (1997). Quantifying immersion in virtual reality. Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '97), 13-18.

[4] Arns, L., Cook, D., Cruz-Neira, C. (1999). The benefits of statistical visualization in an immersive environment. Proceedings of IEEE Virtual Reality 1999, 88-95.

[5] North, F.K. (1985). Petroleum Geology. Boston, MA: Unwin Hyman.

[6] Bowman, D.A. (1999). Interaction Techniques for Common Tasks in Immersive Virtual Environments: Design, Evaluation, and Application. Doctoral dissertation, Georgia Institute of Technology.

[7] Mine, M. (1995). Virtual environment interaction techniques. University of North Carolina at Chapel Hill, Computer Science Technical Report TR95-018.

[8] Usoh, M., Slater, M. (1995). An exploration of immersive virtual environments. Endeavour, 19(1), 34-38.

[9] Bowman, D.A., Koller, D., Hodges, L.F. (1997). Travel in immersive virtual environments: An evaluation of viewpoint motion control techniques. Proceedings of the Virtual Reality Annual International Symposium (VRAIS '97), 45-52.