Immersive Design Engineering
Bjorn Sommer1,$, Chang Hee Lee1, Nat Martin1, Vanna Savina Torrisi1
1Innovation Design Engineering, Royal College of Art, London, UK
$Corresponding author: bjoern@CELLmicrocosmos.org
Abstract
Design Engineering is an innovative field that usually combines a
number of disciplines, such as material science, mechanics,
electronics, and biochemistry. New immersive technologies,
such as Virtual Reality (VR) and Augmented Reality (AR), are
currently in the process of being widely adopted in various
engineering fields. It has been shown that the modeling of spatial
structures benefits from immersive exploration. But the field of
Design Engineering reaches beyond standard engineering tasks.
With this review paper we want to achieve the following: de-
fine the term “Immersive Design Engineering”, discuss a number
of recent immersive technologies in this context, and provide an
inspiring overview of work that belongs to, or is related to the
field of Immersive Design Engineering. Finally, the paper con-
cludes with definitions of research questions as well as a number
of suggestions for future developments.
Introduction
Design Engineering is an innovative area that sits at the
intersection of fields, combining a number of diverse disciplines,
such as architecture, material science, biochemistry, mechanics,
electronics, physics, and textiles. But besides disciplines that are
more closely related to the process of making, there are also those
devoted to economics [1].
New immersive technologies related to, e.g., Virtual Reality
(VR) and Augmented Reality (AR), are currently in the process of
being widely adopted in various engineering fields. It has been
shown that the modeling of spatial structures benefits from
immersive exploration [2, 3]. But the field of Design Engineering
reaches beyond standard engineering tasks. Robinson et al., for
example, define 49 competencies relevant to the Design Engineering
field [1]. And although design engineers spend a considerable amount
of time doing solo work – especially during technical tasks – they are
in some areas highly dependent on collaborative work, especially
in terms of decision making and information understanding [4].
This paper is structured in the following way:
•First, we will introduce immersive technologies related to
VR and AR.
•Second, we make a first attempt to establish a definition for
the field of “Immersive Design Engineering”.
•Third, we introduce a number of research projects that
belong to, or are related to the field of “Immersive Design
Engineering”. Please note that a fully comprehensive review
is out of the scope of this paper.
•This is followed by a quick overview of related commercial
projects whose roots go back to student or research projects.
This also includes a concrete example of a student
semester project that recently became a startup.
•Then, in the Discussion and Conclusions chapter, we will sum up
what we learnt from the overview of projects.
•And finally, in the Outlook, we will present a few new
ideas that might impact future Immersive Design Engineer-
ing work.
The Base: VR and AR Technologies
Virtual Reality (VR) and Augmented Reality (AR) technologies
are currently in the process of being widely adopted in various
engineering fields. VR technologies – such as the Oculus Rift™ and
HTC VIVE™ – have a high potential to outlast the VR hype, as
they are quite affordable and widely adopted in a number of
application fields. Since 2019, the product portfolio has been
extended by a number of promising new developments, including
more mobile solutions. It is widely accepted that the exploration
of spatial objects, such as cars, houses and medical scans, benefits
from VR technologies, as it improves the understanding of spatial
structures and compositions [2, 3].
Although the COVID-19 pandemic led to a decrease in
shipments during the first half of 2020 – especially because of
disrupted supply chains – the long-term outlook is positive,
according to IDC [5]. The lockdown led to a temporary
working-from-home society and is expected to drastically
increase the interest in VR technologies.
As an alternative to VR headsets, semi-immersive 3D-
stereoscopic displays (such as Power Walls) were often seen in
engineering-related scenarios in the past. For research and de-
velopment, CAVEs were an especially promising virtual environ-
ment [6, 7]. Many of these screen-based VR technologies barely
survived the decline of the commercial 3D-TV market [8].
But it would be wrong to expect that the same fate awaits
all VR technologies on the market. Whereas fully immersive
head-mounted displays (HMDs) like the Oculus Rift and HTC
VIVE might not meet the expectations of all professional
audiences, products like the Varjo XR-1 already provide a
convincing alternative in terms of resolution (3000 PPI), comfort
(up to 90 Hz refresh rate) and tracking, and provide stereoscopic
see-through enabling AR applications (12 MP at 90 Hz) [9, 10, 11].
Due to the decline of 3D TVs, it became harder to find 3D
displays like fishtank displays. However, the zSpace display and
laptop provide an appropriate alternative in the form of a passive
stereoscopic display [12]. Although this display requires glasses, it
comes with the big advantage of integrated high-resolution head
tracking. Moreover, it comes with a 3D stylus pen.
Years ago, Planar provided the popular StereoMirror technology,
which made use of two LCD displays and merged the
two images on a polarized mirror between both displays – polar-
ized glasses were required. Although Planar does not manufac-
ture these displays anymore, new providers like Schneider Digital
offer the 3D PluraView, enabling up to 4K resolution for each
eye [13].
Although auto-stereoscopic displays have been around
for a long time, most of them were just prototypes or not really
usable. Nowadays, the Looking Glass Display is an example
of a very user-friendly product and might currently be the
most widely-adopted holographic display [14]. A wider range of
autostereoscopic flat displays is provided by Alioscopy® – from
Full HD to 4K and in different formats [15]. Alioscopy is indeed an
example of a successful S3D technology company, having been
in the market since 1986.
Another very popular AR/Mixed Reality device is the
Microsoft HoloLens, which was used to build many prototype
applications in the engineering field [16]. However, the limited
interaction capabilities and field of view of the first HoloLens
version prevented it from being widely adopted by the industry.
This might change with the HoloLens 2, as the hand interaction
has been drastically improved.
One of the milestones in VR was the development of the
CAVE [6]. These multi-display and multi-side environments (e.g.
top, front, left and right side) can only be afforded by large
organizations, such as car manufacturers or universities. The CAVE2
is an alternative version of the CAVE, which consists of multiple
3D displays [7]. Mechdyne is a company which provides many
of these technologies and holds many patents in this area [17]. Last
but not least, VICON tracking technology is often used for tracking
in complex environments [18].
Obviously, these mostly hardware-based technologies require an
engine to drive them. In recent years, a number of VR
development engines have been established. According to Checa et
al., who evaluated a number of virtual reality serious games in the
context of learning and training, the most widely used platforms
in the gaming industry are Unity and the Unreal Engine, whereas
WorldViz and the Open Source VR platform Ogre3D were mainly
used for older projects [19, 20, 21, 22, 23]. But in the Education
and Training sector, Unity dominates, especially as it has been
available at a low cost for many years. However, strong growth
for the Unreal Engine is expected, as it became free to use
and provides better, photorealistic rendering capabilities.
Immersive Design Engineering
Nowadays, many professions steadily grow in terms of complex-
ity due to their interdisciplinary nature. This applies especially to
the field of “Design Engineering”. While already a complex field
with many specializations, this field is also under steady develop-
ment and transformation.
Before looking into several approaches that could support
Design Engineering through immersive technologies, we would
like to propose a definition for this field.
“Design Engineering” is defined by the McGraw-Hill Dictionary
of Scientific & Technical Terms as:
“A branch of engineering concerned with the creation of sys-
tems, devices, and processes useful to and sought by society.” [24]
As previously mentioned, Design Engineering is a complex
field and this definition might not be able to capture all nuances,
but we think that this is an appropriate general definition.
We propose to define “Immersive Design Engineering” as:
“A branch of engineering concerned with the creation of
systems, devices, and processes useful to and sought by society
immersing the actor by making use of new immersion-supporting
technologies. This definition includes, but is not limited to,
software systems, hardware devices, as well as production processes.
Immersive Design Engineering (IDE) applies to the creation
process of related hardware devices, as well as their usage
in associated systems and processes. Therefore, all products
of IDE might temporarily or permanently be components of a
circular process. To achieve an appropriate level of the actor’s
immersion, a task-pragmatic selection of involved senses has to
be made.”
We previously discussed a number of VR and AR technologies.
Following the definition above, is, for example, an HMD already an
Immersive Design Engineering project?
From a theoretical perspective, let us assume a specific HMD
is developed without actually using the technology. The optics
are developed and implemented based on calculations, the HMD’s
body is designed with general 3D modeling software on a
standard 2D monitor and then 3D-printed, sensors
are integrated, and the corresponding software and tracking system
is developed based on specifications without actually using the
HMD in the process. The final HMD product is plugged into a
computer system and it works as expected. In this case, this
particular HMD is not an Immersive Design Engineering project,
but it can be used from now on to work in the context of Im-
mersive Design Engineering projects. However, as the develop-
ment of HMDs will always be accompanied by continuous testing
and evaluation of a number of fragments and prototypes involv-
ing substantial immersive experiences, the development of VR
technologies can nearly always be seen as an Immersive Design
Engineering project.
“Immersion” is of course not a novel term in the design do-
main. Lidwell, Holden and Butler defined Immersion as “A state
of mental focus so intense that awareness of the ‘real’ world is
lost, generally resulting in a feeling of joy and satisfaction” [25].
They also note that the term is complex and controversial and that
“perceptual immersion is more difficult to sustain for long periods
of time and is, therefore, usable only for relatively brief experi-
ences. Optimal immersive experiences involve both rich sensory
experiences and rich cognitive engagement” [25].
Furthermore, they characterized Immersion by a number of
elements [26]. In the context of Design Engineering, only a
selection should be mentioned: challenges that can be overcome,
contexts where a person can focus without significant distrac-
tion, clearly defined goals, immediate feedback with regards to
actions and overall performance, a modified sense of time, as well
as a feeling of control over actions, activities, and the environ-
ment [25].
According to Lidwell, Holden and Butler, even one of
these elements alone might be sufficient to create Immersion. Naturally,
there are various degrees of Immersion. As previously mentioned,
in the context of VR technologies, an HMD is usually character-
ized as fully immersive, whereas a 3D screen is seen as semi-
immersive. It is obvious that in this context the visual cue has the
highest impact, and one could argue that an experience cannot be
fully immersive if, e.g., audio support is missing. But even
the full audiovisual experience would still exclude the remaining
senses.
The ultimate Immersion would require the involvement of all
senses. Usually, recent VR applications support the visual, auditory
and haptic senses (the latter mostly only in a very basic way, using
vibration), whereas other senses are neglected. However, research
in this area has a long history. To give one example: years
ago, Hülsmann et al. simulated wind and warmth in VR inside
a CAVE by using a very affordable setup consisting of fans and
infrared lamps [27].
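As a minimal illustration of this kind of low-cost multisensory rendering, the sketch below maps a virtual wind vector to the intensities of a ring of physical fans. The fan layout and the cosine intensity model are our own assumptions for illustration, not the calibrated setup of Hülsmann et al. [27]:

```python
import math

# Hypothetical fan ring: the angle (radians) of each fan around the user.
FAN_ANGLES = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]

def fan_intensities(wind_angle, wind_speed, max_speed=10.0):
    """Map a virtual wind vector to per-fan duty cycles in [0, 1].

    A fan blows at full strength when it faces the wind direction and
    fades with the cosine of the angular offset (a simple panning model).
    """
    strength = min(wind_speed / max_speed, 1.0)
    return [max(0.0, math.cos(wind_angle - fan_angle)) * strength
            for fan_angle in FAN_ANGLES]

# Wind from 45 degrees at 6 m/s mostly drives the two nearest fans.
print(fan_intensities(math.pi / 4, 6.0))
```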
But to come back to the elements characterizing
immersion: as mentioned, a selection of them might already be
sufficient to deliver immersion. Especially in Design Engineering,
it will usually not be required to support all senses, as Design
Engineering is purpose-driven and focuses on highly specific tasks.
Related Projects
Here, we want to explore perspectives of Immersive Design En-
gineering, by discussing potential application scenarios and illus-
trating examples. In terms of Design Engineering, the number
of available examples is still quite limited. However, we want to
present here a number of examples that are Design Engineering
projects, or which can inspire their future development.
Virtual Cities, Landscapes and Cultural Heritage
Auto-stereoscopic Display Design
Whereas a few previously-mentioned VR technologies are al-
ready able to render objects in a resolution that is appropriate for
most application cases, autostereoscopic displays usually come
with the disadvantage of a low resolution for each eye. Barre et al.
present a new autostereoscopic design for lenticular lens displays
in conjunction with an optimized algorithm to display the stereo-
scopic images [28]. The difference of the discussed approach
to common autostereoscopic two-view-designs with lenticulars is
that it allows more distance between the image splitter and display
panel – the resulting low magnification factor reduces distance
errors from relative panel/splitter movements. The system was
integrated and tested in a production chain for digitalization and
presentation of scanned 3D objects from the Fraunhofer cultural
heritage project. Using web technology with an x3dom renderer,
autostereoscopic rendering as well as interaction with the objects
was enabled by providing hand and head tracking.
Simulation of a Historic City
The sensiLab at Monash University is a very experimental work-
place that uses VR technologies in the context of speculative de-
sign [29]. The Angkor Wat Virtual Environment developed at
the sensiLab is an excellent example of how VR can be used to
explore ancient cities and architecture to elaborate different the-
ories of their internal functionality using crowd simulation. In
this way researchers can analyze potential routes a pilgrim might
have taken when visiting the temple of Angkor Wat [30]. Similar
approaches could be used in architectural design to evaluate the
feasibility of buildings during the design process and test differ-
ent scenarios, such as disaster management, way finding, shortest
path analysis, etc.
Generation of Virtual Cities optimized for Virtual Envi-
ronments
Another interesting method is to generate virtual cities
optimized for immersive environments [31]. During the gener-
ation of virtual cities, stereoscopic parameters are used to auto-
matically place objects inside the stereoscopic camera frustum
avoiding excessive parallax on screen and reducing/avoiding
window violations. In the context of this work, a
plugin for the Open Source Software Blender was created to gen-
erate the procedural cities. Using the standard parameter
(sff = 1.0), the on-screen parallax of models in the far
background will be approximately equal to the maximum parallax on
screen achievable before divergence occurs. Figure 1 shows a city
created with sff = 0.5.
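For illustration, the underlying geometry can be sketched in a few lines. The symbols and the shifted-sensor parallax model below are our own simplifications, not code from the Blender plugin [31]: divergence occurs once the physical on-screen parallax exceeds the viewer's eye separation, so an admissible background depth follows from the screen width, the interaxial distance, and the sff scaling.

```python
def max_on_screen_parallax(screen_width_m, eye_separation_m=0.065):
    """Fraction of the image width at which positive parallax diverges."""
    return eye_separation_m / screen_width_m

def background_depth_limit(interaxial, convergence, screen_width_m,
                           scene_width_at_convergence, sff=1.0):
    """Depth at which a shifted-sensor stereo pair reaches sff times the
    divergence-limit parallax, using p(Z) = interaxial * (1 - Zc / Z)
    (parallax expressed in scene units at the convergence plane)."""
    # Convert the physical parallax budget into scene units.
    budget = (sff * max_on_screen_parallax(screen_width_m)
              * scene_width_at_convergence)
    k = budget / interaxial
    if k >= 1.0:
        return float("inf")  # even infinitely distant objects stay fusible
    return convergence / (1.0 - k)

# Example: 10 m wide power wall, 6.5 cm interaxial, convergence at 10 m,
# 12 m of scene visible at the convergence plane, sff = 0.5 -> 25 m.
print(background_depth_limit(0.065, 10.0, 10.0, 12.0, sff=0.5))
```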
Figure 1. A Blender-based procedural engine fills the stereoscopic camera
frustum with geometry according to a user-provided value. Images on the
right can be viewed with anaglyph red-cyan glasses. (Courtesy of © 2016
Davide Gadia. All Rights Reserved.)
Cloud-based Exploration of Sensor Data
The Urban Insight Cloud Engine (UICE) was developed to
visualize and explore sensor data using a cloud-based system. A
prototype was implemented to explore the city of
Newcastle-upon-Tyne using web browsers as well as S3D display
setups [32]. This paper also lists a number of design decisions
for large-scale immersive display environments: ease of use, easy
maintenance, standard aspect ratio, brightness, high resolution,
high frame rate, 3D, and simple interaction devices.
A number of different larger S3D display setups are discussed
and compared, some of which are used to support decision making
in groups.
Sensor Data Visualization
Baltabayev et al. discussed how two simple mobile VR
approaches can be used to explore 3D terrain in combination with
sensor data, using very simple smartphone setups [33]. For this
purpose, two Android apps were developed for the data record-
ing task, and a standard web browser was used in combination
with WebVR to visualize the virtual environments. A simple ter-
rain visualization was created and sensor data was associated with
the underlying coordinates. The coordinates were either recorded
based on GPS sensors (low spatial resolution, Figure 2) or by tri-
lateration between WiFi hotspots (high spatial resolution). This
work shows that it is possible to record and visualize sensor data
with very simple mobile solutions.
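As an illustration of the WiFi-based positioning step, a standard linearized least-squares trilateration could look as follows; the anchor layout and distances are invented, and this generic formulation is not necessarily the exact method of [33]:

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares 2D position estimate from >= 3 known anchor points
    (e.g., WiFi hotspots) and measured distances to them."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtracting the first sphere equation from the others linearizes
    # ||x - a_i||^2 = d_i^2 into the system A x = b.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three hotspots and distances estimated, e.g., from a path-loss model;
# the point equidistant from all three is recovered as (5, 5).
print(trilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))
```

In practice, the distances would be derived from received signal strength via a path-loss model, which is the main source of error in such setups.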
Figure 2. 3D world shown on a smart phone using a web browser. The
color scheme encodes humidity and the surface of the virtual environment
was created based on GPS data. The data selection dialog is shown together
with the color scale.
Vehicle Simulation and Remote Control
Vehicle Simulation and Exploration
Miranda et al. used a vehicle simulator with motion capability
combined with a CAVE to elaborate physiological driver
data [34]. More precisely, breathing, galvanic skin resistance
(GSR) and pressure on the participant’s hand were measured with
sensors. A number of interesting observations were collected:
e.g., respiration signals are good indicators for characterizing
stress variation, and the simulator can identify changes in user
behavior through the physiological signs of breathing, pressure
and GSR, analyzed via the average frequency throughout the
simulation period. In this way, these virtual simulation
environments are adequate alternatives for R&D in the car
industry as long as a physical car prototype has not yet been built.
Improving Remote Activities through Stereoscopic Vision
Boonsuk investigated the use of S3D visualization to improve
user performance in remote activities. Experiments were
conducted with 12 participants using an Oculus Rift DK2 with
separate left/right video streams. The videos were fed from two
Raspberry Pi cameras attached to a remote-controlled car (Figure 3).
Participants using S3D visualization estimated relative distances
between the remote car and a wall significantly better. These
results could be extended to help designers develop teleoperated
viewing systems for devices like remote-controlled cars, drones,
and tele-guided robots [35].
Navigation
Distance Measurement and Estimation
Deas et al., in line with previous work, showed that stereoscopic
vision is especially useful for estimating the relative altitude in
relation to a terrain at low altitudes (0-5 ft). In contrast, binocular
vision provides very weak input to absolute altitude estimates at
high altitudes (10-100 ft) [36]. During their evaluation of absolute
Figure 3. This remote-controlled car is equipped with two cameras for S3D
real-time video transfer. Participants of the user experiment had to stop the
car in front of a wall using either monoscopic or stereoscopic vision. The
latter outperformed the monoscopic approach. (Courtesy of © 2015
Wutthigrai Boonsuk. All Rights Reserved.)
and relative distance measurements, real world scenes were com-
pared using natural as well as simulated S3D imagery. For this
purpose, an S3D monitor was installed which showed a) the S3D
image/scene, and b) an extension of a real-world tube into the
virtual world (representing a helicopter skid) that was rendered on
top of a). As the head position as well as the distance between
S3D monitor and participant were fixed during the experiment,
the skid provided a convincing spatial reference when measuring
distances.
Semantics for Integrated Pipeline for Immersive Environ-
ments
Trellet et al. discussed an integrated pipeline designed for immersive
environments, promoting direct interactions on semantically-linked
2D and 3D heterogeneous data presented in a common
working place [37]. The combination of data is facilitated by 1)
making use of pre-existing/inferred links present in their
hierarchical concept definitions, 2) enrichment through task-dependent
and adaptive analyses proposed to the user, and 3) interactive
presentation in a unique working environment to be explored.
Avoiding cybersickness
A relevant issue of daily work inside virtual environments is
cybersickness. If a joystick is used to trigger rotations, fast
motions often lead to cybersickness, as the user fully visually
perceives the motion in the virtual world. This is also a reason
why teleportation is frequently used in virtual environments
instead of moving fluently forward using controllers. To reduce
cybersickness by optimizing rotational motions, Kemeny et al.,
for example, developed a new navigation technique called “Head
Lock” [38]. The method is based on experimentally obtained
thresholds which are used to keep fixed visual outside-world
references. Persistently closing a single eye binds the virtual
environment to the user’s head; when the rotation is finished, the
user opens this eye again and the locking mechanism is disabled.
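A minimal sketch of the core idea – freezing the rendered view by counter-rotating the scene while the lock is engaged – might look as follows. The yaw-only simplification and the API are ours, not the implementation of Kemeny et al. [38]:

```python
class WorldLock:
    """Counter-rotate the virtual world (yaw only, for brevity) while a
    lock gesture, e.g. a persistently closed eye, is active."""

    def __init__(self):
        self.locked = False
        self._last_head_yaw = 0.0
        self.world_yaw = 0.0  # extra rotation applied to the scene root

    def update(self, head_yaw, eye_closed):
        if eye_closed and not self.locked:
            self._last_head_yaw = head_yaw  # lock engages
        self.locked = eye_closed
        if self.locked:
            # Absorb the head rotation into the scene, freezing the view.
            self.world_yaw += head_yaw - self._last_head_yaw
            self._last_head_yaw = head_yaw
        return self.world_yaw

# Turning the head 90 degrees with one eye closed leaves the rendered view
# unchanged; opening the eye re-enables normal look-around.
```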
Hybrid-dimensional visualization and navigation
The paper [39] discusses hybrid-dimensional visualization of
biomedical and functional structures. It introduces new ways of
interacting with a stereoscopic monitor – the zSpace – by using
head tracking for navigation; here, head tracking can be used to
navigate around the car model. Using the zSpace stylus pen (Fig-
ure 5 Top), the user can interact with objects in 3D space. The
3D visualization is supplemented by a simple 2D map showing
the different components of the car as technical terms (Figure 4).
Clicking on one of these terms directly navigates the 3D view to
the corresponding perspective and focuses the object of interest –
here, the left-front wheel.
Stereoscopic Space Map
The Stereoscopic Space Map is a system created for group pre-
sentations in a CAVE2 environment consisting of 80 stereoscopic
monitors, communicating with a central computer connected to a
zSpace monitor (Figure 5 Top). This system would enable the im-
mersive exploration of large virtual environments with a number
of experts and engineers by providing a 330° panoramic view [40].
The central navigator is able to first check potential perspectives
on the zSpace (Figure 5 Center) – here, inside an animal cell
– and then smoothly navigate the complete virtual environment
(shown in the CAVE2) to this specific location (Figure 5 Bottom).
This system tries to tackle the problem that the S3D projection
in CAVEs is usually only optimized for the navigator. Using the
approach of [40], the S3D projection is optimized for the complete
audience – instead of altering the whole projection matrix, the
rendered eye distance is adjusted in relation to the object of
interest.
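Paraphrased as code, the trick amounts to scaling the rendered interaxial with the distance to the object of interest; the function below is our schematic reading of that idea, not code from [40]:

```python
def audience_eye_distance(base_interaxial, object_distance,
                          reference_distance, min_scale=0.1):
    """Scale the rendered eye separation with the distance to the object
    of interest so depth stays comfortable for viewers away from the
    tracked navigator (schematic; clamped against exaggerated depth)."""
    scale = object_distance / reference_distance
    return base_interaxial * max(min_scale, min(scale, 1.0))

# A nearby object of interest (2 m, reference 10 m) is rendered with a
# fifth of the nominal interaxial, flattening depth for the whole audience.
print(audience_eye_distance(0.065, 2.0, 10.0))
```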
Gamification
Stereoscopic Games
Many stereoscopic games and the corresponding game consoles
might be able to inspire the next generation of S3D
immersive systems [41]. In 1983, the first S3D video games emerged
on the market, among them the short-lived Vectrex Arcade System,
which already provided an optional headset as a viewing
component to be used in conjunction with a CRT monitor.
The headset functioned as an early version of shutter glasses. A
few years later, large companies like Nintendo also entered the
market. Around 1990 a game was released on the Nintendo NES
that made use of the Pulfrich effect. This review paper also de-
scribes a number of autostereoscopic approaches, early HMDs for
gaming, autostereograms, as well as reflection-based approaches.
It shows that the game industry has always been a huge driver of
innovation for S3D technologies.
Molecular Visualization and Simulation
Wong et al. developed an immersive system to design molecular
structure assemblies [42]. The immersive interactive system
can be used to change the properties of the simulated
molecular membrane structure and to directly observe the changes.
For this purpose, the simulation environment is represented in VR
by a device resembling a gaming machine, which provides dials to
amend the system’s thickness and temperature (Figure 6).
Figure 4. Hybrid-dimensional visualization and navigation: the car model
can be explored in 3D via the zSpace using head tracking and a 3D stylus
pen (see also Figure 5 Top), as well as via the 2D map on the right side.
Figure 5. Stereoscopic Space Map: Top: zSpace (z) communicates with
CAVE2 and enables a navigator from a central position to preview perspec-
tives and navigate then the virtual world of the CAVE2 to this specific place.
Here, 20 node computers (n) are connected to four monitors each. Center:
View from outside the cell while the navigator selects a component using
the zSpace stylus pen. Bottom: Selecting the Chloroplast, the view of the
CAVE2 moved inside the cell and shows the components of the Chloroplast.
Figure 6. The biomedical simulation environment provides two dials to
amend the thickness and temperature of a molecular membrane [42].
(Courtesy of © 2018 Hua Wong et al., license: CC BY-NC-ND 4.0 [43];
the original image was cropped by 50%.)
Training Environments
Firefighter Training in VR
Puel et al. designed a system called PRESTO-FF to build a
realistic firefighter training environment to teach situation awareness
and team coordination [44]. The first prototype is based
on the commercial XVR simulation system [45]. The introduced
system includes a modeling facility for the behaviours of people/AI
on the scene according to their roles (like firefighter, victim,
informer), a task-specific 3D environment (e.g. a flat), and a visual
scripting system that impacts the simulation based on the player’s
actions. In addition, a trainer is able to manipulate the system in
real-time. 15 domain experts evaluated the system positively,
despite the low visual quality of this first prototype. A review of
firefighting environments is provided by Engelbrecht et al. [46].
Evacuation Training in AR
Sharma et al. designed a Unity-based AR application helping
people to evacuate from a building affected by smoke and fire [47].
A study was conducted emulating an evacuation situation. Using
smartphones and tablets, the study participants were provided
with a 3D map of the environment – including fire sources – and
the direction towards the nearest exit was indicated. The evaluation,
built on 30 responses, clearly showed that participants preferred
this approach over traditional 2D evacuation plans. Integrated
signs helped to improve orientation and participants expected
this approach to be much better in real-time evacuations.
Optimization of the Visual Experience
Stereoscopic Movie Evaluation
Vatolin et al. developed a number of methods to assess the quality
of stereoscopic movies and created reports on a number of popular
3D movies [48]. In this work, 10 metrics are introduced,
including – besides well-known S3D issues like extreme horizontal
disparity and vertical parallax – color mismatch between views,
field-of-view/scale mismatch, as well as rotation mismatch.
Vatolin et al. compiled a number of reports assessing the quality
of S3D movies using these metrics and more. Their analysis of
105 S3D movies concluded that S3D movie-production pipelines
have considerably improved between 2010 and 2016.
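One of the simpler of these metrics, vertical parallax, can be estimated by matching features between the left and right view and inspecting the vertical component of the matched displacements. The OpenCV-based sketch below is a generic illustration of such a metric, not the implementation used by Vatolin et al. [48]:

```python
import cv2
import numpy as np

def vertical_disparity_stats(left_gray, right_gray, max_features=500):
    """Estimate vertical parallax (in pixels) between a stereo pair via
    ORB feature matching; in a well-aligned pair the values are ~0."""
    orb = cv2.ORB_create(nfeatures=max_features)
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)
    if not matches:
        return 0.0, 0.0
    dy = np.array([kp_r[m.trainIdx].pt[1] - kp_l[m.queryIdx].pt[1]
                   for m in matches])
    return float(np.mean(dy)), float(np.std(dy))

# mean_dy, std_dy = vertical_disparity_stats(cv2.imread("L.png", 0),
#                                            cv2.imread("R.png", 0))
```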
Stereoscopic Design with Blender
Biere et al. discussed how to design a 3D model and create a
3D-stereoscopic movie of a Chlamydomonas r. cell using the
Open Source tool Blender [49]. The challenge here was to
combine different information sources, such as 2D, spatial, light- and
electron-microscopic images, as well as publication-related
data. Hence, an interpretative visualization approach had
to be used. In addition, different scales had to be bridged and the
stereoscopic presentation had to be optimized for this purpose.
This approach might be interesting wherever structural data has
to be visualized that is inaccessible to the bare human eye as well
as to optics, and different visual information sources have to be
combined.
Interaction
Hand interaction in VR
MacAllister et al. presented one of the first proof-of-concept
systems integrating the Leap Motion and Oculus Rift DK2 into a
commercial engineering visualization and analysis package,
Siemens’ Teamcenter® Lifecycle Visualization Mockup. A
study showed that these two devices together can
provide real-time, fluid interaction in a commercial engineering
platform [50]. Different gestures were used for this purpose:
direction-based rotation was triggered by a swipe gesture, the
zoom in/out feature by a forward/backward hand motion, and
gesture tracking was turned on by closing the right fist and turned
off by closing the left one. According to Fikkert et al., such mirrored
gestures are highly intuitive for on/off actions [51].
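As a schematic of such a gesture-to-command mapping (the gesture names, thresholds and state layout are invented for illustration; [50] built on the Leap Motion SDK):

```python
def handle_gesture(state, gesture, hand=None, direction=(0.0, 0.0, 0.0)):
    """Dispatch recognized gestures to model-manipulation commands;
    gesture recognition itself is left to the hand-tracking SDK."""
    if gesture == "fist":
        # Mirrored on/off mapping: right fist enables, left fist disables.
        state["tracking"] = (hand == "right")
    elif not state.get("tracking"):
        return state
    elif gesture == "swipe":
        # Direction-based rotation around the vertical axis.
        state["yaw"] = state.get("yaw", 0.0) + 5.0 * direction[0]
    elif gesture == "push_pull":
        # Forward/backward hand motion maps to zooming in/out.
        state["zoom"] = state.get("zoom", 1.0) * (1.0 + 0.1 * direction[2])
    return state

state = {"tracking": False}
handle_gesture(state, "fist", hand="right")
handle_gesture(state, "swipe", direction=(1.0, 0.0, 0.0))
```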
A Virtual User-friendly Keyboard
A relevant problem in virtual environments is keyboard
interaction, which is normally limited to the navigation device
shipped with the HMD, while an ordinary keyboard can only be
used if the HMD is set aside, or in a blind manner. A virtual
keyboard alternative was developed, e.g., by Jimenez et al. [52]. It is
used in combination with an Oculus Rift DK2 and Leap Motion.
The smartphone-based Swype algorithm is emulated in their
implementation to enable a user-friendly typing experience. In
addition, a gaze mode was implemented that can be used for
interaction purposes.
Presence and Blended Spaces
Presence in the Virtual Workplace
Presence – or the “sense of feeling there” – is an important factor
when designing virtual worlds and underlying user interactions.
An extensive survey on Presence and related concepts was
provided by Skarbez et al. [53]. In the future, it can be expected that
designers will have to work for extended periods of time in VR
environments. Therefore, feeling comfortable and immersed in the
virtual workspace will be of high relevance. Parola et al. proposed,
as an alternative interpretation, the “sense of feeling
real” [54]. An important aspect of Presence is that implemented
designs should take individual user differences into account, which
in the real world are usually reflected in the individual arrangement
of a workplace. Parola et al. further investigated the “sense of
feeling real” by analysing a virtual climbing experience based on the
variables 1) expertise in real-world activities, 2) interaction
ability, and 3) the virtual hand ownership illusion. Positive
correlations between the different variables were found [55].
Blended Spaces, Embodied Information Behavior and
Mixed Reality
West et al. discussed embodied information behavior in the
context of different VR and AR technologies [56]. For this
purpose, they reviewed a number of technologies and associated
narratives, e.g. by analyzing advertisements. The authors
observed that the proposed seamless interaction of material
reality and digital data envisions: 1) immersion on-demand, 2)
immersion on-the-go, 3) sensorial augmentation beyond that which
is humanly possible, and 4) hybrid virtual experiences consisting
of “being there” and “being here.” Moreover, the authors
referenced Benyon’s “Blended Spaces”, which incorporate
work in blended theory to develop user experiences within
and across digital-physical spaces [57]. Benyon advocates design
with awareness of ontology, topology and volatility to create
correspondences and adequate associated transitions [56]. He
defines presence as “the experience, that [designers] are trying to
create for people” [57] and envisions a sense of presence that is
multidimensional, distributed, and transitioning between the
digital and physical world.
Special Environments: Underwater and in Space
A digital VR planetarium
The work of Dias et al. [58] presents Galactica, a digital
planetarium that proposes the use of stereoscopy to depict photorealistic
scenes. The high-resolution multi-display projection system is
based on OpenSceneGraph and is able to visualize astrophysical
data and phenomena. More than 100K stars were visualized via
billboards, plus more than 100K additional polygons. Often, dome
projections are used for this kind of visualization to enable a
180° view of the night sky. These kinds of environments are highly
interesting for space R&D.
Stereoscopic Underwater Imagery
Stereoscopic photography and filming have a long tradition and
many different approaches are available. However, special
environments require highly specific solutions. The main task of the
underwater project of Woods et al. was to image and reconstruct
the wrecks of the HMAS Sydney II and HSK Kormoran,
including their debris fields, using photogrammetry [59]. For this
purpose, special underwater cameras were developed by the Centre
for Marine Science and Technology. The produced S3D footage
was shown at 3D cinemas as well as on Power Walls in the HIVE
of Curtin University.
Data Analytics
Evaluation of Stereoscopic Rendering of Multi-
dimensional Data Visualization
Etemadpour et al. analyzed the user performance when carrying
out an interactive visual exploration of multidimensional data in a
CAVE [60]. For this purpose, a user study was conducted where
users had to analyze the spatial data relationships of 3D scatter
plots as well as cluster visualizations in the form of enclosing sur-
faces or hulls. The participants had to perform a set of typical
analysis tasks in the CAVE and on a standard 2D screen. The
study showed that the immersive environment improved perfor-
mance for local analysis tasks, but not so much for global ones.
In addition, surface-based visual encodings seemed to profit more
from the VR environment than point-based renderings. However,
it has to be mentioned that display fidelity and the quality of in-
teractions are important performance-related factors.
Collaborative Immersive Data Analytics
As analysis is a crucial part of the design process, there are a
number of aspects related to Immersive Analytics that
can be applied to Immersive Design Engineering as well. As
Immersive Analytics approaches have already been evaluated from a
number of perspectives, we can utilize the related scientific
observations [61, 29, 62].
Figure 7. Abstract data visualization in virtual environments. IATK is used
here to visualize multi-dimensional data. Left: scatter plots; right: the
selected bar charts are highlighted in red. The selection is performed via
Brushing and Linking: top left via a sphere selection, bottom left via a
selection box. (Courtesy of © 2019 Maxime Cordeil. All Rights Reserved.)
A very interesting example in this context is IATK – the Im-
mersive Analytics Toolkit – which provides interactive author-
ing and exploration of data visualization in immersive environ-
ments [63]. A grammar of graphics is provided to the user which
can be configured by using a GUI in Unity as well as a specific
API supporting the creation of novel immersive visualization de-
signs and interactions. Figure 7 shows how Linking and Brushing
is implemented here in the virtual environment. IATK supports
a number of visual designs, such as spheres, plots, quads with
different sizes and colors, as well as parallel rendering of mil-
lions of items. Also, a number of VR devices were already used
in this context, e.g., the Oculus Rift CV1, Meta 2 as well as the
HoloLens. Approaches like this will be relevant in the future to
merge VR engineering applications directly with statistical an-
alyses in a fluent and user-friendly fashion. Another important
factor for the development of the VR workplace of the future is
support for collaboration. For this purpose, IATK was extended
by an early prototype called FIESTA, a free-roaming Collaborative
Immersive Analytics (CIA) system [64]. In contrast to many
existing CIA prototypes, FIESTA allows users to collaboratively
work together wherever and however they wish, untethered from
mandatory physical display devices. Virtual walls can be used to
facilitate group discussion and presentation by placing, e.g.,
scatter plots on those walls. For this purpose, a “tear-out” metaphor
is used to duplicate visualizations from panels and freely position
them afterwards in the virtual space.
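Stripped of rendering, the brushing-and-linking pattern itself is compact: a selection made in one view is a set of record indices broadcast to every linked view. The following minimal sketch is ours and does not reflect IATK's actual C#/Unity API:

```python
class LinkedViews:
    """Minimal brushing and linking: a selection brushed in one view is
    propagated to all registered views via shared record indices."""

    def __init__(self):
        self.views = []

    def register(self, view):
        self.views.append(view)

    def brush(self, source_view, selected_indices):
        for view in self.views:
            if view is not source_view:
                view.highlight(selected_indices)

class View:
    def __init__(self, name):
        self.name = name
    def highlight(self, indices):
        print(f"{self.name}: highlighting rows {sorted(indices)}")

link = LinkedViews()
scatter, bars = View("scatter plot"), View("bar chart")
link.register(scatter)
link.register(bars)
link.brush(scatter, {3, 7, 42})  # e.g. a sphere selection in VR
```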
Multi-perspective Trajectory Visualization and Analysis
A number of virtual display environments were discussed in the
context of different research projects. Often, either CAVEs or
HMDs were used. Klein et al. used an alternative approach for
visualizing flight trajectory data of birds: a multi-screen S3D
setup [65]. Two screens each were placed on three wheeled
monitor mounts, which also carry the connected computers. In
this way, the monitor setup can easily be transported to differ-
ent places and configured in different ways. Figure 8 shows the
“Tiled Stereoscopic 3D Display Wall” showing multiple perspec-
tives based on bird flight data [66]. It supports stereoscopic vi-
sualization using S3D TVs. In this way, first person perspectives
can be shown in parallel to overview maps in a collaborative dis-
play environment. The underlying software system TEAMwISE
is based on the Cesium framework and allows multiple synchro-
nized 3D views [67]. As the visualized data is based on GPS sen-
sors, this kind of immersive visualization system might be highly
interesting for Immersive Analytics of flight data from drones,
airplanes, etc.
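At its core, keeping several 3D views synchronized reduces to sharing one playback clock across all render nodes; each screen then renders its own perspective for the same sample time. The sketch below is our simplification, not TEAMwISE's Cesium-based implementation [67]:

```python
import time

class SharedPlayback:
    """One authoritative clock; every display node asks it which sample
    of the trajectory data set to render, keeping all views in sync."""

    def __init__(self, start_epoch, speedup=1.0):
        self.start_epoch = start_epoch       # dataset time at playback start
        self._wall_start = time.monotonic()
        self.speedup = speedup               # e.g. 10x real time

    def current_sample_time(self):
        elapsed = time.monotonic() - self._wall_start
        return self.start_epoch + elapsed * self.speedup

clock = SharedPlayback(start_epoch=0.0, speedup=10.0)
# Each screen (first-person bird view, overview map, ...) renders the
# frame for the same clock.current_sample_time().
```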
Figure 8. The Tiled Stereoscopic 3D Display Wall showing different per-
spectives of a bird flock trajectory data set.
Commercial Projects
Virtual Studios enabling distant Group Dynamics and
Fluent Discussions
Skype, Zoom and Discord are all very well known video commu-
nication tools whose popularity drastically accelerated – Zoom in
particular – during the COVID-19 lockdown in 2020. Although
these are great tools for lectures, they lack dynamic interaction,
spontaneous group formation, serendipity and spatial localization. Here,
Gather.town – started in early 2020 by an independent group of
engineers called the Siempre Collective – provides an alternative
by combining an adventure gaming-like 2D interface with video
communication [68]. By using an editor, different virtual 2D en-
vironments can be created based on simple background images,
configured and extended by virtual posters, embedded Google
documents, games, video streams, as well as portals to additional
Gather maps. Figure 9 shows the first prototype of the virtual In-
novation Design Engineering studio at the Royal College of Art
during a first test. The studio was 3D modeled based on original
plans of the building and then rendered with Blender. Although
Gather.town is only a 2D application, it provides a good level of
immersion by combining video chat with free movement on the map.
This kind of environment might very well be a near-future version
of virtual offices, discussion rooms, and fluid presentation spaces in
the context of Design Engineering.
Figure 9. Gather.town showing the first prototype of the virtual IDE studio
at the Royal College of Art during a test session. Students and tutors are
communicating via chat and video, while moving along a 2D map.
Figure 10. Gravity Sketch enables 3D sketching using VR technologies
for professional purposes, such as car and shoe design. Top: context menu
to enable symmetric drawing, with the initial 3D sketch in the background;
the settings window can be freely positioned in the virtual space by using
the controllers. Bottom: drawing a symmetric subdivision surface along the
3D sketch [69]. (Courtesy of © 2020 Gravity Sketch Ltd. All Rights Reserved.)
Freely Designing and Sketching in VR
Gravity Sketch is a project whose roots go back to the Innovation
Design Engineering program (see below). The basic idea of
Gravity Sketch is to provide a new way of interaction inside
virtual environments by enabling 3D drawing and 3D sketching [70].
In this way, it is a drastically extended version of Google Tilt
Brush, which enables painting and brushing in 3D environments
with a focus on artists [71]. Gravity Sketch enables drawing
smooth curves freehand, extruding surfaces into the third
dimension, and then manipulating these surfaces by grabbing and
moving spline points. Gravity Sketch can be used as a cloud-based
solution to access design content directly from within the
VR headset and to collaborate with other design engineers.
Potential uses are car and shoe design, concept art, and product
design.
From Student Project to Commercial AR Product – an
Example
Augmented reality currently suffers from a lack of high-quality
input devices. The available state-of-the-art products have
limitations in real-world use. For example, the hand tracking
used to control the HoloLens 2 requires the user to keep their hands
raised in front of the headset. Voice is suited to selecting
categorical data but unfit for scrolling or scaling objects, while handheld
controllers are restrictive because they require one hand to be free
for the controller.
Litho is a wearable controller for augmented reality that aims
to address the aforementioned problems. The Litho controller is
worn between the first and second finger. It has a capacitive
trackpad on the underside and an array of motion sensors, and
provides haptic feedback. Litho’s design means it can be worn while
doing everyday activities (e.g., typing, driving, drilling).
Litho’s tracking software outputs the approximate position
and rotation of the controller relative to the phone’s or headset’s
camera – allowing developers to create apps that let users
interact with objects in the real world simply by pointing. This is
achieved by fusing data from the camera and the IMU to estimate
the controller’s absolute position in space. Crucially, this does not
rely on external tracking systems, enabling use in a much
wider range of scenarios (outdoors, in large areas, etc.).
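Camera–IMU fusion of this kind is often approximated with a complementary filter: the IMU integrates quickly but drifts, while camera fixes are slower but absolute. The sketch below shows that generic pattern for a single position axis; it is our illustration of the principle, not Litho's proprietary tracking code:

```python
def complementary_update(pos, vel, accel, dt, camera_pos=None, alpha=0.98):
    """Fuse IMU dead reckoning with occasional absolute camera fixes for
    one axis (generic complementary filter, not Litho's algorithm)."""
    # Dead-reckon from the accelerometer (fast, but drifts over time).
    vel += accel * dt
    pos += vel * dt
    if camera_pos is not None:
        # Pull the estimate towards the absolute camera fix (no drift).
        pos = alpha * pos + (1.0 - alpha) * camera_pos
    return pos, vel

pos, vel = 0.0, 0.0
for step in range(100):
    fix = 0.5 if step % 10 == 0 else None   # camera fix every 10th frame
    pos, vel = complementary_update(pos, vel, accel=0.0, dt=0.01,
                                    camera_pos=fix)
```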
The Litho device, in combination with the included Soft-
ware Development Kit (SDK) and suite of example projects, pro-
vides a complete package for developers looking to produce mo-
bile AR apps featuring precise and intuitive manipulations in 3D
space [72].
The first prototype of the controller was developed by Nat Martin
in the Innovation Design Engineering program of the Royal College
of Art and Imperial College. This is a two-year Master’s
program. Towards the end of the first year, the students work for
several weeks on their own individual project: the Solo Major module
(Solo Exploration from 2020). In this short amount of time, the
students have the opportunity to follow the whole product design
process, from ideation and formulating an innovation brief to
research, prototyping and validation.
Figure 11 Top shows an early sketch of Nat’s original idea
during the ideation phase. He quickly developed a number of
different ideas and then started to create an early prototype, as
seen in Figure 11 Center. The shape of the navigation ring
was 3D printed and the electronics were fitted into the device. For
Figure 11. LITHO: Top: First sketches by Nat Martin, 2017. Center: An
early LITHO prototype. Bottom: The final product in 2020. (Courtesy of
© 2017-20 Nat Martin. All Rights Reserved.)
each iteration, the prototype was tested on his classmates. Even
at this early stage the prototypes provided a simple 3D navigation
solution. The original process video shows the idea to use this
device for an augmented wayfinding application [73].
The final outputs from the project were a working prototype
of the hardware and a future-facing concept video. These out-
puts were presented at the Royal College of Art Degree Show
(2017). While on display at the show, Nat’s project was picked
up by several media outlets. This press coverage resulted in him
being contacted by a number of angel investors and venture capital
firms. At this point, with the guidance of the two universities
involved, Nat established a company to continue the development
of the prototype. Four months after this initial contact from
investors, the company closed its first round of investment. This money
enabled the team to grow, protect their IP, and mass produce
and ship the product.
Based on this early working prototype, the final product was
developed, as shown in Figure 11 Bottom. As of July 2020,
Litho has been used in a wide array of fields including
manufacturing, human-car interaction, education, space exploration,
architecture, retail, healthcare and financial services. More
information can be found at https://www.litho.cc.
Intuitive Surgical
Intuitive Surgical provides S3D medical solutions for surgeries
with the Da Vinci system, shown in Figure 12 [74]. On the one hand,
this system enables the perception of spatial distances when
operators perform surgeries; on the other hand, remote operation is
possible. In this way, medical experts can perform or accompany
highly specific surgeries from remote locations. While this device
is obviously optimized for the medical sector, derivative approaches
might be used in the future as part of the design engineering
process at small scales where precision matters.
Figure 12. Da Vinci X surgical system from Intuitive Surgical, enabling
stereoscopic remote surgeries. (Courtesy of © 2020 Intuitive Surgical, Inc.
All Rights Reserved.)
Discussion and Conclusions
In the previous chapter, we discussed a number of Immer-
sive Design Engineering projects, or those which are related to
this area and can be used as an inspiration. The projects were
categorized into 10 categories:
•Virtual Cities, Landscapes and Cultural Heritage
•Vehicle Simulation and Remote Control
•Navigation
•Gamification
•Training Environments
•Optimization of Visual Experience
Figure 13. A snapshot of the comparison table listing all Immersive
Design Engineering and related projects discussed in this short review:
http://ide2020.immersive-analytics.org.
•Interaction
•Special Environments: Underwater and Space
•Data Analytics
•Commercial Projects
Of course, this differentiation is very general and the boundaries
are fluid. Therefore, this paper is accompanied by a comparison
matrix (Figure 13) of the different projects, which can be
found at http://ide2020.immersive-analytics.org. It differentiates
between general categories similar to the ones used in the previous
chapter, as well as between different hardware, paper/project
types, groups, and grades of immersion. Moreover, it assigns one
or more categories of the initial definition – Systems, Devices and
Processes – to each of the projects.
Based on the projects discussed, we identified a number of
research questions in the context of Immersive Design Engineer-
ing projects. This non-exhaustive list will have to be extended in
the future:
1. How is it possible to combine simulation, such as crowd and
environment simulation, in a way that it supports the Design
Engineering process?
2. How can we improve the usage of web technology in com-
bination with immersive devices?
3. Distance estimation and spatial perception are crucial for
many DE processes. It was shown that immersive technologies
support distance estimation for close objects
(0-5 ft). Is there a way to improve distance estimation for
far-distance objects (10-100 ft)?
4. How can we use immersive technologies in terms of inter-
active and participatory city planning in DE?
5. How can we integrate and visualize sensor data – such as
temperature or air pressure – into immersive DE applica-
tions?
6. How can we integrate and utilize users’ sensor data – such as
galvanic skin resistance or breathing – in immersive DE
applications with the purpose of improving the user experience?
7. What is the best way to integrate cloud-based data into IDE
applications and how can the data be accessed afterwards?
8. A general problem of immersive solutions: how can we im-
prove the stereoscopic vision and perception and optimize it
for individuals – who might even be part of a larger
audience? (view-centered S3D optimization)
9. How can we optimize a virtual scene or environment to pro-
vide an optimal configuration in terms of S3D visualization
(scene-centered S3D optimization)?
10. How can we merge hybrid-dimensional (i.e. 2D and 3D)
visualization in immersive environments in a user-friendly
way?
11. What is the best approach to merge different immersive
technologies, such as zSpace/CAVE, or 3D monitor and
HMD?
12. How do we optimize IDE for different user scenarios? In
our review, different scenarios were observed: 1) single
user, 2) multiple users with the same role, usually during
collaboration, and 3) 1-to-n users, for small as well as
large groups.
13. Which lessons can we learn in the context of IDE from im-
mersive games and the corresponding consoles? What is the
impact of playfulness on IDE?
14. How to avoid cybersickness over a long period of time while
working in IDE projects?
15. How to provide data analytics inside an IDE environment
(directly related to Immersive Analytics)?
16. What is the best way to present data from multiple
perspectives and enable immersive comparison?
Outlook
Finally, we would like to introduce a number of new – partly
disruptive – ideas which might trigger future developments in the
context of Immersive Design Engineering.
Disruptive Display Design
Since much of how we establish meaning in the real world is
qualitative rather than quantitative, there are opportunities within
Human-Computer Interaction (HCI) for qualitative displays and
interfaces in relation to information presentation – especially
when quantification has become a default mode or approach
for information display and presentation. Qualitative displays
and interfaces present information primarily through the qualities of
things [75]. They enable people to explore new forms of
understanding information through qualities of phenomena (e.g.,
people, objects and events around us). Figure 14 shows an example of
how a qualitative interface may work in the context of a person ex-
periencing weight or force. Here, the shape of a liquid, pressure,
expansion, and the constant change of scale provides an experien-
tial value of weight or force beyond merely providing a numerical
value on a scale. These qualitative representations may improve
people’s understanding and experience about the relationships be-
tween meanings and phenomena. Since the possibilities of
qualitative displays in design and HCI have not been fully discussed
in relation to virtual environments, exploring the idea of qualitative
displays in the context of immersive visualization could push the
boundaries of future interfaces and interactions.
Multisensory System towards Synaesthesia
Most systems discussed here strongly focus on the visual sense.
Of course, HCI also plays a pivotal role in most applications.
Most VR headsets nowadays are delivered with 3D interaction
devices – such as the VIVE (Cosmos) controllers or the zSpace stylus.
Figure 14. A non-numerical weighing scale. Liquid trapped under glass
changes shape, size and scale.
Like the previously-mentioned devices, many pro-
vide haptics, usually by using vibration feedback. Hand tracking
also plays an important role – especially in the early days of the
Oculus Rift (Development Kit 2) the Leap Motion was a widely-
distributed device which was attached to the Rift to provide hand
tracking. While various multisensory approaches focus on
improving user experience and interactions, there have not been many
attempts to use synaesthesia as a source of design material to
enhance immersive experiences.
Synaesthesia is a cognitive phenomenon in which stimula-
tion of one sensory or cognitive pathway leads to automatic and
involuntary experiences in a second sensory or cognitive pathway
– for example, tasting a candy can trigger another sensation such
as seeing a shape or smelling a color. Currently, more than
88 types of synaesthesia are known, and we believe exploring such a
translative property of synaesthesia can be potentially useful for
reinforcing immersive experiences. For instance, haptic values can
be mapped to visual values to maximize users’ experience and
sensory stimulation. Synaesthesia and its translative property [76]
in relation to immersive interactions are a less developed area.
Exploring this area of study may push the boundary of designing
immersive environments.
Cognitive Interaction
Exploring cognitive interaction beyond our sensory systems may
also reinforce immersive experiences. One of our pilot stud-
ies [77] tentatively suggests that it may be possible to shape user
experiences without any direct physical, sensory, and social inter-
actions. Figure 15 shows an example of how an experience can
be shaped without any direct interactions. If there is a way to
generate a type of experience without any interactions, where and
how can we apply this approach or phenomenon in the context of
immersive interaction? While many experiences can be achieved
through various types of physical, sensory and social interactions,
we believe such a focus can also shape our behaviors and experi-
ences within immersive environments.
Acceleration by Immersion?
MacAllister et al. mentioned in their work that “product design
in today’s globalized world requires quick turnarounds to
maintain profitability” [50]. As mentioned earlier, Design Engineering
is an interdisciplinary field, and therefore teams consist of
members with diverse backgrounds, enabling concurrent design.
These teams are expected to generate products with
great speed and accuracy [78]. A challenge of the future will be
to optimize the use of immersive technologies in the context of
collaboration and effectiveness. Especially in the era of COVID-19,
support for remote collaboration is essential. Effective work tools,
such as visualization or product life-cycle management software,
can bridge gaps between diverse working backgrounds and
distributed work environments [79, 80, 50].
Immersion ≠ Virtual Reality
Although the term ‘immersion’ is nowadays strongly associated with VR technologies, we would like to emphasize that immersion must also be seen in a broader context (we mentioned earlier a number of works discussing immersion from various perspectives). Human-centered design has been a leading paradigm in design for the past few years. However, the paradigm is now shifting – as we can observe that too much focus on the human neglects the environment, with negative implications for the human population itself – and new approaches are required.
Figure 15. This unusual machine or experiment is a stationary device (left) that appears to do nothing. However, when there are no humans in its environment – detected by no sound, no motion and no light – it secretly starts to emit beams and rays of stunning colors/lights. Although it practically provides no interaction – since it will never function in front of people – participants were able to experience certain playful aspects.
So how does immersive design relate to human-centered design? Immersive design takes the different senses of the user into account. In the context of VR, full immersion is usually supported by an HMD providing a 360° virtual experience, whereas semi-immersion is supported by 3D fishtank displays, offering a window into the virtual world while maintaining visual access to the real world. Strictly speaking, however, full immersion would address all human senses. In the context of design and engineering, we would expect those senses to be supported at least insofar as they are required to finish a particular task. A 3D modeling task for a car will require immersive visualization and touch interaction with feedback. In an extended context, this scenario would also require support for the auditory sense (in case the exhaust system is modeled) and the olfactory sense (in case the UX of the interior is modeled). Together, these aspects provide an immersive, and in this way a very user-centered, scenario. But as immersion takes different senses into account, we can argue that the simulation or emulation of the environment can also be improved, enabling a more context-aware (towards post-human-centered) design. Although we are still building the environment around the user in the center, the degree of isolation of the user would in this case be relatively low, as they can interact with their environment using several senses, increasing awareness. One way to operationalize this sense-per-task view is sketched below.
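The following toy formalization expresses the car example as a requirements mapping from tasks to sensory channels. The task names and channel sets are illustrative assumptions of our own, not drawn from any cited system.

    # Toy formalization of the car example: which sensory channels a
    # design-engineering task should support for full immersion.
    # Task names and channel sets are illustrative assumptions.
    TASK_SENSES = {
        "model_car_body":       {"visual", "haptic"},
        "model_exhaust_system": {"visual", "haptic", "auditory"},
        "evaluate_interior_ux": {"visual", "haptic", "olfactory"},
    }

    def immersion_gap(task, supported_senses):
        """Return the sensory channels a given setup still lacks for a task."""
        return TASK_SENSES[task] - set(supported_senses)

    # Example: an HMD with vibrating controllers covers visual + haptic.
    print(immersion_gap("evaluate_interior_ux", {"visual", "haptic"}))
    # -> {'olfactory'}

Such a mapping makes explicit which channels a given hardware setup still lacks for a particular design task.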
In contrast to many other immersive visualization approaches, which are often of a more theoretical nature or serve technique evaluation purposes, the product of design engineering is usually a concrete artefact that emerges from the virtual world into the real world. The immersive experience therefore produces a spatial model – in our example, a car. This increases the value of the immersive approach bilaterally, in both the virtual and the real world.
However, as the initial research agenda extracted from this review shows, there are many questions in this field that remain to be elaborated more precisely in the near future.
References
[1] M. A. Robinson, P. R. Sparrow, C. Clegg, K. Birdi, Design engineer-
ing competencies: future requirements and predicted changes in the
forthcoming decade, Design Studies 26(2), pp. 123–153 (2005).
[2] N. Greffard, F. Picarougne, and P. Kuntz, Visual community de-
tection: An evaluation of 2D, 3D perspective and 3D stereoscopic
displays, International Symposium on Graph Drawing, pp. 215–225
(2011).
[3] J. P. McIntire, P. R. Havig, and E. E. Geiselman. What is 3D good
for? A review of human performance on stereoscopic 3D displays,
Head- and Helmet-Mounted Displays XVII, 8383, pp. 83830X-1–13
(2012).
[4] M. A. Robinson. How design engineers spend their time: Job content
and task satisfaction. Design studies 33(4), 391–425 (2012).
[5] International Data Group (IDC), AR and VR Headsets Will
See Shipments Decline in the Near Term Due to COVID-19,
But Long-term Outlook Is Positive, According to IDC, https:
//web.archive.org/web/20200531133955/https:
//www.idc.com/getdoc.jsp?containerId=prUS46143720.
(last accessed on 2020-06-17).
[6] C. Cruz-Neira, D. J. Sandin, T. A. DeFanti, R. V. Kenyon, and J. C.
Hart, The CAVE: audio visual experience automatic virtual environ-
ment, Communications of the ACM, 35(6), pp. 64—72 (1992).
[7] A. Febretti, A. Nishimoto, T. Thigpen, J. Talandis, L. Long, J. D. Pirtle, T. Peterka, A. Verlo, M. Brown, and D. Plepys, CAVE2: a hybrid reality environment for immersive simulation and information analysis, in IS&T/SPIE Electronic Imaging, pp. 864903-1–864903-12 (2013).
[8] 3D TV is dead - The Independent (2017),
https://www.independent.co.uk/life-style/gadgets-and-tech/news/3d-
tv-dead-last-manufacturers-lg-sony-give-up-tech-trend-glasses-
home-cinema-television-a7546091.html. (last accessed on 2020-08-
09).
[9] Oculus - VR Headsets & Equipment, https://www.oculus.com.
(last accessed on 2020-06-25).
[10] HTC VIVETM - Discover Virtual Reality Beyond Imagination,
https://www.vive.com. (last accessed on 2020-06-25).
[11] Varjo XR-1 - The world’s most advanced virtual and mixed reality
devices, https://varjo.com. (last accessed on 2020-06-25).
[12] zSpace - AR/VR Learning Experiences, https://zspace.com.
(last accessed on 2020-06-25).
[13] 3D Pluraview - Beam Splitter Technology, Passive 3D-stereo dis-
play, https://www.3d-pluraview.com. (last accessed on 2020-
06-25).
[14] Looking Glass Factory - The World’s Leading Holographic Display,
https://lookingglassfactory.com. (last accessed on 2020-06-
25).
[15] Alioscopy - Glasses-free 3D displays, http://www.alioscopy.
com. (last accessed on 2020-06-25).
[16] Microsoft HoloLens - Mixed Reality Technology for Business,
https://www.microsoft.com/en-us/hololens. (last accessed
on 2020-06-26).
[17] Mechdyne - Audiovisual Consulting Services & AV/IT Design So-
lutions, https://www.mechdyne.com. (last accessed on 2020-06-
25).
[18] Vicon - Award Winning Motion Capture Systems, https://
www.vicon.com/. (last accessed on 2020-06-30).
[19] D. Checa, A. Bustillo, A review of immersive virtual reality seri-
ous games to enhance learning and training, Multimedia Tools and
Applications, 79(9), pp. 5501–5527 (2020).
[20] Unity Real-Time Development Platform - 3D, 2D VR and AR En-
gine, https://unity3d.com. (last accessed 2020-08-09).
[21] Unreal Engine - The most powerful real-time 3D creation platform,
https://www.unrealengine.com/. (last accessed 2020-08-09).
[22] WorldViz - Virtual Reality for Training and Research, https://
www.worldviz.com. (last accessed on 2020-06-25).
[23] OGRE - Open Source 3D Graphics Engine - Home of a marvelous
rendering engine, https://www.ogre3d.org. (last accessed 2020-
08-09).
[24] McGraw-Hill Dictionary of Scientific and Technical Terms, 6th ed.,
McGraw-Hill Education (2003).
[25] W. Lidwell, K. Holden, J. Butler, Universal principles of design, re-
vised and updated: 125 ways to enhance usability, influence percep-
tion, increase appeal, make better design decisions, and teach through
design, Rockport Publishers (2010).
[26] M. Csikszentmihalyi, Flow: The Psychology of Optimal Experi-
ence, Harper Collins Publishers (1991).
[27] F. Hülsmann, N. Mattar, J. Fröhlich, I. Wachsmuth, Simulating wind and warmth in virtual reality: conception, realization and evaluation for a CAVE environment, JVRB – Journal of Virtual Reality and Broadcasting 11(10) (2014).
[28] R. De La Barré, R. Bartmann, M. Kuhlmey, B. Duckstein, S. Jurk, S. Renault, A new design and image processing algorithm for lenticular lens displays, Electronic Imaging 2017(5), pp. 194–199 (2017).
[29] B. Sommer, D. Barnes, S. Boyd, T. Chandler, M. Cordeil, K. Klein, T. D. Nguyen, H. Nim, K. Stephens, D. Vohl, S. Wang, E. Wilson, J. McCormack, K. Marriott, F. Schreiber, 3D-Stereoscopic Immersive Analytics Projects at Monash University and University of Konstanz, IS&T Electronic Imaging: Stereoscopic Displays and Applications XXVIII Proceedings, IS&T, Springfield, VA (2017).
[30] T. Chandler, B. McKee, E. Wilson, M. Yeates, M. Polkinghorne. A
New Model of Angkor Wat: Simulated Reconstruction as a Method-
ology for Analysis and Public Engagement. Australian and New
Zealand Journal of Art 17(2), pp. 182-194 (2017).
[31] M. Scalabrin, L. A. Ripamonti, D. Maggiorini, D. Gadia.
Stereoscopy-based procedural generation of virtual environments.
Electronic Imaging 2016(5), pp. 1-7 (2016).
[32] N. Holliman, M. Turner, S. Dowsland, R. Cloete, T. Picton, Design-
ing a Cloud-based 3D Visualization Engine for Smart Cities, Elec-
tronic Imaging 2017(5), pp. 173-178 (2017).
[33] A. Baltabayev, A. Gluschkow, J. Blank, G. Birkhölzer, M. Kern, F. Klopfer, L.-M. Mayer et al., Virtual Reality for Sensor Data Visualization and Analysis, Electronic Imaging 2018(3), pp. 451-1 (2018).
[34] M. R. Miranda, H. Costa, L. Oliveira, T. Bernardes, C. Aguiar,
C. Miosso, A. B. S. Oliveira, A. C. G. C. Diniz, D. M. G. Domingues,
Development of simulation interfaces for evaluation task with the use
of physiological data and virtual reality applied to a vehicle simulator,
The Engineering Reality of Virtual Reality 2015, p. 939207 (2015).
[35] W. Boonsuk, Usability of stereoscopic view in teleoperation, In
Stereoscopic Displays and Applications XXVI, pp. 93911G (2015).
[36] L. M. Deas, R. S. Allison, B. Hartle, E. L. Irving, M. Glaholt,
L. M. Wilcox, Estimation of altitude in stereoscopic-3D versus 2D
real-world scenes, Electronic Imaging 2017(5), pp. 41-47 (2017).
[37] M. Trellet, N. Férey, J. Flotyński, M. Baaden, P. Bourdot, Semantics for an integrative and immersive pipeline combining visualization and analysis of molecular data, Journal of integrative bioinformatics 15(2) (2018).
[38] A. Kemeny, P. George, F. Mérienne, F. Colombet, New VR navigation techniques to reduce cybersickness, Electronic Imaging 2017(3), pp. 48-53 (2017).
[39] B. Sommer, S. J. Wang, L. Xu, M. Chen, F. Schreiber, Hybrid-dimensional visualization and interaction – integrating 2D and 3D visualization with semi-immersive navigation techniques, IEEE Big Data Visual Analytics (BDVA), pp. 1–8 (2015).
[40] B. Sommer, A. Hamacher, O. Kaluza, T. Czauderna, M. Klapperstück, N. Biere, M. Civico, B. Thomas, D. G. Barnes, and F. Schreiber, Stereoscopic Space Map – Semi-immersive Configuration of 3D-stereoscopic Tours in Multi-display Environments, Electronic Imaging, Proceedings of Stereoscopic Displays and Applications XXVII, 2016(5), pp. 1–9 (2016).
[41] I. Benoit, E. Kurland. The History of Stereoscopic Video Games
for the Consumer Electronic Market. Electronic Imaging 2018(4), pp.
290-1 (2018).
[42] H. Wong, J. Prévoteau-Jonquet, S. Baud, M. Dauchez, N. Belloy, Mesoscopic Rigid Body Modelling of the Extracellular Matrix Self-Assembly, Journal of integrative bioinformatics 15(2) (2018).
[43] Creative Commons License: Attribution-NonCommercial-
NoDerivatives 4.0 International (CC BY-NC-ND 4.0),
https://creativecommons.org/licenses/by-nc-nd/4.0/.
(last accessed on 2020-08-10).
[44] D. Puel, An authoring system for VR-based firefighting comman-
ders training, Electronic Imaging 2018(3), pp. 469-1 (2018).
[45] XVR Simulation - Incident command training tool for safety and se-
curity, https://www.xvrsim.com. (last accessed on 2020-08-05).
[46] H. Engelbrecht, E. Lindeman, S. Hoermann, A SWOT analysis of the field of virtual reality for firefighter training, Frontiers in Robotics and AI, 6, p. 101 (2019).
[47] S. Sharma, S. Jerripothula, An indoor augmented reality mobile ap-
plication for simulation of building evacuation, The Engineering Re-
ality of Virtual Reality 2015, p. 939208 (2015).
[48] D. Vatolin, A. Bokov, M. Erofeev, V. Napadovsky. Trends in S3D-
movie quality evaluated on 105 films using 10 metrics, Electronic
Imaging 2016(5), pp. 1-10 (2016).
[49] N. Biere, M. Ghaffar, A. Doebbe, D. Jäger, N. Rothe, B. M. Friedrich, R. Hofestädt, F. Schreiber, O. Kruse, B. Sommer, Heuristic modeling and 3D stereoscopic visualization of a Chlamydomonas reinhardtii cell, Journal of integrative bioinformatics 15(2) (2018).
[50] A. MacAllister, T.-P. Yeh, E. Winer, Implementing Native Support for Oculus and Leap Motion in a Commercial Engineering Visualization and Analysis Platform, Electronic Imaging 2016(4), pp. 1-11 (2016).
[51] W. Fikkert, P. Van Der Vet, G. van der Veer, A. Nijholt, Gestures for
large display control, International Gesture Workshop, pp. 245-256
(2009).
[52] J. G. Jimenez, J. P. Schulze, Continuous-Motion Text Input in Vir-
tual Reality, Electronic Imaging 2018(3), pp. 450-1 (2018).
[53] R. Skarbez, F. P. Brooks Jr, M. C. Whitton, A survey of presence and
related concepts, ACM Computing Surveys 50(6), pp. 1-39 (2017).
[54] M. J. Parola, S. Johnson, R. West, Turning presence inside-out:
Metanarratives, Electronic Imaging 2016(4), pp. 1-9 (2016).
[55] M. J. Parola, R. West, R. Herrington, C. Adams, M. Beyer, B. Davis,
K. Hays et al., From Being There To Feeling Real: The Effect Of
Real World Expertise On Presence In Virtual Environments, Elec-
tronic Imaging 2018(3), pp. 434-1 (2018).
[56] R. West, M. J. Parola, A. R. Jaycen, C. P. Lueg, Embodied informa-
tion behavior, mixed reality and big data, The Engineering Reality of
Virtual Reality 2015, pp. 93920E (2015).
[57] D. Benyon, Presence in blended spaces, Interacting with Computers
24(4), pp. 219-226 (2012).
[58] M.S. Dias, J. d’Alpuim, P. Caetano, Galactica, a Digital Planetarium
for Immersive Virtual Reality Settings, International Journal of Cre-
ative Interfaces and Computer Graphics (IJCICG), 7(1), pp. 19–39
(2016).
[59] A. Woods, P. Helmholz, A. Bickers, J. Hollick, An underwater
odyssey: the mission to image the wrecks, From Great Depths -
The Wrecks of HMAS Sydney II and HSK Kormoran, pp. 255–277
(2016).
[60] R. Etemadpour, E. Monson, L. Linsen, The effect of stereoscopic
immersive environments on projection-based multi-dimensional data
visualization. 17th IEEE International Conference on Information Vi-
sualisation, pp. 389-397 (2013).
[61] T. Chandler, M. Cordeil, T. Czauderna, T. Dwyer, J. Glowacki, C. Goncu, M. Klapperstück, K. Klein, K. Marriott, F. Schreiber, E. Wilson, Immersive Analytics, Big Data Visual Analytics, BDVA 2015, pp. 73–80 (2015).
[62] B. Sommer, M. Baaden, M. Krone, A. Woods, From virtual reality
to immersive analytics in bioinformatics, Journal of integrative bioin-
formatics 15(2) (2018).
[63] M. Cordeil, A. Cunningham, B. Bach, C. Hurter, B. H. Thomas,
K. Marriott, T. Dwyer, Iatk: An immersive analytics toolkit, IEEE
Conference on Virtual Reality and 3D User Interfaces (VR) 2019, pp.
200–209 (2019).
[64] B. Lee, M. Cordeil, A. Prouzeau, T. Dwyer, FIESTA: A Free Roam-
ing Collaborative Immersive Analytics System, ACM International
Conference on Interactive Surfaces and Spaces 2019, pp. 335–338
(2019).
[65] K. Klein, B. Sommer, H. T. Nim, A. Flack, K. Safi, M. Nagy,
S. P. Feyer, Y. Zhang, K. Rehberg, A. Gluschkow, M. Quetting,
W. Fiedler, M. Wikelski, F. Schreiber, Fly with the flock: immersive
solutions for animal movement visualization and analytics, Journal of
the Royal Society Interface, 16(153), 20180794 (2019).
[66] B. Sommer, A. Diehl, M. Aichem, P. Meschenmoser, K. Rehberg, D. Weber, Y. Zhang, K. Klein, D. Keim, F. Schreiber, Tiled Stereoscopic 3D Display Wall – Concept, Applications and Evaluation, Electronic Imaging, 2019(3), pp. 641-1–641-15 (2019).
[67] Cesium - Changing how the world views 3D, https://www.
cesium.com. (last accessed on 2020-08-06).
[68] Gather - Virtual spaces for conferences, https://gather.town/
conferences/. (last accessed on 2020-08-08).
[69] Gravity Sketch - New Feature 4 - SubD Snapping, https://www.
youtube.com/watch?v=EznyDjGY36c. (last accessed on 2020-08-
09).
[70] Gravity Sketch - Think in 3D. Create in 3D, https://www.
gravitysketch.com. (last accessed on 2020-06-25).
[71] Tilt Brush by Google, https://www.tiltbrush.com. (last ac-
cessed on 2020-08-09).
[72] LITHO - A Controller for the real world, https://www.litho.cc.
(last accessed on 2020-06-25).
[73] LITHO Process Video - an Augmented Reality Ring Controller
(2017), https://www.youtube.com/watch?v=qNUJE_VNYdY.
(last accessed on 2020-08-08).
[74] Intuitive Surgical - Robotic-Assisted Surgery, https://www.
intuitive.com. (last accessed on 2020-06-25).
[75] D. Lockton, D. Ricketts, S. Aditya Chowdhury, C. H. Lee. Explor-
ing qualitative displays and interfaces, CHI Conference Extended Ab-
stracts on Human Factors in Computing Systems 2017, pp. 1844–
1852 (2017).
[76] C. H. Lee, D. Lockton, J. Stevens, S. J. Wang, S. Ahn, Synaesthetic-
Translation Tool: Synaesthesia as an Interactive Material for Ideation,
Extended Abstracts of CHI Conference on Human Factors in Com-
puting Systems 2019, pp. 1–6 (2019).
[77] C. H. Lee, D. Lockton, J.E. Kim, Exploring Cognitive Playfulness
Through Zero Interactions, ACM SIGCHI Conference on Designing
Interactive Systems 2018 (DIS 2018), pp. 63–67 (2018).
[78] M. L. Swink, J. C. Sandvig, V. A. Mabert, Customizing concurrent
engineering processes: Five case studies, Journal of Product Innova-
tion Management 13(3), pp. 229-244 (1996).
[79] W. Shen, Q. Hao, W. Li, Computer Supported Collaborative Design:
Retrospective and Perspective, Computers in Industry 59(9), pp. 855-
862 (2008).
[80] F. Belkadi, E. Bonjour, M. Camargo, N. Troussier, B. Eynard, A
situation model to support awareness in collaborative design, Interna-
tional Journal of Human-Computer Studies 71(1) pp. 110-129 (2013).