Towards Serious Games and Applications in
Smart Substitutional Reality
Benjamin Eckstein
Human-Computer Interaction
University of Wuerzburg
Wuerzburg, Germany
benjamin.eckstein@uni-wuerzburg.de
Eva Krapp
Human-Computer Interaction
University of Wuerzburg
Wuerzburg, Germany
Birgit Lugrin
Human-Computer Interaction
University of Wuerzburg
Wuerzburg, Germany
birgit.lugrin@uni-wuerzburg.de
Abstract—Substitutional Reality (SR), the integration of the
physical environment into Virtual Reality (VR), is a novel ap-
proach to facilitate and intensify the use of home-based VR
systems. We propose to extend the passive haptics of SR with
the interactive functionality of a smart home environment. This
concept of smart SR is meant as a foundation for serious games
and applications. In this paper, we describe the concept behind
smart SR as well as the prototype in our lab environment. We
created multiple virtual environments with a varying degree of
mismatch regarding the real world. We present a user study
where we examined the influence of these environments on the
perceived sense of presence and motivation of users. Our findings
showed that presence was high in all conditions while motivation
increased with the level of mismatch. This provides us with a
promising basis for further research.
Index Terms—Serious Applications, Pervasive Gaming, Substi-
tutional Reality, Smart Home
I. INTRODUCTION
Over the last few years, room-scale Virtual Reality (VR) applications have become widely available to private users. This can be mainly attributed to affordable consumer hardware like the HTC Vive1 as well as dedicated high-end games. However,
the trend towards VR is not limited to entertainment. Serious
games and applications in areas such as education, therapy, and
social interaction are also benefiting from this technology [1].
Here, knowledge, skills, and virtual content in general can be
consumed in a safe environment while being fully immersed.
While corresponding VR applications were formerly limited
to research labs and dedicated non-public spaces, we are now
entering a time where they can be deployed nearly anywhere.
This opens up new possibilities and difficulties which need to
be addressed.
One of the key differences between a home system and
a dedicated VR installation is the surrounding environment.
While a dedicated VR tracking space usually consists of a wide
empty room, the home system will in most cases be installed
in a fully decorated living room. This comes with a multitude
of physical obstacles that interfere with the feeling of presence or, worse, the safety needed for serious applications. To avoid this, the user is forced to remove these objects from the tracked area. This is a cumbersome solution and can be challenging in smaller apartments. A less invasive
1 https://www.vive.com
alternative could be to integrate the physical environment into
the virtual world [2] by substituting objects with similar virtual
counterparts. This is called Substitutional Reality (SR).
Following this idea, we took up the concept of SR and
applied it to the smart home environment of our lab space.
To this end, we implemented a smart SR environment with
multiple virtual versions. It includes user and object tracking as
well as environmental sensor data and simple actuators to turn
on electrical devices. The goal is to implement an environment
that integrates physical objects and smart devices into serious
games and applications to improve user experience. Former
studies have already shown that haptic feedback increases the
sense of presence in VR [3], [4]. In addition, substituted ob-
jects do not necessarily need to accurately match their physical
counterparts to maintain immersion [5]. Still, both presence
and motivation are important factors for the effectiveness of
serious games [1].
In this paper, we describe the implementation of our concept alongside a user study that investigates how small-to-medium changes in the appearance of the environment affect presence and motivation in SR. We therefore compared a realistically modeled environment against versions in which some or all objects are substituted with a sci-fi variant. This first
experiment was limited to object tracking and tactile feedback.
The focus is on identifying whether objects within the user’s
direct reach influence the experience more than others. The
intermediate goal is to integrate all capabilities of our smart room into the simulation and to evaluate them. In the long term, we
would like to compile a list of guidelines for the design of
smart SR environments for serious applications.
II. SUBSTITUTIONAL REALITY
Simeone et al. [2] use the term Substitutional Reality
to define applications in which the physical environment is
integrated into the VR experience to provide passive haptic
feedback. In this definition integrated objects have a virtual
counterpart which may or may not visually match the original.
They argue that content authors should be given a certain
amount of flexibility when designing a virtual environment.
Therefore, a certain level of mismatch (LoM) needs to be
expected. In their work they define the following categories:
Replica – A 1:1 copy of the physical proxy with matching
dimensions, texturing, and affordances.
Aesthetic – The virtual copy differs from the original in
its appearance, e.g. in color or texturing.
Addition/Subtraction – The shapes of both objects di-
verge, e.g. object parts might be visually added or re-
moved. However, objects still share common affordances.
Function – Objects differ to a degree where some of the
affordances differ noticeably, e.g. a drinking mug is used
as an oil lamp.
Category – There is no obvious commonality between
both objects. They merely happen to share the same
physical space, e.g. a tree substitutes a standing lamp.
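For illustration, this taxonomy could be encoded as data in an SR authoring tool; the following Unity/C# sketch is our own and its type and field names are not part of the original framework:

```csharp
// Illustrative sketch: the level-of-mismatch taxonomy as an enum, plus a
// record pairing a physical proxy with its chosen virtual substitute.
public enum LevelOfMismatch
{
    Replica,              // 1:1 copy of the physical proxy
    Aesthetic,            // appearance differs (e.g. color, texturing)
    AdditionSubtraction,  // shape differs, common affordances remain
    Function,             // some affordances differ noticeably
    Category              // only the occupied physical space is shared
}

[System.Serializable]
public class Substitution
{
    public string physicalObjectName;  // e.g. "standing lamp"
    public string virtualPrefabName;   // e.g. "tree"
    public LevelOfMismatch mismatch;   // intended LoM of this pairing
}
```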
According to the authors a variation in appearance may be
used to alter the perceived environment to match a variety
of contexts. This affects the perceived feeling of presence
and attributed realism. Thus, the LoM should be adjusted
to the target application. In addition, users attributed certain properties (e.g. weight, temperature) to objects that matched
their appearance. This could be used to intentionally invoke
mental connections in applications and games.
We employ the concept of SR to create a basis for the
application of serious games within a user’s home. In this
work, we limit the LoM of all objects to Addition/Subtraction
or below. Our overall goal is to create immersive scenarios
which induce a high level of motivation.
A. Related Work
In Milgram’s mixed reality continuum [6], which classifies
mixed reality systems on a range from completely real to fully
virtual, SR would be located around the area of augmented vir-
tuality. This means that the virtual environment is augmented
by elements of the real world. Here, the physical environment
is used, i.a., as a way to provide passive haptic feedback.
The effort of incorporating haptics into the simulation is not new to VR. For example, Burdea [7] used force feedback to
provide a feeling of physical contact. This is a simple but
efficient technique which is still employed, e.g. on today’s
VR controllers. However, this type of feedback is rather rough
and not able to provide the distinctive subtleties that would be
expected when touching a surface with bare hands. In contrast,
Hoffman [3] used real world objects to provide tactile feedback
in a mixed reality scenario. An experiment showed that haptic
feedback increased the sense of presence and realism in VR.
Ever since, both techniques have been applied in various
fields including serious games and applications. For instance,
Garcia-Palacios et al. [8] incorporated physical models of spiders or containers holding virtual spiders into their application to
support the treatment of spider phobia. It allowed the pa-
tients to touch and feel the spider, which is an integral part
of exposure therapy. The authors report that the VR setup
facilitated the treatment and that the results of their study
proved to be clinically significant. Sveistrup [4] reviewed
multiple techniques that incorporate haptic feedback into physical
therapy. This included the use of typical gym equipment (e.g.,
a bicycle) as well as hand-tailored solutions (e.g., reactive
footpads to simulate walking). Here, the VR environment
benefits the patient in multiple ways. For instance, a repetitive task can be made more interesting and motivating by providing beautiful scenery or gamified content. Another
highly relevant topic is training through simulation, e.g. in the
medical domain [9].
Going further, Valente et al. introduced the term Pervasive
Virtuality [10] to describe VR scenarios in which modalities
other than vision and audition are addressed. Similar to SR, the
incorporation of the physical environment and objects plays
a major role. Aside from that, the definition also includes
other stimuli such as heat or smell. For this, the application
needs to be aware of devices available to generate these
stimuli. Location-independent solutions for this have already
been proposed, e.g. by Yamada et al. [11]. They designed a
wearable olfactory display to provide VR users with odors
of outdoor environments. Ranasinghe et al. [12], on the other
hand, approached the challenge of simulating wind and heat
in virtual environments. Their prototype device comprises
a head mounted display (HMD) with attached fans and a
Peltier element around the user’s neck. For applications in SR or Pervasive Virtuality, smart home devices such as fans and heat lamps [13] could provide a simple solution for
basic scenarios.
The idea of connecting reality and virtuality via sensors
and actuators has been explored by Lifton and Paradiso [14].
In contrast to SR however, the connected worlds do not
require any spatial equivalence. Instead, sensor input may be
represented either explicitly or implicitly in a variety of ways.
Stahl et al. [15], on the other hand, mirror the state of smart devices in a virtual copy of the real environment. Here, a
remote user is able to observe and manipulate the environment
in an intuitive way, e.g. in an assisted living scenario. Both
techniques could prove valuable in a smart SR environment.
B. Smart Substitutional Reality
As shown above, the integration of smart devices into SR
promises to be beneficial in a variety of scenarios. We propose
smart SR as a basis for serious games and applications in smart
home environments. Here, sensor data can be used to provide
context and modify the simulation to increase immersion. The
data can further be presented to the user in different ways.
An explicit visualization could be achieved through overlays
and highlighting, e.g. similar to augmented reality. Implicit
visualizations on the other hand could include variations in the
observed environment, e.g. by changing virtual illumination
or the shape of objects. Conversely, controllable smart devices
could be used to enhance immersion in games and simulations.
Fans and heat lamps could be used to simulate the wind
and sun in virtual outdoor environments. The climate control
could predict stressful or exhausting situations and ensure the
supply of oxygen in advance. Finally, the user could be given
control over the devices from inside the virtual environment
as part of the application or for reasons of comfort. For further
illustration, we envision the following three exemplary domains of
application for smart SR:
Fig. 1. The smart lab environment (top), its realistic virtual replica (center),
and an alternative sci-fi version (bottom).
Physical therapy and sports – The physical environment
of the living room is transformed to a virtual gym. Depending
on its stability the furniture will be substituted by simple
training equipment, e.g. a table to lean against or a TV rug as a
training mattress. The simulation can be modified to resemble
an outside environment, potentially increasing motivation. The
climate control will then open the windows or activate the air
conditioning to support the trainee while creating the illusion
of a cool morning breeze. A fitness bracelet will provide
physiological data which can be visualized through overlays.
The simulation itself will adjust to this data. In addition,
a trainer could remotely join the session to observe and
guide the trainee while not being obstructed by the physical
environment.
Visual programming of smart devices – The user will
be given the tools to visually create relationships and rules
between the smart devices in the environment. To do so, she
can point at objects or even touch them to create nodes which
will then be visibly connected, e.g. via lines or arrows. Data
can be shared between objects by dragging its visualization
to the corresponding nodes, resulting in a constant animation to
represent data flow. The virtual world further allows the user
to look inside devices or through walls to gain additional
insight. The created dependencies can be tested either via live
demonstration or with the help of simulated input and results.
The user can also simulate devices that are not yet installed,
e.g. to test compatibility or to include the logic before the
device is delivered.
Social interaction and gaming – Multiple users will share
the same space while being immersed in SR. They will be
able to see each other while interacting with the environment,
e.g. as part of an escape game. Sensors and smart devices will
then be included in riddles, either to provide hints or as part of
the solution. Alternatively, only one person is immersed in the
game while the other players represent invisible "ghosts" that
communicate by manipulating the environment. For instance,
they could activate smart devices and move tracked objects
around as part of a haunted house game. Other people could
remotely join the session to observe the game or participate
in a variety of ways, e.g. to help the player as they are not
obstructed and affected by the physical environment.
We take a first step towards this vision by implementing a
prototype smart SR environment with multiple virtual repre-
sentations.
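As a minimal sketch of this explicit/implicit distinction, a single sensor reading could either be surfaced as an overlay label or drive a property of a scene object. The Unity/C# component below is our own illustration under these assumptions, not code from the prototype:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Illustrative sketch: presenting one normalized sensor value either
// explicitly (overlay text, similar to AR) or implicitly (scaling an object).
public class SensorVisualization : MonoBehaviour
{
    public Text overlayLabel;        // explicit channel: UI text in the HMD view
    public Transform implicitTarget; // implicit channel: e.g. a virtual potted plant
    public bool useImplicit;         // which presentation mode is active

    // Assumed to be called by whatever script polls the smart home API.
    public void OnSensorValue(float normalizedValue) // expected range [0, 1]
    {
        if (useImplicit)
        {
            // Implicit: let the object shrink or grow with the sensor value.
            float s = Mathf.Lerp(0.6f, 1.0f, normalizedValue);
            implicitTarget.localScale = new Vector3(s, s, s);
        }
        else
        {
            // Explicit: show the value as a readable overlay.
            overlayLabel.text = "Sensor: " + Mathf.RoundToInt(normalizedValue * 100f) + "%";
        }
    }
}
```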
III. IMPLEMENTATION
We implemented our version of SR in the smart home
environment within our lab (cf. Fig. 1 (top)). This room is
equipped with several sensors and actuators. The provided data
includes the room climate (i.a. temperature, brightness, and air
quality), tracking information (i.a. user and object tracking),
multiple microphones, as well as a fixed HTC Vive setup. The
furniture is based on a simple living or break room. We are
able to remote control some of the electrical devices (i.a. the
fan) through typical smart home hardware, e.g. smart plugs
and lights.
We created a hand-modeled 3D replica of the room in
Unity2. We chose Unity mainly for its accessibility, extensive
scripting support, and the availability of plugins. At the same
time it provides state-of-the-art graphics and high performance
which are key requirements for VR applications. Most of the
objects were modeled in Blender3 and are scaled and textured
appropriately (cf. Fig. 1 (center)). Some elements consist of
multiple parts to allow separate movement, e.g. the feet and
seat of the office chair can rotate independently. Multiple light
sources and a panorama canvas outside the window were added
to create a more realistic setting.
In our setup there are multiple ways of tracking which we
use to synchronize the virtual scene and the real environ-
ment. The window is equipped with an angle meter while
other objects hold Vive trackers. As these trackers are rather
bulky, we positioned them in places where they would not be
touched by users but remained visible to the lighthouses. User
tracking is realized with the help of the Vive headset or a
Kinect sensor [16]. To achieve a high degree of congruence
2 https://unity3d.com
3 http://www.blender.org
Fig. 2. The four conditions (real (a), sci-fi (b), mixed-objects (c), and mixed-
setting (d)) are created by choosing from both representations of each object.
between real and virtual objects, we first calibrated the Vive
tracking space. This was done with the help of the Kabsch
algorithm [17] which allowed us to align multiple points inside
the room with their virtual counterparts after marking them
with Vive controllers. This also allows us to quickly realign the tracking space if the equipment is moved or calibration is lost for other reasons. The remaining objects were positioned the same way in relation to the already calibrated room.
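For reference, the underlying alignment is the standard Kabsch formulation (restated here from the literature rather than taken from the paper): given n marked physical points p_i and their virtual counterparts q_i, the calibration seeks a rigid transform (R, t) with q_i ≈ R p_i + t:

```latex
\bar{p} = \tfrac{1}{n}\sum_{i} p_i, \quad
\bar{q} = \tfrac{1}{n}\sum_{i} q_i, \quad
H = \sum_{i=1}^{n} (p_i - \bar{p})(q_i - \bar{q})^{\mathsf{T}} = U\Sigma V^{\mathsf{T}} \ \text{(SVD)},
\qquad
R = V\,\mathrm{diag}\!\left(1,\, 1,\, \det(VU^{\mathsf{T}})\right)U^{\mathsf{T}}, \quad
t = \bar{q} - R\,\bar{p}.
```

The determinant term guards against an improper rotation (reflection); applying (R, t) then maps coordinates measured in the physical tracking space into the virtual scene.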
The sci-fi version was integrated by initially duplicating the
complete room. We then substituted each object by either
providing a fitting equivalent (e.g. a pilot chair to substitute
the desk chair) or by altering the appearance of the original
(cf. Fig. 1 (bottom)). The intended LoM lay between Aesthetic
and Addition/Subtraction in accordance with the definition in
Section II. The goal was to maintain the primary affordances of
the objects. During the process the scale of most sci-fi objects
had to be adjusted to fit the boundaries of their counterparts. For this, visual cues can be given by blending in both versions at
the same time. The two versions of each object were bound
to a shared parent object in the scene graph. This way both
objects will be equally affected when their parent node is
transformed, e.g. if a chair is moved around by the user. In
addition, a simple editor script allows choosing the desired representation from a drop-down menu to create mixed scenarios
as shown in Fig. 2.
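A minimal sketch of this shared-parent pattern might look as follows; the component is our own illustration and not the actual editor script used in the prototype:

```csharp
using UnityEngine;

// Illustrative sketch: each substituted object has one parent node carrying
// this component and two child representations. Moving the parent (e.g. via
// its Vive tracker) moves both children; only one child is visible at a time.
public class SubstitutedObject : MonoBehaviour
{
    public enum Representation { Real, SciFi }

    public GameObject realVersion;   // child: hand-modeled replica
    public GameObject sciFiVersion;  // child: sci-fi substitute with matching bounds
    public Representation shown = Representation.Real;

    void Start()
    {
        Apply(shown);
    }

    public void Apply(Representation r)
    {
        shown = r;
        realVersion.SetActive(r == Representation.Real);
        sciFiVersion.SetActive(r == Representation.SciFi);
    }
}
```

Exposing the public enum field in the Unity Inspector already yields a drop-down per object; switching a whole condition then amounts to calling Apply on every SubstitutedObject in the scene.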
We connected the smart devices and sensors of our lab to
the virtual environment. To do so, we created Unity scripts that integrate the respective APIs and provide the data to
our scene. This data can be visualized inside the environment
as described in Section II-B. Fig. 3 shows an example of
Fig. 3. Explicit (bar graph) and implicit (plant size) visualization of sensor
input (water level) within the smart SR.
the water sensor inside the potted plant in our room. Here,
the water level can be shown either explicitly with the help
of numbers and a bar graph, or implicitly by adjusting the
size of the plant. Another example is the representation of
users. For instance, we mapped the output of the Kinect sensor
to a simplified skeleton shape. This allows users in VR to
determine the position and posture of other persons inside the
room. Smart devices, on the other hand, can be controlled from
inside the simulation. This can be done by the user, e.g. by
pointing at or touching a device with the controller. Collision
detection and ray casting will then determine the target of the
command. Alternatively, these devices can be controlled by the application itself when appropriate. This requires some amount of context knowledge, e.g. the user’s relative position
to certain objects. We obtain this information via meta scripts
named virtual sensors. These scripts use a combination of
collision shapes and other built-in tools of the game engine
to acquire the necessary information. All of these methods are
compatible with the different SR representations and can be
easily included in games and applications.
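As an illustration of such a virtual sensor, a proximity check can be built from a trigger collider. The sketch below is our own, assumes that the tracked user rig carries a collider tagged "Player", and is not the prototype’s actual script:

```csharp
using UnityEngine;

// Illustrative sketch of a "virtual sensor": a trigger volume that reports
// whether the tracked user is near a certain object (e.g. the fan), so the
// application can decide when to switch the corresponding smart device.
// Note: Unity only fires trigger events if at least one involved collider
// has a Rigidbody attached.
[RequireComponent(typeof(Collider))]
public class ProximityVirtualSensor : MonoBehaviour
{
    public string userTag = "Player";   // assumed tag on the tracked user rig
    public bool UserIsNear { get; private set; }

    void Reset()
    {
        // The attached collider must be a trigger so it does not block movement.
        GetComponent<Collider>().isTrigger = true;
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag(userTag)) UserIsNear = true;
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag(userTag)) UserIsNear = false;
    }
}
```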
IV. USER STUDY
We designed a user study to evaluate the basic layout of
our SR environment. We want to know how the small-to-medium changes in the sci-fi environment affect the user’s
sense of presence and motivation. We expect that an increase
of mismatch will lead to a reduction in the sense of presence
and that this effect will be stronger for objects that the
user interacts with, especially through touch. In contrast, we
expect that the sci-fi environment will be more interesting to
explore, thus increasing the user’s motivation in comparison
to the replica. This would reflect the findings in research of
gamification [18]. The following hypotheses were formulated:
H1: Spatial presence is increased for a lower LoM in the
overall environment.
H2: Motivation is increased for a higher LoM in the overall
environment.
Fig. 4. The participants needed to open the window to reach the sphere
outside.
H3: Objects which are part of direct interaction have a higher
influence on spatial presence than items in the environment.
H4: Objects which are part of direct interaction have a higher
influence on motivation than items in the environment.
A. Study Design
We chose a one-factorial within-subjects design to achieve
greater power and to reduce error variance caused by inter-
individual differences. The design consisted of four treatment
conditions in the form of different SR environments with vary-
ing levels of mismatch as shown in Fig. 2. The real and sci-fi
conditions represent the minimum (min) and maximum (max)
LoM in our experiment. The two mixed conditions are treated
as a medium (med) LoM and differ in the choice of objects that
were substituted. The mixed-objects condition substituted only
objects that we expected to be in the users’ direct interaction
space while the mixed-setting condition represents the direct
opposite.
In each of these environments, the participants have to fulfill
a simple search task. To cover all objects of interest, e.g.
movable objects, we positioned nine semi-transparent spheres
throughout the room which they need to collect. This is done
by touching them with a hand-held Vive controller, prompting
them to explore and interact with the room. As shown in
Fig. 4 some of the targets are placed in hard-to-reach locations
(e.g. outside the window or behind the chair) to ensure
physical interaction. The targets are shown in sequence in a
different order for each condition. Haptic feedback is provided
in the form of vibration to confirm a successful collection.
Each participant only holds one controller, leaving the other
hand free to touch and manipulate the room as needed. The
sequence of environments is randomized to combat possible
order effects.
For each of the environments, spatial presence and intrinsic
motivation were measured with the help of two self-report
questionnaires. In addition, we collected demographic data and
free-form feedback.
B. Questionnaires
To measure our two dependent variables – spatial presence
and intrinsic motivation – we used pre-defined and tested
scales that we translated to German.
The Spatial Presence Experience Scale [19] (SPES) aims
at assessing the user’s sense of "being there" in the perceived
environment. It consists of eight items measuring the construct
in two dimensions – the user’s self-location (e.g. ’I felt as
though I was physically present in the environment of the
presentation’) and perceived possible actions (e.g. ’I felt like
I could move around among the objects in the presentation’).
Items are rated on a 5-point scale from 1 (I do not agree at all) to 5 (I fully agree).
The Intrinsic Motivation Questionnaire [20] (IMI) comprises seven subscales that relate to the concept of
intrinsic motivation. These subscales can be used and scored
independently to assess only the facets relevant to the scenario
at hand. For our study we chose three subscales respectively
measuring Interest and Enjoyment, Effort and Importance, and Pressure and Tension. The Interest and Enjoyment subscale is
considered to measure intrinsic motivation itself and consists
of seven items. The Effort and Importance concept, assessed
through five items, is theorized to positively predict intrinsic motivation, whereas Pressure and Tension, consisting of five
items as well, is considered a negative predictor. All subscales
are rated on a 7-point Likert scale from 1 (not true at all) to
7 (very true).
C. Procedure
Ahead of starting the experiment, the participants filled in
a questionnaire on their health to ensure they were fit for
participating in the experiment. They were then introduced
to the HTC Vive setup and the smart room environment. The
examiner explained the task and briefly demonstrated how the
window could be opened and the chair could be moved. Great
care was taken to ensure that the participants were comfortable
wearing the HMD and felt safe moving around the room. It
was explicitly stated that participants should take their time and
feel free to explore and interact with objects in the room.
The participants began each condition seated on the sofa
facing towards the middle of the room. The examiner assisted
with putting on and adjusting the HMD and handed the
participants the hand-held controller. After the examiner had
moved out of the tracking space, the participants were given
the cue to start their task. They then explored the environment
while collecting the spheres placed around the room. After
the last sphere had been collected, the examiner instructed the
participants to take a seat on the sofa again and assisted with
removing the HMD.
After each condition the participants moved to a computer
to fill in the SPES and IMI questionnaires. Once they had
finished the questionnaires, they moved back to the sofa and
began with the next condition. This procedure was repeated
for each of the four conditions.
Following the last set of SPES and IMI questionnaires, the
participants provided feedback on the environments and filled
in another questionnaire assessing their demographic data and
PC and video game use (hours/week) as well as their VR
experience (number of times used).
D. Participants
Thirty-six participants aged 19 to 29 took part in our study
(M = 21.47, SD = 1.92), 75.0% of which were female. The
sample consisted entirely of students studying either Media
Communications (77.8%) or Human Computer Interaction. We
asked them about their experience with VR and had them rate
their use of computers and video games on a seven-point scale
(not at all / very frequently). The majority of our sample had
experienced VR before, but the number of experiences varied
greatly (M = 4.11, SD = 5.27) between participants. While everyday computer use was frequent (M = 5.65, SD = 1.76), video game use was rather infrequent (M = 2.00, SD = 1.41).
E. Results
1) Reliability analysis: To ascertain the reliability of our
scales, Cronbach’s α was calculated for both the SPES and
the IMI subscales across all four conditions. While the SPES
attained high reliabilities (α > 0.80) in all conditions, the
reliability for the IMI showed large variations between the
three subscales and also within the subscales for the different
conditions. The Interest and Enjoyment subscale had high reliability across conditions (α > 0.90). For the Effort and Importance subscale, reliability was critically low for the real condition (α = 0.29) and low for the mixed and sci-fi conditions (α between 0.46 and 0.51). The Pressure and Tension subscale also had low reliability for the real condition (α = 0.50), but high reliability for the sci-fi condition (α = 0.79); the reliabilities for the two mixed conditions were intermediate (α between 0.62 and 0.69).
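For reference, the reliability coefficient reported here is the standard Cronbach’s α: for a scale with k items Y_1, ..., Y_k and total score X = Σ Y_i,

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
```

where σ²_{Y_i} is the variance of item i and σ²_X the variance of the total score. Values around 0.7 or higher are commonly read as acceptable, which is why the low values for some IMI subscales warrant caution when interpreting those results.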
2) Hypothesis tests: To determine whether the mismatch
between the actual room and the virtual rendering has an effect
on spatial presence and intrinsic motivation, we conducted
repeated measures ANOVAs for each of the (sub-)scales. De-
scriptive statistics and the F- and p-values for the ANOVAs
are summarized in Table I. Furthermore, we calculated planned
contrasts and t-tests for the conditions mentioned in our
hypotheses. For this, the two conditions with a medium LoM were combined into one group and compared to the two extreme
conditions.
H1 (Influence of LoM on spatial presence): Regarding
spatial presence, a repeated measures ANOVA did not reveal significant differences comparing the real, mixed-objects, mixed-setting, and sci-fi conditions, F(3,105) = 1.72, p =
0.167.
The planned contrasts showed a significant difference in spatial presence neither between the medium LoM group and the real condition (p = 0.05), nor between the medium LoM group and the sci-fi condition (p = 0.16).
H2 (Influence of LoM on intrinsic motivation): Re-
garding intrinsic motivation, repeated measures ANOVAs were conducted for each subscale. Significant differences between conditions were found only for the Interest and Enjoyment subscale, F(3,105) = 3.67, p = 0.015, ηp² = 0.10. No significant difference was found for the Effort and Importance (F(3,105) = 0.15, p = 0.93) and Pressure and Tension (Huynh-Feldt corrected for violation of sphericity, F(2.57,89.81) = 0.96, p = 0.40) subscales.
Planned contrasts were also calculated for the three subscales. Interest and Enjoyment did not differ significantly between the medium LoM group and the real condition (p = 0.11), or the sci-fi condition (p = 0.16). The same holds true for Effort and Importance, which was not significantly different between the medium LoM group and the real condition (p = 0.73), or the sci-fi condition (p = 0.62). Pressure and Tension also did not show a significant difference between the medium LoM group and the real condition (p = 0.58), or sci-fi condition (p = 0.12).
However, the t-test comparing the two extreme conditions
revealed that Interest and Enjoyment were significantly higher
for the sci-fi condition compared to the real condition, t(35) =
2.24, p = 0.03. No significant difference was found for the
Effort and Importance (t(35) = 0.239, p = 0.81) and Pressure
and Tension (t(35) = 1.81, p = 0.08) subscales.
H3 (Influence of direct interaction on spatial presence):
To determine whether objects which are part of direct inter-
action have a higher influence on spatial presence than items
in the environment, we conducted a t-test comparing the two
medium LoM conditions.
There was no significant difference in spatial presence be-
tween the mixed-objects and mixed-setting conditions, t(35) =
0.43, p = 0.67.
H4 (Influence of direct interaction on intrinsic moti-
vation): Analogous to the decisions above, we conducted
t-tests comparing the two medium LoM conditions for the
three subscales of the IMI. Interest and Enjoyment were
significantly higher for the mixed-setting condition compared
to the mixed-objects condition, t(35) = 2.148, p = 0.04. For
Effort and Importance there was no significant difference
between the mixed-objects condition and the mixed-setting
condition, t(35) = 0.32, p = 0.75. The Pressure and Tension
subscale did not differ significantly between the mixed-objects
and mixed-setting conditions either, t(35) = 0.39, p = 0.696.
3) Preferences: We asked the participants to name the environment they liked best. The majority of participants chose the sci-fi condition (52.8%), followed by the mixed-setting (22.2%) and real conditions (19.4%). The mixed-objects condition was the least popular (5.6%).
TABLE I
DESCRIPTIVE STATISTICS OF SPATIAL PRESENCE AND INTRINSIC MOTIVATION (MEASURED ON THE SUBSCALES
INTEREST AND ENJOYMENT (I & E), EFFORT AND IMPORTANCE (E & I), AND PRESSURE AND TENSION (P & T)).

                    min LoM        med LoM                                         max LoM
Scale (Range)       real           mixed-objects   mixed-setting   combined mixed  sci-fi
                    M (SD)         M (SD)          M (SD)          M (SD)          M (SD)
Presence (1-5)      4.22 (0.57)    4.02 (0.69)     4.07 (0.68)     4.05 (0.61)     4.01 (0.76)
I & E (1-7)         4.87 (1.25)    4.89 (1.10)     5.21 (1.23)     5.05 (1.08)     5.23 (1.28)
E & I (1-7)         4.18 (0.84)    4.13 (1.00)     4.17 (1.04)     4.15 (0.95)     4.21 (0.95)
P & T (1-7)         1.94 (0.68)    2.03 (0.97)     1.96 (0.87)     1.99 (0.74)     2.16 (0.84)
F. Discussion
No significant differences were found concerning spatial
presence, neither regarding the LoM nor the types of objects
that were manipulated. Interestingly, scores on spatial presence
were rather high across all conditions (>4.00), indicating that
the mismatches did not negatively affect the spatial presence.
As presence is considered very important in SR applications, this is a promising result regarding a high level of freedom in designing such applications.
The results concerning intrinsic motivation are more hetero-
geneous. Regarding the LoM, we did find significant effects
for the Interest and Enjoyment subscale, but no effects on the
Effort and Importance and Pressure and Tension subscales.
Since the Interest and Enjoyment subscale measures intrinsic motivation directly and thus plays an important role for SR applications, we consider it promising that motivation increased with increasing mismatch. While there was no
significant difference between the grouped medium conditions
and the extreme conditions, we did find significant differences
both between the two medium LoM conditions and between
the two extreme conditions. The fact that mean Interest and
Enjoyment scores were highest for the mixed-setting and sci-fi
conditions suggests that the deciding factor to increase Interest
and Enjoyment was the LoM in the environment as opposed
to the objects that are part of direct interaction. This is also
reflected in the fact that the sci-fi and mixed-setting conditions
were the two most preferred choices.
Especially for Pressure and Tension, scores were rather
low across all conditions, indicating that participants did not experience negative feelings during task fulfilment. However, reliability for this subscale was low,
which may have influenced the results.
V. CONCLUSION AND FUTURE WORK
In this work we introduced smart SR as a promising new
foundation for the creation of serious games and applica-
tions. To this end, we described our concept of smart SR and
explained how it extends the existing definition of SR by
integrating smart home technology. To illustrate this concept,
we provided three exemplary use cases from the fields of
physical therapy, visual programming, and social interaction.
To demonstrate feasibility, we implemented a prototype which is
based on the smart home environment in our lab space. This
prototype incorporates two virtual representations which allow
the user to experience SR with different levels of mismatch, as
well as mixed representations. In addition, it integrates the
input of multiple tracked objects, smart devices, and sensors
within the environment. Provided data can be visualized either explicitly or implicitly through different methods such as
overlays or the visual modification of objects.
We conducted a user study to evaluate the influence of the
different SR environments on the experienced presence and
motivation. We therefore compared four different conditions comprising combinations of realistic and sci-fi versions of
existing objects. The focus was on discovering whether objects
in a user’s direct reach influence his or her experience more
than those in the environment. Results showed that presence
did not differ noticeably between the four conditions while
motivation was higher in versions with a greater LoM in the
environment. This provides us with a promising foundation
for additional research.
In future work we will conduct further studies to explore the
strengths and weaknesses of our approach. These studies will
focus on the effect of integrated smart devices on immersion
and evaluate the effectiveness of our concept when applied
in serious games. To this end, actuators, which can already be
controlled from within the system, will be integrated into the
simulation. In view of our results concerning presence and
motivation, we are eager to evaluate the effects of a greater
variance in the LoM. We expect that a larger divergence
between real and virtual objects will lead to more pronounced effects. In addition, we would like to test our system in
multi-user scenarios. The long-term goal is to compile a list
of guidelines for the implementation of smart SR systems.
For this purpose, we plan to implement multiple different scenarios
in the presented environments.
ACKNOWLEDGMENTS
We would like to thank our students Leonie Albrecht,
Johanna Bode, Anne Elsässer, Jenny Harms, and Rebecca Leip
for their ideas when creating the virtual environments. We are
also grateful to Dr. Matthew McGinity and Florian Kern who
implemented the first version of the calibration system.
REFERENCES
[1] F. Laamarti, M. Eid, and A. E. Saddik, "An overview of serious games," International Journal of Computer Games Technology, vol. 2014, p. 11, 2014.
[2] A. L. Simeone, "Substitutional reality: Towards a research agenda," in Everyday Virtual Reality (WEVR), 2015 IEEE 1st Workshop on. IEEE, 2015, pp. 19–22.
[3] H. G. Hoffman, "Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments," in Virtual Reality Annual International Symposium, 1998. Proceedings., IEEE 1998. IEEE, 1998, pp. 59–63.
[4] H. Sveistrup, "Motor rehabilitation using virtual reality," Journal of NeuroEngineering and Rehabilitation, vol. 1, no. 1, p. 10, 2004.
[5] A. L. Simeone, E. Velloso, and H. Gellersen, "Substitutional reality: Using the physical environment to design virtual reality experiences," in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2015, pp. 3307–3316.
[6] P. Milgram and F. Kishino, "A taxonomy of mixed reality visual displays," IEICE Transactions on Information and Systems, vol. 77, no. 12, pp. 1321–1329, 1994.
[7] G. C. Burdea, "Force and touch feedback for virtual reality," 1996.
[8] A. Garcia-Palacios, H. Hoffman, A. Carlin, T. Furness III, and C. Botella, "Virtual reality in the treatment of spider phobia: a controlled study," Behaviour Research and Therapy, vol. 40, no. 9, pp. 983–993, 2002.
[9] D. Escobar-Castillejos, J. Noguez, L. Neri, A. Magana, and B. Benes, "A review of simulators with haptic devices for medical training," Journal of Medical Systems, vol. 40, no. 4, p. 104, 2016.
[10] L. Valente, B. Feijó, A. Ribeiro, and E. Clua, "The concept of pervasive virtuality and its application in digital entertainment systems," in International Conference on Entertainment Computing. Springer, 2016, pp. 187–198.
[11] T. Yamada, S. Yokoyama, T. Tanikawa, K. Hirota, and M. Hirose, "Wearable olfactory display: Using odor in outdoor environment," in Virtual Reality Conference, 2006. IEEE, 2006, pp. 199–206.
[12] N. Ranasinghe, P. Jain, D. Tolley, S. Karwita, S. Yilei, and E. Y.-L. Do, "Ambiotherm: Simulating ambient temperatures and wind conditions in VR environments," in Proceedings of the 29th Annual Symposium on User Interface Software and Technology. ACM, 2016, pp. 85–86.
[13] J. Dionisio, "Virtual hell: A trip through the flames," IEEE Computer Graphics and Applications, vol. 17, no. 3, pp. 11–14, 1997.
[14] J. Lifton and J. A. Paradiso, "Dual reality: Merging the real and virtual," in International Conference on Facets of Virtual Environments. Springer, 2009, pp. 12–28.
[15] C. Stahl, J. Frey, J. Alexandersson, and B. Brandherm, "Synchronized realities," Journal of Ambient Intelligence and Smart Environments, vol. 3, no. 1, pp. 13–25, 2011.
[16] Z. Zhang, "Microsoft Kinect sensor and its effect," IEEE MultiMedia, vol. 19, no. 2, pp. 4–10, 2012.
[17] W. Kabsch, "A solution for the best rotation to relate two sets of vectors," Acta Crystallographica Section A: Crystal Physics, Diffraction, Theoretical and General Crystallography, vol. 32, no. 5, pp. 922–923, 1976.
[18] K. Seaborn and D. I. Fels, "Gamification in theory and action: A survey," International Journal of Human-Computer Studies, vol. 74, pp. 14–31, 2015.
[19] T. Hartmann, W. Wirth, H. Schramm, C. Klimmt, P. Vorderer, A. Gysbers, S. Böcking, N. Ravaja, J. Laarni, T. Saari et al., "The spatial presence experience scale (SPES)," Journal of Media Psychology, 2015.
[20] R. M. Ryan and E. L. Deci, "Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being," American Psychologist, vol. 55, no. 1, p. 68, 2000.
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Preprint - DOI: 10.1109/VS-Games.2018.8493444
Recently, there are various types of display systems that can present aural, visual and haptic information related to the user?s position. It is also important to present olfactory information related to the user?s position, and we focus on the spatiality of odor, which is one of its characteristics. In this research, we constructed and evaluated a wearable olfactory display to present the spatiality of odor in an outdoor environment. The prototype wearable olfactory display system treats odor in the gaseous state, and the odor air is conveyed to the user?s nose through tubes. Using this system, we also present the spatiality of odor by controlling the odor strength according to the positions of the user and the odor source. With this prototype system, the user can specify the position of the odor source in an outdoor environment. To improve this prototype system, we constructed another wearable olfactory display. Because odor is treated in the gaseous state, the first prototype system has some problems such as the large size of the device and unintentional leakage of the odor into the environment. To solve these issues, we developed and evaluated an advanced wearable olfactory display that uses an inkjet head device to treat odor in the liquid state.