The Actuality-Time Continuum: Visualizing Interactions and Transitions
Taking Place in Cross-Reality Systems
Jonas Auda* (University of Duisburg-Essen), Sarah Faltaous† (University of Duisburg-Essen), Uwe Gruenefeld‡ (University of Duisburg-Essen), Sven Mayer§ (LMU Munich), Stefan Schneegass¶ (University of
Duisburg-Essen)
ABSTRACT
In the last decade, researchers have contributed an increasing number of
cross-reality systems and their evaluations. Going beyond individual
technologies such as Virtual or Augmented Reality, these systems
introduce novel approaches that help to solve relevant problems
such as the integration of bystanders or physical objects. However,
cross-reality systems are complex by nature, and describing the in-
teractions and transitions taking place is a challenging task. Thus, in
this paper, we propose the idea of the Actuality-Time Continuum that
aims to enable researchers and designers alike to visualize complex
cross-reality experiences. Moreover, we present four visualization
examples that illustrate the potential of our proposal and conclude
with an outlook on future perspectives.
Index Terms: Human-centered computing—Visualization—
Visualization design and evaluation methods
1 INTRODUCTION
While Virtual Reality (VR), Augmented Virtuality (AV), and Aug-
mented Reality (AR) are often researched as independent technolo-
gies, the lines between them have become increasingly blurry. Nowa-
days, VR head-mounted displays (HMDs) can integrate physical
objects into the user’s experience, transforming these headsets into
AV devices on demand. In particular, researchers have demon-
strated prototypes that can include physical keyboards for natural
typing [22, 23, 39], furniture to avoid collisions [1], and bystanders
to allow for interaction [13]. Moreover, new hardware enters the
market that empowers users to transition between different degrees
of virtuality, for example, between video see-through AR and im-
mersive VR as seen on recent devices such as the Varjo XR-3.¹
Consequently, it turns out that terms such as VR, AV, or AR are
too inflexible to capture the varied and changing Mixed Reality (MR)
experiences that users can enjoy today. Currently, there is no well-
established term that references the degree of virtuality a user cur-
rently experiences, except for the extremes: reality and virtuality. In
this context, we argue for introducing the term actuality to better
describe the current “reality” of a user similar to previous sugges-
tions [9]. The English word actuality, derived from the Latin word
actualitas, translates to “in existence” or “currently happening.”
Hence, it could be used to describe the “current reality” of a user:
the state the world is in for this particular user [34]. In
this line of thought, it could be used to refer to a specific degree
of virtuality; thus, the word actuality would be suitable to describe
*e-mail: jonas.auda@uni-due.de
†e-mail: sarah.faltaous@uni-due.de
‡e-mail: uwe.gruenefeld@uni-due.de
§e-mail: sven.mayer@ifi.lmu.de
¶e-mail: stefan.schneegass@uni-due.de
¹ Varjo XR-3 Mixed Reality headset, https://varjo.com, last retrieved on September 26, 2023.
the actual experience of users – the current world that they perceive,
which coexists with the physical reality in which the user inevitably
remains present [43].
In the future, such a designated term could be useful due to an
increasing number of prototypes that enable users to experience
not only one particular manifestation like AR or VR but also allow
them to transition between these manifestations seamlessly over
time. Systems that allow for this kind of experience formed a new
research field recently: cross-reality systems. Simeone et al. de-
fined them as systems that “envision (i) a smooth transition between
systems using different degrees of virtuality or (ii) collaboration
between users using different systems with different degrees of virtu-
ality” [33]. By nature, cross-reality systems are rather complex as
they involve multiple actualities, and often there is more than one
user that experiences them. For example, researchers have proposed
cross-reality systems that allow users experiencing different actu-
alities to collaborate and play together [15, 19]. As a result of the
increasing complexity, it becomes hard for researchers to describe
these systems and to communicate the interactions and transitions
between actualities that take place.
A helpful concept to describe and understand cross-reality sys-
tems is the Reality-Virtuality Continuum introduced by Milgram and
Kishino [24]. However, while this continuum can clarify one partic-
ular experience for a user at a defined point in time, it remains chal-
lenging to depict transitions between different actualities over time
(e.g., a user transitioning from reality into a VR experience [36]).
Therefore, we added a time dimension to the Reality-Virtuality Con-
tinuum. This allows one to visualize how entities transition between
different actualities along the continuum. We argue that visualizing
transitions along the continuum over time offers several benefits,
including structuring and communicating novel cross-reality pro-
totypes and comparing cross-reality experiences. We named the
resulting continuum the “Actuality-Time Continuum.”
Our goal is to synthesize a way for the community to describe
cross-reality systems and experiences on an abstract level. There-
fore, we first argue for the term “actualities” to depict one specific
experience along Milgram and Kishino’s continuum. Next, we describe
ways to advance the continuum to visualize transitions over time.
Fundamentally, we suggest adding a time dimension to the original
continuum. This can help one to understand how users perceive
actuality changes over time. However, we do not limit ourselves
to this; we propose including multiple users on the continuum to
describe mutual influences among them when using a cross-reality
system. We demonstrate our idea by presenting four visualization ex-
amples that are inspired by previously published research prototypes.
Finally, we conclude with an outlook on future perspectives.
Contribution Statement. As cross-reality research is becoming
increasingly complex, we introduce a simple visualization concept
that empowers one to depict the interplay of cross-reality system
users, their individual experiences, involved objects, and possible
transitions along the Reality-Virtuality Continuum. Through the
resulting Actuality-Time Continuum, we can visualize interactions
between present entities and transitions that occur over time.
2 RELATED WORK
In this section, we first present different cross-reality systems and
highlight relevant aspects of these systems that contribute to their
complexity. Then, we revisit the Reality-Virtuality Continuum introduced by Milgram and Kishino [24] and discuss extensions of the continuum. Finally, we examine the term actualities and how we think it could support structuring the domain of cross-reality systems and interactions.
2.1 Cross-Reality Systems
According to Simeone et al. [33], cross-reality systems can be classi-
fied into two types: 1) systems that implement a transition between
different actualities, and 2) systems that support multiple users expe-
riencing different actualities.
Single Users Transitioning Between Different Actualities: A tran-
sitional interface is a cross-reality system designed to enable users
to transition on the Reality-Virtuality Continuum [12]. Thereby,
users of transitional interfaces can experience changing actualities
(e.g., switching from AR to VR). For example, the MagicBook from
Billinghurst et al. supports reading it as a normal book but also
offers an AR mode (i.e., augmented with virtual objects) or serves
as a companion for a VR experience [4, 5]. Oftentimes, transitional
interfaces involve more than two actualities, making them rather
complex systems. For example, the One Reality system by Roo and Hachet supports six different actualities [28].
Multiple Users Experiencing Different Actualities: Users experi-
ence different actualities for purposes such as collaboration [7, 26], bystander inclusion in virtual experiences [13–15, 23, 38], or the
opposite to have an isolated experience [30]. As different users
can perceive different actualities simultaneously, each user’s per-
spective can differ and therefore needs to be understood by, for
example, collaborators [10]. In this context, clear communication of
the different actualities and their interplay is needed to understand,
structure, and further develop corresponding scenarios and manage
their complexity.
Virtual experiences that use a VR-HMD often result in an iso-
lated experience as bystanders are not included in the experience
nor are they able to perceive the same actuality as the HMD user [2].
Therefore, many approaches investigated the inclusion of bystanders
that do not use a device to join a certain virtual experience. Several
approaches included real-time images of the bystanders into VR ex-
periences by using external cameras [23, 38] or using external touch
displays mounted on an VR-HMD to allow interaction between
reality and VR [14, 15]. To address a larger group of bystanders,
projection-based setups were employed [17, 18]. With the help of
projectors, bystanders can also be an interactive part of a virtual
experience through inside-out tracking [40]. In contrast to shared
experiences, isolated experiences have the goal of detaching users
from their current actuality. Ruvimova et al. introduced an approach
to isolate office workers via VR with the goal of reducing distrac-
tions in an open office space [30]. Similar to bystanders that are not
involved in the experience, two different actualities are created – one
for the person in VR and one for the persons that remain in reality.
Common to all these approaches is that each person has their own actuality, formed through the technology in use. Bystanders who are visually integrated into the virtual experience are still in reality, which remains their actuality. Using a 2D display or projections to interact
with a Virtual Environment (VE) differs in terms of perception com-
pared to head-mounted VR. Involving several actualities makes it
hard to describe these systems and the created shared experience due
to the different perspectives. Here, we see the need for abstraction
to obtain a high-level view of these scenarios. In summary, cross-
reality systems can become complex due to multiple users, transitions,
and interactions. Therefore, we revisit the Reality-Virtuality Contin-
uum as it serves as the conceptual foundation of different classes
of such systems.
2.2 The Reality-Virtuality Continuum
At the time of writing, almost 30 years have passed since Mil-
gram and Kishino introduced the Reality-Virtuality Continuum in
1994 [24]. Since its introduction, the work has had a profound impact,
coining terms that are frequently used in the field. According to
Google Scholar, the work has over 8000 citations, which highlights
its relevance. Just three years ago, the paper had around 3000 fewer
citations [35], which demonstrates the rapid growth of interest in this
topic. The introduced concept of the Reality-Virtuality Continuum
that spans between Reality (on the left) and Virtuality (on the right)
allows the classification of different technology classes, such as AR
and AV.
On this continuum, Reality refers to the real world, in which every
entity in the scene is real and subject to the laws of physics. On the
other end, Virtuality refers to the VE, in which each entity exists only
virtually [24]. Each point on this continuum between Reality and
Virtuality refers to an experience that mixes different amounts of the
two. While this is useful to depict the properties of MR systems like
conveying information about the degree of reality and virtuality and,
thus, can be used to categorize systems into classes like AR and VR,
the continuum does not allow one to communicate changes within these
systems that 1) occur over time and 2) change the degree of virtuality.
Therefore, we propose to extend the Reality-Virtuality Continuum by
adding a time domain to the continuum. Through the time domain,
the continuum could enable one to describe time-dependent cross-
reality interactions and systems and accurately depict the user’s
experience. We call the resulting continuum the “Actuality-Time
Continuum.” In the following, we introduce the term actualities in
greater detail as it has the potential to describe experiences along
the Reality-Virtuality Continuum more precisely and therefore can
help to shape future communication of cross-reality systems and
interaction.
Increasing Number of Actualities: With cross-reality systems,
the ongoing trend towards systems supporting more than one man-
ifestation (e.g., AR or VR) continues. Further, these systems can
implement seamless transitions on the continuum, for example, to
allow users to transition from the real world into VR [20, 30, 37]
or to integrate parts of reality into their VR experience [8, 16, 23].
Hence, the existing term manifestation is too inflexible to reflect
such experiences and, more importantly, does not allow one to describe
changes in these experiences over time. Thus, we argue for using
the term “actuality” to depict the current experience of a user. The
term actuality goes back to the concept of “potentiality and actuality” introduced by Aristotle [32]. In short, Aristotle stated that a potentiality is a not-yet-realized possibility among all possibilities that can happen, whereas an actuality is the realization of a specific potentiality – the actual thing that became real. The English word actuality is
derived from the Latin word actualitas, which translates to “in exis-
tence” or “currently happening.” In other words, the state the world
is in for a user [34]. In this sense, we could use the term actuality to
describe the “current reality” of MR users – the things that currently
seem to be facts for them.
For example, we can consider two users – one using VR and
one standing nearby. The actuality for the VR user would be a
virtual, digital experience, while for the bystander, the actuality is
reality. Moreover, when a user transitions, for example, from reality
to VR, we can say that the actuality of that user changes over time.
Our definition is in line with the suggestion of Eissele et al., who propose using the word actuality to describe different virtual experiences [9].
3 THE ACTUALITY-TIME CONTINUUM
The Reality-Virtuality Continuum helps one to classify not only
the actuality of a single user but also multiple interacting users.
Consider, for example, a single user who is completely in VR. This user would be
somewhere on the right-hand side of the continuum. When two users
collaborate in AR and VR [7], we would add the AR user somewhere
on the left-hand side of the continuum. A bystander just watching
the AR and VR users remains in the real world. The bystander would
be shown on the far left of the continuum. However, the current
research and technology trend leads to investigating possibilities
of switching the actuality on the fly. For instance, when the world
around the user influences the experience, there is a short period
during which the user’s actuality can no longer be described as a
single position on the Reality-Virtuality Continuum. An example
of such a scenario would be a bystander interacting with a VR user,
causing the real world to fuse with the virtual world [38]. Another
example is collision prevention [3, 27, 41].
To empower researchers and designers to quantify their scenarios
fully, we set out to establish a new concept for visualizing how
people transition between actualities throughout an interaction. Thus,
in the following, we present an extended continuum in which we
argue that it is necessary to add a time dimension to quantify what
a user might experience throughout an interaction. We then use
this concept to implement a tool that allows others to generate their
scenarios’ visualizations easily. We envision that this will help to better communicate novel experiences, foster discussion of possible alternatives, and share new solutions with others.
3.1 Conceptual Design
In the following, we introduce three questions that guided our con-
cept, discuss their implications, and introduce our approach to tackle
accompanying research challenges.
How can one manage the complexity of scenarios involving mul-
tiple actualities? The key for researchers, designers, and developers
is to manage the complexity of their cross-reality scenarios to un-
derstand the impact on the user. Therefore, an abstraction that fits
various scenarios and their dynamic behavior is needed. This ab-
straction must take into account involved entities, objects, and envi-
ronments. In particular, the perspectives of users or bystanders might
differ enormously while experiencing different actualities [13]. The
perceived influences on a user can even come from more than one
actuality, inevitably leading to increased complexity. This makes it
difficult to comprehend individual experiences and their impacts on
the perceiving person (e.g., communication between VR and the real
world [6, 11, 14, 15, 31]). Further, depicting dynamic changes within
these scenarios is vital to managing complexity and understanding
the interplay between users, objects, and the environment.
How can one compare and articulate research or experiences
involving multiple actualities? Comparing novel experiences to
previously introduced research from the literature can be cumber-
some due to complexity or a difference in the underlying hypotheses
or research questions. Furthermore, relevant aspects can often be hid-
den inside the research prototypes. Transitions along the continuum
over time add yet another layer of complexity. To approach these is-
sues, we suggest visualizing experiences along the Reality-Virtuality
Continuum to gain insight into involved users’ experiences, where
they manifest on the continuum, and how transitions can occur (i.e.,
when and how transitions affect the user’s experience). This can
help researchers to better understand the influences on the user and
to articulate new ideas to others in order to obtain feedback on future
design decisions that incorporate some form of the interplay among
multiple actualities. Still, we believe that visualization should be
abstract enough to allow for comparing a wide array of scenarios.
How can the Reality-Virtuality Continuum be utilized to ana-
lyze scenarios involving multiple actualities? Currently, it is not
entirely clear where on the continuum specific research prototypes or systems are located. For example, two VR systems could be
classified close to the VR side of the continuum. It remains unclear
to what extent, for example, the enrichment of a VR experience
through a real-world object shifts it on the continuum towards AV.
Quantifying ranges on the continuum might help with comparing
and classifying future experiences, systems, or research prototypes,
making them more comparable and easier to understand. Knowing
how far a transition on the continuum goes might help in under-
standing its impact on transitioning users and their experiences and
perceptions.
3.2 Components of the Visualization
The concept’s general structure consists of three elements: the actu-
ality someone experiences (e.g., reality, AR, or VR), the time, and
the entities (e.g., users, objects, or environments). The actuality is represented on the x-axis and the time on the y-axis. As a result, we obtain the Actuality-Time Continuum, on which two or more entities stand in a specific relationship to
each other. This allows one to represent various interactions between
entities on the continuum over time. Now, we can visualize the in-
terplay of entities experiencing different actualities or transitions
between them.
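To make this structure concrete, one possible encoding is sketched below (in Python; the names Entity and ActualityTimeContinuum are our own illustration and not part of any published tool): actuality is a scalar between 0.0 (real environment) and 1.0 (virtual environment), time advances in abstract steps, and each entity carries a trajectory of (step, actuality) pairs.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Actuality as a scalar position on the Reality-Virtuality Continuum:
# 0.0 = real environment, 1.0 = fully virtual environment.
Actuality = float


@dataclass
class Entity:
    """An entity (subject or object) on the Actuality-Time Continuum."""
    name: str
    is_subject: bool  # subjects perceive their environment; objects do not
    # Trajectory of (time step, actuality) pairs; time is in abstract steps.
    trajectory: List[Tuple[int, Actuality]] = field(default_factory=list)

    def at(self, step: int, actuality: Actuality) -> "Entity":
        """Record the entity's position on the continuum at a time step."""
        assert 0.0 <= actuality <= 1.0, "actuality must lie on the continuum"
        self.trajectory.append((step, actuality))
        return self  # returning self allows chaining, e.g. e.at(0, 1.0).at(1, 0.8)


@dataclass
class ActualityTimeContinuum:
    """A scenario: several entities moving along the continuum over time."""
    entities: List[Entity] = field(default_factory=list)

    def add(self, entity: Entity) -> Entity:
        self.entities.append(entity)
        return entity
```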
Actualities on the Continuum: To describe the actuality that a user
experiences, we use the Reality-Virtuality Continuum. We placed
this continuum on the x-axis to depict the actuality of entities. The
actuality of users that are positioned furthest on the left is reality,
whereas the actuality of users furthest on the right is the purely
virtual world.
Time: Exploring previous literature, we realized that the use of the
Reality-Virtuality Continuum poses challenges when expressing the
mix of elements from Reality and Virtuality over time. Therefore,
we added a y-axis to our visualization that runs from top to bottom
representing time. Here, we took great inspiration from sequence
diagrams that are part of the Unified Modeling Language (UML) [29].
We did not specify a definitive time measurement unit for this axis
to avoid restrictions regarding specific scenarios. Hence, time is specified in abstract steps rather than hours, minutes, or seconds. This provides more flexibility and the ability to visualize various scenarios with the Actuality-Time Continuum, as an entity's actuality can change dynamically by moving along the continuum at different times.
Entities: We have identified two types of entities that can temporar-
ily influence the experience: Subjects and Objects. The difference
between both entities is that subjects have ways to perceive their
environment, while objects have no perception. Hence, subjects can
experience their environment and therefore an actuality exists that
describes their current experience. However, besides this difference,
subjects and objects also have attributes in common. Primarily, both
can either exist physically in the real environment, digitally in the
VE, or in both environments simultaneously. In many scenarios,
subjects are users and bystanders [23, 25, 38]. Bystanders can en-
gage with the user, but their presence alone can already impact the
perceived actuality. Objects can impact or enrich the interaction, or
may be important for the user’s safety (e.g., visualizing walls around
the user).
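As a usage illustration of the sketch above (the concrete values are hypothetical and chosen by us, not measured from any cited system), a subject and an object can be placed on the continuum like this, mirroring an obstacle-awareness scenario similar to the first example in Sect. 4:

```python
# Hypothetical scenario: a VR user (subject) and a physical obstacle
# (object). As the obstacle gets closer, its representation surfaces in
# the user's view and briefly pulls the user's actuality towards reality.
scenario = ActualityTimeContinuum()
user = scenario.add(Entity("VR user", is_subject=True))
obstacle = scenario.add(Entity("Obstacle", is_subject=False))

user.at(0, 1.0).at(1, 1.0).at(2, 0.7).at(3, 1.0)      # alert, then back to VR
obstacle.at(0, 0.0).at(1, 0.0).at(2, 0.3).at(3, 0.0)  # approaches, then recedes
```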
4 VISUALIZATION EXAMPLES
To illustrate our abstract visualization concept, we highlight four
different examples inspired by previous cross-reality prototypes (see
Fig. 1). The first two are single-user-focused examples in which
the main influence is due to the environment or remote people. The
other two are co-located multi-user examples in which a bystander
influences the AR or VR user.
Obstacle Awareness in Mobile VR: The first example was extracted
from SafeXR by Kang and Han [21]. To make a mobile VR user aware of
obstacles, they used built-in smartphone sensors to extract features
from real-world objects and alert the user. The system was tested
using a mobile VR game (see Fig. 1a). Overall, their approach
demonstrated a promising solution for enhancing user safety in
immersive VR environments.
[Figure 1: Four example visualizations using the Actuality-Time Continuum, inspired by previously proposed cross-reality systems. Each panel spans the continuum from the Real Environment through Mixed Reality (MR) to the Virtual Environment on the horizontal axis, with time running top to bottom on the vertical axis. Different colors depict different entities; solid lines indicate the perspective of the subject that experiences the current actuality. (a) Obstacle Awareness in Mobile VR [21]: an obstacle gets closer to and then moves away from the VR user. (b) Receiving a Message in VR [11]: Slack triggers a notification for the VR user. (c) Bystander Joins an AR Experience [42]: a bystander picks up a phone to join the AR user. (d) Bystander Approaches a VR User [23]: a bystander approaches the VR user and starts a conversation.]
Receiving a Message in VR: The second example is inspired by
Ghosh et al. [11]. Here, a Slack message was presented visually
in VR. The message was presented on existing surfaces based on
the user’s location and viewing direction. This method successfully
integrated a traditional messaging interface into the VR experience,
demonstrating a novel way to receive notifications in a VR environ-
ment. For the visualization, see Fig. 1b.
Bystander Joins an AR Experience: We extracted the third ex-
ample from the work of Xu et al. [42]. In this paper, a non-HMD
user could use a smartphone to join the same AR experience as
an HMD user experiencing virtual content in AR. The virtual AR
content was synchronized between the HMD and the smartphone to
present a joint experience in AR and enable interaction (see Fig. 1c).
This highlighted the potential for cross-platform engagement and
co-experience, effectively integrating non-HMD users.
Bystander Approaches a VR User: The fourth visualization
is inspired by work from McGill et al. [23]. In this example, a
bystander approaches a VR user. When the bystander enters the
same tracking space as the VR user, the former fades into the virtual
view. When the VR user chooses to engage with them, they are
rendered fully opaque (see Fig. 1d). This interaction technique
underscores an intriguing way to incorporate real-world entities into
the VR environment, thereby bridging the gap between virtual and
physical presence.
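One way to generate such panels programmatically is sketched below, building on the data model from Sect. 3.2 (matplotlib is assumed as the plotting backend; the styling only approximates Fig. 1):

```python
import matplotlib.pyplot as plt


def plot_continuum(scenario: ActualityTimeContinuum, title: str) -> None:
    """Draw a Fig. 1-style panel: actuality on the x-axis, time on the y-axis."""
    fig, ax = plt.subplots(figsize=(4, 4))
    for entity in scenario.entities:
        steps = [step for step, _ in entity.trajectory]
        actualities = [actuality for _, actuality in entity.trajectory]
        # Solid lines mark subjects (they experience the actuality shown);
        # dashed lines mark objects.
        style = "-" if entity.is_subject else "--"
        ax.plot(actualities, steps, style, label=entity.name)
    ax.set_xlim(-0.05, 1.05)
    ax.set_xticks([0.0, 0.5, 1.0])
    ax.set_xticklabels(["Real\nEnvironment", "Mixed Reality (MR)",
                        "Virtual\nEnvironment"])
    ax.set_ylabel("Time (abstract steps)")
    ax.invert_yaxis()  # time runs from top to bottom, as in the concept
    ax.set_title(title)
    ax.legend()
    plt.show()


# For example, rendering the hypothetical scenario from Sect. 3.2:
plot_continuum(scenario, "Obstacle Awareness in Mobile VR")
```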
5 CONCLUSION
In this paper, we introduced the concept of the Actuality-Time Con-
tinuum. This extended continuum allows one to position multiple
entities on the Reality-Virtuality axis and use the new time axis to
visualize their interplay over time. The positioning of entities allows one to visualize their actuality, their relationships with others, and transitions between different manifestations (e.g.,
AR or VR). Our visualization can be used to explore alternatives
during the development process, discuss ideas with others, or com-
pare different cross-reality systems. We hope that our work sparks
discussion on how to describe complex cross-reality systems in an
intuitive way. In this context, we used the term “actuality” – derived from the Latin actualitas, “in existence” or “currently happening” – to name our extension of the Reality-Virtuality Continuum [24].
REFERENCES
[1] C. Afonso and S. Beckhaus. How to not hit a virtual wall: Aural spatial awareness for collision avoidance in virtual environments. In Proceedings of the 6th Audio Mostly Conference: A Conference on Interaction with Sound, AM ’11, p. 101–108. ACM, New York, NY, USA, 2011. doi: 10.1145/2095667.2095682
[2] J. Auda, U. Gruenefeld, and S. Mayer. It takes two to tango: Conflicts between users on the reality-virtuality continuum and their bystanders. In Proceedings of the International Workshop on Cross-Reality (XR) Interaction, XR ’20, 2020.
[3] J. Auda, M. Pascher, and S. Schneegass. Around the (virtual) world: Infinite walking in virtual reality using electrical muscle stimulation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, p. 1–8. ACM, New York, NY, USA, 2019. doi: 10.1145/3290605.3300661
[4] M. Billinghurst, H. Kato, and I. Poupyrev. The MagicBook - moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications, 21(3):6–8, 2001.
[5] M. Billinghurst, H. Kato, and I. Poupyrev. The MagicBook: a transitional AR interface. Computers & Graphics, 25(5):745–753, 2001. doi: 10.1016/S0097-8493(01)00117-0
[6] L. Chan and K. Minamizawa. FrontFace: Facilitating communication between HMD users and outsiders using front-facing-screen HMDs. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI ’17. ACM, New York, NY, USA, 2017. doi: 10.1145/3098279.3098548
[7] M. L. Chenechal, T. Duval, V. Gouranton, J. Royan, and B. Arnaldi. Vishnu: Virtual immersive support for helping users, an interaction paradigm for collaborative remote guiding in mixed reality. In 2016 IEEE Third VR International Workshop on Collaborative Virtual Environments, 3DCVE ’16, pp. 9–12. IEEE, 2016. doi: 10.1109/3DCVE.2016.7563559
[8] A. P. Desai, L. Peña-Castillo, and O. Meruvia-Pastor. A window to your smartphone: Exploring interaction and communication in immersive VR with augmented virtuality. In 2017 14th Conference on Computer and Robot Vision, CRV ’17, pp. 217–224. IEEE, 2017. doi: 10.1109/CRV.2017.16
[9] M. Eissele, O. Siemoneit, and T. Ertl. Transition of mixed, virtual, and augmented reality in smart production environments - an interdisciplinary view. In 2006 IEEE Conference on Robotics, Automation and Mechatronics, pp. 1–6, 2006. doi: 10.1109/RAMECH.2006.252671
[10] M. Feick, A. Tang, and S. Bateman. Mixed-reality for object-focused remote collaboration. In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, UIST ’18 Adjunct, p. 63–65. ACM, New York, NY, USA, 2018. doi: 10.1145/3266037.3266102
[11] S. Ghosh, L. Winston, N. Panchal, P. Kimura-Thollander, J. Hotnog, D. Cheong, G. Reyes, and G. D. Abowd. NotifiVR: Exploring interruptions and notifications in virtual reality. IEEE Transactions on Visualization and Computer Graphics, 24(4):1447–1456, 2018. doi: 10.1109/TVCG.2018.2793698
[12] R. Grasset, J. Looser, and M. Billinghurst. Transitional interface: Concept, issues and framework. In IEEE/ACM International Symposium on Mixed and Augmented Reality, ISMAR ’06, pp. 231–232. IEEE, 2006. doi: 10.1109/ISMAR.2006.297819
[13] J. Gugenheimer, E. Stemasov, J. Frommel, and E. Rukzio. ShareVR: Enabling co-located experiences for virtual reality between HMD and non-HMD users. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI ’17, p. 4021–4033. ACM, New York, NY, USA, 2017. doi: 10.1145/3025453.3025683
[14] J. Gugenheimer, E. Stemasov, H. Sareen, and E. Rukzio. FaceDisplay: Enabling multi-user interaction for mobile virtual reality. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’17, p. 369–372. ACM, New York, NY, USA, 2017. doi: 10.1145/3027063.3052962
[15] J. Gugenheimer, E. Stemasov, H. Sareen, and E. Rukzio. FaceDisplay: Towards asymmetric multi-user interaction for nomadic virtual reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, p. 1–13. ACM, New York, NY, USA, 2018. doi: 10.1145/3173574.3173628
[16] J. Hartmann, C. Holz, E. Ofek, and A. D. Wilson. RealityCheck: Blending virtual environments with situated physical reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, p. 1–12. ACM, New York, NY, USA, 2019. doi: 10.1145/3290605.3300577
[17] A. Ishii, M. Tsuruta, I. Suzuki, S. Nakamae, T. Minagawa, J. Suzuki, and Y. Ochiai. ReverseCAVE: Providing reverse perspectives for sharing VR experience. In ACM SIGGRAPH 2017 Posters, SIGGRAPH ’17. ACM, New York, NY, USA, 2017. doi: 10.1145/3102163.3102208
[18] A. Ishii, M. Tsuruta, I. Suzuki, S. Nakamae, J. Suzuki, and Y. Ochiai. Let your world open: CAVE-based visualization methods of public virtual reality towards a shareable VR experience. In Proceedings of the 10th Augmented Human International Conference 2019, AH ’19. ACM, New York, NY, USA, 2019. doi: 10.1145/3311823.3311860
[19] P. Jansen, F. Fischbach, J. Gugenheimer, E. Stemasov, J. Frommel, and E. Rukzio. ShARe: Enabling co-located asymmetric multi-user interaction for augmented reality head-mounted displays. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, UIST ’20, p. 459–471. ACM, New York, NY, USA, 2020. doi: 10.1145/3379337.3415843
[20] S. Jung, P. J. Wisniewski, and C. E. Hughes. In limbo: The effect of gradual visual transition between real and virtual on virtual body ownership illusion and presence. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces, VR ’18, pp. 267–272. IEEE, 2018. doi: 10.1109/VR.2018.8447562
[21] H. Kang and J. Han. SafeXR: Alerting walking persons to obstacles in mobile XR environments. The Visual Computer, 36(10):2065–2077, Oct 2020. doi: 10.1007/s00371-020-01907-4
[22] P. Knierim, V. Schwind, A. M. Feit, F. Nieuwenhuizen, and N. Henze. Physical keyboards in virtual reality: Analysis of typing performance and effects of avatar hands. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, p. 1–9. ACM, New York, NY, USA, 2018. doi: 10.1145/3173574.3173919
[23] M. McGill, D. Boland, R. Murray-Smith, and S. Brewster. A dose of reality: Overcoming usability challenges in VR head-mounted displays. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, p. 2143–2152. ACM, New York, NY, USA, 2015. doi: 10.1145/2702123.2702382
[24] P. Milgram and F. Kishino. A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems, 77(12):1321–1329, 1994.
[25] J. O’Hagan, J. R. Williamson, and M. Khamis. Bystander interruption of VR users. In Proceedings of the 9th ACM International Symposium on Pervasive Displays, PerDis ’20, p. 19–27. ACM, New York, NY, USA, 2020. doi: 10.1145/3393712.3395339
[26] S. Orts-Escolano, C. Rhemann, S. Fanello, W. Chang, A. Kowdle, Y. Degtyarev, D. Kim, P. L. Davidson, S. Khamis, M. Dou, V. Tankovich, C. Loop, Q. Cai, P. A. Chou, S. Mennicken, J. Valentin, V. Pradeep, S. Wang, S. B. Kang, P. Kohli, Y. Lutchyn, C. Keskin, and S. Izadi. Holoportation: Virtual 3D teleportation in real-time. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, UIST ’16, p. 741–754. ACM, New York, NY, USA, 2016. doi: 10.1145/2984511.2984517
[27] I. Podkosova and H. Kaufmann. Preventing imminent collisions between co-located users in HMD-based VR in non-shared scenarios. In Proceedings of the 30th International Conference on Computer Animation and Social Agents, CASA ’17, pp. 37–46, Seoul, South Korea, 2017.
[28] J. S. Roo and M. Hachet. One Reality: Augmenting how the physical world is experienced by combining multiple mixed reality modalities. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, UIST ’17, p. 787–795. ACM, New York, NY, USA, 2017. doi: 10.1145/3126594.3126638
[29] J. Rumbaugh, I. Jacobson, and G. Booch. The Unified Modeling Language Reference Manual. Addison-Wesley, 2004.
[30] A. Ruvimova, J. Kim, T. Fritz, M. Hancock, and D. C. Shepherd. “Transport me away”: Fostering flow in open offices through virtual reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20, p. 1–14. ACM, New York, NY, USA, 2020. doi: 10.1145/3313831.3376724
[31] R. Rzayev, S. Mayer, C. Krauter, and N. Henze. Notification in VR: The effect of notification placement, task and environment. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, CHI PLAY ’19, p. 199–211. ACM, New York, NY, USA, 2019. doi: 10.1145/3311350.3347190
[32] J. Sachs. Aristotle: Motion and its place in nature. 2005.
[33] A. L. Simeone, M. Khamis, A. Esteves, F. Daiber, M. Kljun, K. Čopič Pucihar, P. Isokoski, and J. Gugenheimer. International workshop on cross-reality (XR) interaction. In Companion Proceedings of the 2020 Conference on Interactive Surfaces and Spaces, pp. 111–114, 2020.
[34] S. Soames. Actuality: Actually. Aristotelian Society Supplementary Volume, 81(1):251–277, 2007. doi: 10.1111/j.1467-8349.2007.00158.x
[35] M. Speicher, B. D. Hall, and M. Nebeling. What is mixed reality? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, p. 1–15. ACM, New York, NY, USA, 2019. doi: 10.1145/3290605.3300767
[36] D. Sproll, J. Freiberg, T. Grechkin, and B. E. Riecke. Poster: Paving the way into virtual reality - a transition in five stages. In 2013 IEEE Symposium on 3D User Interfaces, 3DUI ’13, pp. 175–176. IEEE, 2013. doi: 10.1109/3DUI.2013.6550235
[37] F. Steinicke, G. Bruder, K. Hinrichs, A. Steed, and A. L. Gerlach. Does a gradual transition to the virtual world increase presence? In 2009 IEEE Virtual Reality Conference, VR ’09, pp. 203–210. IEEE, 2009. doi: 10.1109/VR.2009.4811024
[38] J. von Willich, M. Funk, F. Müller, K. Marky, J. Riemann, and M. Mühlhäuser. You invaded my tracking space! Using augmented virtuality for spotting passersby in room-scale virtual reality. In Proceedings of the 2019 on Designing Interactive Systems Conference, DIS ’19, p. 487–496. ACM, New York, NY, USA, 2019. doi: 10.1145/3322276.3322334
[39] J. Walker, B. Li, K. Vertanen, and S. Kuhl. Efficient typing on a visually occluded physical keyboard. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI ’17, p. 5457–5461. ACM, New York, NY, USA, 2017. doi: 10.1145/3025453.3025783
[40] C.-H. Wang, S. Yong, H.-Y. Chen, Y.-S. Ye, and L. Chan. HMD Light: Sharing in-VR experience via head-mounted projector for asymmetric interaction. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, UIST ’20, p. 472–486. ACM, New York, NY, USA, 2020. doi: 10.1145/3379337.3415847
[41] P. Wozniak, A. Capobianco, N. Javahiraly, and D. Curticapean. Towards unobtrusive obstacle detection and notification for VR. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, VRST ’18. ACM, New York, NY, USA, 2018. doi: 10.1145/3281505.3283391
[42] S. Xu, B. Yang, B. Liu, K. Cheng, S. Masuko, and J. Tanaka. Sharing augmented reality experience between HMD and non-HMD user. In S. Yamamoto and H. Mori, eds., Human Interface and the Management of Information. Information in Intelligent Systems, pp. 187–202. Springer International Publishing, Cham, 2019. doi: 10.1007/978-3-030-22649-7_16
[43] M.-S. Yoh. The reality of virtual reality. In Proceedings Seventh International Conference on Virtual Systems and Multimedia, pp. 666–674, 2001. doi: 10.1109/VSMM.2001.969726