Decoupled Hands: An Approach for Aligning Perspectives in
Collaborative Mixed Reality
Matt Gottsacker
University of Central Florida
Orlando, Florida, USA
mattg@ucf.edu
Nels Numan
University College London
London, England
nels.numan@ucl.ac.uk
Anthony Steed
University College London
London, England
a.steed@ucl.ac.uk
Gerd Bruder
University of Central Florida
Orlando, Florida, USA
gerd.bruder@ucf.edu
Gregory F. Welch
University of Central Florida
Orlando, Florida, USA
welch@ucf.edu
Steven Feiner
Columbia University
New York, New York, USA
feiner@cs.columbia.edu
Figure 1: (a) Mixed Reality users collaborating with shared virtual objects can encounter occlusion issues when objects block
other objects from a collaborator’s view. In our approach, each user has a local copy of the shared objects. To automatically
align a user’s perspective with their collaborator’s, our technique (b) transforms the user’s local objects so that the user has the
same view as their collaborator. The virtual representations of the collaborator’s hands transform with the objects. (c) The
collaborator can see when the user is aligned with their perspective because the user’s virtual hands inversely transform to
reference the proper virtual position on their (un-rotated) local object. The proper positioning of the virtual hands for both
users allows them to make intuitive references to the objects.
Abstract
When collaborating relative to a shared 3D virtual object in mixed reality (MR), users may experience communication issues arising from differences in perspective. These issues include occlusion (e.g., one user not being able to see what the other is referring to) and inefficient spatial references (e.g., "to the left of this" may be confusing when users are positioned opposite to each other). This paper presents a novel technique for automatic perspective alignment in collaborative MR involving co-located interaction centered around a shared virtual object. To align one user's perspective on the object with a collaborator's, a local copy of the object and any other virtual elements that reference it (e.g., the collaborator's hands) are dynamically transformed. The technique does not require virtual travel and preserves face-to-face interaction. We created a prototype application to demonstrate our technique and present an evaluation methodology for related MR collaboration and perspective alignment scenarios.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
CHI EA '25, Yokohama, Japan
© 2025 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-1395-8/2025/04
https://doi.org/10.1145/3706599.3720219
CCS Concepts
• Human-centered computing → Mixed / augmented reality; Collaborative interaction.
Keywords
Mixed Reality, Collaboration, Perspective Sharing
ACM Reference Format:
Matt Gottsacker, Nels Numan, Anthony Steed, Gerd Bruder, Gregory F. Welch, and Steven Feiner. 2025. Decoupled Hands: An Approach for Aligning Perspectives in Collaborative Mixed Reality. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA '25), April 26-May 1, 2025, Yokohama, Japan. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3706599.3720219
1 Introduction
In computer-supported cooperative work (CSCW), understanding
collaborator attention is a fundamental aspect of workspace aware-
ness and collaboration in digital systems [18]. In support of this,
view sharing is the process by which collaborators are able to see the
same digital content simultaneously, and transfer control among
each other. Engelbart et al. [13] introduced perhaps the first example of view sharing in collaborative document editing applications in 1968, and this is now an integral element of a variety of digital collaboration applications. For example, it is common for a user in a video conferencing meeting to share their screen when presenting a slideshow and use their mouse to direct others' attention to specific objects. While view sharing in this way, the webcam footage of the users is also displayed, which preserves some social cues present in face-to-face interaction.
With co-located mixed reality (MR) setups in which multiple users collaborate using shared virtual objects (such as in table-based collaboration for construction planning [1, 36], scientific visualization [1, 30, 36, 40], and geospatial visualization and planning [1]), maintaining strong workspace awareness is important. In these scenarios, collaborators can experience mismatched viewpoints, where occupying different positions in the physical and/or virtual worlds results in different perspectives of the same virtual object. Researchers have shown that such asymmetry in perspectives and interaction capabilities can make it difficult to understand each other's spatial references [7] and lead to communication issues [32]. For example, perspective differences can make it difficult to refer to specific features or regions of a virtual object, as a gesture or verbal reference may not be well-understood across viewpoints. In addition, certain parts of the virtual content or a collaborator's actions (e.g., pointing or gesturing) can be occluded depending on one's position.
To explore view-sharing interfaces in co-located MR scenarios, we consider the simple scenario of a single View Leader sharing their view with a single View Follower. Note that "leader" and "follower" may have nothing to do with the actual roles of the users collaborating (although it might; co-located immersive presentation [15, 16] and education [12] scenarios commonly involve one user primarily directing the experience). Here, we use these terms merely to help differentiate between the users. The most straightforward approach to accomplish this view sharing is to reproduce the View Leader's spatial viewpoint for the View Follower in such a way that both collaborators see identical virtual stereo imagery. However, this approach can present multiple challenges. In this scenario, the View Leader controls the View Follower's virtual position and rotation, leading the View Follower to experience uncontrolled optic flow translations and rotations, which can cause cybersickness [6]. Furthermore, if the View Follower were to occupy the same virtual position as the View Leader, they would be unable to see the View Leader's face, hindering understanding of social cues.
In this work, we present an approach for MR view sharing designed to account for all of these challenges and provide intuitive communication and group awareness while preserving face-to-face interaction cues. We propose a perspective alignment technique that provides each collaborator with their own copy of the virtual objects on which they are collaborating. Rather than automatically translating and rotating the View Follower through the environment, which causes the aforementioned issues, the View Follower's copies of the objects are translated and rotated to achieve alignment. The same transformation is applied to virtual representations of the View Leader's hands so that when the View Leader points to a virtual object, their virtual hand will be displayed to the View Follower in the proper virtual location. This idea can be applied to aligning perspectives with differences in scales, such as when the View Leader increases the size of the object or decreases the size of their view frustum to achieve a more immersive view, but that kind of transitional interface is beyond the scope of the work presented in this paper.
2 Background
Our work is inspired by and grounded in research on group awareness and novel techniques for view sharing in collaborative MR environments.
2.1 Group Awareness in Collaborative MR
In 2002, Gutwin & Greenberg [18] developed a framework identifying the primary elements of workspace awareness that a multi-user system should support in order to provide a good collaborative experience. At any given moment, collaborators should be aware of who is involved in the workspace, what they are working on, and where they are working or attending to [18]. While this framework was initially focused on remote collaboration scenarios, more recently it has been re-contextualized and examined in research on co-located MR collaboration, which has shown that there are substantial challenges to providing high workspace awareness even when collaborators share the same physical space [33]. For example, researchers have pointed out that interfaces should be designed such that collaborators can understand and direct each other's attention spatially (in 3D) and efficiently [7, 32, 33]. Challenges to maintaining this shared workspace awareness in MR arise due to occlusion (i.e., when one user is viewing things that are blocked from another user's perspective). Additionally, seeing the workspace from different angles can result in less efficient communication if users do not easily understand references and need to verbally clarify a collaborator's references [23]. Differing perspectives can also lead to the users spending additional time manipulating the workspace or physically moving through the environment, to better understand each other [7]. To overcome these issues, perspective alignment interfaces should be designed so that users can easily and quickly achieve common ground for spatial references.
Researchers have emphasized the importance of collaborators' abilities to observe each other's facial expressions and body language, which are important for making collaborators aware of each other's emotional states [33] and building trust [22]. For this reason, we chose to include an MR-based face-to-face capability. It should be noted, however, that there are limitations imposed by current consumer MR head-worn displays (HWDs). For one, an HWD covers the upper half of a user's face, which reduces the observability of their facial expressions and can hamper social interaction [15, 28]. Researchers have proposed various methods to restore these cues (e.g., by displaying the users' eyes on the outside of the HWD [4, 5, 26, 27] or sharing other invisible cues about the user's physiological or mental states to enhance communication [10, 17]).
Our work seeks to support efficient spatial referencing through aligned perspectives while preserving the social interaction cues of face-to-face interaction in co-located collaborative MR.
2.2 Mixed Reality View Sharing Techniques
Researchers have explored a variety of techniques for aligning perspectives and supporting workspace awareness in collaborative MR scenarios where users have different viewpoints of shared virtual objects. Tserenchimed et al. [39] presented a technique to exactly align a View Follower's view with a View Leader in a remote collaboration scenario involving a mobile AR user and a VR user. Once the collaborators' perspectives are aligned, each user cannot see the other user's avatar, but they can see 3D rays cast from each other's controllers. They found that their technique led to lower mental demand and faster task completion on a collaborative engine-fixing task. Similarly, Le Chénéchal et al. developed the Vishnu system [25], which positioned a remote expert using VR in the same virtual position as an AR user. The system showed the expert's virtual arms emanating from the same place as the AR user's own arms in their view to point to things in the workspace.
Researchers have also explored re-mapping users' bodies and/or environments to provide aligned perspectives in collaboration. For example, Congdon et al. [9] presented a method for distorting users' asymmetric virtual environments to provide a shared interaction space. Additionally, in face-to-face scenarios, Sousa et al. [35] and Fidalgo et al. [14] aligned collaborator perspectives exactly, and then warped each collaborator's original hands and arms around the virtual object to point in the approximately correct virtual location for the other user. However, these techniques were designed for and evaluated in environments in which the collaborators were interacting around a small tabletop with a virtual object placed on top of it. Extending these methods to larger virtual objects would involve significantly warping the appearance of collaborators' original arms or hands. Simões et al. [34] took this approach in their SPARC system, which also re-targeted collaborator avatars. In this work, View Follower avatars were re-targeted from the View Leader's perspective to prioritize face-to-face interaction (e.g., by positioning collaborators' avatars across from the View Leader rather than next to them). When a View Follower pointed to something, their arms and hands were then stretched and manipulated to point in the proper virtual location. Hoppe et al. developed the ShiSha system for collaborative virtual reality [21], which positioned View Followers in the same virtual location around a View Leader to provide all users with aligned perspectives. In the View Leader's view, however, the View Followers were positioned slightly off to the side so that they could still engage in face-to-face interaction. The View Followers' avatars were then re-targeted to point to the approximately correct virtual location.
Our work presents an approach for aligning perspectives exactly (similar to other works such as [14, 21, 25, 34, 35, 39]) and that also preserves face-to-face interaction (similar to [14, 34, 35]) but without warping users' avatars, which potentially provides advantages for the scalability of the technique.
3 Perspective Alignment Technique Design &
Implementation
This section describes the design principles that guided the creation
of our Decoupled Hands perspective alignment technique, technical
details of our approach, and the functional prototype we created to
demonstrate it.
3.1 Perspective Alignment Design Principles
Similar to work on co-located MR presentations [15], we based the design of our MR perspective alignment technique on high-level design goals that Kumaravel et al. [38] developed from formative interviews about asymmetric interactions between users of VR and of non-immersive displays. While Kumaravel et al.'s VR–desktop collaboration scenario was different from our MR scenario, their study of how disparate perspectives affect collaboration aligns with our research goals.
First, we aimed to support independent exploration for both users [38] by enabling both users to explore and interact with the shared virtual content. We also aimed to support efficient and direct spatial references to the virtual content [7, 33, 38], which is required to enable the collaborators to discuss specific aspects of the objects. We provided virtual hand representations for each user and raycasts so they could clearly indicate specific points on an object to their partner. Understanding these references is not an issue when both users have a good view of an object of interest. However, when shared objects are large or complex and it becomes difficult or time-consuming for users to obtain similar views, additional features are needed to enable shared perspectives and to support attention guidance for both users. We also aimed to provide stable virtual content for both users [38] to support their making precise references to content through pointing. To achieve this principle, the perspective alignment on the virtual content should be triggered by the View Follower, and their view should not update continuously.
Last, we aimed to support social interaction and co-presence between users through social cues such as body language [33, 38]. Expressing and observing social cues is an important aspect of understanding others' emotional states [33], building trust [22], and establishing and gauging interest during computer-mediated interactions [29]. For this reason, we support face-to-face interaction through video see-through MR without modifying users' appearances.
3.2 Perspective Alignment Technique
Our approach provides each user with a local copy of the shared
virtual objects around which the users are collaborating. An axis of
rotation is dened for the set of collaborative objects as the centroid
of all objects in 3D space. For table-based collaboration, this axis
is set at the center of the table. A View Follower can align their
perspective with a View Leader by pointing their controller at
them and pressing the trigger button. The View Follower’s copy
of the virtual objects then rotates about their axis so that the View
Follower sees the objects from the same angle as the View Leader.
The virtual objects always rotate around the shared objects’ axis
by the shortest angular distance between the two perspectives.
After it is computed once (i.e., when a View Follower selects a
View Leader), this rotation value is used to transform all virtual
copies of the View Leader’s objects for the View Follower. The
positions of the virtual copies are transformed around the work
table by the same angular dierence. This method positions the
View Leader’s virtual hands in the exactly correct virtual position
for the View Follower. In other words, if the View Follower’s
virtual object is rotated 100°with respect to the View Leader’s
Figure 2: Screenshots from our collaborative AR system in use showing different stages of the perspective alignment process. At time t = 1, User A (top panel, orange) points at something that is out of view for User B (bottom panel, blue). At t = 2, User B points at User A and presses the trigger to activate a perspective alignment. At t = 3, the perspective alignment is in progress: User A can see User B's virtual hands rotating around the virtual object toward them; User B can see the virtual object and User A's virtual hands rotating toward them. At t = 4, User A has become the View Leader and User B has become the View Follower. Specifically, User B's 3D map has rotated such that he can see the map from the same angle as User A. Each user can see virtual representations of the other user's hands floating over the map in the correct virtual positions relative to the other user's view.
In other words, if the View Follower's virtual object is rotated 100° with respect to the View Leader's object, the View Follower's virtual copy of the View Leader's hands will be offset from the View Leader by the same 100° around the center of the virtual object. The result is that when the View Follower transforms their copy of the shared virtual object to match the View Leader's perspective, the View Follower sees a virtual representation of the View Leader's hands floating in front of the View Follower. This decoupling of the View Leader's virtual hands from their original position allows the object-relative perspective alignment paradigm to work in a straightforward way (i.e., without any warping or re-targeting) with large virtual objects, large numbers of collaborators, and dynamic movements.
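To make the alignment step concrete, the following Unity C# sketch reconstructs the computation described above. It is illustrative rather than the prototype's actual code: the component and field names (e.g., DecoupledHandsAlignment, localContentRoot, leaderHandProxy) are ours, and it assumes that poses are already expressed in the shared, calibrated coordinate space and that rotation occurs about the vertical axis through the content centroid.

```csharp
using UnityEngine;

// Illustrative sketch of the Decoupled Hands alignment step (hypothetical names;
// not the authors' implementation). All poses are assumed to be in the shared,
// calibrated coordinate space.
public class DecoupledHandsAlignment : MonoBehaviour
{
    public Transform localContentRoot;   // the View Follower's local copy of the shared objects
    public Transform leaderHandTracked;  // networked pose of the View Leader's hand
    public Transform leaderHandProxy;    // hand representation rendered for the View Follower
    public Vector3 contentCentroid;      // rotation axis, e.g., the center of the work table

    float alignmentAngle;                // degrees; computed once per alignment

    // Signed angle about the vertical axis that rotates the leader's bearing
    // (from the centroid) onto the follower's bearing. Vector3.SignedAngle
    // returns the shortest angular distance, in [-180, 180] degrees.
    static float AlignmentAngle(Vector3 leaderHead, Vector3 followerHead, Vector3 centroid)
    {
        Vector3 toLeader   = Vector3.ProjectOnPlane(leaderHead - centroid, Vector3.up);
        Vector3 toFollower = Vector3.ProjectOnPlane(followerHead - centroid, Vector3.up);
        return Vector3.SignedAngle(toLeader, toFollower, Vector3.up);
    }

    // Triggered once when the View Follower selects a View Leader
    // (e.g., controller raycast + trigger press).
    public void Align(Vector3 followerHead, Vector3 leaderHead)
    {
        alignmentAngle = AlignmentAngle(leaderHead, followerHead, contentCentroid);

        // Rotate the follower's local copy about the shared axis; the follower
        // now sees the objects from the leader's angle without virtual travel.
        localContentRoot.RotateAround(contentCentroid, Vector3.up, alignmentAngle);
    }

    // Each frame, the leader's tracked hand pose is re-expressed around the
    // centroid by the same angle, so the proxy floats over the corresponding
    // spot on the follower's rotated copy. On the leader's side, the follower's
    // hand proxy would use -alignmentAngle (the inverse), as in Figure 1(c).
    void LateUpdate()
    {
        Quaternion offset = Quaternion.AngleAxis(alignmentAngle, Vector3.up);
        leaderHandProxy.SetPositionAndRotation(
            contentCentroid + offset * (leaderHandTracked.position - contentCentroid),
            offset * leaderHandTracked.rotation);
    }
}
```

Because the same offset is applied to the content and to the leader's hand proxy, and the inverse offset to the follower's hand proxy on the leader's side, both users' pointing gestures stay anchored to the same object features without any warping of their avatars.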
3.3 Functional Prototype Application
To demonstrate our approach, we developed an MR application
that allows multiple users to interact with a 3D map and switch
to each other’s perspective. This prototype was the result of a
collaboration between two geographically separated research labs.
Screenshots of the system in action are shown in Figure 2, and a
video of the application in action is included in the supplementary
materials. In our demo scenario, one user can assume the role of
View Leader and teach the View Followers about features on the
map (e.g., landmarks along a route). The View Leader can place
virtual pins on the map and gesture to particular points of interest
to communicate the map-based information. The View Leader has
the best view of the map and its features, so the View Followers
will need to align their perspectives with the View Leader.
3.3.1 Networking Implementation. When users first connect to the application, they perform a calibration process to ensure the locally tracked positions of their devices are aligned in the same coordinate space on the networking server.
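The paper does not detail the calibration procedure itself. As one plausible realization (an assumption on our part, not a description of the prototype), the sketch below aligns a device's local tracking space to the shared coordinate frame using the pose of a common physical anchor, such as a marker on the work table:

```csharp
using UnityEngine;

// Hypothetical calibration helper (not from the prototype): given the pose of a
// common physical anchor as currently measured in this device's world space, and
// the agreed-upon pose of that anchor in the shared/server coordinate frame,
// move the tracking-space root so that all locally tracked head and hand poses
// line up with collaborators' poses.
public static class SharedSpaceCalibration
{
    public static void Apply(Transform trackingSpaceRoot, Pose localAnchor, Pose sharedAnchor)
    {
        // Rigid transform T that maps the locally measured anchor pose onto the
        // shared anchor pose: T = sharedAnchor * inverse(localAnchor).
        Quaternion deltaRot = sharedAnchor.rotation * Quaternion.Inverse(localAnchor.rotation);
        Vector3 deltaPos = sharedAnchor.position - deltaRot * localAnchor.position;

        // Pre-multiplying the root by T applies the same correction to every
        // tracked object parented under it.
        trackingSpaceRoot.SetPositionAndRotation(
            deltaRot * trackingSpaceRoot.position + deltaPos,
            deltaRot * trackingSpaceRoot.rotation);
    }
}
```

Re-running this step corresponds to the recalibration described next: if the virtual caps and hands drift away from collaborators' physical heads and hands, the anchor can be re-measured and the correction re-applied.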
A virtual cap is placed on each user’s head, and virtual hands
mirror their physical hand movement. A user can observe how
closely these virtual objects match up with the physical head and
hands of a collaborator to get a sense for how well their coordinate
spaces are aligned. They can recalibrate their coordinate spaces as
needed. Each user’s virtual cap and hands are assigned a color to
help users differentiate themselves from each other when aligning their perspectives.

Category | Measure | Description
Task Performance | Speed | How quickly users complete the spatial construction task.
Task Performance | Accuracy | How accurately users fulfill the constraints of the sub-tasks.
User Experience | Cognitive load | NASA Raw Task Load Index [20]
User Experience | Usability | User Experience Questionnaire [24]
User Experience | Social presence | Networked Minds Social Presence Inventory [19]
User Experience | Group awareness | Questions based on Gutwin & Greenberg's workspace awareness framework [18]
Observed Collaboration Behaviors | Communication | Total word count, number of deictic phrases used
Observed Collaboration Behaviors | Social interaction | How often participants look at each other
Observed Collaboration Behaviors | Perspective alignments | How often participants align perspectives
Table 1: Planned evaluation metrics
3.3.2 Prototype Hardware and Software. We developed the application using Unity 2021.3.13 and deployed the application to the Meta Quest Pro head-worn MR display. The Quest Pro has a colorized video see-through mode that allows users to view their physical surroundings through the headset cameras, a key feature for supporting face-to-face collaborative MR. The map data is streamed dynamically using the Microsoft Maps SDK for Unity (https://github.com/microsoft/MapsSDK-Unity), which provides API access to detailed 3D terrain and building data for many cities around the world through Bing Maps. The collaborators' head and hands transform data is networked using Photon Unity Networking 2 (https://www.photonengine.com/pun).
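As an illustration of how the head and hand transforms could be streamed over Photon Unity Networking 2, the component below follows PUN 2's standard IPunObservable pattern. It is a sketch of the general approach, not the prototype's actual networking code (PUN 2's built-in PhotonTransformView component serves a similar purpose).

```csharp
using Photon.Pun;
using UnityEngine;

// Illustrative PUN 2 component for streaming one tracked transform (a head or a
// hand) to collaborators. It must be added to the PhotonView's list of observed
// components so that OnPhotonSerializeView is called.
public class TrackedTransformSync : MonoBehaviourPun, IPunObservable
{
    Vector3 networkPosition;
    Quaternion networkRotation;

    void Start()
    {
        networkPosition = transform.position;
        networkRotation = transform.rotation;
    }

    public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.IsWriting)
        {
            // The owner sends its calibrated (shared-space) pose.
            stream.SendNext(transform.position);
            stream.SendNext(transform.rotation);
        }
        else
        {
            // Remote copies receive the pose; the Decoupled Hands rotation
            // offset (Section 3.2) is applied afterwards on the receiving side.
            networkPosition = (Vector3)stream.ReceiveNext();
            networkRotation = (Quaternion)stream.ReceiveNext();
        }
    }

    void Update()
    {
        if (photonView.IsMine) return;
        // Simple exponential smoothing toward the most recently received pose.
        transform.position = Vector3.Lerp(transform.position, networkPosition, 10f * Time.deltaTime);
        transform.rotation = Quaternion.Slerp(transform.rotation, networkRotation, 10f * Time.deltaTime);
    }
}
```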
4 MR Object-Based Collaboration Evaluation
Methodology
While pilot tests using our demo application in our lab have shown promise, it is necessary to formally evaluate our technique. In this way, this paper serves as a "prequel" to future research on the evaluation of this technique in collaborative MR scenarios. Here, we introduce an experimental methodology to study the quality of collaboration around shared objects using different perspective alignment techniques in MR. In a future user study, participant dyads will collaborate on a spatial task situated on a virtual table, such as populating a 3D landscape with different buildings, objects, and features. This 3D environment will be designed to include terrain and objects that can cause occlusion issues for users with different viewpoints. Additionally, participants will be assigned sub-tasks that depend on the work of their collaborator and will be designed to encourage perspective sharing. For example, one user will need to position a specific virtual object in the line-of-sight of an object that their collaborator is responsible for placing. This task is inspired by earlier experiments on collaborative MR by Billinghurst et al. [3].
Dierent perspective alignment techniques (such as those de-
scribed in section 2, e.g., [
14
,
21
,
34
]) can be compared with our
Decoupled Hands approach in trials that use dierent but similarly
structured base environments and sub-tasks. We will collect the
1https://github.com/microsoft/MapsSDK-Unity
2https://www.photonengine.com/pun
measures listed in Table 1, along with qualitative analysis of partic-
ipant interviews, to gain insights into the impacts of the dierent
perspective alignment techniques on the quality of participants’
collaboration experiences. We will measure task performance to as-
sess the eciency and eectiveness of users’ collaboration (i.e., how
well they coordinate their actions and accomplish shared goals).
On the user experience side, measuring users' perceptions of cognitive load and usability will provide insights into how intuitive and supportive users find a certain technique. Additionally, users' perceptions of group awareness will indicate the degree to which a given technique allows users to make and understand each other's references. Measuring social presence will assess how aware users feel of each other's presence, which is crucial for establishing trust and engagement in MR collaboration.
Participants' behaviors are a useful data source as well. For instance, tracking total word count and deictic phrase usage provides insight into the clarity and efficiency of verbal exchanges. Measuring how often participants look at each other and make eye contact indicates the level of interpersonal engagement participants experience. Last, assessing how often participants align their perspectives reveals how much participants relied on the technique. Altogether, these measures provide insight into how well a given technique supports users' collaboration.
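To illustrate how the behavioral measures could be captured at runtime, the following hypothetical logger counts perspective alignments and accumulates mutual-gaze time from the tracked head poses; the gaze-angle threshold and the logging approach are our assumptions, and transcript-based measures (word counts, deictic phrases) would instead be coded from audio recordings.

```csharp
using UnityEngine;

// Hypothetical runtime logger for two of the behavioral measures in Table 1:
// how often participants align perspectives and how long they spend looking at
// each other. Thresholds and method are assumptions, not the planned study's code.
public class CollaborationMetricsLogger : MonoBehaviour
{
    public Transform headA;                 // tracked head poses in the shared space
    public Transform headB;
    public float gazeAngleThreshold = 15f;  // degrees; counts as "looking at the partner"

    public int PerspectiveAlignments { get; private set; }
    public float MutualGazeSeconds { get; private set; }

    // Call this from the alignment trigger (e.g., DecoupledHandsAlignment.Align).
    public void RecordAlignment() => PerspectiveAlignments++;

    void Update()
    {
        bool aLooksAtB = Vector3.Angle(headA.forward, headB.position - headA.position) < gazeAngleThreshold;
        bool bLooksAtA = Vector3.Angle(headB.forward, headA.position - headB.position) < gazeAngleThreshold;
        if (aLooksAtB && bLooksAtA)
            MutualGazeSeconds += Time.deltaTime;
    }
}
```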
5 Future Work
While a study with just two users will reveal useful insights about the trade-offs between different kinds of perspective sharing techniques, we believe it is important to scale and test this technique in collaborations involving more than two users. It will likely become more challenging for users to understand "whose hands are whose" when multiple users have triggered perspective alignments. Tang et al. [37] presented one approach for disambiguating collaborators' arms and hands in a surface-based mixed-presence system in which local users' hands were rendered semi-transparent and remote users' hands were opaque. The color-matched virtual hands and caps in our application may work for this purpose up to a point, but larger numbers of collaborators may require additional visualization techniques such as labels or leader lines (even just temporarily) to disambiguate hand ownership. Additionally, it may be necessary to incorporate view management techniques [2] such
as automatically adjusting the position or transparency of visualizations and labels to avoid users becoming overwhelmed or confused with the placement of collaborators' representations.
Another avenue for future work is to explore this technique for other physical objects in the collaborative environment. By combining machine learning techniques for spatial understanding as well as object segmentation and classification (e.g., Augmented Object Intelligence [11]), our approach for transforming virtual objects relative to their 3D centroid can be extended to physical objects as well. Such a system could create a virtual replica of relevant physical objects [31] and apply diminished reality techniques [8] to remove the physical objects from view. This would allow the virtual replica objects to be transformed for each user in the same way as in the Decoupled Hands approach.
6 Conclusion
This paper presents Decoupled Hands, a novel technique for achieving perspective alignment in co-located collaborative mixed reality environments involving a shared virtual object. The technique efficiently aligns collaborators' perspectives on the object without requiring virtual travel and while supporting face-to-face interaction. As it does not require warping or re-targeting the environment or collaborators inside it, our method has the potential to generalize to large virtual objects and large numbers of collaborators without sacrificing exact perspective alignment or unmediated face-to-face interaction. We created a prototype system to demonstrate this technique involving a 3D map with real map data streamed from the Internet.
Acknowledgments
This material includes work supported in part by the Office of Naval Research under Award Numbers N00014-21-1-2578 and N00014-21-1-2882 (Dr. Peter Squire, Code 34); the AdventHealth Endowed Chair in Healthcare Simulation (Prof. Welch); DZYNE subaward DRP009-S-001 from DARPA; National Science Foundation Award Number 2037101; and the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No 739578.
References
[1] Maneesh Agrawala, Andrew C Beers, Ian McDowall, Bernd Fröhlich, Mark Bolas, and Pat Hanrahan. 1997. The Two-User Responsive Workbench: Support for Collaboration Through Individual Views of a Shared Space. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques. 327–332. doi:10.1145/258734.258875
[2] Blaine Bell, Steven Feiner, and Tobias Höllerer. 2001. View management for virtual and augmented reality. In Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology (Orlando, Florida) (UIST '01). Association for Computing Machinery, New York, NY, USA, 101–110. doi:10.1145/502348.502363
[3] Mark Billinghurst, Hirokazu Kato, Kiyoshi Kiyokawa, Daniel Belcher, and Ivan Poupyrev. 2002. Experiments with Face-To-Face Collaborative AR Interfaces. Virtual Reality 6 (2002), 107–121. doi:10.1007/s100550200012
[4] Evren Bozgeyikli, Lal "Lila" Bozgeyikli, and Victor Gomes. 2024. Googly Eyes: Exploring Effects of Displaying User's Eyes Outward on a Virtual Reality Head-Mounted Display on User Experience. In 2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR). 979–989. doi:10.1109/VR58804.2024.00117
[5] Liwei Chan and Kouta Minamizawa. 2017. FrontFace: facilitating communication between HMD users and outsiders using front-facing-screen HMDs. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (Vienna, Austria) (MobileHCI '17). Association for Computing Machinery, New York, NY, USA, Article 22, 5 pages. doi:10.1145/3098279.3098548
[6] Eunhee Chang, Hyun Taek Kim, and Byounghyun Yoo. 2020. Virtual reality sickness: a review of causes and measurements. International Journal of Human–Computer Interaction 36, 17 (2020), 1658–1682. doi:10.1080/10447318.2020.1778351
[7] Jeffrey W. Chastine, Kristine Nagel, Ying Zhu, and Luca Yearsovich. 2007. Understanding the design space of referencing in collaborative augmented reality environments. In Proceedings of Graphics Interface 2007 (Montreal, Canada) (GI '07). Association for Computing Machinery, New York, NY, USA, 207–214. doi:10.1145/1268517.1268552
[8] Yi Fei Cheng, Hang Yin, Yukang Yan, Jan Gugenheimer, and David Lindlbauer. 2022. Towards Understanding Diminished Reality. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 549, 16 pages. doi:10.1145/3491102.3517452
[9] Ben J Congdon, Tuanfeng Wang, and Anthony Steed. 2018. Merging environments for shared spaces in mixed reality. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology. 1–8. doi:10.1145/3281505.3281544
[10] Arindam Dey, Thammathip Piumsomboon, Youngho Lee, and Mark Billinghurst. 2017. Effects of Sharing Physiological States of Players in a Collaborative Virtual Reality Gameplay. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI '17). Association for Computing Machinery, New York, NY, USA, 4045–4056. doi:10.1145/3025453.3026028
[11] Mustafa Doga Dogan, Eric J Gonzalez, Karan Ahuja, Ruofei Du, Andrea Colaco, Johnny Lee, Mar Gonzalez-Franco, and David Kim. 2024. Augmented Object Intelligence with XR-Objects. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology. Association for Computing Machinery, New York, NY, USA. doi:10.1145/3654777.3676379
[12] Tobias Drey, Patrick Albus, Simon der Kinderen, Maximilian Milo, Thilo Segschneider, Linda Chanzab, Michael Rietzler, Tina Seufert, and Enrico Rukzio. 2022. Towards Collaborative Learning in Virtual Reality: A Comparison of Co-Located Symmetric and Asymmetric Pair-Learning. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 610, 19 pages. doi:10.1145/3491102.3517641
[13] Douglas C Engelbart and William K English. 1968. A research center for augmenting human intellect. In Proceedings of the December 9-11, 1968, Fall Joint Computer Conference, Part I. 395–410. doi:10.1145/1476589.1476645
[14] Catarina G Fidalgo, Maurício Sousa, Daniel Mendes, Rafael Kuffner Dos Anjos, Daniel Medeiros, Karan Singh, and Joaquim Jorge. 2023. MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration. In 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR). IEEE, 438–448. doi:10.1109/VR55154.2023.00059
[15] Matt Gottsacker, Mengyu Chen, David Saffo, Feiyu Lu, Benjamin Lee, and Blair MacIntyre. 2025. Examining the Effects of Immersive and Non-Immersive Presenter Modalities on Engagement and Social Interaction in Co-located Augmented Presentations. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 19 pages. doi:10.1145/3706598.3713346
[16] Matt Gottsacker, Mengyu Chen, David Saffo, Feiyu Lu, and Blair MacIntyre. 2023. Hybrid User Interface for Audience Feedback Guided Asymmetric Immersive Presentation of Financial Data. In 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). 199–204. doi:10.1109/ISMAR-Adjunct60411.2023.00046
[17] Matt Gottsacker, Raiffa Syamil, Pamela Wisniewski, Gerd Bruder, Carolina Cruz-Neira, and Gregory Welch. 2022. Exploring Cues and Signaling to Improve Cross-Reality Interruptions. In 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). 827–832. doi:10.1109/ISMAR-Adjunct57072.2022.00179
[18] Carl Gutwin and Saul Greenberg. 2002. A descriptive framework of workspace awareness for real-time groupware. Computer Supported Cooperative Work (CSCW) 11 (2002), 411–446. doi:10.1023/a:1021271517844
[19] Chad Harms and Frank Biocca. 2004. Internal consistency and reliability of the networked minds measure of social presence. In Seventh Annual International Workshop: Presence, Vol. 2004. Universidad Politecnica de Valencia, Valencia.
[20] Sandra G. Hart. 2006. NASA-Task Load Index (NASA-TLX); 20 Years Later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50, 9 (2006), 904–908. doi:10.1177/154193120605000909
[21] Adrian H. Hoppe, Florian van de Camp, and Rainer Stiefelhagen. 2021. ShiSha: Enabling Shared Perspective With Face-to-Face Collaboration Using Redirected Avatars in Virtual Reality. Proc. ACM Hum.-Comput. Interact. 4, CSCW3, Article 251 (Jan. 2021), 22 pages. doi:10.1145/3432950
[22] Hiroshi Ishii, Minoru Kobayashi, and Kazuho Arita. 1994. Iterative design of seamless collaboration media. Commun. ACM 37, 8 (1994), 83–97. doi:10.1145/179606.179687
[23] Kiyoshi Kiyokawa, Mark Billinghurst, Sohan E. Hayes, Anoop Gupta, Yuki Sannohe, and Hirokazu Kato. 2002. Communication behaviors of co-located users in collaborative AR interfaces. In Proceedings. International Symposium on Mixed and Augmented Reality. 139–148. doi:10.1109/ISMAR.2002.1115083
[24] Bettina Laugwitz, Theo Held, and Martin Schrepp. 2008. Construction and Evaluation of a User Experience Questionnaire. In HCI and Usability for Education and Work, Andreas Holzinger (Ed.). Springer Berlin Heidelberg, Berlin, Heidelberg, 63–76. doi:10.1007/978-3-540-89350-9_6
[25] Morgan Le Chénéchal, Thierry Duval, Valérie Gouranton, Jérôme Royan, and Bruno Arnaldi. 2016. Vishnu: virtual immersive support for helping users: an interaction paradigm for collaborative remote guiding in mixed reality. In 2016 IEEE Third VR International Workshop on Collaborative Virtual Environments (3DCVE). IEEE, 9–12. doi:10.1109/3DCVE.2016.7563559
[26] Christian Mai, Lukas Rambold, and Mohamed Khamis. 2017. TransparentHMD: revealing the HMD user's face to bystanders. In Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia (Stuttgart, Germany) (MUM '17). Association for Computing Machinery, New York, NY, USA, 515–520. doi:10.1145/3152832.3157813
[27] Nathan Matsuda, Brian Wheelwright, Joel Hegland, and Douglas Lanman. 2021. VR social copresence with light field displays. ACM Trans. Graph. 40, 6, Article 244 (Dec. 2021), 13 pages. doi:10.1145/3478513.3480481
[28] Gerard McAtamney and Caroline Parker. 2006. An examination of the effects of a wearable display on informal face-to-face communication. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 45–54. doi:10.1145/1124772.1124780
[29] Prasanth Murali, Javier Hernandez, Daniel McDuff, Kael Rowan, Jina Suh, and Mary Czerwinski. 2021. AffectiveSpotlight: Facilitating the Communication of Affective Responses from Audience Members during Online Presentations. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 247, 13 pages. doi:10.1145/3411764.3445235
[30] Upul Obeysekare, Chas Williams, Jim Durbin, Larry Rosenblum, Robert Rosenberg, Fernando Grinstein, Ravi Ramamurti, Alexandra Landsberg, and William Sandberg. 1996. Virtual Workbench: a non-immersive virtual environment for visualizing and interacting with 3D objects for scientific visualization. In Proceedings of Seventh Annual IEEE Visualization '96. 345–349. doi:10.1109/VISUAL.1996.568128
[31] Ohan Oda, Carmine Elvezio, Mengu Sukan, Steven Feiner, and Barbara Tversky. 2015. Virtual Replicas for Remote Assistance in Virtual and Augmented Reality. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (Charlotte, NC, USA) (UIST '15). Association for Computing Machinery, New York, NY, USA, 405–415. doi:10.1145/2807442.2807497
[32] Ohan Oda and Steven Feiner. 2012. 3D referencing techniques for physical objects in shared augmented reality. In 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 207–215. doi:10.1109/ISMAR.2012.6402558
[33] Iulian Radu, Tugce Joy, Yiran Bowman, Ian Bott, and Bertrand Schneider. 2021. A Survey of Needs and Features for Augmented Reality Collaborations in Collocated Spaces. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 169 (April 2021), 21 pages. doi:10.1145/3449243
[34] João Simões, Anderson Maciel, Catarina Moreira, Maurício Sousa, and Joaquim Jorge. 2025. SPARC: Shared Perspective with Avatar Distortion for Remote Collaboration in VR. In Advances in Computer Graphics. Springer Nature Switzerland, Cham, 99–112. doi:10.1007/978-3-031-82021-2_7
[35] Maurício Sousa, Daniel Mendes, Rafael K dos Anjos, Daniel Simões Lopes, and Joaquim Jorge. 2019. Negative Space: Workspace Awareness in 3D Face-to-Face Remote Collaboration. In The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry. 1–2. doi:10.1145/3359997.3365744
[36] Martin Spindler, Wolfgang Büschel, and Raimund Dachselt. 2012. Use your head: tangible windows for 3D information spaces in a tabletop environment. In Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces (Cambridge, Massachusetts, USA) (ITS '12). Association for Computing Machinery, New York, NY, USA, 245–254. doi:10.1145/2396636.2396674
[37] Anthony Tang, Carman Neustaedter, and Saul Greenberg. 2007. VideoArms: Embodiments for Mixed Presence Groupware. In People and Computers XX: Engage. Springer London, London, 85–102. doi:10.1007/978-1-84628-664-3_8
[38] Balasaravanan Thoravi Kumaravel, Cuong Nguyen, Stephen DiVerdi, and Bjoern Hartmann. 2020. TransceiVR: Bridging Asymmetrical Communication Between VR Users and External Collaborators. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (Virtual Event, USA) (UIST '20). Association for Computing Machinery, New York, NY, USA, 182–195. doi:10.1145/3379337.3415827
[39] Tuvshintulga Tserenchimed and Hyungki Kim. 2024. Viewpoint-sharing method with reduced motion sickness in object-based VR/AR collaborative virtual environment. Virtual Reality 28, 3 (2024), 1–12. doi:10.1007/s10055-024-01005-z
[40] Kevin Yu, Ulrich Eck, Frieder Pankratz, Marc Lazarovici, Dirk Wilhelm, and Nassir Navab. 2022. Duplicated Reality for Co-located Augmented Reality Collaboration. IEEE Transactions on Visualization and Computer Graphics 28, 5 (2022), 2190–2200. doi:10.1109/TVCG.2022.3150520