A Literature Review on Collaboration in Mixed Reality
Philipp Ladwig and Christian Geiger
University of Applied Sciences, 40476 Düsseldorf, Germany
Abstract. Mixed Reality is defined as a combination of Reality, Augmented Reality, Augmented Virtuality and Virtual Reality. This innovative technology can aid the transition between these stages. The enhancement of reality with synthetic images allows us to perform tasks more easily, such as collaboration between people who are at different locations. Collaborative manufacturing, assembly tasks or education can be conducted remotely, even if the collaborators do not physically meet. This paper reviews both past and recent research, identifies benefits and limitations, and extracts design guidelines for the creation of collaborative Mixed Reality applications in technical settings.
1 Introduction

With the advent of affordable tracking and display technologies, Mixed Reality (MR) has recently gained increased media attention and has ignited the imaginations of many prospective users. Considering the progress of research and the enhancement of electronics over recent years, we will inevitably move closer to the ultimate device, which will make it difficult to distinguish between the virtual world and reality. Star Trek's Holodeck can be considered an ultimate display in which even death can take place. Such a system would provide realistic and complete embodied experiences incorporating human senses including haptics, sound or even smell and taste. If it were possible to send this information over a network and recreate it at another place, this would allow for collaboration as if the other person were physically at the place where the help is needed.
As of today, technology has not yet been developed to the level of Star Trek's Holodeck. Olson and Olson [24, 25] summarized that our technology is not yet mature enough, and that "distance matters" for remote collaboration. Many institutes and companies have branches at different locations, which implies that experts of different technical fields are often distributed around a country or even around the world. The foundation of a company lies in the expertise of its employees, and in order to be successful, it is critical that the company or institute shares and exchanges knowledge among colleagues and customers. Remote collaboration is possible via tools such as Skype, DropBox or Evernote, but these forms of remote collaboration usually consist of "downgraded packets of communication" such as text, images or video. However, machines, assembly tasks and 3D CAD data are becoming increasingly complex. Exchanging 3D data is possible, but interacting remotely in real time on real or virtual spatial data is still difficult [24, 25]. At this point MR comes into play, which has the potential to ease many of the problems of today's remote collaboration.

Fig. 1. Reality-Virtuality Continuum by Milgram and Kishino [19].
Milgram and Kishino [19] defined the Reality-Virtuality Continuum, as depicted in Fig. 1, which distinguishes between four different stages: Reality is the perception of the real environment without any technology. Augmented Reality (AR) overlays virtual objects and supplemental information onto the real world. An example of an AR device is the Microsoft HoloLens. Augmented Virtuality (AV) captures real objects and superimposes them into a virtual scene. A video of a real person, shown in a virtual environment, is an example of AV. Virtual Reality (VR) entirely eliminates the real world and shows only computer-generated graphics. Head-Mounted Displays (HMDs) such as the HTC Vive or Oculus Rift are current examples of VR devices. This paper focuses on Mixed Reality, which is defined by Milgram and Kishino as a blend between AR and AV technology.
In the last three decades, research has shown a large number of use cases for collaboration in MR: Supporting assembly tasks over the Internet [2, 3, 7, 23, 34], conducting design reviews of a car by experts who are distributed geographically [12, 22] and the remote investigation of a crime scene [5, 30] are only a few examples of collaborative applications in MR. Especially the domain of Remote Engineering and Virtual Instrumentation can benefit from remote guidance in MR. For example, much specialized, expensive and recent equipment can only be maintained by highly qualified staff, who are often not available on site when a machine becomes inoperative. Furthermore, remote education could assist in the prevention of such emergency cases and help to spread specialized knowledge more easily.
The following sections chronologically describe the progress of research over recent decades. A predominant scenario can be observed in user studies: a remote user helps a local user to complete a task. Although different authors use different terms for the participants of a remote session, we will use the abbreviations RU for the remote user and LU for the local user.
2 Research until the year 2012
A basic function of collaboration in every study examined in this paper is the bi-directional transmission of speech. Every application uses speech as a foundation for communication. However, language can be ambiguous or vague when it describes spatial locations and actions in space. Collaborative task performance increases significantly when speech is combined with physical pointing, as Heiser et al. [8] state. Some of the first collaborative systems using MR were video-mediated applications, as presented by Ishii et al. [9, 10]. A video camera, which was mounted above the participant's workplace, captured the work on the table and transmitted it to other meeting participants on a monitor. A similar system was developed by Kirk and Fraser [11]. They conducted a user study in which the participants had to perform a Lego assembly task. They found that AR not only speeds up the collaborative task, but also makes it easier for the participants (in regard to time and errors) to recall the construction steps in a self-assembly task 24 hours later when they were supported by MR technology instead of only listening to voice commands.
Baird and Barfield [2] and Tang et al. [34] showed that AR reduces the mental workload for assembly tasks. Billinghurst and Kato [4] reviewed the state of research on collaborative MR of the late 1990s and concluded that there are promising applications and ideas, but that they only scratch the surface of the possibilities. It remains to be determined in which areas MR can be used effectively. Furthermore, Billinghurst and Kato mention that the traditional WIMP interface (Windows-Icons-Menus-Pointer) is not appropriate for such a platform and must be reinvented for MR.
Klinker et al. [12] created the system Fata Morgana, which allows for collaborative design reviews of cars and makes it possible to focus on details as well as to compare different designs.
Monahan, McArdle and Bertolotto [20] emphasize the potential of Gamification for educational purposes: "Computer games have always been successful at capturing people's imagination, the most popular of which utilize an immersive 3D environment where gamers take on the role of a character." [20] Li, Yue and Jauregui [14] developed a VR e-Learning system and summarize that virtual "e-Learning environments can maintain students' interest and keep them engaged and motivated in their learning." [14]
Gurevich, Lanir and Cohen [7] developed a remote-controlled robot with wheels, named TeleAdvisor, which carries a camera and a projector on a movable arm. The RU sees the camera image, can remotely adjust the position of the robot and its arm with the aid of a desktop PC, and is able to project drawings and visual cues onto a surface with the projector. A robot carrying a camera has the advantage of delivering a steady image to the RU, while a head-worn camera on the LU leads to jittery recordings, which can cause discomfort for the RU. Furthermore, a system controlled by the RU allows mobility and flexibility and reduces the cognitive overhead for the LU, since the LU does not need to maintain the Point-of-View (PoV) for the RU.
To summarize this section, the transmission of information was often restricted until the year 2012 due to limited sensors, displays, network bandwidth and processing power. Many systems relied on video transfer and were not capable of transmitting the sense of "being there", which restricted the mutual understanding of the problem and the awareness of spatial information.
3 New Technology Introduces a Lasting Change
After the year 2012, more data became available for MR collaboration due to new technology. The acquisition and triangulation of 3D point clouds of the environment became affordable and feasible in real time. A better understanding of the environment results in more robust tracking of MR devices. Furthermore, display technology was enhanced and enabled the development of inexpensive HMDs. Tecchia, Alem and Huang [35] created one of the first systems which is able to record the workplace as well as the arms and hands of the RU and LU with a 3D camera and allows the RU to enter the triangulated and textured virtual scene via an HMD with head tracking. The system revealed improvements in performance over a 2D-based gesture system. Sodhi et al. [31] combined the Microsoft Kinect and a short-range depth sensor, achieved a 3D reconstruction of a desktop-sized workplace and implemented the transmission of a hand avatar to the remote participant. Instead of a simple pointing ray, a hand avatar allows for the execution of more complex gestures, thereby conveying more information between the participants and creating a better mutual understanding.
Moreover, the system by Sodhi et al. [31] is capable of recognizing real surfaces. Understanding the surfaces of the real environment allows for realistic physical interactions, such as collisions of the hand avatar with real objects like a table. If the positions of real surfaces are available within the virtual world, snapping virtual objects to real surfaces is possible as well. This reduces the time needed to place virtual objects, such as furniture or assembly parts, in the scene.
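To make this concrete, the core of such a snapping constraint can be reduced to a plane projection. The following is a minimal sketch of our own, not code from the cited systems; it assumes that detected surfaces are available as planar patches (a point plus a normal):

```python
import numpy as np

def snap_to_surface(obj_pos, surfaces, max_dist=0.05):
    """Snap a virtual object onto the closest detected real surface.

    obj_pos:  3D position of the virtual object (meters).
    surfaces: list of (point, normal) tuples describing planar patches.
    max_dist: snapping threshold; beyond it the object stays free.
    Returns the (possibly corrected) position and the normal of the
    surface the object snapped to, or None if no snap occurred.
    """
    best = None
    for point, normal in surfaces:
        n = normal / np.linalg.norm(normal)
        # Signed distance of the object to the surface plane.
        dist = float(np.dot(obj_pos - point, n))
        if abs(dist) <= max_dist and (best is None or abs(dist) < abs(best[0])):
            best = (dist, n)
    if best is None:
        return obj_pos, None                  # no surface close enough
    dist, n = best
    return obj_pos - dist * n, n              # project onto the plane

# Example: a table top at height 0.75 m facing upwards.
table = [(np.array([0.0, 0.75, 0.0]), np.array([0.0, 1.0, 0.0]))]
pos, normal = snap_to_surface(np.array([0.2, 0.78, 0.1]), table)
print(pos)  # [0.2 0.75 0.1] -- the part now rests on the table
```

The returned surface normal can additionally be used to align the object's up-axis, which is what makes placement feel physically plausible.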
If the environment is available as textured 3D geometry, it can be freely explored by the RU. Tait and Billinghurst [33] created a system which incorporates a textured 3D scan of a workplace. It allows the RU to explore the scene with keyboard and mouse on a monoscopic monitor and to place spatial annotations. It was found that increasing view independence (a fully independent view vs. fixed or frozen views of the scene) leads to faster completion of collaborative tasks and a decrease in time spent on communication during the task. Similar results were found by Lanir et al. [1], who explain: "A remote assistance task is not symmetrical. The helper (RU) usually has most of the knowledge on how to complete the task, while the worker (LU) has the physical hands and tools as well as a better overall view of the environment. Ownership of the PoV (Point-of-View), therefore, does not need to be symmetrical either. It seems that for helper-driven (RU-driven) construction tasks there is more benefit in providing control (of the PoV) to the helper (the RU)" [1].
Oda et al. [23] use Virtual Replicas for assembly tasks. A Virtual Replica is a virtual copy of a real, tracked assembly part. It exists in real life for the LU and is rendered as a 3D model in VR for the RU. The position of the virtual model is constantly synchronized with the real environment. Many assembly parts of machines have complex forms, and in some cases it is difficult for the LU to follow the instructions of the RU in order to achieve the correct rotation and placement of such complex objects. Therefore, virtual replicas, controlled by the RU, can be superimposed in AR for the LU, which eases the mental workload for the task. Oda et al. found that simply demonstrating how to physically align the virtual replica with another machine part is faster than drawing spatial annotations onto the virtual replica as visual guidance for the LU. Oda et al. also employ physical constraints such as snapping of objects to speed up the task, similar to Sodhi et al. [31].
Poelman et al. [30] developed a system which is also capable of building a 3D map of the environment in real time and which focuses on tackling issues in remote collaborative crime scene investigation. Datcu et al. [5] used the system of Poelman et al. and showed that MR supports the Situational Awareness of the RU. Situational Awareness is defined as the perception of a given situation, its comprehension and the prediction of its future state, as described by Endsley [6].
Pejsa et al. [26] created a life-size, AR-based telepresence projection system which employs the Microsoft Kinect 2 to capture the remote scene and recreates it at the other participant's side with the aid of a projector. A benefit of such a system is that nonverbal communication cues, such as facial expressions, can be better perceived compared to systems where the participants wear HMDs, which cover parts of the face.
Müller et al. [21] state that the completion time of remote collaborative tasks, such as finding certain virtual objects in a virtual room, benefits from providing simple Shared Virtual Landmarks. Shared Virtual Landmarks are objects, such as virtual furniture, which help to resolve deictic expressions such as "under the ceiling lamp" or "behind the floating cube".
Piumsomboon et al. [28, 29] developed a system which combines AR and VR. The system scans and textures a real room with a Microsoft HoloLens and shares the copy of the real environment with a remote user, who can enter this copy via an HTC Vive. The hands, fingers, head gaze, eye gaze and Field-of-View (FoV) are tracked and visualized for both users. Piumsomboon et al. reveal that rendering the eye gaze and FoV as additional awareness cues in collaborative tasks can decrease the physical load (measured as distance traveled by the users) and make the task (as subjectively rated by the users) easier. Furthermore, Piumsomboon et al. offer different scalings of the virtual environment. Shrinking the virtual copy of the real environment allows for better orientation and path planning with the help of a miniature model in the user's hand, similar to what Stoakley, Conway and Pausch [32] show.
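The transform behind such a miniature view is compact. The sketch below is our own illustration rather than code from [28, 29] or [32]; it assumes a tracked hand position and scales the world copy around a chosen anchor point:

```python
import numpy as np

def wim_mapping(room_anchor, hand_pos, scale=0.05):
    """Return a function that maps world-space points into a miniature
    copy of the room, scaled around room_anchor and held at hand_pos.

    room_anchor: world-space point that should land in the user's hand.
    hand_pos:    tracked position of the user's hand.
    scale:       miniature scale factor, e.g. 0.05 for a 1:20 model.
    """
    def to_miniature(p):
        # Shrink the room around its anchor, then re-anchor at the hand.
        return (p - room_anchor) * scale + hand_pos
    return to_miniature

# A lamp 3 m away from the anchor appears 15 cm away from the hand.
f = wim_mapping(np.zeros(3), np.array([0.1, 1.2, 0.3]))
print(f(np.array([3.0, 0.0, 0.0])))  # [0.25 1.2 0.3]
```

Updating hand_pos every frame keeps the miniature anchored to the hand while the full-scale scene stays in place.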
In summary, since technology has become advanced enough to scan and understand the surfaces of the environment in real time, important enhancements for collaboration tasks have been achieved and shown to be important for efficient remote work. 3D reconstruction of the participants' body parts and the environment allows for 1) a better spatial understanding of the remote location (free PoV), 2) better communication through the transmission of nonverbal cues (gaze, gestures) and 3) the incorporation of real surfaces with virtual objects (virtual collision, snapping). Furthermore, the 3D reconstruction of the environment implies a better understanding of the environment, which in turn leads to 4) more robust tracking of devices (phones, tablets, HMDs, Virtual Replicas). Finally, 5) new display technologies enable more immersive experiences, which lead to better spatial understanding and problem awareness for both users.
Fig. 2. a) View of a third collaborator through his HoloLens: users design a sailing ship in a local collaboration scenario. One user is immersed via a VR HMD (HTC Vive) while his collaborator uses an AR device (HoloLens). b) VR view of the Vive user: the sailing ship in the middle and the Vive controller at the bottom can be seen.
4 Insights from the Development of a Collaborative Mixed Reality Application
We have developed an application in order to apply recent research outcomes, and we want to share our lessons learned from combining two tracking systems. Our application is an immersive 3D mesh modeling tool which we have developed and evaluated previously [13]. Our tool allows creating 3D meshes with the aid of an HMD and two 6-Degree-of-Freedom controllers and is inspired by common desktop modeling applications such as Blender and Autodesk Maya. We have extended our system with server-client communication which enables users with different MR devices to join a modeling session. Our tool demonstrates how colleagues can collaboratively develop, review and discuss ideas or machine parts.

It is created with the intent to be as flexible as possible. This includes the following: First, the users are free to choose an AR or VR device such as the HTC Vive or Microsoft HoloLens. Second, the user can work with real objects, virtual replicas or entirely virtual items. Third, the system is capable of working locally in the same room, as depicted in Fig. 2a, or remotely at different places.
A use case demonstrates how our system works and gives insights into connecting and merging two different MR systems: An LU, using an HTC Vive, starts the modeling application and hosts a session. An RU scans a fiducial marker with his HoloLens in order to join the session. The marker has two purposes. First, it contains a QR code with connection details such as the IP address of the server. Second, it represents the origin of the tracking space of the remote Vive system. This allows the HoloLens user to place the virtual content of the server (the content of the HTC Vive side) at any place in his real environment. Additionally, this approach also enables the users to synchronize the tracking spaces in the same room by placing the marker at the origin of the Vive tracking system, as shown in Fig. 2a.
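The essence of this marker-based alignment can be sketched in a few lines. The following is an illustrative simplification rather than our actual implementation; it assumes the marker pose has already been detected in HoloLens coordinates and, per the convention above, treats the marker as the origin of the Vive tracking space:

```python
import numpy as np

def vive_to_hololens(marker_pos, marker_rot):
    """Map points from Vive tracking space into HoloLens space.

    Because the fiducial marker lies at the origin of the Vive
    tracking space by convention, the marker's pose observed in
    HoloLens coordinates directly yields the transform between
    the two spaces.

    marker_pos: 3D position of the marker in HoloLens coordinates.
    marker_rot: 3x3 rotation matrix of the marker in HoloLens coordinates.
    """
    def transform(p_vive):
        return marker_rot @ p_vive + marker_pos
    return transform

# Marker detected 2 m in front of the HoloLens, rotated 90 degrees about y.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
rot = np.array([[c, 0.0, s],
                [0.0, 1.0, 0.0],
                [-s, 0.0, c]])
f = vive_to_hololens(np.array([0.0, 0.0, 2.0]), rot)
# A Vive controller at (1, 0, 0) in Vive space appears at ...
print(f(np.array([1.0, 0.0, 0.0])))  # approx. [0. 0. 1.]
```

Applying this transform to all shared content places the Vive scene wherever the marker was put, which is also what allows co-located users to overlap both tracking spaces exactly.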
Our first tests showed that we can successfully merge two different tracking systems, such as the HTC Vive and the HoloLens, but we experienced some issues: The tracking system of the Vive interferes with the tracking system of the HoloLens as soon as the users approach closer than one meter to each other, which leads to tracking errors for the HTC Vive. Furthermore, we experienced that the HoloLens' processing power is limited due to its relatively low technical specifications compared to a workstation, which limits the complexity of the rendered scene. Moreover, we have identified that even the local network connection in our collaboration scenario in the same room introduces noticeable delays which could interfere with natural interaction, nonverbal cues and gestures.
5 Research Agenda, Technology Trends and Outlook
This paper has shown examples of remote collaboration which demonstrate the performance and potential of MR. Although important enhancements and research results have been achieved in recent years, we still have a long way to go until we reach the ultimate display for collaboration: Star Trek's Holodeck.
A major research concern, which up to this point has been scarcely investigated, is the collaboration between multiple teams. Past research has mainly focused on collaboration between two persons, but how to exchange complex data and interact between multiple groups has yet to be researched further. Lukosch et al. [17] have taken the first steps in this direction but stated that further research is necessary. Piirainen, Kolfschoten and Lukosch [27] mention that a difficulty of collaborative remote work in teams is developing a consensus about the nature of the problem and its specification. Situational Awareness cues and Team Awareness cues need to be further outlined.
Another important point on the agenda is how to direct the users' focus to certain events and parts of the environment. Awareness cues are in general an ongoing topic of research and must be investigated further. Müller, Rädle and Reiterer [21] ascertain that a technique is needed to bring events, collaborators or objects that are outside the field of view into the users' focus. Pejsa et al. [26] and Masai et al. [18] emphasize the importance of nonverbal communication cues such as facial expression, posture and proxemics, which are important contributors to empathy, but these cues are still difficult to transmit with today's hardware.
A relatively rarely investigated field of research is comfort in MR, though it is an important area for the usage of an application over a long period of time. Up to this point, a real use case could look like this: A worker conducts a demanding assembly task on an expensive machine for hours via remote guidance. But the weight of the HMD, the usability of the application and the fatigue in his arms from making gestures for interacting with the device lead to growing frustration for the worker, which in turn leads to assembly errors. Piirainen et al. [27] advise not to underestimate user needs and human factors: "From a practical perspective the challenges show that the usability of systems is a key." Today, a general problem and consideration for every MR application is comfort for the user. Only a few years ago, VR and AR hardware used to be bulky and heavy, and research regarding comfort was largely in vain. Research in MR is mainly focused on technical feasibility and compares productivity between non-MR and MR applications. However, comfort and usability are important if long-term usage is required, yet research on comfort is scarce. Ladwig, Herder and Geiger [13] consider and evaluate comfort for MR applications. Lubos et al. [15] revealed important outcomes for comfortable interaction and took first steps in this direction.
Moreover, perceiving virtual haptics is a widely unresolved problem in MR, and researchers try to substitute it with the aid of constraints such as virtual collisions and snapping, as Oda et al. show [23]. Furthermore, Lukosch et al. [16] and Billinghurst [3] mention that further research is needed to determine which particular tasks can be effectively solved and managed with MR.
Better tracking technologies, faster networks, enhanced sensors and faster
processing will move us to the Holodeck and maybe even beyond. Further areas of
research will arise with the advent of new technologies such as machine learning
for object detection and recognition. MR devices of the future will not only
recognize surfaces of the environment, but also detect objects such as machine
parts, tools and humans.
6 Design Guidelines
Past research and our lessons learned revealed many issues which can be condensed into design guidelines for the development of MR applications:

Provide as much information about the remote environment as possible. Video is a minimum requirement. A 3D mesh of the environment is better [5, 23, 28–31]. An updated real-time 3D mesh seems to be the best case.

Provide an independent PoV for investigating the remote scenery. It allows better spatial perception and problem understanding [1, 28, 29, 33, 35].

Provide as many awareness cues as possible. Transmitting speech is fundamental. Information on the posture of collaborators, such as head position, head gaze, eye gaze and FoV [28, 29], is beneficial. For pointing by hand, a virtual ray is sufficient, but a static hand model [31] or even a fully tracked hand model is better and conveys more information such as natural gestures [28, 29]. Provide cues for events happening outside the FoV of the users and provide Shared Virtual Landmarks [21]. To avoid cluttering the view of the users, awareness cues can be turned on and off.

Consider usability and comfort. If long-term usage is desired, take a comfortable interface for the user into account and consider human factors [13, 15, 27].
References

1. J. Lanir et al. Ownership and control of point of view in remote assistance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '13, p. 2243. ACM Press, 2013.
2. K. M. Baird and W. Barﬁeld. Evaluating the eﬀectiveness of augmented reality
displays for a manual assembly task. Virtual Reality, 4(4):250–259, 1999.
3. M. Billinghurst, A. Clark, and G. Lee. A Survey of Augmented Reality. Foundations and Trends in Human-Computer Interaction, 8(2-3):73–272, 2015.
4. M. Billinghurst and H. Kato. Collaborative Mixed Reality. In Mixed Reality, pp.
261–284. Springer Berlin Heidelberg, 1999.
5. D. Datcu, M. Cidota, H. Lukosch, and S. Lukosch. On the usability of augmented
reality for information exchange in teams from the security domain. In Proceedings
- 2014 IEEE Joint Intelligence and Security Informatics Conference, JISIC 2014,
pp. 160–167. IEEE, 2014.
6. M. R. Endsley. Toward a Theory of Situation Awareness in Dynamic Systems. Human Factors: The Journal of the Human Factors and Ergonomics Society, 37(1):32–64, 1995.
7. P. Gurevich, J. Lanir, and B. Cohen. Design and Implementation of TeleAdvisor: a
Projection-Based Augmented Reality System for Remote Collaboration. Computer
Supported Cooperative Work (CSCW), 24(6):527–562, 2015.
8. J. Heiser, B. Tversky, and M. I. A. Silverman. Sketches for and from collaboration.
Visual and Spatial Reasoning in Design III, pp. 69–78, 2004.
9. H. Ishii, M. Kobayashi, and J. Grudin. Integration of inter-personal space and
shared workspace. In Proceedings of the 1992 ACM conference on Computer-
supported cooperative work - CSCW ’92, pp. 33–42. ACM Press, 1992.
10. H. Ishii and N. Miyake. Toward an open shared workspace: computer and video fusion approach of TeamWorkStation. Communications of the ACM, 34(12):37–50, 1991.
11. D. Kirk and D. Fraser. The eﬀects of remote gesturing on distance instruction.
Lawrence Erlbaum Associates, 2005.
12. G. Klinker, A. H. Dutoit, M. Bauer, J. Bayer, V. Novak, and D. Matzke. Fata Mor-
gana - A presentation system for product design. In Proceedings - International
Symposium on Mixed and Augmented Reality, ISMAR 2002, pp. 76–85. IEEE Com-
put. Soc, 2002.
13. P. Ladwig, J. Herder, and C. Geiger. Towards Precise, Fast and Comfortable
Immersive Polygon Mesh Modelling. In ICAT-EGVE 2017 - International Confer-
ence on Artiﬁcial Reality and Telexistence and Eurographics Symposium on Virtual
Environments. The Eurographics Association, 2017.
14. Z. Li, J. Yue, and D. A. G. Jauregui. A new virtual reality environment used for e-
Learning. In 2009 IEEE International Symposium on IT in Medicine & Education,
pp. 445–449. IEEE, 2009.
15. P. Lubos, G. Bruder, O. Ariza, and F. Steinicke. Touching the Sphere: Leveraging
Joint-Centered Kinespheres for Spatial User Interaction. In Proceedings of the 2016
Symposium on Spatial User Interaction, SUI ’16, pp. 13–22. ACM, 2016.
16. S. Lukosch, M. Billinghurst, L. Alem, and K. Kiyokawa. Collaboration in Aug-
mented Reality. Computer Supported Cooperative Work: CSCW: An International
Journal, 24(6):515–525, 2015.
17. S. Lukosch, H. Lukosch, D. Datcu, and M. Cidota. Providing Information on the
Spot: Using Augmented Reality for Situational Awareness in the Security Domain.
Computer Supported Cooperative Work (CSCW), 24(6):613–664, 2015.
18. K. Masai, K. Kunze, M. Sugimoto, and M. Billinghurst. Empathy Glasses. In
Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in
Computing Systems - CHI EA ’16, pp. 1257–1263. ACM Press, New York, New
York, USA, 2016.
19. P. Milgram and F. Kishino. A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information and Systems, E77-D(12):1321–1329, 1994.
20. T. Monahan, G. McArdle, and M. Bertolotto. Virtual reality for collaborative
e-learning. Computers and Education, 50(4):1339–1353, 2008.
21. J. Müller, R. Rädle, and H. Reiterer. Remote Collaboration With Mixed Reality Displays. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI '17, pp. 6481–6486. ACM Press, 2017.
22. Nvidia. Nvidia Holodeck. https://www.nvidia.com/en-us/design-visualization/technologies/holodeck/. Accessed 2018-01-25.
23. O. Oda, C. Elvezio, M. Sukan, S. Feiner, and B. Tversky. Virtual Replicas for
Remote Assistance in Virtual and Augmented Reality. In Proceedings of the 28th
Annual ACM Symposium on User Interface Software & Technology - UIST ’15,
pp. 405–415. ACM Press, 2015.
24. G. M. Olson and J. S. Olson. Distance Matters. Human-Computer Interaction, 15(2-3):139–178, 2000.
25. J. S. Olson and G. M. Olson. How to make distance work work. Interactions, 21(2):28–35, 2014.
26. T. Pejsa, J. Kantor, H. Benko, E. Ofek, and A. D. Wilson. Room2Room: Enabling
Life-Size Telepresence in a Projected Augmented Reality Environment. In Pro-
ceedings of the 19th ACM Conference on Computer-Supported Cooperative Work
& Social Computing - CSCW ’16, pp. 1714–1723. ACM Press, 2016.
27. K. A. Piirainen, G. L. Kolfschoten, and S. Lukosch. The Joint Struggle of Complex
Engineering: A Study of the Challenges of Collaborative Design. International
Journal of Information Technology & Decision Making, 11(6):1087–1125, 2012.
28. T. Piumsomboon, A. Day, B. Ens, Y. Lee, G. Lee, and M. Billinghurst. Exploring Enhancements for Remote Mixed Reality Collaboration. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications - SA '17, pp. 1–5. ACM Press, 2017.
29. T. Piumsomboon, Y. Lee, G. Lee, and M. Billinghurst. CoVAR: a collaborative
virtual and augmented reality system for remote collaboration. In SIGGRAPH
Asia 2017 Emerging Technologies on - SA ’17, pp. 1–2. ACM Press, 2017.
30. R. Poelman, O. Akman, S. Lukosch, and P. Jonker. As if being there: mediated reality for crime scene investigation. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work - CSCW '12, pp. 1267–1276. ACM Press, 2012.
31. R. S. Sodhi, B. R. Jones, D. Forsyth, B. P. Bailey, and G. Maciocci. BeThere: 3D
Mobile Collaboration with Spatial Input. Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems - CHI ’13, pp. 179–188, 2013.
32. R. Stoakley, M. J. Conway, and R. Pausch. Virtual reality on a WIM. In Proceedings
of the SIGCHI conference on Human factors in computing systems - CHI ’95, pp.
265–272. ACM Press, New York, New York, USA, 1995.
33. M. Tait and M. Billinghurst. The Eﬀect of View Independence in a Collaborative
AR System. Computer Supported Cooperative Work: CSCW: An International
Journal, 24(6):563–589, 2015.
34. A. Tang, C. Owen, F. Biocca, and W. Mou. Comparative eﬀectiveness of aug-
mented reality in object assembly. In Proceedings of the conference on Human
factors in computing systems - CHI ’03, p. 73, 2003.
35. F. Tecchia, L. Alem, and W. Huang. 3D Helping Hands : a Gesture Based MR
System for Remote Collaboration. VRCAI - Virtual Reality Continuum and its
Applications in Industry, 1(212):323–328, 2012.