This is the author’s version, to appear in the IEEE VR 2025 conference.
Exploring the Effects of Level of Control in the Initialization of Shared
Whiteboarding Sessions in Collaborative Augmented Reality
Logan Lane1*, Jerald Thomas2, Alexander Giovannelli1, Ibrahim Tahmid1, Doug A. Bowman1
1Center for Human-Computer Interaction, Virginia Tech, USA
2University of Wisconsin-Milwaukee, USA
Figure 1: The DISCRETE CHOICE technique shown in the Office environment. Users can pick from three shared whiteboard
suggestions (Labeled 1, 2, and 3).
ABSTRACT
Augmented Reality (AR) collaboration can benefit from a shared
2D surface, such as a whiteboard. However, many features of each
collaborator’s physical environment must be considered in order to
determine the best placement and shape of the shared surface. We
explored the effects of three methods for beginning a collaborative
whiteboarding session with varying levels of user control: MANUAL,
DISCRETE CHOICE, and AUTOMATIC by conducting a simulated AR
study within Virtual Reality (VR). In the MANUAL method, users
draw their own surfaces directly in the environment until they agree
on the placement; in the DISCRETE CHOICE method, the system
provides three options for whiteboard size and location; and in the
AUTOMATIC method, the system automatically creates a whiteboard
that fits within each collaborator’s environment. We evaluate these
three conditions in a study in which two collaborators used each
method to begin collaboration sessions. After establishing a session,
the users worked together to complete an affinity diagramming
task using the shared whiteboard. We found that the majority of
participants preferred to have direct control during the initialization
of a new collaboration session, despite the additional workload
induced by the Manual method.
Index Terms: Human-centered computing—Human computer
interaction (HCI)—Interaction paradigms—Mixed / augmented
reality; Human-centered computing—Collaborative and social
computing—Empirical studies in collaborative and social comput-
ing;
*e-mail: logantl@vt.edu
1 INTRODUCTION
Synchronous collaboration software such as Zoom1, Microsoft
Teams2, and Webex3 is widely used by the general population
today to conduct business and collaborate with other individuals
from anywhere in the world. This existing collaboration software
allows users to work together on a shared, virtual whiteboard where
any user in the session can contribute, edit, and reference the con-
tent on the whiteboard. Shared whiteboards are useful in that they
allow users to visualize complex topics discussed in a meeting, such
as financial data or math equations, that would otherwise be confusing
if conveyed only verbally. People often
use whiteboards to sketch, brainstorm ideas, and to group or cluster
ideas [3, 17, 21].
As immersive technologies such as AR continue to grow in pop-
ularity and mature technologically, users will increasingly want to
use them to collaborate with others. AR has clear benefits for remote
collaboration, such as
allowing remote collaborators to work alongside one another as if
they were present in the same location, which can be critical for ac-
tivities such as ideation and brainstorming [1,23]. However, the shift
to immersive technologies for collaboration introduces challenges
that will need to be solved to maintain a seamless and effective
collaboration experience.
In traditional collaboration applications designed for 2D displays,
the shared whiteboard typically serves as an anchor point, with col-
laborators’ cursors positioned relative to it. In contrast, immersive
collaborative applications must account for the physical positions
of collaborators in reference to the shared anchor. This becomes
particularly challenging in industrial collaboration meetings, where
1https://www.zoom.com/
2https://www.microsoft.com/en-us/microsoft-teams/group-chat-software-b
3https://www.webex.com
participants often join from vastly different physical environments.
Consequently, designing shared virtual elements and positioning
collaborators in a manner that makes sense in the diverse physical
contexts of all participants is critical. Such alignment steps, al-
though essential in establishing a coherent, shared positional frame
of reference for virtual elements across collaborators [4, 6, 12], add
additional cognitive load to the user.
One specific benefit of AR collaboration is the ability to work
on a shared surface such as a whiteboard; however, determining an
appropriate location, size, and shape of the whiteboard that works
in multiple physical environments is a difficult problem as each
collaborator’s physical environment could have differing amounts
of available space as well as differing room layouts. It could be
possible to scan each user’s environment and determine compatible
characteristics of the whiteboard by analyzing each environment
scan holistically, but this would reduce the amount of control the
user has over the shared whiteboard that they would be collaborating
on.
In order to inform the design of future remote collaboration expe-
riences in AR, we wanted to explore how the level of control during
the initialization process of a collaborative whiteboarding session
affected the overall experience of collaboration. We report on the
findings of a user study in which participants tested three different
conditions (MANUAL, DISCRETE CHOICE, and AUTOMATIC) to be-
gin an AR collaborative session with another remote user. To the best
of our knowledge, little to no research has been conducted regarding
the idea of initialization techniques and assistance for collaboration
sessions when using AR to collaborate remotely. Through conduct-
ing this user study, we sought to answer the following research
question: What is the impact of system-assisted initialization on
the user experience of remote, dyadic shared surface collabora-
tion in augmented reality? Our findings generally showed that
people preferred having control over the location, size, and shape of
the whiteboards they created.
2 RE LATED WORK
2.1 Initialization of CSCW Software
Our literature review did not reveal any work that specifically dealt
with the initialization of CSCW software. This aspect of CSCW
software is under-researched and should be explored further. This is
likely because initialization is straightforward for standard 2D
applications, whereas AR adds complexity, such as aligning
multiple physically distinct spaces, that requires further research.
We hope that this paper will serve as inspiration for future literature
regarding the initialization process when spatially collaborating
using AR.
2.2 Physical Space Alignment
When users are collaborating remotely using AR technologies, the
users’ physical environments must be taken into consideration to
support collaboration while maintaining spatial awareness among all
collaborators. This is a complicated problem to solve, as users are
able to collaborate from any environment with differing sizes, shapes,
and obstacles that do not match their collaborators’ environments.
Aligning physical spaces for collaboration has been a topic of interest
within the research community over the past several years [5,18, 19].
This problem has been approached in a variety of ways with the two
main approaches being manual alignment and automated alignment.
The first method proposed to solve the space alignment problem
is by manually aligning differing physical spaces. Prior literature
has proposed scanning each collaborator’s physical space and then
manually annotating similar areas within the scan which can then be
used to align collaborators [4, 20].
Another common strategy for aligning physical spaces is to create
an automated process that can intelligently analyze scans of each
collaborator’s physical space and find common area overlaps to
align the space to maintain spatial awareness between collaborators.
Automating the alignment process is desirable as manually finding
areas of overlap can be a time consuming and unintuitive process,
especially as the number of collaborators increases.
We see examples of automated alignment in work by
Lehment et al. and Keshavarzi et al., which takes room scans
as input and intelligently finds common areas between collaborators’
spaces [12, 16]. Notably, this work allows
collaborators to move about their environment while reflecting these
movements in a natural way within their collaborator’s environment.
Kim et al. detailed an algorithm that used Object Cluster Regis-
tration to intelligently find overlapping shared space between two
different rooms [14]. Kim et al. also detailed a redirected walking
method that allowed a VR user to appear in an AR user’s environ-
ment naturally by tweaking the translation gains of the movement
of the VR user, thus maintaining spatial awareness [13]. Kang et al.
expanded upon the idea of Kim et al.’s previous work by incorporat-
ing a neural network that could alter avatar movement to maintain
realism [11]. Yoon et al. also used neural networks in a similar
manner [24].
Some existing literature details methods in which collaborators
appear in the spaces of other collaborators, but are confined to an
area or otherwise hindered from moving about the environment in a
natural way. This way, the environments themselves do not need to
be aligned, but rather a common anchor is established within each
collaborator’s environment. Herskovitz et al. provide a toolkit capa-
ble of displaying collaborators through a portal, world-in-miniature
display, or by anchoring them to a common element of both rooms,
such as a chair or table [9]. The idea of anchoring has also been
explored in other works [6, 7, 10]. Yang et al. demonstrate a sys-
tem that also utilizes anchoring to maintain spatial awareness while
providing visual guidance to the collaborators regarding where they
should stand to avoid unrealistic placements such as in the middle
of a table [22].
Collaborating around a shared surface such as a whiteboard does
not require strict environment alignment. Thus, we used the shared
whiteboard as an anchor for the avatars. However, we still must ad-
dress the problem of initializing the whiteboard with an appropriate
location, size, and shape in each environment, in a way that will
facilitate collaboration.
3 DESIGN PRO CE SS OF INITIALIZATION TECHNIQUES
To explore how the level of user control over the initialization of a
shared whiteboard affected their collaboration, we designed three
initialization techniques ranging from fully manual control to fully
automatic initialization. To avoid the limitations of current AR
systems and the dynamic nature of real environments, we prototyped
and evaluated these techniques in VR.
All the techniques result in a shared whiteboard among a group
of collaborators, with each collaborator seeing the whiteboard in
their own virtual environment. Once the whiteboard is established,
we show an avatar for each collaborator that reflects their position
relative to the shared whiteboard in their respective environment.
Doing this allows all collaborators to have a shared understanding
of the other collaborators’ position, gaze, and pointing gestures
toward elements on the shared whiteboard. To keep things simple,
we also assumed that only two users were collaborating during our
prototyping and evaluation.
3.1 Manual Technique
The goal of the manual initialization technique was that each user
should be able to place a whiteboard anywhere in their own environ-
ment. However, the whiteboards would need to have the same size
and shape to be used as shared surfaces.
Our first prototype was the Serial Manual technique. The tech-
nique began by having User 1 create a whiteboard in their envi-
ronment that they felt was in an optimal location and was large
enough to complete the given task. This was done by pointing the
ray emanating from the controller at a point on the wall, holding
the primary trigger on either controller and then dragging the ray to
another point on the wall and releasing the trigger. This would create
a rectangle with the upper-left corner defined by the initial trigger
press and the lower-right corner by the trigger release. User 1 then
confirmed the whiteboard and sent it to User 2 by pointing at the
whiteboard with the controller ray and pressing the primary button.
User 2 would then place the whiteboard in their desired location by
pointing the ray at the whiteboard and holding the grip on the side
of the controller. Once User 2 placed the whiteboard in the desired
area, they would then resize the board so that it would fit in the area
by pointing the ray at the whiteboard and moving the joystick up
and down to uniformly scale the whiteboard. User 2 would then pass
the whiteboard back to User 1 where they could relocate or resize
the whiteboard again. This process repeated until both users were
happy with the location and size of the whiteboard.
After some initial pilot testing, we found that the Serial Manual
technique was cumbersome to use and took a long time to allow
users to begin collaborating, which we felt would ultimately lead to
user frustration.
Thus, our second prototype was the Parallel Manual technique.
In this technique, both collaborators created whiteboards in parallel
in their respective environments, using the same control scheme
described above. After both users created a whiteboard, the system
determined their overlap based on the minimum width and height
of the two individual whiteboards. Both users were then shown an
outline of this size, centered at the location of their original white-
board, representing the current proposal for the shared whiteboard.
Users were free to redraw their individual whiteboards as many
times as necessary to get the desired overlapping, shared whiteboard
size. Once both users were happy with the size of the shared white-
board, each user confirmed the shared whiteboard by pointing the
controller ray at the whiteboard outline and pressing the primary
button. We demonstrate this process in Figure 2. It is worth noting
that the location of either collaborator’s whiteboard did not affect the
other collaborator’s whiteboard position. Both collaborators could
place the whiteboard wherever they wanted within their respective
environments.
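For illustration, the geometry behind the Parallel Manual technique is simple: each proposal is a rectangle defined by the two ray-hit corner points of the drag gesture, and the shared whiteboard size is the minimum width and height over both proposals. The sketch below captures that logic in plain Python (the study software itself was implemented in Unity; the names here are ours).

```python
from dataclasses import dataclass

@dataclass
class Board:
    """An axis-aligned whiteboard rectangle on a user's wall, in meters."""
    width: float
    height: float

def board_from_corners(x1: float, y1: float, x2: float, y2: float) -> Board:
    """Build a board from the drag start and release points on the wall."""
    return Board(width=abs(x2 - x1), height=abs(y2 - y1))

def shared_overlap(a: Board, b: Board) -> Board:
    """Shared whiteboard size: the minimum width and height of the two proposals."""
    return Board(width=min(a.width, b.width), height=min(a.height, b.height))

# Example: User 1 drags a 4.0 m x 2.5 m board, User 2 a 3.5 m x 3.0 m board;
# both users then see a 3.5 m x 2.5 m outline centered on their own board.
user1 = board_from_corners(0.0, 2.5, 4.0, 0.0)
user2 = board_from_corners(1.0, 3.0, 4.5, 0.0)
print(shared_overlap(user1, user2))  # Board(width=3.5, height=2.5)
```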
In our preliminary testing, we found the Parallel Manual tech-
nique to be much quicker and overall more efficient in establishing
a common shared whiteboard that was compatible with both users’
individual environments. Thus, we included it in the user study as
the MANUAL condition.
3.2 C-SAW
Before developing system-assisted initialization techniques, we de-
veloped some rules for the system to follow when determining white-
board location, size, and shape. We propose C-SAW (Collaborative
Surface Algorithm for Whiteboarding), which determines candidate
whiteboard placements and sizes based on the constraints of the
physical environments of a group of users. Because our goal was to
explore the effectiveness of this approach rather than to build a complete
system, C-SAW was not implemented as an intelligent, autonomous
algorithm. Instead, we developed a set of heuristics that a human could
use to create suggested whiteboards
depending on the specific combination of environments that each
collaborator was in. They are as follows:
1. The user should be able to reach the whiteboard
2. The whiteboard should not be blocking something important (e.g., window, door, etc.)
3. The whiteboard should be placed in relatively open space on a wall
4. The whiteboard should not be so narrow or short that content cards do not fit completely on the board
5. The whiteboard should have enough free space in front of it for the user to be able to view the entire whiteboard at once
Figure 2: The process that users follow in the MANUAL condition. A.) User 1 and User 2 each create a whiteboard in their physical environment. B.) Users 1 and 2 are shown the “Overlap” (Represented as the green outline) between their two whiteboards. C.) User 1 alters the size of their whiteboard. The overlap is updated accordingly. D.) Both users confirm the whiteboard and begin collaborating with the same shared whiteboard.
We show an example of C-SAW being applied to two collabora-
tors’ environments in Figure 3.
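In the study, C-SAW was applied by a human researcher rather than implemented as software. Purely as an illustration of how the heuristics could be operationalized, the sketch below scores candidate placements; the representation and function names are our own assumptions, not part of the authors' procedure.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    """One candidate whiteboard placement, as seen in one collaborator's room."""
    width: float            # meters (same for every room, by construction)
    height: float           # meters
    reachable: bool         # heuristic 1: the user can reach the board
    blocks_opening: bool    # heuristic 2: the board covers a window or door
    wall_clearance: float   # heuristic 3: open wall space around the board (m)
    front_clearance: float  # heuristic 5: free floor space in front of it (m)

MIN_CARD_EDGE = 0.25  # heuristic 4: assumed minimum content-card edge (m)

def csaw_score(candidate: list[Placement]) -> float:
    """Score one shared candidate (one Placement per room) against the heuristics.
    Hard constraints reject the candidate; soft constraints accumulate a score."""
    score = 0.0
    for p in candidate:
        if not p.reachable or p.blocks_opening:                  # heuristics 1 and 2
            return float("-inf")
        if p.width < MIN_CARD_EDGE or p.height < MIN_CARD_EDGE:  # heuristic 4
            return float("-inf")
        score += p.wall_clearance + p.front_clearance            # heuristics 3 and 5
    return score

def best_candidate(candidates: list[list[Placement]]) -> list[Placement]:
    """Pick the candidate with the highest score across all collaborators' rooms."""
    return max(candidates, key=csaw_score)
```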
To avoid bias in the size and location of the whiteboard in the
study environments, a researcher who was not affiliated with the
project was consulted to provide the sizes and locations of the white-
boards in the environments used throughout the user study. During
the selection process, the researcher followed the rules of C-SAW to
select whiteboard sizes and locations.
3.3 Automatic
The AUTOMATIC initialization technique is the fully automated
technique that uses C-SAW to look at each collaborator’s respec-
tive environment holistically. There is no user interaction in this
condition. Each collaborator joins the session, and a single shared
whiteboard that is compatible with both collaborators’ environments
is given to them so that collaboration can begin immediately. The
whiteboard provided by the AUTOMATIC technique is the one that
best follows the rules of C-SAW (based on the analysis of the human
researcher), meaning the given whiteboard was typically in an area with
few surrounding objects and on a wall with ample open space.
Figure 3: An example of C-SAW (Specifically, C-SAW working with
the DISCRETE CHOICE technique) being applied to two collabora-
tors’ respective environments. The collaborators are offered three
whiteboard suggestions that fit the constraints of both environments
(Shown as the red, green, and blue numbered rectangles).
3.4 Discrete Choice
In addition to MANUAL and AUTOMATIC, we wanted to provide a
middle-ground technique. The DISCRETE CHOICE technique uses
C-SAW to holistically analyze both collaborators’ environments and
provide three suggestions for whiteboards whose sizes, shapes, and
locations respect the constraints of each collaborator’s envi-
ronment. After viewing the suggestions, the two collaborators work
together to come to an agreement on which whiteboard suggestion to
choose. Each user would point their controller’s ray at a suggestion
and pull the trigger on the controller to make their selection.
If both collaborators selected the same suggestion, the whiteboard
turned green for both users to confirm, much like they did in the
MANUAL condition. However, if they selected different suggestions,
then each user’s own selection was shown in yellow while
their partner’s selection was shown in blue. Collaborators
were free to change their selection before confirming the suggestion
as many times as necessary. After each user confirmed the mutual
suggestion, the users could begin collaborating.
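A minimal sketch of the agreement logic just described, using our own state names (the actual implementation was networked through Unity and Photon): the highlight colors reflect whether the two selections match, and the session only starts once both users have confirmed the same suggestion.

```python
def suggestion_colors(my_choice: int, partner_choice: int) -> dict[int, str]:
    """Return highlight colors for the suggestions from one user's point of view."""
    if my_choice == partner_choice:
        return {my_choice: "green"}              # agreement: ready to confirm
    return {my_choice: "yellow", partner_choice: "blue"}

def session_ready(choices: dict[str, int], confirmed: dict[str, bool]) -> bool:
    """Collaboration starts once both users picked the same suggestion and confirmed."""
    return len(set(choices.values())) == 1 and all(confirmed.values())

# Example: users first disagree, then converge on suggestion 2 and confirm.
print(suggestion_colors(1, 3))                                      # {1: 'yellow', 3: 'blue'}
print(session_ready({"u1": 2, "u2": 2}, {"u1": True, "u2": True}))  # True
```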
3.5 Synchronizing Spatial Awareness
Once participants created or selected a shared whiteboard for each
respective condition, each user was then able to see a virtual repre-
sentation of the other user in their own environment relative to the
whiteboard. Each user’s created whiteboard acts as the “anchor” for
the avatar. For example, if User 1 is two meters from the
whiteboard in their environment, then User 1’s avatar in User 2’s en-
vironment is also two meters away from User 2’s whiteboard.
Additionally, if User 1 points at a sticky note in the bottom-left
corner of their whiteboard, their avatar in User 2’s environment is
also pointing at the same sticky note located on User 2’s whiteboard.
It is also at this point where each user has a “laser pointer” that
they can use to point at sticky notes on their respective whiteboard
from a distance. Similar to how the avatars work, the intersection
of each user’s laser pointer with their whiteboard is reflected on
the opposite user’s whiteboard. User 1 pointing at a note on their
whiteboard will be reflected on User 2’s whiteboard and vice versa.
We show an example of this in Figure 4.
Figure 4: User 1 pointing their laser pointer at the whiteboard in their
environment (Left). User 1’s laser pointer’s position reflected on User
2’s whiteboard (Right).
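Conceptually, this mirroring expresses each user's position (and pointer hit) in the local coordinate frame of their own whiteboard and replays it relative to the partner's whiteboard. A simplified 2D sketch of that coordinate mapping, with our own function names and assuming each board's pose is given by a position and yaw:

```python
import math

def to_board_frame(px, py, board_x, board_y, board_yaw):
    """Express a world-space point relative to a board's origin and facing (2D, radians)."""
    dx, dy = px - board_x, py - board_y
    c, s = math.cos(-board_yaw), math.sin(-board_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

def from_board_frame(lx, ly, board_x, board_y, board_yaw):
    """Place a board-relative point back into another room's world space."""
    c, s = math.cos(board_yaw), math.sin(board_yaw)
    return (board_x + c * lx - s * ly, board_y + s * lx + c * ly)

# User 1 stands 2 m in front of their board; the avatar appears 2 m in front
# of User 2's board, wherever that board happens to be in User 2's room.
local = to_board_frame(5.0, 2.0, 5.0, 0.0, 0.0)          # (0.0, 2.0)
avatar = from_board_frame(*local, 1.0, 3.0, math.pi / 2)  # position in User 2's room
print(local, avatar)
```

The laser pointer works the same way: the hit point on one user's board, expressed in board coordinates, is drawn at the same coordinates on the partner's board.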
4 EXPERIMENT
4.1 Goals
The goal of this study was to understand how the experience of
collaboration was affected by varying the level of control, or, con-
versely, the amount of assistance, provided to the user during the
initial setup process of a shared whiteboard.
Specifically, we examined the following hypotheses:
H1: Users will prefer the DISCRETE CHOICE condition be-
cause of its balance between ease of initialization and flexibil-
ity.
H2: Users will least prefer the AUTOMATIC condition due to
its lack of choice in location.
H3: Users will be more frustrated while utilizing the AUTO-
MATIC condition.
4.2 Experimental Design
The within-subjects study had one primary independent variable:
initialization technique (MANUAL, DISCRETE CHOICE, and AUTO-
MATIC). We also varied each user’s environment (Office, Kitchen,
Bedroom (shown in Figure 5)) and the dataset used in the collabora-
tion task (Sports, Food, History) in order to test each technique in
a variety of scenarios. The combination of technique and environ-
ment was counterbalanced using a balanced Latin square to ensure
that there were no environmental biases. Pairs of participants were
always in different virtual environments for a particular trial, and
each user saw each environment only once. Each participant pair
was located in the same physical space throughout the study. The
two participants were placed on opposite sides of the room with a
barrier between them. Participants were free to speak aloud to one
another as if they were using online voice communication.
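For reference, a balanced Latin square for three conditions can be produced with the standard Williams construction: a cyclic square whose first row follows the pattern 0, 1, n-1, 2, n-2, ..., plus, for an odd number of conditions, the mirror image of each row. A small sketch (the exact counterbalancing tables used in the study are not reproduced here):

```python
def balanced_latin_square(conditions: list[str]) -> list[list[str]]:
    """Williams design: each condition follows every other condition equally often.
    For an odd number of conditions, the mirrored sequences are appended."""
    n = len(conditions)
    # First-row index pattern: 0, 1, n-1, 2, n-2, 3, n-3, ...
    first = [0] + [(i + 1) // 2 if i % 2 == 1 else n - i // 2 for i in range(1, n)]
    rows = [[conditions[(c + shift) % n] for c in first] for shift in range(n)]
    if n % 2 == 1:
        rows += [list(reversed(r)) for r in rows]
    return rows

# Six presentation orders for the three initialization techniques.
for order in balanced_latin_square(["MANUAL", "DISCRETE CHOICE", "AUTOMATIC"]):
    print(order)
```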
Multiple dependent variables were collected. We measured user
preference for the techniques; time to complete initialization; the
size, shape, and location of participant-created whiteboards; and
the participant responses to the post-study interview questions. We
administered the NASA-TLX [8] and the UEQ [15] in order to gauge
how the workload and user experience of collaboration were affected
by the initialization technique. We also administered the SUS ques-
tionnaire [2] after both the MANUAL and DISCRETE CHOICE con-
ditions to gauge the overall usability of the initialization technique.
We did not administer the SUS questionnaire after the participant
pair completed the AUTOMATIC condition because participants
did not have to do anything in order to establish the shared
whiteboard.
4.3 Apparatus
Each participant used a Meta Quest 2 Head-Worn Display (HWD)
and two Meta Touch controllers. The headset has a 1832 x 1920
resolution per eye with a refresh rate of up to 90 Hz.4 We also
used two PCs running Windows 10 to run the VR applications. One
computer had an i9-11900F CPU with an RTX 3080 GPU and 32GB
of RAM. The other PC had an i9-12900K CPU along with an RTX
3070 Ti GPU and 32GB of memory. We also used two Meta Link
cables to connect the HWDs to the PCs. The VR study software
was built with Unity version 2020.3.35f1 and was networked with
Photon PUN 2 to synchronize avatar positions and user interactions.
4https://www.meta.com/quest/products/quest-2/tech-specs/
Figure 5: A top-down view of each of the three study environments.
A.) Office - B.) Kitchen - C.) Bedroom
4.4 Task
In each trial, participants completed two main tasks. The first task
involved establishing a shared whiteboard with the other participant.
The method through which this was accomplished varied based on
the condition that the participant pair was testing.
In the second task, participants worked together to complete
an affinity diagramming task. Affinity diagramming is defined by
Lucero as, “a technique used to externalize, make sense of, and
organize large amounts of unstructured, far-ranging, and seemingly
dissimilar qualitative data” [17]. The second task involved organiz-
ing virtual sticky notes on the shared whiteboard until the participant
pair was satisfied.
We used six datasets (sets of sticky notes) throughout the study.
The first three datasets were used as training data for each of the
three interaction techniques and were not included in any of the data
analyses in subsequent sections of this paper. The other three sets
of data were used in the actual study conditions. These datasets
dealt with food, history, and sports. We chose the food and sports
datasets as they were general enough that participants could work
together to create multiple different groups with the given data. The
history dataset allowed us to give a specific grouping directive to the
participant pairs (ordering the historical events chronologically). The
food and sports sticky notes had images of various food items and
sports equipment, while the history set was text-based and featured
various events throughout the world’s history. The food and history
datasets had 20 entries while the sports dataset only had 10 entries.
Each participant in a pair received half of the items in a dataset
at the beginning of the affinity diagramming task, to ensure that
both users would participate equally—participants could only move
items belonging to them. The pair were free to discuss organization
strategies between themselves. An affinity diagramming task in
progress with the sports dataset is shown in Figure 6.
When spawning into the study environment, each participant was
given their half of the notes on a “note holder”, a rectangular surface
that followed them as they moved around the study environment.
The holder was positioned below their waist so that their vision
would not be occluded by the holder. The holder was pitched at a
27° angle relative to the user to allow them to easily view all the note
cards.
Figure 6: An affinity diagramming task in progress.
Participants could interact with the dataset items by pointing their
controller’s ray at one of the items and pulling and holding the
trigger. Once the participant was holding an item, they could then
point their ray at the shared whiteboard, where an outline showed
where the item would land if they released the trigger at that
moment. Releasing the trigger placed the item on
the whiteboard. Participants could then move the item again using
the same approach. Participants could also point their controller’s
ray at the floor and pull the grip on the controller to teleport around
the environment.
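Geometrically, placing a note amounts to intersecting the controller ray with the whiteboard plane and clamping the hit point to the board's bounds. The sketch below illustrates that computation under our own simplified representation; the study software itself relied on Unity's built-in raycasting.

```python
import numpy as np

def ray_board_hit(origin, direction, board_center, board_normal, width, height):
    """Intersect a controller ray with the whiteboard plane and clamp to the board.
    Returns (u, v) in board coordinates (meters from the board center), or None."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    center, normal = np.asarray(board_center, float), np.asarray(board_normal, float)
    denom = direction @ normal
    if abs(denom) < 1e-6:
        return None                              # ray is parallel to the board
    t = ((center - origin) @ normal) / denom
    if t < 0:
        return None                              # board is behind the controller
    hit = origin + t * direction
    up = np.array([0.0, 1.0, 0.0])               # assume a vertical, wall-mounted board
    right = np.cross(up, normal)
    right /= np.linalg.norm(right)
    u = float(np.clip((hit - center) @ right, -width / 2, width / 2))
    v = float(np.clip((hit - center) @ up, -height / 2, height / 2))
    return u, v

# Controller at the room center pointing at a 3 m x 2 m board on the +Z wall.
print(ray_board_hit([0, 1.5, 0], [0.2, 0, 1], [0, 1.5, 2], [0, 0, -1], 3.0, 2.0))
```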
4.5 Procedure
Upon arriving at the laboratory, participant pairs were welcomed
and introduced to one another before beginning the study. Before
participating in the study, participant pairs completed a pre-study
questionnaire where demographics about each of the participants
were gathered.
After taking care of the pre-study documents, participants were
formally introduced to the study. We began by showing participants
a brief pre-study presentation that introduced the concept of AR
whiteboards and some of the challenges that whiteboards of this
kind could introduce.
After the presentation, participants were introduced to the Meta
Quest 2 and the accompanying Meta Touch Controllers. Participants
were shown the different buttons and triggers that they would need to
use. Participants were then shown how to wear and adjust the fit of
the HWD to a comfortable setting. Participants were then instructed
to put the HWD on and adjust it so that the fit was comfortable and
the content in the headset was legible and easy to see.
Next, participants began training for the first interaction technique
they would be utilizing. After training, the participants would then
complete the actual trial condition using the interaction technique
they had just trained for. This process was repeated for each of the
three interaction techniques.
After completing the trial for a given condition, participants com-
pleted surveys regarding both the experience of establishing a shared
whiteboard as well as how the initialization technique affected the
overall experience of grouping the sticky notes. Participants only
completed one trial for each of the techniques.
After completing all three interaction techniques, participants
participated in a joint post-study interview in which they were asked
questions regarding their experience establishing their shared white-
boards for each of the three conditions; their verbal responses were
recorded. Participants were asked questions about their most and
least preferred conditions, the condition that produced the most ideal
whiteboard, negotiation strategies during initialization between them
and their partner, the pros and cons of each condition, and their
thoughts on the importance of whiteboard size, shape, and loca-
tion. After concluding the post-study interview, we thanked both
participants for their time and concluded the study session.
4.6 Participants
After receiving IRB approval from our university, we recruited 36
participants (27 Male, 9 Female) for a total of 18 pairs of partic-
ipants. Participants were 18 years of age or older, with normal
or corrected vision (i.e., contacts). Participants who wore glasses
were not included in the study. Participants were recruited from
various computer science and Human-Computer Interaction email
lists. Participants were between the ages of 19 and 35 (M=23.08,
SD=4.17).
Concerning prior experience, 16 of our participants had never
used AR before, 12 had used AR 1-3 times, 4 had used AR 5-10
times, and 4 had used AR more than 10 times. Five had never used
VR, 18 had used VR 1-3 times, 5 had used VR 5-10 times, and 8 had
used VR more than 10 times. Twelve participants said that they had
never met their study partner, 3 said that they were acquaintances
with their study partner, and 21 said that they were friends with their
study partner.
5 RE SULTS
5.1 Technique Preference
During the post-study interview, participants were polled on their
favorite and least favorite initialization techniques.
Figure 7: Participant rankings of each of the initialization techniques
As Figure 7 shows, half of the participants (18) ranked the MANUAL
condition as their first choice. Eight ranked the DISCRETE CHOICE
technique first, while ten ranked the AUTOMATIC condition as their
favorite. We found little difference between the rankings of each of
the initialization techniques. A chi-square test (χ2(4) = 7.33, p = 0.12)
revealed no significant differences in the rankings of the techniques.
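The reported comparison corresponds to a chi-square test on the 3 x 3 table of first-, second-, and third-place counts, which has four degrees of freedom. A sketch of that analysis; only the first-place counts (18, 8, 10) come from the paper, and the remaining rows are placeholder values, so the resulting statistic will not match the reported one exactly.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: 1st, 2nd, 3rd place; columns: MANUAL, DISCRETE CHOICE, AUTOMATIC.
# Only the first row is taken from the reported results; the rest are placeholders.
rankings = np.array([
    [18,  8, 10],   # first-place counts reported in the paper
    [10, 16, 10],   # hypothetical
    [ 8, 12, 16],   # hypothetical
])

chi2, p, dof, expected = chi2_contingency(rankings)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2f}")
```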
5.2 Questionnaire Results
We measured average SUS scores of 80.42 and 84.31 for the MAN-
UAL and DISCRETE CHOICE conditions, respectively.
We used unweighted NASA-TLX scores in our analysis. The
overall workload scores were 3.61, 3.57, and 3.76 for the MANUAL,
DISCRETE CHOICE, and AUTOMATIC conditions, respectively. The
NASA-TLX subscale values had very similar results across all three
of the study conditions. The Mental Demand, Physical Demand,
Temporal Demand, Effort, and Frustration subscales all had low
values that never exceeded 3.6. Likewise, the Performance subscale
never dipped below 8.3 for any of the conditions.
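For reference, the unweighted ("raw") NASA-TLX score is simply the mean of the subscale ratings, skipping the pairwise-weighting step of the full TLX. A minimal sketch with made-up ratings; how the Performance subscale is anchored and aggregated here is our assumption, not the authors' exact procedure.

```python
def raw_tlx(ratings: dict[str, float]) -> float:
    """Unweighted ('raw') NASA-TLX: the plain mean of the subscale ratings,
    with no pairwise-comparison weights applied."""
    return sum(ratings.values()) / len(ratings)

# Hypothetical ratings for one participant in one condition (not study data).
sample = {"Mental Demand": 3, "Physical Demand": 2, "Temporal Demand": 3,
          "Performance": 2, "Effort": 4, "Frustration": 2}
print(round(raw_tlx(sample), 2))  # 2.67
```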
The UEQ responses were scored by summing each participant’s
item responses within each of the six scales (i.e., Attractiveness,
Perspicuity, Efficiency, Dependability, Stimulation, and Novelty)
for each condition. We found
that each of the scales produced very similar scores across all three
conditions with no scale scoring lower than 3.31 and no higher than
5.58.
5.3 Time to Complete Initialization
After participants completed the initialization process, we marked
the exact time that the process completed. We found that on average
it took participants 45 seconds to complete the initialization for the
MAN UAL condition. In addition, we found that it took participants
an average of 26 seconds to complete initialization in the DISCRETE
CHOICE condition. It should be noted that average times were not
calculated for the AUTOMATIC condition as there was no input
required from the participant for initialization.
5.4 Data and Task Influence on Whiteboard Characteristics
Participants exhibited behavior suggesting that the dataset directly
influenced the characteristics of the whiteboards participants either
created or selected. Quantitatively, we observed that the average
size of the whiteboards created by participants in the MANUAL
condition increased as the number of items to group increased.
Specifically, the average whiteboard created with the ten-item sports
dataset was smaller (3.5m x 1.93m) than the average created with
the 20-item food dataset (3.94m x 2.22m).
We show the average sizes of the whiteboards overlaid on one an-
other in Figure 8.
Figure 8: Average shapes and relative sizes of whiteboards created
by the participants in the MANUAL condition.
However, the amount of data was not the only influence on the char-
acteristics of created whiteboards. Participant pairs were instructed
to work together to order the history dataset chronologically. As a
result, we found that the average whiteboard for this dataset was
wider and less tall (4.78m x 1.84m) compared to the whiteboards
that used the food and sports datasets. Participants created thin and
long whiteboards that resembled a timeline as a result of being given
a timeline task. We also observed that 10 out of 18 participant pairs
chose a whiteboard with a larger aspect ratio during the DISCRETE
CHOICE condition.
6 DISCUSSION
The study was designed to answer the following research question:
“What is the impact of system-assisted initialization on the user
experience of remote, dyadic shared surface collaboration in
augmented reality?”
6.1 Tradeoffs Among Initialization Techniques
Prior to conducting the study, we hypothesized that participants
would prefer the DISCRETE CHOICE condition because of the bal-
ance it provided between ease of initialization and flexibility (H1).
We also hypothesized that participants would least prefer the AU-
TOMATIC condition because of the lack of choice in whiteboard
location (H2). We found no significant difference between the tech-
nique rankings. DISCRETE CHOICE actually had the fewest first-
place rankings, while MANUAL had the most. Thus, the results
do not support H1 or H2. We also found that participants were not
frustrated with the AUTOMATIC condition, which does not support
H3.
The interview responses shed some light on why participants
ranked the conditions the way that they did. Half of the participants
stated that they preferred the MANUAL technique because of the
control and freedom that the condition provided them. P9 said, “I
like the one where you could draw your own the best because,
well, for the task...It was better to be able to customize the size
of your own board. Like we wanted a longer timeline ...”
Participants also mentioned liking the freedom to create white-
boards specifically tailored for the given dataset in the MANUAL
condition. Whiteboards became wider and less tall with the history
dataset (timeline task), and larger with the datasets containing more
items, as described in Section 5.4. Some participants told us during
the post-study interviews that they attempted to create a whiteboard
that resembled a timeline. P29 commented on the whiteboard shape
being influenced by the task at hand, “... let’s say you were to put
something like as a long rectangle or something like that. A lot
of people like consider that to be like chronological or like a
timeline or something.”
On the other hand, a large number of the participants found the
workflow in MANUAL to be unintuitive and difficult to use. Nearly
all of the participants struggled initially with the technique, even
if they did not explicitly say so during the post-study interview.
We found that on average a participant pair together created 10
(M=10.44, SD=7.76) different whiteboards prior to agreeing on a
whiteboard, which suggests difficulty in using the technique. During
the post-study interview, seven participants explicitly mentioned the
complexity of the MANUAL condition as a reason for ranking it as
their least favorite. P3 said, “Probably the last one [MANUAL].
The overlapping of the whiteboard thing feels a little complicated.
I mean, I didn’t have any trouble with it, but I feel like if more
people were doing that, I feel like it takes a little bit more time.
It’s, you know, not a super huge amount of advantage...”
Similarly, five participants mentioned that they did not like the
MANUAL condition because of the time required to establish a ses-
sion with the other collaborator. P1 said, “. . . I guess least favorite
would be the first one [MANUAL]. Just because of how many
steps you have to do to get a shared collaborative space [White-
board].” On average, it took participant pairs 45 seconds to begin
collaborating in the MANUAL condition.
While we did not find a statistically significant difference
in the participant rankings of the three conditions, the ranking
data, as well as comments made to us during the post-study
interview, suggest that collaborators do prefer to have
explicit control over the location, size, and shape of shared white-
boards, but that our design needs further improvements in efficiency
and understandability.
It is also worth noting that the majority of the technical issues
during the study occurred during the MANUAL condition. There
were two main technical issues encountered during the condition.
The first issue was that occasionally the created whiteboards would
not be passed to the other respective collaborator’s Unity instance.
This was likely caused by network instability and a restart of the
study software would resolve the issue. The other issue occurred
when participants were attempting to create their whiteboard on
a wall. Participants would sometimes create whiteboards so large
that the corner of the whiteboard would begin to encroach into the
corner of the adjacent wall, which would prevent the whiteboard from
being created. This led to visible frustration from the participants.
These two issues may have influenced participant perception of the
MANUAL condition.
Regarding the DISCRETE CHOICE condition, a small subset of
participants seemed to like having multiple options to choose from
within their respective environments. In the post-study interview, five
participants explicitly mentioned liking the fact that the DISCRETE
CHOICE condition provided them with options for whiteboard size
and location. P11 said, “. . . the [DISCRETE CHOICE] one, I think
that is the best. It still gives you to take some decision where you
want to ... You can actually just pick the right position, and you
don’t have to do a lot ... to draw it on the walls.”
One aspect of the DISCRETE CHOICE condition that participants
sometimes struggled with was the perceived uselessness of some of
the whiteboard suggestions provided during the condition. We at-
tempted to provide a variety of whiteboards that had different shapes
and sizes while still conforming to the requirements of the individual
environments that each participant was in. However, we found that
no participants ever selected the whiteboard choices whose heights
were greater than their widths (see Figure 9). We expected that some
participant pairs would prefer to use such a whiteboard during the
timeline task to order events from the top to the bottom. During the
post-study interview, some participants said they could never see a
potential scenario of using the vertical whiteboard. P28 said, “... if
the vertical board was horizontal, like the one that was next to it
[horizontal board] was still better than having the vertical one
that went like straight to the ground.”
Figure 9: The whiteboard suggestions selected by each participant
pair during the DISCRETE CHOICE condition.
During the interview, 10 of the 36 participants mentioned that
they believed that the DISCRETE CHOICE condition took the longest
to begin collaborating. Some did not like the concept of choosing
suggestions at all, while others had issues agreeing on a common,
shared whiteboard suggestion to use. One participant pair in par-
ticular (P33/34) actually argued over which suggestion to use, and
it took them 77 seconds to begin collaboration (compared to the
average time in the DISCRETE CHOICE condition of 26 seconds).
It could be that participants would find it less frustrating to exert
more effort in the MANUAL condition or rely on the AUTOMATIC
condition to provide the ideal whiteboard rather than debate with
their partner over which suggestion to use.
With the food and sports datasets, participants told us that during
the MANUAL and DISCRETE CHOICE conditions, they often just
went with the largest whiteboard they could either create or select.
When asked which condition produced the most ideal whiteboard,
P4 said, “Probably manual, but that’s just because we can make
it as big as possible.” P4’s collaborator P3 said this when asked if
the suggestions in the DISCRETE CHOICE condition were similar
to what they would have created themselves: “I think they differ
just because I probably would have gone for the biggest surface
area possible and just used whichever part of it I needed.” This
suggests that it might be attractive to allow users to resize the sug-
gestions provided by the DISCRETE CHOICE technique as users
could find the “best” suggestion and then tweak the dimensions of
the whiteboard to fit their exact environment. It also suggests that
whiteboard size should perhaps be given the highest weight in the
C-SAW algorithm.
Because the AUTOMATIC condition had the least amount of user
involvement, participants ultimately had the least to say regarding
this condition. One aspect that participants seemed to like was
having the shared whiteboard ready for collaboration with no input or
work required by them. Ten participants made comments regarding
the convenience and simplicity of the AUTOMATIC condition. P7
said, “I think the auto is definitely the most efficient, like for just
getting right into it, right into actually doing the task.”
Participants were satisfied overall with the whiteboard provided
by the AUTOMATIC condition, with all 36 participants mentioning
during the post-study interview that they were happy with the white-
board provided. However, some participants did not care for the location
of the whiteboard, with 3 of 36 participants explicitly mentioning
poor location during the post-study interview. Otherwise, any other
negative feedback regarding the AUTOMATIC condition was related
to the characteristics of the condition itself (i.e., having no control
over the whiteboard for collaboration).
On the other hand, participants also said that the AUTOMATIC
condition was not flexible at all and that the whiteboard chosen by
the technique sometimes had issues, such as being partially occluded
by furniture or other objects within the environment. Eleven partic-
ipants explicitly mentioned one or both of these issues during the
post-study interview. P30 said AUTOMATIC was their least favorite
condition, “because like they only gave me one option. But with
the second one [DISCRETE CHOICE], they gave me three options.”
P31 commented, “I felt like it [the whiteboard] was placed in an
area that was near furniture and ... it just felt like the furniture
was in the way of viewing the whiteboard sometimes so you have
to get extra close to it.” Thus, future C-SAW algorithms need to
be intelligent in order to make appropriate selections of whiteboard
location, size, and shape. It may be worth considering some kind
of “manual override” option in the DISCRETE CHOICE and AUTO-
MATIC conditions where the user can override the suggestion and
make their own whiteboard. In general, our data suggest that partici-
pants liked automatically being able to collaborate with no input or
effort required from them, as long as the algorithms selecting the
whiteboard location, size, and shape work well.
6.2 Design Considerations for Collaborative Session Initialization
When designing the initialization experience for a collaboration
session, there are some design considerations that are apparent from
the results of our study.
First, we suggest that the best initialization technique for shared
surfaces could be a combination of the three conditions tested in
the study. Our data showed that both manual control and intelligent
system suggestions could be effective and preferred. We envision
a new technique that would use an intelligent C-SAW algorithm
to offer one whiteboard suggestion based on the task or data the
collaborators were working on. If collaborators do not like the initial
whiteboard, they could then select from other intelligent suggestions
or modify any of the suggestions manually. This would give col-
laborators the flexibility to obtain a whiteboard that provides the
best experience for collaboration. We propose the prototyping and
evaluation of this approach as future work.
The second design consideration for collaborative initialization
techniques is keeping all collaborators aware of the actions and data
elements of other collaborators in the session. We observed that
34 of our 36 participants placed all of their data onto the shared
whiteboard prior to grouping them because it was easier to discuss
grouping strategies when all the data was already visible to both
users. Future AR collaboration systems should include mechanisms
that allow users to see the data to be organized or worked with in a
session to keep all collaborators in the loop without making extra
work for them.
Awareness of collaborator actions is also an important issue. In
the MA NUAL condition, participants were unable to see the size and
shape of the whiteboard that their collaborator created. Instead, they
could only see the overlap after both users had created their white-
boards. Three participants explicitly mentioned how useful it would
be to be able to see the dimensions and shape of the whiteboard
that they were creating. P2 commented, “When you’re trying to
match up a whiteboard ... it would be better if you ... see what
the other person’s drawing [Whiteboard] is like.” Keeping both
collaborators in the loop would aid in the negotiation of the shared
surface, reducing the number of times the whiteboards would need
to be edited and saving time overall.
7 LIMITATIONS & FUTURE WORK
This work had several limitations. First, our study was conducted
by simulating AR within VR. A study similar to ours should be
conducted with an AR HWD to see if the results from our study
change at all.
Our study and whiteboard collaboration software were designed
to simulate two individuals working together at a physical white-
board; thus, we added the requirement that whiteboards needed to
be exactly the same and fit within each user’s respective environ-
ment. Therefore, our system did not include features commonly
found in digital whiteboard collaboration software, such as “infinite”
scrollable whiteboards. A future study should be conducted that
includes the affordances provided by digital whiteboards to see how
the results of the study are altered.
In addition, further refinements to the control scheme in the
MANUAL and DISCRETE CHOICE conditions would likely improve
participant understanding of those conditions and could influence
participant condition preference.
Finally, our work only evaluated initialization techniques in the
context of an affinity diagramming task. Other shared surface tasks,
such as presentation, annotation, or sketching, should be studied in
the future.
8 CONCLUSION
Making AR collaboration more user-friendly will lead to increased
adoption of AR in industrial settings. This was a major reason we
focused on the initialization aspect of a new collaborative session, as
a poor setup experience could lead to user frustration and a hesitance
to use AR for collaboration on projects within industry.
In this work, we looked at how different levels of control dur-
ing the initialization phase of a collaborative whiteboarding session
affected the overall collaboration experience. We tested three condi-
tions (MANUAL, DISCRETE CHOICE, and AUTOMATIC) and
found that participants overall preferred having control over the size,
shape, and location of whiteboards within their environment. Our
qualitative evidence also suggested that users would be satisfied with
an AUTOMATIC technique as long as the algorithm selecting the
whiteboard position, size, and shape was intelligent enough to make
a choice that matches the expectations of the users.
REFERENCES
[1] S. Aseeri and V. Interrante. The Influence of Avatar Representation on Interpersonal Communication in Virtual Social Environments. IEEE Transactions on Visualization and Computer Graphics, 27(5):2608–2617, May 2021. doi: 10.1109/TVCG.2021.3067783
[2] J. Brooke. SUS - A quick and dirty usability scale.
[3] M. Cherubini, G. Venolia, R. DeLine, and A. J. Ko. Let’s go to the whiteboard: how and why software developers use drawings. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 557–566. ACM, San Jose, California, USA, Apr. 2007. doi: 10.1145/1240624.1240714
[4] B. J. Congdon, T. Wang, and A. Steed. Merging environments for shared spaces in mixed reality. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, pp. 1–8. ACM, Tokyo, Japan, Nov. 2018. doi: 10.1145/3281505.3281544
[5] B. Ens, J. Lanir, A. Tang, S. Bateman, G. Lee, T. Piumsomboon, and M. Billinghurst. Revisiting collaboration through mixed reality: The evolution of groupware. International Journal of Human-Computer Studies, 131:81–98, Nov. 2019. doi: 10.1016/j.ijhcs.2019.05.011
[6] D. I. Fink, J. Zagermann, H. Reiterer, and H.-C. Jetter. Re-locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces. Proceedings of the ACM on Human-Computer Interaction, 6(ISS):1–30, Nov. 2022. doi: 10.1145/3567709
[7] J. E. S. Grønbæk, K. Pfeuffer, E. Velloso, M. Astrup, M. I. S. Pedersen, M. Kjær, G. Leiva, and H. Gellersen. Partially Blended Realities: Aligning Dissimilar Spaces for Distributed Mixed Reality Meetings. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1–16. ACM, Hamburg, Germany, Apr. 2023. doi: 10.1145/3544548.3581515
[8] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Advances in Psychology, vol. 52, pp. 139–183. Elsevier, 1988. doi: 10.1016/S0166-4115(08)62386-9
[9] J. Herskovitz, Y. F. Cheng, A. Guo, A. P. Sample, and M. Nebeling. XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration. Proceedings of the ACM on Human-Computer Interaction, 6(ISS):277–302, Nov. 2022. doi: 10.1145/3567721
[10] X. Huang and R. Xiao. SurfShare: Lightweight Spatially Consistent Physical Surface and Virtual Replica Sharing with Head-mounted Mixed-Reality. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7(4):1–24, Dec. 2023. doi: 10.1145/3631418
[11] J. Kang, D. Yang, T. Kim, Y. Lee, and S.-H. Lee. Real-time Retargeting of Deictic Motion to Virtual Avatars for Augmented Reality Telepresence. In 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 885–893. IEEE, Sydney, Australia, Oct. 2023. doi: 10.1109/ISMAR59233.2023.00104
[12] M. Keshavarzi, A. Y. Yang, W. Ko, and L. Caldas. Optimization and Manipulation of Contextual Mutual Spaces for Multi-User Virtual and Augmented Reality Interaction. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 353–362, Mar. 2020. arXiv:1910.05998 [cs]. doi: 10.1109/VR46266.2020.00055
[13] D. Kim, J.-e. Shin, J. Lee, and W. Woo. Adjusting Relative Translation Gains According to Space Size in Redirected Walking for Mixed Reality Mutual Space Generation. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), pp. 653–660. IEEE, Lisboa, Portugal, Mar. 2021. doi: 10.1109/VR50410.2021.00091
[14] S. Kim, D. Kim, J.-E. Shin, and W. Woo. Object Cluster Registration of Dissimilar Rooms Using Geometric Spatial Affordance Graph to Generate Shared Virtual Spaces. In 2024 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 796–805. IEEE, Orlando, FL, USA, Mar. 2024. doi: 10.1109/VR58804.2024.00099
[15] B. Laugwitz, T. Held, and M. Schrepp. Construction and Evaluation of a User Experience Questionnaire. In A. Holzinger, ed., HCI and Usability for Education and Work, vol. 5298, pp. 63–76. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008. Series Title: Lecture Notes in Computer Science. doi: 10.1007/978-3-540-89350-9_6
[16] N. H. Lehment, D. Merget, and G. Rigoll. Creating automatically aligned consensus realities for AR videoconferencing. In 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 201–206. IEEE, Munich, Germany, Sept. 2014. doi: 10.1109/ISMAR.2014.6948428
[17] A. Lucero. Using Affinity Diagrams to Evaluate Interactive Prototypes. In J. Abascal, S. Barbosa, M. Fetter, T. Gross, P. Palanque, and M. Winckler, eds., Human-Computer Interaction – INTERACT 2015, vol. 9297, pp. 231–248. Springer International Publishing, Cham, 2015. Series Title: Lecture Notes in Computer Science. doi: 10.1007/978-3-319-22668-2_19
[18] B. Marques, S. Silva, J. Alves, A. Rocha, P. Dias, and B. S. Santos. Remote collaboration in maintenance contexts using augmented reality: insights from a participatory process. International Journal on Interactive Design and Manufacturing (IJIDeM), 16(1):419–438, Mar. 2022. doi: 10.1007/s12008-021-00798-6
[19] B. Marques, A. Teixeira, S. Silva, J. Alves, P. Dias, and B. S. Santos. A critical analysis on remote collaboration mediated by Augmented Reality: Making a case for improved characterization and evaluation of the collaborative process. Computers & Graphics, 102:619–633, Feb. 2022. doi: 10.1016/j.cag.2021.08.006
[20] T. Pejsa, J. Kantor, H. Benko, E. Ofek, and A. Wilson. Room2Room: Enabling Life-Size Telepresence in a Projected Augmented Reality Environment. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pp. 1716–1725. ACM, San Francisco, California, USA, Feb. 2016. doi: 10.1145/2818048.2819965
[21] J. Walny, S. Carpendale, N. Henry Riche, G. Venolia, and P. Fawcett. Visual Thinking In Action: Visualizations As Used On Whiteboards. IEEE Transactions on Visualization and Computer Graphics, 17(12):2508–2517, Dec. 2011. doi: 10.1109/TVCG.2011.251
[22] D. Yang, J. Kang, T. Kim, and S.-H. Lee. Visual Guidance for User Placement in Avatar-Mediated Telepresence between Dissimilar Spaces. IEEE Transactions on Visualization and Computer Graphics, pp. 1–14, 2024. doi: 10.1109/TVCG.2024.3354256
[23] B. Yoon, H.-i. Kim, G. A. Lee, M. Billinghurst, and W. Woo. The Effect of Avatar Appearance on Social Presence in an Augmented Reality Remote Collaboration. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 547–556. IEEE, Osaka, Japan, Mar. 2019. doi: 10.1109/VR.2019.8797719
[24] L. Yoon, D. Yang, J. Kim, C. Chung, and S.-H. Lee. Placement Retargeting of Virtual Avatars to Dissimilar Indoor Environments. IEEE Transactions on Visualization and Computer Graphics, 28(3):1619–1633, Mar. 2022. doi: 10.1109/TVCG.2020.3018458