XRgonomics: Facilitating the Creation of Ergonomic 3D Interfaces
João Belo (joaobelo@cs.au.dk), Aarhus University, Denmark
Anna Maria Feit, ETH Zürich, Switzerland; Saarland University, Saarland Informatics Campus, Germany
Tiare Feuchtner, Aarhus University, Denmark; Vienna University of Technology, Austria
Kaj Grønbæk, Aarhus University, Denmark
Figure 1: The XRgonomics toolkit aims to facilitate the design of ergonomic 3D UIs, common in mixed reality applications
(left). We use a user’s physiological model to compute the ergonomic cost of interaction at each reachable position in the
interaction space (center). In XRgonomics, creators can visualize this cost through colored voxels in the interaction space: red
indicates high-cost and blue indicates low-cost areas (right).
ABSTRACT
Arm discomfort is a common issue in Cross Reality applications involving prolonged mid-air interaction. Solving this problem is difficult because of the lack of tools and guidelines for 3D user interface design. Therefore, we propose a method to make existing ergonomic metrics available to creators during design by estimating the interaction cost at each reachable position in the user's environment. We present XRgonomics, a toolkit to visualize the interaction cost and make it available at runtime, allowing creators
to identify UI positions that optimize users' comfort. Two scenarios show how the toolkit can support 3D UI design and dynamic adaptation of UIs based on spatial constraints. We present results from a walkthrough demonstration, which highlight the potential of XRgonomics to make ergonomics metrics accessible during the design and development of 3D UIs. Finally, we discuss how the toolkit may address design goals beyond ergonomics.
CCS CONCEPTS
Human-centered computing → Mixed / augmented reality; Virtual reality; Systems and tools for interaction design; Accessibility design and evaluation methods; Walkthrough evaluations; Graphical user interfaces.
KEYWORDS
3D User Interfaces, Ergonomics, Toolkit, Computational Interaction,
Optimization, Adaptive User Interfaces, Mid-air Interaction
ACM Reference Format:
João Belo, Anna Maria Feit, Tiare Feuchtner, and Kaj Grønbæk. 2021. XRgonomics: Facilitating the Creation of Ergonomic 3D Interfaces. In CHI Conference on Human Factors in Computing Systems (CHI '21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3411764.3445349
1 INTRODUCTION
Cross Reality (XR) technologies are becoming mainstream as hardware gets more accessible, resulting in new applications across different sectors [10]. Despite the shift in interaction paradigms (e.g., from mouse input to mid-air interaction with controllers), interface elements and design guidelines for XR User Interfaces (UIs) are often inspired by 2D UI design. This influence can negatively affect user experience (UX) [28]. In particular, recent literature shows that creators struggle to address the physical aspects of XR experiences [1]. Existing challenges involve designing the posture of users and reducing fatigue. Remarkably, this problem persists even though substantial research in the HCI community has focused on mid-air interactions in the past decade, proposing design guidelines and evaluation metrics. A possible explanation is that these are difficult to apply during design and development of XR applications because:
- Proposed metrics [23] and models [25] focus on evaluating mid-air interactions that already exist but do not directly support the creation of new 3D UIs.
- General guidelines [3, 23] do not apply to the dynamic nature of MR applications that need to adapt constantly to the user's context [30].
To address these issues, we propose a method to make existing ergonomics metrics accessible to creators during design. We use a physiological model of the arm to assign a cost of interaction to any point in the user's reachable 3D space, which we call the ergonomic cost. Its computation comprises the following steps:
(1) Discretization of the interaction space – transfer of the continuous interaction space into a discrete representation.
(2) Computation of arm poses – computation of multiple arm poses for each position in the interaction space using Inverse Kinematics (IK).
(3) Computation of ergonomic cost – calculation of the ergonomic cost for each arm pose using existing metrics and heuristics that assess ergonomics.
To make our method accessible to creators, we introduce XRgonomics - a toolkit to compute and visualize the ergonomic cost of the user's 3D interaction space. It comprises two major components: a Graphical User Interface (GUI) and an Application Programming Interface (API). The GUI allows creators to visualize the ergonomic cost associated with each position in the interaction space. The API gives access to this data at runtime to support the development of adaptive interfaces. XRgonomics does not require any specifications about the XR application, making it easy to use during various design processes. To achieve this, we simplify the computation of the ergonomic cost by considering only static arm poses. We disregard users' arm motion between points of interaction, which is difficult for creators to predict [1].
Two scenarios show how the toolkit can support the design of static UI elements and dynamic adaptation of UIs based on spatial constraints. To assess the usefulness of the toolkit, we present our findings from a walkthrough demonstration conducted with UI design experts. Finally, we discuss the potential of XRgonomics to address design goals beyond ergonomics. All the source code is available at: https://github.com/joaobelo92/xrgonomics.
2 BACKGROUND AND RELATED WORK
2.1 Designing for ergonomics
2.1.1 Ergonomic factors in physical workstation design. Assessing ergonomic factors plays an important role when designing physical spaces, such as workstations, cars, and terminals. Much prior work in this domain estimates discomfort and ergonomic issues based on simple heuristics, such as joint angles. An example is RULA, a survey method for investigating work-related upper limb disorders [32]. RULA records working postures and attributes scores depending on risk factors. It assesses the risk for upper limb disorders considering aspects such as arm poses, movements, and forces.
Analysis of robot workspaces shares challenges also found in ergonomics design. For instance, Zacharias et al. [44] proposed a method to show which positions are easy to reach for robot arms. A significant challenge inherent to these scenarios is the limitation imposed by the agent's physical environment. For example, the physical space within a car cockpit limits the possibilities for where to mount a dashboard. In contrast, virtual workspaces are more flexible and allow the 3D user interface to adapt continuously to the user's context.
2.1.2 Ergonomic factors in mid-air interaction. Ergonomics are a significant factor in the design of virtual user interfaces, particularly in 3D UIs. Arm fatigue is one of the main issues designers must consider [28]. It is a common problem in interaction with vertical screens, also known as the gorilla-arm effect [7]. Researchers have proposed novel approaches to address this issue, ranging from novel interaction techniques [7, 15, 31] to UI optimization methods [33] to reduce muscle strain and fatigue. What these approaches have in common is that they intend to reduce fatigue in interaction. Among the most prominent qualitative methods to assess subjective fatigue are Likert scales [8], the NASA Task Load Index (NASA-TLX) [21], and the Borg CR10 scale [6]. HCI studies usually apply these approaches because they are non-invasive and do not require specialized equipment. However, substantial work must go into preparation and user studies, and these techniques provide only a coarse estimation of fatigue. While objective methods overcome some of these limitations, techniques used in biology and sports science often rely on external measurements, such as muscle activations [9], blood pressure [40], and heart rate [39]. Because these methods require specialized equipment and might interfere with the user's task, they are often inappropriate for HCI studies.
The HCI community has proposed alternatives to objective methods that are not intrusive. For example, Consumed Endurance (CE) [23] is a metric that tracks the user's arm pose to quantify arm fatigue. CE computes the center of mass of the arm over time and uses that information to predict how long the user can continue
interaction before the shoulder muscles need rest. The authors show that the metric correlates well with the Borg CR10 scale and propose several guidelines for the design of mid-air UIs. Other studies use muscle activations from biomechanical models as indicators of fatigue. Bachynskyi et al. [2] show that predictions of muscle activation from static optimization correlate well with EMG data. In subsequent work, Bachynskyi et al. [3] applied biomechanical simulations to create a set of heuristics for designing 3D pointing interfaces, highlighting the potential of biomechanical simulations in UI design. Later, Jang et al. [25] proposed a method for modeling cumulative fatigue. Their approach quantifies arm fatigue by introducing a model for estimating muscle states (active, rest, fatigue) and uses a biomechanical arm model to estimate maximum shoulder strength. This approach makes it possible to consider periods of both interaction and rest.
Another ergonomic issue interlinked with fatigue is user comfort [28]. User comfort covers both physical and psychological dimensions, encompassing broader aspects such as posture and social awkwardness that may arise from using gestures in public spaces. Although there is work exploring subtle mid-air interaction [31], we are not aware of guidelines or objective metrics to evaluate this issue.
The primary goal of XRgonomics is to make ergonomics metrics accessible to creators during design. We use established objective metrics as heuristics for comfort, to assess the quality of positions in the interaction space regarding ergonomic factors, such as fatigue. Our toolkit supports several of the metrics introduced above, namely RULA, Consumed Endurance, and muscle activations.
2.2 Computational support for UI design
Already 20 years ago, Myers, Hudson and Pausch highlighted the need for toolkits to support the creation of user interfaces [34]. Since then, researchers have proposed several computational methods to support UI design (see the survey by Oulasvirta et al. for an overview [36]). Some of these methods focus on ensuring user performance, while others make suggestions to improve the aesthetic qualities of an interface. Such computational methods often differ in the degree of involvement of the designer. At one end of the spectrum, tools automatically create UI designs and do not require designer involvement [16]. Other toolkits support the creator by observing their design process, evaluating manually created solutions, and generating alternative designs or changes, which the creator can choose to follow [4, 42]. Studies show that such tools improve the quality of designs and inspire creators, ultimately resulting in a collaborative environment involving the designer and the toolkit [27].
The support of computational methods is crucial for MR applications where the context of the user continuously changes. Existing work has explored methods that automatically determine where to place virtual content [14, 17, 35]. Others have investigated how to display virtual content to the user [13, 26, 41], or a combination of multiple aspects [30]. However, none of these automated approaches considers ergonomics. Designing for the physical aspects of interaction is one of several common difficulties during the creation of XR applications, as highlighted recently by Ashtari et al. [1]. Among the key challenges identified by the authors, we aim to address the lack of concrete and accessible design guidelines.
In this work, we use computational methods to support 3D UI design for MR and VR applications. XRgonomics is a toolkit that supports the visual exploration of the design space in terms of ergonomics, enabling creators to make informed decisions about where to place UI elements as part of their standard design process. Also, creators can use XRgonomics to guide the layout of 3D UIs at runtime and specify areas of interaction to avoid or prioritize.
3 ERGONOMIC COST PIPELINE
When designing XRgonomics, our primary goal was to create a method that supports the design of ergonomic user interfaces during the early design stages of XR applications. To facilitate accessibility, we did not want to impose constraints on the application itself, nor require the content creator to provide extensive input about the to-be-designed interface (e.g., usage data, user profiles, or physical environment). For that reason, we developed an approach that does not make assumptions about the interaction space or interaction techniques involved. Another noteworthy aspect is that interaction in XR applications is often context-dependent. Consider a typical Hololens 2 application (https://www.microsoft.com/en-us/hololens/hardware), where the UI comprises virtual mid-air interaction with buttons and sliders. Contextual aspects such as the task, environment, or user's pose can limit interaction with the system. For this reason, general guidelines for ergonomic 3D interface design are often inappropriate for XR applications. To overcome this challenge, we facilitate exploration of the interaction space during UI design and allow developers to use ergonomics metrics at runtime. In our approach, we analyze the entire interaction space and assign a cost of interaction at each reachable position in 3D space. We call this the ergonomic cost. For it to be accessible in real-time, we propose a pipeline that shifts the computationally intensive tasks to a pre-processing stage. This ergonomic cost pipeline comprises three steps that we describe in the following sections. Our approach allows the comparison of distinct reachable positions regarding different ergonomic aspects, opening novel possibilities for designing and optimizing user interfaces.
3.1 Discretization of the interaction space
In the initial step of the pipeline, we transfer the continuous interaction space into a discrete representation. This is necessary to make the problem computationally tractable. Hence, we represent the interaction space as a 3D Cartesian grid and call each element a voxel - a common term in computer graphics. We define the interaction space based on the positions a human can reach and manipulate objects with his hands from a fixed torso position, a concept also known as the reach envelope [11]. We use a simple kinematic chain between the shoulder and hands. A user representation that includes both arms requires a fixed offset between the shoulders and thorax. However, the shoulder's mechanics are complex, and shoulder joint motion depends on its component joints [24]. Hence, this simplification results in some loss of precision, but not enough to justify a more complex kinematic chain for our use case.
To generate the interaction space's voxel representation, we start by setting up voxel dimensions with a default side length of 10 cm.
Creators can adjust the voxels' side length to change the granularity of the interaction space representation. We use a simple algorithm that iterates through an overestimated 3D Cartesian grid in a cube (Figure 2, black cube). Its side length is equal to the kinematic chain dimensions, which delimit the arm's reach. Applications can include a calibration step, so these dimensions accurately reflect the user. In our standard implementation, we use the arm dimensions of the 50th percentile male [18]. Then, we verify which voxels belong to the interaction space, removing the voxels outside of the reach envelope (Figure 2, yellow sphere). We do this by checking whether the distance from the shoulder to the center of a voxel is smaller or equal to the user's arm length.
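To make this step concrete, the following minimal sketch generates such a voxel grid and prunes it to the reach envelope. It assumes the shoulder sits at the origin; the arm-length default and the function name are illustrative placeholders rather than the toolkit's actual values or API.

```python
import numpy as np

def discretize_interaction_space(arm_length_m=0.65, voxel_size_m=0.10):
    """Return the centers of all voxels inside the reach envelope (shoulder at origin)."""
    # Overestimated Cartesian grid: a cube whose half-side equals the arm length.
    coords = np.arange(-arm_length_m, arm_length_m + voxel_size_m, voxel_size_m)
    xs, ys, zs = np.meshgrid(coords, coords, coords, indexing="ij")
    centers = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)

    # Keep only voxels whose center lies within the reach envelope (a sphere
    # of radius arm_length around the shoulder).
    inside = np.linalg.norm(centers, axis=1) <= arm_length_m
    return centers[inside]

voxels = discretize_interaction_space()
print(f"{len(voxels)} reachable voxels")
```

Reducing the voxel side length increases the number of voxels cubically, which is why the heavy per-voxel computations are pushed to the pre-processing stage.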
3.2 Computation of arm poses
At the end of the pipeline, the result will be the cost of interaction for each position in the user's reach. But first, we must compute multiple poses the arm can take to reach each voxel in the interaction space. Related work points out lower risks of injury and reduced muscle load for postures with the wrist in a neutral position [32] (deviation and twist of 0 degrees). We aim to find the pose that minimizes discomfort, and because postures with the wrist in a neutral position are considered optimal, we simplify the kinematic chain further by removing this degree of freedom. This results in a two-segment body of the arm, where the forearm and wrist constitute a single segment (see Figure 2, pink kinematic chain). While this approach considers fewer possible arm poses, it significantly reduces the complexity of the inverse kinematics (IK) process and the computation time of the pipeline. We base our IK process on the work of Tolani et al. [43]. Considering fixed end-effector and shoulder positions, the elbow is free to swivel about the axis between these two points (see Figure 3), allowing us to express the elbow position as a function of the swivel angle $\phi$:

$e = r\,[\cos(\phi)\,\hat{u} + \sin(\phi)\,\hat{v}] + c$
where $r$ denotes the radius and $c$ the center of the circle described by the swiveling elbow joint, and $\hat{u}$ and $\hat{v}$ are orthonormal vectors spanning the plane of that circle [43]. The variable $\phi$ controls the elbow position, which is at its lowest height when $\phi = 0$. Note that the arm's physiology constrains the value of $\phi$, and we disregard unreasonable postures of the arm, based on impossible joint angles and elbow positions. Therefore, to generate arm poses, we increase $\phi$ by a constant value $\psi$, which determines how much the elbow rotates, until it reaches an anatomically impossible threshold (e.g., 150 degrees). Here, the value of $\psi$ determines how fine-grained the discretization of the elbow position is. At this stage, it is possible to customize thresholds for $\phi$ and create additional rules, to consider factors such as a user's physical impairments or constraints imposed by hardware.

Figure 2: The interaction space is computed from an overestimated 3D Cartesian grid (black cube) and delimited by the user's reach envelope (yellow sphere). A simple kinematic chain representing the user's arm can be seen in pink.
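The sketch below samples candidate elbow positions by sweeping the swivel angle as described in this section. It assumes a Y-up coordinate system and orients $\hat{u}$ toward gravity so that $\phi = 0$ yields the lowest elbow; the function name, the default step $\psi$, and the 150-degree cutoff are illustrative, and the anatomical filtering of impossible joint angles is omitted.

```python
import numpy as np

def elbow_positions(shoulder, hand, upper_len, fore_len,
                    psi_deg=10.0, phi_max_deg=150.0):
    """Sample candidate elbow positions by sweeping the swivel angle phi."""
    shoulder, hand = np.asarray(shoulder, float), np.asarray(hand, float)
    d = np.linalg.norm(hand - shoulder)
    if d == 0 or d > upper_len + fore_len or d < abs(upper_len - fore_len):
        return np.empty((0, 3))              # target unreachable

    n = (hand - shoulder) / d                # unit vector along the shoulder-hand axis
    a = (upper_len**2 - fore_len**2 + d**2) / (2 * d)
    r = np.sqrt(max(upper_len**2 - a**2, 0.0))
    c = shoulder + a * n                     # center of the elbow circle

    # Basis of the circle plane; u points "down" so phi = 0 gives the lowest elbow.
    down = np.array([0.0, -1.0, 0.0])
    u = down - np.dot(down, n) * n
    if np.linalg.norm(u) < 1e-6:             # axis (anti)parallel to gravity
        u = np.array([1.0, 0.0, 0.0]) - n[0] * n
    u /= np.linalg.norm(u)
    v = np.cross(n, u)

    phis = np.deg2rad(np.arange(0.0, phi_max_deg + psi_deg, psi_deg))
    return c + r * (np.cos(phis)[:, None] * u + np.sin(phis)[:, None] * v)
```

Each returned elbow position, together with the fixed shoulder and hand, defines one candidate arm pose to be scored in the next step.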
3.3 Computation of the ergonomic cost
Our toolkit implements established metrics from HCI and ergonomics research to assess the ergonomic cost of each reachable voxel. In theory, any metric that considers arm poses to assess ergonomic factors is appropriate to compute this ergonomic cost. XRgonomics currently supports Consumed Endurance (CE) [23], Rapid Upper Limb Assessment (RULA) [32], and muscle activations from biomechanical simulations. Notice that some of these metrics, such as CE, consider motion. In those cases, we adjust the metric to consider only static arm poses and use the result as a heuristic for strain. In other words, the ergonomic cost is a measure of how comfortable it is to maintain interaction at a specific position in the interaction space.
In the previous step of the pipeline, the toolkit generated several arm poses for reaching each voxel. We then compute the ergonomic cost for each of these poses, and assign the lowest of these costs (i.e., that of the pose with the least discomfort) to the corresponding voxel. We base this strategy on findings that humans tend to use more efficient poses [37].
In XRgonomics, a creator can compute the ergonomic cost using one of the supported metrics, or combine multiple metrics by assigning a weight to each. In the next sections we will describe how we applied each metric in our pipeline.
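As a simple illustration of such a weighted combination, the sketch below averages several per-voxel metric costs with creator-chosen weights. The metric names and weights are assumptions for illustration and do not reflect the toolkit's internal data layout.

```python
import numpy as np

def combined_cost(costs_by_metric, weights):
    """Weighted average of per-voxel costs, one array per metric (already normalized)."""
    total = sum(weights.values())
    return sum(w * np.asarray(costs_by_metric[m], float)
               for m, w in weights.items()) / total

# Hypothetical usage: equal weighting of the three supported metrics.
# cost = combined_cost({"ce": ce, "rula": rula, "muscle": muscle},
#                      {"ce": 1.0, "rula": 1.0, "muscle": 1.0})
```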
Figure 3: Our inverse kinematics approach considers the shoulder and end-effector (i.e., hand) positions to be fixed, and the elbow is free to swivel about the shoulder-hand axis.

3.3.1 Consumed Endurance (CE). To quantify fatigue in mid-air interaction, CE [23] considers the endurance of the shoulder in terms of torque as a ratio to the interaction time. We follow the authors' approach and use shoulder torque as an index for muscle strain. As the authors mention, when there is no motion, the shoulder torque has to match the gravity torque:

$\vec{T}_{shoulder} = \vec{r} \times m\vec{g}$

where $\vec{r}$ is the distance from the shoulder joint to the center of mass of the arm, $m$ is the mass of the arm, and $\vec{g}$ is the gravitational acceleration. Since we are working with static poses, we can directly compute the center of mass of each pose and apply the formula above. The result is a heuristic for the ergonomic cost based on the CE approach to estimate muscle contraction.
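A minimal sketch of this static-torque heuristic follows. The arm mass and center-of-mass values in the usage example are placeholders, not the anthropometric figures used in the toolkit.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])   # m/s^2, assuming a Y-up convention

def shoulder_torque_magnitude(shoulder, arm_com, arm_mass_kg):
    """Gravity torque the shoulder must counteract to hold a static pose."""
    r = np.asarray(arm_com, float) - np.asarray(shoulder, float)
    torque = np.cross(r, arm_mass_kg * GRAVITY)   # T = r x m*g
    return np.linalg.norm(torque)

# Illustrative pose: arm's center of mass roughly 30 cm in front of the shoulder.
cost = shoulder_torque_magnitude([0, 0, 0], [0.0, 0.0, 0.30], arm_mass_kg=3.5)
```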
3.3.2 Rapid Upper Limb Assessment (RULA). RULA [32] assigns posture scores to the upper limbs, neck, trunk and legs, depending on joint angles. Combining that information with muscle and force scores, the method results in a final score to assess risk factors associated with upper-limb disorders. Even though our approach only considers arm poses, RULA posture ratings convey relevant information about postures that prevent or might result in upper limb disorders. Hence, we use RULA's posture scores to compute a score based on the joint angles of the upper and lower arm (see posture scores for group A [32]). Low posture scores reflect a working posture with minimal risk factors, while higher numbers indicate an increasing presence of risk factors. The final score can indicate which positions in the interaction space are preferable to avoid upper-limb disorders.
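The sketch below illustrates a RULA-style posture score computed from the upper- and lower-arm flexion angles only. The angle bands follow the commonly published RULA worksheet, but the adjustments (raised shoulder, abducted arm, working across the midline) and the full group-A lookup table are omitted, so this is a simplified stand-in rather than the toolkit's implementation.

```python
def upper_arm_score(flexion_deg):
    """1 = 20 deg extension to 20 deg flexion, up to 4 = flexion above 90 deg."""
    if -20 <= flexion_deg <= 20:
        return 1
    if flexion_deg < -20 or flexion_deg <= 45:
        return 2
    if flexion_deg <= 90:
        return 3
    return 4

def lower_arm_score(flexion_deg):
    """1 = 60-100 deg of elbow flexion, 2 otherwise."""
    return 1 if 60 <= flexion_deg <= 100 else 2

def posture_score(upper_flexion_deg, lower_flexion_deg):
    # Simplification: sum the partial scores instead of using the group-A table.
    return upper_arm_score(upper_flexion_deg) + lower_arm_score(lower_flexion_deg)
```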
3.3.3 Muscle activations from Biomechanical Simulations. Biomechanical simulations can estimate muscle activation for a motion, which can indicate energy consumption and fatigue [2]. Therefore, this method has great potential as a heuristic for the design of 3D UIs. Typical biomechanical simulation pipelines use experimental motion data, which typically involves mapping physical to virtual markers, scaling the model to match the subject dimensions, using inverse kinematics to compute joint angles, and a final step to estimate muscle activations [3]. For our simulations we use OpenSim 4.1 (https://simtk.org/projects/opensim), an open-source tool for biomechanical modeling and simulation [12], and the upper extremity model created by Saul et al. (MoBL) [38]. This model has the dimensions of the 50th percentile male, and must be scaled to support other arm dimensions. Because we generate arm poses in the previous step of the pipeline, we only use OpenSim to estimate muscle activations. However, we must convert our vector representation of the arm's pose into OpenSim's generalized model coordinates and generate corresponding motion files where each arm pose remains static over a short time (refer to the source code for more details). We use static optimization (https://simtk-confluence.stanford.edu/display/OpenSim/How+Static+Optimization+Works), a fast and efficient method, to estimate muscle activations for each pose. To run our simulations, we follow Hicks et al.'s recommendations [22]. We used reserve actuators to prevent the model from being under-actuated and avoid failures in static optimization. These reserve actuators complement the model's muscles when these cannot generate sufficient forces to achieve a pose. It is important that reserve moments are small or non-existent [22], so
that the model's muscles exert most of the forces necessary to maintain each pose. Hence, we use low optimal forces in our reserve actuators, to ensure the cost function in the static optimization algorithm prioritizes muscle forces. Because MoBL is a complex model and static optimization can converge to different results, we analyze each pose over time and save the timeframe that minimizes reserve actuation for each pose. This results in an activation value for each muscle and reserve actuator in the model. To facilitate comparison with other metrics, we combine these into a single ergonomic cost value. To do so, we average the muscle activations and sum all the reserve actuators. To prioritize results that mostly use muscle forces, we penalize cases where reserve moments are high. Note that while muscle activation ranges from 0 to 1, the same does not apply to reserve actuators. Therefore, we set a threshold for the maximum acceptable reserve forces ($T_{reserve}$), based on the net joint moments [22]. Then, we divide the sum of the reserve forces by $T_{reserve}$, which will always result in a value higher than the average muscle activation if the reserve forces are above the threshold. This results in the following ergonomic cost function:
$erg\_cost = \frac{\sum_{n=1}^{M} activation_n}{M} + \frac{\sum_{a=1}^{A} activation_a}{T_{reserve}}$

where $M$ is the number of muscles and $A$ the number of reserve actuators in the model.
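The combination above reduces to a few lines; the sketch below is illustrative and assumes the muscle activations and reserve actuations have already been extracted from the static optimization results.

```python
import numpy as np

def ergonomic_cost(muscle_activations, reserve_actuations, t_reserve):
    """Average muscle activation plus reserve actuation scaled by the threshold T_reserve."""
    muscle_activations = np.asarray(muscle_activations, float)
    reserve_actuations = np.asarray(reserve_actuations, float)
    # Poses that rely on reserve forces above T_reserve are pushed toward high cost.
    return muscle_activations.mean() + reserve_actuations.sum() / t_reserve
```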
4 THE XRGONOMICS TOOLKIT
The pipeline described in section 3 constitutes the central part of XRgonomics, a toolkit that gives creators of 3D applications easy access to ergonomics metrics during design and development of 3D adaptive UIs. The toolkit comprises two major components: a Graphical User Interface (GUI) and an Application Programming Interface (API). The GUI allows creators to visualize the interaction space and each voxel's ergonomic cost. It can support the design of static interfaces (e.g., positioning virtual buttons on a desk) or give an overview of different metrics and their corresponding ergonomic cost for different positions in the interaction space (Figure 4). The API allows developers to use the ergonomic cost at runtime. This feature allows developers to create adaptive 3D UIs that consider user comfort as a criterion in the formulation of the optimization problem. For example, developers can retrieve voxels that minimize the ergonomic cost under specified spatial constraints in real-time. In this initial version of the toolkit, we consider only the right arm. Therefore, we set the center of the interaction space on the shoulder instead of the user's thorax. The source code for XRgonomics is available at https://github.com/joaobelo92/xrgonomics.

Figure 4: Visualization of the interaction space of the right arm using the supported metrics: A) Consumed endurance, B) RULA, C) Muscle activation, D) Weighted average (arithmetic mean in this case). The image shows only the voxels at the 40 cm slice (x-axis).

Figure 5: The XRgonomics GUI allows creators to visualize the interaction space and ergonomic cost of each voxel according to different ergonomic metrics. We will briefly describe each UI element: A) Dropdown menu for metric selection; B) Slider for voxel size setting; C) Menu to run the computation pipeline for different arm dimensions; D) Buttons for retrieving the "optimal" voxel with the lowest ergonomic cost; E) Checkboxes for enabling/disabling spatial constraints; F) Dropdown list with comparison operators (=, >= or <=); G) Sliders for setting constraint values; H) Checkbox to toggle display of the avatar as visual reference for the shoulder position; I) Camera controls; J) Visualization of the interaction space and ergonomic cost in the form of colored voxels; K) Avatar; L) Color mapping for the ergonomic cost, from blue (most comfortable) to red (least comfortable).
4.1 Graphical User Interface (GUI)
The GUI is implemented in Unity and uses the API to retrieve the ergonomic cost data. By default, it supports the arm dimensions of the 50th percentile male [18]. Creators can directly change parameters, such as the user's arm and voxel dimensions (Figure 5, C). Modifications to other parts of the pipeline require updates in the source code (see API section for more details). The GUI allows creators to visualize the interaction space and each voxel's ergonomic cost (Figure 5, J). The user can select between different ergonomic metrics supported by the toolkit (Figure 5, A). XRgonomics supports the metrics described in section 3.3 and a weighted average of those three metrics. Because the interaction space is a sphere, the voxels in the interior might be occluded. For that reason, the GUI features controls to apply spatial constraints on each coordinate axis (Figure 5, E), to limit the range of visible voxels. For example, it is possible to visualize a "slice" of voxels by adding an equality constraint on one axis (Figure 4). Creators can also reduce the rendering dimensions of the voxels to visualize data through more than one "slice" (Figure 5, B). An avatar is depicted in the center of the GUI, as a reference for the user's shoulder position in the interaction space (Figure 5, K).
Each voxel is colored according to the selected metric and the arm pose with the minimum ergonomic cost. As previously mentioned, we base this design choice on the principle that humans tend to use efficient poses [37]. Because CE and biomechanical simulations output continuous results, we normalize all the ergonomic cost data using a simple feature scaling formula:

$x_{new} = \frac{x - x_{min}}{x_{max} - x_{min}}$
Figure 6: Creators can click on a voxel to visualize the possible positions of the elbow and the pose's ergonomic cost.

The color mapping is a linear interpolation from blue to red, representing low to high ergonomic cost, respectively (Figure 5, L). This mapping allows creators to visualize and compare voxels with similar values. Note that computed muscle activations from biomechanical simulations differ by small values when not influenced by reserve forces, which may result in identical voxel colors, even though there is a difference in average muscle activations. Therefore, we use a different normalization strategy to facilitate the visualization of this metric. We normalize the average muscle activations, multiply them by a scaling factor, and sum it with the ergonomic cost previously computed. This makes voxels with high reserve forces appear red, while smaller differences in the average muscle activations remain visible.
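The two normalization strategies can be sketched as follows; the scaling factor is an assumption for illustration, not the value used in the GUI.

```python
import numpy as np

def min_max(values):
    """Plain feature scaling to [0, 1], as used for CE and the other metrics."""
    values = np.asarray(values, float)
    return (values - values.min()) / (values.max() - values.min())

def muscle_metric_colors(avg_activations, erg_costs, scale=0.5):
    # Re-add the scaled, normalized average activations so small differences
    # between low-cost voxels stay visible despite large reserve-force penalties.
    adjusted = min_max(avg_activations) * scale + np.asarray(erg_costs, float)
    return min_max(adjusted)   # 0 maps to blue, 1 maps to red in the color ramp
```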
Finally, creators can click on a single voxel to visualize the arm poses generated by the IK process and their corresponding ergonomic cost (Figure 6). Since we are using a simplified kinematic chain, only the elbow positions differ in each pose.
4.2 Application Programming Interface (API)
We implemented the API in Python and used NumPy for most mathematical operations. The API has an endpoint to run the ergonomic cost pipeline for different arm and voxel dimensions, but developers must update the source code to change parameters in the inverse kinematics step, such as joint rotation limits or complex spatial constraints. While we implemented the algorithms for CE and RULA, the toolkit uses the OpenSim 4.1 Python bindings to run biomechanical simulations. However, XRgonomics does not directly support scaling of the arm or other changes to the biomechanical model, and the OpenSim tool is necessary for such tasks. We store voxel and ergonomic cost data in R*-Trees [5], using an SQLite database. This makes it fast to find positions in 3D space that minimize a particular ergonomic metric or meet specific requirements. R*-Trees allow developers to query voxels within a bounding box or arbitrary shapes like the area visible to a 3D camera. For networking, the API uses the ZeroMQ framework (https://zeromq.org/), a fast messaging library. These networking features allow the API to communicate with the GUI (Unity), and enable developers to integrate XRgonomics in their applications. For example, developers can run the ergonomic cost pipeline for custom arm dimensions and retrieve data under specified spatial constraints. In our tests, the response time for such queries was less than 10 ms, showing that the API can process requests in real-time and is suitable for XR applications.
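As an illustration of this kind of constrained lookup, the sketch below queries an SQLite R*-Tree of voxel bounds for the lowest-cost voxel inside an axis-aligned box. The table and column names are assumptions, not the toolkit's actual schema.

```python
import sqlite3

def best_voxel_in_box(db_path, box):
    """Return the voxel with the lowest ergonomic cost inside an axis-aligned box."""
    (x0, x1), (y0, y1), (z0, z1) = box
    con = sqlite3.connect(db_path)
    row = con.execute(
        """
        SELECT v.id, c.erg_cost
        FROM voxel_index AS v JOIN voxel_costs AS c ON c.voxel_id = v.id
        WHERE v.minX >= ? AND v.maxX <= ?
          AND v.minY >= ? AND v.maxY <= ?
          AND v.minZ >= ? AND v.maxZ <= ?
        ORDER BY c.erg_cost ASC LIMIT 1
        """,
        (x0, x1, y0, y1, z0, z1),
    ).fetchone()
    con.close()
    return row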
Figure 7: Creators can use the XRgonomics GUI to guide the design of static UI elements in traditional AR applications. Representations of physical/virtual objects can be added in the Unity scene to facilitate the task. In this case, the creator uses constraints on the x and y axes to visualize the interaction space above and to the right of a table.
5 EVALUATION OF THE TOOLKIT
Ledo et al. introduced a categorization of evaluation strategies for HCI toolkit research [29]. We applied two strategies identified in this work to evaluate XRgonomics. First, we illustrate what the toolkit might support by discussing the usage of XRgonomics in two distinct scenarios: ergonomically optimized placement of static 3D UI elements, and runtime adaptation of a 3D UI based on dynamic constraints. Then, we collect feedback from potential toolkit users to explore its utility through a walkthrough demonstration.
5.1 Demonstration of application scenarios
To demonstrate the functionality of XRgonomics, we implemented
two application scenarios for 3D UI design that we describe in the
following sections:
5.1.1 Guiding the placement of static UI elements. Consider a "traditional" AR application, as Grubert describes it [19], where a designer defines the position of UI elements based on the user's pose. Creators can use the XRgonomics GUI to visualize the ergonomic cost for each position in the user's interaction space and guide the placement of 3D UI elements under specified constraints. For example, consider positioning virtual input elements on an office desk. The designer can analyze all the positions above the desk by setting constraints on different axes (see Figure 7), and use this information to design virtual elements such as a calculator or a drawing board.
5.1.2 Dynamic adaptation of 3D UIs. Changes in the user's task, environment, and pose can limit interaction in MR applications. Because context changes are difficult or impossible to predict during design and development, a solution is to adapt the UI to the user's context at runtime. We implemented a prototype to show how creators can use XRgonomics to design adaptive ergonomic UIs. In this simplified MR scenario, we use the XRgonomics API to adapt the placement of a virtual music player menu in a Hololens 2 application. The UI consists of a virtual 3D menu with buttons to play, stop, or change songs. Although the controls are easily accessible when the menu is visible, the limited field of view (FoV) of the Hololens can make interaction challenging. To overcome this issue, the user can request the menu to move into his FoV with a gesture (see Figure 8). The prototype then uses the XRgonomics API to identify the most comfortable and reachable position in the user's FoV and moves the virtual menu there. To achieve that, we use the view frustum of the Hololens as a spatial constraint. This application was implemented in Unity, using the Mixed Reality Toolkit (MRTK) and the XRgonomics API. MRTK provides algorithms to facilitate the positioning of virtual menus (solvers: https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/README_Solver.html). However, these are limited to behaviors like surface magnetism or following a virtual object and do not consider ergonomics.

Figure 8: A proof-of-concept application on the Hololens allows automatic placement of a virtual menu (music player) in the ergonomically optimal position within the user's FoV. The left image shows the user interacting with the virtual menu while looking straight ahead. The upper right visualization shows the ergonomic cost within the user's current FoV. When turning the head in another direction, the constraints are updated based on the new FoV (right picture). With a gesture, the user can summon the menu to reappear in the most comfortable position within this new zone of the interaction space.
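A client-side sketch of the kind of request made in this scenario is shown below, assuming a ZeroMQ request-reply socket; the endpoint address and message format are illustrative assumptions, as the actual protocol is defined in the toolkit's source code.

```python
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")   # assumed address of the XRgonomics API

# Approximate the current field of view with an axis-aligned bounding box
# (the API can also handle arbitrary shapes such as a camera frustum).
request = {
    "command": "optimal_voxel",
    "metric": "consumed_endurance",
    "constraints": {"x": [0.1, 0.6], "y": [-0.2, 0.3], "z": [0.2, 0.7]},
}
socket.send_json(request)
print(socket.recv_json())   # e.g., the voxel center at which to place the menu
```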
5.2 Walkthrough demonstration of toolkit
To explore the utility of the toolkit, we conducted a walkthrough demonstration [29] with representatives from our target group, such as UI designers, developers, and HCI researchers. The study comprised six phases (Table 1). In each phase, we conducted semi-structured interviews with open-ended questions, rather than using for example questionnaires, to gain more in-depth insights. We will discuss the goals and findings from each phase in the following sections. Due to the COVID-19 pandemic, the study was conducted online through a video conferencing tool with screen sharing. For further reference, the study protocol, interview transcripts, and questionnaires are available in the project repository. We recruited eight participants (2 female; age: M = 31.3, SD = 2.8). All participants were familiar with UI design and XR technology, as they were professional software developers or VR/MR researchers. Most participants were uncertain about the concept of ergonomics, and none had prior knowledge of the metrics RULA, CE, or muscle activations.
5.2.1 Review of 3D UIs. In an initial discussion about 3D UI design, we aimed to learn about participants' prior knowledge and concerns regarding ergonomics. Three participants could relate directly to ergonomics and fatigue issues (P1, P3, P7) and all acknowledged the importance of the topic. However, none had addressed the problem in practice and they were not aware of existing metrics or strategies to use, apart from referring to guidelines for particular tools such as ARKit (https://developer.apple.com/design/human-interface-guidelines/ios/system-capabilities/augmented-reality/) and ARCore (https://designguidelines.withgoogle.com/ar-design/) (P3). These insights highlight one of the key barriers identified by Ashtari et al. [1], about the difficulties in designing for the physical aspects of AR/VR applications.
5.2.2 Introduction of GUI. In this phase, we introduced the XRgonomics prototype, explained the interaction space voxel representation, and instructed the participants on how to use the GUI (see section 4.1). To confirm that participants understood the visualization based on our explanations, we prompted them to describe the ergonomic characteristics of the interaction space when referring to a "slice" of voxels using the CE metric, as illustrated in Figure 5 (J). All participants showed an intuitive understanding of blue areas being "most comfortable" (P1), "easiest" (P5), and "most relaxed" (P7) to reach with the hand. We then challenged their understanding of this visualization by pointing out some questionable CE results for positions above head-level (see Figure 5). Several participants expressed some uncertainty and even disagreement with the values in this area, which they perceived as hard to reach and therefore expected a higher ergonomic cost (e.g., P0, P3, P4, P7). However, instead of doubting the metric, they came up with likely explanations for why their opinions were wrong (e.g., P1, P3, P4, P5). We conclude that while the ergonomic cost and the toolkit visualization are easy to understand, creators might over-trust the tool, interpreting the visualizations as the ground truth instead of reflecting on the validity of the metric. Hence, such tools should encourage creators to be critical and clarify the metrics' strengths and weaknesses.
Table 1: Overview of the study procedure consisting of a walkthrough demonstration of XRgonomics and a design task (phase 3), where participants used the GUI to create a static 3D UI.
1. Review of 3D UIs: Topic introduction and examination of prior knowledge
2. Introduction of GUI: Instruction on how to use the GUI and visualization
3. Design Task: Participants use the tool to design a static 3D UI with 3 elements
4. Metrics overview: Demonstration and explanation of different ergonomics metrics
5. Introduction of API: Demonstration and discussion of API features
6. Conclusion: General feedback and final remarks
5.2.3 Design Task. To evaluate the usefulness of XRgonomics in UI design, participants completed a design task using the toolkit. It consisted of planning the layout of three UI elements with different usability aspects (e.g., usage frequency) in a workstation (similar to Figure 5). At this stage, we enabled remote control of the mouse cursor, and the participants could use the toolkit running on the experimenter's PC. We asked them to think aloud while exploring the visualization, and show their desired UI element locations by pointing with their mouse or selecting a particular voxel. For a UI element that required frequent hand manipulation, all the participants used the toolkit to locate voxels with a low ergonomic cost. When deciding on the position for a rarely used element, with which inadvertent interaction is undesirable (e.g., "delete all"), participants pursued different strategies. To ensure the user makes a deliberate choice, some participants selected areas with a high ergonomic cost (P0-P3), while others also considered the workspace layout (P4-P7). All the participants stated that the visualization of the interaction space and ergonomic cost informed their decisions. Finally, when placing a non-interactive display element, participants pointed out that the supported metrics were not relevant, revealing an opportunity to integrate other metrics beyond ergonomics, such as visibility.

To explore the potential benefits of using XRgonomics in contrast to formulated guidelines from existing work, we quoted two design guidelines from the CE paper [23] and asked participants how they would apply these in the previous task. Participants highlighted several limitations of written guidelines, such as verbal statements being ambiguous or open to interpretation (P2-P6), whereas XRgonomics allows the designer to visually explore the interaction space (P0, P5, P7). Further, written guidelines may not apply if the recommended area is unavailable (e.g., because of physical restrictions). In contrast, setting constraints in XRgonomics allows the creator to analyze voxels in specific zones, compare, and identify locations that may not be the best overall but are optimal for a particular scenario (P0, P1, P3, P4).
5.2.4 Metrics overview. Next, we showed the ability to visualize different metrics in XRgonomics, briefly explaining the underlying theory and how the ergonomic cost is computed for CE, RULA, and muscle activation, respectively. All participants agreed that the visualizations aided their understanding of the underlying concepts, while appreciating that the toolkit makes the metrics accessible and useful without knowledge of the formal details.
5.2.5 Introduction of API. To explore the potential of XRgonomics to develop adaptive UIs, we explained the features accessible through the API and showed a video of the AR prototype described in section 4. Overall, participants appreciated the idea of generating constraints automatically and proposed several use cases for adaptive UIs. However, some participants mentioned that the toolkit should allow the designer or end-user to modify these constraints (P1, P2, P3) to address personal preferences, implicit spatial requirements, or a physical disability.
5.2.6 Conclusion. To collect general feedback and identify limitations of the toolkit, we concluded the study with some final questions. Participants agreed that the visualization provided understandable information about ergonomics in the interaction space, and mentioned that XRgonomics would help 3D UI design from early stages of design and development. They also proposed support for additional metrics beyond ergonomics, such as spatial relations between (physical/virtual) objects (P3, P5, P6), eye strain (P6), and visibility (P0, P1, P4-P7). A participant asked about having XRgonomics integrated into development tools, such as Unity (P4), which would facilitate access to it.

We conclude that the results from this walkthrough demonstration highlight the potential of XRgonomics to make ergonomics metrics accessible during the design and development of 3D UIs.
6 DISCUSSION
In this work, we propose a method to estimate the ergonomic cost at each reachable position in the user's interaction space. We make this cost available to creators during design and development through XRgonomics, a toolkit to facilitate the creation of ergonomic 3D UIs. We demonstrated its potential through two examples: guidance for placement of static UI elements, and dynamic adaptation of 3D UIs optimized for comfort. Finally, we presented a walkthrough demonstration that highlights how XRgonomics can support UI design experts. We will now discuss the limitations of our approach, avenues for future work, and other relevant findings.
6.1 Limitations of XRgonomics
To create a method that runs in real-time, we simplified multiple steps of the pipeline. A simple kinematic chain limits the number of possible poses represented by the model to allow for a simple and fast inverse kinematics algorithm. Although in most cases a fixed wrist angle in the kinematic chain results in an ergonomic position, interaction with complex 3D input does not always work under such conditions, and environmental constraints might require poses with different wrist angles. Further, modeling the shoulder mechanism and its relation with the torso would require more complex IK.

Another design trade-off we made was to ignore motion. Without context, existing models that analyze movement and fatigue are difficult to use, because it is hard to forecast certain aspects of interaction, like movement [1]. Therefore, we consider only static poses, allowing creators to easily use XRgonomics at design time.
Finally, XRgonomics currently supports metrics related to the ergonomics of the upper limbs. However, several other factors impact interaction in XR applications, such as visibility and consistency.
6.2 Future work
Improvements to the IK implementation can result in higher accuracy and support for more arm poses. In particular, extending the kinematic chain to consider the wrist angle is a natural improvement to the work we present. A possible approach would be to use a spiral point algorithm at each voxel to compute possible wrist positions. Another related improvement is to consider the user's torso position, with a model that represents the shoulder mechanism. This would result in more realistic arm poses and an improved representation of the interaction space. Another avenue for future work is to consider motion and model fatigue over time, considering the movement between voxels, their ergonomic cost, and muscle endurance. It will also be interesting to expand XRgonomics to consider other human factors beyond ergonomics of the upper limbs, such as vision, cognition, and spatial relations of objects. Integration in existing MR and VR toolkits, such as MRTK solvers or Unity's IDE, is another important direction that would make our method more accessible to creators, as mentioned by study participants. The walkthrough demonstration also revealed several opportunities for GUI improvements, such as improved camera controls and better control for voxel selection (e.g., selecting between a range of values).
On a dierent topic, the user study revealed that participants’
may over-trust the metrics when using the GUI. When we encour-
aged further reection, participants reported doubts and treated the
visualization more critically, after receiving explanations about how
the metrics worked and their limitations. This highlights a potential
issue of trust-calibration, which is an ongoing research topic in
Visual Analytics [
20
]. To address this, XRgonomics could provide
explanations for each metric and a disclaimer of their limitations.
6.3 Supported ergonomics metrics
In our current implementation, we incorporate three existing metrics to compute the ergonomic cost of interaction: CE, RULA, and muscle activations. While these represent important research in ergonomics, we discovered limitations throughout development and the user study. For instance, Hincapié-Ramos et al. proposed CE as a metric to quantify fatigue of mid-air interactions [23]. However, the main scenario considered in their work is interaction with vertical displays. We assume this influenced the design of the metric, which is based on the cross-product of the gravity vector and the center of mass of the arm for static poses. This leads to questionable results when reaching overhead (Figure 4, A), which was a common topic of discussion in our study.

Then, RULA investigates risk factors associated with work-related disorders [32]. Although such information is relevant to the design of ergonomic 3D UIs, it does not consider poses where the arm is at rest. Moreover, it relies on wide angle ranges for scoring arm poses, resulting in similar values for several voxels (Figure 4, B).

In our biomechanical simulations, the optimization algorithm did not always converge. Without inspecting each individual case, it was impossible to discern whether this was due to poses being physiologically impossible, or caused by issues with the model or optimization step. Nevertheless, we argue that biomechanical models can represent important information which other metrics cannot, such as physical constraints, muscles, and tendon length. We believe that XRgonomics can support the understanding of existing ergonomic metrics and the development of new ones by offering a simple way to inspect, compare, and debug them.
7 CONCLUSION
In this paper, we proposed a method to estimate the ergonomic cost of interaction at each reachable position in the user's environment. We make it available through the XRgonomics toolkit, which aims to support the design of ergonomic 3D UIs by making existing ergonomics metrics accessible to creators. The GUI allows creators to visualize the user's reachable interaction space and the ergonomic cost in each position. The API allows the creation of complex and dynamic constraints, which enable real-time adaptation of 3D UIs (e.g., repositioning the UI to avoid hitting physical obstacles). We illustrate functionalities XRgonomics can support through two scenarios. Finally, a walkthrough demonstration of the prototype shows the usefulness of our approach and highlights its potential to integrate additional factors beyond ergonomics.
ACKNOWLEDGMENTS
We thank Aïna Linn Georges, Jens Emil Grønbæk and Hans-Jörg Schulz for their helpful feedback and discussions. Finally, we are grateful to Pedro Batalha for the illustrations, and to Sebastian Knudsen for the video. This work was supported by the Innovation Fund Denmark (IFD grant no. 6151-00006B), as part of the Manufacturing Academy of Denmark (MADE) Digital project, and by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant no. StG-2016-717054).
REFERENCES
[1] Narges Ashtari, Andrea Bunt, Joanna McGrenere, Michael Nebeling, and Parmit K. Chilana. 2020. Creating Augmented and Virtual Reality Applications: Current Practices, Challenges, and Opportunities. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI '20). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376722
[2] Myroslav Bachynskyi, Antti Oulasvirta, Gregorio Palmas, and Tino Weinkauf. 2014. Is Motion Capture-Based Biomechanical Simulation Valid for HCI Studies? Study and Implications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Toronto, Ontario, Canada) (CHI '14). Association for Computing Machinery, New York, NY, USA, 3215–3224. https://doi.org/10.1145/2556288.2557027
[3] Myroslav Bachynskyi, Gregorio Palmas, Antti Oulasvirta, and Tino Weinkauf. 2015. Informing the Design of Novel Input Methods with Muscle Coactivation Clustering. ACM Trans. Comput.-Hum. Interact. 21, 6, Article 30 (Jan. 2015), 25 pages. https://doi.org/10.1145/2687921
[4] Gilles Bailly, Antti Oulasvirta, Timo Kötzing, and Sabrina Hoppe. 2013. MenuOptimizer: Interactive Optimization of Menu Systems. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (St. Andrews, Scotland, United Kingdom) (UIST '13). Association for Computing Machinery, New York, NY, USA, 331–342. https://doi.org/10.1145/2501988.2502024
[5] Norbert Beckmann, Hans-Peter Kriegel, Ralf Schneider, and Bernhard Seeger. 1990. The R*-Tree: An Efficient and Robust Access Method for Points and Rectangles. SIGMOD Rec. 19, 2 (May 1990), 322–331. https://doi.org/10.1145/93605.98741
[6] Gunnar Borg. 1990. Psychophysical scaling with applications in physical work and the perception of exertion. Scandinavian Journal of Work, Environment & Health 16 (1990), 55–58. http://www.jstor.org/stable/40965845
[7]
Sebastian Boring, Marko Jurmu, and Andreas Butz. 2009. Scroll, Tilt or Move
It: Using Mobile Phones to Continuously Control Pointers on Large Public Dis-
plays. In Proceedings of the 21st Annual Conference of the Australian Computer-
Human Interaction Special Interest Group: Design: Open 24/7 (Melb ourne,Australia)
(OZCHI ’09). Association for Computing Machinery, New York, NY, USA, 161–168.
https://doi.org/10.1145/1738826.1738853
[8]
James Cario and Rocco J Perla. 2007. Ten common misunderstandings, mis-
conceptions, persistent myths and urban legends about Likert scales and Likert
response formats and their antidotes. Journal of social sciences 3, 3 (2007), 106–
116.
[9]
Mario Cifrek, Vladimir Medved, Stanko Tonković, and Saša Ostojić. 2009. Surface
EMG based muscle fatigue evaluation in biomechanics. Clinical Biomechanics 24,
4 (2009), 327 – 340. https://doi.org/10.1016/j.clinbiomech.2009.01.010
[10]
Perkins Coie. 2020. 2020 Augmented and Virtual Reality Survey Re-
port. https://www.perkinscoie.com/en/ar-vr-survey-results/2020-augmented-
and-virtual- reality-survey- results.html. Accessed: 2020-06-16.
[11]
Joachim Deisinger, Ralf Breining, and Andreas Rößler. 2000. ERGONAUT: A Tool
for Ergonomic Analyses in Virtual Environments. In Virtual Environments 2000,
Jurriaan Mulder and Robert van Liere (Eds.). Springer Vienna, Vienna, 167–176.
[12]
Scott L Delp, Frank C Anderson, Allison S Arnold, Peter Loan, Ayman Habib,
Chand T John, Eran Guendelman, and Darryl G Thelen. 2007. OpenSim: Open-
Source Software to Create and Analyze Dynamic Simulations of Movement. IEEE
Transactions on Biomedical Engineering 54, 11 (2007), 1940–1950.
[13]
Stephen DiVerdi, Tobias Hollerer, and Richard Schreyer. 2004. Level of detail inter-
faces. In Third IEEE and ACM International Symposium on Mixed and Augmented
Reality. IEEE, New York, NY, USA, 300–301.
[14]
Andreas Fender, Philipp Herholz, Marc Alexa, and Jörg Müller. 2018. OptiSpace:
Automated Placement of Interactive 3D Projection Mapping Content. In Pro-
ceedings of the 2018 CHI Conference on Human Factors in Computing Systems
(Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New
York, NY, USA, 1–11. https://doi.org/10.1145/3173574.3173843
[15]
Tiare Feuchtner and Jörg Müller. 2018. Ownershift: Facilitating Overhead In-
teraction in Virtual Reality with an Ownership-Preserving Hand Space Shift. In
Proceedings of the 31st Annual ACM Symposium on User Interface Software and
Technology (Berlin, Germany) (UIST ’18). Association for Computing Machinery,
New York, NY, USA, 31–43. https://doi.org/10.1145/3242587.3242594
[16]
Krzysztof Z. Gajos, Daniel S. Weld, and Jacob O. Wobbrock. 2010. Automatically
generating personalized user interfaces with Supple. Artificial Intelligence 174,
12 (2010), 910–950.
[17]
Ran Gal, Lior Shapira, Eyal Ofek, and Pushmeet Kohli. 2014. FLARE: Fast layout
for augmented reality applications. In 2014 IEEE International Symposium on
Mixed and Augmented Reality (ISMAR). IEEE, New York, NY, USA, 207–212.
[18]
Claire C Gordon, Thomas Churchill, Charles E Clauser, Bruce Bradtmiller, and
John T McConville. 1989. Anthropometric survey of US army personnel: methods
and summary statistics 1988. Technical Report. Anthropology Research Project
Inc., Yellow Springs, OH.
[19]
Jens Grubert, Tobias Langlotz, Stefanie Zollmann, and Holger Regenbrecht. 2017.
Towards Pervasive Augmented Reality: Context-Awareness in Augmented Reality.
IEEE Transactions on Visualization and Computer Graphics 23, 6 (2017), 1706–1724.
[20]
Wenkai Han and Hans-Jörg Schulz. 2020. Beyond Trust Building – Calibrating
Trust in Visual Analytics. In Proceedings of the Workshop on TRust and EXperience
in Visual Analytics (TREX). IEEE, New York, NY, USA, 9–15. to appear.
[21]
Sandra G. Hart and Lowell E. Staveland. 1988. Development of NASA-TLX
(Task Load Index): Results of Empirical and Theoretical Research. In Human
Mental Workload, Peter A. Hancock and Najmedin Meshkati (Eds.). Advances
in Psychology, Vol. 52. Elsevier (North Holland Publishing Co.), Amsterdam,
Netherlands, 139–183. https://doi.org/10.1016/S0166-4115(08)62386-9
[22]
Jennifer L. Hicks, Thomas K. Uchida, Ajay Seth, Apoorva Rajagopal, and Scott L.
Delp. 2015. Is My Model Good Enough? Best Practices for Verification and
Validation of Musculoskeletal Models and Simulations of Movement. Journal
of Biomechanical Engineering 137, 2 (Feb. 2015), Article 020905, 24 pages.
https://doi.org/10.1115/1.4029304
[23]
Juan David Hincapié-Ramos, Xiang Guo, Paymahn Moghadasian, and Pourang
Irani. 2014. Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-Air
Interactions. In Proceedings of the SIGCHI Conference on Human Factors in Com-
puting Systems (Toronto, Ontario, Canada) (CHI ’14). Association for Computing
Machinery, New York, NY, USA, 1063–1072. https://doi.org/10.1145/2556288.
2557130
[24]
Verne T Inman, JB deC M Saunders, and LeRoy C Abbott. 1944. Observations on
the function of the shoulder joint. The Journal of Bone and Joint Surgery 26, 1
(1944), 1–30.
[25]
Sujin Jang, Wolfgang Stuerzlinger, Satyajit Ambike, and Karthik Ramani. 2017.
Modeling Cumulative Arm Fatigue in Mid-Air Interaction Based on Perceived
Exertion and Kinetics of Arm Motion. In Proceedings of the 2017 CHI Conference
on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17).
Association for Computing Machinery, New York, NY, USA, 3328–3339. https:
//doi.org/10.1145/3025453.3025523
[26]
Simon Julier, Marco Lanzagorta, Yohan Baillot, Lawrence Rosenblum, Steven
Feiner, Tobias Hollerer, and Sabrina Sestito. 2000. Information filtering for mobile
augmented reality. In Proceedings IEEE and ACM International Symposium on
Augmented Reality (ISAR 2000). IEEE, New York, NY, USA, 3–11.
[27]
Janin Koch, Andrés Lucero, Lena Hegemann, and Antti Oulasvirta. 2019. May
AI? Design Ideation with Cooperative Contextual Bandits. In Proceedings of the
2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland
Uk) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–12.
https://doi.org/10.1145/3290605.3300863
[28]
Joseph J LaViola Jr, Ernst Kruijff, Ryan P McMahan, Doug Bowman, and Ivan P
Poupyrev. 2017. 3D user interfaces: theory and practice. Addison-Wesley Profes-
sional, Boston, MA, USA.
[29]
David Ledo, Steven Houben, Jo Vermeulen, Nicolai Marquardt, Lora Oehlberg,
and Saul Greenberg. 2018. Evaluation Strategies for HCI Toolkit Research. In
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
(Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New
York, NY, USA, Article 36, 17 pages. https://doi.org/10.1145/3173574.3173610
[30]
David Lindlbauer, Anna Maria Feit, and Otmar Hilliges. 2019. Context-Aware
Online Adaptation of Mixed Reality Interfaces. In Proceedings of the 32nd Annual
ACM Symposium on User Interface Software and Technology (New Orleans, LA,
USA) (UIST ’19). ACM, New York, NY, USA, 147–160. https://doi.org/10.1145/
3332165.3347945
[31]
Mingyu Liu, Mathieu Nancel, and Daniel Vogel. 2015. Gunslinger: Subtle Arms-
down Mid-Air Interaction. In Proceedings of the 28th Annual ACM Symposium on
User Interface Software & Technology (Charlotte, NC, USA) (UIST ’15). Association
for Computing Machinery, New York, NY, USA, 63–71. https://doi.org/10.1145/
2807442.2807489
[32]
Lynn McAtamney and E Nigel Corlett. 1993. RULA: a survey method for the
investigation of work-related upper limb disorders. Applied ergonomics 24, 2
(1993), 91–99.
[33]
Roberto A. Montano Murillo, Sriram Subramanian, and Diego Martinez Plasencia.
2017. Erg-O: Ergonomic Optimization of Immersive Virtual Environments. In
Proceedings of the 30th Annual ACM Symposium on User Interface Software and
Technology (Québec City, QC, Canada) (UIST ’17). Association for Computing Ma-
chinery, New York, NY, USA, 759–771. https://doi.org/10.1145/3126594.3126605
[34]
Brad Myers, Scott E. Hudson, and Randy Pausch. 2000. Past, Present, and Future
of User Interface Software Tools. ACM Trans. Comput.-Hum. Interact. 7, 1 (March
2000), 3–28.
[35]
Benjamin Nuernberger, Eyal Ofek, Hrvoje Benko, and Andrew D. Wilson. 2016.
SnapToReality: Aligning Augmented Reality to the Real World. In Proceedings
of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose,
California, USA) (CHI ’16). Association for Computing Machinery, New York, NY,
USA, 1233–1244. https://doi.org/10.1145/2858036.2858250
[36]
Antti Oulasvirta, Niraj Ramesh Dayama, Morteza Shiripour, Maximilian John,
and Andreas Karrenbauer. 2020. Combinatorial Optimization of Graphical User
Interface Designs. Proc. IEEE 108, 3 (March 2020), 434–464.
[37]
David A. Rosenbaum. 2010. Chapter 2 - Core Problems. In Human Motor Control
(Second Edition), David A. Rosenbaum (Ed.). Academic Press, San Diego, 11–41.
https://doi.org/10.1016/B978-0-12-374226-1.00002-4
[38]
Katherine R. Saul, Xiao Hu, Craig M. Goehler, Meghan E. Vidt, Melissa Daly,
Anca Velisar, and Wendy M. Murray. 2015. Benchmarking of dynamic simulation
predictions in two software platforms using an upper limb musculoskeletal
model. Computer Methods in Biomechanics and Biomedical Engineering 18, 13
(2015), 1445–1458.
[39]
Suzanne C. Segerstrom and Lise Solberg Nes. 2007. Heart Rate Variability
Reflects Self-Regulatory Strength, Effort, and Fatigue. Psychological Science
18, 3 (2007), 275–281. https://doi.org/10.1111/j.1467-9280.2007.01888.x
PMID: 17444926.
[40]
Gisela Sjøgaard, Gabrielle Savard, and Carsten Juel. 1988. Muscle blood flow
during isometric activity and its relation to muscle fatigue. European journal of
applied physiology and occupational physiology 57, 3 (1988), 327–335.
[41]
Markus Tatzgern, Valeria Orso, Denis Kalkofen, Giulio Jacucci, Luciano Gam-
berini, and Dieter Schmalstieg. 2016. Adaptive information density for augmented
reality displays. In 2016 IEEE Virtual Reality (VR). IEEE, New York, NY, USA, 83–
92.
[42]
Kashyap Todi, Daryl Weir, and Antti Oulasvirta. 2016. Sketchplore: Sketch and
Explore with a Layout Optimiser. In Proceedings of the 2016 ACM Conference on
Designing Interactive Systems (Brisbane, QLD, Australia) (DIS ’16). Association for
Computing Machinery, New York, NY, USA, 543–555. https://doi.org/10.1145/
2901790.2901817
[43]
Deepak Tolani and Norman I Badler. 1996. Real-Time Inverse Kinematics of
the Human Arm. Presence: Teleoperators and Virtual Environments 5, 4 (1996),
393–401. https://doi.org/10.1162/pres.1996.5.4.393
[44]
Franziska Zacharias, Christoph Borst, and Gerd Hirzinger. 2007. Capturing
robot workspace structure: representing robot capabilities. In 2007 IEEE/RSJ
International Conference on Intelligent Robots and Systems. IEEE, New York, NY,
USA, 3229–3236.