Virtual Humans in AR: Evaluation of Presentation Concepts in an Industrial Assistance Use Case



ABSTRACT
Embedding virtual humans in educational settings enables the transfer of the approved concepts of learning by observation and imitation of experts to extended reality scenarios. Whilst various presentation concepts of virtual humans for learning have been investigated in sports and rehabilitation, little is known regarding industrial use cases. In prior work on manual assembly, Lampen et al. [21] show that three-dimensional (3D) registered virtual humans can provide assistance as effective as state-of-the-art HMD-based AR approaches. We extend this work by conducting a comparative user study (N=30) to verify implementation costs of assistive behavior features and 3D registration. The results reveal that the basic concept of a 3D registered virtual human is limited and comparable to a two-dimensional screen aligned presentation. However, by incorporating additional assistive behaviors, the 3D assistance concept is enhanced and shows significant advantages in terms of cognitive savings and reduced errors. Thus, it can be concluded that this presentation concept is valuable in situations where time is less crucial, e.g. in learning scenarios or during complex tasks.
Eva Lampen
EvoBus GmbH
Neu-Ulm, Germany
Jannes Lehwald
EvoBus GmbH
Neu-Ulm, Germany
Thies Pfeier
Faculty of Technology, University of
Applied Sciences Emden/Leer
Emden, Germany
Figure 1: Illustration of the evaluated presentation concepts of a virtual human in AR. Contrasted are (a) a 2D screen aligned presentation vs. (b) a 3D registered presentation; the field of view of the AR device is shown in blue.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
VRST ’20, November 1–4, 2020, Virtual Event, Canada
©2020 Association for Computing Machinery.
ACM ISBN 978-1-4503-7619-8/20/11. . . $15.00
CCS CONCEPTS
• Computer systems organization → Embedded systems; Redundancy; Robotics; • Networks → Network reliability.
KEYWORDS
Augmented Reality, Virtual Human, Expert-Based Learning
ACM Reference Format:
Eva Lampen, Jannes Lehwald, and Thies Pfeier. 2020. Virtual Humans in
AR: Evaluation of Presentation Concepts in an Industrial Assistance Use
Case. In 26th ACM Symposium on Virtual Reality Software and Technology
(VRST ’20), November 1–4, 2020, Virtual Event, Canada. ACM, New York, NY,
USA, 5 pages.
1 INTRODUCTION
Various methods have been developed and investigated with the goal of teaching motor tasks in heterogeneous domains, for example in sports [ ], rehabilitation [ ] or industry [ ]. The effectiveness of such methods is strongly related to the specific use case with its pedagogical goal and to the user itself [ ]. In learning scenarios where the user has no prior task-related knowledge, instructions provided by an expert in one-to-one settings are preferable and lead to better task performance [ ]. With the possibility to record, or to simulate, an expert's motions and afterwards present them digitally to the trainee, digital expert-based learning is enabled. Besides the presentation of recorded real-world videos [ ], using virtual humans in educational settings gains importance, in particular due to a general shortage of experts [ ]. Whilst research on expert-based learning with virtual humans and different presentation concepts exists in other domains, limited empirical evidence demonstrates the usefulness of such concepts during manual assembly tasks, having regard to implementation costs. This is surprising, as the complexity of manual assembly tasks is increasing, driven by the growing number of different product variants [ ], and hence an adoption of assistance concepts successfully employed in other domains seems advised.
This paper addresses both the general applicability of a virtual human to an industrial assistance setting with realistic assembly tasks and the merit of two different presentation concepts.
2 RELATED WORK
Besides the utilization of virtual humans to increase the feeling of presence of other users in collaborative work situations [ ], virtual humans are more and more used to enable digital motor learning. A wide range of different presentation concepts of virtual humans in motor learning scenarios exists. The presentation concepts differ with regard to their implementation costs and learning outcome. In the following, an overview of developed and evaluated presentation concepts is given.
2.1 Two-Dimensional Presentation Concepts
Comparable to the visualization of real-world videos, the presentation of animated virtual humans in a two-dimensional (2D) screen aligned way has been realized in different ways [ ]. A benefit of displaying animations over real-world videos is that the learning outcome can be enhanced by focusing on relevant information [ ] and integrating assistive behavior [ ]. Due to that, in addition to the presentation of pre-recorded dance videos, a 2D screen aligned skeleton representation was integrated in the YouMove system [1] to enable feedback provision. Whilst a skeleton presentation instead of a realistic human visualization was utilized there, findings suggest that within dynamic information presentations more realistic shapes increase acceptance [ ] and movement accuracy [ ]. Displaying three-dimensional (3D) content in a 2D way incurs lower implementation costs. Therefore, a screen aligned presentation format is preferable as long as there is no need for environmental interaction of the virtual character [32].
2.2 Three-Dimensional Presentation Concepts
Extended reality (XR) technologies facilitate a realistic presentation of a virtual human by enabling a non screen-aligned, 3D registered presentation. Thereby, XR-based imitation learning becomes more and more realistic and is widely utilized in sports, despite higher implementation costs.
Chen et al. [2] introduced ImmerTai, an immersive virtual reality (VR) training tool for remote motion training. The Tai-Chi trainer's motions are mapped to a virtual human and observed and imitated by the attendees. The results indicate advantages of 3D environments for learning scenarios in terms of learning time, motion similarity and user experience over the presentation of 2D material.
Apart from presenting a virtual human in an exocentric perspective, which induces cognitive load due to multiple stimuli and the effort needed to transfer the perceived exocentric motion to one's own body [ ], egocentric presentations are utilized. AR-Arm [12] is an immersive augmented reality (AR) tool to train Tai-Chi motions of the upper limbs in a first-person perspective: the movements of the virtual arms are displayed in an egocentric perspective and imitated by the users, which leads to benefits in terms of body ownership compared to a 2D screen method. Moreover, the idea of motor learning from a first-person perspective was transferred to VR with the system Onebody [14]. The results reveal advantages in terms of posture matching accuracy, user experience and time of completion over 2D presentation techniques like video, video conferencing and a VR third-person 3D view.
Although the idea of presenting a 3D virtual human, or body parts of such a human, gains importance in sports scenarios, to our knowledge only Lampen et al. [21] have proposed the adoption for motor learning in an industrial setting so far. The authors displayed motions of basic manual assembly tasks in AR, performed by a true-to-scale 3D registered virtual human. The presentation of the virtual human, which was benchmarked against a paper-based method and a 3D registered product-related presentation, decreased cognitive load and supported performance parameters (i.e. errors, completion time).
3 CONTRIBUTION
The present work builds upon a large body of related work. Whilst a general usefulness of a virtual human during basic manual assembly tasks was proven [21], the questions arise whether a 2D screen aligned presentation with lower implementation costs evokes similar results and whether the integration of assistive behavior could enhance the concept. Therefore, the presented work extends previous work in several ways:
• Evaluation with regard to relevant performance criteria for manual assembly use cases in a setup with realistic assembly tasks
• Evaluation of the merit of the integration of assistive behavior in a multi-stationary assembly setting
• Comparison of a 3D registered to a less complex 2D screen aligned presentation concept in HMD-based AR
4 ASSISTANCE CONCEPTS
In the following, a description of a set of three different presentation concepts (see Fig. 1) in an industrial assistance use case is given with regard to the related work. The general concepts of the evaluated virtual human assistances are presented, as well as a brief explanation of their technical realization. For further information see previous work [20] and the additional video material [19].
The hardware setup consists of a head-mounted display (HMD, MS HoloLens I), to allow for hands-free interaction with the environment [ ]. The motions of the expert have been captured with an Xsens system [27] and mapped onto a virtual human.
4.1 Presentation Concepts
Two aspects of presentation concepts are contrasted. First, whether
the virtual human is presented in 2D (in-view) or 3D (in-situ), and
second, whether additional assistive behaviors are incorporated or
not (see Tab. 1). Due to feasibility constraints the last aspect is only
contrasted in-situ.
Table 1: Considered presentation concepts

Presentation Concept   Dimensional View   Perspective View   Assistive Behavior
VH2D                   in-view            1st & 3rd person   -
VH3D                   in-situ            1st & 3rd person   -
VH3D+AB                in-situ            1st & 3rd person   [20]
4.1.1 Virtual Human 2D Screen Aligned. The 2D screen aligned concept (VH2D) is realized as follows (see Fig. 1(a)): a virtual 2D screen is displayed in front of the user, always at the same distance and position in the field of view (FOV) of the user. It is thus carried along with any head movement, which was shown to be preferable to registered 2D methods [ ]. During walking motions, the 2D view is rendered using a third-person view; during environmental interactions of the virtual human, the 2D view switches to a first-person perspective. This corresponds to stepping inside the virtual human as realized in the 3D registered presentation concepts, and therefore increases comparability between the approaches. The visualization of the environmental interaction linked with the change of camera perspective is invoked by the user entering the virtual human's personal space [15] after walking tasks, according to the basic chapter functionality presented by Lampen et al. [20].
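The personal-space trigger described above boils down to a distance check on the floor plane. The sketch below illustrates this; the radius value and all names are illustrative assumptions, not taken from the paper:

```python
import math

# Illustrative radius of the virtual human's personal space in metres;
# the paper does not report the exact threshold that was used.
PERSONAL_SPACE_RADIUS = 1.2

def entered_personal_space(user_pos, vh_pos, radius=PERSONAL_SPACE_RADIUS):
    """True once the user's floor position (x, z) lies within the virtual
    human's personal space, triggering the switch from the third-person
    walking view to the first-person interaction view."""
    dx = user_pos[0] - vh_pos[0]
    dz = user_pos[1] - vh_pos[1]
    return math.hypot(dx, dz) <= radius
```

In practice such a check would run once per frame against the tracked HMD position, with the perspective switch latched so it fires only on the transition into the zone.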
4.1.2 Virtual Human 3D Registered. With regard to the dimensional view, a presentation concept of a 3D registered virtual human (VH3D) is included (see Fig. 1(b)). Considering the related work on true-to-scale presentation and the stated increasing adoption of such concepts within movement learning scenarios, the implemented visualization comprises a real-size representation of a virtual human with the possibility to interact with spatially registered virtual objects. Similar to the VH2D concept, the waiting behavior is integrated to trigger the subsequent motions.
4.1.3 Virtual Human 3D Registered + Assistive Behavior. The evaluated presentation concept of a 3D registered virtual human with assistive behavior (VH3D+AB) mainly follows the approach introduced in previous work [20] (see Fig. 1(b)). Regarding the dimensional view, the presentation concept is similar to the aforementioned VH3D concept. However, in addition, four assistive behavior features are integrated to prevent information loss due to the small FOVs of current AR HMDs: visibility control, progress control, attention control and feedback control.
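To illustrate the kind of decision a visibility-control feature has to make, the sketch below tests whether a world point (e.g. the virtual human's position) falls inside the HMD's narrow view frustum. The roughly 30°x17° default corresponds to the HoloLens 1 FOV; all names here are hypothetical and not the paper's implementation:

```python
import math

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return tuple(c / n for c in v)

def in_fov(cam_pos, forward, up, target, h_fov_deg=30.0, v_fov_deg=17.0):
    """True if `target` lies inside the HMD's view frustum.

    A visibility-control feature could pause the virtual human or redirect
    the user's attention whenever this returns False."""
    f = _normalize(forward)
    side = _normalize(_cross(f, up))   # lateral camera axis
    u = _cross(side, f)                # re-orthogonalized up axis
    d = tuple(t - c for t, c in zip(target, cam_pos))
    x, y, z = _dot(d, side), _dot(d, u), _dot(d, f)
    if z <= 0:                         # behind the camera
        return False
    return (abs(math.degrees(math.atan2(x, z))) <= h_fov_deg / 2 and
            abs(math.degrees(math.atan2(y, z))) <= v_fov_deg / 2)
```

The angular test (rather than a dot-product threshold) keeps the horizontal and vertical limits independent, which matters for the strongly non-square FOV of current AR HMDs.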
4.2 Technical Setup
All described concepts of a virtual human are implemented using the game engine Unity. In general, the technical framework presented in previous work [20] is adopted. Regarding the integration of multiple concepts, each of the stated presentation concepts is realized within a unique Unity scene, whereby a simple deployment of the respective presentation concept is ensured. The technical setup is integrated within a car door assembly environment of 6.0 m x 7.0 m with a ceiling height of 3.0 m. For gathering environmental knowledge, and thereby enabling the incorporation of the behavioral features, the instruction device as well as three Microsoft Kinects are used as sensors. A virtual true-to-scale reflection of the environmental setup is mapped to the real world by utilizing Vuforia 8.3.8.
5 USER STUDY
Within a study, the effects of a virtual human-based assistance are investigated in consideration of the dimensional view as well as the enrichment with assistive behavior. Furthermore, by taking performance parameters into account, the open gap between presentation concepts of virtual humans and industrial assistance use cases is addressed.
5.1 Experimental Design
The aforementioned technical and environmental setup is used, with the described set of presentation concepts as independent variable. To prevent learning effects, a between-subject study design was utilized. Three main tasks (door handle, door module, door panel assembly) derived from a real-world car door assembly station, each consisting of 15-22 sub-tasks (picking, carrying, plugging and screwing tasks), are adopted within the experiment [ ]. Moreover, to ensure equal task complexity across the different presentation concepts, the task sequence remained unchanged. To measure the performance of the users, the absolute task completion time as well as the relative number of incorrect sub-tasks are measured as dependent variables. A sub-task is considered incorrect if either the spatial placement of the component does not match the defined target position, the wrong component was assembled, or the task was not completed at all. A main task was completed when the participant entered the start/end zone again and confirmed the completion without a further hint of the experimenter regarding incomplete sub-tasks. Furthermore, due to the general industrial assistance system's goal of simplified decision-making [ ], the cognitive load was measured for each main task using the NASA-RTLX score [13]. To gather insights into the subjective perception of the assessed concepts, the perceived experience was identified with the UEQ-S [28].
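For reference, the two questionnaire scores can be computed as follows. This is a generic sketch of Raw TLX scoring (Hart [13]) and UEQ-S scoring (Schrepp et al. [28]); it is not code from the study:

```python
def nasa_rtlx(ratings):
    """Raw TLX (NASA-RTLX): unweighted mean of the six subscale ratings
    (mental, physical and temporal demand, performance, effort,
    frustration), each on a 0-100 scale."""
    if len(ratings) != 6:
        raise ValueError("NASA-TLX has exactly six subscales")
    return sum(ratings) / 6.0

def ueq_s(item_scores):
    """UEQ-S: eight items rescaled to -3..+3; the first four items form
    the pragmatic quality scale, the last four the hedonic quality scale.
    Returns (pragmatic, hedonic, overall)."""
    if len(item_scores) != 8:
        raise ValueError("UEQ-S has exactly eight items")
    pragmatic = sum(item_scores[:4]) / 4.0
    hedonic = sum(item_scores[4:]) / 4.0
    return pragmatic, hedonic, (pragmatic + hedonic) / 2.0
```

The "raw" in NASA-RTLX refers to dropping the pairwise subscale weighting of the full NASA-TLX, which shortens administration at little cost in sensitivity.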
5.2 Procedure
The participants were equipped with the HMD, and a brief training scenario, unified across the different presentation concepts, was conducted as often as subjectively required. During the main experimental phase, the participants were asked to perform the tasks shown by the virtual human, with first priority on making no errors and second priority on speed. After finishing the first main task, the NASA-RTLX questionnaire was handed out. The procedure was repeated for each of the three main tasks, which took approximately 30 minutes per person. At the end, the participants had to answer the UEQ-S questionnaire.
5.3 Participants
A group of 30 voluntary participants was recruited for the between-subject experiment, without any extra rewards. The participants in the VH2D group (2 females, 8 males) were aged between 24 and 49 (σ=9.45), in the VH3D group (3 females, 7 males) between 24 and 59 (σ=10.18) and in the VH3D+AB group (2
Figure 2: Results of (a) incorrect tasks, (b) completion time, (c) cognitive load and (d) user experience.
Table 2: Summary statistics for the four evaluation criteria

Evaluation criteria   VH2D (μ / σ)       VH3D (μ / σ)       VH3D+AB (μ / σ)    p      ω²
Incorrect tasks       35.50% / 8.66%     46.94% / 21.15%    17.32% / 12.93%    0.001  0.34
Completion time       175.00s / 18.09s   195.75s / 46.27s   233.06s / 49.41s   0.017  0.20
Cognitive load        59.17% / 10.08%    43.39% / 15.35%    27.83% / 12.35%    0.001  0.46
User experience       -0.14 / 0.09       1.01 / 0.80        1.43 / 1.07        0.004  0.28
females, 8 males) between 22 and 59 (σ=10.37). The assembly tasks as well as the technical setup were new to all participants.
5.4 Analysis
Overall, 30 data sets for the four evaluation criteria are presented and statistically compared (see Tab. 2 and Fig. 2). Besides the comparison of the respective means (μ) and standard deviations (σ), Shapiro-Wilk tests as well as Levene's tests proved normal distribution and variance homogeneity for the testing scenarios (p > 0.05). Consequently, one-way ANOVAs and subsequently Tukey's HSD post-hoc tests were conducted, provided that significant differences were identified. The effect sizes (ω²) were quantified with 0.01 for a small, 0.06 for a medium and 0.14 for a large effect [9].
6 RESULTS AND DISCUSSION
The results show that the additional assistive behavior concept is especially beneficial in terms of error reduction (Tukey HSD: p=0.043) and cognitive load reduction (Tukey HSD: p=0.001 and p=0.04 for the two pairwise comparisons). The concept exceeds the others significantly within these parameters, but underperforms considering the time criterion (Tukey HSD: p=0.014). With regard to the specific intention of the integrated features (i.e. prevention of information loss), the higher completion time could be explained by the occurrence of a speed-accuracy trade-off. Thus, the development of other features aiming at time saving (e.g. gamification elements like visualizing progress [17]) could lead to different results.
Considering the dimensional view, the 3D view has significant positive effects with regard to the user experience (Tukey HSD: p=0.037) and the perceived cognitive load (Tukey HSD: p=0.037). Reduced cognitive load can be linked to known effects of product-related 3D registered visualization [ ]. The lower user experience scores for the 2D method, especially for the hedonic values, can be explained by familiarity with the presentation concepts: the in-view presentation of a 2D screen is similar to real-world screens, and therefore known to the participants, whereas a virtual human has no comparable real-world counterpart.
Consequently, the worth of the extra costs for the realization of specific presentation concepts of a virtual human assistance in AR depends on the particular requirements of the use case. Whilst the 2D presentation concept, with its lower implementation costs, is beneficial for the assistance of unknown and less challenging tasks with small error rates, the results for the cognitive load criterion reveal the advantage of a 3D registered method for complex tasks within learning scenarios. Moreover, the evaluated 3D presentation concept with its assistive behavior takes effect particularly in situations where free attention capacity is needed and time saving is less important than faultlessness. With regard to the industrial assistance use case, such free attention capacities could be shifted to executing the motions in an ergonomic way, or to consolidating the specific tasks. The stated user experience results, together with the importance of motivational aspects within learning scenarios [ ], strengthen the worth of a 3D registered implementation in such scenarios.
7 CONCLUSION
This paper presents an evaluation of different presentation concepts of a virtual human in HMD-based AR and furthermore provides insights on their applicability in industrial assistance use cases.
During the conduction of manual assembly tasks derived from the real world, performance criteria (i.e. completion time, incorrect tasks) as well as perceived cognitive load and user experience were measured. Our results demonstrate significant advantages of a 3D registered virtual human enhanced with assistive behavior in terms of cognitive savings and reduced errors. However, these results can only be revealed by the incorporation of assistive behavior. No significant differences concerning performance criteria are shown by the comparison of a 3D registered with a 2D screen aligned presentation.
These findings highlight the usefulness of a 3D registered virtual human with assistive behavior in industrial assistance use cases, primarily in scenarios requiring a high amount of free attention capacity, like learning scenarios or the conduction of complex tasks, in which time is less crucial. In contrast, a 2D screen aligned presentation concept with its smaller implementation costs is preferable for the assistance of less challenging tasks with small error rates.
To further enhance the assistance approach of a virtual human in HMD-based AR, the authors will focus on the presentation of solely the relevant parts of the virtual human, due to the small FOV of current AR HMDs.
ACKNOWLEDGMENTS
The authors acknowledge the financial support by the Federal Ministry of Education and Research of Germany (MOSIM project, grant no. 01IS18060A-H).
REFERENCES
[1] Fraser Anderson, Tovi Grossman, Justin Matejka, and George Fitzmaurice. 2013. YouMove: enhancing movement training with an augmented reality mirror. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (UIST '13), St. Andrews, Scotland, United Kingdom. ACM Press.
[2] Xiaoming Chen, Zhibo Chen, Ye Li, Tianyu He, Junhui Hou, Sen Liu, and Ying He. 2019. ImmerTai: Immersive Motion Learning in VR Environments. 58 (2019).
[3] Alexandru Dancu. 2012. Motor learning in a mixed reality environment. In Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design (NordiCHI '12), Copenhagen, Denmark. ACM Press, 811.
[4] Maximilian Dürr, Rebecca Weber, Ulrike Pfeil, and Harald Reiterer. 2020. EGuide: Investigating different Visual Appearances and Guidance Techniques for Egocentric Guidance Visualizations. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '20), Sydney, NSW, Australia. ACM, 311–322.
[5] Eleni Efthimiou, Stavroula-Evita Fotinea, Anna Vacalopoulou, Xanthi S. Papageorgiou, Alexandra Karavasili, and Theodore Goulas. 2019. User centered design in practice: adapting HRI to real user needs. In Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA '19), Rhodes, Greece. ACM Press, 425–429.
[6] Elsa Eiriksdottir and Richard Catrambone. 2011. Procedural Instructions, Principles, and Examples: How to Structure Instructions for Procedural Tasks to Enhance Performance, Learning, and Transfer. 53, 6 (2011), 749–770.
[7] Timo Engelke, Jens Keil, Pavel Rojtberg, Folker Wientapper, Michael Schmitt, and Ulrich Bockholt. 2015. Content first: a concept for industrial augmented reality maintenance applications using mobile devices. In Proceedings of the 6th ACM Multimedia Systems Conference (MMSys '15), Portland, Oregon. ACM Press, 105–111.
[8] A. Fast-Berglund, T. Faessberg, F. Hellman, A. Davidsson, and J. Stahre. 2013. Relations between complexity, quality and cognitive automation in mixed-model assembly. 32, 3 (2013), 449–455.
[9] Andy Field and Graham Hole. 2013. How to Design and Report Experiments (repr. ed.). Sage, Los Angeles.
[10] Markus Funk, Thomas Kosch, Romina Kettner, Oliver Korn, and Albrecht Schmidt. 2016. motionEAP: An Overview of 4 Years of Combining Industrial Assembly with Augmented Reality for Industry 4.0. (2016), 1–4.
[11] Markus Funk, Mareike Kritzler, and Florian Michahelles. 2017. HoloCollab: a shared virtual platform for physical assembly training using spatially-aware head-mounted displays. In Proceedings of the Seventh International Conference on the Internet of Things (IoT '17), Linz, Austria. ACM Press, 1–7.
[12] Ping-Hsuan Han, Kuan-Wen Chen, Chen-Hsin Hsieh, Yu-Jie Huang, and Yi-Ping Hung. 2016. AR-Arm: Augmented Visualization for Guiding Arm Movement in the First-Person Perspective. In Proceedings of the 7th Augmented Human International Conference (AH '16), Geneva, Switzerland. ACM Press, 1–4.
[13] Sandra G. Hart. 2006. NASA-Task Load Index (NASA-TLX); 20 Years Later. 50, 9 (2006), 904–908.
[14] Thuong N. Hoang, Martin Reinoso, Frank Vetere, and Egemen Tanin. 2016. Onebody: Remote Posture Guidance System using First Person View in Virtual Environment. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI '16), Gothenburg, Sweden. ACM Press, 1–10.
[15] Iason Kastanis and Mel Slater. 2012. Reinforcement learning utilizes proxemics: An avatar learns to manipulate the position of people in immersive virtual reality. 9, 1 (2012), 1–15.
[16] Kangsoo Kim, Luke Boelling, Steffen Haesler, Jeremy Bailenson, Gerd Bruder, and Greg F. Welch. 2018. Does a Digital Assistant Need a Body? The Influence of Visual Embodiment and Social Behavior on the Perception of Intelligent Virtual Agents in AR. In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Munich, Germany. IEEE, 105–114.
[17] Oliver Korn and Adrian Rees. 2019. Affective effects of gamification: using biosignals to measure the effects on working and learning users. In Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA '19), Rhodes, Greece. ACM Press, 1–10.
[18] Felix Kosmalla, Florian Daiber, Frederik Wiehr, and Antonio Krüger. 2017. ClimbVis: Investigating In-situ Visualizations for Understanding Climbing Movements by Demonstration. In Proceedings of the ACM International Conference on Interactive Surfaces and Spaces (ISS '17), Brighton, United Kingdom. ACM Press, 270–279.
[19] Eva Lampen, Jannes Lehwald, and Thies Pfeiffer. 2020. Additional Material: Virtual Humans in AR: Evaluation of Presentation Concepts in an Industrial Assistance Use Case. Zenodo.
[20] Eva Lampen, Jannes Lehwald, and Thies Pfeiffer. 2020. A Context-Aware Assistance Framework for Implicit Interaction with an Augmented Human. In Virtual, Augmented and Mixed Reality. Industrial and Everyday Life Applications, Jessie Y. C. Chen and Gino Fragomeni (Eds.). Lecture Notes in Computer Science, Vol. 12191. Springer International Publishing, 91–110.
[21] Eva Lampen, Jonas Teuber, Felix Gaisbauer, Thomas Bär, Thies Pfeiffer, and Sven Wachsmuth. 2019. Combining Simulation and Augmented Reality Methods for Enhanced Worker Assistance in Manual Assembly. 81 (2019), 588–593.
[22] Bibeg Hang Limbu, Halszka Jarodzka, Roland Klemke, and Marcus Specht. 2018. Using sensors and augmented reality to train apprentices using recorded expert performance: A systematic literature review. 25 (2018), 1–22.
[23] Frieder Loch, Fabian Quint, and Iuliia Brishtel. 2016. Comparing Video and Augmented Reality Assistance in Manual Assembly. In 2016 12th International Conference on Intelligent Environments (IE), London, United Kingdom. IEEE.
[24] Sergio Moya, Sergi Grau, Dani Tost, Ricard Campeny, and Marcel Ruiz. 2011. Animation of 3D Avatars for Rehabilitation of the Upper Limbs. In 2011 Third International Conference on Games and Virtual Worlds for Serious Applications, Athens, Greece. IEEE, 168–171.
[25] Andrew Robb, Andrea Kleinsmith, Andrew Cordar, Casey White, Adam Wendling, Samsun Lampotang, and Benjamin Lok. 2016. Training Together: How Another Human Trainee's Presence Affects Behavior during Virtual Human-Based Team Training. 3 (2016).
[26] Cindy M. Robertson, Blair MacIntyre, and Bruce N. Walker. 2008. An evaluation of graphical context when the graphics are outside of the task area. In 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, Cambridge, UK. IEEE, 73–76.
[27] Martin Schepers, Matteo Giuberti, and Giovanni Bellusci. 2018. Xsens MVN: Consistent Tracking of Human Motion Using Inertial Sensing. (2018).
[28] Martin Schrepp, Andreas Hinderks, and Jörg Thomaschewski. 2017. Design and Evaluation of a Short Version of the User Experience Questionnaire (UEQ-S). 4, 6 (2017), 103.
[29] Roland Sigrist, Georg Rauter, Robert Riener, and Peter Wolf. 2013. Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review. 20, 1 (2013), 21–53.
[30] Maurício Sousa, João Vieira, Daniel Medeiros, Artur Arsenio, and Joaquim Jorge. 2016. SleeveAR: Augmented Reality for Rehabilitation using Realtime Feedback. In Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI '16), Sonoma, California, USA. ACM Press, 175–185.
[31] Theophilus Teo, Gun A. Lee, Mark Billinghurst, and Matt Adcock. 2019. 360Drops: Mixed Reality Remote Collaboration using 360 Panoramas within the 3D Scene. In SIGGRAPH Asia 2019 Emerging Technologies, Brisbane, QLD, Australia. ACM.
[32] Daniel Wagner, Mark Billinghurst, and Dieter Schmalstieg. 2006. How real should virtual characters be? In Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (ACE '06), Hollywood, California. ACM Press, 57.
[33] Sabine Webel, Ulrich Bockholt, Timo Engelke, Nirit Gavish, and Franco Tecchia. 2011. Design Recommendations for Augmented Reality based Training of Maintenance Skills. In Recent Trends of Mobile Collaborative Augmented Reality Systems, Leila Alem and Weidong Huang (Eds.). Springer, New York, 69–82.
[34] Stefan Wiedenmaier, Olaf Oehme, Ludger Schmidt, and Holger Luczak. 2003. Augmented Reality (AR) for Assembly Processes Design and Experimental Evaluation. 16, 3 (2003), 497–514.
[35] Tzong-Hann Wu, Feng Wu, Ci-Jyun Liang, Yi-Fen Li, Ching-Mei Tseng, and Shih-Chung Kang. 2019. A virtual reality tool for training in global engineering collaboration. 18, 2 (2019), 243–255.
... Thus, it seems that the use of AR technology for behavioral training is a relatively new research topic in this area. Applications ranged from medical applications [65,67] to the foodservice [63], green driving [64], and industrial assembly [66] fields. In this context, the use of innovative technological systems and devices was more frequent, as in the case of complex gaming systems that used head-up devices (HUDs) [64] or were equipped with sensors and kinetic simulators [66]. The use of smart glasses [49] and head-mounted devices (HMDs) [66] was also frequent. ...
In recent years, educational researchers and practitioners have become increasingly interested in new technologies for teaching and learning, including augmented reality (AR). The literature has already highlighted the benefit of AR in enhancing learners’ outcomes in natural sciences, with a limited number of studies exploring the support of AR in social sciences. Specifically, there have been a number of systematic and scoping reviews in the AR field, but no peer-reviewed review studies on the contribution of AR within interventions aimed at teaching or training behavioral skills have been published to date. In addition, most AR research focuses on technological or development issues. However, limited studies have explored how technology affects social experiences and, in particular, the impact of using AR on social behavior. To address these research gaps, a scoping review was conducted to identify and analyze studies on the use of AR within interventions to teach behavioral skills. These studies were conducted across several intervention settings. In addition to this research question, the review reports an investigation of the literature regarding the impact of AR technology on social behavior. The state of the art of AR solutions designed for interventions in behavioral teaching and learning is presented, with an emphasis on educational and clinical settings. Moreover, some relevant dimensions of the impact of AR on social behavior are discussed in more detail. Limitations of the reviewed AR solutions and implications for future research and development efforts are finally discussed.
To cope with the growing complexity of manual assembly, new assistance methods are developed continuously. However, these hardware-dependent methods are not deployed in a context-aware manner. Hence, workers are not supported situationally, and new methods have to be implemented at great expense on a heterogeneous system landscape, causing disproportionate maintenance effort. As known from plant engineering, standardized encapsulation of specific methods offers a way to integrate heterogeneous applications into one generic system. Therefore, besides proposing a novel Extensible Worker Assistance (EWA) framework, the underlying novel concept of so-called Assistance Model Units (AMUs) is introduced as a standardized way to abstract from specific implementations and thus enable the integration of various assistance methods into one generic system. Furthermore, the applicability of the EWA framework and its underlying core concepts is demonstrated by a use-case-specific implementation within bus assembly. This constitutes a first step towards providing optimal worker assistance tailored to individual needs: the EWA framework can integrate different assistance methods and devices and consider various contextual knowledge within its context-aware assistance selection. Future work is needed to further develop and investigate the individual components of the comprehensive framework.
Virtual teachers embedded into reality enable the transfer of skills and knowledge to students by demonstration. However, authoring animations of virtual characters aligned with real environments is difficult for educators. In this study, a novel animation authoring framework for demonstration teaching is proposed based on human-computer interaction and intelligent algorithms, and its application is demonstrated in mixed reality (MR) experimental education. We developed a simulated experiment environment in which users could operate equipment with gestures via a Leap Motion controller. The interaction trajectory can be adaptively adjusted by the dynamic motion primitives algorithm to generate a virtual teacher demonstration trajectory aligned with the real environment. For the details of the virtual teacher animation, a deep reinforcement learning algorithm performs adaptive grasping of objects of different shapes and sizes. Our experimental results showed that the framework is easy to use: users were able to easily author a natural animation of a virtual teacher performing a chemistry experiment aligned with the real environment. User feedback also showed that the virtual teacher's demonstration animation in MR is impressive, and students can learn from the animation by observation and imitation.
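The abstract above mentions adapting a demonstrated interaction trajectory to a new situation via the dynamic motion primitives (DMP) algorithm. As a rough, self-contained sketch of that idea (not the authors' implementation; all names and parameter values here are illustrative), a one-dimensional discrete DMP can learn a forcing term from a demonstrated trajectory and replay it toward a new goal:

```python
import numpy as np

def learn_dmp(y_demo, dt, alpha=25.0, beta=25.0 / 4.0):
    """Learn the forcing term of a 1-D discrete DMP from a demonstrated trajectory."""
    yd = np.gradient(y_demo, dt)           # demonstrated velocity
    ydd = np.gradient(yd, dt)              # demonstrated acceleration
    y0, g = float(y_demo[0]), float(y_demo[-1])
    # Forcing term that makes the spring-damper system reproduce the demo:
    # ydd = alpha * (beta * (g - y) - yd) + f  =>  solve for f
    f = ydd - alpha * (beta * (g - y_demo) - yd)
    return {"f": f, "y0": y0, "g": g, "alpha": alpha, "beta": beta, "dt": dt}

def rollout(dmp, new_goal):
    """Replay the learned DMP toward a new goal (forcing term indexed by time step)."""
    y, yd = dmp["y0"], 0.0
    # Scale the forcing term to the new start-goal distance, as in standard DMPs.
    scale = (new_goal - dmp["y0"]) / (dmp["g"] - dmp["y0"] + 1e-8)
    out = []
    for f_t in dmp["f"]:
        ydd = dmp["alpha"] * (dmp["beta"] * (new_goal - y) - yd) + scale * f_t
        yd += ydd * dmp["dt"]              # explicit Euler integration
        y += yd * dmp["dt"]
        out.append(y)
    return np.array(out)
```

Replaying a smooth reach from 0 to 1 with `new_goal=2.0` yields a trajectory of the same shape that converges near 2.0, which is the property that lets a recorded demonstration be re-targeted to objects at new positions.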
As digitalization advances in industry, augmented reality (AR) is appearing in more and more application areas. Nevertheless, despite steadily evolving technology, industrial adoption lags behind the forecasts. There are existing works that address the classification of AR, but with a focus on the actual implementation or realization of the application. In order to better compare the application areas, and thus the actual problems in which AR can provide added value, and to derive requirements for industrial domains, this paper presents a classification scheme for these areas of use. Building on previous work, it is shown that application scenarios can be classified along three dimensions: the action to be supported, the life-cycle phase, and the degree of assistance. To this end, a systematic literature review of industrial AR applications and studies from 2016 to 2020 is conducted and classified according to the proposed scheme. In addition to the insights gained, the technologies used in the contributions are analyzed, such as the display technique, the level of detail, the maturity of the application, and the type of content creation. Furthermore, problems in implementation as well as future research topics and priorities are identified.
The automotive industry is currently facing massive challenges. Shorter product life cycles together with mass customization lead to high complexity in manual assembly tasks. This induces the need for effective manual assembly assistance that guides the worker faultlessly through the assembly steps while simultaneously decreasing completion time and cognitive load. While a simulation-based assistance visualizing an augmented digital human has been proposed in the literature, it lacks the ability to incorporate knowledge about the context of an assembly scenario through arbitrary sensor data. Within this paper, a general framework for the modular acquisition, interpretation, and management of context is presented. Furthermore, a novel context-aware assistance application in augmented reality is introduced, which enhances the previously proposed simulation-based assistance method with several context-aware features. Finally, a preliminary study (N=6) is conducted to give a first insight into the effectiveness of context-awareness for the simulation-based assistance with respect to subjective perception criteria. The results suggest that context-awareness improves the user experience in general, and the developed context-aware features were perceived as useful overall in terms of reducing errors, time, and cognitive load as well as increasing motivation. However, the developed software architecture offers potential for improvement, and future research considering performance parameters is mandatory.
Due to mass customization, product variety in the automotive industry has increased steeply, raising workers' cognitive load during manual assembly tasks. Although worker assistance methods for cognitive automation already exist, they prove insufficient in terms of usability and achieved time savings. Given the rising importance of simulation for autonomous production planning, a novel approach is proposed that uses human simulation data in worker assistance methods to alleviate cognitive load during manual assembly tasks. Within this paper, a new concept for augmented reality-based worker assistance is presented. Additionally, a comparative user study (N=24) against conventional worker assistance methods was conducted to evaluate a prototypical implementation of the concept. The results illustrate the potential of the novel approach to save cognitive resources and induce performance improvements. The implementation provided stable information presentation during the entire experiment. However, given the recency of the approach, further development and research are needed concerning performance adaptations and investigations of its effectiveness.
Here we present the methodological principles and technologies that form the backbone of the design of the i-Walk platform's human-robot interaction (HRI) environment. The reported work builds upon experience gained from the development of the multimodal HRI communication model of the MOBOT assistive robotic rollator and its end-user evaluation, leading to our enhanced methodology for identifying and prioritizing user needs as applied in our current HRI design approach. Emphasis is placed on adopting multimodal communication patterns from actual human-human interaction in the context of mobility rehabilitation, which may enrich human-robot communication by making the robotic device's interaction more natural, i.e., by adding more "human" characteristics to both the understanding and the reaction capabilities of the robot.
What emotional effects does gamification have on users who work or learn with repetitive tasks? In this work, we use biosignals to analyze these affective effects of gamification. After a brief discussion of related work, we describe the implementation of an assistive system that augments work by projecting elements for guidance and gamification. We also show how this system can be extended to analyze users' emotions. In a user study, we analyze both biosignals (facial expressions and electrodermal activity) and regular performance measures (error rate and task completion time). For the performance measures, the results confirm known effects such as increased speed and a slightly increased error rate. In addition, the analysis of the biosignals provides strong evidence for two major affective effects: the gamification of work and learning tasks incites highly significantly more positive emotions and increases emotionality altogether. The results inform the design of assistive systems that are aware of the physical as well as the affective context.
Xsens MVN is an easy to use, cost efficient system that captures full-body human motion in any environment. It is based on small, unobtrusive inertial and magnetic sensors combined with advanced algorithms and biomechanical models. The newly released motion capture engine is immune to magnetic distortions and is available either as MVN Animate for the 3D character animation market, or as MVN Analyze for the human motion measurement market. This whitepaper describes key characteristics and shows an analysis of the performance of the new engine. The performance analysis includes a comparison with an optical position measurement system in combination with OpenSim for walking data, as well as a consistency analysis for running data. The analysis shows RMS differences of less than 5 degrees for the dominant joint angles during walking and consistent performance over more than 90 minutes of running data. The MVN Analyze and MVN Animate engines enable reliable and consistent tracking of any type of movement including running, jumping, squatting, crawling, cartwheeling, in any type of environment, including severe magnetically distorted environments.
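The whitepaper above reports RMS differences of joint angles against an optical reference system. As a minimal illustration of that metric (with synthetic traces as stand-ins, not Xsens data), the RMS difference between two equally sampled joint-angle series can be computed as follows:

```python
import numpy as np

def rms_difference(angles_a, angles_b):
    """Root-mean-square difference between two equally sampled joint-angle traces (degrees)."""
    a = np.asarray(angles_a, dtype=float)
    b = np.asarray(angles_b, dtype=float)
    if a.shape != b.shape:
        raise ValueError("traces must have identical sampling")
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Synthetic example: a knee-angle trace and a copy with a constant 3-degree offset.
t = np.linspace(0.0, 10.0, 1000)
knee_ref = 30.0 * np.sin(t)    # stand-in for an optical reference measurement
knee_imu = knee_ref + 3.0      # stand-in for an inertial estimate with a bias
```

With this constant offset, `rms_difference(knee_imu, knee_ref)` evaluates to 3.0 degrees; a figure below 5 degrees for the dominant joint angles corresponds to the tolerance reported in the whitepaper.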
Mixed Reality (MR) remote guidance has become a practical solution for collaboration that includes nonverbal communication. This research focuses on integrating different types of MR remote collaboration systems to extend their features and user experience. In this demonstration, we present 360Drops, an MR remote collaboration system that uses 360 panorama images within 3D reconstructed scenes. We introduce a new technique to interact with multiple 360 Panorama Spheres in an immersive 3D reconstructed scene. This allows a remote user to switch between multiple 360 scenes (live/static, past/present) placed in a 3D reconstructed scene to promote a better understanding of space and interactivity through verbal and nonverbal communication. We present the system's features and user experience to the attendees of SIGGRAPH Asia 2019 through a live demonstration.
Experts are imperative for training apprentices, but learning from experts is difficult. Experts often struggle to explicate and/or verbalize their knowledge or simply overlook important details due to internalization of their skills, which may make it more difficult for apprentices to learn from experts. In addition, the shortage of experts to support apprentices in one-to-one settings during trainings limits the development of apprentices. In this review, we investigate how augmented reality and sensor technology can be used to capture expert performance in such a way that the captured performance can be used to train apprentices without increasing the workload on experts. To this end, we have analysed 78 studies that have implemented augmented reality and sensor technology for training purposes. We explored how sensors have been used to capture expert performance with the intention of supporting apprentice training. Furthermore, we classified the instructional methods used by the studies according to the 4C/ID framework to understand how augmented reality and sensor technology have been used to support training. The results of this review show that augmented reality and sensor technology have the potential to capture expert performance for training purposes. The results also outline a methodological approach to how sensors and augmented reality learning environments can be designed for training using recorded expert performance.