Affordances for Capturing and Re-enacting Expert
Performance with Wearables
Will Guest1, Fridolin Wild1, Alla Vovk1, Mikhail Fominykh2, Bibeg Limbu3, Roland
Klemke3, Puneet Sharma4, Jaakko Karjalainen5, Carl Smith6, Jazz Rasool6,
Soyeb Aswat7, Kaj Helin5, Daniele Di Mitri3, and Jan Schneider3
1 Oxford Brookes University, UK
2 Europlan UK ltd., UK
3 Open University of the Netherlands, Netherlands
4 University of Tromsø, Norway
5 VTT, Finland
6 Ravensbourne, UK
7 Myndplay, UK
{16102434, wild, 16022839}@brookes.ac.uk;
mikhail.fominykh@europlan-uk.eu; {bibeg.limbu, Roland.Klemke,
Daniele.Dimitri, jan.schneider}@ou.nl; puneet.sharma@uit.no;
{Jaakko.karjalainen, Kaj.Helin}@vtt.fi;
{c.smith, j.rasool}@rave.ac.uk; soyeb@myndplay.com
Abstract. The WEKIT.one prototype is a platform for immersive procedural
training with wearable sensors and Augmented Reality. Focusing on the capture
and re-enactment of human expertise, this work looks at the unique affordances
of suitable hardware and software technologies. The practical challenges of
interpreting expertise, selecting suitable sensors for its capture, and
specifying the means to describe it and display it to the novice are of central
significance here. We link affordances with hardware devices, discussing their
alternatives, including the Microsoft Hololens, Thalmic Labs MYO, Alex Posture
sensor, MyndPlay EEG headband, and a heart rate sensor. Following the selection
of sensors, we describe integration and communication requirements for the
prototype. We close with thoughts on the wider possibilities for implementation
and next steps.
Keywords: Affordances, Augmented reality, Wearable technologies, Capturing
expertise.
1 Introduction
In recent years, delivery devices and sensor technology have evolved
significantly, while the costs of hardware, software, and development kits have
decreased rapidly. This brings about novel opportunities for developing
multi-sensor augmented reality systems, which we investigate here for their
ability to contribute to the much-needed continuous up-skilling of already
skilled workers (to support product innovation), who usually do not get enough
vocational training in Europe: according to Eurostat's lifelong learning
statistics, the EU-27 shows a participation rate of only 10.7%, against the
2020 target of 15% [1].
In this paper, we elaborate on which affordances are both possible and needed
for capturing expert experience and for its guided re-enactment by trainees. Our
understanding of ‘affordance’, beginning with Gibson’s notion of a subject finding
usefulness in their environment [2], finds interesting application with the inclusion of
virtual elements into the environment, specifically those with which the user can
interact. Affordances are opportunities for action and belong neither to the environment
nor to the individual directly, but rather to the relationships between them [3].
Capturing and re-enactment of expert performance is a form of Performance
Augmentation, serving, for example, as a scaffold in training procedural tasks,
possibly increasing training efficiency (reduced time to competence, an
increased number of iterations at the same cost, and fewer constraints on
trainer involvement) and training effectiveness (error proofing with more
active learning under direct guidance).
In this paper, we unravel affordances that are conducive to capturing and
re-enactment of experience. We outline recent work in this domain (Sec. 2) and
the affordances of particular interest together with the hardware selection
that offers them (Sec. 3), and close with concluding remarks on next steps and
current limitations (Sec. 4).
2 Background and Related Work
Ericsson and Smith define expert performance as consistently superior,
effective behaviour on a specified set of representative tasks [4]. Expert
performance can be attained in a particular domain by collecting experience in
a deliberate manner; it differs from everyday skills in the level of
proficiency as well as in the level of conscious and continuous planning
invested into updating and upgrading it.
Apprentices often collect experience in their craft through hands-on practice under
supervision of an expert, rather than from written manuals or textbooks. As Newell and
Simon propose, the outstanding performance of the expert is the result of incremental
increase in knowledge and skill due to continuous exposure to experience [5]. Enabling
experts to share their experience with apprentices in a perceptible way is an essential
aspect of expertise development.
Wearable sensors and AR bear the potential to capture expert performance and
the knowledge contained in a training activity. If knowledge is stored in such
a learning activity, it can be re-experienced many times over, analysed, and
reflected upon, individually or collaboratively [6,7]. AR then provides a rich
multimodal and multisensory medium for apprentices to observe captured expert
performance. Such a medium enriches the apprentice's experience, augmenting
their perception through visual, audio, and haptic modes. AR overlays virtual
content on the real environment to create an immersive platform [8,9], placing
the apprentice in a real-world context and engaging all senses. Augmented
perception allows better interaction with the environment [10], equipping
apprentices with better tools to mimic expert performance and build knowledge.
Fominykh et al. provide an overview of existing approaches for capturing
performance in the real world [6], with consideration also given to tacit
knowledge and its role in learning new tasks.
Table 1. Affordances in capturing and re-enactment of expert performance using sensors

| Affordance | Applications for prototype | Sensor types | Related work |
|---|---|---|---|
| Virtual/tangible manipulation, object enrichment | Record of inertial data | Wireless inertial sensor, depth camera | [10-15] |
| Contextualisation, in-situ real-time feedback, haptic hints | Record of force applied | Pressure sensor | [16,17] |
| Directed focus, contextualisation | Record of eye-tracking data or gaze direction | Eye tracker, gyroscope | [18,19] |
| Self-awareness of physical state | Monitor and record physiological data | EEG, EMG, ECG, gyroscope, accelerometer, VHR | [17,20-22] |
| Virtual post-it (annotation), contextualisation | Record annotations and display in AR | AR and spatial environment | [12,23] |
| Think aloud | Record audio | Microphone | [24] |
| Remote symmetrical tele-assistance, zoom | Record video | Camera | [24] |
3 Affordances for Capturing and Re-Enactment
The WEKIT framework [7] guides the sensor selection process, identifying
affordances that allow specific key aspects of expert performance to be
captured and provided to trainees during re-enactment (Table 1). For each
affordance, suitable technological solutions are considered together with the
type of sensor that would need to be used.
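To make this mapping operational, the sketch below shows, in Python, one way a
capture application could represent the affordance-to-sensor lookup of Table 1.
It is a minimal illustration only; the names (AffordanceMapping, sensors_for)
are our assumptions and not part of the WEKIT.one codebase.

```python
# Minimal sketch: the affordance-to-sensor mapping of Table 1 as a
# lookup structure. Names are illustrative, not the WEKIT.one API.
from dataclasses import dataclass

@dataclass
class AffordanceMapping:
    affordance: str     # opportunity for action offered to expert/trainee
    capture: str        # what the prototype records for this affordance
    sensors: list[str]  # sensor types that can provide the data

AFFORDANCE_TABLE = [
    AffordanceMapping("virtual/tangible manipulation, object enrichment",
                      "record of inertial data",
                      ["wireless inertial sensor", "depth camera"]),
    AffordanceMapping("directed focus, contextualisation",
                      "record of eye-tracking data or gaze direction",
                      ["eye tracker", "gyroscope"]),
    AffordanceMapping("think aloud", "record audio", ["microphone"]),
]

def sensors_for(keyword: str) -> list[str]:
    """Return all sensor types whose affordance description mentions keyword."""
    return [s for m in AFFORDANCE_TABLE
            if keyword in m.affordance
            for s in m.sensors]

# e.g. sensors_for("focus") -> ["eye tracker", "gyroscope"]
```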
The proposed hardware framework for capturing and re-enacting expertise is
depicted in Figure 1, accommodating comfort, wearability, and accessibility.
This hardware platform uses panels incorporated into the garment to encase both
the Myo armband and the heart rate sensor at the wrist, with wire casing
running up the outer sleeve. Sensors sit flat against the body and do not move
about with wear. The hardware prototype integrates the sensors into a wearable
item of clothing, connecting a Microsoft Hololens, a Thalmic Labs MYO, an Alex
Posture sensor, an EEG headband, and a heart rate sensor [25]. The garment
provides an inclusion for the wires of the posture sensor and armband,
connecting them with the smart glasses. Additional adjustable casings for Leap
Motion sensors were designed for use on the arms and torso.

Figure 1. WEKIT wearable solution
The choice of AR glasses was based on a requirements analysis report [25].
After taking into consideration features such as the built-in microphone array,
environment capture, gesture tracking, mixed reality capture, Wi-Fi 802.11ac,
and fully untethered holographic computing, the Microsoft Hololens was
selected. Furthermore, the built-in components of the Hololens enable us to
capture several different attributes of the user and her environment. For EEG,
the MyndBand and Neurosky chipset were favoured due to the availability of
processed data and real-time feedback. For detecting hand and arm movements and
gestures, the Leap Motion and Myo armband were chosen. To track the position
and angle of the neck, the Alex posture tracker was suggested.
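To illustrate the integration requirement that follows from this selection, the
sketch below shows how heterogeneous sensors could expose timestamped samples
through one polling interface, so that a recorder can time-align low-rate
channels (gaze, posture) with high-rate ones (video). SensorChannel,
GazeChannel, and capture_frame are hypothetical names used for illustration,
not the actual WEKIT.one API.

```python
# Minimal sketch, assuming a unified polling interface for all wearables.
import time
from abc import ABC, abstractmethod

class SensorChannel(ABC):
    """One wearable data source (Hololens gaze, Myo EMG, EEG band, ...)."""

    def __init__(self, name: str, bandwidth: str):
        self.name = name            # e.g. "hololens-gaze"
        self.bandwidth = bandwidth  # "low" / "moderate" / "high", as in Table 2

    @abstractmethod
    def read(self) -> dict:
        """Return one timestamped sample from the device."""

class GazeChannel(SensorChannel):
    def __init__(self):
        super().__init__("hololens-gaze", "low")

    def read(self) -> dict:
        # Placeholder value: a real adapter would query the device SDK.
        return {"t": time.time(), "gaze_xyz": (0.0, 0.0, 1.0)}

def capture_frame(channels: list[SensorChannel]) -> dict:
    """Collect one synchronised sample from every channel for the recording."""
    return {c.name: c.read() for c in channels}
```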
Table 2. Selected sensors and requirements for capturing and re-enactment

| Sensor | Requirements for capturing affordances | Requirements for re-enactment affordances | Bandwidth requirements |
|---|---|---|---|
| AR glasses (Hololens) | Track location of user and objects in the environment. | View instructions, activity, videos, and virtual post-its in the application. | High (60 fps, maximum resolution 1268 × 720 per eye) |
| Point-of-view camera (Hololens) | Start/stop video recording, take digital pictures, enable/disable camera, capture current view. | Capture current point of view, enable/disable point-of-view camera. | High (2.4-megapixel resolution) |
| Built-in microphone (Hololens) | Start/stop microphone, enable/disable microphone. | Start/stop microphone, enable/disable microphone. | Moderate (4 audio streams) |
| Gaze (Hololens) | Estimate gaze direction, select objects in the environment, place virtual post-its. | Estimate gaze direction, select objects in the environment, place virtual post-its. | Low (XYZ coordinates) |
| MyndBand and Neurosky chipset | Estimate attention, focus, eye blinks, and other metrics; enable/disable EEG. | Estimate attention, focus, eye blinks, and other metrics; enable/disable EEG. | Low (attention and stress levels, range [0,100]) |
| Leap Motion | Recognise hand movements and gestures. | Recognise hand movements and gestures. | Moderate (3D model, skeleton data) |
| Position tracker (Myo) | Recognise gestures and location of user. | Recognise gestures, use vibrations as feedback on some activities. | Low (XYZ coordinates and gestures) |
| Electromyogram (Myo) | Recognise hand movements. | Recognise hand movements. | Low (gestures) |
| Alex posture tracker | Recognise posture. | Vibration feedback. | Low (XYZ coordinates) |
In Table 2, we look at the requirements associated with capturing,
re-enactment, and data bandwidth. Different sensors clearly require different
bandwidths: particularly high for video signals (e.g., the AR display and
point-of-view camera) and low for the Myo, MyndBand, and Alex posture tracker.
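As a back-of-the-envelope check on this gap, the sketch below estimates raw,
uncompressed data rates from the figures in Table 2. The 3-bytes-per-pixel RGB
assumption is ours, and real streams would be compressed, so these numbers are
upper bounds rather than measurements.

```python
# Rough bandwidth estimate from Table 2 figures (uncompressed upper bounds).

# Hololens display feed: 1268 x 720 per eye, 60 fps, assumed 3 bytes/pixel.
bytes_per_eye_frame = 1268 * 720 * 3
video_bytes_per_s = bytes_per_eye_frame * 2 * 60   # both eyes at 60 fps

# Low-bandwidth channel: XYZ as three 4-byte floats, assumed 60 Hz.
xyz_bytes_per_s = 3 * 4 * 60

print(f"raw video: ~{video_bytes_per_s / 1e6:.0f} MB/s")   # ~329 MB/s
print(f"XYZ stream: {xyz_bytes_per_s} B/s")                # 720 B/s
```

Even allowing for aggressive compression, the video channels dominate the data
budget by several orders of magnitude, consistent with the choice of Wi-Fi
802.11ac noted above.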
4 Concluding Remarks
Understanding that both expert and learner benefit from the affordances
provided by wearable technology, we begin to weave together the requirements
for maximising the benefit at each stage of knowledge transfer. This paper
summarises the integration of new knowledge on the pedagogical level (by
creating the WEKIT learning framework), on the technological level (by
designing hardware and software for capturing and re-enactment of expertise),
and on the semantic level (by describing a process model for sharing and
dissemination of task performance).
As a training method, expertise capturing needs to complement existing and new
technical documentation. It has to be done at the right level of abstraction,
enabling comparison of performances using the recorded data. Both knowledge
capture and representation should strive to blend with the user's actions,
considering the manner in which information is conveyed and ensuring that it is
realistic, believable, and correct. With the right hardware and software
platform, this method will provide trainees with a useful approximation of the
full experience of becoming an expert, enabling immersive, in-situ, and
intuitive learning just as for a traditional apprentice: following in the
footsteps of the master, fitted with the specialist knowledge of technical
communicators.
References
1. Eurostat: Lifelong learning statistics. (2016), http://ec.europa.eu/eurostat/statistics-explained/index.php/Lifelong_learning_statistics
2. Gibson J.J.: The theory of affordances. In: Shaw R., Bransford J. (eds.) Perceiving, Acting,
and Knowing: Toward an Ecological Psychology, pp. 67–82. Lawrence Erlbaum, Hillsdale,
NJ (1977)
3. Rizzo A.: The origin and design of intentional affordances. In: Proc 6th Conf Designing
Interactive systems, University Park, PA, USA, pp. 239–240. ACM, New York (2006)
4. Ericsson K.A., Smith J.: Prospects and limits of the empirical study of expertise: An
introduction. In: Ericsson KA, Smith J (eds.) Toward a general theory of expertise: Prospects
and limits. pp. 1-39. Cambridge University Press, Cambridge, UK (1991)
5. Newell A., Simon H.A.: Human problem solving. Prentice Hall, Englewood Cliffs, NJ
(1972)
6. Fominykh M., Wild F., Smith C., Alvarez V., Morozov M.: An Overview of Capturing Live
Experience with Virtual and Augmented Reality. In: Preuveneers (ed.) Workshop
Proceedings of the 11th International Conference on Intelligent Environments, pp. 298–305.
IOS Press, Amsterdam, Netherlands (2015)
7. Limbu B., Fominykh M., Klemke R., Specht M., Wild F.: Supporting Training of Expertise
with Wearable Technologies: The WEKIT Reference Framework. In: The International
Handbook of Mobile and Ubiquitous Learning. Springer, New York (2017)
8. Bacca J., Baldiris S., Fabregat R., Graf S., Kinshuk: Augmented Reality Trends in
Education: A Systematic Review of Research and Applications. Educational Technology &
Society 17 (4), 133–149 (2014)
9. Bower M., Sturman D.: What are the educational affordances of wearable technologies?
Computers & Education 88, 343–353 (2015)
10. Wagner R.K., Sternberg R.J.: Practical intelligence in real-world pursuits: The role of tacit
knowledge. Journal of Personality and Social Psychology 49 (2), 436–458 (1985)
11. Wei Y., Yan H., Bie R., Wang S., Sun L.: Performance monitoring and evaluation in dance
teaching with mobile sensing technology. Personal and Ubiquitous Computing 18 (8), 1929–
1939 (2014)
12. Li H., Lu M., Chan G., Skitmore M.: Proactive training system for safe and efficient precast
installation. Automation in Construction 49, Part A, 163–174 (2015)
13. Prabhu V.A., Elkington M., Crowley D., Tiwari A., Ward C.: Digitisation of manual
composite layup task knowledge using gaming technology. Composites Part B: Engineering
112, 314–326 (2017)
14. Jang S.-A., Kim H.-i., Woo W., Wakefield G.: AiRSculpt: A Wearable Augmented Reality
3D Sculpting System. In: Streitz, Markopoulos (eds.): Proc. DAPI 2014. pp. 130–141.
Springer, Cham (2014)
15. Meleiro P., Rodrigues R., Jacob J., Marques T.: Natural User Interfaces in the Motor
Development of Disabled Children. Procedia Technology 13, 66–75 (2014)
16. Araki A., Makiyama K., Yamanaka H., Ueno D., Osaka K., Nagasaka M., Yamada T., Yao
M.: Comparison of the performance of experienced and novice surgeons. Surgical
Endoscopy 31 (4), 1999–2005 (2017)
17. Asadipour A., Debattista K., Chalmers A.: Visuohaptic augmented feedback for enhancing
motor skills acquisition. The Visual Computer 33 (4), 401–411 (2017)
18. Kim S., Dey A.K.: Augmenting human senses to improve the user experience in cars.
Multimedia Tools and Applications 75 (16), 9587–9607 (2016)
19. Ke F., Lee S., Xu X.: Teaching training in a mixed-reality integrated learning environment.
Computers in Human Behavior 62, 212–220 (2016)
20. Duente T., Pfeiffer M., Rohs M.: On-skin technologies for muscle sensing and actuation. In:
Proc. UbiComp’16, pp. 933–936. ACM, New York (2016)
21. Kwon Y., Lee S., Jeong J., Kim W.: HeartiSense: a novel approach to enable effective basic
life support training without an instructor. In: CHI '14 Extended Abstracts on Human Factors
in Computing Systems, pp. 431–434. ACM, New York, NY (2014)
22. Benedetti F., Catenacci Volpi N., Parisi L., Sartori G.: Attention Training with an Easy-to-
Use Brain Computer Interface. In: Shumaker, Lackey (eds.) Proc. Virtual, Augmented and
Mixed Reality (VAMR’14), pp. 236–247. Springer, Cham (2014)
23. Kowalewski K.-F., Hendrie J.D., Schmidt M.W., Garrow C.R., Bruckner T., Proctor T., Paul
S., Adigüzel D., Bodenstedt S., Erben A., Kenngott H., Erben Y., Speidel S., Müller-Stich
B.P., Nickel F.: Development and validation of a sensor- and expert model-based training
system for laparoscopic surgery: the iSurgeon. Surgical Endoscopy, 1–11 (2016)
24. Sanfilippo F.: A multi-sensor fusion framework for improving situational awareness in
demanding maritime training. Reliability Engineering & System Safety 161, 12–24 (2017)
25. Sharma P., Wild F., Klemke R., Helin K., Azam T.: Requirement analysis and sensor
specifications: First version. WEKIT, D3.1 (2016)