Affordances for Capturing and Re-enacting Expert
Performance with Wearables
Will Guest1, Fridolin Wild1, Alla Vovk1, Mikhail Fominykh2, Bibeg Limbu3, Roland
Klemke3, Puneet Sharma4, Jaakko Karjalainen5, Carl Smith6, Jazz Rasool6,
Soyeb Aswat7, Kaj Helin5, Daniele Di Mitri3, and Jan Schneider3
1 Oxford Brookes University, UK
2 Europlan UK ltd., UK
3 Open University of the Netherlands, Netherlands
4 University of Tromsø, Norway
5 VTT, Finland
6 Ravensbourne, UK
7 Myndplay, UK
{16102434, wild, 16022839}; {bibeg.limbu, Roland.Klemke,
Daniele.Dimitri, jan.schneider};
{Jaakko.karjalainen, Kaj.Helin};
{c.smith, j.rasool}
Abstract. The prototype is a platform for immersive procedural
training with wearable sensors and Augmented Reality. Focusing on capture and
re-enactment of human expertise, this work looks at the unique affordances of
suitable hard- and software technologies. The practical challenges of interpreting
expertise, using suitable sensors for its capture and specifying the means to
describe and display to the novice are of central significance here. We link
affordances with hardware devices, discussing their alternatives, including
Microsoft Hololens, Thalmic Labs MYO, Alex Posture sensor, MyndPlay EEG
headband, and a heart rate sensor. Following the selection of sensors, we describe
integration and communication requirements for the prototype. We close with
thoughts on the wider possibilities for implementation and next steps.
Keywords: Affordances, Augmented reality, Wearable technologies, Capturing
1 Introduction
In recent years, delivery devices and sensor technology have evolved significantly, while the costs of hardware, software, and development kits have decreased rapidly. This brings about novel opportunities for developing multi-sensor augmented reality systems, which we investigate here for their ability to contribute to the much-needed continuous up-skilling of already skilled workers (to support product innovation), who usually do not receive enough vocational training in Europe: according to Eurostat's lifelong learning statistics, the EU-27 shows a participation rate of only 10.7%, against the 2020 target of 15% [1].
In this paper, we elaborate on which affordances are both possible and needed for capturing expert experience and for its guided re-enactment by trainees. Our understanding of ‘affordance’, beginning with Gibson’s notion of a subject finding usefulness in their environment [2], finds interesting application with the inclusion of virtual elements into the environment, specifically those with which the user can interact. Affordances are opportunities for action and belong neither to the environment nor to the individual directly, but rather to the relationship between them [3].
Capturing and re-enacting expert performance is a form of performance augmentation, serving, for example, as a scaffold for training procedural tasks. It can increase training efficiency (reduced time to competence, more iterations at the same cost, fewer constraints on trainer involvement) and training effectiveness (error proofing through more active learning under direct guidance).
In this paper, we unravel affordances that are conducive to capturing and re-enactment of experience. We outline recent work in this domain (Sec. 2), the affordances of particular interest and the hardware selection that offers them (Sec. 3), and, finally, concluding remarks with next steps and current limitations (Sec. 4).
2 Background and Related Work
Ericsson and Smith define expert performance as consistently superior, effective behaviour on a specified set of representative tasks [4]. Expert performance can be attained in a particular domain by collecting experience in a deliberate manner; it differs from everyday skills in the level of proficiency as well as in the degree of conscious and continuous planning invested in updating and upgrading it.
Apprentices often collect experience in their craft through hands-on practice under
supervision of an expert, rather than from written manuals or textbooks. As Newell and
Simon propose, the outstanding performance of the expert is the result of incremental
increase in knowledge and skill due to continuous exposure to experience [5]. Enabling
experts to share their experience with apprentices in a perceptible way is an essential
aspect of expertise development.
Wearable sensors and AR bear the potential to capture the expert performance and knowledge contained in a training activity. Once captured, this knowledge can be re-experienced many times over, analysed, and reflected upon,
individually or collaboratively [6,7]. AR then provides a rich multimodal and
multisensory medium for apprentices to observe captured expert performance. Such a medium enriches the apprentice’s experience, augmenting their perception through
visual, audio, and haptic modes. AR overlays virtual content on the real environment
to create an immersive platform [8,9], placing the apprentice in a real-world context,
engaging all senses. Augmented perception allows better interaction with the
environment [10], equipping apprentices with better tools to mimic expert performance
and build knowledge. Fominykh et al. provide an overview of existing approaches for capturing performance in the real world [6], with consideration also given to tacit knowledge and its role in learning new tasks.
Table 1. Affordances in capturing and re-enactment of expert performance using sensors

Affordance | Application for capturing | Sensor types
Virtual/tangible manipulation, object enrichment | Record of inertial data | Wireless inertial sensor, depth camera
Contextualisation, in-situ real-time feedback, haptic hints | Record of force | Pressure sensor
Directed focus | Record of eye-tracking data or gaze direction | Eye tracker
Self-awareness of physical state | Monitor and record physiological data | Accelerometer, VHR
Virtual post-it (annotation) | Record annotations and display in AR | AR and spatial mapping
Think aloud | Record audio | Microphone
Remote symmetrical tele-assistance, zoom | Record video | Camera
3 Affordances for Capturing and Re-Enactment
The WEKIT framework guides the sensor selection process [7], identifying affordances that allow the capture of specific key aspects of expert performance and the provision of these affordances to the trainees during re-enactment (Table 1). For each affordance, suitable technological solutions are considered together with the type of sensor that would need to be used.
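As an illustration only (not part of the WEKIT platform), the affordance-to-sensor mapping of Table 1 can be sketched as a simple lookup structure that a capture pipeline might use to resolve which sensors a chosen set of affordances requires. All keys and labels below are our own illustrative names:

```python
# Hypothetical sketch of Table 1 as a lookup structure; labels are
# illustrative, not identifiers from the WEKIT platform.
AFFORDANCE_SENSORS = {
    "virtual/tangible manipulation": ["wireless inertial sensor", "depth camera"],
    "in-situ real-time feedback":    ["pressure sensor"],
    "directed focus":                ["eye tracker"],
    "self-awareness of physical state": ["EEG headband", "accelerometer",
                                         "heart-rate sensor"],
    "annotation":                    ["AR display", "spatial mapping"],
    "think aloud":                   ["microphone"],
    "remote tele-assistance":        ["camera"],
}

def sensors_for(affordances):
    """Return the de-duplicated, sorted sensor set for a list of affordances."""
    needed = set()
    for affordance in affordances:
        needed.update(AFFORDANCE_SENSORS.get(affordance, []))
    return sorted(needed)

print(sensors_for(["directed focus", "think aloud"]))
# ['eye tracker', 'microphone']
```

Such a mapping makes the selection process repeatable: a training designer picks affordances, and the required hardware list falls out mechanically.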
The proposed hardware framework for capturing expertise and re-enactment is depicted in Figure 1; it accommodates comfort, wearability, and accessibility. The hardware platform uses panels incorporated into the garment to encase both the Myo armband and the heart rate sensor at the wrist, with wire casing running up the outer sleeve. Sensors sit flat against the body and do not move about with wear. The hardware prototype integrates the sensors into a wearable item of clothing, connecting a Microsoft Hololens, a Thalmic Labs MYO, an Alex Posture sensor, an EEG headband, and a heart rate sensor [25]. The garment provides an enclosure for the wires of the posture sensor and armband, connecting them with the smart glasses. Additional, adjustable casings for Leap Motion sensors were designed for use on the arms and torso.

Figure 1. WEKIT wearable solution
The choice of AR glasses was based on a requirements analysis report [25]. After taking into consideration features such as the built-in microphone array, environment capture, gesture tracking, mixed reality capture, Wi-Fi 802.11ac, and fully untethered holographic computing, the Microsoft Hololens was selected. Furthermore, the built-in components of the Hololens enable us to capture several different attributes of the user and her environment. For EEG, the MyndBand with the Neurosky chipset was favoured due to the availability of processed data and real-time feedback. For detecting hand and arm movements and gestures, the Leap Motion and Myo armband were chosen. To track the position and angle of the neck, the Alex posture tracker was suggested.
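Integrating these heterogeneous devices raises the question of how their samples are merged for later re-enactment. A minimal sketch, assuming a shared clock and using field names of our own invention (this is not the WEKIT data format), timestamps every sample and replays the combined stream in global time order:

```python
# Illustrative sketch, not the WEKIT implementation: samples from several
# wearable sensors are merged into one timestamped recording so that an
# expert capture can later be replayed for re-enactment.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class Sample:
    sensor: str   # e.g. "myo", "eeg", "posture" (illustrative labels)
    t: float      # capture timestamp in seconds
    data: dict    # sensor-specific payload

@dataclass
class Recording:
    samples: list = field(default_factory=list)

    def add(self, sensor, data, t=None):
        self.samples.append(Sample(sensor, time.time() if t is None else t, data))

    def replay_order(self):
        # Re-enactment consumes samples in global time order,
        # regardless of which device produced them.
        return sorted(self.samples, key=lambda s: s.t)

    def to_json(self):
        return json.dumps([asdict(s) for s in self.replay_order()])

rec = Recording()
rec.add("posture", {"neck_angle": 12.5}, t=0.10)
rec.add("myo", {"gesture": "fist"}, t=0.05)
print([s.sensor for s in rec.replay_order()])  # ['myo', 'posture']
```

The design choice here is a single time-ordered log rather than per-device files, which simplifies synchronised playback at the cost of requiring clock alignment across devices.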
Table 2. Selected sensors and requirements for capturing and re-enactment

Sensor | Requirements for capturing affordances | Requirements for bandwidth
AR glasses | Track location of user and objects in the environment | High (60 fps, resolution 1268 by 720 per eye)
Point-of-view camera | Start/stop video recording, take digital pictures, enable/disable camera, capture current view | High (2.4 MP)
Microphone | Start/stop audio recording | Moderate (4 audio streams)
Gaze tracker | Estimate gaze direction, select objects in the environment, place virtual post-its | Low (XYZ coordinates)
MyndBand and Neurosky | Estimate attention, focus, eye blinks, and other metrics; enable/disable EEG | Low (attention and stress levels)
Leap Motion | Recognize hand movements and gestures | Moderate (3D model, skeleton data)
Tracker (Myo) | Recognize gestures and location of user | Low (XYZ coordinates)
Armband (Myo) | Recognise hand gestures | Low (gestures)
Alex posture tracker | Recognize posture | Low (XYZ coordinates)
In Table 2, we look at the requirements associated with capturing, re-enactment, and data bandwidth. Different sensors clearly require different bandwidths: particularly high for video signals (e.g., the AR display and the point-of-view camera) and low for the Myo, MyndBand, and Alex posture tracker.
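A back-of-the-envelope calculation (our own estimate, not a figure from the requirements report) makes this gap concrete: an uncompressed stereo stream at the Hololens display resolution dwarfs a scalar sensor channel by several orders of magnitude. The sample rates and sample sizes below are assumptions chosen for illustration:

```python
# Rough bandwidth estimates for uncompressed streams; rates and sample
# sizes are illustrative assumptions, not measured WEKIT figures.
def video_mbps(width, height, fps, bytes_per_pixel=3, eyes=2):
    """Raw stereo video bandwidth in Mbit/s (no compression)."""
    return width * height * fps * bytes_per_pixel * eyes * 8 / 1e6

def scalar_mbps(channels, rate_hz, bytes_per_sample=4):
    """Bandwidth of a multi-channel scalar sensor stream in Mbit/s."""
    return channels * rate_hz * bytes_per_sample * 8 / 1e6

ar_display = video_mbps(1268, 720, 60)      # resolution per Table 2
imu = scalar_mbps(channels=9, rate_hz=200)  # e.g. a 9-axis IMU stream

print(f"AR display ~{ar_display:.0f} Mbit/s vs IMU ~{imu:.3f} Mbit/s")
# AR display ~2629 Mbit/s vs IMU ~0.058 Mbit/s
```

In practice the video channels are compressed, but the ordering survives: any integration architecture must budget its wireless link around the video streams, while the scalar sensors are essentially free.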
4 Concluding Remarks
Understanding that both expert and learner benefit from the affordances provided by wearable technology, we begin to weave together the requirements for maximising the benefit at each stage of knowledge transfer. This paper summarises the
integration of new knowledge on the pedagogical level (by creating the WEKIT
learning framework), technological level (by designing a hard- and software for
capturing and re-enactment of expertise), and on the semantic level (by describing a
process model for sharing and dissemination of task performance).
As a training method, expertise capturing needs to complement existing and new technical documentation. It has to be done at the right level of abstraction and must enable comparison of performances using the recorded data. Both knowledge capture and representation should strive to blend with the user's actions, considering the manner in which information is conveyed and ensuring that it is realistic, believable, and correct.
With the right hardware and a software platform, this method will provide trainees with
a useful approximation to the full experience of becoming the expert, enabling
immersive, in-situ, and intuitive learning just as a traditional apprentice would,
following in the footsteps of the master and fitted with the specialist knowledge of
technical communicators.
References

1. Eurostat: Lifelong learning statistics (2016)
2. Gibson J.J.: The theory of affordances. In: Shaw R., Bransford J. (eds.) Perceiving, Acting, and Knowing: Toward an Ecological Psychology, pp. 67–82 (1977)
3. Rizzo A.: The origin and design of intentional affordances. In: Proc. 6th Conference on Designing Interactive Systems, University Park, PA, USA, pp. 239–240. ACM, New York (2006)
4. Ericsson K.A., Smith J.: Prospects and limits of the empirical study of expertise: An introduction. In: Ericsson K.A., Smith J. (eds.) Toward a General Theory of Expertise: Prospects and Limits, pp. 1–39. Cambridge University Press, Cambridge, UK (1991)
5. Newell A., Simon H.A.: Human Problem Solving. Prentice Hall, Englewood Cliffs, NJ (1972)
6. Fominykh M., Wild F., Smith C., Alvarez V., Morozov M.: An Overview of Capturing Live Experience with Virtual and Augmented Reality. In: Preuveneers D. (ed.) Workshop Proceedings of the 11th International Conference on Intelligent Environments, pp. 298–305. IOS Press, Amsterdam, Netherlands (2015)
7. Limbu B., Fominykh M., Klemke R., Specht M., Wild F.: Supporting Training of Expertise with Wearable Technologies: The WEKIT Reference Framework. In: The International Handbook of Mobile and Ubiquitous Learning. Springer, New York (2017)
8. Bacca J., Baldiris S., Fabregat R., Graf S., Kinshuk: Augmented Reality Trends in Education: A Systematic Review of Research and Applications. Educational Technology & Society 17(4), 133–149 (2014)
9. Bower M., Sturman D.: What are the educational affordances of wearable technologies? Computers & Education 88, 343–353 (2015)
10. Wagner R.K., Sternberg R.J.: Practical intelligence in real-world pursuits: The role of tacit knowledge. Journal of Personality and Social Psychology 49(2), 436–458 (1985)
11. Wei Y., Yan H., Bie R., Wang S., Sun L.: Performance monitoring and evaluation in dance teaching with mobile sensing technology. Personal and Ubiquitous Computing 18(8), 1929–1939 (2014)
12. Li H., Lu M., Chan G., Skitmore M.: Proactive training system for safe and efficient precast installation. Automation in Construction 49, Part A, 163–174 (2015)
13. Prabhu V.A., Elkington M., Crowley D., Tiwari A., Ward C.: Digitisation of manual composite layup task knowledge using gaming technology. Composites Part B: Engineering 112, 314–326 (2017)
14. Jang S.-A., Kim H.-i., Woo W., Wakefield G.: AiRSculpt: A Wearable Augmented Reality 3D Sculpting System. In: Streitz N., Markopoulos P. (eds.) Proc. DAPI 2014, pp. 130–141. Springer, Cham (2014)
15. Meleiro P., Rodrigues R., Jacob J., Marques T.: Natural User Interfaces in the Motor Development of Disabled Children. Procedia Technology 13, 66–75 (2014)
16. Araki A., Makiyama K., Yamanaka H., Ueno D., Osaka K., Nagasaka M., Yamada T., Yao M.: Comparison of the performance of experienced and novice surgeons. Surgical Endoscopy 31(4), 1999–2005 (2017)
17. Asadipour A., Debattista K., Chalmers A.: Visuohaptic augmented feedback for enhancing motor skills acquisition. The Visual Computer 33(4), 401–411 (2017)
18. Kim S., Dey A.K.: Augmenting human senses to improve the user experience in cars. Multimedia Tools and Applications 75(16), 9587–9607 (2016)
19. Ke F., Lee S., Xu X.: Teaching training in a mixed-reality integrated learning environment. Computers in Human Behavior 62, 212–220 (2016)
20. Duente T., Pfeiffer M., Rohs M.: On-skin technologies for muscle sensing and actuation. In: Proc. UbiComp '16, pp. 933–936. ACM, New York (2016)
21. Kwon Y., Lee S., Jeong J., Kim W.: HeartiSense: a novel approach to enable effective basic life support training without an instructor. In: CHI '14 Extended Abstracts on Human Factors in Computing Systems, pp. 431–434. ACM, New York, NY (2014)
22. Benedetti F., Catenacci Volpi N., Parisi L., Sartori G.: Attention Training with an Easy-to-Use Brain Computer Interface. In: Shumaker R., Lackey S. (eds.) Proc. Virtual, Augmented and Mixed Reality (VAMR '14), pp. 236–247. Springer, Cham (2014)
23. Kowalewski K.-F., Hendrie J.D., Schmidt M.W., Garrow C.R., Bruckner T., Proctor T., Paul S., Adigüzel D., Bodenstedt S., Erben A., Kenngott H., Erben Y., Speidel S., Müller-Stich B.P., Nickel F.: Development and validation of a sensor- and expert model-based training system for laparoscopic surgery: the iSurgeon. Surgical Endoscopy, 1–11 (2016)
24. Sanfilippo F.: A multi-sensor fusion framework for improving situational awareness in demanding maritime training. Reliability Engineering & System Safety 161, 12–24 (2017)
25. Sharma P., Wild F., Klemke R., Helin K., Azam T.: Requirement analysis and sensor specifications: First version. WEKIT Deliverable D3.1 (2016)