Tactile Hand Motion and Pose Guidance for 3D Interaction
Alexander Marquardt
Bonn-Rhein-Sieg University of
Applied Sciences
Sankt Augustin, Germany
alexander.marquardt@h-brs.de
Jens Maiero
Bonn-Rhein-Sieg University of
Applied Sciences
Sankt Augustin, Germany
jens.maiero@h-brs.de
Ernst Kruijff
Bonn-Rhein-Sieg University of
Applied Sciences
Sankt Augustin, Germany
ernst.kruijff@h-brs.de
Christina Trepkowski
Bonn-Rhein-Sieg University of
Applied Sciences
Sankt Augustin, Germany
christina.trepkowski@h-brs.de
Andrea Schwandt
Bonn-Rhein-Sieg University of
Applied Sciences
Sankt Augustin, Germany
andrea.schwandt@h-brs.de
André Hinkenjann
Bonn-Rhein-Sieg University of
Applied Sciences
Sankt Augustin, Germany
andre.hinkenjann@h-brs.de
Johannes Schöning
University of Bremen
Bremen, Germany
schoening@uni-bremen.de
Wolfgang Stuerzlinger
Simon Fraser University
Surrey, Canada
w.s@sfu.ca
Figure 1: Hand pose and motion changes and associated vibration patterns using the TactaGuide interface: radial/ulnar deviation (A), pronation/supination (B), finger flexion (pinching) (C) and hand/arm movement (D). Tactor locations are green.
ABSTRACT
We present a novel forearm-and-glove tactile interface that can
enhance 3D interaction by guiding hand motor planning and coor-
dination. In particular, we aim to improve hand motion and pose
actions related to selection and manipulation tasks. Through our
user studies, we illustrate how tactile patterns can guide the user,
by triggering hand pose and motion changes, for example to grasp
(select) and manipulate (move) an object. We discuss the potential
and limitations of the interface, and outline future work.
CCS CONCEPTS
• Human-centered computing → Haptic devices; Interaction techniques; HCI design and evaluation methods;
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
VRST ’18, November 28-December 1, 2018, Tokyo, Japan
©2018 Association for Computing Machinery.
ACM ISBN 978-1-4503-6086-9/18/11...$15.00
https://doi.org/10.1145/3281505.3281526
KEYWORDS
Tactile Feedback; 3D User Interface; Hand Guidance
ACM Reference Format:
Alexander Marquardt, Jens Maiero, Ernst Kruijff, Christina Trepkowski,
Andrea Schwandt, André Hinkenjann, Johannes Schöning, and Wolfgang
Stuerzlinger. 2018. Tactile Hand Motion and Pose Guidance for 3D Interac-
tion. In VRST 2018: 24th ACM Symposium on Virtual Reality Software and
Technology (VRST ’18), November 28-December 1, 2018, Tokyo, Japan. ACM,
New York, NY, USA, 10 pages. https://doi.org/10.1145/3281505.3281526
1 INTRODUCTION AND MOTIVATION
Over the last decade, 3D user interfaces have advanced rapidly,
making systems that support a wide range of application domains
available [29]. Despite these advances, many challenges remain to be addressed. In this paper, we focus on how we can improve hand motor planning and coordination for 3D selection and manipulation tasks, i.e., the different actions of moving and reorienting a hand through space. Especially in visually complex 3D scenes, such actions can be difficult to perform as they can be constrained by visual conflicts, resulting in difficulties in judging spatial interrelationships between the hand and the scene. This often results in unwanted object penetrations. In real life, we often depend on
complementary haptic cues to perform tasks in visually-complex
situations. However, including haptic cues is not always straightfor-
ward in 3D applications, as it often depends on complex mechanics,
such as exoskeletons or tactor grids.
1.1 Cues for motor planning and coordination
Motor planning and coordination of selection and manipulation
tasks is generally performed in a task chain with key control points
that relate to biomechanical actions [22]. These actions contain
contact-driven touch events that can inform the planning and coordination of hand motion and pose actions. For example, a user may grasp an object (touch informs hand pose to grasp) and change its rotation and translation in space by moving and reorienting the hand (motion, pose) while avoiding touching other objects (touch) [29]. As the hand-arm is a biomechanical lever system, hand mo-
tion can be accomplished by arm motion, but also by wrist rotation.
Within this article, we specifically focus on motion and pose guidance, and reflect on interrelationships with touch in our discussion. Pose not only relates to the orientation of the hand itself but also to its specific postures needed to select and manipulate an object, e.g., to grasp or move an object through a tunnel. While contact-point feedback on a user's hand may provide useful feedback to avoid touching other objects during pose and motion changes, such actions can also be performed independent of (or even to avoid) touch contact. To do so, both in real life and in 3D applications we may
rely on proprioceptive cues, which are typically acquired through
motor learning [45]. However, cues beyond proprioception and vi-
sual feedback about the scene may be required to perform (or learn)
a task correctly. So-called augmented feedback – information pro-
vided about an action that is supplemental to the inherent feedback
typically received from the sensory system – is an important factor
supporting motor learning [30]. While learning how to optimally
perform a task – regardless if it is in a purely virtual environment or
a simulated real-world task – most interfaces unfortunately do not
provide feedback to encourage correct hand motions and poses, i.e.,
no form of guidance. However, selection and manipulation tasks,
and potentially subsequent motor learning, likely will benefit from such guidance. For example, consider training users for assembly tasks where knowledge acquired in a virtual environment needs to be transferred to the real world [11].
1.2 Limitations of haptic devices for pose and motion guidance
Traditional haptic interfaces, such as the (Geomagic) Phantom, can
guide hand motion to a certain extent to improve selection and ma-
nipulation task performance, often in a contact-driven manner. As
such, haptics can potentially overcome limitations caused by visual
ambiguities that, for example, make it difficult to judge when the hand collides with an object [12]. However, there are certain limitations that directly affect motion and pose guidance. Most common
haptic devices depend on a pen-based actuation metaphor instead
of full-hand feedback. How we hold an actuated pen does not neces-
sarily match how we interact with many objects in real life. Further-
more, while typical contact-driven haptic feedback models support
overall motion guidance, they do not aid users in achieving a specific pose, unless a full-hand interface like an exoskeleton is used.
Finally, most haptic devices are limited in operation range, impos-
ing constraints on the size of training environments.
1.3 Approach
To overcome these limitations, we investigate the use of tactile
feedback, even in non-contact situations. Tactile feedback is unique
in that it directly engages our motor learning systems [31], and performance is improved by both the specificity of feedback and its immediacy [2]. Deliberately, we give tactile feedback indepen-
dent of visual cues, to avoid confounds or constraints imposed by
such visual cues. Normally, designing tactile cues is challenging,
as haptic (force) stimuli cannot be fully replaced by tactile ones
without loss of sensory information [24]. To avoid this issue, we
provide instructional tactile cue patterns, instead of simulating con-
tact events. Also, tactile devices can provide light-weight solutions
with good resolution and operation range [33, 53]. Current touch-based vibrotactile approaches typically do not provide pose and motion requirement indications. In our study, we look specifically at feedback that addresses these issues, by providing feedback to guide the user to move in a particular way or assume a specific hand pose. Our methods use localized vibration patterns that trigger specific bodily reconfigurations or motions. Previous work, e.g.,
[43, 46, 47], indicates that vibration patterns – independent of touch actions – can aid in changing general body pose and motion, which we extend in this work to support more fine-grained selection and manipulation actions.
1.4 Research questions
To design an effective tactile interface for motion and pose guidance, we need to address several challenges. In this paper, we examine how we can guide the user to perform specific motion and pose actions along key control points in the task chain, ideally independent of contact events. Doing so, we can identify the following
three research questions (RQ).
RQ1. How well can tactors be localized and differentiated across the hand and lower arm?
RQ2. How do users interpret tactile pose and motion patterns and
what are their preferences?
RQ3. How does tactile pose and motion guidance perform in a
guided selection and manipulation task?
In this paper, we assess each RQ through a respective user
study. In study 1, we measure the effects of vibration on localization/differentiation, which informed study 2, which looks into the interpretation of tactile cues on pose and motion changes, while analyzing user preference for patterns. Study 3 takes the main user preferences and uses a Wizard-of-Oz methodology to assess the cues in a simulated selection and manipulation task, where we measured the effectiveness of operator-controlled cues. This study
is designed to illustrate cue potential in real application scenarios.
1.5 Contributions
In this paper, we present the design, implementation and validation
of a tactile pose and motion guidance system, TactaGuide, which
is a vibrotactile glove and arm sleeve interface. We show that our
new guidance methods afford fine hand motion and pose guidance,
which supports selection and manipulation actions in 3D user in-
terfaces. We go beyond the state of the art that mainly focused on
vibrotactile cues for body and arm motions [
5
,
25
,
37
,
46
,
47
], or
general poses [
10
,
55
]. In that, we extend previous work to ne hand
manipulation actions through a set of vibrotactile cues provided
via TactaGuide, through the following ndings:
• Localization and differentiation: we show that tactors can be well localized at different hand and arm locations and illustrate that simultaneous vibration works best. We also show that the back of the hand (normally used infrequently) scored as good as the index finger, and is a useful location for contact-driven feedback.
• Pattern interpretation: Based on the biomechanical constraints of various hand/arm parts, we illustrate that most users successfully match patterns to the right motion or bodily reconfiguration.
• Selection and manipulation guidance: through a Wizard-of-Oz experiment we show that vibration patterns support finer-grained 3D selection and manipulation tasks, confirming the validity of our approach.
We deliberately performed all studies in the absence of visual cues to reliably identify the effect of tactile guidance in isolation, with an eye towards eyes-free interaction scenarios. We reflect on the potential for combinations of visual and tactile patterns for guidance in the discussion section.
2 RELATED WORK
In this section, we outline work in related areas.
Haptic feedback for 3D interaction has been explored for many
years, though is still limited by the need for good cue integration and control [27, 48], cross-modal effects [40], limitations in actuation range [19], and fidelity issues [38]. The majority of force feedback devices provide feedback through a grounded (tethered) device.
These devices are often placed on a table and generally make use of
an actuated pen that is grasped by the fingertips, instead of full hand operation, e.g., [36]. In contrast, glove or exoskeleton interfaces can provide feedback such as grasping forces and enable natural movement during haptic interactions [9, 44]. Few haptic devices
provide feedback for the full hand. An example is the CyberGrasp
(CyberGlove systems), a robot-arm actuated glove system that can
provide haptic feedback to individual fingers. Tactile methods afford more flexibility by removing the physical restrictions imposed by the actuated (pen-)arm or exoskeleton construction. However, they can be limited as haptic cues have to be “translated” within the somatosensory system [24]. While substituted cues have been found to be a powerful alternative [26, 28], they can never communicate all sensory aspects. In 3D applications, research has mostly revolved around smaller tactile actuators that are hand-held, e.g., [8], or glove-based, e.g., [16]. Some work has explored the usage of a dense vibrotactor grid at or in the hand, e.g., [17, 35, 42], which is related to our glove design.
Some systems provide guidance cues to trigger body motions
and rotations. Most approaches focus on corrective feedback with
varying degrees of freedom. The majority of systems focuses on
some form of motor learning, which may be coupled with visual instructions of the motion pattern [31]. Effective motion patterns have yet to be found, as illustrated by the variety of patterns in the different studies [5]. However, one common insight is that the spatial location of vibrations naturally conveys the body part the user should move and that saltation patterns are naturally interpreted as directional information [47]. Such saltation patterns are a sequence of properly spaced and timed tactile pulses from the region of the first contactor to that of the last, allowing for good directionality perception [2]. Yet, there is no conclusive answer for rotation patterns. Researchers have provided cues at arms, legs and the torso [41] to train full-body poses that, for example, help with specific sports like snowboarding [47]. Research has also focused specifically at guiding arm motions [46, 51] in 3D environments.
Further variants of this work look at arm [13] or wrist rotation [49] for more general applications. All these methods target only general motions and are not particularly useful for hand pose and motion guidance for 3D selection and manipulation. In contrast, other systems use electromuscular stimulation (EMS) to control hand and arm motions to produce finer motions and poses [50]. The most closely related work looked at triggering muscular actions at the hand and arm via EMS [32]. Yet, EMS systems are awkward to use, and often have limited usage duration or user acceptance. Also, receptors or muscles may get damaged through use of EMS [39].
For hand guidance, the usage of proximity models to improve
spatial awareness around the body to indirectly trigger hand mo-
tion and pose adaptations is another related area. Some researchers
have explored proximity cues with a haptic mouse [20], the usage of proximity to trigger actions [7], and auditory feedback for collision avoidance [1].
Extending the state of the art, we introduce a novel set of vibro-
tactile cues that can guide hand motion and pose configurations
that have high relevance for 3D selection and manipulation.
3 POSE AND MOTION GUIDANCE FEEDBACK
We provide tactile feedback through our new TactaGuide system, a
vibrotactile glove and arm sleeve (Fig. 2). The device affords a full
arm motion operation range, tracked by a Leap Motion. Both glove
and sleeve are made of stretchable eco-cotton that is comfortable
to wear. In the glove, tactors are placed at the fingertips (5 tactors),
inner hand palm (7), middle phalanges (5), and the back of the hand
(4), for a total of 21 tactors (Fig. 2). Cables are held in place through
a 3D printed plate embedded in the fabric on top of the wrist. The
arm sleeve consists of 6 tactors, positioned to form a 3D coordinate
system “through” the arm. We use 8-mm Precision Microdrive coin
vibration motors (model 308-100). All tactors are driven by Arduino
boards. To overcome limitations in motor response caused by inertia (tactors can take up to ~75 ms to start), we use pulse overdrive [35] to reduce the latency by about 25 ms. After that, pulse width modulation (PWM) is used to reduce the duty cycle to the desired ratio under consideration of the corresponding tactor balancing (Fig. 2) to generate different tactile patterns. The system was previously used for another purpose, namely proximity feedback [34], where we showed that proximity cues in combination with collision and friction cues can significantly improve performance.
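To make the overdrive-then-PWM driving scheme concrete, the following minimal Arduino-style sketch illustrates one way it could be implemented; the pin numbers, timings, and balancing values are illustrative assumptions, not the actual TactaGuide firmware.

// Minimal sketch of the overdrive-then-PWM driving scheme: a brief
// full-power pulse overcomes rotor inertia, then the tactor settles to a
// balanced PWM duty cycle. Pins, timings, and balancing are assumptions.
const uint8_t NUM_TACTORS = 4;                         // subset for illustration
const uint8_t tactorPin[NUM_TACTORS] = {3, 5, 6, 9};   // PWM-capable pins (assumed wiring)
const uint8_t balance[NUM_TACTORS] = {255, 210, 180, 230}; // per-tactor duty ceiling (cf. Fig. 2)
const unsigned long OVERDRIVE_MS = 25;                 // full-power kick against inertia

// Start one tactor: overdrive pulse, then steady balanced PWM.
void startTactor(uint8_t id, uint8_t intensity /* 0..255 */) {
  analogWrite(tactorPin[id], 255);                     // overdrive at 100% duty
  delay(OVERDRIVE_MS);                                 // blocking here for clarity only
  uint16_t duty = (uint16_t)intensity * balance[id] / 255; // apply tactor balancing
  analogWrite(tactorPin[id], (uint8_t)duty);           // steady-state PWM
}

void stopTactor(uint8_t id) {
  analogWrite(tactorPin[id], 0);
}

void setup() {
  for (uint8_t i = 0; i < NUM_TACTORS; i++) pinMode(tactorPin[i], OUTPUT);
}

void loop() {
  startTactor(0, 200);                                 // example: actuate first tactor
  delay(500);
  stopTactor(0);
  delay(1000);
}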
Figure 2: Tactor IDs and balancing of the TactaGuide glove, based on pilot study results. The tactors at the arm sleeve were unmodified.
Many selection and manipulation tasks depend on fine control over hand motion and poses. However, in complex 3D scenes, such motor actions may be difficult to plan and coordinate. For example, consider training the hand to move behind an object, to grasp a small and occluded object (or part). While adjusting the visualization may solve some issues – x-ray visualization has been used to look “through” an occluding object [4] – the associated visual ambiguities can make performing the task challenging. To overcome such visual limitations, we assume that tactile cues are valuable to guide hand motion and poses. Inspired by related work,
e.g., [13, 49], the basic premise of our hand motion and pose guidance system is centered around providing various pattern stimuli – activating tactors in a specific region in a specific sequence – using a specific vibration mode (Fig. 1 and 4). Previous work indicates that such patterns are well interpretable by the user, while cue location and directionality inform the user about the specific body part or joint that should be actuated [31]. These cues can be triggered independent of contact events, i.e., events that relate to touching an object. For example, stimulating three tactors in a serial manner from hand palm to fingertip may indicate to the user that they should stretch that finger (Fig. 1C). Similarly, a forward pattern over the arm may indicate the arm needs to be moved forward (Fig. 1D). Further details on the patterns are discussed in Section
4.2. By focusing on motion and pose adjustment for selection and manipulation, which requires finer control over hand and fingers, we extend previous work [13, 49], that focused only on arm or wrist rotation. Our target actions are closer to EMS-based work [50], though without their aforementioned limitations.
We looked closely at the different actions undertaken by the hand during 3D selection and manipulation. Each of these actions is generally associated with a specific hand or arm region. The different posture/motion actions refer to fundamental hand movements (Fig. 1) and thus to biomechanical actions that involve various joint/muscle activations:
•Radial/ulnar deviation: turning of the hand (yaw).
•Pronation/supination: rotation of the hand (roll).
• Move: arm movement to move the hand in the scene, including abduction and adduction (moving arm up and down), forward/backward and left/right motion afforded by the arm lever system.
• Finger flexion/extension: straightening of fingers to pinch or grasp an object.
While flexion and extension can also refer to orienting the hand around the wrist (pitch), we did not support this motion in our work, as it is used infrequently in the frame of selection and manipulation tasks. For fingers, we use different patterns for closing (palm to fingertip vibration) and opening gestures (fingertip to palm vibration), while hand rotations simply involve directional patterns. With respect to arm movement, the arm is a biomechanical lever system as bones and muscles form levers in the body to create human movement – joints form the axes, and the muscles crossing the joints apply the force to move the arm.
Based on ease of detection of location, direction, and guidance interpretation (which hand motion or pose change does the pattern depict?), we implemented three different vibration modes, which we then assessed in our user studies. The location of a stimulus guides the biomechanical action. E.g., when a finger needs to be bent, the vibration pattern is provided at the finger [47]. The three modes were continuous (a continuous vibration stimulus), stutter (a pulsed vibration stimulus), and mixed (a mixture of both). We assumed that the stutter at the end of the mixed mode pattern could indicate direction. Prior to the studies, we performed a pilot study, where we verified stimuli with 5 users and fine-tuned the system.
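As an illustration, the sketch below shows how the three modes could be realized as serial activation sequences over a three-tactor, palm-to-fingertip pattern (cf. Fig. 1C); pins, timings, and intensities are assumptions rather than the system's actual parameters.

// Illustrative sketch of the three serial vibration modes (continuous,
// stutter, mixed) for a three-tactor palm-to-fingertip pattern.
const uint8_t SEQ_LEN = 3;
const uint8_t seqPin[SEQ_LEN] = {3, 5, 6};   // tactor pins in activation order (assumed)
const unsigned long STEP_MS = 300;           // per-tactor activation window
const unsigned long PULSE_MS = 75;           // on/off period within a stutter step

enum Mode { CONTINUOUS, STUTTER, MIXED };

// Vibrate one tactor for STEP_MS, either steadily or pulsed.
void actuateStep(uint8_t pin, bool pulsed) {
  unsigned long t0 = millis();
  while (millis() - t0 < STEP_MS) {
    analogWrite(pin, 200);
    delay(pulsed ? PULSE_MS : STEP_MS);
    if (pulsed) { analogWrite(pin, 0); delay(PULSE_MS); }
  }
  analogWrite(pin, 0);
}

// Serial pattern: tactors are activated one after another, so the order of
// activation conveys direction (here palm -> fingertip, i.e., "stretch").
void playPattern(Mode mode) {
  for (uint8_t i = 0; i < SEQ_LEN; i++) {
    bool pulsed = (mode == STUTTER) ||
                  (mode == MIXED && i == SEQ_LEN - 1); // mixed: last tactor stutters
    actuateStep(seqPin[i], pulsed);
  }
}

void setup() {
  for (uint8_t i = 0; i < SEQ_LEN; i++) pinMode(seqPin[i], OUTPUT);
  playPattern(MIXED);
}

void loop() {}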
4 EXPERIMENT
Pose and motion guidance was examined in three studies, 1, 2 and 3, which investigated how well different vibration patterns and modes trigger hand pose and motion changes, to potentially guide the design of haptic selection and manipulation techniques. These studies were designed to show if hand pose and motion guidance is possible in principle, and to investigate its potential and limitations. As noted before, we deliberately did so independently of visual cues, to avoid confounds or constraints imposed by such cues.
Different user samples were recruited for each study. In each
study users wore the complete TactaGuide glove and arm sleeve
setup. Post-hoc questionnaires for each study were composed of 7-
point Likert items (0 = “fully disagree” to 6 = “fully agree”), related to
mental demand, comfort, usability, and also task-specic perceptual
issues. Users were seated at a desk and could rest their elbow on
the armrest of a chair in study 1 and 2, while vibrotactor locations
(IDs) were shown on a 27" desktop screen. In study 1 we examined
if and to what extent our glove enables users to accurately localize
tactile feedback and their ability to discriminate between different
tactors. Study 2 focused on the user’s interpretation of vibration
patterns into assuming hand poses and performing motions. In
study 3, the user’s hand pose and motion were guided through
vibration patterns that were chosen on the basis of the previous
studies. Study 3 deployed a Wizard-of-Oz methodology to overcome
finger tracking limitations associated with the Leap Motion, which
cannot reliably detect the hand once it is rotated vertically. Yet, this
pose is required for many grasping actions.
4.1 Study 1 - Tactor localization and differentiation
This study focused on the ability of users to locate and differentiate between tactors, to ensure that users can detect the actual region that receives biomechanical actuation. As higher-resolution tactile gloves are scarce, there is no information in the literature about the detectability of individual tactor locations (stimuli), especially with respect to our particular locations at the TactaGuide glove. Also, while sensitivity is well studied for the inside of the hand, sensitivity at the back of the hand has hardly been studied [23].
In task 1, participants were asked to locate a single actuated tactor. A within-subjects 2 x 2 factorial design was employed to study the effect of the factors feedback mode (stutter, continuous) and hand pose (straight, fist) on feedback localization performance (mean hits per trial). Vibration feedback was provided at all 21 different hand locations of the TactaGuide glove, resulting in 84 trials. Two feedback modes were also compared at 6 locations on the wrist, resulting in 12 additional trials. The total of 96 trials were randomly presented. Participants were informed that only a single tactor provided feedback at any given time. In each trial feedback was provided for 2 seconds, after which the participant selected a tactor (ID) from the overview shown on a desktop monitor showing the hand with tactor locations.
In task 2, combinations of two or three actuated tactors had
to be located and differentiated. A 2 x 4 x 7 factorial design was
used to study the localization of tactors depending on their number
(two or three tactors), feedback mode (simultaneous, continuous;
simultaneous, stutter; serial, continuous; serial, stutter) and zone
(thumb, index, pinkie, palm, back of the hand, from the back to the
inner hand, wrist). Each factor combination was repeated, resulting
in 112 trials, presented in randomized order. Before starting the task,
participants were informed that either two or three tactors would
be actuated. Feedback was always provided for 2 seconds. As in the first task, participants responded with the tactor ID displayed on the screen. Together, both tasks took around 45 minutes to complete.
4.1.1 Results. Eight right-handed persons (2 females, mean age
39 (SD 15.7), with a range of 25–65 years) volunteered. Six wore
glasses or contact lenses and two had normal vision. Within-subjects repeated-measures analyses were used to study task-specific main and interaction effects of factors on dependent measures.
In task 1, a total of 768 trials were analysed. For each trial the actually activated tactor and the participant's choice were compared, to record a hit if the correct tactor was chosen (1) or a miss if not (0). As expected, the hand pose but not the mode affected hit rate (hits/trials), which was significantly higher with a straight hand pose (M = 0.82, SE = 0.02) than with a fist (M = 0.69, SE = 0.4), F(1,7) = 13.44, p = .008, η² = .66. With a fist, tactors are closer together, making it more difficult to localize a stimulus. In a secondary analysis, tactors were grouped into six zones across which we compared hit rates (thumb; middle fingers: [index, middle, ring]; pinkie; back of the hand; palm; wrist). The zone affected the hit rate, F(5,35) = 6.48, p < .001, η² = .48. Post-hoc comparisons showed that only the pinkie with the lowest hit rate (M = 0.61, SE = 0.05) differed significantly from the back of the hand, which had a high hit rate (M = 0.85, SE = 0.03), p = .015.
In task 2, a total of 896 trials were analysed. In this task activated tactors were compared to participants' responses. Depending on their perception, participants could either name three tactor IDs or they could name fewer than three and state that there were no more activated tactors. We scored a hit for each correctly named tactor and also for correctly stating that no more tactors were activated. That is, the maximum number of hits per trial was always three.
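One plausible reading of this scoring rule, written out as a small C++ helper for clarity (an assumption of how the score could be computed, not the authors' analysis code):

// Sketch of the per-trial scoring: one hit per correctly named tactor,
// plus one hit for correctly stating "no more tactors" (i.e., stopping
// early when every activated tactor has already been named).
#include <set>
#include <vector>

int scoreTrial(const std::set<int>& activated,   // ground truth (2 or 3 IDs)
               const std::vector<int>& named) {  // participant's answers, in order
  int hits = 0;
  std::set<int> remaining = activated;
  for (int id : named)
    if (remaining.erase(id)) ++hits;             // hit per correct tactor
  if (named.size() < 3 && remaining.empty()) ++hits; // correct early stop
  return hits;                                   // at most 3 by construction
}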
Mean hits depended significantly on the stimulated zone, F(6,42) = 2.62, p = .03, η² = .27 (see Fig. 3 for mean values and standard errors), the feedback mode, F(3,21) = 10.81, p < .001, η² = .61, and its interaction with the number of activated tactors, F(3,21) = 22.98, p < .001, η² = .77 (see Table 1 for mean values and standard errors). A post-hoc test showed that the mean number of hits was higher when feedback was provided at the back of the hand compared to the thumb, the pinkie, and the palm. Performance on the back of the hand was also marginally better than feedback transitioning from the back to the inner hand (p = .058). There were also more hits when feedback was provided at the index finger than at the palm (p = .048). In trials with two activated tactors and for both simultaneous feedback modes, participants got more hits compared to both serial activations (p < .01). When three tactors were activated, differences became non-significant.
Figure 3: Study 1, task 2 (tactor localization and differentiation): Mean number of hits per trial by stimulated zone, with standard errors (SE); hit range = [0;3].
Table 1: Study 1, task 2 (tactor localization and differentiation): Mean hits per trial by number of activated tactors and feedback mode with standard errors (SE), hit range = [0;3].
Number of tactors | Feedback mode | Mean (SE)
Two | Si-C | 2.33 (0.09)*
Two | Si-S | 2.45 (0.09)*
Two | Se-C | 1.72 (0.08)
Two | Se-S | 1.87 (0.09)
Three | Si-C | 1.89 (0.08)
Three | Si-S | 1.94 (0.09)
Three | Se-C | 2.15 (0.16)
Three | Se-S | 2.05 (0.14)
Si = Simultaneous, Se = Serial, C = Continuous, S = Stutter
Performance was best at the index finger and the back of the hand. While the mean differences between zones were statistically significant, they were relatively small (up to 0.23 = 8% of the maximum score). This outcome might be related to the distribution and sensitivity of mechanoreceptors of glabrous skin [23], where the density of low-threshold mechanoreceptive units at the fingers is principally higher than in the palm. Therefore, vibrations are in general harder to differentiate inside the palm, especially in the case of adjacent, closely located tactors. Simultaneous activations led to better performance compared to serial continuous activation when two tactors vibrated. Mean differences ranged from 0.46 to 0.72 (=15% to 24% of the maximum score). However, when three tactors were activated, participants generally achieved a good hit rate for serial feedback, as they correctly identified two out of three tactors on average. There was no interaction effect between feedback mode and stimulated region, that is, the optimum feedback mode was not region-specific.
4.2 Study 2 - Pattern interpretation and preference
We explored motion interpretation and preferences in this observational study in two different tasks. In task 1 we focused on how users would interpret a certain trigger (pattern + mode) by adjusting their hand pose or motion, while task 2 investigated which vibration mode was preferred for a stated hand pose or motion change.
For task 1 of study 2, feedback was provided at the same six hand zones as in the second task of study 1 (localization and differentiation), as well as at the wrist and at an additional hand zone that includes the thumb and index. A specific feedback pattern with a varying number of involved tactors was provided, depending on the zone (see Table 2). We actuated the tactor vibrations serially in three modes: stutter, continuous and a mixed mode (see Fig. 4).
Figure 4: Activation sequence of different feedback modes using the example of a finger pointing motion (index finger) with three involved tactors.
In mixed mode, the first tactor(s) was in continuous mode, while the last one was stuttering. Unlike study 1, simultaneous feedback modes were not used in study 2, as we provided directional feedback cues through serial activation. Feedback patterns at each zone were provided using zone-specific vectors in two opposite directions (forward/clockwise and backwards/counterclockwise), except for the wrist at which three vectors with opposite directions were provided (forward/backwards; up/down; left/right). Feedback was provided and randomized blockwise. Participants completed one block of 36 trials with feedback at six hand zones first (6 regions x 3 modes x 2 directions), followed by 18 trials for the wrist (3 modes x 3 vectors x 2 directions) and finally 6 trials involving the thumb and index at the same time (3 modes x 2 directions), for a total of 60 trials per participant. Participants were told to change their hand pose in a way that they felt matched the provided pattern best. The starting pose for each trial was resting the elbow on the armrest of a chair while the hand was hanging down in a relaxed manner (i.e., a pose between a fist and a fully stretched hand gesture). No further instructions were given and users could choose their movements and gestures freely. The experimenter recorded the resulting motions.
For task 2, the zone-specific feedback patterns and directions were the same as in task 1. We pre-defined specific hand poses for each zone-specific feedback pattern and direction, see Table 2. In each trial, the experimenter first demonstrated which movement or hand pose should be initiated by the feedback that followed. Then the corresponding feedback was provided in three different modes (continuous, stutter, mixed mode), presented in randomized order. The modes were not examined as a factor but functioned as response options: that is, the user had to choose which cue was most suitable for initiating the previously shown movement or pose.
The suitability of the feedback for the respective movement/pose
was also rated on a 7-point Likert scale (6 being “totally suitable”).
As in task 1, six hand zones, the wrist, and the zone including
thumb and index were tested and randomized blockwise. With one
repetition 24 trials were presented for the six hand zones (6 zones
x 2 directions x 2 repetitions), 18 trials for the wrist (2 repetitions x
3 vectors x 3 directions) and 4 trials for thumb and index zone (2
repetitions x 2 directions), resulting in 46 trials. The experimenter
recorded the choice of mode and suitability rating for each trial.
Eight participants (7 right-handed, 2 females, mean age 29.6, SD 5.3, with a range of 23–40 years) volunteered. Three wore glasses or contact lenses and five had normal vision.
4.2.1 Results. For task 1, 480 trials were analysed. All feedback-dependent interpretations were listed and counted if they occurred sufficiently often, that is, were used by at least three of the participants. When feedback was provided at the thumb, the back of the hand, palm, or wrist, resulting movements were diverse for each feedback direction and mode and no coherent movement/gesture could be observed. Feedback provided once at the index, pinkie and at thumb and index, or repeatedly from the back to the inner hand resulted in “successful” movements/gestures, which correspond to our interpretation of the respective feedback. That is, forward/backward feedback at the index and pinkie resulted in stretching/bending the respective fingers, simultaneous feedback at the thumb and index was interpreted as a pinch movement, and feedback provided from the back to the inner hand resulted in supinations.
For task 2, 384 trials were analyzed. Mode preferences for hand and wrist were analyzed separately, as three instead of two directional vectors were used for the wrist. For each participant and factor combination we calculated how many times each mode was preferred. With one repetition, each mode could be preferred at most two times for a given combination. Generally, the continuous
Table 2: Study 2, task 1 (pattern interpretation) and 2 (preference): Pre-defined hand movements depending on zone, activated tactors and feedback direction. The + symbol indicates simultaneous activation of concatenated numbers.

Zone | IDs of activated tactors (see Fig. 2) and order of activation | Movement for tactor activation (→ from left to right, ← from right to left)
Thumb | 7, 1, 14 | → stretch, ← bend
Pinkie | 6, 5, 10 | → stretch, ← bend
Index | 7, 2, 13 | → stretch, ← bend
Thumb and Index | 7, 1+2, 13+14 | → pinch, ← release
Hand inner | 18, 16, 20, 17 | → ulnar deviation, ← radial deviation
Back of the hand | 8, 7, 6, 9 | → ulnar deviation, ← radial deviation
From back to inner hand | 7, 6, 20, 16 | → supination, ← pronation
Wrist | 24, 23, 22 | → forward, ← backward
Wrist | 26, 23, 25 | → right, ← left
Wrist | 27, 23 | → up, ← down
mode was preferred at the hand, M = 1.21, SE = 0.1, over the stutter, M = 0.2, SE = 0.06, p = .001, and mixed mode, M = 0.6, SE = 0.06, p = 0.18, F(1.15,8.03) = 30.09, p < .001, η² = .81. Nevertheless, this preference was not consistent across zones as, especially at the back of the hand and the palm, the mixed mode was chosen more often than the continuous mode, but not significantly so. At the wrist the continuous mode was also preferred, F(2,14) = 8.71, p = .003, η² = .56. Post-hoc comparisons showed that the continuous mode, M = 1.27, SE = 0.19, was significantly superior to stutter vibration, M = 0.25, SE = 0.1, p = .02. Mode preferences in percent by zone are listed in Figure 5.
Figure 5: Task 2: Vibration feedback mode preferences by zone in percent.
The direction (at hand zones and wrist) and the vector (at the wrist) did not affect mode preference. Suitability ratings were generally slightly positive, while feedback patterns that were provided on the wrist to trigger up/down and left/right movements got more neutral ratings.
Results from task 1 indicate that, in principle, patterns can be reasonably well interpreted, i.e., users did perform the intended main action. However, the interpretation of direction was often an issue. Most likely, the generally good detection of the main action can be attributed to the biomechanical limitations and prime actions of hand and fingers, e.g., fingers are mainly bent, not rotated. Still, as we did not inform users what kind of action a pattern could potentially trigger, they had little possibility of learning a pattern. For task 2, it is not clear why the mixed mode was preferred for some areas. One possible explanation is that both areas (inner, back of hand) are quite flat, and exhibit different mechanical properties compared to, for example, the fingers. Suitability ratings indicated that feedback patterns used at the hand zones and wrist are generally appropriate for guidance.
4.3 Study 3 - Hand pose and motion guidance
Based on the outcomes of the first two studies (1 and 2), we performed a Wizard-of-Oz [18] study to assess the cues for controlling finer-grained hand selection and manipulation actions. We deliberately chose a Wizard-of-Oz methodology to overcome some of the evident limitations of the hand tracking system we used (Leap Motion), which cannot track fingers precisely when the hand is held vertically, due to the occlusion of the fingers in the camera image. This study investigated user performance in six selection and manipulation tasks that cover hand pose changes and hand motions. Grids were used to control and measure performance on the horizontal and vertical plane with 25 x 16 grid fields on each plane and a grid field size of 2 x 2 cm, see Fig. 6.
Figure 6: Apparatus for Study 3, showing the measurement
grids used for observing performance in the tasks.
The six tasks involved 1) moving the hand to a specific field in
straight horizontal directions on the grid and 2) on the vertical
plane using the shortest path, 3) performing supination/pronation,
4) radial/ulnar deviation, 5) pointing and 6) grasping one of four
wooden blocks that were arranged on the horizontal plane in a
2 x 2 matrix. We included pointing in addition to selection and
manipulation, as it is often used for cohesion in training tasks. To
trigger actions we applied a pattern that we also used in study 2 and
that corresponds to a pre-defined motion, see Table 2: pinching was
used to grasp blocks. We decided to use the continuous vibration
mode as it was preferred overall in study 2. Before starting the
actual experiment, participants received a 5-minute training session
to learn the association between vibration feedback patterns and
corresponding actions.
Each participant performed the six tasks in random order. The experimenter acted as operator, who had an overview of the tasks and their order and “controlled” each action of the participant step by step, using a visual interface to trigger the predefined patterns. The operator started and stopped the specific feedback that was required for the respective task. False movements were not corrected, that is, if the user's hand moved too far, the operator provided feedback as if the hand was at the correct position. After a task was finished, an observer (assistant of the experimenter) who was not aware of the targeted position and who could only see the participant, recorded the final position of the hand, noted any further observations and took pictures. After having finished all six tasks, the participant started a new trial that required him/her to do the six tasks again in a random order. All tasks were the same the second time, except for grasping the block. When participants encountered a task for the first time, blocks had a distance of two fields between each other. The second time around, the difficulty was raised by reducing the distance to one field. Study 3 was video-recorded with permission of the users. After having completed the study, participants rated feedback perception, task easiness, needed concentration, ease of remembering movements/gestures that correspond to a respective feedback, suitability of feedback and their performance. Eight right-handed participants (2 females, mean age 35.8 (SD 16.4), with a range of 23–65 years) volunteered.
4.3.1 Results. For study 3, we analysed 48 trials. The comparison of the targeted and the actually reached grid field showed that participants could be guided quite precisely to a specific grid field on the horizontal plane. In the first trial, the reached field had only an average deviation of M = 1.88, SD = 1.36 fields from the targeted one, and M = 2.25, SD = 2.05 in the second trial. Deviations on the vertical plane were even smaller: M = 0.88, SD = 0.35 in the first and M = 0.63, SD = 1.06 in the second trial. Pointing and grasping the bricks at the two difficulty levels was always successful. Nevertheless, sometimes participants confused radial/ulnar deviation with supination/pronation, radial with ulnar deviation, and up/down with left/right feedback. Participants' ratings were compared between different tasks. Generally, all ratings were positive,
especially concerning pointing and grasping. While ratings for the tasks that targeted supination/pronation, radial/ulnar deviation and moving the arm around received slightly positive feedback, ratings for pointing and grasping were strongly positive. Suitability ratings for moving the arm up/down and left/right were also slightly positive and higher than in study 2. Grasping and pointing required even less concentration than the other tasks and the assignment of the vibration feedback to the movement/gesture seems to have been easier to remember. Participants thought they performed better in pointing and grasping than in the other tasks and that the patterns initiating pointing and grasping fitted “better” compared to other patterns. Overall comfort and usability ratings are listed in Table 3. In general, ratings are rather positive; only the cable seemed to have disrupted users slightly, which could be due to the weight of the cable, as users also felt somewhat exhausted after wearing the glove for some time.
Table 3: Overall comfort and usability ratings for Study 3.
Statement Mean (SD)
Glove wearing comfort 5 (1.2)
Sitting comfort 5.13 (2.03)
No disruption through the cable 3.88 (2.17)
Noticeability of vibrations 4.5 (0.93)
Not exhausted 3.75 (2.38)
Ease of learning the system 5.5 (0.03)
Ease of using the system 4.88 (1.13)
Expected improvement through exercise 6.5 (0.54)
While results are generally encouraging, hand rotation guidance was not followed reliably. As noted below in the discussion section, based on previous work [6], we can assume that the combination of tactile and (non-ambiguous) visual cues could address this and further improve performance.
5 DISCUSSION
Here, we will discuss our findings with regard to the research questions.
RQ1. How well can tactors be localized and differentiated across the hand and lower arm?
We showed that users can reasonably well localize and differentiate cues. Especially interesting is the good performance of cues at the back of the hand, which performed about as well as the index finger (which is highly sensitive, in contrast to the back of the hand). This result is useful as the back of the hand can also be used for other purposes, like the provision of touch-driven events that can be coupled to guidance, e.g., touching a wall with the back of the hand while moving a grasped object.
RQ2. How do users interpret tactile pose and motion patterns and
what are their preferences?
While tactors could be localized well, the interpretation of more complex stimuli – in particular direction – was not without errors. For several reasons, this is not surprising. First, a previous study also found that users interpret some patterns as either push or pull motions [47]. That is, the direction a pattern refers to may be interpreted differently by different users. While recognition of the dominant biomechanical action (e.g., flexion of the finger, or rotation of the hand) was reasonably high, we assume that personalizing patterns will result in a higher percentage of correct motions. In our study we observed that the efficiency of our system likely improves with learning. This means that over time, users will likely be able to interpret the patterns more easily and reliably. Previous work already noted that the level of abstraction likely influences learning rates [37], with lower abstraction resulting in quicker learning. Here, we assume that our guidance patterns are at a medium to low abstraction level, as patterns are (a) easily
localizable and (b) have good directional information that can be associated with a dominant biomechanical action. Learning will likely also be required to separate different types of feedback. Currently, we did not focus on touch cues, which would involve vibration at specific contact points. Depending on the context of operation, such vibration could be misunderstood, especially in cases where the user touches an object while receiving a pose change guidance pattern. While we could use different vibration modes to encode different events, the ability of users to actually differentiate among them requires further study, especially if we want to integrate pose and motion guidance methods with haptically supported selection and manipulation techniques.
RQ3. How does tactile pose and motion guidance perform in a
guided selection and manipulation task?
With respect to hand guidance, we showed that our guidance methods can trigger motions and poses that can support finer-grained 3D selection and manipulation, independent of touch cues that may normally drive hand guidance. Our results extend previous tactile methods that only support general motion and pose guidance [13, 49], while our granularity is similar to EMS-based methods [32, 50], but without their disadvantages. Furthermore, while our current patterns only triggered start-to-end motions, e.g., to move the finger from a stretched to a bent configuration, guidance to intermediate stages is possible by running the pattern as long as needed. We might also use the strength of the feedback to provide a further indication about when to stop a motion, e.g., by making the feedback proportional to the error being made [14].
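As a sketch of such error-proportional feedback, the following Arduino-style fragment maps the remaining deviation from a target wrist angle onto the PWM duty cycle, so vibration fades out as the user approaches the target; the tracker interface, angle range, and duty bounds are hypothetical.

// Hypothetical error-proportional feedback [14]: vibration strength is a
// linear function of the remaining angular error, with a small dead zone.
#include <math.h>

const uint8_t TACTOR_PIN = 3;       // PWM-capable pin (assumed wiring)
const float MAX_ERR_DEG = 90.0f;    // error at/above which vibration is strongest
const uint8_t MIN_DUTY = 40;        // weakest still-perceivable duty cycle (assumed)
const uint8_t MAX_DUTY = 220;

// Duty cycle proportional to the angular error; 0 when on target.
uint8_t dutyForError(float errDeg) {
  errDeg = fabsf(errDeg);
  if (errDeg < 2.0f) return 0;                          // dead zone: target reached
  float t = fminf(errDeg, MAX_ERR_DEG) / MAX_ERR_DEG;   // normalize to [0,1]
  return (uint8_t)(MIN_DUTY + t * (MAX_DUTY - MIN_DUTY));
}

float readTrackedWristAngle() {  // placeholder for the hand-tracker interface
  return 30.0f;                  // e.g., value streamed from the tracking system
}

void setup() { pinMode(TACTOR_PIN, OUTPUT); }

void loop() {
  float target = 0.0f;                                  // target supination angle
  float err = readTrackedWristAngle() - target;
  analogWrite(TACTOR_PIN, dutyForError(err));
  delay(50);                                            // ~20 Hz update
}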
In all studies, we decoupled tactile cues from visual feedback. We
deliberately did not use visualization aids to isolate the performance
of our tactile guidance method, without interference from any
given visualization method. However, related work, e.g., [6], has established that cross-modal feedback, such as the combination of visual and haptic cues, may also reduce error rates. We assume that tactile guidance methods can be visually enhanced to reduce ambiguities, based on visual and haptic stimuli integration theories [15]. The challenge is to do this in an unambiguous manner and to avoid visual conflicts. Examples of visualization techniques that can aid in this respect are see-through visualization techniques [3], like transparency or cut-away. While cut-away techniques may limit spatial understanding, as inter-object spatial relationships may be more difficult to understand (as objects are not rendered), transparency has been shown to maintain a reasonable level of spatial understanding [3]. Such visualization could be combined with feedback co-located with the hand (instead of embedded in the scene) that provides motion and pose guidance. We assume co-located feedback – for example by overlaying a second hand/finger animation over the virtual hand to provide guidance – will likely have a higher success rate, as it avoids ambiguity issues.
However, this requires further study. Furthermore, coupling of feedback in multiple modalities may increase cognitive load [54]. In our case, the somatosensory system performs complex processes involving multiple brain areas to interpret the haptic cues, while cognitive load can vary based on different haptic properties [52]. Still, cognitive load likely decreases through learning [56], an issue we plan to follow up on in future work.
6 CONCLUSION
We presented a novel tactile approach to improve hand motor plan-
ning and action coordination in complex spatial 3D applications,
by guiding hand motion and poses. Such guidance can be highly
useful for 3D interaction, especially for applications that suffer from visual occlusions. Extending previous work on tactile cues that only worked on more general body motions, we showed that finer-grained pose and motion adjustments can be triggered. While
learning and visual cues are expected to further improve perfor-
mance, e.g., by reducing some interpretation errors, the results of
our user studies already provide a solid basis for implementing
tailored 3D selection and manipulation techniques that can be used
in the frame of applications that require ne motor control, such
as assembly training.
Future work includes full integration of guidance methods into
3D applications and study thereof. An important next step is the
coupling of tactile guidance with hand co-located visual cues, which
will likely lead to further improvements and a better understanding
of the full potential of guidance support methods. We also want to
investigate task chain variations to see when and how guidance
feedback affects performance in complex situations, including tac-
tile cues for guidance and touch-related events (collision, friction)
in combination with visual feedback. For real training applications,
guidance methods need to be coupled to behavior and ideal path
analysis to dynamically guide users through, for example, training
scenarios. To address the finger detection problems with a single sensor, hand tracking must be improved, e.g., via a multi-sensor setup [21]. Finally, we would like to point out that due to the independence from visual cues, our system can be used in other domains, such as guiding visually-disabled people [31].
ACKNOWLEDGMENTS
This work was partially supported by the Deutsche Forschungsge-
meinschaft (KR 4521/2-1) and the Volkswagen Foundation through
a Lichtenbergprofessorship.
REFERENCES
[1] C. Afonso and S. Beckhaus. 2011. How to Not Hit a Virtual Wall: Aural Spatial Awareness for Collision Avoidance in Virtual Environments. In Proceedings of the 6th Audio Mostly Conference: A Conference on Interaction with Sound (AM ’11). ACM, 101–108.
[2] R. B. Ammons. 1956. Effects of Knowledge of Performance: A Survey and Tentative Theoretical Formulation. The Journal of General Psychology 54, 2 (1956), 279–299.
[3] F. Argelaguet, A. Kulik, A. Kunert, C. Andujar, and B. Froehlich. 2011. See-through techniques for referential awareness in collaborative virtual reality. International Journal of Human-Computer Studies 69, 6 (2011), 387–400.
[4] R. Bane and T. Hollerer. 2004. Interactive Tools for Virtual X-Ray Vision in Mobile Augmented Reality. In Proceedings of the 3rd IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR ’04). IEEE, 231–239.
[5] K. Bark, P. Khanna, R. Irwin, P. Kapur, S. A. Jax, L. Buxbaum, and K. Kuchenbecker. 2011. Lessons in using vibrotactile feedback to guide fast arm motions. In World Haptics Conference (WHC), 2011 IEEE. IEEE, 355–360.
[6] P. W. Battaglia, M. Di Luca, M. Ernst, P. R. Schrater, T. Machulla, and D. Kersten. 2010. Within- and Cross-Modal Distance Information Disambiguate Visual Size-Change Perception. PLOS Computational Biology 6, 3 (03 2010), 1–10.
[7] S. Beckhaus, F. Ritter, and T. Strothotte. 2000. CubicalPath - dynamic potential fields for guided exploration in virtual environments. In Proceedings of the Eighth Pacific Conference on Computer Graphics and Applications. 387–459.
[8] H. Benko, C. Holz, M. Sinclair, and E. Ofek. 2016. NormalTouch and TextureTouch: High-fidelity 3D Haptic Shape Rendering on Handheld Virtual Reality Controllers. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST ’16). ACM, 717–728.
[9] J. Blake and H. B. Gurocak. 2009. Haptic Glove With MR Brakes for Virtual Reality. IEEE/ASME Transactions on Mechatronics 14, 5 (2009), 606–615.
[10] A. Bloomfield and N. Badler. 2008. Virtual training via vibrotactile arrays. Presence: Teleoperators and Virtual Environments 17, 2 (2008), 103–120.
[11] A. Bloomfield, Y. Deng, J. Wampler, P. Rondot, M. Harth, D. McManus, and N. Badler. 2003. A taxonomy and comparison of haptic actions for disassembly tasks. In Virtual Reality, 2003. Proceedings. IEEE. IEEE, 225–231.
[12] G. C. Burdea. 1996. Force and Touch Feedback for Virtual Reality. John Wiley & Sons, Inc.
[13] C. Chen, Y. Chen, Y. Chung, and N. Yu. 2016. Motion Guidance Sleeve: Guiding the Forearm Rotation Through External Artificial Muscles. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, 3272–3276.
[14] D. Drobny and J. O. Borchers. 2010. Learning basic dance choreographies with different augmented feedback modalities. In Proceedings of the 28th International Conference on Human Factors in Computing Systems, CHI 2010, Extended Abstracts Volume. 3793–3798.
[15] M. Ernst and M. Banks. 2002. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415, 6870 (2002), 429.
[16] P. Gallotti, A. Raposo, and L. Soares. 2011. v-Glove: A 3D Virtual Touch Interface. In 2011 XIII Symposium on Virtual Reality. 242–251.
[17] U. Gollner, T. Bieling, and G. Joost. 2012. Mobile Lorm Glove: Introducing a Communication Device for Deaf-blind People. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction (TEI ’12). ACM, 127–130.
[18] P. Green and L. Wei-Haas. 1985. The rapid development of user interfaces: Experience with the Wizard of Oz method. In Proceedings of the Human Factors Society Annual Meeting, Vol. 29. SAGE Publications Sage CA: Los Angeles, CA, 470–474.
[19] C. Hatzfeld and T. A. Kern. 2014. Engineering Haptic Devices: A Beginner’s Guide. Springer London.
[20] B. Holbert. 2007. Enhanced Targeting in a Haptic User Interface for the Physically Disabled Using a Force Feedback Mouse. Ph.D. Dissertation. Advisor(s) Huber, M. AAI3277666.
[21] H. Jin, Q. Chen, Z. Chen, Y. Hu, and J. Zhang. 2016. Multi-LeapMotion sensor based demonstration for robotic refine tabletop object manipulation task. CAAI Transactions on Intelligence Technology 1, 1 (2016), 104–113.
[22] R. Johansson and R. Flanagan. 2009. Coding and use of tactile signals from the fingertips in object manipulation tasks. Nature Reviews Neuroscience 10, 5 (2009), 345.
[23] R. S. Johansson and Å. B. Vallbo. 1979. Tactile sensibility in the human hand: relative and absolute densities of four types of mechanoreceptive units in glabrous skin. The Journal of Physiology 286, 1 (1979), 283–300.
[24] K. Kaczmarek, J. Webster, P. Bach-y-Rita, and W. Tompkins. 1991. Electrotactile and vibrotactile displays for sensory substitution systems. IEEE Transactions on Biomedical Engineering 38, 1 (1991), 1–16.
[25] M. Klapdohr, B. Wöldecke, D. Marinos, J. Herder, C. Geiger, and W. Vonolfen. 2010. Vibrotactile Pitfalls: Arm Guidance for Moderators in Virtual TV Studios. In Proceedings of the 13th International Conference on Humans and Computers (HC ’10). University of Aizu Press, 72–80.
[26] E. Kruijff, A. Marquardt, C. Trepkowski, R. W. Lindeman, A. Hinkenjann, J. Maiero, and B. E. Riecke. 2016. On Your Feet!: Enhancing Vection in Leaning-Based Interfaces Through Multisensory Stimuli. In Proceedings of the 2016 Symposium on Spatial User Interaction (SUI ’16). ACM, 149–158.
[27] E. Kruijff, A. Marquardt, C. Trepkowski, J. Schild, and A. Hinkenjann. 2017. Designed Emotions: Challenges and Potential Methodologies for Improving Multisensory Cues to Enhance User Engagement in Immersive Systems. Vis. Comput. 33, 4 (April 2017), 471–488.
[28] E. Kruijff, G. Wesche, K. Riege, G. Goebbels, M. Kunstman, and D. Schmalstieg. 2006. Tactylus, a Pen-input Device Exploring Audiotactile Sensory Binding. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST ’06). ACM, 312–315.
[29] J. J. LaViola, E. Kruijff, R. P. McMahan, D. Bowman, and I. P. Poupyrev. 2017. 3D User Interfaces: Theory and Practice. Pearson Education.
[30] D. Levac and H. Sveistrup. 2014. Motor Learning and Virtual Reality. 25–46 pages.
[31] J. Lieberman and C. Breazeal. 2007. TIKL: Development of a Wearable Vibrotactile Feedback Suit for Improved Human Motor Learning. (2007).
[32] P. Lopes, D. Yüksel, F. Guimbretière, and P. Baudisch. 2016. Muscle-plotter: An Interactive System Based on Electrical Muscle Stimulation That Produces Spatial Output. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST ’16). ACM, 207–217.
[33] V. Maheshwari and R. Saraf. 2008. Tactile Devices To Sense Touch on a Par with a Human Finger. Angewandte Chemie International Edition 47, 41 (2008), 7808–7826.
[34] A. Marquardt, E. Kruijff, C. Trepkowski, J. Maiero, A. Schwandt, A. Hinkenjann, W. Stuerzlinger, and J. Schoening. 2018. Audio-Tactile Feedback for Enhancing 3D Manipulation. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST ’18). ACM.
[35] J. Martinez, A. Garcia, M. Oliver, J. P. Molina, and P. Gonzalez. 2016. Identifying Virtual 3D Geometric Shapes with a Vibrotactile Glove. IEEE Computer Graphics and Applications 36, 1 (Jan 2016), 42–51.
[36] T. H. Massie and K. Salisbury. 1994. The phantom haptic interface: A device for probing virtual objects. In Proceedings of the ASME winter annual meeting, symposium on haptic interfaces for virtual environment and teleoperator systems, Vol. 55. 295–300.
[37] T. McDaniel, D. Villanueva, S. Krishna, and S. Panchanathan. 2010. MOVeMENT: A framework for systematically mapping vibrotactile stimulations to fundamental body movements. In Haptic Audio-Visual Environments and Games (HAVE), 2010 IEEE International Symposium on. IEEE, 1–6.
[38] R. P. McMahan, D. A. Bowman, D. J. Zielinski, and R. B. Brady. 2012. Evaluating display fidelity and interaction fidelity in a virtual reality game. IEEE Transactions on Visualization and Computer Graphics 18, 4 (2012), 626–633.
[39] K. Nosaka, A. Aldayel, M. Jubeau, and T. C. Chen. 2011. Muscle damage induced by electrical stimulation. European Journal of Applied Physiology 111, 10 (03 Aug 2011), 2427.
[40] D. Pai. 2005. Multisensory interaction: Real and virtual. In Robotics Research. The Eleventh International Symposium. Springer, 489–498.
[41] E. Piateski and L. Jones. 2005. Vibrotactile pattern recognition on the arm and torso. In Eurohaptics Conference, 2005 and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2005. World Haptics 2005. First Joint. IEEE, 90–95.
[42] H. Regenbrecht, J. Hauber, R. Schoenfelder, and A. Maegerlein. 2005. Virtual Reality Aided Assembly with Directional Vibro-tactile Feedback. In Proceedings of the 3rd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia (GRAPHITE ’05). ACM, 381–387.
[43] E. Ruffaldi, A. Filippeschi, A. Frisoli, O. Sandoval, C. A. Avizzano, and M. Bergamasco. 2009. Vibrotactile perception assessment for a rowing training system. In World Haptics 2009 - Third Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. 350–355.
[44] K. Sato, K. Minamizawa, N. Kawakami, and S. Tachi. 2007. Haptic Telexistence. In ACM SIGGRAPH 2007 Emerging Technologies (SIGGRAPH ’07). ACM, Article 10.
[45] R. A. Schmidt and C. A. Wrisberg. 2004. Motor Learning and Performance. Human Kinetics.
[46] C. Schönauer, K. Fukushi, A. Olwal, H. Kaufmann, and R. Raskar. 2012. Multimodal Motion Guidance: Techniques for Adaptive and Dynamic Feedback. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (ICMI ’12). ACM, 133–140.
[47] D. Spelmezan, M. Jacobs, A. Hilgers, and J. Borchers. 2009. Tactile Motion Instructions for Physical Activities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’09). ACM, 2243–2252.
[48] C. Spence and S. Squire. 2003. Multisensory integration: maintaining the perception of synchrony. Current Biology 13, 13 (2003), R519–R521.
[49] A. A. Stanley and K. J. Kuchenbecker. 2012. Evaluation of Tactile Feedback Methods for Wrist Rotation Guidance. IEEE Trans. Haptics 5, 3 (Jan. 2012), 240–251.
[50] E. Tamaki, T. Miyaki, and J. Rekimoto. 2011. PossessedHand: Techniques for Controlling Human Hands Using Electrical Muscles Stimuli. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11). ACM, 543–552.
[51] H. Uematsu, D. Ogawa, R. Okazaki, T. Hachisu, and H. Kajimoto. 2016. HALUX: projection-based interactive skin for digital sports. In SIGGRAPH Emerging Technologies.
[52] G. H. Van Doorn, V. Dubaj, D. B. Wuillemin, B. L. Richardson, and M. A. Symmons. 2012. Cognitive Load Can Explain Differences in Active and Passive Touch. In Haptics: Perception, Devices, Mobility, and Communication, P. Isokoski and J. Springare (Eds.). Springer Berlin Heidelberg, 91–102.
[53] S. Vishniakou, B. W. Lewis, X. Niu, A. Kargar, K. Sun, M. Kalajian, N. Park, M. Yang, Y. Jing, P. Brochu, et al. 2013. Tactile Feedback Display with Spatial and Temporal Resolutions. Scientific Reports 3 (2013), 2521.
[54] H. S. Vitense, J. A. Jacko, and V. K. Emery. 2002. Multimodal Feedback: Establishing a Performance Baseline for Improved Access by Individuals with Visual Impairments. In Proceedings of the Fifth International ACM Conference on Assistive Technologies (Assets ’02). ACM, 49–56.
[55] Y. Zheng and J. Morrell. 2010. A vibrotactile feedback approach to posture guidance. In Haptics Symposium, 2010 IEEE. IEEE, 351–358.
[56] M. Zhou, D. B. Jones, S. D. Schwaitzberg, and C. G. L. Cao. 2007. Role of Haptic Feedback and Cognitive Load in Surgical Skill Acquisition. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, 11 (2007), 631–635.