
Tactile Hand Motion and Pose Guidance for 3D Interaction


Alexander Marquardt
Bonn-Rhein-Sieg University of
Applied Sciences
Sankt Augustin, Germany
Jens Maiero
Bonn-Rhein-Sieg University of
Applied Sciences
Sankt Augustin, Germany
Ernst Kruijff
Bonn-Rhein-Sieg University of
Applied Sciences
Sankt Augustin, Germany
Christina Trepkowski
Bonn-Rhein-Sieg University of
Applied Sciences
Sankt Augustin, Germany
Andrea Schwandt
Bonn-Rhein-Sieg University of
Applied Sciences
Sankt Augustin, Germany
André Hinkenjann
Bonn-Rhein-Sieg University of
Applied Sciences
Sankt Augustin, Germany
Johannes Schöning
University of Bremen
Bremen, Germany
Wolfgang Stuerzlinger
Simon Fraser University
Surrey, Canada
Figure 1: Hand pose and motion changes and associated vibration patterns using the TactaGuide interface: radial/ulnar deviation (A), pronation/supination (B), finger flexion (pinching) (C) and hand/arm movement (D). Tactor locations are green.
Abstract
We present a novel forearm-and-glove tactile interface that can enhance 3D interaction by guiding hand motor planning and coordination. In particular, we aim to improve hand motion and pose actions related to selection and manipulation tasks. Through our user studies, we illustrate how tactile patterns can guide the user, by triggering hand pose and motion changes, for example to grasp (select) and manipulate (move) an object. We discuss the potential and limitations of the interface, and outline future work.
CCS Concepts: • Human-centered computing → Haptic devices; Interaction techniques; HCI design and evaluation methods.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
VRST ’18, November 28-December 1, 2018, Tokyo, Japan
©2018 Association for Computing Machinery.
ACM ISBN 978-1-4503-6086-9/18/11…$15.00
Keywords: Tactile Feedback; 3D User Interface; Hand Guidance
ACM Reference Format:
Alexander Marquardt, Jens Maiero, Ernst Kruijff, Christina Trepkowski,
Andrea Schwandt, André Hinkenjann, Johannes Schöning, and Wolfgang
Stuerzlinger. 2018. Tactile Hand Motion and Pose Guidance for 3D Interac-
tion. In VRST 2018: 24th ACM Symposium on Virtual Reality Software and
Technology (VRST ’18), November 28-December 1, 2018, Tokyo, Japan. ACM,
New York, NY, USA, 10 pages.
1 Introduction
Over the last decade, 3D user interfaces have advanced rapidly, making systems that support a wide range of application domains available [ ]. Despite these advances, many challenges remain
to be addressed. In this paper, we focus on how we can improve
hand motor planning and coordination for 3D selection and manip-
ulation tasks, i.e., the different actions of moving and reorienting a hand through space. Especially in visually complex 3D scenes, such actions can be difficult to perform as they can be constrained by visual conflicts, resulting in difficulties in judging spatial interrelationships between the hand and the scene. This often results in unwanted object penetrations. In real life, we often depend on
complementary haptic cues to perform tasks in visually-complex
situations. However, including haptic cues is not always straightforward in 3D applications, as it often depends on complex mechanics,
such as exoskeletons or tactor grids.
1.1 Cues for motor planning and coordination
Motor planning and coordination of selection and manipulation
tasks is generally performed in a task chain with key control points
that relate to biomechanical actions [ ]. These actions contain contact-driven touch events that can inform the planning and coordination of hand actions. For example, a user may grasp an object (touch informs hand pose to grasp) and change its rotation and translation in space by moving and reorienting the hand (motion, pose) while avoiding touching other objects (touch) [ ]. As the hand-arm is a biomechanical lever system, hand motion can be accomplished by arm motion, but also by wrist rotation.
Within this article, we specifically focus on motion and pose guidance, and reflect on interrelationships with touch in our discussion. Pose not only relates to the orientation of the hand itself but also to its specific postures needed to select and manipulate an object, e.g., to grasp or move an object through a tunnel. While contact-point feedback on a user’s hand may provide useful feedback to avoid touching other objects during pose and motion changes, such actions can also be performed independent of (or even avoid) touch
contact. To do so, both in real life and in 3D applications we may
rely on proprioceptive cues, which are typically acquired through
motor learning [ ]. However, cues beyond proprioception and visual feedback about the scene may be required to perform (or learn) a task correctly. So-called augmented feedback – information provided about an action that is supplemental to the inherent feedback typically received from the sensory system – is an important factor supporting motor learning [ ]. While learning how to optimally perform a task – regardless of whether it is in a purely virtual environment or a simulated real-world task – most interfaces unfortunately do not provide feedback to encourage correct hand motions and poses, i.e., no form of guidance. However, selection and manipulation tasks, and potentially subsequent motor learning, will likely benefit from such guidance. For example, consider training users for assembly tasks where knowledge acquired in a virtual environment needs to be transferred to the real world [11].
1.2 Limitations of haptic devices for pose and
motion guidance
Traditional haptic interfaces, such as the (Geomagic) Phantom, can
guide hand motion to a certain extent to improve selection and ma-
nipulation task performance, often in a contact-driven manner. As
such, haptics can potentially overcome limitations caused by visual ambiguities that, for example, make it difficult to judge when the hand collides with an object [ ]. However, there are certain limitations that directly affect motion and pose guidance. Most common haptic devices depend on a pen-based actuation metaphor instead of full-hand feedback. How we hold an actuated pen does not necessarily match how we interact with many objects in real life. Furthermore, while typical contact-driven haptic feedback models support overall motion guidance, they do not aid users in achieving a specific pose, unless a full-hand interface like an exoskeleton is used. Finally, most haptic devices are limited in operation range, imposing constraints on the size of training environments.
1.3 Approach
To overcome these limitations, we investigate the use of tactile
feedback, even in non-contact situations. Tactile feedback is unique
in that it directly engages our motor learning systems [ ], and performance is improved by both the specificity of feedback and its immediacy [ ]. Deliberately, we give tactile feedback independent of visual cues, to avoid confounds or constraints imposed by such visual cues. Normally, designing tactile cues is challenging, as haptic (force) stimuli cannot be fully replaced by tactile ones without loss of sensory information [ ]. To avoid this issue, we provide instructional tactile cue patterns, instead of simulating contact events. Also, tactile devices can provide light-weight solutions with good resolution and operation range [ ]. Current touch-based vibrotactile approaches typically do not provide pose and motion requirement indications. In our study, we look specifically at feedback that addresses these issues, by providing feedback to guide the user to move in a particular way or assume a specific hand pose. Our methods use localized vibration patterns that trigger specific bodily reconfigurations or motions. Previous work, e.g., [ ], indicates that vibration patterns – independent of touch actions – can aid in changing general body pose and motion, which we extend in this work to support more fine-grained selection and manipulation actions.
1.4 Research questions
To design an effective tactile interface for motion and pose guidance, we need to address several challenges. In this paper, we examine how we can guide the user to perform specific actions along key control points in the task chain, ideally independent of contact events. Doing so, we can identify the following three research questions (RQ).
RQ1. How well can tactors be localized and differentiated across the hand and lower arm?
RQ2. How do users interpret tactile pose and motion patterns and what are their preferences?
RQ3. How does tactile pose and motion guidance perform in a guided selection and manipulation task?
In this paper, we assess each RQ through a respective user
study. In study 1, we measure the effects of vibration on localization/differentiation, which informed study 2, which looks into the interpretation of tactile cues on pose and motion changes, while analyzing user preference for patterns. Study 3 takes the main user preferences and uses a Wizard-of-Oz methodology to assess the cues in a simulated selection and manipulation task, where we measured the effectiveness of operator-controlled cues. This study is designed to illustrate cue potential in real application scenarios.
1.5 Contributions
In this paper, we present the design, implementation and validation
of a tactile pose and motion guidance system, TactaGuide, which
is a vibrotactile glove and arm sleeve interface. We show that our new guidance methods afford fine hand motion and pose guidance,
which supports selection and manipulation actions in 3D user interfaces. We go beyond the state of the art that mainly focused on vibrotactile cues for body and arm motions [ ], or general poses [ ]. In that, we extend previous work to fine hand manipulation actions through a set of vibrotactile cues provided via TactaGuide, through the following findings:
Localization and differentiation: we show that tactors can be well localized at different hand and arm locations and illustrate that simultaneous vibration works best. We also show that the back of the hand (normally used infrequently) scored as well as the index finger, and is a useful location for contact-driven feedback.
Pattern interpretation: based on the biomechanical constraints of various hand/arm parts, we illustrate that most users successfully match patterns to the right motion or bodily reconfiguration.
Selection and manipulation guidance: through a Wizard-of-Oz experiment we show that vibration patterns support finer-grained 3D selection and manipulation tasks, confirming the validity of our approach.
We deliberately performed all studies in the absence of visual cues to reliably identify the effect of tactile guidance in isolation, with an eye towards eyes-free interaction scenarios. We reflect on the potential for combinations of visual and tactile patterns for guidance in the discussion section.
2 Related Work
In this section, we outline work in related areas.
Haptic feedback for 3D interaction has been explored for many years, though is still limited by the need for good cue integration and control [ ], cross-modal effects [ ], limitations in actuation range [ ], and fidelity issues [ ]. The majority of force feedback devices provide feedback through a grounded (tethered) device. These devices are often placed on a table and generally make use of an actuated pen that is grasped by the fingertips, instead of full hand operation, e.g., [ ]. In contrast, glove or exoskeleton interfaces can provide feedback such as grasping forces and enable natural movement during haptic interactions [ ]. Few haptic devices provide feedback for the full hand. An example is the CyberGrasp (CyberGlove Systems), a robot-arm actuated glove system that can provide haptic feedback to individual fingers. Tactile methods afford more flexibility by removing the physical restrictions imposed by the actuated (pen-)arm or exoskeleton construction. However, they can be limited as haptic cues have to be “translated” within the somatosensory system [ ]. While substituted cues have been found to be a powerful alternative [ ], they can never communicate all sensory aspects. In 3D applications, research has mostly revolved around smaller tactile actuators that are hand-held, e.g., [ ], or glove-based, e.g., [ ]. Some work has explored the usage of a dense vibrotactor grid at or in the hand, e.g., [ ], which is related to our glove design.
Some systems provide guidance cues to trigger body motions and rotations. Most approaches focus on corrective feedback with varying degrees of freedom. The majority of systems focus on some form of motor learning, which may be coupled with visual instructions of the motion pattern [ ]. Effective motion patterns have yet to be found, as illustrated by the variety of patterns in the different studies [ ]. However, one common insight is that the spatial location of vibrations naturally conveys the body part the user should move and that saltation patterns are naturally interpreted as directional information [ ]. Such saltation patterns are a sequence of properly spaced and timed tactile pulses from the region of the first contactor to that of the last, allowing for good directionality perception [ ]. Yet, there is no conclusive answer for rotation patterns. Researchers have provided cues at arms, legs and the torso [ ] to train full-body poses that, for example, help with specific sports like snowboarding [ ]. Research has also focused specifically on guiding arm motions [ ] in 3D environments. Further variants of this work look at arm [ ] or wrist rotation [ ] for more general applications. All these methods target only general motions and are not particularly useful for hand pose and motion guidance for 3D selection and manipulation. In contrast, other systems use electromuscular stimulation (EMS) to control hand and arm motions to produce finer motions and poses [ ]. The most closely related work looked at triggering muscular actions at the hand and arm via EMS [ ]. Yet, EMS systems are awkward to use, and often have limited usage duration or user acceptance. Also, receptors or muscles may get damaged through use of EMS [39].
For hand guidance, the usage of proximity models to improve spatial awareness around the body to indirectly trigger hand motion and pose adaptations is another related area. Some researchers have explored proximity cues with a haptic mouse [ ], the usage of proximity to trigger actions [ ], and auditory feedback for collision avoidance [1].
Extending the state of the art, we introduce a novel set of vibrotactile cues that can guide hand motion and pose configurations that have high relevance for 3D selection and manipulation.
We provide tactile feedback through our new TactaGuide system, a vibrotactile glove and arm sleeve (Fig. 2). The device affords a full arm motion operation range, tracked by a Leap Motion. Both glove and sleeve are made of stretchable eco-cotton that is comfortable to wear. In the glove, tactors are placed at the fingertips (5 tactors), inner hand palm (7), middle phalanges (5), and the back of the hand (4), for a total of 21 tactors (Fig. 2). Cables are held in place through a 3D printed plate embedded in the fabric on top of the wrist. The arm sleeve consists of 6 tactors, positioned to form a 3D coordinate system “through” the arm. We use 8-mm Precision Microdrive coin vibration motors (model 308-100). All tactors are driven by Arduino boards. To overcome limitations in motor response caused by inertia (tactors can take up to ~75 ms to start), we use pulse overdrive [ ] to reduce the latency by about 25 ms. After that, pulse width modulation (PWM) is used to reduce the duty cycle to the desired ratio under consideration of the corresponding tactor balancing (Fig. 2) to generate different tactile patterns. The system was previously used for another purpose, namely proximity feedback [ ], where we showed that proximity cues in combination with collision and friction cues can significantly improve performance.
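The overdrive-then-PWM drive scheme can be sketched as follows. This is a minimal Python simulation, not the actual Arduino firmware: only the ~25 ms overdrive pulse is taken from the text, while the 5 ms tick, the function name, and the example duty and balance values are illustrative assumptions.

```python
def drive_levels(duration_ms, duty, balance=1.0, overdrive_ms=25, tick_ms=5):
    """Per-tick PWM levels (0.0-1.0) for one tactor activation.

    The coin motor is first overdriven at full power to overcome rotor
    inertia, then pulse-width modulated at the requested duty cycle,
    scaled by the tactor's balancing factor (cf. Fig. 2).
    """
    levels = []
    for t in range(0, duration_ms, tick_ms):
        if t < overdrive_ms:
            levels.append(1.0)             # overdrive pulse at full power
        else:
            levels.append(duty * balance)  # steady-state, balanced PWM level
    return levels
```

For example, drive_levels(100, 0.6, balance=0.8) yields full power for the first 25 ms and a balanced level of 0.48 afterwards.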
Figure 2: Tactor IDs and balancing of TactaGuide glove, based on pilot study results. The tactors at the arm sleeve were un-
Many selection and manipulation tasks depend on fine control over hand motion and poses. However, in complex 3D scenes, such motor actions may be difficult to plan and coordinate. For example, consider training the hand to move behind an object, to grasp a small and occluded object (or part). While adjusting the visualization may solve some issues – x-ray visualization has been used to look “through” an occluding object [ ] – the associated visual ambiguities can make performing the task challenging. To overcome such visual limitations, we assume that tactile cues are valuable to guide hand motion and poses. Inspired by related work, e.g., [ ], the basic premise of our hand motion and pose guidance system is centered around providing various pattern stimuli – activating tactors in a specific region in a specific sequence – using a specific vibration mode (Fig. 1 and 4). Previous work indicates that such patterns are well interpretable by the user, while cue location and directionality inform the user about the specific body part or joint that should be actuated [ ]. These cues can be triggered independent of contact events, i.e., events that relate to touching an object. For example, stimulating three tactors in a serial manner from hand palm to fingertip may indicate to the user that they should stretch that finger (Fig. 1C). Similarly, a forward pattern over the arm may indicate the arm needs to be moved forward (Fig. 1D). Further details on the patterns are discussed in Section 4.2. By focusing on motion and pose adjustment for selection and manipulation, which requires finer control over hand and fingers, we extend previous work [ ] that focused only on arm or wrist rotation. Our target actions are closer to EMS-based work [50], though without their aforementioned limitations.
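The directional serial patterns described above can be illustrated with a small scheduling sketch. The tactor labels, the 200 ms pulse and the 50 ms gap are hypothetical values for illustration, not the timings used in TactaGuide:

```python
def serial_pattern(tactor_ids, pulse_ms=200, gap_ms=50, reverse=False):
    """Schedule a serial pattern as (tactor_id, onset_ms, pulse_ms) tuples.

    Pulsing tactors in sequence, e.g. palm -> fingertip, conveys a
    direction; reversing the order flips the indicated action
    (e.g. stretching vs. bending a finger).
    """
    ids = list(reversed(tactor_ids)) if reverse else list(tactor_ids)
    schedule, onset = [], 0
    for tid in ids:
        schedule.append((tid, onset, pulse_ms))
        onset += pulse_ms + gap_ms  # next pulse starts after pulse + gap
    return schedule
```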
We looked closely at the different actions undertaken by the hand during 3D selection and manipulation. Each of these actions is generally associated with a specific hand or arm region. The different posture/motion actions refer to fundamental hand movements (Fig. 1) and thus to biomechanical actions that involve various joint/muscle activations:
Radial/ulnar deviation: turning of the hand (yaw).
Pronation/supination: rotation of the hand (roll).
Move: arm movement to move the hand in the scene, including abduction and adduction (moving the arm up and down), forward/backward and left/right motion afforded by the arm lever system.
Finger flexion/extension: bending or straightening of fingers to pinch or grasp an object.
While exion and extension can also refer to orienting the hand
around the wrist (pitch), we did not support this motion in our
work, as it is used infrequently in the frame of selection and ma-
nipulation tasks. For ngers, we use dierent patterns for closing
(palm to ngertip vibration) and opening gestures (ngertip to palm
vibration), while hand rotations simply involve directional patterns.
With respect to arm movement, the arm is a biomechanical lever
system as bones and muscles form levers in the body to create
human movement – joints form the axes, and the muscles crossing
the joints apply the force to move the arm.
Based on ease of detection of location, direction, and guidance interpretation (which hand motion or pose change does the pattern depict?), we implemented three different vibration modes, which we then assessed in our user studies. The location of a stimulus guides the biomechanical action: e.g., when a finger needs to be bent, the vibration pattern is provided at the finger [ ]. The three modes were continuous (a continuous vibration stimulus), stutter (a pulsed vibration stimulus), and mixed (a mixture of both). We assumed that the stutter at the end of the mixed mode pattern could indicate direction. Prior to the studies, we performed a pilot study, where we verified stimuli with 5 users and fine-tuned the system.
Pose and motion guidance was examined in three studies, 1, 2 and 3, which investigated how well different vibration patterns and modes trigger hand pose and motion changes, to potentially guide the design of haptic selection and manipulation techniques. These studies were designed to show whether hand pose and motion guidance is possible in principle, and to investigate its potential and limitations. As noted before, we deliberately did so independently of visual cues, to avoid confounds or constraints imposed by such cues.
Dierent user samples were recruited for each study. In each
study users wore the complete TactaGuide glove and arm sleeve
setup. Post-hoc questionnaires for each study were composed of 7-
point Likert items (0 = “fully disagree” to 6 = “fully agree”), related to
mental demand, comfort, usability, and also task-specic perceptual
issues. Users were seated at a desk and could rest their elbow on
the armrest of a chair in study 1 and 2, while vibrotactor locations
(IDs) were shown on a 27" desktop screen. In study 1 we examined
if and to what extent our glove enables users to accurately localize
tactile feedback and their ability to discriminate between dierent
tactors. Study 2 focused on the user’s interpretation of vibration
patterns into assuming hand poses and performing motions. In
study 3, the user’s hand pose and motion were guided through
vibration patterns that were chosen on the basis of the previous studies. Study 3 deployed a Wizard-of-Oz methodology to overcome finger tracking limitations associated with the Leap Motion, which cannot reliably detect the hand once it is rotated vertically. Yet, this pose is required for many grasping actions.
4.1 Study 1 - Tactor localization and differentiation
This study focused on the ability of users to locate and differentiate between tactors, to ensure that users can detect the actual region
that receives biomechanical actuation. As higher-resolution tactile gloves are scarce, there is no information in the literature about the detectability of individual tactor locations (stimuli), especially with respect to our particular locations on the TactaGuide glove. Also, while sensitivity is well studied for the inside of the hand, sensitivity at the back of the hand has hardly been studied [23].
In task 1, participants were asked to locate a single actuated
tactor. A within-subjects 2 x 2 factorial design was employed to study the effects of the factors feedback mode (stutter, continuous) and hand pose (straight, fist) on feedback localization performance (mean hits per trial). Vibration feedback was provided at all 21 different hand locations of the TactaGuide glove, resulting in 84 trials. The two feedback modes were also compared at 6 locations on the wrist, resulting in 12 additional trials. The total of 96 trials was randomly presented. Participants were informed that only a single tactor provided feedback at any given time. In each trial feedback was provided for 2 seconds, after which the participant selected a tactor (ID) from the overview shown on a desktop monitor showing the hand with tactor locations.
In task 2, combinations of two or three actuated tactors had to be located and differentiated. A 2 x 4 x 7 factorial design was used to study the localization of tactors depending on their number (two or three tactors), feedback mode (simultaneous, continuous; simultaneous, stutter; serial, continuous; serial, stutter) and zone (thumb, index, pinkie, palm, back of the hand, from the back to the inner hand, wrist). Each factor combination was repeated, resulting in 112 trials, presented in randomized order. Before starting the task, participants were informed that either two or three tactors would be actuated. Feedback was always provided for 2 seconds. As in the first task, participants responded with the tactor IDs displayed on the screen. Together, both tasks took around 45 minutes to complete.
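The trial counts above follow directly from the factorial designs; a quick arithmetic check (with placeholder factor labels standing in for the real conditions):

```python
from itertools import product

# Task 1: 21 glove locations x 2 feedback modes x 2 hand poses = 84 trials,
# plus 6 wrist locations x 2 feedback modes = 12 trials, 96 in total.
glove = list(product(range(21), ["stutter", "continuous"], ["straight", "fist"]))
wrist = list(product(range(6), ["stutter", "continuous"]))

# Task 2: 2 tactor counts x 4 feedback modes x 7 zones, each combination
# repeated once, giving 112 trials.
task2 = list(product([2, 3], ["Si-C", "Si-S", "Se-C", "Se-S"], range(7))) * 2

assert len(glove) == 84 and len(wrist) == 12
assert len(glove) + len(wrist) == 96
assert len(task2) == 112
```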
4.1.1 Results. Eight right-handed persons (2 females, mean age 39, SD 15.7, range 25–65 years) volunteered. Six wore glasses or contact lenses and two had normal vision. Within-subjects repeated-measures analysis was used to study task-specific main and interaction effects of factors on dependent measures.
In task 1, a total of 768 trials was analysed. For each trial the actually activated tactor and the participant’s choice were compared, to record a hit if the correct tactor was chosen (1) or a miss if not (0). As expected, the hand pose but not the mode affected the hit rate (hits/trials), which was significantly higher with a straight hand pose (M = 0.82, SE = 0.02) than with a fist (M = 0.69, SE = 0.4), F(1,7) = 13.44, p = .008, ηp² = .66. With a fist, tactors are closer together, making it more difficult to localize a stimulus. In a secondary analysis, tactors were grouped into six zones across which we compared hit rates (thumb; middle fingers: [index, middle, ring]; pinkie; back of the hand; palm; wrist). The zone affected the hit rate, F(5,35) = 6.48, p < .001, ηp² = .48. Post-hoc comparisons showed that only the pinkie with the lowest hit rate (M = 0.61, SE = 0.05) differed significantly from the back of the hand, which had a high hit rate (M = 0.85, SE = 0.03), p = .015.
In task 2, a total of 896 trials was analysed. In this task activated tactors were compared to participants’ responses. Depending on their perception, participants could either name three tactor IDs or they could name fewer than three and state that there were no more activated tactors. We scored a hit for each correctly named tactor and also for correctly stating that no more tactors were activated. That is, the maximum number of hits per trial was always three. Mean hits depended significantly on the stimulated zone, F(6,42) = 2.62, p = .03, ηp² = .27 (see Fig. 3 for mean values and standard errors), the feedback mode, F(3,21) = 10.81, p < .001, ηp² = .61, and its interaction with the number of activated tactors, F(3,21) = 22.98, p < .001, ηp² = .77 (see Table 1 for mean values and standard errors).
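One plausible reading of the scoring rule described above (one hit per correctly named tactor, one hit for correctly stopping, at most three per trial), sketched for clarity; the explicit said_no_more flag is our assumption about how the "no more tactors" statement was recorded:

```python
def score_trial(activated, response, said_no_more):
    """Score one task-2 trial.

    One hit per correctly named tactor, plus one hit for correctly
    stating that no further tactor was activated; capped at three.
    """
    hits = sum(1 for tid in response if tid in set(activated))
    # the 'no more tactors' claim is only correct if every activated
    # tactor was already named
    if said_no_more and set(activated) <= set(response):
        hits += 1
    return min(hits, 3)
```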
A post-hoc test showed that the mean number of hits was higher when feedback was provided at the back of the hand compared to the thumb, the pinkie, and the palm. Performance on the back of the hand was also marginally better than for feedback transitioning from the back to the inner hand (p = .058). There were also more hits when feedback was provided at the index finger than at the palm (p = .048). In trials with two activated tactors and for both simultaneous feedback modes, participants got more hits compared to both serial activations (p < .01). When three tactors were activated, these differences became non-significant.
Figure 3: Study 1, task 2 (tactor localization and differentiation): mean number of hits per trial by stimulated zone with standard errors (SE), hit range = [0;3].
Table 1: Study 1, task 2 (tactor localization and differentiation): mean hits per trial by number of activated tactors and feedback mode with standard errors (SE), hit range = [0;3].

Number of tactors | Feedback mode | Mean (SE)
2 | Si-C | 2.33 (0.09)*
2 | Si-S | 2.45 (0.09)*
2 | Se-C | 1.72 (0.08)
2 | Se-S | 1.87 (0.09)
3 | Si-C | 1.89 (0.08)
3 | Si-S | 1.94 (0.09)
3 | Se-C | 2.15 (0.16)
3 | Se-S | 2.05 (0.14)
Si = Simultaneous, Se = Serial, C = Continuous, S = Stutter
Performance was best at the index finger and the back of the hand. While the mean differences between zones were statistically significant, they were relatively small (up to 0.23 = 8% of the maximum score). This outcome might be related to the distribution and sensitivity of mechanoreceptors in glabrous skin [ ], where the density of low-threshold mechanoreceptive units at the fingers is in principle higher than in the palm. Therefore, vibrations are in general harder to differentiate inside the palm, especially in the case of adjacent, nearby located tactors. Simultaneous activations led to better performance compared to serial continuous activation when two tactors vibrated. Mean differences ranged from 0.46 to 0.72 (= 15% to 24% of the maximum score). However, when three tactors were activated, participants generally achieved a good hit rate for serial feedback, as they correctly identified two out of three tactors on average. There was no interaction effect between feedback mode and stimulated region; that is, the optimum feedback mode was not region specific.
4.2 Study 2 - Pattern interpretation and preferences
We explored motion interpretation and preferences in this observational study in two different tasks. In task 1 we focused on how users would interpret a certain trigger (pattern + mode) by adjusting their hand pose or motion, while task 2 investigated which vibration mode was preferred for a stated hand pose or motion change.
For task 1 of study 2, feedback was provided at the same six hand zones as in the second task of study 1 (localization and differentiation), as well as at the wrist and at an additional hand zone that includes the thumb and index. A specific feedback pattern with varying numbers of involved tactors, depending on the zone, was provided (see Table 2). We actuated the tactor vibrations serially in three modes: stutter, continuous and a mixed mode (see Fig. 4).
Figure 4: Activation sequence of the different feedback modes, using the example of a finger pointing motion (index finger) with three involved tactors.
In mixed mode, the first tactor(s) was in continuous mode, while the last one was stuttering. Unlike study 1, simultaneous feedback modes were not used in study 2, as we provided directional feedback cues through serial activation. Feedback patterns at each zone were provided using zone-specific vectors in two opposite directions (forward/clockwise and backward/counterclockwise), except for the wrist, at which three vectors with opposite directions were provided (forward/backward; up/down; left/right). Feedback was provided and randomized blockwise. Participants completed one block of 36 trials with feedback at six hand zones first (6 regions x 3 modes x 2 directions), followed by 18 trials for the wrist (3 modes x 3 vectors x 2 directions) and finally 6 trials involving the thumb and index at the same time (3 modes x 2 directions), for a total of 60 trials per participant. Participants were told to change their hand pose in a way that they felt matched the provided pattern best. The starting pose for each trial was resting the elbow on the armrest of a chair while the hand was hanging down in a relaxed manner (i.e., a pose between a fist and a fully stretched hand gesture). No further instructions were given and users could choose their movements and gestures freely. The experimenter recorded the resulting motions.
For task 2, the zone-specific feedback patterns and directions were the same as in task 1. We pre-defined specific hand poses for each zone-specific feedback pattern and direction, see Table 2. In each trial, the experimenter first demonstrated which movement or hand pose should be initiated by the feedback that followed. Then the corresponding feedback was provided in three different modes (continuous, stutter, mixed), presented in randomized order. The modes were not examined as a factor but functioned as response options: that is, the user had to choose which cue was most suitable for initiating the previously shown movement or pose. The suitability of the feedback for the respective movement/pose was also rated on a 7-point Likert scale (6 being “totally suitable”). As in task 1, six hand zones, the wrist, and the zone including thumb and index were tested and randomized blockwise. With one repetition, 24 trials were presented for the six hand zones (6 zones x 2 directions x 2 repetitions), 18 trials for the wrist (2 repetitions x 3 vectors x 3 directions) and 4 trials for the thumb and index zone (2 repetitions x 2 directions), resulting in 46 trials. The experimenter recorded the choice of mode and suitability rating for each trial.
Eight participants (7 right-handed, 2 females, mean age 29.6, SD
5.3, with a range of 23–40 years) volunteered. Three wore glasses
or contact lenses and five had normal vision.
4.2.1 Results. For task 1, 480 trials were analysed. All feedback-
dependent interpretations were listed and counted if they occurred
suciently often, that is, were used by at least three of the partici-
pants. When feedback was provided at the thumb, the back of the
hand, palm, or wrist, resulting movements were diverse for each
feedback direction and mode and no coherent movement/gesture
could be observed. Feedback provided once at the index, pinkie
and at thumb and index, or repeatedly from the back to the inner
hand resulted in “successful” movements/gestures, which corre-
spond to our interpretation of the respective feedback. That is,
forward/backward feedback at the index and pinkie resulted in
stretching/bending respective fingers, simultaneous feedback at the
thumb and index was interpreted as pinch movement and feedback
provided from the back to the inner hand resulted in supinations.
For task 2, 384 trials were analyzed. Mode preferences for hand
and wrist were analyzed separately, as three instead of two direc-
tional vectors were used for the wrist. For each participant and
factor combination we calculated how many times each mode was
preferred. With one repetition each mode could maximally be pre-
ferred two times for a given combination. Generally, the continuous
Tactile Hand Motion and Pose Guidance for 3D Interaction VRST ’18, November 28-December 1, 2018, Tokyo, Japan
Table 2: Study 2, task 1 (pattern interpretation) and 2 (preference): Pre-defined hand movements depending on zone, activated tactors and feedback direction. The + symbol indicates simultaneous activation of concatenated numbers.

Zone                     | IDs of activated tactors (see Fig. 2) and order of activation | Movement for tactor activation from left to right / from right to left
Thumb                    | 7, 1, 14       | stretch
Pinkie                   | 6, 5, 10       |
Index                    | 7, 2, 13       |
Thumb and Index          | 7, 1+2, 13+14  | pinch
Hand inner               | 18, 16, 20, 17 | ulnar deviation / radial deviation
Back of the hand         | 8, 7, 6, 9     |
From back to inner hand  | 7, 6, 20, 16   | supination
Wrist                    | 24, 23, 22     | forward
Wrist                    | 26, 23, 25     | right
Wrist                    | 27, 23         | up
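The activation sequences of Table 2 can be encoded directly as ordered tactor groups. The sketch below is illustrative; the zone keys are our own labels, and a tuple groups tactors that fire simultaneously (the "+" notation in the table):

```python
# Tactor activation sequences per zone, as listed in Table 2 (IDs refer to Fig. 2).
# A tuple groups tactors that are activated simultaneously within one step.
SEQUENCES = {
    "thumb":           [(7,), (1,), (14,)],
    "pinkie":          [(6,), (5,), (10,)],
    "index":           [(7,), (2,), (13,)],
    "thumb_and_index": [(7,), (1, 2), (13, 14)],
    "hand_inner":      [(18,), (16,), (20,), (17,)],
    "back_of_hand":    [(8,), (7,), (6,), (9,)],
    "back_to_inner":   [(7,), (6,), (20,), (16,)],
    "wrist_forward":   [(24,), (23,), (22,)],
    "wrist_right":     [(26,), (23,), (25,)],
    "wrist_up":        [(27,), (23,)],
}

def sequence_for(zone, reverse=False):
    """Return the serial activation order for a zone; reversing the
    sequence yields the cue for the opposite direction."""
    seq = SEQUENCES[zone]
    return list(reversed(seq)) if reverse else seq
```

Reversing a sequence gives the backwards/counterclockwise counterpart of each pattern, which is how the two opposite directions per zone were produced.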
mode was preferred at the hand, M = 1.21, SE = 0.1, over the stutter,
M = 0.2, SE = 0.06, p = .001, and mixed mode, M = 0.6, SE = 0.06,
p = 0.18; F(1.15, 8.03) = 30.09, p < .001, ηp² = .81. Nevertheless, this
preference was not consistent across zones as, especially at the back
of the hand and the palm, the mixed mode was chosen more often
than the continuous mode, but not significantly so. At the wrist the
continuous mode was also preferred, F(2, 14) = 8.71, p = .003, ηp² =
.56. Post-hoc comparisons showed that the continuous mode, M =
1.27, SE = 0.19, was significantly superior to stutter vibration, M
= 0.25, SE = 0.1, p = .02. Mode preferences in percent by zone are
listed in Figure 5.
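The reported partial eta squared effect sizes follow directly from the F statistics and their degrees of freedom via the standard identity ηp² = F·df_effect / (F·df_effect + df_error); a minimal check:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Recover partial eta squared from a repeated-measures ANOVA F statistic:
    eta_p^2 = F * df_effect / (F * df_effect + df_error)."""
    return f_value * df_effect / (f_value * df_effect + df_error)

# Hand zones (Greenhouse-Geisser corrected dfs): F(1.15, 8.03) = 30.09 -> ~.81
# Wrist: F(2, 14) = 8.71 -> ~.55-.56
```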
Figure 5: Task 2: Vibration feedback mode preferences by
zone in percent. (Zones: Thumb, Index, Pinkie, Palm, Back of
the hand, Back to the inner hand, Wrist, Pinch; modes:
Continuous, Stutter, Mixed.)
The direction (at hand zones and wrist) and the vector (at the
wrist) did not aect mode preference. Suitability ratings were gen-
erally slightly positive, while feedback patterns that were provided
on the wrist to trigger up/down and left/right movements got more
neutral ratings.
Results from task 1 indicate that, in principle, patterns can be
interpreted reasonably well, i.e., users did perform the intended
main action. However, the interpretation of direction was often
an issue. Most likely, the generally good detection of the main
action can be attributed to the biomechanical limitations and prime
actions of hand and fingers, e.g., fingers are mainly bent, not
rotated. Still, as we did not inform users what kind of action a pattern
could potentially trigger, they had little possibility of learning a
pattern. For task 2, it is not clear why the mixed mode was preferred
for some areas. One possible explanation is that both areas
(inner, back of hand) are quite flat, and exhibit different mechanical
properties compared to, for example, the fingers. Suitability ratings
indicated that feedback patterns used at the hand zones and wrist
are generally appropriate for guidance.
4.3 Study 3 - Hand pose and motion guidance
Based on the outcomes of the first two studies (1 and 2), we performed
a Wizard-of-Oz [ ] study to assess the cues for controlling
finer-grained hand selection and manipulation actions. We deliberately
chose a Wizard-of-Oz methodology to overcome some of
the evident limitations of the hand tracking system we used (Leap
Motion), which cannot track fingers precisely when the hand is
held vertically, due to the occlusion of the fingers in the camera
image. This study investigated user performance in six selection
and manipulation tasks that cover hand pose changes and hand
motions. Grids were used to control and measure performance on
the horizontal and vertical plane, with 25 x 16 grid fields on each
plane and a grid field size of 2 x 2 cm, see Fig. 6.
Figure 6: Apparatus for Study 3, showing the measurement
grids used for observing performance in the tasks.
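The grid geometry can be captured in a few lines, e.g., to convert deviations measured in grid fields into centimetres. A minimal sketch; the paper reports deviations in fields without stating the distance metric, so the Euclidean distance used here is our assumption:

```python
import math

FIELD_SIZE_CM = 2.0            # each grid field is 2 x 2 cm
GRID_COLS, GRID_ROWS = 25, 16  # fields per plane (horizontal and vertical)

def field_deviation(target, reached):
    """Deviation between two (col, row) grid fields, in field units.
    Euclidean distance is an assumption; the metric is not given in the text."""
    (tc, tr), (rc, rr) = target, reached
    return math.hypot(rc - tc, rr - tr)

def to_cm(fields):
    """Convert a deviation in field units to centimetres."""
    return fields * FIELD_SIZE_CM

# e.g. a mean deviation of 1.88 fields corresponds to roughly 3.8 cm
```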
The six tasks involved 1) moving the hand to a specific field in
straight horizontal directions on the grid and 2) on the vertical
plane using the shortest path, 3) performing supination/pronation,
4) radial/ulnar deviation, 5) pointing and 6) grasping one of four
wooden blocks that were arranged on the horizontal plane in a
2 x 2 matrix. We included pointing in addition to selection and
manipulation, as it is often used for cohesion in training tasks. To
trigger actions we applied a pattern that we also used in study 2 and
that corresponds to a pre-defined motion, see Table 2: pinching was
used to grasp blocks. We decided to use the continuous vibration
mode as it was preferred overall in study 2. Before starting the
actual experiment, participants received a 5-minute training session
to learn the association between vibration feedback patterns and
corresponding actions.
Each participant performed the six tasks in random order. The
experimenter acted as operator, with an overview of the tasks
and their order, and "controlled" each action of the participant
step by step, using a visual interface to trigger the predefined
patterns. The operator started and stopped the specific feedback that
was required for the respective task. False movements were not
corrected, that is, if the user's hand moved too far, the operator
provided feedback as if the hand was at the correct position. After
a task was finished, an observer (assistant of the experimenter) who
was not aware of the targeted position and who could only see
the participant recorded the final position of the hand, noted any
further observations, and took pictures. After having finished all six
tasks, the participant started a new trial that required him/her to do
the six tasks again in a random order. All tasks were the same for
the second time, except for grasping the block. When participants
encountered the task for the first time, blocks had a distance of two
fields between each other. The second time around, the difficulty
was raised by reducing the distance to one field. Study 3 was video-
recorded with permission of the users. After having completed the
study, participants rated feedback perception, task easiness, needed
concentration, ease of remembering movements/gestures that
correspond to a respective feedback, suitability of feedback, and their
performance. Eight right-handed participants (2 females, mean age
35.8, SD 16.4, with a range of 23–65 years) volunteered.
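As an illustration only (the paper does not describe the operator interface in code), the start/stop trigger logic used by the Wizard-of-Oz operator could look as follows; the class and pattern names are hypothetical:

```python
# Hypothetical operator console for the Wizard-of-Oz procedure: the operator
# starts and stops a predefined pattern; actual tactor driving is stubbed out.
class OperatorConsole:
    def __init__(self, patterns):
        self.patterns = patterns  # e.g. the Table 2 activation sequences
        self.active = None        # name of the currently running pattern

    def start(self, name):
        """Begin a pattern; a real system would stream it to the glove tactors."""
        self.active = name
        return self.patterns[name]

    def stop(self):
        """End the running pattern, e.g. once the hand reaches the target."""
        name, self.active = self.active, None
        return name

console = OperatorConsole({"pinch": [(7,), (1, 2), (13, 14)]})
console.start("pinch")  # operator triggers the grasp cue
console.stop()          # feedback ends when the pose is reached
```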
4.3.1 Results. For study 3, we analysed 48 trials. The comparison
of the targeted and the actually reached grid field showed that
participants could be guided quite precisely to a specific grid field
on the horizontal plane. In the first trial, the reached field had only
an average deviation of M = 1.88, SD = 1.36 fields from the targeted
one, and M = 2.25, SD = 2.05 in the second trial. Deviations on the
vertical plane were even smaller: M = 0.88, SD = 0.35 in the first
and M = 0.63, SD = 1.06 in the second trial. Pointing and grasping
the blocks at the two difficulty levels was always successful.
Nevertheless, participants sometimes confused radial/ulnar deviation
with supination/pronation, radial with ulnar deviation, and
up/down with left/right feedback. Participants' ratings were
compared between different tasks. Generally, all ratings were positive,
especially concerning pointing and grasping. While ratings for the
tasks that targeted supination/pronation, radial/ulnar deviation and
moving the arm around were slightly positive, ratings
for pointing and grasping were strongly positive. Suitability ratings
for moving the arm up/down and left/right were also slightly
positive and higher than in study 1b. Grasping and pointing required
even less concentration than the other tasks, and the assignment of
the vibration feedback to the movement/gesture seems to have been
easier to remember. Participants thought they performed better in
pointing and grasping than in the other tasks and that the pattern
initiating pointing and grasping fitted "better" compared to other
patterns. Overall comfort and usability ratings are listed in Table
3. In general, ratings are rather positive; only the cable seemed to
have disrupted users slightly, which could be due to the weight of
the cable, as users also felt somewhat exhausted after wearing the
glove for some time.
Table 3: Overall comfort and usability ratings for Study 3.
Statement Mean (SD)
Glove wearing comfort 5 (1.2)
Sitting comfort 5.13 (2.03)
No disruption through the cable 3.88 (2.17)
Noticeability of vibrations 4.5 (0.93)
Not exhausted 3.75 (2.38)
Ease of learning the system 5.5 (0.03)
Ease of using the system 4.88 (1.13)
Expected improvement through exercise 6.5 (0.54)
While results are generally encouraging, hand rotation guidance
was not followed reliably. As noted below in the discussion section,
based on previous work [
], we can assume that the combination
of tactile and (non-ambiguous) visual cues could address this and
further improve performance.
Here, we discuss our findings with regard to the research questions.
RQ1. How well can tactors be localized and differentiated across
the hand and lower arm?
We showed that users can localize and differentiate cues reasonably
well. Especially interesting is the good performance of cues at
the back of the hand, which performed about as well as the index
finger (which is highly sensitive, in contrast to the back of the hand).
This result is useful as the back of the hand can also be used for
other purposes, like the provision of touch-driven events that can
be coupled to guidance, e.g., touching a wall with the back of the
hand while moving a grasped object.
RQ2. How do users interpret tactile pose and motion patterns and
what are their preferences?
While tactors could be localized well, the interpretation of more
complex stimuli – in particular direction – was not without errors.
For several reasons, this is not surprising. First, a previous study
also found that users interpret some patterns as either push or
pull motions [ ]. That is, the direction a pattern refers to may
be interpreted differently by different users. While recognition of
the dominant biomechanical action (e.g., flexion of the finger, or
rotation of the hand) was reasonably high, we assume that
personalizing patterns will result in a higher percentage of correct
motions. In our study we observed that the efficiency of our system
likely improves with learning. This means that over time, users
will likely be able to interpret the patterns more easily and reliably.
Previous work already noted that the level of abstraction likely
influences learning rates [ ], with lower abstraction resulting in
quicker learning. Here, we assume that our guidance patterns are
at a medium to low abstraction level, as patterns are (a) easily
localizable and (b) have good directional information that can be
associated with a dominant biomechanical action. Learning will likely
also be required to separate different types of feedback. Currently,
we did not focus on touch cues, which would involve vibration
at specific contact points. Depending on the context of operation,
such vibration could be misunderstood, especially in cases where
the user touches an object while receiving a pose change guidance
pattern. While we could use different vibration modes to encode
different events, the ability of users to actually differentiate among
them requires further study, especially if we want to integrate pose
and motion guidance methods with haptically supported selection
and manipulation techniques.
RQ3. How does tactile pose and motion guidance perform in a
guided selection and manipulation task?
With respect to hand guidance, we showed that our guidance
methods can trigger motions and poses that can support finer-
grained 3D selection and manipulation, independent of touch cues
that may normally drive hand guidance. Our results extend previous
tactile methods that only support general motion and pose guidance
[ ], while our granularity is similar to EMS-based methods
[ ], but without their disadvantages. Furthermore, while our
current patterns only triggered start-to-end motions, e.g., to move
the finger from a stretched to a bent configuration, guidance to
intermediate stages is possible by running the pattern as long as
needed. We might also use the strength of the feedback to provide
a further indication about when to stop a motion, e.g., by making
the feedback proportional to the error being made [14].
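The error-proportional feedback idea could be sketched as follows. This is a minimal sketch: the linear mapping, the clamping, and the minimum perceivable level are our illustrative assumptions, not values or an implementation from the paper:

```python
def feedback_strength(error, max_error, floor=0.2):
    """Scale vibration strength with the remaining pose error, so that the
    cue fades out as the target pose is reached (cf. the suggestion in the
    text around [14]). `floor` is an assumed minimum perceivable level."""
    if error <= 0:
        return 0.0                       # at or past the target: no feedback
    e = min(error / max_error, 1.0)      # normalize and clamp the error
    return floor + (1.0 - floor) * e     # linear ramp from floor to full strength
```

Driving the tactor amplitude with such a function would let the user feel when to slow down and stop a motion, complementing the start-to-end patterns described above.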
In all studies, we decoupled tactile cues from visual feedback. We
deliberately did not use visualization aids, to isolate the performance
of our tactile guidance method without interference from any
given visualization method. However, related work, e.g., [ ], has
established that cross-modal feedback, such as the combination
of visual and haptic cues, may also reduce error rates. We assume
that tactile guidance methods can be visually enhanced to reduce
ambiguities, based on visual and haptic stimuli integration theories
[ ]. The challenge is to do this in an unambiguous manner and
to avoid visual conflicts. An example of visualization techniques
that can help in this respect are see-through visualization techniques
[ ], like transparency or cut-away. While cut-away techniques
may limit spatial understanding, as inter-object spatial relationships
may be more difficult to understand (as objects are not rendered),
transparency has been shown to maintain a reasonable level of
spatial understanding [ ]. Such visualization could be combined
with feedback co-located with the hand (instead of embedded in
the scene) that provides motion and pose guidance. We assume
co-located feedback – for example, overlaying a second hand/finger
animation over the virtual hand to provide guidance –
will likely have a higher success rate, avoiding ambiguity issues.
However, this requires further study. Furthermore, coupling
feedback in multiple modalities may increase cognitive load [ ].
In our case, the somatosensory system performs complex processes
involving multiple brain areas to interpret the haptic cues, while
cognitive load can vary based on different haptic properties [ ].
Still, cognitive load likely decreases through learning [ ], an issue
we plan to follow up in future work.
We presented a novel tactile approach to improve hand motor
planning and action coordination in complex spatial 3D applications,
by guiding hand motion and poses. Such guidance can be highly
useful for 3D interaction, especially for applications that suffer
from visual occlusions. Extending previous work on tactile cues
that only worked on more general body motions, we showed that
finer-grained pose and motion adjustments can be triggered. While
learning and visual cues are expected to further improve
performance, e.g., by reducing some interpretation errors, the results of
our user studies already provide a solid basis for implementing
tailored 3D selection and manipulation techniques that can be used
in the frame of applications that require fine motor control, such
as assembly training.
Future work includes the full integration of guidance methods into
3D applications and their study. An important next step is the
coupling of tactile guidance with hand co-located visual cues, which
will likely lead to further improvements and a better understanding
of the full potential of guidance support methods. We also want to
investigate task chain variations to see when and how guidance
feedback affects performance in complex situations, including tactile
cues for guidance and touch-related events (collision, friction)
in combination with visual feedback. For real training applications,
guidance methods need to be coupled to behavior and ideal path
analysis to dynamically guide users through, for example, training
scenarios. To address the finger detection problems with a single
sensor, hand tracking must be improved, e.g., via a multi-sensor
setup [ ]. Finally, we would like to point out that, due to the independence
from visual cues, our system can be used in other domains, such as
guiding visually-disabled people [31].
This work was partially supported by the Deutsche Forschungsge-
meinschaft (KR 4521/2-1) and the Volkswagen Foundation through
a Lichtenbergprofessorship.
C. Afonso and S. Beckhaus. 2011. How to Not Hit a Virtual Wall: Aural Spatial
Awareness for Collision Avoidance in Virtual Environments. In Proceedings of
the 6th Audio Mostly Conference: A Conference on Interaction with Sound (AM ’11).
ACM, 101–108.
R. B. Ammons. 1956. Effects of Knowledge of Performance: A Survey and Tentative
Theoretical Formulation. The Journal of General Psychology 54, 2 (1956),
F. Argelaguet, A. Kulik, A. Kunert, C. Andujar, and B. Froehlich. 2011. See-through
techniques for referential awareness in collaborative virtual reality. International
Journal of Human-Computer Studies 69, 6 (2011), 387–400.
R. Bane and T. Hollerer. 2004. Interactive Tools for Virtual X-Ray Vision in Mobile
Augmented Reality. In Proceedings of the 3rd IEEE/ACM International Symposium
on Mixed and Augmented Reality (ISMAR ’04). IEEE, 231–239.
K. Bark, P. Khanna, R. Irwin, P. Kapur, S. A. Jax, L. Buxbaum, and K. Kuchenbecker.
2011. Lessons in using vibrotactile feedback to guide fast arm motions. In World
Haptics Conference (WHC), 2011 IEEE. IEEE, 355–360.
P. W. Battaglia, M. Di Luca, M. Ernst, P. R. Schrater, T. Machulla, and D. Kersten.
2010. Within- and Cross-Modal Distance Information Disambiguate Visual Size-
Change Perception. PLOS Computational Biology 6, 3 (03 2010), 1–10.
S. Beckhaus, F. Ritter, and T. Strothotte. 2000. CubicalPath – dynamic potential
fields for guided exploration in virtual environments. In Proceedings of the Eighth
Pacific Conference on Computer Graphics and Applications. 387–459.
H. Benko, C. Holz, M. Sinclair, and E. Ofek. 2016. NormalTouch and TextureTouch:
High-delity 3D Haptic Shape Rendering on Handheld Virtual Reality Controllers.
In Proceedings of the 29th Annual Symposium on User Interface Software and
Technology (UIST ’16). ACM, 717–728.
VRST ’18, November 28-December 1, 2018, Tokyo, Japan Marquardt et al.
J. Blake and H. B. Gurocak. 2009. Haptic Glove With MR Brakes for Virtual
Reality. IEEE/ASME Transactions on Mechatronics 14, 5 (2009), 606–615.
A. Bloomeld and N. Badler. 2008. Virtual training via vibrotactile arrays. Presence:
Teleoperators and Virtual Environments 17, 2 (2008), 103–120.
A. Bloomeld, Y. Deng, J. Wampler, P. Rondot, M. Harth, D.and McManus, and
N. Badler. 2003. A taxonomy and comparison of haptic actions for disassembly
tasks. In Virtual Reality, 2003. Proceedings. IEEE. IEEE, 225–231.
G. C. Burdea. 1996. Force and Touch Feedback for Virtual Reality. John Wiley &
Sons, Inc.
[13] C. Chen, Y. Chen, Y. Chung, and N. Yu. 2016. Motion Guidance Sleeve: Guiding
the Forearm Rotation Through External Artificial Muscles. In Proceedings of the
2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM,
D. Drobny and J. O. Borchers. 2010. Learning basic dance choreographies with
different augmented feedback modalities. In Proceedings of the 28th International
Conference on Human Factors in Computing Systems, CHI 2010, Extended Abstracts
Volume, 2010. 3793–3798.
M. Ernst and M. Banks. 2002. Humans integrate visual and haptic information in
a statistically optimal fashion. Nature 415, 6870 (2002), 429.
P. Gallotti, A. Raposo, and L. Soares. 2011. v-Glove: A 3D Virtual Touch Interface.
In 2011 XIII Symposium on Virtual Reality. 242–251.
U. Gollner, T. Bieling, and G. Joost. 2012. Mobile Lorm Glove: Introducing a
Communication Device for Deaf-blind People. In Proceedings of the Sixth Inter-
national Conference on Tangible, Embedded and Embodied Interaction (TEI ’12).
ACM, 127–130.
P. Green and L. Wei-Haas. 1985. The rapid development of user interfaces:
Experience with the Wizard of Oz method. In Proceedings of the Human Factors
Society Annual Meeting, Vol. 29. SAGE Publications Sage CA: Los Angeles, CA,
C. Hatzfeld and T.A. Kern. 2014. Engineering Haptic Devices: A Beginner’s Guide.
Springer London.
B. Holbert. 2007. Enhanced Targeting in a Haptic User Interface for the Physically
Disabled Using a Force Feedback Mouse. Ph.D. Dissertation. Advisor(s) Huber, M.
H. Jin, Q. Chen, Z. Chen, Y. Hu, and J. Zhang. 2016. Multi-LeapMotion sensor
based demonstration for robotic refine tabletop object manipulation task. CAAI
Transactions on Intelligence Technology 1, 1 (2016), 104 – 113.
R. Johansson and R. Flanagan. 2009. Coding and use of tactile signals from the
fingertips in object manipulation tasks. Nature Reviews Neuroscience 10, 5 (2009),
R. S. Johansson and Å. B. Vallbo. 1979. Tactile sensibility in the human hand: relative
and absolute densities of four types of mechanoreceptive units in glabrous skin.
The Journal of physiology 286, 1 (1979), 283–300.
K. Kaczmarek, J. Webster, P. Bach-y Rita, and W. Tompkins. 1991. Electrotactile
and vibrotactile displays for sensory substitution systems. IEEE Transactions on
Biomedical Engineering 38, 1 (1991), 1–16.
M. Klapdohr, B. Wöldecke, D. Marinos, J. Herder, C. Geiger, and W. Vonolfen.
2010. Vibrotactile Pitfalls: Arm Guidance for Moderators in Virtual TV Studios.
In Proceedings of the 13th International Conference on Humans and Computers (HC
’10). University of Aizu Press, 72–80.
E. Kruij, A. Marquardt, C. Trepkowski, R. W. Lindeman, A. Hinkenjann, J. Maiero,
and B. E. Riecke. 2016. On Your Feet!: Enhancing Vection in Leaning-Based
Interfaces Through Multisensory Stimuli. In Proceedings of the 2016 Symposium
on Spatial User Interaction (SUI ’16). ACM, 149–158.
E. Kruij, A. Marquardt, C. Trepkowski, J. Schild, and A. Hinkenjann. 2017.
Designed Emotions: Challenges and Potential Methodologies for Improving
Multisensory Cues to Enhance User Engagement in Immersive Systems. Vis.
Comput. 33, 4 (April 2017), 471–488.
E. Kruij, K. Wesche, G.and Riege, G. Goebbels, M. Kunstman, and D.Schmalstieg.
2006. Tactylus, a Pen-input Device Exploring Audiotactile Sensory Binding. In
Proceedings of the ACM Symposium on Virtual Reality Software and Technology
(VRST ’06). ACM, 312–315.
J.J. LaViola, E. Kruijff, R.P. McMahan, D. Bowman, and I.P. Poupyrev. 2017. 3D
User Interfaces: Theory and Practice. Pearson Education.
D. Levac and H. Sveistrup. 2014. Motor Learning and Virtual Reality. 25–46.
J. Lieberman and C. Breazeal. 2007. TIKL: Development of a Wearable Vibrotactile
Feedback Suit for Improved Human Motor Learning. (2007).
P. Lopes, D. Yüksel, F. Guimbretière, and P. Baudisch. 2016. Muscle-plotter: An
Interactive System Based on Electrical Muscle Stimulation That Produces Spatial
Output. In Proceedings of the 29th Annual Symposium on User Interface Software
and Technology (UIST ’16). ACM, 207–217.
V. Maheshwari and R. Saraf. 2008. Tactile Devices To Sense Touch on a Par
with a Human Finger. Angewandte Chemie International Edition 47, 41 (2008),
A. Marquardt, E. Kruij, C. Trepkowski, J. Maiero, A. Schwandt, A. Hinkenjann,
W. Stuerzlinger, and J. Schoening. 2018. Audio-Tactile Feedback for Enhancing 3D
Manipulation. In Proceedings of the ACM Symposium on Virtual Reality Software
and Technology (VRST ’18). ACM.
J. Martinez, A. Garcia, M. Oliver, J. P. Molina, and P. Gonzalez. 2016. Identifying
Virtual 3D Geometric Shapes with a Vibrotactile Glove. IEEE Computer Graphics
and Applications 36, 1 (Jan 2016), 42–51.
T. H. Massie, K. Salisbury, et al. 1994. The PHANToM haptic interface: A device
for probing virtual objects. In Proceedings of the ASME winter annual meeting,
symposium on haptic interfaces for virtual environment and teleoperator systems,
Vol. 55. 295–300.
T. McDaniel, D. Villanueva, S. Krishna, and S. Panchanathan. 2010. MOVeMENT:
A framework for systematically mapping vibrotactile stimulations to fundamental
body movements. In Haptic Audio-Visual Environments and Games (HAVE), 2010
IEEE International Symposium on. IEEE, 1–6.
R. P. McMahan, D. A. Bowman, D. J. Zielinski, and R. B. Brady. 2012. Evaluating
display fidelity and interaction fidelity in a virtual reality game. IEEE Transactions
on visualization and computer graphics 18, 4 (2012), 626–633.
K. Nosaka, A. Aldayel, M. Jubeau, and T. C. Chen. 2011. Muscle damage induced
by electrical stimulation. European Journal of Applied Physiology 111, 10 (03 Aug
2011), 2427.
D. Pai. 2005. Multisensory interaction: Real and virtual. In Robotics Research. The
Eleventh International Symposium. Springer, 489–498.
E. Piateski and L. Jones. 2005. Vibrotactile pattern recognition on the arm and
torso. In Eurohaptics Conference, 2005 and Symposium on Haptic Interfaces for
Virtual Environment and Teleoperator Systems, 2005. World Haptics 2005. First Joint.
IEEE, 90–95.
H. Regenbrecht, J. Hauber, R. Schoenfelder, and A. Maegerlein. 2005. Virtual
Reality Aided Assembly with Directional Vibro-tactile Feedback. In Proceedings of
the 3rd International Conference on Computer Graphics and Interactive Techniques
in Australasia and South East Asia (GRAPHITE ’05). ACM, 381–387.
E. Rualdi, A. Filippeschi, A. Frisoli, O. Sandoval, C. A. Avizzano, and M. Berga-
masco. 2009. Vibrotactile perception assessment for a rowing training system. In
World Haptics 2009 - Third Joint EuroHaptics conference and Symposium on Haptic
Interfaces for Virtual Environment and Teleoperator Systems. 350–355.
K. Sato, K. Minamizawa, N. Kawakami, and S. Tachi. 2007. Haptic Telexistence.
In ACM SIGGRAPH 2007 Emerging Technologies (SIGGRAPH ’07). ACM, Article
R.A. Schmidt and C.A. Wrisberg. 2004. Motor Learning and Performance. Human Kinetics.
C. Schönauer, K. Fukushi, A. Olwal, H. Kaufmann, and R. Raskar. 2012. Multimodal
Motion Guidance: Techniques for Adaptive and Dynamic Feedback. In Proceedings
of the 14th ACM International Conference on Multimodal Interaction (ICMI ’12).
ACM, 133–140.
D. Spelmezan, M. Jacobs, A. Hilgers, and J. Borchers. 2009. Tactile Motion
Instructions for Physical Activities. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems (CHI ’09). ACM, 2243–2252.
C. Spence and S. Squire. 2003. Multisensory integration: maintaining the percep-
tion of synchrony. Current Biology 13, 13 (2003), R519–R521.
A. A. Stanley and K. J. Kuchenbecker. 2012. Evaluation of Tactile Feedback
Methods for Wrist Rotation Guidance. IEEE Trans. Haptics 5, 3 (Jan. 2012), 240–
E. Tamaki, T. Miyaki, and J. Rekimoto. 2011. PossessedHand: Techniques for
Controlling Human Hands Using Electrical Muscles Stimuli. In Proceedings of
the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11). ACM,
H. Uematsu, D. Ogawa, R. Okazaki, T. Hachisu, and H. Kajimoto. 2016. HALUX:
projection-based interactive skin for digital sports. In SIGGRAPH Emerging Tech-
G. H. Van Doorn, V. Dubaj, D. B. Wuillemin, B. L. Richardson, and M. A. Symmons.
2012. Cognitive Load Can Explain Differences in Active and Passive Touch.
In Haptics: Perception, Devices, Mobility, and Communication, P. Isokoski and
J. Springare (Eds.). Springer Berlin Heidelberg, 91–102.
S. Vishniakou, B. W. Lewis, X. Niu, A. Kargar, K. Sun, M. Kalajian, N. Park, M.
Yang, Y. Jing, P. Brochu, et al. 2013. Tactile Feedback Display with Spatial and
Temporal Resolutions. Scientific Reports 3 (2013), 2521.
H. S. Vitense, J. A. Jacko, and V. K. Emery. 2002. Multimodal Feedback: Estab-
lishing a Performance Baseline for Improved Access by Individuals with Visual
Impairments. In Proceedings of the Fifth International ACM Conference on Assistive
Technologies (Assets ’02). ACM, 49–56.
Y. Zheng and J. Morrell. 2010. A vibrotactile feedback approach to posture
guidance. In Haptics Symposium, 2010 IEEE. IEEE, 351–358.
M. Zhou, D.B. Jones, S.D. Schwaitzberg, and C.G.L. Cao. 2007. Role of Haptic
Feedback and Cognitive Load in Surgical Skill Acquisition. Proceedings of the
Human Factors and Ergonomics Society Annual Meeting 51, 11 (2007), 631–635.
... Furthermore, tactile guidance towards a specific target [310] or motion and pose [266] has shown promise. Yet, both the usage context and approaches differ fundamentally from our tactile guidance approach, which aims to increase spatial awareness to better support manipulation of objects in 3D interaction scenarios. ...
... The glove has also been used for other purposes, namely hand motion and pose guidance. In [266] we illustrated how tactile patterns can guide the user, by triggering hand pose and motion changes, for example to grasp (select) and manipulate (move) an object. ...
... Non-visual guidance can be implemented in various ways. In terms of vibro-tactile cues, they can be used to direct navigation [242,410], for 3D selection tasks [10,265], for supporting pose and motion guidance [22,266], and visual search tasks [235,243]. In [267], we reported on different audio-tactile approaches that guide the user in 3D space. ...
Full-text available
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, leading to reduced performance due to factors such as conflicting visual cues or a restricted field of view. To address these limitations, the research proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The research findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. Additionally, the thesis discusses the limitations of the proposed methods and provides recommendations for future research.
... Based on previous work, we use both continuous and pulsed modes. The different modes have been used for tactile instructions for motion instructions Spelmezan et al. (2009) and pose guidance Marquardt et al. (2018b), and represent commonly used encoding schemes in vibration feedback. For example, object distance can be encoded as increasing frequency or strength (continuous) or as increasing frequency of pulses. ...
... For example, navigation and selection/manipulation tasks could be supported by proximity and collision cues. Cues could for example be provided to the lower body (navigation) and the hands (selection/manipulation) (Marquardt et al., 2018b). Future work is needed to investigate how well users will be able to perceive objects presented to both hands and lower body as the same object. ...
The visual and auditory quality of computer-mediated stimuli for virtual and extended reality (VR/XR) is rapidly improving. Still, it remains challenging to provide a fully embodied sensation and awareness of objects surrounding, approaching, or touching us in a 3D environment, though it can greatly aid task performance in a 3D user interface. For example, feedback can provide warning signals for potential collisions (e.g., bumping into an obstacle while navigating) or pinpoint areas where one's attention should be directed (e.g., points of interest or danger). These events inform our motor behaviour and are often linked to perception mechanisms of our so-called peripersonal and extrapersonal space models that relate our body to object distance, direction, and contact point/impact. We will discuss these reference spaces to explain the role of different cues in the motor action responses that underlie 3D interaction tasks. However, providing proximity and collision cues can be challenging. Various full-body vibration systems have been developed that stimulate body parts other than the hands, but they can have limitations in their applicability and feasibility due to their cost and effort to operate, as well as hygienic considerations associated with, e.g., Covid-19. Informed by the results of a prior study using low frequencies for collision feedback, in this paper we look at an unobtrusive way to provide spatial, proximal, and collision cues. Specifically, we assess the potential of foot sole stimulation to provide cues about object direction and relative distance, as well as collision direction and force of impact. Results indicate that vibration-based stimuli in particular could be useful within the frame of peripersonal and extrapersonal space perception to support 3DUI tasks. The current results favor the combination of continuous vibrotactor cues for proximity and bass-shaker cues for body collision.
Results show that users could rather easily judge the different cues at a reasonably high granularity. This granularity may be sufficient to support common navigation tasks in a 3DUI.
... From navigation [60] and skin reading [28] to movement guidance [15,31] and haptic learning [16,29], research has proposed palm-based vibrotactile displays as a promising output modality for a diverse set of use cases. These interfaces are especially applicable in situations where multi-modal interaction is beneficial, where interaction with video and audio displays is infeasible or not recommended (e.g., while driving or riding a bike), or where subtle interaction is required (e.g., while holding a conversation). ...
Palm-based tactile displays have the potential to evolve from single-motor interfaces (e.g., smartphones) to high-resolution tactile displays (e.g., back-of-device haptic interfaces), enabling richer multi-modal experiences with more information. However, we lack a systematic understanding of vibrotactile perception on the palm and the influence of various factors on the core design decisions of tactile displays (number of actuators, resolution, and intensity). In a first experiment (N=16), we investigated the effect of these factors on the users' ability to localize stationary sensations. In a second experiment (N=20), we explored the influence of resolution on the recognition rate for moving tactile sensations. Findings show that for stationary sensations a 9-actuator display offers a good trade-off and a 3×3 resolution can be accurately localized. For moving sensations, a 2×4 resolution led to the highest recognition accuracy, while 5×10 enables higher-resolution output with reasonable accuracy.
... In the field of robotics, many studies have addressed sensory design for grasp control and manipulation of real objects with human-hand-like motions [15]. In a few more recent works, researchers have introduced haptic devices and rendering techniques. ...
Recent advances in virtual reality (VR) technologies such as immersive head-mounted displays (HMDs), sensing devices, and 3D printing-based props have become much more feasible for providing improved experiences for users in virtual environments. In particular, research on haptic feedback is being actively conducted to enhance the effect of controlling virtual objects. Studies have begun to use real objects that resemble virtual objects, i.e., passive haptics, instead of haptic equipment with motor control, as an effective method that allows natural interaction. However, technical difficulties must be resolved to match transformations (e.g., position, orientation, and scale) between virtual and real objects to maximize the user's immersion. In this paper, we compare and explore the effect of passive haptic parameters on the user's perception by using different transformation conditions in immersive virtual environments. Our experimental study shows that the participants felt the same within a certain range, which seems to support the "minimum cue" theory in giving sufficient sensory stimulation. Thus, considering the benefits of the model using our approach, haptic interaction in VR content can be developed in a more economical way.
... A great number of approaches have been explored for estimating 3D hand joint positions in the literature. Early work generally makes good use of optical markers [34,54] or glove techniques [12,26,47] for recovering joint positions with high fidelity. Recent development shows that deep neural networks (DNNs) are very promising for reconstructing 3D hand poses from RGB/D images taken by consumer-level cameras [40,42,45,52,56], although it is still a challenging task for these approaches to predict 3D joint positions with accuracy comparable to traditional methods due to complexity and occlusion. ...
This paper investigates the estimation of motion parameters from 3D hand joint positions. We formulate the issue as an inverse kinematics problem with biomechanical constraints and propose a fast and robust iterative approach to address the constrained optimization. We design a coordinate descent algorithm that decomposes the problem into a sequence of decisions on the transformation around each kinematic node (i.e., joint), where the decision for each node is equivalent to a point matching problem. Addressing the whole optimization then amounts to considering all nodes of the kinematic tree from its root to leaves, one by one. This not only accelerates the process but also improves the accuracy of the solution of the inverse kinematic optimization. Experiments show that our approach is able to yield results comparable to, and even better than, those of state-of-the-art methods.
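The per-node decomposition described in this abstract is reminiscent of cyclic coordinate descent (CCD), a standard iterative IK scheme. The planar sketch below shows only that general idea, not the paper's algorithm: it omits the biomechanical constraints and the point-matching formulation, and the chain layout is a made-up example.

```python
import math


def solve_ccd_2d(lengths, target, iters=50):
    """Cyclic coordinate descent IK for a planar kinematic chain.

    Repeatedly adjusts one joint angle at a time so that the ray from
    that joint to the end effector points toward the target.
    """
    angles = [0.0] * len(lengths)

    def forward(angs):
        # Forward kinematics: joint positions from the origin outward.
        pts, a = [(0.0, 0.0)], 0.0
        for ang, ln in zip(angs, lengths):
            a += ang
            x, y = pts[-1]
            pts.append((x + ln * math.cos(a), y + ln * math.sin(a)))
        return pts

    for _ in range(iters):
        for j in reversed(range(len(lengths))):
            pts = forward(angles)
            jx, jy = pts[j]
            ex, ey = pts[-1]
            current = math.atan2(ey - jy, ex - jx)
            desired = math.atan2(target[1] - jy, target[0] - jx)
            angles[j] += desired - current
    return angles, forward(angles)[-1]
```

For a reachable target, e.g. `solve_ccd_2d([1.0, 1.0], (1.2, 0.8))`, the returned end-effector position converges to the target within a few sweeps over the chain.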
... Non-visual guidance can be implemented in various ways. Vibro-tactile cues can be used to direct navigation [53,85], for 3D selection tasks [2,60], to support pose and motion guidance [7,61], and for visual search tasks [51,54]. In [62], we reported on different audio-tactile approaches that guide the user in 3D space. ...
Current augmented reality displays still have a very limited field of view compared to the human vision. In order to localize out-of-view objects, researchers have predominantly explored visual guidance approaches to visualize information in the limited (in-view) screen space. Unfortunately, visual conflicts like cluttering or occlusion of information often arise, which can lead to search performance issues and a decreased awareness about the physical environment. In this paper, we compare an innovative non-visual guidance approach based on audio-tactile cues with the state-of-the-art visual guidance technique EyeSee360 for localizing out-of-view objects in augmented reality displays with limited field of view. In our user study, we evaluate both guidance methods in terms of search performance and situation awareness. We show that although audio-tactile guidance is generally slower than the well-performing EyeSee360 in terms of search times, it is on a par regarding the hit rate. Even more so, the audio-tactile method provides a significant improvement in situation awareness compared to the visual approach.
A major challenge in haptic engineering has been to design practical methods to efficiently stimulate distributed areas of skin. Here, we show how to use a single actuator to generate vibrotactile stimuli which cause sensations of temporally varying spatial extent. Through optical vibrometry methods, we show that vibrational stimuli applied at the fingertip elicit waves in the finger that propagate proximally toward the hand and show how the frequency-dependent damping behavior of skin causes propagation distances to decrease rapidly with increasing frequency of stimulation. Utilizing these results, we design haptic stimuli applied through a single actuator that produces wavefields that expand or contract in size. In a perception experiment, participants accurately (median >95%) identified these stimuli as expanding or contracting without prior exposure or training. As a potential application, we used these effects as haptic cues for interactions in virtual reality. We show through a second experiment that the spatiotemporal haptic stimuli were rated as significantly more engaging than conventional vibrotactile stimuli. These findings demonstrate how the physics of waves in skin can be utilized to excite spatiotemporal tactile effects over large surface areas with a single actuator, and inform methods to utilize the effects in practical applications.
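The frequency-dependent damping described here is commonly modeled as exponential amplitude decay, amplitude(x) = exp(-α·f·x), so the distance at which a wave falls below a detection threshold shrinks as frequency grows. The coefficient value and the linear frequency dependence in the sketch below are illustrative assumptions, not the paper's measured fit.

```python
import math


def propagation_distance(freq_hz, alpha_per_hz=0.002, threshold=0.1):
    """Distance at which a unit-amplitude skin wave of the given frequency
    decays below `threshold`, assuming amplitude = exp(-alpha * f * x)."""
    return -math.log(threshold) / (alpha_per_hz * freq_hz)
```

Under this toy model, tripling the stimulation frequency cuts the detectable propagation distance to a third, consistent with the observation that higher-frequency stimuli stay more localized.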
Conference Paper
Figure 1: From left to right: schematic representation of proximity-based feedback, where directional audio and tactile feedback increases in strength with decreasing distance; the scene exploration task of study 1; the tunnel task of study 2 with an example path visualization (objects in studies 1 and 2 were not visible to participants during the experiments); and the reach-in display with the tunnel (shown for illustration purposes only). ABSTRACT: In the presence of conflicting or ambiguous visual cues in complex scenes, performing 3D selection and manipulation tasks can be challenging. To improve motor planning and coordination, we explore audio-tactile cues to inform the user about the presence of objects in hand proximity, e.g., to avoid unwanted object penetrations. We do so through a novel glove-based tactile interface, enhanced by audio cues. Through two user studies, we illustrate that proximity guidance cues improve spatial awareness, hand motions, and collision avoidance behaviors, and show how proximity cues in combination with collision and friction cues can significantly improve performance.
Conference Paper
When navigating larger virtual environments and computer games, natural walking is often unfeasible. Here, we investigate how alternatives such as joystick- or leaning-based locomotion interfaces (“human joystick”) can be enhanced by adding walking-related cues following a sensory substitution approach. Using a custom-designed foot haptics system and evaluating it in a multi-part study, we show that adding walking-related auditory cues (footstep sounds), visual cues (simulating bobbing head-motions from walking), and vibrotactile cues (via vibrotactile transducers and bass-shakers under participants’ feet) could all enhance participants’ sensation of self-motion (vection) and involvement/presence. These benefits occurred similarly for seated joystick and standing leaning locomotion. Footstep sounds and vibrotactile cues also enhanced participants’ self-reported ability to judge self-motion velocities and distances traveled. Compared to seated joystick control, standing leaning enhanced self-motion sensations. Combining standing leaning with a minimal walking-in-place procedure showed no benefits and reduced usability, though. Together, results highlight the potential of incorporating walking-related auditory, visual, and vibrotactile cues for improving user experience and self-motion perception in applications such as virtual reality, gaming, and tele-presence.
In this article, we report on challenges and potential methodologies to support the design and validation of multisensory techniques. Such techniques can be used for enhancing engagement in immersive systems. Yet, designing effective techniques requires careful analysis of the effect of different cues on user engagement. The level of engagement spans the general level of presence in an environment, as well as the specific emotional response to a set trigger. Yet, measuring and analyzing the actual effect of cues is hard as it spans numerous interconnected issues. In this article, we identify the different challenges and potential validation methodologies that affect the analysis of multisensory cues on user engagement. In doing so, we provide an overview of issues and potential validation directions as an entry point for further research. The various challenges are supported by lessons learned from a pilot study, which focused on reflecting the initial validation methodology by analyzing the effect of different stimuli on user engagement.
In some complicated tabletop object manipulation tasks for robotic systems, demonstration-based control is an efficient way to enhance the stability of execution. In this paper, we use a new optical hand tracking sensor, the LeapMotion, to perform non-contact demonstration for robotic systems. A Multi-LeapMotion hand tracking system is developed. The setup of the two sensors is analyzed to find an optimal way to efficiently use the information from both. Meanwhile, the coordinate systems of the Multi-LeapMotion hand tracking device and the robotic demonstration system are developed. With recognition of the element actions and delay calibration, fusion principles are developed to obtain improved and corrected gesture recognition. Gesture recognition and scenario experiments are carried out and indicate the improvement of the proposed Multi-LeapMotion hand tracking system in tabletop object manipulation tasks for robotic demonstration.
Conference Paper
We explore how to create interactive systems based on electrical muscle stimulation that offer expressive output. We present muscle-plotter, a system that provides users with input and output access to a computer system while on the go. Using pen-on-paper interaction, muscle-plotter allows users to engage in cognitively demanding activities, such as writing math. Users write formulas using a pen and the system responds by making the users' hand draw charts and widgets. While Anoto technology in the pen tracks users' input, muscle-plotter uses electrical muscle stimulation (EMS) to steer the user's wrist so as to plot charts, fit lines through data points, find data points of interest, or fill in forms. We demonstrate the system using six simple applications, including a wind tunnel simulator. The key idea behind muscle-plotter is to make the user's hand sweep an area on which muscle-plotter renders curves, i.e., series of values, and to persist this EMS output by means of the pen. This allows the system to build up a larger whole. Still, the use of EMS allows muscle-plotter to achieve a compact and mobile form factor. In our user study, muscle-plotter made participants draw random plots with an accuracy of ±4.07 mm and preserved the frequency of functions to be drawn up to 0.3 cycles per cm.
Conference Paper
We present an investigation of mechanically-actuated hand-held controllers that render the shape of virtual objects through physical shape displacement, enabling users to feel 3D surfaces, textures, and forces that match the visual rendering. We demonstrate two such controllers, NormalTouch and TextureTouch, which are tracked in 3D and produce spatially-registered haptic feedback to a user's finger. NormalTouch haptically renders object surfaces and provides force feedback using a tiltable and extrudable platform. TextureTouch renders the shape of virtual objects including detailed surface structure through a 4×4 matrix of actuated pins. By moving the controllers around while keeping a finger on the actuated platform, users obtain the impression of a much larger 3D shape by cognitively integrating output sensations over time. Our evaluation compares the effectiveness of our controllers with the two de facto standards in Virtual Reality controllers: device vibration and visual feedback only. We find that haptic feedback significantly increases the accuracy of VR interaction, most effectively by rendering high-fidelity shape output as in the case of our controllers.
Conference Paper
Entertainment content employing users' whole-body action is now becoming popular, along with the prevalence of low-cost whole-body motion capture systems. To add a haptic modality to this context, latency becomes a critical issue because it leads to a spatial disparity between the assumed contact location and the tactile stimulation position. To cope with this issue, we propose to project the drive signal in advance so as to eliminate the latency derived from communication. We do not explicitly control each vibrator; instead, we project a "position-dependent vibration strength distribution" image. Furthermore, the system becomes highly scalable, enabling the simultaneous drive of hundreds of units attached to the body.
Conference Paper
Online fitness videos make it possible and popular to do exercise at home. However, it is not easy to notice the details of motions by merely watching training videos. We propose a new type of motion guidance system that simulates the way that the human body moves as driven by muscle contractions. We have designed external artificial muscles on a sleeve to create a pulling sensation that can guide the forearm's pronation (internal rotation) and the forearm's supination (external rotation). The sleeve consists of stepper motors to provide pulling force, fishing lines and elastic bands to imitate muscle contraction to drive the forearm to rotate instinctively. We present two preliminary experiments. The first one shows that this system can effectively guide the forearm to rotate in the correct direction. The second one shows that users can be guided to the targeted angle by utilizing a tactile cue. We also report users' feedback through the experiments and provide design recommendations and directions for future research.
The chapter summarizes the rationale and evidence for attributes of VR technology that target the motor learning variables of practice, augmented feedback, motivation, and observational learning. The potential for motor learning achieved with VR-based therapy to transfer and generalize to the tasks in the physical environment is discussed. Recommendations are provided for clinicians interested in emphasizing motor learning using VR-based therapy.