Exploring the Use of Mid-Air Ultrasonic Feedback to
Enhance Automotive User Interfaces
Kyle Harrington, David R. Large, Gary Burnett
Human Factors Research Group, University of Nottingham
Nottingham, UK
{kyle.harrington; david.r.large; gary.burnett}@nottingham.ac.uk
Orestis Georgiou
Ultrahaptics Ltd.
Bristol, UK
orestis.georgiou@ultrahaptics.com
ABSTRACT
Employing a 2x2 within-subjects design, forty-eight
experienced drivers (28 male, 20 female) undertook
repeated button selection and slider-bar manipulation
tasks, to compare a traditional touchscreen with a virtual
mid-air gesture interface in a driving simulator. Both
interfaces were tested with and without haptic feedback
generated using ultrasound. Results show that combining
gestures with mid-air haptic feedback was particularly
promising, reducing the number of long glances and mean
off-road glance time associated with the in-vehicle tasks.
For slider-bar tasks in particular, gestures-with-haptics was
also associated with the shortest interaction times, highest
number of correct responses and least ‘overshoots’, and was
favoured by participants. In contrast, for button-selection
tasks, the touchscreen was most popular, enabling the
highest accuracy and quickest responses, particularly when
combined with haptic feedback to guide interactions,
although this also increased visual demand. The study
shows clear potential for gestures with mid-air ultrasonic
haptic feedback in the automotive domain.
Author Keywords
Ultrasound; mid-air haptics; gestures; visual demand;
touchscreen; simulated driving.
CCS Concepts
• Human-centered computing → Haptic devices
• Human-centered computing → Gestural input
INTRODUCTION
Touchscreens appear to be the current de facto automotive
user interface, with some research supporting their
application for certain tasks in vehicles (e.g. simple menu
selection), compared to other physical devices [1].
However, touchscreens inherently demand visual attention.
This is due in part to designers’ slavish adherence to skeuomorphic elements that mimic formerly physical buttons, and is further exacerbated by the smooth surface and notable absence of genuine tactile cues. Clearly, any
visual attention directed towards a touchscreen means that a
driver may not be adequately attending to the road situation,
and this can detriment driving performance and vehicle
control, thereby elevating the risk to drivers and other road
users. This is a particular concern when in-vehicle glances
extend beyond 2.0-seconds [2].
Although research aiming to minimise the visual demand
associated with touchscreens in vehicles has been prolific
(e.g. [3]), it is important to understand that the number and
duration of glances directed towards an in-vehicle device
are in fact defined by two elements: the inherent and
underlying visual demand associated with undertaking a
specific task or interaction, and the driver’s personal
motivation or desire to engage visually with the interface.
The former is defined by the characteristics of the
interaction and interface (e.g. number, layout and size of
targets on a touchscreen) [4] and can therefore be reduced
through artful designs and novel interaction techniques. In
contrast, the latter is motivated by what the driver deems to
be ‘acceptable’ based on the current driving situation and
their own attitudes and opinions [5]. This has even led to
the classification of drivers in terms of their visual
behaviour, resulting, for example, in so-called ‘long
glancers’, i.e. drivers who are more inclined to take their
eyes off the road for periods greater than 2.0-seconds [5].
Whereas attempts to minimise the visual demand of touch-
surfaces may go some way to disincline or dissuade such
drivers from looking at the device, proposed solutions are
fundamentally limited by the individual behaviour and
motivations of drivers. Thus, while a visual stimulus exists,
drivers may still choose to direct their attention to the
screen, irrespective of whether this is actually required to
complete the task or interaction presented to them.
Consequently, more radical approaches that have the power
to eliminate the need (or temptation) for vision completely
need to be explored. A number of novel technologies and
innovative operating concepts have thus been proposed.
Gestures
Gestures are generally considered to be an intuitive
interaction mechanism, and therefore favour infrequent or
novice users. Further advantages are that they are not bound
to any surface and do not require vision [6]. However,
gestures can lack accuracy and/or the capacity for complex
or multiple interactions, and present the potential for
inadvertent operation. Even so, developments in
technology, combined with lucrative commercialisation
opportunities, have pioneered devices such as Microsoft
Kinect, and mean that gesture control is now ubiquitous
within some application domains, such as gaming.
However, gestural interfaces still often require, or actively
invite, visual attention (for example, to view a visualisation
of the user’s hand/gesture, or confirm menu selections), and
may also require the user to learn several different gestures,
particularly where a number of potential options exist.
In an automotive domain, the use of gestures as an input
modality has also been considered [7, 8, 9]. Although
research generally indicates that gestures are rated more
highly (more ‘enjoyable’) than conventional controls, task
success rate and ‘time to goal’ are typically highly variable
(particularly for more complex gestures or tasks), with the
highest successes and shortest task times associated with
‘simple’ and ‘natural’ gestures. Authors also warn of the
additional time required to learn more complex
manoeuvres, and potential cultural differences associated
with gestures [7]. Nevertheless, commercial in-vehicle
applications do now exist (e.g. [10]), although these are
often designed to be used in conjunction with an existing
touchscreen display or interface.
Haptics
An alternative approach to reduce the visual demand of
existing touch-surface interfaces is to augment routine
interactions with additional information, such as sensations
that can be ‘felt’. Employing haptics naturally reduces the
need for vision as users are not required to visually confirm
selection or activation, but this has traditionally still
required physical contact with the interface. For example,
haptic sensations have been added by varying the friction
coefficient of the touch-surface by vibrating it with
ultrasound [11], or physically changing its shape [12], and
this has also been investigated in a driving context [13].
Attempts to enable haptic feedback above a surface (i.e.
without the need for physical contact with the surface) have
also been explored but initially required users to wear an
additional device to detect or guide their hands or fingers,
for example, based on electromagnetic actuation [14] or
vibrators [15]. While these allow users to move their hands
freely without physically contacting the surface, the
requirement to wear an additional device limits the potential
for spontaneous interaction and can sacrifice simplicity and
accessibility [16]. Moreover, in a divided attention context,
such as driving, wearing an additional secondary device
may interfere with primary task execution.
In contrast, Hoshi et al. [17] proposed the use of ultrasound
to stimulate receptors in the human body. The approach,
further developed and brought to market by Carter et al.
[16] (who coined the term ‘Ultrahaptics’, and first
recognised the potential in the automotive domain), uses the
principle of acoustic radiation force, i.e. the force generated
when ultrasound is reflected. Therefore, no physical contact
or additional wearable device is required. Instead,
ultrasound is focused directly onto the surface of the skin,
where it induces vibrations in the skin tissue. The
displacement triggers mechanoreceptors within the skin
generating a haptic sensation [18]. By focussing the
acoustic radiation force and modulating this at 200Hz (the
frequency at which vibrations are perceptible by human
skin mechano-receptors), the illusion of a virtual ‘target’ in
mid-air can be created. Moreover, by synchronising the
phase delays of multiple transducers, different focal points
can be generated which are perceivable as different mid-air
targets, or interface elements [16]. The ultrasound used has a frequency of 40kHz, and thus the smallest interface element has a physical diameter equal to the wavelength of approximately 8mm.
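To make these figures concrete (a minimal worked example, assuming a speed of sound in air of roughly 343 m/s, which the paper does not state), the wavelength follows from the carrier frequency, and a focal point is formed by delaying each transducer so that its wavefront arrives at the target in phase:

$$\lambda = \frac{c}{f} \approx \frac{343\ \mathrm{m/s}}{40\ \mathrm{kHz}} \approx 8.6\ \mathrm{mm}, \qquad \Delta\phi_i = \frac{2\pi f}{c}\,(d_{\max} - d_i),$$

where $d_i$ is the distance from transducer $i$ to the desired focal point and $d_{\max}$ is the largest such distance across the array.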
Combining mid-air haptic sensations with simple, intuitive
gestures therefore has the potential to eliminate the need for
vision completely. For example, discrete virtual buttons
could be presented in three-dimensional space to replicate
the familiar layout of a traditional touchscreen interface (i.e.
as an ordered array of buttons). Once identified, selections
could be made by physically moving the hand downwards
(to emulate pressing a button). The approach has the added
advantage that users are not required to remember the
semantic meaning of different gestures, or locations of
interface elements, but rather use their sense of touch to
locate the button based on targeted haptic feedback, and then use a simple, intuitive gesture, such as a ‘press’, to activate it. As such, existing ‘mental models’ of interface layouts (e.g. a 2x3 structured array of buttons) can be
logically applied. Additionally, the virtual array of targets
can be placed anywhere in three-dimensional space and is
therefore not bound to an existing surface or infrastructure.
In an automotive domain, this offers potential benefits in
anthropometrics and physical ergonomics, as well as space
savings in vehicles.
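As an illustration of this idea, the sketch below (hypothetical Python, not the implementation used in the study; all names and thresholds are assumptions) represents a 2x3 array of virtual buttons as fixed points in three-dimensional space, returns the button under the palm so that haptic feedback could be targeted at it, and treats a sufficiently large downward displacement as a ‘press’.

```python
from dataclasses import dataclass

@dataclass
class VirtualButton:
    label: str
    centre: tuple          # (x, y, z) in metres, fixed in 3D space
    radius: float = 0.03   # active region around the centre

def make_button_array(rows=2, cols=3, pitch=0.06, height=0.20):
    """Lay out a rows x cols array of virtual buttons above the transducer board."""
    buttons = []
    for r in range(rows):
        for c in range(cols):
            centre = (c * pitch, r * pitch, height)
            buttons.append(VirtualButton(label=str(r * cols + c + 1), centre=centre))
    return buttons

def button_under_hand(buttons, palm_pos):
    """Return the button whose active region contains the palm, or None.
    Edges and gaps between buttons simply produce no haptic feedback."""
    for b in buttons:
        dx = palm_pos[0] - b.centre[0]
        dy = palm_pos[1] - b.centre[1]
        if (dx * dx + dy * dy) ** 0.5 <= b.radius:
            return b
    return None

def detect_press(z_history, threshold=0.04):
    """A 'press' is a downward palm movement larger than `threshold` metres."""
    return len(z_history) >= 2 and (z_history[0] - z_history[-1]) > threshold
```

In such a scheme, haptic feedback would be rendered only while button_under_hand(...) returns a button, so the layout can be explored by touch alone.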
Overview of Study
The aim of the current study was to explore the use of
gestures augmented with mid-air haptic feedback (created
using ultrasound) in a driving context. Therefore, in the
interests of scientific rigour (i.e. to ensure a fair and
unbiased comparison in a 2x2 experimental design, with
independent variables of touch/gesture and with/without
haptic feedback), a touchscreen remained present
throughout, acting as the interface during ‘touch’ conditions
and providing an abstracted view of selections during
‘gesture’ conditions, even though a visual display would
not strictly be required for the latter. Consequently, it is not
expected that visual demand would be eliminated
completely when using gestures-and-haptics, although this
is a perfectly realistic goal in future investigations and
evaluations.
METHOD
Participants
Forty-eight people took part in the study: 28 male, 20
female, with ages ranging from 23 to 70 years (mean age:
35.4). A representative proportion (8) of participants were
left-handed. All participants held a valid UK driving
licence, and were experienced and active drivers (mean
number of years with licence: 13.8, average annual mileage:
7091, range: 10k-20k). Participants were self-selecting
volunteers (acquired through convenience sampling) who
responded to advertisements placed around the University
of Nottingham campus, and were reimbursed with £10
(GBP) of shopping vouchers as compensation for their
time. All participants provided written informed consent
before taking part.
Apparatus
The study took place in a medium-fidelity, fixed-based
driving simulator at the University of Nottingham (Figure
1). The simulator comprises a right-hand drive Audi TT car
positioned within a curved screen, affording a contiguous 270-degree forward and side image of the driving scene via
three overhead HD projectors, together with rear and side
mirror displays. A Thrustmaster 500RS force feedback
steering wheel and pedal set are integrated faithfully with
the existing Audi primary controls, with a dashboard
created using a bespoke application and re-presented on a
7-inch LCD screen, replacing the original Audi instrument
cluster. The simulated driving environment was created
using STISIM (v3) software, and comprised a three-lane
UK motorway with both sides of the carriageway populated
by moderate levels of traffic, and authentic road signage
and geo-typical roadside terrain.
Mid-air haptic sensations were created using the Ultrahaptics touch development kit (TDK)¹, installed in the centre of the car (between driver and passenger seats) (Figure 2), as might be expected for such an interface. This location naturally lends itself to comfortable ‘open palm’ interactions, and eliminates potential safety concerns associated with using directional ultrasonic waves. The TDK employs a 14x14 ultrasonic transducer array board to create three-dimensional mid-air sensations that could be best described as gentle, pressurised airflow on the palm of the hand. A Leap Motion² camera is used to detect and
Interaction techniques and textures were developed in
collaboration with Ultrahaptics Ltd. using the Ultrahaptics
software development kit (SDK) integrated with Unity, to
replicate multiple target arrays (‘buttons’) and a graduated
‘slider bar’. For buttons, active regions utilised four focal
points to create perceivable ‘button’ shapes that were fixed
in three-dimensional space (but not bound to a single plane
in x-y space). Edges and the space between buttons were
defined by the absence of haptic feedback, with the size and
layout of button arrays directly corresponding with the on-
screen representation. For slider-bar tasks, the centre of the
slider-bar was determined by the participant’s first open-
hand gesture (i.e. not fixed in three-dimensional space).
Thereafter, the slider-bar interface allowed approximately
20cm movement in either direction.
¹ https://www.ultrahaptics.com/products-programs/touch-development-kit/
² https://www.leapmotion.com/
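The ‘gentle, pressurised airflow’ sensation arises because the focused 40kHz carrier is amplitude-modulated at a rate the skin can perceive (the 200Hz figure quoted earlier). A minimal numpy sketch of such a modulated drive envelope is shown below; this is illustrative only, and is not the Ultrahaptics SDK’s actual API or internal implementation.

```python
import numpy as np

def am_drive_signal(duration_s=0.05, fs=500_000, carrier_hz=40_000, mod_hz=200):
    """40 kHz ultrasonic carrier, amplitude-modulated at a skin-perceptible rate."""
    t = np.arange(0, duration_s, 1.0 / fs)
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * mod_hz * t))   # 0..1 at 200 Hz
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return envelope * carrier

signal = am_drive_signal()
```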
Figure 2. Participant interacting with mid-air haptics,
showing transducer array (beneath hand), Leap Motion
camera (beneath wrist) and visual (touchscreen) display
with button ‘one’ of three selected (in red).
Figure 1. Medium fidelity driving simulator, showing
motorway scenario.
Experimental Design, Tasks and Procedure
The ‘car following’ paradigm was adopted as the primary
driving task [19]. At the start of the scenario, a yellow car
was presented ahead of the participant’s vehicle on the
motorway. This began moving when the participant started
driving, and travelled at a variable speed (between 65 and
75mph). Participants were instructed to follow the lead car,
which remained in lane one, at a distance that they deemed
to be safe and appropriate. While following the lead
vehicle (‘the primary task’), participants were asked to
interact with the in-vehicle interface (‘the secondary task’)
using four different techniques in a 2x2 experimental design
(touch versus gesture and with/without haptic feedback). As
stated, the touchscreen remained present during all
conditions, providing visual feedback during the ‘gesture’
conditions.
Each participant was provided with training and
familiarisation using the touchscreen and gestures (with and
without haptic feedback). This occurred firstly whilst
stationary (i.e. seated in the car) and secondly, while
driving. For each technique, the participant was required to
demonstrate three consecutive successful interactions (i.e.
using the correct behaviour and selecting the correct target
without any false activations), before they were deemed to
be competent. Each participant was subsequently asked to
undertake four experimental drives. During each drive,
participants were presented with a different interaction
technique, resulting in four drives, or conditions, with the
order of exposure counterbalanced between participants:
1. Touch No Haptics (TN): Tasks were completed using a
conventional touchscreen with no haptic feedback.
2. Touch with Haptics (TH): Tasks were completed using
the touchscreen enhanced with ultrasonic haptic
feedback aiming to guide the participant’s hand towards
the touchscreen (i.e. haptic feedback was provided when
their hand was in close proximity to the screen).
3. Gesture No Haptics (GN): Tasks were completed using
simple gestures (identified using the Leap Motion
sensor) but without haptic feedback.
4. Gesture with Haptics (GH): Tasks were completed
using the same gestures enhanced by haptic sensations.
Bespoke interfaces were created comprising monochromatic interaction elements of ersatz in-vehicle tasks, i.e. discrete ‘buttons’ and a continuous slider bar (Figure 3). While conventional interfaces would likely
include more complex interaction elements (e.g. multiple
buttons in elaborate configurations and colour schemes), the
intention in using an abstracted interface was to explore the
behaviour associated with isolated, constituent elements
and thus avoid potential confounds associated with more
intricate designs as well as potential differences in the
semantic interpretation or actuation of specific tasks. In
addition, the chosen techniques (button selection and slider-
bar manipulation) are highly representative of current
automotive touch-surface interface elements, and of
particular interest given that they pose different interaction
characteristics (i.e. discrete versus continuous).
For button selection tasks, participants were provided with
either a 2, 3 or 4-item structured menu (with targets
numbered consecutively) (e.g. Figure 3), and were asked to
select a specific target item (either by touching the screen or
using a gesture) with a pre-recorded voice message. For
example, “On this three-item menu, select two”. To achieve
this for the ‘gesture’ conditions (GN and GH), drivers were
required to locate the correct button (either by using visual
feedback, or identifying the relevant haptic sensation) and
make a simple downward movement of their open hand to
simulate a button press.
For slider bar tasks, the pointer was initially placed in the
centre of the slider bar, and participants were asked to
increase or decrease the value by a specified amount, up to
five increments in either direction (for example, “Please
increase the value by three”), by dragging the pointer on the
touchscreen or using mid-air gestures. For gesture
conditions, participants were required to initially ‘select’
the pointer by making an open-palm gesture. Participants
were then required to move their open hand right or left to
increase or decrease the value, and then ‘grab’ (by making a fist) to make a selection.

Figure 3. Button selection (top) and slider bar tasks (bottom).

Where appropriate, ultrasound
haptics were provided to signify incremental changes (i.e.
separate pulsed sensations were generated as the
participant’s hand passed each subsequent value).
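The slider-bar interaction described above can be viewed as a small state machine: an open palm anchors the slider at the current hand position, lateral displacement maps to increments (with a haptic pulse emitted at each step), and a fist confirms the selection. The hypothetical Python sketch below illustrates this; the 4cm increment width is an assumption inferred from the ~20cm travel and five increments, not a figure reported in the paper.

```python
IDLE, ENGAGED, CONFIRMED = "idle", "engaged", "confirmed"

class MidAirSlider:
    """Slider centred on the first open-palm position; ~20 cm travel each way."""

    def __init__(self, increment_m=0.04, max_increments=5):
        self.increment_m = increment_m
        self.max_increments = max_increments
        self.state = IDLE
        self.origin_x = None
        self.value = 0

    def update(self, palm_x, open_palm, fist, emit_pulse):
        if self.state == IDLE and open_palm:
            self.state = ENGAGED
            self.origin_x = palm_x          # slider is not fixed in space
        elif self.state == ENGAGED:
            if fist:
                self.state = CONFIRMED      # 'grab' makes the selection
                return self.value
            steps = round((palm_x - self.origin_x) / self.increment_m)
            steps = max(-self.max_increments, min(self.max_increments, steps))
            if steps != self.value:
                emit_pulse()                # pulsed sensation at each increment
                self.value = steps
        return None
```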
Participants completed two repeats for all possible targets
and configurations, for both button selection and slider bar
tasks, culminating in 18 button presses and 20 slider tasks
per drive. Each drive therefore lasted approximately 8-10
minutes, and the entire study took about hours for each
participant.
Measures and Analysis Approach
To record visual behaviour, participants wore SMI eye-
tracking glasses (ETG) (visible in Figure 2), with gaze data
analysed using semantic gaze mapping. Off-road (‘in-
vehicle’) glances were subsequently defined from the
moment the driver’s gaze started to move towards the in-car
display, to the time it returned to the road scene (i.e.
including the transition time from and back-to the road).
Thus, a single in-car glance could comprise several
fixations on the in-vehicle display.
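A minimal sketch of how the reported glance metrics could be computed from such annotated glance intervals is shown below (illustrative Python; the study itself used SMI’s semantic gaze mapping tooling, not this code).

```python
def glance_metrics(glances, long_threshold=2.0):
    """`glances` is a list of (start_s, end_s) off-road glance intervals,
    each spanning eyes-leaving-the-road to eyes-back-on-the-road."""
    durations = [end - start for start, end in glances]
    nog = len(durations)                             # number of off-road glances
    tgt = sum(durations)                             # total off-road glance time
    mgd = tgt / nog if nog else 0.0                  # mean glance duration
    long_glances = sum(1 for d in durations if d > long_threshold)
    return {"NOG": nog, "TGT": tgt, "MGD": mgd, "long_glances": long_glances}

# Example: three glances towards the display during one task
print(glance_metrics([(10.2, 11.0), (14.5, 16.9), (20.0, 21.1)]))
```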
Secondary task performance was determined through
measures of accuracy (percentage of correct button/slider-
bar selection, and cumulative slider-bar ‘overshoots’) and
task-time. For slider-bar tasks, the task-time comprised the
‘reaction time’, i.e. the time from the delivery of the task
instruction to the start of the interaction (when the Leap
Motion sensor detected the hand and initiated the haptic
sensations, or the participant made contact with the touch
surface), and ‘interaction time’ reflecting the time that each
participant took to manipulate the interface and make their
selection (using either the appropriate mid-air gesture or
touch). For button selection tasks, only total task time was
recorded (given that for button selections using the touch
surface, there was no discernible ‘interaction’ time).
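For clarity, this task-time decomposition amounts to simple differences between three logged timestamps, as in the trivial sketch below (parameter names are illustrative only):

```python
def split_task_time(instruction_s, interaction_start_s, selection_s):
    """Reaction time: instruction delivery to first hand detection / screen contact.
    Interaction time: first detection / contact to the completed selection."""
    reaction = interaction_start_s - instruction_s
    interaction = selection_s - interaction_start_s
    return reaction, interaction, reaction + interaction
```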
Figure 4. Number of off-road glances per task (mean values, with SD error bars).

Figure 5. Total off-road glance time per task (mean values, with SD error bars).

Figure 6. Mean off-road glance duration per task (mean values, with SD error bars).

Figure 7. Number of off-road glances > 2.0s per condition/drive (mean values, with SD error bars).

In addition, driving performance data were captured from STISIM. These were used to calculate standard deviations
of lane position (SDLP) and headway (SDHW) for each
drive. Finally, participants ranked each of the four
conditions in order of their preference, illuminating their
decisions with comments captured during a post-study
interview. For each measure, results were compared across
conditions using two-way repeated-measures ANOVAs,
comparing Touch/Gesture and With/Without Haptics
(unless specified otherwise).
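As a sketch of this analysis pipeline (the synthetic data, column names and use of statsmodels are assumptions; the paper does not state which statistical software was used), SDLP is simply the standard deviation of the logged lane-position samples per drive, and the 2x2 repeated-measures ANOVA can be run as follows:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Synthetic long-format data: one row per participant x condition
# (in the study: 48 participants x 4 conditions; measures such as MGD or SDLP).
rows = []
for pid in range(48):
    for inp in ("touch", "gesture"):
        for hap in ("without", "with"):
            rows.append({"participant": pid, "input": inp,
                         "haptics": hap, "mgd": rng.normal(1.2, 0.3)})
data = pd.DataFrame(rows)

# SDLP would be obtained analogously, e.g. the standard deviation of the
# lane-position samples grouped by participant and drive.

# 2x2 repeated-measures ANOVA: Touch/Gesture x With/Without haptics
res = AnovaRM(data, depvar="mgd", subject="participant",
              within=["input", "haptics"]).fit()
print(res.anova_table)
```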
RESULTS
Visual Behaviour
Based on current guidelines [20], visual behaviour is
presented as the number of off-road glances (NOG) (Figure
4), the total off-road glance time (TGT) (Figure 5), and
mean off-road glance duration (MGD) (Figure 6). The
number of off-road glances longer than 2.0-seconds are also
presented. For clarity, these relate to each condition (or
drive) (i.e. all 18 button and 20 slider-bar tasks,
respectively) (Figure 7). ‘Off-road’ (or ‘in-vehicle’) glances
comprise all visual behaviour directed towards the interface
(i.e. the screen and/or transducer array).
Number of Off-Road Glances (NOG)
For button-selection tasks, there were no significant
differences identified in NOG for either Touch or Haptics
(Figure 4). However, there were significantly fewer glances
over 2.0-seconds (‘long glances’) associated with the
touchscreen (F(1,126) = 15.3, p < .001) (Figure 7). There
was also a significant interaction for Touch*Haptics
(F(1,126) = 4.22, p = .042), indicating that adding haptics to
gestures decreased the number of long glances, whereas the
number of long glances associated with the touchscreen
increased when haptics were added. A similar trend can be
observed for the number of glances overall.
For slider-bar tasks, there was a significant difference in
NOG associated with Touch (F(1,125) = 4.0, p = .047),
indicating fewer off-road glances associated with gestures.
Adding haptics had no significant effect on the number of off-road glances overall, but tended to reduce NOG associated with the gesture interface.
significant differences in the number of long glances
associated with slider-bar tasks, although there was a trend
for fewer when haptics was added.
Total Off-Road Glance Time
For button-selection tasks, total off-road glance time was
significantly lower for Touch compared to Gesture
(F(1,126) = 4.93, p = .028) (Figure 5). Nevertheless, adding
haptics reduced TGT for the gesture interface, but extended
TGT for the touchscreen.
For slider-bar tasks, there were no significant differences
identified for either Touch or Haptics, although adding
haptics to the gestures tended to reduce total off-road
glance time.
Figure 8. Number of overshoots during slider-bar task.

Figure 9. Total task-time for button selection tasks.

Figure 10. Interaction time for slider-bar tasks.

Mean Off-Road Glance Duration
For button-selection tasks, the mean off-road glance duration was significantly shorter for Touch compared to
Gesture (F(1,126) = 5.69, p = .019) (Figure 6). There was
also a significant interaction for Touch*Haptics (F(1,126) =
6.97, p = .009), indicating that adding haptics to gestures
decreased the mean off-road glance duration, whereas mean
off-road glance duration increased when haptics were
added to the touchscreen.
For slider-bar tasks, significant differences were found for
Haptics (F(1,125) = 4.40, p = .038) (Figure 6), showing that
the provision of ultrasound haptic feedback reduced the
mean glance duration for both the touchscreen and gestures.
No significant differences in MGD were found associated
with Touch for slider-bar tasks.
Secondary Task Performance
Accuracy
For button-selection tasks, there was a significant difference
between Touch and Gesture for accuracy (percentage of
selections made correctly) (F(1,47)=13.95, p = .001), with
the greatest success achieved when using the touchscreen.
In addition, haptics provided benefit in terms of improved accuracy for the Touch condition but not for Gesture
(F(1,47)=2164.9, p < .001).
There were also significant differences between Touch and
Gesture for slider-bar ‘overshoots’ (F(1,46)=202.8, p <
.001) (Figure 8), showing that fewer errors of this type were
made when using gestures (both with and without haptics).
In addition, when interactions were enhanced with haptics,
benefits were more evident during the Touch condition than
with Gesture (F(1,46)=1848.0, p < .001). There were no
significant differences identified for percentage of correct
selections for the slider-bar tasks between Touch and
Gesture, although again, haptics tended to benefit the Touch
condition more than Gesture for this measure.
Task Time
Total task-time comprises both time to respond to the task
instruction (‘reaction time’, i.e. moving the hand to the
active zone of the Leap Motion sensor, or making contact
with the touch-surface), and the time to undertake the task
itself (‘interaction time’, i.e. manipulating the interface and
making a selection). For button-selection tasks, there was a
significant difference between Touch and Gesture
(F(1,46)=187.3, p < .001) (Figure 9), with a two-way
ANOVA showing that Touch was significantly quicker than
Gesture for total task-time, both with and without haptic
feedback. It was not feasible to split total task-time for
button-selections using the touch surface as there was no
discernible ‘interaction’ time, and therefore no comparison
could be made with Gesture for this measure.
When task-time was broken down into reaction time and
interaction time for slider-bar tasks, it was evident that
the time taken to undertake the task (i.e. the interaction
time) was significantly shorter when using gestures
(F(1,46)=43.15, p < .001) (Figure 10), whereas reaction
time (i.e. the time to respond to the task instruction) was
quicker for Touch compared to Gesture (F(1,46)=232.0, p <
.001) (Figure 11).
Condition   TN    TH    GN    GH
Score       101   74    21    82
Rank        1     3     4     2

Table 1. Preference scores and pairwise ranking for button selection tasks, where 1=most preferred and 4=least preferred.
Condition   TN    TH    GN    GH
Score       61    49    62    113
Rank        3     4     2     1

Table 2. Preference scores and pairwise ranking for slider-bar tasks, where 1=most preferred and 4=least preferred.
Figure 11. Total task-time for slider-bar tasks split by reaction time and interaction time (mean values).

Figure 12. Standard deviation of lane position (SDLP, ft).
Driving Performance
There was a significant difference for SDLP
(F(1,45)=407.0, p < .001), with pairwise comparisons
suggesting that SDLP was lower during Touch-only
compared to Gesture-only (p = .039). However, when this
was corrected for multiple comparisons, differences were
no longer significant (Figure 12). There were no
significant differences revealed in SDHW between
conditions.
Preferences
Participants were asked to rank the four conditions in order
of preference. Pairwise ranking was used to systematically
compare each condition with each other. Table 1 and Table
2 show pairwise scores and rankings for button-selection
tasks and slider-bar tasks, respectively. For button selection
tasks, participants tended to prefer the touchscreen, whereas
gestures-with-haptics was preferred for slider-bar tasks.
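One plausible way to derive such pairwise preference scores from the four-way rankings is to award each condition a point for every pairwise comparison it wins, summed across participants. The sketch below illustrates this; the exact scoring procedure used in the study is not described, so this is an assumption.

```python
from itertools import combinations

def pairwise_scores(rankings, conditions=("TN", "TH", "GN", "GH")):
    """`rankings` maps each participant to a dict of condition -> rank (1 = best)."""
    scores = {c: 0 for c in conditions}
    for ranks in rankings.values():
        for a, b in combinations(conditions, 2):
            winner = a if ranks[a] < ranks[b] else b
            scores[winner] += 1
    return scores

# Example with two participants
print(pairwise_scores({
    "p01": {"TN": 1, "TH": 3, "GN": 4, "GH": 2},
    "p02": {"TN": 2, "TH": 4, "GN": 3, "GH": 1},
}))
```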
These ratings were supported by comments made by
participants. For example, regarding button-selection tasks:
“The touchscreen was good because you could see what
you pressed.”
Even so, some participants recognised the limitations
associated with the touchscreen: “The touchscreen was
difficult to actually get…and you do have to actually look at
the number…it meant I had to look at the screen for longer,
which was more distracting.”
Support for gestures-with-haptics was evident for both
button-selection and slider-bar tasks: “You don’t even need to look, you can kind of feel. So the haptic feedback helps with that”; “It gave me more confidence that I choose the correct number”; “The haptics slider was probably the easiest, because as you moved it gave you a click-click-click feedback.”
It was also clear from participants’ comments that gestures
alone (i.e. without haptic feedback) were more challenging
to use: “Gesture control without haptics was difficult
because you couldn’t tell what you were activating.”
DISCUSSION
By employing ultrasound to deliver discrete haptic
sensations that can be felt on the palm of the hand, the
illusion of an array of virtual buttons, or other interaction
elements such as a slider-bar, can be created in mid-air and
subsequently actuated using gestures [16]. This has the
potential to remove the need for vision when interacting
with an interface, and is therefore of particular relevance in
an automotive context, where a driver’s visual attention
may already be consumed by the primary driving task. The
aim of the current study was to investigate the use of mid-
air ultrasonic haptic feedback to enhance interactions in this
context, with a particular focus on understanding the impact
on drivers’ visual behaviour. It is worth highlighting again
that as part of the experimental design, the touchscreen
remained present during the ‘gesture’ conditions. In the
absence of haptic feedback, this was arguably necessary to
support the use of the gesture interface. However, when
gestures were enhanced by mid-air ultrasonic feedback,
visual feedback was not strictly required, and therefore the
presence of the touchscreen may have inadvertently
attracted additional visual attention in this situation. Thus,
the study was unlikely to reveal the full potential of vision-
free interaction that gestures-with-haptics could enable.
Nevertheless, there were some interesting visual behaviours
revealed.
For example, when making button selections, the
touchscreen attracted fewer long glances (>2.0-seconds)
than the gesture interface. In addition, the total off-road
glance time and mean glance duration were shorter when
using the touchscreen for button selections. However, the
addition of mid-air haptic feedback increased the number of
long glances made to the touchscreen (as well as TGT and
MGD), whilst actually reducing the visual demand of the
gesture interface, evidenced by fewer long glances, and a
significant reduction in TGT and MGD. In contrast, for
slider-bar tasks, the gesture interface attracted the fewest
number of glances, but again, the addition of haptics tended
to reduce the number of glances (and in particular, the
number of glances over 2.0-seconds), as well as the total
off-road glance time and mean glance duration.
Therefore, although there is some evidence (based on the
visual performance measures) to suggest that the
touchscreen alone performed better than gestures for button
selections, it was not possible to reduce the visual demand
further by providing haptics. Instead, providing haptic
feedback to the touchscreen actually increased visual
demand. In contrast, the provision of ultrasound haptic
feedback to gestures significantly reduced the visual
demand associated with this interface, and for all tasks (i.e.
button-selections and slider-bar manipulation). This is an
important finding as interfaces are becoming less reliant on
single button presses (which can be limited in their scope),
and increasingly incorporating novel interaction elements and techniques that may be better serviced using gestures.
Moreover, results suggest that current concerns regarding
the potential additional demands associated with such
interfaces may be alleviated through the careful provision
of ultrasound haptic feedback.
There were, however, notable benefits in terms of accuracy
(for the slider-bar task), when the touchscreen was
enhanced by haptic feedback. Nevertheless, the best
performance overall (in terms of the percentage of correct
responses and minimising target ‘overshoots’) was
associated with gestures and haptics. Conversely, utilising
gestures for button selections appeared to extend total task-
time. Examined more closely (i.e. by segregating ‘reaction
time’ and ‘interaction time’ for the slider-bar tasks), it was
clear that the additional time was associated with the
drivers’ ‘reaction’, i.e. the time taken from the delivery of
the task instruction to the start of their interaction. For the
gesture/haptics condition this included the time that the
driver took to move their hand into position. Considering
the ‘interaction’ time in isolation (i.e. the time taken to
manipulate the slider bar and make selections), there were
notable benefits in terms of reduced response time when
using gestures-with-haptics.
For gestures not bound in physical space (i.e. the slider-bar tasks), a key factor in the successful accomplishment of interactions was participants’ initial hand position.
Primarily, this was to ensure that the Leap Motion system
could detect the hand, but it was also important to ensure
that there was adequate space to complete manoeuvres. For
example, poor initial hand placement could hinder slider-
bar value increments due to the physical presence of the
steering wheel (i.e. if interactions were started too close to
the wheel), whereas decrementing the slider-bar could be
difficult due to limitations in participants’ reach, if their
initial hand placement was too far away. Thus, participants
were encouraged to carefully locate their hand in three-
dimensional space before commencing each interaction,
and this would likely have taken additional time and effort.
This self-imposed formality would be expected to reduce as
drivers’ familiarity with the technology increases.
In addition, even when located ‘correctly’, the Leap Motion
camera was required to detect the driver’s hand and initiate
haptic sensations. Consequently, there was also an inherent
hardware/software latency (in addition to the time for the
driver to physically move their hand into position), and this
is likely to reduce in future implementations as the
flexibility and capability of the technology improves.
There were also some differences apparent in driving
performance measures (in particular, SDLP), with better
lateral vehicle control evident when participants used the
touch-surface, compared to gestures without haptics.
However, when the gestures were augmented with
ultrasound sensations, vehicle control was comparable,
suggesting that while gestures on their own may detriment
driving performance, the additional provision of mid-air
haptic feedback could negate deleterious effects on driving
performance (i.e. making them comparable to using the
touchscreen on its own). However, these effects were small
and therefore further longer-term driving studies are
recommended to explore this further.
Support for the gestures and haptics is also evident in the
preference ratings, with ‘gesture with haptics’ identified as
the most popular (by far) for the slider-bar task, and the
second most popular for selecting buttons. The fact that the
touch-surface was most popular for button selections (and
achieved the shortest task times for these) is unsurprising
given that touchscreens are now common in many contexts,
and the interaction itself (i.e. touching the screen) remains
perceptively quicker and easier than locating and activating
a virtual mid-air button (which was a novel experience to
many of our participants). However, it is worth noting that
the Audi TT simulator utilised during the study is a
compact vehicle and has a characteristically small interior.
As such, the touchscreen was located close to the driver
(placed in front of the centre console) (see Figure 2), and
therefore generally within easy reach. This might not be the
case in larger vehicles, such as SUVs etc., where
touchscreens may be placed outside of easy-reach zones.
There was also recognition for the potential benefit of using
gestures and haptics revealed through the post-study
interviews, with many participants recognising that gestures
and haptics could enable completely vision-free interactions
as familiarity and usability improve; similar claims could
not be made for current touchscreen technology.
It is worth noting that the gestures, associated haptic
sensations and experimental interfaces were developed
specifically for the study, and therefore all results are based
on this bespoke implementation, experimental set-up and
‘post-hoc’ installation. Thus, some of the performance
metrics associated with the combination of touchscreen and
haptics, for example, may simply reflect a poor integration
of these two technologies. Moreover, future
implementations would likely be seamlessly installed
within vehicle interiors. In addition, further developments
in the ultrasound technology and automotive-UX design are
likely to offer benefits in terms of improved usability and
reduced response latency for all interfaces. For the gestures-
with-haptics, in particular, there is further scope to develop
novel gestures and distinct haptic sensations to help drivers
differentiate and select targets. Moreover, future gestures-
with-haptics interfaces need not be bound by the traditional
restrictions of a visual interface (e.g. a limited physical
space to present a finite number of elements), and this could
dramatically increase the scope for novel, multifarious
interactions.
Finally, it is worth highlighting that the activities under
investigation were purposefully chosen to be task-oriented
and not goal-oriented. Therefore, participants were required
to make repeated selections and manipulations using a
rudimentary interface with limited ‘real-world’
functionality or appeal. This was a necessary experimental
constraint to avoid confounding effects, but it is recognised
that in practice, drivers would have a goal in mind when
interacting with an in-vehicle device, such as increasing
music volume. This would not only provide feedback (e.g.
the music gets louder), but might also not necessitate the
accuracy demanded during the study (cf. moving the slider-
bar by 3 increments). While these factors may affect real-
world behaviour with such a system, the intention in
conducting the study was to provide a robust and controlled
investigation of gestures enhanced with mid-air haptic
feedback compared to a traditional touch-surface interface,
in a driving context.
CONCLUSION
The study evaluated the novel use of ultrasound to emulate
discrete mid-air buttons and a graduated slider-bar,
activated using gestures, in a driving context. By comparing
this with a traditional touch-surface interface, the study
shows clear potential for gestures enhanced with mid-air
ultrasonic haptic feedback in the automotive domain, with
reductions in visual demand, shorter interaction times, and
improved accuracy (for slider-bar tasks) evident when
haptic feedback was provided. The combined gesture-
haptics interface was also very popular amongst
participants, who rated it particularly highly for
‘continuous’ slider-bar manipulations. Further work is
required to optimise sensations and interactions, for
example, to improve the speed of response during discrete
button-selection tasks. In addition, future work could seek
to locate the technology within more representative driving
contexts, where vibrations, vehicle movements, and other
demands of real-world driving may impact on usability.
ACKNOWLEDGMENTS
The research was conducted in collaboration with
Ultrahaptics Ltd., and the authors gratefully acknowledge
their advice and support.
REFERENCES
1. D. Large, G. Burnett, E. Crundall, G. Lawson and L. Skrypchuk, “Twist It, Touch It, Push It, Swipe It: Evaluating Secondary Input Devices for Use with an Automotive Touchscreen HMI,” in Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 2016.
2. S. Klauer, T. Dingus, V. Neale, J. Sudweeks and D. Ramsey, “The impact of driver inattention on near-crash/crash risk: An analysis using the 100-car naturalistic driving study data,” 2006.
3. S. Rümelin and A. Butz, “How to make large touch screens usable while driving,” in Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 2013.
4. D. Large, G. Burnett, E. Crundall, E. van Loon, A. Eren and L. Skrypchuk, “Developing predictive equations to model the visual demand of in-vehicle touchscreen HMIs,” International Journal of Human-Computer Interaction, vol. 34, no. 1, pp. 1-14, 2018.
5. B. Donmez, L. Boyle and J. Lee, “Differences in off-road glances: effects on young drivers’ performance,” Journal of Transportation Engineering, vol. 136, no. 5, pp. 403-409, 2009.
6. L. Garber, “Gestural Technology: Moving Interfaces in a New Direction,” Computer, vol. 46, no. 10, pp. 22-25, 2013.
7. A. van Laack, Measurement of Sensory and Cultural Influences on Haptic Quality Perception of Vehicle Interiors, BoD - Books on Demand, 2014.
8. A. Laack, O. Kirsch, G. Tuzar and J. Blessing, “Controlling vehicle functions with natural body language,” in Mensch und Computer 2016 - Workshopband, 2016.
9. S. Rümelin, T. Gabler and J. Bellenbaum, “Clicks are in the Air: How to Support the Interaction with Floating Objects through Ultrasonic Feedback,” in 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 2017.
10. Postmedia Network Inc., “How It Works: BMW’s Gesture Control,” 10 08 2016. [Online]. Available: http://driving.ca/bmw/7-series/auto-news/news/how-it-works-bmw-gesture-control. [Accessed 22 03 2018].
11. M. Biet, F. Giraud and B. Lemaire-Semail, “Implementation of tactile feedback by modifying the perceived friction,” The European Physical Journal - Applied Physics, vol. 43, no. 1, pp. 123-135, 2008.
12. H. Iwata, H. Yano, F. Nakaizumi and R. Kawamura, “Project FEELEX: adding haptic surface to graphics,” in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, 2001.
13. M. Pitts, G. Burnett, L. Skrypchuk, T. Wellings, A. Attridge and M. Williams, “Visual-haptic feedback interaction in automotive touchscreen use,” Displays, vol. 33, no. 1, pp. 7-16, 2012.
14. M. Weiss, C. Wacharamanotham, S. Voelker and J. Borchers, “FingerFlux: near-surface haptic feedback on tabletops,” in Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, 2011.
15. C. Wusheng, W. Tianmiao and H. Lei, “Design of data glove and arm type haptic interface,” in Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS 2003), 2003.
16. T. Carter, S. Seah, B. Long, B. Drinkwater and S. Subramanian, “UltraHaptics: multi-point mid-air haptic feedback for touch surfaces,” in Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, 2013.
17. T. Hoshi, M. Takahashi, T. Iwamoto and H. Shinoda, “Noncontact tactile display based on radiation pressure of airborne ultrasound,” IEEE Transactions on Haptics, vol. 3, no. 3, pp. 155-165, 2010.
18. L. Gavrilov and E. Tsirulnikov, “Mechanisms of Stimulation Effects of Focused Ultrasound on Neural Structures: Role of Nonlinear Effects,” in Nonlinear Acoustics at the Beginning of the 21st Century, 2002, pp. 445-448.
19. K. Brookhuis, D. Waard and B. Mulder, “Measuring driving performance by car-following in traffic,” Ergonomics, vol. 37, no. 3, pp. 427-434, 1994.
20. National Highway Traffic Safety Administration (NHTSA), “Visual-Manual NHTSA Driver Distraction Guidelines For In-Vehicle Electronic Devices,” 2013.
... Various wearable devices such as gloves [73], armbands [74], or wrist-based devices [75,76] have been applied to provide haptic feedback. Besides wearable haptic devices, an array of ultrasonic transducers could also be used to provide haptic feedback for gestures [77] or spatial cues for digital visualisations [17,18]. The user evaluation indicates that the haptic feedback from ultrasonic transducers could improve the interaction accuracy [17] and reduce visual demand [77]. ...
... Besides wearable haptic devices, an array of ultrasonic transducers could also be used to provide haptic feedback for gestures [77] or spatial cues for digital visualisations [17,18]. The user evaluation indicates that the haptic feedback from ultrasonic transducers could improve the interaction accuracy [17] and reduce visual demand [77]. ...
Article
Full-text available
Augmented reality (AR) technologies can blend digital and physical space and serve a variety of applications intuitively and effectively. Specifically, wearable AR enabled by optical see-through (OST) AR head-mounted displays (HMDs) might provide users with a direct view of the physical environment containing digital objects. Besides, users could directly interact with three-dimensional (3D) digital artefacts using freehand gestures captured by OST HMD sensors. However, as an emerging user interaction paradigm, freehand interaction with OST AR still requires further investigation to improve user performance and satisfaction. Thus, we conducted two studies to investigate various freehand selection design aspects in OST AR, including target placement, size, distance, position, and haptic feedback on the hand and body. The user evaluation results indicated that 40 cm might be an appropriate target distance for freehand gestural selection. A large target size might lower the selection time and error rate, and a small target size could minimise selection effort. The targets positioned in the centre are the easiest to select, while those in the corners require extra time and effort. Furthermore, we discovered that haptic feedback on the body could lead to high user preference and satisfaction. Based on the research findings, we conclude with design recommendations for effective and comfortable freehand gestural interaction in OST AR.
... This is usually achieved by focusing algorithms applied to phased arrays comprising hundreds of ultrasound transducers. Studies have shown that by providing mid-air haptic feedback to infotainment systems in cars [11], digital kiosks and pervasive displays [12], user performance and experience can be improved significantly. Notably, ultrasound phased arrays have recently been able to generate multimodal volumetric displays for visual, tactile and audio presentation using acoustic trapping techniques [13], [14]. ...
... Modulating the focus (or foci) in time and/or space and at the right frequency causes perceptible vibrations on the skin, which has since then been termed as mid-air haptics [8], [17]; a technology commercialised by Ultrahaptics (now Ultraleap) since 2014. Applications of mid-air haptics include automotive human machine interfaces [11], wireless power transfer [18], digital signage [12], augmented, virtual, and mixed reality (AR/VR/MR) [19]- [21]. A comprehensive review article was recently published on this topic [10]. ...
Article
We present UltraButton a minimalist touchless button including haptic, audio and visual feedback costing only $200. While current mid-air haptic devices can be too bulky and expensive (around $2 k) to be integrated into simple mid-air interfaces such as point and select, we show how a clever arrangement of 83 ultrasound transducers and a new modulation algorithm can produce compelling mid-air haptic feedback and parametric audio at a minimal cost. To validate our prototype, we compared its haptic output to a commercially-available mid-air haptic device through force balance measurements and user perceived strength ratings and found no significant differences. With the addition of 20 RGB LEDs, a proximity sensor and other off-the-shelf electronics, we then propose a complete solution for a simple multimodal touchless button interface. We tested this interface in a second experiment that investigated user gestures and their dependence on system parameters such as the haptic and visual activation times and heights above the device. Finally, we discuss new interactions and applications scenarios for UltraButtons .
... The ability for UMH devices to display multiple focal points at once [4] has been proposed as a means of displaying multiple tactile elements (e.g. in mid-air tactile user interfaces [12]). In this scenario, it is imperative to understand how close such tactile elements can be in different conditions while still allowing a user to easily discriminate between elements. ...
Article
Ultrasound mid-air haptic (UMH) devices are a novel tool for haptic feedback, capable of providing localized vibrotactile stimuli to users at a distance. UMH applications largely rely on generating tactile shape outlines on the users’ skin. Here, we investigate how to achieve sensations of continuity or gaps within such 2D curves by studying the perception of pairs of amplitude-modulated (AM) focused ultrasound stimuli. On the one hand, we aim to investigate perceptual effects which may arise from providing simultaneous UMH stimuli. On the other, we wish to provide perception-based rendering guidelines for generating continuous or discontinuous sensations of tactile shapes. Finally, we hope to contribute towards a measure of the perceptually achievable resolution of UMH interfaces. We performed a user study to identify how far apart two focal points need to be in order to elicit a perceptual experience of two distinct stimuli separated by a gap. Mean gap detection thresholds were found at 32.3mm spacing between focal points, but a high within- and between-subject variability was observed. Pairs spaced below 15mm were consistently (>95%) perceived as a single stimulus, while pairs spaced 45mm apart were consistently (84%) perceived as two separate stimuli. To investigate the observed variability, we resort to acoustic simulations of the resulting pressure fields. These show a non-linear evolution of actual peak pressure spacing as a function of nominal focal point spacing. Beyond an initial threshold in spacing (between 15mm and 18mm), which we believe to be related to the perceived size of a focal point, the probability of detecting a gap between focal points appears to linearly increase with spacing. Our work highlights physical interactions and perceptual effects to consider when designing or investigating the perception of UMH shapes.
... The basic idea is that a collection of ultrasonic speakers is electronically controlled to create one or more high-pressure points in mid-air that induce a vibrotactile effect when touched by a human hand. Applications of this technology are far reaching (see recent review [2]), ranging from in-vehicle infotainment systems [3], and touchless kiosks [4], to controller-free VR interactions [5]. Much of the technological developments surrounding mid-air haptics have been enabled by hardware improvements, the creation of software toolkits, psychophysical studies about touch, and interactive prototypes that demonstrate the capabilities of the underlying haptic technology. ...
... Much of the technological development surrounding this core idea has been down to hardware improvements, the creation of software toolkits, deep psychophysical studies about touch, and a plethora of interactive prototypes that demonstrate the capabilities of the underlying haptic technology. The latter mostly includes haptic effects to accompany 3D holograms in AR/VR/MR during dexterous manipulation [8], and haptic feedback effects during gesture input in control interfaces found in cars [9] and kiosks [10]. ...
Conference Paper
Haptic devices have often been used to enhance a variety of audio experiences such as listening to music, meditating, wayfinding, accessibility, and communicating. In most cases, the haptic interface is wearable or handheld and therefore suffers from limitations related to ergonomics or a limited palette of haptic sensation effects. In this paper, we present a touchless audio-haptic demonstrator experience that enhances the immersive narrative of an emotional short story. To do so, we have created an audio-haptic mapping that is semantically congruent and have synchronized the presentation of audio and haptic effects to the narrative timeline. The haptic effects presented to the user's palm are both spatially and temporally modulated so that they convey a rich palette of sensations (e.g., tapping, direction, rotation, rain, electricity, etc.) that are triggered by keywords or events in the story.
... For example, in the form of a cutaneous push on the steering wheel for navigation [84], a tactor to convey warning cues for driver inattention [222], a pin array that creates tactile patterns for fingers and hand [173], a silicone touchscreen cover foil to increase tactile feedback [414], or a 3D printed stencil that makes underlying touchscreen controls tangible [66]. Besides, instead of skin contact with the tactile device, Shakeri et al. [338] and Harrington et al. [132] proposed mid-air ultrasonic tactile feedback for gesture interaction. ...
Preprint
Full-text available
Automotive user interfaces constantly change due to increasing automation, novel features, additional applications, and user demands. While in-vehicle interaction can utilize numerous promising modalities, no existing overview includes an extensive set of human sensors and actuators and interaction locations throughout the vehicle interior. We conducted a systematic literature review of 327 publications leading to a design space for in-vehicle interaction that outlines existing and lack of work regarding input and output modalities, locations, and multimodal interaction. To investigate user acceptance of possible modalities and locations inferred from existing work and gaps unveiled in our design space, we conducted an online study (N=48). The study revealed users' general acceptance of novel modalities (e.g., brain or thermal activity) and interaction with locations other than the front (e.g., seat or table). Our work helps practitioners evaluate key design decisions, exploit trends, and explore new areas in the domain of in-vehicle interaction.
... For example, in the form of a cutaneous push on the steering wheel for navigation [84], a tactor to convey warning cues for driver inattention [222], a pin array that creates tactile patterns for the fingers and hand [173], a silicone touchscreen cover foil to increase tactile feedback [414], or a 3D-printed stencil that makes underlying touchscreen controls tangible [66]. Furthermore, instead of requiring skin contact with the tactile device, Shakeri et al. [338] and Harrington et al. [132] proposed mid-air ultrasonic tactile feedback for gesture interaction. ...
Article
Full-text available
Automotive user interfaces constantly change due to increasing automation, novel features, additional applications, and user demands. While in-vehicle interaction can utilize numerous promising modalities, no existing overview includes an extensive set of human sensors and actuators and interaction locations throughout the vehicle interior. We conducted a systematic literature review of 327 publications, leading to a design space for in-vehicle interaction that outlines both existing work and gaps regarding input and output modalities, locations, and multimodal interaction. To investigate user acceptance of possible modalities and locations inferred from existing work and the gaps unveiled in our design space, we conducted an online study (N=48). The study revealed users' general acceptance of novel modalities (e.g., brain or thermal activity) and of interaction with locations other than the front of the vehicle (e.g., the seat or a table). Our work helps practitioners evaluate key design decisions, exploit trends, and explore new areas in the domain of in-vehicle interaction.
Chapter
Mid-air haptic feedback presents exciting new opportunities for useful and delightful interactive systems. However, with these opportunities come several design challenges that vary greatly depending on the application at hand. In this chapter, we reveal these challenges from a user experience perspective. To that end, we first provide a comprehensive literature review covering many of the different applications of the technology. Then, we present 12 design guidelines and make recommendations for effective mid-air haptic interaction designs and implementations. Finally, we suggest an iterative haptic design framework that can be followed to create a quality mid-air haptic experience.
Chapter
Mid-air technology is not well studied in the context of multisensory experience. Despite increasing advances in mid-air interaction and mid-air haptics, we still lack a good understanding of how such technologies might influence human behaviour and experience. Comparing this with the understanding we currently have of physical touch highlights the need for more knowledge in this area. In this chapter, I describe three areas of development that consider human multisensory perception and relate these to the study and use of mid-air haptics. I focus on three main challenges of developing multisensory mid-air interactions. First, I describe how crossmodal correspondence could improve the experience of mid-air touch. Then, I outline some opportunities to introduce mid-air touch to the study of multisensory integration. Finally, I discuss how this multisensory approach can benefit applications that encourage and support a sense of agency in interaction with autonomous systems. Considering these three contributions when developing mid-air technologies can provide a new multisensory perspective, resulting in the design of more meaningful and emotionally loaded mid-air interactions.
Conference Paper
Full-text available
Touchscreen Human-Machine Interfaces (HMIs) inherently demand some visual attention. By employing a secondary device to work in unison with a touchscreen, some of this demand may be alleviated. In a medium-fidelity driving simulator, twenty-four drivers completed four typical in-vehicle tasks, utilising each of four devices – touchscreen, rotary controller, steering wheel controls and touchpad (counterbalanced). Participants were then able to combine devices during a final ‘free-choice’ drive. Visual behaviour, driving/task performance and subjective ratings (workload, emotional response, preferences) indicated that in isolation the touchscreen was the most preferred/least demanding to use. In contrast, the touchpad was least preferred/most demanding, whereas the rotary controller and steering wheel controls were largely comparable across most measures. When provided with ‘free-choice’, the rotary controller and steering wheel controls presented as the most popular candidates, although this was task-dependent. Further work is required to explore these devices in greater depth and during extended periods of testing.
Chapter
Full-text available
The feasibility of using short pulses or amplitude modulation of focused ultrasound for noninvasive local stimulation of superficial and deep-seated neural structures in humans, to induce different somatic (tactile, warmth, cold, pain, etc.) and hearing sensations, is discussed. Such possibilities are of interest for the application of this method in the diagnosis of neurological, dermatological, hearing and other diseases and disorders involving changes in the perception of different sensations. We have investigated the main factors responsible for the stimulation effects of focused ultrasound. The results of the work show that nonlinear effects play an important role in the mechanisms of the stimulation effects of focused ultrasound on neural structures.
Conference Paper
Gestural interfaces are increasingly integrated into human-machine interfaces, but they often rely on visual and auditory feedback and neglect the haptic channel. With technologies providing tactile feedback in mid-air becoming more mature, it is important to investigate their specific properties. In this paper, we focus on pointing gestures and their combination with floating displays, more precisely tapping buttons. We report the results of a user study investigating in detail different parameters of feedback created through ultrasonic transducers. The results show that the feedback is perceived as most suitable when a modulation frequency of about 150–200 Hz is used, which confirms the findings on sensitivity to vibrotactile stimuli on surfaces. The duration of the signal, either 50 or 130 ms, did not have a significant effect. Moreover, we present a questionnaire consisting of word pairs to rate the subjective perception of haptic characteristics, to provide a basis for future research.
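The modulation-frequency finding can be made concrete with a brief sketch (our illustration, not the study's code): the ultrasonic carrier that forms the focal point is amplitude-modulated at a low frequency, and it is this envelope, rather than the inaudible carrier, that the skin perceives; the study suggests placing the envelope frequency in the 150–200 Hz band.

```python
import numpy as np

def am_envelope(duration_s, mod_freq_hz, update_rate_hz=16_000):
    """Intensity envelope (0..1) applied to the ultrasonic carrier so that the
    focal point pulses at a rate the hand's mechanoreceptors can feel.
    update_rate_hz is an assumed device update rate, for illustration only."""
    t = np.arange(int(duration_s * update_rate_hz)) / update_rate_hz
    return 0.5 * (1.0 + np.sin(2 * np.pi * mod_freq_hz * t))

# A 130 ms burst modulated at 200 Hz, within the range the study found most suitable.
envelope = am_envelope(duration_s=0.130, mod_freq_hz=200.0)
print(envelope.min(), envelope.max())  # envelope stays within [0, 1]
```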
Article
Touchscreen HMIs are commonly employed as the primary control interface and touch-point of vehicles. However, there has been very little theoretical work to model the demand associated with such devices in the automotive domain. Instead, touchscreen HMIs intended for deployment within vehicles tend to undergo time-consuming and expensive empirical testing and user trials, typically requiring fully-functioning prototypes, test rigs and extensive experimental protocols. While such testing is invaluable and must remain within the normal design/development cycle, there are clear benefits, both fiscal and practical, to the theoretical modelling of human performance. We describe the development of a preliminary model of human performance that makes a priori predictions of the visual demand (total glance time, number of glances and mean glance duration) elicited by in-vehicle touchscreen HMI designs, when used concurrently with driving. The model incorporates information theoretic components based on Hick-Hyman Law decision/search time and Fitts’ Law pointing time, and considers anticipation afforded by structuring and repeated exposure to an interface. Encouraging validation results, obtained by applying the model to a real-world prototype touchscreen HMI, suggest that it may provide an effective design and evaluation tool, capable of making valuable predictions regarding the limits of visual demand/performance associated with in-vehicle HMIs, much earlier in the design cycle than traditional design evaluation techniques. Further validation work is required to explore the behaviour associated with more complex tasks requiring multiple screen interactions, as well as other HMI design elements and interaction techniques. Results are discussed in the context of facilitating the design of in-vehicle touchscreen HMI to minimise visual demand.
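To make the structure of such a model concrete, the sketch below is our own illustration with invented coefficients, not the paper's fitted parameters: it combines a Hick-Hyman term for the time to decide among n equally likely on-screen options with a Fitts' Law term for the time to point at a target of width W at distance D. The published model additionally maps these components onto glance metrics and accounts for anticipation from structuring and repeated exposure, which the sketch does not attempt.

```python
import math

# Illustrative coefficients only; the real model fits its parameters empirically.
HICK_A, HICK_B = 0.2, 0.15    # s, s per bit of decision entropy
FITTS_A, FITTS_B = 0.1, 0.10  # s, s per bit of pointing difficulty

def decision_time(n_options: int) -> float:
    """Hick-Hyman Law: time grows with the log of the number of choices."""
    return HICK_A + HICK_B * math.log2(n_options + 1)

def pointing_time(distance_mm: float, width_mm: float) -> float:
    """Fitts' Law (Shannon formulation): time grows with the index of difficulty."""
    return FITTS_A + FITTS_B * math.log2(distance_mm / width_mm + 1)

def predicted_step_time(n_options: int, distance_mm: float, width_mm: float) -> float:
    """One search-and-select step: decide which target to hit, then hit it."""
    return decision_time(n_options) + pointing_time(distance_mm, width_mm)

# Example: choosing 1 of 8 menu buttons, 150 mm away, each 20 mm wide.
print(f"{predicted_step_time(8, 150.0, 20.0):.2f} s")
```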
Conference Paper
We introduce UltraHaptics, a system designed to provide multi-point haptic feedback above an interactive surface. UltraHaptics employs focused ultrasound to project discrete points of haptic feedback through the display and directly on to users' unadorned hands. We investigate the desirable properties of an acoustically transparent display and demonstrate that the system is capable of creating multiple localised points of feedback in mid-air. Through psychophysical experiments we show that feedback points with different tactile properties can be identified at smaller separations. We also show that users are able to distinguish between different vibration frequencies of non-contact points with training. Finally, we explore a number of exciting new interaction possibilities that UltraHaptics provides.
Conference Paper
Large touch screens have recently appeared in the automotive market, yet their usability while driving is still controversial. Flat screens do not provide haptic guidance and thus require visual attention to locate the interactive elements that are displayed. Thus, we need to think about new concepts to minimize the visual attention needed for interaction, to keep the driver's focus on the road and ensure safety. In this paper, we explore three different approaches. The first is designed to make use of proprioception. The second incorporates physical handles to ease orientation on a large flat surface. In the third, directional touch gestures are applied. We describe the results of a comparative study that investigates the required visual attention as well as task performance and perceived usability, in comparison to a state-of-the-art multifunctional controller. We found that direct touch buttons provide the best results regarding task completion time, but at a size of about 6x8 cm they were not yet large enough for blind interaction. Physical elements in and around the screen space were regarded as useful to ease orientation. With touch gestures, participants were able to reduce visual attention to a lower level than with the remote controller. Considering our findings, we argue that there are ways to make large screens more appropriate for in-car usage and thus harness the advantages they provide in other respects.
Article
Gesture-based interfaces, which let users control devices with, for example, hand or finger motions, are becoming increasingly popular. These interfaces utilize gesture-recognition algorithms to identify body movements. The systems then determine which device command a particular gesture represents and take the appropriate action. For example, moving a hand sideways might mean that a user wants to turn a page on an e-reader screen. Proponents say gesture recognition, which uses computer vision, image processing, and other techniques, is useful largely because it lets people communicate with a machine in a more natural manner, without a mouse or other intermediate device. Although the technology has long been discussed as a potentially useful, rich interface and several gesture-control products have been released over the years, it has never achieved mainstream status.
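As a toy illustration of the recognition step described here (not any particular product's algorithm), the sketch below classifies a horizontal swipe, such as the page-turning gesture mentioned above, by thresholding the net displacement of a tracked hand.

```python
def classify_swipe(x_positions_m, min_travel_m=0.10):
    """Classify a tracked hand trajectory as a left/right swipe or no gesture.

    x_positions_m: hand x-coordinates (metres) sampled over the gesture window.
    min_travel_m: minimum net horizontal travel to count as a deliberate swipe.
    """
    if len(x_positions_m) < 2:
        return "none"
    displacement = x_positions_m[-1] - x_positions_m[0]
    if displacement >= min_travel_m:
        return "swipe_right"   # e.g. turn to the next page
    if displacement <= -min_travel_m:
        return "swipe_left"    # e.g. turn back a page
    return "none"

print(classify_swipe([0.00, 0.04, 0.09, 0.14]))  # swipe_right
print(classify_swipe([0.00, 0.01, 0.02]))        # none
```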
Article
Touchscreen interfaces offer benefits in terms of flexibility and ease of interaction and as such their use has increased rapidly in a range of devices, from mobile phones to in-car technology. However, traditional touchscreens impose an inevitable visual workload demand that has implications for safety, especially in automotive use. Recent developments in touchscreen technology have enabled feedback to be provided via the haptic channel. A study was conducted to investigate the effects of visual and haptic touchscreen feedback on visual workload, task performance and subjective response using a medium-fidelity driving simulator. Thirty-six experienced drivers performed touchscreen ‘search and select’ tasks while engaged in a motorway driving task. The study utilised a 3 × 2 within-subjects design, with three levels of visual feedback: ‘immediate’, ‘delayed’, ‘none’; and two levels of haptic feedback: ‘visual only’, ‘visual + haptic’. Results showed that visual workload was increased when visual feedback was delayed or absent; however, introducing haptic feedback counteracted this effect, with no increases observed in glance time and count. Task completion time was also reduced when haptic feedback was enabled, while driving performance showed no effect due to feedback type. Subjective responses indicated that haptic feedback improved the user experience and reduced perceived task difficulty.
Article
This paper describes the implementation and initial evaluation of variable-friction displays. We first analyse a device that comprises the stator of an ultrasonic motor supplied by only one channel. In this way, the stator does not induce any rotary movement but creates a slippery feeling on its surface. Considering the range of frequency and amplitude needed to obtain this phenomenon, we interpret it as the squeeze-film effect, which may be the dominant factor causing an impression of lubrication. This effect is thus able to decrease the friction coefficient between the fingertip and the stator as a function of the vibration amplitude. Moreover, if we add a position sensor, we can create a textured surface by alternately generating sliding and braking sensations through tuning of the vibration amplitude of the wave. Then, based on the principle of the first device, another device is proposed in order to enable free exploration of the surface, in accordance with ergonomic requirements.
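The friction-modulation idea can be sketched as follows. This is a purely illustrative model with invented numbers, not the authors' characterisation: effective friction is assumed to fall as vibration amplitude rises (the squeeze-film effect), and a position sensor lets the device alternate slippery and braking bands under the moving finger to simulate a texture.

```python
def effective_friction(mu_static, amplitude_um, amplitude_max_um=2.0, mu_floor=0.1):
    """Illustrative squeeze-film model: friction decreases with vibration amplitude
    down to a residual floor; a real device would need empirical calibration."""
    reduction = min(amplitude_um / amplitude_max_um, 1.0)
    return mu_floor + (mu_static - mu_floor) * (1.0 - reduction)

def texture_amplitude(finger_x_mm, period_mm=4.0, amplitude_um=2.0):
    """Alternate slippery and braking bands as the finger moves, simulating a grating."""
    in_slippery_band = (finger_x_mm % period_mm) < (period_mm / 2.0)
    return amplitude_um if in_slippery_band else 0.0

for x in [0.0, 1.0, 2.5, 3.5, 5.0]:
    a = texture_amplitude(x)
    print(f"x={x:4.1f} mm  amplitude={a:.1f} um  mu={effective_friction(0.8, a):.2f}")
```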