Increasing the User Experience in Autonomous Driving through different Feedback Modalities
Tobias Schneider
Stuttgart Media University
Stuttgart, Germany
schneidert@hdm-stuttgart.de
Sabiha Ghellal
Stuttgart Media University
Stuttgart, Germany
ghellal@hdm-stuttgart.de
Steve Love
The Glasgow School of Art
Glasgow, Scotland
s.love@gsa.ac.uk
Ansgar Gerlicher
Stuttgart Media University
Stuttgart, Germany
gerlicher@hdm-stuttgart.de
ABSTRACT
Within the ongoing process of defining autonomous driving solutions, experience design may represent an important interface between humans and the autonomous vehicle. This paper presents an empirical study that uses different ways of unimodal communication in autonomous driving to communicate awareness and intent of autonomous vehicles. The goal is to provide recommendations for feedback solutions within holistic autonomous driving experiences. 22 test subjects took part in four autonomous, simulated virtual reality shuttle rides and were presented with different unimodal feedback in the form of light, sound, visualisation, text and vibration. The empirical study showed that, compared to a no-feedback baseline ride, light and visualisation were able to create a positive user experience.
CCS CONCEPTS
• Human-centered computing → Empirical studies in HCI; User centered design; Interaction techniques; Interaction devices.
KEYWORDS
user experience, explainable artificial intelligence, autonomous driving
ACM Reference Format:
Tobias Schneider, Sabiha Ghellal, Steve Love, and Ansgar Gerlicher. 2021. Increasing the User Experience in Autonomous Driving through different Feedback Modalities. In 26th International Conference on Intelligent User Interfaces (IUI '21), April 14–17, 2021, College Station, TX, USA. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3397481.3450687
1 INTRODUCTION
With vehicle assistant systems being on the rise for multiple years, autonomous vehicles (AVs) are starting to become reality [3, 7, 28, 32, 35]. Previous studies show that losing the possibility of control may lead to a negative User Experience (UX), especially in urban
areas with many situations happening at the same time [11, 17, 36]. Providing appropriate feedback could address potential issues and lead to more positive experiences. We argue that autonomous driving immersion via feedback modalities should play a major role in the holistic User Experience Design (UXD) of an automotive vehicle and that, by communicating awareness and intent via unimodal feedback modalities, passengers can gain a better understanding of the AV's actions and an improved overall UX. In this paper, we introduce and discuss autonomous driving immersion focusing on light, audio, text, visualisation and vibration feedback.
2 RELATED WORK
Making artificial systems understandable to humans is part of the explainable artificial intelligence (XAI) research area [1, 8, 23] and focuses on explainability and explanation [21]. One problem is that experts in the field of artificial intelligence (AI) may not always be the right ones to explain these complex models [8, 21, 22, 31]. Therefore, simpler, context-dependent and well-timed human-friendly ways of communication have to be found [13, 21]. The connection of UXD and XAI and the possible increase in transparency in human-AI interaction motivate this paper. Using single (unimodal) or multiple (multimodal) feedback modalities to communicate awareness and intent in (semi-)autonomous driving has been a topic of research in recent years. Multiple studies have shown the usefulness of feedback modalities such as light, audio, visualisation, text or vibration and their combinations to convey information to a driver or passenger [12, 16, 18, 19, 24–27, 34]. They can increase the understanding of a machine's decisions and its limits [4, 10, 20, 30, 37] as well as help to increase a passenger's feeling of trust and safety [9, 15]. However, none of these studies has focused purely on the UX in autonomous driving when it comes to communicating the AI's awareness and intent. This paper therefore provides a first step in that direction.
3 PROTOTYPE
3.1 Driving Situations
In the context of autonomous driving, different types of situations can occur. During a workshop with researchers and automotive employees at a research campus, four categories of driving situations were identified: proactive and reactive ones, each of which can be either critical or non-critical.
Figure 1: Three of the five designed modalities. Left: reactive critical - light; middle: proactive non-critical - visualisation; right: proactive non-critical - text.
Proactive Non-Critical: The AV has sufficient reaction time for a situation that does not endanger human health or lives. For example, a small construction site is in the vehicle's way and needs to be bypassed. The AV has to wait until oncoming traffic has passed and can then bypass the construction site.

Proactive Critical: The AV has sufficient reaction time for a situation that does endanger human health or lives. For example, a car in oncoming traffic is overtaking the car in front. The AV recognises the situation and brakes early to avoid a dangerous situation.

Reactive Non-Critical: The AV has insufficient reaction time for a situation that does not endanger human health or lives. For example, a dog suddenly appears behind a parked car on the side and runs across the street. The AV has to perform an emergency brake.

Reactive Critical: The AV has insufficient reaction time for a situation that does endanger human health or lives. For example, a car in oncoming traffic is overtaking the car in front. The AV has to perform an emergency brake to avoid an accident.
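Read together, the four categories form a 2x2 taxonomy over reaction time and criticality. As a purely illustrative sketch (the paper publishes no code, and all names below are ours), this structure could be encoded as follows:

```python
from dataclasses import dataclass
from enum import Enum

class Timing(Enum):
    PROACTIVE = "proactive"  # sufficient reaction time
    REACTIVE = "reactive"    # insufficient reaction time

class Criticality(Enum):
    NON_CRITICAL = "non-critical"  # no danger to human health or lives
    CRITICAL = "critical"          # danger to human health or lives

@dataclass(frozen=True)
class DrivingSituation:
    description: str
    timing: Timing
    criticality: Criticality

# The four situations used in the prototype, one per taxonomy cell.
SITUATIONS = [
    DrivingSituation("bypass construction site", Timing.PROACTIVE, Criticality.NON_CRITICAL),
    DrivingSituation("early brake for oncoming overtaker", Timing.PROACTIVE, Criticality.CRITICAL),
    DrivingSituation("emergency brake for dog", Timing.REACTIVE, Criticality.NON_CRITICAL),
    DrivingSituation("emergency brake for ghost driver", Timing.REACTIVE, Criticality.CRITICAL),
]
```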
3.2 Feedback Design
A virtual reality (VR) prototype was created in Unity [33]. Following the aforementioned studies' research, a set of five different feedback modalities was designed: light, visualisation, text, audio and vibration, which all offer different depths of information.
Light: Based on the related work, light bars at the front left and right of the vehicle use a kind of traffic-light system to indicate driving status and reactions [20, 30]. White: everything is normal; green: an obstacle was recognised, but a reaction is not necessary; yellow: an obstacle was recognised, and the vehicle will brake; red: a risk of collision was recognised, and the vehicle will perform an emergency braking. If events happen at the front left of the vehicle, the left light bar changes colour; the same applies to the right side. If something happens directly in front of the vehicle, both light bars change colour.
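To make the mapping concrete, the following Python sketch shows one plausible implementation of the described traffic-light logic. It is our illustration, not the prototype's code, and the 10-degree threshold for "directly in front" is an assumption:

```python
from enum import Enum

class Reaction(Enum):
    NORMAL = "white"      # everything is normal
    RECOGNISED = "green"  # obstacle recognised, no reaction necessary
    BRAKING = "yellow"    # obstacle recognised, the vehicle will brake
    EMERGENCY = "red"     # collision risk, emergency braking

def light_bar_colours(reaction: Reaction, bearing_deg: float) -> dict:
    """Colour of the left and right light bar for an event at a given
    bearing (0 deg = straight ahead, negative = left of the vehicle)."""
    if abs(bearing_deg) <= 10:  # "directly in front" -- threshold assumed
        return {"left": reaction.value, "right": reaction.value}
    if bearing_deg < 0:  # event at the front left
        return {"left": reaction.value, "right": Reaction.NORMAL.value}
    return {"left": Reaction.NORMAL.value, "right": reaction.value}
```

For example, `light_bar_colours(Reaction.BRAKING, -25)` would turn only the left bar yellow.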
Audio: Passengers were presented with abstract sounds for three different types of situations, so-called auditory icons or earcons [5]. Continue drive: a short, relaxed-sounding two-note chime signalling that the shuttle continues its drive. Reaction: a neutral-sounding two-note chime signalling that a situation was recognised and the shuttle will react. Warning: three consecutive beeps that sound alerting and communicate dangerous situations.
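For readers who want to prototype comparable earcons, the sketch below synthesises three abstract cues with NumPy. It is only a hypothetical approximation: the paper specifies the character of each earcon, not its pitches or durations.

```python
import numpy as np

SR = 44100  # audio sample rate in Hz

def tone(freq_hz: float, dur_s: float) -> np.ndarray:
    """A single sine tone with a linear fade-out to avoid clicks."""
    t = np.linspace(0.0, dur_s, int(SR * dur_s), endpoint=False)
    return np.sin(2 * np.pi * freq_hz * t) * np.linspace(1.0, 0.0, t.size)

gap = np.zeros(int(SR * 0.05))  # 50 ms of silence between beeps

# Pitches and note lengths are assumptions, chosen only to match the
# described character of each cue.
continue_drive = np.concatenate([tone(523.25, 0.15), tone(659.25, 0.30)])  # relaxed, rising chime
reaction = np.concatenate([tone(440.0, 0.15), tone(440.0, 0.30)])          # neutral two-note chime
warning = np.concatenate([tone(880.0, 0.12), gap] * 3)                     # three alerting beeps
```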
Visualisation: A visualisation, inspired by the world-in-miniature representation of [15] and by Tesla and Waymo [32, 35], is shown to the passengers via a display. It highlights recognised objects, vehicles and pedestrians as well as route information.
Text: When the vehicle performs actions related to driving situations, they are displayed via a short text. The focus lies on why-messages, as introduced in [18], where testers reported a better driving performance. For the four driving situations in the prototype, the display reads "construction site", "overtaking", "!!animal!!" and "!!ghost driver!!". Visually, it is always displayed the same way.
Vibration: Vibration patterns in the seat are used to communicate different types of events, as in [16, 24, 25]. Proactive situation: vibrating three times with a duration of 0.3 seconds per vibration. Reactive situation: vibrating six times with a shorter duration of 0.1 seconds per vibration.
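Since the two patterns are fully specified by pulse count and pulse duration, they reduce to a small timing routine. The sketch below is illustrative (the actual motor control is described in Section 3.3); the inter-pulse pause is an assumption.

```python
import time

# (pulse count, pulse duration in seconds) as described above; the pause
# between pulses is an assumption, the paper does not specify it.
PATTERNS = {
    "proactive": (3, 0.3),
    "reactive": (6, 0.1),
}

def play_pattern(kind: str, set_motor) -> None:
    """Drive an on/off actuator through the callback set_motor(bool)."""
    pulses, duration = PATTERNS[kind]
    for _ in range(pulses):
        set_motor(True)
        time.sleep(duration)
        set_motor(False)
        time.sleep(duration)  # assumed inter-pulse pause
```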
3.3 Driving Simulator
For each of the four aforementioned driving situations, a virtual driving scene was created, taking place in a single-lane urban area. The AV drove at around 30 km/h, and each driving situation took around 25 seconds to experience. Apart from the vibration, all feedback modalities are part of the virtual world. The vibration was realised via a vibration motor connected to an Arduino Uno [2], which communicated directly with the Unity prototype. The motor was installed in a small metal tube which lay on the sitting surface.
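The paper does not document the wire protocol between the Unity prototype and the Arduino. Purely as an illustration of such a bridge, the following sketch sends a hypothetical one-byte command per pattern over USB serial using the pyserial package; the port name and byte values are assumptions.

```python
import serial  # the pyserial package

# Hypothetical one-byte protocol: b"P" = proactive, b"R" = reactive.
COMMANDS = {"proactive": b"P", "reactive": b"R"}

def send_vibration(kind: str, port: str = "/dev/ttyACM0") -> None:
    """Send a vibration command to the Arduino over USB serial."""
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        link.write(COMMANDS[kind])
```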
                  | No Feedback    | Light          | Sound          | Visualisation  | Text           | Vibration
Pragmatic Quality | 0.55 (α=0.77)  | 2.05 (α=0.94)  | 1.19 (α=0.88)  | 1.15 (α=0.94)  | 0.85 (α=0.82)  | 0.01 (α=0.79)
Hedonic Quality   | -0.40 (α=0.86) | 0.63 (α=0.91)  | -0.35 (α=0.93) | 1.60 (α=0.63)  | -0.41 (α=0.85) | 0.63 (α=0.87)
Overall           | 0.07           | 1.34 (p<.001)  | 0.42 (p=.171)  | 1.38 (p<.001)  | 0.22 (p=.864)  | 0.32 (p=.442)

Table 1: UEQ-S results of the different feedback modalities. The p-values refer to the comparison against the no-feedback baseline.
4 STUDY DESIGN
Based on related work, we present the following hypothesis:

Hypothesis H1: Different single feedback modalities in the form of light, audio, visualisation, text and vibration will create a positive passenger UX in different driving situations.
A mixed-method within-subjects design was used for this experiment. All testers experienced all four driving situations with all feedback modalities, which were counterbalanced with a Latin square design. All subjects were first-time users with no prior information about the different modalities. The UX, in the form of pragmatic and hedonic quality [14], was measured using the UEQ-S [29]. Testers were asked to answer the questionnaire regarding the feedback they experienced during the rides. For the baseline ride without any feedback, they were told to rate only the shuttle's driving actions as feedback. Qualitative feedback was collected during and after the tests via the think-aloud method and an unstructured interview. Afterwards, testers were asked to state which feedback modality they liked best in which situation and which pair of feedback modalities they would like to have in each situation.
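The paper names a Latin square but does not spell out the construction. A common choice for an even number of conditions is a balanced (Williams) Latin square, sketched below for the six ride conditions; this is a standard construction, not necessarily the authors' exact scheme.

```python
def balanced_latin_square(conditions):
    """Williams design for an even number of conditions: every condition
    appears once in every position, and each ordered pair of adjacent
    conditions occurs equally often across the generated orders."""
    n = len(conditions)
    # First row follows the pattern 0, 1, n-1, 2, n-2, ...
    first_row = [0] + [(i + 1) // 2 if i % 2 else n - i // 2 for i in range(1, n)]
    return [[conditions[(idx + r) % n] for idx in first_row] for r in range(n)]

orders = balanced_latin_square(
    ["no feedback", "light", "sound", "visualisation", "text", "vibration"])
# Six orders; with 22 participants, the orders would be cycled through.
```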
The 22 participants (12 female, mean age = 28.36, SD = 4.04) did not share a common background and were recruited from a university (students and employees), different research institutes and industry. All of them had a valid driver's license and no prior experience with fully autonomous driving. All reported normal or corrected-to-normal vision and no physical constraints. Since all of the testers were first-time users, they were introduced to the prototype via a VR tutorial which explained the different feedback modalities. They did not experience any distractions or a secondary task during the different drives. Each test took roughly one hour.
5 RESULTS AND DISCUSSION
5.1 User Experience Questionnaire - Short
Table 1 shows the results for the different feedback rides. Reliability was measured using Cronbach's alpha [6]. The UEQ-S treats values > 0.8 as a positive evaluation and values < -0.8 as a negative evaluation. Since the UEQ-S data was not normally distributed, a Friedman two-way analysis was performed.
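Both measures are available in standard Python tooling; the following sketch, with placeholder data instead of the study's responses, shows how Cronbach's alpha and the Friedman test could be computed.

```python
import numpy as np
from scipy.stats import friedmanchisquare

def cronbach_alpha(items: np.ndarray) -> float:
    """items: participants x questionnaire-items matrix for one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Placeholder overall scores: 22 participants x 6 conditions (columns:
# no feedback, light, sound, visualisation, text, vibration).
scores = np.random.default_rng(0).normal(size=(22, 6))
stat, p = friedmanchisquare(*scores.T)  # one sample array per condition
```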
Looking at the results, there is an indication that hypothesis H1 is partly valid. For every feedback modality, the UEQ-S scores were higher than the no-feedback baseline. However, not all feedback modalities created a significantly better UX. The feedback modalities that showed only neutral overall scores were sound, text and vibration. Sound had a positive pragmatic quality (PQ) but a slightly negative, still neutral hedonic quality (HQ). This is consistent with the testers' statements that the sound feedback might be annoying in the long run (10 mentions). The feedback modality text scored the worst overall of all feedback modalities. It was described as redundant (6), as missing some sort of colour highlight (10) and as taking too long to read and process (6). Vibration also had a low overall score and is the only feedback modality with a lower PQ than the no-feedback baseline. The results are supported by testers saying that vibration is annoying and unpleasant (11), not always understandable (3) and creates a feeling that one needs to act, which they cannot as passive passengers (4).
The feedback modalities light and visualisation both created a significantly positive UX compared to the baseline (p<.001). The light's high positive PQ is supported by it being described as useful and well understandable in some or all situations (18). However, it only scored a neutral HQ, which might be due to too many gradients of light, which felt unnecessary and provided too much feedback (12). The feedback modality visualisation is the only one that scored positively for both PQ and HQ and achieved the best overall score of all modalities. Its positive PQ is supported by the testers' statements that it is helpful (7). Highlighting relevant things in the visualisation (7) might increase its PQ even more. The high HQ is supported by testers stating that it is reassuring to see what the AV sees (8).
5.2 Favourite Feedback Modalities
Testers preferred softer, less intrusive feedback for the proactive situations (light or visualisation) and more prominent feedback for the reactive ones (sound or light), which is consistent with what Politis et al. found [26]. Looking at the desired feedback modality pairs, testers wished for the pairs light & visualisation and visualisation & text for the proactive situations. This is supported by their think-aloud statements that the visualisation is missing an intention (6) and should be combined with text (3). For the reactive situations, more prominent feedback was preferred again, with light being extended by either sound or vibration.
6 LIMITATIONS
As with all empirical studies, there are certain limitations to the experiment. While VR is an immersive medium and situations like the critical overtaking scared testers, showing that there is an emotional reaction to the situations, a digital world is always only a representation of the real world and may lack immersion due to limitations in graphics, sound or g-forces. Moreover, testers were told to focus only on the ride to achieve comparability between the different feedback modalities. In a real-world scenario, passengers of an AV might be working, listening to music or having a conversation
and might not focus on the ride as much. Furthermore, all testers experienced the different modalities for the first time and only for four rides. Also, the tested sample was relatively young, with an average age of 28.
7 CONCLUSION & OUTLOOK
The study shows that, overall, the different feedback modalities were able to create a better UX compared to the no-feedback baseline. However, only light and visualisation were able to create a significantly positive UX. For the different driving situations, testers preferred softer, less intrusive feedback modalities for proactive situations and more prominent feedback for the reactive ones. This holds for unimodal feedback as well as for feedback pairs and shows that it might be reasonable to present an AV's driving decisions with different feedback modalities depending on the situation. Further studies will look at the combination of light and visualisation and their impact on the understanding of an AV's actions as well as their implications for the UX of an autonomous ride.
ACKNOWLEDGMENTS
This research was carried out as part of the FlexCAR project of the
Arena2036 research campus and was funded by the German Federal
Ministry of Education and Research (funding number: 02P18Q647).
REFERENCES
[1] Amina Adadi and Mohammed Berrada. 2018. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 6 (2018), 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
[2] Arduino. 2020. Arduino Uno Rev3. Retrieved January 2020 from https://store.arduino.cc/arduino-uno-rev3
[3] Audi. 2020. Autonomous Driving. Retrieved January 2020 from https://www.audi.com/en/experience-audi/mobility-and-trends/autonomous-driving.html
[4] Johannes Beller, Matthias Heesen, and Mark Vollrath. 2013. Improving the Driver-Automation Interaction: An Approach Using Automation Uncertainty. Human Factors 55, 6 (2013), 1130–1141.
[5] Stephen Brewster, Peter C. Wright, and Alistair D. N. Edwards. 1993. An evaluation of earcons for use in auditory human-computer interfaces. In Proceedings of the INTERCHI '93 Conference on Human Factors in Computing Systems. ACM Press, New York, NY, USA, 222–227.
[6] Lee J. Cronbach. 1951. Coefficient alpha and the internal structure of tests. Psychometrika 16, 3 (Sept. 1951), 297–334.
[7] Daimler AG. 2020. Autonomous Driving. Retrieved January 2020 from https://www.daimler.com/innovation/product-innovation/autonomous-driving/
[8] Filip Karlo Dosilovic, Mario Brcic, and Nikica Hlupic. 2018. Explainable artificial intelligence: A survey. In 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). IEEE, 0210–0215.
[9] Fredrick Ekman, Mikael Johansson, and Jana Sochor. 2016. To See or Not to See: The Effect of Object Recognition on Users' Trust in "Automated Vehicles". In Proceedings of the 9th Nordic Conference on Human-Computer Interaction. ACM Press, New York, NY, USA, 1–4.
[10] Sarah Faltaous, Martin Baumann, Stefan Schneegass, and Lewis Chuang. 2018. Design Guidelines for Reliability Communication in Autonomous Vehicles. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM Press, New York, NY, USA, 258–267.
[11] Anna-Katharina Frison, Philipp Wintersberger, Tianjia Liu, and Andreas Riener. 2019. Why do you like to drive automated? A context-dependent analysis of highly automated driving to elaborate requirements for intelligent user interfaces. In Proceedings of the 24th International Conference on Intelligent User Interfaces. ACM Press, New York, NY, USA, 528–537.
[12] Nick Gang, Srinath Sibi, Romain Michon, Brian K. Mok, Chris Chafe, and Wendy Ju. 2018. Don't Be Alarmed: Sonifying Autonomous Vehicle Perception to Increase Situation Awareness. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM Press, New York, NY, USA, 237–246.
[13] Jacob Haspiel, Na Du, Jill Meyerson, Lionel P. Robert Jr., Dawn Tilbury, X. Jessie Yang, and Anuj K. Pradhan. 2018. Explanations and expectations: Trust building in automated vehicles. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. Association for Computing Machinery, 119–120.
[14] Marc Hassenzahl. 2007. The hedonic/pragmatic model of user experience. Towards a UX manifesto 10 (2007).
[15] Renate Häuslschmid, Max von Bülow, Bastian Pfleging, and Andreas Butz. 2017. Supporting Trust in Autonomous Driving. ACM, New York, NY, USA.
[16] Cristy Ho, Hong Z. Tan, and Charles Spence. 2005. Using spatial vibrotactile cues to direct visual attention in driving scenes. Transportation Research Part F: Traffic Psychology and Behaviour 8, 6 (Nov. 2005), 397–412.
[17] Myounghoon Jeon, Andreas Riener, Jason Sterkenburg, Ju-Hwan Lee, Bruce N. Walker, and Ignacio Alvarez. 2018. An International Survey on Automated and Electric Vehicles: Austria, Germany, South Korea, and USA. In Digital Human Modeling. Applications in Health, Safety, Ergonomics, and Risk Management. Springer, Cham, 579–587.
[18] Jeamin Koo, Jungsuk Kwac, Wendy Ju, Martin Steinert, Larry Leifer, and Clifford Nass. 2014. Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. International Journal on Interactive Design and Manufacturing (IJIDeM) 9, 4 (April 2014), 269–275.
[19] Bridget A. Lewis, B. N. Penaranda, Daniel M. Roberts, and Carryl L. Baldwin. 2017. Effectiveness of Bimodal Versus Unimodal Alerts for Distracted Drivers. In Driving Assessment Conference. University of Iowa, Iowa City, IA, 376–382.
[20] B. W. Meerbeek, C. de Bakker, Y. A. W. de Kort, E. J. van Loenen, and T. Bergman. 2016. Automated blinds with light feedback to increase occupant satisfaction and energy saving. Building and Environment 103 (July 2016), 70–85.
[21] Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267 (Feb. 2019), 1–38.
[22] Tim Miller, Piers Howe, and Liz Sonenberg. 2017. Explainable AI: Beware of Inmates Running the Asylum. Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences. arXiv preprint arXiv:1712.00547 (2017).
[23] Sina Mohseni, Niloofar Zarei, and Eric D. Ragan. 2018. A Survey of Evaluation Methods and Measures for Interpretable Machine Learning. CoRR abs/1811.11839 (2018). arXiv:1811.11839 http://arxiv.org/abs/1811.11839
[24] Ioannis Politis, Stephen Brewster, and Frank Pollick. 2014. Evaluating multimodal driver displays under varying situational urgency. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM Press, New York, NY, USA, 4067–4076.
[25] Ioannis Politis, Stephen Brewster, and Frank Pollick. 2015. Language-based multimodal displays for the handover of control in autonomous cars. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM Press, New York, NY, USA, 3–10.
[26] Ioannis Politis, Stephen Brewster, and Frank Pollick. 2015. To Beep or Not to Beep? Comparing Abstract versus Language-Based Multimodal Driver Displays. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM Press, New York, NY, USA, 3971–3980.
[27] Ioannis Politis, Stephen Brewster, and Frank Pollick. 2017. Using Multimodal Displays to Signify Critical Handovers of Control to Distracted Autonomous Car Drivers. IJMHCI 9, 3 (2017), 1–16.
[28] SAE On-Road Automated Vehicle Standards Committee and others. 2018. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International, Warrendale, PA, USA (2018).
[29] Martin Schrepp, Andreas Hinderks, and Jörg Thomaschewski. 2017. Design and Evaluation of a Short Version of the User Experience Questionnaire (UEQ-S). IJIMAI 4, 6 (2017), 103.
[30] Bobbie D. Seppelt and John D. Lee. 2007. Making adaptive cruise control (ACC) limits visible. International Journal of Human-Computer Studies 65, 3 (March 2007), 192–205.
[31] Erik Strumbelj and Igor Kononenko. 2014. Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems 41, 3 (2014), 647–665.
[32] Tesla. 2020. Autopilot. Retrieved January 2020 from https://www.tesla.com/autopilot?redirect=no
[33] Unity. 2020. Unity. Retrieved January 2020 from https://unity.com/
[34] Marcel Walch, Kristin Lange, Martin Baumann, and Michael Weber. 2015. Autonomous driving: investigating the feasibility of car-driver handover assistance. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM Press, New York, NY, USA, 11–18.
[35] Waymo. 2020. Waymo. Retrieved January 2020 from https://waymo.com
[36] Philipp Wintersberger, Andreas Riener, and Anna-Katharina Frison. 2016. Automated Driving System, Male, or Female Driver: Who'd You Prefer? Comparative Analysis of Passengers' Mental Conditions, Emotional States & Qualitative Feedback. ACM, New York, NY, USA.
[37] Robert Wortham, Andreas Theodorou, and Joanna Bryson. 2016. What does the robot think? Transparency as a fundamental design requirement for intelligent systems. In IJCAI-2016 Ethics for Artificial Intelligence Workshop.