Increasing the User Experience in Autonomous Driving through different Feedback Modalities
Tobias Schneider
Stuttgart Media University
Stuttgart, Germany
schneidert@hdm-stuttgart.de
Sabiha Ghellal
Stuttgart Media University
Stuttgart, Germany
ghellal@hdm-stuttgart.de
Steve Love
The Glasgow School of Art
Glasgow, Scotland
s.love@gsa.ac.uk
Ansgar Gerlicher
Stuttgart Media University
Stuttgart, Germany
gerlicher@hdm-stuttgart.de
ABSTRACT
Within the ongoing process of defining autonomous driving solutions, experience design may represent an important interface between humans and the autonomous vehicle. This paper presents an empirical study that uses different ways of unimodal communication in autonomous driving to communicate awareness and intent of autonomous vehicles. The goal is to provide recommendations for feedback solutions within holistic autonomous driving experiences. 22 test subjects took part in four autonomous, simulated virtual reality shuttle rides and were presented with different unimodal feedback in the form of light, sound, visualisation, text and vibration. The empirical study showed that, compared to a no-feedback baseline ride, light and visualisation were able to create a positive user experience.
CCS CONCEPTS
• Human-centered computing → Empirical studies in HCI; User centered design; Interaction techniques; Interaction devices.
KEYWORDS
user experience, explainable artificial intelligence, autonomous driving
ACM Reference Format:
Tobias Schneider, Sabiha Ghellal, Steve Love, and Ansgar Gerlicher. 2021. Increasing the User Experience in Autonomous Driving through different Feedback Modalities. In 26th International Conference on Intelligent User Interfaces (IUI '21), April 14–17, 2021, College Station, TX, USA. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3397481.3450687
IUI '21, April 14–17, 2021, College Station, TX, USA
© 2021 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-8017-1/21/04.
https://doi.org/10.1145/3397481.3450687

1 INTRODUCTION
With vehicle assistant systems being on the rise for multiple years, autonomous vehicles (AVs) are starting to become a reality [3, 7, 28, 32, 35]. Previous studies prove that losing the possibility of control may lead to a negative User Experience (UX), especially in urban areas with many situations happening at the same time [11, 17, 36]. Providing appropriate feedback could address potential issues and lead to more positive experiences. We argue that autonomous driving immersion via feedback modalities should play a major role in the holistic User Experience Design (UXD) of an automotive vehicle and that, by communicating awareness and intent via unimodal feedback modalities, passengers can gain a better understanding of the AV's actions and an improved overall UX. In this paper, we introduce and discuss autonomous driving immersion focusing on light, audio, text, visualisation and vibration feedback.
2 RELATED WORK
Making artificial systems understandable to humans is part of the explainable artificial intelligence (XAI) research area [1, 8, 23] and focuses on explainability and explanation [21]. One problem is that experts in the field of artificial intelligence (AI) may not always be the right ones to explain these complex models [8, 21, 22, 31]. Therefore, simpler, context-dependent and well-timed human-friendly ways of communication have to be found [13, 21]. The connection of UXD and XAI and the possible increase in transparency in the human-AI interaction motivate this paper. Using single (unimodal) or multiple (multimodal) feedback modalities to communicate awareness and intent in (semi-)autonomous driving has been a topic of research in recent years. Multiple studies have shown the usefulness of feedback modalities such as light, audio, visualisation, text or vibration and their combinations to convey information to a driver or passenger [12, 16, 18, 19, 24–27, 34]. They can increase the understanding of a machine's decision and its limits [4, 10, 20, 30, 37] as well as help to increase a passenger's feeling of trust and safety [9, 15]. However, none of these studies has purely focused on the UX in autonomous driving when it comes to communicating the AI's awareness and intent. Therefore, this paper aims to provide a first step in this direction.
3 PROTOTYPE
3.1 Driving Situations
In the context of autonomous driving, different types of situations can occur. During a workshop with researchers and automotive employees at a research campus, the following four different categories of driving situations were identified: proactive and reactive ones, which both can either be critical or non-critical.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Figure 1: Three out of the five designed modalities. Left: reactive critical - light, middle: proactive non-critical - visualisation, right: proactive non-critical - text.
Proactive Non-Critical: The AV has sufficient reaction time for a situation that does not endanger human health or lives. For example, a small construction site is in the vehicle's way and needs to be bypassed. The AV has to wait until oncoming traffic has passed and can then bypass the construction site.

Proactive Critical: The AV has sufficient reaction time for a situation that does endanger human health or lives. For example, a car in oncoming traffic is overtaking the car in front. The AV recognises the situation and brakes early to avoid a dangerous situation.

Reactive Non-Critical: The AV has insufficient reaction time for a situation that does not endanger human health or lives. For example, a dog suddenly appears behind a parked car on the side and runs across the street. The AV has to perform an emergency brake.

Reactive Critical: The AV has insufficient reaction time for a situation that does endanger human health or lives. For example, a car in oncoming traffic is overtaking the car in front. The AV has to perform an emergency brake to avoid an accident.
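The four categories form a 2×2 taxonomy along two binary dimensions: whether the reaction time is sufficient and whether the situation endangers human health or lives. As an illustrative sketch (the function name and string encoding are ours, not from the study), the mapping can be written as:

```python
def classify_situation(sufficient_reaction_time: bool, endangers_life: bool) -> str:
    """Map the two workshop dimensions onto the four driving-situation categories."""
    timing = "proactive" if sufficient_reaction_time else "reactive"
    severity = "critical" if endangers_life else "non-critical"
    return f"{timing} {severity}"

# e.g. the dog running across the street: too little time, no danger to humans
print(classify_situation(False, False))  # reactive non-critical
```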
3.2 Feedback Design
A virtual reality (VR) prototype was created in Unity [33]. Following the aforementioned studies' research, a set of five different feedback modalities was designed: light, visualisation, text, audio and vibration, which all offer different depths of information.

Light: Based on the related work, light bars in the front left and right of the vehicle use a sort of traffic light system to indicate driving status and reactions [20, 30]. White: everything is normal; green: an obstacle was recognised, but a reaction is not necessary; yellow: an obstacle was recognised, and the vehicle will brake; red: a risk of collision was recognised, and the vehicle will perform emergency braking. If events happen at the front left of the vehicle, the left light bar changes colour. The same applies to the right side. If something happens directly in front of the vehicle, both light bars change colour.
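The traffic-light logic above can be captured in a small state mapping. This is a hypothetical re-implementation for illustration only; the actual prototype was built in Unity, and the type and function names below are ours:

```python
from enum import Enum

class LightColour(Enum):
    WHITE = "everything is normal"
    GREEN = "obstacle recognised, no reaction necessary"
    YELLOW = "obstacle recognised, vehicle will brake"
    RED = "collision risk recognised, emergency braking"

def light_bars(event_side: str, colour: LightColour) -> dict:
    """Return the colour of both light bars for an event happening on the
    'left', 'right' or directly in 'front' of the vehicle."""
    bars = {"left": LightColour.WHITE, "right": LightColour.WHITE}
    if event_side in ("left", "front"):
        bars["left"] = colour
    if event_side in ("right", "front"):
        bars["right"] = colour
    return bars
```

A frontal collision risk, for example, turns both bars red, while a left-side braking event only changes the left bar.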
Audio: Passengers were presented with abstract sounds for three different types of situations, so-called auditory icons or earcons [5]. Continue drive: a short, relaxing-sounding two-note chime signalling that the shuttle continues its drive. Reaction: a neutral-sounding two-note chime signalling that a situation was recognised and the shuttle will react. Warning: three consecutive beep sounds that sound alerting and communicate dangerous situations.
Visualisation: A visualisation, inspired by the world-in-miniature representation of [15] and by Tesla and Waymo [32, 35], is shown to the passengers via a display. It highlights recognised objects, vehicles and pedestrians as well as route information.
Text: When the vehicle performs actions related to driving situations, they are displayed via a short text. The focus lies on why-messages, as introduced in [18], where testers reported a better driving performance. For the four different driving situations in the prototype it says "construction site", "overtaking", "!!animal!!" and "!!ghost driver!!". Visually, it is always displayed the same way.
Vibration: Vibration patterns in the seat are used to communicate different types of events, as in [16, 24, 25]. Proactive situation: vibrating three times with a duration of 0.3 seconds per vibration. Reactive situation: vibrating six times with a shorter duration of 0.1 seconds per vibration.
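The two patterns can be written down as pulse schedules. This is a sketch under one assumption: the paper does not state the pause between pulses, so it is exposed here as a parameter:

```python
def vibration_pattern(situation: str, pause: float = 0.1):
    """Return (start_time, duration) pairs in seconds for the seat motor.

    Proactive: three 0.3 s pulses; reactive: six 0.1 s pulses.
    The inter-pulse pause is an assumed parameter, not taken from the paper."""
    if situation == "proactive":
        count, duration = 3, 0.3
    elif situation == "reactive":
        count, duration = 6, 0.1
    else:
        raise ValueError(f"unknown situation: {situation}")
    return [(i * (duration + pause), duration) for i in range(count)]
```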
3.3 Driving Simulator
For each of the four aforementioned driving situations, a virtual driving scene was created, taking place in a single-lane urban area. The AV was driving at around 30 km/h, and all driving situations took around 25 seconds to experience. Besides the vibration, all feedback modalities are part of the virtual world. The vibration was realised via a vibration motor connected to an Arduino Uno [2], which communicated directly with the Unity prototype. The motor was installed in a small metal tube which lay on the sitting surface.
                    No Feedback     Light           Sound           Visualisation   Text            Vibration
Pragmatic Quality   0.55 (α=0.77)   2.05 (α=0.94)   1.19 (α=0.88)   1.15 (α=0.94)   0.85 (α=0.82)   0.01 (α=0.79)
Hedonic Quality     -0.40 (α=0.86)  0.63 (α=0.91)   -0.35 (α=0.93)  1.60 (α=0.63)   -0.41 (α=0.85)  0.63 (α=0.87)
Overall             0.07            1.34 (p<.001)   0.42 (p=.171)   1.38 (p<.001)   0.22 (p=.864)   0.32 (p=.442)

Table 1: UEQ-S results of the different feedback modalities. The p-values refer to the no-feedback baseline comparison.
4 STUDY DESIGN
Based on related work, we present the following hypothesis:

Hypothesis H1: Different, single feedback modalities in the form of light, audio, visualisation, text and vibration will create a positive passenger UX in different driving situations.
A mixed-method within-subjects design was used for this experiment. All testers experienced all four driving situations with all feedback modalities, which were counterbalanced with a Latin square design. All subjects were first-time users with no prior information about the different modalities. The UX, in the form of pragmatic and hedonic quality [14], was measured using the UEQ-S [29]. Testers were asked to answer the questionnaire regarding the feedback they experienced during the rides. For the baseline ride without any feedback, they were told to rate only the shuttle's driving actions as feedback. Qualitative feedback was collected during and after the tests via the think-aloud method and an unstructured interview. Afterwards, testers were asked to state which feedback modality they liked best in which situation and which pair of feedback modalities they would like to have in each situation.
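Counterbalancing with a balanced Latin square ensures each condition appears in every ordinal position and directly follows every other condition equally often, cancelling order effects. A minimal construction sketch for an even number of conditions (illustrative only, not the authors' code):

```python
def balanced_latin_square(n):
    """Balanced Latin square for an even number n of conditions.

    Row r follows the classic ordering r, r+1, r-1, r+2, r-2, ... (mod n),
    so every ordered pair of adjacent conditions occurs exactly once."""
    square = []
    for r in range(n):
        row = [r]
        for i in range(1, n):
            offset = (i + 1) // 2 if i % 2 == 1 else -(i // 2)
            row.append((r + offset) % n)
        square.append(row)
    return square

# e.g. one condition order per participant, assuming six conditions
# (baseline plus the five modalities -- our reading, not stated in the paper)
orders = balanced_latin_square(6)
```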
The 22 participants (12 female, mean age = 28.36, SD = 4.04) did not share a common background and were recruited from a university (students and employees), different research institutes and the industry. All of them had a valid driver's license and no prior experience with fully autonomous driving. All reported normal or corrected-to-normal vision and no physical constraints. Since all of the testers were first-time users, they were introduced to the prototype via a VR tutorial which explained the different feedback modalities. They did not experience any distractions or a secondary task during the different drives. Each test took roughly one hour.
5 RESULTS AND DISCUSSION
5.1 User Experience Questionnaire - Short
Table 1 shows the results for the different feedback rides. Reliability was measured using Cronbach's alpha [6]. The UEQ-S states values > 0.8 as a positive evaluation and values < -0.8 as a negative evaluation. Since the UEQ-S data was not normally distributed, a Friedman's Two-Way Analysis was performed.
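Both measures can be reproduced with a few lines of pure Python. This is a sketch of the standard formulas, not the authors' analysis script; in practice one would use scipy.stats.friedmanchisquare and an established reliability implementation:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per item)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

def friedman_statistic(data):
    """Friedman chi-square statistic for data[subject][condition] (assumes no ties)."""
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        for rank, j in enumerate(sorted(range(k), key=lambda c: row[c]), start=1):
            rank_sums[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(R * R for R in rank_sums) - 3 * n * (k + 1)
```

The Friedman statistic is then compared against a chi-square distribution with k-1 degrees of freedom to obtain the p-values reported in Table 1.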
Looking at the results, there is an indication that hypothesis H1 is partly valid. For every feedback modality, the UEQ-S scores were higher than the no-feedback baseline. However, not all feedback modalities created a significantly better UX. The feedback modalities that showed only neutral overall scores were sound, text and vibration. Sound had a positive pragmatic quality (PQ) but a neutral, slightly negative hedonic quality (HQ). This is consistent with the testers' statements, who said that the sound feedback might be annoying in the long run (10 mentions). The feedback modality text scored the worst overall score of all feedback modalities. It was described as redundant (6), was missing some sort of colour highlight (10) and took testers too long to read and process (6). Vibration also had a low overall score and is the only feedback modality with a lower PQ than the no-feedback baseline. The results are supported by testers saying that vibration is annoying and unpleasant (11), not always understandable (3) and creates a feeling that one needs to act, which they cannot as passive passengers (4).
The feedback modalities light and visualisation both created a significantly positive UX compared to the baseline (p<.001). The light's high positive PQ is supported by it being useful and well understandable in some or all situations (18). However, it only scored a neutral HQ, which might be due to too many gradients of light, which felt unnecessary and provided too much feedback (12). The feedback modality visualisation is the only one that scored positively for both PQ and HQ and achieved the best overall score of all modalities. Its positive PQ is supported by the testers' statements that it is helpful (7). Highlighting relevant things in the visualisation (7) might increase its PQ even more. The high HQ is supported by testers stating that it is reassuring to see what the AV sees (8).
5.2 Favourite Feedback Modalities
Testers preferred softer, less intrusive feedback for the proactive situations (light or visualisation) and more prominent feedback for the reactive ones (sound or light), which is consistent with what Politis et al. found out [26]. Looking at the desired feedback modality pairs, testers wished for the pairs light & visualisation and visualisation & text for the proactive situations. This is supported by their think-aloud statements, saying that visualisation is missing an intention (6) and should be combined with text (3). For the reactive situations, more prominent feedback was preferred again, with light being either extended by sound or vibration.
6 LIMITATIONS
As with all empirical studies, there are certain limitations to the experiment. While VR is an immersive medium and situations like the critical overtaking scared testers, which showed that there is an emotional reaction to the situations, a digital world is always only a representation of the real world and may lack immersion due to graphics, sound or g-force limitations. Moreover, testers were told to focus only on the ride to achieve comparability between the different feedback modalities. In a real-world scenario, passengers of an AV might be working, listening to music or having a conversation and might not focus on the ride as much. Furthermore, all testers experienced the different modalities for the first time and only for four rides. Also, only people with a mean age of about 28 were tested.
7 CONCLUSION & OUTLOOK
The study shows that, overall, the different feedback modalities were able to create a better UX compared to the no-feedback baseline. However, only light and visualisation were able to create a significantly positive UX. For the different driving situations, testers preferred softer, less intrusive feedback modalities for proactive situations and more prominent feedback for the reactive ones. This is true for unimodal feedback and feedback pairs and shows that it might be reasonable to present an AV's driving decision with different feedback modalities depending on the situation. Further studies will take a look at the combination of light and visualisation and their impact on the understanding of the actions of an AV as well as their implications on the UX of an autonomous ride.
ACKNOWLEDGMENTS
This research was carried out as part of the FlexCAR project of the
Arena2036 research campus and was funded by the German Federal
Ministry of Education and Research (funding number: 02P18Q647).
REFERENCES
[1] Amina Adadi and Mohammed Berrada. 2018. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 6 (2018), 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
[2] Arduino. 2020. Arduino Uno Rev3. Retrieved January, 2020 from https://store.arduino.cc/arduino-uno-rev3
[3] Audi. 2020. Autonomous Driving. Retrieved January, 2020 from https://www.audi.com/en/experience-audi/mobility-and-trends/autonomous-driving.html
[4] Johannes Beller, Matthias Heesen, and Mark Vollrath. 2013. Improving the Driver-Automation Interaction - An Approach Using Automation Uncertainty. Human Factors 55, 6 (2013), 1130–1141.
[5] Stephen Brewster, Peter C Wright, and Alistair D N Edwards. 1993. An evaluation of earcons for use in auditory human-computer interfaces. In the SIGCHI conference. ACM Press, New York, New York, USA, 222–227.
[6] Lee J Cronbach. 1951. Coefficient alpha and the internal structure of tests. Psychometrika 16, 3 (Sept. 1951), 297–334.
[7] Daimler AG. 2020. Autonomous Driving. Retrieved January, 2020 from https://www.daimler.com/innovation/product-innovation/autonomous-driving/
[8] Filip Karlo Dosilovic, Mario Brcic, and Nikica Hlupic. 2018. Explainable artificial intelligence - A survey. In 2018 41st International convention on information and communication technology, electronics and microelectronics (MIPRO). IEEE, 0210–0215.
[9] Fredrick Ekman, Mikael Johansson, and Jana Sochor. 2016. To See or Not to See - The Effect of Object Recognition on Users' Trust in "Automated Vehicles". In Proceedings of the 9th Nordic Conference on Human-Computer Interaction. ACM Press, New York, New York, USA, 1–4.
[10] Sarah Faltaous, Martin Baumann, Stefan Schneegass, and Lewis Chuang. 2018. Design Guidelines for Reliability Communication in Autonomous Vehicles. In Proceedings of the 7th international conference on automotive user interfaces and interactive vehicular applications. ACM Press, New York, New York, USA, 258–267.
[11] Anna-Katharina Frison, Philipp Wintersberger, Tianjia Liu, and Andreas Riener. 2019. Why do you like to drive automated? - a context-dependent analysis of highly automated driving to elaborate requirements for intelligent user interfaces. In Proceedings of the 24th International Conference on Intelligent User Interfaces. ACM Press, New York, New York, USA, 528–537.
[12] Nick Gang, Srinath Sibi, Romain Michon, Brian K Mok, Chris Chafe, and Wendy Ju. 2018. Don't Be Alarmed - Sonifying Autonomous Vehicle Perception to Increase Situation Awareness. In Proceedings of the 7th international conference on automotive user interfaces and interactive vehicular applications. ACM Press, New York, New York, USA, 237–246.
[13] Jacob Haspiel, Na Du, Jill Meyerson, Lionel P Robert Jr, Dawn Tilbury, X Jessie Yang, and Anuj K Pradhan. 2018. Explanations and expectations: Trust building in automated vehicles. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. Association for Computing Machinery, 119–120.
[14] Marc Hassenzahl. 2007. The hedonic/pragmatic model of user experience. Towards a UX manifesto 10 (2007).
[15] Renate Häuslschmid, Max von Bülow, Bastian Pfleging, and Andreas Butz. 2017. Supporting Trust in Autonomous Driving. ACM, New York, New York, USA.
[16] Cristy Ho, Hong Z Tan, and Charles Spence. 2005. Using spatial vibrotactile cues to direct visual attention in driving scenes. Transportation Research Part F: Traffic Psychology and Behaviour 8, 6 (Nov. 2005), 397–412.
[17] Myounghoon Jeon, Andreas Riener, Jason Sterkenburg, Ju-Hwan Lee, Bruce N Walker, and Ignacio Alvarez. 2018. An International Survey on Automated and Electric Vehicles: Austria, Germany, South Korea, and USA. In Digital Human Modeling. Applications in Health, Safety, Ergonomics, and Risk Management. Springer, Cham, 579–587.
[18] Jeamin Koo, Jungsuk Kwac, Wendy Ju, Martin Steinert, Larry Leifer, and Clifford Nass. 2014. Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. International Journal on Interactive Design and Manufacturing (IJIDeM) 9, 4 (April 2014), 269–275.
[19] Bridget A Lewis, B N Penaranda, Daniel M Roberts, and Carryl L Baldwin. 2017. Effectiveness of Bimodal Versus Unimodal Alerts for Distracted Drivers. In Driving Assessment Conference. University of Iowa, Iowa City, Iowa, 376–382.
[20] B W Meerbeek, C de Bakker, Y A W de Kort, E J van Loenen, and T Bergman. 2016. Automated blinds with light feedback to increase occupant satisfaction and energy saving. Building and Environment 103 (July 2016), 70–85.
[21] Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267 (Feb. 2019), 1–38.
[22] Tim Miller, Piers Howe, and Liz Sonenberg. 2017. Explainable AI - Beware of Inmates Running the Asylum Or - How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences. arXiv preprint arXiv:1712.00547 (2017).
[23] Sina Mohseni, Niloofar Zarei, and Eric D. Ragan. 2018. A Survey of Evaluation Methods and Measures for Interpretable Machine Learning. CoRR abs/1811.11839 (2018). arXiv:1811.11839 http://arxiv.org/abs/1811.11839
[24] Ioannis Politis, Stephen Brewster, and Frank Pollick. 2014. Evaluating multimodal driver displays under varying situational urgency. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM Press, New York, New York, USA, 4067–4076.
[25] Ioannis Politis, Stephen Brewster, and Frank Pollick. 2015. Language-based multimodal displays for the handover of control in autonomous cars. In Proceedings of the 7th international conference on automotive user interfaces and interactive vehicular applications. ACM Press, New York, New York, USA, 3–10.
[26] Ioannis Politis, Stephen Brewster, and Frank Pollick. 2015. To Beep or Not to Beep? - Comparing Abstract versus Language-Based Multimodal Driver Displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM Press, New York, New York, USA, 3971–3980.
[27] Ioannis Politis, Stephen Brewster, and Frank Pollick. 2017. Using Multimodal Displays to Signify Critical Handovers of Control to Distracted Autonomous Car Drivers. IJMHCI 9, 3 (2017), 1–16.
[28] SAE On-Road Automated Vehicle Standards Committee and others. 2018. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International, Warrendale, PA, USA (2018).
[29] Martin Schrepp, Andreas Hinderks, and Jörg Thomaschewski. 2017. Design and Evaluation of a Short Version of the User Experience Questionnaire (UEQ-S). IJIMAI 4, 6 (2017), 103.
[30] Bobbie D Seppelt and John D Lee. 2007. Making adaptive cruise control (ACC) limits visible. International Journal of Human-Computer Studies 65, 3 (March 2007), 192–205.
[31] Erik Strumbelj and Igor Kononenko. 2014. Explaining prediction models and individual predictions with feature contributions. Knowl. Inf. Syst. 41, 3 (2014), 647–665.
[32] Tesla. 2020. Autopilot. Retrieved January, 2020 from https://www.tesla.com/autopilot?redirect=no
[33] Unity. 2020. Unity. Retrieved January, 2020 from https://unity.com/
[34] Marcel Walch, Kristin Lange, Martin Baumann, and Michael Weber. 2015. Autonomous driving - investigating the feasibility of car-driver handover assistance. In Proceedings of the 7th international conference on automotive user interfaces and interactive vehicular applications. ACM Press, New York, New York, USA, 11–18.
[35] Waymo. 2020. Waymo. Retrieved January, 2020 from https://waymo.com
[36] Philipp Wintersberger, Andreas Riener, and Anna-Katharina Frison. 2016. Automated Driving System, Male, or Female Driver: Who'd You Prefer? Comparative Analysis of Passengers' Mental Conditions, Emotional States & Qualitative Feedback. ACM, New York, New York, USA.
[37] Robert Wortham, Andreas Theodorou, and Joanna Bryson. 2016. What does the robot think? Transparency as a fundamental design requirement for intelligent systems. In IJCAI-2016 ethics for artificial intelligence workshop.
... Participants valued the IVIA's provision of critical information about the vehicle's status and behavior, which previous research has shown enhances trust in automated vehicles [25,[50][51][52]. Notably, critical information received higher ratings than relevant information, with participants articulating the importance of knowing the vehicle's actions. ...
... The provision of information regarding the vehicle's status and behavior is paramount. Prior research has established that effectively presenting vehicle information not only enhances trust but also positively influences user experience [25,50,52]. Our analysis revealed that all six categories of information-safety-related information, vehicle status and diagnostics, navigation and route information, entertainment and media, personalized assistance, and communication and connectivity-held significant importance to participants. ...
Article
Full-text available
As fully automated vehicles (FAVs) advance towards SAE Level 5 automation, the role of in-vehicle intelligent agents (IVIAs) in shaping passenger experience becomes critical. Even at SAE Level 5 automation, effective communication between the vehicle and the passenger will remain crucial to ensure a sense of safety, trust, and engagement. This study explores how different types and combinations of information provided by IVIAs influence user experience, acceptance, and trust. A sample of 25 participants was recruited for the study, which experienced a fully automated ride in a driving simulator, interacting with Iris, an IVIA designed for voice-only communication. The study utilized both qualitative and quantitative methods to assess participants’ perceptions. Findings indicate that critical and vehicle-status-related information had the highest positive impact on trust and acceptance, while personalized information, though valued, raised privacy concerns. Participants showed high engagement with non-driving-related activities, reflecting a high level of trust in the FAV’s performance. Interaction with the anthropomorphic IVIA was generally well received, but concerns about system transparency and information overload were noted. The study concludes that IVIAs play a crucial role in fostering passenger trust in FAVs, with implications for future design enhancements that emphasize emotional intelligence, personalization, and transparency. These findings contribute to the ongoing development of IVIAs and the broader adoption of automated driving technologies.
... For instance, an HMI may be an interface to alert the human driver to take over the control of a vehicle in an emergent situation. Other potential examples are text messages displayed in monitors, sound, light signals, and vibrotactile technology that explain the vehicle's intentions and bring situation awareness for people in the loop, as shown in Schneider et al.'s work [93]. ...
... A simulator is used to test and verify three driving situations where a human driver's input can improve safety of self-driving. Apart from these works, Schneider et al. involve human participants in their empirical studies to understand the role of explanations for the public acceptance of AVs [93], [155]. They explore the role of explainability-supplied UX in AVs, provide driving-related explanations to end users with different methods, such as textual, visual, and lighting techniques, and conclude that providing context-aware explanations on autonomous driving actions increases users' trust in this technology. ...
Article
Full-text available
Autonomous driving has achieved significant milestones in research and development over the last two decades. There is increasing interest in the field as the deployment of autonomous vehicles (AVs) promises safer and more ecologically friendly transportation systems. With the rapid progress in computationally powerful artificial intelligence (AI) techniques, AVs can sense their environment with high precision, make safe real-time decisions, and operate reliably without human intervention. However, intelligent decision-making in such vehicles is not generally understandable by humans in the current state of the art, and such deficiency hinders this technology from being socially acceptable. Hence, aside from making safe real-time decisions, AVs must also explain their AI-guided decision-making process in order to be regulatory-compliant across many jurisdictions. Our study sheds comprehensive light on the development of explainable artificial intelligence (XAI) approaches for AVs. In particular, we make the following contributions. First, we provide a thorough overview of the state-of-the-art and emerging approaches for XAI-based autonomous driving. We then propose a conceptual framework considering the essential elements for explainable end-to-end autonomous driving. Finally, we present XAI-based prospective directions and emerging paradigms for future directions that hold promise for enhancing transparency, trustworthiness, and societal acceptance of AVs.
... For example, providing continuous information about upcoming events and providing insight into system processes through HMIs has been found to foster trust (Ekman et al., 2018). In line with this, incorporating complex visualizations, such as animated live views of the surrounding traffic and upcoming driving maneuvers, has been shown to increase feelings of safety and significantly improve the user experience compared to simpler visualizations (Häuslschmid et al., 2017;Schneider et al., 2021). ...
... For example, Faltaous et al.s' [14] user study in a simulation environment shows that providing multimodal explanations with auditory, visual, and vibrotactile feedback is effective for potential takeover requests in highly-urgent driving conditions. Furthermore, Schneider et al. [15] have tested five various feedback techniqueslight, audio, object visualization, textual information, and vibration -in a virtual driving scene, and evaluate user satisfaction with these explanation modalities. Their findings show that light or object visualization is preferred more for proactive situations, while sound and light are more favored for reactive scenarios. ...
Article
Full-text available
Given the uncertainty surrounding how existing explainability methods for autonomous vehicles (AVs) meet the diverse needs of stakeholders, a thorough investigation is imperative to determine the contexts requiring explanations and suitable interaction strategies. A comprehensive review becomes crucial to assess the alignment of current approaches with varied interests and expectations within the AV ecosystem. This study presents a review to discuss the complexities associated with explanation generation and presentation to facilitate the development of more effective and inclusive explainable AV systems. Our investigation led to categorising existing literature into three primary topics: explanatory tasks, explanatory information, and explanatory information communication. Drawing upon our insights, we have proposed a comprehensive roadmap for future research centred on (i) knowing the interlocutor, (ii) generating timely explanations, (ii) communicating human-friendly explanations, and (iv) continuous learning. Our roadmap is underpinned by principles of responsible research and innovation, emphasising the significance of diverse explanation requirements. To effectively tackle the challenges associated with implementing explainable AV systems, we have delineated various research directions, including the development of privacy-preserving data integration, ethical frameworks, real-time analytics, human-centric interaction design, and enhanced cross-disciplinary collaborations. By exploring these research directions, the study aims to guide the development and deployment of explainable AVs, informed by a holistic understanding of user needs, technological advancements, regulatory compliance, and ethical considerations, thereby ensuring safer and more trustworthy autonomous driving experiences.
Article
Full-text available
Advanced energy benchmarking in residential buildings, using data-driven modeling, provides a fast, accurate, and systematic approach to assessing energy performance and comparing it with reference standards or targets. This process is essential for identifying opportunities to improve energy efficiency and for shaping effective energy retrofit strategies. However, building professionals often face barriers to adopting these tools, mainly due to the complexity and limited interpretability of data-driven models, which can negatively affect decision-making. In order to contribute to addressing these issues, this study combines data-driven modeling with Explainable Artificial Intelligence (XAI) techniques to advance energy benchmarking analysis in residential buildings and enhance its usability also by non-expert users. The proposed process focuses on estimating primary energy demand for space heating and domestic hot water in residential building units, extracting knowledge from about 49,000 Energy Performance Certificates (EPCs) issued in the Piedmont Region, Italy. The effectiveness of five machine learning algorithms is assessed to select the most suitable estimation model. Then, to ensure the trustworthiness of the selected model, a XAI layer is implemented to identify and remove input variable domain regions that demonstrated to be critical for the robustness of the inference mechanism learnt in the training phase. Moreover, the study assesses the model's capability to evaluate building energy performance, examining both the current state and potential scenarios for energy retrofitting. A second XAI layer is then introduced to provide local explanations for model estimations of both pre- and post-retrofit conditions of a building. The final aim is to enable an external benchmarking analysis by extracting, from the analysed EPCs, reference groups of similar buildings that facilitate a performance comparison for the investigated retrofit scenarios. This energy benchmarking process promotes transparent and informed decision-making, aiming to instill confidence in final users when leveraging data-driven models for energy planning in the building sector.
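The second XAI layer described in this abstract produces local explanations, i.e. per-feature contributions to one building's prediction. A minimal sketch of that idea, under stated assumptions: the linear surrogate model, the feature names, and all numbers below are invented for illustration and are not the authors' actual pipeline. For a linear model, attributing the gap between a building's prediction and a stock-average baseline decomposes exactly into per-feature terms.

```python
# Minimal local-explanation sketch. The linear surrogate model, feature
# names, and all numbers are illustrative assumptions, not from the paper.

def predict(features, weights, bias):
    """Toy surrogate for primary energy demand [kWh/m2/year]."""
    return bias + sum(w * x for w, x in zip(weights, features))

def local_explanation(x, baseline, weights, bias):
    """Attribute the gap f(x) - f(baseline) to each feature.
    For a linear model this decomposition is exact: w_i * (x_i - b_i)."""
    return [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

# Hypothetical features: [envelope U-value, heated area, system efficiency]
weights = [40.0, 0.05, -80.0]
bias = 60.0
baseline = [1.2, 100.0, 0.8]   # stock-average building
building = [0.6, 100.0, 0.9]   # post-retrofit candidate

contrib = local_explanation(building, baseline, weights, bias)
gap = predict(building, weights, bias) - predict(baseline, weights, bias)
assert abs(sum(contrib) - gap) < 1e-9  # contributions sum to the gap
```

A negative contribution (here from the improved U-value and system efficiency) flags which retrofit measure drives the predicted saving, which is the kind of signal the benchmarking process surfaces to non-expert users.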
Article
Explanations in automated vehicles enhance passengers' understanding of vehicle decision-making, mitigating negative experiences by increasing their sense of control. These explanations help maintain situation awareness, even when passengers are not actively driving, and calibrate trust to match vehicle capabilities, enabling safe engagement in non-driving related tasks. While design studies emphasize timing as a crucial factor affecting trust, machine learning practices for explanation generation primarily focus on content rather than delivery timing. This discrepancy could lead to mistimed explanations, causing misunderstandings or unnecessary interruptions. This gap is partly due to a lack of datasets capturing passengers' real-world demands and experiences with in-vehicle explanations. We introduce TimelyTale, an approach that records passengers' demands for explanations in automated vehicles. The dataset includes environmental, driving-related, and passenger-specific sensor data for context-aware explanations. Our machine learning analysis identifies proprioceptive and physiological data as key features for predicting passengers' explanation demands, suggesting their potential for generating timely, context-aware explanations. The TimelyTale dataset is available at https://doi.org/10.7910/DVN/CQ8UB0.
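To make the feature finding above concrete, a toy rule-based stand-in for a learned demand predictor might combine a driving event with physiological or proprioceptive signals. The feature names and thresholds below are invented for this sketch; the actual TimelyTale predictors are learned from the dataset, not hand-coded rules.

```python
# Toy rule-based stand-in for a learned explanation-demand predictor.
# Feature names and thresholds are illustrative, not from TimelyTale.

def explanation_demanded(sample):
    """Return True when the in-vehicle context suggests the passenger
    wants an explanation right now."""
    hard_braking = sample["longitudinal_accel"] < -3.0    # m/s^2
    stress_spike = sample["heart_rate"] > sample["resting_heart_rate"] + 20
    posture_shift = sample["seat_pressure_change"] > 0.5  # normalized
    # Demand is flagged when physiological or proprioceptive signals
    # co-occur with an unusual driving event.
    return hard_braking and (stress_spike or posture_shift)

sample = {"longitudinal_accel": -4.2, "heart_rate": 95,
          "resting_heart_rate": 68, "seat_pressure_change": 0.7}
print(explanation_demanded(sample))  # True for this sample
```

In a learned model the same sensor channels would enter as features of a classifier; the sketch only illustrates why passenger-specific signals help decide *when*, not *what*, to explain.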
Conference Paper
Full-text available
Technology acceptance is a critical factor influencing the adoption of automated vehicles. Consequently, manufacturers feel obliged to design automated driving systems in a way that accounts for negative effects of automation on user experience. Recent publications confirm that full automation will potentially lack in the satisfaction of important user needs. To counteract this, the adoption of Intelligent User Interfaces (IUIs) could play an important role. In this work, we focus on evaluating the impact of scenario type (represented by variations of road type and traffic volume) on the fulfillment of psychological needs. Results of a qualitative study (N=30) show that the scenario has a high impact on how users perceive the automation. Based on this, we discuss the potential of adaptive IUIs in the context of automated driving. In detail, we look at the aspects trust, acceptance, and user experience and their impact on IUIs in different driving situations.
Article
Full-text available
At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of Artificial Intelligence (AI) in our daily life, which contributes to accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black box nature of these systems allows powerful predictions, but they cannot be directly explained. This issue has triggered a new debate on Explainable Artificial Intelligence (XAI), a research field that holds substantial promise for improving the trust and transparency of AI-based systems. It is recognized as the sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to explainable AI. Through the lens of the literature, we review existing approaches regarding the topic, we discuss trends surrounding its sphere, and we present major research trajectories.
Conference Paper
Full-text available
Currently offered autonomous vehicles still require human intervention, for instance when the system fails to perform as expected or must adapt to unanticipated situations. Given that the reliability of autonomous systems can fluctuate across conditions, this work is a first step towards understanding how this information ought to be communicated to users. We conducted a user study to investigate the effect of communicating the system's reliability through a feedback bar. Subjective feedback was solicited from participants with questionnaires and semi-structured interviews. Based on the qualitative results, we derived guidelines that serve as a foundation for the design of how autonomous systems could provide continuous feedback on their reliability.
Conference Paper
Full-text available
In the last decade, with the availability of large datasets and more computing power, machine learning systems have achieved (super)human performance in a wide variety of tasks. Examples of this rapid development can be seen in image recognition, speech analysis, strategic game planning and many more. The problem with many state-of-the-art models is a lack of transparency and interpretability. The lack thereof is a major drawback in many applications, e.g. healthcare and finance, where a rationale for the model's decision is a requirement for trust. In the light of these issues, explainable artificial intelligence (XAI) has become an area of interest in the research community. This paper summarizes recent developments in XAI in supervised learning, starts a discussion on its connection with artificial general intelligence, and gives proposals for further research directions.
Conference Paper
Full-text available
Trust is a vital determinant of acceptance of automated vehicles (AVs), and expectations and explanations are often at the heart of any trusting relationship. Once expectations have been violated, explanations are needed to mitigate the damage. This study introduces the importance of the timing of explanations in promoting trust in AVs. We present the preliminary results of a within-subjects experimental study involving eight participants exposed to four AV driving conditions (i.e. 32 data points). Preliminary results show a pattern that suggests that explanations provided before the AV takes action promote more trust than explanations provided afterward.
Article
Full-text available
The user experience questionnaire (UEQ) is a widely used questionnaire to measure the subjective impression of users towards the user experience of products. The UEQ is a semantic differential with 26 items. Filling out the UEQ takes approximately 3-5 minutes, i.e. the UEQ is already reasonably efficient concerning the time required to answer all items. However, there exist several valid application scenarios where filling out the entire UEQ appears impractical. This paper deals with the creation of an 8-item short version of the UEQ, which is optimized for these specific application scenarios. First validations of this short version are also described.
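A brief scoring sketch may clarify how such a semantic-differential short version is typically evaluated: answers on a 1..7 scale are shifted to -3..+3 and averaged per scale. The item-to-scale assignment below (first four items pragmatic quality, last four hedonic quality) follows the common UEQ-S layout but should be verified against the questionnaire handbook before use.

```python
# Scoring sketch for an 8-item short UEQ (UEQ-S). Item-to-scale
# assignment is an assumption based on the common UEQ-S layout.

def ueq_s_scores(answers):
    """Map eight 1..7 answers to pragmatic, hedonic, and overall means."""
    if len(answers) != 8:
        raise ValueError("UEQ-S has exactly 8 items")
    shifted = [a - 4 for a in answers]        # 1..7 -> -3..+3
    return {
        "pragmatic": sum(shifted[:4]) / 4,    # items 1-4
        "hedonic": sum(shifted[4:]) / 8 * 2,  # items 5-8 (mean of four)
        "overall": sum(shifted) / 8,
    }

print(ueq_s_scores([6, 5, 6, 7, 5, 4, 6, 5]))
# → {'pragmatic': 2.0, 'hedonic': 1.0, 'overall': 1.5}
```

Keeping the -3..+3 convention makes short-version scores directly comparable with results reported for the full 26-item UEQ.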
Article
Full-text available
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that, if these techniques are to succeed, the explanations they generate should have a structure that humans accept. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.
Article
Full-text available
Until full autonomy is achieved in cars, drivers will still be expected to take over control of driving, and critical warnings will be essential. This paper presents a comparison of abstract versus language-based multimodal warnings signifying handovers of control in autonomous cars. While using an autonomous car simulator, participants were distracted from the road by playing a game on a tablet. An automation failure, together with the car in front braking, was then simulated; a rare but very critical situation for a non-attentive driver to be in. Multimodal abstract or language-based warnings signifying this situation were then delivered, either from the simulator or from the tablet, in order to discover the most effective location. Results showed that abstract cues, cues including audio, and cues delivered from the tablet improved handovers. This indicates the potential of moving simple but salient autonomous car warnings to where a gaming side task takes place.
Conference Paper
Lack of trust can arise when people do not know what autonomous vehicles perceive in the environment. To convey this information without causing alarm or compelling people to act, we designed and evaluated a way to sonify an autonomous vehicle's perception of salient driving events using abstract auditory icons, or "earcons." These are localized in space using an in-car quadraphonic speaker array to correspond with the direction of events. We describe the interaction design for these awareness cues and a validation experiment (N=28) examining the effects of sonified events on drivers' sense of situation awareness, comfort, and trust. Overall, this work suggests that our designed earcons do improve people's awareness of in-simulation events. The effect of the increased situational awareness on trust and comfort is inconclusive. However, post-study design feedback suggests that sounds should have low levels of intensity and dissonance, and a sense of belonging to a common family.