In UX We Trust
Investigation of Aesthetics and Usability of Driver-Vehicle Interfaces and Their
Impact on the Perception of Automated Driving
Anna-Katharina Frison∗†
Technische Hochschule Ingolstadt
Ingolstadt, Germany
Philipp Wintersberger∗†
Technische Hochschule Ingolstadt
Ingolstadt, Germany
Andreas Riener
Technische Hochschule Ingolstadt
Ingolstadt, Germany
Clemens Schartmüller
Technische Hochschule Ingolstadt
Ingolstadt, Germany
Linda Ng Boyle
University of Washington
Seattle, WA, USA
Erika Miller
Colorado State University
Fort Collins, CO, USA
Klemens Weigl
Technische Hochschule Ingolstadt
Ingolstadt, Germany
Abstract
In the evolution of technical systems, freedom from error and early adoption play a major role for market success and for maintaining competitiveness. In the case of automated driving, we see that faulty systems are put into operation and users trust these systems, often without any restrictions. Trust and use are often associated with users' experience of the driver-vehicle interfaces and interior design. In this work, we present the results of our investigations on factors that influence the perception of automated driving. In a simulator study, N=48 participants had to drive an SAE level 2 vehicle with either a perfect or a faulty driving function. As a secondary activity, participants had to solve tasks on an infotainment system with varying aesthetics and usability (2x2). Results reveal that the interaction of conditions significantly influences trust and UX of the vehicle system. Our conclusion is that all aspects of vehicle design cumulate to system and trust perception.

∗ Both first and second author contributed equally.
† Also with Johannes Kepler University.
Also with Katholische Universität Eichstätt-Ingolstadt.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
CHI 2019, May 4–9, 2019, Glasgow, Scotland, UK
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-5970-2/19/05...$15.00
CCS Concepts
• Human-centered computing → Empirical studies in HCI; Interactive systems and tools.

Keywords
automated driving systems; user experience; UX; trust; distrust; SAE J3016; aesthetic; reliability
ACM Reference Format:
Anna-Katharina Frison, Philipp Wintersberger, Andreas Riener, Clemens Schartmüller, Linda Ng Boyle, Erika Miller, and Klemens Weigl. 2019. In UX We Trust: Investigation of Aesthetics and Usability of Driver-Vehicle Interfaces and Their Impact on the Perception of Automated Driving. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland, UK. ACM, New York, NY, USA, 13 pages. https:
Introduction
The development of technology has focused on supporting individual mobility and satisfying the human desire for autonomy, which is one of the most important psychological needs [ ]. Each new invention – from the wheel through horse carriages and steam-driven railways up to the automobile – fundamentally changed our daily life and the societies we live in. Automated vehicles (AVs) are a major next step in this evolution. AVs promise several benefits, such as less congestion and pollution, higher safety, as well as more leisure time and enhanced mobility for diverse target groups (children, elderly, impaired) [ ]. However, all these advantages cannot be delivered instantly. Fully automated "level 5" vehicles [ ] that can operate in all circumstances are not expected on the market before 2030 or 2040 [ ]. In the meantime, automated driving systems (ADSs) with lower levels of automation are being gradually introduced. During this era of mixed traffic (co-existence of vehicles at different levels of automation), drivers must be able to cope with automation limitations and act either as monitoring (level 2) or fallback (level 3) authority [ ]. Monitoring over extended periods of time is a challenge, even for "highly motivated human beings" (cf. the irony of automation [ ]). This is particularly relevant as drivers, who are not necessarily well-trained domain experts, are expected to operate a safety-critical system in potentially dangerous environments [ ]. Recent incidents with AVs (such as the fatal accidents with Tesla Autopilot or the Uber self-driving taxi [ ]) confirm that AV technology is highly susceptible to overreliance/overtrust [ ]. Since this problem is already well known from research on driver assistance systems such as adaptive cruise control [ ], trust calibration for AVs is an issue receiving a great deal of attention [ ]. Ideally, users' trust levels should appropriately "match an objective measure of trustworthiness", e.g., system performance (reliability/ability/predictability to achieve its goals) [ ]. However, reality is much more complex: trust in automation has many dimensions and is influenced by a variety of aspects, including aesthetics, design, and other factors of user experience (UX) [ ]. Conversely, UX is also impacted by users' trust in a system. For vehicle manufacturers, this results in a nearly unsolvable paradox. On the one hand, they should design their systems in a way that prevents overtrust/overreliance. On the other hand, they must maximize the UX qualities of their vehicles to maintain competitiveness.

In the context of AVs, how the partly overlapping constructs of UX and trust actually influence each other is widely unknown. Vehicles consist of many subsystems and design aspects that may all contribute to users' overall assessment of both constructs and, further on, may cause so-called "halo effects" [ ]. Hence, this paper aims at revealing how UX and trust influence each other, and how proper design could simultaneously support UX qualities while preventing miscalibrated trust. To evaluate this, we conducted a driving simulator study that, to the best of our knowledge, combines for the first time relevant parameters of both UX (usability, aesthetics) and trust (system performance/reliability) in a single experiment. In our study, users had to complete several tasks in an in-vehicle infotainment system (IVIS) while safely operating an AV at SAE level 2.

Figure 1: Study setup showing the driving scenario and an example of the IVIS used to investigate the interaction between UX and trust.

We utilized a mixed-model design that varies the reliability of an ADS as a between-subjects factor. For the within-subjects factors, we varied pragmatic (representing usability) and hedonic (representing aesthetics) qualities of the IVIS. For a holistic evaluation of UX, trust, affect, and psychological need fulfillment, we applied a triangulation of subjective (AttrakDiff mini [ ], PANAS short [ ], Trust Scale [ ], Need Scale [ ], semi-structured interviews) and objective (galvanic skin response, braking behavior) measures. For all subjective measures, we instructed participants to assess the AV as a whole and not to distinguish between subsystems. The results of our study give insights into how the stream of experiences combining performance, usability, and aesthetics of different vehicle subsystems correlate and influence each other.
Related Work
In the HCI community, UX and trust research are two areas that are often considered in isolation. However, similarities between the two constructs cannot be denied and, thus, we propose to consider them in a holistic way.
Trust in automation can be defined as an "attitude that an agent will achieve an individual's goals in a situation characterized by uncertainty and vulnerability", and is built upon analytic, analogical, and affective processes [ ]. Trust is sensitive to individual traits (such as age, personality, etc.) and states (self-confidence, emotional state, etc.), properties of the automation (complexity, task difficulty, etc.), as well as design features (appearance, ease of use, communication style, etc.) [ ], and is the result of processes happening before ("dispositional trust"), during ("situational trust"), and after ("learned trust") system interaction [ ]. In contrast, UX can (according to ISO 9241-210) be defined as a "person's perceptions and responses resulting from the use and/or anticipated use of a product, system or service". Thereby, experience can "occur before, during and after use", and "[...] is a consequence of brand image, presentation, functionality, system performance, interactive behaviour and assistive capabilities of the interactive system, the user's internal and physical state resulting from prior experiences, attitudes, skills and personality, and the context of use".
Although trust and UX have separate definitions, they seem to be influenced by similar factors and processes. Hence, it is not surprising that trust is a mentioned (however, not yet focal) construct in the UX theory literature. The term trust is regarded as a component of UX [ ], as users' personal quality of experience [ ], or as (context-dependent [ ]) perceived value [ ]. Desmet et al. [ ] mention trust within their general set of 25 emotions relevant in human-product interaction. Although trust is not an emotion itself, a product can help users feel confident and courageous if it is perceived as trustworthy. Thus, designers need to decide which psychological needs they want to fulfill. Distler et al. [ ] revealed the need for security as one of the most important needs for AVs (in the driving domain, the term "need for safety" would presumably fit better than "security", but for reasons of consistency we stick to the original formulation provided by [ ] throughout the paper). In order to fulfill this need, a specific form of interaction has to be selected which aims at expressing trustworthiness and thereby triggers trust [ ]. In this sense, trust can be regarded as a subjective sentiment and evaluative feeling dependent on the fulfillment of users' higher goals, such as the psychological need for security [ ]. To provide examples, Väätäjä et al. [70] include trust as an item in the AttrakWork questionnaire to measure a product's hedonic quality, and Roedel et al. [ ] chose trust as a relevant UX factor when evaluating user acceptance and experience of ADSs at different levels of automation. Hence, a question that arises in this regard, especially considering that both constructs are discussed as very broad, fuzzy, and hard to understand [ ], is whether trust and UX can be considered the same in a specific context.
The main dierence becomes visible when looking at the
goals both constructs aim to achieve. UX research tries to
maximize the quality of interaction by satisfying psycho-
logical needs and thereby providing pragmatic and hedonic
quality [
]. For designers, there is no upper limit – the
more these qualities are supported, the better. Thus, previous
research focused on the impact of visual aesthetics, usability,
and branding on users’ perceived trustworthiness, predom-
inantly in the area of e-commerce systems [
] and
websites [
]. These studies aimed to increase users’ per-
ceived trustworthiness and, consequently, enhance UX. In
trust research however, maximizing trust is not the major
goal. Here, the challenge is to precisely adjust users’ subjec-
tive trust levels to a systems’ actual performance (“calibration
of trust” [
]) while taking the operational and environmen-
tal context into account [
]. Thus, although trust may need
to be raised in many situations, an upper limit should not be
exceeded to prevent users from overreliance. In the domain
of automated driving, recent studies addressing trust can
broadly be divided into two areas. Those dealing with dis-
trust to reduce automation disuse, and those that address the
problem of overtrust/overreliance to prevent misuse [
]. For
both issues, various resolution strategies have been proposed.
Trust may be raised by increasing system transparency, us-
ing various techniques such as why-and-how information
], symbolic representation [
], augmented reality [
or anthropomorphic agents [
]. An often proposed so-
lution to deal with overtrust is the provision of uncertainty
displays in dierent forms and modalities [
]. In this
context, a problem that we see in many trust studies is that a
distinction between the two constructs (trust and UX) is not
made. For example, was the aim of an experiment actually to
address trust/reliance or were mainly UX aspects evaluated
which potentially overlap with trust?
Research Opportunities
Lindgaard et al. [ ] claim that so-called "halo effects" are a reason for the interrelation of usability, aesthetics, and trust in websites. These effects emerge from the paradox of "what is beautiful is usable" [ ] or "I like it, it must be good on all attributes" [ ], already mentioned by [ ]. The inference model [ ] proved the existence of evaluative consistency (i.e., "halo effects"), which assumes that users infer unavailable attributes from a general value to keep their overall judgment consistent. Hence, there is an indirect link from beauty to goodness and, with it, pragmatic quality. In contrast, a probabilistic consistency is a conceptually or causally linked judgment (high aesthetics raises the expectation of high perceived hedonic quality). According to this, Tuch et al. [ ] identified negative affect, such as frustration from poor usability, as a mediator variable that potentially decreases perceived aesthetics. Further, Minge et al. [ ] differentiate between pragmatic "halo effects", where usability impacts perceived visual attractiveness, and hedonic "halo effects", where visual aesthetics influences perceived usability. Consequently, we wonder whether trust in automation can be investigated in the absence of UX to draw useful conclusions. A central question that arises is how the two constructs are correlated in the context of AVs. Similar to [ ], we expect halo effects of aesthetics and usability as biasing factors for trust, which could become highly relevant for the future implementation of automated driving technology.
Method
We conducted a driving simulator study to investigate the interaction (potential correlation and "halo effects" of UX and trust) between an ADS's performance/reliability and relevant UX factors (usability/aesthetics) of in-vehicle interfaces, as well as their effect on the perception of AVs in general, aiming to answer the following research questions:
RQ1: How does IVIS design (usability and aesthetics) affect UX of AVs with varying system performance?
RQ2: How does IVIS design (usability and aesthetics) affect users' trust in AVs with varying system performance?
RQ3: Is there a correlation between UX and trust in AVs?
Experimental Design
We applied a full factorial mixed-model design varying the performance of the ADS as between-subjects factor, and aesthetics and usability of the IVIS as within-subjects factors (each on two levels). Each participant had to perform various tasks on four different IVISs that represented all combinations of usability (good/bad) and aesthetics (nice/ugly).
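The 2x2 within-subjects plan can be enumerated programmatically. The sketch below is illustrative only (the function names and the alternating group-assignment rule are our assumptions, not the authors' code):

```python
# Illustrative sketch of the mixed design: 2 (ADS performance, between)
# x 2 (usability, within) x 2 (aesthetics, within).
import itertools
import random

USABILITY = ["good", "bad"]    # within-subjects factor 1
AESTHETICS = ["nice", "ugly"]  # within-subjects factor 2

def ivis_conditions():
    """All four usability x aesthetics combinations of the IVIS."""
    return [{"usability": u, "aesthetics": a}
            for u, a in itertools.product(USABILITY, AESTHETICS)]

def assign(participant_id, rng=random):
    """Between-subjects group plus a randomized within-subjects order
    of the four IVIS variants (assignment rule is hypothetical)."""
    group = "high" if participant_id % 2 == 0 else "low"
    order = ivis_conditions()
    rng.shuffle(order)  # randomized presentation order per participant
    return group, order
```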
Study Setup
The experiment was conducted in a high-fidelity driving simulator (a remodeled VW Golf on a hexapod platform) with an IVIS on a tablet PC installed on top of the center console (see Figure 1).
Driving Scenario. We simulated an AV at SAE level 2 (i.e., combined longitudinal and lateral control) driving on a 2-lane highway using IPG CarMaker, inspired by the setting used in [ ]. The AV drove at a constant speed of 120 km/h in the left lane and was confronted with 12 lead vehicles driving at lower speed (70 km/h). In such a situation, the ADS detected the lead vehicle and reduced the speed to prevent a crash (similar to an ACC system). As soon as the ego vehicle had slowed down to 70 km/h, the lead vehicle performed a lane change to the right, allowing the ego vehicle to accelerate again to the target speed. In the high-performance condition (group A), all 12 lead vehicles were successfully detected (thus, no manual interventions were necessary). In the low-performance condition (group B), the ADS (randomly) failed to detect the lead vehicle in 3 out of the 12 cases (75% reliability), generating the need for interventions – participants thus had to brake manually to prevent a crash (however, they never had to manually engage in lateral control).
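The 75% reliability manipulation can be scripted roughly as follows. This is a minimal sketch of the described logic, not the actual CarMaker scenario code:

```python
# Sketch of the reliability manipulation: in the low-performance group,
# 3 of the 12 lead-vehicle encounters are randomly marked as detection
# failures, yielding 9/12 = 75% reliability.
import random

N_EVENTS = 12
N_FAILURES = 3

def detection_schedule(high_performance, rng=random):
    """Return one boolean per lead-vehicle event: True = ADS detects
    the lead vehicle, False = the driver must brake manually."""
    if high_performance:  # group A: all events detected
        return [True] * N_EVENTS
    failures = set(rng.sample(range(N_EVENTS), N_FAILURES))
    return [i not in failures for i in range(N_EVENTS)]
```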
In-Vehicle Infotainment System. We implemented four variants of IVISs in HTML/JavaScript on a 10.2" tablet (Google Pixel C). The IVISs consisted of a main navigation and three typically available subsystems (a phone/call screen including a list of contacts, a media player including a collection of albums/songs as well as different radio stations, and a climate control), see Figure 3.
Figure 3: A/C menu of the nice (left) and ugly (right) IVIS.
The visual design was selected from a set of examples created by groups of undergraduate students during a design class. Students were provided a specific menu/navigation structure and instructed to create an IVIS skin. All designs were evaluated using the UEQ [ ] on a 7-point semantic differential scale from -3 (negative) to +3 (positive) with at least 5 participants. We utilized the results of the subscale "Attractiveness (ATT-UEQ)" and selected the IVISs with the best and worst values. While the nice design has a mean value of ATT-UEQ=1.92 (excellent with respect to the UEQ benchmark dataset [ ]), the ugly design shows a mean value of only ATT-UEQ=0.45 (bad compared to the benchmark). This process served as guidance to confirm our subjective selection of a nice and an ugly IVIS; however, it was not a controlled experiment. To provide a potentially "bad" usability, we followed the definition provided in ISO 9241-11, which states usability to be the "extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" [ ]. Thus, we chose to manipulate the IVIS's reliability by semi-randomly calculating the chance for a successful button-press action, where at least two and at most eight clicks were required for a successful action.
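The real IVIS was implemented in HTML/JavaScript; the Python sketch below only mirrors the described manipulation. One simple way to guarantee the stated 2–8 click bound is to draw the required number of presses per button semi-randomly (an assumed realization, not necessarily the authors' exact mechanism):

```python
# Hypothetical re-creation of the "bad usability" manipulation: each
# button action succeeds only after a semi-random number of presses,
# bounded between 2 and 8.
import random

class UnreliableButton:
    MIN_CLICKS, MAX_CLICKS = 2, 8

    def __init__(self, rng=random):
        # Draw how many presses this action will require.
        self.required = rng.randint(self.MIN_CLICKS, self.MAX_CLICKS)
        self.presses = 0

    def press(self):
        """Register one press; return True once the action succeeds."""
        self.presses += 1
        return self.presses >= self.required
```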
Participants and Procedure
In total, 48 participants (16 female, 32 male) aged between 19 and 26 years (M=22.09, SD=1.89), all undergraduate students, voluntarily participated in the experiment. Each participant was assigned to either group A (high ADS performance) or group B (low ADS performance); potential differences between the groups regarding gender and age were counterbalanced. No participant had to be excluded due to simulator sickness or technical problems. After completing a short questionnaire assessing demographics, each subject conducted a 3-minute test drive to become familiar with the AV. Then, we instructed participants that they
Figure 2: Study procedure: The top row represents the drives with low, the bottom row drives with high ADS performance. The red color indicates decreased qualities (finger: usability, tablet: aesthetics, driving simulator: performance).
will experience four different types of AVs with different IVISs. We further told them that manual braking interventions could be necessary due to automation failures, and that safely completing the drive has the highest priority. Afterwards, participants completed four consecutive 5-minute trips, experiencing the four different IVISs in randomized order. Within each condition, participants had to complete seven tasks on the IVIS with two levels of complexity. Easy tasks consisted of a single instruction only (such as "call John"), while complex tasks required participants to remember multiple steps (such as "switch to Radio Disney Channel and adjust the volume to 8"). The task instructions were presented auditorily (pre-recorded sound files). Successful completion of a task was indicated with a notification sound, and the next task was issued 35 seconds afterwards. In case all seven tasks were completed before the end of the 5-minute drive, the experimental condition was stopped early. The selection of tasks from the set was randomized over the conditions, and quasi-randomized within the scenarios (each task was only presented once during the entire experiment). After each condition, participants had to complete a survey including a set of different standardized scales to assess trust and UX in the AV (see Figure 2), whereby we repeatedly instructed them to assess the AV as a whole, single system based on their experiences. Additionally, a short semi-structured interview with all participants was conducted after the experiment to reveal further insights into their thoughts and attitudes. The whole experiment lasted approx. 90 minutes per participant.
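The task randomization described above (each task used exactly once over the whole experiment, seven per condition) can be sketched as follows. Names and the pool layout are our assumptions:

```python
# Illustrative sketch of dealing tasks from a pool so that every task
# appears exactly once across the experiment, seven per condition.
import random

def deal_tasks(task_pool, n_conditions=4, per_condition=7, rng=random):
    """Shuffle the pool and split it into one batch per condition."""
    tasks = list(task_pool)
    assert len(tasks) == n_conditions * per_condition
    rng.shuffle(tasks)
    return [tasks[i * per_condition:(i + 1) * per_condition]
            for i in range(n_conditions)]
```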
Data Collection
To be able to evaluate the proposed research questions, we triangulated a set of subjective and objective measures derived from established theory, as described in the following.
Subjective Measures. To assess UX and trust, we utilized multiple subjective scales. We used the AttrakDiff mini [ ] with a 7-point semantic differential scale ranging from 0 (low) to 6 (high). Thereby, the subscale attractiveness (ATT), consisting of two items for beauty and goodness, assesses the overall perception combining both pragmatic (PQ) and hedonic quality (HQ). Since Cronbach's α resulted in acceptable values for all subscales (α > .60, see Table 1), we calculated mean scale values. All UX qualities are intercorrelated (Pearson's correlation coefficient), ranging from r=.412 to r=.880 across all conditions. HQ and PQ showed the lowest (r<.60), and HQ and ATT the highest (r>.60) intercorrelations. As UX is also dependent on the satisfaction of psychological needs [ ], we further utilized the need scale (same version as used in [ ], with a 7-point Likert scale) and focused on the needs most relevant in the context of AVs: autonomy (AUT), competence (COM), stimulation (STI), and security (SEC) [ ]. Here too, the reliability of all subscales was acceptable (α > .70, see Table 1). Intercorrelations between the subscales across all conditions ranged from r=.26 to r=.81. Further, system interaction leads to particular (positive and negative) emotions [ ] resulting from need fulfillment [ ]. Thus, we included the short version of the PANAS, also with a 7-point Likert scale [ ]. PA and NA did not correlate (r<.12) and the reliability of all subscales was acceptable (α > .70, see Table 1).
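The Cronbach's α reliability checks reported above follow the standard formula α = k/(k−1) · (1 − Σ item variances / variance of sum scores). A minimal, dependency-free sketch (illustrative, not the authors' analysis script):

```python
# Cronbach's alpha for internal-consistency reliability of a scale.
# `items` is a list of item-score lists, one inner list per item,
# aligned across participants.
def cronbach_alpha(items):
    k = len(items)

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Sum score per participant across all items.
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(var(it) for it in items)
    return k / (k - 1) * (1 - item_var / var(totals))
```

Perfectly parallel items yield α = 1; inconsistent items drive α down (it can even turn negative).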
To evaluate subjective trust, we used the trust scale provided by Jian et al. [ ]. This scale consists of two subscales for trust (T) and distrust (DT) (7-point Likert) and is widely used to assess trust in automation or robotic systems [ ]. Here as well, Cronbach's α resulted in acceptable values, while T and DT showed a strong negative correlation (|r| > .80).
Objective Measures. Galvanic Skin Response (GSR) is commonly used as an indicator for the sympathetic nervous system. Changes in skin conductance have been linked to arousal [ ], (cognitive) workload [ ], usability [ ], user experience [ ], but also trust [ ]. Signal peaks, so-called Skin Conductance Responses (SCRs), indicate such activation, while the general signal level is subject to bias
Dep. Variable | Items | Cronbach's α | Ref.
UX Qualities:
Attractiveness (ATT) | 2 (Beauty and Goodness) | .65 | [29]
Pragmatic Q. (PQ) | 4 | .77 | [29]
Hedonic Q. (HQ) | 4 | .79 | [29]
Needs:
Autonomy (AUT) | 3 | .84 | [27, 64]
Competence (COM) | 3 | .86 | [27, 64]
Stimulation (STI) | 3 | .83 | [27, 64]
Security (SEC) | 2 | .77 | [27, 64]
Affect:
Positive (PA) | 5 | .75 | [74]
Negative (NA) | 5 | .85 | [74]
Trust:
Trust (T) | 6 | .91 | [34]
Distrust (DT) | 5 | .87 | [34]
Table 1: Summary of subjective methods employed.
by individual dierences, room temperature, etc. [
]. We
utilized a professional 500 Hz physiological measurement
system from g.tec medical engineering ( and
attached two skin electrodes to the volar (inner) middle pha-
langes (muscle limbs) of the non-dominant hand’s middle
and ring ngers (see guidelines by [
]). Since GSR is sensi-
tive to motion artifacts, we instructed participants to behave
naturally but also to prevent waving their hand excessively.
We used Ledalab for Matlab [
] to extract all SCRs since the
implemented Continuous Decomposition Analysis (CDA) is
supposed to be more robust at discriminating single SCRs
than traditional peak-detection methods [
]. For the evalua-
tion, we utilized the number of SCRs, which is argued to be
less aected by individual dierences and other forms of bias
]. To evaluate driving behavior, we recorded participants’
brake pedal actuation and calculated three parameters – the
number of brakes representing the quantity of manual in-
terventions, the average duration of a brake pedal actuation,
and the average brake intensity (on a scale from 0 to 1).
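The three brake-pedal parameters can be computed from a sampled pedal signal roughly as follows. This is a sketch under an assumed signal format; the threshold, sampling period, and the use of the per-actuation peak as "intensity" are our assumptions:

```python
# Sketch: derive (number of brakes, mean actuation duration, mean
# intensity) from a sampled pedal position signal in [0, 1].
def brake_parameters(pedal, dt=0.002, threshold=0.0):
    events = []  # (duration_s, peak_intensity) per actuation
    active, samples, peak = False, 0, 0.0
    for p in pedal:
        if p > threshold:          # pedal pressed
            active, samples, peak = True, samples + 1, max(peak, p)
        elif active:               # pedal released: close the actuation
            events.append((samples * dt, peak))
            active, samples, peak = False, 0, 0.0
    if active:                     # actuation still open at signal end
        events.append((samples * dt, peak))
    if not events:
        return 0, 0.0, 0.0
    n = len(events)
    return (n,
            sum(d for d, _ in events) / n,
            sum(i for _, i in events) / n)
```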
Results
In the following, we present a detailed analysis of the collected data with respect to our research questions (all results with p < .05 are reported as statistically significant). Since tests for normality (Shapiro-Wilk's, p > .05), marginal existence of outliers, and homogeneity of error variances assessed by Levene's test (p > .05) were passed for all dependent variables (except for driving performance, see Table 1), parametric tests were applied. We performed three-way mixed ANOVAs with the independent variables ADS performance as between-subjects, and IVIS usability and aesthetics as within-subjects factors. As the collected driving performance measures did not follow a normal distribution, non-parametric tests (Mann-Whitney U tests for the between-, and Wilcoxon signed-rank tests for the within-subjects factors) were applied. To analyze correlations between the subjective constructs of UX and trust, we conducted Pearson's bivariate correlation analyses.
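The Pearson bivariate correlations named above use the standard product-moment formula; a dependency-free sketch (illustrative only, not the authors' analysis script):

```python
# Pearson's product-moment correlation coefficient r for two aligned
# score lists (e.g., per-condition UX and trust ratings).
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```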
User Experience (RQ1)
To answer RQ1, we analyzed the data of the UX scales (UX Qualities, Needs, and Affect) as well as the objective data on participants' arousal given by GSR. Concerning multivariate test statistics, we utilized Pillai's Trace.
UX alities. Multivariate tests evaluating the impact of ADS
performance, IVIS aesthetics and usability on participants’
perception of product quality (measured by AttrakDi) re-
veals no signicant main eect for the between-subject fac-
tor ADS performance (
). How-
ever, separate univariate ANOVAs on the outcome vari-
ables show a signicant eect for pragmatic quality (PQ,
). Results for high ADS perfor-
mance were perceived as signicantly better than for low
performance conditions. Ratings for attractiveness (ATT,
Goodness and Beauty) and hedonic quality (HQ) did not
dier signicantly (see Table 2). Additionally, multivariate
tests reveal that the overall perceived system quality sig-
nicantly diers regarding IVIS usability (
6.47,p< .001
). Univariate tests conrm a signicant eect
for ATT (
). Regarding the items
“Goodness” and “Beauty” separately, there is only a signi-
cant eect on “Goodness” (
F(1,46)=25.45,p< .001,η2=.36
). Also
PQ (
F(1,46)=25.36,p< .001,η2=.36
) and HQ (
) diered signicantly. Thus, systems with good
IVIS usability were perceived better than those with bad
IVIS usability across all conditions. Further, we can report
a signicant main eect for the within-subject factor IVIS
aesthetics (
V=.57,F(5,42)=11.24,p< .001
). Univariate tests re-
veal signicant eects for ATT (
F(1,46)=50.22,p< .001,η2=.52
Here, both items, “Goodness” (
F(1,45)=20.22,p< .001,η2=.30
and “Beauty”(
F(1,46)=58.23,p< .001,η2=.56
), show signi-
cant eects. Also PQ (
F(1,46)=28.29,p< .001,η2=.38
) and HQ
F(1,46)=52.44,p< .001,η2=.53
). Thus, across all conditions the
nice IVIS was rated better than the ugly IVIS. Moreover, our
data conrms the inference model [
] – better aesthet-
ics leads to a signicantly higher ratings for goodness and
therewith higher ratings for PQ, and not only beauty (evalua-
tive consistency). However, no two or three-way interaction
eects could be revealed.
Needs. For users’ need fulllment of Autonomy (AUT), Com-
petence (COM), Stimulation (STI), and Security (SEC), we
can report a signicant eect for ADS performance regard-
ing the multivariate test statistic (
). Univariate tests reveal only signicant dierences in par-
ticipants’ need of SEC (
), which
was less fullled in the group with the low ADS perfor-
mance. Multivariate tests show a signicant main eect
for IVIS usability (
), univari-
ate tests resulted in a signicant decrease of SEC in case
95% Condence Interval
Dep. Variable Ind. Variable M SD lower upper
ADS performance
PQ high 4.17 0.18 3.81 4.53
low 3.42 0.18 3.06 3.79
IVIS usability
ATT good 3.36 1.01 3.07 3.65
bad 2.96 1.14 2.62 3.30
Goodness good 3.74 1.23 3.39 4.09
bad 2.96 1.45 2.54 3.37
PQ good 4.10 0.92 3.86 4.34
bad 3.49 1.14 3.17 3.81
HQ good 2.97 0.83 2.72 3.21
bad 2.68 0.89 2.42 2.94
IVIS aesthetics
ATT nice 3.72 1.06 3.40 4.04
ugly 2.60 1.10 2.24 2.97
Beauty nice 3.72 1.34 3.39 4.05
ugly 2.24 1.37 3.39 4.09
Goodness nice 3.72 1.34 3.34 4.10
ugly 2.98 1.38 2.58 3.38
PQ nice 4.05 0.97 3.78 4.32
ugly 3.54 1.04 3.26 3.82
HQ nice 3.38 0.80 3.14 3.61
ugly 2.27 1.10 1.95 2.59
Table 2: Signicant UX Quality Values.
of bad IVIS usability, and likewise of COM. Further, also for the within-subjects factor IVIS aesthetics, a significant main effect could be revealed. Regarding univariate tests, we can observe effects for the needs of STI, AUT, SEC, and COM. Thereby, all these needs are significantly less fulfilled when driving in an AV with the ugly IVIS (see Table 3 for means). Here, data analysis did not reveal any two- or three-way interaction effects.
95% Condence Interval
Dep. Variable Ind. Variable M SD lower upper
ADS performance
SEC high 3.24 1.20 2.79 3.69
low 2.10 0.25 1.65 2.56
IVIS usability
SEC good 2.82 1.34 2.47 3.18
bad 2.52 1.23 2.20 2.84
COM good 3.13 1.37 2.73 3.52
bad 2.79 1.36 2.40 3.18
IVIS aesthetics
AUT nice 2.48 1.34 2.09 2.86
ugly 2.30 1.41 1.89 2.71
STI nice 2.77 1.12 2.45 3.09
ugly 2.40 1.27 2.04 2.76
SEC nice 2.82 1.28 2.48 3.16
ugly 2.53 1.30 2.19 2.87
COM nice 3.07 1.26 2.71 3.44
ugly 2.84 1.43 2.43 3.26
Table 3: Signicant Need Values.
Aect. Participants’ positive (PA) and negative aect (NA)
revealed a signicant main eect for ADS performance, (
.36,F(2,45)=11.33,p< .001
). A look at univariate tests revealed
that NA for the low ADS performance is signicantly higher
than for the high ADS performance condition(
), while PA was not aected. Regarding the within-
subject factor IVIS usability, we can observe similar results.
Multivariate tests reveal a signicant main eect (
), however, also here only NA showed dierences
in case IVIS usability is bad (
Contrarily, IVIS aesthetics, which also has a signicant main
eect (
), shows signicant dif-
ferences for both PA (
) and NA
). Thereby, PA is slightly (but
still signicantly) higher for the nice IVIS aesthetics in con-
trast to the ugly IVIS variants. Again, no two or three-way
interaction eects could be revealed (see Table 4 for means).
95% Condence Interval
Dep. Variable Ind. Variable M SD lower upper
ADS performance
NA high 1.04 0.16 0.64 1.44
low 2.40 0.23 2.00 2.80
IVIS usability
NA good 1.58 1.26 1.27 1.89
bad 1.86 1.22 1.57 2.15
IVIS aesthetics
PA nice 2.98 0.98 2.69 3.27
ugly 2.80 1.05 2.49 3.11
NA nice 1.57 1.21 1.27 1.86
ugly 1.87 1.27 1.56 2.18
Table 4: Signicant Aect Values.
Arousal. Analysis of GSR data revealed a significant main effect for the within-subject factor IVIS usability. Bad IVIS usability leads to significantly more peaks, and thus more arousal, than good usability. We can further observe a two-way interaction effect for IVIS usability and ADS performance. Descriptive statistics show that if ADS performance is low and IVIS usability is bad, participants are significantly more aroused than if ADS performance is high and IVIS usability is good. However, when ADS performance is low although the IVIS usability is good, the number of GSR peaks also increases. No further main effects for ADS performance or IVIS aesthetics, and no further two- or three-way interaction effects, could be revealed by our statistical analysis (see Table 5 for descriptive statistics).
95% Condence Interval
Dep. Variable Ind. Variable M SD lower upper
IVIS usability
Peaks good 203.06 65.66 183.99 222.14
bad 220.45 78.46 196.84 244.07
ADS performance x IVIS usability
Peaks high & good 194.20 67.20 167.22 221.18
high & bad 223.95 81.00 190.55 257.35
low & good 211.93 63.00 184.95 238.90
low & bad 216.95 77.71 183.55 250.35
Table 5: Signicant Arousal Values.
Trust (RQ2)
To answer RQ2, we analyzed the subjective trust ratings as
well as participants’ braking behavior.
Trust Scale. Multivariate data analysis (using Pillai's Trace) of users' trust (T) and distrust (DT) revealed a significant main effect for ADS performance. Univariate tests on the dependent variables show significant effects for T (F(1,46) = 18.07, p < .001, η² = .28) and DT (F(1,46) = 15.09, p < .001, η² = .25). While T is decreasing in conditions of low ADS performance, DT is increasing. Contrarily, T is increasing for high ADS performance and DT decreasing. We can report another main effect for IVIS usability. Also here, significant effects for T (F(1,46) = 14.54, p < .001, η² = .24) and DT are visible. Descriptive data shows similar effects as for the between-subject factor ADS performance. Further, also IVIS aesthetics shows a significant main effect. However, here only DT could be significantly decreased by a nice IVIS interface (F(1,46) = 12.58, p = .001, η² = .22); see Table 6.
95% Condence Interval
Dep. Variable Ind. Variable M SD lower upper
ADS performance
T high 3.91 0.21 3.34 4.36
low 2.55 0.24 2.09 3.00
DT high 2.34 0.20 1.90 2.79
low 3.56 0.24 3.11 4.01
IVIS usability
T good 3.40 1.31 3.08 3.72
bad 3.06 1.35 2.71 3.40
DT good 2.80 1.26 2.48 3.12
bad 3.10 1.31 2.76 3.44
IVIS aesthetics
DT nice 2.81 1.26 2.49 3.13
ugly 3.09 1.28 2.76 3.43
Table 6: Signicant Trust Values.
Braking Behavior. Since braking data was not normally distributed, we performed non-parametric tests. Mann-Whitney U tests with Bonferroni correction (α = .0125) were conducted to confirm expected differences in braking behavior between low and high ADS performance. All braking parameters are, across all IVIS conditions, significantly higher in conditions with low than with high ADS performance (see Table 7).
To compare the impact of the IVIS on braking behavior, we calculated separate Friedman tests for low and high ADS performance with Bonferroni correction (α = .008). The number of brake actions differs significantly only for the group with low ADS performance. Post-hoc analysis revealed significant differences only between good & nice and bad & nice (p = .022), which led to more brake actions. Further, also braking duration is significantly different in conditions with low ADS performance. Post-hoc analysis revealed a significant difference only between good & nice, which shows the lowest braking duration median, and bad & nice, with the highest braking duration median (p = .005), and additionally between good & nice and good & ugly (p = .022). For braking intensity, no significant effects could be revealed.

                          ADS performance       Test Statistic
            IVIS          Mdn (high)  Mdn (low)  Mann-Whitney U test
Number      bad & nice        0        5*        U = 510, z = 4.73, p < .001
            bad & ugly        0        5         U = 515, z = 4.83, p < .001
            good & nice       0        7*        U = 530, z = 5.12, p < .001
            good & ugly       0        6         U = 499, z = 4.47, p < .001
Duration    bad & nice        0        2.65*     U = 511, z = 4.72, p < .001
            bad & ugly        0        2.47      U = 491, z = 4.30, p < .001
            good & nice       0        1.99*     U = 438, z = 3.16, p < .002
            good & ugly       0        2.42*     U = 511, z = 4.70, p < .001
Intensity   bad & nice        0        .72       U = 533, z = 5.19, p < .001
            bad & ugly        0        .72       U = 536, z = 5.25, p < .001
            good & nice       0        .55       U = 515, z = 4.79, p < .001
            good & ugly       0        .65       U = 519, z = 5.19, p < .001

Table 7: Braking Behavior. Significances between variables are indicated by *.
User Experience x Trust (RQ3)
To evaluate a potential correlation between the constructs UX and trust, we ran bivariate Pearson correlation analyses of averaged correlation coefficients after Fisher's Z-transformation (see Table 8). Thereby, we applied Bonferroni correction and adjusted the significance level to α = .016.
                  Trust (T)   Distrust (DT)
UX Qualities
  ATT               .50*        -.48*
  Beauty            .27         -.25
  Goodness          .59*        -.57*
  HQ                .37*        -.36*
  PQ                .68*        -.66*
Needs
  AUT               .25         -.16
  COM               .30*        -.18
  STI               .29         -.25
  SEC               .75*        -.74*
Affect
  PA                .06          .04
  NA               -.79*         .80*

Table 8: Averaged correlations between measures after z-transformation. Significances are indicated by *.
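The averaging procedure named above can be made concrete in a minimal sketch (the per-condition correlation values below are invented for illustration): each r is transformed with Fisher's Z (atanh), the z-values are averaged, and the mean is mapped back with tanh:

```python
import math

def average_correlations(rs):
    """Average correlation coefficients via Fisher's Z-transformation:
    z = atanh(r) is averaged and mapped back with tanh, which is less
    biased than averaging the r-values directly."""
    zs = [math.atanh(r) for r in rs]
    return math.tanh(sum(zs) / len(zs))

# Bonferroni-adjusted alpha; the paper reports .016 (i.e., .05/3 truncated).
alpha = 0.05 / 3

# Invented per-condition correlations between PQ and trust:
r_mean = average_correlations([0.62, 0.71, 0.68, 0.70])
```

Because atanh is convex on (0, 1), the back-transformed mean differs slightly from a plain arithmetic mean of the r-values.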
Correlations. Participants' product quality perceptions show correlations with the constructs trust (T) and distrust (DT; see Table 8). Although the overall perceived attractiveness (ATT) and almost all sub-components correlate positively with T and negatively with DT, the sole perception of beauty does not correlate significantly with T or DT. Regarding correlations of participants' psychological needs, we can observe a significant positive correlation of the need for security (SEC) with T, and a negative correlation with DT. The need for competence (COM) correlates positively with T. Moreover, only negative affect (NA) correlates negatively with T and positively with DT. Arousal and the construct trust do not correlate across all conditions. Also, no correlation could be identified between arousal and braking behavior.
Semi-structured Interviews. Semi-structured interviews (translated from German) confirm a correlation between perceived pragmatic quality and trust. Thereby, also participants in the group with low ADS performance expressed that they trusted the system with good IVIS usability most: "I would trust most in the ADS with a running infotainment system. If this is running I can also concentrate on other things around because I know this works" (P3, low ADS performance). Several participants mentioned the distraction from monitoring the ADS as a reason for decreased comfort and trust in the condition with bad IVIS usability. For some participants, the influence of usability and aesthetics on trust was conscious, e.g., "the whole vehicle has to look appealing and of high-quality that I agree to drive automated. The whole concept needs to be harmonious." (P13, high ADS performance). Others, in contrast, could not identify why they trusted most in the ADS with the good and nice IVIS. For example, one participant in the low ADS performance condition rated the ADS with good and nice IVIS as most trustworthy, yet reasoned "because the AV performed best here" (P5, low ADS performance) – actually, automation performed equally well in all conditions he experienced. Participants experiencing high ADS performance expressed that their trust increased gradually from the beginning of the experiment to the end: "At the beginning I was nervous while solving the tasks and I looked always on the street. In the end I relied on that the ADS is working" (P1, high ADS performance). Another participant stated: "The longer I tested the system, the more I trusted in it. The system I trusted most was the AV used in the second drive (nice and good), the interface of the IVIS was the most beautiful. My overall experience was impacted by it, thus, I also trusted more in this AV" (P21, high ADS performance).
Discussion

In the following, we discuss the RQs and derive implications for AV research and development. Regarding RQ1, all independent variables show influence on multiple UX qualities. Especially the large influence of visual design on UX regarding users' higher goals confirms results from previous studies investigating the "halo effect" of usability and aesthetics in the context of AD. As ADS performance solely affected pragmatic aspects and thereby only the negative affect (probabilistic consistency), we can assume objective system performance to be a hygiene factor for UX. Experience is only negatively affected if high system performance cannot be achieved.
Regarding trust (RQ2), ADS performance led to different results for both trust and distrust, which is also visible for the within-subject factor usability (probabilistic consistency). Aesthetics affected only distrust (with respect to our study sample). Thus, trust cannot be increased by good design alone; however, distrust can be decreased. This can be regarded as evaluative consistency, as there is no direct relation. The mutual influence of the independent variables on subjective trust indicates that users hardly differentiate between (for the driving task) more (ADS performance) and less (IVIS) important subfunctions (what Lee and See refer to as "low functional specificity" [45]). Still, we see a clear connection of perception and actual behavior. When looking at driving behavior, we can see that when UX aspects were degraded, participants actuated the brakes longer, thus decelerated to lower speeds and drove more carefully (this statement can be made as braking intensity did not differ; longer braking actions with similar intensity consequently lead to lower driving speed).
Correlation analysis further confirmed the familiarity of both constructs (RQ3). All UX quality dimensions (besides the perception of beauty), the psychological need for security, and negative affect were correlated with trust/distrust. The influence of usability/aesthetics on trust was further emphasized in the semi-structured interviews, even though some participants were not conscious of the impact. Our results also do not rely on subjective data only: the obtained GSR data shows that impairment of ADS performance and usability led to significantly higher arousal. Considering our results, we suggest the following recommendations for researchers and designers of automated driving systems:
Creating Public Awareness about System Complexity. The mutual influence of all variables reveals a huge problem – halo effects and low functional specificity considering trust confirm that it is hard for users to (at least initially) assess an AV based on objective characteristics. This is a known issue for interactive products; however, for AVs, the resulting negative effects might be dramatic. For example, falsely inferring trustworthiness from design aspects due to evaluative consistency could quickly lead to hazardous situations, and the safety-critical environment simply does not allow longer system exposure and real-life experiences mediating this effect later on. Public authorities and/or vehicle manufacturers must thus create awareness, for example by adapting teaching practices in driving schools, public campaigns, etc.
More Sophisticated Study Design and Evaluation. We highly recommend trust researchers to design studies addressing trust more carefully, especially regarding evaluation methods. We recommend including UX measurements in studies that rely on subjective trust scales to better distinguish the outcomes of the objective properties of trust (performance aspects) from design aspects. When evaluating HMIs for trust calibration, we recommend using a minimalist design to reduce the influence of design aspects or, even better, evaluating the same concept with varying degrees of aesthetics to see if the desired effects are independent of the actual implementation. UX researchers, on the other hand, should more carefully consider the consequences of reporting trust in their studies, particularly when evaluating safety-critical systems such as AVs. Instead of seeing trust "just as another factor of UX", we urge them to regard the concept of trust/reliance in relation to system capabilities, as well as the danger of overtrust.
ADS Performance as Hygiene Factor. UX and trust are impaired in all conditions of unreliable ADS performance. Thus, the primary objective should be to improve automation, and such improvements should become an integral part of the user interface. As the need for security seems to be most relevant for ADSs, the success of AVs will depend on the introduction of higher levels of automation where monitoring is no longer needed. Recent studies conducted on real test tracks indicate that many drivers are not capable of intervening in upcoming crash situations despite eyes on the road and hands on the wheel. A valid strategy could be to not offer vehicles operating at SAE level 2, which is of interest to the automotive companies, but unfortunately difficult to achieve given the imperfections of the existing technology.
Don't Sell a Wolf in Sheep's Clothing. Vehicle designers should carefully consider halo effects, and it must be prevented that users get the impression that systems perform better than they actually do. Theoretically, our results could suggest that systems should be designed with bad usability and low aesthetics to reduce the chance of overtrust. However, it is clear that vehicle manufacturers aim for maximizing UX qualities to maintain competitiveness and enthuse customers for their products. This is also necessary to achieve broad acceptance/proliferation of ADSs on the market. Thus, they urgently need to take other methods into account to better communicate performance aspects to users. Manufacturers of ADSs should immediately include solutions that have already been suggested to approach the problem – such as making their systems transparent for the users by communicating system decisions and uncertainties, or behavioral measures to avoid misuse (such as preventing automation from being enabled in environments it was not designed for).
Limitations

The presented work has some limitations. As differences between age groups concerning ADS experience exist, which are in particular related to the need for security and trust, future research needs to address this issue by involving a more heterogeneous user group. Thereby, age, cultural background, or personality must be included to achieve more generalizable results. Another limitation of our study is the simulation environment. Although many studies addressing trust are conducted with driving simulators, their results must be interpreted cautiously. Also, the IVIS implemented on a tablet computer was only an example, and since we could reveal a strong influence of non-performance-based aspects (such as aesthetics), other interfaces present in our simulator might have influenced results too. Future work thus needs to build on our results and conduct studies in real AV prototypes and under authentic road conditions. Further, the impact of in-vehicle technology that supports non-driving related tasks, but also of unobtrusive interfaces for trust calibration, like light designs [52, 54], should be looked at in detail.
Conclusion

In this paper, we have investigated the mutual influence of drivers' trust and user experience in automated vehicles. We were interested in how subjective trust and UX correlate when modifying relevant parameters of both constructs (here: system performance of the AV, representing the most important criterion for appropriately calibrated trust, as well as usability and aesthetics of an IVIS as relevant UX parameters). Results of a driving simulator study, where 48 participants had to safely complete drives in an AV at level 2 while performing tasks on an IVIS, confirm that UX and trust influence each other and correlate. Participants were not able to solely adjust their trust levels to an objective measure of trustworthiness (system performance), as their judgment was strongly influenced by the UX of the IVIS (and vice versa). Variations of the investigated independent variables significantly affected both constructs of trust and UX. The study further confirms the existence of so-called "halo effects" in the context of AVs, which is an important finding as overtrust/overreliance has already led to fatal accidents. Research investigating methods aiming to deal with trust issues should, thus, not only rely on subjective measurements of trust, but also consider and include user experience measures. Our study shows that level 2 driving may not be safely possible without making system performance accessible to drivers. Otherwise, the influence of design features could hinder drivers' ability to judge the trustworthiness of automated vehicles with the necessary objectivity.
Acknowledgments

We applied the SDC approach for the sequence of authors. This work is supported under the FH-Impuls program of the German Federal Ministry of Education and Research, Grant Number 13FH7I01IA (SAFIR).
References

Lisanne Bainbridge. 1983. Ironies of automation. In Analysis, Design and Evaluation of Man–Machine Systems 1982. Elsevier, 129–135.
Hanna Bellem, Barbara Thiel, Michael Schrauf, and Josef F Krems. 2018. Comfort in automated driving: An analysis of preferences for different automated driving styles and their dependence on personality traits. Transportation Research Part F: Traffic Psychology and Behaviour 55 (2018), 90–100.
Johannes Beller, Matthias Heesen, and Mark Vollrath. 2013. Improving the driver–automation interaction: An approach using automation uncertainty. Human Factors 55, 6 (2013), 1130–1141.
Mathias Benedek and Christian Kaernbach. [n. d.]. Ledalab Software.
Mathias Benedek and Christian Kaernbach. 2010. A continuous measure of phasic electrodermal activity. Journal of Neuroscience Methods 190, 1 (2010), 80–91.
Wolfram Boucsein. 2012. Electrodermal Activity. Springer Science & Business Media.
Wolfram Boucsein, Don C. Fowles, Sverre Grimnes, Gershon Ben-Shakhar, Walton T. Roth, Michael E. Dawson, and Diane L. Filion. 2012. Publication recommendations for electrodermal measurements. Psychophysiology 49, 8 (2012), 1017–1034.
JJ Braithwaite, DG Watson, Robert Jones, and Mickey Rowe. 2013. A Guide for Analysing Electrodermal Activity (EDA) & Skin Conductance Responses (SCRs) for Psychological Experiments. (2013), 1–42.
K. A. Brookhuis, D. de Waard, and S. H. Fairclough. 2003. Criteria for driver impairment. Ergonomics 46, 5 (2003), 433–445.
James D Brown and Warren J Huffman. 1972. Psychophysiological measures of drivers under actual driving conditions. Journal of Safety Research 4, 4 (1972), 172–178.
C Collet, A Clarion, M Morel, A Chapon, and C Petit. 2009. Physiological and behavioural changes associated to the management of secondary tasks while driving. Applied Ergonomics 40, 6 (2009), 1041–
SAE On-Road Automated Vehicle Standards Committee et al. 2014. Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems. SAE International (2014).
SAE On-Road Automated Vehicle Standards Committee et al. 2016. Human Factors Definitions for Automated Driving and Related Research Topics. SAE International (2016).
Pieter Desmet and Paul Hekkert. 2007. Framework of product experience. International Journal of Design 1, 1 (2007).
Pieter MA Desmet. 2012. Faces of product pleasure: 25 positive emotions in human-product interactions. International Journal of Design.
David Alexander Dickie and Linda Ng Boyle. 2009. Drivers' understanding of adaptive cruise control limitations. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 53. Sage Publications Sage CA: Los Angeles, CA, 1806–1810.
Verena Distler, Carine Lallemand, and Thierry Bellet. 2018. Acceptability and Acceptance of Autonomous Mobility on Demand: The Impact of an Immersive Experience. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 612.
Florian N Egger et al. 2001. Affective design of e-commerce user interfaces: How to maximise perceived trustworthiness. In Proc. Intl. Conf. Affective Human Factors Design. 317–324.
Mica R Endsley. 2017. Autonomous driving systems: a preliminary naturalistic study of the Tesla Model S. Journal of Cognitive Engineering and Decision Making 11, 3 (2017), 225–238.
Anna-Katharina Frison, Laura Aigner, Philipp Wintersberger, and Andreas Riener. 2018. Who is Generation A? Investigating the Experience of Automated Driving for Different Age Groups. AutomotiveUI '18, September 23-25, 2018, Toronto, Canada (2018), in press.
Anna-Katharina Frison, Philipp Wintersberger, Andreas Riener, and Clemens Schartmüller. 2017. Driving Hotzenplotz: A Hybrid Interface for Vehicle Control Aiming to Maximize Pleasure in Highway Driving. AutomotiveUI '17 Adjunct, September 24-27, 2017, Oldenburg, Germany (2017), pp. 6.
Eva Ganglbauer, Johann Schrammel, Stephanie Deutsch, and Manfred Tscheligi. 2009. Applying psychophysiological methods for measuring user experience: possibilities, challenges and feasibility. In Workshop on User Experience Evaluation Methods in Product Development. Citeseer.
Marc Hassenzahl. 2003. The thing and I: understanding the relationship between user and product. In Funology. Springer, 31–42.
Marc Hassenzahl. 2004. The interplay of beauty, goodness, and usability in interactive products. Human-Computer Interaction 19, 4 (2004).
Marc Hassenzahl. 2008. User experience (UX): towards an experiential perspective on product quality. In Proceedings of the 20th Conference on l'Interaction Homme-Machine. ACM, 11–15.
Marc Hassenzahl. 2018. The thing and I: understanding the relationship between user and product. In Funology 2. Springer, 301–313.
Marc Hassenzahl, Sarah Diefenbach, and Anja Göritz. 2010. Needs, affect, and interactive products – Facets of user experience. Interacting with Computers 22, 5 (2010), 353–362.
Marc Hassenzahl and Andrew Monk. 2010. The inference of perceived usability from beauty. Human–Computer Interaction 25, 3 (2010), 235–260.
Marc Hassenzahl, Annika Wiklund-Engblom, Anette Bengs, Susanne Hägglund, and Sarah Diefenbach. 2015. Experience-oriented and product-oriented evaluation: psychological need fulfillment, positive affect, and product perception. International Journal of Human-Computer Interaction 31, 8 (2015), 530–544.
Renate Häuslschmid, Max von Buelow, Bastian Pfleging, and Andreas Butz. 2017. Supporting Trust in Autonomous Driving. In Proceedings of the 22nd International Conference on Intelligent User Interfaces. ACM.
Kevin Anthony Hoff and Masooda Bashir. 2015. Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors 57, 3 (2015), 407–434.
ISO 9241-11:1998(en). 1998. Ergonomic requirements for office work with visual display terminals (VDTs) — Part 11: Guidance on usability. Standard. International Organization for Standardization, Geneva, CH.
Jiun-Yin Jian, Ann M Bisantz, and Colin G Drury. 2000. Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics 4, 1 (2000), 53–71.
Patrick W Jordan. 2003. Designing great stuff that people love. From Usability to Enjoyment (2003).
Theresa T Kessler, Cintya Larios, Tiani Walker, Valarie Yerdon, and PA Hancock. 2017. A Comparison of Trust Measures in Human–Robot Interaction Scenarios. In Advances in Human Factors in Robots and Unmanned Systems. Springer, 353–364.
Jinwoo Kim and Jae Yun Moon. 1998. Designing towards emotional usability in customer interfaces—trustworthiness of cyber-banking system interfaces. Interacting with Computers 10, 1 (1998), 1–29.
Jeamin Koo, Jungsuk Kwac, Wendy Ju, Martin Steinert, Larry Leifer, and Clifford Nass. 2015. Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. International Journal on Interactive Design and Manufacturing (IJIDeM) 9, 4 (2015), 269–275.
Johannes Maria Kraus, Jessica Sturn, Julian Elias Reiser, and Martin Baumann. 2015. Anthropomorphic agents, transparent automation and driver personality: towards an integrative multi-level model of determinants for effective driver-vehicle cooperation in highly automated vehicles. In Adjunct Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM.
Alexander Kunze, Stephen J Summerskill, Russell Marshall, and Ashleigh J Filtness. 2017. Enhancing driving safety and user experience through unobtrusive and function-specific feedback. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct. ACM, 183–189.
H Landin. 2009. Anxiety and trust and other expressions of interaction [Doctoral dissertation]. Göteborg, Sweden: Chalmers University of Technology (2009).
Bettina Laugwitz, Theo Held, and Martin Schrepp. 2008. Construction and evaluation of a user experience questionnaire. In Symposium of the Austrian HCI and Usability Engineering Group. Springer, 63–76.
Effie Lai-Chong Law. 2011. The measurability and predictability of user experience. In Proceedings of the 3rd ACM SIGCHI Symposium on Engineering Interactive Computing Systems. ACM, 1–10.
Effie Lai-Chong Law, Arnold POS Vermeeren, Marc Hassenzahl, and Mark Blythe. 2007. Towards a UX manifesto. In Proceedings of the 21st British HCI Group Annual Conference on People and Computers: HCI... but not as we know it – Volume 2. BCS Learning & Development Ltd.
John D Lee and Katrina A See. 2004. Trust in automation: Designing for appropriate reliance. Human Factors 46, 1 (2004), 50–80.
Eva Lenz, Sarah Diefenbach, and Marc Hassenzahl. 2014. Aesthetics of interaction: a literature synthesis. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational. ACM, 628–637.
Yung-Ming Li and Yung-Shao Yeh. 2010. Increasing trust in mobile commerce through design aesthetics. Computers in Human Behavior 26, 4 (2010), 673–684.
Tao Lin, Masaki Omata, Wanhua Hu, and Atsumi Imamiya. 2005. Do physiological data relate to traditional usability indexes? In Proceedings of the 17th Australia Conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future (2005), 1–10.
Gitte Lindgaard, Cathy Dudek, Devjani Sen, Livia Sumegi, and Patrick Noonan. 2011. An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages. ACM Transactions on Computer-Human Interaction (TOCHI) 18, 1 (2011), 1.
Todd Litman. 2017. Autonomous vehicle implementation predictions. Victoria Transport Policy Institute, Victoria, Canada.
Todd Litman. 2018. Autonomous vehicle implementation predictions. Victoria Transport Policy Institute, Victoria, Canada.
Andreas Löcken, Wilko Heuten, and Susanne Boll. 2015. Supporting lane change decisions with ambient light. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, 204–211.
Andrew Mackinnon, Anthony F Jorm, Helen Christensen, Ailsa E Korten, Patricia A Jacomb, and Bryan Rodgers. 1999. A short form of the Positive and Negative Affect Schedule: Evaluation of factorial validity and invariance across demographic variables in a community sample. Personality and Individual Differences 27, 3 (1999), 405–416.
Alexander Meschtscherjakov, Christine Döttlinger, Christina Rödel, and Manfred Tscheligi. 2015. ChaseLight: ambient LED stripes to control driving speed. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, 212–219.
Michael Minge and Manfred Thüring. 2018. Hedonic and pragmatic halo effects at early stages of user experience. International Journal of Human-Computer Studies 109 (2018), 13–25.
Drew M Morris, Jason M Erno, and June J Pilcher. 2017. Electrodermal Response and Automation Trust during Simulated Self-Driving Car Use. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 61. SAGE Publications Sage CA: Los Angeles, CA, 1759–
Bonnie M Muir. 1987. Trust between humans and machines, and the design of decision aids. International Journal of Man-Machine Studies 27, 5-6 (1987), 527–539.
Brittany E Noah, Philipp Wintersberger, Alexander G Mirnig, Shailie Thakkar, Fei Yan, Thomas M Gable, Johannes Kraus, and Roderick McCall. 2017. First Workshop on Trust in the Age of Automated Driving. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct. ACM, 15–21.
Donald A Norman. 2004. Introduction to this special section on beauty, goodness, and usability. Human-Computer Interaction 19, 4 (2004).
Ingrid Pettersson, Florian Lachner, Anna-Katharina Frison, Andreas Riener, and Andreas Butz. 2018. A Bermuda Triangle?. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 461.
Bastian Pfleging, Maurice Rang, and Nora Broy. 2016. Investigating user needs for non-driving-related activities during automated driving. In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia. ACM, 91–99.
Christina Rödel, Susanne Stadler, Alexander Meschtscherjakov, and Manfred Tscheligi. 2014. Towards autonomous cars: The effect of autonomy levels on acceptance and user experience. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, 1–8.
Martin Schrepp, Andreas Hinderks, and Jörg Thomaschewski. 2017. Construction of a Benchmark for the User Experience Questionnaire (UEQ). IJIMAI 4, 4 (2017), 40–44.
Kennon M Sheldon, Andrew J Elliot, Youngmee Kim, and Tim Kasser. 2001. What is satisfying about satisfying events? Testing 10 candidate psychological needs. Journal of Personality and Social Psychology 80, 2 (2001), 325.
Kennon M Sheldon, Richard Ryan, and Harry T Reis. 1996. What makes for a good day? Competence and autonomy in the day and in the person. Personality and Social Psychology Bulletin 22, 12 (1996).
David Sward and Gavin Macarthur. 2007. Making user experience a business strategy. In E. Law et al. (eds.), Proceedings of the Workshop on Towards a UX Manifesto, Vol. 3. 35–40.
Noam Tractinsky, Adi S Katz, and Dror Ikar. 2000. What is beautiful is usable. Interacting with Computers 13, 2 (2000), 127–145.
Alexandre N Tuch and Kasper Hornbæk. 2015. Does Herzberg's notion of hygienes and motivators apply to user experience? ACM Transactions on Computer-Human Interaction (TOCHI) 22, 4 (2015), 16.
Alexandre N Tuch, Sandra P Roth, Kasper Hornbæk, Klaus Opwis, and Javier A Bargas-Avila. 2012. Is beautiful really usable? Toward understanding the relation between usability, aesthetics, and affect in HCI. Computers in Human Behavior 28, 5 (2012), 1596–1607.
Heli Väätäjä, Tiina Koponen, and Virpi Roto. 2009. Developing practical tools for user experience evaluation: a case from mobile news journalism. In European Conference on Cognitive Ergonomics: Designing beyond the Product—Understanding Activity and User Experience in Ubiquitous Environments. VTT Technical Research Centre of Finland.
Paul van Schaik, Marc Hassenzahl, and Jonathan Ling. 2012. User-Experience from an Inference Perspective. ACM Trans. Comput.-Hum. Interact. 19, 2, Article 11 (July 2012), 25 pages.
Trent W Victor, Emma Tivesten, Pär Gustavsson, Joel Johansson, Fredrik Sangberg, and Mikael Ljung Aust. 2018. Automation Expectation Mismatch: Incorrect Prediction Despite Eyes on Threat and Hands on Wheel. Human Factors (2018), 0018720818788164.
Alan R Wagner, Jason Borenstein, and Ayanna Howard. 2018. Overtrust in the robotic age. Commun. ACM 61, 9 (2018), 22–24.
David Watson, Lee Anna Clark, and Auke Tellegen. 1988. Development and validation of brief measures of positive and negative affect: the PANAS scales. Journal of Personality and Social Psychology 54, 6 (1988).
Adam Waytz, Joy Heafner, and Nicholas Epley. 2014. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology 52 (2014), 113–117.
Philipp Wintersberger and Andreas Riener. 2016. Trust in technology as a safety aspect in highly automated driving. i-com 15, 3 (2016).
Philipp Wintersberger, Tamara von Sawitzky, Anna-Katharina Frison, and Andreas Riener. 2017. Traffic Augmentation as a Means to Increase Trust in Automated Driving Systems. In Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter. ACM, 17.
Dennis Wixon. 2011. Measuring fun, trust, confidence, and other ethereal constructs: it isn't that hard. Interactions 18, 6 (2011), 74–77.
Peter Wright, John McCarthy, and Lisa Meekison. 2003. Making sense of experience. In Funology. Springer, 43–53.
... Current infotainment system design accounts for these safety issues (Holstein et al., 2015), yet not fully elaborated user-interface interaction still leads to many accidents (Ramnath et al., 2019). This is tragic, because user interface design has a significant impact on trust and system perception (Frison et al., 2019). Immersive and intuitive in-car infotainment systems are therefore needed to enhance driver safety and the overall mobility experience. ...
... Regarding the goal of complexity reduction proposed by Lee & Ji (2018), Garzon (2012) emphasizes the importance of simplifying the execution of a preferred action (e.g., adjusting the volume) through shortcuts, while Frison et al. (2019) demonstrate that interaction design significantly influences trust and safety when users divert their attention from the road. On a more meta level of connectivity, Coutinho et al. (2018) provide design guidelines for the network protocols of infotainment systems, which cover the communication between different components of the system. ...
... They also reflect the goals, values, and context of the design domain, as well as any relevant regulations, standards, or best practices. Here, the design requirements are defined as interaction (DR1), drawing from AD4 and specifying the human-system interaction levels and modes to account for a seamless user experience (Frison et al., 2019; Lee & Ji, 2018; Pearl et al., 2016; Politis et al., 2018); attention (DR2), drawing from AD2/AD3 and specifying the attention guidance for non-distracting information display and representation (Blankenbach, 2019; Varala & Yammiyavar, 2018; Strayer et al., 2019); safety (DR3), drawing from AD1 and specifying the priority of superimposed safety alerts/messages/warnings; and context-awareness (DR4), drawing from AD5 and specifying the relevant functionalities of the information system and their contextual relevance to minimize the cognitive load of the user (Choi et al., 2019; Lee & Ji, 2018; Strohmann et al., 2019). ...
Conference Paper
Full-text available
The rise in distraction-related traffic accidents necessitates in-car infotainment systems that prioritize safety, as current systems often prioritize entertainment and convenience, contributing to the high number of distracted driving fatalities. Therefore, this paper applied a design science research approach to develop a design theory for Augmented Automotive Spaces that holistically consider safety, interface, interaction, and attention requirements. The main artifact is a design theory intended to help researchers and practitioners understand the design requirements and principles of intuitive, immersive, and attention-grabbing infotainment user interfaces that enhance safety and overall mobility experience. The design theory was further implemented in a prototypical reference scenario and positively evaluated for developmental usefulness, conciseness, explanatory power, and extendibility. Overall, the authors' work contributes to the research areas of user interface design, infotainment, automotive information systems, and safety measures as well as providing a comprehensive guideline for the development of immersive and safe automotive infotainment systems.
... Regarding the assessment of distraction, the visual demand of interfaces has proven to be an effective measure, as long glances away from the road (t>2 s) are directly correlated with increased crash risk [29]. As a result, the evaluation of IVIS in terms of usability and driver distraction is a well-researched topic [19,22,23,25,26]. However, there is a considerable gap between the academic research conducted to evaluate IVISs and the tools and methods available to the professionals in industry who eventually design these systems. ...
User Experience (UX) professionals need to be able to analyze large amounts of usage data on their own to make evidence-based design decisions. However, the design process for In-Vehicle Information Systems (IVIS) lacks data-driven support and effective tools for visualizing and analyzing user interaction data. Therefore, we propose ICEBOAT, an interactive visualization tool tailored to the needs of automotive UX experts to effectively and efficiently evaluate driver interactions with IVISs. ICEBOAT visualizes telematics data collected from production line vehicles, allowing UX experts to perform task-specific analyses. Following a mixed-methods, user-centered design (UCD) approach, we conducted an interview study (N=4) to extract the domain-specific information and interaction needs of automotive UX experts and used a co-design approach (N=4) to develop an interactive analysis tool. Our evaluation (N=12) shows that ICEBOAT enables UX experts to efficiently generate knowledge that facilitates data-driven design decisions.
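The 2 s off-road glance threshold cited in the excerpt above lends itself to a simple automated check on logged gaze data. The sketch below is purely illustrative (hypothetical log format and function name; not part of ICEBOAT or any cited tool):

```python
# Flag off-road glances longer than a threshold in a timestamped gaze log.
# Hypothetical log format: one (timestamp_in_seconds, on_road) tuple per sample.

def long_offroad_glances(samples, threshold=2.0):
    """Return (start, end) intervals of off-road glances exceeding threshold."""
    glances, start = [], None
    for t, on_road in samples:
        if not on_road and start is None:
            start = t                      # an off-road glance begins
        elif on_road and start is not None:
            if t - start > threshold:      # glance exceeded the 2 s criterion
                glances.append((start, t))
            start = None
    # Note: a glance still open at the end of the log is ignored in this sketch.
    return glances

log = [(0.0, True), (1.0, False), (3.5, True), (4.0, False), (5.0, True)]
print(long_offroad_glances(log))  # [(1.0, 3.5)]
```

The 2.5 s glance starting at t=1.0 is flagged, while the 1 s glance starting at t=4.0 passes the check.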
... Prior studies found that lack of experience with AV technology is a significant factor in the low acceptance of AVs among older adults (Haghzare et al., 2021a). Studies also found that there is a relationship between user experience (UX) of AVs and drivers' trust in AVs (Frison et al., 2019). In addition to trust issues, older adults with MMCI need passengers' support to drive safely. ...
Full-text available
The population of older Americans with cognitive impairments, especially memory loss, is growing. Autonomous vehicles (AVs) have the potential to improve the mobility of older adults with cognitive impairment; however, there are still concerns regarding AVs' usability and accessibility in this population. Study objectives were to (1) better understand the needs and requirements of older adults with mild and moderate cognitive impairments regarding AVs, and (2) create a prototype for a holistic, user-friendly interface for AV interactions. An initial (Generation 1) prototype was designed based on the literature and usability principles. Based on the findings of phone interviews and focus group meetings with older adults and caregivers (n = 23), an enhanced interface (Generation 2) was developed. This generation 2 prototype has the potential to reduce the mental workload and anxiety of older adults in their interactions with AVs and can inform the design of future in-vehicle information systems for older adults.
Explanations in automated vehicles help passengers understand the vehicle's state and capabilities, leading to increased trust in the technology. Specifically, for passengers of SAE Level 4 and 5 vehicles who are not engaged in the driving process, the enhanced sense of control provided by explanations reduces potential anxieties, enabling them to fully leverage the benefits of automation. To construct explanations that enhance trust and situational awareness without disturbing passengers, we suggest testing with people who ultimately employ such explanations, ideally under real-world driving conditions. In this study, we examined the impact of various visual explanation types (perception, attention, perception+attention) and timing mechanisms (constantly provided or only under risky scenarios) on passenger experience under naturalistic driving scenarios using actual vehicles with mixed-reality support. Our findings indicate that visualizing the vehicle's perception state improves the perceived usability, trust, safety, and situational awareness without adding cognitive burden, even without explaining the underlying causes. We also demonstrate that the traffic risk probability could be used to control the timing of an explanation delivery, particularly when passengers are overwhelmed with information. Our study's on-road evaluation method offers a safe and reliable testing environment and can be easily customized for other AI models and explanation modalities.
Autonomous vehicles (AV) are predicted to change our current transportation system, however, how and when they become fully adopted is still an uncertain matter. One essential aspect to consider is how people form trust towards AV. In the context of AV, trust in technology is critical for safety considerations. Although humans are capable of making instinctive assessments of the trustworthiness of other people, this ability does not directly translate to technological systems. The rising complexity of autonomous systems (AS) (e.g., cruise control) requires the operators to calibrate their trust in the system to achieve their safety and performance goals. As such, a detailed understanding of how trust develops, and especially the underlying mental processes, will facilitate the prediction of how trust levels influence behavior mode and decision-making strategies when interacting with AV. To investigate this in the context of AV, we conducted interviews and follow-up surveys to examine users’ current behavior with an analogous system (cruise control) and explored its relationship with the perception of AV trustworthiness. Our findings suggest that external factors play a role in the adoption of cruise control (an analogous system), while internal factors determine non-adoption. Trustworthiness in AV is affected by external factors, users’ trust in others, and their knowledge of advanced vehicle technology. Keywords: Human-centered computing; User studies; Autonomous System
Conference Paper
Full-text available
The prevalence of Automated Driving Systems (ADS) is expected to open up many possibilities for different user groups with individual needs and challenges. Former secondary/tertiary tasks can become primary tasks, while driving, with all its interactions and responsibilities, recedes or disappears entirely. At higher levels of AD, it is expected that the elderly could maintain or regain individual mobility and thus play a major role in future markets. To understand individual mindsets concerning technology acceptance and user needs, we conducted an explorative interview study (N=27). In a simulated automated driving environment, driving experience over time was compared across three age groups (elderly people >65, younger adults <30, younger adults <30 with an age simulation suit), utilizing the STAM model for content analysis. Results of the age comparison indicate no major differences in general technology acceptance; however, fine-grained analysis revealed interesting differences in participants' perceptions concerning UX design requirements.
Full-text available
Objective: The aim of this study was to understand how to secure driver supervision engagement and conflict intervention performance while using highly reliable (but not perfect) automation. Background: Securing driver engagement, by mitigating the irony of automation (i.e., the better the automation, the less attention drivers will pay to traffic and the system, and the less capable they will be to resume control) and by communicating system limitations to avoid mental-model misconceptions, is a major challenge in the human factors literature. Method: One hundred six drivers participated in three test-track experiments in which we studied driver intervention response to conflicts after driving highly reliable but supervised automation. After 30 min, a conflict occurred wherein the lead vehicle cut out of lane to reveal a conflict object in the form of either a stationary car or a garbage bag. Results: Supervision reminders effectively maintained drivers' eyes on path and hands on wheel. However, neither these reminders nor explicit instructions on system limitations and supervision responsibilities prevented 28% (21/76) of drivers from crashing with their eyes on the conflict object (car or bag). Conclusion: The results uncover the important role of expectation mismatches, showing that a key component of driver engagement is cognitive (understanding the need for action), rather than purely visual (looking at the threat) or having hands on wheel. Application: Automation needs to be designed either so that it does not rely on the driver or so that the driver unmistakably understands that it is an assistance system that needs an active driver to lead and share control.
Conference Paper
Full-text available
User experience (UX) evaluation is a growing field with diverse approaches. To understand the development since previous meta-review efforts, we conducted a state-of-the-art review of UX evaluation techniques with special attention to the triangulation between methods. We systematically selected and analyzed 100 papers from recent years and while we found an increase of relevant UX studies, we also saw a remaining overlap with pure usability evaluations. Positive trends include an increasing percentage of field rather than lab studies and a tendency to combine several methods in UX studies. Triangulation was applied in more than two thirds of the studies, and the most common method combination was questionnaires and interviews. Based on our analysis, we derive common patterns for triangulation in UX evaluation efforts. A critical discussion about existing approaches should help to obtain stronger results, especially when evaluating new technologies.
Conference Paper
Full-text available
Autonomous vehicles have the potential to fundamentally change existing transportation systems. Beyond legal concerns, these societal evolutions will critically depend on user acceptance. As an emerging mode of public transportation [7], Autonomous mobility on demand (AMoD) is of particular interest in this context. The aim of the present study is to identify the main components of acceptability (before first use) and acceptance (after first use) of AMoD, following a user experience (UX) framework. To address this goal, we conducted three workshops (N=14) involving open discussions and a ride in an experimental autonomous shuttle. Using a mixed-methods approach, we measured pre-immersion acceptability before immersing the participants in an on-demand transport scenario, and eventually measured post-immersion acceptance of AMoD. Results show that participants were reassured about safety concerns, however they perceived the AMoD experience as ineffective. Our findings highlight key factors to be taken into account when designing AMoD experiences.
Full-text available
The integration of self-driving vehicles may expose individuals with health concerns to undue amounts of stress. Psychophysiological indicators of stress were used to determine changes in tonic and phasic stress levels brought about by a high-fidelity autonomous vehicle simulation. Twenty-eight participants completed one manual driving task and two automated driving tasks. Participants reported their subjective level of trust in the automated systems using the Automation Trust Survey. Psychophysiological stress was indexed using skin conductance and trapezius muscle tension. Results indicate that users show more signs of physiological stress when the vehicle drives autonomously than when the user is in control. Results also indicate that users show an additional increase in stress when they report low trust in the autonomous vehicle. These findings suggest that healthcare professionals and manufacturers should be aware of the additional stress associated with self-driving technology.
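The tonic/phasic split used in the stress study above can be illustrated with a simple moving-average baseline: the slow-moving average approximates the tonic level, and the residual approximates phasic responses. This is a deliberate simplification for illustration; real electrodermal-activity analysis typically uses dedicated deconvolution methods:

```python
# Split a skin-conductance signal into a tonic (slow baseline) and a phasic
# (fast response) component using a centered moving average. Illustrative only.

def decompose_eda(signal, window=5):
    half = window // 2
    tonic = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        seg = signal[lo:hi]
        tonic.append(sum(seg) / len(seg))            # slow-moving baseline
    phasic = [s - t for s, t in zip(signal, tonic)]  # rapid fluctuations
    return tonic, phasic

sc = [2.0, 2.1, 2.0, 3.5, 2.1, 2.0, 2.0]  # hypothetical samples, one burst at index 3
tonic, phasic = decompose_eda(sc)
```

With this toy signal, the burst at index 3 shows up almost entirely in the phasic component, while the tonic component stays close to the 2.0 baseline.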
Conference Paper
Full-text available
Inappropriate trust in the capabilities of automated driving systems can result in misuse and insufficient monitoring behaviour that impedes safe manual driving performance following takeovers. Previous studies indicate that the communication of system uncertainty can promote appropriate use and monitoring by calibrating trust. However, existing approaches require the driver to regularly glance at the instrument cluster to perceive the changes in uncertainty. This may lead to missed uncertainty changes and user disruptions. Furthermore, the benefits of conveying the uncertainty of the different vehicle functions such as lateral and longitudinal control have yet to be explored. This research addresses these gaps by investigating the impact of unobtrusive and function-specific feedback on driving safety and user experience. Transferring knowledge from other disciplines, several different techniques will be assessed in terms of their suitability for conveying uncertainty in a driving context.
Conference Paper
Full-text available
This workshop intends to address contemporary issues surrounding trust in technology in the challenging and constantly changing context of automated vehicles. In particular, this workshop focuses on two main aspects: (1) appropriate definitions of trust and associated concepts for the automated driving context, especially regarding trust calibration in individual capabilities versus overall trust; (2) appropriate measures (qualitative and quantitative) to quantify trust in automated vehicles and in-vehicle interfaces. The workshop builds on a keynote and accepted position papers by participants as the basis for focused breakout sessions. The outcome of the workshop will become the basis for a subsequent joint publication of organizers and participants discussing issues (1) and (2).
Conference Paper
Full-text available
A prerequisite to foster the proliferation of automated driving is common system acceptance. However, different user groups (novices, enthusiasts) decline automation, which could, in turn, be problematic for a successful market launch. We see a feasible solution in combining the advantages of manual (autonomy) and automated (increased safety) driving. Hence, we developed the Hotzenplotz interface, combining possibility-driven design with psychological user needs. A simulator study (N=30) was carried out to assess user experience with subjective criteria (Need Scale, PANAS/-X, HEMA, AttrakDiff) and quantitative measures (driving behavior, HR/HRV) in different conditions. Our results confirm that pure AD is significantly less able to satisfy user needs compared to manual driving and makes people feel bored/out of control. In contrast, the Hotzenplotz interface has proven to reduce the negative effects of AD. Our implication is that drivers should be provided with different control options to secure acceptance and avoid deskilling.
As technical realization of highly and fully automated vehicles draws closer, attention is being shifted from sheer feasibility to the question of how an acceptable driving style, and thus comfort, can be implemented. It is increasingly important to determine how highly automated vehicles should drive to ensure driving comfort for the now passive drivers. Thus far, only little research has been conducted to examine this issue. In order to lay a basis for how automated vehicles should drive to ensure passenger comfort, different variations of three central maneuvers were rated and analyzed. A simulator study (N = 72) was conducted in order to identify comfortable driving strategies. Three variations of lane changes, accelerations, and decelerations were configured by manipulating acceleration and jerk, and thus the course of each maneuver. Furthermore, the influence of personality traits and self-reported driving style on preferences for differently executed automated maneuvers was analyzed. Results suggest keeping acceleration and jerk as small as possible for acceleration maneuvers. For lane changes, both small accelerations and an early motion feedback are advisable. Interestingly, decelerating the way a manual driver would was rejected in comparison with two artificial alternatives. Moreover, no influence of personality traits on maneuver preference was found; only self-reported driving style had a marginal effect on participants' preferences. In conclusion, a recommendation for an automated driving style can be given that was perceived as comfortable by participants regardless of their personality.
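Jerk, the quantity manipulated in this study alongside acceleration, is the time derivative of acceleration; a comfort screen over a sampled acceleration profile can be sketched as follows (the threshold, sample values, and function names are illustrative assumptions, not the study's actual parameters):

```python
# Jerk is the rate of change of acceleration; high jerk is what passengers
# perceive as abrupt. Threshold and samples below are illustrative only.

def jerk_profile(accel, dt):
    """accel: acceleration samples in m/s^2; dt: sample interval in s."""
    return [(a2 - a1) / dt for a1, a2 in zip(accel, accel[1:])]

def uncomfortable(accel, dt, max_jerk=0.9):
    """Indices of jerk samples whose magnitude exceeds the comfort limit."""
    return [i for i, j in enumerate(jerk_profile(accel, dt)) if abs(j) > max_jerk]

accel = [0.0, 0.2, 0.5, 1.2, 1.3]    # hypothetical lane-change onset, m/s^2
print(uncomfortable(accel, dt=0.5))  # [2]: the 0.5 -> 1.2 step is too abrupt
```

The jerk values here are 0.4, 0.6, 1.4, and 0.2 m/s^3, so only the third step exceeds the illustrative 0.9 m/s^3 limit, matching the study's advice to keep both acceleration and jerk small.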