Published by the Gesellschaft für Informatik e. V. 2018 in R. Dachselt, G. Weber (Eds.):
Mensch und Computer 2018 – Workshopband, September 2–5, 2018, Dresden.
Copyright (C) 2018 by the authors. https://doi.org/10.18420/muc2018-ws15-0475
Moral Behavior of Automated Vehicles: The Impact on Product Perception
Anna-Katharina Frison¹,², Philipp Wintersberger¹,², Andreas Riener¹,², Clemens Schartmüller¹,²
¹ Technische Hochschule Ingolstadt (THI), Human-Computer Interaction Group, Germany
² Johannes Kepler University, Linz, Austria
firstname.lastname@thi.de
Abstract
With the further development of automation, more responsibilities will be transferred from users to technology. Consequently, the algorithms of highly automated vehicles should be programmed to behave similarly to the affect- and intuition-based reasoning of human drivers. This includes making decisions in various exceptional circumstances, such as moral dilemmas. We assume that the perceived quality of a holistic driving experience depends on the accordance of vehicles’ moral and ethical decisions with users’ expectations concerning values and attitudes. In this work, we discuss implementation strategies for moral behavior in automated driving systems that fulfill users’ needs and match their values. The reported findings are based on data from an online survey (n = 330) in which we investigated how subjects assess moral decisions and the overall product experience. Initial results show tendencies among subjects toward accepting a decision over life and death, as well as significant dependencies concerning overall product perception.
1 Introduction
An increasing number of automated systems, such as automated vehicles, rescue and health robots, or assistance systems for public authorities, will soon take over tasks that were until recently performed by humans (Wintersberger, Frison, Riener, & Thakkar, 2017). As for automated vehicles (AVs), predictions forecast that they will account for 50% of vehicle sales by 2040 (Litman, 2014). Due to the increasing number of automated vehicle tests on real roads, situations in which the safety (fallback) driver has to take over control doubled in California from 2016 to 2017 (Herger, 2018). It is thus not surprising that AVs have become involved in road accidents. In June 2016, J. Brown became the first person fatally injured while using an automated driving system (ADS) (Boudette, 2017). A main cause of the accident with the SAE Level 2 system has been attributed to overtrust (Wintersberger & Riener, 2016). More recently (March 2018), the first pedestrian died in an accident with an AV that was part of Uber’s driving tests. The car did not even brake, and the safety driver, distracted and fatigued from her job, failed to intervene. Some people claim that the casualty was forced to participate in a “test without consent”
(Taylor, 2018), which has again boosted discussions about ethics and trust in ADSs. It is feared that a downside of tests on real roads could be an increasing number of humans involved in accidents, which could affect individuals’ and societies’ acceptance of the new technology and impede its success. However, as the technology matures, failures like those mentioned above might become history. Still, we cannot guarantee that (lethal) accidents will not happen again in the future, and thus deeper discussions about the ethical implications of AVs are necessary. In the end, somebody has to decide how AVs should react in moral dilemmas, such as the “Trolley Problem” (Foot, 1967). In the development of AVs, researchers, engineers, and designers must decide what is morally acceptable and what is not. In 2017, the Ethics Commission of the German Federal Ministry of Transport and Digital Infrastructure published the first guidelines for politics and legislation, making demands regarding security, human dignity, and autonomy of decision and data (2017). In addition, the Vienna Convention on Road Traffic (German Federal Ministry of Transport and Digital Infrastructure, 2016), which states that the driver must be able and allowed to deactivate ADSs in any possible situation, manifests the autonomy of the driver.
“Users of AV technology” are not only drivers but also other traffic participants. The behavior of a vehicle on the road will strongly affect their experiences in any user role (i.e., driver, passenger, pedestrian, etc.) and, as a consequence, have a major impact on the future perception of the technology in general and of individual car brands in particular. The market success of products, systems, or services strongly depends on various factors like functionality, usability, and aesthetics, but also on abstract concepts like hedonism and eudaimonia (Hassenzahl, 2003; Hassenzahl, 2008; Mekler & Hornbaek, 2016). To create satisfying and delightful user experiences in ADSs, users’ psychological needs (e.g., autonomy) have to be fulfilled (Hassenzahl, Diefenbach, & Göritz, 2010). To achieve these higher (sometimes hidden) goals, the quality of product experiences should be maximized in a holistic way. Clearly, the “moral” behavior of ADSs will play an important role in this experience, similar to other humans’ moral behavior and ethical values, which play a major role in our personal judgment. Hence, to deal with road users’ loss of decision autonomy, we must find and implement accepted behaviors for morally ambiguous situations. Users’ individual needs and values have to be brought into focus, and the ethical implications of a technology need to be anticipated and assessed (Albrechtslund, 2007). Only if the decisions of AVs are perceived to be consistent with personal values will users accept and trust them. Taking a closer look at humans’ individual value compass must therefore become a central part of future product development. In this work, we present a survey study analyzing how user experience (UX), and thus product perception, is affected by the ethical decisions of AVs, and we investigate how experiences that fail to fulfill users’ need for autonomy affect trust in the technology as well as in a certain vehicle brand (Friedman & Kahn Jr, 2002).
2 Related Work
The ethical behavior of ADSs has been a focal research topic in recent years. However, a direct connection to UX and product perception has not yet been established. The “Trolley Problem” is an often-used example to investigate the issue. Friedman and Kahn Jr (2002) showed that subjects’ decisions in the Trolley Problem were driven by different motivations. It might therefore be dangerous to consider only the decision without involving the complex structures of the decision-making processes behind it. Blyth et al. (2015) illustrated the need
for a more socially and ethically just perspective on designing vehicles. They presented a framework beyond the techno-centric and utilitarian perspective, using participatory design to investigate what future automated driving could look like. Bonnefon et al. (2016) conducted six surveys addressing the Trolley Problem and concluded that data-driven approaches can provide new insights into moral, cultural, and legal aspects of ethical decisions. Li et al. (2016) conducted two experiments using moral dilemmas to evaluate the behavior of AVs by presenting narratives to subjects, and emphasized that their research can help to reveal patterns in perception and support lawmakers and car manufacturers in the design process. In a driving simulator study, subjects had to decide (using the “Trolley Problem” in an AV) which behavior they would expect from the technology. The results show significant tendencies toward utilitarian decisions, but qualitative statements in semi-structured interviews revealed different underlying motivations (Frison, Wintersberger, & Riener, 2016). Further, in analyzing which ethical behavior is socially acceptable, several studies revealed that users are willing to sacrifice themselves (or at least accept severe injuries) to save others (Wintersberger, Frison, Riener, & Hasirlioglu, 2017; Bergmann et al., 2018). The question is whether such empirical results from experimental ethics, interviews, and surveys can lead to pragmatic design suggestions, and thus to an acceptable and appropriate experience for everyone through a mandatory ethical setting. Some researchers (Holstein & Dodig-Crnkovic, 2018; Nyholm & Smids, 2016) argue that the use of the “Trolley Problem” is misleading, as it is intrinsically unfair in assuming that different lives have different values; nevertheless, it is necessary to analyze real, complex engineering problems. They therefore analyzed regulative instruments, standards, and designs to identify practical social and ethical challenges. Gogoll and Müller (2017) challenge whether every driver should be able to choose a personal ethical setting. They conclude that, although people would not be willing to use a system that might sacrifice them, a mandatory ethical setting is in their best interest, as it avoids a prisoner’s dilemma that would prevent the socially preferred result.
Consequently, it is widely discussed how to reach the best possible solution for society, as a distinct answer to the problem seems hard or even impossible to find. Besides all the efforts aimed at finding socially acceptable behavior, we should not forget the individual user and the implications of system decisions for his or her experience. Understanding and satisfying users’ needs is a central component of UX design and essential for creating valuable experiences. An experience is “shaped by both, characteristics of the user (e.g., personality, skills, background, cultural values, and motives) and properties of the product (e.g., shape, texture, color, and behavior)” (Desmet & Hekkert, 2007). Thereby, the quality of UX depends on the fulfillment of users’ psychological needs, i.e., autonomy, competence, security, meaning, relatedness, popularity, and stimulation (Hassenzahl, Diefenbach, & Göritz, 2010). Through sense-making, users construct their experiences from their perceptions before, during, and after interacting with a product, and continuously assess whether their higher goals (needs but also ethical principles) are met (Wright, McCarthy, & Meekison, 2003). Thus, if users feel impeded in their decision autonomy in a moral dilemma situation, their whole experience is affected. From a designer’s perspective, product aspects like content, features, functionality, and interactions are defined to achieve a certain product character, with the intention of creating a certain level of pragmatic as well as hedonic quality (Hassenzahl, 2003; Hassenzahl, 2008). The hedonic quality of
identification represents the concept of self-identification with a certain product. Thus, the selection for or against a brand is highly dependent on a potential overlap of personal and brand values. Meschtscherjakov et al. (2014) investigated the emotional attachment to mobile phones and referred to the strong connection between Apple and the iPhone. Especially when designing automated systems, brands face the challenge of fulfilling users’ individual ethical and moral guidelines while harmonizing them with their own brand values. The ethical behavior of an automated vehicle, experienced at several touchpoints, even before actual use (e.g., through media articles), might affect the overall experience similarly to aesthetics, usability, and functionality, and thus should match users’ higher goals.
Hence, even though a personalized ethical setting for AVs might not lead to the best solution for society, the implications of users’ individual experience of an ethical behavior for the overall acceptance of ADSs cannot be ignored by the automotive industry. Investigations utilizing the Trolley Problem can thereby help to gain insights into real-world engineering problems.
3 Utilizing the “Trolley Problem”
Our aim is to understand the complex construct of ethical behavior, users’ values, and the overall experience with AVs. To investigate general tendencies in the acceptance of specific moral decisions of ADSs, we applied an explorative research strategy utilizing a low-fidelity approach. Using this strategy, we wanted to reveal correlations between the acceptance of certain ethical decisions in moral dilemmas and the perception of product qualities.
We distributed an online survey and asked subjects (n = 330) about their acceptance of an AV’s ethical decisions (presented in the form of the Trolley Problem). The AV is not able to brake and has to decide who will be sacrificed: the algorithm favored one person over another based on knowledge of the persons’ age, comparing a young adult (20 years) and an old adult (80 years). The Trolley Problem is used here as an extreme example of an ethical behavior which might not match users’ ethical values. It was presented to subjects in the form of a short scenario description, a sketch of the situation, and a picture of the product (Mercedes F015). Subjects were confronted with randomized decision outputs of the moral algorithm (between-subjects design). The algorithm either decided randomly (representing an egalitarian approach) or saved the young or the old person (representing a utilitarian approach). Using 7-point Likert scales (1: strongly disagree; 7: strongly agree), subjects had to rate whether they accepted the decision and how they perceived the system. For users’ perception, we defined four scales: general need fulfillment, aesthetics, pragmatics, and overall product assessment. By statistically analyzing the data of the online survey, we wanted to investigate the general acceptance rates of the different decision outputs as well as correlations concerning subjects’ product perception. In total, 122 female and 210 male subjects in the age range of 18 to 70 years (77% aged 30 years or younger) participated in the survey.
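The following minimal sketch (in Python; hypothetical, not the authors’ actual survey software) illustrates the between-subjects design described above: each respondent is randomly assigned exactly one decision output of the moral algorithm and then rates acceptance and the four product-perception scales on 7-point Likert items. All names are illustrative assumptions.

```python
import random

# The three decision outputs of the moral algorithm described in the text.
CONDITIONS = ["random_decision", "save_young", "save_old"]

# Rating scales: acceptance plus the four product-perception scales.
SCALES = ["acceptance", "need_fulfillment", "aesthetics",
          "pragmatics", "overall_assessment"]

def assign_condition(rng: random.Random) -> str:
    """Randomly assign one decision output (between-subjects design)."""
    return rng.choice(CONDITIONS)

def check_likert(value: int) -> int:
    """Validate a 7-point Likert rating (1: strongly disagree ... 7: strongly agree)."""
    if not 1 <= value <= 7:
        raise ValueError("Likert ratings must be between 1 and 7")
    return value

# Usage: assign a condition and record placeholder ratings for one subject.
rng = random.Random(42)  # seeded only for reproducibility of the sketch
condition = assign_condition(rng)
response = {scale: check_likert(4) for scale in SCALES}
```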
3.1 Acceptance
Evaluating subjects’ acceptance of a certain decision output, we can observe a significant effect in the scenarios (see Figure 1). As the data was not normally distributed, a Kruskal-Wallis test was used. We can report significant differences in acceptance between the decision outputs, H(2) = 86.03; p < .001. With a median (Mdn) value of “6”, most subjects accepted the decision to rescue the younger person (20 years) rather than the elderly person. In contrast, saving the 80-year-old person resulted in a median value of only “2”. For comparison, a pure random decision was rated with a median value of “4”.
Figure 1: Distribution of acceptance ratings for an ADS’s different ethical behaviors
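As a hedged illustration of this analysis, the sketch below runs a Kruskal-Wallis H-test over per-condition acceptance ratings, assuming they are available as lists of 1–7 Likert values. The numbers are placeholders, not the study data; with three groups the test has two degrees of freedom, matching the reported H(2).

```python
from scipy.stats import kruskal

save_young = [6, 7, 6, 5, 6, 7, 6]  # placeholder ratings (paper reports Mdn = 6)
save_old   = [2, 1, 2, 3, 2, 1, 2]  # placeholder ratings (paper reports Mdn = 2)
random_dec = [4, 4, 5, 3, 4, 4, 5]  # placeholder ratings (paper reports Mdn = 4)

# Kruskal-Wallis H-test: non-parametric comparison of the three groups.
h_stat, p_value = kruskal(save_young, save_old, random_dec)
print(f"H(2) = {h_stat:.2f}, p = {p_value:.4f}")
```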
3.2 Product Perception
To investigate the impact of ethical behavior on users’ product perception, we evaluated the defined scales. Based on the scenario description (supported by a sketch) and a product picture of the AV, subjects had to assess the product quality by completing scales for users’ needs (does the product fulfill users’ higher goals?), the product’s aesthetic quality (is the product appealing?), and its pragmatic quality (best outcome with least effort). Since the data was not normally distributed, we used Kendall’s tau-b (a non-parametric rank correlation) to investigate correlations between the acceptance of each decision output and the components of product perception. We can report positive correlations in all conditions of ethical behavior (see Figure 2). More concretely, as subjects did not vote for rescuing the elderly person in the dilemma situation (Mdn = 2), users’ need fulfillment (Mdn = 1, τb = .391; p < .001), the perceived aesthetics of the product (Mdn = 2, τb = .274; p < .05), and the pragmatic quality (Mdn = 1, τb = .287; p < .001) were rated negatively. This also significantly affects the overall assessment of the vehicle (Mdn = 1, τb = .353; p < .001). In contrast, saving the life of the young person was highly accepted (Mdn = 6). All aspects of product perception as well as the general product assessment correlate significantly (needs: τb = .527; p < .001; aesthetics: τb = .419; p < .001; pragmatics: τb = .373; p < .001; overall: τb = .426; p < .001). The random decision was rated lower, which correlates significantly with all other scales (needs: τb = .511; p < .001; aesthetics: τb = .528; p < .001; pragmatics: τb = .309; p < .001; overall: τb = .532; p < .001).
Figure 2: Median values of the product perception ratings. Color-coding indicates possible correlations.
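The correlation analysis can be sketched as follows: Kendall’s tau-b between the acceptance rating and one product-perception scale within one condition. scipy.stats.kendalltau computes the tau-b variant by default; the arrays are illustrative placeholders, not the study data.

```python
from scipy.stats import kendalltau

# Hypothetical ratings from subjects in the "save the elderly person" condition.
acceptance       = [2, 1, 3, 2, 1, 2, 4, 1]
need_fulfillment = [1, 1, 2, 2, 1, 1, 3, 1]

# Kendall's tau-b rank correlation (handles ties in ordinal Likert data).
tau_b, p_value = kendalltau(acceptance, need_fulfillment)
print(f"tau_b = {tau_b:.3f}, p = {p_value:.4f}")
```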
To sum up, all results show a clear connection between the acceptance of ethical behavior and the assessment of the overall product in its different dimensions.
4 Discussion
In our study, we investigated how a certain moral behavior affects the overall perception of products. People favor a utilitarian approach (saving the young person) in highly safety-critical scenarios. Although the Ethics Commission of the German Federal Ministry of Transport and Digital Infrastructure postulates that algorithms are not allowed to weigh the value of lives (2017), and the general unfairness of the Trolley Problem cannot be denied (Holstein & Dodig-Crnkovic, 2018), our results indicate that age is an accepted parameter for moral algorithms. A random decision is, however, far less accepted.
Our results also show that the acceptance of moral decisions correlates with product quality perception in terms of users’ need fulfillment and the perception of aesthetics and pragmatics. Considering that the AV was presented only in the form of a simple image, this is a strong argument: it shows that moral implications can play an important role for UX. Furthermore, specific moral decisions also affect product perception and vice versa. Even though a mandatory ethical setting is socially preferable, our results show that the automotive industry will have to deal with the problem that the same ethical behavior will be perceived differently based on the brand’s image.
Investigating moral dilemmas in automated driving is still timely and challenging, as we do not yet have “real” systems, and a low-fidelity approach is only a first step in this domain. Although the presented study shows some interesting aspects, we are aware of its limitations. The different systems were presented only visually; for a fully holistic evaluation of a product and its perception, additional dimensions like context, haptics, size, odor, or form must be considered, as well as the important dimension of time (before, during, and after use). In addition, subjects could perceive the implications of the presented moral dilemmas from a distant perspective, as they were not confronted with the effects of the decisions themselves. In future work, we will investigate additional scenarios that are more common than the Trolley Problem and demand moral behavior of automated systems in everyday situations (e.g., normal road traffic), concentrating less on dilemmas of high severity. Furthermore, we want to develop strategies for evaluating the impact of moral behavior on UX and brand experience with higher-fidelity approaches, e.g., a simulation environment or driving on a test track.
5 Conclusion
Even though we might never find clear solutions for deeply philosophical moral dilemmas such as the “Trolley Problem”, a connection between our perception of products and their ethical implications already exists. Following E. Herzberg’s death, the public and media have discussed whether companies like Uber or Tesla are going too far in testing their products in safety-critical environments. Brands will have to think about the ethical components of their image and about how they want their products to be perceived by individuals and societies. We do not suggest that car
companies should develop algorithms that represent their desired brand expectations in potentially lethal moral dilemmas, but the Trolley Problem is an easily graspable abstraction that can represent a wide range of scenarios with moral implications. The results of the study presented in this paper confirm a correlation between ethical behavior and the perception of automated vehicles. As more responsibilities are transferred from humans to technology, involving users and their personal values in design decisions will become more important for increasing general acceptance. Valuable insights into needs and higher goals, such as personal values, can improve products and might lead to more valuable holistic experiences. As users’ experience of a vehicle is shaped by perceptions formed even before actual use, articles in the media and discussions in society can prevent the comprehensive market establishment of individual brands’ AVs. While German brands are still careful and cautious about testing their systems extensively in real road environments (Taylor, 2018), Tesla and Uber already face the challenge of recovering and re-establishing their desired image.
References
Albrechtslund, A. (2007). Ethics and technology design. Ethics and Information Technology, 1, 63-71.
Bergmann, L. T., Schlicht, L., Meixner, C., König, P., Pipa, G., Boshammer, S., & Stephan, A. (2018). Autonomous vehicles require socio-political acceptance: An empirical and philosophical perspective on the problem of moral decision making. Frontiers in Behavioral Neuroscience, 12, 31.
Blyth, P.-L., Mladenovic, M. N., Nardi, B. A., Su, N. M., & Ekbia, H. R. (2015). Driving the self-driving vehicle: Expanding the technological design horizon. In 2015 IEEE International Symposium on Technology and Society (ISTAS) (pp. 1-6). IEEE.
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576.
Boudette, N. E. (2017, January 19). Tesla's self-driving system cleared in deadly crash. The New York Times. Retrieved from https://www.nytimes.com/2017/01/19/business/tesla-model-s-autopilot-fatal-crash.html
Desmet, P., & Hekkert, P. (2007). Framework of product experience. International Journal of Design, 1(1), 57-66.
Foot, P. (1967). The problem of abortion and the doctrine of double effect. Oxford Review, 5.
Friedman, B., & Kahn Jr., P. H. (2002). Human values, ethics, and design. In The human-computer interaction handbook (pp. 1177-1201). L. Erlbaum Associates Inc.
Frison, A.-K., Wintersberger, P., & Riener, A. (2016). First person trolley problem: Evaluation of drivers' ethical decisions in a driving simulator. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct (pp. 117-122). ACM.
German Federal Ministry of Transport and Digital Infrastructure. (2016). Amendments to Article 8 and Article 39 of the 1968 Convention on Road Traffic.
German Federal Ministry of Transport and Digital Infrastructure. (2017). Ethik-Kommission Automatisiertes und vernetztes Fahren [Ethics Commission on Automated and Connected Driving].
German Federal Ministry of Transport and Digital Infrastructure. (2017, June 20). Ethik-Kommission zum automatisierten Fahren legt Bericht vor [Ethics Commission on Automated Driving presents its report]. Retrieved April 7, 2018, from https://www.bmvi.de/SharedDocs/DE/Pressemitteilungen/2017/084-dobrindt-bericht-der-ethik-kommission.html
Gogoll, J., & Müller, J. F. (2017). Autonomous cars: In favor of a mandatory ethics setting. Science and Engineering Ethics, 23(3), 681-700.
Hassenzahl, M. (2003). The thing and I: Understanding the relationship between user and product. In Funology (pp. 31-42). Springer.
Hassenzahl, M. (2008). User experience (UX): Towards an experiential perspective on product quality. In Proceedings of the 20th Conference on l'Interaction Homme-Machine (pp. 11-15). ACM.
Hassenzahl, M., Burmester, M., & Koller, F. (2003). AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität [AttrakDiff: A questionnaire for measuring perceived hedonic and pragmatic quality]. In Mensch & Computer (pp. 187-196). Springer.
Hassenzahl, M., Diefenbach, S., & Göritz, A. (2010). Needs, affect, and interactive products: Facets of user experience. Interacting with Computers, 22(5), 353-362.
Hassenzahl, M., Wiklund-Engblom, A., Bengs, A., Hägglund, S., & Diefenbach, S. (2015). Experience-oriented and product-oriented evaluation: Psychological need fulfillment, positive affect, and product perception. International Journal of Human-Computer Interaction, 31(8), 530-544.
Herger, M. (2018, February 1). Disengagement report 2017 – The good, the bad, the ugly. Retrieved April 7, 2018, from https://thelastdriverlicenseholder.com/2018/02/01/disengagement-report-2017-the-good-the-bad-the-ugly/
Holstein, T., & Dodig-Crnkovic, G. (2018). Avoiding the intrinsic unfairness of the trolley problem. In ICSE 2018 Workshop FairWare. ACM.
Holstein, T., Dodig-Crnkovic, G., & Pelliccione, P. (2018). Ethical and social aspects of self-driving cars. arXiv preprint arXiv:1802.04103.
Li, J., Zhao, X., Cho, M.-J., Ju, W., & Malle, B. F. (2016). From trolley to autonomous vehicle: Perceptions of responsibility and moral norms in traffic accidents with self-driving cars. SAE Technical Paper.
Litman, T. (2014). Autonomous vehicle implementation predictions. Victoria Transport Policy Institute, 28.
Mekler, E. D., & Hornbaek, K. (2016). Momentary pleasure or lasting meaning? Distinguishing eudaimonic and hedonic user experiences. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 4509-4520). ACM.
Meschtscherjakov, A., Wilfinger, D., & Tscheligi, M. (2014). Mobile attachment causes and consequences for emotional bonding with mobile phones. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems (pp. 2317-2326). ACM.
Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice, 19(5), 1275-1289.
Taylor, M. (2018, March 22). Fatal Uber crash was 'inevitable,' says BMW's top engineer. Forbes. Retrieved April 7, 2018, from https://www.forbes.com/sites/michaeltaylor/2018/03/22/fatal-uber-crash-inevitable-says-bmws-top-engineer/
Wintersberger, P., & Riener, A. (2016). Trust in technology as a safety aspect in highly automated driving. i-com, 15(3), 297-310.
Wintersberger, P., Frison, A.-K., Riener, A., & Hasirlioglu, S. (2017). The experience of ethics: Evaluation of self-harm risks in automated vehicles. In 2017 IEEE Intelligent Vehicles Symposium (IV) (pp. 385-391). IEEE.
Wintersberger, P., Frison, A.-K., Riener, A., & Thakkar, S. (2017). Do moral robots always fail? Investigating human attitudes towards ethical decisions of automated systems. In IEEE International Symposium on Robot and Human Interactive Communication. Lisbon.
Wright, P., McCarthy, J., & Meekison, L. (2003). Making sense of experience. In Funology (pp. 43-53). Springer.