Veröffentlicht durch die Gesellschaft für Informatik e. V. 2018 in
R. Dachselt, G. Weber (Hrsg.):
Mensch und Computer 2018 – Workshopband, 02.–05. September 2018, Dresden.
Copyright (C) 2018 bei den Autoren.
Moral Behavior of Automated Vehicles: The Impact on Product Perception
Anna-Katharina Frison12, Philipp Wintersberger12, Andreas Riener12, Clemens Schartmüller12
Technische Hochschule Ingolstadt (THI), Human-Computer Interaction Group, Germany1
Johannes Kepler University, Linz, Austria2
With further development of automation, more responsibilities will be transferred from users to technology. Consequently, algorithms of highly automated vehicles should be programmed to behave similarly to the affect- and intuition-based reasoning of human drivers. This includes making decisions in various exceptional circumstances, such as moral dilemmas. We assume that the perceived quality of a holistic driving experience is dependent on the accordance of vehicles' moral and ethical decisions with users' expectations concerning values and attitudes. In this work, we discuss implementation strategies for moral behavior in automated driving systems in order to fulfill users' needs and match their values. The reported findings are based on data from an online survey (n=330). We investigated how subjects assess moral decisions and the overall product experience. Initial results show tendencies among subjects in accepting a decision over life and death and significant dependencies concerning the overall product perception.
1 Introduction
An increasing number of automated systems will soon take over tasks that were until recently performed by humans, such as automated vehicles, rescue and health robots, assistance for public authorities, etc. (Wintersberger, Frison, Riener, & Thakkar, 2017). As for automated vehicles (AVs), predictions forecast that they will account for 50% of vehicle sales by 2040 (Litman, 2014). Due to the increasing number of automated vehicle tests on real roads, situations in which the safety (fallback) driver has to take over control doubled in California from 2016 to 2017 (Herger, 2018). It is thus not surprising that AVs have become involved in road accidents. In June 2016, J. Brown was the first person fatally injured while using an automated driving system (ADS) (Boudette, 2017). A main cause of this accident with an SAE Level 2 system is attributed to overtrust (Wintersberger & Riener, 2016). Just recently (March 2018), the first pedestrian died in an accident with an AV that was part of Uber's driving tests. The car did not even brake, and the safety driver, distracted and fatigued from her job, failed to intervene. Some people claim that the casualty was forced to participate in a “test without consent”
(Taylor, 2018), which again boosts discussions about ethics and trust in ADSs. It is feared that a downside of tests on real roads could be an increasing number of humans involved in accidents, which could affect individuals' and societies' acceptance of the new technology and impede its success. However, as the technology matures, failures like those mentioned above might become history. Still, we cannot guarantee that (lethal) accidents will not happen again in the future, and thus deeper discussions about the ethical implications of AVs are necessary. In the end, somebody has to decide how AVs should react in moral dilemmas, such as the “Trolley Problem” (Foot, 1967). In the development of AVs, researchers, engineers and designers must decide what is morally acceptable and what is not. In 2017, the Ethics Commission of the German Federal Ministry of Transport and Digital Infrastructure published the first guidelines for politics and legislation, making demands for security, human dignity, and autonomy of decision and data (2017). In addition, the Vienna Convention on Road Traffic (German federal ministry of transport and digital infrastructure, 2016), which states that the driver must be able/allowed to deactivate ADSs in any possible situation, manifests the autonomy of the driver.
“Users” of AV technology are not only drivers but also other traffic participants. The behavior of a vehicle on the road will strongly affect their experiences in any user role (i.e., driver, passenger, pedestrian, etc.) and, as a consequence, have a major impact on the future perception of the technology in general and individual car brands in particular. The market success of products, systems or services strongly depends on various factors like functionality, usability and aesthetics, but also on abstract concepts like hedonism and eudaimonia (Hassenzahl, 2003; Hassenzahl, 2008; Mekler & Hornbaek, 2016). To create satisfying and delightful user experiences in ADSs, users' psychological needs (i.e., autonomy) have to be fulfilled (Hassenzahl, Diefenbach, & Göritz, 2010). To achieve these higher (sometimes hidden) goals, the quality of product experiences should be maximized in a holistic way. Clearly, the “moral” behavior of ADSs will play an important role in this experience, similar to other humans' moral behavior and ethical values, which play a major role in our personal judgement. Hence, to deal with the loss of decision autonomy for road users, we must find and implement accepted behavior for morally ambiguous situations. Users' individual needs and values have to be the focus, and the ethical implications of a technology need to be anticipated and assessed (Albrechtslund, 2007). Only if the decisions of AVs are perceived to be in accordance with personal values will users accept and trust them. Taking a closer look at humans' individual value compass thus must become a central part of future product development. In this work, we present a survey study analyzing how user experience (UX), and thus product perception, is affected by ethical decisions of AVs, and we aim to investigate the impact of experiences that fail to fulfill users' need for autonomy on trust, as well as on trust in a certain vehicle brand (Friedman & Kahn Jr, 2002).
2 Related Work
Ethical behavior of ADSs has been a focus of research in recent years. However, a direct connection to UX and product perception has not yet been established. The “Trolley Problem” is an often-used example to investigate the problem. In (Friedman & Kahn Jr, 2002) it was shown that subjects' decisions in the Trolley Problem were driven by different motivations. Therefore, it might be dangerous to consider only the decision without involving the complex structures of the decision-making processes behind it. Thus, Blyth et al. (2015) illustrated the need for a more socially and ethically just perspective on designing vehicles. They presented a framework beyond the techno-centric and utilitarian perspective, using participatory design to investigate what future automated driving could look like. Bonnefon et al. (2016) conducted six surveys addressing the Trolley Problem and concluded that data-driven approaches can provide new insights into moral, cultural, and legal aspects of ethical decisions. Li et al. (2016) conducted two experiments using moral dilemmas to evaluate the behavior of AVs by presenting narratives to subjects, and emphasized that their research can help to reveal patterns in perception and support lawmakers and car manufacturers in the design process. In a driving simulator study, subjects had to decide (facing the “Trolley Problem” in an AV) which behavior they would expect from the technology. The results showed significant tendencies toward utilitarian decisions, but qualitative statements in semi-structured interviews revealed different underlying motivations (Frison, Wintersberger, & Riener, 2016). Further, by analyzing which ethical behavior is socially acceptable, several studies revealed that users are willing to sacrifice themselves (or at least accept severe injuries) to save others (Wintersberger, Frison, Riener, & Hasirlioglu, 2017; Bergmann, et al., 2018). The question is whether such empirical results from experimental ethics, interviews and surveys can lead to pragmatic design suggestions, and thus to an acceptable and appropriate experience for everyone through a mandatory ethics setting. Some researchers (Holstein & Dodig-Crnkovic, 2018; Nyholm & Smids, 2016) argue that the use of the “Trolley Problem” is misleading, as it is intrinsically unfair by assuming that different lives have different values; instead, it is necessary to analyze real, complex engineering problems. Therefore, they analyzed regulative instruments, standards, and designs to identify practical social and ethical challenges. Gogoll and Müller (2017) question whether every driver should be able to choose a personal ethics setting. They conclude that, although people would not be willing to use a system that might sacrifice them, a mandatory ethics setting is in their best interest, as it avoids a prisoner's dilemma that would prevent the socially preferred outcome.
Consequently, it is widely discussed how to reach the best possible solution for society, as a definite answer to the problem seems hard or even impossible to find. Besides all efforts aiming to find socially acceptable behavior, we should not forget the individual user and the implications of system decisions on his/her experience. Understanding and satisfying users' needs is a central component of UX design and essential for creating valuable experiences. “An experience is shaped by both characteristics of the user (e.g., personality, skills, background, cultural values, and motives) and properties of the product (e.g., shape, texture, color, and behavior)” (Desmet & Hekkert, 2017). Thereby the quality of UX depends on the fulfillment of users' psychological needs, i.e., autonomy, competence, security, meaning, relatedness, popularity and stimulation (Hassenzahl, Diefenbach, & Göritz, 2010). By sense-making, users construct their experiences from their perceptions before, during and after interacting with a product, and continuously assess whether their higher goals (needs but also ethical principles) are met (Wright, McCarthy, & Meekison, 2003). Thus, if users feel impeded in their decision autonomy in a moral dilemma situation, their whole experience is affected. From a designer's perspective, product aspects like content, features, functionality and interactions are defined to achieve a certain product character, with the intention to create a certain level of pragmatic as well as hedonic quality (Hassenzahl, 2003; Hassenzahl, 2008). The hedonic quality of
identification represents the concept of self-identification with a certain product. Thus, the decision for or against a brand is highly dependent on a potential overlap of personal and brand values. Meschtscherjakov et al. (2014) investigated the emotional attachment to mobile phones and referred to the strong connection between Apple and the iPhone. Especially when designing automated systems, brands face the challenge of fulfilling users' individual ethical and moral guidelines while harmonizing them with their own brand values. The ethical behavior of an automated vehicle, experienced at several touchpoints, even before actual use (e.g., through media articles), might affect the overall experience similarly to aesthetics, usability and functionality, and thus should match users' higher goals.
Hence, even though a personalized ethics setting for AVs might not lead to the best solution for society, the implications of users' individual experience of ethical behavior on the overall acceptance of ADSs cannot be ignored by the automotive industry. Investigations utilizing the Trolley Problem can thereby help to gain insights for real-world engineering problems.
3 Utilizing the Trolley Problem
Our aim is to understand the complex construct of ethical behavior, users' values and the overall experience with AVs. To investigate general tendencies in acceptance rates of specific moral decisions of ADSs, we applied an explorative research strategy utilizing a low-fidelity approach. Using this strategy, we wanted to reveal correlations between the acceptance of certain ethical decisions in moral dilemmas and the perception of product qualities.
We distributed an online survey and asked subjects (n = 330) about their acceptance of an AV's given ethical decisions (presented in the form of the Trolley Problem). The AV is not able to brake and has to decide who will be sacrificed: the algorithm favored one person over another based on knowledge of the persons' age, comparing a young (20 years) and an old adult (80 years). The Trolley Problem is used here as an extreme example of an ethical behavior which might not match users' ethical values. It was presented to subjects in the form of a short scenario description, a sketch of the situation and a picture of the product (Mercedes F015). Subjects were confronted with randomized decision outputs (between-subject design) of the moral algorithm. The algorithm either decided randomly (representing an egalitarian approach) or saved the young or the old person (representing a utilitarian approach). On a 7-point Likert scale (1: Strongly disagree; 7: Strongly agree), subjects had to rate whether they accept the decision and how they perceive the system. For users' perception, we defined four scales: general need fulfillment, aesthetics, pragmatics, and overall product assessment. By statistically analyzing the data of the online survey, we wanted to investigate the general acceptance rates of the different decision outputs as well as correlations concerning subjects' product perception. In total, 122 female and 210 male subjects in the age range 18 to 70 years (77% aged 30 years or younger) participated in this survey.
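The between-subject randomization described above can be sketched as follows; the condition names, participant count, and per-participant seeding scheme are illustrative assumptions, not details taken from the paper.

```python
import random

# Hypothetical sketch of a between-subject design: each participant is
# shown exactly one of the three decision outputs of the moral algorithm.
CONDITIONS = ["save_young", "save_old", "random_decision"]

def assign_condition(participant_id: int) -> str:
    # Seeding with the participant id makes the assignment reproducible
    return random.Random(participant_id).choice(CONDITIONS)

assignments = [assign_condition(pid) for pid in range(330)]
print({c: assignments.count(c) for c in CONDITIONS})
```

With roughly balanced groups, ratings for the three decision outputs can then be compared without within-subject carryover effects.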
3.1 Acceptance
Evaluating subjects' acceptance rates of a certain decision output, we can observe a significant effect across the scenarios, see Figure 1. As the data was not normally distributed, a Kruskal-Wallis test was used. We can report significant differences in acceptance between the decision outputs, H(2) = 86.03; p < .001. With a median (Mdn) value of “6”, most subjects voted for the decision to rescue the younger person of 20 years over rescuing the elderly person. In contrast, saving the 80-year-old person resulted in a median value of only “2”. For comparison, the random decision received a median rating of “4”.
Figure 1: Distribution of acceptance ratings for an ADS’s different ethical behaviors
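As an illustration of the analysis above, the following sketch runs a Kruskal-Wallis H-test on hypothetical Likert ratings; the actual survey responses are not reproduced here, and the group sizes and distributions are assumptions chosen only to mimic the reported medians.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(42)

# Hypothetical 7-point Likert ratings (1 = strongly disagree, 7 = strongly agree),
# one group per decision output of the moral algorithm.
save_young = rng.integers(5, 8, size=110)  # clustered near "agree"    (Mdn ~ 6)
save_old   = rng.integers(1, 4, size=110)  # clustered near "disagree" (Mdn ~ 2)
random_dec = rng.integers(3, 6, size=110)  # clustered near the middle (Mdn ~ 4)

# Kruskal-Wallis H-test: non-parametric comparison of the three groups,
# appropriate because Likert ratings are ordinal and not normally distributed.
H, p = kruskal(save_young, save_old, random_dec)
print(f"H(2) = {H:.2f}, p = {p:.2e}")
```

A significant H with two degrees of freedom (three groups) indicates that at least one decision output is rated differently from the others, matching the paper's reported H(2) = 86.03, p < .001.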
3.2 Product Perception
To investigate the impact of ethical behavior on users' product perception, we evaluated the defined scales. Based on the scenario description (supported by a sketch) and a product picture of the AV, subjects had to assess the product quality by completing scales for users' needs (does the product fulfill users' higher goals), the product's aesthetic quality (is the product appealing) and its pragmatic quality (best outcome with least effort). Since the data was not normally distributed, we performed a Kendall tau-b test (a non-parametric statistic) to investigate correlations between the acceptance of each decision output and the components of product perception. We can report positive correlations in all conditions of ethical behavior (see Figure 2). More concretely, as subjects did not vote for rescuing the elderly person in the dilemma situation (Mdn = 2), the users' need fulfillment (Mdn = 1, τb = .391; p < .001), the perception of the aesthetics of the product (Mdn = 2, τb = .274; p < .05), and the pragmatic quality (Mdn = 1, τb = .287; p < .001) were rated negatively. This also significantly affects the overall assessment of the vehicle (Mdn = 1, τb = .353; p < .001). In contrast, saving the life of the young person was highly accepted (Mdn = 6). All aspects of product perception as well as the general product assessment correlate significantly (needs: τb = .527; p < .001; aesthetics: τb = .419; p < .001; pragmatics: τb = .373; p < .001; overall: τb = .426; p < .001). Random decisions were rated lower, which correlates significantly with all other scales (needs: τb = .511; p < .001; aesthetics: τb = .528; p < .001; pragmatics: τb = .309; p < .001; overall: τb = .532; p < .001).
Figure 2: Median values of the product perception ratings. Color-coding indicates possible correlations.
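The correlation analysis can likewise be sketched with hypothetical paired ratings; `scipy.stats.kendalltau` computes the tau-b variant (with a correction for ties) by default, which suits tied ordinal Likert data. All values below are simulated assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(7)

# Hypothetical paired ordinal ratings for one condition: acceptance of the
# decision output and the overall product assessment, constructed so the
# two variables are positively associated.
acceptance = rng.integers(1, 8, size=110)
overall = np.clip(acceptance + rng.integers(-1, 2, size=110), 1, 7)

# Kendall's tau-b rank correlation between the two rating scales
tau, p = kendalltau(acceptance, overall)
print(f"tau-b = {tau:.3f}, p = {p:.2e}")
```

A positive, significant tau-b here plays the same role as the reported correlations between decision acceptance and the product perception scales.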
To sum up, all results show a clear connection between the acceptance of ethical behavior and
assessment of the overall product in different dimensions.
4 Discussion
In our study, we have investigated how a certain moral behavior affects the overall perception of products. People favor a utilitarian approach (saving the young person) in highly safety-critical scenarios. Although the Ethics Commission of the German Federal Ministry of Transport and Digital Infrastructure postulates that algorithms are not allowed to weigh the value of lives (2017), and the general unfairness of the Trolley Problem cannot be denied (Holstein & Dodig-Crnkovic, 2018), our results indicate that age is an accepted parameter for moral algorithms. A random decision is, however, far less accepted.
Our results also show that the acceptance of moral decisions correlates with product quality perception in terms of users' need fulfillment and the perception of aesthetics and pragmatics. Considering that the AV was presented only in the form of a simple image, this is a strong argument. It shows that moral implications can play an important role for UX. Furthermore, specific moral decisions also affect product perception and vice versa. Even though a mandatory ethics setting is socially preferred, our results show that the automotive industry will have to deal with the problem that similar ethical behavior will be perceived differently based on a brand's image.
Investigating moral dilemmas in automated driving is still timely and challenging, as we do not yet have “real” systems, and a low-fidelity approach is just a first step in this domain. Although the presented studies show some interesting aspects, we are aware of limitations. The different systems were presented only visually, but for a fully holistic evaluation of a product and its perception, additional dimensions like context, haptics, size, odor or form must be considered, as well as the important dimension of time (before, during and after use). In addition, subjects could perceive the implications of the presented moral dilemmas from a distant perspective, as they were not confronted with the effects of the decisions themselves. In future work we will investigate additional scenarios that are more common than the Trolley Problem and demand moral behavior of automated systems in everyday situations (e.g., normal traffic on the street), and concentrate less on dilemmas with high severity. Furthermore, we want to develop strategies for evaluating the impact of moral behavior on UX and brand experience with higher-fidelity approaches, e.g., simulation environments or driving on a test track.
5 Conclusion
Even though we might never find clear solutions for deeply philosophical moral dilemmas such as the Trolley Problem, a connection between our perception of products and their ethical implications already exists. Following E. Herzberg's death, the public and media discussed whether companies like Uber or Tesla are going too far when testing their products in safety-critical environments. Brands will have to think about the ethical components of their image and how they want their products to be perceived by individuals and societies. We do not suggest that car
companies should develop algorithms that represent their desired brand expectations in potentially lethal moral dilemmas, but the Trolley Problem is an easily graspable abstraction that can represent a wide range of scenarios with moral implications. Results of the studies presented in this paper confirm a correlation between ethical behavior and the perception of automated vehicles. As more responsibilities are transferred from humans to technology, involving users and their personal values in design decisions will become more important to increase general acceptance. Valuable insights into needs and higher goals, such as personal values, can improve products and might lead to more valuable holistic experiences. As users' experience of a vehicle is shaped by perceptions even before actual use, articles in the media and discussions in society can prevent the comprehensive market establishment of individual brands' AVs. While German brands are still careful and cautious about testing their systems extensively in real road environments (Taylor, 2018), Tesla and Uber already face the challenge of recovering and re-establishing their desired image.
Albrechtslund, A. (2007). Ethics and technology design. Ethics and Information Technology, 1.
Bergmann, L. T., Schlicht, L., Meixner, C., König, P., Pipa, G., Boshammer, S., & Stephan, A. (2018). Autonomous Vehicles Require Socio-Political Acceptance - An Empirical and Philosophical Perspective on the Problem of Moral Decision Making. Frontiers in Behavioral Neuroscience, 31, p. 31.
Blyth, P.-L., Mladenovic, M. N., Nardi, B. A., Su, N. M., & Ekbia, H. R. (2015). Driving the self-driving vehicle: Expanding the technological design horizon. 2015 IEEE International Symposium on Technology and Society (ISTAS) (pp. 1-6). IEEE.
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 6293.
Boudette, N. E. (2017, January). Tesla's Self-Driving System Cleared in Deadly Crash. The New York Times.
Desmet, P., & Hekkert, P. (2017). Framework of product experience. International Journal of Design.
Foot, P. (1967). The problem of abortion and the doctrine of double effect.
Friedman, B., & Kahn Jr, P. H. (2002). Human values, ethics, and design. In The Human-Computer Interaction Handbook (pp. 1177-1201). L. Erlbaum Associates Inc.
Frison, A.-K., Wintersberger, P., & Riener, A. (2016). First Person Trolley Problem: Evaluation of Drivers' Ethical Decisions in a Driving Simulator. Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct (pp. 117-122). ACM.
German Federal Ministry of Transport and Digital Infrastructure. (2016). Amendments to Article 8 and Article 39 of 1968 Convention on Road Traffic.
German Federal Ministry of Transport and Digital Infrastructure. (2017). Ethik-Kommission Automatisiertes und vernetztes Fahren.
German Federal Ministry of Transport and Digital Infrastructure. (2017, June 20). Ethik-Kommission zum automatisierten Fahren legt Bericht vor. Retrieved April 7, 2018.
Gogoll, J., & Müller, J. F. (2017). Autonomous cars: in favor of a mandatory ethics setting. Science and Engineering Ethics, 3, pp. 681-700.
Hassenzahl, M. (2003). The thing and I: understanding the relationship between user and product. In Funology (pp. 31-42). Springer.
Hassenzahl, M. (2008). User experience (UX): towards an experiential perspective on product quality. Proceedings of the 20th Conference on l'Interaction Homme-Machine (pp. 11-15). ACM.
Hassenzahl, M., Burmester, M., & Koller, F. (2003). AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität. Mensch & Computer (pp. 187-196). Springer.
Hassenzahl, M., Diefenbach, S., & Göritz, A. (2010). Needs, affect, and interactive products - Facets of user experience. Interacting with Computers, 5(22), 353-362.
Hassenzahl, M., Wiklund-Engblom, A., Bengs, A., Hägglund, S., & Diefenbach, S. (2015). Experience-oriented and product-oriented evaluation: psychological need fulfillment, positive affect, and product perception. International Journal of Human-Computer Interaction, 8(31), 530-544.
Herger, M. (2018, February 1). Disengagement Report 2017 - The Good, The Bad, The Ugly. Retrieved April 7, 2018.
Holstein, T., & Dodig-Crnkovic, G. (2018). Avoiding the Intrinsic Unfairness of the Trolley Problem. ICSE 2018 Workshop FairWare.
Holstein, T., Dodig-Crnkovic, G., & Pelliccione, P. (2018). Ethical and Social Aspects of Self-Driving Cars. arXiv preprint arXiv:1802.04103.
Li, J., Zhao, X., Cho, M.-J., Ju, W., & Malle, B. F. (2016). From Trolley to Autonomous Vehicle: Perceptions of Responsibility and Moral Norms in Traffic Accidents with Self-Driving Cars. SAE Technical Paper.
Litman, T. (2014). Autonomous vehicle implementation predictions. Victoria Transport Policy Institute, 28.
Mekler, E. D., & Hornbaek, K. (2016). Momentary pleasure or lasting meaning? Distinguishing eudaimonic and hedonic user experiences. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 4509-4520). ACM.
Meschtscherjakov, A., Wilfinger, D., & Tscheligi, M. (2014). Mobile attachment causes and consequences for emotional bonding with mobile phones. Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems (pp. 2317-2326). ACM.
Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: an applied trolley problem? Ethical Theory and Moral Practice, pp. 1275-1289.
Taylor, M. (2018, March 22). Fatal Uber Crash Was 'Inevitable,' Says BMW's Top Engineer. Forbes. Retrieved April 7, 2018.
Wintersberger, P., & Riener, A. (2016). Trust in technology as a safety aspect in highly automated driving. i-com, 3, pp. 297-310.
Wintersberger, P., Frison, A.-K., Riener, A., & Hasirlioglu, S. (2017). The experience of ethics: Evaluation of self-harm risks in automated vehicles. Intelligent Vehicles Symposium (IV), 2017 IEEE (pp. 385-391). IEEE.
Wintersberger, P., Frison, A.-K., Riener, A., & Thakkar, S. (2017). Do Moral Robots Always Fail? Investigating Human Attitudes Towards Ethical Decisions of Automated Systems. IEEE International Symposium on Robot and Human Interactive Communication. Lisbon.
... In recent years, substantial research has been dedicated to investigating decisions made by automated systems in situations of moral dilemma and to the social dilemmas resulting from those decisions [6]- [9]. The Trolley Problem is a thought experiment whose most common version was proposed by the English philosopher Philippa Foot [10]. ...
Full-text available
The rapid development of automation has led to machines increasingly taking over tasks previously reserved for human operators, especially those involving high-risk settings and moral decision making. To best benefit from the advantages of automation, these systems must be integrated into work environments, and into the society as a whole. Successful integration requires understanding how users gain acceptance of technology by learning to trust in its reliability. It is, thus, essential to examine factors that influence the integration, acceptance, and use of automated technologies. As such, this study investigated the conditions under which human operators were willing to relinquish control, and delegate tasks to automated agents by examining risk and context factors experimentally. In a decision task, participants ( $N=43$ , 27 female) were placed in different situations in which they could choose to delegate a task to an automated agent or manual execution. The results of our experiment indicated that both, context and risk, significantly influenced people’s decisions. While it was unsurprising that the reliability of an automated agent seemed to strongly influence trust in automation, the different types of decision support systems did not appear to impact participant compliance. Our findings suggest that contextual factors should be considered when designing automated systems that navigate moral norms and individual preferences.
Full-text available
In this paper we provide a proof of principle of a new method for addressing the ethics of autonomous vehicles (AVs), the Data-Theories Method, in which vehicle crash data is combined with philosophical ethical theory to provide a guide to action for AV algorithm design. We use this method to model three scenarios in which an AV is exposed to risk on the road, and determine possible actions for the AV. We then examine how different philosophical perspectives on agent partiality, or the degree to which one can act in one's own self-interest, might address each scenario. This method shows why modelling the ethics of AVs using data is essential. First, AVs may sometimes have options that human drivers do not, and designing AVs to mimic the most ethical human driver would not ensure that they do the right thing. Second, while ethical theories can often disagree about what should be done, disagreement can be reduced and compromises found with a more complete understanding of the AV's choices and their consequences. Finally, framing problems around thought experiments may elicit preferences that are divergent with what individuals might prefer once they are provided with information about the real risks for a scenario. Our method provides a principled and empirical approach to productively address these problems and offers guidance on AV algorithm design.
Conference Paper
Full-text available
The intrinsic unfairness of the trolley problem comes from the assumption that lives of different people have different values. In this paper, techno-social arguments are used to show the infeasibility of the trolley problem when addressing the ethics of self-driving cars. We argue that different components can contribute to an “unfair” behaviour and features, which requires ethical analysis on multiple levels and stages of the development process. Instead of an idealized and intrinsically unfair thought experiment, we present real-life techno-social challenges relevant for the domain of software fairness in the context of self-driving cars.
Full-text available
Autonomous vehicles, though having enormous potential, face a number of challenges. As a computer system interacting with society on a large scale and human beings in particular, they will encounter situations, which require moral assessment. What will count as right behavior in such situations depends on which factors are considered to be both morally justified and socially acceptable. In an empirical study we investigated what factors people recognize as relevant in driving situations. The study put subjects in several “dilemma” situations, which were designed to isolate different and potentially relevant factors. Subjects showed a surprisingly high willingness to sacrifice themselves to save others, took the age of potential victims in a crash into consideration and were willing to swerve onto a sidewalk if this saved more lives. The empirical insights are intended to provide a starting point for a discussion, ultimately yielding societal agreement whereby the empirical insights should be balanced with philosophical considerations.
As an envisaged future of transportation, self-driving cars are being discussed from various perspectives, including social, economic, engineering, computer science, design, and ethics. On the one hand, self-driving cars present new engineering problems that are gradually being solved. On the other hand, social and ethical problems are typically presented in the form of an idealized, unsolvable decision-making problem, the so-called trolley problem, which is grossly misleading. We argue that an applied engineering ethical approach to the development of new technology is what is needed; the approach should be applied, meaning that it should focus on the analysis of complex real-world engineering problems. Software plays a crucial role in the control of self-driving cars; therefore, software engineering solutions should seriously handle ethical and social considerations. In this paper we take a closer look at the regulative instruments, standards, design, and implementations of components, systems, and services, and we present practical social and ethical challenges that have to be met, as well as novel expectations for software engineering.
Technological advances will soon make it possible for automated systems (such as vehicles or search and rescue drones) to take over tasks that have been performed by humans. Still, it will be humans that interact with these systems: relying on the system (and its decisions) will require trust in the robot/machine and its algorithms. Trust research has a long history. One dimension of trust, ethical or morally acceptable decisions, has not received much attention so far. Humans are continuously faced with ethical decisions, reached based on a personal value system and intuition. In order for people to be able to trust a system, it must have widely accepted ethical capabilities. Although some studies indicate that people prefer utilitarian decisions in critical situations, e.g., when a decision requires favoring one person over another, this approach would violate laws and international human rights, as individuals must not be ranked or classified by personal characteristics. One solution to this dilemma would be to make decisions by chance, but what about acceptance by system users? To find out whether randomized decisions are accepted by humans in morally ambiguous situations, we conducted an online survey in which subjects had to rate their personal attitudes toward decisions of moral algorithms in different scenarios. Our results (n=330) show that, although slightly more respondents state a preference for decisions based on ethical rules, randomization is perceived to be most just and morally right and thus may drive decisions in case other objective parameters equate.
Automated vehicles, but also safety or driver assistance systems for manually driven cars, will soon face situations where they have to choose between several options with negative or even lethal outcomes for one party or the other. Experimental ethics is an approach to evaluate the expectations humans put into the morality of digital agents. With this work we present a personalized abstraction of the “Trolley Problem” evaluated in a driving simulator. The aim of the study was to assess drivers’ individual attitudes in ethical decisions and derive common knowledge about how to solve such situations. In contrast to previous work, the study at hand looks at the problem from a holistic point of view, including uncertainty and accident risk (presented to subjects as their probability to survive). Furthermore, subjective scales and semi-structured interviews to determine subjects’ justifications for ethical decisions complemented the setting. Our results (n=40) suggest that most drivers want their vehicles to act in a utilitarian way and opt for the more severe collision, even when their own probability to survive is substantially low. In addition, age and size of the injured party have a significant effect on the results. Qualitative data (interviews) indicate that the justification, in particular for decisions with the same outcome, strongly differs, as most people have embodied their own moral concepts.
No matter how safe automated driving will be in the future, there will remain situations where an accident is unavoidable and vehicles will have to decide between options that all potentially result in a lethal outcome. Empirical ethics is one method that can provide decisions that are socially acceptable. To gain knowledge on attitudes towards ethical decision making in hazardous situations, we ran a driving simulator study with personalized abstractions of a classical ethical dilemma (cf. “Trolley Problem”). Our results indicate that drivers, who are confronted with the decision of sacrificing the life of others, want vehicles to decide in a utilitarian way, even when the probability of their own survival is substantially low and provoking an accident could result in their own death. Interviews with subjects after the study revealed that such ethical decisions polarize: some could not live with possessing a vehicle that might harm others for their own good; others would demand exactly such a behavior.
Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.
Codes of conduct in autonomous vehicles. When it becomes possible to program decision-making based on moral principles into machines, will self-interest or the public good predominate? In a series of surveys, Bonnefon et al. found that even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles (see the Perspective by Greene). Respondents would also not approve regulations mandating self-sacrifice, and such regulations would make them less willing to buy an autonomous vehicle. Science, this issue p. 1573; see also p. 1514.
Trust in technology is an important factor to be considered for safety-critical systems. Of particular interest today is the transport domain, as more and more complex information and assistance systems find their way into vehicles. Research in driving automation / automated driving systems is in the focus of many research institutes worldwide. On the operational side, active safety systems employed to save lives are frequently used by non-professional drivers who neither know system boundaries nor the underlying functional principle. This is a serious safety issue, as systems are activated under false circumstances and with wrong expectations. At least some of the recent incidents with advanced driving assistance systems (ADAS) or automated driving systems (ADS; SAE J3016) could have been prevented if drivers had fully understood system functionality and limitations (instead of overrelying on them). Drivers have to be trained to accept and use these systems in a way that subjective trust matches objective trustworthiness (cf. “appropriate trust”) to prevent disuse and/or misuse. In this article, we present an interaction model for trust calibration that issues personalized messages in real time. On the showcase of automated driving, we report the results of two user studies related to trust in ADS and driving ethics. In the first experiment (N = 48), mental and emotional states of front-seat passengers were compared to gain deeper insight into the dispositional trust of potential users of automated vehicles. Using quantitative and qualitative methods, we found that subjects accept and trust ADSs almost as much as male/female drivers. In another study (N = 40), moral decisions of drivers were investigated in a systematic way. Our results indicate that the willingness of drivers to risk even severe accidents increases with the number and age of pedestrians that would otherwise be sacrificed.
Based on our initial findings, we further discuss related aspects of trust in driving automation. Effective shared vehicle control and the expected advantages of fully / highly automated driving (SAE levels 3 or higher) can only be achieved when trust issues are demonstrated and resolved.
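Two decision strategies recur across the works summarized above: utilitarian minimization of expected harm and randomization among morally equivalent options. The following minimal sketch combines the two, purely for illustration; all names (`Maneuver`, `expected_harm`, `choose_maneuver`) are hypothetical and do not correspond to any system described in the cited studies.

```python
import random
from dataclasses import dataclass


@dataclass
class Maneuver:
    """A candidate evasive action with an aggregated expected-harm score."""
    name: str
    expected_harm: float


def choose_maneuver(options, rng=None):
    """Pick the option with minimal expected harm; break ties by chance.

    The utilitarian step selects the least-harm options; the randomized
    step avoids ranking individuals when options are morally equivalent.
    """
    rng = rng or random.Random()
    least = min(o.expected_harm for o in options)
    # Options whose harm score equals the minimum count as equivalent here.
    candidates = [o for o in options if o.expected_harm == least]
    return rng.choice(candidates)


options = [
    Maneuver("brake_straight", 0.8),
    Maneuver("swerve_left", 0.3),
    Maneuver("swerve_right", 0.3),
]
picked = choose_maneuver(options, rng=random.Random(42))
print(picked.name)  # one of the two equally rated swerve options
```

In practice the harm scores themselves are the hard part; the studies above suggest they would have to be grounded in crash data and socially negotiated values rather than chosen by the designer.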