The dark side of the 'Moral Machine' and the fallacy of computational ethical decision-making for autonomous vehicles

Hubert Etienne

Department of Philosophy, Ecole Normale Supérieure, Paris, France; Laboratory of Computer Sciences, Sorbonne University, Paris, France

ABSTRACT
This paper reveals the dangers of the Moral Machine experiment, warning against both its use for normative ends and the whole approach upon which it is built to address ethical issues. It explores methodological limits of the experiment beyond those already identified by its authors and provides reasons why it is inadequate to support the ethical and juridical discussions needed to determine the moral settings for autonomous vehicles. Demonstrating the inner fallacy behind computational social choice methods when applied to ethical decision-making, it also warns against the dangers of computational moral systems, such as the 'voting-based system' recently developed from the Moral Machine's data. Finally, it discusses the Moral Machine's ambiguous impact on public opinion: laudable, on the one hand, for having successfully raised global awareness of ethical concerns about autonomous vehicles; pernicious, on the other, as it has significantly narrowed the spectrum of autonomous vehicle ethics, de facto imposing a strong unidirectional approach while brushing aside other major moral issues.

ARTICLE HISTORY Received 23 July 2019; Accepted 8 May 2020

KEYWORDS Autonomous vehicles; self-driving cars; AI ethics; Moral Machine; trolley problem
1. Introduction
Whether they reach level 4 or level 5 autonomy within the next decade, autonomous vehicles (AVs),[1] whose tremendous expected benefits justify the fierce competition between manufacturers and national governments, represent a crucial challenge for AI ethics. While players have invested up to $80 billion between August 2014 and June 2017[2] in the hope of conquering a solid share of a market forecast to reach $6.7 trillion by 2030,[3] governments have understood the necessity of adapting national regulations to support self-driving testing, foreseeing AVs' high potential to contribute to economic growth and increased public safety.[4] Such advances are, however, not without their own legal and ethical issues. The most discussed so far has been that of moral responsibility and legal liability in the case of fatal accidents, which became a tragic reality in March 2018, when a self-driving Uber car killed a pedestrian in Arizona.[5]

[1] By autonomous vehicle, let us refer to a vehicle with either level 4 or level 5 automation, according to the Society of Automotive Engineers' classification (www.nhtsa.gov/technology-innovation/automated-vehicles-safety), when the self-driving mode is activated. Unless indicated otherwise, all websites were accessed 1 December 2019.

[2] Cameron F Kerry and Jack Karsten, 'Gauging Investments in Self-Driving Cars' Brookings (16 October 2017), www.brookings.edu/research/gauging-investment-in-self-driving-cars/ (accessed 30 December 2020).

[3] Detlev Mohr and others, Automotive Revolution: Perspective Towards 2030. How the Convergence of Disruptive Technology-Driven Trends Could Transform the Auto Industry (McKinsey & Company, 2016) 6.

[4] European Commission (EC), On the Road to Automated Mobility: An EU Strategy for Mobility of the Future COM(2018) 283, 2.

[5] Sam Levin and Julia C Wong, 'Self-Driving Uber Kills Arizona Woman in First Crash Involving Pedestrian' The Guardian (London, 19 March 2018).

Anticipating complex situations in which AVs may not be able to avoid accidents and would consequently have to allocate harm between several groups of individuals, researchers have found in the trolley problem[6] a theoretical framework to address the resulting moral dilemmas. Awad et al.[7] expanded on this by developing the Moral Machine (MM), an online platform reproducing trolley-style thought experiments in various situations involving an AV, with the aim of establishing a global representation of moral preferences. The great success of the experiment, which gathered about 40 million answers, then became the starting point for Noothigattu et al.[8] to develop a 'voting-based system' (VBS), grounded in computational social choice theories, with the intention of automating ethical decisions by aggregating the individuals' moral preferences collected by the MM.

[6] Philippa Foot, 'The Problem of Abortion and the Doctrine of Double Effect' (1967) 5 Oxford Review 5; Judith J Thomson, 'Killing, Letting Die, and the Trolley Problem' (1976) 59(2) The Monist 204; Judith J Thomson, 'The Trolley Problem' (1985) 94(6) Yale Law Journal 1395.

[7] Edmond Awad and others, 'The Moral Machine Experiment' (2018) 563 Nature 59.

[8] Ritesh Noothigattu and others, 'A Voting-based System for Ethical Decision Making' (2018) Proceedings of the 32nd AAAI Conference on Artificial Intelligence.

This paper presents a multi-level critique of both the MM and the VBS, highlighting their intrinsic limitations and revealing their deleterious effects on the debate. It brings to light the dangers proceeding from the use of the MM data for normative purposes, and the inner fallacy of attempting to automate ethical decision-making processes. The first part analyses the construction of the AV moral dilemmas and the way they are currently approached in the debate, from the advent of a moral imperative supporting the development of AVs to the conceptualisation of the dilemmas on the trolley problem's model and the deployment of the MM. The second part then criticises the use of the MM data by the VBS and refutes the possibility of developing a legitimate, coherent computational system to automate ethical decisions. Finally, the last part denounces an instrumentalisation of the ethical discourse, rejecting the 'highest moral imperative' underlying this project and exposing the distracting effects of the MM on public opinion, which has both polarised the debate on erroneous principles and categories and prevented relevant ethical issues from receiving appropriate attention.
2. A responsibility issue resulting from the transfer of autonomy

This first section analyses each step of the construction of the AV moral dilemmas, from the advent of a moral imperative supporting the development of AVs to the conceptualisation of such a dilemma on the trolley problem's model and the deployment of the MM. It insists on the economic and political issues, which I hold to play a major role in the justification of this moral imperative, referred to as the HMI and further discussed in Part 4.
2.1. From economic and social benefits toward a moral imperative to develop autonomous vehicles

According to their proponents, AVs represent a great advancement in circulating people and goods, expected to provide the automotive industry with tremendous business opportunities, customers with substantive advantages, and benefits to society as a whole in regard to road safety, economic growth and traffic management.

For the automobile industry, AVs are the centre of a common vision to expand the market and deeply modify its structure, specifically regarding its revenue streams. Taking automotive competition beyond its traditional differentiation factors, the AV project aims to produce a radical change in mobility consumption behaviour, from a private car ownership model to a system dominated by shared mobility solutions.[9] Partnerships between technology companies and car manufacturers, such as Waymo and Fiat Chrysler, Uber and Volvo, or Lyft and General Motors, are thus leading the race to complete the disruption of the mobility sector initiated by Uber a decade ago. These companies expect not only overwhelming revenue increases, as the aggregate of the price and volume effects induced by the replacement of human drivers, but also a vast diversification of their services to absorb the whole of the delivery economy.

[9] Mohr (n 3) 8; Wolfgang Gruel and Joseph M Stanford, 'Assessing the Long-Term Effects of Autonomous Vehicles: A Speculative Approach' (2016) 13 Transportation Research Procedia 28. While the McKinsey study demonstrates that a change in mobility behaviour will make economic sense, Gruel and Stanford explain why such change may be required to attain a greener and more sustainable mobility system, which the introduction of AVs alone cannot achieve.
On the consumers' side, one can expect safer rides under all circumstances, allowing people to go home safely whatever their alcohol or tiredness level, as well as better time management, enabling them to rest, work or conduct any other activity during their journey. AVs also promise to be more economical for both shared-ride users and car owners. The average operating cost of an electric-powered vehicle was estimated to be 2.3 times lower than that of a fuel-powered vehicle,[10] thus compensating for the sale price premium within a few years, a premium which is itself expected to decrease as AVs democratise. Furthermore, AVs promise more inclusive mobility, opening the roads to people with reduced driving capacity (e.g. those with mobility or visual impairments), together with all those who do not have a driver's licence.

[10] Michael Sivak and Brandon Schoettle, 'Relative Costs of Driving Electric and Gasoline Vehicles in the Individual U.S. States' (2018) Report No. SWT-2018-1, University of Michigan, 6.
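For concreteness, the payback claim can be restated as a back-of-envelope computation. The sketch below reuses the 2.3 cost ratio cited above, but the price premium, per-kilometre cost and annual mileage are hypothetical placeholders, not figures from this paper or from Sivak and Schoettle's report.

```python
# Back-of-envelope break-even sketch for the operating-cost claim above.
# All figures except the 2.3 ratio are assumed placeholders.

price_premium = 12_000                    # extra purchase cost (USD, assumed)
cost_per_km_fuel = 0.115                  # fuel-powered operating cost (USD/km, assumed)
cost_per_km_ev = cost_per_km_fuel / 2.3   # the '2.3 times lower' operating cost
annual_km = 15_000                        # distance driven per year (assumed)

annual_saving = annual_km * (cost_per_km_fuel - cost_per_km_ev)
print(f"Break-even after {price_premium / annual_saving:.1f} years")
```

Under these assumed numbers the premium is recovered in roughly a decade; with a larger cost gap or higher mileage, the 'few years' horizon mentioned above follows.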
With reference to governments and societies, the deployment of AVs is promoted as leading to better traffic management, a significant positive environmental impact, and considerable public savings. In mixed-autonomy traffic, a low proportion of AVs may be sufficient to increase traffic fluidity by reducing congestion and increasing the average speed (some suggest that traffic comprised of only 10% AVs could even double the average car speed[11]) while cutting carbon dioxide emissions by up to 60%[12] due to greater fuel efficiency and speed management. In fully autonomous traffic, most road infrastructure (signage, radars, etc.), together with their dedicated agents (traffic police, public transportation drivers, etc.), would become redundant, resulting in massive cost reductions for public administrations. A recent study also suggests that the introduction of AVs in Boston, USA, could reduce the need for parking spaces by up to 50% by 2030.[13]

[11] Eugene Vinitsky and others, 'Benchmarks for Reinforcement Learning in Mixed-Autonomy Traffic' (2018) 87 Proceedings of the 2nd Conference on Robot Learning, PMLR 11.

[12] Michele Bertoncello and Dominik Wee, Ten Ways Autonomous Driving Could Redefine the Automotive World (McKinsey & Company, 1 June 2015).

[13] Nikolaus S Lang and others, Making Autonomous Vehicles a Reality: Lessons from Boston and Beyond (The Boston Consulting Group, October 2017).
Finally, the greatest advantage of AVs decidedly dwells in their potential to make roads safer. Around 1.35 million people die every year in traffic accidents across the world,[14] including 25,500 people in the EU[15] and 37,461 in the US[16] in 2016, while 94% of car accidents are said to result from human error.[17] AVs are expected to drastically reduce this number, matching a top political priority of both the US National Safety Council, which leads the Road to Zero Coalition, and the EU, whose 'Vision Zero' aims for zero fatalities on European roads by 2050.[18] It is worth underlining the fact that road mortality constitutes a considerable economic burden on nation states' economic growth and healthcare systems. The National Highway Traffic Safety Administration (NHTSA) estimates that the impact of vehicle crashes amounts to $242 billion in US economic activity, with an additional $594 billion due to injuries.[19] Considering the numerous deaths that AVs may avoid, some of their partisans have even proclaimed a moral obligation to deploy them as soon as possible,[20] supporting relaxed regulations for manufacturers and lowered liability in case of accident so as not to discourage them.[21] Let us refer to this claim as the 'highest moral imperative' (HMI), particularly well illustrated by Mark Rosekind's remarks while still chief regulator at the NHTSA: 'We can't stand idly by while we wait for the perfect [...] We lost 35,200 lives on our roads last year [...] How many lives might we be losing if we wait?'.[22] For some of the HMI's proponents, the moral imperative applies as soon as AVs can reduce the net balance of total annual deaths by one, which leads them to develop simulation instruments designed to help policymakers identify this critical moment,[23] while others assert, on the same ground, that regular cars should be prohibited as soon as AVs become safer.[24]

[14] World Health Organization, Global Status Report on Road Safety 2018, 4.

[15] European Commission, Statistiques de la sécurité routière pour 2016 : que révèlent les chiffres ? (10 April 2018).

[16] National Center for Statistics and Analysis, 2016 Fatal Motor Vehicle Crashes: Overview (2017) Traffic Safety Facts Research Note, Report No. DOT HS 812 456, National Highway Traffic Safety Administration, 1.

[17] Santokh Singh, Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey (2015) Traffic Safety Facts Crash Stats, Report No. DOT HS 812 115, National Highway Traffic Safety Administration, 1.

[18] European Commission, Roadmap to a Single European Transport Area: Towards a Competitive and Resource Efficient Transport System COM(2011) 144, 22.

[19] National Highway Traffic Safety Administration, 'Automated Vehicles for Safety', https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety (accessed 30 December 2020).

[20] Jean-François Bonnefon, Azim Shariff and Iyad Rahwan, 'The Social Dilemma of Autonomous Vehicles' (2016) 352(6293) Science 1575; Azim Shariff, Jean-François Bonnefon and Iyad Rahwan, 'Psychological Roadblocks to the Adoption of Self-Driving Vehicles' (2017) Nature Human Behaviour 694.

[21] Alexander Hevelke and Julian Nida-Rümelin, 'Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis' (2015) 21(3) Science and Engineering Ethics 619, 629.

[22] Melissa Bauman and Alyson Youngblood, 'Why Waiting for Perfect Autonomous Vehicles May Cost Lives' (RAND Corporation, 2017).

[23] Nidhi Kalra and David G Groves, The Enemy of Good: Estimating the Cost of Waiting for Nearly Perfect Automated Vehicles (RAND Corporation, 2017).

[24] Robert Sparrow and Mark Howard, 'When Human Beings are Like Drunk Robots: Driverless Vehicles, Ethics, and the Future of Transport' (2017) 80 Transportation Research Part C: Emerging Technologies 206, 209-10.
2.2. The birth of a responsibility issue and the Moral Machine experiment

Although AV supporters advance a reasonable argument in favour of early deployment, some advocate that additional issues should first be fully considered before bringing AVs to market. AVs may indeed help avoid the majority of today's accidents resulting from human factors. However, they will not realistically prevent all accidents, which could still occur through technical outages or meteorological conditions, sometimes resulting in complex situations where no evident choice could be unanimously preferred. One of these is analysed by Patrick Lin[25] in a thought experiment where an agent is driving a non-autonomous vehicle (NAV) in the central lane of the highway, right behind a large truck, with a car in the left lane and a motorcycle in the right lane. A large box suddenly falls from the truck towards the agent, who does not have sufficient space to stop the car safely, and thus needs to arbitrate between three alternatives: (1) keep straight and greatly endanger the car's passengers by hitting the box; (2) swerve to the left lane to avoid the box and hit the other car, moderately endangering all the passengers of the two vehicles; (3) swerve to the right lane and hit the motorcyclist, severely endangering his or her life but with low harm to the passengers of the subject's car.

[25] Patrick Lin, 'The Ethical Dilemma of Self-Driving Cars' (2015) TED-Ed.
This illustrates the sort of complex situation a driver (let us call her Aliénor) may encounter, facing a moral dilemma involving several people's lives, including her own, with no evident unequivocal solution. Under such conditions, moral philosophers tend to agree that, whatever Aliénor's choice may be, neither her moral responsibility nor her legal liability is at stake here (unless she entered this situation by breaking the law, for instance by exceeding the authorised speed limit), as her decision results from an instinctive reaction rather than a rational, deliberate judgment.

Now, replacing the NAV with an AV results in an entirely different assessment of the decision taken by the algorithm in regard to both responsibility and liability. Unlike Aliénor, who is only granted a few tenths of a second to understand the situation, make a decision and implement it, the AV driving software has a much better ability to react, as well as an a priori knowledge of the appropriate decision to take, its manufacturers having anticipated such scenarios and benefited from an appropriate amount of time to identify the best alternative. Furthermore, we observe a shift in the decision-maker's position: whereas Aliénor is directly involved in the dilemma situation, making a particular decision in praesenti to manage an existing scenario that may hurt her (the NAV case), manufacturers are indirectly involved in the dilemma situation, making general ex ante decisions to address potential scenarios that may endanger Aliénor, but not them (the AV case).
To help conceptualise the problem, researchers have drawn an analogy between such AV dilemmas and the well-known trolley problem,[26] first conceived of by Philippa Foot[27] and then notably explored by Judith Thomson[28] and Frances Kamm.[29] Foot's thought experiment asks which of two evil actions is the lesser for a moral agent confronted with a critical situation, traditionally addressed either from a deontological view (considering that killing is worse than letting die) or from a consequentialist approach (for which saving five people is better than saving one).

[26] Examples include Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2008); Anders Sandberg and Heather Bradshaw-Martin, 'What Do Cars Think of Trolley Problems: Ethics for Autonomous Cars?' Proceedings of the 2013 International Conference Beyond AI, 2013; Noah J Goodall, 'Machine Ethics and Automated Vehicles' in G Meyer and S Beiker (eds), Road Vehicle Automation (Springer, 2014) 93; Patrick Lin, 'Here's a Terrible Idea: Robot Cars With Adjustable Ethics Settings' Wired (18 August 2014); Jean-François Bonnefon, Azim Shariff and Iyad Rahwan, 'Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?' (2015) arXiv; Christian Gerdes and Sarah Thornton, 'Implementable Ethics for Autonomous Vehicles' in M Maurer, C Gerdes, B Lenz and H Winner (eds), Autonomes Fahren. Technische, rechtliche und gesellschaftliche Aspekte (Springer, 2015) 87; Giuseppe Contissa, Francesca Lagioia and Giovanni Sartor, 'The Ethical Knob: Ethically-Customisable Automated Vehicles and the Law' (2017) 25(3) Artificial Intelligence and Law 365; Wulf Loh and Janina Loh, 'Autonomy and Responsibility in Hybrid Systems: The Example of Autonomous Cars' in P Lin, K Abney and R Jenkins (eds), Robot Ethics 2.0 (Oxford University Press, 2017) 35.

[27] Foot (n 6).

[28] Thomson (n 6).

[29] Frances M Kamm, 'Harming Some to Save Others' (1989) 57(3) Philosophical Studies 227.
Inspired by the trolley problem, Bonnefon et al. investigated the psychology of individuals faced with AV moral dilemmas. An initial study, comprised of three surveys completed on Amazon's Mechanical Turk platform, found that although a large majority of participants (c. 75%) were in favour of 'utilitarian AVs' (i.e. cars programmed to minimise the number of total deaths), significantly fewer believed that AVs would actually be programmed to this end, foreseeing the incentive for manufacturers to prioritise the lives of AV passengers over others' lives. Furthermore, despite feeling comfortable with others buying utilitarian AVs, respondents were much less willing to buy such cars themselves. The authors then concluded on the existence of a social dilemma, summarised as: 'People mostly agree on what should be done for the greater good of everyone, but it is in everybody's self-interest not to do it themselves'.[30] Assuming that a typical solution to overcoming social dilemmas consists of regulators enforcing a targeted behaviour, thus eliminating the opportunity to free ride, the researchers conducted another study, focused on the impact of governmental regulation. From the analysis of six surveys also submitted to the Mechanical Turk platform, they came to a paradoxical conclusion: regulation may be necessary, but at the same time counterproductive.[31] In fact, while regulation may solve the social dilemma, they find that most people would likely disapprove of a regulation enforcing utilitarian AVs, ultimately leading to a more serious problem, that is, a conflict with the HMI: 'regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether'.[32]

[30] Bonnefon et al. (n 26) 8.

[31] Bonnefon et al. (n 20) 1575.

[32] Ibid. 1575-6.
Supported by other colleagues, and for the purpose of collecting wide-scale information about individuals' preferences regarding AV moral dilemmas, identifying potential cultural regularities and mapping geographical trends, the researchers finally deployed the MM. Assuming an explicit foundation in Thomson's cases (based on the illustrations presented on their website), the online experimental platform offers trolley problem-type situations for which participants are asked to choose the less evil consequence between letting the car continue ahead or swerving into the other lane, resulting in different outcomes implying at least one person's death. Each dilemma set contains thirteen randomly selected situations designed to evaluate the participants' preferences according to nine factors: (1) sparing humans versus pets; (2) staying on course versus swerving; (3) sparing passengers versus pedestrians; (4) sparing more lives versus fewer lives; (5) sparing men versus women; (6) sparing the young versus the elderly; (7) sparing pedestrians who cross legally versus jaywalkers; (8) sparing the fit versus the less fit; and (9) sparing those with higher social status versus those with lower social status.
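To make the structure of the collected judgments concrete, a minimal sketch follows of how a single MM answer might be encoded for later analysis. The schema and field names are illustrative assumptions, not the experiment's published data format.

```python
from dataclasses import dataclass

# Hypothetical encoding of one Moral Machine answer. Field names are
# illustrative assumptions, not the experiment's actual schema.
@dataclass
class DilemmaResponse:
    respondent_id: str
    country: str            # inferred from the IP address in the experiment
    stayed_on_course: bool  # factor (2): keep straight versus swerve
    spared: str             # which group the respondent chose to spare
    # Attributes of the two groups, from which the nine factors are estimated:
    group_a: dict
    group_b: dict

response = DilemmaResponse(
    respondent_id="anon-42",
    country="FR",
    stayed_on_course=False,
    spared="group_b",
    group_a={"humans": 2, "pets": 0, "jaywalking": True},
    group_b={"humans": 1, "pets": 1, "jaywalking": False},
)
```

Each answer thus records only a binary choice over two bundles of attributes; everything the VBS later infers about 'moral preferences' must be reconstructed from many such binary comparisons.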
The MM was undeniably successful in its reach, collecting 39.61 million answers from 1.3 million respondents across 233 countries and territories in only two years. Its results reveal a global preference for sparing humans over animals, saving more lives rather than fewer, and privileging the young over the elderly. The researchers also identified three cultural clusters associated with different world regions (Western, Eastern, Southern), and observed specific trends opposing individualistic and collectivistic cultures.[33] Relayed by first-class international newspapers, the MM succeeded in reaching a much wider audience than the narrow sphere of AI ethicists, shedding some well-deserved light on an issue that would otherwise have remained in the shadow of the innovation race.[34] However, the publication of the MM results was soon followed by associated works pursuing antagonistic goals.

[33] Awad et al. (n 7).

[34] The words of Christoph von Hugo, Mercedes-Benz's manager of driver assistance systems and active safety, at the 2016 Paris Motor Show prove that many manufacturers would have otherwise been keen to solve AV moral dilemmas on their own: 'If you know you can save at least one person, at least save that one. Save the one in the car' (Lindsay Dodgson, 'Why Mercedes Plans to Let Its Self-Driving Cars Kill Pedestrians in Dicey Situations' Business Insider France (Paris, 12 October 2016)).
3. Of the dangerous uses of the Moral Machine experiment

In this part of the article, I discuss the problematic shift in approach from the MM to the VBS, criticising the disloyal intentions it reveals about the whole experiment, presenting two main types of methodological limitations disqualifying the use of MM data for such normative intentions, and offering several arguments against the relevance of computational social choice theories for providing satisfying answers to ethical dilemmas.
3.1. From the Moral Machine to the voting-based system

Elaborating on the results of the MM experiment, Edmond Awad declared that 'What [they] are trying to show here is descriptive ethics: people's preferences in ethical decisions [...] But when it comes to normative ethics, which is how things should be done, that should be left to experts',[35] firmly placing the MM in the spirit of Bonnefon et al.'s original ambition: 'to be clear, we do not mean that these thorny ethical questions can be solved by polling the public and enforcing the majority opinion. Survey data will inform the construction and regulation of moral algorithms for AVs, not dictate them'.[36] However, Awad and two other authors of the MM experiment co-signed another paper, published a month after the release of the MM results, presenting a 'voting-based system for ethical decision making' (VBS) trained on the MM dataset and arguing that 'the starting point of [their] work was the realization that the MM dataset can be used not just to understand people, but also to automate decisions'.[37] In the wake of Joshua Greene's connection between computational social choice and ethical decision making,[38] together with Vincent Conitzer's statement that aggregating moral views could lead to the development of a morally better system,[39] the authors assert that 'decision making can, in fact, be automated, even in the absence of such ground-truth principles, by aggregating people's opinions on ethical dilemmas'. They present 'a concrete approach for ethical decision-making based on computational social choice' with the goal of serving as a foundation for incorporating 'future ground-truth ethical and legal principles', which, once implemented on the MM dataset, can 'make credible decisions on ethical dilemmas in the autonomous vehicle domain'.[40] The shift in the researchers' intentions regarding the collected data between the MM and the VBS cannot seriously be associated with anything other than a clear ethical fault and an academic felony against the experiment's subjects. It should be added to the list of scandals faced by the MIT Media Lab,[41] leading us to challenge the sincerity of the authors' philanthropic ambitions.

[35] James Vincent, 'Global Preferences for Who to Save in Self-Driving Car Crashes Revealed' The Verge (24 October 2018).

[36] Bonnefon et al. (n 26) 4.

[37] Noothigattu et al. (n 8) 4.

[38] Joshua Greene and others, 'Embedding Ethical Principles in Collective Decision Support Systems' (2016) Proceedings of the 30th AAAI Conference on Artificial Intelligence 4147-51.

[39] Vincent Conitzer and others, 'Moral Decision Making Frameworks for Artificial Intelligence' (2017) Proceedings of the 31st AAAI Conference on Artificial Intelligence 4831-5.

[40] Noothigattu et al. (n 8) 1, 2, 2, 20.

[41] It may be relevant to point out here that between the submission and the final acceptance of the present article, the MIT Media Lab (ML), from where the MM and the VBS experiments were conducted, went through a series of scandals. These involve undisclosed funding received from Jeffrey Epstein (Noam Cohen, 'Dirty Money and Bad Science at MIT's Media Lab' (Wired, 16 January 2020)) to the benefit of both the ML and its director's investment funds, resulting in the latter's resignation and public apology (www.media.mit.edu/posts/my-apology-regarding-jeffrey-epstein/), as well as an ML graduate student accusing the lab of being aligned with a lobbying strategy to manipulate AI ethics in such a way as to avoid state regulation (Rodrigo Ochigame, 'The Invention of Ethical AI: How Big Tech Manipulates Academia to Avoid Regulation' (The Intercept, 20 December 2019)).
Let us examine two types of methodological limits regarding the MM data, justifying its disqualification from serving any end other than descriptive ones, before demonstrating why computational social choice theories cannot be applied to ethical decisions.

Firstly, the scientific relevance of the VBS results is highly limited by the poor quality of the MM data, including strong selection biases across respondents. The sample is 'self-selected' and arguably close to the 'internet-connected, tech-savvy population that is interested in driverless car technology', justifying the caution that policymakers should not embrace this data 'as the final word on societal preferences'.[42] Although Awad et al. then try to minimise this bias's weight, based on the fact that the heterogeneity of answers across countries exhibits cultural and economic specificities, their sample as a matter of fact only takes into account the preferences of this tech-savvy population. The MM data is thus certainly biased in favour of a population more likely to buy AVs, and thus to occupy the passenger seat in the dilemma, rather than those more susceptible to end up in the pedestrian position. The latter's preferences nonetheless also deserve to be counted and may certainly diverge. Furthermore, there are concrete reasons to be sceptical about the seriousness of many respondents when taking the MM tests, as well as about the accuracy of their geo-localisation, captured via their IP address. In fact, no information is provided about eventual strategies to exclude users of virtual private networks (VPNs being used by 26% of the internet population[43]), a community expected to be overrepresented in the MM sample of tech-savvy people. Finally, while Awad et al. point out the simplistic aspect of the MM, its design does not include uncertainty about consequences, and thus excludes risk management under limited information, a flaw that also makes the MM technologically unviable.[44]

[42] Awad et al. (n 7) 63.

[43] J Clement, 'Global VPN Usage Reach 2018, by Region' (22 July 2019), https://www.statista.com/statistics/306955/vpn-proxy-server-use-worldwide-by-region/ (accessed 21 November 2019).

[44] Not only could the software not distinguish a criminal from a doctor, nor produce a precise approximation of pedestrians' ages (particularly when faces are recorded from the profile or back), but it also often fails to recognise human beings, as when Google's software identified a black person under the 'gorilla' label.
Secondly, whereas the MM project is explicitly presented as an applied trolley dilemma deriving from Thomson's cases, such an analogy encounters several objections, two of which are presented by Sven Nyholm and Jilles Smids.[45] Firstly, there is an asymmetry between the scope of interests considered in the trolley scenarios (for which account is taken only of the interests of the moral patients present in the particular situation) and AV dilemmas, whose normative dimension requires the assessment of the interests of all people who may end up in such a situation. Secondly, the trolley dilemmas focus strictly on the agent's moral responsibility, whereas AV dilemmas also require the assessment of their legal liability, which significantly impacts decisions and constrains rights to action.[46] What is more, these dis-analogies do not preserve the MM from the criticisms expressed against the trolley problem itself, which essentially target the inapplicability of its results. It has indeed been observed that respondents' decisions change with the level of concreteness of the experiment, respondents being more reluctant to push the fat man over the bridge (Thomson's 'fat man' case) in virtual reality,[47] as well as to redirect an electroshock toward one mouse to avoid hitting five of them (Thomson's 'bystander at the switch' case) in real conditions.[48] Finally, it has also been remarked that the humoristic perception of the dilemma may alter respondents' decision-making process,[49] which is problematic not only for the fat man case (the 'less fit' criterion in the MM), in which two-thirds of the respondents were reportedly laughing, but also for the bystander at the switch case, in which a third of them were reportedly laughing.

[45] Sven Nyholm and Jilles Smids, 'The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?' (2016) 19(5) Ethical Theory and Moral Practice 1275.

[46] Allen Wood, 'Humanity as an End in Itself' in D Parfit and S Scheffler (eds), On What Matters, vol 2 (Oxford University Press, 2011) 58, 74-5.

[47] Kathryn B Francis and others, 'Virtual Morality: Transitioning from Moral Judgment to Moral Action?' (2017) 12(1) PLoS One.

[48] Dries H Bostyn, Sybren Sevenhant and Arne Roets, 'Of Mice, Men, and Trolleys: Hypothetical Judgment Versus Real-Life Behavior in Trolley-Style Moral Dilemmas' (2018) 29(7) Psychological Science 1084.

[49] Christopher W Bauman and others, 'Revisiting External Validity: Concerns About Trolley Problems and Other Sacrificial Dilemmas in Moral Psychology' (2014) 8(9) Social and Personality Psychology Compass 536.
3.2. The inner fallacy of computational social choice applied to ethical choices

Having established that the MM data cannot be used for normative intentions because of its methodological limits, I shall now expose two arguments demonstrating that the whole project of building an ethical decision-making system based on computational social choice theories, upon which the VBS rests, is not only fallacious but also dangerous for our democracies.

First, let us recall that, by definition, only moral agents are capable of making moral decisions. Moral agents can be defined as autonomous subjects provided with a certain idea of the good and whose free will allows them to determine their own general principles of action from which to make particular decisions. They are capable of justifying these and responsible for their intended consequences. In contrast, there is today no algorithm autonomous in the philosophical sense, and stochastic algorithms are particularly unable to justify each of their choices with consistent rules, or to be held accountable for the consequences of their actions. There is thus, for now, insufficient ground to question the moral status of such algorithms without jeopardising the responsibility around their consequences, and also to refuse to consider them as moral proxies. Jason Millar recalls that a moral proxy is 'a person responsible for making healthcare decisions on behalf of another' when such a person is incapable of doing so themselves;[50] the moral proxy of a moral agent is thus another moral agent making a decision in the best interests of the first one. Although it may look merely semantic, this distinction is crucial because, unlike Greene et al., who focus on hybrid collective decision-making systems[51] with the purpose of improving communication between humans and robots for better collaboration, Conitzer et al. and Noothigattu et al. intend to create an autonomous system of moral decision-making, considering such a system to 'make moral decisions'[52] or 'make ethical decisions'.[53] Granting algorithms the capacity to take moral decisions greatly jeopardises the traceability chain of decisions and responsibility, which is in turn necessary to fairly allocate sentences when algorithms produce harmful consequences. Consequently, because algorithms are not moral agents, they cannot make moral decisions, and the production of ethical decisions cannot be automated.

[50] Jason Millar, 'Technology as Moral Proxy: Autonomy and Paternalism by Design' (2014) Proceedings of the 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering 128.

[51] Greene et al. (n 38) 4150.

[52] Conitzer et al. (n 39) 4831.

[53] Noothigattu et al. (n 8) 20.
Secondly, although the VBS does not produce moral judgments, it nevertheless claims to aggregate moral agents' judgments, which 'may result in a morally better system than that of any individual human, for example, because idiosyncratic moral mistakes made by individual humans are washed out in the aggregate'.[54] To refute this claim, let us focus on the approach underlying the MM. To solve an iconic problem of moral philosophy, Awad et al. opted for an unconventional approach, both psychological and descriptive. While philosophical reasoning consists in transforming opinions into knowledge through dialectical reflection and contradictory debate, the authors chose to infer general principles from aggregated a priori opinions, collected from individuals who had received no background with which to address these issues, nor contradictors to challenge their answers. This methodological choice to target people's prima facie perception of morality, rather than reasoned and informed moral decisions, results from the belief that, if philosophers have not been able to agree upon a solution yet, a consensus is not to be expected in the appropriate time.[55] The importance of finding the right decisions to fairly allocate harm in each dilemma situation is thus explicitly subordinated to the HMI.[56] The fact that Awad et al. do not seek moral rightness or fairness across results, but the widest social acceptance, is also illustrated by the practical strategies they suggest to persuade people to buy AVs and solve the social dilemma, including 'virtue signalling' and 'fear placebo'.[57]

[54] Conitzer et al. (n 39) 4834.

[55] Jean-François Bonnefon said that 'philosophers had the luxury not to solve the trolley dilemma. But today, we will not have to solve it but to find a solution with which we feel comfortable', my translation of 'Les philosophes avaient le luxe de ne pas résoudre le dilemme du tramway. Mais il va falloir aujourd'hui non pas le résoudre, mais trouver une solution avec laquelle nous sommes confortables'; Gregory Rozieres, 'Les voitures autonomes doivent-elles vous sacrifier pour sauver un enfant ou un chien ?' The Huffington Post (24 October 2018).

[56] 'Every day the adoption of autonomous cars is delayed is another day that people will continue to lose their lives to the non-autonomous human drivers of yesterday' (Shariff et al. (n 20) 696).

[57] Shariff et al. (n 20) argue that convincing people to buy AVs implies making them feel both safe and virtuous. Such results can be achieved by establishing 'virtue signalling' as the possibility for AV buyers to show off their virtuous consumption (i.e. buying AVs) to others via ostentatious signals, and by 'educat[ing] the public about the actual risks [...] in a calculated way', offering them 'fear placebo', defined as 'high-visibility, low-cost gestures that do the most to assuage the public's fears without undermining the real benefits that autonomous vehicles might bring' (ibid. 695).
However, people often change their minds about moral choices, whose volatility is highly and negatively correlated with the degree of information and deliberate reasoning from which they result. Imagine a journalist asking people on the street about their perception of the ideal income tax rate, without informing them that he was appointed by Congress to pilot a tax reform. Respondents may certainly give him a much lower rate than the present one. Not only are they abused by the journalist, who hides his survey's goal from them, making such answers illegitimate to use, but their answers do not even necessarily match their actual preferences. Once the survey is completed and the reform implemented, the same people might start complaining about the drastic loss of public services following the tax reform, arguing that they would have answered in favour of a higher tax rate had they been aware of the number of public services these taxes were funding, and taken more time to respond had they been aware of the consequences of their replies. The MM faces the same issue, because aggregating individual uninformed beliefs does not produce any common reasoned knowledge.

In response to this, Noothigattu et al. actually concede that 'Moral Machine users may be poorly informed about the dilemmas at hand, or may not spend enough time thinking through the options, potentially leading in some cases to inconsistent answers and poor models' but believe, though, that 'much of this noise cancels out in Steps III [Summarization] and IV [Aggregation]',[58] which is consistent with the idea that idiosyncratic moral mistakes made by individual humans are 'washed out in the aggregate'.[59] However, such aggregation does not reduce noise but only normalises answers around an average social belief, one which presents no guarantee of approximating the right choices. If we define a wrong answer as one that would change if the respondent were given enough time, information, and opportunity to debate against a challenging opponent, then these 'mistakes' are only washed out in the aggregate under two conditions: (a) the majority of people within the sample happen to be 'right' (which is impossible to falsify) and (b) respondents are not consistent in their wrong answers (which they actually are). Whether people are right or wrong when prioritising one category over another, they tend to stick to this rule across scenarios; they are 'wrong' about the general principle upon which their answers are based, but not about a particular answer. If we instead define mistakes as marginal inconsistencies between a respondent's replies (sometimes prioritising saving the many over the few, and sometimes not), then nothing can be inferred from them, as the asymmetry of replies may just as much result from a lack of attention as reflect an unaccounted factor that the simplistic design of the MM fails to identify. Additionally, whatever the aggregation process selected, the VBS necessarily remains limited by Condorcet's paradox and Arrow's impossibility theorem.

[58] Noothigattu et al. (n 8) 20.

[59] Conitzer et al. (n 39) 4834.
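The first of these classical limits is easy to exhibit on moral preferences. In the minimal sketch below, three hypothetical voters each hold a perfectly consistent ranking of outcomes A, B and C, yet pairwise majority aggregation, the basic building block of voting-based approaches, yields a cycle with no collectively 'most moral' option; the voter profiles are invented for illustration.

```python
from itertools import combinations

# Condorcet's paradox on three hypothetical 'moral rankings' of outcomes
# A, B and C: each voter is individually consistent, yet the pairwise
# majority relation is cyclic, so no aggregate best option exists.
voters = [
    ["A", "B", "C"],  # voter 1 prefers A > B > C
    ["B", "C", "A"],  # voter 2 prefers B > C > A
    ["C", "A", "B"],  # voter 3 prefers C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")
# Prints: A over B, C over A, B over C -- a majority cycle.
```

Whichever option such a system picks, a majority of its own training population prefers something else, which is precisely why no aggregation rule can be presented as extracting 'the' social moral judgment.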
In contrast, Greene et al. acknowledge that aggregating just preferences may lead to outcomes that follow no ethical principles or safety constraints, and therefore suggest fusing rather than aggregating values and preferences, combining hard constraints for basic ethical laws, with only two levels of satisfaction (yes or no), and relaxed constraints for preferences, which can be satisfied at several levels.[60] Such an approach is preferable because it implies defining specific basic ethical laws that need to be human-generated and rationally justified, and conversely because it permits the inclusion of utilitarian principles of preference maximisation under hard constraints of inviolable deontological rights. However, were a system to succeed in suggesting the closest approximation of a potential consensus within a social group, this nevertheless would not qualify its output as a moral decision. Considering a population struggling with a two-alternative scenario, where 50% of the population is in favour of option 1 and 50% in favour of option 2, a unique decision-maker may end up satisfying 100% or 0% of the population, according to the perceived legitimacy of the procedure it chooses.

[60] Greene et al. (n 38) 4147.
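As a rough illustration of this fusing idea, the sketch below treats basic ethical laws as pass/fail filters applied before any preference maximisation; the particular rule and the harm-reduction scores are invented placeholders, not Greene et al.'s actual constraints.

```python
# Hedged sketch of 'fusing' hard ethical constraints with relaxed
# preferences: hard rules are pass/fail filters, preferences are graded.
# The rule and the scores below are illustrative placeholders only.

candidates = ["keep_straight", "swerve_left", "swerve_right"]

hard_constraints = [
    # e.g. a human-generated, rationally justified inviolable rule
    lambda option: option != "swerve_right",
]

def preference_score(option):
    # Relaxed constraint: satisfied to a degree (e.g. expected harm reduction).
    scores = {"keep_straight": 0.4, "swerve_left": 0.7, "swerve_right": 0.9}
    return scores[option]

admissible = [o for o in candidates if all(rule(o) for rule in hard_constraints)]
decision = max(admissible, key=preference_score) if admissible else None
print(decision)  # 'swerve_left': best-scoring option among the admissible ones
```

Note that the highest-scoring option overall is discarded first: the hard constraint is not traded off against aggregate preference, which is exactly what distinguishes fusing from pure aggregation.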
4. The instrumentalisation of the ethical discourse

In this section, I refute the heart of the moral claim justifying the VBS, namely the lack of alternative solutions and the HMI, which together justify the authors' approach of solving AV dilemmas as a last resort. I then explain why the MM is not only fallacious but also dangerous for society, elaborating on both its psychological impact and its distracting effect.
4.1. Refutation of the highest moral imperative

So far, I hope to have demonstrated that using the MM data to develop a computational ethical decision-making system for normative ends, such as the VBS, is scientifically limited by the quality of the MM data. It is ontologically impossible because of the nature of such a system, which does not have the ability to make moral decisions. It is necessarily fallacious when aggregating uninformed beliefs to grant them an intersubjective common moral value. Finally, it is also dangerous, betraying the initial ambitions of the MM by closing the public debate it allegedly intended to open, using illegitimate data which was not collected for the purpose of solving the AV dilemmas. However, there is one argument that could still be raised to justify the use of VBS-like systems suggesting inadequate but not too shocking solutions to AV dilemmas in order to accelerate their deployment, and that is the HMI.

In defence of their approach, Noothigattu et al. assert that 'in their work on fairness in machine learning, Dwork et al. concede that, when ground-truth ethical principles are not available, we must use an "approximation as agreed upon by society". But how can society agree on the ground truth or an approximation thereof when even ethicists cannot?'.[61] This justification is to be refuted on three levels: firstly, because the work quoted has very little relevance for ethical considerations; secondly, because there is, in fact, a ground upon which ethicists do agree; and thirdly, because the underlying axiom justifying the need to develop an ethical decision-making system in the absence of univocal agreement about any ground truth is unacceptable.

At first, the work cited by Noothigattu et al. is supported by a poor theoretical grounding, merely mentioning a short definition of 'equality of opportunity' proposed by John Rawls, given out of context and without any further comment regarding Rawls's theory of justice.[62] In addition, the paper, written by researchers at Microsoft Research, is clearly not ethics-oriented but business-oriented, while citizens and legislators, when assessing the AV moral issues, may adopt a more ethics-oriented approach:[63]

'In keeping with the motivation of fairness in online advertising, our approach will permit [...] the vendor, as much freedom as possible, without knowledge of or trust in this party. This allows the vendor to benefit from investment in data mining and market research in designing its classifier, while our absolute guarantee of fairness frees the vendor from regulatory concerns.'

[61] Noothigattu et al. (n 8) 1.

[62] Cynthia Dwork and others, 'Fairness Through Awareness' (2011) Proceedings of the 3rd Innovations in Theoretical Computer Science Conference 214-26.

[63] Ibid. 1.
Secondly, it is true that philosophers still debate the priority between the moral obligation not to infringe individuals' rights and the moral permission to seek to save more lives rather than fewer, ceteris paribus. They however tend to agree on most of the other aspects of the dilemma,[64] especially in refusing unfairly discriminatory criteria, and mostly differ in their interpretation of the theoretical problem rather than on the principles upon which the decision should be made. Furthermore, there exist several ways to settle such disagreements, among them the law production process, which enables people to 'agree to disagree' in modern democracies. When investigating the dilemma of building a common identity in a multicultural state, Charles Taylor recognises the challenge of combining the need for strong popular cohesion around a common political identity, to develop social trust, with a multiculturalist condition to avoid the exclusion of minorities. Taylor comes to the conclusion that democratic regimes should be such that citizens are free not only because they take part in the decision-making unit by a vote equal to that cast by others, but also because they are included within a fair common discussion preceding the vote.[65] This is nothing else than the democratic way for people to agree to disagree in modern democracies. Conversely, because individuals may agree to disagree for good reasons, they may also disagree to agree for bad ones. This is the issue to which VBS-like initiatives are exposed. Whereas they are based on the assumption that training an algorithm on a sample of collected a priori uninformed inclinations, so as to identify the compromise with the greatest chances of being accepted by the population, may be a quicker way to bring AVs onto the roads than waiting for the outcome of a public debate, it could actually lead to the opposite result. In fact, while most people could, a priori, be in favour of a principle that seeks to save more lives rather than fewer, they may nonetheless reject it for the sole reason that it results from a procedure perceived to be imposed rather than legitimate, just as a court is often obliged to reject useful but unacceptable evidence when its sourcing is irregular.

Thirdly, it could be argued that some ends justify all means, and that the early deployment of AVs would be one of these. The HMI's backers include officials, such as Mark Rosekind, and manufacturers, like Tesla's CEO Elon Musk, but also academics, including Bonnefon et al.:[66]

'manufacturers and regulators will need to accomplish three potentially incompatible objectives: being reasonably consistent, not causing public outrage, and not discouraging buyers. Not discouraging buyers is a commercial necessity but it is also in itself a moral imperative, given the social and safety benefits AVs provide over conventional cars.'

[64] Considering the steering driver case, the only distinction between Thomson's and Foot's conclusions relates to the modality of the agent's moral commitment. The driver 'should' (moral obligation) turn the trolley for Foot, whereas he 'may' (moral permission) for Thomson (n 6) 206-7.

[65] Charles Taylor, 'Political Identity and the Problem of Democratic Exclusion' ABC Religion and Ethics (2016).

[66] Bonnefon et al. (n 26) 2.
Considering the size of the political and economic interests at stake behind AVs, together with the evidence of recklessness and outright negligence characterising the AV industry and self-driving tests,[67] it is not unreasonable to question the sincerity of such philanthropic ambitions. This, however, does not temper the need for a reasoned refutation of the HMI, which could be enunciated as follows: the fact that thousands of people die every year on the roads due to poor human driving skills justifies the existence of a moral obligation for car manufacturers to deploy AVs as soon as possible, and for regulators to authorise their marketing as soon as AVs can reduce the net balance of total annual deaths by one,[68] to implement regulation with low liability for manufacturers in case of accident in order to avoid discouraging them,[69] and even to prohibit the use of NAVs once AVs become safer than them.[70]

Let us firstly agree on the fact that although a private company's employees may have personal moral obligations conflicting with their activity in the company, as recently illustrated by Google's internal oppositions regarding the Maven, Jedi and Dragonfly projects,[71] the company itself is only legally bound by compliance with the law, and only ethically constrained by the provisions its shareholders have decided to include in its articles of association, specifically their social objects. Subsequently, unless Tesla, Waymo and Uber have explicitly included it in their statutes, they have no moral obligation to develop AVs to 'save' people who may die on the roads in the future.[72] Secondly, to be taken seriously, the HMI's backers would also need to explain either why car accidents are a less tolerable cause of death than starvation, or why hungry people's lives in developing countries are worth less than healthy people's in a developed country (even if unborn).[73] Much more significant and assured results in terms of the saving of life could indeed be reached by investing AVs' R&D budgets in feeding the 821 million undernourished people worldwide.[74] An alternative solution for governments would also consist in enforcing regulation lowering the authorised speed limits to 25 km/h on all roads, which could even result in a fairer society,[75] rather than allowing an unprecedented regulation gap.[76]

[67] Heather Somerville and David Shepardson, 'Uber Car's "Safety" Driver Streamed TV Show Before Fatal Crash: Police' Reuters (22 June 2018); Brian Merchant, 'The Deadly Recklessness of the Self-Driving Car Industry' Gizmodo (13 December 2018).

[68] Kalra and Groves (n 23).

[69] Hevelke and Nida-Rümelin (n 21); Bonnefon et al. (n 26).

[70] Sparrow and Howard (n 24).

[71] David Samuels, 'Is Big Tech Merging With Big Brother? Kinda Looks Like It' Wired (23 January 2019).

[72] It is one thing for a company to promote in its code of ethics the safety of its customers when using its products, and another to include the will to fight against road deaths in its statutes, orientating its activity toward this end.

[73] The HMI's supporters usually refer to all the future lives that could be saved, including those of people not yet born.

[74] Food and Agriculture Organisation, The State of Food Security and Nutrition in the World (2018) United Nations, 2.

[75] Ivan Illich, Energie et équité (Seuil, 1973).

[76] Joan Claybrook and Shaun Kildare, 'Autonomous Vehicles: No Driver... No Regulation?' (2018) 361(6397) Science 36.
Another argument against the claim that delaying AVs results in sacrificing lives[77] is given by Lin,[78] who relates the issue to Derek Parfit's non-identity problem.[79] Although AVs may halve the number of deaths due to car accidents, Lin says the people who will still die on the roads are unlikely to all be the same ones as those who would have died otherwise. In other words, the net total of 'lives saved' would remain positive if the introduction of AVs provoked 999 additional deaths while preventing 1,000, resulting in changing the identity of many victims. The equivalence of deaths presupposed by utilitarians may then be challenged when considering the net average level of responsibility. Assuming that 94% of NAV accidents are caused by human error[80] (i.e. almost all NAV accidents involve at least one person with some degree of responsibility) whereas 100% of AV accidents involve at least some fully innocent people (the AV passengers), we may rationally suppose that a major part of the 999 traded dead people would be less responsible for their own death than a majority of the 1,000. In fact, we surely concede that it would be unfair to save an at-fault drunk-driving person's life at the cost of AV passengers' lives.
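To make the arithmetic behind this objection explicit, the following back-of-the-envelope sketch (in Python, using the purely illustrative figures from the paragraph above) contrasts the marginal utilitarian gain with the shift in the victims' average responsibility. Treating every AV fatality as involving a fully innocent victim is a deliberately crude upper bound assumed for illustration, not an empirical claim.

    # Back-of-the-envelope sketch of Lin's non-identity trade-off.
    # All figures are the illustrative ones from the text, not estimates.

    deaths_prevented_by_avs = 1000   # NAV deaths that AVs would avoid
    deaths_caused_by_avs = 999       # additional deaths introduced by AVs

    # The utilitarian balance sheet: a net gain of one life.
    net_lives_saved = deaths_prevented_by_avs - deaths_caused_by_avs

    # 94% of NAV accidents are attributed to human error (the NHTSA figure
    # discussed in note 80), so most NAV victims bear some responsibility.
    share_nav_deaths_with_fault = 0.94
    innocent_nav_deaths = deaths_prevented_by_avs * (1 - share_nav_deaths_with_fault)

    # Crude upper bound assumed here: every AV fatality is a fully innocent
    # party (e.g. a passenger with no control over the vehicle).
    innocent_av_deaths = deaths_caused_by_avs

    print(f"Net lives saved: {net_lives_saved}")                           # 1
    print(f"Innocent deaths among the 1,000: ~{innocent_nav_deaths:.0f}")  # ~60
    print(f"Innocent deaths among the 999 (upper bound): {innocent_av_deaths}")

On these assumptions the aggregate count barely moves, while the proportion of victims bearing no responsibility for their own death rises sharply; that asymmetry is exactly what the utilitarian equivalence of deaths glosses over.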
Finally, unlike private companies, governments do have a political objective to promote safety on roads. However, the fact that some people make dangerous use of NAVs is insufficient to support the paternalistic measure to oblige all of them to adopt AVs, prohibiting the use of NAVs.[81] By comparison, while the duty for a government to maximise the safety of its soldiers is arguably more stringent than that toward motorists, it is however insufficient to prevail over other ethical considerations, as illustrated by the literature on lethal autonomous weapons systems. Therefore, my point here is not that AVs' deployment should be unnecessarily delayed, but that the instrumental use of moral considerations as leverage to develop a favourable regulation for manufacturers has no solid foundations.
75. Ivan Illich, Energie et équité (Seuil 1973).
76. Joan Claybrook and Shaun Kildare, 'Autonomous Vehicles: No Driver … No Regulation?' (2018) 361(6397) Science 36.
77. Bauman and Youngblood (n 22): 'What should we do today so that over time autonomous vehicles become as safe as possible as quickly as possible without sacrificing lives to get there?'
78. Patrick Lin, 'The Ethics of Saving Lives With Autonomous Cars is Far Murkier Than You Think', Wired (30 July 2013).
79. Derek Parfit, Reasons and Persons (Oxford University Press 1986).
80. It is worth noting that this number is cited by the great majority of AV-related papers, including official reports from the European Commission, and used as a ground truth to justify the HMI. Despite its great influence, no concern has been raised about the fact that it comes from a study conducted by the NHTSA (which is openly engaged in favour of AVs' development), six years ago, based on a narrow sample of 5,470 crashes, all located in the US, and which did not necessarily imply fatalities. Additional independent research should have been conducted to challenge these results.
81. Jean-Baptiste Jeangène-Vilmer, 'Terminator Ethics: faut-il interdire les "robots tueurs"?' (2014) 4 Politique étrangère 151, 163.
In contrast, the duty for governments to only allow AVs if they are implemented with fair moral principles to answer dilemma situations (a duty deriving from the foundation of their legitimacy, rooted in the necessity to respect people's individual rights, especially in terms of equality and non-discrimination) cannot be subordinated to the opportunity offered by AVs to save a number of people's lives when engaged in a dangerous driving activity they know to entail perils. As summed up by the German Ethics Commission, there is 'no ethical rule that always places safety before freedom'.[82] It would be wrong to believe that an old woman's rights would only be infringed if she happens to be involved in a dilemma situation where the AV is instructed to drive over her instead of a young boy because she is older. They would be violated every single day from the legal deployment of AVs implemented with such preferences, and she would be aware that her life is valued as less worthy than that of any younger person in society.
4.2. The negative impacts of the MM on society
It is tempting to fall into the trap of considering the MM without the VBS as a valuable experiment. Let us consider two strong negative effects it had on public opinion, one psychological and one distractive, that prove it wrong. The principle of the MM suggests that individuals' value of life varies with their characteristics and that the nine differentiation factors selected by Awad et al. are relevant to conduct life arbitrations. As demonstrated elsewhere,[83] not only are most of these criteria morally irrelevant (some of them even clearly unconstitutional), but the whole MM's characteristic-based approach is dangerous in itself, polarising the debate around erroneous AV dilemmas. The concrete damage deriving from the MM is then psychological, enforcing people's belief that it is acceptable and morally relevant to allocate death based on gender, weight, or social status. I had the opportunity to empirically observe these effects on my own students. While an overwhelming majority promptly raised their hands to arbitrate between men versus women or slim people versus less slim ones, most of them refused to answer when asked who should be saved between a black and a white person, or a Muslim versus a Christian.[84]
82. Federal Ministry of Transport and Digital Infrastructure (FMTDI), Ethics Commission on Automated and Connected Driving (2017) Report, 20.
83. Hubert Etienne, 'A Practical Role-Based Approach to Solve Moral Dilemmas for Self-Driving Cars' (forthcoming).
84. I do not think any experimental evidence is necessary here. For those who may disagree, I suggest that they should wonder whether the MM would have had the same reception from newspapers and public opinion if its authors had based their scenarios on ethnicity, skin colour or religion, rather than on gender, weight and social status.
Second, a side-effect of the MM's popularity was to distract from other first-order ethical issues. Some of them include preserving the integrity of embedded systems against hacking and threats of AV use for terrorist ends,[85] losing the possibility to react in critical situations (e.g. exceeding the authorised speed limit for medical emergencies, to escape impending aggression, or to run over a threatening gunman), and realising the importance for a country to have its own navigation systems for national independence. The impacts of AVs on the job market have also been raised, and millions of workers may soon be undergoing career shifts considering that about 11% of American workers drive as part of their work.[86] I would now like to elaborate on two major issues which have comparatively received very little attention so far: the forthcoming prohibition of NAVs and the building of an AV-based mass surveillance system.
While two scenarios coexist regarding the deployment of AVs, it is clear that a mixed-traffic scenario could be nothing other than a transitory step towards an inevitable fully autonomous traffic, alongside the prohibition of NAVs, as notably announced by Elon Musk.[87] Only such conditions would lead to the full extent of AVs' expected benefits, including the removal of redundant expensive signage, reducing the number of accidents, increasing the speed limit, closing the safety distance between cars, and improving traffic fluidity with intersection traffic management algorithmic regulators.[88]
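To give a concrete sense of the last of these benefits, here is a minimal, hypothetical sketch of the reservation idea behind such intersection managers, in the spirit of the autonomous intersection management work cited in note 88: vehicles request space-time slots in the intersection and are admitted only if their trajectory conflicts with no reservation already granted. The tile-based simplification and all names are illustrative assumptions, not the cited authors' algorithm.

    # Minimal sketch of a reservation-based intersection manager.
    # The (tile, timestep) discretisation and all names are illustrative
    # assumptions, not the algorithm from the cited paper.

    from dataclasses import dataclass, field

    @dataclass
    class Reservation:
        vehicle_id: str
        cells: set  # set of (tile, timestep) pairs the vehicle will occupy

    @dataclass
    class IntersectionManager:
        granted: list = field(default_factory=list)

        def request(self, vehicle_id: str, cells: set) -> bool:
            # Grant the slot only if it overlaps no existing reservation;
            # a rejected vehicle slows down and retries with later timesteps.
            for reservation in self.granted:
                if reservation.cells & cells:
                    return False
            self.granted.append(Reservation(vehicle_id, cells))
            return True

    # Two crossing trajectories over a tiled intersection:
    manager = IntersectionManager()
    print(manager.request("car_A", {("NE", 1), ("NW", 2)}))  # True: granted
    print(manager.request("car_B", {("NE", 1), ("SE", 2)}))  # False: tile NE at t=1 taken
    print(manager.request("car_B", {("NE", 3), ("SE", 4)}))  # True: retried later

Because slots are granted continuously rather than in fixed signal phases, vehicles rarely have to stop, which is where the claimed fluidity gains over traffic lights would come from.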
Such a prohibition may then arrive either from the law: governments would declare NAVs too dangerous,[89] justifying their removal for safety reasons, similarly to how many cities are progressively banning diesel and even fuel cars for ecological reasons, and considering that the 'zero death' objective claimed by both the US and EU authorities could only be achieved in a fully autonomous environment.[90] Or it could arrive through the market: insurance companies setting prohibitive prices for NAVs, while AV insurance is included in their sale price.[91]
85. Lin presents a convincing argument against the probability of AVs being weaponized for terrorist goals, highlighting their cost-inefficiency when explosive drones can achieve similar results. Although it is right that AVs do not represent a substantial new opportunity for terrorist groups, he may however underestimate the weight of symbols for such organizations, as the terror economy does not follow the same rules as the rational-agent-based modern economy (Patrick Lin, 'Don't Fear the Car Bomb', Bulletin of the Atomic Scientists (17 August 2014)).
86. David N Beede, Regina Powers and Cassandra Ingram, The Employment Impact of Autonomous Vehicles (US Department of Commerce, Economics and Statistics Administration, 2017).
87. Stuart Dredge, 'Elon Musk: Self-Driving Cars Could Lead to Ban on Human Drivers', The Guardian (London, 18 March 2015).
88. Tsz-Chiu Au and Peter Stone, 'Motion Planning Algorithms for Autonomous Intersection Management' (2010) Proceedings of the 1st AAAI Conference on Bridging the Gaps Between Task and Motion Planning 29.
89. Sparrow and Howard (n 24) 209.
90. Having progressively expelled old cars from Paris, the mayor recently decided to ban all diesel vehicles from circulating in the city as early as 2024, and all fuel cars by 2030. Such prohibitions for ecological reasons could absolutely be transposed to NAVs for safety motives, just as Bill Gates predicted (www.rtl.fr/actu/insolite/pour-bill-gates-conduire-sa-propre-voiture-sera-un-jour-illegal-7782354890).
The forthcoming prohibition of NAVs is problematic, as it will force people to significantly change the way they relate to their freedom of movement (9% of Americans do not want to ride an AV because they enjoy the physical act of driving[92]) and will put them in a situation of extreme surveillance.
AVs are equipped with an exhaustive range of sensors, including odometry, infrared and ultrasonic sensors, inertial and satellite navigation systems, as well as radars, lidars and cameras. The information they capture is relayed to the manufacturer's network as part of a continuous flow allowing the sharing of traffic information in real time. This raises major privacy issues (notably the images captured by internal and external cameras together with positioning information) and unprecedented concerns for individuals' surveillance. These concerns would be even greater should this data be made available to public authorities, as proposed by the European Commission:[93]

    as some of the data generated by vehicles may be of public interest, the Commission will consider the need to extend the right of public authorities to have access to more data. In particular, it will consider specifications under the Intelligent Transport Systems Directive regarding the access to data generated by vehicles to be shared with public authority.
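To see why this flow is so sensitive, consider a purely hypothetical sketch of the kind of record such a telemetry stream might carry; the schema and field names are assumptions made for illustration, not any manufacturer's actual format.

    # Purely illustrative sketch of a single AV telemetry record; the
    # schema is a hypothetical assumption, not a real manufacturer format.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class TelemetryRecord:
        vehicle_id: str                    # stable identifier linking every trip to one car
        timestamp: float                   # time of the reading
        gps_position: Tuple[float, float]  # latitude/longitude: a continuous location trail
        speed_kmh: float
        exterior_frames: List[bytes]       # may capture bystanders' faces and number plates
        interior_frames: List[bytes]       # may capture the passengers' identities
        cabin_audio: bytes                 # vocal-assistant input: conversations in the cabin

Even stripped of images and audio, the (vehicle_id, timestamp, gps_position) triple alone reconstructs a complete movement history, which is why the question of who can query this stream, and behind what safeguards, matters at least as much as whether public authorities receive any of it.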
AV data sharing with public entities is unavoidable, both because there will come a time when a common authority will be needed for traffic management purposes (e.g. intersection traffic management), and because all public transportation services expect an autonomous future. Therefore, the real danger does not relate to public authority access to AV data, but rather to the strength of the wall securing this data in the hands of an independent authority and preventing national security agencies from accessing it for surveillance purposes. There is not much doubt that AV sensors will be tied to the existing 176-million-strong public camera network using facial recognition to monitor the Chinese population. Even in France, often cited as one of the most protective countries regarding privacy and personal data, facial recognition experiments in public spaces for security purposes were recently authorised in Nice. The mayor, Christian Estrosi, deployed facial recognition systems in the city's public camera network to track 'all comings and goings, on public transit, arteries, public places' of a list of individuals identified as potential threats to state security, arguing that 'we should put all the possible innovations at the service of our security'.[94]
91. This option was already tested by Tesla in Asia (Danielle Muoio, 'Tesla Wants to Sell Future Cars With Insurance and Maintenance Included in the Price', Business Insider France (Paris, 23 February 2017)).
92. Aaron Smith and Monica Anderson, 'Americans' Attitudes Toward Driverless Vehicles' (Pew Research Center, 2017).
93. European Commission (n 4) 13.
94. My translation of 'pouvoir suivre toutes les allées et venues, dans les transports en commun, dans les artères, dans les lieux publics, des individus en question' and 'Nous devons utiliser toutes les innovations possibles au service de notre sécurité' (Agence France Presse, 'Nice va tester la reconnaissance faciale sur la voie publique', Le Monde (Paris, 18 February 2019)).
Combining the exterior surveillance of the streets with the interior surveillance of the passengers through embedded vocal assistants, AVs may then become governments' eyes and ears. They may ultimately become their hands: as a police interception tool to take control over fugitives' vehicles, as a discreet means to eliminate dissidents, driving them to unfrequented places to be assassinated without witnesses, or as weapons to stop public enemies. With regard to the latter, consider the following situation: Leo is leaving South Manhattan in his Tesla AV, making his way to the Sunday family brunch at his parents' place in Farmington, Connecticut. Meanwhile, the FBI has been notified that Alex, a young radicalised man whose track had been lost some days ago, is about to commit a terrorist attack on the Queensboro Bridge, exceptionally crowded because of the Marathon. The FBI has no information on Alex's position and no time for investigation; they call the New York central station of traffic management for help, which identifies Alex from Leo's Tesla cameras, only one street away from the bridge. On the FBI's order, Leo's Tesla suddenly accelerates, climbs onto the sidewalk and fatally hits Alex. Without his consent, Leo's private car has just become a national security weapon.
5. Conclusion
AVs are equipped with several of the most promising applications in AI, and their development will result in profound ethical, social, political and economic impacts on the lives of billions of people. They can deservedly be considered as the ethical challenge of the century in AI Ethics, in the sense that the way their underlying issues will be approached and settled will certainly mark jurisprudence, giving a direction to the development of the discipline. This is precisely why it is important to resist the sirens of the market calling for emergency responses. Although computational approaches may not be abandoned, they should however be deployed with greater prudence, to inform human choices rather than to substitute for them. Here again, there is a high risk of ceding to the temptation of using them to solve complex social decisions, short-circuiting public consultation and producing an irresponsible non-human ethics, incapable of consistently explaining its choices and justifying its legitimacy. Such a threat is ironically captured by the fresco of Cesare Maccari chosen by Noothigattu et al. to illustrate the VBS project's webpage,[95] which at first glance depicts an orator discoursing in front of a chamber of representatives, but actually represents Cicero denouncing Catiline's plotting to the Senate and its dangers for the Roman Republic.[96]
95. www.media.mit.edu/projects/a-voting-based-system-for-ethical-decision-making/overview/.
96. Cicerone denuncia Catilina, fresco by Cesare Maccari, 1889, Palazzo Madama, Rome.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes on contributor
Hubert Etienne is a French philosopher conducting research in AI ethics and computational social sciences at Facebook AI Research and Ecole Normale Supérieure. He is a lecturer in Data Economics at HEC Paris, a lecturer in AI Ethics at Sciences Po, ESCP Europe and Ecole Polytechnique, as well as a research associate at Oxford University's Centre for Technology and Global Affairs.
ORCID
Hubert Etienne http://orcid.org/0000-0002-7884-1614