Autonomous vehicles and moral uncertainty

Abstract

Our chief purposes in this chapter are to motivate the problem of moral uncertainty as it pertains to autonomous vehicles and to outline possible solutions. The problem is the following: How should autonomous vehicles be programmed to act when the person who has the authority to decide the ethics of the autonomous vehicle is under moral uncertainty? Roughly, an agent is morally uncertain when she has access to all (or most) of the relevant non-moral facts, including but not limited to empirical and legal facts, but still remains uncertain about what morality requires of her. We argue that the problem of moral uncertainty in the context of autonomous vehicles is an important problem and then critically engage with two solutions to the problem. We conclude by discussing a solution that we think is more promising—that of the philosopher Andrew Sepielli—and offer some support in its defense.
2C30B.3A1 Template Standardized 07-07-2016 and Last Modified on 31-03-2017
AUTONOMOUS VEHICLES AND MORAL
UNCERTAINTY
Vikram Bhargava and Tae Wan Kim
Autonomous vehicles have escaped the confines of our imaginations and found their way onto our roadways. Major companies, including GM, Nissan, Mercedes-Benz, Toyota, Lexus, Hyundai, Tesla, Uber, and Volkswagen, are developing autonomous vehicles. Tech giant Google has reported that the development of autonomous vehicles is among its five most important business projects (Urmson 2014). Autonomous cars are here, not going away, and are likely to become increasingly prevalent (Fagnant and Kockelman 2013; Goodall 2014a, b).
As the number of autonomous vehicles on roadways increases, several distinctively philosophical questions arise (Lin 2015):
Crash: Suppose a large autonomous vehicle is going to crash
(perhaps due to hitting a patch of ice) and that it is on its way
to hitting a minivan with ve passengers head on. If it hits the
minivan head on, it will kill all ve passengers. However, the
autonomous vehicle recognizes that since it is approaching an
intersection, on the way to colliding with the minivan it can
swerve in such a way that it rst collides into a small roadster,
thus lessening the impact on the minivan. is would spare the
minivan’s ve passengers, but it would unfortunately kill the
one person in the roadster. Should the autonomous vehicle be
programmed to rst crash into the roadster?
This scenario of course closely resembles the famous trolley problem (Foot 1967; Thomson 1976).1 It also raises a question at the intersection of moral philosophy, law, and public policy that is unique to autonomous vehicles. The question is, who should be able to choose the ethics for the autonomous vehicle—drivers, consumers,
OUP UNCORRECTED PROOF – FIRSTPROOFS, Fri Mar 31 2017, NEWGEN
oso-9780190652951.indd 5 3/31/2017 8:19:26 AM
passengers, manufacturers, programmers, or politicians (Lin 2015; Millar 2014)?2
There is another question that arises even once we settle who ought to be able to choose the ethics for the autonomous vehicles:
The Problem of Moral Uncertainty: How should autonomous vehicles be programmed to act when the person who has the authority to choose the ethics of the autonomous vehicle is under moral uncertainty?
Roughly, an agent is morally uncertain when she has access to all (or most) of the relevant non-moral facts, including but not limited to empirical and legal facts, but still remains uncertain about what morality requires of her. This chapter is about how the person who is ultimately chosen to make the decisions in the Crash scenario can make appropriate decisions when in the grip of moral uncertainty. For simplicity’s sake, in this chapter we assume this person is a programmer.
Decisions are often made in the face of uncertainty. Indeed, there is a vast literature on rational decision-making under uncertainty (DeGroot 2004; Raiffa 1997). However, this literature focuses largely on empirical uncertainty. Moral uncertainty, on the other hand, has received vastly less scholarly attention. With advances in autonomous vehicles, addressing the problem of moral uncertainty has new urgency. Our chief purpose in this chapter is a modest one: to explore the problem of moral uncertainty as it pertains to autonomous vehicles and to outline possible solutions to the problem. In section 1.1, we argue that the problem is a significant one and make some preliminary remarks. In section 1.2, we critically engage with two proposals that offer a solution to the problem of moral uncertainty. In section 1.3, we discuss a solution that we think is more promising, the solution provided by the philosopher Andrew Sepielli. In section 1.4, we offer some support in its defense. We conclude in section 1.5.
1.1 Motivation and Preliminaries
Let’s return to the Crash scenario. Suppose Tegan, a programmer tasked with deciding the appropriate course of action in the Crash scenario, thinks she should program the autonomous vehicle to collide into the roadster on the way to the minivan, under the consequentialist rationale that the roadster has fewer passengers than the minivan. She hesitates because she recalls her ethics professor’s deontological claim that doing so would be seriously wrong:3 it would use the one passenger in the roadster as a mere-means in a way that is morally impermissible. Tegan is not persuaded by her professor’s prior guidance and gives 90% credence (subjective probability) to the view that she should program the vehicle to first crash into the roadster; she gives only 10% credence to her professor’s conclusion that she should not crash into the roadster on the way to the minivan.4
From Tegan’s own perspective, how should she program an autonomous vehicle to deal with Crash-like scenarios? Tegan faces the problem of moral uncertainty.
Caveat: Any real driving situation an autonomous vehicle will face is likely to be much more complicated than the foregoing scenario. Nevertheless, the scenario contains enough relevant factors for our purposes. For Tegan, moral arguments are the source of normative uncertainty, but it is worth noting that other types of normative views (e.g., legal, cultural, religious) can play similar roles, creating prescriptive uncertainty. Also, Tegan faces just two competing arguments, while many decision makers can face more than two. For simplicity’s sake, however, we discuss scenarios in which two arguments are competing, although some of the frameworks we discuss can, in principle, accommodate scenarios with more than two competing arguments.
The problem of moral uncertainty derives primarily from the following conditions: (1) the two normative propositions corresponding to Tegan’s decision—“I should program the vehicle to crash into the roadster” and “I should not program the vehicle to crash into the roadster”—are mutually exclusive, and (2) Tegan’s credence is divided between the two propositions—she is uncertain about which of the two propositions is true (Sepielli 2009).5 Put another way: even if Tegan is certain about a range of empirical facts, she may still remain uncertain about the reasons that those very facts give her with respect to what to do (Sepielli 2009).
We acknowledge that a solution to Crash is not complete unless we offer a plausible framework of decision-making under empirical uncertainty (e.g., Hare 2012). We assume for now that the solution we discuss can coherently be combined with the best account of decision-making under empirical uncertainty—ascertaining whether this assumption is in fact true is a promising avenue of future research.6
Tegan requires a meta-normative framework to adjudicate between the normative prescriptions of competing moral theories. In this chapter, we argue for the importance of such a framework and encourage scholars of robot ethics to pay more attention to the problem of moral uncertainty. But first, it is worth dealing with a thought that might be lurking in some readers’ minds, namely, that the very notion of moral uncertainty is a misguided one. Specifically, some readers might be thinking that of the two competing moral arguments, only one of them is right. Therefore, there is no uncertainty ab initio and the problem is only apparent. For instance, if it is in fact morally true that one should never use another as a mere-means, then Tegan should not program the car to first crash into the roadster.
We agree that there may be no moral uncertainty from the perspective of objective reason or objective “should” (Harman 2015). Moreover, we do not deny the importance of figuring out the objectively correct answer, assuming
one exists—that is, that ethics is not relative. But it is important to note that the aforementioned concern is consistent with the view we defend. Though programmers should strive to ascertain the objectively correct answer, this does not eliminate the fact that a decision might have to be made prior to one’s having secured the objectively correct answer. Tegan, in the above scenario, cannot make her decision purely on the basis of objective reasons, since her doxastic state is already plagued with moral uncertainty. Yet she needs to decide how to program the car. The given reality is that Tegan cannot help but base her decision on her degree of belief in a moral view—that is, from her representation of the objective “should” (Sepielli 2009). Thus Tegan ought to make the best decision given her degree of belief in the relevant normative prescription. So Tegan requires an additional decision framework, one that is not designed primarily for objective reason or objective “should”—that is, a framework that begins with her own uncertain normative beliefs but still helps her make more appropriate and rational decisions.
1.2 Two Possibilities
We now consider (and ultimately reject) two proposals for making decisions under moral uncertainty. The first proposal—the “Continue Deliberating” view—suggests that Tegan should not make a decision; instead, she should continue deliberating until she figures out what morality requires of her. We are sympathetic to this position. Indeed, we too think that programmers should continue to deliberate about moral problems insofar as they are able. Nevertheless, we believe that there are circumstances in which programmers may lack the luxury of time or resources to continue deliberating but must nevertheless decide how to act. Tegan might deliberate for some time, but she cannot put all of her time and effort into figuring out what to do in Crash and will need to make a decision soon enough.
Perhaps more important, continuing to deliberate is in some contexts, in effect, making a decision about one of the very choices the programmer is uncertain about. For example, if Tegan opts not to program the autonomous vehicle to first swerve into the roadster, she in effect already commits to the prescription of one of the moral views. That is, if she decides not to program the autonomous car to first swerve into the roadster, she rejects the prescription of the consequentialist view and allows more people to be killed. Inaction is often a choice, and it is typically a choice of the status quo. The “Continue Deliberating” view lacks the resources to explain why the existing state of affairs is the most appropriate choice.
The second proposal—call it the “My Favorite Theory” view—is an initially tempting response to moral uncertainty (Gustafsson and Torpman 2014). That
is, do what the conclusion of the normative argument you think is most likely to be correct tells you to do. For instance, if Tegan thinks the consequentialist prescription to crash into the roadster is most likely true, then she should program the car to do just that. But this view has some problems, an analysis of which will yield an important condition any adequate solution to the problem of moral uncertainty must meet. We can better understand this problem by considering a parallel problem in empirical uncertainty (Sepielli 2009). Consider a hypothetical variant of the real case of the Ford Pinto:
Pinto:e CEO of Ford is deciding whether to authorize the sale of its
recently designed hatchback car, the Pinto. She is not sure how to act,
because she is empirically uncertain about how the Pinto’s fuel tank will
aect the life of its drivers and passengers. Aer reading a crash- test
report, she has a 0.2 degree of belief that the Pinto’s fuel tank will rupture,
causing a potentially fatal re if the car is rear- ended, and a 0.8 degree of
belief that there will not be any such problems. inking she should go
with what she thinks is most likelythat there will not be any problems
with the fuel tank— she authorizes the sale of thePinto.
Here the CEO clearly makes a poor choice. One cannot simply compare 0.2 and 0.8. One must consider the value of the outcomes. Of course, car designs cannot be perfect, but a 20% probability of a life-threatening malfunction is obviously too high. The CEO failed to weigh the consequences of the actions by their respective probabilities. If she had taken into consideration the value of the outcomes, it would not have made sense to authorize the sale of the Pinto. A similar problem applies in the moral domain—the weight of the moral value at stake must be taken into consideration.
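The probability-weighted comparison the Pinto case calls for can be sketched in a few lines of code. Only the 0.2/0.8 credences come from the scenario; the numerical outcome values below are our own illustrative assumptions, not figures from the case.

```python
# Expected-value reasoning for the Pinto case: weigh each outcome's value
# by its probability instead of comparing the raw probabilities 0.8 vs. 0.2.
# The outcome values (-1000 for a fatal fire, +10 for a trouble-free sale,
# relative to 0 for not selling) are invented purely for illustration.

def expected_value(outcomes):
    """outcomes: iterable of (probability, value) pairs; probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

sell = expected_value([(0.2, -1000.0), (0.8, 10.0)])  # roughly -192
dont_sell = 0.0  # baseline: do not authorize the sale

decision = "authorize sale" if sell > dont_sell else "do not authorize sale"
print(sell, decision)
```

On these assumed numbers the 20% chance of catastrophe dominates, so the expected-value comparison recommends against the sale even though "no problems" is the single most likely outcome, which is just the point the case is meant to illustrate.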
For instance, returning to the situation Tegan faces, even if she thinks that the proposition “I should program the vehicle to crash into the roadster” is most likely true, it would be a very serious wrong if the competing proposition, “I should not program the vehicle to crash into the roadster,” is correct, since treating someone as a mere-means would be a serious deontic wrong. In other words, though Tegan thinks that her professor’s view is mistaken, she recognizes that if her professor’s arguments are correct and she nevertheless programs the car to first crash into the roadster, then she would commit a serious deontological wrong.
As such, an adequate solution to the problem of moral uncertainty must take into account the moral values associated with the particular normative propositions, weighted by their respective probabilities, not merely the probability that the normative proposition in question is true. Another way to put the point is that a programmer, in the face of moral uncertainty, must hedge against the view
with the greater moral value at stake or meet what we shall call the “expected moral value condition,” which we apply to programmers in the following way:
The Expected Moral Value Condition: Any adequate solution to the problem of programming under moral uncertainty must offer the resources by which a programmer can weigh the degree of moral harm, benefit, wrong, right, good, bad, etc., by their relevant probabilities.7
On an account of moral uncertainty that meets this condition, there may very well be instances when a programmer should act according to what she considers the less probable normative view, because the less probable normative view has something of significant moral value at stake. But we’ve gotten ahead of ourselves. Making this sort of ascription requires being able to compare moral value across different moral views or theories. And it is not obvious how this can be meaningfully done. In the next section, we will elucidate what we think is a promising approach for making comparisons of moral value across different moral views.
1.3 An Expected Moral Value Approach
Suppose Tegan is convinced of the importance of weighing the moral value at stake and decides she wants to use an expected moral value approach to do so.8 In other words, Tegan must figure out the expected moral value of the two mutually exclusive actions and choose the option that has the higher expected moral value. But she soon realizes that she faces a serious problem.
Tegan might have some sense of how significant a consequentialist good it is to save the five lives in the minivan, and she also might have some sense of how significant a deontological bad it is to use another as a mere-means (namely, the person in the roadster); but still, troublingly, she may not know how the two compare. It is not clear that the magnitude of the moral value on the consequentialist view is commensurable with the magnitude of the moral value on the deontological view. This has been called the “Problem of Inter-theoretic Value Comparisons” (PIVC) (Lockhart 2000; Sepielli 2006, 2009, 2013; MacAskill, forthcoming).
The PIVC posits that moral hedging requires comparing moral values across different normative views. And it is not obvious that this can be done. For example, it is not clear how Tegan can compare the consequentialist value of maximizing net lives saved with the deontic wrong of using someone as a mere-means. Neither consequentialist views nor deontological views themselves indicate how to make inter-theoretic comparisons. Any adequate expected value proposal must explain how it will handle this problem.9
Although the PIVC is thorny, we maintain it can be overcome. We find Sepielli’s works (2006, 2009, 2010, 2012, 2013) helpful for this purpose. Significantly, Sepielli points out that just because two things cannot be compared under one set of descriptions does not mean they cannot be compared under an analogous set of re-descriptions. Sepielli’s nuanced body of work is more complex than the following three steps that we articulate. Still, these three steps form the essence of what he calls the “background ranking approach” and are also helpful for our investigation.10 Using the background ranking approach to solve the problem of inter-theoretic value comparisons is simple at its core, but its execution may require practice, much as with employing other rational decision-making tools.
Step 1: The first step involves thinking of two morally analogous actions to programming the vehicle to crash into the roadster. The first analogy should be such that, if the moral analogy were true, it would mean that crashing into the roadster is better than not crashing into the roadster. The second analogy should be such that, if the moral analogy were true, it would mean that not crashing into the roadster is better than crashing into the roadster. Suppose the first analogy to crashing into the roadster is donating to an effective charity that would maximize lives saved, instead of donating to a much less effective charity that would save many fewer lives. (In this case, the analogy is in line with the consequentialist prescription. It is a decision strictly about maximizing net lives saved.) Call this the “charity analogy.” Suppose the second analogy to crashing into the roadster is a doctor killing a healthy patient so she could extract the patient’s organs and distribute them to five other patients in vital need of organs. (In this case, the analogy is in line with the deontological prescription of not using a person as a mere-means.) Call this the “organ extraction analogy.” Note that performing this step may require some moral imagination (Werhane 1999), skill in analogical reasoning, and perhaps even some familiarity with what the moral literature says on an issue.
Step 2: Identify one’s credence in the two following mutually exclusive propositions: “I should program the vehicle to crash into the roadster” and “I should not program the vehicle to crash into the roadster.” As stated earlier, Tegan has a 0.9 credence in the first proposition and a 0.1 credence in the second proposition.
Step 3: The third step involves identifying the relative differences in the magnitude of the moral value between the two propositions from Step 2, on the assumption that each of the analogies from Step 1 holds. Let’s call the difference in the moral value of programming the vehicle to crash into the roadster versus not doing so, given the charity analogy is true, “W.” Suppose then that the difference in moral value of programming the vehicle to crash into the roadster versus not doing so, given the organ extraction analogy is true, is “50W” (i.e., the difference in moral value is fifty times that of the difference in moral value associated with the charity analogy).
Keep in mind that Tegan can do this because her views about the relative differences in the magnitude of the moral value between the two propositions, conditional on each analogy holding true, can be independent of her beliefs about the two mutually exclusive prescriptions of “I should program the vehicle to crash into the roadster” and “I should not program the vehicle to crash into the roadster.” As Sepielli notes, “Uncertainty about the ranking of a set of actions under one set of descriptions in no way precludes certainty about the ranking of the same set of actions under a different set of descriptions. That every action falls under infinite descriptions gives us a fair bit of room to work here” (2009, 23). The fact that one can make this sort of comparison through analogical reasoning is an important feature of the background ranking procedure.
One might reasonably wonder where “fifty” (in 50W) came from. We have admittedly imputed this belief to Tegan in an ad hoc manner. However, as we noted, given that the source of uncertainty for Tegan is regarding the decision to program the car to first crash into the roadster or not, we think it is indeed plausible that Tegan may have a good sense of how the other analogies we have introduced fare against each other.
These three steps capture the essence of the background ranking procedure. Now we are in a position to perform an expected moral value calculation:
(1) Tegan’s credence in the proposition, “I should program the autonomous vehicle to crash into the roadster” [insert value] × the difference in the magnitude of the moral value between crashing into the roadster and not crashing into the roadster, on the condition that the charity analogy holds [insert value]
(2) Tegan’s credence in the proposition, “I should not program the autonomous vehicle to crash into the roadster” [insert value] × the difference in the magnitude of the moral value between crashing into the roadster and not crashing into the roadster, on the condition that the organ extraction analogy holds [insert value]
which is
(1) (0.9)(W) = 0.9W
(2) (0.1)(50W) = 5W
Finally, to determine what Tegan should do, we simply take the difference between the expected value of programming the vehicle to crash into the roadster (0.9W) versus not programming the vehicle to do so (5W). When the value is positive, she should program the vehicle to crash into the roadster, and when it is negative, she should not. (If the difference is zero, she could justifiably choose either.) Given that 0.9W − 5W is a negative value (−4.1W), from Tegan’s perspective, the appropriate choice is “I should not program the autonomous vehicle to crash into the roadster.”
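The three-step calculation above can be condensed into a short sketch, working in units of W (the inputs are Tegan’s credences and conditional value differences from the text; the function name is ours):

```python
# A minimal sketch of the background-ranking expected moral value comparison.
# All magnitudes are in units of W, the moral-value difference between the two
# options conditional on the charity analogy holding.

def net_expected_moral_value(credence_crash, value_if_crash_view,
                             credence_dont, value_if_dont_view):
    """(expected moral value of crashing) minus (expected moral value of not).

    value_if_crash_view: value difference favoring crashing, conditional on
    the charity analogy holding (here W = 1).
    value_if_dont_view: value difference favoring NOT crashing, conditional on
    the organ extraction analogy holding (here 50W = 50); it counts against
    crashing, hence the subtraction.
    """
    return credence_crash * value_if_crash_view - credence_dont * value_if_dont_view

net = net_expected_moral_value(0.9, 1.0, 0.1, 50.0)  # 0.9W - 5W, roughly -4.1W
if net > 0:
    choice = "crash into the roadster"
elif net < 0:
    choice = "do not crash into the roadster"
else:
    choice = "either is justifiable"
print(net, choice)
```

The sign convention does the hedging: even a 0.1 credence in the deontological view dominates once its fifty-fold stake is weighed in.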
We recognize that the proposal we have just articulated has problems (some readers might think serious problems). For instance, some readers may find it objectionable that the procedure does not always prevent one from arriving at intuitively wrong outcomes. This unseemly feature is inherent to any rational decision procedure that incorporates one’s subjective inputs. However, it is important to note that we do not claim that the expected moral value approach guarantees the moral truth from the view of objective reason. We aim only to show that the expected value approach offers rational decision-making guidance when the decision maker must make a decision in her current doxastic state.
Perhaps most worrisome is that the procedure on offer might strike some as question-begging. That is, some readers might think that it presupposes the truth of consequentialism. But this is not quite right, for two reasons. First, an assumption underlying this worry is that a view that tells in favor of “numbers mattering,” as it were, must be a consequentialist view. This assumption is untenable. In the trolley problem, for instance, a Kantian can coherently choose to save more lives because she believes doing so is the best she can do with respect to her deontological obligations (Hsieh, Strudler, and Wasserman 2006). Likewise, the fact that the procedure we defend employs an expected value approach need not mean it is consequentialist. It can be defended on deontological grounds as well.
More fundamentally, the procedure we defend is a meta-normative account that is indifferent to the truth-value of any particular first-order moral theory. This brings us to the second reason the question-begging worry is not problematic. The approach we defend concerns what the programmer all things considered ought to do—what the programmer has strongest reason to do. This sort of all-things-considered judgment of practical rationality incorporates various types of reasons, not exclusively moral ones.
A range of reasons impact what one has strongest reason to do: self-interested, agent-relative, impartial moral, hedonistic, and so on. The fact that moral reasons favor Φ-ing does not settle the question of whether Φ-ing is what one ought to do all things considered. As Derek Parfit notes:
When we ask whether certain facts give us morally decisive reasons not to act in some way, we are asking whether these facts are enough to make this act wrong. We can then ask the further question whether these morally decisive reasons are decisive all things considered, by outweighing any conflicting non-moral reasons. (Forthcoming)
What practical rationality requires one to do will turn on a range of normative reasons, including moral ones but not only moral ones. It would indeed be worrisome, and potentially question-begging, if we were claiming that the procedure provides the programmer with morally decisive guidance. But this is not what we are doing. We are concerned with helping the programmer determine what the balance of reasons favors doing—what the programmer has strongest reason to do—and this kind of judgment of practical rationality turns on a range of different kinds of reasons (Sepielli 2009).
In sum, what practical rationality requires of the programmer is distinct from what the programmer ought to believe is wrong, and even from what the programmer has most moral reason to do. The procedure on offer helps the programmer reason about her moral beliefs, and this in itself is not a moral judgment (though what the programmer ought to do all things considered could, of course, coincide with what she has most moral reason to do). The procedure we are defending can help the programmer figure out what the balance of reasons favors doing—what she has strongest reason to do all things considered.11
. ABrief Moral Defense and Remaining Moral
Objections
While the procedure itself concerns practical rationality, we nevertheless think that there may be good independent moral reasons that favor the programmer’s using such a procedure in the face of moral uncertainty. First, an expected value approach can help a programmer to avoid acting in a morally callous or indifferent manner (or at least to minimize the impact of moral indifference). If the programmer uses the expected value procedure but arrives at the immoral choice, she is less blameworthy than had she arrived at the immoral choice without using the procedure. This is because using the procedure displays a concern for the importance of morality.
If Tegan, in the grip of moral uncertainty, were to fail to use the expected value procedure, she would display a callous disregard for increasing the risk of wronging other human beings.12 As David Enoch says, “[I]f I have a way to minimize the risk of my wronging people, and if there are no other relevant costs, why on earth wouldn’t I minimize this risk?” (2014, 241). A programmer who in the face of moral uncertainty chooses to make a decision on a whim acts recklessly. Using the expected moral value approach lowers the risk of wronging others.
A second moral reason for using an expected value approach to deciding under moral uncertainty is that it can help the programmer embody the virtue of humility (e.g., Snow 1995). Indeed, as most professional ethicists admit, moral matters are deeply complex. A programmer who in the grip of moral uncertainty insists on using the “My Favorite Theory” approach fails to respect the difficulty
of moral decision making and thereby exhibits a sort of intellectual chauvinism. Karen Jones’s remark seems apt: “If it is so very bad to make a moral mistake, then it would take astonishing arrogance to suppose that this supports a do-it-yourself approach” (1999, 66–67). The fact that a programmer must take into account the possibility that she is mistaken about her moral beliefs builds a kind of humility into the very decision procedure.
Some may worry that a programmer who uses the expected moral value approach is compromising her integrity (Sepielli, forthcoming). The thought is that the programmer who follows the approach will often have to act in accordance with a normative view that she thinks is less likely correct. And acting in accordance with a theory one thinks less likely to be correct compromises one’s integrity. Though this line of thought is surely tempting, it misses its mark. The value of integrity is something that must be considered along with other moral values. And how to handle the importance of the value of integrity is again a question that may fall victim to moral uncertainty. So the issue of integrity is not so much an objection as it is another consideration that must be included in the set of issues a programmer is morally uncertain about.
Another objection to a programmer using an expected value procedure holds
that the programmer would forgo something of great moral importance— that
is, moral understanding. For instance, Alison Hills (2009) claims that it is not
enough to merely make the right moral judgments; one must secure moral under-
standing.13 She argues that even reciting memorized reasons for the right actions
will not suce. Instead, she claims, one must develop understanding— that is,
roughly, the ability to synthesize the moral concepts and apply the concepts in
other similar contexts. And clearly, a programmer who is inputting her credences
and related normative beliefs into an expected moral value procedure lacks the
sort of moral understanding that Hills requires.
But this sort of objection misses the point. First, it is important to keep in
mind that while the outcome of the procedure might not have been what a pro-
grammer originally intended, it is the programmer herself who is deciding to use
the procedure that forces her to consider the moral implications of the possibility
of deciding incorrectly. Second, it would indeed be the ideal situation to develop
moral understanding, fully exercise one’s autonomy, and perform the action that
the true moral view requires. However, we agree with Enoch, who aptly notes:
Tolerating a greater risk of wronging others merely for the value of moral
autonomy and understanding is thus self- defeating, indeed perhaps even
practically inconsistent. Someone willing to tolerate a greater risk of act-
ing impermissibly merely in order to work on her (or anyone else’s) moral
understanding, that is, independently of the supposed instrumental
payos of having more morally understanding people around, is acting
wrongly, and indeed exhibits severe shortage in moral understanding (of
the value of moral understanding, among other things). (2014, 249)
One can imagine how absurd it would sound if a programmer who decided
on her own, and in doing so wronged someone, were asked by the person she
wronged why she did not attempt to lower the risk of wronging another, and
she responded, "Well, I wanted to exercise my personal autonomy and work on
my moral understanding."14 Such a response would be patently offensive to the
person who was wronged, given that the programmer did have a way to lower
her risk of wronging another.
Conclusion
In this chapter we aimed to show that programmers (or whoever will ultimately
be choosing the ethics of autonomous vehicles) are likely to face instances where
they are in the grip of moral uncertainty and require a method to help them
decide how to appropriately act. We discussed three proposals for coping with
this uncertainty:Continue Deliberating, My Favorite eory, and a particular
expected moral value approach. We oered some moral reasons for why the
programmer has reasons to employ the third procedure in situations with moral
uncertainty.
While there are surely many remaining issues to be discussed with respect to
the question of how to deal with moral uncertainty in programming contexts,
this chapter aims to provide a first step toward offering programmers direction on
how to appropriately handle decision-making under moral uncertainty. We hope it
encourages robot ethics scholars to pay more attention to guiding programmers
who are under moral uncertainty.
Notes
1. Patrick Lin (2014, 2015) is one of the first scholars to explore the relevance of the
trolley problem in the context of autonomous vehicles.
2. There are important moral questions that we do not consider in this chapter. For
instance, should the age of passengers in a vehicle be taken into account in deciding
how the vehicle should be programmed to crash? Who should be responsible for an
accident caused by autonomous vehicles? Is it possible to confer legal personhood
on the autonomous vehicle? What liability rules should we as a society adopt to
regulate autonomous vehicles? (Douma and Palodichuk 2012; Gurney 2013). For ethical
issues involving robots in general, see Nourbakhsh (2013), Lin, Abney, and Bekey
(2012), and Wallach and Allen (2009).
3. The Robotics Institute of Carnegie Mellon University, for instance, offers a course
named "Ethics and Robotics."
4. Here we mirror the formulation of an example in MacAskill (forthcoming).
5. As will become clear, we owe a significant intellectual debt to Andrew Sepielli for his
work on the topic of moral uncertainty.
6. One might think that technological advances can soon minimize empirical uncer-
tainties. But this is a naive assumption. Existing robots are far from being able to fully
eliminate or account for possible empirical uncertainties. We are grateful to Illah
Nourbakhsh, professor of robotics at Carnegie Mellon University, for bringing this
point to our attention.
7. The solution we support resembles the celebrated Pascal's Wager argument that
Blaise Pascal offers for why one should believe in God. Pascal states, "Either
God is or he is not. But to which view shall we be inclined? Let us weigh
up the gain and the loss involved in calling heads that God exists. Let us assess
two cases: if you win you win everything, if you lose you lose nothing. Do not
hesitate then; wager that he does exist" (1670, § 233). We recognize that many
find Pascal's Wager problematic (Duff 1986), although there are writers who
find it logically valid (Hájek 2012; Mackie 1982; Rescher 1985). At any rate,
we are not concerned here to intervene in a debate about belief in divine exis-
tence. Nevertheless, we do think insights from Pascal’s Wager can usefully be
deployed, with relevant modications, for the problem of programming under
moral uncertainty.
8. Sepielli considers a scenario much like the one Tegan is in. He notes, “Some conse-
quentialist theory may say that it’s better to kill 1 person to save 5 people than it is to
spare that person and allow the 5 people to die. A deontological theory may say the
opposite. But it is not as though the consequentialist theory has, somehow encoded
within it, information about how its own difference in value between these two
actions compares to the difference in value between them according to deontology"
(2009, 12).
9. Philosopher Ted Lockhart offers a proposal that aims to hedge and also claims
to avoid the PIVC. Lockhart’s view requires one to maximize “expected moral
rightness” (Lockhart 2000, 27; Sepielli 2006) and thus does indeed account
not only for the probability that a particular moral theory is right, but also for
the moral weight (value, or degree) of the theory. One important problem with
Lockhart’s view is that it regards moral theories as having equal rightness in every
case (Sepielli 2006, 602). For a more detailed criticism of Lockhart’s position, see
Sepielli (2006,2013).
10. The example we use to explain the three steps also closely models an example from
Sepielli (2009). It is worth noting that Sepielli does not break down his analysis
into steps as we have. We have offered these steps with the hope that they accurately
capture Sepielli's important insights while also allowing for practical application.
11. We are grateful to Suneal Bedi for helpful discussion regarding the issues in this
section.
12. David Enoch (2014) offers this reason for why one ought to defer to a moral expert
with regard to moral decisions.
13. Hills states, “Moral understanding is important not just because it is a means to act-
ing rightly or reliably, though it is. Nor is it important only because it is relevant to
the evaluations of an agent's character. It is essential to acting well" (2009, 119).
14. This sort of objection to Hills (2009) is due to Enoch (2014). Enoch objects in the
context of a person failing to defer to a moral expert for moral guidance.
WorksCited
De Groot, Morris H. 2004. Optimal Statistical Decisions. NewYork:Wiley- Interscience.
Douma, Frank and Sarah A. Palodichuk. 2012. “Criminal Liability Issues Created by
Autonomous Vehicles.” Santa Clara Law Review 52:1157– 69.
Du, Anthony. 1986. “Pascal’s Wager and Innite Utilities.” Analysis 46:107– 9.
Enoch, David. 2014. “A Defense of Moral Deference.Journal of Philosophy 111 :
229– 58.
Fagnant, Daniel and Kara M. Kockelman. 2013. Preparing a Nation for Autonomous
Vehicles: Opportunities, Barriers and Policy Recommendations. Washington, DC:
Eno Center for Transportation.
Foot, Philippa. 1967. “e Problem of Abortion and the Doctrine of Double Eect.”
Oxford Review 5:5– 15.
Goodall, Noah J. 2014a. “Ethical Decision Making During Automated Vehicle Crashes.
Transportation Research Record :Journal of the Transportation Research Board, 58– 65.
Goodall, Noah J. 2014b. “Vehicle Automation and the Duty to Act.Proceedings of the
21st World Congress on Intelligent Transport Systems. Detroit.
Gurney, Jerey K. 2013. “Sue My Car Not Me: Products Liability and Accidents
Involving Autonomous Vehicles.” Journal of Law, Technology & Policy 2:247– 77.
Gustafsson, John. E. and Olle Torpman. 2014. “In Defence of My Favourite eory.
Pacic Philosophical Quarterly 95:159– 74.
Hájek, Alan. 2012. “Pascal’s Wager.Stanford Encyclopedia of Philosophy. http:// plato.
stanford.edu/ entries/ pascal- wager/ .
Hare, Caspar. 2012. “Obligations to Merely Statistical People.” Journal of Philosophy
109:378– 90.
Harman, Elizabeth. 2015. “e Irrelevance of Moral Uncertainty.” In Oxford Studies
in Metaethics, vol. 10, edited by Luss- Shafer Laundau, 53– 79. NewYork: Oxford
UniversityPress.
Hills, Alison. 2009. “Moral Testimony and Moral Epistemology.Ethics 120:94– 127.
Hsieh, Nien- ĥe, Alan Strudler, and David Wasserman. 2006. “e Numbers Problem.”
Philosophy & Public Aairs 34:352– 72.
Jones, Karen. 1999. “Second- Hand Moral Knowledge.” Journal of Philosophy 96:
55– 78.
Lin, Patrick. 2014. “e Robot Car of Tomorrow May Just Be Programmed to Hit
You.” Wired, May 6.http:// www.wired.com/ 2014/ 05/ the- robot- car- of- tomorrow-
might- just- be- programmed- to- hit- you/
OUP UNCORRECTED PROOF – FIRSTPROOFS, Fri Mar 31 2017, NEWGEN
oso-9780190652951.indd 18 3/31/2017 8:19:27 AM
Autonomous Vehicles and Moral Uncertainty 19
2C30B.3A1 Template Standardized 07- 07- 2016 and Last Modied on 31-03-2017
Lin, Patrick. 2015. “Why Ethics Matters for Autonomous Cars.” In Autonomes Fahren,
edited by M. Maurer, C. Gerdes, B. Lenz, and H. Winner, 70– 85. Berlin:Springer.
Lin, Patrick, Keith Abney, and George A. Bekey, eds. 2012. Robot Ethics:e Ethical
and Social Implications of Robotics. Cambridge, MA:MITPress.
Lockhart, Ted. 2000. Moral Uncertainty and Its Consequences. New York: Oxford
UniversityPress.
MacAskill, William. Forthcoming. “Normative Uncertainty as a Voting Problem.”Mind.
Mackie, J. L. 1982. e Miracle of eism. NewYork:Oxford UniversityPress.
Millar, Jason. 2014. “You Should Have a Say in Your Robot Car’s Code of Ethics.
Wired, September 2.http:// www.wired.com/ 2014/ 09/ set- the- ethics- robot- car/ .
Nourbakhsh, Illah R. 2013. Robot Futures. Cambridge, MA:MITPress.
Part, Derek. Forthcoming. On What Matters, part3. Oxford UniversityPress.
Pascal, Blaise. (1670) 1966. Pensées. Translated by A. K. Krailsheimer. Reprint,
Baltimore:PenguinBooks.
Raia, Howard. 1997. Decision Analysis: Introductory Lectures on Choices under
Uncertainty. NewYork:McGraw- Hill College.
Rescher, Nicholas. 1985. Pascal’s Wager. South Bend, IN:Notre Dame University.
Sepielli, Andrew. 2006. “Review of Ted Lockhart’s Moral Uncertainty and Its
Consequences.” Ethics 116:601 4.
Sepielli, Andrew. 2009. “What to Do When You Don’t Know What to Do.” In Oxford
Studies in Metaethics, vol. 4, edited by Russ- Shafer Laundau, 5– 28. New York:
Oxford UniversityPress.
Sepielli, Andrew. 2010. “Along an Imperfectly- Lighted Path:Practical Rationality and
Normative Uncertainty.” PhD dissertation, Rutgers University.
Sepielli, Andrew. 2012. “Normative Uncertainty for Non- Cognitivists.Philosophical
Studies 160:191– 207.
Sepielli, Andrew. 2013. “What to Do When You Don’t Know What to Do When You
Don’t Know What to Do ,” Nous 47:521– 44.
Sepielli, Andrew. Forthcoming. “Moral Uncertainty.” In Routledge Handbook of Moral
Epistemology, edited by Karen Jones. Abingdon:Routledge.
Snow, Nancy E. 1995. “Humility.Journal of Value Inquiry 29:203– 16.
omson, Judith Jarvis. 1976. “Killing, Letting Die, and the Trolley Problem.” Monist
59:204– 17.
Urmson, Chris. 2014. “Just Press Go:Designing a Self- driving Vehicle.” Google Ocial
Blog, May 27. http:// googleblog.blogspot.com/ 2014/ 05/ just- press- go- designing-
self- driving.html.
Wallach, Wendell and Colin Allen. 2009. Moral Machines:Teaching Robots Right om
Wron g . NewYork:Oxford UniversityPress.
Werhane, Patricia. 1999. Moral Imagination and Management Decision- Making.
NewYork:Oxford UniversityPress.