Machine Ethics and Automated Vehicles

Author: Noah J. Goodall

Pre-print version. Published in G. Meyer and S. Beiker (eds.), Road Vehicle Automation, Springer, 2014, pp. 93-102. Available at http://dx.doi.org/10.1007/978-3-319-05990-7_9
Abstract

Road vehicle travel at a reasonable speed involves some risk, even when using computer-controlled driving with failure-free hardware and perfect sensing. A fully-automated vehicle must continuously decide how to allocate this risk without a human driver's oversight. These are ethical decisions, particularly in instances where an automated vehicle cannot avoid crashing. In this chapter, I introduce the concept of moral behavior for an automated vehicle, argue the need for research in this area through responses to anticipated critiques, and discuss relevant applications from machine ethics and moral modeling research.
1 Ethical Decision Making for Automated Vehicles
Vehicle automation has progressed rapidly this millennium, mirroring improvements in machine learning, sensing, and processing. Media coverage often focuses on the anticipated safety benefits of automation, as computers are expected to be more attentive, precise, and predictable than human drivers. Mentioned less often are the novel problems that arise when an automated vehicle crashes. The first problem is liability, as it is currently unclear who would be at fault if a vehicle crashed while self-driving. The second is an automated vehicle's ability to make ethically complex decisions when driving, particularly prior to a crash. This chapter focuses on the second problem and the application of machine ethics to vehicle automation.
Driving at any significant speed can never be completely safe. A loaded tractor trailer traveling at 100 km/hr requires eight seconds to come to a complete stop, and a passenger car requires three seconds [1]. Truly safe travel requires accurate predictions of other vehicles' behavior over this time frame, something that is simply not possible given the close proximity of road vehicles.
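As a minimal sketch of the arithmetic behind these figures, the following constant-deceleration model reproduces the quoted stopping times. The deceleration values are assumptions backed out from the chapter's eight- and three-second figures, not data taken from NCHRP Report 400 [1].

```python
# Constant-deceleration stopping-time model (illustrative assumptions only).

def stop_time_s(speed_kmh: float, decel_ms2: float) -> float:
    """Seconds to brake from speed_kmh to a stop at constant deceleration."""
    return (speed_kmh / 3.6) / decel_ms2   # convert km/h to m/s, then v / a

# Decelerations assumed: ~3.5 m/s^2 (loaded truck), ~9.3 m/s^2 (passenger car)
print(f"loaded tractor trailer: {stop_time_s(100, 3.5):.1f} s")  # ~7.9 s
print(f"passenger car:          {stop_time_s(100, 9.3):.1f} s")  # ~3.0 s
```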
To ensure its own safety, an automated vehicle must continually assess risk: the risk of traveling a certain speed on a certain curve, of crossing the centerline to pass a cyclist, of side-swiping an adjacent vehicle to avoid a runaway truck closing in from behind. The vehicle (or its programmer, in advance) must decide how much risk to accept for itself and for adjacent vehicles. If the risk is deemed acceptable, it must decide how to apportion this risk among the affected parties. These are ethical questions that, due to time constraints during a crash, must be decided by the vehicle autonomously.
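To make the acceptance-and-apportionment idea concrete, here is a minimal sketch of such a decision. Every maneuver, probability, and harm value below is a hypothetical placeholder; a real vehicle would estimate them from perception and crash models, and the weight on others' harm is precisely the ethical parameter the chapter argues someone must choose.

```python
# Hypothetical candidate maneuvers: (P(crash), harm to occupants, harm to others)
CANDIDATES = {
    "brake in lane":          (0.30, 0.5, 0.1),
    "cross centerline":       (0.05, 0.4, 0.6),
    "swerve toward shoulder": (0.10, 0.3, 0.2),
}

def expected_risk(p_crash, harm_self, harm_others, other_weight=1.0):
    """Expected harm of a maneuver. other_weight encodes how risk is
    apportioned between the vehicle's occupants and other road users."""
    return p_crash * (harm_self + other_weight * harm_others)

best = min(CANDIDATES, key=lambda m: expected_risk(*CANDIDATES[m]))
print("lowest expected-risk maneuver:", best)
```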
The remainder of the chapter is organized as follows. In Section 2, responses are provided to nine criticisms of the need for ethics research in automated vehicle decision systems. Section 3 reviews relevant ethical theories and moral modeling research. The chapter is summarized in Section 4.
2 Criticisms of the Need for Automated Vehicle Ethics Systems, and Responses
Future automated vehicles will encounter situations where the "right" action is morally or legally ambiguous. In these situations, vehicles need a method to determine an ethical action. However, there is disagreement among experts on both of these points. This section lists nine criticisms of the importance of ethics in vehicle automation, with responses to each.
Criticism 1: Automated vehicles will never (or rarely) crash. If an automated vehicle never crashes, then there is no need to assess or assign risk, because driving no longer contains risk. Industry experts are mostly cautious about whether vehicle automation can ever eliminate all crashes. Claims of complete safety are often based on assumptions about the capabilities of automated vehicles and their environments. These assumptions can be grouped into three scenarios: automated vehicles with imperfect systems, automated vehicles with perfect systems driving in mixed traffic with human drivers, and automated vehicles with perfect systems driving exclusively with other automated vehicles. Crashes are possible in each scenario, as described in the following paragraphs.
• Imperfect systems. Any system ever engineered has occasionally failed. In the realm of automated vehicles, Fraichard and Kuffner list four causes of collisions: hardware failures, software bugs, perceptual errors, and reasoning errors [2]. While hardware failures may be somewhat predictable and often gradual, software failures are unexpected and sudden, and may prove riskier at high speeds. Perceptual errors may result in misclassifying an object on the roadway. Even if a pedestrian is correctly classified, an automated vehicle would need some way to perceive her intent, e.g. whether she is about to step into the road or is merely standing on the sidewalk. A mistake in this calculation could lead to a crash, especially considering the close proximity and high speed differentials on roadways.
• Perfect systems with mixed human-driven traffic. A perfect automated vehicle with complete awareness of its surroundings should be able to safely avoid static objects. Dynamic objects with unpredictable behavior pose a greater challenge. The best way to avoid a collision is to avoid any place, time, and trajectory on the roadway (referred to as a state) which could possibly lead to a crash. In robotics, a state in which all possible movements result in a crash is referred to as an inevitable collision state [3]. Researchers have acknowledged that with road vehicles there is no way to completely avoid inevitable collision states [4], only to minimize the probability of entering one [5]. The only reasonable strategy is to construct a model of the expected behavior of nearby vehicles and try to avoid likely collisions; based on patent filings, this appears to be a component of Google's self-driving car [6]. (A sketch of this strategy follows this list.) Without a sophisticated model of expected vehicle behavior, a "safe" automated vehicle would be forced to overreact to perceived threats. For example, a "flying pass" maneuver, where a vehicle approaches a stopped queue at high speed only to move into a dedicated turn lane at the last moment, appears identical to a pre-crash rear-end collision [7, p. 140]. To guarantee safety, an automated vehicle would have to evade many such maneuvers each day. This is both impractical and dangerous.
• Perfect systems without human-driven traffic. Perfect vehicles traveling on a freeway with other perfect vehicles should be able to safely predict each other's behavior and even communicate wirelessly to avoid collisions. Yet these vehicles would still face threats from wildlife (256,000 crashes in the U.S. in 2000), pedestrians (73,000 crashes), and bicyclists (51,000 crashes) [8]. Although a sophisticated automated vehicle would be safer than a human driver, some crashes may be unavoidable. Furthermore, the perfect systems described in this scenario are neither likely nor near-term.
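The sketch below illustrates the strategy described in the second scenario: since inevitable collision states cannot be ruled out entirely [4], the vehicle selects the trajectory that minimizes the probability of entering one under a model of other drivers' likely behavior [5]. The behavior model, trajectories, and numbers are invented for illustration; in practice the collision table would come from forward simulation.

```python
# Hypothetical model of a lead vehicle's behavior: (probability, label)
BEHAVIORS = [
    (0.90, "lead car holds speed"),
    (0.09, "lead car brakes moderately"),
    (0.01, "lead car brakes hard"),
]

# COLLISION[trajectory][i] is True if pairing our trajectory with
# behavior i ends in an unavoidable crash (from forward simulation).
COLLISION = {
    "maintain":    [False, False, True],
    "slow down":   [False, False, False],
    "change lane": [False, True, True],
}

def p_inevitable_collision(trajectory: str) -> float:
    """Probability of entering an inevitable collision state."""
    return sum(p for (p, _), hit in zip(BEHAVIORS, COLLISION[trajectory]) if hit)

best = min(COLLISION, key=p_inevitable_collision)
print(best, p_inevitable_collision(best))
```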
Criticism 2: Crashes requiring complex ethical decisions are extremely unlikely. To demonstrate the difficulty of some ethical decisions, philosophers often use examples that seem unrealistic. The trolley problem [9], where a person must decide whether to switch the path of a trolley onto a track where it will kill one person in order to spare five others, is a common example [10]. The trolley problem is popular because it is both a difficult problem and one where people's reactions are sensitive to context, e.g. pushing a person onto the track instead of throwing a switch produces different responses, even though the overall outcome is the same.
The use of hypothetical examples may suggest that ethics are needed only in incredibly rare circumstances. However, a recent profile of Google's self-driving car team suggests that ethics are already being considered in debris avoidance: "What if a cat runs into the road? A deer? A child? There were moral questions as well as mechanical ones, and engineers had never had to answer them before" [11]. Ethical decisions are needed whenever there is risk, and risk is always present when driving.
One can argue that these are simple problems, e.g. avoid the child at all costs and avoid the cat if it is safe to do so. By comparison, however, the trolley problem is actually fairly straightforward: it has only one decision, with known consequences for each alternative. This is highly unrealistic. A vehicle faces decisions with unknown consequences, uncertain probabilities of future actions, and even uncertainty about its own environment. With these uncertainties, common ethical problems become "complex" very quickly.
Criticism 3: Automated vehicles will never (or rarely) be responsible for a crash. This assumes that absence of liability is equivalent to ethical behavior. Regardless of fault, an automated vehicle should behave ethically to protect not only its own occupants, but also those at fault.
Criticism 4: Automated vehicles will never collide with another automated vehicle. This assumes that an automated vehicle's only interactions will be with other automated vehicles. This is unlikely to happen in the near future for two reasons. First, the vehicle fleet is slow to turn over. Even if every new vehicle sold in the U.S. were fully automated, it would be 30 years before 90 percent of vehicles were replaced [12]. Second, unless automated-vehicle-only zones are established, any fully-automated vehicle will have to interact with human drivers, pedestrians, bicyclists, motorcyclists, and trains. Even an automated-only zone would encounter debris, wildlife, and inclement weather. These are all in addition to a vehicle's own hardware, software, perceptual, and reasoning failures. Any of these factors can contribute to or independently cause a crash.
Criticism 5: In level 2 and 3 vehicles, a human will always be available to take control, and therefore the human driver will be responsible for ethical decision making. Although the National Highway Traffic Safety Administration (NHTSA) definitions require that a person be available to take control of a level 2 automated vehicle with no notice, and of a level 3 automated vehicle within a reasonable amount of time [13], this may be an unrealistic expectation for most drivers.
In a level 2 vehicle, this would require that a driver pay constant attention to the roadway, similar to when using cruise control. Drivers of semi-autonomous vehicles with lane-keeping abilities on an empty test track exhibited significant increases in eccentric head turns and secondary tasks during automated driving, even in the presence of a researcher [14]. Twenty-five percent of test subjects were observed reading while the vehicle was in autonomous mode. Similar results have been found in driving simulator studies [15]. The effect of automation on a driver's attention level remains an open question, but early research suggests that a driver cannot immediately take over control of the vehicle safely. Most drivers will require some type of warning time.
Level 3 vehicles provide this warning time, but the precise amount of time needed is unknown. The NHTSA guidance does not specify an appropriate warning time [13], although some guidance can be found in highway design standards. The American Association of State Highway and Transportation Officials (AASHTO) recommends that highway designers allow 200 to 400 meters for a driver to perceive and react to an unusual situation at 100 km/hr [16]. This corresponds to 7 to 14 seconds of travel time, much of which is beyond the range of today's long-range automotive radar, roughly 9 seconds at that speed [17]. In an emergency, a driver may be unable to assess the situation and make an ethical decision within the available time frame. In these situations, the automated vehicle would maintain control of the vehicle, and by default be responsible for ethical decision making.
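The arithmetic behind these figures is simple. The 200 to 400 meter decision sight distances are from [16]; the roughly 250 meter range assumed below for the long-range radar cited in [17] is this sketch's assumption.

```python
# Convert AASHTO decision sight distances and an assumed radar range
# into seconds of warning at 100 km/h.
SPEED_MS = 100 / 3.6                       # 100 km/h in m/s

for dist_m in (200, 400, 250):
    print(f"{dist_m} m ahead = {dist_m / SPEED_MS:.1f} s of warning")
# 200 m -> 7.2 s, 400 m -> 14.4 s, 250 m (assumed radar range) -> 9.0 s
```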
Criticism 6: Humans rarely make ethical decisions when driving or in crashes, and automated vehicles should not be held to the same standard. Drivers may not believe themselves to be making ethical decisions while driving, but they in fact make such decisions often. The decision to speed, or to cross a yellow line to give a cyclist additional room, is an ethical decision. Any activity that transfers risk from one person to another involves ethics, and automated vehicles should be able to make acceptable decisions in similar environments. Considering that Americans drive 4.8 trillion kilometers each year [18], novel situations requiring ethics should emerge steadily.
Criticism 7: An automated vehicle can be programmed to follow the law, which will cover ethical situations. Existing laws are not nearly comprehensive or specific enough to produce reasonable actions in a computer. Lin provides an example of an automated vehicle coming across a tree branch in the road. If there were no oncoming traffic, a reasonable person would cross the double yellow line to get around the branch, but an automated vehicle programmed to follow the law would be forced to wait until the branch was cleared [19].
Of course, laws could be added for these types of situations. This can quickly become a massive undertaking: one would need computer-understandable definitions of terms like "obstruction" and "safe" for an automated vehicle whose perception system is never completely certain of anything. If enough laws were written to cover the vast majority of ethical situations, and they were written in such a way as to be understood by computers, then the automated vehicle ethics problem would be solved. Current law is not close to these standards.
Criticism 8: An automated vehicle should simply try to minimize damage at all times. This proposes a utilitarian ethics system, which is addressed in Section 3.1 and in previous work [20]. Briefly, utilitarianism's main obstacle is that it does not recognize the rights of individuals. A utilitarian automated vehicle given the choice between colliding with two different vehicles would select the one with the higher safety rating. Although this would maximize overall safety, most would consider it unfair.
Criticism 9: Overall benefits outweigh any risks from an unethical vehicle. This is perhaps the strongest argument against automated vehicle ethics research: any effort that impedes the progress of automation indirectly harms those who die in the interim between the earliest possible deployment and actual deployment.
While preliminary evidence does not prove automation is safer than human drivers [20], it seems likely that automation will eventually reduce the crash rate. Lin has argued, however, that a reduction in overall fatalities may be considered unethical [21], as improved safety for one group may come at the expense of another. If vehicle fatalities are reduced, but cyclist fatalities increase, even an overall safety improvement might be unacceptable to society.
Second, this assumption uses a purely utilitarian view that maximizing lives saved is the preferred option. Society, however, often uses a different value system that considers the context of a given situation. For example, the risk of death from nuclear meltdown is often over-valued, while traffic fatalities are under-valued. Society may disagree that a net gain in safety is worth a particularly frightening risk. If, in fact, the ultimate goal is to improve safety, then ensuring that automated vehicles behave in acceptable ways is critical to earning the public's trust in these new technologies.
Finally, the safety benefits of automated vehicles are still speculative. To be considered safer than a human driver with 99% confidence, an automated passenger vehicle would need to travel 1.1 million kilometers without crashing and 482 million kilometers without a fatal crash [20]. As of this writing, no automated vehicle has safely reached these distances.
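Figures of this kind can be derived from a simple statistical model. Assuming crash-free distance is exponentially distributed, driving d kilometers without a crash supports the claim of being at least as safe as humans at confidence c when d >= -ln(1 - c) / r, where r is the human crash rate. The rates below are backed out of the distances quoted from [20], not taken from the chapter itself.

```python
import math

def required_km(human_rate_per_km: float, confidence: float = 0.99) -> float:
    """Crash-free distance needed to beat the human rate at given confidence,
    under an exponential (constant-rate) crash model."""
    return -math.log(1 - confidence) / human_rate_per_km

# Implied human rates: ~1 crash per 238,000 km; ~1 fatal crash per 104.7M km
print(f"{required_km(1 / 238_000):,.0f} km crash-free")         # ~1.1 million
print(f"{required_km(1 / 104_700_000):,.0f} km fatality-free")  # ~482 million
```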
3 Relevant Work in Machine Ethics and Moral Modeling
There are two main challenges in formulating an ethical response for an automated vehicle. The first is to articulate society's values across a range of scenarios. This is especially difficult given that most research into morality focuses on single choices with known outcomes (one person will always die if the trolley changes track), while in reality outcomes are uncertain and there are several layers of choices. The second challenge is to translate these morals into language that a computer can understand, without a human's ability to discern and analogize.

The recent field of machine ethics addresses these challenges through the development of artificial autonomous agents that can behave morally. While much of machine ethics work is theoretical, a few practical applications include computer modeling of human ethics in areas such as medicine, defense, and engineering. This section provides background on ethical theories and reviews examples of computational moral modeling.
3.1 Ethical Theories
Researchers have investigated the potential of various moral theories for use in machine ethics applications, including utilitarianism [22], Kantianism [23]-[25], Smithianism [26], and deontology [27], [28]. Deontology and utilitarianism have been discussed as candidates for automated vehicle ethics, with shortcomings found in both theories [20].
Deontological ethics consists of limits placed on a machine's behavior: a set of rules that it cannot violate. Asimov's three laws of robotics are a well-known example of deontological ethics [29]. A shortcoming of deontological ethics appears when reducing complex human values to computer code. As in the traffic law example from this chapter's seventh criticism, rules generally require some common sense in their application, yet computers are capable only of literal interpretations. These misinterpretations can lead to unexpected behavior. Under Asimov's laws, an automated vehicle might avoid braking before a collision because this action would first give its occupants whiplash, thereby violating the first law's prohibition on harming humans. Rules can be added or clarified to cover different situations, but it is unclear whether any set of rules could encompass all situations. Developing rules also requires that someone articulate human morals, an exceptionally difficult task given that there has never been complete agreement on the question of what is right and wrong.
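A toy illustration of this literal-interpretation problem follows. The rule encoding and harm estimates are invented; the point is only that a naively coded "never harm a human" rule vetoes hard braking, because braking itself causes minor harm, leaving no permitted action at all.

```python
def violates_first_law(action: dict) -> bool:
    # Literal reading of "a robot may not injure a human being" [29]:
    # any nonzero predicted harm counts as a violation.
    return action["predicted_harm"] > 0

actions = [
    {"name": "brake hard", "predicted_harm": 0.1},  # whiplash (assumed)
    {"name": "do nothing", "predicted_harm": 0.9},  # collision (assumed)
]

permitted = [a["name"] for a in actions if not violates_first_law(a)]
print(permitted or "no permitted action")  # both actions are vetoed
```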
Another useful moral theory is utilitarianism, which dictates that an action is moral if its outcome (or, in the case of automated vehicles, its expected outcome) maximizes some utility. The advantage of this method is that it is easily computable. However, it is difficult to define a metric for the outcome. Property damage estimates can produce unfair outcomes, as they would recommend colliding with a helmeted motorcyclist over a non-helmeted one, since the helmeted rider is less likely to experience costly brain damage. This example illustrates another shortcoming of utilitarianism: it generally maximizes the collective benefit rather than individuals' benefits, and does not consider equity. One group may consistently benefit (un-helmeted riders) while another loses.
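A numeric version of the motorcyclist example makes the inequity visible. The costs are hypothetical; they show only how a damage-minimizing rule systematically shifts risk onto the rider who took the safety precaution.

```python
# Assumed expected damage costs of colliding with each rider (illustrative).
expected_damage_cost = {
    "helmeted motorcyclist":   50_000,
    "unhelmeted motorcyclist": 200_000,
}

target = min(expected_damage_cost, key=expected_damage_cost.get)
print("utilitarian choice:", target)  # always the helmeted rider
```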
Hansson has noted that risk-taking in radiation exposure combines the three main ethical theories: virtue (referred to as justification), utilitarianism (optimization), and deontology (individual dose limits) [30]. Automated vehicle ethics will likely also require a combination of two or more ethical theories.
3.2 Practical Applications
There have been several attempts to develop software that can provide guidance in situations requiring ethics. One of the first examples was a utilitarian software tool called Jeremy [31]. This program measured the utility of any action's outcome as the straightforward product of the outcome's intensity, duration, and probability, each of which was estimated by the user. In an automated vehicle environment, utility could be defined as safety or the inverse of damage costs, with intensity, duration, and probability estimated from crash models. A major shortcoming of this model is its exclusive use of utilitarianism, an ethical theory which disregards context, virtues, and limits on individual harm.
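As a minimal sketch of Jeremy's calculation as described in [31], each action's score is the product of user-estimated intensity, duration, and probability, summed over affected parties. The actions and values below are invented for illustration.

```python
def jeremy_utility(outcomes):
    """outcomes: list of (intensity, duration, probability) per affected
    party, with intensity signed (+ benefit, - harm)."""
    return sum(i * d * p for i, d, p in outcomes)

# Hypothetical crash-avoidance actions and their estimated outcomes.
actions = {
    "swerve left": [(-0.8, 1.0, 0.2), (0.5, 1.0, 0.8)],
    "brake only":  [(-0.4, 1.0, 0.5)],
}
print(max(actions, key=lambda a: jeremy_utility(actions[a])))
```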
The team behind Jeremy later introduced two other software tools. The first was W.D. [31], which used a duty-based ethical theory influenced by William D. Ross [32] and John Rawls [33]. This was followed by a similar program, MedEthEx [34], a tool meant for medical applications and reflecting the duties identified in Principles of Biomedical Ethics [35]. Both of these programs are deontological, and are trained using test cases that either violate or adhere to a formulated set of duties, as indicated by an integer score. The software uses machine learning to determine whether test-case actions are moral or immoral based on adherence to ethical principles, and calibrates these assessments using expert judgment. The output provides an absolute conclusion on whether an action is right or wrong, and indicates which ethical principles were most important in the decision.
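A rough analogue of this duty-based learning approach is sketched below: each case is a vector of integer duty scores (negative for violated, positive for satisfied), and a simple perceptron learns duty weights from expert-labeled cases. The duties, cases, labels, and learning rule are all this sketch's assumptions, not the published W.D. or MedEthEx algorithms.

```python
DUTIES = ["nonmaleficence", "beneficence", "autonomy"]

TRAINING = [  # (duty scores in -2..2, expert verdict: +1 right / -1 wrong)
    ([+1, +2, -1], +1),
    ([-2, +1, +1], -1),
    ([+2, -1, +2], +1),
    ([-1, -2, +2], -1),
]

weights = [0.0] * len(DUTIES)
for _ in range(20):                           # perceptron training epochs
    for scores, label in TRAINING:
        pred = 1 if sum(w * s for w, s in zip(weights, scores)) > 0 else -1
        if pred != label:                     # nudge toward expert judgment
            weights = [w + label * s for w, s in zip(weights, scores)]

# Learned weights hint at which duties dominated the expert verdicts.
print(dict(zip(DUTIES, weights)))
```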
McLaren has developed two tools to aid in ethical decision making. The first tool is Truth-Teller, a program that analyzes two case studies where the subject must decide whether or not to tell the truth [36]. The program identifies similarities and differences between the cases, and lists reasons for or against telling the truth in each situation. This is an example of casuistic reasoning, where one reaches a conclusion by comparing a problem with similar situations instead of using rules learned from a set of test cases. Case studies are input using symbols rather than natural language processing, to be more easily machine-readable. A similar program from McLaren, SIROCCO [36], uses casuistry to identify principles from the National Society of Professional Engineers code of ethics relevant to an engineering ethics problem. Like Truth-Teller, SIROCCO avoids moral judgments, and instead suggests ethically relevant information that can help a user make decisions.
The U.S. Army recently funded research into automated ethical decision making as a support tool for commanders and for eventual use in robotic systems. The first step in this effort is a computer model that attempts to assess the relative morality of two competing actions in a battlefield environment. This model, referred to by its developers as the Metric of Evil, attempts to "provide results that resemble human reasoning about morality and evil" rather than replicate the process of human reasoning [37]. To calculate the metric, the model sums the evil of each individual consequence of an action, taking into account high and low estimates of evil, confidence intervals, and intentionality. A panel of experts then rates a set of ethical test cases, and the weights of each type of consequence are adjusted so that the model output matches expert judgment. While the Metric of Evil provides decisions on which action is more ethical, it does not provide the user with evidence supporting its conclusion.
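The calibration loop just described can be sketched as follows: score each action as a weighted sum over its consequences, then adjust the per-consequence weights until the model output tracks the expert panel's ratings. The consequence types, magnitudes, expert scores, and fitting procedure are invented here; the published model [37] also handles high/low estimates, confidence intervals, and intentionality, which this sketch omits.

```python
CASES = [  # ({consequence type: magnitude}, expert "evil" rating)
    ({"civilian_harm": 3, "property_damage": 1}, 8.0),
    ({"civilian_harm": 0, "property_damage": 4}, 2.0),
    ({"civilian_harm": 1, "property_damage": 2}, 4.0),
]

weights = {"civilian_harm": 1.0, "property_damage": 1.0}

def metric_of_evil(consequences):
    """Weighted sum of consequence magnitudes."""
    return sum(weights[c] * m for c, m in consequences.items())

for _ in range(500):                       # simple gradient-descent fit
    for consequences, expert in CASES:
        error = metric_of_evil(consequences) - expert
        for c, m in consequences.items():  # move weights toward expert score
            weights[c] -= 0.01 * error * m

print({c: round(w, 2) for c, w in weights.items()})
```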
Computational moral modeling is in its infancy. The efforts described in this chapter, particularly MedEthEx and the Metric of Evil, show that it is possible to solve ethical problems automatically, although much work is needed, particularly in model calibration and in incorporating uncertainty.
4 Summary
Automated vehicles, even sophisticated ones, will continue to crash. To minimize damage, a vehicle must continually assess risk to itself and others. Even simple maneuvers will require the vehicle to determine whether the risk to itself and others is acceptable. These calculations, the acceptance and apportionment of risk, are ethical decisions, and human drivers will not be able to oversee them. The vehicle must at times make ethical choices autonomously, either via explicit pre-programmed instructions, a machine learning approach, or some combination of the two. The fields of moral modeling and machine ethics have made some progress, but much work remains. This chapter is meant as a guide for those first encountering ethical systems as applied to automated vehicles, to help frame the problem, convey core concepts, and provide directions for useful research in related fields.
References
[1] D. B. Fambro, K. Fitzpatrick, and R. J. Koppa, "Determination of Stopping Sight Distances," Transportation Research Board, Washington, D.C., NCHRP 400, 1997.
[2] T. Fraichard and J. J. Kuffner, "Guaranteeing Motion Safety for Robots," Autonomous Robots, vol. 32, no. 3, pp. 173-175, Feb. 2012.
[3] T. Fraichard and H. Asama, "Inevitable Collision States - A Step Towards Safer Robots?," Advanced Robotics, vol. 18, no. 10, pp. 1001-1024, 2004.
[4] R. Benenson, T. Fraichard, and M. Parent, "Achievable Safety of Driverless Ground Vehicles," in 10th International Conference on Control, Automation, Robotics and Vision (ICARCV 2008), 2008, pp. 515-521.
[5] A. Bautin, L. Martinez-Gomez, and T. Fraichard, "Inevitable Collision States: A Probabilistic Perspective," in 2010 IEEE International Conference on Robotics and Automation (ICRA), 2010, pp. 4022-4027.
[6] D. I. F. Ferguson and D. A. Dolgov, "Modifying Behavior of Autonomous Vehicle Based on Predicted Behavior of Other Vehicles," United States Patent Application 20130261872, Kind Code A1, 2013.
[7] T. A. Dingus, S. G. Klauer, V. L. Neale, A. Petersen, S. E. Lee, J. Sudweeks, M. A. Perez, J. Hankey, D. Ramsey, S. Gupta, C. Bucher, Z. R. Doerzaph, J. Jermeland, and R. R. Knipling, "The 100-Car Naturalistic Driving Study, Phase II - Results of the 100-Car Field Experiment," Virginia Tech Transportation Institute, DOT HS 810 593, Apr. 2006.
[8] W. G. Najm, B. Sen, J. D. Smith, and B. N. Campbel, "Analysis of Light Vehicle Crashes and Pre-Crash Scenarios Based on the 2000 General Estimates System," National Highway Traffic Safety Administration, Washington, DC, DOT-VNTSC-NHTSA-02-04, Feb. 2003.
[9] P. Foot, "The Problem of Abortion and the Doctrine of Double Effect," Oxford Review, vol. 5, pp. 5-15, 1967.
[10] B. Templeton, "Enough with the Trolley Problem, Already," Brad Ideas, 10-Oct-2013. [Online]. Available: http://www.ideas.4brad.com/enough-trolley-problem-already. [Accessed: 11-Oct-2013].
[11] B. Bilger, "Auto Correct: Has the Self-driving Car at Last Arrived?," The New Yorker, 25-Nov-2013.
[12] John A. Volpe National Transportation Systems Center, "Vehicle-Infrastructure Integration (VII) Initiative Benefit-Cost Analysis Version 2.3 (Draft)," Federal Highway Administration, May 2008.
[13] National Highway Traffic Safety Administration, "Preliminary Statement of Policy Concerning Automated Vehicles," National Highway Traffic Safety Administration, Washington, DC, NHTSA 14-13, May 2013.
[14] R. E. Llaneras, J. A. Salinger, and C. A. Green, "Human Factors Issues Associated with Limited Ability Autonomous Driving Systems: Drivers' Allocation of Visual Attention to the Forward Roadway," in Proceedings of the Seventh International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design, Bolton Landing, NY, 2013, pp. 92-98.
[15] A. H. Jamson, N. Merat, O. M. J. Carsten, and F. C. H. Lai, "Behavioural Changes in Drivers Experiencing Highly-automated Vehicle Control in Varying Traffic Conditions," Transportation Research Part C: Emerging Technologies, vol. 30, pp. 116-125, May 2013.
[16] American Association of State Highway and Transportation Officials, A Policy on Geometric Design of Highways and Streets, 6th ed. Washington, DC: AASHTO, 2011.
[17] Bosch, "LRR3: 3rd Generation Long-Range Radar Sensor," Robert Bosch GmbH, Leonberg, Germany, 292000P03W-C/SMC2-200906-En, 2009.
[18] United States Census Bureau, "Statistical Abstract of the United States," United States Census Bureau, Washington, DC, Table 1107: Vehicles Involved in Crashes by Vehicle Type, Rollover Occurrence, and Crash Severity: 2009, 2012.
[19] P. Lin, "The Ethics of Autonomous Cars," The Atlantic, 08-Oct-2013. [Online]. Available: http://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/. [Accessed: 09-Oct-2013].
[20] N. J. Goodall, "Ethical Decision Making During Automated Vehicle Crashes," submitted for publication, 2013.
[21] P. Lin, "The Ethics of Saving Lives With Autonomous Cars Are Far Murkier Than You Think," Wired Opinion, 30-Jul-2013. [Online]. Available: http://www.wired.com/opinion/2013/07/the-surprising-ethics-of-robot-cars/. [Accessed: 30-Jul-2013].
[22] B. Hibbard, "Avoiding Unintended AI Behaviors," in Artificial General Intelligence: 5th International Conference, Oxford, UK, 2012, pp. 107-116.
[23] A. F. Beavers, "Between Angels and Animals: The Question of Robot Ethics, or Is Kantian Moral Agency Desirable?," in Annual Meeting of the Association of Practical and Professional Ethics, Cincinnati, OH, 2009.
[24] T. M. Powers, "Prospects for a Kantian Machine," IEEE Intelligent Systems, vol. 21, no. 4, pp. 46-51, 2006.
[25] R. Tonkens, "A Challenge for Machine Ethics," Minds & Machines, vol. 19, no. 3, pp. 421-438, Aug. 2009.
[26] T. M. Powers, "Prospects for a Smithian Machine," in Proceedings of the International Association for Computing and Philosophy, College Park, Maryland, 2013.
[27] T. M. Powers, "Deontological Machine Ethics," in Machine Ethics: Papers from the AAAI Fall Symposium, Menlo Park, CA: AAAI Press, 2005.
[28] S. Bringsjord, K. Arkoudas, and P. Bello, "Toward a General Logicist Methodology for Engineering Ethically Correct Robots," IEEE Intelligent Systems, vol. 21, no. 4, pp. 38-44, 2006.
[29] I. Asimov, "Runaround," Astounding Science Fiction, pp. 94-103, Mar. 1942.
[30] S. O. Hansson, "Ethics and Radiation Protection," Journal of Radiological Protection, vol. 27, no. 2, p. 147, Jun. 2007.
[31] M. Anderson, S. L. Anderson, and C. Armen, "Towards Machine Ethics: Implementing Two Action-Based Ethical Theories," in Proceedings of the AAAI 2005 Fall Symposium on Machine Ethics, Arlington, VA, 2005.
[32] W. D. Ross, The Right and the Good. Oxford University Press, 1930.
[33] J. Rawls, A Theory of Justice, Rev. ed. Cambridge, MA: Belknap Press of Harvard University Press, 1999.
[34] M. Anderson, S. L. Anderson, and C. Armen, "MedEthEx: A Prototype Medical Ethics Advisor," in Proceedings of the 18th Conference on Innovative Applications of Artificial Intelligence, Boston, MA, 2006, vol. 2, pp. 1759-1765.
[35] T. L. Beauchamp and J. F. Childress, Principles of Biomedical Ethics. New York, NY: Oxford University Press, 1979.
[36] B. M. McLaren, "Computational Models of Ethical Reasoning: Challenges, Initial Steps, and Future Directions," IEEE Intelligent Systems, vol. 21, no. 4, pp. 29-37, 2006.
[37] G. S. Reed and N. Jones, "Toward Modeling and Automating Ethical Decision Making: Design, Implementation, Limitations, and Responsibilities," Topoi, vol. 32, no. 2, pp. 237-250, Oct. 2013.
Full Author's Information

Noah J. Goodall, Ph.D., P.E.
Virginia Center for Transportation Innovation and Research
530 Edgemont Road
Charlottesville, VA
United States
E-mail: noah.goodall@vdot.virginia.gov

Keywords

automation, autonomous, ethics, risk, morality