Robot minds and human ethics: the need for a comprehensive model of moral decision making
ABSTRACT Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical
behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought”
of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and
yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms,
e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However,
assembling a system from the bottom-up which is capable of accommodating moral considerations draws attention to the importance
of a much wider array of mechanisms in honing moral intelligence. Moral machines need not emulate human cognitive faculties
in order to function satisfactorily in responding to morally significant situations. But working through methods for building
AMAs will have a profound effect in deepening an appreciation for the many mechanisms that contribute to a moral acumen, and
the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans
arrive at satisfactory moral judgments.
Keywords: Moral psychology · Moral agent · Machine ethics · Moral philosophy · Decision making · Moral judgment · Virtues · Computers · Emotions · Robots
Building moral machines is a practical, not a theoretical,
goal. It is spurred by the need to ensure that increasingly
autonomous machines will not cause harm to humans and
other entities worthy of moral consideration. As the
autonomy of computer systems and robots (collectively referred to hereafter as (ro)bots, since agents within computer systems are often called bots, the term usefully covering both physical robots and software agents within computers and networks) expands, designers and
engineers cannot always predict the choices and actions the
systems will take when encountering unanticipated situa-
tions or inputs. A new field of inquiry variously known as
machine ethics (Anderson and Anderson 2006a), machine
morality (Wallach et al. 2008), artificial morality (Daniel-
son 1992), computational ethics (Allen 2002), or friendly
AI (Yudkowsky 2001) has emerged from the challenge of
implementing moral decision making faculties in (ro)bots.
The prospect of building moral machines gives rise to a
host of questions. Are (ro)bots the kinds of entities that can
make moral decisions? Whose morality or what morality
should be implemented in (ro)bots? Can (ro)bots be ade-
quate moral agents? How will the moral agency of (ro)bots
be evaluated? What harms to humans might be caused by
(ro)bots that are not fully developed artificial moral agents
(AMAs), or by (ro)bots that circumvent their ethical con-
straints in pursuit of their own goals? I have addressed
these questions in articles and in a 2009 book I co-authored
(with Colin Allen) titled, Moral Machines: Teaching
Robots Right From Wrong. In this paper I will focus on
how machine morality contributes to a better understanding
of human ethics. (While some philosophers have tried to distinguish ethics from morals, in this paper I bow to the more common practice, which is to use the words interchangeably.) In particular, I will propose that:
1. The practical task of engineering AMAs requires a more detailed understanding of human moral decision making than presently exists.
2. Examining human moral behavior with a high degree of specificity will not only promote a better understanding of moral reasoning, but will also lead to an appreciation for the contribution to moral judgments made in many contexts by a broad array of cognitive mechanisms.
3. The contributions of the secondary cognitive mechanisms to good judgments indicate the need for a more comprehensive model of human moral decision making than presently exists, and for a rethinking of the capabilities an agent will require in order to be designated a moral agent.
Building moral machines is a grand thought experiment
that forces philosophers and engineers to approach ethics in
an unusually comprehensive manner. This thought exper-
iment draws upon the existing canon of moral philosophy
and upon the research by cognitive scientists interested in
revealing the mechanisms that support decision making for
both humans and non-human animals.
Reflection about and experimentation in building
AMAs forces one to think deeply about how humans
function, which human abilities can be implemented
in the machines humans design, and what character-
istics truly distinguish humans from animals or from
new forms of intelligence that humans create (Moral Machines, p. 8).
Moral philosophy and moral psychology
The study of moral decision making is profoundly influ-
enced by a long-standing tension between moral philoso-
phy and moral psychology. Moral philosophy and moral
psychology developed hand-in-hand. However, throughout
the 20th century two philosophical positions, the is-ought
distinction and the ‘‘naturalistic fallacy’’, served to buttress
a division between these fields of study.
Until recently, discussions of moral psychology focused
primarily on the relationship between emotions and reason
in the making of moral decisions. While Aristotle appre-
ciated that emotions could either serve or interfere with
honing a virtuous character, the more influential perspec-
tive was that of the Greek and Roman Stoics who perceived
emotions as the enemy of reason and moral judgment. In
the 18th century, David Hume (1739–1740) and Adam
Smith (1759) emphasized the importance of moral senti-
ments. Hume considered emotions to be antecedent to
reason, but also famously claimed that one could not derive
an ‘‘ought’’ from an ‘‘is.’’ The is-ought distinction is
broadly understood as a fundamental gap between all
descriptive or factual statements and normative or pre-
scriptive judgments. This can mean that understanding the
psychology of how people make moral decisions does not
inform us about what people ought to do.
The naturalistic fallacy is Moore’s (1903) contention
that ethical concepts such as ‘‘good’’ are indefinable and
irreducible to natural properties. Both intuitionists and
emotivists were convinced by this argument that the study
of empirical science was irrelevant to the elucidation of
moral philosophy. In addition, the influence of behaviorism
placed the scientific study of moral psychology on a back
burner until the emergence of cognitive science in the
1960s and 1970s. A renewed interest in psychological
influences on moral judgment was prompted by the critique
of people as rational agents developed by scientists
studying decision theory (Miller 1956; Simon 1957, 1982;
Tversky and Kahneman 1974). Developmental and social
psychologists (Piaget 1972; Isen and Levin 1972; Darley
and Batson 1973), sociobiologists (Wilson 1975), and
game theorists (Hamilton 1964a, b; Axelrod and Hamilton
1981) were among the first to reawaken the fascination
with research on moral psychology. Interest in the evolu-
tionary antecedents of morality was soon complemented by
evidence of cooperation and other prosocial behaviors in
non-human primates (de Waal 1996; Flack and de Waal
2000). Studies of the cognitive mechanisms that influence
human moral behavior (Haidt 2001) and brain imaging of
people confronted with moral dilemmas have followed
(Greene et al. 2001; Sanfey et al. 2003).
Perhaps the most significant discovery about human moral psychology
has been the role played by unconscious or nonconscious
mechanisms in determining many choices and actions (Ul-
eman and Bargh 1989; Greenwald and Banaji 1995; Hauser
2006). This discovery has contributed to a devaluing of
reason as central to the making of moral judgments.
Discussion of the roles played by emotions, intuitions,
implicit values, heuristics, or computational mechanisms in
making moral decisions can be confusing. The distinctions
between these terms are still being clarified, and as
categories they can overlap. Attempts have been made to
close the gap between moral philosophy and moral psy-
chology. But it remains unclear whether the elucidation of
the specific cognitive mechanisms that influence judgments
or support moral decision making can tell us much about
what people ought to do. Nevertheless, collegial debate has
led to more nuanced claims, and both cognitive scientists
and moral philosophers have come to recognize that the
research is still in an early phase.
It is within this context that the challenge of building
moral decision making capabilities into (ro)bots is arising.
Machine ethics naturally draws upon both moral philoso-
phy and moral psychology in developing approaches for
designing AMAs. In doing so, it also facilitates a more
comprehensive understanding of the manner in which
humans make moral decisions.
Organic and inorganic machines that make moral decisions
How might one build machines capable of making satis-
factory decisions and acting safely when confronted with
challenges in which values compete, the available infor-
mation is confusing or incomplete, and the results arising
from various courses of action cannot be predicted? Why
should strategies for designing and engineering AMAs
upon a logical silicon-based platform tell us anything about
the ethical capabilities and behavior of biochemical emo-
tional creatures, whose higher-order cognitive faculties
emerged from a long evolutionary process?
In theory there is no reason that AMAs need be anything
like humans, however, humans are the only model cur-
rently available for creatures with the general intelligence
to make moral decisions (although some animals demon-
strate higher order faculties and prosocial behavior). Thus,
approaches for building AMAs have initially focused on human capabilities.
Moreover, morality is human centered, directed at the needs and
concerns of our species. Evaluations of whether the
behavior of an AMA is acceptable will be made by
applying criteria similar to those people use for evaluating
each other’s behavior, with the proviso that AMAs be more
ethical than highly fallible humans (Allen et al. 2000).
Computer designers, engineers, and programmers have
discovered that simulating any complex activity requires a
very high degree of attention to detail. Incomplete simu-
lations are prone to fail or will act in unanticipated ways
when encountering new inputs. Indeed, it is very difficult to
build a simulation when there are many variables. Even
self-assembling systems (genetic algorithms, behavioral
robotics, artificial life, etc.) need all fundamental elements
represented in order to evolve.
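The point about representation can be made concrete with a minimal, purely illustrative sketch in Python. The names, genome, and fitness function below are invented for the example and are not drawn from any of the systems cited above; the sketch simply shows that an element left out of the fitness function is invisible to selection, no matter how long the population evolves.

```python
import random

TARGET = [1, 1, 1, 1, 1, 1, 1, 1]

def fitness(genome, scored_positions):
    # Only the positions represented in the fitness function are scored;
    # anything left out is invisible to selection.
    return sum(genome[i] == TARGET[i] for i in scored_positions)

def mutate(genome, rate=0.05):
    return [(1 - bit) if random.random() < rate else bit for bit in genome]

def evolve(scored_positions, generations=200, pop_size=40):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, scored_positions), reverse=True)
        parents = pop[: pop_size // 2]
        pop = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(pop, key=lambda g: fitness(g, scored_positions))

if __name__ == "__main__":
    full = evolve(scored_positions=range(8))
    partial = evolve(scored_positions=range(7))   # last element unrepresented
    print("all elements represented:", full)
    print("one element left out:    ", partial)   # last bit merely drifts
```

The analogous worry for AMAs is that any moral consideration missing from the representation or the feedback signal simply cannot be learned.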
Either simulating or evolving AMAs requires that
human moral behavior be analyzed with a high degree of
specificity. Such attention alone might contribute to a better
understanding of human ethics. However, the attention
required is multiplied as scholars come to recognize that
very little is understood regarding the function and impact
of many secondary cognitive mechanisms that support the
ability of people to make nuanced judgments. In order to
build an AMA each of these mechanisms, or their func-
tional equivalent, will need to be simulated (Allen et al. 2000).
The study of human ethics has presumed a preexisting
platform, a human mind and body, with the faculties that
most people possess. Thus, the study of both moral phi-
losophy and moral psychology tend to focus on those
mechanisms that provide an explicitly moral character to
human judgments and actions. A number of questions arise.
What model of reasoning best captures the ought of ethics?
Are moral judgments informed by an innate moral gram-
mar (Rawls 1971; Mikhail 2000)? Are intuitions (Haidt
2001), moral emotions (Haidt 2003), or heuristics (Gige-
renzer forthcoming) the primary determinants of some
forms of moral behavior?
Building an AMA draws attention to the importance of
cognitive mechanisms other than the capacity to reason for
the selection of appropriate courses of action. These
mechanisms perform many tasks including the activation
of influences on judgment or the providing of information
essential to the making of a moral decision. Understanding
the roles played by secondary mechanisms is crucial for
answering some essential questions that become apparent
when designing AMAs. For example, how does the agent
recognize that it is in a situation in which moral consid-
erations need to be factored into its choices and actions?
How will it discern what information is salient, and what
information is of little importance? What kind of behavior
might one expect from an entity that is capable of evalu-
ating information by consequentialist or deontological
criteria but lacks consciousness, emotions, a theory of
mind, social skills, or is not embodied in the natural world?
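A deliberately naive sketch can make the first of these questions vivid. The feature names, weights, and threshold below are all invented for illustration; choosing them well for open-ended, real-world situations is precisely the unsolved problem under discussion.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Situation:
    # A hypothetical feature bundle; real perception would supply far more.
    features: Dict[str, float] = field(default_factory=dict)

# Invented weights standing in for whatever mechanism flags moral relevance.
MORAL_SALIENCE_WEIGHTS = {
    "risk_of_physical_harm": 1.0,
    "consent_absent": 0.8,
    "deception_involved": 0.6,
    "property_at_stake": 0.3,
    "color_of_the_walls": 0.0,   # perceptible but morally irrelevant
}

def moral_salience(situation: Situation) -> float:
    """Aggregate weighted evidence that moral considerations are in play."""
    return sum(MORAL_SALIENCE_WEIGHTS.get(name, 0.0) * value
               for name, value in situation.features.items())

def needs_moral_deliberation(situation: Situation, threshold: float = 0.5) -> bool:
    return moral_salience(situation) >= threshold

if __name__ == "__main__":
    s = Situation(features={"risk_of_physical_harm": 0.7, "color_of_the_walls": 1.0})
    print(needs_moral_deliberation(s))  # True: harm risk dominates; wall color is ignored
```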
In addressing complex situations, humans have methods
for discerning the relevant information while not explicitly
considering most of the irrelevant information. To date,
engineers have been bedeviled by how to design computer
systems with a similar capacity, a challenge commonly referred to as the frame problem. Humans are also
embodied, understand the semantic content of symbols, and
have social skills and a theory of mind. Generally these and
othercapabilitiesare taken forgranted whenconsideringthe
manner in which people address moral dilemmas, though at
times they are acknowledged indirectly. Consider, for
example, Kant’s contention that will and autonomy are
necessary for an entity to be a moral agent. The ability to
function as an autonomous being, or the capacity to will,
suggest faculties beyond pure reason. However, little is
understood regarding the manner in which Kantian will and
autonomy are supported by and emerge from the capacity to
reason and other cognitive mechanisms.
Within the law, knowing right from wrong is a pre-
requisite for considering a person a moral agent and
holding the individual legally responsible for her actions.
Nevertheless, the cognitive mechanisms that make knowing
or understanding possible are far from fully understood.
When, for example, John Searle (1980) argued that his
‘Chinese Room’ thought experiment demonstrated that
computational systems were incapable of understanding,
incapable of appreciating the semantic content of symbols,
he encountered significant opposition that continues to this
day. Computational agents may or may not be capable of
understanding right and wrong. Either way, in the attempt
to implement computationally a semantic appreciation for
ethical concepts, philosophers will learn a great deal about
what human understanding is, and what it cannot be.
Consider consciousness. It is often presumed that moral
agents are conscious, but the exact role of consciousness in
the making of moral decisions is seldom discussed. Fur-
thermore, it is far from clear as to which of the functional
attributes associated with consciousness are most important
for making satisfactory moral decisions. In an article titled,
‘‘Ethics and consciousness in artificial agents,’’ Torrance
(2008) proposes that being conscious and having the
capacity to be empathetic go hand-in-hand.
Computationally controlled systems, however advanced in their cognitive or informational capacities, are … unlikely to possess sentience and hence will fail to be able to exercise the kind of empathic rationality that is a prerequisite for being a moral agent (Torrance 2008).
One might disagree with Torrance that sentience is
necessary for empathic rationality. Nevertheless, it would
be extremely valuable, if not essential, for an AMA to have
the ability to understand and share the feelings of others.
However, only a few of the affective and cognitive
mechanisms that support being empathetic are known.
It has been suggested that artificial agents may be
capable of performing the functional tasks associated with
consciousness even if there is no way to demonstrate that
they also have the phenomenal experience humans asso-
ciate with being conscious (Franklin 2003). It may be
possible that sophisticated robots can perform in a morally
acceptable manner without being phenomenally conscious.
But it is more probable that some form of experiencing is
essential for satisfactorily processing all the information
that impinges upon complicated, ethically charged choices.
Indeed, it may be impossible to build AMAs that possess
the functional attributes of consciousness without also
having some form of phenomenal experience.
How should we understand the role of these secondary
cognitive mechanisms in moral decision making? Are the
additional mechanisms merely sources of information, the
fodder for those cognitive mechanisms that do the heavy
lifting in evaluating various courses of action? Or do these
secondary mechanisms enter into the making of moral
judgments in ways that alter or limit the role of reason?
There are many cognitive tools that contribute to the
ability of people to arrive at satisfactory judgments in real
time while consciously and unconsciously processing a
tremendous amount of information that might impinge
upon the challenge at hand. Failing to implement any of
these mechanisms or functionally equivalent mechanisms
could be disastrous when building (ro)bots that must
interact safely with humans in the commerce of daily life.
The failure of any of these mechanisms can undermine the
ability of a human to function as a moral agent.
The development of AMAs will occur over a long period of time. Systems will be built for practical applications and with the mechanisms that designers and engineers believe will make the artificial agent sensitive to the moral considerations that arise in that context. In testing these systems, engineers will learn about the adequacy of the mechanisms that have been implemented. Engineers may even discover dangers posed by systems that lack simulations of specific cognitive mechanisms, such as emotions or a theory of mind. Perhaps we will discover that without such mechanisms, artificial agents will fail to be sufficiently sensitive to moral considerations. Furthermore, it may even turn out to be impossible to implement functional equivalents of some of these mechanisms in artificial agents. In other words, the staged development of AMAs will offer opportunities to study what each secondary cognitive mechanism adds to the general and moral intelligence of artificial agents. In turn, this will help us appreciate the role each mechanism plays in human decision making.
Top-down and bottom-up approaches
Most of the work to date on developing autonomous
(ro)bots that function satisfactorily in responding to moral
challenges is directed at systems that perform discrete tasks
within limited contexts, such as the administration of
informed consent by healthcare professionals (Anderson
et al. 2006b). Nevertheless, the development of artificial
entities functioning as autonomous moral agents has
already spawned a rich library of academic reflection on
how more sophisticated (ro)bots might be developed.
The approaches for building artificial agents capable of
making moral decisions fall within two broad categories.
Top-down approaches attempt to implement the deonto-
logical and consequentialist theories favored by moral
philosophers. The bottom-up approaches favor evolving or
simulating the mechanisms out of which an aptitude for
appropriate moral judgment might emerge. In this sense,
the bottom-up approaches are more in line with the study of
moral psychology than moral philosophy (Allen et al.
2000; Wallach and Allen 2009).
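The contrast can be illustrated with a minimal sketch, assuming hypothetical action descriptions and invented rules and feedback rather than any published architecture. The top-down component applies explicitly encoded duties to whatever action it is handed; the bottom-up component encodes no principles at all and merely adjusts a scoring function in response to approval or disapproval.

```python
from typing import Callable, Dict, List

Action = Dict[str, float]   # hypothetical: an action described by its features

# Top-down: explicit principles are encoded first and applied to any action.
def top_down_permissible(action: Action, duties: List[Callable[[Action], bool]]) -> bool:
    """An action is permissible only if it violates none of the encoded duties."""
    return all(duty(action) for duty in duties)

no_harm = lambda a: a.get("expected_harm", 0.0) == 0.0
no_deception = lambda a: not a.get("involves_deception", False)

# Bottom-up: no principles are encoded; a scoring function is shaped by feedback.
class LearnedEvaluator:
    def __init__(self) -> None:
        self.weights: Dict[str, float] = {}

    def score(self, action: Action) -> float:
        return sum(self.weights.get(k, 0.0) * v for k, v in action.items())

    def update(self, action: Action, approval: float, lr: float = 0.1) -> None:
        # Approval or disapproval from trainers or the environment nudges the weights.
        for k, v in action.items():
            self.weights[k] = self.weights.get(k, 0.0) + lr * approval * v

if __name__ == "__main__":
    act = {"expected_harm": 0.2, "involves_deception": 0.0}
    print(top_down_permissible(act, [no_harm, no_deception]))  # False: harm duty violated
    learner = LearnedEvaluator()
    learner.update(act, approval=-1.0)   # disapproval lowers the score of similar actions
    print(learner.score(act))
```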
One question researchers have naturally asked is whe-
ther the top-down models of moral reasoning can be
implemented computationally. Can the top-down imple-
mentation of norms, standards, or theories (the Ten-Com-
mandments, utilitarianism, the categorical imperative, or
even Asimov’s Three Laws of Robotics) guide the design
of a system’s control architecture? The verdict is out on
whether any model of moral reasoning can be fully
instantiated, but preliminary analysis indicates that each
model of moral reasoning suffers from three overlapping problems:
1. Limitations already recognized by moral philosophers: For example, in a utilitarian calculation, how can consequences be calculated when information is limited and the effects of actions cascade in never-ending interactions (the sketch following this list illustrates how quickly such a cascade grows)? Which consequences should be factored into the maximization of utility? Is there a …
2. The ‘‘frame problem’’: Each model of moral reasoning suffers from some version of the frame problem: computational load due to requirements for psychological knowledge, knowledge of the effects of actions in the world, and estimating the sufficiency of the initial …
3. The need for background information: What mechanisms will the system require in order to acquire the information it needs to make its calculations? How does one ensure that this information is up to date …
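The computational face of the first two problems is easy to state, as the following sketch illustrates. The numbers of alternatives and the planning horizon are arbitrary and serve only to show the growth rate a naive consequentialist evaluation would face.

```python
from itertools import product

def cascade_size(actions_per_step: int, horizon: int) -> int:
    """Number of outcome branches a naive consequentialist must examine."""
    return actions_per_step ** horizon

if __name__ == "__main__":
    # Enumerating outcomes is feasible only at toy scales...
    tiny = list(product(range(3), repeat=4))      # 3 alternatives, 4 steps ahead
    print(len(tiny), "branches at horizon 4")
    # ...and explodes as the horizon grows: with 6 alternatives per step there are
    # already more than 3.6 * 10^15 branches at horizon 20.
    for horizon in (5, 10, 20):
        print(horizon, cascade_size(actions_per_step=6, horizon=horizon))
```

Bounding the horizon, the affected parties, and the facts worth updating is the practical face of the frame problem named in the list above.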
Furthermore, using the implementation of consequen-
tialist reasoning as an example, would this alone ensure
that an artificial agent be designated as an acceptable moral
agent? Probably not. Consider that many moral philoso-
phers criticize maximizing utility for the possibility that it
can lead to actions that would violate the rights of indi-
viduals. Would people be any more likely to accept the
utilitarian calculations of a machine, particularly if those
calculations led to actions that cause harm or even death to
a few individuals?
If a model of moral reasoning cannot be implemented
computationally, then what is its function? How can a
theory that does not lead to action procedures be a moral
guide? If attempting to computerize Kant’s categorical
imperative, for example, does not lead to algorithmic
procedures for determining which actions are correct,
should it be considered a guide by people confronted with
selecting appropriate courses of action for real world
challenges? Questions such as these are not new to moral
philosophers. However, the inherent limitations of ethical
theories are underscored if even purely logical machines
cannot use them to derive appropriate action procedures.
One strength of a top-down approach to ethics is that
ethical goals are defined so broadly that they cover
countless situations. But a weakness of top-down approa-
ches lies in goals being defined so broadly or abstractly that
specific applications are debatable. For example, the
greatest good for the greatest numbers does not tell us
which agents should be included in our calculation. Jeremy
Bentham (1823) and Peter Singer (1990) argue that the
suffering of animals should be factored into one’s calcu-
lations, but not all utilitarians agree on this point. While more static definitions can lead to situational inflexibility, using heuristic or affective methods for solving the ‘‘frame’’ problem tends to compromise the theoretical clarity of a top-down theory of ethics.
Bottom-up approaches for developing moral machines
suffer from comparable concerns. Bottom-up approaches
model dynamics of ethical interactions such as learning,
emotion, sociability, and the building of trust. If they use a
prior theory at all, bottom-up approaches do so only as a
way of specifying the task (goal) for the system, but not as
a way of specifying an implementation method or control
structure (Wallach et al. 2008; Wallach and Allen 2009).
Bottom-up approaches draw upon the research of game
theorists and evolutionary psychologists in facilitating the
emergence of values in artificial systems. They can also
attempt to implement insights derived from theories about
learning and the work of developmental psychologists in
building associative learning platforms and behavior based
robots. Eventually it may be possible to build artificial
learning platforms that simulate theories regarding the
development of moral reasoning, such as those proposed by
Lawrence Kohlberg (1981, 1984), or theories regarding the
development of moral identity and moral character
(Lapsley and Narvaez 2004; Nucci and Narvaez 2008).
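The game-theoretic strand of this work can be illustrated with a small sketch of the iterated Prisoner's Dilemma in the spirit of Axelrod and Hamilton (1981). The payoff values and strategies below are standard textbook examples, not a reconstruction of their tournament; the point is only that reciprocity-based cooperation can hold its own against less cooperative strategies.

```python
import random

# Payoffs for the row player in the one-shot Prisoner's Dilemma: T > R > P > S.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    return "C" if not history else history[-1][1]   # copy the partner's last move

def always_defect(history):
    return "D"

def random_player(history):
    return random.choice("CD")

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))   # (own move, partner's move)
        history_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    print("TFT vs TFT      ", play(tit_for_tat, tit_for_tat))
    print("TFT vs defector ", play(tit_for_tat, always_defect))
    print("TFT vs random   ", play(tit_for_tat, random_player))
```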
One strength of a bottom-up approach is the ability to
dynamically integrate input from discrete subsystems or
modules. But even so, it can be difficult to scale these
approaches in order to assemble large numbers of discrete
components into a functioning integrated system. A weak-
ness of building AMAs from the bottom-up alone lies in
defining a moral task for the system. Maximizing goodness or achieving justice can be rather vague ends when the agent is not given a clearly specified end accompanied by a top-down definition.
Either top-down approaches or bottom-up approaches
may be helpful in building artificial agents capable of
responding to moral challenges in limited contexts. But
neither alone is likely to be adequate for building autono-
mous (ro)bots with full capacity to make appropriate
choices in many varied and unexpected contexts. Eventu-
ally hybrid AMAs will need to be developed. These
hybrids will maintain the dynamic and flexible morality
offered by bottom-up approaches that accommodate
diverse inputs, while subjecting the evaluation of choices
and actions to top-down principles that represent ideals
humans strive to meet.
While it is possible to imagine a hybrid AMA that
evaluates choices and behavior by consequentialist or
deontic criteria, virtue ethics may be a particularly good
model for a hybrid agent. Virtue theory does not view
moral behavior as being the result of either rules or con-
sequences. Virtue theorists emphasize the importance of
developing good character or good habits. Good actions
flow naturally from good character. In line with Aristotle,
contemporary virtue theorists view the development of
character as a slow learning process, dependent on the
support and direction of a virtuous community. While the
virtues are acquired from the bottom-up through experi-
ence and habit, the virtues are supported and may be
evaluated from the top-down.
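A rough sketch of this hybrid, virtue-like arrangement might look as follows: dispositions are acquired bottom-up from feedback, while explicitly encoded principles evaluate candidate actions from the top down. All names, features, and numbers are invented for illustration and stand in for far richer mechanisms.

```python
from typing import Callable, Dict, List

Action = Dict[str, float]   # hypothetical feature description of a candidate action

class HabitModel:
    """Bottom-up component: dispositions adjusted by experience and feedback."""
    def __init__(self) -> None:
        self.weights: Dict[str, float] = {}

    def preference(self, action: Action) -> float:
        return sum(self.weights.get(k, 0.0) * v for k, v in action.items())

    def reinforce(self, action: Action, feedback: float, lr: float = 0.05) -> None:
        for k, v in action.items():
            self.weights[k] = self.weights.get(k, 0.0) + lr * feedback * v

def choose(candidates: List[Action], habits: HabitModel,
           principles: List[Callable[[Action], bool]]) -> Action:
    # Top-down component: explicit principles veto otherwise preferred actions.
    permitted = [a for a in candidates if all(p(a) for p in principles)]
    pool = permitted or candidates          # if everything is vetoed, fall back
    return max(pool, key=habits.preference)

if __name__ == "__main__":
    habits = HabitModel()
    habits.reinforce({"helps_person": 1.0}, feedback=1.0)   # praised behavior is reinforced
    no_harm = lambda a: a.get("expected_harm", 0.0) < 0.1
    options = [{"helps_person": 1.0, "expected_harm": 0.5},
               {"helps_person": 0.6, "expected_harm": 0.0}]
    print(choose(options, habits, [no_harm]))   # the harmful option is filtered out
```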
AMAs will also require subsystems (the secondary mech-
anisms discussed above). These secondary faculties may
not be explicitly dedicated to moral judgment, moral rea-
soning, or perceived as being intrinsic to virtuous charac-
ter. But they may provide some of the input necessary for
choosing and acting appropriately. Emotions (at least
simulated emotions) and social skills are among the
suprarational capabilities that will be needed to function
properly in many contexts. It may also be required that the
agent is embodied in the world, is conscious, has a theory
of mind, is capable of being empathetic, and understands
the semantic content of information.
When suprarational faculties are integrated into the full
system, their design and inherent flaws or limitations are
likely to dynamically affect the system’s output. Thus,
modules only tangential to explicit moral decision making
mechanisms can alter behavior in ways that make choices
and actions more acceptable. But the same mechanisms can
also, under certain circumstances, lead to behavior that is
unacceptable when evaluated by external or top-down
criteria. This, for example, has been a challenge posed by
emotions, which can both facilitate good moral behavior
and can lead to all the sins of biasing judgments to which
the Stoics have alerted us.
An AMA will need to factor a broad array of moral
considerations into its choices and actions. For such an
agent, acting in a manner that will not harm humans or
other agents worthy of moral consideration is much more
than applying an algorithm to a body of information.
Designing AMAs draws attention to the fact that moral
judgment emerges out of the interaction of many mecha-
nisms. It will be difficult to isolate those mechanisms
explicitly engaged in decision making from those that
support the decision making process, and those that con-
tribute information which is processed.
Top-down, bottom-up, and suprarational mechanisms all
contribute to the ability of humans to make moral deci-
sions. However, it is hard to recognize this fact since
human decision making has seldom been approached in a comprehensive manner.
Conclusion: a comprehensive model
This article has underscored ways in which the thought
experiment of building a computational agent capable of
functioning as a full moral agent requires thinking com-
prehensively about the many faculties or cognitive mech-
anisms that contribute toward that goal. But can the
importance of these many mechanisms be recognized
without the particular thought experiment posed by
designing moral machines? Certainly. Will they be suffi-
ciently taken into account? I suspect not. Without a plat-
form for testing the adequacy of a particular model of
moral decision making, it can be quite easy to overlook the contribution of particular mechanisms.
A computational representation of moral decision
making will entail a comprehensive model. To the extent
that human higher-order faculties can be represented
computationally, a comprehensive model of those mecha-
nisms provides a platform for testing the accuracy or via-
bility of theories regarding the manner in which humans
arrive at satisfactory decisions and act in ways that minimize harm.
Does a comprehensive model of moral decision making
exist? No. Wallach, Franklin, and Allen (Wallach and
Allen 2009, Wallach et al. forthcoming) have suggested
one framework based on Franklin’s LIDA (Franklin and
Patterson 2006), a model for an agent with artificial general
intelligence. Franklin and his colleagues at the University
of Memphis are particularly concerned with ensuring that the model is compatible with recent research findings by cognitive scientists and neuroscientists.
This LIDA-based model for accommodating moral
considerations has been proposed as a way of helping
theorists to begin thinking about how the many cognitive
modules that contribute to moral decision making work
together. But the task has just begun. The model proposed
is quite rudimentary, and hopefully a more satisfactory
framework will evolve in coming years.
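The following sketch is not LIDA, whose details lie outside this paper, but a loose toy illustration of the general idea: moral considerations enter as one module among several within a repeating perceive-propose-select cycle, rather than as a single dedicated reasoning faculty. All module names, features, and actions are invented.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

Action = Tuple[str, Dict[str, float]]   # (label, hypothetical feature scores)

@dataclass
class WorkingMemory:
    percepts: Dict[str, float] = field(default_factory=dict)
    candidates: List[Action] = field(default_factory=list)
    selected: Optional[Action] = None

def perceive(wm: WorkingMemory, sensors: Dict[str, float]) -> None:
    wm.percepts.update(sensors)

def propose(wm: WorkingMemory) -> None:
    # Stand-in for the planning and behavior modules that would generate options.
    risk = wm.percepts.get("bystander_risk", 0.0)
    wm.candidates = [("intervene", {"expected_harm": risk, "helps": 1.0}),
                     ("wait", {"expected_harm": 0.0, "helps": 0.0})]

def moral_filter(wm: WorkingMemory) -> None:
    # One module among many: it merely re-ranks options by expected harm,
    # leaving emotion, social knowledge, and the rest out of the sketch.
    wm.candidates.sort(key=lambda a: a[1].get("expected_harm", 0.0))

def select(wm: WorkingMemory) -> Action:
    wm.selected = wm.candidates[0]
    return wm.selected

def cognitive_cycle(sensors: Dict[str, float]) -> Action:
    wm = WorkingMemory()
    perceive(wm, sensors)
    propose(wm)
    moral_filter(wm)
    return select(wm)

if __name__ == "__main__":
    print(cognitive_cycle({"bystander_risk": 0.8}))   # ('wait', ...) wins on lower harm
```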
If I am correct, there has been a failure in our under-
standing of human ethics resulting from the tendency to
focus on particular faculties dedicated to this task rather
than recognizing that moral acumen emerges from a host of
cognitive mechanisms. In sum, these mechanisms facilitate
the agent’s ability to accommodate a broad array of moral
considerations. Sensitive agents factor untold consider-
ations into their choices and actions. Each of those con-
siderations might be evident in a feeling or a valence in the
strength of a relationship between objects. But all of those
considerations either merge into a composite feeling or
conflict in ways that prompt the need for further attention
and reflection. Certainly some considerations, such as not
killing, physically harming, or causing serious mental
distress to other people, must be given significant weight.
Necessity requires that an agent function as an inte-
grated being. The moral challenge lies in doing so, while
accommodating as many considerations as possible in its
nuanced expressions and actions.
Developing AMAs will contribute to a discipline dis-
tinct from either moral philosophy or moral psychology, a
discipline dedicated to understanding how agents make
successful moral judgments, which in turn free them to
pursue their goals and purposes. The challenge of building
moral machines will promote a high degree of specificity in
the examination of human moral behavior.
Perhaps it is too grandiose to suggest that machine
morality will be a partner equal to moral philosophy and
the empirical study of moral psychology in elucidating a
comprehensive understanding of how moral decisions are
and should be made. Nevertheless, research directed at
developing AMAs will be significant in suggesting, for-
mulating, and testing comprehensive models of moral decision making.
Acknowledgments Colin Allen and Iva Smit contributed to many of the ideas in this paper during our collaborative work together. I am also most appreciative for the many helpful suggestions from four anonymous reviewers.
References
Allen, C. (2002). Calculated morality: Ethical computing in the limit. In I. Smit & G. Lasker (Eds.), Cognitive, emotive and ethical aspects of decision making and human action, vol. I. Baden-Baden, Germany / Windsor, Ontario: IIAS.
Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future
artificial moral agent. Journal of Experimental and Theoretical
Artificial Intelligence, 12, 251–261.
Anderson, M., & Anderson, S. (2006a). Machine ethics. IEEE Intelligent Systems, 21(4), 10–11.
Anderson, M., Anderson, S., & Armen, C. (2006b). An approach to computing ethics. IEEE Intelligent Systems, 21(4), 56–63.
Axelrod, R., & Hamilton, W. (1981). The evolution of cooperation.
Science, 211, 1390–1396.
Bentham, J. [1823] 2008. Introduction to the principles of morals and legislation. Whitefish, MT: Kessinger Publishing, LLC.
Danielson, P. (1992). Artificial morality: Virtuous robots for virtual
games. New York: Routledge.
Darley, J., & Batson, D. (1973). From Jerusalem to Jericho: A study
of situational and dispositional variables in helping behavior.
Journal of Personality and Social Psychology, 27, 100–108.
de Waal, F. (1996). Good natured: The evolution of right & wrong in humans and other animals. Cambridge, MA: Harvard University Press.
Flack, J., & de Waal, F. (2000). ‘Any Animal Whatever’: Darwinian building blocks of morality in monkeys and apes. In L. Katz (Ed.), Evolutionary origins of morality (pp. 1–30). Imprint Academic.
Franklin, S. (2003). IDA: A conscious artifact? Journal of Con-
sciousness Studies, 10, 47–66.
Franklin, S., & Patterson, F. G. (2006). The LIDA architecture: Adding new modes of learning to an intelligent, autonomous, software agent. IDPT-2006 Proceedings (Integrated Design and Process Technology). Society for Design and Process Science.
Gigerenzer, G. (2010). Moral satisficing: Rethinking morality as
bounded rationality. TopiCS (forthcoming).
Greene, J., Sommerville, B., Nystrom, L., Darley, J., & Cohen, J. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108.
Greenwald, A., & Banaji, M. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102(1), 4–27.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.
Haidt, J. (2003). The moral emotions. In R. J. Davidson, K. R.
Scherer, & H. H. Goldsmith (Eds.), Handbook of affective
sciences (pp. 852–870). Oxford: Oxford University Press.
Hamilton, W. (1964a). The genetical evolution of social behaviour I. Journal of Theoretical Biology, 7, 1–16.
Hamilton, W. (1964b). The genetical evolution of social behaviour II. Journal of Theoretical Biology, 7, 17–52.
Hauser, M. (2006). Moral minds: How nature designed our universal
sense of right and wrong. New York: Ecco.
Hume, D. [1739–1740] 2009. A treatise on human nature: Being an
attempt to introduce the experimental method of reasoning into
moral subjects. Ithaca: Cornell University Press.
Isen, A., & Levin, P. F. (1972). Effect of feeling good on helping:
Cookies and kindness. Journal of Personality and Social
Psychology, 21, 384–388.
Kohlberg, L. (1981). Essays on moral development, vol. 1: The philosophy of moral development. San Francisco: Harper & Row.
Kohlberg, L. (1984). Essays on moral development, vol. 2: The psychology of moral development. San Francisco: Harper & Row.
Lapsley, D., & Narvaez, D. (Eds.). (2004). Moral development, self, and identity. Mahwah, NJ: Lawrence Erlbaum.
Mikhail, J. (2000). Rawls’ linguistic analogy: A study of the ‘‘generative grammar’’ model of moral theory described by John Rawls in A Theory of Justice. PhD dissertation, Cornell University.
Miller, G. (1956). The magical number seven, plus or minus two:
Some limits on our capacity for processing information. The
Psychological Review, 63(2), 81–97.
Moore, G. E. [1903] 2008. Principia Ethica. Cambridge, UK: Cambridge University Press.
Nucci, L., & Narvaez, D. (2008). Handbook of moral and character
education. New York: Routledge.
Piaget, J. (1972). Judgment and reasoning in the child. Totowa, NJ:
Littlefield, Adams and Company.
Rawls, J. [1971] 1999. A theory of justice. Cambridge, MA: Harvard University Press.
Sanfey, A., Rilling, J., Aronson, J., Nystrom, L., & Cohen, J. (2003)
The neural basis of economic decision-making in the Ultimatum
Game. Science, 300(5626), 1755–1758.
Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain
Sciences, 3(3), 417–458.
Simon, H. (1957). A behavioral model of rational choice. In Models of man, social and rational: Mathematical essays on rational human behavior in a social setting. New York: Wiley.
Simon, H. (1982). Models of bounded rationality, vols. 1 and 2.
Cambridge, MA: MIT Press.
Singer, P. (1990). Animal liberation. New York: New York Review of Books.
Smith, A. [1759] 2004. The theory of moral sentiments. Whitefish, MT: Kessinger Publishing, LLC.
Torrance, S. (2008). Ethics and consciousness in artificial agents. AI & Society, 22(4), 495–521.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty:
Heuristics and biases. Science, 185, 1124–1131.
Uleman, J., & Bargh, J. (Eds.). (1989). Unintended thought. New York: Guilford Press.
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots
right from wrong. New York: Oxford University Press.
Wallach, W., Allen, C., & Smit, I. (2008). Machine morality: Bottom-
up and top-down approaches for modelling human moral
faculties. AI and Society, 22(4), 565–582.
Wallach, W., Franklin, S., & Allen, C. (2010). A conceptual and
computational model of moral decision making in human and
artificial agents. TopiCS (forthcoming).
Wilson, E. (1975). Sociobiology: The new synthesis. Cambridge, MA:
Harvard University Press.
Yudkowsky, E. (2001). What is Friendly AI? Available online at