Robot minds and human ethics: the need for a comprehensive model of moral decision making

Wendell Wallach

Yale University Institution for Social and Policy Studies, Interdisciplinary Center for Bioethics, P.O. Box 208209, New Haven, CT 06520-8209, USA
Ethics and Information Technology, 09/2010; 12(3):243-250. DOI: 10.1007/s10676-010-9232-8


Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the "ought" of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However, assembling from the bottom up a system capable of accommodating moral considerations draws attention to a much wider array of mechanisms that hone moral intelligence. Moral machines need not emulate human cognitive faculties in order to respond satisfactorily to morally significant situations. But working through methods for building AMAs will profoundly deepen appreciation of the many mechanisms that contribute to moral acumen, and of the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans arrive at satisfactory moral judgments.

Keywords: Moral psychology · Moral agent · Machine ethics · Moral philosophy · Decision making · Moral judgment · Virtues · Computers · Emotions · Robots

Cited by:
  • "However, at least some of the agent's understanding of mental states and ethical values should be learned as a process of development. Thus, we are aiming to combine a 'top-down' with a 'bottom-up' approach to the design of an ethical agent [16]."
    ABSTRACT: We consider the design of an artificial agent that can determine whether a human action is acceptable according to ethical norms and values that typical humans would use in the same situation. Such a decision often depends on whether an action was intended, or on what the actor knows. The decision-maker therefore needs to reason about the mental states of others, a capability known as "Theory of Mind" (ToM). Humans understand moral scenarios through a rich grasp of mental concepts acquired from experience. In this paper, we argue that many of these concepts can be defined in terms of information processing and mental states in a generic sense, and can be implemented computationally. For example, affective states may be defined in terms of goals, resources, and degree of control. We argue that an agent can acquire some understanding of mental concepts and moral norms by developing models of its own information processing at different levels of abstraction and using these models to simulate other minds.
    AISB/IACAP 2012 Symposium on Theory of Mind and Moral Cognition; 06/2012
    (An illustrative sketch of the goals/resources/control appraisal idea appears after this list.)
  • ABSTRACT: With the increasing dependence on autonomously operating agents and robots, the need for ethical machine behavior rises. This paper presents a moral reasoner that combines connectionism, utilitarianism, and ethical theory about moral duties. Its moral decision making matches the analysis of expert ethicists in the health domain. This may be useful in many applications, especially where machines interact with humans in a medical context. Additionally, when connected to a cognitive model of emotional intelligence and affective decision making, it can be used to explore how moral decision making impacts affective behavior.
    (An illustrative sketch of such duty weighing appears after this list.)
  • ABSTRACT: What roles or functions does consciousness fulfill in the making of moral decisions? Will artificial agents capable of making appropriate decisions in morally charged situations require machine consciousness? Should the capacity to make moral decisions be considered an attribute essential for being designated a fully conscious agent? Research on the prospects for developing machines capable of making moral decisions and research on machine consciousness have developed as independent fields of inquiry. Yet there is significant overlap. Both fields are likely to progress through the instantiation of systems with artificial general intelligence (AGI). Certainly special classes of moral decision making will require attributes of consciousness, such as being able to empathize with the pain and suffering of others. But in this article we will propose that consciousness also plays a functional role in making most if not all moral decisions. Work by the authors of this article with LIDA, a computational and conceptual model of human cognition, will help illustrate how consciousness can be understood to serve a very broad role in the making of all decisions, including moral decisions.
    International Journal of Machine Consciousness, 06/2011; 3(1):177-192. DOI: 10.1142/S1793843011000674
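
The first citing abstract proposes defining affective states in terms of goals, resources, and degree of control. Below is a minimal Python sketch of how such an appraisal might be mapped to coarse affective labels; it is purely illustrative and not taken from that paper, and the class name, thresholds, and labels are all hypothetical:

    # Purely illustrative: classify a coarse affective state from an appraisal
    # of goal progress, available resources, and degree of control.
    from dataclasses import dataclass

    @dataclass
    class Appraisal:
        goal_progress: float  # -1.0 (failing) .. 1.0 (succeeding)
        resources: float      #  0.0 (depleted) .. 1.0 (ample)
        control: float        #  0.0 (none)     .. 1.0 (full)

    def affective_state(a: Appraisal) -> str:
        """Map an appraisal to a coarse affective label (thresholds arbitrary)."""
        if a.goal_progress < 0.0 and a.control < 0.3:
            # The goal is failing and the agent cannot influence the outcome.
            return "distress" if a.resources < 0.5 else "frustration"
        if a.goal_progress < 0.0:
            # The goal is failing but the agent retains control: mobilized effort.
            return "determination"
        return "satisfaction" if a.goal_progress > 0.5 else "neutral"

    print(affective_state(Appraisal(goal_progress=-0.6, resources=0.2, control=0.1)))
    # -> distress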
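
The second citing abstract describes a reasoner that balances moral duties in a utilitarian fashion. A minimal sketch of that general idea as a weighted sum over prima facie duties follows; the duty names, weights, and toy scenario are hypothetical and not drawn from the cited paper:

    # Purely illustrative: choose among actions by a weighted sum over
    # prima facie duties; weights and satisfaction levels are hypothetical.
    DUTY_WEIGHTS = {"non_maleficence": 3.0, "beneficence": 2.0, "autonomy": 1.5}

    def score(duty_levels: dict) -> float:
        """Weighted sum of duty satisfaction levels, each in [-1, 1]."""
        return sum(DUTY_WEIGHTS[d] * level for d, level in duty_levels.items())

    def choose(actions: dict) -> str:
        """Return the action whose duty profile has the highest weighted score."""
        return max(actions, key=lambda name: score(actions[name]))

    # Toy health-care scenario: a patient refuses a clearly beneficial treatment.
    actions = {
        "accept_refusal":  {"non_maleficence": -0.5, "beneficence": -0.5, "autonomy": 1.0},
        "try_to_persuade": {"non_maleficence":  0.5, "beneficence":  0.5, "autonomy": -0.5},
    }
    print(choose(actions))  # -> try_to_persuade (score 1.75 vs. -1.0)

Weighting non-maleficence most heavily lets harm avoidance override autonomy in close calls, which is the kind of duty trade-off such reasoners are built to capture.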