Robot minds and human ethics: the need for a comprehensive model of moral decision making

Yale University Institution for Social and Policy Studies, Interdisciplinary Center For Bioethics, P.O. Box 208209, New Haven, CT 06520-8209, USA
Ethics and Information Technology (Impact Factor: 0.56). 09/2010; 12(3):243-250. DOI: 10.1007/s10676-010-9232-8


Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical
behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought”
of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and
yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms,
e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However,
assembling a system from the bottom-up which is capable of accommodating moral considerations draws attention to the importance
of a much wider array of mechanisms in honing moral intelligence. Moral machines need not emulate human cognitive faculties
in order to function satisfactorily in responding to morally significant situations. But working through methods for building
AMAs will have a profound effect in deepening an appreciation for the many mechanisms that contribute to a moral acumen, and
the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans
arrive at satisfactory moral judgments.

Keywords: Moral psychology, Moral agent, Machine ethics, Moral philosophy, Decision making, Moral judgment, Virtues, Computers, Emotions, Robots



Available from: Wendell Wallach, Oct 01, 2015
  • "However, at least some of the agent's understanding of mental states and ethical values should be learned as a process of development. Thus, we are aiming to combine a 'top-down' with a 'bottom-up' approach to the design of an ethical agent [16]."
    AISB/IACAP 2012 Symposium on Theory of Mind and Moral Cognition; 06/2012
  • ABSTRACT: With the increasing dependence on autonomously operating agents and robots, the need for ethical machine behavior rises. This paper presents a moral reasoner that combines connectionism, utilitarianism, and ethical theory about moral duties. Its moral decision making matches the analysis of expert ethicists in the health domain. This may be useful in many applications, especially where machines interact with humans in a medical context. Additionally, when connected to a cognitive model of emotional intelligence and affective decision making, one can explore how moral decision making impacts affective behavior.
  • ABSTRACT: Artificial intelligence, the "science and engineering of intelligent machines", has yet to create even a simple "Advice Taker" [McCarthy 1959]. We have previously argued [Waser 2011] that this is because researchers are focused on problem solving or the rigorous analysis of intelligence (or arguments about consciousness) rather than the creation of a "self" that can "learn" to be intelligent. Therefore, following expert advice on the nature of self [Llinas 2001; Hofstadter 2007; Damasio 2010], we embarked upon an effort to design and implement a self-understanding, self-improving loop as the totality of a (seed) AI. As part of that effort, we decided to follow up on Richard Dawkins' [1976] speculation that "perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself" by defining a number of axioms and following them through to their logical conclusions. The results, combined with an enactive approach, yielded many surprising and useful implications for further understanding consciousness, self, and "free will" that continue to pave the way towards the creation of safe/moral autopoiesis.
    International Journal of Machine Consciousness 06/2013; 05(1-01):59-74. DOI:10.1142/S1793843013400052
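One of the citing abstracts above describes a moral reasoner that combines utilitarian aggregation with weighted moral duties for the health-care domain. The cited system's actual duties, weights, and connectionist machinery are not given here; the following is only a minimal sketch of the general weighted-duty idea, with all names, weights, and scenario values being illustrative assumptions:

```python
from dataclasses import dataclass

# Hypothetical duties with illustrative weights (not the cited system's values).
# Non-maleficence is weighted more heavily, reflecting the common intuition
# that causing harm counts for more than failing to help.
DUTY_WEIGHTS = {
    "autonomy": 1.0,
    "beneficence": 1.0,
    "non_maleficence": 1.5,
}

@dataclass
class Action:
    name: str
    # How strongly the action satisfies (+) or violates (-) each duty, in [-1, 1].
    duty_levels: dict

def moral_value(action: Action) -> float:
    """Utilitarian-style aggregation: weighted sum of duty satisfaction levels."""
    return sum(DUTY_WEIGHTS[duty] * level
               for duty, level in action.duty_levels.items())

def choose_action(actions: list) -> Action:
    """Select the action with the highest aggregate moral value."""
    return max(actions, key=moral_value)

# Illustrative scenario: a care robot deciding how to respond when a patient
# refuses medication. The numbers are made up for the example.
accept_refusal = Action("accept refusal",
                        {"autonomy": 1.0, "beneficence": -0.5, "non_maleficence": 0.0})
try_again = Action("try again later",
                   {"autonomy": -0.3, "beneficence": 0.7, "non_maleficence": 0.0})

best = choose_action([accept_refusal, try_again])
```

With these particular weights the reasoner favors respecting the refusal (0.5 vs. 0.4); shifting weight toward beneficence would reverse that choice, which is how such a model can be tuned to match expert ethicists' judgments.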