Article

Robot minds and human ethics: the need for a comprehensive model of moral decision making

Wendell Wallach
7 Loeffler Road, Bloomfield, CT 06002, USA; Yale University Institution for Social and Policy Studies, Interdisciplinary Center for Bioethics, P.O. Box 208209, New Haven, CT 06520-8209, USA
Ethics and Information Technology, September 2010, 12(3): 243-250. DOI: 10.1007/s10676-010-9232-8

ABSTRACT: Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the "ought" of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However, assembling a system from the bottom up that is capable of accommodating moral considerations draws attention to the importance of a much wider array of mechanisms in honing moral intelligence. Moral machines need not emulate human cognitive faculties in order to respond satisfactorily to morally significant situations. But working through methods for building AMAs will profoundly deepen appreciation of the many mechanisms that contribute to moral acumen, and of the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans arrive at satisfactory moral judgments.

Keywords: Moral psychology · Moral agent · Machine ethics · Moral philosophy · Decision making · Moral judgment · Virtues · Computers · Emotions · Robots

Related publication:
ABSTRACT: Artificial intelligence, the "science and engineering of intelligent machines", has yet to create even a simple "Advice Taker" [McCarthy 1959]. We have previously argued [Waser 2011] that this is because researchers are focused on problem-solving or the rigorous analysis of intelligence (or arguments about consciousness) rather than on the creation of a "self" that can "learn" to be intelligent. Therefore, following expert advice on the nature of self [Llinas 2001; Hofstadter 2007; Damasio 2010], we embarked upon an effort to design and implement a self-understanding, self-improving loop as the totality of a (seed) AI. As part of that, we decided to follow up on Richard Dawkins' [1976] speculation that "perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself" by defining a number of axioms and following them through to their logical conclusions. The results, combined with an enactive approach, yielded many surprising and useful implications for further understanding consciousness, self, and "free will" that continue to pave the way towards the creation of safe/moral autopoiesis.
International Journal of Machine Consciousness, January 2013, 5(1): 59-74.
