Robot minds and human ethics: the need for a comprehensive model of moral decision making

Yale University Institution for Social and Policy Studies, Interdisciplinary Center For Bioethics, P.O. Box 208209, New Haven, CT 06520-8209, USA
Ethics and Information Technology (Impact Factor: 0.56). 09/2010; 12(3):243-250. DOI: 10.1007/s10676-010-9232-8

ABSTRACT Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However, assembling a system from the bottom up that is capable of accommodating moral considerations draws attention to the importance of a much wider array of mechanisms in honing moral intelligence. Moral machines need not emulate human cognitive faculties in order to function satisfactorily in responding to morally significant situations. But working through methods for building AMAs will have a profound effect in deepening an appreciation of the many mechanisms that contribute to moral acumen, and of the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans arrive at satisfactory moral judgments.

Keywords: Moral psychology · Moral agent · Machine ethics · Moral philosophy · Decision making · Moral judgment · Virtues · Computers · Emotions · Robots

