Holding Robots Responsible: The Elements of Machine Morality
Yochanan E. Bigman1, Adam Waytz2, Ron Alterovitz3, and Kurt Gray1
In press, Trends in Cognitive Sciences
1Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill.
2Kellogg School of Management, Northwestern University.
3Department of Computer Science, University of North Carolina at Chapel Hill.
*Correspondence: ybigman@gmail.com (Y.E. Bigman)
Keywords: Autonomous Machines, Autonomy, Responsibility, Morality
Abstract
As robots become more autonomous, people will see them as more responsible for wrongdoing.
Moral psychology suggests that judgments of robot responsibility will hinge on perceived
situational awareness, intentionality, and free will, plus anthropomorphism and the robot’s
capacity for harm. We also consider questions of robot rights and moral decision-making.
Advances in robotics mean that humans already share roads, skies, and hospitals with
autonomous machines. Soon, it will become commonplace for cars to autonomously maneuver
across highways, military drones to autonomously select missile trajectories, and medical robots
to autonomously seek out and remove tumors. The actions of these autonomous machines can
spell life and death for humans [1], such as when self-driving vehicles kill pedestrians. When
robots harm humans, how will we understand their moral responsibility?
Morality and Autonomy
Philosophy, law, and modern cognitive science all reveal that judgments of human moral
responsibility hinge on autonomy [2,3]. This explains why children, who seem to have less
autonomy than adults, are held less responsible for wrongdoing. Autonomy is also likely crucial
in judgments of robot moral responsibility [4,5]. The reason people ponder and debate the ethical
implications of drones and self-driving cars (but not tractors or blenders) is that these
machines can act autonomously.
Admittedly, today’s robots have limited autonomy, but it is an expressed goal of roboticists to
develop fully autonomous robots: machine systems that can act without human input [6]. As
robots become more autonomous, their potential for moral responsibility will only grow. Even as
roboticists create robots with more “objective” autonomy, we note that “subjective” autonomy
may be more important: work in cognitive science suggests that autonomy and moral
responsibility are more matters of perception than objective truths [3].
Perceiving the Minds of Robots
For programmers and developers, autonomy is understood as a robot’s ability to operate in
dynamic real-world environments for extended periods of time without external human control
[6]. However, for everyday people, autonomy is more likely tied to a robot’s mental capacities.
Some may balk at the idea that robots have (or will have) any human-like mental capacities, but
people also long balked at the idea that animals had minds, and now think of them as having rich
inner lives.
Of course, animals are flesh and blood whereas machines are silicon and circuits, but research
emphasizes that minds are always matters of perception [3,7]. The “problem of other minds”
means that the thoughts and feelings of others are ultimately inaccessible, and so we are left to
perceive them based upon context, cues, and cultural assumptions. Importantly, people do
ascribe to machines at least some ability to think, plan, remember, and exert self-control [7,8],
and, as when judging humans, people make sense of the morality of robots based upon these
ascriptions of mind [8].
How people see mind—i.e., “mind perception”—predicts moral judgments [3], but mind
perception is not monolithic: there are many mental abilities [8], some of which (e.g., the ability
to plan ahead) are more relevant to autonomy and moral judgment than others (e.g., the ability to
feel thirsty). Cognitive science has outlined these autonomy-relevant abilities as they concern
humans, but only a subset of these is likely important for making sense of morality in
autonomous machines. Here we outline one subset of robot “mental” abilities that likely seem
relevant to autonomy (and therefore moral judgment).
Autonomous Elements Tied to Robot Morality
Situation Awareness
For someone to be perceived as morally responsible for wrongdoing, that person must seem to be
aware of the moral concerns inherent in the situation [9]. For example, a young child with no
understanding of the danger of guns will not be held responsible for shooting someone. For a
robot to be held responsible for causing harm, it will likely need to be seen as aware that its
actions are indeed harmful. Although today’s robots cannot appreciate the depths of others’
suffering, they can at least understand some situational aspects. For example, robots can
understand whether stimuli belong to protected categories, such as civilians for military drones,
pedestrians for autonomous cars, and healthy organs for medical robots. People already ascribe
some of this “meaning-lite” understanding to machines [7], and we suggest that greater
ascriptions of situational awareness will increase perceptions of robot responsibility.
Intentionality
Harm-doers are seen as more responsible for intentional actions than for unintentional actions,
often because people infer a desire or a reason behind intentional acts [10]. Although people are
unlikely to perceive robots as capable of desire, they do see robots as capable of intentionality:
holding a belief that an action will have a certain outcome [7]. This perception is consistent with
robots’ ability to evaluate multiple response options in the service of achieving a goal [11]. We
suggest that the more people see robots as intentional agents, able to understand and select their
own goals, the more they will be ascribed moral responsibility.
Free Will
The ability to freely act, or to “do otherwise” [2], is a cornerstone of lay judgments of moral
responsibility [2]. Although robots are not seen as possessing a rich humanlike free will, they are
ascribed the ability to independently implement actions [7]. Consistent with this ascription,
today’s robots can independently execute action programs [11]; however, this independence is
relatively constrained. The behavior of robots is predictable given the transparency of their
(human-given) programming, and predictability undermines perceptions of free will [2].
Technological advances (e.g., deep neural networks) will likely render the minds of machines
less transparent to both programmers and perceivers, thereby elevating perceptions of
unpredictability. We suggest that as robotic minds become more opaque, people will see robots
as possessing more free will, and ascribe them more moral responsibility.
Anthropomorphism
People perceive the mind of machines based on their abilities and behaviors, but also on their
appearance. The more humanlike a machine looks, the more people perceive it as having a mind,
a phenomenon called anthropomorphism [12]. Individuals vary in their tendency to
anthropomorphize, but people consistently perceive more mind, and therefore more moral
responsibility, in machines that look and act like humans [13]. We suggest that having
humanlike bodies, humanlike voices, and humanlike faces will all cause people to attribute more
moral responsibility to machines.
Potential Harm
Even with powerful computational abilities, today’s robots are limited in their physical ability to
act upon the world. As technology advances, increased physical capacities (e.g., the ability to walk,
shoot, operate, and drive) will allow robots to cause more damage to humans. Studies reveal that
observing damage and suffering leads people to search for an intentional agent to hold responsible
for that damage [14]. If people cannot find another person to hold responsible, they will seek
other agents, including corporations and gods [14], and infer the capacity for intention. This
link between suffering and intention means that the more robots cause damage, the more they
will seem to possess intentionality, which (as we outline above) will then lead to increased
perceptions of moral responsibility. We therefore suggest that causing harm can amplify both
perceptions of mind and judgments of moral responsibility.
Future Implications
The future of robotics holds considerable promise, but it is also important to consider what
today’s semi-autonomous machines might mean for moral judgment. As Box 1 explores, even
robots with some perceived mind can help shield their human creators and owners (e.g.,
corporations and governments) from responsibility. Today’s machines are also capable of
making some kinds of moral decisions, and Box 2 explores whether people actually want
machines to make these basic decisions.
Although we focus here on moral responsibility, we note that people might also see sophisticated
machines as worthy of moral rights. While some might find the idea of robot rights to be
ridiculous, the American Society for the Prevention of Cruelty to Robots and a 2017 European
Union report both argue for extending some moral protections to machines. Debates about
whether to recognize the personhood of robots often revolve around the impact on humanity (i.e.,
expanding the moral circle to machines may better protect other people), but they also involve
questions about whether robots possess the kind of mind required for rights. Although
autonomy is important for judgments of moral responsibility, discussions of moral rights
typically focus on the ability to feel. It is an open question whether robots will ever be capable
of feeling love or pain, and, relatedly, whether people will ever perceive these abilities in
machines.
Whether we are considering questions of moral responsibility or rights, issues of robot morality
may currently seem like science fiction. However, we suggest that now, while machines and
our intuitions about them are still in flux, is the best time to systematically explore questions of
robot morality. By understanding how human minds make sense of morality, and how we
perceive the mind of machines, we can help society think more clearly about the impending rise
of robots, and help roboticists understand how their creations are likely to be received.
Box 1. Machines can shield humans from responsibility
When people harm others, they often try to avoid responsibility by pointing fingers elsewhere.
Soldiers who commit heinous acts invoke the mantra that they were “just following orders” from
superior officers. Conversely, superior officers shirk responsibility by claiming that they did not
actually pull the trigger. These excuses can work because responsibility is often a zero-sum
game. The more we assign responsibility to the proximate agent (the entity who physically
perpetrated the harm), the less we assign responsibility to the distal agent (the entity who directed
the harm), and vice versa [3].
As robots spread through society, they will more frequently become the proximate agent in harm-
doing: collateral damage will be caused by drones and accidents will be caused by self-driving cars.
Although humans will remain the distal agents who program and direct these machines, the more
that people can point fingers at their autonomous robots, the less they will be held accountable
for wrongdoing, a fact that corporations and governments could leverage to escape
responsibility for misdeeds. Increasing autonomy for robots could mean increasing absolution for
their owners.
Box 2. Do we want machines making moral decisions?
Much discussion in robotics concerns how robots should make moral decisions [1], but it is
worth asking whether they should make moral decisions in the first place. For example, some
argue that autonomous military robots (e.g., drones) should never independently make decisions
about human life and death. However, others argue in favor of these autonomous military
robots, suggesting that they could be programmed to follow the rules of war better than humans.
Putting these ethical debates in perspective is new research revealing that people are reluctant to
have machines make any moral decisions, whether in the military, the law, driving, or medicine
[8]. One reason for people’s aversion to machines making moral decisions is that they see robots
as lacking a full human mind [7,8]. Without the full human ability to think and feel, we do not
see robots as qualified to make decisions about human lives.
This aversion to machine moral decision-making seems quite robust [8], but may fade as the
perceived mental capacities of machines advance [15]. As the autonomy of machines rises,
people may become more comfortable with robots making moral decisions, although people may
eventually wonder whether the goals of machines align with their own.
Acknowledgments
We thank Bertram Malle, Ilan Finkelstein, Michael Clamann, and an anonymous reviewer for
their comments on a draft of this paper. This work has been supported by the National Science
Foundation SBE Postdoctoral Research fellowship (1714298) to YEB, by the National Science
Foundation awards IIS-1149965 and CCF-1533844 to RA, and by a grant from the Charles Koch
Foundation to KG.
References
[1] Awad E. et al. (2018) The Moral Machine experiment. Nature. 563, 59–64
[2] Shariff A.F. et al. (2014) Free will and punishment: A mechanistic view of human nature
reduces retribution. Psychol Sci. 25, 1563–1570
[3] Wegner D.M. and Gray K. (2017) The mind club, Viking
[4] Kim T. and Hinds P. (2006) Who should I blame? Effects of autonomy and transparency
on attributions in human-robot interaction. In ROMAN 2006 - The 15th IEEE
International Symposium on Robot and Human Interactive Communication, pp. 80–85,
IEEE
[5] van der Woerdt S. and Haselager P. (in press) When robots appear to have a mind: The
human perception of machine agency and responsibility. New Ideas Psychol
[6] Bekey G.A. (2005) Autonomous robots: from biological inspiration to implementation
and control, The MIT Press
[7] Weisman K. et al. (2017) Rethinking people’s conceptions of mental life. Proc Natl Acad
Sci. 114, 11374–11379
[8] Bigman Y.E. and Gray K. (2018) People are averse to machines making moral decisions.
Cognition. 181, 21–34
[9] Kissinger-Knox A. et al. (2018) Does non-moral ignorance exculpate? Situational
awareness and attributions of blame and forgiveness. Acta Anal. 33, 161–179
[10] Monroe A.E. and Malle B.F. (2017) Two paths to blame: Intentionality directs moral
information processing along two distinct tracks. J Exp Psychol Gen. 146, 23–33
[11] Dudek G. and Jenkin M. (2010) Computational principles of mobile robotics, Cambridge
University Press
[12] de Visser E.J. et al. (2016) Almost human: Anthropomorphism increases trust resilience
in cognitive agents. J Exp Psychol Appl. 22, 331–349
[13] Waytz A. et al. (2014) The mind in the machine: Anthropomorphism increases trust in an
autonomous vehicle. J Exp Soc Psychol. 52, 113–117
[14] Gray K. et al. (2014) The myth of harmless wrongs in moral cognition: Automatic dyadic
completion from sin to suffering. J Exp Psychol Gen. 143, 1600–1615
[15] Malle B.F. et al. (in press) AI in the sky: How people morally evaluate human and machine
decisions in a lethal strike dilemma. In Robots and well-being (Aldinhas Ferreira I., Silva
Sequeira J., Virk G.S., Kadar E.E., and Tokhi O., eds), Springer