The Conflict Between People's Urge to Punish AI and Legal Systems

Gabriel Lima¹,², Meeyoung Cha¹,²*, Chihyung Jeon³ and Kyung Sin Park⁴

¹School of Computing, KAIST, Daejeon, South Korea
²Data Science Group, Institute for Basic Science, Daejeon, South Korea
³Graduate School of Science and Technology Policy, KAIST, Daejeon, South Korea
⁴School of Law, Korea University, Seoul, South Korea

Edited by: David Gunkel, Northern Illinois University, United States
Reviewed by: Henrik Skaug Sætra, Østfold University College, Norway; Kamil Mamak, Jagiellonian University, Poland
*Correspondence: Meeyoung Cha, mcha@ibs.re.kr

Specialty section: This article was submitted to Ethics in Robotics and Artificial Intelligence, a section of the journal Frontiers in Robotics and AI
Received: 10 August 2021; Accepted: 07 October 2021; Published: 08 November 2021
Citation: Lima G, Cha M, Jeon C and Park KS (2021) The Conflict Between People's Urge to Punish AI and Legal Systems. Front. Robot. AI 8:756242. doi: 10.3389/frobt.2021.756242
Regulating artificial intelligence (AI) has become necessary in light of its deployment in high-risk scenarios. This paper explores the proposal to extend legal personhood to AI and robots, which had not yet been examined through the lens of the general public. We present two studies (N = 3,559) to obtain people's views of electronic legal personhood vis-à-vis existing liability models. Our study reveals people's desire to punish automated agents even though these entities are not recognized as having any mental state. Furthermore, people did not believe automated agents' punishment would fulfill deterrence or retribution and were unwilling to grant them legal punishment preconditions, namely physical independence and assets. Collectively, these findings suggest a conflict between the desire to punish automated agents and its perceived impracticability. We conclude by discussing how future design and legal decisions may influence how the public reacts to automated agents' wrongdoings.
Keywords: artificial intelligence, robots, AI, legal system, legal personhood, punishment, responsibility
1 INTRODUCTION
Artificial intelligence (AI) systems have become ubiquitous in society. To discover where and how these machines¹ affect people's lives does not require one to go very far. For instance, these automated agents can assist judges in bail decision-making and choose what information users are exposed to online. They can also help hospitals prioritize those in need of medical assistance and suggest who should be targeted by weapons during war. As these systems become widespread in a range of morally relevant environments, mitigating how their deployment could be harmful to those subjected to them has become more than a necessity. Scholars, corporations, public institutions, and nonprofit organizations have crafted several ethical guidelines to promote the responsible development of the machines affecting people's lives (Jobin et al., 2019). However, are ethical guidelines sufficient to ensure that such principles are followed? Ethics lacks the mechanisms to ensure compliance and can quickly become a tool for escaping regulation (Resseguier and Rodrigues, 2020). Ethics should not be a substitute for enforceable principles, and the path towards safe and responsible deployment of AI seems to cross paths with the law.

¹We use the term "machine" as an interchangeable term for AI systems and robots, i.e., embodied forms of AI. Recent work on the human factors of AI systems has used this term to refer to both AI and robots (e.g., Köbis et al., 2021), and some of the literature that has inspired this research uses similar terms when discussing both entities, e.g., Matthias (2004).
The latest attempt to regulate AI has been advanced by the European Union (EU; European Commission, 2021), which has focused on creating a series of requirements for high-risk systems (e.g., biometric identification, law enforcement). This set of rules is currently under public and
scholarly scrutiny, and experts expect it to be the starting point of
effective AI regulation. This research explores one proposal
previously advanced by the EU that has received extensive
attention from scholars but was yet to be studied through the
lens of those most affected by AI systems, i.e., the general public.
In this work, we investigate the possibility of extending legal
personhood to autonomous AI and robots (Delvaux, 2017).
The proposal to hold machines, partly or entirely, liable for their actions has become controversial among scholars and policymakers. An open letter signed by AI and robotics experts denounced its prospect following the EU proposal (http://www.robotics-openletter.eu/). Scholars opposed to electronic legal personhood have argued that extending a certain legal status to autonomous systems could create human liability shields by protecting humans from deserved liability (Bryson et al., 2017). Those who argue against legal personhood for AI systems regularly question how they could be punished (Asaro, 2011; Solaiman, 2017). Machines cannot suffer as punishment (Sparrow, 2007), nor do they have assets to compensate those harmed.
Scholars who defend electronic legal personhood argue that assigning liability to machines could contribute to the coherence of the legal system. Assigning responsibility to robots and AI could imbue these entities with realistic motivations to ensure they act accordingly (Turner, 2018). Some highlight that legal personhood has also been extended to other nonhumans, such as corporations, and that doing so for autonomous systems may not be as implausible as it first appears (Van Genderen, 2018). As these systems become more autonomous, capable, and socially relevant, embedding autonomous AI into legal practices becomes a necessity (Gordon, 2021; Jowitt, 2021).
We note that AI systems could be granted legal standing regardless of their ability to fulfill duties, e.g., by granting them certain rights for legal and moral protection (Gunkel, 2018; Gellers, 2020). Nevertheless, we highlight that the EU proposal to extend a specific legal status to machines was predicated on holding these systems legally responsible for their actions. Many of the arguments opposed to the proposal also rely on these systems' incompatibility with legal punishment and posit that these systems should not be granted legal personhood because they cannot be punished.
An important distinction in the proposal to extend legal
personhood to AI systems and robots is its adoption under
criminal and civil law. While civil law aims to make victims
whole by compensating them (Prosser, 1941), criminal law
punishes offenses. Rights and duties come in distinct bundles
such that a legal person, for instance, may be required to pay for
damages under civil law and yet not be held liable for a criminal
offense (Kurki, 2019). The EU proposal to extend legal
personhood to automated systems has focused on the former
by defending that they could "make good any damage they may cause." However, scholarly discussion has not been restricted to
the civil domain and has also inquired how criminal offenses
caused by AI systems could be dealt with (Abbott, 2020).
Some of the possible benefits, drawbacks, and challenges of extending legal personhood to autonomous systems are unique to civil and criminal law. Granting legal personhood to AI systems may facilitate compensating those harmed under civil law (Turner, 2018), while providing general deterrence (Abbott, 2020) and psychological satisfaction to victims (e.g., through revenge; Mulligan, 2017) if these systems are criminally punished. Extending civil liability to AI systems means these machines should hold assets to compensate those harmed (Bryson et al., 2017). In contrast, the difficulties of holding automated systems criminally liable extend to other domains, such as how to define an AI system's mind, how to reduce it to a single actor (Gless et al., 2016), and how to grant them physical independence.
The proposal to adopt electronic legal personhood addresses the difficult problem of attributing responsibility for AI systems' actions, i.e., the so-called responsibility gap (Matthias, 2004). Self-learning and autonomous systems challenge epistemic and control requirements for holding actors responsible, raising questions about who should be blamed, punished, or answer for harms caused by AI systems (de Sio and Mecacci, 2021). The deployment of complex algorithms leads to the "problem of many things," where different technologies, actors, and artifacts come together to complicate the search for a responsible entity (Coeckelbergh, 2020). These gaps could be partially bridged if the causally responsible machine is held liable for its actions.

Some scholars argue that the notion of a responsibility gap is overblown. For instance, Johnson (2015) has asserted that responsibility gaps will arise only if designers choose to let them, and has argued that designers should instead proactively take responsibility for their creations. Similarly, Sætra (2021) has argued that even if designers and users may not satisfy all requirements for responsibility attribution, the fact that they chose to deploy systems that they neither understand nor control makes them responsible. Other scholars view moral responsibility as a pluralistic and flexible process that can encompass emerging technologies (Tigard, 2020).
Danaher (2016) has made a case for a distinct gap posed by the conflict between the human desire for retribution and the absence of appropriate subjects of retributive punishment, i.e., the retribution gap. Humans look for a culpable wrongdoer deserving of punishment upon harm and justify their intuitions with retributive motives (Carlsmith and Darley, 2008). AI systems are not appropriate subjects of these retributive attitudes as they lack the necessary conditions for retributive punishment, e.g., culpability. The retribution gap has been criticized by other scholars, who defend that people could exert control over their retributive intuitions (Kraaijeveld, 2020) and argue that conflicts between people's intuitions and moral and legal systems are dangerous only if they destabilize such institutions (Sætra, 2021). This research directly addresses whether such conflict is real and could pose challenges to AI systems' governance. Coupled with previous work finding that people blame AI and robots for harm (e.g., Kim and Hinds, 2006; Malle et al., 2015; Furlough et al., 2021; Lee et al., 2021; Lima et al., 2021), there seems to exist a clash between people's reactive attitudes towards harms caused by automated systems and their feasibility. This conflict is yet to be studied empirically.
We investigate this friction. We question whether people
would punish AI systems in situations where human agents
would typically be held liable. We also inquire whether these
reactive attitudes can be grounded on crucial components of legal
punishment, i.e., some of its requirements and functions.
Previous work on the proposal to extend legal standing to AI systems has been mostly restricted to the normative domain, and research is yet to investigate whether philosophical intuitions concerning the responsibility gap, retribution gap, and electronic legal personhood have similarities with the public view. We approach this research question as a form of experimental philosophy of technology (Kraaijeveld, 2021). This research does not defend that responsibility and retribution gaps are real or can be solved by other scholars' proposals. Instead, we investigate how people's reactive attitudes towards harms caused by automated systems may clash with legal and moral doctrines and whether they warrant attention.
Recent work has explored how public reactions to automated vehicles (AVs) could help shape future regulation (Awad et al., 2018). Scholars posit that psychology research could augment information available to policymakers interested in regulating autonomous machines (Awad et al., 2020a). This body of literature acknowledges that the public view should not be entirely embedded into legal and governance decisions due to harmful and irrational biases. Yet, they defend that obtaining the general public's attitude towards these topics can help regulators discern policy decisions and prepare for possible conflicts.

Viewing the issues of responsibility posed by automated systems as political questions, Sætra (2021) has defended that these questions should be subjected to political deliberation. Deciding how to attribute responsibility comes with inherent trade-offs that one should balance to achieve responsible and beneficial innovation. A crucial stakeholder in this endeavor is those who are subjected to the indirect consequences of widespread deployment of automated systems, i.e., the public (Dewey and Rogers, 2012). Scholars defend that automated systems should be regulated according to the "political will of a given community" (Sætra and Fosch-Villaronga, 2021), where the general public is a major player. Acknowledging the public opinion facilitates the political process to find common ground for the successful regulation of these new technologies. If legal responsibility becomes too detached from the folk conception of responsibility, the law might become unfamiliar to those whose behavior it aims to regulate, thus creating "the law in the books" instead of "the law in action" (Brożek and Janik, 2019).
People's expectations and preconceptions of AI systems and robots have several implications for their adoption, development, and regulation (Cave and Dihal, 2019). For instance, fear and hostility may hinder the adoption of beneficial technology (Cave et al., 2018; Bonnefon et al., 2020), whereas a more positive take on AI and robots may lead to unreasonable expectations and overtrust, which scholars have warned against (Bansal et al., 2019). Narratives about AI and robots also inform and open new directions for research among developers and shape the views of both policymakers and their constituents (Cave and Dihal, 2019). This research contributes to the maintenance of the "algorithmic social contract," which aims to embed societal values into the governance of new technologies (Rahwan, 2018). By understanding how all stakeholders involved in developing, deploying, and using AI systems react to these new technologies, those responsible for making governance decisions can be better informed of any existing conflicts.
2 METHODS
Our research inquired how people's moral judgments of automated systems may clash with existing legal doctrines through a survey-based study. We recruited 3,315 US residents through Amazon Mechanical Turk (see SI for demographic information), who attended a study where they 1) indicated their perception of automated agents' liability and 2) attributed responsibility, punishment, and awareness to a wide range of entities that could be held liable for harms caused by automated systems under existing legal doctrines.
We employed a between-subjects study design, in which each participant was randomly assigned to a scenario, an agent, and an autonomy level. Scenarios covered two environments where automated agents are currently deployed: medicine and war (see SI for study materials). Each scenario posited three agents: an AI program, a robot (i.e., an embodied form of AI), or a human actor. Although the proposals to extend legal standing to AI systems and to robots have similarities, they also have distinct aspects worth noting. For instance, although a "robot death penalty" may be a viable option through its destruction, "killing" an AI system may not have the same expressive benefits due to varying levels of anthropomorphization. However, extensive literature discusses the two actors in parallel, e.g., (Turner, 2018; Abbott, 2020). We come back to this distinction in our final discussion. Finally, our study introduced each actor as either "supervised by a human" or "completely autonomous."
Participants assigned to an automated agent first evaluated whether punishing it would fulfill some of legal punishment's functions, namely reform, deterrence, and retribution (Solum, 1991; Asaro, 2007). They also indicated whether they would be willing to grant assets and physical independence to automated systems, two factors that are preconditions for civil and criminal liability, respectively. If automated systems do not hold assets to be taken away as compensation for those they harm, they cannot be held liable under civil law. Similarly, if an AI system or robot does not possess any level of physical independence, it becomes hard to imagine their criminal punishment. These questions were shown in random order and answered using a 5-point bipolar scale.
After answering this set of questions, or immediately after consenting to the research terms for those assigned to a human agent, participants were shown the selected vignette in plain text. They were then asked to attribute responsibility, punishment, and awareness to their assigned agent. Responsibility and punishment are closely related to the proposal of adopting electronic legal personhood, while awareness plays a major role in legal judgments (e.g., mens rea in criminal law, negligence in civil law). We also identified a series of entities (hereafter "associates") that could be held liable under existing legal doctrines, such as an automated system's manufacturer under product liability, and asked participants to attribute the same variables to each of them. All questions were answered using a 4-pt scale. Entities were shown in random order and one at a time.
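For concreteness, the sketch below (Python) illustrates how the between-subjects assignment and randomized presentation order described above could be encoded. It is a minimal illustration under assumed labels, not the authors' survey implementation; the actual materials and scripts are available in the SI and the project repository, and the list of associates here is only an example (the paper explicitly mentions the manufacturer).

```python
import random

# Between-subjects factors described in the Methods: each participant is
# randomly assigned one scenario, one agent, and one autonomy level.
SCENARIOS = ["medicine", "war"]
AGENTS = ["AI program", "robot", "human"]
AUTONOMY = ["supervised by a human", "completely autonomous"]

# Illustrative (assumed) set of "associates" that could be held liable
# under existing doctrines.
ASSOCIATES = ["manufacturer", "developer", "owner/operator", "supervisor"]


def assign_condition(rng: random.Random) -> dict:
    """Randomly assign a participant to one cell of the 2 x 3 x 2 design."""
    return {
        "scenario": rng.choice(SCENARIOS),
        "agent": rng.choice(AGENTS),
        "autonomy": rng.choice(AUTONOMY),
    }


def rating_targets(condition: dict, rng: random.Random) -> list:
    """Return the assigned agent plus associates in a randomized order."""
    targets = [condition["agent"], *ASSOCIATES]
    rng.shuffle(targets)  # entities are shown one at a time, in random order
    return targets


if __name__ == "__main__":
    rng = random.Random(2021)
    condition = assign_condition(rng)
    print(condition)
    print(rating_targets(condition, rng))
```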
We present the methodology details and study materials in the SI. A replication with a demographically representative sample (N = 244) is also shown in the SI to substantiate all of the findings presented in the main text. This research had been
approved by the first author's Institutional Review Board (IRB). All data and scripts are available at the project's repository: https://bit.ly/3AMEJjB.
3 RESULTS
Figure 1A shows the mean values of responsibility and punishment attributed to each agent depending on their autonomy level. Automated agents were deemed moderately responsible for their harmful actions (M = 1.48, SD = 1.16), and participants wished to punish AI and robots to a significant level (M = 1.42, SD = 1.28). In comparison, human agents were held responsible (M = 2.34, SD = 0.83) and punished (M = 2.41, SD = 0.82) to a larger degree.
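As a point of reference, condition-level means and the standard errors plotted in Figure 1 could be computed along the following lines. This is a sketch under assumed column names (one row per participant) and an assumed file name, not the authors' analysis script:

```python
import pandas as pd

# Assumed layout: one row per participant with the assigned condition and
# their responsibility/punishment/awareness ratings on the 4-pt scale.
df = pd.read_csv("responses.csv")

summary = (
    df.groupby(["agent", "autonomy"])[["responsibility", "punishment", "awareness"]]
    .agg(["mean", "std", "sem"])  # SEM corresponds to the error bars in Figure 1
    .round(2)
)
print(summary)
```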
A 3 (agent: AI, robot, human) × 2 (autonomy: completely autonomous, supervised) ANOVA on participants' judgments of responsibility revealed main effects of both agent (F(2, 3309) = 906.28, p < 0.001, ηp² = 0.35) and autonomy level (F(1, 3309) = 43.84, p < 0.001, ηp² = 0.01). The extent to which participants wished to punish agents was also dependent on the agent (F(2, 3309) = 391.61, p < 0.001, ηp² = 0.16) and its autonomy (F(1, 3309) = 45.56, p < 0.001, ηp² = 0.01). The interaction between these two factors did not reach significance in any of the models (p > 0.05). Autonomous agents were overall viewed as more responsible and deserving of a larger punishment for their actions than their supervised counterparts. We did not observe noteworthy differences between AI systems and robots; the latter were deemed marginally less responsible than AI systems.
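A two-way ANOVA of this form, with partial eta squared as the effect size, could be reproduced roughly as follows (Python with statsmodels). The file and column names are assumptions, and the same model applies to the awareness ratings reported below; this is a sketch of the general approach rather than the authors' exact script:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("responses.csv")  # assumed layout: agent, autonomy, ratings

for dv in ["responsibility", "punishment"]:
    # 3 (agent) x 2 (autonomy) between-subjects ANOVA
    model = ols(f"{dv} ~ C(agent) * C(autonomy)", data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)
    # Partial eta squared: SS_effect / (SS_effect + SS_residual)
    ss_residual = table.loc["Residual", "sum_sq"]
    table["eta_sq_partial"] = table["sum_sq"] / (table["sum_sq"] + ss_residual)
    print(f"\n{dv}")
    print(table[["sum_sq", "df", "F", "PR(>F)", "eta_sq_partial"]])
```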
Figure 1A also shows the mean perceived awareness of AI, robots, and human agents upon a legal offense. Participants perceived automated agents as only slightly aware of their actions (M = 0.54, SD = 0.88), while human agents were considered somewhat aware (M = 1.92, SD = 1.00). A 3 × 2 ANOVA model revealed main effects for both agent type (F(2, 3309) = 772.51, p < 0.001, ηp² = 0.35) and autonomy level (F(1, 3309) = 43.87, p < 0.001, ηp² = 0.01). The interaction between them was not significant (p = 0.401). Robots were deemed marginally less aware of their offenses than AI systems. A mediation analysis revealed that perceived awareness of AI systems (coded as -1) and robots (coded as 1) mediated judgments of responsibility (partial mediation, coef = -0.04, 95% CI [-0.06, -0.02]) and punishment (complete mediation, coef = -0.05, 95% CI [-0.07, -0.02]).
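One way to estimate such an indirect effect is a percentile bootstrap over the product of the a path (agent type to perceived awareness) and the b path (awareness to judgment, controlling for agent type). The sketch below assumes hypothetical column names and the -1/1 coding reported above; it illustrates the general approach, not the authors' exact procedure:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")                           # assumed layout
df = df[df["agent"].isin(["AI", "robot"])].copy()
df["agent_code"] = np.where(df["agent"] == "robot", 1, -1)  # AI = -1, robot = 1


def indirect_effect(data: pd.DataFrame, dv: str) -> float:
    """a*b indirect effect of agent type on `dv` through perceived awareness."""
    a = smf.ols("awareness ~ agent_code", data=data).fit().params["agent_code"]
    b = smf.ols(f"{dv} ~ awareness + agent_code", data=data).fit().params["awareness"]
    return a * b


rng = np.random.default_rng(0)
n = len(df)
for dv in ["responsibility", "punishment"]:
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)  # resample participants with replacement
        boot.append(indirect_effect(df.iloc[idx], dv))
    low, high = np.percentile(boot, [2.5, 97.5])
    print(f"{dv}: indirect = {indirect_effect(df, dv):.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```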
The leftmost plot of Figure 1B shows participants' attitudes towards granting assets and some level of physical independence to AI and robots using a 5-pt scale. These two concepts are crucial preconditions for imposing civil and criminal liability, respectively. Participants were largely contrary to allowing automated agents to hold assets (M = -0.96, SD = 1.16) or physical independence (M = -0.55, SD = 1.30). Figure 1B also shows the extent to which participants believed the punishment of AI and robots might satisfy deterrence, retribution, and reform, i.e., some of legal punishment's functions. Respondents did not believe punishing an automated agent would fulfill its retributive functions (M = -0.89, SD = 1.12) or deter them from future offenses (M = -0.75, SD = 1.22); however, AI and robots were viewed as able to learn from their wrongful actions (M = 0.55, SD = 1.17). We only observed marginal effects (ηp² = 0.01) of agent type and autonomy in participants' attitudes towards preconditions and functions of legal punishment and present these results in the SI.
FIGURE 1 | Attribution of responsibility, punishment, and awareness to human agents, AI systems, and robots upon a legal offense (A). Participants' attitudes towards granting legal punishment preconditions to AI systems and robots (e.g., assets and physical independence) and respondents' views that automated agents' punishment would (not) satisfy the deterrence, retributive, and reformative functions of legal punishment (B). Standard errors are shown as error bars.

The viability and effectiveness of AI systems' and robots' punishment depend on fulfilling certain legal punishment preconditions and functions. As discussed above, the incompatibility between legal punishment and automated
agents is a common argument against the adoption of electronic legal personhood. Collectively, our results suggest a conflict between people's desire to punish AI and robots and the punishment's perceived effectiveness and feasibility.
We also observed that the extent to which participants wished to punish automated agents upon wrongdoing correlated with their attitudes towards granting them assets (r(1935) = 0.11, p < 0.001) and physical independence (r(224) = 0.21, p < 0.001). Those who anticipated the punishment of AI and robots to fulfill deterrence (r(1711) = 0.34, p < 0.001) and retribution (r(1711) = 0.28, p < 0.001) also tended to punish them more. However, participants' views concerning automated agents' reform were not correlated with their punishment judgments (r(1711) = 0.02, p = 0.44). In summary, more positive attitudes towards granting assets and physical independence to AI and robots were associated with larger punishment levels. Similarly, participants that perceived automated agents' punishment as more successful concerning deterrence and retribution punished them more. Nevertheless, most participants wished to punish automated agents regardless of the punishment's infeasibility and unfulfillment of retribution and deterrence.
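These Pearson correlations can be reproduced in a few lines; again, the file and column names below are assumptions, and only participants assigned to an automated agent answered the precondition and function items:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("responses.csv")                       # assumed layout
machines = df[df["agent"].isin(["AI", "robot"])]

# Punishment judgments vs. attitudes towards preconditions and functions
for col in ["assets", "physical_independence", "deterrence", "retribution", "reform"]:
    pair = machines[["punishment", col]].dropna()
    r, p = stats.pearsonr(pair["punishment"], pair[col])
    # Degrees of freedom for r are n - 2, matching the r(df) notation above
    print(f"{col}: r({len(pair) - 2}) = {r:.2f}, p = {p:.3g}")
```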
Participants also judged a series of entities that could be held liable under existing liability models concerning their responsibility, punishment, and awareness for an agent's wrongful action. All of the automated agents' associates were judged responsible, deserving of punishment, and aware of the agent's actions to a similar degree (see
Figure 2). The supervisor of a supervised AI or robot was
judged more responsible, aware, and deserving of punishment
than that of a completely autonomous system. In contrast,
attributions of these three variables to all other associates were
larger in the case of an autonomous agent. In the case of
human agents, their employers and supervisors were deemed
more responsible, aware, and deserving of punishment when
the actor was supervised. We present a complete statistical
analysis of these results in the SI.
4 DISCUSSION
Our findings demonstrate a conflict between participants' desire to punish automated agents for legal offenses and their perception that such punishment would not be successful in achieving deterrence or retribution. This clash is aggravated by participants' unwillingness to grant AI and robots what is needed to legally punish them, i.e., assets for civil liability and physical independence for criminal liability. This contradiction in people's moral judgments suggests that people wish to punish AI and robots even though they believe that doing so would not be successful, nor are they willing to make it legally viable.
FIGURE 2 | Attribution of responsibility, punishment, and awareness to AI systems, robots, and entities that could be held liable under existing doctrines (i.e., associates) (A). Assignment of responsibility, punishment, and awareness to human agents and corresponding associates (B). Standard errors are shown as error bars.
These results are in agreement with Danaher's (2016) retribution gap. Danaher acknowledges that people might blame and punish AI and robots for wrongful behavior due to humans' retributive nature, although they may be wrong in doing so. Our data imply that Danaher's concerns about the retribution gap are significant and can be extended to other considerations, i.e., deterrence and the preconditions for legal punishment. Past research shows that people also ground their punishment judgments in functions other than retribution (Twardawski et al., 2020). Public intuitions concerning the punishment of automated agents are even more contradictory than previously advanced by Danaher: people wish to punish AI and robots for harms even though their punishment would not be successful in achieving some of legal punishment's functions or even be viable, given that people would not be willing to grant them what is necessary to punish them.
Our results show that even if responsibility and retribution gaps can be easily bridged, as suggested by some scholars (Sætra, 2021; Tigard, 2020; Johnson, 2015), there still exists a conflict between the public reaction to harms caused by automated systems and its moral and legal feasibility. The public is an important stakeholder in the political deliberation necessary for the beneficial regulation of AI and robots, and their perspective should not be rejected without consideration. An empirical question that our results pose is whether this conflict warrants attention from scholars and policymakers, i.e., whether it destabilizes political and legal institutions (Sætra, 2021) or leads to a lack of trust in legal systems (Abbott, 2020). For instance, it may well be that the public needs to be taught to exert control over their moral intuitions, as suggested by Kraaijeveld (2020).
Although participants did not believe punishing an automated agent would satisfy the retributive and deterrence aspects of punishment, they viewed robots and AI systems as capable of learning from their mistakes. Reform may be the crucial component of people's desire to punish automated agents. Although the current research cannot settle this inquiry, we highlight that future work should explore how participants imagine the reform of automated agents. Reprogramming an AI system or robot can prevent future offenses, yet it will not satisfy other indirect reformative functions of punishment, e.g., teaching others that a specific action is wrong. Legal punishment, as it stands, does not achieve the reprogramming necessary for AI and robots. Future studies may question how people's preconceptions of automated agents' reprogramming influence their moral judgments.
It might be argued that our results are caused by how the study was constructed. For instance, participants who punished automated agents might have reported being more optimistic about its feasibility so that their responses become compatible. However, we observe trends that methodological biases cannot explain but can only result from participants' a priori contradiction (see SI for detailed methodology). This work does not posit this contradiction as a universal phenomenon; we observed a significant number of participants attributing no punishment whatsoever to electronic agents. Nonetheless, we observed similar results in a demographically representative sample of respondents (see SI).
We did not observe significant differences between punishment judgments of AI systems and robots. The differences in responsibility and awareness judgments were marginal and likely affected by our large sample size. As discussed above, there are different challenges when adopting electronic legal personhood for AI and robots. Embodied machines may be easier to punish criminally if legal systems choose to do so, for instance through the adoption of a "robot death penalty." Nevertheless, our results suggest that the conflict between people's moral intuitions and legal systems may be independent of agent type. Our study design did not control for how people imagined automated systems, which could have affected how people make moral judgments about machines. For instance, previous work has found that people evaluate the moral choices of a human-looking robot as less moral than humans' and non-human robots' decisions (Laakasuo et al., 2021).
People largely viewed AI and robots as unaware of their actions. Much human-computer interaction research has focused on developing social robots that can elicit mind perception through anthropomorphization (Waytz et al., 2014; Darling, 2016). Therefore, we may have obtained higher perceived awareness had we introduced what the robot or AI looked like, which in turn could have affected respondents' responsibility and punishment judgments, as suggested by Bigman et al. (2019) and our mediation analysis. These results may also vary by actor, as robots are subject to higher levels of anthropomorphization. Past research has also shown that if an AI system is described as an anthropomorphized agent rather than a mere tool, it is attributed more responsibility for creating a painting (Epstein et al., 2020). A similar trend was observed with autonomous AI and robots, which were assigned more responsibility and punishment than supervised agents, as previously found in the case of autonomous vehicles (Awad et al., 2020b) and other scenarios (Kim and Hinds, 2006; Furlough et al., 2021).
4.1 The Importance of Design, Social, and
Legal Decisions
Participants' attitudes concerning the fulfillment of punishment preconditions and functions by automated agents were correlated with the extent to which respondents wished to punish AI and robots. This finding suggests that people's moral judgments of automated agents' actions can be nudged based on how their feasibility is introduced.
For instance, clarifying that punishing AI and robots will not satisfy the human need for retribution, will not deter future offenses, or is unviable given that they cannot be punished similarly to other legal persons may lead people to denounce automated agents' punishment. If legal and social institutions choose to embrace these systems, e.g., by granting them a certain legal status, nudges towards granting them certain perceived independence or private property may affect people's decision to punish them. Future work should delve deeper into the causal relationship between people's attitudes towards the topic and their attribution of punishment to automated agents.
Our results highlight the importance of design, social, and legal decisions in how the general public may react to automated agents. Designers should be aware that developing systems that are perceived as aware by those interacting with them may lead to heightened moral judgments. For instance, the benefits of automated agents may be nullified if their adoption is impaired by unfulfilled perceptions that these systems should be punished. Legal decisions concerning the regulation of AI and their legal standing may also influence how people react to harms caused by automated agents. Social decisions concerning how to insert AI and robots into society, e.g., as legal persons, should also affect how we judge their actions. Future decisions should be made carefully to ensure that laypeople's reactions to harms caused by automated systems do not clash with regulatory efforts.
5 CONCLUDING REMARKS
Electronic legal personhood grounded on automated agents' abilities to fulfill duties does not seem a viable path towards the regulation of AI. This approach can only become an option if AI and robots are granted assets or physical independence, which would allow civil or criminal liability to be imposed, or if punishment functions and methods are adapted to AI and robots. People's intuitions about automated agents' punishment are somewhat similar to those of scholars who oppose the proposal. However, a significant number of people still wish to punish AI and robots independently of their a priori intuitions.
By no means does this research propose that robots and AI should be the sole entities to hold liability for their actions. On the contrary, responsibility, awareness, and punishment were assigned to all associates. We thus posit that distributing liability among all entities involved in deploying these systems would follow the public perception of the issue. Such a model could take joint and several liability models as a starting point by enforcing the proposal that various entities should be held jointly liable for damages.
Our work also raises the question of whether people wish to punish AI and robots for reasons other than retribution, deterrence, and reform. For instance, the public may punish electronic agents for general or indirect deterrence (Twardawski et al., 2020). Punishing an AI could educate humans that a specific action is wrong without the negative consequences of human punishment. Recent literature in moral psychology also proposes that humans might strive for a morally coherent world, where seemingly contradictory judgments arise so that the public perception of agents' moral qualities matches the moral qualities of their actions' outcomes (Clark et al., 2015). We highlight that legal punishment is not only directed at the wrongdoer but also fulfills other functions in society that future work should inquire about when dealing with automated agents. Finally, our work poses the question of whether proactive actions towards holding existing legal persons liable for harms caused by automated agents would compensate for people's desire to punish them. For instance, future work might examine whether punishing a system's manufacturer may decrease the extent to which people punish AI and robots. Even if the responsibility gap can be easily solved, conflicts between the public and legal institutions might continue to pose challenges to the successful governance of these new technologies.
We selected scenarios from active areas of AI and robotics (i.e., medicine and war; see SI). People's moral judgments might change depending on the scenario or background. The proposed scenarios did not introduce, for the sake of feasibility and brevity, much of the background usually considered when judging someone's actions legally. We did not control for any previous attitudes towards AI and robots or knowledge of related areas, such as law and computer science, which could result in different judgments among the participants.
This research has found a contradiction in people's moral judgments of AI and robots: they wish to punish automated agents, although they believe that doing so is neither legally viable nor successful. We do not defend the thesis that automated agents should be punished for legal offenses or have their legal standing recognized. Instead, we highlight that the public's preconceptions of AI and robots influence how people react to their harmful consequences. Most crucially, we showed that people's reactions to these systems' failures might conflict with existing legal and moral systems. Our research showcases the importance of understanding public opinion concerning the regulation of AI and robots. Those making regulatory decisions should be aware of how the general public may be influenced by or clash with such commitments.
DATA AVAILABILITY STATEMENT
The datasets and scripts used for analysis in this study can be found at
https://bitly.com/3AMEJjB.
ETHICS STATEMENT
The studies involving human participants were reviewed and
approved by the Institutional Review Board (IRB) at KAIST. The
patients/participants provided their informed consent to
participate in this study.
AUTHOR CONTRIBUTIONS
All authors designed the research. GL conducted the research. GL
analyzed the data. GL wrote the paper, with edits from MC, CJ, and KS.
FUNDING
This research was supported by the Institute for Basic Science
(IBS-R029-C2).
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at:
https://www.frontiersin.org/articles/10.3389/frobt.2021.756242/
full#supplementary-material
REFERENCES

Abbott, R. (2020). The Reasonable Robot: Artificial Intelligence and the Law. Cambridge University Press.

Asaro, P. M. (2011). "A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics," in Robot Ethics: The Ethical and Social Implications of Robotics, 169-186.

Asaro, P. M. (2007). Robots and Responsibility from a Legal Perspective. Proc. IEEE 4, 20-24.

Awad, E., Dsouza, S., Bonnefon, J.-F., Shariff, A., and Rahwan, I. (2020a). Crowdsourcing Moral Machines. Commun. ACM 63, 48-55. doi:10.1145/3339904

Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., et al. (2018). The Moral Machine Experiment. Nature 563, 59-64. doi:10.1038/s41586-018-0637-6

Awad, E., Levine, S., Kleiman-Weiner, M., Dsouza, S., Tenenbaum, J. B., Shariff, A., et al. (2020b). Drivers Are Blamed More Than Their Automated Cars When Both Make Mistakes. Nat. Hum. Behav. 4, 134-143. doi:10.1038/s41562-019-0762-8

Bansal, G., Nushi, B., Kamar, E., Lasecki, W. S., Weld, D. S., and Horvitz, E. (2019). "Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance," in Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 2-11.

Bigman, Y. E., Waytz, A., Alterovitz, R., and Gray, K. (2019). Holding Robots Responsible: The Elements of Machine Morality. Trends Cogn. Sci. 23, 365-368. doi:10.1016/j.tics.2019.02.008

Bonnefon, J.-F., Shariff, A., and Rahwan, I. (2020). The Moral Psychology of AI and the Ethical Opt-Out Problem. Oxford, UK: Oxford University Press.

Brożek, B., and Janik, B. (2019). Can Artificial Intelligences Be Moral Agents? New Ideas Psychol. 54, 101-106. doi:10.1016/j.newideapsych.2018.12.002

Bryson, J. J., Diamantis, M. E., and Grant, T. D. (2017). Of, for, and by the People: The Legal Lacuna of Synthetic Persons. Artif. Intell. L. 25, 273-291. doi:10.1007/s10506-017-9214-9

Carlsmith, K. M., and Darley, J. M. (2008). Psychological Aspects of Retributive Justice. Adv. Exp. Soc. Psychol. 40, 193-236. doi:10.1016/s0065-2601(07)00004-4

Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B., et al. (2018). Portrayals and Perceptions of AI and Why They Matter.

Cave, S., and Dihal, K. (2019). Hopes and Fears for Intelligent Machines in Fiction and Reality. Nat. Mach. Intell. 1, 74-78. doi:10.1038/s42256-019-0020-9

Clark, C. J., Chen, E. E., and Ditto, P. H. (2015). Moral Coherence Processes: Constructing Culpability and Consequences. Curr. Opin. Psychol. 6, 123-128. doi:10.1016/j.copsyc.2015.07.016

Coeckelbergh, M. (2020). Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Sci. Eng. Ethics 26, 2051-2068. doi:10.1007/s11948-019-00146-8

Danaher, J. (2016). Robots, Law and the Retribution Gap. Ethics Inf. Technol. 18, 299-309. doi:10.1007/s10676-016-9403-3

Darling, K. (2016). "Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior towards Robotic Objects," in Robot Law (Edward Elgar Publishing).

de Sio, F. S., and Mecacci, G. (2021). Four Responsibility Gaps with Artificial Intelligence: Why They Matter and How to Address Them. Philos. Tech., 1-28. doi:10.1007/s13347-021-00450-x

Delvaux, M. (2017). Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). European Parliament Committee on Legal Affairs.

Dewey, J., and Rogers, M. L. (2012). The Public and Its Problems: An Essay in Political Inquiry. Penn State Press.

Epstein, Z., Levine, S., Rand, D. G., and Rahwan, I. (2020). Who Gets Credit for AI-Generated Art? iScience 23, 101515. doi:10.1016/j.isci.2020.101515

European Commission (2021). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts.

Furlough, C., Stokes, T., and Gillan, D. J. (2021). Attributing Blame to Robots: I. The Influence of Robot Autonomy. Hum. Factors 63, 592-602. doi:10.1177/0018720819880641

Gellers, J. C. (2020). Rights for Robots: Artificial Intelligence, Animal and Environmental Law. 1st Edn. Routledge.

Gless, S., Silverman, E., and Weigend, T. (2016). If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability. New Criminal L. Rev. 19, 412-436. doi:10.1525/nclr.2016.19.3.412

Gordon, J. S. (2021). Artificial Moral and Legal Personhood. AI Soc. 36, 457-471. doi:10.1007/s00146-020-01063-2

Gunkel, D. J. (2018). Robot Rights. MIT Press.

Jobin, A., Ienca, M., and Vayena, E. (2019). The Global Landscape of AI Ethics Guidelines. Nat. Mach. Intell. 1, 389-399. doi:10.1038/s42256-019-0088-2

Johnson, D. G. (2015). Technology with No Human Responsibility. J. Bus. Ethics 127, 707-715. doi:10.1007/s10551-014-2180-1

Jowitt, J. (2021). Assessing Contemporary Legislative Proposals for Their Compatibility With a Natural Law Case for AI Legal Personhood. AI Soc. 36, 499-508. doi:10.1007/s00146-020-00979-z

Kim, T., and Hinds, P. (2006). "Who Should I Blame? Effects of Autonomy and Transparency on Attributions in Human-Robot Interaction," in ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication (IEEE), 80-85.

Köbis, N., Bonnefon, J.-F., and Rahwan, I. (2021). Bad Machines Corrupt Good Morals. Nat. Hum. Behav. 5, 679-685. doi:10.1038/s41562-021-01128-2

Kraaijeveld, S. R. (2020). Debunking (the) Retribution (Gap). Sci. Eng. Ethics 26, 1315-1328. doi:10.1007/s11948-019-00148-6

Kraaijeveld, S. R. (2021). Experimental Philosophy of Technology. Philos. Tech., 1-20. doi:10.1007/s13347-021-00447-6

Kurki, V. A. (2019). A Theory of Legal Personhood. Oxford University Press.

Laakasuo, M., Palomäki, J., and Köbis, N. (2021). Moral Uncanny Valley: A Robot's Appearance Moderates How Its Decisions Are Judged. Int. J. Soc. Robotics, 1-10. doi:10.1007/s12369-020-00738-6

Lee, M., Ruijten, P., Frank, L., de Kort, Y., and IJsselsteijn, W. (2021). "People May Punish, but Not Blame Robots," in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1-11. doi:10.1145/3411764.3445284

Lima, G., Grgić-Hlača, N., and Cha, M. (2021). "Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making," in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1-17. doi:10.1145/3411764.3445260

Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., and Cusimano, C. (2015). "Sacrifice One for the Good of Many? People Apply Different Moral Norms to Human and Robot Agents," in 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (IEEE), 117-124.

Matthias, A. (2004). The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata. Ethics Inf. Technol. 6, 175-183. doi:10.1007/s10676-004-3422-1

Mulligan, C. (2017). Revenge against Robots. SCL Rev. 69, 579.

Prosser, W. L. (1941). Handbook of the Law of Torts. West Publishing.

Rahwan, I. (2018). Society-in-the-Loop: Programming the Algorithmic Social Contract. Ethics Inf. Technol. 20, 5-14. doi:10.1007/s10676-017-9430-8

Resseguier, A., and Rodrigues, R. (2020). AI Ethics Should Not Remain Toothless! A Call to Bring Back the Teeth of Ethics. Big Data Soc. 7, 2053951720942541. doi:10.1177/2053951720942541

Sætra, H. S. (2021). Confounding Complexity of Machine Action: A Hobbesian Account of Machine Responsibility. Int. J. Technoethics (IJT) 12, 87-100. doi:10.4018/IJT.20210101.oa1

Sætra, H. S., and Fosch-Villaronga, E. (2021). Research in AI Has Implications for Society: How Do We Respond? Morals & Machines 1, 60-73. doi:10.5771/2747-5174-2021-1-60

Solaiman, S. M. (2017). Legal Personality of Robots, Corporations, Idols and Chimpanzees: A Quest for Legitimacy. Artif. Intell. L. 25, 155-179. doi:10.1007/s10506-016-9192-3

Solum, L. B. (1991). Legal Personhood for Artificial Intelligences. NCL Rev. 70, 1231.

Sparrow, R. (2007). Killer Robots. J. Appl. Philos. 24, 62-77. doi:10.1111/j.1468-5930.2007.00346.x

Tigard, D. W. (2020). There Is No Techno-Responsibility Gap. Philos. Tech., 1-19. doi:10.1007/s13347-020-00414-7

Turner, J. (2018). Robot Rules: Regulating Artificial Intelligence. Springer.
Twardawski, M., Tang, K. T. Y., and Hilbig, B. E. (2020). Is It All about Retribution? The Flexibility of Punishment Goals. Soc. Just. Res. 33, 195-218. doi:10.1007/s11211-020-00352-x

van den Hoven van Genderen, R. (2018). "Do We Need New Legal Personhood in the Age of Robots and AI?," in Robotics, AI and the Future of Law (Springer), 15-55. doi:10.1007/978-981-13-2874-9_2

Waytz, A., Heafner, J., and Epley, N. (2014). The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle. J. Exp. Soc. Psychol. 52, 113-117. doi:10.1016/j.jesp.2014.01.005
Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Copyright © 2021 Lima, Cha, Jeon and Park. This is an open-access article
distributed under the terms of the Creative Commons Attribution License (CC
BY). The use, distribution or reproduction in other forums is permitted, provided the
original author(s) and the copyright owner(s) are credited and that the original
publication in this journal is cited, in accordance with accepted academic practice.
No use, distribution or reproduction is permitted which does not comply with
these terms.
... Robot mistakes and failures have been explored in HRI research, but mostly through the lens of minimizing or recovering lost trust in the robot [1]- [6]. As robots and AI are placed into high-stakes, serious roles in which their actions can lead to real harm (e.g., healthcare robots), questions of moral and legal responsibility when the technology does something wrong are becoming a topic of great debate [7], [8]. An important piece of these discussions is understanding how users, and society Fig. 1. ...
... The construct of morality largely involves notions of right and wrong, although there is not a singular, universal definition. One consistent component of morality involves responsibility, or whether an entity deserves punishment or blame for their actions [7]- [17]. Other components involved with morality include situational awareness (e.g., moral or emotional knowledge), intentionality, desire, and free will [10], [15], [18]. ...
... Moral Judgments Questionnaire Participants completed a questionnaire consisting of questions about different aspects of morality. Each of these questions were drawn from prior literature [7]- [13], [16]- [19], [23], [31], [32], [52], [53]. The first seven questions were presented in a randomized order using a 4-point scale: definitely no, somewhat no, somewhat yes, definitely yes. ...
... For the former, recognizing AI could raise debates about other rights (Marshall, 2023), such as whether using AI is a form of slavery (Hakan Kan, 2024), limit the freedom of humans (Abbott and Sarch, 2024), and lead to loopholes (Bryson et al., 2017). For the latter, AI systems are believed to lack the emotional capabilities that make humans experience the intended consequences of compensation and punishment, have debated ownership of assets (Barbosa, 2024;Lima et al., 2021;Sueur et al., 2024), which make them less able to compensate, and lack intent to commit harm, which, according to many laws, makes them instruments instead of perpetrators. The human guardians, such as the developers or other users, can be responsible (Hakan Kan, 2024). ...
Article
Full-text available
The recent Metaverse technology boom in major areas of the public’s life makes the safety of users a pressing concern. Though the nature of the Metaverse as a geographically unbounded space blending the physical and the virtual presents new challenges for law enforcement and governance. To tackle these, this paper supports the establishment of a unified international legal framework. Specifically, from a law enforcer’s perspective, it provides the first comprehensive discussion in the past five years on legal concerns related to identity, various types of potential crimes, and challenges to unified law enforcement in the Metaverse based on prior incidents.
... Основным консультативным органом стал Европейский совет по искусственному интеллекту (AI Board) 234 из представителей государств-членов Европейского союза. ...
Book
Full-text available
Учебное пособие содержит лекционные материалы и планы семинарских занятий по темам, связанным с искусственным интеллектом, его влиянием на право и регулированием общественных отношений, возникающих вследствие развития и применения технологий искусственного интеллекта на практике. При подготовке учебного пособия были использованы как работы российских, так и иностранных исследователей-правоведов, экономистов, специалистов в области информационных технологий. В лекционной части пособия представлены не только положения, не вызывающие разногласий у исследователей, но и взгляды различных авторов на вопросы, поиск ответов к которым еще идет. Планы семинарских занятий дополнены списком рекомендуемых источников по каждой теме. В учебном пособии содержится также большой объем практических заданий, разработанных для качественного усвоения обучающимися материалов курса, примерные темы научных работ и перечень вопросов к зачету (экзамену). Учебное пособие предназначено для магистрантов юридических факультетов вузов.
... Can AI be held morally responsible? According to previous studies, individuals are inclined to blame AI for various transgressions, such as causing environmental damage [9], choosing to hit an innocent pedestrian [10], and making decisions that cause medical accidents and military harms [11]. As AIs are not social entities, this raises intriguing questions about moral blame attribution. ...
Article
Full-text available
Can artificial intelligences (AIs) be held accountable for moral transgressions? Current research examines how attributing human mind to AI influences the blame assignment to both the AI and the humans involved in real-world moral transgressions. We hypothesized that perceiving AI as having a human mind-like qualities would increase moral blame directed towards AI while decreasing blame attribution to human agents involved. Through three empirical studies—utilizing correlational methods with real-life inspired scenarios in Study 1 and employing experimental manipulations in Studies 2 and 3—our findings demonstrate that perceiving mind in AI increases the likelihood of blaming AIs for moral transgressions. We also explore whether it also diminishes the perceived culpability of human stakeholders, particularly the involved company. Our findings highlight the significance of AI mind perception as a key determinant in increasing blame attribution towards AI in instances of moral transgressions. Additionally, our research sheds light on the phenomenon of moral scapegoating, cautioning against the potential misuse of AI as a scapegoat for moral transgressions. These results emphasize the imperative of further investigating blame attribution assigned to AI entities.
Chapter
This handbook introduces readers to the emerging field of experimental jurisprudence, which applies new empirical methods to address fundamental philosophical questions in legal theory. The book features contributions from a global group of leading professors of law, philosophy, and psychology, covering a diverse range of topics such as criminal law, legal interpretation, torts, property, procedure, evidence, health, disability, and international law. Across thirty-eight chapters, the handbook utilizes a variety of methods, including traditional philosophical analysis, psychology survey studies and experiments, eye-tracking methods, neuroscience, behavioural methods, linguistic analysis, and natural language processing. The book also addresses cutting-edge issues such as legal expertise, gender and race in the law, and the impact of AI on legal practice. In addition to examining United States law, the work also takes a comparative approach that spans multiple legal systems, discussing the implications of experimental jurisprudence in Australia, Germany, Mexico, and the United Kingdom.
Chapter
The rapid adoption of artificial intelligence technologies in organizational contexts creates unprecedented opportunities as well as multiple ethical challenges. The chapter on ethical governance of AI in organizations discusses issues related to transparency, fairness, accountability, and human rights in detail. Therefore, the purpose of the chapter is to serve as a foundational resource for policymakers and organizational leaders, offering guidance on the principles, challenges, and strategies necessary for ethical AI governance. The chapter emphasizes the need for stakeholders to collaboratively develop such frameworks, tailored to their specific organizational and societal contexts. Throughout the best practice, the authors provide an overview of how organizations can reconcile innovation with values to ensure that AI advances contribute to the betterment of society and rights of individuals. Thus, the chapter gives ready solutions to the current challenges by providing organizations with guidance on how to proceed on the way to ethical AI usages.
Chapter
Robots are with us, but law and legal systems are not ready. This book identifies the issues posed by human-robot interactions in substantive law, procedural law, and law's narratives, and suggests how to address them. When human-robot interaction results in harm, who or what is responsible? Part I addresses substantive law, including the issues raised by attempts to impose criminal liability on different actors. And when robots perceive aspects of an alleged crime, can they be called as a sort of witness? Part II addresses procedural issues raised by human-robot interactions, including evidentiary problems arising out of data generated by robots monitoring humans, and issues of reliability and privacy. Beyond the standard fare of substantive and procedural law, and in view of the conceptual quandaries posed by robots, Part III offers chapters on narrative and rhetoric, suggesting different ways to understand human-robot interactions, and how to develop coherent frameworks to do that. This title is also available as Open Access on Cambridge Core.
Article
Artificial intelligence (AI) offers previously unimaginable possibilities, solving problems faster and more creatively than before, representing and inviting hope and change, but also fear and resistance. Unfortunately, while the pace of technology development and application accelerates dramatically, the understanding of its implications does not follow suit. Moreover, while mechanisms to anticipate, control, and steer AI development to prevent adverse consequences seem necessary, the current power dynamics within which society should frame such development are causing much confusion. In this article we ask whether AI advances should be restricted, modified, or adjusted based on their potential legal, ethical, and societal consequences. We examine four possible arguments in favor of subjecting scientific activity to stricter ethical and political control and critically analyze them in light of the perspective that science, ethics, and politics should strive for a division of labor and a balance of power rather than a conflation. We argue that the domains of science, ethics, and politics should not be conflated if we are to retain the ability to assess the appropriate course of action in light of AI's implications. Such conflation could lead to uncertain and questionable outcomes, such as politicized science or ethics washing, ethics constrained by corporate or scientific interests, and insufficient regulation and political activity due to a misplaced belief in industry self-regulation. As such, we argue that the different functions of science, ethics, and politics must be respected to ensure that AI development serves the interests of society.
Article
As machines powered by artificial intelligence (AI) influence humans’ behaviour in ways that are both like and unlike the ways humans influence each other, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioural science, human–computer interaction and AI research. We propose four main social roles through which both humans and machines can influence ethical behaviour. These are: role model, advisor, partner and delegate. When AI agents become influencers (role models or advisors), their corrupting power may not exceed the corrupting power of humans (yet). However, AI agents acting as enablers of unethical behaviour (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction. On the basis of these insights, we outline a research agenda to gain behavioural insights for better AI oversight. Köbis et al. outline how artificial intelligence (AI) agents can negatively influence human ethical behaviour. They discuss how this capacity of AI agents can cause problems in the future and put forward a research agenda to gain behavioural insights for better AI oversight.
Article
The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on the literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems (gaps in culpability, moral accountability, public accountability, and active responsibility) caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also arise with non-learning systems. The paper clarifies which aspect of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It offers a critical review of partial and unsatisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem (“fatalism”), those which dismiss it as a false problem (“deflationism”), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved simply by introducing new technical and/or legal tools (“solutionism”). The paper also outlines a more comprehensive approach to addressing responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control”, that is, systems aligned with the relevant human reasons and capacities.
Article
Experimental philosophy is a relatively recent discipline that employs experimental methods to investigate the intuitions, concepts, and assumptions behind traditional philosophical arguments, problems, and theories. While experimental philosophy initially served to interrogate the role that intuitions play in philosophy, it has since branched out to bring empirical methods to bear on problems within a variety of traditional areas of philosophy, including metaphysics, philosophy of language, philosophy of mind, and epistemology. To date, no connection has been made between developments in experimental philosophy and philosophy of technology. In this paper, I develop and defend a research program for an experimental philosophy of technology.
Article
Artificial intelligence and robotics are rapidly advancing. Humans are increasingly often affected by autonomous machines making choices with moral repercussions. At the same time, classical research in robotics shows that people are averse to robots that appear eerily human, a phenomenon commonly referred to as the uncanny valley effect. Yet, little is known about how machines’ appearances influence how humans evaluate their moral choices. Here we integrate the uncanny valley effect into moral psychology. In two experiments we test whether humans evaluate identical moral choices made by robots differently depending on the robots’ appearance. Participants evaluated either deontological (“rule based”) or utilitarian (“consequence based”) moral decisions made by different robots. The results provide a first indication that people evaluate moral choices by robots that resemble humans as less moral compared to the same moral choices made by humans or non-human robots: a moral uncanny valley effect. We discuss the implications of our findings for moral psychology, social robotics, and AI-safety policy.
Article
In this article, the core concepts in Thomas Hobbes's framework of representation and responsibility are applied to the question of machine responsibility, the responsibility gap, and the retribution gap. The method is philosophical analysis and involves applying theories from political theory to the ethics of technology. A veil of complexity creates the illusion that machine actions belong to a mysterious and unpredictable domain, and some argue that this unpredictability absolves designers of responsibility. Such a move would create a moral hazard related to both (a) strategically increasing unpredictability and (b) taking more risk if responsible humans do not have to bear the costs of the risks they create. Hobbes's theory allows for the clear and arguably fair attribution of action while allowing for necessary development and innovation. Innovation will be permitted as long as it is compatible with social order and provided that its beneficial effects outweigh concerns about increased risk. Questions of responsibility are here considered to be political questions.
Chapter
This chapter discusses the limits of normative ethics in new moral domains linked to the development of AI. In these new domains, people can opt out of using a machine if they do not approve of the ethics that the machine is programmed to follow. In other words, even if normative ethics could determine the best moral programs, these programs would not be adopted (and thus would have no positive impact) if they clashed with users’ preferences, a phenomenon that can be called the “ethical opt-out.” The chapter then explores various ways in which the field of moral psychology can illuminate public perception of moral AI and inform the regulation of such AI. The chapter’s main focus is on self-driving cars, but it also explores the role of psychological science in the study of other moral algorithms.