
Artificial moral and legal personhood


Abstract

This paper considers the hotly debated issue of whether one should grant moral and legal personhood to intelligent robots once they have achieved a certain standard of sophistication based on such criteria as rationality, autonomy, and social relations. The starting point for the analysis is the European Parliament’s resolution on Civil Law Rules on Robotics (2017) and its recommendation that robots be granted legal status and electronic personhood. The resolution is discussed against the background of the so-called Robotics Open Letter, which is critical of the Civil Law Rules on Robotics (and particularly of §59 f.). The paper reviews issues related to the moral and legal status of intelligent robots and the notion of legal personhood, including an analysis of the relation between moral and legal personhood in general and with respect to robots in particular. It examines two analogies, to corporations (which are treated as legal persons) and animals, that have been proposed to elucidate the moral and legal status of robots. The paper concludes that one should not ascribe moral and legal personhood to currently existing robots, given their technological limitations, but that one should do so once they have achieved a certain level at which they would become comparable to human beings.
AI & SOCIETY (2021) 36:457–471
https://doi.org/10.1007/s00146-020-01063-2
ORIGINAL ARTICLE
Artificial moral andlegal personhood
John‑StewartGordon1
Received: 9 August 2019 / Accepted: 21 August 2020 / Published online: 9 September 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
Keywords Moral personhood· Legal personhood· Moral status· Legal status· Civil law rules of robotics· EU Parliament·
Robot rights· AI robots
1 Introduction
The concept of personhood is one of the most important concepts in moral philosophy, but also one of the most controversial. If a being is considered to have moral personhood, then she necessarily has certain moral rights that others are obligated to respect (Warren 1997; Kamm 2007). The standard case is an adult human being. Less controversial but still a topic of lively debate among scholars in legal philosophy and law is the concept of legal personhood, which is fundamental to our conceptions of rights and obligations, morality, and agency. Establishing the legal personhood of a (human) being is a necessary prerequisite for ascribing rights and duties to that being. A being who possesses legal personhood thereby enjoys protection from harm and has a high moral and legal status, recognised by law.

A typically functioning adult human being has both full moral and legal personhood. Other human beings such as children, new-borns, people with severe mental impairments, and people in a non-responsive state—as well as non-human beings such as higher-functioning animals—may carry some degree of moral status, but do not have the same moral and legal personhood as that granted to a typically functioning adult human being (Dyschkant 2015). Therefore, they do not have the same moral and legal rights or obligations.

It is commonly assumed that beings (at least, human beings) with legal personhood also have moral personhood, and that beings who possess moral personhood have it because they are considered to have sufficient moral status.1 In other words, without moral status, there can be no moral and legal personhood. However, whether corporations and trust funds, which are commonly considered legal persons, also have moral personhood is debated in the realms of legal philosophy and law (Koops et al. 2010, p. 517). A related question is whether moral personhood should also be ascribed to some ships, idols, and environmental objects (such as particular rivers) that are seen as legal persons in some jurisdictions (see Sect. 4.3). Certainly, as Matthias (2008) has suggested, one should not anthropomorphise
* John-Stewart Gordon
johnstgordon@pm.me
1 Faculty of Humanities, Vytautas Magnus University, V. Putvinskio g. 23 (R 306), LT-44243 Kaunas, Lithuania
1 For an excellent overview of the notion of moral personhood against the background of moral agency and patiency, see Gunkel (2012, pp. 39–65).
... However, we specifically champion the call for empirical research into the public's expectations of punishment. Because expectations for punishment will stem from the perceived, not actual, nature of AI, we make public perception our focus, in contrast with scholarship on AI as true moral agents [37, 72, 78] or legal persons [28, 66]. It is not necessary for an AI to be truly culpable, or able to experience punishment, for humans to be motivated to enact punishment on AI. ...
... We intend only to offer a brief overview of differing perspectives on whether to punish AI itself, to substantiate our claim that public expectations of punishment must be explored. We also do not attempt to tackle the question of whether AI truly merits legal personhood [28, 66], due to our focus on public expectations. Such legal fictions may be necessary as a last resort when there is "nothing satisfactory [judges] can do" (italics our own), as Watson [74] argues for the trial of animals in the medieval era. ...
... Punishment of an AI can also communicate our beliefs about its capabilities: Abbott and Sarch [2] write that if AI is held accountable by the state, this will increase anthropomorphism of AI and thus increase beliefs about the AI's capabilities. The European Parliament's proposal to investigate legal personhood for robots received similar criticism [28]. When exploring the topic of punishment of AI, legal practitioners must be mindful of how this punishment might be interpreted by the public with regards to whether it speaks to the general capabilities of the AI as having human-like free will and culpability. ...
Preprint
Full-text available
There are countless examples of how AI can cause harm, and increasing evidence that the public are willing to ascribe blame to the AI itself, regardless of how "illogical" this might seem. This raises the question of whether and how the public might expect AI to be punished for this harm. However, public expectations of the punishment of AI have been vastly underexplored. Understanding these expectations is vital, as the public may feel the lingering effect of harm unless their desire for punishment is satisfied. We synthesise research from psychology, human-computer and -robot interaction, philosophy and AI ethics, and law to highlight how our understanding of this issue is still lacking. We call for an interdisciplinary programme of research to establish how we can best satisfy victims of AI harm, for fear of creating a "satisfaction gap" where legal punishment of AI (or not) fails to meet public expectations.
... Instead, the study identifies human agents—such as manufacturers, designers, owners, or users—as bearers of liability depending on the specific circumstances of harm. This approach aligns with Gordon's (2020) assertion that legal systems must evolve to attribute responsibility effectively while considering the unique capabilities and limitations of AI systems. ...
Article
Full-text available
The expansion of modern technologies and the consequent legal challenges necessitate aligning regulations with these domains. One such instance is artificial intelligence (AI) technology. Establishing a clear and coherent civil liability framework for AI is of social and economic significance. National laws adopt diverse approaches to addressing AI-related challenges. The research methodology is descriptive-analytical, utilizing legal material analysis and legal interpretative methods. Given the broad concepts arising from the social nature of the subject, substantial efforts have been made. Topics akin to legal concepts are explained and legally articulated. The aim of equality for all before the law necessitates a library-based research approach without sampling. Ensuring that society as a whole benefits from social provisions and finding necessary solutions fosters motivation and diligence for further studies aimed at providing scientific and legal answers. The foundational principle that can justify civil liability in robotic actions is the "principle of respect," which, compared to other bases of civil liability, encounters no theoretical or practical challenges. Furthermore, this principle constitutes a jurisprudential foundation with robust supporting evidence. On the other hand, since the primary objective of civil liability is compensation for damages, and robots lack legal or electronic personality, they cannot directly be obligated to compensate for damages. Instead, due to their non-human nature, the responsible human agent—such as the owner, possessor, hacker, manufacturer, or designer—must be identified based on the specific circumstances.
... This most recent wave shows some standard features. The topics covered are mostly the same and approached bottom-up (e.g., criminal liability, tort law, ownership; Brown, 2021; Simmler & Markwalder, 2019), but with a more significant number of scholars and scientific perspectives, with ethics, public policy and social sciences gaining an important role (Gordon, 2021; van den Hoven van Genderen, 2018), advocating for more interdisciplinary research (Kostenko et al., 2024). This period has also seen the emergence of empirical studies examining public attitudes toward AI rights and legal personhood (Kouravanas & Pavlopoulos, 2022; Martínez & Winter, 2021), as well as statistical analyses of court decisions regarding legal personhood (Banteka, 2020). ...
Article
Full-text available
This paper examines the debate on AI legal personhood, emphasizing the role of path dependencies in shaping current trajectories and prospects. Three primary path dependencies emerge: prevailing legal theories on personhood (singularist vs. clustered), the actual participation of AI in socio-digital institutions (instrumental vs. non-instrumental), and the impact of technological advancements. We argue that these factors dynamically interact, with technological optimism fostering broader attribution of the legal entitlements to AI entities and periods of scepticism narrowing such entitlements. Additional influences include regulatory cross-linkages (e.g., data privacy, liability, cybersecurity) and historical legal precedents. Current regulatory frameworks, particularly in the EU, generally resist extending legal personhood to AI systems. Case law suggests that without explicit legislation, courts are unlikely to grant AI legal personhood on their own, although some authors suggest that the courts can do so. For this to happen, AI systems would first need to prove de facto legitimacy through sustained participation within socio-digital institutions. The chapter concludes by assessing near- and long-term prospects for legal personification, from generative AI and AI agents in the next 5–20 years to transformative possibilities such as AI integration with human cognition via Brain-Machine Interfaces in a more distant future.
Chapter
AI in Society provides an interdisciplinary corpus for understanding artificial intelligence (AI) as a global phenomenon that transcends geographical and disciplinary boundaries. Edited by a consortium of experts hailing from diverse academic traditions and regions, the 11 edited and curated sections provide a holistic view of AI’s societal impact. Critically, the work goes beyond the often Eurocentric or U.S.-centric perspectives that dominate the discourse, offering nuanced analyses that encompass the implications of AI for a range of regions of the world. Taken together, the sections of this work seek to move beyond the state of the art in three specific respects. First, they venture decisively beyond existing research efforts to develop a comprehensive account and framework for the rapidly growing importance of AI in virtually all sectors of society. Going beyond a mere mapping exercise, the curated sections assess opportunities, critically discuss risks, and offer solutions to the manifold challenges AI harbors in various societal contexts, from individual labor to global business, law and governance, and interpersonal relationships. Second, the work tackles specific societal and regulatory challenges triggered by the advent of AI and, more specifically, large generative AI models and foundation models, such as ChatGPT or GPT-4, which have so far received limited attention in the literature, particularly in monographs or edited volumes. Third, the novelty of the project is underscored by its decidedly interdisciplinary perspective: each section, whether covering Conflict; Culture, Art, and Knowledge Work; Relationships; or Personhood—among others—will draw on various strands of knowledge and research, crossing disciplinary boundaries and uniting perspectives most appropriate for the context at hand.
Chapter
This chapter explores the moral status of artificial intelligence through the lens of Ted Chiang’s The Lifecycle of Software Objects. Using the concept of moral status, I examine whether Chiang’s digients—intelligent, evolving AI beings—should be afforded ethical consideration. Drawing from philosophical theories on personhood, sentience, and rights, I analyze the implications of granting AI moral status and whether they should be treated as animals, children, or legal persons. I argue that AI with cognitive and emotional capacities warrants protection and distinct rights, challenging conventional ethical boundaries between human, machine, and moral responsibility in an increasingly digital world.
Chapter
Full-text available
The new avatars and bots modeled after humans, the large language models with a “persona,” and the seemingly autonomously acting robots raise the question of whether AI technologies can also possess personhood or at least be part of our personhood. Do we extend our personhood through living or death bots in the digital realm? This article explores the application of the moral concept of personhood to AI technologies. It presents a twofold thesis: first, it illustrates, through various examples, how the concept of personhood is being disrupted in the context of AI technologies. Second, it discusses the potential evolution of the concept and argues for abandoning the personhood concept in AI ethics, based on reasons such as its vagueness, harmful and discriminatory character, and disconnection from society. Finally, the article outlines future perspectives for approaches moving forward, emphasizing the need for conceptual justice in moral concepts.
Article
Full-text available
This article challenges the dominant ‘black box’ metaphor in critical algorithm studies by proposing a phenomenological framework for understanding how social media algorithms manifest themselves in user experience. While the black box paradigm treats algorithms as opaque, self-contained entities that exist only ‘behind the scenes’, this article argues that algorithms are better understood as genetic phenomena that unfold temporally through user-platform interactions. Recent scholarship in critical algorithm studies has already identified various ways in which algorithms manifest in user experience: through affective responses, algorithmic self-reflexivity, disruptions of normal experience, points of contention, and folk theories. Yet, while these studies gesture toward a phenomenological understanding of algorithms, they do so without explicitly drawing on phenomenological theory. This article demonstrates how phenomenology, particularly a Husserlian genetic approach, can further conceptualize these already-documented algorithmic encounters. Moving beyond both the paradigm of artifacts and static phenomenological approaches, the analysis shows how algorithms emerge as inherently relational processes that co-constitute user experience over time. By reconceptualizing algorithms as genetic phenomena rather than black boxes, this paper provides a theoretical framework for understanding how algorithmic awareness develops from pre-reflective affective encounters to explicit folk theories, while remaining inextricably linked to users’ self-understanding. This phenomenological framework contributes to a more nuanced understanding of algorithmic mediation in contemporary social media environments and opens new pathways for investigating digital technologies.
Technical Report
Full-text available
New entities in the information society, such as pseudonyms, avatars, software agents, and robots, create an 'accountability gap' because they operate at increasing distance from their principals. One way of addressing this is to attribute legal rights and/or duties in some contexts to non-humans, thus creating entities that are addressable in law themselves rather than the persons 'behind' them. In this article, we review existing literature on rights for non-humans, with a particular focus on emerging entities in the information society. We discuss three strategies for the law to deal with the challenge of these new entities: interpreting and extending existing law, introducing limited legal personhood with strict liability, and granting full legal personhood. To assess these strategies, we distinguish between different types of persons (abstract, legal, and moral) and different types of agency (automatic, autonomic, and autonomous). We conclude that interpretation and extension of the law seems to work well enough with today's emerging entities, but that sooner or later, attributing limited legal personhood with strict liability is probably a good solution to bridge the accountability gap for autonomic entities; for software agents, this may be sooner rather than later. The technology underlying new entities will, however, have to develop considerably further from facilitating autonomic to facilitating autonomous behavior, before it becomes legally relevant to attribute 'posthuman' rights to new entities.
Article
Full-text available
The concept of artificial intelligence is not new nor is the notion that it should be granted legal protections given its influence on human activity. What is new, on a relative scale, is the notion that artificial intelligence can possess citizenship—a concept reserved only for humans, as it presupposes the idea of possessing civil duties and protections. Where there are several decades’ worth of writing on the concept of the legal status of computational artificial artefacts in the USA and elsewhere, it is surprising that law makers internationally have come to a standstill to protect our silicon brainchildren. In this essay, it will be assumed that future artificial entities, such as Sophia the Robot, will be granted citizenship on an international scale. With this assumption, an analysis of rights will be made with respect to the needs of a non-biological intelligence possessing legal and civic duties akin to those possessed by humanity today. This essay does not present a full set of rights for artificial intelligence—instead, it aims to provide international jurisprudence evidence aliunde ab extra de lege lata for any future measures made to protect non-biological intelligence.
Book
A provocative attempt to think about what was previously considered unthinkable: a serious philosophical case for the rights of robots. We are in the midst of a robot invasion, as devices of different configurations and capabilities slowly but surely come to take up increasingly important positions in everyday social reality—self-driving vehicles, recommendation algorithms, machine learning decision making systems, and social robots of various forms and functions. Although considerable attention has already been devoted to the subject of robots and responsibility, the question concerning the social status of these artifacts has been largely overlooked. In this book, David Gunkel offers a provocative attempt to think about what has been previously regarded as unthinkable: whether and to what extent robots and other technological artifacts of our own making can and should have any claim to moral and legal standing. In his analysis, Gunkel invokes the philosophical distinction (developed by David Hume) between “is” and “ought” in order to evaluate and analyze the different arguments regarding the question of robot rights. In the course of his examination, Gunkel finds that none of the existing positions or proposals hold up under scrutiny. In response to this, he then offers an innovative alternative proposal that effectively flips the script on the is/ought problem by introducing another, altogether different way to conceptualize the social situation of robots and the opportunities and challenges they present to existing moral and legal systems.
Book
Bioethics was “born in the USA” and the values American bioethics embrace are based on American law, including liberty and justice. This book crosses the borders between bioethics and law, but moves beyond the domestic law/bioethics struggles for dominance by exploring attempts to articulate universal principles based on international human rights. The isolationism of bioethics in the US is not tenable in the wake of scientific triumphs like decoding the human genome, and civilizational tragedies like international terrorism. Annas argues that by crossing boundaries which have artificially separated bioethics and health law from the international human rights movement, American bioethics can be reborn as a global force for good, instead of serving mainly the purposes of U.S. academics. This thesis is explored in a variety of international contexts such as terrorism and genetic engineering, and in U.S. domestic disputes such as patient rights and market medicine. The citizens of the world have created two universal codes: science has sequenced the human genome and the United Nations has produced the Universal Declaration of Human Rights. The challenge for American bioethics is to combine these two great codes in imaginative and constructive ways to make the world a better, and healthier, place to live.
Article
Purpose: The purpose of this paper is to examine and comment on disability rights legislation by focusing on international documents on people with impairments of the last decades, in order to provide more information on the dynamics of the disability rights movement and their moral plea for full inclusion. Design/methodology/approach: By analyzing the international legislation and most important guidelines with respect to people with impairments, it is possible to portray a socio-political change by unfolding the agenda of the historical dimension of the decisive events. Findings: The long and difficult struggle of people with impairments to become beneficiaries of full human rights protection is a fundamental socio-political change that is documented by adhering to important international legislation and guidelines. Originality/value: The examination of recent international legislation with respect to people with impairments provides historical context for current developments in the context of disability and full inclusion by conceding human rights as their moral and legal foundation.