Article · PDF Available

Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy

Abstract

Robots are now associated with various aspects of our lives. These sophisticated machines have been used increasingly in manufacturing industries and services sectors for decades. During this time, they have caused significant harm to humans, prompting questions of liability. Industrial robots are presently regarded as products for liability purposes. In contrast, some commentators have proposed that robots be granted legal personality, with the overarching aim of exonerating the creators and users of these artefacts from liability. This article is concerned mainly with industrial robots that exercise some degree of self-control as programmed, though the creation of fully autonomous robots is still a long way off. Proponents of robot personality generally compare these machines with corporations, and sporadically with, inter alia, animals and idols, in substantiating their arguments. This article discusses the attributes of legal personhood and the justifications for the separate personality of corporations and idols. It then sets out the reasons for which personality has been refused to animals. It concludes that robots are ineligible to be persons, based on the requirements of personhood.
Citation:
S M Solaiman (2017) 'Legal Personality of Robots, Corporations, Idols and Chimpanzees: A Quest for Legitimacy' 25(2) Artificial Intelligence and Law pp 155–179
Please click on the following link for the full text:
hp://rdcu.be/yUKz
... Expert assessments are also divided about whether and how to attribute moral, legal, or political status to AI systems (Asaro 2007; Chopra and White 2011; Coeckelbergh 2010; Danaher 2020; Darling 2012; Delcker 2018; Friedman 2023; Gordon and Pasvenskiene 2021; Guingrich and Graziano 2024; Gunkel 2012, 2018; Kurki 2019; Mamak 2022, 2023; Miller 2015; Müller 2021; Schwitzgebel and Garza 2015; Sebo and Long 2023; Shevlin 2021; Solaiman 2017). Whether an entity has subjective experience is generally viewed as relevant to how we should treat that entity (Gibert and Martin 2021; Mazor et al. 2023; Shepherd 2018). ...
... As with the empirical questions raised by the possibility of artificial minds, academic work on the moral and legal standing of artificial systems relative to their characteristics and mental capacities has also been contentious, covering a wide range of viewpoints (Asaro 2007; Bryson 2010; Chopra and White 2011; Coeckelbergh 2010; Danaher 2020; Darling 2012; Friedman 2023; Gordon and Pasvenskiene 2021; Guingrich and Graziano 2024; Gunkel 2012; Kurki 2019; Mamak 2022, 2023; Miller 2015; Müller 2021; Schwitzgebel and Garza 2015; Sebo and Long 2023; Shevlin 2021; Solaiman 2017). At the same time, the number of related publications and research efforts has steadily been growing and accelerating since the early 2000s (Harris and Anthis 2021; Malle 2016). ...
Preprint
Full-text available
We surveyed 582 AI researchers who have published in leading AI venues and 838 nationally representative US participants about their views on the potential development of AI systems with subjective experience and how such systems should be treated and governed. When asked to estimate the chances that such systems will exist on specific dates, the median responses were 1% (AI researchers) and 5% (public) by 2024, 25% and 30% by 2034, and 70% and 60% by 2100, respectively. The median member of the public thought there was a higher chance that AI systems with subjective experience would never exist (25%) than the median AI researcher did (10%). Both groups perceived a need for multidisciplinary expertise to assess AI subjective experience. Although support for welfare protections for such AI systems exceeded opposition, it remained far lower than support for protections for animals or the environment. Attitudes toward moral and governance issues were divided in both groups, especially regarding whether such systems should be created and what rights or protections they should receive. Yet a majority of respondents in both groups agreed that safeguards against the potential risks from AI systems with subjective experience should be implemented by AI developers now, and if created, AI systems with subjective experience should treat others well, behave ethically, and be held accountable. Overall, these results suggest that both AI researchers and the public regard the emergence of AI systems with subjective experience as a possibility this century, though substantial uncertainty and disagreement remain about the timeline and appropriate response.
... The historical dynamics of the concept of legal personality were examined, and, as S.M. Solaiman notes: "The concept of legal personality has been historically mutable, encompassing new kinds as society and technology have developed" [11]. International experience regarding the legal status of artificial intelligence, in particular the legislation and case law of the USA, the European Union, South Korea, and other developed countries, was analysed in depth. ...
... The methods applied in the research have their own advantages and limitations when compared with the approaches of various scholars. The comparative-legal method used in this study is similar to the approaches of S.M. Solaiman [11] and M.U. Scherer [8], and made it possible to examine the experience of different countries. ...
Article
This scientific article examines the legal status of artificial intelligence systems and autonomous vehicles in modern society. The article comprehensively examines the concept of legal personality and its historical evolution, international experience in the legal status of artificial intelligence, as well as existing regulatory legal acts in the legislation of Uzbekistan, including the Civil Code of the Republic of Uzbekistan and the Laws “On Personal Data” and “On Insurance Activities,” as well as issues of liability for damage caused by autonomous vehicles. As a result of the research, the concept of granting artificial intelligence the status of an “electronic person” was discussed, and the need to create a special legal regime for autonomous vehicles was substantiated. The article analyzes the statistics of violations and damage caused by autonomous vehicles and examines the legislative experience of more than 45 countries in the field of artificial intelligence. The paper also examines such scientific problems as terminological inconsistency, the lack of empirical data, and the complexity of the interdisciplinary approach. The article proposes an approach based on the principles of technological neutrality, security priority, and a clear definition of the system of responsibility. In addition, the concept of the “electronic person fund” was proposed in a model adapted to the conditions of Uzbekistan, and the need to implement international standards into national legislation was emphasized.
... Notwithstanding the above, there are evident differences, the principal one being that when we refer to animals we are speaking of a living being, whereas the other is a man-made product (Solaiman, 2017). Another important difference is that a robot will act on the basis of algorithms while animals act on instinct (Čerka et al., 2015); that is, a robot will not necessarily behave unpredictably. ...
Chapter
Full-text available
The chapter titled "Conflictos socioambientales y minería en la 'estrella hídrica del sur' del Ecuador: El caso de la cordillera de Fierro Urco" analyses the emerging conflicts arising from mining expansion in the Fierro Urco mountain range, a strategic ecosystem located between the provinces of Loja and El Oro. The study adopts a qualitative approach, with interviews of key actors and content analysis using the NVivo software, to identify the tensions generated among communities, public institutions, and mining companies. The chapter identifies three main types of conflict: intragroup, intergroup, and conflicts of interest, which stem from the lack of prior consultation, the unconsulted granting of concessions, the threat to water sources, and community polarisation. The authors highlight that Fierro Urco is considered a sacred territory by peasant and Indigenous peoples, whose ways of life are threatened by extractive activities. As an alternative, the implementation of a citizen oversight body is proposed to strengthen local governance and monitor the impact of mining. The chapter shows how mining transforms territorialities, violates collective rights, and generates new forms of collective action in defence of water and territory.
... Some scholars propose granting AI systems limited legal personhood akin to corporations. This would allow them to bear rights and obligations, including liability (Solaiman, 2017). Critics argue that this could enable companies to offload responsibility onto entities they control (Yeung, 2018). ...
Article
Full-text available
The proliferation of autonomous artificial intelligence (AI) technologies presents profound legal challenges, particularly concerning accountability and liability. Autonomous AI systems-ranging from self-driving vehicles to algorithmic decision-makers in healthcare, finance, and public administration-operate with varying degrees of human oversight, raising critical questions about responsibility when these systems cause harm. Traditional legal doctrines, rooted in human-centric accountability, are increasingly strained in assigning liability to AI systems that demonstrate a level of autonomy in decision-making processes. This article examines current regulatory frameworks and proposes legal strategies for addressing the complexities introduced by autonomous AI. It explores diverse legal models, such as strict liability, negligence, and vicarious liability, as well as novel approaches including AI personhood and insurance-based solutions. A comparative analysis is conducted between the European Union's Artificial Intelligence Act and the United States' sectoral regulatory approach, highlighting the implications of each on cross-border AI governance. The article argues that to ensure justice, legal certainty, and continued innovation, the legal system must evolve to include a hybrid regulatory framework integrating ethical oversight, technical standards, and legal responsibility. This includes clarifying the roles of developers, deployers, and users, while ensuring victims of AI-induced harm are not left without recourse. Furthermore, regulatory approaches must consider emerging risks such as algorithmic bias, data privacy violations, and cybersecurity threats. In concluding, the article calls for international collaboration to harmonize AI liability frameworks and ensure that regulation keeps pace with technological advancement while upholding human rights and the rule of law.
Chapter
This handbook introduces readers to the emerging field of experimental jurisprudence, which applies new empirical methods to address fundamental philosophical questions in legal theory. The book features contributions from a global group of leading professors of law, philosophy, and psychology, covering a diverse range of topics such as criminal law, legal interpretation, torts, property, procedure, evidence, health, disability, and international law. Across thirty-eight chapters, the handbook utilizes a variety of methods, including traditional philosophical analysis, psychology survey studies and experiments, eye-tracking methods, neuroscience, behavioural methods, linguistic analysis, and natural language processing. The book also addresses cutting-edge issues such as legal expertise, gender and race in the law, and the impact of AI on legal practice. In addition to examining United States law, the work also takes a comparative approach that spans multiple legal systems, discussing the implications of experimental jurisprudence in Australia, Germany, Mexico, and the United Kingdom.
Article
Full-text available
This article explores how the economic analysis of law method can improve the understanding of the structure of legal entities. While economic analysis cannot address all legal issues, it can serve as a supplementary tool for evaluating the effectiveness and necessity of legal norms and their implementation. As a methodological foundation, economic analysis can be useful in examining societal phenomena. The study emphasises that the tools of modern economic theory are employed in both economic and legal studies of legal entities to provide economic substantiation and explanation of their nature. It is therefore evident that the fundamental nature of a legal entity is economically substantiated not solely by cost-effectiveness (where revenues equal or exceed expenditures) but also by productive appropriation or the sphere of dominance by legally capable organisations. The conclusion drawn from the economic analysis of law is that a legal entity can be understood as follows: firstly, a system of contracts based on relationships between founders (participants), managers, and the legal entity itself, which has advantages and disadvantages in various economic contexts; secondly, a tool for separating property to limit risks for the real (physical) persons behind it. Consequently, the concept of a legal entity as an extension of its founders' individualism (egoism) remains pertinent, as recent events have vividly demonstrated. The present analysis explores the question of whether a legal entity is solely a means of property separation to limit the risks of property loss by real individuals. From an economic perspective, this assertion is valid, as the risk undertaken by the founders is confined to the assets transferred to the entity. The economic risk is realised indirectly through potential inefficiencies (losses) in the organisation's operations. 
The subject of the research is the application of economic analysis of law to the understanding and conceptualisation of legal entities. The purpose of the research is to examine how economic analysis of law contributes to the interpretation of legal entities, particularly in justifying their structure, function, and essence in terms of economic rationality and risk limitation. The research methodology is based on a combination of analytical approaches tailored to examine the economic and legal dimensions of the subject. The economic analysis of law serves as the principal methodological framework, allowing for the evaluation of economic efficiency, clarification of the nature and function of legal entities, and informed managerial decision-making. The logical method is applied through analysis and synthesis of global academic theories, contributing to an understanding of the limitations and supplementary role of economic analysis within legal research. The systemic-structural method ensures a holistic and consistent examination of legal categories, enabling the identification of defining characteristics of legal entities from an economic standpoint. Quantitative and qualitative analysis techniques are employed to track changes in economic activity, evaluate the influence of various factors, and observe development trends across economic structures, thereby supporting the investigation of both internal and external drivers of business and legal processes. Finally, comparative analysis is used to contrast national and international perspectives in the field of legal and economic thought, through integration of diverse scholarly approaches.
Chapter
AI in Society provides an interdisciplinary corpus for understanding artificial intelligence (AI) as a global phenomenon that transcends geographical and disciplinary boundaries. Edited by a consortium of experts hailing from diverse academic traditions and regions, the 11 edited and curated sections provide a holistic view of AI’s societal impact. Critically, the work goes beyond the often Eurocentric or U.S.-centric perspectives that dominate the discourse, offering nuanced analyses that encompass the implications of AI for a range of regions of the world. Taken together, the sections of this work seek to move beyond the state of the art in three specific respects. First, they venture decisively beyond existing research efforts to develop a comprehensive account and framework for the rapidly growing importance of AI in virtually all sectors of society. Going beyond a mere mapping exercise, the curated sections assess opportunities, critically discuss risks, and offer solutions to the manifold challenges AI harbors in various societal contexts, from individual labor to global business, law and governance, and interpersonal relationships. Second, the work tackles specific societal and regulatory challenges triggered by the advent of AI and, more specifically, large generative AI models and foundation models, such as ChatGPT or GPT-4, which have so far received limited attention in the literature, particularly in monographs or edited volumes. Third, the novelty of the project is underscored by its decidedly interdisciplinary perspective: each section, whether covering Conflict; Culture, Art, and Knowledge Work; Relationships; or Personhood—among others—will draw on various strands of knowledge and research, crossing disciplinary boundaries and uniting perspectives most appropriate for the context at hand.
Article
Full-text available
Background: Use of robotic systems for minimally invasive surgery has rapidly increased during the last decade. Understanding the causes of adverse events and their impact on patients in robot-assisted surgery will help improve systems and operational practices to avoid incidents in the future. Methods: By developing an automated natural language processing tool, we performed a comprehensive analysis of the adverse events reported to the publicly available MAUDE database (maintained by the U.S. Food and Drug Administration) from 2000 to 2013. We determined the number of events reported per procedure and per surgical specialty, the most common types of device malfunctions and their impact on patients, and the potential causes for catastrophic events such as patient injuries and deaths. Results: During the study period, 144 deaths (1.4% of the 10,624 reports), 1,391 patient injuries (13.1%), and 8,061 device malfunctions (75.9%) were reported. The numbers of injury and death events per procedure have stayed relatively constant (mean = 83.4, 95% confidence interval (CI), 74.2-92.7 per 100,000 procedures) over the years. Surgical specialties for which robots are extensively used, such as gynecology and urology, had lower numbers of injuries, deaths, and conversions per procedure than more complex surgeries, such as cardiothoracic and head and neck (106.3 vs. 232.9 per 100,000 procedures, Risk Ratio = 2.2, 95% CI, 1.9-2.6). Device and instrument malfunctions, such as falling of burnt/broken pieces of instruments into the patient (14.7%), electrical arcing of instruments (10.5%), unintended operation of instruments (8.6%), system errors (5%), and video/imaging problems (2.6%), constituted a major part of the reports. Device malfunctions impacted patients in terms of injuries or procedure interruptions. 
In 1,104 (10.4%) of all the events, the procedure was interrupted to restart the system (3.1%), to convert the procedure to non-robotic techniques (7.3%), or to reschedule it (2.5%). Conclusions: Despite widespread adoption of robotic systems for minimally invasive surgery in the U.S., a non-negligible number of technical difficulties and complications are still being experienced during procedures. Adoption of advanced techniques in design and operation of robotic surgical systems and enhanced mechanisms for adverse event reporting may reduce these preventable incidents in the future.
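The per-procedure rates and the risk ratio quoted above follow from straightforward arithmetic. The sketch below recomputes them; the event and procedure counts are illustrative assumptions chosen only to reproduce the published rates (106.3 vs. 232.9 per 100,000; RR = 2.2), not figures from the MAUDE data itself.

```python
# Sketch: reproducing the per-procedure adverse-event rates and the
# risk ratio reported in the MAUDE analysis. The counts below are
# hypothetical, chosen to match the published rates.

def rate_per_100k(events: int, procedures: int) -> float:
    """Adverse events per 100,000 procedures."""
    return events / procedures * 100_000

# Hypothetical counts: gynecology/urology vs. cardiothoracic/head-and-neck.
low_risk = rate_per_100k(events=1_063, procedures=1_000_000)    # 106.3
high_risk = rate_per_100k(events=2_329, procedures=1_000_000)   # 232.9

# Risk ratio: how many times more likely an event is in complex surgeries.
risk_ratio = high_risk / low_risk

print(f"{low_risk:.1f} vs {high_risk:.1f} per 100,000; RR = {risk_ratio:.1f}")
```

Run as-is, this prints `106.3 vs 232.9 per 100,000; RR = 2.2`, matching the rates and risk ratio stated in the abstract.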
Book
This book explores how the design, construction, and use of robotics technology may affect today’s legal systems and, more particularly, matters of responsibility and agency in criminal law, contractual obligations, and torts. By distinguishing between the behaviour of robots as tools of human interaction, and robots as proper agents in the legal arena, jurists will have to address a new generation of “hard cases.” General disagreement may concern immunity in criminal law (e.g., the employment of robot soldiers in battle), personal accountability for certain robots in contracts (e.g., robo-traders), much as clauses of strict liability and negligence-based responsibility in extra-contractual obligations (e.g., service robots in tort law). Since robots are here to stay, the aim of the law should be to wisely govern our mutual relationships.
Article
The growing use of artificial intelligence (AI) software and robots in the commercial, industrial, military, medical, and personal spheres has triggered a broad conversation about human relationships with these entities. There is a deep and common concern in modern society about AI technology and the ability of existing social and legal arrangements to cope with it. What are the legal ramifications if an AI software program or robotic entity causes harm? Although AI and robotics are making their way into everyday modern life, there is little comprehensive analysis about assessing liability for robots, machines, or software that exercise varying degrees of autonomy. Gabriel Hallevy develops a general and legally sophisticated theory of the criminal liability for AI and robotics that covers the manufacturer, programmer, user, and all other entities involved. Identifying and selecting analogous principles from existing criminal law, Hallevy proposes specific ways of thinking through criminal liability for a diverse array of autonomous technologies in a diverse set of circumstances.
Article
In this article the authors explore the various ways in which robot behaviour is regulated. A distinction is drawn between imposing regulations on robots, imposing regulation by robots, and imposing regulation in robots. Two angles are looked at in depth: regulation that aims at influencing human behaviour and regulation whose scope is robots' behaviour. The artificial agency of robots requires designers and regulators to look at the issue of how to regulate robot behaviour in a way that renders it compliant with legal norms. Regulation by design offers a means for this. Returning to Asimov's three laws of robotics, which have been widely neglected by hands-on roboticists, the idea of artificial agency is explored through the example of automated cars. Whilst practical issues such as space locomotion, obstacle avoidance and automatic learning are important, robots also have to observe social and legal norms. For example, social robots in hospitals are expected to observe social rules, and robotic dust cleaners scouring the streets for waste as well as automated cars will have to observe traffic regulations.