Article

Law As Computation in the Era of Artificial Legal Intelligence. Speaking Law to the Power of Statistics

... However, technology and innovation may change the traditional image of the judicial function, remove judges from the job entirely, and replace them with a robot (Alarie, 2016). The judge's role in this scenario would therefore become less interactive (Economou & Hedin, 2020; Hildebrandt, 2018). Although the idea of the robot judge or "artificial intelligence judge" is still new or in its infancy, there are signs that interest in the concept is steadily growing, and many studies and efforts are being made to support the idea and apply it to some ongoing cases, despite its lack of support and popularity in society in general and the legal community in particular (Ziobroń, 2021; Köbis et al., 2022). ...
... The law must specify which detained persons will be released and which will be detained until trial (Ziobroń, 2021). There is also the risk of flight: the possibility that the arrested person will escape if released is one of the main costs weighed in bail decisions (Hildebrandt, 2018). ...
... However, if enough data is collected, AI will likely be good at discovering these connections between related variables and traits. Otherwise, the task is largely automated, apart from the requirement that someone specify what data and information will be collected, and perhaps how it will be classified, for the AI to work (Hildebrandt, 2018). At this stage, Judge A may be unable to determine which method to use and, therefore, cannot direct the other judges (Ziobroń, 2021). ...
Article
Full-text available
Objective: The paper addresses how artificial intelligence can play this distinctive role at present and the vision for its development in the future. Theoretical Framework: The paper addresses the advantages enjoyed by human judges in handling cases, issuing judicial rulings and decisions, and resolving disputes. Additionally, it discusses artificial intelligence's place in this area and the benefits it can bring to promote fairness, impartiality, and objectivity in court decisions. Method: The analytical approach was used to extract the working mechanisms of artificial intelligence in this field and to identify cases of bias based on practical experience in some countries, such as the United States of America and China. Results and Discussion: Artificial intelligence plays a significant role in all fields, including the legal and judicial. However, we cannot rely on AI systems entirely or replace human judges with them. Research Implications: This paper examines the controversy surrounding the replacement of human judges with robotic judges supported by artificial intelligence technology, powerful algorithms, and big data. Originality/Value: This study contributes to the literature by explaining how artificial intelligence can play a significant role in all fields, including the legal and judicial.
... There is much that is distorted in a legal system, he notes, if it is described as a command, leaving no place, for instance, for the notion of rights. The attempted translation of all legal phenomena into the language of commands is misleading and distorts instead of illuminating. ...
... Even when the addressees of rules are non-human or collective entities, like unions, states, corporations or intelligent machines, rules still ultimately determine acts to be performed or avoided by human beings. ...
Article
Full-text available
Artificial intelligence (AI) decision-making systems are already being extensively used to make decisions in situations where legal rules are applied to establish rights and obligations. In the United States, algorithmic systems are employed to determine individuals' rights to disability benefits, to evaluate the performance of employees and select who will be fired, and to assist judges in granting or denying bail and probation. In this paper I explore some possible implications of H. L. A. Hart's theory of law as a system of primary and secondary rules for the ongoing debate on the viability and the limits of an adjudicating artificial intelligence. Although much has recently been discussed about the potential practical roles of artificial intelligence in legal practice and assisted decision making, the implications for general jurisprudence still require further development. I try to map some issues of general jurisprudence that may be consequential to the question of whether a non-human entity (an artificial intelligence) would be theoretically able to perform the kind of legal reasoning made by human judges.
... Moses (2020), for instance, points out that "not every tool that can do something should be used in all circumstances" (p. 216), and Hildebrandt (2018) highlights the necessity of arguing about the application of the law, as it would be a grave mistake to conflate a "mathematical simulation of a legal judgment" with a legal judgment itself (p. 23). ...
... Both the use of vague terms (such as "reasonableness" or "good faith") and the evolution of our understanding of law with changing times and changing interpretations of social norms, where changing needs are addressed in particular through case law, are a feature of law and not a bug (Hildebrandt 2020). Losing this flexibility of the law to accommodate such "moving targets" is an issue that has been raised in the context of APR (Cobbe 2020; Hildebrandt 2015, 2020; Moses 2020). The first point on vagueness highlights a core difficulty that technical research on implementing APR has faced. ...
... For law, including positive law, cannot be otherwise defined than as a system and an institution whose very meaning is to serve justice". Without wanting to open a discussion on the distinction between descriptive and normative aspects of the law (see Shapiro 2011), it is important to highlight the rich legal discourse delving into these properties of legal systems (Hildebrandt 2015; Moore 2020; Radbruch 2006; Waldron 2001). Interestingly, the perception of how to balance such attributes against each other is also tied to the historical environment in which these discussions took place, namely the fascist regime under which legal certainty had a much higher prominence than any idea of justice (Bix 2011). ...
Article
Full-text available
The field of computational law has increasingly moved into the focus of the scientific community, with recent research analysing its issues and risks. In this article, we seek to draw a structured and comprehensive list of societal issues that the deployment of automatically processable regulation could entail. We do this by systematically exploring attributes of the law that are being challenged through its encoding and by taking stock of what issues current projects in this field raise. This article adds to the current literature not only by providing a needed framework to structure arising issues of computational law but also by bridging the gap between theoretical literature and practical implementation. Key findings of this article are: (1) The primary benefit (efficiency vs. accessibility) sought after when encoding law matters with respect to the issues such an endeavor triggers; (2) Specific characteristics of a project—project type, degree of mediation by computers, and potential for divergence of interests—each impact the overall number of societal issues arising from the implementation of automatically processable regulation.
... This disturbance transcends questions about methods and the nature of legal knowledge, reaching fundamental inquiries about the very nature and structure of legal reality. This is because the introduction of data science in the legal field not only challenges the understanding of how law is known but questions what constitutes legal reality itself, generating an ontological disturbance that emerges when traditional legal categories, constructed over centuries of legal thought, confront new forms of representation and analysis provided by data science (Hildebrandt, 2018). ...
... This recognition requires a shift from a static, essentialist view of legal categories to a more dynamic, relational understanding of legal ontology. Legal entities and concepts must be seen not as fixed, pre-given realities but as emergent, context-dependent constructs that are shaped by the socio-technical practices in which they are embedded (Hildebrandt, 2018). ...
Chapter
Full-text available
False information, including misinformation and disinformation, is being recognized as a severe global risk anticipated over the coming years. Access to generative artificial intelligence (AI) has dramatically increased the capacity for creating and disseminating falsified information. This is further compounded by the algorithmic promotion of divisive content and the creation of filter bubbles, leading to a precarious environment. We analyse the role of AI in exacerbating the false information crisis, evaluate regulatory responses to false information across various jurisdictions, and propose strategic policy recommendations for the Global Majority to effectively counter the threats of misinformation and disinformation in the age of AI.
Book
Full-text available
This volume is the result of a participatory process developed by the Data and Artificial Intelligence Governance (DAIG) Coalition of the United Nations Internet Governance Forum (IGF). The views and opinions expressed in this volume are those of the authors and do not necessarily reflect those of the United Nations Secretariat. The designations and terminology employed may not conform to United Nations practice and do not imply the expression of any opinion whatsoever on the part of the Organization. For any comments on the chapters of this volume, please contact the authors or the editors.
... In the context of determining the applicable law in cross-border disputes, AI systems can play a significant role in analysing and interpreting complex legal data, identifying relevant laws and regulations, and predicting the outcome of disputes. (Hildebrandt M, 2019) However, the use of AI technology in the legal system also poses several challenges, including issues related to data privacy, algorithmic transparency, and accountability. (Kuner C, 2019) As a result, the intersection of the Rome II Regulation and AI raises important questions about how the legal framework should adapt to the growing use of AI in cross-border disputes. ...
... AI algorithms may be opaque, making it difficult to understand how a particular decision was reached (Hildebrandt M, 2019). This can be problematic in cases where the determination of the applicable law has a significant impact on the outcome of a dispute. The data used to develop intelligent systems might also introduce bias and produce unreliable or unjust results. ...
Article
The rapid advancement of artificial intelligence (AI) has significantly impacted choice of legislation issues in international litigation under the Rome II Regulation. This research paper aims to analyse the impact of AI on the application of the Rome II Regulation and identify the challenges it poses to the current legal framework. The paper will first examine the fundamental principles of the Rome II Regulation and its application to cross-border disputes. It will then explore the role of AI in determining the applicable law, focusing on the challenges faced by courts in applying the Regulation to disputes involving AI systems. The study will also assess the potential implications of AI on the interpretation and application of the Rome II Regulation's provisions on non-contractual obligations. Additionally, it will analyse the suitability of the Rome II Regulation for regulating disputes arising from the use of AI systems. Finally, the research paper will offer recommendations on how to address the challenges posed by AI choice of legislation issues in international litigation under the Rome II Regulation. The study is expected to contribute to the current understanding of the impact of AI on the legal system and inform future policy development in this area.
... Throughout the twentieth century, much effort was devoted to "jurimetrics" in attempts to treat law as a "science" and legal norms as logical premises amenable to a strict form of deductive calculation. Combined with the spirit of logical positivism and behaviourism (even if not coinciding exactly with their historical time frames), the future "statistician" was imagined as the ideal lawyer, carrying out legal "calculations" to arrive at the correct answer in any dispute. This view has been embodied in some contemporary legal tech applications and programs that seek to predict the outcomes of cases on the basis of extra-legal factors, for example the speech patterns of the parties to litigation. ...
... p. 76 (emphasis added by us). Gaakeer ... needed to build the statistical models that make predictions possible. By contrast, since the 1970s ...
Article
Full-text available
The introduction of statistical “legal tech” raises questions about the future of law and legal practice. While technologies have always mediated the concept, practice, and texture of law, a qualitative and quantitative shift is taking place: statistical legal tech is being integrated into mainstream legal practice, and particularly that of litigators. These applications — particularly in search and document generation — mediate how practicing lawyers interact with the legal system. By shaping how law is “done”, the applications ultimately come to shape what law is. Where such applications impact on the creative elements of the litigator’s practice, for example via automation bias, they affect their professional and ethical duty to respond appropriately to the unique circumstances of their client’s case — a duty that is central to the Rule of Law. The statistical mediation of legal resources by machine learning applications must therefore be introduced with great care, if we are to avoid the subtle, inadvertent, but ultimately fundamental undermining of the Rule of Law. In this contribution we describe the normative effects of legal tech application design, how they are (in)compatible with law and the Rule of Law as normative orders, particularly with respect to legal texts which we frame as the proper source of “lossless law”, uncompressed by statistical framing. We conclude that reliance on the vigilance of individual lawyers is insufficient to guard against the potentially harmful effects of such systems, given their inscrutability, and suggest that the onus is on the providers of legal technologies to demonstrate the legitimacy of their systems according to the standards inherent in the legal system. The translation and publication of this article are based on the CC BY Attribution-NonCommercial 4.0 International license, under which this article was published in English at https://osf.io/preprints/socarxiv/ts259/. The article is accepted for publication in Communitas (2022).
... Technology is already changing the practice of law, for example by shaping the process of trials and by replacing, supporting or supplementing the judicial role. The literature [10] states that Artificial Legal Intelligence stems from an earlier wave of Artificial Intelligence, known at the time as jurimetrics, which was based on an algorithmic understanding of law that treats logic as the only element of correct legal argumentation. ...
Article
Full-text available
This paper constructs an intelligent decision support system using combined text feature extraction, case similarity, an XGBoost sentencing prediction model, and other related technologies. The performance of the smart decision support system is tested for capture responsiveness and response reliability. The XGBoost algorithm is used to construct the sentencing prediction model so that sentencing prediction can be performed intelligently. The effectiveness of the sentencing prediction model is examined by comparing the XGBoost model with different algorithms (Random Forest, CBDT, CNN). The practical application of intelligent decision support systems is summarized, highlighting the positive and negative effects. The results show that the overall responsiveness of the system's judicial document capture and retrieval is reliable, and the number of captured and retrieved judicial documents rises as the capture time increases. In the XGBoost sentencing prediction, nine of the ten keywords extracted for sentence length and fine overlapped, the keyword similarities are all higher than 0.5, and the differences between the predicted and actual sentence and fine values are small. The intelligent decision support system has resulted in a gradual decrease in the number of court cases and an improvement in efficiency. The re-sentencing and retrial initiation rate decreased by over 20% from 2019 to 2023.
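The abstract does not reproduce the authors' pipeline, so the following is only a minimal illustrative sketch of the general approach it describes: mapping case-description text to a predicted sentence via TF-IDF features and an XGBoost regressor, using the open-source scikit-learn and xgboost libraries. The toy case texts, feature choices, and hyperparameters are invented for illustration and are not taken from the paper.

```python
# Minimal sketch (not the authors' system): predicting sentence length in
# months from case-description text with TF-IDF features and XGBoost.
# The tiny toy corpus and hyperparameters are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Hypothetical case descriptions paired with sentence lengths in months.
cases = [
    "defendant stole goods worth 5000 yuan from a shop at night",
    "defendant committed fraud against elderly victims totalling 80000 yuan",
    "defendant injured another person in a fight causing minor harm",
    "defendant repeatedly stole mobile phones on public transport",
]
sentence_months = [6, 36, 12, 18]

# Turn the free-text facts into sparse numeric features.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(cases)

X_train, X_test, y_train, y_test = train_test_split(
    X, sentence_months, test_size=0.25, random_state=0
)

# Gradient-boosted trees over the text features.
model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

print("predicted months:", model.predict(X_test))
```

A real system of the kind the paper describes would add case-similarity retrieval and far larger corpora; the sketch only shows the shape of the text-to-prediction step.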
... The methods used in this study could help identify cases for publication that are more representative than current publication practices allow. The methods used in this study can also help address an especially worrisome problem caused by the skewed nature of published refugee law jurisprudence in Canada as we enter the era of computational law (Frankenreiter & Livermore, 2020; Hildebrandt, 2018; Sutherland, 2022). The Refugee Law Laboratory, hosted at York University's Centre for Refugee Studies, is undertaking a variety of initiatives to help address the skew in published refugee decisions, including by using this methodology. ...
Article
Full-text available
This article overviews outcomes in different types of refugee claims in Canada. It critiques standard legal research methodologies in the refugee law field due to skews in publication practices. To address these skews, the article employs empirical quantitative research methods using administrative tribunal data and computational methods. It provides a snapshot of refugee claim numbers, countries of origin, claim categories, and outcomes. The article then underscores the benefits of supplementing doctrinal legal research with empirical quantitative research methods, outlines barriers to the adoption of such methods, and offers guidance and tools to assist other researchers in overcoming those barriers.
... Jurimetrics, which consists of applying data-science-based analysis to the field of law, provides a systematic perspective on the factors that influence or play a role in the judge's decision, as it helps define patterns of legal behaviour supported by quantitative elements (Ramírez et al., 2016; Hildebrandt, 2018; Andrade, 2018). Through it, it is possible to estimate numerous legal outcomes, including the probability of reaching agreement between the procedural parties (Andrade et al., 2020; Visser, 2006). ...
Conference Paper
Full-text available
The Sistema Único de Saúde (SUS - Unified Health System) should provide universal access to health care throughout Brazil; however, it has been suffering from insufficient funding. Many citizens therefore seek additional assistance through private health plans. These are supervised by a regulatory agency but, when necessary, legal action is taken by law firms. This research presents a descriptive, exploratory, cross-sectional study carried out by academics and professors from the Medicine, Law and Information Technology courses, which aims to demonstrate the results of applying Artificial Intelligence (AI), during medical and legal education, to the legal analysis of clients' cases in the private health area against their respective health plans. Data were collected on 126 lawsuits against health plans, followed by standardization and data analysis for the application of AI techniques for pattern recognition, and finally a critical evaluation of the strategies that could be carried out by the legal department via AI, together with the students' evaluation of the experience. AI techniques were applied to evaluate the lawsuits in terms of whether claims were granted, dismissed, or partially granted. AI algorithms were used to predict the outcome of an action, reaching a maximum accuracy of 71%. There was an interdisciplinary integration of professors, with assessments from each area. Despite the low accuracy of the AI solution, due to the insufficient number of samples, the experiment brought an integration of knowledge from Medicine, Law and Technology, in addition to scientific concepts, and its greatest contribution was to the teaching methodology.
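The study reports a maximum accuracy of 71% but does not show its model. A minimal sketch of the generic approach, using scikit-learn, a logistic-regression baseline rather than the authors' actual algorithm, and synthetic features and labels invented here for illustration, might look as follows.

```python
# Minimal sketch (not the study's pipeline): predicting whether a claim
# against a health plan is upheld from a few structured features.
# Feature names and the synthetic data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_cases = 126  # same order of magnitude as the study's sample

# Hypothetical features: claimed amount, urgency flag, prior-denial flag.
X = np.column_stack([
    rng.lognormal(mean=9.0, sigma=1.0, size=n_cases),  # claimed amount
    rng.integers(0, 2, size=n_cases),                   # urgent treatment?
    rng.integers(0, 2, size=n_cases),                   # plan denied before?
])
y = rng.integers(0, 2, size=n_cases)  # 1 = claim upheld, 0 = dismissed

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

With only 126 examples, cross-validated accuracy estimates of this kind carry wide error bars, which is consistent with the study's own caveat about sample size.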
... It is also shown that the idea of artificial legal intelligence originated with a previous wave of artificial intelligence, known at the time as jurimetrics. It was based on an "algorithmic understanding of the law" that promotes logic as the only component of legal argumentation, and the resulting artificial legal intelligence might be much more effective at predicting and providing the content of positive law [121]. ...
Article
Full-text available
In this paper, we argue that Responsible Artificial Intelligence Systems (RAIS) require a shift toward embedded ethics to address value-based challenges facing AI in disaster management; and we propose a model to achieve it. Disaster management requires Artificial Intelligence Systems (AIS) that would be sensitive to ethical, legal, and multi-dimensional values while being responsive and accountable in complex and acute disruptions that simultaneously call for fair, value-laden, and immediate decisions. Without such a necessary shift, AIS will be incapable of responding properly to major value-based challenges of axiological and hierarchical types, and might leave AIS vulnerable to meta-disasters, such as intelligent digital disasters. This study focuses on RAI in the context of disaster management and proposes a model of Embedded Ethics for Responsible Artificial Intelligence Systems (EE-RAIS), which is empowered by four platforms of embedded ethics—educational, cross-functional, developmental, and algorithmic embedded ethics—as well as four imperative metrics—ethical intelligence, legal intelligence, social-emotional competency, and artificial wisdom. The final section of the paper explores how EE-RAIS can be deployed for the purpose of disaster management and fair crisis informatics.
... Artificial intelligence has been implemented in various social practices. For instance, developed transportation infrastructure designed for vehicle movement now supports driverless vehicles, and such vehicles create legal uncertainty regarding their position in the structure of legal relations (Hildebrandt, 2018). Given the capability of artificial intelligence to independently take actions that qualify as crimes, regulation of the criminal law governing artificial intelligence as a legal subject is very necessary (Gaifutdinov et al., 2021). ...
Article
Full-text available
The use of artificial intelligence can increase productivity and efficiency in various sectors of life. However, it can also potentially cause legal problems, especially in criminal law, if it results in losses. Determining which subject of law should be held responsible is a separate issue. This research examines whether technology using artificial intelligence can be treated as a subject of criminal law so that criminal responsibility can be imposed. This is normative juridical research using a statutory and conceptual approach and cases related to artificial intelligence and criminal law issues. The study shows that the ability to analyze and make decisions possessed by artificial intelligence can be taken to indicate "malicious intent". Yet the concept of punishment for an artificial intelligence system requires a unique formula, as the personality of artificial intelligence cannot be equated with the personality of a human or legal entity. Granting legal status through a criminal sanction mechanism in the form of machine deactivation, reprogramming, and, at the most severe, destruction of the machine is expected to provide future solutions to minimize the risk of criminal acts by artificial intelligence.
... On a technical level, this is entirely possible. Hildebrandt (2018) pointed out that data-driven artificial legal intelligence may be much more successful in predicting the content of positive law. Likewise, profound developments in information technology are changing the way banks work, relying more on reliable quantitative information from online and credit bureaus, contributing to AI-based decision-making (Jakšič & Marinc, 2019). ...
Article
Full-text available
In today’s environment of the rapid rise of artificial intelligence (AI), debate continues about whether it has beneficial effects on economic development. However, there is only a fragmented perception of what role and place AI technology actually plays in economic development (ED). In this paper, we pioneer the research by focusing our detective work and discussion on the intersection of AI and economic development. Specifically, we adopt a two-step methodology. At the first step, we analyze 2211 documents in the AI&ED field using the bibliometric tool Bibliometrix, presenting the internal structure and external characteristics of the field through different metrics and algorithms. In the second step, a qualitative content analysis of clusters calculated from the bibliographic coupling algorithm is conducted, detailing the content directions of recently distributed topics in the AI&ED field from different perspectives. The results of the bibliometric analysis suggest that the number of publications in the field has grown exponentially in recent years, and the most relevant source is the “Sustainability” journal. In addition, deep learning and data mining-related research are the key directions for the future. On the whole, scholars dedicated to the field have developed close cooperation and communication across the board. On the other hand, the content analysis demonstrates that most of the research is centered on the five facets of intelligent decision-making, social governance, labor and capital, Industry 4.0, and innovation. The results provide a forward-looking guide for scholars to grasp the current state and potential knowledge gaps in the AI&ED field.
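Bibliographic coupling, the clustering criterion mentioned in this abstract, treats two documents as related in proportion to the references they share. A minimal sketch of that computation is shown below, written in Python rather than the Bibliometrix R package the authors used, with invented document names and reference lists purely for illustration.

```python
# Minimal sketch of bibliographic coupling: the coupling strength of two
# documents is the number of cited references they have in common.
# Document names and reference lists are invented for illustration.
from itertools import combinations

references = {
    "paper_A": {"Hildebrandt2018", "Brynjolfsson2014", "Acemoglu2019"},
    "paper_B": {"Hildebrandt2018", "Acemoglu2019", "Agrawal2018"},
    "paper_C": {"Schumpeter1942", "Agrawal2018"},
}

def coupling_strength(refs_a, refs_b):
    """Number of shared references between two documents."""
    return len(refs_a & refs_b)

# Compare every pair of documents and report their coupling strength.
for doc_a, doc_b in combinations(references, 2):
    strength = coupling_strength(references[doc_a], references[doc_b])
    print(f"{doc_a} <-> {doc_b}: coupling = {strength}")
```

Clusters like those analysed in the paper's second step are then obtained by grouping documents whose pairwise coupling strengths are high.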
... In the argument-guided interpretation of meaning that is essential for construing legal norms, algorithms (at least so far) run up against limits. At least limited, moreover, is the capacity to carry out complex balancing exercises with context-sensitive adjustment of the balancing criteria and their assignment. Problems arise, among other things, with securing transparency and responsibility and with the controllability of the use of algorithmic systems, in particular learning ones. ...
Chapter
Full-text available
This volume examines the methods by which digital disruptions and transformations in law and legal scholarship can be processed and how central categories of law can be adjusted to them. It works out the consequences of the change of medium for law and legal scholarship, examines the methodological potential and the limits of analogies to the analogue, and, taking copyright law as an example, identifies moments of further development or upheaval. For the fundamental legal categories of responsibility and justification, it shows how they can be adapted to artificial intelligence. An overarching survey of the challenges and their placement within innovation research provide the frame. With contributions by Wolfgang Hoffmann-Riem, Linda Kuschel, Timo Rademacher, Ingo Schulz-Schaeffer, Thomas Vesting, Thomas Wischmeyer and Herbert Zech.
... Machines work with data and code; they do not attribute meaning (Hildebrandt, 2018). Inevitably, therefore, the algorithms that identify and block terrorist content cannot be expected to operate with 100 percent accuracy. ...
Article
Social-media companies make extensive use of artificial intelligence in their efforts to remove and block terrorist content from their platforms. This paper begins by arguing that, since such efforts amount to an attempt to channel human conduct, they should be regarded as a form of regulation that is subject to rule-of-law principles. The paper then discusses three sets of rule-of-law issues. The first set concerns enforceability. Here, the paper highlights the displacement effects that have resulted from the automated removal and blocking of terrorist content and argues that regard must be had to the whole social-media ecology, as well as to jihadist groups other than the so-called Islamic State and other forms of violent extremism. Since rule by law is only a necessary, and not a sufficient, condition for compliance with rule-of-law values, the paper then goes on to examine two further sets of issues: the clarity with which social-media companies define terrorist content and the adequacy of the processes by which a user may appeal against an account suspension or the blocking or removal of content. The paper concludes by identifying a range of research questions that emerge from the discussion and that together form a promising and timely research agenda to which legal scholarship has much to contribute.
Preprint
Computer vision and other biometrics data science applications have commenced a new project of profiling people. Rather than using 'transaction generated information', these systems measure the 'real world' and produce an assessment of the 'world state' - in this case an assessment of some individual trait. Instead of using proxies or scores to evaluate people, they increasingly deploy a logic of revealing the truth about reality and the people within it. While these profiling knowledge claims are sometimes tentative, they increasingly suggest that only through computation can these excesses of reality be captured and understood. This article explores the bases of those claims in the systems of measurement, representation, and classification deployed in computer vision. It asks if there is something new in this type of knowledge claim, sketches an account of a new form of computational empiricism being operationalised, and questions what kind of human subject is being constructed by these technological systems and practices. Finally, the article explores legal mechanisms for contesting the emergence of computational empiricism as the dominant knowledge platform for understanding the world and the people within it.
Article
Purpose The number of construction dispute cases has surged in recent years. The effective exploration and management of risks associated with construction contracts helps to directly enhance overall project performance. Existing approaches to identifying the risks associated with construction project contracts rely heavily on manual review techniques, which are inefficient and highly dependent on personnel experience, while existing intelligent approaches only support contract query and storage. Hence, it is necessary to raise the level of intelligence in contract risk management. This study aims to propose a novel method for the intelligent identification of risks in construction contract clauses based on natural language processing. Design/methodology/approach The proposed method formalizes the linguistic logic and semantic information of contract clauses into multiple triples and transforms the structural processing results of general clauses in a construction contract into rights-and-interests rules for risk review. In addition, the core semantic information of special clauses in a construction contract and the rights-and-interests rules are used for semantic conflict detection. Together, these steps achieve intelligent risk identification for construction contract clauses. Findings The method is verified using several construction contracts that had been applied in engineering contracting as a corpus. The results showed a high level of accuracy and applicability of the proposed method. Originality/value This novel method can identify risks in contract clauses with complex syntactic structures and supports rule extension according to the semantic relation network of the ontology. It can support efficient contract review and assist decision-making in contract risk management.
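The triple-based formalization is only described at a high level in this abstract. The toy sketch below, a hand-written pattern rather than the authors' NLP pipeline, illustrates the general idea of turning clauses into (party, modality, action) triples and flagging a simple semantic conflict; the clause texts, the pattern, and the conflict rule are all invented for illustration.

```python
# Minimal sketch (not the paper's method): extract (party, modality, action)
# triples from contract clauses with a naive pattern, then flag clauses that
# assign contradictory modalities to the same party/action pair.
import re
from collections import defaultdict

CLAUSES = [
    "The Contractor shall submit the progress report monthly.",
    "The Contractor shall not submit the progress report monthly.",
    "The Owner may inspect the works at any time.",
]

# Naive pattern: "The <party> shall/shall not/may/must <action>."
PATTERN = re.compile(
    r"^The (?P<party>\w+) (?P<modality>shall not|shall|may|must) (?P<action>.+?)\.$"
)

def extract_triple(clause):
    """Return (party, modality, action) or None if the clause does not match."""
    match = PATTERN.match(clause.strip())
    if not match:
        return None
    return (match["party"], match["modality"], match["action"].lower())

# Group modalities by (party, action) and flag obligation vs. prohibition.
seen = defaultdict(set)
for clause in CLAUSES:
    triple = extract_triple(clause)
    if triple:
        party, modality, action = triple
        seen[(party, action)].add(modality)

for (party, action), modalities in seen.items():
    if {"shall", "shall not"} <= modalities:
        print(f"conflict: {party} is both obliged and forbidden to {action}")
```

A production system would rely on proper syntactic and semantic parsing and an ontology of contract terms; the regular expression here only stands in for that extraction step.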
Article
Full-text available
As AI algorithms are employed to apply legal rules in determining rights and obligations, questions related to the observance of due legal process arise. The development of opaque machine learning models, whose predictions cannot be satisfactorily explained, has spurred debates around the idea of explainability of AI models for decision-making. The article argues that (i) in relation to AI models for judicial decision-making, the standard of explainability, besides proving insufficient to meet the requirements for publicity and reasoning of judicial decisions, imposes a form of nakedness not required of human judges; and (ii) a more appropriate standard would be that of interpretable models for judicial decision-making, characterized as able to offer decisions that are referred to current law (legality), internally and externally coherent (consistency), and compatible with the decision of a human judge in a similar case.
Article
Full-text available
The advance of AI is increasingly reflected in law, in a variety of applications and techniques. Over the past ten years, several projects using AI have become institutionalized in the Brazilian justice system. This article maps the implications of that transformation for legal research. It argues that, although legal AI has great potential, it can also lead to errors and even amplify structural injustices in society. The article therefore identifies central questions for discussing when, how, and where it is desirable to use AI for legal research.
Article
Full-text available
This chapter examines the empirical methods applicable in Criminology, Economic Analysis of Law, and the law embodied in the form of algorithms. The first part of the paper explores the empirical research methods used in Criminology. Focusing on the fundamental features of criminological methodology, the chapter elaborates on fundamental and applied research. The second part focuses on interdisciplinary methodology applicable in the field of Economic Analysis of Law (EAL), and examines the accompanying controversies and challenges generated by the development of behavioral research that has fundamentally changed the findings of the EAL. The third part elaborates on the importance of empirical data in the context of law as an algorithm and the “new trichotomy” reflecting the nature of data: text-driven law, data-driven law, and code-driven law. The trichotomy emerges as a result of an attempt to transform legal norms into machine-readable algorithms, as well as to ensure the application of these modalities in the legal context. The authors discuss the importance of empirical methods in law and the “extension” of standard legal methodology.
Article
Full-text available
The use of AI in the public sector is emerging around the world, and its spread affects the core functions of the State: the administrative, the judicial, and the legislative. Nevertheless, a comprehensive approach to AI in the life-cycle of rules - from the proposal of a new rule to its implementation, monitoring and review - is currently lacking in the rich panorama of studies from different disciplines. The analysis shows that AI has the power to play a crucial role in the life-cycle of rules by performing time-consuming tasks, increasing access to the knowledge base, and enhancing the ability of institutions to draft effective rules and to declutter the regulatory stock. However, it is not without risks, ranging from discrimination to challenges to democratic representation. In order to play a role in achieving law effectiveness while limiting the risks, complementarity between humans and AI should be reached both at the level of the AI architecture and ex post. Moreover, an incremental and experimental approach is suggested, as well as the elaboration of a general framework, to be tailored by each regulator to the specific features of its tasks, aimed at setting the rationale, the role, and adequate guardrails for AI in the life-cycle of rules. This agile approach would allow the AI revolution to display its benefits while preventing potential harms or side effects.
Chapter
The impact of the development of artificial intelligence (AI) is often studied by contrasting benefits and risks to deduce a framework for action that will maximize the positive effects of this technology. However, a much more profound transformation escapes this type of analysis and affects the very foundations of the rule of law. This transformation of decision-making methods, substituting calculation and statistics for the provisions of law, is particularly evident in the use of AI in the public sector. This chapter therefore aims to document the emergence of what could be described as the “rule of algorithms,” through concrete examples of the use of AI to combat fraud, establish compensation trends based on the analysis of court decisions or carry out border controls. This chapter presents a series of requirements to be applied by public administrations to properly regulate the use of algorithms, in a context where the ethics of AI and economically oriented public policies do not currently safeguard against potential misuses. Keywords: AI, Artificial intelligence, Rule of law, Rule of algorithms, Public sector
Chapter
This chapter deals with the highly pertinent question of whether the use of AI technologies by a government agency in performing its administrative tasks would constitute delegation of state powers. This question is analyzed against the backdrop of the CE and the administrative-law framework. In conjunction with this, the chapter also highlights certain AI use cases that would not constitute delegation of state power. The aim behind scrutinizing these use cases is to shed light on the underlying question of the key considerations for ascertaining whether and to what extent an administrative task could be delegated to AI.
Chapter
This chapter presents a brief overview of the nature and causes of an AI-based system’s opaqueness and explores the role of transparency in guaranteeing the rule of law. It then further examines the specific transparency requirements at issue in the context of Estonian administrative procedure and concludes with an analysis of the challenges accompanying efforts to strike a balance between transparency measures and conflicting interests, in areas such as privacy, intellectual property, trade secrets, and national security.
Chapter
Full-text available
The computer revolution, directly related to the Network society (CASTELLS, 1996, 2010), the Information Age (BYRON, 2010; FLORIDI, 2008), or the 4th Revolution (SCHWAB, 2017), has provided powerful but also misleading cross-field advances thanks to Artificial Intelligence. Besides, the related algorithmic thinking does not express the reality and complexity of human thinking, which is opportunistic, multi-heuristic, and can be functionally defined as blended (VALLVERDÚ & MÜLLER, 2019). Such incompleteness in human thinking can be analyzed in all kinds of fields, and the judicial and legal spheres are not free from that characteristic. Therefore, new advances in fields such as computational law (HILDEBRANDT, 2017), legal informatics, legal analytics, computational legal theory, AI Law, or engineering law (HOWARTH, 2013), among others, are biased across a different set of possibilities. Although it is affirmed that such computational technologies will bring transparency, justice, equity, and clarity to citizens, I have identified several challenges to the fulfilment of such promises. I will explore them under four categories: Design Biases, Wrong causal models, Failed automatization, and Non-universality of Justice models.
Chapter
Full-text available
This chapter examines lawyers’ perceptions on the use of artificial intelligence (AI) in their legal work. A meta-synthesis of published large-scale surveys of the legal profession completed in 2019 and 2020 in several leading jurisdictions, e.g., the UK, US, and EU, reveals some dissonance between hype and reality. While some lawyers see the potential contribution that AI and machine-learning (ML) driven legal tech innovation can make to transform aspects of legal practice, others have little awareness of the existence of the same. While there appears to be a first-mover advantage for some legal practitioners to incorporate innovative AI and ML based legal tech tools into their developing business model, there are few metrics that exist that can help legal teams evaluate whether such legal tech tools provide a sustainable competitive advantage to their legal work. A non-representative expert sampling of UK-based non-lawyer legal tech professionals whose work focuses on the utilisation of AI and ML based legal tech tools in different legal practice environments confirms the findings derived from the meta-synthesis. This expert sampling was also evaluated against published peer-reviewed research featuring semi-structured interviews of UK lawyer and non-lawyer legal tech professionals on the challenges and opportunities presented by AI and ML for the legal profession. Further research in the form of undertaking a qualitative survey of non-lawyer legal tech professionals with follow-on semi-structured interviews is proposed. Keywords: Artificial Intelligence, Legal Tech, Machine Learning, Meta-synthesis, Sustainable Competitive Advantage, Expert Sampling
Chapter
AI technologies affect the center of private autonomy and its limits, the notion of a contract and its interpretation, the equilibrium of parties’ interests, the structure and means of enforcement, the effectiveness of legal and contractual remedies, and the legal system’s vital attributes of effectiveness, fairness, impartiality, and predictability. The increasing global investments in blockchain technology justify a progressive regulatory adaptation to the changing material reality, and so civil liability and the insurance sector are required to adapt to and govern an ever-more pressing techno-economic evolution. It is worth noting that adapting existing rules to deal with the technology will require an understanding of the various ways in which robots and humans respond to legal rules. A robot cannot make an instinctive judgment about the value of a human life. It is argued that the automation of legal services is a way to enhance access to justice, diminish legal costs, and strengthen the rule of law, and that these improvements amount to a democratization of law. The role of artificial intelligence in legal practice is shifting.
Article
Full-text available
In this brief contribution, I distinguish between code-driven and data-driven regulation as novel instantiations of legal regulation. Before moving deeper into data-driven regulation, I explain the difference between law and regulation, and the relevance of such a difference for the rule of law. I discuss artificial legal intelligence (ALI) as a means to enable quantified legal prediction and argumentation mining which are both based on machine learning. This raises the question of whether the implementation of such technologies should count as law or as regulation, and what this means for their further development. Finally, I propose the concept of ‘agonistic machine learning’ as a means to bring data-driven regulation under the rule of law. This entails obligating developers, lawyers and those subject to the decisions of ALI to re-introduce adversarial interrogation at the level of its computational architecture. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations'.
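The notion of "agonistic machine learning" is proposed in this abstract at a conceptual level only. One very loose computational reading, sketched below with scikit-learn, synthetic case features, and two deliberately different model families (all invented for illustration), is to train competing predictors on the same data and surface the cases on which they disagree as candidates for adversarial, human legal argument. This is an illustration of built-in contestability, not an implementation of Hildebrandt's proposal.

```python
# Loose illustrative sketch: two different models trained on the same
# synthetic case data; their disagreement is surfaced rather than hidden,
# so contested cases can be argued instead of silently automated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 5))                   # invented case features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # invented "outcomes"

X_train, y_train = X[:150], y[:150]
X_new = X[150:]                                 # new cases to be "predicted"

model_a = LogisticRegression().fit(X_train, y_train)
model_b = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

pred_a = model_a.predict(X_new)
pred_b = model_b.predict(X_new)

# Cases where the two framings disagree are flagged for human argument.
contested = np.flatnonzero(pred_a != pred_b)
print(f"{len(contested)} of {len(X_new)} cases are contested between the models")
print("indices of contested cases:", contested)
```

The design point of the sketch is that disagreement is treated as an output in its own right, which is one way of re-introducing the adversarial interrogation the abstract calls for at the level of the computational architecture.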
Article
Full-text available
This article considers some of the risks and challenges raised by the use of algorithm-assisted decision-making and predictive tools by the public sector. Alongside, it reviews a number of long-standing English administrative law rules designed to regulate the discretionary power of the state. The principles of administrative law are concerned with human decisions involved in the exercise of state power and discretion, thus offering a promising avenue for the regulation of the growing number of algorithm-assisted decisions within the public sector. This article attempts to re-frame key rules for the new algorithmic environment and argues that ‘old’ law—interpreted for a new context—can help guide lawyers, scientists and public sector practitioners alike when considering the development and deployment of new algorithmic tools. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations'.
Article
Full-text available
The idea of artificial legal intelligence stems from a previous wave of artificial intelligence, then called jurimetrics. It was based on an algorithmic understanding of law, celebrating logic as the sole ingredient for proper legal argumentation. However, as Oliver Wendell Holmes has noted, the life of the law is experience rather than merely logic. Machine learning, which determines the current wave of artificial intelligence, is built on data-driven machine experience. The resulting artificial legal intelligence may be far more successful in terms of predicting the content of positive law. In this article, I discuss the assumptions of law and the Rule of Law and confront them with those of computational systems. As a twin article to my Chorley lecture on law as information, this should inform the extent to which artificial legal intelligence provides for responsible innovation in legal decision making.
Article
Full-text available
A few notes on the use of machine learning in medicine.
Article
On August 5, 2014, the Federal Reserve Board and the Federal Deposit Insurance Corporation criticized shortcomings in the Resolution Plans of the first Systemically Important Financial Institution (SIFI) filers. In his public statement, FDIC Vice Chairman Thomas M. Hoenig said “each plan [submitted by the first 11 filers] is deficient and fails to convincingly demonstrate how, in failure, any one of these firms could overcome obstacles to entering bankruptcy without precipitating a financial crisis.” The first eleven SIFIs — Bank of America, Bank of New York Mellon, Barclays, Citigroup, Credit Suisse, Deutsche Bank, Goldman Sachs, JPMorgan Chase, Morgan Stanley, State Street Corp. and UBS — include some of the largest organizations in the world, with sophisticated internal and external teams of professional advisors. According to Jamie Dimon of JPMorgan Chase in 2013, it took 500 professionals over 1 million hours per year to produce JPMorgan Chase’s annual Resolution plan. With regulatory pressure increasing, that number is likely to be consistent or increasing across first-wave filers, and suggests significant spending by all filers. So why were the plans criticized despite heavy compliance investment? The Fed and FDIC identified two common shortcomings across the first 11 SIFI filers: “(i) assumptions that the agencies regard as unrealistic or inadequately supported, such as assumptions about the likely behavior of customers, counterparties, investors, central clearing facilities, and regulators, and (ii) the failure to make, or even to identify, the kinds of changes in firm structure and practices that would be necessary to enhance the prospects for orderly resolution.” We believe this regulatory response highlights, in part, the need for lawyers (and other advisors) to develop approaches that can better manage complexity, encompassing modern notions of design, use of technology, and management of complex systems. In this paper, we will describe the information mapping aspects of the Resolution Planning challenge as an exemplary “Manhattan Project” of law: a critical enterprise that will require — and trigger — the development of new tools and methods for lawyers to apply in their work handling complex problems without resort to an unsustainably swelling workforce and wasteful diversion of resources. Fortunately, much of this approach has already been developed in innovative Silicon Valley legal departments and has been applied by leading banks. Although much of the focus of the Dodd-Frank Act is on re-organizing and simplifying banks, we will focus here on the information architecture issues which underlie much of what should — and will — change about how law is delivered, not just for Resolution Planning, but more broadly.
Article
Algorithms have developed into somewhat of a modern myth. They “compet[e] for our living rooms” (Slavin 2011), “determine how a billion plus people get where they’re going” (McGee 2011), “have already written symphonies as moving as those composed by Beethoven” (Steiner 2012), and “free us from sorting through multitudes of irrelevant results” (Spring 2011). Nevertheless, the nature and implications of such orderings are far from clear. What exactly is it that algorithms “do”? What is the role attributed to “algorithms” in these arguments? How can we turn the “problem of algorithms” into an object of productive inquiry? This paper sets out to trouble the coherence of the algorithm as an analytical category and explores its recent rise in scholarship, policy, and practice through a series of provocations.
Book
Information: A Very Short Introduction explores the concept of information, central to modern science and society, from thermodynamics and DNA to our use of the mobile phone and the Internet. It moves from a brief look at the mathematical roots of information — its definition and measurement in ‘bits’ — to its role in genetics, and its social meaning and value, before considering the ethics of information, including issues of ownership, privacy, and accessibility; copyright and open source. This VSI also considers concepts such as ‘Infoglut’ (too much information to process) and the emergence of an information society.