Chapter

Chapter 5: Legal framework for the use of artificial intelligence and automated decision-making in public governance

Article
Full-text available
Artificial intelligence algorithms govern in subtle, yet fundamental ways, the way we live and are transforming our societies. The promise of efficient, low‐cost or ‘neutral’ solutions harnessing the potential of big data has led public bodies to adopt algorithmic systems in the provision of public services. As AI algorithms have permeated high‐stakes aspects of our public existence – from hiring and education decisions, to the governmental use of enforcement powers (policing) or liberty‐restricting decisions (bail and sentencing), this necessarily raises important accountability questions: What accountability challenges do AI algorithmic systems bring with them, and how can we safeguard accountability in algorithmic decision‐making? Drawing on a decidedly public administration perspective, and given the current challenges that have thus far become manifest in the field, we critically reflect on and map out in a conceptually‐guided manner, the implications of these systems, and the limitations they pose, for public accountability.
Article
Full-text available
Law, Technology and Humans book review editor Dr Faith Gordon reviews Virginia Eubanks (2018) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.
Conference Paper
Full-text available
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if questions" or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
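As an illustration of the kind of simplified, outside-in model the abstract above describes, the following is a minimal sketch (not drawn from the cited paper) of a pedagogical or surrogate explanation: a shallow decision tree is fitted to the predictions of a more complex classifier so that a practitioner can read approximate decision rules and pose contrastive "what if" questions. The dataset, feature names and scikit-learn models are assumptions chosen purely for illustration.

```python
# Minimal sketch of a "pedagogical" (global surrogate) explanation: a simple model
# is fitted to the predictions of a complex one so that a practitioner can answer
# "what if" questions without external assistance.
# Assumptions: scikit-learn is available; data and feature semantics are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# An opaque "black box" decision system.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Pedagogical surrogate: learn the black box from the outside, using its
# predictions (not the true labels) as the training target.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully the simplified model tracks the black box on the same data.
print("fidelity:", surrogate.score(X, black_box.predict(X)))

# The shallow tree is the "do it yourself kit": readable rules...
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))

# ...that support contrastive "what if" questions: does the predicted outcome
# change if feature x2 of one case is increased?
case = X[:1].copy()
altered = case.copy()
altered[0, 2] += 1.0
print("original:", surrogate.predict(case)[0], "counterfactual:", surrogate.predict(altered)[0])
```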
Conference Paper
Full-text available
Interpretability is often a major concern in machine learning. Although many authors agree with this statement, interpretability is often tackled with intuitive arguments, distinct (yet related) terms and heuristic quantifications. This short survey aims to clarify the concepts related to interpretability and emphasises the distinction between interpreting models and representations, as well as heuristic-based and user-based approaches.
Article
Full-text available
This article examines the problem of AI memory and the Right to Be Forgotten. First, this article analyzes the legal background behind the Right to Be Forgotten, in order to understand its potential applicability to AI, including a discussion on the antagonism between the values of privacy and transparency under current E.U. privacy law. Next, the authors explore whether the Right to Be Forgotten is practicable or beneficial in an AI/machine learning context, in order to understand whether and how the law should address the Right to Be Forgotten in a post-AI world. The authors discuss the technical problems faced when adhering to strict interpretation of data deletion requirements under the Right to Be Forgotten, ultimately concluding that it may be impossible to fulfill the legal aims of the Right to Be Forgotten in artificial intelligence environments. Finally, this article addresses the core issue at the heart of the AI and Right to Be Forgotten problem: the unfortunate dearth of interdisciplinary scholarship supporting privacy law and regulation.
Article
Full-text available
Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that a 'right to explanation' of decisions made by automated or artificially intelligent algorithmic systems will be legally mandated by the GDPR. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive limited information (Articles 13-15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a 'right to be informed'. Further, the ambiguity and limited scope of the 'right not to be subject to automated decision-making' contained in Article 22 (from which the alleged 'right to explanation' stems) raises questions over the protection actually afforded to data subjects. These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless. We propose a number of legislative steps that, if taken, may improve the transparency and accountability of automated decision-making when the GDPR comes into force in 2018.
Article
Edwards, L., and M. Veale. 2017. Slave to the Algorithm? Why a ‘right to an explanation’ is probably not the remedy you are looking for. Duke Law and Technology Review 16: 18–84. (First posted on SSRN, 24 May 2017.)
Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric” explanations (SCEs) focussing on particular regions of a model around a query show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers' worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure ("right to be forgotten") and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
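The "subject-centric" explanations mentioned in the abstract above can likewise be pictured with a small sketch. The code below is an illustrative assumption, not the cited authors' method: it learns a weighted linear model only in the neighbourhood of one query point, using the black box's own outputs as labels, so the resulting coefficients describe which features drove that particular decision. Model choice, perturbation scale and feature names are all illustrative.

```python
# Minimal sketch of a "subject-centric" explanation (SCE): instead of explaining the
# whole model, a simple local model is learned from the black box's behaviour in a
# small region around one query, so the person affected can see which features drove
# that particular outcome.
# Assumptions: scikit-learn/numpy only; model, data and feature names are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=2000, n_features=5, random_state=1)
black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

query = X[0]                                   # the individual decision to explain
rng = np.random.default_rng(1)
neighbours = query + rng.normal(scale=0.5, size=(500, X.shape[1]))   # perturb around the query
scores = black_box.predict_proba(neighbours)[:, 1]                   # label with the black box
weights = np.exp(-np.linalg.norm(neighbours - query, axis=1) ** 2)   # nearer points count more

# Weighted linear surrogate, valid only near this query (learned "from outside").
local = Ridge(alpha=1.0).fit(neighbours, scores, sample_weight=weights)
for name, coef in zip([f"x{i}" for i in range(X.shape[1])], local.coef_):
    print(f"{name}: {coef:+.3f}")   # signed local influence of each feature
```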
Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States
  • Council
Council. 2002. Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32002F0584.
FLEXPUB Public e-Service Strategy - Work package 2 - Baseline Measurement
  • M Chantillon
  • R Kruk
  • A Simonofski
  • T Tombal
  • J Crompvoets
  • C De Terwangne
  • N Habra
  • M Snoeck
  • B Vanderose
Algorithmes: prévenir l'automatisation des discriminations
  • Défenseur des droits et CNIL
Défenseur des droits et CNIL. 2020. Algorithmes: prévenir l'automatisation des discriminations. Available at: https://www.defenseurdesdroits.fr/sites/default/files/atoms/files/synth-algos-num2-29.05.20.pdf.
Gender shades: intersectional accuracy disparities in commercial gender classification
  • J Buolamwini
  • T Gebru
Buolamwini, J., and T. Gebru. 2018. Gender shades: intersectional accuracy disparities in commercial gender classification. In S.A. Friedler, and C. Wilson (eds.) Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research. 1-15. New York University, New York, NY, USA.
Explaining the black box - when law controls AI. CERRE Issue Paper
  • A De Streel
  • A Bibal
  • B Frenay
  • M Lognoul
De Streel, A., A. Bibal, B. Frenay, and M. Lognoul. 2020. Explaining the black box - when law controls AI. CERRE Issue Paper. Available at: https://www.cerre.eu/publications/explaining-black-box-when-law-controls-ai.
The use of secret algorithms to combat social fraud in Belgium
  • E Degrave
Degrave, E. 2020. The use of secret algorithms to combat social fraud in Belgium. European Review of Digital Administration & Law 1: 167-177.
Robotisation des services publics: l'intelligence artificielle peut-elle s'immiscer sans heurt dans nos administrations?
  • L Gérard
Gérard, L. 2017. Robotisation des services publics: l'intelligence artificielle peut-elle s'immiscer sans heurt dans nos administrations? In: H. Jacquemin, and A. De Streel (eds.) l'Intelligence Artificielle et le Droit. 413-436. Larcier, Brussels, Belgium.
Study on the European Civil Law rules in robotics. Commissioned by the European Parliament’s Legal Affairs Committee
  • N Nevejans
Les droits de la personne concernée dans le RGPD
  • T Tombal
Incompatible: the GDPR in the age of big data
  • T Zarsky
La réforme de la Convention 108 du Conseil de l'Europe pour la protection des personnes à l'égard du traitement automatisé des données à caractère personnel
  • C De Terwangne
De Terwangne, C. 2015. La réforme de la Convention 108 du Conseil de l'Europe pour la protection des personnes à l'égard du traitement automatisé des données à caractère personnel. In: C. Castets-Renard (ed.) Quelle protection des données personnelles en Europe? 81-120. Larcier, Brussels, Belgium.
Overview of the use and impact of AI in public services in the EU. Publications Office of the European Union
  • G Misuraca
  • C Van Noordt
Explanatory report of the Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (ETS No. 108) - Modernised Convention 108
Committee of Ministers of the Council of Europe. 2018. Explanatory report of the Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (ETS No. 108) - Modernised Convention 108, CM(2018)2-addfinal.
L'e-gouvernement et la protection de la vie privée. Légalité, transparence et contrôle
  • E Degrave
Degrave, E. 2014. L'e-gouvernement et la protection de la vie privée. Légalité, transparence et contrôle. Larcier, Brussels, Belgium.
Automating inequality: how high-tech tools profile, police, and punish the poor
  • V Eubanks
Eubanks, V. 2018. Automating inequality: how high-tech tools profile, police, and punish the poor. Law, Technology and Humans 1: 162-164. https://doi.org/10.5204/lthj.v1i0.1386
Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions
European Commission. 2018. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Artificial Intelligence for Europe. COM(2018) 237 final. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN.