Preprint

Demystifying the Draft EU Artificial Intelligence Act

Authors: Michael Veale and Frederik Zuiderveen Borgesius

Abstract

In April 2021, the European Commission proposed a Regulation on Artificial Intelligence, known as the AI Act. We present an overview of the Act and analyse its implications, drawing on scholarship ranging from the study of contemporary AI practices to the structure of EU product safety regimes over the last four decades. Aspects of the AI Act, such as different rules for different risk levels of AI, make sense. But we also find that some provisions of the draft AI Act have surprising legal implications, whilst others may be largely ineffective at achieving their stated goals. Several overarching aspects, including the enforcement regime and the effect of maximum harmonisation on the space for AI policy more generally, engender significant concern. These issues should be addressed as a priority in the legislative process.


... The EC's whitepaper On Artificial Intelligence proposes a capability-based approach for assigning obligations, arguing that "the actor(s) who is (are) best placed to address" the respective issue should be obliged to do so (European Commission, 2020). By contrast, the AI Act holds that the "majority of all obligations" should fall on the person or body "placing [the AI system] on the market or putting it into service under its own name or trademark" (Veale & Zuiderveen Borgesius, 2021), and thus focuses on relatively fixed addressees. ...
... Both the AI Act and the whitepaper propose a risk-based approach to regulating AI, i.e., applying different governance measures depending on the risk level assigned to an application based on its application area, features, and purpose. While AI systems considered to pose an unacceptable risk are outright prohibited, a large proportion of the suggested measures, especially in the case of high-risk and limited-risk applications, take the form of obligations for regulated actors (European Commission, 2020, 2021c; Veale & Zuiderveen Borgesius, 2021). However, given the multitude of actors involved in the development, deployment, and operation of many AI systems, there are different approaches to assigning obligations to these actors. ...
... Instead, it attempts to assign obligations to well-defined and clearly identifiable actors. Here, the focus is on "providers" and, to a lesser degree, "users" as the main addressees of obligations (Veale & Zuiderveen Borgesius, 2021). The EC defines "providers" as "a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge," 11 and "users" as "any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity" (European Commission, 2021c, pp. ...
Article
Full-text available
The emergence and increasing prevalence of Artificial Intelligence (AI) systems in a growing number of application areas brings about opportunities but also risks for individuals and society as a whole. To minimize the risks associated with AI systems and to mitigate potential harm caused by them, recent policy papers and regulatory proposals discuss obliging developers, deployers, and operators of these systems to avoid certain types of use and features in their design. However, most AI systems are complex socio-technical systems in which control over the system is extensively distributed. In many cases, a multitude of different actors is involved in the purpose setting, data management and data preparation, model development, as well as deployment, use, and refinement of such systems. Therefore, determining sensible addressees for the respective obligations is anything but trivial. This article discusses two frameworks for assigning obligations that have been proposed in the European Commission’s whitepaper On Artificial Intelligence—A European approach to excellence and trust and the proposal for the Artificial Intelligence Act respectively. The focus is on whether the frameworks adequately account for the complex constellations of actors that are present in many AI systems and how the various tasks in the process of developing, deploying, and using AI systems, in which threats can arise, are distributed among these actors.
... They can incorporate a process considering product development, deployment, and acquisition. As mentioned above, the European Commission's proposal draws a continuum between algorithmic systems that are prohibited insofar as they would exploit the vulnerabilities of specific groups, ex-ante obligations for systems involving high-stakes decisions, and, finally, transparency obligations for systems involving less significant risks [50]. The algorithms that concern us fall into this third category. ...
Article
Full-text available
The growing use of artificial intelligence (A.I.) algorithms in businesses raises regulators' concerns about consumer protection. While pricing and recommendation algorithms have undeniable consumer-friendly effects, they can also be detrimental to consumers through, for instance, the implementation of dark patterns: algorithms designed to alter consumers' freedom of choice or manipulate their decisions. While such manipulation is hardly new, A.I. offers significant possibilities for enhancing it. Consumer protection comes up against several pitfalls. Sanctioning manipulation is all the more difficult because the damage may be diffuse and hard to detect. Symmetrically, both ex-ante regulation and requirements for algorithmic transparency may be insufficient, if not counterproductive. Possible solutions can be found, on the one hand, in counter-algorithms that consumers can use and, on the other hand, in the development of a compliance logic and, more particularly, in tools that allow companies to self-assess the risks induced by their algorithms. Such an approach echoes the one developed in corporate social and environmental responsibility. This contribution shows how self-regulatory and compliance schemes used in these areas can inspire regulatory schemes for addressing the ethical risks of restricting and manipulating consumer choice.
... Companies are theoretically free to follow whichever standards they please, yet following harmonised standards developed by European bodies will likely be a cheaper and safer bet. On the surface, this might appear to be another check; however, relying on standards bodies to explicate the value-laden provisions of the draft AI Act is problematic, particularly given the high barriers to entry that interest groups face in the standards-making process (Veale & Zuiderveen Borgesius, 2021). Likewise, the text surrounding disparate impact assessments is vague and non-committal, with little in the way of formal requirements for checks on bias (MacCarthy & Propp, 2021). ...
Article
Full-text available
Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union (EU) and the United States’ (US) AI strategies and considers (i) the visions of a ‘Good AI Society’ that are forwarded in key policy documents and their opportunity costs, (ii) the extent to which the implementation of each vision is living up to stated aims and (iii) the consequences that these differing visions of a ‘Good AI Society’ have for transatlantic cooperation. The article concludes by comparing the ethical desirability of each vision and identifies areas where the EU, and especially the US, need to improve in order to achieve ethical outcomes and deepen cooperation.
... This text provides the basis for European negotiations about the final shape of this regulation, which will take place over the next few years. While the EU's AI framework contains multiple stipulations regarding the registration of AI systems on the EU market in open databases (Veale & Zuiderveen Borgesius, 2021), not unlike the municipal registers at the center of this paper, the Commission's approach is primarily one that aims to create an enabling environment for a European AI market to develop (Jansen, 2021). This context helps us position these municipal AI registers, and Floridi's editorial letter, in a wider societal debate. ...
Preprint
Full-text available
In this commentary, we respond to a recent editorial letter by Professor Luciano Floridi entitled 'AI as a public service: Learning from Amsterdam and Helsinki'. Here, Floridi considers the positive impact of these municipal AI registers, which list a limited number of algorithmic systems used by the cities of Amsterdam and Helsinki. We seek to question a number of assumptions about AI registers as a governance model for automated systems, starting with recent attempts to normalize AI by decontextualizing and depoliticizing it: a fraught political project that encourages what we call 'ethics theater', given the proven dangers of using these systems in the context of the digital welfare state. We agree with Floridi that much can be learned from these registers about the role of AI systems in municipal city management. Yet the lessons we draw, on the basis of our extensive ethnographic engagement with digital welfare states, are distinctly less optimistic.
Article
Full-text available
This article argues that the emergence of AI systems and AI regulation showcases developments that have significant implications for computer ethics and make it necessary to reexamine some key assumptions of the discipline. Focusing on design- and policy-oriented computer ethics, the article investigates new challenges and opportunities that arise in this context. The main challenges concern how an AI system's technical, social, political, and economic features can hinder a successful application of computer ethics. Yet, the article demonstrates that features of AI systems that potentially interfere with successfully applying some approaches to computer ethics are (often) only contingent, and that computer ethics can influence them. Furthermore, it shows how computer ethics can make use of how power manifests in an AI system's technical, social, political, and economic features to achieve its goals. Lastly, the article outlines new interdependencies between policy- and design-oriented computer ethics, manifesting as either conflicts or synergies.
Chapter
Police departments are increasingly relying on surveillance technologies to tackle public security issues in smart cities. Automated facial recognition is deployed in public spaces for real-time identification of suspects and warranted individuals. In some cases, law enforcement goes even further by also exploiting emotion recognition technologies. In preventive operations, indeed, emotion facial recognition (EFR) is used to infer individuals' inner affective states from traits such as facial muscle movements. In this way, law enforcement aims to obtain insightful hints about unknown persons acting suspiciously in public or strategic venues (e.g. train stations, airports). While the deployment of such tools may still seem confined to dystopian scenarios, it is already a reality in some parts of the world. Hence, there is a need to explore their compatibility with the European human rights framework. The Chapter undertakes this task and examines whether and how EFR can be considered compliant with the rights to privacy and data protection, the freedom of thought and the presumption of innocence.
Article
Full-text available
Earlier this year, the European Commission (EC) registered the ‘Civil society initiative for a ban on biometric mass surveillance practices’, a European Citizens’ Initiative. Citizens are thus given the opportunity to authorize the EC to suggest the adoption of legislative instruments to permanently ban biometric mass surveillance practices. This contribution finds the above initiative particularly promising, as part of a new development of bans in the European Union (EU). It analyses the EU’s approach to facial, visual and biometric surveillance, with the objective of submitting some ideas that the European legislator could consider when strictly regulating such practices.
Chapter
ETIAS is an upcoming, largely automated IT system to identify risks posed by visa-exempt Third Country Nationals (TCNs) travelling to the Schengen area. It is expected to be operational by the end of 2022. The largely automated ETIAS risk assessments include checking traveller data against as-yet-undefined abstract risk indicators, which might discriminate against certain groups of travellers. Moreover, there is evidence of the planned use of machine learning (ML) for risk assessments under the ETIAS framework. Risk assessments that could result in personal data being entered into terrorist watchlists, or in the refusal of a travel authorisation, have strong impacts, especially on the fundamental right to data protection. The use of ML-trained models for such risk assessments raises concerns, since existing models lack transparency and, in some cases, have been found to be significantly biased. The paper discusses selected requirements under EU data protection law for ML-trained models, namely human oversight, information and access rights, accuracy, and supervision. The analysis considers provisions of the AI Act Proposal of the European Commission, as the proposed regulation can provide guidance for the application of existing data protection requirements to AI. Keywords: machine learning, artificial intelligence, automated decisions, data protection, EU border control.
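To make the human-oversight requirement discussed in that chapter more concrete, the sketch below shows one minimal pattern: a machine-learned risk score can only route an application to manual review, never refuse it automatically. This is an illustrative, assumption-laden Python sketch; the threshold, field names and routing logic are invented for illustration and are not drawn from the ETIAS framework or the AI Act.

# Minimal sketch of "human oversight" over an ML risk score: the model can
# only flag an application for manual review, never refuse it on its own.
# The 0.7 threshold, field names and routing logic are illustrative
# assumptions, not part of the ETIAS design described in the chapter.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.7  # invented cut-off for routing to a human officer

@dataclass
class Decision:
    risk_score: float
    outcome: str            # "authorised" or "manual_review" -- never "refused" here
    reviewed_by_human: bool

def route_application(risk_score: float) -> Decision:
    """Authorise low-scoring applications; send high-scoring ones to a human."""
    if risk_score >= REVIEW_THRESHOLD:
        return Decision(risk_score, "manual_review", reviewed_by_human=True)
    return Decision(risk_score, "authorised", reviewed_by_human=False)

if __name__ == "__main__":
    for score in (0.12, 0.85):
        print(route_application(score))

Even such a simple routing rule leaves open the questions raised above about how the underlying score is produced, audited and explained.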
Article
Full-text available
The EU continues its quest to draw the contours of innovative legislation for the digital domain. The European Commission’s draft Regulation on artificial intelligence (AI) is a clear departure from previous ‘soft’ attempts to set the rules through ethical principles and industry pledges. The EU aspires to be the first global player to adopt a comprehensive framework that classifies and regulates the roll-out of AI software and hardware within its internal market. The draft rules try to provide legal certainty for public and private bodies across the EU, while making sure that potential risks to its citizens are minimised. This article sketches out some of the most important provisions of the draft Regulation and tries to critically assess its potential shortcomings related to implementation and enforcement. The final version of the AI proposal should avoid the mistakes of previous attempts to draft transnational rules for the online space and establish a sufficiently flexible legal framework.
Book
The author tests the hypothesis that algorithms used in automated decision-making in the public sector can be treated as information subject to the law governing the right of access to information or the right of access to official documents in European law. She discusses problems caused by the approach to these rights in the European Union, as well as the lack of consistency between the case law of the Court of Justice of the European Union and that of the European Court of Human Rights.
Chapter
This introduction presents the fifth volume of a series started twelve years ago: AI Approaches to the Complexity of Legal Systems (AICOL). The introduction revisits the recurring topics of technology, Artificial Intelligence and law, and presents new challenges and areas of research, such as the ethical and legal turn in AI, hybrid and conflictive intelligences, regulatory compliance and AI explainability. Other domains not yet fully explored include the regulatory models of the Web of Data and the Internet of Things that integrate legal reasoning and legal knowledge modelling.
Article
Full-text available
It is commonly assumed that a person’s emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another. We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.
Article
Full-text available
Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining (DADM) and fairness, accountability and transparency machine learning (FATML), their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and mitigate emergent indirect discrimination-by-proxy, such as redlining. Such organisations might also lack the knowledge and capacity to identify and manage fairness issues that are emergent properties of complex sociotechnical systems. This paper presents and discusses three potential approaches to deal with such knowledge and information deficits in the context of fairer machine learning. Trusted third parties could selectively store data necessary for performing discrimination discovery and incorporating fairness constraints into model-building in a privacy-preserving manner. Collaborative online platforms would allow diverse organisations to record, share and access contextual and experiential knowledge to promote fairness in machine learning systems. Finally, unsupervised learning and pedagogically interpretable algorithms might allow fairness hypotheses to be built for further selective testing and exploration. Real-world fairness challenges in machine learning are not abstract, constrained optimisation problems, but are institutionally and contextually grounded. Computational fairness tools are useful, but must be researched and developed in and with the messy contexts that will shape their deployment, rather than just for imagined situations. Not doing so risks real, near-term algorithmic harm.
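As a concrete illustration of the discrimination-discovery step discussed above, the following minimal Python sketch computes per-group selection rates and a disparate-impact ratio from decision data. It assumes a pandas DataFrame in which a hypothetical trusted third party holds the sensitive-attribute column; the column names and the four-fifths threshold are illustrative assumptions, not taken from the article.

# Minimal sketch of a disparate-impact check of the kind a trusted third
# party could run on behalf of an organisation that does not itself hold
# sensitive attributes. The column names ("group", "favourable") and the
# 0.8 cut-off (the common "four-fifths" rule of thumb) are illustrative.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str = "group",
                     outcome_col: str = "favourable") -> dict:
    """Return per-group selection rates and the min/max ratio between them."""
    rates = df.groupby(group_col)[outcome_col].mean()  # selection rate per group
    ratio = rates.min() / rates.max()                   # disparate-impact ratio
    return {"selection_rates": rates.to_dict(), "ratio": float(ratio)}

if __name__ == "__main__":
    # Toy decision data: 1 = favourable outcome (e.g. shortlisted), 0 = not.
    decisions = pd.DataFrame({
        "group":      ["a", "a", "a", "a", "b", "b", "b", "b"],
        "favourable": [1,   1,   1,   0,   1,   0,   0,   0],
    })
    result = disparate_impact(decisions)
    print(result)
    if result["ratio"] < 0.8:  # four-fifths rule of thumb
        print("Potential indirect discrimination; investigate possible proxies.")

As the article stresses, such a computation is only a starting point: whether a low ratio actually evidences proxy discrimination depends on the institutional and contextual grounding of the system.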
Article
Full-text available
After addressing the meaning of "trust" and "trustworthiness," we review survey-based research on citizens' judgments of trust in governments and politicians, and historical and comparative case study research on political trust and government trustworthiness. We first provide an overview of research in these two traditions, and then take up four topics in more detail: (a) political trust and political participation; (b) political trust, public opinion, and the vote; (c) political trust, trustworthy government, and citizen compliance; and (d) political trust, social trust, and cooperation. We conclude with a discussion of fruitful directions for future research.
Article
The rise of social media has provided a platform for the use of celebrities' personas (name, voice, likeness and other persona features) by third parties. Brands often want to form an association with famous individuals due to the immense publicity value or advertising force embodied in the likeness or image of a well-known personality such as a movie actor or sports hero.¹ The commercial use of personas is mostly authorized in order to obtain the benefits of collaboration. There are, however, multiple instances of the less scrupulous use of an individual's persona without authorization.² Most notably, the emergence of ‘deepfakes’, a recent Artificial Intelligence (AI) development, presents new regulatory challenges and questions the applicability of existing law in the USA and the UK. Deepfakes involve a photograph or image manipulated to create a virtual representation of a person, which can be animated to portray the individual speaking or acting in a certain way.³ For instance, a ‘deepfake’ video of Mark Zuckerberg promoting an art exhibition was created by two artists as part of a conceptual art project on AI.⁴ As technology becomes more sophisticated, the range of uses of persona will potentially increase.
Article
In the last year and a half, deepfakes have garnered a lot of attention as the newest form of digital manipulation. While not problematic in and of itself, deepfake technology exists in a social environment rife with cybermisogyny, toxic-technocultures, and attitudes that devalue, objectify, and use women’s bodies against them. The basic technology, which in fact embodies none of these characteristics, is deployed within this harmful environment to produce problematic outcomes, such as the creation of fake and non-consensual pornography. The sophisticated technology and the metaphysical nature of deepfakes as both real and not real (the body of one person, the face of another) make them impervious to many technical, legal, and regulatory solutions. For these same reasons, defining the harm deepfakes cause to those targeted is similarly difficult, and very often targets of deepfakes are not afforded the protection they require. We argue that it is important to put the emphasis on the social and cultural attitudes that underscore the nefarious use of deepfakes, and thus to adopt a material-based approach, as opposed to a technological one, to understanding the harm presented by deepfakes.
This structure differs from the HLEG-AI's initial recommendation in this area, to prohibit 'mass scale scoring' assessing 'moral personality' or 'ethical integrity'. See High-Level Expert Group on Artificial Intelligence, 'Ethics Guidelines for Trustworthy AI' (April 2019) 34; High-Level Expert Group on Artificial Intelligence, 'Policy and Investment Recommendations for Trustworthy AI' (26 June 2019) 20.
For example, the Mosaic dataset of Experian, a consumer credit reporting company, encompasses a 'broad and accurate range of demographic, socio-economic and behavioural characteristics on each adult and household'. See Lina Dencik and others, 'Data Scores as Governance: Investigating Uses of Citizen Scoring in Public Services' (Data Justice Lab, Cardiff University, 2018) 92-93 <https://perma.cc/39CY-H8L7> accessed 21 August 2020; see generally across the EU, Algorithm Watch, 'Automating Society Report 2020' (October 2020) <https://automatingsociety.algorithmwatch.org> accessed 20 June 2021.
Miranda Bogen and Aaron Rieke, Help Wanted - An Exploration of Hiring Algorithms, Equity and Bias (Upturn 2018) 38-39.
See, for an introduction to EU non-discrimination law applied to AI, Frederik Zuiderveen Borgesius, 'Price Discrimination, Algorithmic Decision-Making, and European Non-Discrimination Law' (2020) 31 European Business Law Review 401.
AI Act, art 10(5). The exemption is based on strict necessity and subject to certain safeguards.
AI Act, art 15(3); see further Kristian Lum and William Isaac, 'To Predict and Serve?' (2016) 13 Significance 14.
AI Act, art 15(4); see generally Battista Biggio and Fabio Roli, 'Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning' (2018) 84 Pattern Recognition 317.
cf in relation to European law, Michael Veale and others, 'Algorithms that Remember: Model Inversion Attacks and Data Protection Law' (2018) 376 Phil Trans R Soc A 20180083.
AI Act, art 40. A similar provision allows the Commission to instead propose 'common specifications' to specify Chapter 2 essential requirements; the main difference to harmonised standards is that failure to apply must be justified; yet the Commission has not alluded to a desire to use this, so we do not cover it extensively.
European Commission, 'Impact Assessment Accompanying the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM(2021) 206 Final)' (2021) 57.
Systems for Machine Learning: Lessons from Sustainability' [2021] Regulation & Governance. Note that such feedback can be useful but can also be geared towards increasing the profitability of notified bodies by reducing audit costs and rigour. See Jean-Pierre Galland, 'Big Third-Party Certifiers and the Construction of Transnational Regulation' (2017) 670 The ANNALS of the American Academy of Political and Social Science 263, 274.
AI Act, art 33(11).
AI Act, art 52(1). Note that the criminal prevention exemption does not apply to systems that help reporting of crime.
Tom B Brown and others, 'Language Models Are Few-Shot Learners' [2020] arXiv:2005.14165 [cs].
GDPR, art 13. Under the GDPR, the obligations are imposed on 'data controllers'. A document from the Bavarian DPA is on file with the lead author (LDA-1085.4-1368/17-I, dated 8 June 2017). Additional potential examples are given in Damian Clifford, 'The Legal Limits to the Monetisation of Online Emotions' (PhD, KU Leuven 2019) paras 309, 311.
R (on the application of Edward Bridges) v The Chief Constable of South Wales Police and Secretary of State for the Home Department [2019] EWHC 2341 (Admin) [59].
AI Act, art 3(4). Navigating this distinction is challenging online with the rise of personal brands and influencer marketing, many of those involved already using AI 'filters' on e.g. Snapchat or TikTok. See generally Catalina Goanta and Sofia Ranchordás, 'The Regulation of Social Media Influencers: An Introduction' in Catalina Goanta and Sofia Ranchordás (eds), The Regulation of Social Media Influencers (Edward Elgar Publishing 2020).
See generally Roel Dobbe and Meredith Whittaker, 'AI and Climate Change: How They're Connected, and What We Can Do about It' (AI Now Institute, 17 October 2019) <https://medium.com/@AINowInstitute/ai-and-climatechange-how-theyre-connected-and-what-we-can-do-about-it-6aa8d0f5b32c> accessed 2 July 2021.
Rules may fall into scope based on their potential, rather than documented reality, to impede trade. See Case C-184/96 Commission v France ('Foie Gras') ECLI:EU:C:1998:495 [17].
See generally Stephen Weatherill, 'Viking and Laval: The EU Internal Market Perspective' in Mark Freedland and Jeremias Adams-Prassl (eds), Viking, Laval and Beyond (Hart Publishing 2014), alongside the other chapters in that volume.
Case C-341/05 Laval ECLI:EU:C:2007:809.
'Pharmacovigilance is the science and activities relating to the detection, assessment, understanding and prevention of adverse effects or any other medicine-related problem.' See European Medicines Agency, 'Pharmacovigilance' (2015) <https://www.ema.europa.eu/documents/leaflet/pharmacovigilance_en.pdf> accessed 5 July 2021. On the impact on NLF regimes, see Christopher Hodges, 'The Role of Authorities in Post-Marketing Safety' in European Regulation of Consumer Product Safety (Oxford University Press 2005).
Christopher Hodges, Law and Corporate Behaviour: Integrating Theories of Regulation, Enforcement, Compliance and Ethics (Hart Publishing 2015) 552.