Article

Law and Technology: Two decades of co-evolution

Abstract

This article revisits the trajectory of technology governance in the European Union over the past two decades. Drawing on law and technology theory and the technology governance literature, it adopts a retrospective, critical approach to the governance and regulatory initiatives focusing on technology. It examines the most challenging areas of technology governance that became key EU priorities, offering a bird's-eye view of the EU technology governance landscape while shedding light on shifts in perspective and approach across the past twenty years.

Book
Full-text available
The internet and the state: bedfellows or adversaries? This pivotal issue attracts many polarised opinions. For some, the state threatens contemporary society's main space of openness and freedom. For others, the internet threatens contemporary society's main source of order and welfare. Yet for most people, one suspects, the relationship between the internet and the state is ambiguous and uncertain: It is not clear what the connection is and what it should be. This book mainly addresses this third audience of the undecided majority. The chapters explore arguments across the ideological spectrum and experiences around the world with an overall aim to bring greater precision and depth to the debate about the internet and the state. As its primary guiding questions, the volume asks: (a) In what ways and to what extent do (and might) we see increased state involvement in contemporary internet governance; and (b) under what conditions can that greater government role in the internet be a good or a bad thing? In addressing these questions, the chapters examine issues such as the role of the state vis-à-vis multistakeholder governance of the internet, the various internet policies of authoritarian and democratic governments, and the relationship between (global) capitalism and the state in internet regulation. The internet was largely born of a state, the United States government, between the late 1960s and the early 1990s. However, the main expansion of the global internet over subsequent decades unfolded with governments mostly as spectators. With time, though, many states have become increasingly uneasy with this uncontrolled (by them) development. Outside of government, too, many citizens have worried about corporate power, fake news, phishing, hacking and online violence in an under-regulated global internet. At the same time, sceptics view increased state intervention in the internet as a slippery slope to inefficiency and oppression.
Clearly, 50 years after the internet's invention, the return of the state is very much in question. Emmanuel Macron, President of France, aptly identified three general lines of approach to the issue in his speech to the Internet Governance Forum (IGF) in November 2018. At one extreme, Macron discerned a so-called "California" model, where strong private global players run the internet with limited democratic accountability. At another extreme, Macron described a "Chinese" model based on authoritarian state control, protectionist support of the domestic internet industry,
Article
Full-text available
The proposed Artificial Intelligence Act (AI Act) is the first comprehensive attempt to regulate artificial intelligence (AI) in a major jurisdiction. This article analyses Article 9, the key risk management provision in the AI Act. It gives an overview of the regulatory concept behind the norm, determines its purpose and scope of application, offers a comprehensive interpretation of the specific risk management requirements and outlines ways in which the requirements can be enforced. This article can help providers of high-risk systems to comply with the requirements set out in Article 9. In addition, it can inform revisions of the current draft of the AI Act and efforts to develop harmonised standards on AI risk management.
Article
Full-text available
Efforts to set standards for artificial intelligence (AI) reveal striking patterns: technical experts hailing from geopolitical rivals, such as the United States and China, readily collaborate on technical AI standards within transnational standard‐setting organizations, whereas governments are much less willing to collaborate on global ethical AI standards within international organizations. Whether competition or cooperation prevails can be explained by three variables: the actors that make up the membership of the standard‐setting organization, the issues on which the organization's standard‐setting efforts focus, and the “games” actors play when trying to set standards within a particular type of organization. A preliminary empirical analysis provides support for the contention that actors, issues, and games affect the prospects for cooperation on global AI standards. It matters because shared standards are vital for achieving truly global frameworks for the governance of AI. Such global frameworks, in turn, lower transaction costs and the probability that the world will witness the emergence of AI systems that threaten human rights and fundamental freedoms.
Article
Full-text available
With its proposed EU AI Act (April 2021), the EU aspires to lead the world in AI regulation. In this brief, we summarise and comment on the 'Presidency compromise text', a revised version of the proposed Act reflecting consultation and deliberation by member states and other actors (November 2021). The compromise text echoes the sentiment of the original text, much of which remains largely unchanged. However, there are important shifts and some significant changes. Our main comments focus on exemptions to the Act with respect to national security; changes that seek to further protect research, development and innovation; and the attempt to clarify the draft legislation's stance on algorithmic manipulation. Our target readership is those interested in tracking the evolution of the proposed EU AI Act, such as policy-makers and members of the legal profession.
Chapter
Full-text available
The current decade will be critical for Europe's aspiration to attain and maintain digital sovereignty so as to effectively protect and promote its humanistic values in the evolving digital ecosystem. Digital sovereignty in the current geopolitical context remains a fluid concept, as it must rely on a balanced strategic interdependence with the USA, China, and other global actors. The developing strategy for achieving this relies on the coordinated use of three basic instruments: investment, regulation, and completion of the digital internal market. Investment, in addition to the multiannual financial framework (2021–2027) instruments, will draw upon 20% of the €750 billion recovery fund. Regulation, in addition to the Data Governance Act and the Digital Markets Act, will include the Data Act, the new AI regulation, and more in the pipeline, leveraging the so-called Brussels effect. Of key importance for the success of this effort remain the timing and "dovetailing" of the particular actions taken.
Article
Full-text available
This article examines whether the territorial scope of the EU General Data Protection Regulation promotes European values. While the regulation received international attention, it remains questionable whether provisions with extraterritorial effect support a power-based approach or a value-driven strategy. Developments around the enforceability of a ‘right to be forgotten’, or the difficulties in regulating transatlantic data flows, raise doubts as to whether unilateral standard setting does justice to the plurality and complexity of the digital sphere. We conclude that extraterritorial application of EU data protection law currently adopts a power-based approach which does not promote European values sustainably. Rather, it evokes wrong expectations about the universality of individual rights.
Article
Full-text available
Voluntary guidelines on ‘ethical practices’ have been the response by stakeholders to address the growing concern over harmful social consequences of artificial intelligence and digital technologies. Issued by dozens of actors from industry, government and professional associations, the guidelines are creating a consensus on core standards and principles for ethical design, development and deployment of artificial intelligence (AI). Using human rights principles (equality, participation and accountability) and attention to the right to privacy, this paper reviews 15 guidelines preselected to be strongest on human rights, and on global health. We find about half of these ground their guidelines in international human rights law and incorporate the key principles; even these could go further, especially in suggesting ways to operationalize them. Those that adopt the ethics framework are particularly weak in laying out standards for accountability, often focusing on ‘transparency’, and remaining silent on enforceability and participation which would effectively protect the social good. These guidelines mention human rights as a rhetorical device to obscure the absence of enforceable standards and accountability measures, and give their attention to the single right to privacy. These ‘ethics’ guidelines, disproportionately from corporations and other interest groups, are also weak on addressing inequalities and discrimination. We argue that voluntary guidelines are creating a set of de facto norms and re-interpretation of the term ‘human rights’ for what would be considered ‘ethical’ practice in the field. This exposes an urgent need for action by governments and civil society to develop more rigorous standards and regulatory measures, grounded in international human rights frameworks, capable of holding Big Tech and other powerful actors to account.
Article
Full-text available
As more and more governments release national strategies on artificial intelligence (AI), their priorities and modes of governance become clearer. This study proposes the first comprehensive analysis of national approaches to AI from a hybrid governance perspective, reflecting on the dominant regulatory discourses and the (re)definition of the public-private ordering in the making. It analyses national strategies released between 2017 and 2019, uncovering the plural institutional logics at play and the public-private interaction in the design of AI governance, from the drafting stage to the creation of new oversight institutions. Using qualitative content analysis, the strategies of a dozen countries (as diverse as Canada and China) are explored to determine how a hybrid configuration is set in place. The findings show a predominance of ethics-oriented rather than rule-based systems and a strong preference for functional indetermination as deliberate properties of hybrid AI governance.
Article
Full-text available
The past decade has witnessed the emergence of many technologies that have the potential to fundamentally alter our economic, social, and indeed personal lives. The problems they pose are in many ways unprecedented, posing serious challenges for policymakers. How should governments respond to the challenges given that the technologies are still evolving with unclear trajectories? Are there general principles that can be developed to design governance arrangements for these technologies? These are questions confronting policymakers around the world and it is the objective of this special issue to offer insights into answering them both in general and with respect to specific emerging disruptive technologies. Our objectives are to help better understand the regulatory challenges posed by disruptive technologies and to develop generalizable propositions for governments' responses to them.
Article
Full-text available
The technical and economic benefits of artificial intelligence (AI) are counterbalanced by legal, social and ethical issues. It is challenging to conceptually capture and empirically measure both benefits and downsides. We therefore provide an account of the findings and implications of a multi-dimensional study of AI, comprising 10 case studies, five scenarios, an ethical impact analysis of AI, a human rights analysis of AI and a technical analysis of known and potential threats and vulnerabilities. Based on our findings, we separate AI ethics discourse into three streams: (1) specific issues related to the application of machine learning, (2) social and political questions arising in a digitally enabled society and (3) metaphysical questions about the nature of reality and humanity. Human rights principles and legislation have a key role to play in addressing the ethics of AI. This work helps to steer AI to contribute to human flourishing.
Article
Full-text available
The international governance of artificial intelligence (AI) is at a crossroads: should it remain fragmented or be centralised? We draw on the history of environment, trade, and security regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak for centralisation. The risk of creating a slow and brittle institution, and the difficulty of pairing deep rules with adequate participation, speak against it. Other considerations depend on the specific design. A centralised body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial, and fragmented institutions could self-organise. In sum, these trade-offs should inform development of the AI governance architecture, which is only now emerging. We apply the trade-offs to the case of the potential development of high-level machine intelligence. We conclude with two recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial. But locking in an inadequate structure may pose a fate worse than fragmentation. Second, fragmentation will likely persist for now. The developing landscape should be monitored to see if it is self-organising or simply inadequate.
Article
Full-text available
The dark web and the proliferation of criminals who have exploited its cryptographic protocols to commit crimes anonymously have created major challenges for law enforcement around the world. Traditional policing techniques have required amendment, and new techniques have been developed to break the dark web's use of encryption. As with all new technology, the law has been slow to catch up, and police have historically needed to use legislation which was not designed with the available technology in mind. This paper discusses the tools and techniques police use to investigate and prosecute criminals operating on the dark web in the UK and the legal framework in which they are deployed. Two specific areas are examined in depth: the use of covert policing and hacking tools, known in the UK as equipment interference. The operation of these investigatory methods within the context of dark web investigations has not previously been considered in UK literature, although it has received greater analysis in the United States and Australia. The effectiveness of UK investigatory powers in the investigation of crimes committed on the dark web is analysed, and recommendations are made in relation to both the law and the relevant Codes of Practice. The article concludes that while the UK has recently introduced legislation which adequately sets out the powers police can use during online covert operations and when hacking, the Codes of Practice need to specifically address the role these investigative tools play in dark web investigations. Highlighted as areas of particular concern are the risks of jurisdictional forum shopping and hacking overseas. Recommendations are made for reform of the Investigatory Powers Act 2016 to ensure clarity as to when equipment interference can be used to search equipment when the location of that equipment is unknown.
Article
Full-text available
This Article is the first of its kind to map out imminent challenges facing the World Trade Organization (WTO) against the emergence of artificial intelligence. It does so by critically examining AI's normative implications for four issue areas—robot lawyers, automated driving systems, computer-generated works, and automated decision-making processes. By unpacking the diverse governance approaches taken in addressing these issues, this Article highlights the underlying economic, societal, cultural, and political interests in different jurisdictions and identifies the growing normative relevance of global legal pluralism. In light of the changing fabric of international law, this Article seeks to reconceptualize AI and global trade governance by offering three recommendations and two caveats. First, more institutional flexibility within the WTO is essential to allow for rigorous and dynamic cross-sectoral dialogue and cooperation. Less focus should be placed on the specificity and predictability of rules, and more on their adaptability and optimal design. Second, while we acknowledge that the human rights-based approach to AI governance offers a promising baseline for many, it is crucial to point out that the global trading system should be more deferential to local values and cultural contexts in addressing AI-related issues. One must exercise greater caution and refrain from pushing strong harmonization initiatives. The third recommendation highlights incrementalism, minilateralism, and experimentalism. We propose that the global trading system should accommodate and encourage emerging governance initiatives of AI and trade governance. Two crucial caveats, however, should be noted. For one, we must bear in mind the "pacing problem" faced by law and society in keeping up with rapid technological development. For another, the changing power dynamics and interest-group landscape in the age of AI cannot be neglected. In contrast to the conventional power dynamics in international law, states with stronger technology and more quality data will likely dominate, and one may envisage a new North-South divide reshaping the international economic order.
Article
Full-text available
In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. Artificial intelligence (AI) systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident that these technological developments are consequential to people’s fundamental human rights. Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context and the critical input of societal stakeholders who are impacted by the technology. On the other hand, calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness. Bridging these socio-technical gaps and the deep divide between abstract value language and design requirements is essential to facilitate nuanced, context-dependent design choices that will support moral and social values. In this paper, we bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.
Article
Full-text available
This paper discusses the establishment of a governance framework to secure the development and deployment of “good AI”, and describes the quest for a morally objective compass to steer it. Asserting that human rights can provide such compass, this paper first examines what a human rights-based approach to AI governance entails, and sets out the promise it propagates. Subsequently, it examines the pitfalls associated with human rights, particularly focusing on the criticism that these rights may be too Western, too individualistic, too narrow in scope and too abstract to form the basis of sound AI governance. After rebutting these reproaches, a plea is made to move beyond the calls for a human rights-based approach, and start taking the necessary steps to attain its realisation. It is argued that, without elucidating the applicability and enforceability of human rights in the context of AI; adopting legal rules that concretise those rights where appropriate; enhancing existing enforcement mechanisms and securing an underlying societal infrastructure that enables human rights in the first place, any human rights-based governance framework for AI risks falling short of its purpose.
Article
Full-text available
Privacy has been defined as the selective control of information sharing, where control is key. For social media, however, an individual user’s informational control has become more difficult. In this theoretical article, I review how the term control is part of theorizing on privacy, and I develop an understanding of online privacy with communication as the core mechanism by which privacy is regulated. The results of this article’s theoretical development are molded into a definition of privacy and the social media privacy model. The model is based on four propositions: Privacy in social media is interdependently perceived and valued. Thus, it cannot always be achieved through control. As an alternative, interpersonal communication is the primary mechanism by which to ensure social media privacy. Finally, trust and norms function as mechanisms that represent crystallized privacy communication. Further materials are available at https://osf.io/xhqjy/
Article
Full-text available
Every road vehicle must have a driver able to control it while in motion. These requirements, explicit in two important conventions on road traffic, have an uncertain relationship to the automated motor vehicles that are currently under development—often colloquially called “self-driving” or “driverless.” The immediate legal and policy questions are straightforward: Are these requirements consistent with automated driving and, if not, how should the inconsistency be resolved? More subtle questions go directly to international law's role in a world that artificial intelligence is helping to rapidly change: In a showdown between a promising new technology and an entrenched treaty regime, which prevails? Should international law bend to avoid breaking? If so, what kind of flexibility is appropriate with respect to both the status and the substance of treaty obligations? And what role should deliberate ambiguity play in addressing these obligations? This essay raises these questions through the concrete case of automated driving. It introduces the road traffic conventions, identifies competing interpretations of their core driver requirements, and highlights ongoing efforts at the Global Forum for Road Traffic Safety to reach a consensus.
Article
Full-text available
The article focuses on general issues in the legal regulation of relations that emerge in the application of VR technologies, and on issues associated with regulating the development of such technologies. It looks at the features of this technology that create challenges for developing a system of legal regulation of its application. The article also gives a perspective on the major factors that make application of the existing law difficult and offers an analysis of the emerging issues of its regulation. The author concludes that this technology is fundamentally different from other existing technologies, as it combines the properties of both physical reality and cyberspace. Among the challenges for the legal regulation of VR are its high realism, fully immersive user experience, and the low cyber protection of both hardware and software components. The author evaluates several regulatory approaches that could be applied to virtual reality and finds that all of them have major deficiencies. Contemporary research findings on the secure application of VR in teaching and entertainment rapidly become outdated, as they cannot catch up with the technology's development; they can therefore only serve as a ground for developing a system of VR regulation that takes this factor into account.
Article
Full-text available
Current advances in research, development and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, I also examine to what extent the respective ethical principles and values are implemented in the practice of research, development and application of AI systems—and how the effectiveness in the demands of AI ethics can be improved.
Article
Full-text available
The emergence of artificial intelligence (AI) and its progressively wider impact on many sectors requires an assessment of its effect on the achievement of the Sustainable Development Goals. Using a consensus-based expert elicitation process, we find that AI can enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets. However, current research foci overlook important aspects. The fast development of AI needs to be supported by the necessary regulatory insight and oversight for AI-based technologies to enable sustainable development. Failure to do so could result in gaps in transparency, safety, and ethical standards. Artificial intelligence (AI) is becoming more and more common in people’s lives. Here, the authors use an expert elicitation method to understand how AI may affect the achievement of the Sustainable Development Goals.
Article
Full-text available
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
Article
Full-text available
Algorithms form an increasingly important part of our daily lives, even if we are often unaware of it. They are enormously useful in many different ways. They facilitate the sharing economy, help detect diseases, assist government agencies in crime control, and help us choose what series or film to watch. Yet, there is also a darker side to algorithms, and that is that they (and their applications) can easily interfere with our fundamental rights. This column explores some of the main fundamental rights challenges set by the pervasiveness of algorithms, and it presents a brief outlook for the future.
Article
Full-text available
We build on discussions of ethical values for the good society by foregrounding the potential role of emerging technologies in realizing the good society. Specifically, we look at the emergence of drones and the evolving regulation of their ownership and use. We present a qualitative, thematic analysis of city council meetings in 20 cities in southern California where drone regulation was discussed from 2014 to 2017. These results show the ethical themes that were operative in such discussions: privacy, safety, enforceability, crime, nuisance and professionality. Underlying most of these themes is trust. We discuss this concept with respect to the good society, and we suggest that trust may be more critical to emerging technologies than technologies in general. That is, trust in an emerging technology is required for it to be accepted and more fully integrated in society. Without trust, it is difficult for society to accept the technology.
Book
Full-text available
This book provides an incisive analysis of the emergence and evolution of global Internet governance, revealing its mechanisms, key actors and dominant community practices. Based on extensive empirical analysis covering more than four decades, it presents the evolution of Internet regulation from the early days of networking to more recent debates on algorithms and artificial intelligence, putting into perspective its politically-mediated system of rules built on technical features and power differentials. For anyone interested in understanding contemporary global developments, this book is a primer on how norms of behaviour online and Internet regulation are renegotiated in numerous fora by a variety of actors - including governments, businesses, international organisations, civil society, technical and academic experts - and what that means for everyday users. The book is freely available in open access: https://global.oup.com/academic/product/negotiating-internet-governance-9780198833079?q=roxana+radu&lang=en&cc=ch
Article
Full-text available
Given the foreseeable pervasiveness of artificial intelligence (AI) in modern societies, it is legitimate and necessary to ask the question how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy. This paper first describes the four core elements of today's digital power concentration, which need to be seen in cumulation and which, seen together, are both a threat to democracy and to functioning markets. It then recalls the experience with the lawless Internet and the relationship between technology and the law as it has developed in the Internet economy and the experience with GDPR before it moves on to the key question for AI in democracy, namely which of the challenges of AI can be safely and with good conscience left to ethics, and which challenges of AI need to be addressed by rules which are enforceable and encompass the legitimacy of democratic process, thus laws. The paper closes with a call for a new culture of incorporating the principles of democracy, rule of law and human rights by design in AI and a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose. This article is part of a theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.
Article
In its AI Act, the European Union chose to understand the trustworthiness of AI in terms of the acceptability of its risks. Based on a narrative systematic literature review on institutional trust and AI in the public sector, this article argues that the EU adopted a simplistic conceptualization of trust and is overselling its regulatory ambition. The paper begins by reconstructing the conflation of “trustworthiness” with “acceptability” in the AI Act. It continues by developing a prescriptive set of variables for reviewing trust research in the context of AI. The paper then uses those variables for a narrative review of prior research on trust and trustworthiness in AI in the public sector. Finally, it relates the findings of the review to the EU's AI policy. The EU's prospects of successfully engineering citizens' trust are uncertain, and there remains a threat of misalignment between levels of actual trust and the trustworthiness of applied AI.
Article
This book compiles the tech policy debate into a toolkit for policy makers, legal experts, and academics seeking to address platform dominance and its impact on society today. It discusses the global consensus around technology regulation and recommends cutting-edge policy innovations from around the world. It also explores the proposed policy toolkit through comprehensive coverage of existing and future policy on data, antitrust, competition, freedom of expression, jurisdiction, fake news, elections, liability, and accountability. The book identifies potential policy impacts on global communication, user rights, public welfare, and economic activity. It outlines a policy framework that addresses the interlocking challenges of contemporary tech regulation and offers actionable solutions for the technological future.
Article
How do we regulate a changing technology, with changing uses, in a changing world? This chapter argues that while existing (inter)national AI governance approaches are important, they are often too siloed. Often, technology-centric approaches focus on individual AI applications, while law-centric approaches emphasize AI’s effects on pre-existing legal fields or doctrines. This chapter argues that to foster a more systematic, functional, and effective AI regulatory ecosystem, policy actors should instead complement these approaches with a regulatory perspective that emphasizes how, when, and why AI applications enable patterns of “sociotechnical change.” Drawing on theories from the emerging field of “techlaw,” it explores how this perspective can provide informed, more nuanced, and actionable perspectives on AI regulation. A focus on sociotechnical change can help analyze when and why AI applications actually create a meaningful rationale for new regulation—and how they are consequently best approached as targets for regulatory intervention, considering not just the technology, but also six distinct “problem logics” that accompany AI issues across domains. The chapter concludes by briefly sketching concrete institutional and regulatory actions that can draw on this approach to improve the regulatory triage, tailoring, timing and responsiveness, and design of AI policy.
Article
This chapter argues that the notion of human dignity provides an overarching normative framework for assessing the ethical and legal acceptability of emerging life sciences technologies. After depicting the increasing duality that characterizes modern technologies, this chapter examines two different meanings of human dignity: the classical meaning that refers to the inherent worth of every individual, and the more recent understanding of this notion that refers to the integrity and identity of humankind, including future generations. The close connection between human dignity and human rights is outlined, as is the key role of dignity in international human rights law, especially in the human rights instruments relating to bioethics. The chapter concludes by briefly presenting the challenges to human dignity and human rights posed by neurotechnologies and germline gene editing technologies.
Book
Law and the Technologies of the Twenty-First Century provides a contextual account of the way in which law functions in a broader regulatory environment across different jurisdictions. It identifies and clearly structures the four key challenges that technology poses to regulatory efforts, distinguishing between technology as a regulatory target and tool, and guiding the reader through an emerging field that is subject to rapid change. By extensive use of examples and extracts from the texts and materials that form and shape the scholarly and public debates over technology regulation, it presents complex material in a stimulating and engaging manner. Co-authored by a leading scholar in the field with a scholar new to the area, it combines comprehensive knowledge of the field with a fresh approach. This is essential reading for students of law and technology, risk regulation, policy studies, and science and technology studies.
Book
This book provides an article-by-article commentary to the provisions of the 2019 EU Directive on copyright in the Digital Single Market. It investigates the history, objectives, and content of Directive 2019/790's complex provisions as well as the relationship between some of those provisions and between the Directive and the pre-existing acquis. It explains why the EU Directive on copyright in the Digital Single Market is a significant and foundational part of the broader EU copyright architecture. The book aims to navigate the legislative provisions that were adopted in 2019 to make EU copyright fit for the Digital Single Market. It marks two important anniversaries in the EU copyright harmonization history: the thirtieth anniversary of the first ever adopted copyright directive, Software Directive 91/250, and the twentieth anniversary of InfoSoc Directive 2001/29, an ambitious legislative instrument.
Article
With the increase in online content circulation, new challenges have arisen: the dissemination of defamatory content, non-consensual intimate images, hate speech, and fake news, and a rise in copyright violations, among others. Due to the huge amount of work required to moderate content, internet platforms are developing artificial intelligence to automate content-removal decisions. This article discusses the reported performance of current content moderation technologies from a legal perspective, addressing the following question: what risks do these technologies pose to freedom of expression, access to information and diversity in the digital environment? The legal analysis developed by the article focuses on international human rights law standards. Despite recent improvements, content moderation technologies still fail to understand context, thereby posing risks to users' free speech, access to information and equality. Consequently, it is concluded, these technologies should not be the sole basis for reaching decisions that directly affect user expression.
Article
I. Introduction Article 17 of the recently adopted Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market (DSMD) and amending Directives 96/9/EC and 2001/29/EC (OJ L 130/92 [2019]) contains a copyright-specific liability regime for online content-sharing service providers (OCSSPs) in relation to content uploaded by users of their services. This has probably been the most disputed provision of the new directive, even causing public protests, which have mostly focused on ‘upload filters’ that might be put in place by OCSSPs to avoid liability for unauthorized content on their platforms. The concern was and remains that such filters could also prevent the upload of lawful content. In the end, the protests and evoked scenarios of the end of the free Internet could not prevent Article 17 DSMD from entering into force. The public debate has, however, influenced the development of the provision since its first draft by the Commission as Article 13. Article 17 DSMD has evolved into a complex provision that tries to acknowledge and balance the different interests at stake.
Article
We inspect the relevant literature on how artificial intelligence disrupts the job market, providing both quantitative evidence on trends and numerous in-depth empirical examples. Building our argument on data collected from Accenture, The Economist, Frontier Economics, PitchBook, and Tractica, we performed analyses and made estimates regarding the impact of artificial intelligence on industry output: real gross value added in 2035 (US$ trillions), number of AI use cases by industry with high job impact, global merger-and-acquisition activity related to artificial intelligence (number of deals and value, US$ bn), and the economic impact of AI on countries: annual growth rates by 2035 of gross value added (a close approximation of GDP).
Conference Paper
Given the ubiquity of artificial intelligence (AI) in modern societies, it is clear that individuals, corporations, and countries will be grappling with the legal and ethical issues of its use. As global problems require global solutions, we propose the establishment of an international AI regulatory agency that, drawing on interdisciplinary expertise, could create a unified framework for the regulation of AI technologies and inform the development of AI policies around the world. We urge that such an organization be developed with all deliberate haste, as issues such as cryptocurrencies, personalized political ad hacking, autonomous vehicles and autonomous weaponized agents are already a reality, affecting international trade, politics, and war.