Mariarosaria Taddeo
University of Oxford · Oxford Internet Institute
European PhD in Philosophy
About
Publications: 217
Reads: 290,480
Citations: 12,812
Introduction
To learn more about my work, visit my website:
http://rosariataddeo.net
Publications (217)
Machine unlearning (MU) is often analyzed in terms of how it can facilitate the “right to be forgotten.” In this commentary, we show that MU can support the OECD’s five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue...
Extended reality (XR) technologies have experienced cycles of development—“summers” and “winters”—for decades, but their overall trajectory is one of increasing uptake. In recent years, immersive extended reality (IXR) applications, a kind of XR that encompasses immersive virtual reality (VR) and augmented reality (AR) environments, have become esp...
This article reviews two main approaches to human control of AI systems: supervisory human control and human–machine teaming. It explores how each approach defines and guides the operational interplay between human behaviour and system behaviour to ensure that AI systems are effective throughout their deployment. Specifically, the article looks at...
Artificial intelligence (AI) assurance is an umbrella term describing many approaches—such as impact assessment, audit, and certification procedures—used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance approaches largely focus on two overlapping categories of harms: deployment harms that emerge at, or a...
This article reviews two main approaches to human control of AI systems: supervisory human control and human-machine teaming. It explores how each approach defines and guides the operational interplay between human behaviour and system behaviour to ensure that AI systems are effective throughout their deployment. Specifically, the article looks at...
International regulation of autonomous weapon systems (AWS) is increasingly conceived as an exercise in risk management. This requires a shared approach for assessing the risks of AWS. This paper presents a structured approach to risk assessment and regulation for AWS, adapting a qualitative framework inspired by the Intergovernmental Panel on Clim...
The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a m...
The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance of these systems. A recent shift from the what to the how of AI ethics sees a nascent body of literature published by defence organisations focussed on guidance to implement AI ethics principles. These efforts have neg...
Artificial intelligence (AI) assurance is an umbrella term describing many approaches – such as impact assessment, audit, and certification procedures – used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance approaches largely focus on two overlapping categories of harms: deployment harms that emerge at,...
The EU Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regu...
This paper considers a host of definitions and labels attached to the concept of smart cities to identify four dimensions that ground a review of ethical concerns emerging from the current debate. These are: (1) network infrastructure, with the corresponding concerns of control, surveillance, and data privacy and ownership; (2) post-political gover...
In this article we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by States and international organisations, such as the ICRC and NATO. The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approache...
The SARS-CoV-2 (COVID-19) pandemic has caused social and economic devastation. As the milestone of two years of ‘living with the virus’ approaches, governments and businesses are attempting to develop means of reopening society whilst still protecting public health. However, developing interventions – particularly technological interventions – that...
Intelligence agencies have identified artificial intelligence (AI) as a key technology for maintaining an edge over adversaries. As a result, efforts to develop, acquire, and employ AI capabilities for purposes of national security are growing. This article reviews the ethical challenges presented by the use of AI for augmented intelligence analysi...
The article analyses the role of smart contracts in the architecture of the European Union’s Data Act proposal. It identifies five difficulties: lack of flexibility in terms of both content and operation; dependence on oracles which could lead to errors; vulnerability to bugs and changes in architecture; immutability and privacy; and problems of en...
In December 2020, the European Commission issued the Digital Services Act (DSA), a legislative proposal for a single market of digital services, focusing on fundamental rights, data privacy, and the protection of stakeholders. The DSA seeks to promote European digital sovereignty, among other goals. This article reviews the literature and related d...
Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining account...
Today, open source intelligence (OSINT), i.e., information derived from publicly available sources, makes up between 80 and 90 percent of all intelligence activities carried out by Law Enforcement Agencies (LEAs) and intelligence services in the West. Developments in data mining, machine learning, visual forensics and, most importantly, the growing...
In this article we focus on the jus in bello principle of necessity for guiding the use of autonomous weapons systems (AWS). We begin our analysis with an account of the principle of necessity as entailing the requirement of minimal force found in Just War Theory, before highlighting the absence of this principle in existing work on AWS. Overlookin...
The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when...
The widespread integration of autoregressive large language models (AR-LLMs), such as ChatGPT, across established applications, like search engines, has introduced critical vulnerabilities with uniquely scalable characteristics. In this article, we analyse these vulnerabilities, their dependence on natural language as a vector of attack, and their...
Decentralised finance (DeFi) promises to improve personal financial autonomy and disrupt many aspects of financial markets, in particular how individuals transact with each other, store their money, invest, lend, and borrow. For this reason, the growing adoption of the open protocols that make up the DeFi space comes with both risks and opportuniti...
In September 2021, the UK government published a set of proposed reforms to its data protection regime for public consultation; in June 2022 it released its response to the consultation, confirming the reforms it intends to adopt. The reforms are part of a broader national strategy, which aims to incentivize data-driven innovation and make...
The world’s current model for economic development is unsustainable. It encourages high levels of resource extraction, consumption, and waste that undermine positive environmental outcomes. Transitioning to a circular economy (CE) model of development has been proposed as a sustainable alternative. Artificial intelligence (AI) is a crucial enabler...
This chapter reviews the business ethics literature on ethics auditing to extract lessons for the emerging practice of ethics auditing of Artificial Intelligence (AI). It reviews the definitions, purposes and motivations of ethics audits, identifies their benefits as well as limitations, and compares various theoretical and practical approaches to...
Defence agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence dom...
The modern abundance and prominence of data have led to the development of “data science” as a new field of enquiry, along with a body of epistemological reflections upon its foundations, methods, and consequences. This article provides a systematic analysis and critical review of significant open problems and debates in the epistemology of data sc...
In this article, we compare the artificial intelligence strategies of China and the European Union, assessing the key similarities and differences regarding what the high-level aims of each governance strategy are, how the development and use of AI is promoted in the public and private sectors, and whom these policies are meant to benefit. We chara...
In this report we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by States and international organisations, such as the ICRC and NATO. The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approaches...
In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility attributed to individuals in a justified and fair way...
capAI provides an ethics-based audit of AI systems to meet the conformity assessment mandate under the proposed EU Artificial Intelligence Act.
In this article, we focus on the scholarly and policy debate on autonomous weapon systems (AWS) and particularly on the objections to the use of these weapons which rest on jus ad bellum principles of proportionality and last resort. Both objections rest on the idea that AWS may increase the incidence of war by reducing the costs for going to war (...
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mit...
In September 2021, the UK government released a set of proposed reforms to its data protection regime for public consultation. The reforms are part of a broader national strategy, which aims to incentivise data-driven innovation and make the UK an international “data hub”. In this article, we argue that taken together, the proposed reforms risk (1)...
In this commentary, we focus on the ethical challenges of data sharing and its potential in supporting biomedical research. Taking human genomics (HG) and European governance for sharing genomic data as a case study, we consider how to balance competing rights and interests: balancing protection of the privacy of data subjects and data security, wit...
Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union (EU) and the United States’ (US) AI strategies and considers (i) the v...
The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theor...
As cyber attacks continue to escalate in terms of frequency, impact, and level of refinement, so do the efforts of state actors to acquire new offensive capabilities to defend against, counter, or retaliate against incoming attacks. Artificial Intelligence (AI) has become a key technology both for attacking and defending in cyberspace. When considered in the current...
Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defence strategies of several gove...
This article presents a mapping review of the literature concerning the ethics of artificial intelligence (AI) in health care. The goal of this review is to summarise current debates and identify open questions for future research. Five literature databases were searched (Scopus, Google Scholar, PhilPapers, Web of Science, PubMed), in April 2019,...
In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence (AI), entitled ‘New Generation Artificial Intelligence Development Plan’ (新一代人工智能发展规划). This strategy outlined China’s aims to become a world leader in AI by 2030, to monetise AI into a trillion-yuan ($150 billion) industry, and to emerge as t...
Artificial Intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically f...
The article has the goal of indicating how to harness the potential for good of artificial intelligence (AI) – defined as a distinct form of autonomous and self-learning agency, one that raises unique ethical challenges – while mitigating those challenges. The analysis focuses first on uses of AI that may lead to undue discrimination, lack of...
In this chapter, I draw on my previous work on trust and cybersecurity to offer a definition of trust and trustworthiness to understand to what extent trusting AI for cybersecurity tasks is justified and what measures can be put in place to rely on AI in cases where trust is not justified, but the use of AI is still beneficial.
The World Health Organisation declared COVID-19 a global pandemic on 11th March 2020, recognising that the underlying SARS-CoV-2 has caused the greatest global crisis since World War II. In this chapter, we present a framework to evaluate whether and to what extent the use of digital systems that track and/or trace potentially infected individuals...
Artificial Intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this chapter AI-Crime (AIC). AIC is theoretically f...
Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outc...
In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However,...
Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory out...
Artificial intelligence (AI) has the potential to play an important role in addressing the climate emergency, but this potential must be set against the environmental costs of developing AI systems. In this commentary, we assess the carbon footprint of AI training processes and offer 14 policy recommendations to reduce it.
The gig economy is a phenomenon that is rapidly expanding, redefining the nature of work and contributing to a significant change in how contemporary economies are organised. Its expansion is not unproblematic. This article provides a clear and systematic analysis of the main ethical challenges caused by the gig economy. Following a brief overview...
The potential presented by Artificial Intelligence (AI) for healthcare has long been recognised by the technical community. More recently, this potential has been recognised by policymakers, resulting in considerable public and private investment in the development of AI for healthcare across the globe. Despite this, excepting limited success stori...
In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence (AI), entitled ‘New Generation Artificial Intelligence Development Plan’ (新一代人工智能发展规划). This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan (ca. 150 billion dollars) industry, and t...
Initiatives relying on artificial intelligence (AI) to deliver socially beneficial outcomes—AI for social good (AI4SG)—are on the rise. However, existing attempts to understand and foster AI4SG initiatives have so far been limited by the lack of normative analyses and a shortage of empirical evidence. In this Perspective, we address these limitatio...
We invited authors of selected Comments and Perspectives published in Nature Machine Intelligence in the latter half of 2019 and first half of 2020 to describe how their topic has developed, what their thoughts are about the challenges of 2020, and what they look forward to in 2021.
In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combating the climate crisis effectively. However, the dev...