Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US
Huw Roberts1, Josh Cowls1,2, Emmie Hine1, Francesca Mazzi1,3, Andreas Tsamados1, Mariarosaria
Taddeo1,2, Luciano Floridi1,2
1 Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS, UK
2Alan Turing Institute, British Library, 96 Euston Rd, London NW1 2DB, UK
3Saïd Business School, University of Oxford, Park End St, Oxford OX1 1HP
Email of correspondence author:
Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies,
released by governments around the world, that seek to maximise the benefits of AI and minimise
potential harms. This article provides a comparative analysis of the European Union (EU) and the
United States (US) AI strategies and considers (i) the visions of a ‘Good AI Society’ that are
forwarded in key policy documents and their opportunity costs, (ii) the extent to which the
practices of each government are contributing to the achievement of their stated aims and (iii) the
consequences that these differing visions of a ‘Good AI Society’ have for transatlantic cooperation.
The article identifies areas where the EU, and especially the US, need to improve in order to
achieve ethical outcomes.
Key words: Artificial Intelligence, European Union, Policy, United States, Social Good.
1. Introduction
Artificial intelligence (AI) is a new form of smart agency that has unprecedented capacity to
reshape individual lives, societies and the environment (Yang et al., 2018). Governmental efforts
to regulate AI are relatively recent and have only gained substantial traction in the past few years.
A previous article, Cath et al. (2018), which some of the authors of this article co-authored,1
analysed the AI strategies of the United States (US), the European Union (EU)
and the United Kingdom (UK). The article concluded that the documents analysed addressed a
variety of ethical, social and economic factors associated with AI, but that none provided an
overarching, long-term political vision for the development of a ‘Good AI Society’, nor a clear
indication of what such a society would look like in reality.
Since the publication of that article, a number of high-profile AI documents have been
released by all three of the state actors that were considered. Many other states have demonstrated
a keen interest in AI governance, with over 60 releasing AI policy documents (OECD.AI, 2021),
a reflection and consequence of the current ‘summer’ which AI is experiencing (Floridi, 2020a;
Tsamados et al., 2021). Given these policy developments and the significant societal impact that
AI technologies are increasingly having, it is important to revisit these AI strategies to assess their
differences and the extent to which long-term visions have been developed. This is the task of the
following pages.
Because of the growing global interest in AI governance, one possible approach would be
to expand the analysis beyond the US, EU and UK. Doing so would facilitate a more varied
approach on account of the greater breadth of cultural values that are informing other AI strategies
(Duan, 2020; Sambasivan et al., 2020). Indeed, we have analysed China’s AI strategy in other
articles (Roberts, Cowls, Hine, et al., 2021; Roberts, Cowls, Morley, et al., 2021) and many analyses
have considered other national approaches elsewhere (Chatterjee, 2020; Cisse, 2018; Gal, 2020).
However, in this article, we will focus on two case studies only: the EU and the US. The decision
to limit our analysis to two case studies is based on an inevitable trade-off between breadth and
depth. We chose to focus on the EU and US in particular because of their global influence over
AI governance, which far exceeds other countries (excluding China). More substantively, the EU
and the US make for an interesting comparative case study because of their often-touted political
alignment over guiding values, such as representative democracy, the rule of law, and freedom.
Indeed, this alignment has led to widespread calls for deeper transatlantic cooperation in AI
governance, particularly in light of the perceived threat that China poses to these values (Delcker,
2020; Lawrence & Cordey, 2020; Meltzer et al., 2020). Focusing on the AI strategies of the EU
and the US leads us to consider two specific research questions: firstly, which of the visions of a
‘Good AI Society’ put forward is more ethically desirable; and secondly, given these differing
approaches to AI governance, to what extent deep transatlantic cooperation is viable.
1 Some of the authors of that and this article were and are members of the Digital Ethics Lab at the Oxford Internet
Institute, University of Oxford.
Before turning to our analysis, two clarificatory points are required based on this framing.
First, policy analyses have already been produced that consider the AI strategies of the EU and of
the US in isolation (Brattberg et al., 2020; Rasser, 2019) and comparatively (Allison & Schmidt,
2020; Gill, 2020; Imbrie et al., 2020). However, these analyses have typically focused only on the
capacities of states, including investments in AI, access to data and hardware, and domestic talent,
without engaging with these governments’ visions for the role of AI in society, or what we describe
here as developing a ‘Good AI Society’. Addressing and contextualising the longer-term,
overarching visions that the EU and US have for AI is important, as it provides room for making
informed normative judgements about the direction and goals of national strategies.
Second, it is important to explain what we mean by a ‘Good AI Society’. What constitutes
good in terms of the development and use of AI is culturally and politically dependent. Failing to
acknowledge this constitutes a form of absolutism, according to which there is only one, absolutely
(i.e., unrelated to any historical or cultural circumstances) ‘valid’ vision, complete and correct, for
what would make the use of AI socially ‘good’ (Wong, 2020). At the same time, to consider no
values as inherently ‘good’ is a form of relativism, according to which nothing of substance can
ever be said justifiably about the respective merits of different visions. We place ourselves between
these two extremes, in the middle ground of ethical pluralism (Ess, 2020). Certain values, such as
democracy, the protection of human rights, and a commitment to environmental protection, are
viewed at national and international levels (justifiably, in our view) as desirable and
should be granted ‘universally’ applicable status. Ethical pluralism allows for this limited
affirmation of universally valid values and also an acceptance that values which may (at some Level
of Abstraction) seem similar can be interpreted and implemented differently across different
societies and cultures (Ess, 2020; Wong, 2020).
Different visions of a ‘Good AI Society’ should be underpinned by the values that are
considered universally important. However, how these are explicated in practice may differ. While
some fundamental values may be interpreted similarly in the EU and US, other values may differ,
at least in terms of their order of priority and level of importance, and may therefore underlie
different visions of a ‘Good AI Society’. Endorsing a perspective of ethical pluralism enables us
to make informed, normative judgements about whether AI strategies are inclusive of universal
values and whether interpretations are adequate for achieving ethical outcomes.
Having clarified how we use the expression ‘Good AI Society’, and in light of the gap
identified in the literature, in the following pages we compare the development of the governance
strategies of these two governments, structuring our analysis around three points:
i) How the EU and the US conceptualise a ‘Good AI Society’ and the opportunity costs
associated with each approach.
ii) The extent to which the practices of each government are living up to their stated aims.
iii) The consequences that these differing visions of a ‘Good AI Society’ have for
transatlantic cooperation.
In Sections 2 and 3, we discuss points (i-ii) in regard to the EU’s and US’s AI strategies respectively.
We outline how AI policies have developed from a first wave in 2016 to a second wave which
contains distinctive domestic- and international-facing elements. We identify and assess key
themes in each, which include improving economic, social and ethical outcomes domestically;
developing a position on military AI and international relations externally; and the internal
fragmentation of policy outcomes when these strategies are applied in practice. In Section 4, we
address point (iii) and consider existing initiatives that promote transatlantic cooperation, as well
as barriers to further cooperation that emerge from these differing visions. We conclude by
comparing the ethical permissibility of each vision.
2. The EU’s Approach to AI
In May 2016, the EU released its first document addressing the issue of AI governance: a draft
report, published by JURI, the European Parliament’s Committee on Legal Affairs, entitled ‘Civil
Law Rules on Robotics’. This report called for a coordinated European approach that would
employ a mix of hard and soft laws, including a new guiding ethical framework, to guard against
possible risks. While this report began to address many of the ethical and social issues associated
with the development and use of AI, Cath et al. (2018) highlighted a number of gaps, including the
treatment of AI merely as an underlying component of robotics and the failure to acknowledge
accountability or transparency as guiding ethical values.
Since the 2016 JURI report, the focus on AI by EU policymakers has increased
significantly, resulting in a second wave of AI policies. In April 2018, 25 European countries signed
a Declaration of Cooperation on Artificial Intelligence (disclosure: LF participated in the meeting),
where they stated their intention to promote a collective European response to the opportunities
and challenges that AI presents; the remaining Member States, namely Greece, Cyprus, Romania
and Croatia, signed the declaration later. Shortly after, the European Commission published a
Communication on the European Approach to AI (the Communication), which defined the
parameters of the EU’s strategy and outlined a
coordinated path forward which centres on the following three priorities:
1) Boosting the EU’s technological and industrial capacity across the economy, in both the
private and public sectors.
2) Preparing for the changes brought about by AI through anticipating market change,
modernising education and training, and adapting social protection systems.
3) Ensuring that there is an appropriate legal and ethical framework that is in line with the
EU’s values.
In April 2019, the Ethics Guidelines on Artificial Intelligence (the Guidelines) were released.
Defined by the High-Level Expert Group on AI (HLEG), they focus on developing trustworthy
AI by ensuring that systems are lawful, ethical and robust (European Commission, 2019). To
achieve this in practice, seven key requirements have been outlined (Figure 1), with an
accompanying assessment list that offers guidance for practical implementation. The HLEG
subsequently released ‘Policy and Investment Recommendations for Trustworthy AI’, which offer
33 recommendations for sustainable growth.
Human agency and oversight: AI systems should allow humans to make informed decisions and
be subject to proper oversight.
Technical robustness and safety: AI systems need to be resilient, secure, safe, accurate, reliable,
and reproducible.
Privacy and data governance: Adequate data governance mechanisms that fully respect privacy
must be ensured.
Transparency: The data, system and AI business models should be transparent and explainable
to stakeholders.
Diversity, non-discrimination and fairness: Unfair bias must be avoided to mitigate the
marginalisation of vulnerable groups and the exacerbation of discrimination.
Societal and environmental well-being: AI systems should be sustainable and benefit all human
beings, including future generations.
Accountability: Responsibility and accountability for AI systems and their outcomes should be
ensured.
Figure 1 - Requirements for trustworthy AI (European Commission, 2019)
Disclosure: LF was a member of the HLEG.
In February 2020, the European Commission released a White Paper which identified different
policy options for regulating AI. These were later substantiated in the European Commission’s
draft of the Artificial Intelligence Act (2021). This document proposes a risk-based approach to regulating
AI and outlines four categories of risk: unacceptable, high-risk, limited risk, and minimal/no risk.
Systems deemed to be of unacceptable risk will be prohibited, including cases of social scoring and
subliminally manipulative systems. High-risk AI includes systems which are safety-critical
components and those that pose specific risks to fundamental rights. For these systems, specific
obligations for providers, importers, distributors, users, and authorised representatives are
outlined. Limited risk systems are those that interact with humans, are used for biometric
categorisation or generate manipulative content (e.g. deepfakes); these systems have specific
transparency requirements. For systems which are not high risk, voluntary codes of conduct are
encouraged. Violating this regulation can lead to fines of up to 6% of global annual turnover or 30
million euros, whichever is higher.
2.1. Domestic policy
The second wave of EU AI policies, emerging from 2018, addresses many of the key drawbacks
highlighted in Cath et al. (2018), including a shift in how AI is conceptualised so as to recognise it
as an unembodied technology which ought to be analysed independently from robotics. However,
the EU’s definition of AI, as outlined in the proposed AI Act, is extremely broad and includes
‘statistical approaches’. This will likely be beneficial for future-proofing the EU’s definition but
could also lead to systems that are not commonly considered AI being regulated by the Act. As a
result, the EU’s vision for a ‘Good AI Society’ has shifted from the narrow governance of robotics
to an inclusive consideration of a variety of techniques and approaches. Additionally, the HLEG’s
ethical principles and the AI Act now explicitly include accountability and transparency
requirements, which represents a marked improvement for guiding the ethical development of AI.
Overall, the EU’s long-term vision for a ‘Good AI Society’, including the specific mechanisms
that will be used for achieving this, appears coherent. The vision for governing AI is underpinned
by fundamental European values, including human dignity, privacy, and democracy. It has three
practical cornerstones which are: improving economic outcomes, minimising social disruption,
and developing appropriate ethical and legal frameworks. The risk-based approach, which
combines hard and soft law, aims to ensure that harms to people are minimised, while allowing
EU Member States to benefit from these technologies at a societal level and in the private sector
(Floridi et al., 2018). Guidance from the HLEG outlines how to develop AI systems ethically, yet
this is non-binding. The enforcement measures outlined in the European Commission’s AI Act
would remedy this but are currently only a proposal and will need to be confirmed by the
European Parliament and Council. The process could take over two years and will likely be subject
to negotiation, revisions and compromises. Accordingly, the vision for AI, which has largely been
defined and outlined by the Commission, could still be altered by MEPs, by Member State
influence through the Council of the European Union, and indirectly through private sector influence.
The EU’s domestic conception of a ‘Good AI Society’ involves inevitable trade-offs. The
European vision has been criticised for focusing too heavily on the protection of individual rights
at the expense of stimulating innovation, thereby hindering economic growth and competitiveness
(Brattberg et al., 2020). It can also be criticised for not actively incentivising AI for social good, which could
include, for example, using AI to help meet the United Nations Sustainable Development Goals
by 2030 (Cowls et al., 2021a; Vinuesa et al., 2020). The outlined vision could also do more to
support collective interests and social values; for example, group privacy (Floridi, 2014; Taylor et
al., 2017). Finally, the EU’s vision gives little consideration to how to address systemic risk, focusing
on the risk to individuals from specific systems rather than the potential of AI to cause wider
changes (Whittlestone et al., 2021). It would be desirable for the EU (in line with its vision) to
intensify efforts and take a leading role in promoting the use of AI for the social good through a
more proactive and bold approach (Cowls et al., 2021b; Foffano et al., 2020) and to more
adequately address collective harms (Tsamados et al., 2021).
2.2. International policy
The EU’s international vision predominantly focuses on promoting cooperation in AI governance
based on the respect for fundamental rights (European Commission, 2020a). However, it is also
characterised by the extraterritorial scope of proposed measures. If passed, the AI Act would apply
not only to providers established in the EU, but also to anyone who places AI systems on the
EU’s Single Market, as well as to cases where the output produced by an AI system is used in the EU. These
measures could lead to a so-called ‘Brussels Effect’, which has already been seen with the
enactment of the GDPR, whereby the market and regulatory power of the EU creates global
standards that are followed by technology companies and, in turn, adopted by third countries
(Bradford, 2020).
The EU’s external-facing vision for a ‘Good AI Society’ should also be understood as part of
the wider objective of ‘digital sovereignty’, of which AI is a constituent part (Roberts et al.,
forthcoming; Floridi, 2020b). Though ‘digital sovereignty’ has not been clearly defined nor
consistently used across EU institutions, it is generally understood in terms of the EU maintaining
strategic autonomy through having the capacity to determine its long-term social and economic future
(Timmers, 2019). At present, this is threatened most significantly by American and Chinese
technology companies, which are, as one European Parliament report put it, “increasingly seen as
dominating entire sectors of the EU economy” (Madiega, 2020). Digital sovereignty is a policy
agenda formulated in response to this challenge and aims at strengthening European control over
external actors through improved regulatory means and the promotion of domestic innovation
(Roberts et al., forthcoming).
One notable omission in the EU’s outward-facing strategy is the absence of a developed
position on the use of AI in the military domain. The proposed AI Act mentions AI for defence
only to remark at the outset that the subject is not covered in the document, despite the fact that
a number of Member States’ armies and security agencies have already begun to research and
develop AI-enabled capabilities (Boulanin et al., 2020; Franke & Sartori, 2019). Several EU
agencies such as the European Defence Agency (EDA), the Agency for Cybersecurity, and the
Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom,
Security and Justice, have opted so far for a cautious approach, sharing research, reports and
recommendations, but no definitive guidelines (ENISA, 2020; eu-LISA, 2020). Similarly, the
European Parliament recently voted to adopt non-binding guidelines for military and non-military
use of AI, focusing on the need to preserve human decision-making, respect human dignity and
human rights (European Parliament, 2021). The much-anticipated AI action plan of the EDA,
which builds on requirements and preferences expressed by Member States, is expected to provide
clearer guidelines on the topic soon (EDA, 2020).
2.3. Assessing progress in achieving the EU’s Good AI Society
The EU’s progress in achieving its high-level policy priorities is mixed. In regard to the first two
overarching priorities outlined in the Communication, namely boosting industry capacity and
preparing for social disruption, numerous initiatives have been introduced. There has been a
concerted effort by both the Commission and Member States to invest heavily in the opportunities
AI presents. For instance, the European Investment Bank (EIB) together with the European
Investment Fund (EIF) pledged €150 million to support AI companies across Europe (European
Commission, 2020b). Various policies have also been outlined that seek to mitigate the societal
disruptions that AI can cause, including the Digital Education Action Plan (2021-2027) to ensure
that citizens have robust digital skills in the face of market change, and the introduction of a Code
of Practice on Disinformation for platforms to follow.
Significant progress has also been made towards the EU’s third priority of developing legal
and ethical frameworks for the governance of AI. Specific guidance was developed for the
HLEG’s principles and recommendations, which provides those developing and deploying AI
with a checklist for putting principles into practice. Although this guidance is voluntary, the
proposed AI Act would provide enforceable regulation to support these ends.
Despite these efforts, the extent to which the EU’s vision of a ‘Good AI Society’ is being
practically achieved is still questionable. The aim of boosting the EU’s industrial capacity is
hamstrung by the current funding of the EU AI ecosystem, which has been criticised as being
inadequate when compared to the US’s and China’s (Bughin et al., 2019). Moreover, some EU
research fundingespecially via Horizon 2020has been criticised as being unethical because of
the support provided to several projects using AI for law enforcement, despite potentially
threatening fundamental EU values and producing no documentation about the ethical and legal
implications of their research and development (Fuster & Brussel, 2020). This indicates that issues
remain for the EU’s AI funding model.
A more fundamental question is whether the EU’s economic, social and regulatory aims
will be achieved equally throughout Europe. Although the shortfall in EU funding is offset by
national funding in certain European countries, including France, Germany and Sweden, this is not the case
everywhere. This spending gap is symptomatic of a wider problem of ensuring an even spread of
positive social and economic outcomes throughout the EU. Some Member States, typically in
Western Europe, have developed AI strategies, yet this is mostly not the case in Eastern and
Southern Europe (Brattberg et al., 2020). This is consequential for social outcomes, as different
levels of investment lead to an unequal accrual of benefits as well as divergent levels of risk from
social disruption, for example because of automation. For instance, Finland ranks 55 places higher
than Croatia for AI readiness, measured by a number of metrics such as vision, infrastructure and
data quality (Government AI Readiness Index 2020, n.d.).
From a regulatory perspective, the proposed AI Act provides a strong foundation for
standardising protections across Europe through prohibiting some use cases and providing
common criteria for defining and regulating high and limited-risk AI. The AI Act proposes a
European AI Board, made up of the heads of each Member State’s national supervisory authority,
which seeks to provide ongoing guidance for governing AI. Importantly, the proposed AI Act
gives the Commission significant influence in regulating AI, such as through chairing the European
AI Board. This is a change from the European Data Protection Board (EDPB), which was mandated
in the GDPR to fulfil a similar function, with the chair of the EDPB elected from national
supervisory authorities. Some commentators have speculated that the proposal also gives the
Commission power to intervene in national enforcement, if national authorities lack the expertise,
resources or will to enforce the regulation’s measures (Reinhold & Müller, 2021). If this proposal
comes to fruition, then it could ensure that the EU’s regulatory vision is enforced across Europe.
Even if enforcement is relatively standardised across the EU, there is a risk that the
proposals may either ‘overregulate’ or ‘underregulate’ AI, depending on the standpoint taken.
In terms of banned systems, the proposed AI Act states that certain AI systems intended
to distort human behaviour, whereby physical or psychological harms are likely to occur, should
be forbidden. This clause is broad and could potentially include a range of use cases, such
as recommender systems which are intended to nudge an individual in a particular direction,
attempting to attract them to a specific kind of content whilst limiting their exposure to others
(Milano et al., 2020). It could reasonably be argued that these systems cause mental and physical
harms, such as through promoting ‘anti-vax’ content or encouraging extremism (as was arguably
the case with the Capitol Riots) (Hao, 2021). Whilst recommender systems can undoubtedly prove
harmful, prohibiting them would be a disproportionate response. Without further clarification,
overregulation could stymie innovation and lead the EU to lose competitiveness against other
states, hindering positive economic outcomes (Brattberg et al., 2020; Murgia & Espinoza, 2020).
At the same time, questions can be raised about the EU underregulating. Some members
of the HLEG criticised the vague and non-committal nature of the Guidelines, which was blamed
on regulatory capture by a substantial industry contingent that outweighed the number of
ethicists on the board (Kelly, 2020). The AI Act may alleviate concerns about non-committal
enforcement; however, key weaknesses remain in its measures. The ban on ‘real-time’ remote
biometric surveillance has a number of key exclusions, including for cases of missing children,
terrorist threats or for serious crimes with prior judicial approval. In response to this, the European
Data Protection Supervisor, Wojciech Wiewiórowski, stated that he was disappointed that the Act
did not provide for a complete moratorium on remote biometric surveillance (European Data
Protection Supervisor, 2021), as had been called for by a cross-party group of 40 MEPs a week
earlier (Lomas, 2021).
Likewise, the requirements for providers of high-risk systems appear strict, with an
ex-ante conformity assessment needing to be completed that includes requirements for data,
documentation, transparency, oversight, robustness, and security. However, for most providers
this is an internal process rather than a third-party document check. Moreover, the text surrounding
disparate impact assessments is vague and non-committal, with little in the way of formal
requirements for checks on bias (MacCarthy & Propp, 2021). As a result, effective protection
from high-risk systems will be largely reliant on effective internal compliance by companies, which
could be lacking in practice. For instance, an EU-affiliated survey reported that over half of small
companies were not GDPR compliant in 2019 (Wolford, 2019). This raises a more general
question over the effectiveness of a prescriptive regulatory approach for governing AI (Clark &
Hadfield, 2019).
A final uncertainty, likely to be influential in determining whether the EU is able to achieve
its vision of a ‘Good AI Society’, is the extent to which the EU agenda of digital sovereignty is
both articulated and successfully enacted. The loose aim of improving the EU’s ability to ‘act
independently’ (Madiega, 2020; Von Der Leyen, 2020) fails to adequately address whom the EU is
seeking to achieve independence ‘from’ (i.e., which other actors might claim digital sovereignty)
or the extent to which choices can be made by the EU ‘independently’ (Roberts et al.,
forthcoming). Without a clear articulation of the EU’s digital sovereignty agenda, it is difficult to
pool support around a clear, holistic policy approach to the digital that can stimulate EU
competitiveness beyond regulatory measures for AI.
3. The US’s Approach to AI
In October 2016, the White House Office of Science and Technology Policy (OSTP) released the
first US government report focusing specifically on AI, entitled ‘Preparing for the Future of
Artificial Intelligence’. The report considered AI as a tool for innovation and defined the
government’s role in the development of AI as a facilitator of innovation and a minimalist
regulator, with the aim of using existing regulatory frameworks wherever possible. An
accompanying document, entitled the ‘National Artificial Intelligence Research and Development
Strategic Plan’, also released in October 2016, provided a more detailed description of how federal
R&D investments would guide the “long term transformational impact of AI”. In their assessment
of these policy documents, Cath et al. (2018) stressed that merely applying existing frameworks to
new problems was inadequate and that, where values were elucidated, the 2016 document offered
little specific guidance.
The Trump administration continued the laissez-faire approach to AI laid out in these
documents, and initially went further by stating that it had no intention of developing a national
plan and that minimising government interference was the best way of ensuring that the
technologies flourished. This position was criticised on account of its failure to stimulate
investment, nurture talent and minimise harms (Knight, 2018) and the administration backtracked
to some extent with the signing of the American AI Initiative in February 2019. This executive
order emphasised the importance of continued American leadership in AI for economic and
national security, as well as international influence. The strategy was underpinned by five key
principles:
1) Driving technological breakthroughs in AI to promote scientific discovery, economic
competitiveness and national security.
2) Developing appropriate technical standards and reducing barriers to the safe testing
and deployment of AI in order to enable the creation and adoption of AI technologies.
3) Training American workers with the skills to prepare them for jobs of the future.
4) Fostering public trust and confidence in AI technologies and protecting civil liberties,
privacy and American values.
5) Promoting an international environment that supports research and opens markets for
American AI industries, while protecting the US’s technological advantage, including its
AI technologies, from competitors.
Policy documents released since then flesh out these five principles. In November 2020, as the Trump administration was coming to a close, the White House released guidance for government
agencies proposing new AI regulations for the private sector, which centres on three themes:
limiting regulatory overreach that might dampen innovation; ensuring public engagement; and
promoting trustworthy AI that is fair, transparent and safe (Hao, 2020). These ideas are contained
within a preamble, as well as 10 principles for the stewardship of AI applications (Figure 2).
1) The government must promote reliable, robust, and trustworthy AI applications.
2) The public should have a chance to participate in all stages of the rule-making process.
3) Policy decisions should be based on science.
4) Agencies should decide which risks are acceptable.
5) Agencies should select approaches that maximise net benefits.
6) Agencies should pursue a technology-neutral, flexible approach.
7) Agencies should make sure AI systems do not discriminate illegally.
8) Context-specific transparency measures are necessary for public trust.
9) Agencies should promote AI systems that are safe, secure, and operate as intended.
10) Interagency cooperation and coordination is necessary for consistent policies.
Figure 2 - US Guidance for Regulation of AI Principles (Executive Office of the President, 2020).
Most recently, the National AI Initiative Act of 2020 codified the US's vision into law when it passed
in early 2021. The aims of the Act largely reflect those of the American AI Initiative, centring on
American AI leadership in R&D and the development of trustworthy AI systems, as well as
preparing for potential workforce disruptions and coordinating military and civilian sectors. It also
establishes a number of bodies to provide federal-level guidance for AI. Most notably, the Act
mandates that the Office of Science and Technology Policy (OSTP) establish a ‘National Artificial Intelligence Office’, which is tasked with
supporting AI R&D, educational initiatives, interagency planning and international cooperation.
Other mandated bodies include an expert National AI Advisory Committee for assessing the
degree to which the US is fulfilling its aims, and a subcommittee for AI and law enforcement that
advises on issues such as legality, biases and data security.
3.1. Domestic policy
The US’s vision for a ‘Good AI Society’ is characterised by an acute focus on limiting regulatory
overreach. However, the National AI Initiative Act signals a growing recognition of the need to
provide some coordination and support for the US to fulfil its ambitions. Nonetheless, rather than
foregrounding regulations that protect individual freedoms, in the sense of negative liberty, the US
strategy still centres on empowering the positive liberty of individuals and corporations to benefit
from AI.
A largely laissez-faire approach to the governance of AI technologies may be permissible
from an economic perspective, though the assumption that limiting regulation is the best way of
ensuring innovation has been questioned by experts, including those who previously advised the
Obama administration on AI (Johnson, 2020). Ethically, however, the US’s vision is problematic
given the numerous ethical challenges linked to the pervasive distribution of AI technologies. The
White House’s emphasis on avoiding overregulation is a clear disincentive for federal agencies to
introduce rigorous regulations, and White House officials have criticised states that have
considered banning facial recognition technology (Vincent, 2020a). Whilst self-regulation by
industry may resolve some potential ethical harms, the lack of specific regulatory measures and
oversight can lead to practices such as ethics washing (introducing superficial measures), ethics
shopping (choosing ethical frameworks that justify actions a posteriori) and ethics lobbying
(exploiting digital ethics to delay regulatory measures) (Floridi, 2019). In practice, the inadequacy
of private sector regulatory measures has facilitated numerous examples of harms, such as from
biases in facial recognition technology (Buolamwini & Gebru, 2018).
3.2. International policies
In contrast to this hands-off approach on the domestic stage, the US’s vision appears geared towards
a hands-on approach to the governance of AI internationally. The American AI Initiative states the
need to promote an international environment that opens markets for American AI industries,
protects the US’s technological advantage and ensures that international cooperation is consistent
with US values. In doing so, this vision incorporates mercantile undertones.
Within this overarching vision, the US’s AI strategy for defence is perhaps the most
developed aspect. The 2019 National Defense Authorization Act established the National
Security Commission on AI (NSCAI), a body set up to review advances in AI to address the
national security needs of the US. In its first report, the NSCAI stressed that the US was facing
strategic competition and made specific recommendations in areas such as investing in R&D,
training and developing AI talent, and protecting and furthering US technical advantages, amongst
others (Rasser, 2019).
Ethical principles for the use of AI in defence, which seek to ensure the compliance of AI
applications with International Humanitarian Law while reinforcing national security, were issued
by the US Defense Innovation Board in February 2020. The identified principles, according to
which uses of AI in defence should be responsible, equitable, traceable, reliable and governable,
have been defined by treating AI as a defence capability that is essential for maintaining
technological, and therefore operational, superiority over an adversary. This is particularly
problematic considering that a principle of equitability is included in place of fairness (Taddeo &
Taylor, forthcoming). The supporting documents specify that the reason for using
equitable rather than fair AI is that this principle “stems from the DoD mantra that fights should
not be fair, as DoD aims to create the conditions to maintain an unfair advantage over any potential
adversaries, thereby increasing the likelihood of deterring conflict from the outset” (Defense
Innovation Board, 2020, p. 31). This is also problematic, because maintaining advantage over an
adversary is seen to outweigh issues of justice and fairness.
Considering the US’s outward-facing AI strategy raises a number of questions over its
efficacy. Although the US’s strategy of export bans may slow China down, programmes such as
‘Made in China 2025’ and ‘China Standards 2035’ seek to improve the domestic creation of
emerging technologies and international technology standards respectively. As a result, export
controls to China are no guarantee of prolonged US AI leadership. For instance, Huawei is making
rapid progress on a chip plant that will eschew American technology, allowing it to skirt US
sanctions and maintain its chip supply (Hille et al., 2020). Indeed, these policies could just speed
up China’s effort to develop and export competing capacities. Similarly, the absence of a developed
domestic approach for promoting responsible innovation could harm the US’s international
competitiveness (Rasser et al., 2019). For example, whilst China is luring back expats through
incentive schemes such as the Thousand Talents Programme, which could improve its overall
competitiveness (Joske, 2020), the US has enacted hostile visa policies which could prove
detrimental for AI firms (Coles, 2020).
Finally, it should be stressed that ‘American values’ are ill-defined. External protectionism
contrasts with internal classical liberalism, with these inconsistencies raising questions about how
unified the American approach to the governance of AI is within the federal government.
3.3. Assessing progress in achieving the US’s Good AI Society
A number of steps have been taken to materialise the US’s vision of a ‘Good AI Society’. In terms
of promoting economic competitiveness and achieving R&D breakthroughs, the government has
committed to doubling federal non-defence AI R&D funding in the 2021 fiscal year (Office of
Science and Technology Policy, 2020). In August 2020, the US announced $1 billion in funding
for 12 research hubs that focus on AI and quantum computing (Vincent, 2020b). Importantly,
government funding only forms part of a dynamic AI ecosystem in which private sector innovation
plays a central role. Though exact figures are difficult to ascertain, many estimates indicate that the
US is the leading country in terms of private sector investment (The Global AI Index, n.d.). Despite
these developments, some commentators argue that the government is not doing enough to
stimulate the AI ecosystem (Rasser et al., 2019).
Efforts to prepare the US for societal change through educational and training initiatives
are also growing. For instance, policy documents have been released which seek to improve
lifetime STEM education (National Science and Technology Council, 2018) and enhance R&D
through a number of research fellowships and training programmes (Artificial Intelligence for the
American People, n.d.). However, the extent to which such programmes can resolve the longer-term
societal changes that AI might cause should be questioned, particularly in regard to the less
educated parts of the population. In 2018, the US was ranked 9th out of 25 advanced economies
in terms of readiness for automation, with vocational technical training considered inadequate
(Paquette, 2018). Other studies echo this finding, pointing to the US as having weaker problem-
solving skills in technology-rich environments than a number of other developed countries, with
suggestions made that further lifelong training programmes are needed (Cummins et al., 2019).
When it comes to developing the types of AI that may lead to radical change in the job
market, the lack of focus on ensuring equitable development may compound these issues. The
National Science and Technology Council’s ‘Charting a Course for Success: America’s Strategy for
STEM Education’ has “Increase Diversity, Equity, and Inclusion in STEM” as one of its pillars,
but neither this document nor other AI-focused documents emphasise maintaining diversity
throughout AI development, raising questions about its effectiveness at retaining
underrepresented groups through the AI pipeline. Failing to focus on equity in training and
retention across racial and socioeconomic groups could cause the societal changes linked to AI to
be detrimental for already marginalised groups that may not be served by education and retraining
initiatives. The National Academies’ artificial intelligence impact study on the workforce,
mandated as part of the National AI Initiative Act, could shed more light on current workforce
failings and future needs, though it is imperative that any study considers disparate impacts.
Federal efforts to ensure that AI is safe and trustworthy have been limited, with the
aforementioned high-level principles being the main action to date. These principles are merely
guidance, and the emphasis on ethical AI is outweighed by the clear prioritisation of a laissez-faire
regulatory approach. Another notable development is a 2021 blog post by the Federal Trade
Commission, which stated that it prohibited unfair practices and that this included racially biased
algorithms (Jillson, 2021). This is suggestive of the potential for federal enforcement against biased
systems. Nonetheless, this example is the exception rather than the norm amongst federal agencies.
In contrast to the hands-off approach of the federal government, a handful of US cities have
introduced their own measures to mitigate potential ethical harms of AI. Recent high-profile
examples include San Francisco’s 2019 moratorium on government use of facial recognition
technology and Portland’s private sector ban of the same technologies (Simonite, 2020). In the
absence of clear national regulations, these are positive developments for achieving ethical
outcomes within these jurisdictions; however, such local measures are leading to significant gaps in
standards between states.
The US has made considerable progress in promoting an international environment that
is beneficial for its vision of a ‘Good AI Society’. It has ensured open markets by pushing back
against data localisation measures that restrict the flow of data, exerting pressure both formally
in trade deals and informally through public threats and increased diplomatic coercion (Basu,
2020; Sherman, 2020). Steps have also been taken to counter strategic competitors, including
export restrictions on numerous AI products in an effort to keep certain technologies out of
China’s hands for supposed economic and security reasons (‘U.S. Government Limits Exports of
Artificial Intelligence Software’, 2020). Combined, these measures have allowed the US to maintain
open markets for its own products, whilst hampering the efforts of competitors to overtake it.
Despite this, some scholars continue to argue that the US has not gone far enough in
protecting its AI capacities, including safeguarding its data sets and stopping the illicit transfer of technologies
(Rasser et al., 2019; Thomas, 2020). The criticism is that the US has not adequately protected its
technological and economic leadership, nor prevented ethically concerning uses of its technologies
for repression elsewhere, such as the surveillance-enabled persecution of the Uyghur minority in
Xinjiang, China (Mozur & Clark, 2020).
4. Transatlantic cooperation on AI governance
Both visions for a ‘Good AI Society’ emphasise the importance of international dialogue, and each
government has already taken steps to improve international governance efforts. After the US
initially refused to join the Global Partnership on AI (GPAI), an international and multi-stakeholder
initiative to guide the responsible development and use of AI (Erdélyi & Goldsmith,
2020), due to a fear that the ethics focus would hinder innovation, it joined, like the EU and
others, in May 2020. This was predominantly an attempt to counter the perceived threat of China
and ensure that AI develops in line with American values globally. The EU and US also support
the OECD’s AI ethics principles, which centre on developing trustworthy AI based on five key
values: sustainable growth and wellbeing, human-centred and fair, transparent and explainable,
robust and safe, and accountable (OECD.AI, n.d.).
Since Biden’s inauguration in January 2021, there have been signs of the potential for
deeper transatlantic ties, with Biden promising an American return to multilateralism and
cooperation to defend democratic values (Schaake & Barker, 2020). In fact, technology is central
to the agenda of the summit of democracies that Biden has promised to convene, and there are
calls to use this opportunity to establish a dialogue on good governance in the digital world
aimed at promoting shared values, such as human rights, in the digital democratic space. Similar
calls have been made by the EU for greater cooperation on technology governance (Hanke Vela &
Herszenhorn, 2020).
Firmer statements by figures in the Biden administration on high-risk technologies could
create fertile ground for greater cooperation with Europe. For instance, during her presidential
campaign, Kamala Harris expressed concern about the potential problems of
using AI in the criminal justice system, and explicitly committed to “ensure that technology used
by federal law enforcement, such as facial recognition and other surveillance, does not further
racial disparities or other biases” (Harris, 2019). With Harris now in office, this appears to be a
promising foundation for deeper agreement in defining international AI ethics beyond the
OECD’s high-level principles.
In reality, expectations of deep transatlantic cooperation, now or in the future, should be
tempered. Assessing the international-facing strategies of the EU and the US reveals tensions that
may be detrimental for international cooperation. Both the EU and US can be seen to be exporting
their ideological approaches to AI beyond their domestic jurisdiction (albeit indirectly in the EU’s
case). Indeed, the EU’s desire for digital sovereignty, which largely centres on curtailing American
technology companies, is in direct conflict with the US agenda for ‘digital free trade’ and low
barriers to entry for its corporations. It seems unlikely that these underlying positions will shift,
given fundamental differences in areas such as digital trade which predate the Trump
administration (Azmeh & Foster, 2016; Jones et al., 2021). Even within the short time that Biden
has been president, there have been discussions of retaliatory tax measures against states which
seek to levy taxes on American technology companies (Islam, 2021).
Likewise, support from both the EU and US for the OECD’s ethical principles should not
be read as alignment on what trustworthy AI should look like. Indeed, there has been a general
coalescing around five ethical principles for governing AI within the West (Floridi & Cowls, 2019).
Yet without a clear definition of the underpinning value system on the one hand, and clear
recommendations on the other, AI principles alone provide little indication of how ethical AI will be
developed and used, or of what a Good AI Society looks like (Roberts, Cowls, Hine, et al., 2021).
As the ethical approaches of the EU and the US further develop, it is likely that differences
in underlying values will become clearer, both in terms of how principles are understood and
enforced. This has already been the case with the interpretation of ‘human-centred values’,
including privacy. The EU has far more stringent data protection laws than the US, with the
distinction between the two typified by the European Court of Justice’s judgment in the July 2020
case of Data Protection Commissioner v. Facebook Ireland Ltd and Maximillian Schrems, which ruled that
the data protection provided by the EU-US Privacy Shield, a framework for regulating the
exchanges of data between the EU and the US, was inadequate (Tracol, 2020).
5. Conclusion
The EU and US have made significant progress since 2018 in solidifying their visions for what a
‘Good AI Society’ should look like and setting out paths forward for achieving these visions. These
different visions, prioritising individual citizens in the EU versus national competitiveness in the
US, have important consequences for the ethical governance of AI both domestically and
internationally. From the standpoint of domestic AI governance, the EU’s approach is ethically
superior, as it foregrounds protecting citizens’ rights. This includes outlining the guiding value of
human-centric trustworthy AI, which anchors additional ethical principles, and providing explicit
guidance for operationalising key principles. The enactment of the proposed AI Act would solidify
this lead in ethical governance through providing a robust enforcement mechanism for high-risk
uses of AI.
The EU’s efforts to provide a concrete regulatory framework that prioritises European
values and safeguards individual rights may seem to have a negative impact on innovation, but this
is necessary to ensure that ethical harms are mitigated in the long term. Looking forward, this
regulatory framework needs to be accompanied by stronger measures that promote responsible
innovation to guarantee ethical outcomes. The EU’s relative lack of investment and leading
technology companies has a knock-on effect into other areas, such as the attraction of AI talent.
These deficiencies undermine the ability of the EU to ensure that technologies are developed and
deployed in line with European values, as evidenced by the difficulties EU countries faced in
developing and using domestic contact tracing applications in the COVID-19 pandemic (Sharon,
2020). Accordingly, the promotion of European competitors in the field of AI, including in
underlying and complementary technologies such as microelectronics, cloud, supercomputing and
cybersecurity, is needed to ensure that the EU’s vision of a ‘Good AI Society’ can materialise in practice.
The laissez-faire path taken by the US at a domestic level is more ethically questionable. It
has largely placed the governance of AI in the hands of the private sector, leaving significant scope
for organisations to prioritise their own interests over those of citizens. Whilst some of the measures
outlined in the National AI Initiative Act appear to be a promising step in the right direction, an
emphasis on ethical and regulatory measures is largely secondary to further strengthening R&D to
improve competitiveness. Without government regulatory oversight, there is a significant risk of
harm through ethics washing or shopping, with the effectiveness of private sector principles
questionable (Hagendorff, 2020). As such, American values may be protected in the sense that
free market principles are preserved. However, this may come at the cost of protecting other
important values, such as fundamental human rights (including those enshrined in the US
constitution) and dignity. This is inadequate from an ethical standpoint.
One continued risk for both the EU and US, if new measures are not introduced, is internal
fragmentation. The benefits and risks associated with AI and data-driven technologies are already
being spread unevenly across both the EU and the US, on account of different regulatory
protections and opportunities from AI that are afforded to citizens. Such a fragmented
implementation is problematic, as it will lead to uneven enforcement and unequal protection of the
rights of citizens. This undermines the ability to maintain and project a consistent vision of a
‘Good AI Society’ from a governance perspective, and raises ethical questions over who is left
unprotected by fragmented regulations.
Whilst not irreconcilable, and despite the new administration in the US, it is unlikely that
these two visions will coalesce around deep transatlantic cooperation. In fact, as the EU’s digital
sovereignty agenda continues to develop, increased friction with the US and its technology
companies may be anticipated. How these tensions play out will have a significant influence on
the development and deployment of AI and the protections that are afforded to citizens globally.
From an ethical standpoint, it would be beneficial for the US to turn its rhetoric on ethical AI into
reality. The statement by the Federal Trade Commission that it will consider enforcing against
unfair or deceptive AI could provide a strong starting point for other agencies to announce their
own measures, but this alone is inadequate. Enforceable AI regulations, meaningful anti-trust
measures and a cooperative international environment, both with allies and competitors, are all
necessary for a truly ‘Good AI Society’ to materialise for all those who are part of it.
References
Allison, G., & Schmidt, E. (2020). Is China Beating the U.S. to AI Supremacy? Harvard Kennedy
School, Belfer Center for Science and International Affairs.
Artificial Intelligence for the American People. (n.d.). Trump White House Archives. Retrieved 24
February 2021, from
Azmeh, S., & Foster, C. (2016). The TPP and the digital trade agenda: Digital industrial policy
and Silicon Valley’s influence on new trade agreements. LSE International Development
Working Paper Series, 16(175), 36.
Basu, A. (2020, January). The Retreat of the Data Localization Brigade: India, Indonesia and Vietnam.
The Diplomat.
Boulanin, V., Saalman, L., Topychkanov, P., Su, F., & Carlsson, M. P. (2020). Artificial Intelligence,
Strategic Stability and Nuclear Risk | SIPRI. SIPRI.
Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
Brattberg, E., Csernatoni, R., & Rugova, V. (2020, June 9). Europe and AI: Leading, Lagging Behind,
or Carving Its Own Way? Carnegie Endowment for International Peace.
Bughin, J., Seong, J., Manyika, J., Hämäläinen, L., Windhagen, E., & Hazan, E. (2019). AI in
Europe: Tackling the gap. McKinsey Global Institute.
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in
Commercial Gender Classification. Conference on Fairness, Accountability and Transparency, 77–91.
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial Intelligence and
the ‘Good Society’: The US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505–528.
Chatterjee, S. (2020). AI strategy of India: Policy framework, adoption challenges and actions for
government. Transforming Government: People, Process and Policy, 14(5), 757–775.
Cisse, M. (2018). Look to Africa to advance artificial intelligence. Nature, 562(7728), 461.
Clark, J., & Hadfield, G. K. (2019). Regulatory Markets for AI Safety. ArXiv:2001.00078
Coles, T. (2020, July 7). Visa Restrictions Unwelcome News for Firms in Need of AI Workers.
Center for Security and Emerging Technology.
Cowls, J., Tsamados, A., Taddeo, M., & Floridi, L. (2021a). A definition, benchmark and
database of AI for social good initiatives. Nature Machine Intelligence, 3(2), 111–115.
Cowls, J., Tsamados, A., Taddeo, M., & Floridi, L. (2021b). The AI Gambit Leveraging Artificial
Intelligence to Combat Climate Change: Opportunities, Challenges, and Recommendations (SSRN
Scholarly Paper ID 3804983). Social Science Research Network.
Cummins, P. A., Yamashita, T., Millar, R. J., & Sahoo, S. (2019). Problem-Solving Skills of the
U.S. Workforce and Preparedness for Job Automation. Adult Learning, 30(3), 111–120.
Defense Innovation Board. (2020). AI Principles: Recommendations on the Ethical Use of Artificial
IntelligenceSupporting Document.
Delcker, J. (2020, September 6). Wary of China, the West closes ranks to set rules for artificial intelligence.
Duan, W. (2020). Build a robust and agile artificial intelligence ethics and governance framework
[构建健敏捷的人工智能理与治理框架]. Science Research, 15(03), 11–
EDA. (2020). Artificial Intelligence: Joint quest for future defence applications. Default.
ENISA. (2020). Artificial Intelligence Cybersecurity Challenges [Report/Study].
Erdélyi, O. J., & Goldsmith, J. (2020). Regulating Artificial Intelligence: Proposal for a Global
Solution. ArXiv.
Ess, C. (2020). Digital Media Ethics. John Wiley & Sons.
eu-LISA. (2020). Artificial intelligence in the operational management of large-scale IT systems: Research and
technology monitoring report: perspectives for eu LISA. Publications Office.
European Commission. (2019, April 8). Ethics guidelines for trustworthy AI [Text].
European Commission. (2020a, February 19). White Paper On Artificial IntelligenceA European
approach to excellence and trust.
European Commission. (2020b, December). New EU financing instrument of up to €150 million to
support European artificial intelligence companies. https://digital-
European Data Protection Supervisor. (2021, April). Artificial Intelligence Act: A welcomed initiative,
but ban on remote biometric identification in public space is necessary.
European Parliament. (2021, January 20). Guidelines for military and non-military use of Artificial
Intelligence | News | European Parliament.
Executive Office of the President. (2020, November). Guidance for Regulation of Artificial Intelligence
Floridi, L. (2014). Open Data, Data Protection, and Group Privacy. Philosophy & Technology, 27(1), 1–3.
Floridi, L. (2019). Translating Principles into Practices of Digital Ethics: Five Risks of Being
Unethical. Philosophy & Technology, 32(2), 185–193.
Floridi, L. (2020a). AI and Its New Winter: From Myths to Realities. Philosophy & Technology,
33(1), 1–3.
Floridi, L. (2020b). The Fight for Digital Sovereignty: What It Is, and Why It Matters, Especially
for the EU. Philosophy & Technology, 33(3), 369–378.
Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society.
Harvard Data Science Review, 1(1).
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C.,
Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018).
AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks,
Principles, and Recommendations. Minds and Machines, 28(4), 689–707.
Foffano, F., Scantamburlo, T., Cortés, A., & Bissolo, C. (2020). European Strategy on AI: Are
we truly fostering social good? ArXiv.
Franke, U., & Sartori, P. (2019). Machine politics: Europe and the AI revolution. European Council
on Foreign Relations.
Fuster, D. G. G., & Brussel, V. U. (2020). Artificial Intelligence and Law Enforcement – Impact on
Fundamental Rights. 92.
Gal, D. (2020, July 9). Perspectives and Approaches in AI Ethics. The Oxford Handbook of Ethics of AI.
Gill, I. (2020, January 17). Whoever leads in artificial intelligence in 2030 will rule the world until
2100. Brookings.
Government AI Readiness Index 2020. (n.d.). Oxford Insights. Retrieved 23 February 2021, from
Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and
Machines, 30(1), 99–120.
Hanke Vela, J., & Herszenhorn, D. M. (2020, November 30). EU seeks anti-China alliance on tech
with Biden. POLITICO.
Hao, K. (2020, January). The US just released 10 principles that it hopes will make AI safer. MIT
Technology Review.
Hao, K. (2021, March). He got Facebook hooked on AI. Now he can’t fix its misinformation addiction.
MIT Technology Review.
Harris, K. (2019, September 9). Kamala’s Plan to Transform the Criminal Justice System and Re-Envision
Public Safety in America. Medium.
Hille, K., Yang, Y., & Liu, Q. (2020, November 1). Huawei develops plan for chip plant to help beat US
Imbrie, A., Kania, E., & Laskai, L. (2020, January). The Question of Comparative Advantage in
Artificial Intelligence: Enduring Strengths and Emerging Challenges for the United
States. Center for Security and Emerging Technology.
Islam, F. (2021, March 29). Biden administration threatens tariffs on UK goods in ‘tech tax’ row.
BBC News.
Jillson, E. (2021, April 19). Aiming for truth, fairness, and equity in your company’s use of AI. Federal
Trade Commission.
Johnson, K. (2020, January 12). Obama-era tech advisors list potential challenges for the White
House’s AI principles. VentureBeat.
Jones, E., Kira, B., Sands, A., & Garrido Alves, D. B. (2021). The UK and Digital Trade: Which way
forward? Blavatnik School of Government.
Joske, A. (2020). Hunting the Phoenix. Australian Strategic Policy Institute.
Kelly, É. (2020, July). EU struggles to go from talk to action on artificial intelligence. Science|Business.
Knight, W. (2018, April). Here’s how the US needs to prepare for the age of artificial intelligence. MIT
Technology Review.
Lawrence, C., & Cordey, S. (2020). The Case for Increased Transatlantic Cooperation on Artificial
Intelligence (p. 148). The Belfer Center for Science and International Affairs.
Lomas, N. (2021, April). MEPs call for European AI rules to ban biometric surveillance in
public. TechCrunch.
MacCarthy, M., & Propp, K. (2021, April 28). Machines Learn That Brussels Writes the Rules: The
EU’s New AI Regulation. Lawfare.
Madiega, T. (2020, July). Digital Sovereignty for Europe [European Parliament Think Tank].
Meltzer, J. P., Kerry, C. F., & Engler, A. (2020, June 17). The importance and opportunities of
transatlantic cooperation on AI. Brookings.
Milano, S., Taddeo, M., & Floridi, L. (2020). Recommender systems and their ethical challenges.
AI & SOCIETY, 35(4), 957–967.
Mozur, P., & Clark, D. (2020, November 23). China’s Surveillance State Sucks Up Data. U.S.
Tech Is Key to Sorting It. The New York Times.
Murgia, M., & Espinoza, J. (2020, February 26). The four problems with Europe’s vision of AI.
National Science and Technology Council. (2018, December). Charting a Course for Success:
America’s Strategy for STEM Education.
OECD.AI. (n.d.). The OECD Artificial Intelligence (AI) Principles. Retrieved 24 February 2021, from
OECD.AI. (2021). Powered by EC/OECD (2021), STIP Compass database.
Office of Science and Technology Policy. (2020, February). President Trump’s FY 2021 Budget
Commits to Double Investments in Key Industries of the Future. The White House.
Paquette, D. (2018, April). The United States is way behind other countries on robot ‘readiness,’
report says. Washington Post.
Pepe, E. (2020). NATO and collective thinking on AI. IISS.
Rasser, M. (2019, December). The United States Needs a Strategy for Artificial Intelligence.
Foreign Policy.
Rasser, M., Lamberth, M., Riikonen, A., Guo, C., Horowitz, M., & Scharre, P. (2019). The
American AI Century: A Blueprint for Action. Center for New American Security.
Reinhold, F., & Müller, A. (2021, April). AlgorithmWatch’s response to the European
Commission’s proposed regulation on Artificial Intelligence – A major step with major
gaps. AlgorithmWatch.
Roberts, H., Cowls, J., Hine, E., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). Governing
Artificial Intelligence in China and the European Union: Comparing Aims and Promoting Ethical
Outcomes (SSRN Scholarly Paper ID 3811034). Social Science Research Network.
Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). The Chinese
approach to artificial intelligence: An analysis of policy, ethics, and regulation. AI &
SOCIETY, 36(1), 59–77.
Sambasivan, N., Arnesen, E., Hutchinson, B., & Prabhakaran, V. (2020). Non-portability of
Algorithmic Fairness in India. ArXiv.
Schaake, M., & Barker, T. (2020, November 24). Democratic Source Code for a New U.S.-EU Tech
Alliance. Lawfare.
Sharon, T. (2020). Blind-sided by privacy? Digital contact tracing, the Apple/Google API and
big tech’s newfound role as global health policy makers. Ethics and Information Technology.
Sherman, J. (2020, April 10). The US Is Waging War on Digital Trade Barriers. Wired.
Simonite, T. (2020, September). Portland’s Face-Recognition Ban Is a New Twist on ‘Smart
Cities’. Wired.
Taylor, L., Floridi, L., & Sloot, B. van der (Eds.). (2017). Group Privacy: New Challenges of Data
Technologies. Springer International Publishing.
The Global AI Index. (n.d.). Tortoise. Retrieved 24 February 2021, from
Thomas, M. A. (2020). Time for a Counter-AI Strategy. Strategic Studies Quarterly, 14(1), 3–8.
Timmers, P. (2019). Ethics of AI and Cybersecurity When Sovereignty is at Stake. Minds and
Machines, 29(4), 635–645.
Tracol, X. (2020). “Schrems II”: The return of the Privacy Shield. Computer Law & Security Review,
39, 105484.
Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2021).
The ethics of algorithms: Key problems and solutions. AI & SOCIETY.
U.S. government limits exports of artificial intelligence software. (2020, January 3). Reuters.
Vincent, J. (2020a, January 7). White House encourages hands-off approach to AI regulation. The Verge.
Vincent, J. (2020b, August 26). US announces $1 billion research push for AI and quantum computing.
The Verge.
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., Felländer, A.,
Langhans, S. D., Tegmark, M., & Fuso Nerini, F. (2020). The role of artificial intelligence
in achieving the Sustainable Development Goals. Nature Communications, 11(1), 233.
Von Der Leyen, U. (2020, February). Shaping Europe’s digital future [Text]. European Commission.
Whittlestone, J., Belfield, H., ÓhÉigeartaigh, S., Maas, M., Hagerty, A., Burden, J., & Avin, S.
(2021, April). Comment on the EU’s world-first AI regulation: ‘an historic opportunity’. CEFR.
Wolford, B. (2019, May 20). Millions of small businesses aren’t GDPR compliant, our survey finds.
Wong, P.-H. (2020). Cultural Differences as Excuses? Human Rights and Cultural Values in
Global Ethics and Governance of AI. Philosophy & Technology, 33(4), 705–715.
Yang, G.-Z., Bellingham, J., Dupont, P. E., Fischer, P., Floridi, L., Full, R., Jacobstein, N.,
Kumar, V., McNutt, M., Merrifield, R., Nelson, B. J., Scassellati, B., Taddeo, M., Taylor,
R., Veloso, M., Wang, Z. L., & Wood, R. (2018). The grand challenges of Science
Robotics. Science Robotics, 3(14), eaar7650.