The Limits of the Brussels Effect on Emerging Techno-Norms
Bright Simons
Introduction: The Source of the Brussels Effect
The “Brussels Effect” (Bradford, 2020) is an elaborate theory of how a political jurisdiction might
evolve into a “Global Regulatory Superpower”.
In the classical Bradfordian version of this concept (Bradford, 2012), the European Union (EU) center (the core institutions commonly referred to as "Brussels", which comprise the decision-making heart of the sprawling EU system) has become a global source of regulatory norms.
A major reason for this turn is that economic actors choose, on the basis of rational calculation, to model their global compliance systems in a number of technical areas on an EU regulatory template (Foulsham, 2019). Likewise, certain governments, sometimes in response to business action but also because of a sense that the EU offers "best practice", model their own, usually lagging, regulations on EU blueprints (Jorgens, 2004; cf. De Ville & Gunst, 2021).
But why do companies choose to apply compliance systems developed to meet EU regulatory requirements globally? And why, especially in the technology field, do they even adopt "by-design" methods that result in their "trading up" their overall level of compliance (Bradford, 2020; Young, 2003; Van Cleynenbreugel, 2022)? Why do they not, instead, explore "regulatory arbitrage" opportunities (cf. Li & Newman, 2022)?
The answers to these questions are well recognised in the literature.
Due to rich historical ties to many parts of the globe, the EU's various legal traditions already serve as a foundation of regulatory design across much of the world. That legal heritage deftly combines civil and common law strands into an "autonomous legal order" (Koopmans, 1991; cf. Robertson, 2012). Even after the departure of the United Kingdom (UK), Ireland remains the inheritor of the common law legacy, helping to preserve the full tapestry of EU legal pluralism (Herlin-Karnell, 2019; Van Hoecke, 2007).
Moreover, the EU's large market of 450 million people, with a relatively harmonious, though by no means homogenous, market regulatory infrastructure, means that once a company has "solved for the EU", it has most probably also solved for many other parts of the world. Another way to put it is that each EU regulatory blueprint contains a "critical mass of tamed heterogeneity" reflecting the diverse regulatory concerns of many other jurisdictions, even those farther afield. And, of course, 450 million wealthy consumers are not so easily ignored.
The EU's institutional depth has also been cited, including the sophisticated deliberative process of the trilogues, through which the three main arms of EU law and policymaking (Parliament, Council and Commission) come into alignment (Hoppe, 2020; Roederer-Rynning & Greenwood, 2015).
Observers go further, citing the political will to enforce regulations once passed (Bradford, 2020; Scholten, 2017; Scholten & Scholten, 2016). Given that companies may encounter laxity, uneven enforcement standards, corruption, lack of clarity, and sheer incompetence in other jurisdictions, it makes sense to prioritise compliance for the jurisdiction that, so to speak, has got its act together. Even in jurisdictions that compete with the EU on some of these metrics, such as the United States and Japan, ideological conflicts and bureaucratic inertia often lead to uneven and unpredictable enforcement. The austere Brussels model contrasts with the upheavals seen at the interface between politics and regulation in the United States, for instance (Wei et al., 2022; cf. Sheingate & Greer, 2021).
The General Data Protection Regulation (GDPR) is the summit of the Brussels Effect's manifestation in the field of technology (Hoofnagle et al., 2019). Its massive impact on the compliance landscape of internet business models best exemplifies the reach of EU regulatory norm-making (Li et al., 2019; Ryngaert & Taylor, 2020).
The internet's chequered history with privacy and data protection, and the sheer cost of maintaining a coherent user experience whilst also attempting jurisdiction-specific privacy and data protection approaches, strongly motivate adopting what presents itself as the "gold standard" regulatory regime and being done with it (Schwartz, 2019). This is especially so when it is very hard for an entity to openly compete on the basis of being under-compliant on matters of privacy and data protection, and when the internet is, in most practical respects, cross-border by design. Whatever an entity's true compliance culture, the public and formal adoption of GDPR makes sense for many internet-driven business models.
With that elaborate context in place, the subject matter of this essay can now take shape. What are the limits of this Brussels Effect paradigm in vertical technology domains that are still emerging, such as genomics, artificial intelligence and their admixtures?
This essay uses a micro case study approach to address this question and, in doing so, to contribute to the literature arguing that the Brussels Effect has been exaggerated (Orbie, 2021; Lindseth, 2021). It does so, however, with a novel emphasis on the difference between low-tech and high-tech contexts, along one dimension, and on the tension between the precautionary risk-management norm and techno-utilitarian logic, along the other.
Early Days Yet: Brussels Reaches for AI Ethics Dominance
Vogel (2012) has carefully charted how, sometime in the 1990s, the EU overtook the US as the dominant champion of the "precautionary" approach to regulation. The precautionary principle enjoins public authorities to make rules aimed at preventing harm even where there is a lack of scientific consensus, or even active disagreement, about the likely impact of the suspected cause of that harm, provided that some information exists to suggest potentially serious effects if left unchecked (Wiener, 2001; Wiener, 2007; Myhr & Traavik, 2002). Considered another way, the precautionary principle "shifts the burden" to the regulated actor to justify why harm is unlikely to result from its actions, rather than requiring proof or evidence of harm from the regulator or a third-party opponent of the action (Sunstein, 2003).
It is obvious from the foregoing that in technological matters, especially of the high-tech variety, where innovation and its default acceleration towards an uncertain future are often involved, precautionary risk management is bound to stir up strong passions about neo-luddism, anti-progress sentiment, bureaucratic inertia and even irrationality (Weeramantry, 1998; Marchant & Sylvester, 2006).
The emergence of the precautionary principle as the dominant norm of EU regulatory philosophy thus
finds exceptional resonance in the field of Artificial Intelligence (AI) regulation, where intense debate
is raging about the likely harms of a new technology with a profoundly uncertain trajectory.
Unsurprisingly, the draft-stage EU AI Act is a rich study in the EU's distinct approach to risk management for new technologies. The speed with which OpenAI (the developer of ChatGPT) moved to align with GDPR-based enforcement actions by the Italian Data Protection Authority suggests to some that the Brussels Effect is already taking hold (Yakışır, 2023). But is it?
Early trends raise many doubts.
The precautionary model generally tends to struggle when pitted against innovations where expectations of benefits are particularly strong and alternative solutions are harder to envisage.
From April 2021, when the European Commission first proposed the EU AI Act (Kop, 2021), to June 14th, 2023, when the European Parliament reached its internal consensus ahead of engagement with the other EU stakeholders, the actions of key European actors show that Europe would love to eat its cake and have it. Every effort has been made to carve out areas of maximal benefit even whilst proposing a high burden of regulation on private AI companies.
The Slovenian presidency, for instance, championed an amendment to remove general purpose AI systems (GPAIs), AI tools like ChatGPT that can handle many tasks without requiring fresh training, from consideration in the Act (Bertuzzi, 2021). Since its presidency ended, Slovenia has continued to pursue these and similar aims, primarily through the Council, and in direct conflict with the co-rapporteurs of the Act in the Parliament, who have proposed that GPAIs be automatically subject to the highest risk tier in the EU AI Act. On top of this, the rapporteurs want GPAI "value chains" to undergo rigorous ongoing external audits.¹ Such stark contradictions usually lead to functional incoherence in final compromise texts, reducing their appeal for broader emulation.
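For readers less familiar with the Act's architecture, the sketch below is a minimal, purely illustrative Python rendering of the risk-tiering logic that the co-rapporteurs would extend to GPAIs; the tier names follow the 2021 Commission proposal, while the example use cases and their mapping are simplifying assumptions, not legal text.

```python
# Illustrative sketch of the draft EU AI Act's risk-tier logic (simplified).
# Tier names follow the 2021 Commission proposal; the example use cases and
# their mapping below are assumptions for illustration, not legal text.

RISK_TIERS = {
    "prohibited": {"public social scoring", "subliminal manipulation"},
    "high": {"remote biometric identification", "credit scoring", "recruitment screening"},
    "limited": {"chatbots", "deepfakes"},  # transparency obligations only
}

def classify(use_case: str) -> str:
    """Return the (simplified) risk tier for a given AI use case."""
    for tier, use_cases in RISK_TIERS.items():
        if use_case in use_cases:
            return tier
    return "minimal"  # everything else: voluntary codes of conduct

if __name__ == "__main__":
    for case in ("credit scoring", "chatbots", "weather forecasting"):
        print(f"{case}: {classify(case)} risk")
```

The point of contention described above is whether general purpose systems should be hard-wired into the "high" tier regardless of use case, or left outside such a scheme altogether.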
Indeed, if the Council has its way, the scope of the Act will be limited to machine learning systems rather than AI as a whole. Additionally, wide-ranging national security exemptions will be made. On that latter point, there are many in the Parliament, especially from the center-right camps, who had wanted to see the national security safe harbour widened well beyond military and police applications. "Benefits maximisation" of this type goes to the heart of economic self-preservation for individual European states and of the EU's geopolitical standing vis-à-vis China, the United States, and other powers. The widespread view that AI could confer strategic advantages of infinite scope, yet of a highly uncertain character, complicates the risk-based picture of the precautionary super-norm.
For example, Kreitmeir and Raschky (2023) find that the productivity of Italian software developers dropped by 50% following the ChatGPT ban. Whether causation is rigorously established or not, such strong correlations reinforce the sense among elites that AI's benefits in domestic economic settings are extremely high. Likewise, the Commission's dire report to the Parliament and Council about the EU's unfavourable position in the global technology arms race (EU Commission, 2022) underpins the geopolitical point. In the Commission's own words:
In a shifting geopolitical environment, the EU needs to continue strengthening its resilience
and open strategic autonomy in critical sectors linked to the transitions.
It goes without saying that other geopolitical centers see things in a similar light. Both the United
States and China have released a stream of policies aimed at maximising the benefits of advanced
technologies for strategic advantage (Demchak, 2019). In such a scenario, politics dominate
normative considerations, and regulatory globalisation becomes ever more unrealistic.
The "geo-normative" framing of regulatory convergence is best captured in a matrix showing the tensions and levers within Great Power relations that accelerate or decelerate convergence and smooth the way for regulatory globalisation. In the case of AI, strategic bargaining in the neo-realist model, leading to some kind of mutual toleration of conflicting systems, is the most viable outcome, not the unilateral diffusion and normative entrepreneurship logic at the root of the Brussels Effect concept.

Fig 1. Rendition of a framework on the constraints of Great Power regulatory globalisation. Source: Newman & Posner (2015), with visualisation by Renda (2022)
Indeed, as a Brookings study has found, the US has already diverged substantially from EU AI
regulatory approaches with no sign of any imminent convergence (Engler, 2023).
India, an emerging global digital power, initially announced, emphatically, that it had no plans to introduce any AI regulations or legislation at all (CPI, 2023). Eventually, the Union Minister responsible for technology matters suggested the possibility of a multilateral framework but made no commitment to a domestic regime even remotely close to the EU approach (TNN, 2023).
Several of the AI use cases prohibited in the EU AI Act, such as automated surveillance, are areas of strong research and deployment in China, with active state support and participation (Liang et al., 2018; Roberts et al., 2021). Whilst the EU prohibits AI-driven social scoring, China is devising ever cleverer AI algorithms to extend the concept into even more areas of national life (Neuwirth, 2023; Kazim et al., 2022).
To be clear, there was no expectation of the Brussels Effect manifesting in the AI terrain in the form of mass copycatting of EU regulations by world powers, great and small. The classical operation of the paradigm should, nonetheless, have produced signs that the European approach is exerting suasive force on how non-European actors perceive the matter. So far, there is scant evidence of this happening. Countries are striking out in ways that explicitly distinguish their approaches from the EU's: they are not introducing risk-tiering with precautionary obligations on providers, and they are not prohibiting particular use cases.
A deep splintering of the geonormative sphere is clearly already underway.
The Biotech Revolution meets the Precautionary Juggernaut
In 2018, the Court of Justice of the European Union (CJEU) ruled in Case C-528/16 that novel gene-editing technologies like CRISPR must conform to a dated biotech regulatory regime, the GMO Directive.² Following the judgment, the Council asked the Commission to advise on the implications of this development. On 29th April 2021, the Commission submitted its findings, establishing that:
“[T]he EU legislation on GMOs has clear implementation challenges and that there are legal
uncertainties as regards new techniques and new applications.”
In short, trying to regulate the fast-changing world of modern genomics with a regulatory regime whose core dates to 2001 is, to put it mildly, daunting. Yet, despite years of sustained deliberations among the Commission and European founts of expertise such as the European Medicines Agency and the European Group on Ethics in Science and New Technologies, there has been no clear policy direction to fix the problem (EMA, 2018).
Unsurprisingly, the CJEU's decision was greeted with universal anguish across the European genomics sector. One academic, pointing to the inflexibility of the GMO Directive, described the likely consequence as a "death blow to plant biotech in Europe".³
Genome-editing technologies such as CRISPR are widely believed to hold the key to vast biomedical, agricultural, ecological-conservation and nutritional benefits, from curing cancer to breeding climate-resilient crops (Garland, 2021). They are also, obviously, highly novel systems wreathed in a great deal of scientific uncertainty, pitting benefits maximisation, from a techno-utilitarian perspective, against the precautionary super-norm.
Before the CJEU gave its ruling, the European Academies' Science Advisory Council (EASAC), representing the weight of eminent scientific opinion in the EU, had cautioned against regulating genome editing along the same lines as GMOs. Following the ruling, EASAC has doubled down on its criticism of the EU's biotech posture (EASAC, 2020).
Cross-Channel conflict began soon after the ruling, and the United Kingdom (UK) eventually introduced legislation permitting gene editing within the UK, though some devolved administrations, such as Wales, continue to keep faith with the EU's approach (Thompson, 2022).
It is common knowledge that US and EU biotech regulation in the agricultural space has diverged massively, not just in respect of GMOs and agrochemical residues but in a number of other important areas as well (Grossman, 2018; Asquer & Krachkovskaya, 2021). What is less well appreciated is how isolated the EU is becoming in the wider world of genomic innovation, even as internal ideological polarisation in the Single Market itself stalls decisive action (Hjort et al., 2021).
Nor is the scope of this anomie limited to gene editing. A fast-growing consensus in the literature is that, in the broader context of general genomic research, the EU's approach to data sharing is steadily orphaning European science from the rest of the world's, forcing some of its best scientists to migrate beyond the bounds of European regulation (Molnár-Gábor & Korbel, 2020).
Conclusion: Emerging Techno-Norms are Multi-Polar
The reliance on developments in the aftermath of GDPR's passage to project an outsize role for the EU in global standards-setting suffers from a serious myopia about contextual factors.
GDPR is an easy operational fit with the EU’s precautionary normative stance because privacy-laxity
does not promise a preponderance of benefits apparent to elites and major interests. Credible
alternatives to privacy-laxity for operational convenience have always been well known, and now they
abound, from data subject anonymisation to synthetic data and federated architectures (Costa, 2011).
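To make the point concrete, the short Python sketch below illustrates salted pseudonymisation of a direct identifier, one of the simplest members of that family of techniques; the record layout, field names and salt handling are illustrative assumptions, not a compliance recipe.

```python
import hashlib
import os

# Minimal illustration of pseudonymisation, one of the simplest of the
# "credible alternatives to privacy-laxity" mentioned above. The record
# layout and salt handling are illustrative assumptions, not a GDPR recipe.

SALT = os.urandom(16)  # in practice, stored separately from the pseudonymised data

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

record = {"user_id": "alice@example.com", "purchase": "book", "amount_eur": 12.50}
safe_record = {**record, "user_id": pseudonymise(record["user_id"])}
print(safe_record)
```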
Furthermore, GDPR was not, at birth, interposed into a high-tech, super-novel context. It actually drew on a large body of consensus legislation that had established privacy as a high-risk area over decades. The EU was not even particularly ahead of the curve in risk identification; at best, it was an aggregator and sharpener of international norms. European data protection laws of the 1970s, judicial rulings of the 1980s, and many global frameworks and declarations, some with treaty standing, had paved the way, eroding any lingering notion of privacy-laxity as a strategic advantage (Kuner, 2009).
Another critical factor is the nature of policymaking in the EU itself and how it shapes the regulation of
new technologies high up on the high-tech totem pole.
A much-debated effect of the Treaty of Amsterdam is the growing informalisation of the EU's legislative process (Reh et al., 2013), especially through so-called early agreements and conciliations (Shackleton & Raunio, 2003; Rasmussen, 2011; Häge & Kaeding, 2007). Whilst the case for this trend emphasises the need to eliminate rigidities and bureaucratic logjams, the resulting outcome has been one in which the Commission's technocratic input is increasingly diluted and yet open market preferences (which should come from greater public awareness of the matters being deliberated) do not exert much impact (cf. Farrell & Héritier, 2003; Christiansen & Neuhold, 2013). Instead, specialist lobbies have seen their influence rise, leading to legislation that is sometimes technically disjointed and out of touch with market sentiment (Greenwood & Roederer-Rynning, 2021). The discordance around AI and genome-editing policymaking in the EU can be traced back to this confused state of affairs.
In fact, there have been only a handful of failures of the conciliation process under the co-decision framework, and two of the most notable involved technology innovations. The first, in 1994, was the Parliament's rejection of the proposal to apply the Open Network Provision (ONP) to voice telephony. The second was the European Parliament's decision in 1995 to vote down the Conciliation Committee's proposal, representing a compromise between delegations of the Council and the Parliament, to harmonise procedures for the patenting of biotechnological inventions across Europe.
It is thus safe to say that subsequent efforts to deepen informal consultations, and to reduce formal EU legislative procedures to rubber-stamping ceremonies, were influenced heavily by the experience of trying to adapt the vaunted trilogue model to the unique complexities of regulating technology innovation. And yet, in both the biotech and ONP controversies, the highly technocratic Commission was far more in tune with market preferences than the supposedly more democratic, and certainly far more politicised, Council and Parliament.
The nature of high-tech industries is such that deep knowledge is often a requirement for serving consumer needs and, to that extent, for approximating the public interest. Politicised institutions are, on the other hand, more vulnerable to lobbies pushing shallow, narrow objectives. As the EU's precautionary super-norm has come up against the demands of high-tech innovation with a relatively risky profile, it has floundered because of the backsliding of technocratic centrality in Brussels.
The Juncker Commission's vision of a Digital Single Market, the von der Leyen Commission's ambition of a Digital Regulatory Superstate, and the Strategic Autonomy aspirations of the Horizon era have all struggled to keep up with the rapid pace of technological multipolarisation around the world (cf. Renda, 2020). That pace, driven by norms that privilege the search for geostrategic advantage and the holy grail of a maximal-benefits curve in a world under resource pressure, promises a proliferation of fractures along the most promising axes of differential performance, such as data-driven genomics (Holman, 2019).
As countries and companies experiment with new institutional forms, hybrid public-private models (in which the lines between government-regulator and regulated enterprise are increasingly blurred), and complex technology supply-chain alliances, and as they grow comfortable with interoperability across highly divergent systems, all in a bid to stay ahead of the new-technologies curve, the notion of a single source of truth for techno-norms in a world on edge has become increasingly difficult to sustain.
References
Asquer, A., & Krachkovskaya, I. (2021). Uncertainty, institutions and regulatory responses to
emerging technologies: CRISPR Gene editing in the US and the EU (2012–2019). Regulation &
Governance, 15(4), 1111-1127.
Bertuzzi, L. (2021). EU Council presidency pitches significant changes to AI Act proposal. Euractiv.
Bradford, A. (2012). The Brussels Effect. Northwestern University Law Review, 107: 1.
Bradford, A., (2020). The Brussels Effect: How the European Union Rules the World. Faculty Books.
232.
https://scholarship.law.columbia.edu/books/232
Christiansen, T., & Neuhold, C. (2013). Informal politics in the EU. JCMS: Journal of Common Market
Studies, 51(6), 1196–1206. https://doi.org/10.1111/jcms.12068
Roederer-Rynning, C., & Greenwood, J. (2015). The culture of trilogues. Journal of European Public Policy, 22(8), 1148-1165. https://doi.org/10.1080/13501763.2014.992934
CJEU. (2019). Court of Justice of the European Union PRESS RELEASE No 111/18 Luxembourg, 25
July 2018 Judgment in Case C-528/16 Confédération paysanne and Others v Premier ministre and
Ministre de l’Agriculture, de l’Agroalimentaire et de la Forêt.
https://curia.europa.eu/jcms/upload/docs/application/pdf/2018-07/cp180111en.pdf
Costa, Luiz. (2011). Privacy and the Precautionary Principle. Computer Law & Security Report. 28.
10.1016/j.clsr.2011.11.004.
CPI. (2023). India Does Not Plan To Regulate AI For The Time Being. Competition Policy
International. https://www.competitionpolicyinternational.com/india-does-not-plan-to-regulate-ai-for-
the-time-being/
Demchak, C. C. (2019). China: Determined to dominate cyberspace and AI. Bulletin of the Atomic Scientists, 75(3), 99.
De Ville, F., & Gunst, S. (2021). The Brussels Effect: How the GDPR Conquered Silicon
Valley. European Foreign Affairs Review, 26(3).
EASAC. (2020). The regulation of genome-edited plants in the European Union. EASAC
Commentary.
https://easac.eu/fileadmin/PDF_s/reports_statements/Genome_Editing/EASAC_Genome-
Edited_Plants_Web.pdf
EMA. (2018). Report of the EMA Expert Meeting on Genome Editing Technologies Used in Medicinal
Product Development. EMA/47066/2018.
Engler, A. (2023). The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to
alignment. Brookings. https://www.brookings.edu/research/the-eu-and-us-diverge-on-ai-regulation-a-
transatlantic-comparison-and-steps-to-alignment/
EU Commission. (2022). 2022 Strategic Foresight Report Twinning the green and digital transitions in
the new geopolitical context. European Commission. https://eur-lex.europa.eu/legal-
content/EN/TXT/HTML/?uri=CELEX:52022DC0289
Farrell H., Héritier A. (2003). Formal and informal institutions under codecision: Continuous
constitution-building in Europe. Governance, 16, 577-600.
Foulsham, M. (2019). Living with the new general data protection regulation (GDPR). Financial
Compliance: Issues, Concerns and Future Directions, 113-136.
Garland, S. (2021). EU policy must change to reflect the potential of gene editing for addressing
climate change. Global Food Security, 28, 100496.
Greenwood, J., & Roederer-Rynning, C. (2021). Organized interests and trilogues in a post-regulatory
era of EU policy-making. Journal of European Public Policy, 28(1), 112-
131. https://doi.org/10.1080/13501763.2020.1859592
Grossman, M.R. (2018). Agricultural Biotechnology: Regulation in the United States and the
European Union. In: Bremmers, H., Purnhagen, K. (eds) Regulating and Managing Food Safety in the
EU. Economic Analysis of Law in European Legal Scholarship, vol 6. Springer, Cham.
https://doi.org/10.1007/978-3-319-77045-1_15
Häge F., Kaeding M. (2007). Reconsidering the European Parliament’s legislative influence: Formal
vs. informal procedures. Journal of European Integration, 29, 341-361
Herlin-Karnell, E. (2019) The EU as a Promoter of Values and the European Global Project. German
Law Journal, 10.1017/S207183220001782X, 13, 11, (1225-1246).
Hjort, C., Cole, J., & Frébort, I. (2021). European genome editing regulations: Threats to the
European bioeconomy and unfit for purpose. EFB Bioeconomy Journal, 1, 100001.
Holman, C. M. (2019). A fractured international response to CRISPR-enabled gene editing of
agricultural products. Biotechnology law report, 38(1), 3-23.
Hoofnagle, C. J., Van Der Sloot, B., & Borgesius, F. Z. (2019). The European Union general data
protection regulation: what it is and what it means. Information & Communications Technology
Law, 28(1), 65-98.
Hoppe, A. (2020). The devil is in the process: an analysis of the impact of negotiation processes in
trilogues on EU legislation. PhD Dissertation, Utrecht University. https://doi.org/10.33540/34
Jorgens, H. (2004). Governance by Diffusion: Implementing Global Norms through Cross-National
Imitation and Learning. Chapter 9, in Governance for Sustainable Development: The Challenge of
Adopting Form to Function, Edited by Lafferty, W. Elgar.
Kazim, E., Güçlütürk, O., Almeida, D., Kerrigan, C., Lomas, E., Koshiyama, A., ... & Trengove, M.
(2022). Proposed EU AI Act—Presidency compromise text: select overview and comment on the
changes to the proposed regulation. AI and Ethics, 1-7.
Koopmans, T. (1991). The birth of European law at the crossroads of legal traditions. Am. J. Comp.
L., 39, 493.
Kop, M. (2021, September). EU artificial intelligence act: the European approach to AI. Stanford-
Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford
University, Issue.
Kreitmeir, D. H., & Raschky, P. A. (2023). The Unintended Consequences of Censoring Digital
Technology--Evidence from Italy's ChatGPT Ban. arXiv preprint arXiv:2304.09339.
Kuner, C. (2009). An international legal framework for data protection: Issues and
prospects. Computer law & security review, 25(4), 307-317.
Li, H., Yu, L., & He, W. (2019). The impact of GDPR on global technology development. Journal of
Global Information Technology Management, 22(1), 1-6.
Li, S., & Newman, A. (2022). Over the shoulder enforcement in European regulatory networks: The role of arbitrage mitigation mechanisms in the General Data Protection Regulation. Journal of European Public Policy, 29(10), 1698-1720. https://doi.org/10.1080/13501763.2022.2069845
Liang, F., Das, V., Kostyuk, N., & Hussain, M. M. (2018). Constructing a data‐driven society: China's
social credit system as a state surveillance infrastructure. Policy & Internet, 10(4), 415-453.
Lindseth, P. (2021). Book Review: Anu Bradford, The Brussels Effect: How the European Union
Rules the World, Oxford University Press 2020, American Journal of Comparative Law (Forthcoming),
Available at SSRN: https://ssrn.com/abstract=3810798
Marchant, G., & Sylvester, D. (2006). Transnational models for regulation of nanotechnology. Journal of Law, Medicine and Ethics, 34, 714.
Molnár-Gábor, F., & Korbel, J. O. (2020). Genomic data sharing in Europe is stumbling-Could a code
of conduct prevent its fall?. EMBO molecular medicine, 12(3), e11421.
https://doi.org/10.15252/emmm.201911421
Myhr, A. I., & Traavik, T. (2002). The precautionary principle: Scientific uncertainty and omitted
research in the context of GMO use and release. Journal of agricultural and environmental ethics, 15,
73-86.
Newman, A. L., and Posner, E. (2015). Putting the EU in Its Place: Policy Strategies and the Global
Regulatory Context. Journal of European Public Policy 22 no. 9: 1316−1335.
Orbie, J. (2021). EU trade policy meets geopolitics: what about trade justice?. European Foreign
Affairs Review, 26(2), 197-202.
Rapela, M. (2019). The European Court of Justice ruling on products derived from genome editing: a
case for Brazil and Argentina. Seed News Magazine, 18(1), 6-8.
Rasmussen A. (2011). Early conclusion in bicameral bargaining: Evidence from the co-decision
legislative procedure of the European Union. European Union Politics, 12, 41-64.
Reh C, Héritier A, Bressanelli E, et al. (2013) The informal politics of legislation explaining secluded
decision making in the European Union. Comparative Political Studies 46(9): 1112–1142.
Renda, A. (2020). Single Market 2.0: the European Union as a Platform. Research Papers in Law
2/2020.
Renda, A. (2022). Beyond the Brussels effect. leveraging digital regulation for strategic
autonomy. FEPS Policy Brief.
Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). The Chinese approach
to artificial intelligence: an analysis of policy, ethics, and regulation. Ethics, Governance, and Policies
in Artificial Intelligence, 47-79.
Ryngaert, C., & Taylor, M. (2020). The GDPR as global data protection regulation?. American Journal
of International Law, 114, 5-9.
Sheingate, A., & Greer, A. (2021). Populism, Politicization and Policy Change in US and UK Agro-
food Policies, Journal of Comparative Policy Analysis: Research and Practice, 23:5-6, 544-
560, DOI: 10.1080/13876988.2020.1749518
Scholten, M. (2017). Mind the trend! Enforcement of EU law has been moving to ‘Brussels’, Journal of
European Public Policy, 24:9, 1348-1366, DOI: 10.1080/13501763.2017.1314538
Scholten, M., and Scholten, D. (2016) ‘From regulation to enforcement in the EU policy cycle: A new
type of functional spillover?’ Paper presented at the Sixth Biennial Conference, ECPR Standing
Group on Regulatory Governance, July 6, 2016
Schwartz, P. M. (2019). Global data privacy: The EU way. NYUL Rev., 94, 771.
Shackleton M., Raunio T. (2003). Codecision since Amsterdam: A laboratory for institutional
innovation and change. Journal of European Public Policy, 10, 171-187.
Sunstein, C. (2003). The paralyzing principle: Does the precautionary principle point us in any helpful direction? Regulation, Winter 2002–2003. https://heinonline.org/HOL/Page?handle=hein.journals/rcatorbg25&div=58&g_sent=1&casa_token=&collection=journals
Thompson, K. (2022). Governments split over gene-edited food. CIEH. 15 June 2022. https://www.cieh.org/news/blog/2022/governments-split-over-gene-edited-food/
TNN. (2023). India for international framework to regulate AI platforms. Times of India, May 18th,
2023.
Van Cleynenbreugel, P. (2022). By-design regulation and European Union law: Opportunities,
challenges, and the road ahead. Law, Regulation and Governance in the Information Society, 49-66.
Van Hoecke, M. (2007). European legal cultures in a context of globalisation. In XXIII IVR World
Congress of Philosophy of Law and Social Philosophy (pp. 81-99). Oficyna a Wolters Kluwer Polska.
Vogel, D. (2012), The Politics of Precaution: Regulating Health, Safety, and Environmental Risks in
Europe and the United States. Princeton, NJ: Princeton University Press.
Weeramantry, C. (1998). Justice without frontiers; Protecting Human Rights in the Age of
Technology. Kluwer Law. International, The Hague
Wei, Y., Song, J., Werner, T., & Jia, N. (2022). Strategic Consequences of Democratic Backsliding:
An Examination of the Trump Effect on Firm Regulatory Risk.
Wiener, J. (2001). Precaution in a Multi-Risk World. In D Paustenbach (ed), The Risk Assessment of
Environmental and Human Health Hazards, 2nd ed (John Wiley, 2001).
Wiener, J. B. (2007). Precaution, in Bodansky, D., Brunnée, J., & Hey, E. (Eds.). The Oxford
Handbook of International Environmental Law. Oxford University Press.
Yakışır, C. (2023). An Evaluation of the ChatGPT Decision, Which Italy Blocked Access on the Grounds of Violation of the GDPR (April 19, 2023).
Young, A. (2003). Political Transfer and “Trading Up”?: Transatlantic Trade in Genetically Modified
Food and U.S. Politics. World Politics, 55(4), 457-484. doi:10.1353/wp.2003.0026
Notes
1. See: DRAFT REPORT on the proposal for a regulation of the European Parliament and of the Council on harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM2021/0206 – C9-0146/2021 – 2021/0106(COD)), from the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs. Rapporteurs: Brando Benifei, Ioan-Dragoş Tudorache (Joint committee procedure – Rule 58 of the Rules of Procedure). Text available at: https://www.europarl.europa.eu/doceo/document/CJ40-PR-731563_EN.pdf
2. The high-level summary is as follows: "Organisms obtained by mutagenesis are GMOs and are, in principle, subject to the obligations laid down by the GMO Directive. However, organisms obtained by mutagenesis techniques which have conventionally been used in a number of applications and have a long safety record are exempt from those obligations, on the understanding that the Member States are free to subject them, in compliance with EU law, to the obligations laid down by the directive or to other obligations." (CJEU, 2019).
3. Sarah Schmidt of the Heinrich Heine University in Düsseldorf (Rapela, 2019).