Part 3
Internet governance and
democratic states
8 The return of the state? Power
and legitimacy challenges to
the EU’s regulation of online
disinformation
Julia Rone
Introduction
In November 2018, at the Internet Governance Forum (IGF) Opening Ceremony,
French President Emmanuel Macron shared a new vision of the state’s role in
internet regulation. Macron argued that states could and should regulate the
internet and were the actors in the best position to do so ( Macron 2018 ). Accord-
ing to the French leader, the choice between a laissez faire internet, driven by
corporate rule (what he referred to as a “Californian internet”), and a compart-
mentalised internet, “entirely monitored by strong and authoritarian states” (a
“Chinese internet”), is a false choice. Democratic states, he argued, should step in
to regulate the internet, while preserving respect for human rights and freedom of
information. Macron’s speech seemed to suggest that there is a third, European,
model of regulating the internet.
Several months later, in March 2019, Mark Zuckerberg, the CEO of Facebook,
the social networking company with more than 2.45 billion users, published an
op- ed in The Washington Post claiming that the internet “needs new rules” ( Zuck-
erberg 2019 ). Considering that Facebook had argued against regulation for years
( Kayali 2019 ), it seemed that the company had finally acknowledged the necessity of
government regulation.
In a sense, these two interventions, one by the leader of a G7 country and one
by the CEO of the world’s seventh most valuable private company, both point to
a “return of the state” in the field of internet governance, presupposing public control over what was once generally considered the domain of private companies in the West. What is more, it suggests a particular “return of the democratic state”, different from existing authoritarian attempts at controlling the internet, one that has, as such, met serious challenges along the way.
This chapter narrows down the general question of the nature of the “return of
the state” in internet regulation to focus in particular on the regulation of online
disinformation. It focuses explicitly on the case of the European Union, where the
topic of disinformation has gained widespread prominence in the aftermath of the
Brexit referendum and amid fears of Russian intervention in the 2019 European
elections ( Apuzzo and Satariano 2019 ; Cadwalladr 2017 ; Tucker et al. 2018 ). We
pose several key questions: Who is carrying out the regulation of disinformation
in the EU and what power does this require and give them? What legitimacy
challenges arise in the process and how are they addressed? Has the EU managed
to offer a third model of internet regulation, going beyond both the Californian
model of private- sector- led regulation with minimal government regulation and
the Chinese model of centralised control?
While disinformation has been a global problem for internet regulation, much of
the literature so far has focused on the US experience and debates ( Mourão and
Robertson 2019 ; Nyhan 2019 ; Tucker et al. 2018 ). This chapter argues that focusing
on the EU could provide a novel and important perspective to the global problem of
regulation. The EU has been the most active democratic jurisdiction when it comes
to state involvement in internet regulation in general. The implementation of the
General Data Protection Regulation in May 2018 has been a crucial step in legislating
data protection not only in the EU but also worldwide. In this sense, if democratic states’ attempts to regulate disinformation were to succeed anywhere, the EU would be a most likely case. Second, the EU has aimed to be a soft-power exporter of
regulation to other parts of the world. Consequently, any developments in the field of regulating disinformation in the EU are likely to influence other countries to varying extents. In this sense, it is important to study the EU as a potential trendsetter in global internet regulation. Third, the multilevel governance structure of the EU poses specific challenges and provides a great test case to trace the importance of state power and ambitions in attempting to address internet regulation – a field currently
dominated by private players such as Facebook. EU states with big markets such as
France and Germany have been much more ambitious in trying to pressure Facebook
and Twitter to accept regulation than smaller EU member states. In principle, all
sovereign countries are equal when it comes to regulating internet corporations, yet
in practice some are more equal than others. We are likely to observe similar power
dynamics between states and corporations also in non- EU countries.
To be sure, regulating disinformation – defined as intentionally deceptive falsehood ( Tandoc, Lim and Ling 2020 , 3) – has been only one field of regulation
in a wider EU effort to deal with hate speech, terrorist speech, as well as other
platform- related issues such as competition and taxes in the digital environment.
Unlike hate speech or terrorist speech, however, disinformation is not illegal in
most EU countries and this reality has posed serious problems when it comes to
legitimating stricter forms of regulation of internet companies, including legis-
lation. If the EU and its member states are to put forward a model of internet
regulation based on democracy and human rights, the question of legitimacy is
fundamental. Beyond the issue of political power and clout – that is, whether
states can actually make giant US companies comply with their regulations – this
need for democratic legitimacy creates an additional limitation to what EU states
and institutions can do and leads them to adopt strategies of decentred govern-
ance in which public and private actors cooperate – and conflict – in defining the
problem of disinformation and addressing it. The chapter’s analysis of the regula-
tion of disinformation uses the lens of decentred regulation (see, e.g., Black 2001 )
that places important decision- making power in the hands of private actors and
appears far from a unilateral return of the state as advocated by President Macron.
Focusing on the dynamics among public and private actors and, in particular,
the agency of private actors in decentred regulation within the context of internet governance, this chapter introduces two concepts: preemptive cooperation and conflictual cooperation. Preemptive cooperation refers to private actors’ readiness to participate in the early stages of drafting state regulation in order to influence it in their preferred direction, usually to weaken it. Cooperation thus acts as a preemptive measure that helps companies avoid unfavourable regulation. Conflictual cooperation, on the other hand, refers to the ways in which private actors enter into conflict with public actors or with each other in the process of cooperation. In short, cooperation does not put an end to regulatory conflicts but allows them to simmer, contained within an established network of relations.
Together, these concepts provide us with a more nuanced means to understand
how decentred regulation may operate within the realm of internet governance.
The chapter proceeds as follows. To begin, it discusses decentred regulation and
digital sovereignty generally and the EU’s approach to digital sovereignty and how
it differs from approaches taken by China or Russia in particular. It then analyses
the importance of power and democratic legitimacy to understand the challenges
the EU has faced in its recent attempts to regulate disinformation. Next, it focuses
on the issue of disinformation and problematises the dangerous trend towards
political and legislative bundling together of different types of harmful content
such as disinformation, hate speech, and defamation. Importantly, it questions
definitions of disinformation that place an excessive focus on foreign actors intent
on disrupting elections while relatively neglecting the role of domestic, especially
far- right players. The third section of the chapter discusses the concrete actions
taken by the EU and its member states in order to regulate disinformation. It high-
lights initiatives by individual member states, most notably France and the UK
(before Brexit), and by EU institutions. We then move on to analyse the complex
instances of preemptive and conflictual cooperation between private global platforms and public actors in the EU. The final section of the chapter offers an overview of some of the blind spots of regulatory efforts so far on the basis of interviews
with experts. We also offer a suggestion for the broader field of internet regulation
based on some of the challenges in regulating disinformation in the EU, namely
a possible way out of the legitimacy dilemmas of both state sovereign regulation
and decentred regulation through an engagement with parliamentary and popular
sovereignty. Parliamentary discussions, as well as public consultations and more
deliberative forms of public debates both within nation states and across the EU,
are among the important ways to provide much needed democratic legitimacy to
the tough decisions involved in internet regulation.
Challenges of power and legitimacy in decentred regulation
versus digital sovereignty
President Macron’s call for more state sovereignty and for democratic states to
move beyond the “Californian model of the internet” is essentially a call to seek
an alternative to the decentred regulation that has been at the centre of research
and governance practice for more than 20 years now (see ten Oever, this volume).
Indeed, when we talk about a “return of the state” in internet regulation, and espe-
cially in the regulation of content, we need to emphasise that this is not a return
in the sense of going back to a situation from the past but rather a return of a
player that has been relatively marginalised (see Cavalli and Scholte, Santaniello,
and ten Oever, all this volume). Since the 1990s, the dominant model of inter-
net regulation in democratic states has been decentred, or networked, regulation,
which involves “a shift in the locus of the activity of ‘regulating’ from the state
to other, multiple, locations, and the adoption on the part of the state of particu-
lar strategies of regulation” ( Black 2001 , 112). Accompanying this recognition of
non- state actors’ role is the understanding that regulatory strategies include non-
state- centred forms of governance like industry self- regulation. Forms of decen-
tred regulation have become increasingly common in a wide range of policy fields, from the production and distribution of agricultural commodities ( McNaughton and Lockie 2017 ) to global finance ( Andenas and H-Y Chiu 2014 ; Scholte 2013 )
and, of course, the governance of new and emerging digital technologies ( Leiser
and Murray 2017 ).
While both the concept of decentred regulation and the related concept of self-
regulation have their origins in theories of autopoietic systems, that is, those sys-
tems capable of reproducing themselves from within themselves, related concepts
such as “networked” or “nodal governance” have been traced rather to the work
of Foucault on power perceived as relational and circulating through networks.
Drawing on a wide range of authors, Farrand and Carrapico (2013 , 359) empha-
sise that in networked regulation “political decision- making is not restricted to
formal governmental institutions, but is the result of the creation, construction,
and establishment of policy networks.” The concepts of decentred and networked
regulation both highlight the multiplicity of actors involved in regulation and the
blurring of the distinction between public and private actors. As such, these con-
cepts are starkly opposed to newly emerging doctrines of digital sovereignty that
have become increasingly relevant in the past few years.
The use of the term “digital sovereignty” in the ProQuest collection of data-
bases has increased from six mentions in the period before 2011 to 239 mentions
in the period 2015 to 2018 ( Couture and Toupin 2019 ). This term, as well as
the related terms “information sovereignty” and “data sovereignty”, have been
used in a broad range of ways that go beyond narrow conceptions of authoritarian
control. Analysts and activists have invoked various and sometimes diametrically
opposed discourses such as “indigenous digital sovereignty” ( Kukutai and Taylor
2016 ) – related to indigenous populations’ control of technologies and digital
infrastructures – and state digital sovereignty – related to the state’s capacity to
control crucial technical infrastructure and the flow of information within and
across its borders ( Kukutai and Taylor 2016 ).
This chapter focuses above all on digital sovereignty, understood as state digital
sovereignty, a concept that often comes with a strong geopolitical flavour. While states have traditionally controlled the flows of goods and people over their borders, the idea that the flows of data and content over the internet, or the internet’s
infrastructure itself, can (or should) be controlled was put forward explicitly as a
state doctrine by China only in 2010 ( Powers and Jablonski 2015 ). In 2015, mean-
while, China and Russia signed a cyber- defence agreement whose purpose was
to limit the use of information technologies designed to “interfere in the internal
affairs of states; undermine sovereignty, political, economic and social stability;
[and] disturb public order” ( Margolin 2016 ). With this move, China and Russia
effectively posed a challenge to the dominant American internationalist approach
to norms of digital governance. It is this particular understanding of digital sov-
ereignty that Macron referred to when talking about the “Chinese model” of the
internet.
Yet, just as sovereignty is the property not only of authoritarian states but of
all states in the international state system, the advocacy and pursuit of digital
sovereignty is not only the purview of authoritarian states. The doctrine of digital
sovereignty started gaining traction in Western democratic countries after the rev-
elations by Edward Snowden that the US National Security Agency, together with
its global partners, had engaged in mass surveillance of both foreign nationals and
US citizens. The leaks showed that even the phone of German Chancellor Angela
Merkel had been hacked ( Bauman et al. 2014 ). In response to these revelations,
countries such as Germany and Brazil started contemplating data localisation ini-
tiatives ( Hill 2014 ). By 2019, discourses on digital sovereignty made their way
into European Union policy with the EU announcing the launch of a new project
called Gaia- X that aims to achieve “cloud independence” and allow local provid-
ers to compete with dominant US cloud providers ( Meyer 2019 ).
Another crucial watershed in Western political opinion, this time with respect
to content regulation, came with Brexit and the election of Donald Trump in the
United States in 2016. Major newspapers quickly explained away these complex
political developments as the result of fake news and foreign disinformation, lead-
ing to increased attention to the topic ( Cadwalladr 2017 ; Viner 2016 ). In the
aftermath, an almost Cold War-level rhetoric of concern about foreign interference flourished. Since then, both the US and the EU have expressed the desire to establish some version of digital sovereignty over flows of information
within and across country borders. While China and Russia have a long history of
censoring and regulating content online, there was little appetite for such initia-
tives in the West. Child pornography and terrorist speech, for example, have been
the object of regulatory battles since the 1990s ( Wagner 2013 ), but monitoring
political speech online was generally considered a no- go zone. Following Brexit
and Trump’s election, the 1990s- era cyber- libertarian belief that the internet rep-
resents the frontier of ultimate freedom from the state ceded ground to a gener-
alised acceptance that greater state regulation of the internet is necessary. All
this shows that digital sovereignty is not necessarily an autocratic concept and
encompasses more than what a narrow invocation of the “Chinese model” of the
internet would suggest.
What Macron’s speech offered as an alternative to the “Chinese model” seems
to be a different, “European”, version of digital sovereignty, applicable in demo-
cratic states, that respects human rights and democratic process. The problem,
however, is that achieving this is easier said than done. To begin, getting the big
US corporations to comply with rules that might be costly to them and technically
challenging to implement requires considerable state power and capacity, includ-
ing technical expertise. Second, as already mentioned, while disinformation may
not be socially desirable, it remains legal, at least in the EU. Furthermore, there
is far from a public consensus on how it should be regulated, with far-right actors calling the EU Commission a “Ministry of Truth” because of its attempts to regulate disinformation ( Mooney 2019 ). Due to these problems of both power and legitimacy, the EU and its member states have found it difficult to regulate the digital sphere. In fact, it is impossible to understand the challenges the EU has faced in regulating disinformation without paying attention to the concepts of power and
legitimacy and the ways they relate to each other.
For the purposes of this chapter, we define power as the ability of the state or any
other agent “to get others to act in ways that they desire even when the subject
does not want to do what the agent wants him to do” ( Christiano 2012 ). When it
comes to regulating disinformation, particular actors, such as states, would have
power if they manage to get other actors, such as private companies, to do what
they want them to do. States’ regulatory power in this context comes to a large
extent from the size of their markets, as larger states can employ the threat of
market access to compel compliance from corporate actors. Following this logic,
countries such as Germany and France, with bigger markets, would be more per-
suasive than smaller countries such as Slovenia and Bulgaria, for example. But
state regulatory capacity is not only about country size. It also has some elements
of expertise. Similar to other highly technical areas of regulation, such as stock
markets, in order to be able to tell internet companies what they should do, states
need to know better how these companies operate, including what algorithms they
use. Yet, internet giants have been opaque about their internal operations, with
their algorithms famously protected by trade secrets. Of course, larger states such
as Germany and France can more easily regulate the internet giants even without
knowing the intricacies of their operations, as was the case with Germany’s 2017
law to compel Facebook to remove hate speech from its platform ( Lomas 2017 ).
Nevertheless, the regulation of disinformation faces problems not only of power
but also of legitimacy, further diminishing the options of what even big democratic
states can do.
A simple definition of legitimacy points to legal validity and conformity with the law. The three key dimensions of legitimacy in democratic states, as outlined in the literature, are 1) democracy, referring to “the structural aspects such as the representation of the population and the separation of powers”; 2) identification, pointing to “the popular acceptance of the project of the political authority that governs”; and 3) performance, defined as “the relation of the political system to the ends or purposes it should serve and the effectiveness of its decision-making
procedures” (Beetham and Lord 1998, as quoted in Voermans, Hartmann and
Kaeding 2014 , 12).
If a powerful authoritarian state such as Russia or China attempts to regulate
the internet, they may have the power to coerce cooperation. However, even
authoritarian states face limitations, as the chapters in this volume by Jia, Luo and
Lv, and Stadnik show. In contrast, if the EU and its member states attempt to regu-
late the internet, power is not enough: Democratic legitimacy is also crucial. This
is where the main difference from the so- called Chinese model becomes clear.
Following the definition of legitimacy presented earlier, in order to be perceived as
legitimate, regulatory arrangements on disinformation in the EU are expected to
ensure democratic participation, gain popular acceptance, and achieve the goals
they set for themselves.
It is because of the demands for legitimacy, coupled with the perception – whether accurate or not – that internet firms have essential, specialised knowledge for the regulation of disinformation, that the EU Commission refrained from attempting state-centred regulation and involved private internet companies in the process of regulation instead. Yet, as the EU’s response shows, the composition of state/non-state actors matters in terms of representativeness. The EU’s efforts focused too closely on private companies and experts and failed to involve ordinary citizens in a meaningful and sustained way both in defining where the problem with disinformation lies and in devising ways to solve it. The multistakeholder
model of decentred governance that the EU Commission fell back on has often been held up as a best practice in internet governance, but numerous authors have noted that it leads to window-dressing and the privileging of certain actors over others ( Buxton 2019 ; Donders, Van den Bulck and Raats 2019; Iusmen and Boswell
2016 ; Schleifer 2019 ). This chapter shows that this has been very much the case
also when it comes to regulating disinformation.
Before moving on to discuss current instances of regulation of disinformation
and the legitimacy problems they pose, a short overview of the current state of
discussions on disinformation is needed.
Defining “disinformation” and justifying its regulation
In his IGF speech, President Macron claimed that “[O]ur governments, our popu-
lations will not tolerate much longer the torrents of hate coming over the internet
from authors protected by anonymity which is now proving problematic” ( 2018 ).
Macron’s examples of problematic content in his IGF speech refer above all to
hate speech and terrorist speech (see also Santaniello, this volume). The word
“disinformation” is not mentioned a single time, while fake and doctored images
are mentioned once. Nevertheless, it is very likely that disinformation was on his
mind: Only five days after this speech, France introduced a new law targeting fake
news ( Fiorentino 2018 ).
Macron’s speech is indicative of the fact that in many contexts the regulation of
disinformation is justified by analogy to the need to regulate other types of speech. For instance, a 2019 consultative paper of the UK government regarding online content regulation ( Department for Digital, Culture, Media & Sport and Home Office 2019 ) identified as “online harms” not only familiar categories such as ter-
rorism and child sexual abuse but also “revenge porn, hate crime, harassment, pro-
motion of self- harm, content uploaded by prisoners, disinformation, trolling, and
the sale of illegal goods” ( Volpicelli 2019 ). Needless to say, there are massive dif-
ferences among these different types of content. This inclusion of different types of
content (as objectionable as they may each be) together as “online harms” brings
to mind the argument of Richard Stallman (2006 ), the famous founder of the Free
Software Foundation, that bundling together trademarks, copyright, and patents
under the label intellectual property is a “seductive mirage” that favours the inter-
ests of big companies. Similarly, we can claim today that speaking of “harmful
content” in general is a “seductive mirage” that could justify state censorship of
problematic but not necessarily illegal content “by analogy” with actually illegal
content, without actually making the problematic content illegal.
Disinformation is a perfect example of such problematic-but-not-illegal content: It has been used for centuries by political actors to shape or promote particular policy options. Disinformation cannot simply be lumped together with “hate speech”. Hate speech is regulated in the EU because of the threats it poses to human dignity as a fundamental right, protected by Article 1 of the EU Charter of Fundamental Rights ( Belavusau 2012 ). But there is no such corresponding justification when it comes to disinformation. The European Commission’s Action Plan against Disinformation ( High Representative of the Union for Foreign Affairs and Security Policy 2018 ) has justified combatting disinformation above all by asserting its incompatibility with
the normal functioning of the democratic process. Furthermore, a key criterion for
identifying disinformation has been the intent of content producers to spread disin-
formation to “intentionally cause public harm or for profit” (High Level Group on
Fake News and Online Disinformation 2018b , 10). But how does one decide what
constitutes public harm and threatens the democratic process in the absence of con-
crete criteria? And who decides what these criteria are? For example, the UK’s Par-
liament report on fake news and disinformation points to the removal by Facebook
of 289 pages and 75 accounts that “posted about topics like anti- NATO sentiment,
protest movements, and anti- corruption” ( Digital Culture, Media and Sport Com-
mittee 2019 , 70). Topics such as anti- NATO sentiment, protest movements, and
anti- corruption are certainly highly political and politicised, yet viewed at this high
level it is not clear to what extent they may be considered disinformation. The pre-
supposition that it is easy to define “public harm” leaves the door open for censorship
and using “fake news” as a label ( Egelhofer and Lecheler 2019 ) to target legitimate
political speech that might actually be expressing dissenting views.
While fake news has been part of public debate for centuries ( Burkhardt 2017 ),
the qualitative difference we have observed in the 2010s has been the ease with
which fake news can spread on online platforms that are designed to maximise
users’ attention in order to extract revenue. The dominant liberal narrative on dis-
information presupposes that foreign actors, such as Russia, spread misleading and
inaccurate information online in order to cause public harm and sow division in
the EU ( High Representative of the Union for Foreign Affairs and Security Policy
2018 ). What this liberal narrative does is to present disinformation, first, as a problem of accuracy above all and, second, as one that is caused by external actors. With
respect to accuracy, the issue of disinformation is not as clear- cut as “real” versus
“fake” news. Recent academic studies of fake news websites in the US context, for
instance, have shown that only a few of the news items published on them can be classified as completely fake, while most involve genre-blending, mixing sensationalism, click-bait, and hyperpartisan political content ( Mourão and Robertson 2019 ). Furthermore, the “accuracy” narrative on disinformation tends to ignore the extent to which the supply of disinformation is driven by economic motives: “fake news” content can be a profitable way for advertising-based social networks to capture users’ attention and thus increase advertising revenues.
With respect to the actors driving the problem, the liberal narrative also tends
to ignore that the rising problem in EU politics is actually bottom- up propaganda
by domestic far- right actors such as Politically Incorrect News in Germany or
VoxNews in Italy that spread highly biased, but not necessarily untrue, political
content ( Rone 2019 ). We should not forget that for the rising far- right movement
in Europe disinformation is actually spread by mainstream media, as evidenced in
the “lying press” chant featured prominently in far- right mobilisations in Dresden,
Germany, and beyond ( Berntzen and Weisskircher 2016 ). Thus, the far right offers
its own “alternative” media online.
Finally, as has already been noted, the nature and extent of the harms caused by
disinformation remain unclear. There is still no conclusive research on the effects of
disinformation on voting patterns, either in the United States or in the EU ( Nyhan
2019 ), bringing into question the rhetoric around the issue. Disinformation may be
a problem, and there is a consensus that there is a problem, but there is neither con-
sensus nor clarity about what exactly the problem is – is it foreign disinformation,
or foreign propaganda, or domestic disinformation or propaganda, full stop? What
effects does it have? Most probably, the problem is multifaceted, with nuanced effects,
which just makes it even more difficult to address in the absence of a solid legal basis.
The implications of all these difficulties around defining disinformation and the resulting potential for undesired censorship are crucial obstacles to securing the legitimacy of democratic government intervention in this area. They are also a key reason that regulation in this field to date has tended to take a light touch, both at the EU level and in most of its member states. In the next section, we provide an overview of existing public efforts to regulate disinformation before discussing the same issue from the perspective of private actors in the section on preemptive and conflictual cooperation.
Power and legitimacy as limiting factors for EU online
disinformation regulation
Different EU states and EU institutions have opted for very different strategies
to deal with disinformation. As a result, the current regime of regulation of disin-
formation has been quite complicated, with no common unifying strategy. While
some strategies have involved greater degrees of intervention by the state, in
none of the cases considered have states simply told companies what to do. And in
all cases, both the capacity of the state to implement its preferred strategy and the
need for democratic legitimation with respect primarily to censorship fears have
limited the actions they were able to undertake.
One of the first proactive attempts to deal with disinformation in the current
context was initiated by the European Council in the aftermath of the Russian
military intervention in Crimea in 2014. Created in March 2015, the East StratCom Task Force focused on proactive communication to support EU delegations in the six countries of the EU’s Eastern neighbourhood – Armenia, Azerbaijan, Belarus, Georgia, Moldova, and Ukraine – as well as in Russia itself. The plan’s goal was to provide alternative sources of information different from Russia’s sources, communicate and promote
“EU Policies and Values”, support independent media, and increase awareness of
“disinformation activities by external actors” ( Jozwiak 2015 ). Among the products of the East StratCom Task Force is the fact-checking website EUVsDisinfo ( https://euvsdisinfo.eu ), which regularly publishes fact checks and flags perceived “disinforma-
tion”. Nevertheless, the EUVsDisinfo project raised substantial controversy when
three Dutch media outlets sued the EU because the fact- checker wrongly accused
them of spreading disinformation ( Nijeboer 2018 ). After receiving the subpoena,
EUVsDisinfo removed the three articles from their database without informing
the relevant media and without providing information about the retraction or
apologising for the mistake ( Nijeboer 2018 ). The website continues to function
as of July 2020, but as a result of this case it now focuses on fact- checking news
produced outside of Europe (BBC Trending 2019 ). This case demonstrates clearly
that fact-checking as a regulatory practice is only as effective as its accusations are accurate and based on clear criteria. In the absence of democratic participation in defining disinformation and of a clear consensus on what disinformation is, attempts to remove content flagged as disinformation risk raising serious fears of censorship, threatening the policy’s legitimacy.
Aware of such democratic legitimacy challenges with respect to regulating dis-
information, the European Commission adopted a more careful approach and attempted to involve different groups in both defining and addressing the problem
of disinformation. Such an approach followed the long- established tradition of
decentred regulation of the internet, in which private actors have a key role. In late
2017, the commission announced the creation of a High- level Expert Group that
gathered 40 representatives of social media platforms and media organisations,
citizens, civil- society organisations, and experts such as journalists and academics
to tackle the issue ( High Level Group on Fake News and Online Disinforma-
tion 2018a ). Furthermore, the commission tasked with drafting a self- regulatory
code of practice a multistakeholder forum on online disinformation, composed
of online platforms, leading social networks, advertisers, and advertising agen-
cies ( Multistakeholder Forum 2018 ). The code of practice on Disinformation was
signed by Facebook, Google, Twitter, Mozilla, and various trade associations, such
as the European Association of Communication Agencies, the Interactive Adver-
tising Bureau Europe, and the World Federation of Advertisers. The signatories
committed to taking actions in the following five areas:
Disrupting advertising revenues of certain accounts and websites that spread
disinformation; Making political advertising and issue-based advertising
more transparent; Addressing the issue of fake accounts and online bots;
Empowering consumers to report disinformation and access different news
sources, while improving the visibility and findability of authoritative content; [and] Empowering the research community to monitor online disinformation through privacy-compliant access to the platforms’ data.
( Lomas 2018 )
In addition to this code of practice, on 5 December 2018 the European Commis-
sion and the High Representative of the Union for Foreign Affairs and Security
Policy presented the EU’s Action Plan Against Disinformation that focused on
improved detection; coordinated response; online platforms and industry; and
raising awareness and empowering citizens in order to build up the EU’s capa-
bilities and strengthen cooperation between member states and the EU ( High
Representative of the Union for Foreign Affairs and Security Policy 2018 ). As an
implementation of the action plan, the European Commission also launched the
European Observatory against Disinformation, bringing together fact- checkers,
media organisations, researchers, social media innovators, and policy makers from
across the EU. Several campaigns on digital literacy were also launched, including the All Digital Week, held the week of 25 March 2019 ( All Digital 2019 ).
All things considered, it is quite clear from these actions that the commission
refrained from strong unilateral regulation and actively tried to include private
companies in defi ning what is to be regulated and the regulation process itself.
This more light- touch approach when it comes to regulating disinformation is in
clear contrast to the multiple fines the European Commission imposed on Google
for breaking competition rules, for example, in a series of antitrust cases ( Scott
2019 ). Instead of applying unilateral pressure in the case of disinformation as well,
the commission acknowledged the legitimacy problems it faces there and reverted
to well- known multistakeholder approaches from the past.
At the member-state level, big states encountered the same problems of legitimacy as the commission and were often accused of censorship by domestic actors,
while smaller states had to contend with serious capacity problems that often
made them opt for less ambitious strategies focused primarily on media literacy and educating citizens. One of the big EU member states that took the
lead in regulating disinformation and faced a huge societal backlash was France.
On 20 November 2018, five days after Macron’s IGF speech, the French Parlia-
ment passed a law against the manipulation of information. The law’s purpose was
to enact stricter rules on the media during electoral campaigns and, more specifically, in the three months preceding any vote ( Fiorentino 2018 ). According to the
law, candidates and political parties would be able to appeal to a judge to help stop
“false information” and require tech platforms to remove the targeted informa-
tion within 48 hours ( Fiorentino 2018 ; Rici 2018 ). Platforms were obliged by the
state to cooperate and promote transparency about how their algorithms func-
tion, promote content from mainstream press agencies, remove fake accounts that
propagate massive misinformation, disclose information about sponsored content,
including identity of individuals or organisations that promoted it, and promote
media literacy initiatives ( Rici 2018 ).
The law provoked a huge backlash in the French Senate and in society at large. Before Parliament accepted the law, the French Senate rejected it twice, pointing to the difficulty of ascertaining the veracity of information within 48 hours and the potential dangers arising from the removal of lawful information ( Boring 2018 ). Only a week after the law was approved, more than 50 senators from the French centre-right Republican Party (LR) and the Centrist Union group appealed to the French Constitutional Court over the law, claiming that it violated the principle of proportionality and conflicted with the existing penal code ( Rici 2018 ). Furthermore, opposition parties strongly opposed the law on the grounds that it was “liberticidal”, according to the far-right politician Marine Le Pen, or that it grossly overlooked systemic problems in the media sphere, according to the far-left politician Jean-Luc Mélenchon (ibid.).
The United Kingdom encountered similar accusations of censorship with regard to its 2019 consultation paper, the Online Harms White Paper. It pro-
posed a new regulatory model including a statutory “duty of care,” a contextual
obligation “to exercise reasonable care and/or skill to avoid the risk of injury to
relevant others” ( Woods 2019 , 7). According to an analysis by the digital- rights
groups Access Now and the European Digital Rights Initiative ( Access Now and
EDRi 2019 ), the duty of care, combined with the prospect of fines for companies,
creates the incentives for them to block “legal but harmful” content – that is,
content that may cause societal harm but might not be against the law. What is
more, to make this possible, companies could opt for content-filtering measures that could result in the monitoring of information shared on online platforms, with the boundary between specific and general monitoring being difficult to establish in practice ( Woods 2019 , 16). Such large-scale monitoring could also illegitimately restrict freedom of expression and lead to online censorship ( Woods 2019 ). As the examples of both France and the UK show, disinformation is notoriously difficult to define, and getting it wrong easily opens the way to accusations of disproportionate actions, censorship, and even abuse of power, thus erod-
ing the legitimacy of any proposed legislation. This is likely not what Macron
meant when discussing offering an alternative to both the “Chinese” and the
“Californian” model.
Apart from these initiatives of France, the UK, and the EU as a whole, few
other countries have undertaken such concerted attempts to convince internet
giants to cooperate on disinformation- related issues. Indeed, it remains uncertain
to what extent they could successfully implement this type of regulatory frame-
work considering the market power of the US- based corporations, a challenge that
Stadnik recognises in her chapter. Most EU member states, in fact, have preferred
more proactive and citizen- oriented measures to counter disinformation. Italy set
up an online portal where citizens could report misinformation to the police, while
Sweden and Spain set up task forces ( Funke 2019 ). Belgium and the Netherlands,
on the other hand, initiated media literacy campaigns very much in line with
one of the recommendations in the European Action Plan against Disinformation
(ibid). Many smaller states lacked the ambition to initiate any proactive measures
against disinformation at all.
All in all, if we could speak of the “return of the state” in regulating power-
ful US companies, it has been the return of the big state. In a move that could
be described as an attempt to increase digital sovereignty, EU institutions and
some big EU member states have tried to regain control over the flow of informa-
tion within and across their borders through legislation or control over private
intermediaries. But even large, high- capacity states such as France and the UK
received a lot of criticism for their efforts and were only partially successful in their
attempts to implement their preferred disinformation- regulation frameworks. On
the other hand, the EU Commission, also not lacking in capacity, chose to remain
cautious in the implementation of its plans and ended up working in close collabo-
ration with other actors in a multistakeholder approach very much in line with the
decentred way it had previously worked in the area of internet governance.
The chapter describes in more detail the patterns of preemptive and conflictual
cooperation between public and private actors in internet regulation in the next
section.
Preemptive and conflictual cooperation
While this chapter has discussed regulation mainly from the perspective of public
actors so far, private tech companies’ cooperation in combatting disinformation
should not be taken as given and non-problematic. Some analysts have suggested that Facebook’s readiness to cooperate in regulating disinformation and beyond stems more from public relations considerations than from a deep-seated change of attitude ( Scott 2018 ). While their lobbying strategy until now has been to avoid regulation at all costs, tech firms that have reached monopoly status have realised that their best strategy in the current public climate is preemptive cooperation – participating in the lawmaking process in order to end up with laws that are as weak and flexible as possible.
There are multiple examples of platforms’ strategies of preemptive cooperation
in the EU context. To begin with, participants in the High- Level Group tasked
with helping to prevent the spread of disinformation have complained that rep-
resentatives of Facebook and Google undermined the work of the group and
opposed proposals that would have forced them to be more transparent about
their business models ( Schmidt and Nivet 2018 ). Monique Goyens, the director
of the European Consumer Association, suggested that experts were blackmailed into leaving aside the important question of whether tech platforms’ business models
(based on the use of algorithms to ensure that certain types of content go viral)
were crucial in helping disinformation to spread (ibid). The threat was that if
discussions about competition policy tools were pushed too far, Facebook could
stop its funding for journalistic and academic projects in which some of the High-
Level Group experts participated. In other words, Facebook tried to use academic
and fact- check funding as a bargaining chip in order to avoid more fundamental
questioning of its operations.
Such attempts to move discussions on disinformation away from the topic of platforms’ business models are extremely problematic since these business models
have been among the main causes for the rise of disinformation ( Access Now and
EDRi 2019 ). The ascent of “attention merchants” ( Wu 2016 ) such as Facebook,
Twitter, and Google and their advertising empires has gone hand in hand with the
demise of traditional media that have lost advertising revenue and have decreased
their investment in investigative journalism, special correspondents, and local
news, thus lowering the quality of their content in what has been described as
the “de- democratising of news” ( Fenton 2012 ). Not surprisingly, this lowering of
journalistic quality has led to a further erosion of public trust in media. What’s
more, tech platforms and search engines have weakened the direct relationship
between readers and publishers: Over half of the combined sample of the Reuters
Digital News Report (55 percent) “prefer to access news through search engines,
social media, or news aggregators, interfaces where large tech companies typically
deploy algorithms rather than editors to select and rank stories” ( Newman 2019 ,
13). The very business model of platforms emphasises the distribution of viral
content that drives conversation, regardless of whether that content is accurate
or not or hate speech or not ( Bogost 2019 ; Wu 2016 ). Facebook’s bottom line
is not concerned with whether information is true or false, but with the distinction between content that captures users’ attention and content that does not. What social media platforms achieved by being actively involved in the process of defining disinformation in the EU was to devise solutions to the problem that leave their business models as untouched as possible, even though these business models are an important reason that the disinformation problem, as it is perceived in the EU, exists in the first place.
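To make the point concrete, the following is a minimal, hypothetical Python sketch of an engagement-driven feed ranker. It is not any platform’s actual ranking code, which is proprietary; it only illustrates the logic described above: accuracy never appears as an input, so sensationalist or false items that generate clicks, comments, and shares rise to the top.

```python
# Minimal, hypothetical sketch of engagement-driven feed ranking.
# It is NOT any platform's real algorithm; it only illustrates that a ranker
# optimised for attention has no notion of whether a post is true or false.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    comments: int
    shares: int
    is_accurate: bool  # known to a fact-checker, never consulted by the ranker

def engagement_score(post: Post) -> float:
    """Score a post purely by attention signals; accuracy is not an input."""
    return post.clicks + 2 * post.comments + 3 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by engagement, so viral falsehoods can outrank sober reporting."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    Post("Measured report on the local budget", clicks=120, comments=4, shares=2, is_accurate=True),
    Post("Outrage-bait conspiracy claim", clicks=900, comments=150, shares=300, is_accurate=False),
]
for post in rank_feed(feed):
    print(post.text, engagement_score(post))
```

Whatever weights one chooses in such a sketch, the objective remains attention rather than accuracy, so the ranker rewards whatever content drives conversation.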
Apart from ignoring the elephant in the room, the solutions internet giants
offered in terms of content moderation online were quite problematic in them-
selves – they took place with little oversight or transparency and on the basis
of either automated content detection or outsourced fact- checking work ( Fisher
2018 ; Tusikov 2017 ). Both Facebook and Twitter invested in cost-efficient tech
solutions to deal with disinformation. Nevertheless, these efforts revealed the
inadequacy of algorithmic approaches to complex societal and media problems.
Facebook’s January 2018 tweak of its algorithm, which promoted more personal content at the expense of media content, threatened the existence of independent alternative media that were highly dependent on the platform for distribution ( Rone 2018 ).
In an extreme case, Twitter identified as Russian bots and suspended the accounts of multiple Bulgarian users simply because they were using the Cyrillic alphabet. The fact that more countries than Russia use Cyrillic (not to mention that not all Russian accounts are bots) was overlooked both by the designers of the algorithm and by the algorithm itself, a blunt tool that silenced multiple users just because of the alphabet they happened to use ( Savov 2018 ).
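The bluntness of such a heuristic is easy to illustrate. The snippet below is a minimal, hypothetical Python sketch, not Twitter’s actual detection system (whose internals are not public), of a rule that flags accounts simply because their posts are written in Cyrillic; it inevitably sweeps up Bulgarian, Serbian, or Ukrainian users alongside Russian-operated bots.

```python
# Minimal, hypothetical sketch of a script-based "bot" heuristic.
# It is NOT Twitter's real system; it only illustrates why flagging
# accounts by alphabet alone produces false positives.

def contains_cyrillic(text: str) -> bool:
    """Return True if any character falls in the basic Cyrillic Unicode block."""
    return any('\u0400' <= ch <= '\u04FF' for ch in text)

def naive_bot_flag(posts: list[str]) -> bool:
    """Flag an account if most of its posts contain Cyrillic script."""
    cyrillic_posts = sum(contains_cyrillic(p) for p in posts)
    return cyrillic_posts / max(len(posts), 1) > 0.5

# A Bulgarian user writing in Bulgarian is flagged exactly like a
# Russian-operated bot would be, because the heuristic sees only the script.
bulgarian_user = ["Добър ден! Днес времето в София е чудесно."]
print(naive_bot_flag(bulgarian_user))  # True -> false positive
```

Whatever threshold such a rule uses, it can see only the script, not the language, the author, or the intent, which is precisely the contextual judgment that content moderation requires.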
By engaging in preemptive cooperation, platforms avoided questioning of their
business model and got the freedom to experiment with solutions that did not
cost them too much. But that also meant that the solutions proposed were far
from the best for the public, both in terms of legitimacy and in terms of efficiency.
For instance, the blunt algorithmic methods to detect disinformation preferred
by platforms were not only a suboptimal way to identify cases of disinformation
online, producing many false positives, but also led to the removal of content without
judicial oversight. Ultimately, the code of practice ceded too much power to big
tech platforms, with insufficient public oversight or accountability mechanisms
( Farrand and Carrapico 2013 ; Gillespie 2018 ; Gorwa 2019 ; Tusikov 2017 ).
But platforms engaged not only in preemptive cooperation. Sometimes, they flexed their power and entered into open conflict with regulators, subverting proposed regulations by turning them against the regulators themselves. This is what we call in this chapter conflictual cooperation. For example, in April 2019, Twitter
blocked a social media campaign by the French government encouraging people to
vote. The reason was that Twitter was required by the new French law to provide
information on who had sponsored the ad and with what amount of money, but
it had not yet updated its services to do this. Thus, the company preferred not to
invest the resources to change its policies at that time and blocked the campaign
outright ( Tidman 2019 ). In the wake of the 2019 European elections, it turned out that European parties could not run EU-wide communication campaigns on Facebook due to a code of practice rule that advertisers must be registered in the country in which they advertise ( Alemanno 2019 ). These two cases are perfect examples of conflictual cooperation that show clearly that in situations of decentred regulation, conflict between actors with diverging interests is subdued but rarely completely ruled out.
To be sure, frictions arise not only in the relations between tech platforms
and institutions but also in the relations between civil society and institutions.
While the EU has been more than happy to support fact-checkers, many fact-checkers have been wary of co-optation and of being used by EU institutions for political purposes ( Funke 2019 ). Thus, cooperation between nongovernmental organisations (NGOs) and public institutions has also occasionally assumed the character of conflictual cooperation. When it comes to
relations between NGOs and tech platforms, cooperation between them has
been encouraged by public regulators and has been welcomed by platforms,
which are happy to outsource fact- checking whenever possible. Tech platforms
have engaged in preemptive cooperation with civil society and academics by
funding projects that do not threaten the essence of their business model. For
their part, civil society and academics have had friction with tech platforms
mainly with regard to the latter’s famous secrecy regarding crucial aspects of
their operations. For example, NGOs and scientists have had serious problems
in trying to receive data for research from platforms despite attempts to improve
coordination ( Gibney 2019 ).
Both the preemptive and the conflictual cooperation between private and pub-
lic actors show that rather than simply implementing governments’ agendas and
rules, private actors, most notably tech platforms, have engaged in setting the
terms of debate and the rules themselves. EU institutions attempted to regain
digital sovereignty with regard to online disinformation coming from Russia by
counting on US private platforms such as Facebook, Google, and Twitter and non-
elected NGOs to regulate this content. Thus, they ended up caught between a
rock and a hard place.
Discussion and conclusion
This chapter has shown that despite the rhetoric of French President Macron in
his 2018 IGF speech and Mark Zuckerberg’s professed enthusiasm for regulation,
in the field of disinformation there has been no shift to strong state legislation and
control of private actors by public institutions. To begin with, due to their lack of
regulatory capacity and limited ability to compel corporate compliance, smaller
EU member states have engaged relatively little in attempting to regulate internet
giants. Big EU member states such as France, on the other hand, have indeed tried
to introduce strict laws to combat fake news, but these attempts were met with
general criticism and accusations of censorship and lack of due process. Finally,
aware of the legitimacy challenges ahead, the European Commission did not
emulate the French state- led approach focused on legislation but opted instead
for decentred regulation, in which private actors partnered with public institu-
tions, often on their own terms and with varying degrees of cooperation. Within
this practice of decentred regulation, corporations such as Facebook and Google
engaged in complex strategies of preemptive and conflictual cooperation, both of
which were suboptimal in terms of realising effective regulation of disinformation
in the public interest.
There are two important questions that follow from these developments in the
regulation of online disinformation in the EU. The first, narrower question is how we can achieve better regulation of online disinformation in the EU. Second, and
related to this, is the broader question of what insights on global internet regula-
tion we can get from the particular case of regulation of disinformation in the EU.
Both of these questions touch upon the issues of power and legitimacy that we
discussed in this chapter.
Starting with the first question, it is clear that current EU policies have given US tech giants too much weight in defining both what the problems with disinformation are and what solutions to propose, while other actors such as
media regulators have remained largely neglected. Media regulation expert Iva
Nenadic has emphasised that in order to regulate disinformation more effectively, we need more oversight of tech platforms and a better understanding of their practices of content moderation, both those undertaken by algorithms and those outsourced to workers in low-labour-cost countries.1 In addition, Nenadic has emphasised the need to give a greater role to the media authorities that already
exist in EU member states and enhance their capacities and cooperation with
each other across countries, since disinformation is not a single- country phe-
nomenon but crosses borders easily. To be sure, small EU member states cannot
miraculously increase their power vis- à- vis internet giants, but a better under-
standing of how these companies operate combined with better coordination
among EU member states would allow states to address the problem much more
comprehensively and adequately. While one small state cannot make Facebook
change its policy, a commonly negotiated strategy backed by all EU member
states has much greater chances to succeed and thus change Facebook’s policies
also in smaller states.
Another important step for achieving more effective regulation of disinforma-
tion is related to expanding the scope of current measures. Most public attention
so far, and this chapter is not an exception, has focused on political disinformation,
especially in the run- up to elections. But disinformation is a much more com-
plex phenomenon that goes beyond elections. Jules Darmanin, the coordinator
of the FactCheckEU initiative, has emphasised the need to focus on more types
of disinformation, especially content related to climate change denial or public
health, such as anti-vax conspiracies.2 The boom of disinformation in relation to the Covid-19 pandemic is another case in point. What’s more, more research and
investigative reporting is needed on the funding schemes of “alternative” media
online.
Third, regulating disinformation should focus not only on the symptoms but
also on the root causes of the current media malaise. Trying to regulate disinfor-
mation without questioning the business models of tech giants and their monopoly
power is doomed to fail. One might go even further than the media sphere and
argue that the spread of news classified by the EU as disinformation cannot be
understood without paying attention to the radical right movement that has risen
to prominence in the aftermath of the 2008 Economic Crisis ( Berntzen and Weis-
skircher 2016 ; Gattinara and Pirro 2019 ; Rone 2019 ). Removing content and
teaching media literacy can hardly change the political opinions of an already
highly politicised segment of the population.
Fourth, the current configuration of decentred regulation as observed in the actions of the EU Commission reveals state/non-state dynamics in terms of preemptive and conflictual cooperation strategies. The EU’s anti-disinformation
campaign, however, might curb the spread of disinformation but at too high a cost.
The ever- present danger of state censorship is currently made even stronger by
giving censorship power also to big tech platforms with dubious methodology for
identifying problematic content and no democratic mandate. This is problematic
in itself but it is also troubling because the attempts of the EU and its member
states to regulate disinformation have been instrumentally used as a justifi cation
for harsh laws against “fake news” in authoritarian countries such as Russia and
Singapore ( Funke 2019 ). The EU has traditionally prided itself with being a soft
power that exports high democratic standards across the world. In the case of
regulating disinformation, unfortunately the EU example has been far from “best
practice”.
One possible solution to these issues involves confronting the thorny question of what counts as disinformation in the first place. Disinformation, in a sense, is in the eye of the beholder, which means that any state definition will require some degree of democratic legitimisation. If the EU and its member states want to get out of the current power and legitimacy impasses in addressing disinformation, and avoid both the Californian and the Chinese models, they could involve the public, the European citizens themselves, in defining the problem and suggesting how to solve it in ways that go beyond current Band-Aid approaches. Some steps have already been taken in this direction, but they can be taken much further. The UK's practice of parliamentary hearings on disinformation, for example, can be complemented by a more active use of citizen dialogues and citizen consultations, both instruments already used at the European level but often with little effect on actual policy. Radical proposals might include breaking up tech giants or investing more in ethical innovation to design platforms that are not based on exploiting users' attention in order to extract their data. Radical proposals might also have nothing to do with tech platforms at all but focus instead on supporting local journalism or more constructive journalism (Constructive Journalism Network 2019).
No one knows which proposals might emerge and win approval, since there have been few inclusive public debates on the issue so far, whether in individual EU member states or in the EU as a whole. The UK's white paper on online harms, for example, was open to broad public consultation. However, public participation could be strengthened through involvement in surveys and focus groups, as well as through more innovative forms of citizen participation and deliberation, including public consultations, citizen assemblies, and publicly organised debates publicised in national mainstream media. More hearings and debates on the issue in the European Parliament, but also in national parliaments in each EU country, are to be encouraged, along with much more inter-parliamentary cooperation to ensure that there is, if not a common, then at least a coordinated approach to disinformation in the EU.
In fact, it is precisely this procedural point that goes beyond the narrow question of regulating online disinformation and offers a potential new approach to the field of internet regulation in general. Legitimacy is a central issue to be considered in any attempt to put into practice Macron's call to regulate the internet in a way that goes beyond both the Chinese and the Californian models. If democratic states want to assert their democratic digital sovereignty, a good way to legitimise these attempts would be to encourage much more parliamentary and citizen participation in discussions on what we want to regulate, how, and why. Legitimacy in democratic states, as discussed in this chapter, is based on democratic participation, popular acceptance of a policy, and efficiency. It is true that proposed solutions to the disinformation problem can be democratically negotiated and still inefficient. Yet a democratically negotiated regulation can also be much more efficient, as citizens will have ownership of the proposed solutions and will not feel arbitrarily censored. Current approaches to disinformation, on the contrary, are neither legitimate nor particularly efficient. We can no longer ignore the striking absence of the "people" and citizens from discussions of internet regulation, especially considering the increasing demand for popular sovereignty in fields as diverse as trade policy or budget making (Brack, Coman and Crespy 2019). Following this trend, popular and parliamentary sovereignty over digital infrastructure, data, and content could offer the basis for a truly progressive model of digital sovereignty that escapes the pitfalls of the archetypal "Chinese internet" but also the complex and often private-interest-driven reality of the "Californian model" of decentred regulation.
At the time of writing the conclusion to this chapter, the coronavirus pandemic is at its peak. EU member states such as Hungary have introduced straightforwardly authoritarian measures to deal with the pandemic, including rule by decree, suspension of Parliament and, especially relevant for this chapter, jail terms of up to five years for "intentionally spreading misinformation that hinders the government response to the pandemic" (Walker and Rankin 2020). It remains to be seen how long-lasting the changes brought about by the pandemic will be. One thing is certain: considering that both states and internet giants have become more powerful in this state of emergency, citizen participation and the protection of the democratic process become even more important in order to safeguard both civil liberties and the quality of public debate.
Notes
1 Interview with Iva Nenadic for the current chapter, June 2019.
2 Interview with Jules Darmanin for the current chapter, June 2019.
References
Access Now and EDRi. 2019. Content Regulation – What's the (Online) Harm? https://edri.org/content-regulation-whats-the-online-harm/. Accessed 5 August 2020.
Alemanno, Alberto. 2019. "Facebook Versus the EU." Politico. 24 May. www.politico.eu/article/facebook-european-union-disinformation-elections/. Accessed 5 August 2020.
All Digital. 2019. "Join the Largest Digital Empowerment Campaign in Europe – All Digital Week 2019." All Digital. https://all-digital.org/join-all-digital-week-2019/. Accessed 5 August 2020.
Andenas, Mads, and Iris H.-Y. Chiu. 2014. The Foundations and Future of Financial Regulation. London: Routledge.
Apuzzo, Matt, and Adam Satariano. 2019. "Russia and Far Right Spreading Disinformation Ahead of EU Elections, Investigators Say." The Independent. 12 May. www.independent.co.uk/news/world/europe/eu-elections-latest-russia-far-right-interference-fake-news-meddling-a8910311.html. Accessed 5 August 2020.
Bauman, Zygmunt, Didier Bigo, Paulo Esteves, Elspeth Guild, Vivienne Jabri, David Lyon, and R.B.J. Walker. 2014. "After Snowden: Rethinking the Impact of Surveillance." International Political Sociology 8 (2): 121–144. https://doi.org/10.1111/ips.12048.
BBC Trending. 2019. "Is Russia Trying to Sway the European Elections?" BBC News World Service. 18 May. www.bbc.co.uk/programmes/w3csyvms. Accessed 5 August 2020.
Beetham, David, and Christopher Lord (eds.). 1998. Legitimacy and the European Union. Harlow: Longman.
Belavusau, Uladzislau. 2012. "Fighting Hate Speech Through EU Law." Amsterdam Law Forum 4 (1): 20–35.
Berntzen, Lars Erik, and Manès Weisskircher. 2016. "Anti-Islamic PEGIDA Beyond Germany: Explaining Differences in Mobilisation." Journal of Intercultural Studies 37 (6): 556–573. https://doi.org/10.1080/07256868.2016.1235021.
Black, Julia. 2001. "Decentring Regulation: Understanding the Role of Regulation and Self-Regulation in a 'Post-Regulatory World'." Current Legal Problems 54 (1): 103–146.
Bogost, Ian. 2019. "Facebook's Dystopian Definition of 'Fake'." The Atlantic. 28 May. www.theatlantic.com/technology/archive/2019/05/why-pelosi-video-isnt-fake-facebook/590335/. Accessed 5 August 2020.
Boring, Nicholas. 2018. "France: Senate Rejects 'Fake News Bills'." In Global Legal Monitor. United States: Library of Congress. 24 September. www.loc.gov/law/foreign-news/article/france-senate-rejects-fake-news-ban-bills/. Accessed 5 August 2020.
Brack, Nathalie, Ramona Coman, and Amandine Crespy. 2019. "Unpacking Old and New Conflicts of Sovereignty in the European Polity." Journal of European Integration 41 (7): 817–832. https://doi.org/10.1080/07036337.2019.1665657.
Burkhardt, Joanna. 2017. "Combating Fake News in the Digital Age." Library Technology Reports 53 (8).
Buxton, Nick. 2019. Multistakeholderism: A Critical Look. Workshop Report: Corporate Power Project. Transnational Institute. March. www.tni.org/files/publication-downloads/multistakeholderism-workshop-report-tni.pdf. Accessed 5 August 2020.
Cadwalladr, Carole. 2017. "The Great British Brexit Robbery: How Our Democracy Got Hijacked." The Guardian. 7 May. www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy. Accessed 5 August 2020.
Christiano, Tom. 2012. "Authority." Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/authority/. Accessed 5 August 2020.
Constructive Journalism Network. 2019. Constructive Journalism Network. http://constructivejournalism.network/. Accessed 5 August 2020.
Couture, Stephane, and Sophie Toupin. 2019. "What Does the Notion of 'Sovereignty' Mean When Referring to the Digital?" New Media & Society 21 (10): 2305–2322. https://doi.org/10.1177%2F1461444819865984.
Department for Digital, Culture, Media & Sport and Home Office. 2019. Online Harms White Paper. London. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf. Accessed 5 August 2020.
Digital, Culture, Media and Sport Committee. 2019. Disinformation and 'Fake News': Final Report. Eighth Report of Session 2017–19. United Kingdom: House of Commons. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf. Accessed 5 August 2020.
Donders, Karen, Hilde Van den Bulck, and Tim Raats. 2019. "The Politics of Pleasing: A Critical Analysis of Multistakeholderism in Public Service Media Policies in Flanders." Media, Culture & Society 41 (3): 347–366. https://doi.org/10.1177%2F0163443718782004.
Egelhofer, Jana Laura, and Sophie Lecheler. 2019. "Fake News as a Two-Dimensional Phenomenon: A Framework and Research Agenda." Annals of the International Communication Association 43 (2): 97–116. https://doi.org/10.1080/23808985.2019.1602782.
Farrand, Benjamin, and Helena Carrapico. 2013. "Networked Governance and the Regulation of Expression on the Internet: The Blurring of the Role of Public and Private Actors as Content Regulators." Journal of Information Technology & Politics 10 (4): 357–368. https://doi.org/10.1080/19331681.2013.843920.
Fenton, Natalie. 2012. "De-Democratizing the News? New Media and the Structural Practices of Journalism." In Handbook of Global Online Journalism, edited by Eugenia Siapera and Andreas Veglis, 119–134. Chichester: Wiley & Blackwell.
Fiorentino, Michael-Ross. 2018. "France Passes Controversial 'Fake News' Law." Euronews. 22 November. www.euronews.com/2018/11/22/france-passes-controversial-fake-news-law. Accessed 5 August 2020.
Fisher, Max. 2018. "Inside Facebook's Secret Rulebook for Global Political Speech." The New York Times. 27 December. www.nytimes.com/2018/12/27/world/facebook-moderators.html. Accessed 5 August 2020.
Funke, Daniel. 2019. A Guide to Anti-Misinformation Actions Around the World. Poynter. www.poynter.org/ifcn/anti-misinformation-actions/#germany. Accessed 5 August 2020.
Gattinara, Pietro Castelli, and Andrea L.P. Pirro. 2019. "The Far Right as Social Movement." European Societies 21 (4): 447–462. https://doi.org/10.1080/14616696.2018.1494301.
Gibney, Elizabeth. 2019. "Privacy Hurdles Thwart Facebook Democracy Research." Nature. 3 October. www.nature.com/articles/d41586-019-02966-x. Accessed 5 August 2020.
Gillespie, Tarleton. 2018. Custodians of the Internet: Platforms, Content Moderation and the Hidden Decisions That Shape Social Media. New Haven, CT: Yale University Press.
Gorwa, Robert. 2019. "What Is Platform Governance?" Information, Communication & Society 22 (6): 854–871. https://doi.org/10.1080/1369118X.2019.1573914.
High Level Group on Fake News and Online Disinformation. 2018a. High-Level Group on Fake News and Online Disinformation: Event Report. 6 February. https://ec.europa.eu/digital-single-market/en/news/high-level-group-fake-news-and-online-disinformation. Accessed 5 August 2020.
High Level Group on Fake News and Online Disinformation. 2018b. A Multi-Dimensional Approach to Disinformation. Brussels: European Commission. http://ec.europa.eu/newsroom/dae/document.cfm?doc_id=50271. Accessed 5 August 2020.
High Representative of the Union for Foreign Affairs and Security Policy. 2018. Action Plan Against Disinformation. Brussels: European Commission. https://eeas.europa.eu/sites/eeas/files/action_plan_against_disinformation.pdf. Accessed 5 August 2020.
Hill, Jonah. 2014. "The Growth of Data Localization Post-Snowden: Analysis and Recommendations for U.S. Policymakers and Business Leaders." The Hague Institute for Global Justice, Conference on the Future of Cyber Governance. https://doi.org/10.2139/ssrn.2430275.
Iusmen, Ingi, and John Boswell. 2017. "The Dilemmas of Pursuing 'Throughput Legitimacy' Through Participatory Mechanisms." West European Politics 40 (2): 459–478. https://doi.org/10.1080/01402382.2016.1206380.
Jozwiak, Rikard. 2015. "EU to Counter Russian Propaganda by Promoting 'European Values'." The Guardian. 25 June. www.theguardian.com/world/2015/jun/25/eu-russia-propaganda-ukraine. Accessed 5 August 2020.
Kayali, Laura. 2019. "Inside Facebook's Fight against European Regulation." Politico. 23 January. www.politico.eu/article/inside-story-facebook-fight-against-european-regulation/. Accessed 5 August 2020.
Kukutai, Tahu, and John Taylor. 2016. Indigenous Data Sovereignty: Toward an Agenda. Canberra: ANU Press.
Leiser, Mark, and Andrew Murray. 2017. "The Role of Non-State Actors and Institutions in the Governance of New and Emerging Digital Technologies." In The Oxford Handbook of Law, Regulation and Technology, edited by Roger Brownsword, Eloise Scotford, and Karen Yeung. Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199680832.013.28.
Lomas, Natasha. 2017. "Germany's Social Media Hate Speech Law Is Now in Effect." Techcrunch. https://techcrunch.com/2017/10/02/germanys-social-media-hate-speech-law-is-now-in-effect/. Accessed 5 August 2020.
Lomas, Natasha. 2018. "Tech and Ad Giants Sign Up to Europe's First Weak Bite at 'Fake News'." Techcrunch. https://techcrunch.com/2018/09/26/tech-and-ad-giants-sign-up-to-europes-first-weak-bite-at-fake-news/. Accessed 5 August 2020.
Macron, Emmanuel. 2018. IGF 2018 Speech by French President Emmanuel Macron. www.intgovforum.org/multilingual/content/igf-2018-speech-by-french-president-emmanuel-macron. Accessed 5 August 2020.
Margolin, Jack. 2016. "Russia, China and the Push for Digital Sovereignty." The Global Observatory. https://theglobalobservatory.org/2016/12/russia-china-digital-sovereignty-shanghai-cooperation-organization/. Accessed 5 August 2020.
McNaughton, Anne, and Stewart Lockie. 2017. "Private Actors in Multi-Level Governance: GLOBALG.A.P. Standard-Setting for Agricultural and Food Products." In Multi-level Governance: Conceptual Challenges and Case Studies from Australia, edited by Katherine A. Daniell and Adrian Kay, 385–402. Canberra: ANU Press.
Meyer, David. 2019. "Europe Is Starting to Declare Its Cloud Independence." Fortune. 30 October. https://fortune.com/2019/10/30/europe-cloud-independence-gaia-x-germany-france/. Accessed 5 August 2020.
Mooney, Brian. 2019. "Coming Soon: The EU's Ministry of Truth?" Campaign for an Independent Britain. 8 July. https://campaignforanindependentbritain.org.uk/coming-soon-the-eus-ministry-of-truth/. Accessed 5 August 2020.
Mourão, Rachel R., and Craig T. Robertson. 2019. "Fake News as Discursive Integration: An Analysis of Sites That Publish False, Misleading, Hyperpartisan and Sensational Information." Journalism Studies 20 (14): 2077–2095. https://doi.org/10.1080/1461670X.2019.1566871.
Multistakeholder Forum. 2018. Meeting of the Multistakeholder Forum on Disinformation: Event Report. European Commission. https://ec.europa.eu/digital-single-market/en/news/meeting-multistakeholder-forum-disinformation. Accessed 5 August 2020.
Newman, Nick. 2019. "Executive Summary and Key Findings of the 2019 Report." Reuters Digital News Report. www.digitalnewsreport.org/survey/2019/overview-key-findings-2019/. Accessed 5 August 2020.
Nijeboer, Arjin. 2018. "Why the EU Must Close EU Vs Disinfo." EU Observer. 28 March. https://euobserver.com/opinion/141458. Accessed 5 August 2020.
Nyhan, Brendan. 2019. "Why Fears of Fake News Are Overhyped." Medium. 4 February. https://medium.com/s/reasonable-doubt/why-fears-of-fake-news-are-overhyped-2ed9ca0a52c9. Accessed 5 August 2020.
Powers, Shawn, and Michael Jablonski. 2015. The Real Cyber War: The Political Economy of Internet Freedom. Chicago: University of Illinois Press.
Rici, Alexander Damiano. 2018. "French Opposition Parties Are Taking Macron's Anti-Misinformation Law to Court." Poynter. 4 December. www.poynter.org/fact-checking/2018/french-opposition-parties-are-taking-macrons-anti-misinformation-law-to-court/. Accessed 5 August 2020.
Rone, Julia. 2018. "Collateral Damage: How Algorithms to Counter Fake News Threaten Citizen Media in Bulgaria." LSE Media Project. 18 June. https://blogs.lse.ac.uk/mediapolicyproject/2018/06/18/collateral-damage-how-algorithms-to-counter-fake-news-threaten-citizen-media-in-bulgaria/. Accessed 5 August 2020.
Rone, Julia. 2019. "Why Talking About 'Disinformation' Misses the Point When Considering Radical Right 'Alternative' Media." LSE Media Project. 3 January. https://blogs.lse.ac.uk/mediapolicyproject/2019/01/03/why-talking-about-disinformation-misses-the-point-when-considering-radical-right-alternative-media/. Accessed 5 August 2020.
Savov, Vlad. 2018. "Twitter Is Treating Bulgarians Tweeting in Cyrillic Like Russian Bots." The Verge. 22 May. www.theverge.com/2018/5/22/17380630/twitter-moderation-cyrillic-russian-bots. Accessed 5 August 2020.
Schleifer, Philip. 2019. "Varieties of Multi-Stakeholder Governance: Selecting Legitimation Strategies in Transnational Sustainability Politics." Globalizations 16 (1): 50–66. https://doi.org/10.1080/14747731.2018.1518863.
Schmidt, Nico, and Daphné Dupont-Nivet. 2018. "Facebook and Google Pressured EU Experts to Soften Fake News Regulations, Say Insiders." Open Democracy. 21 May. www.opendemocracy.net/en/facebook-and-google-pressured-eu-experts-soften-fake-news-regulations-say-insiders/. Accessed 5 August 2020.
Scholte, Jan Aart. 2013. "Civil Society and Financial Markets: What Is Not Happening and Why." Journal of Civil Society 9 (2): 129–147. https://doi.org/10.1080/17448689.2013.788925.
Scott, Mark. 2018. "How Big Tech Learned to Love Regulation." Politico. 11 November. www.politico.eu/article/google-facebook-amazon-regulation-europe-washington-brussels-privacy-competition-tax-vestager/. Accessed 5 August 2020.
Scott, Mark. 2019. "Europe Fines Google €1.49B in Third Antitrust Case." Politico. 20 March. www.politico.eu/article/europe-google-margrethe-vestager-adsense-antitrust-competition-fine/. Accessed 5 August 2020.
Stallman, Richard M. 2006. "Did You Say 'Intellectual Property'? It's a Seductive Mirage." Policy Futures in Education 4 (4): 334–336. https://doi.org/10.2304%2Fpfie.2006.4.4.334.
Tandoc, Edson C., Darren Lim, and Richard Ling. 2020. "Diffusion of Disinformation: How Social Media Users Respond to Fake News and Why." Journalism 21 (3): 381–398. https://doi.org/10.1177%2F1464884919868325.
Tidman, Zoe. 2019. "Twitter Rules Out French Government Advertising over Anti-Fake News Law." The Independent. 3 April. www.independent.co.uk/news/world/europe/twitter-france-fake-news-europe-elections-a8852731.html. Accessed 5 August 2020.
Tucker, Joshua A., Andrew Guess, Pablo Barberá, Cristian Vaccari, Alexandra Siegel, Sergey Sanovich, Denis Stukal, and Brendan Nyhan. 2018. Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature. William + Flora Hewlett Foundation. March. https://hewlett.org/wp-content/uploads/2018/03/Social-Media-Political-Polarization-and-Political-Disinformation-Literature-Review.pdf. Accessed 5 August 2020.
Tusikov, Natasha. 2017. Chokepoints: Global Private Regulation on the Internet. Berkeley, CA: University of California Press.
Viner, Katharine. 2016. "How Technology Disrupted the Truth." The Guardian. 12 July. www.theguardian.com/media/2016/jul/12/how-technology-disrupted-the-truth. Accessed 5 August 2020.
Voermans, Wim, Josephine Hartmann, and Michael Kaeding. 2014. "The Quest for Legitimacy in EU Secondary Legislation." The Theory and Practice of Legislation 2 (1): 5–32. https://doi.org/10.5235/2050-8840.2.1.5.
Volpicelli, Gian. 2019. "All That Is Wrong with UK's Crusade Against Online Harm." Wired. 9 April. https://wired.co.uk/article/online-harms-white-paper-uk-analysis. Accessed 5 August 2020.
Wagner, Ben. 2013. "Governing Internet Expression: How Public and Private Regulation Shape Expression Governance." Journal of Information Technology & Politics 10 (4): 389–403. https://doi.org/10.1080/19331681.2013.799051.
Walker, Shaun, and Jennifer Rankin. 2020. "Hungary Passes Law That Will Let Orbán Rule by Decree." The Guardian. 30 March. www.theguardian.com/world/2020/mar/30/hungary-jail-for-coronavirus-misinformation-viktor-orban. Accessed 5 August 2020.
Woods, Lorna. 2019. "The Duty of Care in the Online Harms White Paper." Journal of Media Law 11 (1): 6–17. https://doi.org/10.1080/17577632.2019.1668605.
Wu, Tim. 2016. The Attention Merchants: The Epic Scramble to Get Inside Our Heads. New York: Penguin Random House.
Zuckerberg, Mark. 2019. "Mark Zuckerberg: The Internet Needs New Rules. Let's Start in These Four Areas." Washington Post. 30 March. www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html. Accessed 5 August 2020.