The Strategic Logic of Digital Disinformation
Offense, Defence and Deterrence in Information Warfare
H. Akin Unver – Ozyegin University, Department of International Relations¹
(akin.unver@ozyegin.edu.tr) (ORCID: 0000-0002-6932-8325)
Arhan S. Ertan – Bogazici University, Department of International Trade
(arhan.ertan@boun.edu.tr) (ORCID: 0000-0001-9730-8391)
Abstract: Why do countries engage in disinformation campaigns even though they know that they will likely be debunked later on? We explore a core puzzle in information warfare whereby countries that pursue disinformation to confuse and demobilize their adversaries usually suffer from reputational penalties after they are debunked, yet they nonetheless continue to pursue such tactics. In order to explain this dilemma, we employ a formal model and walk through anarchy, pre-emption, and cost miscalculation explanations of disinformation and demonstrate that countries may rationally engage in disinformation campaigns if they have a different calculus about reputational costs, if they believe their adversaries will not be able to debunk their claims successfully, and if those adversaries will not be able to disseminate their debunked claims well enough to incur reputational costs on the initiator. Ultimately, we suggest that deterrence in information warfare is attainable if the 'defender' can signal its debunking and 'naming-shaming' capacity prior to the disinformation campaign and if it can mobilize the support of the international audience against the attacker. We conclude by arguing that a country's fact-checking ecosystem and its pre-existing perception within the mainstream international digital media environment are the strongest defences against disinformation.
¹ This work is partly funded by The Scientific and Technological Research Institution of Turkey (TUBITAK), ARDEB 1001 Program, Project Number: 120K986, Title: "Digital Public Diplomacy of Armed Organizations: Syria and Iraq Cases", and The Science Academy Society of Turkey, Young Scientist Support Program (BAGEP) 2021.
Introduction
Governments, state agencies and foreign policy institutions are increasingly deploying organized disinformation to distract and confuse their adversaries. In its 2020 report, the Oxford Computational Propaganda Project identified organized, state-sponsored disinformation campaigns in 81 countries, with rapidly increasing numbers of 'cyber troops' (semi-officially employed individuals working on state-sponsored information operations) and campaign intensities.¹
While disinformation has largely been constructed as a form of 'attack' perpetrated by authoritarian countries against liberal democracies, more democratic countries, too, have engaged in organized disinformation attempts abroad. For example, both France and Russia engaged in disinformation campaigns in Mali, the Central African Republic and other Sahel-region countries to build influence and discredit opponents.² The US State Department ran a long-running program of digital disinformation against Jihadi content online, inserting its analysts into extremist discussions via pseudonyms and sharing false information to misdirect militant groups' online efforts.³ In Hungary, the government used disinformation against Romania in order to make the case internationally that the refugee crisis was Bucharest's fault. Similarly, both Belarus and Poland instrumentalized disinformation during the most recent Ukrainian refugee crisis.⁴ A 2021 European Parliament report indicated that disinformation between nations has become rampant in the Western Balkans, disrupting the political stability of the region and generating significant discrediting momentum for the EU.⁵ From Brazil and Argentina to South Africa, India and Australia, a broad range of countries and regime types have been involved in organized disinformation.⁶
Propaganda, manipulation and misdirection have been long-standing tactics of
diplomacy and international competition. In the last decade, and especially around the
2016 US Presidential election, ‘fake news’ and disinformation became buzzwords of sorts
that led to a rediscovery of the role of information in political competition. The biggest
difference between the traditional and more recent debates on the matter is the
digitalization of information warfare and the subsequent scale, volume and speed
advantages brought by this digitalization. The advent of Information and
Communications Technologies (ICTs) brought about a faster information exchange
medium where traditional gatekeepers like editors, censors or curators are of secondary
importance and often irrelevant. While in more traditional media forms broadcast depends on the approval of an intermediary individual or group, with ICTs and social media this approval is often hard to enforce given the sheer scale of information poured into such venues. Although automated content moderation works in most cases, it can easily be circumvented.⁷
With information gatekeepers out of the way, information
becomes disintermediated (reduction in the use of intermediaries), with information
suppliers (citizen journalists or anyone with access to social media) directly able to reach
information consumers around the world, in real time.⁸
The disintermediated nature of
modern information exchange has rendered ICTs a conducive ground for misinformation
(unintended spread of false information), disinformation (purposeful creation and
dissemination of false information) and malinformation (deliberate use of accurate or
inaccurate information with the purpose of harming an individual or people).⁹
To that end, the study of disinformation in the digital domain requires renewed
attention as traditional studies of propaganda fail to address the speed, scale and the
disintermediated nature of information sharing. Digital disinformation has been relatively well studied in domestic political contexts, especially in the United States. However, opportunities for disinformation research within the comparative politics and international relations (IR) fields remain largely untapped. In particular, there is still no consensus in the field over how to conceptualize disinformation within the confines of IR: is it best understood as a 'weapon', a 'tactic', or is it simply a more robust form of propaganda?¹⁰
How is digital disinformation different from disinformation in older media systems, and how much does disintermediation affect the way people communicate and consume information, and as a result seek to alter or contribute to international affairs?¹¹
More
importantly, why do countries choose to engage in disinformation against other countries
and does disinformation as a foreign policy tool serve a different purpose than
disinformation as a domestic political tool?
This chapter seeks to contribute to this emerging debate by exploring
disinformation in international relations as a rational actor problem. It situates
information warfare as a dyadic-dynamic interaction between the side that initiates
disinformation (Attacker), the side that seeks to counter these efforts (Defender) and the
international audience (IA) that affects the ‘winner’ of this interaction. Ultimately, the
chapter explores the payoff calculus of the Attacker and the Defender and aims to provide
a path of exploration into the inner workings of deterrence in information warfare.
Why Do Countries Resort to Organized Manipulation? Unpacking the
Demobilizing Logic of Disinformation
What accounts for the rapid explosion of digital disinformation in modern politics
in the last decade? Although organized manipulation and propaganda have long been key
tactics in statecraft and international politics, we do not yet have robust explanations for
how mass digitalization of communication changed its main parameters in inter-state
political communication. Traditional works on propaganda treat political manipulation as a small part of a diverse array of communicative strategies aimed at promoting a political point of view, often through misleading and biased narratives.¹²
However, such traditional works do not address the role of disintermediation in political communication and how such disintermediation – coupled with the vast size and lightning speed of ICTs – affects organized disinformation. A rapidly emerging, yet nascent, body of work focuses overwhelmingly on the domestic implications of disinformation, particularly concentrating on election manipulation and foreign meddling in the democratic processes of a small group of Western nations.¹³
While disinformation research has so far been heavily focused on domestic
political contexts of a small group of countries, there is still much to be done on fundamental conceptualization and case-study research into how digital disinformation alters existing communicative processes in international relations. This
includes how to conceptualize disinformation in foreign affairs: is it a tactic, a weapon, a
form of attack (similar to cyber-attacks), or a tool of diplomacy?¹⁴
Several countries have already contextualized disinformation as a national security threat. The US State Department defines disinformation as a 'quick and fairly cheap way to destabilize societies and set the stage for potential military action'.¹⁵ A joint US State Department and Joint Chiefs of Staff white paper later posited that: "Information has been weaponized, and disinformation has become an incisive instrument of state policy".¹⁶ Russian General Valery Gerasimov wrote in a 2013 article portraying disinformation as crucial: "The role of nonmilitary means of achieving political and strategic goals has grown, and, in many cases, they have exceeded the power of force of weapons in their effectiveness".¹⁷ Similarly, United Nations Development Programme discourse also constructs disinformation as a weapon, a trend that began with the COVID-19 pandemic.¹⁸ In the same vein, NATO's securitization of disinformation began with the first Ukraine war (2014), and disinformation has remained a crucial component of hybrid warfare in NATO doctrines.¹⁹ The concept of a digital information war had already been included in Russia's 2010 Military Doctrine, which was broadened in its 2014 update to include social media and ICTs.²⁰
Since then, both practices and allegations of disinformation have proliferated across other governments including China, the UK, France, Italy, South Africa, Turkey and Kenya (among others), where disinformation serves simultaneously as a domestic tool to discredit political opponents, a foreign policy tool to confuse international rivals, and a key threat that has to be defended against.²¹
Indeed, disinformation is increasingly being used as a force factor during
international crises. During the 2017 Gulf crisis between Saudi Arabia, Qatar and the
UAE, a robust disinformation campaign, followed by cyber-attacks, brought them to the
brink of limited conventional war and required significant diplomatic effort to disentangle
the damage caused by disinformation.²²
In that case, disinformation had significantly
aided in the escalation of the crisis, increasing the costs of backing down by raising
audience costs. Following the shooting down of its SU-24 in northern Syria, Russia had
launched a major disinformation campaign against Turkey, driving a wedge between Ankara and other NATO countries on the Syrian war, damaging intra-alliance cohesion and ultimately creating for itself a relatively unchallenged information space as it entered the Syrian theatre militarily.²³ In Nigeria, the government employed a disinformation campaign over the last decade targeting international aid agencies and workers, significantly impairing the ability of those agencies to work on humanitarian relief in the country.²⁴ From the click farms of Indonesia to 'black campaigning' in the Philippines, disinformation is being deployed as a distinct strategy to discredit and demobilize regional politics in Southeast Asia and, more recently, to sow mistrust towards China's Sinovac CoronaVac vaccine.²⁵
The most commonly agreed-upon purpose of disinformation in both domestic and
international politics is to distract, confuse and demobilize adversaries.²⁶ The chain of
thought is as follows: disinformation seeks to confuse an adversary to such a degree that
even if it is debunked later on, the short-term distraction yields sufficient strategic payoff
for the source of disinformation (‘Attacker’) in the form of demobilizing and dividing the
other side (‘Defender’). Since disinformation attempts often get fact-checked and
debunked, their strategic utility is often short-term.²⁷ Therefore, disinformation works best during time-sensitive events such as elections, diplomatic crises, emergencies and
natural disasters.
In the case of an election, the ‘Attacker’ attempts to boost the chances of the
friendly candidate and weaken the hostile one in the ‘Defender’ country, increasing the
likelihood of the friendly candidate getting elected. If the hostile candidate’s victory is
inevitable, the purpose of disinformation becomes weakening the hostile candidate’s
winning margin so that their rule becomes more difficult and contested. This limits the hostile candidate's ability to focus on the Attacker country after the election.
In the case of a diplomatic crisis or escalation, the logic works similarly: by
distracting and confusing an adversary, the Attacker seeks to gain short-term strategic
payoff either by reducing the level of support for the Defender government or its policies,
or by slowing down and demobilizing its current course of action. In cases of direct armed conflict,
disinformation is used to demoralize, demobilize, discredit and slow down the adversary’s
diplomatic and military efforts.
When an international dispute is pursued through non-military means, disinformation can be used either to further escalate a crisis (especially when the Attacker is distinctly superior to the Defender) or to de-escalate one (when the Attacker is distinctly weaker than the Defender). Current empirical evidence and the relevant literature make it hard to present a conclusive statement about whether disinformation is the 'tool of the weak'.²⁸ Stronger states have used disinformation against weaker adversaries as much as vice versa, and such disinformation has been deployed during active conflicts as much as during non-militarized disputes. While there is a tendency to portray democracies as the primary targets of disinformation, democracies have been sources of disinformation as well, preventing a clear regime-type argument from materializing.²⁹
Contrary to classical forms of communicative disruption – such as traditional media propaganda – digital disinformation has a shorter strategic utility span, but a wider and faster reach.³⁰ Digital disinformation attempts can spread across a global audience within a matter of minutes and potentially alter short-term beliefs about key events, but they are always subject to debunking and fact-checking after their immediate benefits. Moreover, in contrast to traditional propaganda, digital disinformation often focuses more on weakening the adversary's narrative and framing of events rather than strengthening its own side's, and it is this uniquely disruptive focus that separates its strategic utility from that of classical propaganda efforts.³¹
Yet, the Attacker’s advantage in disinformation is generally short-lived. Following
its immediate effect of demobilizing, demoralizing and debilitating the Defender's efforts, the
Attacker gets debunked and the fact-checked version of its narrative spreads equally fast
across ICTs and social media. This debunking generates a form of ‘reputational penalty’
for the Attacker, who gets 'named and shamed' on international platforms and suffers an additional, secondary 'suspicion penalty' on its successive communication efforts.³² An Attacker that overuses disinformation suffers these reputational and suspicion penalties at an increasing rate, causing its factually correct public diplomacy and government communication efforts to be met with low interest and resistance, and to be accused of crying wolf. In turn, each use of disinformation by an Attacker as a foreign policy tool reduces the effectiveness of its successive communication efforts, disinformation and otherwise, lowering the net utility the Attacker gets from communicative manipulation at each successive turn. In simple terms, disinformation as a foreign policy tool has diminishing returns, whereby its overproduction results in increasing penalties in successive rounds of an iterative multi-stage game.
The core puzzle of disinformation is therefore that although it has short-term
strategic advantages, it has more significant reputational and communicative costs to the
Attacker in the medium-to-long term, yet Attackers nonetheless rely on disinformation
knowing that they will suffer from these costs. To that end, the focus of this chapter is this ex-post inefficiency of disinformation: why do countries engage in disinformation even though they know that their ex-ante gains from it are often dwarfed by their ex-post reputational costs? In Fearonian fashion, an IR-Realist explanation of disinformation would fit into three broad categories: anarchy, preventive action and positive expected utility.³³
In terms of anarchy, the absence of a deterrence mechanism or a supranational
authority creates an international disinformation environment that favours the Attacker.
Similar to cybersecurity debates, where the Attacker has a distinct advantage due to attribution problems and a mediocre chance of reprisals,³⁴ disinformation, too, is a domain where the Attacker has a distinct timing and first-mover advantage against the Defender. Quite often, a successful disinformation campaign generates substantial short-term benefit around important time-sensitive events, even if such campaigns are debunked and a counter-disinformation offensive begins. To that end, the world of global
disinformation is a significantly anarchic domain.
In a system dominated by anarchy and first-mover advantages, preventative action
becomes the norm. An Attacker in an information war finds initiating a campaign preferable if it perceives information war as unavoidable. Since the disinformation domain is anarchic and information war is perceived as inevitable, a government will find the payoff of initiating a campaign greater than the risk of not initiating, in order not to suffer the penalties of becoming a Defender by moving in late. To that end, pre-
emption becomes a form of prevention: not of the information war, but of the costs of
suffering from the initial salvo.
Finally, Attackers often play down the reputational costs of initiating a
disinformation campaign and overestimate their short-term net utility compared to their
medium- and long-term reputational costs originating from this action. Attackers may
view short-term payoffs from initiating a disinformation campaign (by distracting and demobilizing the Defender) as preferable to any obscure reputational and suspicion penalties later on, or believe that the Defender cannot debunk the claim.
Indeed, the Defender may not always successfully debunk a disinformation claim,
especially when the volume of disinformation is too high (such as a botnet campaign), or
the disinformation is built in a sophisticated, hard-to-debunk fashion. If the Attacker
believes that its disinformation campaign is sophisticated enough that it will be difficult
to fact-check its claims, it will pursue disinformation believing that it will not suffer from
reputational or suspicion penalties later on. Similarly, even if the Defender successfully
debunks a claim, it may not be able to disseminate its fact-checked response widely or in
time. Indeed, this was the core claim of Vosoughi, Roy and Aral’s (2018) seminal work:
in politically charged environments, false news spreads faster and wider than accurate news across social media platforms.³⁵ Similarly, Saling et al. (2021) find that even users who regularly fact-check news online can still share disinformation inadvertently if the event is emotional and momentous enough.³⁶ To that end, in some cases the Defender may debunk successfully, but it may fail to disseminate true claims sufficiently and, as a result, may fail to generate international public reaction against the Attacker and fall short of 'naming and shaming' the Attacker.³⁷
Therefore, an Attacker may find initiating a disinformation campaign preferable,
if:
a. it has a different calculus about reputational and suspicion costs of the interaction
compared to the Defender and the international audience (IA),
b. it believes the Defender will not be able to 'name and shame' it – or at least not in time,
c. it believes the Defender will not be able to successfully debunk and disseminate the
claims,
d. it believes it will not suffer significant reputational or suspicion costs beyond the short term, or that both costs are not significant enough in comparison to its short-term payoff.
A Model of Disinformation in International Crises
In order to demonstrate the ‘offense-defence balance’ in an information war, we offer
a multi-stage ‘decision theoretic’ model (as opposed to a ‘game theoretic’ one), as in our
model there is one “agent” (player) who makes a strategic decision. This remains an
iterative, multi-stage interaction where the decision-making agent considers the expected
utility from all alternative outcomes, simultaneously considering the discounted value of
utility from future interactions.
The following formal representation of the problem outlines the core interaction. There
are three players, the 'Attacker' (A), the 'Defender' (D) and the digital 'international audience' (IA), which both A and D seek to convince. The Attacker is the origin point of the information campaign and, in our model, the only strategic decision-maker. We choose
to model these dynamics as an infinitely repeated interaction among 3 parties:
1. The Attacker’s repertoire of action consists of three moves:
Action TN: Do not engage in disinformation – share only true news.
o This action is taken when the ground reality works in the Attacker's favour and the Attacker does not feel a need to resort to disinformation, or when the Attacker believes that the short-term strategic gains from spreading fake news are too low compared to the reputation and suspicion penalties it will suffer later on. The latter case often happens when a militarily weaker country seeks the support of stronger, more democratic allies and has to keep its choices within an 'acceptable' range in order not to attract the criticism of its more powerful allies.
Action FN: Spread fake news and engage in an organized disinformation campaign.
o This line of action happens when the ground reality works against the Attacker's interests and the Attacker feels a need to alter the short-term perceptions of the Defender and the international audience, either to gain a strategic edge or to disrupt the level of mobilization and consensus in the adversary camp. This choice materializes when the Attacker views the short-term strategic utility from disinformation as greater than the reputational and suspicion costs that will arise in the following rounds.
Action NN: The Attacker does not engage in any information activity, accurate or inaccurate, and remains silent.
o Often, countries choose not to engage in any form of public diplomacy and remain silent on emerging issues in order to calibrate their responses in later rounds of the game.
2. We assume that the dominant strategy of the Defender is to debunk as many disinformation 'attacks' as its information capacity allows. While in some cases the Defender may also prefer to do nothing, or to fire back with more disinformation rather than true information, these action types are rare and will remain beyond the scope of this model.
Action DNS: Debunk the fake news and name and shame the Attacker on the international stage.
o This debunking action happens when the Defender views the Attacker's disinformation campaign as too damaging and resorts to countermeasures in the form of fact-checking and disseminating the true version of events.
3. In this model, we also consider the international audience as an actor with agency,
which also acts based on whether it believes the Attacker’s version of events, or
the Defender’s. In our model, the international audience does not always believe in
accurate news and may often believe and help sustain a disinformation campaign,
especially if the disinformation campaign in question is a sophisticated one that is
hard to debunk and triggers emotional sensitivities of the global audience.
Action B: In this case, the international audience gravitates in favour of the fake news and the overall digital community converges upon believing the disinformation campaign. When this happens, the Attacker's short-term utility from spreading disinformation becomes sustained in the medium term and yields the greatest payoff for the Attacker.
Action NB: When a successful debunking and fact-checking performance meets with a successful countermeasure campaign, the international digital community converges upon rejecting the disinformation campaign. When this happens, the Defender's 'name and shame' strategy yields the greatest payoff, and the Attacker suffers reputational and suspicion penalties early on.
The Attacker initiates the first phase of the interaction. If conditions on the ground favour the Attacker's version of events, it engages in TN (share true news; or engage in public diplomacy), as it does not gain additional payoff by sharing fake news. For example, if the Attacker is meddling in the Defender's elections and the candidate that is friendly to the Attacker country has a distinct advantage in the polls, the Attacker has no interest in destabilizing the Defender and risking exposure. If the ground conditions work against the Attacker's interests, then the Attacker has an interest in disseminating disinformation and destabilizing/demobilizing the Defender. Let PT be equal to how much value the Attacker assigns to the probability of ground conditions working in favour of its interests. This is a subjective value and is assigned by the Attacker on a case-by-case basis, so the value the Attacker assigns to ground reality not favouring its interests is 1 - PT.
In case the ground reality works in the Attacker’s favour, the dominant strategy for
the Attacker is to spread the accurate news and conduct public diplomacy communication
(not disinformation) through that event. If the Attacker finds the ground truth
contrary to its interests, it has two options: a) do nothing and let the situation progress
on its own, or b) engage in disinformation to alter short-term perceptions of the
international and domestic audiences.
In the next stage, the international audience (IA from now on) is either successfully misled by the disinformation attempt, or rejects the information provided by the Attacker, based on previous iterations of the information exchange. This choice of the IA is contingent upon previous iterations of the information provided by the Attacker. While in our model the interaction starts off with the Attacker's choice, it is important to keep in mind that there have been numerous interactions in the past, causing the IA to develop pre-existing beliefs about both the Attacker and the Defender. We can model the IA's probability of believing the disinformation attempt by the Attacker and helping it spread (PB) in two alternative pathways:
a) the IA's probability of believing the disinformation attempt by the Attacker and helping it spread (PB), either due to a successful disinformation attempt, poor fact-checking practices, or the IA's pre-existing beliefs in favour of the Attacker,
b) the IA's probability of not believing the disinformation attempt (1 - PB), due to fast and high-volume fact-checking and the IA's pre-existing beliefs in favour of the Defender.
These two alternative models are mathematically equivalent, so we continue with
the first interpretation. Here we assume that PB does not vary based on whether the news spread by the Attacker is actually true or false. This is plausible at this stage, as the ground
truth is not known by the international audience but only by the Attacker.
In the final stage of the model, the Defender tries to debunk as many of the disinformation attempts as possible, as this is the apparent dominant strategy for it. In rare cases, the Defender may also choose to remain silent and not respond, or it may respond by initiating a disinformation campaign itself. But these attempts have a high risk of failure too, either because the Defender might not have the resources or time to deal with the scale of disinformation, or because even when it debunks successfully, it may not convince the international audience due to pre-existing beliefs or weak dissemination performance and media capacity. In this case again, these two alternative scenarios are mathematically equivalent, and we choose to assign a probability (PD) for a 'successful debunk' of a disinformation attempt.
It is important to note here that both PB and PD are unknown ex-ante for all the parties except the Attacker, and the Attacker assigns a subjective probability to these options, as it does for PT.
Figure 1.1: Decision tree of the information interaction model between the Attacker, the Defender and the International Audience.
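As a reading aid, the decision tree in Figure 1.1 can also be written out programmatically. The short Python sketch below is not part of the formal model itself; it simply enumerates the leaves of the tree and the probability of reaching each one under the stage probabilities PT, PB and PD defined above, with payoff labels anticipating the outcome values listed in the next subsection. Any concrete numbers passed to the function are illustrative assumptions only.

```python
# A minimal sketch of the decision tree in Figure 1.1 (illustrative only).
# PT: probability that the ground reality favours the Attacker.
# PB: probability that the IA believes the Attacker's claim.
# PD: probability that the Defender successfully debunks a fake-news claim.

def tree_leaves(PT, PB, PD, strategy_when_unfavourable="FN"):
    """Return (outcome label, path probability, payoff label) for each leaf,
    given the Attacker's strategy when the ground reality is unfavourable
    ("FN" = spread fake news, "NN" = remain silent)."""
    leaves = [
        ("TB",  PT * PB,       "V_TB"),   # true news shared, IA convinced
        ("TNB", PT * (1 - PB), "V_TNB"),  # true news shared, IA not convinced
    ]
    if strategy_when_unfavourable == "NN":
        leaves.append(("NN", 1 - PT, "V_NN"))               # Attacker stays silent
    else:
        leaves += [
            ("FB",   (1 - PT) * PB * (1 - PD),       "V_FB"),         # believed, not debunked
            ("FBD",  (1 - PT) * PB * PD,             "V_FB - PUN"),   # believed, then debunked
            ("FNB",  (1 - PT) * (1 - PB) * (1 - PD), "V_FNB"),        # not believed, not debunked
            ("FNBD", (1 - PT) * (1 - PB) * PD,       "V_FNB - PUN"),  # not believed, debunked
        ]
    return leaves

if __name__ == "__main__":
    # Purely hypothetical stage probabilities:
    for name, p, payoff in tree_leaves(PT=0.4, PB=0.6, PD=0.5):
        print(f"{name:5s} p = {p:.2f}  payoff = {payoff}")
```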
Evaluating Game Outcomes
As can be seen from the decision tree representation of our model in Figure 1.1, there are 7 potential outcomes of the interactions among the three 'agents' of our model. Brief descriptions of the scenarios behind each outcome and the corresponding value of each
outcome for the Attacker are as follows:
Outcome TB: Ground reality conforms to the Attacker's interests; hence it tries to spread it as part of public diplomacy, and the IA is convinced by this information policy.
Benefit for Attacker: VTB

Outcome TNB: Ground reality conforms to the Attacker's interests; hence it tries to spread it as part of public diplomacy, but the IA is not convinced by this information policy.
Benefit for Attacker: VTNB

Outcome NN: Ground reality does not conform to the Attacker's interests, but it chooses to remain silent and not engage in any particular information action on the matter.
Benefit for Attacker: VNN

Outcome FB: Ground reality does not conform to the Attacker's interests; hence it engages in a disinformation campaign. The IA is convinced by the disinformation campaign and the Defender cannot successfully debunk its claims.
Benefit for Attacker: VFB

Outcome FBD: Ground reality does not conform to the Attacker's interests; hence it engages in a disinformation campaign. The IA is convinced by the disinformation campaign, but it is then successfully debunked by the Defender. The Defender successfully incurs a 'name & shame' punishment (PUN) against the Attacker. The IA's likelihood of believing the Attacker (PB) decreases in all future interactions.
Benefit for Attacker: VFB – PUN

Outcome FNB: Ground reality does not conform to the Attacker's interests; hence it engages in a disinformation campaign. The IA is not convinced by the disinformation campaign.
Benefit for Attacker: VFNB

Outcome FNBD: Ground reality does not conform to the Attacker's interests; hence it engages in a disinformation campaign. The IA is not convinced by the disinformation campaign. The Defender successfully incurs a 'name & shame' punishment (PUN) against the Attacker. The IA's likelihood of believing the Attacker (PB) decreases in all future interactions.
Benefit for Attacker: VFNB – PUN
It is important to underline that it is not realistic to expect the Attacker to make
precise estimations about the values of these benefits; hence they are highly subjective
estimations. As will be apparent below in our discussion of the dynamics of the model, the subjective nature of these benefits may cause the Attacker – and in some cases the Defender – to miscalculate the pros and cons of each strategy.
Expected Utilities for the Attacker
The Attacker pursues disinformation only when the ground reality does not
conform to its interests. Since we assume that the pay-off structure of the interactions among all three decision-making agents remains static over time, the comparison of expected utilities does not change over time either; hence the Attacker continues with the same onset strategy in all future iterations of this interaction. When the ground reality does not conform to its interests, the Attacker has two strategies to choose from: FN or NN. Hence, a rational decision requires comparing the expected utilities resulting from each of those strategies.
So, the equations follow:
NN – doing nothing when the ground reality does not conform to the Attacker's interests:

$$EU_{NN} = V_{NN} + \frac{1}{r}\Big[(1 - P_T)\,V_{NN} + P_T\,EU_{TN}\Big] \qquad \text{(eqn. 1)}$$

FN – engaging in a disinformation campaign when the ground reality does not conform to the Attacker's interests:

$$EU_{FN}(P_B) = P_B\Big[(1 - P_D)\,V_{FB} + P_D\,(V_{FB} - PUN)\Big] + (1 - P_B)\Big[(1 - P_D)\,V_{FNB} + P_D\,(V_{FNB} - PUN)\Big] + \frac{1}{r}\,EU_{FN}^{LR} \qquad \text{(eqn. 2)}$$

where:

r = discount rate²

$$EU_{TN} = P_B\,V_{TB} + (1 - P_B)\,V_{TNB}$$

$$EU_{FN}^{LR} = (1 - P_T)\Big[P_D\,EU_{FN}(P_B^{low}) + (1 - P_D)\,EU_{FN}(P_B)\Big] + P_T\Big[P_D\,EU_{TN}(P_B^{low}) + (1 - P_D)\,EU_{TN}(P_B)\Big] \qquad \text{(eqn. 3)}$$

$$P_B^{low} = \text{the value of the IA's likelihood of believing the Attacker after a successful debunk}$$
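For readers who wish to experiment with the comparison numerically, the following is a minimal Python sketch of eqns 1–3 rather than a definitive implementation. It rests on two simplifying readings of the equations: 1/r is treated as a per-period discount factor strictly below one (so r > 1), and the IA's belief is assumed to remain at PB_low in every round after the first successful debunk. Parameter names mirror the notation above; any concrete values supplied to the function are subjective, illustrative assumptions.

```python
# A sketch of eqns 1-3, under the simplifying assumptions stated above.

def expected_utilities(PT, PB, PB_low, PD, V, PUN, r, iters=500):
    """Return (EU_NN, EU_FN) for the Attacker.
    V is a dict with keys "TB", "TNB", "NN", "FB", "FNB" holding the outcome values."""
    assert r > 1.0, "1/r is treated here as a discount factor and must be below one"
    delta = 1.0 / r

    def U_TN(P):   # one-period expected payoff from sharing true news (cf. EU_TN in eqn 3)
        return P * V["TB"] + (1 - P) * V["TNB"]

    def U_FN(P):   # one-period expected payoff from spreading fake news (first line of eqn 2)
        return (P * ((1 - PD) * V["FB"] + PD * (V["FB"] - PUN))
                + (1 - P) * ((1 - PD) * V["FNB"] + PD * (V["FNB"] - PUN)))

    # Eqn 1: stay silent today, then continue with the same stationary strategy.
    EU_NN = V["NN"] + delta * ((1 - PT) * V["NN"] + PT * U_TN(PB))

    # Eqns 2-3 are recursive; solve them by simple fixed-point iteration.
    X = 0.0   # X approximates EU_FN(PB)
    Y = 0.0   # Y approximates EU_FN(PB_low), i.e. after a successful debunk
    for _ in range(iters):
        Y = U_FN(PB_low) + delta * ((1 - PT) * Y + PT * U_TN(PB_low))
        X = U_FN(PB) + delta * ((1 - PT) * (PD * Y + (1 - PD) * X)
                                + PT * (PD * U_TN(PB_low) + (1 - PD) * U_TN(PB)))
    return EU_NN, X
```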
Since the Attacker decides between FN and NN by comparing the corresponding expected utilities of these two strategies, the direction of the effects of a number of important parameters on the values of the outcomes for the Attacker must be addressed.
Variables that increase EUFN and/or variables that cause a decrease in EUNN make engaging in a disinformation campaign more attractive for the Attacker. Therefore, as outlined in the preceding formulations, the expected utilities change in favour of EUFN (as opposed to EUNN) – causing the Attacker to choose the FN strategy – when: PT is lower, PB is higher, PD is lower; VNN is lower, VFB is higher, VFNB is lower; VTB is higher, VTNB is lower, and PUN is lower.
² Note that, as it is an "infinite interaction" model, the equation used for calculating the discounted present value of future values simplifies to the form in equation 1.
It is crucial to note that both the subjective nature of the probabilities and the Attacker's benefits at the alternative potential outcomes may cause the Attacker to choose a strategy which brings it a lower ex-post actual utility.
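As a purely illustrative usage of the sketch above, the hypothetical payoff and probability values below (not estimated from any empirical case) show how a stronger debunking capacity on the Defender's side can flip the Attacker's choice from FN to NN, which is the deterrence logic discussed in the next section.

```python
# Hypothetical, illustrative parameter values only.
V = {"TB": 10, "TNB": 2, "NN": 0, "FB": 8, "FNB": -2}
base = dict(PT=0.3, PB=0.7, PB_low=0.2, V=V, PUN=6, r=1.25)

for PD in (0.2, 0.8):   # weak vs. strong debunking capacity of the Defender
    eu_nn, eu_fn = expected_utilities(PD=PD, **base)
    choice = "FN (disinform)" if eu_fn > eu_nn else "NN (remain silent)"
    print(f"PD = {PD:.1f}: EU_NN = {eu_nn:.2f}, EU_FN = {eu_fn:.2f} -> {choice}")
```

With these assumed numbers, a low PD leaves disinformation attractive, while a high PD – combined with the PUN penalty and the drop to PB_low – makes remaining silent the better strategy; the exact threshold naturally depends on the values chosen.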
Discussion
Our model posits that deterrence in international information war is attainable through two avenues: first, if the Defender can successfully signal its debunking capacity to the Attacker, and second, if the Defender can demonstrate its ability to rally the international audience to its cause quickly enough. The first condition is attainable through building and maintaining a robust fact-checking ecosystem either within government ranks or in civil society, but ideally both. If a Defender regularly demonstrates rapid and successful debunking performance in past interactions, the Attacker's valuation of its short-term utility from deploying disinformation will likely be
reduced. Since one of the core drivers of disinformation is the Attacker’s high valuation
of its short-term demobilizing and distracting potential against its adversaries, those
adversaries can in turn demonstrate that they can debunk and disseminate accurate news
rapidly in order to reduce the Attacker’s perceived payoffs.
The second condition is attainable through diplomatic alliances, media and cultural power, and less reliance on fake news as a government strategy in past information interactions. If the Defender has regularly used disinformation as a conscious strategy in past interactions – more so than the Attacker – it will be harder for the IA to rally behind the Defender's cause. To that end, not resorting to disinformation is a cumulative resource that countries 'save' over time and can 'cash in' during emergencies, as it renders the IA more receptive towards their cause.
This chapter has identified two important dynamics. First, the fact that
disinformation may incur reputational and suspicion costs to the Attacker suggests that
there must be greater short-term gains for the Attacker so that it prefers spreading fake
news. This reasoning advances the prevalent wisdom that states engage in disinformation
because it is a cheap way of demobilizing an adversary and there are no repercussions
against this form of action. Second, Defenders are not as vulnerable to disinformation campaigns as the mainstream debate suggests, and the Attacker's advantage in information warfare is not absolute. The Attacker chooses to engage in organized disinformation because it believes that the Defender will not be able to debunk these claims in time and at scale. If the Defender reliably demonstrates the opposite, then the Attacker's decision to launch a disinformation campaign is not a given and automatic, and the Attacker may be deterred from choosing this course of action. In addition, the Defender may deter the Attacker through its influence over the IA, by not regularly engaging in disinformation campaigns itself and by optimizing its cultural and media power, which accumulates over time through 'good practices'.
We expect a number of criticisms of our model. First, we concede that the Defender may not be reduced to a single decision to automatically debunk all disinformation attempts. It can indeed choose not to take action, or spread disinformation itself to win the short-term information war. This is indeed an important point, but empirically such cases have been so rare that modelling them within the same parameters of likelihood as the rest of our model can be misleading. Second, we anticipate that our decision to include the International Audience as a decision-making actor may confer too much agency on it, as in most cases the IA's likelihood of believing disinformation or not is largely deterministic: if the Attacker is skilful and can generate sufficient emotional triggers, the IA has a high likelihood of believing the disinformation. Also, pre-existing beliefs about the Attacker and the Defender matter significantly in driving the momentum of the IA.
However, we believe that the IA is the actual kingmaker in this interaction and its
gravitational dynamics impact the outcome of the information war. To that end, we favour
retaining the agency of the IA.
Ultimately, future research can be directed towards exploring the Defender’s
actions in more detail and building more extensive games with the IA as an actor or as a
bystander to the information war. Additionally, works that deploy our model in actual
empirical cases from around the world (not just Russia and China versus the US
and Europe) would enrich and modify the model significantly. We hope that future
debates that explore the rationality and payoff structure of disinformation proliferate and
generate increased attention.
References
1 Samantha Bradshaw, Hannah Bailey, and Philip N. Howard, “Industrialized Disinformation: 2020 Global Inventory of Organized Social Media Manipulation,” Computational Propaganda Research Project (Oxford, UK: Oxford Internet Institute, 2021), https://demtech.oii.ox.ac.uk/research/posts/industrialized-disinformation.
2 “Rival Disinformation Campaigns Targeted African Users, Facebook Says,” The Guardian, December 15, 2020, sec. Technology, https://www.theguardian.com/technology/2020/dec/15/central-african-republic-facebook-disinformation-france-russia.
3 Sohini Chatterjee and Peter Kreko, “State-Sponsored Disinformation in Western Democracies Is the Elephant in the Room | View,” Euronews, July 6, 2020, https://www.euronews.com/2020/07/06/state-sponsored-disinformation-in-western-democracies-is-the-elephant-in-the-room-view; Jacob Silverman, “The State Department’s Twitter Jihad,” POLITICO Magazine, July 22, 2014, https://www.politico.com/magazine/story/2014/07/the-state-departments-twitter-jihad-109234.
4 “Ukrainian Refugees and Disinformation: Situation in Poland, Hungary, Slovakia and Romania,” European Digital Media Observatory (blog), April 5, 2022, https://edmo.eu/2022/04/05/ukrainian-refugees-and-disinformation-situation-in-poland-hungary-slovakia-and-romania/.
5 Samuel Greene et al., “Mapping Fake News and Disinformation in the Western Balkans and Identifying Ways to Effectively Counter Them” (Brussels: Policy Department for External Relations, Directorate General for External Policies of the Union, February 2021).
6 Davey Alba and Adam Satariano, “At Least 70 Countries Have Had Disinformation Campaigns, Study Finds,” The New York Times, September 26, 2019, sec. Technology, https://www.nytimes.com/2019/09/26/technology/government-disinformation-cyber-troops.html.
7 Tarleton Gillespie, “Content Moderation, AI, and the Question of Scale,” Big Data & Society 7, no. 2 (July 1, 2020): 2053951720943234, https://doi.org/10.1177/2053951720943234.
8 Scott A. Eldridge, Lucía García-Carretero, and Marcel Broersma, “Disintermediation in Social Networks: Conceptualizing Political Actors’ Construction of Publics on Twitter,” Media and Communication 7, no. 1 (March 21, 2019): 271–85, https://doi.org/10.17645/mac.v7i1.1825.
9 Matteo Cinelli et al., “The Limited Reach of Fake News on Twitter during 2019 European Elections,” PLOS ONE 15, no. 6 (June 18, 2020): e0234689, https://doi.org/10.1371/journal.pone.0234689.
10 Don Fallis, “The Varieties of Disinformation,” in The Philosophy of Information Quality, ed. Luciano Floridi and Phyllis Illari, Synthese Library (Cham: Springer International Publishing, 2014), 135–61, https://doi.org/10.1007/978-3-319-07121-3_8.
11 Christina la Cour, “Theorising Digital Disinformation in International Relations,” International Politics 57, no. 4 (August 1, 2020): 704–23, https://doi.org/10.1057/s41311-020-00215-x.
12 Ernst B. Haas, “The Balance of Power: Prescription, Concept, or Propaganda?,” World Politics 5, no. 4 (July 1953): 442–77, https://doi.org/10.2307/2009179; Gary D. Rawnsley, “Radio Diplomacy and Propaganda,” in Radio Diplomacy and Propaganda: The BBC and VOA in International Politics, 1956–64, ed. Gary D. Rawnsley, Studies in Diplomacy (London: Palgrave Macmillan UK, 1996), 6–17, https://doi.org/10.1007/978-1-349-24499-7_2; Ben D. Mor, “The Rhetoric of Public Diplomacy and Propaganda Wars: A View from Self-Presentation Theory,” European Journal of Political Research 46, no. 5 (2007): 661–83, https://doi.org/10.1111/j.1475-6765.2007.00707.x.
13 Yochai Benkler, Robert Faris, and Hal Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (Oxford University Press, 2018); W. Lance Bennett and Steven Livingston, “The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions,” European Journal of Communication 33, no. 2 (April 1, 2018): 122–39, https://doi.org/10.1177/0267323118760317; Chris Tenove, “Protecting Democracy from Disinformation: Normative Threats and Policy Responses,” The International Journal of Press/Politics 25, no. 3 (July 1, 2020): 517–37, https://doi.org/10.1177/1940161220918740.
14 Alexander Lanoszka, “Disinformation in International Politics,” European Journal of International Security 4, no. 2 (June 2019): 227–48, https://doi.org/10.1017/eis.2019.6.
15 “Disarming Disinformation: Our Shared Responsibility,” United States Department of State, April 7, 2022, https://www.state.gov/disarming-disinformation/.
16 “Russian Strategic Intentions: A Strategic Multilayer Assessment (SMA) White Paper,” SMA White Papers (Boston, MA: National Security Innovations, May 2019), https://nsiteam.com/sma-white-paper-russian-strategic-intentions/.
17 Molly K. Mckew, “The Gerasimov Doctrine,” POLITICO Magazine, October 2017, https://politi.co/2KZQlKd.
18 “UNDP: Governments Must Lead Fight against Coronavirus Misinformation and Disinformation,” United Nations Development Programme, June 10, 2020, https://www.undp.org/press-releases/undp-governments-must-lead-fight-against-coronavirus-misinformation-and.
19 Akin Unver and Ahmet Kurnaz, “Securitization of Disinformation in NATO Lexicon: A Computational Text Analysis,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, February 21, 2022), https://doi.org/10.2139/ssrn.4040148.
20 Michał Pietkiewicz, “The Military Doctrine of the Russian Federation,” Polish Political Science Yearbook 47, no. 3 (2018): 505–20.
21 Daniel Funke and Daniela Flamini, “A Guide to Anti-Misinformation Actions around the World,” Poynter Institute for Media Studies, August 2020, https://www.poynter.org/ifcn/anti-misinformation-actions/.
22 Marc Owen Jones, “The Gulf Information War | Propaganda, Fake News, and Fake Trends: The Weaponization of Twitter Bots in the Gulf Crisis,” International Journal of Communication 13, no. 0 (March 15, 2019): 27; H. Akin Unver, “Can Fake News Lead to War? What the Gulf Crisis Tells Us,” War on the Rocks, June 13, 2017, https://warontherocks.com/2017/06/can-fake-news-lead-to-war-what-the-gulf-crisis-tells-us/.
23 H. Akin Unver, “Russia Has Won the Information War in Turkey,” Foreign Policy (blog), April 21, 2019, https://foreignpolicy.com/2019/04/21/russia-has-won-the-information-war-in-turkey-rt-sputnik-putin-erdogan-disinformation/; Akin Unver and Ahmet Kurnaz, “Russian Digital Influence Operations in Turkey 2015–2020,” Project on Middle East Political Science 43 (August 5, 2020): 83–90.
24 Idayat Hassan and Jamie Hitchen, “Nigeria’s Disinformation Landscape,” Social Science Research Council (blog), October 6, 2020, https://items.ssrc.org/disinformation-democracy-and-conflict-prevention/nigerias-disinformation-landscape/.
25 Hoang Linh Dang, “Social Media, Fake News, and the COVID-19 Pandemic: Sketching the Case of Southeast Asia,” Austrian Journal of South-East Asian Studies 14, no. 1 (June 28, 2021): 37–58, https://doi.org/10.14764/10.ASEAS-0054.
26 Axel Gelfert, “Fake News: A Definition,” Informal Logic 38, no. 1 (2018): 84–117, https://doi.org/10.22329/il.v38i1.5068; Michael Jensen, “Russian Trolls and Fake News: Information or Identity Logics?,” Journal of International Affairs 71, no. 1.5 (2018): 115–24.
27 Peter J. Phillips and Gabriela Pohl, “The Hidden Logic of Disinformation and the Prioritization of Alternatives – The Era of Dis-and-Misinformation,” Seton Hall Journal of Diplomacy and International Relations 22, no. 1 (2021): 24–34.
28 Alexander Rozanov, Julia Kharlamova, and Vladislav Shirshikov, “The Role of Fake News in Conflict Escalation: A Theoretical Overview,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, May 10, 2021), https://doi.org/10.2139/ssrn.3857007.
29 Edda Humprecht, “Where ‘Fake News’ Flourishes: A Comparison across Four Western Democracies,” Information, Communication & Society 22, no. 13 (November 10, 2019): 1973–88, https://doi.org/10.1080/1369118X.2018.1474241.
30 Manuel Castells, The Rise of the Network Society, vol. 1, The Information Age: Economy, Society and Culture (Hoboken, NJ: John Wiley & Sons, 2011).
31 Joshua A. Tucker et al., “Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, March 19, 2018), https://doi.org/10.2139/ssrn.3144139.
32 Nathan Walter et al., “Fact-Checking: A Meta-Analysis of What Works and for Whom,” Political Communication 37, no. 3 (May 3, 2020): 350–75, https://doi.org/10.1080/10584609.2019.1668894.
33 James D. Fearon, “Rationalist Explanations for War,” International Organization 49, no. 3 (1995): 379–414.
34 Rebecca Slayton, “What Is the Cyber Offense-Defense Balance?: Conceptions, Causes, and Assessment,” International Security 41, no. 3 (2016): 72–109.
35 Soroush Vosoughi, Deb Roy, and Sinan Aral, “The Spread of True and False News Online,” Science 359, no. 6380 (March 9, 2018): 1146–51, https://doi.org/10.1126/science.aap9559.
36 Lauren L. Saling et al., “No One Is Immune to Misinformation: An Investigation of Misinformation Sharing by Subscribers to a Fact-Checking Newsletter,” PLOS ONE 16, no. 8 (August 10, 2021): e0255702, https://doi.org/10.1371/journal.pone.0255702.
37 David M. J. Lazer et al., “The Science of Fake News,” Science 359, no. 6380 (March 9, 2018): 1094–96, https://doi.org/10.1126/science.aao2998.