The disconcerting potential of Russia’s trolls: Persuasive effects of astroturfing
comments and three strategies for inoculation against them
Thomas Zerback (corresponding author)
University of Zurich, Andreasstrasse 15, 8050 Zürich,
t.zerback@ikmz.uzh.ch
Florian Töpfl
Free University of Berlin, Garystraße 55, 14195 Berlin,
f.toepfl@fu-berlin.de
Maria Knöpfle
Ludwig-Maximilians-University Munich, Oettingenstraße 67, 81538 Munich,
M.Knoepfle@campus.lmu.de
Author Note
Thomas Zerback, Ph.D. is Assistant Professor for political communication at the
Department of Communication and Media Research at the University of Zurich, Switzerland.
Florian Töpfl, Ph.D. is an Emmy Noether Junior Research Group leader at the Institute
for Media and Communication Studies at the Free University of Berlin, Germany.
Maria Knöpfle is a student assistant at the Department of Media and Communication at the
Ludwig-Maximilians-University Munich, Germany.
Abstract
This study is the first to scrutinize the psychological effects of online astroturfing in the
context of Russia’s digitally-enabled foreign propaganda. Online astroturfing is a
communicative strategy that uses websites, “sock puppets,” or social bots to create the false
impression that a particular opinion has widespread public support. We exposed N = 2,353
subjects to pro-Russian astroturfing comments and tested: (1) the comments’ effects on
political opinions and opinion certainty, and (2) the efficiency of three inoculation strategies
to prevent these effects. All effects were investigated across three issues as well as from a
short- and long-term perspective. Results show that astroturfing comments can indeed alter
recipients’ opinions, and increase uncertainty, even when recipients are inoculated before
exposure. We found only one inoculation strategy (refutational-same) to be effective.
Consequences for future inoculation research and practical applications are discussed.
Keywords: disinformation, misinformation, Russia, state propaganda, online astroturfing,
opinion certainty, uncertainty, countermeasures, inoculation
The disconcerting potential of Russia’s trolls: Persuasive effects of astroturfing
comments and three strategies for inoculation against them
Particularly in the aftermath of the 2016 US presidential election, disinformation and its
consequences for democratic societies have been subject to extensive political (European
Commission, 2018) and scholarly debate (e.g., Bennett and Livingston, 2018). At the most
abstract level, disinformation can be understood as “[i]naccurate or manipulated information /
content that is spread intentionally. This can include false news, or it can involve more subtle
methods such as false flag operations, feeding inaccurate quotes or stories to innocent
intermediaries, or knowingly amplifying biased or misleading information” (Weedon et al.,
2017: 5). Therefore, disinformation is also persuasive communication (Zhang et al., 2013). In
this paper we deal with an important and widespread subtype of disinformation, known as
“astroturfing” (Kovic et al., 2018; Zhang et al., 2013). Astroturfing can be defined as the
“manipulative use of media and other political techniques to create the perception of a
grassroots community organization where none exists for the purpose of political gain”
(McNutt and Boland, 2007: 169). Although the phenomenon itself is not new (e.g., Lyon and
Maxwell, 2004), the Internet and especially social media have paved the way for new forms,
often referred to as digital or online astroturfing (Kovic et al., 2018; Zhang et al., 2013).
A central strategic instrument of online astroturfing is the manufacturing of user
comments designed to appear as authentic citizen voices on highly visible news or social
networking sites (SNS). We focus here on this specific form of online astroturfing because it
has been one of the most widely debated in the context of recent national elections across the
Western world (Ferrara, 2017; Kovic et al., 2018; Zelenkauskaite and Balduccini, 2017).
Examples of targeted campaigns include the 2016 presidential election in the US
(Bessi and Ferrara, 2016; Woolley and Guilbeault, 2017), the 2017 presidential election in
France (Ferrara, 2017), and the 2012 presidential election in South Korea (Keller et al.,
2017). Academic studies, investigative news articles, and think-tank reports have pointed to
Russia’s ruling elites as a key sponsor of these astroturfing activities (see, for example,
Bugorkova, 2015; Zelenkauskaite and Balduccini, 2017). These elites are closely tied to an
organization operating under the name Internet Research Agency (IRA). In 2013 this entity,
also referred to as Russia’s “troll factory”, employed approximately 600 people and had an
estimated annual budget of US$10 million (Bugorkova, 2015). Among other activities, these so-called
Russian trolls targeted foreign audiences by setting up fake SNS accounts (known as “sock
puppets”) mimicking grassroots support for Russian policies on a range of news and social-
media platforms (Kovic et al., 2018).
Among Western political leaders, these digitally enabled propaganda efforts have
sparked not only concern but explicit indignation (European Commission, 2018). In the
academic realm, they have stimulated a fast-growing body of research on the phenomenon.
So far, however, this research has focused almost exclusively on the detection of
manufactured comments, that is, the question of how to identify sock-puppet accounts or
automated social bots (Keller et al., 2017; King et al., 2017). By contrast, we still know very
little about the psychological effects that such manufactured user commenting has on media
audiences, and even less about possible ways of forestalling these effects. Against this
background, our study advances existing research in three ways:
(1) We examine whether online astroturfing comments affect the political opinions and
opinion-certainty of those exposed to them.
(2) We investigate whether these persuasive effects can be mitigated, or even prevented, by
the use of inoculation messages designed to educate the audience about the manipulative
intent and argumentative tactics of the astroturfing actors.
(3) We analyze the duration of the inoculation’s immunizing effects.
Our study is based on a three-wave experiment carried out over the course of four weeks.
2,353 participants were exposed to typical Russian online astroturfing comments posted
beneath social media news items in order to determine their persuasive effects. In addition,
we tested the efficiency of three different inoculation treatments in countering these effects
both in the short run and in the long run. All stimuli messages were administered in the
context of three different issues prone to Russian astroturfing activities: the poisoning of
former Russian intelligence officer Sergei Skripal, the manipulation of the 2016 US
presidential election, and the use of toxic gas by a close Russian ally, the Syrian government.
The effects of astroturfing comments on an audience
Online astroturfing comments imitate ordinary citizens’ voices in order to create the
impression that a certain opinion has widespread public support, while the real agent behind
the message conceals his identity (Zhang et al., 2013). Astroturfing comments are almost
impossible to distinguish from authentic user comments; hence audiences find themselves
in situations where they are either completely unaware of the fact that a comment might be
sponsored by a principal, or they may suspect such an influence but cannot be entirely sure
about it. Given their authentic appearance and the lack of knowledge, and/or uncertainty, on
the part of audiences, astroturfing comments carry the potential to influence the opinions of
those who read them.
An answer to the question of how astroturfing comments can alter personal opinions is
provided by exemplification research, which has investigated the effects of ordinary citizen
depictions in the media (also known as “exemplars”) (Zillmann, 1999). Exemplars possess
several characteristics contributing to their persuasive potential: firstly, as personalized
information they attract the audience’s attention, making persuasive effects more likely in the
first place (Taylor and Thompson, 1982). Secondly, the opinion voiced by an exemplar
becomes cognitively available and more accessible in the recipients’ memories (Zillmann,
1999), and highly accessible information has a greater chance of influencing subsequent
judgments (Domke et al., 1998). Finally, fellow citizens are often considered to be more
trustworthy and more similar to ourselves by comparison with other actors present in the
media, such as, for example, politicians (Lefevere et al., 2012). Trustworthiness and
similarity have both been shown to be strong facilitators of persuasive effects (Hovland et al.,
1953).
Although, from a theoretical point of view, depictions of citizens hold a great
potential to influence the opinions of those confronted with them, empirical evidence on their
persuasive potential is rather mixed. Whereas some researchers have observed opinion
changes resulting from exemplar exposure both in traditional (e.g., Daschmann, 2000) and in
online media (e.g., Sikorski, 2018), others could not find such effects (e.g., Zerback and
Peter, 2018). This leads to the question of why online astroturfing comments, in particular,
should exert a persuasive effect. The answer lies in the way they are composed: in many
cases, astroturfing comments do not merely consist of an opinion, but also include arguments
that support the position advocated. An analysis by the EU vs. Disinformation project (2019)
found that, particularly in the case of Russian propaganda, the most common strategy
employed was to offer alternative explanations for negative events for which Russia was
being publicly accused. These pro-Russian astroturfing messages deny Russian
responsibility, present other potential culprits, or portray Russia as the victim of widespread
and unfounded Russophobia or public persecution (see also Nimmo, 2015). Persuasion
research has repeatedly shown that arguments included in a message increase its persuasive
impact (Petty and Cacioppo, 1984), which should also be the case for astroturfing comments.
So far, only two studies have provided insights into the effects of astroturfing
activities on audience attitudes. However, in both cases the researchers examined other types
of astroturfing information rather than online comments. In an experiment, Cho, Martens, Kim,
and Rodrigue (2011) showed that people who were exposed to astroturf websites became
more uncertain, as compared with those who saw real grassroots websites, about the causes of
global warming and humans’ role in the phenomenon. Interestingly, these effects occurred
despite the fact that participants had (correctly) perceived the information from the
astroturfing websites to be less credible and the organization less trustworthy. In another
study, Pfau, Haigh, Sims, and Wigley (2007) investigated the effects of corporate front-group
stealth campaigns. Very similarly to astroturfing activities, these groups disseminate
persuasive messages while masking their true identity and interests. After being confronted
with the disguised corporate messages, the opinions of participants who had initially favored
restrictive federal efforts on different issues were significantly eroded. Given the theoretical
and empirical evidence, we assume that pro-Russian online comments will influence the
opinions of those who read them.
H1 Exposing individuals to pro-Russian astroturfing comments will change their opinions
in the direction of the comments.
The effects of astroturfing comments on opinion certainty
Whereas an attitude or opinion represents a person’s evaluation of an object, situation, or
person, attitude or opinion certainty refers to the conviction about the attitude or the extent to
which one is confident in it (Gross et al., 1995). Certainty is an important dimension of an
attitude or opinion, because it influences its stability, durability, and behavioral impact. There
are several theoretical reasons why astroturfing comments can be expected to influence
opinion certainty. Firstly, research has shown that opinion certainty can be altered by
messages contradicting an existing opinion, because these decrease the structural consistency
of the underlying beliefs or knowledge. Hence, information with contradictory evaluative
implications (e.g., messages that contradict the overall evaluation of an object) should
decrease opinion certainty (Smith et al., 2008). Secondly, opinion certainty is influenced by
the subjective ease with which opinion-relevant information comes into an individual’s mind.
If information supporting the opinion is easily cognitively retrieved (e.g., because the
individual has recently been exposed to it), the information is deemed more valid and thus
fosters opinion certainty (Tormala et al., 2002). Conversely, easily retrieved counter-
attitudinal information—as provided by astroturfing comments—should decrease opinion
certainty. Finally, and especially important for the case of astroturfing, is the fact that people
hold opinions with greater certainty when they perceive social consensus for them (e.g.,
Visser and Mirabile, 2004). As other studies have shown, online user comments can serve as
indicators of such a consensus (Zerback and Fawzi, 2017).
Although creating uncertainty among people in democratic societies is considered a
central goal of political astroturfing in the context of elections (Zhang et al., 2013), only the
previously mentioned study by Cho and colleagues (2011) and one further study by Kang and
colleagues (2016), which replicated the former’s examination of uncertainty, have
investigated such effects. Both show that individuals who were exposed to astroturfing
websites on global warming became more uncertain regarding the causes of climate change
and the role played by humans in this context. On the basis of the theoretical work and
empirical studies described, we assume that counter-attitudinal astroturfing comments will
decrease individual opinion certainty.
H2 Exposing individuals to pro-Russian astroturfing comments will decrease opinion
certainty.
Inoculation as a countermeasure to the effects of astroturfing comments
Given the supposed effects of astroturfing comments, the question arises as to what can be
done to neutralize these. One effective way of inhibiting or even preventing the impact of
persuasive attacks is to inoculate people against them (see Compton and Pfau, 2005).
Inoculation theory explains this process by reference to a biological analogy (McGuire,
1964): resistance to future persuasive messages can be increased by administering a
weakened version of the “virus” to the individual—in this case, the impending persuasive
message. An effective inoculation procedure consists of two core elements: threat and
refutational preemption (see Compton, 2012 for an overview). Threat means that the
individual receives a warning about a pending persuasive attack that will challenge their
existing attitudes. Following this warning, the person is provided with information intended
to strengthen the existing individual attitude in the face of the attack. This second element is
termed “refutational preemption,” and exists in two common variants: refutational-same
preemptions raise and refute exactly the same arguments as used in the subsequent attack
message, whereas refutational-different preemptions include arguments that are not part of
the subsequent attack. Empirical studies have shown that both preemption types can increase
resistance to attack messages (Banas and Rains, 2010; McGuire, 1964).
Despite the promising potential of the inoculation approach, to our knowledge no
study to date has investigated the effectiveness of inoculation treatments in the context of
astroturfing campaigns, although leading scholars in the field have emphasized its benefits
and suitability as a countermeasure to contemporary forms of disinformation (van der Linden
et al., 2017). While some researchers have tested the effectiveness of inoculation strategies in
the context of mis- or disinformation, their studies do not deal with astroturfing campaigns or
state-induced propaganda in general, but rather with conspiracy theories (Banas and Miller,
2013), media reports on climate change (Cook et al., 2017), and front-group stealth
campaigns (Pfau et al., 2007). Nevertheless, all these studies confirm the effectiveness of
preemptive inoculation measures in hampering the effects of persuasive messages on
personal opinions.
Whereas the works described above investigated inoculation to prevent opinion
change, Tormala and Petty (2002) offer an additional perspective that also allows us to derive
theoretical assumptions with regard to opinion certainty. They argue that the mere subjective
experience of resisting a persuasive attack can increase certainty, but only when the attack is
perceived to be strong. Although the authors clearly point out the differences between the
original inoculation approach and their theoretical conception, they state: “As long as
resistance does occur, the stronger the attack is perceived to be, the stronger the predicted
effects [on certainty] will be” (p. 1300). Because an inoculation message empowers people to
resist a subsequent persuasive attack, we expect a higher level of opinion certainty in those
who receive an inoculation treatment as compared with those who do not. This assumption
has also been confirmed by empirical studies showing that attitude certainty increased after
participants were inoculated against persuasive messages (Compton and Pfau, 2004; Pfau et
al., 2004). Therefore, we assume the following:
H3 Administering an inoculation treatment prior to an astroturfing comment will inhibit
the assumed persuasive effects on opinion change (H3a) and opinion certainty (H3b).
Durability of inoculation effects
One of the most challenging questions in the context of inoculation is how long it provides
protection from subsequent attack messages. McGuire (1964) assumes that some time must
pass between the inoculation treatment and the persuasive attack in order to strengthen
resistance. However, due to a declining motivation over time to defend one’s opinion, wear-
out effects could also occur, decreasing resistance in the long run (Insko, 1967). The co-
occurrence of both processes has led researchers to assume that the effectiveness of an
inoculation treatment follows an inversely U-shaped curve, which brings up the question of
the ideal time interval between inoculation and attack (Compton and Pfau, 2005). Empirical
studies have used varying time intervals, ranging from attack messages immediately
following the inoculation treatment to intervals of several months. In their extensive meta-
analysis of inoculation studies, Banas and Rains (2010) found some support for a declining
immunizing effect when they compared short (immediate attack message), moderate (attack
message after 1–13 days), and long (attack message after 14 days or later) intervals. However,
the decline was not significant. In his literature review, Compton (2012) found some
indication of a drop in resistance after a two-week period. Hence, we propose the following
research question:
RQ1 Will inoculation effects on opinion change (H3a) and opinion certainty (H3b) still
exist after a two-week delay between inoculation and the astroturfing comments?
Method
The following analyses are based on a three-wave online experiment employing a 3 (issue) x
5 (inoculation) x 2 (delay between inoculation and attack message) between-subjects design.
Participants were recruited via a commercial online access panel (Consumer Fieldwork
GmbH) in September 2018 and randomly assigned to one of the experimental conditions.
A total of 2,353 subjects took part in all three waves of the experiment.1 Their mean age was
48.8 years (SD = 15.2); 44.4% held the Abitur, the highest German secondary-school
qualification; and 49.9% were female.
Stimulus and procedure
Because online astroturfing comments often occur in the context of professional journalistic
content (Kovic et al., 2018), our experimental stimulus consisted of a short, fictitious
Facebook news teaser ostensibly from the largest German television newscast, Tagesschau
(see Online Supplementary File for example). To assess the generalizability of the results,
three identical teasers were produced, differing only in the issue they dealt with. Two of these
issues (the murder attempt on Sergei Skripal and the manipulation of the 2016 US
presidential election) related to direct Russian involvement, accusing the Russian government
of being responsible for the events concerned. The third issue (the use of toxic gas in Syria)
involved the Syrian government—a close ally of Russia—as a responsible actor. Each teaser
consisted of a picture illustrating the issue, a short headline and a caption, both depicting
either the Russian (issues one and two) or the Syrian government (issue three) as responsible
for the event.
Furthermore, each teaser was accompanied by two user comments representing
typical astroturfing attack messages. In constructing the astroturfing messages, we closely
followed the analysis offered by the EU vs. Disinformation initiative, which identified the
most prevalent argumentative figures used by Russian propagandists (EU vs. Disinformation,
2019). More specifically, the comments presented to the subjects all expressed doubt
regarding a Russian/Syrian involvement in the event by bringing up arguments supporting
this position and offering alternative explanations (e.g., “So the guy [Skripal] was a proven
double agent and had connections to the mafia. There were a lot of other people who wanted
to kill him”). To make sure that the strength of the arguments did not differ between the three
issues, since this would jeopardize the interpretation of potential effects, all comments were
pre-tested by N = 20 subjects who were not part of the final study. The pretest results
indicated that all arguments offered in the astroturfing comments were perceived as
moderately strong, with no significant differences between the issue conditions (see Table 1
in Online Supplementary File).
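To illustrate how such a pretest can be analyzed, the following minimal sketch (Python; the data file and column names are hypothetical, not taken from the study materials) runs a one-way ANOVA on perceived argument strength across the three issue conditions.

```python
import pandas as pd
from scipy import stats

# Hypothetical pretest data: one row per rating, columns "issue" and "argument_strength" (1-5)
pretest = pd.read_csv("pretest.csv")

# One group of ratings per issue condition (Syria, Skripal, US election)
groups = [g["argument_strength"].to_numpy() for _, g in pretest.groupby("issue")]

f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A non-significant F suggests comparable argument strength across the three issues.
```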
[FIGURE 1]
The experiment was carried out in three waves, covering a period of four weeks
(Figure 1). In wave one, we measured participants’ prior opinions and opinion certainty for
all three issues and collected socio-demographic information. To avoid raising suspicion
regarding the true goal of these questions, the first wave took place two weeks before the
actual stimulus presentation. In addition, all issue-specific questions were embedded in larger
item sets also encompassing other issues. Two weeks later, in wave two, participants received
a second questionnaire including the inoculation treatments. In line with our theoretical
outline, three different inoculation messages were administered. The “threat only” inoculation
condition (IC1) included only a warning about commenters paid by the Russian government
(so-called trolls), who attempt to sway citizens’ opinions regarding the respective issue. In
the “refutational-different” condition (IC2), subjects received exactly the same warning, but
were additionally informed about the general persuasive strategies employed by these trolls,
namely, that they would try to offer alternative explanations for events in order to take
Russia/Syria out of the line of fire. Subjects were also told that these alternative explanations
contradicted independent official investigations of the events. Similarly, in the “refutational-
same” condition (IC3), subjects were warned about the possible persuasive attempts and
informed about the strategy; this time, however, they were also told the exact arguments that
the trolls would put forward.
In order to determine the persuasive effects of the astroturfing comments on subjects’
opinions and opinion certainty (H1 and H2), the inoculation factor also included two
additional control conditions, in which subjects did not receive an inoculation treatment. In
control condition 1 (CC1) participants were only exposed to the news teaser, without the
astroturfing comments; in CC2 they saw the teaser including the comments. Consequently,
differences between the two control groups indicate the astroturfing comments’ effects.
In order to assess the durability of the three inoculation treatments (RQ1), all inoculation
treatments were administered in wave two; however, the procedure differed with respect
to the point in time at which subjects were exposed to the subsequent news teaser with the
astroturfing comments. Half of the subjects received the teaser including the comments
immediately after the inoculation; the other half received it two weeks later (wave three).
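As an illustration of the resulting 3 x 5 x 2 between-subjects structure, the following sketch (Python) enumerates the 30 experimental cells and randomly assigns participants to them; the condition labels are ours and merely mirror the CC/IC notation used above.

```python
import random
from itertools import product

# Factor levels; labels are illustrative, not the authors' internal coding.
issues = ["Skripal", "US election", "Syria"]
inoculation = ["CC1_no_inoculation_no_comments", "CC2_no_inoculation_comments",
               "IC1_threat_only", "IC2_refutational_different", "IC3_refutational_same"]
delay = ["immediate_attack", "attack_after_two_weeks"]

cells = list(product(issues, inoculation, delay))  # 3 x 5 x 2 = 30 between-subjects cells

def assign_condition(rng: random.Random) -> tuple:
    """Draw one of the 30 cells with equal probability for a newly recruited participant."""
    return rng.choice(cells)

rng = random.Random(2018)  # fixed seed so this illustration is reproducible
print([assign_condition(rng) for _ in range(5)])
```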
Measures
Because all astroturfing comments were intended to raise doubt about Russian/Syrian
involvement in the events presented, we asked our participants specifically for their opinion
on the national government’s responsibility for the event, and how certain they were of this
opinion. Subjects’ opinions were measured using a five-point Likert scale indicating
agreement with the statement that Russia/Syria was responsible for the event described in the
news teaser (1 “Do not agree” to 5 “Fully agree”). The measure for opinion certainty was
adopted from Tormala and Petty (2002), asking how certain the subject was of the opinion he
or she had indicated above (1 “Not certain at all” to 5 “Extremely certain”). By subtracting
participants’ post-stimulus answers from their pre-stimulus answers, two scores were
calculated, reflecting changes in opinion and opinion certainty before and after stimulus
presentation (opinion change: M_Syria = 0.24, SD_Syria = 1.04; M_Skripal = 0.33, SD_Skripal = 1.07;
M_US election = 0.30, SD_US election = 1.00; change in opinion certainty: M_Syria = 0.27, SD_Syria =
1.27; M_Skripal = 0.31, SD_Skripal = 1.31; M_US election = 0.07, SD_US election = 1.21). Positive values
on the opinion-change measure indicate that respondents held Russia/Syria less responsible
for the events after they were confronted with the stimulus. Positive values on the opinion-
certainty change-measure indicate higher uncertainty as compared to the initial certainty
assessment.
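The computation of these difference scores can be sketched as follows (Python; the data file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical columns: issue, opinion_pre, opinion_post, certainty_pre, certainty_post (1-5 scales)
df = pd.read_csv("experiment.csv")

# Pre-stimulus minus post-stimulus, so positive values indicate that a respondent holds
# Russia/Syria less responsible (opinion) or is less certain (certainty) than before exposure.
df["opinion_change"] = df["opinion_pre"] - df["opinion_post"]
df["certainty_change"] = df["certainty_pre"] - df["certainty_post"]

print(df.groupby("issue")[["opinion_change", "certainty_change"]].agg(["mean", "std"]))
```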
Results
Manipulation Checks
Manipulation checks regarding the perception of the inoculation message yielded satisfactory
results. Most subjects in the inoculation conditions correctly recalled that they had received
an inoculation message (88.5%), and those who had not been inoculated correctly remembered
that they had not seen such a message (87.9%), χ²(2, N = 2221) = 1281.99, p < .001. Likewise,
most of the participants who were exposed to astroturfing comments correctly remembered
having seen comments beneath the news teaser (77.1%), as did those in the non-comment
condition, where 73.8% stated that they had not seen any comments, χ²(2, N = 2233) = 540.66,
p < .001.
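A check of this kind can be sketched as a chi-square test of the association between assigned condition and recalled exposure (Python; the data file, column names, and recall categories are hypothetical):

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical columns: inoculated (yes/no) and recall_inoculation (e.g. yes/no/don't know)
df = pd.read_csv("experiment.csv")

table = pd.crosstab(df["inoculated"], df["recall_inoculation"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}, N = {table.to_numpy().sum()}) = {chi2:.2f}, p = {p:.3f}")
```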
Effects of online astroturfing comments on opinions and opinion certainty
To test whether the online astroturfing comments affected participants’ opinions, we first
focus on the non-inoculated subjects in the two control conditions by comparing participants
who only saw the news teaser (CC1) to those exposed to the news teaser including the
astroturfing comments (CC2). Figure 2 depicts opinion changes in both groups, with positive
scores indicating changes in a pro-Russian/pro-Syrian direction, that is towards holding them
less responsible.2 Firstly, it is interesting to see that, over the course of the two weeks
between the pre- and post-stimulus measurements, subjects in all issue conditions became
more supportive of the Russian/Syrian position. However, while this effect was only marginal
in the news-teaser-only condition (CC1) (M = 0.12, SD = 0.96), it was clearly pronounced for
those who had been exposed both to the news teaser and to the online astroturfing comments
(M = 0.42, SD = 1.08). Put differently, those who found pro-Russian/pro-Syrian astroturfing
comments beneath the news teaser ascribed significantly less responsibility to Russia/Syria
for the event depicted, b = 0.30, p < .001.3 From a cross-issue perspective, H1 can thus be
confirmed. However, a closer inspection of the issue-specific patterns shows that the
astroturfing comments’ persuasive effect can mainly be traced back to the Skripal case, b =
0.54, p < .001, and to some extent to the Syria issue, b = 0.21, p = .09. Hence, at the issue
level, H1 finds clear support only in the Skripal case.
[FIGURES 2 and 3]
We further assumed that online astroturfing comments would increase uncertainty in those
who initially thought that Russia/Syria was responsible for the negative events (H2).
Therefore, unlike in the previous analysis, we confine our examination to subjects who had
initially seen the two states as culprits (indicated by values of pre-stimulus opinions of 4 or 5;
N = 995). Figure 3 shows that, among these participants, astroturfing comments affected
opinion certainty in the expected direction across all issue conditions.4 Again, when
comparing the two control groups CC1 (M = 0.34, SD = 1.19) and CC2 (M = 0.64, SD = 1.11),
participants who saw counter-attitudinal online astroturfing comments became
significantly more uncertain of their initial view that Russia/Syria were to blame for the
depicted events, b = 0.30, p = .009, as compared with those who did not see the comments.
From a cross-issue perspective, H2 can thus be confirmed. Again, an issue-specific
examination shows that the effect was only significant in the Skripal scenario, b = 0.58, p =
.005. Therefore, H2 can only be confirmed in this case.
Effects of inoculation treatments
In a next step, we examine whether the three inoculation strategies were able to prevent the
persuasive effects of the astroturfing comments. In order to do so, we compare the three
groups who received the astroturfing comments after being inoculated (IC1, IC2, and IC3) to
the group who had seen the same comments without prior inoculation (CC2). An effective
inoculation treatment should have prevented opinion change, ideally reducing it to the level
of those who had only seen the news teaser without any astroturfing comments (CC1). A
visual inspection of Figure 2 supports this notion, at least for the refutational-same
inoculation treatment (IC3), b = -0.20, p = .007: participants who were educated in advance
about Russia’s persuasive goals and exact arguments were less influenced by the astroturfing
comments (M = 0.22, SD = 1.05) as compared with non-inoculated subjects (M = 0.42, SD =
1.08). In contrast, the remaining two inoculation strategies (threat only: b = -0.04, p = .561;
refutational-different: b = -0.06, p = .391) did not prevent opinion change in the direction of
the astroturfing comments. A further issue-specific examination of the data shows that the
overall effect of the refutational-same preemption was largely rooted in the Skripal and Syria
cases. Multiple group comparisons indicate that the refutational-same strategy reduced
opinion change in both issue conditions to a sufficient level, with a significant difference
from non-inoculated participants receiving comments (CC2) (b_Syria = -0.24, p = .06; b_Skripal =
-0.36, p = .01) and a non-significant difference from those who had only seen the news teaser
(CC1) (b_Syria = -0.03, p = .84; b_Skripal = 0.18, p = .15). Hence, H3a finds support in these two
cases (see Table 2 in Online Appendix for complete documentation of means and statistical
tests).
Following the previous logic, we finally examined the efficiency of inoculation in
relation to opinion-certainty changes (H3b). Again, the visual patterns in Figure 3 seem to
support the effectiveness of the refutational-same treatment, which hampered the increase in
uncertainty (M = 0.43, SD = 1.19) as compared with non-inoculated subjects in CC2 (M =
0.64, SD = 1.11), although this difference fell short of conventional significance, b = -0.21, p = .07. As Table 3
(Online Appendix) shows, none of the three inoculation strategies was able to prevent
changes in opinion certainty within the single-issue conditions significantly.
Duration of inoculation effects
In a final step, we examined how long the observed immunization effect persisted (RQ1). The
two lines in Figure 4 represent the different delay conditions implemented in our experiment
design (immediate and delayed astroturfing attack). It is important to recall that delay
represents a between-subjects factor, so for each delay condition, we collected data across all
inoculation groups.
[FIGURE 4]
As can be seen, the two lines mostly parallel each other, with only minor and non-significant
differences when comparing the short- and long-term conditions (see Table 4 in Online
Supplementary File for means and statistical tests). However, there is one noteworthy
exception, which manifests itself in a marginally significant interaction effect between inoculation
and delay, F(4, 2054) = 2.23, p = .06: the refutational-same treatment, which we have
identified as the most potent in reducing opinion change, was only effective when
administered immediately prior to the astroturfing comments (M_short delay = 0.09, SD_short delay =
1.03), whereas its effect largely diminished after two weeks (M_long delay = 0.36, SD_long delay =
1.01), t(385) = -2.64, p = .01. When we look at the issue-specific short- and long-term effects,
we find exactly the same pattern, but, again, only in the Skripal case, indicating a significant
decrease over time in the immunizing effect of the refutational-same treatment (M_short delay =
-0.13, SD_short delay = 1.10; M_long delay = 0.60, SD_long delay = 1.12), t(135) = -3.18, p = .002.
Corresponding mean differences were not observed in the Syria condition, t(118) = -0.233, p = .816,
or the US election condition, t(137) = -0.909, p = .365.
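The short- versus long-delay comparison within the refutational-same condition corresponds to an independent-samples t-test of the kind sketched below (Python; condition and column labels are hypothetical and reuse the illustrative notation introduced earlier):

```python
import pandas as pd
from scipy.stats import ttest_ind

# Hypothetical columns: condition, delay, opinion_change
df = pd.read_csv("experiment.csv")

ic3 = df[df["condition"] == "IC3_refutational_same"]
short_delay = ic3.loc[ic3["delay"] == "immediate_attack", "opinion_change"]
long_delay = ic3.loc[ic3["delay"] == "attack_after_two_weeks", "opinion_change"]

t_stat, p_value = ttest_ind(short_delay, long_delay)
print(f"t({len(short_delay) + len(long_delay) - 2}) = {t_stat:.2f}, p = {p_value:.3f}")
```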
With regard to changes in opinion certainty, we found no significant three-way
interaction between issue, inoculation strategy and delay, F(8, 965) = 0.55, p = .820. Short-
and long-term inoculation effects on opinion certainty did not differ significantly across the
three issue conditions.
Discussion
In this paper, we have examined the persuasive effects of astroturfing comments posted
beneath news items on Facebook in the context of three Russia-related issues: the poisoning
of former Russian intelligence officer Sergei Skripal, the manipulation of the 2016 US
presidential election, and the use of toxic gas by a close Russian ally, the Syrian government.
We define as astroturfing comments those that imitate ordinary citizens’ voices to create the
impression that a certain opinion has widespread public support, while the real agent behind
the commenter’s message is concealed. Drawing upon extant in-depth analysis of Russia’s
online propaganda (EU vs. Disinformation, 2019; Nimmo, 2015), we designed our
astroturfing stimuli comments so that these would support a pro-Russian position by offering
alternative explanations for the three events, and by denying Russia’s responsibility for them.
In a subsequent step, we tested the effectiveness of three different inoculation strategies in
preventing the persuasive effects of astroturfing comments: (1) threat-only, (2) refutational-
different, and (3) refutational-same treatments.
The disconcerting potential of Russia’s trolls: The impact of astroturfing comments
The results of our study show that astroturfing comments can indeed change audiences’
political opinions and increase uncertainty. However, these effects did not occur with equal
strength across all three issues. While we could clearly observe effects in the Skripal case and
to some extent in the Syria scenario, we did not find equally strong evidence in the context of
the manipulations of the 2016 US presidential election. Against this backdrop, a key task for
further research appears to be to specify the reasons for such issue-specific differences. For
instance, it is possible that participants’ opinions in the Skripal and the Syria conditions were
more uncertain in the first place and were therefore easier to influence through astroturfing
attacks. However, our data does not support this interpretation. A comparison of pre-stimulus
opinion-certainty scores shows that subjects’ issue-specific uncertainty levels did not differ
significantly (M_Syria = 3.21, SD_Syria = 1.09; M_Skripal = 3.30, SD_Skripal = 1.04; M_US election = 3.28,
SD_US election = 1.08), F(2, 2252) = 1.31, p = .270. Another possible reason might be that
respondents’ opinions about the US presidential election and Syria were more difficult to
influence through astroturfing comments because these scenarios are more abstract and
therefore harder to process and understand, especially when someone offers alternative
explanations for them. The Skripal case, by contrast, is a more narrowly defined and concrete
event, which makes possible explanations easier to understand and accept.
Unfortunately, our data did not enable us to test this assumption.
Immunizing citizens against astroturfing campaigns
This study advances research on inoculation effects because it is the first to transfer this
approach to the realm of online astroturfing comments, that is to one of the currently most
widely debated forms of disinformation in the context of democratic elections (Ferrara, 2017;
Kovic et al., 2018; Zelenkauskaite and Balduccini, 2017). As extant inoculation research
conducted in other contexts has shown, inoculation messages can help to confer on
individuals cognitive resistance to “a range of falsehoods in diverse domains such as climate
change, public health, and emerging technologies” (van der Linden et al., 2017: 1141).
Contrary to these expectations, only one strategy proved to be effective in mitigating the
persuasive effects of astroturfing comments: only when subjects were educated in advance
about the exact arguments deployed by the Russian trolls (refutational-same) could changes in
opinions and opinion certainty be prevented. This result adds to a rather disconcerting
overall picture. In essence, it means that, in order to neutralize the effects of astroturfing
campaigns sponsored by foreign governments or other powerful actors, citizens will have to
learn the very specific lines of argument that these astroturfing actors use. Without any doubt,
designing and disseminating such highly tailored inoculation messages in a timely manner
will require enormous resources, as well as highly professionalized counter-campaigning.
However, it is important to recall that in our study participants were inoculated only once,
which probably limited the immunizing potential of the treatments. There is empirical
evidence supporting the notion that booster sessions used to refresh the initial inoculation
message could enhance its power and durability. However, the overall results on the
effectiveness of booster sessions are mixed, and their impact probably also depends on the
timing of the repeated exposure (Compton and Pfau, 2005).
A third disconcerting finding of our study was that even the immunizing effect of the
refutational-same treatment was only short-lived. It vanished almost completely after a two-
week delay. This finding is in line with other inoculation studies in the context of political
issues (Pfau and Burgoon, 1988).
The potentially negative effects of immunizing citizens against astroturfing comments
With regard to transferring inoculation research to the realm of astroturfing comments,
perhaps the most difficult problem to solve relates to the fact that such comments can
typically not be distinguished from genuine citizens’ voices (the defining element of
astroturfing). This poses a dilemma because, while inoculation messages might mitigate the
harmful effects of astroturfing messages (positive consequence), they might also undermine
the credibility of citizen commenting in public online spaces, and of online deliberation in
general (negative consequence). This potential “side-effect” of inoculation campaigns
(Compton, 2012: 15) could only be prevented if astroturfing comments were unambiguously
identifiable and distinguishable from authentic citizen comments—which will almost never
be the case. Those who initiate counter campaigns will thus have to make difficult decisions
as to whether, and how, citizens can and should be inoculated against political astroturfing
campaigns. Rather abstract “threat-only” treatments, for instance, can be disseminated with
relatively limited costs and efforts. Yet these have the disadvantage that they undermine the
credibility of online citizen debate around entire political issues. Moreover, they are,
according to our findings, relatively inefficient. Highly specific refutational-same treatments,
by contrast, can be very effective in mitigating the persuasive effects of astroturfing
comments, as the findings of this study indicate. They also have the advantage that they
discredit only those user comments that actually convey very narrowly defined pieces of
misleading and inaccurate information. The downside of refutational-same inoculation
treatments is, however, that they require extensive resources to tailor and administer such
highly issue- and argument-specific counter messages. Finally, refutational-different
treatments can be seen as taking up a position between “threat-only” and “refutational-same”
treatments, combining their benefits and drawbacks.
Practical applications and promising paths for future research
Our results also have implications for how news organizations and professional journalists,
who have to moderate comment sections during online astroturfing campaigns, can integrate
the dissemination of inoculation messages into their work routines. Firstly, as this study
indicates, the effectiveness of threat-only and refutational-different preemptions is very
limited. Designing relatively abstract inoculation messages based on these strategies thus
appears not to be a suitable tool for preventing the impact of political astroturfing campaigns
on news audiences. Since immunizing effects appear only to derive from refutational-same
treatments, messages that present the exact arguments used in subsequent astroturfing attacks
will have to be first designed and then disseminated among a news audience. Secondly, the
short-term nature of the effects (even of refutational-same treatments) detected in our study
further implies that inoculation messages will have to be presented to the audience shortly
before they receive astroturfing comments. In reality, this means that journalists or
moderators of public social-media accounts have to inoculate their audiences “just in time”.
On the basis of the findings of this study, we have to conclude that banners or warnings
placed in the immediate vicinity of commenting fields and delivering highly issue-specific
refutational-same treatments appear to be the only promising strategy for immunizing news
audiences against astroturfing content posted by paid political trolls or similar means.
Of course, our study also has limitations. Although we increased external validity by
including three different issues in our design and by investigating the short- and long-term
effects of astroturfing and inoculation messages, we still relied on results gathered in an
experimental setting. Participants were purposely exposed to stimuli that they otherwise
might not have encountered, for example because they did not use social media or did not
read the comments beneath news articles. In this sense, the effects of comments that we
found probably overestimate the effect on society as a whole. On the other hand, participants
in our experiment were only exposed once to the astroturfing comments and to the
inoculation messages. In a real-world environment, people probably encounter comments
repeatedly, which enhances the astroturfing comments’ persuasive power. The same is true of
inoculation messages: simply because a one-time inoculation proves to be inefficient or loses
its effect after a while, this does not mean that inoculation as a countermeasure to astroturfing
is an ineffective strategy. It seems plausible that multiple inoculation treatments would
sustain the immunization or might even increase it by aggregating the effects of the single
treatments. The question of how repeated exposure influences the persuasive effects of
astroturfing comments, and also those of inoculation messages, represents a further promising
avenue for future research. By following up on these and related paths of scrutiny, future
research should theorize and investigate in significantly more depth the psychological
mechanisms that facilitate the disconcerting persuasive potential of political trolling. On the
basis of this knowledge, researchers then need to identify and further specify the most promising
strategies for minimizing the harm that this type of political disinformation can cause to
democratic public life.
References
Banas JA and Miller G (2013) Inducing resistance to conspiracy theory propaganda: Testing
inoculation and metainoculation strategies. Human Communication Research 39(2): 184–
207. DOI: 10.1111/hcre.12000.
Banas JA and Rains SA (2010) A meta-analysis of research on inoculation theory.
Communication Monographs 77(3): 281–311. DOI: 10.1080/03637751003758193.
Bennett WL and Livingston S (2018) The disinformation order: Disruptive communication
and the decline of democratic institutions. European Journal of Communication 33(2):
122–139. DOI: 10.1177/0267323118760317.
Bessi A and Ferrara E (2016) Social Bots Distort the 2016 US Presidential Election Online
Discussion. First Monday 21(11).
Bugorkova O (2015) Ukraine conflict: Inside Russia's 'Kremlin troll army'. BBC News.
Available at: https://www.bbc.com/news/world-europe-31962644.
Cho CH, Martens ML, Kim H, et al. (2011) Astroturfing global warming: It isn’t always
greener on the other side of the fence. Journal of Business Ethics 104(4): 571–587. DOI:
10.1007/s10551-011-0950-6.
Compton J (2012) Inoculation Theory. In: Dillard J and Shen L (eds) The SAGE Handbook of
Persuasion: Developments in Theory and Practice: Thousand Oaks, CA: Sage
Publications, pp. 1–20.
Compton JA and Pfau M (2004) Use of inoculation to foster resistance to credit card
marketing targeting college students. Journal of Applied Communication Research 32(4):
343–364. DOI: 10.1080/0090988042000276014.
Compton JA and Pfau M (2005) Inoculation theory of resistance to influence at maturity:
Recent progress in theory development and application and suggestions for future
research. Annals of the International Communication Association 29(1): 97–146. DOI:
10.1080/23808985.2005.11679045.
Cook J, Lewandowsky S and Ecker UKH (2017) Neutralizing misinformation through
inoculation: Exposing misleading argumentation techniques reduces their influence. PloS
one 12(5): 1-21. DOI: 10.1371/journal.pone.0175799.
Daschmann G (2000) Vox pop & vox polls: The impact of poll results and voter statements in
the media on the perception of a climate of opinion. International Journal of Public
Opinion Research 12(2): 160–181. DOI: 10.1093/ijpor/12.2.160.
Domke D, Shah DV and Wackman DB (1998) Media priming effects: Accessibility,
association, and activation. International Journal of Public Opinion Research 10(1): 51–
74. DOI: 10.1093/ijpor/10.1.51.
EU vs. Disinformation (2019) Conspiracy mania marks one-year anniversary of the Skripal
poisoning. Available at: https://euvsdisinfo.eu/conspiracy-mania-marks-one-year-
anniversary-of-the-skripal-poisoning/.
European Commission (2018) A multi-dimensional approach to disinformation: Report of the
independent high level group on fake news and online disinformation. Luxembourg:
Publications Office of the European Union.
Ferrara E (2017) Disinformation and social bot operations in the run up to the 2017 French
presidential election. First Monday 22(8): 1-30. DOI: 10.2139/ssrn.2995809.
Gross SR, Holtz R and Miller N (1995) Attitude certainty. In: Petty RE and Krosnick JA
(eds) Attitude strength. Antecedents and consequences: Mahwah, NJ: Lawrence Erlbaum
Associates, pp. 215–245.
Hayes AF (2005) Statistical methods for communication science. Mahwah, N.J: Lawrence
Erlbaum Associates.
Hovland CI, Janis I and Kelley HH (1953) Communication and persuasion: Psychological
studies of opinion change. New Haven, CO, London: Yale University Press.
Insko CA (1967) Theories of attitude change. New York: Appleton-Century-Crofts.
Kang J, Kim H, Chu H, et al. (2016) In distrust of merits: The negative effects of astroturfs
on people's prosocial behaviors. International Journal of Advertising 35(1): 135–148.
DOI: 10.1080/02650487.2015.1094858.
Keller FB, Schoch D, Stier S, et al. (2017) How to manipulate social media: Analyzing
political astroturfing using ground truth data from South Korea.
King G, Pan J and Roberts ME (2017) How the Chinese government fabricates social media
posts for strategic distraction, not engaged argument. American Political Science Review
111(03): 484–501. DOI: 10.1017/S0003055417000144.
Kovic M, Rauchfleisch A, Sele M, et al. (2018) Digital astroturfing in politics: Definition,
typology, and countermeasures. Studies in Communication Sciences 18(1): 69–85.
Lefevere J, Swert K de and Walgrave S (2012) Effects of popular exemplars in television
news. Communication Research 39(1): 103–119. DOI: 10.1177/0093650210387124.
Lyon TP and Maxwell JW (2004) Astroturf: Interest Group Lobbying and Corporate
Strategy. Journal of Economics & Management Strategy 13(4): 561–597. DOI:
10.1111/j.1430-9134.2004.00023.x.
McGuire WJ (1964) Inducing resistance to persuasion: Some contemporary approaches. In:
Berkowitz L (ed.) Advances in experimental social psychology: New York: Academic
Press, pp. 191–229.
McNutt J and Boland K (2007) Astroturf, technology and the future of community
mobilization: Implications for nonprofit theory. The Journal of Sociology & Social
Welfare 34(3): 165–178.
Nimmo B (2015) Anatomy of an info-war: How Russia’s propaganda machine works, and
how to counter it. Available at: https://www.stopfake.org/en/anatomy-of-an-info-war-
how-russia-s-propaganda-machine-works-and-how-to-counter-it/.
Petty RE and Cacioppo JT (1984) The effects of involvement on responses to argument
quantity and quality: Central and peripheral routes to persuasion. Journal of Personality
and Social Psychology 46(1): 69–81. DOI: 10.1037/0022-3514.46.1.69.
Pfau M and Burgoon M (1988) Inoculation in political campaign communication. Human
Communication Research 15(1): 91–111. DOI: 10.1111/j.1468-2958.1988.tb00172.x.
Pfau M, Compton J, Parker KA, et al. (2004) The traditional explanation for resistance versus
attitude accessibility. Human Communication Research 30(3): 329–360. DOI:
10.1111/j.1468-2958.2004.tb00735.x.
Pfau M, Haigh MM, Sims J, et al. (2007) The influence of corporate front-group stealth
campaigns. Communication Research 34(1): 73–99. DOI: 10.1177/0093650206296083.
Sikorski C von (2018) The effects of reader comments on the perception of personalized
scandals: Exploring the roles of comment valence and commenters’ social status.
International Journal of Communication 10: 4480–4501.
Smith SM, Fabrigar LR, MacDougall BL, et al. (2008) The role of amount, cognitive
elaboration, and structural consistency of attitude-relevant knowledge in the formation of
attitude certainty. European Journal of Social Psychology 38(2): 280–295. DOI:
10.1002/ejsp.447.
Taylor SE and Thompson SC (1982) Stalking the elusive "vividness" effect. Psychological
Review 89(2): 155–181. DOI: 10.1037//0033-295X.89.2.155.
Tormala ZL and Petty RE (2002) What doesn't kill me makes me stronger: The effects of
resisting persuasion on attitude certainty. Journal of Personality and Social Psychology
83(6): 1298–1313. DOI: 10.1037/0022-3514.83.6.1298.
Tormala ZL, Petty RE and Briñol P (2002) Ease of retrieval effects in persuasion: A self-
validation analysis. Personality and Social Psychology Bulletin 28(12): 1700–1712. DOI:
10.1177/014616702237651.
van der Linden S, Maibach E, Cook J, et al. (2017) Inoculating against misinformation.
Science 358(6367): 1141–1142. DOI: 10.1126/science.aar4533.
Visser PS and Mirabile RR (2004) Attitudes in the social context: The impact of social
network composition on individual-level attitude strength. Journal of Personality and
Social Psychology 87(6): 779–795. DOI: 10.1037/0022-3514.87.6.779.
Weedon J, Nuland W and Stamos A (2017) Information operations on Facebook.
Woolley SC and Guilbeault DR (2017) Computational propaganda in the United States of
America: Manufacturing consensus online. Working Paper No. 2017.5. Oxford:
University of Oxford.
Zelenkauskaite A and Balduccini M (2017) “Information warfare” and online news
commenting: Analyzing forces of social influence through location-based commenting
user typology. Social Media + Society 3(3): 1-13. DOI: 10.1177/2056305117718468.
Zerback T and Fawzi N (2017) Can online exemplars trigger a spiral of silence? Examining
the effects of exemplar opinions on perceptions of public opinion and speaking out. New
Media & Society 19(7): 1034–1051. DOI: 10.1177/1461444815625942.
Zerback T and Peter C (2018) Exemplar effects on public opinion perception and attitudes:
The moderating role of exemplar involvement. Human Communication Research 14(2):
125. DOI: 10.1093/hcr/hqx007.
Zhang J, Carpenter D and Ko M (2013) Online astroturfing: A theoretical perspective. In:
Proceedings of the Nineteenth Americas Conference on Information Systems, Chicago,
Illinois, 15–17 August 2013.
Zillmann D (1999) Exemplification theory: Judging the whole by some of its parts. Media
Psychology 1(1): 69–94. DOI: 10.1207/s1532785xmep0101_5.
1 A statistical power analysis showed that, in order to detect small interaction effects between
all three experimental factors, a minimum sample size of N = 2,283 was necessary.
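The following sketch (Python) shows how such an a-priori power calculation can be approximated from the noncentral F distribution. The inputs (a small effect of f = 0.10, alpha = .05, a target power of .95, and 8 numerator degrees of freedom for the three-way interaction across 30 cells) are our assumptions for illustration; the exact settings used for the original analysis are not reported here.

```python
from scipy.stats import f as f_dist, ncf

def anova_term_power(n_total, effect_f=0.10, df_num=8, n_cells=30, alpha=0.05):
    """Power of the F test for one fixed-effects ANOVA term (here: the 3 x 5 x 2
    three-way interaction, df_num = 2 * 4 * 1 = 8) at total sample size n_total."""
    df_den = n_total - n_cells
    f_crit = f_dist.ppf(1 - alpha, df_num, df_den)
    noncentrality = effect_f ** 2 * n_total   # lambda = f^2 * N
    return 1 - ncf.cdf(f_crit, df_num, df_den, noncentrality)

# Find the smallest N reaching the assumed target power of .95
n = 100
while anova_term_power(n) < 0.95:
    n += 1
print(n, round(anova_term_power(n), 3))
```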
2 See also Table 2 in Online Supplementary File for complete documentation of means,
standard deviations, and statistical tests for group comparisons.
3 To test the groups for differences, we followed Hayes (2005): we dummy-coded the inoculation
factor and entered the resulting k-1 dummy variables as independent variables in a linear
regression model, with the respective comparison group serving as the reference category.
Besides testing for significant mean differences, the unstandardized regression coefficient b
indicates the direction and magnitude of the mean difference between two groups.
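As a minimal sketch of this dummy-coded comparison (not the authors' analysis script), the same logic can be expressed in Python with statsmodels. The data frame, the variable names condition and opinion_change, and the simulated group means are hypothetical placeholders.

```python
# Minimal sketch of the dummy-coded group comparison described above,
# using simulated placeholder data (not the study's data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
conditions = ["CC1", "CC2", "IC1", "IC2", "IC3"]
means = [0.12, 0.42, 0.37, 0.35, 0.22]  # arbitrary values, loosely echoing Table 2

df = pd.DataFrame({
    "condition": np.repeat(conditions, 100),
    "opinion_change": np.concatenate(
        [rng.normal(loc=m, scale=1.0, size=100) for m in means]
    ),
})

# Treatment (dummy) coding of the 5-level inoculation factor with CC1 as the
# reference category: each coefficient b equals the mean difference between
# that condition and CC1, and its t-test checks whether the difference is
# statistically significant.
model = smf.ols(
    "opinion_change ~ C(condition, Treatment(reference='CC1'))", data=df
)
print(model.fit().summary())
```

Changing the reference argument swaps the comparison group, which mirrors the procedure of re-running the model with a different reference category.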
4 See also Table 3 in the Online Supplementary File for complete documentation of means,
standard deviations, and statistical tests for group comparisons.
Figure 1
Experimental design
[Flow diagram of the experimental procedure: pre-screening of opinions, public opinion perceptions, and randomization; inoculation pre-treatment (no inoculation message in CC1 and CC2, threat only in IC1, refutational different in IC2, refutational same in IC3); astroturfing attack message (news teaser only in CC1, news teaser with comments in CC2 and IC1-IC3), delivered either immediately after the pre-treatment (short-term inoculation effects) or after a 2-week delay (long-term inoculation effects); post-measurement of public opinion perceptions.]
The design was replicated for all three issue conditions.
Figure 2
Effects of astroturfing comments and inoculation treatments on opinion change
N = 2,064
[Bar chart. Y-axis: opinion change (0 to 1). X-axis: news teaser only (CC1), news teaser with comments (CC2), inoculation: threat only (IC1), inoculation: refutational different (IC2), inoculation: refutational same (IC3); separate bars for the Syria, Skripal, and US election issues.]
Figure 3
Effects of astroturfing comments and inoculation treatments on opinion-certainty change
N = 995 participants initially indicating that Russia/Syria was responsible for the event depicted (values of pre-stimulus opinion 4 or 5).
[Bar chart. Y-axis: opinion certainty change (0 to 1). X-axis: news teaser only (CC1), news teaser with comments (CC2), inoculation: threat only (IC1), inoculation: refutational different (IC2), inoculation: refutational same (IC3); separate bars for the Syria, Skripal, and US election issues.]
Figure 4
Short- and long-term inoculation effects on opinion change
N = 2,064
[Bar chart. Y-axis: opinion change (0 to 1). X-axis: news teaser only (CC1), news teaser with comments (CC2), inoculation: threat only (IC1), inoculation: refutational different (IC2), inoculation: refutational same (IC3); separate bars for the astroturfing attack delivered immediately after inoculation and two weeks after inoculation.]
Supplementary Materials to the Article
The disconcerting potential of Russia’s trolls: Persuasive effects of astroturfing
comments and three strategies for inoculation against them
Contents
Table 1 Perceived argument strength in online astroturfing comments (pretest results)
Table 2 Group mean differences in opinion change
Table 3 Group mean differences in opinion certainty change
Table 4 Differences in opinion change after a short and long delay
Figure I Example of the inoculation message (refutational different)
Figure II Example of the news teaser including astroturfing comments (Skripal issue)
Table 1 Perceived argument strength in online astroturfing comments (pretest results)

Syria (N = 19)
  Comment 1: “Like Assad's the only one with poison gas. What about the thousands of IS henchmen? If someone is known for massacring civilians, then it's probably them.” M = 3.71 (SD = 0.85), α = 0.91
  Comment 2: “Nothing's proved! Wouldn't be the first time somebody invented weapons of mass destruction to wage a fucking war.” M = 3.59 (SD = 1.21), α = 0.97
  Overall strength: M = 3.65 (SD = 0.94), α = 0.94

Skripal (N = 20)
  Comment 1: “So the guy was a proven double agent and had connections to the mafia. There were a lot of other people who wanted to kill him.” M = 2.73 (SD = 1.18), α = 0.96
  Comment 2: “If the Russians wanted Skripal dead, they simply would have done it without leaving traces. But no, they used a poison that directly points to them. Right!” M = 3.01 (SD = 1.41), α = 0.96
  Overall strength: M = 2.87 (SD = 1.21), α = 0.96

US election (N = 17)
  Comment 1: “Russian wire-pullers? Yeah sure! Cold-blooded economic interests are behind the election manipulations: Facebook, Cambridge Analytica. Do I have to say any more?” M = 2.65 (SD = 1.02), α = 0.93
  Comment 2: “Nothing's proved! It wouldn't be the first time someone manipulated an election to gain power in the country.” M = 3.32 (SD = 1.02), α = 0.95
  Overall strength: M = 2.99 (SD = 0.81), α = 0.91
* N = 58 participants took part in the pretest and rated argument strength on bipolar scales
from 1 to 5 using the following items: not convincing – convincing, weak – strong,
implausible – plausible, incorrect – correct. All items were combined into a scale indicating
the perceived strength of each argument.
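As an illustrative, hedged sketch of how such a four-item strength index and its reliability could be computed, the Python snippet below uses simulated ratings rather than the pretest data; the helper cronbach_alpha is a hypothetical function written for this example.

```python
# Sketch: build a 4-item argument-strength index and compute Cronbach's alpha.
# The ratings below are simulated; the pretest data are not reproduced here.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) rating matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
# 19 simulated respondents rating one comment on four bipolar 1-5 items
# (not convincing-convincing, weak-strong, implausible-plausible, incorrect-correct).
latent = rng.integers(2, 6, size=(19, 1))                       # shared "true" strength
ratings = np.clip(latent + rng.integers(-1, 2, size=(19, 4)), 1, 5)

strength_index = ratings.mean(axis=1)                           # per-respondent scale score
print(f"Argument strength: M = {strength_index.mean():.2f}, "
      f"SD = {strength_index.std(ddof=1):.2f}")
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```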
Table 2 Group mean differences in opinion change

Values are M (SD). CC1 = teaser only; CC2 = teaser with astroturfing comments; IC1 = inoculation: threat only; IC2 = inoculation: refutational different; IC3 = inoculation: refutational same.

                         CC1             CC2             IC1             IC2             IC3
Syria (n = 657)          0.11c (1.02)    0.33 (1.12)     0.40ae (1.01)   0.28 (0.96)     0.08c (1.04)
Skripal (n = 685)        0.06bcd (0.83)  0.60ae (1.15)   0.36a (1.21)    0.41a (0.94)    0.24b (1.14)
US election (n = 722)    0.18 (1.03)     0.33 (0.98)     0.36 (1.08)     0.36 (0.99)     0.31 (0.95)
All issues (N = 2,064)   0.12bcd (0.96)  0.42ae (1.08)   0.37ae (1.10)   0.35ae (0.96)   0.22bc (1.05)
Group comparisons are based on linear multiple regression analysis with the dummy-coded
inoculation factor as predictor. Superscripts indicate significant mean differences (p < .05)
from the respective condition (a = CC1, b = CC2, c = IC1, d = IC2, e = IC3).
Table 3 Group mean differences in opinion certainty change

Values are M (SD). CC1 = teaser only; CC2 = teaser with astroturfing comments; IC1 = inoculation: threat only; IC2 = inoculation: refutational different; IC3 = inoculation: refutational same.

                         CC1             CC2             IC1             IC2             IC3
Syria (n = 331)          0.44c (1.26)    0.63 (1.12)     0.89ae (1.18)   0.74 (0.98)     0.42c (1.32)
Skripal (n = 349)        0.26bcd (1.26)  0.84a (1.24)    0.90a (1.34)    0.69a (1.12)    0.51 (1.28)
US election (n = 315)    0.31 (1.05)     0.39 (0.84)     0.40 (1.07)     0.62 (1.22)     0.35 (0.96)
All issues (N = 995)     0.34bcd (1.19)  0.64a (1.11)    0.75ae (1.23)   0.68ae (1.11)   0.43cd (1.19)
* Participants stating that Russia/Syria was responsible for the event depicted (pre-stimulus
opinion 4 or 5). Positive values indicate higher opinion uncertainty. Group comparisons are
based on linear multiple regression analysis with the dummy-coded inoculation factor as
predictor. Superscripts indicate significant mean differences (p < .05) from the respective
condition (a = CC1, b = CC2, c = IC1, d = IC2, e = IC3).
Table 4 Differences in opinion change after a short and long delay

Values are opinion change, M (SD). CC1 = teaser only; CC2 = teaser with astroturfing comments; IC1 = inoculation: threat only; IC2 = inoculation: refutational different; IC3 = inoculation: refutational same.

                          CC1            CC2            IC1            IC2            IC3
Short delay (n = 1,107)   0.12 (0.97)    0.35 (1.14)    0.34 (1.12)    0.41 (0.96)    0.09a (1.02)
Long delay (n = 957)      0.12 (0.95)    0.50 (0.99)    0.42 (1.08)    0.29 (0.96)    0.36a (1.06)
Group comparisons represent simple main effects of delay. Superscripts indicate significant
mean differences between the short- and long-delay conditions within a single inoculation
group (p < .05).
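A hedged sketch of how such simple main effects of delay might be estimated follows; the simulated data frame and the variable names condition, delay, and opinion_change are placeholders for illustration, not the authors' analysis code.

```python
# Sketch: simple main effect of delay (short vs. long) within each inoculation
# group, estimated by regressing opinion change on delay in the group's subset
# (equivalent to a two-sample t-test). Simulated placeholder data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
conditions = ["CC1", "CC2", "IC1", "IC2", "IC3"]
frames = []
for cond in conditions:
    for delay in ["short", "long"]:
        frames.append(pd.DataFrame({
            "condition": cond,
            "delay": delay,
            "opinion_change": rng.normal(loc=0.3, scale=1.0, size=100),
        }))
df = pd.concat(frames, ignore_index=True)

for cond in conditions:
    fit = smf.ols("opinion_change ~ C(delay)", data=df[df["condition"] == cond]).fit()
    b = fit.params["C(delay)[T.short]"]     # short-delay mean minus long-delay mean
    p = fit.pvalues["C(delay)[T.short]"]
    print(f"{cond}: b = {b:.2f}, p = {p:.3f}")
```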
Figure I Example of the inoculation message (refutational different)
Figure II Example of the news teaser including astroturfing comments (Skripal issue)