The Spread of Disinformation on the Web:
An Examination of Memes on Social Networking
Marc Dupuis
Computing and Software Systems
University of Washington
Bothell, Washington, USA
marcjd@uw.edu
Andrew Williams
Computing and Software Systems
University of Washington
Bothell, Washington, USA
andrewwi@uw.edu
Abstract—Social media has become a potent vector for political disinformation and propaganda, often spread by malicious actors such as trolls or even foreign intelligence services, as famously occurred during the 2016 United States election. However, what makes social media a particularly potent vector for disinformation is not so much the behavior of malicious actors themselves, but rather ordinary users, who play a vital role in spreading and magnifying this disinformation. In order to understand and combat the spread of disinformation, we conduct two surveys examining the patterns of user behavior in sharing different types of disinformation. The first survey classifies a variety of image memes based on user reaction and interpretation. The second survey will evaluate user behavior toward those memes, with additional measures in place to assess users' personality and trait affect. The goal is to help understand how ordinary social media users behave with regard to political propaganda and disinformation.
Index Terms—disinformation, propaganda, fake news, social
media, memes, information integrity, cybersecurity
I. INTRODUCTION
One of the pillars of cybersecurity has always been finding
ways to protect the integrity of information transmitted online.
’Integrity’ is usually defined as ensuring that information is not altered between the sender and the receiver; more broadly, however, integrity refers to the trustworthiness of information.
In a time in which more and more people are getting their
information about the world through social media [30], how
do we ensure the integrity of the information they receive
through these platforms? The nature of social media, which
allows any user to share almost any information, within the
bounds of a loosely-enforced code of conduct, means we can
never guarantee that all information on social media will be
true or trustworthy.
In this case, perhaps protecting the integrity of information does not mean that the information is necessarily true or factual, but merely that it has not been unduly manipulated.
For example, how can people be sure that the information
they see on their social media feeds is the result of an organic
process of posting and sharing by users, rather than the result
of malicious actors or targeted manipulation campaigns?
These sorts of purposeful disinformation efforts received a great deal of attention during the 2016 U.S. Presidential Election [16] and in its wake [51]; however, they have not abated in the time since. Such efforts are already well underway in targeting
candidates running in the 2020 Presidential election [23].
In addition to long-term efforts during Presidential cam-
paigns, shorter-term efforts to manipulate social media also
spring up in the wake of major news events. For example, in
the wake of the release of Robert Mueller’s report on Russian
interference in the 2016 election, a network of 5,000 Twitter
bots was mobilized to promote the hashtag #Russiagate and
discredit the Russia investigation [8]. And after the Notre
Dame fire in April 2019, conspiracy theories spread quickly
on social media through a variety of means [26].
Various technological solutions have been proposed to help
combat the spread of false information on social media, such
as fact checking [5], crowd signals [46], natural language
processing [34], and more. Technological solutions that can automatically flag such efforts are an important piece of the puzzle; however, it is also important to consider the role that ordinary users play in the magnification and spread of misinformation.
Misinformation campaigns, as conducted by foreign state
actors in the 2016 election or by individuals controlling
networks of bots [8], often rely on ordinary users to pick up
a false story and make it go viral. To make matters more
complicated, such false stories may not be actual stories or
articles, but simple image memes or blocks of text. In such
a case, analyzing the source or the language patterns in the
item may not be a useful way of determining if an item is
legitimate or not.
Even if an item is known to be false, or not legitimate in
some way, will that prevent ordinary users from spreading it?
For this paper, we examined the ways in which ordinary users
contribute to the spread of misinformation. Ultimately, in order
to ensure the integrity of social media, we must examine not
just the sources of disinformation, but the role that ordinary
users play in helping such disinformation spread or go viral.
To that end, we have conducted one survey with another
survey in development. This first survey was designed to
identify a subset of content that represents a wide political
and emotional spectrum. The second survey will identify how
likely users are to share this content on social media, and to
see how user perceptions of their platform, their audience and
their anonymity correlate to the willingness of users to share
disinformation, either unwittingly or on purpose.
II. BACKGROUND
Propaganda, disinformation, and hoaxes have always been
pitfalls in reporting the news and current events. However,
with the decline of ’traditional’ sources of journalism such
as newspapers and the rise of social media, it has become
possible for malicious actors to manipulate the media in new
and disturbing ways.
In Media Manipulation and Disinformation Online, by Alice
Marwick and Rebecca Lewis [29], several recent case studies
are presented in which malicious actors such as conspiracy
theorists or even hate groups use (or abuse) the tools of social
media to amplify their message. For example, in one such case in 2015, a white nationalist website, through coordinated action by its users on social media, was able to get a hoax about the creation of ’White Student Unions’ picked up by media outlets as well known as USA Today [6].
Political candidates can be targets of media manipulation as
well. Networks of bots may be used to target candidates and
spread false messages, which are then shared or re-tweeted
by regular users, and amplified much more as a result. An
early example of this occurred in 2010 when a ’Twitter bomb’
targeted Massachusetts Senate candidate Martha Coakley in
the days before the 2010 Senate special election [32], in a
successful effort to influence real-time search results for the
candidates.
These tactics have become more far-reaching and problem-
atic over time. Controversy regarding media influence and
’fake news’ swirled, and continues to swirl, around two of
the most consequential events of 2016: the Brexit referendum
and the U.S. Presidential election [7].
Today, scholars and researchers have pointed to disinformation on the Internet and social media as a growing threat to the functioning of democratic institutions around the world [3].
Social media gives users the ability to isolate themselves from
opposing viewpoints, and construct ’cocoons’ in which they
are less frequently exposed to viewpoints other than their own
[45] [14]. Research has also shown that people may react with
hostility to factual sources that do not agree with them [42].
If this is true, people may be particularly susceptible to the
type of manipulations we discuss here, in which ordinary users
either knowingly or unknowingly magnify disinformation and
’fake news’ [43].
Note that the polarization of social media discussed in the
above papers is not in itself the result of outside tampering
or influence campaigns, but instead the natural behavior of
ordinary users. Manipulation campaigns seek to exploit, not
necessarily create, this polarization.
Therefore, in order to ensure the integrity of information on
social media, we need to understand how ordinary users inter-
act with information, how their behavior can be distinguished
from malicious influence campaigns, and how ordinary user
behavior, inadvertent or otherwise, may actually aid those
campaigns. But before we delve into how social media users
interact with information, we need to provide a little more
context about the topic of disinformation, or ”fake news”.
The term ’fake news’ has entered the common vernacular
since the 2016 election, to describe news that has been manip-
ulated or is intentionally false. It is frequently used as a pejora-
tive in the face of damaging legitimate stories, and can include
satirical sites such as The Onion or The Borowitz Report.
’Fake news’ is a broad term whose meaning varies from user
to user, and often from moment to moment. ’Disinformation’
may be a more precise term, although it does not necessarily
cover all types of malicious or false information on social
media. ’Disinformation’ usually refers to information which
is intentionally incorrect, whereas ’misinformation’ refers to
information which is unintentionally incorrect [28].
This distinction is important, because if users are uninten-
tionally sharing false information, then additional education
of users could provide a remedy. Incorporating more fact-
checking functionality into social media [5], or using auto-
mated natural language processing to flag such items and alert
users [34] [7], could provide a useful tool to combat ’fake
news’.
However, if users will intentionally share information even
if they know it is false, then user education may provide little
remedy to the propagation of disinformation or misinformation
on social media. Indeed, one survey on the efficacy of fact-
checking found that users were likely to care about fact-
checking only insofar as it was advantageous to their own
group, and fact-checking that went against their chosen group
was more likely to draw hostility [42]. Additionally, the simple mechanism of repeated exposure to a story was likely to increase users' perception of its truth, even if the story was flagged as inaccurate [35].
A paper by Rashkin et al. [38] examines the truth of an item
via two axes: on one axis is the Quality of the Information
(ranging from Fake to Trustworthy), on the other axis is the
intention of the author, or how much they are intending to
deceive the reader. Satire such as The Onion might have
false or low-quality information, but it would also rank low
regarding the intention of the author to deceive. On the other
hand, an actual hoax would rank similarly low in information
quality but rank high regarding the intention of the author to
deceive. Propaganda can occupy a wide range of values on the graph, but it is almost always misleading, and the author usually intends to deceive the reader to some degree, for the sake of pushing a particular political message.
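To make the two-axis idea concrete, the following sketch (not part of the original paper) encodes each item with a hypothetical information-quality score and a hypothetical intent-to-deceive score; the 0.5 cut-offs, field names, and category labels are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    """An item scored on the two axes described by Rashkin et al. [38]."""
    info_quality: float       # 0.0 = fake ... 1.0 = trustworthy (assumed scale)
    intent_to_deceive: float  # 0.0 = no intent ... 1.0 = strong intent (assumed scale)

def rough_category(item: ContentItem) -> str:
    """Map a scored item to an approximate region of the quality/intent plane.

    The 0.5 cut-offs are arbitrary, chosen only to illustrate that satire and
    hoaxes share low information quality but differ in intent to deceive.
    """
    low_quality = item.info_quality < 0.5
    deceptive = item.intent_to_deceive >= 0.5
    if low_quality and not deceptive:
        return "satire-like"        # e.g., The Onion
    if low_quality and deceptive:
        return "hoax-like"
    if not low_quality and deceptive:
        return "propaganda-like"    # misleading framing, possibly mixing in real facts
    return "trustworthy-like"

# Example: a satirical piece scores low on quality but low on intent to deceive.
print(rough_category(ContentItem(info_quality=0.2, intent_to_deceive=0.1)))
```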
For the purposes of this paper we are less interested in ex-
amining the precise motives of the original authors, and more
interested in examining the behavior of ordinary social media
users when confronted with disinformation or propaganda-
style content. Throughout the paper, we will use ’disinforma-
tion’ as an umbrella term for ’fake news’, propaganda, hoaxes,
and other false information, which are common factors in
malicious campaigns to manipulate or influence social media.
Suppose for a moment that you were a creator of such
disinformation, looking to design content to go viral as part
of an effort to manipulate social media. If you were to want
to design content for the sake of going viral, what would
that content look like? What content is most likely to get
shared and magnified by users across your target social media
network? What motivates users to share this content in the
first place?
Generally speaking, users have a variety of motivations
for sharing news on social media. In a study from 2015, in
which 18 participants were interviewed in-depth about their
motivations for sharing news [50], the results could be broadly
broken down into a desire to inform or a desire to entertain.
Additionally, sub-motivations were found: some users were motivated by maintaining a connection with a group with which they identify, which could also contribute to polarization, whereas others were motivated by changing minds or sparking debate. Some shared items in order to distinguish themselves; others shared items in order to ’join the crowd’, so to speak.
Another 2015 study, by Oh and Syn [33], found that people's motivations also varied across different social media platforms. They found that Facebook and Twitter users were more motivated by learning than YouTube users, and that Facebook users in particular were more motivated by social engagement. However, interviews and in-depth surveys on user motivations may capture after-the-fact justifications, rather than the emotion of the moment when the user decides to like or share something. Users tend to scroll through social media quickly, and sharing decisions may be made more on momentary emotional or psychological arousal than anything else.
Many studies have been done into how different emotions
affect the tendency of users to share content (e.g., [13]). A
study by Jonah Berger [4] found that more than a particular
emotion, it was psychological arousal that increased the social
transmission of information. For example, when considering
negative emotions, evoking anxiety resulted in more willing-
ness to share than evoking sadness, and on the positive side,
evoking humor or amusement resulted in more willingness to
share than merely evoking contentment.
Other studies, such as a 2013 study by Guadagno et al. [15], found similar results when examining a single piece of content, finding that users who felt stronger affective responses to a video were much more likely to share it.
Ideologically extreme content is more likely to be shared, perhaps precisely because it arouses stronger emotions. One study [37] found that Twitter users with more extreme ideological positions shared content disproportionately more than moderate users. This means that users may be exposed to a disproportionate ratio of extreme content relative to more moderate content.
During the 2016 political campaign, a survey was conducted
studying how anger and anxiety influenced users to share infor-
mation related to the campaign. It found that users who were
deeply plugged in to online news were both more angry and
more anxious about the opposing Presidential candidates, and
that people who felt more anger about the opposing candidate
shared information about the election more frequently [17].
Anger and anxiety, both high-arousal emotions, once again appear to play a large role in determining user sharing behavior.
A user's perception of their audience, and of their anonymity, also affects their behavior online. In the area of cyberbullying, for example, problematic behavior tends to increase when users believe they are acting anonymously [1]. Other studies
have pointed to a relationship between the users’ anonymity
and their sharing behavior, suggesting that user anonymity
does have an effect on what users share online, and users
may be more comfortable sharing items of negative valence if
they feel safer in their anonymity [27]. Additionally, a 2014
study found that controversial content was 3.2x more likely to
be shared anonymously than non-anonymously [52].
Different social media platforms allow users to interact more or less
anonymously. Facebook, Twitter, and Instagram all allow dif-
ferent levels of anonymity, and have designed their interfaces
to allow the users varying degrees of freedom in the ways in
which they can manipulate their identity [31]. This alters the
way in which users use social media, and it may also alter
their perceptions of their audience.
Users often have an imagined target audience in mind when
they post on social media [13], [25]. This audience could be
people with personal ties, such as family or friends. It could be
people with professional ties, such as co-workers or potential
employers, as one might find on LinkedIn. Or it could include
communal ties, such as people who share a hobby, or people
who support a political candidate, or even members of a hate
group.
We have already seen, in the survey from Oh and Syn [33], that user motivations differ across social media platforms. But how do perceptions of anonymity and audience
factor into user behavior, specifically as it relates to sharing
political (dis)information? To answer that question, we decided
to look at user behavior as it relates to sharing one particular
type of content: image memes.
We chose to focus on image memes because their nature
allows for user behavior to be more easily surveyed on a
large scale, through an automated survey platform such as
Mechanical Turk. Users can quickly view and digest the
meaning of an image meme within a couple of seconds,
without having to click on an external link or do additional
reading. Memes usually convey an uncomplicated idea, often
using humor to get their point across [22]. A user can interact
with an image meme within a second or two and continue
scrolling, which makes them well-adapted to modern social
media.
The term ’meme’ has existed since Richard Dawkins originally coined it in 1976 as a unit of cultural transmission that serves a role in cultural evolution similar to the one a gene serves in biological evolution. On the modern web, a ’meme’ can be defined as an image or video containing a block of text, which can be easily shared on social media. Variations of a meme may consist of many different versions of the same image, often featuring a certain stock character paired with different blocks of text, such as ’Sheltering Suburban Mom’ or ’Annoying Facebook Girl’ [41].
Memes such as Pepe the Frog are frequently used by the alt-right in their efforts to influence social media [24].
Political memes may also be images of political figures, such
as Donald Trump holding up a signed piece of paper, which
has been altered in various ways, or former Speaker of the
House Paul Ryan gesturing to a board that has been altered to
say various things [40]. Memes are also frequently used during
the sorts of misinformation campaigns discussed earlier in this
section, as they are easy to create, easy to circulate, and hard
to fact-check.
Because memes are highly visual and often contain humor,
they are well-suited to being viewed and shared on social
media, and can potentially be a powerful way to influence
opinion online [22] [19]. Memes were a frequent tool of
disinformation campaigns during the 2016 election, such as
the ’Draft our Daughters’ campaign that was targeted at the
Hillary Clinton campaign [16]. In a way, memes provide one
of the most potent weapons for intentional disinformation
campaigns, as they provide an easy way for almost anyone to
create catchy, colorful content that is easily digested, viewed,
and shared across multiple platforms.
While much work has been done showing how influence campaigns use memes, less work has demonstrated their actual efficacy in promoting the viewpoints they carry, perhaps because most memes are meant primarily for amusement or humor. However, there is no doubt that political memes are widely used for propaganda purposes, and more study of their efficacy is needed. They are certainly effective in terms of their ability to spread on social media, but their ability to convince viewers is still uncertain.
One study of feminist memes featuring Ryan Gosling did
find that exposure to the test memes increased viewer en-
dorsement of specific feminist beliefs [49]. But regardless
of whether memes actually convince viewers of their viewpoint, that may not even be the most important question. If a meme becomes widespread enough on social media, it affects the national conversation, as trends on social media are almost always picked up by national media and politicians, and spread beyond social media. The questions that swirled around Hillary
Clinton’s health during the 2016 campaign were just one
example of this, in which memes and posts on social media
fed stories in traditional media, which in turn fed social media,
and so on [16] [29].
With this in mind, our first goal was to compile a set of
image memes for testing, and measure user reaction to ensure
that we had a set of memes that contained a variety of political
content and provoked a range of reactions. In the second
survey, our goal will be to measure how user sharing behaviors
changed based on the user’s perception of the truthfulness of
the meme, as well as their perceived audience and how they
evaluated their own anonymity. Measures for personality and
trait affect will also be incorporated into this second survey.
III. METHODS
Using Google image search, we compiled a set of memes
that represented a variety of political content and addressed
a variety of issues, which we ultimately whittled down to 12
memes for testing in our first survey.
The memes included a mix of left-leaning and right-leaning
memes, as well as memes that made fun of both parties. The
memes included two that were circulated on Facebook by
Russia during the 2016 election, as well as a mix of general
memes on a range of issues. For example, we included memes
that promoted both left- and right-leaning messages on issues
such as climate change and gun control. We also included a
Bernie Sanders meme with both a left-leaning message and a
right-leaning message.
Fig. 1. Example of a left-leaning climate change meme
Fig. 2. Example of a right-leaning climate change meme
In order to obtain some baseline data on how political
affiliation, political ideology, and various demographic factors
influence one’s judgment of a meme, a large-scale survey
was employed. Amazon’s Mechanical Turk (MTurk) was used
to recruit survey participants. MTurk provides researchers
with a relatively low-cost and quick turnaround platform
for participant recruitment [11], [44]. Participants generally
represent a broader cross-section of the population than other
methods often employed, such as college sophomores in an
introductory psychology class [39]. IRB approval was on file prior to collecting data, and informed consent was obtained. Participants were compensated with $2 for their participation in the study. One quality control question was used. If participants failed the quality control question, the survey would conclude with an explanation of why it had ended.
We used the Qualtrics survey platform. A total of 203 responses were collected. Participants were asked at the end of the survey how the effort and time required to complete the survey compared to similar work offered through the MTurk platform. Most participants indicated that it was either easier (19.4%) or comparable (61.2%) to other projects, with some indicating that more effort was required (19.4%). Of note, a pilot study was conducted beforehand to check for any issues with the survey, including survey logic and question wording problems, and included the same question noted above. The compensation was subsequently adjusted from the pilot study to better reflect a comparable amount of time and effort for research participants. Given the results for this question in the final survey, we believe this adjustment was successful.
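As a minimal sketch of the post-collection screening described above, assuming a hypothetical CSV export and made-up column names (the actual Qualtrics export uses different field names):

```python
import pandas as pd

# Hypothetical export of the survey responses; file and column names are assumptions.
responses = pd.read_csv("survey_responses.csv")

# Keep only participants who passed the single quality-control question
# (assumed here to be stored as a boolean column).
valid = responses[responses["quality_check_passed"]]
print(f"{len(valid)} of {len(responses)} responses passed the quality check")

# Summarize the end-of-survey item comparing effort/time to similar MTurk work.
effort_share = (
    valid["effort_vs_other_hits"]   # e.g., "easier" / "comparable" / "more effort"
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
)
print(effort_share)
```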
In the process of selecting memes for the study, as well as the survey questions themselves (i.e., what we were going to measure), we employed the Delphi technique [9], [18], [36]. The Delphi technique is a method used to reach consensus on a matter. In the context of this study, we wanted to make sure there was appropriate coverage in what we were assessing for each meme and in the memes themselves. We employed three rounds of the Delphi technique with a small group of participants; three rounds is generally considered a good number, effectively balancing robustness against the fatigue that can set in from too many rounds. Additionally, we set a 75% threshold for consensus. In other words, if 75% of the participants involved in the Delphi technique were in agreement, then consensus was considered achieved.
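The 75% consensus rule can be stated concretely. The sketch below (with made-up votes) simply checks whether any single option reaches the threshold within one Delphi round; it illustrates the rule and is not the tooling actually used in the study.

```python
from collections import Counter

CONSENSUS_THRESHOLD = 0.75  # 75% agreement, as used in this study

def has_consensus(votes):
    """Return (True, choice) if one option reaches the threshold, else (False, None).

    `votes` holds one Delphi round for a single item, e.g., whether a candidate
    meme should be included or which dimension should be measured.
    """
    if not votes:
        return False, None
    choice, count = Counter(votes).most_common(1)[0]
    if count / len(votes) >= CONSENSUS_THRESHOLD:
        return True, choice
    return False, None

# Example round with a small panel: 4 of 5 panelists (80%) agree, so consensus is reached.
print(has_consensus(["include", "include", "include", "exclude", "include"]))
```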
Ultimately, we decided that we would measure the following
items for each meme:
1) Ideological Agenda
2) Political Party being Advanced
3) Propaganda (Classification)
4) Hoax (Classification)
5) Satire (Classification)
6) Truth (Classification)
7) Funny (Is it?)
8) Trying to be Funny (Is it?)
9) True (Is it?)
10) Underlying Message True (Is it?)
The memes were presented in a random order to
the participants with the same questions for each meme.
The 12 memes used in this study can be found at:
http://www.aristotle.cc/Memes.pdf
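Qualtrics handled the randomization internally; purely as an illustration, the presentation logic amounts to shuffling the meme order per participant while keeping the same ten-question battery for every meme. The meme labels and question wording below are shorthand, not the exact survey text.

```python
import random

MEMES = list("ABCDEFGHIJKL")  # the 12 memes, labeled A-L as in Table I

QUESTIONS = [
    "Ideological agenda",
    "Political party being advanced",
    "Propaganda (classification)",
    "Hoax (classification)",
    "Satire (classification)",
    "Truth (classification)",
    "Is it funny?",
    "Is it trying to be funny?",
    "Is it true?",
    "Is the underlying message true?",
]

def build_presentation(seed=None):
    """Return one participant's presentation order: every meme gets the same
    question battery, but the meme order is shuffled per participant."""
    rng = random.Random(seed)
    order = MEMES[:]
    rng.shuffle(order)
    return [(meme, question) for meme in order for question in QUESTIONS]

plan = build_presentation(seed=42)
print(plan[:3])  # first few (meme, question) pairs for one participant
```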
Results were obtained from those who classified themselves as Democrats (N=86), Republicans (N=58), and Independents (N=56). There was a nearly even split between males (49.3%) and females (50.2%) among those who completed the survey, with one participant indicating other. Next, we discuss some of the findings from the initial phase of the study.
IV. DISCUSSION
Table I contains the results from the survey. Not surprisingly, individuals responded in a manner consistent with their political affiliation. For example, Meme A is a pro-gun-rights meme. Republicans rated it lower on propaganda than either Democrats or Independents did, with Independents falling between the other two groups. Republicans also rated it higher with respect to its truth value.
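For readers who want to reproduce this kind of summary, the means in Table I amount to a group-by over meme and party affiliation. The sketch below assumes a hypothetical long-format export with one row per participant-meme pair and made-up column names.

```python
import pandas as pd

# Hypothetical long-format export: one row per (participant, meme) with Likert scores.
ratings = pd.read_csv("meme_ratings_long.csv")

likert_columns = ["politics", "propaganda", "hoax", "satire", "truth",
                  "funny", "trying_to_be_funny", "true", "underlying_true"]

# Mean rating per meme and party affiliation, mirroring the layout of Table I.
table1 = (
    ratings.groupby(["meme", "party"])[likert_columns]
    .mean()
    .round(2)
)
print(table1.loc[("A", "Republicans")])  # e.g., the Meme A / Republicans row
```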
In contrast, Meme B is a pro-gun-control meme. Interestingly, Independents rated it highest for propaganda, even higher than Republicans did. Republicans rated it lower on truth value than either Democrats or Independents.
Meme G is generally considered politically neutral as it
suggests that there is no discernible difference between the
political parties. Nonetheless, we still see some notable differ-
ences in how it is rated based on political affiliation. Perhaps
Democrats believe it is targeting their party to some extent as
they rate it higher on propaganda value and lower on truth
value than either Republicans or Independents.
Overall, the results are not overly surprising. Additional
analysis involving political ideology would also be interesting
as it may tease out some differences we may see with
Independents. While the results are not surprising, they do
help validate the approach we will be taking in the next phase
of this research.
V. LOOKING AHEAD
In the first phase of this research, we wanted to determine
appropriate baseline numbers for the memes that were em-
ployed in this study. This will allow us to control for political
affiliation and other factors in the next phase. It also provides
us with an opportunity to ensure we assess a broad spectrum
of political memes in the next phase of the study.
For the final phase of this study, we will be looking at how different types of people interact with memes and what they ultimately think about the message the memes are trying to convey. This will include an examination of how this varies based on political affiliation, political ideology, gender, age, personality [2], [20], [21], and trait affect [47], [48].
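To indicate how the personality and trait-affect instruments would enter the analysis, the sketch below scores a hypothetical subscale as the mean of its Likert items, with reverse-keyed items flipped. The item names and 5-point scale are assumptions, not the actual BFI or PANAS item keys.

```python
def score_subscale(responses, items, reverse_items=(), scale_max=5):
    """Mean of a participant's Likert items, reverse-coding where needed.

    `responses` maps item name -> 1..scale_max rating; `items` lists the
    subscale's items; `reverse_items` lists items to flip before averaging.
    """
    scores = []
    for item in items:
        value = responses[item]
        if item in reverse_items:
            value = (scale_max + 1) - value  # flip reverse-keyed items
        scores.append(value)
    return sum(scores) / len(scores)

# Hypothetical extraversion-style items; 'reserved' is reverse-keyed.
participant = {"talkative": 4, "reserved": 2, "outgoing": 5}
print(score_subscale(participant, ["talkative", "reserved", "outgoing"],
                     reverse_items={"reserved"}))
```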
Several pertinent questions will be addressed through this
research, including whether some personality types or those
with a certain type of trait affect are more prone to spreading
misinformation than others, as prior research suggests differ-
ences based on these factors in the context of social media,
security, and privacy [10], [12]. Misinformation continues to
be a significant problem and strikes at the very foundation of
cybersecurity through its compromise of information integrity.
In addition to understanding more fully how misinformation
is spread, the results will lend themselves to various data
analytic tools and techniques, such as machine learning. As
evidence suggests that Russia continues to take aim at elections
in the United States and elsewhere [23], it is more important
now than ever that proactive measures are taken to address the
threats to information integrity.
Columns of Table I: Meme; Responders; Rate the politics of this meme; then, under "This meme can be classified as...": Propaganda, Hoax, Satire, Truth; then, under "Is the meme...": Funny, Trying to be Funny, True, Underlying message true.
A Democrats 5.09 3.08 2.21 2.93 2.80 2.87 3.44 2.93 3.05
A Republicans 5.52 2.19 2.09 2.41 3.88 2.79 2.95 3.84 3.98
A Independents 5.38 2.64 1.95 3.09 3.18 2.82 3.29 3.13 3.29
B Democrats 2.83 2.64 2.02 2.22 3.97 2.27 2.63 3.72 3.84
B Republicans 3.88 2.83 2.54 2.38 3.14 2.14 2.34 3.16 3.48
B Independents 2.25 3.21 2.11 2.07 3.36 1.96 2.37 3.18 3.16
C Democrats 5.44 3.65 2.79 2.56 2.20 2.00 2.79 1.99 2.08
C Republicans 4.86 2.76 2.29 2.66 3.53 2.41 2.58 3.64 3.62
C Independents 5.57 3.52 2.62 2.34 2.02 1.77 2.52 1.80 1.98
D Democrats 2.69 2.44 2.02 1.81 3.65 1.90 2.00 3.51 3.77
D Republicans 4.07 2.48 2.49 2.14 3.69 2.16 2.19 3.69 3.72
D Independents 2.52 2.86 2.29 1.75 3.23 1.86 1.88 3.18 3.25
E Democrats 5.69 4.03 3.10 2.10 1.87 1.86 2.26 1.69 1.79
E Republicans 4.60 2.86 2.80 2.28 2.98 2.14 2.19 3.19 3.33
E Independents 5.59 3.95 2.89 2.15 2.07 1.66 2.11 2.09 2.04
F Democrats 2.78 2.70 2.02 2.88 3.48 2.58 3.01 3.38 3.51
F Republicans 4.38 3.12 2.52 2.12 3.19 2.26 2.60 3.41 3.45
F Independents 2.68 3.16 1.89 2.80 2.86 2.43 2.82 2.89 3.00
G Democrats 4.15 3.13 2.16 3.44 2.81 3.15 4.09 2.85 2.92
G Republicans 4.97 2.56 2.29 3.53 3.07 3.32 3.52 3.28 3.34
G Independents 4.14 2.75 2.18 3.64 3.21 3.43 4.02 3.43 3.57
H Democrats 3.83 2.56 2.10 3.79 2.87 3.15 4.13 3.05 3.13
H Republicans 4.83 2.52 2.40 3.61 3.03 3.45 3.71 3.24 3.29
H Independents 4.02 2.50 1.96 3.95 3.00 3.55 4.09 3.23 3.29
I Democrats 2.71 2.16 1.81 2.26 4.07 2.46 2.81 3.97 4.09
I Republicans 3.41 2.91 2.93 2.40 2.91 2.30 2.64 3.07 3.14
I Independents 2.45 2.43 1.80 2.25 3.87 2.54 3.13 3.68 3.87
J Democrats 5.53 2.90 2.20 3.62 2.19 2.97 4.09 2.17 2.29
J Republicans 5.54 2.40 2.17 3.26 2.98 3.59 3.60 3.38 3.52
J Independents 5.57 2.82 1.93 3.96 2.29 3.13 3.95 2.52 2.73
K Democrats 2.49 2.51 2.08 3.65 3.31 3.59 4.07 3.51 3.77
K Republicans 4.33 2.91 2.45 2.96 3.09 2.66 3.28 3.22 3.41
K Independents 2.64 2.89 1.95 3.70 2.79 3.21 3.93 2.95 3.07
L Democrats 5.05 3.53 2.34 2.93 2.52 2.57 3.81 2.52 2.62
L Republicans 4.72 2.95 2.55 2.81 3.42 2.81 3.33 3.66 3.69
L Independents 4.79 3.57 2.39 3.05 2.64 2.63 3.68 2.80 2.77
TABLE I
MEANS OF THE RESPONSES FROM THE LIKERT QUESTIONS ASKED IN THE FIRST SURVEY.
VI. CONCLUSION
This study takes aim at better understanding the antecedents of the spread of misinformation. We do this through the lens of misinformation as an attack on cybersecurity through its compromise of information integrity. Through the development of a classification and rating scheme via the Delphi technique, and the subsequent data collection and validation, we are now able to take the next step and assess how various factors relate to the spread of misinformation.
REFERENCES
[1] Christopher P. Barlett. Anonymously hurting others online: The effect
of anonymity on cyberbullying frequency. Psychology of Popular Media
Culture, 4(2):70–79, 2015.
[2] Verónica Benet-Martínez and Oliver P. John. Los cinco grandes across cultures and ethnic groups: Multitrait-multimethod analyses of the big five in Spanish and English. Journal of Personality and Social Psychology, 75(3):729, 1998.
[3] W Lance Bennett and Steven Livingston. The disinformation order:
Disruptive communication and the decline of democratic institutions.
European Journal of Communication, 33(2):122–139, April 2018.
[4] Jonah Berger. Arousal Increases Social Transmission of Information.
Psychological Science, 22(7):891–893, July 2011.
[5] Petter Bae Brandtzaeg, Asbjørn Følstad, and María Ángeles Chaparro Domínguez. How Journalists and Social Media Users Perceive Online Fact-Checking and Verification Services. Journalism Practice, 12(9):1109–1129, October 2018.
[6] Walbert Castillo. ’Illini White Student Union challenges ’Black Lives
Matter’, November 2015.
[7] Murphy Choy and Mark Chong. Seeing Through Misinformation: A
Framework for Identifying Fake Online News. ARXIV, page 14, 2018.
[8] Ben Collins. After Mueller report, Twitter bots pushed Russiagate hoax
narrative, April 2019.
[9] C Duffield. The delphi technique. The Australian journal of advanced
nursing: a quarterly publication of the Royal Australian Nursing Fed-
eration, 6(2), 1988.
[10] Marc Dupuis and Robert Crossler. The compromise of one’s personal
information: Trait affect as an antecedent in explaining the behavior of
individuals. In Proceedings of the 52nd Hawaii International Conference
on System Sciences. IEEE, 2019.
[11] Marc Dupuis, Barbara Endicott-Popovsky, and Robert Crossler. An
analysis of the use of amazon’s mechanical turk for survey research in
the cloud. In International Conference on Cloud Security Management,
Oct 2013.
[12] Marc Dupuis and Samreen Khadeer. Curiosity killed the organization: A psychological comparison between malicious and non-malicious insiders and the insider threat. In Proceedings of the 5th Annual Conference on Research in Information Technology, pages 35–40. ACM Press, 2016.
[13] Marc Dupuis, Samreen Khadeer, and Joyce Huang. "I got the job!": An exploratory study examining the psychological factors related to status updates on Facebook. Computers in Human Behavior, 73:132–140, 2017.
[14] Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis,
and Michael Mathioudakis. Political Discourse on Social Media: Echo
Chambers, Gatekeepers, and the Price of Bipartisanship. In Proceedings
of the 2018 World Wide Web Conference, WWW ’18, pages 913–922,
Republic and Canton of Geneva, Switzerland, 2018. International World
Wide Web Conferences Steering Committee.
[15] Rosanna E. Guadagno, Daniel M. Rempala, Shannon Murphy, and
Bradley M. Okdie. What makes a video go viral? An analysis
of emotional contagion and Internet memes. Computers in Human
Behavior, 29(6):2312–2319, November 2013.
[16] Douglas Haddow. Meme warfare: how the power of mass replication
has poisoned the US election. The Guardian, November 2016.
[17] A. Hasell and Brian E. Weeks. Partisan Provocation: The Role of
Partisan News Use and Emotional Responses in Political Information
Sharing in Social Media. Human Communication Research, 42(4):641–
661, October 2016.
[18] Felicity Hasson, Sinead Keeney, and Hugh McKenna. Research guidelines for the Delphi survey technique. Journal of Advanced Nursing, 32(4):1008–1015, 2000.
[19] Heidi Huntington. MENACING MEMES? AFFECT AND EFFECTS
OF POLITICAL INTERNET MEMES. AoIR Selected Papers of Internet
Research, 5(0), 2015.
[20] Oliver P. John, Eileen M. Donahue, and Robert L. Kentle. The Big Five Inventory: Versions 4a and 54. Berkeley: University of California, Berkeley, Institute of Personality and Social Research, 1991.
[21] Oliver P. John, Laura P. Naumann, and Christopher J. Soto. Paradigm shift to the integrative Big Five trait taxonomy. Handbook of Personality: Theory and Research, 3:114–158, 2008.
[22] Ofra Klein. Manipulative Memes: How Internet Memes Can Distort the Truth. Connected Life Conference, June 2018.
[23] Natasha Korecki. Sustained and ongoing disinformation assault targets
Dem presidential candidates, February 2019.
[24] Nicolle Lamerichs, Dennis Nguyen, Mari Carmen Puerta Melguizo, Radmila Radojevic, and Anna Lange-Böhmer. Elite male bodies: The circulation of alt-Right memes and the framing of politicians on Social Media. Participations, 15(1):27, 2018.
[25] Eden Litt and Eszter Hargittai. The Imagined Audience on Social
Network Sites. Social Media + Society, 2(1):205630511663348, January
2016.
[26] Jane Lytvynenko and Craig Silverman. Here Are The Hoaxes And
Misinformation About The Notre Dame Fire, April 2019.
[27] Xiao Ma, Jeff Hancock, and Mor Naaman. Anonymity, Intimacy and
Self-Disclosure in Social Media. In Proceedings of the 2016 CHI
Conference on Human Factors in Computing Systems, CHI ’16, pages
3857–3869, New York, NY, USA, 2016. ACM.
[28] Alice Marwick. Why Do People Share Fake News? A Sociotechnical
Model of Media Effects. 2 GEO. L. TECH. REV. 474, July 2018.
[29] Alice Marwick and Rebecca Lewis. Media Manipulation and Disinfor-
mation Online. Data and Society Research Institute, page 106, 2017.
[30] Robert Mason and Marc Dupuis. Cultural values, information sources,
and perceptions of security. In iConference 2014 Proceedings, Mar 2014.
[31] Dar Meshi, Diana I. Tamir, and Hauke R. Heekeren. The Emerging Neu-
roscience of Social Media. Trends in Cognitive Sciences, 19(12):771–
782, December 2015.
[32] Panagiotis T. Metaxas and Eni Mustafaraj. Social Media and the
Elections. Science, Vol. 338:472–473, October 2012.
[33] Sanghee Oh and Sue Yeon Syn. Motivations for sharing information
and social support in social media: A comparative analysis of Facebook,
Twitter, Delicious, YouTube, and Flickr. Journal of the Association for
Information Science and Technology, 66(10):2045–2060, 2015.
[34] Ray Oshikawa, Jing Qian, and William Yang Wang. A Survey on Natural
Language Processing for Fake News Detection. arXiv:1811.00770 [cs],
November 2018. arXiv: 1811.00770.
[35] Gordon Pennycook, Tyrone Cannon, and David G. Rand. Prior Exposure
Increases Perceived Accuracy of Fake News. SSRN Scholarly Paper ID
2958246, Social Science Research Network, Rochester, NY, May 2018.
[36] Catherine Powell. The Delphi technique: Myths and realities. Journal of Advanced Nursing, 41(4):376–382, 2003.
[37] Daniel Preoţiuc-Pietro, Ye Liu, Daniel Hopkins, and Lyle Ungar. Beyond Binary Labels: Political Ideology Prediction of Twitter Users. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 729–740, Vancouver, Canada, 2017. Association for Computational Linguistics.
[38] Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin
Choi. Truth of Varying Shades: Analyzing Language in Fake News
and Political Fact-Checking. In Proceedings of the 2017 Conference
on Empirical Methods in Natural Language Processing, pages 2931–
2937, Copenhagen, Denmark, 2017. Association for Computational
Linguistics.
[39] David O. Sears. College sophomores in the laboratory: Influences of a narrow data base on social psychology's view of human nature. Journal of Personality and Social Psychology, 51(3):515, 1986.
[40] Jens Seiffert-Brockmann, Trevor Diehl, and Leonhard Dobusch. Memes
as games: The evolution of a digital discourse online. New Media &
Society, 20(8):2862–2879, August 2018.
[41] Limor Shifman. The Cultural Logic of Photo-Based Meme Genres.
Journal of Visual Culture, 13(3):340–358, December 2014.
[42] Jieun Shin and Kjerstin Thorson. Partisan Selective Sharing: The Biased
Diffusion of Fact-Checking Messages on Social Media. Journal of
Communication, 67(2):233–255, April 2017.
[43] Craig Silverman. This Analysis Shows How Viral Fake Election News
Stories Outperformed Real News On Facebook.
[44] Zachary R. Steelman, Bryan I. Hammer, and Moez Limayem. Data
collection in the digital age: Innovative alternatives to student samples.
MIS Quarterly, 38(2):355378, 2014.
[45] Cass R. Sunstein. #Republic: Divided Democracy in the Age of Social
Media. Princeton University Press, April 2018.
[46] Sebastian Tschiatschek, Adish Singla, Manuel Gomez Rodriguez, Arpit
Merchant, and Andreas Krause. Fake News Detection in Social Net-
works via Crowd Signals. In Companion Proceedings of the The Web
Conference 2018, WWW ’18, pages 517–524, Republic and Canton of
Geneva, Switzerland, 2018. International World Wide Web Conferences Steering Committee.
[47] David Watson and Lee Anna Clark. The PANAS-X: Manual for the Positive and Negative Affect Schedule (Expanded Form). 1994.
[48] David Watson, Lee Anna Clark, and Auke Tellegen. Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6):1063–1070, Jun 1988.
[49] Linzi E A Williamson, Sarah L Sangster, and Karen L Lawson. HEY
GIRL: THE EFFECT OF RYAN GOSLING FEMINIST MEMES ON
FEMINIST IDENTIFICATION AND ENDORSEMENT OF FEMINIST
BELIEFS. Researchgate.net, page 1, 2014.
[50] L. Y.C. Wong and Jacquelyn Burkell. Motivations for Sharing News on
Social Media. In Proceedings of the 8th International Conference on
Social Media & Society, #SMSociety17, pages 57:1–57:5, New York,
NY, USA, 2017. ACM.
[51] Samuel C. Woolley and Douglas R. Guilbeault. Computational Pro-
paganda in the United States of America: Manufacturing Consensus
Online. Project on Computational Propaganda, 2017.
[52] Kaiping Zhang and Ren F. Kizilcec. Anonymity in Social Media:
Effects of Content Controversiality and Social Endorsement on Sharing
Behavior. In Eighth International AAAI Conference on Weblogs and
Social Media, May 2014.
... gather broader support [45]. At the same time, disinformation actors routinely exploit memes to promote false narratives on political scandals [54] and weaponize them to spread propaganda as well as manipulate public opinion [21,87,91,92]. ...
... Virality. Recent research showed that image memes have become a prominent way for social media users to communicate complex concepts [21,92]. Consistent with previous meme studies [86], we use the number of times a meme is shared on social media as an indicator of its virality; viral memes appear at a high frequency on online services and are shared, transformed, and imitated by many people. ...
... Researchers recently started studying image memes and how online users interact with them. Dupuis et al. performed a survey to identify personality traits of users that are more likely to share image memes containing disinformation [21]. Crovitz and Moran provided a qualitative analysis of image memes used as a vehicle of disinformation on social media [17]. ...
Article
Full-text available
Despite the increasingly important role played by image memes, we do not yet have a solid understanding of the elements that might make a meme go viral on social media. In this paper, we investigate what visual elements distinguish image memes that are highly viral on social media from those that do not get re-shared, across three dimensions: composition, subjects, and target audience. Drawing from research in art theory, psychology, marketing, and neuroscience, we develop a codebook to characterize image memes, and use it to annotate a set of 100 image memes collected from 4chan's Politically Incorrect Board (/pol/). On the one hand, we find that highly viral memes are more likely to use a close-up scale, contain characters, and include positive or negative emotions. On the other hand, image memes that do not present a clear subject the viewer can focus attention on, or that include long text are not likely to be re-shared by users. We train machine learning models to distinguish between image memes that are likely to go viral and those that are unlikely to be re-shared, obtaining an AUC of 0.866 on our dataset. We also show that the indicators of virality identified by our model can help characterize the most viral memes posted on mainstream online social networks too, as our classifiers are able to predict 19 out of the 20 most popular image memes posted on Twitter and Reddit between 2016 and 2018. Overall, our analysis sheds light on what indicators characterize viral and non-viral visual content online, and set the basis for developing better techniques to create or moderate content that is more likely to catch the viewer's attention.
... gather broader support [45]. At the same time, disinformation actors routinely exploit memes to promote false narratives on political scandals [54] and weaponize them to spread propaganda as well as manipulate public opinion [21,87,91,92]. ...
... Virality. Recent research showed that image memes have become a prominent way for social media users to communicate complex concepts [21,92]. Consistent with previous meme studies [86], we use the number of times a meme is shared on social media as an indicator of its virality; viral memes appear at a high frequency on online services and are shared, transformed, and imitated by many people. ...
... Researchers recently started studying image memes and how online users interact with them. Dupuis et al. performed a survey to identify personality traits of users that are more likely to share image memes containing disinformation [21]. Crovitz and Moran provided a qualitative analysis of image memes used as a vehicle of disinformation on social media [17]. ...
Preprint
Full-text available
Despite the increasingly important role played by image memes, we do not yet have a solid understanding of the elements that might make a meme go viral on social media. In this paper, we investigate what visual elements distinguish image memes that are highly viral on social media from those that do not get re-shared, across three dimensions: composition, subjects, and target audience. Drawing from research in art theory, psychology, marketing, and neuroscience, we develop a codebook to characterize image memes, and use it to annotate a set of 100 image memes collected from 4chan's Politically Incorrect Board (/pol/). On the one hand, we find that highly viral memes are more likely to use a close-up scale, contain characters, and include positive or negative emotions. On the other hand, image memes that do not present a clear subject the viewer can focus attention on, or that include long text are not likely to be re-shared by users. We train machine learning models to distinguish between image memes that are likely to go viral and those that are unlikely to be re-shared, obtaining an AUC of 0.866 on our dataset. We also show that the indicators of virality identified by our model can help characterize the most viral memes posted on mainstream online social networks too, as our classifiers are able to predict 19 out of the 20 most popular image memes posted on Twitter and Reddit between 2016 and 2018. Overall, our analysis sheds light on what indicators characterize viral and non-viral visual content online, and set the basis for developing better techniques to create or moderate content that is more likely to catch the viewer's attention.
... These effects can elicit an emotional response to the content and convey the impression of reasonable arguments, even if the information is incorrect (Harvey et al., 2019). Memes are particularly effective when they align with one's political beliefs, but they can also change perceptions of credibility among people of opposing views (Dupuis & Williams, 2019;Wong et al., 2022). ...
Technical Report
Full-text available
Misinformation can cause significant harm to individuals, communities, and societies. Because it’s designed to appeal to our emotions and exploit our cognitive shortcuts, everyone is susceptible to it. We are particularly vulnerable to misinformation in times of crisis when the consequences are most acute. Science and health misinformation damages our community well-being through otherwise preventable illnesses, deaths, and economic losses, and our social well-being through polarization and the erosion of public trust. These harms often fall most heavily on the most vulnerable. The pervasive spread of misinformation and the damage it can cause underscore the need for reasoned, evidence-informed decision-making at both the personal and public level. Strategies and tools exist to help combat these harms, strengthen, and build trust in our institutions, and boost our ability to recognize and reject the misinformation we encounter. Fault Lines details how science and health misinformation can proliferate and its impacts on individuals, communities, and society. It explores what makes us susceptible to misinformation and how we might use these insights to improve societal resilience to it. The report includes a model of the impacts of COVID‑19 misinformation on vaccination rates in Canada, producing quantitative estimates of its impacts on our health and the economy, and situating these within a broader context of societal and economic harms.
... Through playfully implicit visual (and brief text) references at the expense of targeted outsiders, visual memes are powerful vehicles for ingroup identity formation. Yet, few scholars have taken these sprightly bimodal messages seriously as conduits for disinformation and antagonism (for notable exceptions, see Beskow & Carley, 2020;Dupuis & Williams, 2019;Nieubuurt, 2021). The visually dominant single-sliced format, resembling a newspaper cover, is easy to share in online environments and is cognitively aligned with the human penchant for lending attention and credibility to visual stimuli. ...
Article
Full-text available
Contemporary concerns about the integrity of information are not easily dismissible as merely a perennial cycle of moral panic. In an attempt to provide context and map territory for future research, this conceptual paper draws comparisons between disinformation, the deliberate fabrication and dissemination of false information, and tabloid news – a news format with a century’s old reputation for sparking consternation over journalistic truth telling. Sociological functions, information economy particularities, and message packaging features are comparatively scrutinized, revealing areas of divergence, convergence, and symbiotic interface between disinformation and tabloid news that likely afford the success of both formats.
... We posit that, to some extent, the content of a short video determines its virality [20,23,40]. Viewers may be more engaged when special themes are featured, i.e., disabled people thriving to shine, gorgeous pups roaming around the park, or babies gazing at their parents. ...
Conference Paper
Full-text available
Short videos have become one of the leading media used by younger generations to express themselves online and thus a driving force in shaping online culture. In this context, TikTok has emerged as a platform where viral videos are often posted first. In this paper, we study what elements of short videos posted on TikTok contribute to their virality. We apply a mixed-method approach to develop a codebook and identify important virality features. We do so vis-à-vis three research hypotheses; namely, that: 1) the video content, 2) TikTok's recommendation algorithm, and 3) the popularity of the video creator contributes to virality. We collect and label a dataset of 400 TikTok videos and train classifiers to help us identify the features that influence virality the most. While the number of followers is the most powerful predictor, close-up and medium-shot scales also play an essential role. So does the lifespan of the video, the presence of text, and the point of view. Our research highlights the characteristics that distinguish viral from non-viral TikTok videos, laying the groundwork for developing additional approaches to create more engaging online content and proactively identify possibly risky content that is likely to reach a large audience.
... Dupuis and his colleagues A survey was done to determine the personality traits of individuals who are prone to post picture memes with false information. 10 Crovitz and Moran 11 conducted a qualitative examination of visual memes used as a type of social media deception. Zannettou et al. 12 propose a large-scale quantitative measurement of picture meme spread on the Web, where the small, polarized community subreddit is racist and loathed on mainstream social media like Twitter and Reddit. ...
Article
Full-text available
In today’s technological era, most new generations are using memes because the phenomenon of memes is rapidly gaining in popularity. In addition, various modifications and parodies use social and cultural phenomena to transform an original idea into another. Virus memes are sometimes called virus scripts and are the most widespread on the Internet. However, there is little scientific evidence for this assumption. This paper presents an innovative method to generate memes using machine learning algorithms that match the users’ humorous and relevant captions. The VGG16 network is used to return the embedded image. We have collected feedback from citizens of more than five countries and received 37 feedbacks to find humorous memes. The obtained results from feedback are very encouraging from the generated memes.
... We posit that, to some extent, the content of a short video determines its virality [20,23,40]. Viewers may be more engaged when special themes are featured, i.e., disabled people thriving to shine, gorgeous pups roaming around the park, or babies gazing at their parents. ...
Preprint
Full-text available
Short videos have become one of the leading media used by younger generations to express themselves online and thus a driving force in shaping online culture. In this context, TikTok has emerged as a platform where viral videos are often posted first. In this paper, we study what elements of short videos posted on TikTok contribute to their virality. We apply a mixed-method approach to develop a codebook and identify important virality features. We do so vis-\`a-vis three research hypotheses; namely, that: 1) the video content, 2) TikTok's recommendation algorithm, and 3) the popularity of the video creator contribute to virality. We collect and label a dataset of 400 TikTok videos and train classifiers to help us identify the features that influence virality the most. While the number of followers is the most powerful predictor, close-up and medium-shot scales also play an essential role. So does the lifespan of the video, the presence of text, and the point of view. Our research highlights the characteristics that distinguish viral from non-viral TikTok videos, laying the groundwork for developing additional approaches to create more engaging online content and proactively identify possibly risky content that is likely to reach a large audience.
... In this study, we used the results from an earlier study [44] to help inform the selection of memes that would be used here. In particular, we chose six memes from the original 12 that represent a variety of ideological leanings (please see Appendix). ...
Conference Paper
Full-text available
Social media has become a potent vector for the spread of disinformation. Content initially posted by bots, trolls, or malicious actors is often picked up and magnified by ordinary users, greatly extending its influence and reach. In order to combat disinformation online, it is important to understand how users interact with and spread this type of content, unwittingly or not. We studied patterns in the sharing of propaganda and disinformation on social media through political image-based memes. We chose a selection of six memes, and conducted a survey in order to better understand the behavior of ordinary users as they interact with propaganda and disinformation on social media. Particular attention was paid to differences based on political affiliation and psychological factors, including personality and trait affect. Negative types of affect appear to dominate the level of engagement Republicans and Independents have with memes, while positive types of affect and extraversion do the same for Democrats.
Preprint
Among the various modes of communication in social media, the use of Internet memes has emerged as a powerful means to convey political, psychological, and socio-cultural opinions. Although memes are typically humorous in nature, recent days have witnessed a proliferation of harmful memes targeted to abuse various social entities. As most harmful memes are highly satirical and abstruse without appropriate contexts, off-the-shelf multimodal models may not be adequate to understand their underlying semantics. In this work, we propose two novel problem formulations: detecting harmful memes and the social entities that these harmful memes target. To this end, we present HarMeme, the first benchmark dataset, containing 3,544 memes related to COVID-19. Each meme went through a rigorous two-stage annotation process. In the first stage, we labeled a meme as very harmful, partially harmful, or harmless; in the second stage, we further annotated the type of target(s) that each harmful meme points to: individual, organization, community, or society/general public/other. The evaluation results using ten unimodal and multimodal models highlight the importance of using multimodal signals for both tasks. We further discuss the limitations of these models and we argue that more research is needed to address these problems.
Article
The scale, volume, and distribution speed of disinformation raise concerns in governments, businesses, and citizens. To respond effectively to this problem, we first need to disambiguate, understand, and clearly define the phenomenon. Our online information landscape is characterized by a variety of different types of false information. There is no commonly agreed typology framework, nor are there specific categorization criteria or explicit definitions to serve as a basis for further investigation of the area. Our work is focused on filling this need. Our contribution is twofold. First, we collect the various implicit and explicit disinformation typologies proposed by scholars. We consolidate the findings following certain design principles to articulate an all-inclusive disinformation typology. Second, we propose three independent dimensions with controlled values per dimension as categorization criteria for all types of disinformation. The taxonomy can promote and support further multidisciplinary research to analyze the special characteristics of the identified disinformation types.
Article
The 2016 U.S. presidential election brought considerable attention to the phenomenon of “fake news”: entirely fabricated and often partisan content that is presented as factual. Here we demonstrate one mechanism that contributes to the believability of fake news: fluency via prior exposure. Using actual fake-news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this “illusory truth effect” for fake-news headlines occurs despite a low level of overall believability and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem. It is interesting, however, that we also found that prior exposure does not impact entirely implausible statements (e.g., “The earth is a perfect square”). These observations indicate that although extreme implausibility is a boundary condition of the illusory truth effect, only a small degree of potential plausibility is sufficient for repetition to increase perceived accuracy. As a consequence, the scope and impact of repetition on beliefs is greater than has been previously assumed.
Article
The fake news epidemic makes it imperative to develop a diagnostic framework that is both parsimonious and valid to guide present and future efforts in fake news detection. This paper represents one of the very first attempts to fill a void in the research on this topic. The LeSiE (Lexical Structure, Simplicity, Emotion) framework we created and validated allows lay people to identify potential fake news without the use of calculators or complex statistics by looking out for three simple cues.
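A rough sketch of what cue-based screening along those three dimensions might look like in practice. The word list, heuristics, and thresholds below are invented for illustration only and are not the validated LeSiE instrument.

```python
# Hypothetical sketch of simple lexical/simplicity/emotion cues for a headline.
# Word lists and thresholds are invented; the validated LeSiE cues differ.
import re

EMOTION_WORDS = {"shocking", "outrage", "unbelievable", "disaster", "amazing", "horrifying"}

def cue_scores(headline: str) -> dict:
    words = re.findall(r"[A-Za-z']+", headline.lower())
    avg_word_len = sum(map(len, words)) / max(len(words), 1)
    return {
        # Lexical structure: heavy punctuation and all-caps words.
        "lexical": headline.count("!") + sum(w.isupper() and len(w) > 1 for w in headline.split()),
        # Simplicity: short words suggest simpler, more shareable phrasing.
        "simplicity": 1.0 if avg_word_len < 5 else 0.0,
        # Emotion: presence of emotionally charged terms.
        "emotion": sum(w in EMOTION_WORDS for w in words),
    }

print(cue_scores("SHOCKING!! You won't believe what this politician did next"))
```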
Article
Many democratic nations are experiencing increased levels of false information circulating through social media and political websites that mimic journalism formats. In many cases, this disinformation is associated with the efforts of movements and parties on the radical right to mobilize supporters against centre parties and the mainstream press that carries their messages. The spread of disinformation can be traced to growing legitimacy problems in many democracies. Declining citizen confidence in institutions undermines the credibility of official information in the news and opens publics to alternative information sources. Those sources are often associated with both nationalist (primarily radical right) and foreign (commonly Russian) strategies to undermine institutional legitimacy and destabilize centre parties, governments and elections. The Brexit campaign in the United Kingdom and the election of Donald Trump in the United States are among the most prominent examples of disinformation campaigns intended to disrupt normal democratic order, but many other nations display signs of disinformation and democratic disruption. The origins of these problems and their implications for political communication research are explored.
Article
In 2016, Donald Trump was elected president of the United States. Right-wing support online was particularly influential in this event. Indeed, some argued that the biggest winner of the 2016 US presidential election was the 'alt-right', an extreme right-wing community that communicates through online image boards like 4chan and social news sites like Reddit. By close-reading images and memes from Facebook pages and Instagram, we traced the circulation and impact of these memes, as well as their visual connections and themes. We argue that the communities that share these memes adhere to a masculine iconography. By drawing inspiration from different texts, such as games and historical portraits, Trump is glorified by his supporters as the ultimate saviour, aided by other politicians such as Putin. In its framing of patriarchy, sexism, racism, and even racial purity as a heroic and cartoonish narrative, the alt-right renders its memes as part of a powerful male story. We argue that the use of parody to discredit an opponent is what allows memes to be read as an incredibly powerful, persuasive medium, which has led to them being adopted by the alt-right to justify a racist and sexist discourse.
Conference Paper
Our work considers leveraging crowd signals for detecting fake news and is motivated by tools recently introduced by Facebook that enable users to flag fake news. By aggregating users' flags, our goal is to select a small subset of news every day, send them to an expert (e.g., via a third-party fact-checking organization), and stop the spread of news identified as fake by an expert. The main objective of our work is to minimize the spread of misinformation by stopping the propagation of fake news in the network. It is especially challenging to achieve this objective as it requires detecting fake news with high confidence as quickly as possible. We show that in order to leverage users' flags efficiently, it is crucial to learn about users' flagging accuracy. We develop a novel algorithm, DETECTIVE, that performs Bayesian inference for detecting fake news and jointly learns about users' flagging accuracy over time. Our algorithm employs posterior sampling to actively trade off exploitation (selecting news that maximize the objective value at a given epoch) and exploration (selecting news that maximize the value of information towards learning about users' flagging accuracy). We demonstrate the effectiveness of our approach via extensive experiments and show the power of leveraging community signals for fake news detection.
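A stripped-down sketch of the posterior-sampling idea: maintain a Beta posterior over each user's flagging accuracy, sample accuracies, score flagged items by the sampled evidence, send the top items to an expert, and update the posteriors from the verdicts. This simplifies away much of the actual DETECTIVE algorithm and uses invented data.

```python
# Hypothetical sketch of posterior sampling over users' flagging accuracy.
# A simplification for illustration, not the DETECTIVE algorithm itself.
import numpy as np

rng = np.random.default_rng(1)

# Beta(alpha, beta) posterior over each user's probability of flagging correctly.
alpha = {"u1": 1.0, "u2": 1.0, "u3": 1.0}
beta = {"u1": 1.0, "u2": 1.0, "u3": 1.0}

# Which users flagged which news item today (a flag means "I think this is fake").
flags = {"news_a": ["u1", "u2"], "news_b": ["u3"], "news_c": ["u1", "u2", "u3"]}
budget = 1  # how many items the expert can verify per day

def select_for_expert():
    sampled = {u: rng.beta(alpha[u], beta[u]) for u in alpha}           # Thompson sample
    score = {n: sum(np.log(sampled[u] / (1 - sampled[u])) for u in us)  # sampled log-odds evidence
             for n, us in flags.items()}
    return sorted(score, key=score.get, reverse=True)[:budget]

def update(news, is_fake):
    for u in flags[news]:  # flaggers were right iff the item really was fake
        if is_fake:
            alpha[u] += 1
        else:
            beta[u] += 1

chosen = select_for_expert()
print("sent to expert:", chosen)
update(chosen[0], is_fake=True)  # pretend the expert confirmed it as fake
print("updated posteriors:", alpha, beta)
```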
Conference Paper
Echo chambers, i.e., situations where one is exposed only to opinions that agree with their own, are an increasing concern for the political discourse in many democratic countries. This paper studies the phenomenon of political echo chambers on social media. We identify the two components in the phenomenon: the opinion that is shared, and the “chamber” (i.e., the social network) that allows the opinion to “echo” (i.e., be re-shared in the network), and we examine closely how these two components interact. We define a production and consumption measure for social-media users, which captures the political leaning of the content shared and received by them. By comparing the two, we find that Twitter users are, to a large degree, exposed to political opinions that agree with their own. We also find that users who try to bridge the echo chambers, by sharing content with diverse leaning, have to pay a “price of bipartisanship” in terms of their network centrality and content appreciation. In addition, we study the role of “gatekeepers,” users who consume content with diverse leaning but produce partisan content (with a single-sided leaning), in the formation of echo chambers. Finally, we apply these findings to the task of predicting partisans and gatekeepers from social and content features. While partisan users turn out to be relatively easy to identify, gatekeepers prove to be more challenging.
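A small sketch of the production/consumption idea: score each user by the mean political leaning of the content they share (production) and of the content they receive (consumption), then flag candidate gatekeepers as users whose consumption is diverse but whose production is partisan. The leaning scores, thresholds, and data below are invented for illustration.

```python
# Hypothetical sketch of production/consumption leaning scores and gatekeeper detection.
# Leaning values in [-1, 1]; data and thresholds are invented for illustration.
from statistics import mean

shared = {                      # leanings of content each user posted or re-shared
    "alice": [0.9, 0.8, 0.95],
    "bob":   [-0.7, -0.8, -0.6],
    "carol": [0.85, 0.9, 0.8],
}
received = {                    # leanings of content appearing in each user's feed
    "alice": [0.8, 0.9, 0.7],         # one-sided feed: partisan, not a gatekeeper
    "bob":   [-0.75, -0.8, -0.7],
    "carol": [0.9, -0.8, 0.1, -0.6],  # diverse feed but partisan output: gatekeeper
}

for user in shared:
    production = mean(shared[user])
    consumption = mean(received[user])
    is_gatekeeper = abs(production) > 0.5 and abs(consumption) < 0.3
    print(f"{user}: production={production:+.2f}, consumption={consumption:+.2f}, "
          f"gatekeeper={is_gatekeeper}")
```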
Article
This study proposes a theoretical framework for understanding how and why certain memes prevail as a form of political discourse online. Since memes are constantly changing as they spread, drawing inferences from a population of memes as concrete digital artifacts is a pressing challenge for researchers. This article argues that meme selection and mutation are driven by a cooperative combination of three types of communication logic: wasteful play online, social media political expression, and cultural evolution. To illustrate this concept, we map Shepard Fairey’s Obama Hope Poster as it spreads online. Employing structural rhetorical analysis, the study categorizes Internet memes on branching diagrams as they evolve. We argue that mapping these variations is a useful tool for organizing memes as an expression of the values and preferences embedded in online communities. The study adds to the growing literature around the subversive nature of memetic diffusion in popular and political culture.
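A brief sketch of how one might organize meme variants as a branching diagram for this kind of analysis, using a directed tree. The variant names and lineage below are invented purely to show the data structure, not the study's actual mapping of the Obama Hope Poster.

```python
# Hypothetical sketch: a branching diagram of meme variants as a directed tree.
# Variant names and lineage are invented for illustration.
import networkx as nx

lineage = [
    ("Obama Hope Poster", "HOPE -> NOPE parody"),
    ("Obama Hope Poster", "Local-candidate remix"),
    ("HOPE -> NOPE parody", "Movie-villain variant"),
    ("Local-candidate remix", "Protest-sign variant"),
]

tree = nx.DiGraph(lineage)

# Depth of each variant from the original captures how far it has mutated.
depths = nx.shortest_path_length(tree, source="Obama Hope Poster")
for variant, depth in sorted(depths.items(), key=lambda kv: kv[1]):
    print(f"{'  ' * depth}{variant}")
```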