Fake News on Social Media:
The (In)Effectiveness of Warning Messages
Completed Research Paper
Björn Ross
University of Duisburg-Essen
Duisburg, Germany
bjoern.ross@uni-due.de
Anna-Katharina Jung
University of Duisburg-Essen
Duisburg, Germany
anna-katharina.jung@uni-due.de
Jennifer Heisel
University of Duisburg-Essen
Duisburg, Germany
jennifer.heisel@stud.uni-due.de
Stefan Stieglitz
University of Duisburg-Essen
Duisburg, Germany
stefan.stieglitz@uni-due.de
Abstract
Warning messages are being discussed as a possible mechanism to contain the circulation
of false information on social media. Their effectiveness for this purpose, however, is
unclear. This article describes a survey experiment carried out to test two designs of
warning messages: a simple one identical to the one used by Facebook, and a more
complex one informed by recent research. We find no evidence that either design is clearly
superior to not showing a warning message. This result has serious implications for
brands and politicians, who might find false information about them spreading
uncontrollably, as well as for managers of social media platforms, who are struggling to
find effective means of controlling the diffusion of misinformation.
Keywords: fake news, misinformation, disinformation, social media, social networking
sites, information diffusion, journalism, brand management
Introduction
As social media has become an important news source for its users, the spread of inaccurate information
has been theorised to adversely affect the decision-making processes of consumers and voters. Examples of
this include false reports that Pope Francis endorsed Donald Trump for President, or the case of companies
in China which disseminated fake news about their competitors (Chen et al. 2013; Evon 2016). Faced with
public criticism, platforms have begun to take action. On 15 December 2016, Facebook announced that it
would flag posts of questionable credibility (“fake news”) as disputed, after third-party fact checkers had
manually checked their quality (Mosseri 2016). A warning message was displayed alongside the original
post. On 20 December 2017, Facebook declared that this practice would be discontinued (Lyons 2017). The
warning messages had evidently not had their intended effects. What went wrong?
The use of warning messages to encourage or discourage specific behaviours on the part of the users of an
information system has been well researched. When warnings were examined in the contexts of product
recommendations (Xiao and Benbasat 2015) and computer-mediated job interviews (Giordano et al. 2008),
a well-designed message was found to improve people’s ability to detect manipulation. Given this contrast
between what is suggested by previous research and the observations by practitioners, and given the large
societal importance of manipulated messages, there is a need to better understand the mechanisms behind
the success and failure of “fake news warning messages”.
This article examines the effectiveness of messages warning against fake news on social media. Specifically, we address how the success or failure of such a message may be influenced by its framing. In the context of product recommendations, research based on signal detection theory has found that a warning is more likely to be effective if it includes specific advice to users on how to act than if it is a simple warning (Xiao and Benbasat 2015). In this framework, Facebook’s warning system would be considered a simple warning.
We investigate whether a warning that additionally incorporates advice would be more likely to be effective,
as previous studies suggest. This leads to the following research questions:
RQ1: Does flagging news as disputed help users distinguish fake news from real news?
RQ2: Does the inclusion of (negatively framed) risk-handling advice in the warning message further help
users distinguish fake news from real news?
We conducted a survey with 151 participants. The results further challenge the notion that warning
messages are effective instruments for controlling the spread of fake news on social media. Neither in the
phrasing used by Facebook nor in the formulation suggested by recent scholarship are the warning messages clearly superior to not showing a message.
The paper is structured as follows. As a first step, the literature review defines fake news and summarises
relevant studies about people’s ability to detect false information and the effectiveness of warning messages
in various digital contexts. In the third section, signal detection theory is introduced, which forms the
theoretical foundation regarding how the detection of warning messages can be examined. The method
section describes the design of the pretest and survey, including the stimulus material. The subsequent
section presents the statistical results. The penultimate section discusses these results before the final
section concludes the paper with a summary of the most important scientific and practical implications.
Literature Review
Fake News and its detection
“Fake news” is a fairly recent term. Although the manipulation of online content is not new and has been
studied before under various names such as misinformation, disinformation, rumours and hoaxes (Bessi et
al. 2015; Chua et al. 2016; Gupta et al. 2014; Shin et al. 2017), public awareness of fake news has risen in
recent years. This can be seen, for example, in an official warning by the World Economic Forum which
describes fake news as a “digital wildfire” and “global risk” for our “hyper-connected world” (Howell 2013).
Social media has changed news consumption and production behaviours profoundly by blurring the
contours between professional journalists and users (Deuze et al. 2007). This shift from a media landscape
dominated by journalists who acted as gatekeepers of information to one which is characterised by huge
amounts of user-generated content has diversified the available information but also simplified the
dissemination of fake news (Lewandowsky et al. 2012). Such false information may be spread by accounts
controlled by humans as well as those run by algorithms, called “social bots” (Stieglitz et al. 2017). Recent
examples of fake news include reports that Pope Francis endorsed Donald Trump for President of the United States and cases of Chinese companies spreading rumours about their competitors (Chen et al. 2013; Evon 2016).
In contrast to the related concept of rumours, which are ambiguous pieces of information that may be proven either true or false, fake news can be characterised as always false (Liu et al. 2014; Spiro et al. 2012). However,
someone spreading the story is not necessarily aware that it is false. Verifying it may be more difficult for
some than for others, depending on various internal and external factors (Lewandowsky et al. 2012). Fake
news often uses sensationalist language and is regularly presented with the help of clickbait characteristics
(Chen, Conroy, and Rubin 2015). Although the intentions of the outlets and individuals who spread fake news are more difficult to investigate than the content of fake news or user perceptions, researchers agree that the dissemination of fake news is often linked to political intentions or financial benefits expected by the sender (Allcott and Gentzkow 2017; Chen et al. 2013).
Having thus described the phenomenon of fake news, we can consider how users detect it online and, more generally, how they determine the credibility of a message. When users need to
evaluate if a message is credible, they draw their conclusion based on the sender, the medium and the
content of the message (Hu and Sundar 2010). While most credibility research focuses on the source of a message, characteristics of the message itself have also been shown to have a significant influence on individuals’ credibility judgements. Eastin (2001) found that characteristics of a message often hold a greater influence
on credibility judgements than sources do. As shown by the elaboration likelihood model of persuasion, the
characteristics of a message are more influential than the source when an individual’s knowledge,
involvement, and personal relevance are high, leading to an increased motivation to question the message’s
content. Whenever there is little to no information about the source, individuals use the message itself for
credibility judgements (Eagly and Chaiken 1993). While the effects of warning messages were not explicitly
studied, these results do not preclude the possibility that a warning message displayed alongside the
message might encourage the user to question the credibility of a claim. This is true especially if the message
points out, for example, how the identification of incorrect reports is personally relevant to the user.
Another relevant strand of literature deals with people’s ability to detect deception. This literature shows
mixed results. A number of surveys found that even experienced humans are poor at detecting deception
(Grazioli 2004; Grazioli and Jarvenpaa 2001). Traditional deception detection experiments take place in face-to-face settings. Research shows that in such settings, individuals are typically poor at detecting deception (Cao et al. 2015; Giordano and Tilley 2006). One reason why it is difficult to detect
manipulated data is that deceptive behaviour evolves over time (Song et al. 2012). Klein et al. (1997) state
that success at this task depends on the given circumstances, and that, contrary to the traditional literature, users can indeed detect errors. Conroy et al. (2015) provided a typology of veracity assessment
methods specialised in fake news detection. These methods aim to identify news published online that is intentionally deceptive. Besides linguistic analyses, network behaviour analysis and up-to-date fact
checking, they point out that a fake news detection tool should amplify a human’s detection skills.
Warning messages as a support mechanism
A promising way to improve the individual’s ability to successfully detect deceptive data is to include a
warning message. It is generally stated that training and warnings are the two main strategies to improve
deception detection (DePaulo et al. 2003), as they lead to an improved deception detection accuracy
(Grazioli 2004). But whereas detection training cannot be provided to all users of social networking services
or even all Internet users, warnings can reach more people, regardless of their location (Ivaturi et al. 2014).
A warning is characterised as a communication medium or a tool that informs about hazards (Wogalter
2006). One of its goals can be to increase sensitivity to the possibility of manipulations (Biros et al. 2002).
Hence, the receiver is alerted to cues of potential manipulations and put in an enhanced state of alertness,
in which he or she is more sensitive to the possibility of manipulation and more conscious of the possible
presence of false data (Biros et al. 2002; Xiao and Benbasat 2015). Thus, the warning itself works as an
amplifier (Ivaturi et al. 2014). The intention of providing a warning is to affect human behaviour by, for
instance, helping the consumer make the right decisions (Silic et al. 2017) and therefore, to avoid potential
negative consequences (Wogalter 2006).
Yet, the use of warning messages has received mixed empirical support. Whereas some studies comparing the presence and absence of warning messages indicate that a warning improves individuals’ detection skills (Biros et al. 2002; Egelman et al. 2008; George et al. 2004; Robbins and Waked 1997; Zhang et al. 2014), other studies did not find evidence that suspicious users are better at detecting errors (Egelman
et al. 2008; Junger et al. 2017; Marett and George 2005; Wu et al. 2006; Zhang et al. 2014). Zhang et al.
(2014) even found an adverse effect: Participants were more likely to disclose social media information on
a website after being warned about its security risks, even though those who were shown the warning were
also more likely to state that they felt that their personal data was threatened. This inconsistency between
expressed attitudes and observed behaviour could be seen as an example of the privacy paradox. Conzola
and Wogalter (2001) state that for a warning to be effective, the receiver must go through some information
processing steps. First, the warning must attract attention (attention switch) and hold it long enough for all indicators to be noticed (attention maintenance). Next, the warning’s content must be understood
(comprehension) and go along with the receiver’s opinions and beliefs (beliefs and attitudes). Moreover,
the warning should motivate the receiver to take the required action (motivation).
But even when a warning has the expected effect on human detection success, it can also have negative side
effects. Namely, it was found that individuals who have been provided with a warning message may also be
more likely to consider data that is actually truthful to be manipulated (Biros et al. 2002; Giordano and Tilley
2006; Toris and DePaulo 1984; Xiao and Benbasat 2015). Even if the rate of identified manipulations is
high, an equally high or even higher number of incorrect claims of manipulation about data that is in fact
accurate could put overall detection performance at risk and even damage the individual’s reputation
(Giordano and Tilley 2006), similar to the boy who cried wolf in Aesop’s fable. Apart from that, research
has found that the more explicit a warning is, the more successful it tends to be. An explicit warning draws
attention to potential risks (Conzola and Wogalter 2001) and as a consequence, raises suspicion (Ivaturi et
al., 2014). Silic et al. (2017) claim that a warning message about a hazard which contains explicit
information results in individuals becoming more vigilant compared with a warning message with less
explicit information. Such explicit warnings have the effect that the receiver is better prepared and feels less
uncertain (Ivaturi et al., 2014). This again leads to a more accurate performance of risk and trust
assessments (Bal 2014) and an increased likelihood of detection (Xiao and Benbasat 2015). Also, the more
often a warning is shown to users, the more their confidence in their detection skills can increase (Zahedi
et al. 2015). In summary, a warning message is a detection support mechanism which is most effective when it is explicit. Research emphasises that it is a powerful tool for improving the detection of manipulations, and it has therefore been used in several studies.
The design of warning messages
The literature reviewed above focuses on the presence and absence of warnings. A widely used survey design
for this question is presented by Giordano et al. (2008). They investigated how successful individuals can
be at detecting deception while reviewing computer-mediated job interviews. In order to vary the level of
suspicion in participants, they distinguished between two conditions, the presence and absence of a warning
about a possible deception. They found that the presence of a warning improved reviewers’ detection
accuracy. Additionally, it did not have any effect on the number of incorrectly perceived manipulations.
In contrast, several studies have gone beyond the presence and absence of warning messages and have also
examined how the design of warning messages affects the detection of manipulation, albeit not in the
context of fake news on social media. For example, Egelman et al. (2008) differentiated between active and
passive warnings. Ivaturi et al. (2014) developed a research model on how to examine framing effects of
both positively and negatively framed warning messages. The model is intended to test the effects of pointing out
consequential gains (positively framed) and consequential losses (negatively framed) on the individual’s
deception detection. Regarding biased online product recommendations, Xiao and Benbasat (2015) also
investigated various designs and focused on the contents of the warning messages. Their focus lay on two
design characteristics, namely inclusion of risk-handling advice (presence of warning message) and framing
of such advice (positively and negatively formulated advice). The simple warning message only mentioned
that the automated shopping advisors may provide biased recommendations. In contrast, the framed
warning message directly asked the consumers to be careful and compare several brands and offers.
Furthermore, it either closed with the positive remark that research shows that verification decreased the
chance to receive biased recommendations, or a negatively framed sentence which stated that consumers
who fail to verify the product recommendation agent’s advice had an increased risk of being misled. Their
online experiment revealed that the various test conditions led to different outcomes. A simple warning
message led to an increased number of both correctly and incorrectly identified deceptions. By contrast, warning messages including negatively framed advice about assessing source credibility increased correct
detection and decreased incorrect detection. Positively framed warnings did not show this effect.
Xiao and Benbasat (2015) explain this result by the fact that, through the advice, participants gained knowledge and skills and were presented with a strategy for dealing with risky situations. They were more
motivated to adapt their behaviour in order to follow the advice and to achieve good detection results.
Research on warning messages that inform social media users about potential fake news has not been conducted yet. Although there are numerous studies on the development of algorithms for the automatic detection of fake news on social media, the impact of flagging and warning about the identified messages is still a research gap, which this study addresses. Due to Facebook’s high adoption rate and its unrestricted text length, this social network was chosen as the most suitable platform for this study.
Theoretical Background
Signal Detection Theory
Signal detection theory (Green and Swets 1966) is an appropriate basis for researching error detection tasks.
This theory provides a systematic view and terminology regarding a user’s (or system’s) ability to recognise
or not recognise an error, and the fact that items could be falsely identified as an error. This allows one to
evaluate the effectiveness of measures to help users detect errors. The theory distinguishes between two
classes of events, namely ‘noise’ and ‘signal’. Signal detection theory models the ability of individuals or
systems to distinguish signal from noise. In the context of the detection of deception on social media, the
background “noise” consists of regular messages. The “signal” – which is to be detected – deviates from noise by
being manipulated or deceptive. The terminology of signal detection theory can be introduced with a simple
“yes-no” experiment in which individuals attempt to detect a signal. The two possible responses are “yes”
(“there was a signal”) and “no” (“there was no signal”). Consequently, four different outcomes are possible:
a “hit” (a signal was present and is discovered), a “miss” (a signal was present but is not discovered), a “false
alarm” (background noise is incorrectly identified as being a signal) and a “correct rejection” (background
noise is correctly identified as such). The detection is considered successful if the number of hits and correct
rejections is high and the number of false alarms and misses low. These four potential outcomes are shown
in Table 1. In summary, signal detection theory can be used to characterise people’s performance when determining the presence or absence of a signal (Green and Swets 1966).
Conclusion about deception | State: Signal (manipulated data) | State: Noise (not manipulated data)
Error detected | Hit | False alarm
Error not detected | Miss | Correct rejection

Table 1. Presence of errors and detector responses
Two further key aspects of signal detection theory are the concepts of discriminant ability and decision
threshold (Green and Swets, 1966). These factors have an impact on the performance of individuals when
attempting to identify signals and noise (Klein et al., 1997).
Discriminant ability describes the ability to distinguish between noise and signal. As Jensen et al. (2011)
put it, discriminant ability is a decision variable which influences the individual’s decision on whether they
think a signal exists or not. Thus, every individual’s discriminant ability depends on their competency at detecting errors. In the context of fake news, it could represent how easy it is for an individual to correctly
distinguish between fake news and real news, depending on the user’s media literacy. Individuals with low
discriminant ability might falsely consider fake news true, while accurate factual statements might be
perceived as manipulated. A high discriminant ability, in contrast, results in more hits and correct
rejections. Discriminant ability is influenced by personal and situational characteristics (Scott 2006). An
example in the context of fake news is the prior beliefs of the user, which might collide or match with the
information presented in an article (Lewandowsky et al. 2012; Nyhan and Reifler 2010). According to Xiao
and Benbasat (2015), discriminant ability is high when individuals are well trained and experienced in
detection tasks and have the right tools and access to further information. One example of this could be a
warning message.
The decision threshold refers to the point at which the presence of a suspicion leads the individual to declare
the detection of a signal (cf. Klein et al. 1997). Signal detection tasks are often characterised by ambiguous
situations (Xiao and Benbasat 2015). Each individual has their own level of what they consider ambiguous
(Scott 2006). The placement of the threshold is thus perceptual and subjective (Jensen et al. 2011). A high
decision threshold means that the individual might ignore the given signal and mistake it for background
noise. Fake news might not be identified as such but perceived as real news. Hence, the detection task would
lead to fewer false alarms and more correct rejections but also to fewer hits and more misses. An individual
with a high decision threshold who is attempting to detect fake news is thus too gullible. On the other hand,
the lower an individual’s decision threshold, the more noise will be perceived as signal. A person who is supposed to identify fake news among real news, for instance, would be more likely to perceive the news shown as manipulated. Thus, there would be more hits and fewer misses. However, this would also lead to
an increased number of false alarms and a decrease in correct rejections. In the fake news context, such an
individual has become overly distrustful of the media and therefore often incorrectly considers unaltered
information to have been manipulated. The best possible result is a high rate of hits and correct rejections
combined with a low rate of false alarms and misses. To achieve this, the decision threshold should neither be too
low nor too high. The placement of the threshold is also influenced by the costs and benefits of error
detection (Klein et al. 1997). If missing an error is costly or the correct detection is linked to great benefits,
the threshold should be low, as individuals may accept a high number of false alarms. When false alarms
are damaging or embarrassing, the threshold should be high, leading to fewer hits (Klein et al. 1997).
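Although the analysis in this paper works directly with hit and false-alarm proportions, the two concepts just described have a standard formalisation in signal detection theory (Green and Swets 1966), given here only as a clarifying aside:

    d′ = z(H) − z(F)            (sensitivity, i.e. discriminant ability)
    c  = −½ · [z(H) + z(F)]     (response criterion, i.e. decision threshold)

where H is the hit rate, F is the false-alarm rate, and z(·) is the inverse of the standard normal cumulative distribution function. A larger d′ corresponds to higher discriminant ability; a more positive c corresponds to a higher, more conservative decision threshold, and a more negative c to a lower, more distrustful one.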
Importantly, signal detection theory suggests that there are ways to increase sensitivity to deceptive data
(Biros et al. 2002). Support mechanisms such as training and explicit warnings (Biros et al. 2002) can lower
decision thresholds in judging ambiguous situations. Warnings with advice on how to avoid harm increase
discriminant ability in detecting errors (Xiao and Benbasat 2015). Based on these findings and knowledge
gained through the review of literature and signal detection theory, the hypotheses for the present study
were developed.
Hypotheses
Warning messages can have a positive impact on deception detection accuracy (Biros et al. 2002; George
et al. 2004; Giordano et al. 2008; Grazioli 2004) and make users more aware of potential manipulations
(Biros et al. 2002; Ivaturi et al. 2014; Xiao and Benbasat 2015). However, as signal detection theory shows,
individuals can also become overly sensitive to errors (Giordano and Tilley 2006). This is because a warning
message makes the user aware of the possibility that the news can be deceptive and thus also makes the
user more suspicious. Simple warnings can thus result in a decreased decision threshold, which leads to
more hits but also more false alarms (e.g. Biros et al. 2002; Giordano and Tilley 2006; Xiao and Benbasat
2015). These results suggest that in the current study, a warning could equally make the Facebook user
more suspicious, resulting in an increased tendency to perceive news as fake. It is important to find out whether
the ability to report and flag news on social media is likely to entail this problematic consequence. This leads
to the following research question and hypotheses:
RQ1: Does flagging news as disputed help users distinguish fake news from real news?
H1: Compared with those receiving no warning, users who are provided with a warning will be
more likely to perceive manipulations in fake news, thus resulting in more hits (i.e. fewer misses).
H2: Compared with those receiving no warning, users who are provided with a warning will be
more likely to perceive manipulations in real news, thus resulting in more false alarms (i.e. fewer
correct rejections).
However, a previous study that not only focused on the presence and absence of warning messages found
that the inclusion of advice decreases false alarms, further increases hits and thus leads to a better overall
result (Xiao and Benbasat 2015). In order to analyse which type of advice achieves the best results, they distinguished between positively and negatively framed advice. Xiao and Benbasat (2015) found that only the
latter improved the participants’ sensitivity and detection skills. Thus, this study only considers negatively
framed advice and examines if Facebook’s planned warning design is sufficient or if advice that
accompanies the warning message improves users’ hit and false alarm rates. In this case, a higher level of
discriminant ability is expected, leading to a higher number of hits and a lower number of false alarms. This
leads to the second research question:
RQ2: Does the inclusion of (negatively framed) risk-handling advice in the warning message further help
users distinguish fake news from real news?
H3: Compared with those receiving a simple warning, users provided with a warning with advice
will be more likely to perceive manipulations in fake news, resulting in more hits (i.e. fewer misses).
H4: Compared with those receiving a simple warning, users provided with a warning with advice
will be less likely to perceive manipulations in real news, resulting in fewer false alarms (i.e. more
correct rejections).
Research Design
Summary
To test the influence of warning messages on participants’ abilities to distinguish fake from real news, an
online survey was conducted. Twelve news items were presented to each participant, six examples of fake
news and six real news stories. Participants were randomly assigned to one of three conditions. Participants
in the first condition were shown all twelve news items without any warning messages. Participants in
conditions 2 and 3 were shown six news items (three fake, three real) with a warning message and the
remaining six items without one. Conditions 2 and 3 differed in the warning message that was shown (see
Figure 1). Participants were asked if they thought the story was manipulated or fake.
To compare the differences in means between the three conditions, an ANOVA was conducted. The independent variable is the experimental condition; the dependent variables were derived from the participants’ responses to the question regarding the detection of manipulations.
Two dependent variables were considered: the number of hits, to examine how often participants correctly
recognised fake news as such or incorrectly considered it true (H1 and H3), and the number of false alarms,
to examine if they correctly identified actual news or incorrectly considered it to have been manipulated
(H2 and H4).
Planned contrasts were calculated to examine the research questions. The first research question concerns
how users’ performance in detecting fake news is affected by the presence of a warning message, regardless
of its design. The first contrast therefore compares the participants receiving no warning with those
receiving a warning message of either design (H1 and H2). The second research question concerns the effect
of the design of the message, and therefore, the second contrast compares participants who were shown the
simple warning with those who were presented advice alongside the warning message (H3 and H4).
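The paper does not state which statistical software was used; purely as an illustration of the contrast tests described above, a minimal sketch in Python could look as follows. The arrays no_warning, simple and advice are hypothetical per-participant hit (or false-alarm) proportions for the three conditions; here they are filled with random placeholder values.

    import numpy as np
    from scipy import stats

    def planned_contrast(groups, weights):
        """t-test for a planned contrast over k independent groups.
        groups: one 1-D array per condition; weights: coefficients summing to zero."""
        n = np.array([len(g) for g in groups])
        means = np.array([g.mean() for g in groups])
        # Pooled within-group variance (the mean squared error of the one-way ANOVA)
        ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
        df_error = int(n.sum()) - len(groups)
        mse = ss_within / df_error
        estimate = float(np.dot(weights, means))
        se = np.sqrt(mse * np.sum(np.square(weights) / n))
        t = estimate / se
        p = 2 * stats.t.sf(abs(t), df_error)
        return t, p, df_error

    # Random placeholder data; group sizes match the study (53, 50, 48 participants)
    rng = np.random.default_rng(0)
    no_warning, simple, advice = (rng.uniform(0, 1, size) for size in (53, 50, 48))

    # Contrast 1 (H1/H2): no warning vs. the two warning conditions combined
    print(planned_contrast([no_warning, simple, advice], [-2, 1, 1]))
    # Contrast 2 (H3/H4): simple warning vs. warning with advice
    print(planned_contrast([no_warning, simple, advice], [0, -1, 1]))

With three groups of 53, 50 and 48 participants, the error degrees of freedom are 151 − 3 = 148, matching the t(148) values reported in the Results section.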
Survey description
Before starting the online survey, participants were asked only to participate if they had a Facebook account
and sufficient knowledge of English to understand the content of the stories. They were also asked to pay
close attention to the survey and not to leave the browser.
In the first section of the survey, participants completed a short questionnaire to collect demographic data
(age, gender and level of education). Several questions covered social media usage habits. Users were asked
whether, and to what extent, they used Facebook, Twitter, Instagram and others. They were asked on a 5-
point Likert scale how likely they were to use the respective social media to like and share content.
In the second section of the survey, participants were shown the twelve Facebook posts, as described above.
Below each story, the participants were asked to answer a number of questions with a “yes” or a “no”, similar
to other studies that also made use of a simple choice between two answers (Biros et al. 2002; Giordano
and Tilley 2006; Giordano et al. 2008). Among the items was the sentence “This story seems to be
manipulated or fake.”, which was used for the dependent variables in this study. The other questions, such
as “I would like or share this story on Facebook.” were asked to avoid revealing the actual topic of this study.
If the participants realised that the main goal of the study was to assess their detection performance, they
might have paid more attention to the veracity of the stories than they would in a realistic scenario, when
casually scrolling through their Facebook feeds. As a result, their performance would have been
unrealistically inflated.
Finally, participants were informed about the true purpose of the study, asked if they had left the browser
during the study, and shown which of the items were true or false.
Selection of stories
The stories presented in the survey were selected as follows. In order to ensure that the fake and real news
stories were otherwise comparable, six topics were chosen (‘health’, ‘sex’, ‘nature’, ‘Trump’, ‘Syria’ and
‘crime’). The news stories were collected from the Internet and inspected for their accuracy. More precisely,
the fact-checking website Snopes.com served as the source of information. This independent website is one
of the most reputable fact-checking sites on the Internet. It researches fake news and urban legends and
indicates their level of veracity (‘false’, ‘true’, ‘mostly false’, ‘mostly true’ and ‘unproven’). In this study, six
items were taken from the category ‘false’ and six from the category ‘true’.
To ensure that the survey was a realistic environment for the participants, the news items were searched
for through Facebook’s search and screenshotted. As a result, they looked exactly as they would appear on
Facebook. Since the stories were all found on Facebook, they are real-world examples of stories diffusing
on social networking sites. Furthermore, the screenshots of the stories (Figure 1) ensure that the design is
consistent. However, some details needed to be removed to avoid influencing and distracting the
participants: information about the news source, the number of likes, reactions, comments and shares and
information about the sender (such as profile picture and name) were blackened. This way, participants’
assessment of the veracity of the story was not influenced by the factors such as the credibility of the source
(Bucy 2003) or the profile owner’s trustworthiness (Amelina and Zhu 2016). As a result, all news was
presented in the same design familiar to participants, including information about the date it was posted
and the caption.
Pretest
A pretest with thirteen participants was carried out to confirm that the selected news stories were
appropriate. Ideally, the stories should not be familiar to the participants, and participants’ responses to
the stories should vary. It would pose difficulties for the statistical analysis if most of the stories were
considered manipulated by almost all or by almost none of the participants, as this could lead to a highly
skewed or truncated distribution in the dependent variable.
As a result of the pretest, one of the stories was replaced, since two participants were familiar with it. The
remaining stories were only known to one participant (in two cases) or none at all (in all other cases). The
number of hits and false alarms was determined for each participant and showed enough variation to be
suitable for statistical analysis. The remaining stories were therefore kept.
Design of the warning messages
The design of the warning message varied between conditions 2 and 3 (see Figure 1). The six items alongside which this message was displayed were chosen randomly from the original set of twelve, and they were
identical for each participant. Since many participants were recruited in Germany, a summary in German
was displayed alongside the post. The graphical design of the warning messages followed Facebook’s style
guidelines. The background colour, font type, size, colour and warning sign icon were chosen so as to
resemble what they would look like on Facebook as closely as possible. The news items were presented in
random order to prevent order effects from influencing the results.
The design of the simple warning was identical to the one presented by Facebook in their original
announcement of their work on flagging news stories: “Disputed by 3rd party Fact-Checkers. Learn why
this is disputed” (see Figure 1). This simple warning might make participants more aware of potential
manipulations (Biros et al. 2002; Ivaturi et al. 2014; Xiao and Benbasat 2015). Hence, the decision
threshold should be lowered. An increase in suspiciousness should lead to more items being considered
fake news, thus leading to more hits and more false alarms.
The design of the warning with advice was based on the work by Xiao and Benbasat (2015). By displaying a
warning message that contains advice alongside the post, participants should be provided with the right
information in order to distinguish fake news from real news. The following message was used: “Disputed
by 3rd party Fact-Checkers. Research shows that users who fail to verify the story’s correctness have an
increased risk of being misled by fake news.” The phrasing was kept as close as possible to that in the
original study, to ensure that a similar effect can reasonably be expected. The warning should increase
participants’ discriminant ability, leading to more hits and fewer false alarms. The advice is negatively
framed. Empirical literature distinguishes between positively and negatively framed advice (Ivaturi et al.
2014; Xiao and Benbasat 2015). Negatively framed advice emphasises the possible losses that poor detection
accuracy could imply. Xiao and Benbasat (2015) found that this type of advice leads to the best detection
accuracy. The message is based on Xiao and Benbasat’s study (2015) on product recommendations and
adjusted to Facebook’s News Feed and Facebook’s style. This message points out the fact that only
verification can help a user avoid falling for a fake story and indirectly encourages the receiver to verify the
news.
Figure 1. Warning message designs used in the study (Condition 1: no warning; Condition 2: simple warning; Condition 3: warning with advice)
Variables
For each participant, their percentage of hits, false alarms, misses and correct rejections was calculated
from their response to the item “This story seems to be manipulated or fake.” In order to calculate the value
for hits and misses, the perceived manipulations in all six fake news stories were taken into account. The
perceived manipulations in the remaining six items of real news give information about the relationship
between false alarms and correct rejections. Since misses and correct rejections are simply the counterpart
to hits and false alarms, they hold no additional information and were not considered further in the analysis.
News stories were not counted for a participant if the participant had indicated that they knew the story
and, as a result, already knew whether or not it had been manipulated. Among the 1,812 (151 × 12) times a
news story was shown to a participant, there were 22 instances in which the participant indicated that they
were familiar with the story and correctly indicated whether or not it was true. These cases were thus
excluded.
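As an illustration of this computation (not the authors’ actual code), the per-participant proportions could be derived from a long-format table of judgements along the following lines; the file name and column names are hypothetical.

    import pandas as pd

    # Hypothetical long format: one row per (participant, story) judgement, with boolean
    # columns story_is_fake, judged_fake and knew_story.
    ratings = pd.read_csv("ratings.csv")

    # Drop cases in which the participant already knew the story
    # (22 of the 1,812 story presentations in this study)
    ratings = ratings[~ratings["knew_story"]]

    def rates(group):
        fake = group[group["story_is_fake"]]
        real = group[~group["story_is_fake"]]
        return pd.Series({
            "hits": fake["judged_fake"].mean(),          # share of fake stories judged fake
            "false_alarms": real["judged_fake"].mean(),  # share of real stories judged fake
        })

    per_participant = ratings.groupby("participant").apply(rates)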
Participants
The link to the final online survey was circulated on Facebook. Users were either invited to participate
directly or addressed in student groups, which resulted in 171 participants. This method of circulating the
study ensured that most people who found out about the study had a Facebook account and were thus
eligible to participate. Six individuals were excluded because they stated that they had left the browser while
they were looking at the Facebook posts. Two users were excluded because they were unusually young (8
and 15 years old). Twelve more individuals were excluded because they gave the same response to the
question about the story being manipulated or false to each story. It can therefore not be ruled out that they
rapidly went through the survey without reading the stories. In total, 20 participants were excluded.
As a result, 151 participants were included in the analysis. The statistical power achieved with this sample size was calculated with the application G*Power 3.1 (.78). The participants were randomly assigned to the groups: 53
(35%) to group one (no warning), 50 (33%) to group two (simple warning) and 48 (32%) to the last group
(warning with advice). Their ages ranged from 17 to 63 (younger participants had been excluded). The mean
age is 25.92 years (SD = 5.62). Furthermore, 86 participants (57%) were female and 65 male (43%).
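The effect size assumed in the power calculation is not reported; assuming a medium effect size (Cohen’s f = 0.25), an equivalent calculation reproduces a value of approximately .78, for example:

    from statsmodels.stats.power import FTestAnovaPower

    # Achieved power of a one-way ANOVA with 3 groups, total N = 151, alpha = .05,
    # assuming a medium effect size (Cohen's f = 0.25) -- an assumption, not stated in the paper
    power = FTestAnovaPower().power(effect_size=0.25, nobs=151, alpha=0.05, k_groups=3)
    print(round(power, 2))  # approximately 0.78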
The descriptive statistics confirm that Facebook is an appropriate social networking site to examine in this
study. All participants stated that they have a Facebook account. Half of the participants (50%) have an
Instagram account and 18% have a Twitter account. Nine participants mentioned Snapchat, while no other
social networking sites were mentioned more than three times. Regarding the combined length of the
sessions spent on each site per day, Facebook was the most popular social networking site with an average
of 1.19 hours (approximately 71 minutes) of usage per day (SD = 1.11), followed by Instagram with 1.02
hours (approximately 61 minutes) per day (SD = .89). Twitter users spend 0.22 hours (approximately 13
minutes) per day (SD = .34) on this platform, while other social media are visited for 0.54 hours daily
(approximately 32 minutes) (SD = .62). Finally, participants were asked about the likelihood of their liking,
commenting and sharing posts on the respective service on a 5-point Likert scale from very unlikely to very
likely. The results show that they are most likely to like, comment and/or share on the platform
Instagram (M = 3.37, SD = 1.19), followed by Facebook (M = 2.64, SD = 1.02), other social media (M = 2.39,
SD = 1.04) and Twitter (M = 1.93, SD = 1.17).
Results
The relative number of hits, false alarms, misses and correct rejections in each group is presented in Figure
2. Group one rated Facebook posts without being presented with warning messages and correctly identified 64% (SD = .21) of the fake news as manipulated; the remaining 36% (SD = .21) were not identified as fake, resulting in misses. Furthermore, 50% (SD = .19) of real news stories were perceived as manipulated (false
alarm) so that 50% (SD = .19) were correctly rejected. The second group was shown a simple warning
message. This group achieved a hit in 62% (SD = .24) of cases and missed the manipulation in 38% (SD =
.24) of cases. 50% (SD = .25) of real news was perceived as fake news (false alarm) and thus, 50% (SD =
.25) of real news was perceived as true (correct rejection). The third group was shown a warning that
advised the participants on how to verify the story’s correctness. In this group, 68% (SD = .20) of fake news
was correctly identified as such and 32% (SD = .20) of fake news was identified as real news (misses). In
contrast, 59% (SD = .24) of real news was perceived as manipulated and thus, 41% (SD = .24) of real news
resulted in a correct rejection.
Overall, the participants perceived more news in the study to be manipulated than real. Across the three
test conditions, 65% (SD = .22) of fake news (hit) were correctly identified, while 53% (SD = .23) of real
news stories were incorrectly considered false, resulting in a false alarm.
To examine whether the differences in the group means were significant, an ANOVA with planned contrasts
was calculated as described above. With planned contrasts, the group comparisons to be carried out are
specified in advance, thus controlling the family-wise error rate (α = 0.05). Analysis of the model fit
revealed that the residuals were sufficiently close to a normal distribution, and the model could therefore
be kept.
As seen in Table 2, the mean values of perceived manipulations (hits) differ only very slightly between the
groups. As far as the detection of fake news is concerned, the number of hits does not differ significantly
within the first contrast between the participants in the first condition (“no warning”) and those in
conditions 2 and 3 (“warning”) (t(148) = -.215, p = .830). Consequently, H1 is not supported.
The groups also do not differ significantly regarding the identification of real news as such, since the small
difference in means for the variable “false alarms” is not significant. As a result, H2 is also not supported (t(148) = -1.221, p = .224).
Figure 2. Relative frequency of hits, misses, false alarms and correct rejections in each condition (in %)
Dep. variable | Contrast | Diff. in mean | df | t | p | Related hypothesis
Hits | 1 | .01 | 148 | -0.215 | .830 | H1
False alarms | 1 | .05 | 148 | -1.221 | .224 | H2
Hits | 2 | .06 | 148 | -1.228 | .222 | H3
False alarms | 2 | .09 | 148 | -1.907 | .058 | H4

Table 2. Results of ANOVA with planned contrasts
The second research question focuses on the second contrast and examines whether the type of warning affected the participants’ performance in distinguishing fake news from real news. Therefore, the second and third groups were compared in the contrast analysis. H3 states that, compared with those receiving a simple warning,
users provided with a warning with advice will be more likely to perceive manipulations in fake news,
resulting in more hits. However, the third group did not achieve significantly more hits than those in the
simple warning condition. H3 is not supported (t(148) = -1.228, p = .222).
H4 compares the two experimental conditions “simple warning” and “warning with advice” regarding the
number of false alarms. It is hypothesised that providing a warning with advice results in fewer false alarms. Again, the difference between the groups’ means was not significant (t(148) = -1.907, p = .058).
Consequently, H4 is also not supported.
The above results suggest that the warning message had no effect. To further validate this result, we
examined which stories were perceived as manipulated by the users. In the simple warning condition, 48%
of all hits were achieved when the fake news story was linked to a warning message. In other words, 52% of
hits were achieved even though no warning message was displayed. In the warning with advice condition,
the picture is similar, with 50% of hits achieved on news displayed with a warning message. The other half of hits
were achieved by the participants even though no warning was shown alongside the story. In contrast, 67%
of all false alarms in the simple warning condition occurred with stories that were shown alongside a
warning message, as did 58% of false alarms in the warning with advice condition. These results further
support the notion that the warning messages were ineffective for fake news. For true news, however, while
displaying the warning message did not result in an overall increase in the number of false alarms (see H2),
most of the false alarms that did occur happened when a true story was shown alongside a warning message
– as opposed to a true story being shown without one. The results thus suggest that the display of warning
messages alongside a true story may lead to an increased likelihood of this story being perceived as
manipulated, while simultaneously leading to a decreased likelihood of an unflagged story being perceived
as manipulated.
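A minimal sketch of how this breakdown can be computed, using the same hypothetical long-format data as above with two additional (hypothetical) columns, condition and flagged (whether a warning was shown with the story):

    import pandas as pd

    # Hypothetical data as above, with extra columns 'condition' and 'flagged'
    ratings = pd.read_csv("ratings.csv")
    cond = ratings[ratings["condition"] == "simple_warning"]

    hits = cond[cond["story_is_fake"] & cond["judged_fake"]]
    false_alarms = cond[~cond["story_is_fake"] & cond["judged_fake"]]

    # Share of hits / false alarms that occurred on flagged stories
    # (the paper reports 48% and 67% for the simple warning condition)
    print(hits["flagged"].mean())
    print(false_alarms["flagged"].mean())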
Discussion
The main aim of the paper was to investigate the effectiveness of two designs of warning messages, one by
practitioners (Facebook) and one informed by recent information systems research. The results are
surprising. Neither of the warnings had a statistically significant effect on the participants. The warning
messages did not make users more distrustful of the news stories, lowering their decision threshold, as
expected. The messages also did not lead to an increased detection performance by increasing their
discriminant ability, whether or not they included advice. Both of these results are at odds with what
previous research would have suggested. Taken together, they suggest that while warning messages may be
an effective mechanism to prevent individuals from being deceived in other contexts, they might not be an
effective mechanism for combating the spread of fake news on social media. The following section discusses
these results and their implications for research and practitioners.
Effect of the absence or presence of warning messages
The first research question concerns the absence or presence of warning messages. Compared with those
receiving no warning, users who are provided with a warning are not more likely to perceive manipulations
in fake news or in real news. This result is contrary to expectations. The warning message was expected to
lower users’ decision thresholds and make them more distrustful of news, resulting in a higher number
of hits, but also possibly a higher number of false alarms. Instead, it appears that warning messages had no
effect on the users.
One effect that may be at play when warning messages are placed next to fake news on social media is
habituation. According to Anderson et al. (2014), habituation is a key contributor to the disregard of
warning messages. Even though it has been suggested that the more often a warning is shown, the more the
users’ confidence in their detection skills improves (Zahedi et al. 2012), there is empirical evidence that the repeated display of a warning could lead to an adverse effect. Frequent exposure diminishes attention, so that warning messages are no longer perceived attentively (Anderson et al. 2014). Multiple exposures to the same warning can also produce behaviour which is inconsistent with the warning’s intention and irritate individuals (Fransen et al. 2015; Stewart and Martin 1994). Egelman et al. (2008) also imply that individuals who see a warning multiple times eventually dismiss it. This theory is supported by
the outcome seen in the survey of Junger et al. (2017). They displayed the warning on top of each page as a
reminder and it did not result in the expected behaviour, namely a decrease in disclosure, even though the
warning was very clear. Another possible reason for a lack of attention to warning messages could be
exposure to unwarranted warning messages in the past. In this study, warnings were presented alongside
half of the posts. It is possible that as soon as a large number of posts is flagged, the participants get used
to the warning and consequently ignore it. However, the amount of fake news circulating on social media is
rising (Vargo et al. 2018). If all social media posts containing one of these news stories were flagged, then a
similar effect should be expected in practice. In summary, if warnings are only effective when they are rarely
shown, then perhaps warning messages are not a useful mechanism to combat the spread of fake news on
social media.
It is possible that social media differs too much from the contexts in which warning messages have been
found to be successful. One example of this is the personal relevance of the potential consequences of being
misled. Studies which found a significant increase in perceived manipulations as a result of warning
messages often warned about negative consequences for the respective user such as the purchase of inferior
products due to biased recommendations (Xiao and Benbasat 2015), the purchase of low-quality shoes which
may result in injury (Robbins and Waked 1997), or the danger of hiring unqualified employees (Giordano
et al. 2008). Consequences that affect one personally can lead to more cautious behaviour (Robbins and
Waked 1997). The failure to identify fake news as manipulated does not have such immediate personal
consequences for social media users. Of course, the potential negative consequences of the spread of fake
news for the general public and decision-making processes are widely debated, but the immediate
consequences for the individual are much less severe. In addition to that, some news stories are more
difficult to clearly identify as manipulated than others, since it could be the case that only parts of the story
are fabricated or exaggerated. One of the most popular political fact-checking websites in the U.S. reacted
to this problem by not treating truth and falsehood as a binary construct, but by presenting its verdicts on a six-level scale from “True” to “Pants on Fire” (Politifact 2018).
Effect of the design of warning messages
The second research question addressed warning design. Compared with those receiving a simple warning,
users provided with a warning with negatively framed advice were not more likely to perceive manipulations
in fake news, which would have resulted in more hits, nor were they less likely to perceive manipulations in real news, which would have resulted in fewer false alarms.
The present results indicate that it is still unclear in which situations advice is useful for increasing the
discriminant ability of the users. Perhaps advice only has the desired effect in some contexts, but not in the
case of fake news on social media. In the original study by Xiao and Benbasat (2015), participants had the
opportunity to directly adopt the instruction and compare products on the very same website that they were
visiting. They did not even have to leave the website in order to search for specific brands. Thus, their
success depended on the effort they put into comparing products while they were participating
in the survey. However, in the context of fake news and Facebook posts, users do not have the possibility to
directly compare certain news on the same website. Furthermore, an important characteristic of fake news
is that it is often falsely believed to be true. When this false belief is widespread, researching the veracity of
certain news may prove challenging even to those willing to put in a lot of effort, and the truth may be
impossible to find out even for an experienced web user. This means that in the context of warning
messages, following the advice may be much more complex than it seems. This leads to the question of whether
warnings with advice can only be successful when the advice can be adopted immediately and easily and no
further effort such as additional research is required.
Again, these considerations call into question whether warning messages with advice are a useful tool for
combating the spread of fake news online at all. Facebook has decided to abandon warning messages in
favour of presenting several related stories side by side (Lyons 2017). These related articles are meant to
provide additional perspectives and include articles by third-party fact checkers. In light of the above
considerations, this approach promises to be a step in the right direction, since it will require less effort on
the part of the user than a warning message.
Designing effective warning messages
If warning messages are to be used on social media, then perhaps the designs that are currently being
discussed in research and in practice are not sufficient. The literature on
warning messages has identified several prerequisites for the success of warning messages.
Most of these characteristics are arguably met by the current designs. For example, a warning needs to be
noticed by the user and then displayed long enough for the user to extract all necessary information
(Conzola and Wogalter 2001). The warning designs that are being discussed, and which were presented in
this survey, are unlikely to be missed by the user, as they are clearly differentiated from the posts in their
visual design. For example, a red warning symbol was attached to every warning message which very likely
attracted the user’s attention. The requirement that the warning must be noticed is therefore very likely to
be met by current designs. Furthermore, on social media, the warning message can be displayed
permanently. Readers can then decide for themselves how much time to spend reading it. The warning also
needs to be comprehensible. As the warning was very short and simple, the requirement of comprehension
should have been met. Finally, warnings should be as brief as possible (Rashid and Wogalter 1997). The
warnings used in the study were short, so this requirement should also be met.
Other characteristics are not necessarily met by current designs, and thus suggest ways in which the designs
could be improved. For example, explicitness is an important characteristic of a successful warning
message. Explicit warnings reduce uncertainty, improve decision making and have an effect on
information’s perceived trustworthiness (Bal 2014). Zahedi et al. (2015) also support the view that a warning message is most effective when it is illustrated explicitly. If a warning contains a great deal of
explicit information, it is perceived as more useful and the individual is more likely to adopt the advice
(Huang and Kuo 2014). Explicit warnings also lead to lower decision thresholds in judging ambiguous
situations (Biros et al. 2002). If other designs of warning messages are to be tested in future research, they
should be more explicit than the ones that are currently being considered.
Finally, there are prerequisites that are difficult or impossible to achieve on social media. Consider the “lack
of knowledge” (Junger et al. 2017). Social media users cannot be expected to have previous knowledge about
the veracity of the news stories, and they are not trained to differentiate fake from real news. An individual’s
experience and knowledge in the respective domain improves error detection skills (Biros et al. 2002).
Another example of this is the goal hierarchy (cf. Junger et al. 2017). Users who are casually scrolling
through their social media feeds might not be motivated to research the veracity of each story. Their primary
goal is not necessarily to inform themselves about current events, but perhaps distraction and enjoyment.
Another aspect is personal relevance (Junger et al. 2017). As has been discussed above, the successful
detection of fake news does not have immediate positive or negative consequences for the individual.
Together, these prerequisites suggest that warning messages are not an appropriate tool for helping users
detect fake news on social media.
Implications for Practice
Our study has several important implications for practitioners, as well as for research. The implications for
practitioners should be especially relevant for managers of social networking sites and other social media
platforms. At the time the present study was conceived and carried out, Facebook had just launched its
warning system. News stories that were reported by users several times were sent out to third-party fact
checkers. If those fact checkers deemed it fake news, the story was displayed alongside a warning message
(Mosseri 2016). Facebook later chose to abandon this system in favour of displaying related articles. In a blog post, the company mentioned several reasons for this decision, among them the warnings’ ineffectiveness (Lyons 2017). The
results of our study corroborate and complement their findings. We tested the same warning design as the one used by Facebook and showed, among other things, that the warnings did not significantly increase the number of fake news stories correctly identified as such. We also tested another design that, according to recent research (Xiao and Benbasat 2015), should be more effective, with the same result. In summary,
warning messages do not seem to be effective mechanisms for curtailing the spread of fake news on social
media, and managers should look to other techniques to address this problem.
The discussion reveals some options that could be attempted to improve the present warning messages. One attribute is the explicitness of the warning. Greater explicitness could result in enhanced information quality and therefore in increased perceived credibility, which in turn could lead users to follow the advice presented in the warning message (Bal 2014; Huang and Kuo 2014). Such additional information could include the specific consequences of being misled by fake news. Users should then perceive a personal relevance and consequently be more cautious (Robbins and Waked 1997). The implementation of these
characteristics might help to reduce the spread of false information, but first, further research will be
necessary to test these alternatives in the context of social media.
The diffusion of manipulated information is not limited to Facebook. For managers of other social media
platforms and, more generally, other information systems, these findings emphasise the need to study the
effectiveness of interventions before deploying a system such as the one examined in this article. Empirical
research has shown that warning messages can even lead to adverse effects (Zhang et al. 2014). If other social media platforms, including internal social networks in businesses, consider launching a similar warning system, they need to keep these same considerations in mind.
Implications for Research
The main contribution of this study to the academic literature is that it shows that warning messages may
be less effective than previously thought when employed in certain contexts, such as fake news on social
media. Signal detection theory describes how changes in users’ decision thresholds and discriminant ability
lead to changes in the number of hits and correct rejections. The warning messages apparently did not bring about such changes. To the best of our knowledge, no previous study has examined warning messages in
this context, and the surprising results warrant more investigation. More research in this area could help
platforms find better ways to contain or even stop the spread of fake news.
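For readers less familiar with these constructs, the conventional equal-variance Gaussian formulation of signal detection theory (Green and Swets 1966) expresses sensitivity (discriminant ability) and the decision criterion in terms of the hit rate $H$ and the false-alarm rate $F$; the formulas below are the standard textbook definitions rather than quantities estimated in the present study:

$$ d' = z(H) - z(F), \qquad c = -\tfrac{1}{2}\left[ z(H) + z(F) \right], $$

where $z(\cdot)$ denotes the inverse of the standard normal cumulative distribution function. A lowered decision threshold (a more liberal criterion $c$) increases both hits and false alarms, whereas greater discriminant ability (a larger $d'$) increases hits while reducing false alarms.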
Moreover, the present study provides new impulses for the IS literature, since it examined the design of warning messages and the factors that could affect a warning system such as the one launched by Facebook. Based on previous literature, it was assumed that the simple warning message, which replicates the warning used by Facebook, would lower users’ decision thresholds, resulting in an increased number of hits and false alarms (Biros et al. 2002; Giordano et al. 2008; Xiao and Benbasat 2015). The results of the present study suggest, however, that a warning does not automatically increase suspiciousness.
Warnings with advice, however, do not always seem to prompt a reflection on the news and a consideration of its veracity either. In terms of signal detection theory, they do not always lead to an improved discriminant
ability. Even though there is very little research referring to warnings with advice, it was still hypothesised
that they would result in more hits and fewer false alarms. Since only Xiao and Benbasat (2015) have
investigated the use of framed advice in warning messages, this article contributes to the development of
theory and thus further advances this young field of research.
Limitations and Future Research
Our study compared two warning designs in a setting that was as realistic as possible. Of course, some
compromises were necessary to be able to arrive at statistically sound results. For example, users were asked
not to leave the website during the survey. This requirement is common practice in survey research, as
participants should not be distracted from the task. In addition, if users had been allowed to leave the site,
they would have been able to find the “correct” answer on a fact-checking website such as snopes.com.
However, this restriction obviously limited users’ ability to gauge the veracity of the story. Instead of having
online research tools at their disposal, they were required to deliberate carefully and critically reflect on the content of the story. Considering that Facebook users do have the possibility to research the veracity of news stories, the warning with advice might be more successful in a real environment, although it is doubtful whether users would actually make use of this possibility.
There are a number of avenues for future work on this topic. An obvious one that has already been discussed
in detail is to vary the design of the warning messages. Another important question is whether there are
specific situations in which warning messages for fake news are effective on social media, perhaps
depending on the user’s age group, social media usage intensity, the topic of the story, or another variable.
For such a study to be successful, participants would need to be recruited carefully to ensure that all groups
present in the population are represented adequately, and a large number of news stories per topic would
need to be tested to allow for generalisable conclusions.
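To give a rough sense of the sample sizes such a design would demand, the following sketch estimates the number of participants needed per warning condition to detect a given difference in hit rates; the target rates, significance level, and power are illustrative assumptions rather than parameters of the present study.

```python
# Illustrative power calculation for a future warning-message experiment:
# participants needed per condition to detect a difference between two hit
# rates, using Cohen's h (arcsine-transformed difference in proportions)
# with a two-sided test. All numbers are assumptions for illustration.
from math import asin, ceil, sqrt
from statistics import NormalDist


def participants_per_condition(p1: float, p2: float,
                               alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per condition for comparing two proportions."""
    h = abs(2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2)))  # Cohen's h
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return ceil(((z_alpha + z_power) / h) ** 2)


if __name__ == "__main__":
    # Example: detecting an increase in the hit rate from 50% to 60%
    # requires roughly 194 participants in each condition.
    print(participants_per_condition(0.50, 0.60))
```

Comparable calculations would be required for every subgroup and topic of interest, which multiplies the necessary sample accordingly.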
However, our results suggest that researchers and practitioners should look to other methods of improving users’ media literacy and sensitising them to the dangers of misinformation. These measures are worth exploring empirically. In particular, an anonymous reviewer suggested a game-like app that displays true and false stories and asks users to guess which ones are true; a minimal sketch of such a quiz appears after this paragraph. Media literacy, including the dangers of misinformation, could also become part of the school curriculum. Finally, there is the danger that people accept news even in the face of contradictory evidence as long as it conforms with their prior beliefs. This “backfire effect” (Nyhan and
Reifler 2010) renders warning messages and other measures aimed at improving users’ ability to detect
false claims effectively useless. It is yet unclear which countermeasures can be taken (Haglin 2017).
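Purely as an illustration of the reviewer’s suggestion, and not a design evaluated in this study, such a quiz could be prototyped along the following lines; the headlines, labels, and function names are hypothetical placeholders.

```python
# Hypothetical sketch of a game-like true/false news quiz. It shows each
# headline, records whether the user judges it to be fake, and reports
# performance in the signal detection terms used in this paper
# (hits = fake stories identified as fake, false alarms = real stories
# judged fake). The headlines below are invented placeholders.
import random

ITEMS = [
    ("Example headline A", True),   # True = the story is real
    ("Example headline B", False),  # False = the story is fake
    ("Example headline C", True),
    ("Example headline D", False),
]


def run_quiz(items):
    random.shuffle(items)
    hits = false_alarms = fakes = reals = 0
    for headline, is_real in items:
        answer = input(f"'{headline}' -- is this story real? (y/n): ")
        judged_fake = answer.strip().lower().startswith("n")
        if is_real:
            reals += 1
            false_alarms += judged_fake  # real story wrongly judged fake
        else:
            fakes += 1
            hits += judged_fake  # fake story correctly identified
    print(f"Hit rate: {hits / fakes:.2f}")
    print(f"False alarm rate: {false_alarms / reals:.2f}")


if __name__ == "__main__":
    run_quiz(list(ITEMS))
```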
Conclusion
The present study investigated fake news and its correct identification by users who encounter it when using
social media. Since fake news has gained prominence and political influence on social media within the last year, the topic is highly relevant, and it is of great importance for the media, social media managers, and researchers to examine fake news and its impact on people.
We tested two designs of warning messages, one by Facebook and one informed by recent research, and
found no evidence that either was effective in helping users identify fake news. We discussed several reasons
for this surprising observation. In this particular setting, the effort associated with researching the veracity
of the story is high, while the cost associated with falsely believing fake news to be true is relatively low for
the individual, and the consequences are not immediate. These aspects may have contributed to the ineffectiveness of warning messages for fake news on social media. New methods will need to be developed
in order to effectively address the problem. After this first study, the successful design of warning messages
for flagging fake news on social media is still an open question.
We further argued that some of the prerequisites for the success of warning messages are difficult or
impossible to achieve on social media. Managers of platforms may need to turn to other methods, and we
argue that information systems research can help address this need.
References
Allcott, H., and Gentzkow, M. 2017. “Social Media and Fake News in the 2016 Election,” Journal of
Economic Perspectives, (31:2), pp. 211–236.
Amelina, D., and Zhu, Y.-Q. 2016. “Investigating Effectiveness of Source Credibility Elements on Social
Commerce Endorsement: the Case of Instagram in Indonesia,” Proceedings of the Pacific Asia
Conference on Information Systems.
Anderson, B., Vance, T., Kirwan, B., Eargle, D., and Howard, S. 2014. “Users aren’t (necessarily) lazy: using
NeuroIS to explain habituation to security warnings,” in Thirty Fifth International Conference on
Information Systems.
Bal, G. 2014. “Explicitness of Consequence Information in Privacy Warnings: Experimentally Investigating
the Effects on Perceived Risk, Trust, and Privacy Information Quality,” ICIS 2014 Proceedings, (July).
Bessi, A., Coletto, M., Davidescu, G. A., Scala, A., Caldarelli, G., and Quattrociocchi, W. 2015. “Science vs
Conspiracy: Collective Narratives in the Age of Misinformation,” PLoS ONE, (10:2), pp. 1–17.
Biros, D. P., George, J. F., and Zmud, R. W. 2002. “Inducing sensitivity to deception in order to improve
decision making performance: A field study,” MIS Quarterly, (26:2), pp. 119–144.
Bucy, E. P. 2003. “Media credibility reconsidered: Synergy effects between on-air and online news,”
Journalism and Mass Communication Quarterly, (80:2), pp. 247–264.
Cao, C., Yu, L., and Hu, Y. 2015. “Containment of Rumors under Limit Cost Budget in Social Network,”
Fourteenth Wuhan International Conference on E-Business, pp. 341–348.
Chen, C., Wu, K., Srinivasan, V., and Zhang, X. 2013. “Battling the internet water army,” in Proceedings of
the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining
- ASONAM ’13, New York, New York, USA: ACM Press, pp. 116–120.
Chen, Y., Conroy, N. J., and Rubin, V. L. 2015. “Misleading Online Content: Recognizing Clickbait as ‘False
News,’” in Proceedings of the 2015 ACM on Workshop on Multimodal Deception Detection - WMDD
’15, pp. 15–19 (available at https://doi.org/10.1145/2823465.2823467).
Chua, A. Y. K., Cheah, S.-M., Goh, D. H., and Lim, E.-P. 2016. “Collective Rumor Correction on the Death
Hoax,” in PACIS 2016 Proceedings.
Conroy, N. J., Rubin, V. L., and Chen, Y. 2015. “Automatic deception detection: Methods for finding fake
news,” Proceedings of the Association for Information Science and Technology, (52:1), pp. 1–4.
Conzola, V. C., and Wogalter, M. S. 2001. “A communication–human information processing (C-HIP)
approach to warning effectiveness in the workplace,” Journal of Risk Research, (4:4), pp. 309–322.
DePaulo, B. M., Malone, B. E., Lindsay, J. J., Muhlenbruck, L., Charlton, K., and Cooper, H. 2003. “Cues to
deception,” Psychological Bulletin, (129:1), pp. 74–118.
Deuze, M., Bruns, A., and Neuberger, C. 2007. “Preparing for an Age of Participatory News,” Journalism
Practice, (1:3), pp. 322–338.
Eagly, A. H., and Chaiken, S. 1993. The Psychology of Attitudes, Fort Worth, TX: Harcourt, Brace, &
Janovich.
Eastin, M. S. 2001. “Credibility Assessments of Online Health Information: The Effects of Source Expertise
and Knowledge of Content,” Journal of Computer-Mediated Communication, (6:4) (available at
http://doi.wiley.com/10.1111/j.1083-6101.2001.tb00126.x).
Egelman, S., Cranor, L. F., and Hong, J. 2008. “You’ve Been Warned: An Empirical Study of the
Effectiveness of Web Browser Phishing Warnings,” in Proceeding of the twenty-sixth annual CHI
conference on Human factors in computing systems - CHI ’08, p. 1065.
Evon, D. 2016. “Nope Francis,” Snopes (available at https://www.snopes.com/fact-check/pope-francis-
donald-trump-endorsement/; retrieved September 4, 2018).
Fransen, M. L., Smit, E. G., and Verlegh, P. W. J. 2015. “Strategies and motives for resistance to persuasion:
an integrative framework,” Frontiers in Psychology, (6) (available at
http://journal.frontiersin.org/Article/10.3389/fpsyg.2015.01201/abstract).
George, J. F., Marett, K., and Tilley, P. 2004. “Deception detection under varying electronic media and
warning conditions,” in Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS 2004), pp. 1–9.
Giordano, G. A., George, J. F., Marett, K., and Keane, B. 2008. “Detecting Deception in Computer-Mediated
Interviewing,” in ECIS.
Giordano, G. A., and Tilley, P. 2006. “The Effects of Computer-Mediation, Training, and Warning on False
Alarms in an Interview Setting,” Communications of the Association for Information Systems, (18).
Grazioli, S. 2004. “Where did they go wrong? An analysis of the failure of knowledgeable Internet
consumers to detect deception over the internet,” Group Decision and Negotiation, (13:2), pp. 149–172.
Grazioli, S., and Jarvenpaa, S. 2001. “Tactics Used Against Consumers as Victims of Internet Deception,”
in AMCIS 2001 Proceedings.
Green, D. M., and Swets, J. A. 1966. Signal Detection Theory and Psychophysics, New York: Wiley.
Gupta, A., Kumaraguru, P., Castillo, C., and Meier, P. 2014. “Tweetcred: Real-time credibility assessment
of content on twitter,” in International Conference on Social Informatics, pp. 228–243.
Haglin, K. 2017. “The limitations of the backfire effect,” Research and Politics, (4:3).
Howell, L. 2013. “Digital wildfires in a hyperconnected world,” Global Risks 2013 (available at
http://reports.weforum.org/global-risks-2013/risk-case-1/digital-wildfires-in-a-hyperconnected-
world/; retrieved September 4, 2018).
Hu, Y., and Sundar, S. S. 2010. “Effects of online health sources on credibility and behavioral intentions,”
Communication Research, (37:1), pp. 105–132.
Ivaturi, K., Janczewski, L., and Chua, C. 2014. “Effect of Frame of Mind on Users’ Deception Detection
Attitudes and Behaviours,” in CONF-IRM 2014 Proceedings.
Jensen, M. L., Lowry, P. B., and Jenkins, J. L. 2011. “Effects of Automated and Participative Decision
Support in Computer-Aided Credibility Assessment,” Journal of Management Information Systems,
(28:1), pp. 201–234.
Junger, M., Montoya, L., and Overink, F. J. 2017. “Priming and warnings are not effective to prevent social
engineering attacks,” Computers in Human Behavior, (66), pp. 75–87.
Klein, B. D., Goodhue, D. L., and Davis, G. B. 1997. “Can Humans Detect Errors in Data? Impact of Base
Rates, Incentives, and Goals,” MIS Quarterly, (21:2), p. 169.
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., and Cook, J. 2012. “Misinformation and Its
Correction,” Psychological Science in the Public Interest, (13:3), pp. 106–131.
Liu, F., Burton-Jones, A., and Xu, D. 2014. “Rumors on Social Media in Disasters: Extending transmission
to retransmission,” Proceedings of the Pacific Asia Conference on Information Systems (PACIS), Paper 49.
Lyons, T. 2017. “News Feed FYI: Replacing Disputed Flags with Related Articles,” Facebook Newsroom
(available at https://newsroom.fb.com/news/2017/12/news-feed-fyi-updates-in-our-fight-against-
misinformation/; retrieved September 4, 2018).
Marett, K., and George, J. F. 2005. “Group Deception in Computer-Supported Environments,” Proceedings
of the 38th Annual Hawaii International Conference on System Sciences (HICSS-38 ’05), p. 19b.
Mosseri, A. 2016. “News Feed FYI: Addressing Hoaxes and Fake News,” Facebook Newsroom (available at
https://newsroom.fb.com/news/2016/12/news-feed-fyi-addressing-hoaxes-and-fake-news/;
retrieved September 4, 2018).
Nyhan, B., and Reifler, J. 2010. “When corrections fail: The persistence of political misperceptions,”
Political Behavior, (32:2), pp. 303–330.
Politifact. 2018. “Truthometer,” (available at https://www.politifact.com/truth-o-meter; retrieved
September 4, 2018).
Rashid, R., and Wogalter, M. S. 1997. “Effects of warning border color, width, and design on perceived
effectiveness,” in Advances in Occupational Ergonomics and Safety, B. Das and W. Karwowski (eds.),
Louisville, KY: IOS Press and Ohmsha, pp. 455–458.
Robbins, S., and Waked, E. 1997. “Hazard of deceptive advertising of athletic footwear,” British Journal of
Sports Medicine, (31:4), pp. 299–303.
Scott, E. 2006. “Just(?) a True-False Test,” Business & Society, (45:2), pp. 130–148.
Shin, J., Jian, L., Driscoll, K., and Bar, F. 2017. “Political rumoring on Twitter during the 2012 US
presidential election: Rumor diffusion and correction,” New Media & Society, (19:8), pp. 1214–1235.
Silic, M., Cyr, D., Back, A., and Holzer, A. 2017. “Effects of Color Appeal, Perceived Risk and Culture on
User’s Decision in Presence of Warning Banner Message,” in Proceedings of the 50th Hawaii
International Conference on System Sciences, pp. 527–536.
Song, L., Zhang, W., Lau, R., Liao, S., and Kwok, R. 2012. “A Critical Analysis of the State-of-the-Art on
Automated Detection of Deceptive Behavior in Social Media,” in PACIS Proceedings, pp. 1–15.
Spiro, E., Fitzhugh, S., Sutton, J., Pierski, N., Greczek, M., and Butts, C. 2012. “Rumoring during extreme
events: A case study of Deepwater Horizon 2010,” in Proceeding of the 4th Annual ACM Web Science
Conference, pp. 275–283.
Stewart, D. W., and Martin, I. M. 1994. “Intended and Unintended Consequences of Warning Messages - A
Review and Synthesis of Empirical Research,” Journal of Public Policy & Marketing, (13:1), pp. 1–19.
Stieglitz, S., Brachten, F., Ross, B., and Jung, A.-K. 2017. “Do Social Bots Dream of Electric Sheep? A
Categorisation of Social Media Bot Accounts,” in Proceedings of the Australasian Conference on
Information Systems.
Toris, C., and DePaulo, B. M. 1984. “Effects of actual deception and suspiciousness of deception on
interpersonal perceptions,” Journal of Personality and Social Psychology, (47:5), pp. 1063–1073.
Vargo, C. J., Guo, L., and Amazeen, M. A. 2018. “The agenda-setting power of fake news: A big data analysis
of the online media landscape from 2014 to 2016,” New Media & Society, (20:5), pp. 2028–2049.
Wogalter, M. S. 2006. “Purposes and Scope of Warnings,” in Handbook of Warnings, pp. 3–9.
Wu, M., Miller, R. C., and Garfinkel, S. L. 2006. “Do security toolbars actually prevent phishing attacks?,”
in Proceedings of the SIGCHI conference on Human Factors in computing systems - CHI ’06, p. 601.
Xiao, B., and Benbasat, I. 2015. “Designing warning messages for detecting biased online product
recommendations: An empirical investigation,” Information Systems Research, (26:4), pp. 793–811.
Zahedi, F. M., Abbasi, A., and Chen, Y. 2015. “Fake-website detection tools: Identifying elements that
promote individuals’ use and enhance their performance,” Journal of the Association for
Information Systems, (16:6), pp. 448–484.
Zhang, B., Wu, M., Kang, H., Go, E., and Sundar, S. S. 2014. “Effects of security warnings and instant
gratification cues on attitudes toward mobile websites,” in Proceedings of the 32nd annual ACM
conference on Human factors in computing systems - CHI ’14, pp. 111–114.