The Harvard Kennedy School (HKS) Misinformation Review
January 2020, Volume 1, Issue 2
Research Article
Pausing to consider why a headline is true or false can help
reduce the sharing of false news
In an online experiment, participants who paused to explain why a headline was true or false indicated
that they were less likely to share false information compared to control participants. Their intention to
share accurate news stories was unchanged. These results indicate that adding “friction” (i.e., pausing to
think) before sharing can improve the quality of information shared on social media.
Author: Lisa K. Fazio
Affiliations: Psychology & Human Development, Vanderbilt University
How to cite: Fazio, L. K. (2020). Pausing to consider why a headline is true or false can help reduce the sharing of false news,
The Harvard Kennedy School (HKS) Misinformation Review, Volume 1, Issue 2
Received: Dec. 20, 2019 Accepted: Jan. 23, 2020 Published: Feb. 10, 2020
Research questions
Can asking people to explain why a headline is true or false decrease sharing of false political news
headlines? Is this intervention effective for both novel headlines and ones that were seen previously?
Essay summary
In this experiment, 501 participants from Amazon’s mTurk platform were asked to rate how likely
they would be to share true and false news headlines.
Before rating how likely they would be to share the story, some participants were asked to
“Please explain how you know that the headline is true or false.”
Explaining why a headline was true or false reduced participants’ intention to share false
headlines, but had no effect on true headlines.
The effect of providing an explanation was larger when participants were seeing the headline for
the first time. The intervention was less effective for headlines that had been seen previously in
the experiment.
This research suggests that forcing people to pause and think can reduce shares of false information.
A publication of the Shorenstein Center for Media, Politics, and Public Policy, at Harvard University, John F.
Kennedy School of Government.
Reducing Shares of False Information 2
While propagandists, profiteers and trolls are responsible for the creation and initial sharing of much of
the misinformation found on social media, this false information spreads due to actions of the general
public (Vosoughi, Roy, & Aral, 2018). Thus, one way to reduce the spread of misinformation is to reduce
the likelihood that individuals will share false information that they find online. Social media exists to
allow people to share information with others, so our goal was not to reduce shares in general. Instead,
we sought a solution that would reduce shares of incorrect information while not affecting accurate information.
In a large online survey experiment, we found that asking participants to explain how they knew that a
political headline was true or false decreased their intention to share false headlines. This is good news
for social media companies who may be able to improve the quality of information on their site by asking
people to pause and think before sharing information, especially since the intervention did not reduce
sharing of true information (the effects were limited to false headlines).
We suggest that social media companies should implement these pauses and encourage people to
consider the accuracy and quality of what they are posting. For example, Instagram is now asking users
“Are you sure you want to post this?” before they are able to post bullying comments (Lee, 2019). By
making people pause and think about their action before posting, the intervention is aimed at decreasing
the number of bullying comments on the platform. We believe that a similar strategy may also decrease
shares of false information on other social media. Individuals can also implement this intervention on their
own by committing to always pause and think about the truth of a story before sharing it with others.
One of the troubling aspects of social media is that people may see false content multiple times. That
repetition can increase people’s belief that the false information is true (Fazio, Brashier, Payne, & Marsh,
2015; Pennycook, Cannon, & Rand, 2018), reduce beliefs that it is unethical to publish or share the false
information (Effron & Raj, 2019), and increase shares of the false information (Effron & Raj, 2019). Thus,
in our study, we examined how providing explanations affected shares of both new and repeated headlines.
Unlike prior research (Effron & Raj, 2019), we found that repetition did not increase participants’
intention to share false headlines. Both studies used very similar materials, so the difference is likely due
to the number of repetitions. The repeated headlines in Effron and Raj (2019) were viewed five times
during the experiment, while in our study they were only viewed twice. It may be that repetition does
affect sharing, but only after multiple repetitions.
However, repetition did affect the efficacy of the intervention. Providing an explanation of why the
headline was true or false reduced sharing intentions for both repeated and novel headlines, but the
decrease was larger for headlines that were being seen for the first time. One possible explanation is that
because the repeated headlines were more likely to be thought of as true, providing an explanation was
less effective in reducing participants’ belief in the headline and decreasing their intentions to share. This
finding suggests that it is important to alter how people process a social media post the first time that
they see it.
There are multiple reasons why our intervention may have been effective. Providing an explanation
helps people realize gaps between their perceived knowledge and actual knowledge (Rozenblit & Keil,
2002) and improves learning in classroom settings (Dunlosky, Rawson, Marsh, Nathan, & Willingham,
2013). Providing the explanation helps people connect their current task with their prior knowledge
(Lombrozo, 2006). In a similar way, providing an explanation of why the headline is true or false may have
helped participants consult their prior knowledge and realize that the false headlines were incorrect. The
prompt may have also slowed people down and encouraged them to think more deeply about their
actions rather than simply relying on their gut instinct. That is, people may initially be willing to share false
information, but with a pause, they are able to resist that tendency (as in Bago, Rand, & Pennycook, 2020).
Finally, the explanation prompt may have also encouraged a norm of accuracy and made participants
more reluctant to share false information. People can have many motivations to share information on
social media, e.g., to inform, to entertain, or to signal their group membership (Brady, Wills, Jost, Tucker,
& Van Bavel, 2017; Metaxas et al., 2015). Thinking about the veracity of the headline may have shifted
participants’ motivations for sharing and caused them to value accuracy more than entertainment.
To be clear, we do not think that our explanation task is the only task that would reduce shares of false
information. Other tasks that emphasize a norm of accuracy, force people to pause before sharing, or
prompt them to consult their prior knowledge may also be effective. Future research should disentangle
whether each of these three factors can reduce sharing of false information on its own, or whether all
three are necessary.
Finding 1: Explaining why a headline was true or false reduced participants’ intention to share false
headlines, but did not affect sharing intentions for true headlines.
For each headline, participants rated the likelihood that they would share it online on a scale from 1 = not
at all likely to 6 = extremely likely. For true headlines, participants’ intention to share the stories did not
differ between the control condition (M = 2.17) and the explanation condition (M = 2.20), t(499) = 0.27, p
= .789. Explaining why the headline was true or false did not change participants’ hypothetical sharing
behavior for factual news stories (Figure 1A).
For false headlines, participants who first explained why the headline was true or false indicated that
they would be less likely to share the story (M = 1.79) than participants in the control condition (M = 2.11),
t(499) = 3.026, p = .003, Cohen’s d = 0.27 (Figure 1B). In the control condition, over half of the participants
(57%) indicated that they would be “likely,” “somewhat likely,” or “extremely likely” to share at least one
false headline. However, in the explanation condition, only 39% indicated that they would be at least
“likely” to share one or more false headlines. A similar decrease occurred in the number of people who
indicated that they would be “extremely likely” to share at least one false headline (24% in control
condition; 17% in explanation condition).
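As a consistency check (not part of the paper's reported analysis), the between-subjects effect size above can be recovered from the t statistic and the two group sizes using the standard conversion d = t·sqrt(1/n1 + 1/n2); a minimal sketch in Python:

```python
import math

def cohens_d_from_t(t: float, n1: int, n2: int) -> float:
    """Cohen's d for an independent-samples comparison,
    recovered from the t statistic and the two group sizes."""
    return t * math.sqrt(1 / n1 + 1 / n2)

# False headlines: t(499) = 3.026, with 260 control and 241 explain participants
d = cohens_d_from_t(3.026, 260, 241)
print(round(d, 2))  # 0.27, matching the reported Cohen's d
```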
These patterns were reflected statistically in the results of a 2 (repetition: new, repeated) x 2 (truth
status: true, false) x 2 (task: control, explain) ANOVA. This preregistered analysis indicated that there was
a main effect of headline truth, F(1,499) = 60.39, p < .001, ηp² = .108, with participants being more likely
to share true headlines than false headlines. There was also an interaction between the truth of the
headline and the effect of providing an explanation, F(1,499) = 34.44, p < .001, ηp² = .065. Within the
control condition, participants indicated that they were equally likely to share true (M = 2.17) and false
headlines (M = 2.11), t(259) = 1.52, p = .129, Cohen’s d = 0.09. In the explain condition, however,
participants indicated that they would be less likely to share false headlines (M = 1.79) as compared to
true headlines (M = 2.20), t(240) = 8.63, p < .001, Cohen’s d = 0.56.
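The within-condition comparisons just above are paired (each participant rated both true and false headlines), so their effect sizes are consistent with the repeated-measures conversion d = t / sqrt(n); this formula is an assumption on my part, not quoted from the paper:

```python
import math

def paired_cohens_d(t: float, n: int) -> float:
    """Cohen's d for a paired (within-subjects) comparison, d = t / sqrt(n)."""
    return t / math.sqrt(n)

# Control condition: t(259) = 1.52, n = 260; explain condition: t(240) = 8.63, n = 241
print(round(paired_cohens_d(1.52, 260), 2))  # 0.09, as reported
print(round(paired_cohens_d(8.63, 241), 2))  # 0.56, as reported
```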
Figure 1. Providing explanations reduced intent to share only for false headlines. Average likelihood to share true (left) and false
(right) headlines split by condition. The dots on the left indicate the condition means (error bars are standard errors) and the plots
on the right visualize the probability distribution.
Finding 2: The intervention was not as effective for repeated headlines.
Overall, participants indicated that they were equally likely to share new headlines (M = 2.07) and
repeated headlines (M = 2.06). However, as shown below (Figure 2), providing an explanation reduced
participants’ likelihood to share new headlines more than repeated headlines. Within the false
headlines, providing an explanation reduced participants’ intent to share for both new (control M =
2.16, explain M = 1.74, t(298) = 3.87, p < .001, Cohen’s d = 0.35) and repeated headlines (control M =
2.07, explain M = 1.85, t(298) = 2.04, p = .042, Cohen’s d = 0.18). This decrease in sharing intentions was
much larger when the headline was being viewed for the first time.
These patterns were reflected statistically in the same 2 (repetition: new, repeated) x 2 (truth status:
true, false) x 2 (task: control, explain) ANOVA partially reported above. The main effect of repetition was
not significant, F(1,499) = 0.44, p = .510, ηp² = .001, but there was an interaction between the effect of
repetition and providing an explanation, F(1,499) = 7.23, p = .007, ηp² = .014. In addition, there was a 3-
way interaction between repetition, task, and truth, F(1,499) = 6.19, p = .013, ηp² = .012. No other main
effects or interactions were significant.
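The effect sizes reported alongside the ANOVA can likewise be reproduced from each F ratio and its degrees of freedom via the standard partial eta squared formula, ηp² = F·df1 / (F·df1 + df2) (assumed here, not taken from the paper's analysis scripts):

```python
def partial_eta_squared(f: float, df1: int, df2: int) -> float:
    """Partial eta squared recovered from an F ratio and its degrees of freedom."""
    return f * df1 / (f * df1 + df2)

# Reported effects, all with df = (1, 499)
for f, reported in [(60.39, .108), (34.44, .065), (7.23, .014), (6.19, .012)]:
    assert round(partial_eta_squared(f, 1, 499), 3) == reported
print("all reported effect sizes reproduced")
```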
Figure 2. The decrease in intent to share was larger for novel headlines. Mean likelihood to share split by the truth of the
headline, repetition and condition. Error bars are standard errors.
Limitations and future directions
One key question is how well participants’ intent to share judgments match their actual sharing
behavior on social media platforms. Recent research, using the same set of true and false political
news headlines as the current study, suggests that participants’ survey responses are correlated
with real-world shares (Mosleh, Pennycook, & Rand, 2019). Headlines that participants indicated
that they would be more likely to share in mTurk surveys were also more likely to be shared on
Twitter. Thus, it appears that participants’ survey responses are predictive of actual sharing behavior.
In addition, both practitioners and researchers should be aware that we tested a limited set of
true and false political headlines. While we believe that the headlines are typical of the types of
true and false political stories that circulate on social media, they are not a representative
sample. In particular, the results may differ when it is less obvious which stories are likely true
or likely suspect.
A final limitation of the study was that the decrease in sharing intentions for the false headlines
was relatively small (0.32 points on a 6-point scale). However, this small decrease could still have
a large effect in social networks where shares affect how many people see a post. In addition,
our participants were relatively unlikely to share these political news headlines; the average
rating was just above “slightly likely”. Since participants were already unlikely to share the posts,
the possible effect of the intervention was limited. Future research should examine how
providing an explanation affects sharing of true and false posts that people are more likely to share.
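To see why even a small per-person drop in sharing could matter at network scale, consider a toy branching-process sketch; the branching factor and the 15% reduction are illustrative assumptions, not estimates from the study:

```python
def expected_reach(r: float, generations: int) -> float:
    """Expected number of downstream shares when each share
    triggers r further shares on average, over a fixed horizon."""
    return sum(r ** g for g in range(1, generations + 1))

baseline = expected_reach(3.0, 4)        # hypothetical: 3 reshares per share
reduced = expected_reach(3.0 * 0.85, 4)  # sharing cut by roughly 15% per person
print(baseline, round(reduced, 1))  # 120.0 vs. about 67.9: the cascade shrinks ~43%
```

Because each generation compounds the per-share probability, a modest individual-level reduction produces a much larger reduction in total reach.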
Methods
All data are available online, along with a preregistration of our hypotheses, primary analyses, and
sample size.
Participants. Five hundred and one participants (Mage = 40.99, SD =12.88) completed the full
study online via Amazon’s Mechanical Turk (260 in the control condition, 241 in the explain
condition). An additional 17 participants started but did not finish the study (5 in the control
condition, 12 in the explain condition). Using TurkPrime (Litman, Robinson, & Abberbock, 2017),
we restricted the sample to participants in the United States and blocked duplicate IP addresses.
Materials. We used 24 true and false political headlines from Pennycook, Cannon, and Rand
(2018), Experiment 3. Half of the headlines were true and came from reliable sources, and the
other half were false and came from disreputable sources. In addition, within each set, half of
the headlines were pro-republican and the other half pro-democrat. (We did not measure
participants’ political beliefs in this study; therefore, we did not examine differences in sharing
between pro-republican and pro-democrat headlines.)
As in the Pennycook study, the headlines were presented in a format similar to a Facebook post
(a photograph with a headline and byline below it). See Figure 3 for examples. The full set of
headlines is available from Pennycook and colleagues.
Figure 3. Sample true and false headlines that appeal to either democrats or republicans.
Design and counterbalancing. The experiment had a 2 (repetition: repeated, new) x 2 (task:
control, explain) mixed design. Repetition was manipulated within-subjects, while the
participants’ task during the share phase was manipulated between-subjects. The 24 headlines
were split into two sets of 12, and both sets contained an equal number of true/false and pro-
democrat/pro-republican headlines. Across participants, we counterbalanced which set of 12 was
repeated (presented during the exposure and share phase) and new (presented only during the
share phase).
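A minimal sketch of this counterbalancing scheme (the headline identifiers and the splitting rule are invented for illustration; only the counts and balance constraints come from the text):

```python
import itertools

# 24 hypothetical headlines: 6 per truth-status x partisan-lean cell
headlines = [
    {"id": f"{'true' if truth else 'false'}-{lean}-{k}", "true": truth, "lean": lean}
    for truth, lean in itertools.product([True, False], ["pro-democrat", "pro-republican"])
    for k in range(6)
]

# Alternate within each cell so both sets of 12 stay balanced on truth and lean
set_a = headlines[0::2]
set_b = headlines[1::2]

def phases(repeated_set, new_set):
    """Exposure phase shows only the repeated set; the share phase shows all 24."""
    return {"exposure": repeated_set, "share": repeated_set + new_set}

# Counterbalanced across participants: half see set A repeated, half see set B
group_1 = phases(set_a, set_b)
group_2 = phases(set_b, set_a)
```

Each set of 12 contains six true and six false headlines, evenly split by partisan lean, so the repeated/new manipulation never confounds truth status or lean.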
Procedure. The experiment began with the exposure phase. Participants viewed 12 headlines
and were asked to judge “How interested are you in reading the rest of the story?” Response
options included Very Uninterested, Uninterested, Slightly Uninterested, Slightly Interested,
Interested, and Very Interested. Each headline was presented individually, and participants moved
through the study at their own pace. Participants were correctly informed that some of the
headlines were true and others were false.
After rating the 12 headlines, participants proceeded immediately to the sharing phase. The full
set of 24 headlines was presented one at a time and participants were asked “How likely would
you be to share this story online?” The response options included Not at all likely, A little bit likely,
Slightly likely, Pretty likely, Very likely, and Extremely likely. Participants in the control condition
simply viewed each headline and then rated how likely they would be to share it online.
Participants in the explain condition saw the headline and were first asked to “Please explain
how you know that the headline is true or false” before being asked how likely they would be to
share the story. All participants were told that some of the headlines would be ones they saw
earlier, and that others would be new. They were also again told that some headlines would be
true and others not true.
References
Bago, B., Rand, D. G., & Pennycook, G. (2020). Fake news, fast and slow: Deliberation reduces
belief in false (but not true) news headlines. Journal of Experimental Psychology:
General. doi:10.1037/xge0000729
Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion shapes the
diffusion of moralized content in social networks. Proceedings of the National Academy
of Sciences, 114(28), 7313-7318. doi:10.1073/pnas.1618923114
Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving
students’ learning with effective learning techniques: Promising directions from cognitive
and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58.
doi:10.1177/1529100612453266
Effron, D. A., & Raj, M. (2019). Misinformation and morality: Encountering fake-news headlines
makes them seem less unethical to publish and share. Psychological Science, 31, 75-87.
Fazio, L. K., Brashier, N. M., Payne, B. K., & Marsh, E. J. (2015). Knowledge does not protect
against illusory truth. Journal of Experimental Psychology: General, 144(5), 993-1002.
Lee, D. (2019). Instagram now asks bullies: 'Are you sure?'. Retrieved from
Litman, L., Robinson, J., & Abberbock, T. (2017). A versatile crowdsourcing data
acquisition platform for the behavioral sciences. Behavior Research Methods, 49, 433-
442. doi:10.3758/s13428-016-0727-z
Lombrozo, T. (2006). The structure and function of explanations. Trends in Cognitive Sciences,
10(10), 464-470. doi:10.1016/j.tics.2006.08.004
Metaxas, P., Mustafaraj, E., Wong, K., Zeng, L., O'Keefe, M., & Finn, S. (2015). What do retweets
indicate? Results from user survey and meta-review of research. Paper presented at the
Ninth International AAAI Conference on Web and Social Media.
Mosleh, M., Pennycook, G., & Rand, D. G. (2019). Self-reported willingness to share political
news articles in online surveys correlates with actual sharing on Twitter. PsyArXiv
Working Paper. doi:10.31234/
Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy
of fake news. Journal of Experimental Psychology: General, 147(12), 1865-1880.
Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of
explanatory depth. Cognitive Science, 26(5), 521-562. doi:10.1207/s15516709cog2605_1
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science,
359(6380), 1146-1151. doi:10.1126/science.aap9559
Funding
This research was funded by a gift from Facebook Research.
Competing interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or
publication of this article.
Ethics
Approval for this study was provided by the Vanderbilt University Institutional Review Board and all
participants provided informed consent.
This is an open access article distributed under the terms of the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided that
the original author and source are properly credited.
Data Availability
All materials needed to replicate this study are available via the Center for Open Science.
... One effective method is debunking, a reactive method which tries to undo the damage done by misinformation by correcting false claims (Chan et al., 2017). Another method is nudging people to pause and think or to highlight accuracy while they are making their judgment about a news item (Fazio, 2020;. However, both methods are not sufficient to prevent the influence of the exceedingly wide range of misinformation in circulation. ...
... Accordingly, researchers have increasingly attempted to leverage basic insights from social and educational psychology to find new and preemptive solutions to the problem of online misinformation (Fazio, 2020;Roozenbeek, Nygren, & van der Linden, 2020). One promising avenue in this regard is inoculation theory McGuire & Papageorgis, 1961a;McGuire, 1964;van der Linden, Leiserowitz, et al., 2017;van der Linden, Maibach, et al., 2017), often referred to as the "grandfather of resistance to persuasion" (Eagly & Chaiken, 1993, p. 561). ...
... Accordingly, across disciplines, research on the processes behind, impact of, and interventions against misinformation-which has been around for decades-has surged over the past years (for recent reviews, see Van Bavel et al., 2021;van der Linden et al., 2021). Researchers have made progress in designing media and information literacy interventions in the form of educational games van der Linden, 2019a, 2020), "accuracy" primes (Pennycook, Epstein, et al., 2021;Pennycook, McPhetres, et al., 2020), introducing friction (Fazio, 2020), and inoculation messages (Lewandowsky & van der Linden, 2021). Crucially, however, no theoretical framework exists for a nuanced evaluation of misinformation susceptibility, nor a psychometrically validated measurement that provides a reliable measure across studies. ...
For over 60 years, inoculation theory has been a key framework to understand resistance to persuasion, yet many critical questions have remained unanswered. This dissertation aims to provide a theoretical and empirical understanding of how resistance to persuasion effects decay over time. In the context of resistance to persuasion by misinformation, I offer 10 empirical experiments that shed new light on this question, including several methodological innovations. In Chapter 2, I propose a new model that integrates memory theories with motivation theories on inoculation. In Chapters 3–6, I evaluate the long-term effectiveness of inoculation in message-based, gamified, and video-based inoculation interventions, unveiling the underlying mechanisms of decay. In Chapter 7, I address methodological issues, including the effects of repeated testing, and unstandardised items, and the development of a new misinformation susceptibility test. In summary, this thesis advances our understanding of the mechanisms of decay in resistance to persuasion, and sheds light on the role of and interplay between memory and motivation. The new memory-motivation model brings a significant advancement to the field, as it taps into the memory literature of forgetting—a domain in cognitive psychology—to shed new light on a concept in social psychology, and enables a new approach to modelling the longevity of inoculation effects. In addition, I offer novel insights into limitations with current methodological paradigms, and demonstrate how new standardised measurement tools can be developed to more accurately map inoculation effects in future research. Finally, I discuss how the findings of this dissertation can inform not only inoculation scholarship, but also intervention designers, evaluators, and policy makers, on how to address the problem of misinformation, and demonstrate how to extend the long-term effects of inoculation in applied interventions.
... In recent years, research on misinformation in the behavioral and social sciences has introduced a range of interventions designed to target users' competences and behaviors in a variety of ways: by debunking false claims (Lewandowsky, Cook, Ecker, et al., 2020), by boosting people's competences (e.g., digital media literacy; Guess et al., 2020) and resilience against manipulation (e.g., pre-emptive inoculation; Basol et al., 2020;, by implementing design choices that slow the sharing of misinformation (Fazio, 2020), by directing attention to the importance of accuracy , or by highlighting whether the information in question is trustworthy (Clayton et al., 2020). These interventions stem from different disciplines, including cognitive science , political and social psychology INTERVENTIONS AGAINST MISINFORMATION 5 (Brady et al., 2017;Van Bavel et al., 2021), and education research (Osborne et al., 2022;Wineburg et al., 2022). ...
... One nudging intervention, accuracy prompts ( Fig. 1, "Accuracy prompts"), reminds people of the importance of information accuracy in order to encourage them to share fewer false headlines . Other nudges introduce friction into a decision-making process in order to slow sharing of information-for instance, asking a person to pause and think before sharing content on social media (Fazio, 2020) or to read an article before sharing it (TwitterSupport, 2020; Fig. 1 Debunking provides corrective information to reduce a speci c misconception or false belief. ...
The spread of misinformation through media and social networks threatens many aspects of society, including public health and the state of democracies. A wide range of individual-focused interventions aimed at reducing harm from online misinformation have been developed in the behavioral and cognitive sciences. We, an international group of 26 experts, introduce and analyze our toolbox of interventions against misinformation, which includes an up-to-date account of the interventions featured in 42 scientific papers. A resource for scientists, policy makers, and the public, the toolbox delivers both a conceptual overview of the breadth of interventions, including their target and scope, and a summary of the empirical evidence supporting the interventions, including the methods and experimental paradigms used to test them. The toolbox covers 10 types of interventions: accuracy prompts, debunking, friction, inoculation, lateral reading, media-literacy tips, rebuttals of science denialism, self-reflection tools, social norms, and warning and fact-checking labels.
... Other studies have shown that simply asking people to stop and think before sharing a news headline improves the quality of news people intend to share (L. Fazio, 2020). Similar findings suggest that performance on the cognitive reflection test, which measures, in part, a willingness to stop and reflect (Thomson & Oppenheimer, 2016), but also captures other factors, such as intelligence (Otero et al., 2022), correlates with the ability to identify fake news (Pennycook & Rand, 2018). ...
Why do people believe in and share misinformation? Some theories focus on social identity and politically motivated reasoning, arguing that people are motivated to believe and share identity-congruent news. Other theories suggest that belief in misinformation is not shaped by motivated reasoning, but is instead shaped by other factors, such as prior knowledge, lack of reflection, or inattention to accuracy. Integrating multiple perspectives, this thesis argues that the spread of (mis)information is shaped by two (often competing) motivations: accuracy and social motivations, in combination with other factors, such as personality variables and information exposure. Through a variety of methods, including analyses of large-scale social media datasets, online experiments, network analysis, and a digital field experiment, this thesis illustrates how accuracy motivations, social motivations, and other variables shape the belief and spread of (mis)information. Chapter 2 takes a big data approach to test whether online content that fulfills political identity motivations, such as out-group derogation and in-group favoritism, tends to receive more engagement online across eight large-scale datasets containing a total of 2.7 million tweets and Facebook posts. Chapter 3 experimentally manipulates accuracy and social motivations for believing in and sharing true and false news headlines in a series of four online experiments with 3,364 participants. Chapter 4 examines partisan asymmetries in the effectiveness of a popular misinformation intervention, the accuracy nudge. Chapter 5 links survey data to the Twitter data of 2,064 participants to examine how beliefs about the COVID-19 vaccine and politics are associated with following political elites online and interacting with low-quality news sources. 
Finally, Chapter 6 examines how manipulating participants’ online social networks in a naturalistic setting (e.g., incentivizing people to follow and unfollow specific accounts on Twitter in a randomized controlled trial) influences beliefs about the opposing political party and the sharing of misinformation.
... 'lazy' or intuitive thinking can also lead people to share content that they might recognize as false if they thought about it more. accordingly, asking people to explain how they know that news headlines are true or false reduces sharing of false political headlines 278 , and brief accuracy nudges -simple interventions that prompt people to consider the accuracy of the information they encounter or share -can reduce sharing of false news about politics 207 and CovID-19 (reF. 279 ). ...
Full-text available
Critical thinking for sustainable development therefore focuses on the soft skills of positive values and attitudes while at the same time embracing social, economic, political, and environmental transformation for the good of everyone irrespective of age, gender, ethnicity, or status in society. Green marketing is developing and selling environmentally friendly goods or services. It helps improve credibility, enter a new audience segment, and stand out among competitors as more and more people become environmentally conscious. Using eco-friendly paper and inks for print marketing materials. Skipping the printed materials altogether and option for electronic marketing. Having a recycling program and responsible waste disposal practices. Using eco-friendly product packaging. Critical thinking helps people better understand themselves, their motivations and goals. When you can deduce information to find the most important parts and apply those to your life, you can change your situation and promote personal growth and overall happiness. The reason why innovation benefits from critical thinking is simple; critical thinking is used when judgment is needed to produce a desired set of valued outcomes. That is why the majority of innovation outcomes reflect incremental improvements built on a foundation of critically thought-out solutions. The results indicate that there are four factors that effectively influence fulfillment of green marketing, specifically, green labeling, compatibility, product value and green advertising. A green mission statement becomes the foundation of a company's sustainability efforts. It provides the organization and its stakeholders with an understanding of what's most important and what your company can do to protect the natural world and be more socially responsible.
... One technique to stop the consumption of misinformation at an early stage is by pausing to think about why a news headline is true or false. Fazio [68] conducted experiments where they ask the participants to explain why they rate the news headlines as real or fake. Their results suggested that forcing people to pause and think could reduce the spread of misinformation. ...
Social media has been one of the main information consumption sources for the public, allowing people to seek and spread information more quickly and easily. However, the rise of various social media platforms also enables the proliferation of online misinformation. In particular, misinformation in the health domain has significant impacts on our society such as the COVID-19 infodemic. Therefore, health misinformation in social media has become an emerging research direction that attracts increasing attention from researchers of different disciplines. Compared to misinformation in other domains, the key differences of health misinformation include the potential of causing actual harm to humans' bodies and even lives, the difficulty ordinary people face in identifying it, and its deep connection with medical science. In addition, health misinformation on social media has distinct characteristics from conventional channels such as television on multiple dimensions including the generation, dissemination, and consumption paradigms. Because of the uniqueness and importance of combating health misinformation in social media, we conduct this survey to further facilitate interdisciplinary research on this problem. In this survey, we present a comprehensive review of existing research about online health misinformation in different disciplines. Furthermore, we also systematically organize the related literature from three perspectives: characterization, detection, and intervention. Lastly, we conduct a deep discussion on the pressing open issues of combating health misinformation in social media and provide future directions for multidisciplinary researchers.
Misinformation can negatively impact people’s lives in domains ranging from health to politics. An important research goal is to understand how misinformation spreads in order to curb it. Here, we test whether and how a single repetition of misinformation fuels its spread. Over two experiments (N = 260) participants indicated which statements they would like to share with other participants on social media. Half of the statements were repeated and half were new. The results reveal that participants were more likely to share statements they had previously been exposed to. Importantly, the relationship between repetition and sharing was mediated by perceived accuracy. That is, repetition of misinformation biased people’s judgement of accuracy and as a result fuelled the spread of misinformation. The effect was observed in the domain of health (Exp 1) and general knowledge (Exp 2), suggesting it is domain general.
Beliefs are, in many ways, central to psychology and, in turn, consistency is central to belief. Theories in philosophy and psychology assume that beliefs must be consistent with each other for people to be rational. That people fail to hold fully consistent beliefs has, therefore, been the subject of much theorizing, with numerous mechanisms proposed to explain how inconsistency is possible. Despite the widespread assumption of consistency as a default, achieving a consistent set of beliefs is computationally intractable. We review research on consistency in philosophy and psychology and argue that it is consistency, not inconsistency, that requires explanation. We discuss evidence from the attitude, belief, and persuasion literatures, which suggests that accessibility of beliefs in memory is one possible mechanism for achieving a limited, but psychologically plausible, form of consistency. Finally, we conclude by suggesting future directions for research beginning from the assumption of inconsistency as the default. This article is categorized under: Psychology > Reasoning and Decision Making; Psychology > Theory and Methods; Philosophy > Knowledge and Belief. Consistency among beliefs is a hallmark of rationality. However, we argue that achieving full consistency is so difficult that it cannot be accomplished by a human mind. Instead, the mind must rely on heuristics, which means that people can be only partially consistent.
How do the reasons people post misinformation affect how they respond to fact-checking interventions? In this research, we conducted a qualitative study of people who shared misinformation. We started with stories marked as false by a popular fact checker, Snopes, and identified people who posted those stories on Reddit. We interviewed the posters about the stories they shared and identified five behaviorally distinct personas: Reason to Disagree, Changed Belief, Steadfast Non-Standard Belief, Sharing to Debunk, and Sharing for Humor. Our findings suggest that research to craft better interventions to counter misinformation might benefit from tailoring to specific personas, which can serve as design tools for ongoing misinformation intervention research.
Recent experiments have found that prompting people to think about accuracy reduces misinformation sharing intentions. The process by which this effect operates, however, remains unclear. Do accuracy prompts cause people to “stop and think,” increasing deliberation? Or do they change what people think about, drawing attention to accuracy? Since these two accounts predict the same behavioral outcomes (i.e., increased sharing discernment following a prompt), we used computational modeling of sharing decisions with response time data, as well as out-of-sample ratings of headline perceived accuracy, to test the accounts' divergent predictions across six studies (N = 5633). The results suggest that accuracy prompts do not increase the amount of deliberation people engage in. Instead, they increase the weight participants put on accuracy while deliberating. By showing that prompting people makes them think better even without thinking more, our results challenge common dual-process interpretations of the accuracy-prompt effect. Our findings also highlight the importance of understanding how social media distracts people from considering accuracy, and provide evidence for scalable interventions that redirect people's attention.
People may repeatedly encounter the same misinformation when it “goes viral.” Four experiments and a pilot study (two pre-registered; N = 2,587) suggest that repeatedly encountering misinformation makes it seem less unethical to spread––regardless of whether one believes it. Seeing a fake-news headline one or four times reduced how unethical participants thought it was to publish and share that headline when they saw it again – even when it was clearly labelled false and participants disbelieved it, and even after statistically accounting for judgments of how likeable and popular it was. In turn, perceiving it as less unethical predicted stronger inclinations to express approval of it online. People were also more likely to actually share repeated (vs. new) headlines in an experimental setting. We speculate that repeating blatant misinformation may reduce the moral condemnation it receives by making it feel intuitively true, and we discuss other potential mechanisms.
There is an increasing imperative for psychologists and other behavioral scientists to understand how people behave on social media. However, it is often very difficult to execute experimental research on actual social media platforms, or to link survey responses to online behavior in order to perform correlational analyses. Thus, there is a natural desire to use self-reported behavioral intentions in standard survey studies to gain insight into online behavior. But are such hypothetical responses hopelessly disconnected from actual sharing decisions? Or are online survey samples via sources such as Amazon Mechanical Turk (MTurk) so different from the average social media user that the survey responses of one group give little insight into the on-platform behavior of the other? Here we investigate these issues by examining 67 pieces of political news content. We evaluate whether there is a meaningful relationship between (i) the level of sharing (tweets and retweets) of a given piece of content on Twitter, and (ii) the extent to which individuals (total N = 993) in online surveys on MTurk reported being willing to share that same piece of content. We found that the same news headlines that were more likely to be hypothetically shared on MTurk were actually shared more frequently by Twitter users, r = .44. For example, across the observed range of MTurk sharing fractions, a 20 percentage point increase in the fraction of MTurk participants who reported being willing to share a news headline on social media was associated with 10x as many actual shares on Twitter. This finding suggests that self-reported sharing intentions collected in online surveys are likely to provide some meaningful insight into what participants would actually share on social media.
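The key quantitative claim in the abstract above is a simple one: headline-level survey sharing intentions correlate with on-platform share counts (r = .44, with shares varying over orders of magnitude). A minimal sketch of that kind of analysis is below, using entirely simulated, hypothetical data (none of the numbers come from the paper) and correlating intention fractions with log-scaled share counts, since raw counts span several orders of magnitude:

```python
import math
import random

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data for 67 headlines: the fraction of survey respondents
# willing to share each headline, and a simulated on-platform share count.
random.seed(0)
intent = [random.uniform(0.05, 0.45) for _ in range(67)]

# Simulate the qualitative pattern described in the abstract: share counts
# grow roughly exponentially with sharing intention, plus lognormal noise.
shares = [10 ** (1 + 5 * p + random.gauss(0, 0.4)) for p in intent]

# Correlate intention with log10(shares) because counts span orders of magnitude.
r = pearson_r(intent, [math.log10(s) for s in shares])
print(f"headline-level correlation: r = {r:.2f}")
```

With simulated data the exact r depends on the noise level chosen here; the point is only the shape of the analysis, one (intention, shares) pair per headline rather than per participant.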
The 2016 U.S. presidential election brought considerable attention to the phenomenon of “fake news”: entirely fabricated and often partisan content that is presented as factual. Here we demonstrate one mechanism that contributes to the believability of fake news: fluency via prior exposure. Using actual fake-news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this “illusory truth effect” for fake-news headlines occurs despite a low level of overall believability and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem. It is interesting, however, that we also found that prior exposure does not impact entirely implausible statements (e.g., “The earth is a perfect square”). These observations indicate that although extreme implausibility is a boundary condition of the illusory truth effect, only a small degree of potential plausibility is sufficient for repetition to increase perceived accuracy. As a consequence, the scope and impact of repetition on beliefs is greater than has been previously assumed.
Significance: Twitter and other social media platforms are believed to have altered the course of numerous historical events, from the Arab Spring to the US presidential election. Online social networks have become a ubiquitous medium for discussing moral and political ideas. Nevertheless, the field of moral psychology has yet to investigate why some moral and political ideas spread more widely than others. Using a large sample of social media communications concerning polarizing issues in public policy debates (gun control, same-sex marriage, climate change), we found that the presence of moral-emotional language in political messages substantially increases their diffusion within (and less so between) ideological group boundaries. These findings offer insights into how moral ideas spread within networks during real political discussion.
In recent years, Mechanical Turk (MTurk) has revolutionized social science by providing a way to collect behavioral data with unprecedented speed and efficiency. However, MTurk was not intended to be a research tool, and many common research tasks are difficult and time-consuming to implement as a result. TurkPrime was designed as a research platform that integrates with MTurk and supports tasks that are common to the social and behavioral sciences. Like MTurk, TurkPrime is an Internet-based platform that runs on any browser and does not require any downloads or installation. Tasks that can be implemented with TurkPrime include: excluding participants on the basis of previous participation, longitudinal studies, making changes to a study while it is running, automating the approval process, increasing the speed of data collection, sending bulk e-mails and bonuses, enhancing communication with participants, monitoring dropout and engagement rates, providing enhanced sampling options, and many others. This article describes how TurkPrime saves time and resources, improves data quality, and allows researchers to design and implement studies that were previously very difficult or impossible to carry out on MTurk. TurkPrime is designed as a research tool whose aim is to improve the quality of the crowdsourcing data collection process. Various features have been and continue to be implemented on the basis of feedback from the research community. TurkPrime is a free research platform.
Arguably one of the most important features of Twitter is the support for “retweets” or messages re-posted verbatim by a user that were originated by someone else. Despite the fact that retweets are routinely studied and reported, many important questions remain about user motivation for their use and their significance. In this paper we answer the question of what users indicate when they retweet. We do so in a comprehensive fashion, by employing a user survey, a study of user profiles, and a meta-review of over 100 research publications from three related major conferences. Our findings indicate that retweeting indicates not only interest in a message, but also trust in the message and the originator, and agreement with the message contents. However, the findings are significantly weaker for journalists, some of whom beg to differ declaring so in their own user profiles. On the other hand, the inclusion of hashtags strengthens the signal of agreement, especially when the hashtags are related to politics. While in the past there have been additional claims in the literature about possible reasons for retweeting, many of them are not supported, especially given the technical changes introduced recently by Twitter.
What role does deliberation play in susceptibility to political misinformation and "fake news"? The Motivated System 2 Reasoning (MS2R) account posits that deliberation causes people to fall for fake news, because reasoning facilitates identity-protective cognition and is therefore used to rationalize content that is consistent with one's political ideology. The classical account of reasoning instead posits that people ineffectively discern between true and false news headlines when they fail to deliberate (and instead rely on intuition). To distinguish between these competing accounts, we investigated the causal effect of reasoning on media truth discernment using a 2-response paradigm. Participants (N = 1,635 Mechanical Turkers) were presented with a series of headlines. For each, they were first asked to give an initial, intuitive response under time pressure and concurrent working memory load. They were then given an opportunity to rethink their response with no constraints, thereby permitting more deliberation. We also compared these responses to a (deliberative) 1-response baseline condition where participants made a single choice with no constraints. Consistent with the classical account, we found that deliberation corrected intuitive mistakes: Participants believed false headlines (but not true headlines) more in initial responses than in either final responses or the unconstrained 1-response baseline. In contrast-and inconsistent with the Motivated System 2 Reasoning account-we found that political polarization was equivalent across responses. Our data suggest that, in the context of fake news, deliberation facilitates accurate belief formation and not partisan bias.
Lies spread faster than the truth: There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed. Science, this issue p. 1146.
In daily life, we frequently encounter false claims in the form of consumer advertisements, political propaganda, and rumors. Repetition may be one way that insidious misconceptions, such as the belief that vitamin C prevents the common cold, enter our knowledge base. Research on the illusory truth effect demonstrates that repeated statements are easier to process, and subsequently perceived to be more truthful, than new statements. The prevailing assumption in the literature has been that knowledge constrains this effect (i.e., repeating the statement "The Atlantic Ocean is the largest ocean on Earth" will not make you believe it). We tested this assumption using both normed estimates of knowledge and individuals' demonstrated knowledge on a postexperimental knowledge check (Experiment 1). Contrary to prior suppositions, illusory truth effects occurred even when participants knew better. Multinomial modeling demonstrated that participants sometimes rely on fluency even if knowledge is also available to them (Experiment 2). Thus, participants demonstrated knowledge neglect, or the failure to rely on stored knowledge, in the face of fluent processing experiences.