new media & society 1–22. © The Author(s) 2020. DOI: 10.1177/1461444820969893. journals.sagepub.com/home/nms
Why do so few people share fake news? It hurts their reputation
Sacha Altay, Anne-Sophie Hacquin and Hugo Mercier
Institut Jean Nicod, Département d’études cognitives, ENS, EHESS, PSL University, CNRS, France
Abstract
In spite of the attractiveness of fake news stories, most people are reluctant to share
them. Why? Four pre-registered experiments (N = 3,656) suggest that sharing fake news
hurts one's reputation in a way that is difficult to fix, even for politically congruent fake
news. The decrease in trust a source (media outlet or individual) suffers when sharing
one fake news story against a background of real news is larger than the increase in
trust a source enjoys when sharing one real news story against a background of fake
news. A comparison with real-world media outlets showed that only sources sharing
no fake news at all had similar trust ratings to mainstream media. Finally, we found that
the majority of people declare they would have to be paid to share fake news, even
when the news is politically congruent, and more so when their reputation is at stake.
Keywords
Communication, fake news, misinformation, political bias, reputation, social media,
source, trust
Corresponding author:
Hugo Mercier, Département d'études cognitives, ENS, EHESS, PSL University, Institut Jean Nicod, CNRS, 29 rue d'Ulm, Paris 75005, France. Email: hugo.mercier@gmail.com

Recent research suggests that we live in a "post-truth" era (Lewandowsky et al., 2017;
Peters, 2018), when ideology trumps facts (Van Bavel and Pereira, 2018), social media are
infected by fake news (Del Vicario et al., 2016), and lies spread faster than (some) truths
(Vosoughi et al., 2018). We might even come to believe in fake news—understood as
"fabricated information that mimics news media content in form but not in organizational
process or intent" (Lazer et al., 2018, p. 1094; see also Tandoc et al., 2018a)—for reasons
as superficial as having been repeatedly exposed to them (Balmas, 2014).
In fact, despite the popularity of the “post-truth” narrative (Lewandowsky et al., 2017;
Peters, 2018), an interesting paradox emerges from the scientific literature on fake news:
in spite of its cognitive salience and attractiveness (Acerbi, 2019), fake news is shared by
only a small minority of Internet users (Grinberg et al., 2019; Guess et al., 2019; Nelson
and Taneja, 2018; Osmundsen et al., 2020). In the present article, we suggest and test an
explanation for this paradox: sharing fake news hurts the epistemic reputation of its
source and reduces the attention the source will receive in the future, even when the fake
news supports the audience’s political stance.
Fake news created with the intention of generating engagement is not constrained by
reality. This freedom allows fake news to tap into the natural biases of the human mind
such as our tendency to pay attention to information related to threats, sex, disgust, or
socially salient individuals (Acerbi, 2019; Blaine and Boyer, 2018; Vosoughi et al.,
2018). For example, in 2017, the most shared fake news on Facebook was entitled
“Babysitter transported to hospital after inserting a baby in her vagina” (BuzzFeed,
2017). In 2018, it was “Lottery winner arrested for dumping $200,000 of manure on ex-
boss’ lawn” (BuzzFeed, 2018).
Despite the cognitive appeal of fake news, ordinary citizens, who overwhelmingly
value accuracy (e.g. Knight Foundation, 2018; The Media Insight Project, 2016), and
who believe fake news represents a serious threat (Mitchell et al., 2019), are “becoming
more epistemically responsible consumers of digital information” (Chambers, 2020: 1).
In Europe, less than 4% of the news circulating on Twitter in April 2019 was fake
(Marchal et al., 2019), and fake news represents only 0.15% of Americans' daily media
diet (Allen et al., 2020). During the 2016 presidential election in the United States, 0.1%
of Twitter users were responsible for 80% of the fake news shared (Grinberg et al.,
2019). On Facebook, the pattern is similar: only 10% of users shared any fake news dur-
ing the 2016 US presidential election (Guess et al., 2019). Just as few people share fake news,
media outlets sharing fake news are also relatively rare and highly specialized.
Mainstream media only rarely share fake news (at least intentionally, for example, Quandt
et al., 2020; see also the notion of press accountability: Painter and Hodges, 2010), while
sharing fake news is common for some hyper-partisan and specialized outlets (Guo and
Vargo, 2018; Pennycook and Rand, 2019a). We hypothesize that one reason why the
majority of people and media sources avoid sharing fake news, in spite of its attractive-
ness, is that they want to maintain a good epistemic reputation, in order to enjoy the
social benefits associated with being seen as a good source of information (see, for
example, Altay et al., 2020; Altay and Mercier, 2020). For example, evidence suggests
that Internet users share news from credible sources to enhance their own credibility (Lee
and Ma, 2012). In addition, qualitative data suggest that one of people's main motivations
to verify the accuracy of a piece of news before sharing it is:
protecting their positive self-image as they understand the detrimental impacts of sharing fake
news on their reputation. [...] Avoiding these adverse effects of sharing fake news is a powerful
motivation to scrutinize the authenticity of any news they wish to share. (Waruwu et al., 2020: 7)
To maintain a good epistemic reputation, people and media outlets must avoid sharing
fake news, because their audience keeps track of how accurate the news they share has
been in the past.
Experiments have shown that accuracy plays a large role in source evaluation: inac-
curate sources quickly become less trusted than accurate sources (even by children, for
example, Corriveau and Harris, 2009), people are less likely to follow the advice of a
previously inaccurate source (Fischer and Harvey, 1999), content shared by inaccurate
sources is deemed less plausible (e.g. Collins et al., 2018), and, by contrast, being seen
as a good source of information leads to being perceived as more competent (see, for
example, Altay et al., 2020; Altay and Mercier, 2020; Boyer and Parren, 2015). In addi-
tion, sources sharing political falsehoods are condemned even when these falsehoods
support the views of those who judge the sources (Effron, 2018).
Epistemic reputation is not restricted to individuals, as media outlets also have an epis-
temic reputation to defend: 89% of Americans believe it is “very important” for a news
outlet to be accurate, 86% that it is “very important” that they correct their mistakes (Knight
Foundation, 2018), and 85% say that accuracy is a critical reason why they trust a news
source (The Media Insight Project, 2016). Accordingly, 63% of Americans say they have
stopped getting news from an outlet in response to fake news (Pew Research Center,
2019a), and 50% say they avoided someone because they thought that person would bring up fake
news in conversation (Pew Research Center, 2019a). Americans and Europeans are also
able to evaluate media outlets’ reliability: their evaluations, in the aggregate, closely match
those of professional fact-checkers or media experts (Pennycook and Rand, 2019a; Schulz
et al., 2020). As a result, people consume less news from untrustworthy websites (Allen
et al., 2020; Guess et al., 2020) and engage more with articles shared by trusted figures and
trusted media outlets on social media (Sterrett et al., 2019).
However, for the reputational costs of sharing a few fake news stories to explain why
so few sources share fake news, there should be a trust asymmetry: epistemic reputation
must be lost more easily than it is gained. Otherwise, sources could get away with sharing
a substantial number of fake news stories if they compensated by sharing real news sto-
ries to regain some trust.
Experimental evidence suggests that trust takes time to build but can collapse quickly,
in what Slovic (1993: 677) calls “the asymmetry principle.” For example, the reputation
of an inaccurate advisor will be discounted more than the reputation of an accurate advi-
sor will be credited (Skowronski and Carlston, 1989). In general, the reputational costs
associated with being wrong are higher than the reputational benefits of being right
(Yaniv and Kleinberger, 2000). A single mistake can ruin someone's reputation for trust-
worthiness, while a lot of positive evidence is required to change the reputation of some-
one seen as untrustworthy (Rothbart and Park, 1986).
For the trust asymmetry to apply to the sharing of real and fake news, participants
must be able to deem the former more plausible than the latter. Some evidence suggests
that US participants are able to discriminate between real and fake news in this manner
(Altay et al., 2020; Bago et al., 2020; Pennycook and Rand, 2019b; Pennycook et al.,
2019, 2020). Prior to our experiments, we ran a pre-test to ensure that our set of news had
the desired properties in terms of perceived plausibility (fake or real) and political orienta-
tion (pro-Democrats or pro-Republicans) (see Section 2 of the Electronic Supplemental
Material [ESM]). To the extent that people find fake news less plausible than real news,
that real news is deemed at least somewhat plausible, and that fake news is deemed
implausible (as our pre-test suggests is true for our stimuli), trust asymmetry leads to the
following hypothesis:
H1: A good reputation is more easily lost than gained—the negative effect on trust of
sharing one fake news story, against a background of real news stories, should be
larger than the positive effect on trust of sharing one real news story, against a back-
ground of fake news stories.
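Stated compactly (the notation below is ours, added for clarity, not the authors'): let T(s) denote the trust rating granted to a source after sharing history s, where R and F stand for real and fake news stories. H1 then predicts, in LaTeX:

    \[
    \underbrace{T(3R) - T(3R + 1F)}_{\text{drop after one fake story}}
    \;>\;
    \underbrace{T(3F + 1R) - T(3F)}_{\text{gain after one real story}}
    \]

Under the pre-test assumptions above, both differences are positive, so the inequality compares two magnitudes of trust change.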
If the same conditions hold for politically congruent news, trust asymmetry leads to
the following hypothesis:
H2: A good reputation is more easily lost than gained, even if the fake news is politi-
cally congruent—the negative effect on trust of sharing one fake news story, against
a background of real news stories, should be larger than the positive effect on trust of
sharing one real news story, against a background of fake news stories, even if the
news stories are all politically congruent with the participant’s political stance.
We also predicted that, in comparison with real-world media outlets, sources in our
experiments sharing only fake news stories should have trust ratings similar to junk
media (such as Breitbart), and have trust ratings different from mainstream media (such
as the New York Times). By contrast, sources sharing only real news stories should have
trust ratings similar to mainstream media, and different from junk media.
If H1 and H2 are true, and if people inflict severe reputational damage on sources of
fake news, the prospect of suffering this reputational damage, combined with a
natural concern for one's reputation, should make sharing fake news costly. Participants
should be more reluctant to share fake news when their reputation is at stake than when
it isn’t. To measure participants’ reluctance to share fake news we asked them how much
they would have to be paid to share various fake news stories (for a similar method see:
Graham and Haidt, 2012; Graham et al., 2009). These considerations lead to the follow-
ing hypotheses:
H3: Sharing fake news should be costly: the majority of people should ask to be paid
a non-null amount of money to share a fake news story on their own social media
account.
H4: Sharing fake news should be costlier when one's reputation is at stake—people
should ask to be paid more money for sharing a piece of fake news when it is shared
from their own social media account, compared to when it is shared by someone else.
If H2 is true, the reputational costs inflicted on fake news sharers should also apply
to those who share politically congruent fake news, leading to:
H5: Sharing fake news should appear costly for most people, even when the fake news
stories are politically congruent: the majority of people will ask to be paid a
non-null amount of money to share a politically congruent fake news story on their
own social media account.
H6: Sharing fake news should appear costlier when reputation is on the line, even
when the fake news stories are politically congruent—people should ask to be paid
more money for a piece of politically congruent fake news when it is shared on their
own social media account, compared to when it is shared by someone else.
If H3–H6 are true, sharing fake news should also appear costlier than sharing real news:
H7: Sharing fake news should be costlier than sharing real news when one’s reputa-
tion is at stake—people should ask to be paid more money for sharing a piece of news
on their own social media account when the piece of news is fake compared to when
it is real.
We conducted four experiments to test these hypotheses (Experiment 1 tests H1,
Experiment 2 tests H2, Experiment 3 tests H3–H6, and Experiment 4 tests H3, H4, and
H7). Based on preregistered power analyses, we recruited a total of 3,656 online participants from the
United States. We also preregistered our hypotheses, primary analyses, and exclusion
criteria (based on two attention checks and geolocation for Experiments 1 and 2, and one
attention check for Experiments 3 and 4). All the results supporting the hypotheses pre-
sented in this manuscript hold when no participants are excluded (see Section 9 of the ESM).
Preregistrations, data, materials, and the scripts used to analyze the data are available on
the Open Science Framework at https://osf.io/cxrgq/.
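As an illustration of this kind of planning, below is a minimal a priori power analysis in R with the pwr package. The effect size, alpha, and power are placeholder values for the sketch, not the figures from the authors' preregistrations (which are available at the OSF link above).

    library(pwr)  # install.packages("pwr") if needed
    # Per-group sample size needed to detect a hypothetical small-to-medium
    # between-subjects difference (Cohen's d = 0.25) with 90% power
    pwr.t.test(d = 0.25, sig.level = 0.05, power = 0.90,
               type = "two.sample", alternative = "two.sided")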
Experiment 1
The goal of the first experiment was to measure how easily a good reputation could be lost,
compared to the difficulty of acquiring a good reputation. We compared the change in
trust toward a source that shared one fake news story after having shared three real news
stories with the change in trust toward a source that shared one real news story after having
shared three fake news stories. We predicted that the negative effect on trust of sharing one
fake news story, after having shared real news stories, would be larger than the positive
effect on trust of sharing one real news story, after having shared fake news stories (H1).
Participants
Based on a pre-registered power analysis, we recruited 1,113 US participants on Amazon
Mechanical Turk, paid $0.30. We removed 73 participants who failed at least one of the
two post-treatment attention checks (see Section 2 of the ESM), leaving 1,040 partici-
pants (510 men, 681 Democrats, M_age = 39.09, SD = 12.32).
Design and procedure
After having completed a consent form, in a between-subjects design, participants were
presented with one of the following conditions: three real news stories, three fake news
stories, three real news stories and one fake news story, or three fake news stories and one
real news story. The news stories that participants were exposed to were randomly
selected from the initial set of eight neutral news stories.
Presentation order of the news stories was randomized, but the news story with a dif-
ferent truth-status was always presented at the end. Half of the participants were told that
the news stories came from one of the two following made-up outlets: "CSS.co.uk" or
"MBI news." The other half were told that the news stories had been shared on Facebook
by one of two acquaintances: "Charlie" or "Skyler." After having read the news stories,
participants were asked the following question: "How reliable do you think [insert source
name] is as a source of information?" on a seven-point Likert-type scale ranging from
"Not reliable at all" (1) to "Extremely reliable" (7), with the scale midpoint being
"Somewhat reliable" (4). Even though using one question to measure trust in information
sources has proven reliable in the past (Pennycook and Rand, 2019a), participants were
also asked a related question: "How likely would you be to visit this website in the
future?" (for outlets) or "How likely would you be to pay attention to what [insert a
source name] will post in the future?" (for individuals) on a seven-point Likert-type
scale ranging from "Not likely at all" (1) to "Very likely" (7), with the scale midpoint
being "Somewhat likely" (4).
Before finishing the experiment, participants were presented with a correction of the
fake news stories they might have read during the experiment, with a link to a fact-
checking article. Fact-checking reliably corrects political misinformation and backfires
only in rare cases (see Walter et al., 2019). The ideological position of the participants
was measured in the demographics section with the following question: "If you abso-
lutely had to choose between only the Democratic and Republican party, which would
you prefer?" Polls have shown that 81% of Americans who consider themselves inde-
pendent fall along the Democratic–Republican axis (Pew Research Center, 2019b), and
that this dichotomous scale yields results similar to those of more fine-grained scales
(Pennycook and Rand, 2019a, 2019b).
Materials
We pre-tested our materials with 288 US online participants on Amazon Mechanical
Turk to select two news sources (among the 10 pre-tested) whose novel names would
evoke trust ratings situated between those of mainstream sources and junk media
(Pennycook and Rand, 2019a). We also selected 24 news stories (among the 45 pre-
tested) from online news media and fact-checking websites that were either real or fake
and whose political orientation was either in favor of Republicans, in favor of Democrats,
or politically neutral (neither in favor of Republicans nor Democrats; all news stories are
available in Section 1 of the ESM). The full results of the pre-test are available in Section
2 of the ESM, but the main elements are as follows. For the stories we retained, the fake
news stories were considered less accurate (M = 2.35, SD = 1.66) than the real news sto-
ries (M = 4.16, SD = 1.56), t(662) = 14.52, p < .001, d = 1.26. Politically neutral news sto-
ries’ political orientation (M = 3.96, SD = 0.91) did not significantly differ from the
middle of the scale (4), t(222) = .73, p = .46. News stories in favor of Democrats (M = 2.56,
SD = 1.82) significantly differed in political orientation from politically neutral news, in
the expected direction (M = 3.96, SD = .91), t(340) = 10.37, p < .001, d = .97. News stories
in favor of Republicans (M = 5.58, SD = 1.76) significantly differed in political orienta-
tion from politically neutral news stories, in the expected direction (M = 3.96, SD = .91),
t(313) = 11.94, p < .001, d = 1.15. Figure 1 provides an example of the stories presented
to the participants.
Results and discussion
All statistical analyses were conducted in R (v.3.6.0), using R Studio (v.1.1.419). We use
parametric tests throughout because we had normal distributions of the residuals and did
not violate statistical assumptions (switching to non-parametric tests would have reduced
our statistical power). The t-tests reported in Experiments 1 and 2 are Welch's t-tests. Post
hoc analyses for the main analyses presented below can be found in Section 6 of the ESM.
The correlation between our two measures of trust (the estimated reliability and the
willingness to interact with the source in the future) was 0.77, Pearson’s product-moment
correlation t(1,038) = 38.34, p < .001. Since these two measures yielded similar results,
in order to have a more robust measure of the epistemic reputation of the source we com-
bined them into a measure called “Trust.” This measure will be used for the following
analyses. The pre-registered analyses conducted separately on the estimated reliability
and the willingness to interact with the source in the future can be found in Section 4 of
the ESM. In Experiments 1 and 2, since the slopes that we compare initially do not have
the same sign (e.g. 0.98 and –0.30 in Experiment 1), we changed the sign of one slope to
compare the absolute values of the slopes (i.e. 0.98 and 0.30). Without this manipulation,
the interactions would not inform the trust asymmetry hypothesis (e.g. if the slopes had
the following values "0.98 and –0.98" there would be no asymmetry, but the interaction
would be statistically significant).
Figure 1. Example of a politically neutral fake news story shared by "MBI news" on the left,
and a politically neutral real news story shared by "Charlie," as they were presented to the
participants.
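The sign-flip procedure can be illustrated with a minimal, self-contained R sketch on simulated data. The column names, the composite rule (a simple mean), and the use of lm() are our assumptions for illustration; the authors' actual analysis scripts are on the OSF.

    set.seed(1)
    d <- data.frame(
      reliability = sample(1:7, 400, replace = TRUE),   # "how reliable is the source?"
      future      = sample(1:7, 400, replace = TRUE),   # willingness to attend to the source later
      background  = rep(c("real", "fake"), each = 200), # truth-status of the first three stories
      extra       = rep(0:1, 200)                       # 1 = a fourth story of opposite truth-status
    )
    # Composite "Trust" measure (e.g. the mean of the two ratings)
    d$trust <- (d$reliability + d$future) / 2
    # Flip the sign of the fourth-story effect in the real-news background, so that
    # the interaction compares the absolute values of the two slopes instead of
    # slopes of opposite signs
    d$extra_flipped <- ifelse(d$background == "real", -d$extra, d$extra)
    m <- lm(trust ~ extra_flipped * background, data = d)
    summary(m)  # interaction: |drop after one fake story| vs |gain after one real story|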
Confirmatory analyses. As predicted by H1, whether the source is a media outlet or an
acquaintance, the increase in trust that a source enjoys when sharing one real news against
a background of fake news is smaller (trend = .30, SE = .12) than the drop in trust a source
suffers when sharing one fake news against a background of real news (trend = .98,
SE = .12), t(1,036) = 4.11, p < .001. This effect is depicted in Figure 2 (left panel), and
holds whether the source is an acquaintance, respective trends: .30, SE = .18; .98, SE = .17;
t(510) = 2.79, p = .005, or a media outlet, respective trends: .29, SE = .16; .98, SE = .16;
t(522) = 3.11, p = .002.
A good reputation is more easily lost than gained. Regardless of whether the source
was an acquaintance or a media outlet, participants decreased the trust granted to sources
sharing one fake news after having shared three real news more than they increased the
trust granted to sources sharing one real news after having shared three fake news.
Experiment 2
This second experiment is a replication of the first experiment with political news. The
news were either in favor of Republicans or in favor of Democrats. Depending on the par-
ticipants’ own political orientation, the news were classified as either politically congruent
(e.g. a Democrat exposed to a piece of news in favor of Democrats) or politically incongru-
ent (e.g. a Democrat exposed to a piece of news in favor of Republicans). We predicted
that, even when participants receive politically congruent news, we would observe the
same pattern as in Experiment 1: the negative effect on trust of sharing one fake news story
against a background of real news stories would be larger than the positive effect on trust
of sharing one real news story against a background of fake news stories (H2).
Figure 2. Interaction plot for the trust attributed to sources sharing politically neutral,
congruent, and incongruent news. This figure represents the effect on trust (i.e. reliability
rating and willingness to interact in the future) of the number of news stories presented (three
or four), and the nature of the majority of the news stories (real or fake). Left panel:
Experiment 1; middle and right panels: Experiment 2.
Participants
Based on a pre-registered power analysis, we recruited 1,600 participants on Amazon
Mechanical Turk, paid $0.30. We removed 68 participants who failed the first post-treat-
ment attention check (but not the second one, see Section 5 of the ESM), leaving 1,532
participants (855 women, 985 Democrats, M_age = 39.28, SD = 12.42).
Design, procedure, and materials
In a between-subjects design, participants were randomly presented with one of the fol-
lowing conditions: three real political news stories, three fake political news stories,
three real political news stories and one fake political news story, or three fake political
news stories and one real political news story. The news stories were randomly selected
from the initial set of 16 political news stories. Whether participants saw only news in
favor of Republicans or news in favor of Democrats was also random.
The design and procedure are identical to Experiment 1, except that we only used one
type of source (media outlets), since the first experiment showed that the effect holds
regardless of the type of source. Figure 3 provides an example of the materials used.
Results
The correlation between the two measures of trust (the estimated reliability and the will-
ingness to interact with the source in the future) was 0.80, Pearson’s product-moment
correlation t(1,530) = 51.64, p < .001. Since these two measures yielded similar results,
as in Experiment 1, we combined them into a “Trust” measure. The pre-registered sepa-
rated analyses on the estimated reliability and the willingness to interact with the source
in the future can be found in Section 5 of the ESM. Post hoc analyses for the main analy-
ses presented below can also be found in Section 6 of the ESM.
Figure 3. Example of a real political news story in favor of Democrats shared by "CSS.co.uk"
on the left, and a fake political news story in favor of Democrats shared by "MBI news," as they
were presented to the participants.
Confirmatory analyses. As predicted by H2, among politically congruent news, we found
that the increase in trust that a source enjoys when sharing one real news against a back-
ground of fake news is smaller (trend = .48, SE = .15) than the drop in trust a source suf-
fers when sharing one fake news against a background of real news (trend = .95, SE = .14),
t(737) = 2.31, p = .02, (see the middle panel of Figure 2). Among politically incongruent
news, we found that the increase in trust that a source enjoys when sharing one real news
against a background of fake news is smaller (trend = .06, SE = .13) than the drop in trust
a source suffers when sharing one fake news against a background of real news
(trend = .99, SE = .14), t(787) = 4.94, p < .001, (see the right panel of Figure 2).
Slopes comparison across experiments (exploratory analyses). The decrease in trust (in
absolute value) suffered by sources sharing one fake news story against a background of
real news stories, compared to sources that shared only real news stories, was not different
for politically neutral news (trend = .98, SE = .12) and political news: politically congruent
news (trend = .95, SE = .14), t(1,280) = .06, p = .95; politically incongruent news
(trend = .99, SE = .14), t(901) = .03, p = .98.
The increase in trust (in absolute value) enjoyed by sources sharing one real news story
against a background of fake news stories, compared to sources that shared only fake
news stories, was not different between politically neutral news (trend = .30, SE = .12)
and political news: politically congruent news (trend = .48, SE = .15), t(876) = .92,
p = .36; politically incongruent news (trend = .06, SE = .13), t(922) = 1.42, p = .15.
However, this increase was smaller for politically incongruent than congruent news,
t(731) = 2.68, p = .008.
Participants trusted sources sharing politically incongruent news less than sources
sharing politically congruent news, β = −0.51, t(2,569) = −10.22, p < .001, and politically
neutral news, β = −0.52, t(2,569) = −11.26, p < .001. On the other hand, we found no significant differ-
ence in the trust granted to sources sharing politically neutral news compared to politi-
cally congruent news, β = −0.01, t(2,569) = −0.18, p = .86. An equivalence test with
equivalence bounds of −0.20 and 0.20 showed that the observed effect is statistically not
different from zero and statistically equivalent to zero, t(1,608.22) = −3.99, p < .001.
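For illustration, an equivalence test of this kind can be run in R with the TOSTER package. The summary statistics below are placeholders rather than the actual group values (those are in the OSF materials), and we treat the bounds of −0.20 and 0.20 as raw-scale bounds; if the preregistration specified standardized bounds, TOSTER::TOSTtwo would be the analogous call.

    library(TOSTER)  # install.packages("TOSTER") if needed
    # Placeholder means, SDs and group sizes; replace with the real values
    TOSTtwo.raw(m1 = 4.60, m2 = 4.61, sd1 = 1.40, sd2 = 1.45,
                n1 = 850, n2 = 900,
                low_eqbound = -0.20, high_eqbound = 0.20,
                alpha = 0.05, var.equal = FALSE)  # Welch-style df, as in the reported test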
Comparison of the results of Experiments 1 and 2 with real-world trust ratings (confirmatory
analyses). We compared the trust ratings of the sources in Experiments 1 and 2 to the
trust ratings that people gave to mainstream media outlets and junk media outlets (Pen-
nycook and Rand, 2019a). We predicted that sources sharing only fake news stories
should have trust ratings similar to junk media, and dissimilar to mainstream media,
whereas sources sharing only real news stories should have trust ratings similar to main-
stream media, and dissimilar to junk media.
To this end, we rescaled the trust ratings from the interval [1,7] to the interval [0,1].
To ensure a better comparison with the mainstream sources sampled in studies one and
two of Pennycook and Rand (2019a), which relay both political and politically neutral
news, we merged the data from Experiment 1 (in which the sources shared politically
neutral news) and Experiment 2 (in which the sources shared political news). Then, we
compared these merged trust scores with the trust scores that mainstream media and junk
media received in Pennycook and Rand (2019a) (see Figure 4).
Figure 4. Statistical comparison of the four present conditions (three fake news; three fake news
and one real news; three real news and one fake news; three real news) with the results obtained
in studies one and two of Pennycook and Rand (2019a) for trust scores of mainstream media and
junk media. "Very dissimilar" corresponds to a large effect; "Moderately dissimilar" to a medium
effect; "Slightly dissimilar" to a small effect; "Not dissimilar" to an absence of statistical difference.
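The rescaling mentioned above is, we assume, the standard linear map from the 7-point scale onto the unit interval:

    \[
    x_{[0,1]} = \frac{x - 1}{7 - 1}
    \]

so that a rating of 1 maps to 0, the scale midpoint 4 maps to 0.5, and 7 maps to 1.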
As predicted, we found that sources sharing only fake news stories had trust ratings
not dissimilar to junk media, and very dissimilar to mainstream media, while sources
sharing only real news stories had trust ratings not dissimilar to mainstream media, and
dissimilar to junk media.
Sharing one real news story against a background of fake news stories was not sufficient
to escape the junk media category. The only sources that received trust scores not dissimilar to
those of mainstream media were sources sharing exclusively real news stories.
Discussion
A good reputation is more easily lost than gained, even when sharing fake news stories
politically congruent with participants’ political orientation. The increase in trust gained
by sources sharing a real news story against a background of fake news stories was
smaller than the decrease in trust suffered by sources sharing a fake news story against a
background of real news stories. Moreover, this decrease in trust was not weaker for
politically congruent news than for politically neutral or politically incongruent news.
Participants did not differentiate between sources sharing politically neutral news and
politically congruent news, but they were mistrustful of sources sharing incongruent
political news.
Experiment 3
Experiments 1 and 2 show that people are quick to distrust sources sharing fake news,
even if they have previously shared real news, and slow to trust sources sharing real
news, if they have previously shared fake news. However, by themselves, these results
do not show that this is why most people appear to refrain from sharing fake news. In
Experiment 3, we test more directly the hypothesis that the reputational fallout from
sharing fake news motivates people not to share them. In particular, if people are aware
of the reputational damage that sharing fake news can wreak, they should not willingly
share such news if they are not otherwise incentivized.
Some evidence from Singaporean participants already suggests that people are
aware of the negative reputational fallout associated with sharing fake news (Waruwu
et al., 2020). However, no data suggest that the same is true for Americans. The politi-
cal environment in the United States, in particular the high degree of affective polari-
zation (see, for example, Iyengar et al., 2019), might make US participants more likely
to share fake news in order to signal their identity or justify their ideological positions.
However, we still predict that even in this environment, most people should be reluc-
tant to share fake news.
In Experiment 3, we asked participants how much they would have to be paid to share
a variety of fake news stories. However, even if participants ask to be paid to share fake
news, it might not be because they fear the reputational consequences—for example,
they might be worried that their contacts would accept false information, wherever it
comes from. To test this possibility, we manipulated whether the fake news would be
shared by the participant’s own social media account, or by an anonymous account, lead-
ing to the following hypotheses:
H3: The majority of participants will ask to be paid to share each politically neutral
fake news story on their own social media account.
H4: Participants will ask to be paid more money for a piece of fake news when it is shared
on their own social media account, compared to when it is shared by someone else.
H5: The majority of participants will ask to be paid to share each politically congruent
fake news story on their own social media account.
H6: Participants will ask to be paid more money for a piece of politically congruent fake
news when it is shared on their own social media account, compared to when it is
shared by someone else.
Participants
Based on a pre-registered power analysis, we recruited 505 participants on Prolific
Academic, paid £0.20. We removed one participant who failed to complete the
post-treatment attention check (see Section 2 of the ESM), and 35 participants who reported
not using social media, leaving 469 participants (258 women, M_age = 32.87, SD = 11.51).
Design, procedure, and materials
In a between-subjects design, participants had to rate how much they would have to be
paid for their contacts to see fake news stories, either shared from their own personal
social media account (in the Personal Condition), or by an anonymous account (in the
Anonymous Condition).
We used the same set of fake news as in Experiment 1 and Experiment 2, but this time
the news were presented without any source. Each participant saw 12 fake news stories
in a randomized order and rated each of them.
In the Personal Condition, after having read a fake news story, participants were asked
the following question: "How much would you have to be paid to share this piece of
news with your contacts on social media from your personal account?" on a four-point
Likert-type scale: "$0" (1), "$10" (2), "$100" (3), "$1000 or more" (4). We used a Likert-
type scale instead of an open-ended format because in a previous version of this experi-
ment the open-ended format generated too many outliers, making statistical analysis
difficult (see Section 3 of the ESM).
In the Anonymous Condition, after having read a fake news story, participants were
asked the following question: "How much would you have to be paid for this piece of
news to be seen by your contacts on social media, shared by an anonymous account?" on
a four-point Likert-type scale: "$0" (1), "$10" (2), "$100" (3), "$1000 or more" (4).
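To make the scoring concrete, here is a small self-contained R sketch (on simulated responses; the data layout is hypothetical) of the headline quantity reported below, namely the share of participants requesting a non-null amount for each news item.

    set.seed(2)
    # One row per participant-by-item rating; `payment` uses the 4-point coding:
    # 1 = $0, 2 = $10, 3 = $100, 4 = $1000 or more
    resp <- data.frame(
      item    = rep(paste0("fake_news_", 1:12), each = 50),
      payment = sample(1:4, 12 * 50, replace = TRUE)
    )
    # Share of ratings requesting at least $10 (i.e. a non-null amount), per item
    sort(tapply(resp$payment >= 2, resp$item, mean))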
Results
Confirmatory analyses. In support of H3, for each politically neutral fake news story, a majority
of participants asked to be paid a non-null amount of money to share it (share of partici-
pants requesting at least $10 to share each piece of fake news: M = 66.45%, Min = 61.8%,
Max = 69.5%) (for a visual representation see Figure 5; for more details see Section 8 of
the ESM).
In support of H4, participants asked to be paid more to share politically neutral fake
news stories from their personal account compared to when they were shared by an anony-
mous account, β = 0.28, t(467) = 3.73, p < .001 (see Figure 6).
In support of H5, for each politically congruent fake news story, a majority of participants
asked to be paid a non-null amount of money to share it (share of participants requesting
at least $10 to share each piece of fake news: M = 64.9%, Min = 59.4%, Max = 71.7%)
(for a visual representation see Figure 5; for more details see Section 8 of the ESM).
In support of H6, participants asked to be paid more to share politically congruent fake
news stories from their personal account compared to when they were shared by an anony-
mous account, β = 0.24, t(467) = 3.24, p = .001, (see Figure 6).
Exploratory analyses. Participants asked to be paid more to share politically incongruent
news than politically congruent news, β = 0.28, t(5,625) = 8.77, p < .001, and politically
neutral news, β = 0.32, t(5,625) = 9.93, p < .001. On the other hand, we found no
14 new media & society 00(0)
significant difference between the amount requested to share politically congruent and
neutral fake news, β = 0.04, t(5,625) = 1.16, p = .25. Additional exploratory analyses and
descriptive statistics are available in Section 7 of the ESM.
Figure 5. Bar plots representing how much participants asked to be paid to share fake news
stories in the Anonymous Condition (on the left) and Personal Condition (on the right) in
Experiments 3 and 4 (as well as real news stories in the latter). The red bars represent the
percentage of participants saying they would share a piece of news for free, while the green
bars represent the percentage of participants asking for a non-null amount of money to share a
piece of news.
For each politically incongruent fake news story, a majority of participants asked to be paid
a non-null amount of money to share it (share of participants requesting at least $10 to
share each piece of fake news: M = 70.73%, Min = 60.4%, Max = 77.2%) (for a visual
representation see Figure 5; for more details see Section 8 of the ESM).
In the Personal Condition, the 9.3% of participants who were willing to share all the
pieces of fake news presented to them for free accounted for 37.4% of the $0 responses.
Experiment 4
Experiment 4 is a replication of Experiment 3 with novel materials (i.e. a new set of
news) and the use of real news in addition to fake news. It allows us to test the generaliz-
ability of the findings of Experiment 3 (in particular H3 and H4), and to measure the
amount of money participants will request to share fake news compared to real news.
Thus, in addition to H3 and H4, Experiment 4 tests the following hypothesis:
H7: People will ask to be paid more money for sharing a piece of news on their own
social media account when the news is fake compared to when it is real.
Participants
Based on a pre-registered power analysis, we recruited 150 participants on Prolific
Academic, paid £0.20. We removed eight participants who reported not using social
media (see Section 2 of the ESM), leaving 142 participants (94 women, M_age = 30.15,
SD = 9.93).
Figure 6. Interaction plot for the amount of money requested (raw values) in the Anonymous
Condition and the Personal Condition.
Design, procedure, and materials
The design and procedure were similar to Experiment 3, except that participants were
presented with 20 news stories instead of 12, and that among these half were true
(the other half being fake). We used novel materials because the sets of news used in
Experiments 1, 2 and 3 had become outdated. The new set of news is related to COVID-19
and is not overtly political.
Results and discussion
Confirmatory analyses. In support of H3, for each fake news story, a majority of participants
asked to be paid a non-null amount of money to share it (share of participants requesting
at least $10 to share each piece of fake news: M = 71.1%, Min = 66.7%, Max = 76.0%)
(for a visual representation see Figure 5; for more details see Section 8 of the ESM).
In support of H4, participants asked to be paid more to share fake news from their per-
sonal account than from an anonymous account, β = 0.32, t(148) = 3.41, p < .001. In an
exploratory analysis, we found that participants did not request significantly more money
to share real news from their personal account compared to an anonymous account,
β = 0.18, t(140) = 1.41, p = .16. The effect of anonymity was stronger for fake news com-
pared to real news, interaction term: β = 0.32, t(2,996) = 6.22, p < .001.
In support of H7, participants asked to be paid more to share fake news stories from
their personal account than real news stories, β = 0.57, t(1,424) = 18.92, p < .001.
Exploratory analyses. By contrast with fake news, most participants agreed to share some
real news stories without being paid (share of participants requesting at least $10
to share each piece of real news: M = 56.5%, Min = 43.3%, Max = 67.3%) (for a visual
representation see Figure 5; for more details see Section 8 of the ESM). In the Personal
Condition, the 14.1% of participants who were willing to share all the pieces of fake
news presented to them for free accounted for 43.8% of all the $0 responses.
We successfully replicated the findings of Experiment 3 on a novel set of news, offer-
ing further support for H3 and H4, and demonstrating that the perceived cost of sharing
fake news is higher than the perceived cost of sharing real news. Overall, the results of
Experiments 3 and 4 suggest that most people are reluctant to share fake news, even
when it is politically congruent, and that this reluctance is motivated in part by a desire
to prevent reputational damage, since it is stronger when the news is shared from the
participant’s own social media account. These results are consistent with most people’s
expressed commitment to share only accurate news articles on social media (Pennycook
et al., 2019), their awareness that their reputation will be negatively affected if they share
fake news (Waruwu et al., 2020), and with the fact that a small minority of people is
responsible for the majority of fake news diffusion (Grinberg et al., 2019; Guess et al.,
2019; Nelson and Taneja, 2018; Osmundsen et al., 2020). However, our results should be
interpreted tentatively since they are based on participants’ self-reported intentions. We
encourage future studies to extend these findings by relying on actual sharing decisions
by social media users.
General discussion
Even though fake news can be made to be cognitively appealing, and congruent with
anyone’s political stance, it is only shared by a small minority of social media users, and
by specialized media outlets. We suggest that so few sources share fake news because
sharing fake news hurts one’s reputation. In Experiments 1 and 2, we show that sharing
fake news does hurt one’s reputation, and that it does so in a way that cannot be easily
mended by sharing real news: not only did trust in sources that had provided one fake
news story against a background of real news drop, but this drop was larger than the
increase in trust yielded by sharing one real news story against a background of fake news
stories (an effect that was also observed for politically congruent news stories). Moreover,
sharing only one fake news story, in addition to three real news stories, is sufficient for
trust ratings to become significantly lower than the average of the mainstream media.
Not only is sharing fake news reputationally costly, but people appear to take these
costs into account. In Experiments 3 and 4, a majority of participants declared they
would have to be paid to share each of a variety of fake news stories (even when the stories
were politically congruent), that participants requested more money when their reputa-
tion could be affected, and that the amount of money requested was larger for fake news
compared to real news. These results suggest that people’s general reluctance to share
fake news is in part due to reputational concerns, which dovetails well with qualitative
data indicating that people are aware of the reputational costs associated with sharing
fake news (Waruwu et al., 2020). In this perspective, Experiments 1 and 2 show that
these fears are founded, since sharing fake news effectively hurts one’s reputation in a
way that appears hard to fix.
Consistent with past work showing that a small minority of people shares most of the
fake news (e.g. Grinberg et al., 2019; Guess et al., 2019; Nelson and Taneja, 2018;
Osmundsen et al., 2020), in Experiments 3 and 4 we observed that a small minority of
participants (less than 15%) requested no payment to share any of the fake news items
they were presented with. These participants accounted for over a third of all the cases in
which a participant requested no payment to share a piece of fake news.
Why would a minority of people appear to have no compunction about sharing fake news,
and why would many people occasionally share the odd fake news story? The sharing of
fake news in spite of the potential reputational fallout can likely be explained by a variety
of factors, the most obvious being that people might fail to realize a piece of news is fake:
if they think the news to be real, people have no reason to suspect that their reputation
would suffer from sharing it (on the contrary). Studies suggest that people are, on the
whole, able to distinguish fake from real news (Altay et al., 2020; Bago et al., 2020;
Pennycook et al., 2019, 2020; Pennycook and Rand, 2019b), and that they are better at
doing so for politically congruent than incongruent fake news (Pennycook and Rand,
2019b). However, this ability does not always translate into a refusal to share fake news
(Pennycook et al., 2019, 2020). Why would people share news they suspect to be fake?
There are a number of reasons why people might share even news they recognize as
fake, which we illustrate with popular fake news from 2016 to 2018 (BuzzFeed, 2016,
2017, 2018). Some fake news might be shared because they are entertaining (“Female
Legislators Unveil ‘Male Ejaculation Bill’ Forbidding The Disposal Of Unused Semen,”
see Acerbi, 2019; Tandoc, 2019; Tandoc et al., 2018b; Waruwu et al., 2020), or because
they serve a phatic function (“North Korea Agrees To Open Its Doors To Christianity,”
see Berriche and Altay, 2020; Duffy and Ling, 2020), in which cases sharers would not
expect to be judged harshly based on the accuracy of the news. Some fake news relate to
conspiracy theories (“FBI Agent Suspected in Hillary Email Leaks Found Dead in
Apparent Murder-Suicide”), and recent work shows people high in need for chaos—peo-
ple who might not care much about how society sees them—are particularly prone to
sharing such news (Petersen et al., 2018). A few people appear to be so politically parti-
san that the perceived reputational gains of sharing politically congruent news, even
fake, might outweigh the consequences for their epistemic reputation (Hopp et al., 2020;
Osmundsen et al., 2020; Tandoc et al., 2018b). Some fake news might fall in the category
of news that would be very interesting if they were true, and this interestingness might
compensate for their lack of plausibility (e.g. “North Korea Agrees To Open Its Doors to
Christianity”, see Altay et al., 2020).
Finally, the question of why people share fake news in spite of the reputational fallout
assumes that the sharing of fake news is not anonymous. However, on some platforms,
people can share news anonymously, and we would expect fake news to be more likely
to flourish in such environments. Indeed, some of the most popular fake news (e.g. piz-
zagate, QAnon) started flourishing on anonymous platforms such as 4chan. Their transi-
tion toward more mainstream, non-anonymous social media might be facilitated once the
news are perceived as being sufficiently popular that one doesn’t necessarily jeopardize
one’s reputation by sharing them (Acerbi, 2020). This non-exhaustive list shows that in
a variety of contexts, the negative reputational consequences of sharing fake news can be
either ignored, or outweighed by other concerns (see also, e.g. Brashier and Schacter,
2020; Guess et al., 2019; Mourão and Robertson, 2019).
Beyond the question of fake news, our studies also speak to the more general question
of how people treat politically congruent versus politically incongruent information. In
influential motivated reasoning accounts, no essential difference is drawn between biases
in the rejection of information that does not fit our views or preferences, and biases in the
acceptance of information that fits our views or preferences (Ditto et al., 2009; Kunda,
1990). By contrast, another account suggests that people should be particularly critical of
information that does not fit their priors, rather than being particularly accepting of infor-
mation that does (Mercier, 2020; Trouche et al., 2018). On the whole, our results support
this latter account.
In the first three experiments reported here, participants treated politically congruent
and politically neutral news in a similar manner, but not politically incongruent news.
Participants did not lower their trust less when they were confronted with politically con-
gruent fake news, compared with politically neutral or politically incongruent fake news.
Nor did participants ask to be paid less to share politically congruent fake news
compared to politically neutral fake news. Instead, participants failed to increase their
trust when a politically incongruent real news was presented (for similar results, see, for
example, Edwards and Smith, 1996), and asked to be paid more to share politically incon-
gruent fake news. More generally, the trust ratings of politically congruent news sources
were not higher than those of politically neutral news sources, while the ratings of politi-
cally incongruent news sources were lower than those of politically neutral news sources.
These results support a form of “vigilant conservatism,” according to which people are
not biased because they accept information congruent with their beliefs too easily, but
rather because they spontaneously reject information incongruent with their beliefs
(Mercier, 2020; Trouche et al., 2018). As for fake news, the main danger is not that people
are gullible and consume information from unreliable sources; instead, we should worry
that people reject good information and don't trust reliable sources—a mistrust that might
be fueled by alarmist discourse on fake news (Van Duyn and Collier, 2019).
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship and/
or publication of this article: This research was supported by the grant EUR FrontCog ANR-17-
EURE-0017 and ANR-10-IDEX-0001-02 PSL, and by the CONFIRMA grant from the DGA.
Sacha Altay received funding for his PhD thesis from the DGA.
ORCID iD
Sacha Altay https://orcid.org/0000-0002-2839-7375
Supplemental material
Supplemental material for this article is available online.
References
Acerbi A (2019) Cognitive attraction and online misinformation. Palgrave Communications 5(1):
15.
Acerbi A (2020) Cultural Evolution in the Digital Age. Oxford: Oxford University Press.
Allen J, Howland B, Mobius M, et al. (2020) Evaluating the fake news problem at the scale of the
information ecosystem. Science Advances 6(14): eaay3539.
Altay S, de Araujo E and Mercier H (2020) “If this account is true, it is most enormously won-
derful”: Interestingness-if-true and the sharing of true and false news. Available at: https://
psyarxiv.com/tdfh5/
Altay S, Majima Y and Mercier H (2020) It's my idea! Reputation management and idea appro-
priation. Evolution and Human Behavior 41: 235–243.
Altay S and Mercier H (2020) Relevance is socially rewarded, but not at the price of accuracy.
Evolutionary Psychology 18(1): 1474704920912640.
Bago B, Rand DG and Pennycook G (2020) Fake news, fast and slow: deliberation reduces belief
in false (but not true) news headlines. Journal of Experimental Psychology: General. Epub
ahead of print 9 January. DOI: 10.1037/xge0000729.
Balmas M (2014) When fake news becomes real: combined exposure to multiple news sources
and political attitudes of inefficacy, alienation, and cynicism. Communication Research
41(3): 430–454.
Berriche M and Altay S (2020) Internet users engage more with phatic posts than with health mis-
information on Facebook. Palgrave Communications 6(1): 1–9.
Blaine T and Boyer P (2018) Origins of sinister rumors: a preference for threat-related material in
the supply and demand of information. Evolution and Human Behavior 39(1): 67–75.
Boyer P and Parren N (2015) Threat-related information suggests competence: a possible factor
in the spread of rumors. PLoS One 10(6): e0128421.
Brashier NM and Schacter DL (2020) Aging in an era of fake news. Current Directions in
Psychological Science 29(3): 316–323.
BuzzFeed (2016) Here Are 50 of the Biggest Fake News Hits on Facebook from 2016. Available
at: https://www.buzzfeednews.com/article/craigsilverman/top-fake-news-of-2016
BuzzFeed (2017) These Are 50 of the Biggest Fake News Hits on Facebook in 2017. Available at:
https://www.buzzfeednews.com/article/craigsilverman/these-are-50-of-the-biggest-fake-news-hits-on-facebook-in
BuzzFeed (2018) These Are 50 of the Biggest Fake News Hits on Facebook in 2018. Available at:
https://www.buzzfeednews.com/article/craigsilverman/facebook-fake-news-hits-2018
Chambers S (2020) Truth, deliberative democracy, and the virtues of accuracy: is fake news
destroying the public sphere? Political Studies. Epub ahead of print 2 April. DOI:
10.1177/0032321719890811.
Collins PJ, Hahn U, von Gerber Y, et al. (2018) The bi-directional relationship between source
characteristics and message content. Frontiers in Psychology 9: 18.
Corriveau KH and Harris PL (2009) Choosing your informant: weighing familiarity and recent
accuracy. Developmental Science 12(3): 426–437.
Del Vicario M, Bessi A, Zollo F, et al. (2016) The spreading of misinformation online. Proceedings
of the National Academy of Sciences 113(3): 554–559.
Ditto PH (2009) Passion, reason, and necessity: a quantity-of-processing view of motivated
reasoning. In: Bayne T and Fernandez J (eds) Delusion and Self-Deception: Affective and
Motivational Influences on Belief Formation. New York: Taylor & Francis, pp. 23–53.
Duffy A and Ling R (2020) The gift of news: phatic news sharing on social media for social cohe-
sion. Journalism Studies 21(1): 72–87.
Edwards K and Smith EE (1996) A disconfirmation bias in the evaluation of arguments. Journal
of Personality and Social Psychology 71: 5–24.
Effron DA (2018) It could have been true: how counterfactual thoughts reduce condemnation of
falsehoods and increase political polarization. Personality and Social Psychology Bulletin
44(5): 729–745.
Fischer I and Harvey N (1999) Combining forecasts: what information do judges need to outper-
form the simple average? International Journal of Forecasting 15(3): 227–246.
Graham J and Haidt J (2012) Sacred values and evil adversaries: a moral foundations
approach. Available at: https://www.semanticscholar.org/paper/Sacred-values-and-evil-
adversaries%3A-A-moral-Graham-Haidt/6ba2b8ea7529302ebdb97d7ef02c43437fe86eda
Graham J, Haidt J and Nosek BA (2009) Liberals and conservatives rely on different sets of moral
foundations. Journal of Personality and Social Psychology 96(5): 1029.
Grinberg N, Joseph K, Friedland L, et al. (2019) Fake news on Twitter during the 2016 US
presidential election. Science 363(6425): 374–378.
Guess A, Nagler J and Tucker J (2019) Less than you think: prevalence and predictors of fake
news dissemination on Facebook. Science Advances 5(1): eaau4586.
Guess A, Nyhan B and Reifler J (2020) Exposure to untrustworthy websites in the 2016 US elec-
tion. Nature Human Behaviour 4: 472–480.
Guo L and Vargo C (2018) "Fake news" and emerging online media ecosystem: an integrated
intermedia agenda-setting analysis of the 2016 US presidential election. Communication
Research 47(2): 178–200.
Hopp T, Ferrucci P and Vargo CJ (2020) Why do people share ideologically extreme, false, and
misleading content on social media? A self-report and trace data-based analysis of counter-
media content dissemination on Facebook and Twitter. Human Communication Research.
Epub ahead of print 19 May. DOI: 10.1093/hcr/hqz022.
Iyengar S, Lelkes Y, Levendusky M, et al. (2019) The origins and consequences of affective
polarization in the United States. Annual Review of Political Science 22: 129–146.
Knight Foundation (2018) Indicators of News Media Trust. Available at: https://knightfoundation.
org/reports/indicators-of-news-media-trust/
Kunda Z (1990) The case for motivated reasoning. Psychological Bulletin 108: 480–498.
Lazer DM, Baum MA, Benkler Y, et al. (2018) The science of fake news. Science 359(6380):
1094–1096.
Lee CS and Ma L (2012) News sharing in social media: the effect of gratifications and prior experience. Computers in Human Behavior 28(2): 331–339.
Lewandowsky S, Ecker UK and Cook J (2017) Beyond misinformation: understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition 6(4): 353–369.
Marchal N, Kollanyi B, Neudert L-M, et al. (2019) Junk News During the EU Parliamentary Elections: Lessons from a Seven-Language Study of Twitter and Facebook. Oxford: Project on Computational Propaganda, Oxford Internet Institute, Oxford University.
Mercier H (2020) Not Born Yesterday: The Science of Who We Trust and What We Believe. Princeton: Princeton University Press.
Mitchell A, Gottfried J, Fedeli S, et al. (2019) Many Americans Say Made-Up News is a Critical Problem That Needs to Be Fixed. Pew Research Center. Available at: https://www.journalism.org/2019/06/05/many-americans-say-made-up-news-is-a-critical-problem-that-needs-to-be-fixed/
Mourão RR and Robertson CT (2019) Fake news as discursive integration: an analysis of sites that publish false, misleading, hyperpartisan and sensational information. Journalism Studies 20(14): 2077–2095.
Nelson JL and Taneja H (2018) The small, disloyal fake news audience: the role of audience availability in fake news consumption. New Media & Society 20(10): 3720–3737.
Osmundsen M, Bor A, Bjerregaard Vahlstrup P, et al. (2020) Partisan Polarization Is the Primary
Psychological Motivation behind “Fake News” Sharing on Twitter. Available at: https://
psyarxiv.com/v45bk/
Painter C and Hodges L (2010) Mocking the news: how The Daily Show with Jon Stewart holds traditional broadcast news accountable. Journal of Mass Media Ethics 25(4): 257–274.
Pennycook G and Rand DG (2019a) Fighting misinformation on social media using crowdsourced
judgments of news source quality. Proceedings of the National Academy of Sciences 116(7):
2521–2526.
Pennycook G and Rand DG (2019b) Lazy, not biased: susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188: 39–50.
Pennycook G, Epstein Z, Mosleh M, et al. (2019) Understanding and reducing the spread of misinformation online.
Pennycook G, McPhetres J, Zhang Y, et al. (2020) Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy nudge intervention. Psychological Science 31(7): 770–780.
Peters MA (2018) Education in a post-truth world. In: Peters MA, Rider S, Hyvönen M and Besley
T (eds) Post-Truth, Fake News. Singapore: Springer, pp. 145–150.
Petersen MB, Osmundsen M and Arceneaux K (2018) A “Need for Chaos” and the sharing of
hostile political rumors in advanced democracies. Available at: https://www.researchgate.
net/publication/327382989_A_Need_for_Chaos_and_the_Sharing_of_Hostile_Political_
Rumors_in_Advanced_Democracies
Pew Research Center (2019a) Many Americans Say Made-up News Is a Critical Problem That
Needs to Be Fixed. Available at: https://www.journalism.org/2019/06/05/many-americans-
say-made-up-news-is-a-critical-problem-that-needs-to-be-fixed/
22 new media & society 00(0)
Pew Research Center (2019b) Political Independents: Who They Are, What They Think. Available at: https://www.pewresearch.org/politics/2019/03/14/political-independents-who-they-are-what-they-think/
Quandt T, Boberg S, Schatto-Eckrodt T, et al. (2020) Pandemic news: Facebook pages of mainstream news media and the coronavirus crisis—a computational content analysis. Available at: https://dblp.uni-trier.de/rec/journals/corr/abs-2005-13290.html
Rothbart M and Park B (1986) On the confirmability and disconfirmability of trait concepts.
Journal of Personality and Social Psychology 50(1): 131.
Schulz A, Fletcher R and Popescu M (2020) Are news outlets viewed in the same way by experts and the public? A comparison across 23 European countries. Reuters Institute Factsheet. Available at: https://reutersinstitute.politics.ox.ac.uk/are-news-outlets-viewed-same-way-experts-and-public-comparison-across-23-european-countries
Skowronski JJ and Carlston DE (1989) Negativity and extremity biases in impression formation: a review of explanations. Psychological Bulletin 105(1): 131.
Slovic P (1993) Perceived risk, trust, and democracy. Risk Analysis 13(6): 675–682.
Sterrett D, Malato D, Benz J, et al. (2019) Who shared it? Deciding what news to trust on social
media. Digital Journalism 7(6): 783–801.
Tandoc EC Jr (2019) The facts of fake news: a research review. Sociology Compass 13(9): e12724.
Tandoc EC Jr, Lim ZW and Ling R (2018a) Defining “fake news”: a typology of scholarly definitions. Digital Journalism 6(2): 137–153.
Tandoc EC Jr, Ling R, Westlund O, et al. (2018b) Audiences’ acts of authentication in the age of fake news: a conceptual framework. New Media & Society 20(8): 2745–2763.
The Media Insight Project (2016) A New Understanding: What Makes People Trust and Rely on News. Available at: http://bit.ly/1rmuYok
Trouche E, Johansson P, Hall L, et al. (2018) Vigilant conservatism in evaluating communicated
information. PLoS One. DOI: 10.1371/journal.pone.0188825.
Van Bavel JJ and Pereira A (2018) The partisan brain: an identity-based model of political belief. Trends in Cognitive Sciences 22(3): 213–224.
Van Duyn E and Collier J (2019) Priming and fake news: the effects of elite discourse on evaluations of news media. Mass Communication and Society 22(1): 29–48.
Vosoughi S, Roy D and Aral S (2018) The spread of true and false news online. Science 359(6380):
1146–1151.
Walter N, Cohen J, Holbert RL, et al. (2019) Fact-checking: a meta-analysis of what works and for whom. Political Communication 37: 350–375.
Waruwu BK, Tandoc EC Jr, Duffy A, et al. (2020) Telling lies together? Sharing news as a form of social authentication. New Media & Society. Epub ahead of print 10 June. DOI: 10.1177/1461444820931017.
Yaniv I and Kleinberger E (2000) Advice taking in decision making: egocentric discounting and reputation formation. Organizational Behavior and Human Decision Processes 83: 260–281.
Author biographies
Sacha Altay is completing his PhD thesis at the Jean Nicod Institute, on the topic of misinformation
from a cognitive and evolutionary perspective.
Anne-Sophie Hacquin is a research engineer at the Jean Nicod Institute working on psychology and
public policy.
Hugo Mercier is a research scientist at the CNRS (Jean Nicod Institute) working on communica-
tion from a cognitive and evolutionary perspective.