Article

Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning

Authors: Gordon Pennycook and David G. Rand

Abstract

Why do people believe blatantly inaccurate news headlines ("fake news")? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news - even for headlines that align with individuals' political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant's ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one's political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se - a finding that opens potential avenues for fighting fake news.
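The headline-level analysis described in the abstract lends itself to a short illustration. The following Python sketch (synthetic data; all variable names are hypothetical, not the authors' code) correlates CRT scores with perceived-accuracy ratings separately for each headline and then averages those correlations for fake versus real items:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_subj, n_items = 200, 20
crt = rng.integers(0, 8, n_subj)                  # CRT score per participant
is_real = np.repeat([False, True], n_items // 2)  # half fake, half real items

rows = []
for s in range(n_subj):
    for h in range(n_items):
        # toy generative story: higher CRT -> lower ratings of fake items,
        # higher ratings of real items
        slope = 0.15 if is_real[h] else -0.15
        mu = 2.5 + slope * (crt[s] - 3.5)
        rating = np.clip(rng.normal(mu, 0.8), 1, 4)  # 1-4 accuracy scale
        rows.append((s, h, crt[s], is_real[h], rating))

df = pd.DataFrame(rows, columns=["subj", "item", "crt", "is_real", "accuracy"])

# per-headline Pearson r between CRT and perceived accuracy
r_by_item = df.groupby("item")[["crt", "accuracy"]].apply(
    lambda g: g["crt"].corr(g["accuracy"])
)
# the paper's pattern: negative mean r for (implausible) fake items,
# positive mean r for (plausible) real items
print(r_by_item.groupby(is_real).mean())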

... And the one being fooled for the second time was not just the social order and the establishment, but also the individual constituents, i.e. the everyday users of social media. Past research has deemed social media users too biased [19], lazy, deluded, ignorant [9,52], or simply illiterate [10] to be able to deal with misinformation, so we were to be taught how to recognize it and ultimately reject it [23,25]. For us, the people of these liberal democracies, misinformation rejection is the only possible outcome for dealing with this "information-based threat," from disinformation and fake news to rumors, hoaxes, and conspiracy theories [88]. ...
... And the problem is not just whether to apply moderation, but also how frequently and in what context misinformation should be dealt with. Too frequent moderation paired with constant pre/debunking is also found to create an "illusory truth effect" [52], while the absence of moderation in some scenarios creates an "implied truth effect" and leads users to deem any misinformation content they encounter credible and accurate [51]. ...
... preferred not to say. Age-wise, 32 (12.76%) were in the 18-30 bracket, 100 (42.55%) in the 31-40 bracket, 60 (25.53%) in the 41-50 bracket, 28 (11.91%) in the 51-60 bracket, and 15 (6.38%) were 61+ years old. The distribution of political leanings within the sample was: apolitical 10 (4.25%), left-leaning 115 (48.93%), moderate 61 (25.95%), and right-leaning 49 (20.85%). ...
Preprint
Full-text available
In this paper we investigate what folk models of misinformation exist through semi-structured interviews with a sample of 235 social media users. Work on social media misinformation does not investigate how ordinary users - the target of misinformation - deal with it; rather, the focus is mostly on the anxiety, tensions, or divisions misinformation creates. Studying the aspects of creation, diffusion and amplification also overlooks how misinformation is internalized by users on social media and thus is quick to prescribe "inoculation" strategies for the presumed lack of immunity to misinformation. How users grapple with social media content to develop "natural immunity" as a precursor to misinformation resilience remains an open question. We have identified at least five folk models that conceptualize misinformation as either: political (counter)argumentation, out-of-context narratives, inherently fallacious information, external propaganda, or simply entertainment. We use the rich conceptualizations embodied in these folk models to uncover how social media users minimize adverse reactions to misinformation encounters in their everyday lives.
... Recent work has emphasized the role of individual-difference variables in misinformation susceptibility, foremost among them partisan bias, actively open-minded thinking, "bullshit receptivity," and analytical thinking ability (31-33). Crucially, some of these factors (particularly political partisanship) are known to moderate the effectiveness of some anti-misinformation interventions, such as accuracy nudges (34). ...
... "Discernment" (i.e., technique/trustworthiness/sharing discernment) is defined as the difference between the averaged neutral (nonmanipulative) post scores and manipulative post scores for each outcome measure (6,32,47), the exception being confidence, for which we present the results for the manipulative and neutral posts separately. The reason for this is that "confidence discernment" is not a meaningful analytical construct. ...
... Aside from the item test, participants in studies 1 to 6 were also asked a series of demographic and other questions. Alongside standard demographic variables (age group, gender, education, and political ideology; 1 being "very left-wing" and 7 being "very right-wing"), we also included the following measures as covariates, which research has shown are associated with susceptibility to misinformation: 1) populist attitudes (35); 2) analytical (or "intuitive" versus "reflective") thinking, using the three-item CRT (38,50); 3) numerical thinking, using the combined score on the three-item Schwartz test and one item from the risk assessment test by Wright et al. (32,51,52); 4) bullshit receptivity (36); 5) the conspiracy mentality questionnaire, five items (37); 6) how often people check the news (1 being "never" and 5 being "all the time") (2); 7) social media use (1 being "never" and 5 being "all the time") (2). In study 6, instead of populism, analytical thinking, numerical thinking, bullshit receptivity, and conspiracy belief, we assessed the Ten-Item Personality Inventory (39), actively open-minded thinking (33,40), and the 20-item misinformation susceptibility test (33,41). See figs. ...
Article
Full-text available
Online misinformation continues to have adverse consequences for society. Inoculation theory has been put forward as a way to reduce susceptibility to misinformation by informing people about how they might be misinformed, but its scalability has been elusive at both a theoretical and a practical level. We developed five short videos that inoculate people against manipulation techniques commonly used in misinformation: emotionally manipulative language, incoherence, false dichotomies, scapegoating, and ad hominem attacks. In seven preregistered studies, i.e., six randomized controlled studies (n = 6464) and an ecologically valid field study on YouTube (n = 22,632), we find that these videos improve manipulation technique recognition, boost confidence in spotting these techniques, increase people's ability to discern trustworthy from untrustworthy content, and improve the quality of their sharing decisions. These effects are robust across the political spectrum and a wide variety of covariates. We show that psychological inoculation campaigns on social media are effective at improving misinformation resilience at scale.
... In several studies, the more strongly people identify with a party that would benefit from a false political belief, the more likely they are to hold that misperception (Bolsen, Druckman, & Cook, 2014; Nyhan & Reifler, 2010). Recent research on fake news also finds that people are more likely to believe false stories that are congruent (rather than incongruent) with their party identification (Allcott & Gentzkow, 2017; Pennycook & Rand, 2019, 2021; Vegetti & Mancosu, 2020). ...
... Education has long been used as a proxy for political sophistication and knowledge, and it has a strong correlation with them (Delli Carpini & Keeter, 1996). Like political knowledge, it is negatively related to political misperceptions (Johansen & Joslyn, 2008; Kull et al., 2003; Nyhan, 2010), just as political sophistication (Vegetti & Mancosu, 2020) and a propensity for analytical thinking (Pennycook & Rand, 2019) are negatively related to belief in false news stories. ...
... Partisan polarization tends to be strongest among the most informed (Abramowitz & Saunders, 2008) because those higher in knowledge and education are better able to connect their party identification to their preferences and beliefs (Delli Carpini & Keeter, 1996), and are more attentive to cues from partisan elites (Zaller, 1992). The literature on political misperceptions has found evidence for the moderating role of education and/or knowledge in the relationship between partisanship and misperceptions in more studies (Kahan, 2015; Kahan et al., 2012; Meirick, 2016; Nyhan, 2010; Nyhan & Reifler, 2010; Schaffner & Luks, 2018) than not (Meirick, 2013; Nyhan & Reifler, 2010; Pennycook & Rand, 2019). ...
Article
Full-text available
This study examined secondary survey data (N = 3,015) that asked respondents about real and pro-Trump fake news headlines in late 2016 as well as their reliance on online news sources. Reliance on Facebook for news was a vector for exposure to pro-Trump fake news but not for believing it. Reliance on Fox News online and on nonlegacy news sites was positively associated both with exposure to and perceived accuracy of pro-Trump fake news. The Fox News relationship with perceived accuracy was moderated by party and education such that Fox News reliance was a stronger predictor for Democrats and the more highly educated. Reliance on CNN online and elite newspaper sites was negatively related with the perceived accuracy of pro-Trump fake news. Implications for motivated reasoning theory and future directions are discussed.
... Such news discusses major problems in the fields of politics, public health, and personal life. Individuals who do not look at it too closely may easily spread such news and simply accept it (Pennycook & Rand, 2019). Public disbelief in the responses to the COVID-19 pandemic ultimately had an impact on willingness to vaccinate. ...
... Among individuals who score high on religiosity and dogmatism, there is a higher tendency to believe fake news, as compared to real news (Pennycook & Rand, 2019). In religious groups, it has been found that the level of error in understanding the COVID-19 pandemic is quite high, although the mechanism of what occurs cannot as yet be explained in detail (Druckman et al., 2021). ...
Article
Full-text available
The COVID-19 pandemic has prompted various responses in society. A number of individuals have believed in its existence and followed health protocols properly, but others have done the opposite. During a pandemic, belief in science influences actions and responses in society. However, individuals often do not believe in scientific findings, such as the existence of the virus that causes COVID-19 (SARS-CoV-2). A number of previous studies have assumed that science is in conflict with religion. But is religion truly the opposite of science? This article aims to examine the role of belief in science in Indonesian society in responding to the COVID-19 pandemic, and is intended for a broad audience, from the general public and scientists to policymakers. Furthermore, this article may help in understanding the position of science and religion under certain conditions, while also examining the differences in responses that occur. In Indonesia, religion and science have not been at odds in responding to the COVID-19 pandemic. Each has its respective role in providing explanations of the problems that have occurred. However, there are groups of religious fundamentalists, and their perception of science requires attention in further studies.
... The main challenge this approach must overcome is to somehow function in online environments where truth seeking is not the dominant driver of behavior, but rather personal convictions, e.g. of a political or ideological nature. Previous studies on misinformation have shown that sharing decisions regarding messages with a clear political leaning are primarily guided by users' ideological congruence with the message and only weakly by perceived veracity [13-15]. Nonetheless, previous work on the wisdom of the crowd shows that even when individuals have strong ideological or other biases, as long as the average individual's assessment is better than random, the aggregation of judgments produces an accurate collective assessment. ...
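The wisdom-of-the-crowd claim above follows Condorcet-style majority logic. A minimal simulation (assuming independent raters, an assumption the study below shows can fail in ideologically segregated communities) illustrates it:

import random

def majority_accuracy(p_correct, crowd_size, trials=10_000):
    """Probability that a majority vote of crowd_size raters, each correct
    with probability p_correct, classifies a message correctly."""
    hits = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct for _ in range(crowd_size))
        hits += correct_votes > crowd_size / 2
    return hits / trials

random.seed(0)
for n in (1, 11, 101, 1001):
    print(n, round(majority_accuracy(0.55, n), 3))
# accuracy climbs from ~0.55 toward 1.0 as the crowd grows; correlated
# (segregated) raters break the independence this relies on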
... While user ratings are an intervention that becomes effective very early, fact-checking could be used later to verify the accuracy of a rating before a message goes truly viral. Additionally, by shifting users' attention away from cognitive alignment and towards veracity, interventions could aim at 'de-biasing' users' ratings in segregated communities to avoid backfiring [13]. Such measures may be especially important since the felt presence of like-minded others in echo chambers is known to shift individual behavior towards sharing information according to partisan identity rather than information veracity [14,30,32,46]. ...
... As with all crowd-based rating systems, our strategy can only work if online users are reasonably able to discern true content from false content most of the time. Research suggests that this is indeed the case [7,9,13,49]. Of course, ratings could be thrown off by users rating in intentionally malevolent ways. ...
Preprint
Full-text available
Fact-checking takes time. As a consequence, verdicts are usually reached after a message has started to go viral and interventions can have only limited effect. A new approach inspired by the scholarly debate and implemented in practice is to harness the wisdom of the crowd by enabling recipients of an online message to attach veracity assessments to it, with the intention to allow poor initial crowd reception to temper belief in and further spread of misinformation. We study this approach by letting 4,000 subjects in 80 experimental bipartisan communities sequentially rate the veracity of informational messages. We find that in well-mixed communities, the public display of earlier veracity ratings indeed enhances the correct classification of true and false messages by subsequent users. However, crowd intelligence backfires when false information is sequentially rated in ideologically segregated communities. This happens because early raters’ ideological bias, which is aligned with a message, influences later raters’ assessments away from the truth. These results suggest that network segregation poses an important problem for community misinformation detection systems that must be accounted for in the design of such systems.
... In addition, previous research has also shown that other overriding factors influence citizens' social media behavior regarding the dissemination of misinformation (Van Bavel et al., 2021). Overall, the current state of research suggests that people who spread deceptive or misleading content often share similar characteristics (Guess et al., 2019;Pennycook & Rand, 2019). This seems to imply that false information is shared online independent from a specific topic or message. ...
... Although research on misinformation is on the rise, we know little about what is driving people to engage with this type of content on social media (Pennycook & Rand, 2019;Pennycook et al., 2021). As a key contribution to this strand of research, we aimed in this study to determine whether the process of disseminating online misinformation depends on specific attitudes toward issues or is determined by more general characteristics and behavior, such as personality traits, political orientation, and social media behavior. ...
... We found that participants who frequently use social media, generally like, share, or comment on the posts of friends and family members, and have higher levels of trust in news on social media were more likely to engage with possibly false or misleading content. These results are in line with previous studies showing that belief in the content of misinformation is less important for its spread than general social media behavior (Pennycook & Rand, 2019;Pennycook et al., 2021). Further, in line with previous studies (e.g. ...
Article
The increasing dissemination of online misinformation in recent years has raised the questions of which individuals interact with this kind of information and what role attitudinal congruence plays in this context. To answer these questions, we conduct surveys in six countries (BE, CH, DE, FR, UK, and US) and investigate the drivers of the dissemination of misinformation on three non-country-specific topics (immigration, climate change, and COVID-19). Our results show that besides issue attitudes and issue salience, political orientation, personality traits, and heavy social media use increase the willingness to disseminate misinformation online. We conclude that future research should not only consider individuals' beliefs but also focus on specific user groups that are particularly susceptible to misinformation and possibly caught in social media "fringe bubbles."
... Perceived accuracy of COVID-19 misinformation was measured by asking the respondents to rate their level of perceived accuracy (1 [not at all accurate] to 5 [extremely accurate]) for the 5 claims in the news headlines. The scale is based on previous research on the perceived accuracy of news/misinformation headlines [54,55]. The participants were asked how accurate the following claims are: (1) coconut is effective in reducing COVID-19 symptoms; (2) the pH miracle lifestyle healing program of alkaline diet, exercise, and healing foods can cure COVID-19; (3) COVID vaccines are dangerous and ineffective against the Omicron variant; (4) mRNA COVID-19 vaccinations cause magnetism by introducing graphene oxide into the blood; and (5) there is no evidence of the COVID-19 virus and no one has isolated and sequenced SARS-CoV-2 from any patient sample. ...
... Sharing intention of COVID-19 misinformation was measured by asking respondents how likely they are (1 [extremely likely to share] to 5 [not at all likely to share]) to share these news headlines on their social media profiles. While it is acknowledged that these sharing intentions are hypothetical, such approaches have been previously adopted by scholars to measure misinformation sharing [54,56]. Moreover, self-reports of sharing intentions have been found to be strongly associated with the attention received by news headlines on social media [57]. ...
... However, further probing suggested that the effects of personality traits on sharing intentions are driven mainly by low- rather than high-cognitive-ability social media news users. These results are in line with recent findings in which cognitive ability was positively associated with better truth discernment [54,55], weaker belief in false content [17,18,66], and reduced intention to share misinformation [56]. In addition, a higher cognitive ability allows individuals to make better risk assessments and filter what information is relevant when placing their trust [67]. ...
Article
Full-text available
Background: Social media is widely used as a source of news and information regarding COVID-19. However, the abundance of misinformation on social media platforms has raised concerns regarding the spreading infodemic. Accordingly, many have questioned the utility and impact of social media news use on users' engagement with (mis)information. Objective: This study offers a conceptual framework for how social media news use influences COVID-19 misinformation engagement. More specifically, we examine how news consumption on social media leads to COVID-19 misinformation sharing by inducing belief in such misinformation. We further explore whether the effects of social media news use on COVID-19 misinformation engagement depend on individual differences in cognition and personality traits. Methods: We use data from an online survey panel administered by a survey agency (Qualtrics) in Singapore. The survey was conducted in March 2022, and 500 respondents answered the survey. All participants were above 21 years of age and provided consent before taking part in the study. We use linear regression, mediation, and moderated mediation analyses to explore the proposed relationships between social media news use, cognitive ability, personality traits, and COVID-19 misinformation belief and sharing intentions. Results: The results suggest that those who frequently use social media for news consumption are more likely to believe COVID-19 misinformation and share it on social media. Further probing of the mechanism suggests that social media news use translates into sharing intent via the perceived accuracy of misinformation. Simply put, social media news users share COVID-19 misinformation because they believe it to be accurate. We also find that those with high levels of extraversion are more likely than those with low levels to perceive the misinformation as accurate and to share it. Those with high (versus low) levels of neuroticism and openness are also more likely to perceive the misinformation as accurate. Finally, personality traits do not significantly influence misinformation sharing at higher levels of cognitive ability; rather, low-cognitive-ability users largely drive misinformation sharing across personality traits. Conclusions: The reliance on social media platforms for news consumption during the COVID-19 pandemic has grown, with dire consequences for misinformation sharing. This study shows that increased social media news consumption is associated with believing and sharing COVID-19 misinformation, with low-cognitive-ability users being the most vulnerable. We offer recommendations to newsmakers, social media moderators, and policymakers in their efforts to limit COVID-19 misinformation propagation and safeguard citizens.
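As an illustration of the mediation logic in this abstract, here is a minimal Python sketch (synthetic data; variable names are hypothetical, not the study's materials) that estimates the indirect effect of news use on sharing via belief as the product of two regression slopes:

import numpy as np

rng = np.random.default_rng(1)
n = 500
news_use = rng.normal(size=n)                    # social media news use
belief = 0.5 * news_use + rng.normal(size=n)     # perceived accuracy of misinfo
sharing = 0.6 * belief + 0.1 * news_use + rng.normal(size=n)

def ols_slopes(y, predictors):
    """Return the non-intercept coefficients of y ~ predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols_slopes(belief, [news_use])[0]               # path a: use -> belief
b, c_prime = ols_slopes(sharing, [belief, news_use])  # paths b and c'
print(f"indirect effect a*b = {a*b:.2f}, direct effect c' = {c_prime:.2f}")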
... to recent findings, social media users with high political engagement (Valenzuela et al., 2019) and low cognitive ability (Ahmed, 2021;Pennycook & Rand, 2019) are more susceptible to misinformation. Political ideology and partisan slant have also been shown to play a central role in how online users perceive misinformation (Allcott & Gentzkow, 2017;Pennycook & Rand, 2019). ...
... It follows that individuals with a relatively greater ability to engage in deliberation or reasoning (i.e., greater cognitive ability) will be less susceptible to believing false information. Multiple studies have shown that individuals with higher cognitive skills are less vulnerable to fake news (Ahmed, 2021; Murphy et al., 2019; Pennycook & Rand, 2019). Notably, some of these studies also found that, in some cases, deliberation appears to facilitate accurate belief formation over ideological congruency stemming from partisan bias (Bago et al., 2020; Ross et al., 2021). ...
Article
Full-text available
This study examines the relationship between personal traits, news use via YouTube algorithmic searches, and engagement with misinformation about U.S. Muslim congresswomen. Based on analyses of survey data, we find that those with lower cognitive ability and frequent algorithmic use were more likely to believe and share misinformation. Republicans and those with higher levels of nationalism and prejudice against Muslims were also more likely to believe the misinformation. Moderation findings suggest that higher algorithmic use strengthens belief in misinformation about U.S. Muslim congresswomen. The results highlight the importance of both individual ideologies and systematic factors in understanding misinformation engagement.
... The contribution of D and post-truth epistemic beliefs to accepting and spreading disinformation and other uncivil behavior online remains to be addressed. The focus of our study was on post-truth epistemic beliefs as a predictor of the processing of information, more specifically, of lower fake news discernment scores (e.g., Pennycook & Rand, 2019). ...
... Thus, we aimed to test whether the model holds for all news or only for news that is ideology-congruent or ideology-incongruent. To this end, the political stance of the posts (pro-Democrat vs. pro-Republican) served as a second factor that was manipulated within subjects (see Kim & Dennis, 2019; Pennycook & Rand, 2019, for similar approaches). ...
... For each of the 12 news posts, participants were asked to indicate how accurate they perceived the news post to be, using a 7-point scale that ranged from very inaccurate (1) to very accurate (7; item wording: "To the best of your knowledge, how accurate is the claim in the news headline above?"). Fake news discernment scores were calculated by subtracting the average ratings of fake news posts from the average ratings of accurate news posts. Similar stimuli and measurements were used in prior research on fake news (e.g., Kim & Dennis, 2019; Pennycook & Rand, 2019). We also assessed participants' willingness to share the news posts and performed the same analyses with fake news discernment for willingness to share as the dependent variable. ...
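A minimal sketch (hypothetical structure, not the study's materials) of this discernment computation on the 7-point ratings:

import numpy as np

def fake_news_discernment(ratings, is_fake):
    """ratings: 1-7 perceived-accuracy ratings for the news posts;
    is_fake: parallel booleans marking the fake posts."""
    ratings = np.asarray(ratings, dtype=float)
    is_fake = np.asarray(is_fake, dtype=bool)
    # mean rating of accurate posts minus mean rating of fake posts
    return ratings[~is_fake].mean() - ratings[is_fake].mean()

# a participant who rates real posts high and fake posts low discerns well
print(fake_news_discernment([6, 5, 7, 6, 2, 1, 3, 2],
                            [False, False, False, False, True, True, True, True]))  # 4.0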
Article
Full-text available
The widespread distribution of mis- and disinformation highlights the need to understand why individuals fall for fake news. Surprisingly, individuals’ very understanding of knowledge and how it is created (epistemic beliefs) has received little attention in this context. We present a model focusing on the role of post-truth epistemic beliefs, their relationship to the Dark Factor of Personality (D), and their mutual association with fake news discernment. Based on a repeated measures experiment (N = 668), we show that individuals who endorse post-truth epistemic beliefs distinguish less between fake news and accurate news (fake news discernment). Further, D was linked to reduced fake news discernment, which is explained by a positive relationship with post-truth epistemic beliefs. Results remained virtually identical when ideology congruent and ideology incongruent news were considered separately. In conclusion, when addressing the global threat of fake news, epistemic beliefs need to be considered.
... Moreover, in addition to Greene and Murphy (2020), research has demonstrated that obtaining high scores on analytic thinking goes hand in hand with an ability to distinguish between true and false headlines (Pennycook & Rand, 2019), even about COVID-19 (Pennycook et al., 2020; but see Scuotto et al., 2021). Overall, these results are in line with work pointing out that analytical thinking can guard against the acceptance of fake news and hence might mitigate COVID-19 misinformation (Pennycook & Rand, 2019; Pennycook et al., 2020). ...
... Three of the included test items were from the general numeric version of the CRT and the other three items pertained to a non-numeric version of the test (Thomson & Oppenheimer, 2016). This combination has already been used in previous studies and has been shown to correlate significantly (Pennycook & Rand, 2019; Thomson & Oppenheimer, 2016). An example of a possible CRT item was "The ages of Mark and Adam add up to 28 years total. ...
Article
Full-text available
People are often exposed to fake news. Such exposure to misleading information might lead to false memory creation. We examined whether people can form false memories for COVID-19-related fake news. Furthermore, we investigated which individual factors might predict false memory formation for fake news. In two experiments, we provided participants with two pieces of COVID-19-related fake news along with a non-probative photograph. In Experiment 1, 41% (n = 66/161) of our sample reported at least one false memory for COVID-19-related fake news. In Experiment 2, an even higher percentage emerged (54.9%; n = 185/337). Moreover, in Experiment 2, participants with conspiracy beliefs were more likely to report false memories for fake news than those without such beliefs, irrespective of the conspiratorial nature of the materials. Finally, while well-being was found to be positively associated with both true and false memories (Experiment 1), only analytical thinking was negatively linked to the vulnerability to form false memories for COVID-19-related fake news (Experiment 2). Overall, our data demonstrate that false memories can occur following exposure to fake news about COVID-19, and that governmental and social media interventions are needed to improve individuals' ability to discriminate between true and false COVID-19-related news.
... Another method would be to adapt and improve people's fake news detection skills. Pennycook and Rand (2019) hypothesized that, currently, most people do not apply the appropriate reasoning when dealing with potentially fake news. They considered two competing explanations for people's belief in fake news: one based on motivated reasoning, and one based on classical reasoning. The motivated reasoning argument suggests that people's prejudices drive the acceptance of fake news. ...
... The goal of micro-targeting each message is to garner likes and encourage users to share the message (whether positive or negative) with other connected users on the social network. In the 2016 US presidential election, candidates used Facebook actively to reach potential voters and interact with their user base by posting information, videos, and news links, and by discrediting other candidates (Allcott and Gentzkow, 2017; Pennycook and Rand, 2019). In many cases, negative advertising was used to disparage other candidates by creating a negative image of that candidate (Jost, 2017). ...
... Because of the ease of accessing different types of content on social media platforms such as Facebook, many people rely on social media rather than traditional news sources such as print and television to consume news (Nelson and Taneja, 2018; Gerbaudo, 2018). Fake news disguised as real news is almost impossible to detect, as most individuals may not be able to determine the authenticity that separates facts from made-up 'news' (Pennycook and Rand, 2019). Polarising messages on Facebook can be used to create fear or anger, which causes the message to be shared for higher reactions and engagement. ...
Article
Full-text available
Facebook is the largest social media platform that is used by all generations of users, as well as small and large businesses. Many users consider Facebook as a primary news source even though the news on Facebook is not authenticated. This 'fake news' can be used for financial or political gain and can also impact consumer behaviour towards products. The purpose of this study was to investigate advertising response behaviour and fake news perception among multi-generational Facebook users, in conjunction with other variables such as gender. Using a survey, data were collected from a multi-stage quota sample of 400 respondents in the USA. A scale was developed and psychometrically tested as part of the study to determine fake news perception. Findings of this study showed that the frequency of Facebook use was consistent among generations, with Baby Boomers being most active in reading posts, and Gen Y users being most active in posting to Facebook. Gen Y users found Facebook advertisements to be most relevant. Results can be used to drive engagement with Facebook users and develop campaigns that use actionable segmentation schemes. Implications of fake news perception are discussed, and future research directions are provided.
... examining an "engagements matrix" using CrowdTangle) when constructing misinformation items. Such an approach allowed us to construct misinformation items in a systematic manner instead of cherry-picking misinformation stories, as most of the previous studies have done (Jones-Jang et al., 2021b;Pennycook and Rand, 2019). ...
Article
Full-text available
The literature on misinformation has not provided sufficient empirical evidence concerning its political consequences. To remedy this, this study examines how widespread misinformation on social media elevates political cynicism, which has peaked over the past decade in the United States. Using two-wave survey data collected both before and after the 2020 US presidential election, we present evidence that social media use triggers political cynicism, which is mediated through exposure to misinformation. In addition, the results reveal that the mediating relationship only holds among nonpartisans. Implications for democracy are also discussed.
... Four are addressed in this section, while the fifth, blockchain technology, is analysed in more detail in the following section. While these approaches aim to combat disinformation, they offer little where users share disinformation due to inertia, ignorance or laziness (Pennycook and Rand 2018), or to engrained "problematic partisan information […] on a continuum with mainstream partisan media" that continuously reaffirms fake news and existing beliefs (Marwick 2018, 501). By extension, these approaches are ineffective if shared over platforms that are harder to track, such as WhatsApp or Telegram (Aral 2020). ...
Article
This paper addresses the following main question: In what respects does blockchain technology have the potential to alleviate and/or intensify some of the problems of the information and communication sector? Divided into four sections, the paper first explores the democratic deficit within the context of an informed citizenry. This section includes a study of the current public sphere, post-truth politics, and populism. Second, it addresses the current information and communication system. The section investigates today's social media and an ever-changing digital news media landscape. Third, it explores four prevalent approaches toward reforming the information and communication system: fact-checking and debunking, media literacy, regulation and policy reform, and self-regulation. The fourth section addresses the central question of the study concerning blockchain technology. This disruptive database technology has the potential to offer solutions for regaining trust in the information ecosystem; yet, like the other approaches, placed within existing socio-economic structures, it falls short of reversing the democratic deficit.
... Fake news is false information presented on media outlets as truthful (Pennycook, Cannon, & Rand, 2018). The acceptance of fake news tends to occur due to a lack of critical evaluation of the information and a proneness to accept weak claims (Pennycook & Rand, 2020), which is due to "lazy thinking" (Pennycook & Rand, 2019). ...
Article
Full-text available
Context: Hygiene and social distancing were recommended as strategies to mitigate the proliferation of COVID-19 early in the pandemic. Despite their importance, many people resisted implementing such strategies. It is therefore important to understand the social and psychological processes underlying people's prevention behaviors regarding COVID-19. Method: This research aimed to assess the influence of fake news (FN) and belief in a just world (BJW) on prevention behaviors for COVID-19. 198 participants indicated the extent to which they believed in FN about COVID-19, answered questions about their hygienic behavior and social distancing, completed the personal BJW scale, and answered a sociodemographic questionnaire. Results: Believing in FN was associated with fewer hygienic behaviors (β = -0.17, t(195) = -2.44, p = .016) and less social distancing (β = -0.16, t(195) = -2.28, p = .024). Personal BJW moderated the effects of FN on social distancing (β = 0.16, t(194) = 2.21, p = .028). These results show the impact of FN on prevention behaviors during the pandemic and illustrate the role of BJW in this relationship. Conclusions: It is essential that the population be informed by trustworthy sources of knowledge and that public figures disseminate only scientifically accurate information. Although BJW may mitigate the negative impact of misinformation, reducing fake news and its impact is of utmost importance for public health during a pandemic.
... Due to the semantic proximity between bullshit and fake news (Jaster & Lanius, 2018; Mukerji, 2018) and the positive correlation between receptivity to bullshit and belief in fake news (Pennycook & Rand, 2019b), we consider that the study of bullshit, in a digital context, is a valuable contribution to the debate about disinformation in general. ...
Article
Full-text available
The spread of political disinformation remains a problem for democracy. In a digital universe surrendered to the dominance of social media, motivated political reasoning can be an ally of disinformation in general. Our exploratory study is a first approach, in Portugal, to the analysis of receptivity to bullshit. The main objective is to verify how political and partisan orientation can influence the level of receptivity to pseudo-profound bullshit. We used a survey (n = 268) to measure participants' partisanship and ideological orientation and to identify possible political and partisan (a)symmetries regarding receptivity to pseudo-profound bullshit. Our findings revealed that individuals are less receptive to pseudo-profound bullshit attributed to political leaders than when the source is anonymous. Furthermore, partisanship, as motivated reasoning, can determine how respondents evaluate information. We found that the level of receptivity to pseudo-profound bullshit is dependent on the political alignment of the source for left and right supporters. In addition to partisan bias, our results show that people with lower levels of education are more receptive to bullshit in general, which reinforces the need to invest in digital literacy to combat disinformation.
... Thus, it is plausible that although third parties are initially skeptical toward claims of unfairness, they are nevertheless motivated to obtain information about the incident until they feel that they have an accurate understanding of it. Moreover, some employees are more motivated to engage in deliberate thinking and information search than others (Frederick, 2005;Kruglanski & Webster, 1996), such that they are more likely to continue to evaluate the credibility of the claim beyond their initial reactions (Pennycook & Rand, 2019). Future research can examine why and how employees' perceived credibility of a claim changes as they engage in sense-making processes, as well as individual differences that influence those changes. ...
... Another preventative approach involves subtle prompts that nudge people to consider accuracy. Evidence suggests that deliberation is associated with [134-136], and causes [137], reduced belief in false news headlines that circulated on social media. Platforms could nudge users to think about accuracy by, for example, periodically asking users to rate the accuracy of randomly selected posts. ...
... A survey identified that, in Brazil, 86% of respondents used the internet to obtain information on health and disease [52], a percentage higher than that of those who sought information from doctors or specialists (74%). These results seem to corroborate that self-managed ideas give space to the building of knowledge based not on scientific evidence and methods but on common experiences [53], often incorrect ones, which are spread out of ignorance or perhaps deliberately. ...
Article
Full-text available
As coronavirus disease 2019 (COVID-19) asserts itself as a health crisis, it is necessary to assess people's knowledge and perceptions of the disease. The aim of this study is to assess the knowledge of the general population about COVID-19 and how the media influence this knowledge. This is a cross-sectional study with 5066 participants who answered an online questionnaire between April and May 2020. Data analysis was performed using descriptive statistics and logistic regression models. Over 75% showed a high degree of knowledge regarding signs, symptoms, and transmission; 95% stated that they check the veracity of the information received; and total knowledge about COVID-19 was associated with the level of education, with the perception of the quality of information disseminated by the media, and with risk perception. Despite participants' high level of knowledge, the results pointed to the need to reinforce information for individuals with less education and to the importance of avoiding denialism that reduces risk perception about COVID-19.
... This type of corrective messaging was used by Pennycook et al. (2020) and consists in "nudging" individuals in the direction of thinking about the accuracy of what they are reading. Their study relies on previous work by psychologists which found that susceptibility to fake news is often caused by lack of reasoning rather than motivated reasoning (Pennycook & Rand, 2019;Pennycook & Rand, 2021b). This implies that individuals do not readily consider the accuracy of the news information they are presented with. ...
Conference Paper
Full-text available
The legitimacy of the electoral process is often put into question by political candidates and elites who seek to account for their loss. As a result, a significant portion of voters are presented with unfounded allegations of widespread election fraud even though such fraud seldom occurs in consolidated democracies. Previous research has determined that misleading claims regarding the integrity of elections carry important implications for citizens' perceptions of electoral fairness. However, the literature has yet to systematically explore the impact of electoral fraud allegations on voter participation. Using original survey data from the United Kingdom, this research measures the impact of unfounded allegations of election fraud on the decision to vote or not. The results of the survey experiment do not support the hypotheses according to which exposure to unfounded allegations of fraud influences confidence in elections and voter participation. However, results from supplementary analyses highlight a significant relationship between perceptions of fraud and subsequent desire to cast a ballot. Explanations for these findings are discussed.
... Analytical thinking is best suited for complex and complicated problems. Several recent studies (e.g., Pennycook and Rand, 2019; Pennycook et al., 2020) showed that people with better analytical thinking are usually better at discerning fake news from real news, regardless of their political orientation. These studies examined mainly political fake news and used adult samples; therefore, we aim to examine the protective role of analytical thinking against manipulation of messages in high school students (RQ1). ...
Article
Full-text available
Adolescents, as active online searchers, have easy access to health information. Much of the health information they encounter online is of poor quality and even contains potentially harmful health information. The ability to identify the quality of health messages disseminated via online technologies is needed in terms of health attitudes and behaviors. This study aims to understand how different ways of editing health-related messages affect their credibility among adolescents and what impact this may have on the content or format of health information. The sample consisted of 300 secondary school students (M age = 17.26; SD age = 1.04; 66.3% female). To examine the effects of manipulating editorial elements, we used seven short messages about the health-promoting effects of different fruits and vegetables. Participants were then asked to rate each message's trustworthiness with a single question. We calculated sensitivity as a second-order variable by subtracting the trustworthiness of a fake message from the trustworthiness of a true neutral message. We also controlled for participants' scientific reasoning, cognitive reflection, and media literacy. Adolescents were able to distinguish overtly fake health messages from true health messages. True messages with and without editorial elements were perceived as equally trustworthy, except for news with clickbait headlines, which were rated as less trustworthy than other true messages. The results were also the same when scientific reasoning, analytical reasoning, and media literacy were considered. Adolescents should be well trained to recognize online health messages with editorial elements characteristic of low-quality content. They should also be trained in how to evaluate these messages.
... Older adults were unable to avoid spreading more health misinformation than true information in this study, although they had tried to discern what was true or false before sharing. Multiple researchers have suggested that a lack of credibility judgment mainly accounts for the spreading of misinformation on social media (e.g., [63-65]). Furthermore, Pennycook et al. [66] found that requiring participants to judge the accuracy of COVID-19 news before sharing increased the quality of shared news two- to threefold, in comparison with a condition that only required participants to decide whether to share the news. ...
Article
Full-text available
The online world is flooded with misinformation that puts older adults at risk, especially misinformation about health and wellness. To understand older adults' vulnerability to online misinformation, this study examines how eye-catching headlines and emotional images impact their credibility judgments and spreading of health misinformation. Fifty-nine older adults aged between 58 and 83 years participated in this experiment. First, participants intuitively chose an article for further reading from among a set of headlines. Then they viewed the emotional images. Finally, they judged the credibility of health articles and decided whether to share these articles. On average, participants successfully judged only 41.38% of the health articles. Attractive headlines not only attracted participants' clicks at first glance but also increased their credibility judgments of health misinformation content. Although participants were more willing to share an article they believed than one they did not, 62.5% of the articles they wanted to share were falsehoods. Older adults in this study were notified of possible falsehoods in advance and were given enough time to discern misinformation before sharing. However, these efforts led neither to high judgment accuracy nor to a high quality of information that they wanted to share. That may be on account of eye-catching headlines, which misled participants into believing health misinformation. Moreover, most older adults in this study may follow the "better safe than sorry" principle when confronted with health misinformation; that is to say, they would rather trust the misinformation to avoid health risks than doubt it.
... Treating social media as platforms or publishers, or a hybrid? (Samuelson, 2021); the "credulous mind" (Fessler et al., 2017; Pennycook & Rand, 2019); social psychology of groups (e.g., in-group/out-group thinking, social identity); who controls the media environment? ...
Article
An influential line of thinking in behavioral science, to which the two authors have long subscribed, is that many of society's most pressing problems can be addressed cheaply and effectively at the level of the individual, without modifying the system in which the individual operates. We now believe this was a mistake, along with, we suspect, many colleagues in both the academic and policy communities. Results from such interventions have been disappointingly modest. But more importantly, they have guided many (though by no means all) behavioral scientists to frame policy problems in individual, not systemic, terms: to adopt what we call the "i-frame," rather than the "s-frame." The difference may be more consequential than i-frame advocates have realized, by deflecting attention and support away from s-frame policies. Indeed, highlighting the i-frame is a long-established objective of corporate opponents of concerted systemic action such as regulation and taxation. We illustrate our argument briefly for six policy problems, and in depth with the examples of climate change, obesity, retirement savings, and pollution from plastic waste. We argue that the most important way in which behavioral scientists can contribute to public policy is by employing their skills to develop and implement value-creating system-level change.
... The literature suggests that veracity is central to a user's decision to share news or not. Yet this finding is often studied by asking users about their own behavior, not about their perception of a particular news item, i.e., its perceived veracity (Metaxas et al., 2014; Pennycook and Rand, 2019). The experimental setting of this paper allows us to study users' perceived veracity of all claims. ...
Article
Full-text available
Why do we share fake news? Despite a growing body of freely available knowledge and information, fake news has managed to spread more widely and deeply than before. This paper seeks to understand why this is the case. More specifically, using an experimental setting, we aim to quantify the effect of veracity and perception on reaction likelihood. To examine the nature of this relationship, we set up an experiment that mimics the mechanics of Twitter, allowing us to observe users' perceptions, their reactions to the claims shown, and the factual veracity of those claims. We find that perceived veracity significantly predicts how likely a user is to react, with higher perceived veracity leading to higher reaction rates. Additionally, we confirm that fake news is inherently more likely to be shared than other types of news. Lastly, we identify an activist-type behavior, meaning that belief in fake news is associated with significantly disproportionate spreading (compared to belief in true news).
... According to Indiana University researchers, these two types of information commonly go viral because "information overload and users' short attention span impair social media's ability to discriminate material based on quality." Because social media is a public forum, anybody, including news organizations, may post whatever they want without fear of being held accountable for fact-checking (Pennycook & Rand, 2019; Bondielli & Marcelloni, 2019). Users must decide whether their feeds include false or misleading information. ...
Article
Full-text available
In recent years, the subject of fake news, as well as its consequences, has gained a lot of attention. Even though fake news is not a new occurrence, technological advancements have created an ideal atmosphere for it to spread quickly. Platforms like Facebook, Twitter, and YouTube provide fertile ground for the creation and dissemination of misinformation and disinformation. As a result, it is critical to research how social media works, how fake news is created and distributed through social media, and what role users play. The study examines social media as a tool for misinformation and disinformation. Being qualitative, the paper relies on secondary data such as published materials and personal observations to make deductions and inferences about the use of social media for fake news. This study examines misinformation and disinformation as kinds of fake news, as well as the many sorts of misinformation that may be found on social media. It adds to the idea of fake news by addressing the problem of users' interactions with news and cooperation in the information age. To add credibility to the study, the concepts of misinformation and disinformation were investigated. In addition, the role of social media in the spread of misinformation and disinformation was studied to provide a thorough framework for the study. The study concludes with recommendations for preventing information manipulation on social media. Keywords: disinformation, fake news, misinformation, social media.
... Dual-stage processing is most likely to occur on difficult tasks when a deliberative stage is needed (see Rotello & Heit, 2000). Research has shown that those who believe fake news may be relying more on an intuitive system than on a deliberative one (Pennycook & Rand, 2019, 2021). Researchers have tried to assess whether these stages/systems are discrete (i.e., the deliberative system processes information after the intuitive system has finished its evaluation) or whether evidence is evaluated incrementally (i.e., the intuitive and deliberative systems overlap as a decision is made). ...
Article
Full-text available
Health misinformation is a problem on social media, and more understanding is needed about how users cognitively process it. In this study, participants’ accuracy in determining whether 60 health claims were true (e.g., “Vaccines prevent disease outbreaks”) or false (e.g., “Vaccines cause disease outbreaks”) was assessed. The 60 claims were related to three domains of health risk behavior (i.e., smoking, alcohol and vaccines). Claims were presented as Tweets or as simple text statements. We employed mouse tracking to measure reaction times, whether processing happens in discrete stages, and response uncertainty. We also examined whether health literacy was a moderating variable. The results indicate that information in statements and tweets is evaluated incrementally most of the time, but with overrides happening on some trials. Adequate health literacy scorers were equally certain when responding to tweets and statements, but they were more accurate when responding to tweets. Inadequate scorers were more confident on statements than on tweets but equally accurate on both. These results have important implications for understanding the underlying cognition needed to combat health misinformation online.
... Most of the previous prevention literature focused on technical specifications, such as algorithm-based debunking and correcting of misinformation [26,27], rather than on the consumer as the misinformation sharer. Besides these technical solutions, studies have now started to examine cognitive factors that may impact people's susceptibility to fake news [28][29][30][31]. With this in view, the current study introduced cognitive ability as a moderator. ...
Article
Sharing of misinformation on social media platforms is a global concern, with research offering little insight into the motives behind such sharing. Drawing on cognitive load theory and the literature on cognitive ability, we developed and tested a research model hypothesising why people share misinformation. We also tested the moderating role of cognitive ability. We obtained data from 385 social media users in Nigeria using a chain referral technique, with an online questionnaire as the instrument for data collection. Our findings suggest that information overload and social media fatigue are strong predictors of misinformation sharing. Information strain also contributed to misinformation sharing behaviour. Furthermore, cognitive ability moderated and weakened the effects of information strain and information overload on misinformation sharing, such that these effects were more pronounced among those with low cognitive ability. This indicates that those with low cognitive ability have a higher tendency to share misinformation. However, cognitive ability did not moderate the effect of social media fatigue on misinformation sharing behaviour. The study concludes with some theoretical and practical implications.
... This idea is supported by various arguments, for example, by the pervasive effects of motivated reasoning on human judgment (Mercier and Sperber 2011), but also by the observation that voters tend to support a preferred political candidate even when presented with negative information (Redlawsk, Civettini, and Emmerson 2010), or by the fact that people tend to strenuously debate arguments that are inconsistent with their political view while passively and uncritically accepting those that are in line with their ideology (Strickland, Taber, and Lodge 2011). A recent empirical study (Pennycook and Rand 2019), though, raised serious doubts about the idea that motivated reasoning may be the only mechanism responsible for these processes, indicating in fact that analytic/critical thinking plays a hugely important role in people's self-inoculation against political disinformation. In other words, an increasing amount of 'evidence indicates that people fall for fake news because they fail to think; not because they think in a motivated or identity-protective way' (Pennycook and Rand 2019, 10). ...
Article
Full-text available
We start by introducing the idea of echo chambers. Echo chambers are social and epistemic structures in which opinions, leanings, or beliefs about certain topics are amplified and reinforced due to repeated interactions within a closed system; that is, within a system that has a rather homogeneous sample of sources or people, which all share the same attitudes towards the topics in question. Echo chambers are a particularly dangerous phenomenon because they prevent the critical assessment of sources and contents, thus leading the people living within them to deliberately ignore or exclude opposing views. In the second part of this paper, we argue that the reason for the appearance of echo chambers lies in the adoption of what we call 'epistemic vices'. We examine which vices might be responsible for their emergence, and in doing so, we focus on a specific one: 'epistemic violence'. In assessing and evaluating the role of this epistemic vice, we note that it can be triggered by epistemic contexts characterized by high stakes that may turn ordinary intellectual virtues (such as skepticism) into vices (such as denialism). In the third part of this contribution, we suggest a way to deal with echo chambers. The solution focuses on advocating a responsibilist pedagogy of virtues and vices that, we claim, might be capable of preventing their emergence.
... Using the cognitive science approach, Pennycook et al. [37] investigated whether cognitive factors motivate belief in or rejection of fake news. They conducted two studies in the paper, utilizing the Cognitive Reflection Test (CRT) as a measure of the proclivity to engage in analytical reasoning. ...
Preprint
Full-text available
The spread of digital disinformation (aka "fake news") is arguably one of the most significant threats on the Internet, one that can cause individual and societal harm at large scale. Susceptibility to fake news attacks hinges on whether Internet users perceive a fake news article/snippet to be legitimate after reading it. In this paper, we attempt to garner an in-depth understanding of users' susceptibility to text-centric fake news attacks via a neuro-cognitive methodology. We investigate the neural underpinnings relevant to fake/real news through EEG. We run an experiment with human users to pursue a thorough investigation of users' perception and cognitive processing of fake/real news. We analyze the neural activity associated with the fake/real news detection task for different categories of news articles. Our results show that there may be no statistically significant or automatically inferable differences in the way the human brain processes fake vs. real news, while marked differences are observed when people are subject to (real/fake) news vs. a resting state, and even between some different categories of fake news. This neuro-cognitive finding may help to explain users' susceptibility to fake news attacks, as also confirmed by the behavioral analysis. In other words, fake news articles may seem almost indistinguishable from real news articles in both the behavioral and neural domains. Our work serves to dissect the fundamental neural phenomena underlying fake news attacks and explains users' susceptibility to these attacks through the limits of human biology. We believe this could be a notable insight for researchers and practitioners, suggesting that human detection of fake news might be ineffective; this may also have an adverse impact on the design of automated detection approaches that crucially rely upon human labeling of text articles for building training models.
... This presumably reflects that statements that are easy to process are experienced as familiar [...], thus leading participants to feel that they have heard or seen this before, suggesting that it is probably true." (Reber & Schwarz 1999) Notably, reading a false statement just once is sufficient to increase later perceptions of its accuracy (Pennycook, Cannon, & Rand 2018). Furthermore, an experiment on misinformation and fake news by Pennycook et al. (2019) showed that low-level fluency heuristics play a role in judgements of accuracy even for highly implausible, intensely partisan, and entirely fabricated news stories (see also Pennycook & Rand, 2019a; Pennycook, Cannon, & Rand 2019b; Pennycook, Cannon, & Rand 2019c). Heuristics are mental shortcuts; a low-level fluency heuristic means that a piece of information that is processed more fluently, faster, or more smoothly than another is considered more important or valuable; the term refers to the speed with which information is processed. ...
Technical Report
Full-text available
This report documents the outcomes of an analysis of user behaviour on social media regarding the approval, assessment and evaluation of information and information sources, feeding into the further development of the EUNOMIA toolkit. Both individual and collective behaviour were analysed. On the one hand, there are factors that cause and explain individual behaviour, such as cognitive biases and psychological effects that influence a single person's behaviour. An example is the so-called truth effect, i.e. the fact that repetition and familiarity with content make it more believable. On the other hand, group effects and social norms additionally influence the individual's behaviour. Studies have shown that we are more likely to believe a piece of information if our social circles also accept it (Lewandowsky et al. 2012; Eckles & Bakshy 2017; Lazer et al. 2017; Sloman & Fernbach 2018; Karduni 2019). The task of user behaviour analysis included (i) a literature review; (ii) a workshop with end users and experts; and (iii) an online survey. We identified explanations for collective and individual user behaviour in assessing, sharing and distributing (mis)information, building on (i) the theory of cognitive dissonance and the theory of selective exposure; (ii) the third-person effect; (iii) the concept of opinion leadership; (iv) the concept of information gatekeeping; (v) the truth effect; (vi) explanations for the persistence of misinformation; and (vii) audience behaviour. These insights explain, for example, how users on social media tend to surround themselves with information that confirms their own interests, values and beliefs in so-called 'filter bubbles' or 'echo chambers'. Furthermore, we were able to identify strategies to influence or reward preferable behaviour to avoid the spread of misinformation in the form of nudges, building on certain heuristics (i.e., mental shortcuts) and psychological or social effects. We also identified approaches for the correction of misinformation (e.g., providing explanations, targeting opinion leaders), as well as strategies to avoid its spread (e.g., triggering a thinking process before information is read).
Article
Full-text available
Fake news is frequently disseminated on social media, with significant implications for public opinion, a phenomenon that has increased academic interest in studying it. This research summarizes the body of literature developed around fake news through a bibliometric analysis. For this, 1213 documents on the subject, extracted from Scopus, are analyzed. Indicators are constructed identifying the evolution of scientific production; the most important journals, authors and organizations; the countries with the highest production; bibliographic coupling networks; and the terms of greatest occurrence. The results show that studies on fake news have increased since 2016, coinciding with Brexit and the US elections, and identify six clusters that can guide future research by revealing scientific trends.
Article
The illusory truth effect occurs when exposure to information increases belief in it. This effect highlights one concern about the prevalence of misinformation: that exposure to it increases belief in it. To combat this, social media platforms have employed fact-checkers to label misinformation. These fact-checks are effective in reducing belief in misinformation. Some fact-checkers, however, post headlines without clear truth labels on these platforms. For example, the fact-checker Snopes posted, “Did Mark Zuckerberg Post About Orgies on Little James Island?” Posting questions that do not explicitly state that the information is false may increase belief in that information. Two experiments examined this possibility. In both experiments, exposure to questions did not increase belief. Furthermore, exposure to questions decreased the illusory truth effect in subsequent statements. These findings suggest that posting false headlines as questions is not harmful and could be beneficial because it may focus readers’ attention on accuracy.
Article
The spread of misinformation about COVID-19 vaccines threatens to prolong the pandemic, with prior evidence indicating that exposure to misinformation has negative effects on intent to be vaccinated. We describe results from randomized experiments in the United States (n = 5,075) that allow us to measure the effects of factual corrections on false beliefs about the vaccine and vaccination intent. Our evidence makes clear that corrections eliminate the effects of misinformation on beliefs about the vaccine, but that neither misinformation nor corrections affect vaccination intention. These effects are robust to formatting changes in the presentation of the corrections. Indeed, corrections without any formatting modifications whatsoever prove effective at reducing false beliefs, with formatting variations playing a very minor role. Despite the politicization of the pandemic, misperceptions about COVID-19 vaccines can be consistently rebutted across party lines.
Article
Political discourse often seems divided not just by different preferences, but by entirely different representations of the debate. Are partisans able to accurately describe their opponents’ position, or do they instead generate unrepresentative “straw man” arguments? In this research we examined an (incentivized) political imitation game by asking partisans on both sides of the U.S. health care debate to describe the most common arguments for and against ObamaCare. We used natural language-processing algorithms to benchmark the biases and blind spots of our participants. Overall, partisans showed a limited ability to simulate their opponents’ perspective, or to distinguish genuine from imitation arguments. In general, imitations were less extreme than their genuine counterparts. Individual difference analyses suggest that political sophistication only improves the representations of one’s own side but not of an opponent’s side, exacerbating the straw man effect. Our findings suggest that false beliefs about partisan opponents may be pervasive.
Article
Purpose Misinformation is a significant phenomenon in today's world: the purpose of this paper is to explore the motivations behind the creation and use of misinformation. Design/methodology/approach A literature review was undertaken, covering the English and Russian language sources. Content analysis was used to identify the different kinds of motivation relating to the stages of creating and communicating misinformation. The authors applied Schutz's analysis of motivational types. Findings The main types of motivation for creating and facilitating misinformation were identified as “in-order-to motivations”, i.e. seeking to bring about some desired state, whereas the motivations for using and, to a significant extent, sharing misinformation were “because” motivations, i.e. rooted in the individual's personal history. Originality/value The general model of the motivations underlying misinformation is original as is the application of Schutz's typification of motivations to the different stages in the creation, dissemination and use of misinformation.
Article
Full-text available
Purpose In light of the fact that people have more opportunities to encounter scientific misinformation surrounding the COVID-19 pandemic, this research aimed to examine how different types of misinformation impact readers' evaluations of messages and to identify the mechanisms (motivated reasoning hypothesis vs. classical reasoning theory) underlying those evaluations of message inaccuracy and fakeness. Design/methodology/approach This research employed data from an online experiment conducted in Hong Kong in March 2022, when the fifth COVID-19 wave peaked. The data were collected using quota sampling established by age based on census data (N = 835). Findings In general, the participants were not able to discern manipulated content from misinterpreted content. When given a counter-attitudinal message, those who read a message with research findings as supporting evidence rated the message as more inaccurate and fake than those who read the same message but with quotes as supporting evidence. In contrast, one's disposition to engage in analytical thinking and reasoning was not found to impact assessments of information inaccuracy and fakeness. Implications With respect to the debate about whether people are susceptible to misinformation because of cognitive laziness or because they want to protect their personal beliefs, the findings provide evidence for the motivated reasoning hypothesis. Media literacy programs should identify strategies to prepare readers to be attentive to personal biases in information processing. Originality/value Although many researchers have attempted to identify the mechanisms underlying readers' susceptibility to misinformation, this research makes a distinction between misinterpreted and manipulated content. Furthermore, although the Cognitive Reflection Test is widely studied in the Western context, this research tested this disposition in Hong Kong. Future research should continue to empirically test the effects of different types of misinformation on readers and develop distinct strategies in response to the diverse effects found.
Article
The negative relationship between religiosity and cognitive ability is well documented, though most research on the connection between religion and cognitive factors has largely ignored how social positions like race and gender may inform the association. This paper explores how race and gender intersect with the association between religious factors and verbal ability. Using data from the General Social Surveys, I examine racial differences in the impact of religious identification, religious participation, and beliefs about the Bible on verbal ability. The analyses show that the association between religious factors and verbal ability varies significantly across racial groups, and point to some gender differences in the association between religious factors and verbal ability by race. The findings highlight the importance of an intersectional approach and suggest that psychological theorizing about the relationships between religion and cognitive ability is underdeveloped.
Article
[Abstract] This article analyzes the extent to which certain cultural dimensions explain the intensity with which citizens of different countries perceive the presence of fake news in their daily lives. The research is based on the Flash Eurobarometer survey conducted in 2018 about fake news and disinformation online in 25 European countries, and adopts the Hofstede cultural dimensions as a model of cultural analysis. The study uses multilevel regression analysis to test individual and macro-level indicators that explain variations in perceptions of fake news. The findings reveal a clear, direct relationship between uncertainty avoidance, masculinity, and fake news exposure, as well as an interaction of these cultural dimensions with age, but not with the other individual and media use related variables. These results have theoretical and practical implications, especially from the point of view of the design of public policies to fight disinformation in the European Union (EU).
Chapter
The rapid spread of fake news during the COVID-19 pandemic has aggravated the situation and made it extremely difficult for the World Health Organization and government officials to inform people with only accurate scientific findings. Misinformation dissemination was so unhindered that social media sites ultimately had to conceal posts related to COVID-19 entirely and allow users to see only WHO- or government-approved information. This action had to be taken because newsreaders lack the ability to efficiently discern fact from fiction, and thereby indirectly aid the spread of fake news by believing it to be true. Our work helps in understanding the thought process of an individual when reading a news article. This information can further be used to develop their critical thinking ability. We expand the space of misinformation's impact on users by conducting our own surveys to understand the factors consumers deem most important when deciding whether some piece of information is true. Results from our study show that the factors people say are important for deciding what is true differ from those they actually rely on when confronted with real articles. We also find that prior beliefs and political leanings affect people's ability to detect the legitimacy of information. Keywords: Fake news, Critical thinking, Source, Social media
Article
The FIAT paradigm (Grimmer et al., 2021) is a novel method of eliciting ‘Aha’ moments for incorrect solutions to anagrams in the laboratory, i.e. false insights. There exist many documented reports of psychotic symptoms accompanying strong feelings of ‘Aha!’ (Feyaerts, Henriksen, Vanheule, Myin-Germeys, & Sass, 2021; Mishara, 2010; Tulver, Kaup, Laukkonen, & Aru, 2021), suggesting that the newly developed FIAT could reveal whether people who have more false insights are more prone to psychosis and delusional belief. To test this possibility, we recruited 200 participants to take an adapted version of the FIAT and complete measures of thinking style and psychosis proneness. We found no association between experimentally induced false insights and measures of Schizotypy, Need for Cognition, Jumping to Conclusions, Aberrant Salience, Faith in Intuition, or the Cognitive Reflection Task. We conclude that experiencing false insights might not be constrained to any particular type of person, but rather, may arise for anyone under the right circumstances.
Article
Full-text available
This article focuses on journalistic activities in the context of the first wave of Covid-19 in 2020, when a high presence of post-truth and fake news was identified in news production, which justifies addressing these two conceptual objects. The objective was to understand how problems in the production process, such as job insecurity during the pandemic, allowed information gaps that were filled by misinformation and infodemics. For this, a quantitative method was used: an online survey applied to 365 participants from Ibero-America during 2020, covering production processes, work routines, and information generated during journalists' quarantine, as well as information consumption during confinement, administered to journalists and online news receivers. As a result, most journalists changed their work routines, for example through digital data checking and a preference for scientific sources. About half of the news receivers valued press work positively, even though news consumption generated negative prospects. In conclusion, there is a need to review certain productive practices in the journalistic field during exceptional situations such as the pandemic.
Article
Misinformation related to COVID-19 is a threat to public health. The present study examined the potential for deliberative cognitive styles, such as actively open-minded thinking and need for evidence, to deter belief in misinformation and promote belief in true information related to COVID-19. In addition, given how responses to the pandemic have been politicized, the roles of political orientation and motivated reasoning were also examined. We conducted a survey in South Korea (N = 1466) during May 2020. Participants answered measures related to demographics, open-minded thinking, need for evidence, and accuracy perceptions of COVID-19 misinformation and true information items. Multi-level analyses of the survey data found that while motivated reasoning was present, deliberative cognitive styles (actively open-minded thinking and need for evidence) decreased belief in misinformation without intensifying motivated reasoning tendencies. Findings also showed a political asymmetry whereby conservatives detected COVID-19 misinformation at a lesser rate. Overall, the results suggest that health communication related to COVID-19 misinformation should pay attention to conservative populations. The results also imply that interventions that activate deliberative cognitive styles hold promise in reducing belief in COVID-19 misinformation.
Article
What are the underlying cognitive mechanisms that support belief in conspiracies? Common dual-process perspectives suggest that deliberation helps people make more accurate decisions and decreases belief in conspiracy theories that have been proven wrong (therefore bringing people closer to objective accuracy). However, evidence for this stance is (i) mostly correlational, and (ii) the existing causal evidence might be influenced by experimental demand effects and/or a lack of suitable control conditions. Furthermore, recent work has found that analytic thinking tends to increase the coherence between prior beliefs and new information, which may not always lead to accurate conclusions. In two studies (Study 1: N = 1028; Study 2: N = 1000), participants were asked to evaluate the strength of conspiracist (or non-conspiracist) explanations of events. In the first study, which used well-known conspiracy theories, deliberation had no effect. In the second study, which used relatively unknown conspiracy theories, we found that experimentally manipulating deliberation did increase belief accuracy, but only among people with a strong 'anti-conspiracy' or strong 'pro-conspiracy' mindset from the beginning, and not among those with an intermediate conspiracist mindset. Although these results generally support the idea that encouraging people to deliberate can help counter the growth of novel conspiracy theories, they also indicate that the effect of deliberation on conspiracist beliefs is more complicated than previously thought.
Article
Two studies investigated the effects of exposure to disinformation on citizens’ evaluation of politicians and the impact of corrections. Study 1 tested the roles of message valence and relational closeness of social media connections sharing disinformation. Study 2 examined whether corrections on social networking sites could mitigate the influence of disinformation. Results of the first study indicate a limited persuasive effect of disinformation, with negative disinformation being more entertaining but potentially less credible than positive disinformation. Effects of corrections in Study 2 were strong. There was no consistent influence of whether disinformation was shared by a close versus distant friend.
Article
Based on the Risk Information Seeking and Processing Model, the present study examines whether COVID-19 message fatigue leads to greater information avoidance and heuristic processing, and consequently greater acceptance of misinformation. We conducted a survey of 821 Korean adults regarding their information seeking and processing regarding COVID-19 vaccination. Results of SEM analyses showed that COVID-19 message fatigue was (a) negatively related to information insufficiency and (b) positively related to information avoidance and heuristic processing. Information avoidance and heuristic processing were subsequently related to greater levels of misinformation acceptance. Theoretical and practical implications are discussed.
Article
In matters of governance, is believing subject to ethical standards? If so, what are the criteria, and how relevant are they in our personal and political culture today? The really important matters in politics and governance necessitate a confidence that our beliefs will lead dependably to predictable and verifiable outcomes. Accordingly, it is unethical to hold a belief that is founded on insufficient evidence or based on hearsay or blind acceptance. In this paper, we demonstrate that the pragmatist concept of truth best meets this standard for ethically held belief in matters of politics and governance. Currently, these standards are abused by the gaslighting and distortion characteristic of the often social-media-driven 'misinformation society'. The legitimacy of and trust in our institutions and leadership that are requisite for good governance are thereby challenged, threatening the viability of our republic.
Article
Full-text available
Background During the COVID-19 pandemic, the world witnessed a partisan segregation of beliefs toward the global health crisis and its management. Politically motivated reasoning, the tendency to interpret information in accordance with individual motives to protect valued beliefs rather than objectively considering the facts, could represent a key process involved in the polarization of attitudes. The objective of this study was to explore politically motivated reasoning when participants assess information regarding COVID-19. Design We carried out a preregistered online experiment using a diverse sample (N = 1500) from the United States. Both Republicans and Democrats assessed the same COVID-19–related information about the health effects of lockdowns, social distancing, vaccination, hydroxychloroquine, and wearing face masks. Results At odds with our prestated hypothesis, we found no evidence in line with politically motivated reasoning when interpreting numerical information about COVID-19. Moreover, we found no evidence supporting the idea that numeric ability or cognitive sophistication bolsters politically motivated reasoning in the case of COVID-19. Instead, our findings suggest that participants base their assessment on prior beliefs of the matter. Conclusions Our findings suggest that politically polarized attitudes toward COVID-19 are more likely to be driven by lack of reasoning than by politically motivated reasoning—a finding that opens potential avenues for combating political polarization about important health care topics. Highlights Participants assessed numerical information regarding the effect of different COVID-19 policies. We found no evidence in line with politically motivated reasoning when interpreting numerical information about COVID-19. Participants tend to base their assessment of COVID-19–related facts on prior beliefs of the matter. Politically polarized attitudes toward COVID-19 are more a result of lack of thinking than of partisanship.
Article
Full-text available
Objective Fake news represents a particularly egregious and direct avenue by which inaccurate beliefs have been propagated via social media. We investigate the psychological profile of individuals who fall prey to fake news. Method We recruited 1,606 participants from Amazon's Mechanical Turk for three online surveys. Results The tendency to ascribe profundity to randomly generated sentences – pseudo‐profound bullshit receptivity – correlates positively with perceptions of fake news accuracy, and negatively with the ability to differentiate between fake and real news (media truth discernment). Relatedly, individuals who overclaim their level of knowledge also judge fake news to be more accurate. We also extend previous research indicating that analytic thinking correlates negatively with perceived accuracy by showing that this relationship is not moderated by the presence/absence of the headline's source (which has no effect on accuracy), or by familiarity with the headlines (which correlates positively with perceived accuracy of fake and real news). Conclusion Our results suggest that belief in fake news may be driven, to some extent, by a general tendency to be overly accepting of weak claims. This tendency, which we refer to as reflexive open‐mindedness, may be partly responsible for the prevalence of epistemically suspect beliefs writ large.
Article
Full-text available
What can be done to combat political misinformation? One prominent intervention involves attaching warnings to headlines of news stories that have been disputed by third-party fact-checkers. Here we demonstrate a hitherto unappreciated potential consequence of such a warning: an implied truth effect, whereby false headlines that fail to get tagged are considered validated and thus are seen as more accurate. With a formal model, we demonstrate that Bayesian belief updating can lead to such an implied truth effect. In Study 1 (n = 5,271 MTurkers), we find that although warnings do lead to a modest reduction in perceived accuracy of false headlines relative to a control condition (particularly for politically concordant headlines), we also observed the hypothesized implied truth effect: the presence of warnings caused untagged headlines to be seen as more accurate than in the control. In Study 2 (n = 1,568 MTurkers), we find the same effects in the context of decisions about which headlines to consider sharing on social media. We also find that attaching verifications to some true headlines—which removes the ambiguity about whether untagged headlines have not been checked or have been verified—eliminates, and in fact slightly reverses, the implied truth effect. Together these results contest theories of motivated reasoning while identifying a potential challenge for the policy of using warning tags to fight misinformation—a challenge that is particularly concerning given that it is much easier to produce misinformation than it is to debunk it.
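The Bayesian logic behind the implied truth effect can be illustrated with a short calculation. The sketch below is a minimal illustration, not the authors' formal model; the parameter names and values are assumptions. It supposes that fact-checkers tag a fraction of false headlines and never tag true ones, so the absence of a tag raises the posterior probability that a headline is true:

def posterior_true_given_untagged(p_true, tag_coverage):
    """P(headline true | no warning tag), assuming a fraction `tag_coverage`
    of false headlines get tagged and true headlines are never tagged."""
    p_untagged_if_true = 1.0                  # true headlines carry no tag
    p_untagged_if_false = 1.0 - tag_coverage  # untagged false headlines slip through
    numerator = p_true * p_untagged_if_true
    return numerator / (numerator + (1.0 - p_true) * p_untagged_if_false)

prior = 0.5  # assumed prior that an arbitrary headline is true
for coverage in (0.0, 0.3, 0.6):
    print(coverage, round(posterior_true_given_untagged(prior, coverage), 3))
# Output: 0.5, 0.588, 0.714 - the more false headlines the fact-checkers tag,
# the more credible an untagged false headline looks.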
Article
Full-text available
Although Americans generally hold science in high regard and respect its findings, for some contested issues, such as the existence of anthropogenic climate change, public opinion is polarized along religious and political lines. We ask whether individuals with more general education and greater science knowledge, measured in terms of science education and science literacy, display more (or less) polarized beliefs on several such issues. We report secondary analyses of a nationally representative dataset (the General Social Survey), examining the predictors of beliefs regarding six potentially controversial issues. We find that beliefs are correlated with both political and religious identity for stem cell research, the Big Bang, and human evolution, and with political identity alone on climate change. Individuals with greater education, science education, and science literacy display more polarized beliefs on these issues. We find little evidence of political or religious polarization regarding nanotechnology and genetically modified foods. On all six topics, people who trust the scientific enterprise more are also more likely to accept its findings. We discuss the causal mechanisms that might underlie the correlation between education and identity-based polarization.
Article
Full-text available
The Cognitive Reflection Test (CRT) is a widely used measure of the propensity to engage in analytic or deliberative reasoning in lieu of gut feelings or intuitions. CRT problems are unique because they reliably cue intuitive but incorrect responses and, therefore, appear simple among those who do poorly. By virtue of being comprised of so-called “trick-problems” that, in theory, could be discovered as such, it is commonly held that the predictive validity of the CRT is undermined by prior experience with the task. Indeed, recent studies show that people who have previous experience with the CRT score higher on the test. Naturally, however, it is not obvious that this actually undermines the predictive validity of the test. Across six studies with ~2500 participants and seventeen variables of interest (e.g., religious belief, bullshit receptivity, smartphone usage, susceptibility to heuristics and biases, numeracy), we did not find a single case where the predictive power of the CRT was significantly undermined by repeated exposure. This was despite the fact that we replicated the previously reported increase in accuracy among individuals who report previous experience with the CRT. We speculate that the CRT remains robust after multiple exposures because less reflective (more intuitive) individuals fail to realize that being presented with apparently easy problems more than once confers information about the tasks’ actual difficulty.
Article
Full-text available
The 2016 US Presidential Election brought considerable attention to the phenomenon of “fake news”: entirely fabricated and often partisan content that is presented as factual. Here we demonstrate one mechanism that contributes to the believability of fake news: fluency via prior exposure. Using actual fake news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this “illusory truth effect” for fake news headlines occurs despite a low level of overall believability, and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories, and that tagging such stories as disputed is not an effective solution to this problem. Interestingly, however, we also find that prior exposure does not impact entirely implausible statements (e.g., “The Earth is a perfect square”). These observations indicate that although extreme implausibility is a boundary condition of the illusory truth effect, only a small degree of potential plausibility is sufficient for repetition to increase perceived accuracy. As a consequence, the scope and impact of repetition on beliefs is greater than previously assumed.
Article
Full-text available
Misinformation can undermine a well-functioning democracy. For example, public misconceptions about climate change can lead to lowered acceptance of the reality of climate change and lowered support for mitigation policies. This study experimentally explored the impact of misinformation about climate change and tested several pre-emptive interventions designed to reduce the influence of misinformation. We found that false-balance media coverage (giving contrarian views equal voice with climate scientists) lowered perceived consensus overall, although the effect was greater among free-market supporters. Likewise, misinformation that confuses people about the level of scientific agreement regarding anthropogenic global warming (AGW) had a polarizing effect, with free-market supporters reducing their acceptance of AGW and those with low free-market support increasing their acceptance of AGW. However, we found that inoculating messages that (1) explain the flawed argumentation technique used in the misinformation or that (2) highlight the scientific consensus on climate change were effective in neutralizing those adverse effects of misinformation. We recommend that climate communication messages should take into account ways in which scientific content can be distorted, and include pre-emptive inoculation messages.
Article
Full-text available
People frequently continue to use inaccurate information in their reasoning even after a credible retraction has been presented. This phenomenon is often referred to as the continued influence effect of misinformation. The repetition of the original misconception within a retraction could contribute to this phenomenon, as it could inadvertently make the "myth" more familiar—and familiar information is more likely to be accepted as true. From a dual-process perspective, familiarity-based acceptance of myths is most likely to occur in the absence of strategic memory processes. We thus examined factors known to affect whether strategic memory processes can be utilized: age, detail, and time. Participants rated their belief in various statements of unclear veracity; facts were subsequently affirmed and myths were retracted. Participants then re-rated their belief either immediately or after a delay. We compared groups of young and older participants, and we manipulated the amount of detail presented in the affirmative/corrective explanations, as well as the retention interval between encoding and a retrieval attempt. We found that (1) older adults over the age of 65 were worse at sustaining their post-correction belief that myths were inaccurate, (2) a greater level of explanatory detail promoted more sustained belief change, and (3) fact affirmations promoted more sustained belief change than myth retractions over the course of one week (but not over three weeks). This supports the notion that familiarity is indeed a driver of continued influence effects.
Article
Full-text available
The present research investigated the reason for mixed evidence concerning the relationship between analytic cognitive style (ACS) and political orientation in previous research. Most past research operationalized ACS with the Cognitive Reflection Test (CRT), which has been criticized as relying heavily on numeracy skills, and operationalized political orientation with the single-item self-placement measure, which has been criticized as masking the distinction between social and economic conservatism. The present research recruited an Amazon Mechanical Turk sample and, for the first time, simultaneously employed three separate ACS measures (CRT, CRT2, base-rate conflict problems), a measure of attitudes toward self-critical and reflective thinking (the Actively Open-Minded Thinking Scale; AOT), and separate measures of social and economic conservatism, as well as the standard measure of political orientation. As expected, the total ACS score (a combination of the separate measures) was negatively related to social, but not economic, conservatism. However, the CRT by itself was not related to conservatism, in parallel with some past findings, while the two other measures of ACS showed the same pattern as the combined score. Trait reflectiveness (AOT) was related negatively to all measures of political conservatism (social, economic, and general). The results clearly suggest that the conclusion reached regarding the ACS-political orientation relationship depends on the measure(s) used, with the measure most commonly employed in past research (CRT) behaving differently than other measures. Future research must further pursue the implications of the known differences (e.g., reliance on numeracy vs. verbal skills) among ACS measures and distinguish different senses of reflectiveness.
Chapter
Full-text available
Dual-process theories formalize a salient feature of human cognition: We have the capacity to rapidly formulate answers to questions, but we sometimes engage in deliberate reasoning processes before responding. It does not require deliberative thought to respond to the question “what is your name”. It did, however, require some thinking to write this paragraph (perhaps not enough). We have, in other words, two minds that might influence what we decide to do (Evans, 2003; Evans & Frankish, 2009). Although this distinction is acceptable (and, as I’ll argue, essentially irrefutable), it poses serious challenges for our understanding of cognitive architecture. In this chapter, I will outline what I view to be important theoretical groundwork for future dual-process models. I will start with two core premises that I take to be foundational: 1) dual-process theory is irrefutable but falsifiable, and 2) analytic thought has to be triggered by something. I will then use these premises to outline my perspective on what I consider the most substantial challenge for dual-process theorists: We don’t (yet) know what makes us think.
Article
Full-text available
People frequently rely on information even after it has been retracted, a phenomenon known as the continued-influence effect of misinformation. One factor proposed to explain the ineffectiveness of retractions is that repeating misinformation during a correction may inadvertently strengthen the misinformation by making it more familiar. Practitioners are therefore often encouraged to design corrections that avoid misinformation repetition. The current study tested this recommendation, investigating whether retractions become more or less effective when they include reminders or repetitions of the initial misinformation. Participants read fictional reports, some of which contained retractions of previous information, and inferential reasoning was measured via questionnaire. Retractions varied in the extent to which they served as misinformation reminders. Retractions that explicitly repeated the misinformation were more effective in reducing misinformation effects than retractions that avoided repetition, presumably because of enhanced salience. Recommendations for effective myth debunking may thus need to be revised.
Article
Full-text available
The Cognitive Reflection Test (CRT) is a hugely influential problem solving task that measures individual differences in the propensity to reflect on and override intuitive (but incorrect) solutions. The validity of this three-item measure depends on participants being naïve to its materials and objectives. Evidence from 142 volunteers recruited online suggests this is often not the case. Over half of the sample had previously seen at least one of the problems, predominantly through research participation or the media. These participants produced substantially higher CRT scores than those without prior exposure (2.36 vs. 1.48), with the majority scoring at ceiling level. Participants that had previously seen a specific problem (e.g., the bat and ball problem) nearly always solved that problem correctly. These data suggest the CRT may have been widely invalidated. As a minimum, researchers must control for prior exposure to the three problems and begin to consider alternative, extended measures of cognitive reflection.
Article
Full-text available
Previous studies relating low-effort or intuitive thinking to political conservatism are limited to Western cultures. Using Turkish and predominantly Muslim samples, Study 1 found that analytic cognitive style (ACS) is negatively correlated with political conservatism. Study 2 found that ACS correlates negatively with political orientation and with social and personal conservatism, but not with economic conservatism. It also examined other variables that might help to explain this correlation. Study 3 tried to manipulate ACS via two different standard priming procedures in two different samples, but our manipulation checks failed. Study 4 manipulated intuitive thinking style via cognitive load manipulation to see whether it enhances conservatism for contextualized political attitudes but we did not find a significant effect. Overall, the results indicate that social liberals tend to think more analytically than conservatives and people's long term political attitudes may be resistant to experimental manipulations.
Article
Full-text available
Individual differences in the mere willingness to think analytically have been shown to predict religious disbelief. Recently, however, it has been argued that analytic thinkers are not actually less religious; rather, the putative association may be a result of religiosity typically being measured after analytic thinking (an order effect). In light of this possibility, we report four studies in which a negative correlation between religious belief and performance on analytic thinking measures is found even when religious belief is measured in a separate session. We also performed a meta-analysis of all previously published studies on the topic along with our four new studies (N = 15,078, k = 31), focusing specifically on the association between performance on the Cognitive Reflection Test (the most widely used individual difference measure of analytic thinking) and religious belief. This meta-analysis revealed an overall negative correlation (r) of -.18, 95% CI [-.21, -.16]. Although this correlation is modest, self-identified atheists (N = 133) scored 18.7% higher than religiously affiliated individuals (N = 597) on a composite measure of analytic thinking administered across our four new studies (d = .72). Our results indicate that the association between analytic thinking and religious disbelief is not caused by a simple order effect. There is good evidence that atheists and agnostics are more reflective than religious believers.
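For readers unfamiliar with how such correlations are pooled, here is a minimal fixed-effect meta-analysis sketch using Fisher's z transform; the per-study r and N values are invented for illustration and are not the 31 studies analyzed above:

import math

studies = [(-0.15, 500), (-0.22, 300), (-0.18, 800)]  # (r, N) per study; made-up values

# Fisher z transform; the inverse-variance weight for a correlation is N - 3.
zs = [(0.5 * math.log((1 + r) / (1 - r)), n - 3) for r, n in studies]
total_w = sum(w for _, w in zs)
z_bar = sum(z * w for z, w in zs) / total_w
se = (1.0 / total_w) ** 0.5

def back_transform(z):
    """Convert a pooled Fisher z back to a correlation coefficient."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

lo, hi = z_bar - 1.96 * se, z_bar + 1.96 * se
print(f"pooled r = {back_transform(z_bar):.3f}, "
      f"95% CI [{back_transform(lo):.3f}, {back_transform(hi):.3f}]")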
Article
Full-text available
Much research in cognitive psychology has focused on the tendency to conserve limited cognitive resources. The CRT is the predominant measure of such miserly information processing, and it also predicts a number of frequently studied decision-making traits (such as belief bias and need for cognition). However, many subjects from common subject populations have already been exposed to the questions, which might add considerable noise to data. Moreover, the CRT has been shown to be confounded with numeracy. To increase the pool of available questions and to try to address numeracy confounds, we developed and tested the CRT-2. CRT-2 questions appear to rely less on numeracy than the original CRT but appear to measure closely related constructs in other respects. Crucially, substantially fewer subjects from Amazon's Mechanical Turk have been previously exposed to CRT-2 questions. Though our primary purpose was investigating the CRT-2, we also found that belief bias questions appear suitable as an additional source of new items. Implications and remaining measurement challenges are discussed.
Article
Full-text available
Although bullshit is common in everyday life and has attracted attention from philosophers, its reception (critical or ingenuous) has not, to our knowledge, been subject to empirical investigation. Here we focus on pseudo-profound bullshit, which consists of seemingly impressive assertions that are presented as true and meaningful but are actually vacuous. We presented participants with bullshit statements consisting of buzzwords randomly organized into statements with syntactic structure but no discernible meaning (e.g., “Wholeness quiets infinite phenomena”). Across multiple studies, the propensity to judge bullshit statements as profound was associated with a variety of conceptually relevant variables (e.g., intuitive cognitive style, supernatural belief). Parallel associations were less evident among profundity judgments for more conventionally profound (e.g., “A wet person does not fear the rain”) or mundane (e.g., “Newborn babies require constant attention”) statements. These results support the idea that some people are more receptive to this type of bullshit and that detecting it is not merely a matter of indiscriminate skepticism but rather a discernment of deceptive vagueness in otherwise impressive sounding claims. Our results also suggest that a bias toward accepting statements as true may be an important component of pseudo-profound bullshit receptivity.
Article
Full-text available
We review recent evidence revealing that the mere willingness to engage analytic reasoning as a means to override intuitive “gut feelings” is a meaningful predictor of key psychological outcomes in diverse areas of everyday life. For example, those with a more analytic thinking style are more skeptical about religious, paranormal, and conspiratorial concepts. In addition, analytic thinking relates to having less traditional moral values, making less emotional or disgust-based moral judgments, and being less cooperative and more rationally self-interested in social dilemmas. Analytic thinkers are even less likely to offload thinking to smartphone technology and may be more creative. Taken together, these results indicate that the propensity to think analytically has major consequences for individual psychology.
Article
Full-text available
Dual-system theories of human cognition, under which fast automatic processes can complement or compete with slower deliberative processes, have not typically been incorporated into larger scale population models used in evolutionary biology, macroeconomics, or sociology. However, doing so may reveal important phenomena at the population level. Here, we introduce a novel model of the evolution of dual-system agents using a resource-consumption paradigm. By simulating agents with the capacity for both automatic and controlled processing, we illustrate how controlled processing may not always be selected over rigid, but rapid, automatic processing. Furthermore, even when controlled processing is advantageous, frequency-dependent effects may exist whereby the spread of control within the population undermines this advantage. As a result, the level of controlled processing in the population can oscillate persistently, or even go extinct in the long run. Our model illustrates how dual-system psychology can be incorporated into population-level evolutionary models, and how such a framework can be used to examine the dynamics of interaction between automatic and controlled processing that transpire over an evolutionary time scale.
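As a rough illustration of the frequency-dependent dynamic described here, consider a toy replicator model in which controlled processing is advantageous when rare but loses its edge as it spreads. The payoff structure below is an assumption for illustration, not the authors' resource-consumption model:

def payoff_controlled(x):
    """Assumed payoff of controlled processing; its benefit shrinks as the
    controlled share x of the population grows (frequency dependence)."""
    return 2.0 * (1.0 - x) - 0.8   # benefit when rare, minus a fixed time cost

PAYOFF_AUTOMATIC = 0.5             # assumed flat payoff for automatic agents

x = 0.1                            # initial share of controlled agents
for generation in range(40):
    pc = payoff_controlled(x)
    mean_payoff = x * pc + (1 - x) * PAYOFF_AUTOMATIC
    x = x * pc / mean_payoff       # discrete replicator update
print(round(x, 3))                 # settles near the mixed equilibrium at 0.35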
Article
Full-text available
Scores on the three-item Cognitive Reflection Test (CRT) have been linked with dual-system theory and normative decision making (Frederick, 2005). In particular, the CRT is thought to measure monitoring of System 1 intuitions such that, if cognitive reflection is high enough, intuitive errors will be detected and the problem will be solved. However, CRT items also require numeric ability to be answered correctly and it is unclear how much numeric ability vs. cognitive reflection contributes to better decision making. In two studies, CRT responses were used to calculate Cognitive Reflection and numeric ability; a numeracy scale was also administered. Numeric ability, measured on the CRT or the numeracy scale, accounted for the CRT's ability to predict more normative decisions (a subscale of decision-making competence, incentivized measures of impatient and risk-averse choice, and self-reported financial outcomes); Cognitive Reflection contributed no independent predictive power. Results were similar whether the two abilities were modeled (Study 1) or calculated using proportions (Studies 1 and 2). These findings demonstrate numeric ability as a robust predictor of superior decision making across multiple tasks and outcomes. They also indicate that correlations of decision performance with the CRT are insufficient evidence to implicate overriding intuitions in the decision-making biases and outcomes we examined. Numeric ability appears to be the key mechanism instead.
Article
Full-text available
Exposure to news, opinion and civic information increasingly occurs through social media. How do these online networks influence exposure to perspectives that cut across ideological lines? Using de-identified data, we examined how 10.1 million U.S. Facebook users interact with socially shared news. We directly measured ideological homophily in friend networks and examined the extent to which heterogeneous friends could potentially expose individuals to cross-cutting content. We then quantified the extent to which individuals encounter comparatively more or less diverse content while interacting via Facebook's algorithmically ranked News Feed, and further studied users' choices to click through to ideologically discordant content. Compared to algorithmic ranking, individuals' choices about what to consume had a stronger effect limiting exposure to cross-cutting content.
Article
Full-text available
The Cognitive Reflection Test (CRT) is one of the most widely used tools to assess individual differences in intuitive-analytic cognitive styles. The CRT is of broad interest because each of its items reliably cues a highly available and superficially appropriate but incorrect response, conventionally deemed the "intuitive" response. To do well on the CRT, participants must reflect on and question the intuitive responses. The CRT score typically employed is the sum of correct responses, assumed to indicate greater "reflectiveness" (i.e., CRT-Reflective scoring). Some recent researchers have, however, inverted the rationale of the CRT by summing the number of intuitive incorrect responses, creating a putative measure of intuitiveness (i.e., CRT-Intuitive). We address the feasibility and validity of this strategy by considering the problem of the structural dependency of these measures derived from the CRT and by assessing their respective associations with self-report measures of intuitive-analytic cognitive styles: the Faith in Intuition and Need for Cognition scales. Our results indicated that, to the extent that the dependency problem can be addressed, the CRT-Reflective but not the CRT-Intuitive measure predicts intuitive-analytic cognitive styles. These results provide evidence that the CRT is a valid measure of reflective but not of intuitive thinking.
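The structural dependency at issue is simple arithmetic: on an N-item test, every response is either correct, the intuitive lure, or some other error, so the CRT-Reflective and CRT-Intuitive scores are negatively related by construction. The toy simulation below makes this concrete; the simulated response mix is an arbitrary assumption.

```python
import random

# Toy demonstration that CRT-Reflective (# correct) and CRT-Intuitive (# lure
# responses) are negatively dependent by construction: with N items split into
# correct / lure / other-error, intuitive = N - reflective - other.
N_ITEMS, N_SUBJECTS = 3, 10_000
random.seed(1)

def simulate_subject():
    # Each item independently ends up correct, lure, or other error (assumed mix).
    kinds = random.choices(["correct", "lure", "other"],
                           weights=[0.4, 0.5, 0.1], k=N_ITEMS)
    reflective = kinds.count("correct")
    intuitive = kinds.count("lure")
    assert intuitive == N_ITEMS - reflective - kinds.count("other")
    return reflective, intuitive

scores = [simulate_subject() for _ in range(N_SUBJECTS)]
mean_r = sum(r for r, _ in scores) / N_SUBJECTS
mean_i = sum(i for _, i in scores) / N_SUBJECTS
cov = sum((r - mean_r) * (i - mean_i) for r, i in scores) / N_SUBJECTS
print(f"cov(reflective, intuitive) = {cov:.3f}")  # negative by construction
```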
Article
Full-text available
This is an important book. It addresses the question: Are human beings systematically irrational? They would be so if they were "hard-wired" to reason badly on certain types of tasks. Even if they could discover on reflection that the reasoning was bad, the unreflective tendency to reason badly would be a systematic irrationality. According to Stanovich, psychologists have shown that "people assess probabilities incorrectly, they display confirmation bias, they test hypotheses inefficiently, they violate the axioms of utility theory, they do not properly calibrate degrees of belief, they overproject their own opinions onto others, they allow prior knowledge to become implicated in deductive reasoning, they systematically underweight information about nonoccurrence when evaluating covariation, and they display numerous other information-processing biases." (1-2) Such cognitive psychologists as Nisbett and Ross (1980) and Kahneman, Slovic and Tversky (1982) interpret this apparently dismal typical performance as evidence of hard-wired "heuristics and biases" (whose presence can be given an evolutionary explanation) which are sometimes irrational. Critics have proposed four alternative explanations. (1) Are the deficiencies just unsystematic performance errors of basically competent subjects due to such temporary psychological malfunctions as inattention or memory lapses? Stanovich and West (1998a) administered to the same subjects four types of reasoning tests: syllogistic reasoning, selection, statistical reasoning, argument evaluation. They assumed that, if mistakes were random performance errors, there would be no significant correlation between scores on the different types of tests. In fact, they found modest but statistically very significant correlations (at the .001 level) between all pairs of scores except those on statistical reasoning and argument evaluation. Hence, they concluded, not all mistakes on such reasoning tasks are random performance errors.
Article
Full-text available
While individual differences in the willingness and ability to engage analytic processing have long informed research in reasoning and decision making, the implications of such differences have not yet had a strong influence in other domains of psychological research. We claim that analytic thinking is not limited to problems that have a normative basis and, as an extension of this, predict that individual differences in analytic thinking will be influential in determining beliefs and values. Along with assessments of cognitive ability and style, religious beliefs, and moral values, participants judged the wrongness of acts that are considered disgusting and conventionally immoral but do not violate care- or fairness-based moral principles. Differences in willingness to engage analytic thinking predicted reduced judgements of wrongness, independent of demographics, political ideology, religiosity, and moral values. Further, we show that those who were higher in cognitive ability were less likely to indicate that purity, patriotism, and respect for traditions and authority are important to their moral thinking. These findings are consistent with a “Reflectionist” view that assumes a role for analytic thought in determining substantive, deeply-held human beliefs and values.
Article
Full-text available
Human reasoning has been characterized as often biased, heuristic, and illogical. In this article, I consider recent findings establishing that, despite the widespread bias and logical errors, people at least implicitly detect that their heuristic response conflicts with traditional normative considerations. I propose that this conflict sensitivity calls for the postulation of logical and probabilistic knowledge that is intuitive and that is activated automatically when people engage in a reasoning task. I sketch the basic characteristics of these intuitions and point to implications for ongoing debates in the field.
Article
Full-text available
Dual-process and dual-system theories in both cognitive and social psychology have been subjected to a number of recently published criticisms. However, they have been attacked as a category, on the incorrect assumption that there is a generic version that applies to all. We identify and respond to 5 main lines of argument made by such critics. We agree that some of these arguments have force against some of the theories in the literature but believe them to be overstated. We argue that the dual-processing distinction is supported by much recent evidence in cognitive science. Our preferred theoretical approach is one in which rapid autonomous processes (Type 1) are assumed to yield default responses unless intervened on by distinctive higher order reasoning processes (Type 2). What defines the difference is that Type 2 processing supports hypothetical thinking and loads heavily on working memory.
Article
To what extent do survey experimental treatment effect estimates generalize to other populations and contexts? Survey experiments conducted on convenience samples have often been criticized on the grounds that subjects are sufficiently different from the public at large to render the results of such experiments uninformative more broadly. In the presence of moderate treatment effect heterogeneity, however, such concerns may be allayed. I provide evidence from a series of 15 replication experiments that results derived from convenience samples like Amazon’s Mechanical Turk are similar to those obtained from national samples. Either the treatments deployed in these experiments cause similar responses for many subject types or convenience and national samples do not differ much with respect to treatment effect moderators. Using evidence of limited within-experiment heterogeneity, I show that the former is likely to be the case. Despite a wide diversity of background characteristics across samples, the effects uncovered in these experiments appear to be relatively homogeneous.
Article
We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.
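For readers unfamiliar with cascade metrics, "deeper" and "more broadly" refer to properties of the retweet tree rooted at the original tweet. The sketch below computes size, depth, and maximum breadth for a toy cascade; the (parent, child) edge-list input format is an assumption for illustration, not the study's actual data pipeline.

```python
from collections import defaultdict, deque

# Toy computation of cascade size, depth, and maximum breadth for a retweet tree.
def cascade_metrics(edges, root):
    """edges: list of (parent, child) retweet pairs; root: the original tweet."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)
    size, depth = 1, 0
    breadth_at = defaultdict(int)            # nodes per level of the tree
    queue = deque([(root, 0)])
    while queue:
        node, level = queue.popleft()
        breadth_at[level] += 1
        depth = max(depth, level)
        for c in children[node]:
            size += 1
            queue.append((c, level + 1))
    return {"size": size, "depth": depth, "max_breadth": max(breadth_at.values())}

# Example: the original tweet is retweeted by A and B; C then retweets B.
print(cascade_metrics([("root", "A"), ("root", "B"), ("B", "C")], "root"))
# -> {'size': 4, 'depth': 2, 'max_breadth': 2}
```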
Article
Democracies assume accurate knowledge by the populace, but the human attraction to fake and untrustworthy news poses a serious problem for healthy democratic functioning. We articulate why and how identification with political parties – known as partisanship – can bias information processing in the human brain. There is extensive evidence that people engage in motivated political reasoning, but recent research suggests that partisanship can alter memory, implicit evaluation, and even perceptual judgments. We propose an identity-based model of belief for understanding the influence of partisanship on these cognitive processes. This framework helps to explain why people place party loyalty over policy, and even over truth. Finally, we discuss strategies for de-biasing information processing to help to create a shared reality across partisan divides.
Article
Selective reading of political online information was examined based on cognitive dissonance, social identity, and news values frameworks. Online reports were displayed to 156 Americans while selective exposure was tracked. The news articles that participants chose from were either conservative or liberal and also either positive or negative regarding American political policies. In addition, information processing styles (cognitive reflection and need-for-cognition) were measured. Results revealed confirmation and negativity biases, per cognitive dissonance and news values, but did not corroborate the hypothesis derived from social identity theory. Greater cognitive reflection, greater need-for-cognition, and worse affective state fostered the confirmation bias; stronger social comparison tendency reduced the negativity bias.
Article
Psychologists, neuroscientists, and economists often conceptualize decisions as arising from processes that lie along a continuum from automatic (i.e., “hardwired” or overlearned, but relatively inflexible) to controlled (less efficient and effortful, but more flexible). Control is central to human cognition, and plays a key role in our ability to modify the world to suit our needs. Given its advantages, reliance on controlled processing may seem predestined to increase within the population over time. Here, we examine whether this is so by introducing an evolutionary game theoretic model of agents that vary in their use of automatic versus controlled processes, and in which cognitive processing modifies the environment in which the agents interact. We find that, under a wide range of parameters and model assumptions, cycles emerge in which the prevalence of each type of processing in the population oscillates between 2 extremes. Rather than inexorably increasing, the emergence of control often creates conditions that lead to its own demise by allowing automaticity to also flourish, thereby undermining the progress made by the initial emergence of controlled processing. We speculate that this observation may have relevance for understanding similar cycles across human history, and may lend insight into some of the circumstances and challenges currently faced by our species.
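The oscillation the abstract describes can be caricatured with two coupled equations: replicator dynamics for the controlled fraction, plus an environment-quality variable that controlled agents maintain and automatic agents exploit. The sketch below uses my own illustrative equations and parameter values, not the authors' model; under these values the controlled fraction rises and falls in damped cycles around an interior equilibrium, echoing the cycles the authors report.

```python
# Back-of-the-envelope replicator-dynamics sketch: controlled agents shape an
# environment that automatic agents can then exploit for free, eroding
# control's advantage. All equations and parameters are assumptions.
B, C = 1.0, 0.25            # benefit of a correct response; cost of control
ENV_GAIN, ENV_DECAY = 0.15, 0.10
DT, STEPS = 0.01, 30_000    # Euler integration step and horizon

x, e = 0.9, 0.2             # fraction controlled; environment quality in [0, 1]
for step in range(STEPS):
    f_controlled = B - C     # flexible: always appropriate, but effortful
    f_automatic = B * e      # rigid: pays off only in a favourable environment
    # Replicator dynamics: the currently fitter strategy grows in frequency.
    x += DT * x * (1 - x) * (f_controlled - f_automatic)
    # Controlled processing maintains the environment; it decays otherwise.
    e += DT * (ENV_GAIN * x - ENV_DECAY * e)
    e = min(max(e, 0.0), 1.0)
    if step % 2_000 == 0:
        print(f"t={step * DT:6.0f}  controlled={x:.3f}  environment={e:.3f}")
```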
Article
Why does public conflict over societal risks persist in the face of compelling and widely accessible scientific evidence? We conducted an experiment to probe two alternative answers: the ‘science comprehension thesis’ (SCT), which identifies defects in the public's knowledge and reasoning capacities as the source of such controversies; and the ‘identity-protective cognition thesis’ (ICT), which treats cultural conflict as disabling the faculties that members of the public use to make sense of decision-relevant science. In our experiment, we presented subjects with a difficult problem that turned on their ability to draw valid causal inferences from empirical data. As expected, subjects highest in numeracy – a measure of the ability and disposition to make use of quantitative information – did substantially better than less numerate ones when the data were presented as results from a study of a new skin rash treatment. Also as expected, subjects’ responses became politically polarized – and even less accurate – when the same data were presented as results from the study of a gun control ban. But contrary to the prediction of SCT, such polarization did not abate among subjects highest in numeracy; instead, it increased. This outcome supported ICT, which predicted that more numerate subjects would use their quantitative-reasoning capacity selectively to conform their interpretation of the data to the result most consistent with their political outlooks. We discuss the theoretical and practical significance of these findings.
Article
Following the 2016 US presidential election, many have expressed concern about the effects of false stories ("fake news"), circulated largely through social media. We discuss the economics of fake news and present new data on its consumption prior to the election. Drawing on web browsing data, archives of fact-checking websites, and results from a new online survey, we find: 1) social media was an important but not dominant source of election news, with 14 percent of Americans calling social media their "most important" source; 2) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times; 3) the average American adult saw on the order of one or perhaps several fake news stories in the months around the election, with just over half of those who recalled seeing them believing them; and 4) people are much more likely to believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks.
Article
Our smartphones enable—and encourage—constant connection to information, entertainment, and each other. They put the world at our fingertips, and rarely leave our sides. Although these devices have immense potential to improve welfare, their persistent presence may come at a cognitive cost. In this research, we test the “brain drain” hypothesis that the mere presence of one’s own smartphone may occupy limited-capacity cognitive resources, thereby leaving fewer resources available for other tasks and undercutting cognitive performance. Results from two experiments indicate that even when people are successful at maintaining sustained attention—as when avoiding the temptation to check their phones—the mere presence of these devices reduces available cognitive capacity. Moreover, these cognitive costs are highest for those highest in smartphone dependence. We conclude by discussing the practical implications of this smartphone-induced brain drain for consumer decision-making and consumer welfare.