Article

Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning


Abstract

Why do people believe blatantly inaccurate news headlines ("fake news")? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news - even for headlines that align with individuals' political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant's ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one's political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se - a finding that opens potential avenues for fighting fake news.
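The "discernment" measure described in the abstract (the ability to tell real from fake headlines) is commonly operationalized as a simple difference score: mean perceived accuracy of real headlines minus mean perceived accuracy of fake headlines, per participant. A minimal sketch with hypothetical ratings (this is an illustration, not the authors' code or exact scoring procedure):

```python
# Hypothetical sketch of a discernment score, assuming each headline
# receives a perceived-accuracy rating (e.g., on a 1-4 scale) and a
# ground-truth label of real vs fake.

def discernment(ratings):
    """ratings: list of (rating, is_real) tuples.
    Returns mean rating for real headlines minus mean rating for fake ones;
    higher values indicate better discrimination of real from fake news."""
    real = [r for r, is_real in ratings if is_real]
    fake = [r for r, is_real in ratings if not is_real]
    return sum(real) / len(real) - sum(fake) / len(fake)

# A participant who rates real headlines 3 and 4, and fake headlines 1 and 2:
print(discernment([(3, True), (4, True), (1, False), (2, False)]))  # → 2.0
```

Under this kind of scoring, a participant who rates everything as accurate (or inaccurate) scores 0, which is what distinguishes discernment from overall belief.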


... Researchers are trying to figure out why certain individuals are more susceptible to misinformation than others, and which individual traits are associated with a greater vulnerability to FN [4]. This latter line of research has identified many determinants that may increase the likelihood of being misled by FN, including, for example, partisan bias (i.e., the tendency to interpret new information in accordance with one's ideological beliefs [5]), delusion-proneness [6], dogmatism and religious fundamentalism [6], over-claiming [7], and low cognitive reflection (i.e., the disposition to rely on intuitive rather than analytical thinking [8]). ...
... The study of the psychological characteristics that make individuals vulnerable to being misinformed has focused on individual differences in analytical thinking [8], delusion-proneness [6], religiosity and dogmatism [6], bullshit receptivity [26], over-claiming one's own knowledge [7], confirmation bias [2], selective exposure [27], partisan bias [28], and others. If we consider that all these variables can be operationalized in many different ways, it is certain that external validation according to Scenarios 1 and 2 will fail (see Section 2) if a large set of variables is considered in the statistical analysis. ...
... The present study relies on a convenience sample which is not nationally representative and our comparisons of the personal relevance of news topics should be interpreted accordingly. In this respect, however, we repeat the consideration provided by Pennycook and Rand [8]: In a study on news truth discrimination and overall belief, obtaining a nationally representative sample may be less important than sampling from frequent internet and social media users, who are most likely to be exposed to FN. According to this point of view, therefore, the present sampling method has several advantages relative to a nationally representative sample. ...
Article
The massive spread of fake news (FN) requires a better understanding of both risks and protective psychological factors underlying vulnerability to misinformation. Prior studies have mostly dealt with news that does not bear any direct personal relevance to participants. Here, we ask whether high-stakes news topics may decrease vulnerability to FN. Data were collected during the national lockdown in Italy (COVID-19 news) and one year later (political news). We compared truth discrimination and overall belief for true news (TN) and FN concerning COVID-19 and political topics. Our findings indicate that psychological risk and protective factors have similar effects on truth discrimination, regardless of whether the news topic is highly or minimally personally relevant. However, we found different effects of psychological factors on overall belief for high and low personal relevance. These results suggest that, given a high level of cognitive dissonance, individuals tend to rely on proximal or emotional sources of information. In summary, our study underscores the importance of understanding the psychological factors that contribute to vulnerability to misinformation, particularly in high-stakes news contexts.
... The phenomenon of fake news and its exponential growth is increasingly attracting scholars and digital marketing practitioners, who are overwhelmed by the size of the phenomenon and baffled by the difficulty of finding ways to tell reliable and false information apart in the digital era (Cohen, 2017; Pennycook & Rand, 2018; Colliander, 2019; Talwar, Dhir, Singh, Virk & Salo, 2020; Tejedor, Portales-Oliva, Carniel-Bugs & Cervi, 2021). Despite the emergence and use of verifiers that analyze the nature of the information (Chen, Luo, Hu, Zhao & Zhang, 2021), some based on artificial intelligence algorithms, these are not infallible and allow many loopholes for fake news to slip through (Sáez-Ortuño, Forgas-Coll, Huertas-Garcia & Sánchez-García, 2023). ...
... Third, the proliferation of fake news or reviews made to distort the market and whose origin can come from consumers, companies, and competitors (Moon et al., 2021). Three lines of action are being proposed: (i) the development of mechanisms for their detection (Hooi et al., 2016;Pennycook & Rand, 2018); (ii) the analysis of their consequences for the market and society (Talwar et al., 2020); and (iii) the detection and analysis of the motivations that lead users both to generate fake news and to accept this news in the decision-making process (Horn & Veermans, 2019;Tejedor et al., 2021). ...
Article
Purpose: Social media has changed the way users interact with each other, and has become an important part of numerous lives. However, there is an increasing flow of implausible content circulating on social media, which points to the need for some categorization and regulation. This study examines how the proliferation of fake news on social media impacts students and their choice of university. To answer this question, market research was conducted on the antecedents that affect the acceptance of fake news among university students when choosing to study for a master's degree that will help them in their professional careers.
Design/methodology/approach: The study used a quantitative method. A parsimonious model of causal relationships was proposed based on scales taken from the literature, assessed by a convenience sample of students, and adjusted by structural equation modelling (SEM).
Findings: Results show that the parsimonious model explains 35% of the variance in fake news acceptance and that media dependency (ISMD) and parasocial interaction (PSI) are the main direct effects, while perceived media richness (PR) has a significant indirect influence on the attitude towards fake news and, consequently, on its acceptance. Furthermore, fake news literacy plays a moderating role with the most relevant source of influence, SNS dependency.
Research limitations: A convenience sample was used, and a parsimonious model with three antecedent factors and one mediating factor was proposed. Other social factors could have been considered, including multicultural variables.
Practical implications: The results point to students' expressed dependence on social networks as the main factor explaining their attitude towards fake news, negatively moderated by students' level of knowledge about the importance of this phenomenon in social networks. It is therefore relevant to promote knowledge about this phenomenon among students to reduce its influence on decision-making processes.
Originality/value: This paper provides a novel context for the study of the proliferation of fake news on social networks: the process of choosing a university by students addicted to the news circulating on social media.
... The literature on how cognitive function influences people's interaction with (mis/dis) information (see Pantazi, Hale and Klein [9] for a review), suggests that higher cognitive ability and analytical thinking are linked to an increased propensity to detect and resist misinformation [26][27][28]. Those with lower cognitive ability are also found to be less likely to adjust their judgment after they learn that important information on which their initial evaluation was based is incorrect [29] and are more susceptible to false memories arising from exposure to fabricated news stories [30]. ...
... In short, we compare the effects of cognitive ability on voting behaviour for individuals exposed to the same contextual factors and information sets. Household variation in cognitive ability and voting behaviour is therefore likely to expose variation in how individuals process information sets [26][27][28] and interact and interpret their geographical context [54]. Column 4 of Table 2 reports the estimated coefficients of this procedure. ...
Article
On June 23rd, 2016 the UK voted to leave the European Union. The period leading up to the referendum was characterized by a significant volume of misinformation and disinformation. Existing literature has established the importance of cognitive ability in processing and discounting (mis/dis) information in decision making. We use a dataset of couples within households from a nationally representative UK survey to investigate the relationship between cognitive ability and the propensity to vote Leave / Remain in the 2016 UK referendum on European Union membership. We find that a one standard deviation increase in cognitive ability, all else being equal, increases the likelihood of a Remain vote by 9.7%. Similarly, we find that an increase in partner’s cognitive ability further increases the respondent’s likelihood of a Remain vote (7.6%). In a final test, restricting our analysis to couples who voted in a conflicting manner, we find that having a cognitive ability advantage over one’s partner increases the likelihood of voting Remain (10.9%). An important question then becomes how to improve individual and household decision making in the face of increasing amounts of (mis/dis) information.
... As this relates to susceptibility to misleading information, Littrell and Fugelsang (2023) found that those who are most receptive to bullshit are grossly overconfident in their ability to detect it, and overconfidence is associated with reduced engagement in reflective thinking (Littrell et al., 2020). Moreover, Pennycook and Rand argued that receptivity to bullshit and fake news largely results from a lack of reflective engagement (Pennycook & Rand, 2019). Given these findings, it is reasonable to hypothesize that reflective thinking may be a fitting candidate for an intervention task to ameliorate an individual's receptivity to bullshit. ...
... At first glance, the results reported here may appear to somewhat conflict with "miserly processing" accounts, which assert that falling for bullshit and fake news stems from a tendency to rely on quicker, lower-effort intuitive thinking processes while avoiding engagement in more deliberative, effortful reflection when evaluating information (Bago et al., 2020; Pennycook & Rand, 2019). These claims often draw support from studies correlating receptivity to various types of misleading information with performance on the CRT (Frederick, 2005), a cognitive task purported to measure "engagement in cognitive reflection." ...
Article
Across three studies (N = 659), we present evidence that engaging in explanatory reflection reduces receptivity to pseudo-profound bullshit but not scientific bullshit or fake news. Additionally, ratings for pseudo-profound and scientific bullshit attributed to authoritative sources were significantly inflated compared to bullshit from anonymous sources. These findings provide initial evidence that asking people to reflect on why they find certain statements meaningful (or not) helps reduce receptivity to some types of misinformation but not others. Moreover, the appeal of misleading claims spread by perceived experts may be largely immune to the putative benefits of interventions that rely solely on reflective thinking. Taken together, our results suggest that while encouraging the public to be more reflective can certainly be helpful as a general rule, the effectiveness of this strategy in reducing the persuasiveness of misleading or otherwise epistemically-suspect claims is limited by the type of claims being evaluated.
... Social media users are simultaneously exposed to multiple cues (e.g. the source, number of likes, message content), so they often minimise the cognitive workload involved in processing the content to arrive at decisions and may accept it at face value by relying on heuristic cues (Gabielkov et al., 2016). The existence of such mental heuristics can make it difficult to counter misinformation on social media, as people who use heuristic information processing have been shown to be less likely to detect fake news (Pennycook and Rand, 2019). ...
... These researchers found that fake news publishers incorporate cues that are designed to trigger heuristic processing, thus making audiences more susceptible to false content, particularly within the online social media environment. Other studies have also found a significant positive relationship between heuristic processing, often referred to as a 'lack of reasoning', and the propensity to be deceived by misinformation, such that a lack of deliberation hinders an individual's ability to distinguish between true and false news headlines (Bago et al., 2020;Pennycook and Rand, 2019). The significance of reducing heuristic processing has been incorporated into the design of misinformation interventions, with recent research by Lee and Bissell (2023) underscoring the importance of engaging users by commenting on misinformation posts as a strategy to discourage heuristic processing and foster critical evaluation of misleading content. ...
Article
Objectives: This study aimed to examine the joint effect of two core message elements – authoritative source and argument strength – in correction tweets to counter conspiratorial misinformation about the measles, mumps and rubella (MMR) vaccine.
Design/Method: An online experiment with US residents (N = 404) was conducted in a 2 (authoritative correction source: layperson vs the US Centers for Disease Control and Prevention [CDC]) × 2 (correction argument strength: weak vs strong) design.
Results: The results indicate that the correction employing strong arguments and a correction provided by the CDC heightened heuristic processing of the corrective information, which in turn increased the perceived credibility of the conspiratorial misinformation. The effect of the CDC correction on heuristic processing was heightened when it contained weak arguments. Notably, user-generated corrections with weak arguments reduced heuristic processing of the information and contributed to reducing the perceived credibility of the misinformation.
Conclusion: Based on the findings, we argue that both communicator- and content-related cues jointly influence how audiences process corrective information. The current study discusses the potency of user-generated social media corrections to counter vaccine misinformation and provides practical implications for how user-generated social media correction can be utilised by health practitioners. Public health organisations should prioritise presenting corrective information in an easily understandable manner, using user-generated content that fosters a sense of connection and engagement with individuals.
... Individuals who are inclined to reason analytically may be more likely to rely on systematic (rather than heuristic) source-monitoring processes and therefore more likely to reject information that simply "feels" true but has no basis in fact. Indeed, several studies have demonstrated that individuals with higher scores on a measure of analytical reasoning style are less likely to believe, share, or form false memories for fake news stories (e.g., Greene & Murphy, 2020; Pennycook & Rand, 2019). Since the preferential formation of false memories for ideologically congruent information may be driven at least in part by a reliance on heuristic processes, more analytical participants might also be expected to be less susceptible to stereotypically consistent fake news stories. ...
... This finding may explain, in part, why Irish participants were less likely to report a memory for a true news story, as they scored significantly lower than their German counterparts on the CRT. Future research could explicitly compare the motivated cognition and analytical reasoning accounts of fake news susceptibility in the context of national stereotypes, in line with previous work (Pennycook & Rand, 2019). ...
... People with greater predispositions for need for cognition tend to engage in more systematic processing of information because they prefer to have more confidence in their judgments (Chaiken et al., 1989). Indeed, people who are more likely to cognitively reflect on mediated messages are better at distinguishing misinformation from accurate news (Pennycook & Rand, 2019). When both of these stable, individual differences are considered together, the need for cognition and the need to elaborate point toward dispositional reasoning styles that affect perceptions of public opinion (Nir, 2011) as well as message perceptions. ...
... Nonetheless, partisan beliefs in statements of inaccurate facts have been successfully attenuated using different types of correction message strategies (Wood & Porter, 2019), using accuracy-processing incentives such as monetary payments (Bullock et al., 2015), or even priming individuals to consider the accuracy of a message (Fazio, 2020). In contrast, other researchers contend that a predisposition to misinformation may be better explained by a lack of reasoning rather than motivated reasoning (Pennycook & Rand, 2019). ...
Article
Although research on misinformation and corrections has recently proliferated, no systematic structure has guided the examination of conditions under which misinformation is most likely to be recognized and the potential ensuing effects of recognition. The Misinformation Recognition and Response Model (MRRM) provides a framework for investigating the antecedents to and consequences of misinformation recognition. The model theorizes that how people cope with exposure to misinformation and/or intervention messages is conditioned by both dispositional and situational individual characteristics and is part of a process mediated by informational problem identification, issue motivation, and—crucially—recognition of misinformation. Whether or not recognition is activated then triggers differential cognitive coping strategies which ultimately affect consequent cognitive, affective, and behavioral outcomes. Working to explore the notion of misinformation will be more fruitful if researchers take into consideration how various perspectives fit together and form a larger picture. The MRRM offers guidance on a multi-disciplinary understanding of recognizing and responding to misinformation.
... For instance, analytic thinking has recently received a good deal of attention as a potential driver of engagement with "fake news". A growing body of work suggests that belief in fake news headlines is associated with the tendency to engage in analytic thinking [33,34], such that people who tend to rely on an initial intuitive impression (versus engage in more deliberative reasoning) are more likely to believe false headlines. Importantly, fake news itself is often deeply connected with issues of status threat in that many of the most provocative and influential articles in this genre pertain to a sense of threat to a particular demographic [35]. ...
Article
Status threat (i.e., concern that one’s dominant social group will be undermined by outsiders) is a significant factor in current United States politics. While demographic factors such as race (e.g., Whiteness) and political affiliation (e.g., conservatism) tend to be associated with heightened levels of status threat, its psychological facets have yet to be fully characterized. Informed by a “paranoid” model of American politics, we explored a suite of possible psychological and demographic associates of perceived status threat, including race/ethnicity, political conservatism, analytic thinking, magical ideation, subclinical paranoia, and conspiracy mentality. In a small, quota sample drawn from the United States (N = 300), we found that conspiracy mentality, subclinical paranoia, conservatism, and age were each positively (and uniquely) associated with status threat. In addition to replicating past work linking conservatism to status threat, this study identifies subclinical paranoia and conspiracy mentality as novel psychological associates of status threat. These findings pave the way for future research regarding how and why status threat concerns may become exaggerated in certain individuals, possibly to the detriment of personal and societal wellbeing.
... Research highlights the role of platforms like Twitter in breaking news stories and their potential to shape public opinion. However, concerns about the spread of misinformation and filter bubbles have also been raised (Pennycook & Rand, 2019). Moreover, this new era of social media brings forth numerous challenges, including the proliferation of misinformation, privacy concerns, online harassment, and the potential for digital divides. ...
... The reason may be that the information is presented in a suboptimal way or that the respondents' analytical reasoning skills are limited (cf. Norton and Ariely, 2011; Eriksson and Simpson, 2012; Blaufus et al., 2015; Pennycook and Rand, 2019b; Haghtalab et al., 2021). Several of the studies included in our review also discuss how difficulty connecting information with specific policies may lead to perceptions being updated (indicating that treated respondents do incorporate the information) while preferences and opinions remain unchanged (see e.g. ...
Preprint
Voters hold widespread misperceptions about society, which have been documented in numerous studies. Likewise, voters demonstrate increasing political polarization over policy preferences. Against this backdrop, it is intuitively appealing to think that information provision can help correct misperceptions and create common ground by enhancing the political conversation and bridging political divisiveness. We show, using a general population survey in the United States, that beliefs in the power of information to reduce polarization are indeed widespread. Additionally, we review the extensive literature on misperceptions. To investigate the empirical relationships between misperceptions, information, and political polarization, we exploit the fact that many studies investigate heterogeneities in misperceptions and/or in the reaction to information treatments. Our review shows that existing misperceptions often, but not always, appear to be associated with an increased sense of divisiveness in society; however, information provision is more likely to increase polarization than decrease it. The reason is that different societal groups exhibit differing reactions to truthful and accurate information, in ways that often strengthen, rather than mitigate, existing preference schisms. Thus, the intuitively appealing suggestion that information provision can serve as a powerful tool to reduce polarization is often proven false.
... To investigate the hypothesis regarding the relationship between social media usage growth and Infectious Diseases content volume, data were sourced from various reputable platforms and repositories [6]. The primary data sources used for this study include: ...
Article
In an era marked by information overload and unparalleled connectivity, the 'infodemic', a term coined by the WHO, characterizes the deluge of information, both accurate and inaccurate. This research delves into the multifaceted nature of the infodemic during a global pandemic in the year 2021, with a particular focus on its interaction with social media. Challenges encompass the swift dissemination of information, the absence of regulatory oversight, the replication of misinformation, and the susceptibility of the public [1]. The author’s central hypothesis establishes a connection between heightened social media usage during the Infectious Diseases 2019 era and the proliferation of Infectious Diseases related content. Statistical analysis substantiates this correlation.
... Most bots execute innocuous advertising tasks (Appel et al., 2020). Others, though, pursue socially harmful ends, such as spreading misinformation and encouraging false beliefs (Cresci, 2020; Ferrara et al., 2016; Huang & Carley, 2020; Pacheco et al., 2020; Pennycook & Rand, 2019; Shao et al., 2018; Wu et al., 2019). ...
Article
Objective: We test the effects of three aids on individuals’ ability to detect social bots among Twitter personas: a bot indicator score, a training video, and a warning.
Background: Detecting social bots can prevent online deception. We use a simulated social media task to evaluate three aids.
Method: Lay participants judged whether each of 60 Twitter personas was a human or social bot in a simulated online environment, using agreement between three machine learning algorithms to estimate the probability of each persona being a bot. Experiment 1 compared a control group and two intervention groups, one provided a bot indicator score for each tweet, the other provided a warning about social bots. Experiment 2 compared a control group and two intervention groups, one receiving the bot indicator scores and the other a training video focused on heuristics for identifying social bots.
Results: The bot indicator score intervention improved predictive performance and reduced overconfidence in both experiments. The training video was also effective, although somewhat less so. The warning had no effect. Participants rarely reported willingness to share content for a persona that they labeled as a bot, even when they agreed with it.
Conclusions: Informative interventions improved social bot detection; warning alone did not.
Application: We offer an experimental testbed and methodology that can be used to evaluate and refine interventions designed to reduce vulnerability to social bots. We show the value of two interventions that could be applied in many settings.
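The ground-truth procedure in the bot-detection study above rests on agreement between three machine-learning classifiers. As a toy illustration only (averaging classifier scores is an assumption for this sketch, not necessarily the study's exact aggregation rule):

```python
# Hypothetical sketch: combining the outputs of several independent bot
# classifiers into a single bot-probability estimate by averaging their
# per-persona scores. Score names and values are made up for illustration.

def bot_probability(scores):
    """scores: list of per-classifier probabilities that a persona is a bot."""
    return sum(scores) / len(scores)

# Three classifiers that largely agree a persona is a bot:
print(round(bot_probability([0.9, 0.8, 1.0]), 2))  # → 0.9
```

When the classifiers disagree (e.g., scores of 0.9, 0.2, and 0.3), the averaged estimate is less extreme, which is one way such agreement-based scores can encode uncertainty for participants.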
... These results suggest that source effects are stronger when respondents visit the website and see the full text relative to when they only evaluate the headline/lede of an article. This may explain why previous studies investigating source effects (Austin and Dong, 1994; Jakesch et al., 2019; Pennycook and Rand, 2019b) without the full web page did not find any source cue effects. The format in which source cues are provided matters. ...
Article
Despite broad adoption of digital media literacy interventions that provide online users with more information when consuming news, relatively little is known about the effect of this additional information on the discernment of news veracity in real time. Gaining a comprehensive understanding of how information impacts discernment of news veracity has been hindered by challenges of external and ecological validity. Using a series of pre-registered experiments, we measure this effect in real time. Access to the full article relative to solely the headline/lede and access to source information improves an individual's ability to correctly discern the veracity of news. We also find that encouraging individuals to search online increases belief in both false/misleading and true news. Taken together, we provide a generalizable method for measuring the effect of information on news discernment, as well as crucial evidence for practitioners developing strategies for improving the public's digital media literacy.
... An important question is how we can discourage this kind of sharing of, and belief in, fake news and misinformation (Pennycook & Rand, 2019b). ...
Article
Social media has influenced all fields of life due to its interactivity and freedom from gatekeeping, but these advantages have created immense problems related to the validity of data. Social media platforms are becoming the most influential tool of interaction, and most users spend many hours on different social media networks daily. Diffusion of misinformation, defined as unintentionally sharing and posting false content, has become one of the major problems on social media. The issues associated with misinformation on Facebook and Twitter are focused on picture-, video-, and text-based content. The key objective of the study is to analyze the relationship between user engagement, the continued influence of misinformation, and religious and political intolerance. The model of the continued influence effect of misinformation suggests that social media users continue to spread misinformation even after it has been identified by fact-check tools. A content analysis of political and religious misinformation content identified by fact-check sources (N = 200) highlighted a positive relationship between the variables. The findings highlight the interconnected dynamics between intolerance and the spread of misinformation in the digital age. They suggest that the role of fact-check sources should be increased and that social media users should be literate or mindful while consuming social media.
... Overall, our study suggests that fostering intellectual humility may be an important key to reducing anti-vaccination attitudes for people who score high on conservatism via narrowing their latitude of rejection and/or broadening their latitudes of noncommitment and acceptance within the social judgement theory framework (Sherif & Hovland, 1961). To be more intellectually humble, people can participate in more investigative behaviors (Koetke et al., 2022) and engage in more analytical thinking (Pennycook & Rand, 2019) regarding information about vaccines, particularly when information corresponds to their general world view (Van Bavel & Pereira, 2018). According to Porter and Schumann (2018), cultivating a growth mindset of intelligence may also help people develop intellectual humility. ...
Article
Previous research has consistently found that more political conservatism is related to higher anti-vaccination attitudes. However, little work has investigated how intellectual humility could potentially contribute to this relationship. Employing the social judgment theory of attitude change, we examined whether conservatism could mediate the association between intellectual humility and anti-vaccination attitudes. Participants (N = 1,293; 40.1% female; mean age = 38.23 years, SD = 11.61, age range 18–78) completed a multifaceted measure of intellectual humility, an assessment of four types of anti-vaccination attitudes, and a measure of political orientation. Results from structural equation modeling revealed that decreased levels of most aspects of intellectual humility (i.e., independence of intellect and ego, openness to revising one’s viewpoint, and lack of intellectual overconfidence) are associated with more conservative political views, which in turn are associated with stronger anti-vaccination attitudes, particularly worries about unforeseen future effects, concerns about commercial profiteering, and preference for natural immunity. These findings suggest that intellectual humility could reflect one’s latitude widths, thereby predicting one’s openness to vaccine messaging, and thus may play an important role in addressing anti-vaccination attitudes, especially when politics is involved.
... Lack of funding can stifle innovative research, limiting insights into human behavior, cognition, emotion, and social interaction, and valuable findings regarding issues such as mental health, education, and workplace efficiency may accordingly be underutilized. Research suggests that individuals who utilize more cognitive reflection tend to hold more pro-scientific beliefs across other domains such as biology and astronomy (Pennycook et al., 2022), and are more inclined to reject unsubstantiated claims, including false news stories (Pennycook & Rand, 2019), and stereotypes (Hammond & Cimpian, 2017). ...
Article
Although the field of psychology was not originally approached systematically, the scientific method is now applied in order to evaluate psychological theories. To further the body of knowledge concerning the public perception of psychology as a science, community college students were recruited for the Psi Beta National Research Project questionnaire that included the Psychology as a Science Scale (PAS) and the Cognitive Reflection Test 2 (CRT-2). Participants received the CRT-2 at either the beginning or midway through the questionnaire, depending on the last digit of the participant’s phone number. The prediction that individuals who received the CRT-2 earlier would score differently on the PAS than those who received it later was not supported. Additionally, the prediction that individuals who are more analytical would score higher on the PAS Scale was supported. The aim of this study was to offer supporting data to further the understanding of the influences of cognitive reflection and public perception of psychology as a science. Implications of these findings may include the recognition of the role cognitive reflection may play in decision-making and provide supporting evidence of the potential for correcting detrimental misconceptions of psychology as a science. Suggestions for future directions include focusing on the relationship between CRT-2 and PAS scores and having more than three CRT-like questions to allow for a more sensitive measure of analytical and intuitive responses.
... Existing work has focused on users' susceptibility to various information sources in social media and social networks (Wald et al., 2013;Lee and Lim, 2015;Hoang and Lim, 2016;Albladi and Weir, 2020). There have been few previous analyses of the vulnerability of users to rumors and fake news (Rath et al., 2019;Pennycook and Rand, 2019;Bringula et al., 2021). Rath et al. (2019) proposed a community health assessment model based on the concept of believability derived from computational trust metrics to calculate the vulnerability of nodes and communities to fake news spread. ...
Article
Full-text available
In the age of the infodemic, it is crucial to have tools for effectively monitoring the spread of rampant rumors that can quickly go viral, as well as identifying vulnerable users who may be more susceptible to spreading such misinformation. This proactive approach allows for timely preventive measures to be taken, mitigating the negative impact of false information on society. We propose a novel approach to predict viral rumors and vulnerable users using a unified graph neural network model. We pre-train network-based user embeddings and leverage a cross-attention mechanism between users and posts, together with a community-enhanced vulnerability propagation (CVP) method to improve user and propagation graph representations. Furthermore, we employ two multi-task training strategies to mitigate negative transfer effects among tasks in different settings, enhancing the overall performance of our approach. We also construct two datasets with ground-truth annotations on information virality and user vulnerability in rumor and non-rumor events, which are automatically derived from existing rumor detection datasets. Extensive evaluation results of our joint learning model confirm its superiority over strong baselines in all three tasks: rumor detection, virality prediction, and user vulnerability scoring. For instance, compared to the best baselines based on the Weibo dataset, our model makes 3.8% and 3.0% improvements on Accuracy and MacF1 for rumor detection, and reduces mean squared error (MSE) by 23.9% and 16.5% for virality prediction and user vulnerability scoring, respectively. Our findings suggest that our approach effectively captures the correlation between rumor virality and user vulnerability, leveraging this information to improve prediction performance and provide a valuable tool for infodemic surveillance.
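The cross-attention mechanism between users and posts mentioned in this abstract can be illustrated with a generic scaled dot-product sketch. This is a minimal illustration of the general attention technique, not the authors' model; the names `cross_attention`, `user_vecs`, and `post_vecs` are assumptions for the example.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(user_vecs, post_vecs):
    """For each user embedding (query), attend over post embeddings
    (keys and values) via scaled dot-product attention, producing one
    post-aware representation per user."""
    d = len(post_vecs[0])
    out = []
    for q in user_vecs:
        # Similarity of this user to every post, scaled by sqrt(dim).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in post_vecs]
        weights = softmax(scores)
        # Weighted average of the post vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, post_vecs))
                    for j in range(d)])
    return out
```

In the full model described in the abstract, such user representations would additionally be pre-trained from the network structure and refined by the community-enhanced vulnerability propagation step; those components are beyond this sketch.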
... The negative effects and impact of this phenomenon are widely studied with the aim of mitigating or diminishing the collateral effects that harm social development; however, the strategies adopted at scale by both the media and platform owners (identification, labelling, penalization, restriction, etc.) have not been effective, as several research studies demonstrate that disinformation persists even after false news has been restricted or discredited [5, 6]. ...
Chapter
Full-text available
This study examines the use of Twitter during humanitarian crises and its impact on public opinion. The study analyzed over 3262 tweets related to crisis, war, tragedy, violence, riot, uprising, revolt, destruction, bombing, migration, and refugees from February 2021 to February 2023 from International News Agencies. The study found that Twitter reveals the fragmentation of news consumption patterns on social media, which are shaped by the sources of information users follow and reinforced by the platforms’ algorithms. Furthermore, the study shows that news agencies’ coverage of humanitarian crises is detectably fragmented, and that governments and related organizations influence this coverage and use it for various purposes. The study concludes by recommending that future research expand the analysis to other social media platforms and news agencies, as well as incorporate more advanced techniques for handling misinformation and analyzing the impact of social media on public opinion.
Article
Sharing negative gossip has been found to be pivotal for fostering cooperation in social groups. The positive function gossip serves for groups suggests that gossipers should be rewarded for sharing useful information. In contrast, gossip is commonly perceived negatively, meaning that gossipers incur more social costs than benefits. To solve this puzzle, we argue that whether receivers interpret gossip as stemming from pro‐social versus pro‐self motives shapes their reactions towards gossipers. We conducted a pre‐registered experimental vignette study ( n = 1188) in which participants received negative gossip statements, which we manipulated to reflect either pro‐self or pro‐social motives. Supporting our predictions, receivers were more likely to mistakenly interpret negative pro‐social gossip as stemming from pro‐self motives than vice versa. Nevertheless, receivers with a higher ability to overcome intuition were better able to correctly interpret negative gossip as driven by pro‐self and pro‐social motives. Furthermore, results showed that when receivers interpreted negative gossip as pro‐socially (vs. pro‐selfishly) motivated, they trusted gossipers more and gossip targets less (for behavioral as well as attitudinal measures of trust).
Article
Full-text available
Misinformation harms society by affecting citizens' beliefs and behaviour. Recent research has shown that partisanship and cognitive reflection (i.e. engaging in analytical thinking) play key roles in the acceptance of misinformation. However, the relative importance of these factors remains a topic of ongoing debate. In this registered study, we tested four hypotheses on the relationship between each factor and the belief in statements made by Argentine politicians. Participants (N = 1353) classified fact-checked political statements as true or false, completed a cognitive reflection test, and reported their voting preferences. Using Signal Detection Theory and Bayesian modeling, we found a reliable positive association between political concordance and overall belief in a statement (median = 0.663, CI95 = [0.640, 0.685]), a reliable positive association between cognitive reflection and scepticism (median = 0.039, CI95 = [0.006, 0.072]), a positive but unreliable association between cognitive reflection and truth discernment (median = 0.016, CI95 = [− 0.015, 0.046]) and a negative but unreliable association between cognitive reflection and partisan bias (median = − 0.016, CI95 = [− 0.037, 0.006]). Our results highlight the need to further investigate the relationship between cognitive reflection and partisanship in different contexts and formats. Protocol registration The stage 1 protocol for this Registered Report was accepted in principle on 22 August 2022. The protocol, as accepted by the journal, can be found at: https://doi.org/10.17605/OSF.IO/EBRGC .
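The Signal Detection Theory quantities underlying this kind of analysis (truth discernment versus response bias) can be computed from hit and false-alarm rates. The sketch below is a generic illustration of the standard equal-variance SDT measures; the function name and example rates are assumptions, not values from the study.

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Compute discernment (d') and response bias (criterion c) from
    the rate of rating true statements as true (hits) and the rate of
    rating false statements as true (false alarms)."""
    z = NormalDist().inv_cdf  # probit transform
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# e.g. a participant who accepts 80% of true and 20% of false statements:
d, c = sdt_measures(0.8, 0.2)  # d' ≈ 1.68 (good discernment), c = 0 (no bias)
```

Positive c corresponds to skepticism (a tendency to answer "false"); partisan bias can then be examined by comparing c for politically concordant versus discordant items.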
Article
Do people process information differently on mobile phones compared to computers? We investigate this question by conducting two online field experiments. We randomly assigned participants to use their mobile phones or personal computers (PCs) to process different kinds of information. Study 1 ( N = 116) discovered that participants using mobile phones process emails more efficiently (i.e., spend less time) than those using PCs. Study 2 ( N = 241) extended this to online deceptive content and found that individuals using mobile phones, especially habitual users, are more efficient, but engage in less information processing, are less attentive and less vigilant about misinformation, compared to those using PCs. However, the latter are more likely to succumb to phishing emails by clicking on malicious links. We discuss theoretical implications for information processing across media devices and practical implications for combating misinformation and cybersecurity attacks.
Article
When people use web search engines to find information on debated topics, the search results they encounter can influence opinion formation and practical decision-making with potentially far-reaching consequences for the individual and society. However, current web search engines lack support for information-seeking strategies that enable responsible opinion formation, e.g., by mitigating confirmation bias and motivating engagement with diverse viewpoints. We conducted two preregistered user studies to test the benefits and risks of an intervention aimed at confirmation bias mitigation. In the first study, we tested the effect of warning labels, warning of the risk of confirmation bias, combined with obfuscations, hiding selected search results per default. We observed that obfuscations with warning labels effectively reduce engagement with search results. These initial findings did not allow conclusions about the extent to which the reduced engagement was caused by the warning label (reflective nudging element) versus the obfuscation (automatic nudging element). If obfuscation was the primary cause, this would raise concerns about harming user autonomy. We thus conducted a follow-up study to test the effect of warning labels and obfuscations separately. According to our findings, obfuscations run the risk of manipulating behavior instead of guiding it, while warning labels without obfuscations (purely reflective) do not exhaust processing capacities but encourage users to actively choose to decrease engagement with attitude-confirming search results. Therefore, given the risks and unclear benefits of obfuscations and potentially other automatic nudging elements to guide engagement with information, we call for prioritizing interventions that aim to enhance human cognitive skills and agency instead.
Article
As social media are increasingly integrated into our lives, scholars have examined how social media use can inform beliefs about politics and science. This study considers the political implications of following lifestyle influencers and their aspirational content in social media. Aspirational social media use may shape political attitudes and beliefs, even when not explicitly political. Using a two-wave survey of American adults ( N = 1,421), this study examines whether aspirational use of social media is associated with anti-expert attitudes and inaccurate beliefs about politicized science. Data indicate that aspirational social media use is associated with anti-intellectualism and holding more inaccurate beliefs, but not with overall distrust of science. These relationships are moderated by political affiliation, so that the attitudes and beliefs of Democrats and Republicans are similar at high levels of aspirational social media use. The results highlight the need to better understand the political implications of different types of social media use, including seemingly apolitical social media.
Article
Full-text available
We reviewed 555 papers published from 2016–2022 that presented misinformation to participants. We identified several trends in the literature—increasing frequency of misinformation studies over time, a wide variety of topics covered, and a significant focus on COVID-19 misinformation since 2020. We also identified several important shortcomings, including overrepresentation of samples from the United States and Europe and excessive emphasis on short-term consequences of brief, text-based misinformation. Most studies examined belief in misinformation as the primary outcome. While many researchers identified behavioural consequences of misinformation exposure as a pressing concern, we observed a lack of research directly investigating behaviour change.
Article
Research on online misinformation has evolved rapidly, but organizing its results and identifying open research questions is difficult without a systematic approach. We present the Online Misinformation Engagement Framework, which classifies people’s engagement with online misinformation into four stages: selecting information sources, choosing what information to consume or ignore, evaluating the accuracy of the information and/or the credibility of the source, and judging whether and how to react to the information (e.g., liking or sharing). We outline entry points for interventions at each stage and pinpoint the two early stages—source and information selection—as relatively neglected processes that should be addressed to further improve people’s ability to contend with misinformation.
Chapter
In this study, we examine whether perceived news credibility is affected when reading news in a foreign language. In addition, we investigate whether a possible effect might be the result of (a) the attenuation of emotional responses in a foreign language and whether (b) it affects individuals depending on their need for cognition. In an online experimental study with N = 134 participants, we presented a news article either in the participants’ native language or in a foreign language. Controlling for individuals’ need for cognition, we assessed participants’ emotional reactions and their perceived credibility. Results indicate that, for participants with a high need for cognition, the native language article was rated as more credible than the foreign language article. Participants with a low need for cognition perceived the foreign news article as similarly credible as compared to the native news. The language condition did not affect emotional responses.
Book
Full-text available
Conspiracy theories are not just a list of bizarre beliefs; they reveal a great deal about "our most secret self". Few things are more seductive than a ready-made, reassuring opinion, no matter how absurd. Like ingenious crutches, conspiracist beliefs serve to prop up our faltering progress along a path marked by pandemics, wars, and climate change. Complottisti vulnerabili weaves together the themes of reasoning biases, self-deception, and basic emotional processes, drawing on data from cognitive, clinical, and social psychology and from infant research. The aim is to illustrate the "perfect recipe" that feeds the conspiracist mentality. Existential insecurity and generalized anxiety are the first ingredient. A narcissistic bent in personality development and conditions of isolation and social frustration are the next two, while the fourth is the need for social recognition. The latter accounts for an element indispensable to any discussion of conspiracism: the group. It is within communities, whether real or virtual, that conspiracist dynamics take on the forms we witness today. Previously unpublished cases from therapy sessions offer an opportunity to further define the complexity of the conspiracist phenomenon and to distinguish it clearly, despite some obvious analogies, from forms of individual and collective delusion.
Article
Full-text available
The boom experienced by fake news in recent years has generated new professional profiles, such as fact-checkers, who seek solutions to a problem that affects the credibility of the media. The objective of this study is to characterize this new professional profile, analyze the skills and abilities most in demand for these functions, and reflect on whether this specialty represents a new career path in the labor market. To achieve this purpose, the independent Spanish data verification projects Maldita.es, Newtral Media Audiovisual, and Verificat are studied through different qualitative techniques, such as in-depth semi-structured interviews with their co-founders or managers and the analysis of web content and social media. This combination of techniques made it possible to draw conclusions and provide examples of interest to the research. The data show that there is a hybridization of profiles and a cross-section of knowledge, skills, and attitudes around information verifiers in Spain, who have competencies in current technologies, visualization, and database management.
Article
Predictable polarization is everywhere: we can often predict how people’s opinions, including our own, will shift over time. Extant theories either neglect the fact that we can predict our own polarization, or explain it through irrational mechanisms. They needn’t. Empirical studies suggest that polarization is predictable when evidence is ambiguous, that is, when the rational response is not obvious. I show how Bayesians should model such ambiguity and then prove that—assuming rational updates are those which obey the value of evidence—ambiguity is necessary and sufficient for the rationality of predictable polarization. The main theoretical result is that there can be a series of such updates, each of which is individually expected to make you more accurate, but which together will predictably polarize you. Polarization results from asymmetric increases in accuracy. This mechanism is not only theoretically possible, but empirically plausible. I argue that cognitive search—searching a cognitively accessible space for a particular item—often yields asymmetrically ambiguous evidence, I present an experiment supporting its polarizing effects, and I use simulations to show how it can explain two of the core causes of polarization: confirmation bias and the group polarization effect.
Article
Full-text available
The pictures of the US Capitol attack, on January 6, 2021, represent a before and after in a country marked by the culture of political polarization. Following a presidential campaign based on misinformation and accusations of electoral fraud by Republican candidate Donald Trump, polarization reached a peak, producing a climate of social rupture. Faced with this, the Democratic candidate and winner of the elections, Joe Biden, projected a discourse of institutional stability and legality as a strategy before public opinion. Two years later, the abrupt division of the US electorate is evident, with a significant percentage of Republican voters questioning the legitimacy of the electoral process. The objective of this research is to identify the strategies of political polarization deployed by Donald Trump and Joe Biden on Twitter in the 2020-2021 presidential transition period, as well as the public’s response. Based on a general sample of 1,060 tweets, a comparative content analysis methodology with a triple approach (quantitative-qualitative-discursive) is applied, based on the study of the themes, emotions, and virality of the messages of both political leaders. The results confirm that Trump’s discourse is defined by polarization, misinformation, and attacks on the democratic system, relegating news of his presidential administration to the background in the last months of his term. On the contrary, Biden avoids confrontation and reinforces his legitimacy as president-elect by announcing management measures of the future government. The engagement of the social audience on Twitter is also analyzed, showing a position of support for the winner of the elections.
Article
Introduction: Although the US Government considers threats of misinformation, disinformation, and mal-information to rise to the level of terrorism, little is known about service members’ experiences with disinformation in the military context. We examined soldiers’ perceptions of disinformation impact on the Army and their units. We also investigated associations between disinformation perceptions and soldiers’ sociodemographic characteristics, reported use of fact-checking, and perceptions of unit cohesion and readiness. Methods: Active-duty soldiers (N = 19,465) across two large installations in the Southwest US completed an anonymous online survey. Results: Sixty-six percent of soldiers agreed that disinformation has a negative impact on the Army. Thirty-three percent of soldiers perceived disinformation as a problem in their unit. Females were more likely to agree that disinformation has a negative impact on the Army and is a problem in their unit. Higher military rank was associated with lower odds of agreeing that disinformation is a problem in units. Most soldiers were confident about their ability to recognize disinformation (62%) and reported using fact-checking resources (53%), and these factors were most often endorsed by soldiers who agreed that disinformation is a problem for the Army and their unit. Soldiers’ perceptions of unit cohesion and readiness were negatively associated with the perception that disinformation is a problem in their unit. Conclusion: While the majority of soldiers viewed disinformation as a problem across the Army, fewer perceived it as problematic within their units. Higher levels of reported fact-checking were most evident among those who perceived disinformation as a problem, suggesting that enhancing awareness of the problem of disinformation alone could help mitigate its deleterious impact.
Perceptions of disinformation problems within units were associated with soldiers’ perceptions of lower unit cohesion and readiness, highlighting misinformation, disinformation, and mal-information’s impact on force readiness. Limitations and future directions are discussed.
Article
Social media companies have come under increasing pressure to remove misinformation from their platforms, but partisan disagreements over what should be removed have stymied efforts to deal with misinformation in the United States. Current explanations for these disagreements center on the “fact gap”—differences in perceptions about what is misinformation. We argue that partisan differences could also be due to “party promotion”—a desire to leave misinformation online that promotes one’s own party—or a “preference gap”—differences in internalized preferences about whether misinformation should be removed. Through an experiment where respondents are shown false headlines aligned with their own or the opposing party, we find some evidence of party promotion among Democrats and strong evidence of a preference gap between Democrats and Republicans. Even when Republicans agree that content is false, they are half as likely as Democrats to say that the content should be removed and more than twice as likely to consider removal as censorship.
Article
This study investigates fact-checking effectiveness in reducing belief in misinformation across various types of fact-check sources (i.e., professional fact-checkers, mainstream news outlets, social media platforms, artificial intelligence, and crowdsourcing). We examine fact-checker credibility perceptions as a mechanism to explain variance in fact-checking effectiveness across sources, while taking individual differences into account (i.e., analytic thinking and alignment with the fact-check verdict). An experiment with 859 participants revealed few differences in effectiveness across fact-checking sources but found that sources perceived as more credible are more effective. Indeed, the data show that perceived credibility of fact-check sources mediates the relationship between exposure to fact-checking messages and their effectiveness for some source types. Moreover, fact-checker credibility moderates the effect of alignment on effectiveness, while analytic thinking is unrelated to fact-checker credibility perceptions, alignment, and effectiveness. Other theoretical contributions include extending the scope of the credibility-persuasion association and the MAIN model to the fact-checking context, and empirically verifying a critical component of the two-step motivated reasoning model of misinformation correction.
Article
Successful cooperation is tightly linked to individuals’ beliefs about their interaction partners, the decision setting, and existing norms, perceptions, and values. This article reviews and integrates findings from judgment and decision-making, social and cognitive psychology, political science, and economics, developing a systematic overview of the mechanisms underlying motivated cognition in cooperation. We elaborate on how theories and concepts related to motivated cognition developed in various disciplines define the concept and describe its functionality. We explain why beliefs play such an essential role in cooperation, how they can be distorted, and how this fosters or harms cooperation. We also highlight how individual differences and situational factors change the propensity to engage in motivated cognition. In the form of a construct map, we provide a visualization of the theoretical and empirical knowledge structure regarding the role of motivated cognition, including its many interdependencies, feedback loops, and moderating influences. We conclude with a brief suggestion for a future research agenda based on this compiled evidence.
Article
Purpose: The insidious proliferation of online misinformation represents a significant societal problem. With a wealth of research dedicated to the topic, it is still unclear what determines fake news sharing. This paper comparatively examines fake and accurate news sharing in a novel experimental setting that manipulates news about terrorism. Design/methodology/approach: The authors follow an extended version of the uses-and-gratification framework for news sharing, complemented by variables commonly employed in fake news rebuttal studies. Findings: Logistic regression and classification trees revealed worry about the topic, media literacy, information-seeking and conservatism as significant predictors of willingness to share news online. No significant association was found for general analytical thinking, journalism skepticism, conspiracy ideation, uses-and-gratification motives or pass-time coping strategies. Practical implications: The current results broaden and expand the literature examining beliefs in and sharing of misinformation, highlighting the role of media literacy in protecting the public against the spread of fake news. Originality/value: This is, to the authors’ knowledge, the first study to integrate a breadth of theoretically and empirically driven predictors of fake news sharing within a single experimental framework. Peer review: The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-12-2022-0693
Article
Full-text available
Research suggests that minority-group members sometimes are more susceptible to misinformation. Two complementary studies examined the influence of perceived minority status on susceptibility to misinformation and conspiracy beliefs. In study 1 (n = 2140), the perception of belonging to a minority group, rather than factually belonging to it, was most consistently related with an increased susceptibility to COVID-19 misinformation across national samples from the USA, the UK, Germany and Poland. Specifically, perceiving that one belongs to a gender minority group particularly predicted susceptibility to misinformation when participants factually did not belong to it. In pre-registered study 2 (n = 1823), an experiment aiming to manipulate the minority perceptions of men failed to influence conspiracy beliefs in the predicted direction. However, pre-registered correlational analyses showed that men who view themselves as a gender minority were more prone to gender conspiracy beliefs and exhibited a heightened conspiracy mentality. This effect was correlationally mediated by increased feelings of system identity threat, collective narcissism, group relative deprivation and actively open-minded thinking. In particular, the perception of being a minority in terms of power and influence (rather than numbers) was linked to these outcomes. We discuss limitations and practical implications for countering misinformation.
Article
Full-text available
Why do consumers tolerate low quality news when they do not tolerate low quality cars? Amidst a glut of dubious information driven by social media, websites, and news outlets, this paper provides an economic framework about modern news deficiencies. High-speed production and consumption of facts and opinions overwhelm individual cognitive processing limits. With Masherg’s Law, the flood of low-quality information drowns out the trickle of high-quality information. Consumer and social economic benefit are zero, with all economic benefit going to the supplier. Information asymmetry contributes to a complex relationship between the consumer and the supplier. The principal-agent model shows an alternative view of that relationship. From a systems view, there is no single cause. The free news market has failed due to seven factors: I. technology enables fast, low-cost distribution such as television, websites, or social media; II. existing laws protect distributors from content liability, like Section 230 in the United States; III. facts are expensive and their value fades quickly in the 24-hour news cycle; IV. opinions have legal protection; V. consumers think news is free due to supplier secondary payment mechanisms; VI. consumers cannot process the immense volume of news due to Masherg’s Law; and VII. suppliers know more about consumers than consumers do about themselves. These seven factors converge, resulting in the oddity of consumer perceived zero-cost infinite supply. This floods the market with non-perceived low quality products. Without a consumer feedback loop on price, the market does not correct and becomes flooded with more inexpensively produced low quality news, resulting in economic market failure.
Chapter
This chapter explores the use of artificial intelligence (AI) in market research and its potential impact on the field. It discusses how AI can be used for data collection, filtering, analysis, and prediction, and how it can help companies develop more accurate predictive models and personalized marketing strategies. It highlights the drawbacks of AI, such as the need to ensure diverse and unbiased data and the importance of monitoring and interpreting results, and covers various AI techniques used in market research, including machine learning, natural language processing, computer vision, deep learning, and rule-based systems. The applications of AI in marketing research are also discussed, including sentiment analysis, market segmentation, predictive analytics, and adaptive recommendation engines and personalization systems. The chapter concludes that while AI presents many benefits, it also presents several challenges related to data quality and accuracy, algorithmic biases and fairness issues, as well as ethical considerations that need to be carefully considered.
Article
Polarization has been rising in the United States of America for the past few decades and now poses a significant—and growing—public-health risk. One of the signature features of the American response to the COVID-19 pandemic has been the degree to which perceptions of risk and willingness to follow public-health recommendations have been politically polarized. Although COVID-19 has proven more lethal than any war or public-health crisis in American history, the deadly consequences of the pandemic were exacerbated by polarization. We review research detailing how every phase of the COVID-19 pandemic has been polarized, including judgments of risk, spatial distancing, mask wearing, and vaccination. We describe the role of political ideology, partisan identity, leadership, misinformation, and mass communication in this public-health crisis. We then assess the overall impact of polarization on infections, illness, and mortality during the pandemic; offer a psychological analysis of key policy questions; and identify a set of future research questions for scholars and policy experts. Our analysis suggests that the catastrophic death toll in the United States was largely preventable and due, in large part, to the polarization of the pandemic. Finally, we discuss implications for public policy to help avoid the same deadly mistakes in future public-health crises.
Preprint
Full-text available
How good are people at judging the veracity of news? We conducted a systematic literature review and pre-registered meta-analysis of 232 effect sizes from 53 experimental articles evaluating accuracy ratings of true and false news ($N_{participants}$ = 104,064 from 30 countries across 6 continents). We found that people rated true news as more accurate than false news (Cohen’s d = 1.26, [1.13, 1.39]) and were better at rating false news as false than at rating true news as true (Cohen’s d = 0.35, [0.25, 0.44]). In other words, participants were able to discern true from false news, and erred on the side of skepticism rather than credulity. The political concordance of the news had no effect on discernment, but participants were more skeptical of politically discordant news. These findings lend support to crowdsourced fact-checking initiatives, and suggest that, to improve discernment, there is more room to increase the acceptance of true news than to reduce the acceptance of false news.
Article
In recent years, industry leaders and researchers have proposed to use technical provenance standards to address visual misinformation spread through digitally altered media. By adding immutable and secure provenance information such as authorship and edit date to media metadata, social media users could potentially better assess the validity of the media they encounter. However, it is unclear how end users would respond to provenance information, or how to best design provenance indicators to be understandable to laypeople. We conducted an online experiment with 595 participants from the US and UK to investigate how provenance information altered users' accuracy perceptions and trust in visual content shared on social media. We found that provenance information often lowered trust and caused users to doubt deceptive media, particularly when it revealed that the media was composited. We additionally tested conditions where the provenance information itself was shown to be incomplete or invalid, and found that these states have a significant impact on participants' accuracy perceptions and trust in media, leading them, in some cases, to disbelieve honest media. Our findings show that provenance, although enlightening, is still not a concept well-understood by users, who confuse media credibility with the orthogonal (albeit related) concept of provenance credibility. We discuss how design choices may contribute to provenance (mis)understanding, and conclude with implications for usable provenance systems, including clearer interfaces and user education.
Article
Full-text available
Objective: Fake news represents a particularly egregious and direct avenue by which inaccurate beliefs have been propagated via social media. We investigate the psychological profile of individuals who fall prey to fake news. Method: We recruited 1,606 participants from Amazon's Mechanical Turk for three online surveys. Results: The tendency to ascribe profundity to randomly generated sentences – pseudo-profound bullshit receptivity – correlates positively with perceptions of fake news accuracy, and negatively with the ability to differentiate between fake and real news (media truth discernment). Relatedly, individuals who overclaim their level of knowledge also judge fake news to be more accurate. We also extend previous research indicating that analytic thinking correlates negatively with perceived accuracy by showing that this relationship is not moderated by the presence/absence of the headline's source (which has no effect on accuracy), or by familiarity with the headlines (which correlates positively with perceived accuracy of fake and real news). Conclusion: Our results suggest that belief in fake news may be driven, to some extent, by a general tendency to be overly accepting of weak claims. This tendency, which we refer to as reflexive open-mindedness, may be partly responsible for the prevalence of epistemically suspect beliefs writ large.
Article
Full-text available
The 2016 U.S. presidential election brought considerable attention to the phenomenon of “fake news”: entirely fabricated and often partisan content that is presented as factual. Here we demonstrate one mechanism that contributes to the believability of fake news: fluency via prior exposure. Using actual fake-news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this “illusory truth effect” for fake-news headlines occurs despite a low level of overall believability and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem. It is interesting, however, that we also found that prior exposure does not impact entirely implausible statements (e.g., “The earth is a perfect square”). These observations indicate that although extreme implausibility is a boundary condition of the illusory truth effect, only a small degree of potential plausibility is sufficient for repetition to increase perceived accuracy. As a consequence, the scope and impact of repetition on beliefs is greater than has been previously assumed.
Article
Full-text available
Both liberals and conservatives accuse their political opponents of partisan bias, but is there empirical evidence that one side of the political aisle is indeed more biased than the other? To address this question, we meta-analyzed the results of 51 experimental studies, involving over 18,000 participants, that examined one form of partisan bias—the tendency to evaluate otherwise identical information more favorably when it supports one’s political beliefs or allegiances than when it challenges those beliefs or allegiances. Two hypotheses based on previous literature were tested: an asymmetry hypothesis (predicting greater partisan bias in conservatives than in liberals) and a symmetry hypothesis (predicting equal levels of partisan bias in liberals and conservatives). Mean overall partisan bias was robust (r = .245), and there was strong support for the symmetry hypothesis: Liberals (r = .235) and conservatives (r = .255) showed no difference in mean levels of bias across studies. Moderator analyses reveal this pattern to be consistent across a number of different methodological variations and political topics. Implications of the current findings for the ongoing ideological symmetry debate and the role of partisan bias in scientific discourse and political conflict are discussed.
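The pooled effect sizes reported above (e.g., r = .245 overall) are typically obtained by converting each study's correlation to Fisher's z, averaging with inverse-variance weights, and back-transforming. A minimal fixed-effect sketch of that procedure, using made-up study values rather than the meta-analysis's actual data:

```python
import math

def pool_correlations(studies):
    """Fixed-effect meta-analytic pooling of Pearson r values via
    Fisher's z transform, weighting each study by n - 3 (the inverse
    of the sampling variance of z). `studies` is a list of (r, n)
    pairs; the values used below are illustrative only."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)        # Fisher z transform of r
        w = n - 3                # inverse-variance weight
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform pooled z to r

# Hypothetical studies (r, n) -- not data from the meta-analysis itself
studies = [(0.20, 150), (0.30, 400), (0.25, 250)]
print(round(pool_correlations(studies), 3))
```

Random-effects models used in practice additionally estimate between-study heterogeneity, but the weighting-and-back-transform core is the same.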
Article
Full-text available
Addressing fake news requires a multidisciplinary effort
Article
Full-text available
What can be done to combat political misinformation? One prominent intervention involves attaching warnings to headlines of news stories that have been disputed by third-party fact-checkers. Here we demonstrate a hitherto unappreciated potential consequence of such a warning: an implied truth effect, whereby false headlines that fail to get tagged are considered validated and thus are seen as more accurate. With a formal model, we demonstrate that Bayesian belief updating can lead to such an implied truth effect. In Study 1 (n = 5,271 MTurkers), we find that although warnings do lead to a modest reduction in perceived accuracy of false headlines relative to a control condition (particularly for politically concordant headlines), we also observed the hypothesized implied truth effect: the presence of warnings caused untagged headlines to be seen as more accurate than in the control. In Study 2 (n = 1,568 MTurkers), we find the same effects in the context of decisions about which headlines to consider sharing on social media. We also find that attaching verifications to some true headlines—which removes the ambiguity about whether untagged headlines have not been checked or have been verified—eliminates, and in fact slightly reverses, the implied truth effect. Together these results contest theories of motivated reasoning while identifying a potential challenge for the policy of using warning tags to fight misinformation—a challenge that is particularly concerning given that it is much easier to produce misinformation than it is to debunk it. This paper was accepted by Elke Weber, judgment and decision making.
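The implied truth effect described above follows directly from Bayesian updating: if fact-checkers tag only some false headlines and never tag true ones, the absence of a tag itself becomes evidence of truth. A minimal sketch of that logic, with an assumed 50/50 prior and an assumed 30% tagging coverage (illustrative numbers, not the paper's model parameters):

```python
def posterior_true_given_untagged(prior_true: float, coverage: float) -> float:
    """P(headline true | no warning tag), assuming fact-checkers tag
    false headlines with probability `coverage` and never tag true ones."""
    p_untagged_given_true = 1.0              # true headlines are never tagged
    p_untagged_given_false = 1.0 - coverage  # false headlines escape tagging
    num = p_untagged_given_true * prior_true
    den = num + p_untagged_given_false * (1.0 - prior_true)
    return num / den

prior = 0.5
print(posterior_true_given_untagged(prior, 0.0))  # no tagging: posterior equals prior
print(posterior_true_given_untagged(prior, 0.3))  # implied truth: posterior rises above 0.5
```

Verifying some true headlines, as in Study 2, removes the ambiguity: an untagged headline then signals "unchecked" rather than "checked and passed", which is why the effect disappears in that condition.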
Article
Full-text available
Significance Public opinion toward some science and technology issues is polarized along religious and political lines. We investigate whether people with more education and greater science knowledge tend to express beliefs that are more (or less) polarized. Using data from the nationally representative General Social Survey, we find that more knowledgeable individuals are more likely to express beliefs consistent with their religious or political identities for issues that have become polarized along those lines (e.g., stem cell research, human evolution), but not for issues that are controversial on other grounds (e.g., genetically modified foods). These patterns suggest that scientific knowledge may facilitate defending positions motivated by nonscientific concerns.
Article
Full-text available
The Cognitive Reflection Test (CRT) is a widely used measure of the propensity to engage in analytic or deliberative reasoning in lieu of gut feelings or intuitions. CRT problems are unique because they reliably cue intuitive but incorrect responses and, therefore, appear simple among those who do poorly. By virtue of being comprised of so-called “trick-problems” that, in theory, could be discovered as such, it is commonly held that the predictive validity of the CRT is undermined by prior experience with the task. Indeed, recent studies show that people who have previous experience with the CRT score higher on the test. Naturally, however, it is not obvious that this actually undermines the predictive validity of the test. Across six studies with ~2500 participants and seventeen variables of interest (e.g., religious belief, bullshit receptivity, smartphone usage, susceptibility to heuristics and biases, numeracy), we did not find a single case where the predictive power of the CRT was significantly undermined by repeated exposure. This was despite the fact that we replicated the previously reported increase in accuracy among individuals who report previous experience with the CRT. We speculate that the CRT remains robust after multiple exposures because less reflective (more intuitive) individuals fail to realize that being presented with apparently easy problems more than once confers information about the tasks’ actual difficulty.
Article
Full-text available
Psychologists, neuroscientists, and economists often conceptualize decisions as arising from processes that lie along a continuum from automatic (i.e., “hardwired” or overlearned, but relatively inflexible) to controlled (less efficient and effortful, but more flexible). Control is central to human cognition, and plays a key role in our ability to modify the world to suit our needs. Given its advantages, reliance on controlled processing may seem predestined to increase within the population over time. Here, we examine whether this is so by introducing an evolutionary game theoretic model of agents that vary in their use of automatic versus controlled processes, and in which cognitive processing modifies the environment in which the agents interact. We find that, under a wide range of parameters and model assumptions, cycles emerge in which the prevalence of each type of processing in the population oscillates between 2 extremes. Rather than inexorably increasing, the emergence of control often creates conditions that lead to its own demise by allowing automaticity to also flourish, thereby undermining the progress made by the initial emergence of controlled processing. We speculate that this observation may have relevance for understanding similar cycles across human history, and may lend insight into some of the circumstances and challenges currently faced by our species.
Article
Full-text available
Misinformation can undermine a well-functioning democracy. For example, public misconceptions about climate change can lead to lowered acceptance of the reality of climate change and lowered support for mitigation policies. This study experimentally explored the impact of misinformation about climate change and tested several pre-emptive interventions designed to reduce the influence of misinformation. We found that false-balance media coverage (giving contrarian views equal voice with climate scientists) lowered perceived consensus overall, although the effect was greater among free-market supporters. Likewise, misinformation that confuses people about the level of scientific agreement regarding anthropogenic global warming (AGW) had a polarizing effect, with free-market supporters reducing their acceptance of AGW and those with low free-market support increasing their acceptance of AGW. However, we found that inoculating messages that (1) explain the flawed argumentation technique used in the misinformation or that (2) highlight the scientific consensus on climate change were effective in neutralizing those adverse effects of misinformation. We recommend that climate communication messages should take into account ways in which scientific content can be distorted, and include pre-emptive inoculation messages.
Article
Full-text available
Our smartphones enable—and encourage—constant connection to information, entertainment, and each other. They put the world at our fingertips, and rarely leave our sides. Although these devices have immense potential to improve welfare, their persistent presence may come at a cognitive cost. In this research, we test the “brain drain” hypothesis that the mere presence of one’s own smartphone may occupy limited-capacity cognitive resources, thereby leaving fewer resources available for other tasks and undercutting cognitive performance. Results from two experiments indicate that even when people are successful at maintaining sustained attention—as when avoiding the temptation to check their phones—the mere presence of these devices reduces available cognitive capacity. Moreover, these cognitive costs are highest for those highest in smartphone dependence. We conclude by discussing the practical implications of this smartphone-induced brain drain for consumer decision-making and consumer welfare.
Article
Full-text available
People frequently continue to use inaccurate information in their reasoning even after a credible retraction has been presented. This phenomenon is often referred to as the continued influence effect of misinformation. The repetition of the original misconception within a retraction could contribute to this phenomenon, as it could inadvertently make the “myth” more familiar—and familiar information is more likely to be accepted as true. From a dual-process perspective, familiarity-based acceptance of myths is most likely to occur in the absence of strategic memory processes. We thus examined factors known to affect whether strategic memory processes can be utilized; age, detail, and time. Participants rated their belief in various statements of unclear veracity, and facts were subsequently affirmed and myths were retracted. Participants then re-rated their belief either immediately or after a delay. We compared groups of young and older participants, and we manipulated the amount of detail presented in the affirmative/corrective explanations, as well as the retention interval between encoding and a retrieval attempt. We found that (1) older adults over the age of 65 were worse at sustaining their post-correction belief that myths were inaccurate, (2) a greater level of explanatory detail promoted more sustained belief change, and (3) fact affirmations promoted more sustained belief change in comparison to myth retractions over the course of one week (but not over three weeks). This supports the notion that familiarity is indeed a driver of continued influence effects.
Article
Full-text available
The present research investigated the reason for mixed evidence concerning the relationship between analytic cognitive style (ACS) and political orientation in previous research. Most past research operationalized ACS with the Cognitive Reflection Test (CRT), which has been criticized as relying heavily on numeracy skills, and operationalized political orientation with the single-item self-placement measure, which has been criticized as masking the distinction between social and economic conservatism. The present research recruited an Amazon Mechanical Turk sample and, for the first time, simultaneously employed three separate ACS measures (CRT, CRT2, base-rate conflict problems), a measure of attitudes toward self-critical and reflective thinking (the Actively Open-Minded Thinking Scale; AOT), and separate measures of social and economic conservatism, as well as the standard measure of political orientation. As expected, the total ACS score (combination of the separate measures) was negatively related to social, but not economic, conservatism. However, the CRT by itself was not related to conservatism, in parallel with some past findings, while the two other measures of ACS showed the same pattern as the combined score. Trait reflectiveness (AOT) was related negatively to all measures of political conservatism (social, economic, and general). Results clearly suggest that the conclusion reached regarding the ACS-political orientation relationship depends on the measure(s) used, with the measure most commonly employed in past research (CRT) behaving differently than other measures. Future research must further pursue the implications of the known differences (e.g., reliance on numeracy vs. verbal skills) of ACS measures and distinguish different senses of reflectiveness.
Chapter
Full-text available
Dual-process theories formalize a salient feature of human cognition: We have the capacity to rapidly formulate answers to questions, but we sometimes engage in deliberate reasoning processes before responding. It does not require deliberative thought to respond to the question “what is your name”. It did, however, require some thinking to write this paragraph (perhaps not enough). We have, in other words, two minds that might influence what we decide to do (Evans, 2003; Evans & Frankish, 2009). Although this distinction is acceptable (and, as I’ll argue, essentially irrefutable), it poses serious challenges for our understanding of cognitive architecture. In this chapter, I will outline what I view to be important theoretical groundwork for future dual-process models. I will start with two core premises that I take to be foundational: 1) dual-process theory is irrefutable but falsifiable, and 2) analytic thought has to be triggered by something. I will then use these premises to outline my perspective on what I consider the most substantial challenge for dual-process theorists: We don’t (yet) know what makes us think.
Article
Full-text available
People frequently rely on information even after it has been retracted, a phenomenon known as the continued-influence effect of misinformation. One factor proposed to explain the ineffectiveness of retractions is that repeating misinformation during a correction may inadvertently strengthen the misinformation by making it more familiar. Practitioners are therefore often encouraged to design corrections that avoid misinformation repetition. The current study tested this recommendation, investigating whether retractions become more or less effective when they include reminders or repetitions of the initial misinformation. Participants read fictional reports, some of which contained retractions of previous information, and inferential reasoning was measured via questionnaire. Retractions varied in the extent to which they served as misinformation reminders. Retractions that explicitly repeated the misinformation were more effective in reducing misinformation effects than retractions that avoided repetition, presumably because of enhanced salience. Recommendations for effective myth debunking may thus need to be revised.
Article
Full-text available
The Cognitive Reflection Test (CRT) is a hugely influential problem solving task that measures individual differences in the propensity to reflect on and override intuitive (but incorrect) solutions. The validity of this three-item measure depends on participants being naïve to its materials and objectives. Evidence from 142 volunteers recruited online suggests this is often not the case. Over half of the sample had previously seen at least one of the problems, predominantly through research participation or the media. These participants produced substantially higher CRT scores than those without prior exposure (2.36 vs. 1.48), with the majority scoring at ceiling level. Participants that had previously seen a specific problem (e.g., the bat and ball problem) nearly always solved that problem correctly. These data suggest the CRT may have been widely invalidated. As a minimum, researchers must control for prior exposure to the three problems and begin to consider alternative, extended measures of cognitive reflection.
Article
Full-text available
Previous studies relating low-effort or intuitive thinking to political conservatism are limited to Western cultures. Using Turkish and predominantly Muslim samples, Study 1 found that analytic cognitive style (ACS) is negatively correlated with political conservatism. Study 2 found that ACS correlates negatively with political orientation and with social and personal conservatism, but not with economic conservatism; it also examined other variables that might help to explain this correlation. Study 3 tried to manipulate ACS via two different standard priming procedures in two different samples, but our manipulation checks failed. Study 4 manipulated intuitive thinking style via a cognitive-load manipulation to see whether it enhances conservatism for contextualized political attitudes, but we did not find a significant effect. Overall, the results indicate that social liberals tend to think more analytically than conservatives, and that people's long-term political attitudes may be resistant to experimental manipulations.
Article
Full-text available
Individual differences in the mere willingness to think analytically have been shown to predict religious disbelief. Recently, however, it has been argued that analytic thinkers are not actually less religious; rather, the putative association may be a result of religiosity typically being measured after analytic thinking (an order effect). In light of this possibility, we report four studies in which a negative correlation between religious belief and performance on analytic thinking measures is found when religious belief is measured in a separate session. We also performed a meta-analysis on all previously published studies on the topic along with our four new studies (N = 15,078, k = 31), focusing specifically on the association between performance on the Cognitive Reflection Test (the most widely used individual difference measure of analytic thinking) and religious belief. This meta-analysis revealed an overall negative correlation (r) of -.18, 95% CI [-.21, -.16]. Although this correlation is modest, self-identified atheists (N = 133) scored 18.7% higher than religiously affiliated individuals (N = 597) on a composite measure of analytic thinking administered across our four new studies (d = .72). Our results indicate that the association between analytic thinking and religious disbelief is not caused by a simple order effect. There is good evidence that atheists and agnostics are more reflective than religious believers.
Article
Full-text available
Much research in cognitive psychology has focused on the tendency to conserve limited cognitive resources. The CRT is the predominant measure of such miserly information processing, and also predicts a number of frequently studied decision-making traits (such as belief bias and need for cognition). However, many subjects from common subject populations have already been exposed to the questions, which might add considerable noise to data. Moreover, the CRT has been shown to be confounded with numeracy. To increase the pool of available questions and to address the numeracy confound, we developed and tested the CRT-2. CRT-2 questions appear to rely less on numeracy than the original CRT but appear to measure closely related constructs in other respects. Crucially, substantially fewer subjects from Amazon’s Mechanical Turk have been previously exposed to CRT-2 questions. Though our primary purpose was investigating the CRT-2, we also found that belief bias questions appear suitable as an additional source of new items. Implications and remaining measurement challenges are discussed. © 2016, Society for Judgment and Decision Making. All rights reserved.
Article
Full-text available
Sinayev and Peters (2015; hereafter SP) challenged the common interpretation of the Cognitive Reflection Test (CRT), namely, that the propensity or disposition to think analytically plays an important role in CRT performance (Pennycook et al., 2015b). We discuss recent empirical evidence that supports the claim that the CRT is more than just a measure of numeracy or, more generally, cognitive ability.
Article
Full-text available
Although bullshit is common in everyday life and has attracted attention from philosophers, its reception (critical or ingenuous) has not, to our knowledge, been subject to empirical investigation. Here we focus on pseudo-profound bullshit, which consists of seemingly impressive assertions that are presented as true and meaningful but are actually vacuous. We presented participants with bullshit statements consisting of buzzwords randomly organized into statements with syntactic structure but no discernible meaning (e.g., “Wholeness quiets infinite phenomena”). Across multiple studies, the propensity to judge bullshit statements as profound was associated with a variety of conceptually relevant variables (e.g., intuitive cognitive style, supernatural belief). Parallel associations were less evident among profundity judgments for more conventionally profound (e.g., “A wet person does not fear the rain”) or mundane (e.g., “Newborn babies require constant attention”) statements. These results support the idea that some people are more receptive to this type of bullshit and that detecting it is not merely a matter of indiscriminate skepticism but rather a discernment of deceptive vagueness in otherwise impressive sounding claims. Our results also suggest that a bias toward accepting statements as true may be an important component of pseudo-profound bullshit receptivity.
Article
Full-text available
We review recent evidence revealing that the mere willingness to engage in analytic reasoning as a means to override intuitive “gut feelings” is a meaningful predictor of key psychological outcomes in diverse areas of everyday life. For example, those with a more analytic thinking style are more skeptical about religious, paranormal, and conspiratorial concepts. In addition, analytic thinking relates to having less traditional moral values, making less emotional or disgust-based moral judgments, and being less cooperative and more rationally self-interested in social dilemmas. Analytic thinkers are even less likely to offload thinking to smartphone technology and may be more creative. Taken together, these results indicate that the propensity to think analytically has major consequences for individual psychology.
Article
Full-text available
Understanding scientific theories like evolution by natural selection, classical mechanics, or plate tectonics requires knowledge restructuring at the level of individual concepts, or conceptual change. Here, we investigate the role of cognitive reflection (Frederick, 2005) in achieving conceptual change. College undergraduates (n = 184) were administered a 45-question survey probing their understanding of six domains of science requiring conceptual change – astronomy, evolution, geology, mechanics, perception, and thermodynamics – as well as (a) their ability to analyze covariation-based data, (b) their understanding of the nature of science (NOS), and (c) their disposition towards cognitive reflection. Cognitive reflection was a significant predictor of science understanding in all domains, as well as an independent predictor, explaining significantly more variance in science understanding than that explained by covariation analysis ability and NOS understanding combined. These results suggest that cognitive reflection may be a prerequisite for changing certain cognitive structures, namely, concepts and theories.
Article
Full-text available
Decision scientists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, and like issues that turn on empirical evidence. This paper describes a study of three of them: the predominance of heuristic-driven information processing by members of the public; ideologically motivated reasoning; and the cognitive-style correlates of political conservatism. The study generated both observational and experimental data inconsistent with the hypothesis that political conservatism is distinctively associated with either un-reflective thinking or motivated reasoning. Conservatives did no better or worse than liberals on the Cognitive Reflection Test (Frederick, 2005), an objective measure of information-processing dispositions associated with cognitive biases. In addition, the study found that ideologically motivated reasoning is not a consequence of over-reliance on heuristic or intuitive forms of reasoning generally. On the contrary, subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition. These findings corroborated an alternative hypothesis, which identifies ideologically motivated cognition as a form of information processing that promotes individuals' interests in forming and maintaining beliefs that signify their loyalty to important affinity groups. The paper discusses the practical significance of these findings, including the need to develop science communication strategies that shield policy-relevant facts from the influences that turn them into divisive symbols of political identity.
Article
Full-text available
Dual-system theories of human cognition, under which fast automatic processes can complement or compete with slower deliberative processes, have not typically been incorporated into larger scale population models used in evolutionary biology, macroeconomics, or sociology. However, doing so may reveal important phenomena at the population level. Here, we introduce a novel model of the evolution of dual-system agents using a resource-consumption paradigm. By simulating agents with the capacity for both automatic and controlled processing, we illustrate how controlled processing may not always be selected over rigid, but rapid, automatic processing. Furthermore, even when controlled processing is advantageous, frequency-dependent effects may exist whereby the spread of control within the population undermines this advantage. As a result, the level of controlled processing in the population can oscillate persistently, or even go extinct in the long run. Our model illustrates how dual-system psychology can be incorporated into population-level evolutionary models, and how such a framework can be used to examine the dynamics of interaction between automatic and controlled processing that transpire over an evolutionary time scale.
Article
Full-text available
Scores on the three-item Cognitive Reflection Test (CRT) have been linked with dual-system theory and normative decision making (Frederick, 2005). In particular, the CRT is thought to measure monitoring of System 1 intuitions such that, if cognitive reflection is high enough, intuitive errors will be detected and the problem will be solved. However, CRT items also require numeric ability to be answered correctly and it is unclear how much numeric ability vs. cognitive reflection contributes to better decision making. In two studies, CRT responses were used to calculate Cognitive Reflection and numeric ability; a numeracy scale was also administered. Numeric ability, measured on the CRT or the numeracy scale, accounted for the CRT's ability to predict more normative decisions (a subscale of decision-making competence, incentivized measures of impatient and risk-averse choice, and self-reported financial outcomes); Cognitive Reflection contributed no independent predictive power. Results were similar whether the two abilities were modeled (Study 1) or calculated using proportions (Studies 1 and 2). These findings demonstrate numeric ability as a robust predictor of superior decision making across multiple tasks and outcomes. They also indicate that correlations of decision performance with the CRT are insufficient evidence to implicate overriding intuitions in the decision-making biases and outcomes we examined. Numeric ability appears to be the key mechanism instead.
Article
Full-text available
Exposure to news, opinion and civic information increasingly occurs through social media. How do these online networks influence exposure to perspectives that cut across ideological lines? Using de-identified data, we examined how 10.1 million U.S. Facebook users interact with socially shared news. We directly measured ideological homophily in friend networks, and examined the extent to which heterogeneous friends could potentially expose individuals to cross-cutting content. We then quantified the extent to which individuals encounter comparatively more or less diverse content while interacting via Facebook's algorithmically ranked News Feed, and further studied users' choices to click through to ideologically discordant content. Compared to algorithmic ranking, individuals' choices about what to consume had a stronger effect limiting exposure to cross-cutting content.
Article
Full-text available
As the Internet has become a nearly ubiquitous resource for acquiring knowledge about the world, questions have arisen about its potential effects on cognition. Here we show that searching the Internet for explanatory knowledge creates an illusion whereby people mistake access to information for their own personal understanding of the information. Evidence from 9 experiments shows that searching for information online leads to an increase in self-assessed knowledge as people mistakenly think they have more knowledge "in the head," even seeing their own brains as more active as depicted by functional MRI (fMRI) images.
Article
Full-text available
Belief bias is the tendency for prior beliefs to influence people’s deductive reasoning in two ways: through the application of a simple belief-heuristic (response bias) and through the application of more effortful reasoning for unbelievable conclusions (accuracy effect or motivated reasoning). Previous research indicates that cognitive ability is the primary determinant of the effect of beliefs on accuracy. In the current study, we show that the mere tendency to engage analytic reasoning (analytic cognitive style) is responsible for the effect of cognitive ability on motivated reasoning. The implications of this finding for our understanding of the impact of individual differences on belief bias are discussed.
Article
Full-text available
The Cognitive Reflection Test (CRT) is one of the most widely used tools to assess individual differences in intuitive-analytic cognitive styles. The CRT is of broad interest because each of its items reliably cues a highly available and superficially appropriate but incorrect response, conventionally deemed the "intuitive" response. To do well on the CRT, participants must reflect on and question the intuitive responses. The CRT score typically employed is the sum of correct responses, assumed to indicate greater "reflectiveness" (i.e., CRT-Reflective scoring). Some recent researchers have, however, inverted the rationale of the CRT by summing the number of intuitive incorrect responses, creating a putative measure of intuitiveness (i.e., CRT-Intuitive). We address the feasibility and validity of this strategy by considering the problem of the structural dependency of these measures derived from the CRT and by assessing their respective associations with self-report measures of intuitive-analytic cognitive styles: the Faith in Intuition and Need for Cognition scales. Our results indicated that, to the extent that the dependency problem can be addressed, the CRT-Reflective but not the CRT-Intuitive measure predicts intuitive-analytic cognitive styles. These results provide evidence that the CRT is a valid measure of reflective but not of intuitive thinking.
Article
To what extent do survey experimental treatment effect estimates generalize to other populations and contexts? Survey experiments conducted on convenience samples have often been criticized on the grounds that subjects are sufficiently different from the public at large to render the results of such experiments uninformative more broadly. In the presence of moderate treatment effect heterogeneity, however, such concerns may be allayed. I provide evidence from a series of 15 replication experiments that results derived from convenience samples like Amazon’s Mechanical Turk are similar to those obtained from national samples. Either the treatments deployed in these experiments cause similar responses for many subject types or convenience and national samples do not differ much with respect to treatment effect moderators. Using evidence of limited within-experiment heterogeneity, I show that the former is likely to be the case. Despite a wide diversity of background characteristics across samples, the effects uncovered in these experiments appear to be relatively homogeneous.
Article
Lies spread faster than the truth. There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed. Science, this issue, p. 1146
Article
Democracies assume accurate knowledge by the populace, but the human attraction to fake and untrustworthy news poses a serious problem for healthy democratic functioning. We articulate why and how identification with political parties – known as partisanship – can bias information processing in the human brain. There is extensive evidence that people engage in motivated political reasoning, but recent research suggests that partisanship can alter memory, implicit evaluation, and even perceptual judgments. We propose an identity-based model of belief for understanding the influence of partisanship on these cognitive processes. This framework helps to explain why people place party loyalty over policy, and even over truth. Finally, we discuss strategies for de-biasing information processing to help to create a shared reality across partisan divides.
Article
Selective reading of political online information was examined based on cognitive dissonance, social identity, and news values frameworks. Online reports were displayed to 156 Americans while selective exposure was tracked. The news articles that participants chose from were either conservative or liberal and also either positive or negative regarding American political policies. In addition, information processing styles (cognitive reflection and need-for-cognition) were measured. Results revealed confirmation and negativity biases, per cognitive dissonance and news values, but did not corroborate the hypothesis derived from social identity theory. Greater cognitive reflection, greater need-for-cognition, and worse affective state fostered the confirmation bias; stronger social comparison tendency reduced the negativity bias.
Article
Why does public conflict over societal risks persist in the face of compelling and widely accessible scientific evidence? We conducted an experiment to probe two alternative answers: the ‘science comprehension thesis’ (SCT), which identifies defects in the public's knowledge and reasoning capacities as the source of such controversies; and the ‘identity-protective cognition thesis’ (ICT), which treats cultural conflict as disabling the faculties that members of the public use to make sense of decision-relevant science. In our experiment, we presented subjects with a difficult problem that turned on their ability to draw valid causal inferences from empirical data. As expected, subjects highest in numeracy – a measure of the ability and disposition to make use of quantitative information – did substantially better than less numerate ones when the data were presented as results from a study of a new skin rash treatment. Also as expected, subjects’ responses became politically polarized – and even less accurate – when the same data were presented as results from the study of a gun control ban. But contrary to the prediction of SCT, such polarization did not abate among subjects highest in numeracy; instead, it increased. This outcome supported ICT, which predicted that more numerate subjects would use their quantitative-reasoning capacity selectively to conform their interpretation of the data to the result most consistent with their political outlooks. We discuss the theoretical and practical significance of these findings.
Article
Following the 2016 US presidential election, many have expressed concern about the effects of false stories ("fake news"), circulated largely through social media. We discuss the economics of fake news and present new data on its consumption prior to the election. Drawing on web browsing data, archives of fact-checking websites, and results from a new online survey, we find: 1) social media was an important but not dominant source of election news, with 14 percent of Americans calling social media their "most important" source; 2) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times; 3) the average American adult saw on the order of one or perhaps several fake news stories in the months around the election, with just over half of those who recalled seeing them believing them; and 4) people are much more likely to believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks.
Article
Individuals are not merely passive vessels of whatever beliefs and opinions they have been exposed to; rather, they are attracted to belief systems that resonate with their own psychological needs and interests, including epistemic, existential, and relational needs to attain certainty, security, and social belongingness. Jost, Glaser, Kruglanski, and Sulloway (2003) demonstrated that needs to manage uncertainty and threat were associated with core values of political conservatism, namely respect for tradition and acceptance of inequality. Since 2003 there have been far more studies on the psychology of left-right ideology than in the preceding half century, and their empirical yield helps to address lingering questions and criticisms. We have identified 181 studies of epistemic motivation (involving 130,000 individual participants) and nearly 100 studies of existential motivation (involving 360,000 participants). These databases, which are much larger and more heterogeneous than those used in previous meta-analyses, confirm that significant ideological asymmetries exist with respect to dogmatism, cognitive/perceptual rigidity, personal needs for order/structure/closure, integrative complexity, tolerance of ambiguity/uncertainty, need for cognition, cognitive reflection, self-deception, and subjective perceptions of threat. Exposure to objectively threatening circumstances—such as terrorist attacks, governmental warnings, and shifts in racial demography—contributes to modest “conservative shifts” in public opinion. There are also ideological asymmetries in relational motivation, including the desire to share reality, perceptions of within-group consensus, collective self-efficacy, homogeneity of social networks, and the tendency to trust the government more when one's own political party is in power.
Although some object to the very notion that there are meaningful psychological differences between leftists and rightists, the identification of “elective affinities” between cognitive-motivational processes and contents of specific belief systems is essential to the study of political psychology. Political psychologists may contribute to the development of a good society not by downplaying ideological differences or advocating “Swiss-style neutrality” when it comes to human values, but by investigating such phenomena critically, even—or perhaps especially—when there is pressure in society to view them uncritically.
Article
Previous research revealed that inducing an intuitive thinking style led people to adopt more conservative social and economic attitudes. No prior study, however, has shown a causal effect of analytic cognitive style (ACS) on political conservatism. It is also not clear whether these cognitive-style manipulations influence stable or contextualized (less stable) political attitudes differentially. The current research investigated the causal effect of ACS on both stable and contextualized political opinions. In Experiment 1, we briefly trained participants to think analytically (or not) and assessed their contextualized and stable political attitudes. Those in the analytic thinking group responded more positively to liberal (but not conservative) arguments on contextualized opinions. However, no significant change occurred in stable opinions. In Experiment 2, we replicated this basic finding with a larger sample. Thus, the results demonstrate that inducing ACS causally influences contextualized liberal attitudes, but not stable ones.
Article
Individual differences researchers very commonly report Pearson correlations between their variables of interest. Cohen (1988) provided guidelines for the purposes of interpreting the magnitude of a correlation, as well as estimating power. Specifically, r = 0.10, r = 0.30, and r = 0.50 were recommended to be considered small, medium, and large in magnitude, respectively. However, Cohen's effect size guidelines were based principally upon an essentially qualitative impression, rather than a systematic, quantitative analysis of data. Consequently, the purpose of this investigation was to develop a large sample of previously published meta-analytically derived correlations which would allow for an evaluation of Cohen's guidelines from an empirical perspective. Based on 708 meta-analytically derived correlations, the 25th, 50th, and 75th percentiles corresponded to correlations of 0.11, 0.19, and 0.29, respectively. Based on the results, it is suggested that Cohen's correlation guidelines are too exigent, as <3% of correlations in the literature were found to be as large as r = 0.50. Consequently, in the absence of any other information, individual differences researchers are recommended to consider correlations of 0.10, 0.20, and 0.30 as relatively small, typical, and relatively large, in the context of a power analysis, as well as the interpretation of statistical results from a normative perspective.
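The empirical benchmarking described above amounts to taking percentiles of a pool of observed correlation magnitudes. A minimal sketch of that procedure follows; the sample values are illustrative stand-ins, not the paper's 708 meta-analytic correlations.

```python
import statistics

def correlation_benchmarks(correlations):
    """Return the 25th, 50th, and 75th percentiles of a collection of
    correlation coefficients (by absolute magnitude), mirroring the
    normative approach of deriving 'small / typical / large' benchmarks
    from the distribution of observed effect sizes."""
    magnitudes = sorted(abs(r) for r in correlations)
    # statistics.quantiles with n=4 returns the three quartile cut points.
    q25, q50, q75 = statistics.quantiles(magnitudes, n=4)
    return q25, q50, q75

# Illustrative pool of correlations (hypothetical values):
rs = [0.05, 0.08, 0.11, 0.15, 0.19, 0.22, 0.26, 0.29, 0.33, 0.40]
small, typical, large = correlation_benchmarks(rs)
```

Applied to a real database of meta-analytic correlations, the three returned cut points play the role of the empirically grounded 0.10 / 0.20 / 0.30 benchmarks the authors recommend.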
Article
Survey experiments have become a central methodology across the social sciences. Researchers can combine experiments’ causal power with the generalizability of population-based samples. Yet, due to the expense of population-based samples, much research relies on convenience samples (e.g. students, online opt-in samples). The emergence of affordable, but non-representative online samples has reinvigorated debates about the external validity of experiments. We conduct two studies of how experimental treatment effects obtained from convenience samples compare to effects produced by population samples. In Study 1, we compare effect estimates from four different types of convenience samples and a population-based sample. In Study 2, we analyze treatment effects obtained from 20 experiments implemented on a population-based sample and Amazon's Mechanical Turk (MTurk). The results reveal considerable similarity between many treatment effects obtained from convenience and nationally representative population-based samples. While the results thus bolster confidence in the utility of convenience samples, we conclude with guidance for the use of a multitude of samples for advancing scientific knowledge.
Article
This article explores belief in political rumors surrounding the health care reforms enacted by Congress in 2010. Refuting rumors with statements from unlikely sources can, under certain circumstances, increase the willingness of citizens to reject rumors regardless of their own political predilections. Such source credibility effects, while well known in the political persuasion literature, have not been applied to the study of rumor. Though source credibility appears to be an effective tool for debunking political rumors, risks remain. Drawing upon research from psychology on ‘fluency’ – the ease of information recall – this article argues that rumors acquire power through familiarity. Attempting to quash rumors through direct refutation may facilitate their diffusion by increasing fluency. The empirical results find that merely repeating a rumor increases its power.