Article
PDF Available

Shifting attention to accuracy can reduce misinformation online

Abstract and Figures

In recent years, there has been a great deal of concern about the proliferation of false and misleading news on social media [1–4]. Academics and practitioners alike have asked why people share such misinformation, and sought solutions to reduce the sharing of misinformation [5–7]. Here, we attempt to address both of these questions. First, we find that the veracity of headlines has little effect on sharing intentions, despite having a large effect on judgments of accuracy. This dissociation suggests that sharing does not necessarily indicate belief. Nonetheless, most participants say it is important to share only accurate news. To shed light on this apparent contradiction, we carried out four survey experiments and a field experiment on Twitter; the results show that subtly shifting attention to accuracy increases the quality of news that people subsequently share. Together with additional computational analyses, these findings indicate that people often share misinformation because their attention is focused on factors other than accuracy—and therefore they fail to implement a strongly held preference for accurate sharing. Our results challenge the popular claim that people value partisanship over accuracy [8, 9], and provide evidence for scalable attention-based interventions that social media platforms could easily implement to counter misinformation online.
Headline-level analyses for study 5, showing the effect of each condition relative to control as a function of the perceived accuracy and humorousness of the headlines. For each headline, we calculate the effect size as the mean sharing intention in the condition in question minus the control (among users who indicate that they sometimes share political content); we then plot this difference against the pre-test ratings of the perceived accuracy and humorousness of the headline. The effect of both treatments is strongly correlated with the perceived accuracy of the headline (treatment: r(18) = 0.61, P = 0.005; importance treatment: r(18) = 0.69, P = 0.0008), such that both treatments reduce sharing intentions to a greater extent as the headline becomes more inaccurate-seeming. This supports our proposed mechanism, in which the treatments operate by drawing attention to the concept of accuracy. Notably, we see no analogous effect for the active control: drawing attention to the concept of humorousness does not make people significantly less likely to share less humorous headlines (or more likely to share more humorous headlines), r(18) = −0.02, P = 0.93. This confirms the prediction generated by our model fitting in Supplementary Information section 3.6—because our participants do not have a strong preference for sharing humorous news headlines, drawing their attention to humorousness does not influence their choices. This also demonstrates the importance of our theoretical approach, which incorporates the role of preferences, relative to how priming is often conceptualized in psychology: drawing attention to a concept does not automatically lead to a greater effect of that concept on behaviour.
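The headline-level computation described above (a per-headline effect size, then a correlation with the pre-test accuracy ratings) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code; the table layout and column names (headline_id, condition, share, perceived_accuracy) are assumptions.

```python
# Illustrative reconstruction (not the authors' code) of the headline-level
# analysis: per-headline effect sizes, correlated with pre-test accuracy.
# Assumed inputs: `ratings` with columns headline_id, condition, share;
# `pretest` with columns headline_id, perceived_accuracy.
import pandas as pd
from scipy.stats import pearsonr

def headline_effects(ratings: pd.DataFrame, condition: str) -> pd.Series:
    """Mean sharing intention in `condition` minus the control mean, per headline."""
    means = ratings.groupby(["headline_id", "condition"])["share"].mean().unstack()
    return means[condition] - means["control"]

def effect_vs_accuracy(ratings: pd.DataFrame, pretest: pd.DataFrame, condition: str):
    """Pearson correlation of per-headline effects with perceived accuracy."""
    effects = headline_effects(ratings, condition).rename("effect").reset_index()
    merged = effects.merge(pretest, on="headline_id")
    return pearsonr(merged["effect"], merged["perceived_accuracy"])
```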
… 
Results of agent-based simulations of news sharing on social networks. See Supplementary Information section 6 for model details. Shown is the relationship between the individual-level probability of sharing misinformation and population-level exposure rates, for various levels of network density (the fraction of the population that the average agent is connected to, k/N) and different network structures. Top, the raw number of agents exposed to the misinformation (y axis) as a function of the agents’ raw probability of misinformation sharing (x axis). Bottom, the percentage reduction in the fraction of the population exposed to the piece of misinformation relative to control (y axis) as a function of the percentage reduction in individuals’ probability of sharing the misinformation relative to control (x axis). As can be seen, a robust pattern emerges across network structures. First, we see that the network dynamics never suppress the individual-level intervention effect: a decrease in sharing probability of x% always decreases the fraction of the population exposed to the misinformation by at least x%. Second, in some cases the network dynamics can markedly amplify the effect of the individual-level intervention: for example, a 10% decrease in sharing probability can lead to up to a 40% decrease in the fraction of the population that is exposed, and a 50% decrease in sharing probability can lead to more than a 95% reduction in the fraction of the population that is exposed. These simulation results help to connect our findings about individual-level sharing to the resulting effects on population-level spreading dynamics of misinformation. They demonstrate the potential for individual-level interventions, such as the accuracy prompts that we propose here, to meaningfully improve the quality of the information that is spread via social media. These simulations also lay the groundwork for future theoretical work that can investigate a range of issues, including which agents to target if only a limited number of agents can be intervened on, the optimal spatiotemporal intervention schedule to minimize the frequency with which any individual agent receives the intervention (to minimize adaptation or familiarity effects), and the inclusion of strategic sharing considerations (by introducing game theory).
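As a rough illustration of how such agent-based simulations connect individual sharing probabilities to population-level exposure, the sketch below runs a simple share/expose cascade on a random directed network. It is only a minimal stand-in for the model specified in Supplementary Information section 6, and every parameter value is an arbitrary assumption.

```python
# Minimal stand-in for the agent-based model described above: a share/expose
# cascade on a random directed network. Parameter values are arbitrary.
import random

def simulate_exposure(n_agents=1000, k=10, p_share=0.20, seed=0):
    """Fraction of agents exposed to a single piece of misinformation.

    Each agent is connected to k random others (density k/N); a share exposes
    all of the sharer's neighbours, and each newly exposed agent re-shares
    with probability p_share.
    """
    rng = random.Random(seed)
    neighbours = [rng.sample(range(n_agents), k) for _ in range(n_agents)]
    exposed, queue = {0}, [0]               # agent 0 posts the item
    while queue:
        sharer = queue.pop()
        for other in neighbours[sharer]:
            if other not in exposed:
                exposed.add(other)
                if rng.random() < p_share:  # exposed agent decides to re-share
                    queue.append(other)
    return len(exposed) / n_agents

baseline = simulate_exposure(p_share=0.20)
treated = simulate_exposure(p_share=0.18)   # a 10% reduction in sharing probability
print(f"exposure {baseline:.1%} -> {treated:.1%}, "
      f"{(baseline - treated) / baseline:.0%} fewer agents exposed")
```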
… 
... With just one to three mouse clicks or screen taps, the meme goes out to be displayed to hundreds or even thousands of people. A study conducted in the US in 2021 showed that more than half of people share content on social networks without carefully analysing it (Pennycook et al. 2021). According to an MIT study, this happens because fake and manipulative content is made more creative and exciting, specifically to stir up emotions, and it thus ends up being 70% more likely to be shared than authentic, true content (Empoli 2019, 72), while political content spreads three times faster than content from other sources (Bârgăoanu 2018, 146). ...
... One suggested solution to combat the kind of sharing under scrutiny here was to multiply the steps required before the process is complete. In this vein, it was suggested to introduce a brief self-assessment of the veracity of the content, by marking on a scale the estimated degree of its correctness, before the share is executed (Pennycook et al., 2021). ...
Article
In the era of communication, the internet and social networks, memes have acquired a huge power to disseminate and to influence people. This phenomenon is mainly due to the ease with which a message resonates with people and with one’s friends and acquaintances, the lack of regulation of digital communication channels, and the way social media algorithms are programmed: they tend to favour the financial profit of social media platforms rather than combating bots, fake accounts and the propagation of misleading messages. In this paper, I will analyse and explain why humorous satirical memes have come to be used in influence actions and campaigns, and why they have increased effectiveness, especially among trained people, compared with other techniques. The argumentation is built on two dimensions: that of genetic baggage and programming, together with the chemistry of the human body, in which hormones play a very important role; and that of the social, tribal dimension, in which man is a being who wants and needs to belong to a group in order to feel safe, accepted and valued.
... Students who place less value on information credibility have been found to be less successful at determining credibility, suggesting that students' intellectual dispositions can impact their evaluation processes (Nygren & Guath, 2019). Relatedly, misinformation sharing is associated with low attention to accuracy (Pennycook et al., 2020, 2021). When sharing information, people may prioritize goals such as pleasing followers, signaling group membership, or promoting ideological agendas, and put less weight on accuracy (Van Bavel et al., 2021). ...
... One way to address this challenge is to set or invoke norms that encourage epistemically responsible sharing, such as accuracy norms (Miller & Record, 2022). Pennycook and colleagues found that prompting people to consider accuracy can improve sharing discernment, defined as the difference in sharing intentions between true and false information (Pennycook et al., 2021; Pennycook et al., 2020; yet see Roozenbeek, Maertens, McClanahan, & van der Linden, 2021). However, accuracy prompts have been found to be more effective among people who place greater importance on sharing accurate information in the first place. ...
Article
Digital games have emerged as promising tools for countering the spread of misinformation online. Previous studies have mostly used games to inoculate players against misleading communication techniques. There has also been a lack of research on misinformation games in middle school. Hence, the aims of this investigation were to examine to what extent a game can support middle school students' competence to evaluate online information and their dispositions to share information responsibly. For this purpose, we developed a game, Misinformation Is Contagious, that models reliable evaluation strategies and the social implications of sharing (in)accurate information. In two studies with 7th and 8th grade students (N = 84 and N = 131), we found that playing the misinformation game resulted in better accuracy discernment, sharing discernment, and metastrategic knowledge about corroboration, compared to playing a control language game. In Study 1, the effects on discernment scores were mainly due to higher ratings of accurate messages; whereas in Study 2, the effects were mainly due to lower ratings of inaccurate messages. In both studies, accuracy discernment mediated the effect of playing the misinformation game on sharing discernment. In Study 2, the misinformation game also had a direct effect on sharing discernment, suggesting it may have impacted players' dispositions to value accuracy while sharing. However, the game did not affect students' self-reported stances regarding sharing misinformation. These results provide initial evidence that a game designed to support evaluation strategies can help students resist misinformation and identify reliable information. The findings also suggest that games can potentially promote responsible information sharing.
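A mediation analysis of the kind reported in these studies (game condition → accuracy discernment → sharing discernment) is commonly estimated as a bootstrapped indirect effect. The sketch below is a minimal example under that assumption, with illustrative variable names rather than the authors' dataset or code.

```python
# Hedged sketch of a bootstrapped indirect (mediation) effect; x is the game
# condition (0/1), m the mediator (accuracy discernment), y the outcome
# (sharing discernment). All three are assumed to be 1-D NumPy arrays.
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect: slope of m on x, times slope of y on m controlling for x."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    """95% percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    draws = [indirect_effect(x[idx], m[idx], y[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(draws, [2.5, 97.5])
```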
... Concerns about the spread of false information have prompted the emergence of a new field of study in psychology (Pennycook et al., 2021; Brady, Crockett, & Bavel, 2020; Brady, Gantman, & Bavel, 2020), along with other disciplines such as computer science (Zhou, Zafarani, Shu, & Liu, 2019), political science (Tucker et al., 2018), and communication (Li, 2020), among others (Paletz, Auxier, & Golonka, 2019), devoted to understanding the psychological underpinnings of the dynamics of social media sharing. Understanding the reasons why individuals share incessantly online might aid in finding a solution to this growing problem. ...
... The fact that viral fake news attracts high social media metrics cannot be overlooked. If media consumers observe that a certain fake news item has been liked and shared several times, they may perceive that the narrative is generally accepted; accordingly, they join the trend and disseminate the news on social media without understanding that the story is untrue (Pennycook et al., 2021; Simon, 1954: the bandwagon effect). Furthermore, it is possible that this negative association may not accurately represent how well media users' corrective actions have worked. ...
... Retention of popular topics is high across social media platforms even if the original content is not updated (Kapoor et al., 2018). The sharing of content is often driven by beliefs rather than accuracy (Pennycook et al., 2021). A belief-driven process can result in a diversion of attention towards well-established methods of carbon sequestration. ...
Article
Full-text available
Whales have been titled climate savers in the media, with their recovery welcomed as a potential carbon solution. However, only a few studies to date have provided data or model outputs to support this hypothesis. Following an outline of the primary mechanisms by which baleen whales remove carbon from the atmosphere for eventual sequestration at regional and global scales, we conclude that the amount of carbon whales are potentially sequestering might be too little to meaningfully alter the course of climate change. This is in contrast to media coverage perpetuating whales as climate engineers. Creating false hope in the ability of charismatic species to be climate engineers may act to further delay the urgent behavioral change needed to avert catastrophic climate change impacts, which can in turn have indirect consequences for the recovery of whale populations. Nevertheless, whales are important components of marine ecosystems, and any further investigation of existing gaps in their ecology will contribute to clarifying their contribution to the ocean carbon cycle, a major driver of the world’s climate. While whales are vital to the healthy functioning of marine ecosystems, overstating their ability to prevent or counterbalance anthropogenically induced changes in the global carbon budget may unintentionally redirect attention from known, well-established methods of reducing greenhouse gases. Large-scale protection of marine environments, including the habitats of whales, will build resilience and assist with natural carbon capture.
... Such a system will work with the natural human tendency to select actions that lead to the greatest reward and avoid those that lead to punishment (Skinner, 1966). Scientists have tested different strategies to reduce the spread of misinformation, including educating people about fake news (Guess et al., 2020; Traberg et al., 2022), using a prompt to direct attention to accuracy (Kozyreva et al., 2020; Pennycook et al., 2021; Pennycook et al., 2020), and limiting how widely a post can be shared (Jackson et al., 2022). Surprisingly, possible interventions in which the incentive structure of social media platforms is altered to reduce misinformation have been overlooked. ...
Article
Full-text available
The powerful allure of social media platforms has been attributed to the human need for social rewards. Here, we demonstrate that the spread of misinformation on such platforms is facilitated by existing social 'carrots' (e.g., 'likes') and 'sticks' (e.g., 'dislikes') that are dissociated from the veracity of the information shared. Testing 951 participants over six experiments, we show that a slight change to the incentive structure of social media platforms, such that social rewards and punishments are contingent on information veracity, produces a considerable increase in the discernment of shared information, namely an increase in the proportion of true information shared relative to the proportion of false information shared. Computational modeling (i.e., drift-diffusion models) revealed that the underlying mechanism of this effect is associated with an increase in the weight participants assign to evidence consistent with discerning behavior. The results offer evidence for an intervention that could be adopted to reduce misinformation spread, which in turn could reduce violence, vaccine hesitancy and political polarization, without reducing engagement.
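For readers unfamiliar with drift-diffusion models, the sketch below simulates a generic DDM trial (noisy evidence accumulating toward one of two decision boundaries). It is not the fitted model from the study above; the drift, boundary, and noise values are placeholders chosen only to illustrate how a larger drift toward the "discerning" boundary yields more discerning choices.

```python
# Generic drift-diffusion model (DDM) sketch, not the authors' fitted model;
# all parameter values are placeholders.
import numpy as np

def simulate_ddm_trial(drift, boundary=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Return (choice, reaction_time) for one trial.

    Evidence starts at 0 and accumulates with rate `drift` plus Gaussian
    noise until it crosses +boundary (choice 1) or -boundary (choice 0).
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else 0), t

# A larger drift toward the "discerning" boundary yields more discerning choices:
rng = np.random.default_rng(0)
choices = [simulate_ddm_trial(drift=0.8, rng=rng)[0] for _ in range(200)]
print(sum(choices) / len(choices))
```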
Article
Deepfakes are an effective method of media manipulation because of their realism and also because truth is not a priority when people are consuming and sharing content online. Consumers are more focused on creating their own reality that aligns with their desires, opinions, and values. We explain how deepfakes differ from other sources of information. Their realism and vividness make them unusually effective at depicting alternative facts, including fake news. Deepfakes are difficult to detect and will be even harder to detect in the future. However, people share deepfakes not necessarily because they believe them but because they want to reinforce their own identity and social position. The threat posed by deepfakes is that they can radicalize people by sowing chaos and confusion. They rarely change minds. We review the consequences of deepfakes in both the social sphere and private lives. We suggest potential solutions to reduce their negative consequences.
Article
Full-text available
We synthesize a burgeoning literature investigating why people believe and share false or highly misleading news online. Contrary to a common narrative whereby politics drives susceptibility to fake news, people are ‘better’ at discerning truth from falsehood (despite greater overall belief) when evaluating politically concordant news. Instead, poor truth discernment is associated with lack of careful reasoning and relevant knowledge, and the use of heuristics such as familiarity. Furthermore, there is a substantial disconnect between what people believe and what they share on social media. This dissociation is largely driven by inattention, more so than by purposeful sharing of misinformation. Thus, interventions can successfully nudge social media users to focus more on accuracy. Crowdsourced veracity ratings can also be leveraged to improve social media ranking algorithms.
Article
Full-text available
Americans are much more likely to be socially connected to copartisans, both in daily life and on social media. However, this observation does not necessarily mean that shared partisanship per se drives social tie formation, because partisanship is confounded with many other factors. Here, we test the causal effect of shared partisanship on the formation of social ties in a field experiment on Twitter. We created bot accounts that self-identified as people who favored the Democratic or Republican party and that varied in the strength of that identification. We then randomly assigned 842 Twitter users to be followed by one of our accounts. Users were roughly three times more likely to reciprocally follow-back bots whose partisanship matched their own, and this was true regardless of the bot’s strength of identification. Interestingly, there was no partisan asymmetry in this preferential follow-back behavior: Democrats and Republicans alike were much more likely to reciprocate follows from copartisans. These results demonstrate a strong causal effect of shared partisanship on the formation of social ties in an ecologically valid field setting and have important implications for political psychology, social media, and the politically polarized state of the American public.
Article
Full-text available
The 2020 U.S. Presidential Election saw an unprecedented number of false claims alleging election fraud and arguing that Donald Trump was the actual winner of the election. Here we report a survey exploring belief in these false claims that was conducted three days after Biden was declared the winner. We find that a majority of Trump voters in our sample – particularly those who were more politically knowledgeable and more closely following election news – falsely believed that election fraud was widespread and that Trump won the election. Thus, false beliefs about the election are not merely a fringe phenomenon. We also find that Trump conceding or losing his legal challenges would likely lead a majority of Trump voters to accept Biden’s victory as legitimate, although 40% said they would continue to view Biden as illegitimate regardless. Finally, we found that levels of partisan spite and endorsement of violence were equivalent between Trump and Biden voters.
Article
Full-text available
The Internet has evolved into a ubiquitous and indispensable digital environment in which people communicate, seek information, and make decisions. Despite offering various benefits, online environments are also replete with smart, highly adaptive choice architectures designed primarily to maximize commercial interests, capture and sustain users’ attention, monetize user data, and predict and influence future behavior. This online landscape holds multiple negative consequences for society, such as a decline in human autonomy, rising incivility in online conversation, the facilitation of political extremism, and the spread of disinformation. Benevolent choice architects working with regulators may curb the worst excesses of manipulative choice architectures, yet the strategic advantages, resources, and data remain with commercial players. One way to address some of this imbalance is with interventions that empower Internet users to gain some control over their digital environments, in part by boosting their information literacy and their cognitive resistance to manipulation. Our goal is to present a conceptual map of interventions that are based on insights from psychological science. We begin by systematically outlining how online and offline environments differ despite being increasingly inextricable. We then identify four major types of challenges that users encounter in online environments: persuasive and manipulative choice architectures, AI-assisted information architectures, false and misleading information, and distracting environments. Next, we turn to how psychological science can inform interventions to counteract these challenges of the digital world. After distinguishing among three types of behavioral and cognitive interventions—nudges, technocognition, and boosts—we focus on boosts, of which we identify two main groups: (a) those aimed at enhancing people’s agency in their digital environments (e.g., self-nudging, deliberate ignorance) and (b) those aimed at boosting competencies of reasoning and resilience to manipulation (e.g., simple decision aids, inoculation). These cognitive tools are designed to foster the civility of online discourse and protect reason and human autonomy against manipulative choice architectures, attention-grabbing techniques, and the spread of false information.
Article
Full-text available
Social media platforms rarely provide data to misinformation researchers. This is problematic as platforms play a major role in the diffusion and amplification of mis- and disinformation narratives. Scientists are often left working with partial or biased data and must rush to archive relevant data as soon as it appears on the platforms, before it is suddenly and permanently removed by deplatforming operations. Alternatively, scientists have conducted off-platform laboratory research that approximates social media use. While this can provide useful insights, this approach can have severely limited external validity (though see Munger, 2017; Pennycook et al. 2020). For researchers in the field of misinformation, emphasizing the necessity of establishing better collaborations with social media platforms has become routine. In-lab studies and off-platform investigations can only take us so far. Increased data access would enable researchers to perform studies on a broader scale, allow for improved characterization of misinformation in real-world contexts, and facilitate the testing of interventions to prevent the spread of misinformation. The current paper highlights 15 opinions from researchers detailing these possibilities and describes research that could hypothetically be conducted if social media data were more readily available. As scientists, our findings are only as good as the dataset at our disposal, and with the current misinformation crisis, it is urgent that we have access to real-world data where misinformation is wreaking the most havoc.
Article
Full-text available
Across two studies with more than 1,700 U.S. adults recruited online, we present evidence that people share false claims about COVID-19 partly because they simply fail to think sufficiently about whether or not the content is accurate when deciding what to share. In Study 1, participants were far worse at discerning between true and false content when deciding what they would share on social media relative to when they were asked directly about accuracy. Furthermore, greater cognitive reflection and science knowledge were associated with stronger discernment. In Study 2, we found that a simple accuracy reminder at the beginning of the study (i.e., judging the accuracy of a non-COVID-19-related headline) nearly tripled the level of truth discernment in participants’ subsequent sharing intentions. Our results, which mirror those found previously for political fake news, suggest that nudging people to think about accuracy is a simple way to improve choices about what to share on social media.
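Truth discernment in studies like these is typically computed as the difference between responses to true and false items, calculated separately per task or condition. A minimal sketch under that assumption, with a hypothetical data layout, is:

```python
# Minimal sketch (hypothetical data layout) of the truth-discernment measure:
# mean response to true items minus mean response to false items, computed
# separately per task (accuracy judgment vs. sharing intention).
import pandas as pd

def discernment(df: pd.DataFrame, response_col: str = "response") -> float:
    """df needs a `veracity` column with values 'true'/'false'."""
    means = df.groupby("veracity")[response_col].mean()
    return means["true"] - means["false"]

# e.g. compare discernment(df[df.task == "accuracy"]) with discernment(df[df.task == "sharing"])
```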
Article
Full-text available
Significance: Few people are prepared to effectively navigate the online information environment. This global deficit in digital media literacy has been identified as a critical factor explaining widespread belief in online misinformation, leading to changes in education policy and the design of technology platforms. However, little rigorous evidence exists documenting the relationship between digital media literacy and people’s ability to distinguish between low- and high-quality news online. This large-scale study evaluates the effectiveness of a real-world digital media literacy intervention in both the United States and India. Our largely encouraging results indicate that relatively short, scalable interventions could be effective in fighting misinformation around the world.
Preprint
Misinformation on social media has become a major focus of research and concern in recent years. Perhaps the most prominent approach to combating misinformation is the use of professional fact-checkers. This approach, however, is not scalable: Professional fact-checkers cannot possibly keep up with the volume of misinformation produced every day. Furthermore, many people see fact-checkers as having a liberal bias and thus distrust them. Here, we explore a potential solution to both of these problems: leveraging the “wisdom of crowds'' to identify misinformation at scale using politically-balanced groups of laypeople. Using a set of 207 news articles flagged for fact-checking by an internal Facebook algorithm, we compare the accuracy ratings given by (i) three professional fact-checkers after researching each article and (ii) 1,128 Americans from Amazon Mechanical Turk after simply reading the headline and lede sentence. We find that the average rating of a politically-balanced crowd of 10 laypeople is as correlated with the average fact-checker rating as the fact-checkers’ ratings are correlated with each other. Furthermore, the layperson ratings can predict whether the majority of fact-checkers rated a headline as “true” with high accuracy, particularly for headlines where all three fact-checkers agree. We also find that layperson cognitive reflection, political knowledge, and Democratic Party preference are positively related to agreement with fact-checker ratings; and that informing laypeople of each headline’s publisher leads to a small increase in agreement with fact-checkers. Our results indicate that crowdsourcing is a promising approach for helping to identify misinformation at scale.