Article

How to unring the bell: A meta-analytic approach to correction of misinformation

Taylor & Francis
Communication Monographs
Authors: N. Walter & S. T. Murphy

Abstract

The study reports on a meta-analysis of attempts to correct misinformation (k = 65). Results indicate that corrective messages have a moderate influence on belief in misinformation (r = .35); however, it is more difficult to correct misinformation in the contexts of politics (r = .15) and marketing (r = .18) than health (r = .27). Correcting real-world misinformation (r = .14) is also more challenging than correcting misinformation constructed for the study (r = .48). Rebuttals (r = .38) are more effective than forewarnings (r = .16), and appeals to coherence (r = .55) outperform fact-checking (r = .25) and appeals to credibility (r = .14).
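The summary correlations above are pooled across studies. As an illustration of how such pooling is commonly done, here is a generic sketch of fixed-effect, inverse-variance averaging of Pearson correlations via Fisher's z transform. This is standard meta-analytic practice, not necessarily the exact model used in the study, and the (r, n) pairs in the example are hypothetical:

```python
import math

def pool_correlations(studies):
    """Fixed-effect pooling of Pearson correlations via Fisher's z.

    `studies` is a list of (r, n) pairs: each study's correlation and
    sample size. Each r is z-transformed, weighted by n - 3 (the inverse
    of Var(z) = 1 / (n - 3)), averaged, and transformed back to r.
    """
    num = 0.0
    den = 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher's r-to-z transform
        w = n - 3           # inverse-variance weight
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the weighted mean z

# Hypothetical studies: (correlation, sample size)
example = [(0.48, 120), (0.14, 300), (0.38, 200)]
print(round(pool_correlations(example), 3))
```

A random-effects model, which meta-analyses of heterogeneous literatures typically report, adds a between-study variance component to the weights, but the transform-average-back-transform logic is the same.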


... falsehoods-is crucial for governmental institutions to maintain transparency and accountability, especially when they find themselves targeted by rumors, alongside other elites (N. Walter & Tukachinsky, 2020). Paradoxically, attempts to debunk misinformation may backfire, reinforcing false beliefs instead of dispelling them (Nyhan & Reifler, 2010; N. Walter & Murphy, 2018). How can governmental institutions effectively address rumors and misinformation that undermine their credibility and the legitimacy of other powerful entities? ...
... the sources of information, such as platform algorithms or peer-shared content (L. Chen et al., 2022; Kleis Nielsen & Ganter, 2018). Regarding message framing, research shows that merely denying inaccuracies is insufficient; providing causal elaborations that expose contradictions within the misinformation is more effective (Lewandowsky et al., 2013; N. Walter & Murphy, 2018). Lastly, social cues, such as likes and peer interactions, further shape users' evaluations of messages (Margetts, 2017). This approach provides a comprehensive lens for assessing debunking practices in digital environments. ...
... Empirically, a meta-analysis of debunking framing effectiveness indicates that among the various types of message framing employed in debunking, causal elaborations are considered a superior debunking practice to most other types of debunking (N. Walter & Murphy, 2018). The reason is that when people are exposed to a coherent message that explains "the chain of events, they will be more likely to substitute the false information with the retraction" (N. ...
Article
This study investigates the effectiveness of public health institutions’ misinformation debunking on social media by examining the impact of message features—social media intermediaries, message framing, and social cues—alongside the moderating roles of political cynicism and conspiracy beliefs. We conducted preregistered survey experiments in Hong Kong, the Netherlands, and the United States (total N = 2,769). Results show that sponsored messages outperformed AI recommendations. Causal framing backfired among political cynics in both Hong Kong and the Netherlands. In the United States, peer-shared messages enhanced source and message evaluations among those with higher conspiracy beliefs.
... Despite a growing number of fact-checking studies exploring practices (Moreno-Gil, Ramon, and Rodríguez-Martínez 2021); epistemologies (Amazeen 2015); transparency (Humprecht 2019); and effects (Walter et al. 2020; Walter and Murphy 2018), there is limited understanding of the specific tactics fact-checkers use to correct misinformation at the message level, how these techniques may vary based on the verification target, and how they differ across different countries and organizations. Guided by insights derived deductively from the literature on fact-checking effects and inductively from analysis of verification articles, we identified 17 distinct tactics used by fact-checkers across eight countries and 23 organizations with diverse backgrounds. ...
... By elucidating why the information is false and providing the premises and reasoning behind the correction, fact-checkers enable readers to independently draw their own conclusions. This clarity in explaining inaccuracies significantly enhances the persuasiveness of the correction (Amazeen and Krishna 2024;Cook and Lewandowsky 2012;Nyhan and Reifler 2013;Walter and Murphy 2018). ...
... Moreover, research supports providing a competing causal explanation as an effective method for mitigating misinformation. Offering an alternative explanation for misleading information is more effective than merely declaring a claim false (Amazeen and Krishna 2024;Cook and Lewandowsky 2012;Nyhan and Reifler 2013;Walter and Murphy 2018). Nyhan and Reifler (2013, p. 2) argue that replacing a false justification with a different causal interpretation is more impactful. ...
Article
Full-text available
This study systematically analyzes and compares verification strategies employed by fact-checking organizations across various contexts. Utilizing a dataset of 3,154 verification articles from 23 organizations in eight countries across Europe and Latin America, the study identifies 17 distinct debunking techniques through both inductive and deductive approaches. The primary objectives are to uncover common and divergent practices in factual correction, assess how techniques vary by verification target (e.g., online rumors versus statements by public figures), and examine variations at organizational and national levels. The findings reveal that while methods such as providing documents and tracing misinformation origins are prevalent, significant variation exists depending on the target. For online rumors, common practices include tracing misinformation origins, forensic analysis, and visual indicators of image manipulation. Conversely, verification of public figure statements frequently involves expert arbitration and direct contact with misinformation sources. Additionally, the study highlights substantial differences in fact-checking strategies across countries and organizations, influenced by their focus and institutional contexts. This research addresses a notable gap in the literature by offering a comparative analysis of verification strategies, providing a framework for future experimental research, and offering guidance for fact-checkers and scholars to refine their approaches to combating misinformation.
... Previous literature has conducted meta-analyses to investigate the effectiveness of fact-checking in correcting misinformation (Walter & Murphy, 2018; Walter & Tukachinsky, 2020). These studies have consistently identified the timing of corrections as a significant factor influencing the effectiveness of fact-checking. ...
... However, the exact timing that yields optimal results remains somewhat controversial in prior research (Ecker et al., 2022). For instance, research conducted by Brashier et al. (2021) and Walter & Murphy (2018) suggests that debunking, which involves fact-checking after the exposure of misinformation, tends to be more effective than forewarning or prebunking. In contrast, Jolley and Douglas (2017) found that prebunking, which involves addressing misinformation prior to exposure, was more successful in correcting anti-vaccine conspiracy theories compared to debunking. ...
... In contrast, Jolley and Douglas (2017) found that prebunking, which involves addressing misinformation prior to exposure, was more successful in correcting anti-vaccine conspiracy theories compared to debunking. Additionally, the effectiveness of corrections tends to diminish over time when there is a significant delay between the exposure to misinformation and the subsequent correction (Walter & Murphy, 2018; Walter & Tukachinsky, 2020). Our findings revealed a higher number of fact-checking articles during major events, such as the COVID-19 pandemic and the U.S. presidential election, where misinformation tends to spread widely (Cinelli et al., 2020; Grinberg et al., 2019; Sharma et al., 2022). ...
Article
This study examined four fact checkers (Snopes, PolitiFact, Logically, and the Australian Associated Press FactCheck) using a data-driven approach. First, we scraped 22,349 fact-checking articles from Snopes and PolitiFact and compared their results and agreement on verdicts. Generally, the two fact checkers agreed with each other, with only one conflicting verdict among 749 matching claims after adjusting for minor rating differences. Next, we assessed 1,820 fact-checking articles from Logically and the Australian Associated Press FactCheck, and highlighted the differences in their fact-checking behaviors. Major events like the COVID-19 pandemic and the presidential election drove increases in the frequency of fact-checking, with notable variations in ratings and authors across fact checkers.
... The literature on belief updating extends across disciplines, encompassing a wide range of topics and methodologies, including belief change in response to social, environmental, and political arguments and information (e.g., Corner et al., 2012; A. G. Miller et al., 1993; Taber & Lodge, 2006; Tappin et al., 2020), factual or statistical evidence (e.g., Vlasceanu & Coman, 2022), court or legal evidence (e.g., Hudachek & Quigley-McBride, 2022; McKenzie et al., 2002), misinformation and conspiracies (e.g., McHoskey, 1995; O'Brien et al., 2021; Orticio et al., 2022), misinformation corrections (e.g., Carey et al., 2022; Walter & Murphy, 2018), self-relevant evidence (e.g., Drobner & Goerg, 2024; Eil & Rao, 2011; Marks & Baines, 2017; Sharot et al., 2011), the perceived normative prevalence of beliefs (Orticio et al., 2022; Vlasceanu & Coman, 2022), and evidence bearing on beliefs instilled or claims made at the beginning of the study, including false claims (e.g., Anderson, 1983; Anderson et al., 1980; Ross et al., 1973). ...
... Understanding the public's interpretation of and receptivity to scientific research on polarized topics is critical to effectively communicating evidence-based solutions to social problems (e.g., those related to the environment, public health, education, and intergroup relations). Although belief updating in response to scientific evidence may operate similarly to other forms of belief updating, research has shown that people sometimes respond differently to scientific and non-scientific information (Corner & Hahn, 2009; Walter & Murphy, 2018) and when the topics are polarized vs. not (e.g., Kahan et al., 2017; Vedejová & Čavojová, 2022). Beliefs may be based on many different forms of evidence besides scientific studies (e.g., group agreement, expert opinion, trusted figures or authorities, anecdotes, perception, experience, testimony, logic and reasoning, etc.; Metz et al., 2018; Sommer et al., 2024). ...
Article
Full-text available
Although studies on belief perseverance suggest that people resist evidence opposing their beliefs, recent research found that people were receptive to clear, belief-disconfirming evidence. However, this research measured belief change immediately after presenting the evidence, and belief change varied considerably across participants. In three preregistered experiments, we replicated and extended prior work, testing whether belief change in response to empirical evidence on polarized topics persists one day later and variables associated with belief change, including the (in)consistency of evidence with prior views, evidence strength, and individual differences in beliefs, affect, thinking and reasoning strategies, and perceptions of the evidence and science. Overall, participants shifted their beliefs in response to evidence on capital punishment (Study 1), gun control (Study 2), and video games and aggression (Study 3) and maintained this change the next day. Belief change primarily occurred among those presented with belief-inconsistent evidence. Participants shifted their beliefs more in response to stronger vs. weaker evidence but were more sensitive to the evidence strength initially than the next day. Perceived evidence quality and scientific certainty were consistently associated with belief change, whereas belief commitment, actively open-minded thinking, social desirability, and positive and negative affect were not. People may be receptive to belief-inconsistent evidence, especially if they view it as strong and science as certain, irrespective of general individual differences in receptivity. Further research is needed on the persistence and predictors of belief change in response to evidence over a longer time frame and across topics, contexts, and samples.
... The alignment between these modalities can impact how well the correction is received and internalised. Prior research has shown that when addressing misperceptions conveyed through text, presenting corrections in the same text modality can be an effective strategy [10,33,67,85]. However, text-based corrections may not always be effective when addressing misinformation presented in other modalities. ...
... This is evidenced in our results, where Text corrections were more effective than Image corrections for misinformation presented in Text form. Previous research has established that text corrections are effective in addressing text-based misinformation [10,33,67,85]. However, these studies did not compare text corrections with corrections delivered through other media, leaving the relative effectiveness of different correction modalities unexplored. ...
Conference Paper
Full-text available
Social media has become a primary information source, with platforms evolving from text-based to multi-modal environments that include images and videos. While richer media modalities enhance user engagement, they also increase the spread and perceived credibility of misinformation. Most interventions to counter misinformation on social media are text-based, which may lack the persuasive power of richer modalities. This study explores whether the effectiveness of misinformation correction varies by modality, and if certain modalities of misinformation are better countered by a specific correction modality. We conducted a survey-based experiment where participants rated the credibility of misinformation tweets before and after exposure to corrections, across all combinations of text, images and video modalities. Our findings suggest that corrections are most effective when their modality richness matches that of the original misinformation. We discuss factors affecting the perceived credibility of corrections and offer strategies to optimise misinformation correction.
... Although there is no consensus over which of the techniques is more effective and under which circumstances, experimental studies show that both prebunking and debunking have the potential to correct inaccurate and conspiracy beliefs (see e.g., Ecker et al., 2022; Swire-Thompson et al., 2021; van der Linden, 2022; Walter & Murphy, 2018). A recent systematic review of the efficacy of interventions in reducing CTs (O'Mahony et al., 2023) shows that the most effective interventions are those that happen before exposure to CTs (prebunking) and that inoculations that identified the factual inaccuracies of conspiracy beliefs were the most effective of the reviewed interventions. ...
... While fact-checking effectively reduces support for CTs and increases the ability to discern facts from misinformation, it may be insufficient to curb support for politicians who spread them or the policies they propose, particularly in people with an aligned political leaning (Barrera et al., 2020; Grady et al., 2021). Also, a meta-analysis of studies attempting to correct misinformation (Walter & Murphy, 2018) indicates that it may be more difficult to refute political claims compared to health-related misinformation. Consequently, the development of "one-size-fits-all" interventions is unlikely. ...
... A correction can be made in response to existing misinformation, termed debunking (Chan et al., 2017), or a prewarning can be presented proactively before people encounter the misinformation, a preemptive method named prebunking (Lewandowsky & Van Der Linden, 2021). Whereas meta-analytic work has evidenced the general effectiveness of strategies to counter misinformation (e.g., Walter & Murphy, 2018; Walter et al., 2021), there is no easy cure. For example, the inoculation strategy may reduce perceived misinformation credibility (Lu et al., 2023), the correction may have limited reach compared to the vast spread of misinformation (van der Linden, 2022), or the effect of the counter may differ based on the specific strategy (Walter & Murphy, 2018) or wear out over time (Maertens et al., 2021; Walter & Tukachinsky, 2020). ...
Article
Given the prevalence of health misinformation, it is essential to develop interventions to correct misinformation and reduce its negative influence. Emerging research has investigated the use of narratives as both prebunking and debunking strategies, but the findings are mixed regarding their effectiveness. This systematic scoping review aimed to examine the role of narratives in countering health misinformation, drawing on evidence from 19 studies. The identified studies investigate a variety of health issues, with most employing a randomized experimental design and collecting data in the United States. The findings suggest that narratives are a promising prebunking strategy to inoculate individuals against health misinformation. However, their effectiveness in debunking health misinformation remains inconsistent. Narrative features such as emotional appeals and audiovisual elements may enhance their impact. Directions for future research are discussed.
... However, a recent meta-analysis of science-relevant misinformation (including health) found that corrections were, on average, not effective (Chan & Albarracín, 2023), though the average masks substantial variation in effectiveness across studies and designs. Findings are mixed as to whether health misinformation is easier to correct than political misinformation (Chan & Albarracín, 2023; Vraga et al., 2019), but Walter and Murphy (2018) posit that health misinformation may be easier to correct because topics that involve political identity are especially resistant to belief change. Yet, it is worth noting that health is becoming an increasingly politicized issue. ...
Technical Report
Full-text available
There is widespread concern that misinformation poses dangerous risks to health, well-being, and civic life. Despite a growing body of research on the topic, significant questions remain about (a) psychological factors that render people susceptible to misinformation, (b) the extent to which it affects real-world behavior, (c) how it spreads online and offline, and (d) intervention strategies that counter and correct it effectively. This report reviews the best available psychological science research to reach consensus on each of these crucial questions, particularly as they pertain to health-related misinformation. In addition, the report offers eight specific recommendations for scientists, policymakers, and health professionals who seek to recognize and respond to misinformation in health care and beyond.
... A large body of empirical evidence shows that retractions can mitigate, but rarely eliminate, the influence of misinformation, a phenomenon known as the continued influence effect [CIE; 14,20, for reviews see 1,21]. Typically, the CIE is investigated using event-related misinformation. ...
Article
Full-text available
Retracted misinformation often continues to influence event-related reasoning, but there is mixed evidence that it influences person impressions. A recent study found no evidence for the continued influence of retracted misinformation on person impressions across four experiments. However, the study used a dynamic impression-rating measure that may have obscured any continued influence effects. Here we report three experiments that tested for the continued influence of retracted misinformation on person impressions using a non-dynamic impression-formation task that is comparable to tasks used in event-related misinformation research. Participants formed an impression of a fictitious person based on a series of behaviour statements. A negative behaviour statement (e.g., “John kicked his pet dog hard in the head when it didn’t come when called”) was subsequently retracted or not retracted. Evidence for the continued influence of the retracted behaviour statement was found in one experiment; in the other two experiments the retracted misinformation was fully discounted. The mixed findings indicate that, unlike retracted event-related misinformation, retracted person-related misinformation does not consistently show a continued influence effect. Future research should investigate potential moderating factors, such as the attributes of the misinformation and the presence of social-category information about the protagonist, to reveal the mechanisms underlying the continued influence effect in person impressions.
... Numerous interventions have been proposed and tested in current literature, ranging from algorithmic solutions such as machine learning models for misinformation detection to psychological interventions such as inoculation (Linden & Roozenbeek, 2020). One widely proposed intervention is correction, which recent meta-analyses suggest is effective against misinformation (Walter et al., 2021; Walter & Murphy, 2018). Corrections to misinformation posts can originate from two primary sources. ...
Preprint
Full-text available
Corrections given by ordinary social media users, also referred to as social correction, have emerged as a viable intervention against misinformation in the recent literature. However, little is known about how often users give disputing or endorsing comments and how reliable those comments are. An online experiment was conducted to investigate how users' credibility evaluations of social media posts, and their confidence in those evaluations combined with online reputational concerns, affect their commenting behaviour. The study found that participants exhibited a more conservative approach when giving disputing comments compared to endorsing ones. Nevertheless, participants were more discerning in their disputing comments than endorsing ones. These findings contribute to a better understanding of social correction on social media and highlight the factors influencing comment behaviour and reliability.
... For example, corrections and using awareness prompts are commonly used worldwide (Poynter, 2023), whereas legal warnings are more prevalent in countries like China (Rodrigues and Xu, 2020). In addition, misinformation spans diverse topics, including politics, health, and science (Walter and Murphy, 2018). Its nature varies greatly depending on the context and underlying motivations, such as dread, wedge-driving, and wish-related motivations, adding further complexity to the development of universal fact-checking strategies. ...
Article
This study investigates the differential impacts of corrections, awareness prompts, and legal warnings on the endorsement of fact-checking information (through both “likes” and expressed support in associated comments) across three types of misinformation motivation (dread, wedge-driving, wish) on Weibo, a major Chinese social media platform. Through manual labeling and BERT (a pretrained large language model), we analyzed a cleaned dataset of 4,942 original fact-checking Weibo posts from 18 November 2010 to 31 May 2022, created or shared by Weibo Piyao. Results indicate that government posts or those with visual cues received fewer “likes” but garnered more supportive comments, while awareness prompts and legal warnings received more supportive comments across three misinformation types. This research provides valuable insights into the practice of fact-checking on social media, highlighting how different strategies may vary in their impact depending on the nature of the misinformation being addressed.
... Public correction on social media has a second benefit, given the large number of people who have the opportunity to witness correction, even without participating in sharing misinformation or engaging in correction themselves. This ability to reach a secondary audience is known as observed correction (Vraga & Bode, 2017, p. 634), and research indicates that correction from peers reduces people's misperceptions across health and political contexts (Walter et al., 2021; Walter & Murphy, 2018). According to the World Health Organization (WHO, 2020), debunking (or correcting) health misinformation is a major task that the whole of society must undertake to maintain trust in the health care system and improve people's health. ...
Article
Full-text available
Of the many solutions to address political misinformation spreading on social media, user correction holds special promise for connective democracy given its emphasis on prioritizing user autonomy and fostering communication and connections across lines of disagreement. But for the connective democratic benefits to be realized, these user corrections should ideally come from those who express strong support for democratic norms. Using a nationally representative survey of Americans immediately after the 2020 U.S. presidential election, we find the opposite is true: self-reported correctors also tended to support political violence to achieve their goals. Rather than treating self-reported correction as a clear positive force for democracy, researchers and practitioners should consider the potential drawbacks and limitations of self-reported correction, particularly when coming from those with less supportive attitudes toward connective democracy.
... In the misinformation correction literature, whether the correction should be made before or after the misinformation attack is also discussed, with inconsistent findings and recommendations. While some scholars (e.g., Walter and Murphy 2018) suggested that directly refuting the opposing information (i.e., misinformation) is more effective than forewarning of the potential existence of misinformation, others (e.g., Wan and Pfau 2004) argued that acknowledging the misinformation before the correction process might unexpectedly reinforce the misinformation or put the organization in a crisis if the misinformation does not appear as planned. Despite the pros and cons of placing correction before misinformation occurrence, scholars have advocated for the value of such a proactive timing strategy in correcting misinformation. ...
Article
Full-text available
While narrative is regarded as a powerfully persuasive tool in previous crisis communication literature, few empirical studies in crisis misinformation correction have examined the danger of narrative-based misinformation about an organizational crisis and how an organization might correct it via prebunking strategies using narratives. Thus, contextualized in an organizational misinformation crisis, this study examined the informational competition between crisis misinformation narrative and organizational prebunking narrative, as well as identified viable ways of using organizational narrative persuasion as a robust prebunking messaging strategy, via the mediation effects of character connection and perceived information quality, to reduce misinformation discussion on social media and increase publics' social correction intention. An online experiment with a 1 (Misinformation: blame narrative) × 4 (Organizational prebunking message: blame narrative vs. victim narrative vs. renewal narrative vs. nonnarrative correction) between-subjects design was conducted with 352 US adults. Key findings include: (1) the narrative strategy included in the prebunking message exhibited limited direct effects on participants' communicative behaviors; (2) identification with the spokesperson had more impact than perceived correction quality on participants' communicative behaviors (i.e., misinformation discussion and social correction); and (3) participants' liking of the spokesperson (not trust) was positively associated with their character identification with the spokesperson. Theoretical and practical implications are further discussed in terms of the potential for using a solid persuasive tool, narrative, to combat misinformation narratives through communicative behaviors, as well as the mechanism behind the competition between misinformation and corrective information.
... This case is one of the most famous real-life examples of the continued influence effect (CIE), which refers to situations in which one tends to use information initially presented as true after it has been retracted and acknowledged as false, even though one is aware of this retraction. This phenomenon has been consistently observed in various pieces of laboratory research (e.g., Brydges et al., 2018; Buczel et al., 2024; Ecker & Ang, 2019; Ecker & Antonio, 2021; Ecker et al., 2010, 2011a, 2011b, 2014a, 2015, 2017; Guillory & Geraci, 2010, 2013; Hamby et al., 2020; Johnson & Seifert, 1994, 1998; Nyhan & Reifler, 2015; O'Rear & Radvansky, 2020; Rich & Zaragoza, 2016, 2020; Susmann & Wegener, 2022a, 2022b; van Oostendorp & Bonebakker, 1999; Walter & Murphy, 2018; Wilkes & Leatherbarrow, 1988; Wilkes & Reynolds, 1999). Misinformation reliance in CIE can manifest both as directly believing misinformation (e.g., Aird et al., 2018; Lewandowsky et al., 2005; Nyhan & Reifler, 2010; Rich et al., 2017; Swire et al., 2017) and as forming misinformation-relevant inferences (e.g., Buczel et al., 2024; Ecker et al., 2010; Guillory & Geraci, 2013; Johnson & Seifert, 1994; Rich & Zaragoza, 2016). ...
Article
Full-text available
The continued influence effect (CIE) refers to continued reliance on misinformation, even after it has been retracted. There are several techniques to counter it, such as forewarnings or presenting alternative explanations that can replace misinformation in knowledge or mental models of events. However, the existing research shows that they generally do not eliminate CIE, and their protective effects do not appear to be durable over time. In two experiments (N = 441), we aimed to investigate the effectiveness of the alternative explanation technique and a combination of an alternative explanation and a forewarning (Experiment 1) or inoculation (Experiment 2) in both reducing CIE and the effect of increasing misinformation reliance over time, which is called belief regression. We found that an alternative reduced CIE, while combining it with a forewarning or inoculation boosted this protective effect in the pretest. Nevertheless, the protective effect of the alternative+forewarning and inoculation techniques was not sustained, as shown by the fact that misinformation reliance increased for over 7 days, despite continued memory of the correction. A similar pattern, albeit with mixed evidence from NHST vs. Bayesian analyses, was found for the alternative+inoculation technique. In the Discussion, we address issues such as the potential cognitive mechanisms of this effect. Despite all the similarities, given the difference in both methodology and results, we proposed that increased misinformation reliance over time in inferential reasoning should be attributed not to belief regression but to a phenomenon we refer to as reliance regression.
... Engagement and reciprocal communication have been posited as necessary aspects of deliberative discourse [15,16] -and even when engagement is initially negative or challenges the correction, such engagement initiates a dialogue that may ultimately open the door to belief updating. Typical one-off approaches for correcting misinformation have reliable but ultimately small and limited effects in magnitude and scope [17,18], and so rebuttal mechanisms that elicit engagement over non-responses may be more effective at promoting sustained learning and belief change [19,20]. ...
Article
Full-text available
Social corrections – where users correct each other – can help rectify inaccurate beliefs. However, social corrections are often ignored. Here we ask under what conditions social corrections promote engagement from corrected users, allowing for greater insight into how users respond to debunking messages (even if such responses are negative). Prior work suggests two key factors may help promote engagement with corrections – partisan alignment between users, and social connections between users. We investigate these factors here. First, we conducted a field experiment on Twitter (X) using human-looking bots to examine how shared partisanship and prior social connection affect correction engagement. We randomized whether our accounts identified as Democrat or Republican, and whether they followed Twitter users and liked three of their tweets before correcting them (creating a minimal social connection). We found that shared partisanship had no significant effect in the baseline (no social connection) condition. Interestingly, social connection increased engagement with corrections from co-partisans. Effects in the social counter-partisan condition were ambiguous. Follow-up survey experiments largely replicated these results and found evidence for a generalized norm of responding, wherein people feel more obligated to respond to people who follow them – even outside the context of misinformation correction. Our findings have important implications for increasing engagement with social corrections online.
... Previous studies have investigated two main correction approaches. Fact-based corrections, which directly counter false claims with accurate information, have demonstrated efficacy across politics, health, and science communication [20]. The evolving complexity of misinformation has prompted the investigation of logic-based corrections, which address reasoning flaws underlying misinformation [21,22]. ...
Article
Full-text available
The rapid spread of health misinformation on social media poses significant challenges to public health. Misinformation about Mpox has portrayed it as exclusively a sexually transmitted infection, resulting in misperceptions about infection risk and stigmatization of affected groups. This study aimed to evaluate the effectiveness of different correction approaches and message framing in reducing misperception and shaping disease-related attitudes, both immediately after exposure and after a 1-day delay. We employed a 2 × 2 design with a control group to test correction approaches (fact-based vs. logic-based) combined with hashtag framing (health literacy vs. inclusivity) through an experiment (N = 274). Findings showed that all corrections reduced misperception both immediately and after 1 day and increased the likelihood of sharing corrective messages. Only corrections with inclusivity hashtags promoted more positive attitudes towards Mpox immediately after exposure. Stereotypes played a significant moderating role: participants with stronger stereotypes showed a greater reduction in misperception when exposed to corrections with inclusivity hashtags but were less likely to share logic-based corrective messages. These findings contribute to understanding effective health communication by highlighting the role of social media hashtags in message framing, promoting user sharing of corrective information, and addressing stereotypes when designing interventions against health misinformation.
... Extant research contends that social correction is crucial in combating rampant health misinformation, especially when both experts and non-experts actively participate in the correction process (Bautista et al., 2024; Bode & Vraga, 2018). Practically, social correction can be delivered in various forms, the most common being rebuttals, i.e., corrective responses to erroneous information (Walter & Murphy, 2018). Additionally, compared to institutionalized correction primarily operated by authorities and platforms, social correction empowered by crowd wisdom is more promising for handling mounting online misinformation in a timely, flexible, and cost-effective manner (Koo et al., 2021). ...
Article
Full-text available
The insidious fallouts of online health misinformation highlight the need to explore effective strategies to correct it. Although incidental information exposure has become increasingly prevalent, how it may be associated with misinformation correction intention remains hazy. Drawing upon the Cognitive Mediation Model (CMM) and misinformation correction literature, this study proposes a research model to theorize the pathways from incidental corrective information exposure (ICIE) to misinformation correction intention in the online context. Based on a cross-sectional survey conducted in China (N = 690), we found that ICIE was positively tied to elaborative processing and interpersonal discussion of corrective information. Additionally, information elaboration was positively related to health knowledge, whereas ICIE and interpersonal discussion were negatively associated with health knowledge. Moreover, elaboration and interpersonal discussion were associated with a stronger perception of norms, further contributing to misinformation correction intention. Counterintuitively, there was no significant relationship between health knowledge and correction intention. Theoretically, this study adapts the CMM to fit the current information consumption trend and integrates factors at the interpersonal level for theory extension. Practically, our findings inform possible remedies to combat rampant online health misinformation effectively, especially for societies that value collectivism.
... False and potentially harmful information may be produced and spread during a disaster by a variety of actors, including benevolent citizens who want to make sense of the crisis situation as well as hostile foreign actors who aim to cause harm to the population, but people may experience adverse effects in both cases: if they act on misleading and harmful information, they may put themselves and others at risk [12][13][14]. Therefore, it is necessary to explore strategies for tackling false and harmful information in disaster management [15][16][17]. Importantly, distrust towards official sources may increase social vulnerability to crises (e.g., when people disregard official risk and crisis information and behave in hazardous ways), so emergency management institutions need to find ways to improve their credibility [18][19][20]. ...
... The effectiveness of correction in mitigating the negative effect has been well documented, particularly in the field of health communication. A meta-analysis by Walter and Murphy (2018) concluded that correction is more effective in the context of health than in politics or marketing. Correction is also crucial feedback following misinformation sharing, with attitudes toward it varying by age (Bode & Vraga, 2021). ...
Article
It is known that older adults are more susceptible to misinformation, and older adults sharing health misinformation is a growing concern. This study explores the factors influencing health misinformation sharing and relational correction among Chinese older adults from a cultural perspective. Guided by the PEN-3 cultural model, we conducted focus groups and in-depth interviews with 79 participants in China to understand the cultural and contextual factors of misinformation sharing. We found that older adults' sharing of health misinformation was shaped by (a) negative factors such as values of familial ties, need for respect, reciprocity, and initiation of conversation; (b) existential factors such as fact-checking tendency; and (c) positive factors such as fatal information avoidance, political identity, awareness of marketing targeting, and social responsibility. Additionally, we found that older adults tend to switch to a silent mode of relational correction for factors such as harmony and face. This research extends the model’s applicability and provides localized insights for developing culturally sensitive health communication strategies to mitigate the spread of health misinformation.
... In recent years, several empirical studies have been conducted on fact-checking and debunking information (for meta-analyses, see Lewandowsky et al., 2012; Walter et al., 2020; Walter and Murphy, 2018). ...
Article
Regarding the impact of fact-checking, extensive research has been conducted on the correlation between fact-checking and individuals’ political beliefs, but this issue is difficult to address by policy. This study investigates the relationship between the effectiveness of fact-checking and literacy, as well as the relationship between the effectiveness of fact-checking and the types of media used to disseminate this information. These variables can be addressed through policy measures. We conducted the survey via the internet. Participants were tasked with making true or false judgments about real instances of misinformation before and after fact-checking. The results highlighted the significance of information literacy in achieving accurate perceptions through fact-checking. Secondly, in the case of COVID-19-related misinformation, fact-checking proved more effective on government websites than on social media. Thirdly, many individuals incorrectly identified misinformation as true even after fact-checking. These findings underscore the risk of indiscriminately disseminating fact-check results on social media, as doing so could potentially have the opposite effect if the recipients lack the requisite literacy.
... 2022). A quantitative study by Walter and Murphy found that debunking efforts can reduce the spread of hoaxes (Walter & Murphy, 2018). Debunking is considered more effective than prebunking because debunking takes place after a hoax has already spread, making it easier for the public to remember (Wang & Huang, 2023). ...
Chapter
Full-text available
In addition to journalists, the subjects of the articles in this book include vulnerable groups such as women and people with disabilities. Cut Meutia Karolina and Irwa Rochimah Zarkasi of Universitas Al Azhar Indonesia describe trends in practices of misinformation, disinformation, and malinformation among blind people in the run-up to the 2024 Indonesian elections (Pemilu 2024). Why the blind? Based on observed phenomena, Pertuni (Persatuan Tunanetra, the Indonesian Blind Union) states that blind people tend to be apathetic toward elections, including the 2024 presidential and vice-presidential election. Yet despite this apathy, the blind community remains a target of hoaxes.
... (a) Meta-analyses show that debunking texts, such as fact-checking texts, can significantly reduce belief in misinformation (Blank & Launay, 2014; Chan et al., 2017; Walter & Murphy, 2018; Walter & Tukachinsky, 2019). Unfortunately, this positive effect is usually not long term (Carey et al., 2022), and once confronted with misinformation, people often continue to recall false details from memory despite taking note of factual corrections, a phenomenon known as the continued influence effect of misinformation (Johnson & Seifert, 1994; Lewandowsky et al., 2012; Walter & Tukachinsky, 2019) and belief perseverance (Anderson, 2007). ...
Chapter
This chapter explores the pervasive issue of health-related misinformation and fake news, particularly during the COVID-19 pandemic. It defines key terms, examines how misinformation spreads, and discusses its prevalence and impact on public health. The chapter highlights the role of social media in the rapid dissemination of false information and investigates the reasons behind the public’s belief in such misinformation. Consequences of believing in health misinformation are addressed, including negative impacts on health behaviors and public trust. The chapter also reviews various strategies to combat misinformation, such as debunking, nudging, and prebunking, and emphasizes the need for comprehensive approaches to strengthen individual and societal resilience. Future research directions are suggested, focusing on underexplored areas and the generalizability of findings beyond the COVID-19 context. This comprehensive analysis underscores the importance of combating health misinformation to protect public health.
... Some meta-analyses (Walter & Tukachinsky, 2020; Walter & Murphy, 2018; Chan et al., 2017) further document that including an alternative explanation in the correction effectively reduces the continued influence effect of misinformation. Though correction does not completely eradicate the effect of misinformation, it is found to be more effective when the source of the misinformation delivers the correction itself and when the correction is coherent with the recipient's worldview, but less effective when the misinformation is ascribed to a credible source, is transmitted repeatedly, or when there is a significant interval between the delivery of the misinformation and the subsequent correction (Walter & Tukachinsky, 2020: 172-173). ...
... Specifically, we tested if the results generalize beyond the American context and to COVID-19-related misinformation. Health misinformation may differ from other (political) misinformation in terms of sentiment and diffusion (Pröllochs et al., 2021) and can be less persistent when not associated with people's social identity (Vraga et al., 2019;Walter & Murphy, 2018). Nevertheless, due to the political polarization of COVID-19-related health (mis)information and the high emotional involvement with this topic, we expected no substantial differences to the topic of US politics. ...
Article
Full-text available
Prior studies indicate that emotions, particularly high-arousal emotions, may elicit rapid intuitive thinking, thereby decreasing the ability to recognize misinformation. Yet, few studies have distinguished prior affective states from emotional reactions to false news, which could influence belief in falsehoods in different ways. Extending a study by Martel et al. (Cognit Res: Principles Implic 5: 1–20, 2020), we conducted a pre-registered online survey experiment in Austria (N = 422), investigating associations of emotions and discernment of false and real news related to COVID-19. We found no associations of prior affective state with discernment, but observed higher anger and less joy in response to false compared to real news. Exploratory analyses, including automated analyses of open-ended text responses, suggested that anger arose for different reasons in different people depending on their prior beliefs. In our educated and left-leaning sample, higher anger was often related to recognizing the misinformation as such, rather than accepting the false claims. We conclude that studies need to distinguish between prior affective state and emotional response to misinformation and consider individuals’ prior beliefs as determinants of emotions.
... Fact-checkers often encounter a wide range of potential and conflicting sources and inaccurate claims, which increases cognitive load. On the other hand, providing timely and relevant disconfirmation is crucial to prevent the spread of misinformation and discourage audiences from integrating inaccurate information into their beliefs (Walter and Murphy, 2018;Walter and Tukachinsky, 2020). Additionally, each potential inaccuracy in factchecking can cause a crisis of trust in the audience. ...
Article
Full-text available
In the fast-paced, densely populated information landscape shaped by digitization, distinguishing information from misinformation is critical. Fact-checkers are effective in fighting fake news but face challenges such as cognitive overload and time pressure, which increase susceptibility to cognitive biases. Establishing standards to mitigate these biases can improve the quality of fact-checks, bolster audience trust, and protect against reputation attacks from disinformation actors. While previous research has focused on audience biases, we propose a novel approach grounded in relevance theory and the argumentum model of topics to identify (i) the biases intervening in the fact-checking process, (ii) their triggers, and (iii) at what level of reasoning they act. We showcase the predictive power of our approach through a multimethod case study involving a semi-automatic literature review, a fact-checking simulation with 12 news practitioners, and an online survey involving 40 journalists and fact-checkers. The study highlights the distinction between biases triggered by relevance by effort and by effect, offering a taxonomy of cognitive biases and a method to map them within decision-making processes. These insights can inform training to enhance fact-checkers’ critical thinking skills, improving the quality and trustworthiness of fact-checking practices.
... Numerous strategies have been proposed to combat misinformation, including inoculating people against it [7], automated detection and correction [8], and user-centric rectification [9]. User-centred rectification approaches have shown promising potential for mitigating the spread of misinformation [10]. However, there is a noticeable reluctance among social media users to correct misinformation posted by others when they encounter it, even when they recognise it as such [11]. ...
... As per Micallef et al. (2020a), 96% of counter-misinformation responses come from other (non-expert) social media users, which effectively curbs misinformation (Walter et al., 2020, 2021; Walter and Murphy, 2018) and reduces misperceptions (Bode and Vraga, 2021; Colliander, 2019; Friggeri et al., 2014; Seo et al., 2021; Wijenayake et al., 2020) across topics (Bode and Vraga, 2015, 2018; Vraga and Bode, 2018, 2021), platforms, and demographics (Vraga et al., 2020, 2022a, 2022b). Although scalable, unlike expert responses, most non-expert user responses are rude and use unverified evidence (Micallef et al., 2020b; He et al., 2023), propelling mistrust (Flekova et al., 2016; Thorson et al., 2010) and further agitation (Cheng et al., 2017; Kumar et al., 2018; Masullo and Kim, 2021). ...
... As a result, the number of studies aiming to investigate and propose ways to detect and combat misinformation and its effects is increasing. We highlight the meta-analytical studies by Nathan Walter and his colleagues on effective forms of information correction (Walter and Murphy, 2018; Walter and Tukachinsky, 2020). It is also worth mentioning research on the efficiency of fact-checking (Graves, 2018; Hameleers and Meer, 2020) and studies on the role of government officials in promoting distrust in health policies and science through the use of misinformation (Lovari, 2020). ...
Article
Full-text available
This study examines the contribution of the Social Sciences to disinformation research. Using network analysis and bibliometrics with the Bibliometrix tool in R, we analyze academic publications in Scopus and Web of Science (WOS) to understand scholarly work on misinformation, disinformation, and fake news. We compare data from Scopus and WOS, explore research trends, identify influential authors, examine relevant journals, assess productive institutions and countries, analyze author keywords, and provide a brief analysis of highly cited articles. The findings reveal the scholarly landscape of disinformation research within the Social Sciences. The comparison of Scopus and WOS data highlights the coverage and representation of disinformation studies. Research trends indicate the field's growth and acceptance through publication and citation rates. Influential authors are identified based on publication output and h-index. Key journals in the field are identified, and productive institutions and countries are assessed. The analysis of author keywords reveals central themes and topics within the discipline. In addition, the analysis of highly cited articles provides insights into the theoretical and methodological aspects that have received significant attention.
... The most common dependent variable in misinformation research represents whether people accept or reject misinformation. For meta-analyses, see Walter and Murphy (2018). Peterson's study was an undergraduate senior honors thesis at the University of Illinois that was completed under the supervision of the current paper's second author. As work on the current paper was completed, two new studies were reported; see Goldberg and Marquart (2024) and Graham and Yair (2024). ...
Article
Statements of fact can be proved or disproved with objective evidence, whereas statements of opinion depend on personal values and preferences. Distinguishing between these types of statements contributes to information competence. Conversely, failure at fact-opinion differentiation potentially brings resistance to corrections of misinformation and susceptibility to manipulation. Our analyses show that on fact-opinion differentiation tasks, unsystematic mistakes and mistakes emanating from partisan bias occur at higher rates than accurate responses. Accuracy increases with political sophistication. Affective partisan polarization promotes systematic partisan error: As views grow more polarized, partisans increasingly see their side as holding facts and the opposing side as holding opinions.
... In addition to its theoretical implications, our research has important implications for real-world contexts, as the findings imply that enhancing retraction effectiveness necessitates highly credible retractors. This is particularly critical in real-world scenarios, where retracting misinformation is markedly more challenging than in controlled laboratory settings (Walter & Murphy, 2018). Outside the laboratory, however, it may be harder to find a retractor (or any source) that is undisputedly of high credibility. ...
Article
Full-text available
The Continued Influence Effect (CIE) is the phenomenon that retracted information often continues to influence judgments and inferences. The CIE is rational when the source that retracts the information (the retractor) is less credible than the source that originally presented the information (the informant; Connor Desai et al., 2020). Conversely, a CIE is not rational when the retractor is at least as credible as the informant. Thus, a rational account predicts that the CIE depends on the relative credibility of informant and retractor. In two experiments (N = 151, N = 146), informant credibility and retractor credibility were independently manipulated. Participants read a fictitious news report in which original information and a retraction were each presented by either a source with high credibility or a source with low credibility. In both experiments, when the informant was more credible than the retractor, participants showed a CIE compared to control participants who saw neither the information nor the retraction (ds > 0.82). When the informant was less credible than the retractor, participants showed no CIE, in line with a rational account. However, in Experiment 2, participants also showed a CIE when informant and retractor were equally credible (ds > 0.51). This cannot be explained by a rational account, but is consistent with error-based accounts of the CIE. Thus, a rational account alone cannot fully account for the complete pattern of results, but needs to be complemented with accounts that view the CIE as a memory-based error.
... Other meta-analyses have found medium-sized average debunking effects (r = .35; Walter & Murphy, 2018). A related meta-analysis focusing specifically on science-relevant misinformation found that the average debunking effect was not statistically different from zero (d = 0.19; Chan & Albarracín, 2023), suggesting that corrections may be differentially effective across domains. ...
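The snippet above compares effect sizes reported on two different scales: a correlation (r = .35) and a standardized mean difference (d = 0.19). To see that these really are different magnitudes and not just different units, r can be put on Cohen's d scale with the standard conversion d = 2r/√(1 − r²). A minimal sketch (the function name and the assumption of equal group sizes are ours):

```python
import math

def r_to_d(r: float) -> float:
    """Convert a correlation effect size r to Cohen's d,
    assuming two groups of equal size (Cohen, 1988)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# Walter & Murphy's (2018) overall debunking effect of r = .35
# corresponds to roughly d = 0.75 on Cohen's scale ...
print(round(r_to_d(0.35), 2))  # 0.75
# ... while their harder political-misinformation subset (r = .15)
# corresponds to roughly d = 0.30.
print(round(r_to_d(0.15), 2))  # 0.3
```

On this common scale, the r = .35 average debunking effect is far larger than the d = 0.19 science-specific estimate from Chan and Albarracín (2023), consistent with the claim that corrections are differentially effective across domains.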
Article
Full-text available
The standard method for addressing the consequences of misinformation is the provision of a correction in which the misinformation is directly refuted. However, the impact of misinformation may also be successfully addressed by introducing or bolstering alternative beliefs with opposite evaluative implications. Six preregistered experiments clarified important processes influencing the impact of bypassing versus correcting misinformation via negation. First, we find that, following exposure to misinformation, bypassing generally changes people’s attitudes and intentions more than correction in the form of a simple negation. Second, this relative advantage is not a function of the depth at which information is processed but rather the degree to which people form attitudes or beliefs when they receive the misinformation. When people form attitudes when they first receive the misinformation, bypassing has no advantage over corrections, likely owing to anchoring. In contrast, when individuals focus on the accuracy of the statements and form beliefs, bypassing is significantly more successful at changing their attitudes because these attitudes are constructed based on expectancy-value principles, while misinformation continues to influence attitudes after correction. Broader implications of this work are discussed.
... Fact-checking appears to be a generally robust strategy for combatting misinformation [Walter et al., 2020], and its effects appear to endure over time as well [Porter & Wood, 2021]. However, recent meta-analyses have found no significant overall effects for the correction of science-related misinformation [Walter & Murphy, 2018; Chan & Albarracín, 2023], and additionally, different forms of fact-checking or correction have been differentially effective. Corrections that are partial or do not make a strong claim about the veracity of the original misinformation (such as a scale of truthfulness) tend to be weaker in their effect [Walter et al., 2020], while more detailed corrections have an overall tendency to be more effective [Chan & Albarracín, 2023]. ...
Article
Full-text available
Former government intelligence officer David Grusch became a hot new topic in the UFO world when he declared that the government was hiding an alien ship crash retrieval program. Can this media coverage be influential in increasing belief in UFOs? And can a credible critic of Grusch's claims successfully negate the impact of the media coverage on the acceptance of misinformation? A three-condition experiment (N = 287) showed that a counternarrative can successfully negate the influence of his claims on conspiratorial beliefs. We suggest that these results have practical implications for journalists in their coverage of controversial claims.
... Such interventions might be effective when two conditions are satisfied: (1) that participants indeed correct their misperceptions and (2) that they see a need to update their policy attitudes in the light of their newly acquired factual knowledge. Regarding the first condition, meta-analytic studies show that corrections are generally effective in reducing misperceptions (Chan et al., 2017;Walter and Murphy, 2018). Although there have been some instances in which the corrections "backfired" and increased misperceptions (Nyhan and Reifler, 2010;Ma et al., 2019), most studies established that informed participants indeed report more accurate beliefs. ...
Article
Full-text available
Democrats and Republicans have polarized in their attitudes (i.e., ideological polarization) and their feelings toward each other (i.e., affective polarization). Simultaneously, both groups also seem to diverge in their factual beliefs about reality. This preregistered survey experiment among 2,253 American citizens examined how this factual belief polarization may or may not fuel ideological and affective polarization around four key issues: income differences, immigration, climate change, and defense spending. On all issues except immigration, Democrats and Republicans were equally or more divided in their factual beliefs about the present than in their ideals for the future. Corrective information decreased partisan polarization over some ideals, but not directional policy attitudes. Priming respondents’ factual beliefs conversely increased polarization around defense spending, but not other issues. Much remains unclear about the complex relation between factual beliefs and polarization, but measuring ideals and priming beliefs could be promising avenues for future research.
... First, little research is available on how to develop and disseminate media literacy programs for older adults, especially media literacy programs that teach older adults to use technological solutions to fight online misinformation [19,20]. Second, although fact checking, debunking, or correcting health misinformation are generally effective [21][22][23][24], these attempts may backfire due to psychological reactance, a motivational reaction derived from a threat to individuals' autonomy and freedom to make choices and manifested in negative cognitions and anger emotions [25]. ...
Article
Full-text available
Background Older adults, a population particularly susceptible to misinformation, may experience attempts at health-related scams or defrauding, and they may unknowingly spread misinformation. Previous research has investigated managing misinformation through media literacy education or supporting users by fact-checking information and cautioning for potential misinformation content, yet studies focusing on older adults are limited. Chatbots have the potential to educate and support older adults in misinformation management. However, many studies focusing on designing technology for older adults use the needs-based approach and consider aging as a deficit, leading to issues in technology adoption. Instead, we adopted the asset-based approach, inviting older adults to be active collaborators in envisioning how intelligent technologies can enhance their misinformation management practices. Objective This study aims to understand how older adults may use chatbots’ capabilities for misinformation management. Methods We conducted 5 participatory design workshops with a total of 17 older adult participants to ideate ways in which chatbots can help them manage misinformation. The workshops included 3 stages: developing scenarios reflecting older adults’ encounters with misinformation in their lives, understanding existing chatbot platforms, and envisioning how chatbots can help intervene in the scenarios from stage 1. Results We found that issues with older adults’ misinformation management arose more from interpersonal relationships than individuals’ ability to detect misinformation in pieces of information. This finding underscored the importance of chatbots to act as mediators that facilitate communication and help resolve conflict. In addition, participants emphasized the importance of autonomy. They desired chatbots to teach them to navigate the information landscape and come to conclusions about misinformation on their own. 
Finally, we found that older adults’ distrust in IT companies and governments’ ability to regulate the IT industry affected their trust in chatbots. Thus, chatbot designers should consider using well-trusted sources and practicing transparency to increase older adults’ trust in the chatbot-based tools. Overall, our results highlight the need for chatbot-based misinformation tools to go beyond fact checking. Conclusions This study provides insights for how chatbots can be designed as part of technological systems for misinformation management among older adults. Our study underscores the importance of inviting older adults to be active co-designers of chatbot-based interventions.
Article
Purpose Despite the effectiveness of correction in reducing misperceptions, individuals are often reluctant to correct misinformation on social media platforms. To enhance misinformation management efforts, this study investigates how best to motivate corrective efforts, using the situational theory of problem solving and the health belief model as guiding frameworks. Design/methodology/approach Two online survey experiments were conducted: one college student sample (Study 1, N = 458) and one adult sample recruited via a survey company (Study 2, N = 600). Both studies examined the effectiveness of problem-recognition messages and cues-to-action (CTA), from either the CDC or a layperson, on motivating corrections of raw milk misinformation, with a 3 (CDC high problem-recognition messages vs layperson high problem-recognition messages vs control) x 2 (CTA presence vs. absence) experimental design. Findings Both studies suggest that high problem-recognition messages from the CDC significantly increase correction intentions. The same problem-recognition messages from a layperson can also increase the correction intentions of college students (not adults). Surprisingly, CTA did not enhance corrective intentions for the adult sample recruited from a survey company but reduced corrective intentions among the college student sample. No significant interaction was found between problem-recognition messages and CTA on corrective intentions. Originality/value This study illuminates the effectiveness of using problem-recognition messages from an authoritative source and a layperson to motivate corrections. However, the unexpected results highlight the need for careful CTA design to avoid backfiring.
Article
Full-text available
Purpose The two primary purposes of the current study are to further understand the impact of corrective messages on misperceptions about election fraud in the US and to test the effect of party affiliation of the accused politician on participants’ election misperceptions. Design/methodology/approach To assess these relationships, we conducted a between-subjects randomized online experiment. Findings Our results show that participants in the control condition held higher misperceptions than those who were exposed to a correction message. Findings also showed that liberal media use was negatively associated with election fraud misperceptions, while conservative media use, information from Donald Trump, authoritarianism and self-reported conservatives were positively associated with election fraud misperceptions. Originality/value Experimental test to understand election fraud misperceptions, using our own original stimulus materials.
Article
Numerous global trends related to communicative conflicts—like widespread public dissent, the increasing fragmentation of the digital media landscape, the fast-paced dissemination of mis- and disinformation, polarization of public debates, and delegitimizing populist rhetoric—form a perfect storm that significantly disrupts today’s society on multiple levels. Against this backdrop, research on content, causes, consequences, and counter-strategies of conflicts has been central to the discipline of communication science and beyond over the past decades. Although many of these societal challenges are rooted in communication problems similar in nature, current research lines appear to move forward in isolation, creating largely disconnected streams of research that could benefit from more integration. With this article, we aim to bring together disconnected strands of literature in communication that revolve around conflicts in mass and digital communication under the umbrella term “informational conflict.” This framework synthesizes existing knowledge, enabling us to better understand the root causes of increasing cleavages in society and to forward potential solutions leading to conflict resolution.
Article
Introduction This study examined whether different cigarette package features such as tar yield display, tar warning statement, and plain packaging affect beliefs about tar intake, smoothness, and safety of low-tar cigarettes among South Koreans who smoke. Aims and Methods An online randomized between-subjects experiment was carried out (n = 500) on a panel of South Koreans who smoke. Participants were exposed to either a mock cigarette package that (1) displayed tar yield, (2) did not display tar yield, (3) showed a tar warning statement, or (4) was plain packaged. Beliefs about tar intake, smoothness, and safety were measured post-exposure. Beliefs were compared across conditions, and mediation analysis was conducted. Results Participants exposed to the tar warning statement believed the mock cigarette would deliver less tar than those exposed only to the package that displayed tar yield. Those who viewed the cigarette package with no tar yield number were less likely to agree that the cigarette would be smoother than those who viewed the package with a visible tar yield number. The effect of viewing the tar warning statement on safety beliefs was fully mediated by tar intake beliefs. The effect of exposure to tar yield display on safety beliefs was fully mediated by smoothness beliefs. Conclusions Study results indicate that the current tar warning statement could increase misperceptions. Removing tar yield numbers may reduce smoothness beliefs about low-tar cigarettes. Health communication efforts should address beliefs about tar intake and smoothness when trying to correct misperceptions about the safety of low-tar cigarettes. Implications Previous research has found that tar yield numbers displayed on the front of South Korean low-tar cigarette packages may mislead people into perceiving less harm.
However, studies have not yet examined whether other package elements such as the current tar warning statement or plain packaging could reduce misperceptions. Study results indicated that viewing the current tar warning statement backfires by increasing the belief that less tar is taken in. Viewing tar yield numbers also increased the perception that the cigarette would feel smoother. Plain packaging did not affect beliefs about tar intake, safety, or smoothness. Findings can inform tobacco packaging policies and health communication efforts to reduce misperceptions about low-tar cigarettes.
Article
Background The relationship between low-density lipoprotein cholesterol (LDL-C) and atherosclerotic cardiovascular disease (ASCVD) is well-established. Recently, non-high-density lipoprotein cholesterol (non-HDL-C) has been validated as a superior predictor of ASCVD, especially in individuals with mild to moderate hypertriglyceridemia. The EPHESUS study evaluated real-life hypercholesterolemia management and awareness of non-HDL-C in cardiology outpatient practices. Methods Data from 1868 patients with ASCVD or high-risk primary prevention were analyzed to assess cholesterol goal attainment, statin adherence, and physician perceptions. This analysis focused on awareness of non-HDL-C as an ASCVD predictor, adherence to lipid-lowering therapy, and clinicians’ perceptions. Associations between patient demographics, clinical characteristics, and statin adherence were examined. Results Among patients, 20.2% achieved non-HDL-C and 16.5% achieved LDL-C goals. In primary prevention, 18.1% reached non-HDL-C and 10.6% reached LDL-C goals, while in secondary prevention, 20.8% and 18.0% met these goals. High-intensity statin therapy was observed in 21.2% of patients, with 30.3% and 24.3% achieving non-HDL-C and LDL-C targets, respectively. Statin use was lower in women than men (54.0% vs 66.9%, P < 0.001). Women less frequently achieved non-HDL-C and LDL-C goals in both prevention groups. Conclusions Non-HDL-C goal attainment remains suboptimal in both primary and secondary prevention of hypercholesterolemia, particularly in women who had lower statin use and goal achievement. These findings highlight the need for improved awareness, education, and treatment strategies to reduce residual cardiovascular risk and improve outcomes.
Book
Misinformation can be broadly defined as information that is inaccurate or false according to the best available evidence, or information whose validity cannot be verified. It is created and spread with or without clear intent to cause harm. There is well-documented evidence that misinformation persists despite fact-checking and the presentation of corrective information, often traveling faster and deeper than facts in the online environment. Drawing on the frameworks of social judgment theory, cognitive dissonance theory, and motivated information processing, the authors conceptualize corrective information as a generic type of counter-attitudinal message and misinformation as attitude-congruent messages. They then examine the persistence of misinformation through the lens of biased responses to attitude-inconsistent versus -consistent information. Psychological inoculation is proposed as a strategy to mitigate misinformation.
Article
This study aims to develop and test “myth-busting,” a promising strategy for science communicators to debunk common misconceptions among citizens. Widely spread counterfactual beliefs about biotechnologically produced food (BTF) as (a) unhealthy and unsafe, (b) unsustainable, and (c) unnatural served as the context of application. A preregistered online experiment (N = 2,925) showed that myth-busting messages effectively correct misconceptions. Trust in science, but not pre-existing beliefs, moderated this effect. Reduced misconceptions in turn led to greater changes in attitudes toward BTF. The study exemplifies an interesting option for science communicators to address citizens’ misbeliefs about complex science and technology.
Article
Full-text available
This meta-analysis investigated the factors underlying effective messages to counter attitudes and beliefs based on misinformation. Because misinformation can lead to poor decisions about consequential matters and is persistent and difficult to correct, debunking it is an important scientific and public-policy goal. This meta-analysis (k = 52, N = 6,878) revealed large effects for presenting misinformation (ds = 2.41–3.08), debunking (ds = 1.14–1.33), and the persistence of misinformation in the face of debunking (ds = 0.75–1.06). Persistence was stronger and the debunking effect was weaker when audiences generated reasons in support of the initial misinformation. A detailed debunking message correlated positively with the debunking effect. Surprisingly, however, a detailed debunking message also correlated positively with the misinformation-persistence effect.
Article
Full-text available
One of the reasons for the popularity of meta-analysis is the notion that these analyses will possess more power to detect effects than individual studies. This is inevitably the case under a fixed-effect model. However, the inclusion of the between-study variance in the random-effects model, and the need to estimate this parameter, can have unfortunate implications for this power. We develop methods for assessing the power of random-effects meta-analyses, and the average power of the individual studies that contribute to meta-analyses, so that these powers can be compared. In addition to deriving new analytical results and methods, we apply our methods to 1991 meta-analyses taken from the Cochrane Database of Systematic Reviews to retrospectively calculate their powers. We find that, in practice, 5 or more studies are needed to reasonably consistently achieve powers from random-effects meta-analyses that are greater than the studies that contribute to them. Statistical inference under the random-effects model is not only challenging when there are very few studies but also less worthwhile in such cases. The assumption that meta-analysis will result in an increase in power is challenged by our findings.
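The power comparison described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes the between-study variance τ² is known in advance (the paper's point is that having to estimate τ² makes matters worse), and the study variances, τ², and true effect below are made-up values.

```python
from math import sqrt
from statistics import NormalDist

def re_meta_power(variances, tau2, theta, alpha=0.05):
    """Approximate power of a random-effects meta-analysis.

    variances : within-study sampling variances v_i
    tau2      : assumed (known) between-study variance
    theta     : true mean effect under the alternative

    Weights are w_i = 1/(v_i + tau2); the pooled estimate has
    variance 1/sum(w_i), so power follows from a two-sided z-test.
    """
    nd = NormalDist()
    w = [1.0 / (v + tau2) for v in variances]
    se_pooled = sqrt(1.0 / sum(w))
    z_crit = nd.inv_cdf(1 - alpha / 2)
    lam = theta / se_pooled  # noncentrality of the pooled z-statistic
    return nd.cdf(lam - z_crit) + nd.cdf(-lam - z_crit)

# Five studies of equal precision; tau2 > 0 inflates the pooled
# standard error and therefore lowers power relative to tau2 = 0.
power = re_meta_power([0.04] * 5, tau2=0.02, theta=0.3)
fixed = re_meta_power([0.04] * 5, tau2=0.0, theta=0.3)
assert fixed > power
```

Setting τ² to zero recovers the fixed-effect calculation, which is why power under the fixed-effect model can only match or exceed the random-effects figure for the same data.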
Article
Full-text available
Background: A substantial minority of American adults continue to hold influential misperceptions about childhood vaccine safety. Growing public concern and refusal to vaccinate poses a serious public health risk. Evaluations of recent pro-vaccine health communication interventions have revealed mixed results (at best). This study investigated whether highlighting consensus among medical scientists about childhood vaccine safety can lower public concern, reduce key misperceptions about the discredited autism-vaccine link and promote overall support for vaccines. Methods: American adults (N = 206) were invited to participate in an online survey experiment. Participants were randomly assigned to either a control group or to one of three treatment interventions. The treatment messages were based on expert-consensus estimates and either normatively described or prescribed the extant medical consensus: "90 % of medical scientists agree that vaccines are safe and that all parents should be required to vaccinate their children". Results: Compared to the control group, the consensus messages significantly reduced vaccine concern (M = 3.51 vs. M = 2.93, p < 0.01) and belief in the vaccine-autism link (M = 3.07 vs. M = 2.15, p < 0.01) while increasing perceived consensus about vaccine safety (M = 83.93 vs. M = 89.80, p < 0.01) and public support for vaccines (M = 5.66 vs. M = 6.22, p < 0.01). Mediation analysis further revealed that the public's understanding of the level of scientific agreement acts as an important "gateway" belief by promoting public attitudes and policy support for vaccines directly as well as indirectly by reducing endorsement of the discredited autism-vaccine link. Conclusion: These findings suggest that emphasizing the medical consensus about (childhood) vaccine safety is likely to be an effective pro-vaccine message that could help prevent current immunization rates from declining.
We recommend that clinicians and public health officials highlight and communicate the high degree of medical consensus on (childhood) vaccine safety when possible.
Article
Full-text available
Citizens are frequently misinformed about political issues and candidates but the circumstances under which inaccurate beliefs emerge are not fully understood. This experimental study demonstrates that the independent experience of two emotions, anger and anxiety, in part determines whether citizens consider misinformation in a partisan or open‐minded fashion. Anger encourages partisan, motivated evaluation of uncorrected misinformation that results in beliefs consistent with the supported political party, while anxiety at times promotes initial beliefs based less on partisanship and more on the information environment. However, exposure to corrections improves belief accuracy, regardless of emotion or partisanship. The results indicate that the unique experience of anger and anxiety can affect the accuracy of political beliefs by strengthening or attenuating the influence of partisanship.
Article
Full-text available
The piecemeal reporting of unfolding news events can lead to the reporting of mistaken information (or misinformation) about the cause of the newsworthy event, which later needs to be corrected. Studies of the continued influence effect have shown, however, that corrections are not entirely effective in reversing the effects of initial misinformation. Instead, participants continue to rely on the discredited misinformation when asked to draw inferences and make judgments about the news story. Most prior studies have employed misinformation that explicitly states the likely cause of an outcome. However, news stories do not always provide misinformation explicitly, but instead merely imply that something or someone might be the cause of an adverse outcome. Three experiments employing both direct and indirect measures of misinformation reliance were conducted to assess whether implied misinformation is more resistant to correction than explicitly stated misinformation. The results supported this prediction. Experiment 1 showed that corrections reduced misinformation reliance in both the explicit and implied conditions, but the correction was much less effective following implied misinformation. Experiment 2 showed that implied misinformation was more resistant to correction than explicit misinformation, even when the correction was paired with an alternative explanation. Finally, Experiment 3 showed that greater resistance to correction in the implied misinformation condition did not reflect greater disbelief in the correction. Potential reasons why implied misinformation is more difficult to correct than explicitly provided misinformation are discussed.
Article
Full-text available
The increasing prevalence of misinformation in society may adversely affect democratic decision making, which depends on a well-informed public. False information can originate from a number of sources including rumors, literary fiction, mainstream media, corporate-vested interests, governments, and nongovernmental organizations. The rise of the Internet and user-driven content has provided a venue for quick and broad dissemination of information, not all of which is accurate. Consequently, a large body of research spanning a number of disciplines has sought to understand misinformation and determine which interventions are most effective in reducing its influence. This essay summarizes research into misinformation, bringing together studies from psychology, political science, education, and computer science. Cognitive psychology investigates why individuals struggle with correcting misinformation and inaccurate beliefs, and why myths are so difficult to dislodge. Two important findings involve (i) various “backfire effects,” which arise when refutations ironically reinforce misconceptions, and (ii) the role of worldviews in accentuating the persistence of misinformation. Computer scientists simulate the spread of misinformation through social networks and develop algorithms to automatically detect or neutralize myths. We draw together various research threads to provide guidelines on how to effectively refute misconceptions without risking backfire effects.
Article
Full-text available
The divergence of public opinion and climate science in the English-speaking world, particularly the United States and Australia, has attracted a variety of explanations. One of the more interesting accounts, from a psychological perspective, is the influence of ideology on climate change beliefs. Previous work suggests that ideology trumps knowledge in shaping climate change beliefs. However, these studies have typically examined the influence of proxy measures of knowledge rather than specific climate change knowledge. The goal of the present research was to provide some clarification on the different influences of knowledge and ideology on beliefs about climate change. Specifically, we investigated the relationship between specific climate change knowledge, hierarchical and individualistic ideology, and climate change belief in a national sample (N = 335) of the Australian public. Contrary to research involving proxy knowledge measures, we found that people who had greater knowledge of climate change causes were more willing to accept that climate change is occurring. Furthermore, knowledge of causes attenuated the negative relationship between individualistic ideology and belief that climate change exists. Our findings suggest that climate change knowledge has the potential to positively influence public discourse on the issue.
Article
Full-text available
Science is of critical importance to daily life in a knowledge society and has a significant influence on many everyday decisions. As scientific problems increase in their number and complexity, so do the challenges facing the public in understanding these issues. Our objective is to focus on 3 of those challenges: the challenge of reasoning about knowledge and the processes of knowing, the challenge of overcoming biases in that reasoning, and the challenge of overcoming misconceptions. We propose that research in epistemic cognition, motivated reasoning, and conceptual change can help to identify, understand, and address these obstacles for public understanding of science. We explain the contributions of each of these areas in providing insights into the public's understandings and misunderstandings about knowledge, the nature of science, and the content of science. We close with educational recommendations for promoting scientific literacy.
Article
Full-text available
The current studies investigated the potential impact of anti-vaccine conspiracy beliefs, and exposure to anti-vaccine conspiracy theories, on vaccination intentions. In Study 1, British parents completed a questionnaire measuring beliefs in anti-vaccine conspiracy theories and the likelihood that they would have a fictitious child vaccinated. Results revealed a significant negative relationship between anti-vaccine conspiracy beliefs and vaccination intentions. This effect was mediated by the perceived dangers of vaccines, and feelings of powerlessness, disillusionment and mistrust in authorities. In Study 2, participants were exposed to information that either supported or refuted anti-vaccine conspiracy theories, or a control condition. Results revealed that participants who had been exposed to material supporting anti-vaccine conspiracy theories showed less intention to vaccinate than those in the anti-conspiracy condition or controls. This effect was mediated by the same variables as in Study 1. These findings point to the potentially detrimental consequences of anti-vaccine conspiracy theories, and highlight their potential role in shaping health-related behaviors.
Article
Full-text available
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation. We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread. We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people’s memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing. We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. 
Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.
Article
Full-text available
Meta-analysis collects and synthesizes results from individual studies to estimate an overall effect size. If published studies are chosen, say through a literature review, then an inherent selection bias may arise, because, for example, studies may tend to be published more readily if they are statistically significant, or deemed to be more “interesting” in terms of the impact of their outcomes. We develop a simple rank-based data augmentation technique, formalizing the use of funnel plots, to estimate and adjust for the numbers and outcomes of missing studies. Several nonparametric estimators are proposed for the number of missing studies, and their properties are developed analytically and through simulations. We apply the method to simulated and epidemiological datasets and show that it is both effective and consistent with other criteria in the literature.
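The rank-based idea behind this funnel-plot adjustment can be sketched as follows. This is a simplified, single-pass version of the L0 estimator with invented effect sizes, not the published procedure: the full trim-and-fill method iterates (trim the estimated number of extreme studies, re-estimate the pooled effect, repeat) and also offers alternative estimators such as R0.

```python
def l0_missing(effects, mu):
    """Single-pass L0 estimate of the number of suppressed studies
    (a simplified sketch of Duval & Tweedie's rank-based idea).

    effects : observed study effect sizes
    mu      : pooled effect estimate around which the funnel
              should be symmetric in the absence of selection
    """
    n = len(effects)
    # Rank studies 1..n by absolute deviation from the pooled effect
    ranked = sorted(effects, key=lambda y: abs(y - mu))
    # Wilcoxon-type rank sum of the studies lying above mu;
    # under symmetry its expectation is n(n+1)/4
    t = sum(rank for rank, y in enumerate(ranked, start=1) if y > mu)
    l0 = (4 * t - n * (n + 1)) / (2 * n - 1)
    return max(0, round(l0))

# Asymmetric funnel: the largest deviations all sit above mu,
# suggesting their mirror-image counterparts were never published.
assert l0_missing([0.05, 0.1, 0.4, 0.5, 0.6], mu=0.1) == 2
# Roughly symmetric funnel: no missing studies indicated.
assert l0_missing([-0.3, 0.1, -0.2, 0.25, 0.15], mu=0.0) == 0
```

Once the number of missing studies is estimated, the method "fills" the funnel with their mirror images and re-pools, which is what yields the selection-adjusted effect size.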
Article
Full-text available
Two quite different types of research design are characteristically used to study the modification of attitudes through communication. In the first type, the experiment, individuals are given a controlled exposure to a communication and the effects are evaluated in terms of the amount of change in attitude or opinion produced. In the alternative research design, the sample survey, information is secured through interviews or questionnaires, both concerning the respondent's exposure to various communications and his attitudes and opinions on various issues. Divergences in results from the two methods are cited and the reconciliation of apparent conflicts is attempted. There appear to be certain inherent limitations of each method, yet each represents an important emphasis, and the mutual importance of the two approaches to communication effectiveness is stressed. The challenge of future work is one of fruitfully combining their virtues so that we may develop a social psychology of communication with the conceptual breadth provided by correlational study of process and with the rigorous but more delimited methodology of the experiment.
Article
Full-text available
This chapter discusses the influence of misinformation in memory. In a dynamic world, information in memory is frequently outdated, corrected, or replaced. People often make use of this misinformation in memory during later reasoning. The example demonstrates several important features of this phenomenon. First, the information and its correction are clearly presented and connected together in memory. Second, the information and its correction are believable, and accepted as valid in the absence of any conflicting information. Third, following correction, the value of the initial information is clearly identified. And further, the misinformation and its correction are both accurately recalled later, at the time of reasoning. The power of misinformation appears to arise from its legitimacy as initially presented. Direct negation of the misinformation, no matter how stated, does not address the initial value of the information as true and relevant, nor explain how the negation may have arisen.
Article
Full-text available
The meta-analytic random effects model assumes that the variability in effect size estimates drawn from a set of studies can be decomposed into two parts: heterogeneity due to random population effects and sampling variance. In this context, the usual goal is to estimate the central tendency and the amount of heterogeneity in the population effect sizes. The amount of heterogeneity in a set of effect sizes has implications regarding the interpretation of the meta-analytic findings and often serves as an indicator for the presence of potential moderator variables. Five population heterogeneity estimators were compared in this article analytically and via Monte Carlo simulations with respect to their bias and efficiency.
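The most widely used estimator in this family, the DerSimonian-Laird method-of-moments estimator, can be sketched as below. This is a minimal illustration with made-up effect sizes and variances, not the article's simulation code; the article compares this estimator against several alternatives.

```python
def dl_tau2(effects, variances):
    """DerSimonian-Laird (method-of-moments) estimate of the
    between-study variance tau^2.

    Computes Cochran's Q with fixed-effect weights, then solves
    E[Q] = (k - 1) + c * tau^2 for tau^2, truncating at zero.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    return max(0.0, (q - (k - 1)) / c)

# Identical effects: Q = 0 < k - 1, so the estimate truncates to 0
assert dl_tau2([0.3, 0.3, 0.3, 0.3], [0.05] * 4) == 0.0
# Dispersed effects yield a positive heterogeneity estimate
tau2 = dl_tau2([0.0, 0.2, 0.4, 0.6], [0.01] * 4)
assert tau2 > 0
```

The truncation at zero is one source of the bias the article examines: whenever observed dispersion is below its sampling expectation, the estimator returns exactly zero.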
Book
Full-text available
Meta-analysis is arguably the most important methodological innovation in the social and behavioral sciences in the last 25 years. Developed to offer researchers an informative account of which methods are most useful in integrating research findings across studies, this book will enable the reader to apply, as well as understand, meta-analytic methods. Rather than taking an encyclopedic approach, the authors have focused on carefully developing those techniques that are most applicable to social science research, and have given a general conceptual description of more complex and rarely-used techniques. Fully revised and updated, Methods of Meta-Analysis, Second Edition is the most comprehensive text on meta-analysis available today. New to the Second Edition: * An evaluation of fixed versus random effects models for meta-analysis* New methods for correcting for indirect range restriction in meta-analysis* New developments in corrections for measurement error* A discussion of a new Windows-based program package for applying the meta-analysis methods presented in the book* A presentation of the theories of data underlying different approaches to meta-analysis
Article
Full-text available
There are 2 families of statistical procedures in meta-analysis: fixed- and random-effects procedures. They were developed for somewhat different inference goals: making inferences about the effect parameters in the studies that have been observed versus making inferences about the distribution of effect parameters in a population of studies from a random sample of studies. The authors evaluate the performance of confidence intervals and hypothesis tests when each type of statistical procedure is used for each type of inference and confirm that each procedure is best for making the kind of inference for which it was designed. Conditionally random-effects procedures (a hybrid type) are shown to have properties in between those of fixed- and random-effects procedures.
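The computational difference between the two families is small and can be sketched in one function; the inferential difference is what matters. Below is a minimal illustration with invented effect sizes: both analyses use inverse-variance weighting, but the random-effects weights add τ² to each study's variance, which widens the confidence interval.

```python
from math import sqrt
from statistics import NormalDist

def pool(effects, variances, tau2=0.0):
    """Inverse-variance pooled effect and 95% CI.

    With tau2 = 0 this is the fixed-effect analysis (inference about
    these studies); a positive tau2 gives the random-effects analysis
    (inference about a population of studies), with a wider interval.
    """
    w = [1.0 / (v + tau2) for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = sqrt(1.0 / sum(w))
    z = NormalDist().inv_cdf(0.975)
    return mu, (mu - z * se, mu + z * se)

mu_fe, ci_fe = pool([0.1, 0.3, 0.5], [0.02, 0.02, 0.02])
mu_re, ci_re = pool([0.1, 0.3, 0.5], [0.02, 0.02, 0.02], tau2=0.04)
# With equal study variances the point estimates coincide,
# but the random-effects interval is wider.
assert (ci_re[1] - ci_re[0]) > (ci_fe[1] - ci_fe[0])
```

When study variances are unequal, the two analyses also disagree on the point estimate, because adding τ² pulls the weights toward equality across studies.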
Article
Full-text available
U.S. public opinion regarding climate change has become increasingly polarized in recent years, as partisan think tanks and others worked to recast an originally scientific topic into a political wedge issue. Nominally “scientific” arguments against taking anthropogenic climate change seriously have been publicized to reach informed but ideologically receptive audiences. Reflecting the success of such arguments, polls have noted that concern about climate change increased with education among Democrats, but decreased with education among Republicans. These observations lead to the hypothesis that there exist interaction (non-additive) effects between education or knowledge and political orientation, net of other background factors, in predicting public concern about climate change. Two regional telephone surveys, conducted in New Hampshire (n = 541) and Michigan (n = 1,008) in 2008, included identical climate-change questions that provide opportunities to test this hypothesis. Multivariate analysis of both surveys finds significant interactions. These empirical results fit with theoretical interpretations and several other recent studies. They suggest that the classically identified social bases of concern about the environment in general, and climate in particular, have shifted in recent years. Narrowcast media, including the many Web sites devoted to discrediting climate-change concerns, provide ideal conduits for channeling contrarian arguments to an audience predisposed to believe and electronically spread them further. Active-response Web sites by climate scientists could prove critical to counterbalancing contrarian arguments.
Article
Full-text available
An extensive literature addresses citizen ignorance, but very little research focuses on misperceptions. Can these false or unsubstantiated beliefs about politics be corrected? Previous studies have not tested the efficacy of corrections in a realistic format. We conducted four experiments in which subjects read mock news articles that included either a misleading claim from a politician, or a misleading claim and a correction. Results indicate that corrections frequently fail to reduce misperceptions among the targeted ideological group. We also document several instances of a “backfire effect” in which corrections actually increase misperceptions among the group in question. Keywords: Misperceptions, Misinformation, Ignorance, Knowledge, Correction, Backfire
Article
Full-text available
Information that initially is presumed to be correct, but that is later retracted or corrected, often continues to influence memory and reasoning. This occurs even if the retraction itself is well remembered. The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning--giving detailed information about the continued influence effect (CIE)--succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning--reminding people that facts are not always properly checked before information is disseminated--was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether.
Article
Full-text available
It is well known that people often continue to rely on initial misinformation even if this information is later corrected and even if the correction itself is remembered. This article investigated the impact of emotionality of the material on people's ability to discount corrected misinformation. The focus was on moderate levels of emotionality comparable to those elicited by real-world news reports. Emotionality has frequently been shown to have an impact upon reasoning and memory, but the generality of this influence remains unclear. In three experiments, participants read a report of a fictitious plane crash that was initially associated with either an emotionally laden cause (terrorist attack) or an emotionally more neutral cause (bad weather). This initial attribution was followed by a retraction and presentation of an alternative cause (faulty fuel tank). The scenarios demonstrably affected participants' self-reported feelings. However, all three experiments showed that emotionality does not affect the continued influence of misinformation.
Article
Full-text available
A second-order meta-analysis was conducted to assess the implications of using college student subjects in social science research. Four meta-analyses investigating response homogeneity (cumulative N > 650,000) and 30 meta-analyses reporting effect sizes for 65 behavioral or psychological relationships (cumulative N > 350,000) provided comparative data for college student subjects and nonstudent (adult) subjects for the present research. In general, responses of college student subjects were found to be slightly more homogeneous than those of nonstudent subjects. Moreover, effect sizes derived from college student subjects frequently differed from those derived from nonstudent subjects both directionally and in magnitude. Because there was no systematic pattern to the differences observed, caution must be exercised when attempting to extend any relationship found using college student subjects to a nonstudent (adult) population. The results augur in favor of, and emphasize the importance of, replicating research based on college student subjects with nonstudent subjects before attempting any generalizations. Copyright 2001 by the University of Chicago.
Article
Full-text available
We study recently developed nonparametric methods for estimating the number of missing studies that might exist in a meta-analysis and the effect that these studies might have had on its outcome. These are simple rank-based data augmentation techniques, which formalize the use of funnel plots. We show that they provide effective and relatively powerful tests for evaluating the existence of such publication bias. After adjusting for missing studies, we find that the point estimate of the overall effect size is approximately correct and coverage of the effect size confidence intervals is substantially improved, in many cases recovering the nominal confidence levels entirely. We illustrate the trim and fill method on existing meta-analyses of studies in clinical trials and psychometrics.
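A close relative of these rank-based diagnostics, Begg and Mazumdar's rank correlation test, conveys the underlying idea of formalizing funnel-plot asymmetry. The sketch below implements that simpler test, not the trim and fill algorithm itself, and the function name is illustrative:

```python
def begg_rank_test(effects, variances):
    """Kendall rank correlation between standardized deviations from the
    fixed-effect mean and study variances. A tau far from zero suggests
    funnel-plot asymmetry (Begg & Mazumdar's test, shown as a sketch)."""
    w = [1.0 / v for v in variances]
    mean_fe = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    pooled_var = 1.0 / sum(w)
    # Standardize each deviation by its variance net of the pooled variance
    t = [(e - mean_fe) / (v - pooled_var) ** 0.5
         for e, v in zip(effects, variances)]
    # Kendall's tau between the standardized deviations and the variances
    conc = disc = 0
    n = len(effects)
    for i in range(n):
        for j in range(i + 1, n):
            s = (t[i] - t[j]) * (variances[i] - variances[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)
```

A tau near zero is consistent with a symmetric funnel; a large positive tau indicates that smaller (higher-variance) studies report systematically larger effects, which is the pattern trim and fill then adjusts for by imputing mirrored studies.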
Article
Full-text available
Calculations of the power of statistical tests are important in planning research studies (including meta-analyses) and in interpreting situations in which a result has not proven to be statistically significant. The authors describe procedures to compute statistical power of fixed- and random-effects tests of the mean effect size, tests for heterogeneity (or variation) of effect size parameters across studies, and tests for contrasts among effect sizes of different studies. Examples are given using 2 published meta-analyses. The examples illustrate that statistical power is not always high in meta-analysis.
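The fixed-effects case the authors describe reduces to a z-test power computation; below is a minimal sketch under the assumption of known within-study variances (function and variable names are mine, not from the paper):

```python
from statistics import NormalDist

def fixed_effect_power(variances, true_mean, alpha=0.05):
    """Power of the two-sided z-test that the mean effect size is zero,
    under a fixed-effect model with known within-study variances."""
    nd = NormalDist()
    # Standard error of the inverse-variance weighted mean
    se = (1.0 / sum(1.0 / v for v in variances)) ** 0.5
    z_crit = nd.inv_cdf(1 - alpha / 2)
    lam = true_mean / se  # noncentrality of the test statistic
    return (1 - nd.cdf(z_crit - lam)) + nd.cdf(-z_crit - lam)
```

With ten studies of variance 0.04 each and a true mean effect of 0.2, power comes out near .89, but with only five such studies it falls to roughly .61, illustrating the authors' point that statistical power is not always high in meta-analysis.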
Article
Full-text available
The misinformation effect refers to the impairment in memory for the past that arises after exposure to misleading information. The phenomenon has been investigated for at least 30 years, as investigators have addressed a number of issues. These include the conditions under which people are especially susceptible to the negative impact of misinformation and, conversely, when they are resistant. Warnings about the potential for misinformation sometimes work to inhibit its damaging effects, but only under limited circumstances. The misinformation effect has been observed in a variety of human and nonhuman species, and some groups of individuals are more susceptible than others. At a more theoretical level, investigators have explored the fate of the original memory traces after exposure to misinformation appears to have made them inaccessible. This review of the field ends with a brief discussion of the newer work involving misinformation that has explored the processes by which people come to believe falsely that they experienced rich complex events that never, in fact, occurred.
Article
While fact-checking has grown dramatically in the last decade, little is known about the relative effectiveness of different formats in correcting false beliefs or overcoming partisan resistance to new information. This article addresses that gap by using theories from communication and psychology to compare two prevailing approaches: An online experiment examined how the use of visual “truth scales” interacts with partisanship to shape the effectiveness of corrections. We find that truth scales make fact-checks more effective in some conditions. Contrary to theoretical predictions and the fears of some journalists, their use does not increase partisan backlash against the correction or the organization that produced it.
Article
The omnipresence of political misinformation in today's media environment raises serious concerns about citizens' ability to make fully informed decisions. In response to these concerns, the last few years have seen a renewed commitment to journalistic and institutional fact-checking. The assumption of these efforts is that successfully correcting misinformation will prevent it from affecting citizens' attitudes. However, through a series of experiments, I find that exposure to a piece of negative political information persists in shaping attitudes even after the information has been successfully discredited. A correction--even when it is fully believed--does not eliminate the effects of misinformation on attitudes. These lingering attitudinal effects, which I call "belief echoes," are created even when the misinformation is corrected immediately, arguably the gold standard of journalistic fact-checking. Belief echoes can be affective or cognitive. Affective belief echoes are created through a largely unconscious process in which a piece of negative information has a stronger impact on evaluations than does its correction. Cognitive belief echoes, on the other hand, are created through a conscious cognitive process during which a person recognizes that a particular negative claim about a candidate is false, but reasons that its presence increases the likelihood of other negative information being true. Experimental results suggest that while affective belief echoes are created across party lines, cognitive belief echoes are more likely when a piece of misinformation reinforces a person's pre-existing political views. The existence of belief echoes provides an enormous incentive for politicians to strategically spread false information with the goal of shaping public opinion on key issues.
However, results from two more experiments show that politicians also suffer consequences for making false claims, an encouraging finding that has the potential to constrain the behavior of politicians presented with the opportunity to strategically create belief echoes. While the existence of belief echoes may also provide a disincentive for the media to engage in serious fact-checking, evidence suggests that such efforts can also have positive consequences by increasing citizens' trust in media.
Article
Among Australians, consensus information partially neutralized the influence of worldview, with free-market supporters showing a greater increase in acceptance of human-caused global warming relative to free-market opponents. In contrast, while consensus information overall had a positive effect on perceived consensus among U.S. participants, there was a reduction in perceived consensus and acceptance of human-caused global warming for strong supporters of unregulated free markets. Fitting a Bayes net model to the data indicated that under a Bayesian framework, free-market support is a significant driver of beliefs about climate change and trust in climate scientists. Further, active distrust of climate scientists among a small number of U.S. conservatives drives contrary updating in response to consensus information among this particular group.
Article
Across three separate experiments, I find that exposure to negative political information continues to shape attitudes even after the information has been effectively discredited. I call these effects “belief echoes.” Results suggest that belief echoes can be created through an automatic or deliberative process. Belief echoes occur even when the misinformation is corrected immediately, the “gold standard” of journalistic fact-checking. The existence of belief echoes raises ethical concerns about journalists’ and fact-checking organizations’ efforts to publicly correct false claims.
Article
This article explores belief in political rumors surrounding the health care reforms enacted by Congress in 2010. Refuting rumors with statements from unlikely sources can, under certain circumstances, increase the willingness of citizens to reject rumors regardless of their own political predilections. Such source credibility effects, while well known in the political persuasion literature, have not been applied to the study of rumor. Though source credibility appears to be an effective tool for debunking political rumors, risks remain. Drawing upon research from psychology on ‘fluency’ – the ease of information recall – this article argues that rumors acquire power through familiarity. Attempting to quash rumors through direct refutation may facilitate their diffusion by increasing fluency. The empirical results find that merely repeating a rumor increases its power.
Chapter
Publisher Summary This chapter describes various approaches to the problem of inducing resistance to persuasion, and presents a number of variations on each approach. Persuasive messages are known to be more effective if they are presented with their conclusions explicitly drawn, rather than left to be drawn by the recipient. Several ways of inducing resistance to persuasion, and some possible pretreatments—like enhancing the person's tendency to use perceptual distortion in the defense of his preconceptions—are included in the chapter. Some contemporary approaches to inducing resistance to persuasion include the behavioral commitment approach, anchoring the belief to other cognitions, inducing resistant cognitive states, and prior training in resisting persuasive attempts. It is believed that with better education the individual becomes more resistant to persuasion. However, empirical research does not consistently support such a proposition. It is by no means clear that any general-education manipulation would have the effect of increasing resistance to persuasion. Training more specifically tailored to reduce susceptibility to persuasion might be more successful. There is some evidence that the more intelligent are more resistant to conformity pressures from peers, but they also seem to be more susceptible to the mass-media kind of persuasion attempts. Further experiments will have to determine whether inoculation theory will predict the immunizing efficacy of various types of defenses in the case of controversial beliefs as successfully as it has for truisms.
Article
Literature pertaining to the effects of age differences indicates that elderly individuals and younger adults process information differently. Age differences result in a complex set of changes in individuals' sources of information, ability to learn, and susceptibility to social influence. The implications of these changes are discussed in terms of marketing practice, theory, and methodology.
Article
Four decades of research and hundreds of studies speak to the power of post-event misinformation to bias eyewitness accounts of events (see e.g. Loftus’ summary, 2005). A subset of this research has explored if the adverse influence of misinformation on remembering can be undone or at least reduced through a later warning about its presence. We meta-analyzed 25 such post-warning studies (including 155 effect sizes) to determine the effectiveness of different types of warnings and to explore moderator effects. Key findings were that (1) post-warnings are surprisingly effective, reducing the misinformation effect to less than half of its size on average. (2) Some types of post-warning (following a theoretical classification) seem to be more effective than others, particularly studies using an enlightenment procedure (Blank, 1998). (3) The post-warning reduction in the misinformation effect reflects a specific increase in misled performance (relative to no warning), at negligible cost for control performance. We conclude with a discussion of theoretical and practical implications.
Article
Although conspiracy theories have long been a staple of American political culture, no research has systematically examined the nature of their support in the mass public. Using four nationally representative surveys, sampled between 2006 and 2011, we find that half of the American public consistently endorses at least one conspiracy theory and that many popular conspiracy theories are differentiated along ideological and anomic dimensions. In contrast with many theoretical speculations, we do not find conspiracism to be a product of greater authoritarianism, ignorance, or political conservatism. Rather, the likelihood of supporting conspiracy theories is strongly predicted by a willingness to believe in other unseen, intentional forces and an attraction to Manichean narratives. These findings both demonstrate the widespread allure of conspiracy theories as political explanations and offer new perspectives on the forces that shape mass opinion and American political culture.
Article
To assess whether exposure to varying "facts and myths" message formats affected participant knowledge and recall accuracy of information related to influenza vaccination. Consenting patients (N=125) were randomized to receive one of four influenza-related messages (Facts Only; Facts and Myths; Facts, Myths, and Refutations; or CDC Control), mailed one week prior to a scheduled physician visit. Knowledge was measured using 15 true/false items at pretest and posttest; recall accuracy was assessed using eight items at posttest. All participants' knowledge scores increased significantly (p<0.05); those exposed to the CDC Control message had a higher posttest knowledge score (adjusted mean=11.18) than those in the Facts Only condition (adjusted mean=9.61, p<0.02). Participants accurately recalled a mean of 4.49 statements (SD=1.98). ANOVA demonstrated significant differences in recall accuracy by condition [F(3, 83)=7.74, p<.001, η(2)=0.22]. Messages that include facts, myths, and evidence to counteract myths appear to be effective in increasing participants' knowledge. We found no evidence that presenting both facts and myths is counterproductive to recall accuracy. Use of messages containing facts and myths may engage the reader and lead to knowledge gain. Recall accuracy is not assured by merely presenting factual information.
Article
Despite significant efforts by governments, organizations and individuals to maintain public trust in vaccines, concerns persist and threaten to undermine the effectiveness of immunization programs. Vaccine advocates have traditionally focused on education based on evidence to address vaccine concerns and hesitancy. However, being informed of the facts about immunization does not always translate into support for immunization. While many are persuaded by scientific evidence, others are more influenced by cognitive shortcuts, beliefs, societal pressure and the media, with the latter group more likely to hesitate over immunization. Understanding evidence from the behavioural sciences opens new doors to better support individual decision-making about immunization. Drawing on heuristics, this overview explores how individuals find, process and utilize vaccine information and the role health care professionals and society can play in vaccine decision-making. Traditional, evidence-based approaches aimed at staunching the erosion of public confidence in vaccines are proving inadequate and expensive. Enhancing public confidence in vaccines will be complex, necessitating a much wider range of strategies than currently used. Success will require a shift in how the public, health care professionals and media are informed and educated about vaccine benefits, risks and safety; considerable introspection and change in current academic and vaccine decision-making practices; development of proactive strategies to broadly address current and potential future concerns, as well as targeted interventions such as programs to address pain with immunization. This overview outlines ten such opportunities for change to improve vaccine confidence.
Article
Context: Misperceptions are a major problem in debates about health care reform and other controversial health issues. Methods: We conducted an experiment to determine if more aggressive media fact-checking could correct the false belief that the Affordable Care Act would create "death panels." Participants from an opt-in Internet panel were randomly assigned to either a control group in which they read an article on Sarah Palin's claims about "death panels" or an intervention group in which the article also contained corrective information refuting Palin. Findings: The correction reduced belief in death panels and strong opposition to the reform bill among those who view Palin unfavorably and those who view her favorably but have low political knowledge. However, it backfired among politically knowledgeable Palin supporters, who were more likely to believe in death panels and to strongly oppose reform if they received the correction. Conclusions: These results underscore the difficulty of reducing misperceptions about health care reform among individuals with the motivation and sophistication to reject corrective information.
Article
Students often come into the introductory psychology course with many misconceptions and leave with most of them intact. Borrowing from other disciplines, we set out to determine whether refutational lecture and text are effective in dispelling student misconceptions. These approaches first activate a misconception and then immediately counter it with correct information. We tested students' knowledge of 45 common misconceptions and then taught the course with lecture and readings of a refutational or standard format or did not cover the information at all. Students showed significant changes in their beliefs when we used refutational approaches, suggesting refutational pedagogies are best for changing students' misconceptions.
Article
A questionnaire concerning the degree of belief in 12 statements of current rumors was circulated to adults through children in 8 Syracuse schools. Attitudes toward rationing and wartime administration were also solicited. The 537 complete returns are analyzed to reveal possible factors associated with belief in rumors. Various statistical controls were tried to delimit the combined influence of several factors. The reasoning is presented in detailed research notes. The rumors were believed in one fourth of the cases. Belief was associated with previous hearing of the rumors, antirationing attitudes, suspicion of slackerism, and failure to read the Rumor Clinic column. Relationship to sex, age, or occupation is doubtful.
Article
This paper reviews the empirical evidence of the effect of credibility of the message source on persuasion over a span of 5 decades, primarily to come up with recommendations for practitioners as to when to use a high- or a low-credibility source and secondarily to identify areas for future research. The main effect studies of source credibility on persuasion seem to indicate the superiority of a high-credibility source over a low-credibility one. Interaction effect studies, however, show source credibility to be a liability under certain conditions. The variables found to interact with source credibility are categorized into 5 categories: source, message, channel, receiver, and destination variables. The most heavily researched variables have been the message and receiver variables. Implications for marketers/advertisers and suggestions for future research are discussed.
Article
In 2006, a U.S. Federal Court ruled that the major domestic cigarette manufacturers were guilty of conspiring to deny, distort, and minimize the hazards of cigarette smoking to the public and ordered corrective statements to correct these deceptions. This study evaluates the effectiveness of different versions of corrective statements that were proposed to the Court. 239 adult smokers (aged 18-65 years) were randomized to view one of five different versions of corrective statements on five topics (health risks, addiction, low-tar cigarettes, product manipulation, and secondhand smoke); changes in knowledge and beliefs were measured before and after viewing the statements, as well as 1 week later. Three of the versions were text-based statements recommended by different parties in the case (Philip Morris, U.S. Department of Justice [DOJ], Interveners), whereas two others were developed at Roswell Park Cancer Institute (RPCI) for this study and utilized pictorial images (emotive and neutral). Data collection and analysis were conducted in Buffalo, NY from 2008 to 2009. Regardless of which corrective statement was seen, exposure resulted in a consistent pattern of increased level of knowledge and corrected misperceptions about smoking, although the effects were not large and diminished back toward baseline levels within 1 week. The DOJ, Interveners, and emotive statements elicited a stronger affective response and were rated by respondents as more persuasive (p-value<0.05). The emotive statement was better recalled and drew the respondents' attention in the shortest amount of time. Each of the proposed corrective statements tested helped correct false beliefs about smoking, but sustained impact will likely require repeated exposures to the message.
Article
Recall tasks render 2 distinct sources of information available: the recalled content and the experienced ease or difficulty with which it can be brought to mind. Because retrieving many pieces of information is more difficult than retrieving only a few, reliance on accessible content and subjective accessibility experiences leads to opposite judgmental outcomes. People are likely to base judgments on accessibility experiences when they adopt a heuristic processing strategy and the informational value of the experience is not called into question. When the experience is considered nondiagnostic, or when a systematic processing strategy is adopted, people rely on accessible content. Implications for the operation of the availability heuristic and the emergence of knowledge accessibility effects are discussed.
Hovland, C., Lumsdaine, A., & Sheffield, R. (1949). Experiments on mass communication: Studies in social psychology in World War II. Princeton, NJ: Princeton University Press.
Oxford Dictionary. (2016). Word of the year 2016 is …. Retrieved from https://en.oxforddictionaries.com/word-of-the-year/word-of-the-year-2016
Schmidt, F. L., & Le, H. (2014). Hunter-Schmidt meta-analysis programs [statistical software]. Iowa City: University of Iowa.
Skurnik, I., Yoon, C., & Schwarz, N. (2007). "Myths & facts" about the flu: Health education campaigns can reduce vaccination intentions. Unpublished manuscript.
Spotts, H. (1994). Evidence of a relationship between need for cognition and chronological age: Implications for persuasion in consumer research. In C. T. Allen & D. R. John (Eds.), Advances in consumer research (Vol. 21, pp. 238-243). Ann Arbor, MI: Association for Consumer Research.