Figure 6
Pre- and post-test scores for fake items that use manipulation techniques (panels A–C), as well as the mean score for the control items (panel D). Note: error bars represent 95% confidence intervals. Adapted from Roozenbeek and van der Linden (2019).

Source publication
Article
Full-text available
There has been increasing concern with the growing infusion of misinformation, or “fake news”, into public discourse and politics in many western democracies. Our article first briefly reviews the current state of the literature on conventional countermeasures to misinformation. We then explore proactive measures to prevent misinformation from find...

Contexts in source publication

Context 1
... and van der Linden (2019b) initially evaluated the game using a within-subject design with a sample of roughly N = 15,000 people. The results are shown in Figure 6. For the real news items, people did not change their reliability ratings between a pre- and a post-test (d = 0.03–0.04, Figure 6D). ...
Context 2
... results are shown in Figure 6. For the real news items, people did not change their reliability ratings between a pre- and a post-test (d = 0.03–0.04, Figure 6D). For the fake news items, by contrast, people significantly downgraded reliability overall (d = 0.52) as well as for each technique separately (d ranges from 0.16 to 0.35, Figure 6A–C).² Given that many elections are decided on small margins (e.g., half of U.S. presidential elections were decided by margins under 7.6% (Epstein & Robertson, 2015), and the 2016 election was decided by razor-thin margins in a few swing states), these effects can be considered meaningful when scaled (Funder & Ozer, 2019) and commensurate with effect sizes in persuasion research (Banas & Rains, 2010; Walter & Murphy, 2018). ...
Context 3
... the real news items, people did not change their reliability ratings between a pre- and a post-test (d = 0.03–0.04, Figure 6D). For the fake news items, by contrast, people significantly downgraded reliability overall (d = 0.52) as well as for each technique separately (d ranges from 0.16 to 0.35, Figure 6A–C).² Given that many elections are decided on small margins (e.g., half of U.S. presidential elections were decided by margins under 7.6% (Epstein & Robertson, 2015), and the 2016 election was decided by razor-thin margins in a few swing states), these effects can be considered meaningful when scaled (Funder & Ozer, 2019) and commensurate with effect sizes in persuasion research (Banas & Rains, 2010; Walter & Murphy, 2018). Importantly, although Roozenbeek and van der Linden (2019) found some small variation in the inoculation effect across age and ideology, such that older people and Conservatives were slightly more susceptible to fake news on the pre-test (which is consistent with other recent work, e.g., Grinberg et al., 2019; Guess et al., 2019, 2020; for a review see Brashier & Schacter, 2020), the inoculation effect was significant across all subgroups. ...
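For reference, the effect sizes quoted in these excerpts are standardized mean differences (Cohen's d). Below is a minimal LaTeX sketch of the textbook pre/post formulation; the exact variant used by Roozenbeek and van der Linden (2019) may differ.

```latex
% A minimal sketch of the textbook Cohen's d for a pre/post comparison,
% assuming the standard pooled-SD formulation (the paper may use a
% repeated-measures variant):
\[
  d \;=\; \frac{\bar{x}_{\mathrm{post}} - \bar{x}_{\mathrm{pre}}}
               {s_{\mathrm{pooled}}},
  \qquad
  s_{\mathrm{pooled}} \;=\; \sqrt{\frac{s_{\mathrm{pre}}^{2} + s_{\mathrm{post}}^{2}}{2}}
\]
% On this scale, the overall d = 0.52 for fake items is a medium-sized
% shift, while d = 0.03-0.04 for real items is negligible.
```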

Citations

... The goal is to build societal resilience against the dangers of disinformation, in order to pre-empt its effects. If people are educated about the threat of disinformation-for example, through increased media literacy [123]-and forewarned that they may be targeted, they will become immunised against disinformation [124,125]. This also depends on various other key indicators of societal resilience-building, such as levels of populism, polarisation, media trust, time spent on social media platforms and the strength of the public broadcasting service [126]. ...
Article
Full-text available
The proliferation and development of social media platforms in recent years have contributed significantly to the spread of disinformation. Police authorities around Europe have observed that harmful or criminal behaviour stemming from social unrest, hate speech, and violent disorder is regularly preceded by disinformation campaigns. This begs the question: How can practitioners be better prepared for the real-world consequences of malign disinformation activities and potentially even mitigate any criminal consequences? The first step in properly countering disinformation is to enhance the understanding of the complex phenomenon. Therefore, this article puts forth a new theoretical framework, called the ‘C5 Interaction Model’, that explains the creation, spread and impact of disinformation, synthesising academic theory to provide practical guidance on disinformation dynamics. The multidisciplinary model represents a lifecycle and contains five main elements: Context, Causes, Content, Consequences, and Cycle of Amplification. They are each organised into two further layers of (sub)factors, which were developed to provide a comprehensive overview and breakdown of the important elements of disinformation. The C5 Interaction Model represents one of the first concerted efforts to bring diverse insights together into a comprehensive integrative framework. The complexity of the model shows that this process is non-linear and that there are a multitude of factors determining the lifecycle of disinformation, making it a highly complex phenomenon to research. A key contribution of this article is the focus on the interaction between different elements that influence the process of disinformation—from creation to consequences. Importantly, the lifecycle route is predominantly influenced by the social context in which it exists.
... Furthermore, fact-based interventions may be ineffective for those with high overconfidence in their knowledge [38], and tailored communication strategies are needed to effectively reach and influence these individuals. A promising alternative might be a prebunking/inoculation strategy, which proactively exposes audiences to weakened forms of misinformation and refutations [43], coupled with narrative-based framing, which embeds information in personal stories rather than abstract facts, thereby enhancing engagement and reducing resistance [44]. This approach may be particularly effective when delivered through AI-powered conversational agents, which offer interactive and personalized communication in a scalable format. ...
Article
Full-text available
The rising consumption of gluten-free products among non-celiac individuals represents a burden for society due to these foods’ lower nutritional quality, poorer taste, and higher cost. While these products are essential for individuals with celiac disease, an increasing number of consumers are choosing them for perceived health benefits. Indeed, subjective beliefs about food and nutrition significantly influence food choices, whether or not they align with scientific evidence. This study investigates the Dunning-Kruger effect within the domain of nutritional knowledge and its impact on consumer behavior. Here we show that individuals with low nutritional knowledge who overestimate their competence—a hallmark of the Dunning-Kruger effect—are more likely to consume gluten-free products without medical necessity. This overconfidence is compounded by narcissistic traits and is further associated with higher conspiracy beliefs about the food industry. These consumers appear vulnerable to marketplace exploitation, lacking the knowledge to make informed food choices while being unaware of this condition. Our findings emphasize the need for targeted communication strategies to guide consumers towards more evidence-based dietary choices, recognizing that fact-based interventions may be ineffective for those with high knowledge overconfidence.
... The academic basis for the use of fact-checking for narrative control appears to be based on "inoculation theory" [370,371]. This is a strategy to minimise the availability of multiple perspectives on certain topics by "pre-bunking" the public with deliberate misrepresentations of compelling arguments into less compelling versions and then countering these strawman versions of the arguments. ...
... This is a strategy to minimise the availability of multiple perspectives on certain topics by "pre-bunking" the public with deliberate misrepresentations of compelling arguments into less compelling versions and then countering these strawman versions of the arguments. Analogous to vaccination with a weakened virus, when the public later encounters the original arguments, they will then be "inoculated" into dismissing the arguments without due consideration [370,371]. ...
Article
Full-text available
During the COVID-19 pandemic (2020-2023), governments around the world implemented an unprecedented array of non-pharmaceutical interventions (NPIs) to control the spread of SARS-CoV-2. From early 2021, these were accompanied by major population-wide COVID-19 vaccination programmes, often using novel mRNA/DNA technology, although some countries used traditional vaccines. Both the NPIs and the vaccine programmes were apparently justified by highly concerning model projections of how the pandemic could progress in their absence. Efforts to reduce the spread of misinformation during the pandemic meant that differing scientific opinions on each of these aspects inevitably received unequal weighting. In this perspective review, based on an international multidisciplinary collaboration, we identify major problems with many aspects of these COVID-19 policies as they were implemented. We show how this resulted in adverse impacts for public health, society, and scientific progress. Therefore, we propose seven recommendations to reduce such adverse consequences in the future. HOW TO CITE: Quinn GA, Connolly R, ÓhAiseadha C, Hynds P, Bagus P, Brown RB, Cáceres CF, Craig C, Connolly M, Domingo JL, Fenton N, Frijters P, Hatfill S, Heymans R, Joffe AR, Jones R, Lauc G, Lawrie T, Malone RW, Mordue A, Mushet G, O’Connor A, Orient J, Peña-Ramos JA, Risch HA, Rose J, Sánchez-Bayón A, Savaris RF, Schippers MC, Simandan D, Sikora K, Soon W, Shir-Raz Y, Spandidos DA, Spira B, Tsatsakis AM and Walach H (2025) What Lessons can Be Learned From the Management of the COVID-19 Pandemic?. Int. J. Public Health 70:1607727. doi: https://doi.org/10.3389/ijph.2025.1607727
... Inoculation theory (McGuire, 1961a, 1961b) posits that preemptively exposing individuals to a weakened form of fake news can help them develop resistance against future manipulation attempts (Lewandowsky & van der Linden, 2021; Traberg et al., 2022). The inoculation process comprises two elements. ...
... One prominent distinction in the inoculation framework is between fact-based and logic-based approaches (Lewandowsky & van der Linden, 2021). Fact-based inoculation focuses on preemptively addressing specific false claims by presenting counterarguments and factual corrections tailored to particular topics, such as climate change or vaccine safety. ...
Article
Full-text available
Adolescents increasingly rely on social media platforms for news consumption, highlighting the urgent need to equip them with the skills to differentiate between credible and deceptive information. To address this challenge, we developed two logic-based inoculation interventions suitable for classroom settings: an online game (active) and a leaflet (passive). In a school experiment involving 373 participants, we assessed students’ news discernment ability before and after exposure to the interventions, evaluating veracity discernment, fake and real news detection, and response biases. Additionally, we examined the interventions’ effects on students’ enjoyment of learning, recognizing its critical role in game-based learning. Findings show that enjoyment was significantly higher in the active intervention group compared to the passive one. However, our findings revealed no significant differences in credibility evaluation performance between the two intervention types over time. Moreover, we uncovered a paradoxical trend: while students improved in identifying fake news after the interventions, they demonstrated a decline in proficiency at identifying real news. The observed effect highlights the importance of a holistic approach to media literacy education, incorporating strategies to evaluate both fake and real news sources to mitigate unexpected backfire effects.
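The "paradoxical trend" above (better fake-news detection alongside worse real-news detection) is the signature of a shifted response bias rather than improved discernment, which is why the authors report response biases separately from veracity discernment. As an illustration only, and not code from the study, the sketch below shows how signal detection theory separates sensitivity (d′) from bias (c); the hit and false-alarm rates are hypothetical.

```python
# Illustrative only: signal detection theory (SDT) separates how well
# students discriminate fake from real news (sensitivity, d') from their
# overall tendency to answer "fake" (criterion, c). Rates below are
# hypothetical, not data from the study above.
from statistics import NormalDist

def sdt_measures(hit_rate: float, false_alarm_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion_c); a 'hit' is a fake item correctly
    called fake, a 'false alarm' is a real item incorrectly called fake."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF (z-score)
    d_prime = z(hit_rate) - z(false_alarm_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))  # response bias
    return d_prime, criterion

# Hypothetical post-intervention rates: more fake items flagged (hits up),
# but more real items mislabeled as fake as well (false alarms up).
d_prime, c = sdt_measures(hit_rate=0.80, false_alarm_rate=0.35)
print(f"d' = {d_prime:.2f}, c = {c:.2f}")  # c < 0: liberal bias toward "fake"
```

If hits and false alarms rise together, d′ can stay flat while c turns negative, which matches the pattern of improved fake-news detection but degraded real-news detection.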
... Though useful in retrospect, most methods struggle to detect campaigns as they unfold. Early detection of foreign malign influence is important because taking preventive measures in the early stages of an influence campaign on social media can permanently stunt the campaign's growth [8]. Improving the current methods requires looking at a combination of the attacker methods and victim communities. ...
Preprint
Full-text available
Foreign information operations conducted by Russian and Chinese actors exploit the United States' permissive information environment. These campaigns threaten democratic institutions and the broader Westphalian model. Yet, existing detection and mitigation strategies often fail to identify active information campaigns in real time. This paper introduces ChestyBot, a pragmatics-based language model that detects unlabeled foreign malign influence tweets with up to 98.34% accuracy. The model supports a novel framework to disrupt foreign influence operations in their formative stages.
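ChestyBot's pragmatics-based architecture is not described here, so nothing below should be read as its implementation. Purely as a hedged point of reference for the task framing (binary classification of influence-operation tweets), here is a minimal baseline sketch; the tweets, labels, and model choice are all hypothetical placeholders.

```python
# Hypothetical baseline sketch: TF-IDF features + logistic regression for
# flagging influence-operation tweets. This is NOT ChestyBot's method
# (the abstract describes a pragmatics-based language model); it only
# illustrates the binary tweet-classification task framing.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data; a real evaluation would use labeled campaign tweets.
tweets = [
    "breaking: officials are hiding the truth from you",
    "lovely weather for the marathon this weekend",
]
labels = [1, 0]  # 1 = suspected influence-operation tweet, 0 = benign

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigram + bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)
print(model.predict_proba(["they don't want you to know this"])[:, 1])
```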
... It is one of the goals of this article to paint a more accurate and nuanced portrait of detransition experiences; one that will offer the reader greater understanding as well as increased psychological immunity against the ploy of disinformation-fuelled detransition panic. Since spotting fallacies or misleading tactics has been demonstrated to be a more effective strategy for combating the spread of disinformation than "myth busting" or "fact-checking" alone, another goal of this article is to expose the underbelly of detrans panic (see Lewandowsky & van der Linden, 2021; van der Linden et al., 2021). Finally, this article aimed to be a step in the direction of expanding our understanding of gender and nonlinear gender paths and reducing detransition stigma. ...
Article
Full-text available
This article examines the growing polarization, misinformation, and disinformation surrounding gender diversity, with a particular focus on detransition. In the current context of sociopolitical division, misleading portrayals of detransition experiences incite moral panic, fuel negative social attitudes toward people who detransition and gender-diverse individuals, and are instrumentalized to restrict access to gender-affirming care. In response, this article aims to offer a more accurate and nuanced understanding of detransition, to counter disinformation-driven panic, and to reduce the stigma experienced by those concerned. Drawing on multiple sources of knowledge, the article meticulously deconstructs the foundations of the moral panic around detransition, revealing an interplay of cisnormative and transnormative biases. By exposing these biases, the article encourages a more thoughtful and inclusive approach to gender and nonlinear gender paths, and highlights the diversity of lived experiences as well as the multiple factors that can motivate detransition. It concludes with a series of recommendations and invitations, notably to broaden our view of gender toward a perspective that depathologizes detransition and nonlinear gender paths, and that moves beyond the trans–cis binary.
... However, the Internet as a highly dynamic media environment poses numerous challenges and new demands on citizens, as there are systematic differences between online and offline environments (e.g., Kozyreva et al., 2020). The challenges lie in persuasive and manipulative choice architectures, promotion of sources of questionable quality, biased reporting, advertisements and clickbait, which are further exacerbated by the distracting nature of Internet environments and general information overload (Kozyreva et al., 2020; Lewandowsky & van der Linden, 2021). This circumstance is partly attributed to the lack of gatekeepers, quality control, and regulation on the Internet, as well as low barriers to publishing information, which is further exacerbated by the use of social bots (Metzger & Flanagin, 2015; Meßmer et al., 2021; Shao et al., 2018). ...
... Furthermore, there is a continued influence effect (Lewandowsky et al., 2012;Rapp, 2016), which means that mere exposure to inaccurate "facts" can cause people to incorporate the expressed misinformation into their understanding, even if their pre-existing understanding was accurate and even if it is later debunked. These findings further highlight the importance of fostering critical evaluation skills to build resilience to an online environment that seeks to manipulate and polarize (e.g., Lewandowsky & van der Linden, 2021). ...
Article
Full-text available
Online evaluation skills such as assessing the credibility and relevance of Internet sources are crucial for students' self-regulated learning on the Internet, yet many struggle to identify reliable information online. While AI-based chatbots have made progress in teaching various skills, their application in improving online evaluation skills remains underexplored. In this study, we present an educational chatbot designed to train university students to evaluate online information. Participants were assigned to one of three conditions: (1) training with the interactive chatbot, (2) training with a static checklist, or (3) no additional training (i.e., baseline condition). In an ecologically valid test that provided a simulated web environment, participants had to identify the most reliable and relevant websites among several non-target websites to solve given problems. Participants in the chatbot condition outperformed those in the baseline condition on this test, while participants in the checklist condition showed no significant advantage over the baseline condition. These findings suggest the potential of educational chatbots as effective tools for improving critical evaluation skills. The implications of using chatbots for scalable educational interventions are discussed, particularly in light of recent advances such as the integration of large language models into search engines and the potential for hybrid intelligence paradigms that combine human oversight with AI-driven learning tools.
... In our study, successful foraging outcomes were made salient by a visual cue (i.e., splash), although people can also deploy metacognitive strategies to infer latent performance or skill from overt behavior 5,14 , providing additional mechanisms for guiding selective social learning. Future work can explore the extent to which these mechanisms (together with our ability to discount correlated social information 60 ) may offer a degree of natural protection against the spread of misinformation 61 and the formation of echo chambers through homophilic social transmission 62 . ...
Article
Full-text available
Human cognition is distinguished by our ability to adapt to different environments and circumstances. Yet the mechanisms driving adaptive behavior have predominantly been studied in separate asocial and social contexts, with an integrated framework remaining elusive. Here, we use a collective foraging task in a virtual Minecraft environment to integrate these two fields, by leveraging automated transcriptions of visual field data combined with high-resolution spatial trajectories. Our behavioral analyses capture both the structure and temporal dynamics of social interactions, which are then directly tested using computational models sequentially predicting each foraging decision. These results reveal that adaptation mechanisms of both asocial foraging and selective social learning are driven by individual foraging success (rather than social factors). Furthermore, it is the degree of adaptivity—of both asocial and social learning—that best predicts individual performance. These findings not only integrate theories across asocial and social domains, but also provide key insights into the adaptability of human decision-making in complex and dynamic social landscapes.
... Although some reviews have covered specific types of educational interventions, such as inoculation techniques (e.g., Lewandowsky & van der Linden, 2021) or lie detection trainings (e.g., Driskell, 2012), they do not cover the breadth of existing educational interventions against misinformation. This article seeks to close this gap, motivated by misinformation as a persistent problem of large societal impact and by calls to include misinformation in educational curricula (Schwartz, 2021). ...
Article
Full-text available
Misinformation can have severe negative effects on people’s decisions, behaviors, and on society at large. This creates a need to develop and evaluate educational interventions that prepare people to recognize and respond to misinformation. We systematically review 107 articles describing educational interventions across various lines of research. In characterizing existing educational interventions, this review combines a theory-driven approach with a data-driven approach. The theory-driven approach uncovered that educational interventions differ in terms of how they define misinformation and regarding which misinformation characteristics they target. The data-driven approach uncovered that educational interventions have been addressed by research on the misinformation effect, lie detection, information literacy, and fraud trainings, with each line of research yielding different types of interventions. Furthermore, this article reviews evidence about the interventions’ effectiveness. Besides identifying several promising types of interventions, comparisons across different lines of research yield open questions that future research should address to identify ways to increase people's resilience towards misinformation.
... that exposing individuals to weakened counterarguments can build resistance to persuasion (McGuire, 1964;Compton, 2013). It has been widely applied to reduce susceptibility to misinformation, often as a form of "prebunking" (Lewandowsky & van der Linden, 2021). ...
Preprint
Full-text available
The Continued Influence Effect (CIE) refers to the persistent impact of misinformation on beliefs or reasoning, even after its retraction. Traditional accounts attribute CIE to memory failures – either in updating mental models or retrieving corrections. In contrast, newer theories propose that CIE persists despite successful encoding and retrieval of retractions, due to motivational and reasoning-based processes. Across six experiments (N = 1,446), we tested these competing explanations. Participants frequently relied on misinformation even when recognizing it as misleading, and those who knowingly used it showed greater overall reliance. However, after debriefing, CIE declined most among participants who identified it as misleading. Furthermore, participants cited perceived relevance and explanatory value as reasons for misinformation use, highlighting goal-directed reasoning. These patterns were robust across various conditions, including prebunking interventions and source credibility manipulations. To integrate these findings, we propose the Dynamic Inference Optimization (DIO) model, framing CIE as a trade-off between conserving cognitive resources and minimizing uncertainty (i.e., model entropy). DIO suggests that CIE occurs because misinformation is weighted over retraction due to its explanatory power, forming a low-entropy model that requires minimal cognitive effort. However, maladaptive mismatches between cognition and environment may strategically increase model entropy, enabling belief revision by reweighting information probabilities, at the cost of increased cognitive effort. This model offers a unified, process-level account of CIE grounded in principles of adaptive reasoning and cognitive effort. Importantly, DIO fits into dynamic models of cognition, offering a more nuanced and ecologically valid explanation of how misinformation continues to influence reasoning.
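The DIO model's central trade-off (conserving cognitive effort versus minimizing model entropy) can be made concrete with a small formal sketch. The equations below are our illustrative gloss on the abstract, assuming Shannon entropy over candidate explanations and an unspecified effort cost C(p); they are not taken from the preprint.

```latex
% Shannon entropy of a belief distribution p over candidate explanations:
\[
  H(p) \;=\; -\sum_{i} p_i \log p_i
\]
% One possible reading of the DIO trade-off (our gloss, not the authors'
% equation): beliefs p* balance uncertainty against revision effort C(p),
\[
  p^{*} \;=\; \operatorname*{arg\,min}_{p} \,\bigl[\, H(p) + \lambda\, C(p) \,\bigr]
\]
% so retracted-but-explanatory misinformation persists when discarding it
% would raise H(p) by more than the effort term lambda*C(p) saves.
```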