Figure 1 - uploaded by Sander van der Linden
A Vaccine for Brainwash. From the original article by McGuire (1970) in Psychology Today. Copyright held by an unknown person.
Source publication
There has been increasing concern with the growing infusion of misinformation, or “fake news”, into public discourse and politics in many western democracies. Our article first briefly reviews the current state of the literature on conventional countermeasures to misinformation. We then explore proactive measures to prevent misinformation from find...
Context in source publication
Context 1
... about people's general vulnerability to political indoctrination goes back many decades (McGuire, 1961), arising at the time from disquietude about persuasive techniques employed by totalitarian states. The larger question of how to go about developing attitudinal "resistance" against unwanted persuasion attempts ultimately led McGuire to develop "inoculation theory", which, for a popular audience, he described as a "vaccine for brainwash" (McGuire, 1970); see Figure 1. ...
Citations
... The Principles of Inoculation Theory Inoculation theory (McGuire, 1961a, 1961b) posits that preemptively exposing individuals to a weakened form of fake news enables them to develop resistance against future manipulation attempts (Lewandowsky & van der Linden, 2021;Traberg et al., 2022). The inoculation process comprises two elements. ...
Adolescents increasingly rely on social media platforms for news consumption, highlighting the urgent need to equip them with the skills to differentiate between credible and deceptive information. To address this challenge, we developed two logic-based inoculation interventions suitable for classroom settings: an online game (active) and a leaflet (passive). In a school experiment involving 373 participants, we assessed students’ news discernment ability before and after exposure to the interventions, evaluating veracity discernment, fake and real news detection, and response biases. Additionally, we examined the interventions’ effects on students’ enjoyment of learning, recognizing its critical role in game-based learning. Findings show that enjoyment was significantly higher in the active intervention group compared to the passive one. However, our findings revealed no significant differences in credibility evaluation performance between the two intervention types over time. Moreover, we uncovered a paradoxical trend: while students improved in identifying fake news after the interventions, they demonstrated a decline in proficiency at identifying real news. The observed effect highlights the importance of a holistic approach to media literacy education, incorporating strategies to evaluate both fake and real news sources to mitigate unexpected backfire effects.
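The outcome measures named above (veracity discernment, fake and real news detection, and response bias) are commonly operationalized with signal detection theory, separating sensitivity from bias. The abstract does not specify the authors' exact scoring method, so the sketch below is only one standard way to compute such measures, assuming "hit" means a real headline rated real and "false alarm" means a fake headline rated real:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Signal-detection measures of news discernment.

    hit_rate: proportion of real headlines correctly rated as real.
    false_alarm_rate: proportion of fake headlines incorrectly rated as real.
    Returns (d_prime, criterion): sensitivity and response bias.
    """
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)             # discernment
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))  # response bias
    return d_prime, criterion

# Example: good real-news detection but some credulity toward fake news.
# A negative criterion indicates a bias toward rating headlines as real.
d, c = sdt_measures(hit_rate=0.80, false_alarm_rate=0.30)
```

Separating d' from the criterion is what makes the "paradoxical trend" above visible: an intervention can shift the response bias (students rate everything as fake more often) without improving sensitivity, which shows up as better fake-news detection alongside worse real-news detection.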
... Though useful in retrospect, most methods struggle to detect campaigns as they unfold. Early detection of foreign malign influence is important because taking preventive measures in the early stages of an influence campaign on social media can permanently stunt the campaign's growth [8]. Improving the current methods requires looking at a combination of the attacker methods and victim communities. ...
Foreign information operations conducted by Russian and Chinese actors exploit the United States' permissive information environment. These campaigns threaten democratic institutions and the broader Westphalian model. Yet, existing detection and mitigation strategies often fail to identify active information campaigns in real time. This paper introduces ChestyBot, a pragmatics-based language model that detects unlabeled foreign malign influence tweets with up to 98.34% accuracy. The model supports a novel framework to disrupt foreign influence operations in their formative stages.
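ChestyBot's architecture is not detailed in this excerpt. As a loose illustration of the pragmatics-based idea (scoring tweets by manipulative language cues rather than topic), here is a minimal sketch with invented cue lexicons; a real system would feed such features, among many others, into a trained language model:

```python
import re

# Hypothetical pragmatic cue lexicons -- illustrative only, not
# ChestyBot's actual feature set, which the paper defines itself.
URGENCY = {"now", "wake", "must", "before"}
DIVISION = {"they", "them", "us", "our", "elites", "regime"}

def pragmatic_score(tweet: str) -> float:
    """Score a tweet by the density of manipulative pragmatic cues.

    Returns the fraction of tokens matching a cue lexicon.
    """
    tokens = re.findall(r"[a-z']+", tweet.lower())
    if not tokens:
        return 0.0
    hits = sum(t in URGENCY or t in DIVISION for t in tokens)
    return hits / len(tokens)

benign = pragmatic_score("Lovely weather at the lake this weekend")
suspect = pragmatic_score("Wake up now, they are lying to us before it is too late")
```

The design point is that pragmatic cues (urgency, in-group/out-group framing) can flag influence content even when it is topically unremarkable, which is what enables detection of unlabeled campaign tweets in their formative stages.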
... It is one of the goals of this article to paint a more accurate and nuanced portrait of detransition experiences; one that will offer the reader greater understanding as well as increased psychological immunity against the ploy of disinformation-fuelled detransition panic. Since spotting fallacies or misleading tactics has been demonstrated to be a more effective strategy for combating the spread of disinformation than "myth busting" or "fact-checking" alone, another goal of this article is to expose the underbelly of detrans panic (see Lewandowsky & van der Linden, 2021;van der Linden et al., 2021). Finally, this article aims to take a step toward expanding our understanding of gender and nonlinear gender paths and reducing detransition stigma. ...
This article examines the growing polarization, misinformation, and disinformation surrounding gender diversity, with a particular focus on detransition. In the current context of sociopolitical division, misleading portrayals of detransition experiences incite moral panic, fuel negative social attitudes toward people who detransition and gender-diverse individuals, and are instrumentalized to restrict access to gender-affirming care. In response, this article aims to offer a more accurate and nuanced understanding of detransition, to counter disinformation-driven panic, and to reduce the stigma experienced by the people concerned. Drawing on multiple sources of knowledge, the article meticulously deconstructs the foundations of the moral panic around detransition, revealing an interplay of cisnormative and transnormative biases. By exposing these biases, the article encourages a more thoughtful and inclusive approach to gender and nonlinear gender paths, and highlights the diversity of lived experiences as well as the multiplicity of factors that can motivate detransition. It concludes with a series of recommendations and invitations, notably to broaden our view of gender toward a perspective that depathologizes detransition and nonlinear gender paths and moves beyond the trans–cis binary.
... However, the Internet as a highly dynamic media environment poses numerous challenges and new demands on citizens, as there are systematic differences between online and offline environments (e.g., Kozyreva et al., 2020). The challenges lie in persuasive and manipulative choice architectures, promotion of sources of questionable quality, biased reporting, advertisements and clickbait, which are further exacerbated by the distracting nature of Internet environments and general information overload (Kozyreva et al., 2020;Lewandowsky & van der Linden, 2021). This circumstance is partly attributed to the lack of gatekeepers, quality control, and regulation on the Internet, as well as low barriers to publishing information, which is further exacerbated by the use of social bots (Metzger & Flanagin, 2015;Meßmer et al., 2021;Shao et al., 2018). ...
... Furthermore, there is a continued influence effect (Lewandowsky et al., 2012;Rapp, 2016), which means that mere exposure to inaccurate "facts" can cause people to incorporate the expressed misinformation into their understanding, even if their pre-existing understanding was accurate and even if it is later debunked. These findings further highlight the importance of fostering critical evaluation skills to build resilience to an online environment that seeks to manipulate and polarize (e.g., Lewandowsky & van der Linden, 2021). ...
... In our study, successful foraging outcomes were made salient by a visual cue (i.e., splash), although people can also deploy metacognitive strategies to infer latent performance or skill from overt behavior 5,14 , providing additional mechanisms for guiding selective social learning. Future work can explore the extent to which these mechanisms (together with our ability to discount correlated social information 60 ) may offer a degree of natural protection against the spread of misinformation 61 and the formation of echo chambers through homophilic social transmission 62 . ...
Human cognition is distinguished by our ability to adapt to different environments and circumstances. Yet the mechanisms driving adaptive behavior have predominantly been studied in separate asocial and social contexts, with an integrated framework remaining elusive. Here, we use a collective foraging task in a virtual Minecraft environment to integrate these two fields, by leveraging automated transcriptions of visual field data combined with high-resolution spatial trajectories. Our behavioral analyses capture both the structure and temporal dynamics of social interactions, which are then directly tested using computational models sequentially predicting each foraging decision. These results reveal that adaptation mechanisms of both asocial foraging and selective social learning are driven by individual foraging success (rather than social factors). Furthermore, it is the degree of adaptivity—of both asocial and social learning—that best predicts individual performance. These findings not only integrate theories across asocial and social domains, but also provide key insights into the adaptability of human decision-making in complex and dynamic social landscapes.
... Although some reviews have covered specific types of educational interventions, such as inoculation techniques (e.g., Lewandowsky & Van Der Linden, 2021) or lie detection trainings (e.g., Driskell, 2012), they do not cover the breadth of existing educational interventions to misinformation. This article seeks to close this gap, motivated by misinformation as a persistent problem of large societal impact and by calls to include misinformation in educational curricula (Schwartz, 2021). ...
Misinformation can have severe negative effects on people’s decisions, behaviors, and on society at large. This creates a need to develop and evaluate educational interventions that prepare people to recognize and respond to misinformation. We systematically review 107 articles describing educational interventions across various lines of research. In characterizing existing educational interventions, this review combines a theory-driven approach with a data-driven approach. The theory-driven approach uncovered that educational interventions differ in terms of how they define misinformation and regarding which misinformation characteristics they target. The data-driven approach uncovered that educational interventions have been addressed by research on the misinformation effect, lie detection, information literacy, and fraud trainings, with each line of research yielding different types of interventions. Furthermore, this article reviews evidence about the interventions’ effectiveness. Besides identifying several promising types of interventions, comparisons across different lines of research yield open questions that future research should address to identify ways to increase people's resilience towards misinformation.
... that exposing individuals to weakened counterarguments can build resistance to persuasion (McGuire, 1964;Compton, 2013). It has been widely applied to reduce susceptibility to misinformation, often as a form of "prebunking" (Lewandowsky & van der Linden, 2021). ...
The Continued Influence Effect (CIE) refers to the persistent impact of misinformation on beliefs or reasoning, even after its retraction. Traditional accounts attribute CIE to memory failures – either in updating mental models or retrieving corrections. In contrast, newer theories propose that CIE persists despite successful encoding and retrieval of retractions, due to motivational and reasoning-based processes. Across six experiments (N = 1,446), we tested these competing explanations. Participants frequently relied on misinformation even when recognizing it as misleading, and those who knowingly used it showed greater overall reliance. However, after debriefing, CIE declined most among participants who identified it as misleading. Furthermore, participants cited perceived relevance and explanatory value as reasons for misinformation use, highlighting goal-directed reasoning. These patterns were robust across various conditions, including prebunking interventions and source credibility manipulations.
To integrate these findings, we propose the Dynamic Inference Optimization (DIO) model, framing CIE as a trade-off between conserving cognitive resources and minimizing uncertainty (i.e., model entropy). DIO suggests that CIE occurs because misinformation is weighted over retraction due to its explanatory power, forming a low-entropy model that requires minimal cognitive effort. However, maladaptive mismatches between cognition and environment may strategically increase model entropy, enabling belief revision by reweighting information probabilities, at the cost of increased cognitive effort. This model offers a unified, process-level account of CIE grounded in principles of adaptive reasoning and cognitive effort. Importantly, DIO fits into dynamic models of cognition, offering a more nuanced and ecologically valid explanation of how misinformation continues to influence reasoning.
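The entropy trade-off at the heart of DIO can be made concrete with a toy two-hypothesis belief. This is an illustrative sketch of the claim that misinformation yields a confident, low-entropy model while accepting a retraction leaves the event unexplained, not the authors' formal model:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def belief(w_misinfo):
    """Toy belief over two hypotheses: the (retracted) misinformation-backed
    cause of an event versus 'cause unknown'. Higher weight on the
    misinformation yields a sharper, lower-entropy model of the event.
    """
    return [w_misinfo, 1.0 - w_misinfo]

# Relying on misinformation gives a confident, low-entropy model...
low = entropy(belief(0.95))
# ...while accepting the retraction leaves the event unexplained,
# a maximally uncertain state that costs effort to resolve.
high = entropy(belief(0.5))
```

On DIO's account, belief revision requires passing through the high-entropy state, which is why the continued influence effect persists unless something (such as a debriefing) makes the mismatch with the environment salient enough to justify that cognitive effort.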
... Possible spread of misinformation and disinformation is also concerning (Hu, 2023). Disinformation occurs when an actor knowingly disseminates false information, whereas misinformation (fake news) is the circulation of incorrect information that may emerge unintentionally (Lewandowsky and van der Linden, 2021). AI chatbots could be used to spread climate misinformation. ...
Educators and students are increasingly using a subset of artificial intelligence (AI) large language models (LLMs) like ChatGPT to learn about climate change. Others speculate that learning will be revolutionized forever with the mainstreaming of AI-based sites like ChatGPT. Less is known about the quality of AI prompt responses about climate change. This study examines 100 ChatGPT climate change prompts to examine what exploratory themes emerge from climate response queries. The results show the presence of the following themes within ChatGPT responses: passive learning, lack of transparency, desensitization and low environmental concern language. Climate educators, students, and parents who are curious about ChatGPT’s functionality and the accuracy of its responses on climate change may find the results useful.
... Digital-native voters, who rely on social media as their main source of political information, are vulnerable to misinformation and hoaxes, especially amid social media algorithms that amplify echo-chamber and filter-bubble effects, in which individuals are more likely to be exposed to information that matches their existing views without critical verification (Erickson, 2024;Interian et al., 2023). Studies show that exposure to political misinformation can shape mistaken opinions that are difficult to correct, even after clarification is provided (Lewandowsky & van der Linden, 2021;Pantazi et al., 2021) (Bouaamri et al., 2024;Koskelainen et al., 2023;Oh et al., 2021;Yang et al., 2021). Low political literacy can make these voters more susceptible to hoax-based political propaganda, which can influence voter preferences and weaken the quality of democracy (Bringula et al., 2022;Karolčík et al., 2025). The results of this study are expected to contribute to understanding the dynamics of political information dissemination at the local level and to formulating more effective strategies for improving the public's political literacy. ...
In the era of digital democracy, election hoaxes pose a serious threat to the quality of democracy, especially among digital-native voters. This study aims to measure the vigilance of digital-native voters in Pariaman City toward election hoaxes and to analyze the factors that influence it, using a quantitative survey-based approach. The results show that voter vigilance remains low: only 15% of respondents actively verify information, while the majority (60%) rarely or never check the veracity of the news they receive. In addition, most digital-native voters rely on less credible sources, such as WhatsApp chat groups and social media, for political information. These findings underscore that political literacy encompasses not only an understanding of the political system and voting rights, but also the skill to sift information in an increasingly complex digital ecosystem. A more comprehensive digital-literacy strategy is therefore needed, built on collaboration among government, academia, and civil-society organizations, to raise digital-native voters' critical awareness in identifying valid information and preventing the spread of political disinformation.
... In a cross-national randomized controlled trial, Spampatti et al. (2024) found that climate disinformation exerted detrimental effects on affective, cognitive, and behavioral responses to climate change. Their registered report also revealed that common interventions to combat disinformation (Lewandowsky, 2021;Lewandowsky & van der Linden, 2021) could not prevent the negative effects of repeated exposure to climate disinformation (Spampatti, 2024). McCright et al. (2016) found that statements promoting the benefits of climate action are less effective when presented alongside climate denial messages (see also van der Linden et al., 2017). ...
In popular media, accurate climate information and climate disinformation often coexist and present competing narratives about climate change. As climate disinformation can undermine public support for climate policies and trust in climate science, it is crucial to understand what leads to exposure and acceptance of climate disinformation. Whereas previous research examined the effects of disinformation on climate beliefs, little is known about how people seek climate-related content (Pro- or Anti-climate) and how this varies between cross-cultural contexts. In a preregistered experiment, we studied how individuals sequentially sample and process climate-related information and disinformation. Participants from the U.S., China and Germany (N total = 2,226) freely sampled real-world climate-related statements. Across 15 rounds, participants decided between two boxes containing Pro-climate or Anti-climate statements, respectively. Overall, reading a statement influenced climate concern in all countries. Participants preferred the box that was better aligned with their initial climate beliefs, and this confirmatory tendency intensified in later rounds. While climate concern was mostly stable, in the U.S., climate concern levels and box choices mutually reinforced each other, leading to greater polarization within the sample over time. The paradigm offers new perspectives on how people process and navigate conflicting narratives about climate change.
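The mutual reinforcement between concern and box choice described above can be sketched as a toy agent simulation. All parameter values below (the nudge size, the choice rule) are illustrative assumptions, not the study's fitted model:

```python
import random

def simulate_sampling(initial_concern, rounds=15, gain=0.05, seed=1):
    """Toy simulation of the two-box sampling paradigm (illustrative only).

    concern lies in [0, 1]; each round the agent picks the Pro- or
    Anti-climate box with probability tracking its current concern
    (confirmatory sampling), and reading a statement nudges concern
    toward the sampled side.
    """
    rng = random.Random(seed)
    concern = initial_concern
    choices = []
    for _ in range(rounds):
        pick_pro = rng.random() < concern
        choices.append("Pro" if pick_pro else "Anti")
        concern += gain if pick_pro else -gain  # reading shifts concern
        concern = min(1.0, max(0.0, concern))
    return concern, choices

final, picks = simulate_sampling(initial_concern=0.8)
```

Because each sampled statement shifts concern toward the side just read, agents starting near either extreme drift further toward it, which is the polarization-over-rounds dynamic the U.S. sample exhibited.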