Figure 1 - uploaded by Sander van der Linden
A Vaccine for Brainwash. From the original article by McGuire (1970) in Psychology Today. Copyright held by an unknown person.
Source publication
There has been increasing concern with the growing infusion of misinformation, or “fake news”, into public discourse and politics in many western democracies. Our article first briefly reviews the current state of the literature on conventional countermeasures to misinformation. We then explore proactive measures to prevent misinformation from find...
Context in source publication
Context 1
... about people's general vulnerability to political indoctrination goes back many decades (McGuire, 1961), arising at the time from disquietude about persuasive techniques employed by totalitarian states. The larger question of how to go about developing attitudinal "resistance" against unwanted persuasion attempts ultimately led McGuire to develop "inoculation theory", which, for a popular audience, he described as a "vaccine for brainwash" (McGuire, 1970); see Figure 1. ...
Citations
... He posits that "just as vaccines trigger the production of antibodies to help confer immunity against future infection, the same can be achieved with information" (p. 464). Real-time debunking and misinformation detection techniques are important organizational capabilities, but only after inoculation, that is, prevention measures, have failed (Lewandowsky and van der Linden, 2021). A systematic review by Skafle et al. (2022) revealed that scholarship concerned with COVID-19-related misinformation on social media primarily deals with the types of misinformation (conspiracy claims, medical misinformation, vaccine development) and its effects, such as vaccine hesitancy. ...
... Existing research heavily focuses on the reactive phases of misinformation containment, such as detection (Asr and Taboada, 2019; Shu et al., 2017) and debunking (Chan et al., 2017). According to Lewandowsky and van der Linden (2021) and Ecker et al. (2022), the preventive phase of misinformation is largely overlooked. Research that explains the mechanisms of misinformation prevention, however, could contribute to our understanding of how to reduce the amount of misinformation that is created in the first place. ...
Purpose – This study investigates the communication behavior of public health organizations on Twitter during the COVID-19 vaccination campaign in Brazil. It contributes to the understanding of the organizational framing of health communication by showcasing several instances of framing devices that borrow from (Brazilian) internet culture. The investigation of this case extends our knowledge by providing a rich description of the organizational framing of health communication to combat misinformation in a politically charged environment.
Design/methodology/approach – The authors collected a Twitter dataset of 77,527 tweets and analyzed a purposeful subsample of 536 tweets that contained information provided by Brazilian public health organizations about COVID-19 vaccination campaigns. The data analysis was carried out quantitatively and qualitatively by combining social media analytics techniques and frame analysis.
Findings – The analysis showed that Brazilian health organizations used several framing devices that have been identified by previous literature such as hashtags, links, emojis, or images. However, the analysis also unearthed hitherto unknown visual framing devices for misinformation prevention and debunking that borrow from internet culture such as ‘infographics,’ ‘pop culture references,’ and ‘internet-native symbolism.’
Practical implications – The findings inform decision-makers and public health organizations about framing devices that are tailored to internet-native audiences and can guide strategies to carry out information campaigns in misinformation-laden social media environments.
Social implications – The findings of this case study expose the often-overlooked cultural peculiarities of framing information campaigns on social media. The report of this study from a country in the Global South helps to contrast several assumptions and strategies that are prevalent in (health) discourses in Western societies and scholarship.
Originality/value – This study uncovers unconventional and barely addressed framing devices of health organizations operating in Brazil, which provides a novel perspective for the body of research on misinformation. It contributes to existing knowledge about frame analysis and broadens the understanding of framing devices that borrow from internet culture. It is a call for a new frontier in misinformation research that treats internet culture as part of organizational strategies for successfully combating misinformation.
... To this end, the goal of this paper is not only to evaluate news items, i.e., to understand whether the news is real or fake, but also to develop a prebunking system [4], i.e., the process of debunking lies, fake news, or sources before they strike, by evaluating the trustworthiness of news providers. ...
Technological development, combined with the evolution of the Internet, has made it possible to reach an increasing number of people over the years and has given them access to information published on the network. The growing volume of fake news generated daily, combined with the ease with which it can be shared, has created a phenomenon so large that it quickly became uncontrollable. Furthermore, malicious content is produced with increasingly high quality, so even professional experts, such as journalists, have difficulty recognizing which news is fake and which is real. This paper aims to implement an architecture that provides a service to end users that assures the reliability of news providers and the quality of news based on innovative tools. The proposed models take advantage of several machine learning approaches for fake news detection tasks and take into account well-known attacks on trust.
Finally, the implemented architecture is tested with a well-known dataset and shows how the proposed models can effectively identify fake news and isolate malicious sources.
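The machine-learning approach to fake news detection described above can be illustrated with a minimal sketch. The snippet below is a toy bag-of-words Naive Bayes classifier in plain Python; the training texts, labels, and function names are invented for illustration and bear no relation to the paper's actual models, features, or dataset.

```python
import math
from collections import Counter

# Toy illustration of ML-based fake news detection: a bag-of-words
# Naive Bayes classifier. All texts and labels here are invented.

def train(docs):
    """docs: list of (text, label). Returns per-label word counts and doc totals."""
    counts = {}         # label -> Counter of word occurrences
    totals = Counter()  # label -> number of training documents
    for text, label in docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    return counts, totals

def predict(counts, totals, text):
    """Pick the label with the highest log-posterior under Laplace smoothing."""
    vocab = set(w for c in counts.values() for w in c)
    n_docs = sum(totals.values())
    best, best_lp = None, float("-inf")
    for label, c in counts.items():
        lp = math.log(totals[label] / n_docs)   # class prior
        denom = sum(c.values()) + len(vocab)    # Laplace (add-one) smoothing
        for w in text.lower().split():
            lp += math.log((c[w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

train_docs = [
    ("miracle cure doctors hate this trick", "fake"),
    ("shocking secret they do not want you to know", "fake"),
    ("ministry of health publishes vaccination schedule", "real"),
    ("study in peer reviewed journal reports trial results", "real"),
]
model = train(train_docs)
print(predict(*model, "shocking miracle trick doctors hate"))  # likely "fake"
```

Production systems replace this toy with richer features (TF-IDF, embeddings), far larger corpora, and, as the paper notes, source-trust signals that are robust to known attacks on trust.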
... This provides an explanatory basis for the role of COR. Although there are studies (Banas, 2020;Pryor & Steinfatt, 1978) that have broadened the horizons of the inoculation theory, it has been applied most to explain how individuals resist persuasion through pretraining (Compton & Pfau, 2005;Lewandowsky & van der Linden, 2021). A few studies have used inoculation theory to explain the effect of another type of pretraining (i.e. ...
... (2) This study provides a new perspective on inoculation theory in terms of CBS. Previous inoculation theory studies have focused on how users can be inoculated to resist persuasion (Compton & Pfau, 2005;Lewandowsky & van der Linden, 2021). As there are studies (e.g. ...
Social media provides individuals with tremendous opportunities to follow nearly unlimited influencers online, prompting scholars’ concern about confirmation bias and the need to address it. Based on data from 894 participants, this study explores the positive effect of perceived influence on confirmation bias in social media contexts and the negative moderating effect of civic online reasoning on this relationship. These findings indicate that efforts in public media literacy education for citizens must be enhanced to transform subconscious defense mechanisms into mature coping skills through critical thinking.
... The terms "misinformation" and "disinformation" are both referenced to describe the nature of information being untrue (Shin et al., 2018). While misinformation refers to false or misleading information in general, the motivation for the spread of misinformation is unknown and not necessarily with the intent to manipulate (Lewandowsky and Van Der Linden, 2021). Disinformation, on the other hand, specifically refers to false information being spread with the deliberate intent to manipulate or confuse the public (Institute for Public Relations, 2019). ...
Purpose
This study aims to propose a model that delineated the diffusion process of product-harm misinformation on social media. Drawing on theoretical insights from cue diagnosticity and corporate associations, the proposed model mapped out how consumers' information skepticism and perceived content credibility influence their perceived diagnosticity of the product-harm misinformation and corporate ability (CA) associations with the company being impacted, which in turn influenced their trust toward the company and negative word-of-mouth (NWOM) intention.
Design/methodology/approach
A survey was conducted with 504 US consumers to empirically test the proposed model. Following the survey, in-depth interviews were conducted with 11 communication professionals regarding the applicability of the model.
Findings
When exposed to product-harm misinformation on social media, consumers' perceived diagnosticity of misinformation was negatively impacted by their information skepticism and positively impacted by perceived content credibility of misinformation. Perceived diagnosticity of product-harm misinformation negatively impacted consumers' CA associations, which then led to decreased trust and increased NWOM intention. Findings from the interviews further supported the diffusion process and provided insights on strategies to combat product-harm misinformation. Strategies shared by the interviewees included preparedness and social listening, proactive outreach and building strong CA associations as preventative measures.
Originality/value
This study incorporates the theoretical frameworks of cue diagnosticity and corporate associations into the scholarship of misinformation and specifically addresses the unique diffusion process of product-harm misinformation on social media. This study provides insights and tangible recommendations for communication professionals to combat product-harm misinformation.
... The social media ecosystem is often polluted by false information, extreme partisan messages, financially motivated hoaxes, hate speech, rumor, and satire. To address these issues, different formats of fact-checking, from prebunking (i.e., alerting individuals to the fakeness of incoming information; Lewandowsky and van der Linden, 2021) to debunking/correction (Chung and Kim, 2021; Kligler-Vilenchik, 2022), have been employed to help individuals better assess the veracity of information. ...
... For example, a video deepfake [81] intended to influence public opinion in a certain direction (perhaps by misrepresenting the actions of a political figure) may be unethical because the people who are influenced are not made fully aware of this intention. Had they been aware, they would have assigned less credence to the information contained in the video [74]. See Box 2 for a more general discussion of intent as it relates to the ethics of influence. ...
Agents often exert influence when interacting with humans and non-human agents. However, the ethical status of such influence is often unclear. In this paper, we present the SHAPE framework, which lists reasons why influence may be unethical. We draw on literature from descriptive and moral philosophy and connect it to machine learning to help guide ethical considerations when developing algorithms with potential influence. Lastly, we explore mechanisms for governing algorithmic systems that influence people, inspired by mechanisms used in journalism, human subject research, and advertising.
... As a way of achieving the objectives stated above and in section 1, and as a methodological proof-of-concept and test bed, we want to tackle the problem of acquiring and practising skills and notions that enable us to detect false or dubious information. This is becoming a critical skill as it is increasingly easy to produce, distribute, and weaponize misinformation with serious real-world consequences [3,93]. The idea of using games to approach this problem is not new [12,40,56,91], but results have been mixed at best, possibly because the predominant format of these games seems to be quiz-like, resembling more a lecture than a game [39]. ...
... Adams et al. [3] define the terms misinformation and disinformation and, together with Lewandowsky and van der Linden [93], provide a discussion of the efforts and risks involved in addressing and countering these. These are sometimes called Commercial Off-The-Shelf (COTS) games, but we think this term may be a little restrictive, so it is best avoided. ...
... Among these potential interventions are inoculation or empathetic-refutational techniques. 63,64 Despite the fact that the rates of physicians exhibiting higher-than-average CAM endorsement and lower-than-average vaccine confidence and recommendation identified in our LPA have no direct precedents, the resulting patterns mirror the results from previous studies investigating CAM use among HCPs and the general population. As we mentioned in the introductory section, HCPs tend to be as likely as the general population to use CAM, 24 even though some studies have found strikingly high rates of CAM use and referral among HCPs: e.g., 39% in a study by Posadzki et al., 25 55% in a study by Berretta et al., 26 and 68% in a study by Linde et al. 27 A survey on the use of several CAM techniques, such as acupuncture, massage, homeopathy, and spiritual healing, during the last 12 months among the general European population showed an overall average rate of CAM use of 25.9%, with Portugal exhibiting the lowest rate of CAM use (12.9%) and Germany the highest (39.5%) among the four countries included in the present study; 31.2% ...
Vaccine hesitancy has become a threat to public health, especially as it is a phenomenon that has also been observed among healthcare professionals. In this study, we analyzed the relationship between endorsement of complementary and alternative medicine (CAM) and vaccination attitudes and behaviors among healthcare professionals, using a cross-sectional sample of physicians with vaccination responsibilities from four European countries: Germany, Finland, Portugal, and France (total N = 2,787). Our results suggest that, in all the participating countries, CAM endorsement is associated with lower frequency of vaccine recommendation, lower self-vaccination rates, and being more open to patients delaying vaccination, with these relationships being mediated by distrust in vaccines. A latent profile analysis revealed that a profile characterized by higher-than-average CAM endorsement and lower-than-average confidence and recommendation of vaccines occurs, to some degree, among 19% of the total sample, although these percentages varied from one country to another: 23.72% in Germany, 17.83% in France, 9.77% in Finland, and 5.86% in Portugal. These results constitute a call to consider health care professionals’ attitudes toward CAM as a factor that could hinder the implementation of immunization campaigns.
... and improving real information credibility assessment (d=0.20) and real information sharing intention (d=0.09). This is consistent with existing research that psychological inoculation increases people's immunity to misinformation and makes them less likely to believe the misinformation they subsequently encounter [5,57]. Second, psychological inoculation did not effectively influence misinformation sharing intention (P=.12). ...
Background
The prevalence of misinformation poses a substantial threat to individuals’ daily lives, necessitating the deployment of effective remedial approaches. One promising strategy is psychological inoculation, which pre-emptively immunizes individuals against misinformation attacks. However, uncertainties remain regarding the extent to which psychological inoculation effectively enhances the capacity to differentiate between misinformation and real information.
Objective
To reduce the potential risk of misinformation about digital health, this study aims to examine the effectiveness of psychological inoculation in countering misinformation with a focus on several factors, including misinformation credibility assessment, real information credibility assessment, credibility discernment, misinformation sharing intention, real information sharing intention, and sharing discernment.
Methods
Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines, we conducted a meta-analysis by searching 4 databases (Web of Science, APA PsycINFO, Proquest, and PubMed) for empirical studies based on inoculation theory and outcome measure–related misinformation published in the English language. Moderator analyses were used to examine the differences in intervention strategy, intervention type, theme, measurement time, team, and intervention design.
Results
Based on 42 independent studies with 42,530 subjects, we found that psychological inoculation effectively reduces misinformation credibility assessment (d=–0.36, 95% CI –0.50 to –0.23; P
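The effect sizes reported in this meta-analysis are standardized mean differences (Cohen's d). As an illustration only, the sketch below shows how such a value and its approximate 95% CI are computed from two groups' summary statistics; the means, SDs, and sample sizes are invented, not taken from the studies above.

```python
import math

# Illustration of the effect-size metric (Cohen's d) used in meta-analyses
# like the one above. The group statistics below are invented examples.

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference between two independent groups."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def d_confidence_interval(d, n1, n2, z=1.96):
    """Approximate 95% CI from the large-sample variance of d."""
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    se = math.sqrt(var)
    return d - z * se, d + z * se

# Hypothetical example: inoculated group rates misinformation as less
# credible (lower mean on a credibility scale), giving a negative d.
d = cohens_d(3.1, 1.0, 120, 3.5, 1.1, 120)   # d ~ -0.38
lo, hi = d_confidence_interval(d, 120, 120)
print(round(d, 2), round(lo, 2), round(hi, 2))
```

A full meta-analysis would then pool many such per-study d values (e.g., with a random-effects model), which is how summary estimates like d=–0.36 with a 95% CI are obtained.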
... It is plausible that as consumers, we may be more likely to question the credibility of information when we purposefully seek out that information. In fact, research has demonstrated that pre-bunking, or warning people that misinformation may be ahead, is a useful strategy in immunizing audiences against the impact of false information [43]. ...
Throughout the COVID-19 pandemic, the American College Health Association (ACHA) has partnered with CommunicateHealth (CH) to develop COVID-19 mitigation resources for colleges and universities. In 2021, the CH team conducted a series of applied research activities to gain a nuanced understanding of factors that shape perceptions of risk and drive vaccine hesitancy among campus audiences—especially college students who are emerging adults (approximately ages 18 to 22). Based on our findings, CH and ACHA identified key traits of vaccine-hesitant college students and implications for future vaccine communication campaigns. First, vaccine-hesitant students are more likely to ask “why” and “how” questions such as “Why do I need to get vaccinated?” and “How was the vaccine developed and tested?”. Secondly, these students want to have open, authentic dialogue rather than simply accepting health recommendations from a trusted source. Finally, the CH team noted that vaccine-hesitant students were not highly motivated by their own personal risk of getting sick from COVID-19; concern about spreading COVID-19 to others was a much stronger motivating factor. Leveraging these insights, CH and ACHA developed strategies to apply health literacy principles to reach vaccine-hesitant college students with the right information at the right time—and to leverage relevant motivators and overcome barriers to vaccination. By implementing these strategies, CH and ACHA developed clear and empowering educational materials about COVID-19 vaccination tailored to the unique information needs of vaccine-hesitant students.