Table 2 - uploaded by Brian E Weeks
Logistic Regression Predicting Obama Vote 

Source publication
Article
Full-text available
Using national telephone survey data collected immediately after the 2008 U.S. presidential election (N = 600), this study examines real-world consequences of inaccurate political rumors. First, individuals more willingly believe negative rumors about a candidate from the opposing party than from their party. However, rumor rebuttals are uniformly...

Contexts in source publication

Context 1
... next ran an identical model but added belief in rumors about both candidates, which increased the variance explained to .61, a substantively important improvement in our ability to explain vote choice. Turning to the model coefficients, we find support for H3 (see Table 2). Obama rumor belief and a vote for Obama were negatively and significantly related, indicating that as the number ... [Footnote 2: The model specification that includes both Democratic and Republican party affiliation dummy variables does not support H2 either, but it does reveal something unexpected.] ...
Context 2
... how much did a one-unit change in rumor belief correspond with the likelihood of voting for or against a candidate? Using the observed-value approach based on coefficients from the model in Table 2 (see Hanmer & Kalkan, 2013), we estimated the predicted probabilities of a vote for Obama across levels of rumor belief. The resulting probabilities add further support to the hypothesis that rumor belief is related to vote choice. ...
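The observed-value approach described here can be sketched in a few lines: keep every respondent's other covariates at their observed values, set the focal variable to each counterfactual level for everyone, and average the resulting predicted probabilities. The coefficients and data below are illustrative placeholders, not the estimates from Table 2.

```python
import numpy as np

def observed_value_probs(beta, X, var_idx, levels):
    """Observed-value predicted probabilities (in the spirit of
    Hanmer & Kalkan, 2013): for each counterfactual level of the focal
    variable, fix that column at the level for every observation, keep
    all other covariates as observed, and average P(y = 1)."""
    probs = []
    for level in levels:
        Xc = X.copy()
        Xc[:, var_idx] = level            # everyone at this rumor-belief level
        linpred = Xc @ beta               # X includes an intercept column
        probs.append(np.mean(1.0 / (1.0 + np.exp(-linpred))))
    return np.array(probs)

# Illustrative data: intercept, rumor-belief score (0-4), one control.
rng = np.random.default_rng(0)
n = 600
X = np.column_stack([np.ones(n), rng.integers(0, 5, n), rng.normal(size=n)])
beta = np.array([1.0, -0.8, 0.3])         # hypothetical: belief lowers P(vote)

p = observed_value_probs(beta, X, var_idx=1, levels=range(5))
```

Plotting `p` against the belief levels traces how the predicted probability of a vote falls as rumor belief rises, under these made-up coefficients.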

Similar publications

Preprint
Full-text available
The dynamics and influence of fake news on Twitter during the 2016 US presidential election remains to be clarified. Here, we use a dataset of 171 million tweets in the five months preceding the election day to identify 30 million tweets, from 2.2 million users, which contain a link to news outlets. Based on a classification of news outlets curated...
Article
Full-text available
We combine a model of symmetric information with selfish and office‐motivated politicians and a Regression Discontinuity Design analysis based on close municipal elections to study partisan bias in the allocation of drought aid relief in Brazil. We identify a novel pattern of distributive politics whereby partisan bias materialises only before muni...
Article
Full-text available
We show that in the month prior to the 2003 Argentine presidential election, the expenditures of an Argentine poverty relief program exhibit a partisan bias. Taking into consideration the number of potential recipients (the unemployed with children 18 years old or less), the counties that were ideologically against the incumbent received a dispropo...
Article
Full-text available
The spread of conspiracy theories and misinformation poses substantial threats to democracy around the world. In the United States, entrenched political polarization is both a consequence and a ramification of the spread of biased and false information. Much of this misinformation is spread online, especially on social media. Of all the social medi...
Article
Full-text available
Following Donald Trump’s unexpected victory in the 2016 US presidential election, the American Association for Public Opinion Research announced that “the polls clearly got it wrong” and noted that talk of a “crisis in polling” was already emerging. Although the national polls ended up being accurate, surveys just weeks before the election substant...

Citations

... 146 Currently, applied psychology is consistently and aggressively used in media during pre-election campaigns. 150,151 Whether anti-populism or moralism is sufficient against populism 152,153 is a moot question. Self-serving attribution bias describes people's willingness to cast themselves in a favorable light. [154][155][156] German sociologist Maximilian Carl Emil Weber (1864-1920) defined power as "any chance of imposing one's will against reluctance within a social relationship, as well as what that opportunity is based on". ...
Article
Full-text available
Background: Tremendous achievements in healthcare and science over the past 200 years have enhanced life expectancy, in parallel with a shift from dogma to humanistic liberal education. Advances against cancer have included vaccines targeting its causes (e.g., hepatitis B-induced liver cancer and human papillomavirus-induced cervical cancer) along with improved cancer survival in children. In contrast, many developments in cancer care, frequently touted as "discoveries" or "breakthroughs" in media headlines, have proved ephemeral rather than game changers. In reality, cancer incidence is increasing, and relapse and mortality rates have not changed substantially. In this respect, we face challenges today similar to those before the so-called Humboldt reform. The trend toward managerialism, with its focus on quantity in healthcare and science, endangers their integrity.
Methods: Given the complexity of integrity in healthcare and science, this review offers an in-depth examination of the foundations of action in healthcare and science; cancer as an illustrative case; the focus on quantity in healthcare; technology, publishing, marketing, and media; and predatory publishers, followed by the psychological and sociological factors that shape our perception.
Results: A complex, paradoxical transformation has occurred in which quality and humanism have been replaced by quantity, revenue, and marketing, together with "citation silence" (ignoring original findings) and increased corruption and misconduct. This shift explains why the integrity of healthcare and science is being eroded.
Conclusion: Countries and societies are only as strong as their healthcare and science, both of which are only as strong as their emphasis on quality and integrity. Awareness of this situation may represent a first step toward a renewed focus on accountability.
... Due to PMR, individuals may be particularly susceptible to negative information or rumors about opposing parties, which can directly influence their voting behavior (Jennings and Stroud 2023; Morris et al. 2020; Weeks and Garrett 2014). The impact of partisanship extends to misinformation, with individuals more likely to believe questionable claims that resonate with their partisan leanings and dismiss those that do not (Flynn et al. 2017; Jennings and Stroud 2023; Vegetti and Mancosu 2020). ...
Article
Full-text available
Misinformation poses a significant threat to the integrity of political systems, particularly in competitive authoritarian regimes (CARs), where it can distort public perception and undermine democratic processes. This study focuses on the 2023 Turkish general elections—a context characterized by widespread misinformation. While extensive research has been conducted on misinformation in democratic systems, where press freedom and digitalization foster a mix of reliable and misleading information, this investigation targets the unique challenges and media consumption patterns in CARs. Utilizing a nationally representative survey after the 2023 elections, we examine the association between media consumption (traditional and online) and susceptibility to misinformation among government and opposition voters. Our findings reveal that partisan news consumption significantly influences belief in misinformation, with individuals tending to believe claims aligning with their political affiliations while rejecting opposing claims. Moreover, television remains a dominant source of information in Turkey, unlike social media, which shows a limited impact on misinformation beliefs but possesses a conditional corrective potential for certain electorate segments. This study underscores the enduring influence of traditional media in CARs and suggests that while the theory of selective exposure and partisanship is applicable, the constrained information environment significantly shapes public perceptions and misinformation dynamics.
... However, in general, rumors can be understood as unverified information or statements shared among people that may be positive or negative and that circulate without confirmation [1]. Psychologically, rumors spread because people need factual information, self-enhancement, and social enhancement, the three key motivations in rumor transmission intentions [1], [2]. Consequently, rumors become an inevitable part of human life, including in political contexts such as elections. ...
... Secondly, we performed the main simulations of the rumor dynamics in Model (1), as shown in Figures 4 and 5. The transmission rates of the rumor were based on the values of α1 and α2 listed in Table 2. In accordance with the numerical experiments from the previous section, the input parameters yield R0 = 3.804 > 1, indicating the rumor will continue to circulate and remain stable in society. ...
Article
Full-text available
Rumors can be defined as unverified information or statements shared by people that may be positive or negative and circulate without confirmation. Since humans naturally seek factual information for social and self-enhancement purposes, rumors become an inevitable aspect of human life, including in politics, such as elections. The complexity of the electoral process, shaped by factors such as individual candidates, social circumstances, and particularly the media, leads to the dynamic spread of rumors in society. Thus, it is both interesting and important to understand the dynamics of rumor spreading, particularly in the context of elections. In this article, we formulate a mathematical model of rumor spread dynamics based on people's differing attitudes toward rumors. The model considers the spread of rumors about two candidates in an electoral context. From the model, we derived and investigated the basic reproductive number (R0) as a threshold for rumor spread and conducted a sensitivity analysis with respect to all the model parameters. Numerical experiments and simulations revealed that the number of people resistant to or disbelieving in rumors increases significantly in the first ten days and remains higher than the other subpopulations after at least the first seven days. Furthermore, we found that a high number of people directly affected by rumors, combined with sufficiently high rumor transmission rates for both candidates, is a necessary and sufficient condition for rumors to circulate rapidly and remain stable in society. The results of this study can inform campaign strategy in an electoral context.
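The threshold behavior reported here (R0 = 3.804 > 1 implying persistent circulation) mirrors the standard epidemic analogy. The sketch below is a minimal single-rumor ignorant-spreader-stifler model with R0 = α/δ, not the two-candidate model of the article; all parameter values are arbitrary.

```python
def simulate_rumor(alpha, delta, s0=0.001, days=60, dt=0.01):
    """Euler-integrated ignorant (i) / spreader (s) / stifler (r) model:
        i' = -alpha*i*s,  s' = alpha*i*s - delta*s,  r' = delta*s
    alpha is the transmission rate, delta the stifling rate, and
    R0 = alpha/delta decides whether the rumor takes off."""
    i, s, r = 1.0 - s0, s0, 0.0
    for _ in range(int(days / dt)):
        new_spreaders = alpha * i * s * dt
        new_stiflers = delta * s * dt
        i -= new_spreaders
        s += new_spreaders - new_stiflers
        r += new_stiflers
    return i, s, r

# R0 = 3.8 > 1: the rumor takes off and most of the population hears it.
i_end, s_end, r_end = simulate_rumor(alpha=0.38, delta=0.10)
# R0 = 0.5 < 1: the rumor fizzles out almost immediately.
i_sub, s_sub, r_sub = simulate_rumor(alpha=0.05, delta=0.10)
```

Comparing `r_end` (large) with `r_sub` (tiny) reproduces the R0 = 1 threshold that the article's analysis turns on.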
... Research from high-income democracies around the world has provided evidence of the detrimental consequences of misinformation on the conduct and outcome of elections. In the United States, for example, survey data from the 2008 presidential election found that the endorsement of negative rumors about Barack Obama was associated with a reduced likelihood of voting for him (Weeks and Garrett 2014). Similar evidence from Washington State (Reedy et al. 2014) and Oregon (Gastil et al. 2018), utilizing cross-sectional and panel data, respectively, revealed that disinformation influenced voter support for ballot measures on each state's electoral ticket. ...
... Individuals are more likely to believe political information that aligns with their partisan beliefs and less likely to believe information that does not align with these beliefs (Lodge and Taber 2013; Weeks and Garrett 2014), as partisan motivated reasoning drives individuals with strong political attachments to defend their existing beliefs even in the face of otherwise compelling evidence (Flynn et al. 2017; Peterson and Iyengar 2021). Political misinformation can be more difficult to counter than apolitical misinformation due to partisan attachments (Kunda 1990; Garrett et al. 2013), with the effectiveness of fact-checking methods yielding mixed results (Hameleers and van der Meer 2020; Iyengar and Hahn 2009; Nyhan and Reifler 2010). ...
Article
Full-text available
The global reach of misinformation has exacerbated harms in low- and middle-income countries faced with deficiencies in funding, platform engagement, and media literacy. These challenges have reiterated the need for the development of strategies capable of addressing misinformation that cannot be countered using popular fact-checking methods. Focusing on Kenya’s contentious 2022 election, we evaluate a novel method for democratizing debunking efforts termed “social truth queries” (STQs), which use questions posed by everyday users to draw reader attention to the veracity of the targeted misinformation in the aim of minimizing its impact. In an online survey of Kenyan participants ( N ~ 4,000), we test the efficacy of STQs in reducing the influence of electoral misinformation which could not have been plausibly fact-checked using existing methods. We find that STQs reduce the perceived accuracy of misinformation while also reducing trust in prominent disseminators of misinformation, with null results for sharing propensities. While effect sizes are small across conditions, assessments of the respondents most susceptible to misinformation reveal larger potential effects if targeted at vulnerable users. These findings collectively illustrate the potential of STQs to expand the reach of debunking efforts to a wider array of actors and misinformation clusters.
... In recent years, fact-checking has gained interest in the field of journalism and media studies. A range of research focuses on the rise of fact-checking (Graves, 2016), how fact-checking became a global movement (Graves, 2018), how it has grown as a transnational field (Lauer & Graves, 2024), the institutional logic and diversity of the fact-checking landscape (Lowrey, 2015), as well as the effectiveness (or lack thereof) of fact-checking (Amazeen et al., 2018; Nyhan & Reifler, 2010; Porter et al., 2018; Weeks & Garrett, 2014; Young et al., 2018). Another part of the research is dedicated more specifically to the practice of fact-checking, with Graves (2017) describing the five steps of a fact-check and Steensen et al. (2023) examining the benefits and limitations of live fact-checking. ...
Article
Full-text available
This study explores fact-checking practices in Ethiopia and Mali in times of conflict and in a context marked by increasing restrictions to press freedom. The objective is to understand how, in this hostile environment, fact-checkers in these two countries manage to carry out their activities. Our findings reveal that fact-checkers are often victims of online bullying and harassment and fear reprisal from governments. This pushes them to self-censor, avoiding working on sensitive topics, such as military issues in Mali. In addition, fact-checking organizations in both countries highlight the difficulty of accessing reliable sources. Consequently, they focus more on debunking viral social media content, thus effectively becoming content moderators who have turned away from the mission of holding leaders accountable, one of the primary functions of fact-checking. Regarding their role conception, fact-checkers in Ethiopia and Mali see themselves more as guides helping navigate the information disorder than “guardians of truth” or “truth keepers.”
... The spread of political misinformation and rumors can impede individuals' ability to make accurate political decisions, highlighting the impact of misinformation on political behavior (Weeks & Garrett, 2014). Political identity and epistemic beliefs also play a role in promoting misperceptions and biased information processing, ultimately influencing political beliefs and behaviors (Moore et al., 2021). ...
Article
Full-text available
Political polarization is a complex phenomenon with significant implications for democratic processes worldwide. This study investigates the cognitive mechanisms underlying political reinforcement learning and examines how environmental information influences political decision-making, resulting in diverse political behaviors and beliefs. The methodology employed encompasses descriptive analysis, systematic literature review, and content analysis. Data were sourced from various democratic countries to ensure a comprehensive and diverse perspective. Key findings indicate that both traditional and social media significantly shape political opinions, while cognitive biases and political motivations can lead to divergent interpretations of identical facts, culminating in polarized beliefs. Interventions that enhance cognitive flexibility and metacognitive insight, as well as those promoting civil discourse and reducing intergroup anxiety, were found to be effective in mitigating political polarization. This research provides valuable insights into the cognitive and social dynamics underlying political polarization and proposes strategies to reduce polarization and strengthen democratic institutions. Future research should prioritize the empirical validation of these models and the testing of interventions across diverse cultural and political contexts.
... Models for rumor spreading emerge naturally in the context of applied social sciences, with ramifications across various subjects such as politics, economics, and public health. In the political context, the authors in [27] empirically concluded that political rumoring can directly affect the electoral process. In [13], the authors investigate the characteristic features of financial rumors compared to rumors about other subjects. ...
Preprint
We introduce a non-Markovian rumor model in the complete graph on n vertices inspired by Daley and Kendall's model. For this model, we prove a functional law of large numbers (FLLN) and a functional central limit theorem (FCLT). We apply these results to a non-Markovian version of the rumor model introduced by Lebensztayn, Machado and Rodríguez.
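The original Markovian Daley-Kendall model that this preprint generalizes is straightforward to simulate exactly with an event-driven (Gillespie-style) scheme; the sketch below implements that classic model, not the non-Markovian variant studied in the preprint. Its well-known prediction is that roughly a fifth of the population never hears the rumor.

```python
import random

def daley_kendall(n, seed=0):
    """Event-driven simulation of the classic Daley-Kendall rumor model.
    Pairwise contacts: spreader+ignorant -> the ignorant starts spreading;
    spreader+spreader -> both stifle; spreader+stifler -> the spreader
    stifles. Runs until no spreaders remain; returns the final ignorant
    fraction."""
    rng = random.Random(seed)
    i, s, r = n - 1, 1, 0
    while s > 0:
        w_is = i * s                # spreader meets ignorant
        w_ss = s * (s - 1) / 2      # spreader meets spreader
        w_sr = s * r                # spreader meets stifler
        u = rng.uniform(0, w_is + w_ss + w_sr)
        if u < w_is:
            i, s = i - 1, s + 1
        elif u < w_is + w_ss:
            s, r = s - 2, r + 2
        else:
            s, r = s - 1, r + 1
    return i / n

# Average a few runs: the final ignorant fraction clusters near the
# classic ~0.2 value for large n.
frac = sum(daley_kendall(2000, seed=k) for k in range(5)) / 5
```

The FLLN proved in the preprint formalizes exactly this concentration of the final ignorant fraction as n grows.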
... One specific explanation for partisan bias in false beliefs is motivated reasoning, or people's biased processing of information, which refers to the phenomenon that people tend to believe opinions that match their established worldviews (Kunda, 1990). Motivated reasoning leads to a preference for choosing a side instead of seeking truth (Schaffner & Roche, 2016; Vegetti & Mancosu, 2020; Weeks & Garrett, 2014). For instance, based on two surveys of American adults, Miller et al. (2016) found that both conservatives and liberals tend to believe in ideologically consistent conspiracy theories (i.e., conspiracies putting the ideological opponents in a bad light), indicating the presence of ideologically motivated conspiracy endorsement. ...
Article
Full-text available
Although the growing literature on coronavirus disease 2019 (COVID-19) conspiracy theories has highlighted the role of digital media in fomenting beliefs, few studies have examined the influence of the fast-rising far-right media platforms. This study examines and compares the role of conservative media and far-right websites in propagating COVID-19 conspiracy theories and explores an underlying sociopsychological mechanism of political identity. The results of an online survey (N = 702) in the United States indicated that people exposed to conservative media and far-right websites were more likely to endorse COVID-19 conspiracy theories, but the impact of conservative media exposure was more prominent. Additionally, the positive relations between conservative media/far-right websites exposure and conspiracy beliefs were stronger among liberal-leaning individuals than conservative-leaning individuals. Counter-attitudinal exposure is often regarded as a crucial element of political deliberation and a solution to opinion polarization. Our findings cautioned, however, that counter-attitudinal exposure would also help propagate conspiracy theories.
... Aligned with this availability of AI-based image-manipulation tools, the term cheapfake has emerged. Gamir-Ríos and Tarullo (2022, p. 102) observe that "although the crudeness of their production makes them more identifiable as fraudulent, cheapfakes can be produced without advanced technological skills or sophisticated software," and they achieve a similar effect of disinformation pollution (Dowling, 2021), confirming pre-existing judgments (Weeks and Garrett, 2014). ...
Article
Full-text available
Introduction: The use of Artificial Intelligence to generate audiovisual content and narratives, while an opportunity in many fields such as art or visual and graphic creation, also becomes a powerful instrument for producing false stories and representations. Methodology: A Systematic Exploratory Review (SER) is applied, providing references that map the image of post-truth with empirical evidence. Results: A critical review is offered of the latest studies and trends in AI-generated imagery related to disinformation. Such disinformation is part of the contemporary audiovisual ecosystem and threatens citizens' trust in the media, social, and institutional environment. Discussion: Users, through social networks, generate false or distorted images which, once they go viral, are reinterpreted anew by other users. Fake videos can ruin both an individual's reputation and trust in social actors. These effects could be moderated by visual and digital literacy. Conclusions: The deep learning of artificial neural networks generates new forms of deepfake, disconcerting in their realism and verisimilitude, which are beginning to call the media into question, delegitimizing the representation of reality and truthful information as the basis of a democratic society.
... On the one hand, there is substantial evidence that discredited misinformation can continue to inform person impressions. Examples include inadmissible evidence affecting jury decisions (see Steblay et al., 2006, for a review), continued stigma towards victims of miscarriages of justice (Brooks & Greenberg, 2021; Clow & Leach, 2015), and discounted rumours swaying voter preferences (Jardina & Traugott, 2019; Weeks & Garrett, 2014). In an experiment investigating misinformation and political figures, Thorson (2016) found that misinformation about a (fictional) candidate accepting donations from a convicted felon led participants to evaluate the candidate more negatively compared to a no-misinformation control condition, even when the misinformation had been corrected (although primarily if the political candidate was affiliated with the participant's non-preferred political party; see also Bullock, 2007). ...
... For example, a CIE may be more likely to emerge if participants already hold a negative attitude towards a person based on previous encounters, existing knowledge, or incompatible worldviews (Thorson, 2016; see also Bullock, 2007). Given the documentation of clear cases of misinformation impacting person judgements in other contexts (Brooks & Greenberg, 2021; Clow et al., 2012; Jardina & Traugott, 2019; Steblay et al., 2006; Weeks & Garrett, 2014), a long-term research goal will be to identify the boundary conditions under which person misinformation does versus does not continue to influence person impressions after a retraction. ...
Article
Full-text available
Despite robust evidence that misinformation continues to influence event-related reasoning after a clear retraction, evidence for the continued influence of misinformation on person impressions is mixed. Across four experiments, we investigated the impact of person-related misinformation and its correction on dynamic (moment-to-moment) impression formation. Participants formed an impression of a protagonist, “John”, based on a series of behaviour descriptions, including misinformation that was later retracted. Person impressions were recorded after the presentation of each behaviour description. As predicted, we found a strong effect of information valence on person impressions: negative misinformation had a greater impact on person impressions than positive misinformation (Experiments 1 and 2). Furthermore, in each experiment participants fully discounted the misinformation once retracted, regardless of whether the misinformation was negative or positive. This was true even when the other behaviour descriptions were congruent with (Experiment 2) or causally related to the retracted misinformation (Experiments 3 and 4). To test for generalisation, Experiment 4 used a different misinformation statement; it again showed no evidence for the continued influence of retracted misinformation on person impressions. Our findings indicate that person-related misinformation can be effectively discounted following a clear retraction.