Article

Understanding Effects of Algorithmic vs. Community Label on Perceived Accuracy of Hyper-partisan Misinformation

Abstract

Hyper-partisan misinformation has become a major public concern. In order to examine what type of misinformation label can mitigate hyper-partisan misinformation sharing on social media, we conducted a 4 (label type: algorithm, community, third-party fact-checker, and no label) X 2 (post ideology: liberal vs. conservative) between-subjects online experiment (N = 1,677) in the context of COVID-19 health information. The results suggest that for liberal users, all labels reduced the perceived accuracy and believability of fake posts regardless of the posts' ideology. In contrast, for conservative users, the efficacy of the labels depended on whether the posts were ideologically consistent: algorithmic labels were more effective in reducing the perceived accuracy and believability of fake conservative posts compared to community labels, whereas all labels were effective in reducing their belief in liberal posts. Our results shed light on the differing effects of various misinformation labels dependent on people's political ideology.
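The 4 × 2 between-subjects design described above maps onto a standard factorial analysis. As a minimal sketch (not the authors' analysis code), the example below assumes a hypothetical data file with columns label_type, post_ideology, and perceived_accuracy, and fits a two-way ANOVA with an interaction term:

```python
# Minimal sketch, not the authors' analysis: two-way ANOVA for a 4 x 2
# between-subjects design. File name and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_data.csv")  # hypothetical: one row per participant

model = smf.ols(
    "perceived_accuracy ~ C(label_type) * C(post_ideology)", data=df
).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and the label-by-ideology interaction
```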

... Rijo and Waldzus [248] looked into how voting patterns and political beliefs influence the way people evaluate information credibility. Jia et al. [141] focused on liberal and conservative users. Other works (e.g., [130]) focus on social media users in general. ...
... Regarding the scope of credibility signals, some works focus on signals provided directly by (social media) platforms [184,305], some investigate a single credibility signal (e.g., hyper-partisanship in [141]) or a whole spectrum of signals (e.g., 28 signals analysed in [230]). Jia et al. [141] compared credibility signals produced by three different sources - an algorithm, a community, and a third-party fact-checker. ...
... Besides content- and context-based signals, Chang et al. [56] also considered an additional group of design signals (consisting of four signals: interaction design, interface design, navigation design, and security settings). ...
Preprint
Full-text available
In the current era of social media and generative AI, an ability to automatically assess the credibility of online social media content is of tremendous importance. Credibility assessment is fundamentally based on aggregating credibility signals, which refer to small units of information, such as content factuality, bias, or a presence of persuasion techniques, into an overall credibility score. Credibility signals provide a more granular, more easily explainable and widely utilizable information in contrast to currently predominant fake news detection, which utilizes various (mostly latent) features. A growing body of research on automatic credibility assessment and detection of credibility signals can be characterized as highly fragmented and lacking mutual interconnections. This issue is even more prominent due to a lack of an up-to-date overview of research works on automatic credibility assessment. In this survey, we provide such systematic and comprehensive literature review of 175 research papers while focusing on textual credibility signals and Natural Language Processing (NLP), which undergoes a significant advancement due to Large Language Models (LLMs). While positioning the NLP research into the context of other multidisciplinary research works, we tackle with approaches for credibility assessment as well as with 9 categories of credibility signals (we provide a thorough analysis for 3 of them, namely: 1) factuality, subjectivity and bias, 2) persuasion techniques and logical fallacies, and 3) claims and veracity). Following the description of the existing methods, datasets and tools, we identify future challenges and opportunities, while paying a specific attention to recent rapid development of generative AI.
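The aggregation step described above, combining per-signal scores into an overall credibility score, can be illustrated with a simple weighted average. The sketch below is only illustrative; the signal names and weights are hypothetical and not taken from the survey:

```python
# Illustrative sketch only: aggregating per-signal scores into an overall
# credibility score via a weighted average. Signal names and weights are
# hypothetical, not taken from the surveyed papers.
from typing import Dict

SIGNAL_WEIGHTS: Dict[str, float] = {
    "factuality": 0.5,             # higher = more factual
    "bias": 0.3,                   # higher = less biased
    "persuasion_techniques": 0.2,  # higher = fewer persuasion techniques detected
}

def credibility_score(signals: Dict[str, float]) -> float:
    """Combine per-signal scores in [0, 1] into an overall score in [0, 1]."""
    total_weight = sum(SIGNAL_WEIGHTS.get(name, 0.0) for name in signals)
    weighted = sum(SIGNAL_WEIGHTS.get(name, 0.0) * value for name, value in signals.items())
    return weighted / total_weight if total_weight else 0.0

print(credibility_score({"factuality": 0.9, "bias": 0.4, "persuasion_techniques": 0.7}))
```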
... Further, human-identified misinformation introduces the partiality of a human into the moderation process [27]. Some users may be skeptical of warning tag accuracy depending on whether the correction comes from a community member or an algorithm [36], and many believe tags from humans are more biased than tags from algorithms [94]. As a result, AI-based tagging of potential misinformation and AI-generated labeling of online posts is seen as a potential soft-moderation middle ground that can help curb misinformation spread and vulnerability without completely removing the agency of the reader. ...
... We hypothesize that one of the reasons for prior mixed results on the efficacy of tags is the complex nature of misinformation vulnerability, spreading behavior, and attitudes towards mitigation efforts. Misinformation vulnerability and spreading behavior are affected by a person's psychological faculty [45], frequency of experience with the information [67], personal beliefs [36], and socioeconomic status [95]. Intervention-centered factors include the phrasing of the intervention [11,17] and the prevalence of alternative information during intervention [39]. ...
... Other works support the notion that who provides the warning label matters. Jia et al. found that warning labels provided by an algorithm, community, or third-party fact-checker were trusted by Democrats regardless of post ideology, while only algorithmic labels impacted Republicans' belief in false conservative news [36]. All intervention methods were effective for Republicans when the content was liberal-leaning. ...
Preprint
Full-text available
Social media platforms enhance the propagation of online misinformation by providing large user bases with a quick means to share content. One way to disrupt the rapid dissemination of misinformation at scale is through warning tags, which label content as potentially false or misleading. Past warning tag mitigation studies yield mixed results for diverse audiences, however. We hypothesize that personalizing warning tags to the individual characteristics of their diverse users may enhance mitigation effectiveness. To reach the goal of personalization, we need to understand how people differ and how those differences predict a person's attitudes and self-described behaviors toward tags and tagged content. In this study, we leverage Amazon Mechanical Turk (n = 132) and undergraduate students (n = 112) to provide this foundational understanding. Specifically, we find attitudes towards warning tags and self-described behaviors are positively influenced by factors such as Personality Openness and Agreeableness, Need for Cognitive Closure (NFCC), Cognitive Reflection Test (CRT) score, and Trust in Medical Scientists. Conversely, Trust in Religious Leaders, Conscientiousness, and political conservatism were negatively correlated with these attitudes and behaviors. We synthesize our results into design insights and a future research agenda for more effective and personalized misinformation warning tags and misinformation mitigation strategies more generally.
... In studies that compare the effects among different agents (e.g., AI, human-AI, expert panels), AI is mostly considered an agent with some level of autonomy that is compared to human agents who work on the same task (e.g., identifying and flagging problematic content online). There are a few terms frequently used by prior work to refer to AI-assisted moderation, including "automated content moderation" (Bhandari et al., 2021; Horta Ribeiro et al., 2023; Ozanne et al., 2022), "AI moderation" (Calleberg, 2021; Oh & Park, 2023), "machine moderation" (Wang, 2021), "algorithm" (Gonçalves et al., 2023; Hohenstein et al., 2023; Jia et al., 2022; Pan et al., 2022; Vaccaro et al., 2020), "Human-AI collaboration" (Li & Chau, 2023; Molina & Sundar, 2022; Wang & Kim, 2023), and "AI-assisted human moderation" (Wojcieszak et al., 2021). While the terminology and conceptualizations in these studies are not entirely aligned, they mostly consider AI-assisted content moderation as a decision-making procedure with AI involvement that influences the moderated users. ...
... This research shows that, in general, the involvement of AI in the moderation process did not significantly influence the perceived trustworthiness or fairness of moderation decisions, moderation agents, or moderated content. Besides trustworthiness, researchers have explored various other perceptions related to quality traits, including the credibility of moderated content (Oh & Park, 2023; Wang, 2021), the accuracy and believability of moderated content (Jia et al., 2022), the accountability of moderation decisions (Ozanne et al., 2022; Vaccaro et al., 2020), the willingness to accept the removal decision (Wang & Kim, 2023), and the perceived institutional legitimacy of moderated content (Pan et al., 2022). Other concerns relate to fairness, including research that investigated the perceived fairness of the moderation decision (Ozanne et al., 2022; Vaccaro et al., 2020), the potential bias contained in the moderated content (Oh & Park, 2023) and in the agents (Wang, 2021), the objectivity of the moderation implementation (Ozanne et al., 2022), the transparency and fairness of separate moderation agents (Gonçalves et al., 2021), and users' feelings of procedural control, i.e., that they can argue for themselves in the process (Vaccaro et al., 2020). ...
Article
This review paper provides a conceptualization of AI-assisted content moderation with various degrees of autonomy and summarizes experimental evidence for how different levels of automation in content moderation and related losses of autonomy affect individuals and groups. Our results show that current research predominantly focuses on individual level effects, necessitating a shift toward understanding the impact on groups. The study highlights gaps in exploring different levels of AI-assisted moderation interventions and misalignments of different conceptualizations that make comparing research results difficult. The discussion underscores the prevailing emphasis on harmful content removal and advocates for investigating more constructive moderation techniques, emphasizing the potential of AI in fostering normative, higher-level outcomes.
... Similarly, Seo, Xiong, and Lee (2019) found that warnings from fact checkers were more trusted than warnings from machines. Yet, other work has found that warning labels from fact checkers, the public, and algorithms were equally effective at reducing the perceived accuracy of false information for politically liberal information consumers (Jia et al. 2022). ...
... One such setting change is the source of the veracity labels. As briefly discussed in the introduction to this paper, a few recent studies have directly or indirectly studied the relative efficacy of warning label sources (Seo, Xiong, and Lee 2019; Yaqub et al. 2020; Jia et al. 2022), each varying in effectiveness metrics and results. The majority of these works provide evidence that trust in the warner does matter to some degree, but the relative efficacy has varied. ...
Preprint
In this study, we conducted an online, between-subjects experiment (N = 2,049) to better understand the impact of warning label sources on information trust and sharing intentions. Across four warners (the social media platform, other social media users, Artificial Intelligence (AI), and fact checkers), we found that all significantly decreased trust in false information relative to control, but warnings from AI were modestly more effective. All warners significantly decreased the sharing intentions of false information, except warnings from other social media users. AI was again the most effective. These results were moderated by prior trust in media and the information itself. Most noteworthy, we found that warning labels from AI were significantly more effective than all other warning labels for participants who reported a low trust in news organizations, while warnings from AI were no more effective than any other warning label for participants who reported a high trust in news organizations.
... In some cases, there is a combination, e.g., of survey and interview or of laboratory experiment and web-based experiment within one publication. Regarding sample size, the empirical studies range from small groups of participants (<20, e.g., [29,30,31,32]) to large-scale representative groups with far over 1,000 participants (e.g., [33,34,35]). You can find a visualization of sample sizes in Figure 2. Looking more closely at the participants, a bias can be clearly discerned, as the majority of studies report having surveyed U.S. adults and college students as participants. ...
... Although we did not specifically review which of the 154 interventions within 108 publications actually fit the definition of a nudge, we would nevertheless like to provide an overview of the publications that refer to their interventions as nudges themselves. Some publications present interventions as "accuracy nudges" and introduce concepts in which users are specifically nudged to reflect on the accuracy of content and to act more thoughtfully accordingly (e.g., [114,34,1,132,55]). For example, Capraro and Celadin [132] report promising results that indicate positive effects on sharing behavior when using an accuracy prompt. ...
Preprint
Misinformation represents a key challenge for society. User-centered misinformation interventions as digital countermeasures that exert a direct influence on users represent a promising means to deal with the large amounts of information available. While an extensive body of research on this topic exists, researchers are confronted with a diverse research landscape spanning multiple disciplines. This review systematizes the landscape of user-centered misinformation interventions to facilitate knowledge transfer, identify trends, and enable informed decision-making. Over 3,700 scholarly publications were screened and a systematic literature review (N=108) was conducted. A taxonomy was derived regarding intervention design (e.g., binary label), user interaction (active or passive), and timing (e.g., post exposure to misinformation). We provide a structured overview of approaches across multiple disciplines, and derive six overarching challenges for future research.
... He found that flagging fake news has a significant effect on users' sharing intentions; that is, users are less willing to share content with the labels. This was corroborated in [116], [168], [234], [236], [272], [326]. Chandrasekharan et al. [90] found that quarantining made it more difficult to recruit new members on r/TheRedPill and r/The_Donald; however, they found that existing members' hateful rhetoric remained the same. ...
... Saltz et al. [260] found that participants held differing opinions regarding Facebook COVID-19 warning labels, with some perceiving them as a necessary step to inform users, whereas others saw them as politically biased and an act of censorship. Many studies [168], [173], [183], [213] found that interstitial covers, labels, and flagging decrease the perceived accuracy of COVID-19 misinformation and fake news on Twitter [275] and Facebook [97], [213]. Previous research has also found that correcting or debunking fake news can significantly decrease users' gullibility to the story [89], [182], [199], [228], [232], [234], [272], [326]. ...
Preprint
Full-text available
To counter online abuse and misinformation, social media platforms have been establishing content moderation guidelines and employing various moderation policies. The goal of this paper is to study these community guidelines and moderation practices, as well as the relevant research publications to identify the research gaps, differences in moderation techniques, and challenges that should be tackled by the social media platforms and the research community at large. In this regard, we study and analyze in the US jurisdiction the fourteen most popular social media content moderation guidelines and practices, and consolidate them. We then introduce three taxonomies drawn from this analysis as well as covering over one hundred interdisciplinary research papers about moderation strategies. We identified the differences between the content moderation employed in mainstream social media platforms compared to fringe platforms. We also highlight the implications of Section 230, the need for transparency and opacity in content moderation, why platforms should shift from a one-size-fits-all model to a more inclusive model, and lastly, we highlight why there is a need for a collaborative human-AI system.
... 52 participants observed explanations for uneducated users and 105 observed explanations for educated users. We focus on right-leaning personalization since prior research has found right-leaning users to be disproportionately targeted and involved in the spread of misinformation (Sakketou et al., 2022; Jia et al., 2022; Pierri et al., 2022). ...
Preprint
The spread of misinformation on social media platforms threatens democratic processes, contributes to massive economic losses, and endangers public health. Many efforts to address misinformation focus on a knowledge deficit model and propose interventions for improving users' critical thinking through access to facts. Such efforts are often hampered by challenges with scalability, and by platform users' personal biases. The emergence of generative AI presents promising opportunities for countering misinformation at scale across ideological barriers. In this paper, we introduce a framework (MisinfoEval) for generating and comprehensively evaluating large language model (LLM) based misinformation interventions. We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users with the goal of countering misinformation by appealing to their pre-existing values. Our findings confirm that LLM-based interventions are highly effective at correcting user behavior (improving overall user accuracy at reliability labeling by up to 41.72%). Furthermore, we find that users favor more personalized interventions when making decisions about news reliability and users shown personalized interventions have significantly higher accuracy at identifying misinformation.
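The personalization idea described above can be pictured as prompt construction conditioned on a user profile. The sketch below is a hypothetical illustration of that general pattern, not the MisinfoEval implementation; call_llm is a placeholder for whatever LLM client is used, and the profile fields are assumptions:

```python
# Hypothetical sketch of a personalized misinformation intervention prompt.
# `call_llm` is a placeholder for an actual LLM client; this is not code from
# the MisinfoEval framework described in the abstract.
from dataclasses import dataclass

@dataclass
class UserProfile:
    age_group: str
    education: str
    political_leaning: str

def build_intervention_prompt(post_text: str, label: str, user: UserProfile) -> str:
    return (
        "You are assisting with misinformation interventions on social media.\n"
        f"The following post was rated '{label}' by fact-checkers:\n\"{post_text}\"\n"
        f"Write a short, respectful explanation of why this rating was given, "
        f"tailored to a {user.age_group} reader with {user.education} education "
        f"and {user.political_leaning} leanings, appealing to values they likely hold."
    )

def call_llm(prompt: str) -> str:  # placeholder for a real LLM API call
    raise NotImplementedError

prompt = build_intervention_prompt(
    "Headline claiming a vaccine alters DNA",
    "false",
    UserProfile("older adult", "college", "conservative"),
)
# explanation = call_llm(prompt)
```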
... A widely adopted debunking approach is to apply warning tags, labels, or indicators during the misinformation presentation after fact-checking by professional organizations or artificial intelligence (AI). Empirical user studies reveal that those warnings are generally effective in reducing participants' belief in misinformation (Clayton et al. 2020; Yaqub et al. 2020; Jia et al. 2022; Kreps and Kriner 2022; Lu et al. 2022). Yet, the efficacy of the warnings can be impacted by factors such as warning specificity (e.g., general warnings introduce bias, reducing belief in real news), warning design (e.g., simple and precise warning language), the source of the warnings (e.g., fact checker and community), and extra fact-checking details. ...
Article
To mitigate misinformation on social media, platforms such as Facebook have offered warnings to users based on the detection results of AI systems. With the evolution of AI detection systems, efforts have been devoted to applying explainable AI (XAI) to further increase the transparency of AI decision-making. Nevertheless, few factors have been considered to understand the effectiveness of a warning with AI explanations in helping humans detect misinformation. In this study, we report the results of three online human-subject experiments (N = 2,692) investigating the framing effect and the impact of an AI system’s reliability on the effectiveness of AI warning with explanations. Our findings show that the framing effect is effective for participants’ misinformation detection, whereas the AI system’s reliability is critical for humans’ misinformation detection and participants’ trust in the AI system. However, adding the explanations can potentially increase participants’ suspicions on miss errors (i.e., false negatives) in the AI system. Furthermore, more trust is shown in the AI warning without explanations condition. We conclude by discussing the implications of our findings.
... Moreover, we recognize that another limitation inherent in binary classification for misinformation detection is that the use of binary label classes (i.e., misleading and non-misleading) may overlook the uncertainty in labeling due to the ambiguity of a post. This ambiguity often arises from the varying contexts and ideological perspectives among the audience (Jia et al. 2022). For instance, a post voicing suspicion about the slow and poor handling of HIV drugs can be classified as misleading based on linguistic characteristics (e.g., language style, sentiment). ...
Article
A fundamental issue in healthcare misinformation detection is the lack of timely resources (e.g., medical knowledge, annotated data), making it challenging to accurately detect emergent healthcare misinformation at an early stage. In this paper, we develop a crowdsourcing-based early healthcare misinformation detection framework that jointly exploits the medical expertise of expert crowd workers and adapts the medical knowledge from a source domain (e.g., COVID-19) to detect misleading posts in an emergent target domain (e.g., Mpox, Polio). Two important challenges exist in developing our solution: (i) How to leverage the complex and noisy knowledge from the source domain to facilitate the detection of misinformation in the target domain? (ii) How to effectively utilize the limited amount of expert workers to correct the inapplicable knowledge facts in the source domain and adapt the corrected facts to examine the truthfulness of the posts in the emergent target domain? To address these challenges, we develop CrowdAdapt, a crowdsourcing-based domain adaptive approach that effectively identifies and adapts relevant knowledge facts from the source domain to accurately detect misinformation in the target domain. Evaluation results from two real-world case studies demonstrate the superiority of CrowdAdapt over state-of-the-art baselines in accurately detecting emergent healthcare misinformation.
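At a high level, the second challenge above is a budgeted crowd-in-the-loop routing problem. The sketch below shows only that generic pattern; it is not the CrowdAdapt algorithm, and the relevance scoring, threshold, and expert_review interface are assumptions for illustration:

```python
# Generic crowd-in-the-loop sketch (not the CrowdAdapt algorithm): reuse
# source-domain knowledge facts that look relevant to the target domain, and
# spend a limited expert budget on reviewing the least certain facts.
from typing import List, Tuple

def adapt_knowledge(
    source_facts: List[str],
    relevance_scores: List[float],   # e.g., similarity of each fact to target-domain posts
    expert_review,                   # callable: fact -> corrected fact, or None to discard
    budget: int,
    threshold: float = 0.7,
) -> List[str]:
    adapted: List[str] = []
    uncertain: List[Tuple[float, str]] = []
    for fact, score in zip(source_facts, relevance_scores):
        if score >= threshold:
            adapted.append(fact)          # confident: reuse the fact as-is
        else:
            uncertain.append((score, fact))
    # Spend the limited expert budget on the least certain facts first.
    for _, fact in sorted(uncertain)[:budget]:
        corrected = expert_review(fact)
        if corrected:
            adapted.append(corrected)
    return adapted
```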
... For algorithmic decision making, Langer et al. [28] showed that terminology (e.g., 'algorithms' vs. 'artificial intelligence') affects laypeople's perceptions of system properties and evaluations (e.g., trust) - they recommend being mindful when choosing terms given unintended consequences, and their impact on HCI research robustness and replicability. Within COVID-19 health (mis-)information, Jia et al. [25] found that the effects of various misinformation labels (e.g., algorithm, community, third-party fact-checker) depend on people's political ideology (liberal, conservative). Cloudy et al. [10] found that a news story presented as sourced from an AI journalist activated individuals' machine heuristic (the rule of thumb that machines are more secure and trustworthy than humans [51]), which helps mitigate the hostile media bias effect. ...
Conference Paper
Advances in Generative Artificial Intelligence (AI) are resulting in AI-generated media output that is (nearly) indistinguishable from human-created content. This can drastically impact users and the media sector, especially given global risks of misinformation. While the currently discussed European AI Act aims at addressing these risks through Article 52's AI transparency obligations, its interpretation and implications remain unclear. In this early work, we adopt a participatory AI approach to derive key questions based on Article 52's disclosure obligations. We ran two workshops with researchers, designers, and engineers across disciplines (N=16), where participants deconstructed Article 52's relevant clauses using the 5W1H framework. We contribute a set of 149 questions clustered into five themes and 18 sub-themes. We believe these can not only help inform future legal developments and interpretations of Article 52, but also provide a starting point for Human-Computer Interaction research to (re-)examine disclosure transparency from a human-centered AI lens.
Article
While algorithms and crowdsourcing have been increasingly used to debunk or label misinformation on social media, such tasks might be most effective when performed by professional fact checkers or journalists. Drawing on a national survey (N = 1,003), we found that U.S. adults evaluated fact-checking labels created by professional fact checkers as more effective than labels by algorithms and other users. News media labels were perceived as more effective than user labels but not statistically different from labels by fact checkers and algorithms. There was no significant difference between labels created by users and algorithms. These findings have implications for platforms and fact-checking practitioners, underscoring the importance of journalistic professionalism in fact-checking.
Article
Misinformation is one of the key challenges facing society today. User-centered misinformation interventions as digital countermeasures that exert a direct influence on users represent a promising means to deal with the large amounts of information available. While an extensive body of research on this topic exists, researchers are confronted with a diverse research landscape spanning multiple disciplines. This review systematizes the landscape of user-centered misinformation interventions to facilitate knowledge transfer, identify trends, and enable informed decision-making. Over 6,000 scholarly publications were screened, and a systematic literature review ( N = 172) was conducted. A taxonomy was derived regarding intervention design (e.g., labels, showing indicators of misinformation, corrections, removal, or visibility reduction of content), user interaction (active or passive), and timing (e.g., pre or post exposure to misinformation or on request of the user). We provide a structured overview of approaches across multiple disciplines and derive six overarching challenges for future research regarding transferability of approaches to (1) novel platforms and (2) emerging video- and image-based misinformation, the sensible combination of automated mechanisms with (3) human experts and (4) user-centered feedback to facilitate comprehensibility, (5) encouraging media literacy without misinformation exposure, and (6) adequately addressing particularly vulnerable users such as older people or adolescents.
Article
Community-based fact-checking is a promising approach to fact-check social media content at scale. However, an understanding of whether users trust community fact-checks is missing. Here, we presented n = 1810 Americans with 36 misleading and non-misleading social media posts and assessed their trust in different types of fact-checking interventions. Participants were randomly assigned to treatments where misleading content was either accompanied by simple (i.e., context-free) misinformation flags in different formats (expert flags or community flags), or by textual “community notes” explaining why the fact-checked post was misleading. Across both sides of the political spectrum, community notes were perceived as significantly more trustworthy than simple misinformation flags. Our results further suggest that the higher trustworthiness primarily stemmed from the context provided in community notes (i.e., factchecking explanations) rather than generally higher trust towards community fact-checkers. Community notes also improved the identification of misleading posts. In sum, our work implies that context matters in fact-checking and that community notes might be an effective approach to mitigate trust issues with simple misinformation flags.
Article
Mounting evidence indicates that the artificial intelligence (AI) systems that rank our social media feeds bear nontrivial responsibility for amplifying partisan animosity: negative thoughts, feelings, and behaviors toward political out-groups. Can we design these AIs to consider democratic values such as mitigating partisan animosity as part of their objective functions? We introduce a method for translating established, vetted social scientific constructs into AI objective functions, which we term societal objective functions, and demonstrate the method with application to the political science construct of anti-democratic attitudes. Traditionally, we have lacked observable outcomes to use to train such models-however, the social sciences have developed survey instruments and qualitative codebooks for these constructs, and their precision facilitates translation into detailed prompts for large language models. We apply this method to create a democratic attitude model that estimates the extent to which a social media post promotes anti-democratic attitudes, and test this democratic attitude model across three studies. In Study 1, we first test the attitudinal and behavioral effectiveness of the intervention among US partisans (N=1,380) by manually annotating (alpha=.895) social media posts with anti-democratic attitude scores and testing several feed ranking conditions based on these scores. Removal (d=.20) and downranking feeds (d=.25) reduced participants' partisan animosity without compromising their experience and engagement. In Study 2, we scale up the manual labels by creating the democratic attitude model, finding strong agreement with manual labels (rho=.75). Finally, in Study 3, we replicate Study 1 using the democratic attitude model instead of manual labels to test its attitudinal and behavioral impact (N=558), and again find that the feed downranking using the societal objective function reduced partisan animosity (d=.25). This method presents a novel strategy to draw on social science theory and methods to mitigate societal harms in social media AIs.
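The feed-downranking intervention evaluated in the studies above can be pictured as re-sorting feed items by an engagement score penalized by the model-estimated anti-democratic attitude score. The sketch below is a minimal illustration under that assumption, not the paper's implementation:

```python
# Minimal downranking sketch (not the paper's implementation): demote feed
# items in proportion to a model-estimated anti-democratic attitude score.
from typing import List, Tuple

def downrank(feed: List[Tuple[str, float, float]], penalty: float = 2.0) -> List[str]:
    """feed: (post_id, engagement_score, attitude_score in [0, 1])."""
    reranked = sorted(
        feed,
        key=lambda item: item[1] - penalty * item[2],  # engagement minus attitude penalty
        reverse=True,
    )
    return [post_id for post_id, _, _ in reranked]

print(downrank([("a", 0.9, 0.8), ("b", 0.7, 0.1), ("c", 0.5, 0.0)]))
```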
Article
Social media images, curated or casual, have become a crucial component of communicating situational information and emotions during health crises. Despite their prevalence and significance in informational dissemination and emotional connection, a comprehensive understanding of visual crisis communication in the aftermath of a pandemic, which is characterized by uncertain local situations and emotional fatigue, is still lacking. To fill this gap, this work collected 345,423 crisis-related posts and 65,376 original images during the Xi'an COVID-19 local outbreak in China, and adopted a mixed-methods approach to understanding themes, goals, and strategies of crisis imagery. Image clustering captured the diversity of visual themes during the outbreak, such as text images embedding authoritative guidelines and "visual diaries" recording and sharing the quarantine life. Through text classification of the posts that visuals were situated in, we found that different visual themes highly correlated with the informational and emotional goals of the post text, such as adopting text images to convey the latest policies and sharing food images to express anxiety. We further unpacked nuanced strategies of crisis image use through inductive coding, such as signifying authority and triggering empathy. We discuss the opportunities and challenges of crisis imagery and provide design implications to facilitate effective visual crisis communication.
Article
While social computing technologies are increasingly being used to counter misinformation, more work is needed to understand how they can support the crucial work of community-based trusted messengers, especially in marginalized communities where distrust in health authorities is rooted in historical inequities. We describe an early exploration of these opportunities in our collaboration with Black and Latinx young adult "Peer Champions" addressing COVID-19 vaccine hesitancy in the U.S. state of Georgia. We conducted interviews engaging them with a social media monitoring and outreach dashboard we designed, to probe their understanding of their roles and current and potential use of digital platforms. With the concept of cultural code-switching as a framing, we found that the Peer Champions leveraged their particular combination of cultural, health, and digital literacy skills to understand their communities' concerns surrounding misinformation and to communicate health information in a culturally appropriate manner. While being positioned between their communities and public health research and practice motivated and enabled their work, it also introduced challenges in finding (mis)information online and navigating tensions around authenticity and respect when engaging those close to them. Our research contributes towards characterizing the valuable and difficult work trusted messengers do, and (re)imagining collaboratively designed interpretive digital tools to support them.
Article
Full-text available
Background The global spread of coronavirus disease 2019 (COVID-19) has been mirrored by diffusion of misinformation and conspiracy theories about its origins (such as 5G cellular networks) and the motivations of preventive measures like vaccination, social distancing, and face masks (for example, as a political ploy). These beliefs have resulted in substantive, negative real-world outcomes but remain largely unstudied. Methods This was a cross-sectional, online survey ( n =660). Participants were asked about the believability of five selected COVID-19 narratives, their political orientation, their religious commitment, and their trust in science (a 21-item scale), along with sociodemographic items. Data were assessed descriptively, then latent profile analysis was used to identify subgroups with similar believability profiles. Bivariate (ANOVA) analyses were run, then multivariable, multivariate logistic regression was used to identify factors associated with membership in specific COVID-19 narrative believability profiles. Results For the full sample, believability of the narratives varied, from a low of 1.94 (SD=1.72) for the 5G narrative to a high of 5.56 (SD=1.64) for the zoonotic (scientific consensus) narrative. Four distinct belief profiles emerged, with the preponderance (70%) of the sample falling into Profile 1, which believed the scientifically accepted narrative (zoonotic origin) but not the misinformed or conspiratorial narratives. Other profiles did not disbelieve the zoonotic explanation, but rather believed additional misinformation to varying degrees. Controlling for sociodemographics, political orientation and religious commitment were marginally, and typically non-significantly, associated with COVID-19 belief profile membership. However, trust in science was a strong, significant predictor of profile membership, with lower trust being substantively associated with belonging to Profiles 2 through 4. Conclusions Belief in misinformation or conspiratorial narratives may not be mutually exclusive from belief in the narrative reflecting scientific consensus; that is, profiles were distinguished not by belief in the zoonotic narrative, but rather by concomitant belief or disbelief in additional narratives. Additional, renewed dissemination of scientifically accepted narratives may not attenuate belief in misinformation. However, prophylaxis of COVID-19 misinformation might be achieved by taking concrete steps to improve trust in science and scientists, such as building understanding of the scientific process and supporting open science initiatives.
Article
Full-text available
Social media has become a popular means for people to consume and share the news. At the same time, however, it has also enabled the wide dissemination of fake news, that is, news with intentionally false information, causing significant negative effects on society. To mitigate this problem, the research of fake news detection has recently received a lot of attention. Despite several existing computational solutions for the detection of fake news, the lack of comprehensive and community-driven fake news data sets has become one of the major roadblocks. Not only are existing data sets scarce, they do not contain a myriad of features often required in the study, such as news content, social context, and spatiotemporal information. Therefore, in this article, to facilitate fake news-related research, we present a fake news data repository, FakeNewsNet, which contains two comprehensive data sets with diverse features in news content, social context, and spatiotemporal information. We present a comprehensive description of the FakeNewsNet, demonstrate an exploratory analysis of the two data sets from different perspectives, and discuss the benefits of FakeNewsNet for potential applications in fake news research on social media.
Article
Full-text available
Survey experiments with nearly 7,000 Americans suggest that increasing the visibility of publishers is an ineffective, and perhaps even counterproductive, way to address misinformation on social media. Our findings underscore the importance of social media platforms and civil society organizations evaluating interventions experimentally rather than implementing them based on intuitive appeal.
Article
Full-text available
What role does deliberation play in susceptibility to political misinformation and "fake news"? The Motivated System 2 Reasoning (MS2R) account posits that deliberation causes people to fall for fake news, because reasoning facilitates identity-protective cognition and is therefore used to rationalize content that is consistent with one's political ideology. The classical account of reasoning instead posits that people ineffectively discern between true and false news headlines when they fail to deliberate (and instead rely on intuition). To distinguish between these competing accounts, we investigated the causal effect of reasoning on media truth discernment using a 2-response paradigm. Participants (N = 1,635 Mechanical Turkers) were presented with a series of headlines. For each, they were first asked to give an initial, intuitive response under time pressure and concurrent working memory load. They were then given an opportunity to rethink their response with no constraints, thereby permitting more deliberation. We also compared these responses to a (deliberative) 1-response baseline condition where participants made a single choice with no constraints. Consistent with the classical account, we found that deliberation corrected intuitive mistakes: Participants believed false headlines (but not true headlines) more in initial responses than in either final responses or the unconstrained 1-response baseline. In contrast-and inconsistent with the Motivated System 2 Reasoning account-we found that political polarization was equivalent across responses. Our data suggest that, in the context of fake news, deliberation facilitates accurate belief formation and not partisan bias. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
Article
Full-text available
Amazon Mechanical Turk (MTurk) is widely used by behavioral scientists to recruit research participants. MTurk offers advantages over traditional student subject pools, but it also has important limitations. In particular, the MTurk population is small and potentially overused, and some groups of interest to behavioral scientists are underrepresented and difficult to recruit. Here we examined whether online research panels can avoid these limitations. Specifically, we compared sample composition, data quality (measured by effect sizes, internal reliability, and attention checks), and the non-naivete of participants recruited from MTurk and Prime Panels—an aggregate of online research panels. Prime Panels participants were more diverse in age, family composition, religiosity, education, and political attitudes. Prime Panels participants also reported less exposure to classic protocols and produced larger effect sizes, but only after screening out several participants who failed a screening task. We conclude that online research panels offer a unique opportunity for research, yet one with some important trade-offs.
Article
Full-text available
Social media has increasingly enabled “fake news” to circulate widely, most notably during the 2016 U.S. presidential campaign. These intentionally false or misleading stories threaten the democratic goal of a well-informed electorate. This study evaluates the effectiveness of strategies that could be used by Facebook and other social media to counter false stories. Results from a pre-registered experiment indicate that false headlines are perceived as less accurate when people receive a general warning about misleading information on social media or when specific headlines are accompanied by a “Disputed” or “Rated false” tag. Though the magnitudes of these effects are relatively modest, they generally do not vary by whether headlines were congenial to respondents’ political views. In addition, we find that adding a “Rated false” tag to an article headline lowers its perceived accuracy more than adding a “Disputed” tag (Facebook’s original approach) relative to a control condition. Finally, though exposure to the “Disputed” or “Rated false” tags did not affect the perceived accuracy of unlabeled false or true headlines, exposure to a general warning decreased belief in the accuracy of true headlines, suggesting the need for further research into how to most effectively counter false news without distorting belief in true information. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
Article
Full-text available
One of the most fundamental changes in today’s political information environment is an increasing lack of communicative truthfulness. To explore this worrisome phenomenon, this study aims to investigate the effects of political misinformation by integrating three theoretical approaches: (1) misinformation, (2) polarization, and (3) selective exposure. In this article, we examine the role of fact-checkers in discrediting polarized misinformation in a fragmented media environment. We rely on two experiments (N = 1,117) in which we vary exposure to attitudinal-congruent or incongruent political news and a follow-up fact-check article debunking the information. Participants were either forced to see or free to select a fact-checker. Results show that fact-checkers can be successful as they (1) lower agreement with attitudinally congruent political misinformation and (2) can overcome political polarization. Moreover, dependent on the issue, fact-checkers are most likely to be selected when they confirm prior attitudes and avoided when they are incongruent, indicating a confirmation bias for selecting corrective information. The freedom to select or avoid fact-checkers does not have an impact on political beliefs.
Article
Full-text available
Background During epidemic crises, some of the information the public receives on social media is misinformation. Health organizations are required to respond and correct the information to gain the public’s trust and influence it to follow the recommended instructions. Objectives (1) To examine ways for health organizations to correct misinformation concerning the measles vaccination on social networks for two groups: pro-vaccination and hesitant; (2) To examine the types of reactions of two subgroups (pro-vaccination, hesitant) to misinformation correction; and (3) To examine the effect of misinformation correction on these two subgroups regarding reliability, satisfaction, self-efficacy and intentions. Methods A controlled experiment with participants divided randomly into two conditions. In both experiment conditions a dilemma was presented as to sending a child to kindergarten, followed by an identical Facebook post voicing the children mothers’ concerns. In the third stage the correction by the health organization is presented differently in two conditions: Condition 1 –common information correction, and Condition 2 –recommended (theory-based) information correction, mainly communicating information transparently and addressing the public’s concerns. The study included (n = 243) graduate students from the Faculty of Social Welfare and Health Sciences at Haifa University. Results A statistically significant difference was found in the reliability level attributed to information correction by the Health Ministry between the Control condition and Experimental condition (sig<0.001), with the average reliability level of the subjects in Condition 2 (M = 5.68) being considerably higher than the average reliability level of subjects in Condition 1 (4.64). A significant difference was found between Condition 1 and Condition 2 (sig<0.001), with the average satisfaction from the Health Ministry’s response of Condition 2 subjects (M = 5.75) being significantly higher than the average satisfaction level of Condition 1 subjects (4.66). Similarly, when we tested the pro and hesitant groups separately, we found that both preferred the response presented in Condition 2. Conclusion It is very important for the organizations to correct misinformation transparently, and to address the emotional aspects for both the pro-vaccination and the hesitant groups. The pro-vaccination group is not a captive audience, and it too requires a full response that addresses the public's fears and concerns.
Article
Full-text available
The 2016 U.S. presidential election brought considerable attention to the phenomenon of “fake news”: entirely fabricated and often partisan content that is presented as factual. Here we demonstrate one mechanism that contributes to the believability of fake news: fluency via prior exposure. Using actual fake-news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this “illusory truth effect” for fake-news headlines occurs despite a low level of overall believability and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem. It is interesting, however, that we also found that prior exposure does not impact entirely implausible statements (e.g., “The earth is a perfect square”). These observations indicate that although extreme implausibility is a boundary condition of the illusory truth effect, only a small degree of potential plausibility is sufficient for repetition to increase perceived accuracy. As a consequence, the scope and impact of repetition on beliefs is greater than has been previously assumed.
Article
Full-text available
Social media platforms play an increasingly important civic role as platforms for discourse, where we discuss, debate, and share information. This article explores how users make sense of the content moderation systems social media platforms use to curate this discourse. Through a survey of users (n = 519) who have experienced content moderation, I explore users’ folk theories of how content moderation systems work, how they shape the affective relationship between users and platforms, and the steps users take to assert their agency by seeking redress. I find significant impacts of content moderation that go far beyond the questions of freedom of expression that have thus far dominated the debate. Raising questions about what content moderation systems are designed to accomplish, I conclude by conceptualizing an educational, rather than punitive, model for content moderation systems.
Conference Paper
Full-text available
Social media provide a platform for quick and seamless access to information. However, the propagation of false information, especially during the last year, raises major concerns, especially given the fact that social media are the primary source of information for a large percentage of the population. False information may manipulate people's beliefs and have real-life consequences. Therefore, one major challenge is to automatically identify false information by categorizing it into different types and notify users about the credibility of different articles shared online. Existing approaches primarily focus on feature generation and selection from various sources, including corpus-related features. However, so far, prior work has not paid considerable attention to the following question: how can we accurately distinguish different categories of false news, solely based on the content? In this paper, we work on answering this question. In particular, we propose a tensor modeling of the problem, where we capture latent relations between articles and terms, as well as spatial/contextual relations between terms, towards unlocking the full potential of the content. Furthermore, we propose an ensemble method which judiciously combines and consolidates results from different tensor decompositions into clean, coherent, and high-accuracy groups of articles that belong to different categories of false news. We extensively evaluate our proposed method on real data, for which we have labels, and demonstrate that the proposed algorithm was able to identify all different false news categories within the corpus, with average homogeneity per group of up to 80%.
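As a rough illustration of the general idea (not the paper's method), one can CP-decompose an article x term x term co-occurrence tensor and cluster articles by their article-mode factors; the toy data, rank, and cluster count below are arbitrary assumptions:

```python
# Illustrative sketch only, not the paper's method: CP decomposition of a toy
# article x term x term tensor, followed by clustering of the article factors.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((50, 120, 120))                 # toy stand-in for a real co-occurrence tensor

weights, factors = parafac(tl.tensor(X), rank=5, n_iter_max=100)
article_factors = tl.to_numpy(factors[0])      # shape (50, 5): one row per article

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(article_factors)
print(clusters[:10])                           # cluster assignments for the first ten articles
```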
Article
Full-text available
What can be done to combat political misinformation? One prominent intervention involves attaching warnings to headlines of news stories that have been disputed by third-party fact-checkers. Here we demonstrate a hitherto unappreciated potential consequence of such a warning: an implied truth effect, whereby false headlines that fail to get tagged are considered validated and thus are seen as more accurate. With a formal model, we demonstrate that Bayesian belief updating can lead to such an implied truth effect. In Study 1 (n = 5,271 MTurkers), we find that although warnings do lead to a modest reduction in perceived accuracy of false headlines relative to a control condition (particularly for politically concordant headlines), we also observed the hypothesized implied truth effect: the presence of warnings caused untagged headlines to be seen as more accurate than in the control. In Study 2 (n = 1,568 MTurkers), we find the same effects in the context of decisions about which headlines to consider sharing on social media. We also find that attaching verifications to some true headlines—which removes the ambiguity about whether untagged headlines have not been checked or have been verified—eliminates, and in fact slightly reverses, the implied truth effect. Together these results contest theories of motivated reasoning while identifying a potential challenge for the policy of using warning tags to fight misinformation—a challenge that is particularly concerning given that it is much easier to produce misinformation than it is to debunk it. This paper was accepted by Elke Weber, judgment and decision making.
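The implied truth effect falls out of a toy Bayesian calculation: if warnings catch only a fraction of false headlines, the absence of a warning becomes weak evidence of truth. The numbers below are made up for illustration and are not the paper's model parameters:

```python
# Toy Bayesian illustration of the implied truth effect (made-up numbers).
prior_true = 0.5          # reader's prior that a headline is true
tag_rate_if_false = 0.4   # probability a false headline receives a warning tag
tag_rate_if_true = 0.0    # true headlines are never tagged in this toy model

p_untagged = prior_true * (1 - tag_rate_if_true) + (1 - prior_true) * (1 - tag_rate_if_false)
posterior_true_if_untagged = prior_true * (1 - tag_rate_if_true) / p_untagged

print(round(posterior_true_if_untagged, 3))  # 0.625 > 0.5: untagged headlines look "validated"
```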
Article
Full-text available
Social media for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of "fake news", i.e., low-quality news with intentionally false information. The extensive spread of fake news has the potential for extremely negative impacts on individuals and society. Therefore, fake news detection on social media has recently become an emerging research area that is attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media ineffective or not applicable. First, fake news is intentionally written to mislead readers into believing false information, which makes it difficult and nontrivial to detect based on news content; therefore, we need to include auxiliary information, such as user social engagements on social media, to help make a determination. Second, exploiting this auxiliary information is challenging in and of itself, as users' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy. Because the issue of fake news detection on social media is both challenging and relevant, we conducted this survey to further facilitate research on the problem. In this survey, we present a comprehensive review of detecting fake news on social media, including fake news characterizations on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics, and representative datasets. We also discuss related research areas, open problems, and future research directions for fake news detection on social media.
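A minimal sketch of the content-plus-social-context idea discussed in this survey, assuming hypothetical column names and toy data (this is not code from the survey):

```python
# Hedged sketch: a simple detector that combines news content (TF-IDF of the
# text) with social-context features such as share and reply counts.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

df = pd.DataFrame({
    "text": ["shocking cure doctors hide", "city council approves new budget"],
    "share_count": [5400, 12],
    "reply_count": [890, 3],
    "label": [1, 0],   # 1 = fake, 0 = real (toy labels)
})

features = ColumnTransformer([
    ("content", TfidfVectorizer(), "text"),                        # news content features
    ("social", "passthrough", ["share_count", "reply_count"]),     # social-context features
])

clf = Pipeline([("features", features), ("model", LogisticRegression(max_iter=1000))])
clf.fit(df[["text", "share_count", "reply_count"]], df["label"])
```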
Article
Full-text available
Why is it so difficult to resist the desire to use social media? One possibility is that frequent social media users possess strong and spontaneous hedonic reactions to social media cues, which, in turn, makes it difficult to resist social media temptations. In two studies (total N = 200), we investigated less frequent and frequent social media users’ spontaneous hedonic reactions to social media cues using the Affect Misattribution Procedure–an implicit measure of affective reactions. Results demonstrated that frequent social media users showed more favorable affective reactions in response to social media (vs. control) cues, whereas less frequent social media users’ affective reactions did not differ between social media and control cues (Studies 1 and 2). Moreover, the spontaneous hedonic reactions to social media (vs. control) cues were related to self-reported cravings to use social media and partially accounted for the link between social media use and social media cravings (Study 2). These findings suggest that frequent social media users’ spontaneous hedonic reactions in response to social media cues might contribute to their difficulties in resisting desires to use social media.
Article
Full-text available
The problem of fake news has gained a lot of attention as it is claimed to have had a significant impact on 2016 US Presidential Elections. Fake news is not a new problem and its spread in social networks is well-studied. Often an underlying assumption in fake news discussion is that it is written to look like real news, fooling the reader who does not check for reliability of the sources or the arguments in its content. Through a unique study of three data sets and features that capture the style and the language of articles, we show that this assumption is not true. Fake news in most cases is more similar to satire than to real news, leading us to conclude that persuasion in fake news is achieved through heuristics rather than the strength of arguments. We show overall title structure and the use of proper nouns in titles are very significant in differentiating fake from real. This leads us to conclude that fake news is targeted for audiences who are not likely to read beyond titles and is aimed at creating mental associations between entities and claims.
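As a small illustration of using title style as a signal (in the spirit of the proper-noun finding above, though not the authors' feature pipeline), one can compute the share of proper nouns in a headline:

```python
# Illustrative sketch only: proper-noun ratio of a headline as a stylistic feature.
import nltk

# Resource names vary across NLTK versions; missing names are silently skipped.
for resource in ("punkt", "punkt_tab", "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    nltk.download(resource, quiet=True)

def proper_noun_ratio(title: str) -> float:
    tokens = nltk.word_tokenize(title)
    if not tokens:
        return 0.0
    tags = nltk.pos_tag(tokens)
    return sum(1 for _, tag in tags if tag in ("NNP", "NNPS")) / len(tokens)

print(proper_noun_ratio("BREAKING: Clinton Aide Arrested In FBI Email Probe"))
print(proper_noun_ratio("Senate passes budget resolution after lengthy debate"))
```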
Article
Full-text available
In recent years, Mechanical Turk (MTurk) has revolutionized social science by providing a way to collect behavioral data with unprecedented speed and efficiency. However, MTurk was not intended to be a research tool, and many common research tasks are difficult and time-consuming to implement as a result. TurkPrime was designed as a research platform that integrates with MTurk and supports tasks that are common to the social and behavioral sciences. Like MTurk, TurkPrime is an Internet-based platform that runs on any browser and does not require any downloads or installation. Tasks that can be implemented with TurkPrime include: excluding participants on the basis of previous participation, longitudinal studies, making changes to a study while it is running, automating the approval process, increasing the speed of data collection, sending bulk e-mails and bonuses, enhancing communication with participants, monitoring dropout and engagement rates, providing enhanced sampling options, and many others. This article describes how TurkPrime saves time and resources, improves data quality, and allows researchers to design and implement studies that were previously very difficult or impossible to carry out on MTurk. TurkPrime is designed as a research tool whose aim is to improve the quality of the crowdsourcing data collection process. Various features have been and continue to be implemented on the basis of feedback from the research community. TurkPrime is a free research platform.
Article
People often prefer to consume news with similar political predispositions and access like-minded news articles, which aggravates the polarized clusters known as "echo chambers". To mitigate this phenomenon, we propose a computer-aided solution to help combat extreme political polarization. Specifically, we present a framework for reversing or neutralizing the political polarity of news headlines and articles. The framework leverages the attention mechanism of a Transformer-based language model to first identify polar sentences and then either flip their polarity to neutral or to the opposite through a GAN network. Tested on the same benchmark dataset, our framework achieves a 3%−10% improvement in the flipping/neutralizing success rate of headlines compared with the current state-of-the-art model. Adding to prior literature, our framework not only flips the polarity of headlines but also extends the task of polarity flipping to full-length articles. Human evaluation results show that our model successfully neutralizes or reverses the polarity of news without reducing readability. We release a large annotated dataset that includes both news headlines and full-length articles with polarity labels and metadata to be used for future research. Our framework has the potential to be used by social scientists, content creators, and content consumers in the real world.
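A structural sketch of the two-stage pipeline this abstract describes (identify polar sentences, then rewrite them). The keyword-based scorer and the string-replacement "rewriter" below are placeholders standing in for the Transformer attention mechanism and the GAN generator used in the paper; everything in the code is an illustrative assumption.

```python
# Structural sketch only: stage 1 finds polar sentences, stage 2 rewrites them.
# The keyword scorer and regex rewriter are placeholders for the paper's
# attention-based detection and GAN-based generation components.
import re

POLAR_TERMS = {"radical", "disastrous", "heroic", "corrupt"}  # illustrative

def polarity_score(sentence: str) -> float:
    # Placeholder for attention-based scoring of how polar a sentence is.
    tokens = set(re.findall(r"[a-z']+", sentence.lower()))
    return len(tokens & POLAR_TERMS) / max(len(tokens), 1)

def neutralize(sentence: str) -> str:
    # Placeholder for the generator that rewrites the sentence neutrally.
    out = sentence
    for term in POLAR_TERMS:
        out = re.sub(rf"\b{term}\b", "[neutral wording]", out, flags=re.I)
    return out

def flip_article(article: str, threshold: float = 0.05) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", article)
    return " ".join(
        neutralize(s) if polarity_score(s) > threshold else s
        for s in sentences
    )

print(flip_article("The senator's disastrous plan shocked voters. The vote is on Tuesday."))
```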
Article
Studies have shown that uncivil comments under an online news article may result in biased perceptions of the news content, and explicit comment moderation has the potential to mitigate this adverse effect. Using an online experiment, the present study extends this line of research by examining how interface cues signalling different agents (human vs. machine) in moderating uncivil comments affect a reader's judgment of the news and how prior belief in the machine heuristic moderates such effects. The results indicated that perceptions of news bias were attenuated when uncivil comments were moderated by a machine (as opposed to a human) agent, which subsequently engendered greater perceived credibility of the news story. Additionally, such indirect effects were more prominent among readers who strongly believed that machine operations are generally accurate and reliable than among those with a weaker prior belief in this rule of thumb.
Article
Recently, substantial attention has been paid to the spread of highly partisan and often factually incorrect information (i.e., so-called “fake news”) on social media. In this study, we attempt to extend current knowledge on this topic by exploring the degree to which individual levels of ideological extremity, social trust, and trust in the news media are associated with the dissemination of countermedia content, or web-based, ideologically extreme information that uses false, biased, misleading, and hyper-partisan claims to counter the knowledge produced by the mainstream news media. To investigate these possible associations, we used a combination of self-report survey data and trace data collected from Facebook and Twitter. The results suggested that sharing countermedia content on Facebook is positively associated with ideological extremity and negatively associated with trust in the mainstream news media. On Twitter, we found evidence that countermedia content sharing is negatively associated with social trust.
Article
Since the emergence of so-called fake news on the internet and in social media, platforms such as Facebook have started to take countermeasures, and researchers have begun looking into this phenomenon from a variety of perspectives. A large body of scientific work has investigated ways to detect fake news automatically. Less attention has been paid to the subsequent step, i.e., what to do when you are aware of the inaccuracy of claims in social media. This work takes a user-centered approach to countering identified mis- and disinformation in social media. We conduct a three-step study design on how such approaches should be presented in social media to respect users' needs and experiences and on how effective they are. As our first step, in an online survey representative of the German adult population with respect to some factors, we ask about participants' strategies for handling information in social media and their opinions regarding possible solutions, focusing on the approach of displaying a warning on inaccurate posts. In a second step, we present five potential countermeasure approaches identified in related work to interviewees for qualitative input. We discuss (1) warning, (2) related articles, (3) reducing the size, (4) covering, and (5) requiring confirmation. Based on the interview feedback, as the third step of this study, we select, improve, and examine four promising approaches to countering misinformation. We conduct an online experiment to test their effectiveness on the perceived accuracy of false headlines and also ask for the users' preferences. In this study, we find that users welcome warning-based approaches to countering fake news and are somewhat critical of less transparent methods. Moreover, users want social media platforms to explain why a post was marked as disputed. The results regarding effectiveness are similar: warning-based approaches are shown to be effective in reducing the perceived accuracy of false headlines. Moreover, adding an explanation to the warning leads to the most significant results. In contrast, we could not find a significant effect for one of Facebook's current approaches (reduced post size and fact-checks in related articles).
Article
Significance We examine the role of partisanship in engagement in physical distancing following the outbreak of the novel coronavirus COVID-19 in the United States. We use data on daily mobility patterns for US counties along with information on county-level political preferences and the timing of state government leaders’ recommendations for individuals to stay at home. We find that state government leaders’ recommendations were more effective in reducing mobility in Democratic-leaning counties than in Republican-leaning counties. Among Democratic-leaning counties, recommendations from Republican leaders generated larger mobility reductions than recommendations from Democratic leaders. This study highlights the nuanced role of political partisanship in influencing how leaders’ COVID-19 prevention recommendations affect individuals’ voluntary decisions to engage in physical distancing.
Article
Disinformation on social media—commonly called “fake news”—has become a major concern around the world, and many fact-checking initiatives have been launched in response. However, if the presentation format of fact-checked results is not persuasive, fact-checking may not be effective. For instance, Facebook tested the idea of flagging dubious articles in 2017 but concluded that it was ineffective and removed the feature. We conducted three experiments with social media users to investigate two different approaches to implementing a fake news flag—one designed to be most effective when processed by automatic cognition (System 1) and the other designed to be most effective when processed by deliberate cognition (System 2). Both interventions were effective, and an intervention that combined both approaches was about twice as effective. The awareness training on the meaning of the flags increased the effectiveness of the System 2 intervention but not the System 1 intervention. Believability influenced the extent to which users would engage with the article (e.g., read, like, comment, and share). Our results suggest that both theoretical routes can be used—separately or together—in the presentation of fact-checking results in order to reduce the influence of fake news on social media users.
Article
The present research examined the relationship between political ideology and perceptions of the threat of COVID-19. Due to Republican leadership’s initial downplaying of COVID-19 and the resulting partisan media coverage, we predicted that conservatives would perceive it as less threatening. Two preregistered online studies supported this prediction. Conservatism was associated with perceiving less personal vulnerability to the virus and the virus’s severity as lower, and stronger endorsement of the beliefs that the media had exaggerated the virus’s impact and that the spread of the virus was a conspiracy. Conservatism also predicted less accurate discernment between real and fake COVID-19 headlines and fewer accurate responses to COVID-19 knowledge questions. Path analyses suggested that presidential approval, knowledge about COVID-19, and news discernment mediated the relationship between ideology and perceived vulnerability. These results suggest that the relationship between political ideology and threat perceptions may depend on issue framing by political leadership and media.
Article
Facing budget constraints, many traditional news organizations are turning to automation to streamline manpower, cut down on costs, and improve efficiency. But how does automation fit into traditional values of journalism, and how does it affect perceptions of credibility, an important currency valued by the journalistic field? This study explores this question using a 3 (declared author: human vs. machine vs. combined) × 2 (objectivity: objective vs. not objective) between-subjects experimental design involving 420 participants drawn from the national population of Singapore. The analysis found no main differences in perceived source credibility between algorithm, human, and mixed authors. Similarly, news articles attributed to an algorithm, a human journalist, and a combination of both showed no differences in message credibility. However, the study found an interaction effect between type of declared author and news objectivity. When the article is presented as written by a human journalist, source and message credibility remain stable regardless of whether the article was objective or not objective. However, when the article is presented as written by an algorithm, source and message credibility are higher when the article is objective than when it is not. Findings for combined authorship are split: there were no differences between objective and non-objective articles when it comes to message credibility, but combined authorship is rated higher in source credibility when the article is not objective than when it is objective.
Article
A growing number of hyper-partisan alternative media outlets have sprung up online to challenge mainstream journalism. However, research on news sharing in this particular media environment is lacking. Based on the virality of seventeen partisan outlets’ coverage of immigration and using the latest computational linguistic algorithm, the present study probes how hyper-partisan news sharing is related to source transparency, content styles, and moral framing. The study finds that the most shared articles reveal author names, but not necessarily other types of author information. The study uncovers a salient link between moral frames and virality. In particular, audiences are more sensitive to moral frames that emphasize authority/respect, fairness/reciprocity, and harm/care.
Article
News—real or fake—is now abundant on social media. News posts on social media focus users’ attention on the headlines, but does it matter who wrote the article? We investigate whether changing the presentation format to highlight the source of the article affects its believability and how social media users choose to engage with it. We conducted two experiments and found that nudging users to think about who wrote the article influenced the extent to which they believed it. The presentation format of highlighting the source had a main effect; it made users more skeptical of all articles, regardless of the source’s credibility. For unknown sources, low source ratings had a direct effect on believability. Believability, in turn, influenced the extent to which users would engage with the article (e.g., read, like, comment, and share). We also found confirmation bias to be rampant: users were more likely to believe articles that aligned with their beliefs, over and above the effects of other factors.
Article
As a remedy against fake news on social media, we examine the effectiveness of three different mechanisms for source ratings that can be applied to articles when they are initially published: expert rating (where expert reviewers fact-check articles, which are aggregated to provide a source rating), user article rating (where users rate articles, which are aggregated to provide a source rating), and user source rating (where users rate the sources themselves). We conducted two experiments and found that source ratings influenced social media users’ beliefs in the articles and that the rating mechanisms behind the ratings mattered. Low ratings, which would mark the usual culprits in spreading fake news, had stronger effects than did high ratings. When the ratings were low, users paid more attention to the rating mechanism, and, overall, expert ratings and user article ratings had stronger effects than did user source ratings. We also noticed a second-order effect, where ratings on some sources led users to be more skeptical of sources without ratings, even with instructions to the contrary. A user’s belief in an article, in turn, influenced the extent to which users would engage with the article (e.g., read, like, comment and share). Lastly, we found confirmation bias to be prominent; users were more likely to believe — and spread — articles that aligned with their beliefs. Overall, our results show that source rating is a viable measure against fake news and propose how the rating mechanism should be designed.
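To make the aggregation idea behind the first two mechanisms concrete, the sketch below rolls per-article ratings (from experts or users) up into a source-level score. The sources, article IDs, and rating values are made-up toy data, not material from the study.

```python
# Minimal sketch: aggregate per-article ratings into a source-level rating.
# Sources, article IDs, and ratings are illustrative toy values.
from collections import defaultdict
from statistics import mean

# (source, article_id, rating on a 1-5 scale)
article_ratings = [
    ("daily-bugle.example", "a1", 2), ("daily-bugle.example", "a2", 1),
    ("daily-bugle.example", "a3", 2),
    ("metro-times.example", "b1", 4), ("metro-times.example", "b2", 5),
]

by_source = defaultdict(list)
for source, _, rating in article_ratings:
    by_source[source].append(rating)

source_ratings = {source: round(mean(r), 2) for source, r in by_source.items()}
print(source_ratings)  # e.g. {'daily-bugle.example': 1.67, 'metro-times.example': 4.5}
```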
Conference Paper
Despite increased interest in the study of fake news, how to aid users' decisions in handling suspicious or false information has not been well understood. To obtain a better understanding of the impact of warnings on individuals' fake news decisions, we conducted two online experiments, each evaluating the effect of three warnings (i.e., one Fact-Checking and two Machine-Learning based) against a control condition. Each experiment consisted of three phases examining participants' recognition, detection, and sharing of fake news, respectively. In Experiment 1, relative to the control condition, participants' detection of both fake and real news was better when the Fact-Checking warning, but not the two Machine-Learning warnings, was presented with fake news. Post-session questionnaire results revealed that participants showed more trust in the Fact-Checking warning. In Experiment 2, we proposed a Machine-Learning-Graph warning that contains the detailed results of machine-learning based detection, and we removed the source within each news headline to test its impact on individuals' fake news detection with warnings. We did not replicate the effect of the Fact-Checking warning obtained in Experiment 1, but the Machine-Learning-Graph warning increased participants' sensitivity in differentiating fake news from real news. Although the best performance was obtained with the Machine-Learning-Graph warning, participants trusted it less than the Fact-Checking warning. Therefore, our results indicate that a transparent machine learning warning is critical to improving individuals' fake news detection but does not necessarily increase their trust in the model.
Conference Paper
In this day and age of identity theft, are we likely to trust machines more than humans for handling our personal information? We answer this question by invoking the concept of "machine heuristic," which is a rule of thumb that machines are more secure and trustworthy than humans. In an experiment (N = 160) that involved making airline reservations, users were more likely to reveal their credit card information to a machine agent than a human agent. We demonstrate that cues on the interface trigger the machine heuristic by showing that those with higher cognitive accessibility of the heuristic (i.e., stronger prior belief in the rule of thumb) were more likely than those with lower accessibility to disclose to a machine, but they did not differ in their disclosure to a human. These findings have implications for design of interface cues conveying machine vs. human sources of our online interactions.
Article
The proliferation of fake news on social media has opened up new directions of research for timely identification and containment of fake news and mitigation of its widespread impact on public opinion. While much of the earlier research was focused on identification of fake news based on its contents or by exploiting users’ engagements with the news on social media, there has been a rising interest in proactive intervention strategies to counter the spread of misinformation and its impact on society. In this survey, we describe the modern-day problem of fake news and, in particular, highlight the technical challenges associated with it. We discuss existing methods and techniques applicable to both identification and mitigation, with a focus on the significant advances in each method and their advantages and limitations. In addition, research has often been limited by the quality of existing datasets and their specific application contexts. To alleviate this problem, we comprehensively compile and summarize characteristic features of available datasets. Furthermore, we outline new directions of research to facilitate future development of effective and interdisciplinary solutions.
Article
Although accusations of editorial slant are ubiquitous to the contemporary media environment, recent advances in journalism such as news writing algorithms may hold the potential to reduce readers’ perceptions of media bias. Informed by the Modality-Agency-Interactivity-Navigability (MAIN) model and the principle of similarity attraction, an online experiment (n = 612) was conducted to test if news attributed to an automated author is perceived as less biased and more credible than news attributed to a human author. Results reveal that perceptions of bias are attenuated when news is attributed to a journalist and algorithm in tandem, with positive downstream consequences for perceived news credibility.
Article
Finding facts about fake news: There was a proliferation of fake news during the 2016 election cycle. Grinberg et al. analyzed Twitter data by matching Twitter accounts to specific voters to determine who was exposed to fake news, who spread fake news, and how fake news interacted with factual news (see the Perspective by Ruths). Fake news accounted for nearly 6% of all news consumption, but it was heavily concentrated: only 1% of users were exposed to 80% of fake news, and 0.1% of users were responsible for sharing 80% of fake news. Interestingly, fake news was most concentrated among conservative voters. Science, this issue p. 374; see also p. 348.
Article
After the 2016 US presidential election, the concept of fake news captured popular attention, but conversations lacked a clear conceptualization and used the label in elastic ways to describe various distinct phenomena. In this paper, we analyze fake news as genre blending, combining elements of traditional news with features that are exogenous to normative professional journalism: misinformation, sensationalism, clickbait, and bias. Through a content analysis of stories published by 50 sites that have been labeled fake news and the engagement they generated on social media, we found that stories employed moderate levels of sensationalism, misinformation and partisanship to provide anti-establishment narratives. Complete fabrications were uncommon and did not resonate well with audiences, although there was some truth-stretching that came with genre blending. Results suggest that technocentric solutions aimed at detecting falsehoods are likely insufficient, as fake news is defined more by partisanship and identity politics than misinformation and deception.
Article
Social media sites use different labels to help users find and select news feeds. For example, Blue Feed, Red Feed, a news feed created by the Wall Street Journal, uses stance labels to separate news articles with opposing political ideologies to help people explore diverse opinions. To combat the spread of fake news, Facebook has experimented with putting credibility labels on news articles to help readers decide whether the content is trustworthy. To systematically understand the effects of stance and credibility labels on online news selection and consumption, we conducted a controlled experiment to study how these labels influence the selection, perceived extremeness, and level of agreement of news articles. Results show that stance labels may intensify selective exposure, a tendency for people to look for agreeable opinions, and make people more vulnerable to polarized opinions and fake news. We found, however, that the effect of credibility labels on reducing selective exposure and recognizing fake news is limited. Although originally designed to encourage exposure to opposite viewpoints, stance labels can make fake news articles look more trustworthy, and they may lower people's perception of the extremeness of fake news articles. Our results have important implications regarding the subtle effects of stance and credibility labels on online news consumption.
Article
Mindfulness is an important emerging topic. Individual mindfulness in IT use has not been studied systematically. Through three programmatic empirical studies, this paper develops a scale for IT mindfulness and tests its utility in the post-adoption system use context. Study 1 develops a measure of IT mindfulness and evaluates its validity and reliability. Study 2 employs a laboratory experiment to examine whether IT mindfulness can be manipulated and whether its influence is consistent across technological contexts. Study 3 places IT mindfulness in a nomological network and tests the construct's utility for predicting more active system use (e.g., trying to innovate and deep structure usage) as well as more automatic system use (e.g., continuance intention). Our primary contribution includes the development and validation of a scale for IT mindfulness. In addition, we demonstrate that IT mindfulness (1) differs from important existing concepts such as cognitive absorption, (2) can be manipulated, (3) more closely relates to active system use than automatic system use, and (4) provides more predictive power within the IS context than general trait mindfulness.
Article
Fake news has become a prominent topic of public discussion, particularly amongst elites. Recent research has explored the prevalence of fake news during the 2016 election cycle and possible effects on electoral outcomes. This scholarship has not yet considered how elite discourse surrounding fake news may influence individual perceptions of real news. Through an experiment, this study explores the effects of elite discourse about fake news on the public’s evaluation of news media. Results show that exposure to elite discourse about fake news leads to lower levels of trust in media and less accurate identification of real news. Therefore, frequent discussion of fake news may affect whether individuals trust news media and the standards with which they evaluate it. This discourse may also prompt the dissemination of false information, particularly when fake news is discussed by elites without context and caution.
Article
Why do people believe blatantly inaccurate news headlines ("fake news")? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news - even for headlines that align with individuals' political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant's ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one's political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se - a finding that opens potential avenues for fighting fake news.
Article
Democracies assume accurate knowledge by the populace, but the human attraction to fake and untrustworthy news poses a serious problem for healthy democratic functioning. We articulate why and how identification with political parties – known as partisanship – can bias information processing in the human brain. There is extensive evidence that people engage in motivated political reasoning, but recent research suggests that partisanship can alter memory, implicit evaluation, and even perceptual judgments. We propose an identity-based model of belief for understanding the influence of partisanship on these cognitive processes. This framework helps to explain why people place party loyalty over policy, and even over truth. Finally, we discuss strategies for de-biasing information processing to help to create a shared reality across partisan divides.
Conference Paper
The topic of fake news has drawn attention from both the public and the academic communities. Such misinformation has the potential of affecting public opinion, providing an opportunity for malicious parties to manipulate the outcomes of public events such as elections. Because such high stakes are at play, automatically detecting fake news is an important yet challenging problem that is not yet well understood. Nevertheless, there are three generally agreed-upon characteristics of fake news: the text of an article, the user response it receives, and the source users promoting it. Existing work has largely focused on tailoring solutions to one particular characteristic, which has limited their success and generality. In this work, we propose a model that combines all three characteristics for a more accurate and automated prediction. Specifically, we incorporate the behavior of both parties, users and articles, and the group behavior of users who propagate fake news. Motivated by the three characteristics, we propose a model called CSI, which is composed of three modules: Capture, Score, and Integrate. The first module is based on the response and text; it uses a Recurrent Neural Network to capture the temporal pattern of user activity on a given article. The second module learns the source characteristic based on the behavior of users, and the two are integrated with the third module to classify an article as fake or not. Experimental analysis on real-world data demonstrates that CSI achieves higher accuracy than existing models and extracts meaningful latent representations of both users and articles.
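A minimal PyTorch-style sketch of the Capture/Score/Integrate structure described above. The feature dimensions, the mean aggregation of user scores, and the random toy inputs are assumptions for illustration; this is not the authors' implementation.

```python
# Minimal sketch of a Capture / Score / Integrate structure.
# Dimensions, features, and the random toy inputs are illustrative assumptions.
import torch
import torch.nn as nn

class CSISketch(nn.Module):
    def __init__(self, engagement_dim=8, user_dim=16, hidden=32):
        super().__init__()
        # Capture: RNN over the temporal sequence of engagements with one article
        self.capture = nn.LSTM(engagement_dim, hidden, batch_first=True)
        # Score: score the users promoting the article from their behavior features
        self.score = nn.Sequential(nn.Linear(user_dim, 1), nn.Sigmoid())
        # Integrate: combine the article representation and the aggregated user score
        self.integrate = nn.Linear(hidden + 1, 1)

    def forward(self, engagements, user_features):
        # engagements: (batch, time, engagement_dim); user_features: (batch, n_users, user_dim)
        _, (h_n, _) = self.capture(engagements)
        article_repr = h_n[-1]                           # (batch, hidden)
        user_scores = self.score(user_features).mean(1)  # (batch, 1)
        logits = self.integrate(torch.cat([article_repr, user_scores], dim=1))
        return torch.sigmoid(logits)                     # probability the article is fake

model = CSISketch()
fake_prob = model(torch.randn(4, 20, 8), torch.randn(4, 50, 16))
print(fake_prob.shape)  # torch.Size([4, 1])
```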
Article
This article explores belief in political rumors surrounding the health care reforms enacted by Congress in 2010. Refuting rumors with statements from unlikely sources can, under certain circumstances, increase the willingness of citizens to reject rumors regardless of their own political predilections. Such source credibility effects, while well known in the political persuasion literature, have not been applied to the study of rumor. Though source credibility appears to be an effective tool for debunking political rumors, risks remain. Drawing upon research from psychology on ‘fluency’ – the ease of information recall – this article argues that rumors acquire power through familiarity. Attempting to quash rumors through direct refutation may facilitate their diffusion by increasing fluency. The empirical results find that merely repeating a rumor increases its power.
Conference Paper
Is a polarized society inevitable, where people choose to be exposed to only political news and commentary that reinforces their existing viewpoints? We examine the relationship between the numbers of supporting and challenging items in a collection of political opinion items and readers' satisfaction, and then evaluate whether simple presentation techniques such as highlighting agreeable items or showing them first can increase satisfaction when fewer agreeable items are present. We find individual differences: some people are diversity-seeking while others are challenge-averse. For challenge-averse readers, highlighting appears to make satisfaction with sets of mostly agreeable items more extreme, but does not increase satisfaction overall, and sorting agreeable content first appears to decrease satisfaction rather than increasing it. These findings have important implications for builders of websites that aggregate content reflecting different positions.
Source Credibility Matters: Does Automated Journalism Inspire Selective Exposure
  • Chenyan Jia