Preprint

Online Disinformation and the Role of Wikipedia


Abstract

The aim of this study is to identify key areas of research that can be useful in the fight against disinformation on Wikipedia. To address this problem we perform a literature review organized around three main questions: (i) What is disinformation? (ii) What are the most popular mechanisms to spread online disinformation? and (iii) Which mechanisms are currently being used to fight against disinformation? For each of these questions we first take a general approach, considering studies from areas such as journalism and communications, sociology, philosophy, and information and political sciences, and then compare those studies with the current situation in the Wikipedia ecosystem. We conclude that in order to keep Wikipedia as free as possible from disinformation, it is necessary to help patrollers detect disinformation early and assess the credibility of external sources. More research is needed to develop tools that use state-of-the-art machine learning techniques to detect potentially dangerous content, empowering patrollers to deal with attacks that are becoming more complex and sophisticated.


... The categories used by Facebook's fact-checking agencies to label content reliability include False, True, Mixed, Incorrect Title, Inappropriate, Ridiculousness, Opinion, and Joke Generator. The same classifications are also suggested to be used as a kind of moderation tool for Wikipedia (Saez-Trumper, 2019). ...
Article
Full-text available
The concepts of Islamophobia and self-orientalism have gained prominence in recent years, both in societal events and academic debates. This study aims to explore how the construction of Islam is carried out in articles produced with the key terms "the fundamentals of faith" and "obligations of Islam" on Vikipedi Türkiye, and to examine the relationship between this construction and the self-orientalist Islamophobic discourse. The relevant texts were obtained through the Maxqda program and analyzed using content analysis methodology. As a result of the analysis, it was found that content about the fundamentals of faith and Islamic obligations on Wikipedia was produced in a way that could manipulate users, with references often directed not to the primary sources of Islam but to individuals highlighted in popular discourses in Turkey and worldwide. Moreover, articles were deliberately presented in a manner that could lead to negative attitudes, especially about specific topics (such as jihad, marriage, sects, etc.) among platform users. The study suggests that such platforms, which inform the public, may serve the phenomenon of local Islamophobia or self-orientalism. It also emphasizes the need for these platforms to be supported with accurate content and for followers to approach the information on these platforms with greater skepticism, directing them to authentic sources.
Conference Paper
Full-text available
Through a systematic literature review, in this work we searched classical electronic libraries in order to find the most recent papers related to fake news detection on social media. Our goal is to map the state of the art of fake news detection, define fake news, and find the most useful machine learning techniques for the task. We concluded that the most used method for automatic fake news detection is not just one classical machine learning technique, but rather an amalgamation of classic techniques coordinated by a neural network. We also identified the need for a domain ontology that would unify the different terminology and definitions of the fake news domain. This lack of consensual information may mislead opinions and conclusions.
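The "classic techniques coordinated by a neural network" pattern that the review identifies can be illustrated with a stacking ensemble. Below is a minimal sketch, assuming scikit-learn; the toy texts, labels, and feature choices are invented for illustration and do not reproduce any surveyed system.

```python
# Hedged sketch: classic classifiers (Naive Bayes, SVM, logistic regression)
# whose outputs are combined by a small neural network, via stacking.
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus: 0 = legitimate, 1 = fake (labels invented for illustration).
texts = ["official report confirms budget figures",
         "SHOCKING cure that doctors hide from you",
         "senate passes infrastructure bill after debate",
         "secret plot revealed, share before they delete it"]
labels = [0, 1, 0, 1]

base_learners = [("nb", MultinomialNB()),
                 ("svm", LinearSVC()),
                 ("lr", LogisticRegression(max_iter=1000))]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    StackingClassifier(estimators=base_learners,
                       final_estimator=MLPClassifier(hidden_layer_sizes=(16,),
                                                     max_iter=2000),
                       cv=2))  # tiny cv fold count because the toy corpus is tiny
model.fit(texts, labels)
print(model.predict(["doctors hide this shocking secret"]))
```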
Conference Paper
Full-text available
In Brazil, 48% of the population use WhatsApp to share and discuss news. Currently, there are serious concerns that this platform can become a fertile ground for groups interested in disseminating misinformation, especially as part of articulated political campaigns. In particular, WhatsApp provides an important space for users to engage in public conversations that merit attention: the public groups. These groups are suitable for political activism and social movement organization. Additionally, it is reasonable to assume that a malicious misinformation campaign might attempt to maximize the audience of a fake story by sharing it in existing public groups. In this paper, we present a system for gathering, analyzing, and visualizing public groups in WhatsApp. In addition to describing our methodology, we also provide a brief characterization of the content shared in 127 Brazilian groups. We hope our system can help journalists and researchers to understand the repercussion of events related to the Brazilian elections within these groups.
Conference Paper
Full-text available
The Deepfake algorithm allows a user to replace the face of one actor in a video with the face of a different actor in a photorealistic manner. This poses forensic challenges with regard to the reliability of video evidence. To contribute to a solution, photo-response non-uniformity (PRNU) analysis is tested for its effectiveness at detecting Deepfake video manipulation. The PRNU analysis shows a significant difference in mean normalised cross-correlation scores between authentic videos and Deepfakes.
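The core of the PRNU check described above is a normalised cross-correlation between noise residuals. A minimal sketch follows, assuming numpy and scipy; the Gaussian-filter denoiser stands in for the wavelet denoisers common in the forensics literature, and the frames are synthetic rather than real video.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(frame):
    # Residual = frame minus its denoised version; this keeps the
    # high-frequency sensor pattern (PRNU) plus random noise.
    return frame - gaussian_filter(frame, sigma=2)

def fingerprint(frames):
    # Averaging residuals over many frames cancels random noise and
    # leaves an estimate of the fixed sensor pattern.
    return np.mean([noise_residual(f) for f in frames], axis=0)

def ncc(a, b):
    # Normalised cross-correlation between two residual patterns.
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
sensor_pattern = rng.normal(0, 1, (64, 64))  # synthetic fixed camera pattern
authentic = [rng.normal(128, 10, (64, 64)) + sensor_pattern for _ in range(50)]
tampered = [rng.normal(128, 10, (64, 64)) for _ in range(50)]  # pattern absent

reference = fingerprint(authentic[:25])
print("authentic vs reference:", ncc(fingerprint(authentic[25:]), reference))
print("tampered  vs reference:", ncc(fingerprint(tampered), reference))
```

A swapped-in face carries residuals from a different (or no) sensor, so its correlation with the reference pattern drops; that gap is what the paper measures.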
Conference Paper
Full-text available
As Internet users increasingly rely on social media sites like Facebook and Twitter to receive news, they are faced with a bewildering number of news media choices. For example, thousands of Facebook pages today are registered and categorized as some form of news media outlets. Inferring the bias (or slant) of these media pages poses a difficult challenge for media watchdog organizations that traditionally rely on content analysis. In this paper, we explore a novel scalable methodology to accurately infer the biases of thousands of news sources on social media sites like Facebook and Twitter. Our key idea is to utilize their advertiser interfaces, which offer detailed insights into the demographics of the news source's audience on the social media site. We show that the ideological (liberal or conservative) leaning of a news source can be accurately estimated by the extent to which liberals or conservatives are over-/under-represented among its audience. Additionally, we show how biases in a news source's audience demographics, along the lines of race, gender, age, national identity, and income, can be used to infer more fine-grained biases of the source, such as social vs. economic vs. nationalistic conservatism. Finally, we demonstrate the scalability of our approach by building and publicly deploying a system, called "Media Bias Monitor", which makes the biases in audience demographics for over 20,000 news outlets on Facebook transparent to any Internet user.
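The audience-based estimate at the heart of this method reduces to comparing an outlet's audience shares against platform-wide baselines. A toy sketch follows, with invented numbers; the paper obtains such shares from the advertiser interfaces.

```python
def leaning(audience, baseline):
    # Score in [-1, 1]: negative = liberal-leaning, positive = conservative,
    # computed from how over-/under-represented each group is in the audience.
    lib = audience["liberal"] / baseline["liberal"]
    con = audience["conservative"] / baseline["conservative"]
    return (con - lib) / (con + lib)

baseline = {"liberal": 0.31, "conservative": 0.27}  # platform-wide shares (toy)
outlet_a = {"liberal": 0.55, "conservative": 0.10}
outlet_b = {"liberal": 0.12, "conservative": 0.52}
print(leaning(outlet_a, baseline))  # negative: liberal-leaning audience
print(leaning(outlet_b, baseline))  # positive: conservative-leaning audience
```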
Conference Paper
Full-text available
In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification. It consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as SUPPORTED, REFUTED or NOTENOUGHINFO by annotators achieving 0.6841 in Fleiss κ. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. To characterize the challenge of the dataset presented, we develop a pipeline approach and compare it to suitably designed oracles. The best accuracy we achieve on labeling a claim accompanied by the correct evidence is 31.87%, while if we ignore the evidence we achieve 50.91%. Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.
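A FEVER-style system is usually built as a retrieve-then-verify pipeline. The structural sketch below uses TF-IDF retrieval for the evidence step and a stub for the final label; real FEVER systems train an entailment model for that last step, and the three-sentence corpus here is invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = ["Berlin is the capital of Germany.",
          "The Danube flows through ten countries.",
          "Germany's parliament moved from Bonn to Berlin in 1999."]

def retrieve_evidence(claim, sentences, k=2):
    # Rank candidate evidence sentences by TF-IDF cosine similarity.
    vec = TfidfVectorizer().fit(sentences + [claim])
    sims = cosine_similarity(vec.transform([claim]),
                             vec.transform(sentences))[0]
    return [sentences[i] for i in sims.argsort()[::-1][:k]]

def verify(claim, evidence):
    # Stub: a real system trains a classifier over (claim, evidence)
    # pairs to output SUPPORTED, REFUTED, or NOTENOUGHINFO.
    return "NOTENOUGHINFO"

claim = "Berlin is the capital of Germany."
evidence = retrieve_evidence(claim, corpus)
print(evidence, "->", verify(claim, evidence))
```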
Article
Full-text available
Political communication is the process of putting information, technology, and media in the service of power. Increasingly, political actors are automating such processes, through algorithms that obscure motives and authors yet reach immense networks of people through personal ties among friends and family. Not all political algorithms are used for manipulation and social control however. So what are the primary ways in which algorithmic political communication—organized by automated scripts on social media—may undermine elections in democracies? In the US context, what specific elements of communication policy or election law might regulate the behavior of such “bots,” or the political actors who employ them? First, we describe computational propaganda and define political bots as automated scripts designed to manipulate public opinion. Second, we illustrate how political bots have been used to manipulate public opinion and explain how algorithms are an important new domain of analysis for scholars of political communication. Finally, we demonstrate how political bots are likely to interfere with political communication in the United States by allowing surreptitious campaign coordination, illegally soliciting either contributions or votes, or violating rules on disclosure.
Article
Full-text available
As political polarization in the United States continues to rise, the question of whether polarized individuals can fruitfully cooperate becomes pressing. Although diversity of individual perspectives typically leads to superior team performance on complex tasks, strong political perspectives have been associated with conflict, misinformation and a reluctance to engage with people and perspectives beyond one's echo chamber. It is unclear whether self-selected teams of politically diverse individuals will create higher or lower quality outcomes. In this paper, we explore the effect of team political composition on performance through analysis of millions of edits to Wikipedia's Political, Social Issues, and Science articles. We measure editors' political alignments by their contributions to conservative versus liberal articles. A survey of editors validates that those who primarily edit liberal articles identify more strongly with the Democratic party and those who edit conservative ones with the Republican party. Our analysis then reveals that polarized teams---those consisting of a balanced set of politically diverse editors---create articles of higher quality than politically homogeneous teams. The effect appears most strongly in Wikipedia's Political articles, but is also observed in Social Issues and even Science articles. Analysis of article "talk pages" reveals that politically polarized teams engage in longer, more constructive, competitive, and substantively focused but linguistically diverse debates than political moderates. More intense use of Wikipedia policies by politically diverse teams suggests institutional design principles to help unleash the power of politically polarized teams.
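The alignment measure the authors describe can be sketched as a simple balance score per editor, with team polarization as the spread of those scores. A minimal illustration with invented edit counts:

```python
import statistics

def alignment(conservative_edits, liberal_edits):
    # Score in [-1, 1]: -1 = edits only liberal articles,
    # +1 = edits only conservative articles.
    total = conservative_edits + liberal_edits
    return (conservative_edits - liberal_edits) / total if total else 0.0

def team_polarization(editors):
    # Spread of alignment scores: higher = more politically diverse team.
    return statistics.pstdev(alignment(c, l) for c, l in editors)

homogeneous = [(40, 2), (35, 5), (50, 1)]            # all lean one way
polarized = [(40, 2), (3, 45), (25, 24), (1, 50)]    # balanced mix
print(team_polarization(homogeneous), team_polarization(polarized))
```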
Article
Full-text available
In this article, we present results on the identification and behavioral analysis of social bots in a sample of 542,584 Tweets, collected before and after Japan's 2014 general election. Typical forms of bot activity include massive Retweeting and repeated posting of (nearly) the same message, sometimes used in combination. We focus on the second method and present (1) a case study on several patterns of bot activity, (2) methodological considerations on the automatic identification of such patterns and the prerequisite near-duplicate detection, and (3) qualitative insights into the purposes behind the usage of social/political bots. We argue that it was in the latency of the semi-public sphere of social media, and not in the visible or manifest public sphere (official campaign platform, mass media), where Shinzō Abe's hidden nationalist agenda interlocked and overlapped with the one propagated by organizations such as Nippon Kaigi and Internet right-wingers (netto uyo) during the election campaign, the latter potentially forming an enormous online support army of Abe's agenda.
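The repeated-posting analysis hinges on near-duplicate detection. Below is a minimal sketch using character-shingle Jaccard similarity; the shingle size and threshold are illustrative choices, not the paper's.

```python
def shingles(text, n=4):
    # Character n-grams over whitespace-normalised, lowercased text.
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(max(1, len(text) - n + 1))}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

tweets = ["Vote for candidate X, the only choice for Japan!!",
          "Vote for candidate X -- the only choice for Japan!",
          "Lovely weather in Kyoto today."]
for i in range(len(tweets)):
    for j in range(i + 1, len(tweets)):
        if jaccard(tweets[i], tweets[j]) > 0.6:  # illustrative threshold
            print("near-duplicate pair:", i, j)
```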
Book
Full-text available
Social media is an invaluable source of time-critical information during a crisis. However, emergency response and humanitarian relief organizations that would like to use this information struggle with an avalanche of social media messages that exceeds human capacity to process. Emergency managers, decision makers, and affected communities can make sense of social media through a combination of machine computation and human compassion - expressed by thousands of digital volunteers who publish, process, and summarize potentially life-saving information. This book brings together computational methods from many disciplines: natural language processing, semantic technologies, data mining, machine learning, network analysis, human-computer interaction, and information visualization, focusing on methods that are commonly used for processing social media messages under time-critical constraints, and offering more than 500 references to in-depth information.
Conference Paper
Full-text available
Online astroturfing refers to coordinated campaigns where messages supporting a specific agenda are distributed via the Internet. These messages employ deception to create the appearance of being generated by an independent entity. In other words, astroturfing occurs when people are hired to present certain beliefs or opinions on behalf of their employer through various communication channels. The key component of astroturfing is the creation of false impressions that a particular idea or opinion has widespread support. Although the concept of astroturfing in traditional media outlets has been studied, online astroturfing has not been investigated intensively by IS scholars. This study develops a theoretically-based definition of online astroturfing from an IS perspective and discusses its key attributes. Online astroturfing campaigns may ultimately have a substantial influence on both Internet users and society. Thus a clear understanding of its characteristics, techniques and usage can provide valuable insights for both practitioners and scholars.
Conference Paper
Full-text available
In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomena, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news, because rumors tend to be questioned more than news by the Twitter community. This result shows that it is possible to detect rumors by using aggregate analysis on tweets.
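The aggregate signal reported here, that rumors are questioned more than news, can be sketched as a question ratio over replies. The marker list and data below are invented for illustration:

```python
QUESTION_MARKERS = ("?", "is this true", "really", "source", "fake")

def question_ratio(replies):
    # Fraction of replies that question or doubt the original tweet.
    flagged = sum(any(m in reply.lower() for m in QUESTION_MARKERS)
                  for reply in replies)
    return flagged / len(replies)

news_replies = ["Confirmed by officials.", "Stay safe everyone.", "Shared."]
rumor_replies = ["Is this true?", "Source??", "really?", "Praying for them."]
for name, replies in [("news", news_replies), ("rumor", rumor_replies)]:
    print(name, round(question_ratio(replies), 2))
```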
Article
Full-text available
The distinction between misinformation and disinformation becomes especially important in political, editorial, and advertising contexts, where sources may make deliberate efforts to mislead, deceive, or confuse an audience in order to promote their personal, religious, or ideological objectives. The difference consists in having an agenda. It thus bears comparison with lying, because lies are assertions that are false, that are known to be false, and that are asserted with the intention to mislead, deceive, or confuse. One context in which disinformation abounds is the study of the death of JFK, which I know from more than a decade of personal research experience. Here I reflect on that experience and advance a preliminary theory of disinformation that is intended to stimulate thinking on this increasingly important subject. Five kinds of disinformation are distinguished and exemplified by real life cases I have encountered. It follows that the story you are about to read is true.
Conference Paper
Full-text available
We analyze the information credibility of news propagated through Twitter, a popular microblogging service. Previous research has shown that most of the messages posted on Twitter are truthful, but the service is also used to spread misinformation and false rumors, often unintentionally. In this paper we focus on automatic methods for assessing the credibility of a given set of tweets. Specifically, we analyze microblog postings related to "trending" topics, and classify them as credible or not credible, based on features extracted from them. We use features from users' posting and re-posting ("re-tweeting") behavior, from the text of the posts, and from citations to external sources. We evaluate our methods using a significant number of human assessments about the credibility of items on a recent sample of Twitter postings. Our results show that there are measurable differences in the way messages propagate that can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70% to 80%.
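The classifier the authors describe combines user, text, and citation features. A minimal sketch with a hand-rolled feature vector and invented toy tweets follows; the paper's actual feature set is much richer.

```python
from sklearn.linear_model import LogisticRegression

def features(tweet):
    return [
        tweet["retweets"],                    # propagation behaviour
        int(tweet["has_url"]),                # citation of external sources
        tweet["text"].count("!"),             # text-based signal
        int(tweet["user_followers"] > 1000),  # author-based signal
    ]

train = [  # 1 = credible, 0 = not credible (toy labels)
    ({"retweets": 120, "has_url": True, "user_followers": 50000,
      "text": "Magnitude 8.8 earthquake hits Chile, officials report"}, 1),
    ({"retweets": 3, "has_url": False, "user_followers": 12,
      "text": "TSUNAMI COMING!!! RT NOW!!!"}, 0),
    ({"retweets": 80, "has_url": True, "user_followers": 9000,
      "text": "Official casualty figures released"}, 1),
    ({"retweets": 1, "has_url": False, "user_followers": 40,
      "text": "they are hiding the truth!!"}, 0),
]
X = [features(t) for t, _ in train]
y = [label for _, label in train]
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([features({"retweets": 60, "has_url": True,
                             "user_followers": 3000,
                             "text": "Bridge closed, see official report"})]))
```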
Article
Full-text available
In this paper, the serious problem of disinformation is discussed. It is argued that, in order to deal with this problem, we first need to understand exactly what disinformation is. The philosophical method of conceptual analysis is described, and a conceptual analysis of disinformation is offered. Finally, how this analysis can help us to deal with the problem of disinformation is briefly discussed.
Article
Full-text available
Computational and information-theoretic research in philosophy has become increasingly fertile and pervasive, giving rise to a wealth of interesting results. Consequently, a new and vitally important field has emerged, the philosophy of information (PI). This paper introduces PI as the philosophical field concerned with (i) the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation and sciences, and with (ii) the elaboration and application of information-theoretic and computational methodologies to philosophical problems. It is argued that PI is a mature discipline for three reasons: it represents an autonomous field of research; it provides an innovative approach to both traditional and new philosophical topics; and it can stand beside other branches of philosophy, offering a systematic treatment of the conceptual foundations of the world of information and the information society.
Conference Paper
Wikipedia is playing an increasingly central role on the web, and the policies its contributors follow when sourcing and fact-checking content affect millions of readers. Among these core guiding principles, verifiability policies have a particularly important role. Verifiability requires that information included in a Wikipedia article be corroborated against reliable secondary sources. Because of the manual labor needed to curate Wikipedia at scale, however, its contents do not always evenly comply with these policies. Citations (i.e., references to external sources) may not conform to verifiability requirements or may be missing altogether, potentially weakening the reliability of specific topic areas of the free encyclopedia. In this paper, we aim to provide an empirical characterization of the reasons why and how Wikipedia cites external sources to comply with its own verifiability guidelines. First, we construct a taxonomy of reasons why inline citations are required, by collecting labeled data from editors of multiple Wikipedia language editions. We then crowdsource a large-scale dataset of Wikipedia sentences annotated with categories derived from this taxonomy. Finally, we design algorithmic models to determine if a statement requires a citation, and to predict the citation reason. We evaluate the accuracy of such models across different classes of Wikipedia articles of varying quality, and on external datasets of claims annotated for fact-checking purposes.
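The first of the models described, deciding whether a statement requires a citation, is at its core a sentence classifier. A minimal sketch, assuming scikit-learn and four invented training sentences; the paper crowdsources such labels at scale and additionally predicts the reason a citation is needed.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "The study reported a 40% decline in the species' population.",
    "Critics described the decision as deeply controversial.",
    "Paris is the capital of France.",
    "The article is divided into three sections.",
]
needs_citation = [1, 1, 0, 0]  # toy labels: claims vs. common knowledge/structure

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(sentences, needs_citation)
print(model.predict(["Researchers claimed the drug caused the outbreak."]))
```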
Article
Finding facts about fake news
There was a proliferation of fake news during the 2016 election cycle. Grinberg et al. analyzed Twitter data by matching Twitter accounts to specific voters to determine who was exposed to fake news, who spread fake news, and how fake news interacted with factual news (see the Perspective by Ruths). Fake news accounted for nearly 6% of all news consumption, but it was heavily concentrated: only 1% of users were exposed to 80% of fake news, and 0.1% of users were responsible for sharing 80% of fake news. Interestingly, fake news was most concentrated among conservative voters. Science, this issue p. 374; see also p. 348.
Conference Paper
Recent years have witnessed a widespread increase of rumor news generated by humans and machines. Tools for investigating rumor news have therefore become an urgent necessity. One useful function of such tools is to show the ways a specific topic or event is represented, by presenting different points of view from multiple sources. In this paper, we propose Maester, a novel agreement-aware search framework for investigating rumor news. Given an investigative question, Maester will retrieve articles related to that question, and assign and display top articles from the agree, disagree, and discuss categories to users. Splitting the results into these three categories provides the user a holistic view of the investigative question. We build Maester based on the following two key observations: (1) relatedness can commonly be determined by keywords and entities occurring in both questions and articles, and (2) the level of agreement between the investigative question and the related news article can often be decided by a few key sentences. Accordingly, we use gradient boosting tree models with keyword/entity matching features for relatedness detection, and leverage a recurrent neural network to infer the level of agreement. Our experiments on the Fake News Challenge (FNC) dataset demonstrate up to an order of magnitude improvement of Maester over the original FNC winning solution, for agreement-aware search.
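Maester's first stage, relatedness detection with keyword-matching features and gradient-boosted trees, can be sketched as follows. The agreement stage (a recurrent network over key sentences) is left as a stub, and the question/article pairs are invented.

```python
from sklearn.ensemble import GradientBoostingClassifier

def overlap_features(question, article):
    # Keyword-overlap features between question and article.
    q, a = set(question.lower().split()), set(article.lower().split())
    return [len(q & a), len(q & a) / len(q)]

pairs = [  # (question, article, related?) -- toy data
    ("did the mayor ban bicycles", "mayor announces bicycle ban downtown", 1),
    ("did the mayor ban bicycles", "new recipes for summer salads", 0),
    ("is the dam safe", "experts ask whether the dam is safe", 1),
    ("is the dam safe", "local team wins championship final", 0),
]
X = [overlap_features(q, a) for q, a, _ in pairs]
y = [label for _, _, label in pairs]
relatedness = GradientBoostingClassifier().fit(X, y)

def agreement(question, article):
    # Stub: Maester infers agree / disagree / discuss with a recurrent
    # neural network over a few key sentences.
    return "discuss"

q, a = "did the mayor ban bicycles", "council debates the bicycle ban"
print(relatedness.predict([overlap_features(q, a)]), agreement(q, a))
```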
Article
Deepfake videos are the product of artificial intelligence or machine-learning applications that merge, combine, replace and superimpose images and video clips onto a video, creating a fake video that appears authentic. The main issue with Deepfake videos is that anyone can produce explicit content without the consent of those involved. While some of these videos are humorous and benign, the majority of them are pornographic. The faces of celebrities and other well-known (and lesser-known) individuals have been superimposed on the bodies of porn stars. The existence of this technology erodes trust in video evidence and adversely affects its probative value in court. This article describes the current and future capabilities of this technology, stresses the need to plan for its treatment as evidence in court, and draws attention to its current and future impact on the authentication process of video evidence in courts. Ultimately, as the technology improves, parallel technologies will need to be developed and utilised to identify and expose fake videos.
Book
Cybersexism is rampant and can exact an astonishingly high cost. In some cases, the final result is suicide. Bullying, stalking, and trolling are just the beginning. Extreme examples such as GamerGate get publicized, but otherwise the online abuse of women is largely underreported. Haters combines a history of online sexism with suggestions for solutions. Using current events and the latest available research into cybersexism, Bailey Poland questions the motivations behind cybersexist activities and explores methods to reduce footprints of Internet misogyny, drawing parallels between online and offline abuse. By exploring the cases of Alyssa Funke, Rehtaeh Parsons, Audrie Pott, Zoe Quinn, Anita Sarkeesian, Brianna Wu, and others, and her personal experiences with sexism, Poland develops a compelling method of combating sexism online.
Article
Social media have been extensively praised for increasing democratic discussion on social issues related to policy and politics. However, what happens when these powerful communication tools are exploited to manipulate online discussion, to change the public perception of political entities, or even to try to affect the outcome of political elections? In this study we investigated how the presence of social media bots, algorithmically driven entities that on the surface appear as legitimate users, affected political discussion around the 2016 U.S. Presidential election. By leveraging state-of-the-art social bot detection algorithms, we uncovered a large fraction of the user population that may not be human, accounting for a significant portion of generated content (about one-fifth of the entire conversation). We inferred political partisanship from hashtag adoption, for both humans and bots, and studied spatio-temporal communication, political support dynamics, and influence mechanisms by discovering the level of network embeddedness of the bots. Our findings suggest that the presence of social media bots can indeed negatively affect democratic political discussion rather than improving it, which in turn can potentially alter public opinion and endanger the integrity of the Presidential election.
Conference Paper
Wikipedia is one of the most popular sources of free data on the Internet and subject to extensive use in numerous areas of research. Wikidata, on the other hand, the knowledge base behind Wikipedia, is less popular as a source of data, despite having "data" already in its name, and despite the fact that many applications in Natural Language Processing in general and Information Extraction in particular benefit immensely from the integration of knowledge bases. In part, this imbalance is owed to the younger age of Wikidata, which launched over a decade after Wikipedia. However, it is also owed to challenges posed by the still evolving properties of Wikidata that make its content more difficult to consume for third parties than is desirable. In this article, we analyze the causes of these challenges from the viewpoint of a data consumer and discuss possible avenues of research and advancement that both the scientific and the Wikidata community can collaborate on to turn the knowledge base into the invaluable asset that it is uniquely positioned to become.
Conference Paper
This paper introduces FactChecker, a language-aware approach to truth-finding. FactChecker differs from prior approaches in that it does not rely on iterative peer voting; instead it leverages language to infer the believability of fact candidates. In particular, FactChecker makes use of linguistic features to detect if a given source objectively states facts or is speculative and opinionated. To ensure that fact candidates mentioned in similar sources have similar believability, FactChecker augments objectivity with a co-mention score to compute the overall believability score of a fact candidate. Our experiments on various datasets show that FactChecker yields higher accuracy than existing approaches.
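The two signals FactChecker combines, linguistic objectivity and co-mention support, can be sketched with a toy scoring rule. The speculative-word list and the weighting below are illustrative stand-ins for the paper's trained linguistic features:

```python
SPECULATIVE = {"allegedly", "reportedly", "might", "rumored", "believe"}

def objectivity(sentence):
    # Share of words that are NOT speculative markers.
    words = sentence.lower().split()
    return 1.0 - sum(w in SPECULATIVE for w in words) / len(words)

def believability(candidate, sources):
    # Mean objectivity of sources stating the candidate fact,
    # weighted by how widely the fact is co-mentioned.
    statements = [s for s in sources if candidate in s.lower()]
    if not statements:
        return 0.0
    co_mention = len(statements) / len(sources)
    return co_mention * sum(objectivity(s) for s in statements) / len(statements)

sources = ["The capital of Australia is Canberra, as fixed by statute.",
           "Allegedly the capital of Australia is Sydney, some believe.",
           "The capital of Australia is Canberra."]
print(believability("capital of australia is canberra", sources))
print(believability("capital of australia is sydney", sources))
```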
Article
This tutorial brings together perspectives on entity resolution (ER) from a variety of fields, including databases, machine learning, natural language processing and information retrieval, to provide, in one setting, a survey of a large body of work. We discuss both the practical aspects and theoretical underpinnings of ER. We describe existing solutions, current challenges, and open research problems.
Article
This chapter is an overview of the design and analysis of reputation systems for strategic users. We consider three specific strategic threats to reputation systems: the possibility of users with poor reputations starting afresh (whitewashing); lack of effort or honesty in providing feedback; and sybil attacks, in which users create phantom feedback from fake identities to manipulate their own reputation. In each case, we present a simple analytical model that captures the essence of the strategy, and describe approaches to solving the strategic problem in the context of this model. We conclude with a discussion of open questions in this research area. If each entity's history of previous interactions is made visible to potential new interaction partners, several benefits ensue. First, a history may reveal information about an entity's ability, allowing others to make choices about whether to interact with that entity, and on what terms. Second, an expectation that current performance will be visible in the future may deter moral hazard in the present, that hazard being the temptation to cheat or exert low effort. In other words, visible histories create an incentive to [...]
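One classic countermeasure to the whitewashing threat described here is to give newcomers the lowest reputation the system can sustain, so that discarding a bad history never pays. The scoring rule below is an illustrative beta-style estimate, not a scheme taken from the chapter:

```python
def reputation(good, bad, prior_good=0.0, prior_total=2.0):
    # Smoothed estimate of trustworthiness; newcomers (good = bad = 0)
    # start at prior_good / prior_total, the floor of the system.
    return (good + prior_good) / (good + bad + prior_total)

veteran_with_bad_history = reputation(good=10, bad=30)
whitewashed_newcomer = reputation(good=0, bad=0)
# With a zero floor, no history scores below a fresh identity, so
# abandoning an identity never improves one's standing.
print(veteran_with_bad_history, whitewashed_newcomer)
```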
Conference Paper
Recently, Nature published an article comparing the quality of Wikipedia articles to those of Encyclopedia Britannica (Giles 2005). The article, which gained much public attention, provides evidence for Wikipedia quality, but does not provide an explanation of the underlying source of that quality. Wikipedia, and wikis in general, aggregate information from a large and diverse author-base, where authors are free to modify any article. Building upon Surowiecki's (2005) Wisdom of Crowds, we develop a model of the factors that determine wiki content quality. In an empirical study of Wikipedia, we find strong support for our model. Our results indicate that increasing size and diversity of the author-base improves content quality. We conclude by highlighting implications for system design and suggesting avenues for future research.
ARNAUDO, D. Computational propaganda in Brazil: Social bots during elections. Project on Computational Propaganda 8 (2017).
BORRA, E., WELTEVREDE, E., CIUCCARELLI, P., KALTENBRUNNER, A., LANIADO, D., MAGNI, G., MAURI, M., ROGERS, R., VENTURINI, T., ET AL. Contropedia: The analysis and visualization of controversies in Wikipedia articles. In OpenSym (2014), pp. 34-1.
BRACHTEN, F., STIEGLITZ, S., HOFEDITZ, L., KLOPPENBORG, K., AND REIMANN, A. Strategies and influence of social bots in a 2017 German state election: A case study on Twitter. arXiv preprint arXiv:1710.07562 (2017).
CADWALLADR, C. The great British Brexit robbery: How our democracy was hijacked. The Guardian 7 (2017).
COHEN, N. Perspective | Conspiracy videos? Fake news? Enter Wikipedia, the 'good cop' of the Internet. Washington Post (Apr 2018).
FACEBOOK. Creating a data set and a challenge for deepfakes, Sep 2019. [Online; accessed 23 Sep. 2019].
GOLEBIEWSKI, M., AND BOYD, D. Data Voids: Where Missing Data Can Easily Be Exploited. Data & Society, 2018.
GUESS, A., NYHAN, B., AND REIFLER, J. Selective exposure to misinformation: Evidence from the consumption of fake news during the 2016 US presidential campaign. European Research Council 9 (2018).
HALFAKER, A., AND TARABORELLI, D. Artificial intelligence service "ORES" gives Wikipedians X-ray specs to see through bad edits, 2015.
HATMAKER, T., AND CONSTINE, J. Reports of a Facebook fake news detector are apparently a plugin. TechCrunch (Dec 2016).
HOWARD, P., KOLLANYI, B., AND WOOLLEY, S. C. Bots and automation over Twitter during the third US presidential debate.
HUGHES, T., SMITH, J., AND LEAVITT, A. Helping People Better Assess the Stories They See in News Feed with the Context Button. Facebook Newsroom, Sep 2019. [Online; accessed 23 Sep. 2019].
LI, Y., AND LYU, S. Exposing deepfake videos by detecting face warping artifacts. arXiv preprint arXiv:1811.00656 (2018).
MAGENTA, M., GRAGNANI, J., AND SOUZA, F. WhatsApp weaponised in Brazil election, Sep 2019. [Online; accessed 19 Sep. 2019].
MARWICK, A., AND LEWIS, R. Media manipulation and disinformation online. New York: Data & Society Research Institute (2017).
MESSNER, M., AND SOUTH, J. Legitimizing Wikipedia: How US national newspapers frame and use the online encyclopedia in their coverage. Journalism Practice 5, 2 (2011), 145-160.
MIRIELLO, N., GILBERT, D., AND STEERS, J. Kenyans face a fake news epidemic. Vice. https://www.vice.com/en_us/article/43bdpm/kenyans-face-a-fake-news-epidemic-they-want-to-know-just-how-much-cambridge-analytica- (Accessed on 09/18/2019).