Article

Social Media, News and Political Information during the US Election: Was Polarizing Content Concentrated in Swing States?

Authors: Howard, Kollanyi, Bradshaw and Neudert

Abstract

US voters shared large volumes of polarizing political news and information in the form of links to content from Russian, WikiLeaks and junk news sources. Was this low quality political information distributed evenly around the country, or concentrated in swing states and particular parts of the country? In this data memo we apply a tested dictionary of sources about political news and information being shared over Twitter over a ten day period around the 2016 Presidential Election. Using self-reported location information, we place a third of users by state and create a simple index for the distribution of polarizing content around the country. We find that (1) nationally, Twitter users got more misinformation, polarizing and conspiratorial content than professionally produced news. (2) Users in some states, however, shared more polarizing political news and information than users in other states. (3) Average levels of misinformation were higher in swing states than in uncontested states, even when weighted for the relative size of the user population in each state. We conclude with some observations about the impact of strategically disseminated polarizing information on public life.
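The "simple index" the abstract mentions can be illustrated with a short sketch. This is a hypothetical reconstruction, not the authors' code: the state names, the counts, and the ratio-based index below are all invented for illustration, assuming an index defined as the share of polarizing links among all shared news links, with the national average weighted by each state's located user population.

```python
# Sketch of a per-state "polarizing content" index: the share of links to
# junk/polarizing sources among all shared news links, with the national
# average weighted by the relative size of each state's user population.
# All numbers below are hypothetical illustrations, not the memo's data.

state_counts = {
    # state: (polarizing_links, professional_news_links, located_users)
    "FL": (1200, 900, 5000),
    "OH": (800, 700, 3000),
    "CA": (2000, 2600, 12000),
}

total_users = sum(users for _, _, users in state_counts.values())

def polarization_index(polarizing, professional):
    """Ratio of polarizing links to all news links shared in a state."""
    return polarizing / (polarizing + professional)

# Population-weighted national average, so large states do not dominate
# the comparison between swing and uncontested states.
weighted_avg = sum(
    polarization_index(p, n) * (users / total_users)
    for p, n, users in state_counts.values()
)

for state, (p, n, users) in state_counts.items():
    flag = "above" if polarization_index(p, n) > weighted_avg else "below"
    print(f"{state}: index={polarization_index(p, n):.3f} ({flag} weighted average)")
```

Comparing each state's index against the population-weighted average is what allows a claim of the form "swing states were exposed to above-average levels of polarizing content" without large states swamping the baseline.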


... Mainstream news stories from obscure sources that were propagated on Twitter became one of the central elements of the 2016 US presidential election, and subsequent analysis reveals that many of the stories were largely manipulated or totally fabricated (Howard et al., 2017; Stone and Gordon, 2017). Republican candidate Donald Trump himself, among others, helped to drive the mainstream news agenda with his prolific use of the social media platform (Lynch, 2016; Maheshwari, 2016). ...

... While the Russian infiltration of American social media has been explored (e.g. Chou, 2016; Howard et al., 2017; Lynch, 2016; Stone and Gordon, 2017), this study, which examines the attributes of cross-border content comparisons on Twitter, can provide a foundation for broader understanding of the international scope of social media content. Therefore, the goal of this study was to examine Twitter comments about both candidates in the US, and in five key countries that are of prime importance to American diplomacy and international trade, in order to characterise the content as a foundation for future studies on social media's influence in shaping the news agenda. ...
... In a separate study, these authors also found that non-elite sources had a growing influence with journalists through Twitter (Lewis and Zamith, 2015). One study examined how Twitter propagated 'fake news' in US states (Howard et al., 2017). Studying Twitter content may reveal the extent of biased agenda-setting, with its potential to influence and polarise millions of people both in the US and around the world. ...
Article
Full-text available
A manual content analysis compares 6019 Twitter comments from six countries during the 2016 US presidential election. Twitter comments were positive about Trump and negative about Clinton in Russia, the US and also in India and China. In the UK and Brazil, Twitter comments were largely negative about both candidates. Twitter sources for Clinton comments were more frequently from journalists and news companies, and still more negative than positive in tone. Topics on Twitter varied from those in mainstream news media. This foundational study expands communications research on social media, as well as political communications and international distinctions.
... False news, on the other hand, is defined as information that is intentionally false; it often takes the form of malicious stories propagating conspiracy theories. Although this type of content shares characteristics with polarized and sensationalist content (described later in this article), in which information can be characterized as highly emotional and highly partisan (Allcott & Gentzkow, 2017; Howard, Kollanyi, Bradshaw, & Neudert, 2017; Potthast et al., 2017), it differs in important features (see Table 2). First and foremost, false news stories are not factual and have no basis in reality, and thus are unable to be verified (Allcott & Gentzkow, 2017; Cohen, 2017). ...
... As Howard et al. (2017) elucidate, "both fake news websites and political bots are crucial tools in digital propaganda attacks-they aim to influence conversations, demobilize opposition and generate false support" (p. 1). For example, a recent study revealed that social bots in Twitter served to amplify the dissemination of content coming from low-credibility sources, content such as conspiracy theories, false news, and junk science, suggesting that "curbing social bots may be an effective strategy for mitigating the spread of low credibility content" (Shao et al., 2018, p. 5-6). ...
... Identifying commentary, thus, requires differentiating between both real news and assertions (Howard et al., 2017; see Table 3). This can be done based on opinion journalists' adherence to the code of professional conduct of the Society of Professional Journalists and based on the Verification, Independence, and Accountability principles of journalism. ...
Article
As the scourge of “fake news” continues to plague our information environment, attention has turned toward devising automated solutions for detecting problematic online content. But, in order to build reliable algorithms for flagging “fake news,” we will need to go beyond broad definitions of the concept and identify distinguishing features that are specific enough for machine learning. With this objective in mind, we conducted an explication of “fake news” that, as a concept, has ballooned to include more than simply false information, with partisans weaponizing it to cast aspersions on the veracity of claims made by those who are politically opposed to them. We identify seven different types of online content under the label of “fake news” (false news, polarized content, satire, misreporting, commentary, persuasive information, and citizen journalism) and contrast them with “real news” by introducing a taxonomy of operational indicators in four domains—message, source, structure, and network—that together can help disambiguate the nature of online news content.
... Specifically, we explore the following questions: (1) To what extent do social bots participate in the distribution of news from professional media? (2) What role do social bots play in the diffusion of such stories? (3) Can social bots become opinion leaders along the diffusion path of professional news stories? ...
... It uses the functionality of social accounts to deliver news and information like a human and can also perform malicious activities such as sending spam, posting harassment, and delivering hate speech. Such social bots can post messages quickly, mass-produce replicated messages, and eventually distribute messages in the form of humanlike users [3]. Social bots are more active compared to regular human users [4], and their purpose is to learn and imitate humans to manipulate public opinion on social media platforms. ...
Article
Full-text available
Social-bots-mediated information manipulation is influencing the public opinion environment, and bots' roles and behavior patterns in news diffusion are worth exploring. Based on an analysis of bots' posting frequency, influence, and retweeting relationships, we take the diffusion of The New York Times' coverage of the Xinjiang issue on the overseas social platform Twitter as an example and employ the two-step flow model. We find that, unlike in first-step diffusion, where they post news indiscriminately, social bots in second-step diffusion are more inclined to post controversial information; in terms of diffusion patterns, although social bots are more engaged in first-step diffusion than in second-step diffusion and can trigger human users to retweet, they remain inferior to humans in terms of influence.
... Research on disinformation is dispersed across numerous scientific fields. In computer science, there is mature work on automatic detection [22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38] and real-world measurement [39][40][41][42][43][44][45][46][47][48][49][50][51][52][53][54][55][56][57][58], but work on responses and countermeasures is comparatively thin (see Section 2.3). ...
... Disinformation campaigns are typically multimodal, exploiting many different social and media channels at once [59]. These campaigns use websites as an important tool: to host content for distribution across platforms, facilitate user tracking, and generate ad revenue [42,45,[60][61][62][63]. Disinformation websites are frequently designed to conceal their provenance and deceive users into believing that they are legitimate news or opinion outlets. Our work examined whether warnings can counter this deception and help users distinguish, contextualize, or avoid disinformation websites. ...
Preprint
Full-text available
Online platforms are using warning messages to counter disinformation, but current approaches are not evidence-based and appear ineffective. We designed and empirically evaluated new disinformation warnings by drawing from the research that led to effective security warnings. In a laboratory study, we found that contextual warnings are easily ignored, but interstitial warnings are highly effective at inducing subjects to visit alternative websites. We then investigated how comprehension and risk perception moderate warning effects by comparing eight interstitial warning designs. This second study validated that interstitial warnings have a strong effect and found that while warning design impacts comprehension and risk perception, neither attribute resulted in a significant behavioral difference. Our work provides the first empirical evidence that disinformation warnings can have a strong effect on users' information-seeking behaviors, shows a path forward for effective warnings, and contributes scalable, repeatable methods for establishing evidence on the effects of disinformation warnings.
... The attack struck the entire nation with some states getting larger doses of malicious content than others. Out of the sixteen states labeled as swing states by the National Constitution Center at the time, twelve were exposed to above average levels of polarizing content (Howard et al., 2018). ...
... It was again humans and automated accounts working together in that effort. Fredheim and Gallacher (2018) and Kollanyi et al. (2018) document the increasing rate of anonymous activity during key social moments, political events such as elections. While the first work studies two months of 2018 and concludes that around 35% of Twitter activity around content mentioning NATO can be assigned to anonymous or low-quality accounts, the second study focuses on the week prior to election day of the 2016 US presidential election. ...
... In the US, Obama was criticized for flooding social media with automated messages intended to attract citizens' attention and support in the 2008 and 2012 elections; Mitt Romney, the Republican candidate in the 2012 US election, was accused of buying thousands of Twitter followers in an attempt to appear more popular; and Donald Trump used social bots and fake profiles on Twitter and other social networks to launch favourable opinions about his candidacy, increase his followers, generate an artificial perception of greater popularity, and attack his opponents by spreading fake news or subliminally distorting their image, among other things (Bessi and Ferrara, 2016; Howard et al., 2017; Molina et al., 2017). In France, during the 2017 presidential election, bots linked to the candidates Marine Le Pen and Emmanuel Macron were detected in the days leading up to the vote (Ferrara, 2017, 2020). ...
Article
Full-text available
This article sets out to confront the concept of public opinion with the realities and expectations of a digitalized society, in order to analyse whether the current algorithmic colonization demands a new structural transformation of public opinion or, rather, the retirement of the concept. Massive data and metadata have become a double-edged sword for a digitally hyperconnected democratic society. On the one hand, the remarkable potential of big data and its various techniques and technologies for exploiting data and metadata make it a product coveted by the system of institutions that make up both the state and civil society; on the other, the serious negative impacts that its instrumental and irresponsible use is producing, and may yet produce, make big data a controversial and heavily criticized tool for distancing us from any attempt to build digital citizenship. Although algorithmic democracy does not rest on public opinion alone, the aim is to show the incompatibility between artificial public opinion and democracy. Our guiding thread is the Habermasian concept of public opinion, since it is precisely from the strength of civil society, through the design within it of spaces for participation, that we can draw the potential needed to confront the current algorithmic colonization and to recover the autonomous, critical deliberation without which there is no public opinion and, therefore, no democracy.
... During the 2016 US election campaign, the speeches of candidates Donald Trump and Hillary Clinton were subjected to fact-checking, which showed that their content contained 70% and 30% fake news respectively (Gutiérrez-Rubí, 2017). Moreover, according to data from the University of Oxford project on computational propaganda and political discourse (Howard, Kollanyi, Bradshaw and Neudert, 2017), in the weeks before the election Twitter users contributed to information overload by sharing as great a volume of fake, polarized and conspiratorial news as of news produced by professional media: "Junk news, characterized by ideological extremism, misinformation and the intention to persuade readers to respect or hate a candidate or policy based on emotional appeals, was just as, if not more, prevalent than the amount of information produced by professional news organizations" (p. 5). The situation has worsened during the coronavirus crisis: according to the preliminary results of a Carnegie Mellon University study (2020), nearly half of the Twitter users who posted tweets about the coronavirus in February matched the behaviour patterns of bots. ...
Article
This study investigates the media competence of a group of 13 boys and 12 girls regarding the COVID-19 crisis and their perception of it. Information was collected through their participation in the forums of a virtual platform and the construction of their accounts of events. In the forums they examined three items of fake news about the pandemic, their possible solutions and consequences, and analysed the hate speech of a tweet. Finally, placing themselves in the future, once the crisis had passed, they produced a narrative of how they would recount the events they had lived through. The results show their concern about and interest in health-related news, polarization in the analysis of hate speech, and the presence of emotions in their narratives. We conclude that critical digital literacy is needed, especially in dramatic contexts of emotional vulnerability, such as crises, which favour the spread of hoaxes.
... This attention has come from whistleblowers (Wylie, 2019), law enforcement (U.S. Department of Justice, 2019a, 2019b; US v. Internet Research Agency, LLC, 2018), legislators (U.S. Senate Select Committee on Intelligence, 2019a, 2019b), and scholars (DiResta et al., 2019; Howard et al., 2017). More recently, similar campaigns have attempted to discredit COVID-19 vaccination efforts, and public concern about how misinformation inhibits democratic processes remains high. ...
Article
Full-text available
Records are persistent representations of activities created by partakers, observers, or their authorized proxies. People are generally willing to trust vital records such as birth, death, and marriage certificates. However, conspiracy theories and other misinformation may negatively impact perceptions of such documents, particularly when they are associated with a significant person or event. This paper explores the relationship between archival records and trustworthiness by reporting results of a survey that asked genealogists about their perceptions of 44th U.S. President Barack Obama's birth certificate, which was then at the center of the “birtherism” conspiracy. We found that although most participants perceived the birth certificate as trustworthy, others engaged in a biased review, considering it not trustworthy because of the news and politics surrounding it. These findings suggest that a conspiracy theory can act as a moderating variable that undermines the efficacy of normal or recommended practices and procedures for evaluating online information such as birth certificates. We provide recommendations and propose strategies for archivists to disseminate correct information to counteract the spread of misinformation about the authenticity of vital records, and we discuss future directions for research.
... It is understood that disinformation can promote false understanding through different means, not necessarily based on false identities, but by using true but misleading content to trigger false inferences (Fallis, 2015), and by promoting misperceptions about reality and social consensus (McKay & Tenove, 2021). Regarding its effects, disinformation often seeks to amplify social divisions through the discursive framing of "us" against "the other", including the propagation of conspiracy theories and the use of polarising, sensationalist content that is highly emotional and partisan (Howard et al., 2017). Reddi et al. (2021) noted that disinformation in US politics works in the service of existing power structures and identified anti-black racism, misogyny and xenophobic sentiment as topics susceptible to disinformation. ...
Article
Full-text available
Democracy is based on individuals' ability to give their opinions freely. To do this, they must have access to a multitude of reliable information sources (Dahl, 1998), and this greatly depends on the characteristics of their media environments. Today, one of the main issues individuals face is the significant amount of disinformation circulating through social networks. This study focuses on parliamentary disinformation. It examines how parliamentarians contribute to generating information disorder (Wardle & Derakhshan, 2017) in the digital public space. Through an exploratory content analysis − a descriptive content analysis of 2,307 messages posted on Twitter accounts of parliamentary spokespeople and representatives of the main list of each political party in the Spanish Lower House of Parliament − we explore disinformation rhetoric. The results allow us to conclude that, while the volume of messages shared by parliamentarians on issues susceptible to disinformation is relatively low (14% of tweets), both the themes of the tweets (COVID-19, sex-based violence, migrants or LGBTI), as well as their tone and argumentative and discursive lines, contribute to generating distrust through institutional criticism or their peers. The study deepens current knowledge of the disinformation generated by political elites, key agents of the construction of polarising narratives.
... Previously, a large-scale survey (Barthel et al., 2016) reported that 32% of US adults often encountered completely made-up stories on social media. Our study also supports the findings of other studies that fake news on social media is published mainly to support or oppose a political agenda (Lazer et al., 2018; Howard et al., 2018), a religious agenda (Boyd and Ellison, 2007; Howard et al., 2017; Farkas et al., 2018), and to trigger people to take certain actions (Kang and Goldman, 2016). Another study from the USA indicates that 71% of adults either 'often' or 'sometimes' see completely made-up political news online (Barthel et al., 2016), and many US adults have expressed concerns about the impact of fake news stories on the 2016 Presidential election (Allcott and Gentzkow, 2017; Silverman, 2016). Regarding news literacy skills, the findings show that librarians have a moderate level of understanding of the concept of news literacy, and they demonstrated a moderate level of perceived news literacy skills (Table 9). ...
Article
Introduction. This study was conducted with the objective of determining the ways librarians deal with fake news, as well as assessing the status of their news literacy skills in combating the fake news phenomenon. Method. A cross-sectional survey was conducted in public and private sector university libraries of Punjab, Pakistan. The study's population comprised university librarians working at the rank of assistant librarian or above. Analysis. One hundred and eighty questionnaires were distributed in both print and online formats, of which 128 were returned (response rate 71.11%). Descriptive and inferential statistics were applied to the data using the Statistical Package for the Social Sciences (SPSS version 22). Results. Librarians 'sometimes' determine the authenticity of a news story, e.g., 'check it from other sources in case of doubts.' The responses for all eleven statements relating to news literacy skills ranged between 3.05 and 3.36 on a five-point Likert-type scale, indicating that respondents 'somewhat' agreed with their perceived news literacy skills. Conclusions. University librarians are not fully acquainted with the aspect of news trustworthiness on social media, which affects their news acceptance and sharing behaviour. They also have a moderate level of conceptual understanding of news literacy.
... In particular, US intelligence documents, as well as internal investigations carried out by Facebook (STAMOS, 2017), gave rise to the well-founded suspicion that voters' political will had been manipulated by specific, targeted disinformation activities conducted on social media, with decisive effects on the electoral result: it has been calculated that the spread of fake news (SCIORTINO, 2020) on social media was concentrated above all in the "swing states", the contested states that are decisive for the electoral competition (BESSI, FERRARA, 2017; HOWARD, KOLLANYI, BRADSHAW, NEUDERT, 2017). ...
Article
Full-text available
This paper intends to discuss the issue of regulation of social networks as a media space employed by political actors (individuals or organizations) for the formation of public opinion and the construction of consensus in the context of the electoral campaigns: in more synthetic terms, social networks as an instrument of electoral propaganda. The main subject of my essay will concern, therefore, the problems de iure condito and de iure condendo about the subjection of communication on social media to rules aimed at ensuring a fair comparison between parties, lists, candidates in the electoral competition and, with it, the value of freedom and authenticity of the vote that emanates from article 48 of the Italian Constitution. This substantiates a constitutional directive in favor of a regulatory intervention of the legislator, called upon to achieve an adequate balance between the many rights and interests involved.
... The content of fake news has been compared in the literature with polarized and sensational content, which is usually characterized as highly emotional and highly partisan (Allcott and Gentzkow, 2017; Howard et al., 2018). The results revealed interesting insights in the context of fake news detection. ...
Article
Purpose – The purpose of this paper is to examine the factors that significantly affect the prediction of fake news from the virality theory perspective. The paper looks at a mix of emotion-driven content, sentimental resonance, topic modeling and linguistic features of news articles to predict the probability of fake news.
Design/methodology/approach – A data set of over 12,000 articles was chosen to develop a model for fake news detection. Machine learning algorithms and natural language processing techniques were used to handle big data with efficiency. Lexicon-based emotion analysis provided eight kinds of emotions used in the article text. The cluster of topics was extracted using topic modeling (five topics), while sentiment analysis provided the resonance between the title and the text. Linguistic features were added to the coding outcomes to develop a logistic regression predictive model for testing the significant variables. Other machine learning algorithms were also executed and compared.
Findings – The results revealed that positive emotions in a text lower the probability of news being fake. It was also found that sensational content like illegal activities and crime-related content was associated with fake news. News titles and texts exhibiting similar sentiments were found to have lower chances of being fake. News titles with more words and content with fewer words were found to impact fake news detection significantly.
Practical implications – Several systems and social media platforms today are trying to implement fake news detection methods to filter the content. This research provides exciting parameters from the virality theory perspective that could help develop automated fake news detectors.
Originality/value – While several studies have explored fake news detection, this study uses a new perspective on virality theory. It also introduces new parameters, like sentimental resonance, that could help predict fake news. This study deals with an extensive data set and uses advanced natural language processing to automate the coding techniques in developing the prediction model.
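The kind of pipeline this abstract describes can be sketched, in miniature, as a logistic regression over hand-built text features. This is a hedged illustration, not the study's actual model: the four features, the toy training rows, and all numeric values are invented, and a real system would extract emotion, topic, and linguistic features from thousands of articles with NLP tooling.

```python
import math

# Toy feature rows, loosely mirroring the feature families the study names:
# [positive_emotion_score, title_body_sentiment_gap,
#  title_word_count / 10, body_length_scaled]
# All rows and labels below are invented for illustration.
X = [
    [0.8, 0.1, 0.8, 1.2],   # positive, consistent title/body
    [0.1, 0.9, 1.5, 0.3],   # negative, mismatched, long title, short body
    [0.7, 0.2, 0.9, 1.0],
    [0.2, 0.8, 1.4, 0.4],
]
y = [0, 1, 0, 1]  # 1 = fake

w = [0.0] * 4
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(features):
    """Predicted probability that an article is fake."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, features)) + b)

# Batch gradient descent on the logistic loss.
for _ in range(2000):
    grad_w = [0.0] * 4
    grad_b = 0.0
    for xi, yi in zip(X, y):
        err = predict(xi) - yi
        for j in range(4):
            grad_w[j] += err * xi[j]
        grad_b += err
    for j in range(4):
        w[j] -= lr * grad_w[j] / len(X)
    b -= lr * grad_b / len(X)

prob_fake = predict([0.15, 0.85, 1.6, 0.35])
print(f"P(fake) = {prob_fake:.2f}")
```

The appeal of logistic regression in a study like this is interpretability: each learned weight indicates whether a feature (e.g. positive emotion) raises or lowers the predicted probability of fakeness, which is exactly the kind of finding the abstract reports.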
... One part of this variation is a function of language use, but another is derivative of political events. The 2016 US election is a good example, as it was rife with both misleading content widely displayed on social media platforms and widespread politicization of the term 'fake news' itself (Allcott and Gentzkow 2017; Howard et al. 2017; Howard et al. 2018; Grinberg et al. 2019). Other countries had less pronounced salience of the conceptual frame of 'fake news'. ...
Article
Does perceived exposure on social media to mis/disinformation affect user perceptions of social media newsfeed algorithmic bias? Using survey data from eight liberal democratic countries and propensity score matching statistical techniques, this paper details the average treatment effect (ATE) of self-reported perceived exposure to mis/disinformation on perceptions that social media newsfeed algorithms are biased. Overall, the results show that self-reported perceived exposure to misleading content on social media increases perceptions of algorithmic bias. The results also detail interesting platform/country variation in the estimated average treatment effect. The ATE of perceived fake news exposure on perceptions of algorithmic bias are similar on Twitter and Facebook but are amplified in countries with high society-wide issue salience surrounding ‘fake news’ and, especially, ‘algorithmic bias’.
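The matching step behind an estimate like this paper's ATE can be sketched as nearest-neighbour matching on a propensity score. This is an illustrative toy, not the paper's procedure or data: the scores, the outcome values, and the `att` helper are all invented, and a real propensity-score analysis would first estimate the scores from covariates and check covariate balance before matching.

```python
# Toy (propensity_score, outcome) pairs; the outcome stands in for a
# perceived-algorithmic-bias survey score. All values are invented.
treated = [(0.62, 4.1), (0.71, 3.8), (0.54, 4.4)]   # report exposure
control = [(0.60, 3.5), (0.70, 3.6), (0.50, 3.9), (0.30, 3.0)]

def att(treated, control):
    """Average treatment effect on the treated, via 1-nearest-neighbour
    matching on the propensity score."""
    diffs = []
    for score_t, outcome_t in treated:
        # Match each treated unit to the control unit whose propensity
        # score is closest, then record the outcome difference.
        _, outcome_c = min(control, key=lambda c: abs(c[0] - score_t))
        diffs.append(outcome_t - outcome_c)
    return sum(diffs) / len(diffs)

print(f"Estimated effect on perceived bias: {att(treated, control):.2f}")
```

Matching on the score rather than on raw covariates is the point of the technique: units with similar propensity scores are comparable in their estimated likelihood of treatment, so the remaining outcome gap can be read as the treatment effect under the usual unconfoundedness assumptions.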
... In their study of the different types of information shared on social media during the 2016 US election, Howard et al. (2017) established a typology of sources for information being shared on social media platforms, included in Figure 2. This classification is interesting in light of the social media content analyzed in the present study, as it compartmentalizes content according to defined classifications, and it is of interest to assess its applicability to our sample. ...
Article
Full-text available
Ephemeral media has become a staple of today’s social media ecology. This study advances the first exploratory analysis of Instagram Stories as a format for political communication. Through an initial content analysis of 832 stories in three verified Vox accounts and a secondary content and discourse analysis of 114 stories, we delve into the strategies used by right-wing party Vox in Spain to portray immigration as an issue for ideological positioning. The findings shed light onto the ways in which the representation of migrants is employed as an instrument for anti-migratory policy support, through the construction of a very specific profile of a migrant in terms of age and gender and the exclusion of significant migrant populations from the argument. Moreover, the party employs the content creation functionalities of Instagram Stories to construct arguments and storylines where diverse information sources converge, effectively bypassing traditional media and reaching their supporter base directly.
... Previous work has highlighted the prominent role that bots played during the U.S. 2016 election (Badawy et al., 2018; Howard et al., 2018; Kriel & Pavliuc, 2019; Ruck et al., 2019). Our findings provide evidence that bots remained very influential even after the elections. ...
Article
Twitter gained new levels of political prominence with Donald J. Trump’s use of the platform. Although previous work has been done studying the content of Trump’s tweets, there remains a dearth of research exploring who opinion leaders were in the early days of his presidency and what they were tweeting about. Therefore, this study retroactively investigates opinion leaders on Twitter during Trump’s 1st month in office and explores what those influencers tweeted about. We uniquely used a historical data set of 3 million tweets that contained the word “trump” and used Latent Dirichlet Allocation, a probabilistic algorithmic model, to extract topics from both general Twitter users and opinion leaders. Opinion leaders were identified by measuring eigenvector centrality and removing users with fewer than 10,000 followers. The top 1% of users with the highest eigenvector centrality scores (N = 303) were sampled, and their attributes were manually coded. We found that most Twitter-based opinion leaders are either media outlets/journalists with a left-center bias or social bots. Immigration was found to be a key topic during our study period. Our empirical evidence underscores the influence of bots on social media even after the 2016 U.S. presidential election, providing further context to ongoing revelations and disclosures about influence operations during that election. Furthermore, our results provide evidence of the continued relevance of established, “traditional” media sources on Twitter as opinion leaders.
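The opinion-leader identification step, eigenvector centrality plus a follower-count cutoff, can be sketched on a toy graph. Everything here is invented for illustration (the five-node amplification graph, the follower counts, and the 10,000-follower threshold echoing the study's filter); centrality is computed by plain power iteration rather than whatever tooling the authors used.

```python
# Toy retweet/amplification graph: an edge (u, v) means u amplifies v,
# so attention flows from u to v. All data here is invented.
edges = [("a", "b"), ("c", "b"), ("d", "b"), ("b", "e"),
         ("c", "e"), ("d", "a"), ("e", "c")]
nodes = sorted({n for edge in edges for n in edge})
followers = {"a": 500, "b": 80_000, "c": 2_000, "d": 150, "e": 45_000}

# For each node, the accounts that amplify it.
incoming = {n: [u for u, v in edges if v == n] for n in nodes}

# Power iteration: a node is central if it is amplified by central nodes.
# The small epsilon keeps every score nonzero so the iteration is stable.
x = {n: 1.0 for n in nodes}
for _ in range(200):
    new = {n: sum(x[u] for u in incoming[n]) + 1e-6 for n in nodes}
    norm = sum(v * v for v in new.values()) ** 0.5
    x = {n: v / norm for n, v in new.items()}

# Opinion leaders: ranked by centrality, keeping only accounts with at
# least 10,000 followers (mirroring the study's follower filter).
leaders = [n for n in sorted(x, key=x.get, reverse=True)
           if followers[n] >= 10_000]
print(leaders)  # most central well-followed accounts first
```

The two-stage filter matters: eigenvector centrality alone can rank a little-followed account highly if it sits inside a dense amplification loop, so the follower threshold removes accounts that are structurally central but lack reach.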
... In addition, the correlation matrix of covariates is presented in Appendix B. Besides the statistical tests performed in methodology selection, state-specific effects are also validated by the diversity among American states in terms of culture, economic development, legislation, and voters' preferences. Moreover, the decision is supported by the presence of political polarization (Baker et al., 2020d) and the swing states' effect on final results (Howard et al., 2018; Antoniades and Calomiris, 2020). Table 2 presents estimated coefficients for variables included in the baseline models. ...
Article
The paper examines the drivers and effects of the United States 2020 presidential election under the uncertainty caused by COVID-19. By considering news-based, financial-market, and coronavirus-specific inputs in a panel data framework, the results reveal that COVID-19 affected the candidates' chances. Biden's electorate reacted positively to news regarding unemployment or healthcare, stress levels on financial markets, or the Country Sentiment Index. Trump's chances increased with coronavirus indicators or news about populism. However, President-elect Biden must provide solutions for national economic issues such as unemployment, the budget deficit, and healthcare inequalities. Simultaneously, his extensive prerogatives over trade and investment partnerships influence the mitigation of COVID-19's global effects.
... These SM platforms are frequently used to look for information for purposes including social networking, marketing, reading user reviews, daily routines, religion, food products, disasters (floods), education, and research (Kubiak, 2017; Li et al., 2018; Martínez-Ruiz et al., 2018; Sutherland et al., 2018; Thakur & Chander, 2017; Wickramanayake & Jika, 2018). A few previous studies have also shown that social media is involved in political discussion and the exchange of opinions on national and international political issues, which has influenced the behavior, mindset, and attitudes of young adults (Hassan, 2018; Howard et al., 2018; Kahne & Bowyer, 2018; Stanley, 2017; Tucker et al., 2018). Likewise, social media has been used to create hype around socio-political issues in recent years, particularly the Panama Leaks in Pakistan. ...
Article
Full-text available
Background: Social media (SM) have become popular among all genres of people due to their instant and dynamic communication abilities. The substantial use of social media as a source of political information prompts researchers to investigate SM usage patterns around socio-political issues. Objective: The aim of this study was to investigate the use of social media as a source of political information regarding the Panama Leaks in Pakistan. Method: A quantitative research approach based on the survey method was used to collect primary data from a convenience sample of 500 educated adults in Lahore city in the Punjab province of Pakistan. Descriptive and inferential statistics were used for data analysis in SPSS-25. Findings: The findings revealed that the majority of educated adults used social media platforms (i.e., Facebook, WhatsApp, YouTube, Twitter, and Wikipedia) on a daily basis. The educated adults commonly acquired information to learn the historical background of the Panama Leaks (PL); to keep up with general discussions and opinions; to understand the political and economic conditions resulting from the PL outbreak; to follow court proceedings and judgments on PL; and to obtain information for entertainment, education, and research.
... On the other hand, such large-scale diffusion into daily routines requires an increased understanding of the risks and consequences for individuals concerning data that are wilfully shared online across a variety of platforms and situations. For instance, recent studies have focused on the role of social media in amplifying fake news propagation (Allcott & Gentzkow, 2017) and hate speech (Mondal et al., 2017), and on its impact on political debates such as Brexit (Del Vicario et al., 2017) and the 2016 US Presidential election (Howard et al., 2018). The revelation that Facebook provided unfettered access to the personal information of over 87 million users to Cambridge Analytica (Isaak & Hanna, 2018) has fueled the debate not only over the societal impact of those technologies but also over users' privacy and their data rights. ...
... One could conjecture that the motivation of foreign information operations is to sow discord and reduce the unity of a society's populace. We remain politically neutral, hoping that divisive language is not used intentionally to polarize others and that, where already divisive topics are legitimately promoted, polarization can be minimized rather than an audience being further divided unintentionally while advancing politically charged causes such as healthcare or social security reform (Howard, 2018). It may not be apparent how this happens, but the common devices identified in the FLC portion of this competition, such as flag waving (i.e., conflating the opposing viewpoint with being unpatriotic), are one example among many. ...
... The authors state that chatbots in fact showed measurable influence during the election, either by manufacturing online popularity or by democratizing propaganda. Thus, the governments of several countries have started to introduce regulations to fight these kinds of online manipulation (Howard et al., 2018). However, chatbots often remain a widely accepted tool for propaganda (Woolley and Guilbeault, 2017). ...
Conference Paper
Full-text available
Recent years have shown an increasing popularity of chatbots, with the latest efforts aiming to make them more empathic and human-like, finding application, for example, in customer service or in treating mental illness. Empathic chatbots can understand the user's emotional state and respond to it on an appropriate emotional level. This survey provides an overview of existing approaches to emotion detection and empathic response generation. These approaches each raise at least one of the following profound challenges: the lack of quality training data, balancing emotion-level and content-level information, considering the full end-to-end experience, and modelling emotions throughout conversations. Furthermore, only a few approaches actually cover response generation. We argue that these approaches are not yet empathic, in that they either mirror the user's emotional state or leave it up to the user to decide the emotion category of the response. Empathic response generation should select appropriate emotional responses more dynamically and express them accordingly, for example using emojis.
... Russian media, and in particular their online segment, have recently been (re-)established as a focus of attention for communication scholars and computer scientists (Howard, Kollanyi, Bradshaw, & Neudert, 2017; Sanovich, 2017). This was a result of several scandals around the spread of various cyber-attack techniques, such as email hacking, attacks by social media bots, and the spread of allegedly pre-paid electoral advertisements. ...
Article
Full-text available
Russian media have recently (re-)gained the attention of the scholarly community, mostly due to the rise of cyber-attack techniques and computational propaganda efforts. A revived conceptualization of the Russian media as a uniform system driven by a well-coordinated propagandistic state effort, though not without supporting evidence, does not allow seeing the public discussion inside Russia as a more diverse and multifaceted process. This is especially true for Russian-language mediated discussions online, which have in recent years proven efficient enough at raising both social issues and waves of political protest, including on-street spillovers. While several attempts have been made in recent years to demonstrate the complexity of the Russian media system at large, the content and structures of Russian-language online discussions remain seriously understudied. The thematic issue draws attention to various aspects of online public discussions in Runet; it creates a perspective for studying Russian mediated communication at the level of Internet users. The articles are selected so that they not only contribute to systemic knowledge of the Russian media but also add to the respective subdomains of media research, including studies on social problem construction, news values, political polarization, and affect in communication.
... Another challenge we encountered when preparing our network graphs was the presence of bot accounts. Bots can be most broadly characterized as entities with the ability to produce and publish content and interact directly with other users without human intervention (Howard, Kollanyi, Bradshaw, & Neudert, 2018). These entities play a crucial role in the social media ecosystem by responding to real-time questions about a variety of topics or by providing automated updates on news and events. ...
Article
With increasing attention devoted to automated bot accounts, fake news, and echo chambers, how much of the theory of a Habermassian public sphere is still applicable to social media? Drawing on Twitter data collected on April 16, 2017, during the night of Turkey’s 2017 Constitutional Referendum, we test whether the networks of political communication resemble the communicative structures characteristic of Habermas’s “public sphere.” The referendum left the country sharply divided; 51.4 percent of the electorate voted in favor of amending the constitution to grant sweeping new executive powers to the presidency, with an overall turnout of 85.46 percent. In this article, we examine whether Twitter users were meaningfully engaged on the night of the referendum, and if their communicative patterns resembled a networked public sphere, that is, a space where information and ideas are exchanged, and public opinion is formed in a deliberative, rational manner. We find ideological uniformity, polarization, and partisan antipathy to be especially evident—mirroring existing social tensions in Turkey. Rather than resembling a public sphere, we found Twitter users to be more likely to communicate on the basis of homophily—rather than to engage in democratic debate or establish a common ground between the two campaigns.
... We began with a sample of URLs shared in the United States during that country's 2016 Presidential election, an important moment in which the "fake news" term first emerged. The team elaborated four broad categories: (1) professional news outlets, (2) established political actors, (3) polarizing and conspiratorial content, and (4) other sources of political news and information (Howard, Kollanyi, Bradshaw, & Neudert, 2017). ...
Article
Full-text available
Voters increasingly rely on social media for news and information about politics. But increasingly, social media has emerged as a fertile soil for deliberately produced misinformation campaigns, conspiracy, and extremist alternative media. How does the sourcing of political news and information define contemporary political communication in different countries in Europe? To understand what users are sharing in their political communication, we analyzed large volumes of political conversation over a major social media platform—in real-time and native languages during campaign periods—for three major European elections. Rather than chasing a definition of what has come to be known as “fake news,” we produce a grounded typology of what users actually shared and apply rigorous coding and content analysis to define the types of sources, compare them in context with known forms of political news and information, and contrast their circulation patterns in France, the United Kingdom, and Germany. Based on this analysis, we offer a definition of “junk news” that refers to deliberately produced misleading, deceptive, and incorrect propaganda purporting to be real news. In the first multilingual, cross-national comparison of junk news sourcing and consumption over social media, we analyze over 4 million tweets from three elections and find that (1) users across Europe shared substantial amounts of junk news in varying qualities and quantities, (2) amplifier accounts drive low to medium levels of traffic and news sharing, and (3) Europeans still share large amounts of professionally produced information from media outlets, but other traditional sources of political information including political parties and government agencies are in decline.
... In this article, rumour is defined as a form of unverified information that arises from, and is publicly circulated under, conditions of uncertainty (DiFonzo & Bordia, 2007). Scholars have discussed rumour's potential threats in jeopardising public health (Ngade, Singer, Marcus, & Lara, 2016), intensifying racial conflicts (Williams & Burnap, 2015) and influencing presidential elections (Howard, Kollanyi, Bradshaw, & Neudert, 2018). During times of crisis, the consequences of online rumours can be particularly severe, as they can lead to social unrest and hamper efforts to control and contain a crisis (Gupta, Lamba, Kumaraguru, & Joshi, 2013;Starbird, Maddock, Orand, Achterman, & Mason, 2014). ...
Article
Full-text available
This article investigates how citizens contribute to rumour verification on social media in China, drawing on a case study of Weibo communication about the 2015 Tianjin blasts. Three aspects of citizen engagement in verifying rumours via Weibo are examined: (1) how they directly debunked rumours related to the blasts, (2) how they verified official rumour messages and (3) how they used Weibo’s community verification function to collectively identify and fact-check rumours. The article argues that in carrying out such activities, ordinary Weibo users were engaging in practices of citizen journalism. Findings from our analysis suggest that even though citizen journalists’ direct engagement in publishing debunking messages was not as visible as that of the police and mainstream media, self-organised grassroots rumour-debunking practices demonstrate great potential. In terms of both the reposts and the positive comments they received, rumour-debunking posts from non-official actors appear to have been given more credibility than those from their official counterparts. In contrast, the official narratives about the Tianjin blasts were challenged, and the credibility of the official rumour-debunking messages was commonly questioned. Nevertheless, this article also shows that Weibo’s community verification system had limited effects in facilitating how Weibo users could collaboratively fact-check potentially false information.
... Other factors were also associated with small increases in exposures to fake news sources: Men and whites had slightly higher rates, as did voters in swing states and voters who sent more tweets (excluding political URLs analyzed here). These findings are in line with previous work that showed concentration of polarizing content in swing states (17) and among older white men (18). However, effects for the above groups were small (less than one percentage point increase in proportion of exposures) and somewhat inconsistent across political groups. ...
Article
Finding facts about fake news There was a proliferation of fake news during the 2016 election cycle. Grinberg et al. analyzed Twitter data by matching Twitter accounts to specific voters to determine who was exposed to fake news, who spread fake news, and how fake news interacted with factual news (see the Perspective by Ruths). Fake news accounted for nearly 6% of all news consumption, but it was heavily concentrated: only 1% of users were exposed to 80% of fake news, and 0.1% of users were responsible for sharing 80% of fake news. Interestingly, fake news was most concentrated among conservative voters. Science, this issue p. 374; see also p. 348
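The concentration statistic this summary reports (the top 1% of users accounting for 80% of exposures) is a "top-share" measure that is straightforward to compute from per-user exposure counts. The sketch below uses synthetic counts, not the study's data, purely to show the calculation.

```python
# Sketch: what share of all exposures falls on the top fraction of users?
# The counts below are synthetic; the study reports ~80% of fake-news
# exposures concentrated among 1% of users.

def top_share(exposures, frac):
    """Share of total exposures held by the top `frac` of users."""
    ordered = sorted(exposures, reverse=True)
    k = max(1, int(len(ordered) * frac))
    total = sum(ordered) or 1
    return sum(ordered[:k]) / total

# Synthetic heavy-tailed exposure counts: a few users see almost everything.
counts = [1000, 900, 5, 4, 3, 2, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(round(top_share(counts, 0.10), 2))  # close to 1.0 for this tail
```

The same function with `frac=0.01` on real per-account counts would reproduce the kind of figure quoted in the abstract.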
Article
Persuasion is a process that aims to utilize (true or false) information to change people’s attitudes in relation to something, usually as a precursor to behavioural change. Its use is prevalent in democratic societies, which do not, in principle, permit censorship of information or the use of force to enact power. The transition of information to the internet, particularly with the rise of social media, together with the capacity to capture, store and process big data, and advances in machine learning, have transformed the way modern persuasion is conducted. This has led to new opportunities for persuaders, but also to well-documented instances of abuse: fake news, Cambridge Analytica, foreign interference in elections, etc. We investigate large-scale technology-based persuasion, with the help of three case studies derived from secondary sources, in order to identify and describe the underlying technology architecture and propose issues for future research, including a number of ethical concerns.
Chapter
The new internet and digital technologies have truly accelerated and improved media functions and operations in modern society. Like the developed nations, sub-Saharan African countries have benefitted immensely from adopting new media tools to generate, access, disseminate, store, and retrieve information. Since the basic function of the media is to inform the public, digital tools and various internet platforms have exemplified this role by increasing the volume and spread of news information in today's network society. In fact, the current information era is characterized by an inundating volume of data and a flood of information. However, with such an incredible overload of information, new problems have emerged: the anonymous nature of most internet platforms has permitted highly adulterated and unethical news content to contaminate the digital space. Sadly, much credible news information competes or gets mixed with the whirlpool of disinformation and news pollutants.
Article
Full-text available
With the acceleration of human society’s digitization and the application of innovative technologies to emerging media, popular social media platforms are inundated by fresh news and multimedia content from multiple more or less reliable sources. This abundance of circulating and accessible information and content has intensified the difficulty of separating good, real, and true information from bad, false, and fake information. As has been shown, much unwanted content is created automatically using bots (automated accounts supported by artificial intelligence), and it is difficult for authorities and the respective media platforms to combat the proliferation of such malicious, pervasive, and artificially intelligent entities. In this article, we propose using content originating from controlled automated accounts (bots) to compete with and slow the propagation of a harmful rumor on a given social media platform, by modeling the underlying relationship between the circulating contents when they relate to the same topic and hold comparable interest for the respective online communities, using differential equations and dynamical systems. We studied the proposed model qualitatively and quantitatively and found that peaceful coexistence can be obtained under certain conditions, and that improving the controlled social bot's content attractiveness and visibility has a significant impact on the long-term behavior of the system, depending on the control parameters.
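The abstract does not give its equations, but a two-population competition system of the Lotka-Volterra type is a standard way to model rumor content competing with counter-content for the same audience, and "coexistence under certain conditions" is exactly the behaviour such systems exhibit. The sketch below is an illustrative stand-in under that assumption; all parameter values are invented.

```python
# Sketch of a two-population competition model in the spirit of the
# article: r(t) is the reach of a harmful rumor, c(t) the reach of
# controlled bot counter-content. Lotka-Volterra-style dynamics are an
# assumption here; parameters are illustrative, not the paper's.

def simulate(r0, c0, steps=20_000, dt=0.01,
             ar=1.0, ac=1.2,      # growth ("attractiveness") rates
             brr=1.0, brc=0.6,    # rumor: self-limitation, pressure from c
             bcc=1.0, bcr=0.4):   # counter-content: self-limitation, pressure from r
    r, c = r0, c0
    for _ in range(steps):       # forward-Euler integration
        dr = r * (ar - brr * r - brc * c)
        dc = c * (ac - bcc * c - bcr * r)
        r += dt * dr
        c += dt * dc
    return r, c

r, c = simulate(0.1, 0.1)
# With weak cross-pressure (brc * bcr < brr * bcc), both populations
# settle at positive equilibria: peaceful coexistence.
print(r > 0.01 and c > 0.01)
```

Raising `ac` (the counter-content's attractiveness) shifts the equilibrium toward `c` and suppresses `r`, mirroring the article's finding that attractiveness and visibility are effective control parameters.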
Article
This article examines 3,517 Facebook ads created by Russia’s Internet Research Agency (IRA) between June 2015 and August 2017 in its Active Measures disinformation campaign targeting the 2016 U.S. presidential election. We aimed to unearth the relationship between ad engagement (ad clicks) and 40 features related to the ads’ metadata, psychological meaning, and sentiment. The purpose of our analysis was to (1) understand the relationship between engagement and features, (2) find the most relevant feature subsets to predict engagement via feature selection, and (3) find the semantic topics that best characterize the data set via topic modeling. We found that investment features (e.g., ad spend, ad lifetime), caption length, and sentiment were the top features predicting users’ engagement with the ads. In addition, positive sentiment ads were more engaging than negative ads, and psycholinguistic features (e.g., use of religion-relevant words) were identified as highly important in the makeup of an engaging disinformation ad. Linear support vector machines (SVMs) and logistic regression classifiers achieved the highest mean F scores (93.6%), revealing that the optimal feature subset contains 12 and six features, respectively. Finally, we corroborate the findings of previous research that the IRA specifically targeted Americans on divisive ad topics (e.g., LGBT rights) and advance a definition of disinformation advertising.
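A minimal stand-in for the feature-relevance step described above is a filter-style ranking: score each candidate feature by the absolute strength of its association with engagement and keep the strongest. The data, feature names, and use of Pearson correlation below are illustrative assumptions, not the article's actual method (which used SVM and logistic regression classifiers over 40 features).

```python
# Sketch: rank ad features by absolute Pearson correlation with
# engagement (clicks), a simple filter-style stand-in for feature
# selection. Feature names and all numbers are illustrative.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

clicks = [10, 40, 80, 15, 60, 90, 5, 70]
features = {
    "ad_spend":       [1, 4, 9, 2, 6, 10, 1, 7],    # tracks clicks closely
    "caption_length": [5, 20, 35, 8, 28, 40, 3, 30],  # also tracks clicks
    "noise":          [9, 2, 3, 8, 1, 4, 7, 5],      # weak relationship
}

ranked = sorted(features,
                key=lambda f: abs(pearson(features[f], clicks)),
                reverse=True)
print(ranked[-1])  # least predictive feature
```

Wrapper methods (recursive feature elimination over an SVM, as in the article) search feature subsets jointly rather than scoring each feature in isolation, which is why they can identify compact optimal subsets (12 and 6 features respectively).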
Article
Full-text available
In the field of social media, the systemic impact that bot users have on the dissemination of public opinion has been a key research concern. To achieve more effective opinion management, it is important to understand how and why behavior differs between bot users and human users. This study compares the differences in behavioral characteristics and diffusion mechanisms between bot users and human users during public opinion dissemination, using public health emergencies as the research target, and further provides specific explanations for the differences. First, the study classified users with bot characteristics and human users by establishing formulas over user indicator characteristics. Second, the study used deep learning methods such as Top2Vec and BERT to extract topics and sentiments, and used social network analysis methods to construct network graphs and compare network attributes. Finally, the study compared differences in information dissemination between posts published by bot users and human users through multi-factor ANOVA. Significant differences were found in behavioral characteristics and diffusion mechanisms between bot users and human users. The findings can help guide the public to pay attention to topic shifting and promote the diffusion of positive emotions in social networks, which in turn can better support the emergency management of incidents and the maintenance of online order.
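The first step above, classifying accounts via formulas over user indicator characteristics, can be sketched as a weighted score over a few profile metrics. The features, weights, and threshold below are illustrative assumptions for the sake of the example, not the study's actual indicator formulas.

```python
# Sketch: separate likely bot accounts from human accounts with a simple
# indicator score, in the spirit of the "user indicator" formulas the
# study describes. Features, weights, and the 0.5 threshold are
# illustrative assumptions, not the paper's criteria.

def bot_score(posts_per_day, followers, following, default_profile):
    """Higher score = more bot-like. All weights are illustrative."""
    rate = min(posts_per_day / 50.0, 1.0)        # extreme posting volume
    ratio = following / (followers + 1)          # follows many, followed by few
    ratio = min(ratio / 20.0, 1.0)
    profile = 1.0 if default_profile else 0.0    # unmodified default profile
    return 0.5 * rate + 0.3 * ratio + 0.2 * profile

users = {
    "spam_account": dict(posts_per_day=400, followers=10,
                         following=4000, default_profile=True),
    "journalist":   dict(posts_per_day=12, followers=90_000,
                         following=800, default_profile=False),
}
labels = {name: ("bot" if bot_score(**u) > 0.5 else "human")
          for name, u in users.items()}
print(labels)
```

Real bot-detection systems combine many more signals (timing entropy, content similarity, network position) and learn the weights from labeled data rather than fixing them by hand.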
Thesis
Full-text available
During the 2016 election, memes were used heavily by individuals and organized groups who wanted to have an impact on the outcome. In the years that followed, groups provided organized opportunities for individuals to learn how to utilize memes more effectively, turning this once benign digital artifact into modern propaganda. This study examined memes focused on the lead-up to the 2020 U.S. election, specifically memes that contained some element of misleading information. During the study, which collected memes from July 1-31, 2020, 60 left-leaning and 60 right-leaning memes were collected from six Facebook groups, for a total of 120 memes. Using mixed-method content and thematic analyses, the memes were examined for propaganda, persuasion, misleading information, and multimodality, both individually and as a left-versus-right comparison. When examined for propaganda, almost 75% of the memes collected met all of the criteria for propaganda, and those that did not tended to be more humorous. The memes that contained propaganda were likely to be relevant in the short term and to feature moral appeals, pre-giving messages, or negative esteem appeals. These memes are likely to come from unofficial sources as a mode of expression and public discussion, and to feature a number of misleading-information techniques, the majority being fabricated or manipulated content. When the memes were examined for the type of misleading information used, humor was the most frequent technique; however, the cumulative total of the other, non-humorous categories showed that memes are a vehicle for subtle and nuanced techniques. Many memes had at least one element that was truthful, lending legitimacy to an overall misleading message, and many featured multiple techniques, making fact-checking a difficult process.
When examining the multimodal aspects of the memes, this research shows that any unwritten “rules” that memes had when they first came on the scene no longer exist. Misleading political memes were heavily manipulated, with almost 70% of them appearing to have some alteration, and more than 64% using shading and highlight modulation techniques. This study found that the visual elements of a meme are meant to be the main focus, and that the heavy, error-ridden textual elements were included for maximum information without concern for design principles. This study also compared the 60 memes collected from left-leaning Facebook groups with the 60 collected from right-leaning groups. The messages primarily focused on the two candidates, Democrat Joe Biden and Republican Donald Trump, followed the mainstream news and popular conspiracy theories, and featured very similar techniques. Significant differences were found in the level of accuracy within the message, the number of memes that could be considered propaganda, and the number of memes that appeared to be digitally altered. This study also supports the idea that right-leaning misleading political memes are more frequently disseminated than their left-leaning counterparts.
Chapter
Today, major online social networking websites host millions of user accounts. These websites provide a convenient platform for sharing information and opinions in the form of microblogs. However, the ease of sharing also brings ramifications in the form of fake news, misinformation, and rumors, which have become highly prevalent recently. The impact of fake news dissemination was observed in major political events like the US and Jakarta elections, as well as in damage to the reputations of celebrities and companies. Researchers have studied the propagation of fake news over social media websites and have proposed various techniques to combat it. In this chapter, we discuss propagation models for misinformation and review fake news mitigation techniques. We also compile a list of datasets used in fake news-related studies. The chapter concludes with open research questions.
Chapter
Under the periodically recurring buzzword "social bots," public debate paints a predominantly negative picture, according to which these programs are primarily responsible for the targeted spread of false reports, disinformation, and hate speech, and may ultimately even pose a threat to democracy. However, the actual influence of these computer programs, often labelled "opinion robots" ("Meinungsroboter") in Germany, remains disputed. Nevertheless, the fact that the mere deployment of such programs generates this much resonance makes a basic classification of the underlying mechanisms necessary.
Thesis
Full-text available
Digital media misinformation is a threat to democracy and national security in America. In today's modern landscape, the ability to personalize people's experiences online has become common practice through the collection of cookies and other personalized user data. This data can be used to identify important information about a particular user. Attributes such as age, race, gender, sexual orientation, and political interests, as well as other seemingly harmless information about computer users, are collected in a variety of ways. This user information can be used to personalize media, including news, social media feeds, and advertisements, among many other things. However, when placed into the wrong hands, this seemingly harmless data, which in many cases enhances and improves the online user experience, can be used maliciously to affect a variety of human interactions and experiences, and to alter one's perception of reality. Thus, the threat of digital misinformation is critically magnified by the fact that it can be targeted at specific users on the basis of their demographics.
Chapter
The researcher explores the world's first uses of AI. In the "Bad Bot" section, the authors look at the negative impact of AI in politics, with the first elections in history won through the use of AI bot and troll propaganda, and how this could lead to a more dystopian future with deepfakes. In the "Good Bot" section, they focus on positive case studies: starting with the 2021 Tokyo Olympics and health, they explore AI techniques applied from the infinitely small, the Higgs boson, to the infinitely large, dark matter; we meet Cimon at the Space Station; AI in climate change and pioneering UN projects such as "Earth" and "Humanitarian" AI; and in education, they look at the latest uses of AI helping schools and the EU project "Time Machine." They also return to the "Bad Bot" themes, looking at what is being implemented to tackle them. The chapter finally looks at the world's first rebellious behaviour in bots, with funny examples that will make you think.
Article
Why did Russia's relations with the West shift from cooperation a few decades ago to a new era of confrontation today? Some explanations focus narrowly on changes in the balance of power in the international system, or trace historic parallels and cultural continuities in Russian international behavior. For a complete understanding of Russian foreign policy today, individuals, ideas, and institutions—President Vladimir Putin, Putinism, and autocracy—must be added to the analysis. An examination of three cases of recent Russian intervention (in Ukraine in 2014, Syria in 2015, and the United States in 2016) illuminates the causal influence of these domestic determinants in the making of Russian foreign policy.
Chapter
This chapter combines new findings from the analysis of economic globalization and digitization with inequality statistics and new US survey results that reveal the main concerns of US households and voters. While growing economic inequality is regarded as a problem in the US survey, a relative majority of respondents express the expectation that large companies will take measures to correct excessive inequality, a view that is wishful thinking and that will lead to persistent voter frustration for the lower half of the US income pyramid. This implies a structural problem of populism in the USA and represents an entirely new situation, with challenges for North America, Europe, Asia, and the world. This new structural US populism hypothesis carries profound implications for trade policy and is associated with anti-multilateralism. The chapter also refutes the study published in 2018 by the Council of Economic Advisers, which compares per capita consumption in the US and the Nordic countries of Europe and claims a large US lead.
Chapter
Full-text available
The aim of this study is to understand data journalism, which is becoming increasingly important among new journalism practices, and to analyze the structural features of data journalism stories. Within this framework, the study seeks answers to the following questions: What developmental path has data journalism followed, and which technological developments and transformations have influenced it? Are there differences in the methods used in data journalism? What is the connection between design, software, and data journalism? Which multimedia contents make up interactive news narratives? How are social media and video-sharing platforms used in these projects?
Chapter
Social Media and Democracy, edited by Nathaniel Persily, September 2020
Book
Full-text available
In this edited volume, we wanted to focus on different ways of defining and practicing journalism, and to trace new channels, experiences, and possibilities. Accordingly, we opened the concept of "new journalism," which we see as a front for democracy against the rise of authoritarianism around the world, to discussion in its various dimensions. For us, "new journalism" is a concept that focuses on the quality of labor and its product rather than invoking professionalism. It expresses a practice that includes multiple and counter-publics in its processes: pluralist, solidaristic, participatory, non-commercial or socially entrepreneurial, anti-capitalist, counter-hegemonic and, perhaps most importantly, rhizomatic. In this practice, the traditional hierarchical newsroom structure gives way to a networked newsroom formed through heterarchical interpenetration: semi-institutional, foregrounding individuals, open to followers' intervention, and focused on the production and distribution or sharing of news. The discussion of new journalism also encompasses questioning of fake news, propaganda, and the truth of "truth" trapped within the frame of ideological struggle. Accordingly, those engaged in this practice must possess, beyond technological and digital skills, a basic critical literacy that allows them to follow and make sense of the constant flow and bombardment of content. Looking at current journalism studies and practices, it appears that both academics and practitioners adopt the "new journalism" conceptualization we emphasize in the content they produce, but do not clearly define this emerging field of study. Our work, "New Journalism: Channels, Experiences, Possibilities," is an introductory study toward establishing this new conceptual framework in the Turkish literature.
Article
Due to new technologies, the speed and volume of disinformation are unprecedented today. As seen in the 2016 US presidential election, especially in the conduct of the Internet Research Agency, this poses challenges and threats for the (democratic) political processes of internal State affairs; (democratic) elections in particular are under increasing risk. Disinformation has the potential to sway the outcome of an election and therefore discredits the idea of free and fair elections. Given the growing prevalence of disinformation operations aimed at (democratic) elections, the question arises as to how international law applies to such operations and how States might counter such hostile operations launched by their adversaries. From a legal standpoint, it appears that such disinformation operations do not fully escape existing international law. However, due to open questions and the geopolitical context, many States refrain from clearly labelling them as internationally wrongful acts under international law. Stretching current international legal norms to cover the issue does not seem to be the optimal solution, and a binding international treaty would also need to overcome various hurdles. The author suggests that disinformation operations aimed at (democratic) elections will most likely be regulated, if at all, by a combination of custom and bottom-up law-making influencing and reinforcing each other.
Book
Full-text available
Acknowledgments. I could not imagine finishing this monograph without the encouragement of my first editor, Holly Buchanan at Lexington Books. Although I have written extensively in recent years, I could not imagine finishing up a book project, so my special thanks go to Ms. Buchanan. After her, Bryndee Ryan continued to encourage me, and here is the book; further thanks go to her. Most of what I have written is a product of long years of teaching and intellectual development at Istanbul Bilgi University. I am very proud of being a faculty member at the communication school here, and I believe this is one of the best things that ever happened to me. This book was written during my sabbatical period. My university generously supported me, and I was able to write in the comfort of stays at the Anthropology Department at the University of California, Irvine and at the Science and Technology Studies program at MIT. These stays came about through the Rice Anthropology network, and I cannot say how valuable it was to brainstorm regularly with Prof. George E. Marcus and Prof. Michael M. J. Fischer. I do not claim that their wisdom is rightfully reflected in this manuscript, but it gave me intellectual empowerment and clues for future research and publications. Having such academic mentors is a great fortune in life. I have been involved with many digital personas, activists, colleagues, friends, and beloved ones, including my former student and now friend, Atınç, and my dear brother, Hakan, in Turkey. I am lucky to be surrounded by all these beautiful people. However, I will always miss the civilians who fell during the Gezi Park Protests, to which I devote a chapter. I would therefore like to dedicate this book to those fallen citizens who are collectively called the "Gezi Martyrs." Boston, MA, May 1, 2019
Article
Full-text available
Social media is an important source of news and information in the United States. But during the 2016 US presidential election, social media platforms emerged as a breeding ground for influence campaigns, conspiracy, and alternative media. Anecdotally, the nature of political news and information evolved over time, but political communication researchers have yet to develop a comprehensive, grounded, internally consistent typology of the types of sources shared. Rather than chasing a definition of what is popularly known as “fake news,” we produce a grounded typology of what users actually shared and apply rigorous coding and content analysis to define the phenomenon. To understand what social media users are sharing, we analyzed large volumes of political conversations that took place on Twitter during the 2016 presidential campaign and the 2018 State of the Union address in the United States. We developed the concept of “junk news,” which refers to sources that deliberately publish misleading, deceptive, or incorrect information packaged as real news. First, we found a 1:1 ratio of junk news to professionally produced news and information shared by users during the US election in 2016, a ratio that had improved by the State of the Union address in 2018. Second, we discovered that amplifier accounts drove a consistently higher proportion of political communication during the presidential election but accounted for only marginal quantities of traffic during the State of the Union address. Finally, we found that some of the most important units of analysis for general political theory—parties, the state, and policy experts—generated only a fraction of the political communication.
Chapter
This chapter combines new insights from economic globalization and digitalization analysis with inequality statistics and new US survey results that reveal the main concerns of US households and voters, respectively. While rising economic inequality is considered a problem in the US survey, a relative majority of respondents express the expectation that large companies will take action to correct excessive inequality, a view that is wishful thinking and bound to result in sustained voter frustration for the lower half of the US income pyramid. This implies a structural populism problem in the US, which is an entirely new situation, with challenges for North America, Europe, Asia and the world. This new structural US populism hypothesis has deep implications for trade policy and anti-multilateralism. The Council of Economic Advisers' 2018 study comparing per capita consumption in the US and Nordic European countries is also refuted.
Chapter
Full-text available
This chapter explores the challenges and opportunities of content analysis as a method for researching digitalised forms of propaganda, particularly in hybridised media environments. Digital propaganda is one of the manifestations of post-truth politics, and as such, it is a product of the culture of social interconnectivity as well as the hybridisation of political news media. It, therefore, represents the zeitgeist in communication studies and we problematise digital propaganda through the prism of social change. In doing so, we contextualise digital propaganda within political communication research, specifically studies employing content analysis-based methodologies, keeping in mind that this communicative practice adapts to, adopts features of, and co-evolves along with the media environment it occupies. First, the chapter describes the content analysis methodology and its application within propaganda research. Second, we provide an overview of the research questions content analysis tends to be used to answer in digital propaganda research. Third, we focus our discussion on a critical examination of the content analysis methodology, leading into a discussion of new challenges surrounding the emergence of computational propaganda. Fourth, in the context of the shift towards computational propaganda, the delivery of personalised messages based on analysis of user interests and behaviours, we consider emerging trends in propaganda and what contribution content analysis research can make to understanding them. Fifth, we account for innovation in content analysis and the use of big data. Finally, we conclude with a discussion that develops an understanding of propaganda uses in a fast-moving and ever-evolving communication environment and the future potential of the content analysis methodology as an exploratory and explanatory tool.
Article
Full-text available
This study, commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs and requested by the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs, assesses the impact of disinformation and strategic political propaganda disseminated through online social media sites. It examines effects on the functioning of the rule of law, democracy and fundamental rights in the EU and its Member States. The study formulates recommendations on how to tackle this threat to human rights, democracy and the rule of law. It specifically addresses the role of social media platform providers in this regard.
Article
Full-text available
Significance: The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. However, the World Wide Web is a fruitful environment for the massive diffusion of unverified rumors. In this work, using a massive quantitative analysis of Facebook, we show that information related to distinct narratives, conspiracy theories and scientific news, generates homogeneous and polarized communities (i.e., echo chambers) with similar information consumption patterns. We then derive a data-driven percolation model of rumor spreading that demonstrates that homogeneity and polarization are the main determinants for predicting cascade size.
Article
Full-text available
In this paper we take advantage of recent developments in identifying the demographic characteristics of Twitter users to explore the demographic differences between those who do and do not enable location services and those who do and do not geotag their tweets. We discuss the collation and processing of two datasets: one focusing on enabling geoservices and the other on tweet geotagging. We then investigate how opting in to either of these behaviours is associated with gender, age, class, the language in which tweets are written and the language in which users interact with the Twitter user interface. We find statistically significant differences for both behaviours across all demographic characteristics, although the magnitude of association differs substantially by factor. We conclude that there are significant demographic variations between those who opt in to geoservices and those who geotag their tweets. Notwithstanding the limitations of the data, we suggest that Twitter users who publish geographical information are not representative of the wider Twitter population.
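The significance tests of association described in this abstract can be illustrated with a minimal sketch: a Pearson chi-square statistic on a 2×2 table of geotagging behaviour against a binary demographic factor. The table counts and group labels below are invented for illustration, not data from the study.

```python
# Hypothetical 2x2 contingency table: counts of users who do / do not
# geotag their tweets, split by a binary demographic factor.
# (Illustrative numbers only -- not from the study.)
table = {
    ("group_a", "geotags"): 120, ("group_a", "no_geotags"): 880,
    ("group_b", "geotags"): 60,  ("group_b", "no_geotags"): 940,
}

def chi_square_2x2(t):
    """Pearson chi-square statistic for a 2x2 table of observed counts."""
    groups = ("group_a", "group_b")
    outcomes = ("geotags", "no_geotags")
    total = sum(t.values())
    row = {g: sum(t[(g, o)] for o in outcomes) for g in groups}
    col = {o: sum(t[(g, o)] for g in groups) for o in outcomes}
    stat = 0.0
    for g in groups:
        for o in outcomes:
            expected = row[g] * col[o] / total  # counts expected under independence
            stat += (t[(g, o)] - expected) ** 2 / expected
    return stat

print(round(chi_square_2x2(table), 2))  # large values indicate association
```

A statistic this far above the 1-degree-of-freedom critical value (3.84 at p = 0.05) would, as in the paper, indicate a statistically significant association between the demographic factor and geotagging behaviour.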
Article
Full-text available
This article provides a review of scientific, peer-reviewed articles that examine the relationship between news sharing and social media in the period from 2004 to 2014. A total of 461 articles were obtained following a literature search in two databases (Communication & Mass Media Complete [CMMC] and ACM), out of which 109 were deemed relevant based on the study’s inclusion criteria. In order to identify general tendencies and to uncover nuanced findings, news sharing research was analyzed both quantitatively and qualitatively. Three central areas of research—news sharing users, content, and networks—were identified and systematically reviewed. In the central concluding section, the results of the review are used to provide a critical diagnosis of current research and suggestions on how to move forward in news sharing research.
Article
Full-text available
The increasing popularity of the social networking service Twitter has made it more embedded in day-to-day communication, strengthening social relationships and information dissemination. Conversations on Twitter are now being explored as indicators within early warning systems to alert of imminent natural disasters such as earthquakes, and to aid prompt emergency responses to crime. Producers have almost limitless access to market perception from consumer comments on social media and microblogs, and targeted advertising can be made more effective using user profile information such as demography, interests and location. While these applications have proven beneficial, the ability to effectively infer the location of Twitter users has even greater value. However, accurately identifying where a message originated, or the author's location, remains a challenge, which continues to drive research in this area. In this paper, we survey a range of techniques applied to infer the location of Twitter users, from inception to the state of the art. We find significant improvements over time in granularity levels and accuracy, with results driven by refinements to algorithms and the inclusion of more spatial features.
Article
Full-text available
Twitter is a social media giant famous for the exchange of short, 140-character messages called "tweets". In the scientific community, the microblogging site is known for openness in sharing its data. It provides a glance into its millions of users and billions of tweets through a "Streaming API" which provides a sample of all tweets matching some parameters preset by the API user. The API service has been used by many researchers, companies, and governmental institutions that want to extract knowledge in accordance with a diverse array of questions pertaining to social media. The essential drawback of the Twitter API is the lack of documentation concerning what and how much data users get. This leads researchers to question whether the sampled data is a valid representation of the overall activity on Twitter. In this work we embark on answering this question by comparing data collected using Twitter's sampled API service with data collected using the full, albeit costly, Firehose stream that includes every single published tweet. We compare both datasets using common statistical metrics as well as metrics that allow us to compare topics, networks, and locations of tweets. The results of our work will help researchers and practitioners understand the implications of using the Streaming API.
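The comparison this abstract describes, checking whether a sampled stream preserves the properties of the full stream, can be sketched with one of its simpler metrics: overlap of the top-k hashtags between a synthetic "Firehose" and a 1% sample of it, measured by Jaccard similarity. The tag names, Zipf-like popularity weights, and sample rate below are all invented for illustration.

```python
import random
from collections import Counter

def top_hashtags(tweets, k):
    """Return the set of the k most frequent hashtags in a tweet collection."""
    counts = Counter(tag for t in tweets for tag in t["hashtags"])
    return {tag for tag, _ in counts.most_common(k)}

def jaccard(a, b):
    """Jaccard similarity of two sets (1.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 1.0

# Synthetic "Firehose": hashtag popularity follows a skewed distribution.
random.seed(0)
tags = [f"#tag{i}" for i in range(50)]
weights = [1 / (i + 1) for i in range(50)]  # Zipf-like popularity
firehose = [{"hashtags": random.choices(tags, weights)} for _ in range(20000)]

# A 1% "Streaming API"-style sample drawn from the full stream.
sample = random.sample(firehose, len(firehose) // 100)

overlap = jaccard(top_hashtags(firehose, 10), top_hashtags(sample, 10))
print(f"top-10 hashtag Jaccard overlap: {overlap:.2f}")
```

An overlap near 1.0 suggests the sample preserves the topic ranking; the paper applies analogous comparisons to topics, networks, and tweet locations using the real sampled and Firehose streams.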
Conference Paper
In many Twitter studies, it is important to know where a tweet came from in order to use the tweet content to study regional user behavior. However, researchers using Twitter to understand user behavior often lack sufficient geo-tagged data. Given the huge volume of Twitter data there is a need for accurate automated geolocating solutions. Herein, we present a new method to predict a Twitter user's location based on the information in a single tweet. We integrate text and user profile meta-data into a single model using a convolutional neural network. Our experiments demonstrate that our neural model substantially outperforms baseline methods, achieving 52.8% accuracy and 92.1% accuracy on city-level and country-level prediction respectively.
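The paper's model is a convolutional neural network over tweet text and profile metadata; as a rough illustration of the underlying task, the sketch below uses a much simpler naive Bayes baseline that combines the tweet text with the free-text profile location field. All training examples, city labels, and the `" | profile: "` concatenation convention are invented for illustration.

```python
import math
from collections import Counter, defaultdict

# Toy training data: (tweet text + profile location field, city label).
# All examples are invented; this naive Bayes baseline only illustrates
# the task, not the paper's CNN architecture.
train = [
    ("cheering for the knicks tonight | profile: NYC", "new_york"),
    ("subway delays again in manhattan | profile: New York", "new_york"),
    ("fog over the golden gate | profile: SF Bay Area", "san_francisco"),
    ("giants game at oracle park | profile: San Francisco", "san_francisco"),
]

def tokenize(text):
    return text.lower().split()

# Per-city word counts and city priors, for add-one-smoothed naive Bayes.
word_counts = defaultdict(Counter)
city_counts = Counter()
for text, city in train:
    city_counts[city] += 1
    word_counts[city].update(tokenize(text))

vocab = {w for c in word_counts.values() for w in c}

def predict_city(text):
    """Pick the city maximizing the smoothed log-likelihood of the tokens."""
    best, best_lp = None, -math.inf
    for city in city_counts:
        lp = math.log(city_counts[city] / sum(city_counts.values()))
        total = sum(word_counts[city].values())
        for w in tokenize(text):
            lp += math.log((word_counts[city][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = city, lp
    return best

print(predict_city("stuck on the subway near manhattan"))  # -> new_york
```

The design choice mirrored here is the one the paper highlights: fusing the message text with profile metadata in a single model, which the CNN does with learned representations rather than word counts.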
Article
Social and political bots have a small but strategic role in Venezuelan political conversations. These automated scripts generate content through social media platforms and then interact with people. In this preliminary study on the use of political bots in Venezuela, we analyze the tweeting, following and retweeting patterns for the accounts of prominent Venezuelan politicians and prominent Venezuelan bots. We find that bots generate a very small proportion of all the traffic about political life in Venezuela. Bots are used to retweet content from Venezuelan politicians but the effect is subtle in that less than 10 percent of all retweets come from bot-related platforms. Nonetheless, we find that the most active bots are those used by Venezuela's radical opposition. Bots are pretending to be political leaders, government agencies and political parties more than citizens. Finally, bots are promoting innocuous political events more than attacking opponents or spreading misinformation.
Article
Campaigns are complex exercises in the creation, transmission, and mutation of significant political symbols. However, there are important differences between political communication through new media and political communication through traditional media. I argue that the most interesting change in patterns of political communication is in the way political culture is produced, not in the way it is consumed. These changes are presented through the findings from systematic ethnographies of two organizations devoted to digitizing the social contract. DataBank.com is a private data mining company that used to offer its services to wealthier campaigns, but can now sell data to the smallest nascent grassroots movements and individuals. Astroturf-Lobby.org is a political action committee that helps lobbyists seek legislative relief to grievances by helping these groups find and mobilize their sympathetic publics. I analyze the range of new media tools for producing political culture, and with this ethnographic evidence build two theories about the role of new media in advanced democracies: a theory of thin citizenship and a theory about data shadows as a means of political representation.
Parkinson, H. J. (2016). Click and elect: how fake news helped Donald Trump win a real election. The Guardian.
Read, M. (2016). Donald Trump Won Because of Facebook. New York Magazine.
Dewey, C. (2016). Facebook Fake-News Writer: 'I Think Donald Trump Is in the White House Because of Me'. The Washington Post.
Howard, P., Kollanyi, B. & Woolley, S. (2016). Bots and Automation over Twitter during the U.S. Election. Oxford, UK: Project on Computational Propaganda.
NCC Staff. A recent voting history of the 15 battleground states. National Constitution Center, constitutioncenter.org. Available at: https://constitutioncenter.org/blog/voting-history-ofthe-15-battleground-states (Accessed: 22 September 2017).
Gallacher, J., Kaminska, M., Kollanyi, B., Yasseri, T. & Howard, P. N. (2017). Social Media and News Sources during the 2017 UK General Election.
Howard, P. N., Bolsover, G., Kollanyi, B., Bradshaw, S. & Neudert, L.-M. (2017). Junk News and Bots during the U.S. Election: What Were Michigan Voters Sharing Over Twitter?
Kollanyi, B., Howard, P. N. & Woolley, S. C. (2016). Bots and Automation over Twitter during the Third U.S. Presidential Debate. Data Memo 4. Project on Computational Propaganda.