Article

Social Media, News and Political Information during the US Election: Was Polarizing Content Concentrated in Swing States?

Abstract

US voters shared large volumes of polarizing political news and information in the form of links to content from Russian, WikiLeaks and junk news sources. Was this low-quality political information distributed evenly around the country, or concentrated in swing states and particular parts of the country? In this data memo we apply a tested dictionary of sources of political news and information shared over Twitter during a ten-day period around the 2016 Presidential Election. Using self-reported location information, we place a third of users by state and create a simple index for the distribution of polarizing content around the country. We find that (1) nationally, Twitter users got more misinformation, polarizing and conspiratorial content than professionally produced news; (2) users in some states, however, shared more polarizing political news and information than users in other states; (3) average levels of misinformation were higher in swing states than in uncontested states, even when weighted for the relative size of the user population in each state. We conclude with some observations about the impact of strategically disseminated polarizing information on public life.
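The index described in the abstract can be illustrated with a short sketch: classify each shared link by source type, aggregate by the sharer's self-reported state, and compare each state's share of polarizing content to the national average. This is a minimal illustration only; the field names, source categories and weighting below are assumptions, not the authors' published procedure.

```python
# Sketch: per-state share of polarizing links relative to the national average.
# Field names, category labels and the weighting are illustrative assumptions.
from collections import defaultdict

def state_polarization_index(tweets):
    """tweets: iterable of dicts with 'state' and 'source_type' keys."""
    polarizing = {"junk", "russian", "wikileaks", "conspiracy"}
    counts = defaultdict(lambda: {"polarizing": 0, "total": 0})
    for t in tweets:
        s = counts[t["state"]]
        s["total"] += 1
        if t["source_type"] in polarizing:
            s["polarizing"] += 1
    total = sum(c["total"] for c in counts.values())
    national = sum(c["polarizing"] for c in counts.values()) / total if total else 0.0
    # Index: a state's polarizing share divided by the national share,
    # so values above 1.0 indicate above-average concentration.
    return {state: (c["polarizing"] / c["total"]) / national
            for state, c in counts.items() if national}

print(state_polarization_index([
    {"state": "FL", "source_type": "junk"},
    {"state": "FL", "source_type": "professional"},
    {"state": "CA", "source_type": "professional"},
]))  # e.g. {'FL': 1.5, 'CA': 0.0}
```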


... Mainstream news stories from obscure sources that were propagated on Twitter became one of the central elements of the 2016 US presidential election, and subsequent analysis reveals that many of the stories were largely manipulated or totally fabricated (Howard et al., 2017;Stone and Gordon, 2017). Republican candidate Donald Trump himself, among others, helped to drive the mainstream news agenda with his prolific use of the social media platform (Lynch, 2016;Maheshwari, 2016). ...
... While the Russian infiltration of American social media has been explored (e.g. Chou, 2016;Howard et al., 2017;Lynch, 2016;Stone and Gordon, 2017), this study, which examines the attributes of cross-border content comparisons on Twitter, can provide a foundation for broader understanding of the international scope of social media content. Therefore, the goal of this study was to examine Twitter comments about both candidates in the US, and in five key countries that are of prime importance to American diplomacy and international trade, in order to characterise the content as a foundation for future studies on social media's influence in shaping the news agenda. ...
... In a separate study, these authors also found that non-elite sources had a growing influence with journalists through Twitter (Lewis and Zamith, 2015). One study examined how Twitter propagated 'fake news' in US states (Howard et al., 2017). Studying Twitter content may reveal the extent of biased agenda-setting, with its potential to influence and polarise millions of people both in the US and around the world. ...
Article
Full-text available
A manual content analysis compares 6019 Twitter comments from six countries during the 2016 US presidential election. Twitter comments were positive about Trump and negative about Clinton in Russia, the US and also in India and China. In the UK and Brazil, Twitter comments were largely negative about both candidates. Twitter sources for Clinton comments were more frequently from journalists and news companies, and still more negative than positive in tone. Topics on Twitter varied from those in mainstream news media. This foundational study expands communications research on social media, as well as political communications and international distinctions.
... False news, on the other hand, is defined as information that is intentionally false, often taking the form of malicious stories propagating conspiracy theories. Although this type of content shares characteristics with polarized and sensationalist content (described later in this article), where information can be characterized as highly emotional and highly partisan (Allcott & Gentzkow, 2017; Howard, Kollanyi, Bradshaw, & Neudert, 2017; Potthast et al., 2017), it differs in important features (see Table 2). First and foremost, false news stories are not factual and have no basis in reality, and thus cannot be verified (Allcott & Gentzkow, 2017; Cohen, 2017). ...
... As Howard et al. (2017) elucidate, "both fake news websites and political bots are crucial tools in digital propaganda attacks-they aim to influence conversations, demobilize opposition and generate false support" (p. 1). For example, a recent study revealed that social bots in Twitter served to amplify the dissemination of content coming from low-credibility sources, content such as conspiracy theories, false news, and junk science, suggesting that "curbing social bots may be an effective strategy for mitigating the spread of low credibility content" (Shao et al., 2018, p. 5-6). ...
... Identifying commentary, thus, requires differentiating between both real news and assertions (Howard et al., 2017; see Table 3). This can be done based on opinion journalists' adherence to the code of professional conduct of the Society of Professional Journalists and based on the Verification, Independence, and Accountability principles of journalism. ...
Article
As the scourge of “fake news” continues to plague our information environment, attention has turned toward devising automated solutions for detecting problematic online content. But, in order to build reliable algorithms for flagging “fake news,” we will need to go beyond broad definitions of the concept and identify distinguishing features that are specific enough for machine learning. With this objective in mind, we conducted an explication of “fake news” that, as a concept, has ballooned to include more than simply false information, with partisans weaponizing it to cast aspersions on the veracity of claims made by those who are politically opposed to them. We identify seven different types of online content under the label of “fake news” (false news, polarized content, satire, misreporting, commentary, persuasive information, and citizen journalism) and contrast them with “real news” by introducing a taxonomy of operational indicators in four domains—message, source, structure, and network—that together can help disambiguate the nature of online news content.
... Research on disinformation is dispersed across numerous scientific fields. In computer science, there is mature work on automatic detection [22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38] and real-world measurement [39][40][41][42][43][44][45][46][47][48][49][50][51][52][53][54][55][56][57][58], but work on responses and countermeasures is comparatively thin (see Section 2.3). ...
... Disinformation campaigns are typically multimodal, exploiting many different social and media channels at once [59]. These campaigns use websites as an important tool: to host content for distribution across platforms, facilitate user tracking, and generate ad revenue [42,45,[60][61][62][63]. Disinformation websites are frequently designed to conceal their provenance and deceive users into believing that they are legitimate news or opinion outlets. Our work examined whether warnings can counter this deception and help users distinguish, contextualize, or avoid disinformation websites. ...
Preprint
Full-text available
Online platforms are using warning messages to counter disinformation, but current approaches are not evidence-based and appear ineffective. We designed and empirically evaluated new disinformation warnings by drawing from the research that led to effective security warnings. In a laboratory study, we found that contextual warnings are easily ignored, but interstitial warnings are highly effective at inducing subjects to visit alternative websites. We then investigated how comprehension and risk perception moderate warning effects by comparing eight interstitial warning designs. This second study validated that interstitial warnings have a strong effect and found that while warning design impacts comprehension and risk perception, neither attribute resulted in a significant behavioral difference. Our work provides the first empirical evidence that disinformation warnings can have a strong effect on users' information-seeking behaviors, shows a path forward for effective warnings, and contributes scalable, repeatable methods for establishing evidence on the effects of disinformation warnings.
... The attack struck the entire nation, with some states getting larger doses of malicious content than others. Out of the sixteen states labeled as swing states by the National Constitution Center at the time, twelve were exposed to above-average levels of polarizing content (Howard et al., 2018). ...
... It was again humans and artificials working together in that effort. Fredheim and Gallacher (2018) and Kollanyi et al. (2018) document the increasing rate of anonymous activity during key social moments, such as political events like elections. While the first work studies two months of 2018 and concludes that around 35% of Twitter activity around content mentioning NATO can be assigned to anonymous or low-quality accounts, the second aforementioned study focuses on the week prior to election day of the 2016 US presidential election. ...
... Specifically, we explore the following questions: (1) to what extent do social bots participate in the information distribution of news in professional media; (2) what role do social bots play in the diffusion of such stories? (3) can social bots become opinion leaders along the diffusion path of professional news stories? ...
... It uses the functionality of social accounts to deliver news and information like a human and can also perform malicious activities such as sending spam, posting harassment, and delivering hate speech. Such social bots can post messages quickly, mass-produce replicated messages, and eventually distribute messages in the form of humanlike users [3]. Social bots are more active compared to regular human users [4], and their purpose is to learn and imitate humans to manipulate public opinion on social media platforms. ...
Article
Full-text available
Social-bot-mediated information manipulation is influencing the public opinion environment, and bots' role and behavior patterns in news proliferation are worth exploring. Based on an analysis of bots' posting frequency, influence, and retweeting relationships, we take the diffusion of The New York Times' coverage of the Xinjiang issue on the overseas social platform Twitter as an example and employ the two-step flow model. It is found that, unlike posting news indiscriminately in first-step diffusion, social bots are more inclined to post controversial information in second-step diffusion; in terms of diffusion patterns, although social bots are more engaged in first-step diffusion than in second-step diffusion and can trigger human users to retweet, they are still inferior to humans in terms of influence.
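The two-step flow analysis above distinguishes accounts that retweet the outlet directly (first-step diffusion) from accounts that retweet those retweeters (second-step diffusion). The sketch below shows one way to separate the two steps and compare bot involvement in each; the record fields and the is_bot flag are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch: split retweet records into first-step and second-step diffusion
# relative to a seed news account; field names are illustrative assumptions.
def split_diffusion_steps(retweets, seed_account):
    """retweets: list of dicts {'retweeter', 'source', 'is_bot'}."""
    first_step = [r for r in retweets if r["source"] == seed_account]
    first_step_users = {r["retweeter"] for r in first_step}
    second_step = [r for r in retweets
                   if r["source"] in first_step_users and r["source"] != seed_account]
    bot_share = lambda step: (sum(r["is_bot"] for r in step) / len(step)) if step else 0.0
    return {"first_step_bot_share": bot_share(first_step),
            "second_step_bot_share": bot_share(second_step)}

rts = [{"retweeter": "u1", "source": "nytimes", "is_bot": False},
       {"retweeter": "b1", "source": "nytimes", "is_bot": True},
       {"retweeter": "u2", "source": "b1", "is_bot": False}]
print(split_diffusion_steps(rts, "nytimes"))
# {'first_step_bot_share': 0.5, 'second_step_bot_share': 0.0}
```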
... We began with a sample of URLs shared in the United States during that country's 2016 Presidential election, an important moment in which the "fake news" term first emerged. The team elaborated four broad categories: (1) professional news outlets, (2) established political actors, (3) polarizing and conspiratorial content, and (4) other sources of political news and information (Howard, Kollanyi, Bradshaw, & Neudert, 2017). ...
Article
Full-text available
Voters increasingly rely on social media for news and information about politics. But increasingly, social media has emerged as a fertile soil for deliberately produced misinformation campaigns, conspiracy, and extremist alternative media. How does the sourcing of political news and information define contemporary political communication in different countries in Europe? To understand what users are sharing in their political communication, we analyzed large volumes of political conversation over a major social media platform—in real-time and native languages during campaign periods—for three major European elections. Rather than chasing a definition of what has come to be known as “fake news,” we produce a grounded typology of what users actually shared and apply rigorous coding and content analysis to define the types of sources, compare them in context with known forms of political news and information, and contrast their circulation patterns in France, the United Kingdom, and Germany. Based on this analysis, we offer a definition of “junk news” that refers to deliberately produced misleading, deceptive, and incorrect propaganda purporting to be real news. In the first multilingual, cross-national comparison of junk news sourcing and consumption over social media, we analyze over 4 million tweets from three elections and find that (1) users across Europe shared substantial amounts of junk news in varying qualities and quantities, (2) amplifier accounts drive low to medium levels of traffic and news sharing, and (3) Europeans still share large amounts of professionally produced information from media outlets, but other traditional sources of political information including political parties and government agencies are in decline.
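The typology above is operationalized by matching the domains of shared URLs against a coded catalogue of sources. A minimal sketch of that kind of dictionary lookup is given below; the example domains and category labels are placeholders, not the authors' actual coding frame.

```python
# Sketch: classify shared URLs against a coded dictionary of source domains.
# The example entries are placeholders, not the authors' actual catalogue.
from urllib.parse import urlparse

SOURCE_DICTIONARY = {
    "nytimes.com": "professional_news",
    "parliament.uk": "political_actor",
    "example-junk-site.com": "junk_news",   # hypothetical entry
}

def classify_url(url):
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    return SOURCE_DICTIONARY.get(domain, "other")

shares = ["https://www.nytimes.com/2016/11/08/politics.html",
          "http://example-junk-site.com/shock-story"]
print([classify_url(u) for u in shares])  # ['professional_news', 'junk_news']
```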
... One could conjecture that the motivation of foreign information operations is to sow discord and to reduce the unity of a society's populace. We remain politically neutral, with the hope that divisive language is not used intentionally to polarize others and that, in cases of legitimate promotion of already divisive topics such as healthcare or social security reform, polarization can be functionally minimized rather than unintentionally deepening the division of an audience while advancing politically charged causes (Howard, 2018). It may not be apparent how this happens, but common devices identified in the FLC portion of this competition, such as flag waving (i.e., conflating the opposing viewpoint with being unpatriotic), are one example of many possible. ...
... The authors state that chatbots in fact showed a measurable influence during the election, either by manufacturing online popularity or by democratizing propaganda. Thus, governments of several countries have started to introduce regulations to fight against these kinds of online manipulation (Howard et al., 2018). However, chatbots oftentimes remain a widely accepted tool for propaganda (Woolley and Guilbeault, 2017). ...
Conference Paper
Full-text available
Recent years show an increasing popularity of chatbots, with the latest efforts aiming to make them more empathic and human-like, finding application for example in customer service or in treating mental illnesses. Thereby, empathic chatbots can understand the user's emotional state and respond to it on an appropriate emotional level. This survey provides an overview of existing approaches used for emotion detection and empathic response generation. These approaches raise at least one of the following profound challenges: the lack of quality training data, balancing emotion- and content-level information, considering the full end-to-end experience and modelling emotions throughout conversations. Furthermore, only a few approaches actually cover response generation. We state that these approaches are not yet empathic in that they either mirror the user's emotional state or leave it up to the user to decide the emotion category of the response. Empathic response generation should select appropriate emotional responses more dynamically and express them accordingly, for example using emojis.
... Russian media, and in particular their online segment, have recently been (re-)instated as a focus of attention of communication scholars and computer scientists (Howard, Kollanyi, Bradshaw, & Neudert, 2017;Sanovich, 2017). This was a result of several scandals around the spread of various cyber-attacking techniques, such as email hacking, attacks of social media bots, and spread of allegedly pre-paid electoral advertisements. ...
Article
Full-text available
Russian media have recently (re-)gained attention of the scholarly community, mostly due to the rise of cyber-attacking techniques and computational propaganda efforts. A revived conceptualization of the Russian media as a uniform system driven by a well-coordinated propagandistic state effort, though having evidence thereunder, does not allow seeing the public discussion inside Russia as a more diverse and multifaceted process. This is especially true for the Russian-language mediated discussions online, which, in the recent years, have proven to be efficient enough in raising both social issues and waves of political protest, including on-street spillovers. While, in the recent years, several attempts have been made to demonstrate the complexity of the Russian media system at large, the content and structures of the Russian-language online discussions remain seriously understudied. The thematic issue draws attention to various aspects of online public discussions in Runet; it creates a perspective in studying Russian mediated communication at the level of Internet users. The articles are selected in the way that they not only contribute to the systemic knowledge on the Russian media but also add to the respective subdomains of media research, including the studies on social problem construction, news values, political polarization, and affect in communication.
... Another challenge we encountered when preparing our network graphs was the presence of bot accounts. Bots can be most broadly characterized as entities with the ability to produce and publish content and interact directly with other users without human intervention (Howard, Kollanyi, Bradshaw, & Neudert, 2018). These entities play a crucial role in the social media ecosystem by responding to real-time questions about a variety of topics or by providing automated updates on news and events. ...
Article
With increasing attention devoted to automated bot accounts, fake news, and echo chambers, how much of the theory of a Habermassian public sphere is still applicable to social media? Drawing on Twitter data collected on April 16, 2017, during the night of Turkey's 2017 Constitutional Referendum, we test whether the networks of political communication resemble the communicative structures characteristic of Habermas's "public sphere." The referendum left the country sharply divided; 51.4 percent of the electorate voted in favor of amending the constitution to grant sweeping new executive powers to the presidency, with an overall turnout of 85.46 percent. In this article, we examine whether Twitter users were meaningfully engaged on the night of the referendum, and if their communicative patterns resembled a networked public sphere, that is, a space where information and ideas are exchanged, and public opinion is formed in a deliberative, rational manner. We find ideological uniformity, polarization, and partisan antipathy to be especially evident, mirroring existing social tensions in Turkey. Rather than resembling a public sphere, we found Twitter users to be more likely to communicate on the basis of homophily, rather than to engage in democratic debate or establish a common ground between the two campaigns.
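The homophily reported above can be quantified as attribute assortativity on the communication network: values near +1 mean users overwhelmingly interact with accounts from their own camp. The sketch below uses networkx on a toy graph; the edge list and the 'camp' labels are illustrative assumptions, not the study's data.

```python
# Sketch: measure homophily as attribute assortativity on an interaction graph.
# The 'camp' labels and edge list are illustrative assumptions.
import networkx as nx

G = nx.Graph()
G.add_nodes_from([(1, {"camp": "yes"}), (2, {"camp": "yes"}),
                  (3, {"camp": "no"}), (4, {"camp": "no"})])
G.add_edges_from([(1, 2), (3, 4), (1, 3)])  # mostly within-camp ties

# Values near +1 indicate users interact mainly within their own camp.
print(nx.attribute_assortativity_coefficient(G, "camp"))
```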
... It is understood that disinformation can promote false understanding through different means, not necessarily based on false identities, but by using true but misleading content to trigger false inferences (Fallis, 2015), and promoting misperceptions about reality and social consensus (McKay & Tenove, 2021). Regarding the effects this has, disinformation often seeks to amplify social divisions, through discursive means of "us" and/against "the other", including the propagation of conspiracy theories, and using polarising and sensationalist content that is highly emotional and partisan (Howard et al., 2017). Reddi et al. (2021) noted that disinformation in US politics works at the service of existing power structures and identified anti-black racism, misogyny and xenophobic sentiment as topics susceptible to disinformation. ...
Article
Full-text available
Democracy is based on individuals' ability to give their opinions freely. To do this, they must have access to a multitude of reliable information sources (Dahl, 1998), and this greatly depends on the characteristics of their media environments. Today, one of the main issues individuals face is the significant amount of disinformation circulating through social networks. This study focuses on parliamentary disinformation. It examines how parliamentarians contribute to generating information disorder (Wardle & Derakhshan, 2017) in the digital public space. Through an exploratory content analysis (a descriptive content analysis of 2,307 messages posted on the Twitter accounts of parliamentary spokespeople and representatives of the main list of each political party in the Spanish Lower House of Parliament), we explore disinformation rhetoric. The results allow us to conclude that, while the volume of messages shared by parliamentarians on issues susceptible to disinformation is relatively low (14% of tweets), both the themes of the tweets (COVID-19, sex-based violence, migrants or LGBTI), as well as their tone and argumentative and discursive lines, contribute to generating distrust through criticism of institutions or of their peers. The study deepens current knowledge of the disinformation generated by political elites, key agents in the construction of polarising narratives.
... In this article, rumour is defined as a form of unverified information that arises from, and is publicly circulated under, conditions of uncertainty (DiFonzo & Bordia, 2007). Scholars have discussed rumour's potential threats in jeopardising public health (Ngade, Singer, Marcus, & Lara, 2016), intensifying racial conflicts (Williams & Burnap, 2015) and influencing presidential elections (Howard, Kollanyi, Bradshaw, & Neudert, 2018). During times of crisis, the consequences of online rumours can be particularly severe, as they can lead to social unrest and hamper efforts to control and contain a crisis (Gupta, Lamba, Kumaraguru, & Joshi, 2013;Starbird, Maddock, Orand, Achterman, & Mason, 2014). ...
Article
Full-text available
This article investigates how citizens contribute to rumour verification on social media in China, drawing on a case study of Weibo communication about the 2015 Tianjin blasts. Three aspects of citizen engagement in verifying rumours via Weibo are examined: (1) how they directly debunked rumours related to the blasts, (2) how they verified official rumour messages and (3) how they used Weibo’s community verification function to collectively identify and fact-check rumours. The article argues that in carrying out such activities, ordinary Weibo users were engaging in practices of citizen journalism. Findings from our analysis suggest that even though citizen journalists’ direct engagement in publishing debunking messages was not as visible as that of the police and mainstream media, self-organised grassroots rumour-debunking practices demonstrate great potential. In terms of both the reposts and the positive comments they received, rumour-debunking posts from non-official actors appear to have been given more credibility than those from their official counterparts. In contrast, the official narratives about the Tianjin blasts were challenged, and the credibility of the official rumour-debunking messages was commonly questioned. Nevertheless, this article also shows that Weibo’s community verification system had limited effects in facilitating how Weibo users could collaboratively fact-check potentially false information.
... On the other hand, such large-scale diffusion into daily routines requires an increased understanding of the risks and consequences for individuals concerning data that are wilfully shared online in a variety of platforms and situations. For instance, recent studies have focused on the role of social media in amplifying fake news propagation (Allcott & Gentzkow, 2017), hate speech (Mondal et al., 2017), and its impact on influencing political debates such as BREXIT (Del Vicario et al., 2017) and the 2016 US Presidential election (Howard et al., 2018). The revelation related to Facebook providing unfettered access to personal information about over 87 million users to Cambridge Analytica (Isaak & Hanna, 2018) has fueled the debate over not only the societal impact of those technologies but also about user's privacy and their data rights. ...
... Sources are important, as they help frame the journalists' news stories (Golan & Himelboim, 2015). These might be individual citizens, representatives of government agencies, think tanks, universities, and political parties (Howard, Kollanyi, Bradshaw, & Neudert, 2017), or policy makers, businesses, interest groups, ...
Article
Full-text available
In 2018, the World Health Organization declared Delhi the most polluted city on the planet. This study examines how the Indian print news media has framed the issue of Delhi air pollution, and framed responsibilities for its causes and solutions. This content analysis examines stories from The Times of India, Hindustan Times, and The Hindu for news coverage of Delhi air pollution between 2011 and 2016, when Delhi was enveloped in the worst toxic smog. Findings revealed that personal-level causal attributions (i.e., cars) were mentioned more frequently than were societal-level or other causes (industrial emissions and weather). The responsibility for solutions was attributed to the government and businesses, however, and not to individuals, which may be due to the nation's high-context culture. Theoretical implications and practical applications are discussed.
... The term User Affinity refers to the natural liking or interest of a user towards a business, brand, or any other such thing. This concern has been surfacing through various incidents that took place globally: the 2015 Presidential Election of Sri Lanka, the 2016 Presidential Election of the United States [7], [8], Brexit [9], and the 2018 Local Government Election of Sri Lanka. In all of these external events, influencing user affinity is likely to affect the outcome of these elections (or in turn the democratic process [10]-[12]). ...
... Nevertheless, studying the effectiveness of these phenomena is of paramount importance. Fake news is currently discussed in the United States of America, particularly in the context of the 2016 presidential election [67,68], and is now the subject of an adaptation of legislation in France following the attempts to manipulate public opinion during the 2017 French presidential election [69]. ...
Article
Full-text available
Background: Digital spaces, and in particular social networking sites, are becoming increasingly present and influential in the functioning of our democracies. In this paper, we propose an integrated methodology for the data collection, the reconstruction, the analysis and the visualization of the development of a country's political landscape from Twitter data.
Method: The proposed method relies solely on the interactions between Twitter accounts and is independent of the characteristics of the shared contents such as the language of the tweets. We validate our methodology on a case study on the 2017 French presidential election (60 million Twitter exchanges between more than 2.4 million users) via two independent methods: the comparison between our automated political categorization and a human categorization based on the evaluation of a sample of 5000 profile descriptions; and the correspondence between the reconfigurations detected in the reconstructed political landscape and key political events reported in the media. This latter validation demonstrated the ability of our approach to accurately reflect the reconfigurations at play in the off-line political scene.
Results: We built on this reconstruction to give insights into the opinion dynamics and the reconfigurations of political communities at play during a presidential election. First, we propose a quantitative description and analysis of the political engagement of members of political communities. Second, we analyze the impact of political communities on information diffusion and in particular on their role in the fake news phenomenon. We measure a differential echo chamber effect on the different types of political news (fake news, debunks, standard news) caused by the community structure and emphasize the importance of addressing the meso-structures of political networks in understanding the fake news phenomenon.
Conclusions: Giving access to an intermediate level, between sociological surveys in the field and large statistical studies (such as those conducted by national or international organizations), we demonstrate that social network data make it possible to qualify and quantify the activity of political communities in a multi-polar political environment, as well as their temporal evolution and reconfiguration, their structure, their alliance strategies and their semantic particularities during a presidential campaign, through the analysis of their digital traces. We conclude this paper with a comment on the political and ethical implications of the use of social network data in politics. We stress the importance of developing social macroscopes that will enable citizens to better understand how they collectively make society, and propose as an example the "Politoscope", a macroscope that delivers some of our results in an interactive way.
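The reconstruction method described above works purely from interactions between accounts, which in practice amounts to community detection on an interaction graph. The sketch below illustrates the general idea with networkx on a toy edge list; it is not the Politoscope pipeline, which is far more elaborate.

```python
# Sketch: build an interaction graph from retweets and detect communities.
# The edge list is illustrative, not the study's data.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [("a", "b"), ("b", "c"), ("a", "c"),   # one cluster of accounts
         ("x", "y"), ("y", "z"), ("x", "z")]   # another cluster
G = nx.Graph()
G.add_edges_from(edges)

communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])  # e.g. [['a', 'b', 'c'], ['x', 'y', 'z']]
```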
... The three primary groups identified as manufacturing this content were (i) strong Trump supporters; (ii) actors looking to gain advertising using 'click-bait' headlines; and (iii) the Russian propaganda apparatus (Weisburd, Watts, & Berger, 2016). Additional evidence emerging since the January 2017 ODNI assessment further confirms its claims: a Facebook White Paper stated their data 'does not contradict' ODNI's findings and attribution (Weedon, Nuland, & Stamos, 2017, p. 11); researchers at Oxford University found that individual U.S. states of electoral importance were specifically targeted with higher volumes of fake news, political messaging via trolling social media advertisements (Howard, Kollanyi, Bradshaw, & Neudert, 2017); and, most sensationally, in February 2018 an indictment was presented against the Internet Research Agency (a Russian company with ties to the Kremlin), its leadership, and affiliates by Robert Mueller, the U.S. Special Counsel overseeing the investigation into Russian interference in the 2016 election. The 37-page indictment offers remarkable insights into Russia's disinformation campaign, for example that the defendants 'began to promote allegations of voter fraud by the Democratic Party' in 2016, including posts that allegations of voter fraud were being investigated in North Carolina and later reported in Florida (Mueller, 2018, pp. ...
Article
There is a recent surge in the use of state-sponsored cyber operations by states against foreign political institutions, including efforts to sway electoral outcomes by influencing voters. Yet cyber statecraft research has focused more on operations designed to yield a direct military advantage or reward, rather than as a subtle tool of influence. We seek to address this gap in the literature, first by conceptualising a typology of state-sponsored operations constituting ‘cyber voter interference’ (CVI), second by theorising a causal mechanism through which CVI can influence the cognition and behaviour of voters contingent on specific local conditions within a target state, and third by testing the plausibility of our theoretical model via two case studies of recent elections in the United States and France, both of which saw credible accusations of cyber interference by hostile foreign actors. We find that the evidence supports the plausibility of the theorised model, and our argument that the success of CVI is mediated by specific conditions within the state being targeted.
... Howard 2017; Gordon & Stone 2017. ...
Book
Full-text available
International politics is increasingly characterized by strategic competition between states. The growth of international interdependencies has not meant an end to tensions. States' foreign-policy influencing is also changing. Hybrid influencing has become one of the central concepts of security and defence policy, and of public debate, in recent years. The report analyses the aptness of different hybrid concepts and assesses the criticism directed at them. As economically and socially open actors, liberal democracies are susceptible to hybrid influencing. Hybrid tactics often deliberately exploit the inherent vulnerabilities of liberal democracies. The report examines the opportunities for hybrid influencing in liberal democracies and assesses states' vulnerability to it. The report focuses on three forms of hybrid influencing: geoeconomic influencing, information influencing and electoral interference. Awareness of hybrid influencing has grown. One possible means of protection against such influencing is to draw on resilience in national security activities. Finally, the report analyses the potential of national resilience policy against hybrid influencing. The report can be downloaded at: https://www.fiia.fi/julkaisu/hybridivaikuttaminen-ja-demokratian-resilienssi
... According to recent declassified assessments by military agencies, this represents a new and more sophisticated asymmetrical war in which Russia is alleged to exploit a network of state-funded news agencies, trolls, bots, and activists to encourage dissent and polarize the electorate in service of eroding trust in western institutions (e.g., elections, government officials and regulatory agencies, capitalism, and an independent press; Giles 2016 and National Intelligence Council 2016). To date, much of the attention in the US has been on the use of social media giants Facebook and Twitter to spread highly polarizing, ideologically extreme, and conspiratorial propaganda (Howard et al., 2017). Although there has been less attention paid to the sources of disinformation campaigns (Lazer et al. 2018), two Russian state-funded news organizations, RT (Russia Today) and Sputnik, were singled out by defense agencies as central actors in this influence campaign. ...
Article
Full-text available
Biotech news coverage in English-language Russian media fits the profile of the Russian information warfare strategy described in recent military reports. This raises the question of whether Russia views the dissemination of anti-GMO information as just one of many divisive issues it can exploit as part of its information war, or if GMOs serve more expansive disruptive purposes. Distinctive patterns in Russian news provide evidence of a coordinated information campaign that could turn public opinion against genetic engineering. The recent branding of Russian agriculture as the ecologically clean alternative to genetically engineered foods is suggestive of an economic motive behind the information campaign against western biotechnologies.
... Other factors were also associated with small increases in exposures to fake news sources: Men and whites had slightly higher rates, as did voters in swing states and voters who sent more tweets (excluding political URLs analyzed here). These findings are in line with previous work that showed concentration of polarizing content in swing states (17) and among older white men (18). However, effects for the above groups were small (less than one percentage point increase in proportion of exposures) and somewhat inconsistent across political groups. ...
Article
The spread of fake news on social media became a public concern in the United States after the 2016 presidential election. We examined exposure to and sharing of fake news by registered voters on Twitter and found that engagement with fake news sources was extremely concentrated. Only 1% of individuals accounted for 80% of fake news source exposures, and 0.1% accounted for nearly 80% of fake news sources shared. Individuals most likely to engage with fake news sources were conservative leaning, older, and highly engaged with political news. A cluster of fake news sources shared overlapping audiences on the extreme right, but for people across the political spectrum, most political news exposure still came from mainstream media outlets.
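The concentration finding above (1% of users accounting for 80% of exposures) is a simple cumulative-share statistic over per-user exposure counts. The sketch below computes it on toy data; the numbers are illustrative, not the study's.

```python
# Sketch: share of total exposures accounted for by the top x% of users.
# The exposure counts below are illustrative toy data.
def top_share(exposures, top_fraction=0.01):
    ordered = sorted(exposures, reverse=True)
    k = max(1, int(len(ordered) * top_fraction))
    return sum(ordered[:k]) / sum(ordered)

counts = [500, 300, 10, 5, 3, 2, 1, 1, 1, 1]  # exposures per user (toy data)
print(f"top 10% of users: {top_share(counts, 0.10):.0%} of exposures")
```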
... They suggest that "when links to Russian content and unverified WikiLeaks stories are added to the volume of junk news, fully 32% of all the successfully catalogued political content was polarizing, conspiracy driven, and of an untrustworthy provenance." Then they deliver what they think is their punch line: "Average levels of misinformation were higher in swing states than in uncontested states, even when weighted for the relative size of the user population in each state" (Howard et al., 2017). ...
... These SM platforms are frequently used to look for information for purposes such as social networking, marketing, reading user reviews, daily routines, religion, food products, disasters (floods), and education and research (Kubiak, 2017; Li et al., 2018; Martínez-Ruiz et al., 2018; Sutherland et al., 2018; Thakur & Chander, 2017; Wickramanayake & Jika, 2018). A few previous studies also indicate that social media involve political discussions and the exchange of opinions on national and international political issues, which have influenced the behavior, mindset and attitudes of young adults (Hassan, 2018; Howard et al., 2018; Kahne & Bowyer, 2018; Stanley, 2017; Tucker et al., 2018). Likewise, social media have been used to create hype around socio-political issues in recent years, particularly the Panama Leaks in Pakistan. ...
Article
Full-text available
Background: Social media (SM) have become popular among all genres of people due to their instant and dynamic communication ability. Substantial use of social media as a source of political information raises a concern for researchers to investigate the usage patterns of SM regarding socio-political issues in society.
Objective: The aim of this study was to investigate the use of social media as a source of political information regarding the Panama Leaks in Pakistan.
Method: A quantitative research approach based on a survey method was used to collect primary data from a sample of 500 educated adults conveniently available in Lahore city of the Punjab province of Pakistan. Descriptive and inferential statistics were used for data analysis in SPSS-25.
Findings: The findings revealed that the majority of educated adults used social media platforms (i.e. Facebook, WhatsApp, YouTube, Twitter and Wikipedia) on a daily basis. The educated adults commonly acquired information to know the historical perspectives of the Panama Leaks (PL); update themselves with general discussions and opinions; understand political and economic conditions due to the PL outbreak; be aware of court proceedings/judgments of the PL; and get information for entertainment, education and research.
... Previously, a large-scale survey (Barthel et al., 2016) reported that 32% of US adults often encountered completely made-up stories on social media. Our study also supports the findings of other studies that fake news on social media is published mainly to support or oppose political agendas (Lazer et al., 2018; Howard et al., 2018) or religious agendas (Boyd and Ellison, 2007; Howard et al., 2017; Farkas et al., 2018), and to trigger people to take certain actions (Kang and Goldman, 2016). Another study from the USA indicates that 71% of adults either 'often' or 'sometimes' see completely made-up political news online (Barthel et al., 2016), and many U.S. adults have expressed their concerns about the impact of fake news stories on the 2016 Presidential election in the US (Allcott and Gentzkow, 2017; Silverman, 2016). Regarding news literacy skills, the findings show that librarians have a slightly moderate level of understanding of the concept of news literacy, and they demonstrated a moderate level of perceived news literacy skills (Table 9). ...
Article
Introduction: This study was conducted with the objective of determining the ways librarians deal with fake news, as well as assessing the status of their news literacy skills in combating the fake news phenomenon.
Method: A cross-sectional survey was conducted in public and private sector university libraries of Punjab, Pakistan. The study's population comprised university librarians working at the rank of assistant librarian or above.
Analysis: One hundred and eighty questionnaires were distributed in both print and online form, of which 128 were returned (response rate 71.11%). Descriptive and inferential statistics were applied to report the data using the Statistical Package for Social Science (SPSS version 22).
Results: Librarians 'sometimes' determine the authenticity of a news story, e.g., they 'check it from other sources in case of doubts.' The responses for all eleven statements relating to news literacy skills ranged between 3.05 and 3.36 on a five-point Likert-type scale, indicating that respondents 'somewhat' agreed with their perceived news literacy skills.
Conclusions: University librarians are not fully acquainted with the aspect of news trustworthiness on social media, which affects their news acceptance and sharing behaviour. They also have a moderate level of conceptual understanding of news literacy.
... The content of fake news in the literature has been compared with polarized and sensational content. Such content is usually characterized as highly emotional and highly partisan (Allcott and Gentzkow, 2017; Howard et al., 2018). The results revealed interesting insights in the context of fake news detection. ...
Article
Purpose: The purpose of this paper is to examine the factors that significantly affect the prediction of fake news from the virality theory perspective. The paper looks at a mix of emotion-driven content, sentimental resonance, topic modeling and linguistic features of news articles to predict the probability of fake news.
Design/methodology/approach: A data set of over 12,000 articles was chosen to develop a model for fake news detection. Machine learning algorithms and natural language processing techniques were used to handle big data with efficiency. Lexicon-based emotion analysis provided eight kinds of emotions used in the article text. The cluster of topics was extracted using topic modeling (five topics), while sentiment analysis provided the resonance between the title and the text. Linguistic features were added to the coding outcomes to develop a logistic regression predictive model for testing the significant variables. Other machine learning algorithms were also executed and compared.
Findings: The results revealed that positive emotions in a text lower the probability of news being fake. It was also found that sensational content like illegal activities and crime-related content was associated with fake news. News titles and texts exhibiting similar sentiments were found to have lower chances of being fake. News titles with more words and content with fewer words were found to impact fake news detection significantly.
Practical implications: Several systems and social media platforms today are trying to implement fake news detection methods to filter the content. This research provides exciting parameters from a virality theory perspective that could help develop automated fake news detectors.
Originality/value: While several studies have explored fake news detection, this study uses a new perspective on virality theory. It also introduces new parameters like sentimental resonance that could help predict fake news. This study deals with an extensive data set and uses advanced natural language processing to automate the coding techniques in developing the prediction model.
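The predictive model described above combines emotion, topic, title-text sentiment resonance and linguistic features in a logistic regression. The sketch below shows that feature-plus-classifier setup with scikit-learn on toy values; the feature columns and data are assumptions for illustration, not the paper's dataset.

```python
# Sketch: logistic regression over emotion / resonance / length features,
# in the spirit of the model described above; the toy data is illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: positive-emotion score, title-text sentiment resonance,
# title word count, body word count (all toy values).
X = np.array([[0.8, 0.9, 8, 600],
              [0.1, 0.2, 15, 120],
              [0.7, 0.8, 9, 550],
              [0.2, 0.1, 14, 100]])
y = np.array([0, 1, 0, 1])  # 1 = fake, 0 = not fake

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.15, 0.25, 13, 110]]))  # likely predicts fake (1)
```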
... One part of this variation is a function of language use, but another is derivative of political events. The 2016 US election is a good example, as it was rife with both misleading content widely displayed on social media platforms and widespread politicization of the term 'fake news' itself (Allcott and Gentzkow 2017;Howard et al. 2017;Howard et al. 2018;Grinberg et al. 2019). Other countries had less pronounced salience of the conceptual frame of 'fake news'. ...
Article
Does perceived exposure on social media to mis/disinformation affect user perceptions of social media newsfeed algorithmic bias? Using survey data from eight liberal democratic countries and propensity score matching statistical techniques, this paper details the average treatment effect (ATE) of self-reported perceived exposure to mis/disinformation on perceptions that social media newsfeed algorithms are biased. Overall, the results show that self-reported perceived exposure to misleading content on social media increases perceptions of algorithmic bias. The results also detail interesting platform/country variation in the estimated average treatment effect. The ATE of perceived fake news exposure on perceptions of algorithmic bias are similar on Twitter and Facebook but are amplified in countries with high society-wide issue salience surrounding ‘fake news’ and, especially, ‘algorithmic bias’.
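Propensity score matching, as used above, estimates treatment effects by pairing "treated" respondents (those reporting exposure to mis/disinformation) with untreated respondents who have similar estimated propensities. The sketch below shows a basic one-to-one nearest-neighbour matching estimate on simulated data; the variables and specification are illustrative, not the paper's.

```python
# Sketch: 1-to-1 nearest-neighbour propensity score matching on simulated data.
# Variables and the data-generating process are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                         # covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # exposure to misinformation
y = 0.5 * treat + X[:, 1] + rng.normal(size=200)      # perceived algorithmic bias

ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
treated, control = np.where(treat == 1)[0], np.where(treat == 0)[0]
# Match each treated unit to the control unit with the closest propensity score.
matches = control[np.argmin(np.abs(ps[treated][:, None] - ps[control]), axis=1)]
effect = np.mean(y[treated] - y[matches])
print(f"matched-difference estimate: {effect:.2f}")
```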
... In their study of the different types of information shared on social media during the 2016 US election, Howard et al. (2017) established a typology of sources for information being shared on social media platforms, included in Figure 2. This classification is interesting in light of the social media content analyzed in the present study, as it compartmentalizes content according to defined classifications, and it is of interest to assess its applicability to our sample. ...
Article
Full-text available
Ephemeral media has become a staple of today’s social media ecology. This study advances the first exploratory analysis of Instagram Stories as a format for political communication. Through an initial content analysis of 832 stories in three verified Vox accounts and a secondary content and discourse analysis of 114 stories, we delve into the strategies used by right-wing party Vox in Spain to portray immigration as an issue for ideological positioning. The findings shed light onto the ways in which the representation of migrants is employed as an instrument for anti-migratory policy support, through the construction of a very specific profile of a migrant in terms of age and gender and the exclusion of significant migrant populations from the argument. Moreover, the party employs the content creation functionalities of Instagram Stories to construct arguments and storylines where diverse information sources converge, effectively bypassing traditional media and reaching their supporter base directly.
... Previous work has highlighted the prominent role that bots played during the U.S. 2016 election (Badawy et al., 2018;Howard et al., 2018;Kriel & Pavliuc, 2019;Ruck et al., 2019). Our findings provide evidence that bots remained very influential even after the elections. ...
Article
Twitter gained new levels of political prominence with Donald J. Trump's use of the platform. Although previous work has been done studying the content of Trump's tweets, there remains a dearth of research exploring who opinion leaders were in the early days of his presidency and what they were tweeting about. Therefore, this study retroactively investigates opinion leaders on Twitter during Trump's 1st month in office and explores what those influencers tweeted about. We uniquely used a historical data set of 3 million tweets that contained the word "trump" and used Latent Dirichlet Allocation, a probabilistic algorithmic model, to extract topics from both general Twitter users and opinion leaders. Opinion leaders were identified by measuring eigenvector centrality and removing users with fewer than 10,000 followers. The top 1% of users with the highest eigenvector centrality scores (N = 303) were sampled, and their attributes were manually coded. We found that most Twitter-based opinion leaders are either media outlets/journalists with a left-center bias or social bots. Immigration was found to be a key topic during our study period. Our empirical evidence underscores the influence of bots on social media even after the 2016 U.S. presidential election, providing further context to ongoing revelations and disclosures about influence operations during that election. Furthermore, our results provide evidence of the continued relevance of established, "traditional" media sources on Twitter as opinion leaders.
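Identifying opinion leaders as described above combines a network centrality measure with a follower threshold. The sketch below ranks accounts by eigenvector centrality and keeps only those above 10,000 followers; the toy graph and follower counts are assumptions for illustration, not the study's data or exact procedure.

```python
# Sketch: rank opinion-leader candidates by eigenvector centrality, keeping
# only accounts above a follower threshold; the toy graph is illustrative.
import networkx as nx

# Toy retweet network (undirected for simplicity); follower counts are made up.
G = nx.Graph([("u1", "media1"), ("u2", "media1"), ("u3", "media1"),
              ("u1", "bot1"), ("u2", "bot1"), ("media1", "bot1")])
followers = {"media1": 2_000_000, "bot1": 50_000, "u1": 200, "u2": 90, "u3": 15}

centrality = nx.eigenvector_centrality(G, max_iter=1000)
eligible = {n: c for n, c in centrality.items() if followers.get(n, 0) >= 10_000}
leaders = sorted(eligible, key=eligible.get, reverse=True)
print(leaders)  # e.g. ['media1', 'bot1']
```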
... In addition, the correlation matrix of covariates is presented in Appendix B. Besides the statistical tests performed in methodology selection, state-specific effects are also validated by the diversity identified across American states in terms of culture, economic development, legislation, and voters' preferences. Moreover, the decision is supported by the presence of political polarization (Baker et al., 2020d) and the swing states effect on final results (Howard et al., 2018; Antoniades and Calomiris, 2020). Table 2 presents estimated coefficients for variables included in baseline models. ...
Article
The paper examines the drivers and effects of the United States 2020 presidential election under the uncertainty caused by COVID-19. By considering news-based, financial market, and coronavirus-specific inputs in a panel data framework, the results reveal that COVID-19 affects the candidates' chances. Biden's electorate reacts positively to news regarding unemployment or healthcare, stress levels on financial markets, or the Country Sentiment Index. Trump's opportunities increase with coronavirus indicators or news about populism. However, President-elect Biden must provide solutions for national economic issues like unemployment, the budget deficit or healthcare inequalities. Simultaneously, having extensive prerogatives on trade and investment partnerships influences the mitigation of COVID-19's global effects.
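The baseline models referred to above are panel regressions with state-specific effects. The sketch below shows a generic fixed-effects specification using statsmodels with state dummy variables; the variable names and toy data are assumptions, not the paper's dataset or exact estimator.

```python
# Sketch: pooled OLS with state fixed effects via dummy variables;
# variable names and the toy data are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "biden_odds":  [0.52, 0.55, 0.48, 0.50, 0.61, 0.58],
    "covid_cases": [100, 150, 300, 320, 80, 90],
    "news_unemp":  [0.3, 0.4, 0.6, 0.7, 0.2, 0.25],
    "state":       ["AZ", "AZ", "FL", "FL", "MI", "MI"],
})

# C(state) adds a dummy per state, absorbing state-specific effects.
model = smf.ols("biden_odds ~ covid_cases + news_unemp + C(state)", data=df).fit()
print(model.params)
```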
Article
Full-text available
What kinds of social media users read junk news? We examine the distribution of the most significant sources of junk news in the three months before President Donald Trump's first State of the Union Address. Drawing on a list of sources that consistently publish political news and information that is extremist, sensationalist, conspiratorial, masked commentary, fake news and other forms of junk news, we find that the distribution of such content is unevenly spread across the ideological spectrum. We demonstrate that (1) on Twitter, a network of Trump supporters shares the widest range of known junk news sources and circulates more junk news than all the other groups put together; (2) on Facebook, extreme hard right pages, distinct from Republican pages, share the widest range of known junk news sources and circulate more junk news than all the other audiences put together; (3) on average, the audiences for junk news on Twitter share a wider range of known junk news sources than audiences on Facebook public pages.
Thesis
Full-text available
During the 2016 election, memes were used heavily by individuals and organized groups who wanted to have an impact on the outcome. In the following years, groups provided organized opportunities for individuals to further learn how to utilize memes more effectively, turning this once benign digital artifact into modern propaganda. This study examined memes that were focused on the lead-up to the 2020 U.S. election, specifically memes that contained some element of misleading information. During the study, which collected memes from July 1 – 31, 2020, 60 left-leaning and 60 right-leaning memes were collected from six Facebook groups, for a total of 120 memes. Using mixed-method content and thematic analyses, the memes were examined for propaganda, persuasion, misleading information, and multimodality. They were looked at individually and as a left vs. right comparison. When examining propaganda, almost 75% of the memes collected met all of the criteria for propaganda, and those that did not tended to be more humorous. The memes that contained propaganda were likely to be relevant in the short term and feature moral appeals, pre-giving messages, or esteem (negative) appeals. These memes are likely to come from unofficial sources as a mode of expression and public discussion, and feature a number of techniques of misleading information, the majority being fabricated or manipulated content. When the memes were examined for the type of misleading information used, humor was used most frequently; however, the cumulative total of the other, non-humorous categories showed that memes are a vehicle for subtle and nuanced techniques. Many memes had at least one element that was truthful, lending legitimacy to an overall misleading message. Many memes featured multiple techniques, making fact-checking a difficult process. When examining the multimodal aspects of the memes, this research shows that any unwritten "rules" that memes had when they first came on the scene no longer exist. Misleading political memes were heavily manipulated, with almost 70% of them appearing to have some alteration, and more than 64% using shading and highlight modulation techniques. This study found that the visual elements of the meme are meant to be the main focus, and that the heavy, error-ridden textual elements were included for maximum information without concern for design principles. This study also compared the 60 memes collected from left-leaning Facebook groups and the 60 collected from right-leaning Facebook groups. The messages primarily focused on the two candidates, Democrat Joe Biden and Republican Donald Trump, followed the mainstream news and popular conspiracy theories, and featured very similar techniques. Significant differences were found in the level of accuracy within the message, the number of memes that could be considered propaganda, and the number of memes that appeared to be digitally altered. This study also supports the idea that right-leaning misleading political memes are more frequently disseminated than their left-leaning counterparts.
Chapter
Today, major online social networking websites host millions of user accounts. These websites provide a convenient platform for sharing information and opinions in the form of microblogs. However, the ease of sharing also brings ramifications in the form of fake news, misinformation, and rumors, which have become highly prevalent recently. The impact of fake news dissemination was observed in major political events like the US elections and the Jakarta elections, as well as in the distortion of celebrities' and companies' reputations. Researchers have studied the propagation of fake news over social media websites and have proposed various techniques to combat fake news. In this chapter, we discuss propagation models for misinformation and review fake news mitigation techniques. We also compile a list of datasets used in fake news-related studies. The chapter is concluded with open research questions.
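Propagation models of the kind surveyed in this chapter are often formalized as cascade processes on a network, such as the independent cascade model. The sketch below simulates a single cascade; the graph and the activation probability are illustrative assumptions, not a model from the chapter.

```python
# Sketch: one run of an independent cascade model for misinformation spread;
# the random graph and the 0.3 activation probability are illustrative.
import random
import networkx as nx

def independent_cascade(G, seeds, p=0.3, rng=random.Random(42)):
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and rng.random() < p:
                    active.add(v)     # v adopts (shares) the rumor
                    nxt.append(v)
        frontier = nxt
    return active

G = nx.erdos_renyi_graph(100, 0.05, seed=1)
print(len(independent_cascade(G, seeds=[0])))  # number of users reached
```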
Article
Full-text available
This study, commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs and requested by the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs, assesses the impact of disinformation and strategic political propaganda disseminated through online social media sites. It examines effects on the functioning of the rule of law, democracy and fundamental rights in the EU and its Member States. The study formulates recommendations on how to tackle this threat to human rights, democracy and the rule of law. It specifically addresses the role of social media platform providers in this regard.
Thesis
Full-text available
Digital media misinformation is a threat to democracy and national security in America. In today's media landscape, personalizing people's online experiences has become common practice through the collection of cookies and other personalized user data. This data can be used to identify important information about a particular user: age, race, gender, sexual orientation, political interests, and other seemingly harmless details about computer users are collected in a variety of ways. This user information can then be used to personalize media, including news, social media feeds, and advertisements, among many other things. However, when placed in the wrong hands, this seemingly harmless data, which in many cases enhances and improves the online user experience, can be used maliciously to affect a variety of human interactions and experiences, as well as to alter one's perception of reality. The threat of digital misinformation is therefore critically magnified by the fact that it can be targeted at specific users' experiences based on their demographics.
Book
Full-text available
In this edited volume, we set out to focus on different ways of defining and practising journalism and to trace new channels, experiences and possibilities. In this vein, we opened to discussion, in its various dimensions, the concept of “new journalism”, which we regard as a front for democracy in the face of the global rise of authoritarianism. For us, “new journalism” is a concept that focuses on the quality of the labour and the product rather than invoking professionalism. It denotes a practice that includes multiple and counter-publics in its processes and that is pluralist, solidaristic, participatory, non-commercial or socially entrepreneurial, anti-capitalist, counter-hegemonic and, perhaps most importantly, rhizomatic. In this practice, the traditional hierarchical newsroom gives way to a networked newsroom formed through heterarchical interweaving: semi-institutional, foregrounding individuals, open to intervention by followers, and focused on the production and distribution/sharing of news. The new journalism debate also encompasses questions about fake news, propaganda, and a truth confined within the frame of ideological struggle. Accordingly, those involved in this practice need, beyond technological and digital skills, a basic critical literacy in order to follow and make sense of the constant flow and bombardment of content. Looking at current journalism studies and practices, both academics and practitioners appear to embrace, in the content they produce, the conceptualisation of “new journalism” we dwell on here, yet without clearly defining this emerging field of study. Our volume, “Yeni Gazetecilik. Mecralar, deneyimler, olanaklar” (New Journalism: Channels, Experiences, Possibilities), is an introductory work towards establishing this new conceptual framework in the Turkish literature.
Chapter
Full-text available
This chapter explores the challenges and opportunities of content analysis as a method for researching digitalised forms of propaganda, particularly in hybridised media environments. Digital propaganda is one of the manifestations of post-truth politics, and as such, it is a product of the culture of social interconnectivity as well as the hybridisation of political news media. It, therefore, represents the zeitgeist in communication studies and we problematise digital propaganda through the prism of social change. In doing so, we contextualise digital propaganda within political communication research, specifically studies employing content analysis-based methodologies, keeping in mind that this communicative practice adapts to, adopts features of, and co-evolves along with the media environment it occupies. First, the chapter describes the content analysis methodology and its application within propaganda research. Second, we provide an overview of the research questions content analysis tends to be used to answer in digital propaganda research. Third, we focus our discussion on a critical examination of the content analysis methodology, leading into a discussion of new challenges surrounding the emergence of computational propaganda. Fourth, in the context of the shift towards computational propaganda, the delivery of personalised messages based on analysis of user interests and behaviours, we consider emerging trends in propaganda and what contribution content analysis research can make to understanding them. Fifth, we account for innovation in content analysis and the use of big data. Finally, we conclude with a discussion that develops an understanding of propaganda uses in a fast-moving and ever-evolving communication environment and the future potential of the content analysis methodology as an exploratory and explanatory tool.
Chapter
The authors explore the world's first uses of AI. In the “Bad Bot” section, they look at the negative impact of AI in politics, including the first elections in history won through the use of AI-driven bot and troll propaganda, and how deepfakes could lead to a more dystopian future. In the “Good Bot” section, they focus on positive case studies: starting with the 2021 Tokyo Olympics and health, they explore AI techniques applied from the infinitely small, the Higgs boson, to the infinitely large, dark matter; they meet CIMON at the Space Station; they cover AI in climate change and pioneering UN projects such as “Earth” and “Humanitarian” AI; and in education, they look at the latest uses of AI helping schools and the EU project “Time Machine.” They also examine what is being implemented to tackle the problems raised in the “Bad Bot” section. Finally, the chapter looks at the world's first rebellious behaviour in bots, with humorous examples that will make you think.
Chapter
This chapter combines new insights from economic globalization and digitalization analysis with inequality statistics and new US survey results that show the main concerns of US households and voters, respectively. While rising economic inequality is considered a problem in the US survey, a relative majority of respondents expect large companies to take action to correct excessive inequality, a view that is wishful thinking and is bound to result in sustained voter frustration for the lower half of the US income pyramid. This implies a structural populism problem in the US, a totally new situation, with challenges for North America, Europe, Asia and the world. This new structural US populism hypothesis is linked to deep implications for trade policy and to anti-multilateralism. Meanwhile, the chapter also refutes the Council of Economic Advisers' 2018 study comparing per capita consumption in the US and the Nordic European countries.
Article
Why did Russia's relations with the West shift from cooperation a few decades ago to a new era of confrontation today? Some explanations focus narrowly on changes in the balance of power in the international system, or trace historic parallels and cultural continuities in Russian international behavior. For a complete understanding of Russian foreign policy today, individuals, ideas, and institutions—President Vladimir Putin, Putinism, and autocracy—must be added to the analysis. An examination of three cases of recent Russian intervention (in Ukraine in 2014, Syria in 2015, and the United States in 2016) illuminates the causal influence of these domestic determinants in the making of Russian foreign policy.
Chapter
This chapter combines new findings from the analysis of economic globalization and digitalization with inequality statistics and new US survey results that show the main concerns of US households and voters, respectively. While rising economic inequality is regarded as a problem in the US survey, a relative majority of respondents expect that large companies will take action to correct excessive inequality, a view that is wishful thinking and that will lead to sustained voter frustration for the lower half of the US income pyramid. This implies a structural populism problem in the US and represents a completely new situation, with challenges for North America, Europe, Asia and the world. This new structural US populism hypothesis is linked to far-reaching implications for trade policy and to anti-multilateralism. The chapter also refutes the 2018 study by the Council of Economic Advisers, which compares per capita consumption in the US and the Nordic countries of Europe and claims a large US lead.
Article
Full-text available
Social media is an important source of news and information in the United States. But during the 2016 US presidential election, social media platforms emerged as a breeding ground for influence campaigns, conspiracy, and alternative media. Anecdotally, the nature of political news and information evolved over time, but political communication researchers have yet to develop a comprehensive, grounded, internally consistent typology of the types of sources shared. Rather than chasing a definition of what is popularly known as “fake news,” we produce a grounded typology of what users actually shared and apply rigorous coding and content analysis to define the phenomenon. To understand what social media users are sharing, we analyzed large volumes of political conversations that took place on Twitter during the 2016 presidential campaign and the 2018 State of the Union address in the United States. We developed the concept of “junk news,” which refers to sources that deliberately publish misleading, deceptive, or incorrect information packaged as real news. First, we found a 1:1 ratio of junk news to professionally produced news and information shared by users during the US election in 2016, a ratio that had improved by the State of the Union address in 2018. Second, we discovered that amplifier accounts drove a consistently higher proportion of political communication during the presidential election but accounted for only marginal quantities of traffic during the State of the Union address. Finally, we found that some of the most important units of analysis for general political theory—parties, the state, and policy experts—generated only a fraction of the political communication.
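The 1:1 junk-to-professional ratio reported above boils down to a count comparison over coded link shares. A minimal sketch of that computation is shown below; the category labels and the handful of coded links are invented for illustration and are not the authors' data.

```python
from collections import Counter

# Hypothetical coded link shares: each shared link labelled with a
# category from a source typology (labels are illustrative only).
coded_links = [
    "professional_news", "junk_news", "junk_news", "professional_news",
    "junk_news", "other_political", "professional_news", "junk_news",
]

counts = Counter(coded_links)
ratio = counts["junk_news"] / counts["professional_news"]
print(f"junk news : professional news = {ratio:.2f} : 1")
```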
Chapter
Full-text available
The aim of this study is to understand data journalism, which is gaining importance among new journalism practices, and to analyse the structural features of data journalism stories. Within this framework, the study seeks answers to the following questions: What development process has data journalism followed, and by which technological developments and transformations has it been shaped? Are there differences in the methods used in data journalism? What is the connection between design, software and data journalism? Which multimedia elements make up the interactive news narrative? How are social media and video-sharing platforms used in these projects?
Chapter
Social Media and Democracy, edited by Nathaniel Persily, September 2020
Book
Full-text available
Acknowledgments. I could not imagine finishing this monograph without the encouragement of my first editor, Holly Buchanan at Lexington Books. Although I have written extensively in recent years, I could not imagine finishing up a book project, so my special thanks go to Ms. Buchanan. After her, Bryndee Ryan continued to encourage me, and here comes the book; further thanks go to her. Most of what I have written is a product of long years of teaching and intellectual development at Istanbul Bilgi University. I am very proud to be a faculty member at the communication school here, and I believe it is one of the best things that ever happened to me. This book was written during my sabbatical period. My university generously supported me during that time, and I could write in the comfort of the Anthropology Department at the University of California, Irvine, and the Science and Technology Studies program at MIT. My stays were a result of the Rice Anthropology network, and I cannot say how valuable it was to brainstorm regularly with Prof. George E. Marcus and Prof. Michael M. J. Fischer. I do not claim that their wisdom is rightfully reflected in this manuscript, but it gave me intellectual empowerment and clues for future research and publications. Having such academic mentors is a great fortune in life. I have been involved with many digital personas, activists, colleagues, friends, and beloved ones, including my former student and now friend, Atınç, and my dear brother, Hakan, in Turkey. I am lucky to be surrounded by all these beautiful people. However, I will always miss the civilians who fell during the Gezi Park Protests, to which I devote a chapter. I would therefore like to dedicate this book to those fallen citizens, collectively known as the “Gezi Martyrs.” Boston, MA, May 1, 2019
Article
Full-text available
In the field of social media, the systematic impact that bot users have on the dissemination of public opinion has been a key research concern. To achieve more effective opinion management, it is important to understand how and why behavior differs between bot users and human users. This study compares the differences in behavioral characteristics and diffusion mechanisms between bot users and human users during public opinion dissemination, using public health emergencies as the research setting, and provides specific explanations for the differences. First, the study classified users with bot characteristics and human users by constructing formulas based on user indicator characteristics. Second, it used deep learning methods such as Top2Vec and BERT to extract topics and sentiments, and used social network analysis to construct network graphs and compare network attributes. Finally, the study compared differences in the dissemination of posts published by bot users and human users through multi-factor ANOVA. Significant differences were found in the behavioral characteristics and diffusion mechanisms of bot users and human users. The findings can help guide the public to pay attention to topic shifts and promote the diffusion of positive emotions in social networks, which in turn supports the emergency management of public health events and the maintenance of online order.
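The abstract mentions classifying bot-like and human users by building formulas from user indicator characteristics, but the exact indicators and thresholds are not given. The sketch below is therefore a hypothetical composite score over commonly used profile features, purely to illustrate the general approach rather than the authors' formula.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    tweets_per_day: float
    followers: int
    following: int
    account_age_days: int
    default_profile_image: bool

def bot_indicator_score(user: UserProfile) -> float:
    """Toy composite indicator in [0, 1]; higher values suggest more
    bot-like behaviour. Weights and cut-offs are illustrative assumptions."""
    score = 0.0
    if user.tweets_per_day > 50:                                # unusually high posting rate
        score += 0.4
    if user.following > 0 and user.followers / user.following < 0.1:
        score += 0.3                                            # follows many, followed by few
    if user.account_age_days < 90:                              # very young account
        score += 0.2
    if user.default_profile_image:                              # no custom avatar
        score += 0.1
    return score

suspect = UserProfile(tweets_per_day=120, followers=15, following=900,
                      account_age_days=30, default_profile_image=True)
print(f"bot indicator score: {bot_indicator_score(suspect):.2f}")
```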
Article
Due to new technologies, the speed and volume of disinformation are unprecedented today. As seen in the 2016 US presidential election, especially in the conduct of the Internet Research Agency, this poses challenges and threats for (democratic) political processes and internal State affairs; (democratic) elections in particular are at increasing risk. Disinformation has the potential to sway the outcome of an election and therefore discredits the idea of free and fair elections. Given the growing prevalence of disinformation operations aimed at (democratic) elections, the question arises as to how international law applies to such operations and how States might counter hostile operations launched by their adversaries. From a legal standpoint, such disinformation operations do not fully escape existing international law. However, because of open questions and the geopolitical context, many States refrain from clearly labelling them as internationally wrongful acts. Stretching current international legal norms to cover the issue does not seem to be the optimal solution, and a binding international treaty would also need to overcome various hurdles. The author suggests that disinformation operations aimed at (democratic) elections will most likely be regulated under public international law, if at all, by a combination of custom and bottom-up law-making influencing and reinforcing each other.
Article
Full-text available
The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. However, the World Wide Web (WWW) also allows for the rapid dissemination of unsubstantiated rumors and conspiracy theories that often elicit rapid, large, but naive social responses, such as the recent case of Jade Helm 15, in which a simple military exercise came to be perceived as the beginning of a new civil war in the United States. In this work, we address the determinants governing misinformation spreading through a thorough quantitative analysis. In particular, we focus on how Facebook users consume information related to two distinct narratives: scientific and conspiracy news. We find that, although consumers of scientific and conspiracy stories present similar consumption patterns with respect to content, cascade dynamics differ. Selective exposure to content is the primary driver of content diffusion and generates the formation of homogeneous clusters, i.e., "echo chambers." Indeed, homogeneity appears to be the primary driver of content diffusion, and each echo chamber has its own cascade dynamics. Finally, we introduce a data-driven percolation model mimicking rumor spreading and show that homogeneity and polarization are the main determinants for predicting cascade size.
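The percolation framing can be illustrated with a toy bond-percolation experiment in which an edge transmits content only when the two users hold sufficiently similar opinions; the potential cascade size is then approximated by the largest connected component of the retained edges. This is a simplified sketch inspired by the abstract, not the authors' data-driven model; the graph, opinion values, and similarity tolerances are assumptions.

```python
import random
import networkx as nx

random.seed(1)

# Synthetic friendship graph with a scalar "opinion" per user in [0, 1].
G = nx.watts_strogatz_graph(n=2000, k=10, p=0.1, seed=1)
opinion = {node: random.random() for node in G.nodes}

def retained_subgraph(graph, opinions, tolerance):
    """Keep only edges whose endpoints hold sufficiently similar opinions."""
    H = nx.Graph()
    H.add_nodes_from(graph.nodes)
    for u, v in graph.edges:
        if abs(opinions[u] - opinions[v]) <= tolerance:
            H.add_edge(u, v)
    return H

for tolerance in (0.1, 0.3, 0.5):
    H = retained_subgraph(G, opinion, tolerance)
    giant = max(nx.connected_components(H), key=len)
    print(f"tolerance={tolerance:.1f} -> largest reachable cluster: {len(giant)} users")
```

The larger the tolerance (i.e., the more heterogeneous the pairs that still transmit), the larger the reachable cluster, which mirrors the claim that homogeneity shapes cascade size.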
Article
Full-text available
In this paper we take advantage of recent developments in identifying the demographic characteristics of Twitter users to explore the demographic differences between those who do and do not enable location services and those who do and do not geotag their tweets. We discuss the collation and processing of two datasets: one focusing on enabling geoservices and the other on tweet geotagging. We then investigate how opting in to either of these behaviours is associated with gender, age, class, the language in which tweets are written and the language in which users interact with the Twitter user interface. We find statistically significant differences for both behaviours across all demographic characteristics, although the magnitude of the association differs substantially by factor. We conclude that there are significant demographic variations between those who opt in to geoservices and those who geotag their tweets. Notwithstanding the limitations of the data, we suggest that Twitter users who publish geographical information are not representative of the wider Twitter population.
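The kind of significance test behind such demographic comparisons can be sketched with a chi-square test of independence on a contingency table of, say, gender against geotagging behaviour. The counts below are invented for illustration and are not the paper's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = gender (female, male),
# columns = geotagging behaviour (geotags tweets, does not geotag).
observed = [
    [1200, 8800],   # female users
    [1900, 9100],   # male users
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")
```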
Article
Full-text available
This article provides a review of scientific, peer-reviewed articles that examine the relationship between news sharing and social media in the period from 2004 to 2014. A total of 461 articles were obtained following a literature search in two databases (Communication & Mass Media Complete [CMMC] and ACM), out of which 109 were deemed relevant based on the study’s inclusion criteria. In order to identify general tendencies and to uncover nuanced findings, news sharing research was analyzed both quantitatively and qualitatively. Three central areas of research—news sharing users, content, and networks—were identified and systematically reviewed. In the central concluding section, the results of the review are used to provide a critical diagnosis of current research and suggestions on how to move forward in news sharing research.
Article
Full-text available
The increasing popularity of the social networking service Twitter has made it more central to day-to-day communication, strengthening social relationships and information dissemination. Conversations on Twitter are now being explored as indicators within early warning systems that alert of imminent natural disasters such as earthquakes and aid prompt emergency responses to crime. Producers have nearly limitless access to market perceptions from consumer comments on social media and microblogs. Targeted advertising can be made more effective based on user profile information such as demography, interests and location. While these applications have proven beneficial, the ability to effectively infer the location of Twitter users has even greater value. However, accurately identifying where a message originated, or the author's location, remains a challenge, which continues to drive research in this area. In this paper, we survey a range of techniques applied to infer the location of Twitter users, from the field's inception to the state of the art. We find significant improvements over time in granularity levels and better accuracy, with results driven by refinements to algorithms and the inclusion of more spatial features.
Article
Full-text available
Twitter is a social media giant famous for the exchange of short, 140-character messages called "tweets". In the scientific community, the microblogging site is known for openness in sharing its data. It offers a glimpse into its millions of users and billions of tweets through a "Streaming API" that provides a sample of all tweets matching parameters preset by the API user. The API service has been used by many researchers, companies, and governmental institutions that want to extract knowledge in accordance with a diverse array of questions pertaining to social media. The essential drawback of the Twitter API is the lack of documentation concerning what and how much data users get. This leads researchers to question whether the sampled data is a valid representation of the overall activity on Twitter. In this work we embark on answering this question by comparing data collected using Twitter's sampled API service with data collected using the full, albeit costly, Firehose stream that includes every single published tweet. We compare both datasets using common statistical metrics as well as metrics that allow us to compare topics, networks, and locations of tweets. The results of our work will help researchers and practitioners understand the implications of using the Streaming API.
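One simple way to compare a Streaming API sample with Firehose data, along the lines this abstract describes, is to check how much the top hashtags overlap and how well their frequencies agree in rank. The sketch below uses made-up hashtag counts; the actual study works with full tweet collections and a broader set of metrics.

```python
from collections import Counter
from scipy.stats import spearmanr

# Hypothetical hashtag counts from the two collection methods.
streaming = Counter({"election2016": 900, "maga": 640, "imwithher": 580,
                     "debate": 300, "podestaemails": 260, "vote": 210})
firehose = Counter({"election2016": 9500, "maga": 7100, "imwithher": 6200,
                    "debate": 3600, "vote": 2400, "rigged": 1900})

top_streaming = {tag for tag, _ in streaming.most_common(5)}
top_firehose = {tag for tag, _ in firehose.most_common(5)}
jaccard = len(top_streaming & top_firehose) / len(top_streaming | top_firehose)

shared = sorted(top_streaming & top_firehose)
rho, _ = spearmanr([streaming[t] for t in shared], [firehose[t] for t in shared])

print(f"top-5 Jaccard overlap: {jaccard:.2f}, rank correlation: {rho:.2f}")
```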
Conference Paper
In many Twitter studies, it is important to know where a tweet came from in order to use the tweet content to study regional user behavior. However, researchers using Twitter to understand user behavior often lack sufficient geo-tagged data. Given the huge volume of Twitter data there is a need for accurate automated geolocating solutions. Herein, we present a new method to predict a Twitter user's location based on the information in a single tweet. We integrate text and user profile meta-data into a single model using a convolutional neural network. Our experiments demonstrate that our neural model substantially outperforms baseline methods, achieving 52.8% accuracy and 92.1% accuracy on city-level and country-level prediction respectively.
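The abstract describes a convolutional neural network that combines tweet text with profile metadata in a single model, but does not spell out the architecture. The sketch below is therefore an assumed, simplified version in Keras: a small text-CNN branch and a dense metadata branch concatenated before a city-level softmax. The vocabulary size, sequence length, feature counts, and city count are placeholders, not the authors' settings.

```python
from tensorflow.keras import layers, Model

VOCAB_SIZE = 50_000   # placeholder vocabulary size
MAX_LEN = 40          # placeholder max tokens per tweet
N_META = 10           # placeholder number of profile metadata features
N_CITIES = 3000       # placeholder number of city classes

# Text branch: embed the tweet tokens and apply a 1-D convolution.
text_in = layers.Input(shape=(MAX_LEN,), name="tweet_tokens")
x = layers.Embedding(VOCAB_SIZE, 128)(text_in)
x = layers.Conv1D(256, kernel_size=5, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)

# Metadata branch: numeric/encoded profile fields (e.g. time zone, UI language).
meta_in = layers.Input(shape=(N_META,), name="profile_metadata")
m = layers.Dense(64, activation="relu")(meta_in)

# Merge both branches and predict a city label.
merged = layers.concatenate([x, m])
merged = layers.Dense(256, activation="relu")(merged)
city_out = layers.Dense(N_CITIES, activation="softmax", name="city")(merged)

model = Model(inputs=[text_in, meta_in], outputs=city_out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Country-level prediction can be handled analogously by swapping the output layer for a softmax over country labels or by adding it as a second output head.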
Article
Social and political bots have a small but strategic role in Venezuelan political conversations. These automated scripts generate content through social media platforms and then interact with people. In this preliminary study on the use of political bots in Venezuela, we analyze the tweeting, following and retweeting patterns for the accounts of prominent Venezuelan politicians and prominent Venezuelan bots. We find that bots generate a very small proportion of all the traffic about political life in Venezuela. Bots are used to retweet content from Venezuelan politicians but the effect is subtle in that less than 10 percent of all retweets come from bot-related platforms. Nonetheless, we find that the most active bots are those used by Venezuela's radical opposition. Bots are pretending to be political leaders, government agencies and political parties more than citizens. Finally, bots are promoting innocuous political events more than attacking opponents or spreading misinformation.
Article
Campaigns are complex exercises in the creation, transmission, and mutation of significant political symbols. However, there are important differences between political communication through new media and political communication through traditional media. I argue that the most interesting change in patterns of political communication is in the way political culture is produced, not in the way it is consumed. These changes are presented through the findings from systematic ethnographies of two organizations devoted to digitizing the social contract. DataBank.com is a private data mining company that used to offer its services to wealthier campaigns, but can now sell data to the smallest nascent grassroots movements and individuals. Astroturf-Lobby.org is a political action committee that helps lobbyists seek legislative relief to grievances by helping these groups find and mobilize their sympathetic publics. I analyze the range of new media tools for producing political culture, and with this ethnographic evidence build two theories about the role of new media in advanced democracies: a theory of thin citizenship and a theory about data shadows as a means of political representation.