Article

Social Media, News and Political Information during the US Election: Was Polarizing Content Concentrated in Swing States?

Abstract

US voters shared large volumes of polarizing political news and information in the form of links to content from Russian, WikiLeaks and junk news sources. Was this low-quality political information distributed evenly around the country, or concentrated in swing states and particular parts of the country? In this data memo we apply a tested dictionary of sources to the political news and information shared over Twitter during a ten-day period around the 2016 Presidential Election. Using self-reported location information, we place a third of users by state and create a simple index for the distribution of polarizing content around the country. We find that (1) nationally, Twitter users got more misinformation, polarizing and conspiratorial content than professionally produced news. (2) Users in some states, however, shared more polarizing political news and information than users in other states. (3) Average levels of misinformation were higher in swing states than in uncontested states, even when weighted for the relative size of the user population in each state. We conclude with some observations about the impact of strategically disseminated polarizing information on public life.
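As a rough illustration of how such a state-level index can be computed, the sketch below assumes a table of shared links with a resolved U.S. state, a user identifier and a source category drawn from a dictionary of sources; the column names and the normalisation are illustrative assumptions, not the memo's actual pipeline.

```python
# Illustrative sketch: per-state index of polarizing-content exposure.
# Assumes one row per shared link with columns "state", "user_id" and
# "source_type" (hypothetical schema, not the memo's real data model).
import pandas as pd

def state_polarization_index(links: pd.DataFrame) -> pd.Series:
    """Junk/polarizing links per located user in each state, divided by the
    same quantity computed nationally; values above 1 indicate above-average
    concentrations of polarizing content."""
    junk = links[links["source_type"].isin(["junk_news", "russian", "wikileaks"])]
    junk_per_user = junk.groupby("state").size() / links.groupby("state")["user_id"].nunique()
    national_rate = len(junk) / links["user_id"].nunique()
    return (junk_per_user / national_rate).sort_values(ascending=False)
```

Comparing the resulting values for swing states against uncontested states would reproduce, in spirit, the memo's third finding.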

... A brief review of the relationships between electoral politics and social media is presented later in this article. The work of Howard et al. [14] examines tweets from authors who left some evidence of their physical location in the period leading up to the 2016 U.S. presidential election. The analysis reveals a high concentration of polarized news in tweets associated with swing states that hold a significant number of presidential electors. ...
... In addition to differences in years (2016 versus 2020) and differences in data collection methods (hashtags versus general keywords), our study differs from the work of Howard et al. [14] in some important ways. First, our analysis includes an evaluation of automated accounts, and the classification of news sources is based on the annotations of expert journalists. ...
... As mentioned at the beginning of the article, Howard et al. in [14] conducted a study centered on analyzing tweets related to swing and safe states during the pre-election period of the 2016 U.S. presidential election. Their findings revealed a significant concentration of polarized news in tweets associated with swing states with a significant number of presidential electors. ...
Article
Full-text available
For U.S. presidential elections, most states use the so-called winner-take-all system, in which the state’s presidential electors are awarded to the winning political party in the state after a popular vote phase, regardless of the actual margin of victory. Therefore, election campaigns are especially intense in states where there is no clear direction on which party will be the winning party. These states are often referred to as swing states. To measure the impact of such an election law on the campaigns, we analyze the Twitter activity surrounding the 2020 US preelection debate, with a particular focus on the spread of disinformation. We find that about 88% of the online traffic was associated with swing states. In addition, the sharing of links to unreliable news sources is significantly more prevalent in tweets associated with swing states: in this case, untrustworthy tweets are predominantly generated by automated accounts. Furthermore, we observe that the debate is mostly led by two main communities, one with a predominantly Republican affiliation and the other with accounts of different political orientations. Most of the disinformation comes from the former.
... Mainstream news stories from obscure sources that were propagated on Twitter became one of the central elements of the 2016 US presidential election, and subsequent analysis reveals that many of the stories were largely manipulated or totally fabricated (Howard et al., 2017;Stone and Gordon, 2017). Republican candidate Donald Trump himself, among others, helped to drive the mainstream news agenda with his prolific use of the social media platform (Lynch, 2016;Maheshwari, 2016). ...
... While the Russian infiltration of American social media has been explored (e.g. Chou, 2016;Howard et al., 2017;Lynch, 2016;Stone and Gordon, 2017), this study, which examines the attributes of cross-border content comparisons on Twitter, can provide a foundation for broader understanding of the international scope of social media content. Therefore, the goal of this study was to examine Twitter comments about both candidates in the US, and in five key countries that are of prime importance to American diplomacy and international trade, in order to characterise the content as a foundation for future studies on social media's influence in shaping the news agenda. ...
... In a separate study, these authors also found that non-elite sources had a growing influence with journalists through Twitter (Lewis and Zamith, 2015). One study examined how Twitter propagated 'fake news' in US states (Howard et al., 2017). Studying Twitter content may reveal the extent of biased agenda-setting, with its potential to influence and polarise millions of people both in the US and around the world. ...
Article
Full-text available
A manual content analysis compares 6019 Twitter comments from six countries during the 2016 US presidential election. Twitter comments were positive about Trump and negative about Clinton in Russia, the US and also in India and China. In the UK and Brazil, Twitter comments were largely negative about both candidates. Twitter sources for Clinton comments were more frequently from journalists and news companies, and still more negative than positive in tone. Topics on Twitter varied from those in mainstream news media. This foundational study expands communications research on social media, as well as political communications and international distinctions.
... Specifically, we explore the following questions: (1) to what extent do social bots participate in the information distribution of news in professional media; (2) what role do social bots play in the diffusion of such stories? (3) can social bots become opinion leaders along the diffusion path of professional news stories? ...
... It uses the functionality of social accounts to deliver news and information like a human and can also perform malicious activities such as sending spam, posting harassment, and delivering hate speech. Such social bots can post messages quickly, mass-produce replicated messages, and eventually distribute messages in the form of humanlike users [3]. Social bots are more active compared to regular human users [4], and their purpose is to learn and imitate humans to manipulate public opinion on social media platforms. ...
Article
Full-text available
Social-bot-mediated information manipulation is influencing the public opinion environment, and bots' role and behavior patterns in news proliferation are worth exploring. Based on an analysis of bots' posting frequency, influence, and retweeting relationships, we take the diffusion of The New York Times' coverage of the Xinjiang issue on the overseas social platform Twitter as an example and employ the two-step flow model. It is found that, unlike their indiscriminate posting of news in first-step diffusion, social bots are more inclined to post controversial information in second-step diffusion; in terms of diffusion patterns, although social bots are more engaged in first-step diffusion than in second-step diffusion and can trigger human users to retweet, they are still inferior to humans in terms of influence.
... Research on disinformation is dispersed across numerous scientific fields. In computer science, there is mature work on automatic detection [22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38] and real-world measurement [39][40][41][42][43][44][45][46][47][48][49][50][51][52][53][54][55][56][57][58], but work on responses and countermeasures is comparatively thin (see Section 2.3). ...
... Disinformation campaigns are typically multimodal, exploiting many different social and media channels at once [59]. These campaigns use websites as an important tool: to host content for distribution across platforms, facilitate user tracking, and generate ad revenue [42,45,[60][61][62][63]. Disinformation websites are frequently designed to conceal their provenance and deceive users into believing that they are legitimate news or opinion outlets. Our work examined whether warnings can counter this deception and help users distinguish, contextualize, or avoid disinformation websites. ...
Preprint
Full-text available
Online platforms are using warning messages to counter disinformation, but current approaches are not evidence-based and appear ineffective. We designed and empirically evaluated new disinformation warnings by drawing from the research that led to effective security warnings. In a laboratory study, we found that contextual warnings are easily ignored, but interstitial warnings are highly effective at inducing subjects to visit alternative websites. We then investigated how comprehension and risk perception moderate warning effects by comparing eight interstitial warning designs. This second study validated that interstitial warnings have a strong effect and found that while warning design impacts comprehension and risk perception, neither attribute resulted in a significant behavioral difference. Our work provides the first empirical evidence that disinformation warnings can have a strong effect on users' information-seeking behaviors, shows a path forward for effective warnings, and contributes scalable, repeatable methods for establishing evidence on the effects of disinformation warnings.
... The attack struck the entire nation with some states getting larger doses of malicious content than others. Out of the sixteen states labeled as swing states by the National Constitution Center at the time, twelve were exposed to above average levels of polarizing content (Howard et al., 2018). ...
... It was again humans and automated accounts working together in that effort. Fredheim and Gallacher (2018) and Kollanyi et al. (2018) document the increasing rate of anonymous activity during key social moments such as political events and elections. While the first work studies two months of 2018 and concludes that around 35% of Twitter activity around content mentioning NATO can be assigned to anonymous or low-quality accounts, the second study focuses on the week prior to election day of the 2016 US presidential election. ...
... Using the stated locations of Twitter users, Howard, Kollanyi, Bradshaw, and Neudert (2017) found that American users in swing states were targeted to a greater degree by junk news and alleged Russian accounts. That ordinary foreigners also tweet about U.S. elections has been documented in research on nation branding by Sevin and Uzunoğlu (2017). ...
Conference Paper
Full-text available
The way people use media to learn about issues outside their immediate community has long been an important area of media research. It has been argued that social media environments open up spaces for people to practice forms of cosmopolitan communication. This paper considers the cosmopolitan dimension of the 2020 U.S. presidential race on Twitter. It examines engagement by Twitter users in Scandinavian countries along both a local–cosmopolitan axis and a monitorial–networked axis, creating a typology that reflects the geospatial and interactive dimensions of online political communication. Direct interactions between Scandinavian and American users and their ideological makeup are also analyzed. The findings indicate that users are generally monitorial on a global scale and more networked within their national communities. However, both left- and right-wing users engage in cosmopolitan communication. This paper contributes to cosmopolitan theory in the digital age and offers a geospatial perspective on citizen interaction online.
... Additionally, given the relevance of immigration among the UK public, anti-immigration propaganda has the potential to significantly influence political outcomes. Previous studies have suggested that the dissemination of highly partisan and misleading content may have affected last-minute electoral decisions [73], particularly in highly contested areas [74]. In the context of health pandemics, the speed at which content spreads can significantly impede public efforts to contain them, emphasizing the importance of mitigating the rapid spread of anti-immigration content [75]. ...
Article
Full-text available
Immigration is one of the most salient topics in public debate. Social media heavily influences opinions on immigration, often sparking polarized debates and offline tensions. Studying 220,870 immigration-related tweets in the UK, we assessed the extent of polarization, key content creators and disseminators, and the speed of content dissemination. We identify a high degree of online polarization between pro- and anti-immigration communities. We found that the anti-immigration community is small but denser and more active than the pro-immigration community, with the top 1% of users responsible for over 23% of anti-immigration tweets and 21% of retweets. We also discovered that anti-immigration content spreads 1.66 times faster than pro-immigration messages and that bots have minimal impact on content dissemination. Our findings suggest that identifying and tracking highly active users could curb anti-immigration sentiment, potentially easing social polarization and shaping broader societal attitudes toward migration.
... The 2016 and 2018 U.S. elections demonstrated the vulnerability of domestic politics to false stories originating from foreign countries, and the World Economic Forum (WEF) identifies massive and systematic digital disinformation as one of the top global risks in its 2019 Global Risks Report [2]. A rich academic literature also examines the effects of fake news and foreign interference on politics [3][4][5][6]. Yet relatively few papers have considered the role of foreign actors in spreading corporate fake news about a country's firms. ...
Article
Full-text available
Although a rich academic literature examines the use of fake news by foreign actors for political manipulation, there is limited research on potential foreign intervention in capital markets. To address this gap, we construct a comprehensive database of (negative) fake news regarding U.S. firms by scraping prominent fact-checking sites. We identify the accounts that spread the news on Twitter (now X) and use machine-learning techniques to infer the geographic locations of these fake news spreaders. Our analysis reveals that corporate fake news is more likely than corporate non-fake news to be spread by foreign accounts. At the country level, corporate fake news is more likely to originate from African and Middle Eastern countries and tends to increase during periods of high geopolitical tension. At the firm level, firms operating in uncertain information environments and strategic industries are more likely to be targeted by foreign accounts. Overall, our findings provide initial evidence of foreign-originating misinformation in capital markets and thus have important policy implications.
... Via case study of the 2017 Chief Executive election in Hong Kong, Lee (2018) examined different types of intermediary actors who engaged in agenda-steering and frame construction on social media. There are also studies analyzing data about political news and information shared over Twitter and/or Facebook during elections (Glowacki et al., 2018;Hedman et al., 2018;Howard et al., 2017). ...
Article
Full-text available
Social media exerts a considerable influence on the democratic process in the modern digital age. Meanwhile, political information appears to be a precious commodity in the political process and a functioning democracy. Despite the proliferation and globalization of research in these areas, fewer efforts have been made to systematically review and integrate discoveries from previous studies to assess the current state of research on social media use for political information. This article aims to systematically collect, condense, analyze and report the holistic, empirical findings from the extant literature between 2010 and 2020 to offer a rich overview of research on social media use for political information. The Systematic Literature Review (SLR) integrated both automatic and manual search strategies for data collection. Out of 292 papers, 23 primary studies were identified to answer a defined set of research questions. Political communication and political participation were the two most popular themes. Uses and gratifications theory appeared to be the most used theory. Scholars are keen to explore the indirect variables that exist at any stage in the process between consuming political information on social media and political participation. The review results suggest that although social media is widely used for political information, the body of knowledge in this domain has received limited attention and is reported only roughly. An explicit analytical discussion of the review results is offered, with identified knowledge gaps that call for further exploration, followed by a conclusion.
... With the emergence of social media and mobile media in the early 2000s, the spread of misinformation, a communication phenomenon that has existed for thousands of years, has accelerated at a speed faster than ever (Ha et al., 2021). Around the 2016 U.S. presidential election, misinformation and its news form, fake news, have been suggested to play a prominent role in influencing voters' choices and accelerating political polarization (Howard et al., 2018). Later, the COVID-19 pandemic facilitated a surge in misinformation and fake news worldwide, which produced devastating effects (Roozenbeek et al., 2020). ...
Article
Full-text available
Misinformation constitutes a societal practice and challenge that necessitates unwavering attention worldwide. In this essay, we discussed the theoretical advancement and empirical evidence in misinformation research, encompassing a review of definitions of misinformation, research orientations, research perspectives, and vulnerable groups. We then reviewed the misinformation fueled by generative artificial intelligence (AI) and the evolving conceptualization of literacy. To counter AI-fueled misinformation, we argue that the development of ethical AI necessitates regulations from AI practitioners and legislation, and ethical uses of AI require efforts in AI literacy education and research. The AI literacy should include (a) users’ understanding and critical evaluation of knowledge, values, and cultures within which AI systems function, and their implications on the AI-generated content, (b) users’ strategic interpretation and proper use of AI-generated content, and (c) users’ utilization of feedback mechanisms to promote institutional management of the AI power.
... In the U.S., Obama was criticized for flooding social media with automated messages intended to attract citizens' attention and support in the 2008 and 2012 elections; Mitt Romney, the Republican candidate in the 2012 U.S. election, was accused of buying thousands of Twitter followers in an attempt to appear more popular; and Donald Trump used social bots and fake profiles on Twitter and other social networks to push favorable opinions about his candidacy, increase his followers, create an artificial perception of greater popularity and attack his opponents by spreading fake news or subliminally distorting their image, among other things (Bessi and Ferrara, 2016; Howard et al., 2017; Molina et al., 2017). In France, bots linked to candidates Marine Le Pen and Emmanuel Macron were detected in the days before the 2017 presidential election (Ferrara, 2017, 2020). ...
Article
Full-text available
This article sets out to confront the concept of public opinion with the reality and expectations of a digitalized society, in order to analyze whether the current algorithmic colonization demands a new structural transformation of public opinion or rather the retirement of this concept. Massive data and metadata have become a double-edged sword for a digitally hyperconnected democratic society. On the one hand, the enormous potential of big data and its various techniques and technologies for exploiting data and metadata make it a commodity coveted by the system of institutions that make up both the state and civil society; on the other, the serious negative impacts that its instrumental and irresponsible use is producing, and may yet produce, make big data a controversial and heavily criticized tool that distances us from any attempt to build digital citizenship. Although algorithmic democracy does not rest on public opinion alone, the aim is to show the incompatibility between artificial public opinion and democracy. Our guiding thread is the Habermasian concept of public opinion, since it is precisely the strength of civil society, through the design of spaces for participation within it, from which we can draw the potential needed to confront the current algorithmic colonization and to recover an autonomous and critical deliberation without which there is no public opinion and, therefore, no democracy.
... During the 2016 U.S. election campaign, the speeches of candidates Donald Trump and Hillary Clinton were subjected to fact-checking, which showed that their content contained 70% and 30% fake news respectively (Gutiérrez-Rubí, 2017). Moreover, according to data from the Oxford University project on computational propaganda and political discourse (Howard, Kollanyi, Bradshaw and Neudert, 2017), in the weeks before the election Twitter users contributed to the information overload by sharing as large a volume of fake, polarized and conspiratorial news as of content produced by professional media: "Junk news, characterized by ideological extremism, misinformation and the intention to persuade readers to respect or hate a candidate or policy based on emotional appeals, was just as, if not more, prevalent than the amount of information produced by professional news organizations" (p. 5). The situation has worsened during the coronavirus crisis: according to preliminary results of a Carnegie Mellon University study (2020), almost half of the Twitter users who posted tweets about the coronavirus during February matched the behavioral patterns of bots. ...
Article
This research investigates the media competence of a group of 13 boys and 12 girls regarding the COVID-19 crisis and their perception of it. Information was gathered through their participation in the forums of a virtual platform and the construction of their accounts of the events. In the forums they discuss three pieces of fake news about the pandemic, their possible solutions and consequences, and analyze the hate speech in a tweet. Finally, placing themselves in the future, once the crisis has passed, they produce a narrative about how they would recount what they lived through. The results show their concern and interest in health-related news, the polarization in their analysis of hate speech, and the presence of emotions in their narratives. The study concludes that critical digital literacy is needed, especially in dramatic contexts of emotional vulnerability such as crises, which favor the spread of hoaxes.
... This attention has come from whistleblowers (Wylie, 2019), law enforcement (U.S. Department of Justice, 2019a, 2019b; US v. Internet Research Agency, LLC, 2018), legislators (U.S. Senate Select Committee on Intelligence, 2019a, 2019b), and scholars (DiResta et al., 2019; Howard et al., 2017). More recently, similar campaigns have attempted to discredit COVID-19 vaccination efforts, and public concern about how misinformation inhibits democratic processes remains high. ...
Article
Full-text available
Records are persistent representations of activities created by partakers, observers, or their authorized proxies. People are generally willing to trust vital records such as birth, death, and marriage certificates. However, conspiracy theories and other misinformation may negatively impact perceptions of such documents, particularly when they are associated with a significant person or event. This paper explores the relationship between archival records and trustworthiness by reporting results of a survey that asked genealogists about their perceptions of 44th U.S. President Barack Obama's birth certificate, which was then at the center of the “birtherism” conspiracy. We found that although most participants perceived the birth certificate as trustworthy, others engaged in a biased review, considering it not trustworthy because of the news and politics surrounding it. These findings suggest that a conspiracy theory can act as a moderating variable that undermines the efficacy of normal or recommended practices and procedures for evaluating online information such as birth certificates. We provide recommendations and propose strategies for archivists to disseminate correct information to counteract the spread of misinformation about the authenticity of vital records, and we discuss future directions for research.
... It is understood that disinformation can promote false understanding through different means, not necessarily based on false identities, but by using true but misleading content to trigger false inferences (Fallis, 2015), and promoting misperceptions about reality and social consensus (McKay & Tenove, 2021). Regarding the effects this has, disinformation often seeks to amplify social divisions, through discursive means of "us" and/against "the other", including the propagation of conspiracy theories, and using polarising and sensationalist content that is highly emotional and partisan (Howard et al., 2017). Reddi et al. (2021) noted that disinformation in US politics works at the service of existing power structures and identified anti-black racism, misogyny and xenophobic sentiment as topics susceptible to disinformation. ...
Article
Full-text available
Democracy is based on individuals' ability to give their opinions freely. To do this, they must have access to a multitude of reliable information sources (Dahl, 1998), and this greatly depends on the characteristics of their media environments. Today, one of the main issues individuals face is the significant amount of disinformation circulating through social networks. This study focuses on parliamentary disinformation. It examines how parliamentarians contribute to generating information disorder (Wardle & Derakhshan, 2017) in the digital public space. Through an exploratory content analysis − a descriptive content analysis of 2,307 messages posted on Twitter accounts of parliamentary spokespeople and representatives of the main list of each political party in the Spanish Lower House of Parliament − we explore disinformation rhetoric. The results allow us to conclude that, while the volume of messages shared by parliamentarians on issues susceptible to disinformation is relatively low (14% of tweets), both the themes of the tweets (COVID-19, sex-based violence, migrants or LGBTI), as well as their tone and argumentative and discursive lines, contribute to generating distrust through institutional criticism or their peers. The study deepens current knowledge of the disinformation generated by political elites, key agents of the construction of polarising narratives.
... However, the problems caused by social bots, related rumors, and fake news cannot be ignored [11]. During breaking events, social bots imitate human users and deliver false news and fake information, and even carry out malicious activities such as spreading conspiracies and publishing hate speech; this disrupts the normal network order and ecology and causes users to form negative emotions such as panic, greatly hindering network governance [12]. Meanwhile, social bots can create false popularity [13] and make the complex social media environment more uncertain through group pressure and network contagion. ...
Article
Full-text available
In the field of social media, the systematic impact that bot users have on the dissemination of public opinion has been a key research concern. To achieve more effective opinion management, it is important to understand how and why behavior differs between bot users and human users. The study compares the differences in behavioral characteristics and diffusion mechanisms between bot users and human users during public opinion dissemination, using public health emergencies as the research target, and further provides specific explanations for the differences. First, the study classified users into bot-like and human users by establishing formulas over user indicator characteristics. Second, the study used deep learning methods such as Top2Vec and BERT to extract topics and sentiments, and used social network analysis methods to construct network graphs and compare network attribute features. Finally, the study compared the differences in information dissemination between posts published by bot users and human users through multi-factor ANOVA. It was found that there were significant differences in behavioral characteristics and diffusion mechanisms between bot users and human users. The findings can help guide the public to pay attention to topic shifting and promote the diffusion of positive emotions in social networks, which in turn can better support the management of emergencies and the maintenance of online order.
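The indicator-based classification described here can be pictured with a toy score of the following kind; the features, weights and cut-off are hypothetical placeholders, not the study's actual formulas.

```python
# Toy illustration of an indicator-based bot/human split. The features, weights
# and cut-off are hypothetical placeholders, not the study's formulas.
from dataclasses import dataclass

@dataclass
class UserStats:
    posts_per_day: float   # posting frequency
    retweet_ratio: float   # share of activity that is retweets (0..1)
    followers: int
    following: int

def bot_score(u: UserStats) -> float:
    """Combine normalised indicators into a 0..1 score; higher = more bot-like."""
    freq = min(u.posts_per_day / 72.0, 1.0)                       # sustained very high posting rate
    imbalance = min(u.following / max(u.followers, 1) / 10.0, 1.0)  # following far more than followed
    return (freq + u.retweet_ratio + imbalance) / 3.0

def is_bot_like(u: UserStats, cutoff: float = 0.6) -> bool:
    # Arbitrary cut-off for illustration; the study derives its own thresholds.
    return bot_score(u) >= cutoff
```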
... Previously, a large-scale survey (Barthel et al., 2016) reported that 32% of US adults often encountered completely made-up stories on social media. Our study also supports the findings of other studies that fake news on social media is published mainly to support or oppose political agendas (Lazer et al., 2018; Howard et al., 2018) and religious agendas (Boyd and Ellison, 2007; Howard et al., 2017; Farkas et al., 2018), and to trigger people to take certain actions (Kang and Goldman, 2016). Another study from the USA indicates that 71% of adults either 'often' or 'sometimes' see completely made-up political news online (Barthel et al., 2016), and many U.S. adults have expressed concern about the impact of fake news stories on the 2016 Presidential election in the US (Allcott and Gentzkow, 2017; Silverman, 2016). Regarding news literacy skills, the findings show that librarians have a moderate level of understanding of the concept of news literacy and demonstrated a moderate level of perceived news literacy skills (Table 9). ...
Article
Introduction. This study was conducted with the objective of determining the ways librarians deal with fake news, as well as assessing the status of their news literacy skills in combating the fake news phenomenon. Method. A cross-sectional survey was conducted in public and private sector university libraries of Punjab, Pakistan. The study's population comprised university librarians working at the rank of assistant librarian or above. Analysis. One hundred and eighty questionnaires were distributed in both print and online form, of which 128 were returned (response rate 71.11%). Descriptive and inferential statistics were applied to report the data using the Statistical Package for Social Science (SPSS version 22). Results. Librarians 'sometimes' determine the authenticity of a news story, e.g., 'check it from other sources in case of doubts.' The responses for all eleven statements relating to news literacy skills ranged between 3.05 and 3.36 on a five-point Likert-type scale, indicating that respondents 'somewhat' agreed with their perceived news literacy skills. Conclusions. University librarians are not fully acquainted with the aspect of news trustworthiness on social media, which affects their news acceptance and sharing behaviour. They also have a moderate level of conceptual understanding of news literacy.
... In particular, U.S. intelligence documents, as well as internal investigations carried out by Facebook (STAMOS, 2017), gave rise to the well-founded suspicion that voters' political will had been manipulated by specific, targeted disinformation activities conducted on social media, with decisive effects on the electoral result: it has been calculated that the spread of fake news (SCIORTINO, 2020) on social media was concentrated above all in the "swing states", the states hanging in the balance that are decisive for the electoral contest (BESSI, FERRARA, 2017; HOWARD, KOLLANYI, BRADSHAW, NEUDERT, 2017). ...
Article
Full-text available
This paper intends to discuss the issue of regulation of social networks as a media space employed by political actors (individuals or organizations) for the formation of public opinion and the construction of consensus in the context of the electoral campaigns: in more synthetic terms, social networks as an instrument of electoral propaganda. The main subject of my essay will concern, therefore, the problems de iure condito and de iure condendo about the subjection of communication on social media to rules aimed at ensuring a fair comparison between parties, lists, candidates in the electoral competition and, with it, the value of freedom and authenticity of the vote that emanates from article 48 of the Italian Constitution. This substantiates a constitutional directive in favor of a regulatory intervention of the legislator, called upon to achieve an adequate balance between the many rights and interests involved.
... The content of fake news in the literature has been compared with polarized and sensational content. The information of such content is usually characterized as highly emotional and highly partisan (Allcott and Gentzkow, 2017;Howard et al., 2018). The results revealed interesting insights into the context of fake news detection. ...
Article
Purpose: The purpose of this paper is to examine the factors that significantly affect the prediction of fake news from the virality theory perspective. The paper looks at a mix of emotion-driven content, sentimental resonance, topic modeling and linguistic features of news articles to predict the probability of fake news. Design/methodology/approach: A data set of over 12,000 articles was chosen to develop a model for fake news detection. Machine learning algorithms and natural language processing techniques were used to handle big data with efficiency. Lexicon-based emotion analysis provided eight kinds of emotions used in the article text. The cluster of topics was extracted using topic modeling (five topics), while sentiment analysis provided the resonance between the title and the text. Linguistic features were added to the coding outcomes to develop a logistic regression predictive model for testing the significant variables. Other machine learning algorithms were also executed and compared. Findings: The results revealed that positive emotions in a text lower the probability of news being fake. It was also found that sensational content like illegal activities and crime-related content were associated with fake news. The news title and the text exhibiting similar sentiments were found to have lower chances of being fake. News titles with more words and content with fewer words were found to impact fake news detection significantly. Practical implications: Several systems and social media platforms today are trying to implement fake news detection methods to filter the content. This research provides exciting parameters from a virality theory perspective that could help develop automated fake news detectors. Originality/value: While several studies have explored fake news detection, this study uses a new perspective on virality theory. It also introduces new parameters like sentimental resonance that could help predict fake news. This study deals with an extensive data set and uses advanced natural language processing to automate the coding techniques in developing the prediction model.
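A compact sketch of this kind of feature-based model is given below; the features (title/body length and a title–body sentiment-resonance term) are a simplified stand-in for the paper's emotion, topic and linguistic variables, and the labels are assumed to come from an annotated, fact-checked corpus.

```python
# Sketch of a feature-based logistic model in the spirit of the paper. The
# features here stand in for the paper's emotion, topic and linguistic
# variables; labels are assumed (1 = fake, 0 = genuine).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def build_features(titles, bodies, title_sentiment, body_sentiment):
    title_len = np.array([len(t.split()) for t in titles])
    body_len = np.array([len(b.split()) for b in bodies])
    # Resonance: how closely the title's sentiment agrees with the body's.
    resonance = 1.0 - np.abs(np.asarray(title_sentiment) - np.asarray(body_sentiment))
    return np.column_stack([title_len, body_len, resonance])

def fit_fake_news_model(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return model, model.score(X_te, y_te)  # held-out accuracy
```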
... One part of this variation is a function of language use, but another is derivative of political events. The 2016 US election is a good example, as it was rife with both misleading content widely displayed on social media platforms and widespread politicization of the term 'fake news' itself (Allcott and Gentzkow 2017;Howard et al. 2017;Howard et al. 2018;Grinberg et al. 2019). Other countries had less pronounced salience of the conceptual frame of 'fake news'. ...
Article
Does perceived exposure on social media to mis/disinformation affect user perceptions of social media newsfeed algorithmic bias? Using survey data from eight liberal democratic countries and propensity score matching statistical techniques, this paper details the average treatment effect (ATE) of self-reported perceived exposure to mis/disinformation on perceptions that social media newsfeed algorithms are biased. Overall, the results show that self-reported perceived exposure to misleading content on social media increases perceptions of algorithmic bias. The results also detail interesting platform/country variation in the estimated average treatment effect. The ATE of perceived fake news exposure on perceptions of algorithmic bias is similar on Twitter and Facebook but is amplified in countries with high society-wide issue salience surrounding ‘fake news’ and, especially, ‘algorithmic bias’.
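The propensity-score-matching step can be sketched as follows; the variable names are illustrative survey measures rather than the paper's items, and a full analysis would add balance checks and caliper constraints.

```python
# Sketch of propensity score matching for the ATE described above. Covariates X,
# a 0/1 exposure indicator and an outcome scale are assumed; names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def matched_ate(X, treated, outcome):
    """treated: perceived exposure to mis/disinformation (0/1);
    outcome: perceived newsfeed algorithmic bias (numeric scale)."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
    _, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
    matched_controls = c_idx[match.ravel()]
    # Average gap in perceived bias between treated units and their matched controls.
    return float(np.mean(outcome[t_idx] - outcome[matched_controls]))
```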
... In their study of the different types of information shared on social media during the 2016 US election, Howard et al. (2017) established a typology of sources for information being shared on social media platforms, included in Figure 2. This classification is interesting in light of the social media content analyzed in the present study, as it compartmentalizes content according to defined classifications, and it is of interest to assess its applicability to our sample. ...
Article
Full-text available
Ephemeral media has become a staple of today’s social media ecology. This study advances the first exploratory analysis of Instagram Stories as a format for political communication. Through an initial content analysis of 832 stories in three verified Vox accounts and a secondary content and discourse analysis of 114 stories, we delve into the strategies used by right-wing party Vox in Spain to portray immigration as an issue for ideological positioning. The findings shed light onto the ways in which the representation of migrants is employed as an instrument for anti-migratory policy support, through the construction of a very specific profile of a migrant in terms of age and gender and the exclusion of significant migrant populations from the argument. Moreover, the party employs the content creation functionalities of Instagram Stories to construct arguments and storylines where diverse information sources converge, effectively bypassing traditional media and reaching their supporter base directly.
... Previous work has highlighted the prominent role that bots played during the U.S. 2016 election (Badawy et al., 2018;Howard et al., 2018;Kriel & Pavliuc, 2019;Ruck et al., 2019). Our findings provide evidence that bots remained very influential even after the elections. ...
Article
Twitter gained new levels of political prominence with Donald J. Trump’s use of the platform. Although previous work has studied the content of Trump’s tweets, there remains a dearth of research exploring who opinion leaders were in the early days of his presidency and what they were tweeting about. Therefore, this study retroactively investigates opinion leaders on Twitter during Trump’s first month in office and explores what those influencers tweeted about. We uniquely used a historical data set of 3 million tweets that contained the word “trump” and used Latent Dirichlet Allocation, a probabilistic algorithmic model, to extract topics from both general Twitter users and opinion leaders. Opinion leaders were identified by measuring eigenvector centrality and removing users with fewer than 10,000 followers. The top 1% of users with the highest eigenvector centrality scores (N = 303) were sampled, and their attributes were manually coded. We found that most Twitter-based opinion leaders are either media outlets/journalists with a left-center bias or social bots. Immigration was found to be a key topic during our study period. Our empirical evidence underscores the influence of bots on social media even after the 2016 U.S. presidential election, providing further context to ongoing revelations and disclosures about influence operations during that election. Furthermore, our results provide evidence of the continued relevance of established, “traditional” media sources on Twitter as opinion leaders.
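A minimal sketch of the two analysis steps combined here, eigenvector centrality on the interaction graph followed by LDA topic extraction, is shown below; the graph construction, follower filter and parameters are simplified assumptions rather than the study's exact pipeline.

```python
# Sketch: eigenvector centrality to surface opinion leaders, then LDA over
# their tweets. Simplified; parameters and preprocessing are illustrative.
import networkx as nx
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def opinion_leaders(edges, follower_counts, min_followers=10_000, top_share=0.01):
    """edges: (retweeter, original_author) pairs; follower_counts: user -> followers."""
    g = nx.DiGraph()
    g.add_edges_from(edges)
    centrality = nx.eigenvector_centrality(g, max_iter=1000)
    eligible = {u: c for u, c in centrality.items()
                if follower_counts.get(u, 0) >= min_followers}
    k = max(1, int(len(eligible) * top_share))  # keep the top 1% by centrality
    return sorted(eligible, key=eligible.get, reverse=True)[:k]

def extract_topics(tweets, n_topics=10):
    dtm = CountVectorizer(stop_words="english", max_features=5000).fit_transform(tweets)
    return LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(dtm)
```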
... In addition, the correlation matrix of covariates is presented in Appendix B. Besides the statistical tests performed in methodology selection, state-specific effects are also validated by the diversity identified across American states in terms of culture, economic development, legislation, and voters' preferences. Moreover, the decision is supported by the presence of political polarization (Baker et al., 2020d) and the swing-state effect on final results (Howard et al., 2018; Antoniades and Calomiris, 2020). Table 2 presents estimated coefficients for the variables included in the baseline models. ...
Article
The paper examines the drivers and effects of the United States 2020 presidential election under the uncertainty caused by COVID-19. By considering news-based, financial-market, and coronavirus-specific inputs in a panel data framework, the results reveal that COVID-19 affects the candidates’ chances. Biden's electorate reacts positively to news regarding unemployment or healthcare, to stress levels on financial markets, and to the Country Sentiment Index. Trump's opportunities increase with coronavirus indicators or news about populism. However, President-elect Biden must provide solutions for national economic issues like unemployment, the budget deficit or healthcare inequalities. At the same time, his extensive prerogatives over trade and investment partnerships influence the mitigation of COVID-19's global effects.
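A hedged sketch of a state-fixed-effects panel specification in the spirit of this setup is given below; the column names and the formula are illustrative placeholders, not the paper's actual model.

```python
# Hedged sketch of a state fixed-effects panel regression. Column names
# ("support", "unemployment_news", ...) are illustrative placeholders.
import statsmodels.formula.api as smf

def fit_state_fixed_effects(panel):
    """OLS with state dummies absorbing time-invariant state characteristics;
    robust (HC1) standard errors."""
    formula = ("support ~ unemployment_news + healthcare_news + financial_stress"
               " + covid_cases + sentiment_index + C(state)")
    return smf.ols(formula, data=panel).fit(cov_type="HC1")
```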
... These SM platforms are frequently used to look for information for the purposes of social networking, marketing, reading user reviews, daily routines, religion, food products, disasters (floods), and education and research (Kubiak, 2017; Li et al., 2018; Martínez-Ruiz et al., 2018; Sutherland et al., 2018; Thakur & Chander, 2017; Wickramanayake & Jika, 2018). A few previous studies also show that social media involves political discussions and the exchange of opinions on national and international political issues, which have influenced the behavior, mindset and attitudes of young adults (Hassan, 2018; Howard et al., 2018; Kahne & Bowyer, 2018; Stanley, 2017; Tucker et al., 2018). Likewise, social media have been used to create hype around socio-political issues in recent years, particularly the Panama Leaks in Pakistan. ...
Article
Full-text available
Background: Social media (SM) have become popular among all kinds of people due to their instant and dynamic communication ability. The substantial use of social media as a source of political information has prompted researchers to investigate SM usage patterns around the socio-political issues of society. Objective: The aim of this study was to investigate the use of social media as a source of political information regarding the Panama Leaks in Pakistan. Method: A quantitative research approach based on the survey method was used to collect primary data from a sample of 500 educated adults conveniently available in Lahore city of the Punjab province of Pakistan. Descriptive and inferential statistics were used for data analysis in SPSS-25. Findings: The findings revealed that the majority of the educated adults used social media platforms (i.e. Facebook, WhatsApp, YouTube, Twitter and Wikipedia) on a daily basis. The educated adults commonly acquired information to know the historical perspective of the Panama Leaks (PL); update themselves with general discussions and opinions; understand the political and economic conditions arising from the PL outbreak; be aware of court proceedings/judgments on the PL; and get information for entertainment, education and research.
... On the other hand, such large-scale diffusion into daily routines requires an increased understanding of the risks and consequences for individuals concerning data that are wilfully shared online in a variety of platforms and situations. For instance, recent studies have focused on the role of social media in amplifying fake news propagation (Allcott & Gentzkow, 2017), hate speech (Mondal et al., 2017), and its impact on influencing political debates such as BREXIT (Del Vicario et al., 2017) and the 2016 US Presidential election (Howard et al., 2018). The revelation related to Facebook providing unfettered access to personal information about over 87 million users to Cambridge Analytica (Isaak & Hanna, 2018) has fueled the debate over not only the societal impact of those technologies but also about user's privacy and their data rights. ...
... One could conjecture that the motivation of foreign information operations is to sow discord and to reduce the unity of a society's populace. We remain politically neutral, with the hope that divisive language is not used intentionally to polarize others and that, in cases of legitimate promotion of already divisive topics, polarization can be functionally minimized rather than unintentionally creating further division of an audience while advancing politically charged causes such as healthcare or social security reform (Howard, 2018). It may not be apparent how this happens, but common devices identified in the FLC portion of this competition, such as flag waving, i.e. conflating the opposing viewpoint with being unpatriotic, are one example of many possible. ...
Chapter
Political misinformation is a danger to society, and echo chambers exacerbate the spread of and exposure to misinformation, creating harms as severe as those associated with the January 6 US insurrection. Thus, it is important to understand who is most susceptible to believing it. The current study builds on previous work from Rhodes (Polit. Commun. 39(1), 1–22 (2021) [3]) and aims to explore whether certain groups within the US Republican Party are more susceptible to believing political misinformation than other groups within the Republican Party. Findings indicate that Republicans who identify as having a ‘strong’ political affiliation are significantly more likely to believe political misinformation than those Republicans who identify as having a ‘not very strong’ political affiliation. While Rhodes (Polit. Commun. 39(1), 1–22 (2021) [3]) found that echo chambers did not impact the entirety of Republicans in their sample, the current study examined whether echo chambers interacted significantly with the strength of political affiliation. However, no significant interaction was found, indicating that echo chambers impacted neither ‘strong’ Republicans nor ‘not very strong’ Republicans. The results provide implications for which groups of people are most susceptible to believing political misinformation and should be the priority in directing ways to mitigate its believability.
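The moderation analysis described here, testing whether the echo-chamber measure interacts with the strength of political affiliation, can be sketched with a logistic model and an interaction term; the variable names are hypothetical survey items, not the study's instrument.

```python
# Sketch of the moderation test: a logistic model with an interaction between
# affiliation strength and an echo-chamber measure. Variable names are
# hypothetical; the outcome is assumed to be binary.
import statsmodels.formula.api as smf

def interaction_test(df):
    model = smf.logit("believes_misinfo ~ strong_affiliation * echo_chamber", data=df).fit()
    # A non-significant interaction term mirrors the chapter's null finding.
    return model.params, model.pvalues["strong_affiliation:echo_chamber"]
```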
Article
Full-text available
The proliferation of fake news on social media platforms poses significant challenges to society and individuals, leading to negative impacts. As the tactics employed by purveyors of fake news continue to evolve, there is an urgent need for automatic fake news detection (FND) to mitigate its adverse social consequences. Machine learning (ML) and deep learning (DL) techniques have emerged as promising approaches for characterising and identifying fake news content. This paper presents an extensive review of previous studies aiming to understand and combat the dissemination of fake news. The review begins by exploring the definitions of fake news proposed in the literature and delves into related terms and psychological and scientific theories that shed light on why people believe and disseminate fake news. Subsequently, advanced ML and DL techniques for FND are discussed in detail, focusing on three main feature categories: content-based, context-based, and hybrid-based features. Additionally, the review summarises the characteristics of fake news, commonly used datasets, and the methodologies employed in existing studies. Furthermore, the review identifies the challenges current FND studies encounter and highlights areas that require further investigation in future research. By offering a comprehensive overview of the field, this survey aims to serve as a guide for researchers working on FND, providing valuable insights for developing effective FND mechanisms in the era of technological advancements.
Article
The dissemination of fabricated information is not a new phenomenon in human society. However, recent developments in social media have massively increased the creation and dissemination of this information. This study employed a scoping review method to ascertain fabricated information contexts, regulatory frameworks, and impediments in regulating the information. Google and Google Scholar search engines were used to identify documents published between 2006 and 2022 on fabricated information and regulatory frameworks. The data in these studies were subjected to thematic analysis. The study reveals that Facebook and Twitter produce a large quantity of fabricated information and that most of the information is created and disseminated from political and health contexts. Besides, the study shows that despite the available regulatory frameworks for curbing fabricated information, the problem persists. This has been attributed to diverse challenges associated with the regulation of information. A universal mechanism and regulatory framework should be enacted to regulate fabricated information effectively.
Article
Persuasion is a process that aims to utilize (true or false) information to change people’s attitudes in relation to something, usually as a precursor to behavioural change. Its use is prevalent in democratic societies, which do not, in principle, permit censorship of information or the use of force to enact power. The transition of information to the internet, particularly with the rise of social media, together with the capacity to capture, store and process big data, and advances in machine learning, have transformed the way modern persuasion is conducted. This has led to new opportunities for persuaders, but also to well-documented instances of abuse: fake news, Cambridge Analytica, foreign interference in elections, etc. We investigate large-scale technology-based persuasion, with the help of three case studies derived from secondary sources, in order to identify and describe the underlying technology architecture and propose issues for future research, including a number of ethical concerns.
Chapter
The new internet and digital technologies have truly accelerated and improved media functions and operations in modern society. Like the developed nations, sub-Saharan African countries have benefitted immensely from adopting new media tools to generate, access, disseminate, store, and retrieve information. Since the basic function of the media is to inform the public, digital tools and various internet platforms have exemplified this role by increasing the volume and spread of news information in today's network society. In fact, the current information era is one characterized by an inundating volume of data and a flood of information. However, with such an incredible overload of information, new problems have emerged; the anonymous nature of most of these internet platforms has permitted highly adulterated and unethical news content to contaminate the digital space. Sadly, much credible news information competes or gets mixed with the whirlpool of disinformation and news pollutants.
Article
Full-text available
With the acceleration of human society’s digitization and the application of innovative technologies to emerging media, popular social media platforms are inundated by fresh news and multimedia content from multiple more or less reliable sources. This abundance of circulating and accessible information and content has intensified the difficulty of separating good, real, and true information from bad, false, and fake information. As has been shown, most unwanted content is created automatically using bots (automated accounts supported by artificial intelligence), and it is difficult for authorities and the respective media platforms to combat the proliferation of such malicious, pervasive, and artificially intelligent entities. In this article, we propose using content originating from automated accounts (bots) to compete with a harmful rumor and reduce the speed of its propagation on a given social media platform, by modeling, with differential equations and dynamical systems, the underlying relationship between circulating contents that relate to the same topic and are of comparable interest to the respective online communities. We studied the proposed model qualitatively and quantitatively and found that peaceful coexistence could be obtained under certain conditions, and that improving the controlled social bot’s content attractiveness and visibility has a significant impact on the long-term behavior of the system, depending on the control parameters.
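The competition dynamics described here can be illustrated with a generic two-population model of the Lotka-Volterra type; this is a stand-in under assumed parameters, not the authors' actual system of equations.

```python
# Generic two-population competition model illustrating counter-content from
# controlled bots competing with a rumor. Parameters and initial values are
# assumptions, not the authors' equations.
import numpy as np
from scipy.integrate import odeint

def competition(y, t, r1, r2, k, a12, a21):
    """y = (rumor reach, counter-content reach); a12/a21 are cross-suppression terms."""
    rumor, counter = y
    d_rumor = r1 * rumor * (1 - (rumor + a12 * counter) / k)
    d_counter = r2 * counter * (1 - (counter + a21 * rumor) / k)
    return [d_rumor, d_counter]

t = np.linspace(0, 60, 600)
trajectory = odeint(competition, y0=[100.0, 10.0], t=t,
                    args=(0.4, 0.6, 10_000, 1.2, 0.8))
# Increasing the counter-content's growth rate or its suppressive effect on the
# rumor (r2, a12) shifts the long-term state toward coexistence or rumor decline.
```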
Article
This article examines 3,517 Facebook ads created by Russia’s Internet Research Agency (IRA) between June 2015 and August 2017 in its Active Measures disinformation campaign targeting the 2016 U.S. presidential election. We aimed to unearth the relationship between ad engagement (ad clicks) and 40 features related to the ads’ metadata, psychological meaning, and sentiment. The purpose of our analysis was to (1) understand the relationship between engagement and features, (2) find the most relevant feature subsets to predict engagement via feature selection, and (3) find the semantic topics that best characterize the data set via topic modeling. We found that investment features (e.g., ad spend, ad lifetime), caption length, and sentiment were the top features predicting users’ engagement with the ads. In addition, positive sentiment ads were more engaging than negative ads, and psycholinguistic features (e.g., use of religion-relevant words) were identified as highly important in the makeup of an engaging disinformation ad. Linear support vector machines (SVMs) and logistic regression classifiers achieved the highest mean F scores (93.6%), revealing that the optimal feature subset contains 12 and six features, respectively. Finally, we corroborate the findings of previous research that the IRA specifically targeted Americans on divisive ad topics (e.g., LGBT rights) and advance a definition of disinformation advertising.
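The feature-subset search reported above can be approximated with recursive feature elimination over a linear SVM or logistic regression; the scoring choice and data handling below are illustrative assumptions, not the article's exact procedure.

```python
# Sketch: recursively eliminate ad features and keep the subset that maximises
# F score for a linear SVM or logistic regression. Data loading is omitted.
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

def best_feature_subset(X, y, estimator: str = "svm"):
    """X: ad feature matrix; y: binarized engagement (e.g. above-median ad clicks)."""
    base = LinearSVC(dual=False) if estimator == "svm" else LogisticRegression(max_iter=1000)
    selector = RFECV(base, step=1, cv=5, scoring="f1").fit(X, y)
    return selector.support_, selector.n_features_  # kept-feature mask and its size
```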
Thesis
Full-text available
During the 2016 election, memes were used heavily by individuals and organized groups who wanted to have an impact on the outcome. In the following years, groups provided organized opportunities for individuals to further learn how to utilize memes more effectively, turning this once benign digital artifact into modern propaganda. This study examined memes that were focused on the lead-up to the 2020 U.S. election, specifically memes that contained some element of misleading information. During the study, which collected memes from July 1–31, 2020, 60 left-leaning and 60 right-leaning memes were collected from six Facebook groups, for a total of 120 memes. Using mixed-method content and thematic analyses, the memes were examined for propaganda, persuasion, misleading information, and multimodality. They were looked at individually and as a left vs. right comparison. When examining propaganda, almost 75% of the memes collected met all of the criteria for propaganda, and those that did not tended to be more humorous. The memes that contained propaganda were likely to be relevant in the short term and feature moral appeals, pre-giving messages, or esteem (negative) appeals. These memes are likely to come from unofficial sources as a mode of expression and public discussion, and feature a number of techniques of misleading information, the majority being fabricated or manipulated content. When the memes were examined for the type of misleading information used, humor was used the most frequently; however, the cumulative share of the other, non-humorous categories showed that memes are a vehicle for subtle and nuanced techniques. Many memes had at least one element that was truthful, lending legitimacy to an overall misleading message. Many memes featured multiple techniques, making fact-checking a difficult process. When examining the multimodal aspects of the memes, this research shows that any unwritten “rules” that memes had when they first came on the scene no longer exist. Misleading political memes were heavily manipulated, with almost 70% of them appearing to have some alteration, and more than 64% using shading and highlight modulation techniques. This study found that the visual elements of the meme are meant to be the main focus, and that the heavy, error-ridden textual elements were included for maximum information without concern for design principles. This study also compared the 60 memes collected from left-leaning Facebook groups and the 60 collected from right-leaning Facebook groups. The messages primarily focused on the two candidates, Democrat Joe Biden and Republican Donald Trump, followed the mainstream news and popular conspiracy theories, and featured very similar techniques. Significant differences were found in the level of accuracy within the message, the number of memes that could be considered propaganda, and the number of memes that appeared to be digitally altered. This study also supports the idea that right-leaning misleading political memes are more frequently disseminated than their left-leaning counterparts.
Chapter
Today, major online social networking websites host millions of user accounts. These websites provide a convenient platform for sharing information and opinions in the form of microblogs. However, the ease of sharing also brings ramifications in the form of fake news, misinformation, and rumors, which have become highly prevalent in recent years. The impact of fake news dissemination has been observed in major political events such as the US and Jakarta elections, as well as in damage to the reputations of celebrities and companies. Researchers have studied the propagation of fake news over social media websites and have proposed various techniques to combat it. In this chapter, we discuss propagation models for misinformation and review fake news mitigation techniques. We also compile a list of datasets used in fake news-related studies. The chapter concludes with open research questions.
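To make the idea of a propagation model concrete, here is a minimal, illustrative sketch of the independent cascade model, one common formalization of how misinformation spreads over a network; it is a generic example, not a model taken from this chapter.

    import random
    import networkx as nx

    def independent_cascade(graph, seeds, p=0.05, rng=None):
        # Each newly activated node gets one chance to activate each
        # inactive neighbour with probability p.
        rng = rng or random.Random(42)
        active, frontier = set(seeds), list(seeds)
        while frontier:
            new_frontier = []
            for node in frontier:
                for neighbour in graph.neighbors(node):
                    if neighbour not in active and rng.random() < p:
                        active.add(neighbour)
                        new_frontier.append(neighbour)
            frontier = new_frontier
        return active

    G = nx.barabasi_albert_graph(1000, 3, seed=1)   # scale-free stand-in for a social network
    reached = independent_cascade(G, seeds=[0])
    print("Cascade reached", len(reached), "of", G.number_of_nodes(), "nodes")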
Chapter
In public debate, the periodically recurring buzzword "social bots" is mostly given a negative connotation: these programs are said to be primarily responsible for the targeted dissemination of fake news, disinformation and hate speech, and may ultimately even pose a threat to democracy. However, the actual influence of these computer programs, often labelled "Meinungsroboter" (opinion robots) in Germany, remains contested. Nevertheless, the fact that the mere deployment of such programs generates this level of resonance makes a basic assessment of the underlying mechanisms necessary.
Thesis
Full-text available
Digital media misinformation is a threat to democracy and national security in America, because in today's media landscape the ability to personalize people's experiences online has become common practice through the collection of cookies and other personalized user data. This data can be used to identify important information about a particular user: age, race, gender, sexual orientation, political interests, and other seemingly harmless information about computer users is collected in a variety of ways. This user information can be used to personalize media, including news, social media feeds, and advertisements, among many other things. However, when placed in the wrong hands, this seemingly harmless data, which in many cases enhances and improves the online user experience, can be used maliciously to affect a variety of human interactions and experiences, as well as to alter one's perception of reality. The threat of digital misinformation is therefore critically magnified by the fact that it can be targeted at specific users based on their demographics.
Chapter
The authors explore world-first uses of AI. In the "Bad Bot" section, they look at the negative impact of AI in politics, including the first elections in history won through AI-driven bot and troll propaganda, and how this could lead to a more dystopian future with deepfakes. In the "Good Bot" section, they focus on positive case studies: starting with the 2021 Tokyo Olympics and health, they explore AI techniques applied from the infinitesimally small, the Higgs boson, to the infinitely large, dark matter; readers meet Cimon at the Space Station; and the section covers AI in climate change and pioneering UN projects such as "Earth" and "Humanitarian" AI. In education, they look at the latest uses of AI helping schools and the EU project "Time Machine." They also look at what is being implemented to tackle the problems raised in the "Bad Bot" section. The chapter closes with the world's first examples of rebellious behaviour in bots, with amusing examples that will make the reader think.
Article
Why did Russia's relations with the West shift from cooperation a few decades ago to a new era of confrontation today? Some explanations focus narrowly on changes in the balance of power in the international system, or trace historic parallels and cultural continuities in Russian international behavior. For a complete understanding of Russian foreign policy today, individuals, ideas, and institutions—President Vladimir Putin, Putinism, and autocracy—must be added to the analysis. An examination of three cases of recent Russian intervention (in Ukraine in 2014, Syria in 2015, and the United States in 2016) illuminates the causal influence of these domestic determinants in the making of Russian foreign policy.
Chapter
This chapter combines new findings from the analysis of economic globalization and digitalization with inequality statistics and new US survey results that reveal the main concerns of US households and voters. While rising economic inequality is regarded as a problem in the US survey, a relative majority of respondents expect large companies to take action to correct excessive inequality, a view that amounts to wishful thinking and that will lead to persistent voter frustration in the lower half of the US income pyramid. This implies a structural problem of populism in the United States and represents an entirely new situation, with challenges for North America, Europe, Asia and the world. This new structural US populism hypothesis has far-reaching implications for trade policy and is linked to anti-multilateralism. The chapter also refutes the 2018 study by the Council of Economic Advisers, which compares per-capita consumption in the United States and the Nordic countries of Europe and claims a large US lead.
Chapter
Full-text available
The aim of this study is to understand data journalism, which is gaining importance within new journalism practices, and to analyze the structural features of data journalism stories. Within this framework, the study seeks answers to the following questions: What kind of development process has data journalism followed, and which technological developments and transformations have shaped it? Are there differences in the methods used in data journalism? What is the connection between design, software and data journalism? Which multimedia contents make up interactive news narratives? How are social media and video-sharing platforms used in these projects?
Chapter
Social Media and Democracy, edited by Nathaniel Persily, September 2020
Book
Full-text available
In this edited volume, we wanted to focus on the different ways of defining and practicing journalism and to trace new media, experiences and possibilities. In this direction, we opened to discussion, in its various dimensions, the concept of "new journalism," which we see as a front for democracy in the face of the rising tendency toward authoritarianism around the world. For us, "new journalism" is a concept that focuses on the quality of the labor and the product rather than making a claim to professionalism. It expresses a practice that includes multiple/counter publics in its processes and that is pluralist, solidaristic, participatory, non-commercial or socially entrepreneurial, anti-capitalist, counter-hegemonic and, perhaps most importantly, rhizomatic. In this practice, the traditional hierarchical newsroom structure gives way to a networked newsroom formed through heterarchical interpenetration: semi-institutional, foregrounding individuals, open to the intervention of followers, and focused on the production and distribution/sharing of news. The discussion of new journalism also encompasses questions about fake news, propaganda and the reality of a truth confined to the framework of ideological struggle. Accordingly, those involved in this practice need, in addition to technological and digital skills, a basic critical literacy in order to follow and make sense of the constant flow and bombardment of content. Looking at current journalism studies and practices, it appears that both academics and practitioners adopt, in the content they produce, the conceptualization of "new journalism" that we emphasize here, but do not offer a clear definition of this emerging field of study. Our volume, "Yeni Gazetecilik. Mecralar, deneyimler, olanaklar" (New Journalism: Media, experiences, possibilities), is intended as an introductory work toward building this new conceptual framework in the Turkish literature.
Article
Due to new technologies, the speed and volume of disinformation are unprecedented today. As seen in the 2016 US presidential election, especially in the conduct of the Internet Research Agency, this poses challenges and threats to the (democratic) political processes of internal state affairs; (democratic) elections in particular are at increasing risk. Disinformation has the potential to sway the outcome of an election and therefore discredits the idea of free and fair elections. Given the growing prevalence of disinformation operations aimed at (democratic) elections, the question arises as to how international law applies to such operations and how states might counter hostile operations launched by their adversaries. From a legal standpoint, such disinformation operations do not fully escape existing international law. However, due to open questions and the geopolitical context, many states refrain from clearly labelling them as internationally wrongful acts. Stretching current international legal norms to cover the issue does not seem to be the optimal solution, and a binding international treaty would also need to overcome various hurdles. The author suggests that disinformation operations aimed at (democratic) elections will most likely be regulated, if at all, by a combination of custom and bottom-up law-making influencing and reinforcing each other.
Book
Full-text available
Acknowledgments: I could not have imagined finishing this monograph without the encouragement of my first editor, Holly Buchanan at Lexington Books. Although I have written extensively in recent years, I could not imagine finishing a book project, so my special thanks go to Ms. Buchanan. After her, Bryndee Ryan continued to encourage me, and the book is now here; my thanks go to her as well. Most of what I have written is the product of long years of teaching and intellectual development at Istanbul Bilgi University. I am very proud to be a faculty member at the communication school here, and I believe it is one of the best things that has ever happened to me. This book was written during my sabbatical. My university generously supported me during that period, and I was able to write in the comfort of stays at the Anthropology Department at the University of California, Irvine and at the Science and Technology Studies program at MIT. Those stays came about through the Rice Anthropology network, and I cannot say how valuable it was to brainstorm regularly with Prof. George E. Marcus and Prof. Michael M. J. Fischer. I do not claim that their wisdom is rightfully reflected in this manuscript, but it gave me intellectual empowerment and clues for future research and publications. Having such academic mentors is a great piece of fortune in life. I have been involved with many digital personas, activists, colleagues, friends, and beloved ones, including my former student and now friend, Atınç, and my dear brother, Hakan, in Turkey. I am lucky to be surrounded by all these beautiful people. However, I will always miss the civilians who fell during the Gezi Park Protests, to which I devote a chapter. Thus, I would like to dedicate this book to those fallen citizens, collectively known as the "Gezi Martyrs." Boston, MA, May 1, 2019
Article
Full-text available
Significance: The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. However, the World Wide Web is a fruitful environment for the massive diffusion of unverified rumors. In this work, using a massive quantitative analysis of Facebook, we show that information related to distinct narratives (conspiracy theories and scientific news) generates homogeneous and polarized communities (i.e., echo chambers) with similar information-consumption patterns. We then derive a data-driven percolation model of rumor spreading that demonstrates that homogeneity and polarization are the main determinants of cascade size.
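The sketch below is a loose, toy illustration of a percolation-style rumor cascade in which transmission requires opinion homogeneity between neighbours. It is only meant to convey the intuition that homogeneity drives cascade size; it is not the paper's actual data-driven model, and all parameters are invented.

    import random
    import networkx as nx

    def rumor_cascade(graph, opinions, seed_node, similarity=0.2, p=0.5, rng=None):
        # A rumor crosses an edge only when the endpoints hold similar opinions.
        rng = rng or random.Random(7)
        active, frontier = {seed_node}, [seed_node]
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.neighbors(u):
                    if v not in active and abs(opinions[u] - opinions[v]) < similarity \
                            and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        return len(active)

    rng = random.Random(7)
    G = nx.erdos_renyi_graph(2000, 0.004, seed=3)
    opinions = {n: rng.random() for n in G}   # 0 = science-leaning, 1 = conspiracy-leaning
    for sim in (0.1, 0.3, 0.9):
        size = rumor_cascade(G, opinions, seed_node=0, similarity=sim, rng=random.Random(7))
        print("similarity threshold", sim, "-> cascade size", size)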
Article
Full-text available
In this paper we take advantage of recent developments in identifying the demographic characteristics of Twitter users to explore the demographic differences between those who do and do not enable location services and those who do and do not geotag their tweets. We discuss the collation and processing of two datasets: one focusing on enabling geoservices and the other on tweet geotagging. We then investigate how opting in to either of these behaviours is associated with gender, age, class, the language in which tweets are written, and the language in which users interact with the Twitter user interface. We find statistically significant differences for both behaviours across all demographic characteristics, although the magnitude of association differs substantially by factor. We conclude that there are significant demographic variations between those who opt in to geoservices and those who geotag their tweets. Notwithstanding the limitations of the data, we suggest that Twitter users who publish geographical information are not representative of the wider Twitter population.
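For readers unfamiliar with this kind of analysis, the snippet below shows one standard way to test an association of this sort: a chi-square test of independence between a demographic characteristic and geotagging behaviour on an invented contingency table. The numbers are illustrative only.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: two demographic groups; columns: [geotags tweets, does not geotag].
    table = np.array([[420, 9580],
                      [610, 9390]])
    chi2, p_value, dof, expected = chi2_contingency(table)
    print("chi2 =", round(chi2, 1), " dof =", dof, " p =", p_value)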
Article
Full-text available
This article provides a review of scientific, peer-reviewed articles that examine the relationship between news sharing and social media in the period from 2004 to 2014. A total of 461 articles were obtained following a literature search in two databases (Communication & Mass Media Complete [CMMC] and ACM), out of which 109 were deemed relevant based on the study’s inclusion criteria. In order to identify general tendencies and to uncover nuanced findings, news sharing research was analyzed both quantitatively and qualitatively. Three central areas of research—news sharing users, content, and networks—were identified and systematically reviewed. In the central concluding section, the results of the review are used to provide a critical diagnosis of current research and suggestions on how to move forward in news sharing research.
Article
Full-text available
The increasing popularity of the social networking service Twitter has made it more involved in day-to-day communication, strengthening social relationships and information dissemination. Conversations on Twitter are now being explored as indicators within early warning systems to alert of imminent natural disasters such as earthquakes, and to aid prompt emergency responses to crime. Producers are privileged to have limitless access to market perception from consumer comments on social media and microblogs. Targeted advertising can be made more effective based on user profile information such as demography, interests and location. While these applications have proven beneficial, the ability to effectively infer the location of Twitter users has even more immense value. However, accurately identifying where a message originated, or the author's location, remains a challenge, which continues to drive research in this area. In this paper, we survey a range of techniques applied to infer the location of Twitter users, from inception to the state of the art. We find significant improvements over time in granularity levels and accuracy, driven by refinements to algorithms and the inclusion of more spatial features.
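One of the simplest families of techniques covered by such surveys is gazetteer matching against the free-text profile location field. The sketch below is a toy version with an invented gazetteer, not a method taken from the survey itself.

    import re

    GAZETTEER = {                       # tiny invented gazetteer
        "detroit": ("Michigan", "US"),
        "ann arbor": ("Michigan", "US"),
        "miami": ("Florida", "US"),
        "london": (None, "GB"),
    }

    def infer_location(profile_location):
        # Return (state, country) if any gazetteer entry appears in the free-text field.
        text = profile_location.lower()
        for place, region in GAZETTEER.items():
            if re.search(r"\b" + re.escape(place) + r"\b", text):
                return region
        return None

    print(infer_location("Ann Arbor, MI"))       # ('Michigan', 'US')
    print(infer_location("somewhere on earth"))  # None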
Article
Full-text available
Twitter is a social media giant famous for the exchange of short, 140-character messages called "tweets". In the scientific community, the microblogging site is known for openness in sharing its data. It provides a glance into its millions of users and billions of tweets through a "Streaming API" which provides a sample of all tweets matching some parameters preset by the API user. The API service has been used by many researchers, companies, and governmental institutions that want to extract knowledge in accordance with a diverse array of questions pertaining to social media. The essential drawback of the Twitter API is the lack of documentation concerning what and how much data users get. This leads researchers to question whether the sampled data is a valid representation of the overall activity on Twitter. In this work we embark on answering this question by comparing data collected using Twitter's sampled API service with data collected using the full, albeit costly, Firehose stream that includes every single published tweet. We compare both datasets using common statistical metrics as well as metrics that allow us to compare topics, networks, and locations of tweets. The results of our work will help researchers and practitioners understand the implications of using the Streaming API.
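As an illustration of how a sampled stream can be compared with the full stream, the snippet below computes the Jensen-Shannon distance between hashtag frequency distributions from two made-up collections; the metric choice and data are assumptions for the example, not the paper's exact procedure.

    from collections import Counter

    import numpy as np
    from scipy.spatial.distance import jensenshannon

    def hashtag_distribution(tweets, vocabulary):
        # Normalized hashtag frequencies over a fixed vocabulary.
        counts = Counter(tag for tweet in tweets for tag in tweet["hashtags"])
        freqs = np.array([counts[tag] for tag in vocabulary], dtype=float)
        return freqs / freqs.sum()

    firehose = [{"hashtags": ["election"]}] * 700 + [{"hashtags": ["maga"]}] * 300
    sampled = [{"hashtags": ["election"]}] * 65 + [{"hashtags": ["maga"]}] * 35

    vocab = ["election", "maga"]
    p = hashtag_distribution(firehose, vocab)
    q = hashtag_distribution(sampled, vocab)
    print("Jensen-Shannon distance:", round(jensenshannon(p, q), 4))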
Conference Paper
In many Twitter studies, it is important to know where a tweet came from in order to use the tweet content to study regional user behavior. However, researchers using Twitter to understand user behavior often lack sufficient geo-tagged data. Given the huge volume of Twitter data there is a need for accurate automated geolocating solutions. Herein, we present a new method to predict a Twitter user's location based on the information in a single tweet. We integrate text and user profile meta-data into a single model using a convolutional neural network. Our experiments demonstrate that our neural model substantially outperforms baseline methods, achieving 52.8% accuracy and 92.1% accuracy on city-level and country-level prediction respectively.
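A rough sketch of the kind of architecture described, a convolutional network over tweet tokens whose pooled features are concatenated with profile metadata before classification, is given below. All dimensions and names are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class TweetLocationCNN(nn.Module):
        def __init__(self, vocab_size=20000, embed_dim=100, n_meta=8, n_classes=50):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.conv = nn.Conv1d(embed_dim, 128, kernel_size=3, padding=1)
            self.classifier = nn.Sequential(
                nn.Linear(128 + n_meta, 64), nn.ReLU(), nn.Linear(64, n_classes)
            )

        def forward(self, token_ids, metadata):
            x = self.embedding(token_ids).transpose(1, 2)   # (batch, embed_dim, seq_len)
            x = torch.relu(self.conv(x)).amax(dim=2)        # global max pooling -> (batch, 128)
            return self.classifier(torch.cat([x, metadata], dim=1))

    model = TweetLocationCNN()
    tokens = torch.randint(1, 20000, (4, 30))     # four tweets, 30 token ids each
    meta = torch.rand(4, 8)                       # e.g. placeholder profile metadata features
    print(model(tokens, meta).shape)              # torch.Size([4, 50]) city logits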
Article
Social and political bots have a small but strategic role in Venezuelan political conversations. These automated scripts generate content through social media platforms and then interact with people. In this preliminary study on the use of political bots in Venezuela, we analyze the tweeting, following and retweeting patterns for the accounts of prominent Venezuelan politicians and prominent Venezuelan bots. We find that bots generate a very small proportion of all the traffic about political life in Venezuela. Bots are used to retweet content from Venezuelan politicians but the effect is subtle in that less than 10 percent of all retweets come from bot-related platforms. Nonetheless, we find that the most active bots are those used by Venezuela's radical opposition. Bots are pretending to be political leaders, government agencies and political parties more than citizens. Finally, bots are promoting innocuous political events more than attacking opponents or spreading misinformation.
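The kind of descriptive statistic reported above can be computed as in the toy example below, which measures the share of retweets whose posting platform appears on a hypothetical list of bot-associated sources; the source names and data are invented.

    import pandas as pd

    BOT_SOURCES = {"twittbot.net", "IFTTT", "dlvr.it"}   # hypothetical bot-associated platforms

    retweets = pd.DataFrame({
        "source": ["Twitter for Android", "twittbot.net", "Twitter Web Client",
                   "IFTTT", "Twitter for iPhone", "Twitter Web Client"],
    })
    bot_share = retweets["source"].isin(BOT_SOURCES).mean()
    print(f"{bot_share:.1%} of retweets came from bot-associated platforms")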
Article
Campaigns are complex exercises in the creation, transmission, and mutation of significant political symbols. However, there are important differences between political communication through new media and political communication through traditional media. I argue that the most interesting change in patterns of political communication is in the way political culture is produced, not in the way it is consumed. These changes are presented through the findings from systematic ethnographies of two organizations devoted to digitizing the social contract. DataBank.com is a private data mining company that used to offer its services to wealthier campaigns, but can now sell data to the smallest nascent grassroots movements and individuals. Astroturf-Lobby.org is a political action committee that helps lobbyists seek legislative relief to grievances by helping these groups find and mobilize their sympathetic publics. I analyze the range of new media tools for producing political culture, and with this ethnographic evidence build two theories about the role of new media in advanced democracies: a theory of thin citizenship and a theory about data shadows as a means of political representation.
Parkinson, H. J. Click and elect: how fake news helped Donald Trump win a real election. The Guardian (2016).
Read, M. Donald Trump Won Because of Facebook. New York Magazine (2016).
Dewey, C. Facebook Fake-News Writer: 'I Think Donald Trump is in the White House Because of Me'. The Washington Post (2016).
Howard, P., Kollanyi, B. & Woolley, S. Bots and Automation over Twitter during the U.S. Election. Project on Computational Propaganda, Oxford, UK (2016).
NCC Staff. A recent voting history of the 15 battleground states. National Constitution Center. Available at: https://constitutioncenter.org/blog/voting-history-ofthe-15-battleground-states (accessed 22 September 2017).
Gallacher, J., Kaminska, M., Kollanyi, B., Yasseri, T. & Howard, P. N. Social Media and News Sources during the 2017 UK General Election (2017).
Howard, P. N., Bolsover, G., Kollanyi, B., Bradshaw, S. & Neudert, L.-M. Junk News and Bots during the U.S. Election: What Were Michigan Voters Sharing Over Twitter? (2017).
Kollanyi, B., Howard, P. N. & Woolley, S. C. Bots and Automation over Twitter during the Third U.S. Presidential Debate. Project on Computational Propaganda (2016).