Article

Information Operations in Turkey: Manufacturing Resilience with Free Twitter Accounts


Abstract

Following the 2016 US elections, Twitter launched its Information Operations (IO) hub, where it archives account activity connected to state-linked information operations. In June 2020, Twitter took down and released a set of accounts linked to Turkey's ruling political party (AKP). We investigate these accounts in the aftermath of the takedown to explore whether AKP-linked operations are ongoing and to understand the strategies they use to remain resilient to disruption. We collect live accounts that appear to be part of the same network, ~30% of which have been suspended by Twitter since our collection. We create a BERT-based classifier that shows similarity between these two networks, develop a taxonomy to categorize these accounts, find direct sequel accounts between the Turkish takedown and the live accounts, and find evidence that Turkish IO actors deliberately construct their network to withstand large-scale shutdown by utilizing explicit and implicit signals of coordination. We compare our findings from the Turkish operation to Russian and Chinese IO on Twitter and find that Turkey's IO utilizes a unique group structure to remain resilient. Our work highlights the fundamental imbalance between IO actors, who can quickly and easily create free accounts, and the social media platforms, which spend significant resources on detection and removal, and contributes novel findings about Turkish IO on Twitter.
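The abstract does not describe the classifier in detail. The following is a minimal sketch, assuming a HuggingFace fine-tuning setup with the BERTurk checkpoint and placeholder tweet lists (takedown_tweets, organic_tweets, live_tweets are assumptions, not the authors' data), of how a BERT-based classifier could be used to measure similarity between an archived takedown network and a set of live accounts:

# Minimal sketch (not the authors' code): fine-tune a Turkish BERT model to
# separate tweets from the 2020 takedown set from a background sample, then
# score tweets from the live accounts. Checkpoint name and the three tweet
# lists are illustrative assumptions.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "dbmdz/bert-base-turkish-cased"   # BERTurk; assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

def encode(texts, labels):
    enc = tokenizer(texts, truncation=True, padding=True, max_length=128,
                    return_tensors="pt")
    return TensorDataset(enc["input_ids"], enc["attention_mask"],
                         torch.tensor(labels))

# takedown_tweets / organic_tweets / live_tweets: placeholder lists of strings
train = encode(takedown_tweets + organic_tweets,
               [1] * len(takedown_tweets) + [0] * len(organic_tweets))
loader = DataLoader(train, batch_size=16, shuffle=True)
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(2):
    for input_ids, attention_mask, labels in loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask,
                    labels=labels)
        out.loss.backward()
        optim.step()
        optim.zero_grad()

# Score tweets from the live network: a high mean P(takedown) suggests the
# live accounts' content resembles the archived operation.
model.eval()
with torch.no_grad():
    enc = tokenizer(live_tweets, truncation=True, padding=True,
                    max_length=128, return_tensors="pt")
    probs = model(**enc).logits.softmax(dim=-1)[:, 1]
print("mean similarity score:", probs.mean().item())

Under a setup like this, a high mean score on the live accounts' tweets would indicate that their content is hard to distinguish from the archived operation, which is one way a classifier can operationalize "similarity" between two networks.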

... Previous research has explored the activity of IO accounts, such as state-sponsored trolls targeting the #BlackLivesMatter movement (Stewart, Arif, and Starbird 2018) and the 2016 U.S. Election (Badawy et al. 2019), and the differences in their activities across campaigns (Zannettou et al. 2019b). Some studies have shown how IO accounts leverage inauthentic or automated accounts to increase their prominence and artificially amplify messages (Linvill and Warren 2020; Elmas 2023), while being resilient to large-scale shutdown (Merhi, Rajtmajer, and Lee 2023). Researchers have reported on different tactics used by IO accounts, such as trolling (Zannettou et al. 2019a ...
Preprint
Full-text available
Social media platforms have become a hub for political activities and discussions, democratizing participation in these endeavors. However, they have also become an incubator for manipulation campaigns, like information operations (IOs). Some social media platforms have released datasets related to such IOs originating from different countries. However, we lack comprehensive control data that can enable the development of IO detection methods. To bridge this gap, we present new labeled datasets about 26 campaigns, which contain both IO posts verified by a social media platform and over 13M posts by 303k accounts that discussed similar topics in the same time frames (control data). The datasets will facilitate the study of narratives, network interactions, and engagement strategies employed by coordinated accounts across various campaigns and countries. By comparing these coordinated accounts against organic ones, researchers can develop and benchmark IO detection algorithms.
... Farkas and Bastos (2018) manually annotate IRA-linked tweets into 19 different categories to study whether IRA operations are consistent with classic propaganda models. Merhi, Rajtmajer, and Lee (2023) find that the accounts involved in an IO in Turkey were resilient to large-scale shutdown. Elmas, Overdorf, and Aberer (2023) discover that IO actors and other adversarial accounts often change their names and assume new identities. ...
Preprint
Full-text available
Coordinated reply attacks are a tactic observed in online influence operations and other coordinated campaigns to support or harass targeted individuals, or influence them or their followers. Despite its potential to influence the public, past studies have yet to analyze or provide a methodology to detect this tactic. In this study, we characterize coordinated reply attacks in the context of influence operations on Twitter. Our analysis reveals that the primary targets of these attacks are influential people such as journalists, news media, state officials, and politicians. We propose two supervised machine-learning models, one to classify tweets to determine whether they are targeted by a reply attack, and one to classify accounts that reply to a targeted tweet to determine whether they are part of a coordinated attack. The classifiers achieve AUC scores of 0.88 and 0.97, respectively. These results indicate that accounts involved in reply attacks can be detected, and the targeted accounts themselves can serve as sensors for influence operation detection.
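The preprint's models are not reproduced here. As a rough illustration of the tweet-level task it describes, the sketch below assumes placeholder tweets and labels lists and swaps in a simple TF-IDF plus logistic regression pipeline for whatever features the authors actually use, evaluated with the same ROC AUC metric quoted above:

# Minimal sketch (not the paper's implementation): classify whether a tweet
# is targeted by a coordinated reply attack, evaluated with ROC AUC.
# tweets and labels (1 = attacked) are placeholder inputs.
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.2, stratify=labels, random_state=0)

clf = make_pipeline(TfidfVectorizer(min_df=5, ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))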
... The digital sphere can also be an arena of contestation between countries such as China and the United States (55), and government-linked inauthentic behavior might target other nations and foreign elections (29,56). In both the domestic and international realms, fake news, trolls, disinformation campaigns, conspiracy theories, and coordinated authentic or inauthentic actions are the subject matter of these state-linked information operations (57)(58)(59). With the existence of strong authoritarian countries, the issue of disinformation becomes more interesting to investigate (60). ...
Article
Full-text available
Since 2018, Twitter has steadily released into the public domain content discovered on the platform and believed to be associated with information operations originating from more than a dozen state-backed organizations. Leveraging this dataset, we explore inter-state coordination amongst state-backed information operations and find evidence of intentional, strategic interaction amongst thirteen different states, separate and distinct from within-state operations. We find that coordinated, inter-state information operations attract greater engagement than baseline information operations and appear to come online in service to specific aims. We explore these ideas in depth through two case studies on the coordination between Cuba and Venezuela, and between Russia and Iran.
Conference Paper
Full-text available
State-sponsored organizations are increasingly linked to efforts aimed at exploiting social media for information warfare and manipulating public opinion. Typically, their activities rely on a number of social network accounts they control, aka trolls, that post and interact with other users disguised as “regular” users. These accounts often use images and memes, along with textual content, in order to increase the engagement and the credibility of their posts. In this paper, we present the first study of images shared by state-sponsored accounts by analyzing a ground truth dataset of 1.8M images posted to Twitter by accounts controlled by the Russian Internet Research Agency. First, we analyze the content of the images as well as their posting activity. Then, using Hawkes Processes, we quantify their influence on popular Web communities like Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab, with respect to the dissemination of images. We find that the extensive image posting activity of Russian trolls coincides with real-world events (e.g., the Unite the Right rally in Charlottesville), and shed light on their targets as well as the content disseminated via images. Finally, we show that the trolls were more effective in disseminating politics-related imagery than other images.
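Hawkes processes appear repeatedly in this line of work as the tool for quantifying influence across platforms. As a hedged illustration of the building block involved (not the authors' pipeline), the sketch below computes the conditional intensity and log-likelihood of a univariate Hawkes process with an exponential kernel; mu, alpha, beta and the toy event_times are assumed example values, and the multivariate version used in these studies adds one such kernel per platform pair:

# Minimal sketch: univariate Hawkes process with exponential kernel.
# Maximizing the log-likelihood over (mu, alpha, beta) fits the model;
# in the multivariate case, cross-kernels estimate how much posting on
# one platform excites posting on another.
import math

def hawkes_intensity(t, event_times, mu, alpha, beta):
    """lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in event_times if ti < t)

def hawkes_loglik(event_times, T, mu, alpha, beta):
    """Log-likelihood of the observed event times on [0, T]."""
    ll = sum(math.log(hawkes_intensity(ti, event_times, mu, alpha, beta))
             for ti in event_times)
    ll -= mu * T
    ll -= sum(alpha / beta * (1.0 - math.exp(-beta * (T - ti)))
              for ti in event_times)
    return ll

# Toy timestamps (hours) for image posts on one platform.
event_times = [0.5, 0.8, 2.1, 2.2, 2.3, 5.0]
print(hawkes_loglik(event_times, T=6.0, mu=0.3, alpha=0.4, beta=1.0))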
Article
Full-text available
Recently, the use of social networks such as Facebook, Twitter, and Sina Weibo has become an inseparable part of our daily lives. They are considered convenient platforms for users to share personal messages, pictures, and videos. However, while people enjoy social networks, many deceptive activities such as fake news or rumors can mislead users into believing misinformation. Moreover, the spread of massive amounts of misinformation in social networks has become a global risk. Therefore, misinformation detection (MID) in social networks has gained a great deal of attention and is considered an emerging area of research interest. Several studies have extended MID to new research problems and techniques. Automated detection of misinformation remains difficult to accomplish, however, as it requires models that understand how related or unrelated the reported information is when compared to real information. Existing studies have mainly focused on three broad categories of misinformation: false information, fake news, and rumor detection. Accordingly, we present a comprehensive survey of automated misinformation detection on (i) false information, (ii) rumors, (iii) spam, (iv) fake news, and (v) disinformation. We provide a state-of-the-art review of MID where deep learning (DL) is used to automatically process data and create patterns to make decisions, not only to extract global features but also to achieve better results. We further show that DL is an effective and scalable technique for state-of-the-art MID. Finally, we suggest several open issues that currently limit real-world implementation and point to future directions along this dimension.
Article
Full-text available
We study how easy it is to distinguish influence operations from organic social media activity by assessing the performance of a platform-agnostic machine learning approach. Our method uses public activity to detect content that is part of coordinated influence operations based on human-interpretable features derived solely from content. We test this method on publicly available Twitter data on Chinese, Russian, and Venezuelan troll activity targeting the United States, as well as the Reddit dataset of Russian influence efforts. To assess how well content-based features distinguish these influence operations from random samples of general and political American users, we train and test classifiers on a monthly basis for each campaign across five prediction tasks. Content-based features perform well across period, country, platform, and prediction task. Industrialized production of influence campaign content leaves a distinctive signal in user-generated content that allows tracking of campaigns from month to month and across different accounts.
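The study's exact feature set is not listed in the abstract. The following minimal sketch, with an assumed posts.csv containing text, month, and is_troll columns and a generic bag-of-words extractor standing in for the paper's human-interpretable content features, shows the month-by-month train-and-evaluate loop described above:

# Minimal sketch (assumption-laden illustration, not the study's code):
# train and evaluate a classifier per month to separate troll posts from
# organic posts. Column names and the CSV file are placeholders.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

posts = pd.read_csv("posts.csv")   # assumed columns: text, month, is_troll

for month, chunk in posts.groupby("month"):
    X = CountVectorizer(max_features=2000,
                        ngram_range=(1, 2)).fit_transform(chunk["text"])
    y = chunk["is_troll"]
    score = cross_val_score(RandomForestClassifier(n_estimators=200),
                            X, y, cv=5, scoring="roc_auc").mean()
    print(f"{month}: AUC = {score:.3f}")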
Article
Full-text available
We document methods employed by Russia’s Internet Research Agency to influence the political agenda of the United States from September 9, 2009 to June 21, 2018. We qualitatively and quantitatively analyze Twitter accounts with known IRA affiliation to better understand the form and function of Russian efforts. We identified five handle categories: Right Troll, Left Troll, News Feed, Hashtag Gamer, and Fearmonger. Within each type, accounts were used consistently, but the behavior across types was different, both in terms of “normal” daily behavior and in how they responded to external events. In this sense, the Internet Research Agency’s agenda-building effort was “industrial” – mass produced from a system of interchangeable parts, where each class of part fulfilled a specialized function.
Conference Paper
Full-text available
Recent evidence has emerged linking coordinated campaigns by state-sponsored actors to efforts to manipulate public opinion on the Web. Campaigns revolving around major political events are enacted via mission-focused “trolls.” While trolls are involved in spreading disinformation on social media, there is little understanding of how they operate, what type of content they disseminate, how their strategies evolve over time, and how they influence the Web's information ecosystem. In this paper, we begin to address this gap by analyzing 10M posts by 5.5K Twitter and Reddit users identified as Russian and Iranian state-sponsored trolls. We compare the behavior of each group of state-sponsored trolls with a focus on how their strategies change over time, the different campaigns they embark on, and differences between the trolls operated by Russia and Iran. Among other things, we find: 1) that Russian trolls were pro-Trump while Iranian trolls were anti-Trump; 2) evidence that campaigns undertaken by such actors are influenced by real-world events; and 3) that the behavior of such actors is not consistent over time, hence detection is not straightforward. Using Hawkes Processes, we quantify the influence these accounts have on pushing URLs on four platforms: Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab. In general, Russian trolls were more influential and efficient in pushing URLs to all the other platforms with the exception of /pol/ where Iranians were more influential. Finally, we release our source code to ensure the reproducibility of our results and to encourage other researchers to work on understanding other emerging kinds of state-sponsored troll accounts on Twitter.
Conference Paper
Full-text available
Over the past couple of years, anecdotal evidence has emerged linking coordinated campaigns by state-sponsored actors with efforts to manipulate public opinion on the Web, often around major political events, through dedicated accounts, or “trolls.” Although they are often involved in spreading disinformation on social media, there is little understanding of how these trolls operate, what type of content they disseminate, and most importantly their influence on the information ecosystem. In this paper, we shed light on these questions by analyzing 27K tweets posted by 1K Twitter users identified as having ties with Russia’s Internet Research Agency and thus likely state-sponsored trolls. We compare their behavior to a random set of Twitter users, finding interesting differences in terms of the content they disseminate, the evolution of their accounts, as well as their general behavior and use of Twitter. Then, using Hawkes Processes, we quantify the influence that trolls had on the dissemination of news on social platforms like Twitter, Reddit, and 4chan. Overall, our findings indicate that Russian trolls managed to stay active for long periods of time and to reach a substantial number of Twitter users with their tweets. When looking at their ability to spread news content and make it viral, however, we find that their effect on social platforms was minor, with the significant exception of news published by the Russian state-sponsored news outlet RT (Russia Today).
Conference Paper
Full-text available
This paper presents preliminary findings of a content analysis of tweets posted by false accounts operated by the Internet Research Agency (IRA) in St Petersburg. We relied on a historical database of tweets to retrieve 4,539 tweets posted by IRA-linked accounts between 2012 and 2017 and coded 2,501 tweets manually. The messages cover newsworthy events in the United States, the Charlie Hebdo terrorist attack in 2015, and the Brexit referendum in 2016. Tweets were annotated using 19 control variables to investigate whether IRA operations on social media are consistent with classic propaganda models. The results show that the IRA operates a composite of user accounts tailored to perform specific tasks, with the lion's share of their work focusing on US daily news activity and the diffusion of polarized news across different national contexts.
Article
Full-text available
This article focuses on AKTrolls, defined as pro-government political trolls in Turkey, while attempting to draw implications about political trolling in the country in general. It examines their methods and effects, and it interrogates whether (and how) Turkish authorities have attempted to shape or counter politically motivated social media content production through trolling after the Gezi Park Protests that took place in 2013. My findings are based on an ethnographic study that included participant observation and in-depth interviews in a setting that is under-studied and about which reliable sources are difficult to find. The study demonstrates political trolling activity in Turkey is more decentralized and less institutionalized than generally thought, and is based more on ad hoc decisions by a larger public. However, I argue here that AKTrolls do have impact on reducing discourses on social media that are critical of the government, by engaging in surveillance, among other practices.
Article
Full-text available
Drawing on the approach suggesting that the analysis of social media in relation to democracy should be provided within its own social context, we outline the social media activities adopted by the ruling populist political party in Turkey, namely, the Justice and Development Party (AKP), aimed at reinforcing its political ideology. We also unpack ‘political online trolling’ as a manifestation of online practices driving the post-truth politics in Turkey. Following the Gezi protests, when social media and, in particular, Twitter gained trust and popularity as a news source due to severe censorship and polarization in the traditional mass media, the AKP adopted an aggressive strategy to attack and destroy all opposition as well as to manipulate public opinion through their political trolling activities. Employing the approach of digital ethnography and drawing on the archive of mass media outputs about the trolling events, we discuss how the ruling party has adopted online political trolling as a strategy, one that is deeply embedded in the political system, politicians and mainstream media. We also explain how trolling practices are facilitated by the coordinated work of these institutions to silence all critical opposing voices, in particular journalists, and how they stifle public debate that is grounded in truth and evidence. We also conclude that the chilling effects of political trolling lead unprotected citizens, who are the most vulnerable targets of the trolls, to quit social media, self-censor, and participate less in public debate. The trolls have targeted dissenting voices not only for criticizing the government publicly, but also to brand them as terrorists and traitors through increasingly polarizing and discriminating language based on nationalist and religious perspectives, which peaked in the aftermath of the 15 July coup attempt and the debates on the presidential regime. Far from condemning the trolling activities along with their polarizing and hateful rhetoric, mainstream culture and public discourse seem to have been taken over by an increasing trolling subculture, which inhibits public debate, discredits the sources of truth, fosters fanaticism and encourages a hate discourse and violence, all of which are undermining democracy.
Chapter
As societies, governments, corporations, and individuals become more dependent on the digital environment, so they also become increasingly vulnerable to misuse of that environment. A considerable industry has developed to provide the means with which to make cyberspace more secure, stable, and predictable. Cybersecurity is concerned with the identification, avoidance, management, and mitigation of risk in, or from, cyberspace—the risk of harm and damage that might occur as the result of everything from individual carelessness to organized criminality, to industrial and national security espionage, and, at the extreme end of the scale, to disabling attacks against a country’s critical national infrastructure. But this represents a rather narrow understanding of security and there is much more to cyberspace than vulnerability, risk, and threat. As well as security from financial loss, physical damage, etc., cybersecurity must also be for the maximization of benefit. The Oxford Handbook of Cybersecurity takes a comprehensive and rounded approach to the still evolving topic of cybersecurity: the security of cyberspace is as much technological as it is commercial and strategic; as much international as regional, national, and personal; and as much a matter of hazard and vulnerability as an opportunity for social, economic, and cultural growth.
Article
In this paper, we argue that strategic information operations (e.g. disinformation, political propaganda, and other forms of online manipulation) are a critical concern for CSCW researchers, and that the CSCW community can provide vital insight into understanding how these operations function, by examining them as collaborative "work" within online crowds. First, we provide needed definitions and a framework for conceptualizing strategic information operations, highlighting related literatures and noting historical context. Next, we examine three case studies of online information operations using a sociotechnical lens that draws on CSCW theories and methods to account for the mutual shaping of technology, social structure, and human action. Through this lens, we contribute a more nuanced understanding of these operations (beyond "bots" and "trolls") and highlight a persistent challenge for researchers, platform designers, and policy makers: distinguishing between orchestrated, explicitly coordinated information operations and the emergent, organic behaviors of an online crowd.
Article
This research examines how Russian disinformation actors participated in a highly charged online conversation about the #BlackLivesMatter movement and police-related shootings in the USA during 2016. We first present high-level dynamics of this conversation on Twitter using a network graph based on retweet flows that reveals two structurally distinct communities. Next, we identify accounts in this graph that were suspended by Twitter for being affiliated with the Internet Research Agency, an entity accused of conducting information operations in support of Russian political interests. Finally, we conduct an interpretive analysis that consolidates observations about the activities of these accounts. Our findings have implications for platforms seeking to develop mechanisms for determining authenticity, by illuminating how disinformation actors enact authentic personas and caricatures to target different audiences. This work also sheds light on how these actors systematically manipulate politically active online communities by amplifying diverging streams of divisive content.
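The retweet-graph analysis described above can be approximated with standard network tooling. The sketch below is an illustrative assumption rather than the study's code: it uses networkx, a toy retweets list of (retweeter, original_author) pairs, and modularity-based community detection standing in for whatever clustering the authors used:

# Minimal sketch: build a weighted retweet graph and extract the two
# largest communities, mirroring the two structurally distinct clusters
# reported in the study. The retweets list is placeholder data.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

retweets = [("user_a", "user_b"), ("user_c", "user_b"), ("user_d", "user_e")]

G = nx.DiGraph()
for retweeter, author in retweets:
    if G.has_edge(retweeter, author):
        G[retweeter][author]["weight"] += 1
    else:
        G.add_edge(retweeter, author, weight=1)

communities = greedy_modularity_communities(G.to_undirected(), weight="weight")
for i, nodes in enumerate(sorted(communities, key=len, reverse=True)[:2]):
    print(f"community {i}: {len(nodes)} accounts")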
Article
The Chinese government has long been suspected of hiring as many as 2 million people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called 50c party posts vociferously argue for the government’s side in political and policy debates. As we show, this is also true of most posts openly accused on social media of being 50c. Yet almost no systematic empirical evidence exists for this claim or, more importantly, for the Chinese regime’s strategic objective in pursuing this activity. In the first large-scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We show that the goal of this massive secretive operation is instead to distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. We discuss how these results fit with what is known about the Chinese censorship program and suggest how they may change our broader theoretical understanding of “common knowledge” and information control in authoritarian regimes.
Article
Recent accounts from researchers, journalists, as well as federal investigators reached a unanimous conclusion: social media are systematically exploited to manipulate and alter public opinion. Some disinformation campaigns have been coordinated by means of bots, social media accounts controlled by computer scripts that try to disguise themselves as legitimate human users. In this study, we describe one such operation that occurred in the run-up to the 2017 French presidential election. We collected a massive Twitter dataset of nearly 17 million posts published between April 27 and May 7, 2017 (Election Day). We then set out to study the MacronLeaks disinformation campaign: by leveraging a mix of machine learning and cognitive behavioral modeling techniques, we separated humans from bots and then studied the activities of the two groups independently, as well as their interplay. We provide a characterization of both the bots and the users who engaged with them, and contrast the latter with the users who did not. The prior interests of disinformation adopters point to the reasons for this campaign's limited success: the users who engaged with MacronLeaks are mostly foreigners with a preexisting interest in alt-right topics and alternative news media, rather than French users with diverse political views. Concluding, anomalous account usage patterns suggest the possible existence of a black market for reusable political disinformation bots.