Article

Reverse-engineering political protest: the Russian Internet Research Agency in the Heart of Texas

Taylor & Francis
Information, Communication & Society

Abstract

In the aftermath of the 2016 US presidential election, the public slowly began to grapple with the extent of Russian disinformation campaigns by the Internet Research Agency (IRA), elements of which were carried out on Facebook. Campaigns targeted people in the United States in many ways, including by publishing event pages on Facebook that at times piggybacked on existing events, stoking fear, anger and other emotions already on the rise in an increasingly tribal political climate. In the summer before the election, two Facebook pages, 'Heart of Texas' and 'United Muslims of America,' published events advertising protests in front of the Islamic Da'wah Center, a mosque and religious center in downtown Houston. Our study reverse-engineers the IRA-inspired 'Heart of Texas' protest of 21 May 2016, using qualitative in-depth interviews with 14 individuals connected to these events, including counterprotest participants and local organizers, journalists who covered the protest, and representatives of local organizations. Results shed light on the role that news media played in protest coverage, the dynamics at the protest, issues around vetting information and the serendipity in how protests emerge and get organized on and off social media. This research documents and critically assesses the on-the-ground transactions such propaganda foments and offers insights into the role of social media in local protests.


... Gradually, government agencies became more advanced in their techniques of political manipulation on the Internet, not only in Russia but also in the global context. For example, professionals and the lay public actively disputed the role of Russian actors in the 2016 and 2020 US elections and whether their interventions favored Trump [45][46][47][48]. These suspicions significantly worsened relations between the USA and Russia. ...
Article
Full-text available
This paper offers a multidimensional theoretical scheme for analyzing professional ethics in the field of political public relations. We suggest investigating the decisions of these professionals using moral foundations theory, because human ethical reasoning is contextual, and examining ethics in a one-dimensional manner, as previous researchers have done, overlooks the complexity of the moral choices that such professionals make. The prospects of the proposed theoretical approach are demonstrated through 16 interviews with post-Soviet Russian political PR industry leaders conducted from March 2018 to April 2020. Our empirical findings show that Russian political PR specialists employ all moral foundations; however, the “care/harm” and “authority/respect” foundations were not mentioned very often in their narratives. Overall, this paper makes a critical contribution to research on professional ethics in political public relations, and it provides important insight into the specifics of moral reasoning in the Russian political PR industry, which is insufficiently described in the current literature.
Chapter
Full-text available
This chapter introduces the two Russian state-affiliated news media organizations, Sputnik and RT, and provides background on their development. It also discusses what is known about their journalistic culture with regard to organization and journalistic style. Scholars have highlighted that although they are international news organizations producing and disseminating information and current affairs for a global audience, in a similar way to other news media outlets, there are significant differences between RT/Sputnik and news organizations such as BBC World or CNN International. These differences in journalistic ethos have a bearing on the storytelling techniques used, and these are therefore also analyzed in this chapter. Finally, the chapter presents previous research findings on the audiences for RT and Sputnik, why people turn to these channels and how they perceive and engage with the news content. Having thus positioned RT and Sputnik as international news organizations and given an account of what is known about audience reception of their messaging, Chap. 3 focuses on how RT and Sputnik have used disinformation and targeted various states.
Article
As Russia launched its full-scale invasion of Ukraine in February 2022, social media was rife with pro-Kremlin disinformation. To effectively tackle the issue of state-sponsored disinformation campaigns, this study examines the underlying reasons why some individuals are susceptible to false claims and explores ways to reduce their susceptibility. It uses linear regression analysis on data from a national survey of 1,500 adults (18+) to examine the factors that predict belief in pro-Kremlin disinformation narratives regarding the Russia–Ukraine war. Our research finds that belief in pro-Kremlin disinformation is politically motivated and linked to users who: (1) hold conservative views, (2) trust partisan media, and (3) frequently share political opinions on social media. Our findings also show that exposure to disinformation is positively associated with belief in disinformation. Conversely, trust in mainstream media is negatively associated with belief in disinformation, offering a potential way to mitigate its impact.
Article
Computational propaganda—the use of political bots and trolls for orchestrated interventions in online political discourse—is not the only means through which nefarious actors assert political influence in democracies around the world. In the context of influence operations, a significant—yet, understudied—phenomenon is the so-called “boots on the ground”—political communities with ties to state actors that aptly use digital tools to engage in framing and contestation of political events, narratives, and agendas. This article defines political micro-influencers as strategic communicative actors who perform authentic political identities with the goal of influencing public attitudes and behaviors. Situating the inquiry in the context of Russian influence operations, this article investigates the case of the Night Wolves Motorcycle Club by mapping the narratives associated with the movement and examining their structural and functional characteristics. Findings demonstrate how migrant counterpublics and historically nonconforming social groups get co-opted to propagate Russian geopolitical influence in Western democracies. On the basis of this case, the study provides guidelines and suggestions for future work on identifying political micro-influencers and tracing their narratives across the digital ecosystem.
Preprint
Generative language models have improved drastically, and can now produce realistic text outputs that are difficult to distinguish from human-written content. For malicious actors, these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations. This report assesses how language models might change influence operations in the future, and what steps can be taken to mitigate this threat. We lay out possible changes to the actors, behaviors, and content of online influence operations, and provide a framework for stages of the language model-to-influence operations pipeline that mitigations could target (model construction, model access, content dissemination, and belief formation). While no reasonable mitigation can be expected to fully prevent the threat of AI-enabled influence operations, a combination of multiple mitigations may make an important difference.
Article
Russian influence operations on social media have received significant attention following the 2016 US presidential elections. Here, scholarship has largely focused on the covert strategies of the Russia-based Internet Research Agency and the overt strategies of Russia's largest international broadcaster RT (Russia Today). But since 2017, a number of new news media providers linked to the Russian state have emerged, and less research has focused on these channels and how they may support contemporary influence operations. We conduct a qualitative content analysis of 2,014 Facebook posts about the #BlackLivesMatter (BLM) protests in the United States over the summer of 2020 to comparatively examine the overt propaganda strategies of six Russian-linked news organizations—RT, Ruptly, Soapbox, In The NOW, Sputnik, and Redfish. We found that RT and Sputnik diverged in their framing of the BLM movement from the newer media properties. RT and Sputnik primarily produced negative coverage of the BLM movement, painting protestors as violent, or discussed the hypocrisy of racial justice in America. In contrast, newer media properties like In The NOW, Soapbox, and Redfish supported the BLM movement with clickbait-style videos highlighting racism in America. Video footage bearing the Ruptly brandmark appears in both traditional and new media properties, to illustrate, in real time, civil unrest across the US. By focusing on overt propaganda from the broad array of Russian-affiliated media, our data allows us to further understand the “full spectrum” and “counter-hegemonic” strategies at play in contemporary information operations.
Article
Full-text available
This article considers the use of public social media in transnational expatriate activism. It is an investigation of connections among users of Facebook event pages associated with 122 cities worldwide where demonstrations took place in 2017, in support of the anti-corruption Romanian #rezist protests. An exploration of interconnections between socio-demographics, space and network characteristics, it probes the association of geographic location and the gender of page users to connectivity in comment and share networks to reveal a connectivity differential. Connections increased when users were active on the same pages whereas users’ common location related to comparatively lower levels of commenting but not sharing activity. At the same time, while a larger proportion of them were male, users displayed a systematic tendency to interact with the other gender. Renewed attention should thus be paid to socio-spatial variations in the use of social media that both localize and help bridge transnational activism.
Article
Full-text available
Previous research suggests that mainstream media coverage around the world follows a “protest paradigm” that demonizes protesters and marginalizes their causes. Given the recent increase in global protest activity and the growing importance of social media for activism, this paper content analyzes 1,438 protest-related English and Spanish news stories from around the world that were shared on social media, examining framing, sourcing, and marginalizing devices across media outlet type, region, language, and social media platform in order to create a typology of how the protest paradigm operates in an international and social media context. Results showed type of protest, location of protest, and type of media outlet were significantly related to whether news stories adhered to the protest paradigm. Social media shares were predicted by region of media outlet, English-language media, and type of protest.
Article
Full-text available
In the wake of the 2016 U.S. presidential election, the notion of “fake news” and Russian election interference became somewhat interchangeable. We argue that “fake news” insufficiently describes the Russian disinformation campaign. Our analysis of Facebook ad texts shows that they incorporate emotional appeals differently at unique moments in the political campaign of 2016. We use sentiment analysis to demonstrate the use of positive and negative emotional messages. Sentiment ratings dipped to more negative scores before the November 2016 election and rose to new positive heights after the election. This provides some evidence that the varying uses of positive sentiment and negative sentiment may have been strategic.
Article
Full-text available
The recent rise of disinformation and propaganda on social media has attracted strong interest from social scientists. Research on the topic has repeatedly observed ideological asymmetries in disinformation content and reception, wherein conservatives are more likely to view, redistribute, and believe such content. However, preliminary evidence has suggested that race may also play a substantial role in determining the targeting and consumption of disinformation content. Such racial asymmetries may exist alongside, or even instead of, ideological ones. Our computational analysis of 5.2 million tweets by the Russian government-funded “troll farm” known as the Internet Research Agency sheds light on these possibilities. We find stark differences in the numbers of unique accounts and tweets originating from ostensibly liberal, conservative, and Black left-leaning individuals. But diverging from prior empirical accounts, we find racial presentation—specifically, presenting as a Black activist—to be the most effective predictor of disinformation engagement by far. Importantly, these results could only be detected once we disaggregated Black-presenting accounts from non-Black liberal accounts. In addition to its contributions to the study of ideological asymmetry in disinformation content and reception, this study also underscores the general relevance of race to disinformation studies.
Article
Full-text available
We review online activism and its relations with offline collective action. Social media facilitate online activism, particularly by documenting and collating individual experiences, community building, norm formation, and development of shared realities. In theory, online activism could hinder offline protests, but empirical evidence for slacktivism is mixed. In some contexts, online and offline action could be unrelated because people act differently online versus offline, or because people restrict their actions to one domain. However, most empirical evidence suggests that online and offline activism are positively related and intertwined (no digital dualism), because social media posts can mobilise others for offline protest. Notwithstanding this positive relationship, the internet also enhances the visibility of activism and therefore facilitates repression in repressive contexts.
Article
Full-text available
We document methods employed by Russia’s Internet Research Agency to influence the political agenda of the United States from September 9, 2009 to June 21, 2018. We qualitatively and quantitatively analyze Twitter accounts with known IRA affiliation to better understand the form and function of Russian efforts. We identified five handle categories: Right Troll, Left Troll, News Feed, Hashtag Gamer, and Fearmonger. Within each type, accounts were used consistently, but the behavior across types was different, both in terms of “normal” daily behavior and in how they responded to external events. In this sense, the Internet Research Agency’s agenda-building effort was “industrial” – mass produced from a system of interchangeable parts, where each class of part fulfilled a specialized function.
Article
Full-text available
The year 2017 saw a cycle of protest ignited by President Trump’s election and subsequent policies. This research seeks to investigate the role of social media and television in raising awareness of protest events and increasing participation in marches and demonstrations. This paper uses data from two surveys conducted in May and June 2017, during the peak of this cycle of protest. We explore the role of social media for protest participation (in general) as well as for awareness and participation in the Women’s March and March for Science. We find that Twitter use offers more consistent effects compared to Facebook in relation to the cycle of protest. In contrast, television use has no impact on awareness and thus, limited potential for mobilization. Social media is distinctive in relation to mobilization, because of social networking features that allow people to learn about specific events, discuss the issues, expose people to invitations to participation, as well as identify members of one’s social network who are also interested in participation.
Article
Full-text available
There is widespread concern that Russia and other countries have launched social-media campaigns designed to increase political divisions in the United States. Though a growing number of studies analyze the strategy of such campaigns, it is not yet known how these efforts shaped the political attitudes and behaviors of Americans. We study this question using longitudinal data that describe the attitudes and online behaviors of 1,239 Republican and Democratic Twitter users from late 2017 merged with nonpublic data about the Russian Internet Research Agency (IRA) from Twitter. Using Bayesian regression tree models, we find no evidence that interaction with IRA accounts substantially impacted 6 distinctive measures of political attitudes and behaviors over a 1-month period. We also find that interactions with IRA accounts were most common among respondents with strong ideological homophily within their Twitter network, high interest in politics, and high frequency of Twitter usage. Together, these findings suggest that Russian trolls might have failed to sow discord because they mostly interacted with those who were already highly polarized. We conclude by discussing several important limitations of our study—especially our inability to determine whether IRA accounts influenced the 2016 presidential election—as well as its implications for future research on social media influence campaigns, political polarization, and computational social science.
Article
Full-text available
This article presents a typological study of the Twitter accounts operated by the Internet Research Agency (IRA), a company specialized in online influence operations based in St. Petersburg, Russia. Drawing on concepts from 20th-century propaganda theory, we modeled the IRA operations along propaganda classes and campaign targets. The study relies on two historical databases and data from the Internet Archive’s Wayback Machine to retrieve 826 user profiles and 6,377 tweets posted by the agency between 2012 and 2017. We manually coded the source as identifiable, obfuscated, or impersonated and classified the campaign target of IRA operations using an inductive typology based on profile descriptions, images, location, language, and tweeted content. The qualitative variables were analyzed as relative frequencies to test the extent to which the IRA’s black, gray, and white propaganda are deployed with clearly defined targets for short-, medium-, and long-term propaganda strategies. The results show that source classification from propaganda theory remains a valid framework to understand IRA’s propaganda machine and that the agency operates a composite of different user accounts tailored to perform specific tasks, including openly pro-Russian profiles, local American and German news sources, pro-Trump conservatives, and Black Lives Matter activists.
Article
Full-text available
In this article, we explore the potential role of social media in helping movements expand and/or strengthen themselves internally, processes we refer to as scaling up. Drawing on a case study of Black Lives Matter (BLM) that includes both analysis of public social media accounts and interviews with BLM groups, we highlight possibilities created by social media for building connections, mobilizing participants and tangible resources, coalition building, and amplifying alternative narratives. We also discuss challenges and risks associated with using social media as a platform for scaling up. Our analysis suggests that while benefits of social media use outweigh its risks, careful management of online media platforms is necessary to mitigate concrete, physical risks that social media can create for activists.
Article
Full-text available
The shooting death of Michael Brown by a police officer in August 2014 served as a pivotal case that pushed excessive use of police force against minority groups to the national spotlight. Guided by the scholarship on protest coverage, this article investigates the interplay between advocacy and journalistic framing in the coverage of the Ferguson protests by national and local news. A content analysis of five newspapers during the first cycle of protests identified how journalistic frames of presentation derived from the ‘protest paradigm’ literature related to the content frames pushed forward by the Black Lives Matter movement. Results reveal that initial stories were predominantly episodic and focused on violence to the detriment of demands and grievances. However, episodic stories were also critical of the police response and the use of military-grade weapons to contain the demonstrations. As the weeks progressed, journalists gave space to the ideas of protestors in a more thematic way, especially on issues related to race beyond the topic of police brutality. Taken together, findings suggest small but significant progress as time continued during the first month of demonstrations after Brown’s shooting. Results presented here challenge the paradigmatic nature of protest coverage but reinforce that more space should be given to contextual narratives behind social movements’ actions in addition to coverage that is critical of police and protestor tactics.
Article
Full-text available
A consequence of living in a media-saturated world is that we inevitably leave behind digital traces of our media use. In this introduction to the International Journal of Communication's thematic section, we argue for a need to put those digital traces in context. As a starting point, we outline our basic understanding of digital traces, generally defining them as numerically produced correlations of disparate kinds of data that are generated by our practices in a media environment characterized by digitalization. On this basis, we distinguish three contextual facets that are of relevance when considering digital traces: first, the context of the scientific discourse in which research on digital traces is positioned; second, the context of the methods being applied to researching them; and third, the aforementioned context of the empirical field. With reference to the articles in this thematic section, this introduction argues that, in a single study, all three contextual facets interact as the scientific discourse relates to the methods being used, which in turn relates to the entire field of research. Whatever we do, wherever we are, by living in a media-saturated social world we leave behind footprints of our media use that compile an archive of "digital traces." To some degree we do this consciously; when we upload photographs to or write comments on the timelines of digital platforms, we leave an enduring imprint of our presence there. On the other hand, however, we are often unaware of the process as an (unintended) side effect of our media-related practices. This can be the case, for example, when using a search engine, when reading newspapers online, or when posting on Facebook.
Chapter
Full-text available
The algorithmic processing of very large sets of “traces” of user activities collected by digital platforms – so-called “Big Data” – exerts a strong appeal on social media researchers. In the context of a computational turn in social sciences and humanities, is qualitative research based on small samples and corpuses (“small data”) still relevant? It is argued that the unique value of such research lies in data thickness. This is achieved through a process we call thickening. Drawing on recent case studies in social media research we have conducted, we propose and illustrate three strategies to thicken trace data: trace interview, manual data collection and agile long-term online observation.
Article
Full-text available
This study aims to clarify inconsistencies regarding the term affordances by examining how affordances terminology is used in empirical research on communication and technology. Through an analysis of 82 communication-oriented scholarly works on affordances, we identify 3 inconsistencies regarding the use of this term. First, much research describes a particular affordance without engaging other scholarship addressing that affordance. Second, several studies identify “lists” of affordances without conceptually developing individual affordances within those lists. Third, the affordances perspective is evoked in situations where the purported affordance does not meet commonly accepted definitions. We conclude with a set of criteria to aid scholars in evaluating their assumptions about affordances and to facilitate a more consistent approach to its conceptualization and application.
Article
Full-text available
The flurry of protests since the turn of the decade has sustained a growth area in the social sciences. The diversity of approaches to the various facets and concerns raised by the collective action of aggrieved groups the world over impresses through multidisciplinarity and the wealth of insights it has generated. This introduction to a special issue of the international journal Information, Communication and Society is an invitation to recover conceptual instruments – such as the ecological trope – that have fallen out of fashion in media and communication studies. We account for their fall from grace and explicate the rationale for seeking to reinsert them into the empirical terrain of interlocking media, communication practices and protest which we aim to both capture with theory and adopt as a starting point for further analytical innovation.
Article
Full-text available
Although fieldwork is the foundation of robust ethnographic inquiry in physical settings, the practical methods have never fit comfortably in digital contexts. For many researchers, the activities of fieldwork must be so radically adjusted, they hardly resemble fieldwork anymore. How does one conduct “participant observation” of Twitter? When identities and cultural formations are located in or made of information flows through global networks, where are the boundaries of “the field”? In such global networks, what strategies do we use to get close to people? What might count as an interview? This essay discusses the persistent challenges of transferring fieldwork methods intended for physically situated contexts to digitally-mediated social contexts. I offer provocations for considering the premises rather than the procedures of fieldwork. These may not be seen on the surface level of method but operate at a level below method, or in everyday inquiry practices. I suggest that a practice of reflexive methodological analysis allows for more resonant and adaptive fieldwork suitable for studying 21st century networked communication practices and cultural formations.
Article
Full-text available
Many observers doubt the capacity of digital media to change the political game. The rise of a transnational activism that is aimed beyond states and directly at corporations, trade and development regimes offers a fruitful area for understanding how communication practices can help create a new politics. The Internet is implicated in the new global activism far beyond merely reducing the costs of communication, or transcending the geographical and temporal barriers associated with other communication media. Various uses of the Internet and digital media facilitate the loosely structured networks, the weak identity ties, and the patterns of issue and demonstration organizing that define a new global protest politics. Analysis of various cases shows how digital network configurations can facilitate: permanent campaigns, the growth of broad networks despite relatively weak social identity and ideology ties, transformation of individual member organizations and whole networks, and the capacity to communicate messages from desktops to television screens. The same qualities that make these communication-based politics durable also make them vulnerable to problems of control, decision-making and collective identity.
Chapter
The current system for monitoring and removal of foreign election interference on social media is a free speech blind spot. Social media platforms’ standards for what constitutes impermissible interference are vague, enforcement is seemingly ad hoc and inconsistent, and the role governments play in deciding what speech should be taken down is unclear. This extraordinary opacity—at odds with the ordinary requirements of respect for free speech—has been justified by a militarized discourse that paints such interference as highly effective, and “foreign” speech as uniquely pernicious. But, in fact, evidence of such campaigns’ effectiveness is limited, and the singling out and denigration of “foreign” speech is at odds with the traditional justifications for free expression. Hiding in the blind spot created by this foreign-threat, securitized framing are more pervasive and fundamental questions about online public discourse, such as how to define appropriate norms of online behavior more generally, who should decide them, and how they should be enforced. Without examining and answering these underlying questions, the goal that removing foreign election interference on social media is meant to achieve—re-establishing trust in the online public sphere—will remain unrealized.
Book
Cyberwar examines the ways in which Russian interventions not only affected the behaviors of key players but altered the 2016 presidential campaign’s media and social media landscape. After laying out a theory of influence that explains how Russian activities could have produced effects, Jamieson documents the hackers and trolls’ influence on the topics in the news, the questions in the presidential debates, and the social media stream. Drawing on her analysis of messages crafted and amplified by Russian operatives, changes that Russian-hacked content elicited in news and the debates, the scholarly work of other researchers, and Annenberg surveys, she concludes that it is plausible to believe that Russian machinations helped elect Donald J. Trump the 45th president of the United States.
Article
This paper examines social media ads purchased by the Russian Internet Research Agency (IRA). Using a public dataset, we employ a mixed method to analyze the thematic and strategic patterns of these ads. Using Facebook and Instagram’s promotional features, the IRA microtargeted audiences mostly located in the United States, fitting its messages to the audiences’ political, racial, gendered, and in some cases religious backgrounds. The findings reveal the divisive nature of the topics dominant in the dataset, including race, immigration, and police brutality. Expanding on the theoretical conceptualization of astroturfing, a strategy that centers on carefully concealing the identity and intention of the actors behind social media activities, we argue that the IRA has added political astroturfing as a powerful, low-cost tool contributing to the broader Russian geopolitical disinformation campaign strategies. The IRA made use of the business model of Facebook and Instagram in an attempt to further divide its targeted audiences, highlighting mostly negative issues with the potential goal of fuelling political rage.
Article
This paper investigates online propaganda strategies of the Internet Research Agency (IRA)—Russian “trolls”—during the 2016 U.S. presidential election. We assess claims that the IRA sought either to (1) support Donald Trump or (2) sow discord among the U.S. public by analyzing hyperlinks contained in 108,781 IRA tweets. Our results show that although IRA accounts promoted links to both sides of the ideological spectrum, “conservative” trolls were more active than “liberal” ones. The IRA also shared content across social media platforms, particularly YouTube—the second-most linked destination among IRA tweets. Although overall news content shared by trolls leaned moderate to conservative, we find troll accounts on both sides of the ideological spectrum, and these accounts maintain their political alignment. Links to YouTube videos were decidedly conservative, however. While mixed, this evidence is consistent with the IRA’s supporting the Republican campaign, but the IRA’s strategy was multifaceted, with an ideological division of labor among accounts. We contextualize these results as consistent with a pre-propaganda strategy. This work demonstrates the need to view political communication in the context of the broader media ecology, as governments exploit the interconnected information ecosystem to pursue covert propaganda strategies.
Article
Disinformation campaigns continue to thrive online, despite social media companies’ efforts at identifying and culling manipulation on their platforms. Framing these manipulation tactics as ‘coordinated inauthentic behavior,’ major platforms have banned culprits and deleted the evidence of their actions from social activity streams, making independent assessment and auditing impossible. While researchers, journalists, and civil society groups use multiple methods for discovering and tracking disinformation, platforms began to publish highly curated data archives of disinformation in 2016. When platform companies reframe manipulation campaigns, however, they downplay the importance of their products in spreading disinformation. We propose to treat social media metadata as a boundary object that supports research across platforms and use metadata as an entry point for investigating manipulation campaigns. We illustrate how platform companies’ responses to disinformation campaigns are at odds with the interests of researchers, civil society, policy-makers, and journalists, limiting the capacity to audit the role that platforms play in political discourse. To show how platforms’ data archives of ‘coordinated inauthentic behavior’ prevent researchers from examining the contexts of manipulation, we present two case studies of disinformation campaigns related to the Black Lives Matter Movement. We demonstrate how data craft – the exploitation of metrics, metadata, and recommendation engines – played a prominent role in attracting audiences to these disinformation campaigns. Additionally, we offer some investigative techniques for researchers to employ data craft in their own research on disinformation. We conclude by proposing new avenues for research for the field of Critical Internet Studies.
Article
EXECUTIVE SUMMARY TO VOLUME I RUSSIAN SOCIAL MEDIA CAMPAIGN The Internet Research Agency (IRA) carried out the earliest Russian interference operations identified by the investigation - a social media campaign designed to provoke and amplify political and social discord in the United States. The IRA was based in St. Petersburg, Russia, and received funding from Russian oligarch Yevgeniy Prigozhin and companies he controlled. Prigozhin is widely reported to have ties to Russian President Vladimir Putin [redacted] In mid-2014, the IRA sent employees to the United States on an intelligence-gathering mission with instructions [redacted] The IRA later used social media accounts and interest groups to sow discord in the U.S. political system through what it termed "information warfare." The campaign evolved from a generalized program designed in 2014 and 2015 to undermine the U.S. electoral system, to a targeted operation that by early 2016 favored candidate Trump and disparaged candidate Clinton. The IRA's operation also included the purchase of political advertisements on social media in the names of U.S. persons and entities, as well as the staging of political rallies inside the United States. To organize those rallies, IRA employees posed as U.S. grassroots entities and persons and made contact with Trump supporters and Trump Campaign officials in the United States. The investigation did not identify evidence that any U.S. persons conspired or coordinated with the IRA. Section II of this report details the Office's investigation of the Russian social media campaign. RUSSIAN HACKING OPERATIONS At the same time that the IRA operation began to focus on supporting candidate Trump in early 2016, the Russian government employed a second form of interference: cyber intrusions (hacking) and releases of hacked materials damaging to the Clinton Campaign.
The Russian intelligence service known as the Main Intelligence Directorate of the General Staff of the Russian Army (GRU) carried out these operations. In March 2016, the GRU began hacking the email accounts of Clinton Campaign volunteers and employees, including campaign chairman John Podesta. In April 2016, the GRU hacked into the computer networks of the Democratic Congressional Campaign Committee (DCCC) and the Democratic National Committee (DNC). The GRU stole hundreds of thousands of documents from the compromised email accounts and networks. Around the time that the DNC announced in mid-June 2016 the Russian government's role in hacking its network, the GRU began disseminating stolen materials through the fictitious online personas "DCLeaks" and "Guccifer 2.0." The GRU later released additional materials through the organization WikiLeaks. The presidential campaign of Donald J. Trump ("Trump Campaign" or "Campaign") showed interest in WikiLeaks's releases of documents and welcomed their potential to damage candidate Clinton. Beginning in June 2016, [redacted] forecast to senior Campaign officials that WikiLeaks would release information damaging to candidate Clinton. WikiLeaks's first release came in July 2016. Around the same time, candidate Trump announced that he hoped Russia would recover emails described as missing from a private server used by Clinton when she was Secretary of State (he later said that he was speaking sarcastically). [redacted] WikiLeaks began releasing Podesta's stolen emails on October 7, 2016, less than one hour after a U.S. media outlet released video considered damaging to candidate Trump. Section III of this Report details the Office's investigation into the Russian hacking operations, as well as other efforts by Trump Campaign supporters to obtain Clinton-related emails.
RUSSIAN CONTACTS WITH THE CAMPAIGN The social media campaign and the GRU hacking operations coincided with a series of contacts between Trump Campaign officials and individuals with ties to the Russian government. The Office investigated whether those contacts reflected or resulted in the Campaign conspiring or coordinating with Russia in its election-interference activities. Although the investigation established that the Russian government perceived it would benefit from a Trump presidency and worked to secure that outcome, and that the Campaign expected it would benefit electorally from information stolen and released through Russian efforts, the investigation did not establish that members of the Trump Campaign conspired or coordinated with the Russian government in its election interference activities. The Russian contacts consisted of business connections, offers of assistance to the Campaign, invitations for candidate Trump and Putin to meet in person, invitations for Campaign officials and representatives of the Russian government to meet, and policy positions seeking improved U.S.-Russian relations. Section IV of this Report details the contacts between Russia and the Trump Campaign during the campaign and transition periods, the most salient of which are summarized below in chronological order. 2015. Some of the earliest contacts were made in connection with a Trump Organization real-estate project in Russia known as Trump Tower Moscow. Candidate Trump signed a Letter of Intent for Trump Tower Moscow by November 2015, and in January 2016 Trump Organization executive Michael Cohen emailed and spoke about the project with the office of Russian government press secretary Dmitry Peskov. The Trump Organization pursued the project through at least June 2016, including by considering travel to Russia by Cohen and candidate Trump. Spring 2016.
Campaign foreign policy advisor George Papadopoulos made early contact with Joseph Mifsud, a London-based professor who had connections to Russia and traveled to Moscow in April 2016. Immediately upon his return to London from that trip, Mifsud told Papadopoulos that the Russian government had "dirt" on Hillary Clinton in the form of thousands of emails. One week later, in the first week of May 2016, Papadopoulos suggested to a representative of a foreign government that the Trump Campaign had received indications from the Russian government that it could assist the Campaign through the anonymous release of information damaging to candidate Clinton. Throughout that period of time and for several months thereafter, Papadopoulos worked with Mifsud and two Russian nationals to arrange a meeting between the Campaign and the Russian government. No meeting took place. Summer 2016. Russian outreach to the Trump Campaign continued into the summer of 2016, as candidate Trump was becoming the presumptive Republican nominee for President. On June 9, 2016, for example, a Russian lawyer met with senior Trump Campaign officials Donald Trump Jr., Jared Kushner, and campaign chairman Paul Manafort to deliver what the email proposing the meeting had described as "official documents and information that would incriminate Hillary." The materials were offered to Trump Jr. as "part of Russia and its government's support for Mr. Trump." The written communications setting up the meeting showed that the Campaign anticipated receiving information from Russia that could assist candidate Trump's electoral prospects, but the Russian lawyer's presentation did not provide such information. Days after the June 9 meeting, on June 14, 2016, a cybersecurity firm and the DNC announced that Russian government hackers had infiltrated the DNC and obtained access to opposition research on candidate Trump, among other documents. 
In July 2016, Campaign foreign policy advisor Carter Page traveled in his personal capacity to Moscow and gave the keynote address at the New Economic School. Page had lived and worked in Russia between 2003 and 2007. After returning to the United States, Page became acquainted with at least two Russian intelligence officers, one of whom was later charged in 2015 with conspiracy to act as an unregistered agent of Russia. Page's July 2016 trip to Moscow and his advocacy for pro-Russian foreign policy drew media attention. The Campaign then distanced itself from Page and, by late September 2016, removed him from the Campaign. July 2016 was also the month WikiLeaks first released emails stolen by the GRU from the DNC. On July 22, 2016, WikiLeaks posted thousands of internal DNC documents revealing information about the Clinton Campaign. Within days, there was public reporting that U.S. intelligence agencies had "high confidence" that the Russian government was behind the theft of emails and documents from the DNC. And within a week of the release, a foreign government informed the FBI about its May 2016 interaction with Papadopoulos and his statement that the Russian government could assist the Trump Campaign. On July 31, 2016, based on the foreign government reporting, the FBI opened an investigation into potential coordination between the Russian government and individuals associated with the Trump Campaign. Separately, on August 2, 2016, Trump campaign chairman Paul Manafort met in New York City with his long-time business associate Konstantin Kilimnik, who the FBI assesses to have ties to Russian intelligence. Kilimnik requested the meeting to deliver in person a peace plan for Ukraine that Manafort acknowledged to the Special Counsel's Office was a "backdoor" way for Russia to control part of eastern Ukraine; both men believed the plan would require candidate Trump's assent to succeed (were he to be elected President).
They also discussed the status of the Trump Campaign and Manafort's strategy for winning Democratic votes in Midwestern states. Months before that meeting, Manafort had caused internal polling data to be shared with Kilimnik, and the sharing continued for some period of time after their August meeting. Fall 2016. On October 7, 2016, the media released video of candidate Trump speaking in graphic terms about women years earlier, which was considered damaging to his candidacy. Less than an hour later, WikiLeaks made its second release: thousands of John Podesta's emails that had been stolen by the GRU in late March 2016. The FBI and other U.S. government institutions were at the time continuing their investigation of suspected Russian government efforts to interfere in the presidential election. That same day, October 7, the Department of Homeland Security and the Office of the Director of National Intelligence issued a joint public statement "that the Russian Government directed the recent compromises of e-mails from US persons and institutions, including from US political organizations." Those "thefts" and the "disclosures" of the hacked materials through online platforms such as WikiLeaks, the statement continued, "are intended to interfere with the US election process." Post-2016 Election. Immediately after the November 8 election, Russian government officials and prominent Russian businessmen began trying to make inroads into the new administration. The most senior levels of the Russian government encouraged these efforts. The Russian Embassy made contact hours after the election to congratulate the President-Elect and to arrange a call with President Putin. Several Russian businessmen picked up the effort from there. Kirill Dmitriev, the chief executive officer of Russia's sovereign wealth fund, was among the Russians who tried to make contact with the incoming administration. 
In early December, a business associate steered Dmitriev to Erik Prince, a supporter of the Trump Campaign and an associate of senior Trump advisor Steve Bannon. Dmitriev and Prince later met face-to-face in January 2017 in the Seychelles and discussed U.S.-Russia relations. During the same period, another business associate introduced Dmitriev to a friend of Jared Kushner who had not served on the Campaign or the Transition Team. Dmitriev and Kushner's friend collaborated on a short written reconciliation plan for the United States and Russia, which Dmitriev implied had been cleared through Putin. The friend gave that proposal to Kushner before the inauguration, and Kushner later gave copies to Bannon and incoming Secretary of State Rex Tillerson. On December 29, 2016, then-President Obama imposed sanctions on Russia for having interfered in the election. Incoming National Security Advisor Michael Flynn called Russian Ambassador Sergey Kislyak and asked Russia not to escalate the situation in response to the sanctions. The following day, Putin announced that Russia would not take retaliatory measures in response to the sanctions at that time. Hours later, President-Elect Trump tweeted, "Great move on delay (by V. Putin)." The next day, on December 31, 2016, Kislyak called Flynn and told him the request had been received at the highest levels and Russia had chosen not to retaliate as a result of Flynn's request. On January 6, 2017, members of the intelligence community briefed President-Elect Trump on a joint assessment-drafted and coordinated among the Central Intelligence Agency, FBI, and National Security Agency-that concluded with high confidence that Russia had intervened in the election through a variety of means to assist Trump's candidacy and harm Clinton's. A declassified version of the assessment was publicly released that same day.
Between mid-January 2017 and early February 2017, three congressional committees-the House Permanent Select Committee on Intelligence (HPSCI), the Senate Select Committee on Intelligence (SSCI), and the Senate Judiciary Committee (SJC)-announced that they would conduct inquiries, or had already been conducting inquiries, into Russian interference in the election. Then-FBI Director James Comey later confirmed to Congress the existence of the FBI's investigation into Russian interference that had begun before the election. On March 20, 2017, in open-session testimony before HPSCI, Comey stated: I have been authorized by the Department of Justice to confirm that the FBI, as part of our counterintelligence mission, is investigating the Russian government's efforts to interfere in the 2016 presidential election, and that includes investigating the nature of any links between individuals associated with the Trump campaign and the Russian government and whether there was any coordination between the campaign and Russia's efforts .... As with any counterintelligence investigation, this will also include an assessment of whether any crimes were committed. The investigation continued under then-Director Comey for the next seven weeks until May 9, 2017, when President Trump fired Comey as FBI Director-an action which is analyzed in Volume II of the report. On May 17, 2017, Acting Attorney General Rod Rosenstein appointed the Special Counsel and authorized him to conduct the investigation that Comey had confirmed in his congressional testimony, as well as matters arising directly from the investigation, and any other matters within the scope of 28 C.F.R. § 600.4(a), which generally covers efforts to interfere with or obstruct the investigation. President Trump reacted negatively to the Special Counsel's appointment.
He told advisors that it was the end of his presidency, sought to have Attorney General Jefferson (Jeff) Sessions unrecuse from the Russia investigation and to have the Special Counsel removed, and engaged in efforts to curtail the Special Counsel's investigation and prevent the disclosure of evidence to it, including through public and private contacts with potential witnesses. Those and related actions are described and analyzed in Volume II of the report. * * * THE SPECIAL COUNSEL'S CHARGING DECISIONS In reaching the charging decisions described in Volume I of the report, the Office determined whether the conduct it found amounted to a violation of federal criminal law chargeable under the Principles of Federal Prosecution. See Justice Manual § 9-27.000 et seq. (2018). The standard set forth in the Justice Manual is whether the conduct constitutes a crime; if so, whether admissible evidence would probably be sufficient to obtain and sustain a conviction; and whether prosecution would serve a substantial federal interest that could not be adequately served by prosecution elsewhere or through non-criminal alternatives. See Justice Manual § 9-27.220. Section V of the report provides detailed explanations of the Office's charging decisions, which contain three main components.
Article
In light of the foreign interference in the 2016 U.S. elections, the present research asks the question of whether the digital media has become the stealth media for anonymous political campaigns. By utilizing a user-based, real-time, digital ad tracking tool, the present research reverse engineers and tracks the groups (Study 1) and the targets (Study 2) of divisive issue campaigns based on 5 million paid ads on Facebook exposed to 9,519 individuals between September 28, 2016, and November 8, 2016. The findings reveal groups that did not file reports to the Federal Election Commission (FEC)—nonprofits, astroturf/movement groups, and unidentifiable “suspicious” groups, including foreign entities—ran most of the divisive issue campaigns. One out of six suspicious groups later turned out to be Russian groups. The volume of ads sponsored by non-FEC groups was 4 times larger than that of FEC groups. Divisive issue campaigns clearly targeted battleground states, including Pennsylvania and Wisconsin where traditional Democratic strongholds supported Donald Trump by a razor-thin margin. The present research asserts that media ecology, the technological features and capacity of digital media, as well as regulatory loopholes created by Citizens United v. FEC and the FEC’s disclaimer exemption for digital platforms contribute to the prevalence of anonymous groups’ divisive issue campaigns on digital media. The present research offers insight relevant for regulatory policy discussion and discusses the normative implications of the findings for the functioning of democracy.
Article
This article discusses the news coverage of a highly mediatised protest action which took place in early May 2011 in Flanders, Belgium. A social movement called the Field Liberation Movement rallied against a field trial of a scientific research project testing genetically modified potatoes. Seeking to understand how implicit patterns associated with the protest paradigm influence media representations of this ‘Big Potato Swap’, this article discusses the results of a qualitative content analysis of news coverage by two Flemish quality newspapers and one alternative news website. We conclude by discussing to what extent strategies assigned to the protest paradigm are in fact a product of normative journalistic routines. Different journalistic approaches to coverage of protest may be interpreted as distinct journalistic paradigms. As a result, any criticism of protest paradigm mechanisms in news reporting should be seen as part of a broader critique of prevailing journalistic formats and practices.
Article
Based on a survey of participants in Egypt's Tahrir Square protests, we demonstrate that social media in general, and Facebook in particular, provided new sources of information the regime could not easily control and were crucial in shaping how citizens made individual decisions about participating in protests, the logistics of protest, and the likelihood of success. We demonstrate that people learned about the protests primarily through interpersonal communication using Facebook, phone contact, or face-to-face conversation. Controlling for other factors, social media use greatly increased the odds that a respondent attended protests on the first day. Half of those surveyed produced and disseminated visuals from the demonstrations, mainly through Facebook.
Book
Robert Gehl's timely critique, Reverse Engineering Social Media, rigorously analyzes the ideas of social media and software engineers, using these ideas to find contradictions and fissures beneath the surfaces of glossy sites such as Facebook, Google, and Twitter. Gehl adeptly uses a mix of software studies, science and technology studies, and political economy to reveal the histories and contexts of these social media sites. Looking backward at divisions of labor and the process of user labor, he provides case studies that illustrate how binary "Like" consumer choices hide surveillance systems that rely on users to build content for site owners who make money selling user data, and that promote a culture of anxiety and immediacy over depth. Reverse Engineering Social Media also presents ways out of this paradox, illustrating how activists, academics, and users change social media for the better by building alternatives to the dominant social media sites.
Article
Editor's Note: Tricia provides an excellent segue between last month's "Ethnomining" Special Edition and this month's on "Talking to Companies about Ethnography." She offers further thoughts buildi...
Article
Today, ethnographers and qualitative researchers in fields such as urban poverty, immigration, and social inequality face an environment in which their work will be read, cited, and assessed by demographers, quantitative sociologists, and even economists. They also face a demand for case studies of poor, minority, or immigrant groups and neighborhoods that not only generate theory but also somehow speak to empirical conditions in other cases (not observed). Many have responded by incorporating elements of quantitative methods into their designs, such as selecting respondents 'at random' for small, in-depth interview projects or identifying 'representative' neighborhoods for ethnographic case studies, aiming to increase generalizability. This article assesses these strategies and argues that they fall short of their objectives. Recognizing the importance of the predicament underlying the strategies — to determine how case studies can speak empirically to other cases — it presents two alternatives to current practices, and calls for greater clarity in the logic of design when producing ethnographic research in a multi-method intellectual environment.
Article
This article analyzes cloaked websites, which are sites published by individuals or groups who conceal authorship in order to disguise deliberately a hidden political agenda. Drawing on the insights of critical theory and the Frankfurt School, this article examines the way in which cloaked websites conceal a variety of political agendas from a range of perspectives. Of particular interest here are cloaked white supremacist sites that disguise cyber-racism. The use of cloaked websites to further political ends raises important questions about knowledge production and epistemology in the digital era. These cloaked sites emerge within a social and political context in which it is increasingly difficult to parse fact from propaganda, and this is a particularly pernicious feature when it comes to the cyber-racism of cloaked white supremacist sites. The article concludes by calling for the importance of critical, situated political thinking in the evaluation of cloaked websites.
Book
An investigation into how specific Web technologies can change the dynamics of organizing and participating in political and social protest. Much attention has been paid in recent years to the emergence of “Internet activism,” but scholars and pundits disagree about whether online political activity is different in kind from more traditional forms of activism. Does the global reach and blazing speed of the Internet affect the essential character or dynamics of online political protest? In Digitally Enabled Social Change, Jennifer Earl and Katrina Kimport examine key characteristics of web activism and investigate their impacts on organizing and participation. Earl and Kimport argue that the web offers two key affordances relevant to activism: sharply reduced costs for creating, organizing, and participating in protest; and the decreased need for activists to be physically together in order to act together. Drawing on evidence from samples of online petitions, boycotts, and letter-writing and e-mailing campaigns, Earl and Kimport show that the more these affordances are leveraged, the more transformative the changes to organizing and participating in protest.
Chapter
In her book, Philosophy in a New Key, Susanne Langer remarks that certain ideas burst upon the intellectual landscape with a tremendous force. They resolve so many fundamental problems at once that they seem also to promise that they will resolve all fundamental problems, clarify all obscure issues. Everyone snaps them up as the open sesame of some new positive science, the conceptual center-point around which a comprehensive system of analysis can be built. The sudden vogue of such a grande idée, crowding out almost everything else for a while, is due, she says, "to the fact that all sensitive and active minds turn at once to exploiting it. We try it in every connection, for every purpose, experiment with possible stretches of its strict meaning, with generalizations and derivatives." After we have become familiar with the new idea, however, after it has become part of our general stock of theoretical concepts, our expectations are brought more into balance with its actual uses, and its excessive popularity is ended. A few zealots persist in the old key-to-the-universe view of it; but less driven thinkers settle down after a while to the problems the idea has really generated. They try to apply it and extend it where it applies and where it is capable of extension; and they desist where it does not apply or cannot be extended. It becomes, if it was, in truth, a seminal idea in the first place, a permanent and enduring part of our intellectual armory. But it no longer has the grandiose, all-promising scope, the infinite versatility of apparent application, it once had.
Article
Drawing on several larger literatures on diffusion processes, including literatures on the diffusion of innovations, disease spread, and the diffusion of information, this paper examines the classes of diffusion processes that may be relevant to understanding online protest and social movements. The author also argues that one commonly studied type of online, protest-related diffusion process - the online diffusion of information - has only minor theoretical implications for social movement theory. Two other diffusion processes - the diffusion of online, protest-related innovations and the diffusion of protest in its many forms as a problem-solving heuristic to new populations - are likely to qualitatively alter other (non-diffusion) social movement processes, creating important second-order, theoretical effects from these types of diffusion.
Article
Gives an overview of the origins, purposes, uses, and contributions of grounded theory methodology. Grounded theory is a general methodology for developing theory that is grounded in data systematically gathered and analyzed.
Article
The coordination of a social event requires a number of time-bounded tasks (exact time, location, activities of the event) to be determined prior to the event. On university campuses, Facebook is increasingly used to organize ad hoc physical gatherings within social groups. The coordination takes place via Facebook event pages and a ‘wall’ that represent online, shared, interactive spaces. This paper explores how such online interactive spaces facilitate the temporal coordination of social events. We content analyze Facebook's event pages and walls to understand how social group behavior is different from prevailing theories of group task progress and media use. The results suggest that, similar to many work groups, social groups exhibit differential interactive behaviors before and after the midpoint of when the event is created on Facebook and when the offline activity is going to happen. However, the results differ in the sense that the interactive behavior is highest before rather than after the midpoint. We also found that the involvement of the creator of the event pages is associated with higher interactive behavior of the social group. We discuss the findings and derive implications for the design of online tools for social event management.
Anderson, M., Toor, S., Rainie, L., & Smith, A. (2018). Activism in the social media age. Pew Research Center.
Anti-Defamation League. (2020a). Aryan Renaissance Society. Adl.org. https://www.adl.org/education/references/hate-symbols/aryan-renaissance-society
Becker, H. (1995). The nature and consequences of black propaganda. American Sociological Review, 14(2), 221-235. https://doi.org/10.2307/2086855
Chan, J., & Lee, C. (1984). The journalistic paradigm on civil protest: A case study of Hong Kong. In A. Arno & W. Dissanayake (Eds.), News media in national and international conflict (pp. 193-202). Westview Press.
Ellul, J. (1973). Propaganda: The formation of men's attitudes. Vintage Books.
Helmus, T. C., Bodine-Baron, E., Radin, A., Magnuson, M., Mendelsohn, J., Marcellino, W., Bega, A., & Winkelman, Z. (2018). Russian social media influence: Understanding Russian propaganda in Eastern Europe. Rand Corporation.