Socius: Sociological Research for a Dynamic World
Volume 9: 1–15
© The Author(s) 2023
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/23780231231152193
srd.sagepub.com
Creative Commons Non Commercial CC BY-NC: This article is distributed under the terms of the Creative Commons Attribution-
NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and
distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages
(https://us.sagepub.com/en-us/nam/open-access-at-sage).
Original Article
After purchasing Twitter on October 27, 2022, Elon Musk
tweeted “the bird is freed.” His tweet, and his purchase of
Twitter, were welcomed by Republicans, who had long
criticized the platform for censoring conservative viewpoints.
Senator Ted Cruz (R-TX) called Musk’s Twitter takeover
“one of the most significant developments for free speech in
modern times.” Marjorie Taylor Greene (R-GA) tweeted
“FREEDOM OF SPEECH!!!” later adding “We are win-
ning” (Popl 2022). Plenty of other conservatives agreed,
flooding Twitter with pro-Musk memes (Miller 2022).
Democrats were less thrilled with the takeover. Senator
Elizabeth Warren (D-MA) called the deal “dangerous for our
democracy.” She added, “Billionaires like Elon Musk play
by a different set of rules than everyone else, accumulating
power for their own gain” (Halaschak 2022). Journalists and
pundits alike point to the recent, sometimes temporary, sus-
pension of several mainstream journalists’ Twitter accounts
as evidence of Musk’s desire to curtail speech that does not
align with his brand of conservative politics (Peters 2022).
All the attention given to Twitter, its owners, and its
policies is not surprising. Politicians, governmental offices,
and public agencies have relied on Twitter for nearly a
decade to quickly communicate everything from perspec-
tives on policy proposals to information regarding road clo-
sures and local emergencies (Crudele 2022; Pattison-Gordon
2022). This is in large part because the platform reaches a
healthy swath of the American public (23 percent of U.S.
adults), many of whom use Twitter to quickly access news
and increase their understanding of current events (Aslam
2022; Odabas 2022). Although Musk’s Twitter moves are
likely to continue to make headlines, scholars should be
careful in assessing the effects of his actions on the plat-
form’s information environment. Specifically, before wax-
ing nostalgic about the Twitter of yore, it is worthwhile to
1Florida State University, Tallahassee, FL, USA
2Rochester University, Rochester, NY, USA
3Texifter, Amherst, USA
Corresponding Author:
Deana A. Rohlinger, Florida State University, 636 West Call Street, 207
Pepper Center, Tallahassee, FL 32306-1121, USA.
Email: deana.rohlinger@fsu.edu
Does the Musk Twitter Takeover Matter?
Political Influencers, Their Arguments,
and the Quality of Information They
Share
Deana A. Rohlinger1, Kyle Rose1, Sarah Warren2,
and Stuart Shulman3
Abstract
In October 2022, Elon Musk took over Twitter. Although conservatives cheered the takeover, progressives decried it
as dangerous for democracy. Despite scholarly interest in Twitter, little is known about the impact of “old” Twitter’s
policies on the information environment, making it difficult to speculate about Musk’s effects. The authors begin to
address this gap through an analysis of 245,020 tweets collected before and after Twitter suspended eight accounts
calling for state audits of the 2020 presidential election results. In this analysis of message amplifiers, or accounts
receiving 200 or more retweets, and message drivers, or top-ranked accounts, no evidence is found that the Twitter
ban improved the ideas or the quality of information shared about the election, nor did it dramatically change who
posted about the audit. The authors conclude with a discussion of the implications of these findings for future research
on Twitter under Musk’s control.
Keywords
Twitter, information environment, conspiracy theory, political influencers
assess the relative effectiveness of the old regime’s policies,
such as deplatforming, on the information environment, about
which we know little despite broad scholarly interest in
Twitter and the quality of information on it.
To examine whether deplatforming before the Musk
takeover appears to influence the information environment
around an issue, we analyzed tweets before and after
Twitter banned eight accounts calling for the audit of the
2020 presidential election results in Maricopa County,
Arizona, as well as in other states Trump lost (referred to
hereafter as the Arizona audit accounts). The permanent
suspension of the accounts occurred on July 27, 2021, and
made national news as politicians on both sides of the aisle
responded. Although Democrats praised Twitter for strik-
ing another blow against misinformation, Republicans
decried the move as more evidence of censorship perpe-
trated by Democrats in collusion with big tech (Schwenk
2021). Here, we analyze the relationship between deplat-
forming and who broadcasts information and the quality of
information they share. We assume that accounts engaged
in discussions about the Arizona audit were unaware of the
impending ban. That is, the account bans function as a
quasi-exogenous shock to individuals participating in the
conversation, which allows us to examine the impact of the
ban on the Twitter information environment surrounding
the Arizona audit. Although we are not able to infer causal-
ity, we are able to paint a more complete picture of how
deplatforming shaped an issue information environment
before Musk took over Twitter, providing critical contex-
tual information for assessing the impact of current or
future suspensions.
We address three research questions: Does banning
accounts on Twitter change who drives the conversation
around the Arizona audit? Does banning accounts on Twitter
change the kinds of arguments accounts make about the
Arizona audit? Does banning accounts on Twitter affect the
type and quality of information shared about the audit? We
answer these three questions through an examination of two
different kinds of accounts: political message amplifiers, or
accounts that receive 200 or more retweets in the sample, and
political message drivers, or accounts that rank among the
top 10 posters on each day in the sample.
We find that the Twitter ban does little to improve the infor-
mation environment around the Arizona audit. The ban does
not appear to effectively change the kinds of ideas that are
amplified on Twitter about the Arizona audit, nor does it
appear to affect who posts about the Arizona audit. In fact,
two accounts not only sent the majority of the tweets on each
of the days in the sample but also remained steadfast in their
support of the lie that the election was stolen from Donald
Trump and that the fraud was being covered up by Democrats.
We conclude the paper with a discussion of the implications
of our findings for future research on Twitter under Musk’s
control.
Political Influencers and the Information They
Share before and after Bans
Social scientists have long been interested in who shapes
political discourse in virtual forums. Much attention has
focused on political influencers, which are also known as
“opinion leaders,” “hashtag entrepreneurs,” “influentials,”
“initiators,” “opinion brokers,” “networked gatekeepers,”
and “crowd-sourced elites,” or the accounts that potentially
alter how users think and talk about issues on platforms such
as Reddit, Twitter, and Facebook (Dubois and Gaffney 2014;
Kermani and Adham 2021; Sunstein 2017; Valente and Davis
1999). Not surprisingly, social scientists emphasize how
political influencers amplify ideas and the relative reach of
these accounts. Using metrics such as retweets, mentions,
and follows as well as centrality measures such as PageRank
and betweenness centrality, the extant research assesses how
these accounts, which we call message amplifiers, influence
the course and content of a conversation (e.g., Dubois and
Gaffney 2014; Jackson and Foucault Welles 2016). For
example, Stewart et al. (2017) found that message amplifiers
use hashtags to influence framing contests and to channel
how information flows on Twitter. Likewise, Meraz and
Papacharissi (2013) found that message amplifiers, mea-
sured as accounts with high betweenness centrality scores,
are well positioned to cross over between network clusters,
share information, and influence the framing in both.
We know far less, however, about message drivers, or
accounts that frequently post on a digital forum relative to a
political issue or event but do not necessarily have large
numbers of followers. There is reason to believe that these
influencers also are important to understanding the informa-
tion environment around political issues, at least on Twitter.
First, scholars find that there are different ways in which an
account, and its content, may become visible (Meraz and
Papacharissi 2013). Rohlinger, Williams, and Teek (2020),
for example, found that, after school shootings, accounts
with local expertise (e.g., local journalists and students) and
seemingly credible information (e.g., trolls posing as eyewit-
nesses) have more standing on Twitter and, subsequently, get
far more attention than those of well-known activists, celeb-
rities, or gun organizations. Likewise, and more relevant
here, scholars find that accounts which send large numbers
of tweets occasionally send posts that get a lot of attention. In
analyses of message drivers, research reveals that accounts
with the highest number of original outgoing tweets some-
times have a small overlap with accounts that also received
the highest number of retweets (Boyraz, Krishnan, and
Catona 2015; LeFebvre and Armstrong 2018). In short, high-
tweeting accounts may be able to tap into visibility, regard-
less of their follower numbers, and shape political discourse
around an issue.
Similarly, we do not know if there are substantive differ-
ences in the quality of information message amplifiers and
message drivers share. To be sure, there are numerous studies
that assess the general quality of information in digital envi-
ronments as well as how it spreads. Social scientists studying
mis- and disinformation routinely find that low-quality infor-
mation is pervasive on platforms such as Facebook and Twitter
(Allcott, Gentzkow, and Yu 2019; Bessi et al. 2015; Ferrara
2017; Ross and Rivers 2018), that mis- and disinformation
spread faster and further than factually accurate information
(Del Vicario et al. 2016), and that efforts to correct bad infor-
mation do not work as well as we would like (Sangalang,
Ophir, and Cappella 2019; Scheufele and Krause 2019;
Thorson 2016; Yang, Torres-Lugo, and Menczer 2020). We
know, for example, that conspiracy stories thrive in some
social media communities because platforms make group
homogenization easy (Del Vicario et al. 2016). What is miss-
ing in the research is whether message amplifiers and message
drivers play similar (or different) roles in the spreading of low-
quality information that undergirds some conversations.
We would expect message amplifiers and message drivers
to play different roles in Twitter conversations because they
are often responding to different incentives. In his research,
Meyer (1995) found that “cultural elites,” which included
celebrities, used mass media to build a following and reputa-
tion with an audience, which they leveraged into other
opportunities such as additional jobs and appearances.
Consequently, when speaking on behalf of political causes,
cultural elites often “watered down” their positions and opin-
ions in an effort not to offend audience members, as this
could negatively affect their future revenue flows (Meyer
1995). It is reasonable to assume that this dynamic is similar
on social media platforms such as Twitter. Message amplifi-
ers, here often visibly verified cultural elites, use social
media to grow their online audiences and potential influence
while maintaining reputations that will further their profes-
sional aspirations. To this end, message amplifiers, particu-
larly ones with national profiles, may signal support for
individuals, issues, and events, but avoid the firebrand rheto-
ric that could come back to haunt them when doing so.
Similarly, message amplifiers may also be reticent to share
news stories with their original posts. News outlets often use
dramatic or inflammatory language in their headlines to
entice audiences to click over to their stories (Scacco and
Muddiman 2016), exactly the kind of language message
amplifiers may seek to avoid.
This is probably not true of message drivers, who are less
interested in how they are perceived by a broader audience
and more concerned about the value of engaging on digital
forums and directly affecting the course and content of the
conversation. Schradie (2019), for example, found that con-
servatives in particular spent hours posting content to
Facebook and Twitter because they saw it as their responsi-
bility to get the “truth” about politics out into the world.
Similarly, Rohlinger (2022) found that some users posted
more than 100 tweets a day in response to the school shoot-
ing in Parkland, Florida, most of which used hashtags related
to the shooting to share other, conspiracy-minded content.
Given the extant research, which indicates that the most
widely circulated falsehoods also tend to promote conserva-
tive points of view (Garrett and Bond 2021), conservative
message drivers are more likely to share low-quality news
stories with their posts.
Finally, we know very little about how account suspen-
sions affect the course and content of a conversation. To our
knowledge, there is only one study exploring the influence of
Twitter bans on political discourse. Jhaver et al. (2021)
examined the effect of Twitter bans on the number of conver-
sations about the deplatformed account, the spread of offen-
sive ideas held by the deplatformed account, and the overall
activity of supporters of the deplatformed account. They
found that deplatforming can be very fruitful in terms of
removing an account from a larger conversation and reduc-
ing its influence on discourse. Although this is a critically
important first step, it sheds little insight into how account
suspensions might influence the kinds of political influenc-
ers weighing in on an issue, the kinds of arguments they
make, or the quality of information they share with others.
Again, we would expect message amplifiers to behave
differently from message drivers in the wake of account
bans. Message amplifiers, who might be anxious about being
suspended themselves, may be more cautious in terms of
what and how they tweet after a Twitter suspension. For
example, accounts that signaled support for misinformation
or conspiracy theories may adjust their arguments and the
quality of the information they share after a suspension to
help make sure their accounts stay on the right side of
Twitter’s policies. Message drivers, however, may be much
less concerned with the prospects of a suspension, and they
may even see it as an inevitable outcome of their online work
(Rohlinger 2022; Schradie 2019). These accounts are more
likely to maintain their tweeting behavior, which may include
sharing misinformation and conspiracy theories.
Data and Methods
To explore whether a Twitter ban changed the information
circulated about the 2020 presidential election and the
Arizona audit, we used DiscoverText software and an aca-
demic developer’s license to scrape 245,020 tweets between
July 17, 2021, and August 5, 2021. On July 27, 2021, Twitter
suspended eight accounts related to the election audits, citing
platform manipulation and spam. The suspension included
the official account associated with the audit (@ArizonaAudit),
an account known for spreading misinformation
(@AuditWarRoom), and six AuditWarRoom spinoff
accounts targeting additional states where Republicans
contested the election (@AuditArizona, @AuditMichigan,
@AuditWisconsin, @AuditNevada, @AuditGeorgia, and
@AuditPennsylvania). The ban was sudden and unexpected,
meaning that it can be understood as a quasi-exogenous
“treatment” that may or may not influence whether and how
individuals engage with these topics on Twitter. We used the
keywords “Arizona audit” for our search parameters to col-
lect tweets. As two of the suspended accounts directly con-
cerned the Arizona audit and the remaining accounts were
spinoffs inspired by the Arizona audit, the keywords “Arizona
audit” were selected to capture discourse likely to be about
the audit and reactions to the account suspensions.
We used DiscoverText to retroactively scrape and orga-
nize the data through Twitter’s application programming
interface version 2, which stores all data and metadata older
than 14 days. DiscoverText is in-browser software developed
for researchers to help query Twitter’s application programming
interfaces and then organize and visualize data for analysis.
The program collects and stores data in “archives,” which
can be organized into smaller samples called “buckets.” The
use of DiscoverText has clear benefits. DiscoverText ensures
research compliance with Twitter’s terms of service. To this
end, it automatically removes content posted by accounts that
have been suspended or removed from Twitter and restores
that content when the accounts are no longer suspended. Additionally, it does not
allow researchers to export a data set from DiscoverText,
only Twitter identification numbers, to ensure that scholars
remain in compliance with Twitter’s developer agreement.1
The downside of Twitter’s developer agreement is that
researchers cannot include the content of suspended or
deplatformed accounts in their analyses, which means that
our data set does not include content posted by the eight sus-
pended accounts. That said, we are confident that our sam-
pling strategy adequately captures discourse, and potential
discourse changes, resulting from the ban. In a separate
DiscoverText data collection of 199,946 tweets in advance of
the ban using the same keywords, only one AuditWarRoom
post received more than 200 retweets in the sample. This
suggests that the accounts were a touchstone for debate and
helped shape discourse but were not the sole drivers of con-
tent about the Arizona audit.
To analyze message amplifiers, we used DiscoverText’s
“exact duplicate” function to sort tweets into “groups,”
allowing us to identify posts that were retweeted by at least
one other account during the 20-day period of interest. We
operationalized message amplifiers as accounts that had at
least 200 retweets. As there is not a standard metric by which
researchers define message amplifiers (Dubois and Blank
2018; Jackson and Foucault Welles 2015; LeFebvre and
Armstrong 2018), we selected 200 retweets as the cutoff
point. This, we reasoned, allowed us to identify a broad range
of accounts that potentially influenced discourse surrounding
the Arizona audit and maintain an operable sample size for
subsequent coding. A total of 148 accounts met this criterion,
and their retweeted posts constitute 55.4 percent of the total
sample. Relative to message amplifiers, we explore three
research questions: Does banning accounts on Twitter change
who shapes the conversation around the 2020 presidential
election and Arizona audit? Does banning accounts on
Twitter change the kinds of arguments message amplifiers
make about the 2020 presidential election and Arizona audit?
Does banning accounts on Twitter affect the type and quality
of information shared by message amplifiers?
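The amplifier operationalization just described (group exact-duplicate tweets, then keep accounts whose posts drew at least 200 retweets) can be sketched in a few lines. This is an illustrative sketch, not DiscoverText’s internals; the `find_amplifiers` helper and its tuple-based input format are assumptions of ours.

```python
from collections import Counter

def find_amplifiers(tweets, min_retweets=200):
    """Return accounts whose original posts drew >= min_retweets retweets.

    `tweets` is a list of (account, text, is_retweet) tuples. Exact-
    duplicate texts are grouped, mimicking DiscoverText's "exact
    duplicate" function, and the first non-retweet poster of a text is
    treated as its original author.
    """
    retweet_counts = Counter()   # text -> number of retweets seen
    original_author = {}         # text -> account that first posted it
    for account, text, is_retweet in tweets:
        if is_retweet:
            retweet_counts[text] += 1
        else:
            original_author.setdefault(text, account)
    return {
        original_author[text]
        for text, n in retweet_counts.items()
        if n >= min_retweets and text in original_author
    }
```

With the paper’s cutoff of 200 retweets, 148 accounts qualified; lowering `min_retweets` broadens the set of accounts treated as amplifiers.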
To assess whether the ban changed who shapes discourse,
we analyzed the biographical descriptions, profile pictures,
and emojis associated with the 148 accounts. In addition to
coding whether the account was verified or not, we analyzed
how account holders identified themselves relative to the
causes and issues they referenced. Doing so generated 108
separate codes, inductively developed by the lead
researcher, that captured each account’s stated current or
former occupation (e.g., elected officials, academics, data
analysts, teachers, media professionals, authors, engineers);
political and partisan alignment (e.g., Democrat, Republican,
independent, Libertarian, Trump supporter); references to
political engagement (e.g., engaged citizen, democracy sav-
age, citizen legislator); references to America, democracy,
and patriotism; other affinities (e.g., dog lover, cat lover,
music lover, bourbon lover); geographic mentions (local,
state, region, and country, as well as mentions of rural and
urban attachments); mentions of conservative, progressive,
and QAnon causes; and support for particular individuals
and groups, among others. Additionally, given the increased
importance of visual indicators to signaling political orien-
tations and allegiances (Gerbaudo 2015; Kariryaa et al.
2022), the lead researcher inductively coded 69 types of
emojis included in the biography of each account as well as
political, social, and religious symbols included in the
account profile picture and banner (another 24 categories).
These inductive categories were collapsed, when possible,
into more meaningful categories (e.g., particular offices and
types of political candidates were collapsed into a single
political officeholder and candidate category) so that chi-
square (>30 cases) and Fisher’s exact (<30 cases) tests
could be used to assess whether there was a nonrandom rela-
tionship between the dependent variable (e.g., timing of the
post) and independent variables (e.g., political candidate or
politicians).
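The test-selection rule above (chi-square when a characteristic has more than 30 cases, Fisher’s exact when it has fewer) can be made concrete with a small sketch. In practice a statistics package would be used; the pure-Python helpers below are our own illustrative versions of the two tests (the Pearson statistic for an r × c table, and a two-sided Fisher’s exact p value for a 2 × 2 table).

```python
import math

def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

def fisher_exact_p(table):
    """Two-sided Fisher's exact p value for a 2 x 2 table: sum the
    hypergeometric probabilities of every table (with the same margins)
    that is no more likely than the observed one."""
    (a, b), (c, d) = table
    n, r1, c1 = a + b + c + d, a + b, a + c

    def prob(x):  # P(top-left cell == x) with the margins fixed
        return (math.comb(r1, x) * math.comb(n - r1, c1 - x)
                / math.comb(n, c1))

    p_obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))
```

Fisher’s exact test is preferred for the small cells because the chi-square approximation is unreliable when expected counts are low.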
To assess whether and how the Twitter suspension changed
the kinds of claims message amplifiers made about the
Arizona audit and the presidential election more generally,
the lead researcher and an undergraduate research assistant
analyzed whether the tweet supported or opposed the audit as
well as the general argument(s) referenced in the tweet. The
lead researcher read all of the tweets, inductively coded the
content of each in NVivo and then created a codebook that
captured the most prominent arguments in the sample sup-
porting and opposing the audit and the election results. This
process generated 19 mutually exclusive codes. An “unclear”
category was included to capture tweets that were neutral or
(1) reported an action or event related to the audit but did not
express supportive or oppositional sentiment regarding the
audit or (2) shared a neutral headline related to the audit but
offered no comment on the headline or audit. An example of
the former would be an account remarking that they thought
the audit might not be complete by the promised July deadline.
An example of the latter would be an account sharing a
news story with a headline to that effect with no original
content in the tweet. All coding was dichotomous so that we
could capture the presence of more than one type of argument,
if applicable.
1Learn more about Twitter’s developer agreement at https://developer.twitter.com/en/developer-terms/agreement-and-policy.
Finally, we coded what information message amplifiers
shared and, if it was a news source, the quality of this infor-
mation. The kinds of information account holders shared
included legal documents; materials for political parties and
candidates; interest group reports; government documents;
screenshots of fund-raising materials; their own and others’
tweets; pictures of different aspects of the audit, Trump sup-
porters, and rallies; Biden and other well-known Democrats;
and blogs. Some individuals did not include additional mate-
rial with their tweets, which we coded as “nothing shared.” If
the account holder shared a news source, we categorized it
according to Ad Fontes Media’s (2022) media bias chart. This
media bias chart, which is updated frequently, categorizes
outlets into one of seven bias types: most extreme left, hyper-
partisan left, skews left, middle, skews right, hyperpartisan
right, and most extreme right. Ad Fontes Media employs 40
analysts to categorize outlet content. Each article or episode is
rated by three analysts representing the political spectrum on
the basis of self-reported political views (left, right, and cen-
ter). Reliability scores for coding between the lead researcher
and undergraduate research assistant relative to the claims
and information shared were consistently high (κ = .91).
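The reliability figures reported here are Cohen’s kappa, which corrects raw percentage agreement for the agreement two coders would reach by chance. A minimal sketch (the `cohens_kappa` helper is our own):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: probability both coders pick the same label
    # if each labeled items independently at their observed rates.
    chance = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (observed - chance) / (1 - chance)
```

A kappa of .91, as reported for the claims and information coding, indicates near-perfect agreement under common rules of thumb.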
We also explore whether the Twitter ban changed the
accounts posting the most tweets (message drivers) or the
kinds of content they posted. Here, we used DiscoverText to
isolate tweets by day and exported the tweets into a “bucket.”
Then, using the “Top Meta Explorer” option, we identified
the top 10 posters for each of the 20 days and analyzed the
content of their tweets. Because, to our knowledge, message
driver is a novel concept, we included the top 10 posters
from each day (on one day the top 11 because of a tie for 10th
place) so that we could better assess the range of posting
behavior during the time period of interest and identify what
it means to be a highly active poster in the sample. The lead
researcher coded each account according to (1) whether it
supported or opposed the audit, (2) the types of arguments
made in opposition or support of the audit, and (3) whether
additional material was shared with their post. In terms of the
latter, the lead researcher coded tweets supporting the audit
or big lie as supportive of Trump, supportive of Republicans,
opposed to Democrats, or supporting or sharing a conspiracy
theory. Tweets opposing the audit were coded as opposed to
Trump, opposed to Republicans, supportive of Democrats, or
supporting or sharing a conspiracy. We conducted a reliabil-
ity check of the coding using 20 percent of the sample and
reliability scores were high (κ = .94).
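The message-driver selection (the top 10 posters per day, keeping ties at 10th place, which is how one day yielded 11 accounts) can be sketched as below; the `top_posters_by_day` helper and its (date, account) input format are our own illustrative choices, not DiscoverText’s “Top Meta Explorer.”

```python
from collections import Counter, defaultdict

def top_posters_by_day(tweets, k=10):
    """For each day, return (account, tweet_count) pairs for the k most
    frequent posters, keeping every account tied with the k-th place.

    `tweets` is a list of (date, account) pairs.
    """
    per_day = defaultdict(Counter)
    for date, account in tweets:
        per_day[date][account] += 1
    result = {}
    for date, counts in per_day.items():
        ranked = counts.most_common()
        if len(ranked) > k:
            cutoff = ranked[k - 1][1]  # count held by the k-th place
            ranked = [(acct, n) for acct, n in ranked if n >= cutoff]
        result[date] = ranked
    return result
```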
We supplemented this analysis with information from Bot
Sentinel, a free, nonpartisan platform that uses artificial
intelligence and machine learning to score accounts from 0
percent to 100 percent. The higher the score on Bot Sentinel,
the more likely the account engages in bad behavior such as
“harassment, toxic trolling, or uses deceptive tactics engineered
to cause division and chaos” (https://botsentinel.com/info/about).
Account rating categories, which are based on
an analysis of several hundred tweets associated with the
account, include normal (a score from 0 percent to 24 per-
cent), satisfactory (a score from 25 percent to 49 percent),
disruptive (a score from 50 percent to 74 percent), and prob-
lematic (a score from 75 percent to 100 percent). The Bot
Sentinel scores provide additional context regarding an
account’s Twitter behavior and, specifically, allow us to
assess whether tweets about the audit are consistent with a
larger corpus of posts shared.
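Bot Sentinel’s four rating bands amount to a simple score lookup. A sketch follows (Bot Sentinel reports the category itself; the function and its name are ours):

```python
def bot_sentinel_category(score):
    """Map a Bot Sentinel score (0-100) to its published rating band."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 24:
        return "normal"
    if score <= 49:
        return "satisfactory"
    if score <= 74:
        return "disruptive"
    return "problematic"
```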
Political Influencers and the
Information Environment
Message Amplifiers
Table 1 summarizes the profile characteristics mentioned at
least five times in message amplifiers’ accounts. The table
provides the frequency of relevant information, including
emojis, provided in the biography and profile picture before
the ban, the day of the ban, and after the ban, and notes
whether there is a significant difference in the distribution of
each profile characteristic. Table 1 indicates that overall, the
kinds of accounts that get the most retweets are largely stable
before and after the ban. However, there are two significant
differences worth noting. First, accounts expressing at least
one conservative political sentiment were retweeted more
often before the ban. Second, media outlets, fact-checking
groups, and watchdog organizations were retweeted more
often after the ban. At first glance, this suggests that the ban
influenced who shaped the conversation about the audit, and,
given the significant increase in media outlets, watchdog
groups, and fact-checking organizations, potentially for the
better.
Of course, not all the accounts are retweeted at similar
rates. Table 2 shows the frequency of retweets by quartiles
for each of the profile characteristics. Here, we can see
whether there are significant differences in the distribution
of the profile characteristics among posts retweeted 201 to
278 times, 279 to 447 times, 448 to 950 times, and 951 to
23,831 times. Table 2 indicates that there is only one sig-
nificant difference. Accounts with 448 to 950 retweets
expressed a right-leaning political sentiment more often in
the text description than accounts in the other three quar-
tiles. In short, there are almost no differences in the profile
characteristics of accounts receiving the most and the fewest
retweets. We were surprised by the lack of variation in profile
pictures and emoji use, particularly regarding the use of the
American flag, which is a mainstay of conservative politics
(Kariryaa et al. 2022). This may just be a reflection of our
sample, but it may also reflect the efforts of Democrats to
“reclaim” the flag and its symbolism from Republicans
(Harrison 2020).
Table 3 takes a closer look at the accounts to assess whose
ideas were amplified and how this changed before and after
the ban. Table 3 shows the frequency of the type (e.g., a
media outlet, a media professional, an officeholder) and ori-
entation (e.g., liberal/left, moderate/middle, or conservative/
right on the basis of the Ad Fontes media bias chart, which
was collapsed here for parsimony) of the account as well as
the number of retweets each account type received before,
the day of, and after the ban. Pursuant to our agreement with
Twitter, we report only the data of public figures, entities,
and organizations, which is why the total number of tweets is
slightly less than the number reported above.2 Table 3 reveals
that the ban seems to be a mixed bag in terms of its ability to
improve information about the Arizona audit. On the one
hand, more moderate (what Ad Fontes labels "middle bias")
outlets, media professionals, and a watchdog organization
are retweeted relatively more often after the ban.
Arguably, this signals a possible improvement in the quality
of discourse, as the information shared is more reliable.
On the other hand, it is clear that partisans drive the
discussion over the audit. Before the ban, liberal/left and
conservative/right accounts were retweeted at similar rates
overall. Liberal/left account retweets make up 49.8 percent
Table 1. Summary of the Profile Characteristics Associated with Message-Amplifying Accounts.

Characteristic | Before the Ban (n = 204) (%) | Day of the Ban (n = 53) (%) | After the Ban (n = 76) (%) | Total n | % of Accounts in the Sample (n = 148)
Text
Media professional | 65.9 | 4.6 | 29.6 | 44 | 29.7
Mentions a job or career | 61.8 | 11.8 | 26.5 | 34 | 23.0
Expresses a political sentiment, right* | 70.6 | 17.7 | 11.8 | 34 | 23.0
Political candidate or politician | 57.7 | 11.5 | 30.8 | 26 | 17.6
Identifies with a political party or person | 68.0 | 16.0 | 16.0 | 25 | 16.9
A media outlet, fact-checking group, or watchdog organization** | 25.0 | 15.0 | 60.0 | 20 | 13.5
Expresses a political sentiment, patriotic | 46.7 | 13.3 | 40.0 | 15 | 10.1
Mentions values* | 33.3 | 33.3 | 33.3 | 15 | 10.1
Mentions specific hobbies or individual attributes | 71.4 | 14.3 | 14.3 | 14 | 9.5
Identifies with military or law enforcement* | 58.3 | 33.3 | 8.3 | 12 | 8.1
Mentions religion or a religious affiliation | 70.0 | 10.0 | 20.0 | 10 | 6.8
Identifies as an academic | 75.0 | 25.0 | .0 | 8 | 5.4
Expresses a political sentiment, left | 80.0 | 20.0 | .0 | 5 | 3.4
Mentions a geographic location | 60.0 | 20.0 | 20.0 | 5 | 3.4
Emojis
American flag | 61.8 | 20.6 | 17.7 | 34 | 23.0
Military or law enforcement symbols | 83.3 | 16.7 | .0 | 6 | 4.1
Profile picture
American flag | 70.0 | 20.0 | 10.0 | 20 | 13.5
Trump related | 50.0 | 25.0 | 25.0 | 20 | 13.5
Republican rally or protest | 42.9 | 28.6 | 28.6 | 14 | 9.5
Newsroom or set | 66.7 | 16.7 | 16.7 | 6 | 4.1
Political or party logo | 100.0 | .0 | .0 | 5 | 3.4
Campaign sign or materials | 80.0 | .0 | 20.0 | 5 | 3.4
Quotation about actions mattering | 80.0 | 20.0 | .0 | 5 | 3.4

Note: Accounts could signal more than one characteristic in their descriptions. As a result, there are more profile characteristics than there are political amplifier accounts.
*p < .05 and **p < .01 (chi-square, two-tailed test; Fisher's exact test was also conducted if the sample size was <30 observations).
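The significance tests reported in the table notes pair a chi-square test with Fisher's exact test when cell counts are small. For a 2 × 2 table, the exact test can be sketched in pure Python; the counts below are hypothetical illustrations, not the study's data:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # Probability of observing x in the top-left cell, margins fixed.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # The tiny tolerance absorbs floating-point noise in the comparison.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-12))

# Hypothetical 2x2 table: 5 of 12 accounts show a trait in one period
# versus 1 of 20 in another (illustrative numbers only).
p_value = fisher_exact_two_sided(5, 7, 1, 19)
```

With these illustrative counts the test returns p ≈ .018, which would be flagged at the *p < .05 level; a library routine such as scipy.stats.fisher_exact computes the same quantity at scale.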
2A list of public actors and the number of retweets they received is
available in the Supplement.
Table 2. Summary of the Profile Characteristics Associated with Message Amplifiers by Quartile.

Characteristic | 201–278 Retweets (Q4) (%) | 279–447 Retweets (Q3) (%) | 448–950 Retweets (Q2) (%) | 951–23,831 Retweets (Q1) (%) | Total n
Text description
Media professional | 22.7 | 25.0 | 25.0 | 27.3 | 44
Mentions a job or career | 20.6 | 23.5 | 29.4 | 26.5 | 34
Expresses a political sentiment, right* | 17.7 | 23.5 | 44.1 | 14.7 | 34
Political candidate or politician | 26.9 | 19.2 | 23.1 | 30.8 | 26
Identifies with a political party or person | 26.9 | 19.2 | 23.1 | 30.8 | 25
A media outlet, fact-checking group or watchdog organization | 30.0 | 40.0 | 15.0 | 15.0 | 20
Expresses a political sentiment, patriotic | 20.0 | 33.3 | 20.0 | 26.7 | 15
Mentions values | 40.0 | 20.0 | 20.0 | 20.0 | 15
Mentions specific hobbies or individual attributes | 21.4 | 21.4 | 42.9 | 14.3 | 14
Identifies with military or law enforcement | 16.7 | 25.0 | 25.0 | 33.3 | 12
Mentions religion or a religious affiliation | 40.0 | 30.0 | 30.0 | .0 | 10
Identifies as an academic | 12.5 | 37.5 | .0 | 50.0 | 8
Expresses a political sentiment, left | 40.0 | .0 | 20.0 | 40.0 | 5
Mentions a geographic location | 40.0 | .0 | 20.0 | 40.0 | 5
Emojis
American flag | 14.7 | 26.5 | 35.3 | 23.5 | 34
Military or law enforcement symbols | .0 | 50.0 | 33.3 | 16.7 | 6
Profile
American flag | 20.0 | 20.0 | 45.0 | 15.0 | 20
Trump related | 5.0 | 30.0 | 30.0 | 35.0 | 20
Republican rally or protest | 14.3 | 28.6 | 28.6 | 28.6 | 14
Newsroom or set | 50.0 | .0 | 16.7 | 33.3 | 6
Political organization logo | .0 | .0 | 40.0 | 60.0 | 5
Campaign sign or materials | 40.0 | .0 | .0 | 60.0 | 5
Quotation about actions mattering | .0 | 40.0 | 60.0 | .0 | 5

Note: Accounts could signal more than one characteristic in their descriptions. As a result, there are more profile characteristics than there are accounts. Q = quartile.
*p < .05 (chi-square, two-tailed test; Fisher's exact test was also conducted if the sample size was <30 observations).
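The quartile cut points used in Table 2 (and again in Tables 6 and 9) partition amplifier accounts by retweet count. A minimal sketch of that binning, assuming only the cut points given in the table (the function name and example counts are ours):

```python
from bisect import bisect_left

# Upper bounds of each retweet quartile, from Table 2 (Q4 = fewest, Q1 = most).
BOUNDS = [278, 447, 950, 23831]
LABELS = ["Q4 (201-278)", "Q3 (279-447)", "Q2 (448-950)", "Q1 (951-23,831)"]

def quartile(retweets: int) -> str:
    """Map a retweet count to the quartile label used in Table 2."""
    if not 201 <= retweets <= 23831:
        raise ValueError("count falls outside the sampled range (201-23,831)")
    # bisect_left finds the first upper bound >= the count, i.e. its quartile.
    return LABELS[bisect_left(BOUNDS, retweets)]

labels = [quartile(n) for n in (201, 447, 448, 1598)]
```

For example, a post retweeted 1,598 times (such as the Stefanik tweet discussed below) lands in the top quartile, Q1.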
Table 3. Summary of Public-Oriented, Message-Amplifying Accounts.

Account Type | Before the Ban | Day of Ban | After the Ban
Liberal/left
Outlets | 1.7% (627) | 0 | 8.6% (2,877)
Media professional | 35.5% (12,845) | 6.8% (643) | 19.7% (6,601)
Democratic party, officeholder, or candidate | .6% (234) | 0 | 0
Other | 12.0% (4,319) | 0 | 0
Middle/moderate
Outlets | 0 | 0 | 3.8% (1,275)
Media professional | 6.9% (2,502) | 0 | 7.0% (2,336)
Watchdog organization | 0 | 0 | 5.3% (1,790)
Conservative/right
Outlets | 3.9% (1,394) | 0 | 7.9% (2,629)
Media professional | 11.1% (4,099) | 48.2% (4,518) | 14.3% (4,773)
Republican party, officeholder, or candidate | 20.8% (7,533) | 0 | 14.3% (4,772)
Other | 7.1% (2,583) | 44.9% (4,211) | 19.2% (6,424)
Total number of RTs by public person or entity | 100% (36,136) | 100% (9,372) | 100% (33,477)

Note: Because of rounding, percentages may total more than 100 percent. Pursuant to our agreement with Twitter, this table reflects only public accounts. The liberal/left, middle/moderate, and conservative/right categories are based on the Ad Fontes media bias chart. RT = retweet.
of the sample and conservative/right accounts make up 42.9
percent of the sample before the ban. This changes the day of
and after the ban. Conservative/right accounts dominate dis-
course the day of the ban and constitute the majority of
retweets after the ban. More important, there is a distinct
shift in the types of conservative/right accounts that are
amplified. Before the ban, Republican officeholders, candidates,
and the party as well as media professionals are the most
retweeted in this category. Although they maintain a pres-
ence, the day of and after the ban, there is a sharp increase in
the “other” category, which consists of Trump loyalists, a
Trump spokesperson, and an activist advocating for “elec-
tion integrity” in Wisconsin (see the Supplement). In short,
the Twitter ban does not appear to improve the kinds of
accounts that amplify information about the Arizona audit.
The ban, however, does seem to have shaped the kinds of
arguments made before and after the ban. Table 4 indicates
whether message amplifiers generally supported or opposed
the audit, and Table 5 summarizes the general argument(s)
they made in their tweets. First, Table 4 shows that only tweets
expressing opposition to the audit significantly decreased after
the ban. The frequency of tweets supporting the Arizona audit
remained relatively stable. Second, there is a clear shift in the
frequency of the arguments made before and after the ban,
particularly among audit supporters (Table 5). Conspiracy
theory–based arguments supporting the audit and negative
comments about public figures who opposed the audit were
significantly more likely to appear in posts before the ban than
after, as were tweets supporting Donald Trump. These differ-
ences, however, disappear when we look at the frequency of
Table 4. Summary of Audit Support and Opposition.

Argument Type | Before the Ban (n = 90) (%) | Day of the Ban (n = 18) (%) | After the Ban (n = 45) (%) | Total n (153)
Support the audit | 59.3 | 16.3 | 24.4 | 86
Oppose the audit** | 56.9 | 2.0 | 41.2 | 51
Unclear | 62.5 | 18.8 | 18.8 | 16

Note: Accounts could make more than one argument in their posts. As a result, there are more argument types than there are political amplifier accounts.
**p < .01 (chi-square, two-tailed test; Fisher's exact test was also conducted if the sample size was <30 observations).
Table 5. Audit Support and Opposition before, the Day of, and after the Ban.

Argument | Before the Ban (%) | Day of the Ban (%) | After the Ban (%) | Total n
Supports the audit (n = 174)
Nonspecific support | 62.0 | 12.7 | 25.4 | 71
Supportive based on a conspiracy theory* | 79.3 | 10.3 | 10.3 | 29
Disparages individuals who do not support the audit* | 73.9 | 17.4 | 8.7 | 23
Supportive of audit expansion | 71.4 | .0 | 28.6 | 14
Supportive of Trump* | 90.9 | 9.1 | .0 | 11
Supportive of proaudit politicians | 25.0 | 25.0 | 50.0 | 8
Critical of the Twitter ban*** | .0 | 71.4 | 28.6 | 7
Supportive of using the audit to overturn the election | 83.3 | .0 | 16.7 | 6
Supportive of removing individuals from office who oppose the audit | 100.0 | .0 | .0 | 5
Opposes the audit (n = 80)
Audit is a threat to democracy* | 52.0 | .0 | 48.0 | 25
Corrects misinformation or says how it is spread | 71.4 | .0 | 28.6 | 14
Disparages individuals who support the audit | 75.0 | .0 | 25.0 | 12
Calls it a sham or fraudit | 63.6 | .0 | 36.4 | 11
Discusses audit funding or funders | 37.5 | .0 | 62.5 | 8
Conspiratorial ideas about republicans or activities of conservatives | 25.0 | .0 | 75.0 | 4
Makes fun of the audit | 100.0 | .0 | .0 | 2
Proposes or is supportive of antiaudit action | 50.0 | .0 | 50.0 | 2
Opposes audit because Biden won | 100.0 | .0 | .0 | 1
Celebrates the Twitter ban | .0 | 100.0 | .0 | 1

Note: Accounts could make more than one argument in their posts. As a result, there are more argument types than there are political amplifier accounts.
*p < .05 and ***p < .001 (chi-square, two-tailed test; Fisher's exact test was also conducted if the sample size was <30 observations).
retweets by quartile, suggesting that conspiracy theory and
negative comments remain relatively prominent after the ban
(Table 6).
A closer analysis of the tweets in the top quartile before,
the day of, and after the ban reveals that what changes is how
amplifiers talk about the audit. Of the 16 tweets in the top
quartile before the ban, six tweets are supportive of the audit,
and four of these tweets were from Republican officeholders
and candidates who either called for audit expansion or iden-
tified co-conspirators in the “stolen” election. For example,
Elise Stefanik (1,598 retweets), who is in the top quartile,
alerted her followers, “Now that the Biden admin admitted to
colluding with Big Tech to censor Americans. Was it the
Biden Admin, who asked Big Tech to remove all videos of
the duly elected Arizona State Senate’s forensic Audit hear-
ing off their platforms? Patriots see what’s happening!” The
other two posts suggest that the Democratic National
Committee and RINOs (Republicans in name only) are com-
plicit in the election “stolen” from Trump (2,440 retweets
and 1,175 retweets, respectively). Compare this with the top
quartile the day of, and after, the ban. Of the 21 tweets in the
top quartile, 12 are supportive of the audit, and four
announced that Twitter had banned the audit accounts. The
post with the most retweets urged conservatives not to be
distracted by the ban and to “follow the Arizona audit results”
(George Papadopoulos, 5,529 retweets), but the rest circu-
lated claims that were rooted in or supported conspiracy
theories about the 2020 election being stolen. Trump spokes-
person Liz Harrington (2,476 retweets), for example, tweeted
that Arizona “found 270,000 potential fraudulent ballots . . .
in ONE county.” Similarly, a Newsmax reporter whose
account was suspended, but who also accounted for three of
the 12 most retweeted conservative posts, celebrated the
work of the auditors and reminded his followers that Karen
Fann (a state senator in Arizona) outlined a plan to decertify
the 2020 election (1,095 retweets and 1,046 retweets, respec-
tively). In other words, although the ban changed how politi-
cal amplifiers talked about the stolen election, it did not
strictly reduce the circulation of misleading and incorrect
information.
The analysis also reveals that the account bans may cause
some public figures, particularly those with seemingly clear
professional aspirations, to either move their public focus to
less risky topics or to double down on their support for the
big lie. To illustrate this, we observe the activity of two pub-
lic figures who appear in the top quartile of the sample, Elise
Stefanik and Wendy Rogers. Elise Stefanik was a vocal sup-
porter of the audit and conspiracy theories about the election
before Twitter suspended the accounts. However, Stefanik,
the Republican conference chairwoman who has built a pow-
erful fund-raising machine and is looking to run for a higher
office (Zanona and Orr 2022a), did not comment again on
the audit in the wake of the account bans. Instead, she
attacked Biden from another angle, tweeting about the
Table 6. Audit Support and Opposition by Quartile.

Argument | 201–278 RTs (%) | 279–447 RTs (%) | 448–950 RTs (%) | 951–23,831 RTs (%) | Total n
Supports the audit (n = 174)
Nonspecific support | 23.9 | 29.6 | 28.2 | 18.3 | 71
Supportive based on a conspiracy theory | 24.1 | 20.7 | 37.9 | 17.2 | 29
Disparages individuals who do not support the audit* | 21.7 | 26.1 | 47.8 | 4.4 | 23
Supportive of audit expansion | 28.6 | 28.6 | 35.7 | 7.1 | 14
Supportive of Trump | 18.2 | 27.3 | 45.5 | 9.1 | 11
Supportive of proaudit politicians | 37.5 | 25.0 | 25.0 | 12.5 | 8
Critical of the Twitter ban | .0 | 71.4 | 14.3 | 14.3 | 7
Supportive of using the audit to overturn the election | 33.3 | 50.0 | .0 | 16.7 | 6
Supportive of removing individuals from office who oppose the audit | 40.0 | 20.0 | 20.0 | 20.0 | 5
Opposes the audit (n = 80)
Audit is a threat to democracy | 20.0 | 24.0 | 32.0 | 24.0 | 25
Corrects misinformation or says how it is spread* | 57.1 | 14.3 | 7.1 | 21.4 | 14
Disparages individuals who support the audit | 33.3 | 16.7 | 25.0 | 25.0 | 12
Calls it a sham or fraudit | 45.5 | 18.2 | 9.1 | 27.3 | 11
Discusses audit funding or funders | 25.0 | 37.5 | 12.5 | 25.0 | 8
Conspiratorial ideas about republicans or activities of conservatives | 25.0 | 25.0 | 25.0 | 25.0 | 4
Makes fun of the audit | 50.0 | .0 | .0 | 50.0 | 2
Proposes or is supportive of antiaudit action | 100.0 | .0 | .0 | .0 | 2
Opposes audit because Biden won | .0 | .0 | .0 | 100.0 | 1
Celebrates the Twitter ban | .0 | .0 | .0 | 100.0 | 1

Note: Accounts could make more than one argument in their posts. As a result, there are more argument types than there are accounts. RT = retweet.
*p < .05 (chi-square, two-tailed test; Fisher's exact test was also conducted if the sample size was <30 observations).
“Biden border crisis,” which, she claims, is an “absolute
catastrophe” involving more than 19,000 children at the
southern U.S. border. As a politician who carefully monitors
the shifting winds in Washington, D.C. (Zanona and Orr
2022b), Stefanik arguably has incentives to make sure she
keeps her access to mainstream platforms such as Twitter.
Compare Stefanik with Arizona state senator Wendy Rogers,
who has built a national profile by appealing to white nation-
alists and calling for violence against her political opponents
(Reinhard and Helderman 2022). The day of and after the
ban, Rogers helped publicize the account ban, warned fol-
lowers that Twitter would ban her next, amplified her own
activities furthering the audit, and expanded her attacks on
Joe Biden. In other words, Rogers did not really alter her
tweeting behavior. Arguably, Rogers has little incentive to do
so. As she has embraced the extremes of the Republican
Party, she has little to lose—and perhaps something to gain
in terms of credibility with her supporters—from getting sus-
pended from Twitter.
Table 7, which shows the frequency of the type of mate-
rial shared by support or opposition to the Arizona audit, pro-
vides further support for the relative ineffectiveness of the
ban. Opponents of the audit are significantly more likely to
share mainstream and partisan left news articles and support-
ers of the audit are more likely to circulate partisan right
news content or not to share content at all. Although this is
not particularly surprising, Table 8 indicates that the quality
of information shared changes very little before and after the
ban. For example, there are no significant differences in the
rate at which (largely) supporters shared partisan right news
coverage before, the day of, or after the ban. The same is
true of audit opponents, who share mainstream and partisan
left news sources at similar rates before and after the ban.
That said, the accounts sharing partisan right information do
not have the most retweeted posts. Table 9, which shows the
frequency of the type of material shared by quartile, indi-
cates that nearly half of the accounts sharing partisan right
content are only retweeted between 279 and 447 times. The
Table 7. Summary of Materials Shared by Audit Supporters and Opponents.

Material Shared | Supports the Audit (n = 86) (%) | Opposes the Audit (n = 51) (%) | Unclear (n = 11) (%) | Total (n = 148)
None*** | 78.1 | 19.6 | 2.4 | 41
Partisan right news article*** | 94.1 | 2.9 | 2.9 | 34
Mainstream news article*** | 12.5 | 75.0 | 12.5 | 32
Social media screenshot | 57.9 | 26.3 | 15.8 | 19
Partisan left news article* | 14.3 | 85.7 | .0 | 7
Legal documents | 20.0 | 60.0 | 20.0 | 5

*p < .05 and ***p < .001 (chi-square, two-tailed test; Fisher's exact test was also conducted if the sample size was <30 observations).
Table 8. Frequency of Materials Shared before, the Day of, and after the Ban.

Material Shared | Before the Ban (n = 86) (%) | Day of the Ban (n = 51) (%) | After the Ban (n = 11) (%) | Total (n = 148)
None** | 63.4 | 24.4 | 12.2 | 41
Partisan right news article | 61.8 | 8.8 | 29.4 | 34
Mainstream news article | 53.1 | 3.1 | 43.8 | 32
Social media screenshot | 52.6 | 21.1 | 26.3 | 19
Partisan left news article | 57.1 | .0 | 42.9 | 7
Legal documents* | 20.0 | .0 | 80.0 | 5

*p < .05 and **p < .01 (chi-square, two-tailed test; Fisher's exact test was also conducted if the sample size was <30 observations).
Table 9. Frequency of Materials Shared by RT Quartile.

Material Shared | 201–278 RTs (n = 36) (%) | 279–447 RTs (n = 38) (%) | 448–950 RTs (n = 37) (%) | 951–23,831 RTs (n = 37) (%) | Total (n = 148)
None | 17.1 | 19.5 | 26.8 | 36.6 | 41
Partisan right news article** | 20.6 | 47.1 | 26.5 | 5.9 | 34
Mainstream news article | 25.0 | 31.3 | 21.9 | 21.9 | 32
Social media screenshot | 36.8 | 15.8 | 21.1 | 26.3 | 19
Partisan left news article | 28.6 | 14.3 | 28.6 | 28.6 | 7
Legal documents | 40.0 | .0 | 20.0 | 40.0 | 5

Note: RT = retweet.
**p < .01 (chi-square, two-tailed test; Fisher's exact test was also conducted if the sample size was <30 observations).
accounts with the most retweets share partisan right news
content significantly less often. This suggests that the most
influential political amplifiers seem to either imply in their
posts that the Arizona audit is necessary because the election
was stolen from Trump or share content to that effect, but not
both. A close analysis of the tweets indicates that this is the
case. For example, the Elise Stefanik quotation mentioned
above, which suggests that Biden is implicated in a conspir-
acy with big tech, does not share additional information. This
is also true of the George Papadopoulos tweet warning audit
supporters not to be distracted by the account bans.
Message Drivers
Table 10 summarizes the distribution of the top 10 tweeting
accounts supporting and opposing the audit before, the day
of, and after Twitter suspended the audit accounts. Recall
that a total of 201 accounts are included in the message driver
sample. Here, we counted the number of accounts that posted
(or reposted) clear arguments supporting the audit, the num-
ber of accounts opposing the audit, and the number of
accounts in which the argument was unclear because it was a
news story, a watchdog post, or the account was private or
suspended. Table 10 makes clear that, other than the day of
the account suspension, there are consistently more accounts
tweeting their support for the Arizona audit. This trend is
particularly pronounced among accounts that sent at least 50
tweets on a given day. In fact, only one account opposing the
audit sent at least 50 tweets on a given day during the 20-day
sample. This is far fewer than the 25 audit-supporting
accounts during the same time frame. Accounts supportive of
the audit also post more tweets than those opposing the audit.
Audit-supporting accounts sent 65.8 percent of the tweets
before the suspension, 73.4 percent of the tweets the day of
the suspension, and 75.9 percent of the tweets after the sus-
pension. In short, the suspension does not seem to deter
audit-supporting accounts from posting.
Importantly, Table 11, which shows the distribution of dis-
ruptive and problematic accounts by audit support and oppo-
sition before, during, and after the account suspension,
indicates that there are more audit-supporting disruptive (a
Bot Sentinel score of 50 percent to 74 percent) or problematic
(a Bot Sentinel score of 75 percent to 100 percent) accounts
in the sample. To assess what kind of content message drivers
share, we conducted a qualitative content analysis of both
their tweets during the time frame of analysis and the
first 50 to 100 (re)tweets of the top 10 message drivers on
each of the 20 days included in the sample. This analysis
revealed three important insights. First, the analysis indicates
that 71 of the 101 accounts supported the audit and shared
Table 10. Distribution of Account Types among Message Drivers.

Each cell gives Support / Oppose / Unclear percentages, followed by the group's n.

Account Group | Before the Suspension | Day of the Suspension | After the Suspension
Top 10 tweeting accounts* | 53.5 / 41.6 / 5.0 (n = 101) | 50.0 / 50.0 / .0 (n = 10) | 52.2 / 38.9 / 8.9 (n = 90)
Accounts that sent at least 20 tweets | 69.2 / 26.9 / 3.8 (n = 26) | 100.0 / .0 / .0 (n = 3) | 83.3 / 12.5 / 4.2 (n = 24)
Accounts that sent at least 30 tweets | 75.0 / 18.8 / 6.3 (n = 16) | 100.0 / .0 / .0 (n = 1) | 100.0 / .0 / .0 (n = 16)
Accounts that sent at least 50 tweets | 90.0 / 10.0 / .0 (n = 10) | 100.0 / .0 / .0 (n = 1) | 100.0 / .0 / .0 (n = 15)
Total % of tweets sent | 65.8 / 30.5 / 3.8 (n = 2,360) | 73.4 / 26.6 / .0 (n = 237) | 75.9 / 19.2 / 4.9 (n = 2,317)

Note: The total is 201 accounts rather than 200 because one of the days had a tie for the 10th most sent tweets. As one account was supportive of the audit and the other was not, we left both in the sample.
*p < .05 (chi-square, two-tailed test; Fisher's exact test was also conducted if the sample size was <30 observations).
Table 11. Distribution of Disruptive and Problematic Accounts among Message Drivers.a

Each cell gives Support / Oppose percentages, followed by the group's n.

Account Group | Before the Suspension | Day of the Suspension | After the Suspension
Top 10 tweeting accounts | 64.8 / 35.2 (n = 54) | 50.0 / 50.0 (n = 6) | 73.2 / 26.8 (n = 41)
Accounts that sent at least 20 tweets | 66.7 / 33.3 (n = 18) | 100.0 / .0 (n = 2) | 89.5 / 10.5 (n = 19)
Accounts that sent at least 30 tweets | 75.0 / 25.0 (n = 12) | 100.0 / .0 (n = 1) | 100.0 / .0 (n = 16)
Accounts that sent at least 50 tweets | 88.9 / 11.1 (n = 9) | 100.0 / .0 (n = 1) | 100.0 / .0 (n = 15)

a. Dropped accounts that could not be classified because of suspension.
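The Bot Sentinel cutoffs used above (disruptive = 50 percent to 74 percent, problematic = 75 percent to 100 percent) amount to a simple threshold rule. A sketch under those cutoffs; the function name and the fallback label for lower scores are our own, not Bot Sentinel's terminology:

```python
def classify_bot_sentinel(score: int) -> str:
    """Bucket a Bot Sentinel score (0-100) using the cutoffs in the text:
    50-74 = "disruptive", 75-100 = "problematic".  Scores below 50 get a
    placeholder label of our own ("other"), since the text does not name one.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 75:
        return "problematic"
    if score >= 50:
        return "disruptive"
    return "other"

# The two accounts profiled below: "Bunny" scores 97, "Arizona Sam" 55.
buckets = {"Bunny": classify_bot_sentinel(97),
           "Arizona Sam": classify_bot_sentinel(55)}
```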
conspiracy theories about the election, coronavirus disease
2019, and globalism. Additionally, 54 of the 71 accounts were
categorized as disruptive or problematic by Bot Sentinel (and
two of these accounts were subsequently suspended by
Twitter). Second, the ban did not affect the number of accounts
actively spreading conspiracy theories. We found that, when
we counted the accounts spreading conspiracy theories on each
of the days, a total of 34 accounts spread misinformation
about the election before the suspension, 3 accounts spread
misinformation the day of the ban, and 34 accounts spread
misinformation after the ban.
Third, two accounts are almost completely responsible for
spreading conspiracy theories during this time frame,
although they engage in different tweeting behavior. The first
account, which we call Bunny3 (because this “problematic”
account with a 97 percent Bot Sentinel score claims to repre-
sent a former Playboy bunny who sells jewelry), posted 859
tweets after the account suspension. Although the account is
never among the most retweeted, Bunny is the top poster in
the sample six of the nine days after the account suspension
and largely works to amplify the work of Overstock founder
and election conspiracy theorist Patrick Byrne. When we
analyzed the corpus of tweets in the months before
Twitter suspended the audit accounts, we found that Bunny,
in her capacity as the page admin for the Byrne account,
largely worked to connect other Twitter users with Byrne on
locals.com. Although the account continued to push the
Byrne community after the suspension, her efforts were more
direct (e.g., made in replies to tweets) and specifically
amplified claims of election fraud, which are outlined
in a “FREE EXCLUSIVE VIDEO” by the “Top funder of the
AZ audit: Patrick Byrne,” and Byrne’s book The Deep Rig,
which outlines the “election fraud and what happened on
January 6th with pictures of Antifa changing into MAGA
gear” (from July 28).
The second account, which we call Arizona Sam4 (because
the “disruptive” account with a Bot Sentinel score of 55 per-
cent has no description, and the predominant image is of an
American flag and the Arizona state flag), is more active,
posting 1,544 conspiracy tweets during the 20-day period.
Arizona Sam is the top tweeter 17 of the 20 days included in
the sample and posted the most tweets of anyone in the sam-
ple on 12 different days, most of which occur before the audit
accounts were suspended. However, the account seems influ-
ential after the ban as well. Arizona Sam was among
the most retweeted accounts in the sample on three
different days, two of which fell after the suspension of the
audit accounts. His content across the 20 days is also
remarkably consistent. Although the account retweets content from
Arizona politicians, particularly Wendy Rogers, and election
conspiracy theorists (e.g., Jovan Pulitzer), he mostly
chastises politicians and officials who “embarrass” Arizona with
their “sham audit” that, he argues, is covering up fraud com-
mitted by Democrats and Dominion, a voting system com-
pany. The account’s most frequently sent tweet, which was
sent hundreds of times as a reply to other accounts question-
ing the election results, asked the Maricopa County Board of
Supervisors, “Can you tell us why you and Dominion refuse
to provide routers, passwords and election materials? WHAT
HAVE YOU DONE? [American flag emoji].” In short, the
Twitter ban seems to have had little influence on who was
tweeting about the Arizona audit or what they were tweeting
about.
Discussion and Conclusion
We do not find an association between the Arizona audit
account bans and an improved information environment.
First, we find that the ban does not seem to change the kinds
of ideas that are amplified on Twitter about the Arizona audit.
Although the popularity of accounts opposing the audit
declined, conservative/right accounts, which consistently
supported conspiracy theories, increased after the ban.
Second, we find that the ban does not seem to affect who
posts about the Arizona audit. Two accounts create the major-
ity of tweets. Arizona Sam sent the most tweets 17 out of the
20 days sampled but also remained steadfast in his support
that the election was stolen and that the fraud was being cov-
ered up in Maricopa County.
There are three interesting findings that warrant addi-
tional investigation. First, we find that the ban may have
changed whether and how some accounts talk about the
audit. We find that politicians respond differently to the
account bans. Although this may be a function of who has
the most standing at a given moment with an audience
(Rohlinger, Williams, and Teek 2020), it may also reflect the
incentives associated with tweeting. Politicians, who often
aspire to higher office, are likely to tweet in ways that will
further their professional goals. Viewed this way, there are
few incentives for Wendy Rogers to change her tweeting
behavior. Rogers, who has aligned herself with the extremes
of the Republican Party, has little to lose, and perhaps cred-
ibility to gain with her right-wing base, if Twitter boots her
from the platform. This is not true of Elise Stefanik, who
carefully monitors the political winds in her quest for higher
office. Stefanik likely sees professional downsides to losing
access to a large, politically diverse audience. As long as
Twitter is able to attract left-leaning and independent-
minded users, the Musk takeover is unlikely to change the
tweeting behavior of politicians seeking higher office.
Although politicians such as Wendy Rogers have little to
lose by becoming more extreme in their posts, there are still
incentives for most politicians to moderate their tweets and
position themselves so they might appeal to a broader swath
of the voting public down the road.
3The account holder joined in December 2012, has 366 followers,
and is following 24 accounts.
4This account holder joined in February 2021, has 2,710 followers,
and is following 716 accounts.
Second, we find that the quality of information accounts
share after a ban changes very little. Partisans predominantly
share partisan sources, and this does not change after Twitter
intervenes. Moreover, we find that posts with the most
retweets do not typically share any materials, whereas those
with relatively few retweets do. These findings lend some
support to our argument that some message amplifiers may
find it professionally expedient to signal support or opposi-
tion to an issue without attaching their opinions to clickbait
news headlines that could create problems for them later
(Scacco and Muddiman 2016) but also suggest that sharing
materials is an important, and underanalyzed, aspect of
Twitter behavior that warrants additional investigation.
Although users may be cautious regarding what they share
involving Musk personally, sharing behavior more broadly is
unlikely to change.
Finally, we find that Twitter bans seem particularly inef-
fective on message drivers, who continued their open sup-
port of the audit and denied the election results after the ban.
This finding buttresses our argument regarding the differen-
tial incentives driving the Twitter behavior of message
amplifiers and message drivers. In this case, the largely con-
servative message drivers are more concerned with spreading
the “truth” than they are about getting booted from Twitter
(Rohlinger 2022; Schradie 2019). In fact, because the
accounts do not have huge numbers of followers, the owners
may feel like their “disruptive” and “problematic” tweeting
behavior will go unreported. Even if they were reported, it
may be difficult to justify a suspension given their posting
behavior. Arizona Sam, for example, replied to hundreds of
other tweets, pressing his conspiracy claim. Although it may
have been an annoyance, his replies did not attack posters
and were not offensive, meaning that they probably went
unreported. Future research should continue to explore indi-
vidual tweeting behavior and assess whether (and how) it has
changed under Musk.
Ostensibly, Musk bought Twitter to create a “digital pub-
lic square, where a wide range of beliefs can be debated in a
healthy manner” (Katsuyama 2022). Although Musk’s critics
have thus far been skeptical of his motives and practices, it is
important for scholars to contextualize the realities of the
information environment on Twitter of yore. Twitter was not
a paragon of quality discourse in the past and is unlikely to
become so in the future. However, Twitter remains an impor-
tant platform for journalists, public officials, government
offices, movement activists, and citizens globally. As such,
scholars should continue to study Twitter and the effects of
Musk’s practices and policies on this important platform.
Acknowledgments
We would like to thank Daniel Kreiss, the PolCom group at
Vrije Universiteit Amsterdam, Elizabeth Mazzolini, Dale
Winling, and the reviewers and editors of Socius for their feed-
back, as well as Charles Phillips for his research assistance. An
earlier draft of this article was presented at the 2022 International
Communication Association Meeting in Paris and the Media
Sociology Postconference.
ORCID iDs
Deana A. Rohlinger https://orcid.org/0000-0001-6606-9404
Sarah Warren https://orcid.org/0000-0002-7905-0267
Supplemental Material
Supplemental material for this article is available online.
References
Ad Fontes Media. 2022. “Interactive Media Bias Chart.” Retrieved
September 12, 2022. https://adfontesmedia.com/interactive-media-bias-chart/.
Allcott, Hunt, Matthew Gentzkow, and Chuan Yu. 2019. “Trends in
the Diffusion of Misinformation on Social Media.” Research &
Politics 6(2):2053168019848554.
Aslam, Salman. 2022. “Twitter by the Numbers: Stats, Demographics
& Fun Facts.” Omnicore Agency. Retrieved December 14, 2022.
https://www.omnicoreagency.com/twitter-statistics/.
Bessi, Alessandro, Fabiana Zollo, Michela Del Vicario, Antonio
Scala, Guido Caldarelli, and Walter Quattrociocchi. 2015.
“Trend of Narratives in the Age of Misinformation.” PLoS
ONE 10(8):e0134641.
Boyraz, Maggie, Aparna Krishnan, and Danielle Catona. 2015.
“Who Is Retweeted in Times of Political Protest? An Analysis
of Characteristics of Top Tweeters and Top Retweeted Users
during the 2011 Egyptian Revolution.” Atlantic Journal of
Communication 23(2):99–119.
Crudele, Lindsay. 2022. “What Should Government Agencies Do
amid the Twitter Chaos?” Government Technology. Retrieved
December 20, 2022. https://www.govtech.com/policy/what-
should-government-agencies-do-amid-the-twitter-chaos.
Del Vicario, Michela, Alessandro Bessi, Fabiana Zollo, Fabio Petroni,
Antonio Scala, Guido Caldarelli, H. Eugene Stanley, et al. 2016.
“The Spreading of Misinformation Online.” Proceedings of the
National Academy of Sciences 113(3):554–59.
Dubois, Elizabeth, and Devin Gaffney. 2014. “The Multiple
Facets of Influence: Identifying Political Influentials and
Opinion Leaders on Twitter.” American Behavioral Scientist
58(10):1260–77.
Dubois, Elizabeth, and Grant Blank. 2018. “The Echo Chamber
Is Overstated: The Moderating Effect of Political Interest
and Diverse Media.” Information, Communication & Society
21(5):729–45.
Ferrara, Emilio. 2017. “Disinformation and Social Bot Operations
in the Run up to the 2017 French Presidential Election.” First
Monday 22(8).
Garrett, R. Kelly, and Robert M. Bond. 2021. “Conservatives’
Susceptibility to Political Misperceptions.” Science Advances
7(23):eabf1234.
Gerbaudo, Paolo. 2015. “Protest Avatars as Memetic Signifiers:
Political Profile Pictures and the Construction of Collective
Identity on Social Media in the 2011 Protest Wave.” Information,
Communication & Society 18(8):916–29.
Halaschak, Zachary. 2022. “Democrats Respond to Musk Twitter
Purchase with Renewed Wealth Tax Push.” Washington
Examiner. Retrieved December 17, 2022. https://www.msn.
com/en-us/money/companies/democrats-respond-to-musk-
twitter-purchase-with-renewed-wealth-tax-push/ar-AAWCr8f.
Harrison, Olivia. 2020. "Can the American Flag Be Reclaimed from Trump? It's Complicated." Refinery29. Retrieved February 4, 2023. https://www.refinery29.com/en-us/2020/11/10163136/reclaiming-american-flag-after-trump-presidency.
Jackson, Sarah J., and Brooke Foucault Welles. 2015. “Hijacking
#Mynypd: Social Media Dissent and Networked Counterpublics.”
Journal of Communication 65(6):932–52.
Jackson, Sarah J., and Brooke Foucault Welles. 2016. “#Ferguson Is
Everywhere: Initiators in Emerging Counterpublic Networks.”
Information, Communication & Society 19(3):397–418.
Jhaver, Shagun, Christian Boylston, Diyi Yang, and Amy Bruckman.
2021. “Evaluating the Effectiveness of Deplatforming as a
Moderation Strategy on Twitter.” Proceedings of the ACM on
Human-Computer Interaction 5(CSCW2):381:1–381:30.
Kariryaa, Ankit, Simon Rundé, Hendrik Heuer, Andreas Jungherr, and Johannes Schöning. 2022. "The Role of Flag Emoji in Online Political Communication." Social Science Computer Review 40(2):367–87.
Katsuyama, Jana. 2022. “Elon Musk Takes over Twitter, Saying It
Will Be ‘Digital Town Square’ Not a ‘Free-for-All.’” KTVU
Fox 2. Retrieved December 14, 2022. https://www.ktvu.com/
news/elon-musk-takes-over-twitter-saying-it-will-be-digital-
town-square-not-a-free-for-all.
Kermani, Hossein, and Marzieh Adham. 2021. “Mapping Persian
Twitter: Networks and Mechanism of Political Communication
in Iranian 2017 Presidential Election.” Big Data & Society
8(1):20539517211025568.
LeFebvre, Rebecca Kay, and Crystal Armstrong. 2018. “Grievance-
Based Social Movement Mobilization in the #Ferguson Twitter
Storm.” New Media & Society 20(1):8–28.
Meraz, Sharon, and Zizi Papacharissi. 2013. “Networked Gatekeeping
and Networked Framing on #Egypt.” International Journal of
Press/Politics 18(2):138–66.
Meyer, David S. 1995. “The Challenge of Cultural Elites: Celebrities
and Social Movements.” Sociological Inquiry 65(2):181–206.
Miller, Andrew. 2022. "Conservatives Flood Twitter with Memes after Musk Twitter Takeover: 'Get Your Own Platform.'" Fox Business. Retrieved December 14, 2022. https://www.foxbusiness.com/politics/conservatives-flood-twitter-memes-musk-twitter-takeover-get-your-own-platform.
Odabas, Meltem. 2022. "10 Facts about Americans and Twitter." Pew Research Center. Retrieved December 14, 2022. https://www.pewresearch.org/fact-tank/2022/05/05/10-facts-about-americans-and-twitter/.
Pattison-Gordon, Jule. 2022. "Upheaval at Twitter Worries Government Agency Users." Governing. Retrieved December 20, 2022. https://www.governing.com/now/upheaval-at-twitter-worries-government-agency-users.
Peters, Justin. 2022. “The Finale of the Great Internet Grievance
Wars Is Here.” Slate. Retrieved December 20, 2022. https://
slate.com/technology/2022/12/elon-musk-twitter-files-bari-
weiss-matt-taibbi-shadowbanning.html.
Popli, Nik. 2022. "As Elon Musk Buys Twitter, the Right Is Celebrating." Time. Retrieved December 14, 2022. https://www.yahoo.com/now/elon-musk-buys-twitter-celebrating-203506559.html.
Reinhard, Beth, and Rosalind S. Helderman. 2022. “Arizona
Lawmaker Speaks to White Nationalists, Calls for Violence—
and Sets Fundraising Records.” The Washington Post.
Retrieved September 19, 2022. https://www.msn.com/en-us/
news/politics/arizona-lawmaker-speaks-to-white-nationalists-
calls-for-violence-and-sets-fundraising-records/ar-AAULLoV.
Rohlinger, Deana A. 2022. “Digital Technologies, Dysfunctional
Movement-Party Dynamics and the Threat to Democracy.”
Information, Communication & Society 25(5):591–97.
Rohlinger, Deana A., Cynthia Williams, and Mackenzie Teek.
2020. “From ‘Thank God for Helping This Person’ to ‘Libtards
Really Jumped the Shark’: Opinion Leaders and (In)Civility
in the Wake of School Shootings.” New Media & Society
22(6):1004–25.
Ross, Andrew S., and Damian J. Rivers. 2018. “Discursive
Deflection: Accusation of ‘Fake News’ and the Spread of Mis-
and Disinformation in the Tweets of President Trump.” Social
Media + Society 4(2):2056305118776010.
Sangalang, Angeline, Yotam Ophir, and Joseph N. Cappella.
2019. “The Potential for Narrative Correctives to Combat
Misinformation.” Journal of Communication 69(3):298–319.
Scacco, Joshua M., and Ashley Muddiman. 2016. “Investigating the
Influence of ‘Clickbait’ News Headlines.” UT Austin Center
for Media Engagement. Retrieved September 17, 2022. https://
mediaengagement.org/research/clickbait-headlines/.
Scheufele, Dietram A., and Nicole M. Krause. 2019. “Science
Audiences, Misinformation, and Fake News.” Proceedings of
the National Academy of Sciences 116(16):7662–69.
Schradie, Jen. 2019. The Revolution That Wasn’t: How Digital
Activism Favors Conservatives. Cambridge, MA: Harvard
University Press.
Schwenk, Katya. 2021. "Twitter Cracks Down on Arizona Audit Accounts Promoting Misinformation." Phoenix New Times. Retrieved September 19, 2022. https://www.phoenixnewtimes.com/news/twitter-suspends-arizona-audit-accounts-in-misinformation-crackdown-11678942.
Stewart, Leo Graiden, Ahmer Arif, A. Conrad Nied, Emma S. Spiro,
and Kate Starbird. 2017. “Drawing the Lines of Contention:
Networked Frame Contests within #BlackLivesMatter
Discourse.” Proceedings of the ACM on Human-Computer
Interaction 1(CSCW):96:1–96:23.
Sunstein, Cass R. 2017. #Republic: Divided Democracy in the Age of Social Media. Princeton, NJ: Princeton University Press.
Thorson, Emily. 2016. “Belief Echoes: The Persistent Effects
of Corrected Misinformation.” Political Communication
33(3):460–80.
Valente, Thomas W., and Rebecca L. Davis. 1999. “Accelerating
the Diffusion of Innovations Using Opinion Leaders.” Annals
of the American Academy of Political and Social Science
566(1):55–67.
Yang, Kai-Cheng, Christopher Torres-Lugo, and Filippo Menczer.
2020. “Prevalence of Low-Credibility Information on Twitter
during the COVID-19 Outbreak.” ArXiv. Retrieved January
20, 2023. https://arxiv.org/abs/2004.14484.
Zanona, Melanie, and Gabby Orr. 2022a. “Leaning on Her Trump
Ties, Elise Stefanik Plots Future inside House GOP.” CNN
Politics. Retrieved September 17, 2022. https://www.cnn.
com/2022/03/23/politics/elise-stefanik-future-house-gop/
index.html.
Zanona, Melanie, and Gabby Orr. 2022b. “Moderate-Turned-
MAGA Congresswoman’s Stock Is Rising in Trump World.”
CNN Politics. Retrieved September 17, 2022. https://www.
cnn.com/2022/05/25/politics/elise-stefanik-trump/index.html.
Author Biographies
Deana A. Rohlinger is a professor of sociology, director of
research for the Institute of Politics, a research associate in the
Pepper Institute on Aging and Public Policy, and an associate
dean in the College of Social Sciences and Public Policy at Florida
State University. She is the author of Abortion Politics, Mass
Media, and Social Movements in America (Cambridge University
Press, 2015) and New Media and Society (New York University
Press, 2019), coeditor of The Oxford Handbook of Sociology and
Digital Media (Oxford University Press, 2022), and has authored
more than 50 research articles and book chapters on digital media,
political participation, and American politics. She is the former
chair of the American Sociological Association’s Section on
Communication, Information Technologies, and Media Sociology,
the current chair of the American Sociological Association’s
Section on Collective Behavior and Social Movements, and a
member of the National Institute for Civil Discourse Research
Network. Her current research explores digital technologies,
polarization, and extremism in individual claims making around
political controversies, including abortion politics and school
shootings.
Kyle Rose is a PhD student in the Department of Sociology at Florida State University. His research interests include social network analysis, new media, social movements, environmental sociology, and science communication. His current research explores expertise and authority around environmental debates.
Sarah Warren is a PhD student in the Department of Political Science at the University of Rochester. She studies American politics, focusing on voter behavior, gender, and redistributive politics. Her current research uses experimental methods to analyze the effect of federal redistributive programs on voter turnout and choice.
Stuart Shulman is the founder and CEO of Texifter and creator of
DiscoverText.