Examining the differences between human and bot social media accounts: A case study of the Russia-Ukraine War
In this study, we examined online conversations on Twitter about the Russia-Ukraine War and investigated differences between bot and non-bot accounts. Using ‘Russia’ and ‘Ukraine’ as keywords, we employed a Twitter API to collect data from 17 February to 18 March. We obtained a large dataset of over 3.7 million tweets generated by about one million distinct accounts. We then analyzed one percent of the data using interval sampling for bot detection and found that about 13.4 percent of the accounts were social media bots, responsible for about 16.7 percent of the tweets. We examined the differences between bots and non-bots in online conversations on the Russia-Ukraine War through account analysis, textual analysis, and interaction analysis. The results show that bots existed on both sides: bots on the Ukrainian side contributed a louder voice, while bots on the Russian side demonstrated more effective communication. In addition, there were differences and similarities between bots and non-bots in the behavior of online conversations, but the differences seemed to be relatively weaker than those found in previous studies.
Contents
1. Introduction
2. Methodology
3. Findings
4. Conclusion
1. Introduction
Social media is no longer merely a tool of communication but also a potential ‘weapon’ that can greatly affect human
perceptions and opinions (Orabi, et al., 2020). Empirical studies suggest that social media has an impact on
opinion formation and transformation (Chen, et al., 2022), emotional contagion (Ferrara and Yang, 2015), and
the flow of public opinion (Bradshaw and Howard, 2018; Cheng, et al., 2020). Such effects have substantive
implications for international relations and politics (Barnett, et al., 2017). Some individuals and organizations
utilize social media to capture improper benefits (Allem, et al., 2020; Orabi, et al., 2020), and the anonymous
nature of social media makes the public more susceptible to various forms of manipulation (Tucker, et al.,
2017). One of the leading tools capable of such manipulation is social media bots. Woolley and Howard (2016)
argued that ‘it has become a nexus for some of the most pressing issues around algorithms, automation, and
Internet policy.’ In terms of international politics, researchers proposed that ‘the impact of social media bots
should be taken into account in any study of online political dialogue’ [1].
1.1. Social media bots
Social media bots are often defined as a type of ‘automation software that controls an account on a particular
OSN (Online Social Network) and has the ability to perform basic activities such as posting a message and
sending a connection request’ [2]. This type of definition emphasizes the function of social media bots but
often omits their potential effects on human agents. Another type of definition focuses more on the
anthropomorphic nature of social media bots and how they affect social systems. For instance, Igawa, et al. [3]
noted that ‘on Twitter social robots, called ‘bots’, pretend to be human beings in order to gain followers and
replies from target users and promotes a product or agenda’. After a comprehensive analysis of the various
definitions of social media bots, Ferrara, et al. (2016) further explained them as ‘a computer algorithm that
automatically produces content and interacts with humans on social media, trying to emulate and possibly alter
their behavior’. In general, Ferrara’s definition of social media bots is more comprehensive and widely
recognized. This study followed Ferrara’s definition and conceptualized bots as machine accounts that
participated in selected topics by emulating human behaviors (including automatically posting content,
engaging in user interactions, etc.) on social media platforms (Ferrara, et al., 2016).
1.2. Social media bots and politics
In recent years, social media bots have been used extensively for various forms of malicious political
manipulation (Albadi, et al., 2019). Sometimes, because human users lack awareness of their presence, social media bots’ influence appears even larger (Everett, et al., 2016; Bolsover and Howard, 2019). Researchers have pointed out that
political actors and governments worldwide have started using social media bots to muddy political issues
(Forelle, et al., 2015). The New York Times and New Yorker also pointed out that social media bots have
become a non-negligible political tool (Dubbin, 2013; Urbina, 2013; Woolley, 2016).
Scholars have examined various cases of social media bots interfering in the political sphere. For instance,
Ratkiewicz, et al. (2011) studied midterm elections discussions on Twitter and found that social media bots
infiltrated political conversations by showing support to some candidates while smearing others. In 2012,
research revealed that politicians used social media bots to augment the number of followers and to achieve an
‘illusory prosperity’ of account influence (Chu, et al., 2012). In 2014, it was found that militaries, state-
contracted firms, and elected officials used social media bots to set agendas by disseminating propaganda and
flooding newsfeeds with political spam (Cook, et al., 2014). Bessi and Ferrara [4] found that bots have actively
engaged in related conversations on Twitter during the 2016 and 2020 U.S. elections. Ferrara, et al. (2020)
found that social media bots can exacerbate users’ consumption of content with the same political stance, thus
enhancing existing political echo chambers. Abokhodair and McDonald (2015) examined Syrian social media
bots on Twitter and depicted their behavioral patterns, such as mimicking human behaviors, reporting news,
and posting misinformation. Howard and Kollanyi (2016) studied Brexit-related computational propaganda
during the UK-EU Referendum and found that less than one percent of sampled accounts generated almost a
third of all the messages. Albadi, et al. (2019) found that 11 percent of hate speech in the Arabic context was
posted by automated agents.
Social media bots can engage in political conversations with different strategies (Ratkiewicz, et al., 2011; Chu,
et al., 2012; Ferrara, et al., 2020). The Russia-Ukraine War constitutes an ideal case to investigate the behavior
and the influence of social media bots. Since Russia officially declared war on Ukraine on 24 February 2022,
the two countries have been engaging in an information war on social media (Bergengruen, 2022). Tim
Bajarin, a famous columnist at Forbes, commented that: ‘This is the first major conflict where a true cyberwar
is attached to a real war’ (Bajarin, 2022). Evidence shows that within 24 hours of the start of the War, the
amount of relevant information generated on social media exceeded that of a week in the Iraq War (Johnson,
2022). Social media bots were found to be busy setting online agendas as they have done in the past (Muscat
and Siebert, 2022; Purtill, 2022). However, the intervention strategies and effects of social media bots in the
ongoing Russia-Ukraine War remain unknown. Ferrara, et al. (2020) concluded that there are two dimensions
of manipulation by social media bots: automation (e.g., the prevalence of bots), and distortion (e.g.,
manipulation of narratives, injection of conspiracies or rumors).
In this study, we investigate the extent to which bots were used in the Russia-Ukraine War conversations on
Twitter, the effects of social media bots, and how they differ from human users. Our study’s contribution lies
in two aspects. First, exploring the commonalities in the ‘group behavior’ of social media bots can help better
understand the operational logic of social bots. Second, this study could contribute to understanding the roles
of social media bots played in shaping online public opinion regarding the Russia-Ukraine War. In addition,
studies of social media bots in international conflicts are few and far between, and therefore, this study can
potentially provide some empirical evidence and set pathways for related and follow-up studies. In particular,
the following research questions are proposed:
RQ1: To what extent do social media bots interfere with Twitter
conversations about the Russia-Ukraine War?
RQ2: What are the account features of the social media bots
compared to non-bots concerning the Russia-Ukraine War
Twitter discussion?
RQ3: What are the textual features of the tweets posted by bots
compared to non-bots concerning the Russia-Ukraine War
Twitter discussion?
RQ4: How effective were those social media bots compared to
non-bots in attracting likes, comments, and retweets?
2. Methodology
2.1. Data
The current study collected the research data from Twitter, a global social media platform with close to 400 million
users (Dean, 2022). Twitter is an outlet for up-to-the-minute status updates, allowing users to respond in real-
time to news and political events. In addition to hosting a huge amount of online political conversations,
Twitter has become a ‘breeding ground’ for social media bots and was widely used for social media bots
studies (Alothali, et al., 2018).
We collected all tweets containing the keywords ‘Russia’ and ‘Ukraine’ between 17 February and 17 March 2022.
Due to the limitations of the research tools dealing with non-English languages (Albadi, et al., 2019), this
study focused on English tweets. Therefore, non-English tweets in the raw data were removed during data
analysis. Although the Russia-Ukraine War started on 24 February 2022, there were many clues and build-ups
in social media before the War. Thus we set the data collection starting date at one week before the War to
capture the differences between the pre-War and post-War periods. A crawler algorithm based on ‘tweepy’, an
open-source Python package, was adopted for data crawling. The data we obtained included tweet content,
likes, retweets, comments, and basic information about user accounts, such as registered time, number of
followers, and self-disclosure information. These data enabled the possibility of detecting social media bots
based on different features of the account. In the end, our raw dataset constituted over 3.7 million tweets posted
by nearly 0.97 million distinct users.
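The paper describes its crawler only as based on the open-source ‘tweepy’ package. For illustration, a minimal sketch of how such a collection could look against the Twitter API v2 full-archive search is given below; the bearer token, the exact query string (including whether the two keywords were combined with OR or AND), and the requested fields are assumptions rather than details reported by the authors.

```python
# Illustrative sketch only: collecting tweets with tweepy (Twitter API v2).
# Query string, credentials, and requested fields are assumptions.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN", wait_on_rate_limit=True)

query = "(Russia OR Ukraine) lang:en"        # hypothetical keyword query
tweets = []
for page in tweepy.Paginator(
        client.search_all_tweets,            # full-archive search (Academic Research access)
        query=query,
        start_time="2022-02-17T00:00:00Z",
        end_time="2022-03-17T23:59:59Z",
        tweet_fields=["created_at", "public_metrics", "author_id"],
        user_fields=["created_at", "public_metrics", "description", "location"],
        expansions=["author_id"],
        max_results=500):
    tweets.extend(page.data or [])           # page.data is None when a page is empty
```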
2.2. Sampling
Due to computational power constraints, a random sample of the data was used for statistical analysis. We
adopted the interval sampling method (as relevant tweets on Twitter are not evenly distributed) and sampled
one percent of the data for analysis. As a result, the sample dataset consisted of 37,245 tweets generated by
28,524 distinct accounts.
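The exact sampling procedure is not spelled out in the paper; a minimal sketch of a one-percent interval (systematic) sample over chronologically ordered tweets, with the random start offset as an assumption, might look like the following.

```python
# Illustrative sketch of a one-percent interval (systematic) sample.
import random

def interval_sample(tweets, fraction=0.01):
    """Keep every k-th tweet (k = 1/fraction) from a chronologically ordered list."""
    k = int(1 / fraction)            # k = 100 for a one-percent sample
    start = random.randrange(k)      # random starting offset (assumption)
    return tweets[start::k]

# sample = interval_sample(sorted(raw_tweets, key=lambda t: t.created_at))
```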
2.3. Bot detection
Identifying social media bot accounts has been an oft-studied topic in the past few years (Ferrara, et al., 2016;
Subrahmanian, et al., 2016). There are four existing common bot detection techniques: graph-based, machine
learning-based, crowdsourcing-based, and anomaly-based (Orabi, et al., 2020). In this study, we adopted the
machine learning approach, which is the most widely used approach (Chen, et al., 2022). Botometer,
developed by Yang, et al. (2022), has been proven to be a relatively reliable tool for social bot detection.
Botometer, formerly called BotOrNot, is a machine-learning framework that extracts and analyses a set of over
1,000 features, including content, network structure, temporal activity, user profile data, and text sentiment to
produce a score indicating the likelihood that the inspected account is a social bot (Bessi and Ferrara, 2016). If
the bot score is closer to one, the account is likely to be a social media bot; conversely, accounts with scores closer to zero are likely to be non-bots [5]. In this study, we tested 28,524 accounts from the sample using Botometer and plotted the
probability distribution of bot scores (Figure 1). According to the graph, most of the cases fall below ‘0.5’, but
an obvious bump was shown between ‘0.8’ and ‘1’, suggesting that a significant amount of accounts exhibit
clear bot characteristics (Ferrara, et al., 2016; Davis, et al., 2016).
Figure 1: Distribution of the probability density of bot scores.
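For reference, querying Botometer for each sampled account could look roughly like the sketch below, using the botometer Python package; the credential placeholders and the exact field holding the 0-1 score are assumptions that depend on the Botometer API version in use.

```python
# Illustrative sketch of scoring sampled accounts with the botometer package.
# Credentials and the exact response field layout are assumptions.
import botometer

twitter_app_auth = {
    "consumer_key": "...", "consumer_secret": "...",
    "access_token": "...", "access_token_secret": "...",
}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="YOUR_RAPIDAPI_KEY",
                          **twitter_app_auth)

bot_scores = {}
for name in sampled_screen_names:            # the 28,524 sampled accounts
    try:
        result = bom.check_account(name)
        # overall score on a 0-1 scale (assumed field layout of the response)
        bot_scores[name] = result["raw_scores"]["universal"]["overall"]
    except Exception:
        continue                             # skip suspended, protected, or deleted accounts

# threshold choice is discussed in the text below
bots = {name for name, score in bot_scores.items() if score >= 0.8}
```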
Different criteria were proposed in the existing literature to identify bots. Some researchers use a bot score of
0.5 as the threshold for marking social media bots (Badawy, et al., 2018; Ferrara, 2017a; Shao, et al., 2018;
Chen, et al., 2022). In this study, we adopted a higher threshold (0.8) for bot identification. This criterion has
been used by Broniatowski, et al. (2018) in their research on ‘Russian trolls.’ Using this criterion, we detected
a total of 5,439 accounts with a bot score higher than the ‘0.8’ threshold. After manual checking, we found that 1,623 institution/media accounts had been labeled as bots. These are verified accounts of organizations,
institutions, or public figures. These types of accounts were often treated as social media bots in previous
studies, but we argue that they are distinctly different from bots for three reasons. First, as we argued earlier,
social media bots should possess ‘anthropomorphic,’ ‘invisibility,’ and ‘automated’ characteristics (Boshmaf,
et al., 2011; Igawa, et al., 2016). But institution/media accounts are distinctly different from social media bots
in terms of ‘anthropomorphic’ and ‘invisibility’ criteria. These accounts usually display their true identities and
detailed self-disclosure information (name, self-introduction, geolocation, etc.). Second, in terms of intent to
use, social media bots can be divided into benign and malicious categories (Ferrara, et al., 2016). Stieglitz, et
al. [6] noted that ‘Benign bots aggregate content, respond automatically, and perform other useful services.
Malicious bots, in contrast, are designed with a purpose to harm.’ Mainstream institutional or media accounts
rarely maliciously disrupt the rules and order of online conversations. Finally, in terms of legitimacy, media
accounts are legitimate sources of news and information, while social media bots are not (González-Bailón and
De Domenico, 2021). Based on these rationales, we excluded these 1,623 institution/media accounts from our
analysis. Table 1 presents the final Botometer scores distribution of five intervals. Table 2 presents the final
outcomes of our bot detection process.
Table 1: Distribution of Botometer scores.
Table 2: Bot detection results.
2.4. Content coding
To obtain a deeper understanding of social media bots’ activity during the Russia-Ukraine War, we examined
the political stance of the tweets produced by social media bots in online conversations. Previous studies used
hashtags to determine the binary political stance of bots (Bessi and Ferrara, 2016; Ferrara, et al., 2020).
However, the hashtags used by bots in the Russia-Ukraine War appear vague in meaning. Therefore, we
decided to code the political stance of the sampled tweets through machine learning. Following previous
studies, we used the Support Vector Machine (SVM), a stable multi-class classification machine learning
model, to classify stances and attitudes of tweets (Joachims, 1998; Chen, et al., 2022). First, we set up a coding
team consisting of postgraduate students. After three training workshops and pilot coding sessions, the research
team and coding team worked together to create seven political attitude categories. The seven categories included ‘Pro-Russia,’ ‘Pro-Ukraine,’ ‘Anti-Russia,’ ‘Anti-Ukraine,’ ‘Pro-Russia and Anti-Ukraine,’ ‘Pro-Ukraine and
Anti-Russia,’ and ‘Neutral.’ Examples of typical tweets corresponding to different stance attitudes are provided
in Table 3. Tweets reflecting pro- or anti- Russia/Ukraine usually carry words or hashtags with obvious value
judgments, such as ‘invasion,’ ‘Nazi,’ ‘#istandwithukraine,’ etc., whereas neutral tweets tend to be news
reports, calls for peace, or completely unrelated content. To improve the accuracy of machine learning, we
further combined these seven categories into three broader political stances: ‘the Russian side,’ ‘the Ukrainian side,’ and ‘Neutral’ (see Figure 2).
Table 3: Example tweets for political attitude coding.
Figure 2: Seven political attitudes and three political stances.
Two coders were trained to code the three types of stances manually. The Cohen’s Kappa value of two coders
is 0.84, demonstrating a good inter-coder reliability. After manual coding, we obtained a data set containing
6,170 tweets with different stances and attitudes. We randomly divided these tweets into the training set and
the testing set for machine learning. Among them, 5,995 tweets were in the training set, including 2,230 tweets
on the Russian side, 1,883 on the Ukrainian side, and 1,882 neutral. The remaining 175 tweets were used for model
validation. After model validation, the accuracy of our machine learning model turned out to be 98.6 percent,
which outperformed those of previous studies using the SVM multi-class model (Guo, et al., 2020; Chen, et
al., 2022). In addition, we also performed an extra validity check for the outcomes of the model. We randomly
extracted 200 machine-predicted tweets and returned them to the two coders for validation. It was found that
the performance of consistency between coders and the model is 92.0 percent, indicating a high level of
accuracy. Finally, we processed 6,200 bot tweets and 27,787 non-bot tweets in our model for prediction. The
results showed that 4.77 percent of bot tweets were on the Russian side, 42.39 percent on the Ukrainian side,
and the rest, 52.84 percent, were neutral. For non-bot tweets, 11.65 percent were on the Russian side, 43.57
percent on the Ukrainian side, and the rest, 44.78 percent, were neutral (see Table 4).
Table 4: SVM prediction results.
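As a rough illustration of this classification step, a multi-class linear SVM over TF-IDF features can be set up with scikit-learn as sketched below; the feature representation and hyperparameters are assumptions, since the paper does not report the exact pipeline.

```python
# Illustrative sketch of the SVM stance classifier (TF-IDF features assumed).
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# train_texts / train_labels: the 5,995 manually coded tweets;
# labels take three values: "russia", "ukraine", "neutral"
clf = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, ngram_range=(1, 2), min_df=2)),
    ("svm", LinearSVC()),                    # linear multi-class SVM (one-vs-rest)
])
clf.fit(train_texts, train_labels)

print(accuracy_score(val_labels, clf.predict(val_texts)))   # held-out validation tweets
predicted_stances = clf.predict(unlabeled_texts)            # bot and non-bot tweets
```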
2.5. Data analysis
We took an event-based difference approach, which is widely used in social media bot research (Bessi and
Ferrara, 2016; Albadi, et al., 2019; Zelenkauskaite, et al., 2021). The approach provided a method for
answering the research questions we raised. More specifically, the data analysis for this study consists of three
parts: a) account analysis, b) textual analysis, and c) interaction analysis. All three parts present descriptive
statistics to show the similarities and differences between bots and non-bots accounts. The text of the tweets
was also included as an essential object of investigation, and we analyzed hashtags, keywords, sentiment, and
the co-occurrence network of ‘mentions’ (@) in tweets.
Hashtags and keywords: Since hashtags and keywords are valuable metrics for analyzing tweets, we used a
‘bottom-up’ approach by Zelenkauskaite, et al. (2021) to extract features of tweets through automated
programs, supplemented by manual analysis to describe the profile of the event. In terms of hashtags, we used
regular expression operations to identify #hashtags in the texts and performed word frequency analysis. Then
the TF-IDF feature algorithm was applied for keyword analysis. When a word is more important to represent
the text, its TF-IDF value will be higher (Aizawa, 2003). The raw text was first converted to lowercase, then
removed their URL links, and finally, word stemming was extracted. We removed words like ‘we,’ ‘is,’ ‘will,’
‘has,’ ‘now,’ and other functional words to ensure that the keyword list retains unique words representing the
meaning of the text.
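A minimal sketch of this extraction step is given below; the specific stemmer and stop-word list are assumptions, as the paper only states that URLs, case, and function words were handled.

```python
# Illustrative sketch: hashtag frequencies and TF-IDF keywords from tweet texts.
import re
from collections import Counter
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

def clean(text):
    text = text.lower()                           # convert to lowercase
    return re.sub(r"https?://\S+", "", text)      # remove URL links

# hashtag frequency via regular expressions
hashtag_counts = Counter(tag for t in tweet_texts
                         for tag in re.findall(r"#\w+", t.lower()))

# word stemming before TF-IDF (Porter stemmer is an assumption)
stemmer = PorterStemmer()
docs = [" ".join(stemmer.stem(w) for w in clean(t).split()) for t in tweet_texts]

vectorizer = TfidfVectorizer(stop_words="english")   # drops 'we', 'is', 'will', ...
tfidf_matrix = vectorizer.fit_transform(docs)
# terms with the highest TF-IDF weights are treated as the keywords of the corpus
```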
Sentiment: To implement sentiment analysis, we used TextBlob, a Python text data processing library. The
library provides a simple API for typical natural language processing (NLP) tasks such as lexical tagging, noun
phrase extraction, sentiment analysis, classification, translation, etc. (Manguri, et al., 2020). The TextBlob
sentiment analysis function returns the sentiment polarity score, ranging from -1.0 to 1.0 (0 indicates neutral;
-1.0 indicates negative sentiment; 1.0 indicates positive sentiment) (Gujjar, et al., 2021).
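In practice, the polarity score can be obtained in one line per tweet, as in the brief sketch below.

```python
# Illustrative sketch of TextBlob polarity scoring.
from textblob import TextBlob

def polarity(text):
    """Sentiment polarity in [-1.0, 1.0]; 0 indicates neutral."""
    return TextBlob(text).sentiment.polarity

bot_polarities = [polarity(t) for t in bot_tweet_texts]
nonbot_polarities = [polarity(t) for t in nonbot_tweet_texts]
```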
Network: We performed a network analysis of the tweets using Gephi 0.9.5. Gephi is an open-source software
package for network visualization and analysis. It can help researchers reveal network data patterns (Bastian, et
al., 2009).
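Since Gephi reads standard graph file formats, one plausible way to prepare the ‘@mention’ co-occurrence network for it is to build a directed graph in Python and export it as GEXF, as sketched below; the input format (author, text pairs) is an assumption.

```python
# Illustrative sketch: building the '@mention' network and exporting it for Gephi.
import re
import networkx as nx

G = nx.DiGraph()
for author, text in tweets:                      # (screen_name, tweet text) pairs assumed
    for mentioned in re.findall(r"@(\w+)", text):
        if G.has_edge(author, mentioned):
            G[author][mentioned]["weight"] += 1
        else:
            G.add_edge(author, mentioned, weight=1)

nx.write_gexf(G, "mention_network.gexf")         # open in Gephi for layout and visualization
```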
3. Findings
3.1. Bots detection (RQ1)
The first research question (RQ1) intends to explore the extent to which social media bots interfere with online
conversations about the Russia-Ukraine War. We first report descriptive statistics for the samples included in
our analysis (see Table 5). There were 3,816 bot accounts (13.4 percent) and 22,805 non-bot accounts (80.0
percent), which were responsible for 6,200 (16.7 percent) bot tweets and 27,787 (74.6 percent) non-bot tweets.
The data revealed a considerable proportion of social media bots engaging in online conversations about the
Russia-Ukraine War.
By extrapolating to the entire dataset, we estimate that there were about 126,100 to 133,860 bot accounts in the dataset, accounting for roughly 13.0 to 13.8 percent of the accounts active in Russia-Ukraine online conversations, and that they were responsible for about 606,360 to 632,400 tweets, or 16.3 to 17.0 percent
of the total volume (95 percent confidence level). In addition, because we excluded institutional/media
accounts and unknown accounts from our data processing, they did not appear in our subsequent analysis.
Table 5: Results of extrapolation of samples to the entire dataset.
Note: The ‘Population estimate’ column is based on statistical extrapolation at a 95 percent confidence level.
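The extrapolation amounts to placing a 95 percent confidence interval around the sample proportions and scaling it to the full dataset; a minimal sketch, assuming a normal-approximation interval, is shown below.

```python
# Illustrative sketch of the extrapolation from sample to full dataset.
from statsmodels.stats.proportion import proportion_confint

n_sample_accounts = 28524
n_sample_bots = 3816
population_accounts = 970_000                     # ~0.97 million distinct users

low, high = proportion_confint(n_sample_bots, n_sample_accounts,
                               alpha=0.05, method="normal")   # 95 percent CI
print(f"Estimated bots in full dataset: {low * population_accounts:,.0f} "
      f"to {high * population_accounts:,.0f}")
```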
3.1.1. Bots activity level
The conversation about the Russia-Ukraine War in Twitter space is also part of the confrontation between
Russia and Ukraine in cyberspace (Jaitner, 2015). Will the online discussion on social media be affected by the
progress of the war between the two sides? Following the work by Zelenkauskaite, et al. (2021), we counted
the number of bot tweets and non-bot tweets in each hour and derived two curves. To further explore the
contributing factors for the changes in the curves, we linked the volume of tweets generated by bots and non-
bots to critical events of the Russia-Ukraine War. According to Al Jazeera’s ‘Timeline: A month of Russia’s
war in Ukraine’ and ‘Timeline: The first 100 days of Russia’s war in Ukraine’ (see Appendix), we sorted out
the key events of the Russia-Ukraine war and used the relevant information to plot Figure 3.
Figure 3: Time-series distribution of bot and non-bot tweets (combined with offline events).
Note: Larger version of Figure 3 available here.
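The hourly volume curves can be derived by resampling the timestamped tweets; a brief sketch, assuming the sampled tweets sit in a pandas DataFrame with ‘created_at’ and ‘is_bot’ columns, is given below.

```python
# Illustrative sketch of the hourly bot/non-bot tweet-volume curves.
import pandas as pd

df["created_at"] = pd.to_datetime(df["created_at"])
hourly = (df.set_index("created_at")
            .groupby("is_bot")          # True for bot accounts, False for non-bots
            .resample("1H")
            .size()
            .unstack(level=0))          # one column of hourly counts per group
hourly.plot()                           # two curves for visual comparison
```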
Overall, the trajectories of the two curves are relatively stable and similar, and the bots and non-bots did not
show obvious divergence at any time node. This indicates no significant difference between the temporal
trends in the production of tweets by bots and non-bots. Specifically, bot and non-bot tweeting curves showed
an obvious upward trend on 22 February when the U.S. stepped up sanctions toward Russia and warned of war
risks. Next, both curves showed a steep spike on 24 February, when Putin announced the commencement of the
special military operation. The tweeting curves of both bots and non-bots spiked up again on 26 and 28
February, respectively. The only difference between the two curves is that non-bot tweets show obvious
diurnal variation, while such fluctuations are less pronounced in bot tweets.
3.2. Account analysis (RQ2)
RQ2: What are the account features of those social media bots compared to non-bots concerning the Russia-
Ukraine War Twitter discussion? To answer this question, we first analyzed the average account age, the
average number of daily tweets, and the average following and follower numbers of the accounts.
3.2.1. Basic statistics
Table 6: Basic statistics for bot and non-bot accounts.
For average account age, the results showed that the average account age of bot accounts was 5.4 years, and
the average account age of non-bot accounts was 7.5 years, indicating an overall younger profile of bot
accounts. This finding is consistent with those of Hagen, et al. (2022). By visualizing the creation time of the
accounts (Figure 4), we can see that 41.9 percent of bot accounts were created in the last three years (2020,
2021, 2022).
Figure 4: Distribution of the creation years of bots.
To understand the activity level of those accounts, we divided the total number of tweets by the number of
days since the day the account was created to obtain the average daily number of tweets. Results show that bot
accounts were much more active (38.9 tweets/day on average) than non-bot accounts (17.6 tweets/day on
average) during the observation period.
We also examined the differences between bots and non-bots in terms of their following/follower status.
Twitter’s following function can help users receive public posts from targeted users. The number of ‘followers’ refers to how many people are ‘following’ an account (Wald, et al., 2013). Previous studies found that social media bots had a higher number of ‘following’ and fewer ‘followers’ than human users (Stieglitz, et al., 2017). We found a similar pattern: on average, non-bot accounts had about 20 followers for every account they followed (a following-to-follower ratio of 1:20), whereas bot accounts had only about seven (1:7).
3.2.2. Self-disclosure
We further analyzed the self-disclosure differences between bots and non-bots, including geolocation and
lexical features of account self-description. First, we analyzed the geolocation of the accounts. The geolocation
of accounts in online conversations has been examined in previous research (Bessi and Ferrara, 2016; Shane,
2017; Zelenkauskaite, et al., 2021). We visualized the geolocation of bot and non-bot accounts separately
(Figure 5). It should be noted that the geolocation of Twitter accounts can be edited by users, and therefore the
location information does not necessarily reflect the precise location of the accounts. But according to previous research, it is still a valuable indicator (Ferrara, et al., 2016; Subrahmanian, et al., 2016).
The results show that for bot accounts, most of them came from the U.S. (37.4 percent), the U.K. (9.1 percent),
and India (6.6 percent), whereas for non-bot accounts, most of them came from the U.S. (38.4 percent), India
(12.5 percent), and the U.K. (5.4 percent) (Figure 5). Other countries in our top five lists were Canada,
Australia, Japan, and Nigeria.
Figure 5: Geolocation distribution of bots and non-bots (Top 5).
We then analyzed the self-descriptions of bots and non-bots. Twitter users can edit their self-description with no more than 160 characters. These descriptions often show the user’s self-image construction (Ahn, 2011) and
their self-identity awareness (Tufekci, 2008). The words used in these texts were analyzed for 3,816 bots and
22,805 non-bots. After removing function words such as adverbs, conjunctions, and emoticons, the top 40
high-frequency words were summarized in Table 7 (bots) and Table 8 (non-bots).
More than half of the high-frequency self-description words were shared by bots and non-bots. Pronouns such
as ‘my,’ ‘you,’ and ‘we’ ranked very high, which indicates that bots tended to imitate non-bots and set their
self-descriptions through an anthropomorphic, informal tone. Both bots and non-bots used the word ‘follow’ to
seek attention and interaction from other users.
However, there were still some differences between bots and non-bots regarding self-descriptions. For bots,
one obvious feature is that the term ‘http’ only appeared in the bots list and occupied the second position,
indicating that bots usually embed external links in their self-descriptions. In addition, some news-related
terms such as ‘breaking,’ ‘information,’ and ‘latest’ were exclusive to bots’ descriptions. Furthermore, politics-
related terms were more often used in self-descriptions of bots. For instance, country names and political
figures were mentioned by many bots, such as ‘india,’ ‘ukraine,’ and ‘trump.’ Words concerning political
events and social movements were also mentioned frequently, like ‘resist,’ ‘blm,’ (Black Lives Matter) and
‘fbr’ (Follow Back Resistance). Finally, the bots’ self-descriptions also included words such as ‘technology,’
‘tech,’ and ‘crypto,’ which did not appear in the list of non-bots.
While both bots and non-bots often use the word ‘follow’ in their self-descriptions to seek attention and
interaction from other users, our manual review of the content revealed significant differences in the way they
use ‘follow’. Specifically, non-bots tend to use phrases such as ‘I follow back,’ ‘if you follow me, I will follow
you back’ in their self-descriptions, aiming to boost the number of followers of the account itself. But bots are
more inclined to direct other users to follow their ‘ally’ accounts with expressions like ‘move to @*,’ ‘please follow us @*,’ ‘for more news at @*,’ etc. This pattern has also been confirmed several times in
previous studies (Ferrara, 2017b; Bastos and Mercea, 2019).
Table 7: High frequency keywords in the self-introduction of bot accounts (Top 40).
Table 8: High frequency keywords in the self-introduction of non-bot accounts (Top 40).
3.3. Textual analysis (RQ3)
RQ3 intends to explore the textual features of the tweets posted by bots compared to non-bots concerning the
Russia-Ukraine War discussion.
3.3.1. Topic differences
Twitter tweets often contain hashtags to indicate their relevant conversation topics, which can be used to
represent users’ topic preferences (Pöschko, 2011). In this study, we analyzed the topic preferences of the bots
and non-bots by calculating the frequency of hashtags used in the sampled tweets, and Table 9 shows the top
20 most used hashtags in their tweets.
Table 9: Frequency of hashtags in bot and non-bot tweets (Top 20).
Further, we calculated the relative popularity of hashtags among bot and non-bot tweets (from Table 9) (Arlt, et al., 2019). The calculation formula is:

relative popularity of hashtag h = (F_h_bot / F_all_bot) / (F_h_human / F_all_human)
Where Fh_bot represents the frequency of hashtag ‘h’ in bot tweets, and Fall_bot refers to the total frequency of
the most used hashtags in bot tweets. Fh_bot/Fall_bot indicates hashtag h’s relative frequency in bot tweets.
Similarly, Fh_human/Fall_human indicates hashtag h’s relative frequency in non-bot tweets. The ratio of two
values refers to the hashtag h’s relative popularity in both bot and non-bot tweets. Then, we conducted a log
transformation of the value of relative popularity so that the results could be easily visualized. A negative
result indicates that hashtags were more likely to appear in bots’ tweets, and a positive number means that the
hashtags were more likely to be used by non-bots. The user-generated hashtags are often case sensitive, such as
‘Russia’ and ‘russia’, but their meanings are the same. Therefore, our analysis converted all hashtags to
lowercase. Figure 6 visualizes the relative popularity of the top 20 hashtags for bots and non-bots.
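A compact sketch of this calculation, using hashtag frequency counters for the two groups, is shown below; as written here, with the bot frequency in the numerator, a positive log value indicates a hashtag that is relatively more common in bot tweets, and hashtags absent from either group would need smoothing, which is omitted.

```python
# Illustrative sketch of the relative-popularity (log-ratio) calculation.
import math

def relative_popularity(h, bot_counts, human_counts):
    """Log of hashtag h's relative frequency in bot tweets over non-bot tweets."""
    p_bot = bot_counts[h] / sum(bot_counts.values())        # F_h_bot / F_all_bot
    p_human = human_counts[h] / sum(human_counts.values())  # F_h_human / F_all_human
    return math.log(p_bot / p_human)

# e.g., relative_popularity("#stoprussia", bot_hashtag_counts, nonbot_hashtag_counts)
```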
The findings from Table 9 and Figure 6 can be summarized as follows. First, from the perspective of
common features, both sides of the vertical axis involve many neutral hashtags that directly describe the
Russia-Ukraine War, like ‘#ukrainewar,’ ‘#russiaukrainewar,’ ‘#ukrainerussiawar,’ etc. Second, regarding differences, non-bots used relatively more hashtags about political leaders: ‘#biden’ and ‘#putin’ were often mentioned. Also, the hashtags ‘#usa’ and ‘#nato’ only appeared on the left side of the vertical axis (non-bots),
suggesting that the U.S. and NATO frequently appear in the non-bots narrative. For bots, they were more
likely to use hashtags with obvious opinion stances, such as #stoprussia, #Ukraineunderattack, and
#helpukraine. Finally, bot accounts used hashtags related to newscasts (such as #news, #breaking) more
frequently.
Figure 6: The relative popularity of the top 20 hashtags for bots and non-bots.
Notes: The horizontal axis is the Ph_relative value of hashtags. A positive number means the hashtag was
used more frequently by bots, and a negative number indicates that non-bots used the hashtag more
frequently. The vertical axis is the sum of the frequency of hashtags used in the tweets of the two types of
accounts.
Note: Larger version of Figure 6 available here.
In addition to hashtag analysis, we analyzed the content of tweets to reveal opinions and narrative strategies.
Python word-splitting algorithm and the relative popularity formula mentioned earlier were employed in this
analysis. In Figure 7, we visualized the relative popularity of the top 50 most used words in the tweets of bots
and non-bots. First, terms directly related to the Russia-Ukraine War occupied a considerably high proportion
for both bots and non-bots. Second, for differences, words expressing opposition to war and calling for peace
(‘peace,’ ‘stop,’ etc.) were more prevalent among non-bots. Non-bots also focused more on terms related to
military conflicts, such as ‘military,’ ‘forces,’ ‘troops,’ and ‘weapons.’ For bots, they used more media-related
terms such as ‘youtube,’ ‘live,’ and ‘news.’ In general, the words most frequently used by bots were relatively
focused, while words used by non-bots were more diverse.
Figure 7: The relative popularity of the top 50 most used words in the tweets of bots and non-bots.
Notes: The horizontal axis is the Ph_relative value of words. A positive number means the word was used
more frequently by bots, and a negative number means non-bots used the word more frequently. The vertical
axis indicates the frequency of words in tweets. We log-transformed the sum of the frequencies for ease of
visualization.
Note: Larger version of Figure 7 available here.
3.3.2. Opinion stance
We divided the political stances conveyed by tweets into three categories according to the rules mentioned
earlier: the Russian side, the Ukrainian side, and neutral (see the Methodology section). According to the
prediction outcomes of our SVM model, for bots, 4.77 percent of tweets belonged to the Russian side, 42.39
percent belonged to the Ukrainian side, and the remaining 52.84 percent were neutral. For non-bots, 11.65
percent of the tweets belonged to the Russian side, 43.57 percent belonged to the Ukrainian side, and the
remaining 44.78 percent were neutral. Further, we compared the percentage of tweets belonging to the Russian
and Ukrainian sides among bots and non-bots (see Figure 8). The results show that tweets belonging to the
Ukrainian side occupy a larger share of both types of accounts, which suggests that the pro-Ukrainian voices
have an overwhelming advantage over pro-Russian voices in the online conversation about the Russia-Ukraine
War on Twitter. In addition, among tweets expressing a clear stance, the proportion speaking for the Ukrainian side was higher for bot tweets (89.88 percent) than for non-bot tweets (78.9 percent). In other words, bots amplified the voice of the Ukrainian side.
Figure 8: Distribution of political stances of bot and non-bot tweets.
We backtracked the accounts responsible for these tweets and investigated the consistency of their political
stances. Following the previous studies (e.g., Cinelli, et al., 2021), we used the average level of political stance
conveyed by tweets by a particular account to measure the consistency of accounts. For instance, if account i posts a_i tweets in the dataset and the political stance of each tweet is noted as C_i ∈ {-1, 0, 1} (where the Russian side was assigned a value of ‘-1’, neutral was assigned a value of ‘0’, and the Ukrainian side was assigned a value of ‘1’), then the political stance x_i of account i can be expressed as follows:

x_i = (sum of the C_i values of account i’s tweets) / a_i

Based on the values of x_i, we divided the accounts into five categories (see Table 10): x_i = -1 was defined as a strong supporter of Russia; x_i in (-1, 0) was defined as a moderate supporter of Russia; x_i = 0 was neutral; x_i in (0, 1) was defined as a moderate supporter of Ukraine; and x_i = 1 was defined as a strong supporter of Ukraine.
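A short sketch of the consistency score and the five-way categorization, using the stance values defined above, is given below.

```python
# Illustrative sketch of the per-account stance consistency score x_i.
STANCE_VALUE = {"russia": -1, "neutral": 0, "ukraine": 1}

def account_stance(tweet_stances):
    """Mean stance value over all of an account's a_i tweets in the dataset."""
    values = [STANCE_VALUE[s] for s in tweet_stances]
    return sum(values) / len(values)

def stance_category(x):
    if x == -1:
        return "strong supporter of Russia"
    if x < 0:
        return "moderate supporter of Russia"
    if x == 0:
        return "neutral"
    if x < 1:
        return "moderate supporter of Ukraine"
    return "strong supporter of Ukraine"
```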
The proportion of pro-Ukrainian accounts in bots is higher than in non-bots. Both the proportions of strong
supporters of Russia and Ukraine were higher in non-bots than in bots.
Table 10: Political stance of accounts by degrees of consistency.
As a follow-up to the geolocation analysis we did earlier, we analyzed the geolocation of bots with different
political stances in Table 11. First, about half of the bots in all three political stances chose to disclose their
geolocation. Second, the distribution of the identified geographical information is highly concentrated, with the
five most mentioned countries in each stance accounting for a considerable proportion. Specifically, on the
Ukrainian side, the top five countries were the U.S. (22.2 percent), India (5.6 percent), the U.K. (3.5 percent),
Ukraine (3.3 percent), and Canada (1.5 percent). On the Russian side, the top five countries were the U.S. (19.6 percent), India (6.1 percent), the U.K. (3.7 percent), Nigeria (2.4 percent), and Japan (1.4 percent).
Table 11: Geolocation ranking of bots with different political stances (Top 5 countries).
3.3.3. Sentiment analysis
We analyzed the sentiment of the tweets in the sample using TextBlob. For bots and non-bots (see Figure 9),
the trends in the distribution of sentiment polarity of their tweets are generally similar, but still, there were
some differences. Specifically, tweets posted by non-bots showed relatively more positive or negative
sentiment, while bot tweets had a higher proportion of neutral sentiment. This may be because social bots retweet news stories more often.
We also investigated the sentiment polarity distribution of bot tweets and non-bot tweets with different
political stances (see Figure 10, Figure 11). In bot tweets, we found that tweets with clear political stances (the
Russian side or the Ukrainian side) often show more positive or negative sentiments. Both the Russian and the
Ukrainian sides tweeted more positive rather than negative sentiments. Therefore, tweets were more likely to
show political stances by expressing support for their side rather than opposition to the other side. The
distributions of tweets with clear political stances (the Russian side or the Ukrainian side) were very similar
across the three sentiment categories. The proportions of neutral sentiment for both bot and non-bot tweets
were higher than 50 percent. Compared to bot tweets, non-bot tweets showed more non-neutral sentiment in all
three political stances, especially for the Russian side.
Figure 9: Sentiment polarity distribution of bot and non-bot tweets.
Figure 10: Distribution of sentiment polarity of bot tweets with different political stances.
Figure 11: Distribution of sentiment polarity of non-bot tweets with different political stances.
3.4. Interaction analysis (RQ4)
3.4.1. Comments, retweets, and likes
In this part of the analysis, we calculated the average numbers of comments, retweets, and likes of tweets
posted by bots and non-bots, respectively. Table 12 shows the findings. The average number of likes received
by non-bot tweets was 33.84, whereas bot tweets only received 3.25 likes on average. The findings for the
number of comments and the number of retweets were very similar. Tweets posted by bot accounts were less
likely to trigger interactive actions from other accounts.
Table 13 presents a crosstabulation of interaction statistics by political stance. Overall, tweets from the Russian
side and the Ukrainian side obtained more retweets, comments, and likes than those with a neutral stance. This
suggests that opinion drives more interaction on social media. Interestingly, the data also show that tweets by bots
from the Russian side received higher average retweets, likes, and comments than those from the Ukrainian
side. But for tweets from non-bot accounts, the results are reversed.
Table 12: Interaction statistics for bot and non-bot tweets.
Table 13: Interaction statistics for tweets of different political positions.
Note: * denotes statistical significance at p < 0.05 level.
In terms of temporal patterns, we performed a time-series analysis of interaction data obtained by bot and non-
bot tweets within the sample (Figure 12). What appears evident from observing the upper and lower panels is
that both bot and non-bot tweets showed a spike around 24–25 February, which could be a reaction to the start
of the War. Likes, comments, and retweets of non-bot tweets all showed a second peak between 15 March and
17 March, but this second peak was not evident in the bot tweets.
Figure 12: The time-series changes in interaction data of tweets.
3.4.2. @Mention: Potential network structure
Twitter allows users to mention other users in their tweets by using the ‘@’ symbol. Mention is a type of social interaction that can form a co-occurrence network by connecting the mentioned account with the user who
initiated the mention. In this study, we conducted a co-occurrence network analysis of ‘mentions’ in bot and
non-bot tweets to explore the differences between the two types of accounts.
First, the basic statistics of the co-occurrence relations of ‘mentions’ are presented in Table 14. There were
1,368 co-occurrence relations in 6,200 bot tweets involving 597 bots and 471 mentioned accounts. And there
were 8,408 co-occurrence relations in 27,787 non-bot tweets, including 4,133 non-bot accounts and 3,325
mentioned accounts. The comparison shows that bots were less active than non-bots in using the mention function, indicating that non-bot accounts were more likely to interact with other accounts.
Table 14: Statistics of ‘mention’ actions in tweets.
Figure 13 presents the co-occurrence network of bots and non-bots. In terms of bots, there are five relatively
discrete and sparsely interlinked communities. For non-bot accounts, seven densely knit communities could be
observed.
Figure 13: The co-occurrence network of bot accounts (left) and non-bot accounts (right).
Note: Larger version of Figure 13 available here.
Prior research suggested that bots may interact with each other to enhance their influence and visibility in
online conversations (Howard and Kollanyi, 2016; Duh, et al., 2018). Thus, we further examined the
categories of mentioned accounts in the co-occurrence network. The results show that among the 471 accounts
mentioned by bots, 108 accounts existed in our sample dataset. Of the 108 accounts, 25 (23.15 percent) were
bots, and 83 (76.85 percent) were non-bots. Among the 3,325 accounts mentioned by non-bots, 357 were in
our sample dataset. Of the 357 accounts, only 3 (0.84 percent) were bots, and 354 (99.16 percent) were non-bot
accounts. Therefore, both bots and non-bots were more inclined to mention non-bot accounts.
4. Conclusion
The tools of political discussion have radically changed since the advent of online social media (Harvey,
2013). The popularity of platforms such as Twitter has accelerated the process of political discussion, but the
invention of social media bots could bring potential perils associated with the abuse of these platforms
(Woolley and Howard, 2016; Shorey and Howard, 2016; Maréchal, 2016). Our study investigated the
engagement of social media bots on the issue of the Russia-Ukraine War and summarized our findings in four
aspects as follows.
4.1. Level of bot intervention
We found that bots were extensively involved in online conversations concerning the Russia-Ukraine War.
In our sampled data, 3,816 bot accounts (13.4 percent) produced 6,200 tweets. We estimate that at least
126,100 bots were actively engaged in the Russia-Ukraine War conversations, and they were responsible for
about 606,360 to 632,400 tweets during the observation period.
We found many tweets involving the Russia-Ukraine War had a clear political stance. Specifically, the
percentage of bot tweets from the Ukrainian side was nearly nine times that of the Russian side. In
addition, the proportion of tweets supporting the Ukrainian side was about 11 percent higher for bots than non-
bots. It seems that social media bots amplified the voice of the Ukrainian side in the online conversations of the
Russia-Ukraine War. Similarly, we found in our geolocation analysis that although Russia was involved in the
conflict, very few bot accounts labeled themselves as coming from Russia.
There could be two possible reasons explaining the relatively lower activities of bots on the Russian side. On
the one hand, Twitter suspended many bot accounts on the Russian side after the War started (Collins and
Korecki, 2022). On the other hand, from 4 March 2022 (within our data collection period), the Russian government blocked its citizens from accessing Twitter (Milmo, 2022). However, the bots on the Russian side
were more ‘effective’ because they performed better than the bots on the Ukrainian side in terms of attracting
likes, retweets, and comments. To disentangle this puzzle, we examined the content of pro-Russia tweets and
found that bots on the Russian side more often posted controversial content, such as the eastward expansion of
NATO. The tweets posted by the Ukrainian side were more likely to express condemnation of Russia or
support for Ukraine.
4.2. Differences between bots and non-bots
We identified several differences between bots and non-bots. First, bots were typically younger than non-bots,
with nearly half of the accounts created in the last three years (2020, 2021, 2022). Second, considering that the
average daily tweet volume of bots was more than twice that of non-bots, it seems that bots were more active
than non-bots on Twitter. However, the influence they brought to the online conversation was not as significant
as expected because bots have far fewer followers on average than non-bots. Therefore, the scale of users that
bots can reach is limited. Third, in terms of topic preferences, based on features we extracted from hashtags
and word frequencies, bots used more hashtags with strong opinion stances (e.g., #stoprussia, #helpukraine).
They also attempted to introduce unrelated discussion topics, such as #cybersecurity and #bitcoin. Fourth, from
the perspective of narrative strategy, one of the most salient features was that bots tended to pose as news
media accounts and use news stories to exert their influence.
It is worth noting that although bots are significantly weaker than non-bots in capturing interaction through
likes, retweets, and comments, they were trying to establish more interactions with non-bots to expand their
influence on human users. For instance, bots often direct other users to follow their ‘ally’ accounts in self-
introduction by using phrases like ‘move to @*,’ ‘please follow us @*,’ ‘for more news at @*,’ etc. In addition, according to the analysis of the co-occurrence network of ‘mentions,’ 76.85 percent of the in-sample accounts mentioned by bots were non-bots, suggesting that bots sought to establish connections with human users.
4.3. Bot and non-bot similarity
There are a few aspects that bot and non-bot social media accounts demonstrated similarity. First, the tweeting
volume curves of both exhibited correspondence with off-line events. Second, both bots and non-bots mostly
claim to come from the U.S., the U.K., and India. Third, we found that both bots and non-bots adopted
informal tones in their self-descriptions. Finally, we found most of the tweets were neutral in sentiment, which
is quite different from the findings of previous studies, in which bots were more inclined to produce content with extreme
sentiments (Stella, et al., 2018; Albadi, et al., 2019).
Our study revealed that bots and non-bots behaved differently in some aspects but similarly in others. Overall,
the differences were generally less prominent than in previous studies. We speculate that there could be several
possible explanations. First, we raised the threshold for determining bots to 0.8. So bots with bot scores lower
than 0.8 were classified as non-bots, influencing the overall statistics of the non-bot group. Second, we
excluded institution/media accounts from our study (González-Bailón and De Domenico, 2021), which made human and bot differences less pronounced in our analysis.
Third, political issues of different natures may invite different levels of bot intervention. The nature of the
Russia-Ukraine War may also directly affect the extent of bot participation. In previous research, bots have
shown strong intervention during elections and referendums. For example, Shao, et al. (2018) estimated that
bots accounted for about 31 percent of the content produced during the 2016 presidential election. Stella and
Ferrara (2018) found that bots accounted for about 23.6 percent of the activity around the Catalan referendum. Active bot participation from both sides of an issue could result in a relatively high percentage of bot involvement. Plus,
social media platforms usually show higher tolerance toward political competition within a democracy.
However, in the case of the Russia-Ukraine War, the opinion environment on Twitter was one-sided and less
controversial. In addition, Russia’s blocking of Twitter further limited the role pro-Russia bots can play
(Milmo, 2022).
Our investigation provided another example of the extensive involvement of social media bots in the online
political conversation. As some scholars have warned, social media bots are gradually being ‘weaponized’
(Jones, 2019; Orabi, et al., 2020). As social media platforms become more important in human life, bots may
have greater potential to influence and shape individual opinions and public opinion (Aral and Walker, 2010;
Chen, et al., 2022). Especially when social media become a tool for national interest and political propaganda,
they can be used to challenge social and international order. We argue that this challenge is mainly reflected in
the following three aspects. First, previous studies have repeatedly mentioned the ‘power’ of bots in spreading
dis(mis)information (Ferrara, 2017a; Shao, et al., 2018; Albadi, et al., 2019), and organic opinions may be
concealed to a certain extent. Second, anthropomorphic bots are able to disrupt rational discussions by
disseminating extreme emotions. With the continuous evolution of artificial intelligence technology, bots have gradually demonstrated their capacity for emotional contagion (Stella and Ferrara, 2018; Shi, et al., 2020). This
provocation of emotions is obviously an underestimated risk, and it is hard to imagine the harm extreme and
irrational emotions could do to public discussions. Third, in an online environment where dis(mis)information
is ubiquitous, users’ distrust of information may further translate into a general social distrust. Given these
considerations, research on the development and influences of social bots should be continued in the future,
and social media platforms should develop strong policies to prevent undesirable social impacts incurred by
bot use.
4.4. Limitations
There are a few limitations to this study that need to be acknowledged. Due to time and cost constraints, we did
not perform bot detection and analysis on the entire dataset. In addition, since we have adopted a higher
threshold for determining bots, the dataset of non-bot accounts may contain bots with less obvious bot
characteristics. Furthermore, our dataset only includes textual data; however, many tweets in the online conversations about the Russia-Ukraine War used images or videos, which deserve more attention. Finally, and most importantly, similar to many social media bot studies, we could only observe the external
behavior exhibited by these accounts. The operation motivation and the algorithms associated with these
accounts remain largely unknown. Future studies could aim to unravel these issues to understand social media
bots.
About the authors
Fei Shen is an associate professor in the Department of Media and Communication at City University of Hong
Kong.
E-mail: feishen [at] cityu [dot] edu [dot] hk
Erkun Zhang is a Ph.D. candidate in the School of Journalism and Communication at Beijing Normal
University.
E-mail: 202231021002 [at] mail [dot] bnu [dot] edu [dot] cn
Wujiong Ren is a postgraduate student in the School of Journalism and Communication at Beijing Normal
University.
E-mail: wjren [at] mail [dot] bnu [dot] edu [dot] cn
Yuan He is an associate professor in the School of Journalism and Communication at Hebei University.
E-mail: melina_hy [at] qq [dot] com
Quanxin Jia is a Ph.D. candidate in the Department of Communication at the University of Macau.
E-mail: 201921021003 [at] mail [dot] bnu [dot] edu [dot] cn
Hongzhong Zhang is the dean and professor in the Journalism and Communication School at Beijing Normal
University.
E-mail: zhanghz9 [at] 126 [dot] com
Acknowledgements
This project was made possible thanks to funding from the New Media Research Center of Beijing Normal
University.
Fei Shen, Erkun Zhang, Wujiong Ren, Yuan He, and Quanxin Jia all contributed equally to this work and share
first authorship. Hongzhong Zhang is the corresponding author.
Notes
1. Albadi, et al., 2019, p. 3.
2. Boshmaf, et al., 2011, p. 93.
3. Igawa, et al., 2016, p. 73.
4. Bessi and Ferrara, 2016, p. 10.
5. We use the term ‘non-bot’ because we raised the threshold for identifying bots, which may cause some bot
accounts with less pronounced automation characteristics to be missed. We therefore refer to accounts with
scores below 0.8 as ‘non-bots’; this group is overwhelmingly composed of real users. A minimal illustrative
sketch of this thresholding step appears after these notes.
6. Stieglitz, et al., 2017, p. 4.
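The following is a minimal sketch of the bot-labeling threshold described in note 5. It assumes that Botometer (Yang, et al., 2022) was the scoring tool, consistent with the Botometer references cited in this paper, and that the 0.8 cutoff applies to an overall score on a zero-to-one scale; the credentials, the choice of score, and the helper function are illustrative placeholders rather than the authors’ actual pipeline.

import botometer

# Placeholder credentials; real keys are required to query the Botometer API.
twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="...",
                          **twitter_app_auth)

def label_account(screen_name, threshold=0.8):
    # Score one account; accounts scoring below the cutoff are treated as
    # non-bots (note 5). Which Botometer score the 0.8 cutoff targets is an
    # assumption here; the overall raw score (0-1 scale) is used for illustration.
    result = bom.check_account(screen_name)
    score = result["raw_scores"]["universal"]["overall"]
    return "bot" if score >= threshold else "non-bot"

# Example usage: label_account("@example_handle")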
References
N. Abokhodair, D. Yoo, and D.W. McDonald, 2015. “Dissecting a social botnet: Growth, content and
influence in Twitter,” CSCW ’15: Proceedings of the 18th ACM Conference on Computer Supported
Cooperative Work & Social Computing, pp. 839–851.
doi: https://doi.org/10.1145/2675133.2675208, accessed 30 January 2023.
J. Ahn, 2011. “The effect of social network sites on adolescents’ social and academic development: Current
theories and controversies,” Journal of the American Society for Information Science and Technology, volume
62, number 8, pp. 1,435–1,445.
doi: https://doi.org/10.1002/asi.21540, accessed 30 January 2023.
A. Aizawa, 2003. “An information-theoretic perspective of tfidf measures,” Information Processing &
Management, volume 39, number 1, pp. 45–65.
doi: https://doi.org/10.1016/S0306-4573(02)00021-3, accessed 30 January 2023.
N. Albadi, M. Kurdi, and S. Mishra, 2019. “Hateful people or hateful bots? Detection and characterization of
bots spreading religious hatred in Arabic social media,” Proceedings of the ACM on Human-Computer
Interaction, volume 3, number CSCW, article number 61, pp. 1–25.
doi: https://doi.org/10.1145/3359163, accessed 30 January 2023.
J.P. Allem, P. Escobedo, and L. Dharmapuri, 2020. “Cannabis surveillance with Twitter data: Emerging topics
and social bots,” American Journal of Public Health, volume 110, number 3, pp. 357–362.
doi: https://doi.org/10.2105/AJPH.2019.305461, accessed 30 January 2023.
E. Alothali, N. Zaki, E.A. Mohamed, and H. Alashwal, 2018. “Detecting social bots on Twitter: A literature
review,” 2018 International Conference on Innovations in Information Technology (IIT), pp. 175–180.
doi: https://doi.org/10.1109/INNOVATIONS.2018.8605995, accessed 30 January 2023.
S. Aral and D. Walker, 2010. “Creating social contagion through viral product design: A randomized trial of
peer influence in networks,” ICIS 2010 Proceedings, at https://aisel.aisnet.org/icis2010_submissions/44/,
accessed 30 January 2023.
D. Arlt, A. Rauchfleisch, and M.S. Schäfer, 2019. “Between fragmentation and dialogue. Twitter communities
and political debate about the Swiss ‘nuclear withdrawal initiative’,” Environmental Communication, volume
13, number 4, pp. 440–456.
doi: https://doi.org/10.1080/17524032.2018.1430600, accessed 30 January 2023.
A. Badawy, E. Ferrara, and K. Lerman, 2018. “Analyzing the digital traces of political manipulation: The 2016
Russian interference Twitter campaign,” 2018 IEEE/ACM International Conference on Advances in Social
Networks Analysis and Mining (ASONAM), pp. 258–265.
doi: https://doi.org/10.1109/ASONAM.2018.8508646, accessed 30 January 2023.
T. Bajarin, 2022. “Russian-Ukraine War ... The most broadcast war in history that includes physical and cyber
warfare,” Forbes (2 March), at https://www.forbes.com/sites/timbajarin/2022/03/02/ukraine-russian-warthe-
most-broadcast-war-in-history/?sh=779eafe15b3b, accessed 2 March 2022.
G.A. Barnett, W.W. Xu, J. Chu, K. Jiang, C. Huh, J.Y. Park, and H.W. Park, 2017. “Measuring international
relations in social media conversations,” Government Information Quarterly, volume 34, number 1, pp. 37–44.
doi: https://doi.org/10.1016/j.giq.2016.12.004, accessed 30 January 2023.
M. Bastian, S. Heymann, and M. Jacomy, 2009. “Gephi: An open source software for exploring and
manipulating networks,” Proceedings of the International AAAI Conference on Web and Social Media, volume
3, number 1, pp. 361–362.
doi: https://doi.org/10.1609/icwsm.v3i1.13937, accessed 30 January 2023.
M.T. Bastos and D. Mercea, 2019. “The Brexit botnet and user-generated hyperpartisan news,” Social Science
Computer Review, volume 37, number 1, pp. 38–54.
doi: https://doi.org/10.1177/0894439317734157, accessed 30 January 2023.
V. Bergengruen, 2022. “Telegram becomes a digital battlefield in Russia-Ukraine War,” Time (21 March), at
https://time.com/6158437/telegram-russia-ukraine-information-war/, accessed 10 April 2022.
A. Bessi and E. Ferrara, 2016. “Social bots distort the 2016 U.S. Presidential election online discussion,” First
Monday, volume 21, number 11, at https://firstmonday.org/article/view/7090/5653, accessed 30 January 2023.
doi: https://doi.org/10.5210/fm.v21i11.7090, accessed 30 January 2023.
G. Bolsover and P. Howard, 2019. “Chinese computational propaganda: Automation, algorithms and the
manipulation of information about Chinese politics on Twitter and Weibo,” Information, Communication &
Society, volume 22, number 14, pp. 2,063–2,080.
doi: https://doi.org/10.1080/1369118X.2018.1476576, accessed 30 January 2023.
Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu, 2011. “The socialbot network: When bots socialize
for fame and money,” ACSAC ’11: Proceedings of the 27th Annual Computer Security Applications
Conference, pp. 93–102.
doi: https://doi.org/10.1145/2076732.2076746, accessed 30 January 2023.
S. Bradshaw and P.N. Howard, 2018. “The global organization of social media disinformation campaigns,”
Journal of International Affairs, volume 71, number 1.5, pp. 23–32, and at https://jia.sipa.columbia.edu/global-
organization-social-media-disinformation-campaigns, accessed 30 January 2023.
D.A. Broniatowski, A.M. Jamison, S. Qi, L. AlKulaib, T. Chen, A. Benton, S.C. Quinn, and M. Dredze, 2018.
“Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate,” American
Journal of Public Health, volume 108, number 10, pp. 1,378–1,384.
doi: https://doi.org/10.2105/AJPH.2018.304567, accessed 30 January 2023.
X. Chen, S. Gao, and X. Zhang, 2022. “Visual analysis of global research trends in social bots based on
bibliometrics,” Online Information Review, volume 46, number 6, pp. 1,076–1,094.
doi: https://doi.org/10.1108/OIR-06-2021-0336, accessed 30 January 2023.
C. Cheng, Y. Luo, and C. Yu, 2020. “Dynamic mechanism of social bots interfering with public opinion in
network,” Physica A: Statistical Mechanics and its Applications, volume 551, 124163.
doi: https://doi.org/10.1016/j.physa.2020.124163, accessed 30 January 2023.
Z. Chu, S. Gianvecchio, H. Wang, and S. Jajodia, 2012. “Detecting automation of Twitter accounts: Are you a
human, bot, or cyborg?” IEEE Transactions on Dependable and Secure Computing, volume 9, number 6, pp.
811–824.
doi: https://doi.org/10.1109/TDSC.2012.75, accessed 30 January 2023.
M. Cinelli, G. De Francisci Morales, A. Galeazzi, and M. Starnini, 2021. “The echo chamber effect on social
media,” Proceedings of the National Academy of Sciences, volume 118, number 9 (23 February),
e2023301118.
doi: https://doi.org/10.1073/pnas.2023301118, accessed 30 January 2023.
B. Collins and N. Korecki, 2022. “Twitter bans over 100 accounts that pushed #IStandWithPutin,” NBC News
(4 March), at https://www.nbcnews.com/tech/internet/twitter-bans-100-accounts-pushed-istandwithputin-
rcna18655, accessed 28 October 2022.
D.M. Cook, B. Waugh, M. Abdipanah, O. Hashemi, and S.A. Rahman, 2014. “Twitter deception and
influence: Issues of identity, slacktivism, and puppetry,” Journal of Information Warfare, volume 13, number
1, pp. 58–71, and at https://www.jinfowar.com/journal/volume-13-issue-1/twitter-deception-and-influence-
issues-identity-slacktivism, accessed 30 January 2023.
C.A. Davis, O. Varol, E. Ferrara, A. Flammini, and F. Menczer, 2016. “Botornot: A system to evaluate social
bots,” WWW ’16 Companion: Proceedings of the 25th International Conference Companion on World Wide
Web, pp. 273–274.
doi: https://doi.org/10.1145/2872518.2889302, accessed 30 January 2023.
B. Dean, 2022. “How many people use Twitter in 2022?” (5 January), at https://backlinko.com/twitter-users,
accessed 15 April 2022.
R. Dubbin, 2013. “The rise of Twitter bots,” New Yorker (14 November), at
http://www.newyorker.com/tech/elements/the-rise-of-twitter-bots, accessed 4 October 2022.
A. Duh, M. Slak Rupnik, and D. Korošak, 2018. “Collective behavior of social bots is encoded in their
temporal Twitter activity,” Big Data, volume 6, number 2, pp. 113–123.
doi: https://doi.org/10.1089/big.2017.0041, accessed 30 January 2023.
R.M. Everett, J.R.C. Nurse, and A. Erola, 2016. “The anatomy of online deception: What makes automated
text convincing?” SAC ’16: Proceedings of the 31st Annual ACM Symposium on Applied Computing, pp.
1,115–1,120.
doi: https://doi.org/10.1145/2851613.2851813, accessed 30 January 2023.
E. Ferrara, 2017a. “Disinformation and social bot operations in the run up to the 2017 French presidential
election,” First Monday, volume 22, number 8, at https://firstmonday.org/article/view/8005/6516, accessed 30
January 2023.
doi: https://doi.org/10.5210/fm.v22i8.8005, accessed 30 January 2023.
E. Ferrara, 2017b. “Contagion dynamics of extremist propaganda in social networks,” Information Sciences,
volume 418–419, pp. 1–12.
doi: https://doi.org/10.1016/j.ins.2017.07.030, accessed 30 January 2023.
E. Ferrara and Z. Yang, 2015. “Measuring emotional contagion in social media,” PloS ONE, volume 10,
number 11, e0142390.
doi: https://doi.org/10.1371/journal.pone.0142390, accessed 30 January 2023.
E. Ferrara, H. Chang, E. Chen, G. Muric, and J. Patel, 2020. “Characterizing social media manipulation in the
2020 U.S. presidential election,” First Monday, volume 25, number 11, at
https://firstmonday.org/article/view/11431/9993, accessed 30 January 2023.
doi: https://doi.org/10.5210/fm.v25i11.11431, accessed 30 January 2023.
E. Ferrara, O. Varol, C. Davis, F. Menczer, and A. Flammini, 2016. “The rise of social bots,” Communications
of the ACM, volume 59, number 7, pp. 96–104.
doi: https://doi.org/10.1145/2818717, accessed 30 January 2023.
M. Forelle, P. Howard, A. Monroy-Hernández, and S. Savage, 2015. “Political bots and the manipulation of
public opinion in Venezuela,” at https://ora.ox.ac.uk/objects/uuid:07cbc55b-f9e2-44c3-a6f9-daab377c8f8c,
accessed 30 January 2023.
S. González-Bailón and M. De Domenico, 2021. “Bots are less central than verified accounts during
contentious political events,” Proceedings of the National Academy of Sciences, volume 118, number 11 (8
March), e2013443118.
doi: https://doi.org/10.1073/pnas.2013443118, accessed 30 January 2023.
J.P. Gujjar and H.P. Kumar, 2021. “Sentiment analysis: Textblob for decision making,” International Journal
of Scientific Research & Engineering Trends, volume 7, number 2, pp. 1,097–1,099, and at
https://ijsret.com/wp-content/uploads/2021/03/IJSRET_V7_issue2_289.pdf, accessed 30 January 2023.
L. Guo, J.A. Rohde, and H.D. Wu, 2020. “Who is responsible for Twitter’s echo chamber problem? Evidence
from 2016 U.S. election networks,” Information, Communication & Society, volume 23, number 2, pp. 234–
251.
doi: https://doi.org/10.1080/1369118X.2018.1499793, accessed 30 January 2023.
L. Hagen, S. Neely, T.E. Keller, R. Scharf, and F.E. Vasquez, 2022. “Rise of the machines? Examining the
influence of social bots on a political discussion network,” Social Science Computer Review, volume 40,
number 2, pp. 264–287.
doi: https://doi.org/10.1177/0894439320908190, accessed 30 January 2023.
K. Harvey (editor), 2013. Encyclopedia of social media and politics. Thousand Oaks, Calif.: Sage.
doi: https://dx.doi.org/10.4135/9781452244723, accessed 30 January 2023.
P.N. Howard and B. Kollanyi, 2016. “Bots, #StrongerIn, and #Brexit: Computational propaganda during the
UK-EU referendum,” at https://ora.ox.ac.uk/objects/uuid:d7787894-7c41-4c3b-a81d-d7c626b414ad, accessed
30 January 2023.
R.A. Igawa, S. Barbon, Jr., K.C.S. Paulo, G.S. Kido, R.C. Guido, M.L.P. Júnior, and I.N. da Silva, 2016.
“Account classification in online social networks with LBCA and wavelets,” Information Sciences, volume
332, pp. 72–83.
doi: https://doi.org/10.1016/j.ins.2015.10.039, accessed 30 January 2023.
M. Jaitner, 2015. “Russian information warfare: Lessons from Ukraine,” In: K. Geers (editor). Cyber war in
perspective: Russian aggression against Ukraine. Tallinn: NATO Cooperative Cyber Defence Centre of
Excellence, pp. 87–94, and at https://ccdcoe.org/uploads/2018/10/Ch10_CyberWarinPerspective_Jaitner.pdf,
accessed 30 January 2023.
T. Joachims, 1998. “Text categorization with Support Vector Machines: Learning with many relevant
features,” In: C. Nédellec and C. Rouveirol (editors). Machine learning: ECML-98. Lecture Notes in Computer
Science, volume 1398. Berlin: Springer, pp. 137–142.
doi: https://doi.org/10.1007/BFb0026683, accessed 30 January 2023.
D. Johnson, 2022. “Ukraine could be the most documented war in human history,” Slate (24 February), at
https://slate.com/technology/2022/02/ukraine-russia-livestream-google-maps.html, accessed 24 February 2022.
M.O. Jones, 2019. “Propaganda, fake news, and fake trends: The weaponization of Twitter bots in the Gulf
crisis,” International Journal of Communication, volume 13, at
https://ijoc.org/index.php/ijoc/article/view/8994, accessed 30 January 2023.
K.H. Manguri, R.N. Ramadhan, and P.R.M. Amin, 2020. “Twitter sentiment analysis on worldwide COVID-
19 outbreaks,” Kurdistan Journal of Applied Research, volume 5, number 3, pp. 54–65.
doi: https://doi.org/10.24017/covid.8, accessed 30 January 2023.
N. Maréchal, 2016. “When bots tweet: Toward a normative framework for bots on social networking sites,”
International Journal of Communication, volume 10, at https://ijoc.org/index.php/ijoc/article/view/6180,
accessed 30 January 2023.
D. Milmo, 2022. “Russia blocks access to Facebook and Twitter,” Guardian (4 March), at
https://www.theguardian.com/world/2022/mar/04/russia-completely-blocks-access-to-facebook-and-twitter,
accessed 28 October 2022.
S. Muscat and Z. Siebert, 2022. “Laptop generals and bot armies: The digital front of Russia’s Ukraine war,”
Heinrich Böll Stiftung, Brussels office, European Union (1 March), at
https://eu.boell.org/en/2022/03/01/laptop-generals-and-bot-armies-digital-front-russias-ukraine-war, accessed
12 April 2022.
M. Orabi, D. Mouheb, Z. Al Aghbari, and I. Kamel, 2020. “Detection of bots in social media: A systematic
review,” Information Processing & Management, volume 57, number 4, 102250.
doi: https://doi.org/10.1016/j.ipm.2020.102250, accessed 30 January 2023.
J. Pöschko, 2011. “Exploring Twitter hashtags,” arXiv:1111.6553 (28 November).
doi: https://doi.org/10.48550/arXiv.1111.6553, accessed 30 January 2023.
J. Purtill, 2022. “When it comes to spreading disinformation online, Russia has a massive bot army on its
side,” ABC News (29 March), at https://www.abc.net.au/news/science/2022-03-30/ukraine-war-twitter-bot-
network-amplifies-russian-disinformation/100944970, accessed 10 April 2022.
J. Ratkiewicz, M. Conover, M. Meiss, B. Gonçalves, A. Flammini, and F. Menczer, 2011. “Detecting and
tracking political abuse in social media,” Proceedings of the International AAAI Conference on Web and
Social Media, volume 5, number 1, pp. 297–304.
doi: https://doi.org/10.1609/icwsm.v5i1.14127, accessed 30 January 2023.
S. Shane, 2017. “The fake Americans Russia created to influence the election,” New York Times (7
September), at https://www.nytimes.com/2017/09/07/us/politics/russia-facebook-twitter-election.html,
accessed 30 January 2023.
C. Shao, G.L. Ciampaglia, O. Varol, K.-C. Yang, A. Flammini, and F. Menczer, 2018. “The spread of low-
credibility content by social bots,” Nature Communications, volume 9, number 1, Article number 4787.
doi: https://doi.org/10.1038/s41467-018-06930-7, accessed 30 January 2023.
W. Shi, D. Liu, J. Yang, J. Zhang, S. Wen, and J. Su, 2020. “Social bots’ sentiment engagement in health
emergencies: A topic-based analysis of the COVID-19 pandemic discussions on Twitter,” International
Journal of Environmental Research and Public Health, volume 17, number 22, 8701.
doi: https://doi.org/10.3390/ijerph17228701, accessed 30 January 2023.
S. Shorey and P.N. Howard, 2016. “Automation, big data and politics: A research review,” International
Journal of Communication, volume 10, at http://ijoc.org/index.php/ijoc/article/view/6233, accessed 30 January
2023.
M. Stella, E. Ferrara, and M. De Domenico, 2018. “Bots increase exposure to negative and inflammatory
content in online social systems,” Proceedings of the National Academy of Sciences, volume 115, number 49
(20 November), pp. 12,435–12,440.
doi: https://doi.org/10.1073/pnas.1803470115, accessed 30 January 2023.
S. Stieglitz, F. Brachten, D. Berthelé, M. Schlaus, C. Venetopoulou, and D. Veutgen, 2017. “Do social bots
(still) act different to humans? Comparing metrics of social bots with those of humans,” In: G. Meiselwitz
(editor). Social computing and social media. Human behavior. Lecture Notes in Computer Science, volume
10282. Cham, Switzerland: Springer, pp. 379–395.
doi: https://doi.org/10.1007/978-3-319-58559-8_30, accessed 30 January 2023.
V.S. Subrahmanian, A. Azaria, S. Durst, V. Kagan, A. Galstyan, K. Lerman, L. Zhu, E. Ferrara, A. Flammini,
and F. Menczer, 2016. “The DARPA Twitter bot challenge,” Computer, volume 49, number 6, pp. 38–46.
doi: https://dx.doi.org/10.1109/MC.2016.183, accessed 1 November 2016.
J.A. Tucker, Y. Theocharis, M.E. Roberts, and P. Barberá, 2017. “From liberation to turmoil: Social media and
democracy,” Journal of Democracy, volume 28, number 4, pp. 46–59.
doi: https://doi.org/10.1353/jod.2017.0064, accessed 30 January 2023.
Z. Tufekci, 2008. “Can you see me now? Audience and disclosure regulation in online social network sites,”
Bulletin of Science, Technology and Society, volume 28, number 1, pp. 20–36.
doi: https://doi.org/10.1177/0270467607311484, accessed 30 January 2023.
I. Urbina, 2013. “I flirt and tweet. Follow me at #Socialbot,” New York Times (10 August), at
https://www.nytimes.com/2013/08/11/sunday-review/i-flirt-and-tweet-follow-me-at-socialbot.html, accessed 4
October 2022.
R. Wald, T.M. Khoshgoftaar, A. Napolitano, and C. Sumner, 2013. “Which users reply to and interact with
Twitter social bots?” 2013 IEEE 25th International Conference on Tools with Artificial Intelligence, pp. 135–
144.
doi: https://doi.org/10.1109/ICTAI.2013.30, accessed 30 January 2023.
S.C. Woolley, 2016. “Automating power: Social bot interference in global politics,” First Monday, volume 21,
number 4, at https://firstmonday.org/article/view/6161/5300, accessed 30 January 2023.
doi: https://doi.org/10.5210/fm.v21i4.6161, accessed 30 January 2023.
S.C. Woolley and P.N. Howard, 2016. “Political communication, computational propaganda, and autonomous
agents — Introduction,” International Journal of Communication, volume 10, at
https://ijoc.org/index.php/ijoc/article/view/6298, accessed 30 January 2023.
K.C. Yang, E. Ferrara, and F. Menczer, 2022. “Botometer 101: Social bot practicum for computational social
scientists,” Journal of Computational Social Science, volume 5, pp. 1,511–1,528.
doi: https://doi.org/10.1007/s42001-022-00177-5, accessed 30 January 2023.
A. Zelenkauskaite, P. Toivanen, J. Huhtamäki, and K. Valaskivi, 2021. “Shades of hatred online: 4chan
duplicate circulation surge during hybrid media events,” First Monday, volume 26, number 1, at
https://firstmonday.org/article/view/11075/10029, accessed 30 January 2023.
doi: https://doi.org/10.5210/fm.v26i1.11075, accessed 30 January 2023.
Editorial history
Received 6 September 2022; revised 28 November 2022; accepted 30 January 2023.
This paper is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0
International License.
Examining the differences between human and bot social media accounts: A case study of the Russia-Ukraine
War
by Fei Shen, Erkun Zhang, Wujiong Ren, Yuan He, Quanxin Jia, and Hongzhong Zhang.
First Monday, Volume 28, Number 2 — 6 February 2023
https://firstmonday.org/article/view/12777/10776
doi: https://dx.doi.org/10.5210/fm.v28i2.12777