Cashtag piggybacking: uncovering spam and bot activity in stock microblogs on Twitter
STEFANO CRESCI, Institute of Informatics and Telematics, IIT-CNR
FABRIZIO LILLO, Department of Mathematics, University of Bologna
DANIELE REGOLI, Scuola Normale Superiore
SERENA TARDELLI, Institute of Informatics and Telematics, IIT-CNR
MAURIZIO TESCONI, Institute of Informatics and Telematics, IIT-CNR
Microblogs are increasingly exploited for predicting prices and traded volumes of stocks in financial markets. However, it has been demonstrated that much of the content shared in microblogging platforms is created and publicized by bots and spammers. Yet, the presence (or lack thereof) and the impact of fake stock microblogs has never systematically been investigated before. Here, we study 9M tweets related to stocks of the 5 main financial markets in the US. By comparing tweets with financial data from Google Finance, we highlight important characteristics of Twitter stock microblogs. More importantly, we uncover a malicious practice perpetrated by coordinated groups of bots and likely aimed at promoting low-value stocks by exploiting the popularity of high-value ones. Our results call for the adoption of spam and bot detection techniques in all studies and applications that exploit user-generated content for predicting the stock market.
CCS Concepts: • Information systems → Social networks; • Security and privacy → Social network security and privacy; • Applied computing → Economics;
Additional Key Words and Phrases: Social spam, Social networks, Spambots detection, Stock market
ACM Reference format:
Stefano Cresci, Fabrizio Lillo, Daniele Regoli, Serena Tardelli, and Maurizio Tesconi. 2018. Cashtag piggybacking: uncovering spam and bot activity in stock microblogs on Twitter. ACM Trans. Web 0, 0, Article 0 (2018), 18 pages.
DOI: 0000001.0000001
1 Introduction
The exploitation of user-generated content in microblogs for the prediction of real-world phenomena has recently gained huge momentum [ ]. An important application domain for such an approach is that of finance, and in particular, stock market prediction. Indeed, a number of works developed algorithms and tools for extracting valuable information (e.g., sentiment scores) from microblogs and proved capable of predicting prices and traded volumes of stocks in financial markets [ ]. Moreover, finance is increasingly relying on this information through the development of automatic trading systems. All such works ground on the assumption that microblogs collectively represent a reliable proxy for the opinions of masses of users. Meanwhile, evidence of spam and automated (bot) activities in social platforms is being reported at a growing rate [ ]. The existence of fictitious, synthetic content appears to be pervasive, since it has been witnessed both in online discussions
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2018 Copyright held by the owner/author(s). Publication rights licensed to ACM. 1559-1131/2018/0-ART0 $15.00
DOI: 0000001.0000001
ACM Transactions on the Web, Vol. 0, No. 0, Article 0. Publication date: 2018.
arXiv:1804.04406v1 [cs.SI] 12 Apr 2018
about important societal topics (e.g., politics, terrorism, immigration), as well as in discussions about seemingly less relevant topics, such as products on sale on e-commerce platforms, and mobile applications [ ]. For instance, regarding politics, it has been demonstrated that bots tampered with recent US [ ], Italian [ ], and French [ ] political elections, as well as with online discussions about the 2016 UK Brexit referendum [2].
Thus, on the one hand, user-generated content in microblogs is being exploited for predicting trends in the stock market. On the other hand, without a thorough investigation, we run the risk that much of the content we rely on is actually fake, and possibly purposely created to mislead algorithms and users alike. Should this risk materialize, real-world consequences would be severe, as already anticipated by a few noteworthy events. On May 6, 2010, the Dow Jones Industrial Average had the biggest one-day drop in its history, later called the Flash Crash. After five months, an investigation concluded that one of the possible causes was an automated high-frequency trading system that had incorrectly assessed some information collected from the Web [ ]. In 2013, the Associated Press Twitter account got hacked and a false rumor was posted reporting that President Obama had been injured in a terrorist attack. The fake news rapidly caused a stock market collapse that burned billions of dollars. Then, in 2014, the previously unknown Cynk Technology briefly became a $6B company. Automatic trading algorithms detected a fake social discussion and began to invest heavily in the company's shares. By the time analysts noticed the orchestration, investments had already turned into heavy losses.
This study moves in the direction of investigating the presence of spam and bot activity in stock microblogs, thus paving the way for the development of intelligent financial-spam filtering techniques. Specifically, we first collect a rich dataset comprising 9M tweets posted between May and September 2017, discussing stocks of the 5 main financial markets in the US. We enrich our dataset by collecting financial information from Google Finance about the 30,032 companies mentioned in our tweets. Cross-checking discussion patterns on Twitter against official data from Google Finance uncovers anomalies in tweets related to some low-value companies. Further investigation of this issue reveals a large-scale speculative campaign perpetrated by coordinated groups of bots and aimed at promoting low-value stocks by exploiting the popularity of high-value ones. Finally, we analyze a small subset of authors of suspicious tweets with state-of-the-art bot detection techniques, identifying 71% of them (18,509 accounts) as bots.
2 Related work
Since no study has previously addressed bot activity in stock microblogs, this section is organized so as to separately survey previous work related either to the exploitation of user-generated content for financial purposes, or to spam and bot characterization.
2.1 Finance and social media
Works in this field are based on the idea underlying the Hong-Page theorem [ ]. Such theorem, when cast in the financial domain, states that user-generated messages about a company's future prospects provide a rich and diverse source of information, in contrast to what the small number of traditional financial analysts can offer.
Starting from the general assumption of the Hong-Page theorem, much effort has been devoted towards the detection of correlations between metrics extracted from social media posts and stock market prices. In particular, sentiment metrics have been widely used as a predictor for stock
prices and other economic indicators [ ]. The primary role played by the sentiment of the users as a financial predictor is also testified by the interest in developing domain-specific sentiment classifiers for the financial domain [ ]. Others have instead proposed to exploit the overall volume of tweets about a company [ ] and the topology of stock networks [ ] as predictors of financial performance. Specifically, authors of [ ] envisioned the possibility to automatically buy or sell stocks based on the presence of a peak in the volume of tweets. However, subsequent work [ ] evaluated the informativeness of sentiment- and volume-derived predictors, showing that the sentiment of tweets contains significantly more information for predicting stock prices than their volume alone. The role of influencers in social media has also been identified as a strong contributing factor to the formation of market trends [ ]. Others have instead used weblogs for studying the relationships between different companies [ ]. In detail, co-occurrences of stock mentions in weblogs have been exploited to create a graph of companies, which was subsequently clustered. The authors verified that companies belonging to the same clusters feature strong correlations in their stock prices. This methodology can be employed for market prediction and as a portfolio-selection method, which has been shown to outperform traditional strategies based on company sectors or historical stock prices.
Nowadays, the results of studies such as those briefly surveyed in this section are leveraged for the development of automatic trading systems that are largely fed with social media-derived information [ ]. As a consequence, such automatic systems can potentially suffer severe problems caused by large quantities of fictitious posts. As discussed in the next section, the presence of social bots, and of the fake content they produce, is so widespread as to represent a serious, tangible threat to these, and other, systems [16].
2.2 Characterization of spam and bots in social media
Since our study is aimed at verifying the presence and the impact of spam and bot activity in stock microblogs, in this section we focus on discussing previous work about the characterization of spam and bots, rather than on their detection.
Many developers of spammer accounts make use of bots in order to simultaneously and continuously post a great deal of spam content. This is one of the reasons why, despite bots being rather small in number when compared to legitimate users, they nonetheless have a profound impact on content popularity and activity in social media [ ]. In addition, bots are driven so as to act in a coordinated and synchronized way, thus amplifying their effects [ ]. Another problem with bots is that they evolve over time, in order to evade established detection techniques [ ]. Hence, newer bots often feature advanced characteristics that make them much harder to detect than older ones. Recently, a general-purpose overview of the landscape of automated accounts was presented in [ ]. This work testifies the emergence of a new wave of social bots, capable of mimicking human behavior and interaction patterns in social media better than ever before. A subsequent study [ ] compared "traditional" and "evolved" bots in Twitter, and demonstrated that the latter are almost completely undetected by platform administrators. The authors also demonstrated that the majority of bot detection techniques proposed in the literature suffer from the same problem. Moreover, a crowdsourcing campaign showed that even tech-savvy users are incapable of accurately identifying the evolved bots.
Given this worrying picture, it is not surprising that bots have recently proven capable of influencing public opinion on many crucial topics [2,3,13] and in many different ways, such as by spreading fake news [ ] or by artificially inflating the popularity of certain posts [ ]. The combination of automatic systems feeding on social media data and the pervasive presence of spam and bots motivates our investigation of the presence of spam and bots in stock microblogs.
Fig. 1. Sample tweet with the $AAPL cashtag.
markets  | companies | median capitalization ($) | users   | tweets    | retweets (%)
NASDAQ   | 3,013     | 365,780,000               | 252,587 | 4,017,158 | 1,017,138 (25%)
NYSE     | 2,997     | 1,810,000,000             | 265,618 | 4,410,201 | 923,123 (21%)
NYSEARCA | 726       | 245,375,000               | 56,101  | 298,445   | 157,101 (53%)
NYSEMKT  | 340       | 78,705,000                | 22,614  | 196,545   | 63,944 (33%)
OTCMKTS  | 22,956    | 31,480,000                | 64,628  | 584,169   | 446,293 (76%)
Table 1. Financial and social dataset composition. Columns 2-3 report financial data; columns 4-6 report Twitter data.
3 Dataset
Our dataset for this study is composed of: (i) stock microblogs collected from Twitter, and (ii) financial information collected from Google Finance.
3.1 Twier data collection
Twier users follow the convention of tagging stock microblogs with so-called cashtags. e
cashtag of a company is composed of a dollar sign followed by its ticker symbol (e.g.,
the cashtag of Apple, Inc.). Figure 1shows a sample tweet with the
cashtag. Similarly to
hashtags, cashtags can be used as an ecient mean to lter content on Twier and to collect data
about given companies [
]. For this reason, we based our Twier data collection on an ocial list
of cashtags. Specically, we rst downloaded a list of 6,689 stocks traded on the most important US
markets (e.g.,
) from the ocial
Web site
. en, we collected all tweets shared
between May and September 2017, containing at least one cashtag from the list. Data collection
from Twier has been carried out by exploiting Twier’s Streaming APIs
. Aer our 5 months data
collection, we ended up with
9M tweets (of which 22% are retweets), posted by
2.5M distinct
users, as shown in Table 1.
As a consequence of our data collection strategy, every tweet in our dataset contains at least one
cashtag from the starting list. However, many collected tweets contain more than one cahstag, many
of which are related to companies not included in our starting list. Indeed, overall we collected
data about 30,032 companies traded across 5 dierent markets.
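The cashtag-based filtering described above can be sketched as follows. The regular expression and function names are our own illustrative assumptions, not the paper's actual implementation (real tickers have a few more symbol variants than this pattern covers):

```python
import re

# Cashtag convention: a dollar sign followed by a ticker symbol.
# Assumed pattern: 1-5 uppercase letters, optional class suffix (e.g., $BRK.A).
CASHTAG_RE = re.compile(r"\$([A-Z]{1,5}(?:\.[A-Z])?)\b")

def extract_cashtags(text):
    """Return the distinct cashtags mentioned in a tweet, in order of appearance."""
    seen, result = set(), []
    for ticker in CASHTAG_RE.findall(text):
        if ticker not in seen:
            seen.add(ticker)
            result.append(ticker)
    return result

def matches_watchlist(text, watchlist):
    """Keep a tweet iff it mentions at least one cashtag from the starting list."""
    return any(t in watchlist for t in extract_cashtags(text))

tweet = "Breakout alert! $AAPL $TSLA $XXII to the moon"
print(extract_cashtags(tweet))             # ['AAPL', 'TSLA', 'XXII']
print(matches_watchlist(tweet, {"AAPL"}))  # True
```

In practice the Streaming APIs perform the keyword matching server-side; a post-hoc check like `matches_watchlist` remains useful to discard spurious matches.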
Fig. 2. TRBC classification.
3.2 Financial data collection
We enriched our Twitter dataset by collecting financial information about each of the 30,032 companies found in our tweets. Financial information has been collected from public company data hosted on the Google Finance Web site. Among the collected financial information are the market capitalization (market cap) of each company and its industrial classification.
The capitalization is the total dollar market value of a company. For a given company i, it is computed as the share price P(s_i) times the number of outstanding shares |s_i|:

    C_i = P(s_i) × |s_i|

In our study, we take the market cap of a company into account, since it allows us to compare the financial value of that company with its social media popularity and engagement. In Table 1 we report the median capitalization of the companies for each considered market. As shown, important markets such as NYSE and NASDAQ trade, on average, stocks with higher capitalization than those traded in minor markets.
Industrial classification is expressed via the Thomson Reuters Business Classification (TRBC). TRBC is a 5-level hierarchical sector and industry classification, widely used in the financial domain for computing sector-specific indices (Figure 2). At the topmost (coarse-grained) level TRBC classifies companies into 10 economic sectors, while at the lowest (fine-grained) level companies are divided into 837 different activities. An example of industrial classification can be seen in Table 2. In our study, we compare companies belonging to the same category, across all 5 levels of TRBC.
4.1 Dataset overview
Surprisingly, the vast majority (76%) of the companies mentioned in our dataset do not belong to the starting list and are traded in OTCMKTS, as shown in Table 1. Having so many OTCMKTS companies in our dataset is already an interesting finding, considering that our data collection grounded on a list of high-capitalization (high-cap) companies. OTCMKTS is a US financial market for over-the-counter transactions, thus with far less stringent requirements than those needed for the other markets. For this reason, many small companies opt to be traded in OTCMKTS
TRBC levels
ticker | company           | activity          | industry          | industrial group               | business sector                    | economic sector
AAPL   | Apple, Inc.       | Computers         | Computer Hardware | Phones & Household Electronics | …                                  | …
GOOG   | Alphabet, Inc.    | Search Engines    | Internet Services | Software & IT Services         | Software & IT Services             | …
JNJ    | Johnson & Johnson | Pharmaceuticals   | Pharmaceuticals   | Pharmaceuticals                | Pharmaceuticals & Medical Research | …
Table 2. Examples of TRBC classifications.
instead of the more demanding markets. Thus, from a company viewpoint, our dataset is dominated by OTCMKTS. However, OTCMKTS companies play a marginal role from both a financial and a social viewpoint, having low capitalization and small numbers of tweets, the vast majority of which are retweets. In contrast, companies from NASDAQ and NYSE have high capitalization and are mentioned in many tweets, with a low percentage of retweets.
In the following, we report on some general characteristics of our dataset. Figure 3a shows the mean volume of tweets collected per hour. The largest surge of tweets occurs between 10am and 5pm (US Eastern time), which almost completely overlaps with the opening hours of the New York Stock Exchange (9:30am to 4pm). This fact further highlights the strong relation between stock microblogs and the real-world stock market. Figure 3b shows a cashtag-cloud representing the most tweeted companies in our dataset. In the figure, cashtags are color-coded so as to visually highlight companies traded in different markets. The most tweeted companies in our dataset are in line with those found in previous works [ ], with $AAPL leading the way, followed by $FB, $NFLX, and $TSLA. Notably, no OTCMKTS company appears among the top mentioned companies. Finally, as previously introduced, many stock microblogs contain more than one cashtag. Figure 3c shows the distribution of distinct cashtags per tweet, with a mean value of 2 cashtags/tweet.
4.2 Stock time series analysis
In order to uncover possible malicious behaviors related to stock microblogs, we carry out a fine-grained analysis of our data. Specifically, we build and analyze the hourly time series of each of the 6,689 stocks downloaded from the official Web site. Given a stock i, its time series is defined as:

    s_i = (s_{i,1}, s_{i,2}, ..., s_{i,N})

with s_{i,j} being the number of tweets that mentioned the stock i during the hour j. Figure 4 shows some examples of our stock time series, for 12 highly tweeted stocks. As shown in the figure, stock time series are characterized by long time spans over which tweet discussion volumes remain rather low, occasionally interspersed by large discussion spikes. To give a better characterization of this phenomenon we ran a simple anomaly detection technique on all the 6,689 time series. As typically done in many time series analysis tasks, our anomaly detection technique is designed so as to detect a peak at hour j in a time series s_i iff the tweet volume for the hour j deviates from the mean tweet volume s̄_i by at least K standard deviations:

    s_{i,j} > s̄_i + K σ_i

The parameter K determines the number of peaks found by our anomaly detection technique. In fact, a bigger K implies that a larger deviation from the mean is needed in order to detect a peak. Figure 6 shows the number of peaks detected in our time series, as a function of the parameter K. For the remainder of our analysis we set K = 10, which represents a trade-off between the height of the considered peaks and the number of peaks to analyze. This choice of K results in 1,926 peaks
(a) Mean tweet volume per hour. Peak hours overlap with the opening hours of the New York Stock Exchange (red band). (b) Cashtag-cloud of most tweeted companies. (c) Distribution of the number of cashtags per tweet.
Fig. 3. Overall statistics about our dataset.
detected in our time series. The time series depicted in Figure 4 also show mean values (cyan solid line) and the 10σ threshold (red solid line) above which peaks are detected.
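The K·σ peak rule above can be sketched in a few lines. The toy series and the choice K = 2 (used only so the short example triggers) are our own; the paper uses K = 10 on hourly series:

```python
import numpy as np

def detect_peaks(series, K=10):
    """Flag hour j as a peak iff s_j > mean(s) + K * std(s)."""
    s = np.asarray(series, dtype=float)
    threshold = s.mean() + K * s.std()
    return np.flatnonzero(s > threshold), threshold

# Toy hourly tweet-volume series: mostly quiet, one bursty hour (index 5).
volumes = [2, 3, 1, 2, 2, 300, 2, 1, 3, 2]
peaks, thr = detect_peaks(volumes, K=2)
print(list(peaks))  # [5]
```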
Next, we are interested in analyzing the tweets that generated the peaks (henceforth, peak tweets). In detail, a peak is composed of a set of tweets:

    P_{i,j} = {t^1_{i,j}, ..., t^M_{i,j}}

such that each tweet t^k_{i,j} contains the cashtag related to the stock i and has been posted during the hour j (i.e., the peak hour).
Thus, for each of the 1,926 peaks we analyze the corresponding set of tweets P_{i,j}. We find out that, on average, 60% of the tweets in P_{i,j} are retweets. In other words, the peaks identified by our anomaly detection technique are largely composed of retweets. In addition, considering that our time series have hourly granularity, those retweets also occurred within a rather limited time span, in a bursty fashion. This finding is particularly interesting also considering that the whole dataset contains only 22% retweets, versus the 60% measured for peak tweets.
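Measuring the retweet share of a peak amounts to a one-pass count. The record format with an `is_retweet` flag is a hypothetical simplification of the Twitter API payload:

```python
def retweet_fraction(tweets):
    """Fraction of a peak's tweets that are retweets (hypothetical record format)."""
    if not tweets:
        return 0.0
    return sum(1 for t in tweets if t["is_retweet"]) / len(tweets)

# A peak matching the average reported above: 6 retweets out of 10 tweets.
peak_tweets = [{"is_retweet": True}] * 6 + [{"is_retweet": False}] * 4
print(retweet_fraction(peak_tweets))  # 0.6
```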
(a) $AAPL (Apple, Inc.). (b) $FB (Facebook, Inc.). (c) $NFLX (Netflix, Inc.). (d) $TSLA (Tesla, Inc.). (e) $DIS (Walt Disney Co). (f) $WMT (Walmart, Inc.). (g) $BABA (Alibaba Ltd). (h) $GE (General Electric). (i) $NAK (Northern Dynasty Minerals Ltd). (j) $HLTH (Nobilis Health Corp (USA)). (k) $LNG (Cheniere Energy, Inc.). (l) $XXII (22nd Century Group, Inc).
Fig. 4. Examples of stock time series, for 12 highly tweeted stocks. Mean values are marked with cyan solid lines and the thresholds above which peaks are detected are marked with red solid lines.
We also analyzed peak tweets by considering the co-occurrences of stocks. From this analysis we see that peak tweets typically contain many more cashtags than the other tweets in the dataset. Indeed, the mean number of cashtags per tweet is 6 for peak tweets, versus 2 for the whole dataset. The cashtags that co-occur in peak tweets seem unrelated, and the authors of those tweets don't provide further information to explain such co-occurrences. As an example, Figure 5 shows 4 such suspicious tweets. In every tweet in the figure, a few cashtags of high-capitalization (high-cap) stocks co-occur with many cashtags of low-cap stocks.
The characteristics of peak tweets previously highlighted (that is, the percentage of retweets and the number of co-occurring cashtags) differ significantly from those measured for the whole dataset. The reason for this peculiar phenomenon could be related to some real-world news or event that motivates the surge of retweets and the co-occurrences of different cashtags. However, such differences could also be the consequence of a shady, malicious activity. Indeed, there have already been reports of large groups of bots that coordinately and simultaneously alter popularity
Fig. 5. Examples of suspicious peak tweets. In every tweet, a few cashtags of high-cap stocks (green-colored)
co-occur with many cashtags of low-cap stocks (red-colored).
Fig. 6. Number of peaks detected, as a function of K.
and engagement metrics of Twitter users and content [ ]. In particular, mass retweets have been identified as one means to artificially increase the popularity of certain content [10].
In this section we evaluate different hypotheses in order to thoroughly understand the reasons why so many seemingly-unrelated cashtags co-occur in peak tweets, and the reason for the high percentage of retweets in peaks.
5.1 Analysis of co-occurring stocks by industrial classification
Previous work has investigated the co-occurrences of stocks in weblogs and their relation to real-world events. In particular, authors of [ ] applied a clustering technique over a stock co-occurrence matrix, identifying a number of clusters containing highly correlated stocks. Results of this study highlighted that stocks that co-occur in blog articles as a consequence of real-world events belong to the same industrial sector. In other words, the results of [ ] support the assumption that stocks that legitimately appear related to one another in weblogs (or microblogs) are also related in the real world. Thus, as a consequence of common sense and previous studies, it would be suspicious for some stocks to appear related (i.e., co-occurring) in microblogs, without being related (i.e., belonging to the same industrial sector) in the real world.
To evaluate whether co-occurring stocks in peak tweets of our dataset are also related in the real world, we exploited the TRBC classification previously introduced. Specifically, for each peak tweet t we measured the extent to which the stocks mentioned in t belong to the same (or to different) TRBC class(es), for all the 5 hierarchical levels of TRBC. As a measurement for the difference in TRBC classes across stocks in a tweet, we leveraged the notion of entropy. Thus, given a tweet t with X distinct cashtags (i.e., each one associated to a different company) and the level j of TRBC with N_j classes, we first built the list of TRBC classes of the X companies mentioned in t:

    c = (c_1, c_2, ..., c_X)

Then, we computed the normalized Shannon entropy of the TRBC classes in c, for TRBC level j, as:

    H^c_norm = -(1 / H_max) Σ_k p(c_k) log p(c_k)

where p(c_k) is the empirical probability that TRBC class c_k appears in c, and H_max is the maximum theoretical entropy for TRBC level j:

    H_max = log N_j

Because of the normalization term, 0 ≤ H^c_norm ≤ 1, with H^c_norm ≈ 0 meaning companies of the same industrial sector, while H^c_norm ≈ 1 implying unrelated companies.
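A minimal sketch of this entropy computation, assuming H_max = log N_j as the normalization for level j (function and variable names are ours, not the paper's):

```python
import math
from collections import Counter

def normalized_entropy(classes, n_classes):
    """Normalized Shannon entropy of the TRBC classes co-occurring in one tweet.

    `classes` lists the TRBC class (at one hierarchy level) of each distinct
    company mentioned in the tweet; `n_classes` is the number of classes N_j
    at that level, giving the normalization H_max = log(N_j).
    """
    counts = Counter(classes)
    total = len(classes)
    # Empirical probabilities p(c_k) from the class counts.
    h = sum(-(c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(n_classes)

# Two extremes at the topmost TRBC level (10 economic sectors):
print(normalized_entropy(["Technology"] * 5, 10))                      # 0.0
print(round(normalized_entropy(["Tech", "Health", "Energy"], 10), 3))  # 0.477
```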
Intuitively, considering that the 5 TRBC levels are hierarchical, we expect H^c_norm to be higher (i.e., more heterogeneity) for fine-grained TRBC levels, and lower (i.e., less heterogeneity) for the topmost, coarse-grained TRBC level. Results of this experiment, with the TRBC level ranging from the lowest level 1 to the topmost level 5, are shown in Figure 7. For every TRBC level, a boxplot and a scatterplot show the distribution of the normalized entropy measured for each peak tweet. As expected, H^c_norm actually lowers when considering coarse-grained TRBC levels, as shown by the median value of the boxplot distributions. Nonetheless, the median H^c_norm ≈ 1 for all 5 TRBC levels, meaning that co-occurring companies in peak tweets are almost unrelated. Notably, even for fine-grained TRBC levels, there is a minority of peak tweets for which we measured H^c_norm = 0. These tweets might actually contain mentions to companies related also in the real world. Summarizing, the results of this experiment seem to suggest that, overall, co-occurrences of stocks in peak tweets are not motivated by the fact that the stocks belong to the same industrial or economic sector.
5.2 Analysis of co-occurring stocks by market capitalization
Since real-world relatedness (as expressed by industrial classification) is not a plausible explanation for the co-occurring stocks in our dataset, we now turn our attention to market capitalization. We are
Fig. 7. Normalized Shannon entropy of TRBC classes in peak tweets, for all 5 levels of TRBC. As shown, the median H^c_norm ≈ 1 for all 5 TRBC levels, meaning that co-occurring companies in peak tweets are almost unrelated.
(a) Standard deviation of the capitalization of co-occurring companies in peak tweets, and comparison with a bootstrap. The large measured standard deviation implies that high-cap companies co-occur with low-cap ones. (b) Standard deviation of the capitalization of co-occurring companies in the full dataset, and comparison with a bootstrap.
Fig. 8. Standard deviation of the capitalization of co-occurring companies, in peak tweets (a) and in the full dataset (b), and comparison with a bootstrap.
interested in evaluating whether a relation exists between the capitalizations of co-occurring stocks. For instance, legitimate peak tweets could mention multiple stocks with similar capitalization. Conversely, malicious users could try to exploit the popularity of high-cap stocks by mentioning them together with low-cap ones.
One way to evaluate the similarity (or dissimilarity) in market capitalization of co-occurring stocks is by computing statistical measures of spread, standard deviation (std.) being a straightforward one. Thus, for each peak tweet t we computed the std. of the capitalization of all companies mentioned in t. Results are shown in Figure 8a, where boxplots and scatterplots are depicted as a
Fig. 9. Kernel density estimation of social and financial importance, for stocks of the 5 considered markets.
OTCMKTS stocks have a suspiciously high social importance despite their low financial importance.
function of the number of distinct companies mentioned in tweets. Then, in order to understand whether the measured spread in capitalization is due to the intrinsic characteristics of our dataset (i.e., the underlying statistical distribution of capitalization) or to other factors, we compared the mean values of our empiric measurements with the result of a bootstrap. For bootstrapping the std. of tweets that mention X companies, we randomly sampled 10,000 groups of X companies from our dataset. Then, for each of the 10,000 random groups we computed the std. of the capitalization of the companies of the group. Finally, we averaged results over the 10,000 groups. This procedure is executed for X = 2, ..., 22, thus covering the whole extent of Figure 8a. Results in the figure highlight a large empiric std. between the capitalizations of co-occurring companies. This means that in our peak tweets, high-cap companies co-occur with low-cap ones. Moreover, the measured std. is larger than that obtained with the bootstrap. In turn, this means that the large difference in capitalization cannot be explained by the intrinsic characteristics of our dataset, but is rather the consequence of an external action.
5.3 Social and financial importance
So far, we demonstrated that the tweets responsible for generating peaks mention a large number of unrelated stocks, some of which are high-cap stocks while the others are low-cap ones. Adding to these findings, we are also interested in assessing the relation between the social and financial importance of our 30,032 stocks. The financial importance of a stock can be measured by its market cap. Social importance can be quantified as the number of times a stock is mentioned in stock microblogs. Intuitively, we expect a positive correlation between stock capitalization and mentions, meaning that high-cap stocks are mentioned more frequently than low-cap stocks. Notably, this positive relation has already been measured in a number of previous works, such as [22], and has been leveraged for predicting stock prices.
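A first, coarse quantification of social importance per market can be reproduced directly from the company and tweet counts reported in Table 1:

```python
# Table 1 counts: market -> (companies, tweets).
table1 = {
    "NASDAQ":   (3_013, 4_017_158),
    "NYSE":     (2_997, 4_410_201),
    "NYSEARCA": (726,     298_445),
    "NYSEMKT":  (340,     196_545),
    "OTCMKTS":  (22_956,  584_169),
}

# Average number of tweets collected per stock, for each market.
tweets_per_stock = {m: t / n for m, (n, t) in table1.items()}
for market, avg in sorted(tweets_per_stock.items(), key=lambda kv: -kv[1]):
    print(f"{market:8s} {avg:8.1f} tweets/stock")
```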
Our data in Table 1allow to make a rst assessment of this relation over the whole dataset. It
shows that on average we collected 25.5 tweets/stock for
stocks, versus 1,333.3 tweets/stock
ACM Transactions on the Web, Vol. 0, No. 0, Article 0. Publication date: 2018.
Fig. 10. Number of peaks detected, as a function of K, for OTCMKTS cashtags.
and 1,471.5 tweets/stock for the two high-cap markets, NASDAQ and NYSE. Results for the remaining markets fall in between. Minding that OTCMKTS stocks feature the lowest capitalization while NASDAQ and NYSE stocks have the highest, the positive relation between financial and social importance seems confirmed,
when considering all tweets of our dataset. However, we are also interested in assessing whether
this relation holds when only considering peak tweets. We performed this measurement as follows.
Given a stock s and a peak p, we counted the number of times that s is mentioned in the peak tweets of p. We repeated this measurement for every peak p, and we computed the median value of these measurements, which represents the social importance of s in all peak tweets. Then, for every stock, we plotted its measurement of social importance versus that of financial importance, and we visually grouped stocks by their market. To avoid overplotting, we performed a bivariate (i.e., 2D) kernel density estimation, whose results are shown in Figure 9. For the sake of clarity, we split the social–vs–financial space into 4 sectors. Sector A defines a region of space with stocks having both a high social and financial importance. Stocks in Sector B are characterized by high financial importance, but low social importance. Stocks in Sector C have both low social and financial importance, while stocks in Sector D have high social importance despite low financial importance.
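The per-stock measurement and the sector assignment can be sketched as below. This is a simplified illustration: each peak is represented as the list of cashtags in its tweets, and the sector thresholds are arbitrary placeholders (the paper partitions the space visually, not with these exact cut-offs).

```python
import statistics

def social_importance(stock, peaks):
    """Median number of times `stock` is mentioned across peaks;
    each peak is modeled as the list of cashtags in its peak tweets."""
    counts = [peak.count(stock) for peak in peaks]
    return statistics.median(counts)

def sector(social, financial, social_thr, fin_thr):
    """Quadrant in the social-vs-financial space (A/B/C/D as in Figure 9)."""
    if financial >= fin_thr:
        return "A" if social >= social_thr else "B"
    return "D" if social >= social_thr else "C"

# Hypothetical peaks: "$XYZ" is a low-cap stock riding a high-cap one.
peaks = [["$AAPL", "$XYZ", "$XYZ"], ["$XYZ"], ["$AAPL", "$XYZ"]]
s = social_importance("$XYZ", peaks)                     # median of [2, 1, 1]
q = sector(s, financial=3e5, social_thr=1, fin_thr=1e9)  # high social, low cap
```

In this toy example the low-cap `$XYZ` ends up in Sector D, the suspicious region where OTCMKTS stocks concentrate in Figure 9.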
By comparing the stock densities of different markets in Figure 9, we see that OTCMKTS stocks almost completely lie in Sector D. All other markets have their stock densities mainly lying in Sector B and Sector A. In other words, OTCMKTS stocks have a suspiciously high social importance (i.e., they are mentioned in many tweets and across many peaks), despite their low financial importance. Results for all other markets are more intuitive, with stocks of the high-cap markets achieving the best combination of social and financial importance. Summarizing, we measured a positive relation between social and financial importance when considering all stock microblogs shared during the 5 months of our study. However, when focusing our analysis on peaks in stock microblogs, we observed a suspicious behavior related to OTCMKTS stocks.
(a) $UPZS (Unique Pizza & Subs Corp.). (b) $KNSC (Kenergy Scientific, Inc.). (c) $INNV (Innovus Pharmaceuticals, Inc.). (d) $NNSR (NanoSensors, Inc.).
Fig. 11. Examples of stock time series for 4 OTCMKTS tweeted stocks. Mean values are marked with cyan solid lines and thresholds above which peaks are detected are marked with red solid lines.
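The detector suggested by Figures 10 and 11 (a mean line plus a threshold some K standard deviations above it, with the peak count decreasing as K grows) can be sketched as follows. This is our reading of the figures, not the authors' published code; the rule and the sample series are assumptions.

```python
import statistics

def detect_peaks(series, K):
    """Indices where the tweet-volume series exceeds mean + K * std:
    the mean is the cyan line of Figure 11, the threshold the red one."""
    mu = statistics.mean(series)
    sigma = statistics.pstdev(series)
    threshold = mu + K * sigma
    return [i for i, v in enumerate(series) if v > threshold]

# Hypothetical tweets/day for one cashtag, with two bursts of activity.
volume = [3, 2, 4, 3, 2, 40, 3, 2, 35, 4]
peaks = detect_peaks(volume, K=2)
```

Consistently with Figure 10, raising K makes the threshold stricter, so the number of detected peaks can only decrease as K grows.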
Fig. 12. Examples of tweets with just one OTCMKTS cashtag.
Fig. 13. Examples of suspicious users classified as bots. The many characteristics shared between all these
users (e.g., name, profile picture, social links) support the hypothesis that they are part of a larger botnet.
Fig. 14. Examples of tweets from suspicious users.
In previous sections we identified a wide array of suspicious phenomena related to stock microblogs. In particular, peaks in microblog conversations about high-cap stocks are filled with mentions of low-cap (mainly OTCMKTS) stocks. Such mentions cannot be explained by real-world stock relatedness. Moreover, the peaks in microblog conversations are largely caused by mass retweets. Despite not having been studied before, this scenario resembles those recently discovered when investigating the activities of bots tampering with social and political discussions [2, 3, 13, 24]. Unfortunately, systems for automatically detecting spam in stock microblogs are yet to be developed. However, recent scientific efforts led to the development of several general-purpose bot and spam detection systems. Thus, in this section we employ a state-of-the-art bot and spam detection system, specifically developed for spotting malicious group activities, to classify suspicious users [9, 11]. The goal of this experiment is to assess whether users that shared/retweeted the suspicious peak tweets we previously identified are classified as bots. In turn, this would bring definitive evidence of bot activities in the stock microblogs that we analyzed. The system in [11] performs bot detection in 2 steps. Firstly, it encodes the online behavior of a user into a string of characters that represents the digital DNA of the user. Then, multiple digital DNA sequences, one for each user of the group under investigation, are compared with one another by means of string mining and bioinformatics algorithms. The system classifies as bots those users that have suspiciously high similarities among their digital DNA sequences. Notably, the system in [11] proved capable of accurately detecting even "evolved" bots (F1 = 0.97), such as those described in [14].
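The two-step digital-DNA idea can be illustrated with a deliberately simplified sketch: a three-letter alphabet for post types and a longest-common-substring similarity. The real system uses richer alphabets and bioinformatics-grade sequence mining, so treat this only as an intuition aid.

```python
def digital_dna(timeline):
    """Encode a user's timeline as a string: one character per post
    ('T' tweet, 'R' retweet, 'P' reply -- a simplified alphabet)."""
    return "".join(post_type[0].upper() for post_type in timeline)

def longest_common_substring(a, b):
    """Length of the longest substring shared by two DNA sequences
    (dynamic programming, O(len(a) * len(b)) time)."""
    best = 0
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0] * (len(b) + 1)
        for j, cb in enumerate(b, 1):
            if ca == cb:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

# Two hypothetical accounts with near-identical behavior.
bot1 = digital_dna(["retweet"] * 6 + ["tweet"])  # 'RRRRRRT'
bot2 = digital_dna(["retweet"] * 6 + ["reply"])  # 'RRRRRRP'
sim = longest_common_substring(bot1, bot2) / max(len(bot1), len(bot2))
```

A group whose members share long behavioral substrings (here `sim` is close to 1) is flagged as coordinated, whereas genuine users exhibit far more heterogeneous sequences.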
Because of the computationally intensive analyses performed by [11], we constrained this experiment to the 100 largest peaks (i.e., those generated by the greatest number of tweets) of our dataset. Starting from those top-100 peaks, we then analyzed the 25,988 distinct users that shared or retweeted at least one peak tweet. The behavioral information needed by the detection system to perform user classification has been collected by crawling the Twitter timelines of these 25,988 users. Notably, the bot detection system classified as many as 71% (18,509) of the analyzed users as bots. Figure 13 shows 6 examples of users classified as bots. A manual analysis of a subset of bots allowed us to identify characteristics shared between all the users (e.g., similar name, join date, profile picture, etc.), supporting the hypothesis that they are part of a larger botnet. Users classified as bots also feature very high retweet rates (ratio of retweets over all posted tweets), thus explaining the large number of retweets in our peaks and among OTCMKTS stock microblogs.
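The retweet rate used here is straightforward to compute from crawled timelines; a minimal sketch, assuming Twitter-API-style tweet objects where a retweet carries a `retweeted_status` payload, is:

```python
def retweet_rate(timeline):
    """Ratio of retweets over all posted tweets in a user's timeline.
    A tweet object is assumed to carry a non-None 'retweeted_status'
    field when it is a retweet (as in Twitter API v1.1 payloads)."""
    if not timeline:
        return 0.0
    retweets = sum(1 for t in timeline if t.get("retweeted_status") is not None)
    return retweets / len(timeline)

# Hypothetical timelines: a bot-like amplifier vs. an ordinary user.
bot_like = [{"retweeted_status": {}}] * 9 + [{"retweeted_status": None}]
human    = [{"retweeted_status": None}] * 8 + [{"retweeted_status": {}}] * 2
```

Accounts whose rate approaches 1, like `bot_like` here, post almost nothing original: exactly the amplification profile that explains the mass retweets behind the detected peaks.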
We obtained these results by analyzing only the 100 largest detected peaks, therefore analyses of minor peaks might yield different results. Nonetheless, the overwhelming ratio of bots that we discovered among large peaks discussing popular stocks raises serious concerns over the reliability of stock microblogs.
Results of our extensive investigation highlighted the presence of spam and bot activity in stock microblogs. For the first time, we described an advertising practice where many financially unimportant (low-cap) stocks are massively mentioned in microblogs together with a few financially important (high-cap) stocks. Analyses of suspicious users suggest that the advertising practice is carried out by large groups of coordinated social bots. Considering the already demonstrated relation between social and financial importance [22], a possible outcome expected by the perpetrators of this advertising practice is an increase in the financial importance of the low-cap stocks, obtained by exploiting the popularity of high-cap ones.
The potential negative consequences of this new form of financial spam are manifold. On the one hand, unaware investors could be lured into believing that the social importance of promoted stocks has a basis in reality. On the other hand, the multitude of automatic trading systems that feed on social information could also be tricked into buying low-value stocks. Market collapses such as the Flash Crash, or disastrous investments such as that in Cynk Technology, could occur again in the future, with dire consequences. For this reason, a favorable research avenue for the future could involve quantifying the impact of social bots and microblog financial spam on stock price fluctuations, similarly to what has already been done for financial e-mail spam.
To the best of our knowledge, this is the first exploratory study on the presence of spam and bot activity in stock microblogs. As such, future works related to the characterization and detection of financial spam in microblogs are much desirable. Indeed, no automatic system for the detection of financial spam in microblogs has been developed to date. To overcome this limitation, in our analyses we employed a general-purpose bot detection system. However, such an approach hardly scales to the massive number of users, both legitimate and automated, involved in financial discussions on microblogs. Hence, another promising direction of research involves the development of tools and techniques for promptly detecting promoted stocks, thus avoiding the need for user classification.
Finally, we believe it is useful – and worrying at the same time – to demonstrate the presence of bot activity in stock microblogs. Finance thus adds to the growing list of domains recently tampered with by social bots (joining the political, social, and commercial domains, to name but a few).
Motivated by the widespread presence of social bots, we carried out the first large-scale, systematic analysis of the presence and impact of spam and bot activity in stock microblogs. By cross-checking 9M stock microblogs from Twitter with financial information from Google Finance, we uncovered a malicious practice aimed at promoting low-value stocks by exploiting the popularity of high-value ones. In detail, many stocks with low market capitalization, mainly traded in OTCMKTS, are mentioned in microblogs together with a few high-capitalization stocks traded in the main markets, such as NASDAQ and NYSE. We showed that such co-occurring stocks are not related by economic or industrial sector. Moreover, the large discussion spikes about low-value stocks are due to mass, synchronized retweets. Finally, an analysis of the retweeting users classified 71% of them as bots.
Given the severe consequences that this new form of financial spam could have on unaware investors as well as on automatic trading systems, our results call for the prompt adoption of spam and bot detection techniques in all applications and systems that exploit stock microblogs.
This research is supported in part by the EU H2020 Program under the scheme Research Infrastructures, grant agreement #654024 SoBigData: Social Mining & Big Data Ecosystem.
[1] Omar Alonso and Kartikay Khandelwal. 2014. Kondenzer: Exploration and visualization of archived social media. In Proceedings of the 30th International Conference on Data Engineering (ICDE'14). IEEE, 1202–1205.
[2] Marco T. Bastos and Dan Mercea. 2017. The Brexit Botnet and User-Generated Hyperpartisan News. Social Science Computer Review (2017), 0894439317734157.
[3] Alessandro Bessi and Emilio Ferrara. 2016. Social bots distort the 2016 US Presidential election online discussion. First Monday 21, 11 (2016).
[4] Alex Beutel, Wanhong Xu, Venkatesan Guruswami, Christopher Palow, and Christos Faloutsos. 2013. CopyCatch: Stopping group attacks by spotting lockstep behavior in social networks. In Proceedings of the 22nd International Conference on World Wide Web (WWW'13). ACM, 119–130.
[5] Johan Bollen, Huina Mao, and Alberto Pepe. 2011. Modeling public mood and emotion: Twitter sentiment and socio-economic phenomena. In Proceedings of the 5th International Conference on Web and Social Media (ICWSM'11). AAAI, 450–453.
[6] Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of Computational Science 2, 1 (2011), 1–8.
[7] Lorenzo Cazzoli, Rajesh Sharma, Michele Treccani, and Fabrizio Lillo. 2016. A Large Scale Study to Understand the Relation between Twitter and Financial Market. In Proceedings of the 3rd European Network Intelligence Conference (ENIC'16). IEEE, 98–105.
[8] Keith Cortis, André Freitas, Tobias Daudert, Manuela Huerlimann, Manel Zarrouk, Siegfried Handschuh, and Brian Davis. 2017. SemEval-2017 task 5: Fine-grained sentiment analysis on financial microblogs and news. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval'17). 519–535.
[9] Stefano Cresci, Roberto Di Pietro, Marinella Petrocchi, Angelo Spognardi, and Maurizio Tesconi. 2016. DNA-inspired online behavioral modeling and its application to spambot detection. IEEE Intelligent Systems 31, 5 (2016), 58–64.
[10] Stefano Cresci, Roberto Di Pietro, Marinella Petrocchi, Angelo Spognardi, and Maurizio Tesconi. 2017. The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race. In Proceedings of the 26th International Conference on World Wide Web Companion (WWW'17 Companion). ACM, 963–972.
[11] Stefano Cresci, Roberto Di Pietro, Marinella Petrocchi, Angelo Spognardi, and Maurizio Tesconi. 2017. Social Fingerprinting: Detection of spambot groups through DNA-inspired behavioral modeling. IEEE Transactions on Dependable and Secure Computing (2017).
[12] Ronen Feldman. 2013. Techniques and applications for sentiment analysis. Commun. ACM 56, 4 (2013), 82–89.
[13] Emilio Ferrara. 2017. Disinformation and social bot operations in the run up to the 2017 French presidential election. First Monday 22, 8 (2017).
[14] Emilio Ferrara, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini. 2016. The rise of social bots. Commun. ACM 59, 7 (2016), 96–104.
[15] Emilio Ferrara, Onur Varol, Filippo Menczer, and Alessandro Flammini. 2016. Detection of Promoted Social Media Campaigns. In Proceedings of the 10th International Conference on Web and Social Media (ICWSM'16). AAAI, 563–566.
[16] Zafar Gilani, Reza Farahbakhsh, and Jon Crowcroft. 2017. Do Bots impact Twitter activity? In Proceedings of the 26th International Conference on World Wide Web Companion (WWW'17 Companion). ACM, 781–782.
[17] Eric Gilbert and Karrie Karahalios. 2010. Widespread Worry and the Stock Market. In Proceedings of the 4th International Conference on Web and Social Media (ICWSM'10). AAAI, 59–65.
[18] Martin Hentschel and Omar Alonso. 2014. Follow the money: A study of cashtags on Twitter. First Monday 19, 8 (2014).
[19] Lu Hong and Scott E. Page. 2004. Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences of the United States of America 101, 46 (2004), 16385–16389.
[20] Tim Hwang, Ian Pearce, and Max Nanis. 2012. Socialbots: Voices from the fronts. Interactions 19, 2 (2012), 38–45.
[21] Milad Kharratzadeh and Mark Coates. 2012. Weblog analysis for predicting correlations in stock price evolutions. In Proceedings of the 6th International Conference on Web and Social Media (ICWSM'12). AAAI.
[22] Yuexin Mao, Wei Wei, Bing Wang, and Benyuan Liu. 2012. Correlating S&P 500 stocks with Twitter data. In Proceedings of the 1st International Workshop on Hot Topics on Interdisciplinary Social Networks Research (KDD'12 Workshops). ACM.
[23] Gabriele Ranco, Darko Aleksovski, Guido Caldarelli, Miha Grčar, and Igor Mozetič. 2015. The effects of Twitter sentiment on stock price returns. PLoS ONE 10, 9 (2015), e0138441.
[24] Jacob Ratkiewicz, Michael Conover, Mark R. Meiss, Bruno Gonçalves, Alessandro Flammini, and Filippo Menczer. 2011. Detecting and tracking political abuse in social media. In Proceedings of the 5th International Conference on Web and Social Media (ICWSM'11). AAAI, 297–304.
[25] Eduardo J. Ruiz, Vagelis Hristidis, Carlos Castillo, Aristides Gionis, and Alejandro Jaimes. 2012. Correlating financial time series with micro-blogging activity. In Proceedings of the 5th International Conference on Web Search and Data Mining (WSDM'12). ACM, 513–522.
[26] Harald Schoen, Daniel Gayo-Avello, Panagiotis Takis Metaxas, Eni Mustafaraj, Markus Strohmaier, and Peter Gloor. 2013. The power of prediction with social media. Internet Research 23, 5 (2013), 528–543.
[27] Chengcheng Shao, Giovanni Luca Ciampaglia, Alessandro Flammini, and Filippo Menczer. 2016. Hoaxy: A platform for tracking online misinformation. In Proceedings of the 25th International Conference on World Wide Web Companion (WWW'16 Companion). ACM, 745–750.
[28] Timm Oliver Sprenger. 2011. Leveraging Crowd Wisdom in a Stock Microblogging Forum. In Proceedings of the 5th International Conference on Web and Social Media (ICWSM'11). AAAI.
[29] Ilya Zheludev, Robert Smith, and Tomaso Aste. 2014. When can social media lead financial markets? Scientific Reports 4 (2014), 4213.
... 1. We study, investigate and characterize inorganic financial campaigns [59,60] (more details in Section 4.1), in particular: ...
... One of our contribution in this area represents the first evidence of the presence of disinformation in this domain, raising serious concerns over the reliability of such information. Indeed, we first provide descriptive analyses [59,60,164] and move to predictive ones [165] in order to analyze online disinformation [127,165]. ...
... Following this recent line of thought, in our work, we aim to move beyond a binary classification of content disinformation, and we tackle the more challenging task of estimating the amount of inorganic content within online financial discussions [59,60,165]. To do so, we leverage lessons learned from [173] by including the same kinds of features. ...
... Malicious bots are the most researched category in social media, in which new types are constantly being discovered [7]. Malicious SMBs are typically controlled by a botmaster, who is the human in command of the bots and oversees their assault and actions. ...
... A variety of malicious bots are spambots that distribute malicious links and illegal messages [8]. Cashtag piggybacking bots promote low-value which shares by obtaining the benefit of the popularity of elevated items [7], whereas Astroturfing bot creates the appearance of significant assistance for a politician or point of view [9]. The Sybils' pseudonymous are examples of user accounts [10]. ...
Full-text available
Automated or semiautomated computer programs that imitate humans and/or human behavior in online social networks are known as social bots. Users can be attacked by social bots to achieve several hidden aims, such as spreading information or influencing targets. While researchers develop a variety of methods to detect social media bot accounts, attackers adapt their bots to avoid detection. This field necessitates ongoing growth, particularly in the areas of feature selection and extraction. The study's purpose is to provide an overview of bot attacks on Twitter, shedding light on issues in feature extraction and selection that have a significant impact on the accuracy of bot detection algorithms, and highlighting the weaknesses in training time and dimensionality reduction. To the best of our knowledge, this study is the first systematic literature review based on a preset search-strategy that encompasses literature published between 2018 and 2021 which are concerned with Twitter features (attributes). The key findings of this research are threefold. First, the paper provides an improved taxonomy of feature extraction and selection approaches. Second, it includes a comprehensive overview of approaches for detecting bots in the Twitter platform, particularly machine learning techniques. The percentage was calculated using the proposed taxonomy, with metadata, tweet text, and merging (meta and tweet text) accounting for 37%, 31%, and 32%, respectively. Third, some gaps are also highlighted for further research. The first is that public datasets are not precise or suitable in size. Second, the use of integrated systems and real-time detection is uncommon. Third, detecting each bots category identified separately is needed, rather than detecting all categories of bots using one generic model and the same features' values. 
Finally, extracting influential features that assist machine learning algorithms in detecting Twitter bots with high accuracy is critical, especially if the type of bot is pre-determined.
... Bots also actively participate in public health debates [12] including those about vaccines [13,14], the COVID-19 pandemic [15,16,17,18], and cannabis [19]. Research has also reported on the presence of social bots in discussions about climate change [20,21,22], cryptocurrency [23], and the stock market [24,25]. ...
... Malicious social bots demonstrate various behavioral patterns in their actions. They may simply generate a large volume of posts to amplify certain narratives [21,26] or to manipulate the price of stocks [24,25] and cryptocurrencies [23]. They can also disseminate low-credibility information strategically by getting involved in the early stage of the spreading process and targeting popular users through mentions and replies [2]. ...
Full-text available
Social bots have become an important component of online social media. Deceptive bots, in particular, can manipulate online discussions of important issues ranging from elections to public health, threatening the constructive exchange of information. Their ubiquity makes them an interesting research subject and requires researchers to properly handle them when conducting studies using social media data. Therefore it is important for researchers to gain access to bot detection tools that are reliable and easy to use. This paper aims to provide an introductory tutorial of Botometer, a public tool for bot detection on Twitter, for readers who are new to this topic and may not be familiar with programming and machine learning. We introduce how Botometer works, the different ways users can access it, and present a case study as a demonstration. Readers can use the case study code as a template for their own research. We also discuss recommended practice for using Botometer.
... The characteristics of these identified accounts may then be systematically evaluated across relevant case studies of interest to inform counter-strategies for enhanced societal resilience to information operations writ large [12,[26][27][28]. This supervised paradigm of scholarship has uncovered valuable insights about the impacts of information operations, spanning a range of domains like politics [5,29], finance [30,31], and public health [14,32]; as well as in diverse national and international contexts around the world [12,[33][34][35][36]. Across this vast literature, important findings include the quantification of links between the activity of social bots and the spread of low-credibility information or fake news [37]; how automated accounts increase human exposure to more inflammatory content [4]; how bots can spread hate most effectively in groups that are denser and more isolated from mainstream dialogue [7]; and how they may even attenuate the influence of more traditional opinion leaders in online conversations [15]. ...
Full-text available
This paper presents a new computational framework for mapping state-sponsored information operations into distinct strategic units. Utilizing a novel method called multi-view modularity clustering (MVMC), we identify groups of accounts engaged in distinct narrative and network information maneuvers. We then present an analytical pipeline to holistically determine their coordinated and complementary roles within the broader digital campaign. Applying our proposed methodology to disclosed Chinese state-sponsored accounts on Twitter, we discover an overarching operation to protect and manage Chinese international reputation by attacking individual adversaries (Guo Wengui) and collective threats (Hong Kong protestors), while also projecting national strength during global crisis (the COVID-19 pandemic). Psycholinguistic tools quantify variation in narrative maneuvers employing hateful and negative language against critics in contrast to communitarian and positive language to bolster national solidarity. Network analytics further distinguish how groups of accounts used network maneuvers to act as balanced operators, organized masqueraders, and egalitarian echo-chambers. Collectively, this work breaks methodological ground on the interdisciplinary application of unsupervised and multi-view methods for characterizing not just digital campaigns in particular, but also coordinated activity more generally. Moreover, our findings contribute substantive empirical insights around how state-sponsored information operations combine narrative and network maneuvers to achieve interlocking strategic objectives. This bears both theoretical and policy implications for platform regulation and understanding the evolving geopolitical significance of cyberspace.
... Yang et al., 2019a) (a compendium of political bots), midterm-2018 (Yang et al., 2019b) (a hand-labeled dataset of users and bots during the 2018 American midterm elections), botwiki (Yang et al., 2019b) (a collection of self identified Twitter bots), verified-2019 (Yang et al., 2019b) (a collection of verified Twitter users), Cresci 2019-2018 ((Mazza et al., 2019),(Cresci et al., 2018)) (datasets of manually annotated bots), and finally Twibot-20(Feng et al., 2021) (a comprehensive hand labeled dataset of Twitter bots). The statistics of each dataset is shown in ...
Full-text available
With the significant increase in users on social media platforms, a new means of political campaigning has appeared. Twitter and Facebook are now notable campaigning tools during elections. Indeed, the candidates and their parties now take to the internet to interact and spread their ideas. In this paper, we aim to identify political communities formed on Twitter during the 2022 French presidential election and analyze each respective community. We create a large-scale Twitter dataset containing 1.2 million users and 62.6 million tweets that mention keywords relevant to the election. We perform community detection on a retweet graph of users and propose an in-depth analysis of the stance of each community. Finally, we attempt to detect offensive tweets and automatic bots, comparing across communities in order to gain insight into each candidate's supporter demographics and online campaign strategy.
... This is mainly because the platform provides information about the individual accounts (i.e., sources of news), the content they create (i.e., tweets) and the content the reproduce or share (i.e., by retweeting). The literature behind disinformation campaigns on Twitter is long [4,[13][14][15] and keeps growing when new election campaigns occur around the globe or when crypto-currency and stock investors want to influence the market through social media [16][17][18][19]. ...
Full-text available
The combat against fake news and disinformation is an ongoing, multi-faceted task for researchers in social media and social networks domains, which comprises not only the detection of false facts in published content but also the detection of accountability mechanisms that keep a record of the trustfulness of sources that generate news and, lately, of the networks that deliberately distribute fake information. In the direction of detecting and handling organized disinformation networks, major social media and social networking sites are currently developing strategies and mechanisms to block such attempts. The role of machine learning techniques, especially neural networks, is crucial in this task. The current work focuses on the popular and promising graph representation techniques and performs a survey of the works that employ Graph Convolutional Networks (GCNs) to the task of detecting fake news, fake accounts and rumors that spread in social networks. It also highlights the available benchmark datasets employed in current research for validating the performance of the proposed methods. This work is a comprehensive survey of the use of GCNs in the combat against fake news and aims to be an ideal starting point for future researchers in the field.
The detection of organised disinformation campaigns that spread fake news, by first camouflaging them as real ones is crucial in the battle against misinformation and disinformation in social media. This article presents a method for classifying the diffusion graphs of news formed in social media, by taking into account the profiles of the users that participate in the graph, the profiles of their social relations and the way the news spread, ignoring the actual text content of the news or the messages that spread it. This increases the robustness of the method and widens its applicability in different contexts. The results of this study show that the proposed method outperforms methods that rely on textual information only and provide a model that can be employed for detecting similar disinformation campaigns on different context in the same social medium.
Twitter bot detection has become an increasingly important task to combat misinformation, facilitate social media moderation, and preserve the integrity of the online discourse. State-of-the-art bot detection methods generally leverage the graph structure of the Twitter network, and they exhibit promising performance when confronting novel Twitter bots that traditional methods fail to detect. However, very few of the existing Twitter bot detection datasets are graph-based, and even these few graph-based datasets suffer from limited dataset scale, incomplete graph structure, as well as low annotation quality. In fact, the lack of a large-scale graph-based Twitter bot detection benchmark that addresses these issues has seriously hindered the development and evaluation of novel graph-based bot detection approaches. In this paper, we propose TwiBot-22, a comprehensive graph-based Twitter bot detection benchmark that presents the largest dataset to date, provides diversified entities and relations on the Twitter network, and has considerably better annotation quality than existing datasets. In addition, we re-implement 35 representative Twitter bot detection baselines and evaluate them on 9 datasets, including TwiBot-22, to promote a fair comparison of model performance and a holistic understanding of research progress. To facilitate further research, we consolidate all implemented codes and datasets into the TwiBot-22 evaluation framework, where researchers could consistently evaluate new models and datasets. The TwiBot-22 Twitter bot detection benchmark and evaluation framework are publicly available at
Filtering fake news from social network posts and detecting social network users who are responsible for generating and propagating these rumors have become two major issues with the increased popularity of social networking platforms. As any user can post anything on social media and that post can instantly propagate to all over the world, it is important to recognize if the post is rumor or not. Twitter is one of the most popular social networking platforms used for news broadcasting mostly as tweets and retweets. Hence, validating tweets and users based on their posts and behavior on Twitter has become a social, political and international issue. In this paper, we proposed a method to classify rumor and non-rumor tweets by applying a novel tweet and user feature ranking approach with Decision Tree and Logistic Regression that were applied on both tweet and user features extracted from a benchmark rumor dataset ‘PHEME’. The effect of the ranking model was then shown by classifying the dataset with the ranked features and comparing them with the basic classifications with various combination of features. Both supervised classification algorithms (namely, Support Vector Machine, Naïve Bayes, Random Forest and Logistic Regression) and deep learning algorithms (namely, Convolutional Neural Network and Long Short-Term Memory) were used for rumor detection. The classification accuracy showed that the feature ranking classification results were comparable to the original classification performances. The ranking models were also used to list the topmost tweets and users with different conditions and the results showed that even if the features were ranked differently by LR and RF, the topmost results for tweets and users for both rumors and non-rumors were the same.
Bot detection is crucial in a world where Online Social Networks (OSNs) play a pivotal role in our lives as public communication channels. The task becomes highly relevant in crises like the Covid-19 pandemic, when there is a growing risk of proliferation of automated accounts designed to produce misinformation. To address this issue, we first compare supervised bot detection models built with data selection. The techniques used to develop the bot detection models rely on features such as the tweets’ metadata or the accounts’ digital fingerprint, and proved effective in detecting bots with different behaviors. Social fingerprint-based methods were found to be effective against bots that behave in a coordinated manner. Furthermore, all these approaches produced excellent results compared to Botometer v3. Second, we present and discuss a case study related to the Covid-19 pandemic that analyses the differences in the discourse between bots and humans on Twitter, a platform used worldwide to express opinions and engage in dialogue in a public arena. While bots and humans generally express themselves alike, content and sentiment analysis of the tweets reveals some dissimilarities, especially in tweets concerning President Trump. When the discourse switches to pandemic management by Trump, sentiment-related values display a drastic difference, showing that tweets generated by bots have a predominantly negative attitude. However, according to our findings, while automated accounts are numerous and active in discussing controversial issues, they usually do not seem to increase human users’ exposure to negative and inflammatory content.
SoBigData is a Research Infrastructure (RI) aiming to provide an integrated ecosystem for ethic-sensitive scientific discoveries and advanced applications of social data mining. A key milestone of the project focuses on data, methods and results sharing, in order to ensure the reproducibility, review and re-use of scientific works. For this reason, the Digital Library paradigm is implemented within the RI, providing users with virtual environments where datasets, methods and results can be collected, maintained, managed and preserved, granting full documentation, access and the possibility to re-use.
Objectives: To understand how Twitter bots and trolls ("bots") promote online health content. Methods: We compared bots' to average users' rates of vaccine-relevant messages, which we collected online from July 2014 through September 2017. We estimated the likelihood that users were bots, comparing proportions of polarized and antivaccine tweets across user types. We conducted a content analysis of a Twitter hashtag associated with Russian troll activity. Results: Compared with average users, Russian trolls (χ2(1) = 102.0; P < .001), sophisticated bots (χ2(1) = 28.6; P < .001), and "content polluters" (χ2(1) = 7.0; P < .001) tweeted about vaccination at higher rates. Whereas content polluters posted more antivaccine content (χ2(1) = 11.18; P < .001), Russian trolls amplified both sides. Unidentifiable accounts were more polarized (χ2(1) = 12.1; P < .001) and antivaccine (χ2(1) = 35.9; P < .001). Analysis of the Russian troll hashtag showed that its messages were more political and divisive. Conclusions: Whereas bots that spread malware and unsolicited content disseminated antivaccine messages, Russian trolls promoted discord. Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination. Public Health Implications. Directly confronting vaccine skeptics enables bots to legitimize the vaccine debate. More research is needed to determine how best to combat bot-driven content. (Am J Public Health. Published online ahead of print August 23, 2018: e1-e7. doi:10.2105/AJPH.2018.304567).
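The chi-square comparisons reported above test whether two groups tweet about a topic at different rates. A minimal sketch of a Pearson chi-square statistic (1 df) for comparing two proportions is below; the counts are hypothetical, not the study's data:

```python
# Hypothetical counts (illustrative only): vaccine-related tweets out of
# total tweets, for bot accounts vs. average users.
bot_vaccine, bot_total = 120, 1000
user_vaccine, user_total = 60, 1000

def chi2_2x2(a, n1, b, n2):
    """Pearson chi-square statistic (1 df) for comparing two proportions,
    computed from the 2x2 table [[a, n1-a], [b, n2-b]]."""
    table = [[a, n1 - a], [b, n2 - b]]
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

stat = chi2_2x2(bot_vaccine, bot_total, user_vaccine, user_total)
print(f"chi2(1) = {stat:.1f}")  # large values indicate the two rates differ
```

In practice one would compare the statistic to the chi-square distribution with 1 degree of freedom (e.g. via `scipy.stats.chi2_contingency`) to obtain the P values the abstract reports.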
The recent proliferation of handheld devices equipped with a large number of sensors and communication capabilities, together with the ubiquitous presence of communication facilities and infrastructures and the mass diffusion and availability of social networking applications, has created a socio-technical convergence capable of sparking a revolution in the sensing world. One of the most promising and fascinating consequences of this new socio-technical convergence is the possibility to significantly extend, complement, and possibly substitute conventional sensing by enabling the collection of data through networks of humans. Indeed, these unprecedented sensing and sharing opportunities have enabled situations where individuals not only play the role of sensor operators, but also act as data sources themselves. This spontaneous behavior has driven a new thriving – yet challenging – research field, called social sensing, investigating how human-sourced data can be gathered and used to gain situational awareness in a number of socially relevant domains. However, the social sensing revolution does not come without costs. Now that each of us can send messages for the entire world to read, or upload pictures for the entire world to see, the amount of real-time information out there far exceeds our cognitive capacity to consume it. Today, we have access to a plethora of blogs, discussion forums, and online social network accounts that provide orders-of-magnitude increases in the number of news sources. We are thus witnessing the development of a widening gap between information production and our consumption capacity. Moreover, the reliability of such sources is not guaranteed. Indeed, it has already been demonstrated that observations produced by social sensors might be affected by a number of issues that undermine their usefulness and applicability.
Among such issues are the widespread presence of fictitious, malicious, and deceptive social sensors; and the spreading of deceptive content, such as fake news. As a consequence, in order to fully harness this unfolding sensing revolution, we are in dire need of novel algorithms, techniques, and tools that are capable of turning this deluge of messy data into concise, meaningful, and reliable information. The possibility to fruitfully exploit this citizen-sensed stream of big data for novel applications – and ultimately for improving our societies and our everyday life – represents a tantalizing opportunity, counterbalanced by the many challenges related to the assessment of the reliability of such information, as well as its aggregation, summarization, and filtering. The goal of this thesis is to investigate the two sides of the “social sensing” coin. Thus, the main contributions of this doctoral work are twofold: (i) investigate the problem of credibility and reliability of social sensors; and (ii) explore the opportunities opened up by social sensing for a practically relevant scenario, such as that of emergency management.
We envisage a revolutionary change in the approach to spambot detection: instead of taking countermeasures only after having collected evidence of new spambot mischief, in the near future techniques will be able to anticipate the ever-evolving spammers.
In this article, we present results on the identification and behavioral analysis of social bots in a sample of 542,584 Tweets, collected before and after Japan's 2014 general election. Typical forms of bot activity include massive retweeting and repeated posting of (nearly) the same message, sometimes used in combination. We focus on the second method and present (1) a case study on several patterns of bot activity, (2) methodological considerations on the automatic identification of such patterns and the prerequisite near-duplicate detection, and (3) qualitative insights into the purposes behind the usage of social/political bots. We argue that it was in the latency of the semi-public sphere of social media, and not in the visible or manifest public sphere (official campaign platform, mass media), where Shinzō Abe's hidden nationalist agenda interlocked and overlapped with the one propagated by organizations such as Nippon Kaigi and Internet right-wingers (netto uyo) during the election campaign, the latter potentially forming an enormous online support army for Abe's agenda.
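The near-duplicate detection that the study names as a prerequisite is commonly approximated by comparing word shingles with the Jaccard similarity. A minimal sketch (the example tweets and the 0.5 threshold are hypothetical choices, not the paper's method):

```python
def shingles(text, k=3):
    """Set of k-word shingles, lowercased, for near-duplicate comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical tweets: bots repeatedly post (nearly) the same message.
t1 = "Vote for candidate X he will fix the economy now"
t2 = "Vote for candidate X he will fix the economy today"
t3 = "Lovely weather in Kyoto this morning"

sim_near = jaccard(shingles(t1), shingles(t2))
sim_far = jaccard(shingles(t1), shingles(t3))
print(f"near-duplicate: {sim_near:.2f}, unrelated: {sim_far:.2f}")

# Pairs above a chosen similarity threshold are flagged as near-duplicates.
THRESHOLD = 0.5
is_duplicate = sim_near > THRESHOLD
```

At corpus scale, all-pairs Jaccard comparison is usually replaced by MinHash/LSH indexing, which retrieves candidate near-duplicate pairs without comparing every pair of tweets.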
Messages posted to social media in the aftermath of a natural disaster have value beyond detecting the event itself. Mining such deliberately dropped digital traces allows precise situational awareness and helps provide a timely estimate of the disaster's consequences for the population and infrastructures. Yet, to date, the automatic assessment of damage has received little attention. Here, the authors explore feeding predictive models with tweets conveying on-the-ground social sensors' observations, to nowcast the perceived intensity of earthquakes.
It is often difficult to separate the highly capable “experts” from the average worker in crowdsourced systems. This is especially true for challenging application domains that require extensive domain knowledge. Stock analysis is one such domain, where even highly paid, well-educated domain experts are prone to mistakes. In such a challenging problem space, the “wisdom of the crowds” property that many crowdsourced applications rely on may not hold. In this article, we study the problem of evaluating and identifying experts in the context of SeekingAlpha and StockTwits, two crowdsourced investment services that have recently begun to encroach on a space dominated for decades by large investment banks. We seek to understand the quality and impact of content on collaborative investment platforms by empirically analyzing complete datasets of SeekingAlpha articles (9 years) and StockTwits messages (4 years). We develop sentiment analysis tools and correlate contributed content to the historical performance of relevant stocks. While SeekingAlpha articles and StockTwits messages provide minimal correlation to stock performance in aggregate, a subset of experts contribute more valuable (predictive) content. We show that these authors can be easily identified by user interactions, and that investments based on their analysis significantly outperform broader markets. This effectively shows that even in challenging application domains there is a secondary, or indirect, wisdom of the crowds. Finally, we conduct a user survey that sheds light on users’ views of SeekingAlpha content and stock manipulation. We also devote effort to identifying potential manipulation of stocks by detecting authors controlling multiple identities.
For decades, genetic algorithms have been used as an effective heuristic to solve optimization problems. However, genetic algorithms may require a string-based genetic encoding of information, which severely limits their applicability to online accounts. Remarkably, a behavioral modeling technique inspired by biological DNA has recently been proposed – and successfully applied – for monitoring and detecting spambots in Online Social Networks. In this so-called digital DNA representation, the behavioral lifetime of an account is encoded as a sequence of characters, namely a digital DNA sequence. In a previous work, the authors proposed to create synthetic digital DNA sequences that resemble the characteristics of the digital DNA sequences of real accounts. The combination of (i) the capability to model accounts’ behaviors as digital DNA sequences, (ii) the possibility to create synthetic digital DNA sequences, and (iii) the evolutionary simulations allowed by genetic algorithms opens up an unprecedented opportunity to study – and even anticipate – the evolutionary patterns of modern social spambots. In this paper, we experiment with a novel ad-hoc genetic algorithm that allows us to obtain behaviorally evolved spambots. By varying the parameters of the genetic algorithm, we evaluate the capability of the evolved spambots to escape a state-of-the-art behavior-based detection technique. Notably, although this detection technique achieved excellent performance in the recent past, a number of our spambot evolutions manage to escape detection. Our analysis, if carried out at large scale, would allow the proactive identification of possible spambot evolutions capable of evading current detection techniques.
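The digital DNA idea described above can be sketched in a few lines: encode each account's chronological actions as a character string, then use long substrings shared across accounts as a coordination signal. The three-letter alphabet and the example action sequences below are hypothetical simplifications, not the exact encoding used in the literature:

```python
# Hypothetical behavioral alphabet: T = tweet, R = retweet, P = reply.
def encode(actions):
    """Encode an account's chronological actions as a digital DNA string."""
    alphabet = {"tweet": "T", "retweet": "R", "reply": "P"}
    return "".join(alphabet[a] for a in actions)

def longest_common_substring(s, t):
    """Length of the longest common substring, via dynamic programming.
    Long shared substrings across accounts suggest coordinated behavior."""
    best = 0
    prev = [0] * (len(t) + 1)
    for ch_s in s:
        cur = [0] * (len(t) + 1)
        for j, ch_t in enumerate(t, start=1):
            if ch_s == ch_t:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

bot_a = encode(["retweet"] * 6 + ["tweet"])               # "RRRRRRT"
bot_b = encode(["retweet"] * 5 + ["reply", "tweet"])      # "RRRRRPT"
human = encode(["tweet", "reply", "retweet", "tweet", "reply"])

print(longest_common_substring(bot_a, bot_b))  # long shared run: bot-like
print(longest_common_substring(bot_a, human))  # short shared run: human-like
```

Because the representation is a plain character string, it is also directly amenable to the string-based genetic encoding that the abstract notes genetic algorithms require, which is what makes the evolutionary simulations possible.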