Detecting and Tracking Political Abuse in Social Media
J. Ratkiewicz, M. D. Conover, M. Meiss, B. Gonçalves, A. Flammini, F. Menczer
Center for Complex Networks and Systems Research
School of Informatics and Computing
Indiana University, Bloomington, IN, USA
We study astroturf political campaigns on microblogging
platforms: politically-motivated individuals and organiza-
tions that use multiple centrally-controlled accounts to create
the appearance of widespread support for a candidate or opin-
ion. We describe a machine learning framework that com-
bines topological, content-based and crowdsourced features
of information diffusion networks on Twitter to detect the
early stages of viral spreading of political misinformation. We
present promising preliminary results with better than 96%
accuracy in the detection of astroturf content in the run-up to
the 2010 U.S. midterm elections.
1 Introduction
Social networking and microblogging services reach hun-
dreds of millions of users and have become fertile ground
for a variety of research efforts. They offer a unique op-
portunity to study patterns of social interaction among far
larger populations than ever before. In particular, Twitter has
recently generated much attention in the research commu-
nity due to its peculiar features, open policy on data shar-
ing, and enormous popularity. The popularity of Twitter,
and of social media in general, is further enhanced by the
fact that traditional media pay close attention to the ebb
and flow of the communication that they support. With this
scrutiny comes the potential for the hosted discussions to
reach a far larger audience than simply the original social
media users. Along with the recent growth of social media
popularity, we are witnessing an increased usage of these
platforms to discuss issues of public interest, as they offer
unprecedented opportunities for increased participation and
information awareness among the Internet-connected pub-
lic (Adamic and Glance 2005). While some of the discus-
sions taking place on social media may seem banal and su-
perficial, the attention is not without merit. Social media of-
ten enjoy substantial user bases with participants drawn from
diverse geographic, social, and political backgrounds (Java
et al. 2007). Moreover, the user-as-information-producer
model provides researchers and news organizations alike
with a means of instrumenting and observing a represen-
tative sample of the population in real time. Indeed, it has
Copyright © 2011, Association for the Advancement of Artificial
Intelligence. All rights reserved.
been recently demonstrated that useful information can be
mined from Twitter data streams(Asur and Huberman 2010;
Tumasjan et al. 2010; Bollen, Mao, and Zeng 2011).
With this increasing popularity, however, comes a dark
side — as social media grows in prominence, it is natural
that people find ways to abuse it. As a result, we observe
various types of illegitimate use; spam is a common exam-
ple (Grier et al. 2010; Wang 2010). Here we focus on a par-
ticular social media platform, Twitter, and on one particular
type of abuse, namely political astroturf — political cam-
paigns disguised as spontaneous “grassroots” behavior that
are in reality carried out by a single person or organization.
This is related to spam but with a more specific domain con-
text, and potentially larger consequences.
Online social media tools play a crucial role in the suc-
cesses and failures of numerous political campaigns and
causes. Examples range from the grassroots organizing
power of Barack Obama’s 2008 presidential campaign, to
Howard Dean’s failed 2004 presidential bid and the first-
ever Tea Party rally (Rasmussen and Schoen 2010; Wiese
and Gronbeck 2005).
The same structural and systemic properties that enable
social media such as Twitter to boost grassroots political
organization can also be leveraged, even inadvertently, to
spread less constructive information. For example, during
the political campaign for the 2010 midterm election, several
major news organizations picked up on the messaging frame
of a viral tweet relating to the allocation of stimulus funds,
succinctly describing a study of decision making in drug-
addicted macaques as “Stimulus $ for coke monkeys” (The
Fox Nation 2010).
While the “coke monkeys” meme developed organically
from the attention dynamics of thousands of users, it illus-
trates the powerful and potentially detrimental role that so-
cial media can play in shaping public discourse. As we will
demonstrate, a motivated attacker can easily orchestrate a
distributed effort to mimic or initiate this kind of organic
spreading behavior, and with the right choice of inflamma-
tory wording, influence a public well beyond the confines of
his or her own social network.
Unlike traditional news sources, social media provide lit-
tle in the way of individual accountability or fact-checking
mechanisms. Catchiness and repeatability, rather than truth-
fulness, can function as the primary drivers of information
diffusion. While flame wars and hyperbole are hardly new
phenomena online, Twitter’s 140-character sound bites are
ready-made headline fodder for the 24-hour news cycle.
In the remainder of this paper we describe a system to an-
alyze the diffusion of information in social media, and, in
particular, to automatically identify and track orchestrated,
deceptive efforts to mimic the organic spread of information
through the Twitter network. The main contributions of this
paper are very encouraging preliminary results on the detec-
tion of suspicious memes via supervised learning (96% ac-
curacy) based on features extracted from the topology of the
diffusion networks, sentiment analysis, and crowdsourced
annotations. Because part of what distinguishes astroturf
from genuine political dialogue is the way it spreads, our ap-
proach explicitly takes into account the diffusion patterns of
messages across the social network.
2 Background and Related Work
2.1 Information Diffusion
The study of opinion dynamics and information diffusion
in social networks has a long tradition in the social, physi-
cal, and computational sciences (Castellano, Fortunato, and
Loreto 2009; Barrat, Barthelemy, and Vespignani 2008;
Leskovec, Adamic, and Huberman 2006; Leskovec, Back-
strom, and Kleinberg 2009). Twitter has recently been con-
sidered as a case study for information diffusion. For example,
Galuba et al. (2010) take into account user behavior, user-
user influence, and resource virulence to predict the spread
of URLs through the social network. While usually referred
to as ‘viral,’ the way in which information or rumors diffuse
in a network has important differences with respect to in-
fectious diseases (Morris 2000). Rumors gradually acquire
more credibility as more and more network neighbors ac-
quire them. After some time, a threshold is crossed and the
rumor is believed to be true within a community.
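The threshold dynamics sketched above can be illustrated with a minimal simulation; the model below is our own simplification (a fixed adoption threshold on the fraction of believing neighbors), not a model proposed in the cited work.

```python
# Minimal sketch of threshold-based rumor adoption: a node adopts the
# rumor once the fraction of its neighbors holding it reaches a fixed
# threshold. Graph representation and parameters are illustrative.
def threshold_cascade(neighbors, seeds, threshold=0.5):
    """neighbors: dict mapping node -> list of neighbor nodes."""
    believers = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in neighbors:
            if node in believers:
                continue
            nbrs = neighbors[node]
            if nbrs and sum(n in believers for n in nbrs) / len(nbrs) >= threshold:
                believers.add(node)
                changed = True
    return believers

# In a small clique with one seed, the rumor eventually saturates.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(sorted(threshold_cascade(graph, seeds={0})))  # → [0, 1, 2]
```

Note the contrast with epidemic models: adoption here depends on the fraction of exposed neighbors rather than on independent per-contact infection probabilities.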
A serious obstacle in the modeling of information prop-
agation in the real world as well as in the blogosphere
is the fact that the structure of the underlying social net-
work is often unknown. Even when explicit information on
the social network is available (e.g., Twitter’s follower re-
lations), the strengths of the social links are hardly known,
and their importance cannot be assumed uniform across
the network (Huberman, Romero, and Wu 2008). Heuris-
tic methods are being developed to face this issue. Gomez-
Rodriguez, Leskovec, and Krause (2010) propose an algo-
rithm that can efficiently approximate linkage information
based on the times at which specific URLs appear in a net-
work of news sites. For the purposes of our study this prob-
lem can be, at least partially, ignored. Twitter provides an
explicit way to follow the diffusion of information via the
tracking of retweets. This metadata tells us which links in the
social network have actually played a role in the diffusion of
information. Retweets have already been considered, e.g., to
highlight the conversational aspects of online social inter-
action (Honeycutt and Herring 2008). The reliability of
retweeted information has also been investigated. Mendoza,
Poblete, and
Castillo (2010) found that false information is more likely
to be questioned by users than reliable accounts of an event.
Their work is distinct from our own in that it does not inves-
tigate the dynamics of misinformation propagation.
2.2 Mining Microblog Data
Several studies have demonstrated that information shared
on Twitter has some intrinsic value, facilitating, e.g., predic-
tions of box office success (Asur and Huberman 2010) and
the results of political elections (Tumasjan et al. 2010). Con-
tent has been further analyzed to study consumer reactions to
specific brands (Jansen et al. 2009), the use of tags to annotate
content (Huang, Thornton, and Efthimiadis 2010), its rela-
tion to headline news (Kwak et al. 2010), and the factors that
influence the probability of a meme to be retweeted (Suh et
al. 2010). Romero et al. (2010) have focused on how passive
and active users influence the spreading paths.
Recent work has leveraged the collective behavior of
Twitter users to gain insight into a number of diverse phe-
nomena. Analysis of tweet content has shown that some
correlation exists between the global mood of its users and
important worldwide events, including stock market fluc-
tuations (Bollen, Mao, and Pepe 2010; Bollen, Mao, and
Zeng 2011). Similar techniques have been applied to in-
fer relationships between media events such as presiden-
tial debates and affective responses among social media
users (Diakopoulos and Shamma 2010). Sankaranarayanan
et al. (2009) developed an automated breaking news de-
tection system based on the linking behavior of Twitter
users, while Heer and boyd (2005) describe a system for
visualizing and exploring the relationships between users
in large-scale social media systems. Driven by practical
concerns, others have successfully approximated the epi-
center of earthquakes in Japan by treating Twitter users
as a geographically-distributed sensor network (Sakaki,
Okazaki, and Matsuo 2010).
2.3 Political Astroturf and Truthiness
In the remainder of this paper we describe the analysis of
data obtained by a system designed to detect astroturfing
campaigns on Twitter (Ratkiewicz et al. 2011). An illus-
trative example of such a campaign has recently been docu-
mented by Mustafaraj and Metaxas (2010). They described
a concerted, deceitful attempt to cause a specific URL to
rise to prominence on Twitter through the use of a network
of nine fake user accounts. These accounts produced 929
tweets over the course of 138 minutes, all of which included
a link to a website smearing one of the candidates in the
2009 Massachusetts special election. The tweets injecting
this meme mentioned users who had previously expressed
interest in the election. The initiators sought not just to ex-
pose a finite audience to a specific URL, but to trigger an in-
formation cascade that would lend a sense of credibility and
grassroots enthusiasm to a specific political message. Within
hours, a substantial portion of the targeted users retweeted
the link, resulting in a rapid spread detected by Google’s
real-time search engine. This caused the URL in question
to be promoted to the top of the Google results page for a
query on the candidate’s name — a so-called Twitter bomb.
This case study demonstrates the ease with which a focused
Figure 1: Model of streaming social media events.
effort can initiate the viral spread of information on Twitter,
and the serious consequences of such abuse.
Mass creation of accounts, impersonation of users, and
the posting of deceptive content are behaviors that are likely
common to both spam and political astroturfing. However,
political astroturf is not exactly the same as spam. While the
primary objective of a spammer is often to persuade users
to click a link, someone interested in promoting an astroturf
message wants to establish a false sense of group consen-
sus about a particular idea. Related to this process is the fact
that users are more likely to believe a message that they per-
ceive as coming from several independent sources, or from
an acquaintance (Jagatic et al. 2007). Spam detection sys-
tems often focus on the content of a potential spam mes-
sage — for instance, to see if the message contains a certain
link or set of tags. In detecting political astroturf, we focus
on how the message is delivered rather than on its content.
Further, many legitimate users may be unwittingly complicit
in the propagation of astroturf, having been themselves de-
ceived. Spam detection methods that focus solely on proper-
ties of user accounts, such as the number of URLs in tweets
from an account or the interval between successive tweets,
may therefore be unsuccessful in finding such abuse.
We adopt the term truthy to discriminate falsely-
propagated information from organic grassroots memes. The
term was coined by comedian Stephen Colbert to describe
something that a person believes based on emotion rather
than facts. We can then define our task as the detection of
truthy memes in the Twitter stream. Not every truthy meme
will result in a viral cascade like the one documented by
Mustafaraj and Metaxas, but we wish to test the hypothesis
that the initial stages exhibit identifiable signatures.
3 Analytical Framework
We developed a unified framework, which we call Klatsch,
that analyzes the behavior of users and diffusion of ideas in
a broad variety of data feeds. This framework is designed
to provide data interoperability for the real-time analysis of
massive social media data streams (millions of posts per
day) from sites with diverse structures and interfaces. To
this end, we model a generic stream of social networking
data as a series of events that represent interactions between
actors and memes, as shown in Fig. 1. Each event involves
some number of actors (entities that represent users), some
number of memes (entities that represent units of informa-
tion at the desired level of detail), and interactions among
them. For example, a single tweet event might involve three
or more actors: the poster, the user she is retweeting, and
the people she is addressing. The post might also involve a
set of memes consisting of ‘hashtags’ and URLs referenced
in the tweet. Each event can be thought of as contributing a
unit of weight to edges in a network structure, where nodes
are associated with either actors or memes. The timestamps
associated with the events allow us to observe the changing
structure of this network over time.
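The event model above can be sketched as follows; the class and field names are our own illustration, not the actual Klatsch interfaces.

```python
# Hypothetical sketch of the generic stream model: each event ties
# actors to memes at a timestamp, and contributes a unit of weight to
# the corresponding actor-meme edges of an evolving network.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Event:
    timestamp: int
    actors: list   # e.g. the poster, a retweeted user, addressees
    memes: list    # e.g. hashtags and URLs referenced in the tweet

@dataclass
class EventNetwork:
    edge_weight: Counter = field(default_factory=Counter)

    def add_event(self, event):
        # each (actor, meme) pair in the event gains one unit of weight
        for actor in event.actors:
            for meme in event.memes:
                self.edge_weight[(actor, meme)] += 1

net = EventNetwork()
net.add_event(Event(1, actors=["bob"], memes=["#oilspill"]))
net.add_event(Event(2, actors=["alice", "bob"], memes=["#oilspill"]))
print(net.edge_weight[("bob", "#oilspill")])  # → 2
```

Retaining the event timestamps, as in the `timestamp` field, is what allows the network structure to be replayed and observed over time.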
3.1 Meme Types
To study the diffusion of information on Twitter it is neces-
sary to identify a specific topic as it propagates through the
social substrate. While there exist sophisticated statistical
techniques for modeling the topics underlying bodies of text,
the small size of each tweet and the contextual drift present
in streaming data create significant complications (Wang et
al. 2003). Fortunately, several conventions shared by Twit-
ter users allow us to sidestep these issues. We focus on the
following features to identify different types of memes:
Hashtags The Twitter community uses tokens prefixed by
a hashmark (#) to label the topical content of tweets.
Some examples of popular tags are #gop, #obama, and
#desen, marking discussion about the Republican Party,
President Obama, and the Delaware race for U.S. Senate,
respectively. These are often called hashtags.
Mentions A Twitter user can include another user’s screen
name in a post, prepended by the @ symbol. These men-
tions can be used to denote that a particular Twitter user
is being discussed.
URLs We extract URLs from tweets by matching strings of
valid URL characters that begin with ‘http://.’ Honey-
cutt and Herring (2008) suggest that URLs are associated
with the transmission of information on Twitter.
Phrases Finally, we consider the entire text of the tweet it-
self to be a meme, once all Twitter metadata, punctuation,
and URLs have been removed.
Relying on these conventions we are able to focus on the
ways in which a large number of memes propagate through
the Twitter social network. Note that a tweet may be in-
cluded in several of these categories. A tweet containing (for
instance) two hashtags and a URL would count as a member
of each of the three resulting memes.
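A meme extractor following these conventions can be sketched as below; the regular expressions are our own simplification of Twitter's actual token syntax.

```python
# Sketch of meme extraction: hashtags, mentions, URLs, and the residual
# "phrase" meme (text with metadata, URLs, and punctuation removed).
import re

def extract_memes(text):
    hashtags = re.findall(r"#\w+", text)
    mentions = re.findall(r"@\w+", text)
    urls = re.findall(r"http://\S+", text)
    phrase = text
    for token in hashtags + mentions + urls:
        phrase = phrase.replace(token, "")
    phrase = " ".join(re.sub(r"[^\w\s]", "", phrase).split())
    return {"hashtags": hashtags, "mentions": mentions,
            "urls": urls, "phrase": phrase}

memes = extract_memes("RT @bob: spill update #oilspill http://example.com/x")
print(memes["hashtags"], memes["mentions"])  # → ['#oilspill'] ['@bob']
```

As noted above, one tweet can contribute to several memes at once: this example yields a hashtag meme, a mention meme, a URL meme, and a phrase meme.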
3.2 Network Edges
To represent the flow of information through the Twitter
community, we construct a directed graph in which nodes
are individual user accounts. An example diffusion network
involving three users is shown in Fig. 2. An edge is drawn
from node A to B when either B is observed to retweet a
message from A, or A mentions B in a tweet. The weight
of an edge is incremented each time we observe an event
connecting two users. In this way, either type of edge can be
understood to represent a flow of information from A to B.
Figure 2: Example of a meme diffusion network involving
three users mentioning and retweeting each other. The val-
ues of various node statistics are shown next to each node.
The strength s refers to weighted degree; k stands for degree.
Observing a retweet at node B provides implicit confirma-
tion that information from A appeared in B’s Twitter feed,
while a mention of B originating at node A explicitly con-
firms that A’s message appeared in B’s Twitter feed. The
mention may or may not be noticed by B; therefore, mention
edges are less reliable indicators of information flow than
retweet edges.
Retweet and reply/mention information parsed from the
text can be ambiguous, as in the case when a tweet is marked
as being a ‘retweet’ of multiple people. Instead, we rely
on Twitter metadata, which designates users replied to or
retweeted by each message. Thus, while the text of a tweet
may contain several mentions, we only draw an edge to the
user explicitly designated as the mentioned user by the meta-
data. In so doing, we may miss retweets that do not use the
explicit retweet feature and thus are not captured in the meta-
data. Note that this is separate from our use of mentions as
memes (§3.1), which we parse from the text of the tweet.
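The edge-construction rule above can be sketched as follows; the metadata field names are hypothetical stand-ins for the retweet and reply/mention designations provided by Twitter.

```python
# Sketch of diffusion-graph construction: a directed edge A -> B, with
# incrementing weight, whenever B retweets A or A mentions B according
# to tweet metadata (not the tweet text).
from collections import Counter

def build_diffusion_graph(tweets):
    """tweets: iterable of dicts with 'user' and optional
    'retweeted_user' / 'mentioned_user' metadata fields (names ours)."""
    weight = Counter()  # (source, target) -> edge weight
    for t in tweets:
        if t.get("retweeted_user"):
            # information flowed from the retweeted user to the retweeter
            weight[(t["retweeted_user"], t["user"])] += 1
        if t.get("mentioned_user"):
            # information flowed from the poster to the mentioned user
            weight[(t["user"], t["mentioned_user"])] += 1
    return weight

g = build_diffusion_graph([
    {"user": "alice", "retweeted_user": "bob"},
    {"user": "bob", "mentioned_user": "carol"},
    {"user": "alice", "retweeted_user": "bob"},
])
print(g[("bob", "alice")])  # → 2
```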
4 System Architecture
We implemented a system based on the data representation
described above to automatically monitor the data stream
from Twitter, detect relevant memes, collect the tweets that
match themes of interest, and produce basic statistical fea-
tures relative to patterns of diffusion. These features are
then passed to our meme classifier and/or visualized. We
called this system “Truthy.” The different stages that lead
to the identification of the truthy memes are described in the
following subsections. A screenshot of the meme overview
page of our website ( is shown
in Fig. 3. Upon clicking on any meme, the user is taken to
another page with more detailed statistics about that meme.
They are also given an opportunity to label the meme as
‘truthy;’ the idea is to crowdsource the identification of
truthy memes, as an input to the classifier described in §5.
4.1 Data Collection
To collect meme diffusion data we rely on whitelisted ac-
cess to the Twitter ‘Gardenhose’ streaming API (dev. The Gar-
denhose provides detailed data on a sample of the Twitter
corpus at a rate that varied between roughly 4 million tweets
Figure 3: Screenshot of the Meme Overview page of our
website, displaying a number of vital statistics about tracked
memes. Users can then select a particular meme for more
detailed information.
per day near the beginning of our study, to around 8 mil-
lion tweets per day at the time of this writing. While the
process of sampling edges (tweets between users) from a
network to investigate structural properties has been shown
to produce suboptimal approximations of true network char-
acteristics (Leskovec and Faloutsos 2006), we find that the
analyses described below are able to produce accurate clas-
sifications of truthy memes even in light of this shortcoming.
4.2 Meme Detection
A second component of our system is devoted to scanning
the collected tweets in real time. The task of this meme de-
tection component is to determine which of the collected
tweets are to be stored in our database for further analysis.
Our goal is to collect only tweets (a) with content related
to U.S. politics, and (b) of sufficiently general interest in
that context. Political relevance is determined by matching
against a manually compiled list of keywords. We consider a
meme to be of general interest if the number of tweets with
that meme observed in a sliding window of time exceeds a
given threshold. We implemented a filtering step for each of
these criteria, described elsewhere (Ratkiewicz et al. 2011).
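The two filtering criteria can be sketched as below; the keyword list, window length, and threshold are illustrative placeholders, not the values used by the system.

```python
# Sketch of the two-stage filter: a keyword match for political
# relevance, and a sliding-window count threshold for general interest.
from collections import deque, defaultdict

POLITICAL_KEYWORDS = {"election", "senate", "gop"}   # illustrative only

def is_political(text):
    return any(k in text.lower() for k in POLITICAL_KEYWORDS)

class BurstFilter:
    def __init__(self, window=3600, threshold=5):
        self.window, self.threshold = window, threshold
        self.events = defaultdict(deque)   # meme -> timestamps in window

    def of_general_interest(self, meme, now):
        q = self.events[meme]
        q.append(now)
        while q and now - q[0] > self.window:
            q.popleft()          # drop observations outside the window
        return len(q) >= self.threshold

f = BurstFilter(window=60, threshold=3)
print([f.of_general_interest("#gop", t) for t in (0, 10, 20)])
# → [False, False, True]
```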
Our system has tracked a total of approximately 305 mil-
lion tweets collected from September 14 until October 27,
2010. Of these, 1.2 million contain one or more of our polit-
ical keywords; the meme filtering step further reduced this
number to 600,000. Note that this number of tweets does not
directly correspond to the number of tracked memes, as each
tweet might contribute to several memes.
4.3 Network Analysis
To characterize the structure of each meme’s diffusion net-
work we compute several statistics based on the topology
of the largest connected component of the retweet/mention
Table 1: Features used in truthy classification.
nodes             Number of nodes
edges             Number of edges
mean k            Mean degree
mean s            Mean strength
mean w            Mean edge weight in largest connected component
max k(i,o)        Maximum (in,out)-degree
max k(i,o) user   User with max. (in,out)-degree
max s(i,o)        Maximum (in,out)-strength
max s(i,o) user   User with max. (in,out)-strength
std k(i,o)        Std. dev. of (in,out)-degree
std s(i,o)        Std. dev. of (in,out)-strength
skew k(i,o)       Skew of (in,out)-degree distribution
skew s(i,o)       Skew of (in,out)-strength distribution
mean cc           Mean size of connected components
max cc            Size of largest connected component
entry nodes       Number of unique injection points
num truthy        Number of times the ‘truthy’ button was clicked
sentiment scores  Six GPOMS sentiment dimensions
graph. These include the number of nodes and edges in the
graph, the mean degree and strength of nodes in the graph,
mean edge weight, mean clustering coefficient across nodes
in the largest connected component, and the standard devi-
ation and skew of each network’s in-degree, out-degree and
strength distributions (see Fig. 2). Additionally we track the
out-degree and out-strength of the most prolific broadcaster,
as well as the in-degree and in-strength of the most focused-
upon user. We also monitor the number of unique injection
points of the meme, reasoning that organic memes (such as
those relating to news events) will be associated with a larger
number of originating users.
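A few of these topological features can be computed directly from the weighted edge list; the sketch below covers only node/edge counts and degree statistics, and a full implementation would also cover strength, skew, and connected components.

```python
# Partial sketch of the feature extraction: node and edge counts, mean
# degree, and the maximum in-/out-degree of any user.
from collections import Counter

def network_features(edge_weight):
    """edge_weight: dict mapping (source, target) -> weight."""
    k_in, k_out = Counter(), Counter()
    nodes = set()
    for (src, dst) in edge_weight:
        k_out[src] += 1
        k_in[dst] += 1
        nodes.update((src, dst))
    n = len(nodes)
    return {
        "nodes": n,
        "edges": len(edge_weight),
        "mean_k": sum(k_in.values()) / n if n else 0.0,  # mean in-degree
        "max_k_in": max(k_in.values(), default=0),
        "max_k_out": max(k_out.values(), default=0),
    }

feats = network_features({("a", "b"): 1, ("a", "c"): 2, ("b", "c"): 1})
print(feats["nodes"], feats["edges"], feats["max_k_out"])  # → 3 3 2
```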
4.4 Sentiment Analysis
We also utilize a modified version of the Google-based
Profile of Mood States (GPOMS) sentiment analysis
method (Bollen, Mao, and Pepe 2010) in the analysis of
meme-specific sentiment on Twitter. The GPOMS tool as-
signs to a body of text a six-dimensional vector with bases
corresponding to different mood attributes (Calm, Alert,
Sure, Vital, Kind, and Happy). To produce scores for a meme
along each of the six dimensions, GPOMS relies on a vocab-
ulary taken from an established psychometric evaluation in-
strument extended with co-occurring terms from the Google
n-gram corpus. We applied the GPOMS methodology to the
collection of tweets, obtaining a six-dimensional mood vec-
tor for each meme.
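GPOMS itself relies on an extended psychometric lexicon that is not reproduced here; the sketch below only illustrates the shape of the output, a six-dimensional mood vector per meme, using a tiny invented lexicon.

```python
# Illustrative lexicon-based mood scorer (NOT the actual GPOMS method):
# accumulate word scores along the six GPOMS mood dimensions.
MOOD_DIMS = ("Calm", "Alert", "Sure", "Vital", "Kind", "Happy")
LEXICON = {                      # invented entries for the example
    "relaxed": ("Calm", 1.0),
    "anxious": ("Calm", -1.0),
    "certain": ("Sure", 1.0),
    "joyful": ("Happy", 1.0),
}

def mood_vector(tweets):
    scores = dict.fromkeys(MOOD_DIMS, 0.0)
    for text in tweets:
        for word in text.lower().split():
            if word in LEXICON:
                dim, val = LEXICON[word]
                scores[dim] += val
    return scores

v = mood_vector(["feeling certain and joyful", "so anxious today"])
print(v["Sure"], v["Happy"], v["Calm"])  # → 1.0 1.0 -1.0
```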
5 Automatic Classification
As an application of the analyses performed by the Truthy
system, we trained a binary classifier to automatically label
legitimate and truthy memes.
We began by producing a hand-labeled corpus of train-
ing examples in three classes — ‘truthy,’ ‘legitimate,’ and
‘remove.’ We labeled these by presenting random memes to
several human reviewers (the authors of the paper and a few
Table 2: Performance of two classifiers with and without re-
sampling training data to equalize class sizes. All results are
averaged based on 10-fold cross-validation.
Classifier Resampling? Accuracy AUC
AdaBoost No 92.6% 0.91
AdaBoost Yes 96.4% 0.99
SVM No 88.3% 0.77
SVM Yes 95.6% 0.95
Table 3: Confusion matrices for a boosted decision stump
classifier with and without resampling. The labels on the
rows refer to true class assignments; the labels on the
columns are those predicted.
No resampling With resampling
Truthy Legitimate Truthy Legitimate
T 45 (12%) 16 (4%) 165 (45%) 6 (1%)
L 11 (3%) 294 (80%) 7 (2%) 188 (51%)
additional volunteers), and asking them to place each meme
in one of the three categories. A meme was to be classified as
‘truthy’ if a significant portion of the users involved in that
meme appeared to be spreading it in misleading ways —
e.g., if a number of the accounts tweeting about the meme
appeared to be robots or sock puppets, the accounts appeared
to follow only other propagators of the meme (clique behav-
ior), or the users engaged in repeated reply/retweet exclu-
sively with other users who had tweeted the meme. ‘Legit-
imate’ memes were described as memes representing nor-
mal use of Twitter — several non-automated users convers-
ing about a topic. The final category, ‘remove,’ was used for
memes in a non-English language or otherwise unrelated to
U.S. politics (#youth, for example). These memes were
not used in the training or evaluation of classifiers.
Upon gathering 252 annotated memes, we observed an
imbalance in our labeled data (231 legitimate and only 21
truthy). Rather than simply resampling from the smaller
class, as is common practice in the case of class imbal-
ance, we performed a second round of human annotations
on previously-unlabeled memes predicted to be ‘truthy’ by
the classifier trained in the previous round, gaining 103 more
annotations (74 legitimate and 40 truthy). We note that the
human classifiers knew that the additional memes were pos-
sibly more likely to be truthy, but that the classifier was not
very good at this point due to the paucity of training data
and indeed was often contradicted by the human classifica-
tion. This bootstrapping procedure allowed us to manually
label a larger portion of truthy memes with less bias than
resampling. Our final training dataset consisted of 366 train-
ing examples — 61 ‘truthy’ memes and 305 legitimate ones.
In a few cases where multiple reviewers disagreed on the la-
beling of a meme, we determined the final label by reaching
consensus in a group discussion among all reviewers. The
dataset is available online.1
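The resampling mentioned above, oversampling the smaller class with replacement until both classes are equal in size, can be sketched as follows; the implementation details are our assumption, not the exact procedure used.

```python
# Sketch of class-balancing by oversampling: replicate minority-class
# examples (sampled with replacement) until each class matches the
# largest class's size.
import random

def resample_balance(examples, labels, rng=random.Random(0)):
    by_class = {}
    for x, y in zip(examples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picks = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picks)
        out_y.extend([y] * target)
    return out_x, out_y

X = [[1], [2], [3], [4], [5]]
y = ["legit", "legit", "legit", "legit", "truthy"]
Xb, yb = resample_balance(X, y)
print(yb.count("legit"), yb.count("truthy"))  # → 4 4
```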
We experimented with several classifiers, as implemented
Table 4: Top 10 most discriminative features, according to a
χ2 analysis under 10-fold cross-validation. Intervals repre-
sent the variation of the χ2 score or rank across the folds.
Feature   χ2        Rank
mean w    230 ± 4   1.0 ± 0.0
mean s    204 ± 6   2.0 ± 0.0
edges     188 ± 4   4.3 ± 1.9
skew ko   185 ± 4   4.4 ± 1.1
std si    183 ± 5   5.1 ± 1.3
skew so   184 ± 4   5.1 ± 0.9
skew si   180 ± 4   6.7 ± 1.3
max cc    177 ± 4   8.1 ± 1.0
skew ki   174 ± 4   9.6 ± 0.9
std ko    168 ± 5   11.5 ± 0.9
by Hall et al. (2009). Since comparing different learning
algorithms is not our goal, we report on the results ob-
tained with just two well-known classifiers: AdaBoost with
DecisionStump, and SVM. We provided each classifier with
31 features about each meme, as shown in Table 1. A few
of these features bear further explanation. Measures relating
to ‘degree’ and ‘strength’ refer to the nodes in the diffusion
network of the meme in question — that is, the number of
people that each user retweeted or mentioned, and the num-
ber of times these connections were made, respectively. We
defined an ‘injection point’ as a tweet containing the meme
which was not itself a retweet; our intuition was that memes
with a larger number of injection points were more likely to
be legitimate. No features were normalized.
As the number of truthy meme instances was still smaller
than the number of legitimate ones, we also experimented with
resampling the training data to balance the classes prior to
classification. The performance of the classifiers is shown in
Table 2, as evaluated by their accuracy and the area under
their ROC curves (AUC). The latter is an appropriate evalu-
ation measure in the presence of class imbalance. In all cases
these preliminary results are quite encouraging, with accu-
racy around or above 90%. The best results are obtained by
AdaBoost with resampling: better than 96% accuracy and
0.99 AUC. Table 3 further shows the confusion matrices for
AdaBoost. In this task, false negatives (truthy memes incor-
rectly classified as legitimate, in the upper-right quadrant
of each matrix) are less desirable than false positives (the
lower-left quadrant). In the worst case, the false negative rate
is 4%. We did not perform any feature selection or other op-
timization; the classifiers were provided with all the features
computed for each meme (Table 1). Table 4 shows the 10
most discriminative features, as determined by χ2analysis.
Network features appear to be more discriminative than sen-
timent scores or the few user annotations that we collected.
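The idea behind ranking features by a χ2 score can be illustrated with the simplified version below; it binarizes a feature at its median and tests association with a binary class label via a 2×2 contingency table, which is our own simplification rather than the exact procedure used above.

```python
# Simplified chi-square feature scoring: binarize the feature at its
# median and compare observed vs. expected counts against the class.
def chi2_score(feature, labels):
    """feature: list of numbers; labels: list of 0/1 class indices."""
    med = sorted(feature)[len(feature) // 2]
    obs = [[0, 0], [0, 0]]       # rows: feature >= median?, cols: class
    for f, y in zip(feature, labels):
        obs[int(f >= med)][y] += 1
    n = len(feature)
    score = 0.0
    for i in (0, 1):
        for j in (0, 1):
            exp = sum(obs[i]) * (obs[0][j] + obs[1][j]) / n
            if exp:
                score += (obs[i][j] - exp) ** 2 / exp
    return score

# A feature aligned with the class outscores an uninformative one.
aligned = chi2_score([0, 0, 1, 1], [0, 0, 1, 1])
noisy = chi2_score([0, 1, 0, 1], [0, 0, 1, 1])
print(aligned > noisy)  # → True
```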
6 Examples of Astroturf
The Truthy system allowed us to identify several egregious
instances of astroturf memes. Some of these cases caught
the attention of the popular press due to the sensitivity of the
topic in the run-up to the 2010 U.S. midterm political elec-
tions, and subsequently many of the accounts involved were
suspended by Twitter. Let us illustrate a few representative cases:
#ampat The #ampat hashtag is used by many conserva-
tive users. What makes this meme suspicious is that the
bursts of activity are driven by two accounts, @CSteven
and @CStevenTucker, which are controlled by the
same user, in an apparent effort to give the impression
that more people are tweeting about the same topics. This
user posts the same tweets using the two accounts and has
generated a total of over 41,000 tweets in this fashion.
See Fig. 4(A) for the #ampat diffusion network.
@PeaceKaren_25 This account did not disclose informa-
tion about the identity of its owner, and generated a very
large number of tweets (over 10,000 in four months). Al-
most all of these tweets supported several Republican can-
didates. Another account, @HopeMarie_25, had a simi-
lar behavior to @PeaceKaren_25 in retweeting the ac-
counts of the same candidates and boosting the same web-
sites. It did not produce any original tweets, and in addi-
tion it retweeted all of @PeaceKaren_25’s tweets, pro-
moting that account. These accounts had also succeeded
at creating a ‘twitter bomb:’ for a time, Google searches
for “gopleader” returned these tweets in the first page
of results. A visualization of the interaction between these
two accounts can be seen in Fig. 4(B). Both accounts were
suspended by Twitter by the time of this writing.
This meme is the website of the Re-
publican Leader John Boehner. It looks truthy because
it is promoted by the two suspicious accounts described
above. The diffusion of this URL is shown in Fig. 4(C).
How Chris Coons budget works- uses tax $ 2 attend din-
ners and fashion shows
This is one of a set of truthy memes smearing Chris
Coons, the Democratic candidate for U.S. Senate from
Delaware. Looking at the injection points of these
memes, we uncovered a network of about ten bot ac-
counts. They inject thousands of tweets with links to
posts from the website. To avoid
detection by Twitter and increase visibility to different
users, duplicate tweets are disguised by adding different
hashtags and appending junk query parameters to the
URLs. To generate retweeting cascades, the bots also
coordinate in mentioning a few popular users. When these targets see the same news arriving from several people, they are more likely to believe it is true and spread it to their followers. Most bot accounts in this network
can be traced back to a single person who runs the website. The diffusion network
corresponding to this case is illustrated in Fig. 4(D).
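The disguising tactic described above (duplicate tweets dressed up with different hashtags and junk query parameters) can be countered by normalizing tweet text before comparison. The following is a minimal sketch of that idea; the normalization rules and account names are our own illustration, not the paper's detection code:

```python
import re
from collections import defaultdict

def normalize(tweet: str) -> str:
    """Strip URLs, hashtags, and mentions so that disguised
    duplicates collapse to the same key (heuristic rules)."""
    text = re.sub(r"https?://\S+", "", tweet)   # URLs vary by junk params
    text = re.sub(r"[#@]\w+", "", text)         # hashtags/mentions vary too
    return re.sub(r"\s+", " ", text).strip().lower()

def find_duplicate_groups(tweets):
    """Group (account, text) pairs whose normalized text is identical;
    keep only groups injected by more than one distinct account."""
    groups = defaultdict(list)
    for account, text in tweets:
        groups[normalize(text)].append(account)
    return {k: v for k, v in groups.items() if len(set(v)) > 1}

# hypothetical accounts and tweets, for illustration only
tweets = [
    ("bot1", "Read this! http://x.co/a?z=1 #tcot"),
    ("bot2", "Read this! http://x.co/a?z=2 #news"),
    ("user3", "Totally unrelated tweet"),
]
dupes = find_duplicate_groups(tweets)
# the two bot tweets collapse to one group despite different URLs/hashtags
```

A real system would likely combine such content fingerprints with the network features discussed elsewhere in this paper.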
These are just a few examples of truthy memes that our
system was able to identify. Two other networks of bots were
shut down by Twitter after being detected by Truthy.
Fig. 4 also shows the diffusion networks for four legitimate memes. One, #Truthy, was injected as an experiment by the NPR Science Friday radio program. Another, @senjohnmccain, displays two different communities in which the meme was propagated: one by retweets from @ladygaga in the context of discussion on the repeal of the "Don't ask, don't tell" policy on gays in the military, and the other by mentions of @senjohnmccain. A gallery with detailed explanations about various truthy and legitimate memes can be found on our website (truthy.
Figure 4: Diffusion networks of sample memes from our dataset. Edges are represented using the same notation as in Fig. 2. Four truthy memes are shown in the top row and four legitimate ones in the bottom row. (A) #ampat (B) @PeaceKaren_25 (C) (D) "How Chris Coons budget works- uses tax $ 2 attend dinners and fashion shows" (E) #Truthy (F) @senjohnmccain (G) (H) "Obama said taxes have gone down during his administration. That's ONE way to get rid of income tax — getting rid of income"
7 Discussion
Our simple classification system was able to accurately de-
tect ‘truthy’ memes based on features extracted from the
topology of the diffusion networks. Using this system we
have been able to identify a number of ‘truthy’ memes.
Though few of these exhibit the explosive growth charac-
teristic of true viral memes, they are nonetheless clear ex-
amples of coordinated attempts to deceive Twitter users.
Truthy memes are often spread initially by bots, causing
them to exhibit, when compared with organic memes, patho-
logical diffusion graphs. These graphs show a number of pe-
culiar features, including high numbers of unique injection
points with few or no connected components, strong star-like topologies characterized by high average degree, and, most tellingly, large edge weights between dyads.
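Features of this kind (number of nodes, connected components, average degree, maximum dyad edge weight) can be computed directly from a weighted edge list. The sketch below is illustrative only; it is not the paper's actual feature extractor, and the feature names are our own:

```python
from collections import defaultdict

def diffusion_features(edges):
    """Compute simple topological statistics from a weighted edge
    list [(src, dst, weight), ...] using a tiny union-find."""
    degree = defaultdict(int)
    weight = defaultdict(int)
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:          # path halving
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for s, d, w in edges:
        degree[s] += 1
        degree[d] += 1
        weight[(s, d)] += w
        parent[find(s)] = find(d)      # union the two components

    nodes = set(degree)
    components = len({find(n) for n in nodes})
    return {
        "nodes": len(nodes),
        "components": components,
        "mean_degree": sum(degree.values()) / len(nodes),
        "max_edge_weight": max(weight.values()),
    }

# a star topology with one heavily repeated dyad, the pattern
# flagged as suspicious in the text (synthetic example)
edges = [("hub", f"u{i}", 1) for i in range(5)] + [("hub", "u0", 40)]
feats = diffusion_features(edges)
```

On this toy input the extractor reports a single component with a large maximum dyad weight, the combination of properties the text associates with truthy memes.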
In addition, we observed several other approaches to de-
ception that were not discoverable using graph-based prop-
erties only. One case was that of a bot network using unique
query string suffixes on otherwise identical URLs in an ef-
fort to make them look distinct. This works because many
URL-shortening services ignore query strings when process-
ing redirect requests. In another case we observed a number
of automated accounts that use text segments drawn from
newswire services to produce multiple legitimate-looking
tweets in between the injection of URLs. These instances
highlight several of the more general properties of truthy
memes detected by our system.
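The query-string trick works precisely because shorteners discard query strings when resolving redirects, so a detector can neutralize it by canonicalizing URLs before counting them. A small sketch using Python's standard library (the exact normalization a given shortener applies may differ):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical(url: str) -> str:
    """Collapse URLs that differ only by query parameters or
    fragments, mimicking a shortener that ignores query strings."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc.lower(), parts.path, "", ""))

# hypothetical disguised variants of one underlying link
variants = [
    "http://example.com/story?utm=a1",
    "http://example.com/story?utm=b2",
    "http://EXAMPLE.com/story#frag",
]
# all three collapse to a single canonical URL
assert len({canonical(u) for u in variants}) == 1
```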
The accuracy scores we obtain in the classification task are surprisingly high. We hypothesize that this performance is partially explained by the fact that a considerable proportion of the memes were failed attempts at starting a cascade.
In these cases the networks reduced to isolated injection
points or small components, resulting in network properties
amenable to easy classification.
Despite the fact that many of the memes discussed in this
paper are characterized by small diffusion networks, it is im-
portant to note that this is the stage at which such attempts
at deception must be identified. Once one of these attempts
is successful at gaining the attention of the community, the
meme spreading pattern becomes indistinguishable from an
organic one. Therefore, the early identification and termina-
tion of accounts associated with astroturf memes is critical.
Future work could explore further crowdsourcing the an-
notation of truthy memes. In our present system, we were
not able to collect sufficient crowdsourcing data (only 304
clicks of the ‘truthy’ button, and mostly correlated with
meme popularity), but these annotations may well prove use-
ful with more data. Several other promising features could
be used as input to a classifier, such as the age of the ac-
counts involved in spreading a meme, the reputation of users
based on other memes they have contributed, and other fea-
tures from bot detection methods (Chu et al. 2010).
Acknowledgments. We are grateful to A. Vespignani, C. Cattuto, J. Ramasco, and J. Lehmann for helpful discussions, J.
Bollen for his GPOMS code, T. Metaxas and E. Mustafaraj for
inspiration and advice, and Y. Wang for Web design support. We
thank the Gephi toolkit for aid in our visualizations and the many
users who have provided feedback and annotations. We acknowl-
edge support from NSF (grant No. IIS-0811994), Lilly Foundation
(Data to Insight Center Research Grant), the Center for Complex
Networks and Systems Research, and the IUB School of Informat-
ics and Computing.
References
Adamic, L., and Glance, N. 2005. The political blogosphere and
the 2004 U.S. election: Divided they blog. In Proc. 3rd Intl. Work-
shop on Link Discovery (LinkKDD), 36–43.
Asur, S., and Huberman, B. A. 2010. Predicting the future with
social media. Technical Report arXiv:1003.5699, CoRR.
Barrat, A.; Barthelemy, M.; and Vespignani, A. 2008. Dynamical
Processes on Complex Networks. Cambridge University Press.
Bollen, J.; Mao, H.; and Pepe, A. 2010. Determining the public
mood state by analysis of microblogging posts. In Proc. of the Alife
XII Conf. MIT Press.
Bollen, J.; Mao, H.; and Zeng, X. 2011. Twitter mood predicts the
stock market. J. of Computational Science, in press.
Castellano, C.; Fortunato, S.; and Loreto, V. 2009. Statistical
physics of social dynamics. Rev. Mod. Phys. 81(2):591–646.
Chu, Z.; Gianvecchio, S.; Wang, H.; and Jajodia, S. 2010. Who is tweeting on Twitter: human, bot, or cyborg? In Proc. 26th Annual Computer Security Applications Conf. (ACSAC), 21–30.
Diakopoulos, N. A., and Shamma, D. A. 2010. Characterizing
debate performance via aggregated twitter sentiment. In Proc. 28th
Intl. Conf. on Human Factors in Computing Systems (CHI), 1195–
Galuba, W.; Aberer, K.; Chakraborty, D.; Despotovic, Z.; and
Kellerer, W. 2010. Outtweeting the Twitterers - Predicting Infor-
mation Cascades in Microblogs. In 3rd Workshop on Online Social
Networks (WOSN).
Gomez-Rodriguez, M.; Leskovec, J.; and Krause, A. 2010. In-
ferring networks of diffusion and influence. In Proc. 16th ACM
SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining
(KDD), 1019–1028.
Grier, C.; Thomas, K.; Paxson, V.; and Zhang, M. 2010. @spam:
the underground on 140 characters or less. In Proc. 17th ACM
Conf. on Computer and Communications Security (CCS), 27–37.
Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; and
Witten, I. H. 2009. The WEKA data mining software: An update.
ACM SIGKDD Explorations 11(1):10–18.
Heer, J., and boyd, d. 2005. Vizster: Visualizing online social net-
works. In Proc. IEEE Symp. on Information Visualization (InfoVis).
Honeycutt, C., and Herring, S. C. 2008. Beyond microblogging:
Conversation and collaboration via Twitter. In Proc. 42nd Hawaii
Intl. Conf. on System Sciences.
Huang, J.; Thornton, K. M.; and Efthimiadis, E. N. 2010. Conver-
sational tagging in Twitter. In Proc. 21st ACM Conf. on Hypertext
and Hypermedia (HT).
Huberman, B. A.; Romero, D. M.; and Wu, F. 2008. Social net-
works that matter: Twitter under the microscope. Technical Report
arXiv:0812.1045, CoRR.
Jagatic, T.; Johnson, N.; Jakobsson, M.; and Menczer, F. 2007.
Social phishing. Communications of the ACM 50(10):94–100.
Jansen, B. J.; Zhang, M.; Sobel, K.; and Chowdury, A. 2009. Twit-
ter power: Tweets as electronic word of mouth. J. of the American
Society for Information Science 60:2169–2188.
Java, A.; Song, X.; Finin, T.; and Tseng, B. 2007. Why we Twitter:
understanding microblogging usage and communities. In Proc. 9th
WebKDD and 1st SNA-KDD Workshop on Web mining and social
network analysis, 56–65.
Kwak, H.; Lee, C.; Park, H.; and Moon, S. 2010. What is Twitter,
a social network or a news media? In Proc. 19th Intl. World Wide
Web Conf. (WWW), 591–600.
Leskovec, J.; Adamic, L. A.; and Huberman, B. A. 2006. Dynamics
of viral marketing. ACM Trans. Web 1(1):5.
Leskovec, J., and Faloutsos, C. 2006. Sampling from large graphs.
In Proc. 12th ACM SIGKDD Intl. Conf. on Knowledge Discovery
and Data Mining (KDD), 631–636.
Leskovec, J.; Backstrom, L.; and Kleinberg, J. 2009. Meme-
tracking and the dynamics of the news cycle. In Proc. 15th ACM
SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining
(KDD), 497–506.
Mendoza, M.; Poblete, B.; and Castillo, C. 2010. Twitter under
crisis: Can we trust what we RT? In Proc. 1st Workshop on Social
Media Analytics (SOMA).
Morris, S. 2000. Contagion. Rev. Economic Studies 67(1):57–78.
Mustafaraj, E., and Metaxas, P. 2010. From obscurity to promi-
nence in minutes: Political speech and real-time search. In Proc.
Web Science: Extending the Frontiers of Society On-Line (WebSci).
Rasmussen, S., and Schoen, D. 2010. Mad as Hell: How the Tea
Party Movement Is Fundamentally Remaking Our Two-Party Sys-
tem. HarperCollins.
Ratkiewicz, J.; Conover, M.; Meiss, M.; Gonc¸alves, B.; Patil, S.;
Flammini, A.; and Menczer, F. 2011. Truthy: Mapping the spread
of astroturf in microblog streams. In Proc. 20th Intl. World Wide
Web Conf. (WWW).
Romero, D. M.; Galuba, W.; Asur, S.; and Huberman, B. A.
2010. Influence and passivity in social media. Technical Report
arXiv:1008.1253, CoRR.
Sakaki, T.; Okazaki, M.; and Matsuo, Y. 2010. Earthquake shakes
twitter users: real-time event detection by social sensors. In Proc.
19th Intl. World Wide Web Conf. (WWW), 851–860.
Sankaranarayanan, J.; Samet, H.; Teitler, B.; Lieberman, M.; and
Sperling, J. 2009. Twitterstand: news in tweets. In Proc. 17th ACM
SIGSPATIAL Intl. Conf. on Advances in Geographic Information
Systems (GIS), 42–51.
Suh, B.; Hong, L.; Pirolli, P.; and Chi, E. H. 2010. Want to be
retweeted? Large scale analytics on factors impacting retweet in
Twitter network. In Proc. IEEE Intl. Conf. on Social Computing.
The Fox Nation. 2010. Stimulus $ for coke monkeys. politifi.com/news/Stimulus-for-Coke-Monkeys-267998.html.
Tumasjan, A.; Sprenger, T. O.; Sandner, P. G.; and Welpe, I. M.
2010. Predicting Elections with Twitter: What 140 Characters Re-
veal about Political Sentiment. In Proc. 4th Intl. AAAI Conf. on
Weblogs and Social Media (ICWSM).
Wang, H.; Fan, W.; Yu, P. S.; and Han, J. 2003. Mining concept-
drifting data streams using ensemble classifiers. In Proc. 9th ACM
SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining
(KDD), 226–235.
Wang, A. H. 2010. Don’t follow me: Twitter spam detection. In
Proc. 5th Intl. Conf. on Security and Cryptography (SECRYPT).
Wiese, D. R., and Gronbeck, B. E. 2005. Campaign 2004: Develop-
ments in cyberpolitics. In Denton, R. E., ed., The 2004 Presidential
Campaign: A Communication Perspective. Rowman & Littlefield.
... The widespread use of social media makes them a prime target for exploitation by bad actors. Efforts to inflate the popularity of political candidates [1] with social bots [2], influence public opinion through the spread of disinformation and conspiracy theories [3,4], and manipulate stock prices through coordinated campaigns [5,6] have been widely reported. The threats posed by malicious actors are far-reaching, endangering democracy [7,8], public health [9][10][11], and the economy [12]. ...
... Others adopt strategies such as coordinated inauthentic behaviors. 1 Such coordinated behaviors appear to be normal when inspected individually, but are centrally controlled to achieve some goal [6]. ...
... o t h e r w i s e (1) where p 1 is a session delimiter threshold. A session is thus defined as a maximal sequence of consecutive actions separated by pauses shorter than p 1 . ...
Full-text available
Malicious actors exploit social media to inflate stock prices, sway elections, spread misinformation, and sow discord. To these ends, they employ tactics that include the use of inauthentic accounts and campaigns. Methods to detect these abuses currently rely on features specifically designed to target suspicious behaviors. However, the effectiveness of these methods decays as malicious behaviors evolve. To address this challenge, we propose a language framework for modeling social media account behaviors. Words in this framework, called BLOC, consist of symbols drawn from distinct alphabets representing user actions and content. Languages from the framework are highly flexible and can be applied to model a broad spectrum of legitimate and suspicious online behaviors without extensive fine-tuning. Using BLOC to represent the behaviors of Twitter accounts, we achieve performance comparable to or better than state-of-the-art methods in the detection of social bots and coordinated inauthentic behavior.
... The vision of social media as the modern public square has been challenged as users have 22 become victims of manipulation by astroturf [1,2], trolling [3], impersonation [4], and mis- Here we introduce SimSoM, a minimal model of a generic social media platform. The 42 model allows us to explore scenarios in which an information-sharing network is manipulated 43 by malicious actors controlling inauthentic accounts, and to measure the consequences of such Figure 1: Illustration of the SimSoM model. ...
... A 214 bot can easily interact with accounts having many followers by mentioning and/or following 215 them [41, 42]; other ploys include retweeting, quoting, and/or liking their tweets. There is 216 empirical evidence of preferential targeting by bots that spread misinformation[1, 8].217Targeting politically-active accounts or habitual misinformation spreaders are also conceiv-218 able strategies. An important question, then, is whether bot strategies aiming at specific authen-Scaling between reshare and exposure cascades. ...
Full-text available
Social media, the modern public square, is vulnerable to manipulation. By controlling inauthentic accounts impersonating humans, malicious actors can amplify disinformation within target communities. The consequences of such operations are difficult to evaluate due to the ethical challenges posed by experiments that would influence online communities. Here we use a social media model that simulates information diffusion in an empirical network to quantify the impacts of adversarial manipulation tactics on the quality of content. We find that social media features such as high information load, limited attention, and the presence of influentials exacerbate the vulnerabilities of online communities. Infiltrating a community is the most harmful tactic that bad actors can exploit and the most likely to make low-quality content go viral. The harm is further compounded by inauthentic agents flooding the network with engaging low-quality content, but is mitigated when influential or vulnerable individuals are targeted. These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
... With the rise of the Internet, propaganda campaigns increasingly make use of social media. This gives rise to growing concerns that social media may be strategically used to increase political division and influence public opinion as a tool of modern warfare [15][16][17][18]. For example, a coordinated social media campaign was launched by a Russian organization known as the Internet Research Agency (IRA) during the 2014 Russo-Ukrainian conflict [16,19]. ...
Full-text available
The Russian invasion of Ukraine in February 2022 was accompanied by practices of information warfare, yet existing evidence is largely anecdotal while large-scale empirical evidence is lacking. Here, we analyze the spread of pro-Russian support on social media. For this, we collected $N = 349{,}455$ N = 349 , 455 messages from Twitter with pro-Russian support. Our findings suggest that pro-Russian messages received ∼251,000 retweets and thereby reached around 14.4 million users. We further provide evidence that bots played a disproportionate role in the dissemination of pro-Russian messages and amplified its proliferation in early-stage diffusion. Countries that abstained from voting on the United Nations Resolution ES-11/1 such as India, South Africa, and Pakistan showed pronounced activity of bots. Overall, 20.28% of the spreaders are classified as bots, most of which were created at the beginning of the invasion. Together, our findings suggest the presence of a large-scale Russian propaganda campaign on social media and highlight the new threats to society that originate from it. Our results also suggest that curbing bots may be an effective strategy to mitigate such campaigns.
... Specifically for studies that seek to leverage social media as a source of data for scientific study, there is an abundance of additional challenges related to the quality and/or veracity of information communicated, i.e. misinformation. This is not necessarily always malicious, however, there are many cases where it is, as shown by [68]- [72]. Taking all these factors into account, we look for telltale linguistic signs (using LIWC as a method of feature engineering) that focus on how a social media user writes as opposed to what they actually say when classifying users into specific categories. ...
Full-text available
Reaching marginal and other migrant communities to elicit their political views and opinions is a well-known challenge. Social media has enabled a certain amount of online activism and participation, especially in societies with abundant multicultural identities. However, it can be quite challenging to isolate the voice of the migrant in English-speaking countries, especially with an abundance of content in English on social media. In this paper, we pursue a case study of Ireland’s Twitter landscape, specifically migrant and native activists. We present a methodology that can accurately (> 80%) isolate the Irish migrant voice with as little as 25 English tweets without relying on user metadata and using simple, highly explainable, out-of-the-box machine learning methods. Using this, we distil (via sentiment analysis) polarities of views, segment (via BERT-based topic modelling) and summarise (via ChatGPT) differentiated views in a consumable manner for policymakers. Our approach enables policymakers to further their understanding of multicultural communities and use this to inform their decision-making processes.
... Social media platforms, while holding the potential to facilitate communication and foster informed discussions, are also susceptible to the dissemination of misinformation and disinformation campaigns [6,7]. This issue extends beyond politics and seeps into sensitive domains like public health, as exemplified by the anti-vaccine movements during the COVID-19 pandemic [8]. ...
Full-text available
The impact of the social media campaign conducted by the Internet Research Agency (IRA) during the 2016 U.S. presidential election continues to be a topic of ongoing debate. While it is widely acknowledged that the objective of this campaign was to support Donald Trump, the true extent of its influence on Twitter users remains uncertain. Previous research has primarily focused on analyzing the interactions between IRA users and the broader Twitter community to assess the campaign's impact. In this study, we propose an alternative perspective that suggests the existing approach may underestimate the true extent of the IRA campaign. Our analysis uncovers the presence of a notable group of suspended Twitter users, whose size surpasses the IRA user group size by a factor of 60. These suspended users exhibit close interactions with IRA accounts, suggesting potential collaboration or coordination. Notably, our findings reveal the significant role played by these previously unnoticed accounts in amplifying the impact of the IRA campaign, surpassing even the reach of the IRA accounts themselves by a factor of 10. In contrast to previous findings, our study reveals that the combined efforts of the Internet Research Agency (IRA) and the identified group of suspended Twitter accounts had a significant influence on individuals categorized as undecided or weak supporters, probably with the intention of swaying their opinions.
... Taking into account the evident presence of bots (i.e., automated accounts) [6] in Twitter and their impact, especially on political discussions [9,15], we investigated which of the most prevalent accounts can be classified as bots. To this end, we used an online state-of-the-art tool for automated bot detection, namely Bot-Detective [7,13]. ...
Full-text available
In this paper, we study the Greek wiretappings scandal, which has been revealed in 2022 and attracted a lot of attention by press and citizens. Specifically, we propose a methodology for collecting data and analyzing patterns of online public discussions on Twitter. We apply our methodology to the Greek wiretappings use case, and present findings related to the evolution of the discussion over time, its polarization, and the role of the media. The methodology can be of wider use and replicated to other topics. Finally, we provide publicly an open dataset, and online resources with the results.
... However, content in social media is often published without the intermediation of experts, thus increasing the risk of spreading unreliable news [1,2,3,4]. With the advance of the research on how opinions propagate in networks [5,6], how news goes viral [7,8], and how polarization affects public opinion [9,10,11], methods have been proposed to limit the spread of incorrect information and help users distinguish true news from fake news [12,13,14,15]. Despite these efforts, misinformation still threatens society. ...
Full-text available
With social media, the flow of uncertified information is constantly increasing, with the risk that more people will trust low-credible information sources. To design effective strategies against this phenomenon, it is of paramount importance to understand how people end up believing one source rather than another. To this end, we propose a realistic and cognitively affordable heuristic mechanism for opinion formation inspired by the well-known belief propagation algorithm. In our model, an individual observing a network of information sources must infer which of them are reliable and which are not. We study how the individual's ability to identify credible sources, and hence to form correct opinions, is affected by the noise in the system, intended as the amount of disorder in the relationships between the information sources in the network. We find numerically and analytically that there is a critical noise level above which it is impossible for the individual to detect the nature of the sources. Moreover, by comparing our opinion formation model with existing ones in the literature, we show under what conditions people's opinions can be reliable. Overall, our findings imply that the increasing complexity of the information environment is a catalyst for misinformation channels.
The increasing popularity of social media over the past decade has caused the population of these media users to be defined as the size of a large continent. Given the communicative nature of social media, these communications sometimes occur positively and correctly with favorable and desirable outcomes. However, these communications sometimes occur negatively and abnormally, referred to as online dysfunctional behavior. The present study explains the meaning of dysfunctional behavior in the context of social media and what and how these behaviors are. Smith's interpretive phenomenological method was used to achieve this goal. The sample size was determined at 11 people based on theoretical sampling among social media celebrities who had public reputations. The data were collected using a semi-structured interview method. The interview analysis resulted in the identification of 3 primary themes (actions and reactions without proactive and retrospective awareness, aggressive behavior with impersonation, and acting out of accumulated mental disorders) and 6 sub-themes (lack of media and technological literacy, cultural poverty, anonymity, online anger, intolerance of success of others, and inferiority complexes). Online dysfunctional behavior has a semantic affinity with activism without awareness, aggressive behavior with impersonation, and acting out of disorders.
Full-text available
Bot detection in social media, particularly on Twitter, has become a crucial issue in recent years due to the increasing use of bots for malicious uses such as the spreading of false information in order to manipulate public opinion. In this paper, we review the most widely available tools for bot detection and the categorization models that exist in the literature. This paper put focus on providing a concise and informative overview of state-of-the-art bot detection on Twitter. This overview can be useful for developing more effective detection methods. Overall, our paper provides valuable insights into the current state of bot detection in social media, suggesting new challenges and possible future trends and research.Keywordsbot detectionTwittersocial mediabotnetmisinformation spread
The explosive growth of cyber attacks nowadays, such as malware, spam, and intrusions, caused severe consequences on society. Securing cyberspace has become an utmost concern for organizations and governments. Traditional Machine Learning (ML) based methods are extensively used in detecting cyber threats, but they hardly model the correlations between real-world cyber entities. In recent years, with the proliferation of graph mining techniques, many researchers investigated these techniques for capturing correlations between cyber entities and achieving high performance. It is imperative to summarize existing graph-based cybersecurity solutions to provide a guide for future studies. Therefore, as a key contribution of this paper, we provide a comprehensive review of graph mining for cybersecurity, including an overview of cybersecurity tasks, the typical graph mining techniques, and the general process of applying them to cybersecurity, as well as various solutions for different cybersecurity tasks. For each task, we probe into relevant methods and highlight the graph types, graph approaches, and task levels in their modeling. Furthermore, we collect open datasets and toolkits for graph-based cybersecurity. Finally, we outlook the potential directions of this field for future research.
Full-text available
The ever-increasing amount of information flowing through Social Media forces the members of these networks to compete for attention and influence by relying on other people to spread their message. A large study of information propagation within Twitter reveals that the majority of users act as passive information consumers and do not forward the content to the network. Therefore, in order for individuals to become influential they must not only obtain attention and thus be popular, but also overcome user passivity. We propose an algorithm that determines the influence and passivity of users based on their information forwarding activity. An evaluation performed with a 2.5 million user dataset shows that our influence measure is a good predictor of URL clicks, outperforming several other measures that do not explicitly take user passivity into account. We demonstrate that high popularity does not necessarily imply high influence and vice-versa.
Full-text available
Microblogging is a new form of communication in which users can describe their current status in short posts distributed by instant messages, mobile phones, email or the Web. Twitter, a popular microblogging tool has seen a lot of growth since it launched in October, 2006. In this paper, we present our observations of the microblogging phenomena by studying the topological and geographical properties of Twitter's social network. We find that people use microblogging to talk about their daily activities and to seek or share information. Finally, we analyze the user intentions associated at a community level and show how users with similar intentions connect with each other.
Conference Paper
Full-text available
In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a pre-liminary study of certain social phenomenons, such as the dissem-ination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the pur-pose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propa-gation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. This result shows that it is posible to detect rumors by using aggregate analysis on tweets.
Full-text available
Recently, all major search engines introduced a new fea-ture: real-time search results, embedded in the first page of organic search results. The content appearing in these results is pulled within minutes of its generation from the so-called "real-time Web" such as Twitter, blogs, and news websites. In this paper, we argue that in the context of political speech, this feature provides disproportionate ex-posure to personal opinions, fabricated content, unverified events, lies and misrepresentations that otherwise would not find their way in the first page, giving them the opportunity to spread virally. To support our argument we provide con-crete evidence from the recent Massachusetts (MA) senate race between Martha Coakley and Scott Brown, analyzing political community behavior on Twitter. In the process, we analyze the Twitter activity of those involved in exchanging messages, and we find that it is possible to predict their po-litical orientation and detect attacks launched on Twitter, based on behavioral patterns of activity.
Full-text available
The ever-increasing amount of information owing through Social Media forces the members of these networks to compete for attention and influence by relying on other peopleto spread their message. A large study of information propagation within Twitter reveals that the majority of users act as passive information consumers and do not forward the content to the network. Therefore, in order for individuals to become influential they must not only obtain attention and thus be popular, but also overcome user passivity. We propose an algorithm that determines the influence and passivity of users based on their information forwarding activity. An evaluation performed with a 2.5 million user dataset shows that our influence measure is a good predictor of URL clicks, outperforming several other measures that do not explicitly take user passivity into account. We also explicitly demonstrate that high popularity does not necessarily imply high influence and vice-versa.
Conference Paper
Full-text available
The microblogging service Twitter is in the process of being appropriated for conversational interaction and is starting to be used for collaboration, as well. In an attempt to determine how well Twitter supports user-to-user exchanges, what people are using Twitter for, and what usage or design modifications would make it (more) usable as a tool for collaboration, this study analyzes a corpus of naturally-occurring public Twitter messages (tweets), focusing on the functions and uses of the @ sign and the coherence of exchanges. The findings reveal a surprising degree of conversationality, facilitated especially by the use of @ as a marker of addressivity, and shed light on the limitations of Twitter's current design for collaborative use.
Given a huge real graph, how can we derive a representative sample? There are many known algorithms to compute interesting measures (shortest paths, centrality, betweenness, etc.), but several of them become impractical for large graphs. Thus graph sampling is essential. The natural questions to ask are (a) which sampling method to use, (b) how small can the sample size be, and (c) how to scale up the measurements of the sample (e.g., the diameter), to get estimates for the large graph. The deeper, underlying question is subtle: how do we measure success? We answer the above questions, and test our answers by thorough experiments on several, diverse datasets, spanning thousands of nodes and edges. We consider several sampling methods, propose novel methods to check the goodness of sampling, and develop a set of scaling laws that describe relations between the properties of the original and the sample. In addition to the theoretical contributions, the practical conclusions from our work are: Sampling strategies based on edge selection do not perform well; simple uniform random node selection performs surprisingly well. Overall, best performing methods are the ones based on random-walks and "forest fire"; they match very accurately both static as well as evolutionary graph patterns, with sample sizes down to about 15% of the original graph.
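Of the strategies the study compares, random-walk-based sampling is among the best performers. A minimal sketch, assuming an adjacency-dict graph and a simple restart-to-seed walk (the restart probability and target fraction here are illustrative, not the study's settings):

```python
import random

def random_walk_sample(adj, target_frac=0.15, restart_p=0.15, seed=0):
    """Sample roughly target_frac of the nodes of a connected graph by a
    random walk with restarts, then return the induced subgraph.
    adj: dict node -> list of neighbor nodes."""
    rng = random.Random(seed)
    target = max(1, int(target_frac * len(adj)))
    start = rng.choice(list(adj))
    current, sampled = start, {start}
    while len(sampled) < target:
        if rng.random() < restart_p or not adj[current]:
            current = start                 # jump back to the seed node
        else:
            current = rng.choice(adj[current])  # step to a random neighbor
        sampled.add(current)
    # induced subgraph: keep only edges between sampled nodes
    return {u: [v for v in adj[u] if v in sampled] for u in sampled}
```

Note the walk only terminates if the seed's connected component holds at least the target number of nodes; a production version would cap the step count.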
Extended Abstract Microblogging is a form of online communication by which users broadcast brief text updates, also known as tweets, to the public or a selected circle of contacts. A variegated mosaic of microblogging uses has emerged since the launch of Twitter in 2006: daily chatter, conversation, information sharing, and news commentary, among others (Java et al, 2007). Regardless of their content and intended use, tweets often convey pertinent information about their authors' mood state. As such, tweets can be regarded as temporally-authentic microscopic instantiations of public mood state (O'Connor et al, 2010). Here we perform a sentiment analysis of all public tweets broadcast by Twitter users between August 1 and December 20, 2008. For every day in the timeline, we extract six dimensions of mood (tension, depression, anger, vigor, fatigue, confusion) using an extended version (Pepe and Bollen, 2008) of the Profile of Mood States (POMS), a well-established psychometric instrument (Norcross et al, 2006; McNair et al, 2003). We compare our results to fluctuations recorded by stock market and crude oil price indices and major events in media and popular culture, such as the U.S. Presidential Election of November 4, 2008 and Thanksgiving Day (see Fig. 1). We find that events in the social, political, cultural and economic sphere do have a significant, immediate and highly specific effect on the various dimensions of public mood. In addition, we found long-term changes in public mood that may reflect the cumulative effect of various underlying socio-economic indicators. With the present investigation (Bollen et al, 2010), we bring about the following methodological contributions: we argue that sentiment analysis of minute text corpora (such as tweets) is efficiently obtained via a syntactic, term-based approach that requires no training or machine learning.
Moreover, we stress the importance of measuring mood and emotion using well-established instruments rooted in decades of empirical psychometric research. Finally, we speculate that collective emotive trends can be modeled and predicted using large-scale analyses of user-generated content but results should be discussed in terms of the social, economic, and cultural spheres in which the users are embedded.
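The training-free, term-based scoring described above can be illustrated with a toy lexicon matcher: count matches against fixed per-dimension word lists and normalize by corpus length. The word lists below are invented placeholders, not the actual POMS terms:

```python
# Hypothetical stand-in for a mood lexicon; real POMS dimensions also
# include depression, anger, fatigue, and confusion.
MOOD_LEXICON = {
    "tension": {"nervous", "tense", "anxious"},
    "vigor":   {"lively", "energetic", "alert"},
}

def mood_profile(tweets):
    """Per-dimension match rate over a day's tweets; no training involved."""
    scores = {dim: 0 for dim in MOOD_LEXICON}
    total_words = 0
    for tweet in tweets:
        words = tweet.lower().split()
        total_words += len(words)
        for dim, terms in MOOD_LEXICON.items():
            scores[dim] += sum(w in terms for w in words)
    return {dim: count / max(total_words, 1) for dim, count in scores.items()}
```

Running this per day over a tweet stream yields the kind of daily mood time series the abstract compares against market indices and major events.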
In this paper, we study the linking patterns and discussion topics of political bloggers. Our aim is to measure the degree of interaction between liberal and conservative blogs, and to uncover any differences in the structure of the two communities. Specifically, we analyze the posts of 40 "A-list" blogs over the period of two months preceding the U.S. Presidential Election of 2004, to study how often they referred to one another and to quantify the overlap in the topics they discussed, both within the liberal and conservative communities, and also across communities. We also study a single day snapshot of over 1,000 political blogs. This snapshot captures blogrolls (the list of links to other blogs frequently found in sidebars), and presents a more static picture of a broader blogosphere. Most significantly, we find differences in the behavior of liberal and conservative blogs, with conservative blogs linking to each other more frequently and in a denser pattern.
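The within- versus cross-community linking comparison reduces to counting link endpoints by community label. A minimal sketch with hypothetical blog names and "L"/"C" labels standing in for liberal and conservative:

```python
def linking_counts(links, side):
    """links: iterable of (src, dst) blog pairs; side: dict blog -> 'L' or 'C'.
    Returns counts of liberal-liberal, conservative-conservative, and
    cross-community links."""
    counts = {"LL": 0, "CC": 0, "cross": 0}
    for a, b in links:
        pair = side[a] + side[b]
        if pair in counts:
            counts[pair] += 1       # within-community link
        else:
            counts["cross"] += 1    # link across the divide
    return counts
```

Comparing the within-community counts (normalized by community size) against the cross count quantifies the denser conservative linking pattern the study reports.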