Technical Report

DISRUPTING DAESH - MEASURING TAKEDOWN OF ONLINE TERRORIST MATERIAL AND ITS IMPACTS

Authors: Maura Conway, Moign Khawaja, Suraj Lakhani, Jeremy Reffin, Andrew Robertson and David Weir

Abstract

This report seeks to contribute to public and policy debates on the value of social media disruption activity with respect to terrorist material. We look in particular at aggressive account and content takedown, with the aim of accurately measuring this activity and its impacts. To read it online: http://www.voxpol.eu/download/vox-pol_publication/DCUJ5528-Disrupting-DAESH-1706-WEB-v2.pdf
DISRUPTING
DAESH
MEASURING TAKEDOWN OF
ONLINE TERRORIST MATERIAL
AND ITS IMPACTS
Maura Conway, Moign Khawaja, Suraj Lakhani,
Jeremy Reffin, Andrew Robertson and David Weir
About the authors
Maura Conway is Professor of International Security in the School
of Law and Government at Dublin City University and Coordinator
of VOX-Pol.
Moign Khawaja is a doctoral candidate in the School of Law and
Government at Dublin City University.
Suraj Lakhani is Lecturer in Criminology and Sociology in the
Department of Sociology at the University of Sussex.
Jeremy Reffin is a Research Fellow in the Department of Informatics
and co-founder of the TAG Laboratory at the University of Sussex.
Andrew Robertson is a Research Fellow in the TAG Laboratory in the
Department of Informatics at the University of Sussex.
David Weir is Professor of Computer Science in the Department of
Informatics and co-founder of the TAG Laboratory at the University
of Sussex.
Acknowledgements
This research was partially funded by a grant from the
UK Home Office.
ISBN: 978-1-873769-70-6
© VOX-Pol Network of Excellence, 2017
This material is offered free of charge for personal and
non-commercial use, provided the source is acknowledged.
For commercial or any other use, prior written permission must
be obtained from VOX-Pol. In no case may this material be altered,
sold or rented.
Like all other VOX-Pol publications, this report can be downloaded
free of charge from the VOX-Pol website: www.voxpol.eu
Designed and typeset by Soapbox, www.soapbox.co.uk
TABLE OF CONTENTS
Executive Summary
INTRODUCTION
SOCIAL MEDIA MONITORING: METHODOLOGY
Caveats
OUR DATA
MEASURING EFFECTS
Account Longevity
Community Breakdown
BEYOND TWITTER: THE WIDER JIHADIST SOCIAL MEDIA ECOLOGY
Case Study: Destinations of Official IS Propaganda
RECOMMENDATIONS
FUTURE RESEARCH
CONCLUSION
EXECUTIVE SUMMARY
This report seeks to contribute to public and policy debates on the
value of social media disruption activity with respect to terrorist
material. We look in particular at aggressive account and content
takedown, with the aim of accurately measuring this activity and
its impacts.
Our findings challenge the notion that Twitter remains a conducive
space for Islamic State (IS) accounts and communities to
flourish, although IS continues to distribute propaganda through
this channel. However, not all jihadists on Twitter are subject to
the same high levels of disruption as IS, and we show that there
is differential disruption taking place.
IS’s and other jihadists’ online activity was never solely restricted
toTwier. Twier is just one node in a wider jihadist social media
ecology. We describe and discuss this, and supply some prelimi-
nary analysis of disruption trends in this area.
Our analysis rests on a dataset containing 722 pro-IS accounts
(labelled Pro-IS throughout) and a convenience sample of 451
other jihadist accounts (labelled Other Jihadist throughout),
including those supportive of Hay'at Tahrir al-Sham (HTS),
Ahrar al-Sham, the Taliban and al-Shabaab, active on Twitter
at any point between 1 February and 7 April 2017.
The Pro-IS accounts were located and identified using three
methods: the original seed set of accounts (27%) were manually
identified by the research team; the second set of accounts (30%)
were identified ‘semi-automatically’ (i.e. automatically identified
by the system and manually inspected and verified); and the
third group of accounts (43%) were identified using an ‘advanced
semi-automatic’ system via IS propaganda links.
For the Pro-IS accounts, 57,574 tweets were collected, with
7,216 (12.5%) of these tweets containing out-links, i.e. links to
wider websites, social media platforms, content hosting sites, etc.
(not including links within Twitter). For the Other Jihadist
accounts, 62,156 tweets were collected, of which 7,928 (13%)
contained out-links.
One of the overarching objectives of this research was to provide
an up-to-date account of the effects of Twitter's disruption
strategy on IS supporter accounts. We found that pro-IS accounts
faced substantial and aggressive disruption, particularly those
linking to official IS content hosted on a range of other platforms.
The majority – around 65% – of the Pro-IS accounts in our dataset
were suspended within 70 days of their establishment, with
the overall suspension rate of pro-IS accounts probably being
considerably higher.
In a case study of accounts posting links to official IS content
in a 24-hour period on 3 and 4 April 2017, 153 accounts were
identified. A subset of 50 were 'throwaway accounts' (i.e. accounts
specifically created on 3 April to disseminate IS propaganda with
no expectation that they would stay online for any significant
period of time). Together these accounts sent a total of 842 tweets
with out-links to IS propaganda on other online platforms. Within
this 24-hour period, 65% of accounts were suspended within the
first 17 hours (07.00–00.00 GMT). The 50 throwaway accounts
suffered even higher levels of disruption, with a 75% suspension
rate during the same time period. This demonstrates that the
disruption to official IS propaganda distribution was reasonably
effective in the first 24 hours after linking to the content.
We also compared the suspension rates of Pro-IS accounts
versus Other Jihadist accounts to check for differential disruption.
We found that more than 25% of Pro-IS accounts were suspended
within five days of their creation; a negligible number (less than 1%)
of Other Jihadist accounts were subject to the same rapid
response. Of those accounts in our dataset that were eventually
suspended (i.e. 455 Pro-IS accounts and 163 Other Jihadist), more
than 30% of Pro-IS accounts were suspended within two days
of their creation; less than 1% of Other Jihadist accounts met the
same fate.
As a result of this disruption, IS's ability to facilitate and maintain
strong and influential communities on Twitter was found to
be significantly diminished. Relationship networks were much
sparser for Pro-IS accounts than Other Jihadist accounts. Other
Jihadist accounts had the opportunity to send six times as many
tweets, follow or 'friend' four times as many accounts and,
critically, gain 13 times as many followers as Pro-IS accounts.
Pro-IS users who persistently returned to Twitter resorted to
adopting counter-measures, such as locking accounts, diluting the
content of tweets, using innocuous profile pictures, and adopting
meaningless Twitter handles. This situation makes it extremely
difficult to maintain a strong and influential virtual community.
Twier is, however, just one node in a wider jihadist social
mediaecology. Therefore, we analysed a sample of destinations
from Twier for official IS propaganda at three time points
(4–8February, 4–8 March (excluding 7 March), and 4–8 April
2017). During these periods, Pro-IS accounts linked to 39 dif-
ferent third-party platforms or content hosting sites, as well as
running its own server to host material. Of these, six remained
prominent across the three time periods: justpaste.it, IS’s own
server, archive.org, sendvid.com, YouTube and Google Drive.
These domains accounted for 83%, 70% and 67% of the URLs
in the February, March and April sampling periods respectively.
Thetakedown rate (as of 12 April) was 72%, 66% and 72% for
thesame samplingperiods.
Only 20 (or 0.04%) of all tweets from Pro-IS accounts contained
a telegram.me link. The paucity of such links caused us to explore
further; we found that just two of 722 Pro-IS users’ biographies
and two of 451 Other Jihadist users’ biographies contained
Telegram links. Neither group of accounts was therefore using
Twier to advertise its presence on Telegram.
DISRUPTING DAESH7
Our report makes three recommendations:
1. Modern social media monitoring systems have the ability
to dramatically increase the speed and effectiveness of
data gathering, analysis and (potentially) intervention, but
probably only when deployed in combination with trained
human analysts.
2. Active IS supporters who remain on Twitter, in particular
content disseminators and their throwaway accounts, could
probably be degraded further – though this may have both
pros (e.g. detrimental impact on last remaining significant
IS supporter Twitter activity) and cons (e.g. further
degradation of Twitter as a source of data or open source
intelligence on IS).
3. Our focus was largely on Twitter, but we also pointed to
the importance of the wider jihadist social media ecology.
As our analysis was not restricted to IS users and content,
we also underline the often uninterrupted online presence
and activity of non-IS jihadists. We point to the usefulness
of maintaining a wide-angle view of the online activity
of a diversity of these, particularly HTS, across a variety
of social media and other online platforms.
For the future, we propose replicating the present research,
but with a larger and more equal sample of HTS, Ahrar al-Sham,
and Taliban accounts. This would allow for a more systematic
and comparative analysis of the levels of disruption of a range of
non-IS jihadists, the vibrancy of their contemporary Twitter
communities and Twitter out-linking practices. It would also allow
us to identify their other preferred online platforms. Additional
research is clearly also warranted into the wider jihadist social
media ecology. In particular, we suggest analysing pro-IS and
other jihadist activity on Telegram, which is almost certainly
where IS’s online community has reconstituted, and comparing
this with our present findings.
1. INTRODUCTION
Use of the Internet, particularly social media, by violent extremists
and terrorists and their supporters is a source of concern for
policy-makers and the public. This is due to apparent connections
between consumption of, and networking around, violent extremist
and terrorist online content. Concerns are focused on:
• adoption of extremist ideology – i.e. so-called '(violent) online radicalisation';
• recruitment into violent extremist or terrorist groups or movements; and/or
• attack planning and preparation.
Particular concerns have been raised regarding easy access to
large volumes of potentially influential violent extremist and terrorist
content on prominent and heavily trafficked social media platforms.
The micro-blogging platform Twitter has been subject to particular
scrutiny, especially regarding their response to use of their platform
by the so-called 'Islamic State' (hereafter IS), also known as 'Daesh'.
One of the major aims of this analysis is to supply an up-to-date
account of the effects of Twitter's disruption strategy on IS-supporter
accounts. Twitter continues to be 'called out' in the media and by
policy-makers for the use of their platform by a variety of violent
extremists. However, Twitter is not alone among social media
companies and other online platforms in hosting extremist accounts
and content.1 The company has taken significant steps over the last
three years to disrupt IS activity on their platform. Detailed description
and analysis of the precise nature of this disruption activity
1 See, for example, UK House of Commons Home Affairs Committee,
Hate Crime: Abuse, Hate and Extremism Online, London: House
of Commons, 2017.
and, importantly, its effects is sparse. Therefore, this report aims to
contribute to public and policy debates on the value of disruption
activity, particularly aggressive account and content takedown,
by seeking to accurately measure this activity and its impacts. Our
findings challenge the notion that Twitter remains a conducive space
for IS accounts and communities to flourish, although IS continues
to distribute propaganda through the platform. Not all jihadists on
Twitter are subject to the same high levels of disruption as IS, however,
and we show that there is differential disruption taking place.
An important related point is that the social media presence of IS and
other jihadists has never been solely restricted to Twitter. Twitter is
just one node in a wider jihadist social media ecology. We describe
and discuss this, and supply some preliminary analysis of disruption
trends in this area.
2. SOCIAL MEDIA
MONITORING:
METHODOLOGY
For this research, we developed a semi-automated methodology for
identifying pro-jihadist accounts on Twitter. Figure 1 illustrates this
methodology, which was implemented using the social media analysis
platform known as Method52.2
Figure 1. Detailed flow diagram for semi-automated social media analysis
The first step was to identify candidate accounts of interest. Our
approach was based on finding tweets that had specific terms of
interest in them (i.e. ‘seed search terms’) and/or finding accounts that
were in some way related to other accounts known to be of interest
(i.e. ‘seed accounts’). See step 1 in Figure 1.
When a tweet matched these search criteria, it was automatically
analysed to see if it was actually relevant, using a machine-learning
classifier trained to mimic the classification decisions of a human
analyst.3 A key task of the relevancy classifier was to separate target
Twitter accounts from other Twitter accounts using similar language
(e.g. journalist or researcher accounts). If the tweet was deemed
relevant, then further historic tweets were automatically extracted for
the candidate account and assessed for relevancy (see step 2 in Figure 1).
2 Method52 was developed by the TAG Laboratory at the University of
Sussex. For more information, see www.taglaboratory.org.
3 Classifiers were trained using semi-supervised machine learning
approaches. Method52 provides components that enable this to be done
swiftly and in a manner that is bespoke to a project.
[Figure 1 (captioned above) depicts the pipeline: seed search terms and seed accounts feed tweet collection (step 1); relevancy is assessed (step 2); confirmed accounts are scored and analysed (step 3); links in flagged tweets are analysed (step 4); account, tweet and link details are held in a data store (step 5); and newly identified terms are fed back into the search (step 6).]
This provided the system with an aggregate view of the tweet history
of the account. This overview of the tweet history was combined with
other account metadata that could be extracted; these pieces of information
were scored automatically and candidate Twitter accounts
that exceeded the set thresholds were presented to a human analyst
for decision (step 3).
If the analyst confirmed that the account was pro-jihadist, then
the out-links found in all the account’s tweets were automatically
analysed (step 4) and details of the account, its tweets and its links
stored (step 5).
Information from new confirmed accounts was used by the
system in a feedback loop to continually improve the efficiency
of the system, thereby identifying new seed search terms (step 6)
and providing additional seed accounts (step 2).
2.1 CAVEATS
There are a number of caveats attached to the data-collection process:
The bulk of data gathering was undertaken over two months
in early 2017. The system to implement our semi-automated
methodology was created, tested and evolved throughout this
period. Online accounts it returned were integrated with those
found via traditional, manual search for accounts of interest. The
overall approach was, therefore, a combination of automated and
manual, and snowball and purposive sampling methods.
Not all the available data was captured. There were various
periods of downtime for the semi-automated system throughout
this period as we developed and modified the methodology.
Further, we were unable to include some accounts found via
automated means because they were taken down before the
human analyst could assess and confirm their affiliation.4 By the
project's end, when the system was working optimally, 100% of
these accounts were identified by the software as pro-IS, reflecting
the high level of disruption of IS-related accounts. We discuss this
further below.
The semi-automated system primarily focused on pro-IS accounts
operating in English and Arabic or some combination of these
languages. There is a possibility that accounts using, for example,
Bahasa,5 Russian or Turkish were overlooked. We believe this
possibility is worth mentioning but is negligible, as the system’s
effectiveness improved as we learned more about pro-IS users’
contemporary Twier activity and refined the methodology
accordingly. By early April, for example, the soware was able to
identify accounts directly distributing IS propaganda with very
high precision, no maer what language was used. At the same
time, we believe it also identified the majority of accounts linking
to that propaganda.
Our data from the latter stages of this project suggests that around
50 or more throwaway IS accounts were produced daily. These
accounts appear to be set up solely to distribute propaganda,
typically have no followers and send only IS propaganda tweets
until they are suspended. If we had been gathering all of these
throwaway IS accounts over the whole research period, we would
have had many hundreds more – perhaps as many as 2,000 to
3,000 accounts – in our sample data set. The analysis that follows
is however based on pro-IS accounts with at least one follower and
thus excludes these throwaway accounts unless otherwise stated.
4 This is a perennial issue in this type of research. It is also mentioned, for
example, in J.M. Berger and Jonathon Morgan, The ISIS Twitter Census:
Defining and Describing the Population of ISIS Supporters on Twitter,
Washington DC: Brookings, 2015, p.41 and p.44.
5 J.M. Berger and Heather Perez, The Islamic State's Diminishing Returns
on Twitter: How Suspensions are Limiting the Social Networks of English-
speaking ISIS Supporters, Washington DC: George Washington University
Program on Extremism, 2016, p.6.
3. OUR DATA
Our final dataset comprised 722 pro-IS accounts (labelled
Pro-IS throughout) and 451 other jihadist accounts (labelled Other
Jihadist throughout) with at least one follower active on Twitter at any
time between 1 February and 7 April 2017 (see Table 1). Accounts were
defined as pro-IS if their avatar or carousel image contained explicitly
pro-IS imagery and/or text, and/or they had at least one recent tweet
by the user (i.e. not a retweet) that contained explicitly pro-IS images
and/or text, such as referring to IS as 'Dawlah' or their fighters as
'lions'. Accounts maintained by journalists and others who tweeted,
for example, Amaq News Agency content for informational purposes,
were manually excluded. The Other Jihadist accounts included,
among others, those supportive of HTS, Ahrar al-Sham, the Taliban
and al-Shabaab. The same parameters were used to categorise
these accounts.
Table 1. Description of final dataset

                                             PRO-IS    OTHER JIHADIST
Number of accounts                           722       451
Number of tweets                             57,574    62,156
Number of out-links                          7,216     7,928
Percentage of tweets containing out-links    12.5%     13%
The accounts in our dataset were located and identified in
three different ways (see Table 2). One set of accounts was manually
identified by the research team, principally by looking at known jihadi
accounts (or those known to be of interest to jihadi supporters) and
inspecting accounts that were following or being followed by them.
A second group of accounts was identified semi-automatically – that
is, automatically by the above-described social-media monitoring
system and then manually inspected by a human analyst who
confirmed: (i) whether they were jihadist accounts or not; and (ii) if they
were, of what type. Several approaches were used to identify or
generate seed accounts. This included analysing the vocabulary
being used in known jihadi accounts that were currently or had
recently been active, determining which terms were being used much
more often than would be expected statistically, and searching for tweets
that contained these terms. These candidates were then winnowed
based on the relevancy of their tweets in general (see above) and
other metadata. A third group of accounts was identified automatically
by the social-media monitoring system based on the presence of known
IS propaganda links. These links were first identified through other
tracking procedures, including (but not limited to) being spotted in
confirmed IS tweets.
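As an illustration of the 'used much more often than would be expected statistically' step that generated candidate seed terms, the sketch below ranks words by a smoothed relative-frequency ratio between tweets from known accounts of interest and a general background sample. The smoothing and cut-off are illustrative assumptions, not the project's actual parameters.

```python
# Illustrative sketch: surface terms used far more often in tweets from known
# accounts of interest than in a general background sample of tweets.
# The +1 smoothing and the top_k cut-off are arbitrary illustrative choices.
from collections import Counter

def overrepresented_terms(target_tweets, background_tweets, top_k=20):
    target_counts = Counter(w for t in target_tweets for w in t.lower().split())
    background_counts = Counter(w for t in background_tweets for w in t.lower().split())
    target_total = sum(target_counts.values()) or 1
    background_total = sum(background_counts.values()) or 1

    def frequency_ratio(word):
        # How much more common the word is in the target sample than in the
        # background sample, with add-one smoothing for unseen words.
        p_target = (target_counts[word] + 1) / (target_total + 1)
        p_background = (background_counts[word] + 1) / (background_total + 1)
        return p_target / p_background

    return sorted(target_counts, key=frequency_ratio, reverse=True)[:top_k]
```

Terms surfaced in this way would then be used as new seed search terms, with the resulting candidate accounts winnowed as described above.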
It is important to underline that our pro-IS Twitter account
dataset is as close as possible – taking into account the caveats
in section 2.1 above – to a full dataset of explicitly IS-supportive
accounts with at least one follower for the period studied. On the
other hand, the Other Jihadist category is a convenience sample of
non-IS jihadist Twitter accounts collected for comparison purposes
and in no way reflects the true number of these accounts on Twitter.
Table 2. Location and identification of Twitter accounts

                              PRO-IS            OTHER JIHADIST
                              NO.      %        NO.      %
Manually identified           193      27       332      74
Semi-automated                218      30       119      26
Advanced semi-automated       311      43       –        –
TOTAL                         722               451
4. MEASURING
EFFECTS
Twitter was one of the most preferred online spaces for
IS and their supporters, even prior to the establishment of their
so-called 'caliphate' in June 2014. It was estimated that there were
between 46,000 and 90,000 pro-IS Twitter accounts active in the
period September to December 2014.6 However, their activity was
subject to disruption by Twitter from mid-2014 and, although initially
low level and sporadic, significantly increasing levels of disruption
were instituted throughout 2015 and 2016. From mid-2015 through
January 2016, for example, Twitter claimed to have suspended in the
region of 15,000 to 18,000 IS-supportive accounts per month.7 From
mid-February to mid-July 2016, this increased to an average of 40,000
IS-related account suspensions per month, according to the
company.8 Despite the growing costs attached to remaining on Twitter
(such as greater effort to maintain a public presence while relaying
diffused messages and deflated morale), during this period IS supporters
routinely penned online missives exhorting 'Come Back to Twitter'.9
In 2017, is it worthwhile for pro-IS users to do so?
Until now, the small amount of publicly available research
on the online disruption of IS has focused on the impact of Twitter's
suspension activities on follower numbers for re-established
accounts.10 We also looked at the longevity or survival time of
accounts, and compared Pro-IS to Other Jihadist accounts on both
measures (i.e. follower numbers and longevity). Our overall finding
6 Berger and Morgan, The ISIS Twitter Census, 2015, p.9.
7 Twitter, 'Combating Violent Extremism.' Twitter Blog, 5 February 2016,
https://blog.twitter.com/2016/combating-violent-extremism.
8 Twitter, 'An Update on our Efforts to Combat Violent Extremism.'
Twitter Blog, 18 August 2016, https://blog.twitter.com/2016/
an-update-on-our-efforts-to-combat-violent-extremism.
9 Cole Bunzel, '"Come Back to Twitter": A Jihadi Warning Against Telegram.'
Jihadica, 18 July 2016, www.jihadica.com/come-back-to-twitter/.
10 Berger and Perez, The Islamic State's Diminishing Returns on Twitter, 2016.
was that pro-IS accounts are being significantly disrupted and this
has effectively eliminated IS's once vibrant Twitter community.
Differential disruption is taking place, however, which means that
Other Jihadist accounts are subject to much less pressure.
4.1 ACCOUNT LONGEVITY
This section addresses the survival time of accounts. All the Twitter
accounts in our database were active at the time they were identified
and classified as Pro-IS or Other Jihadist. Once an account was
entered in the database, we monitored its status and recorded when
it was suspended, if this subsequently occurred. This allowed
us to measure the age of each account (i.e. the time elapsed
since the account's creation) at the date of suspension.
Worth underlining here is that
the below-described survival rates
of Pro-IS accounts would likely
have been considerably shorter
if the analysis included those
accounts suspended – often within minutes of creation – before they
could be captured by the research team for inclusion in our dataset.
Figure 2 shows the estimated cumulative suspension rate for all
Twitter accounts in our dataset, outlining the probability of an account
being suspended against its age (represented in days) for the 722 Pro-IS
accounts and 451 Other Jihadist accounts. Figure 2 shows that the
majority – around 65% – of Pro-IS accounts were suspended before
they reached 70 days since inception. At the same time point, less than
20% of Other Jihadist accounts had been suspended. In fact, as regards
differential disruption, more than 25% of Pro-IS accounts were suspended
within five days of inception; a negligible number (less than 1%)
of Other Jihadist accounts were subject to the same rapid response.
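The curves in Figures 2 and 3 can be derived from two timestamps per account: its creation date and, where applicable, the date its suspension was first observed. A minimal sketch of the underlying computation, binned every five days as in the figures, is shown below; note that this naive version ignores right-censoring (accounts observed for less time than a given cut-off), so it only illustrates the idea. The example records and field layout are hypothetical.

```python
# Illustrative sketch: cumulative suspension rate by account age, binned every
# 5 days as in Figures 2 and 3. Example records are hypothetical, and
# right-censoring of still-live accounts is ignored for simplicity.
from datetime import date

accounts = [
    # (account_created, suspended_on_or_None)
    (date(2017, 2, 3), date(2017, 2, 6)),
    (date(2017, 2, 10), None),  # still live at the end of monitoring
]

def cumulative_suspension_rate(accounts, max_days=100, bin_size=5):
    total = len(accounts)
    rates = {}
    for cutoff in range(bin_size, max_days + 1, bin_size):
        suspended = sum(
            1
            for created, suspended_on in accounts
            if suspended_on is not None and (suspended_on - created).days <= cutoff
        )
        rates[cutoff] = suspended / total
    return rates

# e.g. cumulative_suspension_rate(accounts)[70] -> share suspended within 70 days
```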
Our categorisation of these accounts as being jihadist in orienta-
tion was necessarily subjective. It is possible that others may disagree.
Figure 2. Cumulative suspension rate for all accounts in database
To address this possibility, Figure 3 focuses on those accounts in
our dataset that were eventually suspended: 455 Pro-IS accounts
and 163 Other Jihadist accounts. The rationale is that these accounts
were judged independently to have breached Twitter's terms of use.
Again, as regards differential disruption, our data illustrates that
85% of Pro-IS accounts were suspended within the first 60 days of
their life, compared to 40% of accounts falling into the Other Jihadist
category. More than 30% of Pro-IS accounts were suspended within
two days of their creation; less than 1% of Other Jihadist accounts
met the same fate.
Figure 3. Cumulative suspension rate for accounts eventually suspended
[Figures 2 and 3 plot the cumulative percentage of accounts suspended (0–100%) against the number of days each account survived, binned every 5 days, with separate curves for IS and Other Jihadi accounts.]
In addition to the differences in longevity of Pro-IS and Other Jihadist
accounts, the three subsets of Pro-IS accounts (i.e. those identified
manually, semi-automatically based on general tweet content, and
semi-automatically as a result of linking to official IS propaganda) also
displayed different survival and activity patterns. From the 722 Pro-IS
accounts in our dataset, the manually identified accounts (27%) sur-
vived disruption for longer periods and were predominantly tweeting
about general IS and non-IS related news. The ‘general content’
semi-automated accounts (30%) had a somewhat shorter lifespan
andwere tweeting content generically related to the conflict (e.g. daily
bale updates from several IS frontlines such as Mosul, Al-Bab, Deir
Ez-Zor, eastern Aleppo, etc.). The advanced semi-automated group
(43%) experienced the shortest lifespans. They were initially identi-
fied as a result of sending at least one tweet specifically disseminating
‘official’ IS propaganda (e.g. from the Amaq News Agency). Many were
then found to be exclusively tweeting links to official IS propaganda.
4.1.1. Case Study: Intervention Effectiveness
Throughout the period of data collection, IS operated a 24-hour
'news cycle,' disseminating a new batch of propaganda on a daily
basis via Twitter and other online platforms, using links to content
hosted elsewhere on the Internet. These may be so-called 'ghazwa'
or social media 'raids' orchestrated using some other online platform,
potentially Telegram11 and/or the dark web. The rapid takedown of
Twitter accounts sending tweets containing links to official IS propaganda
is seen in greater detail in this case study, which shows the effectiveness
of intervention over a single 24-hour period. Figure 4 depicts survival
11 Nico Prucha, 'IS and the Jihadist Information Highway: Projecting
Influence and Religious Identity via Telegram.' Perspectives on Terrorism,
10(6), 2016, pp.51–52.
curves for Twier accounts that disseminated links to one or more
pieces ofofficial IS propaganda produced on Monday 3 April12 (based
on datacollected on Monday 3 April and Tuesday 4April 2017).13
On Monday 3 April 2017, IS uploaded its daily propaganda content
to a variety of social media and online content-hosting platforms.
This content generally included videos (in daily news format and
other propaganda videos), 'picture stories' (a photo montage that
tells a story), brief pronouncements similar to short press releases,
radio podcasts and other documents (e.g. magazines). Over the course
of Monday afternoon and evening, 153 unique Twitter accounts
were identified that sent a total of 842 tweets with links to external
(non-Twitter) web pages, each loaded with an item or items of IS
propaganda. We identified only 10 of those Twitter accounts (7%) as
being independent, third-party 'mainstream' accounts. The balance of
accounts were identified as pro-IS. Fifty of these accounts appeared to
be throwaway accounts created on Monday evening.
12 By early April 2017, the research reached the stage where there was
complete access to IS's main Twitter propaganda apparatus. This enabled
the semi-automated system to determine what IS and supporter tweets
would be linking to before those tweets were sent. It is thought that this
occurred several hours before Twitter themselves became aware of these
accounts and their tweets. Much of this may have been due to the research
team being able to access data and intelligence across multiple sites,
allowing early prediction of tweet material, where Twitter's disruption team
were likely restricted to monitoring their own platform only. The system
was thus able to immediately identify when an account disseminated one of
these propaganda links on Twitter. It was then possible to capture the rate
and speed of suspension.
13 It should be noted that this date was chosen at random and thus
propaganda represented in this graph had no relation to the chemical attack
on the town of Khan Shaykhun, also on 4 April, as the propaganda was
produced by IS on Monday 3 April 2017.
Figure 4. Case study of intervention effectiveness: survival of IS
disseminator accounts 4–6 April 2017
The semi-automated system tracked all the accounts disseminating
this propaganda – those sending one or more tweets with a 3 April
propaganda link at some point prior to 06.00 GMT on the morning
of Tuesday 4 April 2017. Figure 4 shows the survival curves for all 153
Twitter accounts tweeting IS propaganda from Monday 3 April and for
the subset of 50 accounts specifically created on the Monday evening.
The data shows that, at 07.00 GMT on Tuesday 4 April 2017, 100%
of these accounts were active. However, by 13.00 GMT, only 73% of
the 153 accounts were still active, falling to 58% by 23.00 GMT. This
then dropped sharply to 35% surviving un-suspended by midnight on
Tuesday. Very few of these surviving accounts were suspended over
the subsequent 48 hours that we tracked. The 50 throwaway accounts
created on Monday evening specifically to disseminate propaganda
were suspended or deleted even faster: by 13.00 GMT only 52%
were still active, falling to 34% by 23.00 GMT and 24% by midnight
on Tuesday.
Figure 5 illustrates which accounts were responsible for posting
original links (i.e. links that had not been sent before by another
Twier account). The diagram shows the account that sent the tweet
(labelled Account 1, Account 2, etc. in order to retain user anonym-
ity), the domain the link pointed at (e.g. sendvid.com), the time the
tweet was sent, the language used, and whether the account was a
‘mainstream’ third-party account or a pro-IS account. Overall, we
identified 19 accounts sending original links to a total of 24 different
URLs (destinations). Two of the accounts (identified as Account 1 and
Account2 in Figure 5) were mainstream, independent third-party
accounts. A third account (Account 3) is a ‘wolf in sheep’s clothing’,
afake account, a duplicate of the account of a widely followed,
US-based new media journalist. This was an old account, unused since
mid-2013, which was hacked (presumably by IS) and used to transmit
IS propaganda since 26 March 2017.14 Account 1 is not a pro-IS user,
but more of a citizen journalism-type account that albeit outside of
the region was nonetheless in a position to supply three ‘exclusives’
(first releases) of pieces of official IS propaganda during the day. The
mainstream accounts (including the fake one) and one known IS
supporter account (i.e. Account 4) all tweeted in English. Itis likely,
however, that there were other English language accounts taken
downbefore 06.00GMT on Tuesday 4 April when we first identified
the propaganda accounts. Most of the tweets and accounts in this case
study were inArabic, with an additional small number inSomali.
What this shows is that the response to official IS propaganda being
distributed via Twitter was reasonably effective in terms of identifying
and taking down such disseminator accounts in the first 24 hours
after linking to official IS content. Comparing these rates to the rates
across our entire Pro-IS dataset, it was also clear that those accounts
disseminating official IS propaganda were taken down at a higher rate,
compared to other pro-IS accounts that were not disseminating this
propaganda. However, it must be borne in mind that pro-IS users were
operating on a 24-hour ‘news cycle’ and creating a large number of
accounts every day to disseminate daily propaganda. As these accounts
were being taken down during Tuesday, a similar number of fresh
accounts were being created and used to distribute the next day’s official
IS content. Therefore, it could be argued that, while efforts to remove
14 The account was still live as of 21 April 2017. For more on such accounts,
see Berger and Perez, The Islamic State's Diminishing Returns on Twitter,
2016, p.16.
permanent traces of IS propaganda links from Twitter were relatively
successful, IS was still able to broadcast links to its daily propaganda
using Twitter in 24-hour bursts during the research period.
Figure 5. Case study of intervention effectiveness: IS disseminator account
types, tweet timings, languages and URL destinations, 4 April 2017
4.2 COMMUNITY BREAKDOWN
What are the effects of this disruption of IS accounts? The
truncated survival rates for Pro-IS accounts meant that their
relationship networks were much sparser than for the Other
Jihadist accounts in our dataset and compared to previously
mapped IS-supporter networks on Twitter. From a more qualitative
perspective, this means that the IS Twitter community was virtually
non-existent in the research period.
[Figure 5 (captioned above) maps each of the 19 accounts that sent original links – distinguishing 'mainstream' accounts from IS accounts – to the time its tweets were sent (11.00 GMT onward), the language used (Arabic, English or Somali) and the link destination, including sendvid.com, youtube.com, youtu.be, justpaste.it, Google Photos, Google Drive, yadi.sk and archive.org.]
Table 3. Median number of tweets, followers and friends for accounts not yet suspended

                  TWEETS    FOLLOWERS    FRIENDS
Pro-IS            51        14           33
Other Jihadist    320       189          122
Table 3 compares the median number of tweets, followers
and friends of Pro-IS accounts versus those of Other Jihadists.
The short lifespan of the Pro-IS accounts meant that many had only
a small window in which to tweet, gain followers and follow other
accounts. This resulted in the Other Jihadist accounts enjoying the
opportunity to: send six times as many tweets; follow or 'friend' four
times as many accounts; and importantly, gain 13 times as many followers
as the Pro-IS accounts. An even more stark comparison is between
median figures for contemporary Pro-IS accounts versus those recorded
for similar accounts in 2014. The median number of followers for
contemporary Pro-IS accounts was 14 versus 177 in 2014,15 a decrease of
92%. The median number of accounts followed by IS supporters in 2014
was 257, while we recorded a median of 33 'friends' per Pro-IS account –
a decrease of 87%.16 In an analysis of 20,000 IS supporter accounts
over five months (September 2014 to January 2015), Berger and Morgan
observed suspension of just 678 accounts, a total loss of 3.4%.17 In our
dataset, the total loss of Pro-IS accounts in just four months (between
January and April 2017) was 63%. It is worth noting that the total loss of
pro-IS accounts over the period studied would have been dramatically
higher had we included not just accounts with a minimum of one
follower, but all the throwaway accounts generated in the same period.
Considering also those accounts we were unable to capture due to their
suspension within minutes of creation, the total loss of IS-supportive
accounts over the period was probably greater than 90%.
15 Berger and Morgan, The ISIS Twitter Census, 2015, p.30.
16 Berger and Morgan, The ISIS Twitter Census, 2015, p.32.
17 Berger and Morgan, The ISIS Twitter Census, 2015, p.33.
In the IS Twitter 'Golden Age' in 2013 and 2014, a variety of
official IS 'fighter' and an assortment of other IS 'fan' accounts could
be accessed with relative ease. For the uninitiated user, once one
IS-related account was located, the automated Twitter recommendations
on 'who to follow' accurately supplied others. For those
'in the know', pro-IS users were easily and quickly identifiable via
their choice of carousel and avatar images, along with their user
handles and screen names. Therefore, if one wished, it was quick and
easy to become connected to a large number of like-minded other
Twitter users. If sufficient time and effort was invested, it was also
relatively straightforward to become a trusted – even prominent –
member of the IS 'Twittersphere.'18 Not only was there a vibrant
overarching pro-IS Twitter community in existence at this time, but also
a whole series of strong and supportive language (e.g. Arabic, English,
French, Russian, Turkish) and/or ethnicity-based (e.g. Chechens or
'al-Shishanis') and other special interest (e.g. females or 'sisters'19)
Twitter sub-communities. Most of these special interest groups were
a mix of: a small number of users actually on the ground in Syria;
a larger number of users seeking to travel (or with a stated preference
to do so); and an even larger number of so-called 'jihobbyists'20 with
no formal affiliation to any jihadist group, but who spent their time
lauding fighters, celebrating suicide attackers and other 'martyrs' and
networking around and disseminating IS content.
In 2014, pro-IS users were already under some pressure from
Twitter; for example, official IS accounts were some of the first to be
suspended in summer 2014. Twitter's disruption activity increased
18 See, for example, the extensive media coverage of the Twitter user
@ShamiWitness who was revealed in December 2014 to be Mehdi Biswas,
a 24-year-old Bangalore-based business executive, who prior to his arrest
was one of the most prominent IS supporters on social media. Interestingly,
his Twitter account was only suspended in early 2017, despite being
dormant since his arrest. Biswas is awaiting trial in India.
19 Elizabeth Pearson, 'Wilayat Twitter and the Battle Against Islamic
State's Twitter Jihad.' VOX-Pol Blog, 11 November, 2015, www.voxpol.eu/
wilayat-twitter-and-the-battle-against-islamic-states-twitter-jihad.
20 Jarret M. Brachman, Global Jihadism: Theory and Practice, Abingdon:
Routledge, 2009, p.19.
significantly over time, forcing pro-IS users to develop and institute
a host of tactics to allow them to maintain their Twitter presences,
remain active and preserve their communities of support on the
platform.21 For example, the group used particular hashtags, such as
#baqiyyafamily ('baqiyya' means 'remain' in Arabic) to announce the
return of suspended users to the platform, in an attempt to regroup
after their suspension. Twitter eventually responded by including
these hashtags in their disruption strategies. Interestingly, this
increased disruption strengthened some IS supporters' resolve and
they became even more determined to re-establish their
accounts, even after repeated suspensions. This may have
resulted in decreased numbers of pro-IS users, but also more
close-knit and unified communities, because those who remained
needed a high level of commitment and virtual community
support to do so.22
Eventually, however, the costs of remaining began to outweigh
the benefits. Research from 2016 shows that "the depressive effects of
suspension often continued even after an account returned and was
not immediately re-suspended. Returning accounts rarely reached
their previous heights,"23 in terms of numbers of followers and
friends. This was probably due to the eventual discouragement of
many IS supporters subjected to rapid and repeated suspension.
21 For examples, see Berger and Perez, The Islamic State's Diminishing Returns
on Twitter, 2016, pp.15–18.
22 Pearson, 'Wilayat Twitter and the Battle Against Islamic State's Twitter
Jihad'. VOX-Pol Blog, 11 November, 2015, www.voxpol.eu/wilayat-twitter-
and-the-battle-against-islamic-states-twitter-jihad. See also Elizabeth
Pearson, 'Online as the New Frontline: Affect, Gender, and ISIS-takedown
on Social Media.' Studies in Conflict & Terrorism (forthcoming).
23 Berger and Perez, The Islamic State's Diminishing Returns on Twitter,
2016, p.9.
Twitter’s disruption activity
increased significantly over time,
forcing pro-IS users to develop
and institute a host of tactics
to allow them to maintain their
Twitter presences, remain active
and preserve their communities
of support on the platform.
DISRUPTING DAESH30
Eventhose who persisted had to take counter-measures such as
locking their accounts so they were no longer publicly accessible, or
diluting the content of their tweets so their commitment to IS was no
longer as readily apparent. By April 2017, these measures had taken
such hold that the vast majority of Pro-IS account avatar images were
default ‘eggs’ or other innocuous images, and many of the account
user handles and screen names were meaningless combinations of
leers and numbers (see Table 4). A conscious, supportive and
influential virtual community is almost impossible to maintain in the
face of the loss of access to such group or ideological symbols and the
resultant breakdown in commitment. Therefore, IS supporters have
re-located their social media community-building activity elsewhere,
primarily to Telegram,24 which is no longer just a back-up forTwier.25
From a quantitative perspective, the data discussed in this
section demonstrates three key findings. First, IS and their supporters
were being significantly disrupted by Twitter, where the
rate of disruption depended on the content of tweets and
out-links. Second, although all accounts experienced some
type of suspension over a period of time, Pro-IS accounts
experienced this at a much higher rate compared to the Other Jihadist
accounts in the dataset. Third, this has severely affected IS's ability to
develop and maintain robust and influential communities on Twitter.
As a result, pro-IS Twitter activity has largely been reduced to tactical
use of throwaway accounts for distributing links to pro-IS content
on other platforms, rather than as a space for public IS support and
influencing activity.
24 Prucha,‘IS and the Jihadist Information Highway’, 2016.
25 Berger and Perez, The Islamic State’s Diminishing Returns on Twier,
2016,p.15.
Table 4. Changes in account name types due to disruption activity*

TYPICAL USER HANDLES 2014–2015    TYPICAL USER HANDLES 2017
Mujahid1985                       4iM7EjZphT3OXYG
BintSham                          5Asdf68
ukhtialalmani                     Omar_08
Khilafah78                        t7dYqgYMaSB4EcI
ShamGreenbird                     GilUllul

* These are not real account screen names but composite examples constructed
for illustration purposes.
5. BEYOND TWITTER:
THE WIDER
JIHADIST SOCIAL
MEDIA ECOLOGY
Analyses of the intersections of violent extremism and terrorism
and the Internet have, for some time, been largely concerned with
social media. They have often had a singular focus on Twitter because
of its particular affordances – e.g. ease of data collection due to its
publicness, and the nature of its application programming interface (API) –
which is problematic.26 For example, EUROPOL's Internet Referral Unit
reported that, by mid-2016, they had identified "70 platforms used by
terrorist groups to spread their propaganda materials".27 Therefore, this
section of the report is concerned with the wider social media ecology
where IS supporters and other non-IS jihadist users operate, with
a particular focus on out-links from Twitter.
Partly because of its 140-character limit, Twitter functions as
a 'gateway' platform28 to other social networking sites and a diversity
of other online spaces. In 2014, it was estimated that one in every 2.5
pro-IS tweets contained a URL. It was acknowledged at the time that
it would be useful to analyse these links, but this was not undertaken
due to complications around Twitter's URL-shortening practices.29
The roll-out of auto-expanding link previews by Twitter in July 2015
remedied this difficulty. In terms of link activity in our data, most links
were not out-links, but rather in-links (i.e. within Twitter): 8,086 or 14%
for Pro-IS and 4,650 or 7.5% for Other Jihadist tweets. Of the tweets
sent by the Pro-IS and Other Jihadist Twitter accounts we identified,
around 1 in 8 (approximately 13%) contained non-Twitter URLs or
out-links. This is a considerable reduction from the 40% of tweets
reportedly containing URLs in 2014. Analysis of Twitter out-links
nonetheless provides an interesting snapshot of the
26 Maura Conway, 'Determining the Role of the Internet in Violent Extremism
and Terrorism: Six Suggestions for Progressing Research.' Studies in Conflict
and Terrorism, 40(1), 2017, p.9 and p.12.
27 EUROPOL, EU Internet Referral Unit: Year One Report, The Hague:
EUROPOL, 2016, p.11, https://www.europol.europa.eu/publications-
documents/eu-internet-referral-unit-year-one-report-highlights.
28 Derek O'Callaghan, Derek Greene, Maura Conway, Joe Carthy and
Pádraig Cunningham, 'Uncovering the Wider Structure of Extreme
Right Communities Spanning Popular Online Networks.' In WebSci '13:
Proceedings of the 5th Annual ACM Web Science Conference, New York:
ACM Digital Library, 2013, pp.276–285.
29 Berger and Morgan, The ISIS Twitter Census, 2015, p.21.
Top 10 platforms linked to by Pro-IS and Other Jihadist accounts in our
data-collection period (see Table 5).
Table 5. Top 10 other platforms (based on out-links from Twitter)

Pro-IS
1. YouTube – 1,330 (2.3% of all Pro-IS tweets)
2. Google Drive – 792 (1.4%)
3. justpaste.it – 472 (0.82%)
4. Google Photos – 431 (0.75%)
5. sendvid.com – 410 (0.71%)
6. archive.org – 353 (0.61%)
7. archive.is – 243 (0.42%)
8. Bahasa IS fan site – 198 (0.34%)
9. medium.com – 155 (0.27%)
10. Unofficial Arabic IS news site – 139 (0.24%)

Other Jihadist
1. YouTube – 2,488 (4.0% of all Other Jihadist tweets)
2. Facebook – 1,294 (2.1%)
3. justpaste.it – 479 (0.77%)
4. Islamic prayers website – 316 (0.51%)
5. Taliban news website – 244 (0.39%)
6. Official Taliban website – 228 (0.37%)
7. Taliban's official Urdu website – 208 (0.33%)
8. Hizb ut-Tahrir website – 189 (0.30%)
9. Telegram – 111 (0.18%)
10. Taliban's official English website – 103 (0.17%)

Interestingly, YouTube was the top linked-to platform for both Pro-IS
and Other Jihadist accounts. This points to the overall importance of
YouTube and of video to Web 2.0 in the jihadist online scene. Facebook
does not appear in the Top 10 out-links for Pro-IS accounts. This indicates
that, like Twitter, Facebook is also engaged in differential disruption,
as it is the second most preferred platform for out-linking by Other
Jihadists. The somewhat obscure justpaste.it content upload site has
been known for some time as a core node in the 'jihadisphere' and so it is
unsurprising that it should appear as the third most linked-to site for both
Pro-IS and Other Jihadist accounts.
Other content upload desti-
nations preferred by Pro-IS users,
including Google Drive, Sendvid,
Google Photos and the Web Archive,
do not appear in the Other Jihadist
Top 10. One reason for this is prob-
ably the focus of Other Jihadists on
linking to traditional proprietary websites, such as the Taliban’s suite
of sites. It is worth mentioning that, while Telegram slips into the Top
10 for Other Jihadists, only 20 (or 0.04%) of all tweets from Pro-IS
accounts contained a telegram.me link. The paucity of such links
caused us to explore further; we were surprised to find that just two
of 722 Pro-IS users’ biographies and two of 451 Other Jihadist users’
biographies contained Telegram links. Neither group of accounts was
using Twier to advertise ways into Telegram.
5.1 CASE STUDY: DESTINATIONS OF OFFICIAL IS PROPAGANDA
As mentioned, when we undertook our research, IS was operating
a 24-hour 'news cycle,' disseminating a daily batch of new official
propaganda via social media channels, including Twitter. Links
to the propaganda were circulated via tweets and other means.
These links pointed to a wide variety of other social media platforms
and content hosts to which propaganda was uploaded daily. We
analysed a sample of these propaganda destinations at three time
points: 4–8 February, 4–8 March (excluding 7 March, see below), and
4–8 April 2017. We obtained the full daily roster of IS propaganda
and the sites where it appeared for each of these time periods. This
allowed us to identify the most frequently linked-to platforms, along
with how many pieces of propaganda were posted by host domains,
and what proportion of these URLs were subsequently taken down
(see Figure 6).
Figure 6. Destinations of official IS propaganda: Number of URLs and
URL destinations February to April 2017
* Excludes 7 March which had 240 URLs (Rumiyah release)
Overall, over these three time periods, Pro-IS users linked to
39 different third-party platforms or sites hosting IS propaganda
material; IS also ran its own server to host material.30 It is
important to note that the former were exclusively (we believe) 'leaf'
destinations. That is, they contained content but no links to other
sites, so did not have a networking or community-building aspect.
Someone visiting such a page would learn nothing about the network
of other sites. Important exceptions to this were YouTube and a small
number of other sites which algorithmically 'recommend' similar
content in their inventory, which may have resulted in their pointing
to other available IS propaganda.31 During our analysis, the average
number of URLs populated rose from 42 per day in February to
52 per day in April. This hints at increasing fragmentation and
30 This server had five names over the three periods studied because each
domain name was rapidly taken down.
31 Derek O’Callaghan, Derek Greene, Maura Conway, Joe Carthy and Pádraig
Cunningham, ‘Down the (White) Rabbit Hole: The Extreme Right and
Online Recommender Systems.’ Social Science Computer Review, 2015,
33(4), pp.459–478.
[Figure 6 is a stacked bar chart showing the share of propaganda URLs by destination – justpaste.it, IS's own server, archive.org, sendvid.com, YouTube, Google Drive and 34 other domains – for the 4–8 February, 4–8 March* and 4–8 April 2017 sampling periods, with mean URLs per day of 42, 46 and 52 respectively.]
dispersal, possibly in response to takedown activity by a variety
of platforms and sites. However, there was a large inter-day variation
(20 to 65) and we excluded one outlier day on 7 March, the publication
date of issue 7 of Rumiyah magazine. On this day, IS pushed
240 separate URLs, a quarter of which contained direct reference to
Rumiyah in the link, and many more which probably linked to the
new issue of the magazine.
Out of the 40 domains used (39 external, one internal server) there
was a consistent 'big 6' across the three time periods: justpaste.it; IS's
own server; archive.org; sendvid.com; YouTube; and Google Drive.
These six domains accounted for 83%, 70% and 67% of the URLs in
the February, March and April sampling periods respectively. However,
there was a noticeable declining trend in the use of justpaste.it and
IS's own servers. Between them, these accounted for 40% of URLs in
February, declining to only 18% by April. Recently the Amaq News
Agency website has come under repeated attack, which may be responsible
for its relative downgrading.32 Use of sendvid.com and archive.org
varied across the time periods, while Google Drive and YouTube were
consistently heavily used; YouTube use showed an increasing trend
(7%, 11% and 12%, respectively). The remaining URLs (17% in February
rising to 33% of URLs by April) were spread across a wide variety of
mainly, though not exclusively, content upload sites: 34 in total.
We also analysed what proportion of IS propaganda content
had been taken down successfully. We found that the takedown
rate (as of 12 April) was 72%, 66% and 72% for the February, March
and April samples respectively. Overall, 30% of links were still live
on 12 April. This suggests that takedown activity is relatively rapid
(occurring over a matter of days after propaganda is posted) and
widespread (across a multiplicity of sites and platforms).
32 Lizzie Dearden, 'ISIS Losing Ground in Online War Against Hackers After
Westminster Attack Turns Focus on Internet Propaganda', The Independent,
1 April 2017.
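The takedown rates above rest on re-checking each collected propaganda URL at a later date. The report does not detail how liveness was verified; the following is a hedged sketch of one simple approach, using HTTP status codes as a rough proxy for removal (some hosts return a 200 'content removed' page, so a real check would need per-platform handling).

```python
# Illustrative sketch: estimate the share of previously collected propaganda URLs
# that are no longer reachable. HTTP status is only a rough proxy for takedown;
# some platforms serve a 200 response with a 'content removed' notice instead.
import requests

def takedown_rate(urls, timeout=10):
    removed = 0
    for url in urls:
        try:
            response = requests.head(url, allow_redirects=True, timeout=timeout)
            if response.status_code >= 400:
                removed += 1
        except requests.RequestException:
            removed += 1  # unreachable host or dead domain counted as taken down
    return removed / len(urls) if urls else 0.0
```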
6. RECOMMENDATIONS
DISRUPTING DAESH39
Social media monitoring systems have the ability to dramatically increase the speed and effectiveness of data gathering, analysis and (potentially) intervention. To work effectively, however, they must deploy a combination of suitable technology solutions, including analytical systems, with trained human analysts who are versed in the domain in which they are deployed and, preferably, also the relevant languages. This is particularly the case where an adversary is actively trying to evade tracking efforts. Technology such as Method52 helps by allowing the analyst to rapidly develop new analytical pipelines that take into account day-to-day changes in modes of operation. However, technology cannot detect such changes; these can generally only be spotted by a human well-versed in the particular domain of interest.
Some IS supporters remain active on Twitter. Content disseminators using throwaway accounts could probably be degraded further, though this would have both pros (e.g. a detrimental impact on the last remaining significant IS-supporter activity on Twitter) and cons (e.g. further degradation of Twitter as a source of data or open-source intelligence on IS). Like all disruption activity, whether this is viewed positively or negatively depends on one's perspective and institutional interests. For example, law enforcement tends to favour this approach, whereas free-speech advocates warn against corporate policing of political speech, even if that speech is deeply objectionable. Some intelligence professionals, on the other hand, advocate for greater attention to social media intelligence.33
Our focus in this report has not been on Twitter alone; we also point to the importance of the wider jihadist social media ecology. Nor was our analysis restricted to IS users and content: we underline, too, the presence and often uninterrupted online activity of non-IS jihadists. In recent years, many counter-terrorism professionals tasked with examining the role of the Internet in violent extremism and terrorism have narrowed their focus to IS.
33 David Omand, Jamie Bartlett and Carl Miller, 'Introducing Social Media Intelligence (SOCMINT)', Intelligence and National Security, 27(6), 2012, pp. 801–823.
Scholarly researchers have acted similarly, many narrowing their focus further to IS Twitter activity. We recommend against continued analytical contraction. Instead, we point to the need to maintain a wide-angle view of online activity by diverse other jihadists across a variety of social media and other online platforms. This is particularly important given the shifting fortunes of IS and HTS on the ground in Iraq and Syria. In the face of increasing loss of physical territory, the continued, and potentially increasing, importance of online 'territory' should not be underestimated. We are not suggesting that a focus on IS should be dispensed with, but the significantly less-impeded online activity of HTS is surely an important asset for them and worth monitoring.
Because data collection on, and analysis of, other terrorist groups and their online platforms have been neglected, very few historical metrics are available for comparative analyses. We should guard against such neglect in future too.
7. FUTURE RESEARCH
As noted earlier, our Other Jihadist category was a convenience sample of non-IS jihadist accounts. For future research, we therefore propose replicating the present research, but with a larger and more equal sample of HTS, Ahrar al-Sham and Taliban accounts. This would allow for a more systematic and comparative analysis of the disruption levels for a range of non-IS jihadists, including those with a significant international terrorism footprint (i.e. HTS), groups with a significant national and regional terrorism profile (i.e. Taliban), and a party to the Syria conflict (i.e. Ahrar al-Sham).34 Such an analysis could help to ascertain the vibrancy of their contemporary Twitter communities and Twitter out-linking practices, and allow us to identify their preferred other online platforms.
Additional research is clearly warranted into the wider violent jihadist social media ecology. We therefore recommend wider and more in-depth research into:
1. patterns of use, including community-building and influencing activity; and
2. levels of disruption on other platforms besides Twitter, including other major platforms, such as YouTube, as well as smaller or more obscure platforms, such as justpaste.it.
We also suggest analysing pro-IS and other jihadist activity on Telegram, which is almost certainly where the IS online community has reconstituted itself, and comparing this with our present findings. It would also be worthwhile to analyse out-linking trends on Telegram, to see whether different platforms affect the effectiveness of linking practices.
34 Nationally, Syria, Russia, Iran, Egypt, and the UAE have designated Ahrar
al-Sham as a terrorist organisation. Internationally, the US, Britain, France,
and Ukraine blocked a May 2016 Russian proposal to the United Nations to
take a similar step.
8. CONCLUSION
Our analysis suggests that the costs for most pro-IS users of engaging on Twitter (in terms of deflated morale, diffused messages and the persistent effort needed to maintain a public presence) now largely outweigh the benefits. This means that the IS Twitter community is now almost non-existent. In turn, this means that radicalisation, recruitment and attack-planning opportunities on this platform have probably also decreased. However, a hard core of users remains persistent. In particular, a subset of established throwaway disseminator accounts pushed out 'official' IS content in a daily cycle during our data-collection period. These accounts were generally suspended within 24 hours, but not before they had promoted links to content hosted on other platforms. This included major new content, such as a new issue of the monthly IS Rumiyah magazine.
This report is mainly concerned with pro-IS Twitter accounts and their disruption. However, IS are not the only jihadists active on Twitter, and a host of other violent jihadists were shown to be subject to much lower levels of disruption by Twitter. Also, IS and other jihadist groups remain active on a wide range of other social media platforms, content-hosting sites and other cyberspaces, including blogs, forums and dedicated websites. While it appears that official IS content is being disrupted in many of these online spaces, the extent of this disruption is yet to be fully determined.
This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 312827.
Email info@voxpol.eu
Twitter @VOX_Pol
www.voxpol.eu
The VOX-Pol Network of Excellence (NoE) is a European Union Framework Programme 7 (FP7)-funded academic research network focused on researching the prevalence, contours, functions, and impacts of Violent Online Political Extremism and responses to it.