
Hidden resilience and adaptive dynamics of the global online hate ecology

Authors: N. F. Johnson, R. Leahy, N. Johnson Restrepo, N. Velasquez, M. Zheng, P. Manrique, P. Devkota and S. Wuchty

Abstract and Figures

Online hate and extremist narratives have been linked to abhorrent real-world events, including a current surge in hate crimes1–6 and an alarming increase in youth suicides that result from social media vitriol7; inciting mass shootings such as the 2019 attack in Christchurch, stabbings and bombings8–11; recruitment of extremists12–16, including entrapment and sex-trafficking of girls as fighter brides17; threats against public figures, including the 2019 verbal attack against an anti-Brexit politician, and hybrid (racist–anti-women–anti-immigrant) hate threats against a US member of the British royal family18; and renewed anti-western hate in the 2019 post-ISIS landscape associated with support for Osama Bin Laden’s son and Al Qaeda. Social media platforms seem to be losing the battle against online hate19,20 and urgently need new insights. Here we show that the key to understanding the resilience of online hate lies in its global network-of-network dynamics. Interconnected hate clusters form global ‘hate highways’ that—assisted by collective online adaptations—cross social media platforms, sometimes using ‘back doors’ even after being banned, as well as jumping between countries, continents and languages. Our mathematical model predicts that policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish. We observe the current hate network rapidly rewiring and self-repairing at the micro level when attacked, in a way that mimics the formation of covalent bonds in chemistry. This understanding enables us to propose a policy matrix that can help to defeat online hate, classified by the preferred (or legally allowed) granularity of the intervention and top-down versus bottom-up nature. We provide quantitative assessments for the effects of each intervention. This policy matrix also offers a tool for tackling a broader class of illicit online behaviours21,22 such as financial fraud.
Global ecology of online hate clusters. a, Schematic of the resilient hate ecology that we find flourishing online, mixing hate narratives, languages and cultures across platforms. A1, A2 and A3 denote three types of self-organized adaptation that we observe, which quickly build new bridges between otherwise independent platforms (see main text). We focus on Facebook (FB) and VKontakte (VK) clusters, shown as large blue and red symbols, respectively; different shapes represent different hate narratives. An undirected (that is, no arrowhead) coloured link between two hate clusters indicates a strong two-way connection. Small black circles indicate users, who may be members of 1, 2, 3, … hate clusters; a directed (that is, with arrowhead) link indicates that the user is a member of that hate cluster. b, Placing hate clusters at the location of their activity (for example, ‘Stop White Genocide in South Africa’ (SA)) reveals a complex web of global hate highways built from these strong inter-cluster connections. Only the basic skeleton is shown. Bridges between Facebook and VKontakte (for example, A1, A2 and A3 in a) are shown in green. When the focus of a hate cluster is an entire country or continent, the geographical centre is chosen. Inset shows the dense hate-highway interlinkage across Europe. c, Microscale view of the actual KKK hate-cluster ecosystem. The ForceAtlas2 algorithm used is such that the further apart two clusters are, the fewer users they have in common. Hate-cluster radii are determined by the number of members. d, Schematic showing the synapse-like nature of individual hate clusters.
… 
Mathematical model showing resilience of hate-cluster ecology. a, Connected hate clusters from Fig. 1a, trying to establish links from a platform such as VKontakte (subset 1b) to a better-policed platform such as Facebook (platform 2), run the risk (cost R) of being noticed by moderators of Facebook and hence of sanctions and legal action. Because more links create more visibility and hence more risk, we assume that the cost of accessing platform 2 from platform 1 is proportional to the number of links, ρ. b, Mathematical prediction from this model (equation (1)) shows that the average shortest path ℓ̄ between hate clusters in VKontakte (subset 1b) has a minimum ℓ̄min as a function of the number of links ρ into platform 2 (Facebook). For any reasonably large number of inter-platform links ρ > ρmin, our theory predicts that the action of platform 2 (such as Facebook) to reduce the number of links ρ will lead to an unwanted decrease in the average shortest path ℓ̄ as ρ decreases towards ρmin. In addition, as the universe of social media expands in the future to many interconnected platforms, as shown schematically in a, our theory predicts that the combined effect of having independent moderators on each platform will be to create spontaneous dark pools of hate (dark region in a).
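Equation (1) itself is not reproduced in this preview, so the following is only a qualitative sketch of the trade-off described in the caption: it assumes a toy effective-path form ℓ̄(ρ) ≈ a/ρ + Rρ, in which shortcuts through platform 2 shorten paths (the a/ρ term) while each link carries a policing risk cost (the Rρ term). The constants a and R and the functional form are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Illustrative toy form only (the paper's equation (1) is not reproduced here):
# assume the shortest-path benefit of rho inter-platform links scales like a/rho,
# while the policing risk adds a cost proportional to rho.
a = 100.0   # hypothetical size/benefit scale of the platform-1 hate network
R = 1.0     # hypothetical risk cost per link into the better-policed platform

rho = np.linspace(1, 50, 500)
ell_bar = a / rho + R * rho       # toy average "effective" shortest path
rho_min = np.sqrt(a / R)          # analytic minimum of this toy form

print(f"toy minimum at rho_min ~ {rho_min:.1f}, "
      f"ell_bar at minimum ~ {ell_bar[np.argmin(ell_bar)]:.1f}")
# For rho > rho_min, ell_bar increases with rho, so a moderator who deletes links
# (reducing rho towards rho_min) inadvertently shortens the effective paths
# between hate clusters, which is the qualitative effect described in Fig. 2b.
```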
… 
LETTER https://doi.org/10.1038/s41586-019-1494-7
Hidden resilience and adaptive dynamics of the global online hate ecology
N. F. Johnson1*, R. Leahy1, N. Johnson Restrepo1, N. Velasquez2, M. Zheng3, P. Manrique3, P. Devkota4 & S. Wuchty4
Online hate and extremist narratives have been linked to abhorrent real-world events, including a current surge in hate crimes1–6 and an alarming increase in youth suicides that result from social media vitriol7; inciting mass shootings such as the 2019 attack in Christchurch, stabbings and bombings8–11; recruitment of extremists12–16, including entrapment and sex-trafficking of girls as fighter brides17; threats against public figures, including the 2019 verbal attack against an anti-Brexit politician, and hybrid (racist–anti-women–anti-immigrant) hate threats against a US member of the British royal family18; and renewed anti-western hate in the 2019 post-ISIS landscape associated with support for Osama Bin Laden’s son and Al Qaeda. Social media platforms seem to be losing the battle against online hate19,20 and urgently need new insights. Here we show that the key to understanding the resilience of online hate lies in its global network-of-network dynamics. Interconnected hate clusters form global ‘hate highways’ that—assisted by collective online adaptations—cross social media platforms, sometimes using ‘back doors’ even after being banned, as well as jumping between countries, continents and languages. Our mathematical model predicts that policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish. We observe the current hate network rapidly rewiring and self-repairing at the micro level when attacked, in a way that mimics the formation of covalent bonds in chemistry. This understanding enables us to propose a policy matrix that can help to defeat online hate, classified by the preferred (or legally allowed) granularity of the intervention and top-down versus bottom-up nature. We provide quantitative assessments for the effects of each intervention. This policy matrix also offers a tool for tackling a broader class of illicit online behaviours21,22 such as financial fraud.
Current strategies to defeat online hate tend towards two ends of the scale: a microscopic approach that seeks to identify ‘bad’ individual(s) in the sea of online users1,14,16, and a macroscopic approach that bans entire ideologies, which results in allegations of stifling free speech23. These two approaches are equivalent to attempts to understand how water boils by looking for a bad particle in a sea of billions (even though there is not one for phase transitions24), or the macroscopic viewpoint that the entire system is to blame (akin to thermodynamics24). Yet, the correct science behind extended physical phenomena24 lies at the mesoscale, in the self-organized cluster dynamics of the developing correlations, with the same thought to be true for many social science settings25–27.
A better understanding of how the ecology of online hate evolves could create more effective intervention policies. Using entirely public data from different social media platforms, countries and languages, we find that online hate thrives globally through self-organized, mesoscale clusters that interconnect to form a resilient network-of-networks of hate highways across platforms, countries and languages (Fig. 1). Our mathematical theory shows why single-platform policing (for example, by Facebook) can be ineffective (Fig. 2) and may even make things worse. We find empirically that when attacked, the online hate ecology can quickly adapt and self-repair at the micro level, akin to the formation of covalent bonds in chemistry (Fig. 3). We leave a detailed study of the underlying social networks to future work because our focus here is on the general cross-platform behaviour. Knowledge of these features of online hate enables us to propose a set of interventions to thwart it (Fig. 4).
Our analysis of online clusters does not require any information about individuals, just as information about a specific molecule of water is not required to describe the bubbles (that is, clusters of correlated molecules) that form in boiling water. Online clusters such as groups, communities and pages are a popular feature of platforms such as Facebook and VKontakte, which is based in central Europe, has hundreds of millions of users worldwide, and had a crucial role in previous extremist activity27. Such online clusters allow several individual users to self-organize around a common interest27 and they collectively self-police to remove trolls, bots and adverse opinions. Some people find it attractive to join a cluster that promotes hate because its social structure reduces the risk of being trolled or confronted by opponents. Even on platforms that do not have formal groups, quasi-groups can be formed (for example, Telegram). Although Twitter has allowed some notable insights26, we do not consider it here as its open-follower structure does not fully capture the tendency of humans to form into tight-knit social clusters (such as VKontakte groups) in which they can develop thoughts without encountering opposition. Our online cluster search methodology generalizes that previously described27 to multiple social media platforms and can be repeated for any hate topic (see Methods for full details).
The global hate ecology that we find flourishing online is shown in Fig. 1a, b. The highly interconnected network-of-networks28–30 mixes hate narratives across themes (for example, anti-Semitic, anti-immigrant, anti-LGBT+), languages, cultures and platforms. This online mixing manifested itself in the 2019 attack in Christchurch: the presumed shooter was Australian, the attack was in New Zealand, and the guns carried messages in several European languages on historical topics that are mentioned in online hate clusters across continents. We uncover hate clusters of all sizes—for example, the hate-cluster distribution for the ideology of the Ku Klux Klan (KKK) on VKontakte has a high goodness-of-fit value for a power-law distribution (Extended Data Fig. 1). This suggests that the online hate ecology is self-organized, because it would be almost impossible to engineer this distribution using top-down control. The estimated power-law exponent is consistent with a sampling of anti-western hate clusters as well as the online ecology of financial fraud21, suggesting that our findings and policy suggestions can help to tackle a broader class of illicit online behaviours21,22.
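Extended Data Fig. 1 is not shown in this preview. For readers who want to perform a comparable fit on their own cluster-size data, a minimal sketch using the standard continuous maximum-likelihood estimator for a power-law exponent (as popularized by Clauset, Shalizi and Newman) is given below; the sample sizes are synthetic and purely illustrative.

```python
import math

def powerlaw_exponent_mle(sizes, x_min):
    """Continuous MLE for a power-law exponent alpha:
    alpha_hat = 1 + n / sum(ln(x_i / x_min)) over all sizes x_i >= x_min.
    Returns (alpha_hat, standard_error), with std err = (alpha_hat - 1) / sqrt(n)."""
    tail = [x for x in sizes if x >= x_min]
    n = len(tail)
    log_sum = sum(math.log(x / x_min) for x in tail)
    alpha_hat = 1.0 + n / log_sum
    std_err = (alpha_hat - 1.0) / math.sqrt(n)
    return alpha_hat, std_err

# Synthetic cluster sizes (illustrative numbers only, not the paper's data):
sizes = [12, 40, 7, 150, 23, 9, 310, 55, 18, 72]
print(powerlaw_exponent_mle(sizes, x_min=7))
```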
We observe operationally independent platforms—that are also commercial competitors—becoming unwittingly coupled through dynamical, self-organized adaptations of the global hate-cluster networks. This resilience helps the hate ecology to recover quickly after the banning of single platforms. The three types of adaptation bridging VKontakte
1Physics Department, George Washington University, Washington, DC, USA. 2Elliot School of International Affairs, George Washington University, Washington, DC, USA. 3Physics Department,
University of Miami, Coral Gables, FL, USA. 4Computer Science Department, University of Miami, Coral Gables, FL, USA. *e-mail: neiljohnson@gwu.edu
... In addition, all in-built communities can feature links into other communities whose content is of interest to them, within the same, and also across different, social media platforms. Again, extremist communities are no different, and typically do this a lot across platforms in order to keep their members away from moderator pressure (Johnson et al., 2019; Velásquez et al., 2021). The net result is a highly complex, interconnected ecosystem of communities within and across platforms, together with links into external information sources of various kinds. ...
... In what follows, we refer to each such online community (e.g., Facebook Page, Telegram Channel) as a "cluster" in order to avoid confusion with platform-specific definitions and network discovery algorithms. Our choice of the word "cluster" is exactly as in previous published work (Johnson et al., 2019; Velásquez et al., 2021). For example, it avoids any possible confusion with the term "community" in network science, which has the different meaning of a subnetwork that is inferred from a specific partitioning algorithm and hence is algorithm dependent. ...
... VKontakte was banned in Ukraine in 2017 in an attempt to stem Russian disinformation, yet it persisted as the country's 4th most popular site. The smaller platforms 4chan, Gab and Telegram add to this mix, with their users' many links to each other and to the larger platforms helping to entangle the ecosystem more tightly (Johnson et al., 2019; Velásquez et al., 2021). Moreover, they tend to be more lenient toward hate speech and conspiracy theories (McLaughlin, 2019). ...
Article
The current military conflict between Russia and Ukraine is accompanied by disinformation and propaganda within the digital ecosystem of social media platforms and online news sources. One month prior to the conflict's February 2022 start, a Special Report by the U.S. Department of State had already highlighted concern about the extent to which Kremlin-funded media were feeding the online disinformation and propaganda ecosystem. Here we address a closely related issue: how Russian information sources feed into online extremist communities. Specifically, we present a preliminary study of how the sector of the online ecosystem involving extremist communities interconnects within and across social media platforms, and how it connects into such official information sources. Our focus here is on Russian domains, European Nationalists, and American White Supremacists. Though necessarily very limited in scope, our study goes beyond many existing works that focus on Twitter, by instead considering platforms such as VKontakte, Telegram, and Gab. Our findings can help shed light on the scope and impact of state-sponsored foreign influence operations. Our study also highlights the need to develop a detailed map of the full multi-platform ecosystem in order to better inform discussions aimed at countering violent extremism.
... On one hand, influential accounts may consistently and repeatedly amplify misinformation (Grinberg et al. 2019;EIPT 2020), such that the removal of one central 'hub' dramatically reduces the flow (Albert et al. 2000). On the other hand, misinformation may be diffused by large numbers of smaller-scale agents, such that the removal of smaller groups is needed to weaken the larger ones (Johnson et al. 2019). These models and strategies are not mutually exclusive (Hedström et al. 2000;Rivera et al. 2010); information networks are dynamic and may shift in time between centralized influentials versus diffusive sharing (Stopczynski et al. 2014). ...
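As a toy illustration of the two removal strategies contrasted in the excerpt above, the sketch below compares deleting the single highest-degree hub with deleting many low-degree nodes in a synthetic scale-free graph, using the networkx library; the graph model, its parameters and the number of removed nodes are illustrative assumptions only.

```python
import networkx as nx

def giant_component_fraction(G):
    """Fraction of nodes in the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

# Toy scale-free network standing in for an information-sharing ecosystem
# (parameters are illustrative only).
G = nx.barabasi_albert_graph(n=2000, m=2, seed=1)

# Strategy 1: remove the single most-connected hub.
hub_removed = G.copy()
hub = max(hub_removed.degree, key=lambda kv: kv[1])[0]
hub_removed.remove_node(hub)

# Strategy 2: remove many low-degree nodes instead.
small_removed = G.copy()
small_nodes = sorted(small_removed.degree, key=lambda kv: kv[1])[:50]
small_removed.remove_nodes_from(node for node, _ in small_nodes)

print("baseline giant component :", giant_component_fraction(G))
print("after hub removal        :", giant_component_fraction(hub_removed))
print("after 50 small removals  :", giant_component_fraction(small_removed))
```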
... It will also be fruitful to observe how users move between platforms when marginalized or blocked by authorities (Johnson et al. 2019). To counter discourse from the 2020 protests, the Belarusian government blocked access of Belarusian internet users to independent online news organizations, social media platforms, search engines, and mobile internet (Stratcom 202). ...
Article
Full-text available
Analysts of social media differ in their emphasis on the effects of message content versus social network structure. The balance of these factors may change substantially across time. When a major event occurs, initial independent reactions may give way to more social diffusion of interpretations of the event among different communities, including those committed to disinformation. Here, we explore these dynamics through a case study analysis of the Russian-language Twitter content emerging from Belarus before and after its presidential election of August 9, 2020. From these Russian-language tweets, we extracted a set of topics that characterize the social media data and construct networks to represent the sharing of these topics before and after the election. The case study in Belarus reveals how misinformation can be re-invigorated in discourse through the novelty of a major event. More generally, it suggests how audience networks can shift from influentials dispensing information before an event to a de-centralized sharing of information after it. Supplementary information: The online version contains supplementary material available at 10.1007/s43545-022-00330-x.
... 108 Echo chamber effects have been looked at across extremist milieus, including their role in extreme right radicalization and the spread of conspiracy theories in particular. 109 This concept, however, assumes that individuals select media and content that reinforce pre-existing beliefs and lead to polarization and radicalization based on their interest and political partisanship. In the case of SOF and S.W.A.T. units' vulnerability to this risk factor, it appears the mechanism might be reversed or at least somewhat altered. ...
Article
Full-text available
This article explores potential vulnerability factors for extreme right radicalization of Special Operation Forces (SOF) and Special Weapons and Tactics (S.W.A.T.) personnel in Western countries. Drawing on inquiry commission reports regarding extreme right behavior or ethical misconduct by six elite units from four countries (Germany, Canada, Australia, the U.S.), this article argues that a lack of diversity in gender and ethnicity, elite warrior subcultures, echo chamber effects and cognitive rigidity can become vulnerability factors for extreme right radicalization. Further, the article highlights the need for targeted resilience measures among SOF and S.W.A.T. units designed to counter such processes.
... Target hate speech, they may violate online free speech (Mathew et al., 2019). Additionally, attacks at the micro-level may be ineffective as hate networks often have rapid rewiring and self-repairing mechanisms (Johnson et al., 2019). Counter speech refers to the "direct response that counters hate speech" (Mathew et al., 2019). ...
Preprint
Hate speech is plaguing cyberspace along with user-generated content. This paper investigates the role of conversational context in the annotation and detection of online hate and counter speech, where context is defined as the preceding comment in a conversation thread. We created a context-aware dataset for a 3-way classification task on Reddit comments: hate speech, counter speech, or neutral. Our analyses indicate that context is critical to identify hate and counter speech: human judgments change for most comments depending on whether we show annotators the context. A linguistic analysis draws insights into the language people use to express hate and counter speech. Experimental results show that neural networks obtain significantly better results if context is taken into account. We also present qualitative error analyses shedding light on (a) when and why context is beneficial and (b) the remaining errors made by our best model when context is taken into account.
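A minimal sketch of the general idea of context-aware classification, assuming the preceding comment is supplied to a transformer encoder as the first element of a sentence pair. The backbone model name, the label ordering and the untrained classification head are placeholders; this is not the authors' model or dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch of context-aware 3-way classification: the parent comment and the target
# comment are encoded as a sentence pair so the classifier can condition on context.
model_name = "bert-base-uncased"   # placeholder backbone, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)
labels = ["hate", "counter_speech", "neutral"]   # label order is an assumption

parent = "Example preceding comment in the thread."
target = "Example reply whose label depends on that context."

inputs = tokenizer(parent, target, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[int(logits.argmax(dim=-1))])   # untrained head: illustrative output only
```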
... A cross-platform study of hateful users revealed how the banning of users could backfire for the platform and the Internet community at large [27]. ...
Preprint
Curbing online hate speech has become the need of the hour; however, a blanket ban on such activities is infeasible for several geopolitical and cultural reasons. To reduce the severity of the problem, in this paper, we introduce a novel task, hate speech normalization, that aims to weaken the intensity of hatred exhibited by an online post. The intention of hate speech normalization is not to support hate but instead to provide the users with a stepping stone towards non-hate while giving online platforms more time to monitor any improvement in the user's behavior. To this end, we manually curated a parallel corpus - hate texts and their normalized counterparts (a normalized text is less hateful and more benign). We introduce NACL, a simple yet efficient hate speech normalization model that operates in three stages - first, it measures the hate intensity of the original sample; second, it identifies the hate span(s) within it; and finally, it reduces hate intensity by paraphrasing the hate spans. We perform extensive experiments to measure the efficacy of NACL via three-way evaluation (intrinsic, extrinsic, and human-study). We observe that NACL outperforms six baselines - NACL yields a score of 0.1365 RMSE for the intensity prediction, 0.622 F1-score in the span identification, and 82.27 BLEU and 80.05 perplexity for the normalized text generation. We further show the generalizability of NACL across other platforms (Reddit, Facebook, Gab). An interactive prototype of NACL was put together for the user study. Further, the tool is being deployed in a real-world setting at Wipro AI as a part of its mission to tackle harmful content on online platforms.
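The abstract above only summarizes NACL's three stages, so the following skeleton merely fixes the control flow one might use to chain them; every component model, threshold and parameter name is a hypothetical placeholder rather than NACL itself.

```python
def normalize_hate_speech(text, intensity_model, span_model, paraphraser,
                          max_rounds=3, target_intensity=0.3):
    """Skeleton of a three-stage normalization loop as summarized in the abstract:
    (1) score hate intensity, (2) locate hate spans, (3) paraphrase those spans.
    All three component models are hypothetical placeholders, not NACL itself."""
    for _ in range(max_rounds):
        intensity = intensity_model(text)       # stage 1: scalar intensity in [0, 1]
        if intensity <= target_intensity:
            break
        spans = span_model(text)                # stage 2: list of (start, end) offsets
        # Replace the rightmost span first so earlier offsets stay valid.
        for start, end in sorted(spans, reverse=True):
            text = text[:start] + paraphraser(text[start:end]) + text[end:]
    return text
```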
... Social media platforms seem to be the perfect place for disseminating hate speech due to not only several of their defining characteristics but also the characteristics of social media users. For one, social media platforms enable hate groups to develop, connect, and organize, even internationally, and the resulting clusters of hate (Johnson et al., 2019) facilitate the spread of hate speech across platforms (Nakamura, 2014). For another, actual or perceived anonymity in social media environments and the invisibility of other individuals can embolden users to "be more outrageous, obnoxious, or hateful in what they say" (Brown, 2018: 298). ...
Article
Although many social media users have reported encountering hate speech, differences in the perception between different users remain unclear. Using a qualitative multi-method approach, we investigated how personal characteristics, the presentation form, and content-related characteristics influence social media users' perceptions of hate speech, which we differentiated as first-level (i.e. recognizing hate speech) and second-level perceptions (i.e. attitude toward it). To that end, we first observed 23 German-speaking social media users as they scrolled through a fictitious social media feed featuring hate speech. Next, we conducted remote self-confrontation interviews to discuss the content and semi-structured interviews involving interactive tasks. Although it became apparent that perceptions are highly individual, some overarching tendencies emerged. The results suggest that the perception of and indignation toward hate speech decreases as social media use increases. Moreover, direct and prosecutable hate speech is perceived as being particularly negative, especially in visual presentation form.
... HS classifiers that detect abusive content online and flag it for human moderation or automatic deletion are the most common computational approach to counter HS online (Jurgens et al., 2019). These classifiers are furthermore important research tools, e.g., to explore the dynamics of specific types of HS online (Johnson et al., 2019;Uyheng and Carley, 2021) or to identify common targets of abuse that require special protection (Silva et al., 2021). The algorithms behind HS classifiers are manifold (Schmidt and Wiegand, 2017), ranging from statistical machine learning methods (Saleem et al., 2016;Waseem and Hovy, 2016) to neural approaches applying representations of language models (Yang et al., 2019) in single or multi-task (Plaza-Del-Arco et al., 2021) settings. ...
Preprint
Even though hate speech (HS) online has been an important object of research in the last decade, most HS-related corpora over-simplify the phenomenon of hate by attempting to label user comments as "hate" or "neutral". This ignores the complex and subjective nature of HS, which limits the real-life applicability of classifiers trained on these corpora. In this study, we present the M-Phasis corpus, a corpus of ~9k German and French user comments collected from migration-related news articles. It goes beyond the "hate"-"neutral" dichotomy and is instead annotated with 23 features, which in combination become descriptors of various types of speech, ranging from critical comments to implicit and explicit expressions of hate. The annotations are performed by 4 native speakers per language and achieve high (0.77 <= k <= 1) inter-annotator agreements. Besides describing the corpus creation and presenting insights from a content, error and domain analysis, we explore its data characteristics by training several classification baselines.
... Not only can such abusive behavior lead to the traumatization of the victims by affecting them psychologically [37], but it can also ignite social tensions and affect the stature of the platforms which host them [36]. Further, widespread usage of such content can also have implications in the offline world: violent hate crimes, youth suicides, mass shootings, and extremist recruitment [17]. ...
Preprint
Abusive language is a growing concern in many social media platforms. Repeated exposure to abusive speech has created physiological effects on the target users. Thus, the problem of abusive language should be addressed in all forms for online peace and safety. While extensive research exists in abusive speech detection, most studies focus on English. Recently, many smearing incidents have occurred in India, which provoked diverse forms of abusive speech in online space in various languages based on the geographic location. Therefore it is essential to deal with such malicious content. In this paper, to bridge the gap, we demonstrate a large-scale analysis of multilingual abusive speech in Indic languages. We examine different interlingual transfer mechanisms and observe the performance of various multilingual models for abusive speech detection for eight different Indic languages. We also experiment to show how robust these models are on adversarial attacks. Finally, we conduct an in-depth error analysis by looking into the models' misclassified posts across various settings. We have made our code and models public for other researchers.
Article
Full-text available
This research note argues that the ‘lone wolf’ typology should be fundamentally reconsidered. Based on a three-year empirical research project, two key points are made to support this argument. First, the authors found that ties to online and offline radical milieus are critical to lone actors’ adoption and maintenance of both the motive and capability to commit acts of terrorism. Secondly, in terms of pre-attack behaviors, the majority of lone actors are not the stealthy and highly capable terrorists the ‘lone wolf’ moniker alludes to. These findings not only urge a reconsideration of the utility of the lone-wolf concept, they are also particularly relevant for counterterrorism professionals, whose conceptions of this threat may have closed off avenues for detection and interdiction that do, in fact, exist.
Article
Full-text available
Our dependence on networks - be they infrastructure, economic, social or others - leaves us prone to crises caused by the vulnerabilities of these networks. There is a great need to develop new methods to protect infrastructure networks and prevent cascades of failures (especially in cases of coupled networks). Terrorist attacks on transportation networks have traumatized modern societies. With a single blast, it has become possible to paralyze airline traffic, electric power supply, ground transportation or Internet communication. How, and at what cost, can one restructure the network such that it will become more robust against malicious attacks? The gradual increase in attacks on the networks society depends on - Internet, mobile phone, transportation, air travel, banking, etc. - emphasizes the need to develop new strategies to protect and defend these crucial networks of communication and infrastructure. One example is the threat of liquid explosives a few years ago, which completely shut down air travel for days and created extreme changes in regulations. Such threats and dangers warrant the need for new tools and strategies to defend critical infrastructure. In this paper we review recent advances in the theoretical understanding of the vulnerabilities of interdependent networks with and without spatial embedding, attack strategies and their effect on such networks of networks, as well as recently developed strategies to optimize and repair failures caused by such attacks.
Article
Full-text available
This article analyzes the sociodemographic network characteristics and antecedent behaviors of 119 lone-actor terrorists. This marks a departure from existing analyses by largely focusing upon behavioral aspects of each offender. This article also examines whether lone-actor terrorists differ based on their ideologies or network connectivity. The analysis leads to seven conclusions. There was no uniform profile identified. In the time leading up to most lone-actor terrorist events, other people generally knew about the offender's grievance, extremist ideology, views, and/or intent to engage in violence. A wide range of activities and experiences preceded lone actors' plots or events. Many but not all lone-actor terrorists were socially isolated. Lone-actor terrorists regularly engaged in a detectable and observable range of activities with a wider pressure group, social movement, or terrorist organization. Lone-actor terrorist events were rarely sudden and impulsive. There were distinguishable behavioral differences between subgroups. The implications for policy conclude this article.
Article
Full-text available
We show that abrupt structural transitions can arise in functionally optimal networks, driven by small changes in the level of transport congestion. Our results offer an explanation as to why so many diverse species of network structure arise in nature (e.g., fungal systems) under essentially the same environmental conditions. Our findings are based on an exactly solvable model system which mimics a variety of biological and social networks. We then extend our analysis by introducing a renormalization scheme involving cost motifs, to describe analytically the average shortest path across multiple-ring-and-hub networks. As a consequence, we uncover a "skin effect" whereby the structure of the inner multi-ring core can cease to play any role in terms of determining the average shortest path across the network.
Article
The spread of fake news on social media became a public concern in the United States after the 2016 presidential election. We examined exposure to and sharing of fake news by registered voters on Twitter and found that engagement with fake news sources was extremely concentrated. Only 1% of individuals accounted for 80% of fake news source exposures, and 0.1% accounted for nearly 80% of fake news sources shared. Individuals most likely to engage with fake news sources were conservative leaning, older, and highly engaged with political news. A cluster of fake news sources shared overlapping audiences on the extreme right, but for people across the political spectrum, most political news exposure still came from mainstream media outlets.
Article
Background: Given the concerns about bullying via electronic communication in children and young people and its possible contribution to self-harm, we have reviewed the evidence for associations between cyberbullying involvement and self-harm or suicidal behaviors (such as suicidal ideation, suicide plans, and suicide attempts) in children and young people.
Objective: The aim of this study was to systematically review the current evidence examining the association between cyberbullying involvement as victim or perpetrator and self-harm and suicidal behaviors in children and young people (younger than 25 years), and where possible, to meta-analyze data on the associations.
Methods: An electronic literature search was conducted for all studies published between January 1, 1996, and February 3, 2017, across sources including MEDLINE, Cochrane, and PsycINFO. Articles were included if the study examined any association between cyberbullying involvement and self-harm or suicidal behaviors and reported empirical data in a sample aged under 25 years. Quality of included papers was assessed and data were extracted. Meta-analyses of data were conducted.
Results: A total of 33 eligible articles from 26 independent studies were included, covering a population of 156,384 children and young people. A total of 25 articles (20 independent studies, n=115,056) identified associations (negative influences) between cybervictimization and self-harm or suicidal behaviors or between perpetrating cyberbullying and suicidal behaviors. Three additional studies, in which the cyberbullying, self-harm, or suicidal behaviors measures had been combined with other measures (such as traditional bullying and mental health problems), also showed negative influences (n=44,526). A total of 5 studies showed no significant associations (n=5646). Meta-analyses, producing odds ratios (ORs) as a summary measure of effect size (e.g., the ratio of the odds of cyber-victims who have self-harmed versus nonvictims who have self-harmed), showed that, compared with nonvictims, those who have experienced cybervictimization were OR 2.35 (95% CI 1.65-3.34) times as likely to self-harm, OR 2.10 (95% CI 1.73-2.55) times as likely to exhibit suicidal behaviors, OR 2.57 (95% CI 1.69-3.90) times more likely to attempt suicide, and OR 2.15 (95% CI 1.70-2.71) times more likely to have suicidal thoughts. Cyberbullying perpetrators were OR 1.21 (95% CI 1.02-1.44) times more likely to exhibit suicidal behaviors and OR 1.23 (95% CI 1.10-1.37) times more likely to experience suicidal ideation than nonperpetrators.
Conclusions: Victims of cyberbullying are at a greater risk than nonvictims of both self-harm and suicidal behaviors. To a lesser extent, perpetrators of cyberbullying are at risk of suicidal behaviors and suicidal ideation when compared with nonperpetrators. Policy makers and schools should prioritize the inclusion of cyberbullying involvement in programs to prevent traditional bullying. Type of cyberbullying involvement, frequency, and gender should be assessed in future studies.
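For readers unfamiliar with the effect-size measure quoted above, the sketch below computes an odds ratio and Woolf's 95% confidence interval from a generic 2×2 table; the counts are invented for illustration and are not taken from the review.

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio from a 2x2 table with Woolf's 95% confidence interval.
    a: exposed with outcome, b: exposed without, c: unexposed with, d: unexposed without."""
    odds_ratio = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log)
    return odds_ratio, (lower, upper)

# Invented illustrative counts (not from the review): 40 of 200 cyber-victims
# self-harmed versus 30 of 300 non-victims.
print(odds_ratio_ci(a=40, b=160, c=30, d=270))
```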
Article
Public interest and policy debates surrounding the role of the Internet in terrorist activities is increasing. Criminology has said very little on the matter. By using a unique data set of 223 convicted United Kingdom–based terrorists, this article focuses on how they used the Internet in the commission of their crimes. As most samples of terrorist offenders vary in terms of capabilities (lone-actor vs. group offenders) and criminal sophistication (improvised explosive devices vs. stabbings), we tested whether the affordances they sought from the Internet significantly differed. The results suggest that extreme-right-wing individuals, those who planned an attack (as opposed to merely providing material support), conducted a lethal attack, committed an improvised explosive device (IED) attack, committed an armed assault, acted within a cell, attempted to recruit others, and engaged in nonvirtual network activities and nonvirtual place interactions were significantly more likely to learn online compared with those who did not engage in these behaviors. Those undertaking unarmed assaults were significantly less likely to display online learning. The results also suggested that extreme-right-wing individuals who perpetrated an IED attack, associated with a wider network, attempted to recruit others, and engaged in nonvirtual network activities and nonvirtual place interactions were significantly more likely to communicate online with co-ideologues.
Article
Support for an extremist entity such as Islamic State (ISIS) somehow manages to survive globally online despite significant external pressure, and may ultimately inspire acts by individuals who have no prior history of extremism, formal cell membership or direct links to leadership. We uncover an ultrafast ecology driving this online support and provide a mathematical theory that describes it. The ecology features self-organized aggregates that proliferate preceding the onset of recent real-world campaigns, and adopt novel adaptive mechanisms to enhance their survival. One of the actionable predictions is that the development of large, potentially potent pro-ISIS aggregates can be thwarted by targeting smaller ones.
Book
This book provides the first empirical analysis of lone-actor terrorist behaviour. Based upon a unique dataset of 111 lone actors that catalogues the life span of the individual’s development, the book contains important insights into what an analysis of their behaviours might imply for practical interventions aimed at disrupting or even preventing attacks. It adopts insights and methodologies from criminology and forensic psychology to provide a holistic analysis of the behavioural underpinnings of lone-actor terrorism. By focusing upon the behavioural aspects of each offender and by analysing a variety of case studies, including Anders Breivik, Ted Kaczynski, Timothy McVeigh and David Copeland, this work marks a pointed departure from previous research in the field. It seeks to answer the following key questions: Is there a lone-actor terrorist profile and how do they differ? What behaviours did the lone-actor terrorist engage in prior to his/her attack and is there a common behavioural trajectory into lone-actor terrorism? How ‘lone’ do lone-actor terrorists tend to be? What role, if any, does the internet play? What role, if any, does mental illness play? This book will be of much interest to students of terrorism/counter-terrorism studies, political violence, criminology, forensic psychology and security studies in general.