
Hidden resilience and adaptive dynamics of the global online hate ecology


Abstract and Figures

Online hate and extremist narratives have been linked to abhorrent real-world events, including a current surge in hate crimes1–6 and an alarming increase in youth suicides that result from social media vitriol7; inciting mass shootings such as the 2019 attack in Christchurch, stabbings and bombings8–11; recruitment of extremists12–16, including entrapment and sex-trafficking of girls as fighter brides17; threats against public figures, including the 2019 verbal attack against an anti-Brexit politician, and hybrid (racist–anti-women–anti-immigrant) hate threats against a US member of the British royal family18; and renewed anti-western hate in the 2019 post-ISIS landscape associated with support for Osama Bin Laden’s son and Al Qaeda. Social media platforms seem to be losing the battle against online hate19,20 and urgently need new insights. Here we show that the key to understanding the resilience of online hate lies in its global network-of-network dynamics. Interconnected hate clusters form global ‘hate highways’ that—assisted by collective online adaptations—cross social media platforms, sometimes using ‘back doors’ even after being banned, as well as jumping between countries, continents and languages. Our mathematical model predicts that policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish. We observe the current hate network rapidly rewiring and self-repairing at the micro level when attacked, in a way that mimics the formation of covalent bonds in chemistry. This understanding enables us to propose a policy matrix that can help to defeat online hate, classified by the preferred (or legally allowed) granularity of the intervention and top-down versus bottom-up nature. We provide quantitative assessments for the effects of each intervention.
This policy matrix also offers a tool for tackling a broader class of illicit online behaviours21,22 such as financial fraud.
Global ecology of online hate clusters a, Schematic of resilient hate ecology that we find flourishing online, mixing hate narratives, languages and cultures across platforms. A1, A2 and A3 denote three types of self-organized adaptation that we observe that quickly build new bridges between otherwise independent platforms (see main text). We focus on Facebook (FB) and VKontakte (VK) clusters, shown as large blue and red symbols, respectively; different shapes represent different hate narratives. An undirected (that is, no arrowhead) coloured link between two hate clusters indicates a strong two-way connection. Small black circles indicate users, who may be members of 1, 2, 3, … hate clusters; a directed (that is, with arrowhead) link indicates that the user is a member of that hate cluster. b, Placing hate clusters at the location of their activity (for example, ‘Stop White Genocide in South Africa’ (SA)) reveals a complex web of global hate highways built from these strong inter-cluster connections. Only the basic skeleton is shown. Bridges between Facebook and VKontakte (for example, A1, A2 and A3 in a) are shown in green. When the focus of a hate cluster is an entire country or continent, the geographical centre is chosen. Inset shows dense hate highway interlinkage across Europe. c, Microscale view of actual KKK hate-cluster ecosystem. The ForceAtlas2 algorithm used is such that the further two clusters are apart, the fewer users they have in common. Hate-cluster radii are determined by the number of members. d, Schematic showing synapse-like nature of individual hate clusters.
… 
Mathematical model showing resilience of hate-cluster ecology a, Connected hate clusters from Fig. 1a, trying to establish links from a platform such as VKontakte (subset 1b) to a better-policed platform such as Facebook (platform 2), run the risk (cost R) of being noticed by moderators of Facebook and hence of sanctions and legal action. Because more links create more visibility and hence more risk, we assume that the cost of accessing platform 2 from platform 1 is proportional to the number of links, ρ. b, Mathematical prediction from this model (equation (1)) shows that the average shortest path $\bar{\ell}$ between hate clusters in VKontakte (subset 1b) has a minimum $\bar{\ell}_{\min}$ as a function of the number of links ρ into platform 2 (Facebook). For any reasonably large number of inter-platform links ρ > ρmin, our theory predicts that the action of platform 2 (such as Facebook) to reduce the number of links ρ will lead to an unwanted decrease in the average shortest path $\bar{\ell}$ as ρ decreases towards ρmin.
In addition, as the universe of social media expands in the future to many interconnected platforms, as shown schematically in a, our theory predicts that the combined effect of having independent moderators on each platform will be to create spontaneous dark pools of hate (dark region in a).
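The trade-off behind Fig. 2b can be illustrated with a toy calculation. The following is a minimal sketch, not the authors' equation (1): it places clusters on a ring (one platform) with a single hub on a second platform, and makes each hub edge's weight grow with the number of inter-platform links ρ, mimicking the risk cost R. The ring-plus-hub geometry, the cost parameter and all numbers are illustrative assumptions; the qualitative result is an interior minimum of the average shortest path as a function of ρ.

```python
import heapq
from itertools import combinations

def avg_shortest_path(n_clusters, rho, cost_per_link=0.5):
    """Toy two-platform model: n_clusters clusters sit on a ring (platform 1);
    rho evenly spaced clusters also link to one hub on platform 2.  Each hub
    edge has weight 1 + cost_per_link*rho, a risk cost growing with visibility."""
    hub = n_clusters
    adj = {v: [] for v in range(n_clusters + 1)}
    for i in range(n_clusters):                        # ring edges, weight 1
        j = (i + 1) % n_clusters
        adj[i].append((j, 1.0)); adj[j].append((i, 1.0))
    w = 1.0 + cost_per_link * rho                      # risk-weighted hub edges
    for k in range(rho):
        p = (k * n_clusters) // rho
        adj[p].append((hub, w)); adj[hub].append((p, w))

    def dijkstra(src):
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, wt in adj[u]:
                nd = d + wt
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist

    dists = {i: dijkstra(i) for i in range(n_clusters)}
    pairs = list(combinations(range(n_clusters), 2))
    return sum(dists[i][j] for i, j in pairs) / len(pairs)

rhos = range(1, 31)
curve = {rho: avg_shortest_path(60, rho) for rho in rhos}
best_rho = min(curve, key=curve.get)   # interior minimum, as in Fig. 2b
```

In this toy version, very few links make the hub route too long to help, while many links make each hub hop too costly, so the average shortest path is minimized at an intermediate ρ; reducing ρ from a large value towards that minimum shortens paths, echoing the prediction in the caption.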
… 
LETTER https://doi.org/10.1038/s41586-019-1494-7
Hidden resilience and adaptive dynamics of the
global online hate ecology
N. F. Johnson1*, R. Leahy1, N. Johnson Restrepo1, N. Velasquez2, M. Zheng3, P. Manrique3, P. Devkota4 & S. Wuchty4
Current strategies to defeat online hate tend towards two ends of the scale: a microscopic approach that seeks to identify ‘bad’ individual(s) in the sea of online users1,14,16, and a macroscopic approach that bans entire ideologies, which results in allegations of stifling free speech23. These two approaches are equivalent to attempts to try to understand how water boils by looking for a bad particle in a sea of billions (even though there is not one for phase transitions24), or the macroscopic viewpoint that the entire system is to blame (akin to thermodynamics24). Yet, the correct science behind extended physical phenomena24 lies at the mesoscale in the self-organized cluster dynamics of the developing correlations, with the same thought to be true for many social science settings25–27.
A better understanding of how the ecology of online hate evolves
could create more effective intervention policies. Using entirely public
data from different social media platforms, countries and languages, we
find that online hate thrives globally through self-organized, mesoscale
clusters that interconnect to form a resilient network-of-networks of
hate highways across platforms, countries and languages (Fig. 1). Our mathematical theory shows why single-platform policing (for example, by Facebook) can be ineffective (Fig. 2) and may even make things worse. We find empirically that when attacked, the online hate ecology can quickly adapt and self-repair at the micro level, akin to the formation of covalent bonds in chemistry (Fig. 3). We leave a detailed study of the underlying social networks to future work because our focus here is on the general cross-platform behaviour. Knowledge of these features of online hate enables us to propose a set of interventions to thwart it (Fig. 4).
Our analysis of online clusters does not require any information
about individuals, just as information about a specific molecule of
water is not required to describe the bubbles (that is, clusters of cor-
related molecules) that form in boiling water. Online clusters such as
groups, communities and pages are a popular feature of platforms such
as Facebook and VKontakte, which is based in central Europe, has hundreds of millions of users worldwide, and had a crucial role in previous extremist activity27. Such online clusters allow several individual users to self-organize around a common interest27 and they collectively self-police to remove trolls, bots and adverse opinions. Some people find it attractive to join a cluster that promotes hate because its social structure reduces the risk of being trolled or confronted by opponents.
Even on platforms that do not have formal groups, quasi-groups can
be formed (for example, Telegram). Although Twitter has allowed
some notable insights26, we do not consider it here as its open-follower
structure does not fully capture the tendency of humans to form into
tight-knit social clusters (such as VKontakte groups) in which they
can develop thoughts without encountering opposition. Our online
cluster search methodology generalizes that previously described27 to
multiple social media platforms and can be repeated for any hate topic
(seeMethods for full details).
The global hate ecology that we find flourishing online is shown
in Fig.1a, b. The highly interconnected network-of-networks28–30
mixes hate narratives across themes (for example, anti-Semitic,
anti-immigrant, anti-LGBT+), languages, cultures and platforms. This online mixing manifests itself in the 2019 attack in
Christchurch: the presumed shooter was Australian, the attack was
in New Zealand, and the guns carried messages in several European
languages on historical topics that are mentioned in online hate
clusters across continents. We uncover hate clusters of all sizes—for
example, the hate-cluster distribution for the ideology of the Ku
Klux Klan (KKK) on VKontakte has a high goodness-of-fit value for
a power-law distribution (Extended Data Fig.1). This suggests that
the online hate ecology is self-organized, because it would be almost
impossible to engineer this distribution using top-down control.
The estimated power-law exponent is consistent with a sampling of
anti-western hate clusters as well as the online ecology of financial fraud21, suggesting that our findings and policy suggestions can help to tackle a broader class of illicit online behaviours21,22.
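The power-law fit of the cluster-size distribution mentioned above can be sketched generically. The snippet below is an illustrative assumption-laden sketch, not the authors' data or their goodness-of-fit procedure: it draws synthetic cluster sizes from a known power law and recovers the exponent with the standard continuous maximum-likelihood estimator.

```python
import math
import random

def sample_power_law(n, alpha, x_min, rng):
    """Inverse-transform sampling from a continuous power law p(x) ~ x^-alpha."""
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def fit_power_law(xs, x_min):
    """Continuous maximum-likelihood exponent estimate:
    alpha_hat = 1 + n / sum(ln(x_i / x_min)) over all x_i >= x_min."""
    tail = [x for x in xs if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

rng = random.Random(42)
sizes = sample_power_law(5000, alpha=2.5, x_min=1.0, rng=rng)
alpha_hat = fit_power_law(sizes, x_min=1.0)   # should recover roughly 2.5
```

A full analysis would also estimate x_min and test goodness of fit (for example, via a Kolmogorov–Smirnov statistic against bootstrapped synthetic samples), which is beyond this sketch.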
We observe operationally independent platforms—that are also commercial competitors—becoming unwittingly coupled through dynamical, self-organized adaptations of the global hate-cluster networks. This resilience helps the hate ecology to recover quickly after the banning of single platforms. The three types of adaptation bridging VKontakte
1Physics Department, George Washington University, Washington, DC, USA. 2Elliott School of International Affairs, George Washington University, Washington, DC, USA. 3Physics Department, University of Miami, Coral Gables, FL, USA. 4Computer Science Department, University of Miami, Coral Gables, FL, USA. *e-mail: neiljohnson@gwu.edu
NATURE| www.nature.com/nature
... All data needed to evaluate the conclusions in the paper are present in the paper and Supplementary Information (refs. 40, 42, 55–59, 64–71). The code used to generate the map in Fig. 1, and from which the results in Figs. 2 and 3 are obtained, is Gephi, which is free open-source software. ...
Article
Full-text available
Why does online distrust (e.g., of medical expertise) continue to grow despite numerous mitigation efforts? We analyzed changing discourse within a Facebook ecosystem of approximately 100 million users who were focused pre-pandemic on vaccine (dis)trust. Post-pandemic, their discourse interconnected multiple non-vaccine topics and geographic scales within and across communities. This interconnection confers a unique, system-level (i.e., at the scale of the full network) resistance to mitigations targeting isolated topics or geographic scales—an approach many schemes take due to constrained funding. For example, focusing on local health issues but not national elections. Backed by numerical simulations, we propose counterintuitive solutions for more effective, scalable mitigation: utilize “glocal” messaging by blending (1) strategic topic combinations (e.g., messaging about specific diseases with climate change) and (2) geographic scales (e.g., combining local and national focuses).
... The consequences of cyberhate can be very damaging, affecting different spheres of individuals' lives, for example by reducing trust in other people (Näsi et al., 2015), individual well-being (Keipi et al., 2018) and academic motivation (Isik et al., 2018). Cyberhate is in some cases a complement to, and in others an antecedent of, offline hate crimes such as shootings, stabbings and bomb attacks (Johnson et al., 2019). Further research into this antisocial online behaviour is needed in order to identify predictors of cyberhate among adolescents. ...
Article
Abstract The COVID-19 pandemic caused a major crisis in numerous social spheres, especially among children, due to the closure of schools in hundreds of countries. The lockdown resulted in classes being given exclusively online, which could have led to increased participation in antisocial online behaviour such as cyberhate. This research aims to find out the impact of lockdown on cyberhate in children in Primary Education and to analyse the role of social, emotional and moral competencies as a protective factor. The study was conducted with 792 primary school pupils (Mage = 10.81, SD = 0.85) from Cuenca (Ecuador). A questionnaire comprising cyberhate, social and emotional competencies, empathy, and moral emotions scales was used. A quantitative study was carried out with a longitudinal design and two rounds of data collection separated by an interval of five months. The results showed that total cyberhate and its dimensions, perpetration and propagation, increased longitudinally. Cyberhate among these participants could be predicted, after five months of lockdown, by being male, being in the highest school year, attending a state school, and obtaining low scores in moral emotions. The effects of the lockdown have highlighted the importance of face-to-face social relationships, which has interesting implications for the role of school in developing the social, emotional and moral competencies that foster coexistence and respect for diversity.
... As new media isolate subject experts from each other and from the wider population (Johnson et al., 2020), diversity of knowledge can turn into social division between groups (Finkel et al., 2020; Johnson et al., 2019). Segregation does not need to be designed; it can occur through small differences repeated in decentralized interactions, as Schelling (1973) demonstrated in his classic model. ...
Article
Full-text available
Diversity of expertise is inherent to cultural evolution. When it is transparent, diversity of human knowledge is useful; when social conformity overcomes that transparency, “expertise” can lead to divisiveness. This is especially true today, where social media has increasingly allowed misinformation to spread by prioritizing what is recent and popular, regardless of validity or general benefit. Whereas in traditional societies there was diversity of expertise, contemporary social media facilitates homophily, which isolates true subject experts from each other and from the wider population. Diversity of knowledge thus becomes social division. Here, we discuss the potential of a cultural-evolutionary framework designed for the countless choices in contemporary media. Cultural-evolutionary theory identifies key factors that determine whether communication networks unify or fragment knowledge. Our approach highlights two parameters: transparency of information and social conformity. By identifying online spaces exhibiting aggregate patterns of high popularity bias and low transparency of information, we can help define the “safe limits” of social conformity and information overload in digital communications.
... Both suggest paths of research into pro-actively regulating platforms using behavioral news sharing patterns, i.e. from which sources and to which communities news is shared, alongside lingual information (Cheng et al. 2017). Nonetheless, these approaches also need to be balanced against the risk of driving polarized news-sharing to even more extreme websites (Johnson et al. 2019). Another possibility could be that moderation moves article-level anti-partisan content around the platform, such as articles criticizing opposing politicians, while reducing source-level co-partisanship. ...
Article
Online social platforms afford users vast digital spaces to share and discuss current events. However, scholars have concerns both over their role in segregating information exchange into ideological echo chambers, and over evidence that these echo chambers are nonetheless over-stated. In this work, we investigate news-sharing patterns across the entirety of Reddit and find that the platform appears polarized macroscopically, especially in politically right-leaning spaces. On closer examination, however, we observe that the majority of this effect originates from small, hyper-partisan segments of the platform accounting for a minority of news shared. We further map the temporal evolution of polarized news sharing and uncover evidence that, in addition to having grown drastically over time, polarization in hyper-partisan communities also began much earlier than 2016 and is resistant to Reddit's largest moderation event. Our results therefore suggest that social polarized news sharing runs narrow but deep online. Rather than being guided by the general prevalence or absence of echo chambers, we argue that platform policies are better served by measuring and targeting the communities in which ideological segregation is strongest.
Article
Online misinformation promotes distrust in science, undermines public health, and may drive civil unrest. During the coronavirus disease 2019 pandemic, Facebook—the world’s largest social media company—began to remove vaccine misinformation as a matter of policy. We evaluated the efficacy of these policies using a comparative interrupted time-series design. We found that Facebook removed some antivaccine content, but we did not observe decreases in overall engagement with antivaccine content. Provaccine content was also removed, and antivaccine content became more misinformative, more politically polarized, and more likely to be seen in users’ newsfeeds. We explain these findings as a consequence of Facebook’s system architecture, which provides substantial flexibility to motivated users who wish to disseminate misinformation through multiple channels. Facebook’s architecture may therefore afford antivaccine content producers several means to circumvent the intent of misinformation removal policies.
Preprint
Full-text available
Why is distrust (e.g. of medical expertise) now flourishing online despite the surge in mitigation schemes being implemented? We analyze the changing discourse in the Facebook ecosystem of approximately 100 million users who pre-pandemic were focused on (dis)trust of vaccines. We find that post-pandemic, their discourse strongly entangles multiple non-vaccine topics and geographic scales both within and across communities. This gives the current distrust ecosystem a unique system-level resistance to mitigations that target a specific topic and geographic scale -- which is the case of many current schemes due to their funding focus, e.g. local health not national elections. Backed up by detailed numerical simulations, our results reveal the following counterintuitive solutions for implementing more effective mitigation schemes at scale: shift to 'glocal' messaging by (1) blending particular sets of distinct topics (e.g. combine messaging about specific diseases with climate change) and (2) blending geographic scales.
Book
Full-text available
Brosziewski, Achim: Lebenslauf, Medien, Lernen. Skizzen einer systemtheoretischen Bildungssoziologie. Weinheim: Beltz Juventa 2023. Open Access at https://www.beltz.de/fachmedien/soziologie/produkte/details/50517-lebenslauf-medien-lernen.html Das Buch handelt von Lernen, Lehren, Lernfähigkeit, Unterricht, Lehrsätzen, Kompetenzen, Bildung, Altern und Ungewissheit — eingebettet in eine soziologische Medien- und Kommunikationstheorie, die die gesellschaftlichen Zusammenhänge all dieser Phänomene nachzuzeichnen erlaubt. Im Zentrum steht die These, der Lebenslauf lasse sich — wie Geld, Macht, Liebe und manches mehr — als ein symbolisch generalisiertes Medium der Kommunikation begreifen. Lernen und seine Komponenten kommen in den Blick, wenn man nach den Formen fragt, die sich im Medium des Lebenslaufs bilden und es reproduzieren. The book is about learning, teaching, learning ability, instruction, doctrines, competencies, Bildung, aging, and uncertainty — embedded in a sociological theory of media and communication that allows tracing the societal connections of all these phenomena. At the center is the thesis that the life course can be understood — like money, power, love and some more — as a symbolically generalized medium of communication. Learning and its components come into view when one asks about the forms that are formed in the medium of the life course and reproduce it.
Article
Although official departments attempt to intervene against misinformation, the personal field often conflicts with the goals of these departments. Thus, when rumours spread widely on social media, decision-makers often use a combination of rigid and soft control measures, such as blocking keywords, deleting misinformation, suspending accounts or refuting misinformation, to decrease the diffusion of misinformation. However, existing methods rarely consider the interplay of blocking and rebuttal measures, resulting in an unclear effect of the double intervention mechanism. To address these issues, we propose a novel misinformation diffusion model called SEIRI (susceptible, exposed, infective, removed, and infective) that considers the double intervention mechanism and secondary diffusion characteristics. We analyse the stability of the proposed model, obtain rumour-free and rumour-spread equilibriums, and calculate the basic reproduction number. Furthermore, we conduct numerical simulations to analyse the influence of key parameters through comparative experiments. Finally, we validate the effectiveness of the proposed approach by crawling a real-world data set of COVID-19-related misinformation tweets from Sina Weibo. Our comparison experiments with other similar works show that the SEIRI model provides superior performance in characterising the actual spread of misinformation. Our findings lead to several practical implications for public health policymaking.
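The SEIRI model described above extends the standard compartmental approach to epidemic-style diffusion. As a baseline illustration of that approach, the sketch below integrates a plain SEIR model with forward Euler; it is not the authors' SEIRI model (it omits the double intervention mechanism and secondary diffusion), and all parameter values are illustrative assumptions.

```python
def simulate_seir(beta=0.5, sigma=0.2, gamma=0.1, e0=0.001, dt=0.05, t_max=300.0):
    """Forward-Euler integration of a standard SEIR compartment model.
    beta: transmission rate; sigma: incubation rate (E -> I);
    gamma: removal rate (I -> R).  State variables are population fractions."""
    s, e, i, r = 1.0 - e0, e0, 0.0, 0.0
    history = []
    for _ in range(int(t_max / dt)):
        new_exposed   = beta * s * i * dt     # S -> E
        new_infective = sigma * e * dt        # E -> I
        new_removed   = gamma * i * dt        # I -> R
        s -= new_exposed
        e += new_exposed - new_infective
        i += new_infective - new_removed
        r += new_removed
        history.append((s, e, i, r))
    return history

history = simulate_seir()
r0 = 0.5 / 0.1                     # basic reproduction number beta/gamma = 5
peak_infective = max(i for _, _, i, _ in history)
final_susceptible = history[-1][0]
```

With r0 > 1 the model produces a large outbreak (most of the population eventually passes through the infective compartment), which is the regime in which blocking and rebuttal interventions of the kind the paper studies become relevant.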
Conference Paper
The exponential growth of social media users has changed the way people express their thoughts online. The freedom of expression offered by social media raises several issues. A major issue is the increasing number of hate speech posts containing offensive and foul language. Hate speech posts are targeted to individuals or groups of communities or organizations. This paper presents the use of a deep learning method to conduct multi-label text classification for hate speech tweets detection, including detecting the target, category, and degree of hate speech in the Indonesian language. The Convolutional Neural Network (CNN) method was employed in this current study. This study also implemented Word2Vec as word embedding. The result shows that the implementation of Word2Vec improves detection accuracy by 7.12%. The accuracies of using CNN and the CNN+Word2Vec are 64.07% and 71.19%, respectively.
Article
Full-text available
Background: Given the concerns about bullying via electronic communication in children and young people and its possible contribution to self-harm, we have reviewed the evidence for associations between cyberbullying involvement and self-harm or suicidal behaviors (such as suicidal ideation, suicide plans, and suicide attempts) in children and young people. Objective: The aim of this study was to systematically review the current evidence examining the association between cyberbullying involvement as victim or perpetrator and self-harm and suicidal behaviors in children and young people (younger than 25 years), and where possible, to meta-analyze data on the associations. Methods: An electronic literature search was conducted for all studies published between January 1, 1996, and February 3, 2017, across sources, including MEDLINE, Cochrane, and PsycINFO. Articles were included if the study examined any association between cyberbullying involvement and self-harm or suicidal behaviors and reported empirical data in a sample aged under 25 years. Quality of included papers was assessed and data were extracted. Meta-analyses of data were conducted. Results: A total of 33 eligible articles from 26 independent studies were included, covering a population of 156,384 children and young people. A total of 25 articles (20 independent studies, n=115,056) identified associations (negative influences) between cybervictimization and self-harm or suicidal behaviors or between perpetrating cyberbullying and suicidal behaviors. Three additional studies, in which the cyberbullying, self-harm, or suicidal behaviors measures had been combined with other measures (such as traditional bullying and mental health problems), also showed negative influences (n=44,526). A total of 5 studies showed no significant associations (n=5646). 
Meta-analyses, producing odds ratios (ORs) as a summary measure of effect size (e.g., the ratio of the odds of cybervictims who have self-harmed (SH) vs nonvictims who have self-harmed), showed that, compared with nonvictims, those who have experienced cybervictimization were OR 2.35 (95% CI 1.65-3.34) times as likely to self-harm, OR 2.10 (95% CI 1.73-2.55) times as likely to exhibit suicidal behaviors, OR 2.57 (95% CI 1.69-3.90) times more likely to attempt suicide, and OR 2.15 (95% CI 1.70-2.71) times more likely to have suicidal thoughts. Cyberbullying perpetrators were OR 1.21 (95% CI 1.02-1.44) times more likely to exhibit suicidal behaviors and OR 1.23 (95% CI 1.10-1.37) times more likely to experience suicidal ideation than nonperpetrators. Conclusions: Victims of cyberbullying are at a greater risk than nonvictims of both self-harm and suicidal behaviors. To a lesser extent, perpetrators of cyberbullying are at risk of suicidal behaviors and suicidal ideation when compared with nonperpetrators. Policy makers and schools should prioritize the inclusion of cyberbullying involvement in programs to prevent traditional bullying. Type of cyberbullying involvement, frequency, and gender should be assessed in future studies.
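Odds ratios with confidence intervals like those reported above are computed from 2×2 contingency tables. The sketch below uses the standard Wald interval on the log odds ratio; the cell counts are entirely hypothetical, and the review's pooling method may differ.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
                  outcome   no outcome
    exposed          a          b
    unexposed        c          d
    SE of ln(OR) = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: 30 of 100 cybervictims self-harmed vs 10 of 100 nonvictims.
or_, lo, hi = odds_ratio_ci(30, 70, 10, 90)
```

An OR whose confidence interval excludes 1 indicates a statistically significant association, which is how the associations summarized in the abstract are interpreted.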
Article
Full-text available
This research note argues that the ‘lone wolf’ typology should be fundamentally reconsidered. Based on a three-year empirical research project, two key points are made to support this argument. First, the authors found that ties to online and offline radical milieus are critical to lone actors’ adoption and maintenance of both the motive and capability to commits acts of terrorism. Secondly, in terms of pre-attack behaviors, the majority of lone actors are not the stealthy and highly capable terrorists the ‘lone wolf’ moniker alludes to. These findings not only urge a reconsideration of the utility of the lone-wolf concept, they are also particularly relevant for counterterrorism professional, whose conceptions of this threat may have closed off avenues for detection and interdiction that do, in fact, exist.
Article
Full-text available
Our dependence on networks - be they infrastructure, economic, social or others - leaves us prone to crises caused by the vulnerabilities of these networks. There is a great need to develop new methods to protect infrastructure networks and prevent cascade of failures (especially in cases of coupled networks). Terrorist attacks on transportation networks have traumatized modern societies. With a single blast, it has become possible to paralyze airline traffic, electric power supply, ground transportation or Internet communication. How, and at which cost can one restructure the network such that it will become more robust against malicious attacks? The gradual increase in attacks on the networks society depends on - Internet, mobile phone, transportation, air travel, banking, etc. - emphasize the need to develop new strategies to protect and defend these crucial networks of communication and infrastructure networks. One example is the threat of liquid explosives a few years ago, which completely shut down air travel for days, and has created extreme changes in regulations. Such threats and dangers warrant the need for new tools and strategies to defend critical infrastructure. In this paper we review recent advances in the theoretical understanding of the vulnerabilities of interdependent networks with and without spatial embedding, attack strategies and their affect on such networks of networks as well as recently developed strategies to optimize and repair failures caused by such attacks.
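A core result of this line of work is that heterogeneous (scale-free) networks survive random failures but fragment under degree-targeted attack. The sketch below illustrates that comparison with a minimal preferential-attachment generator and a breadth-first giant-component measurement; the generator, graph size and removal fraction are illustrative assumptions, not the models analysed in the article.

```python
import random
from collections import deque

def barabasi_albert(n, m, rng):
    """Preferential-attachment graph: each new node attaches to ~m existing
    nodes chosen proportionally to degree (via a repeated-endpoints list)."""
    adj = {i: set() for i in range(n)}
    targets = list(range(m))          # seed: first m nodes
    repeated = []
    for new in range(m, n):
        for t in set(targets):        # duplicates collapse, so degree may be < m
            adj[new].add(t); adj[t].add(new)
            repeated.extend([new, t])
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

def giant_component_size(adj, removed):
    """Largest connected component after deleting the 'removed' node set (BFS)."""
    best, seen = 0, set(removed)
    for start in adj:
        if start in seen:
            continue
        size, q = 0, deque([start])
        seen.add(start)
        while q:
            u = q.popleft(); size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v); q.append(v)
        best = max(best, size)
    return best

rng = random.Random(7)
g = barabasi_albert(300, 2, rng)
k = 30                                             # remove 10% of nodes
hubs = sorted(g, key=lambda v: len(g[v]), reverse=True)[:k]
randoms = rng.sample(list(g), k)
giant_targeted = giant_component_size(g, hubs)     # degree-targeted attack
giant_random = giant_component_size(g, randoms)    # random failure
```

Removing the highest-degree hubs shrinks the giant component far more than removing the same number of random nodes, which is the asymmetry that attack and defence strategies for networks of networks must account for.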
Article
This article analyzes the sociodemographic network characteristics and antecedent behaviors of 119 lone-actor terrorists. This marks a departure from existing analyses by largely focusing upon behavioral aspects of each offender. This article also examines whether lone-actor terrorists differ based on their ideologies or network connectivity. The analysis leads to seven conclusions. There was no uniform profile identified. In the time leading up to most lone-actor terrorist events, other people generally knew about the offender's grievance, extremist ideology, views, and/or intent to engage in violence. A wide range of activities and experiences preceded lone actors' plots or events. Many but not all lone-actor terrorists were socially isolated. Lone-actor terrorists regularly engaged in a detectable and observable range of activities with a wider pressure group, social movement, or terrorist organization. Lone-actor terrorist events were rarely sudden and impulsive. There were distinguishable behavioral differences between subgroups. The implications for policy conclude this article.
Article
We show that abrupt structural transitions can arise in functionally optimal networks, driven by small changes in the level of transport congestion. Our results offer an explanation as to why so many diverse species of network structure arise in nature (e.g., fungal systems) under essentially the same environmental conditions. Our findings are based on an exactly solvable model system which mimics a variety of biological and social networks. We then extend our analysis by introducing a renormalization scheme involving cost motifs, to describe analytically the average shortest path across multiple-ring-and-hub networks. As a consequence, we uncover a "skin effect" whereby the structure of the inner multi-ring core can cease to play any role in terms of determining the average shortest path across the network.
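The quantity the renormalization scheme targets, the average shortest path across ring-and-hub networks, can be illustrated numerically. This is a minimal sketch with assumed toy sizes, not the paper's exact model: adding a single hub to a ring collapses the mean path length from linear growth in the ring size to a small bounded value.

```python
from collections import deque

def avg_shortest_path(adj):
    """Mean shortest-path length over all ordered node pairs (BFS from each node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        total += sum(d for v, d in dist.items() if v != src)
        pairs += len(dist) - 1
    return total / pairs

def ring(n):
    """Plain cycle of n nodes."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def ring_with_hub(n):
    """Same ring, plus one central hub connected to every ring node."""
    adj = ring(n)
    adj[n] = set(range(n))
    for i in range(n):
        adj[i].add(n)
    return adj

print(avg_shortest_path(ring(20)))          # grows linearly with n
print(avg_shortest_path(ring_with_hub(20))) # bounded: the hub short-cuts paths
```

The "skin effect" described in the abstract concerns the multi-ring generalization of exactly this construction, where paths routed through hubs make the inner core irrelevant to the network-wide average.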
Article
Finding facts about fake news
There was a proliferation of fake news during the 2016 election cycle. Grinberg et al. analyzed Twitter data by matching Twitter accounts to specific voters to determine who was exposed to fake news, who spread fake news, and how fake news interacted with factual news (see the Perspective by Ruths). Fake news accounted for nearly 6% of all news consumption, but it was heavily concentrated: only 1% of users were exposed to 80% of fake news, and 0.1% of users were responsible for sharing 80% of fake news. Interestingly, fake news was most concentrated among conservative voters. Science, this issue p. 374; see also p. 348
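The headline statistics above (1% of users exposed to 80% of fake news) describe a heavy-tailed concentration, and the sketch below shows how such a share is computed from per-user counts. The data here are synthetic and purely illustrative, not the study's data.

```python
def share_accounting_for(counts, target=0.8):
    """Smallest fraction of users whose summed counts reach `target`
    of the overall total (users sorted by count, descending)."""
    counts = sorted(counts, reverse=True)
    total = sum(counts)
    running = 0
    for k, c in enumerate(counts, start=1):
        running += c
        if running >= target * total:
            return k / len(counts)
    return 1.0

# Synthetic heavy-tailed exposure counts (illustrative only): a Zipf-like
# profile in which a handful of accounts dominate overall exposure.
counts = [1000 // rank for rank in range(1, 501)]
print(share_accounting_for(counts, 0.8))  # small fraction carries 80% of exposure
```

Under a heavy-tailed profile like this, the returned fraction is far below the 80% one would see if exposure were spread uniformly, which is the qualitative pattern the study reports.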
Article
Public interest in, and policy debates surrounding, the role of the Internet in terrorist activities are increasing. Criminology has said very little on the matter. By using a unique data set of 223 convicted United Kingdom–based terrorists, this article focuses on how they used the Internet in the commission of their crimes. As most samples of terrorist offenders vary in terms of capabilities (lone-actor vs. group offenders) and criminal sophistication (improvised explosive devices vs. stabbings), we tested whether the affordances they sought from the Internet significantly differed. The results suggest that extreme-right-wing individuals, those who planned an attack (as opposed to merely providing material support), conducted a lethal attack, committed an improvised explosive device (IED) attack, committed an armed assault, acted within a cell, attempted to recruit others, and engaged in nonvirtual network activities and nonvirtual place interactions were significantly more likely to learn online compared with those who did not engage in these behaviors. Those undertaking unarmed assaults were significantly less likely to display online learning. The results also suggested that extreme-right-wing individuals who perpetrated an IED attack, associated with a wider network, attempted to recruit others, and engaged in nonvirtual network activities and nonvirtual place interactions were significantly more likely to communicate online with co-ideologues.
Article
Tackling the advance of online threats
Online support for adversarial groups such as Islamic State (ISIS) can turn local threats into global ones and attract new recruits and funding. Johnson et al. analyzed data collected on ISIS-related websites involving 108,086 individual followers between 1 January and 31 August 2015. They developed a statistical model aimed at identifying behavioral patterns among online supporters of ISIS and used this information to predict the onset of major violent events. Sudden escalation in the number of ISIS-supporting ad hoc web groups (“aggregates”) preceded the onset of violence in a way that would not have been detected by looking at social media references to ISIS alone. The model suggests how the development and evolution of such aggregates can be blocked. Science, this issue p. 1459
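The escalation signal described above - a sudden rise in the number of newly created aggregates preceding violence - can be caricatured with a simple baseline-ratio detector. This is a hedged sketch on synthetic counts; it is not the statistical model the authors fitted, only an illustration of flagging escalation in a count series.

```python
def escalation_points(counts, window=5, factor=2.0):
    """Indices where the day's new-aggregate count exceeds `factor` times
    the mean of the preceding `window` days - a crude escalation flag,
    not the model used in the study."""
    flags = []
    for t in range(window, len(counts)):
        baseline = sum(counts[t - window:t]) / window
        if baseline > 0 and counts[t] > factor * baseline:
            flags.append(t)
    return flags

# Illustrative daily counts of newly created aggregates (synthetic data):
daily = [2, 3, 2, 3, 2, 3, 2, 9, 11, 2, 3]
print(escalation_points(daily))  # → [7, 8]
```

Days 7 and 8 stand out because their counts jump well above the quiet baseline, mirroring the qualitative pattern the study reports: escalation in aggregate creation, not in raw ISIS mentions, precedes major events.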
Book
This book provides the first empirical analysis of lone-actor terrorist behaviour. Based upon a unique dataset of 111 lone actors that catalogues the life span of the individual’s development, the book contains important insights into what an analysis of their behaviours might imply for practical interventions aimed at disrupting or even preventing attacks. It adopts insights and methodologies from criminology and forensic psychology to provide a holistic analysis of the behavioural underpinnings of lone-actor terrorism. By focusing upon the behavioural aspects of each offender and by analysing a variety of case studies, including Anders Breivik, Ted Kaczynski, Timothy McVeigh and David Copeland, this work marks a pointed departure from previous research in the field. It seeks to answer the following key questions: Is there a lone-actor terrorist profile and how do they differ? What behaviours did the lone-actor terrorist engage in prior to his/her attack and is there a common behavioural trajectory into lone-actor terrorism? How ‘lone’ do lone-actor terrorists tend to be? What role, if any, does the internet play? What role, if any, does mental illness play? This book will be of much interest to students of terrorism/counter-terrorism studies, political violence, criminology, forensic psychology and security studies in general.