LETTER https://doi.org/10.1038/s41586-019-1494-7
Hidden resilience and adaptive dynamics of the
global online hate ecology
N. F. Johnson1*, R. Leahy1, N. Johnson Restrepo1, N. Velasquez2, M. Zheng3, P. Manrique3, P. Devkota4 & S. Wuchty4
Online hate and extremist narratives have been linked to abhorrent real-world events, including a current surge in hate crimes1–6 and an alarming increase in youth suicides that result from social media vitriol7; inciting mass shootings such as the 2019 attack in Christchurch, stabbings and bombings8–11; recruitment of extremists12–16, including entrapment and sex-trafficking of girls as fighter brides17; threats against public figures, including the 2019 verbal attack against an anti-Brexit politician, and hybrid (racist–anti-women–anti-immigrant) hate threats against a US member of the British royal family18; and renewed anti-western hate in the 2019 post-ISIS landscape associated with support for Osama Bin Laden's son and Al Qaeda. Social media platforms seem to be losing the battle against online hate19,20 and urgently need new insights.
Here we show that the key to understanding the resilience of online
hate lies in its global network-of-network dynamics. Interconnected
hate clusters form global ‘hate highways’ that—assisted by collective
online adaptations—cross social media platforms, sometimes using
‘back doors’ even after being banned, as well as jumping between
countries, continents and languages. Our mathematical model
predicts that policing within a single platform (such as Facebook)
can make matters worse, and will eventually generate global ‘dark
pools’ in which online hate will flourish. We observe the current
hate network rapidly rewiring and self-repairing at the micro level
when attacked, in a way that mimics the formation of covalent bonds
in chemistry. This understanding enables us to propose a policy
matrix that can help to defeat online hate, classified by the preferred
(or legally allowed) granularity of the intervention and top-down
versus bottom-up nature. We provide quantitative assessments for
the effects of each intervention. This policy matrix also offers a tool
for tackling a broader class of illicit online behaviours21,22 such as
financial fraud.
Current strategies to defeat online hate tend towards two ends of the scale: a microscopic approach that seeks to identify 'bad' individual(s) in the sea of online users1,14,16, and a macroscopic approach that bans entire ideologies, which results in allegations of stifling free speech23. These two approaches are equivalent to trying to understand how water boils by looking for a bad particle in a sea of billions (even though there is no such particle in phase transitions24), or the macroscopic viewpoint that the entire system is to blame (akin to thermodynamics24). Yet the correct science behind extended physical phenomena24 lies at the mesoscale, in the self-organized cluster dynamics of the developing correlations, and the same is thought to be true for many social science settings25–27.
A better understanding of how the ecology of online hate evolves could create more effective intervention policies. Using entirely public data from different social media platforms, countries and languages, we find that online hate thrives globally through self-organized, mesoscale clusters that interconnect to form a resilient network-of-networks of hate highways across platforms, countries and languages (Fig. 1). Our mathematical theory shows why single-platform policing (for example, by Facebook) can be ineffective (Fig. 2) and may even make things worse. We find empirically that when attacked, the online hate ecology can quickly adapt and self-repair at the micro level, akin to the formation of covalent bonds in chemistry (Fig. 3). We leave a detailed study of the underlying social networks to future work because our focus here is on the general cross-platform behaviour. Knowledge of these features of online hate enables us to propose a set of interventions to thwart it (Fig. 4).
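The micro-level self-repair summarized above can be caricatured in a few lines of code. This is a toy sketch under our own simplifying assumption, not the paper's mathematical model: clusters are sets of user ids, and when one cluster is banned its displaced members migrate into the surviving clusters with which they already share members (a rough analogue of the covalent-bond picture).

```python
# Toy model of micro-level self-repair after a cluster ban.
# Cluster names and user ids are invented placeholders.
clusters = {
    "A": {1, 2, 3, 4},
    "B": {4, 5, 6},    # overlaps "A" via user 4
    "C": {3, 7, 8},    # overlaps "A" via user 3
    "D": {9, 10},      # no overlap with "A"
}

def ban_and_repair(name, clusters):
    """Remove one cluster; its members 're-bond' into every surviving
    cluster they already share at least one member with."""
    displaced = clusters.pop(name)
    for members in clusters.values():
        if members & displaced:   # an existing social tie exists
            members |= displaced  # displaced users migrate there
    return clusters

ban_and_repair("A", clusters)
print(sorted(clusters["B"]), sorted(clusters["D"]))
# → [1, 2, 3, 4, 5, 6] [9, 10]
```

In this caricature the ban leaves the overall membership almost intact: only clusters with no pre-existing tie to the banned cluster (here "D") are unaffected, while the others absorb the displaced users.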
Our analysis of online clusters does not require any information about individuals, just as information about a specific molecule of water is not required to describe the bubbles (that is, clusters of correlated molecules) that form in boiling water. Online clusters such as groups, communities and pages are a popular feature of platforms such as Facebook and VKontakte, which is based in central Europe, has hundreds of millions of users worldwide, and had a crucial role in previous extremist activity27. Such online clusters allow several individual users to self-organize around a common interest27 and they collectively self-police to remove trolls, bots and adverse opinions. Some people find it attractive to join a cluster that promotes hate because its social structure reduces the risk of being trolled or confronted by opponents.
Even on platforms that do not have formal groups (for example, Telegram), quasi-groups can be formed. Although Twitter has allowed
some notable insights26, we do not consider it here as its open-follower
structure does not fully capture the tendency of humans to form into
tight-knit social clusters (such as VKontakte groups) in which they
can develop thoughts without encountering opposition. Our online
cluster search methodology generalizes that previously described27 to
multiple social media platforms and can be repeated for any hate topic
(see Methods for full details).
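As a rough illustration of what a cross-platform cluster search can look like, the sketch below runs a breadth-first 'snowball' crawl over a toy link graph. The platform tags and cluster labels are invented placeholders, and this is only our schematic reading of a snowball search; the actual methodology, including how clusters are identified and classified, is described in the Methods.

```python
from collections import deque

# Toy cross-platform link graph: each cluster (platform, name) lists the
# clusters it hyperlinks to. All names here are hypothetical.
links = {
    ("vk", "seedA"):   [("vk", "groupB"), ("fb", "pageC")],
    ("vk", "groupB"):  [("gab", "groupD")],
    ("fb", "pageC"):   [("vk", "seedA")],
    ("gab", "groupD"): [],
}

def snowball(seeds, links, max_hops=2):
    """Breadth-first snowball search: follow cluster-to-cluster links
    outward from the seed clusters, up to max_hops, across platforms."""
    found = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        cluster, hops = frontier.popleft()
        if hops == max_hops:
            continue  # do not expand beyond the hop limit
        for nxt in links.get(cluster, []):
            if nxt not in found:
                found.add(nxt)
                frontier.append((nxt, hops + 1))
    return found

network = snowball([("vk", "seedA")], links)
print(sorted(network))
```

Starting from a single seed on one platform, the crawl reaches clusters on two other platforms within two hops, which is the sense in which the search generalizes naturally to multiple social media platforms.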
The global hate ecology that we find flourishing online is shown in Fig. 1a, b. The highly interconnected network-of-networks28–30
mixes hate narratives across themes (for example, anti-Semitic,
anti-immigrant, anti-LGBT+), languages, cultures and platforms. This online mixing manifests itself in the 2019 attack in
Christchurch: the presumed shooter was Australian, the attack was
in New Zealand, and the guns carried messages in several European
languages on historical topics that are mentioned in online hate
clusters across continents. We uncover hate clusters of all sizes—for
example, the hate-cluster distribution for the ideology of the Ku
Klux Klan (KKK) on VKontakte has a high goodness-of-fit value for
a power-law distribution (Extended Data Fig. 1). This suggests that
the online hate ecology is self-organized, because it would be almost
impossible to engineer this distribution using top-down control.
The estimated power-law exponent is consistent with a sampling of anti-western hate clusters as well as the online ecology of financial fraud21, suggesting that our findings and policy suggestions can help to tackle a broader class of illicit online behaviours21,22.
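For readers who want to fit a power law to their own cluster-size data, the following is a minimal sketch using the standard continuous maximum-likelihood estimator of the exponent, alpha = 1 + n / sum ln(x_i / x_min), popularized by Clauset, Shalizi and Newman. The synthetic sizes below are purely illustrative; a full analysis along the lines referenced in the paper would also select x_min and run a goodness-of-fit test.

```python
import math
import random

def powerlaw_alpha(sizes, xmin=1.0):
    """Continuous maximum-likelihood exponent estimate:
    alpha = 1 + n / sum(ln(x_i / xmin)) over the tail x_i >= xmin."""
    tail = [s for s in sizes if s >= xmin]
    return 1.0 + len(tail) / sum(math.log(s / xmin) for s in tail)

# Synthetic cluster sizes drawn from p(s) ~ s^(-2.5), s >= 1, via
# inverse-transform sampling (illustrative data, not the paper's).
random.seed(0)
alpha_true = 2.5
sizes = [(1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
         for _ in range(20_000)]

alpha_hat = powerlaw_alpha(sizes)
print(f"estimated exponent: {alpha_hat:.2f}")  # close to 2.5
```

With 20,000 samples the estimate concentrates tightly around the true exponent, which is why a clean power-law fit over several decades of cluster sizes is informative about the underlying (self-organized rather than top-down) growth process.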
We observe operationally independent platforms—that are also commercial competitors—becoming unwittingly coupled through dynamical, self-organized adaptations of the global hate-cluster networks. This resilience helps the hate ecology to recover quickly after the banning of single platforms. The three types of adaptation bridging VKontakte
1Physics Department, George Washington University, Washington, DC, USA. 2Elliot School of International Affairs, George Washington University, Washington, DC, USA. 3Physics Department,
University of Miami, Coral Gables, FL, USA. 4Computer Science Department, University of Miami, Coral Gables, FL, USA. *e-mail: neiljohnson@gwu.edu