Internet, social media and online hate speech. Systematic review
Sergio Andrés Castaño-Pulgarín a,*, Natalia Suárez-Betancur b, Luz Magnolia Tilano Vega c, Harvey Mauricio Herrera López d

a Psychology Department, Corporación Universitaria Minuto de Dios-UNIMINUTO, Colombia
b Corporación para la Atención Psicosocial CORAPCO, Medellín, Colombia
c Psychology Department, Universidad de San Buenaventura, Medellín, Colombia
d Psychology Department, Universidad de Nariño, Pasto, Colombia
ARTICLE INFO
Keywords:
Cyberhate
Internet
Online hate speech
Social Networks
ABSTRACT
This systematic review aimed to explore research papers on how the Internet and social media may, or may not, constitute an opportunity for online hate speech. Of the 2389 papers found in the searches, 67 studies were eligible for analysis. We included articles that addressed online hate speech or cyberhate published between 2015 and 2019. A meta-analysis could not be conducted due to the broad diversity of studies and measurement units. The reviewed studies provided exploratory data about the Internet and social media as a space for online hate speech, types of cyberhate, terrorism as an online hate trigger, online hate expressions, and the most common methods used to assess online hate speech. As a general consensus, cyberhate is conceptualized as the use of violent, aggressive or offensive language, focused on a specific group of people who share a common attribute, such as religion, race, gender, sex or political affiliation, through the use of the Internet and social networks, based on a power imbalance, which can be carried out repeatedly, systematically and uncontrollably, through digital media and often motivated by ideologies.
1. Introduction
Cyberspace offers freedom of communication and expression of opinion. However, social media are regularly misused to spread violent messages, comments, and hateful speech. This has been conceptualized as online hate speech, defined as any communication that disparages a person or a group on the basis of characteristics such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or political affiliation (Zhang & Luo, 2018).
The urgency of this matter has been increasingly recognized (Gambäck & Sikdar, 2017). In the European Union (EU), 80% of people have encountered hate speech online and 40% have felt attacked or threatened via Social Network Sites [SNS] (Gagliardone, Gal, Alves, & Martinez, 2015).
Its main consequences include harm to social groups by creating an environment of prejudice and intolerance, fostering discrimination and hostility, and, in severe cases, facilitating violent acts (Gagliardone et al., 2015); impoliteness, pejorative terms, vulgarity, or sarcasm (Papacharissi, 2004); incivility, which includes behaviors that threaten democracy, deny people their personal freedoms, or stereotype social groups (Papacharissi, 2004); and offline hate speech expressed as direct aggression against political ideologies, religious groups or ethnic minorities (Anderson, Brossard, Scheufele, Xenos, & Ladwig, 2014; Coe, Kenski, & Rains, 2014). For example, racially and ethnically centered rumors can lead to ethnic violence, and offended individuals may be threatened because of their group identities (Bhavnani, Findley, & Kuklinski, 2009).
One concept that can explain online hate speech is social deviance. This term encompasses all behaviors, from minor norm violations to law-breaking acts against others, and frames online hate as an act of deviant communication because it violates shared cultural standards, rules, or norms of social interaction in social group contexts (Henry, 2009).
Among these norm-violating behaviors we can identify defamation (Coe et al., 2014), calls for violence (Hanzelka & Schmidt, 2017), agitation through provocative statements that debate political or social issues while displaying discriminatory views (Bhavnani et al., 2009), and rumors and conspiracy theories (Sunstein & Vermeule, 2009).
These issues make online hate speech an important area of research. In fact, there are many theoretical gaps in the explanation of this behavior, and there are not enough empirical data
* Corresponding author. E-mail address: scastanopul@uniminuto.edu.co (S.A. Castaño-Pulgarín).
https://doi.org/10.1016/j.avb.2021.101608
Received 14 July 2020; Received in revised form 26 January 2021; Accepted 23 March 2021