Internet, social media and online hate speech. Systematic review
Sergio Andrés Castaño-Pulgarín a,*, Natalia Suárez-Betancur b, Luz Magnolia Tilano Vega c, Harvey Mauricio Herrera López d
a Psychology Department, Corporación Universitaria Minuto de Dios-UNIMINUTO, Colombia
b Corporación para la Atención Psicosocial CORAPCO, Medellín, Colombia
c Psychology Department, Universidad de San Buenaventura, Medellín, Colombia
d Psychology Department, Universidad de Nariño, Pasto, Colombia
ARTICLE INFO
Keywords:
Cyberhate
Internet
Online hate speech
Social Networks
ABSTRACT
This systematic review explored research on how the Internet and social media may, or may not, provide an opportunity for online hate speech. Of the 2389 papers found in the searches, 67 studies were eligible for analysis. We included articles addressing online hate speech or cyberhate published between 2015 and 2019. A meta-analysis could not be conducted due to the broad diversity of studies and units of measurement. The reviewed studies provided exploratory data about the Internet and social media as a space for online hate speech, types of cyberhate, terrorism as a trigger of online hate, expressions of online hate, and the most common methods used to assess online hate speech. As a general consensus, cyberhate is conceptualized as the use of violent, aggressive, or offensive language, focused on a specific group of people who share a common property, such as religion, race, gender, sex, or political affiliation, through the use of the Internet and social networks; it is based on a power imbalance, can be carried out repeatedly, systematically, and uncontrollably through digital media, and is often motivated by ideologies.
1. Introduction
Cyberspace offers freedom of communication and expression of opinion. However, social media are regularly misused to spread violent messages, comments, and hateful speech. This has been conceptualized as online hate speech, defined as any communication that disparages a person or a group on the basis of characteristics such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or political affiliation (Zhang & Luo, 2018).
The urgency of this matter has been increasingly recognized (Gambäck & Sikdar, 2017). In the European Union (EU), 80% of people have encountered hate speech online and 40% have felt attacked or threatened via Social Network Sites [SNS] (Gagliardone, Gal, Alves, & Martinez, 2015).
Among its main consequences we find harm against social groups through the creation of an environment of prejudice and intolerance, fostering discrimination and hostility, and in severe cases facilitating violent acts (Gagliardone et al., 2015); impoliteness, pejorative terms, vulgarity, or sarcasm (Papacharissi, 2004); incivility, which includes behaviors that threaten democracy, deny people their personal freedoms, or stereotype social groups (Papacharissi, 2004); and offline hate speech expressed as direct aggression against political ideologies, religious groups, or ethnic minorities (Anderson, Brossard, Scheufele, Xenos, & Ladwig, 2014; Coe, Kenski, & Rains, 2014). For example, racially and ethnically centered rumors can lead to ethnic violence, and offended individuals might be threatened because of their group identities (Bhavnani, Findley, & Kuklinski, 2009).
A concept that can explain online hate speech is social deviance. This term encompasses all behaviors, from minor norm violations to law-breaking acts against others, and treats online hate as an act of deviant communication because it violates shared cultural standards, rules, or norms of social interaction in social group contexts (Henry, 2009).
Among these norm-violating behaviors we can identify defamation (Coe et al., 2014), calls for violence (Hanzelka & Schmidt, 2017), agitation through provoking statements that debate political or social issues while displaying discriminatory views (Bhavnani et al., 2009), and rumors and conspiracy theories (Sunstein & Vermeule, 2009).
These issues make the investigation of online hate speech an important area of research. In fact, there are many theoretical gaps in the explanation of this behavior and there are not enough empirical data
* Corresponding author.
E-mail address: scastanopul@uniminuto.edu.co (S.A. Castaño-Pulgarín).
Contents lists available at ScienceDirect
Aggression and Violent Behavior
journal homepage: www.elsevier.com/locate/aggviobeh
https://doi.org/10.1016/j.avb.2021.101608
Received 14 July 2020; Received in revised form 26 January 2021; Accepted 23 March 2021
... The escalation of online hate speech presents a significant threat to individuals and society [23,75]. With the proliferation of social media, people now have access to a vast audience to disseminate harmful content that attacks individuals or groups based on their race [31,73,80], gender [35,49,124], religion [13,20,84], sexual orientation [33,34,46], or disability status [120,121,126]. These topics represent some of the most common targets of online hate speech [90]. ...
... Therefore, it is crucial to examine how different topics of hate speech affect the perception of the people who encounter them, especially those who write counterspeech to challenge online hate. In this study, we categorize the topics of hate speech into five groups: race [31,73,80], gender [35,49,124], religion [13,20,84], sexual orientation [33,34,46], and disability status [120,121,126], as these topics represent some of the most common targets of online hate speech [90]. We investigate how these topics influence the perception of counterspeech writers. ...
Preprint
Full-text available
This study investigates how online counterspeech, defined as direct responses to harmful online content with the intention of dissuading the perpetrator from further engaging in such behavior, is influenced by the match between the target of the hate speech and a counterspeech writer's identity. Using a sample of 458 English-speaking adults who responded to online hate speech posts covering race, gender, religion, sexual orientation, and disability status, our research reveals that the match between a hate post's topic and a counter-speaker's identity (topic-identity match, or TIM) shapes perceptions of hatefulness and experiences with counterspeech writing. Specifically, TIM significantly increases the perceived hatefulness of posts related to race and sexual orientation. TIM generally boosts counter-speakers' satisfaction and the perceived effectiveness of their responses, and reduces the difficulty of crafting them, with the exception of gender-focused hate speech. In addition, counterspeech that displayed more empathy, was longer, and had a more positive tone was associated with higher ratings of effectiveness and perceptions of hatefulness. Prior experience with, and openness to, AI writing assistance tools like ChatGPT correlate negatively with perceived difficulty in writing online counterspeech. Overall, this study contributes insights into linguistic and identity-related factors shaping counterspeech on social media. The findings inform the development of supportive technologies and moderation strategies for promoting effective responses to online hate.
... Grasping the dynamics and quality of discussions shaped by the interconnected, algorithm-driven digital landscape is crucial in this context. Although the global shift to digital media has been associated with declining trust in politics [11] and mainstream media [12], as well as with the rise of populism [13], hate speech [14,15], and increasing polarization [16,17], it has also democratized access to information, enhanced political participation [18][19][20], and has the potential to improve political knowledge [21,22]. The existing literature presents conflicting views on the influence of digital media on political expression (see [5] for a comprehensive review). ...
Preprint
Full-text available
Quantifying how individuals react to social influence is crucial for tackling collective political behavior online. While many studies of opinion in public forums focus on social feedback, they often overlook the potential for human interactions to result in self-censorship. Here, we investigate political deliberation in online spaces by exploring the hypothesis that individuals may refrain from expressing minority opinions publicly due to being exposed to toxic behavior. Analyzing conversations under YouTube videos from six prominent US news outlets around the 2020 US presidential elections, we observe patterns of self-censorship signaling the influence of peer toxicity on users' behavior. Using hidden Markov models, we identify a latent state consistent with toxicity-driven silence. Such state is characterized by reduced user activity and a higher likelihood of posting toxic content, indicating an environment where extreme and antisocial behaviors thrive. Our findings offer insights into the intricacies of online political deliberation and emphasize the importance of considering self-censorship dynamics to properly characterize ideological polarization in digital spheres.
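The latent-state analysis described above rests on hidden Markov models, where an unobserved state (e.g., toxicity-driven silence) is inferred from observable activity. A minimal sketch of Viterbi decoding for a two-state HMM illustrates the core idea; the state names, emission alphabet, and all probabilities below are hypothetical, not taken from the study:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence (log-space)."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor state for s at time t
            prob, prev = max(
                (V[t - 1][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][obs[t]]), p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Hypothetical model: users alternate between an "active" and a "silent"
# regime, emitting coarse daily activity levels ("high"/"low" comment volume).
states = ("active", "silent")
start_p = {"active": 0.7, "silent": 0.3}
trans_p = {"active": {"active": 0.8, "silent": 0.2},
           "silent": {"active": 0.3, "silent": 0.7}}
emit_p = {"active": {"high": 0.7, "low": 0.3},
          "silent": {"high": 0.1, "low": 0.9}}

print(viterbi(["high", "high", "low", "low"], states, start_p, trans_p, emit_p))
# → ['active', 'active', 'silent', 'silent']
```

In practice the transition and emission parameters are not hand-set but estimated from data (e.g., via Baum-Welch); the decoded path is what lets a study label stretches of reduced activity as a distinct latent regime.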
... These platforms are now central to society, serving as tools for dissemination, entertainment, and consumption of information [1][2][3][4]. However, their engagement-driven business models and the dynamics of online interactions have raised concerns about their broader social impacts [5,6], including their role in deepening polarization [7][8][9], propagating misinformation [10][11][12], and amplifying hate speech [13][14][15][16][17]. ...
Preprint
Full-text available
The abundance of information on social media has reshaped public discussions, shifting attention to the mechanisms that drive online discourse. This study analyzes large-scale Twitter (now X) data from three global debates -- Climate Change, COVID-19, and the Russo-Ukrainian War -- to investigate the structural dynamics of engagement. Our findings reveal that discussions are not primarily shaped by specific categories of actors, such as media or activists, but by shared ideological alignment. Users consistently form polarized communities, where their ideological stance in one debate predicts their positions in others. This polarization transcends individual topics, reflecting a broader pattern of ideological divides. Furthermore, the influence of individual actors within these communities appears secondary to the reinforcing effects of selective exposure and shared narratives. Overall, our results underscore that ideological alignment, rather than actor prominence, plays a central role in structuring online discourse and shaping the spread of information in polarized environments.
... Platforms like Twitter and Facebook, which facilitate real-time interactions and content sharing, have become key arenas for public discourse. However, the unregulated nature of user-generated content on these platforms has led to significant challenges, particularly the proliferation of hate speech [2,3]. Defined as any form of communication that denigrates an individual or group based on characteristics such as race, religion, gender, or ethnicity, hate speech poses profound social and legal concerns [4]. ...
Article
Full-text available
Hate speech, characterized by language that incites discrimination, hostility, or violence against individuals or groups based on attributes such as race, religion, or gender, has become a critical issue on social media platforms. In Indonesia, unique linguistic complexities, such as slang, informal expressions, and code-switching, complicate its detection. This study evaluates the performance of Support Vector Machine (SVM), Naive Bayes, and IndoBERT models for multi-label hate speech detection on a dataset of 13,169 annotated Indonesian tweets. The results show that IndoBERT outperforms SVM and Naive Bayes across all metrics, achieving an accuracy of 93%, F1-score of 91%, precision of 91%, and recall of 91%. IndoBERT's contextual embeddings effectively capture nuanced relationships and complex linguistic patterns, offering superior performance in comparison to traditional methods. The study addresses dataset imbalance using BERT-based data augmentation, leading to significant metric improvements, particularly for SVM and Naive Bayes. Preprocessing steps proved essential in standardizing the dataset for effective model training. This research underscores IndoBERT's potential for advancing hate speech detection in non-English, low-resource languages. The findings contribute to the development of scalable, language-specific solutions for managing harmful online content, promoting safer and more inclusive digital environments.
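The classical baselines compared in that study (SVM, Naive Bayes) operate on word-count features rather than contextual embeddings. A from-scratch multinomial Naive Bayes sketch shows the core computation such baselines perform; the toy training texts and labels are invented for illustration and have nothing to do with the study's Indonesian tweet dataset:

```python
import math
from collections import Counter

class MultinomialNB:
    """Minimal multinomial Naive Bayes over whitespace-tokenized text."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing constant

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, text):
        scores = {}
        n = sum(self.class_counts.values())
        for c in self.classes:
            # log prior + sum of smoothed log likelihoods for in-vocabulary words
            score = math.log(self.class_counts[c] / n)
            total = sum(self.word_counts[c].values())
            for w in text.lower().split():
                if w in self.vocab:
                    score += math.log(
                        (self.word_counts[c][w] + self.alpha)
                        / (total + self.alpha * len(self.vocab))
                    )
            scores[c] = score
        return max(scores, key=scores.get)

# Toy, hypothetical training data (not the study's corpus)
texts = ["i hate that group", "they should disappear",
         "lovely day today", "great match yesterday"]
labels = ["hate", "hate", "neutral", "neutral"]
clf = MultinomialNB().fit(texts, labels)
print(clf.predict("hate that match"))  # → hate
```

The gap the study reports between such bag-of-words models and IndoBERT comes from exactly this limitation: word counts cannot capture slang, code-switching, or context, whereas transformer embeddings can.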
Article
Understanding the impact of digital platforms on user behavior presents foundational challenges, including issues related to polarization, misinformation dynamics, and variation in news consumption. Comparative analyses across platforms and over different years can provide critical insights into these phenomena. This study investigates the linguistic characteristics of user comments over 34 y, focusing on their complexity and temporal shifts. Using a dataset of approximately 300 million English comments from eight diverse platforms and topics, we examine user communications’ vocabulary size and linguistic richness and their evolution over time. Our findings reveal consistent patterns of complexity across social media platforms and topics, characterized by a nearly universal reduction in text length, diminished lexical richness, and decreased repetitiveness. Despite these trends, users consistently introduce new words into their comments at a nearly constant rate. This analysis underscores that platforms only partially influence the complexity of user comments but, instead, it reflects a broader pattern of linguistic change driven by social triggers, suggesting intrinsic tendencies in users’ online interactions comparable to historically recognized linguistic hybridization and contamination processes.
Conference Paper
The integration of educational resources into metaverse platforms is an innovative possibility for educational institutions and for society at large. The COVID-19 pandemic accelerated technological advances, intensifying the need for immersive and interactive virtual environments. Recent studies show that such environments offer new opportunities for navigation, overcoming the limitations of the flat Web. This work presents the redesign of a repository of educational resources from the 2D Web to a metaverse platform, highlighting its functionality and benefits. For the near future, the dynamic creation of scenes is planned, so as to keep pace with the repository's usual maintenance operations.
Article
Full-text available
How do individuals behave after the sting of social exclusion on social media? Previous theorizing predicts that, after experiencing exclusion, individuals either engage in activities that reconnect them with others, or, they withdraw from the context. We analyzed data from Twitter ( k = 47,399 posts; N = 2,000 users) and Reddit ( k = 58,442 posts; N = 2,000 users), using relative (un)popularity of users’ own posts (i.e., receiving fewer Likes/upvotes than usual) as an indicator of social exclusion. Both studies found no general increase or decrease in posting latency following exclusion. However, the latency of behaviors aimed at connecting with many others decreased (i.e., posting again quickly), and the latency of behaviors aimed at connecting with specific others increased (i.e., commenting or mentioning others less quickly). Our findings speak in favor of acknowledgment-seeking behavior as a reaction to social exclusion that may be specific to social media contexts.
Article
Full-text available
Social media platforms have become gateways to information and news. These platforms potentially offer discursive arenas where individuals can participate in rational critical discourses, resembling with the public sphere. Nonetheless, the threats linked with social media platforms stymie the public sphere potential of the latter. This article attempts to provide an overview of threats namely disinformation, ideological polarization and concomitant extremism and hate speech propagated through social media platforms. Drawing from multidisciplinary literature, we reflect upon solutions which include (1) strengthening of mainstream and professional journalism; (2) fact-checking; (3) platform-driven and technology-based solutions; (4) law enforcement and social media regulations; and (5) media literacy and care for truth. This article contributes to the literature on strengthening the public sphere potential of social media platforms.
Article
Full-text available
Online hate and extremist narratives have been linked to abhorrent real-world events, including a current surge in hate crimes1–6 and an alarming increase in youth suicides that result from social media vitriol⁷; inciting mass shootings such as the 2019 attack in Christchurch, stabbings and bombings8–11; recruitment of extremists12–16, including entrapment and sex-trafficking of girls as fighter brides¹⁷; threats against public figures, including the 2019 verbal attack against an anti-Brexit politician, and hybrid (racist–anti-women–anti-immigrant) hate threats against a US member of the British royal family¹⁸; and renewed anti-western hate in the 2019 post-ISIS landscape associated with support for Osama Bin Laden’s son and Al Qaeda. Social media platforms seem to be losing the battle against online hate19,20 and urgently need new insights. Here we show that the key to understanding the resilience of online hate lies in its global network-of-network dynamics. Interconnected hate clusters form global ‘hate highways’ that—assisted by collective online adaptations—cross social media platforms, sometimes using ‘back doors’ even after being banned, as well as jumping between countries, continents and languages. Our mathematical model predicts that policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish. We observe the current hate network rapidly rewiring and self-repairing at the micro level when attacked, in a way that mimics the formation of covalent bonds in chemistry. This understanding enables us to propose a policy matrix that can help to defeat online hate, classified by the preferred (or legally allowed) granularity of the intervention and top-down versus bottom-up nature. We provide quantitative assessments for the effects of each intervention. 
This policy matrix also offers a tool for tackling a broader class of illicit online behaviours21,22 such as financial fraud.
Article
Full-text available
As online content continues to grow, so does the spread of hate speech. We identify and examine challenges faced by online automatic approaches for hate speech detection in text. Among these difficulties are subtleties in language, differing definitions on what constitutes hate speech, and limitations of data availability for training and testing of these systems. Furthermore, many recent approaches suffer from an interpretability problem—that is, it can be difficult to understand why the systems make the decisions that they do. We propose a multi-view SVM approach that achieves near state-of-the-art performance, while being simpler and producing more easily interpretable decisions than neural methods. We also discuss both technical and practical challenges that remain for this task.
Article
Full-text available
We characterize the Twitter networks of the major presidential candidates, Donald J. Trump and Hillary R. Clinton, with various American hate groups defined by the US Southern Poverty Law Center (SPLC). We further examined the Twitter networks for Bernie Sanders, Ted Cruz, and Paul Ryan, for 9 weeks around the 2016 election (4 weeks prior to the election and 4 weeks post-election). We carefully account for the observed heterogeneity in the Twitter activity levels across individuals through the null hypothesis of apathetic retweeting that is formalized as a random network model based on the directed, multi-edged, self-looped, configuration model. Our data revealed via a generalized Fisher’s exact test that there were significantly many Twitter accounts linked to SPLC-defined hate groups belonging to seven ideologies (Anti-Government, Anti-Immigrant, Anti-LGBT, Anti-Muslim, Alt-Right, White-Nationalist and Neo-Nazi) and also to @realDonaldTrump relative to the accounts of the other four politicians. The exact hypothesis test uses Apache Spark’s distributed sort and join algorithms to produce independent samples in a fully scalable way from the null model. Additionally, by exploring the empirical Twitter network we found that significantly more individuals had the fewest retweet degrees of separation simultaneously from Trump and each one of these seven hateful ideologies relative to the other four politicians. We conduct this exploration via a geometric model of the observed retweet network, distributed vertex programs in Spark’s GraphX library and a visual summary through neighbor-joined population retweet ideological trees. Remarkably, less than 5% of individuals had three or fewer retweet degrees of separation simultaneously from Trump and one of several hateful ideologies relative to the other four politicians. 
Taken together, these findings suggest that Trump may have indeed possessed unique appeal to individuals drawn to hateful ideologies; however, such individuals constituted a small fraction of the sampled population.
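The null model in the study above compares observed retweet links against degree-preserving randomizations of the network (a directed configuration model). A stdlib sketch of the underlying operation, a double-edge swap that shuffles edge targets while preserving every node's in- and out-degree, conveys the idea; the edge list and node names are hypothetical, and the study's actual implementation used Apache Spark at scale:

```python
import random
from collections import Counter

def directed_double_edge_swap(edges, n_swaps, seed=0):
    """Randomize a directed edge list while preserving each node's
    in-degree and out-degree. Self-loops and multi-edges may appear,
    as allowed in a directed configuration model."""
    rng = random.Random(seed)
    edges = list(edges)
    for _ in range(n_swaps):
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        # Swap targets, keep sources: out-degrees and in-degrees unchanged
        edges[i], edges[j] = (a, d), (c, b)
    return edges

# Hypothetical retweet edges: (retweeter, retweeted account)
observed = [("u1", "pol_a"), ("u2", "pol_a"), ("u3", "acct_x"), ("u4", "media")]
null = directed_double_edge_swap(observed, n_swaps=100)

# Degree sequences are preserved under the swap
assert Counter(s for s, _ in null) == Counter(s for s, _ in observed)
assert Counter(t for _, t in null) == Counter(t for _, t in observed)
```

Repeating the shuffle many times yields an ensemble of null networks; counting how often a co-retweeting pattern appears in the ensemble versus in the observed network is what drives the exact test described in the abstract.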
Article
Full-text available
The peace agreement signed between the Colombian government and the FARC guerrilla in 2016 allowed this group, now constituted as a political party, to field candidates in the parliamentary and presidential elections held in the first half of 2018. This has been harshly criticized by a group of Colombians, who have rejected it in both physical and virtual spaces, to the point of forcing the new party to halt its campaign in public spaces. This article attempts to capture those reactions in two Facebook groups, to identify the communicative elements employed in the interaction, and to analyze whether they contain violent or hate speech, based on digital ethnographic observation and content analysis. To this end, we reviewed posts linked to the announcement of Rodrigo Londoño 'Timochenko''s candidacy for the presidency of Colombia in November 2017, which made it possible to identify factors that favor the emergence of violent and hate speech, as well as a lack of interaction and of use of the communicative tools characteristic of the digital environment in this type of message. Keywords: violent speech; hate speech; Colombian peace process; Facebook interaction.
Article
Full-text available
This paper sets out quantitative findings from a research project examining the dynamics of online counter-narratives against hate speech, focusing on #StopIslam, a hashtag that spread racialized hate speech and disinformation directed towards Islam and Muslims and which trended on Twitter after the March 2016 terror attacks in Brussels. We elucidate the dynamics of the counter-narrative through contrasting it with the affordances of the original anti-Islamic narrative it was trying to contest. We then explore the extent to which each narrative was taken up by the mainstream media. Our findings show that actors who disseminated the original hashtag with the most frequency were tightly-knit clusters of self-defined conservative actors based in the US. The hashtag was also routinely used in relation to other pro-Trump, anti-Clinton hashtags in the run-up to the 2016 presidential election, forming part of a broader, racialized, anti-immigration narrative. In contrast, the most widely shared and disseminated messages were attempts to challenge the original narrative that were produced by a geographically dispersed network of self-identified Muslims and allies. The counter-narrative was significant in gaining purchase in the wider media ecology associated with this event, due to being reported by mainstream media outlets. We ultimately argue for the need for further research that combines ‘big data’ approaches with a conceptual focus on the broader media ecologies in which counter-narratives emerge and circulate, in order to better understand how opposition to hate speech can be sustained in the face of the tight-knit right-wing networks that often outlast dissenting voices.
Chapter
This chapter reviews the theoretical frameworks and current empirical findings on cyberhate and its impact on children and adolescents. We draw on the sparse empirical literature on the topic and add insights gleaned from closely related lines of inquiry, such as cyberbullying. We focus on the dilemma posed by our First Amendment protections of freedom of speech and the dangers to our youth posed by exposure to cyberhate. We emphasize the importance of directing attention to this topic by researchers, practitioners, and policymakers to protect our youth from this serious online risk.
Article
How does political violence affect popular support for peace? We answer this question by examining Colombia, where in 2016 the people narrowly and unexpectedly voted against a peace agreement designed to end a half century of civil war. Building on research on the impact of political violence on elections as well as research on referendum/initiative voting in the United States, we argue that local experiences with violence and the political context will lead to heightened support for peace. We test these expectations using spatial modeling and a municipal-level data on voting in the 2016 Colombian peace referendum, and find that municipal-level support for the referendum increases with greater exposure to violence and increasing support for President Santos. These results are spatially distributed, so that exposure to violence in one municipality is associated with greater support for the peace referendum in that municipality and also in surrounding areas. Our findings have implications not only for Colombia, but for all post-war votes and other contexts in which referenda and elections have major and/or unexpected results.
Article
In 2017 the Australian Government undertook a national survey to determine public support for the legalisation of same‐sex marriage. This raised concerns the ‘plebiscitary' act may create harms to two groups: LGBTI people and those religious people with strong attachment to heteronormative marriage. Justifying the process, the Government advanced the possibility of civil dialogue generative of understanding. While instances of hate speech in public spaces were reported, this paper examines comparatively private speech during the period. Based on an analysis of posts to relevant Facebook pages, this analysis found opponents to same‐sex marriage were more highly mobilised online, and considerable differences in the character of online debate for and against the proposed changes. Importantly, while uncivil and ‘hate' speech were part of online conversations, the overall quantum of this uncivil discourse was lower than many feared. Additionally, the process did not generate considerable democratic dialogue around policy alternatives and rationales, particularly among ‘Yes' campaign supporters who were more homogenous in their acceptance of dominant campaign framing of the issue than their opponents. Significantly for ongoing public debates about public values like educational access and freedom of expression, opponents to change focused their arguments against same‐sex marriage around a subset of unrelated issues: free speech, religious freedoms, and diversity in public schools.
Article
Online hatred based on attributes, such as origin, race, gender, religion, or sexual orientation, has become a rising public concern across the world. Past research on aggressive behavior suggests strong associations between victimization and perpetration and that toxic online disinhibition and sex might influence this relationship. However, no study investigated both the associations between online hate victimization and perpetration, and the potential moderation effects of toxic online disinhibition and sex on this relationship. To this end, the present study was conducted. The sample consists of 1,480 German 7th to 10th graders from Germany. Results revealed positive associations between online hate victimization and perpetration. Further, the results support the idea that toxic online disinhibition and sex, by way of moderator effects, affect the relationship between online hate victimization and perpetration. Victims of online hate reported more online hate perpetration when they reported higher levels of online disinhibition and less frequent online hate perpetration when they reported lower levels of toxic online disinhibition. Additionally, the relationship between online hate victimization and perpetration was significantly greater among boys than girls. Taken together, our results extend previous findings to online hate involvement among adolescents and substantiates the importance to conduct more research on online hate. In addition, our findings highlight the need for prevention and intervention programs that help adolescents deal with the emerging issue of online hate.