This systematic review aimed to explore the research papers on how the Internet and social media may, or may not, constitute an opportunity for online hate speech. Of the 2,389 papers found in the searches, 67 studies were eligible for analysis. We included articles that addressed online hate speech or cyberhate between 2015 and 2019. A meta-analysis could not be conducted due to the broad diversity of studies and measurement units. The reviewed studies provided exploratory data about the Internet and social media as a space for online hate speech, types of cyberhate, terrorism as an online hate trigger, online hate expressions, and the most common methods to assess online hate speech. There is a general consensus that cyberhate is the use of violent, aggressive, or offensive language, focused on a specific group of people who share a common property, which can be religion, race, gender or sex, or political affiliation, through the use of the Internet and social networks, based on a power imbalance, which can be carried out repeatedly, systematically, and uncontrollably, through digital media and often motivated by ideologies.
Internet, social media and online hate speech. Systematic review
Sergio Andrés Castaño-Pulgarín, Natalia Suárez-Betancur, Luz Magnolia Tilano Vega, Harvey Mauricio Herrera López
Psychology Department, Corporación Universitaria Minuto de Dios-UNIMINUTO, Colombia
Corporación para la Atención Psicosocial CORAPCO, Medellín, Colombia
Psychology Department, Universidad de San Buenaventura, Medellín, Colombia
Psychology Department, Universidad de Nariño, Pasto, Colombia
Keywords: Online hate speech; Social Networks
1. Introduction
Cyberspace offers freedom of communication and opinion expression. However, social media are regularly being misused to spread violent messages, comments, and hateful speech. This has been conceptualized as online hate speech, defined as any communication that disparages a person or a group on the basis of characteristics such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or political affiliation (Zhang & Luo, 2018).
The urgency of this matter has been increasingly recognized
(…ack & Sikdar, 2017). In the European Union (EU), 80% of people
have encountered hate speech online and 40% have felt attacked or
threatened via Social Network Sites [SNS] (Gagliardone, Gal, Alves, &
Martinez, 2015).
Among its main consequences are: harm to social groups through the creation of an environment of prejudice and intolerance, fostering discrimination and hostility and, in severe cases, facilitating violent acts (Gagliardone et al., 2015); impoliteness, pejorative terms, vulgarity, or sarcasm (Papacharissi, 2004); incivility, which includes behaviors that threaten democracy, deny people their personal freedoms, or stereotype social groups (Papacharissi, 2004); and offline hate speech expressed as direct aggressions (Anderson, Brossard, Scheufele, Xenos, & Ladwig, 2014; Coe, Kenski, & Rains, 2014) against political ideologies, religious groups, or ethnic minorities. For example, racial- and ethnic-centered rumors can lead to ethnic violence, and offended individuals might be threatened because of their group identities (Bhavnani, Findley, & Kuklinski, 2009).
A concept that can explain online hate speech is social deviance. This
term encompasses all behaviors, from minor norm-violating to law-
breaking acts against others, and considers online hate as an act of
deviant communication as it violates shared cultural standards, rules, or
norms of social interaction in social group contexts (Henry, 2009).
Among the norm-violating behaviors we can identify: defamation (Coe et al., 2014), calls for violence (Hanzelka & Schmidt, 2017), agitation through provocative statements in debates on political or social issues that display discriminatory views (Bhavnani et al., 2009), and rumors and conspiracy theories (Sunstein & Vermeule, 2009).
These issues make the investigation of online hate speech an important area of research. In fact, there are many theoretical gaps in the explanation of this behavior, and there are not enough empirical data.
Aggression and Violent Behavior
Received 14 July 2020; Received in revised form 26 January 2021; Accepted 23 March 2021
... Hate speech is defined as any form of communication that criticizes a person or group based on characteristics such as race, skin color, ethnicity, nationality, politics, gender, or sexual orientation (such as lesbian, gay, bisexual, or transgender) (Auwal, 2018; Castaño-Pulgarín et al., 2021; Mondal et al., 2017), and religion (Auwal, 2018; Mondal et al., 2017). Individuals who intentionally and repeatedly engage in hate speech fit the cyberbullying criteria listed by Corcoran et al. (2015), Mladenovic et al. (2020), and Thomas et al. (2015). ...
... Consequently, many social media users gather around the post. This concept aligns with hate speech, which is defined as any form of communication criticizing a person or group based on characteristics such as race, skin color, ethnicity, nationality, politics, gender, sexual orientation (e.g., lesbian, gay, bisexual, transgender) (Auwal, 2018; Castaño-Pulgarín et al., 2021; Mondal et al., 2017), and religion (Auwal, 2018; Mondal et al., 2017). ...
The Bawang army phenomenon is newly recognized, and no prior studies have explored it yet. Consequently, understanding this concept necessitates a qualitative research approach. Eight semi-structured interviews were conducted with pertinent stakeholders to probe the research questions. For the first research objective, understanding the Bawang army, the codes were organized into four categories: definitions, reasons, issues, and activities. The terminologies coined by Malaysian netizens, such as Bawang army and 'mak kau hijau', shape the identity of this phenomenon. The second research objective was to classify the Bawang army, determining whether it falls under cyber-bullying or cyber-aggression. In-depth discussions were analyzed against previous literature using the constant-comparative method. Several implications were observed. Firstly, industry practitioners need to exercise greater consideration when creating social media content. Secondly, future research should further investigate the distinct typologies of cyber-bullying and cyber-aggression autonomously.
... Evidence suggests that social media can nurture heated discussions, which often result in the use of offensive and insulting language, thus manifesting into abusive behaviours (Tontodimamma et al., 2021). According to Castaño-Pulgarín et al. (2021), hate speech can be defined as "[…] the use of violent, aggressive or offensive language, focused on a specific group of people who share a common property, which can be religion, race, gender or sex or political affiliation through the use of Internet and Social Networks […]" (p. 1). ...
Social media platforms have become an increasingly popular tool for individuals to share their thoughts and opinions with other people. However, very often people tend to misuse social media posting abusive comments. Abusive and harassing behaviours can have adverse effects on people's lives. This study takes a novel approach to combat harassment in online platforms by detecting the severity of abusive comments, that has not been investigated before. The study compares the performance of machine learning models such as Naïve Bayes, Random Forest, and Support Vector Machine, with deep learning models such as Convolutional Neural Network (CNN) and Bi-directional Long Short-Term Memory (Bi-LSTM). Moreover, in this work we investigate the effect of text pre-processing on the performance of the machine and deep learning models, the feature set for the abusive comments was made using unigrams and bigrams for the machine learning models and word embeddings for the deep learning models. The comparison of the models’ performances showed that the Random Forest with bigrams achieved the best overall performance with an accuracy of (0.94), a precision of (0.91), a recall of (0.94), and an F1 score of (0.92). The study develops an efficient model to detect severity of abusive language in online platforms, offering important implications both to theory and practice.
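The best-performing setup reported in that abstract, a Random Forest over unigram and bigram counts, can be sketched roughly as follows. The tiny corpus, the three severity labels, and the hyperparameters here are invented for illustration; they are not the study's data or tuned settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical labelled comments (invented for illustration only).
comments = [
    "you are a complete idiot",
    "i will find you and hurt you",
    "this post is mildly annoying",
    "go away nobody wants you here",
    "great point thanks for sharing",
    "i hope something terrible happens to you",
]
severity = ["medium", "high", "low", "medium", "low", "high"]

# Unigram + bigram counts feeding a Random Forest, mirroring the
# feature set and classifier family described in the abstract.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(comments, severity)

# Score an unseen comment; the prediction is one of the severity levels.
print(model.predict(["i will hurt you"])[0])
```

In practice the reported scores (accuracy 0.94, F1 0.92) would come from a held-out test split; this sketch omits evaluation for brevity.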
... Additionally, 15.1% reported witnessing "Threats to find, name, or 'dox' someone" on NoFap forums. Violent posts online are concerning for normalizing violent rhetoric generally, but also because they sometimes cause real-world violence (Castaño-Pulgarín et al. 2021; Henri et al. 2012; Patton et al. 2014). This is important because NoFap followers have committed real-world violence that could be partly due to celebrated violence in the NoFap forum. ...
Masturbation abstinence practices have returned to the USA in the form of semen retention communities. Followers on one of these male, anti-masturbation forums, “NoFap”, on Reddit (denoted “r/NoFap”), have engaged in homicidal behaviors that appear to be linked to these sexual beliefs and practices. This study used a systematic search on r/NoFap and two control forums (r/pornfree, and r/stopdrinking) to define a corpus of violent content. The study goals were to describe the nature of threats on r/NoFap and suggest whether the violence might be attributable to sexual deprivation or false beliefs that non-sexual targets caused their violent urges. Of the 421 violent posts identified from September 2011 to September 2022, r/NoFap contained the majority (94.3%). Violent threats on r/NoFap mostly targeted pornographers, women, scientists, specific persons, or any person (i.e., homicidal “rage”). Violent threats against r/NoFap’s own followers were growing most quickly. Violent posts were well-supported with upvotes by other followers in r/NoFap. These data are important because NoFap may represent a growing threat for real-world violence.
... For the researcher, the content of hate speech eliminates or minimizes its communicative character since the messages, when expressed, are no longer received as messages and start to be interpreted and felt as attitudes and behaviors. The urgency of studying this subject has been increasingly recognized, since European Union data showed that around 80% of women reported encountering hate speech and 40% claimed to have been attacked or threatened on social media (Castaño-Pulgarín, Suárez-Betancur, Vega, Harvey, & López, 2021). ...
The popularization of digital technologies, such as social media, has driven remarkable changes in the way citizens participate in public life. On the one hand, they gave power to social actors, who began to act in a new media environment, with a considerable impact on the political and economic spheres. On the other hand, they laid the material foundations for the dissemination of hate speech against vulnerable groups and minorities. The present investigation has as its main objective to analyze the misogynistic hate narratives that are uttered in social media. For this purpose, a netnographic study was carried out, of the qualitative type, organized in three sequential moments: extraction, exploration/treatment, and content analysis of the data from the platform of the social network Instagram. The data highlighted from 74 profiles aligned with hate speech show, essentially, the presence of 40 publications, largely linked to extreme right-wing cultures with discursive and imagery manifestations of a misogynistic nature, highlighting the use of irony and the ridicule of women. The most offensive dimension of hate speech was measured in the comments of the followers of these publications, present in the form of insult and direct offense, confirming that the digital environment has aggravated hostility and online harassment against women.
... The analysis method used was based on categories of analysis that emerged from the similarity of the collected information. This is a basic analysis method in qualitative research that has been widely used in studies whose data are heterogeneous (Hernández Sampieri et al., 2014; Castaño-Pulgarín et al., 2021). The categories of analysis established through coding of the results and discussion sections of each article in the sample were: inhibitory control, with 21 studies; working memory, with 11; and cognitive flexibility, with 11 studies. ...
The general objective of this study was to identify the contributions of executive functions to emotional processes, as reported in empirical scientific research published in different databases between 2017 and 2022. A documentary study was carried out following the PRISMA statement guidelines, with a sample of 43 articles selected from the ScienceDirect, Scopus, EbscoHost, ProQuest, Oxford Academic, PubMed, APA PsycInfo, APA PsycArticles, APA PsycNet, SciELO, Redalyc, Dialnet, and Web of Science databases, using the search terms “executive functions AND emotions” and “executive functions AND emotional processing”. As results, the following categories of analysis were found: inhibitory control, working memory, and cognitive flexibility. Overall, it is concluded that several aspects of executive functions have a direct association with several domains of emotional processes, which makes it clear that the processing of emotions depends on executive functioning in more than one aspect; moreover, three basic executive function skills (inhibitory control, working memory, and cognitive flexibility) appear to be key in aspects of emotional processes such as emotional regulation.
The article discusses how social media platforms contribute to the legitimation and spread of what we call toxic discourses, particularly with regard to gender-based violence against women in Brazilian politics. Our research seeks to understand: (1) which discourses emerge targeting Brazilian federal congresswomen (with terms between 2019 and 2022) and their possible effects; and (2) whether there are differences between the toxic discourses directed at congresswomen on different sides of the party-political spectrum. To this end, we analyzed 500,000 tweets published in June 2022 that directly mentioned congresswomen serving their most recent term. Through a quali-quantitative analysis, we identified two broad categories of toxic discourse: one related exclusively to the figure of the woman, that is, gender-based violence, and another related to the political group to which each congresswoman belongs, that is, party-political attacks.
Adolescents are the most active user group of social media sites. Due to being in a phase of both biological and psychological development, they may be particularly vulnerable to the darker side of social media, such as its illegal aspects or coordinated information influencing. With this research, we aimed to identify threats Finnish adolescents face on social media from a law-enforcement perspective. To reach this goal, we performed semi-structured interviews with police officers from Finnish preventive measures police units. To identify and structure threats that adolescents face, we employed a twofold analysis. In the first part, we conducted inductive content analysis, which revealed three primary threats: polarization, disinformation, and social media as a pathway to illegal activities. In the second part, we employed the Honeycomb-model of social media functionality as a classificatory device for structuring these threats. Our findings provide explorative insights into the threats social media might present to adolescents from the point of view of the Finnish law-enforcement system.
Researchers have repeatedly discussed how to strengthen supportive and pro-social responses to online hate, such as reporting and commenting. Researchers and practitioners commonly call for the promotion of media literacy measures that are believed to be positively associated with countermeasures against online hate. In this study (conducted in 2021), we examined relationships between media literacy proficiencies of (1) moral-participatory motivation and abilities and, consequently, (2) the establishment of moral-participatory behaviors and the correspondence with prosocial responses to online hate. A sample of 1489 adolescents and young adults (16–22 years old) from eight European countries is examined. Results confirmed that higher participatory-moral motivation and behavior were significantly associated with stronger intentions to report online hate. Commenting on hateful online content, on the other hand, was significantly related to participatory-moral abilities and past experiences with online harassment. Implications for the role of social media literacy in the context of online hate are discussed.
The freedom of expression enabled through information and communication technologies (ICT) has been misused to create, (re)produce, and distribute cyberhate. Otherwise known as online hate speech, it refers to all forms of ICT-mediated expression that incites, justifies, or propagates hatred or violence against specific individuals or groups based on their gender, race, ethnicity, religion, sexual orientation, or other collective characteristics. This chapter aims to contribute to a comprehensive analysis of cyberhate among adolescents and adults. It is structured into three main sections. The first operationalizes the key conceptual characteristics, disentangles the similarities and differences between cyberhate and other forms of violence, and presents the known prevalence of victimization and perpetration. The second identifies the main sociodemographic correlates and discriminates the risk and protective factors with theoretical frameworks. The chapter concludes with recommendations for prevention and intervention strategies that demand a multi-stakeholder approach.
Background: Alzheimer disease or related dementias (ADRD) are severe neurological disorders that impair the thinking and memory skills of older adults. Most persons living with dementia receive care at home from their family members or other unpaid informal caregivers; this results in significant mental, physical, and financial challenges for these caregivers. To combat these challenges, many informal ADRD caregivers seek social support in online environments. Although research examining online caregiving discussions is growing, few investigations have distinguished caregivers according to their kin relationships with persons living with dementias. Various studies have suggested that caregivers in different relationships experience distinct caregiving challenges and support needs.
Objective: This study aims to examine and compare the online behaviors of adult-child and spousal caregivers, the 2 largest groups of informal ADRD caregivers, in an open online community.
Methods: We collected posts from ALZConnected, an online community managed by the Alzheimer’s Association. To gain insights into online behaviors, we first applied structural topic modeling to identify topics and topic prevalence between adult-child and spousal caregivers. Next, we applied VADER (Valence Aware Dictionary for Sentiment Reasoning) and LIWC (Linguistic Inquiry and Word Count) to evaluate sentiment changes in the online posts over time for both types of caregivers. We further built machine learning models to distinguish the posts of each caregiver type and evaluated them in terms of precision, recall, F1-score, and area under the precision-recall curve. Finally, we applied the best prediction model to compare the temporal trend of relationship-predicting capacities in posts between the 2 types of caregivers.
Results: Our analysis showed that the number of posts from both types of caregivers followed a long-tailed distribution, indicating that most caregivers in this online community were infrequent users. In comparison with adult-child caregivers, spousal caregivers tended to be more active in the community, publishing more posts and engaging in discussions on a wider range of caregiving topics. Spousal caregivers also exhibited slower growth in positive emotional communication over time. The best machine learning model for predicting adult-child, spousal, or other caregivers achieved an area under the precision-recall curve of 81.3%. The subsequent trend analysis showed that it became more difficult to predict adult-child caregiver posts than spousal caregiver posts over time. This suggests that adult-child and spousal caregivers might gradually shift their discussions from questions that are more directly related to their own experiences and needs to questions that are more general and applicable to other types of caregivers.
Conclusions: Our findings suggest that it is important for researchers and community organizers to consider the heterogeneity of caregiving experiences and subsequent online behaviors among different types of caregivers when tailoring online peer support to meet the specific needs of each caregiver group.
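VADER and LIWC are both lexicon-driven. As a loose, simplified illustration of how a VADER-style compound sentiment score is computed, consider the toy scorer below; the mini-lexicon and the example posts are invented, and real VADER additionally handles negation, intensifiers, capitalization, and punctuation.

```python
import math

# Invented mini-lexicon of word valences (a stand-in for VADER's
# roughly 7,500-entry lexicon, not the real thing).
LEXICON = {
    "grateful": 2.0, "support": 1.5, "love": 2.5, "helpful": 1.5,
    "exhausted": -1.8, "lonely": -2.0, "overwhelmed": -2.2, "alone": -1.5,
}

def sentiment_score(text: str) -> float:
    """Sum word valences, then squash into [-1, 1] the way VADER's
    compound score does: sum / sqrt(sum**2 + alpha)."""
    total = sum(LEXICON.get(word, 0.0) for word in text.lower().split())
    alpha = 15.0  # VADER's default normalization constant
    return total / math.sqrt(total * total + alpha)

print(round(sentiment_score("so grateful for the support and love here"), 3))
print(round(sentiment_score("i feel exhausted lonely and overwhelmed"), 3))
```

Tracking such scores per post over time is one simple way to approximate the sentiment-trend comparison the study describes.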
Online hate and extremist narratives have been linked to abhorrent real-world events, including a current surge in hate crimes and an alarming increase in youth suicides that result from social media vitriol; inciting mass shootings such as the 2019 attack in Christchurch, stabbings and bombings; recruitment of extremists, including entrapment and sex-trafficking of girls as fighter brides; threats against public figures, including the 2019 verbal attack against an anti-Brexit politician, and hybrid (racist, anti-women, anti-immigrant) hate threats against a US member of the British royal family; and renewed anti-western hate in the 2019 post-ISIS landscape associated with support for Osama Bin Laden’s son and Al Qaeda. Social media platforms seem to be losing the battle against online hate and urgently need new insights. Here we show that the key to understanding the resilience of online hate lies in its global network-of-network dynamics. Interconnected hate clusters form global ‘hate highways’ that, assisted by collective online adaptations, cross social media platforms, sometimes using ‘back doors’ even after being banned, as well as jumping between countries, continents and languages. Our mathematical model predicts that policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish. We observe the current hate network rapidly rewiring and self-repairing at the micro level when attacked, in a way that mimics the formation of covalent bonds in chemistry. This understanding enables us to propose a policy matrix that can help to defeat online hate, classified by the preferred (or legally allowed) granularity of the intervention and top-down versus bottom-up nature. We provide quantitative assessments for the effects of each intervention. This policy matrix also offers a tool for tackling a broader class of illicit online behaviours such as financial fraud.
As online content continues to grow, so does the spread of hate speech. We identify and examine challenges faced by automatic approaches for hate speech detection in online text. Among these difficulties are subtleties in language, differing definitions of what constitutes hate speech, and limited availability of data for training and testing these systems. Furthermore, many recent approaches suffer from an interpretability problem: it can be difficult to understand why the systems make the decisions that they do. We propose a multi-view SVM approach that achieves near state-of-the-art performance while being simpler and producing more easily interpretable decisions than neural methods. We also discuss both technical and practical challenges that remain for this task.
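The multi-view idea, training one SVM per feature representation and combining their decisions, can be sketched as follows. The toy texts, the labels, the two particular views (word-level and character-level TF-IDF), and the score-averaging fusion are illustrative assumptions, not the authors' actual features or fusion rule.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy corpus and labels (1 = hateful, 0 = not), invented for illustration.
texts = [
    "i hate those people they should disappear",
    "that group ruins everything send them away",
    "lovely weather for a walk today",
    "really enjoyed the concert last night",
]
labels = np.array([1, 1, 0, 0])

# Two "views" of the same texts: word-level and character-level TF-IDF.
views = [
    TfidfVectorizer(analyzer="word"),
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
]
svms = []
for vec in views:
    clf = LinearSVC(C=1.0)
    clf.fit(vec.fit_transform(texts), labels)  # one linear SVM per view
    svms.append((vec, clf))

def multiview_score(text: str) -> float:
    """Average the signed SVM decision scores across views (>0 = hateful)."""
    return float(np.mean([clf.decision_function(vec.transform([text]))[0]
                          for vec, clf in svms]))

print(multiview_score("i hate that group they should go away"))
```

One appeal of linear per-view SVMs, as the abstract notes, is interpretability: each view's weight vector can be inspected to see which n-grams drove a decision.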
We characterize the Twitter networks of the major presidential candidates, Donald J. Trump and Hillary R. Clinton, with various American hate groups defined by the US Southern Poverty Law Center (SPLC). We further examined the Twitter networks for Bernie Sanders, Ted Cruz, and Paul Ryan, for 9 weeks around the 2016 election (4 weeks prior to the election and 4 weeks post-election). We carefully account for the observed heterogeneity in the Twitter activity levels across individuals through the null hypothesis of apathetic retweeting that is formalized as a random network model based on the directed, multi-edged, self-looped, configuration model. Our data revealed via a generalized Fisher’s exact test that there were significantly many Twitter accounts linked to SPLC-defined hate groups belonging to seven ideologies (Anti-Government, Anti-Immigrant, Anti-LGBT, Anti-Muslim, Alt-Right, White-Nationalist and Neo-Nazi) and also to @realDonaldTrump relative to the accounts of the other four politicians. The exact hypothesis test uses Apache Spark’s distributed sort and join algorithms to produce independent samples in a fully scalable way from the null model. Additionally, by exploring the empirical Twitter network we found that significantly more individuals had the fewest retweet degrees of separation simultaneously from Trump and each one of these seven hateful ideologies relative to the other four politicians. We conduct this exploration via a geometric model of the observed retweet network, distributed vertex programs in Spark’s GraphX library and a visual summary through neighbor-joined population retweet ideological trees. Remarkably, less than 5% of individuals had three or fewer retweet degrees of separation simultaneously from Trump and one of several hateful ideologies relative to the other four politicians. 
Taken together, these findings suggest that Trump may have indeed possessed unique appeal to individuals drawn to hateful ideologies; however, such individuals constituted a small fraction of the sampled population.
The peace agreement signed between the Colombian government and the FARC guerrilla in 2016 allowed this group, now a political party, to field candidates in the parliamentary and presidential elections held in the first half of 2018. This has been harshly criticized by a group of Colombians, who have rejected it in physical and virtual spaces, to the point of forcing the new party to halt its campaign in public spaces. This article attempts to capture those reactions in two Facebook groups, identify the communicative elements used in the interactions, and analyze whether they contain violent or hate speech, based on digital ethnographic observation and content analysis. To this end, posts linked to the announcement of Rodrigo Londoño 'Timochenko's' candidacy for the presidency of Colombia in November 2017 were reviewed, which made it possible to identify factors that favor the emergence of violent and hate speech, as well as a lack of interaction and of use of the communicative tools of the digital environment in this type of message. Keywords: violent discourse; hate speech; Colombian peace process; Facebook interaction.
This paper sets out quantitative findings from a research project examining the dynamics of online counter-narratives against hate speech, focusing on #StopIslam, a hashtag that spread racialized hate speech and disinformation directed towards Islam and Muslims and which trended on Twitter after the March 2016 terror attacks in Brussels. We elucidate the dynamics of the counter-narrative through contrasting it with the affordances of the original anti-Islamic narrative it was trying to contest. We then explore the extent to which each narrative was taken up by the mainstream media. Our findings show that actors who disseminated the original hashtag with the most frequency were tightly-knit clusters of self-defined conservative actors based in the US. The hashtag was also routinely used in relation to other pro-Trump, anti-Clinton hashtags in the run-up to the 2016 presidential election, forming part of a broader, racialized, anti-immigration narrative. In contrast, the most widely shared and disseminated messages were attempts to challenge the original narrative that were produced by a geographically dispersed network of self-identified Muslims and allies. The counter-narrative was significant in gaining purchase in the wider media ecology associated with this event, due to being reported by mainstream media outlets. We ultimately argue for the need for further research that combines ‘big data’ approaches with a conceptual focus on the broader media ecologies in which counter-narratives emerge and circulate, in order to better understand how opposition to hate speech can be sustained in the face of the tight-knit right-wing networks that often outlast dissenting voices.
This chapter reviews the theoretical frameworks and current empirical findings on cyberhate and its impact on children and adolescents. We draw on the sparse empirical literature on the topic and add insights gleaned from closely related lines of inquiry, such as cyberbullying. We focus on the dilemma posed by our First Amendment protections of freedom of speech and the dangers to our youth posed by exposure to cyberhate. We emphasize the importance of directing attention to this topic by researchers, practitioners, and policymakers to protect our youth from this serious online risk.
How does political violence affect popular support for peace? We answer this question by examining Colombia, where in 2016 the people narrowly and unexpectedly voted against a peace agreement designed to end a half century of civil war. Building on research on the impact of political violence on elections as well as research on referendum/initiative voting in the United States, we argue that local experiences with violence and the political context will lead to heightened support for peace. We test these expectations using spatial modeling and a municipal-level data on voting in the 2016 Colombian peace referendum, and find that municipal-level support for the referendum increases with greater exposure to violence and increasing support for President Santos. These results are spatially distributed, so that exposure to violence in one municipality is associated with greater support for the peace referendum in that municipality and also in surrounding areas. Our findings have implications not only for Colombia, but for all post-war votes and other contexts in which referenda and elections have major and/or unexpected results.
In 2017 the Australian Government undertook a national survey to determine public support for the legalisation of same‐sex marriage. This raised concerns the ‘plebiscitary' act may create harms to two groups: LGBTI people and those religious people with strong attachment to heteronormative marriage. Justifying the process, the Government advanced the possibility of civil dialogue generative of understanding. While instances of hate speech in public spaces were reported, this paper examines comparatively private speech during the period. Based on an analysis of posts to relevant Facebook pages, this analysis found opponents to same‐sex marriage were more highly mobilised online, and considerable differences in the character of online debate for and against the proposed changes. Importantly, while uncivil and ‘hate' speech were part of online conversations, the overall quantum of this uncivil discourse was lower than many feared. Additionally, the process did not generate considerable democratic dialogue around policy alternatives and rationales, particularly among ‘Yes' campaign supporters who were more homogenous in their acceptance of dominant campaign framing of the issue than their opponents. Significantly for ongoing public debates about public values like educational access and freedom of expression, opponents to change focused their arguments against same‐sex marriage around a subset of unrelated issues: free speech, religious freedoms, and diversity in public schools.
Online hatred based on attributes, such as origin, race, gender, religion, or sexual orientation, has become a rising public concern across the world. Past research on aggressive behavior suggests strong associations between victimization and perpetration and that toxic online disinhibition and sex might influence this relationship. However, no study has investigated both the associations between online hate victimization and perpetration, and the potential moderation effects of toxic online disinhibition and sex on this relationship. To this end, the present study was conducted. The sample consists of 1,480 7th to 10th graders from Germany. Results revealed positive associations between online hate victimization and perpetration. Further, the results support the idea that toxic online disinhibition and sex, by way of moderator effects, affect the relationship between online hate victimization and perpetration. Victims of online hate reported more online hate perpetration when they reported higher levels of toxic online disinhibition and less frequent online hate perpetration when they reported lower levels of toxic online disinhibition. Additionally, the relationship between online hate victimization and perpetration was significantly greater among boys than girls. Taken together, our results extend previous findings to online hate involvement among adolescents and substantiate the importance of conducting more research on online hate. In addition, our findings highlight the need for prevention and intervention programs that help adolescents deal with the emerging issue of online hate.