This is the preprint version of the following article:
Atte Oksanen, James Hawdon, Emma Holkeri, Matti Näsi, Pekka Räsänen (2014), Exposure
to Online Hate among Young Social Media Users, in M. Nicole Warehime (ed.) Soul of
Society: A Focus on the Lives of Children & Youth (Sociological Studies of Children and
Youth, Volume 18) Emerald Group Publishing Limited, pp. 253–273.
which has been published in final form at:
http://www.emeraldinsight.com/doi/abs/10.1108/S1537-466120140000018021
Exposure to Online Hate Among Young Social Media Users
ABSTRACT
Purpose: The prevalence of online hate material is a public concern, but few studies have
analyzed the extent to which young people are exposed to such material. This study
investigated the extent of exposure to and victimization by online hate material among
young social media users. Methodology: The study analyzed data collected from a sample of
Finnish Facebook users (n = 723) between the ages of 15 and 18. Analytic strategies were
based on descriptive statistics and logistic regression analysis. Findings: A majority (67%)
of respondents had been exposed to hate material online, with 21% having also fallen victim
to such material. The online hate material primarily focused on sexual orientation, physical
appearance, and ethnicity, and was most widespread on Facebook and YouTube. Exposure
to hate material was associated with high online activity, poor attachment to family and
physical offline victimization. Victims of the hate material engaged in high levels of online
activity. Their attachment to family was weaker and they were more likely to be unhappy.
Online victimization was also associated with physical offline victimization. Social
implications: While the online world has opened up countless opportunities to expand our
experiences and social networks, it has also created new risks and threats. Psychosocial
problems that young people confront offline overlap with their negative online experiences.
When considering the risks of internet usage, attention should be paid to the problems young
people may encounter offline.
Keywords: victimization, internet, adolescence, youth, hate material
INTRODUCTION
The emergence of different social media sites has revolutionized social
interaction patterns in many ways. Smartphones, tablet computers and fast mobile broadband
connections, for instance, are some of the main means of converging online and offline
spaces. New technologies have also played a major role in accelerating the success of social
networking sites (SNS). Facebook, for example, boasts over one billion monthly users, while
YouTube, Twitter, and LinkedIn are also massively popular sites globally. Earlier research
has established that young people are among the forerunners in the use of technology (e.g.
Livingstone 2009; Söderström 2009), with different online communities and SNS offering
important sources of social identification for young people (Keipi & Oksanen, 2014;
Lehdonvirta & Räsänen, 2011). According to the European Union’s Kids Online Study, 82
percent of adolescents aged 15 to 16 have a profile on a social networking site (Livingstone,
Haddon, Görzig, & Ólafsson, 2011, p. 36).
Despite social media’s significant role in mediating social interaction and
communication on a global scale, it has at the same time also helped to facilitate certain
forms of negative behavior. For instance, the different social networking sites have provided
new types of platforms for making hateful material increasingly visible to millions of young
SNS users. An array of social networking sites, such as video- and image-sharing platforms, has made it particularly easy to spread different types of material, even transnationally,
thereby providing additional avenues for promoting activism and radicalism, and allowing
various hate groups to flourish online. These groups range from racist, xenophobic and
extremist groups to those glorifying mass murder and justifying the explicit hatred of people
in general (e.g. Burris, Smith, & Strahm, 2000; Brown, 2009; Chau & Xu, 2007; Glaser,
Dixit, & Green, 2002; Gerstenfeld, Grant, & Chiang, 2003; Hawdon, 2012). However,
despite the potential threat the different hate groups pose (see Foxman & Wolf, 2013;
Waldron, 2012), we continue to lack critical information about the extent to which young
people are exposed to hate-filled messages, particularly online. The present study aims to
address this gap in the literature.
To this end, this article investigates the extent of exposure to and victimization
by online hate material by analyzing a sample of Finnish Facebook users, aged 15 to 18 (n =
723). We report on specific aspects such as the targets of the hate material, the site or service
through which the messages were seen, how the young people found the site, and how
disturbing they found the material. We include measures that allow us to analyze hate
material exposure and victimization in the context of both the online and offline experiences
of the young people in question. We begin by reviewing the literature on online hate material
and online victimization, and go on to report the results of a youth survey regarding hate
material exposure and victimization. Finally, we discuss the potential threat that online hate
poses in light of our findings.
Hate material and hate group activity online
While hate material (or hate speech) is not controlled by legislation in the US,
many European societies have laws which regulate statements that threaten or degrade a
specific group of people (Waldron, 2012). The Council of Europe has recently sought to
raise awareness of online hate speech with the Young People Combating Hate Speech Online
campaign (2012–2014). According to the Council, hate speech “covers all forms of
expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism
or other forms of hatred based on intolerance, including: intolerance expressed by aggressive
nationalism and ethnocentrism, discrimination and hostility against minorities, migrants and
people of immigrant origin” (Council of Europe, 2013; see also Banks, 2011).
Instead of adopting the Council of Europe’s definition, which underlines racial
hatred and xenophobia, we define hate material as an act expressing harmful and threatening
hatred towards individuals or larger human collectives. Hate material may attack a specific
group within society (such as ethnic minorities) or mainstream society as a whole. It is
important to underline that not all hate material is directed against minorities, although these
groups are often the targets of such material. While the dissemination of hate material has
been widely discussed as a “speech act” (e.g. Matsuda, 1989; Waldron, 2012), the multi-
mediated nature of the internet permits other forms of communication to be used to convey
the intended hatred. Visual materials, including artwork and photos, and online games can
also be used to express derogatory attitudes and norms regarding a certain group (e.g.
Nakamura, 2009; Foxman & Wolf, 2013).
Online hate is disseminated both by organized groups and by individuals acting
independently. Despite the recent discussions about the potential rise of online hatred, the
phenomenon is not new. In fact, white supremacists in the USA were among the earliest
users of an electronic communication network during the 1980s, and became increasingly
active after the first hate site (stormfront.org) appeared on the internet in 1995 (Duffy, 2003;
Gerstenfeld et al., 2003; Levin, 2002). The main objectives of such groups and individuals
are to recruit and link like-minded people in support of their cause (Douglas, McGarty, Bliuc,
& Lala, 2005; Gerstenfeld, Grant, & Chiang, 2003; McNamee, Peterson, & Peña, 2010).
They disseminate their ideology in a number of ways, including websites, file archives,
blogs, list servers, newsgroups, internet relay chats, online clubs and groups, webrings, and
online video games (see Amster, 2009; Douglas, 2007). Organized hate groups have been
particularly active in recruiting new members among young people (Lee & Leets, 2002).
The US has, in many respects, constituted a safe haven for organized hate
groups, including well-known examples such as the Ku Klux Klan, Holocaust-denial organizations, and
Christian Identity. More recently in Europe, however, the rise of extreme right-wing political
populism during the 2000s has increased public concern over online hate and aggression (see
Bartlett, Birdwell, & Littler, 2011; Caiani & Parenti, 2013; Lucassen & Lubbers, 2012). In
addition, European terrorists and rampage shooters have published online manifestos where
they advocate violence towards mainstream society, such as the perpetrator of the Norway
attacks in 2011, who published his manifesto online (Sandberg, 2013). In Finland, both the
Jokela (2007) and Kauhajoki (2008) school shooters openly published manifestos and other
textual and audiovisual statements about their hatred of society in general (Oksanen, Nurmi,
Vuori, & Räsänen, 2013). These unexpected cases provoked a critical public discussion on
hate speech in the Nordic societies (e.g. Andersson, 2012).
The different social networking sites have allowed hate groups to become
increasingly visible and successful at both reaching and recruiting a significant number of
internet users. Even an active online “lone wolf” can produce a substantial amount of
material that can be disseminated through different mainstream social networking sites, such
as Facebook. In the US alone, the number of active hate groups increased by 66 percent
between 2000 and 2010, and by 2010 there were over 1,000 active hate groups online (Potok,
2011).
Existing hate material does not solely involve organized hate groups or highly
active individuals publishing hateful and threatening material online, as much of the hate
material is disseminated in common online settings. Hence, when analyzing young people’s
responses to exposure to online hate material, we should not only pay attention to the most
radical organized hate groups or lone wolves, but also to the ways in which such rhetoric is
openly used in social media. We hypothesize that young people themselves are no strangers
to hate-related topics and may well produce this kind of material on their own.
Prevalence of young people’s online hate material exposure and victimization
Most of the earlier studies focusing on online hate material have concentrated
on the content, dissemination and legal consequences of such material (e.g. Brown, 2009;
Douglas et al., 2005; Glaser et al., 2002; Waldron 2012). Somewhat surprisingly, there are
very few studies concerning exposure to or victimization by such material (Ybarra, Mitchell,
& Korchmaros, 2011; Livingstone et al., 2011).
According to results based on a nationally representative American survey,
only a small minority of children in the 10 to 15 age range had visited hate sites (3.5% in
2008) (Ybarra et al., 2011). Further, the study did not support the hypothesis that exposure to
hate sites had indeed increased (ibid.). However, while visiting hate sites such as
stormfront.org may still be uncommon, the study did not address the question more broadly
in relation to general exposure to hate material. For example, another nationally
representative US survey reveals that online harassment involving threats or offensive
behavior increased significantly between 2000 and 2010 (Jones et al., 2013). In Europe, the
EU Kids Online study reports that 18 percent of adolescents aged 15 to 16 had encountered
hate material online in 2010 (Livingstone et al., 2011, p. 97).
Exposure to online hate material involves both indirect and direct harm. First, indirect social harm raises ethical and legal questions about whether the dissemination of hate material should be allowed in society (Waldron, 2012).
Second, hate material may be directly harmful. It has potential social and psychological
dimensions as it may have damaging effects at both personal and group levels (Leets & Giles,
1997; Tynes, 2006). For example, the long-term effects of exposure to hateful online
material may include reinforcing discrimination against vulnerable groups (Foxman & Wolf,
2013). As a result, victims may develop defensive and hyper-vigilant attitudes that can
potentially be dangerous, and which can last for months, or even years (Leets, 2002).
In addition, online hate groups and other individuals disseminating hateful
ideologies can recruit young members to join or support their actions (Lee & Leets, 2002).
They may also directly incite extreme violence as exemplified in the school shootings that
were spurred on by online communities (Böckler & Seeger, 2013; Oksanen et al., 2013).
Previous studies of youth exposure to online hate material show that visiting hate sites is
associated with seriously violent behavior (Ybarra et al., 2011). Besides mere exposure to
such material, we hypothesize that being personally targeted by an online hate act may pose
an even more serious threat to the subjective wellbeing and happiness of young people
(Proctor, Linley, & Maltby, 2009). Exposure to risks in general is common online and is
often higher among the most active users (Livingstone & Helsper, 2010). We submit that
compared to exposure, victimization is rare. However, it is also more likely to be associated
with socio-demographic characteristics and to result in more serious social and
psychological consequences.
Considerable research has been conducted into various forms of online
victimization, including online grooming (Whittle, Hamilton-Giachritsis, Beech, & Collings,
2013; Wolak, Finkelhor, Mitchell, & Ybarra, 2010), online harassment (Bossler, Holt, &
May, 2012; Jones et al., 2013), cybercrime (Oksanen & Keipi, 2013) and especially
cyberbullying (Tokunaga, 2010; Ortega et al., 2012; Sourander et al., 2010). This research
indicates that there are a host of socio-demographic factors associated with various types of
online victimization. Age and gender do not, however, consistently predict online
victimization (Tokunaga, 2010). For instance, the EU Kids Online study found no gender
difference in exposure to messages that attack certain groups or individuals (Livingstone et
al., 2011, p. 98). However, other studies have found online victimization to be a gender-
specific phenomenon (Helweg-Larsen, Schütt, & Larsen, 2012; Oksanen & Keipi, 2013;
Ybarra & Mitchell, 2008). In light of the inconsistency of results concerning online
victimization, a clear hypothesis cannot be formulated on the effects of gender and age.
In addition to demographic characteristics, studies have started to underline the
broader context of online victimization, and the important social factors related to it. For
example, deviant and risky behavior is associated with poor social integration (Colvin,
Cullen, & Vander Ven, 2002; Welch, Tittle, Yonkoski, Meidinger, & Grasmick, 2008), and
social support, such as good familial relations, serves as a protective factor against offline
victimization (Noll, Shenk, Barnes, & Haralson, 2013). Hence, young people who lack
social support may be more vulnerable to online victimization. In contrast to the protective
aspect of social support, researchers note that offline victimization increases the probability
of online victimization (Helweg-Larsen et al., 2012; Mitchell, Finkelhor, Wolak, Ybarra, &
Turner, 2011). This is particularly true of sexual and violent offline
victimization (Mitchell et al., 2011; Noll et al., 2013; Oksanen & Keipi, 2013). Indeed, it is
the combination of negative experiences both offline and online that predicts wider
psychological problems among adolescents (Salmivalli, Sainio, & Hodges, 2013). We
therefore contend that negative experiences in both the offline and online worlds accumulate
for certain vulnerable youth groups.
While a growing body of research is beginning to document the correlates of
online victimization, previous studies have analyzed neither the targets of hate material nor
the online settings that prove to be the most risky. Our study fills this gap. We aim to
document the extent to which young Finns are exposed to online hate material and have
become personally victimized by hate material dissemination. We also aim to show who or
what groups were targeted and where (on what sites) the hate material was found. We will
also document how the hate material was found and how disturbing the young people
considered it to be. We also include information on whether the young people themselves
produced such material or were members of hate groups. Our predictive analysis aims to
show whether socio-demographic characteristics and other measures concerning their online
and offline attachments, psychological well-being and physical offline victimization are
associated with both hate material exposure and victimization.
METHOD
Participants
This study analyzes data collected from a sample of 723 young people between
the ages of 15 and 18 (471 females and 252 males). The participants were recruited through
the Facebook social networking site in April–May 2013 in Finland (a relatively small Nordic
country with a total population of 5.4 million). The mean age of the participants was 16.6
years (SD = .977).
Although this sample is not representative of Finnish young people, the sample
statistics are relatively close to official parameters. For example, similar to the breakdown in
our sample, women are more active users of social networking sites such as Facebook
(Statistics Finland, 2012). In addition, the sample figures related to immigration are
relatively close to those that appear in official statistics. Finland is still a fairly homogeneous
country with a low immigration rate. In 2011, 4.3 percent of young people aged 15 to 19
living in Finland were first- or second-generation immigrants. By comparison, the majority
(97%) of the young people in our sample were born in Finland, and 9 percent had a mother
or father who had been born abroad. Furthermore, children commonly live with their parents
until they are 18 and most of them are studying during this time. In our sample, 94 percent of
respondents were living with their parents, and 96 percent were students. Most respondents (84%) were living in cities or towns, which tallies with the official degree of urbanization (84.4%) (Statistics Finland, 2013).
Procedures
Our study focuses on Facebook users because Facebook is the predominant social medium in Finland. In 2012, 86 percent of Finnish 16- to 24-year-olds used social networking
sites, and over 95 percent of them had a Facebook profile (Statistics Finland, 2012).
Our survey respondents were recruited using three campaigns targeted at
Finnish Facebook users aged 15 to 30. The campaigns were launched between April 10 and
May 18, 2013. We used four images and four short marketing texts in 15 different
combinations to attract users to fill out the YouNet2013 Survey. The possibility of winning a
movie ticket package was mentioned in the marketing text. After clicking the banner, respondents were shown an introductory text with key details about the survey. We managed the data collection with
the LimeSurvey software and optimized the survey for both computers and mobile devices.
The three campaigns reached between 432,649 and 528,261 adolescents and young adults, approximately half of the Finnish population aged 15 to 30
(Statistics Finland, 2013). The campaigns received a total of 6,074 clicks and generated
1,337 survey responses in all. Only those respondents who completed at least the first two pages, which included socio-demographic information and questions concerning online activity, online risks, and exposure to hate material, were included in the sample. Approximately 19
percent of respondents did not complete the 8-page survey. Since the current project focuses
on young people, we selected respondents between the ages of 15 to 18 years (n = 723).
The YouNet2013 Survey included socio-demographic variables, and questions
about online activity, online risks and online hate material. We also enquired about online
and offline interactions, social trust, self-esteem, life satisfaction and violent victimization.
Respondents were also given a chance to provide feedback to the research team concerning
the survey.
Measures
Dependent measures
Exposure to hate material: Our first dependent variable determined the extent
to which participants were exposed to online hate material. In order to measure exposure,
respondents were asked, “In the past three months, have you seen hateful or degrading
writing or speech online, which inappropriately attacked certain groups of people or
individuals?”.
Hate material victimization: Our second dependent variable measured the
extent to which the young people themselves were targets of or victimized by hate speech or
hate material. Respondents were asked whether they had “personally been the target[s] of
hateful or degrading material online”. This question appeared directly after the questions
concerning online hate material in the survey.
What, where and how questions on hate material exposure: The descriptive
part of the analysis also included questions on the kind of hateful material the participants
had encountered and where they had encountered it. We also enquired how they had found
such material. Possible responses included deliberately searching for such material, being
provided with a link to a site by a friend or acquaintance, or accidentally coming across such
a site. We also asked whether they had found the hate material disturbing. Finally, we
enquired whether they had produced hateful material themselves and whether they
considered themselves to be members of hate groups.
Independent measures
Online activity: Passive and active online use is treated as the central
independent variable in the predictive analysis. This is necessary since high user activity is
associated with increased online risks (Helweg-Larsen et al., 2012; Livingstone & Helsper,
2010). Respondents were asked about the different social networking sites or applications
they had used. We created the online activity variable by totaling the services respondents
said they had used during the past three months (range: 1–22, M = 7.82, SD = 2.82). These
sites and services included commonly used sites and social networking sites, such as
Facebook, YouTube, blogs, and discussion forums. The variable was dichotomized at the
median number of services used (8 services) into categories labeled passive users (less than
8 services; 48.7%) and active users (8–22 services; 51.3%).
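To make the operationalization concrete, the dichotomization described above can be sketched in a few lines of pandas code. The data frame and column names below are hypothetical placeholders for the 22 sites and services asked about; this is an illustrative sketch, not the authors' actual processing script.

import pandas as pd

# Hypothetical respondent-level data: each service column is coded 1 if the
# respondent reported using that site or service during the past three months.
service_cols = [f"service_{i}" for i in range(1, 23)]  # 22 sites/services in total

def add_online_activity(df: pd.DataFrame) -> pd.DataFrame:
    """Sum the services used and split respondents at the median of 8 services."""
    df = df.copy()
    df["online_activity"] = df[service_cols].sum(axis=1)          # range 1-22
    df["active_user"] = (df["online_activity"] >= 8).astype(int)  # 1 = active (8-22), 0 = passive (<8)
    return df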
Attachment: We used two separate measures of online and offline attachment
to different groups/communities as our second and third independent variables: attachment
to an online community (M = 3.05, SD = 1.14) and attachment to family (M = 4.02, SD =
1.12). Respondents were asked how strongly they felt that they were a part of various groups.
Responses ranged from 1 (‘not at all’) to 5 (‘very much’). These items have been tested
previously, and successfully applied in research concerning online and offline activities
(Lehdonvirta & Räsänen, 2011; Näsi, Räsänen, & Lehdonvirta, 2011).
Happiness: We included the global happiness measure (“All things considered,
how happy would you say you are?”) as the fourth independent variable. This is a widely
used and tested question measuring general psychological well-being (Helliwell, Huang, &
Wang, 2014). The scale ranged from 1 (‘extremely unhappy’) to 10 (‘extremely happy’).
The mean happiness in our sample was 6.81 (SD = 2.16).
Offline victimization: As previous studies have shown, victimization online is
associated with physical offline victimization (Helweg-Larsen et al., 2012; Mitchell et al., 2011;
Oksanen & Keipi, 2013). Hence, we used offline victimization as the fifth independent
variable. We asked respondents, “In the past three years, has someone bumped against you
or touched you in a way that felt insulting to you?” The variable was dichotomous (Yes/No), and 27% of the respondents replied Yes.
Basic socio-demographic variables: Finally, we accounted for a number of
socio-demographic characteristics in the predictive analysis. These factors included the
respondent’s age, gender, residential area, first- and second-generation immigration
background, occupational status, and whether they were living with their parents, alone or
with other people.
Analytical techniques
Our descriptive analysis consists of cross-tabulations and means as we aim to
show general aspects of hate material exposure and victimization. Our predictive analysis
was conducted using logistic regression models for a more specific comparison of the effects
of the selected independent and control variables. A more detailed description of the
procedures will be provided in connection with the analyses.
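As an illustration of the kind of main-effect logistic regression used in the predictive analysis, the sketch below fits a model of hate material exposure and converts the coefficients to odds ratios with statsmodels. The variable names are hypothetical stand-ins for the measures described above; this is a sketch under those assumptions, not the authors' actual code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_exposure_model(df: pd.DataFrame):
    """Logistic regression of exposure to hate material on selected predictors."""
    model = smf.logit(
        "exposed ~ active_user + family_attachment + offline_victim",
        data=df,
    ).fit()
    odds_ratios = np.exp(model.params)  # Exp(beta), the quantity reported in the tables
    return model, odds_ratios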
RESULTS
Descriptive analysis
A total of 67 percent (n = 487) of the 723 respondents had been exposed to hate material, and 21 percent (n = 150) had been victims of such material. In addition, 38 percent (n = 276) had been exposed to hate material but had not been personally targeted. Table 1 provides descriptive statistics for the participants’ exposure to hate material. It also reports the statistical significance of differences between the victimized and non-victimized groups based on chi-square tests (p-values).
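The victim versus non-victim comparisons reported alongside Table 1 are chi-square tests of independence. A minimal sketch with scipy, assuming a hypothetical 0/1 victimization indicator and 0/1 exposure items, is:

import pandas as pd
from scipy.stats import chi2_contingency

def compare_victims(df: pd.DataFrame, item: str) -> float:
    """Return the p-value of a chi-square test of `item` by victimization status."""
    table = pd.crosstab(df["victim"], df[item])  # 2 x 2 contingency table
    chi2, p_value, dof, expected = chi2_contingency(table)
    return p_value

For example, compare_victims(df, "target_appearance") would reproduce the kind of test behind the reported difference for hate material targeting physical appearance.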
Hate material most commonly targeted sexual orientation (68%), physical
appearance (61%), ethnicity or nationality (50%), and religious conviction/belief (43%).
Victims of hate material were more likely to report hate material targeted at physical appearance (p < .01), sex/gender (p = .017), disability (p < .001), or people in general (p = .002). The
percentages regarding sexual orientation and physical appearance were particularly high
among those who were victims of the hate material (77% and 81%, respectively).
Table 1 about here
Facebook (67%) and YouTube (47%) were the most common sites where the respondents came across hate material. In addition, such material was seen on general message boards (35%). Notably, among the most popular social media sites, Twitter appeared to contain very little hate targeted at others. Victims of hate material were more likely to report seeing hate material on Facebook (p = .007), YouTube (p = .014), blogs (p = .009), IRC-Galleria (a Finnish SNS), and in online role-playing games (p = .12).
Table 2 about here
Table 2 shows that 70 percent of the respondents encountered hate material
accidentally, 8 percent via a link from a friend, while 22 percent intentionally searched for
such content. One third (32%) of the exposed respondents were worried about being targeted
by hate material. Victimized respondents were more likely to report worry than those who
were not victims (45% vs. 25%, p < .001). A relatively small percentage of respondents had
produced hateful material themselves (10%) or were members of a hate group (5%). In terms of how disturbing respondents found the material, those who had been victimized perceived it to be more disturbing than those who had not (p < .001).
Predictive analysis
Table 3 shows the logistic regression main-effect tests regarding exposure to
hate material. In the table, the effects of the independent variables are presented as odds ratios (Exp(β)). The coefficient indicates the increase (or the decrease, if the ratio is less than one) in the odds of being exposed to online hate material. We first tested the unadjusted effect of each independent variable. After that, the effects of the other factors
were tested by entering a new variable into the model one at a time. The models include
those variables which were significant when analyzing the effects of each independent
variable separately. Model 1 includes online activity and attachment to family. Model 2 adds
offline victimization to the equation.
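For readers unfamiliar with the notation, the reported Exp(β) values follow directly from the logistic model. In generic form (not the authors' exact specification):

\log \frac{P(Y_i = 1)}{1 - P(Y_i = 1)} = \beta_0 + \beta_1 x_{i1} + \dots + \beta_k x_{ik}, \qquad \mathrm{OR}_j = \exp(\beta_j)

where Y_i = 1 indicates exposure (or victimization) and a one-unit increase in x_j multiplies the odds of the outcome by exp(β_j).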
Table 3 about here
Table 3 indicates that socio-demographic variables are not significant predictors of exposure to online hate material. At the same time, the likelihood of being exposed to such material is higher among those reporting higher online activity (OR = 1.87, p < .01). This association also remains significant in model 2 (OR = 1.76). In addition, weak attachment to family (OR = .81, p < .05) and offline victimization (OR = 1.98, p < .001) significantly predict exposure. The explanatory shares are relatively modest, at five percent in model 1 and eight percent in model 2.
Table 4 about here
In Table 4, we present logistic regression main-effect tests regarding hate
material victimization. Model 1 shows that the odds of hate material victimization are higher
among those who are not studying (OR = 2.88, p < .05) and not living with their parents (OR
= 1.82, p < .05), and who are active online (OR = 2.29, p < .001). Model 2 adds attachment
to online communities and family, model 3 happiness, and model 4 offline victimization.
High online activity produces a statistically significant odds ratio in all the models. We can also see that attachment to online communities is a significant predictor of online hate material victimization. The final model 4 shows that victims of hate material are less happy (OR = .88, p < .05), are not studying (OR = 3.43, p < .05), and are more likely to have experienced offline victimization (OR = 3.16, p < .001). The final model explains 20 percent of the total variance, which can be regarded as a notable explanatory share when compared to the models for exposure to hate material.
DISCUSSION
Online hate has become a major concern in many Western societies (Brown, 2009; Douglas et al., 2005; Waldron, 2012), yet very few survey studies have
focused on young people’s exposure to hateful or threatening online material. Our study fills
this gap with an original survey of Finnish adolescents aged 15 to 18. We explored
both exposure to hate material and whether the respondents themselves had been targets of
such material. According to the results, two-thirds of the respondents had been exposed to,
and one-fifth had been targets of, online hate material.
The percentages regarding the exposure of young Finns to hate material online
are relatively high. This is particularly true in comparison with the percentages regarding
other types of negative online activity, such as cyberbullying, which is argued to be
relatively moderate in Finland (ca. 10% of young Finns have reported being victims of
cyberbullying) (Sourander et al., 2010; Lindfors, Kaltiala-Heino, & Rimpelä, 2012).
Although exposure to hate material online and cyberbullying are somewhat disparate
phenomena, the level of difference is still quite notable, particularly since both can have
significant negative implications for individuals. However, studies indicate that both
cyberbullying and online harassment are on the increase (Jones et al., 2013; Tokunaga,
2010). Furthermore, EU Kids Online considers Finland to be among the leading countries in
terms of both user activity and online risks (Livingstone et al., 2011; Hasebrink, Görzig,
Haddon, Kalmus, & Livingstone, 2011). Our results clearly support this assertion.
The hate material encountered by our respondents most commonly targeted
sexual orientation (68%), physical appearance (61%), ethnicity or nationality (50%), and to a
lesser extent religious beliefs (43%), gender (38%), and disability (31%). Facebook and
YouTube were most commonly the sites where young Finns came across such material, with
general message boards also serving as relatively common sites for exposure. However, our
results also indicate that social networking sites differ in terms of the role they play in young
people’s exposure to hate material. For instance, on Twitter such material appeared to be
almost nonexistent. In terms of how the young people were exposed to the hate material, 70
percent came across it accidentally, which highlights the fact that the material exists and can
easily be accessed. Our findings also reveal that the respondents find the hate material
disturbing, especially those who have been targeted by such material. About one-third of
respondents were worried about being targeted by hate material personally, but only a
minority either produced hate material themselves or considered themselves to be hate group
members. This indicates that the majority of hate material on the internet appears to be
distributed by a relatively small group of people.
Our predictive analysis underlined that exposure to hate material was associated only with high online activity, poor attachment to family, and physical offline
victimization. None of the socio-demographic variables were significant predictors of
exposure. Being a victim of hate material dissemination was more clearly connected with
various social and psychological factors. Victims were more likely not to live with their
parents (see Sourander et al., 2010), were not studying, engaged in high levels of online activity, and reported stronger attachment to online communities. Furthermore, victims were more likely
to be unhappy and to have experienced physical victimization offline during the previous
three years. These results confirm previous research which found that high online activity
increases online risks (Oksanen & Keipi, 2013; Helweg-Larsen et al., 2012) and that
negative offline experiences are related to online victimization (Oksanen & Keipi, 2013;
Noll et al., 2013; Mitchell et al., 2011). Finally, our results indicate that both online and
offline problems have negative psychological consequences for young people (Noll et al.,
2013; Salmivalli et al., 2013).
Our study is not without its limitations, however. Firstly, online surveys are
rarely (if ever) nationally representative. Our sample was recruited via Facebook campaigns that linked to the survey. Although we were able to reach about half of young Finnish Facebook users, we obviously do not know what motivated respondents to reply. However, compared with official Finnish statistics, the sample characteristics appear relatively reliable. Secondly, we are uncertain about the nature of the hate material the
respondents reported seeing. The question simply enquired whether they had seen
inappropriate hateful or degrading materials. While we believe this item is able to capture
the various forms of hate material, it lacks specificity. We did not, for example, ask whether
the material was textual or audio-visual. Hence, while our study was able to provide data
about exposure to hate material, it did not yield a precise answer concerning the nature of the
hateful interactions taking place. Future researchers should investigate not only victimization
but also the production of hate material. Moreover, researchers need to theorize the nature of
online hate. Is it the same as other forms of expression or does the current expansion of
social media transform the ways in which we talk about it? Finally, our analysis is limited to
Finnish young people and we therefore encourage future scholars to conduct cross-national
research.
Despite these limitations, our research fills a void in the literature. This is the
first study to focus on exposure to online hate material among users of the most popular SNS.
It is also the first study to take an in-depth look at the hate materials young people encounter
online in terms of the sites where the material was located, how users found the site, the
target of the hate material, and how disturbing users considered the material to be. Finally,
our study adds to the growing body of literature that analyzes the relationship between
online and offline activities. As this literature continues to show, the distinctions between
the online and offline worlds are becoming blurred, especially among young people.
CONCLUSION
The internet has revolutionized social interaction, with increasing numbers of
young people now spending a considerable amount of their time online. While the online
world has opened up countless opportunities to expand our experiences and social networks,
it has also created new risks and threats. Our results, based on a sample of Finnish Facebook
users, show that exposure to hate material is common. This finding should raise a red flag: hate is now part of the online experience. Online hatred, together with both
cyberbullying and online harassment, cannot be ignored as they concern the majority of
young people today, for whom the internet is an integral part of everyday reality.
Yet recent studies have started to argue that perhaps the real problem does not
lie solely online (Mitchell et al., 2011). Obviously, it is difficult to control all potential risks
resulting from internet use; in fact, it may be easier to reduce online risks by addressing
various offline factors. For example, in our study, online victimization was related to not
studying and to having low levels of attachment to family. Similarly, the psychosocial
problems that young people confront offline overlap with their negative online experiences.
It is therefore critical to develop new ways for parents, teachers and youth workers to
support young people and address the identity issues they face. In addition, we need open
discussions about the high prevalence of aggressive, hateful and threatening online behavior.
Only by confronting it can we hope to address the potential harm it may inflict.
REFERENCES
Amster, S. E. (2009). From Birth of a Nation to Stormfront: A Century of Communicating
Hate. In B. Perry, B. Levin, P. Iganski, R. Blazak, and F. M. Lawrence (Eds.), Hate
Crimes (pp. 221–247). Westport, CT: Greenwood Publishing Group.
Andersson, M. (2012). The debate about multicultural Norway before and after 22 July 2011.
Identities: Global Studies in Culture and Power, 19(4), 418–427.
Banks, J. (2011). European Regulation of Cross-Border Hate Speech in Cyberspace: The
Limits of Legislation. European Journal of Crime, Criminal Law and Criminal
Justice, 19(1), 1–13.
Bartlett, J., Birdwell, J., & Littler, M. (2011). The New Face of Digital Populism. London:
Demos.
Böckler, N., & Seeger, T. (2013). Revolution of the dispossessed: School shooters and their
devotees on the web. In N. Böckler, T. Seeger, P. Sitzer & W. Heitmeyer (Eds.),
School Shootings: International Research, Case Studies and Concepts for Prevention
(pp. 309–339). New York: Springer.
Bossler, A. M., Holt, T. J., & May, D. (2012). Predicting online harassment victimization
among a juvenile population. Youth & Society, 44(4), 500–523.
Brown, C. (2009). WWW.HATE.COM: White Supremacist Discourse on the Internet and
the Construction of Whiteness Ideology. The Howard Journal of Communications,
20, 189–208.
Burris, V., Smith, E., & Strahm, A. (2000). White supremacist networks on the Internet.
Sociological Focus, 33(2), 215–235.
Caiani, M., & Parenti, L. (2013). European and American Extreme Right Groups and the
Internet. Farnham: Ashgate Publishing.
Chau, M., & Xu, J. (2007). Mining Communities and Their Relationships in Blogs: A Study
of Hate Groups. International Journal of Human-Computer Studies, 65, 57–70.
Colvin M., Cullen F. T., & Vander Ven, T. (2002). Coercion, social support, and crime: an
emerging theoretical consensus. Criminology, 40, 19–42.
Council of Europe (2013, July 26). No Hate Speech Movement: Campaign for Human
Rights Online. Retrieved from http://www.nohatespeechmovement.org/
Douglas, K. M. (2007). Psychology, Discrimination and Hate Groups Online. In A. Joinson,
K. McKenna, T. Postmes, & U-D. Reips (Eds.), The Oxford Handbook of Internet
Psychology (pp. 155–164). Oxford: Oxford University Press.
Douglas, K. M., McGarty, C., Bliuc, A. M., & Lala, G. (2005). Understanding Cyberhate:
Social Competition and Social Creativity in Online White Supremacist Groups.
Social Science Computer Review, 23(1), 68–76.
Duffy, M. E. (2003). Web of Hate: A Fantasy Theme Analysis of the Rhetorical Vision of
Hate Groups Online. Journal of Communication Inquiry, 27, 291–312.
Foxman, A., & Wolf, C. (2013). Viral hate: Containing its spread on the Internet. New
York: Palgrave MacMillan.
Gerstenfeld, P. B., Grant D. R., & Chiang, C-P. (2003). Hate online: A content analysis of
extremist Internet sites. Analyses of Social Issues and Public Policy, 3, 29–44.
Glaser, J., Dixit, J., & Green, D. P. (2002). Studying hate crime with the internet: what
makes racists advocate racial violence? Journal of Social Issues, 58(1), 177–193.
Hasebrink, U., Görzig, A., Haddon, L., Kalmus, V., & Livingstone, S. (2011). Patterns of
risk and safety online: in-depth analyses from the EU Kids Online survey of 9- to 16-
year-olds and their parents in 25 European countries. London: EU Kids Online
network.
Hawdon, J. (2012). Applying differential association theory to online hate groups: A
theoretical statement. Research on Finnish Society, 5, 39–47.
Helliwell, J. F., Huang, H., & Wang, S. (2014). Social Capital and Well-Being in Times of
Crisis. Journal of Happiness Studies, 15(1), 145–162.
Helweg-Larsen, K., Schütt, N., & Larsen, H. B. (2012). Predictors and protective factors for
adolescent Internet victimization: results from a 2008 nationwide Danish youth
survey. Acta Paediatrica, 101(5), 533–539.
Jones, L. M., Mitchell, K. J., & Finkelhor, D. (2013). Online harassment in context: Trends
from three Youth Internet Safety Surveys (2000, 2005, 2010). Psychology of
Violence, 3(1), 53.
Keipi, T., & Oksanen, A. (2014). Self-exploration, anonymity and risks in the online setting:
analysis of narratives by 14–18-year olds. Journal of Youth Studies, (ahead-of-print),
1-17. doi: 10.1080/13676261.2014.881988
Lee, E., & Leets, L. (2002). Persuasive storytelling by hate groups online. American
Behavioral Scientist, 45, 927–957.
Leets, L. (2002). Experiencing hate speech: Perceptions and responses to anti-Semitism and
antigay speech. Journal of Social Issues, 58, 341–361.
Leets, L., & Giles, H. (1997). Words as weapons: When do they wound? Investigations of
racist speech. Human Communication Research, 24, 260–301.
Lehdonvirta, V., & Räsänen, P. (2011). How do young people identify with online and
offline peer groups? A comparison between UK, Spain and Japan. Journal of Youth
Studies, 14(1), 91–108.
Levin, B. (2002). Cyberhate: A legal and historical analysis of extremists’ use of computer
networks in America. American Behavioral Scientist, 45, 958–986.
Lindfors, P. L., Kaltiala-Heino, R., & Rimpelä, A. H. (2012). Cyberbullying among Finnish
adolescents – a population-based study. BMC Public Health, 12(1), 1027–1031.
Livingstone, S., & Helsper, E. (2010). Balancing opportunities and risks in teenagers’ use of
the internet: the role of online skills and internet self-efficacy. New Media & Society,
12(2), 309–329.
Livingstone, S., Haddon L., Görzig A., & Ólafsson K. (2011). Risks and safety on the
internet: The perspective of European children. Full Findings of the EU Kids Online.
London: LSE.
Lucassen, G., & Lubbers, M. (2012). Who fears what? Explaining far-right-wing preference
in Europe by distinguishing perceived cultural and economic ethnic threats.
Comparative Political Studies, 45, 547–574.
Matsuda, M. J. (1989). Public response to racist speech: Considering the victim's story.
Michigan Law Review, 87(8), 2320–2381.
McNamee, L. G., Peterson, B. L., & Peña, J. (2010). A call to educate, participate, invoke
and indict: Understanding the communication of online hate groups. Communication
Monographs, 77(2), 257–280.
Mitchell, K. J., Finkelhor, D., Wolak, J., Ybarra, M. L., & Turner, H. (2011). Youth internet
victimization in a broader victimization context. Journal of Adolescent Health, 48(2),
128–134.
Nakamura, L. (2009). Don’t hate the player, hate the game: The racialization of labor in
World of Warcraft. Critical Studies in Media Communication, 26(2), 128–144.
Näsi, M., Räsänen, P., & Lehdonvirta, V. (2011). Identification with online and offline
communities: Understanding ICT disparities in Finland. Technology in Society,
33(1), 4–11.
Noll, J. G., Shenk, C. E., Barnes, J. E., & Haralson, K. J. (2013). Association of
maltreatment with high-risk internet behaviors and offline encounters. Pediatrics,
131(2), 510–517.
Oksanen, A., & Keipi, T. (2013). Young People as Victims of Crime on the Internet: A
Population-based Study in Finland. Vulnerable Children & Youth Studies, 8(4), 298–309.
Oksanen, A., Nurmi, J., Vuori, M., & Räsänen, P. (2013). Jokela: The Social Roots of a
School Shooting Tragedy in Finland. In N. Böckler, T. Seeger, P. Sitzer & W.
Heitmeyer (Eds.), School Shootings: International Research, Case Studies and
Concepts for Prevention (pp. 189–215). New York: Springer.
Ortega, R., Elipe, P., Mora-Merchán J. A., Genta M. L., Brighi, A., Guarini, … Tippett, N.
(2012). The emotional impact of bullying and cyberbullying on victims: a European
cross-national study. Aggressive Behavior, 38(5), 342–356.
Potok, M. (2011, July 16). The year in hate and extremism, 2010. Intelligence Report, 141,
Southern Poverty Law Center. Retrieved from http://www.splcenter.org/get-
informed/intelligence-report/browse-all-issues/2011/spring/the-year-in-hate-
extremism-2010
Proctor, C. L., Linley, P. A., & Maltby, J. (2009). Youth Life Satisfaction: A Review of the
Literature. Journal of Happiness Studies, 10(5), 583–630.
Salmivalli, C., Sainio, M., & Hodges, E. V. (2013). Electronic victimization: correlates,
antecedents, and consequences among elementary and middle school students.
Journal of Clinical Child & Adolescent Psychology, 42(4), 442–453.
Sandberg, S. (2013). Are self-narratives strategic or determined, unified or fragmented?
Reading Breivik’s Manifesto in light of narrative criminology. Acta Sociologica,
56(1), 69–83.
Sourander, A., Brunstein Klomek, A., Ikonen, M., Lindroos, J., Luntamo, T., Koskelainen,
M., Ristkari, T., & Helenius, H. (2010). Psychological risk factors associated with
cyberbullying among adolescents. A population-based study. Archives of General
Psychiatry, 67, 720–728.
Statistics Finland (2012, July 23). Use of information and communications technology by
individuals 2012. Retrieved from http://www.stat.fi/til/sutivi/2012/index_en.html
Statistics Finland (2013, July 19). Population structure. Retrieved from
http://pxweb2.stat.fi/database/StatFin/vrm/vaerak/vaerak_en.asp
Tokunaga, R. S. (2010). Following you home from school: A critical review and synthesis of
research on cyberbullying victimization. Computers in Human Behavior, 26(3), 277–287.
Tynes, B. (2006). Children, adolescents, and the culture of online hate. In N. E. Dowd, D. G.
Singer & R. F. Wilson (Eds.), Handbook of Children, Culture, and Violence (pp.
267–289). Thousand Oaks: Sage.
Waldron, J. (2012). The Harm in Hate Speech. Cambridge, Massachusetts & London,
England: Harvard University Press.
Welch, M. R., Tittle, C. R., Yonkoski, J., Meidinger, N., & Grasmick, H. G. (2008). Social
integration, self-control, and conformity. Journal of Quantitative Criminology, 24,
73–92.
Whittle, H., Hamilton-Giachritsis, C., Beech, A., & Collings, G. (2013). A review of young
people’s vulnerabilities to online grooming. Aggression and Violent Behavior, 18(1),
135–146.
Wolak, J., Finkelhor, D., Mitchell, K. J., & Ybarra, M. L. (2010). Online “predators” and
their victims: Myths, Realities, and Implications for Prevention and Treatment.
Psychology of Violence, 1, 13–35.
Ybarra, M. L., & Mitchell, K. J. (2008). How risky are social networking sites? A
comparison of places online where youth sexual solicitation and harassment occurs.
Pediatrics, 121, 350–357.
Ybarra, M. L., Mitchell, K. J., & Korchmaros, J. D. (2011). National Trends in Exposure to
and Experiences of Violence on the Internet Among Children. Pediatrics, 128(6),
1376–1386.
... In their study involving more than 700 Finnish youth aged 15-18 who used Facebook as a social medium, Oksanen et al. (2014) analyzed the extent to which this age group is exposed to and victimized by online hate material. Two-thirds, and thus the majority, of the youths stated that they had already encountered online hate material, with 21% of the respondents having been victims of online hate material themselves. ...
... Although socio-demographic data do not appear to be influential in the context of cyberhate, it is worth noting that, first, in the study by Oksanen et al. (2014) online hate material most frequently targeted at ethnicity/nationality (50%) and religious belief/faith (43%), and second, victims of online hate material were more likely to report material in which sex/gender was targeted compared to non-victims. ...
... Study findings presented by Oksanen et al. (2014) suggest that certain states of agitation may increase the likelihood to be a victim of online hate material. For example, in contrast to those who did not perceive themselves as victims of online hate material, youth who did perceive themselves as victims were more likely to report being worried. ...
Article
Full-text available
In this paper we present the results of a systematic review aimed at investigating what the literature reports on cyberbullying and cyberhate, whether and to what extent the connection between the two phenomena is made explicit, and whether it is possible to identify overlapping factors in the description of the phenomena. Specifically, for each of the 24 selected papers, we have identified the predictors of cyberbullying behaviors and the consequences of cyberbullying acts on the victims; the same analysis has been carried out with reference to cyberhate. Then, by comparing what emerged from the literature on cyberbullying with what emerged from the literature on cyberhate, we verify to what extent the two phenomena overlap in terms of predictors and consequences. Results show that the cyberhate issue related to adolescents is less investigated than cyberbullying, and most of the papers focusing on one of them do not refer to the other. Nevertheless, by comparing the predictors and outcomes of cyberbullying and cyberhate as reported in the literature, an overlap between the two concepts emerges, with reference to: the parent-child relationship to reduce the risk of cyber-aggression; the link between sexuality and cyber-attacks; the protective role of the families and of good quality friendship relationships; the impact of cyberbullying and cyberhate on adolescents' individuals' well-being and emotions; meaningful analogies between the coping strategies put in practice by victims of cyberbullying and cyberhate. We argue that the results of this review can stimulate a holistic approach for future studies on cyberbullying and cyberhate where the two phenomena are analyzed as two interlinked instances of cyber-aggression. Similarly, prevention and intervention programs on a responsible and safe use of social media should refer to both cyberbullying and cyberhate issues, as they share many predictors as well as consequences on adolescents' wellbeing, thus making it diminishing to afford them separately. Systematic Review Registration http://www.crd.york.ac.uk/PROSPERO , identifier: CRD42021239461.
... Exposure to online hate propaganda is common among Internet users, especially young people (Oksanen et al., 2014). Individual risk of exposure to this type of messaging rises with increased online activity, decreased attachment to family, and experiences of bullying both online and offline (Oksanen et al., 2014), all of which may be more common for autistic people (Cappadocia et al., 2012;Kuo, 2014;. ...
... Exposure to online hate propaganda is common among Internet users, especially young people (Oksanen et al., 2014). Individual risk of exposure to this type of messaging rises with increased online activity, decreased attachment to family, and experiences of bullying both online and offline (Oksanen et al., 2014), all of which may be more common for autistic people (Cappadocia et al., 2012;Kuo, 2014;. This trend is expected to grow as hate groups increasingly recruit for their movements in online platforms (Perry & Olsson, 2009). ...
Article
Full-text available
Background The term “weaponized autism” is frequently used on extremist platforms. To better understand this, we conducted a discourse analysis of posts on Gab, an alt-right social media platform. Methods We analyzed 711 posts spanning 2018–2019 and filtered for variations on the term “weaponized autism”. Results This term is used mainly by non-autistic Gab users. It refers to exploitation of perceived talents and vulnerabilities of “Weaponized autists”, described as all-powerful masters-of-technology who are devoid of social skills. Conclusions The term “weaponized autism” is simultaneously glorified and derogatory. For some autistic people, the partial acceptance offered within this community may be preferable to lack of acceptance offered in society, which speaks to improving societal acceptance as a prevention effort.
... act, z. B. Oksanen et al. 2014) und grenzt Hatespeech vom Diskriminierungsbegriff ab. Denn dieser beschreibt auch unbeabsichtigtes Verhalten, und er ist zudem opferzentriert ausgerichtet (Wettstein 2021). ...
Article
Full-text available
Der vorliegende Beitrag informiert über 14 deutschsprachige Programme zur Prävention und Intervention bei Hatespeech unter Kindern und Jugendlichen (Jahrgangsstufen 5 bis 12). Inhalte und Durchführungsmodalitäten der Programme sowie Ergebnisse einer kriteriengeleiteten Qualitätseinschätzung anhand von fünf Kriterien werden im Hinblick auf deren Anwendung in der schulischen Praxis beschrieben und erörtert. Der Überblick über Schwerpunkte, Stärken und Entwicklungspotentiale schulbezogener Hatespeech-Programme ermöglicht Leser*innen eine informierte Entscheidung über den Einsatz der Programme in der Schule sowie in der offenen Kinder- und Jugendarbeit.
... On the other hand, increased involvement with online networks may serve as a replacement for weak offline bonds. Weak offline social bonds, as well as stronger online network attachment have been found to increase the likelihood of exposure to online radical content (Oksanen et al., 2014). ...
Article
Full-text available
Background Most national counter-radicalization strategies identify the media, and particularly the Internet as key sources of risk for radicalization. However, the magnitude of the relationships between different types of media usage and radicalization remains unknown. Additionally, whether Internet-related risk factors do indeed have greater impacts than other forms of media remain another unknown. Overall, despite extensive research of media effects in criminology, the relationship between media and radicalization has not been systematically investigated. Objectives This systematic review and meta-analysis sought to (1) identify and synthesize the effects of different media-related risk factors at the individual level, (2) identify the relative magnitudes of the effect sizes for the different risk factors, and (3) compare the effects between outcomes of cognitive and behavioral radicalization. The review also sought to examine sources of heterogeneity between different radicalizing ideologies. Search Methods Electronic searches were carried out in several relevant databases and inclusion decisions were guided by a published review protocol. In addition to these searches, leading researchers were contacted to try and identify unpublished or unidentified research. Hand searches of previously published reviews and research were also used to supplement the database searches. Searches were carried out until August 2020. Selection Criteria The review included quantitative studies that examined at least one media-related risk factor (such as exposure to, or usage of a particular medium or mediated content) and its relationship to either cognitive or behavioral radicalization at the individual level. Data Collection and Analysis Random-effects meta-analysis was used for each risk factor individually and risk factors were arranged in rank-order. Heterogeneity was explored using a combination of moderator analysis, meta-regression, and sub-group analysis. Results The review included 4 experimental and 49 observational studies. Most of the studies were judged to be of low quality and suffer from multiple, potential sources of bias. From the included studies, effect sizes pertaining to 23 media-related risk factors were identified and analyzed for the outcome of cognitive radicalization, and two risk factors for the outcome of behavioral radicalization. Experimental evidence demonstrated that mere exposure to media theorized to increase cognitive radicalization was associated with a small increase in risk (g = 0.08, 95% confidence interval [CI] [−0.03, 19]). A slightly larger estimate was observed for those high in trait aggression (g = 0.13, 95% CI [0.01, 0.25]). Evidence from observational studies shows that for cognitive radicalization, risk factors such as television usage have no effect (r = 0.01, 95% CI [−0.06, 0.09]). However, passive (r = 0.24, 95% CI [0.18, 0.31]) and active (r = 0.22, 95% CI [0.15, 0.29]) forms of exposure to radical content online demonstrate small but potentially meaningful relationships. Similar sized estimates for passive (r = 0.23, 95% CI [0.12, 0.33]) and active (r = 0.28, 95% CI [0.21, 0.36]) forms of exposure to radical content online were found for the outcome of behavioral radicalization. Authors' Conclusions Relative to other known risk factors for cognitive radicalization, even the most salient of the media-related risk factors have comparatively small estimates. 
However, compared to other known risk factors for behavioral radicalization, passive and active forms of exposure to radical content online have relatively large and robust estimates. Overall, exposure to radical content online appears to have a larger relationship with radicalization than other media-related risk factors, and the impact of this relationship is most pronounced for behavioral outcomes of radicalization. While these results may support policy-makers' focus on the Internet in the context of combatting radicalization, the quality of the evidence is low and more robust study designs are needed to enable the drawing of firmer conclusions.
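To make the pooled estimates reported above more concrete, the sketch below shows one standard way such values can be produced: DerSimonian-Laird random-effects pooling of study correlations via Fisher's z transform. This is a minimal illustration only; the function name and the per-study correlations and sample sizes are invented for the example and are not data or code from the review.

```python
import numpy as np

def pool_correlations(rs, ns):
    """Pool correlations with a DerSimonian-Laird random-effects model (illustrative sketch)."""
    rs = np.asarray(rs, dtype=float)
    ns = np.asarray(ns, dtype=float)
    z = np.arctanh(rs)                        # Fisher's z transform of each correlation
    v = 1.0 / (ns - 3.0)                      # approximate sampling variance of z
    w = 1.0 / v                               # fixed-effect (inverse-variance) weights
    z_fe = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fe) ** 2)           # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)  # between-study variance estimate
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    lo, hi = z_re - 1.96 * se, z_re + 1.96 * se
    # Back-transform from z to r for the pooled estimate and its 95% CI
    return np.tanh(z_re), (np.tanh(lo), np.tanh(hi)), tau2

# Hypothetical per-study correlations (r) and sample sizes (n), for illustration only
r_pooled, ci, tau2 = pool_correlations([0.18, 0.27, 0.22, 0.31], [240, 410, 150, 520])
print(f"pooled r = {r_pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), tau^2 = {tau2:.3f}")
```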
... Bias-based cyberbullying has been a topic deserving of increased empirical examination in recent years given the increase in online hate (Chetty & Alathur, 2018; Hawdon et al., 2017; Keipi et al., 2016; Oksanen et al., 2014). It generally involves hurtful actions online that devalue or harass individuals or social groups specific to an identity-based characteristic (Strohmeier et al., 2021). ...
Article
Bias-based cyberbullying involves repeated hurtful actions online that devalue or harass one’s peers specific to an identity-based characteristic. Cyberbullying in general has received increased scholarly scrutiny over the last decade, but the subtype of bias-based cyberbullying has been much less frequently investigated, with no known previous studies involving youth across the United States. The current study explores whether empathy is related to cyberbullying offending generally and bias-based cyberbullying specifically. Using a national sample of 1,644 12- to 15-year-olds, we find that those higher in empathy were significantly less likely to cyberbully others in general and to cyberbully others based on their race or religion. When the two sub-facets of empathy were considered separately, only cognitive empathy was inversely related to cyberbullying, while (contrary to expectation) affective empathy was not. Findings support focused efforts in schools to improve empathy as a means to reduce the incidence of these forms of interpersonal harm.
... Studies of the Polish population similarly revealed that 54% of adults had encountered hate speech online, while among youths (16-17 years old) almost 96% reported seeing online hate speech. Oksanen et al. (2014) also reported the prevalence of exposure to online hate speech among youths (i.e., a Finnish sample between the ages of 15 and 18). Perhaps unsurprisingly, their analyses revealed that exposure to hate material correlated positively with online activity, but also with poor attachment to family and experiences of physical offline victimisation. ...
Chapter
Hate speech is a form of communication that targets disadvantaged social groups in a harmful way. It can be seen as a driving force behind the successes of numerous populist politicians and extremist movements. In this chapter, we argue that studying hate speech can be crucial for a better understanding of political mobilisation, intergroup relations, and social media. We describe the role of hate speech in mobilising electoral support and violence, in the promotion of racism and prejudice, as well as in shaping attitudes towards government policies. We uncover how political ideology and hate speech are interconnected, and show that left-right political beliefs do not always explain why individuals turn to hate speech. We also outline the dilemma between protection against hate speech and freedom of expression, principles that are at the core of current debates on derogatory language.
Article
On November 1, 2015, comedian Margaret Cho announced a two-part campaign, inspired by her history as a sexual-abuse survivor, to promote her new music video ‘I Wanna Kill My Rapist’. This included the creation of the hashtag #12DaysofRage. In this article, I explore how Cho used her status as a celebrity to circulate #12DaysofRage, which acted as a discursive intervention in rape culture. I used content analysis and thematic analysis to identify themes in the archive of 2,401 tweets I collected. I also performed a feminist discourse analysis on both the tweets and news coverage of the campaign to situate the hashtag within its historical, social, and political context. I argue that Cho performed what I call ‘promotional activism’, a subset of celebrity activism in which a celebrity promotes a cause as part of the promotion of a particular project or product. Cho’s choice to centre herself in the campaign made it impossible to separate Cho from the hashtag, limiting its viral potential, yet the hashtag still acted as a resonant but ephemeral gathering point for survivor-focused advocacy.
Article
This article provides information on 14 German-language programs for preventing and intervening against hate speech among children and adolescents (grades 5–12). The contents and implementation modalities of the programs, as well as the results of a quality assessment based on five criteria, are described and discussed with regard to their application in school practice. This overview of the focal points, strengths, and areas for development of school-related hate speech programs enables readers to make an informed decision about using the programs in schools and in open child and youth work.
Chapter
It has been observed that regular exposure to hateful content online can reduce levels of empathy in individuals, as well as affect the mental health of targeted groups. Research shows that a significant number of young people fall victim to hateful speech online. Unfortunately, such content is often poorly controlled by online platforms, leaving users to mitigate the problem by themselves. Machine learning and browser extensions could potentially be used to identify hateful content and assist users in reducing their exposure to hate speech online. A proof-of-concept extension was developed for the Google Chrome web browser, using both a local word blocker and a cloud-based model, to explore how effective browser extensions could be in identifying and managing exposure to hateful speech online. The extension was evaluated by 124 participants regarding its usability and functionality, to gauge the feasibility of this approach. Users responded positively to the usability of the extension and gave feedback on where the proof-of-concept could be improved. The research demonstrates the potential for a browser extension aimed at average users to reduce individuals’ exposure to hateful speech online, using both word blocking and cloud-based machine learning techniques.
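As a rough sketch of the two-tier design described above (a cheap local word blocker, with a cloud-based model as the heavier second step), the example below shows how the local blocking step might look. The blocklist terms, function names, and the rule of escalating only unmatched text to a remote classifier are assumptions made for illustration, not the authors' implementation.

```python
import re

# Placeholder terms; a real deployment would use a curated, regularly updated list.
BLOCKLIST = {"exampleslur", "exampleinsult"}

def _pattern(blocklist):
    """Compile a case-insensitive, whole-word pattern for the blocklist."""
    return re.compile(r"\b(" + "|".join(map(re.escape, blocklist)) + r")\b", re.IGNORECASE)

def redact_locally(text, blocklist=BLOCKLIST):
    """Local word-blocker step: mask any blocklisted term with asterisks of equal length."""
    return _pattern(blocklist).sub(lambda m: "*" * len(m.group(0)), text)

def needs_cloud_check(text, blocklist=BLOCKLIST):
    """If the cheap local pass finds nothing, the text could be forwarded to a
    (hypothetical) cloud-hosted hate-speech classifier for a second opinion."""
    return _pattern(blocklist).search(text) is None

comment = "this post contains exampleslur and other abuse"
print(redact_locally(comment))     # masks "exampleslur" with asterisks
print(needs_cloud_check(comment))  # False: a local match was found, no cloud call needed
```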
Article
How do right-wing extremist organizations throughout the world use the Internet as a tool for communication and recruitment? What is its role in identity-building within radical right-wing groups, and how do they use the Internet to set their agenda, build contacts, spread their ideology, and encourage mobilization? Manuela Caiani and Linda Parenti address and examine these questions, analysing the potential role of the Internet in the identity-building processes of right-wing organizations in France, Germany, Italy, Spain, the United Kingdom, and the USA, and how their use of the Internet influences their mobilization and action strategies.
Article
(2011) Patterns of risk and safety online: In-depth analyses from the EU Kids Online survey of 9- to 16-year-olds and their parents in 25 European countries. EU Kids Online network, London, UK.
Chapter
An analysis of the self-narratives, self-stagings, and self-glorifications of school shooters circulated by the media, focusing on communicative and ideological elements and investigating the extent and nature of adolescent identification with perpetrators and their ideologies. Online interviews were conducted with a theoretical sample of 31 YouTube users selected to cover a range of relevance (the minimum intensity of involvement was participation in online discussions) and a diversity of characteristics (attitude, intensity, age, gender). The findings were analyzed by minimal/maximal case comparison. A small identification group, characterized by recognition deficits in three crucial areas (family, school, peer group), felt there were strong similarities between their own psychosocial backgrounds and those of the shooters, and engaged with school shooters as a strategy of identity assertion.
Chapter
The proliferation of online hate groups over the past few years has brought two main issues into focus. First, legal and political scholars have questioned the extent to which such hate speech should be regulated. Second, and perhaps more importantly, there is a great deal of concern about the effects of hate expressed online, specifically whether it incites violence and hostility between groups in the physical world. Understanding cyberhate therefore presents an important challenge for psychologists. Specifically, it is important to understand why online hate is so widespread and why the content of online hate sites is often so insulting and aggressive, given that the physical activities of hate groups are much more covert. This article attempts to provide a psychological perspective on the nature and purpose of online hate groups and their underlying motivations; their strategies; psychological theories and research that provide insight into disinhibited online behaviour; the actions being taken to combat cyberhate; and some challenges for future research.