Article
To read the full-text of this research, you can request a copy directly from the authors.

Abstract

Who is likely to view materials online maligning groups based on race, nationality, ethnicity, sexual orientation, gender, political views, immigration status, or religion? We use an online survey (N = 1034) of youth and young adults recruited from a demographically balanced sample of Americans to address this question. By studying demographic characteristics and online habits of individuals who are exposed to online extremist groups and their messaging, this study serves as a precursor to a larger research endeavor examining the online contexts of extremism.


... Extremist content can be defined as content that attacks fundamental democratic values and established political institutions (Neumann & Rogers, 2007), and thus higher exposure to this content might lead to higher cynicism. Existing evidence suggests that approximately 40% of adolescents and young adults in Western countries see extremist material such as hate speech or propaganda at least occasionally online, with social media users at particular risk of encountering such material (Costello et al., 2020; Costello, Hawdon, Ratliff, & Grantham, 2016; Hawdon et al., 2017). ...
... Although using social media as a source of political information may be beneficial for adolescents in many ways, the unfiltered exposure to political messages also poses a threat, as these messages can contain extremist content (Costello et al., 2016, 2020; Kaakinen et al., 2018). Definitions of extremism differ, but they have in common that "a desire to radically, and if necessary, forcefully and violently impose a political and/or religiously motivated ideology" is at the core (Schmitt, Rieger, Rutkowski, & Ernst, 2018, p. 782). ...
... Extremist online content refers to online messages that either reflect or propagate such an extremist worldview (Pauwels & Schils, 2016). On social media platforms, extremist messages manifest as hate speech, which devalues members of certain societal or demographic groups (Costello et al., 2016, 2020); conspiracy theories, which are beliefs that hidden groups of powerful individuals control certain aspects of society (Ashley, Maksl, & Craft, 2017); or propaganda, which reflects systematic persuasion for ideological, political, or commercial purposes through one-sided messages. Social media provide fertile ground for extremist actors to reach a predominantly young audience and to promote their ideologies via videos, blogs, or social media posts produced with considerable technical skill (Morris, 2016). ...
Article
This study investigates the predictors of adolescents’ political cynicism in the social media environment. Given that social media are one of the main sources of information for many young people today, it is crucial to investigate how and in which ways social media use is associated with political cynicism. To that aim, we use data from computer-assisted personal interviews of N = 1,061 adolescents between 14 and 19 years in Germany. Our findings reveal that relative information-oriented social media use is related to lower political cynicism, while exposure to extremist political content on social media predicts higher levels of political cynicism. Furthermore, although self-perceived online media literacy is negatively associated with political cynicism, it does not moderate the relationship between political cynicism and relative information-oriented social media use or exposure to extremist content. We discuss theoretical and practical implications of these findings.
... The echo chamber effect can be exacerbated by characteristics of rightwing ideologies, specifically with respect to distrust of government (Costello, Hawdon, Ratliff & Grantham, 2016). Right-wing individuals tend to hold anti-government attitudes which can lead to increased likelihood of exposure to extremist material online as they seek out supportive attitudes, of which there is an abundance online (Costello et al., 2016). Grounded in social learning theory (Bandura & Walters, 1977), it is likely that individuals that hold anti-government sentiments inevitably gravitate towards each other, adopting and amplifying their ideologies and increasing the likelihood of further exposure to extremist material (Costello et al., 2016). ...
Technical Report
Full-text available
The Online Islamophobia Project was an 18-month research project, running from June 2020 to December 2021, that examined the interaction between miscommunications and conspiracy theories in relation to key factors such as anonymity, membership length, peer groups, and posting frequency, within the context of the Covid-19 pandemic and Islamophobia on social media. The project was hosted at Birmingham City University and funded by the UKRI and the Economic and Social Research Council (ESRC) under their Covid-19 rapid response call. The project explored irrational beliefs and thoughts disseminated on social media, covering communications surrounding conspiracy theories online while paying attention to content associated with racist 'infodemic' messages. The project also sought to provide insights into the drivers of Covid-19 narratives and their consequences in fuelling existing extreme communications and Islamophobic language both online and offline.
... As such, it is important to understand who is most vulnerable to such exposure, so as to equip them with the requisite knowledge to critically assess the material they may come across while online. Recent research has identified various psychological and behavioral factors that may put an individual at risk of exposure to hate online, such as race, level of education, victimization, weak family attachment, low trust in government, and time spent on the internet [24][25][26]. This study seeks to contribute to this body of research on the predictors of youth's exposure to online hate by exploring the role of demographic characteristics, attitudes, risk perceptions, and online behaviors. ...
... In our study, we found that the more time youth spent online, the more likely they were to be exposed to hate in the online space. This result is consistent with previous literature [24][25][26]. Not surprisingly, communicating with strangers online was associated with an increased risk of being exposed to hate. ...
... Not surprisingly, communicating with strangers online was associated with an increased risk of being exposed to hate. Interestingly, good academic performance was also associated with increased risk; this may be due to an increased awareness and ability to recognize online material as hateful, or to an interest in the topic among more highly educated youth, as found in previous research [25]. Finally, our data indicate that the more individuals felt disinhibited online, "loosening up, feeling less restrained, and expressing themselves more openly," the more likely they were to be exposed to hateful propaganda and to encounter individuals attempting to convince them of racist views. ...
Preprint
Full-text available
Today's youth have almost universal access to the internet and frequently engage in social networking activities using various social media platforms and devices. This is a phenomenon that hate groups are exploiting when disseminating their propaganda. This study seeks to better understand youth exposure to hateful material in the online space by exploring predictors of such exposure including demographic characteristics (age, gender and race), academic performance, online behaviors, online disinhibition, risk perception, and parents/guardians' supervision of online activities. We implemented a cross-sectional study design, using a paper questionnaire, in two high schools in Massachusetts (USA), focusing on students 14 to 19 years old. Logistic regression models were used to study the association between independent variables (demographics, online behaviors, risk perception, parental supervision) and exposure to hate online. Results revealed an association between exposure to hate messages in the online space and time spent online, academic performance, communicating with a stranger on social media, and benign online disinhibition. In our sample, benign online disinhibition was also associated with students' risk of encountering someone online that tried to convince them of racist views. This study represents an important first step in understanding youth's risk factors of exposure to hateful material online.
... It comprises content on extremist websites, but also textual or audio-visual content created and disseminated by individual users via, for example, discussion fora or social media. Hate has increasingly been shown to be expressed on popular online platforms and in social media (Costello, Hawdon, Ratliff, & Grantham, 2016; Hawdon, Oksanen, & Räsänen, 2017). It can be ethnically, politically, or religiously motivated, but it also targets people based on their sexual orientation, gender, class, disability, and weight issues (e.g., Janssen, Craig, Boyce, & Pickett, 2004). ...
... However, previous research on cyberbullying and other related online risks (such as sexting) has also shown that the three types of experience (i.e., exposure, victimization, aggression) are highly interconnected (e.g., Hasebrink, Görzig, Haddon, Kalmus, & Livingstone, 2011; Klettke, Mellor, Silva-Myles, Clancy, & Sharma, 2018; Li, 2007; Peskin et al., 2013; Vandebosch & van Cleemput, 2009; Walrave & Heirman, 2011; Ybarra & Mitchell, 2004). Similarly, it has been shown that cyberhate experiences often overlap (Blaya & Audrin, 2019; Costello et al., 2016; Wachs et al., 2019; Wachs & Wright, 2018), and a recent study by Celik (2019) used a single experience scale to capture both cyberhate victimization and exposure to various cyberhate content. Thus, though there are differentiating factors for each type of involvement, there is also evidence that they share something in common, which may comprise a connection to the underlying vulnerabilities and resilience behind these factors. ...
... Older adolescents and men are more likely to be exposed to risky content (Ybarra, Mitchell, & Korchmaros, 2011), to be aggressors of cyberhate (Kaakinen, Keipi et al., 2018) and cyberbullying (Walrave & Heirman, 2011), and to engage more often in risky online communication and interactions (Notten & Nikken, 2016). However, several studies have failed to find a significant effect of gender on cyberhate or cyberbullying (Blaya & Audrin, 2019; Costello et al., 2016; Hemphill et al., 2012). As we are looking at the overall experiences of cyberhate and cyberbullying and we do not differentiate among the different types of involvement, we do not have a specific presumption concerning the associations among gender, age, and cyberaggression. ...
Article
Full-text available
This study investigates the structural relationship between two types of cyberaggression: cyberhate and cyberbullying. Cyberhate is online hate speech that attacks collective identities. Cyberbullying is defined by the intent to harm, its repeated nature, and a power imbalance. Considering these features and the shared commonalities, we used survey data from adolescents from Czechia, Poland, and Slovakia (N = 3,855, aged 11–17) to examine the relationship between them. We tested a bifactor model with the general common risk factor and two distinct factors of cyberhate and cyberbullying. We also tested alternative one-factor and two-factor models. The bifactor structure showed the best fit and allowed for the further examination of the unique and common features of cyberhate and cyberbullying by testing their associations with selected risk and protective factors. The results showed that the general risk factor was associated with higher age, emotional problems, and time spent online. Individual-based discrimination was associated with cyberbullying and the general risk factor. Group-based discrimination was associated with cyberhate and cyberbullying. Exposure to harmful online content was associated with all factors. Considering that prior research did not sufficiently differentiate between these two phenomena, our study provides an empirically-based delimitation to help to identify their shared basis and differences.
... Studies have found that, in the absence of other constraints, the selection problem (1) can overrepresent individuals with certain protected attributes at the expense of others [45, 24]. Towards mitigating this bias, we consider lower bounds and upper bounds on the number of selected items with a given protected attribute. ...
... We observe similar results as with risk difference (F). For a definition of selection lift, see Equation (24). The minority group has a lower average than the majority group, and both groups have identical noise. ...
Preprint
Subset selection algorithms are ubiquitous in AI-driven applications, including online recruiting portals and image search engines, so it is imperative that these tools do not discriminate on the basis of protected attributes such as gender or race. Current fair subset selection algorithms assume that the protected attributes are known as part of the dataset. However, attributes may be noisy due to errors during data collection, or because they were imputed (as is often the case in real-world settings). While a wide body of work addresses the effect of noise on the performance of machine learning algorithms, its effect on fairness remains largely unexamined. We find that in the presence of noisy protected attributes, attempting to increase fairness without accounting for the noise can, in fact, decrease the fairness of the result! To address this, we consider an existing noise model in which there is probabilistic information about the protected attributes (e.g., [19, 32, 56, 44]), and ask whether fair selection is possible under noisy conditions. We formulate a "denoised" selection problem that works for a large class of fairness metrics; given the desired fairness goal, the solution to the denoised problem violates the goal by at most a small multiplicative amount with high probability. Although the denoised problem turns out to be NP-hard, we give a linear-programming-based approximation algorithm for it. We empirically evaluate our approach on both synthetic and real-world datasets. Our empirical results show that this approach can produce subsets that significantly improve the fairness metrics despite the presence of noisy protected attributes and, compared to prior noise-oblivious approaches, achieves better Pareto trade-offs between utility and fairness.
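The baseline version of the constrained selection problem the abstract builds on (pick k items to maximize total utility, subject to per-group lower and upper bounds) can be sketched with a simple greedy routine. This is only the noiseless baseline, not the paper's LP-based denoised algorithm; the function and parameter names (`fair_select`, `lower`, `upper`) are illustrative, and the sketch assumes the bounds are feasible:

```python
from collections import defaultdict

def fair_select(items, k, lower, upper):
    """Pick k items maximizing total utility, subject to per-group
    lower and upper bounds on how many items of each group are chosen.

    items: list of (utility, group) pairs.
    lower/upper: dicts mapping group -> bound (missing = unconstrained).
    Assumes feasibility (e.g., the lower bounds sum to at most k).
    """
    by_group = defaultdict(list)
    for utility, group in items:
        by_group[group].append(utility)
    for group in by_group:
        by_group[group].sort(reverse=True)

    picked, chosen = [], defaultdict(int)
    # Satisfy each group's lower bound with that group's best items.
    for group, bound in lower.items():
        for utility in by_group[group][:bound]:
            picked.append((utility, group))
            chosen[group] += 1
    # Fill the remaining slots greedily, skipping groups at their cap.
    remaining = sorted(
        ((u, g) for g, utils in by_group.items() for u in utils[chosen[g]:]),
        reverse=True,
    )
    for utility, group in remaining:
        if len(picked) >= k:
            break
        if chosen[group] < upper.get(group, k):
            picked.append((utility, group))
            chosen[group] += 1
    return picked
```

The paper's contribution replaces the assumption of exactly known group labels with probabilistic attribute information and solves a relaxed linear program instead; the greedy sketch above only illustrates what the fairness constraints do to an otherwise utility-maximizing selection.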
... In 2011, a US-based national study demonstrated that amount of general technology use and age are predictive factors for almost all technology-based violent experiences and exposures [32]. Recent research in the US has identified various socio-demographic, psychological, and behavioral factors that may put an individual at risk of exposure to hate online, such as young age, white race, male gender, level of education, online victimization, low trust in government, and time spent on the internet [34,35]. Such risk factors provide invaluable information on a pernicious phenomenon that has persisted at high levels over the past decade in the US. ...
... Such risk factors provide invaluable information on a pernicious phenomenon that has persisted at high levels over the past decade in the US. Indeed, data from the US have consistently indicated high levels of exposure to hate messages online among nationally sampled populations, with 53%, 65%, and 87% of respondents indicating exposure to such messages in 2013, 2015, and 2016, respectively [34][35][36]. The international knowledge base of the correlates for exposure to hate online has similarly grown over the past decade. ...
... In our study, we found that the more time youth spent online, the more likely they were to be exposed to hate in the online space. This result is consistent with the previous literature [29,34,35]. Not surprisingly, communicating with strangers online was associated with an increased risk of being exposed to hate. ...
Article
Full-text available
Today's youth have extensive access to the internet and frequently engage in social networking activities using various social media platforms and devices. This is a phenomenon that hate groups are exploiting when disseminating their propaganda. This study seeks to better understand youth exposure to hateful material in the online space by exploring predictors of such exposure including demographic characteristics (age, gender, and race), academic performance, online behaviors, online disinhibition, risk perception, and parents/guardians' supervision of online activities. We implemented a cross-sectional study design, using a paper questionnaire, in two high schools in Massachusetts (USA), focusing on students 14 to 19 years old. Logistic regression models were used to study the association between independent variables (demographics, online behaviors, risk perception, parental supervision) and exposure to hate online. Results revealed an association between exposure to hate messages in the online space and time spent online, academic performance, communicating with a stranger on social media, and benign online disinhibition. In our sample, benign online disinhibition was also associated with students' risk of encountering someone online that tried to convince them of racist views. This study represents an important contribution to understanding youth's risk factors of exposure to hateful material online.
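The modelling step named in the abstract, a logistic regression of a binary exposure outcome on demographic and behavioral predictors, can be illustrated with a minimal sketch on synthetic data. The variable names and effect sizes below are invented for illustration and are not from the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors loosely mirroring the study's variables.
hours_online = rng.uniform(0, 8, n)        # daily hours spent online
stranger_contact = rng.integers(0, 2, n)   # talked to strangers online (0/1)

# Simulate exposure so that its log-odds rise with both predictors.
log_odds = -3.0 + 0.6 * hours_online + 1.2 * stranger_contact
exposed = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

# Fit the binary-outcome model; positive fitted coefficients mirror
# the direction of the associations the study reports.
X = np.column_stack([hours_online, stranger_contact])
model = LogisticRegression().fit(X, exposed)
print(dict(zip(["hours_online", "stranger_contact"], model.coef_[0])))
```

In the study itself, each coefficient's exponential would be read as an odds ratio for exposure to online hate; the sketch shows only the mechanics of fitting such a model.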
... Consistent with prior research, we focus on youth and young adults because they are particularly active on social media and thus more likely to encounter online hate material (e.g., Costello et al. 2016a; Räsänen et al. 2016). Data were collected by Survey Sample International (SSI) in December 2017 from demographically balanced panels. ...
... Noticeably though, living alone, which we use to assess guardianship, is not significant in our models. This is consistent with several prior works in this vein that find guardianship, both offline and online, to be generally ineffective at curtailing various online activities (see, e.g., Costello et al. 2016a; Bossler and Holt 2009; Leukfeldt and Yar 2016). Still, our consideration of online habits, routines, and experiences is only one part of a nuanced and complex set of dynamics. ...
... We used a measure of offline guardianship, whether an individual lives alone or not, because living alone could serve as a general measure of behavioral guardianship. As noted earlier, although capable guardianship is the most widely tested and supported dimension of RAT in offline settings (see Pratt & Cullen, 2005), its adaptation to online contexts has produced inconsistent findings (e.g., Bossler & Holt, 2009; Choi, 2008; Costello et al., 2016a, 2016b; Leukfeldt & Yar, 2016; Reyns, 2015). This is in part due to the difficulty of conceptualizing online guardianship (Vakhitova, Reynald, & Townsley, 2016). ...
Article
Full-text available
The increasingly prominent role of the Internet in the lives of Americans has resulted in more people coming into contact with various types of online content, including online hate material. One of the most common forms of online hate targets immigrants, seeking to position immigrants as threats to personal, national, economic, and cultural security. Given the recent rise in online hate targeting immigrants, this study examines factors that bring individuals into virtual contact with such material. Utilizing recently collected online survey data of American youth and young adults, we draw on insights from Routine Activity Theory and Social Structure-Social Learning Theory to understand exposure to anti-immigrant online hate material. Specifically, we consider how online routines, location in social structure, and social identity are associated with exposure. Results indicate that engaging in behaviors that can increase proximity to motivated offenders increases the likelihood of being exposed to anti-immigrant hate, as does engaging in online behaviors that bolster one's target suitability. Additionally, individuals who view Americanism as fundamental to their social identity are more apt to encounter anti-immigrant hate material on the Internet, as are those who are more dissatisfied with the current direction of the country.
... Overall, the most common experience in terms of cyberhate involvement is witnessing hateful content online (Wachs, Wright, & Vazsonyi, 2019). Initial research on cyberhate found that people who visit websites or virtual spaces containing mean or hateful material are more likely to be targeted by cyberhate (Costello et al., 2016). Furthermore, when witnessing cyberhate, some young people engage in counter speech and give public support to the targeted person or social group (Wachs, Gámez-Guadix, et al., 2020). ...
... Contact with unknown people and cyberhate perpetration. Contact with unknown people online could constitute a potential risk for young people in terms of being exposed to extremist groups and hateful content (Hassan et al., 2018) and/or being involved in cyberhate episodes (Costello et al., 2016). Indeed, hate groups actively recruit young people online. ...
... A positive correlation was found between witnessing cyberhate and victimisation. This finding corresponds to previous research (Costello et al., 2016; Wachs et al., 2021) and signals that the higher the exposure to cyberhate, the higher young people's chances of being victimised by hateful content online. An explanation could be that some young people engage in counter speech while giving public support to the targeted person or social group (Wachs, Gámez-Guadix, et al., 2020). ...
Article
Full-text available
Recent evidence shows that young people across Europe are encountering hateful content on the Internet. However, there is a lack of empirically tested theories and investigation of correlates that could help to understand young people’s involvement in cyberhate. To fill this gap, the present study aims to test the Routine Activity Theory to explain cyberhate victimisation and the Problem Behaviour Theory to understand cyberhate perpetration. Participants were 5433 young people (Mage = 14.12, SDage = 1.38; 49.8% boys from ten countries of the EU Kids Online IV survey). Self-report questionnaires were administered to assess cyberhate involvement, experiences of data misuse, frequency of contact with unknown people online, problematic aspects of sharenting, excessive Internet use, and sensation seeking. Results showed that being a victim of cyberhate was positively associated with target suitability (e.g., experiences of data misuse, and contact with unknown people), lack of capable guardianship (e.g., problematic facets of sharenting), and exposure to potential offenders (e.g., witnessing cyberhate, and excessive Internet use). Findings support the general usefulness of using Routine Activity Theory to explain cyberhate victimisation. Being a perpetrator of cyberhate was positively associated with several online problem behaviours (e.g., having contact with unknown people online, excessive Internet use, and sensation seeking), which supports the general assumption of the Problem Behaviour Theory. The findings of this research can be used to develop intervention and prevention programmes on a local, national, and international level.
... With its social networking sites (SNS), the Internet provides visible and publicly accessible platforms through which hate groups can organize, recruit followers, and network and socialize with like-minded people (Costello et al., 2016; Leets, 2001). A separate examination of hate speech on the Internet, and in SNS in particular, is nevertheless necessary. ...
... Extremists use the Internet as a tool for recruitment, socialization, and networking (Costello et al., 2016). In doing so, they seek the greatest possible assimilation of recipients to their ideology, values, and convictions. ...
... On the other hand, it is in extremists' interest to reach people outside these structures of thought. This happens by means of so-called feathering, in which subtle interactions and messages are deployed and only gradually intensified so as to bring about a slow adoption of the ideology (Costello et al., 2016). These methods are particularly suited to "enclaves of like-minded people" (Sunstein, 2002, p. 435), in which contradictory content is suppressed, solidarity is high, and attitudes can be influenced comparatively easily through the persistent repetition of views and reinforcement by group members (Sunstein, 2002). ...
Chapter
Hate speech poses a major problem for functioning deliberative discourse. On social media in particular, social groups repeatedly become targets of degradation and hate and are thereby actively excluded from society. Legal provisions such as the Network Enforcement Act (Netzwerkdurchsetzungsgesetz) and efforts to counter the volume of hate speech with automated detection methods show how seriously the issue is taken from governmental and scientific perspectives. This is also urgently necessary: depending on how hate speech is conveyed, serious effects can arise both for recipients in the ingroup and for the outgroup. The aim of this chapter is to provide an overview of the state of research on hate speech in the social and communication sciences. It addresses the diversity of definitions of hate speech, contrasts its forms and typologies, and lays out the mechanisms through which hate messages take effect.
... Tellingly, when respondents were asked whether they themselves had been exposed to hate, the Internet was identified as the most common place to have encountered hateful material, with 45% of respondents reporting such an encounter. Costello, Hawdon, Ratliff, and Grantham (2016) found an even higher rate of exposure to online hate, with over 65% of their respondents reporting seeing or hearing hateful materials online. ...
... Participants were drawn from demographically balanced panels of individuals who volunteered to partake in research surveys. Demographically balanced online panels such as this are typical for investigating hate material on the Internet (see, e.g., Costello et al., 2016; Costello, Hawdon, & Ratliff, 2017; Hawdon et al., 2019a; Näsi et al. 2014; Näsi et al. 2015; Räsänen et al. 2016). ...
... This age range captures the most avid Internet users and is also consistent with other studies that explore online extremist material (e.g., Räsänen et al. 2016; Costello et al., 2016; Costello, Hawdon & Ratliff 2017; Hawdon et al., 2019a). Survey Sample International (SSI) collected the data, recruiting potential participants through random digit dialling and other permission-based techniques. ...
Article
The growing prevalence of hate material on the Internet has led to mounting concerns from scholars and policymakers alike. While recent scholarship has explored predictors of exposure, perception, and participation in online hate, few studies have empirically examined the social factors that lead individuals to produce cyberhate. Therefore, this work examines the production of online hate using online survey data (N = 520) of youth and young adults collected in December 2017. We draw on two commonly-cited criminological theories, the General Theory of Crime (GTC) and Social Structure-Social Learning Theory (SSSL), to understand social factors that contribute to producing cyberhate. In addition, we consider whether a broader relationship exists between the production of online hate and support for President Trump, whose rhetoric has gained traction among far-right and alt-right communities that traffic in hate. Logistic regression results show limited support for GTC, as low self-control is not a significant correlate of producing cyberhate after other relevant variables are considered. We find more robust support for SSSL, as the production of cyberhate is associated with an individual’s social location, online associations, and differential reinforcement. Moreover, we find evidence that individuals who approve of President Trump’s job performance are more likely to produce online hate.
... The abovementioned hate-fueled acts of violence highlight the need to better understand the online radicalization process, as well as the role that identity plays in crime perpetration. While exposure to cyberhate (Costello et al., 2016; Costello and Hawdon 2018; Costello et al., 2019; Kaakinen et al., 2018a, b; Oksanen et al., 2014; Näsi et al., 2015; Reichelmann et al., 2020) and targeting by cyberhate (Awan, 2014; Costello et al., 2017; Gemignani and Hernandez-Albujar, 2015) have been thoroughly explored by scholars, we know surprisingly little about those who produce cyberhate (for exceptions, see Bernatzky et al., 2021; Costello & Hawdon, 2018; Kaakinen et al., 2018a, b; Keipi et al., 2016). While many forms of cyberhate exist online, espoused by the political left and right, radical Islamists, and single-issue agitators, scholarship generally demonstrates that the domain of cyberhate is currently dominated by far-right extremists who spew hatred at racial/ethnic minorities, immigrants, political liberals, women, and sexual and religious minorities (see Hawdon et al., 2014; Holt et al., 2021; Potok, 2017; The Southern Poverty Law Center, 2017, for example). ...
... Cyberhate is explicitly about social identities, as purveyors seek to draw stark boundaries between themselves and those they degrade and devalue. It is therefore unsurprising that aspects of identity have been tied to the production (Costello & Hawdon, 2018) and perpetuation (Hawdon et al., 2019a, b) of hateful online content, as well as to acceptance of and exposure to this type of material (Costello et al., 2016). We seek to expand on this extant work, focusing on the unexplored relationship between American identity dimensions and the production of online hate. ...
... The use of demographically balanced panels increases the validity of the sample by screening for previous participation, conducting attention checks, using pre-panel interview screenings, and utilizing incentives (Evans & Mathur, 2005; Wansink, 2001). Similar demographically balanced online samples have been used in other studies exploring various facets of online extremism (see, e.g., Costello et al., 2016; Hawdon et al., 2019a, b; Näsi et al. 2015; Räsänen et al., 2016). ...
Article
Identity-based crimes are understood as crimes rooted in the perceived identity of either the perpetrator or the victim. While some research reports a relationship between the production of cyberhate and group identity, no empirical tests to date assess the strength of the identity related to the crime. We explore the relationship between American identity and the production of hate in an online setting. We draw on data from a nationally representative survey (n = 896) to examine how various dimensions of American national identity relate to the odds of producing hate in the cyber-world. Framed in modern theories of identity, we use a five-item measurement of American identity – prominence, salience, private self-regard, public self-regard, and verification – to provide a detailed exploration of how a respondent’s self-views of their American identity and understanding of how others view that identity relate to their likelihood of producing hateful online material. Using descriptive statistics and regression analyses, we find higher levels of salience and public self-regard, as well as socio-demographics such as age, ethnicity, conservatism, and living in a large city, are associated with increased odds of producing hate. Conversely, education and living in the South are inversely related to the production of hate. The findings suggest that understanding the nuances of “what it means to be American” is an important first step toward more fully grasping the phenomenon of cyberhate. Our findings contribute to the growing body of empirical work on online extremism by demonstrating how identity affects behavior, particularly in this polarizing time when what it means to “be American” is frequently questioned.
... Consequently, social media have become major arenas for sharing information and psychosocial reactions after terrorist assaults (Fischer-Preßler et al., 2019;Gruebner et al., 2016) but also for spreading online hate content (i.e., hateful online material that degrades or threatens individuals or social groups) (Costello, Hawdon, Ratliff, & Grantham, 2016;Keipi, Näsi, Oksanen, & Räsänen, 2017). In social media, terrorist attacks lead to higher levels of intergroup antipathies, nationalism, and xenophobia (Fischer-Preßler et al., 2019). ...
... In the online space, hate content has become a widely recognized problem (Bliuc, Faulkner, Jakubowicz, & McGarty, 2018; Kaakinen et al., 2018; Keipi et al., 2017; Salminen et al., 2018). Online hate targets individuals or social groups based on nationality or ethnicity, religious conviction, political views, sexual orientation, gender, or physical appearance, for example (Bliuc et al., 2018; Costello et al., 2016; Keipi et al., 2017; Klausen, 2015). In social media, users can express hateful thoughts and attitudes without tangible contact with victims, anonymously, and relatively free from external control (Barkun, 2017; Keipi et al., 2017; Peterson & Densley, 2017). ...
Article
Full-text available
Acts of terror lead to both a rise of an extended sense of fear that goes beyond the physical location of the attacks and to increased expressions of online hate. In this longitudinal study, we analyzed dynamics between the exposure to online hate and the fear of terrorism after the Paris attacks of November 13, 2015. We hypothesized that exposure to online hate is connected to a perceived Zeitgeist of fear (i.e., collective fear). In turn, the perceived Zeitgeist of fear is related to higher personal fear of terrorism both immediately after the attacks and a year later. Hypotheses were tested using path modeling and panel data (N = 2325) from Norway, Finland, Spain, France, and the United States a few weeks after the Paris attacks in November 2015 and again a year later in January 2017. With the exception of Norway, exposure to online hate had a positive association with the perceived Zeitgeist of fear in all our samples. The Zeitgeist of fear was correlated with higher personal fear of terrorism immediately after the attacks and one year later. We conclude that online hate content can contribute to the extended sense of fear after terrorist attacks by skewing perceptions of the social climate.
... In the wake of domestic terrorism by white supremacists in the United States, new attention to the exploitation of social networks by white supremacists for radicalization emerged (Manjoo, 2017). Homophily in social networks poses a unique risk for the radicalization of white users who are more likely to be exposed to online extremism compared to other ethnic-racial groups (Costello et al., 2016). Moreover, such homophily may facilitate the entrée of extremist ideologies presented innocuously online (Costello et al., 2016). ...
... For example, the "alt-right" represents an intentional re-branding of white supremacist ideology to more easily enter mainstream spaces (Marwick & Lewis, 2017; Southern Poverty Law Center, 2017). ...
Article
Beginning with a historical overview of the construction of whiteness, we identify gaps in extant scholarship and provide conceptual, contextual, and methodological considerations for confronting whiteness by advancing critical white ethnic‐racial socialization (ERS) research. First, we consider the mutually influential developmental processes of ERS experiences and the iterative nature of ethnic‐racial identity (ERI) formation among white children and youth. Second, we address how methodological approaches such as person‐centered analysis (PCA) can yield nuanced insight into white ERS processes by helping identify varying developmental pathways that promote antiracist identity development. Third, we highlight the role of online spaces as a formative ecological context through which white children and youth experience ERS processes and the broader societal implications afforded by this context for their ERI development. We conclude with scholarly and practice implications for promoting and supporting antiracist efforts to resist broad‐based historical and current white supremacy.
... The RAT has been successfully employed to study adolescent cyberbullying and harassment (Reyns et al., 2011), online hate speech (Costello et al., 2016), malware victimization (Bossler & Holt, 2009), and other nonphysical criminal offenses like online fraud (Pratt et al., 2010). However, most of this research has either drawn on smaller samples of self-reported surveys from nonrepresentative data sets (e.g., student surveys), or has focused entirely on a single type of victimization such as cyberbullying or cyberstalking (Leukfeldt & Yar, 2016). ...
... Proponents of RAT surmise that when people engage in more direct forms of online communication, they increase their chance of confronting harmful and threatening behaviors (Leukfeldt & Yar, 2016). The more time an individual spends on social media, the greater the chance they will be exposed to hateful material (Costello et al., 2016). Previous research also shows that people who disclose personal information online are more likely to be attacked irrespective of their suitability and "attractiveness" as a target (Welsh & Lavoie, 2012). ...
Article
Full-text available
The study applies and expands the routine activity theory to examine the dynamics of online harassment and violence against women on Twitter in India. We collected 931,363 public tweets (original posts and replies) over a period of 1 month that mentioned at least one of 101 influential women in India. By undertaking both manual and automated text analysis of “hateful” tweets, we identified three broad types of violence experienced by women of influence on Twitter: dismissive insults, ethnoreligious slurs, and gendered sexual harassment. The analysis also revealed different types of individually motivated offenders: “news junkies,” “Bollywood fanatics,” and “lone-wolves”, who do not characteristically engage in direct targeted attacks against a single person. Finally, we question the effectiveness of Twitter’s form of “guardianship” against online violence against women, as we found that a year after our initial data collection in 2017, only 22% of hostile posts with explicit forms of harassment have been deleted. We conclude that in the social media age, online and offline public spheres overlap and intertwine, requiring improved regulatory approaches, policies, and moderation tools of “capable” guardianship that empower women to actively participate in public life.
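The "automated text analysis" of hateful tweets described above can be illustrated with a minimal keyword-matching sketch. The mini-lexicon below is entirely hypothetical (the study's actual coding scheme is not reproduced here); it only shows how tweets could be bucketed into the three categories the authors report: dismissive insults, ethnoreligious slurs, and gendered sexual harassment.

```python
# Hypothetical mini-lexicon; real lexicons would be far larger and curated.
LEXICON = {
    "dismissive_insult": ["idiot", "clueless", "shut up"],
    "ethnoreligious_slur": ["go back to"],
    "gendered_harassment": ["witch"],
}

def flag_tweet(text: str) -> list[str]:
    """Return the hate categories whose keywords appear in the tweet."""
    lowered = text.lower()
    return [cat for cat, words in LEXICON.items()
            if any(w in lowered for w in words)]

print(flag_tweet("You are clueless, shut up"))  # → ['dismissive_insult']
```

In practice such keyword flagging is only a first pass; the study combined it with manual annotation, which is needed to catch sarcasm, spelling variants, and context-dependent slurs.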
... Strain also has a documented relationship to online behavior. For example, several studies find those who are victimized by cyberbullies disproportionately engage in cyberbullying themselves (Costello et al. 2016; Jang, Song, and Kim 2014; Marcum et al. 2014). In terms of strain's effect on extremism, the relationships are not consistent online and offline. ...
... While it appears that time online is not inherently harmful, the short- and long-term effects of excessive Internet usage are still a matter of debate and inquiry. Even so, as teens increasingly use the Internet, especially social media, their risk of exposure to myriad forms of risky cyber-content grows (Costello et al. 2016). With the recent rise in cyberviolence (Federal Bureau of Investigation 2019), it is imperative to understand how teens and young adults respond to such harmful online material. ...
Article
Cyberviolence is a growing concern, leading researchers to explore why some users engage in harmful acts online. This study uses leading criminological theories—the general theory of crime/self-control theory, social control/bonding theory, social learning theory, and general strain theory—to explore why 15–18-year-old American adolescents join ongoing acts of cyberviolence. Additionally, we examine the role of socio-demographic traits and online routines in perpetuating cyberviolence. Results of an ordinal logistic regression indicate that low self-control, online strain, closeness to online communities, and watching others engage in online attacks are associated with joining an ongoing act of cyberviolence. Moreover, an individual’s age and familial relationships are inversely related to joining an online attack. Taken together, all four criminological theories we test help predict engagement in cyberviolence, indicating an integrative theory may be valuable in understanding participation in cyberhate attacks.
... The use of the Internet to promote radicalization of users and recruit adolescents and young adults into antisocial organizations has been a legitimate public concern. Studies show that most adolescents have been exposed to online hate material, and about one fourth of the respondents have been victimized by such material [Costello et al. 2016; Oksanen et al. 2014]. ...
... Exposure of adolescents to hate material is associated with high online activity, poor attachment to family and physical offline and/or online victimization [Oksanen et al. 2014]. Among young adults, higher levels of education, lower levels of trust in the federal government and proclivity towards risk-taking are associated with increased exposure to negative materials [Costello et al. 2016]. ...
Article
Irina Nikolaevna Pogozhina — Doctor of Psychology, Associate Professor, Department of Educational Psychology and Pedagogy, Faculty of Psychology, Lomonosov Moscow State University. Address: 11 Mokhovaya St., Bldg. 9, Moscow, 125009. E-mail: pogozhina@mail.ru. Andrei Ilyich Podolskiy — Doctor of Psychology, Distinguished Professor of Moscow State University. Address: 11 Mokhovaya St., Bldg. 9, Moscow, 125009. E-mail: apodolskij@mail.ru. Olga Afanasyevna Idobaeva — Doctor of Psychology, Associate Professor, Chief Specialist at the NIR Foundation. Address: 27 Lomonosovsky Ave., Bldg. 1, Moscow, 119991. E-mail: oai@list.ru. Tatyana Afanasyevna Podolskaya — Doctor of Psychology, Professor, Chief Researcher, FGBNU "IDSV RAO". Address: 5/16 Makarenko St., Moscow, 105062. E-mail: tpodolskaya@list.ru. Depending on the acceptance or rejection of the norms and rules of life adopted at a given stage of society's development, two types of digital behavior are distinguished: prosocial and antisocial. Carriers of these types of behavior differ in how they build communication in the digital environment and have specific characteristics of the cognitive, motivational, and emotional spheres. The aim of this study is to identify and analyze, on the basis of international research, the logical-categorical characteristics of antisocial digital behavior associated with features of the motivational sphere of Internet users. Internal and external factors of antisocial digital behavior are identified. It is established that there are significant links between a high level of problematic Internet use and the psychological characteristics of Internet users in the communicative, emotional, motivational, and cognitive spheres. Studying the connection between users' engagement with different kinds of Internet content and their individual psychological characteristics is a promising direction for building models of digital behavior and developing programs to counter antisocial behavior online.
... Once individuals become familiar with a community or network that they frequently engage with online, trust increases between them (Näsi, Räsänen, Hawdon, Holkeri & Oksanen, 2015). Through the constant exposure to specific websites or social media platforms that promote hate, these outlets may influence individuals and they may become part of the hateful narrative (Costello, Hawdon, Ratliff & Grantham, 2016). ...
... Therefore, the filter bubble has been recognised as a risk to a well-functioning democracy in modern society (Bozdag, 2015). Studies in various areas have also investigated the impact of filter bubbles on the polarisation of online debates (Flaxman, Goel, & Rao, 2016;Seargeant & Tagg, 2018) and extremism (Costello et al., 2016;Liao & Fu 2013). ...
Article
Full-text available
Reliance on social media as a source of information has led to several challenges, including the limitation of sources to viewers’ preferences and desires, also known as filter bubbles. The formation of filter bubbles is a known risk to democracy. It can bring negative consequences like polarisation of society, users’ tendency toward extremist viewpoints, and the proliferation of fake news. Previous studies have focused on specific aspects and paid less attention to a holistic approach to eliminating the phenomenon. The current study, however, aims to propose a model for an integrated tool that assists users in avoiding filter bubbles in social networks. To this end, a systematic literature review has been undertaken, and initially, 571 papers in six top-ranked scientific databases were identified. After excluding irrelevant studies and performing an in-depth analysis of the remaining papers, a classification of research studies is proposed. This classification is then used to introduce an overall architecture for an integrated tool that synthesises all previous studies and offers new features for avoiding filter bubbles. The study explains the components and features of the proposed architecture and concludes with a list of implications for the recommended tool.
... Psychological researchers therefore frequently measure vicarious experiences of online hate in which the reader is not personally attacked, but belongs to the derogated minority group (Tynes, Rose, & Williams, 2010). Given that online hostility towards minorities affects large numbers of people (Abbott, 2011; Costello, Hawdon, Ratliff, & Grantham, 2016), varies across geographic areas (Hawdon et al., 2017), and might even turn into physical violence (Awan & Zempi, 2016), it is important to identify the environments in which it is most likely to occur. ...
Article
Full-text available
To what extent are intergroup attitudes associated with regional differences in online aggression and hostility? We test whether regional attitude biases towards minorities and their local variability (i.e. intraregional polarization) independently predict verbal hostility on social media. We measure online hostility using large US American samples from Twitter and measure regional attitudes using nationwide survey data from Project Implicit. Average regional biases against Black people, White people, and gay people are associated with regional differences in social media hostility, and this effect is confounded with regional racial and ideological opposition. In addition, intraregional variability in interracial attitudes is also positively associated with online hostility. In other words, there is greater online hostility in regions where residents disagree in their interracial attitudes. This effect is present both for the full resident sample and when restricting the sample to White attitude holders. We find that this relationship is also, in part, confounded with regional proportions of ideological and racial groups (attitudes are more heterogeneous in regions with greater ideological and racial diversity). We discuss potential mechanisms underlying these relationships, as well as the dangers of escalating conflict and hostility when individuals with diverging intergroup attitudes interact. © 2020 The Authors. European Journal of Personality published by John Wiley & Sons Ltd on behalf of European Association of Personality Psychology
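The core quantity in the abstract above is a region-level association between average attitude bias and social media hostility. A toy Pearson correlation over made-up regional means (every number below is hypothetical, not the study's data) sketches what such an analysis computes:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical region-level means: (attitude bias, social media hostility).
regions = {
    "R1": (0.10, 0.20), "R2": (0.30, 0.35), "R3": (0.25, 0.30),
    "R4": (0.05, 0.12), "R5": (0.40, 0.45),
}
bias = [b for b, _ in regions.values()]
hostility = [h for _, h in regions.values()]
r = pearson(bias, hostility)
print(round(r, 2))
```

The study's actual analysis additionally controls for confounders (regional ideological and racial composition) and examines within-region attitude variability, which a bivariate correlation cannot capture.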
... Differential associations also serve as a source of imitation and provide differential reinforcement. There is quite a bit of evidence to support the relevance of the social learning framework to understanding the social media-radicalization nexus. Increased frequency of social media usage increases the likelihood that a user will come into contact with radical content (Costello, Hawdon, Ratliff and Grantham, 2016). ...
Article
Objectives: Social media platforms such as Facebook are used by both radicals and the security services that keep them under surveillance. However, only a small percentage of radicals go on to become terrorists, and there is a worrying lack of evidence as to what types of online behaviors may differentiate terrorists from non-violent radicals. Most of the research to date uses text-based analysis to identify "radicals" only. In this study we sought to identify new social-media-level behavioral metrics upon which it is possible to differentiate terrorists from non-violent radicals. Methods: Drawing on an established theoretical framework, Social Learning Theory, this study used a matched case-control design to compare the Facebook activities and interactions of 48 Palestinian terrorists in the 100 days prior to their attack with a 2:1 control group. Conditional-likelihood logistic regression was used to identify precise estimates, and a series of binomial logistic regression models were used to identify how well the variables classified between the groups. Findings: Variables from each of the social learning domains of differential associations, definitions, differential reinforcement, and imitation were found to be significant predictors of being a terrorist compared to a non-violent radical. Models including these factors had a relatively high classification rate and significantly reduced error over base-rate classification. Conclusions: Behavioral-level metrics derived from social learning theory should be considered as metrics upon which it may be possible to differentiate between terrorists and non-violent radicals based on their social media profiles. These metrics may also serve to support text-based analysis and vice versa.
... Interview-based research designs vividly demonstrate the impact of perceived discrimination, victimisation and grievances upon radicalization processes and advocacy for violence against an out-group (Ali et al., 2017; Ferguson et al., 2008; Florez-Morris, 2007; Denov and Gervais, 2007; Glaser et al., 2002). Other research designs involving analysis of case files (Bouzar & Martin, 2016), personal narratives (Schafer et al., 2014) and self-report surveys provide further validation (Costello et al., 2016; Victoroff, 2012). Doosje et al.'s (2013) structural equation modelling of survey data depicts individual and collective deprivation as a root cause that inflates the likelihood of determinants of radical beliefs such as intergroup anxiety, perceptions of threat and perceived injustice, as well as personal emotional uncertainty. ...
... Altunbas & Thornton (2011) found UK-based jihadist terrorists (n = 54) to be younger and more educated than the general population. Costello et al. (2016) surveyed 1,034 youth and young adults in the US regarding exposure to online extremism. Less education was associated with exposure to online extremism. ...
Thesis
Research on terrorism is increasingly empirical and a number of significant advancements have been made. One such evolution is the emergent understanding of risk factors and indicators for engagement in violent extremism. Beyond contributing to academic knowledge, this has important real-world implications. Notably, the development of terrorism risk assessment tools, as well as behavioural threat assessment in counterterrorism. This thesis makes a unique contribution to the literature in two key ways. First, there is a general consensus that no single, stable profile of a terrorist exists. Relying on profiles of static risk factors to inform judgements of risk and/or threat may therefore be problematic, particularly given the observed multi- and equi-finality. One way forward may be to identify configurations of risk factors and tie these to the theorised causal mechanisms they speak to. Second, there has been little attempt to measure the prevalence of potential risk factors for violent extremism in a general population, i.e. base rates. Establishing general population base rates will help develop more scientifically rigorous putative risk factors, increase transparency in the provision of evidence, minimise potential bias in decision-making, improve risk communication, and allow for risk assessments based on Bayesian principles. This thesis consists of four empirical chapters. First, I inductively disaggregate dynamic person-exposure patterns (PEPs) of risk factors in 125 cases of lone-actor terrorism. Further analysis articulates four configurations of individual-level susceptibilities which interact differentially with situational, and exposure factors. The PEP typology ties patterns of risk factors to theorised causal mechanisms specified by a previously designed Risk Analysis Framework (RAF). This may, however, be more stable ground for risk assessment than relying on the presence or absence of single factors.
However, with no knowledge of base rates, the relevance of seemingly pertinent risk factors remains unclear. Yet how to develop base rates is of equal concern. Hence, second, I develop the Base Rate Survey and compare two survey questioning designs, direct questioning and the Unmatched Count Technique (UCT). Under the conditions described, direct questioning yields the most appropriate estimates. Third, I compare the base rates generated via direct questioning to those observed across a sample of lone-actor terrorists. Lone-actor terrorists demonstrated more propensity, situational, and exposure risk factors, suggesting these offenders may differ from the general population in measurable ways. Finally, moving beyond examining the prevalence rates of single factors, I collect a second sample in order to model the relations among these risk factors as a complex, dynamic system. To do so, the Base Rate Survey: UK is distributed to a representative sample of 1,500 participants from the UK. I introduce psychometric network modelling to terrorism studies which visualises the interactions among risk factors as a complex system via network graphs.
... Online hate, or cyberhate, involves the use of technology to express hatred of, or devalue, some collective, usually based on race, ethnicity, immigrant status, religion, gender, gender identity, sexual identity, or political persuasion (see Blazak, 2009; Costello et al., 2016; Hawdon et al., 2014, 2017). It differs from other types of cyberviolence, such as cyberstalking or cyberbullying, in that the attack targets a collective instead of an individual. ...
Chapter
Explicit, undeniable expressions of hate, such as hate crimes, are surging in the United States and Europe. Many scholars have linked such crimes to hateful speech and extremist ideas, especially online. Therefore, one would expect hate speech and hate crimes to have a similar upward trajectory over the past few years. This chapter explores that hypothesis by tracking how online hate speech has changed across time. Using aggregate data from the United States and the United Kingdom from 2013 and 2018, the analysis compares trends in levels of exposure and the type of hate expressed. After discussing what cyberhate is and highlighting why it is important to track, the chapter explores how the level of exposure to and type of cyberhate in each country changed between 2013 and 2018. Understanding how exposure to and expressions of hate have changed over time within countries helps researchers understand patterns of social change and provides information on emerging concerns related to hateful online rhetoric, such as the divisive narratives that will forestall positive social change.
... Generalized trust in others, on a macro level, has been connected with numerous societal benefits, such as less dishonest behavior [32] and more cooperation between democratic governments and citizens [33]. Considering online hate, low levels of institutional trust have been previously connected to online hate exposure [34] and production [35]. Thus, we expected that institutional trust would be connected to online hate acceptance. ...
Article
Full-text available
The Internet, specifically social media, is among the most common settings where young people encounter hate speech. Understanding their attitudes toward the phenomenon is crucial for combatting it because acceptance of such content could contribute to furthering the spread of hate speech as well as ideology contamination. The present study, theoretically grounded in the General Aggression Model (GAM), investigates factors associated with online hate acceptance among young adults. We collected survey data from participants aged 18–26 from six countries: Finland (n = 483), France (n = 907), Poland (n = 738), Spain (n = 739), the United Kingdom (n = 959), and the United States (n = 1052). Results based on linear regression modeling showed that acceptance of online hate was strongly associated with acceptance of violence in all samples. In addition, participants who admitted to producing online hate reported higher levels of acceptance of it. Moreover, association with social dominance orientation was found in most of the samples. Other sample-specific significant factors included participants’ experiences with the Internet and online hate, as well as empathy and institutional trust levels. Significant differences in online hate acceptance levels and the strength of its connections to individual factors were found between the countries. These results provide important insights into the phenomenon, demonstrating that online hate acceptance is part of a larger belief system and is influenced by cultural background, and, therefore, it cannot be analyzed or combatted in isolation from these factors.
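The abstract above reports linear regression models relating online hate acceptance to predictors such as acceptance of violence and institutional trust. A minimal ordinary-least-squares sketch on synthetic data (the variable names, coefficients, and sample size are invented for illustration, not the study's) shows the setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
violence_acceptance = rng.normal(size=n)  # standardized predictor
trust = rng.normal(size=n)                # hypothetical institutional trust score
# Synthetic outcome: hate acceptance rises with violence acceptance and
# falls slightly with institutional trust (these coefficients are made up).
hate_acceptance = 0.6 * violence_acceptance - 0.2 * trust + rng.normal(scale=0.5, size=n)

# OLS via least squares: columns are intercept, violence acceptance, trust.
X = np.column_stack([np.ones(n), violence_acceptance, trust])
coef, *_ = np.linalg.lstsq(X, hate_acceptance, rcond=None)
print(coef)  # slopes recovered near the true 0.6 and -0.2
```

The study itself fits such models separately per country and adds many more covariates, which is what allows it to compare the strength of associations across cultural contexts.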
... The behavioral logic of racism is yet to be measured in a systematic and profound way in the latter two mediums, "especially in relation to the peculiarities and characteristics of [digital social networks] which can influence traditional racist logic and function" (Olmos Alcaraz, 2018, p. 43). Noting the uneven development of this type of research with respect to traditional work on mass media, we can highlight a few, for example: Costello et al. (2016) have studied exposure to racist content in terms of the characteristics of the users; Rauch and Schanz (2013) have analyzed the impact of such content on subjects and the varied reactions to it; the works of Arriaga (2013), Cisneros and Nakayama (2015), Ferrándiz et al. (2011), García and Abrahão (2015) and Miró (2016) address specific episodes of racism; while, Alcántara and Ruíz (2017), Awan (2016), Dubrofsky and Wood (2014), Khosravinik and Zia (2014), Mason (2016) and Olmos Alcaraz (2018), have deepened the specific types of racism such as islamophobia, sexist racism, anti-Arab racism and anti-immigrant racism. All of them within social media, digital spaces or networks. ...
Article
Full-text available
This paper is a case study analysing the various functioning logics of racism and anti-racism on Twitter, specifically following the publication of a polemic tweet against immigration by Pablo Casado, president of the PP, which aligns itself with the conservative “right” of Spanish politics. The aim is to provide knowledge – still scarce in research on the subject – on the characteristics of racist discourse, and its confrontation, in digital spaces. The case study analyses how the selected political discourse, as elite discourse, elicits reactions and provokes social participation in digital spaces. Methodologically, it is based on Twitter content analysis, utilizing both quantitative (frequency of topics) and qualitative (articulation of arguments) approaches. We worked with the NVivo analysis software and selected a sample of tweets responding to the politician’s overall message, which were then coded and analysed in depth. The results point to a strong rejection of the politician’s words. However, there was an absence of visibly explicit and significant anti-racism in the ensuing retorts. The support received for his racist comment was in the minority, although some of it was very aggressive.
... ▪ sexual orientation ▪ ethnicity, race, or nationality ▪ religion These group identities are among the most common targets of cyberhate as reported by young people (e.g., Costello et al., 2016;Reichelmann et al., 2021). ...
Technical Report
Full-text available
This report presents findings about Czech adolescents’ cyberhate experiences and their caregivers’ knowledge. Caregivers refer to the parents, step-parents, and legal guardians of participating adolescents. Cyberhate refers to hateful and biased contents that are expressed online and via information and communication technologies. Our findings are based on data from a representative sample of 3,087 Czech households collected in 2021. The report is intended to provide a comprehensive picture of adolescents’ involvement with cyberhate as the exposed bystanders, as the victims, and as the perpetrators. It also provides information about their caregivers’ cyberhate exposure, and their knowledge of their child’s cyberhate victimisation.
... The problem of matchings with fairness constraints has been well studied in recent years, and the importance of fairness constraints has been highlighted in the literature, e.g., Segal-Halevi and Suksompong [2019], Luss [1999], Devanur et al. [2013], Celis et al. [2017], Kay et al. [2015], Costello et al. [2016], Bolukbasi et al. [2016]. ...
Preprint
Full-text available
Matching problems with group fairness constraints have numerous applications, from school choice to committee selection. We consider matchings under diversity constraints. Our problem involves assigning "items" to "platforms" in the presence of diversity constraints. Items belong to various "groups" depending on their attributes, and the constraints are stated in terms of lower bounds on the number of items from each group matched to each platform. In another model, instead of absolute lower bounds, "proportional fairness constraints" are considered. We give hardness results and design approximation algorithms for these problems. The technical core of our proofs is a new connection between these problems and the problem of matchings in hypergraphs. Our third problem addresses a logistical challenge involving opening platforms in the presence of diversity constraints. We give an efficient algorithm for this problem based on dynamic programming.
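The constraint structure described above (per-group lower bounds on the items matched to a platform) can be illustrated with a simple greedy feasibility routine for a single platform. This is an illustrative toy with hypothetical identifiers, not the preprint's approximation algorithm:

```python
from collections import defaultdict

def assign_with_lower_bounds(items, capacity, lower_bounds):
    """Pick up to `capacity` items so each group meets its lower bound.

    items: list of (item_id, group) pairs; lower_bounds: group -> min count.
    Returns the chosen item ids, or None if the bounds are infeasible.
    Greedy sketch: satisfy each group's bound first, then fill leftover slots.
    """
    by_group = defaultdict(list)
    for item_id, group in items:
        by_group[group].append(item_id)

    chosen = []
    for group, need in lower_bounds.items():
        if len(by_group[group]) < need:
            return None  # not enough items in this group
        chosen.extend(by_group[group][:need])
        by_group[group] = by_group[group][need:]

    if len(chosen) > capacity:
        return None  # the bounds alone already exceed capacity
    leftovers = [i for ids in by_group.values() for i in ids]
    chosen.extend(leftovers[:capacity - len(chosen)])
    return chosen

items = [("a", "g1"), ("b", "g1"), ("c", "g2"), ("d", "g2"), ("e", "g3")]
print(assign_with_lower_bounds(items, capacity=3, lower_bounds={"g1": 1, "g2": 1}))
```

With multiple platforms and proportional (rather than absolute) bounds the problem becomes hard, which is why the preprint turns to hypergraph matchings and approximation algorithms instead of greedy selection.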
... The opinion ecosystem also has a dark side. The rise of misinformation [19], filter bubbles and echo chambers [4,11,12,15,21,33] have led to the rampancy of extremist worldviews [8,37,38], leading to detrimental societal consequences such as oppression [10,27] and political violence [16]. ...
Preprint
Full-text available
Recent years have seen the rise of extremist views in the opinion ecosystem we call social media. Allowing online extremism to persist has dire societal consequences, and efforts to mitigate it are continuously explored. Positive interventions, controlled signals that add attention to the opinion ecosystem with the aim of boosting certain opinions, are one such pathway for mitigation. This work proposes a platform to test the effectiveness of positive interventions, through the Opinion Market Model (OMM), a two-tier model of the online opinion ecosystem jointly accounting for both inter-opinion interactions and the role of positive interventions. The first tier models the size of the opinion attention market using the multivariate discrete-time Hawkes process; the second tier leverages the market share attraction model to model opinions cooperating and competing for market share given limited attention. On a synthetic dataset, we show the convergence of our proposed estimation scheme. On a dataset of Facebook and Twitter discussions containing moderate and far-right opinions about bushfires and climate change, we show superior predictive performance over the state-of-the-art and the ability to uncover latent opinion interactions. Lastly, we use OMM to demonstrate the effectiveness of mainstream media coverage as a positive intervention in suppressing far-right opinions.
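The two-tier structure described above can be sketched in a few lines: a discrete-time, self-exciting intensity for total attention (tier one) and a market-share attraction rule that splits that attention across opinions (tier two), with an optional intervention boosting one opinion's attraction weight. This is an illustrative univariate simplification with invented parameters, not the authors' OMM implementation:

```python
import math

def simulate_opinion_market(T, mu, alpha, decay, attraction, intervention=None):
    """Toy two-tier sketch of an opinion attention market.
    Tier 1: baseline mu plus exponentially decaying excitation from
    past attention (a discrete-time Hawkes-style recursion).
    Tier 2: attraction weights determine each opinion's share of the
    current attention; `intervention=(index, amount)` boosts one weight."""
    attention, history, shares = [], [], []
    for t in range(T):
        # Tier 1: self-exciting attention level.
        excitation = sum(a * math.exp(-decay * (t - s)) for s, a in history)
        lam = mu + alpha * excitation
        history.append((t, lam))
        attention.append(lam)
        # Tier 2: market-share attraction split of the attention.
        weights = list(attraction)
        if intervention is not None:
            boosted, amount = intervention
            weights[boosted] += amount
        total = sum(weights)
        shares.append([lam * w / total for w in weights])
    return attention, shares

attention, shares = simulate_opinion_market(5, 1.0, 0.5, 1.0, [1.0, 1.0])
print([round(s[0] / sum(s), 2) for s in shares])  # equal weights -> equal shares
```

A positive intervention in this sketch simply raises one opinion's attraction weight, which lowers the rival opinion's share of the (unchanged) total attention — a crude analogue of the suppression effect the paper measures for mainstream media coverage.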
... The online space creates new types of risks and threats. Some authors view the contemporary phenomenon of spreading hatred online as a type of cyber-violence (Wall, 2001) or cyberhate (Costello, 2016). Hate speech, together with the radicalization of the online environment, creates space for collective identity and affinity among members of like-minded online groups, which can increase individuals' susceptibility to normative influence and reinforce shifts in opinions and attitudes toward the expectations prevailing within the group (Lee, 2006). ...
Chapter
Full-text available
The paper addresses hate speech, which increasingly pervades social networks. On the one hand, the online environment brings many benefits for communication in work, private life, and education, as the current pandemic has confirmed. On the other hand, it creates space for new types of interaction and communication and for online platforms that spread hate speech against specific social groups. Hate communication can escalate radicalization and open ever more space for extremism, undermining democratic principles and human values based on solidarity, tolerance and empathy. The aim of the study is therefore to map the pitfalls of hate speech in the online environment and to present the research that deals with this issue. It also defines major EU strategies to combat hate speech, which offer possible ways to reduce tension in online communication and provide companies, organizations and individuals with effective tools for addressing hate speech and curbing its spread online.
... Bias-based cyberbullying does target others for the same reason, but is a repeated behavior most often victimizing a member of one's school-aged peer group instead of being directed towards a collective (Blazak, 2009;Hawdon et al., 2017;Simpson, 2013). As another differentiating characteristic, online hate often relates to radicalization and the participation in extremist groups intent on doing violence towards specific targeted groups (Costello et al., 2016Hassan et al., 2018). ...
Article
Full-text available
Bias-based cyberbullying involves repeated hurtful actions online that devalue or harass one’s peers specific to an identity-based characteristic. Cyberbullying in general has received increased scholarly scrutiny over the last decade, but the subtype of bias-based cyberbullying has been much less frequently investigated, with no known previous studies involving youth across the United States. The current study explores whether empathy is related to cyberbullying offending generally and bias-based cyberbullying specifically. Using a national sample of 1644 12- to 15-year-olds, we find that those higher in empathy were significantly less likely to cyberbully others in general, and cyberbully others based on their race or religion. When the two sub-facets of empathy were considered separately, only cognitive empathy was inversely related to cyberbullying, while (contrary to expectation) affective empathy was not. Findings support focused efforts in schools to improve empathy as a means to reduce the incidence of these forms of interpersonal harm.
... This step produced the final pool of 18 publications (see Figure 1). Eight publications were excluded because they reported hate speech frequencies for mixed samples of adolescents and adults only (Costello et al., 2016;2017;Costello & Hawdon, 2018;Hawdon et al., 2017;Murray, 2011;Näsi et al., 2015;Savimäki et al., 2020). Publications from the project Global Kids Online, issued by the United Nations Children Fund (e.g., Livingstone et al., 2019) were excluded due to not meeting criterion four. ...
Article
Full-text available
Little is known about the current state of research on the involvement of young people in hate speech. This systematic review therefore presents findings on a) the prevalence of hate speech among children and adolescents and the hate speech definitions that guide prevalence assessments for this population; and b) the theoretical and empirical overlap of hate speech with related concepts. The review was guided by the Cochrane approach. To be included, publications were required to deal with real-life experiences of hate speech, to provide empirical prevalence data for samples aged 5 to 21 years, and to be published in academic formats. Included publications were full-text coded by two raters (κ = .80) and their quality was assessed. The string-guided electronic search (ERIC, SocInfo, Psycinfo, Psyndex) yielded 1,850 publications, of which eighteen, based on 10 studies, met the inclusion criteria; their findings were systematized. Twelve publications were of medium quality due to minor deficiencies in their theoretical or methodological foundations. All studies used samples of adolescents; none included younger children. Nine of the 10 studies applied quantitative methodologies. Results showed that frequencies of hate speech exposure were higher than those of victimization and perpetration. Definitions of hate speech and assessment instruments were heterogeneous. Empirical evidence was found for an often-theorized overlap between hate speech and bullying. The paper concludes by presenting a definition of hate speech, including implications for practice, policy and research.
... For example, Reyns, Henson, and Fisher (2011), in their investigation of cyberstalking, applied RAT to cyberspace and acknowledged that offenders and targets do not need to be within physical distance of each other as parties intersect within a network. Similarly, Costello, Hawdon, Ratliff, and Grantham (2016) studied exposure to hate materials on social networking websites. They measured guardianship by relational proximity to online communities and exposure to motivated offenders by social network usage. ...
Article
This paper examines the preconditions for direct and indirect interventions by guardians in cyberbullying incidents and, conversely, when automated prevention and detection systems are imperative and likely to be the most useful. A total of 316 young adults read cyberbullying messages scraped from Twitter with varying degrees of relative popularity status between the sender and receiver. The respondents were then surveyed to measure their willingness to intervene either indirectly or directly in response to these instances of cyberbullying. The results show respondents expressed greater willingness to intervene both when incidents were interpreted as cyberbullying and when their perceived severity increased. Perceptions of collective and self-efficacy (but not automated efficacy) also mattered for willingness to intervene. The results also show that participants were more willing to intervene indirectly when the bully was more popular than the victim. Implications of these findings for guardian and bystander scripts and for automated detection and prevention systems are discussed.
... Conspiracy theories have attracted an increasing amount of attention from the research community. Studies have analyzed language patterns of conspiracy-related discussions [25,50,62], attributes that correlate with conspiracy group engagement [31,51], as well as circumstances that might foster conspiracy beliefs [16,33,42,67,68]. In particular, because of the important consequences that it brings, recent work has focused on the QAnon movement on fringe social media websites [46]. ...
Preprint
Full-text available
Widespread conspiracy theories may significantly impact our society. This paper focuses on the QAnon conspiracy theory, a consequential conspiracy theory that started on and disseminated successfully through social media. Our work characterizes how Reddit users who have participated in QAnon-focused subreddits engage in activities on the platform, especially outside their own communities. Using a large-scale Reddit moderation action against QAnon-related activities in 2018 as the source, we identified 13,000 users active in the early QAnon communities. We collected the 2.1 million submissions and 10.8 million comments posted by these users across all of Reddit from October 2016 to January 2021. The majority of these users were only active after the emergence of the QAnon conspiracy theory and decreased in activity after Reddit's 2018 QAnon ban. A qualitative analysis of a sample of 915 subreddits where the "QAnon-enthusiastic" users were especially active shows that they participated in a diverse range of subreddits, often on topics unrelated to QAnon. However, most of the users' submissions were concentrated in subreddits with sympathetic attitudes towards the conspiracy theory, characterized by discussions that were pro-Trump or emphasized unconstricted behavior (often anti-establishment and anti-interventionist). Further study of a sample of 1,571 of these submissions indicates that most consist of links from low-quality sources, bringing potential harm to the broader Reddit community. These results point to the likelihood that the activities of early QAnon users on Reddit were dedicated and committed to the conspiracy, with implications for both platform moderation design and future research.
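The before/after measurement this abstract describes can be illustrated with a minimal sketch that computes, per user, the share of posts made after a moderation date. The data, user names, and the exact ban date below are hypothetical placeholders:

```python
from datetime import date

def activity_change(posts, ban_date):
    """Share of each user's posts made after a moderation date.
    `posts` maps user -> list of post dates; a value near 0 means the
    user largely went quiet after the ban (illustrative measure only)."""
    result = {}
    for user, dates in posts.items():
        after = sum(1 for d in dates if d > ban_date)
        result[user] = after / len(dates)
    return result

ban = date(2018, 9, 12)  # stand-in for Reddit's 2018 QAnon ban date
posts = {
    "u1": [date(2018, 5, 1), date(2018, 6, 2), date(2019, 1, 3)],
    "u2": [date(2017, 11, 5), date(2018, 2, 9)],
}
print(activity_change(posts, ban))  # u1: 1 of 3 posts after; u2: none after
```

The paper's actual pipeline works at far larger scale (millions of submissions and comments) and combines such activity measures with qualitative coding of the subreddits involved.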
... For example, consuming an extremist video on YouTube can encourage the recommendation of further extremist material (Faddoul et al. 2020;O'Callaghan et al. 2015). (2) The dialogue culture, i.e., whether a social medium encourages exchange with known versus unknown persons (Costello et al. 2016). We suggest to also add (3) targeting, i.e., the possibility for content producers to send their content only to people who search for certain keywords or who belong to certain groups or have certain interests (e.g., men, 16-24, who are interested in violence). ...
Article
Full-text available
Dark social media has been described as a home base for extremists and a breeding ground for dark participation. Beyond the description of single cases, it often remains unclear what exactly is meant by dark social media and which opportunity structures for extremism emerge on these applications. The current paper contributes to filling this gap. We present a theoretical framework conceptualizing dark social media as opportunity structures shaped by (a) regulation on the macro-level; (b) different genres and types of (dark) social media as influence factors on the meso level; and (c) individual attitudes, salient norms, and technological affordances on the micro-level. The results of a platform analysis and a scoping review identified meaningful differences between dark social media of different types. Particularly social counter-media and fringe communities positioned themselves as "safe havens" for dark participation, indicating a high tolerance for accordant content. This makes them a fertile ground for those spreading extremist worldviews, consuming such content, or engaging in dark participation. Context-bound alternative social media were comparable to mainstream social media but oriented towards different legal spaces and were more intertwined with governments in China and Russia. Private-first channels such as Instant messengers were rooted in private communication. Yet, particularly Telegram also included far-reaching public communication formats and optimal opportunities for the convergence of mass, group, and interpersonal communication. Overall, we show that a closer examination of different types and genres of social media provides a more nuanced understanding of shifting opportunity structures for extremism in the digital realm.
... As a consequence of these developments, our knowledge of the dynamics of online hostility is constantly expanding. We now know that social media users typically become victims of online hostility based on a host of personal attributes, like gender, social class, ethnicity, or sexual orientation (Bliuc et al. 2019;Costello et al. 2016;Saha, Chandrasekharan, and De Choudhury 2019;Silva et al. 2016). We also know that people who are visible on social media (Costello, Rukus, and Hawdon 2019;ElSherief et al. 2018) and who confront the hostile behavior of others (Costello, Rukus, and Hawdon 2019;Hawdon, Oksanen, and Räsänen 2014;Mathew et al. 2019;Ribeiro et al. 2017) often fall victim to online hostility themselves. ...
Preprint
Full-text available
Toxicity and hostility permeate political debates on social media, but who is responsible? Canonical theories of political engagement equate political resources with being a “model democratic citizen.” In contrast, we develop the theoretical argument that in the current polarized political climate, those same resources come to motivate hostile engagement. Combining two years of survey and behavioral Twitter data, we provide empirical support for a link between political resources and online political hostility. This link is especially pronounced among citizens high in affective polarization -- a trait held by many resourceful citizens in current US society. Concerningly, resourceful but hateful social media users do not simply cater to fringe audiences. Rather, they dominate online debates by tweeting more frequently, having more friends and followers, and by occupying powerful positions in their online networks. The present findings thus shed important light on the causes and consequences of online political hostility.
... Descriptive norms influence behavior particularly strongly (Manning, 2009) (Faddoul et al., 2020; O'Callaghan et al., 2015). (b) The dialogue culture, in the sense of exchange with known as well as unknown persons, through which new potential followers can be won or trusted networks strengthened (Costello et al., 2016). In addition, one could name (c) targeting, i.e., the deliberate addressing of persons who search for certain keywords or who belong to certain groups and have certain interests (e.g., men, 16-to-24-year-olds, and persons generally interested in violence). ...
Technical Report
Full-text available
Extremists are increasingly turning to dark social media. The term covers various types of alternative social media (social counter-media such as Gab, context-bound alternative social media such as VKontakte, fringe communities such as 4chan) as well as various types of dark channels (private-first channels such as Telegram and separée channels such as closed Facebook groups). This report examines the opportunity structures for extremism and extremism prevention that arise from the shift toward dark social media. A theoretical framework links influence factors on three levels: (1) regulation (e.g., through the German Network Enforcement Act, NetzDG) at the societal macro level; (2) different genres and types of (dark) social media at the meso level of individual platforms; (3) attitudes, norms, and technological affordances as motivators of human behavior, in the sense of the theory of planned behavior (Ajzen & Fishbein, 1977), at the micro level. Based on this framework, the opportunity structures for extremism and extremism prevention are examined in two studies: (1) a detailed platform analysis of dark and established social media (N = 19 platforms); (2) a scoping review of the state of research on (dark) social media in the context of extremism and extremism prevention (N = 142 texts). The results of the platform analysis offer nuanced insights into the opportunity structures created by different types and genres of (dark) social media. The scoping review provides an overview of the development of the research field and the typical research methods employed. Based on these data, research desiderata and implications for extremism prevention are discussed.
Article
YouTube is the most used social network in the United States and the only major platform that is more popular among right-leaning users. We propose the “Supply and Demand” framework for analyzing politics on YouTube, with an eye toward understanding dynamics among right-wing video producers and consumers. We discuss a number of novel technological affordances of YouTube as a platform and as a collection of videos, and how each might drive supply of or demand for extreme content. We then provide large-scale longitudinal descriptive information about the supply of and demand for conservative political content on YouTube. We demonstrate that viewership of far-right videos peaked in 2017.
Article
Filter bubbles, exacerbated by the use of digital platforms, have accelerated opinion polarization. Building on calls for interventions aimed at preventing or mitigating polarization, this research assesses the extent to which an online platform that intentionally displays two sides of an argument, with a methodology designed to "open minds," increases readers' willingness to consider an opposing view. Such open-mindedness can potentially penetrate online filter bubbles, alleviate polarization, and promote social change in an era of exponential growth of discourse on digital platforms. Using "The Perspective" digital platform, 400 respondents were divided into five distinct groups that varied in the number of articles they read related to "Black Lives Matter" (BLM). Results indicate that those who read five articles, whether related or unrelated to race, were significantly more open-minded towards BLM than the control group. Those who read five race-related articles also held significantly less hardline opinions towards BLM than the control group.
Article
This systematic review assesses the impact of mental health problems upon attitudes, intentions and behaviours in the context of radicalisation and terrorism. We identified 25 studies that measured rates of mental health problems across 28 samples. The prevalence rates are heterogeneous and range from 0% to 57%. If we pool the results of those samples (n = 19) purely focused upon confirmed diagnoses where sample sizes are known (n = 1705 subjects), the results suggest a rate of 14.4% with a confirmed diagnosis. Where studies relied wholly or in part upon privileged access to police or judicial data, diagnoses occurred 16.96% of the time (n = 283 subjects). Where studies were purely focused upon open sources (n = 1089 subjects), diagnoses were present 9.82% of the time. We then explore (a) the types and rates of mental health disorders identified, (b) comparison/control group studies, (c) studies that explore causal roles of mental health problems, and (d) other complex needs.
Article
This study explores the antecedents and consequences of unfriending in social media settings. Employing an online panel survey (N = 990), this study investigates how exposure to hate speech is associated with political talk through social media unfriending. Findings suggest that social media users who are often exposed to hate speech towards specific groups and relevant issues are more likely to unfriend others (i.e., blocking and unfollowing) in social media. Those who unfriend others are less likely to talk about public and political agendas with those with cross-cutting views but tend to often engage in like-minded political talk. In addition, this study found indirect-effects associations, indicating that social media users who are exposed to hate speech are less likely to engage in cross-cutting talk but more likely to participate in like-minded talk because they unfriend other users in social media.
Article
Nowadays, the global community is confronted by an acute problem of extremism associated with the growing intolerance, aggression and hostility of modern society. The ways extremism manifests among young people, who are perceptive and sensitive to it due to age-related characteristics, are understudied. The purpose of the study is to identify psychological characteristics that are preconditions and elements of extremism among young people. The research methods are theoretical and methodological analysis, survey methods, and methods of mathematical and statistical data processing. Illegal behaviour and a propensity for risk-taking appeared to be prerequisites for the display of extremist elements among young full-time employees. The research results can be used to provide scientific and methodological grounding for the psychological and pedagogical support of students and for tactics to prevent extremism among young people.
Chapter
Given the potential negative consequences of being exposed to online extremism and hate speech—ranging from mood swings and depression to radicalization and committing acts of violence—we consider strategies for confronting online extremism. We begin by discussing the promises and pitfalls of various forms of social control that can be enacted while online. Specifically, we discuss corporate interventions, governmental interventions, and the informal social control strategies of self-help and collective efficacy. We first consider how each strategy can possibly counter the narrative of hate promulgated by online hate speech materials and limit the extent to which individuals are exposed to these materials. We also present data on the extent to which some of these strategies are used among a sample of American Internet users between the ages of 18 and 24. Finally, we conduct an analysis of who is most likely to enact self-help when they confront hate speech online. We conclude by considering the implications of our findings and by making suggestions for improving efforts to counter online extremism.
Article
Full-text available
Young people encounter and experience both risks and opportunities when participating as actors and interactors in online spaces. Digital skills and resilience are considered important parts of a “rights-based” approach to keeping young people “safe” online in ways that enable them to avoid harm while benefiting from the opportunities. The present paper discusses findings from focus group research conducted in England with 60 young people aged 13 to 21. The research explored their perspectives on responding to different online harms, including online hate, unwanted sexual content, and unrealistic body- and appearance-related content. The findings are discussed in terms of scholarship on digital citizenship, specifically regarding the social, affective, and technical dimensions of online life and the skills required for resilience. The analysis suggests that there was a tension between young people’s individualistic responsibilisation of themselves and one another for responding to risk online and the socio-emotional aspects of online life as perceived and recounted by them in the focus groups. It is concluded that a youth-centred approach to resilience is required that encapsulates the multidimensional nature of encountering, experiencing, and responding to risk online.
Article
Full-text available
Background Most national counter-radicalization strategies identify the media, and particularly the Internet as key sources of risk for radicalization. However, the magnitude of the relationships between different types of media usage and radicalization remains unknown. Additionally, whether Internet-related risk factors do indeed have greater impacts than other forms of media remain another unknown. Overall, despite extensive research of media effects in criminology, the relationship between media and radicalization has not been systematically investigated. Objectives This systematic review and meta-analysis sought to (1) identify and synthesize the effects of different media-related risk factors at the individual level, (2) identify the relative magnitudes of the effect sizes for the different risk factors, and (3) compare the effects between outcomes of cognitive and behavioral radicalization. The review also sought to examine sources of heterogeneity between different radicalizing ideologies. Search Methods Electronic searches were carried out in several relevant databases and inclusion decisions were guided by a published review protocol. In addition to these searches, leading researchers were contacted to try and identify unpublished or unidentified research. Hand searches of previously published reviews and research were also used to supplement the database searches. Searches were carried out until August 2020. Selection Criteria The review included quantitative studies that examined at least one media-related risk factor (such as exposure to, or usage of a particular medium or mediated content) and its relationship to either cognitive or behavioral radicalization at the individual level. Data Collection and Analysis Random-effects meta-analysis was used for each risk factor individually and risk factors were arranged in rank-order. Heterogeneity was explored using a combination of moderator analysis, meta-regression, and sub-group analysis. 
Results The review included 4 experimental and 49 observational studies. Most of the studies were judged to be of low quality and suffer from multiple potential sources of bias. From the included studies, effect sizes pertaining to 23 media-related risk factors were identified and analyzed for the outcome of cognitive radicalization, and two risk factors for the outcome of behavioral radicalization. Experimental evidence demonstrated that mere exposure to media theorized to increase cognitive radicalization was associated with a small increase in risk (g = 0.08, 95% confidence interval [CI] [−0.03, 0.19]). A slightly larger estimate was observed for those high in trait aggression (g = 0.13, 95% CI [0.01, 0.25]). Evidence from observational studies shows that for cognitive radicalization, risk factors such as television usage have no effect (r = 0.01, 95% CI [−0.06, 0.09]). However, passive (r = 0.24, 95% CI [0.18, 0.31]) and active (r = 0.22, 95% CI [0.15, 0.29]) forms of exposure to radical content online demonstrate small but potentially meaningful relationships. Similar-sized estimates for passive (r = 0.23, 95% CI [0.12, 0.33]) and active (r = 0.28, 95% CI [0.21, 0.36]) forms of exposure to radical content online were found for the outcome of behavioral radicalization. Authors' Conclusions Relative to other known risk factors for cognitive radicalization, even the most salient of the media-related risk factors have comparatively small estimates. However, compared to other known risk factors for behavioral radicalization, passive and active forms of exposure to radical content online have relatively large and robust estimates. Overall, exposure to radical content online appears to have a larger relationship with radicalization than other media-related risk factors, and the impact of this relationship is most pronounced for behavioral outcomes of radicalization.
While these results may support policy-makers' focus on the Internet in the context of combatting radicalization, the quality of the evidence is low and more robust study designs are needed to enable the drawing of firmer conclusions.
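Since the review pools correlation-type effect sizes with a random-effects model, a minimal sketch of the standard DerSimonian-Laird procedure (Fisher z-transform, Q-based tau² estimate, inverse-variance weighting, back-transform) may clarify the computation. The input correlations and sample sizes below are hypothetical, not the review's data:

```python
import math

def pool_random_effects(rs, ns):
    """DerSimonian-Laird random-effects pooling of correlations.
    Each r is Fisher z-transformed (sampling variance 1/(n-3)),
    between-study variance tau^2 is estimated from Cochran's Q,
    and the pooled z is back-transformed to the r metric."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    vs = [1 / (n - 3) for n in ns]
    w = [1 / v for v in vs]
    z_fixed = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, zs))
    df = len(rs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # truncated at zero
    w_re = [1 / (v + tau2) for v in vs]
    z_re = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)
    return (math.exp(2 * z_re) - 1) / (math.exp(2 * z_re) + 1)

# Hypothetical study correlations and sample sizes.
print(round(pool_random_effects([0.20, 0.28, 0.24], [200, 150, 300]), 3))
```

When the studies are homogeneous (Q ≤ df), tau² truncates to zero and the estimate coincides with the fixed-effect pooled correlation, which is the behavior of the toy input above.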
Preprint
Full-text available
We consider the problem of assigning items to platforms in the presence of group fairness constraints. In the input, each item belongs to certain categories, called classes in this paper. Each platform specifies the group fairness constraints through an upper bound on the number of items it can serve from each class. Additionally, each platform also has an upper bound on the total number of items it can serve. The goal is to assign items to platforms so as to maximize the number of items assigned while satisfying the upper bounds of each class. In some cases, there is a revenue associated with matching an item to a platform, then the goal is to maximize the revenue generated. This problem models several important real-world problems like ad-auctions, scheduling, resource allocations, school choice etc.We also show an interesting connection to computing a generalized maximum independent set on hypergraphs and ranking items under group fairness constraints. We show that if the classes are arbitrary, then the problem is NP-hard and has a strong inapproximability. We consider the problem in both online and offline settings under natural restrictions on the classes. Under these restrictions, the problem continues to remain NP-hard but admits approximation algorithms with small approximation factors. We also implement some of the algorithms. Our experiments show that the algorithms work well in practice both in terms of efficiency and the number of items that get assigned to some platform.
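A simple greedy heuristic illustrates the upper-bound variant of the assignment problem described above. This is a sketch only, not the approximation algorithms the paper designs, and all instance data (platform names, classes, capacities) are invented:

```python
def greedy_assign(items, platforms):
    """Greedy sketch: assign each item to the first platform that still
    has overall capacity and has not hit its per-class upper bound.
    `items` is a list of (item_id, class); `platforms` maps platform ->
    {"cap": total capacity, "class_cap": {class: upper bound}}."""
    used = {p: 0 for p in platforms}
    class_used = {p: {} for p in platforms}
    assignment = {}
    for item, cls in items:
        for p, spec in platforms.items():
            cap_ok = used[p] < spec["cap"]
            cls_ok = class_used[p].get(cls, 0) < spec["class_cap"].get(cls, 0)
            if cap_ok and cls_ok:
                assignment[item] = p
                used[p] += 1
                class_used[p][cls] = class_used[p].get(cls, 0) + 1
                break
    return assignment

items = [(1, "ads"), (2, "ads"), (3, "news")]
platforms = {
    "P1": {"cap": 2, "class_cap": {"ads": 1, "news": 1}},
    "P2": {"cap": 1, "class_cap": {"ads": 1}},
}
print(greedy_assign(items, platforms))
```

Greedy assignment respects the constraints but need not maximize the number of items matched; the paper's contribution is showing when such maximization is NP-hard and giving algorithms with provable approximation factors.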
Article
How do people come to believe conspiracy theories, and what role does the internet play in this process as a socio-technical system? We explore these questions by examining online participants in the "chemtrails" conspiracy, the idea that visible condensation trails behind airliners are deliberately sprayed for nefarious purposes. We apply Weick's theory of sensemaking to examine the role of people's frames (beliefs and worldviews), as well as the socio-technical contexts (social interactions and technological affordances) for processing informational cues about the conspiracy. Through an analysis of in-depth interviews with thirteen believers and seven ex-believers, we find that many people become curious about chemtrails after consuming rich online media, and they later find welcoming online communities to support shared beliefs and worldviews. We discuss how the socio-technical context of the internet may inadvertently trap people in a perpetual state of ambiguity that becomes reinforced through a collective sensemaking process. In addition, we show how the conspiracy offers a way for believers to express their dissatisfaction with authority, enjoy a sense of community, and find some entertainment along the way. Finally, we discuss how people's frames and the various socio-technical contexts of the internet are important in the sensemaking of debunking evidence, and how such factors may function in the rejection of conspiratorial beliefs.
Article
Full-text available
Drawing from routine activity theory (RAT), this article seeks to determine the crucial factors contributing to youth victimization through online hate. Although numerous studies have supported RAT in an online context, research focusing on users of particular forms of social media is lacking. Using a sample of 15- to 18-year-old Finnish Facebook users (n = 723), we examine whether the risk of online hate victimization is more likely when youth themselves produced online hate material, visited online sites containing potentially harmful content, and deliberately sought out online hate material. In addition, we examine whether the risk of victimization is higher if respondents are worried about online victimization and had been personally victimized offline. The discussion highlights the accumulation of online and offline victimization, the ambiguity of the roles of victims and perpetrators, and the artificiality of the division between the online and offline environments among young people.
Article
Full-text available
Purpose – Trust is one of the key elements in social interaction; however, few studies have analyzed how the proliferation of new information and communication technologies influences trust. The authors examine how exposure to hate material on the internet correlates with Finnish youths’ particularized and generalized trust toward people who have varying significance in different contexts of life. Hence, the purpose of this paper is to provide new information about current online culture and its potentially negative characteristics. Design/methodology/approach – Using data collected in the spring of 2013 among Finnish Facebook users (n = 723) ages 15-18, the authors measure the participants’ trust in their family, close friends, other acquaintances, work or school colleagues, neighbors, people in general, as well as people only met online. Findings – Witnessing negative images and writings reduces both particularized and generalized trust. The negative effect is greater for particularized trust than generalized trust. Therefore, exposure to hate material seems to have a more negative effect on relationships with acquaintances than in a more general context. Research limitations/implications – The study relies on a sample of registered social media users from one country. In future research, cross-national comparisons are encouraged. Originality/value – The findings show that trust plays a significant role in online settings. Witnessing hateful online material is common among young people and is likely to affect perceived social trust. Hateful communication may thus significantly shape current online culture, which has growing importance for studying, working life, and many leisure activities.
Article
Full-text available
In contrast to the commission of hacking offences, becoming a victim of hacking has received scant research attention. This article addresses risk factors for this type of crime and explores its theoretical and empirical connectedness to the more commonly studied type of cybercrime victimization: online harassment. The results show that low self-control acts as a general risk factor in two ways. First, it leads to a higher risk of experiencing either one of these two distinct types of victimization within a 1-year period. Second, cumulatively, the experiences of being hacked and harassed are also more prominent among this group. However, specific online behaviors predicted specific online victimization types (e.g., using social media predicted only harassment and not hacking). The results thus shed more light on the extent to which criminological theories are applicable across different types of Internet-related crime.
Article
Full-text available
Using a sample of college students, we apply the general theory of crime and the lifestyle/routine activities framework to assess the effects of individual and situational factors on seven types of cybercrime victimization. The results indicate that neither individual nor situational characteristics consistently impacted the likelihood of being victimized in cyberspace. Self-control was significantly related to only two of the seven types of cybercrime victimizations and although five of the coefficients in the routine activity models were significant, all but one of these significant effects were in the opposite direction to that expected from the theory. At the very least, it would appear that other theoretical frameworks should be appealed to in order to explain victimization in cyberspace.
Chapter
Full-text available
Purpose: The prevalence of online hate material is a public concern, but few studies have analyzed the extent to which young people are exposed to such material. This study investigated the extent of exposure to and victimization by online hate material among young social media users. Design/methodology/approach: The study analyzed data collected from a sample of Finnish Facebook users (n = 723) between the ages of 15 and 18. Analytic strategies were based on descriptive statistics and logistic regression models. Findings: A majority (67%) of respondents had been exposed to hate material online, with 21% having also fallen victim to such material. The online hate material primarily focused on sexual orientation, physical appearance, and ethnicity and was most widespread on Facebook and YouTube. Exposure to hate material was associated with high online activity, poor attachment to family, and physical offline victimization. Victims of the hate material engaged in high levels of online activity. Their attachment to family was weaker, and they were more likely to be unhappy. Online victimization was also associated with the physical offline victimization. Social implications: While the online world has opened up countless opportunities to expand our experiences and social networks, it has also created new risks and threats. Psychosocial problems that young people confront offline overlap with their negative online experiences. When considering the risks of Internet usage, attention should be paid to the problems young people may encounter offline. Originality: This study expands our knowledge about exposure to online hate material among users of the most popular social networking sites. It is the first study to take an in-depth look at the hate materials young people encounter online in terms of the sites where the material was located, how users found the site, the target of the hate material, and how disturbing users considered the material to be.
Article
Full-text available
This note describes a new and unique, open source, relational database called the United States Extremist Crime Database (ECDB). We first explain how the ECDB was created and outline its distinguishing features in terms of inclusion criteria and assessment of ideological commitment. Second, the article discusses issues related to the evaluation of the ECDB, such as reliability and selectivity. Third, descriptive results are provided to illustrate the contributions that the ECDB can make to research on terrorism and criminology.
Article
Full-text available
Building on prior work surrounding negative workplace experiences, such as bullying and sexual harassment, we examine the extent to which organizational context is meaningful for the subjective experience of sex discrimination. Data draw on the 2002 National Study of the Changing Workforce, which provides a key indicator of individuals' sex discrimination experiences as well as arguably influential dimensions of organizational context—i.e., sex composition, workplace culture and relative power—suggested by prior research. Results indicate that the experience of sex discrimination is reduced for both women and men when they are part of the numerical majority of their work group. Although supportive workplace cultures mitigate the likelihood of sex discrimination, relative power in the workplace seems to matter little. We conclude by revisiting these results relative to perspectives surrounding hierarchy maintenance, group competition and internal cultural dynamics.
Article
Full-text available
Social scientists have begun to explore sexting—sharing nude or semi-nude images of oneself with others using digital technology—to understand its extent and nature. Building on this growing body of research, the current study utilizes the self-control and opportunity perspectives from criminology to explain sending, receiving, and mutually sending and receiving sext messages. The possible mediating effects of lifestyles and routine activities on the effects of low self-control also were tested using a sample of college students. Results revealed that low self-control is significantly and positively related to each type of sexting behavior, and that while certain lifestyles and routines mediated these effects, low self-control remained a significant predictor of participation in sexting.
Article
Full-text available
Consumer fraud seems to be widespread, yet little research is devoted to understanding why certain social groups are more vulnerable to this type of victimization than others. This article deals with Internet consumer fraud victimization, and uses an explanatory model that combines insights from self-control theory and routine activity theory. The results from large-scale victimization survey data among the Dutch general population (N = 6,201) reveal that people with low self-control run a substantially higher victimization risk, as do active online shoppers and people participating in online forums. Though a share of the link between low self-control and victimization is indirect, because impulsive people are more involved in risk-enhancing online routine activities, a large direct effect remains. This suggests that, within similar situations, people with low self-control respond differently to deceptive online commercial offers.
Chapter
Full-text available
Problem-oriented policing establishes a new unit of work for policing and a new unit of analysis for police research. That unit is the "problem". Problem-oriented policing management and research have been hampered by an inability to define and organize problems, that is, to group similar problems and separate dissimilar ones. To address this deficiency, this paper proposes a method for classifying common problems encountered by local police agencies. Routine activity theory provides the basis for a two-dimensional classification scheme. Using this classification scheme, all common problems are typed by the behavior of the participants and the environment where they occur. Concerns that cannot be described on both behavioral and environmental dimensions are not "problems" in the technical sense. After explaining the development of this classification scheme, the paper describes how it can be applied, examines its limitations, proposes a research agenda using the scheme, and suggests ways the classification scheme might be improved.
Article
Full-text available
Objectives: The purpose of the current study was to extend recent work aimed at applying routine activity theory to crimes in which the victim and offender never come into physical proximity. To that end, relationships between individuals' online routines and identity theft victimization were examined. Method: Data from a subsample of 5,985 respondents from the 2008 to 2009 British Crime Survey were analyzed. Utilizing binary logistic regression, the relationships between individuals' online routine activities (e.g., banking, shopping, downloading), individual characteristics (e.g., gender, age, employment), and perceived risk of victimization on identity theft victimization were assessed. Results: The results suggest that individuals who use the Internet for banking and/or e-mailing/instant messaging are about 50 percent more likely to be victims of identity theft than others. Similarly, online shopping and downloading behaviors increased victimization risk by about 30 percent. Males, older persons, and those with higher incomes were also more likely to experience victimization, as were those who perceived themselves to be at greater risk of victimization. Conclusions: Although the routine activity approach was originally written to account for direct-contact offenses, it appears that the perspective also has utility in explaining crimes at a distance. Further research should continue to explore the online and offline routines that increase individuals' risks of identity theft victimization.
Article
Full-text available
Progress in cyber technology has created innovative ways for individuals to communicate with each other. Sophisticated cell phones, often with integrated cameras, have made it possible for users to instantly send photos, videos, and other materials back and forth to each other regardless of their physical separation. This same technology also makes sexting possible – sending nude or semi-nude images, often of oneself, to others electronically (e.g., by text message, email). Few studies examining sexting have been published, and most have focused on the legal issues associated with juvenile sexting. In general, lacking are empirical analyses of the prevalence of sexting, and its potential consequences (i.e., victimization) that are theoretically grounded. Accordingly, we explored the possible link between sexting and online personal victimization (i.e., cybervictimization) among a sample of college students. As hypothesized, respondents who engaged in sexting were more likely to not only experience cybervictimization, but also to be victimized by different types of cybervictimization.
Article
Full-text available
Theoretical and empirical research investigating victimization and offending has largely been either ‘victim-focused’ or ‘offender-focused.’ This approach ignores the potential theoretical and empirical overlap that may exist among victims and offenders, otherwise referred to as ‘victim–offenders.’ This paper provides a comprehensive review of the research that has examined the relationship between victimization and offending. The review identified 37 studies, spanning over five decades (1958–2011), that have assessed the victim–offender overlap. The empirical evidence gleaned from these studies with regard to the victim–offender overlap is robust as 31 studies found considerable support for the overlap and six additional studies found mixed/limited support. The evidence is also remarkably consistent across a diversity of analytical and statistical techniques and across historical, contemporary, cross-cultural, and international assessments of the victim–offender overlap. In addition, this overlap is identifiable among dating/intimate partners and mental health populations. Conclusions and directions for future research are also discussed.
Article
Full-text available
Victimization on the Internet through what has been termed cyberbullying has attracted increased attention from scholars and practitioners. Defined as “willful and repeated harm inflicted through the medium of electronic text” (Patchin and Hinduja 2006:152), this negative experience not only undermines a youth's freedom to use and explore valuable on-line resources, but also can result in severe functional and physical ramifications. Research involving the specific phenomenon—as well as Internet harassment in general—is still in its infancy, and the current work seeks to serve as a foundational piece in understanding its substance and salience. On-line survey data from 1,378 adolescent Internet-users are analyzed for the purposes of identifying characteristics of typical cyberbullying victims and offenders. Although gender and race did not significantly differentiate respondent victimization or offending, computer proficiency and time spent on-line were positively related to both cyberbullying victimization and offending. Additionally, cyberbullying experiences were also linked to respondents who reported school problems (including traditional bullying), assaultive behavior, and substance use. Implications for addressing this novel form of youthful deviance are discussed.
Article
Full-text available
Researchers traditionally rely on routine activities and lifestyle theories to explain the differential risk of victimization; few studies have also explored nonsituational alternative explanations. We present a conceptual framework that links individual trait and situational antecedents of violent victimization. Individual risk factors include low self-control and weak social ties with the family and school. Situational risk factors include having delinquent peers and spending time in unstructured and unsupervised socializing activities with peers. We investigate the empirical claims proposed in this model on a sample of high school students, using LISREL to create a structural equation model. The results generally support our assertions that individual traits and situational variables each significantly and meaningfully contribute to victimization.
Article
Full-text available
In this paper I theorize that low self-control is a reason why offenders are at high risk of being victims of crime. I reformulate self-control theory into a theory of vulnerability and test several of its hypotheses, using data from a survey administered to a sample of college students. This research investigates how well self-control explains different forms of victimization, and the extent to which self-control mediates the effects of gender and family income on victimization. Low self-control significantly increases the odds of both personal and property victimization and substantially reduces the effects of gender and income. When criminal behavior is controlled, the self-control measure still has a significant direct effect on victimization. These results have many implications for victimization research.
Article
Full-text available
In this paper we use methods of social network analysis to examine the interorganizational structure of the white supremacist movement. Treating links between Internet websites as ties of affinity, communication, or potential coordination, we investigate the structural properties of connections among white supremacist groups. White supremacism appears to be a relatively decentralized movement with multiple centers of influence, but without sharp cleavages between factions. Interorganizational links are stronger among groups with a special interest in mutual affirmation of their intellectual legitimacy (Holocaust revisionists) or cultural identity (racist skinheads) and weaker among groups that compete for members (political parties) or customers (commercial enterprises). The network is relatively isolated from both mainstream conservatives and other extremist groups. Christian Identity theology appears ineffective as a unifying creed of the movement, while Nazi sympathies are pervasive. Recruitment...
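The network approach described above, treating hyperlinks between websites as ties, can be sketched with the simplest centrality measure used in social network analysis. This is an illustrative example with made-up site names, not the paper's actual data or methods.

```python
# Illustrative sketch: build an undirected tie network from hyperlink pairs
# and compute normalized degree centrality, i.e., the share of all other
# nodes each site is directly linked to.

def degree_centrality(edges):
    """edges: iterable of (site_a, site_b) link ties.
    Returns dict site -> degree / (n - 1), where n is the number of sites."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    n = len(neighbors)
    return {site: len(adj) / (n - 1) for site, adj in neighbors.items()}
```

A site linked to every other site scores 1.0; isolated peripheral sites score near 0. More elaborate measures (betweenness, clique detection) underlie claims about "multiple centers of influence," but the degree computation conveys the basic idea.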
Article
Full-text available
Although the use of social media by hate groups emerged contemporaneously with the Web, few have researched what influence these groups have. Will increasingly active online-hate groups lead to more acts of mass violence, or is concern over the widespread web presence of hate groups a moral panic? If we consider these groups in light of criminological theories, it becomes clear that they pose a danger. Although mass shootings will remain rare, social media sites may contribute to a relative increase in these tragic phenomena. In this paper, I consider how social media can encourage mass murder within a framework of one of the most prominent and supported criminological theories: differential association. I briefly discuss the presence of hate groups on the web and then review how the core principles of differential association are met and potentially amplified through social media. I then provide an example of the interconnectedness of hate groups and conclude with a call for future research.
Article
Full-text available
Routine activities theory has had considerable influence, stimulating subsequent theoretical development, generating an empirical literature on crime patterns and informing the design of prevention strategies. Despite these numerous applications of the theory to date, a promising vein for theoretical development, research and prevention remains untapped. The concept of handlers, or those who control potential offenders, has received relatively little attention since introduced by Felson (1986). This article examines the reasons for the lack of attention to handlers and extends routine activities theory by proposing a model of handler effectiveness that addresses these issues. In addition, the model explicitly links routine activities theory with two of its complements – the rational choice perspective and situational crime prevention – to articulate the mechanism by which handling prevents crime. We conclude by discussing the broad range of prevention possibilities offered by the model of handler effectiveness.
Article
Full-text available
Building upon Eck and Clarke’s (2003) ideas for explaining crimes in which there is no face-to-face contact between victims and offenders, the authors developed an adapted lifestyle–routine activities theory. Traditional conceptions of place-based environments depend on the convergence of victims and offenders in time and physical space to explain opportunities for victimization. With their proposed cyberlifestyle–routine activities theory, the authors moved beyond this conceptualization to explain opportunities for victimization in cyberspace environments where traditional conceptions of time and space are less relevant. Cyberlifestyle–routine activities theory was tested using a sample of 974 college students on a particular type of cybervictimization—cyberstalking. The study’s findings provide support for the adapted theoretical perspective. Specifically, variables measuring online exposure to risk, online proximity to motivated offenders, online guardianship, online target attractiveness, and online deviance were significant predictors of cyberstalking victimization. Implications for advancing cyberlifestyle–routine activities theory are discussed.
Article
Full-text available
Using data from 541 high school students, we examine the associations between structured and unstructured routine activities and adolescent violent victimization in light of gender's influence. In particular, we focused on whether such activity-victimization relationships explained any effect of gender or, in contrast, were perhaps contingent upon gender. The results showed that gender's effect on both minor and serious victimization was substantially mediated by one measured lifestyle, in particular the delinquent lifestyle. In addition, there was only modest evidence of gender moderating the effects of certain lifestyles on victimization; the effects of most activities were consistent across male and female subjects. Implications of our findings for a contemporary age-graded and gendered routine activity theory are discussed.
Article
Full-text available
Recent discussions of ‘cybercrime’ focus upon the apparent novelty or otherwise of the phenomenon. Some authors claim that such crime is not qualitatively different from ‘terrestrial crime’, and can be analysed and explained using established theories of crime causation. One such approach, oft cited, is the ‘routine activity theory’ developed by Marcus Felson and others. This article explores the extent to which the theory’s concepts and aetiological schema can be transposed to crimes committed in a ‘virtual’ environment. Substantively, the examination concludes that, although some of the theory’s core concepts can indeed be applied to cybercrime, there remain important differences between ‘virtual’ and ‘terrestrial’ worlds that limit the theory’s usefulness. These differences, it is claimed, give qualified support to the suggestion that ‘cybercrime’ does indeed represent the emergence of a new and distinctive form of crime.
Article
Full-text available
In this paper we present a "routine activity approach" for analyzing crime rate trends and cycles. Rather than emphasizing the characteristics of offenders, with this approach we concentrate upon the circumstances in which they carry out predatory criminal acts. Most criminal acts require convergence in space and time of likely offenders, suitable targets and the absence of capable guardians against crime. Human ecological theory facilitates an investigation into the way in which social structure produces this convergence, hence allowing illegal activities to feed upon the legal activities of everyday life. In particular, we hypothesize that the dispersion of activities away from households and families increases the opportunity for crime and thus generates higher crime rates. A variety of data is presented in support of the hypothesis, which helps explain crime rate trends in the United States 1947-1974 as a byproduct of changes in such variables as labor force participation and single-adult households.
Chapter
This chapter examines the shift toward the use of social media to fuel violent extremism, what the key discursive markers are, and how these key discursive markers are used to fuel violent extremism. The chapter then addresses and critiques a number of radicalisation models, including but not limited to phase-based models. Discursive markers are covered under three broad narrative areas. Narratives of grievance are designed to stimulate strong emotive responses to perceived injustices. Based on these grievances, active agency is advocated in the form of jihad as a path that one should follow. Finally, a commitment to martyrdom is sought as the goal of these discursive markers.
Book
Cybercrime and Society provides a clear, systematic, and critical introduction to current debates about cybercrime. It locates the phenomenon in the wider contexts of social, political, cultural, and economic change. It is the first book to draw upon perspectives spanning criminology, sociology, law, politics, and cultural studies to examine the whole range of cybercrime issues.
Article
Before the Islamic State in Iraq and the Levant (ISIL) leveraged the Internet into a truly modern quasi-state propaganda machine through horrendous online videos, travel handbooks, and sophisticated Twitter messaging, more humble yet highly effective precursors targeted youthful Western Muslims for radicalism, during a time when homegrown plots peaked. These brash new entrants into the crowded, freewheeling world of extremist cyber-haters joined racists, religious extremists of other faiths, Islamophobes, single-issue proponents, as well as anti-government rhetoricians and conspiracists. The danger from these evolving new provocateurs, then and now, is not that they represent a viewpoint that is widely shared by American Muslims. Rather, the earlier successful forays by extremist Salafists firmly established the Internet as a tool to rapidly radicalize, train, and connect a growing but small number of disenfranchised or unstable young people to violence. The protections that the First Amendment provides to expression in the United States, contempt for Western policies and culture, contorted fundamentalism, and the initial successes of these early extremist Internet adopters, outlined here, paved the way for the ubiquitous and sophisticated online radicalization efforts we see today.
Article
Contents: Introduction: Why Are Racial Minorities Behind Today?; What is Racism? The Racialized Social System; Racial Attitudes or Racial Ideology? An Alternative Paradigm for Examining Actors' Racial Views; The "New Racism": The Post-Civil Rights Racial Structure in the U.S.; Color-Blind Racism and Blacks; Conclusion: New Racism, New Theory, and New Struggle.