Article

Who Views Online Extremism? Individual Attributes Leading to Exposure

Authors: Costello, Hawdon, Ratliff, and Grantham

Abstract

Who is likely to view materials online maligning groups based on race, nationality, ethnicity, sexual orientation, gender, political views, immigration status, or religion? We use an online survey (N = 1034) of youth and young adults recruited from a demographically balanced sample of Americans to address this question. By studying demographic characteristics and online habits of individuals who are exposed to online extremist groups and their messaging, this study serves as a precursor to a larger research endeavor examining the online contexts of extremism.


... These findings also translate to the radicalization context, with several authors emphasizing that men are more likely to favor violence and extremism (Muxel, 2020). Furthermore, there is some evidence that older individuals are more exposed to hate and extremism online (Costello et al., 2016), which may fuel their support for radicalization (Hassan et al., 2018). Lastly, though results of previous studies reveal some caveats, stronger partisanship (i.e., the intensity of someone's support for a particular party rather than the underlying orientation) generally leads to stronger intentions to engage in radical actions, such as political violence (Gøtzsche-Astrup et al., 2021). ...
... Though not all users are entrapped in homogeneous environments on social media, with the degree of homogeneity varying based on individual characteristics (Sindermann et al., 2021), high involvement in identity bubbles may contribute to support for radical action in several ways. In general, individuals who use social media to a greater extent are more exposed to hate and extremism online (Costello et al., 2016), which may, in the next step, increase their support for radicalization (Hassan et al., 2018). ...
... In line with previous literature (e.g., Gøtzsche-Astrup et al., 2021; Muxel, 2020), our results showed that men and those with more extreme political views were more supportive of radical action in hypothetical scenarios, thereby supporting H1 and H3. Contrary to our predictions, indirectly derived from previous literature (Costello et al., 2016; Hassan et al., 2018), the results related to age suggested higher levels of support for radical action among younger participants, thus rejecting H2. ...
Preprint
Full-text available
Radicalization and violent extremism endanger the security and stability of modern societies. In the present study, we aimed to improve our understanding of these phenomena by investigating the factors that determine individuals' support for radical action, with a particular focus on the role of identity bubbles on social media. A sample of 563 Europeans filled out an online questionnaire containing demographic questions, scales related to intolerance of uncertainty and identity bubbles, and questions about their support for radical action in hypothetical scenarios. We found that men, younger individuals, those with more extreme political views, and those more prone to interacting with like-minded users on social media (i.e., homophily) exhibited higher support for radical action. Moreover, social identification and information bias on social media moderated the association between intolerance of uncertainty and support for radical action, amplifying the positive association between the variables. These effects were not further moderated by partisanship.
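The moderation result described above (information bias amplifying the link between intolerance of uncertainty and support for radical action) boils down to an interaction term in a regression. The sketch below illustrates that idea only; it uses simulated data and hypothetical variable names, not the authors' data or code.
```python
# Illustrative sketch (not the authors' code): testing a moderation effect
# like the one described, using simulated data and an OLS interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 563  # sample size reported in the abstract
df = pd.DataFrame({
    "intolerance_uncertainty": rng.normal(0, 1, n),   # hypothetical standardized scale
    "information_bias": rng.normal(0, 1, n),          # hypothetical moderator
    "age": rng.integers(18, 65, n),
    "male": rng.integers(0, 2, n),
})
# Simulate an outcome whose effect of intolerance of uncertainty grows with the moderator
# (for illustration only).
df["support_radical_action"] = (
    0.3 * df["intolerance_uncertainty"]
    + 0.2 * df["information_bias"]
    + 0.25 * df["intolerance_uncertainty"] * df["information_bias"]
    + rng.normal(0, 1, n)
)
# The '*' term expands to both main effects plus their interaction.
model = smf.ols(
    "support_radical_action ~ intolerance_uncertainty * information_bias + age + male",
    data=df,
).fit()
print(model.summary())  # a positive interaction coefficient indicates amplification
```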
... This has exposed them to the virtual community to a greater extent, along with its negatives and risks, including various types of electronic crimes that have impacted their studies and family relationships. Some families have resorted to reporting these electronic crimes, leading to negative consequences within the family that significantly affect the relationship between university students and their families (Costello et al., 2016). Coordination between security agencies is essential to combat this type of crime and encourage citizens to report any risks associated with cybercrimes, increasing awareness of such crimes. ...
... Most students extensively use the internet for educational and entertainment purposes, particularly social networking sites like Facebook (GAS, 2019; Costello et al., 2016). However, excessive use of Facebook can lead to time wastage, privacy violations, and online harassment (Aljohni et al., 2021). ...
... However, excessive use of Facebook can lead to time wastage, privacy violations, and online harassment (Aljohni et al., 2021). University students are susceptible to various cybercrimes, such as data theft, hacking, and identity theft, due to a lack of internet literacy (Costello et al., 2016; Aljohni et al., 2021). Effective policies and laws are required to combat cybercrimes as they continue to increase with the growing number of internet users (Costello et al., 2016; Aljohni et al., 2021). ...
Article
Full-text available
This study examines the profound impact of cybercrime on the social dynamics of students at Ha'il University in Saudi Arabia during the tumultuous period of the COVID-19 pandemic. Using a carefully crafted and validated questionnaire and data collected from 110 participants, the study reveals nuanced shifts in relationships involving peers, instructors, and especially family members. The importance of understanding these shifts is underscored by the global increase in cyber activity during lockdowns. The findings reveal a pronounced and disturbing impact of cybercrime on family ties. Although the overall gender-based findings were mostly the same, female students showed a higher level of awareness in family-centered situations. This suggests that there are deeper implications for this group and points to the subtleties in society that may be influencing these views. In response to these troubling findings, the study presents a comprehensive set of recommendations. These include raising awareness of cybercrime among students and the wider community, pushing for the introduction of holistic policies and regulations against such crimes, and the essential embedding of cybercrime education within academic curricula. It also emphasizes the paramount need for robust support structures for victims, underscoring the importance of a holistic approach to combating the threat of cybercrime. The implementation of these strategies aims not only to curate a safer digital landscape but also to mitigate the ever-increasing detrimental effects of cybercrime on interpersonal relationships. The robust sample size coupled with meticulous methodology enhances the credibility and applicability of these findings, making this study a central reference point for future research efforts, policy formulation, educational strategies, and community outreach programs in an increasingly digital age.
... Several recent works on cyberhate have been produced with the use of online surveys (e.g., Bernatzky et al., 2022; Bhutkar et al., 2021; Celuch et al., 2022; Costello et al., 2016; Costello et al., 2019; Costello et al., 2021; Hawdon et al., 2014; Näsi et al., 2015; Obermaier & Schmuck 2022; Oksanen et al., 2014; Reichelmann et al., 2021). Indeed, online surveys are an increasingly popular mode for collecting data (Couper, 2000; Couper et al., 2001). ...
... We instead focus on the influence of providing a definition of a concept that is subsequently asked about later in the survey. The concept in question, hate speech, can be considered sensitive because while there is widespread agreement about it being an anti-social behavior, there is not necessarily agreement about what it is and some actually agree with the sentiments being expressed by it (see Costello et al., 2016). ...
... Previous work found evidence that such online non-probability proportional sampling panels yield similar results to probability sampling of a similar nature (Simmons & Bobo, 2015; Weinberg et al., 2014). Similar samples have been used in numerous studies of online hate and other subjects (Costello et al., 2016; Näsi et al., 2015; Räsänen et al., 2016; Reichelmann et al., 2021; Sedgwick et al., 2022). ...
Article
The purpose of this research is to test the validity of commonly used measures of exposure to and production of online extremism. Specifically, we investigate if a definition of hate influences survey responses about the production of and exposure to online hate. To explore the effects of a definition, we used a split experimental design on a sample of 18 to 25-year-old Americans where half of the respondents were exposed to the European Union’s definition of hate speech and the other half were not. Then, all respondents completed a survey with commonly used items measuring exposure to and perpetration of online hate. The results reveal that providing a definition affects self-reported levels of exposure and perpetration, but the effects are dependent on race. The findings provide evidence that survey responses about online hate may be conditioned by social desirability and framing biases. The findings that group differences exist in how questions about hate are interpreted when definitions of it are not provided mean we must be careful when using measures that try to capture exposure to and the production of hate. While more research is needed, we recommend providing a clear, unambiguous definition when using surveys to measure online hate.
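As a rough illustration of how such a split-design comparison can be analyzed, the sketch below contrasts self-reported exposure between a definition condition and a no-definition condition, overall and within racial groups. The data, group labels, and effect sizes are simulated for illustration only and are not the study's.
```python
# Illustrative sketch (hypothetical data): comparing self-reported exposure between
# respondents who did and did not receive a definition of hate speech, overall and by group.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "saw_definition": rng.integers(0, 2, n),              # experimental condition
    "race": rng.choice(["white", "black", "other"], n),   # hypothetical grouping
})
# Simulated outcome: exposure reports depend on condition (illustration only).
p = 0.45 + 0.10 * df["saw_definition"]
df["reported_exposure"] = rng.random(n) < p

# Overall condition effect
table = pd.crosstab(df["saw_definition"], df["reported_exposure"])
chi2, pval, _, _ = chi2_contingency(table)
print(f"overall: chi2={chi2:.2f}, p={pval:.3f}")

# Condition effect within each group (the paper reports race-dependent effects)
for race, sub in df.groupby("race"):
    t = pd.crosstab(sub["saw_definition"], sub["reported_exposure"])
    chi2, pval, _, _ = chi2_contingency(t)
    print(f"{race}: chi2={chi2:.2f}, p={pval:.3f}")
```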
... This corresponds with the numbers of other international studies, according to which victimization ranges between 5% and 11% (Bedrosova et al., 2022; Blaya and Audrin, 2019). Yet, encounters with online hate as mere observers are more frequent, with 30-65% of young respondents from different countries indicating that they have been exposed to online hate at least once (Bedrosova et al., 2022; Costello et al., 2016). ...
... We included the participants' age, as previous studies have indicated that older adolescents are more likely to be exposed to risky content (Bedrosova et al., 2022; Costello et al., 2016; Wachs et al., 2022). In addition, gender might be a relevant variable that could indicate how people respond to online hate since previous studies have shown that male social media users are more likely to be confronted with prejudicial and hateful content (Bedrosova et al., 2022; Obermaier and Schmuck, 2022). ...
... However, the relationship between gender and online hate has not been consistently found in all studies (e.g., Costello et al., 2016; Wachs et al., 2022). It still seems important to assess potential gender effects. ...
Article
Researchers have repeatedly discussed how to strengthen supportive and pro-social responses to online hate, such as reporting and commenting. Researchers and practitioners commonly call for the promotion of media literacy measures that are believed to be positively associated with countermeasures against online hate. In this study (conducted in 2021), we examined the relationships between media literacy proficiencies, namely (1) moral-participatory motivation and abilities and, consequently, (2) the establishment of moral-participatory behaviors, and prosocial responses to online hate. A sample of 1489 adolescents and young adults (16–22 years old) from eight European countries is examined. Results confirmed that higher participatory-moral motivation and behavior were significantly associated with stronger intentions to report online hate. Commenting on hateful online content, on the other hand, was significantly related to participatory-moral abilities and past experiences with online harassment. Implications for the role of social media literacy in the context of online hate are discussed.
... Though existing research has focused on the associations of cyberhate exposure, [16][17][18] we still lack knowledge about the nature of unintentional (i.e., accidentally encountering cyberhate) and intentional (i.e., deliberately searching for cyberhate) exposure which might represent different experiences with different predictors and preventive approaches. An emerging line of research explores this difference among adults 3 or in relation to online extremism. ...
... We focus on ethnicity, nationality, and religion related cyberhate, which is the most commonly witnessed by youth. 3,18 Cyberhate exposure was linked to negative behavioral outcomes as it could start a 'circle of violence' from exposure to perpetration. 19,20 However, cyberhate exposure results from two distinct risky situations related to different factors. ...
... Overlaps among cyberhate bystanders, victims, and perpetrators have been found, 18,38,39 however we still lack a clear distinction of how these roles relate to intentional and unintentional exposure, which might concern different groups of vulnerable adolescents and lead to different outcomes. ...
Article
Cyberhate is one of the risks that adolescents can experience online. It is considered a content risk when it is unintentionally encountered and a conduct risk when the user actively searches for it. Previous research has not differentiated between these experiences, although they can concern different groups of adolescents and be connected to distinctive risk factors. To address this, our study first focuses on both unintentional and intentional exposure and investigates the individual-level risk factors that differentiate them. Second, we compare each exposed group of adolescents with those who were not exposed to cyberhate. We used survey data from a representative sample of adolescents (N = 6,033, aged 12-16 years, 50.3 percent girls) from eight European countries (Czechia, Finland, Flanders, France, Italy, Poland, Romania, and Slovakia) and conducted multinomial logistic regression. Our findings show that adolescents with higher sensation seeking, proactive normative beliefs about aggression (NBA), and who report cyberhate perpetration, are at higher risk of intentionally searching for cyberhate contents compared with those who are unintentionally exposed. In comparison with unexposed adolescents, reporting other risky experiences was a risk factor for both types of exposure. Furthermore, NBA worked differently: reactive NBA was a risk factor for intentional exposure, but proactive NBA did not play a role and even decreased the chance of unintentional exposure. Digital skills increased both types of exposure. Our findings stress the need to differentiate between intentional and unintentional cyberhate exposure and to examine proactive and reactive NBA separately.
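A multinomial logistic regression of the kind described contrasts each exposure category (unintentional, intentional) against a reference group. The sketch below is a minimal illustration with simulated data and hypothetical predictor names; it is not the authors' analysis.
```python
# Illustrative sketch (hypothetical data): a multinomial logistic regression with a
# three-category outcome (not exposed / unintentional / intentional exposure).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 6033  # sample size reported in the abstract
X = pd.DataFrame({
    "sensation_seeking": rng.normal(0, 1, n),
    "proactive_nba": rng.normal(0, 1, n),      # normative beliefs about aggression
    "reactive_nba": rng.normal(0, 1, n),
    "digital_skills": rng.normal(0, 1, n),
})
# Simulated outcome: 0 = not exposed, 1 = unintentional exposure, 2 = intentional exposure
logits = np.column_stack([
    np.zeros(n),
    0.5 + 0.2 * X["digital_skills"],
    -0.5 + 0.6 * X["sensation_seeking"] + 0.4 * X["reactive_nba"],
])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

model = sm.MNLogit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())  # coefficients contrast each exposure category against "not exposed"
```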
... Further, a link with offline violence has been found (Pauwels & Schils, 2016). Research also shows that bias-based cyberaggression in the form of cyberhate is prevalent on social media and in online discussions (e.g., Costello et al., 2016; Hawdon et al., 2017; Oksanen et al., 2014; Reichelmann et al., 2021; Weimann & Masri, 2020); it is entering everyday online communication platforms; and it is reaching broad audiences, including young people and children (e.g., Kardefelt Winther et al., 2023; Machackova et al., 2020; Wachs et al., 2022). The aim of our study is to review the existing broad evidence for young people's bias-based cyberaggression experiences. ...
... We concentrate on studies of the young population, which we define as people up to the age of 30, because we assume bias-based cyberaggression might be especially harmful to young people. Firstly, they are active users of social media and platforms where bias-based cyberaggression is increasingly spread (e.g., Costello et al., 2016; Hawdon et al., 2017; Oksanen et al., 2014; Reichelmann et al., 2021; Weimann & Masri, 2020). Secondly, they are at a developmental stage of identity formation and the cognitive development of their intergroup attitudes (Cortese, 2005), which might be affected by the biased messages of bias-based cyberaggression. ...
Article
Full-text available
Bias-based cyberaggression—hateful and bias-based content and interactions via information and communication technologies—is a frequent experience for young internet users that can result in detrimental consequences for both individuals and society. Ample research has focused on the factors related to involvement in bias-based cyberaggression. This study systematically reviews the research published in the past decade about the investigations into exposure, vicarious and direct victimization, and aggression among young people (up to age 30). We aimed to provide a complex summarization of the research findings about the risk and protective factors and the consequences of experiences with bias-based cyberaggression—specifically the diverse manifestations of bias-based cyberaggression targeted toward ethnicity, race, nationality, religion, sexual orientation, gender, weight, and disability. Three academic databases (EBSCO, Scopus, and WoS) were searched and 41 articles were included in the review. The results show a dominant research focus on bias-based cyberaggression victimization and on the bias-based cyberaggression that targets ethnicity, race, nationality, and religion, leaving a gap in the knowledge about the different types of targeted group categories and bias-based cyberaggression perpetration. The identified risk factors for bias-based cyberaggression involvement included being a minority, low psychological well-being, other victimization experiences, higher internet use, and risky internet use. An overlap was found for bias-based cyberaggression involvement with other offline and online victimization experiences. This review showed limited knowledge about protective factors, namely the social-level and contextual factors. The identified factors, as well as the gaps in the knowledge, are discussed in relation to research implications and practice and policy implications.
... With respect to understanding who is exposed to online hate, a growing body of literature (e.g., Costello et al. 2016; Costello, Hawdon & Cross 2017; Costello, Rukus & Hawdon 2018; Hawdon, Costello, Ratliff, Hall & Middleton 2017; Hawdon, Oksanen, & Räsänen 2017; Oksanen et al. 2014; Räsänen et al. 2016) supports the use of modifications of Cohen and Felson's (1979) routine activity theory. Routine activity theory argues that crimes occur when a motivated offender, a suitable target, and a lack of capable guardians converge in time and space (Cohen & Felson, 1979). ...
... In recognition of the many potential dangers associated with online hate, recent work explored factors associated with being the direct target of cyberhate using a sample of youth and young adults in America between the ages of 15 and 36 (Costello, Hawdon, and Ratliff 2016). Twenty-three percent of the survey respondents in the study reported that they were specifically the target of online hate in the past three months. ...
... It involves the use of information and computer technology to express hatred toward a collective on the basis of race, ethnicity, gender, gender identity, sexual orientation, national origin, religion, or some other group characteristic (Hawdon, Oksanen, & Räsänen, 2017). It differs from other forms of cyberviolence such as cyberbullying because the hatred is focused on a collective instead of an individual (Costello, Hawdon, Ratliff, & Grantham, 2016; Hawdon, Oksanen, & Räsänen, 2014). While organized hate groups were once the dominant purveyors of online hate, individuals who are unaffiliated or only loosely affiliated with such groups are now the primary disseminators of hate by far (Potok, 2015). ...
... Similarly, an individual's group memberships -or differential social locations- would also influence the social learning process by shaping not only the deviant definitions to which they are exposed, but also the likelihood they would interact with deviant individuals (i.e., differential association) and be rewarded by them for behaving in a criminal or deviant manner (i.e., positive reinforcement). Since the most common extremist ideology online today typically reflects a specific political position that is characteristically hyper-socially conservative, hyper-nationalistic, and anti-federal government (see Costello et al., 2016), being highly involved in a political group that adopts these positions would likely influence the probability of producing hate materials. Similarly, since right-wing hate often adopts a pro-Christian, anti-Jewish, and anti-Muslim position, being deeply involved with a religious group that advocates these positions would also likely increase the probability of being involved with producing hate materials. ...
... Sixty-five per cent of adolescents had been bystanders. The most common form of involvement in cyberhate is observation (Costello et al., 2016; Räsänen et al., 2015). ...
... Compared to the factorial distribution by dimensions, the items that contributed the greatest weight according to their coefficient of determination were the following. Component one grouped the largest number of items, which coincides with previous findings suggesting that the most common form of involvement in cyberhate is observation (Costello et al., 2016; Räsänen et al., 2015). ...
... Hate speech refers to statements that incite discrimination of any kind, such as on the grounds of race, ethnicity, social class, gender, or religion (Meyer-Pflug, 2009), and it has been amplified by technological advances and the spread of social networks (Costello, Hawdon, Ratliff & Grantham, 2016). Hate speech directed at celebrities (Ghaffari, 2022) is sometimes seen as part of celebrity life because, on becoming public figures, celebrities are commonly placed in a position of social judgment, not least because they are regarded as models for society (Franssen, 2020) and, consequently, are expected to respect certain social rules and standards (Foucault, 2001). ...
... This violence manifests through hate speech (Ghaffari, 2022), which encompasses the varied forms of discrimination used to attack, belittle, and humiliate people because of their ethnic characteristics, race, gender, sexuality, or religion (Costello et al., 2016; Meyer-Pflug, 2009). ...
Article
Full-text available
Digital influence is an increasingly common phenomenon in society. Digital influencers appropriate pop divas and paratextualize them, highlighting the normalization imposed on their bodies. We argue that the normalization applied to these divas takes place through a process of "haterization." Given this, we sought to analyze how Brazilian digital influencers paratextualize the haterization of pop divas. To do so, we analyzed the discourse of digital influencers through Foucauldian Discourse Analysis. Our research archive was built from celebrity gossip blogs that published news about pop divas. As for the results, we arrived at two discursive formations that reveal forms of resistance to the haterization of pop divas as paratextualized by digital influencers: the first concerns the protection of the divas, evidenced by support for their choices and by empathy regarding the attacks they have suffered; the second materializes in highlighting the negative sides of being a pop diva, whose life is thrown into constant public judgment.
... Strain also has a documented relationship to online behavior. For example, several studies find those who are victimized by cyberbullies disproportionately engage in cyberbullying themselves (Costello et al. 2016; Jang, Song, and Kim 2014; Marcum et al. 2014). In terms of strain's effect on extremism, the relationships are not consistent on and offline. ...
... While it appears that time online is not inherently harmful, the short and long-term effects of excessive Internet usage are still a matter of debate and inquiry. Even so, as teens increasingly use the Internet, especially social media, their risk of exposure to myriad forms of risky cyber-content grows (Costello et al. 2016). With the recent rise in cyberviolence (Federal Bureau of Investigation 2019), it is imperative to understand how teens and young adults respond to such harmful online material. ...
Article
Cyberviolence is a growing concern, leading researchers to explore why some users engage in harmful acts online. This study uses leading criminological theories—the general theory of crime/self-control theory, social control/bonding theory, social learning theory, and general strain theory—to explore why 15–18-year-old American adolescents join ongoing acts of cyberviolence. Additionally, we examine the role of socio-demographic traits and online routines in perpetuating cyberviolence. Results of an ordinal logistic regression indicate that low self-control, online strain, closeness to online communities, and watching others engage in online attacks are associated with joining an ongoing act of cyberviolence. Moreover, an individual’s age and familial relationships are inversely related to joining an online attack. Taken together, all four criminological theories we test help predict engagement in cyberviolence, indicating an integrative theory may be valuable in understanding participation in cyberhate attacks.
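The ordinal logistic regression mentioned above models an ordered outcome (e.g., willingness to join an ongoing online attack) as a function of the theoretical predictors. The following sketch shows the general setup with simulated data and hypothetical variable names, assuming statsmodels' OrderedModel; it is not the study's actual model.
```python
# Illustrative sketch (hypothetical data): an ordinal logistic regression predicting an
# ordered outcome from criminological predictors like those named in the abstract.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 1200
X = pd.DataFrame({
    "low_self_control": rng.normal(0, 1, n),
    "online_strain": rng.normal(0, 1, n),
    "closeness_online_community": rng.normal(0, 1, n),
    "age": rng.integers(15, 19, n),
})
# Simulated ordered outcome, from "never" to "very likely" to join an attack.
latent = (0.5 * X["low_self_control"] + 0.4 * X["online_strain"]
          + 0.3 * X["closeness_online_community"] - 0.1 * X["age"]
          + rng.logistic(0, 1, n))
y = pd.cut(latent, bins=[-np.inf, -1, 0.5, 2, np.inf],
           labels=["never", "maybe", "likely", "very_likely"])

model = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(model.summary())  # positive coefficients raise the odds of higher categories
```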
... Sixty-five percent of the adolescents had been bystanders. The most common form of involvement in cyberhate is observation (Costello et al., 2016; Räsänen et al., 2015). ...
... "I have witnessed someone making demeaning jokes online about another person's religious beliefs." This dimension grouped the largest number of items, which coincides with previous findings suggesting that the most common form of involvement in cyberhate is observation (Costello et al., 2016; Räsänen et al., 2015). ...
Article
Full-text available
Electronic Journal of Research in Educational Psychology. ISSN: 1696-2095. Introduction. The Internet and social networks have become spaces of digital interaction in which the participation of people of all ages has led to exchanges that can be recognized by their hateful content relating to political and religious ideologies, gender, and ethnic origin. This phenomenon has been called cyberhate. Method. This research set out to design and validate a scale measuring cyberhate in the Colombian population. The final sample consisted of 1,984 university students between 18 and 61 years of age (M = 23.25; SD = 5.06) from 23 universities and 14 departments of Colombia. Results. The final cyberhate scale comprised 32 items distributed across three dimensions: victimization, perpetration, and observation of political, religious, gender-based, and ethnicity-based cyberhate. Content validity was established with the support of nine expert judges, comparative validity against the ECIPQ test, and construct validity through Exploratory Factor Analysis and Confirmatory Factor Analysis. Discussion and conclusions. The constructed test shows optimal psychometric qualities. Its reliability index is α = 0.939, which indicates an adequate level of internal consistency. The ECO is valid and reliable for measuring cyberhate in the reference population. Keywords: cyberhate, university students, validation, Colombia.
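To make the validation steps concrete, the sketch below runs an exploratory factor analysis and computes Cronbach's alpha on simulated item responses. It assumes the factor_analyzer package and made-up items; it only illustrates the kind of analysis reported, not the authors' procedure.
```python
# Illustrative sketch (hypothetical data): exploratory factor analysis plus Cronbach's
# alpha for a multi-item scale with three correlated dimensions.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(3)
n_respondents, n_items = 500, 12
# Simulate items loading on three dimensions (e.g., victimization, perpetration, observation)
factors = rng.normal(0, 1, (n_respondents, 3))
loadings = np.zeros((n_items, 3))
for j in range(n_items):
    loadings[j, j // 4] = 0.7          # four items per hypothetical dimension
items = factors @ loadings.T + rng.normal(0, 0.5, (n_respondents, n_items))
df = pd.DataFrame(items, columns=[f"item{j+1}" for j in range(n_items)])

fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
fa.fit(df)
print(np.round(fa.loadings_, 2))       # pattern matrix of item loadings

def cronbach_alpha(data: pd.DataFrame) -> float:
    """Classic internal-consistency estimate for a set of items."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"alpha = {cronbach_alpha(df):.3f}")
```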
... Respondents who were unintentionally exposed to fringe or radical content, or who intentionally accessed or shared such content online, tended to be university educated men who were younger than other respondents and spent a greater amount of time online. This profile is consistent with other research using community samples (Costello et al. 2016). The respondents who accessed radical content intentionally were more likely to spend time on all types of platforms, with the largest differences observed between podcasts, presentations or videos, news or documentaries, websites and online articles. ...
Article
Full-text available
Using a large, national survey of online Australians, we measured unintentional and intentional exposure to fringe or radical content and groups online. Two in five respondents (40.6%) reported being exposed to material they described as fringe, unorthodox or radical. One-quarter of these respondents (23.2%) accessed the content intentionally. One-third (29.9%) said the content they had seen depicted violence. Fringe or radical content was often accessed through messages, discussions and posts online. Mainstream social media and messaging platforms were the platforms most frequently used to share fringe or radical content. Being a member of a group promoting fringe or radical content was associated with increased sharing of that content with other internet users. Efforts to restrict access to radical content and groups online, especially on mainstream platforms, may help reduce intentional and unintentional exposure to and sharing of that content.
... Findings from multiple studies indicate that more time spent online predicts involvement in cyberbullying (Adebayo et al., 2019;Balakrishnan, 2015). Costello et al. (2017) found that, for cyberbystanders in particular, time spent online predicted their more frequent intervention in cyberbullying events. ...
Article
Full-text available
In our study, we aimed to determine how different demographic variables (gender, age, free time spent online) and mechanisms of moral (dis)engagement (justification, disregarding or misrepresenting injurious consequences, diffusion of responsibility, dehumanization) predict perceptions of cyberbullying among student bystanders, according to the Bystander Intervention Model. The model proposes that a bystander must take five steps in order to intervene: notice the event, interpret the event as an emergency requiring help, accept responsibility for intervening, know how to intervene or provide help, and implement decisions to intervene (Latané and Darley, 1970). Our sample included 205 student-bystanders in cyberbullying. The most variance (27 %) was explained in the second step – to interpret the event as an emergency and help. Older students and students with less pronounced dehumanization were more likely to perceive cyberbullying as serious and to help. Our findings suggest a need for greater interest and intervention in the group of cyber-bystanders among this age group of students as well.
... Some studies reveal that youth in Europe come across significant amounts of hate content online (Machackova et al., 2020; Wachs et al., 2021). Overall, exposure to hostile content online is the most common experience in terms of involvement in cyberhate (Machackova et al., 2020; Wachs et al., 2019). Costello et al. (2016) found that people who frequent websites or virtual spaces that include information that is mean-spirited or abusive are more likely to be the targets of cyberhate themselves. Some young people respond to cyberbullying with counter-speech and public support for the person or group being attacked (Wachs et al., 2020). ...
... Consequently, in what is now referred to as "an era of fake news," users are invariably exposed to misinformation as they interact with and share content on SNS. Research has found a positive association between extensive use of SNS and an increased likelihood of encountering and sharing misinformation (Morosoli et al., 2022) and other types of harmful content, such as extremism (Costello et al., 2016). Prolonged and repeated exposure to misinformation can lead individuals to accept and act upon false information (Pennycook et al., 2018; Carrieri et al., 2019). ...
Article
Full-text available
In recent years, concerns over the potential negative impacts of social network sites (SNS) on users’ digital wellbeing are on the rise. These concerns have sparked a growing demand for SNS to introduce changes to their business model and offer features that prioritize users’ wellbeing, even if it means introducing fees to users. Still, it is questionable whether such a new model is welcomed by users and commercially valid. In this paper, we investigate (i) people’s willingness to pay (WTP) for digital wellbeing services designed to foster more autonomy, control, and personal growth in users and (ii) the influence of sociodemographic variables, personality, and social networks use disorder (SNUD) on WTP. Data were collected through an online survey with participants from two distinct cultural contexts, the European and Arabic. The samples comprised 262 participants from Europe (Males: 57.63%) and 251 from Arab countries (Males: 60.56%). The participants ranged in age from 18 to 66 years ( M Europe = 29.16, SD = 8.42; M Arab = 31.24, SD = 8.23). The results revealed that a notable proportion of participants were willing to pay for digital wellbeing services (Europe: 24%; Arab: 30%). Females in the European sample demonstrated a higher WTP for “Mental Health Issues Minimization” compared to males. In the Arab sample, males showed a higher WTP for “Safeguarding Data Privacy” than females. Multiple regression analyses revealed that SNUD and the need for cognition emerged as significant and positive predictors of WTP in both the European and Arab samples. Differences in the relations of personality traits and sociodemographic variables on WTP in each sample were noted. These insights contribute to our understanding of the factors shaping individuals’ preferences and valuation related to digital wellbeing services on SNS and highlight the importance of considering sociodemographic variables and personal factors as well as cultural contexts when planning and introducing them.
... There has been increasing exploitation of social media for spreading abusive language and hateful content regarding race, color, ethnicity, gender, or religion (Barnidge 2015; Costello et al. 2016; Rösner et al. 2016; Schmidt and Wiegand 2017). In this context, legal efforts and regulations exist to counter illegitimate hate speech; however, they are not always sufficient due to the enormous scale of social media platforms. This gives rise to a need for automatic detection of such texts and tweets spreading hatred. ...
Article
Full-text available
Social media platforms have gained immense popularity in recent years and are used for various activities such as marketing, news-sharing, and celebrating achievements. However, they are also notorious for spreading hateful and discriminatory content, which can cause harm to individuals and communities. Therefore, it is crucial to detect and remove such content from social media platforms as soon as possible. Although research related to the detection of hate speech and inflammatory content is increasing, studies focused on code-mixed Indian languages are limited. Hence, in this work, we have conducted a comprehensive study, where we have compared the effectiveness of various neural networks, and transformer-based techniques for the detection of hate and objectionable language in social media tweets in Hinglish, Tamil written in English, and Malayalam written in English, to propose the best-performing ensemble model, named as DConvBLSTM-MuRIL. To carry out our experiments, we have created our datasets for the three languages under study and compared the results with already existing datasets. Our proposed weighted ensemble framework outperformed the existing models, achieving better-weighted F1-scores and better accuracy for all the three languages under consideration.
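The core of a weighted ensemble like the one described is combining each model's class probabilities with fixed weights before taking the argmax. The sketch below shows only that combination step on fabricated probability outputs; the weights, inputs, and stand-in models are hypothetical, and this is not the DConvBLSTM-MuRIL implementation.
```python
# Illustrative sketch of weighted soft voting: per-model class probabilities are combined
# with weights and the argmax is taken. Inputs and weights are made up for illustration.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(4)
n_samples, n_classes = 200, 2            # e.g., hate vs. non-hate
y_true = rng.integers(0, n_classes, n_samples)

def fake_model_probs(accuracy: float) -> np.ndarray:
    """Stand-in for softmax outputs of a fine-tuned model (e.g., a transformer or BiLSTM head)."""
    probs = rng.random((n_samples, n_classes))
    correct = rng.random(n_samples) < accuracy
    probs[np.arange(n_samples), np.where(correct, y_true, 1 - y_true)] += 2.0
    return probs / probs.sum(axis=1, keepdims=True)

model_probs = [fake_model_probs(a) for a in (0.80, 0.75, 0.70)]
weights = np.array([0.5, 0.3, 0.2])       # hypothetical validation-derived weights

ensemble_probs = sum(w * p for w, p in zip(weights, model_probs))
y_pred = ensemble_probs.argmax(axis=1)
print("weighted F1:", round(f1_score(y_true, y_pred, average="weighted"), 3))
```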
... On the other hand, Costello, Hawdon, Ratliff and Grantham (2016) found no relationship between gender and exposure. Instead, in a survey of American youth, they found that Black Americans, foreign-born youth, and younger respondents were less likely to encounter this type of content. ...
Article
Full-text available
As the most prolific users of the Internet, youth are exposed to a diverse array of harmful content and experiences, including cyberbullying and sexual exploitation. What is less well understood is the impact of hate and violent extremism on youth in these online spaces. This study surveyed over 800 youth from Alberta, Canada, to identify where they most frequently encountered hateful and extremist content online, how they react to it, and what they believed were the most appropriate responses to these problems. This study adds to a growing literature which takes youth perspectives seriously in the study of this problem. Our study found that more than three-quarters of youth surveyed reported encountering hateful content, while more than two-thirds reported encountering extremist content. Our findings add to a growing debate on the relationship between identity factors and exposure. While our results indicate respondents who identify as female are more likely to report encountering extremist and hateful content than males, intersectionality factors shed new light on the patterns of online exposure among youth. Specifically, we found that the effect of gender is mediated by other identity factors, like being a visible minority or identifying as 2SLGBTQ+.
... This has become evident in the institutionalization of radical parties in the parliaments of many countries, as well as in the openly aggressive demeanor and language directed against elites, such as the government, scientists, and journalists, and against specific groups perceived as 'foreign.' Hate speech and uncivil comments are so ubiquitous in online environments that the vast majority of internet users report having observed such content at least once in the recent past (e.g., Bedrosova et al., 2022; Costello et al., 2016). ...
Article
Full-text available
Over the past decade, extremists have increasingly aimed to integrate their ideologies into the center of society by changing the presentation of their narratives to appeal to a larger audience. This process is termed (strategic) mainstreaming. Although this phenomenon is not new, the factors that contribute to the mainstreaming of radical and extremist ideas have not been systematically summarized. To identify elements fostering mainstreaming dynamics, we conducted a systematic literature review of N = 143 studies. The results demonstrate that mainstreaming’s gradual and long-term nature makes it particularly difficult to operationalize, which is why it often remains a buzzword. In this article, we propose a novel conceptualization of mainstreaming, understanding it as two communicative steps (content positioning and susceptibility), and present 12 contributing factors. These factors can serve as starting points for future studies, helping to operationalize mainstreaming, empirically monitor it, and, subsequently, tackle its (long-term) effects.
... The problem of matchings with fairness constraints has been well-studied in recent years, and the importance of fairness constraints has been highlighted in the literature, e.g., [42,34,19,14,32,16,12]. A lot of work in the literature has focused on group fairness constraints, modeled as upper bounds on the number of items from each group that can be allocated to a platform. ...
Chapter
Full-text available
Matching problems with group-fairness constraints and diversity constraints have numerous applications such as in allocation problems, committee selection, school choice, etc. Moreover, online matching problems have lots of applications in ad allocations and other e-commerce problems like product recommendation in digital marketing. We study two problems involving assigning items to platforms, where items belong to various groups depending on their attributes; the set of items are available offline and the platforms arrive online. In the first problem, we study online matchings with proportional fairness constraints. Here, each platform on arrival should either be assigned a set of items in which the fraction of items from each group is within specified bounds or be assigned no items; the goal is to assign items to platforms in order to maximize the number of items assigned to platforms. In the second problem, we study online matchings with diversity constraints, i.e. for each platform, absolute lower bounds are specified for each group. Each platform on arrival should either be assigned a set of items that satisfy these bounds or be assigned no items; the goal is to maximize the set of platforms that get matched. We study approximation algorithms and hardness results for these problems. The technical core of our proofs is a new connection between these problems and the problem of matchings in hypergraphs. Our experimental evaluation shows the performance of our algorithms on real-world and synthetic datasets exceeds our theoretical guarantees.
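To illustrate the flavor of the diversity-constrained variant (absolute per-group lower bounds, all-or-nothing assignment to platforms arriving online), here is a naive greedy sketch. It is not the paper's approximation algorithm and offers no guarantees; the item and group names are made up.
```python
# Illustrative sketch (not the paper's algorithm): a naive greedy for online matching with
# per-group lower bounds. Each arriving platform states a lower bound per group; it is
# matched only if the remaining offline items can cover every bound.
from collections import defaultdict

def greedy_online_diverse_matching(items, platform_demands):
    """items: list of (item_id, group); platform_demands: iterable of {group: lower_bound}
    in arrival order. Returns the assignment {platform_index: [item_ids]}."""
    pool = defaultdict(list)                    # remaining items, bucketed by group
    for item_id, group in items:
        pool[group].append(item_id)

    assignment = {}
    for p_idx, demand in enumerate(platform_demands):
        if all(len(pool[g]) >= k for g, k in demand.items()):
            chosen = []
            for g, k in demand.items():         # take exactly the lower bound from each group
                chosen.extend(pool[g][:k])
                pool[g] = pool[g][k:]
            assignment[p_idx] = chosen          # platform matched
        # else: platform arrives but is left unmatched (no partial assignments allowed)
    return assignment

items = [("i1", "A"), ("i2", "A"), ("i3", "B"), ("i4", "B"), ("i5", "C")]
demands = [{"A": 1, "B": 1}, {"A": 1, "C": 1}, {"B": 2}]
print(greedy_online_diverse_matching(items, demands))
```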
... Beyond identity and ideology, psychosocial variables can also influence adolescents' lifestyles, in turn facilitating their exposure to extremist content and networks (Boehnke et al., 1998; Harpviken, 2020; Jasko et al., 2017; Lösel et al., 2020). Evidence supports this claim, indicating that adolescents who experience discrimination, violence, and marginalization often exhibit shared interests with extremists and report higher rates of encounters with online extremist content (Costello et al., 2016; Hawdon et al., 2019). Having similar lifestyles and shared interests not only increases the likelihood of exposure to extremist content (Jasko et al., 2017), but also fosters stronger emotional connection between adolescents and extremist groups. ...
Article
Full-text available
We examined the relationship between adolescents' extremist attitudes and a multitude of mental health, well-being, psycho-social, environmental, and lifestyle variables, using a state-of-the-art machine learning procedure and a nationally representative survey dataset of Norwegian adolescents (N = 11,397). Three key research questions were addressed: 1) can adolescents with extremist attitudes be distinguished from those without, using psycho-socio-environmental survey items; 2) what are the most important predictors of adolescents' extremist attitudes; and 3) do the identified predictors correspond to specific latent factorial structures? Of the total sample, 17.6% showed elevated levels of extremist attitudes. The prevalence was significantly higher among boys and younger adolescents than girls and older adolescents, respectively. The machine learning model reached an AUC of 76.7%, with an equal sensitivity and specificity of 70.5% in the test dataset, demonstrating satisfactory performance. Items reflecting positive parenting, quality of relationships with parents and peers, externalizing behavior, and well-being emerged as significant predictors of extremism. Exploratory factor analysis partially supported the suggested latent clusters. Out of the 550 psycho-socio-environmental variables analyzed, behavioral problems, individual and social well-being, along with basic needs such as a secure family environment and interpersonal relationships with parents and peers, emerged as significant factors contributing to susceptibility to extremism among adolescents.
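The reported AUC together with an equal sensitivity and specificity corresponds to evaluating a classifier and then choosing the operating point where the true positive rate and true negative rate coincide. The sketch below shows that evaluation logic on synthetic data with a generic gradient boosting classifier; it is not the study's model or dataset.
```python
# Illustrative sketch (synthetic data): training a classifier on survey-style features and
# reporting AUC plus the threshold where sensitivity and specificity are (nearly) equal.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=50, n_informative=10,
                           weights=[0.82, 0.18], random_state=0)  # ~18% "elevated" class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, scores), 3))

# Pick the operating point where sensitivity (TPR) and specificity (1 - FPR) are closest,
# mirroring how an equal sensitivity/specificity figure can be reported.
fpr, tpr, thresholds = roc_curve(y_te, scores)
i = np.argmin(np.abs(tpr - (1 - fpr)))
print("threshold:", round(thresholds[i], 3),
      "sensitivity:", round(tpr[i], 3),
      "specificity:", round(1 - fpr[i], 3))
```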
... First contacts often happen incidentally, for instance, due to recommendation algorithms. Radical groups and actors actively use online channels and networks to share and distribute their content to those already aligned with the propagated ideas and to share propaganda material that is meant for recruiting new people to the cause (e.g., Bouko, Naderer, et al., 2021; Costello et al., 2016; Rothut et al., 2023; Schulze, Hohner, Greipl, et al., 2022). In addition, the affordances of social media platforms, discussion boards, and messengers facilitate the distribution and accessibility of radical content. ...
... When analyzing how Internet users find polluted content, several routes are feasible. Aside from actively seeking it, most users report more passive encounters, such as being forwarded videos and, most often, simply 'stumbling' upon it (Costello et al., 2016; Reinemann et al., 2019; Rieger et al., 2013). ...
... Studies usually investigate OHS Exposure by means of a single item asking how many times, or whether, participants had witnessed hateful content on SNs. 2,15,16 OHS is usually assessed as a general construct, without specifying the target of the hateful expressions (e.g., sexual orientation, ethnicity, and so on). However, following Fishbein, 17 each type of HS might have its own pattern of development and different predictors and consequences. ...
Article
Nowadays, adolescents have extensive access to Information and Communication Technologies, which allow them to engage in social networking activities that may expose them to Online Hate Speech (OHS). While there are few cross-sectional studies about the effects of OHS Exposure on attitudes and aggressive behavior, no study has aimed to analyze the tendency to Speak Up when exposed to such content (e.g., reporting, etc.). In addition, no instruments have yet been validated to assess these constructs. The aim of the present study, focused on Online ethnic Hate Speech (OeHS), is twofold: (a) develop a scale to measure OeHS Exposure and the tendency to Speak Up and analyze its psychometric properties; (b) analyze the longitudinal association between Xenophobia (XEN), OeHS Exposure, and Speaking Up against OeHS, while taking into account gender differences and the nested nature of the data. Six hundred sixty-six Italian high school students (52.7 percent male; MAge = 15[0.64]), nested in 36 ninth grade classes (10 schools), took part in the longitudinal study. The first wave of data collection occurred in early 2020, before the COVID-19 pandemic. The second and third waves took place 12 and 15 months later, respectively. Findings suggest that the OeHS Scale has good psychometric properties. Moreover, according to the findings, while the three variables of interest are always cross-sectionally correlated, a longitudinal negative association has been found between XEN and both Exposure and Speaking Up. Regarding the impact of OeHS Exposure, the good news is the absence of a longitudinal association with both XEN and Speaking Up.
... The behavior of haters starts with individuals' frequent online exposure to radical content that contains hatred for certain social groups. The material most commonly used by haters is group stereotypes, because nearly half of such negative material centers on race or ethnicity, religion, and other differences (Costello et al., 2016; Pohjonen, 2019). ...
Article
Full-text available
Cyber aggression has become a very troubling social problem. This phenomenon is a problem of interaction between individuals and groups in cyberspace. This study aims to examine the role of perceived threat, mediated by prejudice, in cyber-aggression by Indonesian youth. The method used in this study is a quantitative survey with structural equation modeling (SEM) analysis. The sample in this study was drawn using a purposive sampling technique, with 1118 teenagers as respondents from several cities in Indonesia, using web-based self-report scales. The results show that the theoretical model of adolescent cyber-aggression behavior is in accordance with empirical conditions in the field because it meets the goodness-of-fit standard, meaning that the perception of threats, mediated by prejudice, is simultaneously proven to contribute to adolescent cyber-aggression behavior.
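The mediation structure described (perceived threat leading to prejudice, which leads to cyber-aggression) can be illustrated with a simple product-of-coefficients decomposition. The sketch below uses simulated data and two ordinary regressions as a simplified stand-in for the full SEM reported; variable names and effect sizes are made up.
```python
# Illustrative sketch (simulated data): mediation logic estimated with two regressions and
# a product-of-coefficients indirect effect. This is a simplified stand-in for an SEM.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 1118  # sample size reported in the abstract
threat = rng.normal(0, 1, n)
prejudice = 0.5 * threat + rng.normal(0, 1, n)                           # a-path (simulated)
cyber_aggression = 0.4 * prejudice + 0.1 * threat + rng.normal(0, 1, n)  # b- and c'-paths
df = pd.DataFrame({"threat": threat, "prejudice": prejudice,
                   "cyber_aggression": cyber_aggression})

a = smf.ols("prejudice ~ threat", df).fit().params["threat"]
m = smf.ols("cyber_aggression ~ prejudice + threat", df).fit()
b, direct = m.params["prejudice"], m.params["threat"]
print(f"indirect (a*b) = {a * b:.3f}, direct (c') = {direct:.3f}")
```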
... The findings on gender differences in cyberbullying have been inconsistent, with studies reporting male (Yang et al., 2006; Calvete et al., 2010; Chang et al., 2013; Leung et al., 2018) or female (Smith et al., 2006; Sourander et al., 2010) predominance, or no gender difference (Li, 2006; Smith et al., 2008; Livingstone et al., 2011). In this study, male adolescents experienced more frequent cyberbullying, probably because they are more likely to engage in "risky" online activities, including online video games, surfing the "dark web," or inviting strangers as friends to their social networks, all of which increase the likelihood of becoming a victim of cyberbullying (Patchin and Hinduja, 2010; Fanti et al., 2012; Costello et al., 2016; Lapierre and Dane, 2020). It was expected that the risk of cyberbullying would be higher in adolescent patients who experienced symptom worsening or relapse during the COVID-19 pandemic, as most psychiatric symptoms interfere with communication with others. ...
Article
Full-text available
Objective: This study examined the prevalence of cyberbullying and its relationship with residual depressive symptoms in clinically stable adolescent psychiatric patients during the COVID-19 outbreak using network analysis. Methods: This was a multicenter, cross-sectional study. Adolescent patients attending maintenance treatment at outpatient departments of three major psychiatric hospitals were included. Experience of cyberbullying was measured with a standard question, while the severity of Internet addiction and depressive symptoms were measured using the Internet Addiction Test and the Patient Health Questionnaire-9, respectively. The network structure of depression and cyberbullying was characterized, and indices of "Expected Influence" were used to identify symptoms central to the network. To identify particular symptoms that were directly associated with cyberbullying, the flow function was used. Results: Altogether 1,265 patients completed the assessments. The overall prevalence of cyberbullying was 92.3% (95% confidence interval (CI): 90.8–93.7%). Multiple logistic regression analysis revealed that male gender (p = 0.04, OR = 1.72, 95%CI: 1.04–2.85) was significantly associated with a higher risk of cyberbullying, while a relapse of illness during the COVID-19 pandemic was significantly associated with a lower risk of cyberbullying (p = 0.03, OR = 0.50, 95%CI: 0.27–0.93). In the network of depression and cyberbullying, "Sad mood," "Anhedonia," and "Energy" were the most central (influential) symptoms. Furthermore, "Suicidal ideation" had the strongest negative association with cyberbullying, followed by "Guilt." Conclusion: During the COVID-19 pandemic, the experience of cyberbullying was highly prevalent among clinically stable adolescent psychiatric patients, particularly male patients. This finding should raise awareness of this issue, emphasizing the need for regular screening and interventions for adolescent patients. Central symptoms (e.g., "Sad mood," "Anhedonia," and "Energy") identified in this study should be targeted in interventions and preventive measures.
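In symptom-network studies of this kind, "expected influence" is typically the signed sum of a node's edge weights in an estimated partial-correlation network. The sketch below illustrates that computation on simulated data using a graphical lasso estimate; the node labels echo the abstract but the data and network are fabricated, and this is not the study's analysis.
```python
# Illustrative sketch (simulated data): estimate a regularized partial-correlation network
# over symptom items and compute each node's expected influence (sum of its edge weights).
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(6)
symptoms = ["sad_mood", "anhedonia", "energy", "guilt", "suicidal_ideation", "cyberbully"]
n, p = 1265, len(symptoms)                       # n mirrors the reported sample size
data = rng.multivariate_normal(np.zeros(p), np.eye(p) + 0.3, size=n)

model = GraphicalLassoCV().fit(data)
prec = model.precision_
# Convert the precision matrix into partial correlations (edge weights).
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)

expected_influence = partial_corr.sum(axis=1)    # signed sum of a node's edges
for name, ei in sorted(zip(symptoms, expected_influence), key=lambda x: -x[1]):
    print(f"{name}: {ei:.3f}")
```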
... In general, the behavior of haters on social media begins with netizens' frequent exposure to radical content, i.e., content full of hatred toward and against social groups based on PJ, as well as negative perceptions of one group or the government [35,36]. ...
Article
Full-text available
This study aims to test students' cyber aggression models based on previous studies, especially those related to high school students' Cyber Aggression behavior. Following the stages of adolescent development, this research uses the socio-ecological theoretical perspective of the cyber context. This study determines several predictive variables as risk factors and protective factors that have the most potential to influence student cyber aggression, such as perceived threats, school climate, and prejudice. The model tested in this study is the role of perceived threat and school climate in students' Cyber Aggression behavior, mediated by prejudice. This study uses a quantitative approach with structural equation modeling (SEM) analysis. The sampling technique used in this study is purposive sampling. The subjects of this study are high school students who actively use social media every day, with 1118 students as respondents from several cities in Indonesia. The result shows that the theoretical model of students' Cyber Aggression behavior fits the empirical conditions in the field, meeting the goodness-of-fit standard, meaning that the perception of threats and the school climate, mediated by prejudice, were simultaneously proven to play a role as predictors of student Cyber Aggression.
... However, it seems that this effort is not enough to limit and prevent such a process. An additional line of recent and complementary research has emerged, focusing on the cultural, educational, and psychological aspects of extremist behavior (Costello et al., 2016). Nowadays, there is a consensus that extremism and radicalization depend especially on mindset, and a predisposition for extremist behavior can be found in all humans (Stankov et al., 2018). ...
Article
Full-text available
Can artificial intelligence networks promote extremism awareness through social intelligence and emotional intelligence? This research contributes to this question in the context of Saudi Arabia. This study defines a model of a cooperative process through an artificial intelligence network, based on knowledge exchange, to generate a high level of extremism awareness and social intelligence. Four main variables were adopted, developed, defined, and measured: artificial intelligence networks, social intelligence, emotional intelligence, and extremism awareness. We fixed attributes for contextualized interactions through a network platform, between professionals and non-professionals, against extremism. The application of artificial intelligence in such platforms lets members share reliable information to combat extremism more effectively. The findings demonstrate that network centrality, network scale, relationship strengths, relationship stability, and reciprocity developed through artificial intelligence networks stimulate extremism awareness by developing social awareness. Emotional intelligence also seems to be important. It moderates the link between platform users and extremism awareness. It facilitates situational and contextual awareness to define appropriate behavior.
... Online hate, or cyberhate, involves the use of technology to express hatred of, or devalue, some collective, usually based on race, ethnicity, immigrant status, religion, gender, gender identity, sexual identity, or political persuasion (see Blazak, 2009; Costello et al., 2016; Hawdon et al., 2014, 2017). It differs from other types of cyberviolence, such as cyberstalking or cyberbullying, in that the attack targets a collective instead of an individual. ...
Chapter
Explicit, undeniable expressions of hate, such as hate crimes, are surging in the United States and Europe. Many scholars have linked such crimes to hateful speech and extremist ideas, especially online. Therefore, one would expect hate speech and hate crimes to have followed a similar upward trajectory over the past few years. This chapter explores that hypothesis by tracking how online hate speech has changed across time. Using aggregate data from the United States and the United Kingdom from 2013 and 2018, the analysis compares trends in levels of exposure and the type of hate expressed. After discussing what cyberhate is and highlighting why it is important to track, the chapter explores how the level of exposure to and the type of cyberhate in each country changed between 2013 and 2018. Understanding how exposure to and expressions of hate have changed over time within countries helps researchers understand patterns of social change and provides information on emerging concerns related to hateful online rhetoric, such as the divisive narratives that will forestall positive social change.
... Exposure to filter bubbles can hinder pluralistic dialogue and thus jeopardize democracies in modern society [7]. Various studies have shown the impact of filter bubbles on the creation of polarization [8,9] and extremism [10,11]. These consequences can be self-reinforcing, since living inside a filter bubble and being exposed to racist, sexist, or homophobic views might lead one to experience desensitization and to further spread discriminatory materials [12]. ...
Article
Filter Bubbles, exacerbated by use of digital platforms, have accelerated opinion polarization. This research builds on calls for interventions aimed at preventing or mitigating polarization. It assesses the extent to which an online digital platform that intentionally displays two sides of an argument, with a methodology designed to "open minds," aids readers' willingness to consider an opposing view. This "open mindedness" can potentially penetrate online filter bubbles, alleviate polarization, and promote social change in an era of exponential growth of discourse via digital platforms. Utilizing "The Perspective" digital platform, 400 respondents were divided into five distinct groups varying in the number of articles of reading material related to "Black Lives Matter" (BLM). Results indicate that those reading five articles, either related or unrelated to race, were significantly more open-minded towards BLM than the control group. Those who read five race-related articles also showed significantly reduced levels of holding a hardliner opinion towards BLM than the control group.
... Sexual orientation, ethnicity, race, or nationality, and religion: these group identities are among the most common targets of cyberhate as reported by young people (e.g., Costello et al., 2016; Reichelmann et al., 2021). ...
Technical Report
Full-text available
This report presents findings about Czech adolescents’ cyberhate experiences and their caregivers’ knowledge. The term caregivers refers to the parents, step-parents, and legal guardians of participating adolescents. Cyberhate refers to hateful and biased content expressed online and via information and communication technologies. Our findings are based on data from a representative sample of 3,087 Czech households collected in 2021. The report is intended to provide a comprehensive picture of adolescents’ involvement with cyberhate as exposed bystanders, as victims, and as perpetrators. It also provides information about their caregivers’ cyberhate exposure and their knowledge of their child’s cyberhate victimisation.
... The problem of matchings with fairness constraints has been well studied in recent years, and the importance of fairness constraints has been highlighted in the literature, e.g., Segal-Halevi and Suksompong [2019], Luss [1999], Devanur et al. [2013], Celis et al. [2017], Kay et al. [2015], Costello et al. [2016], Bolukbasi et al. [2016]. ...
Preprint
Full-text available
Matching problems with group fairness constraints have numerous applications, from school choice to committee selection. We consider matchings under diversity constraints. Our problem involves assigning "items" to "platforms" in the presence of diversity constraints. Items belong to various "groups" depending on their attributes, and the constraints are stated in terms of lower bounds on the number of items from each group matched to each platform. In another model, instead of absolute lower bounds, "proportional fairness constraints" are considered. We give hardness results and design approximation algorithms for these problems. The technical core of our proofs is a new connection between these problems and the problem of matchings in hypergraphs. Our third problem addresses a logistical challenge involving opening platforms in the presence of diversity constraints. We give an efficient algorithm for this problem based on dynamic programming.
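To make the constraint structure in this abstract concrete, the following minimal Python sketch checks whether a given item-to-platform assignment meets per-group lower bounds. It is an illustration only; the names and data are hypothetical, and it does not reproduce the authors' approximation algorithms.

```python
# Minimal sketch of diversity lower-bound constraints for matchings: items carry
# group labels, and each platform states a minimum number of items it must
# receive from each group. Illustration only, not the paper's algorithm.
from collections import Counter

def satisfies_diversity_lower_bounds(assignment, item_groups, lower_bounds):
    """assignment: dict item -> platform
    item_groups: dict item -> group label
    lower_bounds: dict platform -> dict group -> minimum required count
    Returns True if every platform meets its per-group lower bounds."""
    # Count how many items of each group each platform received.
    counts = {platform: Counter() for platform in lower_bounds}
    for item, platform in assignment.items():
        counts[platform][item_groups[item]] += 1
    # Every platform must meet or exceed its lower bound for every group.
    return all(
        counts[platform][group] >= minimum
        for platform, bounds in lower_bounds.items()
        for group, minimum in bounds.items()
    )

# Toy example: two platforms, each requiring at least one item from each group.
assignment = {"i1": "p1", "i2": "p1", "i3": "p2", "i4": "p2"}
item_groups = {"i1": "A", "i2": "B", "i3": "A", "i4": "B"}
lower_bounds = {"p1": {"A": 1, "B": 1}, "p2": {"A": 1, "B": 1}}
print(satisfies_diversity_lower_bounds(assignment, item_groups, lower_bounds))  # True
```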
... The opinion ecosystem also has a dark side. The rise of misinformation [19], filter bubbles, and echo chambers [4,11,12,15,21,33] has led to rampant extremist worldviews [8,37,38], producing detrimental societal consequences such as oppression [10,27] and political violence [16]. ...
Preprint
Full-text available
Recent years have seen the rise of extremist views in the opinion ecosystem we call social media. Allowing online extremism to persist has dire societal consequences, and efforts to mitigate it are continuously explored. Positive interventions, controlled signals that add attention to the opinion ecosystem with the aim of boosting certain opinions, are one such pathway for mitigation. This work proposes a platform to test the effectiveness of positive interventions, through the Opinion Market Model (OMM), a two-tier model of the online opinion ecosystem jointly accounting for both inter-opinion interactions and the role of positive interventions. The first tier models the size of the opinion attention market using the multivariate discrete-time Hawkes process; the second tier leverages the market share attraction model to model opinions cooperating and competing for market share given limited attention. On a synthetic dataset, we show the convergence of our proposed estimation scheme. On a dataset of Facebook and Twitter discussions containing moderate and far-right opinions about bushfires and climate change, we show superior predictive performance over the state-of-the-art and the ability to uncover latent opinion interactions. Lastly, we use OMM to demonstrate the effectiveness of mainstream media coverage as a positive intervention in suppressing far-right opinions.
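The two-tier structure described above can be illustrated with a small, hypothetical Python sketch: a discrete-time Hawkes-style intensity for the size of the attention market and a market-share attraction split across opinions. The kernel, parameters, and data below are assumptions for illustration and do not reproduce the paper's estimation scheme.

```python
import numpy as np

def attention_intensity(event_counts, mu, alpha, decay):
    """Tier 1 (illustration): discrete-time Hawkes-style intensity.
    event_counts: array of shape (T,) with past event counts per time step.
    Intensity at the current step = baseline mu plus exponentially decaying
    excitation from past events."""
    T = len(event_counts)
    lags = np.arange(T, 0, -1)  # how many steps ago each past count occurred
    excitation = alpha * np.sum(event_counts * np.exp(-decay * lags))
    return mu + excitation

def market_shares(attractions):
    """Tier 2 (illustration): market share attraction split -- each opinion's
    share of the limited attention is its attraction divided by the total."""
    attractions = np.asarray(attractions, dtype=float)
    return attractions / attractions.sum()

# Toy example: attention size for one opinion stream, then shares for three opinions.
counts = np.array([2, 0, 1, 3, 1])
size = attention_intensity(counts, mu=0.5, alpha=0.8, decay=0.3)
shares = market_shares([1.0, 2.5, 0.5])
print(round(size, 3), shares.round(3))
```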
... Prior studies have focused on contentious issues and relevant groups, such as anti-LGBTQ and anti-Muslim sentiments, and asked people how often they encounter malicious, vituperative, and hateful comments about these issues and groups (Soral et al., 2018). The measure used in the current study is constructed based on previous research (Costello et al., 2016; Cowan et al., 2005), but we elaborated on the dimensions of contentious issues and groups. On a 5-point Likert scale (1 = never, 5 = very often), the survey participants were asked how often they encounter hate speech about (1) people of a specific race/ethnicity, (2) people of a specific gender identity, (3) immigrants or refugees, (4) people of specific religions, (5) people of specific regions, (6) disabled people, (7) people of a specific political party, and (8) journalists and reporters. ...
Article
This study explores the antecedents and consequences of unfriending in social media settings. Employing an online panel survey (N = 990), it investigates how exposure to hate speech is associated with political talk through social media unfriending. Findings suggest that social media users who are often exposed to hate speech towards specific groups and related issues are more likely to unfriend others (i.e., block and unfollow them) on social media. Those who unfriend others are less likely to talk about public and political agendas with people holding cross-cutting views but tend to engage more often in like-minded political talk. In addition, this study found indirect-effect associations, indicating that social media users who are exposed to hate speech are less likely to engage in cross-cutting talk but more likely to participate in like-minded talk because they unfriend other users on social media.
Article
Full-text available
Recent research has identified a number of powerful new forms of influence that the internet and related technologies have made possible. Randomized, controlled experiments have shown, for example, that when results generated by search engines are presented to undecided voters, if those search results favor one political candidate over another, the opinions and voting preferences of those voters can shift dramatically–by up to 80% in some demographic groups. The present study employed a YouTube simulator to identify and quantify another powerful form of influence that the internet has made possible, which we have labeled the Video Manipulation Effect (VME). In two randomized, controlled, counterbalanced, double-blind experiments with a total of 1,463 politically-diverse, eligible US voters, we show that when a sequence of videos displayed by the simulator is biased to favor one political candidate, and especially when the “up-next” video suggested by the simulator favors that candidate, both the opinions and voting preferences of undecided voters shift dramatically toward that candidate. Voting preferences shifted by between 51.5% and 65.6% overall, and by more than 75% in some demographic groups. We also tested a method for masking the bias in video sequences so that awareness of bias was greatly reduced. In 2018, a YouTube official revealed that 70% of the time people spend watching videos on the site, they are watching content that has been suggested by the company’s recommender algorithms. If the findings in the present study largely apply to YouTube, this popular video platform might have unprecedented power to impact thinking and behavior worldwide.
Article
Full-text available
The increasingly massive spread of online extremism has shaped the impression that the behavior of Indonesian netizens has become contradictory, a situation attributed to toxic disinhibition. Blocking efforts made by the government have been deemed less than effective. This paper therefore seeks to repair the impression created by online extremism on the basis of the Koran. The idea offered is the application of religious moderation (Q.S. Al-Baqarah [2]: 143), elaborated in two forms: being moderate when receiving information (Q.S. Al-Hujurat [49]: 6) and being moderate when communicating, implemented by avoiding the use of hate speech (Q.S. Al-Hujurat [49]: 11) and seeing all humans as equal (Q.S. Al-Hujurat [49]: 13). The whole idea aims to address online extremism as a religious problem in Indonesia's virtual world. Keywords: Religious Moderation, Online, Koran.
Article
Full-text available
This research analyzes the discursive characteristics of hate messages posted on TikTok Spain against people at risk of social exclusion. Using critical discourse analysis, we analyzed 679 hateful messages generated by 100 videos found about poverty. This method considered the social groups mentioned in those messages, actions attributed to them, the evaluative concepts associated with those actions, and the solutions proposed to eradicate this social problem. We used the qualitative analysis software Atlas.ti to code, categorize, and analyze co-occurrences of derogatory terms. The analysis shows that poverty is linked to migration, laziness, and groups at risk of exclusion. Although insults and degrading terms take on a metaphorical form or are less prevalent, the call to violent action is explicit, openly advocating the extermination of these groups. Underlying these messages is a clear neo-Nazi ideology gaining ground with the advance of the extreme political Right.
Article
Full-text available
Moral panics have regularly erupted in society, but they appear almost daily on social media. We propose that social media helps fuel moral panics by combining perceived societal threats with a powerful signal of social amplification—virality. Eight studies with multiple methods test a social amplification model of moral panics in which virality amplifies perceptions of threats posed by deviant behavior and ideas, prompting moral outrage expression. Three naturalistic studies of Twitter (N = 237,230) reveal that virality predicts moral outrage in response to tweets about controversial issues, even when controlling for specific tweet content. Five experiments (N = 1,499) reveal the causal impact of virality on outrage expression and suggest that feelings of danger mediate this effect. This work connects classic ideas about moral panics with ongoing research on social media and provides a perspective on the nature of moral outrage.
Preprint
Ranking algorithms find extensive usage in diverse areas such as web search, employment, college admission, and voting. The related rank aggregation problem deals with combining multiple rankings into a single aggregate ranking. However, algorithms for both of these problems might be biased against some individuals or groups due to implicit prejudice or marginalization in the historical data. We study ranking and rank aggregation problems from a fairness or diversity perspective, where the candidates (to be ranked) may belong to different groups and each group should have fair representation in the final ranking. We allow the designer to set the parameters that define fair representation. These parameters specify the allowed range of the number of candidates from a particular group in the top-k positions of the ranking. Given any ranking, we provide a fast and exact algorithm for finding the closest fair ranking under the Kendall tau metric with block-fairness. We also provide an exact algorithm for finding the closest fair ranking under the Ulam metric with strict-fairness, when the number of groups is O(1). Our algorithms are simple, fast, and might be extendable to other relevant metrics. We also give a novel meta-algorithm for the general rank aggregation problem under the fairness framework. Surprisingly, this meta-algorithm works for any generalized mean objective (including center and median problems) and any fairness criteria. As a byproduct, we obtain 3-approximation algorithms for both center and median problems, under both the Kendall tau and Ulam metrics. Furthermore, using sophisticated techniques we obtain a (3 − ε)-approximation algorithm, for a constant ε > 0, for the Ulam metric under strong fairness.
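As a rough illustration of the notions used in this abstract, the hypothetical Python sketch below computes the Kendall tau distance between two rankings and checks a top-k group-representation constraint of the kind the designer would set. It is not the authors' exact or approximation algorithm; all names and data are illustrative.

```python
from itertools import combinations

def kendall_tau_distance(ranking_a, ranking_b):
    """Number of candidate pairs ordered differently by the two rankings
    (the metric named in the abstract); rankings are lists of candidate ids."""
    pos_a = {c: i for i, c in enumerate(ranking_a)}
    pos_b = {c: i for i, c in enumerate(ranking_b)}
    return sum(
        1
        for x, y in combinations(ranking_a, 2)
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
    )

def is_fair_topk(ranking, groups, k, allowed_range):
    """Illustrative fairness check: for each group, the number of its candidates
    in the top-k must fall inside the designer-specified [lo, hi] range."""
    topk = ranking[:k]
    for group, (lo, hi) in allowed_range.items():
        count = sum(1 for c in topk if groups[c] == group)
        if not lo <= count <= hi:
            return False
    return True

# Toy example with two groups and a top-2 representation requirement.
ranking = ["c1", "c2", "c3", "c4"]
groups = {"c1": "g1", "c2": "g1", "c3": "g2", "c4": "g2"}
print(kendall_tau_distance(ranking, ["c2", "c1", "c3", "c4"]))  # 1 discordant pair
print(is_fair_topk(ranking, groups, k=2, allowed_range={"g1": (1, 1), "g2": (1, 1)}))  # False
```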
Article
Full-text available
This research aims to understand the opinions and attitudes of the Spanish population towards hate speech through a survey of 1,022 persons of both sexes and over 16 years of age. The results show a high awareness of hate speech: participants could identify these messages, assess their varying degrees of severity, and understand the harm they cause. This high awareness may stem from the fact that almost half of the sample has felt alluded to by these types of messages at some point. This group is more proactive in denouncing and counterattacking hate messages, although it is more common to remain on the sidelines. There is a hierarchy in the ratings in which racist and sexist comments are considered more severe than those directed at other minority groups (e.g., homeless people). Among the main reasons why people publish these expressions, participants point to the upbringing of the authors, in particular rudeness and disrespect, which are also perceived as widespread in today’s society. The polarized Spanish political context is seen as conducive to the appearance of these messages, as is the lack of a democratic culture that respects ideological diversity. Most interestingly, although there is awareness of the seriousness of hate messages in other spheres and towards various groups, hate speech has become normalized in politics, as previously stated.
Article
Full-text available
Social media platforms have led to the creation of a vast amount of user-produced, publicly published information, facilitating participation in the public sphere but also giving certain users the opportunity to publish hateful content. This content mainly involves offensive/discriminatory speech towards social groups or individuals (based on racial, religious, gender, or other characteristics) and could lead to subsequent hate actions/crimes through persistent escalation. Content management and moderation at such data volumes can no longer be performed manually. In the current research, a web framework is presented and evaluated for the collection, analysis, and aggregation of multilingual textual content from various online sources. The framework is designed to address the needs of human users, journalists, academics, and the public to collect and analyze content from social media and the web in Spanish, Italian, Greek, and English, without prior training or a background in computer science. The backend functionality provides content collection and monitoring; semantic analysis, including hate speech detection and sentiment analysis using machine learning models and rule-based algorithms; and the storing, querying, and retrieval of such content along with the relevant metadata in a database. This functionality is accessed through a graphical user interface in a web browser. An evaluation procedure was held through online questionnaires, including journalists and students, demonstrating the feasibility of the proposed framework's use by non-experts in the defined use-case scenarios.
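As a purely illustrative sketch, and not the described framework or its API, the following Python fragment shows how a rule-based component of hate speech detection might flag lexicon terms and bundle the result with metadata for storage. The lexicons and function names are hypothetical, and the framework's machine learning models are not reproduced here.

```python
import re
from collections import Counter

# Hypothetical lexicons standing in for the rule-based component mentioned in the
# abstract; the real framework also uses trained machine learning models, which
# this sketch does not reproduce.
HATE_TERMS = {"en": ["placeholder_slur_1", "placeholder_slur_2"],
              "es": ["insulto_ejemplo"]}

def rule_based_flags(text, language):
    """Return a Counter of lexicon terms (for the given language) found in the text."""
    terms = HATE_TERMS.get(language, [])
    hits = Counter()
    for term in terms:
        if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            hits[term] += 1
    return hits

def moderation_record(post_id, text, language):
    """Bundle the post with its flags and metadata, ready to store and query."""
    flags = rule_based_flags(text, language)
    return {"post_id": post_id, "language": language,
            "flagged_terms": dict(flags), "needs_review": bool(flags)}

print(moderation_record("p42", "an example post with placeholder_slur_1", "en"))
```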
Article
Full-text available
This article discusses cyber hate speech as influenced by individual prejudice, the characteristics of communication in online spaces, social media algorithms, and cyber propaganda. It uses a literature review approach, seeking theoretical references relevant to cases and problems of cyber behavior, particularly hate speech. The analysis of theoretical studies and recent research published in scientific journals indicates that group prejudice and hate speech behavior on social media result from exposure to provocative and repetitive information (an echo chamber effect) within the filter bubble each individual inhabits while surfing online. It is also caused by the limitations of communication patterns on social media, which are one-directional, single-perspective, and self-interested. The presence of social media has brought major changes to individual behavior and communication patterns, replacing letters, telephone calls, and face-to-face communication by creating communication networks between individuals through applications installed on internet-connected electronic devices such as computers and personal phones. Social media users, commonly called netizens, can communicate with family, friends, superiors, idols, public figures, and even government officials, and can quickly access information from sources around the world with the telecommunication device in their hand, wherever they are. Hootsuite (2017) reports that internet users reached 3.8 billion in 2017, with 2.9 billion actively using social media; the number of social media users continues to grow by roughly one million people per day, with an estimated 14 new social media accounts created every second worldwide. Of the 2.9 billion social media users, about 2.6 billion access their accounts via mobile phone, while internet use via laptop, desktop, and tablet continues to decline by almost 20 percent, accounting for only 43 percent of total internet users, as people grow accustomed to going online on smaller smartphone screens. Meanwhile, internet users in Indonesia currently...
Article
Full-text available
Drawing from routine activity theory (RAT), this article seeks to determine the crucial factors contributing to youth victimization through online hate. Although numerous studies have supported RAT in an online context, research focusing on users of particular forms of social media is lacking. Using a sample of 15- to 18-year-old Finnish Facebook users (n = 723), we examine whether online hate victimization is more likely when youth themselves have produced online hate material, visited online sites containing potentially harmful content, and deliberately sought out online hate material. In addition, we examine whether the risk of victimization is higher if respondents are worried about online victimization and had been personally victimized offline. The discussion highlights the accumulation of online and offline victimization, the ambiguity of the roles of victims and perpetrators, and the artificiality of the division between the online and offline environments among young people.
Article
Full-text available
Purpose – Trust is one of the key elements in social interaction; however, few studies have analyzed how the proliferation of new information and communication technologies influences trust. The authors examine how exposure to hate material on the internet correlates with Finnish youths’ particularized and generalized trust toward people who have varying significance in different contexts of life. Hence, the purpose of this paper is to provide new information about current online culture and its potentially negative characteristics. Design/methodology/approach – Using data collected in the spring of 2013 among Finnish Facebook users (n=723) ages 15-18, the authors measure the participants’ trust in their family, close friends, other acquaintances, work or school colleagues, neighbors, people in general, as well as people only met online. Findings – Witnessing negative images and writings reduces both particularized and generalized trust. The negative effect is greater for particularized trust than generalized trust. Therefore, exposure to hate material seems to have a more negative effect on relationships with acquaintances than in a more general context. Research limitations/implications – The study relies on a sample of registered social media users from one country. In future research, cross-national comparisons are encouraged. Originality/value – The findings show that trust plays a significant role in online settings. Witnessing hateful online material is common among young people and is likely to have an impact on perceived social trust. Hateful communication may then significantly impact current online culture, which has a growing importance for studying, working life, and many leisure activities.
Article
Full-text available
In contrast to committing hacking offences, becoming a victim of hacking has received scant research attention. This article addresses risk factors for this type of crime and explores its theoretical and empirical connectedness to the more commonly studied type of cybercrime victimization: online harassment. The results show that low self-control acts as a general risk factor in two ways. First, it leads to a higher risk of experiencing either one of these two distinct types of victimization within a 1-year period. Second, cumulative experiences of being both hacked and harassed are also more prominent among this group. However, specific online behaviors predicted specific online victimization types (e.g., using social media predicted only harassment and not hacking). The results thus shed more light on the extent to which criminological theories are applicable across different types of Internet-related crime.
Article
Full-text available
Using a sample of college students, we apply the general theory of crime and the lifestyle/routine activities framework to assess the effects of individual and situational factors on seven types of cybercrime victimization. The results indicate that neither individual nor situational characteristics consistently impacted the likelihood of being victimized in cyberspace. Self-control was significantly related to only two of the seven types of cybercrime victimizations and although five of the coefficients in the routine activity models were significant, all but one of these significant effects were in the opposite direction to that expected from the theory. At the very least, it would appear that other theoretical frameworks should be appealed to in order to explain victimization in cyberspace.
Chapter
Full-text available
Purpose: The prevalence of online hate material is a public concern, but few studies have analyzed the extent to which young people are exposed to such material. This study investigated the extent of exposure to and victimization by online hate material among young social media users. Design/methodology/approach: The study analyzed data collected from a sample of Finnish Facebook users (n = 723) between the ages of 15 and 18. Analytic strategies were based on descriptive statistics and logistic regression models. Findings: A majority (67%) of respondents had been exposed to hate material online, with 21% having also fallen victim to such material. The online hate material primarily focused on sexual orientation, physical appearance, and ethnicity and was most widespread on Facebook and YouTube. Exposure to hate material was associated with high online activity, poor attachment to family, and physical offline victimization. Victims of the hate material engaged in high levels of online activity. Their attachment to family was weaker, and they were more likely to be unhappy. Online victimization was also associated with the physical offline victimization. Social implications: While the online world has opened up countless opportunities to expand our experiences and social networks, it has also created new risks and threats. Psychosocial problems that young people confront offline overlap with their negative online experiences. When considering the risks of Internet usage, attention should be paid to the problems young people may encounter offline. Originality: This study expands our knowledge about exposure to online hate material among users of the most popular social networking sites. It is the first study to take an in-depth look at the hate materials young people encounter online in terms of the sites where the material was located, how users found the site, the target of the hate material, and how disturbing users considered the material to be.
Article
Full-text available
This note describes a new and unique, open source, relational database called the United States Extremist Crime Database (ECDB). We first explain how the ECDB was created and outline its distinguishing features in terms of inclusion criteria and assessment of ideological commitment. Second, the article discusses issues related to the evaluation of the ECDB, such as reliability and selectivity. Third, descriptive results are provided to illustrate the contributions that the ECDB can make to research on terrorism and criminology.
Article
Full-text available
Building on prior work surrounding negative workplace experiences, such as bullying and sexual harassment, we examine the extent to which organizational context is meaningful for the subjective experience of sex discrimination. Data draw on the 2002 National Study of the Changing Workforce, which provides a key indicator of individuals' sex discrimination experiences as well as arguably influential dimensions of organizational context—i.e., sex composition, workplace culture and relative power—suggested by prior research. Results indicate that the experience of sex discrimination is reduced for both women and men when they are part of the numerical majority of their work group. Although supportive workplace cultures mitigate the likelihood of sex discrimination, relative power in the workplace seems to matter little. We conclude by revisiting these results relative to perspectives surrounding hierarchy maintenance, group competition and internal cultural dynamics.
Article
Full-text available
Social scientists have begun to explore sexting—sharing nude or semi-nude images of oneself with others using digital technology—to understand its extent and nature. Building on this growing body of research, the current study utilizes the self-control and opportunity perspectives from criminology to explain sending, receiving, and mutually sending and receiving sext messages. The possible mediating effects of lifestyles and routine activities on the effects of low self-control also were tested using a sample of college students. Results revealed that low self-control is significantly and positively related to each type of sexting behavior, and that while certain lifestyles and routines mediated these effects, low self-control remained a significant predictor of participation in sexting.
Article
Full-text available
Consumer fraud seems to be widespread, yet little research is devoted to understanding why certain social groups are more vulnerable to this type of victimization than others. This article deals with Internet consumer fraud victimization and uses an explanatory model that combines insights from self-control theory and routine activity theory. The results from large-scale victimization survey data among the Dutch general population (N = 6,201) reveal that people with low self-control run a substantially higher victimization risk, as do active online shoppers and people participating in online forums. Though a share of the link between low self-control and victimization is indirect, because impulsive people are more involved in risk-enhancing online routine activities, a large direct effect remains. This suggests that, within similar situations, people with low self-control respond differently to deceptive online commercial offers.
Chapter
Full-text available
Problem-oriented policing establishes a new unit of work for policing and a new unit of analysis for police research. That unit is the "problem". Problem-oriented policing management and research have been hampered by an inability to define and organize problems, that is, to group similar problems and separate dissimilar ones. To address this deficiency, this paper proposes a method for classifying common problems encountered by local police agencies. Routine activity theory provides the basis for a two-dimensional classification scheme. Using this classification scheme, all common problems are typed by the behavior of the participants and the environment where they occur. Concerns that cannot be described on both behavioral and environmental dimensions are not "problems" in the technical sense. After explaining the development of this classification scheme, this paper describes how it can be applied, examines its limitations, proposes a research agenda using the scheme, and suggests ways the classification scheme might be improved.
Article
Full-text available
Objectives: The purpose of the current study was to extend recent work aimed at applying routine activity theory to crimes in which the victim and offender never come into physical proximity. To that end, relationships between individuals' online routines and identity theft victimization were examined. Method: Data from a subsample of 5,985 respondents from the 2008 to 2009 British Crime Survey were analyzed. Utilizing binary logistic regression, the relationships of individuals' online routine activities (e.g., banking, shopping, downloading), individual characteristics (e.g., gender, age, employment), and perceived risk of victimization with identity theft victimization were assessed. Results: The results suggest that individuals who use the Internet for banking and/or e-mailing/instant messaging are about 50 percent more likely to be victims of identity theft than others. Similarly, online shopping and downloading behaviors increased victimization risk by about 30 percent. Males, older persons, and those with higher incomes were also more likely to experience victimization, as were those who perceived themselves to be at greater risk of victimization. Conclusions: Although the routine activity approach was originally written to account for direct-contact offenses, it appears that the perspective also has utility in explaining crimes at a distance. Further research should continue to explore the online and offline routines that increase individuals' risks of identity theft victimization.
Article
Full-text available
Progress in cyber technology has created innovative ways for individuals to communicate with each other. Sophisticated cell phones, often with integrated cameras, have made it possible for users to instantly send photos, videos, and other materials back and forth to each other regardless of their physical separation. This same technology also makes sexting possible: sending nude or semi-nude images, often of oneself, to others electronically (e.g., by text message, email). Few studies examining sexting have been published, and most have focused on the legal issues associated with juvenile sexting. In general, theoretically grounded empirical analyses of the prevalence of sexting and its potential consequences (i.e., victimization) are lacking. Accordingly, we explored the possible link between sexting and online personal victimization (i.e., cybervictimization) among a sample of college students. As hypothesized, respondents who engaged in sexting were more likely not only to experience cybervictimization, but also to be victimized by different types of cybervictimization.
Article
Full-text available
Theoretical and empirical research investigating victimization and offending has largely been either ‘victim-focused’ or ‘offender-focused.’ This approach ignores the potential theoretical and empirical overlap that may exist among victims and offenders, otherwise referred to as ‘victim–offenders.’ This paper provides a comprehensive review of the research that has examined the relationship between victimization and offending. The review identified 37 studies, spanning over five decades (1958–2011), that have assessed the victim–offender overlap. The empirical evidence gleaned from these studies with regard to the victim–offender overlap is robust as 31 studies found considerable support for the overlap and six additional studies found mixed/limited support. The evidence is also remarkably consistent across a diversity of analytical and statistical techniques and across historical, contemporary, cross-cultural, and international assessments of the victim–offender overlap. In addition, this overlap is identifiable among dating/intimate partners and mental health populations. Conclusions and directions for future research are also discussed.
Article
Full-text available
Victimization on the Internet through what has been termed cyberbullying has attracted increased attention from scholars and practitioners. Defined as “willful and repeated harm inflicted through the medium of electronic text” (Patchin and Hinduja 2006:152), this negative experience not only undermines a youth's freedom to use and explore valuable on-line resources, but also can result in severe functional and physical ramifications. Research involving the specific phenomenon—as well as Internet harassment in general—is still in its infancy, and the current work seeks to serve as a foundational piece in understanding its substance and salience. On-line survey data from 1,378 adolescent Internet-users are analyzed for the purposes of identifying characteristics of typical cyberbullying victims and offenders. Although gender and race did not significantly differentiate respondent victimization or offending, computer proficiency and time spent on-line were positively related to both cyberbullying victimization and offending. Additionally, cyberbullying experiences were also linked to respondents who reported school problems (including traditional bullying), assaultive behavior, and substance use. Implications for addressing this novel form of youthful deviance are discussed.
Article
Full-text available
Researchers traditionally rely on routine activities and lifestyle theories to explain the differential risk of victimization; few studies have also explored nonsituational alternative explanations. We present a conceptual framework that links individual trait and situational antecedents of violent victimization. Individual risk factors include low self-control and weak social ties with the family and school. Situational risk factors include having delinquent peers and spending time in unstructured and unsupervised socializing activities with peers. We investigate the empirical claims proposed in this model on a sample of high school students, using LISREL to create a structural equation model. The results generally support our assertions that individual traits and situational variables each significantly and meaningfully contribute to victimization.
Article
Full-text available
In this paper I theorize that low self-control is a reason why offenders are at high risk of being victims of crime. I reformulate self-control theory into a theory of vulnerability and test several of its hypotheses, using data from a survey administered to a sample of college students. This research investigates how well self-control explains different forms of victimization, and the extent to which self-control mediates the effects of gender and family income on victimization. Low self-control significantly increases the odds of both personal and property victimization and substantially reduces the effects of gender and income. When criminal behavior is controlled, the self-control measure still has a significant direct effect on victimization. These results have many implications for victimization research.
Article
Full-text available
In this paper we use methods of social network analysis to examine the interorganizational structure of the white supremacist movement. Treating links between Internet websites as ties of affinity, communication, or potential coordination, we investigate the structural properties of connections among white supremacist groups. White supremacism appears to be a relatively decentralized movement with multiple centers of influence, but without sharp cleavages between factions. Interorganizational links are stronger among groups with a special interest in mutual affirmation of their intellectual legitimacy (Holocaust revisionists) or cultural identity (racist skinheads) and weaker among groups that compete for members (political parties) or customers (commercial enterprises). The network is relatively isolated from both mainstream conservatives and other extremist groups. Christian Identity theology appears ineffective as a unifying creed of the movement, while Nazi sympathies are pervasive. Recruitme...
Article
Full-text available
Although the use of social media by hate groups emerged contemporaneously with the Web, few have researched what influence these groups have. Will increasingly active online-hate groups lead to more acts of mass violence, or is concern over the widespread web presence of hate groups a moral panic? If we consider these groups in light of criminological theories, it becomes clear that they pose a danger. Although mass shootings will remain rare, social media sites may contribute to a relative increase in these tragic phenomena. In this paper, I consider how social media can encourage mass murder within a framework of one of the most prominent and supported criminological theories: differential association. I briefly discuss the presence of hate groups on the web and then review how the core principles of differential association are met and potentially amplified through social media. I then provide an example of the interconnectedness of hate groups and conclude with a call for future research.
Article
Full-text available
Article
Full-text available
Routine activities theory has had considerable influence, stimulating subsequent theoretical development, generating an empirical literature on crime patterns and informing the design of prevention strategies. Despite these numerous applications of the theory to date, a promising vein for theoretical development, research and prevention remains untapped. The concept of handlers, or those who control potential offenders, has received relatively little attention since introduced by Felson (1986). This article examines the reasons for the lack of attention to handlers and extends routine activities theory by proposing a model of handler effectiveness that addresses these issues. In addition, the model explicitly links routine activities theory with two of its complements – the rational choice perspective and situational crime prevention – to articulate the mechanism by which handling prevents crime. We conclude by discussing the broad range of prevention possibilities offered by the model of handler effectiveness.
Article
Full-text available
Using data from 541 high school students, we examine the associations between structured and unstructured routine activities and adolescent violent victimization in light of gender's influence. In particular, we focused on whether such activity-victimization relationships explained any effect of gender or, in contrast, were perhaps contingent upon gender. The results showed that gender's effect on both minor and serious victimization was substantially mediated by one measured lifestyle, in particular the delinquent lifestyle. In addition, there was only modest evidence of gender moderating the effects of certain lifestyles on victimization; the effects of most activities were consistent across male and female subjects. Implications of our findings for a contemporary age-graded and gendered routine activity theory are discussed.
Article
Full-text available
Recent discussions of ‘cybercrime’ focus upon the apparent novelty or otherwise of the phenomenon. Some authors claim that such crime is not qualitatively different from ‘terrestrial crime’, and can be analysed and explained using established theories of crime causation. One such approach, oft cited, is the ‘routine activity theory’ developed by Marcus Felson and others. This article explores the extent to which the theory’s concepts and aetiological schema can be transposed to crimes committed in a ‘virtual’ environment. Substantively, the examination concludes that, although some of the theory’s core concepts can indeed be applied to cybercrime, there remain important differences between ‘virtual’ and ‘terrestrial’ worlds that limit the theory’s usefulness. These differences, it is claimed, give qualified support to the suggestion that ‘cybercrime’ does indeed represent the emergence of a new and distinctive form of crime.
Article
Full-text available
In this paper we present a "routine activity approach" for analyzing crime rate trends and cycles. Rather than emphasizing the characteristics of offenders, with this approach we concentrate upon the circumstances in which they carry out predatory criminal acts. Most criminal acts require convergence in space and time of likely offenders, suitable targets and the absence of capable guardians against crime. Human ecological theory facilitates an investigation into the way in which social structure produces this convergence, hence allowing illegal activities to feed upon the legal activities of everyday life. In particular, we hypothesize that the dispersion of activities away from households and families increases the opportunity for crime and thus generates higher crime rates. A variety of data is presented in support of the hypothesis, which helps explain crime rate trends in the United States 1947-1974 as a byproduct of changes in such variables as labor force participation and single-adult households.
Chapter
This chapter examines the shift toward the use of social media to fuel violent extremism, what the key discursive markers are, and how these key discursive markers are used to fuel violent extremism. The chapter then addresses and critiques a number of radicalisation models including but not limited to phase based models. Discursive markers are covered under three broad narrative areas. Narratives of grievance are designed to stimulate strong emotive responses to perceived injustices. Based on these grievances, active agency is advocated in the form of jihad as a path that one should follow. Finally, a commitment to martyrdom is sought as the goal of these discursive markers.
Book
Cybercrime and Society provides a clear, systematic, and critical introduction to current debates about cybercrime. It locates the phenomenon in the wider contexts of social, political, cultural, and economic change. It is the first book to draw upon perspectives spanning criminology, sociology, law, politics, and cultural studies to examine the whole range of cybercrime issues.
Article
Before the Islamic State in Iraq and the Levant (ISIL) leveraged the Internet into a truly modern quasi-state propaganda machine through horrendous online videos, travel handbooks, and sophisticated Twitter messaging, more humble yet highly effective precursors targeted youthful Western Muslims for radicalism, during a time when home-grown plots peaked. These brash new entrants into the crowded, freewheeling world of extremist cyber-haters joined racists, religious extremists of other faiths, Islamophobes, single-issue proponents, as well as anti-government rhetoricians and conspiracists. The danger from these evolving new provocateurs, then and now, is not that they represent a viewpoint that is widely shared by American Muslims. Rather, the earlier successful forays by extremist Salafists firmly established the Internet as a tool to rapidly radicalize, train, and connect a growing, but small, number of disenfranchised or unstable young people to violence. The protections that the First Amendment provides to expression in the United States, contempt for Western policies and culture, contorted fundamentalism, and the initial successes of these early extremist Internet adopters, outlined here, paved the way for the ubiquitous and sophisticated online radicalization efforts we see today.
Article
Introduction: Why Are Racial Minorities Behind Today? * What is Racism? The Racialized Social System. * Racial Attitudes or Racial Ideology? An Alternative Paradigm for Examining Actors' Racial Views. * The "New Racism": The Post-Civil Rights Racial Structure in the U.S * Color-Blind Racism and Blacks. * Conclusion: New Racism, New Theory, and New Struggle.
Article
Article
This paper explores the extent to which the hate movement in the United States has taken on a new, modern face. The strength of the contemporary hate movement is grounded in its ability to repackage its message in ways that make it more palatable, and in its ability to exploit the points of intersection between itself and prevailing ideological canons. In short, the hate movement is attempting to move itself into the mainstream of United States culture and politics. I conclude by arguing that antiracist and antiviolence organizations must continue to confront hate groups through legal challenges, monitoring, and education.
Article
Traditional leader–member exchange (LMX) research typically measures quality of exchange from the subordinate's or member's perspective—LMX(m). In this research, we propose a new construct, LMX(l), which reflects a supervisor's or leader's perception of the value delivered by his or her subordinate in the exchange relationship. Together, LMX(m) and LMX(l) are expected to provide a more complete picture of dyadic exchange quality. Our results indicate relatively modest convergence between the 2 perspectives on LMX. Both LMX(m) and LMX(l) were found to relate to specific currencies of exchange provided by each dyad partner, and agreement between the 2 was negatively associated with the frequency of supervisor–employee conflict. Implications for LMX theory and future research are discussed.
Article
The purpose of this article is to inform the debate about strategies and options for countering online radicalization within the U.S. domestic context. Its aim is to provide a better understanding of how the Internet facilitates radicalization; an appreciation of the dilemmas and tradeoffs that are involved in countering online radicalization within the United States; and ideas and best practices for making the emerging approach and strategy richer and more effective. It argues that online radicalization can be dealt with in three ways. Approaches aimed at restricting freedom of speech and removing content from the Internet are not only the least desirable, they are also the least effective. Instead, government should play a more energetic role in reducing the demand for radicalization and violent extremist messages—for example, by encouraging civic challenges to extremist narratives and by promoting awareness and education of young people. In the short term, the most promising way for dealing with the presence of violent extremists and their propaganda on the Internet is to exploit their online communications to gain intelligence and gather evidence in the most comprehensive and systematic fashion possible.
Article
Online harassment can consist of threatening, worrisome, emotionally hurtful, or sexual messages delivered via an electronic medium that can lead victims to feel fear or distress much like real-world harassment and stalking. This activity is especially prevalent among middle and high school populations, who frequently use technology as a means to communicate with others. Little is known, however, about whether factors linked to computer crime victimization in college samples have the same influence in juvenile populations. This article discusses a study that utilized a routine-activities framework to explore the online harassment experiences of middle and high school students; 434 students at a Kentucky middle and high school completed a survey uploaded to the district server during school hours. Multiple binary logistic regression models indicate that online harassment victimization increases when juveniles maintain social network sites, associate with peers who harass online, and post sensitive information online. The implications of these findings for theorists, practitioners, and policy makers are also explored.
Article
From the early days of the Internet, scholars and writers have speculated that digital worlds are venues where users can leave their bodies behind and create new and different selves online. These speculations take on added significance in the context of adolescence, when individuals have to construct a coherent identity of the self. This chapter examines the role of technology in identity construction – a key adolescent developmental task. We begin by examining theoretical conceptions about identity in the context of adolescence and then explore the meaning of the terms self-presentation and virtual identity. To show how adolescents use technology in the service of identity, we first describe some of the online tools they can use for self-presentation and identity construction. Then we show how adolescents use these tools to explore identity on the Internet, particularly through blogs and social networking sites; in a separate section, we show how youth use the Internet to construct their ethnic identity. Last, we turn to whether adolescents engage in identity experiments and online pretending and whether they have virtual personas in a psychological sense. In the conclusion section, we identify questions about online self-presentation, virtual identity, and offline identity development for future research to address.
Article
The purpose of this study was to investigate gender differences in online victimization through variables representing the three constructs of routine activity theory. A survey was administered in 100-level courses at a mid-sized university in the northeast, which questioned respondents about their Internet behaviors and experiences during their high school senior and college freshman years. The findings indicated that participating in behaviors that increase exposure to motivated offenders and target suitability in turn increased the likelihood of victimization for both genders. Conversely, taking protective measures to improve capable guardianship was shown to be the least effective measure, as it did not decrease the likelihood of victimization. This research provides a significant contribution to the literature, as there are few explanatory studies that attempt to identify causal reasoning for this behavior.