Article

Assessing the Extent and Types of Hate Speech in Fringe Communities: A Case Study of Alt-Right Communities on 8chan, 4chan, and Reddit


Abstract

Recent right-wing extremist terrorists were active in online fringe communities connected to the alt-right movement. Although these are commonly considered distinctly hateful, racist, and misogynistic, the prevalence of hate speech in these communities has not yet been comprehensively investigated, particularly regarding more implicit and covert forms of hate. This study exploratively investigates the extent, nature, and clusters of different forms of hate speech in political fringe communities on Reddit, 4chan, and 8chan. To do so, a manual quantitative content analysis of user comments (N = 6,000) was combined with an automated topic modeling approach. The findings of the study not only show that hate is prevalent in all three communities (24% of comments contained explicit or implicit hate speech), but also provide insights into common types of hate speech expression, targets, and differences between the studied communities.
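As a concrete illustration of how prevalence figures like those above (e.g., the 24% of comments containing hate speech) can be tabulated from a manually coded sample, here is a minimal sketch; the coding columns, platform labels, and data are invented for illustration and are not the authors' materials:

```python
# Tabulate hate speech prevalence per platform from a (toy) coding sheet.
import pandas as pd
from statsmodels.stats.proportion import proportion_confint

df = pd.DataFrame({
    "platform":      ["reddit", "reddit", "4chan", "4chan", "8chan", "8chan"],
    "explicit_hate": [0, 1, 1, 0, 1, 0],
    "implicit_hate": [1, 0, 0, 0, 0, 1],
})
df["any_hate"] = (df["explicit_hate"] | df["implicit_hate"]).astype(int)

for platform, group in df.groupby("platform"):
    k, n = int(group["any_hate"].sum()), len(group)
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{platform}: {k / n:.0%} hate speech (95% CI {lo:.0%}-{hi:.0%})")
```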


... The offensive nature of the content posted on r/The_Donald and the aggressive behavior of its members frequently caused considerable controversy and turmoil. Through the years, Reddit users, journalists and scholars repeatedly denounced r/The_Donald for being toxic and violent [33,60], racist, sexist and Islamophobic [29,45,54,68], engaged in coordinated trolling and harassment [25], in strategic manipulation [59], and in the spread of conspiracy theories [43]. The archetypal r/The_Donald member was a white Christian male interested in conspiracy theories, firearms, and video games, and engaged in shocking and vitriolic humor [43]. ...
... This approach, however, has a number of issues. Firstly, we lack a single and well agreed-upon definition of hate speech [19,54]. Secondly, the detection of hate speech is often based on the presence of certain hateful words in a text. ...
Preprint
Full-text available
The subreddit r/The_Donald was repeatedly denounced as a toxic and misbehaving online community, reasons for which it faced a sequence of increasingly constraining moderation interventions by Reddit administrators. It was quarantined in June 2019, restricted in February 2020, and finally banned in June 2020, but despite precursory work on the matter, the effects of this sequence of interventions are still unclear. In this work, we follow a multidimensional causal inference approach to study data containing more than 15M posts made in a time frame of 2 years, to examine the effects of such interventions within and without the subreddit. We find that the interventions had strong positive effects toward reducing the activity of problematic users both inside and outside of r/The_Donald. However, the interventions also caused an increase in toxicity and led users to share more polarized and less factual news. Additional findings of our study are that the restriction had stronger effects than the quarantine and that core users of r/The_Donald suffered stronger effects than the other users. Overall, our results provide evidence that the interventions had mixed effects and paint a nuanced picture of the consequences of community-level moderation strategies. We conclude by reflecting on the challenges of policing online platforms and by discussing implications for the design and deployment of moderation interventions.
... In particular, this study explores how an alt-tech platform, Gab, catered to an online illicit network where fake Australian vaccine certificates were distributed. Alt-tech platforms have largely escaped the focus of criminologists, despite some initial evidence regarding these digital spaces as harbouring extremist communities (Askanius and Keller 2021;Nouri et al. 2020;Rieger et al. 2021). In the following section, this paper provides an overview of the alt-tech social media platform Gab and situates this particular digital environment within the broader digital ecosystem, noting the increasing number of online spaces not easily characterised by binary notions of the 'dark' or 'clear' web (Copland 2021). ...
... An 'alt-tech' platform refers to the collection of social media platforms (e.g. Gab, Truth Social, Parler) that present an alternative to mainstream social networking spaces such as Twitter and Facebook and that have purportedly become popular among far-right actors and other fringe online subcultures (Ganesh and Bright 2020;Rieger et al. 2021). Users on Gab, in particular, are typically characterised as espousing anti-government views and opposing efforts to regulate online spaces. ...
Article
Full-text available
This paper provides the first exploration of the online distribution of fake Australian COVID-19 vaccine certificates. Drawing on a collection of 2589 posts between five distributors and their community members on the alt-tech platform Gab, this study gathers key insights into the mechanics of illicit vaccine certificate distribution. The qualitative findings in this research demonstrate the various motivations and binding ideologies that underpinned this illicit distribution (e.g. anti-vaccine and anti-surveillance motivations); the unique cybercultural aspects of this online illicit network (e.g. 'crowdsourcing' the creation of fake vaccine passes); and how the online community was used to share information on the risks of engaging in this illicit service, setting the appropriate contexts of using fake vaccine passes, and the evasion of guardians in offline settings. Implications for future research in cybercrime, illicit networks, and organised crime in digital spaces are discussed.
... Although numerous definitions of hate speech have emerged in recent years, our study focused on the public expression of hate or degrading attitudes toward a collective, whose targets are devalued based on group-defining characteristics (e.g. race and/or religion) instead of individual traits (Rieger et al., 2021). In Germany, most online hate speech recognized by Internet users is directed toward politicians and minorities and focuses on their race, religion, and/or sexual orientation (Geschke et al., 2019). ...
... threats of violence). In alt-right fringe communities, indirect hate speech is also more prevalent than direct hate speech, even though those communities are known for being outspokenly hateful (Rieger et al., 2021). However, other evidence suggests that overt forms of hate speech containing threats of violence are perceived as being more threatening and harmful than hate speech without such threats (Leonhard et al., 2018). ...
Article
Although many social media users have reported encountering hate speech, differences in the perception between different users remain unclear. Using a qualitative multi-method approach, we investigated how personal characteristics, the presentation form, and content-related characteristics influence social media users' perceptions of hate speech, which we differentiated as first-level (i.e. recognizing hate speech) and second-level perceptions (i.e. attitude toward it). To that end, we first observed 23 German-speaking social media users as they scrolled through a fictitious social media feed featuring hate speech. Next, we conducted remote self-confrontation interviews to discuss the content and semi-structured interviews involving interactive tasks. Although it became apparent that perceptions are highly individual, some overarching tendencies emerged. The results suggest that the perception of and indignation toward hate speech decreases as social media use increases. Moreover, direct and prosecutable hate speech is perceived as being particularly negative, especially in visual presentation form.
... Analysis of both directed and generalized hate speech on Twitter has found that directed hate speech is angrier than generalized hate speech, which in turn is angrier than general tweets [12]. One measurement of hate speech in politically fringe digital communities estimates that one-quarter of all posts contain some form of hate speech, with 13.7% of posts containing explicit hate speech and 15.5% containing implicit hate speech [33]. There is a positive association between time spent on Gab (a far-right social media) and hate speech [13]. ...
Preprint
Full-text available
While online social media offers a way for ignored or stifled voices to be heard, it also allows users a platform to spread hateful speech. Such speech usually originates in fringe communities, yet it can spill over into mainstream channels. In this paper, we measure the impact of joining fringe hateful communities in terms of hate speech propagated to the rest of the social network. We leverage data from Reddit to assess the effect of joining one type of echo chamber: a digital community of like-minded users exhibiting hateful behavior. We measure members' usage of hate speech outside the studied community before and after they become active participants. Using Interrupted Time Series (ITS) analysis as a causal inference method, we gauge the spillover effect, in which hateful language from within a certain community can spread outside that community by using the level of out-of-community hate word usage as a proxy for learned hate. We investigate four different Reddit sub-communities (subreddits) covering three areas of hate speech: racism, misogyny and fat-shaming. In all three cases we find an increase in hate speech outside the originating community, implying that joining such community leads to a spread of hate speech throughout the platform. Moreover, users are found to pick up this new hateful speech for months after initially joining the community. We show that the harmful speech does not remain contained within the community. Our results provide new evidence of the harmful effects of echo chambers and the potential benefit of moderating them to reduce adoption of hateful speech.
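To make the Interrupted Time Series (ITS) design described above concrete, the following sketch fits a standard segmented regression with a level shift and slope change at the moment of joining; the simulated weekly hate-rate data and variable names are assumptions, not the study's data:

```python
# Minimal interrupted-time-series (segmented regression) sketch; simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
weeks = np.arange(-26, 26)  # weeks relative to joining the community
df = pd.DataFrame({"week": weeks, "post": (weeks >= 0).astype(int)})

# Simulated out-of-community hate-word rate: level shift plus slope change at t = 0.
df["hate_rate"] = (0.02 + 0.0001 * df["week"]
                   + 0.010 * df["post"]
                   + 0.0005 * df["post"] * df["week"]
                   + rng.normal(0, 0.003, len(df)))

# 'post' captures the immediate jump; 'post:week' the change in trend after joining.
its = smf.ols("hate_rate ~ week + post + post:week", data=df).fit()
print(its.summary().tables[1])
```

A persistent positive coefficient on the post-joining terms would correspond to the "learned hate" spillover the abstract describes.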
... Third, social media communication tends to be low-cost and more immediate, thus stimulating a more emotional and less considerate type of speech (Theocharis et al., 2020;Ward and McLoughlin, 2020). Fourth, engaging in incivil behavior online can also have community-building properties by creating a sentiment of "us (ordinary people)" vs. "them (powerful politicians);" in that regard, name-calling or verbal attacks may serve to strengthen bonds among social media users (Rieger et al., 2021;Rossini, 2021). ...
Article
Full-text available
With social media now being ubiquitously used by citizens and political actors, concerns over the incivility of interactions on these platforms have grown. While research has already started to investigate some of the factors that lead users to leave incivil comments on political social media posts, we are lacking a comprehensive understanding of the influence of platform, post, and person characteristics. Using automated text analysis methods on a large body of U.S. Congress Members' social media posts (n = 253,884) and the associated user comments (n = 49,508,863), we investigate how different social media platforms (Facebook, Twitter), characteristics of the original post (e.g., incivility, reach), and personal characteristics of the politicians (e.g., gender, ethnicity) affect the occurrence of incivil user comments. Our results show that ∼23% of all comments can be classified as incivil but that there are important temporal and contextual dynamics. Having incivil comments on one's social media page seems more likely on Twitter than on Facebook and more likely when politicians use incivil language themselves, while the influence of personal characteristics is less clear-cut. Our findings add to the literature on political incivility by providing important insights regarding the dynamics of uncivil discourse, thus helping platforms, political actors, and educators to address associated problems.
... Most prior studies focused on understanding the algorithms that promote hate ideology content [1,15] or the user interactions with hate ideology videos [7,8]. Studies have examined hate groups on other social media [13,16]. This poster offers a preliminary understanding of how hate groups, as defined by an authoritative organization, present videos to and interact with their viewers. ...
Conference Paper
Full-text available
As the largest video-sharing platform, YouTube has been known for hosting hate ideology content that could lead to between-group conflicts and extremism. Research has examined search algorithms and the creator-fan networks related to radicalization videos on YouTube. However, there is little grounded-theory analysis of videos of hate groups to understand how such groups present themselves to viewers and discuss social problems, solutions, and actions. This work presents a preliminary analysis of 96 videos using open-coding and affinity diagramming to identify common video styles created by the U.S. hate ideology groups. We also annotated hate videos' diagnostic, prognostic, and motivational framing to understand how the hate groups utilize video-sharing platforms to promote collective actions.
... In contrast, we compare differences among identity targets. Rieger et al. (2021) measured multiple types of variation, including by identity target, in hate speech from fringe platforms such as 4chan and 8chan. We test if such differences affect the generalization of hate speech classifiers. ...
Preprint
Full-text available
This paper investigates how hate speech varies in systematic ways according to the identities it targets. Across multiple hate speech datasets annotated for targeted identities, we find that classifiers trained on hate speech targeting specific identity groups struggle to generalize to other targeted identities. This provides empirical evidence for differences in hate speech by target identity; we then investigate which patterns structure this variation. We find that the targeted demographic category (e.g. gender/sexuality or race/ethnicity) appears to have a greater effect on the language of hate speech than does the relative social power of the targeted identity group. We also find that words associated with hate speech targeting specific identities often relate to stereotypes, histories of oppression, current social movements, and other social contexts specific to identities. These experiments suggest the importance of considering targeted identity, as well as the social contexts associated with these identities, in automated hate speech classification.
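A toy version of the cross-target generalization test described above might look as follows: train a simple classifier on posts targeting one identity group and evaluate it on posts targeting another. The four-example corpus and the bag-of-words pipeline are stand-ins, not the paper's data or classifiers:

```python
# Train on hate targeting group A, test on hate targeting group B (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

train_texts = ["hateful remark about group A", "friendly comment about sports",
               "another hateful slur about group A", "neutral chat about weather"]
train_labels = [1, 0, 1, 0]
test_texts = ["hateful remark about group B", "neutral chat about cooking"]
test_labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)
# Low scores here would mirror the generalization gap the paper reports.
print("F1 on the new target group:", f1_score(test_labels, clf.predict(test_texts)))
```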
... However, this rate of hate words is lower than what is found on other online forums, such as 4chan's Politically Incorrect board (Zannettou et al. 2018). Other forums, such as the (now banned) r/The_Donald Subreddit, also have reportedly high rates of hate speech (Gaudette et al. 2020;Rieger et al. 2021). It is then difficult to know whether Zannettou's findings simply reflect increasing far-right extremism on the internet over the past decade (Rozado and Kaufmann 2022). ...
Article
Full-text available
Freedom of speech has long been considered an essential value in democracies. However, its boundaries concerning hate speech continue to be contested across many social and political spheres, including governments, social media websites, and university campuses. Despite the recent growth of so-called free speech communities online and offline, little empirical research has examined how individuals embedded in these communities make moral sense of free speech and its limits. Examining these perspectives is important for understanding the growing involvement and polarization around this issue. Using a digital ethnographic approach, I address this gap by analyzing discussions in a rapidly growing online forum dedicated to free speech (r/FreeSpeech subreddit). I find that most users on the forum understand free speech in an absolutist sense (i.e., it should be free from legal, institutional, material, and even social censorship or consequences), but that users differ in their arguments and justifications concerning hate speech. Some downplay the harms of hate speech, while others acknowledge its harms but either focus on its epistemic subjectivity or on the moral threats of censorship and authoritarianism. Further, the forum appears to have become more polarized and right-wing-dominated over time, rife with ideological tensions between members and between moderators and members. Overall, this study highlights the variation in free speech discourse within online spaces and calls for further research on free speech that focuses on first-hand perspectives.
... There is no shortage of high-profile examples demonstrating the unintended consequences and negative externalities that can occur when the entire world connects online. Online bullying, cyber stalking, expressions of hate speech, and coordinated disinformation campaigns can have negative psychological and behavioral implications for users (Gahagan, Vaterlaus, and Frost 2016;Rieger et al. 2021). For example, community members in the groups we studied on Nextdoor faced racist and homophobic slurs, belittling scorn, and even overt threats. ...
Article
Full-text available
This study tests whether the architecture of a social media platform can encourage conversations among users to be more civil. It was conducted in collaboration with Nextdoor, a networking platform for neighbors within a defined geographic area. The study involved: (1) prompting users to move popular posts from the neighborhood-wide feed to new groups dedicated to the topic and (2) an experiment that randomized the announcement of community guidelines to members who join those newly formed groups. We examined the impact of each intervention on the level of civility, moral values reflected in user comments, and users' submitted reports of inappropriate content. In a large quantitative analysis of comments posted to Nextdoor, the results indicate that platform architecture can shape the civility of conversations. Comments within groups were more civil and less frequently reported to Nextdoor moderators than the comments on the neighborhood-wide posts. In addition, comments in groups where new members were shown guidelines were less likely to be reported to moderators and were expressed in a more morally virtuous tone than comments in groups where new members were not presented with guidelines. This research demonstrates the importance of considering the design, structure, and affordance of the online environment when online platforms seek to promote civility and other pro-social behaviors.
... Antisocial communities are groups of users consistently engaging in antisocial behavior [25]. They are often sympathetic to conspiracy theories [e.g., QAnon [43]] and extremist ideologies [e.g., the Alt-right [37]]. They have been shown to have disproportionate in uence over memes and news shared on the web [52,53]. ...
Preprint
Full-text available
Online platforms face pressure to keep their communities civil and respectful. Thus, the bannings of problematic online communities from mainstream platforms like Reddit and Facebook are often met with enthusiastic public reactions. However, this policy can lead users to migrate to alternative fringe platforms with lower moderation standards and where antisocial behaviors like trolling and harassment are widely accepted. As users of these communities often remain co-active across mainstream and fringe platforms, antisocial behaviors may spill over onto the mainstream platform. We study this possible spillover by analyzing around 70,000 users from three banned communities that migrated to fringe platforms: r/The_Donald, r/GenderCritical, and r/Incels. Using a difference-in-differences design, we contrast co-active users with matched counterparts to estimate the causal effect of fringe platform participation on users' antisocial behavior on Reddit. Our results show that participating in the fringe communities increases users' toxicity on Reddit (as measured by Perspective API) and involvement with subreddits similar to the banned community -- which often also breach platform norms. The effect intensifies with time and exposure to the fringe platform. In short, we find evidence for a spillover of antisocial behavior from fringe platforms onto Reddit via co-participation.
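A stylized sketch of the difference-in-differences design mentioned above, assuming user-level toxicity scores (e.g., from Perspective API) have already been computed; the simulated data, effect sizes, and column names are illustrative only:

```python
# Stylized difference-in-differences (DiD) sketch; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "migrated": rng.integers(0, 2, n),  # 1 = user joined a fringe platform
    "after": rng.integers(0, 2, n),     # 1 = observation after migration date
})
# Simulated Reddit toxicity with a built-in treatment effect of +0.05.
df["toxicity"] = (0.10 + 0.02 * df["migrated"] + 0.01 * df["after"]
                  + 0.05 * df["migrated"] * df["after"]
                  + rng.normal(0, 0.03, n))

# The interaction coefficient is the DiD estimate of the spillover effect.
did = smf.ols("toxicity ~ migrated + after + migrated:after", data=df).fit()
print(did.params["migrated:after"])  # should recover roughly 0.05
```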
... This granular examination of the affordances of features of the networked technology used for CMC provides more specific and yet more generalisable insights to inform both theories of online behaviour and policy about online safety. For example, it may offer some understanding as to why, in certain permissive online communities (e.g., 4chan, 8chan), where existing norms support hate speech and anonymity is high, we see a proliferation of hateful language (see Bilewicz & Soral, 2020;Rieger et al., 2021). ...
Article
The internet is often viewed as the source of a myriad of benefits and harms. However, there are problems with using this notion of "the internet" and other high-level concepts to explain the influence of communicating via everyday networked technologies on people and society. Here, we argue that research on social influence in computer-mediated communication (CMC) requires increased precision around how and why specific features of networked technologies interact with and impact psychological processes and outcomes. By reviewing research on the affordances of networked technologies, we demonstrate how the relationship between features of "the internet" and "online behaviour" can be determined by both the affordances of the environment and the psychology of the user and community. To achieve advances in this field, we argue that psychological science must provide nuanced and precise conceptualisations, operationalisations, and measurements of "internet use" and "online behaviour". We provide a template for how future research can become more systematic by examining how and why variables associated with the individual user, networked technologies, and the online community interact and intersect. If adopted, psychological science will be able to make more meaningful predictions about online and offline outcomes associated with communicating via networked technologies.
Technical Report
Full-text available
Extremists increasingly make use of dark social media. The term dark social media covers several types of alternative social media (social counter-media such as Gab, context-bound alternative social media such as VKontakte, fringe communities such as 4Chan), as well as several types of dark channels (private-first channels such as Telegram and separée channels such as closed Facebook groups). This report examines the opportunity structures for extremism and extremism prevention that result from the shift toward dark social media. To this end, a theoretical framework links influence factors on three levels: (1) regulation (e.g., through the German Network Enforcement Act, NetzDG) at the societal macro level; (2) different genres and types of (dark) social media as influence factors at the meso level of individual services; and (3) attitudes, norms, and technological affordances as motivators of human behavior in the sense of the theory of planned behavior (Ajzen and Fishbein, 1977) at the micro level. Based on this framework, the opportunity structures for extremism and extremism prevention are examined in two studies: (1) a detailed platform analysis of dark and established social media (N = 19 platforms) and (2) a scoping review of the state of research on (dark) social media in the context of extremism and extremism prevention (N = 142 texts). The results of the platform analysis provide nuanced insights into the opportunity structures created by different types and genres of (dark) social media. The scoping review gives an overview of the development of the research field and the research methods typically employed. Based on these data, research desiderata and implications for extremism prevention are discussed.
Article
Full-text available
Dark social media has been described as a home base for extremists and a breeding ground for dark participation. Beyond the description of single cases, it often remains unclear what exactly is meant by dark social media and which opportunity structures for extremism emerge on these applications. The current paper contributes to filling this gap. We present a theoretical framework conceptualizing dark social media as opportunity structures shaped by (a) regulation on the macro-level; (b) different genres and types of (dark) social media as influence factors on the meso level; and (c) individual attitudes, salient norms, and technological affordances on the micro-level. The results of a platform analysis and a scoping review identified meaningful differences between dark social media of different types. Social counter-media and fringe communities, in particular, positioned themselves as "safe havens" for dark participation, indicating a high tolerance for accordant content. This makes them a fertile ground for those spreading extremist worldviews, consuming such content, or engaging in dark participation. Context-bound alternative social media were comparable to mainstream social media but oriented towards different legal spaces and were more intertwined with governments in China and Russia. Private-first channels such as instant messengers were rooted in private communication, yet Telegram in particular also included far-reaching public communication formats and optimal opportunities for the convergence of mass, group, and interpersonal communication. Overall, we show that a closer examination of different types and genres of social media provides a more nuanced understanding of shifting opportunity structures for extremism in the digital realm.
Article
The growing number of online social media platforms has brought active participation from web users globally. This has also led to a subsequent increase in cyberbullying cases online. Such incidents diminish an individual's reputation or defame a community, also posing a threat to the privacy of users in cyberspace. Traditionally, manual checks and handling mechanisms have been used to deal with such textual content; however, an automatic computer-based approach would provide a far better solution to this problem. Existing approaches to automating this task mostly involve classical machine learning models, which tend to perform poorly on low-resource languages. Owing to the varied backgrounds and languages of web users, cyberspace features multilingual text, so an integrated approach that accommodates multilingual text could be an appropriate solution. This paper explores various methods to detect abusive content in 13 Indic code-mixed languages. Firstly, baseline classical machine learning models are compared with Transformer-based architectures. Secondly, the paper presents an experimental analysis of four state-of-the-art transformer-based models, namely XLM-RoBERTa, indic-BERT, MurilBert, and mBERT, of which XLM-RoBERTa with BiGRU performs best. Thirdly, the experimental setup of the best-performing model, XLM-RoBERTa, is fed with emoji embeddings, which further enhances the overall performance of the model. Finally, the model is trained on the combined dataset of all 13 Indic languages to compare its performance with those of the individual language models. The combined model surpassed the individual models in terms of F1 score and accuracy, supporting the idea that it fits the data better, possibly due to its code-mixed nature. The model reports an F1 score of 0.88 on test data, with a training loss of 0.28, a validation loss of 0.31, and an AUC score of 0.94 for both training and validation.
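As a rough sketch of the best-performing architecture named above, an XLM-RoBERTa encoder topped with a bidirectional GRU classification head might be wired up as follows; the hidden size, two-label setup, and pooling choice are assumptions rather than the paper's exact configuration:

```python
# Sketch of an XLM-RoBERTa encoder with a BiGRU classification head (PyTorch).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class XLMRBiGRU(nn.Module):
    def __init__(self, num_labels: int = 2, hidden: int = 256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("xlm-roberta-base")
        self.gru = nn.GRU(self.encoder.config.hidden_size, hidden,
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Contextual token representations from the transformer encoder.
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        _, h = self.gru(states)                   # h: (2, batch, hidden)
        pooled = torch.cat([h[0], h[1]], dim=-1)  # concatenate both directions
        return self.classifier(pooled)            # logits: abusive vs. not

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
batch = tokenizer(["toy code-mixed comment"], return_tensors="pt",
                  padding=True, truncation=True)
logits = XLMRBiGRU()(batch["input_ids"], batch["attention_mask"])
```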
Chapter
Full-text available
The previous Monitor report proposed various indicators for measuring radicalization dynamics in digital spaces. This year's contribution provides an empirical overview of further developments and open questions in the field of online radicalization addressed by the internet monitoring project. Five areas emerge: (1) First, a proposal is developed for a stronger differentiation and characterization of social media in order to systematize the different ways digital spaces can be used, distinguishing between technical features and the different usage practices that result from them (affordance concept). (2) Building on this, a study demonstrates the relevance of platform-comparative research, using the amount and type of hate speech before and after terrorist attacks as an example. (3) A further study of three different right-wing movements on Telegram measures the proposed radicalization indicators and supplements them with additional relevant indicators. (4) Here, the relevance of longitudinal research designs becomes apparent, as they are needed to adequately capture the processual character of radicalization. (5) Finally, the contribution addresses group-related dynamics such as the formation of identification, consolidation, and the creation of a group's threat narratives (meso level), and closes with a summary of the approaches that the internet monitoring contributes to existing research desiderata.
Article
The subreddit r/The_Donald was repeatedly denounced as a toxic and misbehaving online community, reasons for which it faced a sequence of moderation interventions by Reddit administrators. It was quarantined in June 2019, restricted in February 2020, and finally banned in June 2020, but despite precursory work on the matter, the effects of this sequence of interventions are still unclear. In this work, we follow a multidimensional causal inference approach, with data containing more than 15M posts made in a time frame of 2 years, to examine the effects of such interventions inside and outside of the subreddit. We find that the interventions greatly reduced the activity of problematic users. However, the interventions also caused an increase in toxicity and led users to share more polarized and less factual news. In addition, the restriction had stronger effects than the quarantine, and core users of r/The_Donald suffered stronger effects than the rest of the users. Overall, our results provide evidence that the interventions had mixed effects and paint a nuanced picture of the consequences of community-level moderation strategies. We conclude by reflecting on the challenges of policing online platforms and on the implications for the design and deployment of moderation interventions.
Chapter
Media literacy is an essential discipline for all students in the 21st century, where digital technologies reign. While the concept of media literacy has shifted and changed over nearly a century, the most current iteration, blurred with digital literacy, focuses on the capacity to access, analyze, evaluate, create, and think critically about the messages in media and the forces behind its construction. It also requires the skills to examine and appraise the individual user's own motivations and intentions, and understand the ways that online behaviors can have positive and negative impacts on other people and the world at large. These latter skills comprise the concept of digital citizenship, which, in recent years, has risen to the forefront of many scholars’ focus, prompted by the rise of social media, misinformation, and socio-political movements in online spaces. This chapter explores the intersection of media literacy and digital citizenship by providing an overview of definitions and theory behind various approaches to media literacy and digital citizenship education, the current climate in the United States and globally, the efficacy of media literacy and digital citizenship interventions covering a range of specific topics (advertising, violence, body image, sourcing), and areas where more support is needed. Directions for future research and initiatives are discussed.
Article
Full-text available
Social media host alarming degrees of hate messages directed at individuals and groups, threatening victims' psychological and physical well-being. Traditional approaches to online hate often focus on perpetrators' traits and attitudes toward their targets. Such approaches neglect the social and interpersonal dynamics that social media afford by which individuals glean social approval from like-minded friends. A theory of online hate based on social approval suggests that individuals and collaborators generate hate messages in order to garner reward for their antagonism toward mutually-hated targets by providing friendship and social support that enhances perpetrators' well-being as it simultaneously deepens their prejudices. Recent research on a variety of related processes support this view, including notions of moral grandstanding, political derision as fun, and peer support for interpersonal violence.
Article
Full-text available
Despite the increasingly important role played by image memes, we do not yet have a solid understanding of the elements that might make a meme go viral on social media. In this paper, we investigate what visual elements distinguish image memes that are highly viral on social media from those that do not get re-shared, across three dimensions: composition, subjects, and target audience. Drawing from research in art theory, psychology, marketing, and neuroscience, we develop a codebook to characterize image memes, and use it to annotate a set of 100 image memes collected from 4chan's Politically Incorrect Board (/pol/). On the one hand, we find that highly viral memes are more likely to use a close-up scale, contain characters, and include positive or negative emotions. On the other hand, image memes that do not present a clear subject the viewer can focus attention on, or that include long text, are not likely to be re-shared by users. We train machine learning models to distinguish between image memes that are likely to go viral and those that are unlikely to be re-shared, obtaining an AUC of 0.866 on our dataset. We also show that the indicators of virality identified by our model can help characterize the most viral memes posted on mainstream online social networks too, as our classifiers are able to predict 19 out of the 20 most popular image memes posted on Twitter and Reddit between 2016 and 2018. Overall, our analysis sheds light on what indicators characterize viral and non-viral visual content online, and sets the basis for developing better techniques to create or moderate content that is more likely to catch the viewer's attention.
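A hedged sketch of the classification step reported above, predicting virality from hand-coded codebook features; the feature set, labels, and random-forest choice are invented for illustration (the authors' models and 0.866 AUC are not reproduced here):

```python
# Toy virality classifier over hand-coded codebook features; data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.integers(0, 2, n),  # uses a close-up scale
    rng.integers(0, 2, n),  # contains a character
    rng.integers(0, 2, n),  # includes positive or negative emotion
    rng.integers(0, 2, n),  # includes long overlaid text
])
# Toy labels loosely mirroring the reported tendencies (close-up plus emotion,
# but no long text, tends to go viral).
y = (X[:, 0] & X[:, 2] & (1 - X[:, 3])).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```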
Article
Full-text available
This article is based on a case study of the online media practices of the militant neo-Nazi organization the Nordic Resistance Movement, currently the biggest and most active extreme-right actor in Scandinavia. I trace a recent turn to humor, irony, and ambiguity in their online communication and the increasing adaptation of stylistic strategies and visual aesthetics of the Alt-Right inspired by online communities such as 4chan, 8chan, Reddit, and Imgur. Drawing on a visual content analysis of memes ( N = 634) created and circulated by the organization, the analysis explores the place of humor, irony, and ambiguity across these cultural expressions of neo-Nazism and how ideas, symbols, and layers of meaning travel back and forth between neo-Nazi and Alt-right groups within Sweden today.
Conference Paper
Full-text available
Progress in genomics has enabled the emergence of a booming market for “direct-to-consumer” genetic testing. Nowadays, companies like 23andMe and AncestryDNA provide affordable health, genealogy, and ancestry reports, and have already tested tens of millions of customers. At the same time, alt- and far-right groups have also taken an interest in genetic testing, using them to attack minorities and prove their genetic “purity.” In this paper, we present a measurement study shedding light on how genetic testing is being discussed on Web communities in Reddit and 4chan. We collect 1.3M comments posted over 27 months on the two platforms, using a set of 280 keywords related to genetic testing. We then use NLP and computer vision tools to identify trends, themes, and topics of discussion. Our analysis shows that genetic testing attracts a lot of attention on Reddit and 4chan, with discussions often including highly toxic language expressed through hateful, racist, and misogynistic comments. In particular, on 4chan's politically incorrect board (/pol/), content from genetic testing conversations involves several alt-right personalities and openly antisemitic rhetoric, often conveyed through memes. Finally, we find that discussions build around user groups, from technology enthusiasts to communities promoting fringe political views.
Article
Full-text available
De-listing, de-platforming, and account bans are just some of the increasingly common steps taken by major Internet companies to moderate their online content environments. Yet these steps are not without their unintended effects. This paper proposes a surface-to-Dark Web content cycle. In this process, malicious content is initially posted on the surface Web. It is then moderated by platforms. Moderated content does not necessarily disappear when major Internet platforms crack down, but simply shifts to the Dark Web. From the Dark Web, malicious informational content can then percolate back to the surface Web through a series of three pathways. The implication of this cycle is that managing the online information environment requires careful attention to the whole system, not just content hosted on surface Web platforms per se. Both government and private sector actors can more effectively manage the surface-to-Dark Web content cycle through a series of discrete practices and policies implemented at each stage of the wider process.
Article
Full-text available
Previously theorised as vehicles for expressing progressive dissent, this article considers how political memes have become entangled in the recent reactionary turn of web subcultures. Drawing on Chantal Mouffe’s work on political affect, this article examines how online anonymous communities use memetic literacy, memetic abstraction, and memetic antagonism to constitute themselves as political collectives. Specifically, it focuses on how the subcultural and highly reactionary milieu of 4chan’s /pol/ board does so through an anti-Semitic meme called triple parentheses. In aggregating the contents of this peculiar meme from a large dataset of /pol/ comments, the article finds that /pol/ users, or anons, tend to use the meme to formulate a nebulous out-group resonant with populist demagoguery.
Article
Full-text available
This policy brief traces how Western right-wing extremists have exploited the power of the internet from early dial-up bulletin board systems to contemporary social media and messaging apps. It demonstrates how the extreme right has been quick to adopt a variety of emerging online tools, not only to connect with the like-minded, but to radicalise some audiences while intimidating others, and ultimately to recruit new members, some of whom have engaged in hate crimes and/or terrorism. Highlighted throughout is the fast pace of change of both the internet and its associated platforms and technologies, on the one hand, and the extreme right, on the other, as well as how these have interacted and evolved over time. Underlined too is the persistence, despite these changes, of right-wing extremists' online presence, which poses challenges for effectively responding to this activity moving forward.
Article
Full-text available
The 2016 U.S. presidential election coincided with the rise of the “alternative right,” or alt-right. Alt-right associates have wielded considerable influence on the current administration and on social discourse, but the movement’s loose organizational structure has led to disparate portrayals of its members’ psychology and made it difficult to decipher its aims and reach. To systematically explore the alt-right’s psychology, we recruited two U.S. samples: An exploratory sample through Amazon’s Mechanical Turk ( N = 827, alt-right n = 447) and a larger, nationally representative sample through the National Opinion Research Center’s Amerispeak panel ( N = 1,283, alt-right n = 71–160, depending on the definition). We estimate that 6% of the U.S. population and 10% of Trump voters identify as alt-right. Alt-right adherents reported a psychological profile more reflective of the desire for group-based dominance than economic anxiety. Although both the alt-right and non-alt-right Trump voters differed substantially from non-alt-right, non-Trump voters, the alt-right and Trump voters were quite similar, differing mainly in the alt-right’s especially high enthusiasm for Trump, suspicion of mainstream media, trust in alternative media, and desire for collective action on behalf of Whites. We argue for renewed consideration of overt forms of bias in contemporary intergroup research.
Article
Full-text available
Online hate speech has been a topic of public concern and research interest for some time. Initially, this focus centred on the proliferation of online groups and websites promoting and distributing discriminatory content. Since the introduction of more interactive tools and platforms in the mid-2000s that enabled new and faster ways of disseminating content in a relatively anonymous fashion, concerns about online hate speech becoming a pervasive behavior have increased. Current research and analysis acknowledge the complex nature of online hate, the mediating role of technology and the influence of other contextual factors (e.g. social or political events). However, despite the growing attention on the topic, New Zealand-based research looking at personal experiences and/or exposure to online hate is surprisingly absent. This study seeks to address this gap. It builds on existing international research on young people's experiences to explore those of the adult New Zealand population based on a nationally representative sample. The research instrument used for this study was an online survey conducted in June 2018. The maximum margin of error for the whole population is ±3.1% at the 95% confidence level. The sample is representative of the wider population on key demographics: age, gender, ethnicity, and location.
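For readers wanting to sanity-check the reported ±3.1% maximum margin of error, the standard worst-case formula implies a sample of roughly 1,000 respondents; the sample size below is an inference, not a figure stated in the abstract:

```python
# Worst-case margin of error: MOE = z * sqrt(p * (1 - p) / n), maximized at p = 0.5.
import math

z = 1.96    # 95% confidence level
n = 1000    # assumed sample size (not stated in the abstract)
moe = z * math.sqrt(0.25 / n)
print(f"±{moe:.1%}")  # ≈ ±3.1%, matching the reported maximum margin of error
```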
Article
Full-text available
How does the content of so-called ‘fake news’ differ across Western democracies? While previous research on online disinformation has focused on the individual level, the current study aims to shed light on cross-national differences. It compares online disinformation re-published by fact checkers from four Western democracies (the US, the UK, Germany, and Austria). The findings reveal significant differences between English-speaking and German-speaking countries. In the US and the UK, the largest shares of partisan disinformation are found, while in Germany and Austria sensationalist stories prevail. Moreover, in English-speaking countries, disinformation frequently attacks political actors, whereas in German-speaking countries, immigrants are most frequently targeted. Across all of the countries, topics of false stories strongly mirror national news agendas. Based on these results, the paper argues that online disinformation is not only a technology-driven phenomenon but also shaped by national information environments.
Article
Full-text available
In the aftermath of the 2017 Charlottesville tragedy, the prevailing narrative is a Manichean division between ‘white supremacists’ and ‘anti-racists’. We suggest a more complicated, nuanced reality. While the so-called ‘Alt-Right’ includes those pursuing an atavistic political end of racial and ethnic separation, it is also characterised by pluralism and a strategy of nonviolent dialogue and social change, features associated with classic liberalism. The ‘Left,’ consistent with its historic mission, opposes the Alt-Right’s racial/ethnic prejudice; but, a highly visible movement goes farther, embracing an authoritarianism that would forcibly exclude these voices from the public sphere. This authoritarian element has influenced institutions historically committed to free expression and dialogue, notably universities and the ACLU. We discuss these paradoxes by analysing the discourse and actions of each movement, drawing from our study of hundreds of posts and articles on Alt-Right websites and our online exchanges on a leading site (AltRight.com). We consider related news reports and scholarly research, concluding with the case for dialogue.
Chapter
Full-text available
We analyze whether implicitness affects human perception of hate speech. To do so, we use Tweets from an existing hate speech corpus and paraphrase them with rules to make the hate speech they contain more explicit. Comparing the judgment on the original and the paraphrased Tweets, our study indicates that implicitness is a factor in human and automatic hate speech detection. Hence, our study suggests that current automatic hate speech detection needs features that are more sensitive to implicitness.
Article
Full-text available
Buoyed by the populist campaign of Donald Trump, the “alt-right,” a loose political movement based around right-wing ideologies, emerged as an unexpected and highly contentious actor during the election cycle. The alt-right promoted controversy through provocative online actions that drew a considerable amount of media attention. This article focuses on the role of the “alt-right” in the 2016 election by examining its visual and rhetorical efforts to engage the political mainstream in relation to the campaigns of Donald Trump and Hillary Clinton. In particular, the alt-right’s unique style and internal jargon created notable confusion and also attracted interest by the media, while its promotional tactics included the use of social media and Internet memes, through which the movement came to epitomize online antagonism in the 2016 election.
Article
Full-text available
Research topics, as indicators of the profession’s development, are central to the evaluation of academic practices in communication research. To investigate the main topics in our field, we trace the development of research topics since the 1930s by evaluating more than 15,000 articles from 19 academic journals based on an automated content analysis. Topic modeling reveals a high diversity from the early years on. Only a few journals show the tendency to focus on one topic only, whereas most outlets cover a broad variety and thus represent the field as a whole. Although our discipline is strongly interconnected with the changing media landscape, results show that communication research is characterized by high consistency. Although they have not provoked a revolutionary change, Internet and social media have become the most monitored media, parallel to—not displacing—classic media such as newspapers and TV.
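A minimal sketch of the kind of automated topic modeling described above, using LDA on a toy four-document stand-in for the roughly 15,000 journal articles; the corpus and the choice of two topics are illustrative assumptions:

```python
# Toy LDA topic model; the four "articles" stand in for the real corpus.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "newspaper coverage of election campaigns",
    "television news and political communication",
    "social media use among adolescents",
    "internet platforms and online participation",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-3:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```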
Conference Paper
Full-text available
Although it has been a part of the dark underbelly of the Internet since its inception, recent events have brought the discussion board site 4chan to the forefront of the world's collective mind. In particular, /pol/, 4chan's "Politically Incorrect" board has become a central figure in the outlandish 2016 Presidential election. Even though 4chan has long been viewed as the "final boss of the Internet," it remains relatively unstudied in the academic literature. In this paper we analyze /pol/ along several axes using a dataset of over 8M posts. We first perform a general characterization that reveals how active posters are, as well as how some unique features of 4chan affect the flow of discussion. We then analyze the content posted to /pol/ with a focus on determining topics of interest and types of media shared, as well as the usage of hate speech and differences in poster demographics. We additionally provide quantitative evidence of /pol/'s collective attacks on other social media platforms. We perform a quantitative case study of /pol/'s attempt to poison anti-trolling machine learning technology by altering the language of hate on social media. Then, via analysis of comments from the tens of thousands of YouTube videos linked on /pol/, we provide a mechanism for detecting attacks from /pol/ threads on 3rd party social media services.
Article
Full-text available
There is a considerable amount of hate material online, but the degree to which individuals are exposed to these materials varies. Using samples of youth and young adults from four countries, we investigate who is exposed to hate materials. We find support for using routine activity theory to understand exposure at the individual level; however, there is significant cross-national variation in exposure after accounting for individual-level factors. We consider two plausible hypotheses that could account for this cross-national variation. The data best fit the hypothesis that anti–hate speech laws may provide a source of guardianship against exposure.
Article
Full-text available
This study considers the ways that overt hate speech and covert discriminatory practices circulate on Facebook despite its official policy that prohibits hate speech. We argue that hate speech and discriminatory practices are not only explained by users' motivations and actions, but are also formed by a network of ties between the platform's policy, its technological affordances, and the communicative acts of its users. Our argument is supported with longitudinal multimodal content and network analyses of data extracted from official Facebook pages of seven extreme-right political parties in Spain between 2009 and 2013. We found that the Spanish extreme-right political parties primarily implicate discrimination, which is then taken up by their followers who use overt hate speech in the comment space.
Article
Full-text available
Because news websites' comments have become an important space of spreading hate speech, this article tries to contribute to uncovering the characteristics of Internet hate speech by combining discourse analyses of comments on Slovenian news websites with online in-depth interviews with producers of hate speech comments, researching their values, beliefs, and motives for production. Producers of hate speech use different strategies, mostly rearticulating the meaning of news items. The producers either are organized or act on their own initiative. The main motive of soldiers and believers is the mission; they share characteristics of an authoritarian personality. The key motives of the players are thrill and fun. The watchdogs are motivated by drawing attention to social injustice. The last two groups share the characteristics of a libertarian personality.
Article
Full-text available
Short texts are popular on today's web, especially with the emergence of social media. Inferring topics from large scale short texts becomes a critical but challenging task for many content analysis tasks. Conventional topic models such as latent Dirichlet allocation (LDA) and probabilistic latent semantic analysis (PLSA) learn topics from document-level word co-occurrences by modeling each document as a mixture of topics, whose inference suffers from the sparsity of word co-occurrence patterns in short texts. In this paper, we propose a novel way for short text topic modeling, referred to as the biterm topic model (BTM). BTM learns topics by directly modeling the generation of word co-occurrence patterns (i.e., biterms) in the corpus, making the inference effective with the rich corpus-level information. To cope with large scale short text data, we further introduce two online algorithms for BTM for efficient topic learning. Experiments on real-world short text collections show that BTM can discover more prominent and coherent topics, and significantly outperform the state-of-the-art baselines. We also demonstrate the appealing performance of the two online BTM algorithms on both time efficiency and topic learning.
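The biterm at the heart of BTM is simply an unordered pair of words co-occurring in the same short text. A minimal sketch of biterm extraction follows (the full Gibbs-sampling inference of BTM is omitted); the three-document corpus is illustrative:

```python
# Extract biterms (unordered within-document word pairs) from short texts.
from collections import Counter
from itertools import combinations

corpus = ["cats purr softly", "dogs bark loudly", "cats and dogs play"]
stopwords = {"and"}

biterms = Counter()
for doc in corpus:
    words = [w for w in doc.lower().split() if w not in stopwords]
    biterms.update(tuple(sorted(pair)) for pair in combinations(words, 2))

print(biterms.most_common(3))  # corpus-level biterm counts feed BTM's inference
```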
Article
Full-text available
This chapter challenges traditional models of deindividuation. These are based on the assumption that such factors as immersion in a group and anonymity lead to a loss of selfhood and hence of control over behaviour. We argue that such models depend upon an individualistic conception of the self, viewed as a unitary construct referring to that which makes individuals unique. This is rejected in favour of the idea that self can be defined at various different levels including the categorical self as well as the personal self. Hence a social identity model of deindividuation (SIDE) is outlined. Evidence is presented to show that deindividuation manipulations gain effect, firstly, through the ways in which they affect the salience of social identity (and hence conformity to categorical norms) and, secondly, through their effects upon strategic considerations relating to the expression of social identities. We conclude that the classic deindividuation paradigm of anonymity within a social group, far from leading to uncontrolled behaviour, maximizes the opportunity of group members to give full voice to their collective identities.
Article
Full-text available
The present research studied the impact of three typical online communication factors on inducing the toxic online disinhibition effect: anonymity, invisibility, and lack of eye-contact. Using an experimental design with 142 participants, we examined the extent to which these factors lead to flaming behaviors, the typical products of online disinhibition. Random pairs of participants were presented with a dilemma for discussion and a common solution through online chat. The effects were measured using participants' self-reports, expert judges' ratings of chat transcripts, and textual analyses of participants' conversations. A 2×2×2 (anonymity/non-anonymity×visibility/invisibility×eye-contact/lack of eye-contact) MANOVA was employed to analyze the findings. The results suggested that of the three independent variables, lack of eye-contact was the chief contributor to the negative effects of online disinhibition. Consequently, it appears that previous studies might have defined the concept of anonymity too broadly by not addressing other online communication factors, especially lack of eye-contact, that impact disinhibition. The findings are explained in the context of an online sense of unidentifiability, which apparently requires a more refined view of the components that create a personal sense of anonymity.
Article
Full-text available
This study examined heterosexism that is not specifically targeted at LGB individuals, but may be experienced as antigay harassment, and may contribute to the stigma and stress they experience. LGB participants (N = 175, primarily Euro-American college students) read scenarios of heterosexuals saying or assuming things potentially offensive to gay men or lesbian women. For each scenario, they indicated the extent to which they would be offended and less open about their sexuality, and their perceptions of the behaviors as evidence of antigay prejudice. Not only did respondents find the scenarios to be offensive and indicative of prejudice, but perceived offensiveness was associated with a decreased likelihood of coming out. In comparison to gay men, lesbian women and bisexuals found the scenarios more offensive and more indicative of prejudice. Limitations of the current study and directions for future research are outlined.
Article
Memes (e.g., in the form of image macros) are not only part of everyday digital media use; they also appear in the online communication practices of the political right. Based on a content analysis of memes documented by a reporting office for online hate, this study addresses the following questions: To what extent do the memes display central aspects of right-wing extremist ideologies? Into which thematic clusters can the memes be grouped? To what extent can mainstreaming strategies be identified that aim to increase the attractiveness and connectivity of the content? The results show that the memes feature central elements of right-wing extremist ideologies, such as references to (historical) National Socialism, antisemitism, and racism. The memes can be grouped by their central visual motifs and the enemy images they address. Humor, in particular, emerges as a key strategy for mainstreaming right-wing extremist positions.
Chapter
This volume assembles a wide range of perspectives on populism and the media, bringing together various disciplinary and theoretical approaches, authors and examples from different continents and a wide range of topical issues. The chapters discuss the contexts of populist communication, communication by populist actors, different types of populist messages (populist communication in traditional and new media, populist criticism of the media, populist discourses related to different topics, etc.), the effects and consequences of populist communication, populist media policy and anti-populist discourses. The contributions synthesise existing research on this subject, propose new approaches to it, or present new findings on the relationship between populism and the media. With contributions by Caroline Avila, Eleonora Benecchi, Florin Büchel, Donatella Campus, María Esperanza Casullo, Nicoleta Corbu, Ann Crigler, Benjamin De Cleen, Sven Engesser, Nicole Ernst, Frank Esser, Nayla Fawzi, Jana Goyvaerts, André Haller, Kristoffer Holt, Christina Holtz-Bacha, Marion Just, Philip Kitzberger, Magdalena Klingler, Benjamin Krämer, Katharina Lobinger, Philipp Müller, Elena Negrea-Busuioc, Carsten Reinemann, Christian Schemer, Anne Schulz, Christian Schwarzenegger, Torgeir Uberg Nærland, Rebecca Venema, Anna Wagner, Martin Wettstein, Werner Wirth, Dominique Stefanie Wirz
Conference Paper
In this paper, we advance the state of the art in topic modeling by means of a new document representation based on pre-trained word embeddings for non-probabilistic matrix factorization. Specifically, our strategy, called CluWords, exploits the nearest words of a given pre-trained word embedding to generate meta-words capable of enhancing the document representation in terms of both syntactic and semantic information. The novel contributions of our solution include: (i) the introduction of a novel data representation for topic modeling based on syntactic and semantic relationships derived from distances calculated within a pre-trained word embedding space, and (ii) the proposal of a new TF-IDF-based strategy, particularly developed to weight the CluWords. In our extensive experimental evaluation, covering 12 datasets and 8 state-of-the-art baselines, we outperform the baselines in almost all cases (with a few ties), with gains of more than 50% against the best baselines (achieving up to 80% against some runner-ups). Finally, we show that our method is able to improve document representation for the task of automatic text classification.
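As a rough illustration of the CluWords idea, the sketch below expands a term-count matrix through thresholded cosine similarities in an embedding space, applies a TF-IDF-style re-weighting, and factorizes the result with NMF. The embedding matrix is random here purely for self-containment (real use would load pre-trained vectors such as fastText), and the threshold and variable names are illustrative assumptions rather than the paper's exact pipeline.

```python
# Simplified sketch of CluWords-style topic modeling; embeddings,
# threshold, and names are illustrative, not the paper's implementation.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.decomposition import NMF

docs = [
    "immigrants cross the border",
    "the border wall stops migrants",
    "media reports on the election",
    "the election coverage in the media",
]

vec = CountVectorizer()
X = vec.fit_transform(docs).toarray().astype(float)  # raw term counts
vocab = vec.get_feature_names_out()

# Stand-in embeddings; replace with pre-trained vectors per vocab word.
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(vocab), 50))

# Meta-word expansion: column j of S collects the words whose vectors
# are close to word j, so X @ S represents documents over meta-words.
S = cosine_similarity(emb)
S[S < 0.4] = 0.0
X_clu = X @ S

# TF-IDF-style re-weighting of the expanded representation.
tf = X_clu / np.maximum(X_clu.sum(axis=1, keepdims=True), 1e-12)
idf = np.log(len(docs) / np.maximum((X_clu > 0).sum(axis=0), 1))
X_w = tf * idf

# Non-probabilistic matrix factorization into topics.
nmf = NMF(n_components=2, random_state=0)
nmf.fit(X_w)
for k, comp in enumerate(nmf.components_):
    top = vocab[np.argsort(comp)[::-1][:4]]
    print(f"topic {k}:", ", ".join(top))
```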
Article
A common feature among populist parties and movements is their negative perspective on the media’s role in society. This paper analyzes whether citizens with a populist worldview also hold negative attitudes toward the media. From a theoretical point of view, the paper shows that the anti-elite, anti-outgroup, and people-centrism dimensions of populism all contradict normative expectations toward the media. For instance, the assumption of a homogeneous people and the exclusion of a societal outgroup is incompatible with pluralistic media coverage. The results of a representative survey in Germany predominantly confirmed a relation between a populist worldview and negative media attitudes. However, the three populism dimensions did not influence the evaluations in a consistent way. A systematic relation could only be found for anti-elite populism, which is negatively associated with all analyzed media evaluations, such as media trust or satisfaction with the media’s performance. This indicates that in a populist worldview, the media are perceived as part of a detached elite that neglects the citizens’ interests. However, the results also confirm the assumption of a natural alliance between populism and tabloid or commercial media: individuals with people-centrist and anti-outgroup attitudes have higher trust in these media outlets.
Article
Social media platforms provide an inexpensive communication medium that allows anyone to publish content and anyone interested in that content to obtain it. However, this same potential of social media provides space for discourses that are harmful to certain groups of people, including bullying, offensive content, and hate speech. Among these discourses, hate speech is increasingly recognized as a serious problem by authorities in many countries. In this paper, we provide a first-of-its-kind systematic large-scale measurement and analysis study of explicit expressions of hate speech in online social media. We aim to understand the abundance of hate speech in online social media, the most common hate expressions, the effect of anonymity on hate speech, the sensitivity of hate speech, and the most hated groups across regions. To achieve our objectives, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both of these systems. Our results identify hate speech forms and unveil a set of important patterns, providing not only a broader understanding of online hate speech but also directions for detection and prevention approaches.
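The abstract does not spell out the identification methodology, so the snippet below is only a hedged illustration of one common approach to finding explicit hate expressions: matching sentence patterns of the form "I <intensifier> hate <target>" in post text. The pattern, the example posts, and all names are hypothetical.

```python
# Illustrative pattern-based matcher for explicit hate expressions;
# the regex and examples are assumptions, not the authors' method.
import re

# Optional intensifier between "I" and "hate", then capture the target.
PATTERN = re.compile(
    r"\bi\s+(?:really|truly|just)?\s*hate\s+(\w+)", re.IGNORECASE
)

posts = [
    "I really hate mondays",
    "i hate people who litter",
    "great weather today",
]

for post in posts:
    match = PATTERN.search(post)
    if match:
        print(f"candidate hate expression -> target: {match.group(1)!r}")
```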
Article
There is a growing body of literature on whether or not online hate speech, or cyberhate, might be special compared to offline hate speech. This article aims to both critique and augment that literature by emphasising a distinctive feature of the Internet and of cyberhate that, unlike other features, such as ease of access, size of audience, and anonymity, is often overlooked: namely, instantaneousness. This article also asks whether there is anything special about online (as compared to offline) hate speech that might warrant governments and intergovernmental organisations contracting out, so to speak, the responsibility for tackling online hate speech to the very Internet companies which provide the websites and services that hate speakers utilise.
Article
This article proposes the concept ‘platformed racism’ as a new form of racism derived from the culture of social media platforms ‒ their design, technical affordances, business models and policies ‒ and the specific cultures of use associated with them. Platformed racism has dual meanings: first, it evokes platforms as amplifiers and manufacturers of racist discourse and second, it describes the modes of platform governance that reproduce (but that can also address) social inequalities. The national and medium specificity of platformed racism requires nuanced investigation. As a first step, I examined platformed racism through a particular national race-based controversy, the booing of the Australian Football League Indigenous star Adam Goodes, as it was mediated by Twitter, Facebook and YouTube. Second, by using an issue mapping approach to social media analysis, I followed the different actors, themes and objects involved in this controversy to account for the medium specificity of platforms. Platformed racism unfolded in the Adam Goodes controversy as the entanglement between users’ practices to disguise and amplify racist humour and abuse, and the contribution of platforms’ features and algorithms in the circulation of overt and covert hate speech. In addition, the distributed nature of platforms’ editorial practices ‒ which involve their technical infrastructure, policies, moderators and users’ curation of content ‒ obscured the scope and type of this abuse. The paper shows that the concept of platformed racism challenges the discourse of neutrality that characterises social media platforms’ self-representations, and opens new theoretical terrain to engage with their material politics.
Article
The limits of the freedom of expression are a perennial discussion in human rights discourse. This article focuses on identifying yardsticks to establish the boundaries of freedom of expression in cases where violence is a risk. It does so by using insights from the social sciences on the escalation of violent conflict. By emphasizing the interaction between violence and discourse, and its effect on antagonisms between groups, it offers an interdisciplinary perspective on an ongoing legal debate. It introduces the notion of “fear speech” and argues that it may be much more salient in this context than hate speech.
Article
Increased use of online communication in everyday life presents a growing need to understand how people are influenced by others in such settings. In this study, online comments established social norms that directly influenced readers' expressions of prejudice, both consciously and unconsciously. Participants read an online article and were then exposed to antiprejudiced or prejudiced comments allegedly posted by other users. Results revealed that exposure to prejudiced (relative to antiprejudiced) comments influenced respondents to post more prejudiced comments themselves. In addition, these effects generalized to participants' unconscious and conscious attitudes toward the target group once offline. These findings show that simple exposure to social information can shape our attitudes and behavior, suggesting potential avenues for social change in online environments.
Article
This study analyzes the messages in hate group websites using a grounded theory approach. Through this process of interpretive inquiry we propose four prominent themes—educate, participate, invoke, and indict—that characterize the messages examined in 21 hate groups. These message themes speak to: (a) the education of members and external publics; (b) participation within the group and in the public realm; (c) the invocation of divine calling and privilege; and (d) the indictment of external groups, including the government, media and entertainment industries, and other extremist sects. In advancing a substantive grounded theory of online hate group communication, we also explore the potential of these themes to ostensibly reinforce the hate group's identity, reduce external threats, and recruit new members.
Latent Dirichlet Allocation
  • D M Blei
  • A Y Ng
  • M I Jordan
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet Allocation. Journal of Machine Learning Research, 3, 993-1022.
Social media, social life. Teens reveal their experiences
Common Sense. (2018). Social media, social life. Teens reveal their experiences. https://www.commonsensemedia.org/sites/default/files/uploads/research/2018_cs_socialmediasociallife_fullreport-final-release_2_lowres.pdf
Hate lingo: A target-based linguistic analysis of hate speech in social media
  • M Elsherief
  • V Kulkarni
  • D Nguyen
  • W Y Wang
  • E Belding
ElSherief, M., Kulkarni, V., Nguyen, D., Wang, W. Y., & Belding, E. (2018). Hate lingo: A target-based linguistic analysis of hate speech in social media. In Twelfth International AAAI Conference on Web and Social Media (pp. 42-51). https://www.aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/view/17910
How white nationalism became normal online. The Intercept
  • L Fang
  • L A Woodhouse
Fang, L., & Woodhouse, L. A. (2017, August 25). How white nationalism became normal online. The Intercept. https://theintercept.com/2017/08/25/video-how-white-nationalism-became-normal-online/
Countering online hate speech
  • I Gagliardone
  • D Gal
  • T Alves
  • G Martínez
Gagliardone, I., Gal, D., Alves, T., & Martínez, G. (2015). Countering online hate speech. UNESCO.
The international encyclopedia of communication
  • T A Kinney
Kinney, T. A. (2008). Hate speech and ethnophaulisms. In W. Donsbach (Ed.), The international encyclopedia of communication. Wiley. https://doi.org/10.1002/9781405186407.wbiech004
Wandel der Sprach- und Debattenkultur in sozialen Online-Medien. Ein Literaturüberblick zu Ursachen und Wirkungen von inziviler Kommunikation [The Changing Culture of Language and Debate on Social Media: A Literature Review of the Causes and Effects of Incivil Communication]
  • A S Kümpel
  • D Rieger
Kümpel, A. S., & Rieger, D. (2019). Wandel der Sprach- und Debattenkultur in sozialen Online-Medien. Ein Literaturüberblick zu Ursachen und Wirkungen von inziviler Kommunikation [The Changing Culture of Language and Debate on Social Media: A Literature Review of the Causes and Effects of Incivil Communication]. Konrad-Adenauer-Stiftung.
Hate speech und Diskussionsbeteiligung im Internet
Landesanstalt für Medien NRW. (2018). Hate speech und Diskussionsbeteiligung im Internet [Hate speech and participation in online discussions].
Hassrede - Von der Sprache zur Politik [Hate Speech - From Language to Politics]
  • J Meibauer
Meibauer, J. (2013). Hassrede - Von der Sprache zur Politik [Hate Speech - From Language to Politics].
Hate-speech in the Romanian online media
  • R Meza
Meza, R. (2016). Hate-speech in the Romanian online media. Journal of Media Research, 9(26), 55-77.
Advances in pre-training distributed word representations [Conference session]
  • T Mikolov
  • E Grave
  • P Bojanowski
  • C Puhrsch
  • A Joulin
Mikolov, T., Grave, E., Bojanowski, P., Puhrsch, C., & Joulin, A. (2019). Advances in pre-training distributed word representations [Conference session]. LREC 2018 - 11th International Conference on Language Resources and Evaluation, Miyazaki, Japan.
Kill all normies: The online culture wars from Tumblr and 4chan to the alt-right and Trump
  • A Nagle
Nagle, A. (2017). Kill all normies: The online culture wars from Tumblr and 4chan to the alt-right and Trump. Zero Books.
Terminating service for 8chan. The Cloudflare Blog
  • M Prince
Prince, M. (2019, August 5). Terminating service for 8chan. The Cloudflare Blog. https://blog.cloudflare.com/terminating-service-for-8chan/
Short and sparse text topic modeling via self-aggregation
  • X Quan
  • C Kit
  • Y Ge
  • S J Pan
Quan, X., Kit, C., Ge, Y., & Pan, S. J. (2015). Short and sparse text topic modeling via self-aggregation [Conference session]. Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015), Buenos Aires, Argentina.