Upvoting extremism: Collective identity formation and the extreme right on Reddit
Tiana Gaudette
School of Criminology
Simon Fraser University
8888 University Drive
Burnaby, BC V5A 1S6, Canada
Ryan Scrivens
School of Criminal Justice
Michigan State University
Garth Davies
School of Criminology
Simon Fraser University
Richard Frank
School of Criminology
Simon Fraser University
Author Biographies
Tiana Gaudette is a Research Associate at the International CyberCrime Research Centre
(ICCRC) at Simon Fraser University (SFU). She recently earned an MA in criminology from
Ryan Scrivens is an Assistant Professor in the School of Criminal Justice (SCJ) at Michigan State
University (MSU). He is also an Associate Director at the International CyberCrime Research
Centre (ICCRC) at Simon Fraser University (SFU) and a Research Fellow at the VOX-Pol
Network of Excellence.
Garth Davies is an Associate Professor in the School of Criminology at Simon Fraser University
(SFU) and is the Associate Director of the Institute on Violence, Extremism, and Terrorism at
Richard Frank is an Associate Professor in the School of Criminology at Simon Fraser
University (SFU) and is the Director of the International CyberCrime Research Centre (ICCRC)
at SFU.
The authors would like to thank Maura Conway for her valuable feedback on earlier versions of
this article. The authors would also like to thank the two anonymous reviewers for their valuable
and constructive feedback.
Since the advent of the Internet, right-wing extremists and those who subscribe to extreme right
views have exploited online platforms to build a collective identity among the like-minded.
Research in this area has largely focused on extremists’ use of websites, forums, and mainstream
social media sites, but overlooked in this research has been an exploration of the popular social
news aggregation site Reddit. The current study explores the role of Reddit’s unique voting
algorithm in facilitating ‘othering’ discourse and, by extension, collective identity formation
among members of a notoriously hateful subreddit community, r/The_Donald. The results of the
thematic analysis indicate that those who post extreme-right content on r/The_Donald use
Reddit’s voting algorithm as a tool to mobilize like-minded members by promoting extreme
discourses against two prominent out-groups: Muslims and the Left. Overall, r/The_Donald’s
‘sense of community’ facilitates identity work among its members by creating an environment
wherein extreme right views are continuously validated.
Keywords: Collective identity; extreme right discourse; r/The_Donald; social movement theory; social news
Right-wing extremist movements are increasingly expanding beyond national borders, and the
Internet is a fundamental medium that has facilitated such movements’ increasingly
‘transnational’ nature (Caini and Kröll, 2014; Froio and Ganesh, 2019; Perry and Scrivens,
2016). The plethora of online platforms play a significant role in the ‘transnationalisation’ of the
extreme right by enabling such movements to attract new members and disseminate information
to a global audience (see Caini and Kröll, 2014). The Internet, however, functions as more than a
‘tool’ to be used by movements. It also functions as a space of important identity work; the
interactive nature of the Internet’s various platforms allows for the exchange of ideas and enables
the active construction of collective identities (Perry and Scrivens, 2016). Here the Internet has
facilitated a transnational ‘locale’ where right-wing extremists can exchange radical ideas and
negotiate a common sense of ‘we’ – or a collective identity (Bowman-Grieve, 2009; Futrell and
Simi, 2004; Perry and Scrivens, 2016).
Researchers who have explored right-wing extremists’ use of the Internet have typically
focused their attention on dedicated hate sites and forums (e.g. Adams and Roscigno, 2005;
Bliuc et al., 2019; Bowman-Grieve, 2009; Burris, Smith, and Strahm 2000; Futrell and Simi,
2004; Perry and Scrivens, 2016; Scrivens, Davies, and Frank, 2018; Simi and Futrell 2015). As a
result, the nature of right-wing extremists’ use of online platforms that are outside the
purview of mainstream digital platforms – including Facebook (e.g. Ekman, 2018; Nouri and
Lorenzo-Dus, 2018; Stier et al., 2017), Twitter (e.g. Berger, 2016; Berger and Strathearn, 2013;
Burnap and Williams, 2015; Graham, 2016), and YouTube (e.g. Ekman, 2014; O’Callaghan et
al., 2014) – has gone mostly unexplored by researchers, despite the fact that lesser-researched
platforms (e.g., 4chan, 8chan, and Reddit) have provided adherents with spaces to anonymously
discuss and develop ‘taboo’ or ‘anti-moral’ ideologies (see Nagle, 2017). Scholars who have
recognized this lacuna have called for further exploration of extremists’ use of under-researched
platforms. Conway (2017: 85), for example, noted that ‘different social media platforms have
different functionalities’ and posed the question: how do extremists exploit the different functions
of these platforms? In response, the current study begins to bridge the abovementioned gap by
exploring one platform that has received relatively little research attention, Reddit, and the
functional role of its voting algorithm (i.e., its upvoting and downvoting functions) in facilitating
collective identity formation among members of a notoriously hateful subreddit community, r/The_Donald.
The collective identity of the extreme right online
The notion of ‘collective identity’ lies at the heart of the social movement theory framework and
has provided valuable insight into the impact of right-wing extremists’ online discussions in
shaping their identity (e.g. Futrell and Simi, 2004; Perry and Scrivens, 2016; Scrivens et al.,
2018). The social movement literature posits that collective identity is actively produced (e.g.
Hunt and Benford, 1994; Snow 2001) and constructed through what Melucci (1995: 43)
describes as ‘interaction, negotiation and the opposition of different orientations.’ In addition,
according to the social constructionist paradigm, collective identity is ‘invented, created,
reconstituted, or cobbled together rather than being biologically preordained’ (Snow, 2001: 5).
As activists share ideas among other members of their in-group, these exchanges actively
produce a shared sense of ‘we’ and, by extension, a collective identity (Bowman-Grieve, 2009;
Snow, 2001; Melucci, 1995; Futrell and Simi, 2004). Additionally, a collective identity is further
developed through the construction of an ‘us’ versus ‘them’ binary (Polletta and Jasper, 2001).
By ‘othering,’ or identifying and targeting the groups’ perceived enemies, such interactions
further define the borders of the in-group (Perry and Scrivens, 2016).
Indeed, these same processes of collective identity formation occur within – and are
facilitated by – online communications. Similar to how ‘face-to-face’ interactions in the ‘real
world’ form collective identities, so too do the ‘many-to-many’ interactions in cyberspace
(Crisafi, 2005). As a result, the Internet’s many online platforms have facilitated extreme right-
wing identity work (Futrell and Simi, 2004; Perry and Scrivens, 2016). As Perry and Scrivens
(2016: 69) explained it:
[a] collective identity provides an alternative frame for understanding and expressing
grievances; it shapes the discursive “other” along with the borders that separate “us” from
“them”; it affirms and reaffirms identity formation and maintenance; and it provides the
basis for strategic action.
Each of these elements enhances our understanding of the Internet’s role in developing a
collective identity among extreme right-wing adherents. But among these virtual spaces, Reddit
in particular has received criticism in recent years for its proliferation of extreme right-wing content.
‘The front page of the Internet’: Reddit
While the social movement framework has been useful in advancing our understanding of right-
wing extremists’ use of the Internet, particularly on the more traditional websites, forums and
mainstream social media platforms to build a collective identity, it remains unclear how the
unique voting features and content policies of one under-researched virtual platform, Reddit,
shape the formation of a collective identity among members of right-wing extremist movements
who use the platform.
Created in 2005, Reddit has become the fifth-most popular website in the United States
(U.S.) (Reddit Press, n.d.) and the 20th-most popular website in the world (
Competitive Analysis, n.d.). As a ‘social news aggregation’ site, Reddit describes itself as ‘a
platform for communities to discuss, connect, and share in an open environment, home to some
of the most authentic content anywhere online’ (Reddit Content Policy, n.d.). Here users may
participate in, contribute to, and even develop like-minded communities whose members share a
common interest. A unique component of the platform is its voting algorithm, which provides its
users with the means to promote and spread content within a particular subreddit. To illustrate,
Reddit users retrieve interesting content from across the Internet and post that content to niche
topic-based communities called ‘subreddits’, and those who share similar views (or not) can
convey their interest – or disinterest – by giving an ‘upvote’ or ‘downvote’ to the posted content
in the subreddit (Wildman, 2020). ‘Upvoting’ a post, for example, increases its visibility and user
engagement within a subreddit (see Carman et al., 2018), and as a result, the most popular posts
are featured on Reddit’s front page for millions of users to see (Wildman, 2020). In short, using
the upvoting (i.e., increasing content visibility) and downvoting (i.e., decreasing content
visibility) functions on Reddit, users shape the discourse and build a collective identity within
their subreddit community. Previous research that explored bidirectional voting results on a
related site, Imgur, has similarly found that users’ upvoting and downvoting practices help
reinforce social identity (see Hale, 2017).
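The visibility dynamic described above can be illustrated with a sketch of Reddit’s historic ‘hot’ ranking, adapted from the version of the ranking code Reddit once released as open source; the platform’s current algorithm is closed and may differ, so this is an approximation for intuition only, not the formula in use during the study period.

```python
from datetime import datetime, timezone
from math import log10

# Epoch offset used in Reddit's historic open-source ranking code.
_REDDIT_EPOCH = 1134028003

def hot_score(upvotes: int, downvotes: int, posted: datetime) -> float:
    """Approximate Reddit's historic 'hot' ranking: net votes count
    logarithmically (the first 10 votes matter as much as the next 90),
    while newer submissions receive a steadily growing time bonus."""
    net = upvotes - downvotes
    order = log10(max(abs(net), 1))
    sign = 1 if net > 0 else -1 if net < 0 else 0
    age_seconds = posted.replace(tzinfo=timezone.utc).timestamp() - _REDDIT_EPOCH
    return round(sign * order + age_seconds / 45000, 7)
```

Because votes count logarithmically, a relatively small burst of early upvotes from a like-minded community is enough to lift a post’s visibility, which is precisely the amplification mechanic the present study examines.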
While Reddit is designed to foster a sense of community among like-minded users,
historically its laissez-faire content policy has failed to prevent users from posting and spreading
hate speech on the platform (Gibbs, 2018) and has even facilitated the development of what
Massanari (2017) describes as ‘toxic technocultures’. On the one hand, its content
policy defines ‘unwelcome content’ as material that incites violence or that threatens, harasses,
or bullies other users (Reddit Content Policy, n.d.). On the other hand, the policy admits that it
‘provides a lot of leeway in what content is acceptable’ (Reddit Content Policy, n.d.). This
hands-off approach is largely meant to facilitate an open dialogue among members of Reddit’s
numerous communities, wherein users may express their views on controversial topics without
fear of being banned from the site or harassed by those with opposing views (Reddit Content
Policy, n.d). As a result, Reddit tends to stop short of preventing users from posting controversial
ideas. In fact, until recently, Reddit condoned racism and hate speech on its platform, according
to a recent statement by the site’s CEO, Steve Huffman (Gibbs, 2018).
It comes as little surprise, then, that Reddit has witnessed an influx of extremist
communities (Feinberg, 2018). In response, Reddit launched a ‘quarantine’ feature in 2015 that
would hide a number of communities that regularly produce offensive content. Users must
explicitly opt in to view quarantined communities, a mechanism meant to prevent them from
inadvertently coming across extremely offensive content (Account and Community Restrictions,
n.d.-b). One previously quarantined and ultimately banned subreddit community, r/The_Donald,
received particular criticism for its extreme right-wing content.
‘America First!’: r/The_Donald
r/The_Donald was a subreddit community where members could discuss a variety of matters
relating to Trump’s U.S. presidency. Since its inception in 2015, Trump’s most fervent
supporters assembled on r/The_Donald, with numbers surpassing 700,000 subscribers
(Romano, 2017). Some members of r/The_Donald represented a variety of extreme right-wing
movements and ideologies, including the ‘manosphere’, the ‘alt-right’, and racists more broadly,
though all members were united in their unwavering support for the U.S. president (Martin,
Although r/The_Donald was quarantined in June 2019 after members were accused of
breaking Reddit’s rules by inciting violence (see Stewart, 2019), hateful sentiment continued to
be a common occurrence there (Gibbs, 2018), until it was eventually banned (Tiffany, 2020). To
illustrate, while the ‘rules’ of r/The_Donald made it clear that ‘racism and Anti-Semitism will
not be tolerated,’ they also specified that ‘Muslim and illegal immigrant are not races’
(r/The_Donald Wiki, n.d.). These rules suggest, then, that moderators would most likely not
remove discussions targeting Muslim or immigrant out-groups within the community.
Not surprisingly, r/The_Donald continued to experience an influx of extreme right-wing content,
including discussions surrounding ‘white genocide’, anti-Muslim sentiment, and anti-Black
discourse (Ward, 2018). Racist memes also appeared to be increasing in popularity on
r/The_Donald (see Ward, 2018; see also Zannettou et al., 2018). Despite these concerns,
however, research on platforms such as Reddit remains limited. So too does our understanding of
how these sites function to facilitate hateful content and collective identity formation.
Before proceeding, it is worth noting that, following J.M. Berger (2018), we take
the view that right-wing extremists—like all extremists—structure their beliefs on the basis that
the success and survival of the in-group is inseparable from the negative acts of an out-group
and, in turn, they are willing to assume both an offensive and defensive stance in the name of the
success and survival of the in-group. We thus conceptualize right-wing extremism as a racially,
ethnically, and/or sexually defined nationalism, which is typically framed in terms of white
power and/or white identity (i.e., the in-group) that is grounded in xenophobic and exclusionary
understandings of the perceived threats posed by some combination of non-whites, Jews,
Muslims, immigrants, refugees, members of the LGBTQI+ community, and feminists (i.e., the
out-group(s)) (Conway, Scrivens, and Macnair, 2019).
Data and methods
This research is guided by the following research question: How does Reddit’s unique voting
algorithm (i.e., its upvoting and downvoting functions) facilitate ‘othering’ discourse and, by
extension, collective identity formation on r/The_Donald following Trump’s presidential election
victory? To answer this question, data were collected from a website that made Reddit data
publicly available for research and analysis.
We extracted all of the data posted to the r/The_Donald
subreddit in 2017, as it marked the first year of Trump’s presidency and a time when his far-right
political views encouraged his supporters to preach and practice racist hate against out-groups,
both on- and offline (see Anti-Defamation League, 2018). Research has similarly
identified a link between Trump’s election victory and a subsequent spike in hatred, both online
(e.g., Zannettou et al., 2018) and offline (e.g., Edwards and Rushin, 2018), in the year following
his victory.
We then identified the 1,000 most highly-upvoted user-submitted comments in the data
(hereafter referred to as the ‘highly-upvoted sample’).
A second sample, comprising
1,000 user-submitted comments, was randomly drawn from the data (hereafter referred to as
the ‘random sample’) to serve as a comparison to the highly-upvoted sample. The purpose of this
was to explore what makes highly-upvoted content unique in comparison to a random sample of
non-highly-upvoted comments.
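The two-sample design described above can be sketched in a few lines. The study does not publish its extraction code, so the data structure assumed here (a dict per comment with a 'score' field) and the function name are illustrative assumptions, not the authors’ implementation:

```python
import random

def build_samples(comments, n=1000, seed=0):
    """Build the two analytic samples: the n most highly-upvoted comments,
    plus a random comparison sample drawn from the non-highly-upvoted
    remainder. Each comment is assumed to be a dict with a 'score' field."""
    ranked = sorted(comments, key=lambda c: c["score"], reverse=True)
    highly_upvoted = ranked[:n]
    remainder = ranked[n:]
    # Fixed seed makes the random comparison sample reproducible.
    random_sample = random.Random(seed).sample(remainder, min(n, len(remainder)))
    return highly_upvoted, random_sample
```

Sampling the comparison group from the remainder (rather than from all comments) matches the stated aim of contrasting highly-upvoted content with non-highly-upvoted content.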
The data were analyzed using thematic analysis (Braun and Clarke, 2006). In particular,
the data were categorized using descriptive coding in NVivo qualitative
analysis software. After we reviewed each user comment in the highly-upvoted sample and
random sample, codes were assigned to the sections of text that related to the research question.
This descriptive coding technique proceeded in a sequential, line-by-line manner (see Maher et
al., 2018), a suitable choice for the current study because it allowed us to organize a
vast amount of textual data into manageable word/topic-based clusters. Given that there are no
‘hard and fast’ rules for searching for themes using thematic analysis (see Maguire and Delahunt,
2017), significant themes emerged as we coded, categorized, and reflected on the data (Saldaña,
Findings
The findings are organized into two sections, one for each key theme that was identified in the
data. Each includes an explanation of the patterns that emerged in support of each theme, both
within and across the highly-upvoted sample and the random sample. Here we draw largely from
the words of the users in the samples (i.e., quotes). Descriptive information pertaining to the
samples is also provided.
Descriptive information of the samples
Several macro-level patterns emerged across the sample groups, as is illustrated in Table 1. First
was the relatively similar number of unique authors in the highly-upvoted sample and the
random sample (852 and 929, respectively). This suggests that the highly-upvoted sample, and
perhaps the highly-upvoted content on r/The_Donald in general, does not consist of a group of
authors whose comments make up a large proportion of the highly-upvoted comments. Second,
each sample consisted of a variety of thread-starting comments and response comments, but the
thread-starter comments comprised a large proportion of the highly-upvoted sample (89.3%)
while the random sample contained fewer thread-starter comments (46.5%). Lastly, a
comparison of the measures of central tendency across the two samples revealed that the
comment vote scores varied substantially. This is expected because the highly-upvoted sample
consisted of the most highly-upvoted comments found in the r/The_Donald in 2017.
Table 1. Descriptive information of the samples.

                                      Unique Authors (%)   Thread-starter Comments (%)   Comment Vote Score
Highly Upvoted Sample (N = 1,000)     852 (85.2%)          893 (89.3%)
Random Sample (N = 1,000)             929 (92.9%)          465 (46.5%)
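The macro-level descriptors reported in Table 1 (unique authors, thread-starter proportion, and central tendency of vote scores) can be reproduced with a short routine. The field names here follow the conventions of public Reddit data dumps (a 'parent_id' prefixed 't3_' marks a comment posted directly to a submission, i.e., a thread starter); this is an assumption about the data format, not the study’s actual code:

```python
from statistics import mean, median

def describe_sample(comments):
    """Compute Table 1's macro-level descriptors for one sample of comments."""
    n = len(comments)
    authors = {c["author"] for c in comments}
    # In Reddit data dumps, a parent_id prefixed 't3_' points at the submission
    # itself, meaning the comment starts a thread rather than replying to one.
    starters = sum(1 for c in comments if c["parent_id"].startswith("t3_"))
    scores = [c["score"] for c in comments]
    return {
        "unique_authors": len(authors),
        "unique_author_pct": round(100 * len(authors) / n, 1),
        "thread_starter_pct": round(100 * starters / n, 1),
        "mean_score": mean(scores),
        "median_score": median(scores),
    }
```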
The external threat
Across the 1,000 most highly-upvoted comments were various talking points, including
Reddit administrators, a recent solar eclipse, users on 4chan, and United States Senator Ted
Cruz, among numerous others. But a large proportion of the comments (11.6%) in the
highly-upvoted sample were hateful comments about Muslim communities,
with claims that Muslim immigration is a persistent danger to Western nations, particularly to the
U.S., and that Muslim-perpetrated violence is to be expected as a result of the increasing levels
of Muslim immigration. In particular, contributors generally agreed that increased Muslim
immigration will result in what one user – who posted one of the most highly-upvoted comments
in the sample – described as ‘chaos and oppression and rape and murder’
(UserID: 977, 4143).
Here the highly-upvoted comments not only suggested that ‘my Muslim neighbors’ want to ‘rape
my wife and kill my dogs’, as one user explained (UserID: 272, 1532), but also that ‘pedos
[pedophiles] are rampant inside this mind fuck ideology, as their [religious] ‘book’ allows this
shit to happen’ (UserID: 309, 2300). In addition, among the most highly-upvoted comments was
one that read ‘[…] HIM TO DEATH’ (UserID: 600, 1526), which was used by members to showcase the violent
nature of Islam as well as to make the claim that such a ‘barbaric’ religion is indisputably
violent. Interestingly, while highly-upvoted content reflected such comments as ‘we know
muslims are NOT peaceful and will wreck shit’ (UserID: 588, 1388), much of the content
oftentimes included the use of sarcasm to express frustration about a wider ‘politically correct’
culture that, as one user best summed up, ‘[does] not allow us to report crimes perpetrated by
Muslims because it’s racist’ (UserID: 555, 1548).
Frequently discussed within the highly-upvoted sample was a link between the increase
in Muslim migration to the U.S., and the West more generally, and the increased threat of
terrorism there. Although racism is against r/The_Donald subreddit’s rules, racist sentiments
against Muslims were generally overlooked in the context of discussions around the ‘Muslim
terror threat’. For instance, one member’s highly-upvoted comment described the terrorist threat:
‘[…] ALLAHU AKBAR’ (UserID: 689, 1400). Notably, although racist remarks such as these should
warrant removal by r/The_Donald moderators, they were overlooked and, resultantly, were
allowed to remain featured within the community to draw further attention to the external threat.
As a result, among the most highly-upvoted content were suggestions that Muslims largely
supported terrorism in the U.S., with some users also suggesting ‘NOT ALL MUSLIMS yeah,
but damn near most of them it seems’ (UserID: 353, 1565) support terrorist attacks against the
West. Accordingly, commonly expressed in the highly-upvoted comments were references to what
was described as a key method to counter this ‘terror threat’: preventing the so-called external
threat from entering the Western world through immigration. Here, contributors oftentimes
argued that it is imperative to, as one user who posted a highly-upvoted comment put it, ‘halt our
disasterous immigration policies’ (UserID: 612, 1942).
Taken together, extreme right-wing sentiments emerged in the highly-upvoted content when they
related to this external threat. Consistently uncovered within this sample were discussions about
Muslims as ‘invaders’ of the West and an increasing threat to the white population there. As but
one more notable example, in a highly-upvoted comment to a post titled ‘Muslim population in
Europe to triple by 2050’, one user expressed their grievances about the rising number of
Muslims in Europe and the subsequent threat to the white race there:
In 2050 your gonna have a lot of younger Europeans who completely resent their elders
for giving away there future for a false sense of moral superiority. I almost already feel it
with my parents. My family is white and both of my white liberal parents talk as if whites
are the worst people on the planet. I love them but holy shit they have no self
preservation instincts. It’s embarrassing. (UserID: 114, 1604)
Similarly uncovered in the highly-upvoted comments were members who described an emerging
sense of desperation that the survival of their in-group is being threatened by the Muslim out-
group. As one user put it: ‘Come on? We’re getting slaughtered here, the previous car that
ploughed though people on the bridge, Ariana Grande concert and now this? Our country has
been infiltrated, we’re fucked’ (UserID: 355, 2189). Here the threat of Muslim-perpetrated
violence and terrorism was described within the broader context of an Islamic ‘invasion’ of
Western countries. To illustrate, a large proportion of the content within the highly-upvoted
messages was a reaction to Muslim immigration into Western nations as a sign of Muslims
‘want[ing] to rule the earth’ (UserID: 309, 2300). One of the highly-upvoted comments
described this so-called Muslim ‘end game’ as a means to destroy other nations to further their
own goals:
Muslims have been trying to conquer us since the beginning of time. They used to
steal our children and then train them to kill their own people. Didn't work then,
and will not work now. There’s not a single romanian out there in this whole
world who doesn’t know what the Muslim end game is. (UserID: 761, 1465)
As a result of this external threat, members in the subreddit oftentimes discussed the ways that
they should defend themselves and members of their in-group since, as one user put it, ‘[w]hy
should we give up our lifestyle for their tyranny?’ (UserID: 460, 1653).
Evidence of the above-discussed external threat of the Muslim out-group, such as
gruesome images of recent terror attack victims found in the highly-upvoted content, further
invoked a particularly emotional response from members of the in-group on r/The_Donald; when
Muslim-perpetrated terrorist attacks occurred, the fear and anger that members of r/The_Donald
expressed toward the external threat was reflected in their most highly-upvoted comments. For
example, a highly-upvoted comment to an image of a terror attack victim noted: ‘[i]t’s
heartbreaking and bloodily gruesome but this corpse was once a person. How dare we deny their
pain? We need to defend and avenge’ (UserID: 989, 1468). In effect, each member that upvoted
this comment agreed with its basic underlying assumptions: that the in-group’s very survival
depends on some form of defensive action against the out-groups.
While the most highly-upvoted comments consisted largely of references to the Muslim
threat as imminent and the subsequent necessity of a ‘defensive stance’, these highly-
upvoted comments failed to specify how members should respond to this threat. For instance,
while contributors argued that the threat may, in part, be alleviated by striking down the
‘disastrous’ immigration policies that allow Muslims to ‘flood’ to the U.S. and the West, the
highly-upvoted comments provided no guidelines about how to defend themselves against
Muslims who already reside within the country. It would appear, then, that members’ highly-
upvoted discussions around defending themselves from Muslims were meant to be generalized
and, as such, were careful not to overtly incite violence against a particular person or group.
An assessment of the random sample of comments revealed that the highly-upvoted anti-
Muslim comments discussed above went unchallenged, at least visibly. This is largely because any
comments that ran counter to the most highly-upvoted narrative typically did not receive any
upvotes by other members and, resultantly, were less visible than their highly-upvoted
counterparts. For instance, one user who suggested that ‘the basic ideals of Islam’ include such
things as ‘faith, generosity, self-restraint, upright conduct’ (UserID: 1928, 1) did not have their
comment upvoted. Moreover, comments that opposed the more prominent anti-Muslim
narratives, such as ‘Terrorist come in all colors. So do Americans’ (UserID: 1680, 2), were far
from the most highly-upvoted comments in the community. Similarly, users arguing that
Muslims who commit violent acts have ‘psychological problems so it’s not a terror related
attack’ (UserID: 1860, 1) or that ‘there are a lot of good muslim people, but theyre bad muslims’
(UserID: 1733, 4) were largely unpopular within the community.
While there were a small number of particularly anti-Muslim comments in the random
sample, they were less frequent (1.6% of the random sample) and, for the most part, less
extreme than those in the highly-upvoted sample. To illustrate, one member in the random
sample wondered ‘why do countries like Sweden let Muslim war criminals into their country
with no repercussion?’ (UserID: 1788, 2). Another user here argued ‘if they make Europe more
Muslim then the West will stop bombing the Middle East’ (UserID: 1609, 30). There were,
however, a number of comments in the random sample that reflected the more extreme anti-
Muslim content that characterized the anti-Muslim rhetoric in the highly-upvoted sample, but the
comments in the random sample received much lower voting scores than the highly-upvoted
messages – most likely because references to Muslims as an external threat were relatively rare
among the comments in the random sample.
The internal threat
Commonly expressed in the most highly-upvoted comments was another threat: the Left.
Sentiment against this out-group comprised 13.4% of the most highly-upvoted comments in the
sample. An assessment of the most highly-upvoted messages suggested that the Leftist out-group
was perceived as violent, particularly against the extreme right-wing in-group. Here a widely
held belief was, as two users described, that ‘the Left are not as peaceful as they pretend to be’
(UserID: 392, 1982) and that they are ‘growing increasingly unhinged every passing day’ (UserID:
880, 2065). Members who posted highly-upvoted content oftentimes noted that the Left is, as
one user put it, sending ‘death threats to people who have different political views than [them]’
(UserID: 880, 2065). Users who posted highly-upvoted comments in the r/The_Donald community
even argued that the Left was willing to go so far as physically attack their right-wing political
opponents or, as one user put it, ‘smash a Trump supporter over the head with a bike lock
because of his intolerance to non-liberal viewpoints’ (UserID: 619, 1640). Yet in light of fellow
members being depicted as the victims of the Left’s violence, an analysis of the highly-upvoted
content revealed that members in r/The_Donald encouraged each other not to react to the Left
because, as one user explained it, ‘[v]iolent backlash from the right is exactly what they’re
looking for so they can start point fingers and calling us the violent ones’ (UserID: 1000, 2173).
Also uncovered in the highly-upvoted comments were discussions about the Left seeking
to weaken the West in general and the U.S. in particular through their crusade against the white
race. Specifically, users who posted highly-upvoted content oftentimes expressed frustration
about how the Left has constantly undermined the many societal contributions made by the white
race, which – as one user best summed up – include ‘economies that actually work, philosophy
as a whole, anti-biotics and healthcare in general, printed press, technology in general, etc.’
(UserID: 357, 1688). Further added here were discussions about how the Left would rather focus
on what white people have done wrong and ‘shit all over white people for what they did in the
past’ (UserID: 357, 1688), as one user put it, rather than acknowledge the contributions that the
white race has made to Western society. Together, the general sentiment was that the Left was
seeking to deter whites – and by extension members of the r/The_Donald community – from feeling
any sense of pride in their white ancestry, as evidenced by the following highly-upvoted comment:
You’re a white male kid. You’ve grown up being told you’re the reason for all the
world’s ills. You want to speak out. You have to go to your high school in the
dark, slouched, with your face hidden to simply post a sign that says It’s okay to
be white. Leave. School hands security footage over to the media. The media
broadcasts the picture nationwide and says they are investigating. Just step back
and take in how crazy this is. (UserID: 77, 2099)
As a result, members of r/The_Donald concluded not only that Leftists ‘hate white people’
(UserID: 479, 1582), but that, as one user explained it, ‘our enemies sure want whites to be a
minority or even dead’ (UserID: 197, 1356), as evidenced by their support for the immigration of
‘undesirables’ into the West in general and the U.S. in particular.
Apparent in the highly-upvoted content was user support for discussions about the Left’s
support for immigration – especially from Muslims entering the West from dangerous countries.
According to users who posted highly-upvoted comments, this was another key component of
the Left’s so-called anti-white crusade. Members noted that although Muslim immigrants
increase the risk of violence and terrorism in the U.S., such a threat does not prevent the Left
from encouraging them to migrate to the U.S. Perhaps this is because, as one user who posted a
highly-upvoted comment emphasized, the Left ‘do[es]n’t view ISIS or terrorism as a threat to
this country at all... [and the Left is] more scared of Trump’s policies than ISIS’ (UserID: 720,
1361). Still, users who posted the most highly-upvoted comments in r/The_Donald noted that
when ‘dangerous Muslim immigrants’ inevitably commit violent acts against the West, members
believe that these instances of Muslim-perpetrated violence are simply overlooked and excused
by the Left. As one user explained: ‘Liberals will defend these sick fucks because they come
from a differnt culture and don’t understand western society’ (UserID: 799, 4229). The general
sentiment expressed in this regard was that the Left favors Muslim immigrants over the well-
being of Americans – a sentiment that was oftentimes met with frustration, as expressed by one
user: ‘the left's love affair with islam is so cringey and infuriating’ (UserID: 687, 1653). Linked
to these discussions were conspiracy theories about the left-wing media playing a major role in
distorting public perceptions about the ‘Muslim threat’. One highly-upvoted comment, for
example, formed an anti-Muslim acronym, based on the media’s distortion of Muslim-
perpetrated terrorism: ‘Just another random incident. Impossible to discern a motive. Has
nothing to do with religion. Always believe the media. Don’t jump to any conclusions’ (UserID:
486, 1461; emphasis in original).
An examination of the random sample of comments revealed few discussions about the
so-called Leftist threat (i.e., 5.7% of the random sample). For instance, not only was the Left
mentioned less often in the random sample than in the highly-upvoted sample, but it was also
discussed in less extreme ways. Attacks against the Left were intended to be more humorous than
serious. To illustrate, one user criticized the Left, noting that, ‘in addition to being unable to
meme, leftists and ANTIFA can’t shower either’ (UserID: 1891, 17). Similarly, another user
admonished the ‘[t]ypical leftist hypocrites. That’s why we can’t play fair with them. They never
reciprocate the gesture of good faith in anything’ (UserID: 1719, 9). Another less popular
comment in the random sample suggested that the Left seeks to ‘protect free speech by banning
it. Leftist logic’ (UserID: 1751, 1). This, however, was the extent to which the Left was referred
to in the random sample of comments.
Discussion
The results of the current study suggest that a particularly effective form of collective identity
building was prevalent throughout the r/The_Donald community – one that was riddled with
hateful sentiment against members’ perceived enemies: Muslims and the Left. The thematic
analysis of the highly-upvoted content indicates that members most often agree with (i.e.,
upvote) extreme views toward two of their key adversaries to mobilize their social movement
around the so-called threat. In particular, r/The_Donald’s rules specifically condoned anti-
Muslim content, which in large part explains why such content was so prevalent among the most
highly-upvoted messages. Additionally, Donald Trump’s own anti-Muslim rhetoric, which has
emboldened right-wing extremists to commit hateful acts against Muslims (see Müller and
Schwarz, 2019), may explain why his supporters on r/The_Donald were so eager to vilify the
external threat. Consistent with the ‘Trump Effect’ (Müller and Schwarz, 2019), the results of the
current study suggest that Trump’s anti-Muslim rhetoric emboldened his fervent supporters on
r/The_Donald to spread anti-Muslim content via Reddit’s upvoting algorithm. Indeed, anti-
Muslim hate speech is increasingly becoming an ‘accepted’ form of racism across the globe and
extreme right movements have capitalized on this trend to help mobilize an international
audience of right-wing extremists (see Hafez, 2014; see also Froio and Ganesh, 2019). For
instance, the othering of Muslims, specifically, is commonly used to strengthen in-group ties
between far-right extremists against a common enemy (Hafez, 2014). By describing Muslims as
the violent perpetrators and themselves as those who must defend themselves from “them”, for
example, members in r/The_Donald framed themselves as victims rather than perpetrators – an
othering tactic commonly used by right-wing extremists, among other extremist movements (see
Meddaugh and Kay, 2009). This suggests that, by endorsing anti-Muslim sentiment,
r/The_Donald offered extremists a virtual community wherein they may bond with likeminded
peers around a common enemy. Worth highlighting, though, is that, while a large proportion of
the highly-upvoted content in the current study framed Muslims as a serious and imminent
threat, users tended not to provide specific strategies to respond to the so-called threat. Such
behavioral monitoring may have been done in an effort to safeguard the community from being
banned for breaking Reddit’s sitewide policy against inciting violence. Regardless of these
efforts, recent amendments to Reddit’s content policy that explicitly target hate speech led to the
ban of r/The_Donald (see Tiffany, 2020).
There are a number of reasons that might explain why the Left is the target of othering
discourse in the highly-upvoted content on r/The_Donald. For instance, such sentiment most
likely reflects the increasing political divide in the U.S. and other parts of the Western world,
where the ‘right’ often accuse the ‘left’ of threatening the wellbeing of the nation (see Dimock et
al., 2014). However, the results of the current study suggest that some users on r/The_Donald
took this narrative to the extreme. To illustrate, the anti-Left discourse that was
uncovered in the highly-upvoted content mirrors the ‘paradoxical’ identity politics expressed by the
alt-right movement, which accuses the Left of authoritarian censorship yet positions the far-
right as the harbingers of ‘peaceful social change’ (see Phillips and Yi, 2018). On r/The_Donald,
the Left (i.e., the out-group) was construed as the opposite of peaceful and even as a violent
and physical threat to members of the in-group. Framing the out-group as a physical threat to
members of the in-group served to further delineate the borders between ‘them’ and ‘us’ and
to solidify bonds between members of the in-group who, together, face a common enemy – a
community-building tactic that is commonly discussed in the social movement literature (see
Futrell and Simi, 2004; see also Perry and Scrivens, 2016). Notably, however, in the face of this
so-called physical threat, users in the highly-upvoted comments cautioned others against
retaliating against ‘Leftist-perpetrated violence’. Members likely avoided discussions of offline
violence because, if they were to encourage one another to react with violence against their
perceived enemies, they would risk having their community banned for violating Reddit’s
content policy against inciting violence. Similar tactics have been reported in empirical studies
which found that extreme right adherents in online discussion forums will deliberately present
their radical views in a subtler manner in fear that they will be banned from the virtual
community (see Daniels, 2009; Meddaugh and Kay, 2009).
A comparison of the content in the highly-upvoted sample with the random sample reveals
a complete absence of dissenting views among the most visible content in the
highly-upvoted comment sample. This suggests that Reddit’s voting algorithm facilitated an
‘echo chamber’ effect in r/The_Donald – one which promoted anti-Muslim and anti-Left
sentiment. In particular, rather than encouraging a variety of perspectives within discussions on
r/The_Donald, Reddit’s voting algorithm seemed to allow members to create an echo chamber
by shaping the discourse within their communities in ways that reflected a specific set of values,
norms, and attitudes about the in-group. For example, as evidenced by an analysis of
r/The_Donald’s most highly-upvoted comments, members of the subreddit community tended to
only support, or upvote, extreme views about Muslims and the Left. That is, the
community’s most highly-upvoted content contained no comments offering an
alternative perspective or even a dissenting view to the community’s dominant extremist
narrative. In some cases, members even used humor and sarcasm – which is commonly used by
commenters on Reddit (see Mueller, 2016) and related sites like Imgur (see Mikal et al., 2014)
regardless of political ideology – to express otherwise ‘taboo’ anti-Muslim and anti-Left views,
reflecting a longstanding tactic used by the extreme right to ‘say the unsayable’ (see Billig,
Overall, our study’s findings suggest that Reddit’s upvoting and downvoting features
played a central role in facilitating collective identity formation among those who post extreme
right-wing content on r/The_Donald. Reddit’s upvoting feature functioned to promote and
normalize otherwise unacceptable views against the out-groups to produce a one-sided narrative
that serves to reinforce members’ extremist views, thereby strengthening bonds between
members of the in-group. On the other hand, Reddit’s downvoting feature functioned to ensure
that members were not exposed to content that challenged their extreme right-wing beliefs,
which in turn created an echo chamber for hate and may also have functioned to correct the
behavior of dissenting members – a possibility gleaned from the results of previous research on
the effect of Imgur’s bidirectional voting features on social identification (Hale, 2017). Seen
through the lens of social movement theory, the extreme views against Muslims and the Left that
characterized the ‘othering’ discourses among the most highly-upvoted comments may have
been more likely to produce a stronger reaction from r/The_Donald’s extreme right-wing in-
group and, as a result, they may have been more likely to mobilize around these two threats.
Limitations and future research
The current study includes a number of limitations that may inform future research. First, our
sample procedure may have implications for the study’s findings, as the highly-upvoted sample
included a larger proportion of thread-starting comments than the random sample (i.e., 89.3%
and 46.5%, respectively). As a result, fewer instances of right-wing extremist content found in
the random sample could be the result of the greater proportion of response comments (i.e.,
comments that may be less visible than thread-starters, or that support extremist content rather
than feature it themselves) when compared to the predominantly thread-starting comments of the
highly-upvoted sample. Future work should explore this possibility by creating a matched control
sample in which the number of thread-starting and response comments are equal to that of the
experimental sample. Second, we did not identify an important indicator of user engagement: the
number of replies to comments in the highly-upvoted sample or the random sample. Exploring
the number of replies to comments featuring anti-Muslim and anti-Left content versus those
without such content may provide additional insight into user engagement. Third, we drew from
social movement theory to explore the role of Reddit’s unique voting algorithm in promoting
‘othering’ discourses that in turn facilitated collective identity formation on r/The_Donald, but
clearly there are other theoretical frameworks that could be applied in this context as well.
Bandura's (2002) theory of moral disengagement, for example, offers interesting insights into the
themes uncovered in the current study, including detrimental actions, injurious effects, and
victimization. These aspects of Bandura’s theory function to enhance the likelihood of moral
disengagement and are bound by notions of ‘us’ vs ‘them’, all of which should be explored in
future research.
In closing, the results of the study highlight at least four additional paths for further
inquiry. The first involves temporal change. Temporal analyses may be able to effectively
capture how the anti-Muslim and anti-Left sentiment uncovered in the current study evolves over
time. This, in turn, may offer new insight into the evolution of extremists’ collective identities in
virtual settings (see Scrivens et al. 2018) and provide more insight into whether and how the
nature of r/The_Donald’s echo chamber dynamics change over time. Second, the data for the
current study was gathered only from the period after Trump’s U.S. presidential election victory,
and not before. Future studies could, for example, incorporate an interrupted time series analysis
into the research design, assessing whether Trump’s election had a significant impact on the
volume and severity of hateful speech on r/The_Donald. Since offline events can significantly
impact the sentiment in far-right online communities (see Bliuc et al., 2019), determining the
impact of offline events, such as Donald Trump’s election, may shed light on if and how it
emboldened right-wing extremist sentiment within the community. Other external events that
may have influenced the most highly-upvoted comments on r/The_Donald during the 2017
timeline include Islamist terror attacks such as the New York City truck attack and the London
Bridge attack, or other galvanizing events such as the Unite the Right rally in Charlottesville.
Future research should therefore attempt to account for these central events and subsequent
interaction effects during the analysis stage. Research is also needed to assess how the banning
of r/The_Donald impacted its online community in general and collective identity formation in
particular, such as whether its users migrated to other political subreddits (e.g., r/Conservative),
backup or mimic sites (e.g.,, or alternative platforms with more lenient content
policies (e.g., Gab or Voat) in an effort to express their extremist views. Lastly, future research
could compare the discourse within and across a number of other extreme subreddits such as, for
example, the recently quarantined r/Imgoingtohellforthis, which is known to promote ‘racist
nationalism’ (see Topinka, 2018). This may provide insight into how collective identities form in
extreme online communities in general, and how Reddit’s unique features facilitate collective
identity formation, in particular.
References
Account and Community Restrictions (n.d.-a). Promoting Hate Based on Identity or
Vulnerability. Available at:
(accessed 30 June 2020).
Account and Community Restrictions (n.d.-b). Quarantined Subreddits. Available at:
restrictions/quarantined-subreddits. (accessed 6 January 2020).
Adams J and Roscigno V (2005). White supremacists, oppositional culture and the World Wide
Web. Social Forces 84(2): 759—778.
Anti-Defamation League (2018). New hate and old: The changing face of American white
supremacy. Anti-Defamation League.
Bandura A (2002). Selective moral disengagement in the exercise of moral agency. Journal of
Moral Education 31(2): 101—119.
Berger JM (2018). Extremism. Cambridge: The MIT Press.
Berger JM (2016). Nazis vs. ISIS on Twitter: A comparative study of white nationalist and
ISIS online social media networks. The George Washington University Program on
Extremism. Available at:
(accessed 8 January 2020).
Berger JM and Strathearn B (2013). Who matters online: Measuring influence, evaluating
content and countering violent extremism in online social networks. The International
Centre for the Study of Radicalisation and Political Violence. Available at: (accessed
8 January 2020).
Billig M (2001). Humour and hatred: The racist jokes of the Ku Klux Klan. Discourse & Society
12(3): 267—289.
Bliuc AM, Betts J, Vergani M, Iqbal M and Dunn K (2019). Collective identity changes in far-
right online communities: The role of offline intergroup conflict. New Media & Society
21(8): 1770–1786.
Bowman-Grieve L (2009). Exploring “Stormfront”: A virtual community of the radical right.
Studies in Conflict & Terrorism 32(11): 989—1007.
Braun V and Clarke V (2006). Using thematic analysis in psychology. Qualitative Research in
Psychology 3(2): 77—101.
Burnap P and Williams M L (2015). Cyber hate speech on Twitter: An application of machine
classification and statistical modeling for policy and decision. Policy and Internet 7(2):
Burris V, Smith E and Strahm A (2000). White supremacist networks on the Internet.
Sociological Focus 33(2): 215—235.
Caiani M and Kröll P (2014). The transnationalization of the extreme right and the use of the
Internet. International Journal of Comparative and Applied Criminal Justice 39(4).
Carman M, Koerber M, Li J, Choo KKR and Ashman H (2018). Manipulating visibility of
political and apolitical threads on reddit via score boosting. In 17th IEEE International
Conference on Trust, Security and Privacy in Computing and Communications/12th
IEEE International Conference on Big Data Science and Engineering, pp. 184—190.
Conway M (2017). Determining the role of the Internet in violent extremism and terrorism: Six
suggestions for progressing research. Studies in Conflict & Terrorism 40(1): 77—98.
Conway M, Scrivens R and Macnair L (2019). Right-wing extremists’ persistent online
presence: History and contemporary trends. The International Centre for Counter-
Terrorism – The Hague 10: 1—24.
Crisafi A (2005). The Seduction of Evil: An Examination of the Media Culture of the Current
White Supremacist Movement. In Ni Fhlainn, S., and Myers, W. A. (eds.), The Wicked
Heart: Studies in the Phenomena of Evil (pp. 29—44). Oxford: Inter-Disciplinary Press.
Daniels J (2009). Cyber racism: White supremacy online and the new attack on civil rights.
Lanham, MD: Rowman and Littlefield Publishers.
Diani M (1992). The concept of social movement. Sociological Review 40(1): 1—25.
Dimock M, Doherty C, and Kiley J (2014). Political polarization in the American public: How
increasing ideological uniformity and partisan antipathy affect politics, compromise and
everyday life. Pew Research Center, 12.
Edwards GS and Rushin S (2018). The effect of President Trump’s election on hate crimes.
SSRN: = 3102652 (accessed 8 January 2020).
Ekman M (2014). The dark side of online activism: Swedish right-wing extremist video activism
on YouTube. Journal of Media and Communication Research 30(56): 79—99.
Ekman M (2018). Anti-refugee mobilization in social media: The case of Soldiers of Odin.
Social Media + Society 4(1): 1—11.
Feinberg A (2018). Conde Nast sibling reddit says banning hate speech is just too hard. The
Huffington Post, 9 July. Available at:
hate-speech-hard_n_5b437fa9e4b07aea75429355. (accessed 8 January 2020).
Froio C and Ganesh B (2019). The transnationalisation of far-right discourse on Twitter: Issues
and actors that cross borders in Western European democracies. European Societies
21(4): 519—539.
Futrell R and Simi P (2004). Free spaces, collective identity, and the persistence of US white
power activism. Social Problems 51(1): 16—42.
Gibbs S (2018). Open racism and slurs are fine to post on reddit, says CEO. The Guardian, 12
April. Available at:
reddit-post-ceo-steve-huffman. (accessed 8 January 2020).
Graham R (2016). Inter-ideological mingling: White extremist ideology entering the
mainstream on Twitter. Sociological Spectrum 36(1): 24—36.
Hafez F (2014). Shifting borders: Islamophobia as common ground for building pan-European
right-wing unity. Patterns of Prejudice 48(5): 479—499.
Hale BJ (2017). “+1 for Imgur”: A content analysis of SIDE theory and common voice effects on
a hierarchical bidirectionally-voted commenting system. Computers in Human Behavior
77(1): 220—229.
Hauser C (2017). Reddit bans ‘incel’ group for inciting violence against women. The New York
Times, 9 November. Available at: (accessed 8
January 2020).
Hunt SA and Benford RD (1994). Identity talk in the peace and justice movement. Journal of
Contemporary Ethnography 22(4): 488—517.
Maguire M and Delahunt B (2017). Doing a thematic analysis: A practical, step-by-step guide
for learning and teaching scholars. AISHE-J: The All Ireland Journal of Teaching and
Learning in Higher Education, 9(3).
Maher C, Hadfield M and Hutchings M (2018). Ensuring rigor in qualitative data analysis: A
design research approach to coding combining NVIVO with traditional material methods.
International Journal of Qualitative Methods 17(1): 1—13.
Martin T (2017). Dissecting Trump’s most rabid online following. Five Thirty Eight, 23 March.
Available at:
following/. (accessed 8 January 2020).
Massanari A (2017). #Gamergate and the fappening: How Reddit’s algorithm, governance, and
culture support toxic technocultures. New Media & Society 19(3): 329—346.
Meddaugh PM and Kay J (2009). Hate speech or “reasonable racism?” The other in Stormfront.
Journal of Mass Media Ethics 24(4): 251—268.
Melucci A (1995). The process of collective identity. In Johnston H and Klandermans B (eds)
Social Movements and Culture. Minneapolis: University of Minnesota Press, pp. 41—63.
Mikal JP, Rice RE, Kent RG, and Uchino BN (2014). Common voice: Analysis of behavior
modification and content convergence in a popular online community. Computers in
Human Behavior 35: 506—515.
Mueller C (2016). Positive feedback loops: Sarcasm and the pseudo-argument in Reddit
communities. Working Papers in TESOL & Applied Linguistics 16(2): 84—97.
Müller K and Schwarz C (2019). From hashtags to hate crimes. Available at: (accessed 8 January 2020).
Nagle A (2017). Kill all normies: The online culture wars from Tumblr and 4chan to the alt-
right and Trump. Alresford: John Hunt Publishing.
Nouri L and Lorenzo-Dus N (2019) Investigating Reclaim Australia and Britain First’s use of
social media: Developing a new model of imagined political communities online. Journal
for Deradicalization 18: 1—37.
O’Callaghan D, Greene D, Conway M, Carthy J and Cunningham P (2014). Down the
(white) rabbit hole: The extreme right and online recommender systems. Social Science
Computer Review 33(4): 1—20.
Perry B and Scrivens R (2016). White pride worldwide: Constructing global identities online. In:
Schweppe J and Walters M (eds) The Globalization of Hate: Internationalizing Hate
Crime. Oxford University Press., pp. 65—78.
Phillips J and Yi J (2018). Charlottesville paradox: The ‘liberalizing’ alt right, ‘authoritarian’
left, and politics of dialogue. Society 55(3): 221—228.
Polletta F and Jasper JM (2001). Collective identity and social movements. Annual Review of
Sociology 27(1): 283—305.
r/The_Donald Wiki (n.d.). Can you explain your rules? Available at: (accessed 6 January 2020). Competitive Analysis (n.d.). Available at: (accessed 30 June 2020).
Reddit Content Policy (n.d.). Available at:
(accessed 6 January 2020).
Reddit Press (n.d.) Reddit by the numbers. Available at:
(accessed 6 January 2020).
Romano A (2017). Reddit just banned one of its most toxic forums. But it won’t touch
The_Donald. Vox, 13 November. Available at:
controversy. (accessed 8 January 2020).
Saldaña J (2009). The coding manual for qualitative researchers. Thousand Oaks, CA: Sage.
Scrivens R, Davies G and Frank R (2018). Measuring the evolution of radical right-wing posting
behaviors online. Deviant Behavior 41(2): 216—232.
Simi P and Futrell R (2015). American Swastika: Inside the White Power Movement’s Hidden
Spaces of Hate (2nd Ed). Lanham, MD: Rowman and Littlefield.
Snow D (2001). Collective Identity and Expressive Forms. University of California, Irvine
eScholarship Repository. Available at: (accessed 8
January 2020).
Stewart E (2019). Reddit restricts its biggest pro-Trump board over violent threats. Vox, 26 June.
Available at:
message-board-quarantine-ban. (accessed 8 January 2020).
Stier S, Posch L, Bleier A and Strohmaier M (2017). When populists become popular:
Comparing Facebook use by the right-wing movement Pegida and German political
parties. Information, Communication and Society 20(9): 1365—1388.
Tiffany K (2020). Reddit is done pretending The Donald is fine. The Atlantic, 29 June.
Available at:
donald-chapo-content-policy/613639. (accessed 29 June 2020).
Topinka RJ (2018). Politically incorrect participatory media: Racist nationalism on
r/ImGoingToHellForThis. New Media & Society 20(5): 2050—2069.
Ward J (2018). Day of the trope: White nationalist memes thrive on Reddit’s r/the_donald.
Southern Poverty Law Center. Available at:
thrive-reddits-rthedonald. (accessed 8 January 2020).
Wildman J (2020). What is Reddit? Digital Trends, 1 July. Available at: (accessed 10 July 2020).
Zadrozny B and Collins B (2018). Reddit bans qanon subreddits after months of violent threats.
NBC News, 12 September. Available at:
news/reddit-bans-qanon-subreddits-after-months-violent-threats-n909061. (accessed 8
January 2020).
Zannettou S, Bradlyn B, De Cristofaro E, Kwak H, Sirivianos M, Stringhini G and Blackburn
J (2018). What is Gab: A bastion of free speech or an alt-right echo chamber? In
Proceedings of the WWW ’18: Companion Proceedings of The Web Conference 2018.
Reddit has since amended its content policy to discourage ‘promoting hate based on identity or
vulnerability’ (see Account and Community Restrictions, n.d.-a).
For more on the data, see
Note that a Reddit ‘comment’ is a user-submitted piece of text that is created in response to a
particular ‘post’. For instance, when a r/The_Donald member finds an interesting link to a news
article, they may create a ‘post’ which links this content to r/The_Donald, to which other
members may comment.
All usernames were replaced with pseudonyms to protect user anonymity. All online
comments were quoted verbatim.
This number represents the comment’s ‘vote score’, or the number of upvotes given to a
comment minus downvotes.
Reddit has used this policy to ban popular extremist subreddits in the past, including the anti-
women community r/Incels in 2017 (Hauser, 2017) and the conspiracy theory community
r/QAnon in 2018 (Zadrozny and Collins, 2018).
... While digital settings provide spaces for marginalised ideas and movements, previous research has also identified the many ways platform politics can enable hateful content and users. For instance, the recommendation systems and content feeds on social media sites like YouTube, Facebook and Reddit have been shown to support and even emphasise racist and sexist content (Gaudette et al., 2021;Massanari, 2017). The accessibility and ease of use of many 'reaction', 'sharing' and content organising features have likewise been shown to enable proliferation of polarising and degrading content without much reflection or effort (Munn, 2020). ...
... In doing so, previous research has shown how these sites have positioned themselves as non-restrictive, free and fair alternatives for elsewhere unwelcome users (Jasser et al., 2023;Lima et al., 2018;Rogers, 2020). In some of these 'free speech' settings, unrestricted opportunities for user expression have been established explicitly by platforms (Kilgo et al., 2018;Topinka, 2018) but even in settings where some rules and policies apply publicly, loopholes have been shown to weaken their impact (Gaudette et al., 2021). Ultimately, lenient moderation policies and enforcements have been identified as contributing factors in permitting overall hateful platform culture (Massanari, 2017). ...
... What constitutes a derogatory yet factual and valid statement against an ethnic group, or a justifiable act of child sexual exploitation remains unspecified by Flashback. Still, add-ons like these give users lots of wiggle room to express hateful and harmful ideas in discussions (Gaudette et al., 2021). Given that Flashback operates exclusively in a Swedish legal context and could thus create quite specific rules to cater to these conditions (cf. ...
Full-text available
This article analyses the ‘free speech’ online forum Flashback, which adheres to a strict non-interference policy when it comes to user-generated content, but beyond this also forbids users from deleting their own content or accounts. Through a qualitative content analysis, this article sought to understand the relationship between the platform and its users with respect to this unconventional approach to moderation and content removal. This article discusses both the position(s) taken by Flashback as it pertains to its policy of minimal moderation, and the expectations as expressed by users navigating Flashbacks rules and their practical implementations. The article shows a discrepancy between how Flashback (incoherently) justifies minimal moderation and how users had imagined the platform operating. The article also discusses how Flashback maintains these policies through its community’s active encouragement via supportive posting and silencing of non-conformers, and the consequences that Flashback’s inaction has in terms of residual hate.
... Scholarly literature on these novel facets of far-right violence, hate-filled online discourse, and societal insecurity have thus far focused on their manifestations across different virtual and offline mediums (Bliuc et al., 2019;Gaudette et al., 2021;Vieten, 2020). However, few have attempted to connect them as one interconnected phenomenon. ...
... The concept of ideological confluence from Ong (2020) repeats similar findings, where depicting a common enemy helps reaffirm a shared belief system and offers a simple solution-survival is them or us. The many outlined articles examining online Sinophobic hate speech are appropriate for this framing (Gaudette et al., 2021). By dehumanising Chinese, Asians, and other minorities, the far-right collective can rationalise their uncertainties in a simple black-and-white dichotomy, with "us" as the superior and these "others" as the inferior (Sakki & Castrén, 2022). ...
... For the field of extremist research, the theoretical models of social movement (Gunning, 2009) and collective action (Oberschall, 2004) are predominantly employed and reworked to explain the mobilisation of contemporary far-right groups (see Bliuc et al., 2019;Castelli Gattinara & Pirro, 2019;Gaudette et al., 2021;Meadowcroft & Morrow, 2017). Both theories offer a similar formula to understand the appropriate conditions for mobilisation, ranging from discontent and grievances to a shared belief system and the ability to organise (Gunning, 2009;Oberschall, 2004). ...
Full-text available
The growing dissension towards the political handling of COVID-19, widespread job losses, backlash to extended lockdowns, and hesitancy surrounding the vaccine are propagating toxic far-right discourses in the UK. Moreover, the public is increasingly reliant on different social media platforms, including a growing number of participants on the far-right’s fringe online networks, for all pandemic-related news and interactions. Therefore, with the proliferation of harmful far-right narratives and the public’s reliance on these platforms for socialising, the pandemic environment is a breeding ground for radical ideologically-based mobilisation and social fragmentation. However, there remains a gap in understanding how these far-right online communities, during the pandemic, utilise societal insecurities to attract candidates, maintain viewership, and form a collective on social media platforms. The article aims to better understand online far-right mobilisation by examining, via a mixed-methodology qualitative content analysis and netnography, UK-centric content, narratives, and key political figures on the fringe platform, Gab. Through the dual-qualitative coding and analyses of 925 trending posts, the research outlines the platform’s hate-filled media and the toxic nature of its communications. Moreover, the findings illustrate the far-right’s online discursive dynamics, showcasing the dependence on Michael Hogg’s uncertainty-identity mechanisms in the community’s exploitation of societal insecurity. From these results, I propose a far-right mobilisation model termed Collective Anxiety, which illustrates that toxic communication is the foundation for the community’s maintenance and recruitment. These observations set a precedent for hate-filled discourse on the platform and consequently have widespread policy implications that need addressing.
... Identity is important in the vocal music of Yunnan's ethnic minorities (Zhang & Su, 2023). Vocal performances not only represent a given ethnic group's collective identity, but also allow individuals to express their own feeling of self and belonging (Gaudette et al., 2021). Students acquire insight into the varied nature of identity creation through studying vocal music as a platform for identity expression (Draves, 2021). ...
This qualitative research study investigates the cultural symbolism and identity exhibited by ethnic minorities in Yunnan, China, through vocal music, as well as the consequences for ethnic minority music. The study adopts a three-step coding analysis based on interviews with 18 respondents representing diverse ethnic minority groups to discover themes and patterns in the cultural symbolism embedded within vocal music. For data analysis, the MAXQDA-2020 software is used. The findings show that ethnic minority groups in Yunnan rely heavily on vocal music to preserve cultural history, generate community pride, and promote intercultural understanding. The rich connection between vocal music and ethnic identity, the influence of nature and environment on musical expressions, the preservation of cultural heritage through rituals and celebrations, and the evolving collaborations and contemporary adaptations in vocal music are among the themes extracted from the data. This study has three types of implications: general, theoretical, and practical. Overall, this study adds to our understanding of vocal music as a potent medium for cultural expression and identity building. The identification of recurring themes and patterns has theoretical significance, offering a platform for further investigation and comparative studies. In practice, the findings inform ethnic minority music practices by emphasizing the importance of introducing vocal music traditions into curricula, encouraging cultural awareness, and developing international dialogue.
... Extremist groups have taken advantage of the opportunities that social media presents for community formation through discourse (Gaudette et al., 2021). Urman and Katz (2022) show that, in the case of far-right extremists, extremist communities on Telegram form a decentralized network where communities are divided largely by nationality and ideology. ...
This paper maps discourse communities created by Russian extremists on Telegram. It assesses the extent to which discourse communities created by extremists identified through qualitative analysis in prior research are still present in a novel Telegram dataset and can be identified through Latent Dirichlet Allocation topic modeling and network analysis. It then compares the results of the Louvain and Girvan-Newman network community detection methods and the topics assigned to each community to see if the underlying structure of association between topics is robust to the use of different methods for community detection. This work contributes to an understanding of how extremist groups operate online.
... When modern technology serves as an echo chamber, it can deprive communities of feedback while at the same time amplifying their viewpoints, thereby increasing extremist attitudes (Gaudette, Scrivens, Davies, & Frank, 2020; O'Hara & Stevens, 2015). Communication within these echo chambers is often characterized by ideological homophily and can increase polarization and radicalization, especially in the domain of the virtual far-right (Kaakinen, Sirola, Savolainen, & Oksanen, 2020; Keipi et al., 2016). ...
Objectives Mass shooters, violent extremists, and terrorists, who are overwhelmingly male, exhibit misogynistic attitudes and a history of violence against women. Over the past few years, incels (“involuntary celibates”) have gathered in online communities to discuss their frustration with sexual/romantic rejection, espouse male supremacist attitudes, and justify violence against women and men who are more popular with women. Despite the link between misogyny and mass violence, and the recent emergence of online misogynistic extremism, theories and empirical research on misogynistic extremism remain scarce. This article fills this gap. Methods An integration of literatures pertaining to the basics of sexual selection, evolved male psychology, and aggression suggests there are three major areas that should be considered imperative in understanding the emergence of misogynistic extremism. Results Individual factors (e.g., low status) and social forces, such as a high degree of status inequality, female empowerment, and the ease of coordination through social media, give rise to misogynistic extremism. Conclusions The unique interaction between evolved male psychology, the dynamics of the sexual marketplace, and modern technologies can create an ecology in which incel beliefs can thrive and make violence attractive.
... The usage of engagement metrics in social media platforms has also been thought to encourage divisive, polarizing, or misleading content (Boler & Davis, 2021; Bradshaw, 2019; Marwick & Lewis, 2017; Woolley & Howard, 2018). Gaudette et al. (2021) argue that the use of 'upvotes' and 'downvotes' on Reddit, which allow users to promote content they like and demote content they do not, helps to galvanise far-right groups. Such groups use these voting systems to promote content that they agree with and suppress uncomfortable information, without actually having to engage in dialogue. ...
Following concerns about social media's role in politics (fostering polarization and spreading disinformation), many activists and civic hackers have developed alternative digital democracy platforms for both deliberation and the representation of public opinion. But how are we to study the role of these platforms, and in particular their algorithms, in the development of issues and the publics that gather around them? This article employs a simple quali-quantitative data visualization to study how a particular digital democracy platform, vTaiwan (an implementation of a tool for generating opinions and consensus about public issues), formats political participation. We investigate how one particular issue (Uber legalization) was formed and reformed by users, moderators, and algorithms on the vTaiwan platform over time. While the algorithm sorted opinions into a binary of pro- and anti-Uber positions, we find that the comments themselves and their sequence suggest more nuanced positions and the potential for dialogue. We argue that vTaiwan may be limited by its focus on simple quantitative data points (positive or negative votes as opposed to the texts themselves) and a forced separation of participants into in-or-out opinion groups. This study contributes to critical algorithm studies and digital democracy studies by offering an effective way to analyse the role of algorithms in democratic politics.
An important Russian strategic disinformation campaign, early in the Ukrainian war, contended that the United States was involved in clandestine biological weapons development in Ukraine under the sign "biolabs." Depending on the platform, the propaganda messaging appeared differently. This study uses a multi-modal approach, featuring Image Plotting, to study the waves of messaging involved in the biolabs campaign, including both the qualitative analysis of text and the analysis of the images. Each platform exhibits a clear visual and textual strategy that aligns with known Russian strategies and multi-modal campaign operations.
The science and practice of climate change communication have significantly evolved over the last two decades, leading to a subfield of environmental communication focused on perception, awareness, and risk associated with climate change. This body of literature has demonstrated the importance of recognizing the differences among individuals and social groups in terms of cultural, psychological, and political reasons for their perceptions regarding climate change and has provided guidance for communicating with target audiences. However, most of the research in this subfield has relied on quantitative data from nationally representative survey instruments. While such metrics are essential to understanding longitudinal trends in public perceptions, they are limited in providing deeper understanding of how an individual perceives climate change in relation to other environmental and social issues. Qualitative data, elicited through techniques such as focus groups and semi-structured interviews, can help to provide these insights. In addition, qualitative research can support a more relational approach to climate change communication, which emphasizes the importance of seeing science communication as an opportunity to connect, rather than to persuade. In this paper, we present findings from semi-structured interviews ("environmental conversations") with fifteen individuals based in the United States regarding their opinions, knowledge, and perceptions of climate change and other environmental issues. The findings demonstrate nuance and diversity in people's opinions on climate change and how they are connected to other priorities and values. We highlight the value of qualitative research as a tool not only to better understand different environmental perspectives, but also to support two-way science communication among the broader public.
Violent political extremists often point to online communities as motivating their behavior. However, researchers studying online exposure to extremism through structural mechanisms such as algorithms have not found strong evidence of their influence. At the same time, models of offline radicalization processes emphasize the importance of personal motivations, such as desire for significance and community, but do not fully account for online contexts. The authors integrate these approaches, which are both interested in worsening political extremism, asking, (1) What are the pathways to extreme content and communities online? and (2) What are the perceptions of extremism in online communities? Through interviews with politically active Redditors, the authors identify three motivations for initial engagement with fringe political communities: political unsorting of the self, political exceptionalism, and virtuous participation. The authors argue these motivations are potentially important seeds of political extremism and discuss the implications for supporting healthy political discourse online.
A step-by-step guide to conducting a thematic analysis within the context of learning and teaching.
This policy brief traces how Western right-wing extremists have exploited the power of the internet from early dial-up bulletin board systems to contemporary social media and messaging apps. It demonstrates how the extreme right has been quick to adopt a variety of emerging online tools, not only to connect with the like-minded, but to radicalise some audiences while intimidating others, and ultimately to recruit new members, some of whom have engaged in hate crimes and/or terrorism. Highlighted throughout is the fast pace of change of both the internet and its associated platforms and technologies, on the one hand, and the extreme right, on the other, as well as how these have interacted and evolved over time. Underlined too is the persistence, despite these changes, of right-wing extremists' online presence, which poses challenges for effectively responding to this activity moving forward.
Despite the increasing citizen engagement with socio-political online communities, little is known about how such communities are affected by significant offline events. Thus, we investigate here the ways in which the collective identity of a far-right online community is affected by offline intergroup conflict. We examine over 14 years of online communication between members of Stormfront Downunder, the Australian sub-forum of the global white supremacist community. We analyse members' language use and discourse before and after significant intergroup conflict in 2015, culminating in local racist riots in Sydney, Australia. We found that the riots were associated with significant changes in the collective beliefs of the community (as captured by members' most salient concerns and group norms), emotions and consensus within the community. Overall, the effects of the local riots were manifest in a reinvigorated sense of purpose for the far-right community with a stronger anti-Muslim agenda.
Researchers have previously explored how right-wing extremists build a collective identity online by targeting their perceived “threat,” but little is known about how this “us” versus “them” dynamic evolves over time. This study uses a sentiment analysis-based algorithm that adapts criminal career measures, as well as semi-parametric group-based modeling, to evaluate how users’ anti-Semitic, anti-Black, and anti-LGBTQ posting behaviors develop on a sub-forum of the most conspicuous white supremacy forum. The results highlight the extent to which authors target their key adversaries over time, as well as the applicability of a criminal career approach in measuring radical posting trajectories online.
Conference Paper
Over the past few years, a number of new "fringe" communities, like 4chan or certain subreddits, have gained traction on the Web at a rapid pace. However, more often than not, little is known about how they evolve or what kind of activities they attract, even though recent research has shown that they influence how false information reaches mainstream communities. This motivates the need to monitor these communities and analyze their impact on the Web's information ecosystem. In August 2016, a new social network called Gab was created as an alternative to Twitter. It positions itself as putting "people and free speech first", welcoming users banned or suspended from other social networks. In this paper, we provide, to the best of our knowledge, the first characterization of Gab. We collect and analyze 22M posts produced by 336K users between August 2016 and January 2018, finding that Gab is predominantly used for the dissemination and discussion of news and world events, and that it attracts alt-right users, conspiracy theorists, and other trolls. We also measure the prevalence of hate speech on the platform, finding it to be much higher than on Twitter, but lower than on 4chan's Politically Incorrect board.
Deep and insightful interactions with the data are a prerequisite for qualitative data interpretation, in particular in the generation of grounded theory. The researcher must also employ imaginative insight as they attempt to make sense of the data and generate understanding and theory. Design research is also dependent upon the researchers' creative interpretation of the data. To support the research process, designers surround themselves with data, both as a source of empirical information and as inspiration to trigger imaginative insights. Constant interaction with the data is integral to design research methodology. This article explores a design researcher's approach to qualitative data analysis, in particular the use of traditional tools such as colored pens, paper, and sticky notes alongside the CAQDAS software NVivo for analysis, and the associated implications for rigor. A design researcher's approach grounded in a practice that maximizes researcher-data interaction across a variety of learning modalities ensures the analysis process is rigorous and productive. Reflection on the authors' research analysis process, combined with consultation of the literature, suggests that digital analysis software packages such as NVivo do not fully scaffold the analysis process. They do, however, provide excellent data management and retrieval facilities that support analysis and write-up. This research finds that coding with traditional tools such as colored pens, paper, and sticky notes, combined with digital software packages such as NVivo for data management, offers a valid and tested analysis method for grounded theory generation. Insights developed from exploring a design researcher's approach may benefit researchers from other disciplines engaged in qualitative analysis.
In the aftermath of the 2017 Charlottesville tragedy, the prevailing narrative is a Manichean division between 'white supremacists' and 'anti-racists'. We suggest a more complicated, nuanced reality. While the so-called 'Alt-Right' includes those pursuing an atavistic political end of racial and ethnic separation, it is also characterised by pluralism and a strategy of nonviolent dialogue and social change, features associated with classic liberalism. The 'Left,' consistent with its historic mission, opposes the Alt-Right's racial/ethnic prejudice; but a highly visible movement goes farther, embracing an authoritarianism that would forcibly exclude these voices from the public sphere. This authoritarian element has influenced institutions historically committed to free expression and dialogue, notably universities and the ACLU. We discuss these paradoxes by analysing the discourse and actions of each movement, drawing from our study of hundreds of posts and articles on Alt-Right websites and our online exchanges on a leading site. We consider related news reports and scholarly research, concluding with the case for dialogue.
While an increasing number of contributions address transnationalism in far right politics, few investigations of the actors and discourses favored in transnational exchanges exist on social media, considered a perfect habitat for radicalization. Building on the literature on the far right, social movements, transnationalism and the Internet, we address this gap by studying the initiators and the issues that are favored in online exchanges between audiences of far right political parties and movements across France, Germany, Italy and the United Kingdom. We use a new dataset on the activities of far right Twitter users that is analyzed through a novel mixed methods approach. We use social network analysis to detect transnational links between organizations across countries based on retweets from audiences of far right Twitter users. Retweets are qualitatively coded for content and compared to the content retweeted within national communities. Finally, using a logistic regression, we quantify the level to which specific issues and initiators enjoy high levels of attention across borders. Subsequently we use discourse analysis to qualitatively reconstruct the interpretative frames accompanying these patterns. We find little evidence of a “dark international” on social media across Western Europe. Only a few issues (anti-immigration and nativist interpretations of the economy) garner transnational audiences on Twitter. Additionally, we demonstrate that more than movements, political parties play a prominent role in the construction of a transnational far right discourse. Keywords: Far right, transnationalism, Twitter, issue attention, movements, social network analysis