Upvoting extremism: Collective identity formation and the extreme right on Reddit



Since the advent of the Internet, right-wing extremists and those who subscribe to extreme right views have exploited online platforms to build a collective identity among the like-minded. Research in this area has largely focused on extremists’ use of websites, forums, and mainstream social media sites, but overlooked in this research has been an exploration of the popular social news aggregation site Reddit. The current study explores the role of Reddit’s unique voting algorithm in facilitating ‘othering’ discourse and, by extension, collective identity formation among members of a notoriously hateful subreddit community, r/The_Donald. The results of the thematic analysis indicate that those who post extreme-right content on r/The_Donald use Reddit’s voting algorithm as a tool to mobilize like-minded members by promoting extreme discourses against two prominent out-groups: Muslims and the Left. Overall, r/The_Donald’s ‘sense of community’ facilitates identity work among its members by creating an environment wherein extreme right views are continuously validated.
Tiana Gaudette
School of Criminology
Simon Fraser University
8888 University Drive
Burnaby, BC V5A 1S6, Canada
Ryan Scrivens
School of Criminal Justice
Michigan State University
Garth Davies
School of Criminology
Simon Fraser University
Richard Frank
School of Criminology
Simon Fraser University
Author Biographies
Tiana Gaudette is a Research Associate at the International CyberCrime Research Centre
(ICCRC) at Simon Fraser University (SFU). She recently earned an MA in criminology from
Ryan Scrivens is an Assistant Professor in the School of Criminal Justice (SCJ) at Michigan State
University (MSU). He is also an Associate Director at the International CyberCrime Research
Centre (ICCRC) at Simon Fraser University (SFU) and a Research Fellow at the VOX-Pol
Network of Excellence.
Garth Davies is an Associate Professor in the School of Criminology at Simon Fraser University
(SFU) and is the Associate Director of the Institute on Violence, Extremism, and Terrorism at
Richard Frank is an Associate Professor in the School of Criminology at Simon Fraser
University (SFU) and is the Director of the International CyberCrime Research Centre (ICCRC)
at SFU.
The authors would like to thank Maura Conway for her valuable feedback on earlier versions of
this article. The authors would also like to thank the two anonymous reviewers for their valuable
and constructive feedback.
Keywords: Collective identity; extreme right discourse; r/The_Donald; social movement theory; social news
Right-wing extremist movements are increasingly expanding beyond national borders, and the
Internet is a fundamental medium that has facilitated such movements’ increasingly
‘transnational’ nature (Caiani and Kröll, 2014; Froio and Ganesh, 2019; Perry and Scrivens, 2016). The plethora of online platforms plays a significant role in the ‘transnationalisation’ of the extreme right by enabling such movements to attract new members and disseminate information to a global audience (see Caiani and Kröll, 2014). The Internet, however, functions as more than a
‘tool’ to be used by movements. It also functions as a space of important identity work; the
interactive nature of the Internet’s various platforms allows for the exchange of ideas and enables
the active construction of collective identities (Perry and Scrivens, 2016). Here the Internet has
facilitated a transnational ‘locale’ where right-wing extremists can exchange radical ideas and
negotiate a common sense of ‘we’ – or a collective identity (Bowman-Grieve, 2009; Futrell and
Simi, 2004; Perry and Scrivens, 2016).
Researchers who have explored right-wing extremists’ use of the Internet have typically
focused their attention on dedicated hate sites and forums (e.g. Adams and Roscigno, 2005;
Bliuc et al., 2019; Bowman-Grieve, 2009; Burris, Smith, and Strahm, 2000; Futrell and Simi, 2004; Perry and Scrivens, 2016; Scrivens, Davies, and Frank, 2018; Simi and Futrell, 2015). As a result, aside from studies of extremists’ use of mainstream social media platforms – including Facebook (e.g. Ekman, 2018; Nouri and Lorenzo-Dus, 2018; Stier et al., 2017), Twitter (e.g. Berger, 2016; Berger and Strathearn, 2013; Burnap and Williams, 2015; Graham, 2016), and YouTube (e.g. Ekman, 2014; O’Callaghan et al., 2014) – right-wing extremists’ use of platforms outside the mainstream has gone mostly unexplored, despite the fact that lesser-researched platforms (e.g. 4chan, 8chan, and Reddit) have provided adherents with spaces to anonymously discuss and develop ‘taboo’ or ‘anti-moral’ ideologies (see Nagle, 2017). Scholars who have
recognized this lacuna have called for further exploration of extremists’ use of under-researched
platforms. Conway (2017: 85), for example, noted that ‘different social media platforms have different functionalities’ and posed the question of how extremists exploit the different functions of these platforms. In response, the current study begins to bridge the abovementioned gap by exploring one platform that has received relatively little research attention, Reddit, and the functional role of its voting algorithm (i.e., its upvoting and downvoting function) in facilitating collective identity formation among members of a notoriously hateful subreddit community, r/The_Donald.
The collective identity of the extreme right online
The notion of ‘collective identity’ lies at the heart of the social movement theory framework and
has provided valuable insight into the impact of right-wing extremists’ online discussions in
shaping their identity (e.g. Futrell and Simi, 2004; Perry and Scrivens, 2016; Scrivens et al.,
2018). The social movement literature posits that collective identity is actively produced (e.g.
Hunt and Benford, 1994; Snow, 2001) and constructed through what Melucci (1995: 43)
describes as ‘interaction, negotiation and the opposition of different orientations.’ In addition,
according to the social constructionist paradigm, collective identity is ‘invented, created,
reconstituted, or cobbled together rather than being biologically preordained’ (Snow, 2001: 5).
As activists share ideas among other members of their in-group, these exchanges actively
produce a shared sense of ‘we’ and, by extension, a collective identity (Bowman-Grieve, 2009;
Snow, 2001; Melucci, 1995; Futrell and Simi, 2004). Additionally, a collective identity is further
developed through the construction of an ‘us’ versus ‘them’ binary (Polletta and Jasper, 2001).
By ‘othering,’ or identifying and targeting the groups’ perceived enemies, such interactions
further define the borders of the in-group (Perry and Scrivens, 2016).
Indeed, these same processes of collective identity formation occur within – and are
facilitated by – online communications. Similar to how ‘face-to-face’ interactions in the ‘real
world’ form collective identities, so too do the ‘many-to-many’ interactions in cyberspace
(Crisafi, 2005). As a result, the Internet’s many online platforms have facilitated extreme right-
wing identity work (Futrell and Simi, 2004; Perry and Scrivens, 2016). As Perry and Scrivens
(2016: 69) explained it:
[a] collective identity provides an alternative frame for understanding and expressing
grievances; it shapes the discursive “other” along with the borders that separate “us” from
“them”; it affirms and reaffirms identity formation and maintenance; and it provides the
basis for strategic action.
Each of these elements enhances our understanding of the Internet’s role in developing a collective identity among extreme right-wing adherents. But among these virtual spaces, Reddit in particular has received criticism in recent years for its proliferation of extreme right-wing content.
‘The front page of the Internet’: Reddit
While the social movement framework has been useful in advancing our understanding of right-
wing extremists’ use of the Internet, particularly on the more traditional websites, forums and
mainstream social media platforms to build a collective identity, it remains unclear how the
unique voting features and content policies of one under-researched virtual platform, Reddit,
shape the formation of a collective identity among members of right-wing extremist movements
who use the platform.
Created in 2005, Reddit has become the fifth-most popular website in the United States
(U.S.) (Reddit Press, n.d.) and the 20th-most popular website in the world (Competitive Analysis, n.d.). As a ‘social news aggregation’ site, Reddit describes itself as ‘a
platform for communities to discuss, connect, and share in an open environment, home to some
of the most authentic content anywhere online’ (Reddit Content Policy, n.d.). Here users may
participate in, contribute to, and even develop like-minded communities whose members share a
common interest. A unique component of the platform is its voting algorithm, which provides its
users with the means to promote and spread content within a particular subreddit. To illustrate,
Reddit users retrieve interesting content from across the Internet and post that content to niche
topic-based communities called ‘subreddits’, and those who share similar views (or not) can
convey their interest – or disinterest – by giving an ‘upvote’ or ‘downvote’ to the posted content
in the subreddit (Wildman, 2020). ‘Upvoting’ a post, for example, increases its visibility and user
engagement within a subreddit (see Carman et al., 2018), and as a result, the most popular posts
are featured on Reddit’s front page for millions of users to see (Wildman, 2020). In short, using
the upvoting (i.e., increasing content visibility) and downvoting (i.e., decreasing content
visibility) function on Reddit, users shape the discourse and build a collective identity within
their subreddit community. Previous research that explored bidirectional voting results on a
related site, Imgur, has similarly found that users’ upvoting and downvoting practices help
reinforce social identity (see Hale, 2017).
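To make the mechanism concrete, the net-score principle described above can be sketched as follows. This is an illustrative toy model only: Reddit’s actual ‘hot’ ranking also weighs post age and other signals, and the field names used here (`ups`, `downs`, `title`) are hypothetical.

```python
def net_score(upvotes: int, downvotes: int) -> int:
    """Each upvote raises a post's visible score; each downvote lowers it."""
    return upvotes - downvotes


def rank_posts(posts):
    """Surface content in descending score order, so a community's most
    upvoted material becomes its most visible material."""
    return sorted(posts, key=lambda p: net_score(p["ups"], p["downs"]), reverse=True)


subreddit = [
    {"title": "A", "ups": 120, "downs": 30},   # net +90
    {"title": "B", "ups": 45, "downs": 2},     # net +43
    {"title": "C", "ups": 500, "downs": 600},  # net -100: pushed out of view
]
print([p["title"] for p in rank_posts(subreddit)])  # → ['A', 'B', 'C']
```

Sketched this way, voting acts as a collective filter: comments consistent with a subreddit’s dominant narrative accumulate score and visibility, while dissenting comments sink.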
While Reddit is designed to foster a sense of community among like-minded users,
historically its laissez-faire content policy has failed to prevent users from posting and spreading
hate speech on the platform (Gibbs, 2018) and has even facilitated the development of what
Massanari (2017) describes as ‘toxic technocultures’. On the one hand, its content policy defines ‘unwelcome content’ as material that incites violence or that threatens, harasses, or bullies other users (Reddit Content Policy, n.d.). On the other hand, the policy admits that it ‘provides a lot of leeway in what content is acceptable’ (Reddit Content Policy, n.d.). This
hands-off approach is largely meant to facilitate an open dialogue among members of Reddit’s
numerous communities, wherein users may express their views on controversial topics without
fear of being banned from the site or harassed by those with opposing views (Reddit Content
Policy, n.d). As a result, Reddit tends to stop short of preventing users from posting controversial
ideas. In fact, until recently, Reddit condoned racism and hate speech on its platform, according
to a recent statement by the site’s CEO, Steve Huffman (Gibbs, 2018).
It comes as little surprise, then, that Reddit has witnessed an influx of extremist
communities (Feinberg, 2018). In response, Reddit launched a ‘quarantine’ feature in 2015 that
would hide a number of communities that regularly produce offensive content. Here users must explicitly opt in to view quarantined communities, a mechanism meant to prevent users from
inadvertently coming across extremely offensive content (Account and Community Restrictions,
n.d.-b). One previously quarantined and recently banned subreddit community, r/The_Donald, received particular criticism for its extreme right-wing content.
‘America First!’: r/The_Donald
r/The_Donald was a subreddit community where members could discuss a variety of matters
relating to Trump’s U.S. presidency. From its inception in 2015, Trump’s most fervent supporters assembled on r/The_Donald, with numbers surpassing 700,000 subscribers
(Romano, 2017). Some members of r/The_Donald represented a variety of extreme right-wing
movements and ideologies, including the ‘manosphere’, the ‘alt-right’, and racists more broadly,
though all members were united in their unwavering support for the U.S. president (Martin,
Although r/The_Donald was quarantined in June 2019 after members were accused of
breaking Reddit’s rules by inciting violence (see Stewart, 2019), hateful sentiment continued to
be a common occurrence there (Gibbs, 2018), until it was eventually banned (Tiffany, 2020). To
illustrate, while the ‘rules’ of r/The_Donald made it clear that ‘racism and Anti-Semitism will
not be tolerated,’ they also specified that ‘Muslim and illegal immigrant are not races’
(r/The_Donald Wiki, n.d.). These rules suggest, then, that moderators would most likely not remove anti-Muslim or anti-immigrant discussions within the community.
Not surprisingly, r/The_Donald continued to experience an influx of extreme right-wing content,
including discussions surrounding ‘white genocide’, anti-Muslim sentiment, and anti-Black
discourse (Ward, 2018). Racist memes also appeared to be increasing in popularity on
r/The_Donald (see Ward, 2018; see also Zannettou et al., 2018). Despite these concerns,
however, research on platforms such as Reddit remains limited. So too does our understanding of
how these sites function to facilitate hateful content and collective identity formation.
Before proceeding, it is worth noting here that, following J.M. Berger (2018), we take
the view that right-wing extremists—like all extremists—structure their beliefs on the basis that
the success and survival of the in-group is inseparable from the negative acts of an out-group
and, in turn, they are willing to assume both an offensive and defensive stance in the name of the
success and survival of the in-group. We thus conceptualize right-wing extremism as a racially,
ethnically, and/or sexually defined nationalism, which is typically framed in terms of white
power and/or white identity (i.e., the in-group) that is grounded in xenophobic and exclusionary
understandings of the perceived threats posed by some combination of non-whites, Jews,
Muslims, immigrants, refugees, members of the LGBTQI+ community, and feminists (i.e., the
out-group(s)) (Conway, Scrivens, and Macnair, 2019).
Data and methods
This research is guided by the following research question: How does Reddit’s unique voting
algorithm (i.e., its upvoting and downvoting function) facilitate ‘othering’ discourse and, by
extension, collective identity formation on r/The_Donald following Trump’s presidential election
victory? To answer this question, data were collected from a website that made Reddit data
publicly available for research and analysis.
We extracted all of the data posted to the r/The_Donald subreddit in 2017, as it marked the first year of Trump’s presidency and a time when his far-right
political views encouraged his supporters to preach and practice racist hate against the out-
groups, both on- and offline (see Anti-Defamation League, 2018). Research has similarly
identified a link between Trump’s election victory and a subsequent spike in hatred, both online
(e.g., Zannettou et al., 2018) and offline (e.g., Edwards and Rushin, 2018), in the year following
his victory.
We then identified the 1,000 most highly-upvoted user-submitted comments in the data
(hereafter referred to as the ‘highly-upvoted sample’).
A second sample of 1,000 user-submitted comments was drawn at random from the data (hereafter referred to as
the ‘random sample’) to serve as a comparison to the highly-upvoted sample. The purpose of this
was to explore what makes highly-upvoted content unique in comparison to a random sample of
non-highly-upvoted comments.
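The two-sample design can be sketched in a few lines. This is a minimal illustration under the assumption that each comment record carries a numeric `score` field (its net vote count); the actual extraction pipeline and field names used by the authors are not specified above.

```python
import random


def draw_samples(comments, n=1000, seed=0):
    """Return (highly_upvoted, random_sample): the n top-scoring comments and
    an independent simple random sample of n comments from the same data."""
    highly_upvoted = sorted(comments, key=lambda c: c["score"], reverse=True)[:n]
    random_sample = random.Random(seed).sample(comments, n)
    return highly_upvoted, random_sample
```

Note that a simple random sample drawn this way may overlap with the top-n set; whether such overlap was excluded is not stated above.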
The data were analyzed using thematic analysis (Braun and Clarke, 2006). In particular, the data were categorized using descriptive coding, which was conducted with NVivo qualitative analysis software. As we reviewed each user comment in the highly-upvoted sample and the random sample, codes were assigned to the sections of text that related to the research question. This descriptive coding technique proceeded in a sequential, line-by-line manner (see Maher et al., 2018), which was a suitable choice for the current study because it allowed us to organize a vast amount of textual data into manageable word/topic-based clusters. Given that there are no
‘hard and fast’ rules for searching for themes using thematic analysis (see Maguire and Delahunt,
2017), significant themes emerged as we coded, categorized, and reflected on the data (Saldaña,
Findings
The findings are organized into two sections, one for each key theme that was identified in the
data. Each includes an explanation of the patterns that emerged in support of each theme, both
within and across the highly-upvoted sample and the random sample. Here we draw largely from
the words of the users in the samples (i.e., quotes). Descriptive information pertaining to the samples is also provided.
Descriptive information of the samples
Several macro-level patterns emerged across the sample groups, as is illustrated in Table 1. First
was the relatively similar number of unique authors in the highly-upvoted sample and the random sample (852 and 929, respectively). This suggests that the highly-upvoted sample, and
perhaps the highly-upvoted content on r/The_Donald in general, does not consist of a group of
authors whose comments make up a large proportion of the highly-upvoted comments. Second,
each sample consisted of a variety of thread-starting comments and response comments, but the
thread-starter comments comprised a large proportion of the highly-upvoted sample (89.3%)
while the random sample contained fewer thread-starter comments (46.5%). Lastly, a
comparison of the measures of central tendency across the two samples revealed that the
comment vote scores varied substantially. This is expected because the highly-upvoted sample
consisted of the most highly-upvoted comments found in the r/The_Donald in 2017.
Table 1. Descriptive information of the samples.

                                     Unique Authors (%)    Thread-Starter Comments (%)    Comment Vote Score
Highly Upvoted Sample (N = 1,000)    852 (85.2%)           893 (89.3%)
Random Sample (N = 1,000)            929 (92.9%)           465 (46.5%)
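The macro-level descriptives in Table 1 can be computed with standard-library tools alone. A minimal sketch follows, in which the `author`, `parent_id`, and `score` field names are hypothetical stand-ins and a thread-starting comment is marked by `parent_id` being `None`.

```python
from statistics import mean, median


def describe_sample(comments):
    """Compute Table 1-style descriptives for one sample of comments:
    unique authors, thread-starter share, and vote-score central tendency."""
    n = len(comments)
    unique_authors = len({c["author"] for c in comments})
    thread_starters = sum(1 for c in comments if c["parent_id"] is None)
    scores = [c["score"] for c in comments]
    return {
        "unique_authors": unique_authors,
        "unique_authors_pct": round(100.0 * unique_authors / n, 1),
        "thread_starters": thread_starters,
        "thread_starters_pct": round(100.0 * thread_starters / n, 1),
        "score_mean": mean(scores),
        "score_median": median(scores),
    }
```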
The external threat
Across the 1,000 most highly-upvoted comments were various talking points, including
discussions about Reddit administrators, a recent solar eclipse, users on 4chan, and United States
Senator Ted Cruz, among numerous other discussions. But a large proportion of the comments
(i.e., 11.6%) in the highly-upvoted sample were hateful comments about Muslim communities,
with claims that Muslim immigration is a persistent danger to Western nations, particularly to the
U.S., and that Muslim-perpetrated violence is to be expected as a result of the increasing levels
of Muslim immigration. In particular, contributors generally agreed that increased Muslim
immigration will result in what one user – who posted one of the most highly-upvoted comments
in the sample – described as ‘chaos and oppression and rape and murder’ (UserID: 977, 4143).
Here the highly-upvoted comments not only suggested that ‘my Muslim neighbors’ want to ‘rape
my wife and kill my dogs’, as one user explained (UserID: 272, 1532), but also that ‘pedos
[pedophiles] are rampant inside this mind fuck ideology, as their [religious] ‘book’ allows this
shit to happen’ (UserID: 309, 2300). In addition, among the most highly-upvoted comments was one exclaiming ‘... HIM TO DEATH’ (UserID: 600, 1526), which was used by members to showcase the violent
nature of Islam as well as to make the claim that such a ‘barbaric’ religion is indisputably
violent. Interestingly, while highly-upvoted content reflected such comments as ‘we know
muslims are NOT peaceful and will wreck shit’ (UserID: 588, 1388), much of the content
oftentimes included the use of sarcasm to express frustration about a wider ‘politically correct’
culture that, as one user best summed up, ‘[does] not allow us to report crimes perpetrated by
Muslims because it’s racist’ (UserID: 555, 1548).
Frequently discussed within the highly-upvoted sample was a link between the increase
of Muslim migration in the U.S., and the West more generally, and the increased threat of
terrorism there. Although racism is against r/The_Donald subreddit’s rules, racist sentiments
against Muslims were generally overlooked in the context of discussions around the ‘Muslim
terror threat’. For instance, one member’s highly-upvoted comment described the terrorist threat as ‘... ALLAHU AKBAR’ (UserID: 689, 1400). Notably, although racist remarks such as these should
warrant removal by r/The_Donald moderators, they were overlooked and, resultantly, were
allowed to remain featured within the community to draw further attention to the external threat.
As a result, among the most highly-upvoted content were suggestions that Muslims largely
supported terrorism in the U.S., with some users also suggesting ‘NOT ALL MUSLIMS yeah,
but damn near most of them it seems’ (UserID: 353, 1565) support terrorist attacks against the
West. As a result, commonly expressed in the highly-upvoted comments were references to what
was described as a key method to counter this ‘terror threat’: preventing the so-called external
threat from entering the Western world through immigration. Here, contributors oftentimes
argued that it is imperative to, as one user who posted a highly-upvoted comment put it, ‘halt our
disasterous immigration policies’ (UserID: 612, 1942).
Taken together, extreme right-wing sentiment emerged in the highly-upvoted content when it related to this external threat. Consistently uncovered within this sample were discussions about
Muslims as ‘invaders’ of the West and an increasing threat to the white population there. As but
one more notable example, in a highly-upvoted comment to a post titled ‘Muslim population in Europe to triple by 2050’, one user expressed their grievances about the rising number of
Muslims in Europe and the subsequent threat to the white race there:
In 2050 your gonna have a lot of younger Europeans who completely resent their elders
for giving away there future for a false sense of moral superiority. I almost already feel it
with my parents. My family is white and both of my white liberal parents talk as if whites
are the worst people on the planet. I love them but holy shit they have no self
preservation instincts. It’s embarrassing. (UserID: 114, 1604)
Similarly uncovered in the highly-upvoted comments were members who described an emerging
sense of desperation that the survival of their in-group is being threatened by the Muslim out-
group. As one user put it: ‘Come on? We’re getting slaughtered here, the previous car that
ploughed though people on the bridge, Ariana Grande concert and now this? Our country has
been infiltrated, we’re fucked’ (UserID: 355, 2189). Here the threat of Muslim-perpetrated
violence and terrorism was described within the broader context of an Islamic ‘invasion’ of
Western countries. To illustrate, a large proportion of the content within the highly-upvoted
messages was a reaction to Muslim immigration into Western nations as a sign of Muslims
‘want[ing] to rule the earth’ (UserID: 309, 2300). One of the highly-upvoted comments
described this so-called Muslim ‘end game’ as a means to destroy other nations to further their
own goals:
Muslims have been trying to conquer us since the beginning of time. They used to
steal our children and then train them to kill their own people. Didn't work then,
and will not work now. There’s not a single romanian out there in this whole
world who doesn’t know what the Muslim end game is. (UserID: 761, 1465)
As a result of this external threat, members in the subreddit oftentimes discussed the ways that
they should defend themselves and members of their in-group since, as one user put it, ‘[w]hy
should we give up our lifestyle for their tyranny?’ (UserID: 460, 1653).
Evidence of the above discussed external threat of the Muslim out-group, such as
gruesome images of recent terror attack victims found in the highly-upvoted content, further
invoked a particularly emotional response from members of the in-group on r/The_Donald; when
Muslim-perpetrated terrorist attacks occurred, the fear and anger that members of r/The_Donald
expressed toward the external threat was reflected in their most highly-upvoted comments. For
example, a highly-upvoted comment to an image of a terror attack victim noted: ‘[i]t’s
heartbreaking and bloodily gruesome but this corpse was once a person. How dare we deny their
pain? We need to defend and avenge’ (UserID: 989, 1468). In effect, each member that upvoted
this comment agreed with its basic underlying assumptions: that the in-group’s very survival
depends on some form of defensive action against the out-groups.
While the most highly-upvoted comments consisted largely of references to the Muslim threat as an imminent threat and the subsequent necessity of a ‘defensive stance’, these highly-upvoted comments failed to specify how members should respond to this threat. For instance,
while contributors argued that the threat may, in part, be alleviated by striking down the
‘disastrous’ immigration policies that allow Muslims to ‘flood’ to the U.S. and the West, the
highly-upvoted comments provided no guidelines about how to defend themselves against
Muslims who already reside within the country. It would appear, then, that members’ highly-
upvoted discussions around defending themselves from Muslims are meant to be generalized
and, as such, are careful not to overtly incite violence against a particular person or group.
An assessment of the random sample of comments revealed that the highly-upvoted anti-Muslim comments discussed above went unchallenged, at least visibly. This was largely because any comments that ran counter to the most highly-upvoted narrative typically did not receive any
upvotes by other members and, resultantly, were less visible than their highly-upvoted
counterparts. For instance, one user who suggested that ‘the basic ideals of Islam’ include such
things as ‘faith, generosity, self-restraint, upright conduct’ (UserID: 1928, 1) did not have their
comment upvoted. Moreover, comments that opposed the more prominent anti-Muslim
narratives, such as ‘Terrorist come in all colors. So do Americans’ (UserID: 1680, 2), were far
from the most highly-upvoted comments in the community. Similarly, users arguing that
Muslims who commit violent acts have ‘psychological problems so it’s not a terror related
attack’ (UserID: 1860, 1) or that ‘there are a lot of good muslim people, but theyre bad muslims’
(UserID: 1733, 4) were largely unpopular within the community.
While there were a small number of particularly anti-Muslim comments in the random
sample, they were less frequent (i.e., 1.6% of the random sample) and, for the most part, less
extreme than those in the highly-upvoted sample. To illustrate, one member in the random
sample wondered ‘why do countries like Sweden let Muslim war criminals into their country
with no repercussion?’ (UserID: 1788, 2). Another user here argued ‘if they make Europe more
Muslim then the West will stop bombing the Middle East’ (UserID: 1609, 30). There were,
however, a number of comments in the random sample that reflected the more extreme anti-
Muslim content that characterized the anti-Muslim rhetoric in the highly-upvoted sample, but the
comments in the random sample received much lower voting scores than the highly-upvoted
messages – most likely because references to Muslims as an external threat were relatively rare
among the comments in the random sample.
The internal threat
Another threat was commonly expressed in the most highly-upvoted comments: the Left. Sentiment against this out-group comprised 13.4% of the most highly-upvoted comments in the sample. An assessment of the most highly-upvoted messages suggested that the Leftist out-group
was perceived as violent, particularly against the extreme right-wing in-group. Here a widely
held belief was, as two users described, ‘the Left are not as peaceful as they pretend to be,’
(UserID: 392, 1982) and they are ‘growing increasingly unhinged every passing day’ (UserID:
880, 2065). Members who posted highly-upvoted content oftentimes noted that the Left is, as
one user put it, sending ‘death threats to people who have different political views than [them]’
(UserID: 880, 2065). Users who posted highly-upvoted comments in the r/The_Donald community
even argued that the Left was willing to go so far as physically attack their right-wing political
opponents or, as one user put it, ‘smash a Trump supporter over the head with a bike lock
because of his intolerance to non-liberal viewpoints’ (UserID: 619, 1640). Yet in light of fellow
members being depicted as the victims of the Left’s violence, an analysis of the highly-upvoted
content revealed that members in r/The_Donald encouraged each other not to react to the Left
because, as one user explained it, ‘[v]iolent backlash from the right is exactly what they’re
looking for so they can start point fingers and calling us the violent ones’ (UserID: 1000, 2173).
Also uncovered in the highly-upvoted comments were discussions about the Left seeking
to weaken the West in general but the U.S. in particular through their crusade against the white
race. Specifically, users who posted highly-upvoted content oftentimes expressed frustration about how the Left has constantly undermined the many societal contributions made by the white
race, which – as one user best summed up – include ‘economies that actually work, philosophy
as a whole, anti-biotics and healthcare in general, printed press, technology in general, etc.’
(UserID: 357, 1688). Further added here were discussions about how the Left would rather focus
on what white people have done wrong and ‘shit all over white people for what they did in the
past’ (UserID: 357, 1688), as one user put it, rather than acknowledge the contributions that the
white race has made to Western society. Together, the general sentiment was that the Left was
seeking to deter whites – and by extension members of the r/The_Donald community – from feeling any sense of pride in their white ancestry, as evidenced by the following highly-upvoted comment:
You’re a white male kid. You’ve grown up being told you’re the reason for all the
world’s ills. You want to speak out. You have to go to your high school in the
dark, slouched, with your face hidden to simply post a sign that says It’s okay to
be white. Leave. School hands security footage over to the media. The media
broadcasts the picture nationwide and says they are investigating. Just step back
and take in how crazy this is. (UserID: 77, 2099)
As a result, members of r/The_Donald concluded not only that Leftists ‘hate white people’
(UserID: 479, 1582), but also that, as one user explained, ‘our enemies sure want whites to be a
minority or even dead’ (UserID: 197, 1356), as evidenced by their support for the immigration of
‘undesirables’ into the West in general and the U.S. in particular.
Apparent in the highly-upvoted content was user support for discussions about the Left’s
support for immigration – especially from Muslims entering the West from dangerous countries.
According to users who posted highly-upvoted comments, this was another key component of
the Left’s so-called anti-white crusade. Members noted that although Muslim immigrants
increase the risk of violence and terrorism in the U.S., such a threat does not prevent the Left
from encouraging them to migrate to the U.S. Perhaps this is because, as one user who posted a
highly-upvoted comment emphasized, the Left ‘do[es]n’t view ISIS or terrorism as a threat to
this country at all... [and the Left is] more scared of Trump’s policies than ISIS’ (UserID: 720,
1361). Still, users who posted the most highly-upvoted comments in r/The_Donald noted that
when ‘dangerous Muslim immigrants’ inevitably commit violent acts against the West, members
believe that these instances of Muslim-perpetrated violence are simply overlooked and excused
by the Left. As one user explained: ‘Liberals will defend these sick fucks because they come
from a differnt culture and don’t understand western society’ (UserID: 799, 4229). The general
sentiment expressed in this regard was that the Left favors Muslim immigrants over the well-
being of Americans – sentiment that was oftentimes met with frustration, as expressed by one
user: ‘the left's love affair with islam is so cringey and infuriating’ (UserID: 687, 1653). Linked
to these discussions were conspiracy theories about the left-wing media playing a major role in
distorting public perceptions about the ‘Muslim threat’. One highly-upvoted comment, for
example, formed an anti-Muslim acronym, based on the media’s distortion of Muslim-
perpetrated terrorism: ‘Just another random incident. Impossible to discern a motive. Has
nothing to do with religion. Always believe the media. Don’t jump to any conclusions’ (UserID:
486, 1461; emphasis in original).
An examination of the random sample of comments revealed few discussions about the
so-called Leftist threat (i.e., 5.7% of the random sample). Not only was the Left mentioned less
often in the random sample than in the highly-upvoted sample, but it was also mentioned in less
extreme ways. Attacks against the Left in the random sample were intended to be more humorous
than serious. To illustrate, one user criticized the Left, noting that, ‘in addition to being unable to
meme, leftists and ANTIFA can’t shower either’ (UserID: 1891, 17). Similarly, another user
admonished the ‘[t]ypical leftist hypocrites. That’s why we can’t play fair with them. They never
reciprocate the gesture of good faith in anything’ (UserID: 1719, 9). Another less popular
comment in the random sample suggested that the Left seeks to ‘protect free speech by banning
it. Leftist logic’ (UserID: 1751, 1). This, however, was the extent to which the Left was referred
to in the random sample of comments.
Discussion
The results of the current study suggest that a particularly effective form of collective identity
building was prevalent throughout the r/The_Donald community – one that was riddled with
hateful sentiment against members’ perceived enemies: Muslims and the Left. The thematic
analysis of the highly-upvoted content indicates that members most often agree with (i.e.,
upvote) extreme views toward two of their key adversaries to mobilize their social movement
around the so-called threat. In particular, r/The_Donald’s rules specifically condoned anti-
Muslim content, which in large part explains why such content was so prevalent among the most
highly-upvoted messages. Additionally, Donald Trump’s own anti-Muslim rhetoric, which has
emboldened right-wing extremists to commit hateful acts against Muslims (see Müller and
Schwarz, 2019), may explain why his supporters on r/The_Donald were so eager to vilify the
external threat. Consistent with this ‘Trump Effect’ (Müller and Schwarz, 2019), the results of the
current study suggest that Trump’s anti-Muslim rhetoric emboldened his fervent supporters on
r/The_Donald to spread anti-Muslim content via Reddit’s upvoting algorithm. Indeed, anti-
Muslim hate speech is increasingly becoming an ‘accepted’ form of racism across the globe and
extreme right movements have capitalized on this trend to help mobilize an international
audience of right-wing extremists (see Hafez, 2014; see also Froio and Ganesh, 2019). For
instance, the othering of Muslims, specifically, is commonly used to strengthen in-group ties
between far-right extremists against a common enemy (Hafez, 2014). By casting Muslims as
violent perpetrators and themselves as those who must defend against ‘them’, for example,
members of r/The_Donald framed themselves as victims rather than perpetrators – an
othering tactic commonly used by right-wing extremists, among other extremist movements (see
Meddaugh and Kay, 2009). This suggests that, by endorsing anti-Muslim sentiment,
r/The_Donald offered extremists a virtual community wherein they could bond with likeminded
peers around a common enemy. Worth highlighting, though, is that while a large proportion of
the highly-upvoted content in the current study framed Muslims as a serious and imminent
threat, users tended not to provide specific strategies for responding to the so-called threat. Such
behavioral monitoring may have been done in an effort to safeguard the community from being
banned for breaking Reddit’s sitewide policy against inciting violence. Regardless of these
efforts, recent amendments to Reddit’s content policy that explicitly target hate speech led to the
ban of r/The_Donald (see Tiffany, 2020).
There are a number of reasons that might explain why the Left is the target of othering
discourse in the highly-upvoted content on r/The_Donald. For instance, such sentiment most
likely reflects the increasing political divide in the U.S. and other parts of the Western world,
where the ‘right’ often accuse the ‘left’ of threatening the wellbeing of the nation (see Dimock et
al., 2014). However, the results of the current study suggest that some users on r/The_Donald
took this narrative to the extreme. To illustrate, the anti-Left discourse uncovered in the
highly-upvoted content mirrored the ‘paradoxical’ identity politics expressed by the alt-right
movement, which accuses the Left of authoritarian censorship yet positions the far right as the
harbinger of ‘peaceful social change’ (see Phillips and Yi, 2018). On r/The_Donald, the Left
(i.e., the out-group) was construed as the opposite of peaceful and even as a violent
and physical threat to members of the in-group. Framing the out-group as a physical threat to
members of the in-group served to further delineate the borders between ‘them’ versus ‘us’ and
to solidify bonds between members of the in-group who, together, face a common enemy – a
community-building tactic that is commonly discussed in the social movement literature (see
Futrell and Simi, 2004; see also Perry and Scrivens, 2016). Notably, however, in the face of this
so-called physical threat, users who posted highly-upvoted comments cautioned others against
retaliating against ‘Leftist-perpetrated violence’. It is probable that members wanted to avoid
discussions of offline violence because, had they encouraged one another to react violently
against their perceived enemies, they would have risked having their community banned for
violating Reddit’s content policy against inciting violence. Similar tactics have been reported in empirical studies
which found that extreme right adherents in online discussion forums will deliberately present
their radical views in a subtler manner in fear that they will be banned from the virtual
community (see Daniels, 2009; Meddaugh and Kay, 2009).
Comparing the content in the highly-upvoted sample with the random sample, it is clear
that dissenting views were entirely absent from the most visible content in the highly-upvoted
sample. This suggests that Reddit’s voting algorithm facilitated an
‘echo chamber’ effect in r/The_Donald – one which promoted anti-Muslim and anti-Left
sentiment. In particular, rather than encouraging a variety of perspectives within discussions on
r/The_Donald, Reddit’s voting algorithm seemed to allow members to create an echo chamber
by shaping the discourse within their communities in ways that reflected a specific set of values,
norms, and attitudes about the in-group. For example, as evidenced by an analysis of
r/The_Donald’s most highly-upvoted comments, members of the subreddit community tended to
only support, or upvote, extreme views about Muslims and the Left. That is, among the
community’s most highly-upvoted content was an absence of comments which offered an
alternative perspective or even a dissenting view to the community’s dominant extremist
narrative. In some cases, members even used humor and sarcasm – which is commonly used by
commenters on Reddit (see Mueller, 2016) and related sites like Imgur (see Mikal et al., 2014)
regardless of political ideology – to express otherwise ‘taboo’ anti-Muslim and anti-Left views,
reflecting a longstanding tactic used by the extreme right to ‘say the unsayable’ (see Billig, 2001).
Overall, our study’s findings suggest that Reddit’s upvoting and downvoting features
played a central role in facilitating collective identity formation among those who post extreme
right-wing content on r/The_Donald. Reddit’s upvoting feature functioned to promote and
normalize otherwise unacceptable views against the out-groups to produce a one-sided narrative
that serves to reinforce members’ extremist views, thereby strengthening bonds between
members of the in-group. On the other hand, Reddit’s downvoting feature functioned to ensure
that members were not exposed to content that challenged their extreme right-wing beliefs,
which in turn functioned as an echo chamber for hate and may also have functioned to correct the
behavior of dissenting members – a possibility gleaned from the results of previous research on
the effect of Imgur’s bidirectional voting features on social identification (Hale, 2017). Seen
through the lens of social movement theory, the extreme views against Muslims and the Left that
characterized the ‘othering’ discourses among the most highly-upvoted comments may have
been more likely to produce a stronger reaction from r/The_Donald’s extreme right-wing in-
group and, as a result, members may have been more likely to mobilize around these two threats.
Limitations and future research
The current study includes a number of limitations that may inform future research. First, our
sample procedure may have implications for the study’s findings, as the highly-upvoted sample
included a larger proportion of thread-starting comments than the random sample (i.e., 89.3%
and 46.5%, respectively). As a result, the fewer instances of right-wing extremist content found
in the random sample could reflect its greater proportion of response comments (i.e., comments
that may be less visible than thread-starters, or that support extremist content rather than
featuring it themselves) when compared to the predominantly thread-starting comments of the
highly-upvoted sample. Future work should explore this possibility by creating a matched control
sample in which the number of thread-starting and response comments are equal to that of the
experimental sample. Second, we did not identify an important indicator of user engagement: the
number of replies to comments in the highly-upvoted sample or the random sample. Exploring
the number of replies to comments featuring anti-Muslim and anti-Left content versus those
without such content may provide additional insight into user engagement. Third, we drew from
social movement theory to explore the role of Reddit’s unique voting algorithm in promoting
‘othering’ discourses that in turn facilitated collective identity formation on r/The_Donald, but
clearly there are other theoretical frameworks that could be applied in this context as well.
Bandura's (2002) theory of moral disengagement, for example, offers interesting insights into the
themes uncovered in the current study, including detrimental actions, injurious effects, and
victimization. These aspects of Bandura’s theory function to enhance the likelihood of moral
disengagement and are bound by notions of ‘us’ vs ‘them’, all of which should be explored in future research.
In closing, the results of the study highlight at least four additional paths for further
inquiry. The first involves temporal change. Temporal analyses may be able to effectively
capture how the anti-Muslim and anti-Left sentiment uncovered in the current study evolves over
time. This, in turn, may offer new insight into the evolution of extremists’ collective identities in
virtual settings (see Scrivens et al. 2018) and provide more insight into whether and how the
nature of r/The_Donald’s echo chamber dynamics change over time. Second, the data for the
current study were gathered only from the period after Trump’s U.S. presidential election victory,
and not before. Future studies could, for example, incorporate an interrupted time series analysis
into the research design, assessing whether Trump’s election had a significant impact on the
volume and severity of hateful speech on r/The_Donald. Since offline events can significantly
impact the sentiment in far-right online communities (see Bliuc et al., 2019), determining the
impact of offline events, such as Donald Trump’s election, may shed light on whether and how they
emboldened right-wing extremist sentiment within the community. Other external events that
may have influenced the most highly-upvoted comments on r/The_Donald during the 2017
timeline include Islamist terror attacks such as the New York City truck attack and the London
Bridge attack, or other galvanizing events such as the Unite the Right rally in Charlottesville.
Future research should therefore attempt to account for these central events and subsequent
interaction effects during the analysis stage. Research is also needed to assess how the banning
of r/The_Donald impacted its online community in general and collective identity formation in
particular, such as whether its users migrated to other political subreddits (e.g., r/Conservative),
backup or mimic sites (e.g.,, or alternative platforms with more lenient content
policies (e.g., Gab or Voat) in an effort to express their extremist views. Lastly, future research
could compare the discourse within and across a number of other extreme subreddits such as, for
example, the recently quarantined r/ImGoingToHellForThis, which is known to promote ‘racist
nationalism’ (see Topinka, 2018). This may provide insight into how collective identities form in
extreme online communities in general, and how Reddit’s unique features facilitate collective
identity formation, in particular.
Account and Community Restrictions (n.d.-a). Promoting Hate Based on Identity or
Vulnerability. Available at:
(accessed 30 June 2020).
Account and Community Restrictions (n.d.-b). Quarantined Subreddits. Available at:
restrictions/quarantined-subreddits. (accessed 6 January 2020).
Adams J and Roscigno V (2005). White supremacists, oppositional culture and the World Wide
Web. Social Forces 84(2): 759—778.
Anti-Defamation League (2018). New hate and old: The changing face of American white
supremacy. Anti-Defamation League.
Bandura A (2002). Selective moral disengagement in the exercise of moral agency. Journal of
Moral Education 31(2): 101—119.
Berger JM (2018). Extremism. Cambridge: The MIT Press.
Berger JM (2016). Nazis vs. ISIS on Twitter: A comparative study of white nationalist and
ISIS online social media networks. The George Washington University Program on
Extremism. Available at:
(accessed 8 January 2020).
Berger JM and Strathearn B (2013). Who matters online: Measuring influence, evaluating
content and countering violent extremism in online social networks. The International
Centre for the Study of Radicalisation and Political Violence. Available at: (accessed
8 January 2020).
Billig M (2001). Humour and hatred: The racist jokes of the Ku Klux Klan. Discourse & Society
12(3): 267—289.
Bliuc AM, Betts J, Vergani M, Iqbal M and Dunn K (2019). Collective identity changes in far-
right online communities: The role of offline intergroup conflict. New Media & Society
21(8): 1770–1786.
Bowman-Grieve L (2009). Exploring “Stormfront”: A virtual community of the radical right.
Studies in Conflict & Terrorism 32(11): 989—1007.
Braun V and Clarke V (2006). Using thematic analysis in psychology. Qualitative Research in
Psychology 3(2): 77—101.
Burnap P and Williams M L (2015). Cyber hate speech on Twitter: An application of machine
classification and statistical modeling for policy and decision. Policy and Internet 7(2):
Burris V, Smith E and Strahm A (2000). White supremacist networks on the Internet.
Sociological Focus 33(2): 215—235.
Caiani M and Kröll P (2014). The transnationalization of the extreme right and the use of the
Internet. International Journal of Comparative and Applied Criminal Justice 39(4).
Carman M, Koerber M, Li J, Choo KKR and Ashman H (2018). Manipulating visibility of
political and apolitical threads on reddit via score boosting. In 17th IEEE International
Conference on Trust, Security and Privacy in Computing and Communications/12th
IEEE International Conference on Big Data Science and Engineering, pp. 184—190.
Conway M (2017). Determining the role of the Internet in violent extremism and terrorism: Six
suggestions for progressing research. Studies in Conflict & Terrorism 40(1): 77—98.
Conway M, Scrivens R and Macnair L (2019). Right-wing extremists’ persistent online
presence: History and contemporary trends. The International Centre for Counter-
Terrorism – The Hague 10: 1—24.
Crisafi A (2005). The seduction of evil: An examination of the media culture of the current
white supremacist movement. In Ni Fhlainn S and Myers WA (eds) The Wicked
Heart: Studies in the Phenomena of Evil. Oxford: Inter-Disciplinary Press, pp. 29—44.
Daniels J (2009). Cyber racism: White supremacy online and the new attack on civil rights.
Lanham, MD: Rowman and Littlefield Publishers.
Diani M (1992). The concept of social movement. Sociological Review 40(1): 1—25.
Dimock M, Doherty C, and Kiley J (2014). Political polarization in the American public: How
increasing ideological uniformity and partisan antipathy affect politics, compromise and
everyday life. Pew Research Center, 12.
Edwards GS and Rushin S (2018). The effect of President Trump’s election on hate crimes.
SSRN: = 3102652 (accessed 8 January 2020).
Ekman M (2014). The dark side of online activism: Swedish right-wing extremist video activism
on YouTube. Journal of Media and Communication Research 30(56): 79—99.
Ekman M (2018). Anti-refugee mobilization in social media: The case of Soldiers of Odin.
Social Media + Society 4(1): 1—11.
Feinberg A (2018). Conde Nast sibling reddit says banning hate speech is just too hard. The
Huffington Post, 9 July. Available at:
hate-speech-hard_n_5b437fa9e4b07aea75429355. (accessed 8 January 2020).
Froio C and Ganesh B (2019). The transnationalisation of far-right discourse on Twitter: Issues
and actors that cross borders in Western European democracies. European Societies
21(4): 519—539.
Futrell R and Simi P (2004). Free spaces, collective identity, and the persistence of US white
power activism. Social Problems 51(1): 16—42.
Gibbs S (2018). Open racism and slurs are fine to post on reddit, says CEO. The Guardian, 12
April. Available at:
reddit-post-ceo-steve-huffman. (accessed 8 January 2020).
Graham R (2016). Inter-ideological mingling: White extremist ideology entering the
mainstream on Twitter. Sociological Spectrum 36(1): 24—36.
Hafez F (2014). Shifting borders: Islamophobia as common ground for building pan-European
right-wing unity. Patterns of Prejudice 48(5): 479—499.
Hale BJ (2017). “+1 for Imgur”: A content analysis of SIDE theory and common voice effects on
a hierarchical bidirectionally-voted commenting system. Computers in Human Behavior
77(1): 220—229.
Hauser C (2017). Reddit bans ‘incel’ group for inciting violence against women. The New York
Times, 9 November. Available at: (accessed 8
January 2020).
Hunt SA and Benford RD (1994). Identity talk in the peace and justice movement. Journal of
Contemporary Ethnography 22(4): 488—517.
Maguire M and Delahunt B (2017). Doing a thematic analysis: A practical, step-by-step guide
for learning and teaching scholars. AISHE-J: The All Ireland Journal of Teaching and
Learning in Higher Education, 9(3).
Maher C, Hadfield M and Hutchings M (2018). Ensuring rigor in qualitative data analysis: A
design research approach to coding combining NVIVO with traditional material methods.
International Journal of Qualitative Methods 17(1): 1—13.
Martin T (2017). Dissecting Trump’s most rabid online following. FiveThirtyEight, 23 March.
Available at:
following/. (accessed 8 January 2020).
Massanari A (2017). #Gamergate and the fappening: How Reddit’s algorithm, governance, and
culture support toxic technocultures. New Media & Society 19(3): 329—346.
Meddaugh PM and Kay J (2009). Hate speech or “reasonable racism?” The other in Stormfront.
Journal of Mass Media Ethics 24(4): 251—268.
Melucci A (1995). The process of collective identity. In Johnston H and Klandermans B (eds)
Social Movements and Culture. Minneapolis: University of Minnesota Press, pp. 41—63.
Mikal JP, Rice RE, Kent RG, and Uchino BN (2014). Common voice: Analysis of behavior
modification and content convergence in a popular online community. Computers in
Human Behavior 35: 506—515.
Mueller C (2016). Positive feedback loops: Sarcasm and the pseudo-argument in Reddit
communities. Working Papers in TESOL & Applied Linguistics 16(2): 84—97.
Müller K and Schwarz C (2019). From hashtags to hate crimes. Available at: (accessed 8 January 2020).
Nagle A (2017). Kill all normies: The online culture wars from Tumblr and 4chan to the alt-
right and Trump. Alresford: John Hunt Publishing.
Nouri L and Lorenzo-Dus N (2019) Investigating Reclaim Australia and Britain First’s use of
social media: Developing a new model of imagined political communities online. Journal
for Deradicalization 18: 1—37.
O’Callaghan D, Greene D, Conway M, Carthy J and Cunningham P (2014). Down the
(white) rabbit hole: The extreme right and online recommender systems. Social Science
Computer Review 33(4): 1—20.
Perry B and Scrivens R (2016). White pride worldwide: Constructing global identities online. In:
Schweppe J and Walters M (eds) The Globalization of Hate: Internationalizing Hate
Crime. Oxford: Oxford University Press, pp. 65—78.
Phillips J and Yi J (2018). Charlottesville paradox: The ‘liberalizing’ alt right, ‘authoritarian’
left, and politics of dialogue. Society 55(3): 221—228.
Polletta F and Jasper JM (2001). Collective identity and social movements. Annual Review of
Sociology 27(1): 283—305.
r/The_Donald Wiki (n.d.). Can you explain your rules? Available at: (accessed 6 January 2020). Competitive Analysis (n.d.). Available at: (accessed 30 June 2020).
Reddit Content Policy (n.d.). Available at: (accessed 6 January 2020).
Reddit Press (n.d.). Reddit by the numbers. Available at:
(accessed 6 January 2020).
Romano A (2017). Reddit just banned one of its most toxic forums. But it won’t touch
The_Donald. Vox, 13 November. Available at:
controversy. (accessed 8 January 2020).
Saldaña J (2009). The coding manual for qualitative researchers. Thousand Oaks, CA: Sage.
Scrivens R, Davies G and Frank R (2018). Measuring the evolution of radical right-wing posting
behaviors online. Deviant Behavior 41(2): 216—232.
Simi P and Futrell R (2015). American Swastika: Inside the White Power Movement’s Hidden
Spaces of Hate (2nd Ed). Lanham, MD: Rowman and Littlefield.
Snow D (2001). Collective Identity and Expressive Forms. University of California, Irvine
eScholarship Repository. Available at: (accessed 8
January 2020).
Stewart E (2019). Reddit restricts its biggest pro-Trump board over violent threats. Vox, 26 June.
Available at:
message-board-quarantine-ban. (accessed 8 January 2020).
Stier S, Posch L, Bleier A and Strohmaier M (2017). When populists become popular:
Comparing Facebook use by the right-wing movement Pegida and German political
parties. Information, Communication and Society 20(9): 1365—1388.
Tiffany K (2020). Reddit is done pretending The Donald is fine. The Atlantic, 29 June.
Available at:
donald-chapo-content-policy/613639. (accessed 29 June 2020).
Topinka RJ (2018). Politically incorrect participatory media: Racist nationalism on
r/ImGoingToHellForThis. New Media & Society 20(5): 2050—2069.
Ward J (2018). Day of the trope: White nationalist memes thrive on Reddit’s r/the_donald.
Southern Poverty Law Center. Available at:
thrive-reddits-rthedonald. (accessed 8 January 2020).
Wildman J (2020). What is Reddit? Digital Trends, 1 July. Available at: (accessed 10 July 2020).
Zadrozny B and Collins B (2018). Reddit bans qanon subreddits after months of violent threats.
NBC News, 12 September. Available at:
news/reddit-bans-qanon-subreddits-after-months-violent-threats-n909061. (accessed 8
January 2020).
Zannettou S, Bradlyn B, De Cristofaro E, Kwak H, Sirivianos M, Stringhini G and Blackburn J
(2018). What is Gab: A bastion of free speech or an alt-right echo chamber. In
WWW ’18: Companion Proceedings of The Web Conference 2018.
Reddit has since amended its content policy to discourage ‘promoting hate based on identity or
vulnerability’ (see Account and Community Restrictions, n.d.-a).
For more on the data, see
Note that a Reddit ‘comment’ is a user-submitted piece of text that is created in response to a
particular ‘post’. For instance, when a r/The_Donald member finds an interesting link to a news
article, they may create a ‘post’ which links this content to r/The_Donald, to which other
members may comment.
All comment authors were assigned pseudonyms to protect user anonymity. All online
comments were quoted verbatim.
This number represents the comment’s ‘vote score’, or the number of upvotes given to a
comment minus downvotes.
Reddit has used this policy to ban popular extremist subreddits in the past, including the anti-
women community r/Incels in 2017 (Hauser, 2017) and the conspiracy theory community
r/QAnon in 2018 (Zadrozny and Collins, 2018).
... Reddit is one of the fastest growing social media platforms and is one of only two online platforms that has experienced statistically significant growth in recent years (Pew Research Center, 2019. 1 While it calls itself the "front page of the Internet, " Reddit has also received extensive attention for the presence of hate and extremist-related content (Massanari, 2020). For example, once-popular subreddits like r/The_Donald and r/MGTOW provided a space for toxic political discussions (Flores-Saviaga et al., 2018;Gaudette et al., 2020). 2 Despite Reddit's crackdown on some of the largest problematic communities (Allyn, 2020), less visible, yet still controversial, subreddits continue to operate freely. ...
... In a sample of individual users on Reddit, 43% were found to be those who steadily used or gradually started to use more toxic comments over time (Mall et al., 2020). Other research points to communities that discuss discriminatory views related to race/ethnicity and politics that have become popular on Reddit (Topinka, 2018;Gaudette et al., 2020;Mittos et al., 2020). Subreddits identified for their sexist concentration on Men's rights and anti-feminist activism also contribute to harmful dynamics (Massanari, 2017;LaViolette and Hogan, 2019). ...
... Common across hate related topics on this platform is the use of negative language to both form an ingroup identity and create toxic conversations around "others" or outgroups. Indeed, on the subreddit r/The_Donald users were found to use hostile language by describing Muslim immigrants as terror threats to western civilization (Gaudette et al., 2020). While Reddit has attempted to crack down and moderate explicitly racist, sexist, and hate speech content (Chandrasekharan et al., 2017(Chandrasekharan et al., , 2020 much of it is still present and available to be viewed by individuals on this platform. ...
Full-text available
Digital media give the public a voice to discuss or share their thoughts about political and social events. However, these discussions can often include language that contributes to creating toxic or uncivil online environments. Using data from Reddit, we examine the language surrounding three major events in the United States that occurred in 2020 and early 2021 from the comments and posts of 65 communities identified for their focus on extreme content. Our results suggest that social and political events in the U.S. triggered increased hostility in discussions as well as the formation of a set of shared language for describing and articulating information about these major political/social moments. Findings further reveal shifts in language toward more negativity, incivility, and specific language surrounding non-White outgroups. Finally, these shifts in language online were found to be durable and last after the events occurred. Our project identifies that negative language is frequently present on social media and is not necessarily exclusive to one group, topic, or real-world event. We discuss the implications of language as a powerful tool to engage, recruit, and radicalize those within communities online.
... Each user may then subscribe to a set of subreddits of their choice, the most popular and current content of which subreddits constitutes the user's default front page when logged in. One of the most common activities on Reddit is the sharing of links to external news sources (Gaudette et al., 2021). These links are important sources of news which may unintentionally reveal important biases in news exposure and consumption. ...
... Part of the answer might lie in certain Reddit features, which inadvertently foster political polarization. Features such as upvoting, default sorting filter, post aggregation across subreddits, and limited tool support for moderators have been implicated in fostering "toxic technocultures" and ideological extremism on Reddit (Gaudette et al., 2021;Massanari, 2017). Our study implicates Reddit's subreddit subscription feature in making COVID-19 a politically polarizing phenomenon. ...
... Thus, users may only be exposed to the narrative of COVID-19 as "not as deadly as the flu" thus downplaying its risk. Furthermore, users may decide to spend most of their time on Reddit interacting with others within select subreddits, where narratives may crystallize into echo chambers (Gaudette et al., 2021). People upvote content and by voting for certain topics, it might appear that certain COVID-19 viewpoints are valid and mainstream when they are not. ...
Full-text available
In this exploratory study, we examine political polarization regarding the online discussion of the COVID-19 pandemic. We use data from Reddit to explore the differences in the topics emphasized by different subreddits according to political ideology. We also examine whether there are systematic differences in the credibility of sources shared by the subscribers of subreddits that vary by ideology, and in the tendency to share information from sources implicated in spreading COVID-19 misinformation. Our results show polarization in topics of discussion: the Trump, White House, and economic relief topics are statistically more prominent in liberal subreddits, and China and deaths topics are more prominent in conservative subreddits. There are also significant differences between liberal and conservative subreddits in their preferences for news sources. Liberal subreddits share and discuss articles from more credible news sources than conservative subreddits, and conservative subreddits are more likely than liberal subreddits to share articles from sites flagged for publishing COVID-19 misinformation.
... While the far right's presence on the web is as diverse as the users and groups which constitute it, it can be roughly divided into two categories: first, that which takes place on inward-facing or fringe extremist sites, the 'darker corners' of the web (Chandrasekharan et al., 2017; Rogers, 2020), which have historically been very important for the far right's digital efforts; and second, the more recent and seemingly less extremist forms of presence in the digital mainstream (e.g. Crosset et al., 2019; Farkas et al., 2018a; Gaudette et al., 2020; Matamoros-Fernández, 2017), which is the particular focus of this thesis. ...
... In recent years, research has shown how far-right accounts have become increasingly common on conventional social media sites like Facebook, Twitter, and YouTube (Ben-David & Matamoros Fernández, 2016; Crosset et al., 2019; Ekman, 2019; Matamoros-Fernández, 2017; Munger & Phillips, 2019; Oboler, 2016; Wahlström et al., 2020), as well as popular sites with highly eclectic user bases like Reddit, 4chan, and the Swedish online forum Flashback (Colley & Moore, 2020; Gaudette et al., 2020; Massanari, 2017; Phillips, 2019; Topinka, 2018; A. Törnberg & Törnberg, 2016a, 2016b). ...
... While datafication provides unique opportunities for researchers to gain insight into settings that are becoming increasingly central to everyday life, politics, and news media, the opportunities of large amounts of aggregated data have also been leveraged for repressive purposes like surveillance (Brayne, 2017) and political persuasion campaigns (Tufekci, 2014). Some, in fact, argue that the ways in which many platforms function are particularly efficient in enabling the creation, spread, and persistence of harmful content. Previous research shows that the far right has been exceedingly skilful in leveraging these opportunities (Colley & Moore, 2020; Gaudette et al., 2020; Massanari, 2017; Munn, 2019, 2020). ...
Full-text available
Background: This thesis explores the far right online beyond the study of political parties and extremist far-right sites and content. Specifically, it focuses on the proliferation of far-right discourse among ‘ordinary’ internet users in mainstream digital settings. In doing so, it aims to bring the study of far-right discourse and the enabling roles of digital platforms and influential users into dialogue. It does so by analysing what is communicated and how; where it is communicated and therein the roles of different socio-technical features associated with various online settings; and finally, by whom, focusing on particularly influential users. Methods: The thesis uses material from four different datasets of digital, user-generated content, collected at different times through different methods. These datasets have been analysed using mixed methods approaches wherein interpretative methods, primarily in the form of critical discourse analysis (CDA), have been combined with various data processing techniques, descriptive statistics, visualisations, and computational data analysis methods. Results: The thesis provides a number of findings in relation to far-right discourse, digital platforms, and online influence, respectively. In doing so it builds on the findings of previous research, illustrates unexpected and contradictory results in relation to what was previously known, and makes a number of interesting new discoveries. Overall, it begins to unravel the complex interconnectedness of far-right discourse, platforms, and influential users, and illustrates that to understand the far-right’s efforts online it is imperative to take several dimensions into account simultaneously. Conclusion: The thesis makes several contributions. First, the thesis makes a conceptual contribution by focusing on the interconnectedness of far-right efforts online. 
Second, it makes an empirical contribution by exploring the multifaceted grassroots or 'non-party' dimensions of far-right mobilisation. Finally, the thesis makes a methodological contribution through its mix of methods, which illustrates how different aspects of the far right, over varying time periods, diversely sized and shaped datasets, and user constellations, can be approached to reveal broader overarching patterns as well as intricate details.
... Both Facebook and VKontakte offer audience-specific advertising ("targeting") (Baugut & Neumann, 2020; Gaudette et al., 2020; Koehler, 2014). ...
... The (now banned) subreddit r/The_Donald in particular has been discussed as a gathering place for primarily racist content (Peck, 2020). Upvoting such content was used there to mobilize the like-minded and to foster a (radical) shared identity (Gaudette et al., 2020). The proportion of such content varies, however: a content analysis of Reddit's r/The_Donald and the /pol boards of 4chan and 8chan, published after the main literature review, showed that slightly more than a third of the content on 8chan contained hate speech, compared to a quarter of the content on 4chan and 14 percent on Reddit. ...
Technical Report
Full-text available
Extremists are increasingly turning to dark social media. The term "dark social media" encompasses various types of alternative social media (social counter-media such as Gab, context-bound alternative social media such as VKontakte, fringe communities such as 4chan), as well as various types of dark channels (originally private channels such as Telegram and "separée" channels such as closed Facebook groups). This report examines the opportunity structures for extremism and extremism prevention that arise from the shift towards dark social media. To this end, a theoretical framework links influencing factors on three levels: (1) regulation (for instance through the German Network Enforcement Act, NetzDG) at the societal macro level; (2) different genres and types of (dark) social media at the meso level of individual platforms; (3) attitudes, norms, and technical affordances as motivators of human behaviour in the sense of the theory of planned behaviour (Ajzen & Fishbein, 1977) at the micro level. Based on this framework, the opportunity structures for extremism and extremism prevention are examined in two studies: (1) a detailed platform analysis of dark and established social media (N = 19 platforms); (2) a scoping review of the state of research on (dark) social media in the context of extremism and extremism prevention (N = 142 texts). The results of the platform analysis provide nuanced insights into the opportunity structures created by different types and genres of (dark) social media. The scoping review offers an overview of the development of the research field and the typical research methods employed. Based on the collected data, research desiderata and implications for extremism prevention are discussed.
... These content-focused analyses further inspire an important line of inquiry into online extremism [13,32], ideological radicalization [44], hate speech [20,23], and false information [8]. However, two analytical approaches are notably absent from the otherwise broader investigation of the 'alternative platforms' phenomenon. ...
Full-text available
As yet another alternative social network, Gettr positions itself as the "marketplace of ideas" where users should expect the truth to emerge without any administrative censorship. We looked deep inside the platform by analyzing its structure, a sample of 6.8 million posts, and the responses from a sample of 124 Gettr users we interviewed to see if this actually is the case. Administratively, Gettr makes a deliberate attempt to stifle any external evaluation of the platform, as collecting data is marred with unpredictable and abrupt changes in its API. Content-wise, Gettr notably hosts pro-Trump content mixed with conspiracy theories and attacks on the perceived "left." Its social network structure is asymmetric and centered around prominent right-wing thought leaders, which is characteristic of all alt-platforms. While right-leaning users joined Gettr as a result of a perceived freedom-of-speech infringement by the mainstream platforms, left-leaning users followed them in numbers to "keep up with the misinformation." We contextualize these findings by looking into Gettr's user interface design to provide comprehensive insight into the incentive structure for joining and competing for the truth on Gettr.
... To address this gap, we conduct an ethnographic content analysis of three Stormfront discussion subforums to highlight how the regular exchange of white supremacist humor functions as a form of political resistance crucial to building and sustaining collective identity beyond "movement-centric" activities such as demonstrations and marches (McAdam 2011). While much has been learned from investigating the structure and content of white supremacist discourse (for example, see Weatherby and Scoggins 2005; Wodak and Richardson 2013), less attention has been given to the social interaction and racial identity constructed by exchanging this type of hateful messaging (for exceptions, see Askanius 2021; DeCook 2018; Eschler and Menking 2018; Gaudette et al. 2020). The exchange of white supremacist humor is part of the WSM's long-term strategy to re-brand White identity in the 21st century and incrementally normalize racism (Futrell and Simi 2017; Simi and Futrell 2020). ...
Full-text available
We conducted an ethnographic content analysis to examine the social interaction and racial identity constructed through the exchange of white supremacist humor shared on three Stormfront discussion subforums. Overall, white supremacist joke sharing functioned multidimensionally as it simultaneously fostered cohesion and contention among users. By mocking political correctness and non-Whites through the circulation of humorous images and text, white supremacists establish a communal atmosphere and produce a sense of solidarity among members in a more “fun” way than conventional speeches or publications. At the same time, joke sharing served as a source of contention when users exchanged jokes that violated collective identity norms, such as sharing “blonde jokes” that disparage White females. These findings underscore the ongoing necessity among members of the white supremacist movement to negotiate different ideological tenets. By attending to the social function of white supremacist joke sharing, insights derived from this investigation move beyond more formal social movement events such as marches and demonstrations by attending to the daily activities that white supremacists utilize to resist external threats.
Most online media consumption is not driven by a desire to seek out news and politics. However, the public may still encounter politics because it arises organically in communities devoted to non-political subjects. This study examines the potential of popular reality television online discussion forums to serve as online third spaces and stimulate political discussion due to the natural connections that audiences make between the cast members and “real life” in reality-based programming. Based on a quantitative analysis of political comments made to reality television subreddits and a survey of visitors to a popular subreddit focused on The Bachelor television show, the findings not only demonstrate the ability of entertainment-focused online communities to expose a broad segment of the public to political talk, but also point to the obstacles in promoting political discussion that is simultaneously enjoyable, informative, and tolerant of diverse viewpoints.
Full-text available
Although there is an ongoing need for law enforcement and intelligence agencies to identify and assess the online activities of violent extremists prior to their engagement in violence offline, little is empirically known about their online posting patterns generally or differences in their online patterns compared to non-violent extremists who share similar ideological beliefs particularly. Even less is empirically known about how their online patterns compare to those who post in extremist spaces in general. This study addresses this gap through a content analysis of postings from a unique sample of violent and non-violent right-wing extremists as well as from a sample of postings within a sub-forum of the largest white supremacy web-forum, Stormfront. Here the existence of extremist ideologies, personal grievances, and violent extremist mobilization efforts were quantified within each of the three sample groups. Several notable differences in posting patterns were observed across samples, many of which may inform future risk factor frameworks used by law enforcement and intelligence agencies to identify credible threats online. This study concludes with a discussion of the implications of the analysis, its limitations, and avenues for future research.
Full-text available
This paper examines the shift in focus on content policies and user attitudes on the social media platform Reddit. We do this by focusing on comments from general Reddit users from five posts made by admins (moderators) on updates to Reddit Content Policy. All five concern the nature of what kind of content is allowed to be posted on Reddit, and which measures will be taken against content that violates these policies. We use topic modeling to probe how the general discourse for Redditors has changed around limitations on content, and later, limitations on hate speech, or speech that incites violence against a particular group. We show that there is a clear shift in both the contents and the user attitudes that can be linked to contemporary societal upheaval as well as newly passed laws and regulations, and contribute to the wider discussion on hate speech moderation.
Full-text available
Cruel memes spread messages of hate via social media. The Internet itself extends the memes' geographical reach, and many such cruel memes circulate across borders. This article examines the activities of cruel memeing (practices of creating, commenting on, reinforcing ("liking"), sharing, remixing, and otherwise endorsing cruel memes) as microscale hostile engagements in global politics. This is a politics of the everyday that is accessible to people in their ordinary lives and that is designed to be entertaining as well as cruel. The research draws on a large dataset of memes, comment threads, and related information from two opposed Reddit communities, r/TheLeftCantMeme and r/TheRightCantMeme. The power-flow theoretical framework structures an interpretive analysis of how cruel memes circulate within the social media space. We examine content (including narrative, degree of cruelty, and other components) and velocity of information flow, as well as access to the flow. Focusing on racism, antisemitism, and disdain for political opponents, we draw on interpretive methods to analyze the flow of information that spreads hate. We find: (1) normalization of divisiveness, derisiveness, and bigotry; (2) justifications of violence; and (3) emergence of agents despite pseudonymity. Cruel memeing activities combine with the structure of the online communities to spread hatred far beyond social media platforms.
Full-text available
This policy brief traces how Western right-wing extremists have exploited the power of the internet from early dial-up bulletin board systems to contemporary social media and messaging apps. It demonstrates how the extreme right has been quick to adopt a variety of emerging online tools, not only to connect with the like-minded, but to radicalise some audiences while intimidating others, and ultimately to recruit new members, some of whom have engaged in hate crimes and/or terrorism. Highlighted throughout is the fast pace of change of both the internet and its associated platforms and technologies, on the one hand, and the extreme right, on the other, as well as how these have interacted and evolved over time. Underlined too is the persistence, despite these changes, of right-wing extremists' online presence, which poses challenges for effectively responding to this activity moving forward.
Full-text available
Despite the increasing citizen engagement with socio-political online communities, little is known about how such communities are affected by significant offline events. Thus, we investigate here the ways in which the collective identity of a far-right online community is affected by offline intergroup conflict. We examine over 14 years of online communication between members of Stormfront Downunder, the Australian sub-forum of the global white supremacist community. We analyse members' language use and discourse before and after significant intergroup conflict in 2015, culminating in local racist riots in Sydney, Australia. We found that the riots were associated with significant changes in the collective beliefs of the community (as captured by members' most salient concerns and group norms), emotions, and consensus within the community. Overall, the effects of the local riots were manifest in a reinvigorated sense of purpose for the far-right community, with a stronger anti-Muslim agenda.
Full-text available
Researchers have previously explored how right-wing extremists build a collective identity online by targeting their perceived “threat,” but little is known about how this “us” versus “them” dynamic evolves over time. This study uses a sentiment analysis-based algorithm that adapts criminal career measures, as well as semi-parametric group-based modeling, to evaluate how users’ anti-Semitic, anti-Black, and anti-LGBTQ posting behaviors develop on a sub-forum of the most conspicuous white supremacy forum. The results highlight the extent to which authors target their key adversaries over time, as well as the applicability of a criminal career approach in measuring radical posting trajectories online.
Conference Paper
Full-text available
Over the past few years, a number of new "fringe" communities, like 4chan or certain subreddits, have gained traction on the Web at a rapid pace. However, more often than not, little is known about how they evolve or what kind of activities they attract, even though recent research has shown that they influence how false information reaches mainstream communities. This motivates the need to monitor these communities and analyze their impact on the Web's information ecosystem. In August 2016, a new social network called Gab was created as an alternative to Twitter. It positions itself as putting "people and free speech first", welcoming users banned or suspended from other social networks. In this paper, we provide, to the best of our knowledge, the first characterization of Gab. We collect and analyze 22M posts produced by 336K users between August 2016 and January 2018, finding that Gab is predominantly used for the dissemination and discussion of news and world events, and that it attracts alt-right users, conspiracy theorists, and other trolls. We also measure the prevalence of hate speech on the platform, finding it to be much higher than on Twitter, but lower than on 4chan's Politically Incorrect board.
Full-text available
Deep and insightful interactions with the data are a prerequisite for qualitative data interpretation, in particular in the generation of grounded theory. The researcher must also employ imaginative insight as they attempt to make sense of the data and generate understanding and theory. Design research is also dependent upon the researcher's creative interpretation of the data. To support the research process, designers surround themselves with data, both as a source of empirical information and as inspiration to trigger imaginative insights. Constant interaction with the data is integral to design research methodology. This article explores a design researcher's approach to qualitative data analysis, in particular the use of traditional tools such as colored pens, paper, and sticky notes alongside the CAQDAS software NVivo for analysis, and the associated implications for rigor. A design researcher's approach grounded in a practice that maximizes researcher-data interaction across a variety of learning modalities ensures the analysis process is rigorous and productive. Reflection on the authors' research analysis process, combined with consultation of the literature, suggests that digital analysis software packages such as NVivo do not fully scaffold the analysis process. They do, however, provide excellent data management and retrieval facilities that support analysis and write-up. This research finds that coding with traditional tools such as colored pens, paper, and sticky notes to support data analysis, combined with digital software packages such as NVivo to support data management, offers a valid and tested analysis method for grounded theory generation. Insights developed from exploring a design researcher's approach may benefit researchers from other disciplines engaged in qualitative analysis.
Full-text available
In the aftermath of the 2017 Charlottesville tragedy, the prevailing narrative is a Manichean division between 'white supremacists' and 'anti-racists'. We suggest a more complicated, nuanced reality. While the so-called 'Alt-Right' includes those pursuing an atavistic political end of racial and ethnic separation, it is also characterised by pluralism and a strategy of nonviolent dialogue and social change, features associated with classic liberalism. The 'Left,' consistent with its historic mission, opposes the Alt-Right's racial/ethnic prejudice; but a highly visible movement goes farther, embracing an authoritarianism that would forcibly exclude these voices from the public sphere. This authoritarian element has influenced institutions historically committed to free expression and dialogue, notably universities and the ACLU. We discuss these paradoxes by analysing the discourse and actions of each movement, drawing from our study of hundreds of posts and articles on Alt-Right websites and our online exchanges on a leading site. We consider related news reports and scholarly research, concluding with the case for dialogue.
While an increasing number of contributions address transnationalism in far right politics, few investigations of the actors and discourses favored in transnational exchanges exist on social media, considered a perfect habitat for radicalization. Building on the literature on the far right, social movements, transnationalism and the Internet, we address this gap by studying the initiators and the issues that are favored in online exchanges between audiences of far right political parties and movements across France, Germany, Italy and the United Kingdom. We use a new dataset on the activities of far right Twitter users that is analyzed through a novel mixed methods approach. We use social network analysis to detect transnational links between organizations across countries based on retweets from audiences of far right Twitter users. Retweets are qualitatively coded for content and compared to the content retweeted within national communities. Finally, using a logistic regression, we quantify the level to which specific issues and initiators enjoy high levels of attention across borders. Subsequently we use discourse analysis to qualitatively reconstruct the interpretative frames accompanying these patterns. We find little evidence of a “dark international” on social media across Western Europe. Only a few issues (anti-immigration and nativist interpretations of the economy) garner transnational audiences on Twitter. Additionally, we demonstrate that more than movements, political parties play a prominent role in the construction of a transnational far right discourse. Keywords: Far right, transnationalism, Twitter, issue attention, movements, social network analysis