Citation: Scrivens, Ryan. (2020). Exploring Radical Right-Wing Posting Behaviors Online. Deviant Behavior. doi:
Exploring Radical Right-Wing Posting Behaviors Online
Ryan Scrivens, School of Criminal Justice, Michigan State University
In recent years, researchers have shown a vested interest in developing advanced information
technologies, machine-learning algorithms, and risk-assessment tools to detect and analyze
radical content online, with increased attention on identifying violent extremists or measuring
digital pathways of violent radicalization. Yet overlooked in this evolving space has been a
systematic examination of what constitutes radical posting behaviors in general. This study uses
a sentiment analysis-based algorithm that adapts criminal career measures – and is guided by
communication research on social influence – to develop and describe three radical posting
behaviors (high-intensity, high-frequency, and high-duration) found on a sub-forum of the most
conspicuous right-wing extremist forum. The results highlight the multi-dimensional nature of
radical right-wing posting behaviors, many of which may inform future risk factor frameworks
used by law enforcement and intelligence agencies to identify credible threats online.
In an increasingly digital world, identifying digital signs of extremism sits at the top of the
priority list for many law enforcement and intelligence agencies (Cohen, Johansson, Kaati, and
Mork 2014; Scrivens, Davies, and Frank 2017), with the current focus of government-funded
research on the development of advanced information technologies and risk assessment tools to
identify and counter the threat of violent extremism on the Internet (Sageman 2014). Within this
context, criminologists, amongst others, have argued that successfully identifying radical content
online (i.e., behaviors, patterns, or processes), on a large scale, is the first step in reacting to it
(e.g., Bouchard, Joffres, and Frank 2014; Davies, Bouchard, Wu, Joffres, and Frank 2015; Frank,
Bouchard, Davies, and Mei 2015; Williams and Burnap 2015). But in the last 15 years alone, it is
estimated that the number of individuals with access to the Internet has increased fourfold
(Internet World Stats 2020), from over 1 billion users in 2005 to more than 4.5 billion as of 2020
(Internet Live Stats 2020). With all of these new users, more information has been generated,
leading to a flood of online data. In response, researchers are shifting from manual identification
of specific online content to algorithmic techniques to do similar yet larger-scale tasks. This is a
symptom of what some have described as the ‘big data’ phenomenon – i.e., a massive increase in
the amount of data that is readily available, particularly online (Chen, Mao, Zhang, and Leung
It is becoming increasingly difficult, nearly impossible really, to manually search for
violent extremists, potentially violent extremists, or even users who post radical content online
because the Internet contains an overwhelming amount of information. These new conditions
have necessitated guided data filtering methods that can side-step – and perhaps one day even
replace – the taxing manual methods that traditionally have been used to identify relevant
information online (Brynielsson, Horndahl, Johansson, Kaati, Martenson, and Svenson 2013;
Cohen et al. 2014). As a result of this changing landscape, a number of governments around the
globe have engaged researchers to develop advanced machine learning algorithms to identify and
counter extremism through the collection and analysis of large-scale data made available online
(Chen et al. 2014). Whether this work involves finding radical users of interest (e.g., Klausen,
Marks, and Zaman 2018), measuring digital pathways of radicalization to violence (e.g., Hung,
Jayasumana, and Bandara 2016a), or detecting virtual indicators that may prevent future terrorist
attacks (e.g., Johansson, Kaati, and Sahlgren 2016), the urgent need to pinpoint extremist content
online is one of the most significant challenges faced by law enforcement agencies and security
officials worldwide (Sageman 2014).
It should come as little surprise, then, that scholars in recent years have shown a vested
interest in developing large-scale ways to identify and analyze radical content online. To
illustrate, researchers have used machine learning algorithms to detect extreme language (e.g.,
Davidson, Warmsley, Macy, and Weber 2017), websites (e.g., Bouchard et al. 2014; Mei and
Frank 2015; Scrivens and Frank 2016) and users online (e.g., Klausen et al. 2018; Scrivens et al.
2017), as well as to measure levels of online propaganda (e.g., Burnap, Williams, Sloan… and
Voss 2014) and cyberhate (e.g., Williams and Burnap 2015) following a terrorism incident, and
to evaluate how radical discourse evolves over time online (e.g., Bliuc, Betts, Vergani, Iqbal, and
Dunn 2019; Figea, Kaati, and Scrivens 2016; Levey and Bouchard 2019; Macnair and Frank
2018; Park, Beck, Fletcher, Lam, and Tsang 2016; Vergani and Bliuc 2015; Scrivens, Davies, and
Frank 2018). Machine learning has also been used to detect violent extremist language (e.g.,
Abbasi and Chen 2005) and users online (e.g., Alvari, Sarkar and Shakarian 2019; Brynielsson et
al. 2013; Cohen et al. 2014; Johansson et al. 2016; Kaati, Shrestha, and Cohen 2016; Kaati,
Shrestha, and Sardella 2016), as well as to measure levels of – or propensity towards – violent
radicalization online (e.g. Agarwal, and Sureka 2015; Bermingham, Conway, McInerney,
O’Hare, and Smeaton 2009; Chen 2008; Ferrara 2017; Ferrara, Wang, Varol, Flammini, and
Galstyan 2016; Grover and Mark 2019; Hung et al. 2016a; Hung, Jayasumana, and Bandara
2016b). But in light of these important contributions, overlooked in this evolving space has been
a systematic look at what constitutes radical posting behaviors in general. Taking a step back
from measuring levels of online radicalization, for example, to investigate radical posting
behaviors found in an already radical online community may provide law enforcement and
intelligence agencies with new insight into what constitutes online behaviors worthy of future
investigation. It may also provide a useful baseline for key stakeholders to in turn identify
credible threats (i.e., those who engage in violence offline) or inform future risk factor
frameworks. This study seeks to address this gap via a sentiment analysis-based algorithm to
identify and describe radical posting behaviors on a right-wing extremist (RWE) discussion forum.
Before proceeding, it is necessary to outline how right-wing extremism is conceptualized
in the current study. Following Berger (2018), this study is guided by the view that RWEs—like
all extremists—structure their beliefs on the basis that the success and survival of the in-group is
inseparable from the negative acts of an out-group and, in turn, they are willing to assume both
an offensive and defensive stance in the name of the success and survival of the in-group (see
Berger 2018). Right-wing extremism is thus defined as a racially, ethnically, and/or sexually
defined nationalism, which is typically framed in terms of white power and/or white identity
(i.e., the in-group) that is grounded in xenophobic and exclusionary understandings of the
perceived threats posed by some combination of non-whites, Jews, Muslims, immigrants,
refugees, members of the LGBTQ community, and feminists (i.e. the out-group(s)) (Perry and
Scrivens 2019; Conway, Scrivens, and Macnair 2019).
The current study
The purpose of this study was to use a sentiment analysis tool and an algorithm to identify and
describe radical right-wing posting behaviors found online. The following is a step-by-step guide
of this process.
Web-crawler and forum data
All open source content on a RWE sub-forum of Stormfront was analyzed for the current study,
which included 124,058 posts made by 7,014 authors between September 12, 2001 and October
12, 2016.
Data were captured using a custom-written computer program that was designed to
collect vast amounts of information online (for more on the web-crawler, see Scrivens, Gaudette,
Davies, and Frank 2019).
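The web-crawler itself is documented elsewhere (Scrivens et al. 2019) and its code is not reproduced in the article. Purely as a hypothetical sketch of the kind of paginated forum capture involved, the following stands in for the real tool; the `Post` fields, the page dictionaries, and the `fetch` stub are all illustrative assumptions, not the actual crawler:

```python
from dataclasses import dataclass
from typing import Callable, Iterator


@dataclass
class Post:
    author: str
    date: str
    text: str


def crawl_subforum(first_page: str,
                   fetch: Callable[[str], dict]) -> Iterator[Post]:
    """Walk a paginated sub-forum, yielding every post.

    `fetch` abstracts the HTTP-and-parse layer: given a page URL it
    returns a dict with 'posts' and an optional 'next' page URL.
    """
    url = first_page
    seen = set()
    while url and url not in seen:  # guard against pagination loops
        seen.add(url)
        page = fetch(url)
        for p in page["posts"]:
            yield Post(**p)
        url = page.get("next")


# Usage with a stubbed fetcher standing in for the live forum:
pages = {
    "p1": {"posts": [{"author": "a1", "date": "2001-09-12", "text": "..."}],
           "next": "p2"},
    "p2": {"posts": [{"author": "a2", "date": "2001-09-13", "text": "..."}],
           "next": None},
}
posts = list(crawl_subforum("p1", pages.__getitem__))
```

In a real run, `fetch` would issue the HTTP request and parse the thread HTML; keeping it injectable also makes the traversal logic testable offline.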
Stormfront is the oldest and most visited web-forum of the RWE movement (Conway et
al. 2019). It is also the largest and most active RWE forum in the world (Bliuc et al. 2019), and it
has hosted some of the deadliest adherents since its inception in 1995. The Southern Poverty
Law Center (2014), for example, has described Stormfront as “a magnet and breeding ground for
the deadly and the deranged”, claiming that its members have been responsible for
approximately 100 murders since the site came online. Stormfront has also been referred to as an
“echo chamber for hate” (see Simi and Futrell 2015) and a “hornet’s nest” for extremists to
become more extreme (see Wojcieszak 2010). Stormfront is indeed a reasonable starting place to
explore online posting behaviors that may be considered ‘radical’.
Keywords and selection procedure
To identify radical discussions and, by extension, radical right-wing posting behaviors in the
sub-forum, the first step was to determine radical topics found in the data that would be
measured. A list of keywords was therefore developed that accounted for discussions associated
with three primary adversary groups of the extreme right: (1) Jews; (2) Blacks; and (3) lesbian, gay, bisexual, transgender, and queer (LGBTQ) communities. (The sub-forum went live online on September 12, 2001; an assessment of the first posting on the sub-forum suggests that it was not launched in response to the 9/11 terror attacks.) Research suggests that these three
adversary groups are widely discussed and demonized in RWE discussion forums (e.g., Adams
and Roscigno 2005; Bowman-Grieve 2009; Futrell and Simi 2004), amongst many other online
platforms used by the extreme right. While by no means are these the only adversary groups
targeted by them, historically Jewish, Black and LGBTQ communities have been the primary
opponents of the RWE movement (Daniels 2009). Jews, for example, have been subject to
extensive criticism by the extreme right. They have been labeled as “the source of all evil”, “the
spawn of the Devil himself”, conspiring to extinguish the white race and breeding them out of
existence – through “Jew-controlled” government, financial institutions, and media (i.e., Zionist
Occupation Government (ZOG) conspiracy) (Ezekiel 1995). Black communities, too, have been
the primary target of much of the hateful sentiment expressed by the extreme right. Blacks have
been constructed as “mud races” and the descendants of animals created before Adam and Eve;
“savages” who viciously rape white women and take jobs away from white communities; and the
foot soldiers of “conspiring Jews” (Ezekiel 1995). Adherents of this male-dominated movement
have also categorized anyone who is not heterosexual as “contaminated” and “impure”, not only
by maintaining that the gay rights movement is the killer of the traditional white family and the
cultural destruction of the white race, but that gays are responsible for the contemporary AIDS epidemic (Perry 2001).
For each of the three adversary groups, a list of keywords was developed by drawing
from extensive lists of slur words found online. Each list was trimmed to an equal number of words via a standardization procedure: for each list, the frequency with which each keyword appeared in the data was tallied and the inflection point in that frequency distribution was identified, and the inflection points were then averaged across the three lists. This average inflection point was 42 keywords, so each final list included 42 words that were randomly drawn from its associated full list (for more on this procedure, see Scrivens et al. 2018).
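The article does not specify how the inflection point was computed, so the sketch below uses one plausible reading, a largest-drop ("elbow") heuristic on the descending frequency curve; the heuristic, the example counts, and the function names are assumptions for illustration only:

```python
import random


def elbow_index(freqs):
    """Crude inflection-point estimate on a descending frequency list:
    the rank at which the drop between consecutive frequencies is
    largest (one of several reasonable elbow heuristics)."""
    drops = [freqs[i] - freqs[i + 1] for i in range(len(freqs) - 1)]
    return drops.index(max(drops)) + 1


def standardize_lists(slur_counts, rng=random.Random(0)):
    """slur_counts: {group: {keyword: frequency-in-corpus}}.
    Returns equal-sized keyword lists, one per adversary group, each of
    size equal to the average inflection point across groups (42 in the
    study)."""
    cutoffs = []
    for counts in slur_counts.values():
        freqs = sorted(counts.values(), reverse=True)
        cutoffs.append(elbow_index(freqs))
    k = round(sum(cutoffs) / len(cutoffs))
    return {g: rng.sample(list(counts), k)  # random draw of k keywords
            for g, counts in slur_counts.items()}


# Toy example: elbows fall at ranks 2 and 4, so k = 3 words per list.
counts = {
    "groupA": {f"w{i}": f for i, f in enumerate([100, 90, 10, 9, 8])},
    "groupB": {f"v{i}": f for i, f in enumerate([50, 48, 46, 44, 4])},
}
lists = standardize_lists(counts)
```

The fixed random seed simply makes the illustrative draw reproducible; the study's actual draw procedure is described in Scrivens et al. (2018).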
Sentiment analysis
Next, the context surrounding each keyword was systematically evaluated using sentiment
analysis software. Also known as ‘opinion mining’, sentiment analysis is a data collection and
analytic method that allows for the application of subjective labels and classifications. It can
evaluate the opinions of individuals by organizing data into distinct classes and sections,
assigning an individual’s sentiment with a polarity score (i.e., a positive, negative, or neutral
score) (see Feldman 2013).
SentiStrength, which is an established sentiment analysis program that has been widely
used by criminologists in terrorism and extremism studies (see Scrivens et al. 2019), was utilized
for the current study, as it allows for a systematic analysis of a user’s discussion that could be
considered ‘radical’ in online settings (see Scrivens et al. 2017, 2018). To illustrate, it allows for
a keyword-focused method of determining sentiment near a specified keyword (see Thelwall and
Buckley 2013). Equally important is another key feature of SentiStrength: polarity scores are
augmented by characters that can influence scores assigned to the text, such as active and
powerful language, booster words, negative words, repeated letters, repeated negative terms,
antagonistic words, punctuation, and other distinctive characters suited for studying an online
context. In theory, the higher the polarity score assigned to a piece of text, the more likely the text is to include intense opinions (for more on the functional capacity of SentiStrength, see Thelwall
and Buckley 2013).
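SentiStrength itself is a standalone tool with its own lexicon; as a simplified stand-in only, the sketch below illustrates the keyword-focused mode described above, scoring just the words near a specified keyword and letting booster words amplify polarity. The toy lexicon, booster list, and window size are assumptions and do not reflect SentiStrength's actual word lists or scores:

```python
import re

# Toy lexicon standing in for SentiStrength's (real scores differ)
LEXICON = {"hate": -4, "evil": -3, "love": 3, "great": 2}
BOOSTERS = {"very", "really"}  # strengthen the following term


def keyword_context_score(text, keyword, window=4):
    """Score only the words within `window` tokens of `keyword`,
    mimicking keyword-focused sentiment scoring."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if keyword not in tokens:
        return 0
    i = tokens.index(keyword)
    ctx = tokens[max(0, i - window): i + window + 1]
    score = 0
    for j, tok in enumerate(ctx):
        s = LEXICON.get(tok, 0)
        if s and j > 0 and ctx[j - 1] in BOOSTERS:
            s += 1 if s > 0 else -1  # boosters amplify polarity
        score += s
    return score


keyword_context_score("they really hate the group", "group")  # -> -5
```

A production pipeline would also handle the repeated letters, punctuation, and antagonistic terms mentioned above; this sketch covers only the keyword-window and booster ideas.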
Sentiment-based Identification of Radical Authors
Scrivens and colleagues (2017) developed an algorithm, Sentiment-based Identification of
Radical Authors (SIRA), which was capable of accounting for specific aspects of a forum author’s posting activity that may be deemed ‘radical’. In short, the algorithm computes an
overall ‘radical score’ – out of 40 points – by tallying the following components of an author’s
online activity:
(1) Volume of negative posts, which calculates the number of negative posts for a given
forum user (10 points).
(2) Severity of negative posts, which is a metric that calculates the number of very negative
posts for a given user (10 points).
(3) Duration of negative posts, which calculates the first and last dates on which individual
members post negative messages (10 points).
(4) Average sentiment score percentile, which calculates the average sentiment score for all
posts by a given forum member (10 points).
These four unique dimensions of ‘seriousness’ are quantified to identify radical individuals
within an online forum, whereby the higher a user’s radical score, the more likely they are to be
discussing extremely negative content in their posts (see Scrivens et al. 2017 for more on the
SIRA components). Most of these measures are also drawn from traditional criminal career
measures. The volume of negative posts component, for example, is similar to the concept of
offending frequency (see Blumstein, Cohen, Roth and Visher 1986); the severity of negative
posts component also reflects a traditional criminal career dimension – ‘seriousness of crime’
(see Warr 1989); and the duration of negative posting component borrows from the general
concept behind the traditional criminal career dimension ‘duration of crime’ (e.g., Blumstein et
al. 1986; Tremblay, Pihl, Vitaro, and Dobkin 1994).
The SIRA algorithm has been used in a
number of online contexts, including to identify radical users (see Scrivens et al. 2017), to
determine levels of radical content posted by users (see Dillon, Neo, and Freilich 2019), to
measure users’ radical opinions over time (see Park et al. 2016), and to measure the evolution of
radical posting behaviors (see Scrivens et al. 2018).
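The four-component scoring above can be sketched as follows. Note that this is a simplification: the original SIRA scales each component with a percentile-based scheme (see Scrivens et al. 2017), whereas the sketch below rescales each component against the forum maximum; the input fields (counts of negative and very negative posts, posting duration in days derived from first/last negative-post dates, and average sentiment) are assumed to be precomputed:

```python
def sira_scores(users):
    """users: {name: {'neg': count of negative posts,
                      'very_neg': count of very negative posts,
                      'duration': days between first and last negative post,
                      'avg_sent': mean sentiment (more negative = more radical)}}.
    Each component is rescaled to 0-10 relative to the forum maximum,
    and the four components sum to a radical score out of 40."""
    def scale(vals):
        hi = max(vals.values()) or 1  # avoid division by zero
        return {u: 10 * v / hi for u, v in vals.items()}

    vol = scale({u: d["neg"] for u, d in users.items()})
    sev = scale({u: d["very_neg"] for u, d in users.items()})
    dur = scale({u: d["duration"] for u, d in users.items()})
    sent = scale({u: max(0.0, -d["avg_sent"]) for u, d in users.items()})
    return {u: vol[u] + sev[u] + dur[u] + sent[u] for u in users}


# Toy example: "a" maxes every component, so "a" scores the full 40.
users = {
    "a": {"neg": 10, "very_neg": 2, "duration": 100, "avg_sent": -2.0},
    "b": {"neg": 5,  "very_neg": 1, "duration": 50,  "avg_sent": -1.0},
}
scores = sira_scores(users)
```

The key property preserved from the description above is that each of the four dimensions contributes at most 10 points, so the overall ‘radical score’ is bounded at 40.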
Sentiment-based Identification of Radical Authors re-calibrated
There are various definitions that can be used to describe someone as a ‘radical’ based on their
online posting behaviors. One user, for example, may post a high volume of negative messages
over a moderate period of time in a forum. Online communication research suggests that an
individual’s social influence is associated with the volume of their communications (see
Huffaker 2010; see also Yoo and Alavi 2004), especially when the source of a message is
perceived as trustworthy in a particular setting (see Hollander 1961). Butler (2001), for example,
suggests that an individual’s online communication activity may produce a social structure that
facilitates information sharing which can in turn influence social behavior. Weimann (1994)
similarly argues that an individual who engages in more communication activity may increase
their chances of reaching a wider audience and extending their potential to influence others.
There may also be a user who espouses very radical views for a short period of time in a
forum. Communication research suggests that an individual’s linguistic choices dictate their
ability to persuade others (e.g., Bradac, Konsky, and Davies 1976; Holtgraves and Lasky 1999;
Hosman 2002; Huffaker 2010; Ng and Bradac 1993). (As an aside, the average sentiment score percentile does not have a criminal career equivalent; in the online context, however, it seemed important to be able to differentiate between two authors who on average posted negative messages, but one of whom posted negative messages that on average were more negative than the other author’s, especially when both parties posted similar volumes of negative content over similar periods of time on a forum.) Three areas of research have explored the effects of message content on the influence of the source. First is the clarity of the message and
an author’s ability to write with ‘vocabulary richness’ (Bradac et al 1976; Hosman 2002). Poor
language, on the other hand, tends to undermine the credibility and influence of the source. In other words, messages that are perceived as unintelligent are simply perceived as less credible (see
O’Keefe 2002). Second is the powerful nature of the language. Previous research has shown that
the use of powerful or powerless language influences how the source of the message is perceived
(see Ng and Bradac 1993). Powerful language is direct and assertive, and conveys confidence and certainty (Burrell and Koper 1998), while powerless language includes, but is not limited to, fragmented sentences, hesitations, and the use of hedges and tag questions (Holtgraves and Lasky 1999). Third is the
intensity of language. Here, the consensus is that intense messages include two characteristics:
(1) some stylistic feature of language, and (2) a level of emotionality (Ng and Bradac 1993;
Hamilton and Hunter 1998). In short, these messages can be more influential because they grab
the attention of the recipient (see Forgas 2006). This is particularly the case for those messages
that reinforce a sense of community and encourage others to participate in the discussions (Joyce
and Kraut 2006), with online RWE communities indeed being one of them (see Bowman-Grieve 2009).
Lastly, there may be a user who posts moderately negative material over an incredibly
long period in a forum. According to online communication studies, there is a link between the
amount of time that an individual participates in an online community and their ability to gain
social influence there (Koh, Kim, Butler and Bock 2007). Such influence, however, requires that
they are perceived as trustworthy, which again can be built via the length of time that is spent
participating there, as it shows a level of commitment to a particular group or social setting (see
Hollander 1961).
While these concepts of online social influence serve as a practical starting place to
conceptualize posting behaviors that may be radical, each of the above posting behaviors raises the question of what constitutes radical posting behavior in an online setting. SIRA is capable of
being adjusted to measure either type of posting behavior (Scrivens et al. 2017). To illustrate, in
order to narrow in on the abovementioned author types in the RWE sub-forum, a straightforward
adaptation of the SIRA algorithm was applied to the sub-forum data, and three separate analyses
were conducted. First, to identify and describe the posting behaviors of those who posted a high
volume of radical messages in the data, SIRA was re-calibrated so that the ‘volume of negative
posts’ parameter, and not the other three SIRA parameters, was computed. Second, to locate a
group of users who posted extremely negative messages in the sub-forum and in turn describe
their posting behaviors, the SIRA algorithm accounted for the ‘severity of negative posts’ parameter only. Lastly, to identify a group of users who posted radical messages over an
extensive period of time in the sub-forum and describe their posting behaviors, only the ‘duration
of negative posts’ parameter was calculated.
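For ranking purposes, computing a single SIRA parameter and zeroing the rest is equivalent to ordering users on that one raw metric. The three analyses described above can therefore be sketched as follows, where the field names and the `users` dictionary are hypothetical stand-ins for the sub-forum data:

```python
def recalibrated_rank(users, component):
    """Rank forum users on a single SIRA parameter, ignoring the other
    three, per the three analyses described above.
    component: 'volume'   -> high-frequency posters
               'severity' -> high-intensity posters
               'duration' -> high-duration posters"""
    key = {"volume": "neg",
           "severity": "very_neg",
           "duration": "duration"}[component]
    return sorted(users, key=lambda u: users[u][key], reverse=True)


# Toy data: "b" posts the most negative messages over the longest span,
# while "a" posts the most *very* negative messages.
users = {"a": {"neg": 3, "very_neg": 9, "duration": 1},
         "b": {"neg": 7, "very_neg": 2, "duration": 5}}

# e.g. the 100 most radical high-frequency posters would be
# recalibrated_rank(users, "volume")[:100]
```

Each of the study's three samples would then be the top 100 users under one of these three orderings.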
Analysis and coding procedure
Once a sample of forum users was identified for each of the three radical posting groups, a
macro- and micro-level assessment of their posting behaviors was conducted as follows:
Macro-level assessment. The posting behaviors of the 100 most radical users for each of
the three radical posting groups (n = 300) were compared across groups. Quantitative measures
for this descriptive comparison included users’ mean number of posts, mean posting scores, and mean posting duration (in days) for all of their posts, negative posts, and very negative posts.
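The macro-level comparison amounts to computing per-group means of these per-user measures. A minimal sketch, assuming one precomputed record per user (the record fields and group labels are illustrative):

```python
from statistics import mean


def macro_summary(group_posts):
    """group_posts: {group: [{'score': s, 'n_posts': n, 'days': d}, ...]},
    one record per user; returns the per-group means of the kind
    reported in Table 1."""
    return {g: {"mean_posts": mean(u["n_posts"] for u in records),
                "mean_score": mean(u["score"] for u in records),
                "mean_days": mean(u["days"] for u in records)}
            for g, records in group_posts.items()}


# Toy example with two users in one group:
summary = macro_summary({
    "HIR": [{"score": -2, "n_posts": 300, "days": 1400},
            {"score": -3, "n_posts": 400, "days": 1404}],
})
```

The same records, split by negative and very negative posts, would yield the nine mean values reported for each group.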
Micro-level assessment. A thematic content analysis was conducted on the content posted
by the 50 most radical and 50 least radical users within and across each of the three posting
groups (n = 300). Data were analyzed via thematic coding, initially utilizing a constructivist
grounded theory approach which allows for existing literature to be drawn upon to validate codes
(see Charmaz 2006). As codes were later grouped into themes, central emergent themes –
composed of forum users describing similar views – were identified, and less relevant data were
omitted (i.e., selective coding).
The results are divided into three sections: a macro- and micro-level assessment of each of the three radical posting groups (high-intensity, high-frequency, and high-duration). Here a number of
manually selected posts are included for the purpose of highlighting the nature of the behaviors
that were uncovered and the key themes that emerged in the data.
High-intensity radical posting behaviors
Several macro- and micro-level patterns were uncovered during the analysis of the high-intensity
radical (HIR) posting group’s (heretofore referred to as the ‘high-intensity posters’) online
presence. From a macro-level perspective, an assessment of this group’s online behaviors
revealed that, on average, they posted the highest volume of very negative messages (2.27 posts)
compared to users in the high-frequency radical (HFR) posting group (heretofore referred to as
the ‘high-frequency posters’) and the high-duration radical (HDR) posting group (heretofore
referred to as the ‘high-duration posters’). For these high-intensity posters, both the average
posting score (-2.26) and the average negative message scores (-4.49) were more negative than
the scores for the high-frequency and high-duration posters. Interestingly, though, the average posting duration (1,402.46 days), negative posting duration (936.80 days), and very negative posting duration (154.79 days) for the high-intensity posters were all much lower than the posting durations for the two other poster groups. In addition, the high-intensity authors on
average posted the lowest volume of messages (348.70 posts) and negative messages (39.69
posts) compared to the other two poster groups (see Table 1).
Table 1. Descriptive comparisons of radical posting behaviors (n = 300).

Radical posting groups            HIR            HFR            HDR
n (%)                             100 (33.33)    100 (33.33)    100 (33.33)
Mean number of posts
  All posts                       348.70         512.84         –
  Negative posts                  39.69          51.03          –
  Very negative posts             2.27           1.71           –
Mean posting score
  All posts                       -2.26          -1.97          –
  Negative posts                  -4.49          -4.09          –
  Very negative posts             -16.33         -16.35         –
Mean posting duration (days)
  All posts                       1,402.46       1,836.00       –
  Negative posts                  936.80         1,331.41       –
  Very negative posts             154.79         170.80         156.64

Note: HIR = high-intensity radical, HFR = high-frequency radical, HDR = high-duration radical.
A micro-level assessment of the content posted by the high-intensity posters revealed that
the majority of their very negative messages included alarming topics of discussion: mass
homicide (described with an array of keywords, such as ‘kill’, ‘murder’, ‘slain’, ‘execution’, and
‘genocide’), violence (with keywords such as ‘destruction’, ‘fight’, ‘attack’, ‘beat’, and ‘hurt’),
weapons (with keywords such as ‘guns’, ‘knives’, and ‘bombs’), sexual abuse (with keywords
such as ‘rape’, ‘rapist’, and ‘molester’), hate (with keywords such as ‘hate’ and ‘hatred’), racism (with keywords such as ‘racism’ and ‘racist’), and the antichrist (with keywords such as ‘the
devil’, ‘Satan’, and ‘satanic’). Intertwined within much of this intense discourse was the fierce
condemnation of the Jewish, Black, and LGBTQ communities, with Blacks oftentimes being
framed as criminals, LGBTQ communities being linked to the spreading of sexually transmitted
diseases, and Jews being described as “the seed of all evil”, as one user put it (UserID2209).
However, most frequently identified within these discussions was a ZOG conspiracy theory in
which Jews were described as controlling all aspects of the government in the Western world in
an effort to overthrow the white race. Here, commonly found within the high-intensity users’
very negative discussions were concerns that the dominant discourse associated with race
relations in the West was focused exclusively on so-called white supremacy when, in fact, the
“real threat” was ZOG. As three of the most intense users in the sample best summarized this sentiment:
the term White Supremacists is without a doubt, the Zionist propaganda / smear
machine’s #1 favorite. It’s funny I have never, ever, even once, heard them use the term,
Jewish Supremacists or even Zionist Supremacists. My interpretation of the term White
Supremacists is any white person who does not praise Israel, & welcome the extinction of
his/her race. (UserID1787)
media is largely Jewish owned and will never call itself racist despite being the very definition of it by intent. […] The media has made itself responsible for defining racism; they define it in terms as anti-White as possible. This is typical […] liberal cowardice; unwilling to admit that blacks are violent criminals […] they simply do not want people to grab ahold of the fact that this was a worthless black murdering a White female in a completely senseless act of black violence. THAT is what doesn’t fit their agenda.
All author names were assigned with pseudonyms to protect user anonymity. All online posts were quoted
Zionist agents and fanatics freely and criminally accuse their targets of being Nazis while
they themselves utilize the very same techinques of the master Nazi propagandists to
condemn, defame, personally attack, and stereotype all those they fear may oppose or
question their tactics. Perhaps the most heart-breaking, evil side of Zionism is how it
hides behind - and unmercifully uses - the great religion of […] as if it were some kind of blunt instrument used to deceive, threaten and then to beat those who differ with
Zionism into submission. (UserID4032)
Further uncovered during the micro-level assessment of the very negative content posted
by high-intensity posters was that this group tended to encourage other forum users to band together to overcome a perceived struggle against what were described as “Jew-run nations”, or as
one high-intensity user put it:
our [Western] countries are run by Jews […] they just want to destroy the white race, fear
of us rising again and others learning the real truths about them. The Jews play dirty, no
shock there, but how can you be surprised when the Jews […] worship and follow the
devil! (UserID213)
In fact, integrated within much of the discussions from the high-intensity posters were strategic
calls for action – and with all force necessary. As one of the most high-intensity users put it,
“Zionist warmongers mean to imprison us and destroy every vestige of our being. We must fight
back before the rot gets worse” (UserID4211). Here the general sentiment was that if the white
race did not respond accordingly, then the white race would further be oppressed by Jews. As
two high-intensity posters further explained it:
Only in White societies do people espouse other people’s (mostly Jewish) cultural
inventions as their own. […] In the West, we wear clothing designed by homosexual
Jews in Paris, made in 3rd World sweat shops usually owned in part by the international
Jewish Clothier oligopolies. The Jews’ Media show us Whites how to groom and their
partners in ethnic genocide sell us all our toiletries (Kosher Tax and all). […] Then after
years of dressing just right and looking in the mirror to see if we look sufficiently like
their favorite sitcom star, we begin to develop skin diseases and other cancers. Then
come the pharmaceuticals. Prescribed by the medical [Jewish] Mafia, the sheople not
only dutifully swallow them to supposedly take care of the problems, they fight for
subsidized drug plans to make their deaths more affordable. The drugs are in effect the
final Nail in the coffin for many. We must stick together to resist the Jew. (UserID1904)
If we fail to choose struggle over surrender, life over death, destiny over oblivion, it will
not be due to the strength of our would-be destroyers, but to our own weakness, not to
their virtue but to our vice. We will have destroyed ourselves by our own self-destructive
perversion of ultimate ethics, overcome by the enemy ideas implanted in our minds. We
will survive only if we find the ultimate ethical wisdom, nobility, virtue and strength
within us to prevail and live. (UserID1554)
These powerful messages were posted using assertive language, tended to be clearly written, and
were posted with a sense of urgency – a linguistic pattern that became even more apparent during
an analysis of the least radical authors in the sample, as they tended to post unintelligent,
ambiguous and poorly written messages about their adversary groups, oftentimes with passive
and powerless language.
Worth highlighting was that the very negative messages posted by the high-intensity
posters differed from the negative messages they posted in the sub-forum. In particular, their
negative posts included alarming language similar to the abovementioned, but much less so than
their very negative posts. In addition, the negative messages tended to include a broader range of
talking points than those found in the very negative posts. That is, high-intensity users who
posted negative messages tended to draw broadly on a number of social issues (such as crime
rates and immigration), which were oftentimes followed by minority groups being blamed for
these issues. Such discourse, however, tended to consist of descriptive accounts of how
minorities had “wronged” the white race and with Jews at the forefront of their anger, but
without the direct call to action against these groups. A comparison of the content posted by the
least radical users in the sample lent support for this finding, as the least radical authors tended to
post vague messages about their adversary groups.
High-frequency radical posting behaviors
A macro-level assessment of the content posted by the high-frequency posters revealed that, on
average, they posted a much higher volume of messages (512.84 posts) and negative messages
(51.03 posts) than the high-intensity and high-duration poster groups. High-frequency posters
were also those who on average posted messages in general and negative messages in particular
over a longer period of time (1,836.00 days and 1,331.41 days, respectively) than the high-
intensity group, but over a much shorter period than the high-duration posters. In addition, high-
frequency users posted a lower volume of very negative messages (1.71 posts) than the high-
intensity posters on average, but high-frequency users posted a higher volume of very negative
messages than the high-duration posters on average. Worth adding was that high-frequency
posters were those who on average posted messages in general and negative messages in
particular that were less negative (-1.97 and -4.09, respectively) than those posted by the high-
intensity posters in the sample. But while the very negative messages that were posted by the
high-frequency users received similar sentiment scores as the high-intensity group on average (-
16.35 and -16.33, respectively), high-frequency users were those who posted very negative
messages over the longest period of time on average (170.80 days) compared with the high-
intensity and the high-duration posters (154.79 days and 156.64 days, respectively).
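The macro-level measures reported above (average post counts, active spans in days, and mean sentiment scores) can be derived with a straightforward per-user aggregation. A minimal sketch, assuming each message has already been assigned a user ID, a post date, and a SentiStrength-style sentiment score; the field layout and the negative/very-negative cut-offs shown here are hypothetical, not the study's actual thresholds:

```python
from collections import defaultdict
from datetime import date

# Each message: (user_id, posted_on, sentiment score). All values illustrative.
messages = [
    ("u1", date(2005, 1, 1), -2.0),
    ("u1", date(2006, 6, 1), -17.0),   # a "very negative" post
    ("u2", date(2005, 3, 1), -5.0),    # a "negative" post
    ("u2", date(2009, 3, 1), -1.0),
]

NEG, VERY_NEG = -3.0, -15.0  # hypothetical cut-offs, not the study's thresholds

def user_stats(msgs):
    """Per-user post counts, mean sentiment, and active duration in days."""
    by_user = defaultdict(list)
    for uid, day, score in msgs:
        by_user[uid].append((day, score))
    stats = {}
    for uid, rows in by_user.items():
        days = [d for d, _ in rows]
        scores = [s for _, s in rows]
        stats[uid] = {
            "posts": len(rows),
            "negative_posts": sum(1 for s in scores if s <= NEG),
            "very_negative_posts": sum(1 for s in scores if s <= VERY_NEG),
            "mean_sentiment": sum(scores) / len(scores),
            "duration_days": (max(days) - min(days)).days,
        }
    return stats

stats = user_stats(messages)
```

Group-level figures such as those above would then be averages of these per-user statistics within each posting group.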
A micro-level assessment of the content posted by the high-frequency users revealed a
number of notable patterns in their posting behaviors. To illustrate, the negative posts from the
high-frequency users were similar to those of the high-intensity users in that the content was
overwhelmingly focused on describing, oftentimes with alarming language, how minority groups
were a threat to the white race, or as two high-frequency users noted:
It was a fine neighbourhood, apparently, until the blacks came. After they showed up,
black teenagers were constantly hanging around outside of everyone’s apartments,
anything left outside was stolen or destroyed, they threatened anyone who passed by
them (F** you, m**f**er, that sort of garbage). They were constantly making noise,
blasting ghetto music, dribbling their monkey balls, spray painting their illegible hideous
graffiti, utterly uncivilized, utterly disrespectful of the neighbours. And, of course, they
are daring people to confront them; always looking for trouble. Complaints about them
were completely useless. And the problem got so bad that my friends simply picked up
and moved. When blacks move into a neighbourhood, that neighbourhood is as good as
dead. Like terminal cancer in a body. (UserID328)
We got immigrants playing the race card when it suits them, now the fags have the gay
card. there mental defects! and should be behind bars just like how we keep other
mental people! for are saftey! (UserID4054)
Similarly, the very negative messages posted by the high-frequency users were comparable to the
very negative messages posted by the high-intensity users: the messages themselves included a
wide range of alarming language, such as words associated with weapons (‘guns’, ‘knives’, and
‘bombs’), racism (‘racism’, ‘racist’, and racism against whites in general), and the antichrist (‘the
devil’, ‘Satan’, and ‘satanic’) – all of which tended to be targeted at minority groups in general.
However, unlike the very negative messages posted by high-intensity users, the high-frequency
users’ very negative posts tended to include detailed and descriptive accounts of their adversaries
(with an array of terms, such as ‘vile’, ‘filth’, ‘stupid’, ‘ignorant’, and other similar terms) rather
than calls for violence against them. As but three examples of this type of very negative content:
Let’s see... gangs of Blacks (unskilled immigrants, brought into the country recklessly by
the Liberal government) are killing other Blacks (unskilled immigrants, brought into the
country recklessly by the Liberal government) while other Blacks (unskilled immigrants,
brought into the country recklessly by the Liberal government) get uppity about it and kill
still more Blacks (unskilled immigrants, brought into the country recklessly by the
Liberal government) with illegal guns bought from other Blacks (unskilled immigrants,
brought into the country recklessly by the Liberal government). See a trend here? OUR
kids? Not a chance in Hell. (UserID1654)
When AIDS first hit America by storm in the 1980s, the largest impact was felt in the city
of San Francisco due to its heavily gay population. At first they weren’t sure what was
happening but they did manage to figure out that there was a disease peculiar to
homosexual males that destroyed individuals’ immunity systems and resulted in a rather
terrible cancer-like death. Rather than throw tax-payer money at cute little promotional
materials, they did the one intelligent thing that you would expect a city’s administration
to do. They closed down the bath-houses. […] They knew that unprotected homosexual
sex was spreading the disease, they knew that homosexual sex was rampant in these
establishments and that, in fact, homosexual promiscuity was the entire purpose of these
establishments. (UserID5832)
Homosexual behavior is abnormal and sickening to any normal mind. One does not have
to have a religion in order to get turned off by these social deviants. Nobody ever taught
me a thing about homosexuals, but when I met my first one, I was so turned off.
This type of sentiment was not uncovered in the messages posted by the least radical users in the
sample. While they did discuss their adversaries in the sub-forum, their messages did not include
the above alarming language and lacked clarity and authority (i.e., they featured powerless language).
High-duration radical posting behaviors
The high-duration users, compared to the high-intensity and high-frequency users on average,
posted messages in general and negative messages in particular over an extensive period of time
(2,864.68 days and 2,261.13 days, respectively). High-duration users also posted very negative
messages (156.64 days) on average over a comparable period of time as the high-intensity
posters (154.79 days) in the sub-forum. Interestingly, however, high-duration users posted the
fewest messages (347.51 posts), negative messages (30.71 posts), and very negative messages
(1.14 posts) on average across posting types in the sample, and their messages received the least
negative sentiment scores (-1.85) on average compared with the high-intensity and high-frequency
posters (-2.26 and -1.97, respectively). Here the average sentiment score for the high-duration
posters’ negative messages (-4.21) was less negative than that of the high-intensity posters (-4.29)
but more negative than that of the high-frequency posters (-4.09). But worth highlighting
was that the high-duration posters’ average sentiment score for their very negative messages (-
17.13) was more negative than the very negative messages posted by the high-intensity and high-
frequency users (-16.33 and -16.35, respectively).
A micro-level assessment of the content posted by the high-duration posters revealed
similar patterns in their posting behaviors as the high-intensity and high-frequency users,
especially when their content was compared with the content posted by the least radical users in
the high-duration posting group: negative and very negative messages included the use of
alarming language to describe how their adversary groups were a threat to the white race. But on
the one hand, much of the negative sentiment expressed by this group included descriptive
accounts of their adversaries rather than calls for violence, similar to the negative content posted
by the high-frequency users. On the other hand, the focus of much of their radical sentiment was
on how “the jew is enemy number one”, as one user put it (UserID1008) – which was similar to
the sentiment expressed by the high-intensity users. Such sentiment oftentimes highlighted how
Jews were secretly trying to suppress the white race, with much of the discussion on educating
other forum users about the so-called race war. As three high-duration posters, for example,
explained it:
Never forget that the jewish bankers who control the money control those whom we
elect. When it comes to Jews, they wage war on humanity and use puppet governments to
steal the dignity of every race of people other than jews. (UserID2009)
The world’s richest ethnic group is always playing the victim as an excuse to enslave
their neighbours. The world’s richest ethnic group also is in control of the communist
movement that is supposed to be for the rights of the poor working class people. Of
course, this is just controlled opposition on behalf of the Jewish extremists. Manipulators
indeed. (UserID901)
Anyone in this country who’s opened there mind to the truth of the jewish lie’s and there
control of our country is a threat to them. So they find there crooked way’s to get rid of
us. (UserID328)
Discussion
Researchers are increasingly interested in developing large-scale ways to identify and analyze
radical content online (e.g., Alvari et al. 2019; Grover and Mark 2019; Klausen et al. 2018;
Scrivens et al. 2018), but a systematic assessment of what constitutes radical posting behaviors
in general has been set aside in this emerging space. This study begins to address this gap by
conducting a macro- and micro-level assessment of radical posting
behaviors in a RWE online community. Here three radical posting groups (high-intensity, high-
frequency, and high-duration) were developed using a sentiment-based algorithm that
incorporated traditional criminal career measures (frequency, seriousness, and duration) and
were guided by communication research on social influence. Several conclusions can be drawn
from this study.
First, the high-intensity radical posting group tends to consist of those who post few messages
and are active for a relatively short period online. These authors nonetheless post a high volume
of negative and very negative messages, though again this posting activity unfolds over a short
period of time.
Nonetheless, worth noting here is the powerful, clearly written and detailed nature of most of the
very negative messages posted by the high-intensity users. Communication experts would
describe their messages as vocabulary rich and stylistic – and messages that may be influential to
readers (Huffaker 2010; Ng and Bradac 1993). These messages are also assertive in tone and
feature an array of alarming and emotional language (with the usage of words such as ‘bomb’,
‘kill’, ‘evil’, and ‘threat’) about their adversary groups, oftentimes advocating violence against
Jews specifically – leakage warning behavior that researchers believe can assist in identifying
radical violence online (e.g., Brynielsson et al. 2013; Cohen et al 2014; Johansson et al. 2016).
Authors who post these intense messages appear to be fixated on revealing the “truth” about
Jewish control over the white race, which is a linguistic marker that has been recognized as a key
warning behavior in online settings (e.g., Brynielsson et al. 2013; Cohen et al. 2014; Kaati et al.
2016a). In addition, many of these high-intensity posters are actively trying to reinforce a sense
of community by generating discussions about the so-called white struggle against “Jewish
domination”, which is a communication tactic that Joyce and Kraut (2006) posit is very
influential, provided that the source of the message is perceived as credible by the recipient
(Hollander 1961). Such a community-building tactic is commonly used by the extreme right to
unite their movement online (Adams and Roscigno 2005; Back 2002; Bowman-Grieve 2009;
Scrivens et al. 2018).
Second, high-frequency radical posters are those who post a high volume of content in
general and negative messages in particular, and such content is moderately negative and over a
moderate period of time relative to the other two posting groups. Although these high-frequency
users tend to post few very negative messages in the online community compared to the
high-intensity authors, their radical discourse spans an incredibly long period of time, which may
suggest a level of dedication to the community, according to previous research on social
influence (e.g., Hollander 1961). High-frequency users’ negative messages, too, include
emotional and alarming language which is intended to degrade Jews, Blacks, LGBTQs and a
wider group of adversaries – and not primarily verbal attacks against Jews. But such messages
tend to include descriptive accounts of their adversary groups rather than calls to action or
inciting violence against them. Nonetheless, high-frequency users may be influential in the RWE
forum. Their high volume of engagement may increase their potential to influence others, as has
been found in previous research on social influence (e.g., Butler 2001; Weimann 1994).
Lastly are the high-duration radical users who post few negative and very negative
messages over an extensive period of time, most of which target the Jewish community and raise
concerns about “Jewish corruption.” Importantly, though, these authors are not as active in
the sub-forum as the other two user groups, but when they do post very negative content online,
the messages are amongst the most negative in the sample. This may indicate their long-term
level of commitment to the online community, according to previous studies which suggest that
the time in which an individual spends communicating in a specific place may impact their
ability to gain social influence (Hollander 1961), especially if they are communicating with
emotional and vocabulary-rich sentiment (Burrell and Koper 1998). This is particularly the case
when individuals attempt to motivate others to participate in the discussions, or again, as was
noted earlier, when individuals attempt to build a sense of identity for a particular group (see
Koh et al. 2007; see also Joyce and Kraut 2006). High-duration posters in the sample did just
that: over time they posted radical messages in an attempt to bond with others to overcome the
“evil Jews” – a community-building tactic that is commonly used by the extreme right online
(Bowman-Grieve 2009; Scrivens et al. 2018).
Limitations and future directions
Together, the results of the current study suggest that radical right-wing posting behaviors are
multi-dimensional and include an array of posting patterns that law enforcement officials and
intelligence agencies may deem worthy of future investigation. Although this study represents a
first look at posting behaviors in a RWE discussion forum that may be deemed radical, it is not
without its shortcomings. The sample was limited to one sub-forum of a broader forum, the
sampling procedure was guided by a relatively small list of keywords, and SentiStrength, like
every sentiment analysis program, does not have a classification accuracy of 100 percent (for
more on these limitations, see Scrivens et al. 2018). Beyond these, three additional points are
worth discussing.
First, three radical posting behavior groups were developed for the current study, all of
which tended to capture the online behaviors of those who posted messages over an extensive
period of time based on how the SIRA algorithm was calibrated. There are, however, additional
behavioral groups worth exploring, including radical users who only post messages over a short
period of time. This should be accounted for in future research designs, perhaps by developing a
metric that penalizes a user’s overall radical score based on the amount of time that they are
active online.
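Such a penalty could take several forms, and the study does not specify one. One hedged sketch, which is illustrative only and not the SIRA algorithm's actual formula, discounts a raw radical score by the user's active span so that short-burst radical posters are no longer outranked simply because long-tenured users accumulate activity over years (the function name, half-life parameter, and inputs are all assumptions):

```python
def time_penalized_score(raw_score, active_days, half_life_days=365.0):
    """Discount a raw radical score by how long the user was active.

    Illustrative only -- NOT the SIRA formula. The divisor grows with the
    user's active span, so users who post radical content in a short
    burst retain most of their score, while long-tenured users' scores
    are scaled down.
    """
    return raw_score / (1.0 + active_days / half_life_days)

burst = time_penalized_score(100.0, active_days=10)    # retains most of its score
tenure = time_penalized_score(100.0, active_days=365)  # halved after one year
```

A rate-based variant (radical output per active day) would serve the same purpose; the key design choice is that the metric no longer rewards longevity by default.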
Second, the radical posting behaviors that were described in the study were treated as
distinct from one another, but the online behaviors of a small number of users in the sample cut
across posting groups. To illustrate, two forum users were amongst the most radical across all
three posting groups, and seven users were the most radical across two groups. Having said that,
future work should assess whether this small group comprises the most influential users and perhaps
even opinion leaders. This could be done by including a social network analysis measure – one
that calculates the in-degree centrality of these radical users, as an example. Doing so
would indicate the number of connections that lead to a particular user, thus showing the amount
of attention that they are receiving and, by extension, the amount of influence they have in a
network (see Freeman 1978 for more on centrality in social networks).
Lastly, despite the insightful patterns of posting behaviors that were described in the
current study, it remains unclear as to whether one of the three posting groups is the most radical
and whether one may be perceived as more of a credible threat (i.e., violent offline) for law
enforcement and intelligence agencies. In addition to including a social network measure to
assess users’ level of social influence online, future work is needed to connect the on- and offline
worlds of radical users. Researchers, for example, could combine the online data with offline
data of a sample of known violent extremists in an effort to triangulate their offline experiences
with their online presentation of self, language, and behavior (Scrivens, Gill, and Conway 2020).
This, amongst other research strategies, would provide researchers, practitioners, and
policymakers with new insight into the online discussions, behaviors and actions that can spill
over into the offline realm.
Acknowledgments
The author would like to thank Richard Frank, Garth Davies, Martin Bouchard, Pete Simi,
Barbara Perry, Maura Conway, and Tiana Gaudette for their invaluable feedback on earlier
versions of this article.
References
Abbasi, Ahmed and Hsinchun Chen. 2005. “Applying Authorship Analysis to Extremist-Group
Web Forum Messages.” Intelligent Systems 20(5): 67–75. doi: 10.1109/MIS.2005.81.
Adams, Josh and Vincent J. Roscigno. 2005. “White Supremacists, Oppositional Culture and the
World Wide Web.” Social Forces 84(2): 759–778. doi: 10.1353/sof.2006.0001a.
Agarwal, Swati and Ashish Sureka. 2015. “Using KNN and SVM Based One-Class Classifier for
Detecting Online Radicalization on Twitter.” Proceedings of the International Conference
on Distributed Computing and Internet Technology, Bhubaneswar, India.
Alvari, Hamidreza, Soumajyoti Sarkar, and Paulo Shakarian. 2019. “Detection of Violent
Extremists in Social Media.” Proceedings of the Second International Conference on
Data Intelligence and Security, South Padres Island, Texas, USA.
Back, Les. 2002. “Aryans Reading Adorno: Cyber-Culture and Twenty-First Century Racism.”
Ethnic and Racial Studies 25(4): 628–651. doi: 10.1080/01419870220136664.
Berger, J. M. 2018. Extremism. Cambridge, MA: The MIT Press.
Bermingham, Adam, Maura Conway, Lisa McInerney, Neil O’Hare, and Alan F. Smeaton. 2009.
“Combining Social Network Analysis and Sentiment Analysis to Explore the Potential
for Online Radicalisation.” Proceedings of the 2009 International Conference on
Advances in Social Network Analysis Mining, Athens, Greece.
Bliuc, Ana-Maria, John Betts, Matteo Vergani, Muhammad Iqbal, and Kevin Dunn. 2019.
“Collective Identity Changes in Far-Right Online Communities: The Role of Offline
Intergroup Conflict.” New Media and Society 21(8): 1770–1786.
Blumstein, Alfred, Jacqueline Cohen, Jeffrey A. Roth, and Christy A. Visher. 1986. Criminal
Careers and ‘Career Criminals.’ Washington, DC: National Academy Press.
Bowman-Grieve, Lorraine. 2009. “Exploring “Stormfront:” A Virtual Community of the Radical
Right.” Studies in Conflict and Terrorism 32(11): 989–1007.
Bouchard, Martin, Kila Joffres, and Richard Frank. 2014. “Preliminary Analytical
Considerations in Designing a Terrorism and Extremism Online Network Extractor.” Pp.
171–184 in Computational Models of Complex Systems, edited by Vijay K. Mago and
Vahid Dabbaghian. New York, NY: Springer.
Brynielsson, Joel, Andreas Horndahl, Fredik Johansson, Lisa Kaati, Christian Mårtenson, and
Pontus Svenson. 2013. “Analysis of Weak Signals for Detecting Lone Wolf Terrorists.”
Security Informatics 2(11): 1–15. doi: 10.1186/2190-8532-2-11.
Burnap, Pete, Matthew L. Williams, Luke Sloan… and Alex Voss. 2014. “Tweeting the Terror:
Modelling the Social Media Reaction to the Woolwich Terrorist Attack.” Social Network
Analysis and Mining 4: 1–14. doi: 10.1007/s13278-014-0206-4.
Burrell, Nancy C. and Randal J. Koper. 1998. “The Efficacy of Powerful/Powerless Language on
Attitudes and Source Credibility.” Pp. 203–216 in Persuasion: Advances Through Meta-
Analysis, edited by Mike Allen and Raymond W. Preiss. Cresskill, NJ: Hampton Press.
Bradac, James J., Catherine W. Konsky, and Robert A. Davies. 1976. “Two Studies of Effects of
Linguistic Diversity Upon Judgement of Communicator Attributes and Message
Effectiveness.” Speech Monographs 43(1): 70–79. doi: 10.1080/03637757609375917.
Butler, Brian S. 2001. “Membership Size, Communication Activity, and Sustainability: A
Resource-Based Model of Online Social Structures.” Information Systems Research 12:
346–362. doi: 10.1287/isre.12.4.346.9703.
Charmaz, Kathy. 2006. Constructing Grounded Theory. London, UK: Sage.
Chen, Hsinchun. 2008. “Sentiment and Affect Analysis of Dark Web Forums: Measuring
Radicalization on the Internet.” Proceedings of the 2008 IEEE International Conference
on Intelligence and Security Informatics, Taipei, Taiwan.
Chen, Min, Shiwen Mao, Ying Zhang, and Victor C. M. Leung. 2014. Big Data: Related
Technologies, Challenges and Future Prospects. New York, NY: Springer.
Cohen, Katie, Fredrik Johansson, Lisa Kaati, and Jonas C. Mork. 2014. “Detecting Linguistic
Markers for Radical Violence in Social Media.” Terrorism and Political Violence 26(1):
246–256. doi: 10.1080/09546553.2014.849948.
Conway, Maura, Ryan Scrivens, and Logan Macnair. 2019. “Right-Wing Extremists’ Persistent
Online Presence: History and Contemporary Trends.” The International Centre for
Counter-Terrorism – The Hague 10: 1–24. doi: 10.19165/2019.3.12.
Daniels, Jessie. 2009. Cyber Racism: White Supremacy Online and the New Attack on Civil
Rights. Lanham, MA: Rowman and Littlefield Publishers.
Davidson, Thomas, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. “Automated Hate
Speech Detection and the Problem of Offensive Language.” Proceedings of the Eleventh
International AAAI Conference on Web and Social Media, Palo Alto, California, USA.
Davies, Garth, Martin Bouchard, Edith Wu, Kila Joffres, and Richard Frank. 2015. “Terrorist
and Extremist Organizations’ Use of the Internet for Recruitment.” Pp. 105–127 in Social
Networks, Terrorism and Counter-Terrorism: Radical and Connected, edited by Martin
Bouchard. New York, NY: Routledge.
Dillon, Leevia, Loo Seng Neo, and Joshua D. Freilich. 2019. “A Comparison of ISIS Foreign
Fighters and Supporters Social Media Posts: An Exploratory Mixed-Method Content
Analysis.” Behavioral Sciences of Terrorism and Political Aggression Ahead of Print:
1–24. doi: 10.1080/19434472.2019.1690544.
Ezekiel, Raphael S. 1995. The Racist Mind: Portraits of American Neo-Nazi and Klansmen. New
York, NY: Viking.
Feldman, Ronen. 2013. “Techniques and Applications for Sentiment Analysis.” Communications
of the ACM 56(4): 82–89. doi: 10.1145/2436256.
Ferrara, Emilio. 2017. “Contagion Dynamics of Extremist Propaganda in Social Networks.”
Information Sciences 418–419: 1–12.
Ferrara, Emilio, Wen-Qiang Wang, Onur Varol, Alessandro Flammini, and Aram Galstyan.
2016. “Predicting Online Extremism, Content Adopters, and Interaction Reciprocity.”
Proceedings of the International Conference on Social Informatics, Berlin, Germany.
Figea, Leo, Lisa Kaati, and Ryan Scrivens. 2016. “Measuring Online Affects in a White
Supremacy Forum.” Proceedings of the 2016 IEEE International Conference on
Intelligence and Security Informatics, Tucson, Arizona, USA.
Forgas, Joseph P. 2006. “Affective Influences on Interpersonal Behavior: Towards
Understanding the Role of Affect in Everyday Interactions.” Pp. 269-290 in Affect in
Social Thinking and Behavior, edited by Joseph P. Forgas. New York, NY: Psychology Press.
Futrell, Robert and Pete Simi. 2004. “Free Spaces, Collective Identity, and the Persistence of
U.S. White Power Activism.” Social Problems 51(1): 16–42.
Frank, Richard, Martin Bouchard, Garth Davies, and Joseph Mei. 2015. “Spreading the Message
Digitally: A Look into Extremist Content on the Internet.” Pp. 130–45 in Cybercrime
Risks and Responses: Eastern and Western Perspectives, edited by Russell G. Smith, Ray
C.-C. Cheung, and Lauri Y.-C. Lau. London, UK: Palgrave.
Freeman, Linton C. 1978. “Centrality in Social Networks Conceptual Clarification.” Social
Networks 1(3): 215–239. doi: 10.1016/0378-8733(78)90021-7.
Grover, Ted, and Gloria Mark. 2019. “Detecting Potential Warning Behaviors of Ideological
Radicalization in an Alt-Right Subreddit.” Proceedings of the Thirteenth International
AAAI Conference on Web and Social Media, Munich, Germany.
Hamilton, Mark A. and John. E. Hunter. 1998. “The Effect of Language Intensity on Receiver
Evaluations of Message, Source, and Topic.” Pp. 99–138 in Persuasion: Advances
Through Meta-Analysis, edited by Mike Allen and Raymond W. Preiss. Cresskill, NJ:
Hampton Press.
Hollander, Edwin P. 1961. “Emergent Leadership and Social Influence.” Pp. 30–47 in
Leadership and Interpersonal Behavior, edited by Luigi Petrullo and Bernard M. Bass.
New York, NY: Holt, Rinehard and Winston.
Holtgraves, Thomas and Benjamin Lasky. 1999. “Linguistic Power and Persuasion.” Journal of
Language and Social Psychology 18(2): 196–205. doi: 10.1177/0261927X99018002004.
Hosman, Lawrence A. 2002. “Language and Persuasion.” Pp. 371–390 in The Persuasion
Handbook: Developments in Theory and Practice, edited by James P. Dillard and
Michael Pfau. New York, NY: Sage.
Huffaker, David. 2010. “Dimensions of Leadership and Social Influence in Online
Communities.” Human Communication Research 36(4): 593–617. doi: 10.1111/j.1468-
Hung, Benjamin W. K., Anura P. Jayasumana, and Vidarshana W. Bandara. 2016a. “Detecting
Radicalization Trajectories Using Graph Pattern Matching Algorithms.” Proceedings of
the 2016 IEEE International Conference on Intelligence and Security Informatics,
Tucson, Arizona, USA.
Hung, Benjamin W. K., Anura P. Jayasumana, and Vidarshana W. Bandara. 2016b. “Pattern
Matching Trajectories for Investigative Graph Searches.” Proceedings of the 2016 IEEE
International Conference on Data Science and Advanced Analytics, Montreal, Canada.
Internet Live Stats. 2020. “Total Number of Websites.” Retrieved from
Internet World Stats. 2020. “Internet Growth Statistics.” Retrieved from
Johansson, Fredrik, Lisa Kaati, and Magnus Sahlgren. 2016. “Detecting Linguistic Markers of
Violent Extremism in Online Environments.” Pp. 374–390 in Combating Violent
Extremism and Radicalization in the Digital Era, edited by Majeed Khader, Loo Seng
Neo, Gabriel Ong, Eunice Tan Mingyi, and Jeffery Chin. Hershey, PA: Information
Science Reference.
Joyce, Elisabeth and Robert E. Kraut. 2006. “Predicting Continued Participation in
Newsgroups.” Journal of Computer-Mediated Communication 11(3): 723–747.
Kaati, Lisa, Amendra Shrestha, and Katie Cohen. 2016. “Linguistic Analysis of Lone Offender
Manifestos.” Proceedings of the 2016 IEEE International Conference on Cybercrime and
Computer Forensics, Vancouver, BC, Canada.
Kaati, Lisa, Amendra Shrestha, and Tony Sardella. 2016. “Identifying Warning Behaviors of
Violent Lone Offenders in Written Communications.” Proceedings of the 2016 IEEE
International Conference on Data Mining Workshops, Barcelona, Spain.
Klausen, Jytte, Christopher E. Marks, and Tauhid Zaman. 2018. “Finding Extremists in Online
Social Networks.” Operations Research 66(4): 957–976. doi: 10.1287/opre.2018.1719.
Koh, Joon, Young-Gul Kim, Brian Butler, and Gee-Woo Bock. 2007. “Encouraging
Participation in Virtual Communities.” Communications of the ACM 50(2): 68–73.
Levey, Philippa and Martin Bouchard. 2019. “The Emergence of Violent Narratives in the Life-
Course Trajectories of Online Forum Participants.” Journal of Qualitative Criminal
Justice and Criminology 7(2): 95–121.
Macnair, Logan and Richard Frank. 2018. “Changes and Stabilities in the Language of Islamic
State Magazines: A Sentiment Analysis.” Dynamics of Asymmetric Conflict 11(2): 109–
120. doi: 10.1080/17467586.2018.1470660.
Mei, Joseph and Richard Frank. 2015. “Sentiment Crawling: Extremist Content Collection
through a Sentiment Analysis Guided Web-Crawler.” Proceedings of the International
Symposium on Foundations of Open Source Intelligence and Security Informatics, Paris,
Ng, Sik Hung and James J. Bradac. 1993. Power in Language: Verbal Communication and
Social Influence. Newbury Park, CA: Sage.
O’Keefe, Daniel J. 2002. Persuasion: Theory and Research (Second Edition). Thousand Oaks,
CA: Sage.
Park, Andrew J., Brian Beck, Darrick Fletche, Patrick Lam, and Herbert H. Tsang. 2016.
“Temporal Analysis of Radical Dark Web Forum Users.” Proceedings of the 2016
IEEE/ACM International Conference on Advances in Social Networks Analysis and
Mining, San Francisco, CA, USA.
Perry, Barbara. 2001. In the Name of Hate: Understanding Hate Crimes. New York, NY:
Perry, Barbara and Ryan Scrivens. 2019. Right-Wing Extremism in Canada. Cham, Switzerland:
Sageman, Marc. 2014. “The Stagnation in Terrorism Research.” Terrorism and Political
Violence 26(4): 565–580. doi: 10.1080/09546553.2014.895649.
Scrivens, Ryan, Garth Davies, and Richard Frank. 2017. “Searching for Signs of Extremism on
the Web: An Introduction to Sentiment-Based Identification of Radical Authors.”
Behavioral Sciences of Terrorism and Political Aggression 10(1): 39–59.
Scrivens, Ryan, Garth Davies, and Richard Frank. 2018. “Measuring the Evolution of Radical
Right-Wing Posting Behaviors Online.” Deviant Behavior 41(2): 216–232.
Scrivens, Ryan, Paul Gill, and Maura Conway. 2020. “The Role of the Internet in Facilitating
Violent Extremism and Terrorism: Suggestions for Progressing Research.” Pp. 1–20 in
The Palgrave Handbook of International Cybercrime and Cyberdeviance, edited by
Thomas J. Holt and Adam Bossler. London, UK: Palgrave.
Scrivens, Ryan and Richard Frank. 2016. “Sentiment-based Classification of Radical Text on the
Web.” Proceedings of the 2016 European Intelligence and Security Informatics Conference,
Uppsala, Sweden.
Scrivens, Ryan, Tiana Gaudette, Garth Davies, and Richard Frank. 2019. “Searching for
Extremist Content Online Using The Dark Crawler and Sentiment Analysis.” Pp. 179–
194 in Methods of Criminology and Criminal Justice Research, edited by Mathieu
Deflem and Derek M. D. Silva. Bingley, UK: Emerald.
Simi, Pete and Robert Futrell. 2015. American Swastika: Inside the White Power Movement’s
Hidden Spaces of Hate (Second Edition). Lanham, MD: Rowman and Littlefield
Southern Poverty Law Center. 2014. “White Homicide Worldwide.” Retrieved from
Thelwall, Mike and Kevan Buckley. 2013. “Topic-Based Sentiment Analysis for the Social Web:
The Role of Mood and Issue-Related Words.” Journal of the American Society for
Information Science and Technology 64(8): 1608–1617. doi: 10.1002/asi.22872.
Tremblay, Richard E., Robert O. Pihl, Frank Vitaro, and Patricia L. Dobkin. 1994. “Predicting
Early Onset of Male Antisocial Behavior from Preschool Behavior.” Archives of General
Psychiatry 51(9): 732–739. doi: 10.1001/archpsyc.1994.03950090064009.
Vergani, Matteo and Ana-Maria Bliuc. 2015. “The Evolution of the ISIS’ Language: A
Quantitative Analysis of the Language of the First Year of Dabiq Magazine.” Sicurezza,
Terrorismo e Società 2: 7–20.
Warr, Mark. 1989. “What is the Perceived Seriousness of Crimes?” Criminology 27(4): 795–822.
doi: 10.1111/j.1745-9125.1989.tb01055.x.
Weimann, Gabriel. 1994. The Influentials: People Who Influence People. Albany, NY: State
University of New York Press.
Williams, Matthew L. and Pete Burnap. 2015. “Cyberhate on Social Media in the Aftermath of
Woolwich: A Case Study in Computational Criminology and Big Data.” British Journal
of Criminology 56(2): 211–238. doi: 10.1093/bjc/azv059.
Wojcieszak, Magdalena. 2010. “‘Don’t Talk to Me’: Effects of Ideologically Homogeneous Online
Groups and Politically Dissimilar Offline Ties on Extremism.” New Media and Society
12(4): 637–655. doi: 10.1177/1461444809342775.
Yoo, Youngjin and Maryam Alavi. 2004. “Emergent Leadership in Virtual Teams: What do
Emergent Leaders do?” Information and Organization 14(1): 27–58. doi:
... Instead, research has overwhelmingly focused on identifying "radicals" online (e.g., Scrivens et al., 2020b; Scrivens, 2020), and not those who adhere to radical beliefs but are violent as well (Wolfowicz et al., 2021). In addition, research on violent online political extremism has been concerned with the extent to which individuals are immersed in violent extremism online, with a particular focus on the relationship between the impact of extremist online content and violent radicalization (see Scrivens et al., 2020a). ...
... Researchers, practitioners, and policymakers have paid close attention to the presence of terrorists and extremists online in recent years, with a particular emphasis on the digital patterns and behaviors of the extreme right (see Conway et al., 2019; see also Holt et al., 2020). It should come as little surprise that researchers have focused on the activities of RWEs on various platforms, including on websites and discussion forums (e.g., Back, 2002; Bliuc et al., 2019; Burris et al., 2000; De Koster & Houtman, 2008; Futrell & Simi, 2004; Holt et al., 2020; Scrivens et al., 2020b; Scrivens, 2020; Wojcieszak, 2010), mainstream social media sites including Facebook (e.g., Ekman, 2018; Nouri & Lorenzo-Dus, 2019; Scrivens & Amarasingam, 2020; Stier et al., 2017), Twitter (e.g., Ahmed & Pisoiu, 2020; Berger, 2016; Berger & Strathearn, 2013; Burnap & Williams, 2015; Graham, 2016), and YouTube (e.g., Ekman, 2014; Munger & Philips, 2020; O'Callaghan et al., 2014), fringe platforms including 4chan (e.g., Finkelstein et al., 2018; Papasavva et al., 2020) and Gab (e.g., Zannettou et al., 2018; Zhou et al., 2019), and digital applications such as TikTok (e.g., Weimann & Masri, 2020) and Telegram (e.g., Guhl & Davey, 2020; Urman & Katz, 2020). But these studies, similar to criminological research on the causes of violent extremism and terrorism in general, lack comparison groups, despite a significant need to focus on comparative analysis and consider how violent extremists differ from non-violent extremists (Becker, 2019; Chermak et al., 2013; Freilich & LaFree, 2015; Jasko et al., 2017; Knight et al., 2019; LaFree et al., 2018). ...
... But what the limited empirical research does suggest about the link between timing of online engagement and posting frequency is that there is a social organization and community structure to those who are embedded in extremist sub-forums, and in turn they tend to post for extensive periods of time. Posters from the extreme right, for example, tend to show a high level of commitment to the online community and, by extension, the radical cause (Scrivens, 2020). They also attempt to engage with and motivate others to participate in fanatical discussions and build a sense of community (Scrivens, 2020), and they are often the 'super contributors' (i.e., those who dominate the discussions) and those whose first post dates back significantly further than the average user, both in forums (Kleinberg et al., 2020) and sub-forums (Scrivens, 2020). ...
There is an ongoing need for researchers, practitioners, and policymakers to detect and assess online posting behaviors of violent extremists prior to their engagement in violence offline, but little is empirically known about their online behaviors generally or the differences in their behaviors compared with nonviolent extremists who share similar ideological beliefs particularly. In this study, we drew from a unique sample of violent and nonviolent right-wing extremists to compare their posting behaviors in the largest White supremacy web-forum. We used logistic regression and sensitivity analysis to explore how users’ time of entry into the lifespan of an extremist sub-forum and their cumulative posting activity predicted their violence status. We found a number of significant differences in the posting behaviors of violent and nonviolent extremists which may inform future risk factor frameworks used by law enforcement and intelligence agencies to identify credible threats online.
... Although solutions to this problem exist, such as automatic spelling correction or substitution based on a dictionary of word variations (see e.g., Han & Baldwin, 2007; Clark & Araki, 2011), this problem may not be adequately circumvented in all cases. Moreover, even though some custom dictionaries can be used to extract specific terms used by right-wing or jihadi extremists (e.g., Abbasi & Chen, 2007; Figea et al., 2016; Scrivens, 2020), the specific jargon might be highly sensitive to linguistic adaptation, where users change or introduce new terms to evade filters on platforms (see e.g., van der Vegt et al., 2019). ...
... A similar approach studied three types of radical right-wing posting behaviour on the Stormfront forum, which focuses on users whose radical posting behaviour can be characterised as high-intensity, high-frequency, or high-duration (Scrivens, 2020). A similar sentiment analysis procedure was conducted as in Scrivens et al. (2018). ...
... Lastly, high-duration users generally posted over a long period of time (2,864 days), with very negative messages similar to high-intensity posters (155 days). Scrivens (2020) suggested that the long duration of general posting possibly illustrates high commitment to the forum. ...
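The three behavioral measures running through these passages (posting intensity, frequency, and duration) reduce to simple arithmetic over an author's dated, sentiment-scored posts. A minimal sketch, assuming each author's record arrives as (date, sentiment score) pairs with more negative scores marking more hostile content; the function and field names here are illustrative, not taken from the studies themselves:

```python
from datetime import date

def posting_measures(posts):
    """Summarize one author's posting record.

    `posts` is a list of (date, sentiment_score) pairs, with more
    negative scores indicating more hostile content. Returns the three
    measures discussed above: intensity (mean sentiment score),
    frequency (total post count), and duration (days between the
    author's first and last post).
    """
    dates = sorted(d for d, _ in posts)
    scores = [s for _, s in posts]
    return {
        "intensity": sum(scores) / len(scores),   # mean sentiment
        "frequency": len(posts),                  # total posts
        "duration": (dates[-1] - dates[0]).days,  # active span in days
    }

# Example: three posts spread over several years.
author = [
    (date(2010, 1, 1), -0.8),
    (date(2010, 6, 1), -0.5),
    (date(2017, 11, 5), -0.9),
]
print(posting_measures(author))
```

An author would then be characterized as high-intensity, high-frequency, or high-duration by comparing each measure against the distribution across all authors in the sub-forum, rather than against fixed cut-offs.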
Language alluding to possible violence is widespread online, and security professionals are increasingly faced with the issue of understanding and mitigating this phenomenon. The volume of extremist and violent online data presents a workload that is unmanageable for traditional, manual threat assessment. Computational linguistics may be of particular relevance to understanding threats of grievance-fuelled targeted violence on a large scale. This thesis seeks to advance knowledge on the possibilities and pitfalls of threat assessment through automated linguistic analysis. Based on in-depth interviews with expert threat assessment practitioners, three areas of language are identified which can be leveraged for automation of threat assessment, namely, linguistic content, style, and trajectories. Implementations of each area are demonstrated in three subsequent quantitative chapters. First, linguistic content is utilised to develop the Grievance Dictionary, a psycholinguistic dictionary aimed at measuring concepts related to grievance-fuelled violence in text. Thereafter, linguistic content is supplemented with measures of linguistic style in order to examine the feasibility of author profiling (determining gender, age, and personality) in abusive texts. Lastly, linguistic trajectories are measured over time in order to assess the effect of an external event on an extremist movement. Collectively, the chapters in this thesis demonstrate that linguistic automation of threat assessment is indeed possible. The concluding chapter describes the limitations of the proposed approaches and illustrates where future potential lies to improve automated linguistic threat assessment. Ideally, developers of computational implementations for threat assessment strive for explainability and transparency. 
Furthermore, it is argued that computational linguistics holds particular promise for large-scale measurement of grievance-fuelled language, but is perhaps less suited to prediction of actual violent behaviour. Lastly, researchers and practitioners involved in threat assessment are urged to collaboratively and critically evaluate novel computational tools which may emerge in the future.
... To close this gap, we focus on how radicalization dynamics 2 unfold in online environments. Many studies have been concerned with the presence of extremist content online, either on specific websites (Scrivens, 2020), social media (Ahmed et al., 2020), or fringe platforms (Hine et al., 2017). However, these studies neither account for discourse radicalization as a process nor conceptualize specific indicators. ...
Societal crises, such as the COVID-19 pandemic, produce societal instability and create a fertile ground for radicalization. Extremists exploit such crises by distributing disinformation to amplify uncertainty and distrust among the public. Based on these developments, this study presents a longitudinal analysis of far-right communication on fringe platforms, demonstrating radicalization dynamics. Public Telegram communication of three movements active in Germany (QAnon, Identitarian Movement, Querdenken) was analyzed through a quantitative content analysis of 4500 messages posted to nine channels between March 2020 and February 2021. We study the movements' discourse using several indicators of radicalization dynamics. The increasing prevalence of conspiracy narratives, anti-elitism, political activism, and support for violence indicate radicalization dynamics in these movements’ online communication. However, these dynamics varied within the movements. It can be concluded that, when studying radicalization dynamics online, it is crucial to not just focus on one single indicator, but consider longitudinal changes across several indicators, ideally comparing different movements.
... The second, covering 2011–2013, was collected by other scholars (Scrivens et al., 2019) and provided to the authors. The nature of the Stormfront forum enables complete data collection of each forum; both sources utilized web-crawlers in the forum collecting all the open-access forums and sub-forums, thus ensuring the data utilized in the study is all of the forums in the specified time period (Scrivens, 2021; Scrivens et al., 2019, 2020, 2021). After screening for posts including the term "vaccin*," we retained a corpus of 8892 posts for analysis. ...
Introduction Research has indicated a growing resistance to vaccines among U.S. conservatives and Republicans. Following past successes of the far-right in mainstreaming health misinformation, this study tracks almost two decades of vaccine discourse on the extremist, white nationalist (WN) online message-board Stormfront. We examine the argumentative repertoire around vaccines on the forum, and whether it assimilated to or challenged common arguments for and against vaccines, or extended it in ways unique to the racist WN agenda. Methods We use a mixed-methods approach, combining unsupervised machine learning of 8892 posts including the term “vaccin*“, published on Stormfront between 2001 and 2017. We supplemented the computational analysis with a manual coding of randomly sampled 500 posts, evaluating the prevalence of pro- and anti-vaccine sentiment, previously identified pro- and anti-vaccine arguments, and WN-specific arguments. Results Discourse was dynamic, increasing around specific events, such as outbreaks and following legal debates about vaccine mandates. We identified four themes: conspiracies, science, race and white innovation. The prominence of themes over time was relatively stable. Our manual coding identified levels of anti-vaccine sentiment that were much higher than found in the past on mainstream social media. Most anti-vaccine posts relied on common anti-vaccine tropes and not on WN conspiracy theories. Pro-vaccination posts, however, were supported by unique race-based arguments. Conclusion We find a high volume of anti-vaccine sentiment among WN on Stormfront, but also identify unique pro-vaccine arguments that echo the group's racist ideology. Public health implication As with past health-related conspiracy theories, high levels of anti-vaccine sentiment in online far-right sociotechnical information systems could threaten public health, especially if it ‘spills-over’ to mainstream media. 
Many pro-vaccine arguments on the forum relied on racist, WN reasoning, thus preventing the authors from recommending the use of these unethical arguments in future public health communications.
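The screening step described above (retaining only posts that contain the stem "vaccin*") amounts to a simple pattern match over the corpus. A hedged sketch, assuming posts are plain dicts with a `text` field; the names are illustrative, not the authors' actual pipeline:

```python
import re

# Case-insensitive match for the stem "vaccin" followed by any word
# characters: vaccine, vaccines, vaccination, vaccinated, ...
VACCINE_RE = re.compile(r"\bvaccin\w*", re.IGNORECASE)

def screen_posts(posts):
    """Return only the posts whose text mentions the vaccin* stem."""
    return [p for p in posts if VACCINE_RE.search(p["text"])]

corpus = [
    {"id": 1, "text": "Vaccination mandates were debated again."},
    {"id": 2, "text": "Nothing relevant here."},
    {"id": 3, "text": "New vaccines rolled out this year."},
]
print([p["id"] for p in screen_posts(corpus)])  # → [1, 3]
```

A stem-based screen of this kind trades precision for recall: it retains every inflection of the keyword, leaving finer-grained sentiment and argument coding to later manual or computational steps, as in the study's mixed-methods design.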
... Both nations share historical, cultural and technological synergies and idiosyncrasies. For instance, Australia and Canada share similar yet distinct histories of colonialism and post-colonial ethnocentrism; they both embarked on multi-decade campaigns to establish multiculturalism into the national social and political framework using similar legislation; each nation has expressed similar aspirations to welcome non-Anglo-European immigration and the lesbian, gay, bisexual, transgender, and queer (LGBTQ) community while being met with variations of socio-political resistance; and both have witnessed a growth in online extremism and instances of right-wing terrorism related to resident right-wing extremist groups (Ambrose & Mudde, 2015; Hutchinson, 2019a, 2019c; Poynting & Perry, 2007; Scrivens, 2020). These synergies and idiosyncrasies provide avenues for social mobilization and shape the ideological and moral inclinations of right-wing extremist groups and movements in each country, including their propensity for and preferred method of violence against targeted identities (Perry & Scrivens, 2016b; Peucker et al., 2018). ...
Right-wing extremist groups harness popular social media platforms to accrue and mobilize followers. In recent years, researchers have examined the various themes and narratives espoused by extremist groups in the United States and Europe, and how these themes and narratives are employed to mobilize their followings on social media. Little, however, is comparatively known about how such efforts unfold within and between right-wing extremist groups in Australia and Canada. In this study, we conducted a cross-national comparative analysis of over eight years of online content found on 59 Australian and Canadian right-wing group pages on Facebook. Here we assessed the level of active and passive user engagement with posts and identified certain themes and narratives that generated the most user engagement. Overall, a number of ideological and behavioral commonalities and differences emerged in regard to patterns of active and passive user engagement, and the character of three prevailing themes: methods of violence, and references to national and racial identities. The results highlight the influence of both the national and transnational context in negotiating which themes and narratives resonate with Australian and Canadian right-wing online communities, and the multidimensional nature of right-wing user engagement and social mobilization on social media.
In the course of ongoing media change and the continual diversification of available online offerings, not only is everyday life increasingly shifting into the digital realm, but so are the activities of extremist actors. In light of technological and societal developments (e.g., the growing readiness for violence at Covid-19 demonstrations), fears that the internet may foster radicalization have moved to the center of academic and public debate. The internet's penetration of everyday life is therefore also central to the analysis, discussion, and prevention of radicalization dynamics. The exact role of the internet in radicalization processes depends on various factors. Drawing on a systematic literature review of 216 publications on online radicalization, this article provides an overview of the research field. The literature is systematized on three levels of analysis: (1) distinguishing mechanisms of influence at the micro, meso, and macro levels; (2) modeling radicalization dynamics along the communication process (communicators, content, medium, recipients); and (3) a differentiated examination of different digital spaces in the context of their usage potentials (affordances) for extremist actors. Building on this, research gaps and potentials for future studies as well as recommendations for practitioners and policymakers are derived.
Although there is an ongoing need for law enforcement and intelligence agencies to identify and assess the online activities of violent extremists prior to their engagement in violence offline, little is empirically known about their online posting patterns generally or differences in their online patterns compared to non-violent extremists who share similar ideological beliefs particularly. Even less is empirically known about how their online patterns compare to those who post in extremist spaces in general. This study addresses this gap through a content analysis of postings from a unique sample of violent and non-violent right-wing extremists as well as from a sample of postings within a sub-forum of the largest white supremacy web-forum, Stormfront. Here the existence of extremist ideologies, personal grievances, and violent extremist mobilization efforts were quantified within each of the three sample groups. Several notable differences in posting patterns were observed across samples, many of which may inform future risk factor frameworks used by law enforcement and intelligence agencies to identify credible threats online. This study concludes with a discussion of the implications of the analysis, its limitations, and avenues for future research.
What makes a neo-Nazi become a convinced anti-fascist or a radical left-winger become a devout Salafist? How do they manage to fit into their new environment and gain acceptance as a former enemy? The people featured in this book made highly puzzling journeys, first venturing into extremist milieus and then deciding to switch to the opposite side. By using their extraordinary life-stories and their own narratives, this book provides the first in-depth analysis of how and why people move between seemingly opposing extremist environments that can sometimes overlap and influence each other. It aims to understand how these extremists manage to convince their new group that they can be trusted, which also allows us to dive deep into the psychology of extremism and terrorism. This fascinating work will be of immense value to those studying radicalization and counter-radicalization in terrorism studies, social psychology and political science.
This policy note highlights the importance of both identifying and examining the online behaviors of violent and non-violent extremists in preventing and countering violent extremism (P/CVE) and provides researchers, practitioners, and policymakers with a number of recommendations for detecting and analyzing the online behaviors of violent and non-violent extremists in the future.
This policy brief traces how Western right-wing extremists have exploited the power of the internet from early dial-up bulletin board systems to contemporary social media and messaging apps. It demonstrates how the extreme right has been quick to adopt a variety of emerging online tools, not only to connect with the like-minded, but to radicalise some audiences while intimidating others, and ultimately to recruit new members, some of whom have engaged in hate crimes and/or terrorism. Highlighted throughout is the fast pace of change of both the internet and its associated platforms and technologies, on the one hand, and the extreme right, on the other, as well as how these have interacted and evolved over time. Underlined too is the persistence, despite these changes, of right-wing extremists' online presence, which poses challenges for effectively responding to this activity moving forward.
Purpose – This chapter examines how sentiment analysis and web-crawling technology can be used to conduct large-scale data analyses of extremist content online. Methods/approach – The authors describe a customized web-crawler that was developed for the purpose of collecting, classifying, and interpreting extremist content online and on a large scale, followed by an overview of a relatively novel machine learning tool, sentiment analysis, which has sparked the interest of some researchers in the field of terrorism and extremism studies. The authors conclude with a discussion of what they believe is the future applicability of sentiment analysis within the online political violence research domain. Findings – In order to gain a broader understanding of online extremism, or to improve the means by which researchers and practitioners “search for a needle in a haystack,” the authors recommend that social scientists continue to collaborate with computer scientists, combining sentiment analysis software with other classification tools and research methods, as well as validate sentiment analysis programs and adapt sentiment analysis software to new and evolving radical online spaces.
Despite the increasing citizen engagement with socio-political online communities, little is known about how such communities are affected by significant offline events. Thus, we investigate here the ways in which the collective identity of a far-right online community is affected by offline intergroup conflict. We examine over 14 years of online communication between members of Stormfront Downunder, the Australian sub-forum of the global white supremacist community Stormfront. We analyse members’ language use and discourse before and after significant intergroup conflict in 2015, culminating in local racist riots in Sydney, Australia. We found that the riots were associated with significant changes in the collective beliefs of the community (as captured by members’ most salient concerns and group norms), emotions and consensus within the community. Overall, the effects of the local riots were manifest in a reinvigorated sense of purpose for the far-right community with a stronger anti-Muslim agenda.
Researchers have previously explored how right-wing extremists build a collective identity online by targeting their perceived “threat,” but little is known about how this “us” versus “them” dynamic evolves over time. This study uses a sentiment analysis-based algorithm that adapts criminal career measures, as well as semi-parametric group-based modeling, to evaluate how users’ anti-Semitic, anti-Black, and anti-LGBTQ posting behaviors develop on a sub-forum of the most conspicuous white supremacy forum. The results highlight the extent to which authors target their key adversaries over time, as well as the applicability of a criminal career approach in measuring radical posting trajectories online.
This paper compares the social media posts of ISIS foreign fighters to those of ISIS supporters. We examine a random sample of social media posts made by violent foreign fighters (n = 14; 2000 posts) and non-violent supporters (n = 18; 2000 posts) of the Islamic State of Iraq and Syria (ISIS) (overall n = 4,000 posts), from 2009 to 2015. We used a mixed-method study design. Our qualitative content analyses of the 4,000 posts identified five themes: Threats to in-group, societal grievances, pursuit for significance, religion, and commitment issues. Our quantitative comparisons found that the dominant themes in the foreign fighters' online content were threats to in-group, societal grievances, and pursuit for significance, while religion and commitment issues were dominant themes in the supporters' online content. We also identified thematic variations reflecting individual attitudes that emerged during the 2011-2015 period, when major geopolitical developments occurred in Syria and Iraq. Finally, our quantitative sentiment-based analysis found that the supporters (10 out of 18; 56%) posted more radical content than the foreign fighters (5 out of 14; 36%) on social media.
This book comprehensively examines right-wing extremism (RWE) in Canada, discussing the lengthy history of violence and distribution, ideological bases, actions, organizational capacity and connectivity of these extremist groups. It explores the current landscape, the factors that give rise to and minimise these extremist groups, strategies for countering these groups, and the emergence of the ‘Alt-Right’. It draws on interviews with law enforcement officials, community activists, and current and former right-wing activists to inform and offer practical advice, paired with analyses of open source intelligence on the state of the RWE movement in Canada. The historical and contemporary contours of right-wing extremism in Canada are situated within the social, political, and cultural landscape that has shaped the movement. It will be of particular interest to students and researchers of criminology, sociology, social justice, terrorism and political violence.
This study applies the semi-automated method of sentiment analysis in order to examine any quantifiable changes in the linguistic, topical, or narrative patterns that are present in the English-language Islamic State-produced propaganda magazines Dabiq (15 issues) and Rumiyah (10 issues). Based on a sentiment analysis of the textual content of these magazines, it was found that the overall use of language has remained largely consistent between the two magazines and across a timespan of roughly three years. However, while the majority of the language within these magazines is consistent, a small number of significant changes with regard to certain words and phrases were found. Specifically, the language of Islamic State magazines has become increasingly hostile towards certain enemy groups of the organization, while the language used to describe the Islamic State itself has become significantly more positive over time. In addition to identifying the changes and stabilities of the language used in Islamic State magazines, this study endeavours to test the effectiveness of the sentiment analysis method as a means of examining and potentially countering extremist media moving forward.
Recent terrorist attacks carried out on behalf of ISIS on American and European soil by lone wolf attackers or sleeper cells remind us of the importance of understanding the dynamics of radicalization mediated by social media communication channels. In this paper, we shed light on the social media activity of a group of twenty-five thousand users whose association with ISIS online radical propaganda has been manually verified. By using a computational tool known as dynamic activity-connectivity maps, based on network and temporal activity patterns, we investigate the dynamics of social influence within ISIS supporters. We finally quantify the effectiveness of ISIS propaganda by determining the adoption of extremist content in the general population and draw a parallel between radical propaganda and epidemics spreading, highlighting that information broadcasters and influential ISIS supporters generate highly-infectious cascades of information contagion. Our findings will help generate effective countermeasures to combat the group and other forms of online extremism.