Citation: Scrivens, Ryan. (2020). Exploring Radical Right-Wing Posting Behaviors Online. Deviant Behavior. doi:
10.1080/01639625.2020.1756391.
Exploring Radical Right-Wing Posting Behaviors Online
Ryan Scrivens, School of Criminal Justice, Michigan State University
Abstract
In recent years, researchers have shown a vested interest in developing advanced information
technologies, machine-learning algorithms, and risk-assessment tools to detect and analyze
radical content online, with increased attention on identifying violent extremists or measuring
digital pathways of violent radicalization. Yet overlooked in this evolving space has been a
systematic examination of what constitutes radical posting behaviors in general. This study uses
a sentiment analysis-based algorithm that adapts criminal career measures – and is guided by
communication research on social influence – to develop and describe three radical posting
behaviors (high-intensity, high-frequency, and high-duration) found on a sub-forum of the most
conspicuous right-wing extremist forum. The results highlight the multi-dimensional nature of
radical right-wing posting behaviors, many of which may inform future risk factor frameworks
used by law enforcement and intelligence agencies to identify credible threats online.
Purpose
In an increasingly digital world, identifying digital signs of extremism sits at the top of the
priority list for many law enforcement and intelligence agencies (Cohen, Johansson, Kaati, and
Mork 2014; Scrivens, Davies, and Frank 2017), with the current focus of government-funded
research on the development of advanced information technologies and risk assessment tools to
identify and counter the threat of violent extremism on the Internet (Sageman 2014). Within this
context, criminologists, amongst others, have argued that successfully identifying radical content
online (i.e., behaviors, patterns, or processes), on a large scale, is the first step in reacting to it
(e.g., Bouchard, Joffres, and Frank 2014; Davies, Bouchard, Wu, Joffres, and Frank 2015; Frank,
Bouchard, Davies, and Mei 2015; Williams and Burnap 2015). But in the last 15 years alone, it is
estimated that the number of individuals with access to the Internet has increased fourfold
(Internet World Stats 2020), from over 1 billion users in 2005 to more than 4.5 billion as of 2020
(Internet Live Stats 2020). With all of these new users, more information has been generated,
leading to a flood of online data. In response, researchers are shifting from manual identification
of specific online content to algorithmic techniques to do similar yet larger-scale tasks. This is a
symptom of what some have described as the ‘big data’ phenomenon – i.e., a massive increase in
the amount of data that is readily available, particularly online (Chen, Mao, Zhang, and Leung
2014).
It is becoming increasingly difficult, if not impossible, to manually search for violent extremists, potentially violent extremists, or even users who post radical content online, because the Internet contains an overwhelming amount of information. These new conditions
have necessitated guided data filtering methods that can side-step – and perhaps one day even
replace – the taxing manual methods that traditionally have been used to identify relevant
information online (Brynielsson, Horndahl, Johansson, Kaati, Martenson, and Svenson 2013;
Cohen et al. 2014). As a result of this changing landscape, a number of governments around the
globe have engaged researchers to develop advanced machine learning algorithms to identify and
counter extremism through the collection and analysis of large-scale data made available online
(Chen et al. 2014). Whether this work involves finding radical users of interest (e.g., Klausen,
Marks, and Zaman 2018), measuring digital pathways of radicalization to violence (e.g., Hung,
Jayasumana, and Bandara 2016a), or detecting virtual indicators that may prevent future terrorist
attacks (e.g., Johansson, Kaati, and Sahlgren 2016), the urgent need to pinpoint extremist content
online is one of the most significant challenges faced by law enforcement agencies and security
officials worldwide (Sageman 2014).
It should come as little surprise, then, that scholars in recent years have shown a vested
interest in developing large-scale ways to identify and analyze radical content online. To
illustrate, researchers have used machine learning algorithms to detect extreme language (e.g.,
Davidson, Warmsley, Macy, and Weber 2017), websites (e.g., Bouchard et al. 2014; Mei and
Frank 2015; Scrivens and Frank 2016) and users online (e.g., Klausen et al. 2018; Scrivens et al.
2017), as well as to measure levels of online propaganda (e.g., Burnap, Williams, Sloan… and
Voss 2014) and cyberhate (e.g., Williams and Burnap 2015) following a terrorism incident, and
to evaluate how radical discourse evolves over time online (e.g., Bliuc, Betts, Vergani, Iqbal, and
Dunn 2019; Figea, Kaati, and Scrivens 2016; Levey and Bouchard 2019; Macnair and Frank
2018; Park, Beck, Fletche, Lam, and Tsang 2016; Vergani and Bliuc 2015; Scrivens, Davies, and
Frank 2018). Machine learning has also been used to detect violent extremist language (e.g.,
Abbasi and Chen 2005) and users online (e.g., Alvari, Sarkar and Shakarian 2019; Brynielsson et
al. 2013; Cohen et al. 2014; Johansson et al. 2016; Kaati, Shrestha, and Cohen 2016; Kaati,
Shrestha, and Sardella 2016), as well as to measure levels of – or propensity towards – violent
radicalization online (e.g., Agarwal and Sureka 2015; Bermingham, Conway, McInerney,
O’Hare, and Smeaton 2009; Chen 2008; Ferrara 2017; Ferrara, Wang, Varol, Flammini, and
Galstyan 2016; Grover and Mark 2019; Hung et al. 2016a; Hung, Jayasumana, and Bandara
2016b). Yet despite these important contributions, what has been overlooked in this evolving space is a systematic examination of what constitutes radical posting behaviors in general. Taking a step back
from measuring levels of online radicalization, for example, to investigate radical posting
behaviors found in an already radical online community may provide law enforcement and
intelligence agencies with new insight into what constitutes online behaviors worthy of future
investigation. It may also provide a useful baseline for key stakeholders to in turn identify
credible threats (i.e., those who engage in violence offline) or inform future risk factor
frameworks. This study seeks to address this gap via a sentiment analysis-based algorithm to
identify and describe radical posting behaviors on a right-wing extremist (RWE) discussion
forum.
Before proceeding, it is necessary to outline how right-wing extremism is conceptualized
in the current study. Following Berger (2018), this study is guided by the view that RWEs—like
all extremists—structure their beliefs on the basis that the success and survival of the in-group is
inseparable from the negative acts of an out-group and, in turn, they are willing to assume both
an offensive and defensive stance in the name of the success and survival of the in-group (see
Berger 2018). Right-wing extremism is thus defined as a racially, ethnically, and/or sexually
defined nationalism, which is typically framed in terms of white power and/or white identity
(i.e., the in-group) that is grounded in xenophobic and exclusionary understandings of the
perceived threats posed by some combination of non-whites, Jews, Muslims, immigrants,
refugees, members of the LGBTQ community, and feminists (i.e. the out-group(s)) (Perry and
Scrivens 2019; Conway, Scrivens, and Macnair 2019).
The current study
The purpose of this study was to use a sentiment analysis tool and an algorithm to identify and
describe radical right-wing posting behaviors found online. The following is a step-by-step guide
of this process.
Web-crawler and forum data
All open source content on a RWE sub-forum of Stormfront was analyzed for the current study,
which included 124,058 posts made by 7,014 authors between September 12, 2001 and October
12, 2016.
(The sub-forum went live online on September 12, 2001; an assessment of the first posting on the sub-forum suggests that it was not launched in response to the 9/11 terror attacks.)
Data was captured using a custom-written computer program that was designed to
collect vast amounts of information online (for more on the web-crawler, see Scrivens, Gaudette,
Davies, and Frank 2019).
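The crawler itself is custom software described in Scrivens et al. (2019) and is not reproduced here. As a rough illustration of the kind of collection step this study relies on, the following Python sketch fetches forum pages and writes posts to a CSV file; the base URL, page range, and CSS selectors are hypothetical placeholders rather than details of the actual tool.

    """Minimal forum-scraping sketch (illustrative only; placeholders noted)."""
    import csv
    import time

    import requests
    from bs4 import BeautifulSoup

    BASE_URL = "https://example-forum.org/sub-forum"  # hypothetical sub-forum URL
    PAGES = range(1, 6)                               # hypothetical page range

    def scrape_page(url):
        """Fetch one page and return (author, date, text) tuples."""
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        posts = []
        for post in soup.select("div.post"):          # hypothetical post container
            author = post.select_one(".author")       # hypothetical selectors
            date = post.select_one(".date")
            body = post.select_one(".message")
            if author and date and body:
                posts.append((author.get_text(strip=True),
                              date.get_text(strip=True),
                              body.get_text(" ", strip=True)))
        return posts

    with open("posts.csv", "w", newline="", encoding="utf-8") as handle:
        writer = csv.writer(handle)
        writer.writerow(["author", "date", "text"])
        for page in PAGES:
            for row in scrape_page(f"{BASE_URL}?page={page}"):
                writer.writerow(row)
            time.sleep(2)  # polite crawl delay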
Stormfront is the oldest and most visited web-forum of the RWE movement (Conway et
al. 2019). It is also the largest and most active RWE forum in the world (Bliuc et al. 2019), and it
has hosted some of the deadliest adherents since its inception in 1995. The Southern Poverty
Law Center (2014), for example, has described Stormfront as “a magnet and breeding ground for
the deadly and the deranged”, claiming that its members have been responsible for
approximately 100 murders since the site came online. Stormfront has also been referred to as an
“echo chamber for hate” (see Simi and Futrell 2015) and a “hornet’s nest” for extremists to
become more extreme (see Wojcieszak 2010). Stormfront is indeed a reasonable starting place to
explore online posting behaviors that may be considered ‘radical’.
Keywords and selection procedure
To identify radical discussions and, by extension, radical right-wing posting behaviors in the
sub-forum, the first step was to determine radical topics found in the data that would be
measured. A list of keywords was therefore developed that accounted for discussions associated
with three primary adversary groups of the extreme right: (1) Jews; (2) Blacks; and, (3) lesbian,
gay, bisexual, transgender, and queer (LGBTQ) communities. Research suggests that these three
adversary groups are widely discussed and demonized in RWE discussion forums (e.g., Adams
and Roscigno 2005; Bowman-Grieve 2009; Futrell and Simi 2004), amongst many other online
platforms used by the extreme right. While these are by no means the only adversary groups targeted by the extreme right, Jewish, Black, and LGBTQ communities have historically been the primary opponents of the RWE movement (Daniels 2009). Jews, for example, have been subject to
extensive criticism by the extreme right. They have been labeled “the source of all evil” and “the spawn of the Devil himself”, and accused of conspiring to extinguish the white race and breed it out of existence through “Jew-controlled” government, financial institutions, and media (i.e., the Zionist Occupation Government (ZOG) conspiracy) (Ezekiel 1995). Black communities, too, have been
a primary target of much of the hateful sentiment expressed by the extreme right. Blacks have
been constructed as “mud races” and the descendants of animals created before Adam and Eve;
“savages” who viciously rape white women and take jobs away from white communities; and the
foot soldiers of “conspiring Jews” (Ezekiel 1995). Adherents of this male-dominated movement
have also categorized anyone who is not heterosexual as “contaminated” and “impure”, maintaining not only that the gay rights movement is killing the traditional white family and culturally destroying the white race, but also that gays are responsible for the contemporary AIDS epidemic (Perry 2001).
For each of the three adversary groups, a list of keywords was developed by drawing
from extensive lists of slur words found online. Each list was standardized to include an equal number of words: for each list, the frequency with which every keyword appeared in the data was calculated and the inflection point of that frequency distribution was identified, and the inflection points were then averaged across the three lists. The average inflection point was 42 keywords, so each final list comprised 42 words randomly drawn from its associated larger list (for more on this procedure, see Scrivens et al. 2018).
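The exact inflection-point procedure is detailed in Scrivens et al. (2018). As a minimal sketch of the general logic, the following Python code counts keyword frequencies, approximates each list's inflection point with a simple 'elbow' heuristic (the rank with the largest second difference, an assumption made here purely for illustration), averages those points across lists, and randomly draws that many keywords from each candidate list.

    import random
    from collections import Counter

    def keyword_frequencies(posts, keywords):
        """Count how often each candidate keyword appears across all post texts."""
        counts = Counter()
        for text in posts:
            lowered = text.lower()
            for word in keywords:
                counts[word] += lowered.count(word.lower())
        return counts

    def inflection_rank(counts):
        """Crudely approximate the inflection point of the ranked frequency curve
        as the rank with the largest second difference (an 'elbow' heuristic)."""
        freqs = sorted(counts.values(), reverse=True)
        if len(freqs) < 3:
            return len(freqs)
        second_diffs = [freqs[i - 1] - 2 * freqs[i] + freqs[i + 1]
                        for i in range(1, len(freqs) - 1)]
        return second_diffs.index(max(second_diffs)) + 1

    def standardize_lists(candidate_lists, posts, seed=42):
        """Average the inflection points across lists, then randomly draw that many
        keywords from each candidate list (42 words per list in the study)."""
        ranks = [inflection_rank(keyword_frequencies(posts, words))
                 for words in candidate_lists.values()]
        size = round(sum(ranks) / len(ranks))
        rng = random.Random(seed)
        return {group: rng.sample(words, min(size, len(words)))
                for group, words in candidate_lists.items()}

    # Usage (hypothetical inputs):
    # final_lists = standardize_lists(
    #     {"jewish": jewish_slurs, "black": black_slurs, "lgbtq": lgbtq_slurs},
    #     post_texts)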
Sentiment analysis
Next, the context surrounding each keyword was systematically evaluated using sentiment
analysis software. Also known as ‘opinion mining’, sentiment analysis is a data collection and
analytic method that allows for the application of subjective labels and classifications. It can
evaluate the opinions of individuals by organizing data into distinct classes and sections and assigning each individual’s sentiment a polarity score (i.e., a positive, negative, or neutral score) (see Feldman 2013).
SentiStrength, which is an established sentiment analysis program that has been widely
used by criminologists in terrorism and extremism studies (see Scrivens et al. 2019), was utilized
for the current study, as it allows for a systematic analysis of a user’s discussion that could be
considered ‘radical’ in online settings (see Scrivens et al. 2017, 2018). To illustrate, it allows for
a keyword-focused method of determining sentiment near a specified keyword (see Thelwall and
Buckley 2013). Equally important is another key feature of SentiStrength: polarity scores are adjusted by textual features that can influence the score assigned to the text, such as active and powerful language, booster words, negative words, repeated letters, repeated negative terms, antagonistic words, punctuation, and other distinctive characters suited to studying an online context. In theory, the higher the polarity score assigned to a piece of text, the more likely the text is to include intense opinions (for more on the functional capacity of SentiStrength, see Thelwall and Buckley 2013).
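SentiStrength itself is a standalone (Java-based) program, so its interface is not reproduced here. The following Python sketch illustrates the keyword-focused idea using NLTK's VADER as a stand-in scorer: only the text surrounding a target keyword is scored, rather than the whole post. The window size and the scorer are illustrative assumptions, not the study's configuration.

    """Keyword-window sentiment scoring sketch (VADER as a stand-in for SentiStrength)."""
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    analyzer = SentimentIntensityAnalyzer()

    def keyword_window_score(text, keyword, window=10):
        """Return sentiment scores for the words surrounding each keyword hit."""
        tokens = text.lower().split()
        scores = []
        for i, token in enumerate(tokens):
            if keyword.lower() in token:
                context = " ".join(tokens[max(0, i - window): i + window + 1])
                scores.append(analyzer.polarity_scores(context)["compound"])
        return scores

    # Example: a passage is treated as negative around a keyword if the compound
    # score of the surrounding window falls below zero.
    post = "They are ruining everything and I hate what they have done."
    print(keyword_window_score(post, "hate"))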
Sentiment-based Identification of Radical Authors
Scrivens and colleagues (2017) developed an algorithm, Sentiment-based Identification of
Radical Authors (SIRA), which was capable of accounting for specific aspects of a forum author’s posting activity that may be deemed ‘radical’. In short, the algorithm computes an
overall ‘radical score’ – out of 40 points – by tallying the following components of an author’s
online activity:
(1) Volume of negative posts, which calculates the number of negative posts for a given
forum user (10 points).
(2) Severity of negative posts, which is a metric that calculates the number of very negative
posts for a given user (10 points).
(3) Duration of negative posts, which calculates the time between the first and last dates on which a given forum member posted negative messages (10 points).
(4) Average sentiment score percentile, which calculates the average sentiment score for all
posts by a given forum member (10 points).
These four unique dimensions of ‘seriousness’ are quantified to identify radical individuals
within an online forum, whereby the higher a user’s radical score, the more likely they are to be
discussing extremely negative content in their posts (see Scrivens et al. 2017 for more on the
SIRA components). Most of these measures are also drawn from traditional criminal career
measures. The volume of negative posts component, for example, is similar to the concept of
offending frequency (see Blumstein, Cohen, Roth and Visher 1986); the severity of negative
posts component also reflects a traditional criminal career dimension – ‘seriousness of crime’
(see Warr 1989); and the duration of negative posting component borrows from the general
concept behind the traditional criminal career dimension ‘duration of crime’ (e.g., Blumstein et
al. 1986; Tremblay, Pihl, Vitaro, and Dobkin 1994). (The average sentiment score percentile does not have a criminal career equivalent, but in the online context it seemed important to be able to differentiate between two authors who on average posted negative messages, where one author’s negative messages were on average more negative than the other’s, especially when both parties posted similar volumes of negative content over similar periods of time on a forum.) The SIRA algorithm has been used in a
number of online contexts, including to identify radical users (see Scrivens et al. 2017), to
determine levels of radical content posted by users (see Dillon, Neo, and Freilich 2019), to
measure users’ radical opinions over time (see Park et al. 2016), and to measure the evolution of
radical posting behaviors (see Scrivens et al. 2018).
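As a concrete illustration of how a SIRA-style score might be computed, the following Python sketch tallies the four components per author and scales each to its 10-point weight using percentile ranks. The percentile scaling and the cutoff used for 'very negative' posts are assumptions made for illustration; the exact scoring rules are detailed in Scrivens et al. (2017).

    import pandas as pd

    VERY_NEGATIVE = -5  # assumed cutoff for a 'very negative' post (illustrative)

    def sira_scores(posts, weights=(10, 10, 10, 10)):
        """Compute a SIRA-style radical score (max 40) per author.

        `posts` is assumed to be a DataFrame with columns: author, date, sentiment.
        Each component is converted to a percentile rank across authors and scaled
        to its weight; this scaling is an assumption made here for illustration.
        """
        posts = posts.assign(date=pd.to_datetime(posts["date"]))
        neg = posts[posts["sentiment"] < 0]

        per_author = pd.DataFrame({
            "volume": neg.groupby("author").size(),
            "severity": neg[neg["sentiment"] <= VERY_NEGATIVE].groupby("author").size(),
            "duration": neg.groupby("author")["date"]
                           .agg(lambda d: (d.max() - d.min()).days),
            "avg_sentiment": posts.groupby("author")["sentiment"].mean(),
        }).fillna(0)

        score = (
            per_author["volume"].rank(pct=True) * weights[0]
            + per_author["severity"].rank(pct=True) * weights[1]
            + per_author["duration"].rank(pct=True) * weights[2]
            + per_author["avg_sentiment"].rank(pct=True, ascending=False) * weights[3]
        )
        return score.sort_values(ascending=False)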
Sentiment-based Identification of Radical Authors re-calibrated
There are various definitions that can be used to describe someone as a ‘radical’ based on their
online posting behaviors. One user, for example, may post a high volume of negative messages
over a moderate period of time in a forum. Online communication research suggests that an
individual’s social influence is associated with the volume of their communications (see
Huffaker 2010; see also Yoo and Alavi 2004), especially when the source of a message is
perceived as trustworthy in a particular setting (see Hollander 1961). Butler (2001), for example,
suggests that an individual’s online communication activity may produce a social structure that
facilitates information sharing which can in turn influence social behavior. Weimann (1994)
similarly argues that an individual who engages in more communication activity may increase
their chances of reaching a wider audience and extending their potential to influence others.
There may also be a user who espouses very radical views for a short period of time in a
forum. Communication research suggests that an individual’s linguistic choices dictate their
ability to persuade others (e.g., Bradac, Konsky, and Davies 1976; Holtgraves and Lasky 1999;
Hosman 2002; Huffaker 2010; Ng and Bradac 1993). Three areas of research have explored the
effects of message content on the influence of the source. First is the clarity of the message and
an author’s ability to write with ‘vocabulary richness’ (Bradac et al. 1976; Hosman 2002). Poor language, on the other hand, tends to diminish the credibility and influence of the source. In other
words, messages that are perceived as unintelligent are simply perceived as less credible (see
O’Keefe 2002). Second is the powerful nature of the language. Previous research has shown that
the use of powerful or powerless language influences how the source of the message is perceived
(see Ng and Bradac 1993). Powerful language is direct and assertive and conveys confidence and certainty (Burrell and Koper 1998), while powerless language includes, but is not limited to, fragmented
sentences, hesitations, use of hedges and tag questions (Holtgraves and Lasky 1999). Third is the
intensity of language. Here, the consensus is that intense messages include two characteristics:
(1) some stylistic feature of language, and (2) a level of emotionality (Ng and Bradac 1993;
Hamilton and Hunter 1998). In short, these messages can be more influential because they grab
the attention of the recipient (see Forgas 2006). This is particularly the case for those messages
that reinforce a sense of community and encourage others to participate in the discussions (Joyce
and Kraut 2006), with online RWE communities indeed being one of them (see Bowman-Grieve
2009).
Lastly, there may be a user who posts moderately negative material over an incredibly
long period in a forum. According to online communication studies, there is a link between the
amount of time that an individual participates in an online community and their ability to gain
social influence there (Koh, Kim, Butler and Bock 2007). Such influence, however, requires that
they are perceived as trustworthy, which again can be built via the length of time that is spent
participating there, as it shows a level of commitment to a particular group or social setting (see
Hollander 1961).
While these concepts of online social influence serve as a practical starting place to
conceptualize posting behaviors that may be radical, each of the above posting behaviors raises the question of what constitutes radical posting behavior in an online setting. SIRA is capable of being adjusted to measure each of these types of posting behavior (Scrivens et al. 2017). To illustrate, in
order to narrow in on the abovementioned author types in the RWE sub-forum, a straightforward
adaptation of the SIRA algorithm was applied to the sub-forum data, and three separate analyses
were conducted. First, to identify and describe the posting behaviors of those who posted a high
volume of radical messages in the data, SIRA was re-calibrated so that the ‘volume of negative
posts’ parameter, and not the other three SIRA parameters, was computed. Second, to locate a
group of users who posted extremely negative messages in the sub-forum and in turn describe
their posting behaviors, the SIRA algorithm accounted for the ‘severity of negative posts’ parameter only. Lastly, to identify a group of users who posted radical messages over an
extensive period of time in the sub-forum and describe their posting behaviors, only the ‘duration
of negative posts’ parameter was calculated.
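Under the same assumptions as the sketch above, the three single-parameter analyses can be expressed simply by zeroing the weights of the unused components; posts is the hypothetical DataFrame of sub-forum posts.

    # Re-calibrated runs, using the sira_scores() sketch above: each analysis
    # weights a single component (volume, severity, duration, avg. sentiment).
    high_frequency = sira_scores(posts, weights=(10, 0, 0, 0))   # volume only
    high_intensity = sira_scores(posts, weights=(0, 10, 0, 0))   # severity only
    high_duration = sira_scores(posts, weights=(0, 0, 10, 0))    # duration only

    # The 100 top-ranked authors in each run form the three posting groups.
    top_100 = {name: scores.head(100).index for name, scores in
               [("HFR", high_frequency), ("HIR", high_intensity),
                ("HDR", high_duration)]}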
Analysis and coding procedure
Once a sample of forum users was identified for each of the three radical posting groups, a
macro- and micro-level assessment of their posting behaviors was conducted as follows:
Macro-level assessment. The posting behaviors of the 100 most radical users for each of
the three radical posting groups (n = 300) were compared across groups. Quantitative measures
for this descriptive comparison included users’ mean number of posts, posting scores, and mean
posting duration (in days) for all of their posts, negative posts, and very negative posts.
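As a minimal sketch of this descriptive comparison, assuming the per-author measures have already been computed into a data frame with hypothetical column names, the group means reported in Table 1 can be reproduced with a simple aggregation.

    import pandas as pd

    def macro_comparison(author_metrics: pd.DataFrame) -> pd.DataFrame:
        """Average each per-author measure within the three posting groups.

        `author_metrics` is assumed to have one row per sampled author, a 'group'
        column ('HIR', 'HFR', or 'HDR'), and numeric columns such as n_posts,
        n_negative, n_very_negative, mean_score, and negative_duration_days.
        """
        return author_metrics.groupby("group").mean(numeric_only=True).round(2)

    # Usage: macro_comparison(author_metrics) yields a table of group means
    # analogous to Table 1.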
Micro-level assessment. A thematic content analysis was conducted on the content posted
by the 50 most radical and 50 least radical users within and across each of the three posting
groups (n = 300). Data were analyzed via thematic coding, initially utilizing a constructivist
grounded theory approach which allows for existing literature to be drawn upon to validate codes
(see Charmaz 2006). As codes were later grouped into themes, central emergent themes –
composed of forum users describing similar views – were identified, and less relevant data were
omitted (i.e., selective coding).
Results
The results are divided into three sections: a macro- and micro-level assessment of each of the three radical posting groups (high-intensity, high-frequency, and high-duration). Here a number of
manually selected posts are included for the purpose of highlighting the nature of the behaviors
that were uncovered and the key themes that emerged in the data.
High-intensity radical posting behaviors
Several macro- and micro-level patterns were uncovered during the analysis of the high-intensity
radical (HIR) posting group’s (hereafter referred to as the ‘high-intensity posters’) online
presence. From a macro-level perspective, an assessment of this group’s online behaviors
revealed that, on average, they posted the highest volume of very negative messages (2.27 posts)
compared to users in the high-frequency radical (HFR) posting group (hereafter referred to as the ‘high-frequency posters’) and the high-duration radical (HDR) posting group (hereafter referred to as the ‘high-duration posters’). For these high-intensity posters, both the average
posting score (-2.26) and the average negative message scores (-4.49) were more negative than
the scores for the high-frequency and high-duration posters. Interestingly, though, the average posting duration (1,402.46 days), negative posting duration (936.80 days), and very negative posting duration (154.79 days) for the high-intensity posters were much lower than the posting durations for the two other poster groups. In addition, the high-intensity authors on
average posted the lowest volume of messages (348.70 posts) and negative messages (39.69
posts) compared to the other two poster groups (see Table 1).
Table 1. Descriptive comparisons of radical posting behaviors (n = 300).

                                    HIR            HFR            HDR
n (%)                               100 (33.33)    100 (33.33)    100 (33.33)
Mean number of posts
  All posts                         348.70         512.84         347.51
  Negative posts                    39.69          51.03          30.71
  Very negative posts               2.27           1.71           1.14
Mean posting score
  All posts                         -2.26          -1.97          -1.85
  Negative posts                    -4.49          -4.09          -4.21
  Very negative posts               -16.33         -16.35         -17.13
Mean posting duration (days)
  All posts                         1,402.46       1,836.00       2,864.68
  Negative posts                    936.80         1,331.41       2,261.13
  Very negative posts               154.79         170.80         156.64

Note: HIR = high-intensity radical, HFR = high-frequency radical, HDR = high-duration radical.
A micro-level assessment of the content posted by the high-intensity posters revealed that
the majority of their very negative messages included alarming topics of discussion: mass
homicide (described with an array of keywords, such as ‘kill’, ‘murder’, ‘slain’, ‘execution’, and
‘genocide’), violence (with keywords such as ‘destruction’, ‘fight’, ‘attack’, ‘beat’, and ‘hurt’),
weapons (with keywords such as ‘guns’, ‘knives’, and ‘bombs’), sexual abuse (with keywords
such as ‘rape’, ‘rapist’, and ‘molester’), hate (with keywords such as ‘hate’ and ‘hatred’), racism
(with keywords such as ‘racism’ and ‘racist’), and the antichrist (with keywords such as ‘the
devil’, ‘Satan’, and ‘satanic’). Intertwined within much of this intense discourse was the fierce
condemnation of the Jewish, Black, and LGBTQ communities, with Blacks oftentimes being
framed as criminals, LGBTQ communities being linked to the spreading of sexually transmitted
diseases, and Jews being described as “the seed of all evil”, as one user put it (UserID2209).
However, most frequently identified within these discussions was a ZOG conspiracy theory in
which Jews were described as controlling all aspects of the government in the Western world in
an effort to overthrow the white race. Here, commonly found within the high-intensity users’
very negative discussions were concerns that the dominant discourse associated with race
relations in the West was focused exclusively on so-called white supremacy when, in fact, the
“real threat” was ZOG. As three of the highest-intensity users in the sample best summarized this
sentiment:
the term White Supremacists is without a doubt, the Zionist propaganda / smear
machine’s #1 favorite. It’s funny I have never, ever, even once, heard them use the term,
Jewish Supremacists or even Zionist Supremacists. My interpretation of the term White
Supremacists is any white person who does not praise Israel, & welcome the extinction of
his/her race. (UserID1787)
(All author names were assigned pseudonyms to protect user anonymity, and all online posts are quoted verbatim.)
media is largely Jewish owned and will never call itself racist despite being the very
definition of it by intent. […] The media has made itself responsible for defining racism;
they define it in terms as anti-White as possible. This is typical […] liberal cowardice;
unwilling to admit that blacks are violent criminals […] they simply do not want people
to grab ahold of the fact that this was a worthless black murdering a White female in a
completely senseless act of black violence. THAT is what doesn’t fit their agenda.
(UserID221)
Zionist agents and fanatics freely and criminally accuse their targets of being Nazis while
they themselves utilize the very same techinques of the master Nazi propagandists to
condemn, defame, personally attack, and stereotype all those they fear may oppose or
question their tactics. Perhaps the most heart-breaking, evil side of Zionism is how it
hides behind - and unmercifully uses - the great religion of Judaism...as if it were a some
kind of blunt instrument used to deceive, threaten and then to beat those who differ with
Zionism into submission. (UserID4032)
Further uncovered during the micro-level assessment of the very negative content posted
by high-intensity posters was that this group tended to encourage other forum users to band
together to overcome a perceived struggle against what were described as “Jew-run nations”, or as
one high-intensity user put it:
our [Western] countries are run by Jews […] they just want to destroy the white race, fear
of us rising again and others learning the real truths about them. The Jews play dirty, no
shock there, but how can you be surprised when the Jews […] worship and follow the
devil! (UserID213)
In fact, integrated within much of the discussions from the high-intensity posters were strategic
calls for action – with all necessary force. As one of the highest-intensity users put it,
“Zionist warmongers mean to imprison us and destroy every vestige of our being. We must fight
back before the rot gets worse” (UserID4211). Here the general sentiment was that if the white
race did not respond accordingly, then it would be further oppressed by Jews. As
two high-intensity posters further explained it:
Only in White societies do people espouse other people’s (mostly Jewish) cultural
inventions as their own. […] In the West, we wear clothing designed by homosexual
Jews in Paris, made in 3rd World sweat shops usually owned in part by the international
Jewish Clothier oligopolies. The Jews’ Media show us Whites how to groom and their
partners in ethnic genocide sell us all our toiletries (Kosher Tax and all). […] Then after
years of dressing just right and looking in the mirror to see if we look sufficiently like
their favorite sitcom star, we begin to develop skin diseases and other cancers. Then
come the pharmaceuticals. Prescribed by the medical [Jewish] Mafia, the sheople not
only dutifully swallow them to supposedly take care of the problems, they fight for
subsidized drug plans to make their deaths more affordable. The drugs are in effect the
final Nail in the coffin for many. We must stick together to resist the Jew. (UserID1904)
If we fail to choose struggle over surrender, life over death, destiny over oblivion, it will
not be due to the strength of our would-be destroyers, but to our own weakness, not to
their virtue but to our vice. We will have destroyed ourselves by our own self-destructive
perversion of ultimate ethics, overcome by the enemy ideas implanted in our minds. We
will survive only if we find the ultimate ethical wisdom, nobility, virtue and strength
within us to prevail and live. (UserID1554)
These powerful messages were posted using assertive language, tended to be clearly written, and
were posted with a sense of urgency – a linguistic pattern that became even more apparent during
an analysis of the least radical authors in the sample, as they tended to post unintelligent,
ambiguous and poorly written messages about their adversary groups, oftentimes with passive
and powerless language.
Worth highlighting was that the very negative messages posted by the high-intensity
posters differed from the negative messages they posted in the sub-forum. In particular, their
negative posts included alarming language similar to the abovementioned, but much less so than
their very negative posts. In addition, the negative messages tended to include a broader range of
talking points than those found in the very negative posts. That is, high-intensity users who
posted negative messages tended to draw broadly on a number of social issues (such as crime
rates and immigration), which were oftentimes followed by minority groups being blamed for
these issues. Such discourse, however, tended to consist of descriptive accounts of how
minorities had “wronged” the white race, with Jews at the forefront of their anger, but without a direct call to action against these groups. A comparison of the content posted by the
least radical users in the sample lent support for this finding, as the least radical authors tended to
post vague messages about their adversary groups.
High-frequency radical posting behaviors
A macro-level assessment of the content posted by the high-frequency posters revealed that, on
average, they posted a much higher volume of messages (512.84 posts) and negative messages
(51.03 posts) than the high-intensity and high-duration poster groups. High-frequency posters
were also those who on average posted messages in general and negative messages in particular
over a longer period of time (1,836.00 days and 1,331.41 days, respectively) than the high-
intensity group, but over a much shorter period than the high-duration posters. In addition, high-
frequency users posted a lower volume of very negative messages (1.71 posts) than the high-
intensity posters on average, but high-frequency users posted a higher volume of very negative
messages than the high-duration posters on average. Worth adding was that high-frequency
posters were those who on average posted messages in general and negative messages in
particular that were less negative (-1.97 and -4.09, respectively) than those posted by the high-
intensity posters in the sample. But while the very negative messages that were posted by the
high-frequency users received similar sentiment scores as the high-intensity group on average (-
16.35 and -16.33, respectively), high-frequency users were those who posted very negative
messages over the longest period of time on average (170.80 days) compared with the high-
intensity and the high-duration posters (154.79 days and 156.64 days, respectively).
A micro-level assessment of the content posted by the high-frequency users revealed a
number of notable patterns in their posting behaviors. To illustrate, the negative posts from the
high-frequency users were similar to those of the high-intensity users in that the content was
overwhelmingly focused on describing, oftentimes with alarming language, how minority groups
were a threat to the white race, or as two high-frequency users noted:
It was a fine neighbourhood, apparently, until the blacks came. After they showed up,
black teenagers were constantly hanging around outside of everyone’s apartments,
anything left outside was stolen or destroyed, they threatened anyone who passed by
them (F** you, m**f**er, that sort of garbage). They were constantly making noise,
blasting ghetto music, dribbling their monkey balls, spray painting their illegible hideous
graffiti, utterly uncivilized, utterly disrespectful of the neighbours. And, of course, they
are daring people to confront them; always looking for trouble. Complaints about them
were completely useless. And the problem got so bad that my friends simply picked up
and moved. When blacks move into a neighbourhood, that neighbourhood is as good as
dead. Like terminal cancer in a body. (UserID328)
We got immigrants playing the race card when it suits them, now the fags have the gay
card. there mental defects! and should be behind bars just like how we keep other
mental people! for are saftey! (UserID4054)
Similarly, the very negative messages posted by the high-frequency users were comparable to the
very negative messages posted by the high-intensity users: the messages themselves included a
wide range of alarming language, such as words associated with weapons (‘guns’, ‘knives’, and
‘bombs’), racism (‘racism’, ‘racist’, and racism against whites in general), and the antichrist (‘the
devil’, ‘Satan’, and ‘satanic’) – all of which tended to be targeted at minority groups in general.
However, unlike the very negative messages posted by high-intensity users, the high-frequency
users’ very negative posts tended to include detailed and descriptive accounts of their adversaries
(with an array of terms, such as ‘vile’, ‘filth’, ‘stupid’, ‘ignorant’, and other similar terms) rather
than calls for violence against them. As but three examples of this type of very negative
sentiment:
Let’s see... gangs of Blacks (unskilled immigrants, brought into the country recklessly by
the Liberal government) are killing other Blacks (unskilled immigrants, brought into the
country recklessly by the Liberal government) while other Blacks (unskilled immigrants,
brought into the country recklessly by the Liberal government) get uppity about it and kill
still more Blacks (unskilled immigrants, brought into the country recklessly by the
Liberal government) with illegal guns bought from other Blacks (unskilled immigrants,
brought into the country recklessly by the Liberal government). See a trend here? OUR
kids? Not a chance in Hell. (UserID1654)
When AIDS first hit America by storm in the 1980s, the largest impact was felt in the city
of San Francisco due to its heavily gay population. At first they weren’t sure what was
happening but they did manage to figure out that there was a disease peculiar to
homosexual males that destroyed individuals’ immunity systems and resulted in a rather
terrible cancer-like death. Rather than throw tax-payer money at cute little promotional
materials, they did the one intelligent thing that you would expect a city’s administration
to do. They closed down the bath-houses. […] They knew that unprotected homosexual
sex was spreading the disease, they knew that homosexual sex was rampant in these
establishments and that, in fact, homosexual promiscuity was the entire purpose of these
establishments. (UserID5832)
Homosexual behavior is abnormal and sickening to any normal mind. One does not have
to have a religion in order to get turned off by these social deviants. Nobody ever taught
me a thing about homosexuals, but when I met my first one, I was so turned off.
(UserID6001)
This type of sentiment was not uncovered in the messages posted by the least radical users in the
sample. While they did discuss their adversaries in the sub-forum, the messages did not include
the above alarming language, and the messages lacked clarity and authority (i.e., powerless
language).
High-duration radical posting behaviors
The high-duration users, compared to the high-intensity and high-frequency users on average,
posted messages in general and negative messages in particular over an extensive period of time
(2,864.68 days and 2,261.13 days, respectively). High-duration users also posted very negative
messages (156.64 days) on average over a comparable period of time as the high-intensity
posters (154.79 days) in the sub-forum. Interestingly, though, high-duration users posted the fewest messages (347.51 posts), negative messages (30.71 posts), and very negative messages (1.14 posts) on average across posting types in the sample, and their messages
received the least negative sentiment scores (-1.85) on average compared with the high-intensity
and high-frequency posters (-2.26 and -1.97, respectively). Here the average sentiment score for
the high-duration posters’ negative messages (-4.21) was less negative than the high-intensity
posters (-4.29) but more negative than the high-frequency posters (-4.09). But worth highlighting
was that the high-duration posters’ average sentiment score for their very negative messages (-
17.13) was more negative than the very negative messages posted by the high-intensity and high-
frequency users (-16.33 and -16.35, respectively).
A micro-level assessment of the content posted by the high-duration posters revealed
similar patterns in their posting behaviors as the high-intensity and high-frequency users,
especially when their content was compared with the content posted by the least radical users in
the high-duration posting group: negative and very negative messages included the use of
alarming language to describe how their adversary groups were a threat to the white race. But on
the one hand, much of the negative sentiment expressed by this group included descriptive
accounts of their adversaries rather than calls for violence, similar to the negative content posted
by the high-frequency users. On the other hand, the focus of much of their radical sentiment was
on how “the jew is enemy number one”, as one user put it (UserID1008) – which was similar to
the sentiment expressed by the high-intensity users. Such sentiment oftentimes highlighted how
Jews were secretly trying to suppress the white race, with much of the discussion on educating
other forum users about the so-called race war. As three high-duration posters, for example,
explained it:
Never forget that the jewish bankers who control the money control those whom we
elect. When it comes to Jews, they wage war on humanity and use puppet governments to
steal the dignity of every race of people other than jews. (UserID2009)
The world’s richest ethnic group is always playing the victim as an excuse to enslave
their neighbours. The world’s richest ethnic group also is in control of the communist
movement that is supposed to be for the rights of the poor working class people. Of
course, this is just controlled opposition on behalf of the Jewish extremists. Manipulators
indeed. (UserID901)
Anyone in this country who’s opened there mind to the truth of the jewish lie’s and there
control of our country is a threat to them. So they find there crooked way’s to get rid of
us. (UserID328)
Discussion
Researchers are increasingly interested in developing large-scale ways to identify and analyze
radical content online (e.g., Alvari et al. 2019; Grover and Mark 2019; Klausen et al. 2018;
Scrivens et al. 2018), but what has been set aside in this emerging space is a systematic
assessment of what constitutes radical posting behaviors in general. This study is an attempt to
begin addressing this gap by conducting a macro- and micro-level assessment of radical posting
behaviors in a RWE online community. Here three radical posting groups (high-intensity, high-
frequency, and high-duration) were developed using a sentiment-based algorithm that
incorporated traditional criminal career measures (frequency, seriousness, and duration) and
were guided by communication research on social influence. Several conclusions can be drawn
from this study.
First, the high-intensity radical posting group tends to be those who post few messages
over a relatively short period online. These authors, however, post a high volume of negative and
very negative messages, but again this posting activity unfolds over a short period of time.
Nonetheless, worth noting here is the powerful, clearly written and detailed nature of most of the
very negative messages posted by the high-intensity users. Communication experts would
describe their messages as vocabulary rich and stylistic – and messages that may be influential to
readers (Huffaker 2010; Ng and Bradac 1993). These messages are also assertive in tone and
feature an array of alarming and emotional language (with the usage of words such as ‘bomb’,
‘kill’, ‘evil’, and ‘threat’) about their adversary groups, oftentimes advocating violence against
Jews specifically – leakage warning behavior that researchers believe can assist in identifying
radical violence online (e.g., Brynielsson et al. 2013; Cohen et al 2014; Johansson et al. 2016).
Authors who post these intense messages appear to be fixated on revealing the “truth” about
Jewish control over the white race, which is a linguistic marker that has been recognized as a key
warning behavior in online settings (e.g., Brynielsson et al. 2013; Cohen et al. 2014; Kaati et al.
2016a). In addition, many of these high-intensity posters are actively trying to reinforce a sense
of community by generating discussions about the so-called white struggle against “Jewish
domination”, which is a communication tactic that Joyce and Kraut (2006) posit is very
influential, provided that the source of the message is perceived as credible by the recipient
(Hollander 1961). Such a community-building tactic is commonly used by the extreme right to
unite their movement online (Adams and Roscigno 2005; Back 2002; Bowman-Grieve 2009;
Scrivens et al. 2018).
Second, high-frequency radical posters are those who post a high volume of content in
general and negative messages in particular, and such content is moderately negative and posted over a
moderate period of time relative to the other two posting groups. Although these high-frequency
users tend to post few very negative messages in the online community compared to the high-
intensity authors, their radical discourse spans an incredibly long period of time, which may
suggest a level of dedication to the community, according to previous research on social
influence (e.g., Hollander 1961). High-frequency users’ negative messages, too, include emotional and alarming language intended to degrade Jews, Blacks, LGBTQ communities, and a wider group of adversaries – rather than primarily verbal attacks against Jews. But such messages
tend to include descriptive accounts of their adversary groups rather than calls to action or incitements to violence against them. Nonetheless, high-frequency users may be influential in the RWE
forum. Their high volume of engagement may increase their potential to influence others, as has
been found in previous research on social influence (e.g., Butler 2001; Weimann 1994).
Lastly are the high-duration radical users who post few negative and very negative
messages over an extensive period of time, most of which target the Jewish community and raise
concerns about “Jewish corruption.” Importantly, though, these authors are not as active in the sub-forum as the other two user groups, but when they do post very negative content online,
the messages are amongst the most negative in the sample. This may indicate their long-term
level of commitment to the online community, according to previous studies which suggest that
the amount of time an individual spends communicating in a specific place may impact their ability to gain social influence (Hollander 1961), especially if they are communicating with emotional and vocabulary-rich sentiment (Burrell and Koper 1998). This is particularly the case
when individuals attempt to motivate others to participate in the discussions, or again, as was
noted earlier, when individuals attempt to build a sense of identity for a particular group (see
Koh et al. 2007; see also Joyce and Kraut 2006). High-duration posters in the sample did just
that: over time they posted radical messages in an attempt to bond with others to overcome the
“evil Jews” – a community-building tactic that is commonly used by the extreme right online
(Bowman-Grieve 2009; Scrivens et al. 2018).
Limitations and future directions
Together, the results of the current study suggest that radical right-wing posting behaviors are
multi-dimensional and include an array of posting patterns that law enforcement officials and
intelligence agencies may deem worthy of future investigation. Although this study represents a
first look at posting behaviors in a RWE discussion forum that may be deemed radical, it is not
without its shortcomings. Beyond the sample being limited to one sub-forum of a broader forum, the relatively small list of keywords guiding the sampling procedure, and the application of SentiStrength which, like every sentiment analysis program, does not have a classification accuracy of 100 percent (for more on these limitations, see Scrivens et al. 2018), three additional points are worth discussing.
First, three radical posting behavior groups were developed for the current study, all of
which tended to capture the online behaviors of those who posted messages over an extensive
period of time based on how the SIRA algorithm was calibrated. There are, however, additional
behavioral groups worth exploring, including radical users who only post messages over a short
period of time. This should be accounted for in future research designs, perhaps by developing a
metric that penalizes a user’s overall radical score based on the amount of time that they are
active online.
Second, the radical posting behaviors that were described in the study were treated as
distinct from one another, but the online behaviors of a small number of users in the sample cut
across posting groups. To illustrate, two forum users were amongst the most radical across all
three posting groups, and seven users were the most radical across two groups. Having said that, future work should assess whether this small group comprises the most influential users and perhaps even opinion leaders. This could be done by including a social network analysis measure – one
which could calculate the in-degree centrality of these radical users, as an example. Doing so
would indicate the number of connections that lead to a particular user, thus showing the amount
of attention that they are receiving and, by extension, the amount of influence they have in a
network (see Freeman 1978 for more on centrality in social networks).
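As a brief illustration of this suggestion, the following Python sketch computes in-degree centrality over a hypothetical reply network with networkx, where an edge points from the replying author to the author being replied to; the reply data are placeholders.

    """In-degree centrality sketch for the proposed network extension."""
    import networkx as nx

    reply_edges = [
        ("UserB", "UserA"),  # UserB replied to UserA (hypothetical data)
        ("UserC", "UserA"),
        ("UserC", "UserB"),
    ]

    graph = nx.DiGraph(reply_edges)
    centrality = nx.in_degree_centrality(graph)  # normalized in-degree per user

    # Authors receiving the most replies would be candidate opinion leaders.
    for user, value in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(user, round(value, 2))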
Lastly, despite the insightful patterns of posting behaviors that were described in the
current study, it remains unclear whether one of the three posting groups is the most radical and whether one may be perceived as more of a credible threat (i.e., violent offline) by law
enforcement and intelligence agencies. In addition to including a social network measure to
assess users’ level of social influence online, future work is needed to connect the on- and offline
worlds of radical users. Researchers, for example, could combine the online data with offline
data of a sample of known violent extremists in an effort to triangulate their offline experiences
with their online presentation of self, language, and behavior (Scrivens, Gill, and Conway 2020).
This, amongst other research strategies, would provide researchers, practitioners, and
policymakers with new insight into the online discussions, behaviors and actions that can spill
over into the offline realm.
Acknowledgments
The author would like to thank Richard Frank, Garth Davies, Martin Bouchard, Pete Simi,
Barbara Perry, Maura Conway, and Tiana Gaudette for their invaluable feedback on earlier
versions of this article.
References
Abbasi, Ahmed and Hsinchun Chen. 2005. “Applying Authorship Analysis to Extremist-Group
Web Forum Messages.” Intelligent Systems 20(5): 67–75. doi: 10.1109/MIS.2005.81.
Adams, Josh and Vincent J. Roscigno. 2005. “White Supremacists, Oppositional Culture and the
World Wide Web.” Social Forces 84(2): 759–778. doi: 10.1353/sof.2006.0001a.
Agarwal, Swati and Ashish Sureka. 2015. “Using KNN and SVM Based One-Class Classifier for
Detecting Online Radicalization on Twitter.” Proceedings of the International Conference
on Distributed Computing and Internet Technology, Bhubaneswar, India.
Alvari, Hamidreza, Soumajyoti Sarkar, and Paulo Shakarian. 2019. “Detection of Violent
Extremists in Social Media.” Proceedings of the Second International Conference on
Data Intelligence and Security, South Padres Island, Texas, USA.
Back, Les. 2002. “Aryans Reading Adorno: Cyber-Culture and Twenty-First Century Racism.”
Ethnic and Racial Studies 25(4): 628–651. doi: 10.1080/01419870220136664.
Berger, J. M. 2018. Extremism. Cambridge, MA: The MIT Press.
Bermingham, Adam, Maura Conway, Lisa McInerney, Neil O’Hare, and Alan F. Smeaton. 2009.
“Combining Social Network Analysis and Sentiment Analysis to Explore the Potential
for Online Radicalisation.” Proceedings of the 2009 International Conference on
Advances in Social Network Analysis Mining, Athens, Greece.
Bliuc, Ana-Maria, John Betts, Matteo Vergani, Muhammad Iqbal, and Kevin Dunn. 2019.
“Collective Identity Changes in Far-Right Online Communities: The Role of Offline
Intergroup Conflict.” New Media and Society 21(8): 1770–1786. doi:
10.1177/1461444819831779.
Blumstein, Alfred, Jacqueline Cohen, Jeffrey A. Roth, and Christy A. Visher. 1986. Criminal
Careers and ‘Career Criminals.’ Washington, DC: National Academy Press.
Bowman-Grieve, Lorraine. 2009. “Exploring ‘Stormfront’: A Virtual Community of the Radical
Right.” Studies in Conflict and Terrorism 32(11): 989–1007. doi:
10.1080/10576100903259951.
Bouchard, Martin, Kila Joffres, and Richard Frank. 2014. “Preliminary Analytical
Considerations in Designing a Terrorism and Extremism Online Network Extractor.” Pp.
171–184 in Computational Models of Complex Systems, edited by Vijay K. Mago and
Vahid Dabbaghian. New York, NY: Springer.
Brynielsson, Joel, Andreas Horndahl, Fredrik Johansson, Lisa Kaati, Christian Mårtenson, and
Pontus Svenson. 2013. “Analysis of Weak Signals for Detecting Lone Wolf Terrorists.”
Security Informatics 2(11): 1–15. doi: 10.1186/2190-8532-2-11.
Burnap, Pete, Matthew L. Williams, Luke Sloan… and Alex Voss. 2014. “Tweeting the Terror:
Modelling the Social Media Reaction to the Woolwich Terrorist Attack.” Social Network
Analysis and Mining 4: 1–14. doi: 10.1007/s13278-014-0206-4.
Burrell, Nancy C. and Randal J. Koper. 1998. “The Efficacy of Powerful/Powerless Language on
Attitudes and Source Credibility.” Pp. 203–216 in Persuasion: Advances Through Meta-
Analysis, edited by Mike Allen and Raymond W. Preiss. Cresskill, NJ: Hampton Press.
Bradac, James J., Catherine W. Konsky, and Robert A. Davies. 1976. “Two Studies of Effects of
Linguistic Diversity Upon Judgement of Communicator Attributes and Message
Effectiveness.” Speech Monographs 43(1): 70–79. doi: 10.1080/03637757609375917.
Butler, Brian S. 2001. “Membership Size, Communication Activity, and Sustainability: A
Resource-Based Model of Online Social Structures.” Information Systems Research 12:
346–362. doi: 10.1287/isre.12.4.346.9703.
Charmaz, Kathy. 2006. Constructing Grounded Theory. London, UK: Sage.
Chen, Hsinchun. 2008. “Sentiment and Affect Analysis of Dark Web Forums: Measuring
Radicalization on the Internet.” Proceedings of the 2008 IEEE International Conference
on Intelligence and Security Informatics, Taipei, Taiwan.
Chen, Min, Shiwen Mao, Ying Zhang, and Victor C. M. Leung. 2014. Big Data: Related
Technologies, Challenges and Future Prospects. New York, NY: Springer.
Cohen, Katie, Fredrik Johansson, Lisa Kaati, and Jonas C. Mork. 2014. “Detecting Linguistic
Markers for Radical Violence in Social Media.” Terrorism and Political Violence 26(1):
246–256. doi: 10.1080/09546553.2014.849948.
Conway, Maura, Ryan Scrivens, and Logan Macnair. 2019. “Right-Wing Extremists’ Persistent
Online Presence: History and Contemporary Trends.” The International Centre for
Counter-Terrorism – The Hague 10: 1–24. doi: 10.19165/2019.3.12.
Daniels, Jessie. 2009. Cyber Racism: White Supremacy Online and the New Attack on Civil
Rights. Lanham, MD: Rowman and Littlefield Publishers.
Davidson, Thomas, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. “Automated Hate
Speech Detection and the Problem of Offensive Language.” Proceedings of the Eleventh
International AAAI Conference on Web and Social Media, Palo Alto, California, USA.
Davies, Garth, Martin Bouchard, Edith Wu, Kila Joffres, and Richard Frank. 2015. “Terrorist
and Extremist Organizations’ Use of the Internet for Recruitment.” Pp. 105–127 in Social
Networks, Terrorism and Counter-Terrorism: Radical and Connected, edited by Martin
Bouchard. New York, NY: Routledge.
Dillon, Leevia, Loo Seng Neo, and Joshua D. Freilich. 2019. “A Comparison of ISIS Foreign
Fighters and Supporters Social Media Posts: An Exploratory Mixed-Method Content
Analysis.” Behavioral Sciences of Terrorism and Political Aggression Ahead of Print:
1–24. doi: 10.1080/19434472.2019.1690544.
Ezekiel, Raphael S. 1995. The Racist Mind: Portraits of American Neo-Nazis and Klansmen. New
York, NY: Viking.
Feldman, Ronen. 2013. “Techniques and Applications for Sentiment Analysis.” Communications
of the ACM 56(4): 82–89. doi: 10.1145/2436256.
Ferrara, Emilio. 2017. “Contagion Dynamics of Extremist Propaganda in Social Networks.”
Information Sciences 418–419: 1–12. doi: 10.1016/j.ins.2017.07.030.
Ferrara, Emilio, Wen-Qiang Wang, Onur Varol, Alessandro Flammini, and Aram Galstyan.
2016. “Predicting Online Extremism, Content Adopters, and Interaction Reciprocity.”
Proceedings of the International Conference on Social Informatics, Berlin, Germany.
Figea, Leo, Lisa Kaati, and Ryan Scrivens. 2016. “Measuring Online Affects in a White
Supremacy Forum.” Proceedings of the 2016 IEEE International Conference on
Intelligence and Security Informatics, Tucson, Arizona, USA.
Forgas, Joseph P. 2006. “Affective Influences on Interpersonal Behavior: Towards
Understanding the Role of Affect in Everyday Interactions.” Pp. 269–290 in Affect in
Social Thinking and Behavior, edited by Joseph P. Forgas. New York, NY: Psychology
Press.
Futrell, Robert and Pete Simi. 2004. “Free Spaces, Collective Identity, and the Persistence of
U.S. White Power Activism.” Social Problems 51(1): 16–42. doi:
10.1525/sp.2004.51.1.16.
Frank, Richard, Martin Bouchard, Garth Davies, and Joseph Mei. 2015. “Spreading the Message
Digitally: A Look into Extremist Content on the Internet.” Pp. 130–145 in Cybercrime
Risks and Responses: Eastern and Western Perspectives, edited by Russell G. Smith, Ray
C.-C. Cheung, and Lauri Y.-C. Lau. London, UK: Palgrave.
Freeman, Linton C. 1978. “Centrality in Social Networks Conceptual Clarification.” Social
Networks 1(3): 215–239. doi: 10.1016/0378-8733(78)90021-7.
Grover, Ted, and Gloria Mark. 2019. “Detecting Potential Warning Behaviors of Ideological
Radicalization in an Alt-Right Subreddit.” Proceedings of the Thirteenth International
AAAI Conference on Web and Social Media, Munich, Germany.
Hamilton, Mark A. and John E. Hunter. 1998. “The Effect of Language Intensity on Receiver
Evaluations of Message, Source, and Topic.” Pp. 99–138 in Persuasion: Advances
Through Meta-Analysis, edited by Mike Allen and Raymond W. Preiss. Cresskill, NJ:
Hampton Press.
Hollander, Edwin P. 1961. “Emergent Leadership and Social Influence.” Pp. 30–47 in
Leadership and Interpersonal Behavior, edited by Luigi Petrullo and Bernard M. Bass.
New York, NY: Holt, Rinehart and Winston.
Holtgraves, Thomas and Benjamin Lasky. 1999. “Linguistic Power and Persuasion.” Journal of
Language and Social Psychology 18(2): 196–205. doi: 10.1177/0261927X99018002004.
Hosman, Lawrence A. 2002. “Language and Persuasion.” Pp. 371–390 in The Persuasion
Handbook: Developments in Theory and Practice, edited by James P. Dillard and
Michael Pfau. New York, NY: Sage.
Huffaker, David. 2010. “Dimensions of Leadership and Social Influence in Online
Communities.” Human Communication Research 36(4): 593–617. doi: 10.1111/j.1468-
2958.2010.01390.x.
Hung, Benjamin W. K., Anura P. Jayasumana, and Vidarshana W. Bandara. 2016a. “Detecting
Radicalization Trajectories Using Graph Pattern Matching Algorithms.” Proceedings of
the 2016 IEEE International Conference on Intelligence and Security Informatics,
Tucson, Arizona, USA.
Hung, Benjamin W. K., Anura P. Jayasumana, and Vidarshana W. Bandara. 2016b. “Pattern
Matching Trajectories for Investigative Graph Searches.” Proceedings of the 2016 IEEE
International Conference on Data Science and Advanced Analytics, Montreal, Canada.
Internet Live Stats. 2020. “Total Number of Websites.” Retrieved from
http://www.internetlivestats.com/total-number-of-websites.
Internet World Stats. 2020. “Internet Growth Statistics.” Retrieved from
http://www.internetworldstats.com/emarketing.htm.
Johansson, Fredrik, Lisa Kaati, and Magnus Sahlgren. 2016. “Detecting Linguistic Markers of
Violent Extremism in Online Environments.” Pp. 374–390 in Combating Violent
Extremism and Radicalization in the Digital Era, edited by Majeed Khader, Loo Seng
Neo, Gabriel Ong, Eunice Tan Mingyi, and Jeffery Chin. Hershey, PA: Information
Science Reference.
Joyce, Elisabeth and Robert E. Kraut. 2006. “Predicting Continued Participation in
Newsgroups.” Journal of Computer-Mediated Communication 11(3): 723–747. doi:
10.1111/j.1083-6101.2006.00033.x.
Kaati, Lisa, Amendra Shrestha, and Katie Cohen. 2016. “Linguistic Analysis of Lone Offender
Manifestos.” Proceedings of the 2016 IEEE International Conference on Cybercrime and
Computer Forensics, Vancouver, BC, Canada.
Kaati, Lisa, Amendra Shrestha, and Tony Sardella. 2016. “Identifying Warning Behaviors of
Violent Lone Offenders in Written Communications.” Proceedings of the 2016 IEEE
International Conference on Data Mining Workshops, Barcelona, Spain.
Klausen, Jytte, Christopher E. Marks, and Tauhid Zaman. 2018. “Finding Extremists in Online
Social Networks.” Operations Research 66(4): 957–976. doi: 10.1287/opre.2018.1719.
Koh, Joon, Young-Gul Kim, Brian Butler, and Gee-Woo Bock. 2007. “Encouraging
Participation in Virtual Communities.” Communications of the ACM 50(2): 68–73. doi:
10.1145/1216016.1216023.
Levey, Philippa and Martin Bouchard. 2019. “The Emergence of Violent Narratives in the Life-
Course Trajectories of Online Forum Participants.” Journal of Qualitative Criminal
Justice and Criminology 7(2): 95–121.
Macnair, Logan and Richard Frank. 2018. “Changes and Stabilities in the Language of Islamic
State Magazines: A Sentiment Analysis.” Dynamics of Asymmetric Conflict 11(2): 109–
120. doi: 10.1080/17467586.2018.1470660.
Mei, Joseph and Richard Frank. 2015. “Sentiment Crawling: Extremist Content Collection
through a Sentiment Analysis Guided Web-Crawler.” Proceedings of the International
Symposium on Foundations of Open Source Intelligence and Security Informatics, Paris,
France.
Ng, Sik Hung and James J. Bradac. 1993. Power in Language: Verbal Communication and
Social Influence. Newbury Park, CA: Sage.
O’Keefe, Daniel J. 2002. Persuasion: Theory and Research (Second Edition). Thousand Oaks,
CA: Sage.
Park, Andrew J., Brian Beck, Darrick Fletcher, Patrick Lam, and Herbert H. Tsang. 2016.
“Temporal Analysis of Radical Dark Web Forum Users.” Proceedings of the 2016
IEEE/ACM International Conference on Advances in Social Networks Analysis and
Mining, San Francisco, CA, USA.
Perry, Barbara. 2001. In the Name of Hate: Understanding Hate Crimes. New York, NY:
Routledge.
Perry, Barbara and Ryan Scrivens. 2019. Right-Wing Extremism in Canada. Cham, Switzerland:
Palgrave.
Sageman, Marc. 2014. “The Stagnation in Terrorism Research.” Terrorism and Political
Violence 26(4): 565–580. doi: 10.1080/09546553.2014.895649.
Scrivens, Ryan, Garth Davies, and Richard Frank. 2017. “Searching for Signs of Extremism on
the Web: An Introduction to Sentiment-Based Identification of Radical Authors.”
Behavioral Sciences of Terrorism and Political Aggression 10(1): 39–59. doi:
10.1080/19434472.2016.1276612.
Scrivens, Ryan, Garth Davies, and Richard Frank. 2018. “Measuring the Evolution of Radical
Right-Wing Posting Behaviors Online.” Deviant Behavior 41(2): 216–232. doi:
10.1080/01639625.2018.1556994.
Scrivens, Ryan, Paul Gill, and Maura Conway. 2020. “The Role of the Internet in Facilitating
Violent Extremism and Terrorism: Suggestions for Progressing Research.” Pp. 1–20 in
The Palgrave Handbook of International Cybercrime and Cyberdeviance, edited by
Thomas J. Holt and Adam Bossler. London, UK: Palgrave.
Scrivens, Ryan and Richard Frank. 2016. “Sentiment-based Classification of Radical Text on the
Web.” Proceedings of the 2016 European Intelligence and Security Informatics Conference,
Uppsala, Sweden.
Scrivens, Ryan, Tiana Gaudette, Garth Davies, and Richard Frank. 2019. “Searching for
Extremist Content Online Using The Dark Crawler and Sentiment Analysis.” Pp. 179–
194 in Methods of Criminology and Criminal Justice Research, edited by Mathieu
Deflem and Derek M. D. Silva. Bingley, UK: Emerald.
Simi, Pete and Robert Futrell. 2015. American Swastika: Inside the White Power Movement’s
Hidden Spaces of Hate (Second Edition). Lanham, MD: Rowman and Littlefield
Publishers.
Southern Poverty Law Center. 2014. “White Homicide Worldwide.” Retrieved from
https://www.splcenter.org/20140401/white-homicide-worldwide.
Thelwall, Mike and Kevan Buckley. 2013. “Topic-Based Sentiment Analysis for the Social Web:
The Role of Mood and Issue-Related Words.” Journal of the American Society for
Information Science and Technology 64(8): 1608–1617. doi: 10.1002/asi.22872.
Tremblay, Richard E., Robert O. Pihl, Frank Vitaro, and Patricia L. Dobkin. 1994. “Predicting
Early Onset of Male Antisocial Behavior from Preschool Behavior.” Archives of General
Psychiatry 51(9): 732–739. doi: 10.1001/archpsyc.1994.03950090064009.
Vergani, Matteo and Ana-Maria Bliuc. 2015. “The Evolution of the ISIS’ Language: A
Quantitative Analysis of the Language of the First Year of Dabiq Magazine.” Sicurezza,
Terrorismo e Società 2: 7–20.
Warr, Mark. 1989. “What is the Perceived Seriousness of Crimes?” Criminology 27(4): 795–822.
doi: 10.1111/j.1745-9125.1989.tb01055.x.
Weimann, Gabriel. 1994. The Influentials: People Who Influence People. Albany, NY: State
University of New York Press.
Williams, Matthew L. and Pete Burnap. 2015. “Cyberhate on Social Media in the Aftermath of
Woolwich: A Case Study in Computational Criminology and Big Data.” British Journal
of Criminology 56(2): 211–238. doi: 10.1093/bjc/azv059.
Wojcieszak, Magdalena. 2010. “‘Don’t Talk to Me’: Effects of Ideologically Homogeneous Online
Groups and Politically Dissimilar Offline Ties on Extremism.” New Media and Society
12(4): 637–655. doi: 10.1177/1461444809342775.
Yoo, Youngjin and Maryam Alavi. 2004. “Emergent Leadership in Virtual Teams: What do
Emergent Leaders do?” Information and Organization 14(1): 27–58. doi:
10.1016/j.infoandorg.2003.11.001.
Tables
This study applies the semi-automated method of sentiment analysis in order to examine any quantifiable changes in the linguistic, topical, or narrative patterns that are present in the English-language Islamic State-produced propaganda magazines Dabiq (15 issues) and Rumiyah (10 issues). Based on a sentiment analysis of the textual content of these magazines, it was found that the overall use of language has remained largely consistent between the two magazines and across a timespan of roughly three years. However, while the majority of the language within these magazines is consistent, a small number of significant changes with regard to certain words and phrases were found. Specifically, the language of Islamic State magazines has become increasingly hostile towards certain enemy groups of the organization, while the language used to describe the Islamic State itself has become significantly more positive over time. In addition to identifying the changes and stabilities of the language used in Islamic State magazines, this study endeavours to test the effectiveness of the sentiment analysis method as a means of examining and potentially countering extremist media moving forward.