Article

Impacts of Radical Right Groups’ Movements across Social Media Platforms – A Case Study of Changes to Britain First’s Visual Strategy in Its Removal from Facebook to Gab

Taylor & Francis
Studies In Conflict & Terrorism

Abstract

This article examines the visual strategy of the U.K. radical right group Britain First as it was removed from Facebook and migrated to a less regulated platform – Gab. Data were collected over two four-month periods in 2017 and 2018. Using methods from discourse analysis, the study identifies visual changes in content, including a shift on Gab toward promoting the group's inner core members and expanding "othering" practices to Islam broadly. Changes in visual style were also identified, notably from the routine posting of aesthetically polished images to a reliance on unedited images. The article concludes with policy recommendations for governments and tech companies regarding the removal of visual radical right online content.


... These sites, including Facebook, Twitter, Instagram, YouTube, Gab, and Parler, provide a catalogue of mechanisms for producing and sharing various content forms, subsequently appealing and catering to a vast audience. Notably, the two latter platforms are synonymous with material procured for a far-right-leaning Anglosphere populace (Baines et al., 2021; Jasser et al., 2021; Nouri et al., 2021). Moreover, as recent research suggests, the public is increasingly reliant on social media, including the growing number of participants on fringe sites, for all pandemic-related news and interactions (Cinelli et al., 2020; Neely et al., 2021). ...
... Jayda Fransen: political activist, anti-Islamist activist, self-described extremist; provides a perspective from the general populace. Fransen is the leader of the British Freedom Party, using religious justifications for her far-right narratives. A former member of Britain First, she broke off in 2021 to create her own political party. ... far-right ideologies turned to Gab as a bastion of free speech (Nouri et al., 2021). With official verification and unmoderated capabilities, these actors use fringe outlets to proliferate their hate-filled messages and ideologies to audiences in the country and beyond without the need to censor their content. ...
Article
Full-text available
The growing dissension towards the political handling of COVID-19, widespread job losses, backlash to extended lockdowns, and hesitancy surrounding the vaccine are propagating toxic far-right discourses in the UK. Moreover, the public is increasingly reliant on different social media platforms, including a growing number of participants on the far-right's fringe online networks, for all pandemic-related news and interactions. Therefore, with the proliferation of harmful far-right narratives and the public's reliance on these platforms for socialising, the pandemic environment is a breeding ground for radical ideologically based mobilisation and social fragmentation. However, there remains a gap in understanding how these far-right online communities, during the pandemic, utilise societal insecurities to attract candidates, maintain viewership, and form a collective on social media platforms. The article aims to better understand online far-right mobilisation by examining, via a mixed-methodology qualitative content analysis and netnography, UK-centric content, narratives, and key political figures on the fringe platform, Gab. Through the dual-qualitative coding and analyses of 925 trending posts, the research outlines the platform's hate-filled media and the toxic nature of its communications. Moreover, the findings illustrate the far-right's online discursive dynamics, showcasing the dependence on Michael Hogg's uncertainty-identity mechanisms in the community's exploitation of societal insecurity. From these results, I propose a far-right mobilisation model termed Collective Anxiety, which illustrates that toxic communication is the foundation for the community's maintenance and recruitment. These observations set a precedent for hate-filled discourse on the platform and consequently have widespread policy implications that need addressing.
... Alt-tech platforms tend to mimic affordances of mainstream platforms (e.g., YouTube or Reddit) while advertising minimal content moderation policies and free speech absolutism, attracting organizations and users banned from mainstream sites like Facebook and X (de Keulenaar, 2023; Rogers, 2020; Zeng and Schäfer, 2021). The combination of familiar affordances of social media platforms, explicit lack of content moderation, and a grievance-based identity of perceived oppression by "Big Tech censors" creates ideal conditions for conspiracy narratives to thrive and turn to more extreme, and even violence-legitimating, ideas (Cinelli et al., 2022; Jasser et al., 2021; Nouri et al., 2021). For example, some work has found differences in online posting behaviors between violent and nonviolent extremists, particularly on alt-tech platforms such as StormFront (e.g., Scrivens et al., 2023). ...
Article
The mainstreaming of conspiracy narratives has been associated with a rise in violent offline harms, from harassment, vandalism of communications infrastructure, assault, and in its most extreme form, terrorist attacks. Group-level emotions of anger, contempt, and disgust have been proposed as a pathway to legitimizing violence. Here, we examine expressions of anger, contempt, and disgust as well as violence, threat, hate, planning, grievance, and paranoia within various conspiracy narratives on Parler. We found significant differences between conspiracy narratives for all measures and narratives associated with higher levels of offline violence showing greater levels of expression.
... Due to these strong biases, adherents are likely to accept false information that is aligned with their personal beliefs. This has structural consequences for their exposure to information, as online information flows remain within ideologically homogeneous networks (Marchal, 2021; Pogorelskiy & Shum, 2019; Rathje et al., 2021; Zollo, 2019) and are worsened by fringe communities migrating to alternative social platforms to avoid fact-checkers and digital censorship (Guhl et al., 2020; Nouri et al., 2021; Trujillo et al., 2020). Moreover, a recent study by Aslett et al. (2023) found that such confirmation bias remains persistent when online users try to use search engines as a proxy for debunking, as online searches also give users access to multiple low-quality sources of information. ...
Article
Full-text available
Through an online field experiment, we test traditional and novel counter-misinformation strategies among fringe communities. Though generally effective, traditional strategies have not been tested in fringe communities, and do not address the online infrastructure of misinformation sources supporting such consumption. Instead, we propose to activate source criticism by exposing sources’ unreliability. Based on a snowball sampling of German fringe communities on Facebook, we test if debunking and source exposure reduce groups’ consumption levels of two popular misinformation sources. Results support a proactively engaging counter-misinformation approach to reduce consumption of misinformation sources.
... LGBT+ groups (Doerr 2021; Lorenzetti 2020). The British far-right group 'Britain First' came to depend heavily on visual strategies after its Facebook ban in 2018 (Nouri, Lorenzo-Dus, and Watkin 2020): this involved a shift toward the promotion of the group's inner core members and an expansion of 'othering' practices, chiefly to Islam. Emotions elicited by online images are used to promote RR activism, as shown in the study of the radical group the 'Soldiers of Odin' (Nikunen, Hokka, and Nelimarkka 2021), which demonstrated that images and their reactions ('emoticons'), diffused on the group's Facebook pages, contribute to the creation of a 'visual affective practice' designed to disseminate and reinforce a nationalist, extremist and racist common sense (ethos) of the group. ...
Article
Full-text available
Although images are very important for political actors and social movements, including the radical right (RR), empirical studies still rarely integrate visual material as relevant data for understanding radical right politics. This article outlines this new and growing field of research (i.e., visuality and the RR), critically reviewing existing studies from the perspective of both visual studies of social movements and contentious politics, which are rarely applied to the RR, and the methodology of working with images, offering empirical case studies (European and beyond) to illustrate the argument. The findings reveal the main functions of the use of visuals for the radical right, as well as the benefits (but also the challenges) of studying radical right politics through the lens of visual analysis. A conceptual framework is proposed to capture this dominant visual politics of the radical right. As shown, two dimensions emerge as the most theoretically relevant for the radical right: The discursive meaning of images (the story itself, telling the story, eliciting the story) and the communicative function of images (visual expression by the movement or others, visibility), which combine agency and addressee.
... These findings contribute to the overarching discourse surrounding the sense of emotional belonging prevalent among the subgroup's userbase. While existing research portrays Gab as a technocultural refuge from experiences of social exclusion, mockery, and content moderation (Jasser et al., 2023; Nouri et al., 2021), this analysis reveals that the emotional bonds forged on g/introduceyourself hold a more significant importance for its users. Consequently, the userbase's conceptualization of the space may transcend the status of a mere social media platform, placing the virtual community as a new home. ...
Article
Full-text available
This article challenges prevailing assumptions that fringe social media platforms predominantly serve as unmoderated hate-filled spaces for far-right communication by examining the userbase's emotional connection to these environments. Focusing on Gab Social, a popular alternative technology website with affordances akin to Twitter, Facebook, and Reddit, and its subgroup, "Introduce Yourself," the research investigates how participants discuss their attachment and sense of membership within a far-right online community. Employing a constructivist grounded theory approach and a thick data mixed-methods technique encompassing netnography and sentiment analysis, I uncover the complex and impassioned narratives underlying users' sense of emotional belonging on the platform. The resulting findings demonstrate how counter-mainstream media act as a unifying force by catering to the social needs of participants seeking an in-group of like-minded individuals. Moreover, I argue that fringe social media platforms offer participants far more than mainstream platforms, providing a positive interactive environment and a new virtual home for those feeling rejected and antagonized by other communities, institutions, and organizations, both online and offline. Therefore, the work offers valuable empirical insights into the emotional emphasis participants place on fringe social media and its implications for fostering attachment, community formation, and identity construction within far-right online counterpublics.
... Facebook and YouTube (who both enforced the geo-block), an ecology of alternative platforms emerged. Taking a free-speech absolutist position, these alternative platforms offer a safe haven for unwanted information (Nouri et al., 2021; Trujillo et al., 2020). A number of recent studies, e.g. ...
... One of the most notable features of social media platforms, and one that explains much of their success, is the possibility of forming large communities united around common interests. It is thus possible to find groups gathered around trivial and benign topics, such as music bands, architectural styles, and so on. However, it is also possible to find groups gathered around beliefs or opinions that are neither so benign nor so trivial, at various scales, from groups and spaces that host racial supremacist positions (Makombe et al., 2020; Nouri et al., 2021) to groups in which homophobic or sexist behaviour is reinforced (Kian et al., 2011). The problem with the existence of these environments of shared beliefs is that they can form echo chambers "in which attention is paid again and again to the repetition of one's own conviction by fellow party members" (Hendricks and Hansen, 2016, p. 151). ...
Article
Full-text available
Drawing on Shannon's mathematical theory of information (MTI) and some of Luciano Floridi's suggestions regarding the challenges of the technological revolution, this article investigates the epistemic effects of redundant environments. To that end, it reviews the MTI concepts of information and redundancy, which are then related to the literary figure of Poe's raven. First, the MTI is presented and information is analysed in terms of its transmissibility. Next, the problem of information is reframed in the context of digital platform algorithms, and content personalisation is linked to redundancy. The concepts of low-information spaces and epistemically redundant spaces are then proposed to account for the conditions of homogeneity generated by hyper-personalised environments. Finally, a brief proposal is made regarding the measurement of redundancy.
... To de-platform hate speech, conspiracy theories, and radical anti-governmental sentiments, the organisations have blocked, suspended, and removed posts and users linked to the far right. However, these actions have catalysed the growth of fringe online social networks for participants seeking right-wing content, safe havens, and unhindered communication channels. Therefore, with large amounts of data removed from popular media platforms and users congregating in laxly moderated right-wing echo chambers, the shift has created an environment of self-isolated far-right content. ...
Article
Full-text available
Major social media platforms have recently taken a more proactive stand against harmful far-right content and pandemic-related disinformation on their sites. However, these actions have catalysed the growth of fringe online social networks for participants seeking right-wing content, safe havens, and unhindered communication channels. To better understand these isolated systems of online activity and their success, the study on Gab Social examines the mechanisms used by the far right to form an alternative collective on fringe social media. My analysis showcases how these online communities are built by perpetuating meso-level identity-building narratives. By examining Gab's emphasis on creating its lasting community base, the work offers an experiential examination of the different communication devices and multimedia within the platform through a netnographic and qualitative content analysis lens. The emergent findings and discussion detail the far right's virtual community building model, revolving around its sense of in-group superiority and the self-reinforcing mechanisms of the collective. Not only does this have implications for understanding Gab's communicative dynamics as an essential socialisation space and promoter of a unique meso-level character, but it also reflects the need for researchers to (re)emphasise identity, community, and collectives in far-right fringe spaces.
... In particular, this study explores how an alt-tech platform, Gab, catered to an online illicit network where fake Australian vaccine certificates were distributed. Alt-tech platforms have largely escaped the focus of criminologists, despite some initial evidence regarding these digital spaces as harbouring extremist communities (Askanius and Keller 2021; Nouri et al. 2020; Rieger et al. 2021). In the following section, this paper provides an overview of the alt-tech social media platform Gab and situates this particular digital environment within the broader digital ecosystem, noting the increasing number of online spaces not easily characterised by binary notions of the 'dark' or 'clear' web (Copland 2021). ...
Article
Full-text available
This paper provides the first exploration of the online distribution of fake Australian COVID-19 vaccine certificates. Drawing on a collection of 2589 posts between five distributors and their community members on the alt-tech platform Gab, this study gathers key insights into the mechanics of illicit vaccine certificate distribution. The qualitative findings in this research demonstrate the various motivations and binding ideologies that underpinned this illicit distribution (e.g. anti-vaccine and anti-surveillance motivations); the unique cybercultural aspects of this online illicit network (e.g. 'crowdsourcing' the creation of fake vaccine passes); and how the online community was used to share information on the risks of engaging in this illicit service, on setting the appropriate contexts for using fake vaccine passes, and on evading guardians in offline settings. Implications for future research in cybercrime, illicit networks, and organised crime in digital spaces are discussed.
... The long-term efficacy of such policies remains squarely up for debate. In the first instance, we know that mass takedowns have a strong debilitating effect on online social networks and their ability to post further content (Conway et al., 2019; Nouri et al., 2020). On the other hand, explorative research indicates that takedowns may also funnel 'extreme communities' towards more 'extreme spaces' online, resulting in a hardening of belief systems (Pearson, 2018; Gaudette et al., 2021). ...
Article
Full-text available
The rise of QAnon presents researchers with a number of important questions. While emerging literature provides insights into how QAnon exists online, there is a dearth of theoretical engagement with the questions of why it exists, and what conditions brought it into being. This paper seeks to address this gap by contextualizing QAnon as an ontological phenomenon underpinned by anxiety, and inquiring into the identity formation strategies employed by the movement. Applying the basic precepts of discourse theory and discourse analysis to a representative canon of QAnon content, it finds that, like other formations of collective identity, QAnon is premised on interconnected dynamics of ontological fulfillment that cannot be explained away by pointing to ‘the algorithm’ or ‘madness’. Nor can it be tackled effectively by the content takedowns and de-platforming strategies currently employed. The paper concludes with a call to explore more empathetic engagement with conspiracy adherents, arguing that until we (re)discover a more inclusive, agonistic politics, QAnon and other fantastical conspiracy movements will continue to arise and some may metastasize into violent action. New forms of resilience to (online) polarization can be built on this principle.
Article
Full-text available
This systematic review explores the utilization of crowdsourcing for geoinformation in enhancing awareness and mitigating terrorism-related disasters. Out of 519 studies identified in the database search, 108 were deemed eligible for analysis. We focused on articles employing various forms of crowdsourcing platforms, such as Twitter (now known as X), Facebook, and Telegram, across three distinct phases of terrorism-related disasters: monitoring and detection, onset, and post-incident analysis. Notably, we placed particular emphasis on the integration of Machine Learning (ML) algorithms in studying crowdsourced terrorism geoinformation to assess the current state of research and propose future directions. The findings revealed that Twitter emerged as the predominant crowdsourcing platform for terrorism-related information. Despite the prevalence of natural language processing for data mining, the majority of studies did not incorporate ML algorithms in their analyses. This preference for qualitative research methods can be attributed to the multifaceted nature of terrorism, spanning security, governance, politics, religion, and law. We advocate for more studies from the domains of geography, earth observation, and big data. Simultaneously, we encourage advancements in existing ML algorithms to enhance the accurate real-time detection of planned and onset terrorism disasters.
Article
Anti-genderism discourses emerge in response to new public policies resulting from the Fourth Feminist Wave. In the case of Spain, the radical right political party Vox not only articulates an anti-genderism discourse but has also proposed the so-called Parental Pin as an alternative to feminist education. In this light, this study aims to analyse the propagandistic messages of the aforementioned party on social networks, focusing on the Parental Pin as the main theme. Furthermore, it examines the favourable feedback received from its followers. The application of Critical Discourse Analysis revealed that polarisation constitutes a fundamental resource for understanding the communicative and political strategy of the party. Results also revealed that the party conveys an image of itself as guardians of parental freedom and national values, while strategically portraying the left and feminists as adversaries.
Chapter
This chapter critically evaluates the use of artificial intelligence (AI) in content moderation to counter violent extremism online. To this end, this chapter focuses on measuring the accuracy of AI in content moderation, the occurrences of false positives and false negatives, and the infringements on freedom of expression and democracy. This chapter also presents a critical analysis of the use of de-platforming measures in content moderation. A critical discussion is provided on far-right violent extremists' migration from mainstream social media platforms to alt-tech platforms and the use of AI in the de-platforming process. It is argued that the use of automated content removal is limited in effectiveness, and that the use of AI in content moderation could lead to the violation of the principles of freedom of expression and democracy. Furthermore, this chapter highlights the value of de-platforming measures as a more effective tool to counter violent extremism online. It also considers the benefits of the use of AI in de-platforming measures to enhance the detection of violent extremist users and networks online.
Article
Full-text available
While the messaging tactics of extremist organizations have been studied by researchers, little attention has been devoted to understanding how alternative multimodal communications can enable resistance to polarizing content. This article takes as case studies three grassroots youth arts projects that deploy multimodal resources to educate and build resilience: Build Solid Ground, Jamal al-Khatib and Loulu. The projects won awards at the Horizon 2020 Building Resilience to Violent Extremism and Polarisation (BRaVE) Fair, which was hosted by the Berlin-based intercultural organization Cultures Interactive and took place via Zoom in November 2020. In the pandemic context of increased time spent online, polarization, and growing social and structural vulnerability in which young people face uncertain futures, these projects have been selected for their ability to build channels of communication that support pro-social resilience.
Book
Full-text available
This open access book brings together a range of contributions that seek to explore the ethical issues arising from the overlap between counter-terrorism, ethics, and technologies. Terrorism and our responses to it pose some of the most significant ethical challenges to states and people. At the same time, we are becoming increasingly aware of the ethical implications of new and emerging technologies. Whether it is the use of remote weapons like drones as part of counter-terrorism strategies, the application of surveillance technologies to monitor and respond to terrorist activities, or counterintelligence agencies' use of machine learning to detect suspicious behavior and hacking of computers to gain access to encrypted data, technologies play a significant role in modern counter-terrorism. However, each of these technologies carries with it a range of ethical issues and challenges. How we use these technologies and the policies that govern them have a broader impact beyond just the identification of and response to terrorist activities. As we are seeing with China, the need to respond to domestic terrorism is one of the justifications for the rollout of the "social credit system." Counter-terrorism technologies can easily succumb to mission creep, where a technology's exceptional application becomes normalized and rolled out to society more generally. This collection is not just timely but an important contribution to understanding the ethics of counter-terrorism and technology, and it has far wider implications for societies and nations around the world.
Chapter
Full-text available
The concept of WMD is part of numerous national laws and is the core of one of the most important treaties of the United Nations (Organisation for the Prohibition of Chemical Weapons, Convention on the prohibition of the development, production, stockpiling and use of chemical weapons and on their destruction, 1992; United Nations Office for Disarmament Affairs, The convention on the prohibition of the development, production and stockpiling of bacteriological (biological) and toxin weapons and on their destruction, 1975). Yet, the definition of what should be considered a WMD is far from established and subject to controversial debates. Academics, policymakers, and legislators have been introducing a variety of partly conflicting conceptualizations of WMD into scientific debates, public discourse, and legislation over the last eight decades.
Chapter
Full-text available
As liberal democracies grapple with the evolution of online political extremism, social media and internet infrastructure companies, in addition to governments, have found themselves making more and more decisions about who gets to use their platforms and what people say online. This raises the question that this paper explores: who should regulate extremist content online? In doing so, the first part of the paper examines the evolution of the increasing role that social media and internet infrastructure companies have come to play in regulating extremist content online, and the ethical challenges this presents. The second part of the paper explores three ethical challenges: i) the moral legitimacy of private actors, ii) the concentration of power in the hands of a few actors, and iii) the lack of separation of powers in the content regulation process by private actors.