Article

Algorithmic extremism: Examining YouTube's rabbit hole of radicalization

Authors: Mark Ledwich, Anna Zaitsev

Abstract

The role that YouTube and its behind-the-scenes recommendation algorithm play in encouraging online radicalization has been suggested by journalists and academics alike. This study directly quantifies these claims by examining the role that YouTube’s algorithm plays in suggesting radicalized content. After categorizing nearly 800 political channels, we were able to differentiate between political schemas in order to analyze how the algorithm’s recommendation traffic flows out of, and between, each group. After conducting a detailed analysis of recommendations received by each channel type, we refute the popular radicalization claims. On the contrary, these data suggest that YouTube’s recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels, with a slant towards left-leaning or politically neutral channels. Our study thus suggests that YouTube’s recommendation algorithm fails to promote inflammatory or radicalized content, as previously claimed by several outlets.
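The analysis the abstract describes, labeling channels and aggregating recommendation traffic between ideological groups, can be illustrated with a minimal sketch. The channel names, categories, and impression counts below are invented for illustration; they are not the paper's data.

```python
# Minimal sketch: aggregate recommendation "traffic" between channel
# categories. Channels, categories, and counts are hypothetical; the
# paper's own dataset of ~800 labeled channels is not reproduced here.
import pandas as pd

channel_category = {
    "ChannelA": "Partisan Left",
    "ChannelB": "Partisan Right",
    "ChannelC": "Conspiracy",
    "ChannelD": "Mainstream News",
}

# Each row: a recommendation observed on `source` pointing to `target`,
# weighted by how often it appeared.
recs = pd.DataFrame([
    ("ChannelC", "ChannelD", 120),
    ("ChannelC", "ChannelB", 40),
    ("ChannelB", "ChannelD", 300),
    ("ChannelA", "ChannelD", 280),
    ("ChannelD", "ChannelA", 90),
], columns=["source", "target", "impressions"])

recs["source_cat"] = recs["source"].map(channel_category)
recs["target_cat"] = recs["target"].map(channel_category)

cats = sorted(set(channel_category.values()))
flows = (recs.groupby(["source_cat", "target_cat"])["impressions"].sum()
             .unstack(fill_value=0)
             .reindex(index=cats, columns=cats, fill_value=0))

# Net advantage per category: impressions received minus impressions sent.
net = flows.sum(axis=0) - flows.sum(axis=1)
print(flows)
print(net.sort_values(ascending=False))
```

In such a table, a category whose received traffic greatly exceeds its sent traffic (here, the hypothetical "Mainstream News") is the kind of net beneficiary the abstract describes.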


... We contribute to this debate in two ways: firstly, we conduct an empirical analysis of interactions of recommendation systems and far-right content on three platforms-YouTube, Reddit, and Gab. This analysis provides a novel contribution by being the first study to account for personalisation in an experimental setting, which has been noted as a limitation by previous empirical research (Ledwich and Zaitsev, 2019;Ribeiro et al., 2019). We find that one platform-YouTube-does promote extreme content after interacting with far-right materials, but the other two do not. ...
... Conversely, Ledwich and Zaitsev (2019) find that YouTube recommendation algorithms ... 2. In their paper, they use the Anti-Defamation League's description of the Alt-Right as: "A loose segment of the white supremacist movement consisting of individuals who reject mainstream conservatism in favor of politics that embrace racist, anti-Semitic and white supremacist ideology" (Ribeiro et al., 2019, p. 2). ...
... (2019) and Ledwich and Zaitsev (2019) as limitations. ...
Article
Full-text available
Policymakers have recently expressed concerns over the role of recommendation algorithms and their role in forming “filter bubbles”. This is a particularly prescient concern in the context of extremist content online; these algorithms may promote extremist content at the expense of more moderate voices. In this article, we make two contributions to this debate. Firstly, we provide a novel empirical analysis of three platforms’ recommendation systems when interacting with far-right content. We find that one platform—YouTube—does amplify extreme and fringe content, while two—Reddit and Gab—do not. Secondly, we contextualise these findings into the regulatory debate. There are currently few policy instruments for dealing with algorithmic amplification, and those that do exist largely focus on transparency. We argue that policymakers have yet to fully understand the problems inherent in “de-amplifying” legal, borderline content and argue that a co-regulatory approach may offer a route towards tackling many of these challenges.
... Video-sharing platforms such as YouTube and TikTok are criticized for hosting hate group channels and spreading extreme ideologies and conspiracy theories. Research suggested that the video recommendation algorithms may consistently expose viewers to hate ideology and lead them to the rabbit hole of hate videos [8,15]. On YouTube, the openness and autonomy of the platform allow hate groups to create content and interact with the viewers. ...
... Most prior studies focused on understanding the algorithms that promote hate ideology content [1,15] or the user interactions with hate ideology videos [7,8]. Studies have examined hate groups on other social media [13,16]. ...
... Motivating viewers to "investigate" could mean interacting with more hate group content, which is an encouragement of reflection on and exploration of extreme content [14]. Watching more videos of hate opinions may make viewers fall into the rabbit hole [8]. Video-sharing platforms should consider technical approaches and new policies to moderate viewers' interactions with such content to prevent engagement in the hate group content. ...
Conference Paper
Full-text available
As the largest video-sharing platform, YouTube has been known for hosting hate ideology content that could lead to between-group conflicts and extremism. Research has examined search algorithms and the creator-fan networks related to radicalization videos on YouTube. However, there is little grounded theory analysis of videos of hate groups to understand how hate groups present to the viewers and discuss social problems, solutions, and actions. This work presents a preliminary analysis of 96 videos using open-coding and affinity diagramming to identify common video styles created by the U.S. hate ideology groups. We also annotated hate videos' diagnostic, prognostic, and motivational framing to understand how the hate groups utilize video-sharing platforms to promote collective actions.
... Evidence as to whether YouTube's algorithm disproportionately recommends ideologically biased content is inconclusive. Whereas some studies report that the algorithm tends to recommend radical, divisive, or conspiratorial content [21,22,23,24], others provide evidence to the contrary [25,26]. These conflicting conclusions are due to subtle but crucial differences in the methodologies. ...
... These conflicting conclusions are due to subtle but crucial differences in the methodologies. In particular, some work relies on active measurements using untrained sock puppets (i.e., without any watch history) [25,26,27], and thus cannot capture recommendation processes among actual users. In turn, the studies that measure real user watch activity cannot tease apart the role of algorithmic recommendations from the actions of the user [25,28,29]. ...
... These concerns are particularly relevant to YouTube, one of the most popular online social media platforms. YouTube has been accused of putting its users in "rabbit holes" [26] and its algorithm has been described as a "long-term addiction machine" [19]. This is a grave concern because 70% of watched content on YouTube is via recommendations [16] and the top recommendation is typically played automatically after the currently watched video. ...
Preprint
Full-text available
Recommendations algorithms of social media platforms are often criticized for placing users in "rabbit holes" of (increasingly) ideologically biased content. Despite these concerns, prior evidence on this algorithmic radicalization is inconsistent. Furthermore, prior work lacks systematic interventions that reduce the potential ideological bias in recommendation algorithms. We conduct a systematic audit of YouTube's recommendation system using a hundred thousand sock puppets to determine the presence of ideological bias (i.e., are recommendations aligned with users' ideology), its magnitude (i.e., are users recommended an increasing number of videos aligned with their ideology), and radicalization (i.e., are the recommendations progressively more extreme). Furthermore, we design and evaluate a bottom-up intervention to minimize ideological bias in recommendations without relying on cooperation from YouTube. We find that YouTube's recommendations do direct users -- especially right-leaning users -- to ideologically biased and increasingly radical content on both homepages and in up-next recommendations. Our intervention effectively mitigates the observed bias, leading to more recommendations to ideologically neutral, diverse, and dissimilar content, yet debiasing is especially challenging for right-leaning users. Our systematic assessment shows that while YouTube recommendations lead to ideological bias, such bias can be mitigated through our intervention.
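A minimal sketch of the sock-puppet logic this abstract describes: seed a fresh profile with videos of one leaning, then follow recommendations and score their leaning. The helpers `fetch_recommendations` and `video_leaning` are hypothetical stand-ins for the collection infrastructure such an audit requires, not real API calls, and the study's training regime and scale are not reproduced.

```python
# Sketch of a sock-puppet style audit: seed a fresh profile with videos of
# one leaning, then follow "up next" recommendations and score their leaning.
# `fetch_recommendations` and `video_leaning` are hypothetical stand-ins for
# the collection infrastructure such an audit requires; they are not API calls.
from statistics import mean
from typing import Callable, List


def audit_puppet(seed_videos: List[str],
                 fetch_recommendations: Callable[[str], List[str]],
                 video_leaning: Callable[[str], float],
                 steps: int = 20) -> float:
    """Follow the top recommendation for `steps` hops after watching the
    seed videos; return the mean leaning (-1 = left, +1 = right) of what
    was recommended along the way."""
    history = list(seed_videos)
    observed = []
    current = history[-1]
    for _ in range(steps):
        recs = fetch_recommendations(current)
        if not recs:
            break
        current = recs[0]              # the auto-play candidate
        observed.append(video_leaning(current))
        history.append(current)
    return mean(observed) if observed else 0.0

# Comparing the drift of a right-seeded puppet against a left-seeded one
# indicates whether recommendations amplify the seeded ideology.
```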
... We estimated YouTube videos' political leanings by first averaging the leaning scores of videos' early adopters on Twitter. We then used an external YouTube media bias dataset (Ledwich and Zaitsev 2020) to label the video leanings and identified optimal classification thresholds. We were able to find 58/10/111 left, center, and right-leaning videos for Abortion (analogously, 81/33/154 for Gun Control and 297/84/396 for BLM). ...
... To assign discrete leaning labels (i.e. Left, Right and Center) to the videos, we leveraged the Recfluence dataset (Ledwich and Zaitsev 2020) as an external source to annotate YouTube videos' leanings. This dataset includes 816 YouTube channels where each channel has 10k+ subscribers and more than 30% of its content is relevant to US politics, or cultural news, or cultural commentary. ...
... Distribution of leaning scores of videos (generated with kernel density estimation after removal of outliers) whose channels are available in the Recfluence dataset (Ledwich and Zaitsev 2020). Each rod in the figures corresponds to a video, and its color represents the leaning label of the video obtained from the Recfluence dataset. The resulting threshold pairs (thr_{L,C}, thr_{C,R}) are (0.538, 0.749) for Abortion, (0.478, 0.716) for Gun Control, and (0.524, 0.695) for BLM, respectively. ...
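The labeling step quoted above can be sketched as follows. The thresholds are the (thr_{L,C}, thr_{C,R}) pair reported for the Abortion topic; the adopter scores and the helper itself are simplified illustrations rather than the authors' pipeline.

```python
# Sketch: assign a discrete leaning label to a video by averaging the
# leaning scores of its early adopters on Twitter and thresholding.
# Thresholds are the (thr_{L,C}, thr_{C,R}) pair quoted for the Abortion
# topic; the adopter scores below are made up for illustration.
from statistics import mean

THR_LEFT_CENTER = 0.538
THR_CENTER_RIGHT = 0.749


def video_leaning_label(adopter_scores, thr_lc=THR_LEFT_CENTER,
                        thr_cr=THR_CENTER_RIGHT):
    """Map the mean early-adopter leaning score (0 = far left, 1 = far right)
    to a discrete Left / Center / Right label."""
    score = mean(adopter_scores)
    if score < thr_lc:
        return "Left"
    if score < thr_cr:
        return "Center"
    return "Right"


print(video_leaning_label([0.2, 0.35, 0.4]))   # -> Left
print(video_leaning_label([0.6, 0.65, 0.7]))   # -> Center
print(video_leaning_label([0.8, 0.9]))         # -> Right
```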
Preprint
Full-text available
Ideological asymmetries have recently been observed in contested online spaces, where conservative voices seem to be relatively more pronounced even though liberals are known to have the population advantage on digital platforms. Most prior research, however, focused on either a single platform or a single political topic. Whether an ideological group garners more attention across platforms and topics, and how the attention dynamics evolve over time, has not been explored. In this work, we present a quantitative study that links collective attention across two social media platforms (YouTube and Twitter), centered on online activities surrounding videos of three controversial political topics, Abortion, Gun Control, and Black Lives Matter, over 16 months. We propose several video-centric metrics to characterize how online attention is accumulated for different ideological groups. We find that neither side is on a winning streak: left-leaning videos are overall more viewed and more engaging, but less tweeted, than right-leaning videos. The attention time series unfold more quickly for left-leaning videos but span a longer time for right-leaning videos. Network analysis of the early adopters and tweet cascades shows that the information diffusion for left-leaning videos tends to involve centralized actors, while that for right-leaning videos starts earlier in the attention lifecycle. In sum, our findings go beyond the static picture of ideological asymmetries in digital spaces and provide a set of methods to quantify attention dynamics across different social platforms.
... However, these reports mention many other factors including chat rooms, personal relationships, user-directed searches, and life circumstances. More systematic studies have looked for recommender feedback effects that move users toward radicalization en masse (Faddoul et al., 2020; Ledwich and Zaitsev, 2019; Munger and Phillips, 2020; Ribeiro et al., 2020a). These studies generally show that recommenders alter the content mix in the direction of engagement, but have produced poor evidence on the radicalizing potential of recommender systems because of insufficiently powerful experimental designs, as we will discuss below. In general, causal understanding of user trajectories through recommender systems remains a major challenge. ...
... There is replicated evidence that strongly moralizing expression spreads faster than other content on social networks (Brady and Van Bavel, 2021) and such moralizing seems to precede offline violence (Mooijman et al., 2018). A number of researchers interested in categories such as "far right," "conspiracy," or "radical" have studied the network structure of recommendations between YouTube channels (Faddoul et al., 2020;Ledwich and Zaitsev, 2019;Ribeiro et al., 2020a). While this showed that more extreme channels often link to each other, these studies do not analyze user trajectories because they were conducted without any personalization. ...
Preprint
Recommender systems are the algorithms which select, filter, and personalize content across many of the world's largest platforms and apps. As such, their positive and negative effects on individuals and on societies have been extensively theorized and studied. Our overarching question is how to ensure that recommender systems enact the values of the individuals and societies that they serve. Addressing this question in a principled fashion requires technical knowledge of recommender design and operation, and also critically depends on insights from diverse fields including social science, ethics, economics, psychology, policy and law. This paper is a multidisciplinary effort to synthesize theory and practice from different perspectives, with the goal of providing a shared language, articulating current design approaches, and identifying open problems. It is not a comprehensive survey of this large space, but a set of highlights identified by our diverse author cohort. We collect a set of values that seem most relevant to recommender systems operating across different domains, then examine them from the perspectives of current industry practice, measurement, product design, and policy approaches. Important open problems include multi-stakeholder processes for defining values and resolving trade-offs, better values-driven measurements, recommender controls that people use, non-behavioral algorithmic feedback, optimization for long-term outcomes, causal inference of recommender effects, academic-industry research collaborations, and interdisciplinary policy-making.
... A 2014 study found that exposure to conspiracy content reduced an individual's willingness to receive vaccinations [12,38,40,57]. Other research additionally supports the overlap of conspiracy and political communities such as the alt-right and radical left [43,54]. Online Advertising. ...
... For the first, we sought a group of videos focused on conspiratorial or pseudoscientific content. We extracted them from Ledwich and Zaitsev [43], who manually labeled many YouTube channels. The paper focused on political channels, but discovered enough conspiratorial YouTube creators to give them a specific category. ...
Preprint
Full-text available
Conspiracy theories are increasingly a subject of research interest as society grapples with their rapid growth in areas such as politics or public health. Previous work has established YouTube as one of the most popular sites for people to host and discuss different theories. In this paper, we present an analysis of monetization methods of conspiracy theorist YouTube creators and the types of advertisers potentially targeting this content. We collect 184,218 ad impressions from 6,347 unique advertisers found on conspiracy-focused channels and mainstream YouTube content. We classify the ads into business categories and compare their prevalence between conspiracy and mainstream content. We also identify common offsite monetization methods. In comparison with mainstream content, conspiracy videos had similar levels of ads from well-known brands, but an almost eleven times higher prevalence of likely predatory or deceptive ads. Additionally, we found that conspiracy channels were more than twice as likely as mainstream channels to use offsite monetization methods, and 53% of the demonetized channels we observed were linking to third-party sites for alternative monetization opportunities. Our results indicate that conspiracy theorists on YouTube had many potential avenues to generate revenue, and that predatory ads were more frequently served for conspiracy videos.
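The prevalence comparison described in this abstract reduces to simple proportions; the sketch below uses invented impression counts (not the study's 184,218 real impressions) to show the computation.

```python
# Sketch: compare the prevalence of ad categories on conspiracy vs.
# mainstream channels. The counts are invented stand-ins for collected
# ad-impression data.
from collections import Counter

conspiracy_ads = Counter({"well_known_brand": 400, "predatory": 110, "other": 90})
mainstream_ads = Counter({"well_known_brand": 420, "predatory": 10, "other": 170})


def prevalence(counter, category):
    total = sum(counter.values())
    return counter[category] / total if total else 0.0


for cat in ("well_known_brand", "predatory"):
    p_consp = prevalence(conspiracy_ads, cat)
    p_main = prevalence(mainstream_ads, cat)
    ratio = p_consp / p_main if p_main else float("inf")
    print(f"{cat}: conspiracy={p_consp:.3f} mainstream={p_main:.3f} "
          f"ratio={ratio:.1f}x")
```

With these toy counts, well-known brands appear at similar rates on both channel types while the "predatory" category is roughly an order of magnitude more prevalent on conspiracy channels, mirroring the shape of the comparison the abstract reports.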
... On the other hand, video classification has been viewed as a critical means of ensuring the efficient retrieval of videos [6-9]. Previous research has indicated that affective computing-based video retrieval is the future direction and presents eminently compelling research issues [7,10,11]. ...
... Nevertheless, critical approaches stress that web platforms can accentuate antagonism and hostility among participants, damaging online discussion's potential for upholding the public sphere (Dahlberg, 2001). This "dual nature" of the internet has manifested since the very beginning of online communication (Ledwich & Zaitsev, 2019), where "flame-wars" (Kayany, 1998) and trolling behavior were already common practice in the everyday life of online communities. With the success of social media, behaviors previously confined to Usenet message boards and limited IRC channels such as inflammatory behavior, antisocial messaging and polarized extremism (Ledwich & Zaitsev, 2019) have become commonplace in the public consciousness. ...
... This "dual nature" of the internet has manifested since the very beginning of online communication (Ledwich & Zaitsev, 2019), where "flame-wars" (Kayany, 1998) and trolling behavior were already common practice in the everyday life of online communities. With the success of social media, behaviors previously confined to Usenet message boards and limited IRC channels such as inflammatory behavior, antisocial messaging and polarized extremism (Ledwich & Zaitsev, 2019) have become commonplace in the public consciousness. Thus, while everyday political discussion might help participants to learn and understand matters of public concerns (Rossini, 2019), narratives emerging on social media do not necessarily seem civil (Rohlinger & Williams, 2019). ...
Article
Full-text available
The study addresses central issues in contemporary politics in response to growing concern about the impoverishment of political discourse that has become increasingly uncivil. In particular, it analyzes citizens’ reactions to leaders’ uncivil posts on Facebook during the 2018 Italian General Election, by adopting a theoretical-operational model based on a dual approach (top down – bottom up) that examines the forms of adverse communication used by politicians online, and the consequences of these forms on users’ discussion (analyzing both ranking behaviors and users’ comments). Political incivility is operationalized as a multidimensional concept and specific types are proposed, starting from violations of norms of politeness (interpersonal level) and proceeding to violation of public norms of civility (public level). Results show that leaders’ use of uncivil messages triggers greater online participation, thus increasing the visibility of their posts. However, the emotional excitement elicited by these triggering forms of elite communication encourages antagonistic and rude behaviors among users, leading to an increase in uncivil comments and thus jeopardizing the quality of online discussion. Overall, it emerges that incivility combined with divisive issues can be thought of as a tool of communication used strategically by politicians to mobilize voters and to strengthen their political affiliation.
... press is the role played by online recommendation algorithms, which are thought to contribute to echo chambers, filter bubbles, and radicalization (Tufekci 2018;Weill 2018;Nicas 2018). Yet there is little evidence to support this claim in scholarly work, which primarily examines whether partisans are exposed to different streams of information (Barberá et al. 2015;Guess et al. 2018;Bakshy, Messing, and Adamic 2015;Ledwich and Zaitsev 2020). To the extent that such online echo chambers even exist in the first place (Guess et al. 2018), they are thought to be the product of intentional human behaviors, not recommendation algorithms (Chen et al. 2021). ...
... Despite the popularity of the concept, a recent review finds mixed evidence regarding the prevalence of online echo chambers (Barberá 2020). Moreover, research on social media algorithms specifically has, if anything, suggested that any echo chambers that do exist are more likely to be driven by user choices than by underlying algorithms (Bakshy, Messing, and Adamic 2015;Ribeiro et al. 2020;Ledwich and Zaitsev 2020). ...
Article
Full-text available
Skepticism about the outcome of the 2020 presidential election in the United States led to a historic attack on the Capitol on January 6th, 2021, and represents one of the greatest challenges to America's democratic institutions in over a century. Narratives of fraud and conspiracy theories proliferated over the fall of 2020, finding fertile ground across online social networks, although little is known about the extent and drivers of this spread. In this article, we show that users who were more skeptical of the election's legitimacy were more likely to be recommended content that featured narratives about the legitimacy of the election. Our findings underscore the tension between an "effective" recommendation system that provides users with the content they want, and a dangerous mechanism by which misinformation, disinformation, and conspiracies can find their way to those most likely to believe them.
... The rapid growth of Internet users and the increasing popularity of video streaming services also increased the spread of misinformation, malicious and offensive content, and hate speech among people [1]. Several studies have highlighted that the YouTube recommendation algorithm acts as a platform to radicalize people by suggesting radicalized content [2], [3], [4], [5]. Moreover, many private organizations want to limit the political, hate, and violence related videos within their private networks. ...
Conference Paper
Full-text available
Identifying which video is being streamed in network traffic is in fact a tough challenge for researchers. Recent studies on video identification in network traffic use many different assumptions that increase the difficulty of practical deployment of the techniques. This study proposes an attack using a Convolutional Neural Network for video identification in network traffic and also targets these assumptions to understand their effects on the accuracy of the attack. Two basic assumptions are targeted in this work: (1) the video start time at the client must match that of the samples used for training the models, and (2) all normalization techniques for normalizing features before feeding them to the classifier work equally well. The results show that even if the video starts playing from any random position, the convolutional neural network can identify the video with a high accuracy of 95%. There is no restriction of an exact match between the start time of the video at the client and in the samples used for training. Moreover, the max normalization technique provides better accuracy than the sigmoid function.
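The two assumptions the abstract targets, start-time alignment and the choice of normalization, can be made concrete with a small sketch. The bytes-per-second feature layout and the tiny network below are assumptions for illustration, not the paper's architecture.

```python
# Sketch: normalize a bytes-per-second trace two ways (max vs. sigmoid)
# and feed it to a tiny 1D CNN classifier. Architecture and data are
# illustrative stand-ins, not the paper's model.
import numpy as np
import torch
import torch.nn as nn


def max_normalize(x: np.ndarray) -> np.ndarray:
    return x / x.max() if x.max() > 0 else x


def sigmoid_normalize(x: np.ndarray) -> np.ndarray:
    z = (x - x.mean()) / (x.std() + 1e-9)
    return 1.0 / (1.0 + np.exp(-z))


class TrafficCNN(nn.Module):
    def __init__(self, n_videos: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_videos)

    def forward(self, x):                  # x: (batch, 1, seconds)
        return self.classifier(self.features(x).squeeze(-1))


# 60 seconds of (fake) bytes-per-second measurements for one flow.
trace = np.random.default_rng(0).integers(0, 2_000_000, size=60).astype(np.float32)
model = TrafficCNN(n_videos=10)
for name, norm in (("max", max_normalize), ("sigmoid", sigmoid_normalize)):
    batch = torch.from_numpy(norm(trace).astype(np.float32)).view(1, 1, -1)
    print(name, model(batch).shape)        # torch.Size([1, 10]) for both
```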
... For example, fast-paced video streaming services has resulted in a fast spread of hate and misinformation in societies [4]. Various research studies and reports strongly suggest that YouTube acts as a platform to radicalize people [5]. The investigation reports from a shooting incident in a Christchurch Mosque also discusses the role of YouTube in promoting hate against the community and radicalization of the shooter (https://www.theverge.com/2020/12/ ...
Article
Full-text available
Encryption protocols, e.g., HTTPS, are used to secure the traffic between servers and clients for YouTube and other video streaming services, and VPNs are used to further secure the communication. However, these protocols are not sufficient to hide the identity of the videos from someone who can sniff the network traffic. The present work explores methodologies and features to identify videos in VPN and non-VPN network traffic. To identify such videos, a side-channel attack using a Sequential Convolutional Neural Network is proposed. The results demonstrate that a sequence of bytes per second from even one minute of sniffing network traffic is sufficient to predict the video with high accuracy. Accuracy increases to 90% for non-VPN traffic, 66% for VPN traffic, and 77% for mixed VPN and non-VPN traffic, for models with two-minute sniffing.
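The bytes-per-second sequence such classifiers consume can be derived from packet metadata alone; the sketch below aggregates invented (timestamp, size) pairs into a per-second trace, standing in for what would come from a real capture.

```python
# Sketch: turn sniffed packet metadata (timestamp, payload size) into the
# bytes-per-second sequence that a traffic classifier would consume.
# Packet data here is invented; in practice it would come from a capture.
from collections import defaultdict
from typing import Iterable, List, Tuple


def bytes_per_second(packets: Iterable[Tuple[float, int]],
                     window_seconds: int = 60) -> List[int]:
    """packets: (timestamp_in_seconds, size_in_bytes) pairs from one flow."""
    packets = list(packets)
    if not packets:
        return [0] * window_seconds
    start = min(t for t, _ in packets)
    buckets = defaultdict(int)
    for t, size in packets:
        second = int(t - start)
        if 0 <= second < window_seconds:
            buckets[second] += size
    return [buckets[s] for s in range(window_seconds)]


fake_packets = [(0.1, 1500), (0.4, 1500), (1.2, 900), (2.7, 1500), (2.9, 400)]
print(bytes_per_second(fake_packets, window_seconds=5))  # [3000, 900, 1900, 0, 0]
```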
... In 2019, several working papers were released that applied quantitative techniques to relatively large-scale datasets to measure the recommendation algorithm's impact on the growth of right-wing radicalization on the platform (Ribeiro et al., 2019; Ledwich and Zaitsev, 2019). While there was significant disagreement between Ribeiro et al. and Ledwich and Zaitsev about the 'political bias' of the recommendation system, the outcome suggested that large-scale mapping of algorithms via YouTube's API can never really capture the 'actual experience' of the audience (Feuer, 2019). ...
Article
Full-text available
Around 2018, YouTube became heavily criticized for its radicalizing function by allowing far-right actors to produce hateful videos that were in turn amplified through algorithmic recommendations. Against this ‘algorithmic radicalization’ hypothesis, Munger and Phillips (2019, A supply and demand framework for YouTube politics. Preprint. https://osf.io/73jys/download; Munger and Phillips, 2020, Right-wing YouTube: a supply and demand perspective. The International Journal of Press/Politics, 21(2). doi: 10.1177/1940161220964767.)) argued that far-right radical content on YouTube fed into audience demand, suggesting researchers adopt a ‘supply and demand’ framework. Navigating this debate, our article deploys novel methods for examining radicalization in the language of far-right pundits and their audiences within YouTube’s so-called ‘Alternative Influence Network’ (Lewis, 2018, Alternative Influence. Data & Society Research Institute. https://datasociety.net/library/alternative-influence/ (accessed 9 December 2020).). To that end, we operationalize the concept ‘extreme speech’—developed to account for ‘the inherent ambiguity of speech contexts’ online (Pohjonen and Udupa, 2017, Extreme speech online: an anthropological critique of hate speech debates. International Journal of Communication, 11: 1173–91)—to an analysis of a right-wing ‘Bloodsports’ debate subculture that thrived on the platform at the time. Highlighting the topic of ‘race realism’, we develop a novel mixed-methods approach: repurposing the far-right website Metapedia as a corpus to detect unique terms related to the issue. We use this corpus to analyze the transcripts and comments from an archive of 950 right-wing channels, collected from 2008 until 2018. In line with Munger and Phillips’ framework, our empirical study identifies a market for extreme speech on the platform, which came into public view in 2017.
... "What-to-watch-next" (W2W) recommenders are key features of video sharing platforms [55], as they sustain user engagement, thus increasing content views and driving advertisement and monetization. However, recent studies have raised serious concerns about the potential role played by W2W recommenders, specifically in driving users towards undesired or polarizing content [29]. Specifically, radicalized communities 1 on social networks and content sharing platforms have been recognized as keys in the consumption of news and in building opinions around politics and related subjects [30,48,53]. ...
Preprint
Full-text available
Recommender systems typically suggest to users content similar to what they consumed in the past. If a user happens to be exposed to strongly polarized content, she might subsequently receive recommendations which may steer her towards more and more radicalized content, eventually being trapped in what we call a "radicalization pathway". In this paper, we study the problem of mitigating radicalization pathways using a graph-based approach. Specifically, we model the set of recommendations of a "what-to-watch-next" recommender as a d-regular directed graph where nodes correspond to content items, links to recommendations, and paths to possible user sessions. We measure the "segregation" score of a node representing radicalized content as the expected length of a random walk from that node to any node representing non-radicalized content. High segregation scores are associated to larger chances to get users trapped in radicalization pathways. Hence, we define the problem of reducing the prevalence of radicalization pathways by selecting a small number of edges to "rewire", so to minimize the maximum of segregation scores among all radicalized nodes, while maintaining the relevance of the recommendations. We prove that the problem of finding the optimal set of recommendations to rewire is NP-hard and NP-hard to approximate within any factor. Therefore, we turn our attention to heuristics, and propose an efficient yet effective greedy algorithm based on the absorbing random walk theory. Our experiments on real-world datasets in the context of video and news recommendations confirm the effectiveness of our proposal.
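The segregation score defined in this abstract (the expected random-walk length from a radicalized node to any non-radicalized node) has a closed form via the fundamental matrix of an absorbing Markov chain. The toy recommendation graph below is hypothetical, not data from the paper.

```python
# Sketch: segregation score of radicalized nodes as the expected number of
# recommendation clicks before reaching any non-radicalized node, computed
# via the fundamental matrix N = (I - Q)^{-1} of an absorbing random walk.
import numpy as np

# Row-stochastic transition matrix over 4 items; a user clicks one of the
# recommended items at random. Items 0 and 1 are radicalized (transient),
# items 2 and 3 are non-radicalized (treated as absorbing).
P = np.array([
    [0.0, 0.8, 0.2, 0.0],   # item 0 mostly recommends item 1
    [0.7, 0.0, 0.1, 0.2],   # item 1 mostly recommends item 0
    [0.0, 0.0, 1.0, 0.0],   # absorbing
    [0.0, 0.0, 0.0, 1.0],   # absorbing
])
radicalized = [0, 1]

Q = P[np.ix_(radicalized, radicalized)]          # transitions within the radicalized set
N = np.linalg.inv(np.eye(len(radicalized)) - Q)  # fundamental matrix
expected_steps = N @ np.ones(len(radicalized))   # expected hops to absorption

for node, steps in zip(radicalized, expected_steps):
    print(f"segregation score of item {node}: {steps:.2f} expected clicks")
```

Rewiring an edge (for example, pointing some of item 0's recommendations at item 3) shrinks these scores, which is the quantity the paper's greedy heuristic tries to minimize while keeping recommendations relevant.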
... While some studies indicate that the algorithmic promotion of extremist and far-right content can lead users through recommendation chains of increasingly extreme content (e.g., Mozilla Foundation, 2020; Ribeiro et al., 2020), others suggest that actors exploit recommender systems by creating content to fill "voids," thereby gaining outsized attention for extreme content (Golebiewski & boyd, 2019, p. 29). Still others conclude that users encounter far-right content mostly through their own searches, indicating a level of pre-existing demand for extreme or radicalizing content, and that recommendation systems (including search engines) play a subsidiary role in its delivery (see e.g., Ledwich & Zaitsev, 2020). A number of scholars have suggested that excessive concern about algorithmic recommendation and associated personalization limiting users' exposure to diverse content may not be warranted (e.g., Haim et al., 2018;Möller et al., 2018), and more broadly, that additional work needs to be conducted to conceptualize standards for "diversity" in critiques of recommender systems' outputs (Loecherbach et al., 2020;Nechushtai & Lewis, 2019;Vrijenhoek et al., 2021). ...
Article
Full-text available
YouTube’s “up next” feature algorithmically selects, suggests, and displays videos to watch after the one that is currently playing. This feature has been criticized for limiting users’ exposure to a range of diverse media content and information sources; meanwhile, YouTube has reported that they have implemented various technical and policy changes to address these concerns. However, there is little publicly available data to support either the existing concerns or YouTube’s claims of having addressed them. Drawing on the idea of “platform observability,” this article combines computational and qualitative methods to investigate the types of content that the algorithms underpinning YouTube’s “up next” feature amplify over time, using three keyword search terms associated with sociocultural issues where concerns have been raised about YouTube’s role: “coronavirus,” “feminism,” and “beauty.” Over six weeks, we collected the videos (and their metadata, including channel IDs) that were highly ranked in the search results for each keyword, as well as the highly ranked recommendations associated with the videos. We repeated this exercise for three steps in the recommendation chain and then examined patterns in the recommended videos (and the channels that uploaded the videos) for each query and their variation over time. We found evidence of YouTube’s stated efforts to boost “authoritative” media outlets, but at the same time, misleading and controversial content continues to be recommended. We also found that while algorithmic recommendations offer diversity in videos over time, there are clear “winners” at the channel level that are given a visibility boost in YouTube’s “up next” feature. However, these impacts are attenuated differently depending on the nature of the issue.
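A minimal sketch of the chain-following collection this abstract describes, assuming hypothetical `search` and `up_next` helpers; they stand in for instrumented browsing or scraping rather than an official API, and the study's six-week schedule and metadata handling are not reproduced.

```python
# Sketch: collect "up next" chains of depth 3 from keyword search results.
# `search` and `up_next` are hypothetical collection helpers standing in
# for instrumented browsing; they are not official API calls.
from typing import Callable, Dict, List


def collect_chains(keyword: str,
                   search: Callable[[str], List[str]],
                   up_next: Callable[[str], List[str]],
                   depth: int = 3,
                   breadth: int = 3) -> Dict[str, List[List[str]]]:
    """For each highly ranked search result, record `breadth` recommended
    videos at each of `depth` steps, returning one chain per seed video."""
    chains = {}
    for seed in search(keyword):
        chain, frontier = [], [seed]
        for _ in range(depth):
            step = []
            for video in frontier:
                step.extend(up_next(video)[:breadth])
            chain.append(step)
            frontier = step
        chains[seed] = chain
    return chains

# Repeating this daily for several weeks and then grouping the recommended
# videos by channel gives the longitudinal, channel-level view used in
# audits of this kind.
```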
... On YouTube, a recent study concluded that the graph formed by the network of nonpersonalized recommendations tends to confine users in groups of homogeneous videos (Roth et al., 2020). These observations are in line with the Ledwich and Zaitsev (2019) finding that the recommendations tend to redirect the user to more dominant, mainstream media. These disparate results focusing each on a specific technology underline the need for more comprehensive research on the phenomenon, encompassing users' news diets in complex environments. ...
Article
Full-text available
This paper presents the results of a study which aims at understanding how social media platforms influence the formation of opinions of young adults (18–25) through content personalization. To do this, we problematize the so-called “filter bubble” phenomenon. We first go back to the literature and propose to depart from trying to assess the existence of and quantify the presence of filter bubbles on social media. We propose to focus on news use and access to content diversity related to political opinion formation and the impact of algorithms on the presence of said diversity. We then propose a theoretical framework—Activity Theory (AT)—for understanding and modeling the diversity of practices, as well as the discourses regarding these practices, of youth on social media with respect to access to diverse content and news. In particular, the division of phenomena into three levels (operations, actions, and activities) is used to build up a canvas for a model that will be tested and enriched with the new data. The so-called “pyramidal model” is also discussed and applied to our research topic. The third part of this paper summarizes the methods used to gather the data through a method we call “online in praxis interviews.” We then present the results, which show a relative knowledge of the mechanisms of content recommendations on social media as well as the tactics young people use to increase or mitigate them.
... For instance, echo chambers are not as pervasive as they were thought to be (Guess et al. 2018): "if online 'echo chambers' exist, they are a reality for relatively few people" (Guess 2021:1007). In a similar vein, Ledwich and Zaitsev (2019) find that YouTube algorithms discourage people from visiting extremist sites, and Munger and Phillips (2022) indicate that far-right viewership on YouTube has declined since 2017, even more so when YouTube changed its algorithm in January 2018 to demote far-right sites. Bail (2021) reports evidence of false polarization, wherein social media users overestimate the degree of political differences with those who have opposing views, and leaderless demagoguery, wherein those who express extreme views with dramatic claims obtain status. ...
Article
Conspiracies are consequential and social, yet online conspiracy groups that consist of individuals (and bots) seeking to explain events or a system have been neglected in sociology. We extract conspiracy talk about the COVID-19 pandemic on Twitter and use the biterm topic model (BTM) to provide a descriptive baseline for the discursive and social structure of online conspiracy groups. We find that individuals enter these communities through a gateway conspiracy theory before proceeding to extreme theories, and humans adopt more diverse conspiracy theories than do bots. Event-history analyses show that individuals tweet new conspiracy theories, and tweet inconsistent theories simultaneously, when they face a threat posed by a rising COVID-19 case rate and receive attention from others via retweets. By contrast, bots are less responsive to rising case rates, but they are more consistent, as they mainly tweet about how COVID-19 was deliberately created by sinister agents. These findings suggest human beings are bricoleurs who use conspiracy theories to make sense of COVID-19, whereas bots are designed to create moral panic. Our findings suggest that conspiracy talk by individuals is defensive in nature, whereas bots engage in offense.
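The biterm topic model named above is estimated from unordered within-document word pairs ("biterms"); the sketch below shows that extraction step on invented tweets, not the study's preprocessing pipeline.

```python
# Sketch: extract biterms (unordered within-document word pairs) from short
# texts, the basic unit the biterm topic model (BTM) is estimated from.
# The example tweets are invented and preprocessing is deliberately minimal.
from itertools import combinations
from collections import Counter

tweets = [
    "the virus was planned they are hiding it",
    "the lab leak was planned they are hiding it",
    "vaccines are hiding the truth",
]

STOPWORDS = {"the", "was", "they", "are", "it"}


def biterms(text):
    tokens = [w for w in text.lower().split() if w not in STOPWORDS]
    # every unordered pair of distinct tokens in the (short) document
    return [tuple(sorted(pair)) for pair in combinations(tokens, 2)]


counts = Counter()
for tweet in tweets:
    counts.update(biterms(tweet))

print(counts.most_common(3))   # ('hiding', 'planned') tops the list here
```

BTM then models the corpus-wide distribution of such biterms over latent topics, which is what makes it suited to short, sparse texts like tweets.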
... Polarization is a process that "defines other groups in the social and political arena as allies or opponents" while radicalization involves people who "become separated from the mainstream norms and values of their society" and may engage in violence (van Stekelenburg, 2014). There is a growing body of work studying the connection between recommender systems and radicalization (Baugut & Neumann, 2020;Hosseinmardi et al., 2020;Ledwich & Zaitsev, 2019;Munger & Phillips, 2020;Ribeiro et al., 2020) but this is methodologically challenging, and has not yet established a robust causal link. While social media is plausibly involved in radicalization processes the nature of this connection is complex and poorly understood. ...
Preprint
Full-text available
Polarization is implicated in the erosion of democracy and the progression to violence, which makes the polarization properties of large algorithmic content selection systems (recommender systems) a matter of concern for peace and security. While algorithm-driven social media does not seem to be a primary driver of polarization at the country level, it could be a useful intervention point in polarized societies. This paper examines algorithmic depolarization interventions with the goal of conflict transformation: not suppressing or eliminating conflict but moving towards more constructive conflict. Algorithmic intervention is considered at three stages: which content is available (moderation), how content is selected and personalized (ranking), and content presentation and controls (user interface). Empirical studies of online conflict suggest that the exposure diversity intervention proposed as an antidote to "filter bubbles" can be improved and can even worsen polarization under some conditions. Using civility metrics in conjunction with diversity in content selection may be more effective. However, diversity-based interventions have not been tested at scale and may not work in the diverse and dynamic contexts of real platforms. Instead, intervening in platform polarization dynamics will likely require continuous monitoring of polarization metrics, such as the widely used "feeling thermometer." These metrics can be used to evaluate product features, and potentially engineered as algorithmic objectives. It may further prove necessary to include polarization measures in the objective functions of recommender algorithms to prevent optimization processes from creating conflict as a side effect.
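One way to read the closing suggestion, folding polarization measures into the recommender's objective, is as a penalized ranking score. The sketch below is schematic, with invented scores and an invented weight; it is not a tested intervention from the paper.

```python
# Sketch: re-rank candidate items by engagement score minus a polarization
# penalty, illustrating how a polarization metric could enter a recommender
# objective. Scores, penalties, and the weight are invented.
from typing import List, Tuple


def rerank(candidates: List[Tuple[str, float, float]],
           polarization_weight: float = 0.5) -> List[str]:
    """candidates: (item_id, predicted_engagement, predicted_polarization),
    both on comparable scales. Higher polarization lowers the final score."""
    scored = [(item, engagement - polarization_weight * polarization)
              for item, engagement, polarization in candidates]
    return [item for item, _ in sorted(scored, key=lambda x: x[1], reverse=True)]


candidates = [
    ("outrage_clip", 0.9, 0.8),
    ("explainer", 0.7, 0.1),
    ("local_news", 0.6, 0.1),
]
print(rerank(candidates))   # ['explainer', 'local_news', 'outrage_clip']
```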
... (Kreuschnigg et al. 2018). If the data is not observational but, for instance, generated through (large-scale) online experiments, then (conventional) statistical methods can be used to analyse the data. [1] Algorithmic extremism is the online radicalisation of users that is driven by algorithmic recommendation systems (Ledwich and Zaitsev 2019). [2] Predictive policing is the application of analytical techniques - particularly quantitative techniques - to identify likely targets for police intervention and prevent crime or to solve past crimes by making statistical predictions (Perry et al. 2013). ...
Chapter
Full-text available
Digital data and methods are becoming increasingly ubiquitous. Almost all aspects of people's lives are now digitized: population registers are digital; health records are digital, criminal records are digital, employment and education registers are digital, all our interactions with authorities (whether on a local, regional or national level) are increasingly digital and hence produce digital records; most of our economic transactions (purchases, bank transfers, sales) are increasingly digital and hence produce digital records; our entertainment consumption is increasingly digital (Netflix, Spotify) and hence produces digital records, and even our social life becomes increasingly digitized with social media like Facebook, Instagram, WhatsApp etc. While there are still important realms that remain off the digital grid (e.g. elections in most countries, real-life social interactions etc.), the vastness of activities that are now digitized can hardly be ignored. This creates both opportunities and challenges for social science researchers and sociologists in particular (Golder and Macy 2014, Hampton 2017). In this chapter I will discuss how analytical sociology can help to harness the opportunities and respond to the challenges, and why and how analytical sociology should embrace digital data and methods in its repertoire of methodological approaches. I will first define and discuss what digital data and digital methods are, then I will discuss the opportunities and challenges and how analytical sociology can help to harness the former and deal with the latter, and finally I will discuss what digital data and digital methods can offer to analytical sociology.
... There has been a significant discussion on how YouTube can lead to radicalization [5,46]. The underlying idea is that the recommendation engine guides non-radicalized users toward increasingly radical content. ...
Article
Full-text available
The COVID-19 pandemic has severely affected the lives of people worldwide, and consequently, it has dominated world news since March 2020. Thus, it is no surprise that it has also been the topic of a massive amount of misinformation, which was most likely amplified by the fact that many details about the virus were not known at the start of the pandemic. While a large amount of this misinformation was harmless, some narratives spread quickly and had a dramatic real-world effect. Such events are called digital wildfires. In this paper we study a specific digital wildfire: the idea that the COVID-19 outbreak is somehow connected to the introduction of 5G wireless technology, which caused real-world harm in April 2020 and beyond. By analyzing early social media contents we investigate the origin of this digital wildfire and the developments that lead to its wide spread. We show how the initial idea was derived from existing opposition to wireless networks, how videos rather than tweets played a crucial role in its propagation, and how commercial interests can partially explain the wide distribution of this particular piece of misinformation. We then illustrate how the initial events in the UK were echoed several months later in different countries around the world.
... While some studies suggest an increase in radicalization exposure (e.g. Faddoul et al., 2020; Ribeiro et al., 2020), other research, like Ledwich and Zaitsev's (2020), suggests that the YouTube recommendation algorithm does not increase radicalisation. ...
Book
Full-text available
YouTube, Instagram, Facebook, Vimeo, Twitter, and so on, have their own logics, dynamics and different audiences. This book analyses how the users of these social networks, especially those of YouTube and Instagram, become content prescribers, opinion leaders and, by extension, people of influence. What influence capacity do they have? Why are intimate or personal aspects shared with unknown people? Who are the big beneficiaries? How much is vanity and how much altruism? What business is behind these social networks? What dangers do they contain? What volume of business can we estimate they generate? How are they transforming cultural industries? What legislation is applied? How does the legislation affect these communications when they are sponsored? Is the privacy of users violated with the data obtained? Who is the owner of the content? Are they to blame for “fake news”? In this changing, challenging and intriguing environment, The Dynamics of Influencer Marketing discusses all of these questions and more. Considering this complexity from different perspectives (technological, economic, sociological, psychological and legal), the book combines the visions of several experts from the academic world and provides a structured framework with a broad approach to understanding the new era of influencing, including its dark sides. It will be of direct interest to marketing scholars and researchers while also being relevant to many other areas affected by the phenomenon of social media influence.
... 19. Ledwich and Zaitsev (2020) argue that "data shows that YouTube does the exact opposite of the radicalization claims". However, it is not clear whether their results are generalizable to users who are logged in to their accounts, as arguably the whole point of the recommendation algorithm is personalizing recommendations to individual users. ...
... For instance, questions can be raised about the desirability of giving extremist voices equal access to mainstream news media. In the context of news recommendation systems, this challenge becomes even more pressing as some scholars suggest that algorithms may play a role in (dis)encouraging online radicalization (Ledwich & Zaitsev, 2019). ...
Thesis
Digitalization and the emergence of large amounts of media content has pushed organizations towards the use of algorithms to (semi-)automatically determine how information should be filtered, ranked and sorted. Especially in the news environment, there is an evolution ongoing in which news organizations increasingly rely on recommendation algorithms to personalize the news offer and tailor it to the users’ preferences. Although there are several commercial benefits related to the use of recommendation algorithms, several parties such as scholars and policy makers are concerned about how these technologies are used and designed. They believe that recommendation algorithms are a risk to citizens because they are trained to focus on similarities, between articles and people, rather than on differences. As such, they may provide more of the same news and expose citizens to a lesser extent to the diversity that is present in the news supply. In addition, they could also reinforce the self-selection process of citizens, which in turn also poses a risk to citizens’ consumed diversity. However, the idea of diversity is in several normative theories such as the public sphere perceived as an essential prerequisite to inform citizens properly and ensure the functioning of strong democracies. Academics therefore recommend exploring alternative ideas that can mitigate these risks and promote the idea of news diversity. In this dissertation, we take steps in that direction by examining how news organizations can incorporate diversity as a criterion in the development of recommendation algorithms and, by doing so, stimulate users to consume a diverse range of news articles. To do so, we make use of three research themes that give structure and meaning to this dissertation. These research themes are (1) news diversity as an alternative recommendation value, (2) audiences’ perceptions towards diversity-based recommendation algorithms and (3) audiences’ consumption behavior when using diversity-based recommendation algorithms. The rationale for these research themes lies in the idea to approach news algorithms from a socio-technical perspective, taking into account both the technical-conceptual aspects and social aspects of algorithms. In the first theme, we focus on these technical-conceptual aspects by conducting a systematic literature review and an interdisciplinary study on the meaning of news diversity and the different building blocks of a diversity-based algorithm. In the second and third theme, we focus on the social aspects by conducting a survey study and experimental study in which we investigate how news consumers evaluate and interact with news algorithms. Based on these studies, we present in this dissertation for each of these research themes several interesting insights that may be relevant to different stakeholders such as scholars, policy makers, news media and even citizens. A first important insight that emerges from our systematic literature review is that there is much diversity in the conceptualizations of the concept news diversity. For example, in our study we found that communication scholars have used more than 43 diversity dimensions and 26 different conceptualizations to shape the concept news diversity. In addition, researchers typically focus on dimensions that are easier to measure, such as the location of the news topic or the length of an article. Dimensions that are harder to measure, such as objectivity or controversy, are generally less chosen as objects of study. 
Normative assumptions about news diversity are also often neglected, making it difficult to assess which ideal is dominant in the academic literature. These results are especially valuable for academia in which the concept is frequently used to assess the news landscape and where a detailed dissection of the concept was lacking. In addition, news organizations can also use these insights to reflect on their own activities and/or the development of a diversity-based algorithm. A second important insight was found in our interdisciplinary study in which it became clear that the development of a diversity-based algorithm raises pertinent questions for a broad range of disciplines. These questions were not only present in the field of computer science where most recommendation algorithms are developed, but also in fields such as law, computational linguistics, and communication sciences. For computational linguistics and computer science, these questions are primarily situated in the technical elaboration that determines the accuracy and relevance of recommendation algorithms. For example, relevant content dimensions must be translated into content extraction algorithms, which is not a solved issue. The design of the recommendation algorithm must also be carefully considered, as the right balance has to be struck between relevance and diversity. For law and communication sciences, in turn, the questions are more fundamental in nature. Questions such as 'which diversity dimensions are relevant to extract' or 'what is the optimal diversity outcome' are important questions, to which no unambiguous answers currently exist. Our study presents a concise overview of these discussions and also clarifies the challenges that arise with each of these topics. For academics, these challenges are particularly relevant in order to shape future research. At the same time, this study also shows that an interdisciplinary approach is required for the development of diversity-based algorithms and can even help make the development process more efficient and structured. A third important insight comes from the survey study in which we shed light on the perceptions that users have towards the different news selection mechanisms that underlie news algorithms. The results of this study show that the audience has a greater preference for news selection principles belonging to the ‘content-based similarity’ news algorithm than for those belonging to the ‘collaborative similarity’ or ‘content-based diversity’ news algorithm. This result shows that when the audience has the choice to determine how they want to receive the news, they have a tendency to prefer news articles that only interest them. To address the risks that are involved with this tendency, we forward a new approach, called ‘personalized diversity’. In this approach, the ultimate goal of the diversity algorithm remains the same, but it takes advantage of the personalization techniques that underlie ‘similarity-based’ news algorithms. This approach is particularly valuable for news organizations that want to implement the idea of diversity in existing or future recommendation activities. At the same time, it also shows that news selection principles are not mutually exclusive and are thus quite compatible with each other. Finally, in our experimental study, we found interesting insights about how diversity-based algorithms can affect people’s news exposure behavior and perceptions.
In particular, our results show that diversity-based algorithms can steer users towards more diverse exposure behavior, with the personalized diversity-based news recommender being most effective. Moreover, we found that people using a diversity-based news recommender did not think they were reading more diversely, pointing towards a so-called diversity paradox. We forward several explanations for this paradox, but mainly point in the direction of transparency, and the lack thereof, in recommendation systems. This result is especially valuable for policy makers, to advance discussions on the importance of transparency in recommendation systems and to take further policy actions on this issue.
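The relevance-diversity balance discussed throughout the thesis is often operationalized as greedy re-ranking in the style of maximal marginal relevance; the sketch below is a generic illustration with assumed topic vectors and weights, not the dissertation's recommender.

```python
# Sketch: greedy diversity-aware re-ranking (maximal-marginal-relevance style).
# Each step picks the article that best trades off relevance against
# similarity to what has already been selected. Articles, relevance scores,
# and topic vectors are invented for illustration.
import numpy as np


def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def diversify(articles, k=3, lam=0.7):
    """articles: list of (id, relevance, topic_vector). lam=1 is pure
    relevance ranking; lower values push the selection towards diversity."""
    selected = []
    remaining = list(articles)
    while remaining and len(selected) < k:
        def mmr(item):
            _, rel, vec = item
            redundancy = max((cosine(vec, v) for _, _, v in selected), default=0.0)
            return lam * rel - (1 - lam) * redundancy
        best_idx = max(range(len(remaining)), key=lambda i: mmr(remaining[i]))
        selected.append(remaining.pop(best_idx))
    return [a[0] for a in selected]


articles = [
    ("election_poll", 0.95, np.array([1.0, 0.0, 0.0])),
    ("election_analysis", 0.90, np.array([0.9, 0.1, 0.0])),
    ("climate_report", 0.70, np.array([0.0, 1.0, 0.0])),
    ("local_culture", 0.60, np.array([0.0, 0.2, 1.0])),
]
print(diversify(articles, k=3, lam=0.6))
# -> ['election_poll', 'climate_report', 'local_culture']
```

With lam=1.0 the same call returns the two near-duplicate election items first, which is the "more of the same" behavior the dissertation argues diversity-based recommenders should counteract.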
... 186 Indeed, research which debunks myths surrounding AI, such as recent research disproving that YouTube's algorithm encourages radicalisation, is beneficial for the development of clear policy and to avoid technological solutionism or falling victim to fear. 187 Despite the continuing tension between the positive and negative attributes of AI, the consensus is clear that AI will have a significant impact on the Internet, whether that impact is through shaping conversations, fear and uncertainty surrounding AI, or cyber-utopianism about AI's potential. ...
Article
Freedom of religion or belief is an essential right for building pluralistic and tolerant societies which can sustain a multiplicity of competing ideas. However, the opaqueness of artificial intelligence systems on the Internet represents a challenge to the protection and enjoyment of this and other human rights. Although AI has generated interest in the human rights literature, these studies have largely focused on AI and its impact on freedom of expression and privacy, leaving other rights such as freedom of religion or belief neglected. As part of a broader research project to expand the academic conversation about AI and human rights, this paper will examine the impact of artificial intelligence on freedom of religion or belief online. The paper will focus on the worship, teaching, observance, and practice associated with freedom of religion or belief alongside the impacts of AI in content display, content moderation, and online privacy. The paper will offer preliminary policy recommendations to encourage discussion on policy approaches to AI development and deployment which incorporate protections for freedom of religion or belief in the era of artificial intelligence.
... CloudWalk Technology, a key supplier to the Chinese government, markets its "fire eye" facial recognition service to pick out "Uighurs, Tibetans and other sensitive groups". 5 Note that this suggestion has been disputed (e.g. Boxell et al. 2017; Ledwich and Zaitsev 2019). The underlying methodological problem is that social media companies have sole access to the data required to perform a thorough analysis, and lack incentive to publicise this data or perform the analysis themselves. ...
Preprint
Full-text available
Artificial intelligence is already being applied in and impacting many important sectors in society, including healthcare, finance, and policing. These applications will increase as AI capabilities continue to progress, which has the potential to be highly beneficial for society, or to cause serious harm. The role of AI governance is ultimately to take practical steps to mitigate this risk of harm while enabling the benefits of innovation in AI. This requires answering challenging empirical questions about current and potential risks and benefits of AI: assessing impacts that are often widely distributed and indirect, and making predictions about a highly uncertain future. It also requires thinking through the normative question of what beneficial use of AI in society looks like, which is equally challenging. Though different groups may agree on high-level principles that uses of AI should respect (e.g., privacy, fairness, and autonomy), challenges arise when putting these principles into practice. For example, it is straightforward to say that AI systems must protect individual privacy, but there is presumably some amount or type of privacy that most people would be willing to give up to develop life-saving medical treatments. Despite these challenges, research can and has made progress on these questions. The aim of this chapter will be to give readers an understanding of this progress, and of the challenges that remain.
... Meanwhile, their analysis of user comments suggests that users do migrate from milder to more extreme content over time. This is supported by Ledwich & Zaitsev [45], who find that YouTube's recommendation system actively discourages users from visiting extremist content. Their analysis suggests that YouTube directs traffic towards the two largest mainstream groups, the Partisan Right and the Partisan Left, and away from more niche content they labeled Conspiracy, White Identitarian, and Anti-Social Justice Warrior. ...
Conference Paper
Full-text available
With YouTube's growing importance as a news platform, its recommendation system came under increased scrutiny. Recognizing YouTube's recommendation system as a broadcaster of media, we explore the applicability of laws that require broadcasters to give important political, ideological, and social groups adequate opportunity to express themselves in the broadcasted program of the service. We present audits as an important tool to enforce such laws and to ensure that a system operates in the public's interest. To examine whether YouTube is enacting certain biases, we collected video recommendations about political topics by following chains of ten recommendations per video. Our findings suggest that YouTube's recommendation system is enacting important biases. We find that YouTube is recommending increasingly popular but topically unrelated videos. The sadness evoked by the recommended videos decreases while the happiness increases. We discuss the strong popularity bias we identified and analyze the link between the popularity of content and emotions. We also discuss how audits empower researchers and civic hackers to monitor complex machine learning (ML)-based systems like YouTube's recommendation system.
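The chain-following audit described in this abstract can be sketched in a few lines. The snippet below is only an assumed illustration of the procedure, not the authors' code; in particular, get_recommendations is a hypothetical placeholder for whatever scraper or API wrapper the audit would use to fetch recommended video IDs.

# Illustrative sketch of a chain-following audit (assumed design, not the authors' code).
# get_recommendations() is a hypothetical placeholder for the collection tooling.
from typing import Callable, List

def follow_chain(seed_video: str,
                 get_recommendations: Callable[[str], List[str]],
                 depth: int = 10) -> List[str]:
    """Follow the top recommendation from a seed video for `depth` steps."""
    chain = [seed_video]
    current = seed_video
    for _ in range(depth):
        recs = get_recommendations(current)   # e.g. video IDs scraped from the watch page
        if not recs:
            break
        current = recs[0]                     # always take the first recommendation
        chain.append(current)
    return chain

def audit(seeds: List[str],
          get_recommendations: Callable[[str], List[str]],
          depth: int = 10) -> List[List[str]]:
    """Collect one recommendation chain per seed video for later bias analysis."""
    return [follow_chain(seed, get_recommendations, depth) for seed in seeds]

The collected chains could then be scored for popularity, topical relatedness, or evoked emotion, which is the kind of downstream analysis the paper reports.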
... While some studies suggest an increase in radicalization exposure (e.g. Faddoul et al., 2020; Ribeiro et al., 2020), other research, like Ledwich and Zaitsev's (2020), suggests that the YouTube recommendation algorithm does not increase radicalisation. ...
... On one side, several studies claim that dynamic interaction of recommendation systems with users may lead to polarization [12,54], filter bubbles [23,50], homogenization [6], echo chambers [14,32,48] and extremism [33,45,52]. Empirical audits, however, find that algorithmically recommended content is more diverse than natural consumption [47], and that real systems do not exhibit the strong extremism or polarization effects implied by theoretical models [28,39,40,43,57]. The study [5] points out that recommendation systems might lead to undesired changes in preferences, and proposes to design for safe preference shifts, which are preference trajectories that are deemed "desirable". ...
Preprint
Full-text available
Designing recommendation systems that serve content aligned with time varying preferences requires proper accounting of the feedback effects of recommendations on human behavior and psychological condition. We argue that modeling the influence of recommendations on people's preferences must be grounded in psychologically plausible models. We contribute a methodology for developing grounded dynamic preference models. We demonstrate this method with models that capture three classic effects from the psychology literature: Mere-Exposure, Operant Conditioning, and Hedonic Adaptation. We conduct simulation-based studies to show that the psychological models manifest distinct behaviors that can inform system design. Our study has two direct implications for dynamic user modeling in recommendation systems. First, the methodology we outline is broadly applicable for psychologically grounding dynamic preference models. It allows us to critique recent contributions based on their limited discussion of psychological foundation and their implausible predictions. Second, we discuss implications of dynamic preference models for recommendation systems evaluation and design. In an example, we show that engagement and diversity metrics may be unable to capture desirable recommendation system performance.
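The abstract names the psychological effects it models but does not reproduce the equations. The toy sketch below uses an update rule of my own choosing rather than the authors' model, purely to illustrate how a mere-exposure-style preference dynamic can be coupled to a greedy recommender in simulation.

# Toy mere-exposure-style preference dynamics (an illustrative assumption, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
n_items = 5
preferences = rng.uniform(0.0, 1.0, n_items)   # latent preference per item
alpha = 0.05                                   # strength of the exposure effect

def recommend(prefs: np.ndarray) -> int:
    """Greedy recommender: always show the currently most-preferred item."""
    return int(np.argmax(prefs))

for step in range(100):
    item = recommend(preferences)
    # Mere-exposure: each exposure pulls preference for the shown item further upward,
    # creating the feedback loop between recommendations and preferences discussed above.
    preferences[item] += alpha * (1.0 - preferences[item])

print("final preferences:", np.round(preferences, 2))

Even this toy loop shows how a greedy policy plus an exposure effect concentrates preference on a single item, which is the kind of dynamic the paper argues evaluation metrics should be able to detect.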
... Various studies show a correlation between human behavior and the content viewed on popular video streaming websites such as YouTube [4] [5]. Studies have also shown that video streaming websites such as YouTube play an important role in the radicalization of people [6]. In recent events, such as the Christchurch mosque shootings, militants have used social video streaming services to broadcast their attacks. ...
Conference Paper
Full-text available
Video streaming service providers are increasingly using end-to-end traffic encryption protocols such as HTTPS. However, these protocols alone are not sufficient to hide the identity of the videos being watched by clients on the network from anyone who can sniff the network traffic, such as the network administrator or ISP. In this work, we explore YouTube video streams for video identification. We present a side-channel attack and a sequential convolutional neural network (SCNN) for the task. The results show that the SCNN can identify a video with high accuracy, without breaking the encryption, using only the bytes-per-second series from a two-minute sniff of the network traffic as its input feature.
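The exact SCNN architecture is not given in the abstract; the sketch below is an assumed, minimal 1-D convolutional classifier over a bytes-per-second trace (two minutes sampled once per second), intended only to illustrate the general approach rather than reproduce the authors' network.

# Minimal sketch of a sequential 1-D CNN over a bytes-per-second trace
# (assumed architecture for illustration; the paper's exact SCNN layout is not given here).
import torch
import torch.nn as nn

class TrafficCNN(nn.Module):
    def __init__(self, n_videos: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_videos)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len) bytes-per-second values, one per second of capture
        x = x.unsqueeze(1)              # -> (batch, 1, seq_len)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)       # logits over the candidate videos

# Two minutes of capture at one sample per second -> 120 values per example.
model = TrafficCNN(n_videos=50)
dummy = torch.rand(8, 120)
print(model(dummy).shape)               # torch.Size([8, 50])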
Article
Significance Daily share of news consumption on YouTube, a social media platform with more than 2 billion monthly users, has increased in the last few years. Constructing a large dataset of users’ trajectories across the full political spectrum during 2016–2019, we identify several distinct communities of news consumers, including “far-right” and “anti-woke.” The far-right community is small and does not increase in size over the observation period, while the anti-woke community is growing, and both grow in consumption per user. We find little evidence that the YouTube recommendation algorithm is driving attention to this content. Our results indicate that trends in video-based political news consumption are determined by a complicated combination of user preferences, platform features, and the supply-and-demand dynamics of the broader web.
Article
The emergence of deepfakes is the latest development to prompt anxieties over the wider implications of misinformation. This chapter explores how these technologies might extend the repertoire of modalities available to documentary makers. While these ‘synthetic media’ disrupt the documentary genre, they also continue long-standing trends within software culture and clearly augment practices deeply embedded within the genre. The discussion draws upon Wardle and Derakhshan’s ‘misinformation’ and ‘disinformation’ framework to highlight the increasing complexity of documentary forms and the challenges they pose to audiences. The limited experiments so far in productively integrating synthetic media into documentary suggest, in particular, possibilities for using them to develop more openly reflexive content. The proliferation of synthetic media prompts a wider need among documentary practitioners for critical data practices, software literacy, and ethical practices embedded within a broader understanding of automated, networked and entangled media systems. These media also challenge documentary designers to strategise the nature of their content and to engage more directly with their audiences on questions of evidence, trust, authenticity and the nature of documentary media in an era of misinformation.
Article
This article analyses various political-philosophical implications of the Artificial Intelligences that drive Social Networks. First, it examines social polarisation, observed as a consequence of the pursuit of economic profit, which in turn creates the need to offer controversial content in order to maximise users' presence on Social Networks. Second, it addresses the increase in political control enabled by the predictive and manipulative capacity of the AIs that drive these networks. Finally, it focuses on the purported neutrality and transparency of this technology. Throughout the three parts of the article, analytical tools bequeathed by twentieth-century philosophers such as Horkheimer, Marcuse, Habermas, Mosterín, Gramsci and Quintanilla, among others, are applied to the reality under study. The article closes by outlining practical proposals in response to the problems described.
Article
Full-text available
Based on the assumption that social media encourages a populist style of politics in online communities and the proposition that populism and conspiracy theories tend to co-occur, this article investigates whether this holds true for YouTube influencers, particularly on the less investigated left-wing spectrum. The article provides qualitative case studies of four different groups of political content creators on YouTube whose content makes use of or analyzes popular culture. The article concludes that a populist style plays a far less central role in left-wing communities on YouTube than on other platforms or within right-wing communities.
Chapter
This study audits the structural and emergent properties of YouTube’s video recommendations, with an emphasis on black-box evaluation of recommender bias, confinement, and the formation of information bubbles in the form of content communities. Adopting complex-network and graphical probabilistic approaches, our analysis of 6,068,057 video recommendations made by the YouTube algorithm reveals strong indicators of recommendation bias leading to the formation of closely-knit and confined content communities. Our qualitative and quantitative exploration of the prominent content and discourse in each community further uncovered the formation of narrative-specific clusters made by the recommender system we examined.
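The chapter's precise graph construction and clustering method are not detailed in this summary; the following minimal sketch assumes a directed recommendation graph and modularity-based community detection, simply to show the general shape of such an audit.

# Sketch of a graph-based audit of recommendation confinement
# (assumed workflow; the chapter's exact construction and clustering method may differ).
import networkx as nx
from networkx.algorithms import community

# Each record: (source_video, recommended_video), e.g. parsed from crawled recommendations.
edges = [
    ("vid_a", "vid_b"), ("vid_b", "vid_c"), ("vid_c", "vid_a"),   # tight cluster 1
    ("vid_x", "vid_y"), ("vid_y", "vid_z"), ("vid_z", "vid_x"),   # tight cluster 2
    ("vid_a", "vid_x"),                                           # rare cross-cluster hop
]

G = nx.DiGraph()
G.add_edges_from(edges)

# Community detection on the undirected projection; confined "content communities"
# appear as densely connected clusters with few edges between them.
clusters = community.greedy_modularity_communities(G.to_undirected())
for i, c in enumerate(clusters):
    print(f"community {i}: {sorted(c)}")

On a real crawl, the fraction of recommendation edges that stay inside a cluster versus crossing between clusters would serve as a simple confinement indicator.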
Chapter
Regulation to address abuse of the internet and of free speech presents a range of practical challenges as well as questions of moral principles that govern free, open and democratic societies. This chapter focuses on practical challenges that derive from the business models of big tech: the attention economy and a lack of transparency about the algorithms that drive this, the global oligopoly of big tech companies, and the positioning of online service providers as platforms rather than media content publishers. Constraining harmful digital communication requires sustained, co-ordinated, multi-lateral, multi-stakeholder agreements and initiatives: some combination of governmental and inter-governmental regulation, industry-wide standards, industry self-regulation, co-regulation that is monitored and enforced, technology innovation, and market pressure by advertisers, consumers and service users. Civil society organisations must be involved from the outset, in an open and inclusive manner. Planks to build on are Tim Berners-Lee’s Contract for the Web, Ro Khanna’s proposed Internet Bill of Rights, and the European Commission’s Code of Conduct on Countering Illegal Hate Speech Online. Keywords: Big tech, Attention economy, Algorithms, Echo chambers, Internet regulation
Thesis
Full-text available
Background: This thesis explores the far right online beyond the study of political parties and extremist far-right sites and content. Specifically, it focuses on the proliferation of far-right discourse among ‘ordinary’ internet users in mainstream digital settings. In doing so, it aims to bring the study of far-right discourse and the enabling roles of digital platforms and influential users into dialogue. It does so by analysing what is communicated and how; where it is communicated and therein the roles of different socio-technical features associated with various online settings; and finally, by whom, focusing on particularly influential users. Methods: The thesis uses material from four different datasets of digital, user-generated content, collected at different times through different methods. These datasets have been analysed using mixed methods approaches wherein interpretative methods, primarily in the form of critical discourse analysis (CDA), have been combined with various data processing techniques, descriptive statistics, visualisations, and computational data analysis methods. Results: The thesis provides a number of findings in relation to far-right discourse, digital platforms, and online influence, respectively. In doing so it builds on the findings of previous research, illustrates unexpected and contradictory results in relation to what was previously known, and makes a number of interesting new discoveries. Overall, it begins to unravel the complex interconnectedness of far-right discourse, platforms, and influential users, and illustrates that to understand the far-right’s efforts online it is imperative to take several dimensions into account simultaneously. Conclusion: The thesis makes several contributions. First, the thesis makes a conceptual contribution by focusing on the interconnectedness of far-right efforts online. Second, it makes an empirical contribution by exploring the multifaceted grassroots or ‘non-party’ dimensions of far-right mobilisation. Finally, the thesis makes a methodological contribution through its mix of methods, which illustrates how different aspects of the far right, over varying time periods, diversely sized and shaped datasets, and user constellations, can be approached to reveal broader overarching patterns as well as intricate details.
Article
Purpose This column explores the technology trends that the social media video service TikTok has leveraged in its rise in popularity. Design/methodology/approach It reviews how artificial intelligence, short video, and the proactive delivery of content work together to make TikTok successful, and their implications for libraries. Findings Libraries exist as part of a larger information landscape and themselves seek to share relevant resources with users. As such, they can benefit from a nuanced understanding of the technology trends that make TikTok so impactful. Originality/value The column helps information professionals understand the relevant issues.
Technical Report
Full-text available
In this paper, we discuss findings on cross-partisan discussion between Liberals and Conservatives on YouTube's left-leaning and right-leaning news channels. We scraped 9.5 million comments made by 2.87 million users on 65,141 videos from 266 left-leaning and right-leaning news channels. We also describe how we scraped and annotated users as Liberals or Conservatives in order to train our deep learning model. Our Hierarchical Attention Network model, which predicts a user's leaning, achieved a test accuracy of 89.69%, and we find that Conservatives were much more likely to comment on opposing channels. We also find that videos from both left-leaning and right-leaning channels reach Liberals more frequently than Conservatives. Finally, we compare our results with the study conducted in 2020 and find that user behaviour remains the same as it was during the 2020 US presidential election.
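The report names its model family (a Hierarchical Attention Network) and its headline accuracy but not the full architecture; the sketch below is a deliberately simplified single-level attention classifier, not the authors' HAN, shown only to illustrate the attention-pooling idea behind such comment-leaning classifiers.

# Simplified attention-pooling comment classifier (illustrative; not the authors' full HAN).
import torch
import torch.nn as nn

class AttentionClassifier(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)      # one attention score per token
        self.out = nn.Linear(2 * hidden, 2)       # two classes: Liberal vs Conservative

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h, _ = self.gru(self.embed(token_ids))    # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)
        pooled = (weights * h).sum(dim=1)         # attention-weighted comment vector
        return self.out(pooled)                   # logits over the two leanings

model = AttentionClassifier(vocab_size=10_000)
dummy = torch.randint(1, 10_000, (4, 30))         # 4 comments, 30 tokens each
print(model(dummy).shape)                         # torch.Size([4, 2])

A full HAN would add a second attention level over sentences; this sketch keeps only the token-level attention for brevity.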