Search keys

Source publication
Article
Full-text available
There has been much concern that social media, in particular YouTube, may facilitate radicalisation and polarisation of online audiences. This systematic review aimed to determine whether the YouTube recommender system facilitates pathways to problematic content such as extremist or radicalising material. The review conducted a narrative synthesis...

Contexts in source publication

Context 1
... searched Google Scholar, Embase, Web of Science, and PubMed for relevant studies using Boolean operators and search terms (see Table 2), resulting in a database of 1,187 studies. The studies were then systematically filtered in line with the eligibility and exclusion criteria (see below). ...
Context 2
... combination of databases has been shown to perform best at achieving efficient and adequate coverage of studies (Bramer, Rethlefsen, Kleijnen, & Franco, 2017). Each database was searched using a set of Boolean operators and truncations (see Table 2). Studies that were not detected by the search terms but were sent to the researchers by colleagues throughout the investigation were also included. ...
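As a rough illustration of how such a multi-database Boolean search and subsequent de-duplication might be scripted, the sketch below builds a query from placeholder term groups and drops duplicate records from merged exports; the terms, records, and column names are illustrative stand-ins, not the review's actual search strategy (which is given in Table 2 of the source publication).

    # Illustrative sketch (not the review's actual pipeline): building a Boolean
    # query with truncation wildcards, then de-duplicating the merged records
    # exported from several databases before eligibility screening.
    # All terms, titles, and column names are placeholders.
    import pandas as pd

    term_groups = [
        ["YouTube", "recommender*", "recommendation algorithm*"],
        ["radicali*", "extremis*", "polari*", "problematic content"],
    ]
    query = " AND ".join(
        "(" + " OR ".join(f'"{t}"' for t in group) + ")" for group in term_groups
    )
    print(query)  # pasted into each database's advanced-search interface

    # Hypothetical exports from Google Scholar, Embase, Web of Science, and PubMed
    exports = [
        pd.DataFrame({"title": ["Auditing the YouTube recommender", "Rabbit holes online"]}),
        pd.DataFrame({"title": ["Rabbit Holes Online", "Echo chambers revisited"]}),
    ]
    records = pd.concat(exports, ignore_index=True)
    records["title_norm"] = records["title"].str.lower().str.strip()
    deduped = records.drop_duplicates(subset="title_norm")
    print(len(records), "retrieved;", len(deduped), "after de-duplication")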

Similar publications

Article
Full-text available
This special issue contributes to an emerging literature on the role of social media in shaping narratives on migrants and refugees. The issue is organised into two parts. The first part offers analytical and empirical reflections on the dynamics of digital racism, xenophobia and polarisation on discourses by non-migrants, on migrants and refugees....

Citations

... To prevent inappropriate content from harming the platform ecosystem, the critical content moderation stage [5] has emerged to preemptively filter such content. The content moderation workflow on SVPs involves the systematic evaluation of videos to ensure their compliance with legal regulations, platform policies, and social ethics [42,55]. The traditional SVP content moderation paradigm follows the civil law system [50]: Platforms establish a rule-based system grounded in laws and social consensus, while annotators act as judges to first understand the rule system then traverse ...
Preprint
Full-text available
Exponentially growing short video platforms (SVPs) face significant challenges in moderating content detrimental to users' mental health, particularly for minors. The dissemination of such content on SVPs can lead to catastrophic societal consequences. Although substantial efforts have been dedicated to moderating such content, existing methods suffer from critical limitations: (1) Manual review is prone to human bias and incurs high operational costs. (2) Automated methods, though efficient, lack nuanced content understanding, resulting in lower accuracy. (3) Industrial moderation regulations struggle to adapt to rapidly evolving trends due to long update cycles. In this paper, we annotate the first SVP content moderation benchmark with authentic user/reviewer feedback to fill the absence of a benchmark in this field. Then we evaluate various methods on the benchmark to verify the existence of the aforementioned limitations. We further propose our common-law content moderation framework named KuaiMod to address these challenges. KuaiMod consists of three components: training data construction, offline adaptation, and online deployment & refinement. Leveraging a large vision language model (VLM) and Chain-of-Thought (CoT) reasoning, KuaiMod adequately models video toxicity based on sparse user feedback and fosters a dynamic moderation policy with rapid update speed and high accuracy. Offline experiments and a large-scale online A/B test demonstrate the superiority of KuaiMod: KuaiMod achieves the best moderation performance on our benchmark. The deployment of KuaiMod reduces the user reporting rate by 20%, and its application in video recommendation increases both Daily Active Users (DAU) and App Usage Time (AUT) in several Kuaishou scenarios. We have open-sourced our benchmark at https://kuaimod.github.io.
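To make the described VLM-plus-CoT judgment step concrete, here is a minimal hedged sketch of what such a moderation call could look like; the vlm_generate stub, the prompt, and the label set are invented for illustration and are not KuaiMod's actual implementation.

    # Minimal sketch of a VLM + chain-of-thought moderation step in the spirit of
    # the KuaiMod description. vlm_generate() is a stand-in for whatever vision
    # language model the platform actually calls; labels and prompt are invented
    # for illustration, not taken from the paper.
    import json

    def vlm_generate(prompt: str, video_frames: list) -> str:
        """Placeholder for a real VLM call; returns a canned JSON verdict here."""
        return json.dumps({
            "reasoning": "Frames show self-harm imagery; user reports corroborate.",
            "verdict": "remove",
        })

    MODERATION_PROMPT = (
        "You are a content moderator. Think step by step about whether the video "
        "violates policy (harm to minors, self-harm, violence), then answer with "
        'JSON: {"reasoning": ..., "verdict": "keep" | "age_restrict" | "remove"}.'
    )

    def moderate(video_frames: list, user_reports: list) -> str:
        prompt = MODERATION_PROMPT + "\nUser reports: " + "; ".join(user_reports)
        raw = vlm_generate(prompt, video_frames)
        return json.loads(raw)["verdict"]

    print(moderate(video_frames=[], user_reports=["disturbing content"]))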
... One driving factor for the distribution of content on social media platforms is algorithmic curation. This can push users towards extreme and misinforming videos (Bryant, 2020; Hussein et al., 2020; Yesilada & Lewandowsky, 2022) and reinforces the emergence of filter bubbles and echo chambers (Cinelli, De Francisci Morales, et al., 2021; Diaz Ruiz & Nilsson, 2023) where users are "only presented with information that matches with ...
... Contentious topics are especially likely to attract heated conversations, and 'alternative facts' or political conversations attract people with controversial opinions who presumably view content moderation critically. The algorithmic structure of YouTube can further reinforce these patterns (Yesilada & Lewandowsky, 2022). It is assumed that opinion-based homophily is facilitated by certain social media platforms, leading to the formation of groups that exhibit specific hate-based communication (Evolvi, 2019). ...
Preprint
Full-text available
Receiving negative sentiment, offensive comments, or even hate speech is a constant part of the working experience of content creators (CCs) on YouTube - a growing occupational group in the platform economy. This study investigates how socio-structural characteristics such as the age, gender, and race of CCs, but also platform features including the number of subscribers, community strength, and the channel topic, shape differences in the occurrence of these phenomena on that platform. Drawing on a random sample of n=3,695 YouTube channels from German-speaking countries, we conduct a comprehensive analysis combining digital trace data, enhanced with hand-coded variables to include socio-structural characteristics in social media data. Publicly visible negative sentiment, offensive language, and hate speech are detected with machine- and deep-learning methods using N=40,000,000 comments. Contrary to existing studies, our findings indicate that female content creators are confronted with less negative communication. Notably, our analysis reveals that while BIPoC who work as CCs receive significantly more negative sentiment, they are not exposed to more offensive comments or hate speech. Additionally, platform characteristics also play a crucial role, as channels publishing content on conspiracy theories or politics are more frequently subject to negative communication.
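For readers unfamiliar with how such large-scale comment classification is typically done, the sketch below scores comments with a pretrained toxicity classifier via the Hugging Face pipeline API; the model name (unitary/toxic-bert) and the 0.8 threshold are assumptions chosen for illustration, not the preprint's actual models or settings.

    # A common approach (not necessarily the authors' exact models): score each
    # comment with a pretrained toxicity classifier and flag comments above a
    # chosen threshold. Model name and threshold are assumptions.
    from transformers import pipeline

    classifier = pipeline("text-classification", model="unitary/toxic-bert")

    comments = [
        "Great video, thanks for the clear explanation!",
        "Nobody wants to hear from people like you.",
    ]

    for comment, result in zip(comments, classifier(comments)):
        flagged = result["score"] > 0.8  # threshold chosen for illustration
        print(f"{flagged!s:>5}  {result['label']:<10} {comment}")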
... During the 2016 U.S. Presidential election [64], this resulted in the amplification of politically polarizing content, deepening divisions among voters [65,66]. • Similar effects have been observed on YouTube, where the recommendation algorithm often promotes extreme and controversial videos [67]. Studies have found that users who start with relatively neutral political content can quickly be led to more radical viewpoints through the platform's recommendations [68]. ...
Article
Full-text available
The integrity of global elections is increasingly under threat from artificial intelligence (AI) technologies. As AI continues to permeate various aspects of society, its influence on political processes and elections has become a critical area of concern. This is because AI language models are far from neutral or objective; they inherit biases from their training data and the individuals who design and utilize them, which can sway voter decisions and affect global elections and democracy. In this research paper, we explore how AI can directly impact election outcomes through various techniques. These include the use of generative AI for disseminating false political information, favoring certain parties over others, and creating fake narratives, content, images, videos, and voice clones to undermine opposition. We highlight how AI threats can influence voter behavior and election outcomes, focusing on critical areas, including political polarization, deepfakes, disinformation, propaganda, and biased campaigns. In response to these challenges, we propose a Blockchain-based Deepfake Authenticity Verification Framework (B-DAVF) designed to detect and authenticate deepfake content in real time. It leverages the transparency of blockchain technology to reinforce electoral integrity. Finally, we also propose comprehensive countermeasures, including enhanced legislation, technological solutions, and public education initiatives, to mitigate the risks associated with AI in electoral contexts, proactively safeguard democracy, and promote fair elections.
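As a toy illustration of the general idea behind anchoring media provenance on a ledger (a simplification, not the proposed B-DAVF design), the sketch below registers a content hash in an append-only, hash-chained list at publication time and later verifies a file by recomputing its hash; all names and data are placeholders.

    # Toy illustration of hash-anchored media provenance: register a content hash
    # in an append-only ledger, then verify a file later by recomputing its hash.
    # This is a simplification, not the B-DAVF architecture from the paper.
    import hashlib, json, time

    ledger = []  # stand-in for a blockchain: a list of hash-chained blocks

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def register(media_bytes: bytes, publisher: str) -> None:
        prev_hash = ledger[-1]["block_hash"] if ledger else "0" * 64
        block = {"media_hash": sha256(media_bytes), "publisher": publisher,
                 "timestamp": time.time(), "prev_hash": prev_hash}
        block["block_hash"] = sha256(json.dumps(block, sort_keys=True).encode())
        ledger.append(block)

    def verify(media_bytes: bytes) -> bool:
        return any(b["media_hash"] == sha256(media_bytes) for b in ledger)

    original = b"official campaign video bytes"
    register(original, publisher="election.commission.example")
    print(verify(original))                      # True: matches the registered hash
    print(verify(b"tampered or synthetic copy")) # False: no provenance record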
... 43,44 However, the YouTube recommender system may also lead to problematic content, such as violent extremism or misleading information, which may be detrimental. 45 With rapidly increased use of YouTube, PUY warrants more investigation and practical solutions. From the present finding, for those with PUSM or PUY, close monitoring of psychological states and sleep quality, as well as providing self- 32 and fulfill the psychological and interpersonal satisfaction. ...
Article
Full-text available
Objective Problematic use of the internet has been linked to emotional and sleep concerns, although relationships with specific types of internet use are less well understood. YouTube, as an online platform with video-watching features, may attract individuals to spend considerable time on it; for those experiencing problematic use, this may be termed problematic use of social media (PUSM) or problematic use of YouTube (PUY). Therefore, the present study investigated relationships between PUSM/PUY, psychological distress, and insomnia among Iranian adolescents. Methods An online survey comprising the Bergen Social Media Addiction Scale, the YouTube Addiction Scale, the Depression, Anxiety, Stress Scale-21, and the Insomnia Severity Index recruited 1352 participants. Results Hayes’ PROCESS macro showed significant correlations between the two types of problematic use and insomnia, with psychological distress as a mediator (unstandardized coefficients = 0.096 and 0.100). Conclusion The findings imply that psychological distress mediates the relationships of PUSM and PUY with insomnia.
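For context on the reported mediation analysis, the sketch below fits an analogous simple mediation (predictor -> mediator -> outcome) with two OLS regressions on simulated data; the variable names and effect sizes are invented and only approximate what Hayes' PROCESS Model 4 estimates, not the study's actual data.

    # Hedged sketch of a simple mediation analysis analogous to Hayes' PROCESS
    # Model 4, fitted with two OLS regressions on simulated data. Variables and
    # effect sizes are invented for illustration.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1352
    puy = rng.normal(size=n)                      # problematic use of YouTube
    distress = 0.4 * puy + rng.normal(size=n)     # psychological distress (mediator)
    insomnia = 0.3 * distress + 0.1 * puy + rng.normal(size=n)

    # Path a: predictor -> mediator
    a = sm.OLS(distress, sm.add_constant(puy)).fit().params[1]
    # Paths b and c': mediator and predictor -> outcome
    model_b = sm.OLS(insomnia, sm.add_constant(np.column_stack([puy, distress]))).fit()
    c_prime, b = model_b.params[1], model_b.params[2]

    print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")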
... This is important because nowadays, people are skeptical of conventional media outlets and increasingly consume content and news on social media and online platforms (Ludwig et al., 2023). YouTube is becoming increasingly popular, particularly among younger populations, and if the streaming platform is radicalizing users, this could push peripheral beliefs, such as white supremacy, further into the mainstream (Yesilada & Lewandowsky, 2022). As demonstrated by the Facebook whistleblowing case, the mechanisms of social media platforms are neither neutral nor fair, and the core product mechanics, including recommendations, optimizing for engagement, and virality, are key to why bias and hate proliferate on platforms. ...
... Social media platforms are facilitating the spread of divisive information online at a rapid rate and potentially fueling political instability and societal extremism (Burton, 2023). The algorithms are exceptionally good at identifying what they think would keep users viewing content on the service, even if users do not yet know what they are interested in or need (Yesilada & Lewandowsky, 2022). Algorithms written by data-rich platforms such as TikTok and YouTube can actively create a loop that features specific content in pursuit of user engagement. ...
Article
Full-text available
Algorithmic radicalization is the idea that algorithms used by social media platforms push people down digital “rabbit holes” by framing personal online activity. Algorithms control what people see and when they see it and learn from their past activities. As such, people gradually and subconsciously adopt the ideas presented to them by the rabbit hole down which they have been pushed. In this study, TikTok’s role in fostering radicalized ideology is examined to offer a critical analysis of the state of radicalism and extremism on platforms. This study conducted an algorithm audit of the role of radicalizing information in social media by examining how TikTok’s algorithms are being used to radicalize, polarize, and spread extremism and societal instability. The results revealed that the pathways through which users access far-right content are manifold and that a large portion of the content can be ascribed to platform recommendations through radicalization pipelines. Algorithms are not simple tools that offer personalized services but rather contributors to radicalism, societal violence, and polarization. Such personalization processes have been instrumental in how artificial intelligence (AI) has been deployed, designed, and used, and in the detrimental outcomes that it has generated. Thus, the generation and adoption of extreme content on TikTok are, by and large, not only a reflection of user inputs and interactions with the platform but also of the platform’s ability to slot users into specific categories and reinforce their ideas.
... Children can become engrossed in the enjoyment they derive, leading to excessive use of smart devices. Young children without proper cognitive judgment may also unintentionally be exposed to violent and inappropriate visual content through what they watch or due to YouTube's algorithm [38], potentially increasing the likelihood of emotional/behavioral problems later on. Also, higher usage frequency was significantly associated with increased emotional/behavioral problems. ...
Article
Full-text available
Background YouTube is a widely used video sharing and social networking platform among children and adolescents. However, research on YouTube usage among this population remains scarce. Specifically, studies on factors that influence children and adolescents' usage are clinically significant but largely lacking. Additionally, few studies have examined the association between usage and emotional/behavioral problems, which is fundamental to smartphone research. Therefore, this study explored the relationship between early childhood temperament, subsequent YouTube usage patterns, and emotional/behavioral problems. Methods The Kids Cohort for Understanding Internet Addiction Risk Factors in Early Childhood (K-CURE) is the first long-term prospective cohort study in Korea aimed at understanding the long-term effects of media exposure on young children. The study included 195 children aged 8–11 years enrolled in the K-CURE study. Caregivers, predominantly mothers, who voluntarily participated during their visits to community centers for children’s mental health in Korea’s major cities, completed a detailed self-administered survey. Childhood temperament was measured in 2018 when the children were 5–8 years old. Subsequent YouTube usage patterns and emotional/behavioral problems were assessed in 2021. Data were analyzed using frequency analysis, correlation analysis, and multiple linear regression. Results The study found that 21.0% of children started using YouTube before age 4, with the most common onset age being 8–9 years (30.3%). These children used YouTube on average 4.8 days per week for 68.5 min per day. Early childhood persistence was negatively associated with the subsequent YouTube usage duration, and the age at first YouTube use was negatively correlated with subsequent usage frequency. Furthermore, a younger age at first YouTube use and higher usage frequency were significantly associated with increased emotional/behavioral problems. Conclusions In the YouTube environment, where content is automatically recommended based on user preferences, traits related to usage patterns may be associated with persistence, which is linked to self-regulation. Considering the current trend where children frequently use smartphones and content for very short durations, our findings highlight the importance of self-regulation in the media usage of children who are still developing. Additionally, our results provide fundamental information for future YouTube studies and illustrate similarities and differences between smartphone and YouTube research.
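As a minimal sketch of the kind of correlation and multiple linear regression analysis reported here, the code below runs both on simulated data; the variable names mirror the abstract, but the data and coefficients are invented for illustration and are not the K-CURE results.

    # Minimal sketch of correlation analysis plus multiple linear regression on
    # simulated data. Variable names mirror the abstract; values are invented.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 195
    df = pd.DataFrame({
        "age_first_use": rng.integers(3, 10, size=n),
        "days_per_week": rng.integers(1, 8, size=n),
        "persistence": rng.normal(size=n),          # early-childhood temperament
    })
    df["problems"] = (-0.3 * df["age_first_use"] + 0.4 * df["days_per_week"]
                      - 0.2 * df["persistence"] + rng.normal(size=n))

    print(df.corr().round(2))                       # correlation analysis
    model = smf.ols("problems ~ age_first_use + days_per_week + persistence",
                    data=df).fit()
    print(model.params.round(3))                    # multiple linear regression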
... [2][3][4][5][6] The algorithmic design of social media platforms could also contribute to the dissemination of problematic content. 7 Additionally, the absence of credible and accessible health information, often referred to as information voids, can promote misinformation. 8 Factors such as consumers' political views, emotions, age, and ability to process abstract information can also affect their susceptibility to misinformation. ...
Article
Full-text available
The COVID-19 pandemic has highlighted how infodemics (defined as an overabundance of information, including misinformation and disinformation) pose a threat to public health and could hinder individuals from making informed health decisions. Although public health authorities and other stakeholders have implemented measures for managing infodemics, existing frameworks for infodemic management have been primarily focused on responding to acute health emergencies rather than integrated in routine service delivery. We review the evidence and propose a framework for infodemic management that encompasses upstream strategies and provides guidance on identifying different interventions, informed by the four levels of prevention in public health: primary, secondary, tertiary, and primordial prevention. On the basis of a narrative review of 54 documents (peer-reviewed and grey literature published from 1961 to 2023), we present examples of interventions that belong to each level of prevention. Adopting this framework requires proactive prevention and response through managing information ecosystems, beyond reacting to misinformation or disinformation.
... Similarly, a former YouTube engineer has claimed that the platform's recommendation algorithm promotes conspiracy theories (Turton, 2018). This has been corroborated by other studies; for example, in a meta-analysis on the YouTube recommender system, Yesilada and Lewandowsky (2022) reported that 14 out of 23 studies found empirical evidence supporting the notion that the system could lead users towards problematic content (seven found mixed results, and two did not find evidence of such content pathways). ...
... In a systematic review to determine whether the YouTube recommender system facilitates pathways to problematic content such as extremist or radicalizing material, Yesilada, M. and Lewandowsky, S. (2021) found that most of the 23 included studies implicated the YouTube recommender system in facilitating pathways towards problematic content (i.e., conspiratorial content, anti-vaccination content, pseudoscientific content, content unsafe for children, Incel-related content, extremist content, radicalizing content, and racist content). ...
Article
Full-text available
On 18th May 2022, in an opinion piece for The New York Times, columnist Michelle Goldberg declared “the death of #MeToo” (Goldberg, 2022). The papers in this panel examine this claim and wrestle with its potential implications. Drawing on case studies and data from the United States, Australia, the United Kingdom, and Ireland, we evaluate the current state of play in the online push-and-pull between feminist speech about gender-based violence and its attendant misogynistic backlashes. Using a range of different qualitative methods, these papers unpack the orientations towards visibility and transparency that urge survivors into ever-increasing degrees of exposure online; the way that digital media are reconfiguring the gender and racial politics of doubt and believability; the algorithmic pathways through which boys and men are ushered towards increasingly more radical “manosphere” content and communities; and how the problem of “believability” as it relates to testimonies of assault is being complicated and compounded online by networked misogynoir. The result is an ambivalent portrait of the afterlife of #MeToo on the internet, and some important questions for networked feminist activism going forward.
... Furthermore, past audit studies have identified filter bubbles in YouTube recommendation systems that stem from users' watch history, particularly in extremist content and misinformation (O'Callaghan et al. 2015;Hussein et al. 2020;Röchert et al. 2020;Papadamou et al. 2021;Yesilada and Lewandowsky 2022). For example, to examine the effects of watch history, a study by Papadamou et al. (2021) that trained virtual agents with YouTube videos found that YouTube starts to generate more personalized recommendations on pseudoscientific content after a user watches 22 pseudoscientific YouTube videos. ...
... While discussing the challenges of using virtual agents for algorithm auditing, Ulloa et al. (2022) confirmed the validity of such methods with multiple experimental designs. Yesilada and Lewandowsky (2022) argued that multiple platform data were required to fully reconstruct such a personalization process. Therefore, our study uses YouTube for effective profile training, and then performs an active information search on Google Search. ...
Article
Full-text available
When tourists search information online, personalization algorithms tend to contextually filter the vast amount of information and provide them with a subset of information to increase relevance and avoid overload. However, limited attention is paid to the dark side of these algorithms. An influential critique of personalization algorithms is the filter bubble effect, a hypothesis that people are isolated in their own information bubble based on their prior online activities, resulting in narrowed perspectives and less discovery of new experiences. An important question, therefore, is whether algorithmic filtering leads to filter bubbles. We empirically explore this question in an online tourist information search with the three-dimensional ‘cascade’ tourist decision-making model in a two-step experiment. We train two virtual agents with polarized YouTube videos and manipulate them to conduct travel information searches from both off-site and on-site geolocations in Google Search. The first three pages of search results are collected and analyzed with two mathematical metrics and follow-up content analysis. The results do not show significant differences between the two virtual agents with polarized prior training. However, when search geolocations change from off-site to on-site, 39–69% of the search results vary. Additionally, this difference varies between search terms. In summary, our data show that while algorithmic filtering is robust in retrieving relevant search results, it does not necessarily show evidence of filter bubbles. This study provides theoretical and methodological implications to guide future research on filter bubbles and contextual personalization in online tourist information searches. Marketing implications are discussed.
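To illustrate how the two virtual agents' result lists might be compared quantitatively, the sketch below computes a Jaccard overlap and a simple rank-weighted overlap between two placeholder result lists; these are common audit metrics, not necessarily the two metrics used in the study, and the URLs are invented.

    # Hedged sketch of comparing two agents' search result lists, as an audit
    # like the one described might do. Jaccard overlap and a rank-weighted
    # overlap are shown; these are common choices, not necessarily the study's
    # two metrics. URLs are placeholders.
    def jaccard(a: list, b: list) -> float:
        sa, sb = set(a), set(b)
        return len(sa & sb) / len(sa | sb)

    def rank_weighted_overlap(a: list, b: list) -> float:
        """Weight agreement at each depth d by 1/d and average (RBO-like)."""
        depths = range(1, min(len(a), len(b)) + 1)
        return sum(jaccard(a[:d], b[:d]) / d for d in depths) / sum(1 / d for d in depths)

    agent_left = ["site1.example", "site2.example", "site3.example", "site4.example"]
    agent_right = ["site2.example", "site1.example", "site5.example", "site4.example"]

    print(f"Jaccard overlap:       {jaccard(agent_left, agent_right):.2f}")
    print(f"Rank-weighted overlap: {rank_weighted_overlap(agent_left, agent_right):.2f}")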