Source publication
Online extremism remains a persistent problem despite the best efforts of governments, tech companies, and civil society. Digital technologies can induce group polarization that promotes extremism and can substantially change how extremism manifests (e.g., by creating new forms of extremism, types of threats, or radicalization approaches). Current methods to counter e...
Citations
... Despite the growing use of AI in healthcare, there is still a need to explore how consent is obtained from patients, especially from a broader perspective that includes social and technical factors [13][14][15]. Previous studies have looked at topics like the quality of consent [12], informed consent in health record research [16], securing data through techniques like pseudonymization [17], and the ethical challenges of using patient data [18]. ...
... We utilized a sociotechnical perspective to analyze research regarding patient consent for the secondary use of health data in AI models. Specifically, we summarized and classified the findings based on the Structural, Human, Physical system, and Task-related aspects discussed in this research stream [13,14]. The predefined framework not only guided our understanding of the subject matter but also facilitated transparency in our approach. ...
Abstract
Background
The secondary use of health data for training Artificial Intelligence (AI) models holds immense potential for advancing medical research and healthcare delivery. However, ensuring patient consent for such use is paramount to upholding ethical standards and data privacy. Patient informed consent means that patients are fully informed about how their data will be collected, used, and protected, and that they voluntarily agree to their data being used in AI models. In addition to formal consent frameworks, establishing a social license is critical to foster public trust and societal acceptance for the secondary use of health data in AI systems. This study examines patient consent practices in this domain.
Method
In this scoping review, we searched Web of Science, PubMed, and Scopus. We included studies in English that addressed the core issues of interest, namely, privacy, security, legal, and ethical issues related to the secondary use of health data in AI models. Articles not addressing the core issues, as well as systematic reviews, meta-analyses, books, letters, conference abstracts, and study protocols were excluded. Two authors independently screened titles, abstracts, and full texts, resolving disagreements with a third author. Data was extracted using a data extraction form.
Results
After screening 774 articles, a total of 38 articles were ultimately included in the review. Across these studies, a total of 178 barriers and 193 facilitators were identified. We consolidated similar codes and extracted 65 barriers and 101 facilitators, which we then categorized into four themes: “Structure,” “People,” “Physical system,” and “Task.” We identified notable emphasis on “Legal and Ethical Challenges” and “Interoperability and Data Governance.” Key barriers included concerns over privacy and security breaches, inadequacies in informed consent processes, and unauthorized data sharing. Critical facilitators included enhancing patient consent procedures, improving data privacy through anonymization, and promoting ethical standards for data usage.
Conclusion
Our study underscores the complexity of patient consent for the secondary use of health data in AI models, highlighting significant barriers and facilitators within legal, ethical, and technological domains. We recommend the development of specific guidelines and actionable strategies for policymakers, practitioners, and researchers to improve informed consent, ensuring privacy, trust, and ethical use of data, thereby facilitating the responsible advancement of AI in healthcare.
... As AI becomes increasingly personalized, it may repeatedly confirm and reinforce individuals' cultural beliefs (e.g., Brinkmann et al., 2023;Greene et al., 2023). In doing so, AI could create personalized echo chambers that, despite maintaining some level of belief heterogeneity, promote a multicultural state more accurately characterized by polarization rather than true diversity (e.g., by fueling extremism; Greene et al., 2023;Qureshi et al., 2020;Risius et al., 2024). As a result, researchers have recently emphasized the increasing urgency of understanding AI's impact on cultural evolution and have stressed the need for further research (e.g., Alavi et al., 2024;Benbya et al., 2021;Brinkmann et al., 2023;Kane et al., 2021). ...
... With its increasing integration into our daily lives, AI simultaneously influences cultural evolution along several dimensions: AI for text and image generation may transmit a representation of culture that is biased toward the beliefs common in rich Western countries (e.g., Atari et al., 2023;Cao et al., 2023). AI recommendation systems curate our digital lives by selecting the news we see, the music we listen to, and the people we connect with (e.g., Greene et al., 2023;Qureshi et al., 2020;Risius et al., 2024). In organizational contexts, predictive AI can reinforce gender bias by hiring candidates based on historical hiring data (e.g., Marabelli et al., 2021;Teodorescu et al., 2021). ...
... During its training, the ML model learned by AI becomes grounded in the cultural traits embedded in the training data. This principle applies not only to text and image generation, but also to AI that curates rather than generates content: For example, Facebook's news feed has been shown to influence users' political views by promoting (dis)similar content based on cultural traits evident in users' past consumption behavior (e.g., Greene et al., 2023;Qureshi et al., 2020;Risius et al., 2024). Similarly, AI trained on historical practices has repeatedly been found to reinforce human biases (e.g., in hiring; Marabelli et al., 2021;Teodorescu et al., 2021). ...
Culture is fundamental to our society, shaping the traditions, ethics, and laws that guide people's beliefs and behaviors. At the same time, culture is also shaped by people-it evolves as people interact and collectively select, modify, and transmit the beliefs they deem desirable. As artificial intelligence (AI) becomes more integrated into our lives, it plays an increasing role in how cultural beliefs are (re)shaped and promoted. Using a series of agent-based simulations, we analyze how different ways of integrating AI into society (e.g., national vs. global AI) impact cultural evolution, thereby shaping cultural diversity. We find that less globalized AI can help promote diversity in the short run, but risks eliminating diversity in the long run. This becomes more pronounced the less humans and AI are grounded in each other's beliefs. Our findings help researchers revisit cultural evolution in the presence of AI and assist policymakers with AI governance.
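The simulation design described in this abstract can be made concrete with a minimal agent-based sketch. The Python code below is not the authors' model; it assumes a deliberately simple setup in which agents hold a scalar belief, an AI system is "trained" on the mean belief of its user base (per country for a national AI, across all agents for a global AI), and agents then partially adopt the AI's output. All parameter names (n_agents, ai_weight, n_countries) and values are illustrative.

```python
import numpy as np

# Minimal illustrative sketch of an agent-based cultural-evolution loop
# (not the authors' model): agents hold a scalar belief, an AI is "trained"
# on the mean belief of its user base, and agents partially adopt its output.
# "national" averages within a country; "global" averages across all agents.
rng = np.random.default_rng(42)

def simulate(mode="global", n_agents=300, n_countries=3,
             ai_weight=0.3, noise=0.05, steps=200):
    country = rng.integers(n_countries, size=n_agents)          # agent -> country
    beliefs = rng.normal(loc=country.astype(float), scale=0.5)  # initial diversity

    diversity = []
    for _ in range(steps):
        if mode == "global":
            ai_belief = np.full(n_agents, beliefs.mean())        # one global AI
        else:  # "national"
            means = np.array([beliefs[country == c].mean()
                              for c in range(n_countries)])
            ai_belief = means[country]                           # one AI per country

        # Agents move toward the AI's representation of culture, plus noise
        beliefs = (1 - ai_weight) * beliefs + ai_weight * ai_belief
        beliefs += rng.normal(scale=noise, size=n_agents)

        diversity.append(beliefs.std())                          # track cultural diversity
    return diversity

for mode in ("national", "global"):
    d = simulate(mode=mode)
    print(f"{mode:8s}  diversity after {len(d)} steps: {d[-1]:.3f}")
```

In this toy setup the national AI pulls each country toward its own mean, preserving between-country differences, while the global AI pulls all agents toward a single mean, which echoes the short-run versus long-run contrast the abstract describes; the actual paper's simulations are, of course, richer than this sketch.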
... This includes current theoretical advancements in social media-induced polarisation (Weismueller et al. 2023), echo chambers (Durani et al. 2023), and fake news dissemination (Wang et al. 2022), all of which inform the crisis management literature based on data from "open" social media platforms and features. Along the same lines, existing review articles in IS do not differentiate between "open" and "closed" channels when synthesising knowledge about critical social media use (e.g., Eismann et al. 2021; Risius et al. 2023). ...
Social media play a crucial role in navigating crises such as pandemics, natural disasters, or other public emergencies. In this context, existing research has primarily focused on examining public social media (e.g., Twitter/X), which offered convenient and ethically acceptable access to digital trace data for analysis. So-called "dark social," that is, private social media such as Telegram, WhatsApp, or Facebook Groups are equally important tools for crisis management but remain underexplored in the literature. This paper addresses this problem by investigating the role of "dark social" for crisis management and highlighting the importance of distinguishing between private and public social media platforms and features. Using a scoping review methodology, we develop six themes that show how existing works across disciplines have so far examined "dark social" in the crisis management domain. Furthermore, we derive an agenda for future research by illuminating "dark social" from an individual and organisational perspective.
... Social media platforms use algorithms to control user attention, a key resource in our increasingly digital world (Zeng and Kaye 2022). Much has been written about how these algorithms are designed to maximize user engagement by promoting controversial or provocative content on the fringes of mainstream discourse (Zuckerberg 2021), and about whether these algorithms induce societal polarization (Bakshy et al. 2015; Guess et al. 2023; Robertson et al. 2023), promote online extremism (Risius et al. 2024), or create filter bubbles and echo chambers (Bruns 2021). Meanwhile, the opposite use of algorithms to demote, hide, and reduce the visibility of content is largely disregarded (Gillespie 2022a). ...
... Thus, our study highlights shortcomings in analysing platforms as monoliths rather than complex sociotechnical systems. In doing so, we answer calls by Risius et al. (2023) to view such online behavior through a sociotechnical lens. ...
This study explores the emergence of incivility in online communities, challenging the traditional perspective that attributes incivility to individual elements of sociotechnical systems. We argue that this narrow focus fails to recognize the complex interactions between these elements, leading to a rudimentary understanding of how incivility originates and evolves. To address this gap, our research employs fuzzy set Qualitative Comparative Analysis (fsQCA), examining approximately 4.3 million posts from 100 diverse online communities on Reddit. Through this analysis, we identified five distinct paths that converged into two primary community configurations: close-knit and scattered communities. Each configuration exhibits unique affordances whose activation fosters incivility in different ways. Based on these findings, we expand the understanding of incivility to include subtle, indirect behaviors beyond overt forms such as trolling or hate speech and show how the interplay of multiple community elements produces affordances, avoiding the narrow view of individual affordances and shedding light on variations of social systems. Finally, we demonstrate that within the same digital platform, different social systems can impact user behaviors, including incivility.
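For readers unfamiliar with fuzzy set Qualitative Comparative Analysis, its core preprocessing step is calibration: raw condition measures (e.g., community activity or size) are translated into fuzzy-set membership scores between 0 and 1 using three qualitative anchors. The snippet below is a generic sketch of the standard direct calibration method, not the authors' analysis pipeline; the measure, anchor values, and variable names are placeholders chosen for illustration.

```python
import numpy as np

def calibrate(raw, full_non, crossover, full_in):
    """Direct calibration: map raw scores onto fuzzy membership in [0, 1].

    Anchors: full_non  -> membership ~0.05 (log-odds -3)
             crossover -> membership  0.50 (log-odds  0)
             full_in   -> membership ~0.95 (log-odds +3)
    Log-odds are interpolated linearly on each side of the crossover,
    then passed through a logistic transform.
    """
    raw = np.asarray(raw, dtype=float)
    log_odds = np.where(
        raw >= crossover,
        3.0 * (raw - crossover) / (full_in - crossover),
        3.0 * (raw - crossover) / (crossover - full_non),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

# Hypothetical community-level measure, e.g. posts per active user
posts_per_user = [0.4, 1.2, 2.5, 4.0, 7.5]
membership = calibrate(posts_per_user, full_non=0.5, crossover=2.0, full_in=6.0)
print(np.round(membership, 2))  # fuzzy membership in "high-activity community"
```

Once each condition is calibrated this way, fsQCA software searches for configurations of conditions that are consistently associated with the outcome (here, incivility), which is how the five paths and two community configurations in the abstract would be derived.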
... In future steps, we will extend our scoping review and undertake a systematic literature review to understand IHS and discover potential avenues to address it. As we continue our research-in-progress, we aim to apply the socio-technical perspective (Bostrom and Heinen, 1977; Risius et al., 2023) to review the literature. This not only enables the sourcing of more relevant studies from a broader range of disciplines, but also assists in better understanding the wider impacts of IHS. ...
The alarming growth of hate speech on social media platforms has caused serious consequences for individuals, online communities, and society. While the majority of related work focuses on explicit hate speech, research on implicit hate speech (IHS) is still in its infancy. IHS is a more subtle form of hate that poses considerable challenges for victims, platforms, regulators, and society. To address this gap, we propose a scoping review of IHS to investigate its current state of research, with an emphasis on summarizing its phenotypes, detection challenges, and state-of-the-art detection methods. Our work aims to build a foundation for future studies and pave the way for in-depth research on the characteristics of IHS, as well as potential agendas for countering it.
... In contemporary discourse, a discernible surge in socio-cultural fragmentation, political schism and right-wing hate speech has emerged, exacerbated by the proliferation of extremist ideologies and discriminatory rhetoric (Das & Schroeder, 2021;Ghasiya & Sasahara, 2022;Hameleers, 2022;Risius et al., 2024). This phenomenon is starkly evident in online harassment, the dissemination of misinformation and the normalisation of confrontational dialogue, indicating a pressing demand for the cultivation of inclusive digital environments. ...
... cism across political affiliations to combat disinformation effectively. Future research could investigate social media metrics and user perceptions to enhance the understanding of engagement dynamics and realism in online environments. Risius et al. (2024) conducted a sociotechnical investigation into online extremism, arguing for the essential integration of societal and technological perspectives in crafting more effective regulatory policies. Through a systematic review of 222 articles, they aim to map the current research landscape, identify gaps, and propose future research trajectories ...
... Various types of research can be done from a safe distance, for instance, by analyzing metadata (e.g., timestamps, geolocation data) from chat platforms (e.g., Al-Saggaf (2016)). Alternatively, secondary data (e.g., literature) can be drawn on to produce significant insights (e.g., Risius et al. (2023), Aldera et al. (2021)). ...
Scholars studying online extremism and terrorism face major challenges, including finding safe access to hostile environments where members evade law enforcement. Protective measures, such as research ethics, often overlook the safety of investigators. Investigators, including Open-Source Intelligence (OSINT) analysts, encounter emotional harm, abuse from ideologues, consent issues, and legal challenges in data collection. Despite rising awareness of these challenges, scholars lack guidance on starting and navigating research in these areas. This paper identifies challenges and offers strategies for safely, ethically, and legally researching in this environment.
The year 2020 has been a testing ground for progress towards the cohesive and sustainable future envisaged through the advancement of Information and Communication Technologies (ICTs) (UN ECOSOC 2021). In a time of uncertainty, helplessness, and growing frustration, we, as a society, found that ICTs can be a mixed blessing. We witnessed the power of ICTs in connecting people across the globe in their collective trauma and desperation (Garfin 2020), forming online mutual aid groups to offer help and support to those in need (Knearem et al. 2021), building solidarity, and increasing the outreach of movements for social justice (Frankfurt 2020). However, these positive trends were marred by the increase in information chaos (Forum on Information and Democracy 2021), the formation of echo chambers (Boutyline and Willer 2017), and the consolidation of extreme views and ideologies (Zeller 2021). These polarizing forces threaten the development-oriented nature of the information society and deteriorate social cohesion, which is composed of trust, a sense of belonging, and participation in community life (Chan et al. 2006). Social cohesion is the glue that holds a community together and is necessary for collaborative problem solving (Friedkin 2004).
Purpose
A large part of the misinformation, fake news, and propaganda spread on social media originates from content disseminated via online social network platforms, such as X (formerly Twitter) and Facebook. The control and filtering of digital media pose significant challenges and threats to online social networking. This paper aims to understand how propaganda infiltrates news articles, which is critical for fully grasping its impact on daily life.
Design/methodology/approach
This study introduces a pre-trained language model framework, called ProST, to detect propaganda in text-based news articles. ProST addresses two tasks: identifying propaganda spans and classifying propaganda techniques. For span identification, we built a model that combines a pre-trained RoBERTa model with a long short-term memory (LSTM) layer and begin-inside-outside-end (BIOE) tagging to detect propaganda spans. The technique classification model uses contextual features and a RoBERTa-based approach. This study, conducted on the SemEval-2020 dataset (comprising 536 news articles), demonstrates performance comparable to state-of-the-art methods.
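As a rough illustration of the span-identification component described above, the sketch below stacks a bidirectional LSTM and a token-level BIOE classifier on top of a pre-trained RoBERTa encoder. It is a generic reconstruction inferred from the abstract, not the authors' released ProST code; the model checkpoint, hyperparameters, example sentence, and class names are assumptions made for illustration.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizerFast

# Generic sketch of a RoBERTa + BiLSTM token tagger with BIOE labels for
# propaganda span identification (illustrative; not the authors' ProST code).
BIOE_LABELS = ["B", "I", "O", "E"]

class SpanTagger(nn.Module):
    def __init__(self, model_name="roberta-base", lstm_hidden=256,
                 num_labels=len(BIOE_LABELS)):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(model_name)
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(hidden)     # contextualize token embeddings
        return self.classifier(lstm_out)    # per-token BIOE logits

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = SpanTagger()

batch = tokenizer(["Vote for us, or the country collapses tomorrow."],
                  return_tensors="pt", truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
pred = logits.argmax(dim=-1)                # one BIOE label per token
print([BIOE_LABELS[i] for i in pred[0].tolist()])
```

In a real pipeline, the predicted BIOE sequence would be decoded into character-level propaganda spans, which are then passed to the separate technique-classification model mentioned in the abstract.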
Findings
The results indicate that the ProST model is highly effective at detecting propaganda in text-based news articles: it accurately identifies propaganda spans and classifies techniques with high precision, benefiting from sentence- and span-level feature pruning.
Originality/value
The ProST model offers a novel approach to identifying propaganda in online news articles drawn from diverse webs of information. To the best of our knowledge, this is the first framework capable of classifying both propaganda spans and techniques in textual news. Accordingly, ProST represents a significant advancement in the field of propaganda detection.