Article

Mitigating Viewer Impact From Disturbing Imagery Using AI Filters: A User-Study

Taylor & Francis
International Journal of Human-Computer Interaction
... The overall aim: supporting investigators in their work while at the same time limiting full exposure to gruesome digital imagery. Sarridis et al. (2024) thus devised a study design that "explores the capability of Artificial Intelligence (AI)-based image filters to potentially mitigate the emotional impact of viewing such disturbing content" (p. 1). In other words: gruesome images showing visible injury or harm were detected and then 'alienated' to varying degrees by applying different filters. ...
Article
This paper deals with those working on the digital frontline, namely journalists, researchers and investigators who view, evaluate, and potentially use digital content such as eyewitness media for their reporting. Viewing such content often means being exposed to gruesome or disturbing material of all types. This can take its toll on the mental wellbeing of investigators. The paper outlines existing research in the domain and provides tips and advice on how so-called vicarious or secondary trauma caused by working with user-generated content can be avoided, or at least kept to a minimum. It also points to the potential harm that can be done. Another aim is to give the topic more prominence and encourage further research in this field.
... Disturbing Image Detection (DID) refers to the task of detecting content in images that can cause trauma to viewers [1,2]. It may include images that depict violence, pornography, animal cruelty, or disasters. ...
Preprint
In this paper we deal with the task of Disturbing Image Detection (DID), exploiting knowledge encoded in Large Multimodal Models (LMMs). Specifically, we propose to exploit LMM knowledge in a two-fold manner: first by extracting generic semantic descriptions, and second by extracting elicited emotions. Subsequently, we use CLIP's text encoder to obtain the text embeddings of both the generic semantic descriptions and the LMM-elicited emotions. Finally, we use these text embeddings along with the corresponding CLIP image embeddings to perform the DID task. The proposed method significantly improves the baseline classification accuracy, achieving state-of-the-art performance on the augmented Disturbing Image Detection dataset.
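The pipeline described in this abstract can be sketched schematically. In the sketch below, random unit vectors stand in for the CLIP image embeddings and the CLIP text embeddings of the LMM-derived descriptions and emotions, and the linear classification head is untrained; all variable names and the label convention are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 512  # typical CLIP (ViT-B/32) embedding size

def fake_clip_embedding(n):
    """Random unit vectors standing in for real CLIP encoder outputs."""
    v = rng.normal(size=(n, DIM))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n_images = 8
image_emb = fake_clip_embedding(n_images)     # CLIP image embeddings
semantic_emb = fake_clip_embedding(n_images)  # text emb. of generic semantic descriptions
emotion_emb = fake_clip_embedding(n_images)   # text emb. of LMM-elicited emotions

# Fuse the three views into one feature vector per image, then score each
# image with a (here untrained) linear head: "safe" vs "disturbing".
features = np.concatenate([image_emb, semantic_emb, emotion_emb], axis=1)
W = rng.normal(size=(features.shape[1], 2)) * 0.01
logits = features @ W
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
pred = probs.argmax(axis=1)  # 1 = disturbing (label convention is ours)
print(pred.shape)  # (8,)
```

In the actual method the embeddings would come from CLIP's image and text encoders and the head would be trained on labeled DID data; the sketch only shows how the three embedding streams are combined into a single classifier input.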
Article
Full-text available
Commercial content moderators are exposed to a range of stressors at work, including analysing content that has been flagged as harmful. However, not much is known about their specific coping strategies. In-depth interviews were conducted with 11 content moderators exposed to child sexual abuse material (CSAM) as part of their job, and thematically analysed to investigate both individual coping strategies and those deployed organisationally. Results highlighted the importance of social support and validation of the role, as well as creating boundaries between work and home life. Moderators expressed a preference for mandatory, individual therapy with professionals who had specific experience supporting those exposed to CSAM. How content moderators cope and how they can be further supported are discussed.
Article
Full-text available
This report presents findings from the REASSURE (Researcher, Security, Safety, and Resilience) project’s in-depth interviews with 39 online extremism and terrorism researchers. Based at universities, research institutes, and think tanks in Europe and North America, the interviewees studied mainly, albeit not exclusively, far-right and violent jihadist online activity. The report catalogues for the first time the range of harms they have experienced, the lack of formalised systems of care or training, and their reliance therefore on informal support networks to mitigate those harms.
Article
Full-text available
Human rights investigators often review graphic imagery of potential war crimes and human rights abuses while conducting open source investigations. As a result, they are at risk of developing secondary trauma, a condition that can produce a range of cognitive and behavioral consequences, including elevated anxiety and distress, depression, and post-traumatic stress disorder. Human rights organizations have traditionally been slow to recognize the risk of secondary trauma. However, in recent years, several university programs offering students practical experience in open source human rights investigations have implemented training on secondary trauma mitigation. We administered a survey to students in these programs to determine whether they are implementing recommended mitigation techniques and to document what techniques they find helpful. From 33 responses, we identified six general practices as helping mitigate secondary trauma: processing graphic content, limiting exposure to graphic content, drawing boundaries between personal life and investigations, bringing positivity into investigations, learning from more experienced investigators, and employing a combination of techniques. We also identified recommendations for institutions to protect the right to health of investigators and to support secondary trauma mitigation, both through frequent training and through practices such as labeling graphic content and emphasizing self-care. The article concludes with areas for future research.
Article
Full-text available
The seminal work of Gatys et al. demonstrated the power of Convolutional Neural Networks (CNNs) in creating artistic imagery by separating and recombining image content and style. This process of using CNNs to render a content image in different styles is referred to as Neural Style Transfer (NST). Since then, NST has become a trending topic both in academic literature and industrial applications. It is receiving increasing attention and a variety of approaches are proposed to either improve or extend the original NST algorithm. In this paper, we aim to provide a comprehensive overview of the current progress towards NST. We first propose a taxonomy of current algorithms in the field of NST. Then, we present several evaluation methods and compare different NST algorithms both qualitatively and quantitatively. The review concludes with a discussion of various applications of NST and open problems for future research. A list of papers discussed in this review, corresponding codes, pre-trained models and more comparison results are publicly available at: https://osf.io/f8tu4/.
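The style representation at the core of the Gatys et al. approach surveyed here is the Gram matrix of a CNN feature map: channel-to-channel correlations that discard spatial arrangement, so that matching Gram matrices transfers texture and style but not content layout. A minimal sketch, with random arrays standing in for VGG feature maps (the normalization constant varies between NST papers; the one used here is one common choice):

```python
import numpy as np

def gram_matrix(feature_map):
    """Style representation from Gatys et al.: correlations between the
    channels of a CNN feature map, discarding spatial arrangement."""
    c, h, w = feature_map.shape
    f = feature_map.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def style_loss(gram_generated, gram_style):
    # Mean squared distance between the two Gram matrices; NST minimizes
    # a weighted sum of this over several layers, plus a content loss.
    return float(np.mean((gram_generated - gram_style) ** 2))

rng = np.random.default_rng(1)
feat_style = rng.normal(size=(64, 32, 32))  # stand-in for a VGG layer output
feat_gen = rng.normal(size=(64, 32, 32))

g_style = gram_matrix(feat_style)
print(g_style.shape)  # (64, 64): channel-by-channel correlations
```

In the full algorithm the generated image's pixels are optimized by gradient descent so that its Gram matrices approach those of the style image while its deeper feature maps stay close to those of the content image.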
Article
Full-text available
Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this through deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification. The aim of this review paper is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.
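The competitive two-network process this review describes can be illustrated with a deliberately tiny example (not from the review itself): a linear generator tries to mimic samples from N(3, 1) while a logistic discriminator tries to tell real samples from generated ones, with the gradients of the standard non-saturating GAN losses derived by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

a, c = 1.0, 0.0   # generator G(z) = a*z + c
w, b = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 128

for step in range(3000):
    x_real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + c

    # Discriminator step: descend the loss -log D(real) - log(1 - D(fake)).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    gw = np.mean(-(1 - d_real) * x_real) + np.mean(d_fake * x_fake)
    gb = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * gw
    b -= lr * gb

    # Generator step: descend the non-saturating loss -log D(fake),
    # backpropagating through the discriminator.
    d_fake = sigmoid(w * x_fake + b)
    gx = -(1 - d_fake) * w       # dL/dx_fake
    a -= lr * np.mean(gx * z)    # dx_fake/da = z
    c -= lr * np.mean(gx)        # dx_fake/dc = 1

samples = a * rng.normal(0.0, 1.0, 1000) + c
print(float(samples.mean()))  # typically drifts toward the real mean of 3.0
```

Real GANs replace both linear models with deep networks and obtain these gradients by automatic differentiation, but the adversarial structure (alternating discriminator and generator updates against each other's current parameters) is the same.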
Article
Full-text available
Objective User Generated Content – photos and videos submitted to newsrooms by the public – has become a prominent source of information for news organisations. Journalists working with uncensored material can frequently witness disturbing images for prolonged periods. How this might affect their psychological health is not known and it is the focus of this study. Design Descriptive, exploratory. Setting The newsrooms of three international news organisations. Participants One hundred and sixteen journalists working with User Generated Content material. Main outcome measures Psychometric data included the re-experiencing, avoidance and autonomic arousal indices of posttraumatic stress disorder (Impact of Event Scale-revised), depression (Beck Depression Inventory-II; BDI-II), a measure of psychological distress (GHQ-28), the latter comprising four subscales measuring somatisation, anxiety, social dysfunction and depression, and mean weekly alcohol consumption divided according to gender. Results Regression analyses revealed that frequent (i.e. daily) exposure to violent images independently predicted higher scores on all indices of the Impact of Event Scale-revised, the BDI-II and the somatic and anxiety subscales of the GHQ-28. Exposure per shift only predicted scores on the intrusion subscale of the Impact of Event Scale-revised. Conclusions The present study, the first of its kind, suggests that frequency rather than duration of exposure to images of graphic violence is more emotionally distressing to journalists working with User Generated Content material. Given that good journalism depends on healthy journalists, news organisations will need to look anew at what can be done to offset the risks inherent in viewing User Generated Content material. Our findings, in need of replication, suggest that reducing the frequency of exposure may be one way to go.
Article
Trigger warnings, content warnings, or content notes are alerts about upcoming content that may contain themes related to past negative experiences. Advocates claim that warnings help people to emotionally prepare for or completely avoid distressing material. Critics argue that warnings both contribute to a culture of avoidance at odds with evidence-based treatment practices and instill fear about upcoming content. A body of psychological research has recently begun to empirically investigate these claims. We present the results of a meta-analysis of all empirical studies on the effects of these warnings. Overall, we found that warnings had no effect on affective responses to negative material or on educational outcomes. However, warnings reliably increased anticipatory affect. Findings on avoidance were mixed, suggesting either that warnings have no effect on engagement with material or that they increased engagement with negative material under specific circumstances. Limitations and implications for policy and therapeutic practice are discussed.
Chapter
Arbitrary style transfer algorithms can generate stylization results from arbitrary content-style image pairs but tend to distort content structures and produce degraded style patterns. The content distortion problem has been addressed using high-frequency signals, saliency maps, and low-level features. However, the style degradation problem remains unsolved. Since there is a considerable semantic discrepancy between content and style features, we assume they follow two different manifold distributions. Style degradation occurs because existing methods cannot fully leverage the style statistics to render a content feature that lies on a different manifold. We therefore designed progressive attentional manifold alignment (PAMA) to align the content manifold to the style manifold. This module consists of a channel alignment module to emphasize related content and style semantics, an attention module to establish the correspondence between features, and a spatial interpolation module to adaptively align the manifolds. The proposed PAMA can alleviate the style degradation problem and produces state-of-the-art stylization results.
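The attention module in this kind of method establishes a correspondence between content and style features. A heavily simplified sketch of that single step (PAMA additionally performs channel alignment and spatial interpolation, and applies the whole pipeline progressively; shapes and names here are illustrative): each content position attends over all style positions and pulls in a weighted mix of style features.

```python
import numpy as np

def cross_attention(content, style):
    """Content features act as queries; style features act as keys and
    values. Each output row is a convex combination of style features,
    i.e. style statistics re-arranged onto the content layout."""
    q = content                 # (n_content, d)
    k = v = style               # (n_style, d)
    scores = q @ k.T / np.sqrt(q.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over style positions
    return attn @ v

rng = np.random.default_rng(2)
content_feat = rng.normal(size=(16, 32))  # 16 spatial positions, 32 channels
style_feat = rng.normal(size=(25, 32))
aligned = cross_attention(content_feat, style_feat)
print(aligned.shape)  # (16, 32)
```

Because each output row is a convex combination of style rows, the aligned features stay inside the range of the style statistics while following the spatial arrangement of the content.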
Article
In an attempt to mitigate the negative impact of graphic online imagery, Instagram has introduced sensitive-content screens—graphic images are obfuscated with a blur and accompanied by a warning. Sensitive-content screens purportedly allow “vulnerable people” with mental-health concerns to avoid potentially distressing content. However, no research has assessed whether sensitive-content screens operate as intended. Here we examined whether people, including vulnerable users (operationalized as people with more severe psychopathological symptoms, e.g., depression), use the sensitive-content screens as a tool for avoidance. In two studies, we found that the majority of participants (80%–85%) indicated a desire (Study 1) or made a choice (Study 2) to uncover a screened image. Furthermore, we found no evidence that vulnerable users were any more likely to use the screens to avoid sensitive content. Therefore, warning screens appear to be an ineffective way to deter vulnerable users from viewing negative content.
Article
With the rise in user-generated content, there is a greater need for content reviews. While machines and technology play a critical role in content moderation, the need for manual reviews remains. Such manual reviews are known to be emotionally challenging. We test the effects of simple interventions like grayscaling and blurring in reducing the emotional impact of such reviews. We demonstrate this by introducing the interventions in a live content-review setup, allowing us to maximize external validity. We use a pre-test post-test experiment design and measure review quality, average handling time, and emotional affect using the PANAS scale. We find that simple grayscale transformations provide an easy-to-implement solution that can significantly reduce the emotional impact of content reviews. We observe, however, that a full-blur intervention can be challenging for reviewers.
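The two interventions compared in this study are simple image transformations. A minimal sketch of both, on a synthetic RGB array (the box blur is a stand-in for whatever blur a production review tool would use, e.g. a Gaussian blur; the specifics here are not from the study):

```python
import numpy as np

def to_grayscale(img):
    """Luminance grayscale (ITU-R BT.601 weights), replicated to three
    channels so the image keeps its shape in the review interface."""
    gray = img @ np.array([0.299, 0.587, 0.114])
    return np.repeat(gray[..., None], 3, axis=-1)

def box_blur(img, k=5):
    """Simple k x k mean filter with edge padding."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(3)
img = rng.random((64, 64, 3))   # synthetic stand-in for a flagged image
gray = to_grayscale(img)
blurred = box_blur(img)
print(gray.shape, blurred.shape)  # (64, 64, 3) (64, 64, 3)
```

Grayscaling removes color while leaving structure fully visible, which is consistent with the finding that it changes emotional impact without hurting review quality; a full blur also suppresses the structure reviewers need, which is consistent with it being challenging for them.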
Article
While most user content posted on social media is benign, other content, such as violent or adult imagery, must be detected and blocked. Unfortunately, such detection is difficult to automate, due to high accuracy requirements, costs of errors, and nuanced rules for acceptable content. Consequently, social media platforms today rely on a vast workforce of human moderators. However, mounting evidence suggests that exposure to disturbing content can cause lasting psychological and emotional damage to some moderators. To mitigate such harm, we investigate a set of blur-based moderation interfaces for reducing exposure to disturbing content whilst preserving moderator ability to quickly and accurately flag it. We report experiments with Mechanical Turk workers to measure moderator accuracy, speed, and emotional well-being across six alternative designs. Our key findings show interactive blurring designs can reduce emotional impact without sacrificing moderation accuracy and speed.
Article
Avoidance is one of the purported benefits and harms of trigger warnings—alerts that upcoming content may contain traumatic themes. Yet, previous research has focused primarily on emotional responses. Here, we used a trauma analogue design to assess people’s avoidance behavior in response to stimuli directly related to an analogue trauma event. University undergraduates (n = 199) watched a traumatic film and then viewed film image stills preceded by either a trigger warning or a neutral task instruction. Participants had the option to “cover” and avoid each image. Apart from a minor increase in avoidance when a warning appeared in the first few trials, we found that participants did not overall avoid negative stimuli prefaced with a trigger warning any more than stimuli without a warning. In fact, participants were reluctant overall to avoid distressing images; only 12.56% (n = 25) of participants used the option to cover such images when given the opportunity to do so. Furthermore, we did not find any indication that trigger warning messages help people to pause and emotionally prepare themselves to view negative content. Our results contribute to the growing body of literature demonstrating that warnings seem trivially effective in achieving their purported goals.
Article
Journalists are not immune from the emotional impact of their work as they report on mass shootings, terror attacks, and natural disasters. Adding to an established body of research on the interrelationship between journalism and trauma, this syndicate focused on how journalism schools should prepare students to deal with traumatic news content and events that would undoubtedly form part of their future day-to-day activities.
Article
Longtime FQ Editorial Board member and 2018 MacArthur Fellow Lisa Parks charts the shift in critical focus from the potential of social media platforms to unite people around progressive causes to the need for “content moderation,” the practice of cleaning up digital pollution. Parks centers her analysis on The Cleaners (2018), Moritz Riesewieck and Hans Block's provocative documentary that delves into the lives and worlds of commercial content moderators at an unnamed company in the Philippines. The film's account of these digital-labor conditions prompts Parks' critical reflection on a series of issues: the delegation of U.S. media regulation to globally outsourced workers, the problematic trope of “cleaning,” the business of historical sanitization, and the black-boxing of infrastructural information.
Conference Paper
As User Generated Content takes up an increasing share of the total Internet multimedia traffic, it becomes increasingly important to protect users (be they consumers or professionals, such as journalists) from potentially traumatizing content that is accessible on the web. In this demonstration, we present a web service that can identify disturbing or graphic content in images. The service can be used by platforms for filtering or to warn users prior to exposing them to such content. We evaluate the performance of the service and propose solutions towards extending the training dataset and thus further improving the performance of the service, while minimizing emotional distress to human annotators.
Conference Paper
In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. We thus trained it on the largest facial dataset to date, an identity-labeled dataset of four million facial images belonging to more than 4,000 identities, where each identity has an average of over a thousand samples. The learned representations, coupling the accurate model-based alignment with the large facial database, generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.25% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 25%, closely approaching human-level performance.
Article
Recommender systems have developed in parallel with the web. They were initially based on demographic, content-based and collaborative filtering. Currently, these systems are incorporating social information. In the future, they will use implicit, local and personal information from the Internet of things. This article provides an overview of recommender systems as well as collaborative filtering methods and algorithms; it also explains their evolution, provides an original classification for these systems, identifies areas of future implementation and develops certain areas selected for past, present or future importance.
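The collaborative filtering idea this overview surveys can be shown in a few lines: predict a user's missing rating from the ratings of similar users. A minimal user-based sketch on a toy ratings matrix (the matrix, similarity measure, and weighting scheme are illustrative choices, not from the article):

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = items, 0 = unrated.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def predict(user, item, R):
    """Predict a missing rating as the similarity-weighted average of
    other users' ratings for that item."""
    sims, ratings = [], []
    for other in range(R.shape[0]):
        if other != user and R[other, item] > 0:
            sims.append(cosine_sim(R[user], R[other]))
            ratings.append(R[other, item])
    sims, ratings = np.array(sims), np.array(ratings)
    return float(sims @ ratings / (sims.sum() + 1e-9))

# User 0 has not rated item 2; users 1-3 have.
print(round(predict(0, 2, R), 2))  # -> 2.09
```

User 0's taste matches user 1 (who rated item 2 low) far better than users 2 and 3 (who rated it high), so the prediction lands near the low end; the social and Internet-of-things extensions the article discusses feed additional signals into exactly this kind of weighting.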