Fig 2 - uploaded by Rosanna Bellini
Our four-stage process framework, covering a poster's expectations before IPS, their change in attitude/thinking towards their target, the escalation of abusive behaviours, and their reported reflection on their use of IPS. We label the influences on progression between stages.
Source publication
A growing body of research suggests that intimate partner abusers use digital technologies to surveil their partners, including by installing spyware apps, compromising devices and online accounts, and employing social engineering tactics. However, to date, this form of privacy violation, called intimate partner surveillance (IPS), has primarily be...
Contexts in source publication
Context 1
... represent our grounded theory findings with a four-stage model (Figure 2) that illustrates the attitude shift in a potential perpetrator's approach to and use of IPS, as well as what they identify as consequences resulting from their actions. We describe each of the four stages in detail: setting expectations, change in attitude/thinking, escalation, and reflection, and describe the six influences on progression through this model. ...
Context 2
... our findings demonstrate, users who share narrative accounts include people expressing curiosity towards IPS, people making active plans to commit IPS, and people who report having surveilled a partner. Such a large cross-section of people at various stages of attitudes towards IPS (Figure 2) may not be accessible to researchers under normal circumstances. Either self-identification or state-identification as a perpetrator of harm may lead an individual to self-censor, or to refuse to verbally disclose the extent of their abuse or their attitudes towards their behaviour, out of concern for receiving a larger penalty for their behaviour [28,50,92]. ...
Context 3
... analyses show how theories, causes, and explanations can be (and are) reused by perpetrators on infidelity forums to explain, excuse, justify, or perpetuate violence, and even provide patterns for others to follow. In contrast to many stereotypical representations of violence, which create images of a sudden and violent explosion of aggression [27], our mapping of narrative pathways in Figure 1 and our four-stage process model in Figure 2 identify that IPS is a calculated and methodical activity for many posters. This model suggests that there are clear and identifiable points at which a poster can be challenged on the unacceptability of their behaviour, when it is deemed safe to do so, before such harm escalates for current targets or repeats for future ones. ...
Context 4
... is an interesting use of a common de-escalation strategy, currently taught in in-person groups [74], that requires being placed in the shoes of the target to increase empathy: perspective-taking. We foresee an intervention in which trained professionals ask posters whether they would follow a strict social contract as described in stage one of Figure 2. By identifying inconsistencies in their way of thinking before IPS has occurred, it might be possible to deter a potential perpetrator from obsessively trying to prove that the target is behaving suspiciously. ...
Context 5
... we described prior, many visitors to the sites posted questions concerning posters' emotional well-being and sought advice for navigating complex challenges in their relationships. However, exposure to an environment where IPS is normalised and actively encouraged was identified as a serious influence on the progression from considering IPS to performing it (Figure 2). Search engines have attempted to deploy technical interventions by displaying, in response to certain search terms, a warning message about the legal and moral implications of viewing and possessing particular material (e.g., images of child abuse). ...
Similar publications
Student well-being and behaviour are assessed daily by classroom teachers at school, and teachers are also asked occasionally to complete evidence-based psychology reports regarding a particular student in their class to inform clinical practice and help with diagnoses. This critical qualitative study considers the role of surveillance in schools as...
Citations
... S&P publications may motivate readers to eliminate barriers for at-risk groups [148], but may also be read by adversaries who then target at-risk users or incite others to do so [40]. These risks include providing instructions about how to attack a particular group of at-risk users, or naming specific software tools or online communities where adversaries can find further information [21,44,138]. Even activities that may initially seem beneficial could prove harmful to at-risk groups. ...
... The resulting 12 members of our expert panel (i.e., eight experienced S&P scholars, four from our analysis team) together have over 60 years of experience in computer security and digital-safety research involving at-risk users across industry, academia, and non-profit sectors. Our expert panel has, in aggregate, worked on many styles of research projects, ranging from short-term (less than one year) exploratory studies to long-term engagements (5 years or more) involving survivors of IPV [21,43,44,45,56,140], political campaign workers [34], refugees [122], survivors of human trafficking [30], political activists [38], journalists [149], low-income communities [154], returning citizens (post-incarcerated individuals) [18,19], sex workers [83], people experiencing homelessness [153], targets of occupational bullying and harassment [20], online content creators [136], and more. We have therefore worked with a substantial cross-section (18, or 58.1%) of the 31 at-risk groups identified in Warford et al.'s [148] corpus. ...
... However, we were unsure how to safely engage abusers. We delayed direct data collection, and instead, conducted measurement studies of online communities that discuss technology abuse tactics [21,138]. Our measurement studies shed light on abuser perspectives, providing us with valuable expertise. ...
Research involving at-risk users (that is, users who are more likely to experience a digital attack or to be disproportionately affected when harm from such an attack occurs) can pose significant safety challenges to both users and researchers. Nevertheless, pursuing research in computer security and privacy is crucial to understanding how to meet the digital-safety needs of at-risk users and to design safer technology for all. To standardize and bolster safer research involving such users, we offer an analysis of 196 academic works to elicit 14 research risks and 36 safety practices used by a growing community of researchers. We pair this inconsistent set of reported safety practices with oral histories from 12 domain experts to contribute scaffolded and consolidated pragmatic guidance that researchers can use to plan, execute, and share safer digital-safety research involving at-risk users. We conclude by suggesting areas for future research regarding the reporting, study, and funding of at-risk user research.
... Smartphone apps called stalkerware allow for the collection of personal details such as web searches, location, messages, photos, and other information, making this information accessible to an abuser while hiding their functionality from the victimized partner. Past research has considered the unique threat landscape faced by survivors of technology-enabled abuse [16,39], characteristics of stalkerware and related apps [11,35], clinical approaches to aid survivors [15,20,41] and understanding the motivations of abusers [9,40]; however, to date, technical analysis of apps has largely been limited to relatively small corpora [21,38]. Moreover, there has been little examination of how developers of stalkerware financially benefit from their harmful software. ...
Stalkerware is a form of malware that allows for the abusive monitoring of intimate partners. Primarily deployed on information-rich mobile platforms, these malicious applications allow for collecting information about a victim’s actions and behaviors, including location data, call audio, text messages, photos, and other personal details. While stalkerware has received increased attention from the security community, the ways in which stalkerware authors monetize their efforts have not been explored in depth. This paper represents the first large-scale technical analysis of monetization within the stalkerware ecosystem. We analyze the code base of 6,432 applications collected by the Coalition Against Stalkerware to determine their monetization strategies. We find that while far fewer stalkerware apps use ad libraries than normal apps, 99% of those that do use Google AdMob. We also find that payment services range from traditional in-app billing to cryptocurrency. Finally, we demonstrate that Google’s recent change to their Terms of Service (ToS) did not eliminate these applications, but instead caused a shift to other payment processors, while the apps can still be found on the Play Store; we verify through emulation that these apps often operate in blatant contravention of the ToS. Through this analysis, we find that the heterogeneity of markets and payment processors means that while point solutions can have an impact on monetization, a multi-pronged solution involving multiple stakeholders is necessary to mitigate the financial incentive for developing stalkerware.
Online harassment remains a prevalent problem for internet users. Its impact is made orders of magnitude worse when multiple harassers coordinate to conduct networked attacks. This paper presents an analysis of 231 threads in Kiwi Farms, a notorious online harassment community. We find that networked online harassment campaigns consist of three phases: target introduction, network decision, and network response. The first stage consists of the initial narrative elements, which are approved or rejected in stage two and expanded in stage three. Narrative building is a common element of all three stages. The network plays a key role in narrative building, adding elements to the narrative in at least 80% of the threads, resulting in sustained harassment. This finding is central to our model of Continuous Narrative Escalation (CNE), which has two parts: (1) narrative continuation, the action of repeatedly adding new information to the existing narrative, and (2) escalation, the aggravation of harassment that occurs as a consequence.
In addition, we present insights from our analysis of 100 takedown-request threads discussing received abuse reports. We find that these takedown requests are misused by the community and become elements that further fuel the narrative. We use our findings and framework to derive a set of recommendations that can inform harassment interventions and make online spaces safer.