Afsaneh Razi’s research while affiliated with Drexel University and other places


Publications (40)


Exploring Online Support Needs of Adolescents Living with Epilepsy
  • Conference Paper

November 2024


Jessica Y. Medina · Jordyn Young · Wendy Trueblood Miller · Afsaneh Razi




Assessing the Impact of Online Harassment on Youth Mental Health in Private Networked Spaces

May 2024


Proceedings of the International AAAI Conference on Web and Social Media

Online harassment negatively impacts mental health, with victims reporting heightened depression and anxiety and even an increased risk of suicide, especially among youth and young adults. Yet research has mainly focused on building automated systems that detect harassment incidents from publicly available social media trace data, overlooking the impact of these negative events on victims, especially in private channels of communication. To close this gap, we examine a large dataset of private message conversations from Instagram, shared and annotated by youth aged 13-21. We apply classifiers trained on online mental health data to analyze the impact of online harassment on indicators of mental health expression. Through a robust causal inference design involving a difference-in-differences analysis, we show that harassment results in greater expression of mental health concerns in victims for up to 14 days following an incident, while controlling for time, seasonality, and topic of conversation. Our study provides new benchmarks for quantifying how victims perceive online harassment in its immediate aftermath. We make social justice-centered design recommendations to support harassment victims in private networked spaces. We caution that some of the paper's content could be triggering to readers.
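The causal design described in the abstract hinges on the interaction term of a difference-in-differences regression. Below is a minimal, self-contained sketch of that estimator on simulated data; the variable names, effect size, and use of statsmodels are illustrative assumptions, not the paper's actual code, data, or results.

```python
# Hypothetical difference-in-differences sketch (illustrative only, not the paper's code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "harassed": rng.integers(0, 2, n),  # 1 = conversation contains a harassment incident
    "post": rng.integers(0, 2, n),      # 1 = within the 14-day window after the incident
})
# Simulate a mental-health-expression score with a true DiD effect of 0.5
df["mh_score"] = (0.2 * df["harassed"] + 0.1 * df["post"]
                  + 0.5 * df["harassed"] * df["post"]
                  + rng.normal(0, 0.1, n))

# "harassed * post" expands to main effects plus interaction;
# the coefficient on harassed:post is the DiD estimate of the harassment effect
model = smf.ols("mh_score ~ harassed * post", data=df).fit()
print(model.params["harassed:post"])
```

The interaction coefficient recovers the simulated effect (about 0.5 here); the paper's actual analysis additionally controls for time, seasonality, and conversation topic.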



For Me or Not for Me? The Ease With Which Teens Navigate Accurate and Inaccurate Personalized Social Media Content
  • Conference Paper
  • Full-text available

May 2024





Citations (30)


... Our results showed that teens are still being exposed to risky content (i.e., explicit and/or self-harm content) even though filtering and/or reporting features are in place on social media. This suggests that the safety features do not reflect teens' contextualized risk experiences, largely because while risk is highly subjective, safety features are designed based on third-party risk perceptions [99]. Hence, we need to revisit those safety features from teens' perspective. ...

Reference: Teen Talk: The Good, the Bad, and the Neutral of Adolescent Social Media Use

Personally Targeted Risk vs. Humor: How Online Risk Perceptions of Youth vs. Third-Party Annotators Differ based on Privately Shared Media on Instagram
  • Citing Conference Paper
  • June 2024

... This integration of online resources into youth's everyday lives has opened new avenues for seeking support, serving as an alternative option to offline support-seeking, which is often hindered by barriers like stigma and a preference for self-reliance [106,108]. Within the SIGCHI community, researchers have consistently highlighted how youth leverage the internet to seek support in different contexts [8,59,111,147]. For instance, Pretorius et al. [106] explored young people's online help-seeking practices and found that they used online resources to independently search for credible mental health information, seek empathetic and personalized support that validated their experiences, or look for immediate help through real-time interactions such as chat features or hotlines during crises. More recently, and with the advances in technology and artificial intelligence (AI), researchers have also begun exploring how youth perceive new types of online social support, like those available from AI chatbots [25,80,91]. ...

"I'm gonna KMS": From Imminent Risk to Youth Joking about Suicide and Self-Harm via Social Media
  • Citing Conference Paper
  • May 2024

... Recent studies evaluating the efficacy of LLMs in offering mental health interventions have shown promising results. When evaluated against human responses, responses by LLMs such as GPT-4 were found to demonstrate more empathy (Luo et al., 2024), active listening, and helpfulness when responding to relationship or general health-related questions (Vowels, 2024; Young et al., 2024). Another study indicated that GPT-4, when prompted to act as a therapist, exhibits competency, empathy, and therapeutic capacity when delivering single-session therapy to individuals seeking assistance with relationship challenges. ...

The Role of AI in Peer Support for Young People: A Study of Preferences for Human- and AI-Generated Responses
  • Citing Conference Paper
  • May 2024

... Our results indicate that algorithmic approaches to identify online risks on social media for youth could be more teen-centered and effective if they take into account the difference between the platforms and the types of risks teens discuss encountering the most on those platforms. In contrast, prior works on machine learning (ML) algorithms for detecting social media risks for youth have mainly centered on training models using available datasets [11,73,115] without considering teens' shared experiences on these platforms. Therefore, instead of focusing our efforts on collecting benchmark datasets from various social media platforms, we recommend prioritizing addressing platform-specific challenges that teens discussed when they disclosed their negative experiences on these platforms. ...

Systemization of Knowledge (SoK): Creating a Research Agenda for Human-Centered Real-Time Risk Detection on Social Media Platforms
  • Citing Conference Paper
  • May 2024

... Previous studies have examined the risk perceptions and parental controls across various technologies and applications, including Virtual Reality [3], IoT devices [4], social media platforms [5], and gaming [6]. These studies identified both common and unique risks in various contexts, including explicit content, addiction, cyberbullying, and harassment [7]-[10]. ...

Profiling the Offline and Online Risk Experiences of Youth to Develop Targeted Interventions for Online Safety
  • Citing Article
  • April 2024

Proceedings of the ACM on Human-Computer Interaction

... Hence, we advocate for the development of tailored trauma-informed approaches to address their specific needs. Trauma-informed approaches have been explored in the design of various digital systems and social platforms in HCI [46,109,112,121]. This approach adheres to six core principles: 1) Safety, 2) Trustworthiness and Transparency, 3) Peer Support, 4) Collaboration and Mutuality, 5) Empowerment, Voice and Choice, 6) Cultural, Historical, and Gender Issues. ...

Toward Trauma-Informed Research Practices with Youth in HCI: Caring for Participants and Research Assistants When Studying Sensitive Topics
  • Citing Article
  • April 2024

Proceedings of the ACM on Human-Computer Interaction

... Wang et al. (2013) found that younger individuals are more adept at adopting new technologies due to their digital upbringing, which fosters curiosity and reliance on online platforms for various needs. Similarly, Thai et al. (2023) reported that children and youths hold positive views of AI, expressing interest in AI research and advocating for shared decision-making with AI. McDonald et al. (2023) demonstrated that Generation Z, having grown up with AI, is more likely to integrate AI tools into their studies and research. This study found that education level influences engagement with digital platforms in graduate education. ...

AI through the Eyes of Gen Z: Setting a Research Agenda for Emerging Technologies that Empower Our Future Generation
  • Citing Conference Paper
  • October 2023

... Meanwhile, the internet serves as a valuable resource for CHINS and other youth, offering access to support services they may not otherwise have access to [48,63]. Peer-to-peer support platforms and forums play a crucial role in this regard, allowing youth to freely share their lived experiences anonymously if desired, connect with peers who have similar experiences, provide mutual support, and become part of a larger supportive online community, mirroring the support they might lack offline [34,56,59,66,69]. At the same time, the internet could also facilitate risks associated with at-risk youths' adverse experiences [65]. ...

“Help Me:” Examining Youth’s Private Pleas for Support and the Responses Received from Peers via Instagram Direct Messages
  • Citing Conference Paper
  • April 2023

... Pitfalls of social media-based online peer support have also been highlighted, such as personal distress due to others' experiences [44], unhelpful interactions with others [52], social exclusion [44], and feelings of vulnerability when talking to strangers online [24]. Emerging evidence has also documented other adolescent online safety issues, including online sexual risks [12,58,112], exposure to explicit media content [31,100,119], and problematic media use linked to mental health issues [89,101,102]. ...

Sliding into My DMs: Detecting Uncomfortable or Unsafe Sexual Risk Experiences within Instagram Direct Messages Grounded in the Perspective of Youth
  • Citing Article
  • April 2023

Proceedings of the ACM on Human-Computer Interaction

... Drawing from the Razi et al. case study [113], where researchers successfully gathered and analyzed similar sensitive data from social media platforms, it is evident that with appropriate methodologies and ethical considerations, collecting targeted data from Instagram to detect teens' body shaming instances is indeed feasible. By doing so, the actual risks faced by teenagers in their online lives would be acknowledged, which would result in allowing the risk detection models to accurately identify these risks on specific platforms (e.g., [10,100,112]). As such, the models would learn patterns, contextual cues, and platform-specific dynamics that would be more representative of the risks faced by them. ...

Getting Meta: A Multimodal Approach for Detecting Unsafe Conversations within Instagram Direct Messages of Youth

Proceedings of the ACM on Human-Computer Interaction