Buse Carik’s research while affiliated with Virginia Tech and other places


Publications (5)


Figure 1: Frequency of NST based on responses to the Negative Automatic Thoughts Questionnaire (ATQ-N 10) and the Anxiety Scale for Autism-Adults (ASA-A). Distribution of participants' responses to the items in these questionnaires, with an overall mean frequency score of 3.25 (on a scale of 1-5). Participants reported a high frequency of thoughts related to the need for preparation, anxiety about social interactions, and discomfort with unfamiliar situations.
Demographics of study participants (N=200).
Demographic and professional information of practitioners, including their roles, areas of practice, years of experience, and gender.
Ordinal logistic regression results examining preferences for LLM tone. B is the regression coefficient (standard error in parentheses); negative values indicate lower likelihoods of preference relative to the reference group. OR is the odds ratio; values below 1 indicate reduced odds of preference. The reference group is participants who use LLMs for mental health support. Table columns: Tone Group, B (SE), OR, p-value. Note: *p < .05; **p < .01; ***p < .001.
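To make the statistics in the caption above concrete, here is a minimal sketch of an ordinal logistic regression of this kind, where OR = exp(B). This is not the authors' analysis; the column names and the synthetic data are hypothetical, and the statsmodels OrderedModel is assumed as a stand-in for whatever software the study used.

```python
# A minimal sketch (not the authors' actual analysis) of an ordinal
# logistic regression like the one described in the table caption above.
# Column names ("tone_preference", "uses_llm_for_mh") are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical survey data: an ordered preference rating (1-5) and a
# binary predictor marking whether the participant uses LLMs for
# mental health support (the reference group in the table above).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "tone_preference": pd.Categorical(
        rng.integers(1, 6, size=200), ordered=True
    ),
    "uses_llm_for_mh": rng.integers(0, 2, size=200),
})

model = OrderedModel(
    df["tone_preference"], df[["uses_llm_for_mh"]], distr="logit"
)
res = model.fit(method="bfgs", disp=False)

b = res.params["uses_llm_for_mh"]  # regression coefficient B
odds_ratio = np.exp(b)             # OR < 1 means reduced odds of preference
print(f"B = {b:.3f}, OR = {odds_ratio:.3f}")
```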
Reimagining Support: Exploring Autistic Individuals' Visions for AI in Coping with Negative Self-Talk
  • Preprint
  • File available

March 2025 · 18 Reads

Buse Carik · Victoria Izaac · Xiaohan Ding · [...] · Eugenia Rho

Autistic individuals often experience negative self-talk (NST), leading to increased anxiety and depression. While therapy is recommended, it presents challenges for many autistic individuals. Meanwhile, a growing number are turning to large language models (LLMs) for mental health support. To understand how autistic individuals perceive AI's role in coping with NST, we surveyed 200 autistic adults and interviewed practitioners. We also analyzed LLM responses to participants' hypothetical prompts about their NST. Our findings show that participants view LLMs as useful for managing NST by identifying and reframing negative thoughts. Both participants and practitioners recognize AI's potential to support therapy and emotional expression. Participants also expressed concerns about LLMs' understanding of neurodivergent thought patterns, particularly due to the neurotypical bias of LLMs. Practitioners critiqued LLMs' responses as overly wordy, vague, and overwhelming. This study contributes to the growing research on AI-assisted mental health support, with specific insights for supporting the autistic community.


Exploring Large Language Models Through a Neurodivergent Lens: Use, Challenges, Community-Driven Workarounds, and Concerns

January 2025 · 21 Reads · 1 Citation

Proceedings of the ACM on Human-Computer Interaction

Despite the increasing use of large language models (LLMs) in everyday life among neurodivergent individuals, our knowledge of how they engage with and perceive LLMs remains limited. In this study, we investigate how neurodivergent individuals interact with LLMs by qualitatively analyzing topically related discussions from 61 neurodivergent communities on Reddit. Our findings reveal 20 specific LLM use cases across five core thematic areas of use among neurodivergent users: emotional well-being, mental health support, interpersonal communication, learning, and professional development and productivity. We also identified key challenges, including overly neurotypical LLM responses and the limitations of text-based interactions. In response to such challenges, some users actively seek advice by sharing input prompts and corresponding LLM responses. Others develop workarounds by experimenting and modifying prompts to be more neurodivergent-friendly. Despite these efforts, users have significant concerns around LLM use, including potential overreliance and fear of replacing human connections. Our analysis highlights the need to make LLMs more inclusive for neurodivergent users and implications around how LLM technologies can reinforce unintended consequences and behaviors.
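For context on the kind of data collection the abstract describes, here is a minimal sketch of gathering topically related posts from subreddits with the PRAW Reddit API. This is not the authors' pipeline; the subreddit names, keywords, and credentials are illustrative only.

```python
# A minimal sketch, NOT the study's pipeline, of collecting LLM-related
# discussions from neurodivergent subreddits via PRAW. Subreddit names,
# keywords, and credentials below are placeholders.
import praw

reddit = praw.Reddit(
    client_id="YOUR_ID",          # placeholder credentials
    client_secret="YOUR_SECRET",
    user_agent="nd-llm-study/0.1",
)

SUBREDDITS = ["autism", "ADHD", "socialanxiety", "dyslexia"]  # examples only
KEYWORDS = ["ChatGPT", "LLM", "large language model"]

entries = []
for name in SUBREDDITS:
    for keyword in KEYWORDS:
        # Search each community for posts mentioning LLMs.
        for post in reddit.subreddit(name).search(keyword, limit=100):
            entries.append({
                "subreddit": name,
                "title": post.title,
                "text": post.selftext,
            })

print(f"Collected {len(entries)} posts for qualitative coding")
```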


Fig. 1. Distribution of the total number of entries including posts, comments, and replies across subreddits for autism, ADHD, social anxiety, and dyslexia.
Comprehensive list of neurodivergent conditions.
Exploring Large Language Models Through a Neurodivergent Lens: Use, Challenges, Community-Driven Workarounds, and Concerns

October 2024 · 189 Reads



CounterQuill: Investigating the Potential of Human-AI Collaboration in Online Counterspeech Writing

October 2024 · 48 Reads

Online hate speech has become increasingly prevalent on social media platforms, causing harm to individuals and society. While efforts have been made to combat this issue through content moderation, the potential of user-driven counterspeech as an alternative solution remains underexplored. Existing counterspeech methods often face challenges such as fear of retaliation and skill-related barriers. To address these challenges, we introduce CounterQuill, an AI-mediated system that assists users in composing effective and empathetic counterspeech. CounterQuill provides a three-step process: (1) a learning session to help users understand hate speech and counterspeech; (2) a brainstorming session that guides users in identifying key elements of hate speech and exploring counterspeech strategies; and (3) a co-writing session that enables users to draft and refine their counterspeech with CounterQuill. We conducted a within-subjects user study with 20 participants to evaluate CounterQuill in comparison to ChatGPT. Results show that CounterQuill's guidance and collaborative writing process provided users with a stronger sense of ownership over their co-authored counterspeech. Users perceived CounterQuill as a writing partner and thus were more willing to post the co-written counterspeech online compared to the one written with ChatGPT.
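The abstract does not describe CounterQuill's implementation. As a rough illustration only, the sketch below mocks the three-session flow (learning, brainstorming, co-writing) with a generic chat-completion API; the model name, prompts, and use of the OpenAI client are all assumptions.

```python
# A rough sketch of a three-stage human-AI counterspeech flow like the
# one the abstract describes. This is NOT CounterQuill's implementation;
# the prompts and the OpenAI chat API usage are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(system: str, user: str) -> str:
    """Single chat turn with a stage-specific system prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

hate_post = "..."  # the post the user wants to respond to

# (1) Learning session: explain hate speech and counterspeech.
lesson = ask("Briefly teach what counterspeech is and why it helps.",
             hate_post)

# (2) Brainstorming session: identify key elements and strategies.
ideas = ask("List the harmful elements in this post and three empathetic "
            "counterspeech strategies.", hate_post)

# (3) Co-writing session: draft a reply the user can then refine.
draft = ask("Draft a short, empathetic counterspeech reply using these "
            "strategies:\n" + ideas, hate_post)

print(lesson, ideas, draft, sep="\n\n")
```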


Citations (2)


... Given these limitations in traditional therapy methods, and the growing accessibility of large language models (LLMs), such as ChatGPT, Gemini, and Claude, many autistic individuals use these tools to help with interpersonal communication, such as explaining or interpreting social situations [45,52,111] and better understanding and processing their own emotions and thoughts [47]. Furthermore, an increasing number of autistic users rely on LLMs to discuss personal issues and seek mental health guidance [20,24,79]. However, both mental health professionals and members of the autistic community have raised concerns about the safety of relying on these tools for mental health support [18,60], particularly due to potential neurotypical biases in the LLM responses [61,122] and negative consequences observed in previous uses of chatbots in mental health care [116,117]. ...

Reference:

Reimagining Support: Exploring Autistic Individuals' Visions for AI in Coping with Negative Self-Talk
Exploring Large Language Models Through a Neurodivergent Lens: Use, Challenges, Community-Driven Workarounds, and Concerns
  • Citing Article
  • January 2025

Proceedings of the ACM on Human-Computer Interaction

... For instance, researchers have successfully utilized LLMs to extract detailed clinical information from electronic health records (EHRs) [16] and identify sentiment and key themes from patient reviews of medical treatments [17]. Additionally, current studies have leveraged LLMs to monitor public discourse [18], derive consumer insights [19], and predict health decisions [20]. These applications highlight the potential of LLMs in transforming diverse, unstructured data sources into valuable knowledge that can support healthcare decision-making. ...

Leveraging Prompt-Based Large Language Models: Predicting Pandemic Health Decisions and Outcomes Through Social Media Language
  • Citing Conference Paper
  • May 2024