Ashita Ashok’s research while affiliated with Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau and other places

Publications (13)


Beyond Attention: Investigating the Threshold Where Objective Robot Exclusion Becomes Subjective
[Figures: Fig. 2. Social humanoid robot Ameca from Engineered Arts (left) used in the exclusion study; mock interview flow with the robot (right). Fig. 3. Higher correlation indicating potential mediation effects. Fig. 5. Standing Position as a Risk Factor for Robot-Induced Exclusion.]
  • Preprint
  • File available

April 2025 · 53 Reads · Ashita Ashok · Ashim Mandal · [...]

As robots become increasingly involved in decision-making processes (e.g., personnel selection), concerns about fairness and social inclusion arise. This study examines social exclusion in group interviews led by the robot Ameca, exploring the relationships between objective exclusion (the robot's attention allocation), subjective exclusion (perceived exclusion), mood change, and need fulfillment. In a controlled lab study (N = 35), higher objective exclusion significantly predicted subjective exclusion. In turn, subjective exclusion negatively affected mood and need fulfillment, but mediated only the relationship between objective exclusion and need fulfillment. A piecewise regression analysis identified a critical threshold at which objective exclusion begins to be perceived as subjective exclusion. Additionally, standing position was the primary predictor of exclusion, whereas demographic factors (e.g., gender, height) had no significant effect. These findings underscore the need to consider both objective and subjective exclusion in human-robot interaction and have implications for fairness in robot-assisted hiring processes.
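A minimal sketch of the piecewise (breakpoint) regression idea described in the abstract, assuming simulated data and illustrative variable names rather than the study's released analysis code:

    # Piecewise (segmented) regression: estimate the breakpoint at which
    # objective exclusion starts predicting subjective exclusion.
    # Illustrative sketch with simulated data, not the study's analysis code.
    import numpy as np
    from scipy.optimize import curve_fit

    def piecewise_linear(x, x0, y0, k1, k2):
        # Two line segments joined continuously at the breakpoint x0.
        return np.where(x < x0, y0 + k1 * (x - x0), y0 + k2 * (x - x0))

    rng = np.random.default_rng(42)
    objective = rng.uniform(0, 1, 35)  # hypothetical attention-based exclusion scores
    subjective = np.where(objective < 0.4, 1.0,
                          1.0 + 5 * (objective - 0.4)) + rng.normal(0, 0.3, 35)

    params, _ = curve_fit(piecewise_linear, objective, subjective,
                          p0=[0.5, 1.0, 0.0, 1.0])
    print(f"Estimated threshold: {params[0]:.2f}")  # recovers ~0.4 for this toy data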


From GPT to Open-Source LLMs: Advancing LAIC for More Efficient Qualitative Analysis

March 2025 · 32 Reads

Qualitative data analysis is essential across numerous research domains (e.g., in Psychology or Human-Robot Interaction), yet its manual coding process is often time-consuming, resource-intensive, and susceptible to human bias. To address these challenges, we introduce LLM-Assisted Inductive Categorization (LAIC), a novel method that utilizes Large Language Models (LLMs) to generate and assign inductive categories. By leveraging the capabilities of LLMs, this approach streamlines qualitative data analysis, enhances coding efficiency, and maintains analytical rigor, offering a scalable solution for researchers working with large datasets.

In two preregistered studies, we explored the performance of different GPT models (GPT-3.5 Turbo and GPT-4o) across three temperature settings (0, 0.5, 1), with ten repetitions each, for a total of 120 runs. Outputs were assessed using established qualitative research criteria, including credibility, dependability, confirmability, transferability, and transparency. Both models effectively developed and assigned inductive categories, often achieving agreement rates higher than those of human coders. GPT-4o exhibited superior performance, providing clearer category explanations and higher agreement rates, particularly at a temperature setting of 0. This combination produced the most reliable and consistent categorizations, making GPT-4o the recommended model for inductive coding tasks.

Building upon these findings, our ongoing research seeks to further enhance the LAIC method by refining prompt designs and incorporating additional LLMs, with a particular focus on open-source models such as DeepSeek and Llama. This expansion aims to increase the method's accessibility and neutrality, ensuring that researchers can apply LAIC without reliance on proprietary models. By comparing multiple LLMs, we aim to demonstrate the robustness and generalizability of LAIC across diverse models and datasets, solidifying its applicability in various qualitative research contexts.

To facilitate adoption, we have developed a detailed tutorial and Python scripts for implementing LAIC with GPT models, covering each step from data input and prompt design to category generation and assignment. These resources are openly available under a CC-BY 4.0 license, promoting transparency and reproducibility within the research community. We plan to extend the step-by-step instructions and customizable templates to additional LLMs, including open-source models, so that researchers can integrate LAIC into their workflows and reduce the time and effort required for qualitative data analysis while maintaining high analytical standards.

Our research aligns with broader efforts to harness artificial intelligence for more efficient and scalable evaluation processes, addressing the growing demand for methods that can handle large volumes of qualitative data without compromising interpretive depth. By demonstrating that LLMs can reliably perform inductive coding tasks, LAIC bridges the gap between traditional coding practices and modern computational techniques. As an approach that goes beyond mere accuracy and precision, it exemplifies the potential of AI to improve the efficiency, transparency, and reproducibility of qualitative analysis. Through continued development and the inclusion of open-source models, we aim to further democratize access to AI-assisted qualitative analysis, supporting researchers across disciplines in conducting more efficient, transparent, and reproducible studies.
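As a hedged illustration of a single LAIC-style model call, the sketch below uses the openai Python SDK to request inductive categories for a batch of open-ended responses; the prompt, function name, and example data are assumptions for illustration and are not the published LAIC scripts:

    # Sketch of one LAIC-style call: ask an LLM to propose inductive categories
    # for open-ended responses. Illustrative only, not the released LAIC scripts.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_categories(responses: list[str], temperature: float = 0.0) -> str:
        prompt = (
            "You are assisting with inductive qualitative coding. "
            "Derive a concise set of categories from the responses below, "
            "each with a short label and a one-sentence definition.\n\n"
            + "\n".join(f"- {r}" for r in responses)
        )
        completion = client.chat.completions.create(
            model="gpt-4o",           # the best-performing model in the studies above
            temperature=temperature,  # temperature 0 gave the most consistent output
            messages=[{"role": "user", "content": prompt}],
        )
        return completion.choices[0].message.content

    print(generate_categories(["The robot ignored me.", "I felt heard throughout."]))

Repeating such calls across models and temperature settings, as in the 120-run design above, is what allows agreement rates to be compared against human coders.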




Investigating Objective and Subjective Social Exclusion in a Group Conversation Led by a Robot

February 2025 · 5 Reads

This study investigates ostracism-based social exclusion in multi-person interactions with robots. To examine this phenomenon, we will conduct a laboratory study in which participants engage in a simulated job interview with the robot Ameca acting as the interviewer. The study compares objective exclusion (measured as the proportion of the robot's attention directed toward each participant) with subjective exclusion (participants' self-reported feelings of being ignored or excluded). We aim to identify the point at which objective exclusion leads to subjective feelings of exclusion and how this impacts need fulfillment. After the interview, participants may choose a new standing position and are asked why they kept or changed their position. Exploratory analyses will examine whether factors such as gender, height, or physical position (angle) relative to the robot influence the actual or assumed likelihood of being excluded.
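As a rough illustration of how the objective measure could be computed, the sketch below derives each participant's attention share from a hypothetical log of the robot's attention targets; the log format and participant labels are illustrative, not the study's instrumentation:

    # Hypothetical sketch: objective exclusion as each participant's share of
    # the robot's attention, computed from a per-interval attention log.
    from collections import Counter

    attention_log = ["P1", "P1", "P2", "P1", "P3", "P1", "P2", "P1"]  # toy data

    counts = Counter(attention_log)
    total = sum(counts.values())
    for participant in ("P1", "P2", "P3"):
        share = counts[participant] / total
        # Exclusion here is the shortfall relative to an equal split (1/3 each).
        print(f"{participant}: attention share {share:.2f}, shortfall {1/3 - share:+.2f}")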


“You Scare Me”: The Effects of Humanoid Robot Appearance, Emotion, and Interaction Skills on Uncanny Valley Phenomenon

October 2024 · 78 Reads · 3 Citations

This study investigates the effects of humanoid robot appearance, emotional expression, and interaction skills on the uncanny valley phenomenon among university students using the social humanoid robot (SHR) Ameca. Two foundational studies were conducted within a university setting: Study 1 assessed student expectations of SHRs in a hallway environment, emphasizing the need for robots to integrate seamlessly and engage effectively in social interactions; Study 2 compared the humanlikeness of three humanoid robots: ROMAN, ROBIN, and EMAH (the EMAH robotic system implemented on Ameca). Initial findings from corridor interactions highlighted a diverse range of human responses, from engagement and curiosity to indifference and unease. The online survey additionally revealed significant insights into expected non-verbal communication skills, continuous learning, and comfort levels during hallway conversations with robots. Notably, certain humanoid robots evoked stronger emotional reactions, hinting at varying degrees of humanlikeness and the influence of interaction quality. The EMAH system was frequently ranked as most humanlike before the study, while post-study perceptions indicated a shift, with EMAH and ROMAN showing significant changes in perceived humanlikeness, suggesting a re-evaluation by participants influenced by their interactive experiences. This research advances our understanding of the uncanny valley phenomenon and the role of humanoid design in enhancing human–robot interaction, marking the first direct comparison among some of the most advanced humanlike research robots.






Citations (5)


... Mori's original picture, as in Figure 1, was later somewhat refuted, particularly because of humanoid robots, which do not always display familiarity. Works such as Berns and Ashok [16] and Yam et al. [17] tried to investigate which anthropomorphism aspects result in familiarity, adding or removing them as "humanizing" or "dehumanizing" robot appearances. The results are not conclusive but clearly illustrate the phenomenon. ...

Reference:

Perspective Chapter: Social Awareness in HRI
“You Scare Me”: The Effects of Humanoid Robot Appearance, Emotion, and Interaction Skills on Uncanny Valley Phenomenon

... This study also suggests supplementary work on clothing aesthetics beyond masculinity and femininity. Past work has shown that users have preferences based on the formality of clothing that shape their perceptions of the robot [152]. Future work can explore other styles, such as Victorian, grunge, or camp aesthetics, and how these may shape the social perception of robots, especially in communities that form identities around these aesthetics. ...

Robot Dressing Style: An evaluation of interlocutor preference for University Setting
  • Citing Conference Paper
  • August 2023

... That involves complex input synchronization, modality fusion, and context awareness, all of which demand advanced algorithms and ML techniques. Additionally, multimodal systems must be designed to adapt to diverse environments and user preferences, making them versatile across various applications, from healthcare and education to entertainment and smart environments [7][8][9]. Table 1 summarizes prior surveys on multimodal interaction, highlighting their focus areas. This survey advances the discourse on multimodal interaction systems by providing a holistic and integrated analysis that spans critical technologies, synchronization challenges, adaptive systems, and future research directions. ...

Multimodal Perceptual Cues for Context-Aware Human-Robot Interaction

... OpenSMILE [9] is designed for batch extraction of large feature sets for machine learning applications. OpenSMILE has been extensively used in various tasks such as speech emotion recognition [22] and clinical speech analysis [17,23], as well as paralinguistic challenges [24]. We used the eGeMAPS configuration [25] for the extraction, which has become a standard in affective computing and clinical speech research [26]. ...

Paralinguistic Cues in Speech to Adapt Robot Behavior in Human-Robot Interaction
  • Citing Conference Paper
  • August 2022
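The eGeMAPS extraction described in the excerpt maps onto the opensmile Python package; a minimal sketch, with the audio file path as an illustrative placeholder:

    # Minimal eGeMAPS feature extraction with the opensmile Python package,
    # as referenced in the excerpt above. The audio path is a placeholder.
    import opensmile

    smile = opensmile.Smile(
        feature_set=opensmile.FeatureSet.eGeMAPSv02,       # standard eGeMAPS set
        feature_level=opensmile.FeatureLevel.Functionals,  # one vector per file
    )
    features = smile.process_file("utterance.wav")  # returns a pandas DataFrame
    print(features.shape)  # (1, 88): the 88 eGeMAPS functionals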

... The robot-mediated job interview scenario was inspired by prior research (Kumazaki et al., 2017; Nørskov et al., 2020; Zafar et al., 2021). Even though this is a simulation, we are following the high-quality standards that apply to real interviews. ...

Personality Traits Assessment using P.A.D. Emotional Space in Human-robot Interaction
  • Citing Conference Paper
  • January 2021