Figure - available from: Frontiers in Robotics and AI
The “symmetrical” robot-mediated job interview, from the perspective of the interviewer (a) and the job candidate (b). In (a), a male interviewer communicates via the robot with a female job candidate, who is seated in another room, as shown in (b). As the interviewer and the applicant communicate, their head movements, lip movements, and speech are transmitted via the robot.
Source publication
It is well-established in the literature that biases (e.g., related to body size, ethnicity, race, etc.) can occur during the employment interview and that applicants' fairness perceptions of selection procedures can influence attitudes, intentions, and behaviors toward the recruiting organization. This study explores how social robotics ma...
Similar publications
The taking of turns is a fundamental aspect of dialogue. Since it is difficult to speak and listen at the same time, the participants need to coordinate who is currently speaking and when the next person can start to speak. Humans are very good at this coordination, and typically achieve fluent turn-taking with very small gaps and little overlap. C...
Citations
... With robots' growing presence in the modern workplace [1], they are now also being integrated into personnel selection processes, where they can participate in majority decision-making to select candidates [2] or conduct robot-mediated job interviews [3], [4]. While these applications can increase efficiency and consistency, concerns about bias and discrimination remain [5], [6]. ...
As robots become increasingly involved in decision-making processes (e.g., personnel selection), concerns about fairness and social inclusion arise. This study examines social exclusion in robot-led group interviews with the robot Ameca, exploring the relationship between objective exclusion (the robot's attention allocation), subjective exclusion (perceived exclusion), mood change, and need fulfillment. In a controlled lab study (N = 35), higher objective exclusion significantly predicted subjective exclusion. In turn, subjective exclusion negatively impacted mood and need fulfillment, but it mediated only the relationship between objective exclusion and need fulfillment. A piecewise regression analysis identified a critical threshold at which objective exclusion begins to be perceived as subjective exclusion. Additionally, standing position was the primary predictor of exclusion, whereas demographic factors (e.g., gender, height) had no significant effect. These findings underscore the need to consider both objective and subjective exclusion in human-robot interactions and have implications for fairness in robot-assisted hiring processes.
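The piecewise (segmented) regression mentioned here is straightforward to sketch. The snippet below is a minimal illustration only, not the authors' analysis code: the simulated data, the continuous two-segment (hinge) model, and the starting values are all assumptions, and SciPy's curve_fit stands in for whatever segmented-regression tooling the study actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: objective exclusion (share of the robot's attention
# NOT directed at a participant) and self-reported subjective exclusion.
rng = np.random.default_rng(0)
objective = rng.uniform(0.0, 1.0, 35)
true_break = 0.6
subjective = (1.0 + 4.0 * np.clip(objective - true_break, 0, None)
              + rng.normal(0, 0.3, objective.size))

def piecewise(x, intercept, slope1, slope2, breakpoint):
    """Two linear segments joined continuously at `breakpoint`."""
    return np.where(
        x < breakpoint,
        intercept + slope1 * x,
        intercept + slope1 * breakpoint + slope2 * (x - breakpoint),
    )

# Fit all four parameters; p0 seeds the optimizer near plausible values.
params, _ = curve_fit(piecewise, objective, subjective,
                      p0=[1.0, 0.0, 4.0, 0.5])
print(f"Estimated threshold: {params[3]:.2f}")
```

The fitted breakpoint plays the role of the reported critical threshold: below it, additional objective exclusion barely moves subjective exclusion; above it, each increment is increasingly felt.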
... The robot-mediated job interview scenario was inspired by prior research (Kumazaki et al., 2017; Nørskov et al., 2020; Zafar et al., 2021). Even though this is a simulation, we are following the high-quality standards that apply to real interviews. ...
This study investigates ostracism-based social exclusion in multi-person interactions with robots. To examine this phenomenon, we will conduct a laboratory study in which participants engage in a simulated job interview with the robot Ameca acting as the interviewer.
The study compares objective exclusion (measured by the proportion of the robot's attention directed toward each participant) and subjective exclusion (participants' self-reported feelings of being ignored or excluded). We aim to identify the point at which objective exclusion leads to subjective feelings of exclusion and how this impacts need fulfillment. After the interview, participants are allowed to stand somewhere else and are asked why they chose the same or a different standing position. Exploratory analyses will examine whether factors such as gender, height, or physical position (angle) relative to the robot influence the actual or assumed likelihood of being excluded.
... In personnel selection, fairness perception is a core principle in the ethical discussion of AI, as it is directly related to applicants' acceptance of unfavorable outcomes, their impression of the hiring organization, and the organization's overall attractiveness (Ababneh et al., 2014; McCarthy et al., 2017; Narayanan et al., 2024). While people expect algorithms to ensure consistency in information processing and to reduce biased, discriminatory, or disparate outcomes (e.g., Langer et al., 2019; Miller et al., 2018), several empirical studies on applicants' fairness perceptions of AI-driven recruitment processes have yielded contrasting results (e.g., Acikgoz et al., 2020; Folger et al., 2022; Langer et al., 2019; Lee, 2018; Newman et al., 2020; Nørskov et al., 2020). For instance, Lee (2018) found that applicants perceive algorithm-based decisions as less fair than human decisions, particularly in tasks requiring human judgement, such as hiring, due to a general distrust of AI. ...
... As algorithms become increasingly prevalent in organizational settings, extensive research has been dedicated to improving the technical features of algorithms, such as controllability (Lee et al., 2019) and information appropriateness (Harrison et al., 2020), to enhance individuals' perceptions of algorithmic fairness. However, recent studies indicate that increased technical sophistication does not necessarily improve employees' perceptions of algorithmic decision fairness (Köchling & Wehner, 2020; Newman et al., 2020; Nørskov et al., 2020). One possible explanation is that the opacity of algorithms hinders employees from observing and understanding these technical features, leading them to rely on other salient and easily accessible information in the environment to make heuristic judgments, thereby forming perceptual biases (Acikgoz et al., 2020). ...
The application of algorithms in organizations is becoming more widespread. Previous research has aimed to enhance employees’ perceptions of algorithmic fairness by focusing on technical features. However, individuals often struggle to observe and comprehend these features, hindering their ability to form rational fairness judgments. Drawing upon fairness heuristic theory, this study explores how individuals perceive algorithmic fairness when technical features are invisible because of algorithmic opacity. Research conducted with food delivery riders in China suggests that, in the absence of transparent algorithmic information, riders heuristically form perceptions of algorithmic fairness based on more salient and accessible distributive fairness information. These heuristic perceptions of algorithmic fairness further predict outcomes such as task performance and helping behavior. We also found that different distributive fairness information holds varying importance in the process of shaping heuristic perceptions of algorithmic fairness and that algorithmic transparency perceptions and tenure moderate this process. The findings extend fairness heuristic theory and have practical implications.
... In contrast to other vignette studies where robots interact with a human actor (e.g. [49,54]), our vignettes follow a first-person narrative, with the videos visually depicting only the robot. In place of an actor, the user response is narrated (e.g. ...
Young adults may feel embarrassed when disclosing sensitive information to their parents, while parents might similarly avoid sharing sensitive aspects of their lives with their children. How to design interactive interventions that are sensitive to the needs of both younger and older family members in mediating sensitive information remains an open question. In this paper, we explore the integration of large language models (LLMs) with social robots. Specifically, we use GPT-4 to adapt different Robot Communication Styles (RCS) for a social robot mediator designed to elicit self-disclosure and mediate health information between parents and young adults living apart. We design and compare four literature-informed RCS: three LLM-adapted (Humorous, Self-deprecating, and Persuasive) and one manually created (Human-scripted), and assess participant perceptions of Likeability, Usefulness, Helpfulness, Relatedness, and Interpersonal Closeness. Through an online experiment with 183 participants, we assess the RCS across two groups: adults with children (Parents) and young adults without children (Young Adults). Our results indicate that both Parents and Young Adults favoured the Human-scripted and Self-deprecating RCS as compared to the other two RCS. The Self-deprecating RCS furthermore led to increased relatedness as compared to the Humorous RCS. Our qualitative findings reveal the challenges people face in disclosing health information to family members and who normally assumes the role of family facilitator: two areas in which social robots can play a key role. The findings offer insights for integrating LLMs with social robots in health mediation and other contexts involving the sharing of sensitive information.
... In this study, we tested the effect of adding ATT to the luxury shopping context, a technology not generally adopted in luxury boutiques. Thus, using vignettes can help focus participants' attention and clarify the study principles, even if they have no prior experience with the technology [69]. In line with recent studies [70], a video-based vignette experimental study was conducted because this technology is new within the context. ...
... In line with recent studies [70], a video-based vignette experimental study was conducted because this technology is new within the context. Online experimental vignettes have been recognized for their effectiveness in revealing perceptions, attitudes, and behaviors, and for not requiring participants to have solid prior insight into the research topic in question [69]. For instance, field studies on real-life marketing decisions face obstacles such as time and financial constraints [71]. ...
... 1. AI Marketing Activities (AMA): We gathered thirteen items (see Table A1) used in previous studies to measure AI marketing activities (AMA) according to uniqueness, telepresence, delegation, and interactivity [69,70,75–79]. These items were previously adopted to verify how AI technologies affect customer perception and behavior related to customer experiences. ...
Artificial Intelligence (AI) has revolutionized interactive marketing, creating dynamic and personalized customer experiences. To the best of our knowledge, no studies have ventured into how firms in the luxury sector can leverage AI marketing activities to innovate their business model and boost the development of future digital marketing to enhance the luxury shopping experience (LSE). Building on the existing LSE literature and adopting a business model innovation (BMI) lens, we conducted an experimental study to identify how AI-powered try-on technology (ATT) can contribute to LSEs and create customer value proxied by customer satisfaction. In addition, we determined the specific dimensions of the LSE that are most affected by AI marketing efforts. Furthermore, our findings explored the role of AI in driving BMI and the interrelationship between enhanced customer satisfaction and BMI. This research contributes to understanding the crucial role of AI in shaping the future of interactive marketing in the luxury context.
... Dependent variables. Perceived Fairness (PF) was measured using a 7-point Likert scale with questions adopted from previous studies (Nørskov et al. 2020; Bauer et al. 2001; McLarty and Whitman 2016). This scale captures domain-specific fairness through three dimensions: the AVI agent's procedural fairness, interactional fairness, and behavioral intentions. ...
The persistent issue of human bias in recruitment processes poses a formidable challenge to achieving equitable hiring practices, particularly when influenced by demographic characteristics such as gender and race of both interviewers and candidates. Asynchronous Video Interviews (AVIs), powered by Artificial Intelligence (AI), have emerged as innovative tools aimed at streamlining the application screening process while potentially mitigating the impact of such biases. These AI-driven platforms present an opportunity to customize the demographic features of virtual interviewers to align with diverse applicant preferences, promising a more objective and fair evaluation. Despite their growing adoption, the implications of virtual interviewer identities on candidate experiences within AVIs remain underexplored. We aim to address this research and empirical gap in this paper. To this end, we carried out a comprehensive between-subjects study involving 218 participants across six distinct experimental conditions, manipulating the gender and skin color of an AI virtual interviewer agent. Our empirical analysis revealed that while the demographic attributes of the agents did not significantly influence the overall experience of interviewees, variations in the interviewees' demographics significantly altered their perception of the AVI process. Further, we uncovered that the mediating roles of Social Presence and Perception of the virtual interviewer critically affect interviewees' perceptions of fairness (+), privacy (-), and impression management (+).
... For instance, vignette studies assessed the accuracy of psychological diagnoses (Domínguez Martínez et al., 2024), the impact of social classes on diagnoses and treatments (Vlietstra et al., 2021), and the effects of corporate social responsibility (Paruzel et al., 2020) or leadership styles (Steinmann et al., 2020). Moreover, online microaggressions against LGBTQIA+ individuals (McInroy et al., 2024), the relationship between conspiracy beliefs and medical treatment preferences (Fournier & Varet, 2024), lay perceptions of narcissistic traits (Villalongo Andino et al., 2023), how group work fosters procrastination (Koppenborg et al., 2024), responsibility attributions concerning domestic violence (Leon & Aizpurua, 2024) or traffic accidents (Copp et al., 2023), victim blame regarding rape (Tomer & Guter, 2024) and accepting robots as job interviewers (Nørskov et al., 2020) or moral advisors (Arlinghaus et al., 2024;Straßmann et al., 2020) were also explored through vignettes. ...
This paper recommends using AI-generated images in vignette studies. AI tools like OpenAI's DALL·E offer efficient, cost-effective creation of customized visual stimuli. To ensure accessibility, we provide a free Python template and guide, requiring no programming experience or subscription plan. Despite challenges, AI-generated images hold substantial potential to transform research methodologies. This innovative approach particularly benefits pilot projects, students, and early-career researchers with limited resources. You can find the Python template here: http://dx.doi.org/10.13140/RG.2.2.20421.46566
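For readers curious what such a template involves, the sketch below shows a minimal DALL·E call using the openai Python package (v1.x). It is an illustration under stated assumptions, not the authors' template: the model name, prompt, and file handling are placeholders, and the actual template is available at the DOI above.

```python
# Minimal sketch: generating a vignette stimulus with DALL·E via the
# openai package (v1.x). Prompt, model choice, and output handling are
# illustrative; see the authors' linked template for their version.
from urllib.request import urlretrieve
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "A humanoid robot seated at an office desk, conducting a job "
    "interview with a candidate, photorealistic, neutral lighting"
)

response = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    n=1,
    size="1024x1024",
)

# Download the generated image for use as a vignette stimulus.
urlretrieve(response.data[0].url, "vignette_stimulus.png")
print("Saved vignette_stimulus.png")
```

Keeping the prompt fixed except for the manipulated attribute (e.g., the interviewer's appearance) is one way to hold extraneous visual features roughly constant across vignette conditions.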
... In particular, job applicants exhibit negative reactions to these systems (Gonzalez et al., 2022), often due to their concerns about fairness (Acikgoz et al., 2020; Nørskov et al., 2020). Applicants perceive reduced social presence and insufficient interpersonal treatment in AI-based job interviews compared to traditional two-way communication methods, such as in-person interviews (Langer & Landers, 2021; Lukacik et al., 2020; Mirowska & Mesnet, 2021). ...
Artificial intelligence (AI)‐based job interviews are increasingly adopted in organizations' recruitment activities. Despite their standardization and flexibility, concerns about fairness for applicants remain a critical challenge. Taking a perspective on interface design, this research examines the role of avatar characteristics in shaping perceptions of interactional justice in AI‐based job interviews. Through a scenario‐based study involving 465 participants, the impact of avatar characteristics—specifically, appearance, linguistic style, and feedback informativeness—on applicants' perceptions of interpersonal justice and informational justice was investigated. The findings indicate that avatars characterized by a warm and cheerful appearance, coupled with an affective expression style and informative feedback, significantly enhance perceptions of interpersonal justice and informational justice. These insights offer valuable practical guidance for avatar design in AI‐based job interview systems.