Beyond Attention: Investigating the Threshold Where Objective Robot
Exclusion Becomes Subjective
Clarissa Sabrina Arlinghaus1, Ashita Ashok2, Ashim Mandal2, Karsten Berns2, and Günter W. Maier1
Abstract As robots become increasingly involved in
decision-making processes (e.g., personnel selection), concerns
about fairness and social inclusion arise. This study examines
social exclusion in robot-led group interviews by robot Ameca,
exploring the relationship between objective exclusion (robot’s
attention allocation), subjective exclusion (perceived exclusion),
mood change, and need fulfillment. In a controlled lab study
(N = 35), higher objective exclusion significantly predicted
subjective exclusion. In turn, subjective exclusion negatively
impacted mood and need fulfillment but only mediated the
relationship between objective exclusion and need fulfillment.
A piecewise regression analysis identified a critical threshold at
which objective exclusion begins to be perceived as subjective
exclusion. Additionally, the standing position was the primary
predictor of exclusion, whereas demographic factors (e.g., gen-
der, height) had no significant effect. These findings underscore
the need to consider both objective and subjective exclusion in
human-robot interactions and have implications for fairness in
robot-assisted hiring processes.
I. INTRODUCTION
With robots’ growing presence in the modern workplace [1],
they are now also being integrated into personnel selection
processes, where they can participate in majority decision-
making to select candidates [2] or conduct robot-mediated
job interviews [3], [4]. While these applications can increase
efficiency and consistency, concerns about bias and discrimi-
nation remain [5], [6]. At the same time, robot-led interviews
may offer valuable training opportunities, particularly for
migrants and non-native speakers facing linguistic barriers
[7]. Initial studies suggest that migrants perceive robot-
mediated training experiences positively [8].
In this study, we examine social exclusion in robot-
mediated job interviews using a simulated interview setting.
Specifically, we investigate how robot attention distribution
(objective exclusion) relates to subjective exclusion, mood
changes, and need fulfillment. Our work builds on prior
research on social exclusion in human-robot interactions
(HRIs) [9]–[12], and is theoretically grounded in the Tem-
poral Need-Threat Model (TNTM), which explains how
social exclusion negatively affects mood and need fulfillment
(see Figure 1) [13]. The TNTM, studied through the Cyberball
paradigm, shows that receiving fewer ball passes worsens
mood and need fulfillment, even when co-players are robots
[14]–[16]. Our study extends this framework to conversa-
1Bielefeld University, Bielefeld, Germany
2RPTU Kaiserslautern-Landau, Kaiserslautern, Germany
*clarissa sabrina.arlinghaus@uni-bielefeld.de
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may
no longer be accessible.
tional settings, where speaking opportunities in group dis-
cussions mirror ball-passing dynamics in Cyberball.
Fig. 1. Temporal Need-Threat Model (TNTM) (adapted from [13])
Previous research indicates that individuals feel excluded
when robots direct fewer questions to them [9], [17] or when
they are unable to participate in a conversation because the
robots communicate in a language they do not understand
[10], [11]. We explored robot-led job interviews as a novel
setting for studying exclusion, using continuous measures in-
stead of binary inclusion/exclusion conditions. By analyzing
natural variations in robot attention, we identified thresholds
where objective exclusion becomes subjective. To investigate
this transition, we pose the following research question:
Exploratory Question 1. At what level of objective exclu-
sion do individuals begin to feel subjectively excluded?
Attribution is how individuals explain events and expe-
riences [18] and plays a key role in coping with exclusion
[13]. A preceding study suggests that robot exclusion, unlike
human exclusion, is attributed to technical limitations or
programming constraints rather than interpersonal rejection
or workplace bullying, potentially moderating its psycho-
logical effects [19]. To examine attribution in robot-led
interviews, participants selected a new standing position for
a second round and explained their choice. We also assessed
whether they attribute interaction outcomes to positioning
or conversational content, providing insights into whether
exclusion is perceived as self-related (internal) or robot-
related (external). An open-ended feedback section captured
additional explanations of exclusion experiences. Thus, we
formulate our second research question:
Exploratory Question 2. How do participants explain their
experience of objective exclusion by the robot?
Additionally, we examined whether individual factors
(e.g., age, gender, height, standing position) influenced the
likelihood of exclusion. Identifying risk factors was crucial
for ensuring fair and inclusive robot-assisted selection in
group interview settings. By analyzing whether demographic
or spatial variables contributed to exclusion, we aimed to
inform the development of equitable and unbiased hiring
technologies. Thus, we formulate our final research question:
Exploratory Question 3. Are there specific individual factors (e.g., age, gender, height, physical position/angle) that increase the likelihood of being excluded by the robot?
arXiv:2504.15886v1 [cs.HC] 22 Apr 2025
Understanding these factors is crucial, as persistent exclusion in social interactions, whether intentional or unintentional,
can have severe psychological consequences [13]. In the
context of robot-mediated job interviews, it is important
to examine whether reduced robot attention translates into
feelings of exclusion and, in turn, affects the emotional state
and fundamental psychological needs of the participants.
Building on this, we aim to investigate not only the presence
of such exclusion but also its psychological consequences.
Specifically, we expect that reduced robot attention (objective
exclusion) will lead to an increased sense of being ignored
(subjective exclusion), which in turn affects mood and need
fulfillment.
Therefore, we hypothesize that in our proposed setting of
robot-mediated job interviews with social humanoid robot
Ameca1 (see Figure 2, left), reduced robot attention (objective exclusion) increases self-reported feelings of being
ignored (subjective exclusion), thereby negatively impacting
mood and need fulfillment.
Hypothesis 1. Higher levels of objective exclusion worsen
mood, mediated by subjective exclusion.
Hypothesis 2. Higher objective exclusion reduces need
fulfillment, mediated by subjective exclusion.
Our study advances the understanding of exclusion dy-
namics in HRIs, specifically in robot-assisted interviews.
While prior research focused on subjective exclusion, we
systematically compare objective and subjective exclusion
to uncover underlying mechanisms. If certain individuals
receive less attention, they may be disadvantaged, reinforc-
ing existing inequalities and restricting employment access.
Given the link between unemployment and social exclusion [20], ensuring bias-free hiring technologies is essential. To
mitigate these risks, we advocate for rigorous evaluation of
robot-assisted hiring tools to prevent discrimination.
II. METHODS
This study was pre-registered on the Open Science Frame-
work2.
A. Sample
The required sample size was determined via G*Power 3.1 for linear multiple regression (fixed model, R² deviation from zero; f = 0.35 (large effect); α = 0.05; power = 0.80; two predictors), yielding a minimum of 31 participants. In February 2025, we successfully recruited 35 international students (Nm = Nf = 17, No = 1), primarily from India (N = 24, 68.6%), the second-largest nationality at the university after German natives [8]. Participants were aged 22–33 years (M = 26.43, SD = 2.27) and ranged in height from 1.30 m to 1.85 m (M = 1.65 m, SD = 11.46 cm). A majority (N = 19, 54.3%) were in the final stages of their studies, expecting to graduate in 2025, with German proficiency levels ranging from A1 to C1, A2 (N = 12) and B1 (N = 11) being the most common. All participants provided informed consent and received a 5-euro Best-Choice voucher as compensation for their participation. No participants were excluded from the analyses.
1https://rrlab.cs.rptu.de/en/robots/ameca
2https://doi.org/10.17605/OSF.IO/HYB2S
B. Procedure
This study was conducted at RPTU Kaiserslautern (Ger-
many) in collaboration with Bielefeld University, with ethical
approval obtained from both institutions (RPTU: No. 69;
Bielefeld: No. 2025-020-S).
Participants were recruited under the premise of German-language job-interview training for non-native speakers. While the interview was conducted in German, the questionnaires were provided in English to reduce cognitive load and anxiety.
A structured interview format was used, as it enhances reliability and validity [21]. Sessions began with introductions, followed by eight randomized situational and behavioral questions [22] (translated into German), known for their high predictive validity [23]. Each session included five standing participants in a mock group interview, with Ameca autonomously selecting whom to ask each interview question based on Algorithm 1. This algorithm outlines the process of initializing eye gaze listeners and dynamically shifting Ameca’s gaze between participants during the interview.
Previous studies have found discrepancies in the robot’s gaze-tracking algorithm, which motivated this study’s in-depth look at possible AI biases. The algorithm continuously
updates eye targets, adjusts gaze direction, and incorporates
random gaze shifts when no target is detected. Video record-
ings enabled post-hoc analysis of attention distribution, with
objective exclusion quantified by question frequency. Each
session lasted approx. 40 minutes, including the interview,
questionnaires, and debriefing (see Figure 2(right)).
Algorithm 1 Eyegaze Control Algorithm for HR Interview
1: Initialize variables and participant positions (1 to 5)
2: procedure OnActivate
3:   Set camera brightness, add gaze listeners, introduce robot
4: procedure OnDeactivate
5:   Remove gaze listeners
6: procedure OnLookUpdate
7:   Update/reset eye target
8: procedure OnEyesUpdate (executed every frame)
9:   Set/clear target position
10: for each of 8 questions do
11:   if target exists then
12:     Select target from output of OnEyesUpdate(), adjust eye angles
13:     Update gaze, ask question, update target
14:   Wait for response
15: procedure RandomGaze
16:   Occasionally shift gaze if no target
17: End experiment, deactivate gaze control
Fig. 2. Social humanoid robot Ameca from Engineered Arts (left) used in the exclusion study; mock interview flow with the robot (right).
C. Measures
The following section outlines the study variables, with
all instructions adjusted to fit the job interview context.
Objective Exclusion: Objective exclusion is assessed
by analyzing the robot’s attention during the interview
as follows: Objective Exclusion = 1 - (Number of
questions and responses directed to a participant / Total
number of questions and responses to all participants).
Subjective Exclusion: Participants rate two items (e.g.,
“I was ignored”) on a 5-point Likert scale (1 = Strongly
disagree, 5 = Strongly agree) [13], assessing their per-
ceived exclusion during the robot interaction.
Mood Change: Participants rate eight items (e.g., “happy”, “sad”) on a 5-point Likert scale (1 = Not at all, 5 = Extremely) [13] before and after the interview. Mood change is operationalized as the post-measure minus the pre-measure.
Need Fulfillment: Participants rate four items (e.g., “invisible”, “recognized”) on a 9-point Likert scale [24].
Standing Position: Participants select one of five posi-
tions before the interview and may choose a new one
for a hypothetical second round.
Open-Ended Feedback: Participants justify their po-
sition choice, assess whether position or responses
influenced the robot more, and provide open-ended
feedback.
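The objective-exclusion formula above maps directly onto per-participant question counts; a minimal sketch (the counts shown are illustrative, not study data):

```python
def objective_exclusion(counts):
    """Objective exclusion per participant:
    1 - (questions/responses directed to the participant / total across all)."""
    total = sum(counts)
    return [1 - c / total for c in counts]

# Hypothetical counts for a five-person session:
counts = [8, 6, 4, 1, 1]
print([round(v, 2) for v in objective_exclusion(counts)])  # → [0.6, 0.7, 0.8, 0.95, 0.95]
```

Note that under perfectly equal attention every participant's score equals 1 − 1/5 = 0.80, so the measure is bounded well above zero in a five-person group.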
D. Analyses
We assessed normality, linearity, homoscedasticity, and
multicollinearity before conducting statistical analyses in R 4.4.3 (RStudio 2024.04.0). The data set (N = 35) contained no extreme outliers (±3 × IQR).
Mediation Analysis: Two mediation models tested
whether objective exclusion indirectly influenced mood
changes and need fulfillment via subjective exclusion,
using bootstrapped mediation (5000 resamples, Bollen-
Stine test, BCa correction).
Threshold Analysis: Piecewise regression identified
the inflection point where objective exclusion triggers
subjective exclusion.
Qualitative Analysis: Open-ended feedback analyzed
using LLM-Assisted Inductive Categorization [25].
Risk Factors: Multiple regressions examined whether
age, gender, language proficiency, height, and standing
position predicted objective or subjective exclusion.
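The threshold analysis can be illustrated as a grid search over candidate breakpoints for a two-segment linear fit, keeping the breakpoint with the lowest residual error. The sketch below uses synthetic data and numpy; the authors' exact R implementation may differ:

```python
import numpy as np

def piecewise_fit(x, y, breakpoints):
    """Return the breakpoint whose two-segment linear fit minimizes the SSE."""
    best_bp, best_sse = None, np.inf
    for bp in breakpoints:
        # Design matrix: intercept, base slope, and extra slope beyond the breakpoint
        hinge = np.maximum(0, x - bp)
        X = np.column_stack([np.ones_like(x), x, hinge])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if sse < best_sse:
            best_bp, best_sse = bp, sse
    return best_bp

# Synthetic data with a known kink at 0.85 (illustration only, not study data)
rng = np.random.default_rng(0)
x = rng.uniform(0.6, 1.0, 200)
y = 1.0 + 0.5 * x + 8.0 * np.maximum(0, x - 0.85) + rng.normal(0, 0.1, 200)
bp = piecewise_fit(x, y, np.arange(0.70, 0.96, 0.01))
print(round(bp, 2))  # recovers a breakpoint near the planted kink at 0.85
```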
III. RESULTS
A. Mediation Analyses
Robust mediation analyses revealed that objective exclusion significantly predicted subjective exclusion (β = 0.412, SE = 2.174, p = .031). In turn, subjective exclusion significantly predicted both mood change (β = -0.424, SE = 0.083, p = .015) and need fulfillment (β = -0.702, SE = 0.140, p < .001). However, the direct effect of objective exclusion was not significant for either mood change (β = -0.071, SE = 0.714, p = .588) or need fulfillment (β = -0.145, SE = 1.676, p = .213). Similarly, the indirect effect of objective exclusion on mood change via subjective exclusion was not significant (β = -0.175, SE = 0.636, p = .135). In contrast, the indirect effect on need fulfillment was significant (β = -0.289, SE = 2.097, p = .047). The total effect of objective exclusion was not significant for mood change (β = -0.246, SE = 0.749, p = .074), but reached significance for need fulfillment (β = -0.435, SE = 2.486, p = .012). These results are visually depicted in Figure 3.
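The core of a bootstrapped mediation test is resampling the indirect effect a × b (predictor → mediator slope times mediator → outcome slope, controlling for the predictor). A generic percentile-bootstrap illustration on synthetic data follows; it is not the authors' robust Bollen-Stine/BCa pipeline, only the underlying idea:

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect: a from m ~ x, b from y ~ m + x (ordinary least squares)."""
    a = np.polyfit(x, m, 1)[0]                 # slope of mediator on predictor
    X = np.column_stack([np.ones_like(x), m, x])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    b = coefs[1]                               # slope of outcome on mediator, controlling x
    return a * b

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)             # stand-in for objective exclusion (synthetic)
m = 0.5 * x + rng.normal(size=n)   # subjective exclusion, driven by x
y = -0.7 * m + rng.normal(size=n)  # need fulfillment, driven by m

# Percentile bootstrap of the indirect effect (the paper used 5000 resamples)
boots = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boots.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(lo < 0 and hi < 0)  # CI excluding zero → significant negative indirect effect
```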
B. Threshold Analysis
A piecewise regression identified a critical threshold at
0.894 for Objective Exclusion, beyond which Subjective
Exclusion increased more sharply (see Figure 4).
C. Qualitative Analysis
For a (hypothetical) second interview round, Position 4 was the least selected (N = 3), while Position 2 was the most chosen (N = 10). Seventeen participants retained their
initial position, while 18 switched. Reasons for switching
included robot behavior observations (N = 10, e.g., “The
robot seemed to orient more towards its right and had more
eye contact with the person standing in this position”),
strategic positioning (N = 10, e.g., “Standing directly in
front of its gaze might increase my chances of capturing her
attention”), personal preference (N = 5, e.g., “I would want
to be more involved and present in the second and would
want to redeem my confidence by giving better responses”),
and random selection (N = 10, e.g., “There is no specific reason. I am fine with standing anywhere”).
Fig. 3. Higher correlation indicating potential mediation effects.
Regarding factors influencing HRI, 22 participants identified standing
position (e.g., “Standing position is of greater influence”), 7
emphasized response content (e.g., “The content is more im-
portant”), and 4 cited both (e.g., “I think both have an equal
impact on the flow of conversation”). Two mentioned other
factors (e.g., “My height”). In post-experiment feedback, 12
participants commented on question-response dynamics (e.g.,
“More interactive, where I can also ask some queries back to
her”), 9 on gaze/eye contact (e.g., “The eye contact should
be held with the person speaking!”), 6 on speech clarity (e.g.,
“Since I am not very good at German, I would want the robot
to speak more clearly and a little more loudly”), 2 provided
positive feedback (e.g., “It was good”), and 6 refrained from
commenting (e.g., “No”).
D. Risk Factors
Two multiple linear regression analyses were conducted to examine potential risk factors for exclusion. Standing position was the only significant predictor of both objective exclusion (β = 0.057, SE = 0.011, p < .001) and subjective exclusion (β = 0.475, SE = 0.159, p = .006), indicating that individuals’ location in the group influenced both their actual and perceived exclusion. The model for objective exclusion explained 56.1% of the variance (F(5,27) = 6.91, p < .001), whereas the model for subjective exclusion accounted for 27.2% but was not statistically significant (F(5,27) = 2.01, p = .109). A quadratic regression showed that standing position predicted objective exclusion in a curvilinear fashion (p = .020, R² = 60.4%), while the effect for subjective exclusion was not significant (p = .062).
Fig. 4. Results of Piecewise Regression.
To visualize the relationship
for both outcomes, locally estimated scatterplot smoothing
(LOESS) was applied (see Figure 5).
In contrast, gender, age, height, and German proficiency
showed no significant effects (p-values ranging from .626
to .992). An extended model including interaction terms
indicated that gender significantly interacted with age (p
= .004) and standing position (p= .019). The interaction
between gender and height nearly approached significance
(p= .054). In contrast, no interaction effects significantly
predicted subjective exclusion (p-values ranging from .329 to
.888). Finally, the standing position itself was not systemati-
cally associated with any individual characteristics (p-values
ranging from .147 to .835), indicating that participants did
not systematically select their position based on demograph-
ics.
IV. DISCUSSION
This study investigated whether objective exclusion leads
to declines in mood and need fulfillment and whether this
effect is mediated by subjective exclusion. Additionally, we
explored at what level of objective exclusion individuals start
to perceive subjective exclusion, how participants explain
their experience of exclusion, and whether certain individual
characteristics increase the likelihood of being excluded by
the robot.
A. The Role of Subjective Exclusion for Negative Effects
Results showed that objectively excluded participants
also felt subjectively excluded. This aligns with previous
research, where individuals reported subjective exclusion
when they received fewer ball passes [16], were asked fewer
questions [9], [17], were disadvantaged in argumentation
[26], faced language barriers [10], [11], or were ignored
in direct requests [12]. However, objective exclusion does
not automatically impair mood; rather, the crucial factor is
whether individuals perceive themselves as being excluded.
Our findings indicate that subjective exclusion mediates the
relationship between objective exclusion and need fulfillment
but not between objective exclusion and mood change. While
previous studies have shown that objective exclusion affects
psychological needs [15], our findings highlight subjective exclusion as the driving factor, emphasizing its importance as a core variable and manipulation check in future research.
Fig. 5. Standing Position as a Risk Factor for Robot-Induced Exclusion.
B. Objective Exclusion Turning into Subjective Exclusion
The study identified a critical threshold for objective exclusion (0.89), beyond which subjective exclusion increased sharply. This suggests that individuals may tolerate up
to 10% more exclusion before perceiving it as socially
meaningful, highlighting a threshold effect that could in-
form interventions to prevent exclusion in social settings.
Future research should consider a threshold when designing
exclusion manipulations, investigating different group sizes,
and exploring whether a threshold also exists for adverse
psychological effects.
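To put the threshold in context (assuming the study's five-person groups and the objective-exclusion formula from Section II-C): under perfectly equal attention each participant receives 1/5 of the questions, i.e., a baseline objective exclusion of 0.80, so the estimated threshold of 0.894 sits roughly 9 percentage points above the fair share. A minimal check:

```python
# Baseline objective exclusion under equal attention in a 5-person group,
# using the paper's formula: exclusion = 1 - (own share of questions).
group_size = 5
baseline_exclusion = 1 - 1 / group_size  # 0.80 under equal attention
threshold = 0.894                        # estimated breakpoint from the piecewise fit
margin = threshold - baseline_exclusion  # tolerated excess exclusion
print(round(baseline_exclusion, 3), round(margin, 3))  # → 0.8 0.094
```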
C. Explaining the Experience of Exclusion
Results showed that the robot’s inclusion and exclusion behaviors were predominantly attributed externally, to the standing position. High external attribution has also been reported in previous research [19]. Additionally, the current study found a higher proportion of internal attributions, with some participants believing that their German-language responses influenced the interaction, an anxiety commonly experienced by foreign-language learners [8]. This suggests
that individual perceptions of robot exclusion may be shaped
by contextual or self-inflicted factors, warranting further
exploration.
D. Risk Factors for Objective and Subjective Exclusion
Regression analyses further supported participants’
impression that exclusion was mainly influenced by
standing position. Interviewees in position 5 (see Figure
5) received less attention and felt more excluded, yet 17%
still chose it for a second interview. Some aimed to reinte-
grate excluded peers by stepping back themselves, allowing
others to receive more attention - a behavior also observed
during interviews (e.g., encouraging them to speak) and
aligning with prior findings [9]. These results emphasize
the impact of spatial positioning in group dynamics, as physical location affected participants’ likelihood of being addressed or ignored. Considering spatial dynamics in robot-led interviews is essential for ensuring fairness in HRI. To support equitable interactions, HRI developers should design adaptive AI systems that account for spatial bias and ensure balanced engagement across all interlocutors. A rectified algorithm that maintains fair target selection in group interactions, such as mock group interviews with robots, is presented in Algorithm 2.
Algorithm 2 Rectified and Fair Eyegaze Control Algorithm
1: procedure OnEyesUpdate (executed every frame)
2:   if human face is detected then
3:     if eye contact is detected then
4:       Get gaze coordinates (x, y)
5:       target ← SelectTargetFairly
6:       Adjust gaze to target
7:       Ask random question from question list
8:     else
9:       return “No eye contact”
10:  else
11:    return RandomGaze
12: procedure SelectTargetFairly
13:   Identify participants with lowest attention count
14:   Break ties randomly
15:   Increment attention count for selected participant
16:   return selected participant
However, gender, age, height, and
language proficiency did not significantly predict exclusion,
which is encouraging for equitable HRIs. Although prior re-
search warns of gender biases in robot interactions [17], [26],
our findings suggest that women were not disadvantaged
in group conversations. Moreover, women did not system-
atically choose less favorable standing positions, ruling out
self-selection bias.
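The SelectTargetFairly routine of Algorithm 2 amounts to a least-attended-first policy with random tie-breaking; a minimal sketch (data structures are illustrative):

```python
import random

def select_target_fairly(attention, rng):
    """Pick the participant with the lowest attention count, breaking ties randomly,
    then increment that participant's count (as in Algorithm 2)."""
    lowest = min(attention.values())
    candidates = [p for p, c in attention.items() if c == lowest]
    chosen = rng.choice(candidates)
    attention[chosen] += 1
    return chosen

# Five positions, eight questions: counts stay balanced (max - min <= 1)
rng = random.Random(42)
attention = {p: 0 for p in range(1, 6)}
for _ in range(8):
    select_target_fairly(attention, rng)
print(sorted(attention.values()))  # → [1, 1, 2, 2, 2]
```

Whatever the tie-breaks, eight questions over five positions always yield three participants asked twice and two asked once, so no one's objective exclusion can drift far from the fair-share baseline.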
V. CONCLUSION
Overall, our findings contribute to the understanding of
exclusion in HRIs, demonstrating that subjective perception
is key in translating robotic exclusion into psychological
outcomes. Objective exclusion by a robot alone does not
automatically worsen mood or need fulfillment; rather, indi-
viduals exhibit an exclusion threshold, beyond which subjec-
tive exclusion significantly intensifies. Our results highlight
standing position as a primary determinant of exclusion,
while demographic factors play a negligible role. Future
research should explore whether robot behavior adaptations
could mitigate exclusion, particularly for individuals in less
favorable positions, and how spatial positioning interacts
with group size and task structure to foster more inclusive
interactions. By considering both subjective perceptions and
contextual influences, future studies can refine strategies for
designing fair and socially inclusive human-robot interac-
tions.
ACKNOWLEDGMENT
SAIL3 is funded by the Ministry of Culture and Science of the State of North Rhine-Westphalia under grant no. NW21-05A.
DATA AVAILABILITY
The data that support the findings of this study are openly available at: https://osf.io/s7k5u/files/osfstorage
REFERENCES
[1] S. K. Ötting, L. Masjutin, J. J. Steil, and G. W. Maier, “Let’s work together: a meta-analysis on robot design features that enable successful human–robot interaction at work,” Human Factors, vol. 64, no. 6, pp. 1027–1050, 2022.
[2] L. Masjutin, J. K. Laing, and G. W. Maier, “Why do we follow robots?
an experimental investigation of conformity with robot, human, and
hybrid majorities,” in 2022 17th ACM/IEEE International Conference
on Human-Robot Interaction (HRI). IEEE, 2022, pp. 139–146.
[3] H. Kumazaki, Z. Warren, B. A. Corbett, Y. Yoshikawa, Y. Matsumoto, H. Higashida, T. Yuhi, T. Ikeda, H. Ishiguro, and M. Kikuchi, “Android robot-mediated mock job interview sessions for young adults with autism spectrum disorder: A pilot study,” Frontiers in Psychiatry, vol. 8, p. 169, 2017.
[4] S. Nørskov, M. F. Damholdt, J. P. Ulhøi, M. B. Jensen, C. Ess,
and J. Seibt, “Applicant fairness perceptions of a robot-mediated job
interview: a video vignette-based experimental survey,” Frontiers in
Robotics and AI, vol. 7, p. 586263, 2020.
[5] Z. Chen, “Ethics and discrimination in artificial intelligence-enabled
recruitment practices,” Humanities and Social Sciences Communica-
tions, vol. 10, no. 1, pp. 1–12, 2023.
[6] A. Köchling and M. C. Wehner, “Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development,” Business Research, vol. 13, no. 3, pp. 795–848, 2020.
[7] R. Van den Berghe, J. Verhagen, O. Oudgenoeg-Paz, S. Van der Ven,
and P. Leseman, “Social robots for language learning: A review,”
Review of Educational Research, vol. 89, no. 2, pp. 259–295, 2019.
[8] A. Ashok, B. Bruno, T. Helf, and K. Berns, “‘Thanks for the practice!’: LLM-powered social robot as tandem language partner at university,” in Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, 2025, pp. 1221–1226.
3www.sail.nrw
[9] S. Mongile, G. Pusceddu, F. Cocchella, L. Lastrico, G. Belgiovine,
A. Tanevska, F. Rea, and A. Sciutti, “What if a social robot excluded
you? using a conversational game to study social exclusion in teen-
robot mixed groups,” in Companion of the 2023 ACM/IEEE Interna-
tional Conference on Human-Robot Interaction, 2023, pp. 208–212.
[10] L. Stachnick and L. Kunold, “Isolated by robotic co-workers: the impact of verbal ostracism on psychological needs and human behavior,” in Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 1003–1007.
[11] A. M. Rosenthal-von der Pütten and N. Bock, “Seriously, what did one robot say to the other? Being left out from communication by robots causes feelings of social exclusion,” Human-Machine Communication, vol. 6, no. 1, p. 7, 2023.
[12] C. Straßmann, C. Eudenbach, A. Arntz, and S. C. Eimler, “‘Don’t judge a book by its cover’: Exploring discriminatory behavior in multi-user-robot interaction,” in Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 1023–1027.
[13] K. D. Williams, “Ostracism: A temporal need-threat model,” Advances in Experimental Social Psychology, vol. 41, pp. 275–314, 2009.
[14] K. D. Williams and B. Jarvis, “Cyberball: A program for use in re-
search on interpersonal ostracism and acceptance,” Behavior research
methods, vol. 38, pp. 174–180, 2006.
[15] C. H. Hartgerink, I. Van Beest, J. M. Wicherts, and K. D. Williams,
“The ordinal effects of ostracism: A meta-analysis of 120 cyberball
studies,” PloS one, vol. 10, no. 5, p. e0127002, 2015.
[16] H. Erel, Y. Cohen, K. Shafrir, S. D. Levy, I. D. Vidra, T. Shem Tov,
and O. Zuckerman, “Excluded by robots: Can robot-robot-human
interaction lead to ostracism?” in Proceedings of the 2021 ACM/IEEE
International Conference on Human-Robot Interaction, 2021, pp. 312–
321.
[17] S. T. Büttner, M. Goudarzi, and M. Prilla, “Why does the robot only select men? How women and men perceive autonomous social robots that have a gender bias,” in Proceedings of Mensch und Computer 2024, 2024, pp. 479–484.
[18] K. Sanders, “Attribution theory,” in A Guide to Key Theories for Human Resource Management Research. Edward Elgar Publishing, 2024, pp. 44–51.
[19] C. S. Arlinghaus, V. Hörning, C. Wulff, and G. W. Maier, “Asymmetrical team dynamics: Exclusion by robot coworkers hurts less, inclusion by human coworkers satisfies more,” 2025. [Online]. Available: https://doi.org/10.31219/osf.io/3uthv v2
[20] L. Pohlan, “Unemployment and social exclusion,” Journal of Economic Behavior & Organization, vol. 164, pp. 273–299, 2019.
[21] J. Levashina, C. J. Hartwell, F. P. Morgeson, and M. A. Campion, “The
structured employment interview: Narrative and quantitative review of
the research literature,” Personnel psychology, vol. 67, no. 1, pp. 241–
293, 2014.
[22] C. D. S. of North Central State College. Common interview questions–practice list. [Online; accessed 2025-01-21].
[23] P. J. Taylor and B. Small, “Asking applicants what they would do versus what they did do: A meta-analytic comparison of situational and past behaviour employment interview questions,” Journal of Occupational and Organizational Psychology, vol. 75, no. 3, pp. 277–294, 2002.
[24] S. C. Rudert and R. Greifeneder, “When it’s okay that i don’t
play: Social norms and the situated construal of social exclusion,”
Personality and Social Psychology Bulletin, vol. 42, no. 7, pp. 955–
969, 2016.
[25] C. S. Arlinghaus, C. Wulff, and G. W. Maier, “Inductive coding with ChatGPT: an evaluation of different GPT models clustering qualitative data into categories,” 2024. [Online]. Available: https://doi.org/10.31219/osf.io/gpnye
[26] T. Hitron, B. Megidish, E. Todress, N. Morag, and H. Erel, “AI bias in human-robot interaction: An evaluation of the risk in gender biased robots,” in 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 2022, pp. 1598–1605.