Technological Forecasting & Social Change 179 (2022) 121641
Available online 9 April 2022
0040-1625/© 2022 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Employers’ and applicants’ fairness perceptions in job interviews: using a
teleoperated robot as a fair proxy
Sladjana Nørskov a,*, Malene F. Damholdt b, John P. Ulhøi c, Morten Berg Jensen d, Mia Krogager Mathiasen e, Charles M. Ess f, Johanna Seibt g
a Aarhus University, Department of Business Development and Technology, Birk Centerpark 15, 7400 Herning, Denmark
b Aarhus University, Department of Clinical Medicine, Department of Psychology and Behavioural Sciences, Palle Juul-Jensens Boulevard 82, 8200 Aarhus N, Denmark
c Aarhus University, Department of Management, Fuglesangs Allé 4, 8210 Aarhus V, Denmark
d Aarhus University, Department of Economics and Business Economics, Fuglesangs Allé 4, 8210 Aarhus V, Denmark
e Aarhus University, Department of Philosophy and History of Ideas, Jens Chr. Skous Vej 7, 8000 Aarhus C, Denmark
f University of Oslo, Department of Media and Communication, Gaustadalléen 21, Forskningsparken, 0349 Oslo, Norway
g Aarhus University, Department of Philosophy and History of Ideas, Jens Chr. Skous Vej 7, 8000 Aarhus C, Denmark
ARTICLE INFO
Keywords:
Job interview
Fairness perceptions
Fair proxy
Morality
Robot-mediated interview
ABSTRACT
This research examines the perceived fairness of two types of job interviews: robot-mediated and face-to-face
interviews. The robot-mediated interview tests the concept of a fair proxy in the shape of a teleoperated so-
cial robot. In Study 1, a mini-public (n=53) revealed four factors that influence fairness perceptions of the robot-
mediated interview and showed how HR professionals’ perception of fair personnel selection is influenced by
moral pragmatism despite clear moral awareness of discriminative biases in interviews. In Study 2, an experi-
mental survey (n=242) conducted at an unemployment center showed that the respondents perceived the robot-
mediated interview as fairer than the face-to-face interview. Overall, the studies suggest that HR professionals
and jobseekers exhibit diverging fairness perceptions and that the business case for the robot-mediated interview
undermines its social case (i.e., reducing discrimination). The paper concludes by addressing key implications
and avenues for future research.
1. Introduction
The employment interview is “a social interaction where the inter-
viewer and applicant exchange and process the information gathered
from each other” (Macan, 2009, p. 215). It is one of the most commonly
used methods to assess job applicants and a critical organizational ac-
tivity that helps firms secure the necessary workforce to remain
competitive over time (Macan, 2009, p. 215). Despite its importance, the
employment interview has been found to lack objectivity, often caused
by implicit biases (García et al., 2008; Graves and Powell, 1996; Purkiss
et al., 2006), thus giving rise to unintentional but potentially discrimi-
nating biases toward applicants (Rivera, 2012). Implicit biases are very
difficult to control and change (Dobbin and Kalev, 2016; Lai et al.,
2016). They involve the unconscious, rapid and automatic processing of
information and can be in direct contradiction to consciously held values
and beliefs of individuals (Hinton, 2017). Personnel selection may be
biased due to well-known non-job-related factors such as the halo effect,
homophily, homosociality, etc. (Holgersson, 2013; Rivera, 2015). Im-
plicit associations an interviewer may have related to, for instance,
physical appearance, obesity, race, and gender, are also some of the
factors known to unintentionally influence the way applicants are
perceived and evaluated (e.g., Grant and Mizzi, 2014; Heilman and
Saruwatari, 1979; Johnson et al., 2010; Ruffle and Shtudiner, 2015).
Indeed, research has documented that interviewers’ intuition, affective
processes and subjective impressions during job interviews prevail over
applicants’ qualifications and skills (García et al., 2008; Graves and
Powell, 1996; Huffcutt, 2011). As opposed to the rational, conscious,
and somewhat slower cognitive operations of analytical thinking, se-
lection based on intuitive thinking is nonconscious, affectively charged
and based on rapid cognition and thus unavoidably relies on implicit
biases (Dane and Pratt, 2007; Gore and Sadler-Smith, 2011).
Implicit biases are problematic on both ethical and pragmatic
* Corresponding author.
E-mail addresses: norskov@btech.au.dk (S. Nørskov), malenefd@psy.au.dk (M.F. Damholdt), jpu@mgmt.au.dk (J.P. Ulhøi), mbj@econ.au.dk (M.B. Jensen), mia_
mathiasen@hotmail.com (M.K. Mathiasen), c.m.ess@media.uio.no (C.M. Ess), lseibt@cas.au.dk (J. Seibt).
https://doi.org/10.1016/j.techfore.2022.121641
Received 26 February 2021; Received in revised form 21 March 2022; Accepted 24 March 2022
grounds. Ethically, biases of this sort violate basic rights to equal
treatment, respect, and opportunity—i.e., rights that are regarded as
foundational in Western democratic societies (Arneson, 2015). Prag-
matically, when the assessment and selection during the job interview
process are under the influence of subjective impressions, the interview
process may be perceived as less fair by applicants, thus generating
negative reactions toward the hiring organization (McLarty and Whit-
man, 2016). Applicants’ fairness perceptions of the job interview pro-
cess may therefore affect how such stakeholders perceive the hiring
organization and whether they are likely to recommend it to other
jobseekers or potential collaborators with associated effects on the or-
ganization’s reputation. The perceived fairness of the process may also
affect their decision to accept or reject the job (if offered) and even their
job performance and work attitudes if they accept the job (McLarty and
Whitman, 2016; Ryan and Huth, 2008). A biased personnel selection
process may lead to losing out on candidates who are more skilled than
those favored by the bias, as well as to less employee diversity
(Rivera, 2012), which in turn may affect company performance and
innovation as well as workgroup creativity and effectiveness (Hewlett
et al., 2013; Homan et al., 2007; Wang et al., 2019).
Fairness perceptions are related not only to freedom from bias but
also to compatibility with ethical standards, consistency of the proced-
ure across candidates and time, representation of the interests of the
affected parties, accuracy of the information that the selection proced-
ure is based on and availability of mechanisms that are able to correct
inaccurate decisions (Colquitt et al., 2001; Leventhal, 1980). To increase
the fairness perceptions of job interviews, it is relevant to understand
how personnel selection can deal with implicit biases. One approach
relies on training (Dobbin et al., 2015) and aims at increasing in-
dividuals’ cognitive control of behavior (Amodio, 2014). However,
training often has a short-term effect, as implicit biases are very resilient
to change (Dobbin and Kalev, 2016). For instance, even individuals who
are strongly motivated and determined to act without prejudice have
been found to exhibit racial biases at the level of preconscious
decision-making (Amodio, 2014). A second approach involves altering
structural conditions in which biases emerge. In personnel selection,
studies show that structural changes in selection procedures can affect
employer biases. For instance, research has documented that relying on
structured rather than unstructured job interviews can reduce biases
toward candidates (Bragger et al., 2002; Kutcher and Bragger, 2004),
and Gilliland (1993) proposed that structured interviews indicate
greater consistency, which leads to higher fairness perceptions. None-
theless, structured interviews are only able to reduce interviewer bias
rather than eradicate it (Aamodt et al., 2006). This means that biases
related to, for instance, overweight (Kutcher and Bragger, 2004), race
(de Kock and Hauptfleisch, 2018), pregnancy (Bragger et al., 2002), etc.,
may decrease but will remain present in structured interviews to a
certain extent. An important reason for this is visual cues that have been
documented to affect interviewers’ judgments (DeGroot and Moto-
widlo, 1999). In addition to introducing more structure to job in-
terviews, another option is to reduce biases by relying on joint rather
than individual evaluations of candidates (Bohnet et al., 2016). A study
on the audition procedures of symphony orchestras further showed that
an intervention at the interface, i.e., using a curtain between the selec-
tion committee and the candidates, increased the impartiality of the
committee and led to significantly more female musicians being selected
(Goldin and Rouse, 2000). The latter study in particular indicates the
potential in directly addressing the job interview setup, as it manipulates
the traditional face-to-face interaction to make it less conducive to
discrimination. When a selection procedure is perceived as neutral, i.e.,
as being “based on a full and open accurate assessment of the facts” (p.
768), it improves fairness perceptions related to that situation (Lind
et al., 1997). A neutral procedure may create more focus on objective
criteria and knowledge about candidates, which can be used to achieve
fairer hiring decisions (Goldin and Rouse, 2000).
These considerations motivated us to test a novel structural approach
aimed at increasing applicant fairness perceptions in job interviews. As
reported in this paper, we examined what happens when the traditional,
face-to-face employment interview setup is replaced by a robot-
mediated setup. In recent years, employment interviews have
involved, to various degrees, the use of technology-mediated techniques
such as phone, video conference, or recorded digital interviews (Langer
et al., 2017) as alternatives or supplements to face-to-face interviews.
The present study taps into this line of research by assessing the use of
robot-mediated technology for possible effects on perceived fairness in
employment interviews. A previous study examined how a
robot-mediated job interview affects fairness perceptions (Nørskov
et al., 2020, p. 1). The study, however, investigated a setup in which
both the interviewer and the applicant were visually anonymous, i.e., a
type of a double-blind interview based on “symmetrical visual anonym-
ity”, in which both parties are represented by a teleoperated robotic
proxy (Nørskov et al., 2020, p. 1). In contrast, in this study, we examine
a setup based on asymmetrical visual anonymity, or what could be
termed a single-blind interview, in which only the applicant is repre-
sented by a teleoperated robotic proxy, while the interviewer is visible to
the applicant via a computer screen. This is in line with recent research
on “fair proxy communication”, i.e., a setup for interpersonal commu-
nication where the perceptual biases of the decision-maker ‘cannot get
off the ground’ because the conversation partner is present only by a
proxy (Seibt and Vestergaard, 2018). Asymmetric telepresence for the
purpose of reducing the biases of decision-makers has been shown to be
effective in the context of conflict facilitation (Druckman et al., 2021).
Since a job interview is a situation where decisional power is asym-
metric, it thus requires an asymmetric setup (Seibt and Vestergaard,
2018). The decisional power of employers also entails their legal and
moral duty to ensure proper treatment of candidates. There may, how-
ever, be modifications to this power imbalance between employers and
candidates, for instance, in times of labor and skills shortage. Another
difference between our study and prior research is that Nørskov et al.
(2020) based their work on respondents who were bachelor’s students
with limited job interview and job search experience, while our study
relies on HR professionals (recruiters, consultants, HR partners, etc.),
and current jobseekers. Finally, we examine both the employers’ and
applicants’ perspectives, while Nørskov et al. (2020) only investigated
the latter.
More specifically, our study examines (i) the use of a “fair proxy”
representing the job applicant (in the shape of a teleoperated social
robot) and (ii) its impact on applicants’ and employers’ fairness per-
ceptions of the employment interview process. Our overall research
question is therefore whether replacing a face-to-face job interview with
a robot-mediated job interview affects the perceived fairness of the job
interview from the perspectives of the applicant and the employer and in
what ways.
The paper extends prior research on job interviews and technology
mediation by demonstrating the diverging views of HR professionals and
applicants on robot mediation in job interviews, which suggest that HR
professionals’ focus on the ‘business case’ for diversity, i.e., that more
employee diversity will lead to better nancial outcomes, undermines
the ‘social case’ for diversity, i.e., increasing diversity because it is a
socially responsible thing to do. The paper further demonstrates the
potential of a new technology-based perspective on how to deal with
biases in hiring, namely, through robot mediation in job interviews. In
the following, we first review the relevant literature and present our
arguments for how robot mediation could impact fairness perceptions in
job interviews. Next, we report the results of two studies and discuss the
findings and their implications.
2. Theoretical background
2.1. Applicants’ fairness perceptions and job interviews
Research on applicant reactions to selection processes investigates
“attitudes, affect, or cognitions an individual might have about the
hiring process” (Ryan and Ployhart, 2000, p. 566). As noted by Gilliland
(1993), fair selection procedures are relevant from business, ethical, and
legal perspectives. First, reactions to personnel selection procedures can
negatively affect the corporate brand of the hiring organization and thus
the organization’s ability to attract and hire well-qualified applicants.
Applicants’ perceived fairness of job interviews also holds potential to
affect the hiring organization’s reputation, its ability to attract qualified
job candidates, its collaboration partners, and its capacity to secure high
work performance and positive organizational citizenship behavior
(Bauer et al., 1998; Gilliland, 1993; McCarthy et al., 2017). Second,
hiring organizations should, from an ethical perspective, be concerned
with applicants’ well-being during the interview. Increasing fairness
perceptions, for example, promotes applicants’ self-esteem, self-efficacy,
and well-being (Gilliland, 1993; Schuler, 1993). Third, perceived un-
fairness caused by discrimination during the selection procedure may
lead to applicants’ decisions to pursue legal discrimination cases. While
research has generated several recommendations on how to improve
applicant reactions to the selection process—e.g., job relatedness, giving
feedback, providing selection information (for a recent review, see
McCarthy et al., 2017), and thereby also applicant fairness perceptions,
implicit biases in the employment interview remain an important but
unresolved issue.
The employment interview is an inherently interpersonal process
(Rivera, 2012). This is a situation where subjective impressions and
affective processes, e.g., similarities and liking, gain more significance in
hiring decisions than candidates’ qualifications and cognitive skills
(García et al., 2008; Graves and Powell, 1996; Huffcutt, 2011). Assess-
ment and selection processes are influenced by mechanisms such as halo
effects and first impressions (Howard and Ferris, 1996). Positive as well
as negative affective reactions are thus likely to be at play during the
formation of a first impression. Moreover, it may often be felt immedi-
ately and by both parties. Such affective reactions to unplanned stimuli
trigger an automatic first reaction, which in turn may influence how
individuals process and judge information (Zajonc, 1980), thus chal-
lenging the validity of the traditional employment interview as a se-
lection method.
For instance, employers have been found to be more likely to choose
candidates who possess a better cultural fit (Rivera, 2012, 2015) and
whose backgrounds seem similar to their own (Bertrand and Mullaina-
than, 2004; Cotton et al., 2008; Kang et al., 2016). Similar mechanisms
apply to gender. Homosociality—the preference for relations with the
same gender—and discrimination are therefore two sides of the same
coin (Holgersson, 2013). Behaviors such as laughing, the use of humor,
and engagement with the interviewer during a job interview have also
been documented to affect hireability (Paulhus et al., 2013). Rivera
(2015), for example, found that interviewers’ emotional responses to
candidates played a key role in their assessment and selection decisions,
leading to biased hiring outcomes. In fact, emotion and homophily (i.e.,
the tendency to have ties with people who share similar sociodemo-
graphic, behavioral, and personal characteristics) were the most prev-
alent factors affecting the assessment of candidates in job interviews
(Rivera, 2015). Complementing this line of research, a recent study
found that the way job candidates display emotions during job in-
terviews affects the likelihood of them being hired (Bencharit et al.,
2018). This effect is positive when the emotions (i.e., calm or excited)
displayed by candidates match the interviewer’s cultural preferences for
conveying emotions in job interviews (Bench-
arit et al., 2018).
Physical attractiveness has also been documented to affect hiring
decisions. It has been found that being an attractive man (versus a plain-
looking man) provides a significant advantage when applying for a job.
Surprisingly, however, being an attractive woman (versus a plain-
looking woman) has the opposite effect on the applicant’s chances of
being selected for further consideration in the hiring process (Ruffle and
Shtudiner, 2015). With particular relevance to gender, physical
attractiveness has been found to be capable of exerting an adverse effect
(the “beauty-is-beastly” effect). Physical attractiveness can, for example,
be a disadvantage for women applying for masculine jobs traditionally
filled by male employees (Heilman and Saruwatari, 1979; Johnson et al.,
2010). Research on other potentially stigmatizing aspects of applicants’
physical appearance has shown that obese applicants tend to be
discriminated against during the selection process (Grant and Mizzi,
2014). These findings suggest that implicit biases can lead to uninten-
tional discrimination regarding appearance (race, gender, body size,
etc.) and behavioral cues (e.g., displays of emotion). Access to the labor
market is thus not necessarily equally available for certain groups of
candidates despite their having the required or even higher qualifica-
tions (Gaddis, 2015; Villadsen and Wulff, 2018). These issues may affect
applicants’ reactions to personnel selection in general and to face-to-face
job interviews in particular and point to the need to reconsider the
interview setup. In addition to considering the setup itself, in Study 2,
we further include two individual-level factors that may influence
applicant fairness perceptions, namely, core self-evaluations (CSEs) and
personal innovativeness. Indeed, research has shown that CSEs affect
applicants’ reactions to selection procedures (McLarty and Whitman,
2016). CSEs refer to “fundamental appraisals that people make of their
own self-worth, competence, and capabilities” (Chang et al., 2012, p.
82) and have been found to be positively related to fairness perceptions
(McLarty and Whitman, 2016). Due to the novelty of the use of robots in
job interviews, we also consider personal innovativeness within the
domain of interactive technologies (to which robotics belong), as
domain-specific personal innovativeness has been shown to affect
innovation adoption within the domain (Roehrich, 2004), which may
influence how robots are perceived in job interviews.
2.2. Employers’ fairness perceptions and job interviews
While research on fairness perceptions has rightfully focused on the
applicant perspective, examining the employer perspective on the fair-
ness of new selection methods is important to understand the factors
that promote and/or prevent their adoption. The employer perspective
is especially relevant because extant research documents that HR pro-
fessionals are aware of unintended biases related to face-to-face in-
terviews, and yet they still prefer to rely on intuition during this process
(Highhouse, 2008). The chemistry and emotional connection with can-
didates thus remain key factors in applicant selection (Rivera, 2012,
2015; Rynes et al., 2002), although alternative and potentially fairer
interview methods are available, which have been documented to
generate more accurate judgments (Highhouse, 2008; Kuncel et al.,
2013). Indeed, the failure to adopt more effective selection practices has
been widely documented (Rynes et al., 2002). This rejection of more
effective methods may stem from personal preference, practitioners’
beliefs, convenience, reluctance to change, and costs related to switch-
ing to alternative methods (Dana et al., 2013; Rynes et al., 2002). In
regard to fairness perceptions of different selection methods, practi-
tioners may have unique views in this respect. A method that is
perceived as fair by applicants may not necessarily be considered as such
from an employer’s perspective. However, to our knowledge, research
has not yet examined such differences.
The use of robots to mediate job interviews toward more objective
outcomes may be rejected for similar reasons as other assessment and
selection methods (e.g., tests, structured interviews, mechanical com-
bination of applicant information). The reason is that robot mediation may
be perceived as obstructing practitioners’ preference for relying on intuition
and experience when making selection decisions, and thus as less fair from
their perspective. On the other hand, robot-mediated interviews may
offer a chance to limit applicants’ impression management tactics,
which are intended to create a misleading impression of the applicant
(Cuddy et al., 2015) and instead fully focus on their knowledge and
skills. This possibility could be perceived as attractive from the employer
perspective and positively affect their perception of the fairness of the
interview setup.
Furthermore, similar to other professions, human resource man-
agement is confronted with ethical dilemmas as practitioners try to meet
profit goals (Schumann, 2001). For example, pressures to reduce costs or
urgently fill a job position might lead practitioners to choose selection
procedures that speed up the selection process in ways that are
perceived as less fair by applicants but are convenient for practitioners.
On the other hand, with respect to the robot-mediated job interview, HR
professionals may find that it could offer a chance for their organizations
to signal high standards of ethical business conduct. Such standards are
not only ethically justied per se; they may also help to enhance the
reputation of the company or enterprise and thereby contribute to
maximizing profits. Moreover, the robot-mediated interview may be
viewed as a way of living up to the moral responsibility entailed in
personnel selection by potentially making the process fairer and
reducing discrimination. Conversely, employers may perceive the
robot-mediated job interview as unfair because it does not lead to a fair
distribution of benefits (e.g., hiring the best candidate) and costs (e.g.,
having to hire and work with males when one prefers female coworkers,
even though this preference is based on prejudice) from the company
perspective (Schumann, 2001). We explore these perspectives in Study
1.
2.3. Technology-mediated job interviews
A few attempts have previously been made to test various
technology-mediated interviews’ effects on fairness perceptions and
biases. The previously mentioned study that compared a face-to-face job
interview with a robot-mediated interview found that the face-to-face
interview was perceived as fairer (Nørskov et al., 2020). Nonetheless,
as the authors also noted, since their respondents were bachelor’s stu-
dents with limited job interview experience, they did “not reflect a
representative sample of job applicants and the associated probabilities
of experiencing discrimination” (Nørskov et al., 2020, p. 15), thus call-
ing for further research in the area of robot mediation in job interviews.
In a technology-mediated interview of 416 undergraduates based on
the use of avatars, Behrend et al. (2012), for example, identied a
similar “beauty effect” known from face-to-face interviews. They
examined the impact of the avatars’ attractiveness on online employ-
ment interview ratings and found that applicants with more attractive
avatars received more favorable interview ratings. Technology is thus
not immune to biases, and research confirms that people respond to
social behaviors and features displayed by both human-like and
non-human-like robots and technologies in similar ways as they respond
to other people (e.g., Breazeal, 2002; Reeves and Nass, 1996), thus
transferring social norms as well as gender and racial stereotypes and
same-ethnicity favoritism to their interaction with robots (Eyssel and
Hegel, 2012; Gong, 2008). For this reason, the current study utilized a
teleoperated robot, the Telenoid, which is based on a minimal design
approach (Ishiguro, 2016). The Telenoid’s appearance and behavior are
based on minimal human embodiment, and it thus “appears as both male
and female, as both old and young” (Seibt and Vestergaard, 2018, p. 9).
Prior experimental studies show that the robot was perceived as “a
generic human being” (p. 9) and that a lack of visual cues and social
identities (gender, age, etc.) made it easier for the participants to focus
on the conversation (Seibt and Vestergaard, 2018). For this reason, the
Telenoid was found to be suitable for testing in the job interview
context.
2.4. Fair proxy communication in the employment interview
To increase fairness perceptions of the job interview, we examine the
use of robots as a fair proxy communication (FPC) technology during the
employment interview. FPC is defined as “a specific communicational
setting in which a teleoperated robot is used to remove perceptual cues
of implicit biases in order to increase the perceived fairness of decision-
related communications” (Seibt and Vestergaard, 2018, p. 1). In a
robot-mediated job interview, during which the Telenoid functions as a
possible fair proxy for the applicant, the applicant and the interviewer
are seated in two different rooms (Fig. 1). The interviewer sits opposite
the robotic proxy that represents the applicant. The applicant sits in
front of a computer screen via which she can see the interviewer. The
robot is teleoperated by the applicant, and it has a built-in camera on its
forehead, which is used to transmit the visual image of the interviewer
on the computer screen.
A robot-mediated interview is capable of eliminating visual cues
associated with the applicant’s individual physical appearance, thus
holding the potential for reducing some of the existing biases associated
with a person’s body size, gender, ethnicity, age, etc. This provides a
situation in which both job interviewers and applicants can focus more
on the applicant’s knowledge, skills, and abilities (Gilliland, 1993).
Indeed, Chapman and Rowe (2001) found that applicant competency
ratings received a higher grading in video conference-based interviews
than in face-to-face interviews. The authors speculated that having a
technology-based communication medium might have reduced appli-
cant anxiety, resulting in higher performance (Chapman and Rowe,
2001).
Other studies have found that job interviews conducted via video
conferences or telephone score lower on fairness perceptions than face-
to-face interviews (Sears et al., 2013). On the one hand, this finding may
suggest that, regardless of the type of technology used in job interviews,
job interviews relying on any technology will always be perceived as
being less fair. On the other hand, one could imagine that different
technologies may have different effects on fairness perceptions in job
interviews. Using a teleoperated robot as a fair proxy may be a more
effective communication technology for job interviews than video con-
ferences, telephone, etc. The reason is that embodied agents have a
physical body and are physically present in a job interview situation.
These characteristics are expected to make a robot more engaging and to
elicit more favorable psychological responses, e.g., empathy and trust,
and a greater sense of social presence compared to communication via a
screen or a telephone (Li, 2015; Seo et al., 2015). If, in addition to these
advantages, a teleoperated robot as a fair proxy is able to reduce or
eliminate biases from the job interview, it is plausible that this type of
interview could yield higher perceptions of fairness than a face-to-face
job interview. Our main proposition is therefore that fairness percep-
tions will be higher in the robot-mediated job interview.
3. Study 1: mini-public
Study 1 was designed to explore in-depth how the robot-mediated
job interview is perceived and what factors influence the fairness per-
ceptions of such interviews. The study employed a deliberative mini-
public design. A mini-public is a method of engaging citizens and pro-
moting deliberation around a certain topic or issue of relevance to the
public (Smith and Setälä, 2018). A mini-public was found to be partic-
ularly suitable for three main reasons. First, it promotes deliberation and
discussion around complex and/or controversial topics (Smith and
Setälä, 2018). Second, because discussions are facilitated, it incites
participants to explain and substantiate their views and to respectfully
pay attention to those of others (Roberts et al., 2020). Third, and most
importantly, during a mini-public, participants’ knowledge about the
issue should increase (Roberts et al., 2020). This third reason is partic-
ularly relevant because of the novelty of the robot-mediated job in-
terviews. By promoting information sharing and learning about this new
job interview approach, a mini-public assists in exposing an extensive
range of reasons and arguments for and against the proposed concept.
3.1. Participants
Seventy-six individuals accepted an open invitation to participate in
a mini-public entitled “Robots in recruitment and hiring processes”.
However, 23 participants cancelled their participation at the last minute
for various reasons (illness, conflicting work appointments, etc.). Thus,
the mini-public was attended by 53 participants (26 males). Participants
were not compensated for their participation. Information on work title
or work area was not obtained to ensure participant anonymity. How-
ever, during the roundtable discussions, the participants introduced
themselves to each other, and it became clear that only 11 percent of the
participants were unemployed jobseekers and that the majority of the
employed participants (76 percent), regardless of sector, held jobs that
were related to human resource management, i.e., HR managers, HR
consultants, owners of small and medium-sized recruitment agencies
and similar. In the rest of the paper, we refer to those participants as HR
professionals. Due to noise in the sound recordings, it was not possible to
identify the background information of six participants, so the actual
number of HR professionals may be slightly higher, as several statements
of some of these unidentied participants during the roundtable dis-
cussions indicated professional experience in recruitment and selection.
3.2. Procedure
The invitation to the mini-public was published on Aarhus Uni-
versity’s website and shared through the university’s official online
communication channels, i.e., LinkedIn, Twitter, and Facebook. The
research team members also shared the invitation through their Link-
edIn, Twitter, and Facebook accounts. Both the invitation and the mini-
public were in Danish. The mini-public event was organized as a setup
where interested stakeholders could form, express, and explain their
opinions about robotics in recruitment and selection through pre-
sentations from experts, roundtable debates, and short polls. When the
participants arrived, they received further information about the study,
and verbal and written consent was obtained. They were then assigned a
unique ID number and randomly allocated to a seat at one of eight
roundtables (each table holding approximately seven participants).
The mini-public lasted three hours. The structure of the mini-public
consisted of four main elements, some of which were repeated several
times: i) an on-stage robot-mediated job interview, ii) expert pre-
sentations, iii) roundtable discussions, and iv) a poll. While Table 1
Fig. 1. The robot-mediated job interview, from the perspective of the interviewer (a) and the job candidate (b)
Table 1
The Mini-Public Program.
14.00–14.20 Check-in, coffee and introduction to Mentimeter
14.20–14.30 Welcome and the first Mentimeter poll (T1)
14.30–14.40 Demonstration of a robot-mediated job interview
14.40–14.50 Presentation of the research group by one of its members
14.50–15.15 Roundtable discussions and the Mentimeter poll (T2)
15.15–15.25 Expert presentation #1
15.25–15.35 Expert presentation #2
15.35–16.00 Roundtable discussions and the Mentimeter poll (T3)
16.00–16.10 Expert presentation #3
16.10–16.20 Expert presentation #4
16.20–16.35 Roundtable discussions and the Mentimeter poll (T4)
16.35–16.50 Q&A
16.50–17.00 Wrap-up
shows the mini-public program and the exact sequence of the activities,
here we focus on explaining the logic behind these activities. The first
element, a short robot-mediated job interview, was performed on stage.
It lasted approximately two minutes. It showcased a job interview via a
robotic proxy (resembling the setup in Fig. 1a). We used a teleoperated
android robot, Telenoid R1, developed by the Japanese robotics lab ATR
Hiroshi Ishiguro Laboratories. The Telenoid is designed to display
minimal human embodiment.
Second, during the mini-public, four experts gave 10-minute pre-
sentations on how new technologies are used or could be used in the
recruitment and selection processes. The first expert was an entrepre-
neur who runs a large network for female entrepreneurs and who is
experienced within the domain of technology and biases. The second
expert was a career advisor at a professional association for technical, IT,
and natural sciences professionals who talked about technology in
general and robotics in particular in recruitment, assessment, and se-
lection processes. The third expert worked with digital transformation
and digital business development in a consultancy firm and discussed
robotics and biases. Finally, the fourth expert was a local politician
whose political party was the first to try out anonymizing parts of the job
application process. He discussed their experiences with this process.
The overall purpose of the presentations was to show different and
contrasting perspectives on technology in the recruitment, selection,
and assessment processes. The presentations were expected to stimulate
roundtable discussions.
Third, in between the expert presentations, roundtable discussions
took place. The discussions at each table were facilitated by either one of
the researchers involved in the project or one of the four experts. The
facilitators were expected to remain neutral in the discussion and merely
ensure that the discussion continued and that everyone at the table got a
chance to express their opinion. An audio recorder was placed at each
table, and it remained on during the entire event. Unfortunately, due to
a human error, a recorder at one of the tables was turned off after 25
minutes, which meant that the rest of the debate at that table was not
part of the analysis. The recordings totaled 18 hours and 21
minutes. All the debates were transcribed and subsequently analyzed.
Finally, the participants were asked to fill out a poll, which consisted
of five items (Table 2). Four of those items assessed attitudes toward
robots in job interviews and the effects of technology on equality and
diversity in the job market. These items were measured on a five-point
Likert scale ranging from “strongly agree” to “strongly disagree.” The
final item was open-ended. The poll was conducted via Mentimeter and
repeated four times throughout the event (marked as T1-T4 in Table 1)
to detect changes in attitudes toward robots in recruitment. Responses
were anonymous and logged on the participants’ ID numbers.
3.3. Data analysis
We relied on an inductive approach and used NVivo 12 to code and
analyze the transcribed qualitative data collected during the mini-
public. The aim of the analysis was to explore and understand the fair-
ness perceptions of the concept of a robot-mediated employment inter-
view. The coding process consisted of open, axial, and selective coding
(Strauss and Corbin, 1998). Initially, two researchers conducted the
coding independently, and through discussion and consensus, a list of
first- and second-order categories was developed. The remainder of the
analysis, including the refinement of the identified categories as well as
the development of aggregate theoretical dimensions, was conducted by
one of the two researchers, who relied on discussions of data excerpts
and codes with the coauthor team to resolve dilemmas during the
analysis (Saldaña, 2013). In each phase, the coding process involved a
constant comparative method aimed at identifying the categories and
their properties and relationships relevant to understanding the partic-
ipants’ fairness perceptions of the robot-mediated job interview (Locke,
2001; Strauss and Corbin, 1998). The cyclical process between the
emerging theory and data allowed us to refine categories and their
properties and relationships and thus develop and clarify our theoretical
insights. These insights were then compared with existing research
(Eisenhardt et al., 2016). Based on this process, we developed our
explanatory framework for fairness perceptions of the robot-mediated
job interview.
To analyze the poll that was conducted at the mini-public, a series of
one-way repeated-measures ANOVAs were conducted exploring the ef-
fects of time on four distinct questionnaire items, i.e., those related to
preference for being interviewed by a robot, beliefs that robots could
have a positive influence on securing a job, preference for showing one’s
whole person, and the belief that technology will help to increase
equality.
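As an illustration of this analysis step, the following is a minimal sketch, not the analysis script used by the authors, of how a one-way repeated-measures ANOVA for a single poll item could be run in Python with statsmodels. The data file, the long-format layout, and the column names (participant_id, time, rating) are our assumptions for illustration, not details reported in the paper.

# Minimal sketch (not the authors' script) of a one-way repeated-measures ANOVA
# for one poll item. Assumed (hypothetical) long-format data: one row per
# participant per time point, with columns participant_id, time (T1-T4), and
# rating (1-5 Likert). AnovaRM requires a balanced design, i.e., a rating from
# every participant at all four time points.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("mini_public_poll_item1.csv")  # hypothetical file name

# Within-subject factor: time of measurement (T1-T4); dependent variable: rating.
result = AnovaRM(data=df, depvar="rating", subject="participant_id",
                 within=["time"]).fit()
print(result)  # F statistic, degrees of freedom, and p-value for the effect of time

One such model would be fitted separately for each of the four questionnaire items; participants who missed a poll round would first have to be excluded, since this repeated-measures design assumes complete data.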
3.4. Results
The analysis of the mini-public data revealed some positive but
mainly negative fairness perceptions of the robot-mediated interview.
The analysis resulted in two aggregate theoretical dimensions: i) factors
influencing the fairness perceptions of the robot-mediated job interview
and ii) moral pragmatism of HR professionals (i.e., pragmatic stance
toward handling conflicting moral perspectives), which is triggered by
the robot-mediated job interview (Tables 3 and 4).
3.4.1. Factors behind the fairness perceptions of the robot-mediated
interview
With respect to this first theoretical dimension, four factors were
found to influence the participants’ fairness perceptions: i) dehuman-
ization of the job interview, ii) ensuring “the good match” between the
candidate and the job/organization, iii) false objectivity as a conse-
quence of the robot-mediated job interview, and iv) the robot-mediated
job interview as a symptomatic treatment of discrimination (Table 3).
Each of these factors has negative and/or positive effects on fairness
perceptions and reveals important differences between the HR pro-
fessionals’ and jobseekers’ perspectives on fairness perceptions, which
are presented and discussed below.
3.4.1.1. Dehumanization of personnel selection. The rst factor, dehu-
manization of personnel selection, reveals the concern and the percep-
tion that by using robots to mediate the employment interview,
important features of human nature are being denied in the personnel
selection process. Typical characteristics of human nature, such as
emotional responsiveness, interpersonal warmth, and depth (Martínez
et al., 2017), are eliminated from the interaction. Dehumanization de-
creases fairness perceptions and is attributed to three perceived effects
of the robot-mediated interview: i) it signals that the employer gives
Table 2
The Mini-Public Survey Items.
Items Scale
I would prefer to be interviewed by a robot at my next job interview: 5-point Likert scale (Strongly disagree... Strongly agree)
I think that robots in job interviews would have a positive effect on my chances of being offered a job: 5-point Likert scale (Strongly disagree... Strongly agree)
I prefer to show my entire personality during a job interview: 5-point Likert scale (Strongly disagree... Strongly agree)
I think that technology can help improve equality and diversity in the labor market: 5-point Likert scale (Strongly disagree... Strongly agree)
Please add any additional comments regarding the mini-public program, the presentations, your experiences as a participant, or anything else you would like to share: Open-ended
lower priority to the interview, ii) it removes a candidate’s personality
from the interaction, and iii) it removes intuition and emotions, which
are seen as central to the selection process.
3.4.1.1.1. Deprioritizing the job interview. The robot-mediated setup’s
attempt to reduce visual cues is perceived as reducing human-human
interaction. As a consequence, it is seen as giving a lower priority to
the applicants and the interview, as it is perceived to alienate applicants
and interviewers.
P6: There is a certain alienation in it. If I apply for a job in a company,
if I put myself in that situation, and whoever hires me—whether it is my
manager or an HR committee or a recruiting company that does the
recruitment […], and I have to sit and talk to some microphone and then
they hear my voice in the other room. (Roundtable #4)
3.4.1.1.2. Removing personality. Another related perceived disadvan-
tage of the robot-mediated setup is that it removes personality:
P2: In a way, I think that it removes personality.
P1: Yes, exactly.
P5: Yes, that’s it. And if you remove personality, then you make us
into robots as well. (Roundtable #1)
Removing personality from the robot-mediated job interview seems
to lead to technological dehumanization of the job interview, which is
expected to be of a deeply interpersonal character. The participants
stressed personality as an essential parameter in the applicant selection
process because it is easier to develop a candidate’s lacking competences
than it is to develop or change the incompatible personality of the
candidate:
P6: We always try to turn it upside down and pressure our clients
[…] and say: but he or she has the perfect personality to be able to create
a new perspective on those tasks. [She or he] lacks 20 percent at the
competence level, but we can build upon that. (Roundtable #2)
3.4.1.1.3. Removing intuition and emotions. Furthermore, objectivity as
the ultimate goal of the job interview was disputed. The participants
argued that selection decisions should be made both by intellect and
intuition, but especially the latter. Intuition and emotions were viewed
as essential to personnel selection, emphasizing the importance of the
“human factor” (Roundtable #3) and the need to connect to a candidate
at a personal level: “[…] on LinkedIn, for instance, we look and say ‘that
skill, that skill, those skills’. There’s nothing human about that.” (Round-
table #5). Not being able to use one’s intuition and emotions in the
selection process is compared to “giving up on what makes us human”:
P3: I think we can find and build many systems that can help us
become more objective, but when all is said and done, I also don’t mind if
the final decision is made both with sense and with heart and with
intuition [snaps fingers] or what you would call it, right. That I think is
good. And I think that robots will have difculties with that.
P6: I think so, too.
P2: Otherwise it would be equal to letting go of your entire hu-
manity, right. We can’t do that. (Roundtable #2)
3.4.1.2. Securing “the good match”. The second factor highlights that to
achieve “the good match” in the selection process, three aspects need to
be considered: i) personnel selection is a reciprocal process, i.e., both
parties are selecting, ii) the interview setup needs to match a candidate’s
Table 3
Factors Influencing the Fairness Perceptions of the Robot-Mediated Job Interview.
Second-order categories First-order categories Representative quotes
Dehumanization of the
personnel selection
Deprioritizing the job interview (-) It somehow seems strange that they won’t allocate time. I mean, they will allocate time to talk to
me, but they don’t want to see me. That somehow seems a bit peculiar if it is in connection with a
job interview. (Roundtable #4)
Removing personality (-) P2: That it gets completely cleaned of personality; I think that is, oh well…[sighs]. […]
P5: It’s just that you remove that human factor, I simply don’t believe in that. (Roundtable #3)
Removing intuition and emotions (-) […] And then it’s this thing about, when the gut feeling tells you this is right, I think that’s where
the challenge will be, so that it’s not only about the competences. (Roundtable #2)
The good match Reciprocal selection (-) P7: But there is that part of the match that goes the other way, because one thing is the selection
related to the applicant, but equally as much it is about the selection one makes as an applicant. And by putting a
robot in that rst meeting with the company, you miss out on impressions such as values and
culture.
P2: Yes, you can end up feeling… how to put it? Cheated.
P3: Yeah, a little. Because you did not get any idea of what the culture is like.
P2: Yes, or at least you need to remember that a selection process actually goes both ways. (Roundtable
#8)
The match between a candidate’s personality
and the interview setup (+/-)
I would think that it would be far more awkward for me to sit and talk to a robot than to a human
being. Even though I have actually seen and experienced quite many robots, I think it’s actually
harder for me to communicate with something – even though the Telenoid has facial features and
stuff like that, it’s just different. (Roundtable #3)
The match between the job type and the job
interview setup (+/-)
But I think it also depends to a great extent on what field we are talking about. I work in an
unemployment insurance fund that is 75 percent in the humanistic field and 25 in the scientific
field. And there are perhaps some of the areas where you tend to say, well if they only go for a
person whose programming or other qualities are within something more measurable versus hiring
a new high school teacher, for example, where one can say that there are some other parameters
that may be in play, where one will be defensive in advance against a robot, because it only tells half
of the story. (Roundtable #7)
False objectivity Postponing rejection and biases (-) But exactly that part related to objectivity is what we are struggling with daily. And all things being
equal, it is still the company that decides whom they want to hire. […] And there, I think, […] it
may well be that the person gets to the last step, but it’ll be a waste of time anyways. I don’t think
that’s respect for the individual either. (Roundtable #2)
Imagining (-) “[…] and I don’t think you can prevent people from letting their imagination run wild in the
situation. (Roundtable #3)
Symptomatic treatment Creating awareness of biases (+) […] that it may perhaps make biases visible rather than thinking that you can remove biases by
using robots (Roundtable #1)
Indirectly accepting discrimination in the
labor market (-)
It may be that you can anonymize in the selection situation itself, but if you really don’t want to
have women or immigrants at the workplace, then you still have the problem. […] I would rather
see that the labor market got adapted and arranged in a different way. (Roundtable #2)
* (+) and (-) indicate a positive or negative effect on fairness perceptions of the robot-mediated job interview
personality, and iii) the job type and the interview setup need to match.
The first aspect was found to have a negative effect on fairness percep-
tions of the robot-mediated job interview, while the effects of the two
remaining aspects are context dependent.
3.4.1.2.1. Reciprocal selection. There was a general agreement that
an effective selection process needs to ensure that both the hiring or-
ganization and the applicant are able to identify a good match in each
other. Put differently, the selection process and outcome have to benefit
both parties. While the concepts of fair proxy and the robot-mediated
interview emphasize reducing or eliminating biases and hence
discrimination against applicants, many participants pointed out that
the needs of hiring organizations should also be considered. The reason
for this is that the applicants are not the only ones being assessed; the
company is being evaluated by the candidates as well. Applicants can
decide to either accept or reject a job offer. Creating a good match is
therefore seen as a “two-way street”, and the robot-mediated setup is
viewed as less fair than the face-to-face setup because it reduces the
chance for reciprocal selection:
P1: And I think that somehow the premise for this is that companies
are the ones deciding whom to hire, but in reality it is just as much the
candidate, the talented candidates, that can equally choose to accept or
decline. And I think that is an important aspect to include and say: well,
it’s the mutual match that’s important. (Roundtable #2)
3.4.1.2.2. The match between a candidate’s personality and the inter-
view setup. In addition to achieving a good match between a hiring or-
ganization and a candidate, the participants generally acknowledged
that there are other factors that need to suit a particular job interview
setup in order for the selection process to succeed. One aspect is a
candidate’s personality. Depending on candidates’ personality traits,
some are more comfortable with a face-to-face job interview, while
others would prefer a robot-mediated interview:
P6: […] that there may be some who will have a more positive
experience of…that it may be more pleasant to sit and talk to a robot,
where you don’t get so nervous. But there will be equally many who
would think it would be a loss, or how to put it, or that would think it
would be annoying. (Roundtable #7)
3.4.1.2.3. The match between the job type and the interview setup.
Participants seemed to agree that those job types that, for instance,
require more technical problem solving and little interpersonal
communication would also be likely to place less emphasis on person-
ality and more on qualifications. They therefore thought that a robot-
mediated job interview could be suitable here:
P3: And I think that it is fine if we are talking about a salesman or a
performer or a teacher or something. But if it is “IT-Joe” that will have to
sit and nerd out with something in the basement, how good he is at
selling himself does not have anything to do with how good he is at his
job. (Roundtable #2)
3.4.1.3. False objectivity. The third factor, false objectivity, reduces
fairness perceptions and arises from the perception that i) the robot-
mediated job interview only postpones biases and rejection and ii)
triggers the ‘imagining’ of the candidate due to the absence of visual
cues, which may result in false expectations.
3.4.1.3.1. Postponing rejection and biases. The robot-mediated job
interview, as conceptualized in this paper, aims to maximize objectivity
by removing visual cues that may trigger implicit biases and discrimi-
nation. However, many participants were not convinced that this ob-
jectivity was achievable because at the end of the selection process, a
candidate would need to show his or her entire person in any case.
Therefore, they argued that using a robot-mediated interview would
only postpone the rejection and biases until the very end, which is why
the process may be experienced as falsely objective:
P1: […] But what you also do is, as you mentioned, it’s just that you
kind of postpone it until you show up at work or at the final round [of the
selection process].
P3: Yes, that’s right. The moment of surprise. [several participants
laugh]
P1: So you just postpone the thing where you say “Oh, that’s what
you look like!” until later in the process.
P3: Yes, that’s correct. (Roundtable #4)
Some participants reasoned that the delayed rejection may feel like an
even greater defeat and that applicants would therefore experience the
process as being less fair than a face-to-face interview:
P7: […] but in the end, you would never get hired without having
stood forward and shown who you are. And ideally the way it should
be is that you show your true self…and not only on the first day, where
you come into the office and then there are some that look at you and
say: “You there, you can just go back home” [laugh around the table].
Then I think it would be an even greater defeat. (Roundtable #4)
3.4.1.3.2. Imagining. The participants argued that if the visual cues
were removed, the interviewer would compensate by filling in the gaps
herself. This ‘imagining’ could build up unrealistic expectations for the
applicants and bias the selection process and outcome in unintended
ways.
P6: […] you build up a person that’s a bit neutral, that you have
some good impressions of. And then the person walks in through the
door, and it may perhaps be an entirely different person than the one you
had in mind. […] I could be worried that […] maybe that reveal will
therefore become even greater: This was exactly what I had expected, or
this was not at all what I had expected. (Roundtable #8)
3.4.1.4. Symptomatic treatment. The fourth factor, symptomatic treat-
ment, is related to the role played by the robot-mediated interview to
treat discrimination. The participants reasoned that the robot-mediated
interview treats the symptoms rather than the causes of discrimination.
Table 4
Moral Pragmatism of HR Professionals Related to the Robot-Mediated Job
Interview.
Second-order categories First-order categories Representative quotes
Role morality The right to be biased Because of course it is biased when
you are hiring. It is a whole person
who you have to hire in your
company. (Roundtable #7)
Obligation to the client But it is precisely that objectivity
that we fight against daily. And all
things being equal, it is still the
company that decides whom they
want to hire. And then you can
challenge [them], but I think that
the output is not going to be better
than the input. So, if there is a
person who wants the job
description to mirror him, then it’s
easy to say: “I don’t want a female
who is pregnant.” No, okay. We
are not allowed to say that [out
loud]... (Roundtable #2)
Business case
overrules
social case
Time and cost
effectiveness of the robot-
mediated job interview
But if you get to the stage where
you have a real robot that
interviews, where I don’t have to
sit and spend time, because it is
about the time aspect as well. You
have to hire someone, you have
made the investment […] and then
the [applicant] sits there, and you
are here still doing the work and
asking questions based on what
she responds. (Roundtable #3)
Society hinders unbiased
selection
[…] And the same goes for the
headscarf. There could be some
situations, where it wouldn’t be
wise, but there could be other
situations in which it wouldn’t
matter. (Roundtable #4)
While unable to treat the root causes of discrimination, the robot-mediated setup’s symptomatic treatment of discrimination was perceived as having both positive and negative effects on fairness perceptions. On the one hand, the robot-mediated setup may improve fairness perceptions, as it may create awareness of biases at the individual level in personnel selection, which may reduce discrimination. On the other hand, the robot-mediated interview is unlikely to have an effect beyond the interview situation; it may therefore signal acceptance of discrimination in the labor market, which has a negative effect on the fairness perceptions of the setup.
3.4.1.4.1. Creating awareness of biases. Even if the robot-mediated
interview only addresses the symptoms of discrimination, some partic-
ipants argued that it could create an awareness of bias during the se-
lection process and that such awareness could over time positively
impact selection practices:
P3: But again, you should of course be able to hire someone, if you
[pause]… But that consciousness about those biases, that you are
choosing that person because they are from western Jutland, and not
because the person is sharp at something. That you get that awareness
back to the person who is hiring. That I think may be a more sustainable
way of viewing recruitment. (Roundtable #7)
3.4.1.4.2. Indirectly accepting discrimination in the labor market.
Some participants reasoned that by treating symptoms of discrimination
via solutions such as a robot-mediated interview, employers would
actually be giving indirect and unintentional consent to the discrimi-
nation that is present in the labor market:
P3: We say that we have equal rights. We don’t. We don’t have equal
pay either. Maybe that’s the task that should be solved instead of ano-
nymizing ourselves.
P1: But that’s what they are trying to achieve here, namely, equal
rights.
P2: Yes, […] but by doing something like this [refers to the robot-
mediated interview], you accept racism and you accept…
P1: You accept everything.
P5: Well, yes, discrimination.
P2: Discrimination, right. Then we say: Okay, let’s make some […]
technology there instead of removing the issue at its core. (Roundtable
#1)
3.4.2. Moral pragmatism
The second aggregate theoretical dimension is related to the moral
pragmatism of HR professionals (Table 4). It is evident from our data
that the introduction of robots in job interviews raises the defenses of HR
professionals related to morality, which results in moral pragmatism.
The notion of “moral pragmatism” as such is ambiguous, since there are
many different varieties of “pragmatism” in philosophy. What is com-
mon to a pragmatist approach is the resistance to a realist conception of
values and norms as mind-independent entities, the insight that human
decision-making is not guided by one and the same system of normative
ethics (e.g., deontology, utilitarianism, virtue ethics), and/or the insight
that moral decision-making is not guided, across situations, by one and
the same value hierarchy (Drašček et al., 2021; Heney, 2016; Marchetti,
2021). Here, we follow a conception of pragmatism that emphasizes
“that all knowledge and experience are infused with interpretive as-
pects, funded with past experience, and stem from a perspective, i.e., a
point of view” (Rosenthal and Buchholz, 1999, p. 115). Accordingly,
moral decision-making takes place in a perspectival understanding of a
situation—each perspective comes with its own set of moral obligations
and, in particular, with its own instantiation of morally guiding values
and its own ranking of these values. Moral pragmatism of this kind is
very close to moral relativism, but unlike the latter, holds on to the idea
that perspectival moral decisions are neither arbitrary nor ‘locked in’,
but are justifiable by and revisable in the situation. In the case of our
study, we found evidence for this understanding of morality and for
perspectival conceptions of fairness and associated norms. The HR
professionals showed a certain degree of unwillingness to revise current
procedures for the sake of the flourishing of the company. As long as the
protection of the company is undertaken for moral reasons (e.g., to
protect the livelihood of the company and its employees), resistance
against changing selection procedures displays not amoral attitudes but
moral pragmatism in the sense defined. Our analysis shows that it results
from a combination of i) role morality that is related to being an HR
professional and ii) the business case (i.e., the bottom-line effects) of the
robot-mediated interview taking precedence over the social case for the
robot-mediated interview (increasing fairness, reducing bias and
increasing diversity). Moral pragmatism demonstrates that fairness from
a company perspective is differently perceived than fairness from an
applicant perspective. From the company perspective, fairness includes
the right to choose according to the company’s own preferences for the
best match, even if it means being biased.
3.4.2.1. Role morality. A majority of HR professionals expressed that
their job entails acting on behalf of their client or company, which
sometimes may involve acting in ways that they would otherwise
consider morally wrong. Their acceptance of this role-based morality
was anchored in two arguments: i) biases are permissible because the
hiring organization has the right to choose according to its own pref-
erences, to the extent that this can be morally justied (e.g., along
utilitarian lines of protecting the jobs of extant employees), and ii) HR
professionals have an obligation to the client/company regardless of
how they personally feel about the nature of the selection process. Role
morality refers to "claim(ing) a moral permission to harm others in ways
that, if not for the role, would be wrong" (Applbaum, 1999, p. 3).
3.4.2.1.1. The right to be biased. The participants, including HR
professionals, acknowledged that “there are clearly some inappropriate
biases that can be removed” (Roundtable #4). This acknowledgment was
generally present among the participants at each roundtable, and they
were positive about the idea of removing visual cues that trigger biases:
P1: If we could remove everything about who you are and what your
background is, and this and that, so only pure competences remain, and
that you are good at your work, and this and that, I think that would be
great. (Roundtable #6)
Despite clear moral awareness, the HR professionals argued that it is
unfair to remove the hiring organizations’ right to be biased:
P7: […] Because we have some clients, who are business owners, and
he bloody does not, sorry, but he won’t hire Muhammed. He bloody
doesn’t want that in his “shop.” He doesn’t want a Muhammed there.
P1: And he shouldn’t have to.
P7: And he shouldn’t have to.
P1: Then it’s really not cool for Muhammed to be there either.
(Roundtable #2)
Importantly, the ‘should’ that appears in this line of argument is not a
piece of instrumental reasoning but a normative ‘should’ that relates to
rights, obligations, and entitlements. In other words, Participants P1 and
P7 ultimately express a moral point of view relating to emotional
discomfort and human flourishing; they do not argue from an exclu-
sively instrumental perspective.
It was thus argued that the hiring organizations should be able to
recruit based on their own (subjective) attitudes and preferences toward
the types of applicants they want to hire, regardless of those preferences
being unrelated to qualifications or work performance. Some decision-
makers are, for instance, more emotionally invested in their businesses
than others and as such, are entitled to their own way of selecting
candidates:
P7: Well, there are some decision-makers out there in the Danish
companies… If you have your own business that you have built over the
course of 30 years, Mr. Knudsen will have his stance on whom he would
like to hire. There are lots of feelings involved in it from his perspective.
And he will not hire [women]—his experience says that male engineers
are better. You cannot take that away from him. (Roundtable #2)
3.4.2.1.2. Obligation to the client. The participants further explained
that discrimination may be acceptable if it serves the purpose of satis-
fying the client. Generally, the HR professionals who participated in the
mini-public were of the opinion that a fair selection method allows for a
trade-off between accommodating the company needs and wants and
the goal of eliminating discrimination:
P1: Well, it is also fair enough that there are some criteria that an
employer wants for this to work. If you hire a person who is so far away
from the rest of the employee group, it will also cause a lot of
disagreement, because how will the person adjust, and will it work at all,
and will she become isolated or something. Nobody wants that. […] So,
we do it just as much to protect the person as the company, right.
P2: So, there can be good reasons for discrimination?
P1: Well… well… yes. But I wouldn’t call it discrimination.
(Roundtable #3)
Moreover, the HR professionals reasoned that the robot-mediated
interview setup would be unable to yield benefits for both parties
because it is not based on “win–win” principles (Roundtable #8). They
argued that the benefits of this interview setup for candidates entail a
disadvantage for companies, branding the robot mediation as less fair
and emphasizing their duty to the client:
P2: But again, whom are we, ehh, giving preference to here? Is it the
company that may not get the full benefit from this taking a turn […]
Really, to whom are we giving priority here? (Roundtable #5)
3.4.2.2. The business case overrides the social case. There was an un-
derlying perception among the HR professionals that the business case
behind the robot-mediated interview outweighs the social case. This
perception was grounded in two arguments: i) even in the presence of
the many objections against the robot-mediated job interview (cf.
dehumanization, a good match, false objectivity, etc.), the HR pro-
fessionals argued that if the robot-mediated interview involved an
autonomous robot (unlike the teleoperated one), it could be a cost- and
time-efficient solution and therefore attractive, and ii) because stereo-
types and biases are so prevalent in society (including clients and cus-
tomers that companies work with and sell to), society needs to change
before companies can embrace unbiased selection. Both of these argu-
ments reflect the participants’ view of financial performance not as a
‘good in itself’ but as something that is directly connected to companies’
and employees’ well-being, thus reflecting moral pragmatism. They are
anchored in a business logic favoring financial performance over the
good of social justice and show that a business argument is necessary to
justify acting in a socially responsible way.
3.4.2.2.1. Time and cost effectiveness of the robot-mediated interview.
Despite insisting on the importance of relying on intuition in selection as
an argument against the robot-mediated interview, the HR professionals
had an exception:
P3: So it would be very time-saving if you could find a robot that
could actually interview, and that could also think on its own…that
could ask follow-up questions if [the interviewee] goes off on a tangent
etc., and collect that data. (Roundtable #2)
If the robot were autonomous, they argued that it would be a cost-
and time-effective solution, which would free the HR professionals to
focus on other work tasks:
P1: (interrupts) Yes, then you could hand over the work, right? Now
you are doing it yourself anyway. (Roundtable #8)
This suggests that the HR professionals’ opposition to the robot-
mediated job interview is not necessarily about resistance to change, i.
e., replacing something known, something that “works” (intuition-based
selection), with something new (robot-mediated selection). Rather, the
above examples indicate that their resistance is related to their strong
focus on the business rationale behind the new technology (i.e., time and
cost effectiveness). Since the ultimate motivation for the business logic
was not financial gain in itself but the livelihood or thriving of the
company and employees, this line of reasoning appeared morally justi-
fied to the participants, even though they understood that it conflicted
with the perspectival moral reasoning in favor of the robot-mediated
interview (i.e., in order to reduce discrimination, increase perceptions
of fairness).
3.4.2.2.2. Society hinders unbiased selection. Some participants
argued that society needs to change before companies can make selec-
tion and hiring decisions that are nondiscriminatory, thus placing the
responsibility for the “necessity” to discriminate on society while using
financial performance to justify this stance. For instance, a participant
recounted a situation involving an applicant who was going through a
gender change:
P2: We had a very peculiar case, well, a few years back, where I was
looking for a managing director for a job, but who would also be able to
sell. And I received a number of applications. […] There were really
many men that applied, and then there was only one woman, who
applied. And she did everything right, and had a good background, and
wrote both a good application and a good CV. And my first thought was,
it was like, errr, that CV, you did not write that yourself.
P5: Okay?! [sounds surprised]
P2: And the explanation is that men and women use different words.
And men very much emphasize results, and what you have achieved,
and what you have done, and things like that. And women write
differently.
P5: Yes.
P2: And this CV was very much like: I have done, I have created, I
have achieved, I have. So, it was a typical male-CV. Uhm, and then I
invite the person in question to an interview, and it turns out that this is
a woman, with a female name, but it is in fact a man, because it’s
someone who is changing gender. So, [this person] comes from being a
man and is in the process of becoming a woman, in female clothes and
high heels, but still a male voice and a male appearance etc. And uhm,
well, had the right competences, but if the person were to go out and sell
the company, it would create a lot of confusing signals to customers. So,
that person was actually discriminated against in the process, even if the
competences were appropriate and all those things. But it would create
such strange signals if the person was to meet clients, and if one needed
to decode: What is this? What is going on here? Well, a man in women’s
clothes. So, uhm, […] it was really not easy. (Roundtable #3)
Examples similar to this were used to argue that a company cannot
afford to be inclusive and hire an “atypical” type of applicant to be the
face of the company, to represent it, to meet clients, partners, etc.,
because it would risk its performance and profitability. They reasoned
that society needs to change and become more open-minded before
companies can be more inclusive and unbiased in hiring a diversity of
applicants:
P4: I can see that there is a certain point [with this technology], but
it’s not the same if you need to go out to customers. But if the society
could move, then recruitment could [move] as well. (Roundtable #3)
That the arguments of the HR professionals are best construed as
reflecting a stance of moral pragmatism and not a rejection of moral
principles is supported by the fact that participants talked about the
“rights” of companies to proceed with biased selection procedures,
partly in order to cater to the biases of their clients—they did not qualify
their own standpoint as being immoral or amoral, but justified it along
the lines of moral principles that largely follow utilitarian ethics.
3.4.3. Results of the mini-public poll
As seen in Table 5, statistically significant effects of time were only
observed for the preference for being interviewed by a robot. Here,
participants rated their preference for being interviewed by a robot
significantly higher at T4 compared to T1 (t(47)=-2.83, p=.009), while
there were no statistically significant differences between any of the
other time points or on any of the other questionnaire items (question-
naire items 2, 3, or 4; at any time T1, T2, T3, T4; see Table 5).
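To make the reported T1-versus-T4 contrast concrete, the following minimal sketch reproduces the form of the paired-samples comparison in Python; the response vectors are hypothetical placeholders, since the raw poll data are not reproduced here.

# Minimal sketch of the T1 vs. T4 paired comparison for the "prefer being
# interviewed by a robot" item. The ratings below are hypothetical
# placeholders, not the actual mini-public responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n = 48                                   # participants with data at both time points
t1 = rng.integers(0, 4, size=n)          # illustrative ratings at T1
t4 = np.clip(t1 + rng.integers(0, 2, size=n), 0, 4)  # slightly higher ratings at T4

t_stat, p_value = stats.ttest_rel(t1, t4)            # paired-samples t-test
print(f"t({n - 1}) = {t_stat:.2f}, p = {p_value:.3f}")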
3.5. Discussion
Based on a deliberative mini-public, Study 1 explored fairness per-
ceptions of the robot-mediated interview. Despite the HR professionals’
awareness of the shortcomings of the face-to-face job interview and
ethical reasons for using the robot-mediated interview, the findings
suggest that face-to-face job interviews are perceived as preferable for
mainly pragmatic reasons, i.e., they entail several well-known positive
benefits, such as the opportunity to directly observe personality, social
and other skills, ask complex questions, use probing mechanisms, etc.
Being practically preferable, however, does not necessarily imply that
they are ethically preferable; rather, as we have shown in our theoretical
background, there is extensive and consistent evidence that face-to-face
interviews are inevitably interwoven with numerous biases that un-
dercut basic rights to equality, respect, and equality of opportunity.
Several statements that were made during the mini-public event exem-
plify this point.
The HR professionals argued that robot mediation would prevent
their reliance on intuition, which they insisted was essential for suc-
cessful selection. Research has shown that analytical methods are better
at predicting human behaviors than intuition-based methods (Grove
et al., 2000; Kuncel et al., 2013). Similarly, intuitive expertise in
applicant selection is a poor predictor of a candidate’s future job per-
formance (Highhouse, 2008). In fact, it has been found that unstructured
interviews have low predictive validity and that they may even harm
applicant selection decisions (Kausel et al., 2016). In light of the
observed strong preference for intuition-based selection decisions, the
goal of reducing discriminative biases in selection remains pivotal and
motivates further investigation of robot-mediated job interviews.
Moreover, the finding that the mini-public participants showed a posi-
tive change in their attitudes toward being interviewed by a robot
during the event indicates that the tested interview method may stand a
chance with HR practitioners, but the causes of this change remain
unclear.
Although the study revealed rich insights into fairness perceptions of
the robot-mediated interview, it suffered from one important limitation.
The predominance of HR professionals at the event generated a valuable
understanding of how the decision-makers and organizational repre-
sentatives perceive the robot-mediated interview. However, the appli-
cant perspective was not uncovered in equal depth. While the insights
into applicant perceptions suggested a different and more positive
perception of the robot-mediated interview, these insights were based
on relatively scarce data. This motivated us to pursue further investi-
gation of the applicant fairness perceptions in Study 2.
4. Study 2: experimental survey
Study 2 was designed to explore the effects of the type of job inter-
view (face-to-face vs. robot-mediated) on applicant fairness perceptions.
The main purpose was to investigate whether the mean value of the
interactional fairness construct is the same for the robot-mediated
interview and for the face-to-face interview. The study was based on
an experimental survey design.
4.1. Method
The study was conducted at an unemployment center in Denmark.
Jobseekers were enrolled in the present survey because they are the target
group for robot-mediated interviews and were themselves going through
job search and unemployment. A total of 242 valid responses were obtained.
For demographics and employment-related characteristics, please see Table 6.
4.1.1. Procedure
The respondents were approached at the unemployment center and
informed about the study. When oral consent to participate was ob-
tained, they completed the survey delivered online on tablets. The sur-
vey was delivered through the Qualtrics survey system. The survey
included questionnaires (described below) and two video segments
displaying a conventional face-to-face interview and a robot-mediated
job interview. As the technology represents a break with previous
technologies used in this context, we found it critical to ensure that this
context was clearly displayed in the survey. To ensure that the re-
spondents all had the same understanding of the two job interview sit-
uations (robot-mediated vs. face-to-face), the survey included two
scripted videos displaying the two interview conditions (as shown in
Fig. 1).
In the face-to-face job interview, the interviewer and the applicant
are physically seated across from each other. In the robot-mediated job
interview, the Telenoid functioned as a possible fair proxy for the
applicant. In the latter situation, the applicant and the interviewer are
seated in two different rooms. The interviewer sits across from the ro-
botic proxy that represents the applicant. The applicant sits in front of a
computer screen via which the applicant can see the interviewer. The
robot is teleoperated by the applicant, and it has a built-in camera on its
forehead, which is used to transmit the visual image of the interviewer
on the computer screen. Both interviews were based on the same script
and involved the same two individuals. Each condition was thus shown
in a separate video. Each survey respondent watched both videos, and
each video was followed by a set of questions. The sequence of the
videos was randomized.
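As an illustration of this within-subjects setup, the sketch below shows one way the presentation order of the two videos could be counterbalanced per respondent; it is a conceptual stand-in, since in the study itself the survey, including the video sequence, was delivered through Qualtrics.

# Conceptual sketch of counterbalancing the order of the two interview
# videos in a within-subjects design: every respondent sees both videos,
# in a randomized sequence. In the study itself this was handled by the
# online survey system.
import random

CONDITIONS = ["face-to-face interview video", "robot-mediated interview video"]

def presentation_order(respondent_seed):
    order = CONDITIONS.copy()
    random.Random(respondent_seed).shuffle(order)   # per-respondent randomization
    return order

for respondent_id in range(3):
    print(respondent_id, presentation_order(respondent_id))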
4.1.2. Measures
As the mini-public results emphasized that the robot-mediated
interview limits the interaction between interviewers and applicants
in undesired ways, we focused on perceived interactional fairness (IF).
We used a well-validated four-item scale to assess interactional fairness
perceptions in the two conditions, e.g., “The interviewer treated the
applicant with dignity” (Bauer et al., 2001).
We also assessed personal innovativeness (PI) within the domain of
interactive technologies (to which social robotics belong). For this
Table 5
The Mini-Public Poll Results.

Prefer being interviewed by a robot (n=44)*: T1 M=1.14 (SD=.90); T2 M=1.41 (SD=.95); T3 M=1.30 (SD=.93); T4 M=1.60 (SD=.87). One-way repeated ANOVA: F(3, 129)=3.33, p=.022, ηp²=.072. Post hoc analysis: T1<T4, p=.009; T1=T2, p=.096; T1=T3, p=.227; T2=T3, p=.340; T2=T4, p=.430; T3=T4, p=.063.
Believe that robots will have a positive influence on securing a job (n=45)*: T1 M=1.04 (SD=1.11); T2 M=1.35 (SD=1.12); T3 M=1.22 (SD=1.19); T4 M=1.46 (SD=1.15). One-way repeated ANOVA: F(3, 132)=2.141, p=.098, ηp²=.046.
Prefer showing one’s whole person at interview (n=44)*: T1 M=2.79 (SD=1.27); T2 M=2.74 (SD=1.43); T3 M=2.78 (SD=1.25); T4 M=2.96 (SD=1.25). One-way repeated ANOVA: F(2.17, 93.21)=.715, p=.503†, ηp²=.016.
Believe technology will help increase equality (n=44)*: T1 M=2.26 (SD=1.50); T2 M=2.45 (SD=1.33); T3 M=2.04 (SD=1.50); T4 M=2.11 (SD=1.47). One-way repeated ANOVA: F(2.31, 99.54)=2.45, p=.083†, ηp²=.054.

†Mauchly’s test indicated that the assumption of sphericity had been violated. The model was corrected using the Greenhouse–Geisser estimates of sphericity.
*Unfortunately, some respondents chose not to reply to all questions. Thus, the sample size here is the number of participants who responded at all four time points.
purpose, we relied on three items from Agarwal and Prasad’s (1998)
well-validated scale, e.g., “If I heard about a new interactive technology,
I would look for ways to experiment with it.” Since personal innova-
tiveness in a particular domain is central to explaining innovation
adoption in that domain (Roehrich, 2004), it is also likely that the ap-
plicants’ degree of innovativeness within the interactive technology
domain influences their fairness perceptions of the robot-mediated
setup.
Finally, we measured core self-evaluations (CSEs), since those have
been shown to be important in explaining applicants’ reactions to se-
lection procedures (McLarty and Whitman, 2016). We relied on Judge
et al.’s (2003) twelve-item instrument. The CSE scale is designed to be
unidimensional, but several studies report a bifactor dimensionality
with positively and negatively worded items loading on two separate
factors (Belendez et al., 2018; Gu et al., 2015; Zenger et al., 2015).
However, in the linguistic and cultural adaptation of the scale into
Danish, three items could not meaningfully be negatively worded, which
is why they were worded positively (Items 4, 8, 10). Thus, in our initial
adaptation, we only rely on three negatively worded items (Items 2, 6,
12).
4.2. Analysis and results
With a model involving several latent variables, each measured with
a number of items, the first step is to assess the validity of the items as
measures of the latent variable. This is a central part of confirmatory
factor analysis (CFA). Our initial analysis showed that, based on the
global goodness-of-fit measures CFI, TLI, SRMR, and RMSEA, the psy-
chometric properties of the twelve-item instrument to measure CSE were
unsatisfactory. This group of measures is recommended in the CFA
literature as a basis for assessing the quality of a confirmatory factor
analysis. We utilize established rule-of-thumb thresholds of 0.95 or
above for CFI/TLI, 0.08 or below for SRMR, and 0.06 or below for
RMSEA; see Brown (2015) for elaboration. Hence, in line with the rec-
ommendations in the CFA literature, we looked for indications of
localized areas of strain using the normalized residuals. After several
rounds of reductions of ill-behaving items (based on the above-
mentioned residuals), we ended up with an instrument measuring CSE
using six indicators from the original instrument (items 1, 3, 5, 8, 10,
11); see Table 7 for the wording and the correspondence between the
original numbering and our numbering. This formulation only uses
positively worded items (since the original negatively worded items 8
and 10 were included in a positively worded form) and satisfies the usual
global goodness-of-fit measures.
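For illustration, the screening against these rule-of-thumb thresholds can be expressed compactly as below; the function is only a sketch of the cited cut-offs, and the example values are those reported for Model 2 in Table 8.

# Illustrative screening of global goodness-of-fit indices against the
# rule-of-thumb thresholds cited above (CFI/TLI >= 0.95, SRMR <= 0.08,
# RMSEA <= 0.06; see Brown, 2015). The example values are those reported
# for Model 2 in Table 8.
def acceptable_fit(cfi: float, tli: float, srmr: float, rmsea: float) -> bool:
    return cfi >= 0.95 and tli >= 0.95 and srmr <= 0.08 and rmsea <= 0.06

model_2_fit = {"cfi": 0.963, "tli": 0.954, "srmr": 0.058, "rmsea": 0.050}
print(acceptable_fit(**model_2_fit))  # True: all four thresholds are met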
Given the objective of comparing the latent means of interactional
fairness in the robot-mediated and face-to-face interviews, we applied
structural equation modeling (SEM) to compare groups on latent means.
Specifically, structured means modeling (SMM) is the recommended
choice of analysis, as it allows for a highly flexible specification of the
entire model and simultaneously addresses the unreliability of mea-
sures. Ignoring the uncertainty associated with the latent means of
interactional fairness by using, for instance, summated scales and a
standard ANOVA approach for analyzing experimental data might
thwart the detection of important differences. We follow the suggestions
in Steenkamp and Baumgartner (1998) as well as Thompson and Green
(2013) and introduce a series of models with an increasing set of re-
strictions with the purpose of ultimately determining whether the mean
of the latent construct interactional fairness is identical in the two
groups (robot-mediated vs. face-to-face interview), accounting for the
potential influence of CSE and personal innovativeness.
The point of departure is a model of configural invariance. Here, the
same factor structure applies; that is, the same set of indicators for our
construct applies in each group. There are no restrictions in terms of the
loadings for these indicators or for any of the remaining parameters in
the model. We allow for correlated errors associated with the indicators
of the interactional fairness construct to reflect potential method effects
(Brown, 2015). The second step is to identify which loadings, if any, are
identical across the two groups. In step three, we identify which indi-
cator means, among those with identical loadings, are identical across
groups, where for identification, we restrict the mean of interactional
fairness for the robot-mediated group and estimate the mean for the
face-to-face group. In our final model, we also restrict the path co-
efficients between CSE and robot-mediated as well as face-to-face
interactional fairness to be the same. The model in the first step (con-
figural invariance) must be a well-fitting model to carry out the subse-
quent testing of additional restrictions. We base our assessment of the
initial model, as is standard in the literature, on the chi-square statistic
supplemented with the goodness-of-fit measures: comparative fit index
(CFI) (Bentler, 1990), Tucker–Lewis index (TLI) (Tucker and Lewis,
1973), standardized root mean square residual (SRMR) and root mean
square error of approximation (RMSEA) (Steiger and Lind, 1980 in
Steiger, 2016). We assess differences between models using the incre-
mental chi-square test. We use the MLR estimator in Mplus (Muthén and
Table 6
Demographic and Employment-Related Characteristics.

Respondents (n=242; men 40%), M (SD) or n (%)

Demographic variables:
Age, M (SD): 39.19 (12.50)
Education: Municipal primary and lower secondary school or less, 12 (5%); Secondary school, 19 (8%); Vocational education, 53 (22%); Short (<3 years), 29 (12%); Medium (3–4 years), 44 (18%); Bachelor’s degree, 17 (7%); Long (5 years or longer), 61 (25%); Other, 7 (3%)
Residency in Denmark: Born and raised in Denmark, 179 (74%); 1–6 years, 16 (7%); 7–14 years, 10 (4%); 15 years or more, 37 (15%)
Unemployment status: Unemployed full-time, 203 (85%); Unemployed part-time, 35 (15%)
Time unemployed: <1 year, 182 (75%); 1–3 years, 55 (23%); 4 years or more, 5 (2%)
Work experience: None, 17 (7%); <1 year, 18 (7%); 1–3 years, 43 (18%); 4–6 years, 26 (11%); 7–10 years, 31 (13%); 11–14 years, 23 (10%); 15 years or more, 84 (35%)
Table 7
The Core Self-Evaluations Items.

Item number (original item number) and item:
1 (1): I am confident I get the success I deserve in life
2 (3): When I try, I generally succeed
3 (5): I complete tasks successfully
4 (8): I am confident about my competences
5 (10): I feel in control of my job application process
6 (11): I am capable of coping with most of my problems

*All items are measured on a 5-point Likert scale ranging from strongly agree to strongly disagree.
Muthén, 2012); hence, the incremental chi-square test uses the correc-
tion factor as described on the Mplus website. Table 8 summarizes the
results.
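The scaled difference test referred to here can be sketched as follows; the chi-square values and degrees of freedom below are taken from Models 1 and 2 in Table 8, whereas the two scaling correction factors are hypothetical placeholders because they are not reported in the table.

# Sketch of the scaled chi-square difference test used with the MLR
# estimator, following the Satorra-Bentler-type procedure described on the
# Mplus website. T0/df0/c0 belong to the more restricted (nested) model,
# T1/df1/c1 to the comparison model. The chi-square values and degrees of
# freedom correspond to Models 2 and 1 in Table 8; the scaling correction
# factors c0 and c1 are hypothetical placeholders.
from scipy.stats import chi2

def scaled_chi2_difference(T0, df0, c0, T1, df1, c1):
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)   # scaling correction for the difference
    trd = (T0 * c0 - T1 * c1) / cd             # scaled chi-square difference
    return trd, df0 - df1, chi2.sf(trd, df0 - df1)

trd, ddf, p = scaled_chi2_difference(T0=178.658, df0=111, c0=1.05,
                                     T1=173.879, df1=109, c1=1.04)
print(f"scaled delta chi-square({ddf}) = {trd:.3f}, p = {p:.3f}")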
Fig. 2 displays our final structural equation model (Model 4) that we
use to investigate whether the average value of the latent interactional
fairness construct is identical in the robot-mediated and face-to-face
interviews. Factor loadings (single-headed arrows from a circle to a
square) for CSE and PI have been omitted for brevity.
*We use the established model diagram symbols: squares signify
observed variables, circles signify latent variables, a triangle is the mean
or intercept, curved lines with two arrowheads signify correlations/co-
variances, and straight lines with a single arrowhead signify a direct
effect. Dashed lines represent insignificant relationships between con-
structs. Single arrowhead lines with a 1 on top are marker indicators and
are thus fixed to the value of 1. Single arrowhead lines with identical
numbers on top are fixed to be equal.
The single-headed arrow from the triangle to the circle associated
with the “interactional fairness, face-to-face” is the relevant effect. Based
on the results from Model 4, we find the unstandardized mean difference
between the robot-mediated interactional fairness factor and the face-to-
face fairness factor to be -0.315 with a standard error of 0.050. Thus, the
factor mean of the interactional fairness factor is highest for the robot-
mediated group. The effect size according to Hancock (2001) is calcu-
lated as the interactional fairness factor intercept (which is equal to the
above difference due to the identification restrictions) divided by the
square root of the variance of the disturbance for that factor:
ES_fairness = |−0.315| / √0.645 = 0.488
The mean for the interactional fairness factor is 0.49 standard de-
viations smaller for the face-to-face group compared with the robot-
mediated group. Using Cohen’s (1988) classification, this is a medium
effect size.
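For reference, Hancock’s (2001) effect size can be computed as sketched below; the factor intercept and disturbance variance in the example are illustrative placeholders rather than the estimates reported above.

# Sketch of Hancock's (2001) latent-mean effect size: the absolute factor
# intercept divided by the square root of the disturbance variance of the
# factor. The inputs below are illustrative placeholders, not the
# estimates reported in this paper.
import math

def hancock_effect_size(factor_intercept: float, disturbance_variance: float) -> float:
    return abs(factor_intercept) / math.sqrt(disturbance_variance)

print(round(hancock_effect_size(factor_intercept=-0.50, disturbance_variance=1.00), 3))  # 0.5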
We find the common unstandardized effect of CSE on interactional
fairness to be 0.348 with a standard error of 0.099. Thus, the higher the
CSE is, the higher the expected interactional fairness. The effect of PI on
interactional fairness is insignificant, with parameter estimates and
standard errors of 0.023 and 0.052 for the robot-mediated group and
0.006 and 0.062 for the face-to-face group.
4.3. Discussion
The findings of the survey show that jobseekers perceive robot-
mediated job interviews as fairer than face-to-face interviews. From
the jobseeker perspective, this suggests that there is room for improve-
ment of the traditional job interview and indicates the potential of using
robotic proxies for this purpose. Furthermore, only CSE was associated
with fairness perceptions, and there were no differences between the
two conditions (i.e., face-to-face and robot-mediated). We used CSE as a
proxy for the appraisal of the job interview setup. However, in the
present study, only six items of the scale were included in the final
model. These six items especially relate to generalized self-efficacy (e.g.,
“when I try, I generally succeed,” and “I complete tasks successfully”)
rather than to all four trait domains that the scale claims to assess.
Additionally, all of these items were worded positively, which may in
part explain the fit between them. Research suggests that it may be
better to directly measure moderating processes attributed to CSE
(Chang et al., 2012), e.g., factors that motivate jobseekers to prefer one
interview setup over the other. Finally, the lack of effect of personal
innovativeness on fairness perceptions may indicate that alleviating
disadvantages related to the traditional face-to-face job interview is so
fundamental to jobseekers that the high novelty of the alternative setup
and the related technology are not perceived as a challenge. This result
indicates that jobseekers may be ready to embrace radically new solu-
tions that aim to improve job interviews as a selection method,
regardless of their tendencies to adopt new interactive technologies.
5. General discussions and implications
This empirical paper is, to our knowledge, the first of its kind to
thoroughly examine employers’ and applicants’ fairness perceptions of
robot-mediated employment interviews. We explored whether a robot-
mediated employment interview may be able to improve fairness per-
ceptions by eliminating visual cues that are present in face-to-face
communication. The paper offers three main contributions. First, it
identifies key factors that affect fairness perceptions of the robot-
mediated interview. Second, it reveals the diverging perceptions of the
HR professionals and applicants on robot mediation in job interviews.
These diverging perceptions are related to the HR professionals’ moral
pragmatism that arises from their role morality and their focus on the
Table 8
Results from the SMM Analysis.

Model 1. Characteristics: Same model form for both groups, the mean for the interactional fairness construct is set equal to zero for both groups, no restrictions on indicator intercepts across groups. Goodness-of-fit: χ²(109)=173.879; CFI=0.964; TLI=0.955; SRMR=0.052; RMSEA=0.050. Incremental test: NA.
Model 2. Characteristics: Same as Model 1 but with equivalent loadings for indicators IF1, IF2, and IF3, the mean for the interactional fairness construct is set equal to zero for both groups, no restrictions on indicator intercepts across groups. Goodness-of-fit: χ²(111)=178.658; CFI=0.963; TLI=0.954; SRMR=0.058; RMSEA=0.050. Incremental test: Δχ²(2)=4.822, p value=0.089.
Model 3. Characteristics: Same as Model 2 but with the mean for the interactional fairness construct set equal to zero for the robot-mediated group and estimated for the face-to-face group, indicator intercepts equivalent for IF2 and IF3 across groups. Goodness-of-fit: χ²(112)=179.739; CFI=0.963; TLI=0.955; SRMR=0.058; RMSEA=0.050. Incremental test: Δχ²(1)=0.964, p value=0.326.
Model 4. Characteristics: Same as Model 3 but with equal path coefficients between core self-evaluations and robot-mediated plus face-to-face interactional fairness. Goodness-of-fit: χ²(112)=179.809; CFI=0.963; TLI=0.956; SRMR=0.058; RMSEA=0.049. Incremental test: Δχ²(1)=0.027, p value=0.870.
business case for the robot-mediated interview (financial aspects),
which undermines its social case (reducing discrimination). Third, it
examines an emerging technology as a future tactic to reduce potential
biases, namely, through robot mediation in personnel selection. We
discuss these three contributions below.
5.1. Employer vs. applicant fairness perceptions
Study 1 exposed different, but mainly negative, fairness perceptions
of the robot-mediated interview that are held by HR professionals. These
perceptions stand in contrast to the positive applicant fairness percep-
tions of the robot-mediated interview found in Study 2. In Study 1, the
four identified factors can be categorized according to two types of
fairness: procedural, i.e., whether the interview process is consistent,
bias-free, correctable, etc., and interactional, i.e., whether the interview
process is conducted in a respectful and informative manner (Bauer
et al., 2001). While procedural fairness from the applicant perspective is
related to consistency and neutrality of the interview (Bauer et al.,
2001), our results show that the employer perspective views an interview
procedure as fair only if it allows subjectivity and intuition. This is
particularly evident with respect to the first factor, dehumanization of the
interview process. The second factor, the good match, addresses proce-
dural fairness by suggesting that fairness from the employer perspective
is compromised if the interview setup does not secure the circumstances
under which both employers and applicants can identify a suitable
match in each other. This factor is also anchored in the subjective im-
pressions of reciprocal fit between candidates and the hiring organiza-
tion. The third factor, false objectivity, decreases both procedural and
interpersonal fairness because the robot-mediated interview is perceived
as postponing rejection and discrimination until the very end of the
selection process, which the employer perspective interprets as disre-
spectful and unfair to the applicants. The final factor, symptomatic
treatment, was related to the perception that the robot-mediated inter-
view treats the symptoms rather than the causes of discrimination,
which was assessed both positively and negatively. On the one hand,
robot mediation was viewed as having an instrumental value in creating
awareness around the issue of discrimination in the selection process
itself and thus as improving procedural fairness. On the other hand, it
was perceived as giving consent to the discrimination present in the
labor market because it fails to treat the root cause of discrimination and
consequently, as diminishing procedural fairness.
While the robot-mediated interview was thus not perceived as
particularly fair from the employer perspective, the experimental survey
in Study 2 showed that unemployed people perceived the robot-
mediated employment interview as fairer than the face-to-face job
interview. This difference may be attributed to the moral pragmatism of
HR professionals. The HR professionals recognized the issue of implicit
biases in job interviews, but from their perspective, the pragmatic rea-
sons for face-to-face job interviews, e.g., relying on intuition, securing
the “best” candidate-organization fit, have their own moral justification
in terms of their obligations to the flourishing of the company and its
extant employees, and thus outweigh the reasons for the robot-mediated
interview, such as fair treatment and equality of rights, which they
recognize as equally moral, from the perspective of the job applicant.
This understanding of moral justification, which we characterized as a
stance of moral pragmatism, has implications for research on employee
selection because it offers an additional explanation as to why employers
resist selection methods and/or technologies that are designed to
improve the objectivity of selection decisions. In addition to contextual
factors, such as organizational culture, habits and politics, the pervasive
but flawed beliefs among employers are that a candidate’s character is
too complex to be evaluated by structured and mechanical methods and
that intuitive expertise can better predict a candidate’s work perfor-
mance (Highhouse, 2008). Moral pragmatism is another reason for
resistance to selection technologies, which shows that employers insist
on particular selection methods due to i) their role morality, which
obliges them to meet their clients’ needs and wants even if those entail
biases, and ii) their focus on the business case for relying on a particular
Fig. 2. The estimated model.
S. Nørskov et al.
Technological Forecasting & Social Change 179 (2022) 121641
15
method. This results in employers viewing fairness very differently than
applicants, namely, as being fair to the hiring company by letting them
decide on any given selection criteria (including, e.g., gender and race)
they deem relevant, even if those are at odds with equality of
opportunity.
5.2. Overcoming the opposing views of employers and applicants
Our two studies thus point to some of the challenges related to
resolving discrimination in hiring. By taking the morally pragmatic
stance on job interview discrimination, hiring organizations are, for
instance, not only disregarding the effects on applicants’ psychological
health (Friedman et al., 2005) but also the adverse effects of their se-
lection practices on people’s careers over the long run (Hofstra et al.,
2020). By letting discrimination creep into personnel selection, organi-
zations are likewise making themselves vulnerable because discrimi-
nation may in fact lead to not hiring the best candidates (Quillian et al.,
2017), and it may damage the hiring organization’s reputation
(McLarty and Whitman, 2016).
While asserting the importance of intuition and gut feeling to argue
against the robot-mediated job interview, the HR professionals in Study
1 pointed toward a potential exception: If the teleoperated robot were
autonomous, it would be preferable because it would be a cost- and time-
effective solution. Interestingly, this stance seems to contradict the
importance of intuition in job interviews that was claimed by HR pro-
fessionals. It also emphasizes their focus on financial performance (the
business case). The HR professionals did not believe that autonomous,
AI-driven robots could be unbiased considering the current state of AI
development. Nonetheless, they found that particular option more
appealing than the teleoperated robot mediation due to its potential to
save money and resources. These findings suggest that focusing on
business cases, i.e., the financial effects of a personnel selection method,
undermines the social case, i.e., doing what is socially responsible. This
is in line with emerging research that shows that having a strong busi-
ness case for social action often does not lead to social action by orga-
nizations (Williams, 2017). This is because the principles of the business
case for social action are based on an instrumental logic that prioritizes
financial performance over social action, even in the face of a win–win
situation (Kaplan, 2020). Adhering to this logic, the goal is to balance
stakeholder trade-offs and shared value. However, research shows that
“[e]ven “shared value” approaches insist that the win-win formula al-
ways meets the needs of the bottom line first” (Kaplan, 2020, p. 1). As
our study shows, the instrumental logic is masked by justificatory vo-
cabulary that belongs to the domain of moral reasoning—HR pro-
fessionals talked of “rights” and used “should” in the normative sense,
not in the sense of furthering instrumental goals but in the sense of
realizing obligations and entitlements. In Study 1, the HR professionals
expressed that diversity can benefit the bottom line; however, their
moral pragmatism meant that even if the robot-mediated job interview
setup can be a win–win, financial performance beats social action. In the
face of a potential win–win situation, HR professionals’ moral pragma-
tism seems unwarranted. One plausible reason for their moral pragma-
tism is that moral judgments are usually weaker for acts that are
frequent or typical (Bigman et al., 2020). Indeed, research has docu-
mented that there is a pervasive and persistent belief among HR pro-
fessionals that intuition is essential in personnel assessment and
selection, even though they are aware of the disadvantages of such an
approach (Highhouse, 2008). Furthermore, discrimination in hiring is
common and institutionalized (Quillian et al., 2017). All of this limits
the feelings of moral outrage that are usually necessary to trigger moral
judgment and motivate social action (Kaplan, 2020; Martin et al., 1984).
Another explanation for the moral pragmatism of HR professionals is
related to their focus on financial performance. Research in psychology
links exposure to the construct of money directly to unethical behavior
(Kouchaki et al., 2013). In fact, this research suggests that this is why
business decisions are sometimes unethical. The HR professionals’ focus
on financial performance thus makes them more likely to make immoral
business decisions.
Therefore, while jobseekers in Study 2 perceive the robot-mediated
job interview as fairer, HR professionals in Study 1 feel the opposite
way. As both groups are key stakeholders in the job interview, it seems
reasonable to consider if and/or how this difference can be overcome. A
study by Williams (2017) suggests that making a legal case rather than a
business case is more likely to prompt organizations to act equitably
because the legal case has a greater normative influence on individual
values, beliefs and behaviors that are related to inequality. The legal
case works, inter alia, because it has a deep moral grounding (Williams,
2017). Similar to the social case, the legal case fosters the belief that
diversity and inclusiveness are morally correct. Thus, legal and social
cases may be used in concert to stimulate moral beliefs that can help
hiring organizations keep their biases in check and act in support of
equality and diversity. Technologies such as robot-mediated job in-
terviews may thus stand a better chance if presented as legal and social
cases. While the robot-mediated job interview may not be able to solve
the root causes of discrimination, as our mini-public findings suggest,
they may be able to act as moral primers and help install the “values first
principles” (Seibt and Vestergaard, 2018; Van den Hoven, 2005) in the
selection process to achieve fair treatment and equal opportunities—the
high-ranking moral values that seem to be difficult to live up to in
personnel selection due to implicit biases present in human–human
interaction and institutionalized discrimination in hiring.
5.3. Future research
While our results in the mini-public poll showed a positive change in
attitude toward robot mediation in job interviews, our data did not
reveal the underlying reasons for this change. During the mini-public,
the participants had a chance to reflect on the benefits and drawbacks
of the robot-mediated interview, and they also considered variations to
the proposed robot-mediated setup. It is possible that these deliberations
changed the participants’ perception of, for instance, ease of use, use-
fulness, and/or novelty, all of which are known to determine the atti-
tudes toward and acceptance of new technologies (e.g., Rogers, 1995;
Venkatesh and Davis, 2000; Wells et al., 2010). Social robots necessarily
entail a novel experience since most people have had no or very limited
interactions with them. In addition, the very nature of social robots
makes them particularly novel. The contradictory combination of
animate and inanimate features that characterize social robots in-
tensifies the novelty experience because it makes social robots difficult
to categorize ontologically and because our prior knowledge is inade-
quate to make sense of the experience when we engage in interactions
with them (Smedegaard, 2019). Further studies are indeed necessary to
better understand the first reactions of people to social robots in job
interviews and to examine whether these reactions change as the nov-
elty decreases.
In a recent study comparing asynchronous digital interviews
(involving no live interaction) to video conference interviews, the lack
of direct interaction was perceived as a little “creepy” (Langer et al.,
2017). The authors explained this by referring to the novelty of the
technology. Similarly, the robot-mediated interview is a fundamentally
novel concept that can be perceived as somewhat outlandish the first
time a person encounters it. The digital interview, which involves
recording (and storing videos and information), also challenged the
applicants’ perceptions of privacy and negatively affected the appli-
cants’ perceptions of procedural justice (Langer et al., 2017). These two
last shortcomings of the digital interview, however, are not present in
the robot-mediated interview since the same robot-mediated setup is
used for all (consistency), the robot’s appearance is neutral (bias sup-
pression), it seeks to transmit objective information (accuracy), infor-
mation can be modified in situ (correctability), both parties are present
during the interview (representativeness) and it has decency as a goal
(as it instantiates existing ethical principles). With all these potential
advantages of the robot-mediated employment interview, future
research should examine whether fairness perceptions of such in-
terviews increase as the novelty effect wears off.
Along with the potential for improving applicants’ perceived fair-
ness, undesirable effects of robot-mediated interviews may also emerge.
Among such effects may be a sense of greater distance and social
disconnection, as well as reduced naturalness of the interaction, all of
which may impact the interaction and communication quality between
interviewers and candidates. Relatedly, maximizing applicant fairness
perceptions by relying on robot-mediated job interviews may entail
trade-offs, as our mini-public also indicated. Future research should seek
to identify potential trade-offs and understand them in terms of, e.g.,
their ethical, psychological, business and social effects.
Additionally, it could be possible that different groups (e.g., minor-
ities, mature jobseekers) may respond differently to using fair proxy
communication in personnel selection and thus have different experi-
ences in the organizations that hire them. Further research is needed to
shed light on how being hired in this way affects an applicant’s later
experiences in the new organization, for instance, in relation to the
person’s career development, work relationships, psychological well-
being, and work performance. Additional research into (a) whether
formalizing such selection procedures is, for example, able to create
more workplace satisfaction, inclusion and engagement in organizations
and (b) whether such effects are of a short-term or a long-term character
is also needed.
Last but not least, our findings also give rise to considerations related
to the design of social robots for job interviews. To successfully extend
HRI to a vital business function such as personnel selection, the robot’s
design should be reconsidered. Our results suggest that the HR pro-
fessionals are likely to adopt robots provided they are autonomous (and
not teleoperated), as such robots were perceived to offer potential for
cost-saving and freeing time for the HR personnel to do other important
tasks. Our data indicated that robotics could help HR professionals
rethink and reconceptualize the interview situation, and an autonomous
robot could be a way of bridging the issues related to moral pragmatism.
Nonetheless, from the perspective of applicants, a telepresence robot
means that applicants communicate with a human interviewer via a
robot rather than with “just” a robot. This may result in different
applicant perceptions, conversational dynamics and communication
quality. Future research should therefore investigate HR professionals’
and applicants’ reactions to different types of robots in job interviews.
5.4. Limitations
This study is not without limitations. First, although mini-publics
(Study 1) are not expected to be a statistically representative sample
of the entire population, they are aimed at representing different
viewpoints on the issue at hand (Brown, 2006). In our mini-public, we
observed a predominance of HR professionals, which thus limited the
variety of viewpoints but provided strong insights into the HR
perspective on the robot-mediated interview. While our open invitation
to the mini-public did not restrict access to anyone and gave everyone an
equal chance to be included (Goodin and Dryzek, 2006), such open in-
vitations are subject to a self-selection bias, which was the likely cause of
the overrepresentation of HR professionals. Second, the mini-public poll
was designed to track any changes in attitudes (a key feature of this
method) rather than measure personality traits and affinities. None-
theless, including such measures in Study 1 may have helped explain the
reasons behind the documented attitude change. Even so, the trait of
personal innovativeness was not found to have an effect in Study 2.
Third, several scales were translated and adapted into Danish in Study 2.
Such culturally adapted scales need to be validated. Fourth, the external
validity of Study 2 may be lower than it would have been if the re-
spondents had experienced the robot-mediated interview first-hand. To
further address the external validity of this study, future studies based
on real-life job interviews and subsequent applicant evaluations might
allow for comparison with these findings for possible deviations. Fifth,
even if we accept that the Telenoid is gender and age neutral, the Tel-
enoid may not be perceived as ethnically or voice neutral. Finally, wit-
nessing an interaction with a robot is likely a new experience for the
majority of respondents, which may predispose them to react with
surprise or indecision—reactions that may be intensied when taking
the third-person perspective (Kahn et al., 2011; Turkle, 2011). The exact
nature and impact of these issues on the present study cannot be
determined.
6. Conclusion
By relying on a mixed-method approach, this paper examined how
the use of new technology during employment interviews affects em-
ployers’ and applicants’ fairness perceptions. Using a robot as a fair
proxy in the employment interview is a novel approach for conducting
interviews and has not yet been experienced by employers and appli-
cants. Study 1 revealed four factors that influence their attitudes toward
and perceptions of the robot-mediated interview. The study further
suggested that the HR professionals’ focus on the business case rationale
related to robot mediation in interviews triggers moral pragmatism—in
the sense of a pragmatically justified prioritization of perspectival moral
evaluations—and undermines the social case related to this type of
interview. Although the mini-public showed predominantly negative
perceptions of the robot-mediated job interview, considering the
participant composition (the majority of whom were HR professionals),
we have argued that it was important to test the technique with active
jobseekers. Indeed, our experimental survey findings show that the
robot-mediated interview is perceived as fairer than the face-to-face
interview. As such, this job interview method deserves more attention
and further investigation.
Our research relates to this Special Issue in three ways. First, it ex-
amines an important form of social interaction in business settings,
namely, the job interview, in which a robot acts as a new technology-
based mediator. It thus reveals a future area of application for social
robots by uncovering possible gains and/or limitations associated with
extending HRI to a core business function such as recruitment, and serves
to inform and open up further research and product development in
recruitment and selection. Second, this paper shows how the social function of a robot as a mediator in job interviews exposes the clashing expectations and perceptions held by applicants and HR practitioners toward the interview itself; it also discusses how employee selection may benefit from robotics to align these opposing views and potentially improve fairness perceptions of job interviews. Third, this work relates to the consequences for HHI, as it addresses how the use of social robots in job interviews may affect recruiters' existing understanding of the job interview as a social activity by prompting them to rethink this important social and professional practice.
CRediT authorship contribution statement
Sladjana Nørskov: Conceptualization, Methodology, Investigation,
Formal analysis, Writing – original draft, Writing – review & editing,
Visualization. Malene F. Damholdt: Conceptualization, Methodology,
Investigation, Formal analysis, Writing – original draft, Writing – review
& editing, Visualization. John P. Ulhøi: Conceptualization, Methodol-
ogy, Investigation, Writing – original draft, Writing – review & editing.
Morten Berg Jensen: Formal analysis, Writing – original draft, Writing
– review & editing, Visualization. Mia Krogager Mathiasen: Investi-
gation, Formal analysis. Charles M. Ess: Writing – original draft,
Writing – review & editing. Johanna Seibt: Conceptualization, Writing
– review & editing, Funding acquisition.
Funding
This work is supported by a Carlsberg Foundation Semper Ardens
Grant (F16-0004). Any opinions, findings, conclusions, and/or recommendations expressed in this work are those of the authors and do not necessarily reflect the views of either the sponsor or the employer(s) of
the authors. The usual disclaimers apply.
References
Aamodt, M., Brecher, E.G., Kutcher, E.J., Bragger, J.D., 2006. Do structured interviews eliminate bias? A meta-analytic comparison of structured and unstructured interviews. Poster presented at the Annual Meeting of the Society for Industrial-Organizational Psychology, Dallas, TX.
Agarwal, R., Prasad, J., 1998. A conceptual and operational definition of personal
innovativeness in the domain of information technology. Inf. Syst. Res. 9 (2),
204–215.
Amodio, D.M., 2014. The neuroscience of prejudice and stereotyping. Nat. Rev. Neurosci.
15 (10), 670–682.
Applbaum, A.I., 1999. Ethics for Adversaries: The Morality of Roles in Public and
Professional Life. Princeton University Press, Princeton, NJ.
Arneson, R., 2015. Equality of opportunity. In: Zalta, E.N. (Ed.), The Stanford
Encyclopedia of Philosophy. https://plato.stanford.edu/entries/equal-opportunity/.
Bauer, T.N., Maertz Jr, C.P., Dolen, M.R., Campion, M.A., 1998. Longitudinal assessment
of applicant reactions to employment testing and test outcome feedback. J. Appl.
Psychol. 83 (6), 892–903.
Bauer, T.N., Truxillo, D.M., Sanchez, R.J., Craig, J.M., Ferrara, P., Campion, M.A., 2001.
Applicant reactions to selection: development of the selection procedural justice
scale (SPJS). Pers. Psychol. 54 (2), 388–420.
Behrend, T., Toaddy, S., Thompson, L.F., Sharek, D.J., 2012. The effects of avatar
appearance on interviewer ratings in virtual employment interviews. Comput. Hum.
Behav. 28 (6), 2128–2133.
Belendez, M., Bernabeu, A., López, S., Topa, G., 2018. Psychometric properties of the Spanish version of the Core Self-Evaluations Scale (CSES-SP). Pers. Individ. Differ. 122, 195–197.
Bencharit, L.Z., Ho, Y.W., Fung, H.H., Yeung, D.Y., Stephens, N.M., Romero-Canyas, R.,
Tsai, J.L., 2018. Should job applicants be excited or calm? The role of culture and
ideal affect in employment settings. Emotion 19 (3), 377–401.
Bentler, P.M., 1990. Comparative fit indexes in structural models. Psychol. Bull. 107 (2),
238–246.
Bertrand, M., Mullainathan, S., 2004. Are Emily and Greg more employable than Lakisha
and Jamal? A field experiment on labor market discrimination. Am. Econ. Rev. 94
(4), 991–1013.
Bigman, Y.E., Wilson, D., Arnestad, M.N., Waytz, A., Gray, K., 2020. Algorithmic Discrimination Causes Less Moral Outrage Than Human Discrimination. https://psyarxiv.com/m3nrp/.
Bohnet, I., van Geen, A., Bazerman, M., 2016. When performance trumps gender bias:
joint vs. separate evaluation. Manag. Sci. 62 (5), 1225–1234.
Bragger, J.D., Kutcher, E., Morgan, J., Firth, P., 2002. The effects of the structured
interview on reducing biases against pregnant job applicants. Sex Roles 46 (7-8),
215–226.
Breazeal, C.L., 2002. Designing Sociable Robots. MIT Press, Cambridge, MA.
Brown, M., 2006. Survey article: citizen panels and the concept of representation.
J. Political Philos. 14 (2), 203–225.
Brown, T.A., 2015. Confirmatory Factor Analysis for Applied Research. Guilford Press,
New York, NY.
Chang, C.-H., Ferris, D.L., Johnson, R.E., Rosen, C.C., Tan, J.A., 2012. Core self-
evaluations: a review and evaluation of the literature. J. Manag. 38 (1), 81–128.
Chapman, D.S., Rowe, P.M., 2001. The impact of videoconference technology, interview
structure, and interviewer gender on interviewer evaluations in the employment
interview: a field experiment. J. Occup. Organ. Psychol. 74 (3), 279–298.
Cohen, J., 1988. Statistical Power Analysis for the Behavioral Sciences. Lawrence
Erlbaum Associates, Hillsdale, NJ.
Colquitt, J.A., Conlon, D.E., Wesson, M.J., Porter, C.O., Ng, K.Y., 2001. Justice at the
millennium: a meta-analytic review of 25 years of organizational justice research.
J. Appl. Psychol. 86 (3), 425–445.
Cotton, J.L., O’Neill, B.S., Griffin, A., 2008. The “name game”: affective and hiring reactions to first names. J. Manag. Psychol. 23 (1), 18–39.
Cuddy, A.J.C., Wilmuth, C.A., Yap, A.J., Carney, D.R., 2015. Preparatory power posing
affects nonverbal presence and job interview performance. J. Appl. Psychol. 100 (4),
1286–1295.
Dana, J., Dawes, R., Peterson, N., 2013. Belief in the unstructured interview: the
persistence of an illusion. Judgm. Decis. Mak. 8 (5), 512–520.
Dane, E., Pratt, M.G., 2007. Exploring intuition and its role in managerial decision
making. Acad. Manag. Rev. 32 (1), 33–54.
de Kock, F.S., Hauptfleisch, D.B., 2018. Reducing racial similarity bias in interviews by
increasing structure: A quasi-experiment using multilevel analysis. International
Perspectives in Psychology: Research, Practice, Consultation, 7 (3), 137–154.
DeGroot, T., Motowidlo, S.J., 1999. Why visual and vocal interview cues can affect
interviewers’ judgments and predict job performance. J. Appl. Psychol. 84 (6),
986–993.
Dobbin, F., Kalev, A., 2016. Why Diversity Programs Fail and What Works Better.
Harvard Business Review. Harvard Business Publishing, Brighton, MA.
Dobbin, F., Schrage, D., Kalev, A., 2015. Rage against the iron cage: the varied effects of
bureaucratic personnel reforms on diversity. Am. Sociol. Rev. 80 (5), 1014–1044.
Drašček, M., Buhovac, A.R., Andolšek, D.M., 2021. Moral pragmatism as a bridge between duty, utility, and virtue in managers’ ethical decision-making. J. Bus. Ethics 172 (4), 803–819.
Druckman, D., Adrian, L., Damholdt, M.F., Filzmoser, M., Koszegi, S.T., Seibt, J.,
Vestergaard, C., 2021. Who is best at mediating a social conflict? Comparing robots,
screens and humans. Group Decis. Negot. 30 (2), 395–426.
Eisenhardt, K.M., Graebner, M.E., Sonenshein, S., 2016. Grand challenges and inductive
methods: rigor without rigor mortis. Acad. Manag. J. 59 (4), 1113–1123.
Eyssel, F., Hegel, F., 2012. (S)he’s got the look: gender stereotyping of robots. J. Appl.
Soc. Psychol. 42 (9), 2213–2230.
Friedman, K.E., Reichmann, S.K., Costanzo, P.R., Zelli, A., Ashmore, J.A., Musante, G.J.,
2005. Weight stigmatization and ideological beliefs: relation to psychological
functioning in obese adults. Obes. Res. 13 (5), 907–916.
Gaddis, S.M., 2015. Discrimination in the credential society: an audit study of race and
college selectivity in the labor market. Social Forces 93 (4), 1451–1479.
García, M.F., Posthuma, R.A., Colella, A., 2008. Fit perceptions in the employment
interview: the role of similarity, liking, and expectations. J. Occup. Organ. Psychol.
81 (2), 173–189.
Gilliland, S.W., 1993. The perceived fairness of selection systems: an organizational
justice perspective. Acad. Manag. Rev. 18 (4), 694–734.
Goldin, C., Rouse, C., 2000. Orchestrating impartiality: the impact of "blind" auditions on
female musicians. Am. Econ. Rev. 90 (4), 715–741.
Gong, L., 2008. The boundary of racial prejudice: comparing preferences for computer-
synthesized White, Black, and robot characters. Comput. Hum. Behav. 24 (5),
2074–2093.
Goodin, R.E., Dryzek, J.S., 2006. Deliberative impacts: the macro-political uptake of
mini-publics. Politics Soc. 34 (2), 219–244.
Gore, J., Sadler-Smith, E., 2011. Unpacking intuition: a process and outcome framework.
Rev. Gen. Psychol. 15 (4), 304–316.
Grant, S., Mizzi, T., 2014. Body weight bias in hiring decisions: identifying explanatory
mechanisms. Soc. Behav. Pers. 42 (3), 353–370.
Graves, L.M., Powell, G.N., 1996. Sex similarity, quality of the employment interview
and recruiters’ evaluation of actual applicants. J. Occup. Organ. Psychol. 69 (3),
243–261.
Grove, W.M., Zald, D.H., Lebow, B.S., Snitz, B.E., Nelson, C., 2000. Clinical versus
mechanical prediction: a meta-analysis. Psychol. Assess. 12 (1), 19–30.
Gu, H., Wen, Z., Fan, X., 2015. The impact of wording effect on reliability and validity of
the Core Self-Evaluation Scale (CSES): a bi-factor perspective. Pers. Individ. Differ.
83, 142–147.
Hancock, G., 2001. Effect size, power, and sample size determination for structured
means modeling and MIMIC approaches to between-groups hypothesis testing of
means on a single latent construct. Psychometrika 66 (3), 373–388.
Heilman, M.E., Saruwatari, L.R., 1979. When beauty is beastly: the effects of appearance
and sex on evaluations of job applicants for managerial and nonmanagerial jobs.
Organ. Behav. Hum. Perform. 23 (3), 360–372.
Heney, D., 2016. Toward a Pragmatist Metaethics. Routledge, London, UK.
Hewlett, S.A., Marshall, M., Sherbin, L., 2013. How Diversity Can Drive Innovation. Harvard Business Review. https://hbr.org/2013/12/how-diversity-can-drive-innovation.
Highhouse, S., 2008. Stubborn reliance on intuition and subjectivity in employee
selection. Ind. Organ. Psychol. Perspect. Sci. Pract. 1 (3), 333–342.
Hinton, P., 2017. Implicit stereotypes and the predictive brain: cognition and culture in
“biased” person perception. Palgrave Commun. 3, 1–9, 17086.
Hofstra, B., Kulkarni, V.V., Galvez, S.M.N., He, B., Jurafsky, D., McFarland, D.A., 2020.
The diversity–innovation paradox in science. Proc. Natl. Acad. Sci. U. S. A. 117 (17),
9284–9291.
Holgersson, C., 2013. Recruiting managing directors: doing homosociality. Gend. Work
Organ. 20 (4), 454–466.
Homan, A.C., van Knippenberg, D., Van Kleef, G.A., De Dreu, C.K., 2007. Bridging
faultlines by valuing diversity: diversity beliefs, information elaboration, and
performance in diverse work groups. J. Appl. Psychol. 92 (5), 1189–1199.
Howard, J.L., Ferris, G.R., 1996. The employment interview context: social and
situational influences on interviewer decisions. J. Appl. Soc. Psychol. 26 (2),
112–136.
Huffcutt, A.I., 2011. An empirical review of the employment interview construct
literature. Int. J. Sel. Assess. 19 (1), 62–81.
Ishiguro, H., 2016. Transmitting Human Presence Through Portable Teleoperated Androids: A Minimal Design Approach. In: Nishida, T. (Ed.), Human-Harmonized Information Technology. Springer, Tokyo, pp. 29–56.
Johnson, S.K., Podratz, K.E., Dipboye, R.L., Gibbons, E., 2010. Physical attractiveness
biases in ratings of employment suitability: tracking down the “beauty is beastly”
effect. J. Soc. Psychol. 150 (3), 301–318.
Judge, T.A., Erez, A., Bono, J.E., Thoresen, C.J., 2003. The core self-evaluations scale:
development of a measure. Pers. Psychol. 56 (2), 303–331.
Kahn, P.H., Jr., Reichert, A.L., Gary, H.E., Kanda, T., Ishiguro, H., Shen, S., Ruckert, J.H.,
Gill, B., 2011. The New Ontological Category Hypothesis in Human-Robot
Interaction. In: Proceedings of the 6th ACM/IEEE International Conference on
Human-Robot Interaction. Association for Computing Machinery, New York,
pp. 159–160.
Kang, S.K., DeCelles, K.A., Tilcsik, A., Jun, S., 2016. Whitened résumés: race and self-presentation in the labor market. Adm. Sci. Q. 61 (3), 469–502.
Kaplan, S., 2020. Beyond the business case for social responsibility. Acad. Manag. Discov.
6 (1), 1–4.
Kausel, E.E., Culbertson, S.S., Madrid, H.P., 2016. Overconfidence in personnel selection:
when and why unstructured interview information can hurt hiring decisions. Organ.
Behav. Hum. Decis. Process. 137, 27–44.
Kouchaki, M., Smith-Crowe, K., Brief, A.P., Sousa, C., 2013. Seeing green: mere exposure
to money triggers a business decision frame and unethical outcomes. Organ. Behav.
Hum. Decis. Process. 121 (1), 53–61.
Kuncel, N.R., Klieger, D.M., Connelly, B.S., Ones, D.S., 2013. Mechanical versus clinical
data combination in selection and admissions decisions: a meta-analysis. J. Appl.
Psychol. 98 (6), 1060–1072.
Kutcher, E.J., Bragger, J.D., 2004. Selection interviews of overweight job applicants: can
structure reduce the bias? J. Appl. Soc. Psychol. 34 (10), 1993–2022.
Lai, C.K., Skinner, A.L., Cooley, E., Murrar, S., Brauer, M., Devos, T., Calanchini, J.,
Xiao, Y.J., Pedram, C., Marshburn, C.K., Simon, S., Blanchar, J.C., Joy-Gaba, J.A.,
Conway, J., Redford, L., Klein, R.A., Roussos, G., Schellhaas, F.M.H., Burns, M.,
Nosek, B.A., 2016. Reducing implicit racial preferences: II. Intervention effectiveness
across time. J. Exp. Psychol. Gen. 145 (8), 1001–1016.
Langer, M., König, C.J., Krause, K., 2017. Examining digital interviews for personnel selection: applicant reactions and interviewer ratings. Int. J. Sel. Assess. 25 (4), 371–382.
Leventhal, G.S., 1980. What should be done with equity theory? New approaches to the study of fairness in social relationships. In: Gergen, K., Greenberg, M., Willis, R. (Eds.), Social Exchange: Advances in Theory and Research. Plenum, New York, pp. 27–55.
Li, J., 2015. The benefit of being physically present: a survey of experimental works
comparing copresent robots, telepresent robots and virtual agents. Int. J. Hum.
Comput. Stud. 77, 23–37.
Lind, E., Tyler, T., Huo, Y., 1997. Procedural context and culture: variation in the
antecedents of procedural justice judgments. J. Pers. Soc. Psychol. 73 (4), 767–780.
Locke, K., 2001. Grounded Theory in Management Research. Sage Publications, London.
Macan, T., 2009. The employment interview: a review of current studies and directions
for future research. Hum. Resour. Manag. Rev. 19 (3), 203–218.
Marchetti, S., 2021. Introduction to pragmatist ethics: theory and practice. Eur. J.
Pragmatism Am. Philos. 13 (2), 8–15.
Martin, J., Brickman, P., Murray, A., 1984. Moral outrage and pragmatism: explanations
for collective action. J. Exp. Soc. Psychol. 20 (5), 484–496.
Martínez, R., Rodriguez-Bailon, R., Moya, M., Vaes, J., 2017. How do different
humanness measures relate? Confronting the attribution of secondary emotions,
human uniqueness, and human nature traits. J. Soc. Psychol. 157 (2), 165–180.
McCarthy, J.M., Bauer, T.N., Truxillo, D.M., Anderson, N.R., Costa, A.C., Ahmed, S.M.,
2017. Applicant perspectives during selection: a review addressing “so what?,”
“what’s new?,” and “where to next?”. J. Manag. 43 (6), 1693–1725.
McLarty, B.D., Whitman, D.S., 2016. A dispositional approach to applicant reactions:
examining core self-evaluations, behavioral intentions, and fairness perceptions.
J. Bus. Psychol. 31 (1), 141–153.
Muthén, L.K., Muthén, B.O., 2012. Mplus User’s Guide. Muthén & Muthén, Los Angeles, CA.
Nørskov, S., Damholdt, M.F., Ulhøi, J.P., Jensen, M.B., Ess, C.M., Seibt, J., 2020.
Applicant fairness perceptions of a robot-mediated job interview: A video vignette-
based experimental survey. Front. Rob. AI 7 (163), 586263.
Paulhus, D.L., Westlake, B.G., Calvez, S.S., Harms, P.D., 2013. Self-presentation style in
job interviews: the role of personality and culture. J. Appl. Soc. Psychol. 43 (10),
2042–2059.
Purkiss, S.L.S., Perrewé, P.L., Gillespie, T.L., Mayes, B.T., Ferris, G.R., 2006. Implicit sources of bias in employment interview judgments and decisions. Organ. Behav. Hum. Decis. Process. 101 (2), 152–167.
Quillian, L., Pager, D., Hexel, O., Midtbøen, A.H., 2017. Meta-analysis of field experiments shows no change in racial discrimination in hiring over time. Proc. Natl. Acad. Sci. U. S. A. 114 (41), 10870–10875.
Reeves, B., Nass, C., 1996. The Media Equation: How People Treat Computers,
Television, and New Media Like Real People and Places. Cambridge University Press,
Cambridge, UK.
Rivera, L.A., 2012. Hiring as cultural matching: the case of elite professional service
firms. Am. Sociol. Rev. 77 (6), 999–1022.
Rivera, L.A., 2015. Go with your gut: emotion and evaluation in job interviews. Am. J.
Sociol. 120 (5), 1339–1389.
Roberts, J.J., Lightbody, R., Low, R., Elstub, S., 2020. Experts and evidence in
deliberation: scrutinising the role of witnesses and evidence in mini-publics, a case
study. Policy Sci. 53 (1), 3–32.
Roehrich, G., 2004. Consumer innovativeness: concepts and measurements. J. Bus. Res.
57 (6), 671–677.
Rogers, E.M., 1995. Diffusion of Innovations. The Free Press, New York.
Rosenthal, S.B., Buchholz, R.A., 1999. Toward New Directions in Business Ethics: Some
Pragmatic Pathways. In: Frederick, R.E. (Ed.), A Companion to Business Ethics.
Blackwell, Oxford, pp. 112–127.
Ruffle, B.J., Shtudiner, Z.E., 2015. Are good-looking people more employable? Manag.
Sci. 61 (8), 1760–1776.
Ryan, A., Huth, M., 2008. Not much more than platitudes? A critical look at the utility of
applicant reactions research. Hum. Resour. Manag. Rev. 18 (3), 119–132.
Ryan, A.M., Ployhart, R.E., 2000. Applicants’ perceptions of selection procedures and
decisions: a critical review and agenda for the future. J. Manag. 26 (3), 565–606.
Rynes, S., Colbert, A., Brown, K., 2002. HR professionals’ beliefs about effective human
resource practices. Hum. Resour. Manag. 41 (2), 149–174.
Saldaña, J., 2013. The Coding Manual for Qualitative Researchers. Sage, London, UK.
Schuler, H., 1993. Social Validity of Selection Situations: A Concept and Some Empirical
Results. In: Schuler, H., Farr, J.L., Smith, M. (Eds.), Personnel Selection and
Assessment: Individual and Organizational Perspectives. Erlbaum, Hillsdale, NJ,
pp. 41–55.
Schumann, P.L., 2001. A moral principles framework for human resource management
ethics. Hum. Resour. Manag. Rev. 11 (1), 93–111.
Sears, G.J., Zhang, H., Wiesner, W.H., Hackett, R.D., Yuan, Y., 2013. A comparative
assessment of videoconference and face-to-face employment interviews. Manag.
Decis. 51 (8), 1733–1752.
Seibt, J., Vestergaard, C., 2018. Fair proxy communication: using social robots to modify
the mechanisms of implicit social cognition. Res. Ideas Outcomes 4, e31827.
Seo, S.H., Geiskkovitch, D., Nakane, M., King, C., Young, J.E., 2015. Poor Thing! Would
You Feel Sorry for a Simulated Robot? A Comparison of Empathy Toward a Physical
and a Simulated Robot. In: Proceedings of the Tenth Annual ACM/IEEE International
Conference on Human-Robot Interaction. Association for Computing Machinery,
Portland, Oregon, USA, pp. 125–132.
Smedegaard, C.V., 2019. Reframing the Role of Novelty within Social HRI: From Noise to
Information. In: 2019 14th ACM/IEEE International Conference on Human-Robot
Interaction (HRI). IEEE, Daegu, Korea, pp. 411–420.
Smith, G., Setälä, M., 2018. Mini-Publics and Deliberative Democracy. In: Bächtiger, A., Dryzek, J.S., Mansbridge, J., Warren, M. (Eds.), The Oxford Handbook of Deliberative Democracy. Oxford University Press, Oxford, UK, pp. 300–314.
Steenkamp, J.-B.E.M., Baumgartner, H., 1998. Assessing measurement invariance in
cross-national consumer research. J. Consum. Res. 25 (1), 78–90.
Steiger, J.H., 2016. Notes on the Steiger–Lind (1980) handout. Struct. Equ. Model.
Multidiscip. J. 23 (6), 777–781.
Steiger, J.H., Lind, J.M., 1980. Statistically based tests for the number of common factors. Paper presented at the Meeting of the Psychometric Society, Iowa City, IA.
Strauss, A., Corbin, J., 1998. Basics of Qualitative Research - Techniques and Procedures
for Developing Grounded Theory. Sage Publications, London.
Thompson, M.S., Green, S.B., 2013. Evaluating between-Group Differences in Latent
Variable Means. In: Hancock, G.R., Mueller, R.O. (Eds.), Structural Equation
Modeling: A Second Course. Information Age Publishing, Charlotte, NC,
pp. 163–218.
Tucker, L.R., Lewis, C., 1973. A reliability coefficient for maximum likelihood factor
analysis. Psychometrika 38 (1), 1–10.
Turkle, S., 2011. Alone Together: Why We Expect More From Technology and Less From
Each Other. Basic Books, New York.
Van den Hoven, M., 2005. Design for values and values for design. Information Age 7 (2), 4–7.
Venkatesh, V., Davis, F.D., 2000. A theoretical extension of the technology acceptance
model: four longitudinal field studies. Manag. Sci. 46 (2), 186–204.
Villadsen, A.R., Wulff, J.N., 2018. Is the public sector a fairer employer? Ethnic
employment discrimination in the public and private sectors. Acad. Manag. Discov. 4
(4), 429–448.
Wang, J., Cheng, G.H.L., Chen, T., Leung, K., 2019. Team creativity/innovation in
culturally diverse teams: a meta-analysis. J. Organ. Behav. 40 (6), 693–708.
Wells, J.D., Campbell, D.E., Valacich, J.S., Featherman, M., 2010. The effect of perceived
novelty on the adoption of information technology innovations: a risk/reward
perspective. Decis. Sci. 41 (4), 813–843.
Williams, J.B., 2017. Breaking down bias: Legal mandates vs. corporate interests. Wash.
Law Rev. 92, 1473–1513.
Zajonc, R.B., 1980. Feeling and thinking: preferences need no inferences. Am. Psychol.
35 (2), 151–175.
Zenger, M., Körner, A., Maier, G.W., Hinz, A., Stöbel-Richter, Y., Brähler, E., Hilbert, A., 2015. The core self-evaluation scale: psychometric properties of the German version in a representative sample. J. Pers. Assess. 97 (3), 310–318.