Highly automated interviews: Applicant reactions and the organizational context
Markus Langer (a), Cornelius J. König (a), Diana R. Sanchez (b), and Sören Samadi (a)
(a) Universität des Saarlandes
(b) San Francisco State University
(a) Markus Langer (https://orcid.org/0000-0002-8165-1803), (a) Cornelius J. König
(https://orcid.org/0000-0003-0477-8293), and (a) Sören Samadi, Fachrichtung Psychologie,
Universität des Saarlandes, Saarbrücken, Germany.
(b) Diana R. Sanchez, Workplace Technology Research Lab, San Francisco State University.
This work was supported by the German Federal Ministry for Education and Research
(BMBF; Project EmpaT, grant no. 16SV7229K). We thank the people from Charamel GmbH
for their continuous support and for providing us with the virtual character Gloria.
Correspondence concerning this article should be addressed to Markus Langer, Universität
des Saarlandes, Arbeits- & Organisationspsychologie, Campus A1 3, 66123 Saarbrücken,
Germany. Tel: +49 681 302 4767. E-mail: markus.langer@uni-saarland.de
This preprint version may not exactly replicate the final version published in the Journal of
Managerial Psychology. Copyright: Langer, M., König, C. J., Sanchez, D. R., & Samadi, S. (in
press). Highly automated interviews: Applicant reactions and the organizational context. Journal
of Managerial Psychology.
AUTOMATED INTERVIEWS AND THE ORGANIZATIONAL CONTEXT 2
ABSTRACT
Purpose: The technological evolution of job interviews continues as highly automated
interviews emerge as alternative approaches. Initial evidence shows that applicants react
negatively to such interviews. Additionally, there is emerging evidence that contextual
influences matter when investigating applicant reactions to highly automated interviews.
However, previous research has ignored higher-level organizational contexts (i.e., which kind
of organization uses the selection procedure) and individual differences (e.g., work
experience) regarding applicant reactions. This study therefore investigates applicant
reactions to highly automated interviews for students and employees and the role of the
organizational context when using such interviews.
Methodology: In a 2×2 online study, participants read organizational descriptions of either
an innovative or an established organization and watched a video displaying a highly
automated or a videoconference interview. Afterwards, participants responded to applicant
reaction items.
Findings: Participants (N = 148) perceived highly automated interviews as more consistent
but as conveying less social presence. The negative effect on social presence diminished
organizational attractiveness. The organizational context did not affect applicant reactions to
the interview approaches, whereas differences between students and employees emerged but
only regarding privacy concerns about the interview approaches.
Research implications: The organizational context seems to have negligible effects on
applicant reactions to technology-enhanced interviews. There were only small differences
between students and employees regarding applicant reactions.
Practical implications: In a tense labor market, hiring managers need to be aware of a trade-off between efficiency and applicant reactions regarding technology-enhanced interviews.
Originality: This study investigates high-level contextual influences and individual
differences regarding applicant reactions to highly automated interviews.
Keywords: personnel selection, job interview technology, automatic applicant assessment,
applicant reactions, organizational context
Introduction
Organizations modernize their processes to stay up-to-date and to convey an
innovative and attractive image (Chapman and Webster, 2003; Gatewood et al., 1993). This
modernization affects many management processes as well as personnel selection procedures.
For instance, organizations use digital interviews (applicants record responses to interview
questions and send them to the hiring organization; Torres and Mejia, 2017) to show that they
are attractive employers (cf., Chapman and Webster, 2003). However, previous research in
the area of job interviews has predominantly found that technologically-advanced interviews
detrimentally affect applicant reactions (Blacksmith et al., 2016; Langer et al., 2017).
Nonetheless, the technological evolution of the interview continues. Currently, the use of
highly automated interviews is burgeoning (Langer et al., 2019). Within such interviews,
sensors (cameras, microphones) in combination with algorithms and virtual visualization
automate the entire interview process (i.e., acquire information about applicants, evaluate
applicants’ performance, implement actions such as automatic selection of follow-up
questions, using virtual interviewers, cf., Langer et al., 2019).
So far, little is known about the effects of highly automated interviews on applicant
reactions. A study by Langer et al. (2019) indicates that negative applicant reactions might
aggravate the more interviews are automated. Additionally, research has emerged showing
that task-level contextual influences affect people’s reactions to highly automated tools (e.g.,
people react more favorably to the automation of mechanical tasks [work scheduling]
compared to human tasks [hiring], M.K. Lee, 2018). Up to now, higher-level organizational
contexts have been ignored within these studies. More precisely, in previous designs,
participants either applied for a single (hypothetical) organization (e.g., Langer et al., 2019)
or did not receive any information about the hiring organization (e.g., Sears et al., 2013). Yet
in reality, applicants inform themselves about organizations, associate organizations with
specific attributes (e.g., established or innovative; cf., Slaughter and Greguras, 2009), and
evaluate organizational attractiveness based on their perceived person-organization fit
(Chapman et al., 2005). It follows that the negative effects of automated interviews may only
occur in situations where applicants do not expect innovative selection procedures, implying
that highly automated interviews may be more accepted within the selection processes of
innovative organizations.
The goals of the current study are therefore (a) to further investigate the effects of
highly automated interviews on applicant reactions and (b) to clarify whether the
organizational context interacts with the interview approach such that highly automated
interviews are more accepted for innovative organizations. Finally, this study addresses a
limitation of prior studies that used student samples (e.g., Langer et al., 2019) by collecting
a more diverse sample to investigate differences in reactions to automated tools between
students and
employees. To achieve these goals, this study introduced students and full-time employees to
one of two organizations differing in their organizational description (innovative vs.
established organization) and to one of two interview approaches (highly automated
interview vs. videoconference interview).
Background and Hypotheses
Automation of job interviews
Technology for interviews has evolved in a way that established technology-mediated
interview approaches (e.g., videoconference interviews) appear rather old-fashioned. For
instance, within digital interviews, applicants respond via voice or video recordings (Langer
et al., 2017), and hiring managers can evaluate these recordings whenever they want. Using
machine learning algorithms, these recordings can also be evaluated automatically. For
example, the German company Precire automatically evaluates applicants’ voice recordings
(Precire, 2018), whereas the American company HireVue (HireVue, 2018) additionally
evaluates applicants' nonverbal behavior (e.g., smiling). There have also been initial attempts
to use virtual characters as interviewers to enhance the interpersonal touch of highly
automated interviews (cf., Langer et al., 2018; K.M. Lee and Nass, 2003).
These approaches have in common that they automate single parts of interviews up to
entire interview processes (Langer et al., 2019). Langer et al. (2019) used Parasuraman et al.'s
(2000) model of automation to describe the underlying idea behind highly automated
interviews. Automating interviews includes four processes that can vary from low to high
levels of automation: acquiring information, analyzing information, selecting and deciding
about actions, and implementing these actions (Parasuraman et al., 2000). Acquiring
information is the automation of collecting and extracting data (e.g., record interviews;
automatically extract [non]verbal information). Analyzing information means to evaluate the
automatically acquired data (e.g., having algorithms that evaluate applicant performance).
Selecting and deciding about actions means to build on the information analysis to decide
about further steps (e.g., automatically selecting follow-up questions). The final step of
automation is then to automatically implement these actions (e.g., present follow-up
questions) and to set up the interview (e.g., in a virtual environment). Similar to Langer et
al. (2019), this study introduces participants to a highly automated interview which includes
technology that allows high-level automation for every aforementioned step.
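The four-stage model above can be made concrete with a short sketch. The following Python snippet is purely illustrative: the numeric level scale, the threshold, and all names are assumptions for exposition, not part of Parasuraman et al.'s (2000) or Langer et al.'s (2019) work.

```python
# Sketch: Parasuraman et al.'s (2000) four automation stages applied to
# interviews, as described by Langer et al. (2019). The 0-10 level scale
# and the helper below are illustrative assumptions, not from the sources.
from dataclasses import dataclass

@dataclass
class InterviewAutomation:
    acquire: int    # e.g., record and automatically extract (non)verbal information
    analyze: int    # e.g., algorithmic evaluation of applicant performance
    decide: int     # e.g., automatic selection of follow-up questions
    implement: int  # e.g., virtual interviewer presents follow-up questions

    def is_highly_automated(self, threshold: int = 7) -> bool:
        """High-level automation at every stage (levels 0-10, assumed)."""
        stages = (self.acquire, self.analyze, self.decide, self.implement)
        return all(level >= threshold for level in stages)

videoconference = InterviewAutomation(acquire=2, analyze=0, decide=0, implement=0)
automated = InterviewAutomation(acquire=9, analyze=9, decide=8, implement=9)
print(videoconference.is_highly_automated(), automated.is_highly_automated())
```

On this reading, an interview counts as "highly automated" only when every stage, not just one, operates at a high level of automation.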
It is important to note that previous research tentatively supports the value of highly
automated interviews. For instance, Naim and colleagues (2015) found that automatic
evaluation of applicant behavior can predict interview performance. Nonetheless, there is
need for research regarding highly automated interviews which should expand from questions
regarding validity to ethical questions and investigating applicant reactions to such selection
procedures in different contexts (Langer et al., 2019).
Applicant reactions to automated interviews
Most applicant reaction research is based on Gilliland’s (1993) work on applicant
reactions to selection systems. Gilliland describes several procedural and distributive justice
rules. For instance, procedural justice is given if selection procedures appear job related, and
if they are administered to all applicants consistently. Distributive justice is given if
applicants perceive that they were able to influence outcomes of selection situations through
their behavior. Adhering to the aforementioned justice rules within selection procedures
should positively affect important outcomes such as organizational attractiveness (Chapman
et al., 2005). Since the current study investigates the reactions to different kinds of interview
procedures, it focuses on the procedural part of Gilliland’s model.
Previous research has shown that technology-enhanced interviews can enhance
efficiency and flexibility but can also lead to negative reactions (Blacksmith et al., 2016). For
instance, videoconference interviews are perceived as less fair and offering less opportunity
to perform than face-to-face interviews (Chapman et al., 2003; Sears et al., 2013).
Furthermore, highly automated interviews seem to evoke even less favorable reactions than
videoconference interviews (Langer et al., 2019) because of lower social presence.
Potosky’s (2008) framework of media attributes provides ideas about what
distinguishes different kinds of technology-enhanced interviews. Her framework covers four
attributes of communication media: social bandwidth, interactivity, transparency, and
surveillance. Social bandwidth describes the extent to which a medium allows users to send
and receive verbal and nonverbal information (Potosky, 2008). Videoconference interviews
provide social bandwidth as applicants are able to exchange a variety of social signals
(Chapman et al., 2003). Although the current paper describes a highly automated interview
in which applicants also send and receive communication information, even the most advanced
technology still cannot match the richness of in-person interpersonal
communication. For example, it is possible to let a virtual character smile. However, this
smile might still appear less natural than a human smile (cf., Kätsyri et al., 2015). Therefore,
social bandwidth should be lower in highly automated interviews. Interactivity in Potosky’s
framework describes the possibility for direct interactions. As with social
bandwidth, highly automated interviews (especially versions with virtual characters) offer
interactivity to some extent. However, they are not as interactive as a conversation with a
human being through videoconferencing. For instance, they provide less room for
backchanneling (e.g., nodding; Frauendorfer et al., 2014). The third aspect of Potosky’s
framework is transparency. Communication media are transparent if there are no (technical)
issues during communication and if people can ignore the fact that they are using media to
communicate (Potosky, 2008). Highly automated interviews likely remind people that they
are using media to communicate, whereas in the course of videoconference interviews, the
medium might become less apparent when applicants start to concentrate on the interviewer
(Langer et al., 2017). The last aspect of Potosky’s framework is surveillance. It constitutes
perceptions of how likely it is for a third party to access information about the
communication between communication parties (Potosky, 2008). For example, applicants
might fear that recordings of highly automated interviews are later watched by unauthorized
people (Langer et al., 2017).
Furthermore, highly automated and videoconference interviews might differ regarding
the interview set-up. This study follows the example of Langer et al. (2019) and uses a virtual
set-up with a virtual character. Even though virtual characters may positively affect social
presence (K.M. Lee and Nass, 2003), humans in videoconference interviews might still
convey more social presence. Furthermore, there might be negative effects if perceptions of a
virtual character fall into the uncanny valley (Mori et al., 2012). The uncanny valley
hypothesis proposes that realistic virtual characters might evoke negative feelings in humans,
possibly because there is a perceptual mismatch for the virtual characters’ behavior or
appearance (e.g., strange body proportions; Kätsyri et al., 2015).
Potosky’s framework suggests that highly automated interviews offer less social
bandwidth, interactivity, and transparency, and that they might evoke more pronounced
feelings of surveillance; a virtual interviewer might also negatively affect applicant reactions.
This predominantly points to negative applicant reactions towards highly automated interviews.
There might be negative consequences regarding perceived social presence within highly
automated interviews. Humans perceive social presence if there is interpersonal warmth and
empathy during an interaction (Walter et al., 2015). Usually, people convey
this through the exchange of nonverbal communication (Chapman et al., 2003). Therefore,
applicants might feel less social presence in highly automated interviews. Furthermore, the
differences regarding Potosky's aspects of transparency and surveillance might translate into
greater applicant concerns about privacy (i.e., concerns about data abuse; Smith et al.,
2011). If applicants are constantly aware that they will submit videos of themselves to an
organization without knowledge of who will access them, this might raise privacy concerns
(Langer et al., 2017).
In spite of these potential negative effects of highly automated interviews on applicant
reactions, people also tend to believe that computers are more objective than humans (Miller
et al., 2018). Applicants might believe that highly automated interviews make no
difference in how they treat applicants, in contrast to videoconference interviews, where
interviewers' behavior is likely influenced by applicants' characteristics (e.g., appearance;
Pingitore et al., 1994).
Based on the aforementioned theoretical assumptions, we propose:
Hypothesis 1: Participants will evaluate the highly automated interview as conveying less
social presence, being more consistent but evoking more privacy concerns.
Contextual influences on applicant reactions
Support for Hypothesis 1 would replicate the findings by prior research (e.g., Langer
et al. 2019; M.K. Lee, 2018). To advance research and inform practice, this study builds on
emerging research regarding contextual influences on applicant reactions. M.K. Lee showed
that people prefer human influence during human tasks and chose hiring and work
evaluations as examples for human tasks (in comparison to work scheduling and work
assignment as mechanical tasks). Langer et al. found that people favor automated tools for
low-stakes tasks (i.e., training vs. personnel selection). Consequently, the implications of these
studies remain at the task level, ignoring important higher-level influences such
as the organizational context. If the organizational context affects people's reactions, this
could lead to valuable insights because these effects might translate to many tasks within the
organizational context.
Applicants ascribe attributes to organizations (Slaughter and Greguras, 2009). One
important attribute of organizations which might affect which selection procedures they use is
innovation (Lievens and Highhouse, 2003). Original and creative organizations whose
success relies on technological innovation might be examples of innovative organizations
(Slaughter et al., 2004). They might operate in dynamic markets, where they need to adapt to
technological changes (e.g., computer science). In comparison, other organizations might be
perceived as more stable and established organizations that operate in industry sectors in
which continuity is valued by customers (e.g., insurance industry) (Slaughter and Greguras,
2009) and where there is also potentially less innovation. Previous research suggests that the
organizational context is a determining factor for applicants' attraction to a company
(Chapman et al., 2005). One reason is that applicants feel attracted to organizations where
they perceive themselves as a good fit (i.e., person-organization fit; Chapman et al., 2005).
For instance, some applicants feel attracted to innovative organizations because they like the
dynamic environment which requires adaptation to changes, whereas others prefer more
stable environments within established organizations (Slaughter and Greguras, 2009).
Despite the importance of the organizational context, the studies by M.K. Lee (2018)
and Langer et al. (2019) were not the only ones to omit it. Similarly, Sears and
colleagues (2013) did not provide any details on the hiring organization when evaluating
participants' reactions to different interview approaches. In application situations, however,
applicants use multiple sources to learn about the organization and to determine
organizations’ attractiveness as employers (Chapman et al., 2005). Applicants inform
themselves through the organizations’ homepage, job ads, and the selection procedures
organizations use (Gatewood et al., 1993). For instance, applicants perceive organizations
using digital interviews as more innovative (Torres and Mejia, 2017).
Even more importantly, there might be cases where applicants’ perceptions of an
organization and applicants’ perceptions of the applied selection procedures diverge
(Gatewood et al., 1993). For instance, before applicants enter a selection situation, they might
expect an organization to be rather traditional. If they are then confronted with innovative
selection procedures (e.g., highly automated interviews), this could violate applicants’
expectations of the organization, as they had expected an established organization to also use
established selection procedures (e.g., face-to-face interviews). Thus, the perceived job
relatedness of a selection procedure might differ depending on the organizational context. In
the case of innovative organizations, applicants might believe that innovative selection
approaches tell something about the future job at this organization. As attraction to
organizations depends on perceived person-organization fit (Chapman et al., 2005),
applicants in search for an established (innovative) organizational environment might be
irritated by experiencing an innovative (established) selection approach, which could then
negatively affect organizational attractiveness. Thus, we propose:
Hypothesis 2: Participants will perceive selection procedures as more job related and the
organization as more attractive when the selection procedure matches the perceived attributes
of an organization (e.g., highly automated selection in the selection process of an innovative
organization).
Finally, this study addresses a limitation from prior research. Previous studies used
student samples to investigate applicant reactions to technology-enhanced interviews (e.g.,
Sears et al., 2013), and there is speculation about whether reactions to highly automated
interviews might diverge between students and the working population (Langer et al., 2019).
Students might be more open to technology-enhanced approaches; they might believe that
such approaches are more job related than people who already have a job do; and there might
be differences in privacy concerns because of underlying individual differences between
students and workers (cf., Bauer et al., 2006). As most of these assumptions are speculative,
we therefore ask:
Research Question 1: Do students and workers react differently to highly automated
interviews?
Method
Sample
The authors consulted prior research by Blacksmith et al. (2016) and Langer et al.
(2017) to determine the required sample size. They found small to medium effect sizes for
applicant reactions comparing different forms of technology-enhanced interviews. Sample
size calculation with G*Power (Faul et al., 2009) revealed that, under the assumption of a
small to medium effect (η²p between .04 and .06), a sample of N = 125 to 191 would be
necessary for a power of 1-β = .80. Participants were recruited through social media and
direct contact. Data collection continued until the sample consisted of N = 154 participants.
Six participants were excluded from data analysis (e.g., not reading carefully, pausing the
experiment). The final sample consisted of N = 148 German participants (61% female). Of
these, 54% were employed in full-time jobs (those were the employees within the
independent variable students vs. employees), 39% were students (82% of these studied
psychology) and the rest were either apprentices or high school students. The mean age was
28.80 years (SD = 7.18). Regarding interview experience, 36% of participants had
experienced one to three job interviews before, 28% had experienced four to five interviews,
and 36% had experienced more than six interviews. Furthermore, 74% of participants had
experienced at least one
videoconference interview before. Student participants received course credit and all
participants had the chance to win a gift certificate for online shopping.
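The reported sample-size range can be approximated with standard power-analysis tools. The sketch below uses Python's statsmodels rather than G*Power (an assumption for illustration; the authors used G*Power), converting η²p to Cohen's f before solving for the total N:

```python
# Sketch: approximate the a priori power analysis reported above.
# The authors used G*Power; statsmodels is assumed here for illustration.
from math import sqrt
from statsmodels.stats.power import FTestAnovaPower

def total_n(eta_sq_p, alpha=0.05, power=0.80, k_groups=2):
    """Convert partial eta squared to Cohen's f and solve for the total N."""
    f = sqrt(eta_sq_p / (1 - eta_sq_p))
    return FTestAnovaPower().solve_power(
        effect_size=f, alpha=alpha, power=power, k_groups=k_groups)

print(round(total_n(0.06)))  # medium end of the assumed effect range
print(round(total_n(0.04)))  # small end of the assumed effect range
```

With η²p = .06 and .04, this yields total sample sizes close to the reported range of N = 125 to 191.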
Design, procedure, and manipulation
In a 2×2 design (videoconference vs. highly automated interview; established vs.
innovative organization), participants visited an online survey platform and were randomly
assigned to one of the conditions. Afterwards, they had to imagine that they were applying for a job.
Then, they received the respective description of the organization (Table 1). The descriptions
differed in 14 text elements that were designed to reflect either an established or an innovative
organization. A pre-test with N = 59 participants (innovative: n = 32; established: n = 27) was
conducted to verify that the descriptions were perceived as similarly attractive. Participants
in the pre-test were randomly assigned to one of the descriptions and responded to the same
organizational attractiveness items as the participants in the main study. Results indicate that
the organizations were indeed perceived as similarly attractive (innovative: M = 3.46, SD =
0.61; established: M = 3.60, SD = 0.66), t(57) = 0.84, p = .40, d = 0.22.
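The pre-test comparison can be reproduced from the summary statistics alone; a minimal sketch with SciPy (assumed here for illustration; the authors do not state their analysis software):

```python
# Sketch: independent-samples t-test recomputed from the reported
# summary statistics of the pre-test.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=3.46, std1=0.61, nobs1=32,   # innovative description
    mean2=3.60, std2=0.66, nobs2=27)   # established description
# close to the reported t(57) = 0.84, p = .40 (rounding of the
# published means explains small deviations)
print(abs(t), p)
```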
After reading the description in the main study, participants watched a video showing
either the videoconference or the highly automated interview. Both videos were taken from
Langer and colleagues (2019). They were edited in a way that participants watched a female
virtual/human interviewer interacting with a female applicant, who was only present through
voice. The virtual/human interviewer asked two questions. In the second question, the
applicant became nervous (i.e., hesitates to answer) and the respective interviewer said that
she noticed nervousness and tried to calm the interviewee (see also Langer et al., 2018,
2019). This way, participants should realize that all parts of the interview had been
automated, as described by Parasuraman et al. (2000) and Langer et al. (2019). Afterwards,
the applicant recovered and responded to the interview question. Then, the video faded out
without any further information about the outcome of the interview.
Measures
Participants responded to the items on a scale from 1 (strongly disagree) to 5
(strongly agree).
Social presence was measured with five items from Walter and colleagues (2015). A
sample item is “The interviewer acted empathically.”
Privacy concerns were measured with seven items, two were taken from Langer and
colleagues (2018) and five were taken from Langer and colleagues (2017). A sample item is
“Situations like the one shown threaten applicants’ privacy.”
Consistency and job relatedness were measured with three items each from a
German version of the Selection Procedural Justice Scale (Bauer et al., 2001; Warszta, 2012).
A sample item for consistency is “This procedure is administered to all applicants in the same
way.” A sample item for job relatedness is “Doing well in this interview means that a person
will also be good in the job.”
Organizational attractiveness was measured with twelve items from Highhouse and
colleagues (2003) and three more from Warszta (2012). A sample item is “This organization
is attractive.”
Manipulation check measure
To check if participants perceived the organizational description as intended (i.e.,
established vs. innovative), the item “The organization described itself as an innovative
organization” was included.
Results
Manipulation check
Participants in the innovative organization condition were more likely to perceive the
organization as innovative, t(126.13) = 10.87, p < .01, d = 1.81.
Testing the hypotheses
Table 2 provides an overview of descriptive statistics and correlations. Table 3 shows
the results of the ANOVAs. Hypothesis 1 stated that participants would evaluate the highly
automated interview as conveying less social presence, evoking more privacy concerns, and
being more consistent. Results of the ANOVAs indicated that participants perceived highly automated
interviews as slightly to moderately more consistent and as providing slightly to moderately
less social presence. There was no significant difference for privacy concerns. Hypothesis 1
was therefore partially supported.
Hypothesis 2 stated that participants would perceive selection procedures as more job
related and the organization as more attractive when the selection procedure matches the
perception of the organization. The ANOVAs revealed no interaction effects, suggesting that
a match of the selection procedure and the organizational image did not affect job relatedness
or organizational attractiveness.
Research Question 1 asked if students and employees differ in their reactions to
highly automated interviews. Therefore, we coded whether participants were working and included
this as an independent variable in the aforementioned ANOVAs. There were no significant
main effects for the difference between students and employees. The only significant effect
was an interaction between the independent variables students versus employees and the
interview type regarding privacy concerns. Students reported higher privacy concerns for
highly automated interviews whereas employees reported higher privacy concerns for
videoconference interviews, F(1, 140) = 1.80, p < .05, η²p = 0.03. However, this result should
be interpreted cautiously because when controlling for alpha-error accumulation (using
Bonferroni-correction) for five dependent variables, the interaction effect did not remain
significant (exact p = 0.028 compared to a corrected α = 0.01).
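The correction applied here is the standard Bonferroni adjustment; the arithmetic can be sketched as:

```python
# Sketch: Bonferroni correction for five dependent variables,
# as applied to the students-vs-employees interaction effect.
alpha = 0.05
n_tests = 5
alpha_corrected = alpha / n_tests       # 0.01
p_interaction = 0.028                   # exact p reported for the interaction
print(p_interaction < alpha)            # significant at the uncorrected alpha
print(p_interaction < alpha_corrected)  # not significant after correction
```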
Additional results
The results indicated that organizational attraction was lower for highly automated
interviews. To investigate this result further, a mediation analysis was conducted using
PROCESS (Hayes, 2013). Following suggestions by Hayes (2013), social presence, privacy
concerns, job relatedness, and consistency were included as parallel mediators and
organizational attractiveness as the outcome variable. Integrating the findings of Tables 4 and 5 shows that
the negative indirect effect of the highly automated interview on organizational attractiveness
through social presence was significant. That is, participants rated the organization using a
highly automated interview as less attractive because the interview conveyed less social presence.
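The bootstrap logic behind such an indirect-effect test can be sketched as follows. The data are simulated under assumed path coefficients (a negative a-path and a positive b-path); this is not the study's data, and a plain percentile bootstrap stands in for the bias-corrected interval that PROCESS reports.

```python
# Sketch of a bootstrap test for one indirect effect (interview type ->
# social presence -> organizational attractiveness). Simulated data with
# assumed path coefficients; NOT the study's data or exact PROCESS output.
import numpy as np

rng = np.random.default_rng(0)
n = 148                                            # sample size as in the study
x = rng.choice([-1.0, 1.0], size=n)                # -1 = videoconference, 1 = automated
m = -0.30 * x + rng.normal(0, 0.5, n)              # mediator: social presence (a-path < 0)
y = 0.50 * m - 0.05 * x + rng.normal(0, 0.5, n)    # outcome: attractiveness (b-path > 0)

def indirect_effect(x, m, y):
    """a * b from two OLS fits: M ~ X, then Y ~ M + X."""
    a = np.polyfit(x, m, 1)[0]                     # a-path: slope of M on X
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # b-path: M's slope in Y ~ M + X
    return a * b

est = indirect_effect(x, m, y)
boots = []
for _ in range(2000):                              # the paper used 10,000 resamples
    idx = rng.integers(0, n, size=n)               # resample cases with replacement
    boots.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])         # CI excluding zero -> significant
```

With these assumed paths the interval should fall entirely below zero, mirroring the significant negative indirect effect through social presence reported in Table 5.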
Discussion
The aims of the current study were to (a) further investigate applicant reactions to
highly automated interviews, (b) examine influences of the organizational context on
reactions to technology-enhanced interview approaches, and (c) explore potential differences
between students and employees regarding reactions to highly automated interviews. On the
positive side, highly automated interviews were perceived as more consistent than
videoconference interviews. However, participants also perceived highly automated
interviews as providing less social presence, which negatively affected organizational
attractiveness. Moreover, this finding was independent of the organizational context,
implying that a match between the perception of an organization and its selection procedures did
not lead to better perceptions of innovative interview approaches. Lastly, the results
tentatively indicate small differences between students' and employees'
privacy concerns within the different interview formats.
First, the findings support assumptions that people perceive machines to be more
consistent (Miller et al., 2018). Participants in the current study appeared to believe that
applicants in highly automated interviews might be treated more consistently than in
videoconference interviews. Therefore, variables related to consistency might be interesting
for future research on highly automated selection procedures. For instance, objectivity of
outcomes could be another variable that applicants favor in highly automated selection: as
human influence on selection decisions is minimized, perceptions of distributive justice
(i.e., feelings that outcomes are fair; Gilliland, 1993) could increase.
However, participants thought that the organization using the highly automated
interviews was less attractive because they perceived less social presence. This supports the
assumption that the highly automated interview offered less social bandwidth and
interactivity as defined by Potosky (2008), replicates findings by Langer et al. (2019), and is
in line with previous findings showing that higher levels of automation reduce interpersonal
justice perceptions (Langer et al., 2017). Consequently, evidence from different studies
indicates that technology-enhanced interviews are less accepted because of the very feature
that makes them more efficient and potentially more consistent: the minimization of human
influence. Hiring managers therefore need to weigh efficiency against applicant reactions
when they choose between different interview approaches.
Furthermore, students and employees differed regarding privacy concerns within the
interview formats. Privacy concerns rise when people are uncertain about what happens to
their data (Smith et al., 2011). Accordingly, students seemed to be more uncertain about the
use of their data in highly automated interviews, whereas for employees this was the case for
videoconference interviews. Even though this finding should be interpreted cautiously
because of potential alpha-error inflation, it suggests that work experience may play a role
in privacy concerns regarding different versions of technology-enhanced interviews, and it
calls for future research. It is also important to mention that there were no differences
between students and employees for any other outcome variable, which is good news both for
interpreting previous applicant reactions studies (findings likely generalize to more mature
samples) and for future research in this field (there is no exaggerated need for concern when
planning to use student samples).
Finally, this study investigated whether a match between perceptions of an organization and
its selection procedures affects applicant reactions (cf. Gatewood et al., 1993). It appears that
participants did not like the highly automated interview regardless of whether they imagined
applying to an innovative or an established organization. Therefore, the assumption that the
organizational context constitutes a higher-level contextual effect on applicant reactions, one
potentially more widely applicable than the task-level effects found by M.K. Lee (2018) and
Langer et al. (2019), must be dismissed for now.
Limitations
First, participants were not personally interviewed through a videoconference or an
automated interview. Therefore, findings could differ for live interviews (but see Langer et
al., 2019). Nevertheless, watching a video should be more immersive than merely imagining
being in an interview situation, which is another common approach in vignette studies
(Atzmüller and Steiner, 2010). Second, it is not yet clear what role the virtual character
played in the results of the current study. Following assumptions from human-computer
interaction research (e.g., K.M. Lee and Nass, 2003), omitting the virtual character would
have decreased the social presence of the highly automated interaction even more. This calls
for further research regarding various forms of highly automated interviews. Third,
participants might have perceived videoconference interviews as “innovative” selection tools
which could explain the non-significant findings regarding the interaction between the
organizational description and the interview approach. However, if a match between the
organization and its selection procedures had led to better applicant reactions, perceiving both
interview approaches as innovative should have produced a main effect for established vs.
innovative organizations, with innovative organizations perceived more favorably; this was
not the case.
Main practical implications
Organizations seem to be well-advised to check their existing selection approaches
regarding applicant reactions. Even if management automation systems enhance efficiency,
organizations should pay attention to the possible detrimental effects on their applicant pool.
This is especially true in times of a tense labor market, where every applicant is a potential
competitive advantage: applicants might withdraw their application if they are dissatisfied
with the way the interview process is handled and might advise friends against applying for a
job at an organization that uses automated interviews (Langer et al., 2019; Uggerslev et
al., 2012). Considering the results of the current study, not even innovative organizations
should rely on their image to buffer the negative reactions to technology-enhanced interview
approaches. Finally, organizations should consider which interview tools they use for a
given applicant pool, as the current results imply that applicants with different levels of
work experience may react differently to different forms of technology-enhanced
interviews.
Future research
It is still unclear how applicants behave when they are personally confronted with
highly automated interviews. For instance, applicants might be less motivated, use less
impression management, or provide qualitatively different answers, which might affect the
validity of highly automated interviews (Blacksmith et al., 2016).
Additionally, research regarding contextual influences on reactions to automated tools is
growing and there are still many open questions. For instance, the European Union introduced
the General Data Protection Regulation (GDPR) in 2018, and other countries are working on
comparable regulations regarding data security, gathering, and evaluation. It is unclear how
such regulations affect people's trust in organizations that use automated tools and algorithms.
Privacy concerns regarding automated tools are potentially shaped by such regulations and
might therefore differ in, for example, the US or China. Similarly, it is unclear how the use of
highly automated tools will affect management practice, law-making, and societies as a
whole. Even within the rather narrow field of technology-enhanced interviews, many legal,
moral, and ethical questions arise (cf. Zerilli et al., 2018). For instance, how can applicants be
sure that only valid and unbiased information enters automated processes? How can hiring
managers explain their decisions to applicants who were rejected by automated tools? And
how can people be enabled to understand automated tools?
Conclusion
As practice continues to develop innovative selection procedures, it is crucial that this
drive does not outpace scientific understanding of these methods. This study showed that
potential applicants perceive highly automated interviews, and the organizations using them,
rather negatively, and it raised questions about contextual and individual influences on
applicant reactions. Yet it is merely a small step toward catching up with the ongoing
automation of management processes. Hopefully, this study encourages more scholars to
research the emerging intersection between management and computer science.
References
Atzmüller, C. and Steiner, P.M. (2010), "Experimental vignette studies in survey research",
Methodology, Vol. 6, pp. 128–138. doi:10.1027/1614-2241/a000014
Bauer, T.N., Truxillo, D.M., Sanchez, R.J., Craig, J.M., Ferrara, P., and Campion, M.A.
(2001), "Applicant reactions to selection: Development of the Selection Procedural
Justice Scale (SPJS)", Personnel Psychology, Vol. 54, pp. 387–419.
doi:10.1111/j.1744-6570.2001.tb00097.x
Bauer, T.N., Truxillo, D.M., Tucker, J.S., Weathers, V., Bertolino, M., Erdogan, B., and
Campion, M.A. (2006), "Selection in the information age: The impact of privacy
concerns and computer experience on applicant reactions", Journal of Management,
Vol. 32, pp. 601–621. doi:10.1177/0149206306289829
Blacksmith, N., Willford, J.C., and Behrend, T.S. (2016), "Technology in the employment
interview: A meta-analysis", Personnel Assessment and Decisions, Vol. 2, Article 2.
doi:10.25035/pad.2016.002
Chapman, D.S., Uggerslev, K.L., Carroll, S.A., Piasentin, K.A., and Jones, D.A. (2005),
"Applicant attraction to organizations and job choice: A meta-analytic review of the
correlates of recruiting outcomes", Journal of Applied Psychology, Vol. 90, pp. 928–
944. doi:10.1037/0021-9010.90.5.928
Chapman, D.S., Uggerslev, K.L., and Webster, J. (2003), "Applicant reactions to face-to-face
and technology-mediated interviews: A field investigation", Journal of Applied
Psychology, Vol. 88, pp. 944–953. doi:10.1037/0021-9010.88.5.944
Chapman, D.S. and Webster, J. (2003), "The use of technologies in the recruiting, screening,
and selection processes for job candidates", International Journal of Selection and
Assessment, Vol. 11, pp. 113–120. doi:10.1111/1468-2389.00234
Frauendorfer, D., Schmid Mast, M., Nguyen, L., and Gatica-Perez, D. (2014), "Nonverbal
social sensing in action: Unobtrusive recording and extracting of nonverbal behavior
in social interactions illustrated with a research example", Journal of Nonverbal
Behavior, Vol. 38, pp. 231–245. doi:10.1007/s10919-014-0173-5
Gatewood, R.D., Gowan, M.A., and Lautenschlager, G.J. (1993), "Corporate image,
recruitment, and initial job choice decisions", Academy of Management Journal, Vol.
36, pp. 414–427. doi:10.2307/256530
Gilliland, S.W. (1993), "The perceived fairness of selection systems: An organizational justice
perspective", Academy of Management Review, Vol. 18, pp. 694–734.
doi:10.2307/258595
Hayes, A.F. (2013), Introduction to Mediation, Moderation, and Conditional Process Analysis:
A Regression-Based Approach, Guilford Press, New York, NY.
Highhouse, S., Lievens, F., and Sinar, E.F. (2003), "Measuring attraction to organizations",
Educational and Psychological Measurement, Vol. 63, pp. 986–1001.
doi:10.1177/0013164403258403
HireVue (2018), "HireVue OnDemand", available at:
https://www.hirevue.com/products/video-interviewing/ondemand (accessed 8 July 2018).
Kätsyri, J., Förger, K., Mäkäräinen, M., and Takala, T. (2015), "A review of empirical
evidence on different uncanny valley hypotheses: Support for perceptual mismatch as
one road to the valley of eeriness", Frontiers in Psychology, Vol. 6, Article 390.
doi:10.3389/fpsyg.2015.00390
Langer, M., König, C.J., and Fitili, A. (2018), "Information as a double-edged sword: The
role of computer experience and information on applicant reactions towards novel
technologies for personnel selection", Computers in Human Behavior, Vol. 81, pp.
19–30. doi:10.1016/j.chb.2017.11.036
Langer, M., König, C.J., and Krause, K. (2017), "Examining digital interviews for personnel
selection: Applicant reactions and interviewer ratings", International Journal of
Selection and Assessment, Vol. 25, pp. 371–382. doi:10.1111/ijsa.12191
Langer, M., König, C.J., and Papathanasiou, M. (2019), "Highly automated job interviews:
Acceptance under the influence of stakes", International Journal of Selection and
Assessment, Vol. 27, pp. 217–234. doi:10.1111/ijsa.12246
Lee, K.M. and Nass, C. (2003), "Designing social presence of social actors in human
computer interaction", Proceedings of the CHI 2003 Conference on Human
Factors in Computing Systems, Fort Lauderdale, FL, pp. 289–296.
doi:10.1145/642611.642662
Lee, M.K. (2018), "Understanding perception of algorithmic decisions: Fairness, trust, and
emotion in response to algorithmic management", Big Data & Society, Vol. 5.
doi:10.1177/2053951718756684
Lievens, F. and Highhouse, S. (2003), "The relation of instrumental and symbolic attributes
to a company's attractiveness as an employer", Personnel Psychology, Vol. 56, pp.
75–102. doi:10.1111/j.1744-6570.2003.tb00144.x
Miller, F.A., Katz, J.H., and Gans, R. (2018), "The OD imperative to add inclusion to the
algorithms of artificial intelligence", OD Practitioner, Vol. 50 No. 1, pp. 6–12.
Mori, M., MacDorman, K., and Kageki, N. (2012), "The uncanny valley", IEEE Robotics &
Automation Magazine, Vol. 19, pp. 98–100. doi:10.1109/MRA.2012.2192811
Naim, I., Tanveer, M.I., Gildea, D., and Hoque, M.E. (2015), "Automated analysis and
prediction of job interview performance: The role of what you say and how you say
it", 11th IEEE International Conference and Workshops on Automatic Face and
Gesture Recognition, Ljubljana, Slovenia, pp. 1–14. doi:10.1109/fg.2015.7163127
Parasuraman, R., Sheridan, T.B., and Wickens, C.D. (2000), "A model for types and levels of
human interaction with automation", IEEE Transactions on Systems, Man, and
Cybernetics - Part A: Systems and Humans, Vol. 30, pp. 286–297.
doi:10.1109/3468.844354
Pingitore, R., Dugoni, B.L., Tindale, R.S., and Spring, B. (1994), "Bias against overweight
job applicants in a simulated employment interview", Journal of Applied Psychology,
Vol. 79, pp. 909–917. doi:10.1037/0021-9010.79.6.909
Potosky, D. (2008), "A conceptual framework for the role of the administration medium in
the personnel assessment process", Academy of Management Review, Vol. 33, pp.
629–648. doi:10.5465/amr.2008.32465704
Precire (2018), "Precire Technologies", available at: https://www.precire.com/de/start/
(accessed 8 June 2018).
Sears, G., Zhang, H., Wiesner, W.D., Hackett, R.W., and Yuan, Y. (2013), "A comparative
assessment of videoconference and face-to-face employment interviews",
Management Decision, Vol. 51, pp. 1733–1752. doi:10.1108/MD-09-2012-0642
Slaughter, J.E. and Greguras, G.J. (2009), "Initial attraction to organizations: The influence
of trait inferences", International Journal of Selection and Assessment, Vol. 17, pp.
1–18. doi:10.1111/j.1468-2389.2009.00447.x
Slaughter, J.E., Zickar, M.J., Highhouse, S., and Mohr, D.C. (2004), "Personality trait
inferences about organizations: Development of a measure and assessment of
construct validity", Journal of Applied Psychology, Vol. 89, pp. 85–103.
doi:10.1037/0021-9010.89.1.85
Smith, H.J., Dinev, T., and Xu, H. (2011), "Information privacy research: An
interdisciplinary review", Management Information Systems Quarterly, Vol. 35, pp.
989–1015. doi:10.2307/41409970
Torres, E.N. and Mejia, C. (2017), "Asynchronous video interviews in the hospitality
industry: Considerations for virtual employee selection", International Journal of
Hospitality Management, Vol. 61, pp. 4–13. doi:10.1016/j.ijhm.2016.10.012
Uggerslev, K.L., Fassina, N.E., and Kraichy, D. (2012), "Recruiting through the stages: A
meta-analytic test of predictors of applicant attraction at different stages of the
recruiting process", Personnel Psychology, Vol. 65, pp. 597–660. doi:10.1111/j.1744-
6570.2012.01254.x
Walter, N., Ortbach, K., and Niehaves, B. (2015), "Designing electronic feedback: Analyzing
the effects of social presence on perceived feedback usefulness", International
Journal of Human-Computer Studies, Vol. 76, pp. 1–11.
doi:10.1016/j.ijhcs.2014.12.001
Warszta, T. (2012), "Application of Gilliland's model of applicants' reactions to the field of
web-based selection", unpublished doctoral dissertation, Christian-Albrechts-Universität
Kiel, Germany.
Zerilli, J., Knott, A., Maclaurin, J., and Gavaghan, C. (2018), "Transparency in algorithmic
and human decision-making: Is there a double standard?", Philosophy & Technology,
advance online publication. doi:10.1007/s13347-018-0330-6
Table 1.
Information Presented to the Participants in the Different Organizational Descriptions
Fuchs&Schulz Automotive is one of the oldest (an aspiring) and established (innovative)
automotive suppliers in Germany. The established (progressive) organization with subsidiaries in
China, Japan, the US, Switzerland, and the Netherlands has constantly (rapidly) expanded
since its foundation in 1954 (2001). As a family business in the third generation (former start-up) with
more than 1,300 employees and its headquarters in Frankfurt, F&S reached a revenue of 125.2
million Euro in 2016.
F&S offers a classical (innovative) and established (future-oriented) supplier concept and
substantial knowledge in the areas of logistics, sustainability and service. This way F&S
consolidated its position as internationally successful (a global player) and experienced company
(driver of innovation) on the market.
People are the focus of F&S as customers, partners, and employees. Trust and security
(innovation and creativity) is the groundwork of the traditional (future-oriented) organizational
culture. This crystallizes in stable, long-term (dynamic and fruitful) relations with the
customers.
Since the foundation of the company, F&S lives tradition (creativity). This way we build a bridge
between reliable consultancy and proximity to customers. Through extensive (interdisciplinary)
project experience, we ensure that we can provide you with consultants that exactly know and
understand the target market (penetrate the target market through constant progress).
Note. Information translated from German. Italic text pieces reflect the manipulation for the
traditional organization (not in italics in the original material for the participants). Text pieces in
brackets reflect the manipulation for the innovative organization.
Table 2.
Correlations and Cronbach’s Alpha for the Study Variables.
Scale                                 1       2       3       4       5       7       8
1. Social Presence                   .91
2. Privacy Concerns                 -.19*    .78
3. Consistency                      -.13     .04     .88
4. Job Relatedness                   .36**  -.12     .00     .89
5. Organizational Attractiveness     .60**  -.20*   -.09    -.33**   .95
6. Students vs. Employees            .00     .02    -.04    -.13    -.06
7. Interview                        -.23**   .02    -.20*   -.10    -.19*    -
8. Organization                     -.09     .04    -.04     .08    -.11    -.05     -
Note. Coding of students vs. employees: -1 = students, 1 = employees. Coding of interview: -1 =
videoconference, 1 = highly automated. Coding of organization: -1 = established organization, 1 =
innovative organization. N = 148. Numbers along the diagonal represent the Cronbach's alpha of
the scales.
*p < .05, **p < .01.
Table 3.
Means, Standard Deviations, and ANOVA Results (including partial η²) for the Dependent Variables.
                           VC-ES         VC-IN         AI-ES         AI-IN         VC vs. AI         ES vs. IN         Interaction
Variable                   M (SD)        M (SD)        M (SD)        M (SD)        F(1, 144)   η²p   F(1, 144)   η²p   F(1, 144)   η²p
Social Presence            3.07 (0.72)   2.99 (0.88)   2.77 (0.92)   2.48 (0.93)   7.96**      .05   1.15        .01   0.56        .00
Privacy Concerns           3.12 (0.51)   3.09 (0.63)   3.07 (0.71)   3.19 (0.57)   0.03        .00   0.21        .00   0.49        .00
Consistency                3.08 (0.73)   3.12 (0.87)   3.51 (0.94)   3.37 (0.67)   6.32*       .04   0.11        .00   0.44        .00
Job Relatedness            2.49 (0.91)   2.43 (0.79)   2.15 (0.76)   2.45 (0.78)   1.38        .01   0.82        .00   1.81        .01
Organizational Attraction  3.32 (0.53)   3.22 (0.66)   3.11 (0.74)   2.87 (0.79)   5.91*       .04   2.38        .02   0.40        .00
Note. VC = videoconference interviews condition, AI = highly automated interviews condition, ES = established organization condition,
IN = innovative organization condition. nVC-ES = 34, nVC-IN = 41, nAI-ES = 37, nAI-IN = 36.
*p < .05, **p < .01.
Table 4.
Regression Results for the Mediation of the Hypothesized Mediators between the Videoconference vs. Highly automated Interview Condition and
Organizational Attractiveness
Model                                                    R²    Coefficient    SE      p      95% Confidence Interval
Single models
  VC vs. AI → Social Presence                            .05      -.20       0.07    <.01    [-.34, -.06]
  VC vs. AI → Privacy Concerns                           .00       .01       0.05     .85    [-.09, .11]
  VC vs. AI → Consistency                                .04       .16       0.07    <.05    [.04, .30]
  VC vs. AI → Job Relatedness                            .01      -.08       0.07     .24    [-.21, .05]
  VC vs. AI → Organizational Attractiveness              .04      -.14       0.06    <.05    [-.25, -.02]
Complete model                                           .39        -         -      <.01       -
  Social Presence → Organizational Attractiveness                  .42       0.06    <.01    [.30, .53]
  Privacy Concerns → Organizational Attractiveness                -.10       0.08     .19    [-.26, .05]
  Consistency → Organizational Attractiveness                     -.01       0.06     .89    [-.12, .11]
  Job Relatedness → Organizational Attractiveness                  .11       0.06     .09    [-.02, .23]
  VC vs. AI → Organizational Attractiveness                       -.04       0.05     .39    [-.14, .05]
Note. AI = highly automated condition, VC = videoconference condition. Coding of the variable VC vs. AI: -1 = videoconference interview
condition, 1 = highly automated interview condition. The 95% confidence interval for the effects is obtained by the bias-corrected bootstrap with
10,000 resamples.
Table 5.
Results for the Indirect Effects of Videoconference vs. Highly automated Interview Condition over the Hypothesized Mediators on
Attractiveness of the Procedure
Model                                                              IEmed    SEBoot    95% Confidence Interval
Complete indirect effect                                           -.13      0.05     [-.24, -.03]
VC vs. AI → Social Presence → Organizational Attractiveness        -.12      0.05     [-.22, -.04]
VC vs. AI → Privacy Concerns → Organizational Attractiveness        .00      0.01     [-.03, .01]
VC vs. AI → Consistency → Organizational Attractiveness             .00      0.02     [-.03, .03]
VC vs. AI → Job Relatedness → Organizational Attractiveness        -.01      0.01     [-.05, .01]
Note. VC = videoconference interview condition, AI = highly automated interview condition. Coding of the variable VC vs. AI: -1 = video
conference condition, 1 = highly automated condition. The 95% confidence interval for the effects is obtained by the bias-corrected bootstrap
with 10,000 resamples. IEmed = completely standardized indirect effect of the mediation. SEBoot = standard error of the bootstrapped effect
sizes.
... Regarding privacy, only four studies investigated privacy concerns related to AI in HRM [27,46,[83][84][85]. Eckhaus Automated candidate screening or suggestions of which job candidate to invite for job interview or to hire [22,30,62,63,76,91,98,142,148,158] Automated suggestion of matching between job seekers and companies [88] Performance evaluation (i.e., groups or individuals based on organizational, individual, or pre-established goals) ...
... Rights reserved. [46] shows that scanning emails for data to feed an AI raises privacy concerns, Cayrat and Boxall [27] investigated how organizations implement mechanisms to ensure data privacy and comply with legal obligations (particularly the European General Data Protection Regulation (GDPR)), whereas Langer et al. [83][84][85] show that the degree of automation of job application process was slightly but positively related to applicants' privacy concerns. ...
Article
Full-text available
As it is the case for many business processes and activities disciplines, artificial intelligence (AI) is increasingly integrated in human resources management (HRM). While AI has great potential to augment the HRM activities in organizations, automating the management of humans is not without risks and limitations. The identification of these risks is fundamental to promote responsible use of AI in HRM. We thus conducted a review of the empirical academic literature across disciplines on the affordances and responsible principles of AI in HRM. This is the first review of responsible AI in HRM that focuses solely on studies containing observations, measurements, and tests about this phenomenon. The multi-domain and multidisciplinary approach and empirical focus provides a better understanding of the reality of the development, study, and deployment of AI in HRM and sheds light on how these are conducted responsibly. We conclude with a call for research based on what we identified as the most needed and promising avenues.
... Then we used principal component analysis (PCA) to identify two essential dimensions reflecting layperson users' perceptions to categorize different AI applications: human involvement and AI autonomy. Using these two dimensions, we clustered ten AI roles that are prevalent in our everyday life and have a direct impact on laypeople (e.g., AI writing assistant [96,139], AI human resource supervisor [48,100], AI police officer [56,190], AI doctor [43,195], etc.) into four clusters: AI tools (low in both dimensions), AI servants (high human involvement and low AI autonomy), AI assistants (low human involvement and high AI autonomy), and AI mediators (high in both dimensions). Finally, we investigated how these four AI role clusters were assessed in three important aspects closely associated with users' reliance on AI: credibility (i.e., trustworthiness, expertise, and goodwill), attitudes, and social approval [101]. ...
... People either directly communicate with these AIs, directly receive the service produced by the AI, or are directly impacted by the AI. These are AI writing assistant (e.g., [96,139]), auto-driving system (e.g., [75,78]), customer service representative (e.g., [76,111]), personal assistant embedded in phones (e.g., [2,71]), journalist (e.g., [24,84]), human resources assessment system (e.g., [48,100]), AI radiology diagnosis provider or assistant (e.g., [43,195]), housekeeping technologies (e.g., [146]), companion robot (e.g., [17,73]), and AI in security surveillance and police systems (e.g., [56,190]). Specialty AIs, such as AIs doing dangerous and risky work (e.g., AIs in the war zone), construction, manufacturing, mining, research, and agriculture, were not included because such AI systems are not much relevant to people's everyday lives, and have rarely been studied from a communication perspective. ...
... Recent shows that people do recognize this important benefit of algorithms, which may stimulate their use in selection contexts. For instance, people seem to believe that algorithms ignore demographic characteristics (Bonezzi & Ostinelli, 2021), have less discrimination motivation (Bigman et al., 2022), and expect consistency from algorithmic decisions (Langer et al., 2020). All of this may show that people expect that algorithms have the potential to foster diversity in selection. ...
... However, would some distrust in courts ensure the support for automation? Research into other settings suggests that people might want to exchange the consistency that is provided by automation for the ability to influence decisions due to human factors (Langer et al. 2020;Schlicker et al. 2021). These fascinating assumptions could be tested in experimental studies within court contexts. ...
Article
Full-text available
Courts are high-stakes environments; thus, the impact of implementing legal technologies is not limited to the people directly using the technologies. However, the existing empirical data is insufficient to navigate and anticipate the acceptance of legal technologies in courts. This study aims to provide evidence for a technology acceptance model in order to understand people’s attitudes towards legal technologies in courts and to specify the potential differences in the attitudes of people with court experience vs. those without it, in the legal profession vs. other, male vs. female, and younger vs. older. A questionnaire was developed, and the results were analyzed using partial least squares structural equation modeling (PLS-SEM). Multigroup analyses have confirmed the usefulness of the technology acceptance model (TAM) across age, gender, profession (legal vs. other), and court experience (yes vs. no) groups. Therefore, as in other areas, technology acceptance in courts is primarily related to perceptions of usefulness. Trust emerged as an essential construct, which, in turn, was affected by the perceived risk and knowledge. In addition, the study’s findings prompt us to give more thought to who decides about technologies in courts, as the legal profession, court experience, age, and gender modify different aspects of legal technology acceptance.
... However, the authors also note the scarcity of field studies that assess trust in ADSSs (which they refer to as "embedded AI") in organizational settings. As mentioned previously, this scarcity is of particular concern for personnel selection, in which the market for predictive hiring tools is growing fast but academic research remains at an early stage (Campion et al., 2016;Langer et al., 2019bLanger et al., , 2021aTambe et al., 2019;Oberst et al., 2020). In personnel selection, the stakes are high because it involves ethic, legal, psychological, and strategic issues in organizations. ...
Article
Full-text available
Resume screening assisted by decision support systems that incorporate artificial intelligence is currently undergoing strong development in many organizations, raising technical, managerial, legal, and ethical issues. The purpose of the present paper is to better understand the reactions of recruiters when they are offered algorithm-based recommendations during resume screening. Two polarized attitudes have been identified in the literature on users' reactions to algorithm-based recommendations: algorithm aversion, which reflects a general distrust of algorithms and a preference for human recommendations; and automation bias, which corresponds to overconfidence in the decisions or recommendations made by algorithmic decision support systems (ADSS). Drawing on results obtained in the field of automated decision support, we make the general hypothesis that recruiters trust human experts more than ADSS, because they distrust algorithms for subjective decisions such as recruitment. An experiment on resume screening was conducted on a sample of professionals (N = 694) involved in the screening of job applications. They were asked to study a job offer, then evaluate two fictitious resumes in a 2 × 2 factorial design with manipulation of the type of recommendation (no recommendation/algorithmic recommendation/human expert recommendation) and of the consistency of the recommendations (consistent vs. inconsistent recommendation). Our results support the general hypothesis of a preference for human recommendations: recruiters exhibit a higher level of trust toward human expert recommendations compared with algorithmic recommendations. However, we also found that the recommendations' consistency has a differential and unexpected impact on decisions: in the presence of an inconsistent algorithmic recommendation, recruiters favored the unsuitable over the suitable resume. Our results also show that specific personality traits (extraversion, neuroticism, and self-confidence) are associated with a differential use of algorithmic recommendations. Implications for research and HR policies are finally discussed.
Article
Full-text available
Theories and research in human–machine communication (HMC) suggest that machines, when replacing humans as communication partners, change the processes and outcomes of communication. With artificial intelligence (AI) increasingly used to interview and evaluate job applicants, employers should consider the effects of AI on applicants’ psychology and performance during AI-based interviews. This study examined job applicants’ experience and speech fluency when evaluated by AI. In a three-condition between-subjects experiment (N = 134), college students had an online mock job interview under the impression that their performance would be evaluated by a human recruiter, an AI system, or an AI system with a humanlike interface. Participants reported higher uncertainty and lower social presence and had a higher articulation rate in the AI-evaluation condition than in the human-evaluation condition. Through lowering social presence, AI evaluation increased speech rate and reduced silent pauses. Findings inform theories of HMC and practices of automated recruitment and professional training.
Article
Gender imbalances in the labor market continue to be an economic and social problem that could be reduced by artificial intelligence (AI), which is being promoted as a means for fairer and less biased hiring practices. To examine whether these supposed benefits of AI are perceived as such, we have investigated the preferences of individuals, particularly women, for an AI-based evaluation process in a competitive situation. The results of our experimental study (N = 152) show that individuals generally prefer a human evaluator over an AI evaluator—but only if the human evaluator is female. Whereas we demonstrate that women's belief in AI's ability to reduce bias and their perceived personal discrimination have positive direct effects, we find no direct effect of the competitors' gender on women's preference for an AI evaluation. However, we find that the belief in AI moderates the other two relationships, which highlights the crucial role of people's general perception of AI tools in realizing AI's full potential and reducing anticipated biases. Our findings provide an initial indication that the use of AI technology in hiring could encourage women to apply for jobs in male-dominated fields and serve as a starting point for future research in this field.
Article
Algorithms are increasingly used by human resource departments to evaluate employee performance. While the algorithms are perceived to be objective and neutral by removing human biases, they are often perceived to be less fair than human managers. This research proposes dignity as an important construct in explaining the discrepancy in perceived fairness and investigates remedial steps for improving dignity and fairness for algorithm-based employee evaluations. Three experiments’ results show that those evaluated by algorithms perceive lower levels of dignity, leading them to believe the process is less fair. In addition, we find that providing justifications for algorithm usage in employee evaluations improves perceived dignity. However, human-algorithm collaboration does not enhance perceived dignity.
Conference Paper
Full-text available
Resume screening assisted by decision support systems incorporating artificial intelligence is currently undergoing strong development in many organizations, raising technical, managerial, legal, and ethical issues. The aim of the present paper is to better understand the reactions of recruiters when they are offered algorithm-based recommendations during resume screening. Two major attitudes have been identified in the literature on users' reactions to algorithm-based recommendations: algorithm aversion, which reflects a general distrust of algorithms and a preference for human recommendations; and automation bias, which corresponds to excessive confidence in the decisions or recommendations made by algorithmic decision support systems (ADSS). Drawing on results obtained in the field of automated decision support, we make the general hypothesis that recruiters trust human experts more than algorithmic decision support systems, because they distrust algorithms for subjective decisions such as recruitment. A resume-screening experiment was conducted on a sample of professionals (N = 1,100) who were asked to study a job offer and then evaluate two fictitious resumes in a 2 × 2 factorial design with manipulation of the type of recommendation (no recommendation/algorithmic recommendation/human expert recommendation) and of the relevance of the recommendations (relevant vs. irrelevant recommendation). Our results confirm the general hypothesis of a preference for human recommendations: recruiters show a higher level of trust in human expert recommendations than in algorithmic recommendations. However, we also found that the relevance of the recommendation has a differential and unexpected impact on decisions: in the presence of an irrelevant algorithmic recommendation, recruiters favored the less suitable resume over the better one. This gap between attitudes and behavior suggests a possible automation bias. Our results also show that specific personality traits (extraversion, neuroticism, and self-confidence) are associated with a differential use of algorithmic recommendations. Implications for research and HR policies are finally discussed.
Article
Full-text available
Technological advancements in Artificial Intelligence allow the automation of every part of job interviews (information acquisition, information analysis, action selection, action implementation) resulting in highly automated interviews. Efficiency advantages exist, but it is unclear how people react to such interviews (and whether reactions depend on the stakes involved). Participants (N = 123) in a 2 (highly automated, videoconference) × 2 (high‐stakes, low‐stakes situation) experiment watched and assessed videos depicting a highly automated interview for high‐stakes (selection) and low‐stakes (training) situations or an equivalent videoconference interview. Automated high‐stakes interviews led to ambiguity and less perceived controllability. Additionally, highly automated interviews diminished overall acceptance through lower social presence and fairness. To conclude, people seem to react negatively to highly automated interviews and acceptance seems to vary based on the stakes. OPEN PRACTICES This study was pre‐registered on the Open Science Framework (osf.io/hgd5r) and on AsPredicted (https://AsPredicted.org/i52c6.pdf).
Article
Full-text available
We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.
Article
Full-text available
This article details concerns about the potential of machine learning processes to incorporate human biases inherent in social data into artificial intelligence systems that influence consequential decisions in the courts, business and financial transactions, and employment situations. It details incidents of biased decisions and recommendations made by artificial intelligence systems that have been given the patina of objectivity because they were made by machines supposedly free of human bias. The article offers suggestions for addressing the systemic biases that are impacting the viability, credibility, and fairness of machine learning processes and artificial intelligence systems.
Article
Full-text available
Algorithms increasingly make managerial decisions that people used to make. Perceptions of algorithms, regardless of the algorithms' actual performance, can significantly influence their adoption, yet we do not fully understand how people perceive decisions made by algorithms as compared with decisions made by humans. To explore perceptions of algorithmic management, we conducted an online experiment using four managerial decisions that required either mechanical or human skills. We manipulated the decision-maker (algorithmic or human), and measured perceived fairness, trust, and emotional response. With the mechanical tasks, algorithmic and human-made decisions were perceived as equally fair and trustworthy and evoked similar emotions; however, human managers' fairness and trustworthiness were attributed to the manager's authority, whereas algorithms' fairness and trustworthiness were attributed to their perceived efficiency and objectivity. Human decisions evoked some positive emotion due to the possibility of social recognition, whereas algorithmic decisions generated a more mixed response – algorithms were seen as helpful tools but also possible tracking mechanisms. With the human tasks, algorithmic decisions were perceived as less fair and trustworthy and evoked more negative emotion than human decisions. Algorithms' perceived lack of intuition and subjective judgment capabilities contributed to the lower fairness and trustworthiness judgments. Positive emotion from human decisions was attributed to social recognition, while negative emotion from algorithmic decisions was attributed to the dehumanizing experience of being evaluated by machines. This work reveals people's lay concepts of algorithmic versus human decisions in a management context and suggests that task characteristics matter in understanding people's experiences with algorithmic technologies.
Article
Full-text available
Digital interviews (or Asynchronous Video Interviews) are a potentially efficient new form of selection interviews, in which interviewees digitally record their answers. Using Potosky's framework of media attributes, we compared them to videoconference interviews. Participants (N = 113) were randomly assigned to a videoconference or a digital interview and subsequently answered applicant reaction questionnaires. Raters evaluated participants' interview performance. Participants considered digital interviews to be creepier and less personal, and reported that they induced more privacy concerns. No difference was found regarding organizational attractiveness. Compared to videoconference interviews, participants in digital interviews received better interview ratings. These results warn organizations that using digital interviews might cause applicants to self-select out. Furthermore, organizations should stick to either videoconference or digital interviews within a selection stage.
Article
Full-text available
Technologically advanced selection procedures are entering the market at exponential rates. The current study tested two previously held assumptions: (a) providing applicants with procedural information (i.e., making the procedure more transparent and justifying the use of this procedure) on novel technologies for personnel selection would positively impact applicant reactions, and (b) technologically advanced procedures might differentially affect applicants with different levels of computer experience. In a 2 (computer science students, other students) × 2 (low information, high information) design, 120 participants watched a video showing a technologically advanced selection procedure (i.e., an interview with a virtual character responding and adapting to applicants' nonverbal behavior). Results showed that computer experience did not affect applicant reactions. Information had a positive indirect effect on overall organizational attractiveness via open treatment and information known. This positive indirect effect was counterbalanced by a direct negative effect of information on overall organizational attractiveness. This study suggests that computer experience does not affect applicant reactions to novel technologies for personnel selection, and that organizations should be cautious about providing applicants with information when using technologically advanced procedures, as information can be a double-edged sword. Update: Although not specifically mentioned in the paper, it has implications for explainability and XAI research: providing people with more transparency can have simultaneous positive and negative effects on acceptance.
Article
Full-text available
The use of technology such as telephone and video has become common when conducting employment interviews. However, little is known about how technology affects applicant reactions and interviewer ratings. We conducted meta-analyses of 12 studies that resulted in K = 13 unique samples and N = 1,557. Mean effect sizes for interview medium on ratings (d = -.41) and reactions (d = -.36) were moderate and negative, suggesting that interviewer ratings and applicant reactions are lower in technology-mediated interviews. Generalizing research findings from face-to-face interviews to technology mediated interviews is inappropriate. Organizations should be especially wary of varying interview mode across applicants, as inconsistency in administration could lead to fairness issues. At the same time, given the limited research that exists, we call for renewed attention and further studies on potential moderators of this effect.
Article
Full-text available
The uncanny valley hypothesis, first proposed in the 1970s, suggests that almost but not fully humanlike artificial characters will trigger a profound sense of unease. This hypothesis has become widely acknowledged both in the popular media and in scientific research. Surprisingly, empirical evidence for the hypothesis has remained inconsistent. In the present article, we reinterpret the original uncanny valley hypothesis and review empirical evidence for different theoretically motivated uncanny valley hypotheses. The uncanny valley could be understood as the naïve claim that any kind of human-likeness manipulation will lead to experienced negative affinity at close-to-realistic levels. More recent hypotheses have suggested that the uncanny valley is caused by artificial-human categorization difficulty or by a perceptual mismatch between artificial and human features. The original formulation also suggested that movement would modulate the uncanny valley. The reviewed empirical literature failed to provide consistent support for the naïve uncanny valley hypothesis or the modulatory effects of movement. Results on the categorization difficulty hypothesis were still too scarce to allow drawing firm conclusions. In contrast, good support was found for the perceptual mismatch hypothesis. Taken together, the present review findings suggest that the uncanny valley exists only under specific conditions. More research is still needed to pinpoint the exact conditions under which the uncanny valley phenomenon manifests itself.
Article
Hospitality organizations utilize a variety of selection tools to hire the best candidates. Traditionally, hospitality recruiters have relied on face-to-face interviews for choosing the most qualified candidates to represent the firm. While real-time Internet-based interviewing platforms are increasingly utilized among hospitality organizations, a cutting-edge, technology-based interviewing phenomenon has emerged: the use of asynchronous video interviews (AVIs). To conduct this modality of interview, employers send text-based questions electronically and the candidate records his or her responses using a webcam via various proprietary software platforms. Following the promise of reduced costs and increased efficiencies, many organizations have adopted this interview modality; however, little research has been conducted regarding its effectiveness among both providers and users. Additionally, the appropriateness and alignment of AVIs in the hospitality industry for selecting service representatives should be investigated. In light of this, the present research examines the literature on interviewing modalities, predictive validity of selection tools, and electronic Human Resources, and presents several propositions as well as an agenda for future research. Furthermore, the present research presents a conceptual model for AVIs using the literature on electronic Human Resources as a backdrop.