Highly automated interviews: Applicant reactions and the organizational context
Markus Langer^a, Cornelius J. König^a, Diana R. Sanchez^b, and Sören Samadi^a
^a Universität des Saarlandes
^b San Francisco State University
Markus Langer (https://orcid.org/0000-0002-8165-1803), Cornelius J. König
(https://orcid.org/0000-0003-0477-8293), and Sören Samadi, Fachrichtung Psychologie,
Universität des Saarlandes, Saarbrücken, Germany.
Diana R. Sanchez, Workplace Technology Research Lab, San Francisco State University.
This work was supported by the German Federal Ministry for Education and Research
(BMBF; Project EmpaT, grant no. 16SV7229K). We thank the people from Charamel GmbH
for their continuous support and for providing us with the virtual character Gloria.
Correspondence concerning this article should be addressed to Markus Langer, Universität
des Saarlandes, Arbeits- & Organisationspsychologie, Campus A1 3, 66123 Saarbrücken,
Germany. Tel: +49 681 302 4767. E-mail: markus.langer@uni-saarland.de
This preprint version may not exactly replicate the final version published in the Journal of Managerial Psychology. Copyright: Langer, M., König, C.J., Sanchez, D.R., and Samadi, S. (in press). Highly automated interviews: Applicant reactions and the organizational context. Journal of Managerial Psychology.
ABSTRACT
Purpose: The technological evolution of job interviews continues as highly automated
interviews emerge as alternative approaches. Initial evidence shows that applicants react
negatively to such interviews. Additionally, there is emerging evidence that contextual
influences matter when investigating applicant reactions to highly automated interviews.
However, previous research has ignored higher-level organizational contexts (i.e., which kind
of organization uses the selection procedure) and individual differences (e.g., work
experience) regarding applicant reactions. This study therefore investigates applicant
reactions to highly automated interviews for students and employees and the role of the
organizational context when using such interviews.
Methodology: In a 2×2 online study, participants read organizational descriptions of either
an innovative or an established organization and watched a video displaying a highly
automated or a videoconference interview. Afterwards, participants responded to applicant
reaction items.
Findings: Participants (N = 148) perceived highly automated interviews as more consistent
but as conveying less social presence. The negative effect on social presence diminished
organizational attractiveness. The organizational context did not affect applicant reactions to the interview approaches, whereas differences between students and employees emerged but only regarding privacy concerns about the interview approaches.
Research implications: The organizational context seems to have negligible effects on
applicant reactions to technology-enhanced interviews. There were only small differences
between students and employees regarding applicant reactions.
Practical implications: In a tense labor market, hiring managers need to be aware of a trade-off between efficiency and applicant reactions regarding technology-enhanced interviews.
Originality: This study investigates high-level contextual influences and individual
differences regarding applicant reactions to highly automated interviews.
Keywords: personnel selection, job interview technology, automatic applicant assessment,
applicant reactions, organizational context
Introduction
Organizations modernize their processes to stay up-to-date and to convey an
innovative and attractive image (Chapman and Webster, 2003; Gatewood et al., 1993). This
modernization affects many management processes as well as personnel selection procedures.
For instance, organizations use digital interviews (applicants record responses to interview
questions and send them to the hiring organization; Torres and Mejia, 2017) to show that they
are attractive employers (cf., Chapman and Webster, 2003). However, previous research in
the area of job interviews has predominantly found that technologically advanced interviews
detrimentally affect applicant reactions (Blacksmith et al., 2016; Langer et al., 2017).
Nonetheless, the technological evolution of the interview continues. Currently, the use of
highly automated interviews is burgeoning (Langer et al., 2019). Within such interviews, sensors (cameras, microphones) in combination with algorithms and virtual visualization automate the entire interview process (i.e., they acquire information about applicants, evaluate applicants’ performance, and implement actions such as automatically selecting follow-up questions or using virtual interviewers; cf. Langer et al., 2019).
So far, little is known about the effects of highly automated interviews on applicant
reactions. A study by Langer et al. (2019) indicates that negative applicant reactions might intensify the more interviews are automated. Additionally, research has emerged showing
that task-level contextual influences affect people’s reactions to highly automated tools (e.g.,
people react more favorably to the automation of mechanical tasks [work scheduling]
compared to human tasks [hiring], M.K. Lee, 2018). Up to now, higher-level organizational
contexts have been ignored within these studies. More precisely, in previous designs,
participants either applied for a single (hypothetical) organization (e.g., Langer et al., 2019)
or did not receive any information about the hiring organization (e.g., Sears et al., 2013). Yet
in reality, applicants inform themselves about organizations, associate organizations with
specific attributes (e.g., established or innovative; cf., Slaughter and Greguras, 2009), and
evaluate organizational attractiveness based on their perceived person-organization fit
(Chapman et al., 2005). It follows that the negative effects of automated interviews may only occur in situations where applicants do not expect innovative selection procedures, implying that highly automated interviews may be more accepted within the selection processes of innovative organizations.
The goals of the current study are therefore (a) to further investigate the effects of highly automated interviews on applicant reactions, (b) to clarify whether the organizational context moderates applicant reactions such that highly automated interviews are more accepted for innovative organizations, and (c) to address a limitation of prior studies that used student samples (e.g., Langer et al., 2019) by collecting a more diverse sample to investigate differences in reactions to automated tools between students and employees. To achieve these goals, this study introduced students and full-time employees to one of two organizations differing in their organizational description (innovative vs. established organization) and to one of two interview approaches (highly automated interview vs. videoconference interview).
Background and Hypotheses
Automation of job interviews
Technology for interviews has evolved in a way that established technology-mediated
interview approaches (e.g., videoconference interviews) appear rather old-fashioned. For
instance, within digital interviews, applicants respond via voice or video recordings (Langer
et al., 2017), and hiring managers can evaluate these recordings whenever they want. Using
machine learning algorithms, these recordings can also be evaluated automatically. For
example, the German company Precire automatically evaluates applicants’ voice recordings
(Precire, 2018), whereas the American company HireVue (HireVue, 2018) additionally evaluates applicants’ nonverbal behavior (e.g., smiling). There have also been initial attempts to use virtual characters as interviewers to enhance the interpersonal touch of highly automated interviews (cf. Langer et al., 2018; K.M. Lee and Nass, 2003).
These approaches have in common that they automate single parts of interviews up to
entire interview processes (Langer et al., 2019). Langer et al. (2019) used Parasuraman et al.’s
(2000) model of automation to describe the underlying idea behind highly automated
interviews. Automating interviews includes four processes that can vary from low to high
levels of automation: acquiring information, analyzing information, selecting and deciding
about actions, and implementing these actions (Parasuraman et al., 2000). Acquiring
information is the automation of collecting and extracting data (e.g., record interviews;
automatically extract [non]verbal information). Analyzing information means to evaluate the
automatically acquired data (e.g., having algorithms that evaluate applicant performance).
Selecting and deciding about actions means to build on the information analysis to decide
about further steps (e.g., automatically selecting follow-up questions). The final step of
automation is then to automatically implement these actions (e.g., present follow-up questions) and to set up the interview (e.g., in a virtual environment). Similar to Langer et
al. (2019), this study introduces participants to a highly automated interview which includes
technology that allows high-level automation for every aforementioned step.
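To make this model concrete, the following Python sketch represents the four stages and the distinction between low and high levels of automation. It is purely illustrative: the class and function names are hypothetical, and the all-or-nothing rule is a simplification of Parasuraman et al.’s graded levels of automation.

from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    LOW = "human performs the step"
    HIGH = "system performs the step"

@dataclass
class InterviewAutomationProfile:
    # The four stages of Parasuraman et al. (2000), applied to interviews:
    information_acquisition: Level   # e.g., recording audio/video, extracting (non)verbal cues
    information_analysis: Level      # e.g., algorithmic scoring of applicant performance
    action_selection: Level          # e.g., choosing follow-up questions
    action_implementation: Level     # e.g., a virtual interviewer asking the questions

    def is_highly_automated(self) -> bool:
        # Simplification: "highly automated" means every stage runs at a high level.
        return all(level is Level.HIGH for level in vars(self).values())

# The interview studied here automates all four stages ...
highly_automated = InterviewAutomationProfile(Level.HIGH, Level.HIGH, Level.HIGH, Level.HIGH)
# ... whereas a videoconference interview leaves all stages to the human interviewer.
videoconference = InterviewAutomationProfile(Level.LOW, Level.LOW, Level.LOW, Level.LOW)
print(highly_automated.is_highly_automated())  # True
print(videoconference.is_highly_automated())   # False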
It is important to note that previous research tentatively supports the value of highly
automated interviews. For instance, Naim and colleagues (2015) found that automatic
evaluation of applicant behavior can predict interview performance. Nonetheless, there is a need for research on highly automated interviews that expands from questions of validity to ethical questions and to applicant reactions to such selection procedures in different contexts (Langer et al., 2019).
Applicant reactions to automated interviews
Most applicant reaction research is based on Gilliland’s (1993) work on applicant
reactions to selection systems. Gilliland describes several procedural and distributive justice
rules. For instance, procedural justice is given if selection procedures appear job related, and
if they are administered to all applicants consistently. Distributive justice is given if
applicants perceive that they were able to influence outcomes of selection situations through
their behavior. Adhering to the aforementioned justice rules within selection procedures
should positively affect important outcomes such as organizational attractiveness (Chapman
et al., 2005). Since the current study investigates the reactions to different kinds of interview
procedures, it focuses on the procedural part of Gilliland’s model.
Previous research has shown that technology-enhanced interviews can enhance
efficiency and flexibility but can also lead to negative reactions (Blacksmith et al., 2016). For
instance, videoconference interviews are perceived as less fair and as offering less opportunity to perform than face-to-face interviews (Chapman et al., 2003; Sears et al., 2013).
Furthermore, highly automated interviews seem to evoke even less favorable reactions than
videoconference interviews (Langer et al., 2019) because of lower social presence.
Potosky’s (2008) framework of media attributes provides ideas about what
distinguishes different kinds of technology-enhanced interviews. Her framework covers four
attributes of communication media: social bandwidth, interactivity, transparency, and
surveillance. Social bandwidth describes the extent to which a medium allows users to send and
receive verbal and nonverbal information (Potosky, 2008). Videoconference interviews
provide social bandwidth as applicants are able to exchange a variety of social signals
(Chapman et al., 2003). Although the current paper describes a highly automated interview in which applicants also send and receive communicative information, even the most advanced technology cannot fully match in-person interpersonal communication. For example, it is possible to let a virtual character smile. However, this smile might still appear less natural than a human smile (cf. Kätsyri et al., 2015). Therefore,
social bandwidth should be lower in highly automated interviews. Interactivity in Potosky’s
framework describes the possibility for direct interactions. As is the case for social
bandwidth, highly automated interviews (especially versions with virtual characters) offer
interactivity to some extent. However, they are not as interactive as a conversation with a
human being through videoconferencing. For instance, they provide less room for
backchanneling (e.g., nodding; Frauendorfer et al., 2014). The third aspect of Potosky’s
framework is transparency. Communication media are transparent if there are no (technical)
issues during communication and if people can ignore the fact that they are using media to
communicate (Potosky, 2008). Highly automated interviews likely remind people that they
are using media to communicate, whereas in the course of videoconference interviews, the
medium might become less apparent when applicants start to concentrate on the interviewer
(Langer et al., 2017). The last aspect of Potosky’s framework is surveillance. It constitutes
perceptions of how likely it is for a third party to access information exchanged between the communicating parties (Potosky, 2008). For example, applicants
might fear that recordings of highly automated interviews are later watched by unauthorized
people (Langer et al., 2017).
Furthermore, highly automated and videoconference interviews might differ regarding
the interview set-up. This study follows the example of Langer et al. (2019) and uses a virtual
set-up with a virtual character. Even though virtual characters should positively affect social presence (K.M. Lee and Nass, 2003), human interviewers in videoconference interviews might still
convey more social presence. Furthermore, there might be negative effects if perceptions of a
virtual character fall into the uncanny valley (Mori et al., 2012). The uncanny valley
hypothesis proposes that realistic virtual characters might evoke negative feelings in humans,
possibly because there is a perceptual mismatch for the virtual characters’ behavior or
appearance (e.g., strange body proportions; Kätsyri et al., 2015).
Potosky’s framework suggests that highly automated interviews offer less social bandwidth, interactivity, and transparency, that feelings of surveillance might be more pronounced, and that a virtual interviewer might additionally affect applicant reactions negatively.
predominantly points to negative applicant reactions towards highly automated interviews.
There might be negative consequences regarding perceived social presence within highly
automated interviews. Humans perceive social presence if there is interpersonal warmth and empathy during an interaction (Walter et al., 2015), which is usually conveyed through the exchange of nonverbal communication (Chapman et al., 2003). Therefore, applicants might feel less social presence in highly automated interviews. Furthermore, the differences regarding Potosky’s aspects of transparency and surveillance might translate into greater applicant concerns about privacy (i.e., concerns about data abuse; Smith et al., 2011). If applicants are constantly aware that they are submitting videos of themselves to an organization without knowing who will access them, this might raise privacy concerns (Langer et al., 2017).
In spite of these potential negative effects of highly automated interviews on applicant
reactions, people also tend to believe that computers are more objective than humans (Miller
et al., 2018). Applicants might believe that highly automated interviews treat all applicants in the same way, whereas interviewers’ behavior in videoconference interviews is likely influenced by applicants’ characteristics (e.g., appearance; Pingitore et al., 1994).
Based on the aforementioned theoretical assumptions, we propose:
Hypothesis 1: Participants will evaluate the highly automated interview as conveying less social presence, as being more consistent, but as evoking more privacy concerns.
Contextual influences on applicant reactions
Support for Hypothesis 1 would replicate the findings by prior research (e.g., Langer
et al., 2019; M.K. Lee, 2018). To advance research and inform practice, this study builds on emerging research regarding contextual influences on applicant reactions. M.K. Lee (2018) showed that people prefer human influence for human tasks, choosing hiring and work evaluations as examples of human tasks (in comparison to work scheduling and work assignment as mechanical tasks). Langer et al. (2019) found that people favor automated tools for lower-stakes tasks (i.e., training vs. personnel selection). Consequently, the implications of these studies remain at the task level, ignoring important higher-level influences such as the organizational context. If the organizational context affects people’s reactions, this could lead to valuable insights because such effects might translate to many tasks within an organization.
Applicants ascribe attributes to organizations (Slaughter and Greguras, 2009). One
important attribute of organizations which might affect which selection procedures they use is
innovation (Lievens and Highhouse, 2003). Original and creative organizations whose
success relies on technological innovation might be examples of innovative organizations
(Slaughter et al., 2004). They might operate in dynamic markets, where they need to adapt to
technological changes (e.g., computer science). In comparison, other organizations might be
perceived as more stable and established organizations that operate in industry sectors in
which continuity is valued by customers (e.g., insurance industry) (Slaughter and Greguras,
2009) and where there is also potentially less innovation. Previous research suggests that the
organizational context is a determining factor for applicants’ attraction to a company (Chapman et al., 2005). One reason is that applicants feel attracted to organizations where they perceive themselves as a good fit (i.e., person-organization fit; Chapman et al., 2005).
For instance, some applicants feel attracted to innovative organizations because they like the
dynamic environment which requires adaptation to changes, whereas others prefer more
stable environments within established organizations (Slaughter and Greguras, 2009).
Despite the importance of the organizational context, the studies by M.K. Lee (2018) and Langer et al. (2019) were not the only ones to leave it out. Similarly, Sears and colleagues (2013) did not provide any details on the hiring organization when evaluating participants’ reactions to different interview approaches. In application situations, however,
applicants use multiple sources to learn about the organization and to determine
organizations’ attractiveness as employers (Chapman et al., 2005). Applicants inform
themselves through the organizations’ homepage, job ads, and the selection procedures
organizations use (Gatewood et al., 1993). For instance, applicants perceive organizations
using digital interviews as more innovative (Torres and Mejia, 2017).
Even more importantly, there might be cases where applicants’ perceptions of an
organization and applicants’ perceptions of the applied selection procedures diverge
(Gatewood et al., 1993). For instance, before applicants enter a selection situation, they might
expect an organization to be rather traditional. If they are then confronted with innovative
selection procedures (e.g., highly automated interviews), this could violate applicants’
expectations of the organization, as they had expected an established organization to also use
established selection procedures (e.g., face-to-face interviews). Thus, the perceived job
relatedness of a selection procedure might differ depending on the organizational context. In
the case of innovative organizations, applicants might believe that innovative selection approaches reveal something about the future job at this organization. As attraction to organizations depends on perceived person-organization fit (Chapman et al., 2005), applicants in search of an established (innovative) organizational environment might be
irritated by experiencing an innovative (established) selection approach, which could then
negatively affect organizational attractiveness. Thus, we propose:
Hypothesis 2: Participants will perceive selection procedures as more job related and the
organization as more attractive when the selection procedure matches the perceived attributes
of an organization (e.g., highly automated selection in the selection process of an innovative
organization).
Finally, this study addresses a limitation of prior research. Previous studies used student samples to investigate applicant reactions to technology-enhanced interviews (e.g., Sears et al., 2013), and there is speculation as to whether reactions to highly automated interviews might diverge between students and the working population (Langer et al., 2019). Students might be more open to technology-enhanced approaches; they might believe that such approaches are more job related than people who already have a job do; and there might be differences in privacy concerns because of underlying individual differences between students and workers (cf. Bauer et al., 2006). As most of these assumptions are speculative, we ask:
Research Question 1: Do students and workers react differently to highly automated
interviews?
Method
Sample
The authors consulted prior research by Blacksmith et al. (2016) and Langer et al. (2017) to determine the required sample size. These studies found small to medium effect sizes for applicant reactions when comparing different forms of technology-enhanced interviews. A sample size calculation with G*Power (Faul et al., 2009) revealed that under the assumption of a small to medium effect (η²p between .04 and .06), a sample of N = 125 to 191 would be necessary for a power of 1 - β = .80. Participants were recruited through social media and direct contact. Data collection continued until the sample consisted of N = 154 participants.
Six participants were excluded from data analysis (e.g., not reading carefully, pausing the
experiment). The final sample consisted of N = 148 German participants (61% female). Of
these, 54% were employed in full-time jobs (those were the employees within the
independent variable students vs. employees), 39% were students (82% of these studied
psychology) and the rest were either apprentices or high school students. The mean age was
28.80 years (SD = 7.18). The largest group of participants (36%) had experienced one to three job
interviews before, 28% had experienced four to five interviews, and 36% had experienced
more than six interviews. Furthermore, 74% of participants had experienced at least one
videoconference interview before. Student participants received course credit and all
participants had the chance to win a gift certificate for online shopping.
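For readers without G*Power, the reported sample-size calculation can be approximated in Python. The sketch below assumes the statsmodels package and a test with numerator df = 1 (i.e., one 2-level factor or the interaction in the 2×2 design), so its results may differ slightly from G*Power’s; it converts η²p to Cohen’s f and solves for the required total N.

import numpy as np
from statsmodels.stats.power import FTestAnovaPower

def required_total_n(eta_squared_partial, alpha=0.05, power=0.80):
    # Convert partial eta squared to Cohen's f: f = sqrt(eta2 / (1 - eta2)).
    f = np.sqrt(eta_squared_partial / (1 - eta_squared_partial))
    # k_groups=2 yields numerator df = 1, matching the test of a single
    # 2-level factor (or the interaction) in the 2x2 design.
    return FTestAnovaPower().solve_power(effect_size=f, alpha=alpha,
                                         power=power, k_groups=2)

for eta in (0.04, 0.06):
    print(f"eta2p = {eta}: N = {required_total_n(eta):.0f}")
# Prints roughly N = 191 and N = 125, matching the reported range.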
Design, procedure, and manipulation
In a 2×2 design (videoconference vs. highly automated interview; established vs.
innovative organization), participants visited an online survey platform and were randomly
assigned to one of the conditions. Afterwards, they were asked to imagine that they were applying for a job. Then, they received the respective description of the organization (Table 1). The descriptions differed in 14 text elements that were designed to reflect either an established or an innovative organization. A pre-test with N = 59 participants (innovative: n = 32; established: n = 27) was conducted to verify that organizational attraction to these descriptions was perceived similarly. Participants in the pre-test were randomly assigned to one of the descriptions and answered the same items for organizational attractiveness as the participants in the main study. Results indicate that the organizations were perceived as similarly attractive (innovative: M = 3.46, SD = 0.61; established: M = 3.60, SD = 0.66), t(57) = 0.84, p = .40, d = 0.22.
After reading the description in the main study, participants watched a video showing
either the videoconference or the highly automated interview. Both videos were taken from
Langer and colleagues (2019). They were edited in a way that participants watched a female
virtual/human interviewer interacting with a female applicant, who was only present through
voice. The virtual/human interviewer asked two questions. During the second question, the applicant became nervous (i.e., hesitated to answer), and the respective interviewer said that she had noticed the nervousness and tried to calm the interviewee (see also Langer et al., 2018, 2019). This way, participants should realize that all parts of the interview had been automated, as described by Parasuraman et al. (2000) and Langer et al. (2019). Afterwards,
the applicant recovered and responded to the interview question. Then, the video faded out
without any further information about the outcome of the interview.
Measures
Participants responded to the items on a scale from 1 (strongly disagree) to 5
(strongly agree).
Social presence was measured with five items from Walter and colleagues (2015). A
sample item is “The interviewer acted empathically.”
Privacy concerns were measured with seven items, two were taken from Langer and
colleagues (2018) and five were taken from Langer and colleagues (2017). A sample item is
“Situations like the one shown threaten applicants’ privacy.”
Consistency and job relatedness were measured with three items each from a
German version of the Selection Procedural Justice Scale (Bauer et al., 2001; Warszta, 2012).
A sample item for consistency is “This procedure is administered to all applicants in the same
way.” A sample item for job relatedness is “Doing well in this interview means that a person
will also be good in the job.”
Organizational attractiveness was measured with twelve items from Highhouse and
colleagues (2003) and three more from Warszta (2012). A sample item is “This organization
is attractive.”
Manipulation check measure
To check whether participants perceived the organizational description as intended (i.e., established vs. innovative), the item “The organization described itself as an innovative organization” was included.
Results
Manipulation check
Participants in the innovative organization condition were more likely to perceive the
organization as innovative, t(126.13) = 10.87, p < .01, d = 1.81.
Testing the hypotheses
Table 2 provides an overview of descriptive statistics and correlations. Table 3 shows
the results of the ANOVAs. Hypothesis 1 stated that participants would evaluate the highly automated interview as conveying less social presence, as evoking more privacy concerns, and as being more consistent. Results of the ANOVAs indicated that participants perceived highly automated
interviews as slightly to moderately more consistent and as providing slightly to moderately
less social presence. There was no significant difference for privacy concerns. Hypothesis 1
was therefore partially supported.
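For illustration, the core of these analyses can be expressed in a few lines of Python. The sketch below uses statsmodels on simulated stand-in data (the study’s raw data are not reproduced here, and all variable names are hypothetical) to run a 2×2 between-subjects ANOVA for one dependent variable.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Simulated stand-in data; the real study has one row per participant.
rng = np.random.default_rng(1)
n = 148
df = pd.DataFrame({
    "interview": rng.choice(["VC", "AI"], n),
    "organization": rng.choice(["established", "innovative"], n),
})
# Build in a small negative effect of the AI condition, as found for social presence.
df["social_presence"] = rng.normal(3.0, 0.9, n) - 0.25 * (df["interview"] == "AI")

# 2x2 between-subjects ANOVA: two main effects plus their interaction.
model = ols("social_presence ~ C(interview) * C(organization)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))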
Hypothesis 2 stated that participants would perceive selection procedures as more job
related and the organization as more attractive when the selection procedure matches the
perception of the organization. The ANOVAs revealed no interaction effects, suggesting that
a match of the selection procedure and the organizational image did not affect job relatedness
or organizational attractiveness.
Research Question 1 asked if students and employees differ in their reactions to
highly automated interviews. Therefore, we coded whether participants were working and included
this as an independent variable into the aforementioned ANOVAs. There were no significant
main effects for the difference between students and employees. The only significant effect
was an interaction between the independent variables students versus employees and the
interview type regarding privacy concerns. Students reported higher privacy concerns for
highly automated interviews whereas employees reported higher privacy concerns for
videoconference interviews, F(1, 140) = 1.80, p < .05, η²p = 0.03. However, this result should
be interpreted cautiously because, when controlling for alpha-error accumulation (using a Bonferroni correction) across five dependent variables, the interaction effect did not remain significant (exact p = .028 compared to a corrected α = .01).
Additional results
The results indicated that organizational attraction was lower for highly automated
interviews. To further investigate this result, a mediation analysis was conducted using
PROCESS (Hayes, 2013). Following suggestions by Hayes (2013), social presence, privacy concerns, job relatedness, and consistency were included as parallel mediators and organizational attractiveness as the outcome variable. Integrating the findings of Tables 4 and 5 indicated that
the negative indirect effect of the highly automated interview on organizational attractiveness
through social presence was significant. This means that participants perceived organizational
attractiveness resulting from the use of a highly automated interview to be lower because it
conveyed less social presence.
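PROCESS is a macro for SPSS and SAS; for readers without it, the following Python sketch shows the core logic of such an indirect-effect test for a single mediator. It uses simulated stand-in data and a simple percentile bootstrap rather than the bias-corrected bootstrap reported in Tables 4 and 5, and all variable names are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n = 148

# Simulated stand-in data: x is the interview condition
# (-1 = videoconference, 1 = highly automated).
x = rng.choice([-1.0, 1.0], size=n)
social_presence = -0.20 * x + rng.normal(0, 0.9, n)                          # a-path
attractiveness = 0.42 * social_presence - 0.04 * x + rng.normal(0, 0.6, n)   # b- and c'-paths

def indirect_effect(x, m, y):
    # a-path: regress the mediator on the condition;
    # b-path: regress the outcome on the mediator, controlling for the condition.
    a = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x, m]), y, rcond=None)[0][2]
    return a * b  # indirect effect = a * b

# Percentile bootstrap with 10,000 resamples (as in the paper).
boot = np.empty(10_000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], social_presence[idx], attractiveness[idx])

point = indirect_effect(x, social_presence, attractiveness)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")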
Discussion
The aims of the current study were to (a) further investigate applicant reactions to
highly automated interviews, (b) examine influences of the organizational context on
reactions to technology-enhanced interview approaches, and (c) explore potential differences
between students and employees regarding reactions to highly automated interviews. On the
positive side for highly automated interviews, they were perceived as more consistent than
videoconference interviews. However, participants also perceived highly automated
interviews to provide less social presence, which negatively affected organizational
attractiveness. Moreover, this finding was independent of the organizational context, implying that a match between the perceptions of an organization and its selection procedures did not lead to better perceptions of innovative interview approaches. Lastly, the results tentatively indicate that there might be small differences between students’ and employees’ privacy concerns within the different interview formats.
First, the findings support assumptions that people perceive machines to be more
consistent (Miller et al., 2018). Participants in the current study appeared to believe that
applicants in highly automated interviews might be treated more consistently than in
videoconference interviews. Therefore, variables related to consistency might be interesting
for future research on highly automated selection procedures. For instance, objectivity of the
outcomes could be another variable that applicants favor in highly automated selection, as the
human influence on selection decisions is minimized which could increase distributive justice
perceptions (i.e., feelings that outcomes are fair; Gilliland, 1993).
However, participants thought that the organization using the highly automated
interview was less attractive because they perceived less social presence. This supports the
assumption that the highly automated interview offered less social bandwidth and
interactivity as defined by Potosky (2008), replicates findings by Langer et al. (2019), and is
in line with previous findings showing that higher levels of automation reduce interpersonal justice perceptions (Langer et al., 2017). Consequently, evidence from different studies indicates that technology-enhanced interviews are less accepted precisely because of the feature that makes them more efficient and potentially more consistent: the minimization of human influence. Hiring managers need to weigh efficiency against applicant reactions when they choose between different interview approaches.
Furthermore, students and employees differed regarding privacy concerns within the
interview formats. Privacy concerns rise when people are uncertain about what happens to
their data (Smith et al., 2011). Consequently, students seemed to be more uncertain about the
use of their data in highly automated interviews, whereas for employees this was the case for
videoconference interviews. Even though this finding should be interpreted cautiously
because of potential alpha-error inflation, it suggests that work experience may play a role
when it comes to privacy concerns in different versions of technology-enhanced interviews
and calls for future research. It is also important to mention that there were no differences between students and employees for any other outcome variable, which might be good news for interpreting previous applicant reaction studies (findings likely also generalize to more mature samples) and for future research in this field (no exaggerated need for worries when planning to use student samples).
Finally, this study investigated whether a match between the perceptions of an organization and its selection procedures affects applicant reactions (cf. Gatewood et al., 1993). It appears that participants did not like the highly automated interview regardless of whether they imagined applying to an innovative or an established organization. Therefore, the assumption that the organizational context is a higher-level contextual influence on applicant reactions, one that might be more widely applicable than the task-level effects found by M.K. Lee (2018) and Langer et al. (2019), currently has to be dismissed.
Limitations
First, participants were not personally interviewed through a videoconference or an
automated interview. Therefore, findings could differ for live interviews (but see Langer et
al., 2019). Nevertheless, watching a video should be more immersive than merely imagining
being in an interview situation, which is another common approach in studies using vignettes (Atzmüller and Steiner, 2010). Second, it is not yet clear what role the virtual character
played in the results of the current study. Following assumptions from human-computer
interaction research (e.g., K.M. Lee and Nass, 2003), omitting the virtual character would
have decreased the social presence of the highly automated interaction even more. This calls
for further research regarding various forms of highly automated interviews. Third,
participants might have perceived videoconference interviews as “innovative” selection tools, which could explain the non-significant findings regarding the interaction between the organizational description and the interview approach. However, if a match between the organization and its selection procedures had led to better applicant reactions, perceiving both interview approaches as innovative should have produced a main effect of established vs. innovative organizations such that innovative organizations were perceived more favorably; this was not the case.
Main practical implications
Organizations seem to be well-advised to check their existing selection approaches
regarding applicant reactions. Even if management automation systems enhance efficiency,
organizations should pay attention to the possible detrimental effects on their applicant pool.
This is especially true in times of a tense labor market, where every applicant is a potential market advantage: applicants might withdraw their application if they are not satisfied with the way the interview process is handled and might even advise friends against applying for a job at an organization that uses automated interviews (Langer et al., 2019; Uggerslev et
al., 2012). Considering the results of the current study, not even innovative organizations
should rely on their image to buffer the negative reactions to technology-enhanced interview
approaches. Finally, organizations should consider which interview tools they use for a given applicant pool, as the current results imply that applicants with different levels of work experience may react differently to different forms of technology-enhanced interviews.
Future research
It is still unclear how applicants behave when they are personally confronted with
highly automated interviews. For instance, it might be possible that applicants are less
motivated, use less impression management, or provide qualitatively different answers, which
might affect the validity of highly automated interviews (Blacksmith et al., 2016).
Additionally, research regarding contextual influences on reactions to automated tools is
growing and there are still many open questions. For instance, the European Union introduced the General Data Protection Regulation (GDPR) in 2018, and other countries are working on comparable regulations regarding data security, gathering, and evaluation. It is unclear how
such regulations affect people’s trust in organizations using automated tools and algorithms.
Potentially, privacy concerns regarding automated tools are affected by such regulations and
would therefore be different in the US or in China. Similarly, it is unclear how the use of
highly automated tools will affect management practice, law-making and societies as a
whole. Even within the rather narrow field of technology-enhanced interviews, many legal,
moral, and ethical questions arise (cf. Zerilli et al., 2018). For instance, how can applicants be
sure that only valid and unbiased information is considered in automatic processes? How can
hiring managers explain their decisions to applicants who were rejected by automated tools? And how can people be enabled to understand automated tools?
Conclusion
As practice continues to develop innovative selection procedures, it is crucial that this
drive does not outpace scientific understanding of these methods. This study showed that potential applicants perceive highly automated interviews, and organizations using such interviews, rather negatively, and it raised questions about contextual and individual influences regarding applicant reactions. Yet it is merely a small step in catching up with the
ongoing automation of management processes. Hopefully, this study encourages more
scholars to research the emerging intersection between management and computer science.
References
Atzmüller, C. and Steiner, P.M. (2010), “Experimental vignette studies in survey research”, Methodology, Vol. 6, pp. 128-138. doi:10.1027/1614-2241/a000014
Bauer, T.N., Truxillo, D.M., Sanchez, R.J., Craig, J.M., Ferrara, P., and Campion, M.A. (2001), “Applicant reactions to selection: Development of the Selection Procedural Justice Scale (SPJS)”, Personnel Psychology, Vol. 54, pp. 387-419. doi:10.1111/j.1744-6570.2001.tb00097.x
Bauer, T.N., Truxillo, D.M., Tucker, J.S., Weathers, V., Bertolino, M., Erdogan, B., and Campion, M.A. (2006), “Selection in the information age: The impact of privacy concerns and computer experience on applicant reactions”, Journal of Management, Vol. 32, pp. 601-621. doi:10.1177/0149206306289829
Blacksmith, N., Willford, J.C., and Behrend, T.S. (2016), “Technology in the employment interview: A meta-analysis”, Personnel Assessment and Decisions, Vol. 2, Article 2. doi:10.25035/pad.2016.002
Chapman, D.S., Uggerslev, K.L., Carroll, S.A., Piasentin, K.A., and Jones, D.A. (2005), “Applicant attraction to organizations and job choice: A meta-analytic review of the correlates of recruiting outcomes”, Journal of Applied Psychology, Vol. 90, pp. 928-944. doi:10.1037/0021-9010.90.5.928
Chapman, D.S., Uggerslev, K.L., and Webster, J. (2003), “Applicant reactions to face-to-face and technology-mediated interviews: A field investigation”, Journal of Applied Psychology, Vol. 88, pp. 944-953. doi:10.1037/0021-9010.88.5.944
Chapman, D.S. and Webster, J. (2003), “The use of technologies in the recruiting, screening, and selection processes for job candidates”, International Journal of Selection and Assessment, Vol. 11, pp. 113-120. doi:10.1111/1468-2389.00234
Faul, F., Erdfelder, E., Buchner, A., and Lang, A.-G. (2009), “Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses”, Behavior Research Methods, Vol. 41, pp. 1149-1160. doi:10.3758/BRM.41.4.1149
Frauendorfer, D., Schmid Mast, M., Nguyen, L., and Gatica-Perez, D. (2014), “Nonverbal social sensing in action: Unobtrusive recording and extracting of nonverbal behavior in social interactions illustrated with a research example”, Journal of Nonverbal Behavior, Vol. 38, pp. 231-245. doi:10.1007/s10919-014-0173-5
Gatewood, R.D., Gowan, M.A., and Lautenschlager, G.J. (1993), “Corporate image, recruitment, and initial job choice decisions”, Academy of Management Journal, Vol. 36, pp. 414-427. doi:10.2307/256530
Gilliland, S.W. (1993), “The perceived fairness of selection systems: An organizational justice perspective”, Academy of Management Review, Vol. 18, pp. 694-734. doi:10.2307/258595
Hayes, A.F. (2013), Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach, Guilford Press, New York, NY.
Highhouse, S., Lievens, F., and Sinar, E.F. (2003), “Measuring attraction to organizations”, Educational and Psychological Measurement, Vol. 63, pp. 986-1001. doi:10.1177/0013164403258403
HireVue (2018), “HireVue OnDemand”, available at: https://www.hirevue.com/products/video-interviewing/ondemand (accessed 8 July 2018).
Kätsyri, J., Förger, K., Mäkäräinen, M., and Takala, T. (2015), “A review of empirical evidence on different uncanny valley hypotheses: Support for perceptual mismatch as one road to the valley of eeriness”, Frontiers in Psychology, Vol. 6, Article 390. doi:10.3389/fpsyg.2015.00390
Langer, M., König, C.J., and Fitili, A. (2018), “Information as a double-edged sword: The role of computer experience and information on applicant reactions towards novel technologies for personnel selection”, Computers in Human Behavior, Vol. 81, pp. 19-30. doi:10.1016/j.chb.2017.11.036
Langer, M., König, C.J., and Krause, K. (2017), “Examining digital interviews for personnel selection: Applicant reactions and interviewer ratings”, International Journal of Selection and Assessment, Vol. 25, pp. 371-382. doi:10.1111/ijsa.12191
Langer, M., König, C.J., and Papathanasiou, M. (2019), “Highly automated job interviews: Acceptance under the influence of stakes”, International Journal of Selection and Assessment, Vol. 27, pp. 217-234. doi:10.1111/ijsa.12246
Lee, K.M. and Nass, C. (2003), “Designing social presence of social actors in human computer interaction”, in Proceedings of the CHI 2003 Conference on Human Factors in Computing Systems, Fort Lauderdale, FL, pp. 289-296. doi:10.1145/642611.642662
Lee, M.K. (2018), “Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management”, Big Data & Society, Vol. 5. doi:10.1177/2053951718756684
Lievens, F. and Highhouse, S. (2003), “The relation of instrumental and symbolic attributes to a company’s attractiveness as an employer”, Personnel Psychology, Vol. 56, pp. 75-102. doi:10.1111/j.1744-6570.2003.tb00144.x
Miller, F.A., Katz, J.H., and Gans, R. (2018), “The OD imperative to add inclusion to the algorithms of artificial intelligence”, OD Practitioner, Vol. 50 No. 1, pp. 6-12.
Mori, M., MacDorman, K., and Kageki, N. (2012), “The uncanny valley”, IEEE Robotics & Automation Magazine, Vol. 19, pp. 98-100. doi:10.1109/MRA.2012.2192811
Naim, I., Tanveer, M.I., Gildea, D., and Hoque, M.E. (2015), “Automated analysis and prediction of job interview performance: The role of what you say and how you say it”, in 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, Ljubljana, Slovenia, pp. 1-14. doi:10.1109/fg.2015.7163127
Parasuraman, R., Sheridan, T.B., and Wickens, C.D. (2000), “A model for types and levels of human interaction with automation”, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 30, pp. 286-297. doi:10.1109/3468.844354
Pingitore, R., Dugoni, B.L., Tindale, R.S., and Spring, B. (1994), “Bias against overweight job applicants in a simulated employment interview”, Journal of Applied Psychology, Vol. 79, pp. 909-917. doi:10.1037/0021-9010.79.6.909
Potosky, D. (2008), “A conceptual framework for the role of the administration medium in the personnel assessment process”, Academy of Management Review, Vol. 33, pp. 629-648. doi:10.5465/amr.2008.32465704
Precire (2018), “Precire Technologies”, available at: https://www.precire.com/de/start/ (accessed 8 June 2018).
Sears, G., Zhang, H., Wiesner, W.D., Hackett, R.W., and Yuan, Y. (2013), “A comparative assessment of videoconference and face-to-face employment interviews”, Management Decision, Vol. 51, pp. 1733-1752. doi:10.1108/MD-09-2012-0642
Slaughter, J.E. and Greguras, G.J. (2009), “Initial attraction to organizations: The influence of trait inferences”, International Journal of Selection and Assessment, Vol. 17, pp. 1-18. doi:10.1111/j.1468-2389.2009.00447.x
Slaughter, J.E., Zickar, M.J., Highhouse, S., and Mohr, D.C. (2004), “Personality trait inferences about organizations: Development of a measure and assessment of construct validity”, Journal of Applied Psychology, Vol. 89, pp. 85-103. doi:10.1037/0021-9010.89.1.85
Smith, H.J., Dinev, T., and Xu, H. (2011), “Information privacy research: An interdisciplinary review”, Management Information Systems Quarterly, Vol. 35, pp. 989-1015. doi:10.2307/41409970
Torres, E.N. and Mejia, C. (2017), “Asynchronous video interviews in the hospitality industry: Considerations for virtual employee selection”, International Journal of Hospitality Management, Vol. 61, pp. 4-13. doi:10.1016/j.ijhm.2016.10.012
Uggerslev, K.L., Fassina, N.E., and Kraichy, D. (2012), “Recruiting through the stages: A meta-analytic test of predictors of applicant attraction at different stages of the recruiting process”, Personnel Psychology, Vol. 65, pp. 597-660. doi:10.1111/j.1744-6570.2012.01254.x
Walter, N., Ortbach, K., and Niehaves, B. (2015), “Designing electronic feedback: Analyzing the effects of social presence on perceived feedback usefulness”, International Journal of Human-Computer Studies, Vol. 76, pp. 1-11. doi:10.1016/j.ijhcs.2014.12.001
Warszta, T. (2012), “Application of Gilliland’s model of applicants’ reactions to the field of web-based selection”, unpublished dissertation, Christian-Albrechts-Universität zu Kiel, Germany.
Zerilli, J., Knott, A., Maclaurin, J., and Gavaghan, C. (2018), “Transparency in algorithmic and human decision-making: Is there a double standard?”, Philosophy & Technology, advance online publication. doi:10.1007/s13347-018-0330-6
Table 1.
Information Presented to the Participants in the Different Organizational Descriptions
Fuchs&Schulz Automotive is one of the oldest (an aspiring) and established (innovative) automotive suppliers in Germany. The established (progressive) organization with subsidiaries in China, Japan, the US, Switzerland and the Netherlands has expanded constantly since its foundation in 1954 (rapidly since its foundation in 2001). As a family business in the third generation (former start-up) with more than 1,300 employees and its headquarters in Frankfurt, F&S reached a revenue of 125.2 million euros in 2016.
F&S offers a classical (innovative) and established (future-oriented) supplier concept and substantial knowledge in the areas of logistics, sustainability and service. This way, F&S has consolidated its position as an internationally successful (a global player) and experienced company (driver of innovation) on the market.
People are the focus of F&S, as customers, partners, and employees. Trust and security (innovation and creativity) form the groundwork of the traditional (future-oriented) organizational culture. This crystallizes in stable, long-term (dynamic and fruitful) relations with the customers.
Since the foundation of the company, F&S has lived tradition (creativity). This way, we build a bridge between reliable consultancy and proximity to customers. Through extensive (interdisciplinary) project experience, we ensure that we can provide you with consultants who know and understand the target market exactly (penetrate the target market through constant progress).
Note. Information translated from German. Italic text pieces reflect the manipulation for the
traditional organization (not in italics in the original material for the participants). Text pieces in
brackets reflect the manipulation for the innovative organization.
Table 2.
Correlations and Cronbach’s Alpha for the Study Variables.
Scale                               1.      2.      3.      5.      6.      7.      8.
1. Social Presence                  .91
2. Privacy Concerns                -.19*    .78
3. Consistency                     -.13     .04     .88
4. Job Relatedness                  .36**  -.12     .00
5. Organizational Attractiveness    .60**  -.20*   -.09     .95
6. Students vs. Employees           .00     .02    -.04    -.06    -
7. Interview                       -.23**   .02    -.20*   -.19*    .12    -
8. Organization                    -.09     .04    -.04    -.11    -.13   -.05    -
Note. Coding of students vs. employees: -1 = students, 1 = employees. Coding of interview: -1 = videoconference, 1 = highly automated. Coding of organization: -1 = established organization, 1 =
innovative organization. N = 148. Numbers along the diagonal represent the Cronbach’s alpha of
the scales.
*p < .05, **p < .01.
Table 3.
Means, Standard Deviations, and ANOVA Results (including partial η²) for the Dependent Variables.
                           VC-ES         VC-IN         AI-ES         AI-IN         VC vs. AI           ES vs. IN           Interaction
Variable                   M (SD)        M (SD)        M (SD)        M (SD)        F(1, 144)   η²p     F(1, 144)   η²p     F(1, 144)   η²p
Social Presence            3.07 (0.72)   2.99 (0.88)   2.77 (0.92)   2.48 (0.93)   7.96**      .05     1.15        .01     0.56        .00
Privacy Concerns           3.12 (0.51)   3.09 (0.63)   3.07 (0.71)   3.19 (0.57)   0.03        .00     0.21        .00     0.49        .00
Consistency                3.08 (0.73)   3.12 (0.87)   3.51 (0.94)   3.37 (0.67)   6.32*       .04     0.11        .00     0.44        .00
Job Relatedness            2.49 (0.91)   2.43 (0.79)   2.15 (0.76)   2.45 (0.78)   1.38        .01     0.82        .00     1.81        .01
Organizational Attraction  3.32 (0.53)   3.22 (0.66)   3.11 (0.74)   2.87 (0.79)   5.91*       .04     2.38        .02     0.40        .00
Note. VC = videoconference interviews condition, AI = highly automated interviews condition, ES = established organization condition,
IN = innovative organization condition. nVC-ES = 34, nVC-IN = 41, nAI-ES = 37, nAI-IN = 36.
*p < .05, **p < .01.
Table 4.
Regression Results for the Mediation of the Hypothesized Mediators between the Videoconference vs. Highly automated Interview Condition and
Organizational Attractiveness
Model                                                    R²     Coefficient   SE     p      95% Confidence Interval
Single models
  VC vs. AI → Social Presence                            .05    -.20          0.07   <.01   [-.34, -.06]
  VC vs. AI → Privacy Concerns                           .00     .01          0.05    .85   [-.09, .11]
  VC vs. AI → Consistency                                .04     .16          0.07   <.05   [.04, .30]
  VC vs. AI → Job Relatedness                            .01    -.08          0.07    .24   [-.21, .05]
  VC vs. AI → Organizational Attractiveness              .04    -.14          0.06   <.05   [-.25, -.02]
Model complete                                           .39    -             -      <.01   -
  Social Presence → Organizational Attractiveness                .42          0.06   <.01   [.30, .53]
  Privacy Concerns → Organizational Attractiveness              -.10          0.08    .19   [-.26, .05]
  Consistency → Organizational Attractiveness                   -.01          0.06    .89   [-.12, .11]
  Job Relatedness → Organizational Attractiveness                .11          0.60    .09   [-.02, .23]
  VC vs. AI → Organizational Attractiveness                     -.04          0.05    .39   [-.14, .05]
Note. AI = highly automated condition, VC = videoconference condition. Coding of the variable VC vs. AI: -1 = videoconference interview
condition, 1 = highly automated interview condition. The 95% confidence interval for the effects is obtained by the bias-corrected bootstrap with
10,000 resamples.
Table 5.
Results for the Indirect Effects of the Videoconference vs. Highly Automated Interview Condition over the Hypothesized Mediators on Organizational Attractiveness

Model                                                            IEmed   SEBoot   95% Confidence Interval
Complete indirect effect                                         -.13    0.05     [-.24, -.03]
VC vs. AI → Social Presence → Organizational Attractiveness      -.12    0.05     [-.22, -.04]
VC vs. AI → Privacy Concerns → Organizational Attractiveness      .00    0.01     [-.03, .01]
VC vs. AI → Consistency → Organizational Attractiveness           .00    0.02     [-.03, .03]
VC vs. AI → Job Relatedness → Organizational Attractiveness      -.01    0.01     [-.05, .01]

Note. VC = videoconference interview condition, AI = highly automated interview condition. Coding of the variable VC vs. AI: -1 = videoconference condition, 1 = highly automated condition. The 95% confidence interval for the effects is obtained by the bias-corrected bootstrap with 10,000 resamples. IEmed = completely standardized indirect effect of the mediation. SEBoot = standard error of the bootstrapped effect sizes.
... Perhaps, a reason why the change of perspective went along with a change of preference was that when oneself is being assessed, one wants to be assessed as fairly and with as little personal bias as possible. A presumed major advantage of technical systems compared to humans is their objectivity and consistency [27][28][29] . This might have led participants to prefer the technical systems for their own assessment. ...
... In contrast, when being assessed, these factors might not be as relevant-tools tailored to a specific sub-task, their case-by-case performance, and lack of personal biases might determine the willingness to be assessed by machine intelligence. Specifically, a presumed major advantage of technical systems compared to humans is their objectivity and consistency [27][28][29] . A common fear when being judged by a technical system is reductionism of the algorithm, i.e., that unique individual factors might not be considered 38,39 . ...
Article
Full-text available
Technological advancements are ubiquitously supporting or even replacing humans in all areas of life, bringing the potential for human-technology symbiosis but also novel challenges. To address these challenges, we conducted three experiments in different task contexts ranging from loan assignment over X-Ray evaluation to process industry. Specifically, we investigated the impact of support agent (artificial intelligence, decision support system, or human) and failure experience (one vs. none) on trust-related aspects of human-agent interaction. This included not only the subjective evaluation of the respective agent in terms of trust, reliability, and responsibility, when working together, but also a change in perspective to the willingness to be assessed oneself by the agent. In contrast to a presumed technological superiority, we show a general advantage with regard to trust and responsibility of human support over both technical support systems (i.e., artificial intelligence and decision support system), regardless of task context from the collaborative perspective. This effect reversed to a preference for technical systems when switching the perspective to being assessed. These findings illustrate an imperfect automation schema from the perspective of the advice-taker and demonstrate the importance of perspective when working with or being assessed by machine intelligence.
... A couple of studies compared applicants' fairness perceptions of AI-enabled interviews vs. traditional interviews with a human recruiter, revealing contrasting findings. Whereas a group of papers (Acikgoz et al., 2020; Lee, 2018; Newman et al., 2020) found that people perceived algorithm-driven decisions as less fair than human-made decisions, another group of papers (Langer et al., 2019a, 2019b, 2020; Suen et al., 2019) found no difference in fairness perception between decisions made by an AI or a human. Other studies (Gelles et al., 2018; Kaibel et al., 2019; Langer et al., 2018; van Esch & Black, 2019) examined different contextual and procedural factors, such as the level of information given to applicants regarding the used AI or the level of computer experience of applicants, and how they affect applicant reactions to the use of AI in hiring. ...
... Furthermore, some participants felt that using algorithms and machines to assess humans is demeaning and dehumanizing (Lee, 2018). In contrast to those findings, another group of papers (Langer et al., 2019a, 2019b, 2020; Suen et al., 2019) found no differences in perceived fairness between interviews with an AI and interviews with a human among job applicants, although most of them exhibited lower favorability to AI interviews. ...
Article
Full-text available
Companies increasingly deploy artificial intelligence (AI) technologies in their personnel recruiting and selection process to streamline it, making it faster and more efficient. AI applications can be found in various stages of recruiting, such as writing job ads, screening of applicant resumes, and analyzing video interviews via face recognition software. As these new technologies significantly impact people’s lives and careers but often trigger ethical concerns, the ethicality of these AI applications needs to be comprehensively understood. However, given the novelty of AI applications in recruiting practice, the subject is still an emerging topic in academic literature. To inform and strengthen the foundation for future research, this paper systematically reviews the extant literature on the ethicality of AI-enabled recruiting to date. We identify 51 articles dealing with the topic, which we synthesize by mapping the ethical opportunities, risks, and ambiguities, as well as the proposed ways to mitigate ethical risks in practice. Based on this review, we identify gaps in the extant literature and point out moral questions that call for deeper exploration in future research.
... In this context, companies should note that the use of artificial intelligence can be accompanied by negative applicant reactions. Studies by Langer and colleagues found that highly automated interviews can lead to lower perceived social presence and lower fairness of these interviews, as well as to lower organizational attractiveness of the companies that use them (Langer et al., 2019; Langer, König, Sanchez & Samadi, 2020). Further studies found that announcing that a hiring decision will be made by an artificial intelligence leads to significantly lower fairness ratings than when the decision-maker is human (Gonzalez, Capman, Oswald, Theys & Tomczak, 2019; Langer, Baum, König, Hähne, Oster & Speith, 2021). ...
... Increasing evidence suggests that job applicants tend to react unfavorably toward automated processes, such as those using AI/ML. As examples, highly automated hiring processes can lead people to perceive less control and fairness, greater ambiguity and privacy concerns, and less acceptance of the process, relative to human-led processes (e.g., Gonzalez et al., 2019; Langer et al., 2019, 2020). The use of AI/ML thus appears to influence how applicants experience the hiring process. ...
Article
Full-text available
While many organizations' hiring practices now incorporate artificial intelligence (AI) and machine learning (ML), research suggests that job applicants may react negatively toward AI/ML-based selection practices. In the current research, we thus examined how organizations might mitigate adverse reactions toward AI/ML-based selection processes. In two between-subjects experiments, we recruited online samples of participants (undergraduate students and Prolific panelists, respectively) and presented them with vignettes representing various selection systems and measured participants' reactions to them. In Study 1, we manipulated (a) whether the system was managed by a human decision-maker, by AI/ML, or a combination of both (an “augmented” approach), and (b) the selection stage (screening, final stage). Results indicated that participants generally reacted more favorably toward augmented and human-based approaches, relative to AI/ML-based approaches, and further depended on participants' pre-existing familiarity levels with AI. In Study 2, we sought to replicate our findings within a specific process (selecting hotel managers) and application method (handling interview recordings). We found again that reactions toward the augmented approach generally depended on participants’ familiarity levels with AI. Our findings have implications for how (and for whom) organizations should implement AI/ML-based practices.
... They have made no real changes in the way they work or are organized. Much of the new sophisticated hardware is used to automate more of the existing procedures to speed up outdated tasks rather than to rethink them from scratch [6]. Only a few organizations had a coherent understanding of how to realize the potential of the data collected by their automated systems for industry revolution 5.0 [7]. ...
Article
Full-text available
Organizations facing the current digitalization dilemma need business intelligence and knowledge management to strengthen learning and strategic management. These elements capture a significant evolution of learning, insights gained, experience, and knowledge, with theoretical implications for practitioners, academics, and scholars in the pertinent field. This development is driven by the digital transformation toward Industry 5.0 and by the pursuit of organizational excellence in the information systems area. This research focuses on the characteristics of a comprehensive performance-measurement perspective in an organization, covering information assessment and the key challenges of business intelligence and knowledge management within a relevant organizational excellence framework. The research concentrates on the decision-making process and on leveraging better knowledge creation. The future of organizational excellence appears to converge on a holistic performance-measurement perspective and its factors in the transition toward Industry 5.0. The research concludes with a basic excellence framework that combines several characteristics for designing an organizational strategic performance framework. The output is a conceptual performance-measurement framework for decision-making applications in organizational strategic performance management dashboarding.
... There are asynchronous video interviews, AI chatbots, and other automated platforms that interpret and assess candidate responses in real time and provide an interview score. This also helps in reducing HR time and bias (Sivathanu & Pillai, 2018; Langer et al., 2019). ...
Article
Full-text available
Artificial intelligence (AI) is an emerging technology implemented across business fields, including human resources (HR), where it can make HR processes more effective. However, there is a dearth of academic research on the role of AI in the hiring function. The current research critically reviews the roles of AI in the HR function, focusing specifically on recruitment and selection. The paper also aims to clarify AI and its related concepts as they apply to the field of HR. The study conducted a literature review on AI in hiring (AIHr), initially identifying 200 related research articles published between 2010 and 2020; for further analysis, it considered only 20 articles published in quality journals. Through this systematic review, the paper discusses the areas in which research has been done and identifies the corresponding research gaps. This paper will help future researchers understand the keywords related to AIHr and the possible areas in which research can be conducted.
... Other recent research has explored new technology-based solutions to training and assessment, such as automated feedback and VR interventions (e.g. Langer et al., 2019; Weiner & Sanchez, 2020). This recommendation is provided while acknowledging that good instructional design will likely outweigh the impact of media differences in training interventions (Clark, 1994). ...
Article
Full-text available
Game-based training research has produced various definitions and measures for learner motivation. Inconsistent findings on learner motivation may have contributed to the misapplication of one type of motivation to explain another, inhibiting future research and generating false implications. This study compared 172 students learning French in game-based or computer-based training. Results showed unique relationships between three measures of learner motivation (i.e. motivation to learn, intrinsic motivation and engagement). Motivation to learn did not differ between conditions, while intrinsic motivation and engagement did. A significant portion of the variance in content reactions was explained by all three measures of motivation, while variance in technology reactions was explained only by motivation to learn and engagement. None of the measures of motivation accounted for significant variance in declarative or procedural knowledge. Results indicate key differences in three measures of motivation.
Article
Full-text available
Virtual reality (VR) is a potential assessment format for constructs dependent on certain perceptual characteristics (e.g., realistic environment and immersive experience). The purpose of this series of studies was to explore methods of evaluating reliability and validity evidence for virtual reality assessments (VRAs) when compared with traditional assessments. We intended to provide the basis of a framework for how to evaluate VR assessments, given that there are important fundamental differences between VR assessments and traditional assessment formats. Two commercial off-the-shelf (COTS) games (i.e., Project M and Richie's Plank Experience) were used in Studies 1 and 2, while a game-based assessment (GBA; Balloon Pop, designed for assessment) was used in Study 3. Studies 1 and 2 provided limited evidence for the reliability and validity of the VRAs. However, no meaningful constructs were measured by the VRA in Study 3. Findings demonstrate limited evidence for these VRAs as viable assessment options through the validity and reliability methods utilized in the present studies, which in turn emphasizes the importance of aligning the assessment purpose to the unique advantages of a VR environment.
Practitioner points:
• Findings were mixed in correlating the VRA scores with similar assessments of the intended constructs being measured.
• Details are provided on the design and scoring for the presented VRAs.
• Although research using VRAs is still preliminary, there are promising methods through which we might design unique behavior-based evaluation.
Article
Companies are increasingly turning to AI software to select candidates, despite concerns that hiring algorithms may produce biased evaluations. This study explores the public perceptions of algorithms used in resume and video interview screening. In addition, the effects of individual characteristics on these perceptions are examined. Using a nationally representative sample, we find that the public generally has a negative attitude towards the use of algorithms in hiring, and the majority do not consider them fair and effective. We also find clear individual differences regarding the perceptions towards algorithms. Specifically, males, people with higher education level and people with higher income have more positive perceptions towards hiring algorithms than their counterparts. The findings contribute to the emerging body of research on hiring algorithms and suggest strategies to increase public acceptance of hiring algorithms.
Article
Full-text available
Companies increasingly deploy artificial intelligence (AI) technologies in their personnel recruiting and selection processes to streamline them, thus making them more efficient, consistent, and less human biased (Chamorro-Premuzic, Polli, & Dattner, 2019). However, prior research found that applicants prefer face-to-face interviews compared with AI interviews, perceiving them as less fair (e.g., Acikgoz, Davison, Compagnone, & Laske, 2020). Additionally, emerging evidence exists that contextual influences, such as the type of task for which AI is used (Lee, 2018), or applicants' individual differences (Langer, König, Sanchez, & Samadi, 2019), may influence applicants' reactions to AI-powered selection. The purpose of our study was to investigate whether adjusting process design factors may help to improve people's fairness perceptions of AI interviews. The results of our 2 x 2 x 2 online study (N = 404) showed that the positioning of the AI interview in the overall selection process, as well as participants' sensitization to its potential to reduce human bias in the selection process, have a significant effect on people's perceptions of fairness. Additionally, these two process design factors had an indirect effect on overall organizational attractiveness mediated through applicants' fairness perceptions. The findings may help organizations to optimize their deployment of AI in selection processes to improve people's perceptions of fairness and thus attract top talent.
Article
Full-text available
Technological advancements in Artificial Intelligence allow the automation of every part of job interviews (information acquisition, information analysis, action selection, action implementation), resulting in highly automated interviews. Efficiency advantages exist, but it is unclear how people react to such interviews (and whether reactions depend on the stakes involved). Participants (N = 123) in a 2 (highly automated, videoconference) × 2 (high-stakes, low-stakes situation) experiment watched and assessed videos depicting a highly automated interview for high-stakes (selection) and low-stakes (training) situations or an equivalent videoconference interview. Automated high-stakes interviews led to ambiguity and less perceived controllability. Additionally, highly automated interviews diminished overall acceptance through lower social presence and fairness. To conclude, people seem to react negatively to highly automated interviews and acceptance seems to vary based on the stakes. Open Practices: This study was pre-registered on the Open Science Framework (osf.io/hgd5r) and on AsPredicted (https://AsPredicted.org/i52c6.pdf).
Article
Full-text available
We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.
Article
Full-text available
This article details concerns about the potential of machine learning processes to incorporate human biases inherent in social data into artificial intelligence systems that influence consequential decisions in the courts, business and financial transactions, and employment situations. It details incidents of biased decisions and recommendations made by artificial intelligence systems that have been given the patina of objectivity because they were made by machines supposedly free of human bias. The article offers suggestions for addressing the systemic biases that are impacting the viability, credibility, and fairness of machine learning processes and artificial intelligence systems.
Article
Full-text available
Algorithms increasingly make managerial decisions that people used to make. Perceptions of algorithms, regardless of the algorithms' actual performance, can significantly influence their adoption, yet we do not fully understand how people perceive decisions made by algorithms as compared with decisions made by humans. To explore perceptions of algorithmic management, we conducted an online experiment using four managerial decisions that required either mechanical or human skills. We manipulated the decision-maker (algorithmic or human), and measured perceived fairness, trust, and emotional response. With the mechanical tasks, algorithmic and human-made decisions were perceived as equally fair and trustworthy and evoked similar emotions; however, human managers' fairness and trustworthiness were attributed to the manager's authority, whereas algorithms' fairness and trustworthiness were attributed to their perceived efficiency and objectivity. Human decisions evoked some positive emotion due to the possibility of social recognition, whereas algorithmic decisions generated a more mixed response – algorithms were seen as helpful tools but also possible tracking mechanisms. With the human tasks, algorithmic decisions were perceived as less fair and trustworthy and evoked more negative emotion than human decisions. Algorithms' perceived lack of intuition and subjective judgment capabilities contributed to the lower fairness and trustworthiness judgments. Positive emotion from human decisions was attributed to social recognition, while negative emotion from algorithmic decisions was attributed to the dehumanizing experience of being evaluated by machines. This work reveals people's lay concepts of algorithmic versus human decisions in a management context and suggests that task characteristics matter in understanding people's experiences with algorithmic technologies.
Article
Full-text available
Digital interviews (or Asynchronous Video Interviews) are a potentially efficient new form of selection interviews, in which interviewees digitally record their answers. Using Potosky's framework of media attributes, we compared them to videoconference interviews. Participants (N = 113) were randomly assigned to a videoconference or a digital interview and subsequently answered applicant reaction questionnaires. Raters evaluated participants' interview performance. Participants considered digital interviews to be creepier and less personal, and reported that they induced more privacy concerns. No difference was found regarding organizational attractiveness. Compared to videoconference interviews, participants in digital interviews received better interview ratings. These results warn organizations that using digital interviews might cause applicants to self-select out. Furthermore, organizations should stick to either videoconference or digital interviews within a selection stage.
Article
Full-text available
Technologically advanced selection procedures are entering the market at exponential rates. The current study tested two previously held assumptions: (a) providing applicants with procedural information (i.e., making the procedure more transparent and justifying the use of this procedure) on novel technologies for personnel selection would positively impact applicant reactions, and (b) technologically advanced procedures might differentially affect applicants with different levels of computer experience. In a 2 (computer science students, other students) × 2 (low information, high information) design, 120 participants watched a video showing a technologically advanced selection procedure (i.e., an interview with a virtual character responding and adapting to applicants' nonverbal behavior). Results showed that computer experience did not affect applicant reactions. Information had a positive indirect effect on overall organizational attractiveness via open treatment and information known. This positive indirect effect was counterbalanced by a direct negative effect of information on overall organizational attractiveness. This study suggests that computer experience does not affect applicant reactions to novel technologies for personnel selection, and that organizations should be cautious about providing applicants with information when using technologically advanced procedures, as information can be a double-edged sword. Update: While not specifically mentioned in the paper, it has implications for explainability and XAI research: providing people with more transparency can have simultaneous positive and negative effects on acceptance.
Article
Full-text available
The use of technology such as telephone and video has become common when conducting employment interviews. However, little is known about how technology affects applicant reactions and interviewer ratings. We conducted meta-analyses of 12 studies that resulted in K = 13 unique samples and N = 1,557. Mean effect sizes for interview medium on ratings (d = -.41) and reactions (d = -.36) were moderate and negative, suggesting that interviewer ratings and applicant reactions are lower in technology-mediated interviews. Generalizing research findings from face-to-face interviews to technology mediated interviews is inappropriate. Organizations should be especially wary of varying interview mode across applicants, as inconsistency in administration could lead to fairness issues. At the same time, given the limited research that exists, we call for renewed attention and further studies on potential moderators of this effect.
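As a rough illustration of how such mean effect sizes can be aggregated, the short Python sketch below computes a sample-size-weighted mean of Cohen's d across independent samples. The numbers are invented for illustration, and the meta-analysis may have used a different weighting scheme (e.g., inverse-variance weights), so this conveys only the general logic.

import numpy as np

# Hypothetical per-sample effect sizes and sample sizes
# (made up, not the meta-analytic data)
d = np.array([-0.55, -0.30, -0.42, -0.38])
n = np.array([120, 210, 96, 150])

d_bar = (n * d).sum() / n.sum()  # sample-size-weighted mean effect size
print(f"weighted mean d = {d_bar:.2f}")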
Article
Full-text available
The uncanny valley hypothesis, proposed as early as the 1970s, suggests that almost but not fully humanlike artificial characters will trigger a profound sense of unease. This hypothesis has become widely acknowledged in both the popular media and scientific research. Surprisingly, empirical evidence for the hypothesis has remained inconsistent. In the present article, we reinterpret the original uncanny valley hypothesis and review empirical evidence for different theoretically motivated uncanny valley hypotheses. The uncanny valley could be understood as the naïve claim that any kind of human-likeness manipulation will lead to experienced negative affinity at close-to-realistic levels. More recent hypotheses have suggested that the uncanny valley is caused by artificial-human categorization difficulty or by a perceptual mismatch between artificial and human features. The original formulation also suggested that movement would modulate the uncanny valley. The reviewed empirical literature failed to provide consistent support for the naïve uncanny valley hypothesis or the modulatory effects of movement. Results on the categorization difficulty hypothesis were still too scarce to allow drawing firm conclusions. In contrast, good support was found for the perceptual mismatch hypothesis. Taken together, the present review findings suggest that the uncanny valley exists only under specific conditions. More research is still needed to pinpoint the exact conditions under which the uncanny valley phenomenon manifests itself.
Article
Hospitality organizations utilize a variety of selection tools to hire the best candidates. Traditionally, hospitality recruiters have relied on face-to-face interviews for choosing the most qualified candidates to represent the firm. While real-time Internet-based interviewing platforms are increasingly utilized among hospitality organizations, a cutting-edge technology-based interviewing phenomenon has emerged: the use of asynchronous video interviews (AVIs). To conduct this modality of interviews, employers send text-based questions electronically, and the candidate records his or her responses using a webcam via various proprietary software platforms. Following the promise of reduced costs and increased efficiencies, many organizations have adopted this modality of interviews; however, little research has been conducted regarding their effectiveness among both providers and users. Additionally, the appropriateness and alignment of AVIs in the hospitality industry for the purpose of selecting service representatives should be investigated. In light of this, the present research examines the literature on interviewing modalities, the predictive validity of selection tools, and electronic Human Resources, and presents several propositions as well as an agenda for future research. Furthermore, it presents a conceptual model for AVIs using the literature on electronic Human Resources as a backdrop.