The influence of algorithm aversion and
anthropomorphic agent design on the
acceptance of AI-based job recommendations
Completed Research Paper
Jessica Ochmann¹ (jessica.ochmann@fau.de)
Leonard Michels² (leonard.michels@fau.de)
Sandra Zilker³ (sandra.zilker@fau.de)
Verena Tiefenbeck² (verena.tiefenbeck@fau.de)
Sven Laumer¹ (sven.laumer@fau.de)
Friedrich-Alexander-University Erlangen-Nuremberg, Germany
¹ Schöller Endowed Chair for Information Systems, Fürther Str. 248, 90429 Nuremberg
² Digital Transformation Group, Lange Gasse 20, 90403 Nuremberg
³ Chair of Digital Industrial Service Systems, Fürther Str. 248, 90429 Nuremberg
Abstract
Artificial intelligence (AI) offers promising tools to support the job-seeking process by providing automatic and user-centered job recommendations. However, job seekers often hesitate to trust AI-based recommendations in this context, given the far-reaching consequences that the choice of a job has on their future career and life. This hesitation is largely driven by a lack of explainability, as the underlying algorithms are complex and not transparent to the user. Prior research suggests that anthropomorphization (i.e., the attribution of human traits) can increase the acceptance of technology. Therefore, we adapted this concept for AI-based recommender systems and conducted a survey-based study with 120 participants. We find that using an anthropomorphic design in a recommender system for open positions increases job seekers’ acceptance of the underlying system. However, algorithm aversion rises if detailed information on the algorithmic origin of the recommendation is disclosed.
Keywords: Algorithm aversion, Anthropomorphism, AI-based recommendations,
Human Resource Management
Introduction
On average, we spend approximately a quarter of our lives working (Pryce-Jones 2010) and our work life
significantly contributes to our level of well-being (Bowling et al. 2010). Consequently, the decision of which jobs to apply to is of major importance. To find a position in line with their expectations, job seekers typically
screen plenty of job proposals and apply for the most appropriate ones, putting much effort into optimizing
their applications (Berg et al. 2010); this process, however, can be stressful and time-consuming (Wanberg
et al. 2010). Recent developments in artificial intelligence (AI) bring promising avenues for overcoming
these issues by providing automatic, rich, and user-centered recommendations, holding the potential to
considerably improve the way individuals search and apply for jobs.
This approach is increasingly adopted in the human resources (HR) context, which has fostered the devel-
opment of AI-based job recommender systems (Duan et al. 2019). These systems pre-select job alterna-
tives based on job seekers’ preferences, personal data, and data from previous job-seekers with comparable
profiles, by using machine learning (ML) algorithms that predict the most suitable vacancies (Castelvecchi
2016). The resulting AI-based recommendations support job seekers in finding job proposals that best fit
their individual preferences and qualifications (Malinowski et al. 2006). Due to these promising results,
these systems are very likely to greatly impact the future of recruiting both from an organizational and from
an individual perspective. However, ultimately the success of job recommender systems hinges on a large
number of job seekers using them. Consequently, a better understanding of the factors determining the
acceptance of the recommendations provided by these systems is required. In fact, many individuals still
hesitate to rely on them (Laumer et al. 2018). While research in this field in general, and the recruitment
domain in particular, is still scarce (van Esch et al. 2019), the existing body of literature on the acceptance
of recommendations in the general business-to-consumer (B2C) context offers a valuable point of departure
to build upon.
To uncover the factors that govern the acceptance of recommendations, prior research has typically used
frameworks that incorporate cognitive aspects (e.g., Hu and Pu 2009) or relational constructs such as trust
(e.g., Komiak and Benbasat 2006). However, these frameworks and this approach in general may not neces-
sarily apply to high-stake contexts like job-seeking. The main difference between recommendations in the
general consumer context and the job-seeking domain is that the decision to rely on a recommendation for a
new job has much more important implications for one’s life, compared to simple, inconsequential de-
cisions such as choosing which movie to watch or which song to listen to. Therefore, it is very plausible that
algorithmic aversion (Dietvorst et al. 2015; Logg et al. 2019) arises when individuals encounter AI-based
recommendations in high-stake decision contexts.
In general, algorithm aversion describes the phenomenon that individuals are often reluctant to accept and
to rely on results computed by statistical algorithms and rather trust human forecasts, even though evidence-
based algorithms are more accurate in predicting appropriate alternatives compared to human reasoning
(Dietvorst et al. 2018). Consequently, many individuals are hesitant or entirely refuse to rely on recom-
mendations that are apparently based on algorithmic prediction (Burton et al. 2020; Castelo et al. 2019;
Dietvorst et al. 2015). This tendency might be especially prevalent in high-stake decisions supported by
algorithm-based recommendation systems. As humans often assume that they have superior reasoning com-
pared to algorithms (Dietvorst et al. 2015, 2018), relying on an algorithm’s recommendation is perceived as
a more risky decision than relying on one’s own reasoning. Prior research has shown that when the stakes
of a decision rise, humans tend to become risk-averse (Fehr-Duda et al. 2010), which would manifest in
algorithm aversion in high stake decisions. Recent research, indeed, suggests a two-sided character of this
phenomenon, showing that algorithm aversion is not omnipresent and can be reduced by giving users more
control and allowing them to modify the algorithm (Dietvorst et al. 2018). Further, in situations that re-
quire ample background knowledge (e.g., prediction of business or geopolitical events), users even display
a certain level of algorithm appreciation (Logg et al. 2019).
To overcome the potential issue of algorithm aversion, it might be beneficial for AI-based recommender
systems not to reveal the algorithmic origin of their recommendations. Users might be overwhelmed by the
complex information or mistrust it and consequently decide to rely instead on their own judgment, whose underlying processes appear clearer to them. In addition to this potential measure to avoid effects decreasing
the acceptance of AI-based job recommendations, it seems beneficial to investigate the effects of measures
that have been found to increase the acceptance of AI-based recommendations in this high stake decision
context. Prior research emphasizes that users are more likely to accept the choice of a recommender system
when it is presented as a human-like agent (Qiu and Benbasat 2009). One unobtrusive, easy-to-implement
measure to increase the human-likeness of a recommendation agent could be the use of anthropomorphic
(human-like) design features for the presentation of the recommendation (Epley et al. 2007; Pfeuffer et al.
2019; Qiu and Benbasat 2009; Wang et al. 2016). Hence, this study focuses on the investigation of why and
when job seekers will adopt AI-based recommender systems. Our aim is to investigate whether algorithm dis-
closure and anthropomorphism influence the acceptance of AI-based recommendations in the job-seeking
context. We thus contribute to the literature on algorithm acceptance research (Dietvorst et al. 2018; Logg
et al. 2019) by empirically investigating the effect of algorithm disclosure and anthropomorphism on the
acceptance of AI-based job recommendations, which is a more consequential, higher-stake context than the ones
examined in prior research. Thus, we answer the following research question:
RQ: How does disclosing detailed information on the algorithmic origin of an AI-based
job recommendation and the use of anthropomorphic design features to communi-
cate an AI-based job recommendation affect its acceptance by users?
The paper is organized as follows. First, we briefly discuss relevant literature on recommender systems in
human resource management, algorithm aversion regarding AI-based recommendations in particular, the
use of anthropomorphism in recommendation systems, and derive our hypotheses. Next, we motivate our
choice of the scenario-based technique used in the present study and describe the methodology we used
to test our hypotheses. Finally, we report the results of our empirical study and conclude with a general
discussion of the findings.
Related work and development of hypotheses
Recommender systems in human resource management
Recommender systems were first introduced by Resnick et al. (1997) and describe information systems that
analyze user data to produce personalized recommendations that match user preferences. Their objective is
to reduce a user’s potential information overload by sorting and filtering alternatives in terms of relevance
and user fit. Besides the benefits provided for the user, effective recommender systems also help organiza-
tions that offer these systems to increase consumer loyalty and sales and to differentiate themselves from
competitors (Adomavicius et al. 2019; Gomez-Uribe and Hunt 2015; Sharma et al. 2015). Over the last
decade, the application of recommender systems has noticeably increased in a variety of domains, such as
e-commerce, media, and human resources (Lu et al. 2015; Malinowski et al. 2006). While recommender sys-
tems in e-commerce and media predominantly aim to reduce consumers’ efforts necessary to find relevant
products or services, in the recruiting context, two different types of recommender systems are discussed
that address either the organization or the job seeker. First, in the organizational context, CV recommender systems are used by recruiters to match a specific vacancy with the most appropriate candidate; second, job recommender systems help job seekers match their job preferences with suitable vacancies (Malinowski et al. 2006).
Given the omnipresence of recommender systems, their growing importance in individual decision-making
processes, and their large economic potential (Bodapati 2008), it is crucial for academia and practitioners
alike to understand the factors that influence the acceptance of these systems. Therefore, scholarly research
has put increasing effort into the investigation of various theories and models to explain the acceptance of rec-
ommender systems and their results, with a strong focus on the domain of product recommendations in
e-commerce (Adomavicius et al. 2019; Komiak and Benbasat 2006; Moussawi et al. 2020).
It remains unclear, however, if and how these results can be adapted to the job-seeking context that is charac-
terized by high personal stakes of the decision, as job satisfaction highly influences life satisfaction (Bowling
et al. 2010). In addition, prior research has focused especially on the acceptance of conventional job recom-
mender systems that rely on collaborative filtering and content- or knowledge-based techniques (for a review
see Lu et al. 2015; Park et al. 2012). Currently, the diffusion of and advances in the research on artificial intel-
ligence provide additional opportunities for recommender systems to make more precise and user-centered
recommendations (Dietvorst et al. 2018). Algorithms gain in prediction quality, and the resulting AI-based
recommendations have the ability to assist users in the preselection of alternatives in a more sophisticated
manner as they are able to discover intricate structures in large data sets (LeCun et al. 2015). Thus, AI-
based recommender systems have the potential to fundamentally change future job search. Therefore, our
aim is to unveil factors that influence the acceptance of AI-based recommendations. In line with prior re-
search (Promberger and Baron 2006), we introduce the acceptance of AI-based recommendations as our
dependent variable of interest. In the following, we discuss the concepts of algorithm aversion and anthro-
pomorphism as potentially influential factors regarding the acceptance of AI-based job recommendations.
Algorithm aversion
Algorithms are defined as computer-implementable instructions to perform a specified task, and they often outperform human experts in various domains (LeCun et al. 2015). Multiple scholars have theorized an aversion
towards algorithms that give users automated advice regarding a certain task (Castelo et al. 2019; Dana
and Thomas 2006; Dietvorst et al. 2018), although pioneering research from the 1950s illustrated that even basic statistical algorithms such as linear regression outperform human experts on medical diagnosis tasks
(Dawes et al. 1989; Meehl 1954). Since then, the fast progress in the field of artificial intelligence has enabled
algorithms to learn from the past, understand and create natural language, and even reflect human emotions
(Castelo et al. 2019), further increasing their potential superiority compared to human reasoning.
The increasing presence of algorithms, however, confronts individuals more frequently with the choice of
whether they should rely on human experts or on algorithms. The dominant theme in this broad academic
research area is that individuals prefer human advice over algorithms (Dietvorst et al. 2015). The underly-
ing reasoning of this so-called algorithm aversion is manifold and can be ascribed to the desire for a perfect
prediction and the mistaken belief that humans are more capable of perfection (Einhorn 1986), ethical con-
cerns (Dawes 1979; Eastwood et al. 2012), and the zero error tolerance for algorithms (Dietvorst et al. 2015).
Moreover, the lack of perceived control over the forecast inhibits the acceptance of algorithms (Dietvorst
et al. 2018). Scholars have further argued that individuals’ mistrust towards machines results in rejection of
algorithm advice (for a review see Castelo et al. 2019). For example, individuals assume that an algorithm
is unable to take their unique circumstances fully into account and are therefore averse to automated medi-
cal care (Longoni et al. 2019). In the field of recruitment, Diab et al. (2011) found that participants expect
human recruiters to be more useful, professional, fair, and flexible than algorithms that are programmed to
select employees.
In contrast, recent scholarly work shows that for numerical tasks with an objectively correct answer, indi-
viduals actually prefer advice from algorithms to advice from human beings. This phenomenon is subsumed
under the term algorithm appreciation (Logg et al. 2019). In addition, algorithm familiarity for a certain
task increases trust and acceptance of algorithms. For example, individuals who are familiar with product
or movie recommendations on the corresponding platforms tend to rely on the advice of these algorithms (Castelo
et al. 2019).
These conflicting findings highlight the need for further academic research to unveil reliable factors that
predict the acceptance of algorithms for different types of tasks and contexts (Castelo et al. 2019). The sys-
tematic exploration of why and when individuals accept algorithms further helps to build an understanding of the circumstances under which job seekers rely on AI-based recommendations. As prior research suggests that
disclosing information on the algorithmic origin of a recommendation might lead to reluctance regarding
its acceptance (Burton et al. 2020; Castelo et al. 2019), our study seeks to induce algorithm aversion by vary-
ing the amount of information provided on the algorithmic origin of an AI-based job recommendation. We
call this manipulation algorithm disclosure to address the user’s potential algorithm aversion when being
directly confronted with the advice of an algorithm. In line with prior research, we assume that
Hypothesis 1: Disclosing information on the algorithmic origin of an AI-based job recommen-
dation leads to a lower acceptance rate of this recommendation.
While algorithm aversion is a concept that can explain why users refrain from accepting AI-based job recom-
mendations, it does not provide a potential lever to actively increase the acceptance rate of such recommen-
dations. As prior research has shown, individuals rather trust humans than algorithms when it comes to rec-
ommended decisions (Dietvorst et al. 2018; Qiu and Benbasat 2009). One promising approach to increase
the acceptance of AI-based job recommendation systems could therefore be to increase the human-likeness
of the system. This approach has been discussed by prior research under the term of anthropomorphism.
Anthropomorphism
The concept of anthropomorphism refers to the process of attributing human characteristics, traits, or fea-
tures to non-human agents, in order to reduce uncertainty and increase comprehension in situations when
knowledge about the mechanisms underlying the behavior and the intentions of the non-human agent is
scarce (Epley et al. 2007; Pfeuffer et al. 2019). By anthropomorphizing the non-human agent, users make
inferences about themselves and other humans to predict its future behavior or make sense of its past be-
havior. If the evaluation of the anthropomorphized non-human agent is positive, it will be associated with
multiple other positive characteristics such as trustworthiness, reliability, or competence (e.g., Aggarwal and
McGill 2007; Benlian et al. 2019; Mourey et al. 2017; Qiu and Benbasat 2009; Wang et al. 2016). Prior re-
search suggests that the positive effect of anthropomorphism on acceptance of a non-human agent is driven
by an increased social presence, referring to the capacity of a technology to convey relational information
and the extent to which it builds up a psychological connection with the user (Cyr et al. 2007, 2009; Qiu
and Benbasat 2009; Schultze and Brooks 2019; Short et al. 1976). Thus, manipulating the extent to which
a job recommendation system incorporates anthropomorphic design features seems to be a valuable and
promising approach to increase its acceptance and the adoption of its recommendations.
The more human-like a non-human agent appears with respect to its visual, auditory, or mental character-
istics, the more likely it will be anthropomorphized (Pfeuffer et al. 2019). The positive effects of this an-
thropomorphization might, however, disappear once the non-human agent becomes too lifelike, such that
it raises a feeling of eeriness in the user that leads to revulsion, an effect known as the uncanny valley (Mori et al. 2012). A design manipulation that is both easy to implement and effective is the use of human images with facial features (Cyr et al. 2009; Gong 2008; Pak et al. 2012; Riegelsberger et al.
2003; Wang et al. 2016). Prior research has shown, for example, that the use of human images with facial
features leads to more positive evaluations of websites and recommendation agents (Cyr et al. 2009; Pak
et al. 2012; Steinbrück et al. 2002; Wang et al. 2016). Further, Qiu and Benbasat (2009) show that increas-
ing the social presence of a product recommendation agent leads to an increase in the user’s intention to
use it as a decision aid. Pak et al. (2012) also report an increased adoption of a decision aid in a medical
context when it was equipped with anthropomorphic characteristics. For personal intelligent agents (PIA),
Moussawi et al. (2020) identified perceived anthropomorphism as an antecedent of PIA adoption. Gruber
et al. (2018) and Gruber (2018), however, investigated the effect of anthropomorphism on the acceptance
of navigation decision aids and found no significant effects. These results suggest that anthropomorphic de-
sign features can lead to increased recommendation acceptance, but also that this effect is not unequivocal.
Further, research has not yet investigated the effects of anthropomorphic design features on the acceptance
of recommendations in areas where high personal stakes are involved and one typically does not rely on
automated recommendation systems, such as the job-seeking domain. To evaluate whether using anthropo-
morphic design features can increase the acceptance of AI-based job recommendations, we will manipulate
whether the recommendation is communicated to the users using a human image or an artificial non-human
image. In addition, in the anthropomorphic condition, we will refer to the recommendation agent in the first
person, giving it a name and a gender, as prior research has shown that this further increases the degree to
which users think of a non-human agent in human terms (Aggarwal and McGill 2007; Mourey et al. 2017).
Based on the results of prior research, we assume that
Hypothesis 2: Communicating the results of an AI-based job recommendation system using an-
thropomorphic design features leads to a higher acceptance rate of this recommen-
dation.
As prior research has shown, trust in the recommendation agent can have substantial effects on adoption
behavior (Komiak and Benbasat 2008, 2006; Qiu and Benbasat 2009; Wang et al. 2016). These effects,
however, do not necessarily persist when manipulating anthropomorphic design features or the degree to
which an AI-based recommendation system is perceived as an elaborate algorithm (i.e., intelligent), as a
recent study has shown (Moussawi et al. 2020). To account for potential trust effects on the effectiveness of
the above-mentioned interventions on the acceptance of AI-based recommendations, we control for general
trust in artificial intelligence regarding job-related decisions in our analyses.
Research method
Experimental Design
We implemented the study as a two-factorial ((anthropomorphic: yes/no) x (AI process disclosure: yes/no)) between-subjects design. More precisely, we manipulated A) whether the artificial intelligence was presented in an anthropomorphic way and B) whether the AI’s underlying processes were disclosed, to operationalize algorithm aversion. In the anthropomorphic condition (AN), we referred to the artificial intelligence in the third person and gave it a human name (i.e., “Emily”). Further, the AI-based recommendation was communicated to the participants by a picture of a woman (see Figure 1). We conducted a pre-test to ensure that the picture we used in the anthropomorphic condition did not appear negative on relevant dimensions (Wang et al. 2016). In this pre-test with N = 48 participants, we evaluated whether the person in the picture appeared professional, authoritarian, like an expert, trustworthy, dependable, reliable, or like an HR expert, using a 7-point Likert scale ranging from “Strongly disagree” to “Strongly agree” (Wang et al. 2016). The participants of this pre-test were not drawn from the same subject pool as the participants in the final study and were recruited using different online sampling methods (e.g., social media groups, professional networks). Along all dimensions, the picture was rated as significantly positive (i.e., significantly above the neutral scale value of 4). In the non-anthropomorphic conditions (¬AN), the artificial intelligence was not referred to in the third person, and its recommendation was communicated by a mechanical, abstract picture of gears (see Figure 1). In the two conditions with a high degree of algorithm disclosure (AD), participants were informed that the AI used algorithms, equations, and a comprehensive database to identify the most suitable position. In the two conditions with a low degree of algorithm disclosure (¬AD), no such information was provided (we denote the absence of a factor by ¬). Participants were assigned to one of the four conditions (i.e., AN_AD, AN_¬AD, ¬AN_AD, ¬AN_¬AD) using block randomization. Due to the involvement of human subjects, the authors sought and were granted approval for the study from the person at the university department overseeing the good conduct and ethical aspects of empirical research.
Figure 1. Pictures used to present the AI-based recommendation in the anthropomorphic
condition (on the left), and non-anthropomorphic condition (on the right).
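For readers who want to see how such a blocked assignment can be operationalized, the following Python sketch illustrates block randomization across the four conditions (purely illustrative; the condition labels, block size of four, and seed are assumptions, not the platform code used in the study):

```python
import random
from collections import Counter

# The four cells of the 2x2 design (labels follow the paper's notation; assumed).
CONDITIONS = ["AN_AD", "AN_¬AD", "¬AN_AD", "¬AN_¬AD"]

def block_randomize(n_participants: int, seed: int = 42) -> list[str]:
    """Assign participants to conditions in shuffled blocks of four,
    so that group sizes never differ by more than one."""
    rng = random.Random(seed)
    assignments: list[str] = []
    while len(assignments) < n_participants:
        block = CONDITIONS.copy()
        rng.shuffle(block)              # random order within each block of four
        assignments.extend(block)
    return assignments[:n_participants]

if __name__ == "__main__":
    print(Counter(block_randomize(121)))  # roughly 30/30/30/31 across the four cells
```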
User study
To test our hypotheses, we applied a scenario-based vignette technique (Finch 1987) and conducted an on-
line, survey-based user study. Further information on the recruitment procedure and study participants will
be provided below. To protect the study’s participants, they were informed about the content of the study,
about its scientific background, and the measures taken to protect their privacy (i.e., anonymization) prior
to starting the study. In addition, the authors’ contact information was provided, allowing the participants to contact them if they had any concerns related to the study or their personal data. By participating in the study, all participants agreed to these conditions. The welcome page of the survey informed participants that
the survey related to the job application context. Next, they were asked to put themselves in the situation
that they were at the beginning of their thirties, unsatisfied with their current job position and, therefore,
looking for a new full-time job for young professionals. This negative framing of their current job situation
was used to increase the stakes associated with finding a new one, hence emphasizing the importance of the
job searching process. To build up a certain level of initial trust (Moussawi et al. 2020), it was revealed to
them that they, by chance, had learned about a career platform that has a very good reputation, successfully
placed many job seekers, and was recently equipped with an AI-based recommendation system. The specific
wording was used to induce trust in the platform and thus a baseline level of general trust in the AI based on
the reputation of the platform. No direct trust-inducing measure for the AI was used to avoid interference
with potential algorithm aversion. The recommendation system was described according to the experimen-
tal condition, and the participants were informed that they had only trial access to the platform, which meant
that they could only apply to one of the suggested open positions. Participants were then asked for their first
name, gender, highest degree or completed level of education, the kind of company they would like to work
for, the department they would like to work in, and the city in which they were looking for a job. After these
questions, they were shown an example of how the job proposals would be presented to them. Afterwards, a
loading screen appeared for five seconds to imply that the AI-based recommender system was searching for
suitable job proposals. On the next page, participants were presented with the results of the career platform
and the job recommender system in the form of a text describing the results and procedure, according to
the experimental condition, along with a picture stating the AI’s recommendation (see Figure 1). Below this
information, four different job proposals were provided to them, one with a blue frame and a yellow badge,
representing the recommended job. The job proposal that was recommended to the participants was fixed
across conditions (see Figure 2). The recommendations matched the department they wanted to work in.
Participants were asked to consider the options for at least 60 seconds and were not able to proceed with
the survey before that time had elapsed. On average, participants spent 107 seconds on the decision.
Figure 2. Two out of four job proposals presented to the participants in the experimental
task, with one recommended proposal (on the right).
The participants’ task consisted of choosing one of the four open positions they would like to apply for. The
open positions were rated along eight dimensions as either average, above average or below average com-
pared to jobs in other companies of the same industry. We opted for this approach, as prior research has
shown that a choice set of four options described on eight dimensions leads to a suitable level of decision
difficulty in assessing factors that may influence individual decision-making (Dijksterhuis et al. 2006). To
determine which eight dimensions to use for describing the job proposals, we had conducted a second pre-
test with 37 participants. We asked the participants of that pre-test to rank a set of 12 job dimensions by how
important they perceived them in determining whether they would apply for a job in a comparable scenario
to the one we used in our study; the mean ranks of the different dimensions are presented in Table 1. The
participants of the second pre-test were recruited using online sampling methods (e.g., social media groups,
professional networks); none of them had participated in the first pre-test or was part of the sample that par-
ticipated in the final study. To avoid one of the job proposals being perceived as the obvious choice by the participants, we selected the four most important dimensions according to our pre-test and rated all job proposals as ‘average’ on them. In a second step, we selected four additional job dimensions that had been ranked as comparably important in the pre-test, namely the dimensions ranked 6 to 9. On these dimensions, the job proposals differed such that for every job proposal, two dimensions were rated as ‘above average’ and two as ‘below average’, while ensuring that all four job proposals carried a different pattern of ratings on these dimensions. After stating their decision, the partici-
pants were asked multiple questions regarding their decision and their impression of the recommendation
system. Participants spent on average 448 seconds to fill out the post-task questionnaire.
# Dimension Mean rank (SD)
1 Salary 2.43 (1.59)
2 Advancement opportunities 4.30 (2.70)
3 Working hours 4.30 (2.20)
4 Location 4.68 (3.00)
5 Further education opportunities 5.89 (3.20)
6 Collegiality among employees 6.76 (2.85)
7 Public reputation of the company 6.89 (3.45)
8 Holiday entitlement 7.27 (2.59)
9 Social benefits 7.65 (2.87)
Table 1. Pre-test results (N= 37): Perceived importance of job dimensions in the
application decision, mean rank and standard deviation (rank 1 = most important).
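As an illustration of this balanced layout, the sketch below (Python, hypothetical) builds the 4 proposals × 8 dimensions rating matrix: the four top-ranked dimensions from Table 1 are held at ‘average’, and the dimensions ranked 6 to 9 are varied with two ‘above average’ and two ‘below average’ ratings per proposal. The concrete pairing of above-average dimensions is an assumption, not the exact pattern used in the study.

```python
# Illustrative construction of the 4 proposals x 8 dimensions rating matrix.
CONSTANT_DIMENSIONS = ["Salary", "Advancement opportunities", "Working hours", "Location"]
VARIED_DIMENSIONS = ["Collegiality among employees", "Public reputation of the company",
                     "Holiday entitlement", "Social benefits"]  # ranks 6-9 in the pre-test

# Two varied dimensions per proposal are 'above average'; the pairing below is hypothetical.
ABOVE_AVERAGE_PAIRS = [
    ("Collegiality among employees", "Public reputation of the company"),
    ("Collegiality among employees", "Holiday entitlement"),
    ("Public reputation of the company", "Social benefits"),
    ("Holiday entitlement", "Social benefits"),
]

def build_proposal(above_pair: tuple[str, str]) -> dict[str, str]:
    ratings = {dim: "average" for dim in CONSTANT_DIMENSIONS}   # top dimensions held constant
    for dim in VARIED_DIMENSIONS:
        ratings[dim] = "above average" if dim in above_pair else "below average"
    return ratings

proposals = [build_proposal(pair) for pair in ABOVE_AVERAGE_PAIRS]
# Every proposal has exactly two 'above average' ratings and all four patterns differ.
assert all(list(p.values()).count("above average") == 2 for p in proposals)
assert len({tuple(sorted(p.items())) for p in proposals}) == 4
```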
Scales and measurement variables
To obtain a more detailed picture of the effects of anthropomorphic design features and algorithm aver-
sion on the acceptance of AI-based job recommendations, we used multiple literature-based scales. The
perceived human-likeness of the recommendation agent was assessed using the anthropomorphism scale
by Bartneck et al. (2009); two items of the original scale were not included in the survey, as they referred
to human-robot interaction. This resulted in three final items on which the participants’ responses were
assessed using 7-step semantic differentials. In addition, we used the technophobia scale of Sinkovics et al.
(2002), consisting of four items, assessed on a 7-point Likert scale, to control for possible randomization
artifacts regarding the prevalence of technophobia in the experimental groups. We adapted the scale to our
scenario, as recent research suggests that technophobia can have a great impact on the adoption of new tech-
nology (Khasawneh 2018). We further evaluated the general trust participants had in artificial intelligence
regarding job-related decisions by asking them to what extent they trusted the opinion of artificial intelli-
gence when it comes to decisions about their professional future (using a 7-point Likert scale ranging from
“not at all” to “very much”).
Participants
The data was collected using the online participant recruitment service Prolific (Palan and Schitter 2018;
Peer et al. 2017) with an English-speaking sample predominantly from the UK and the USA. We recruited
128 participants. Due to incomplete data, implausible overall duration of the experiment, and inconsistent
answers, the data of 7 participants had to be excluded. A demographic summary of the final participants is
provided in Table 2.
Sample                       AN_AD (N = 30)    AN_¬AD (N = 30)   ¬AN_AD (N = 30)   ¬AN_¬AD (N = 31)
Gender
  Men                        9 (30%)           13 (43.3%)        5 (16.7%)         14 (45.2%)
  Women                      21 (70%)          17 (56.7%)        25 (83.3%)        17 (54.8%)
Mean age (years)             30.1 (SD = 7.17)  32.3 (SD = 5.92)  29.5 (SD = 6.53)  31.8 (SD = 6.95)
Education level
  Primary education          2 (6.7%)          3 (10%)           5 (16.7%)         2 (6.5%)
  Secondary education        7 (23.3%)         6 (20%)           9 (30%)           11 (35.6%)
  Vocational training        1 (3.3%)          1 (3.3%)          0 (0%)            0 (0%)
  University, undergraduate  20 (66.7%)        13 (43.3%)        10 (33.3%)        9 (29%)
  University, postgraduate   0 (0%)            5 (16.7%)         4 (13.3%)         9 (29%)
Employment status
  Employed                   25 (83.3%)        23 (76.7%)        20 (66.7%)        20 (64.5%)
  Unemployed                 1 (3.3%)          2 (6.7%)          3 (10%)           2 (6.5%)
  Self-employed              0 (0%)            2 (6.7%)          3 (10%)           3 (9.7%)
  Homemaker                  2 (6.7%)          2 (6.7%)          1 (3.3%)          3 (9.7%)
  Student                    2 (6.7%)          1 (3.3%)          3 (10%)           3 (9.7%)
Table 2. Descriptive results of key socio-demographic data of the study sample.
Results
Randomization check
To ensure that the random assignment in the survey led to a uniform distribution of demographic criteria in
the treatment groups, randomization checks were conducted for the four demographic indicators reported
in Table 2. While the difference in the gender distribution between the four groups was marginally signif-
icant (χ2(3) = 7.13, p= .068), a subsequent post-hoc test adjusting p-values by the Benjamini–Hochberg
procedure for multiple comparisons (Benjamini and Hochberg 1995) revealed no significant differences be-
tween the subgroups.
The treatment groups did not differ significantly regarding their mean age (F(3, 117) = 1.21, p = .309). While Fisher’s exact test revealed significant differences in the distribution of the education level (p = .020), a descriptive inspection of the results did not indicate any systematic tendencies across the groups. The employ-
ment status distribution did not differ significantly between the groups (p= .839). Regarding the prevalence
of technophobia, the four experimental groups did not differ (M= 4.04, SD = 1.35; F(3, 117) = 0.91, p= .440)
on the technophobia scale by Sinkovics et al. (2002). For general trust in artificial intelligence regarding job-
related decisions, we did also not find significant differences between the groups (M= 4.06, SD = 1.33; F(3,
117) = 0.53, p= .660).
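The reported checks can be reproduced with standard statistical tooling. The following Python sketch shows the omnibus chi-square test for gender, Benjamini-Hochberg-adjusted pairwise post-hoc tests, and the one-way ANOVA for age; the column names ('condition', 'gender', 'age') are hypothetical, and the exact test for the education distribution is omitted because SciPy's fisher_exact covers only 2x2 tables:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def randomization_checks(df: pd.DataFrame) -> None:
    """df: one row per participant with columns 'condition', 'gender', 'age' (hypothetical names)."""
    # Omnibus chi-square test for the gender distribution across the four groups.
    gender_table = pd.crosstab(df["condition"], df["gender"])
    chi2, p, dof, _ = stats.chi2_contingency(gender_table)
    print(f"Gender: chi2({dof}) = {chi2:.2f}, p = {p:.3f}")

    # Pairwise post-hoc chi-square tests, p-values adjusted with Benjamini-Hochberg.
    groups = list(gender_table.index)
    pairs = [(a, b) for i, a in enumerate(groups) for b in groups[i + 1:]]
    raw_p = [stats.chi2_contingency(gender_table.loc[[a, b]])[1] for a, b in pairs]
    _, adjusted_p, _, _ = multipletests(raw_p, method="fdr_bh")
    for (a, b), p_adj in zip(pairs, adjusted_p):
        print(f"{a} vs. {b}: adjusted p = {p_adj:.3f}")

    # One-way ANOVA for mean age across the four groups.
    age_by_group = [g["age"].to_numpy() for _, g in df.groupby("condition")]
    f_stat, p_age = stats.f_oneway(*age_by_group)
    print(f"Age: F = {f_stat:.2f}, p = {p_age:.3f}")
```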
Manipulation check
To assess whether the presentation of the recommendation system was perceived as anthropomorphic, we
used the anthropomorphism scale developed by Bartneck et al. (2009) with a seven-point semantic differ-
ential. To adjust the scale for non-robot interaction and to keep the completion time of the survey to a
reasonable limit, we had included only a subscale of the scale in the survey, excluding two items (i.e., Un-
conscious - Conscious; Moving rigidly - Moving elegantly). The scale showed sufficient internal consistency
(Cronbach’s α = .87). Contrary to our assumptions, the AN groups did not perceive the recommendation system as more anthropomorphic than the ¬AN groups (t(119) = -0.24, p = .810).
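A minimal sketch of this manipulation check, assuming three item columns and a boolean condition marker (hypothetical names), computes Cronbach's α from the item variances and compares the mean anthropomorphism score between the AN and ¬AN groups with an independent-samples t-test:

```python
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a participants x items matrix of scale responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def manipulation_check(df: pd.DataFrame) -> None:
    """df: columns 'anthro_1'..'anthro_3' (semantic differentials) and boolean 'is_an' (hypothetical)."""
    items = df[["anthro_1", "anthro_2", "anthro_3"]]
    print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
    score = items.mean(axis=1)                      # mean anthropomorphism score per participant
    t, p = stats.ttest_ind(score[df["is_an"]], score[~df["is_an"]])
    print(f"AN vs. ¬AN: t = {t:.2f}, p = {p:.3f}")
```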
Hypothesis testing
To test our hypotheses, we used logistic models to assess whether the acceptance of the recommendation
(dependent variable; dichotomized such that a value of 0 means that the recommendation was not chosen
and 1 that the recommendation was chosen) was influenced by algorithmic disclosure and anthropomorphic
design features (independent variables). This method was chosen because it allows the independent examination of the
effects of multiple predictors on a discrete outcome variable (Hosmer et al. 2013). In a first step, we included
the algorithm disclosure variable to assess whether providing information on the process of how the AI
determined the recommendation had an effect on the participants’ decision to accept the recommendation.
The variable was binary coded such that a value of 0 represented no algorithm disclosure and 1 algorithm
disclosure. The logistic regression revealed a significant effect of the algorithm disclosure manipulation on
the acceptance of the recommendation (Nagelkerke’s R² = 0.06; χ²(1) = 5.22, p = .022), see Table 3. Disclosing information on the algorithmic origin of the recommendation significantly reduced the likelihood that
the recommendation was chosen. This result is in line with our first hypothesis.
To investigate the effect of anthropomorphic design (independent variable) on the recommendation acceptance (dependent variable), we extended the logistic model in a second model by a dummy variable indicating whether participants received the recommendation in an anthropomorphic design. The variable took a value
of 0 for no anthropomorphic design and 1 for anthropomorphic design. The logistic regression revealed a
marginally significant effect of the anthropomorphism manipulation on the acceptance of the recommen-
dation (Nagelkerke’s R² = 0.10; χ²(2) = 8.98, p = .011), see Table 3. The second model was marginally better in
predicting the acceptance of the recommendation than the first model (χ2(1) = 3.77, p= .052). Participants
exposed to the recommendation in an anthropomorphic design were more likely to follow the recommen-
dation. This result has to be interpreted with caution, however, as it was only marginally significant. Our
second hypothesis is therefore only partially supported.
In our third model, we further added the trust participants had in AI-based recommendations when it comes
to decisions about their professional future as a moderator of the effect of algorithm disclosure and anthro-
pomorphic design on the acceptance of the recommendation. This was based on the results of prior research
(Moussawi et al. 2020). The logistic regression revealed no significant main effect of general trust in artificial
intelligence and marginally significant interaction effects with algorithm disclosure and anthropomorphic
design on the acceptance of the recommendation (Nagelkerke’s R² = 0.23; χ²(5) = 23.04, p < .001), see Table 3.
When adding general trust and its interaction with the effects of algorithm disclosure and anthropomorphic
design features to the model, however, the effect of anthropomorphic design features on the recommenda-
tion acceptance became non-significant. The third model was significantly better in predicting the acceptance
of the recommendation than the second model (χ2(3) = 14.05, p= .003).
Coefficients                      Estimate (SE)   z-value   p-value   Odds Ratio [95%-CI]
Model 1
  Intercept                       0.48 (0.27)     1.79      .073
  Algorithm disclosure            -0.84 (0.37)    -2.26     .024*     0.43 [0.21; 0.89]
Model 2
  Intercept                       0.13 (0.32)     0.40      .692
  Algorithm disclosure            -0.86 (0.38)    -2.28     .023*     0.42 [0.20; 0.88]
  Anthropomorphic design          0.73 (0.38)     1.92      .055      2.07 [0.99; 4.40]
Model 3
  Intercept                       1.10 (1.21)     0.91      .362
  Algorithm disclosure            -3.71 (1.49)    -2.49     .013*     0.02 [0.00; 0.39]
  Anthropomorphic design          -1.85 (1.47)    -1.26     .207      0.16 [0.01; 2.62]
  Trust                           0.22 (0.28)     -0.77     .439      0.81 [0.45; 1.38]
  Algorithm disclosure × Trust    0.66 (0.34)     -1.93     .053      1.94 [1.01; 3.93]
  Anthropomorphic design × Trust  0.60 (0.34)     1.77      .077      1.82 [0.95; 3.35]
Table 3. Results of the logistic regression models. *p < .05, **p < .01, ***p < .001
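The modeling approach described above (nested logistic regressions, likelihood-ratio comparisons, and Nagelkerke's R²) can be sketched as follows; this is not the authors' original analysis script, and the column names are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

def nagelkerke_r2(model, null_model) -> float:
    """Nagelkerke's R² from the log-likelihoods of a fitted model and the intercept-only model."""
    n = int(model.nobs)
    cox_snell = 1 - np.exp((2 / n) * (null_model.llf - model.llf))
    return cox_snell / (1 - np.exp((2 / n) * null_model.llf))

def fit_models(df: pd.DataFrame):
    """df: columns 'accepted' (0/1), 'disclosure' (0/1), 'anthro' (0/1), 'trust' (1-7), all hypothetical."""
    null = smf.logit("accepted ~ 1", data=df).fit(disp=False)
    m1 = smf.logit("accepted ~ disclosure", data=df).fit(disp=False)
    m2 = smf.logit("accepted ~ disclosure + anthro", data=df).fit(disp=False)
    m3 = smf.logit("accepted ~ disclosure * trust + anthro * trust", data=df).fit(disp=False)
    for label, model in [("Model 1", m1), ("Model 2", m2), ("Model 3", m3)]:
        print(label, f"Nagelkerke R2 = {nagelkerke_r2(model, null):.2f}")
    # Likelihood-ratio test for nested models, e.g., Model 2 vs. Model 1.
    lr = 2 * (m2.llf - m1.llf)
    p = stats.chi2.sf(lr, df=m2.df_model - m1.df_model)
    print(f"Model 2 vs. Model 1: chi2(1) = {lr:.2f}, p = {p:.3f}")
    return m1, m2, m3
```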
To facilitate the interpretation of the results, we calculated the probabilities predicted by the third model
of the logistic regression for the different levels of the independent variables. The results are depicted in
Figure 3. We dichotomized trust by defining low trust as the mean of trust minus the standard deviation
(4.06-1.33 = 2.73) and high trust as the mean of trust plus the standard deviation (4.06+1.33 = 5.39).
[Figure 3 comprises two panels showing the predicted probability of choosing the recommendation (0 to 1): one panel by Algorithm Disclosure (AD vs. ¬AD) and one by Anthropomorphism (AN vs. ¬AN), each with separate lines for low-trust and high-trust participants.]
Figure 3. Results of the third logistic regression modeled as probabilities to choose the recommendation.
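The probabilities plotted in Figure 3 can be derived from the third model by evaluating it on a grid of the two factor levels and the two trust values reported above (a continuation of the hypothetical sketch, not the original analysis code):

```python
import pandas as pd

def predicted_probabilities(m3) -> pd.DataFrame:
    """Evaluate the fitted third model on all factor combinations at low (2.73) and high (5.39) trust."""
    grid = pd.DataFrame(
        [(d, a, t) for d in (0, 1) for a in (0, 1) for t in (2.73, 5.39)],
        columns=["disclosure", "anthro", "trust"],
    )
    grid["p_accept"] = m3.predict(grid)   # predicted probability of choosing the recommendation
    return grid
```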
Discussion, limitations and future research
The adoption of AI-based recommendations in the job-seeking domain is a crucial determinant of
the future success of artificial intelligence in HR. To shed light on the multi-faceted, complex issue of rec-
ommendation acceptance in the job-seeking domain, we conducted an empirical scenario-based vignette
study using an online survey. We investigated the impact of algorithm aversion and the effects of anthro-
pomorphic design features on the acceptance of AI-based job recommendations from a user’s perspective.
Our results highlight that recommendation acceptance in the job-seeking domain cannot be reduced to a
small set of determinants, but is influenced by multiple factors. Our results suggest that algorithm aversion,
triggered by algorithmic disclosure, is an influential factor on recommendation acceptance. In line with our
first hypothesis, we found evidence that disclosing additional information on the algorithmic origin of the
AI-based job recommendation in the treatment groups (AN_AD, ¬AN_AD) led to a significant decrease in the acceptance of the recommendation compared to the groups to which no additional information was disclosed (AN_¬AD, ¬AN_¬AD). With regard to our second hypothesis, we found a marginally significant effect
of anthropomorphic design features on the acceptance of the job recommendation. This effect diminished,
however, once we added the general trust in artificial intelligence in the job domain as a moderator to the
model.
Our final model indicates that algorithmic disclosure has a significant, negative effect on the acceptance of AI-
based recommendations in the job-seeking context, indicating that algorithm aversion is a highly influential
and important factor in this domain. A more detailed analysis, including the marginally significant moder-
ation effect of general trust in artificial intelligence in the job domain, showed that this effect loses strength
with increasing general trust. Among low-trust individuals in the high algorithm disclosure groups (AN_AD, ¬AN_AD), the predicted probability of choosing the recommendation was approximately 20%, compared to roughly 60% among the low-trust individuals in the low algorithm disclosure groups (AN_¬AD, ¬AN_¬AD).
By contrast, high-trust individuals in the corresponding groups accepted the recommendation with a prob-
ability of roughly 50%, up to over 75%, with only minimal differences between the different algorithmic
disclosure groups.
These results suggest that algorithm aversion can indeed hinder the acceptance of AI-based recommenda-
tions in high-stake contexts. Thus, our findings corroborate the findings of prior research on algorithm aver-
sion (Burton et al. 2020; Castelo et al. 2019; Dietvorst et al. 2015) and strengthen the need for additional
research in this field. Our findings regarding the moderating effect of general trust in the algorithm-using
technology contribute to prior research by unveiling an influential factor that might partially drive algorithm
aversion in high-stake decision contexts. Individuals with low trust seem to be more sensitive to algorithm
disclosure and more prone to algorithm aversion. By contrast, for individuals with high trust, disclosing such information does not seem to induce algorithm aversion. To the best of our knowledge, this is the first study
that reports such effects of general trust on algorithm aversion and, therefore, extends the research on trust
in adoption behavior (Komiak and Benbasat 2008, 2006; Moussawi et al. 2020; Qiu and Benbasat 2009;
Wang et al. 2016) and on algorithm aversion. One should keep in mind, however, that the level of general
trust in artificial intelligence could be related to the familiarity with such technology (Komiak and Benbasat
2008). Therefore, it might be the case that disclosing information on the algorithmic origin of the AI-based
job recommendation did not lead to algorithm aversion in the high trust group, as they were already aware of
this relation. Future research should thus investigate this more deeply, and take familiarity and the novelty
of the disclosed information for the users into account.
With regard to our second hypothesis, our final regression did not show a significant main effect of anthro-
pomorphic design features on the acceptance of AI-based job recommendations. A more detailed analysis,
including the marginally significant moderation effect of general trust in artificial intelligence, showed that
the effect of anthropomorphic design features depends on the level of general trust in artificial intelligence
of the user. While for high-trust individuals in the anthropomorphism groups (AN_AD, AN_¬AD) the predicted probability of choosing the recommendation increases to over 75%, compared to around 45% in the no-anthropomorphism groups (¬AN_AD, ¬AN_¬AD), for low-trust individuals the predicted probability of accepting the recommendation in both anthropomorphism groups is roughly 2%-5% below the probability in the no-anthropomorphism groups.
These results suggest that anthropomorphic design features do not necessarily increase the acceptance of
AI-based recommendations in the job-seeking context and are not in line with the majority of findings by
prior research (Moussawi et al. 2020; Pak et al. 2012; Qiu and Benbasat 2009). We conjecture that in high-
stake decision contexts, anthropomorphic design features might not be an effective measure to increase the
acceptance of an AI-based recommendation agent’s suggestions. If individuals are generally suspicious re-
garding the applicability of artificial intelligence in the job context and do not trust it, anthropomorphic
design features do not contribute to an increased acceptance rate. It is conceivable that these individuals
generally do not react to any persuasive approaches in these contexts, as they feel less ambivalence with re-
gard to their rejective stance (Jonas et al. 2000; Zemborain and Johar 2007) and tend to ignore information
that is not in line with their attitude (Rothman et al. 2017). Individuals with high trust and thus less suspi-
cion in the anthropomorphism groups, on the other hand, might actively look for information confirming
their prior attitude towards artificial intelligence in job-seeking (Rothman et al. 2017), such as positively
perceived anthropomorphic design features (Epley et al. 2007; Qiu and Benbasat 2009; Wang et al. 2016).
Further research is needed to investigate the underlying mechanisms driving the moderating effects of trust
in this context.
Our results have multiple implications for academic research and contribute to the ongoing discussion re-
garding possible interventions to increase the acceptance of AI-based job recommendations.
First, prior studies in the research stream of recommender systems generated insights by outlining the
impacts of different cognitive (Moussawi et al. 2020) and affective factors (Komiak and Benbasat 2008),
thereby focusing on the consumer in commercial contexts. With the present study, we extend this research
to the job-seeking context that is characterized by higher stakes involved in the decision. Our model shows
that algorithm aversion and anthropomorphism affect the acceptance of AI-based systems. This emphasizes
that technology acceptance in a high-stake context can be influenced by various factors that have not been
considered in prior research so far.
Second, prior research on recommender systems has primarily investigated factors explaining the accep-
tance of recommendations by systems that rely on conventional information technology (Lu et al. 2015).
With the increasing demand for and prevalence of artificial intelligence (Castelvecchi 2016), it is crucial to
discuss if the factors influencing adoption behavior differ between recommendations based on conventional
technology and AI-based recommendations. Our study is a first step in this direction, as we show that disclos-
ing information on the algorithmic origin of a recommendation can lead to adverse effects on its acceptance
in a context that is characterized by high personal stakes, thereby addressing affective factors. However, the
moderating effects of trust in our study emphasize the need for an integrated view of cognitive and affective
determinants of technology acceptance.
Third, algorithm aversion has received increasing attention in light of the developments in AI-based systems.
It refers to a general tendency of individuals to prefer human forecasts and recommendations over algorithm-
based ones. Our study is among the first to investigate algorithm aversion in the high-stakes context of
job-seeking. By contrast, prior research mainly focused on numeric estimation tasks (Dietvorst et al. 2018;
Logg et al. 2019) or contexts where the consequences of a wrong decision are negligible for the individual
(Yeomans et al. 2019). Our findings suggest that algorithm aversion, elicited by disclosing information on
the algorithmic origin of a recommendation, can be a critical factor inhibiting the acceptance of AI-based
job recommendations in such contexts.
In addition, to the best of our knowledge, our study is the first to investigate the effect of anthropomorphic
design features on the acceptance of AI-based recommendations in the job-seeking context. As these deci-
sions are characterized by much higher stakes than typical decisions in the B2C context in which the use of
anthropomorphic design features has been investigated before (Pak et al. 2012; Qiu and Benbasat 2009),
our study contributes to prior research by testing whether anthropomorphic design features are also an ef-
fective measure in the high-stake context. This contribution is especially important in light of the fact that
high-stake decisions are not yet discussed in the literature on recommender systems. It is reasonable, how-
ever, to assume that due to the advances in artificial intelligence and machine learning, the accuracy and
performance of recommender systems will further increase (Logg et al. 2019). Therefore, such systems will
be increasingly implemented in high-stake contexts, such as medical decision-making or financial decision-
making (e.g., high-volume exchange-traded funds). Consequently, it is important to assess measures that could be used to increase the acceptance of recommendation systems in these domains, as they
have the potential to support individuals in achieving better decision outcomes.
For practitioners, our research has multiple valuable implications regarding the introduction of an AI-based
job recommender system. First, and on a general level, disclosing information on the algorithmic origin
of the recommendation could lead to adverse reactions regarding the acceptance of the recommendation.
Therefore, it might be beneficial not to disclose such information. If the information needs to be disclosed
for transparency reasons or due to privacy guidelines, additional measures to increase general trust should
be taken. One promising approach in this regard could be the inclusion of assurance seals on the recommen-
dation system’s website (Odom et al. 2002; Özpolat et al. 2013). Second, anthropomorphic design features
can have a positive effect on the acceptance of AI-based job recommendations. As they are very easy to
implement and do not involve high costs, it is recommended that practitioners include anthropomorphic
design features to potentially increase the rate of acceptance of AI-based recommendations. Other positive
effects of such anthropomorphizations could be increased loyalty and emotional connection of customers
with the company (Araujo 2018; Guido and Peluso 2015).
Although our research provides valuable results for practice and academia, it comes with some limitations
that future research should try to address. The first limitation is that our sample consisted of 121 individuals
from the USA and the UK. A culturally more diverse sample would increase the external validity of the
study. Hence, in future work, we plan to further test our model in order to evaluate its cross-cultural generalizability.
Second, our findings are solely based on a survey with a hypothetical scenario; thus, the participants’ decision
whether or not to follow the recommendation of the AI-based system did not have actual consequences in
their real lives. In future projects, our aim is to conduct a field study where we plan to implement an AI-
based recommender system in a company and to evaluate user acceptance in real-world decision contexts.
Lastly, to further expand our research, we call on fellow researchers from the IS domain or related domains, such as Human-Computer Interaction, to contribute.
Conclusion
The findings of this study contribute to both academia and practice. Regarding the research question, we
show that disclosing detailed information on the algorithmic origin of an AI-based recommendation can lead
to algorithm aversion in a high-stake context like job search. As a result, individuals are more likely to reject
the recommendation. At the same time, the results of our study indicate that the use of anthropomorphic
design features to communicate an AI-based job recommendation can increase user acceptance.
Acknowledgements
This project is funded by the Adecco Stiftung “New Ways for Work and Social Life” and the Bavarian State
Ministry of Science and the Arts, coordinated by the Bavarian Research Institute for Digital Transformation
(bidt).
References
Adomavicius, G., Bockstedt, J. C., Curley, S. P., and Zhang, J. 2019. “Reducing Recommender System Biases:
An Investigation of Rating Display Designs,” MIS Quarterly: Management Information Systems (43:4),
pp. 1321–1341.
Aggarwal, P., and McGill, A. L. 2007. “Is That Car Smiling at Me? Schema Congruity as a Basis for Evaluating
Anthropomorphized Products,” Journal of Consumer Research (34:4), pp. 468–479.
Araujo, T. 2018. “Living up to the Chatbot Hype: The Influence of Anthropomorphic Design Cues and Com-
municative Agency Framing on Conversational Agent and Company Perceptions,” Computers in Human
Behavior (85), pp. 183–189.
Bartneck, C., Kulić, D., Croft, E., and Zoghbi, S. 2009. “Measurement Instruments for the Anthropomor-
phism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots,” International Jour-
nal of Social Robotics (1:1), pp. 71–81.
Benjamini, Y., and Hochberg, Y. 1995. “Controlling the False Discovery Rate: A Practical and Powerful Ap-
proach to Multiple Testing,” Journal of the Royal Statistical Society: Series B (Methodological) (57:1),
pp. 289–300.
Benlian, A., Klumpe, J., and Hinz, O. 2019. “Mitigating the Intrusive Effects of Smart Home Assistants by
Using Anthropomorphic Design Features: A Multimethod Investigation,” Information Systems Journal, pp. 1–33.
Berg, J. M., Grant, A. M., and Johnson, V. 2010. “When Callings are Calling: Crafting Work and Leisure in
Pursuit of Unanswered Occupational Callings,” Organization Science (21:5), pp. 973–994.
Bodapati, A. V. 2008. “Recommendation Systems with Purchase Data,” Journal of Marketing Research
(45:1), pp. 77–93.
Bowling, N. A., Eschleman, K. J., and Wang, Q. 2010. “A Meta-analytic Examination of the Relationship
Between Job Satisfaction and Subjective Well-Being,” Journal of Occupational and Organizational Psy-
chology (83:4), pp. 915–934.
Burton, J. W., Stein, M. K., and Jensen, T. B. 2020. “A Systematic Review of Algorithm Aversion in Aug-
mented Decision Making,” Journal of Behavioral Decision Making (33:2), pp. 220–239.
Castelo, N., Bos, M. W., and Lehmann, D. R. 2019. “Task-Dependent Algorithm Aversion,” Journal of Mar-
keting Research (56:5), pp. 809–825.
Castelvecchi, D. 2016. “Can We Open the Black Box of AI?” Nature (538:7623), pp. 20–23.
Cyr, D., Hassanein, K., Head, M., and Ivanov, A. 2007. “The Role of Social Presence in Establishing Loyalty
in E-Service Environments,” Interacting with Computers (19:1), pp. 43–56.
Cyr, D., Head, M. M., Larios, H., and Pan, B. 2009. “Exploring Human Images in Website Design: A Multi-
Method Approach,” MIS Quarterly (33:3), pp. 539–566.
Dana, J., and Thomas, R. 2006. “In Defense of Clinical Judgment and Mechanical Prediction,” Journal
of Behavioral Decision Making (19:5), pp. 413–428.
Dawes, R. M. 1979. “The Robust Beauty of Improper Linear Models in Decision Making,” American Psychol-
ogist (34:7), pp. 571–582.
Dawes, R. M., Faust, D., and Meehl, P. E. 1989. “Clinical Versus Actuarial Judgment,” Science (243:4899),
pp. 1668–1674.
Diab, D. L., Pui, S. Y., Yankelevich, M., and Highhouse, S. 2011. “Lay Perceptions of Selection Decision Aids
in US and Non-US Samples,” International Journal of Selection and Assessment (19:2), pp. 209–216.
Dietvorst, B. J., Simmons, J. P., and Massey, C. 2015. “Algorithm Aversion: People Erroneously Avoid Algo-
rithms After Seeing Them Err.” Journal of Experimental Psychology: General (144:1), pp. 114–126.
Dietvorst, B. J., Simmons, J. P., and Massey, C. 2018. “Overcoming Algorithm Aversion: People Will Use
Imperfect Algorithms If They Can (Even Slightly) Modify Them,” Management Science (64:3), pp. 1155–
1170.
Dijksterhuis, A., Bos, M. W., Nordgren, L. F., and van Baaren, R. B. 2006. “On Making the Right Choice:
The Deliberation-Without-Attention Effect,” Science (311:5763), pp. 1005–1007.
Duan, Y., Edwards, J. S., and Dwivedi, Y. K. 2019. “Artificial Intelligence for Decision Making in the Era of
Big Data – Evolution, Challenges and Research Agenda,” International Journal of Information Manage-
ment (48), pp. 63–71.
Eastwood, J., Snook, B., and Luther, K. 2012. “What People Want From Their Professionals: Attitudes To-
ward Decision-making Strategies,” Journal of Behavioral Decision Making (25:5), pp. 458–468.
Einhorn, H. J. 1986. “Accepting Error to Make Less Error,” Journal of Personality Assessment (50:3), pp.
387–395.
Epley, N., Waytz, A., and Cacioppo, J. T. 2007. “On Seeing Human: A Three-Factor Theory of Anthropomor-
phism.” Psychological Review (114:4), pp. 864–886.
Fehr-Duda, H., Bruhin, A., Epper, T., and Schubert, R. 2010. “Rationality on the Rise: Why Relative Risk
Aversion Increases with Stake Size,” Journal of Risk and Uncertainty (40:2), pp. 147–180.
Finch, J. 1987. “The Vignette Technique in Survey Research,” Sociology (21:1), pp. 105–114.
Gomez-Uribe, C. A., and Hunt, N. 2015. “The Netflix Recommender System: Algorithms, Business Value,
and Innovation,” ACM Transactions on Management Information Systems (6:4).
Gong, L. 2008. “How Social is Social Responses to Computers? The Function of the Degree of Anthropomor-
phism in Computer Representations,” Computers in Human Behavior (24:4), pp. 1494–1509.
Gruber, D., Aune, A., and Koutstaal, W. 2018. “Can Semi-Anthropomorphism Influence Trust and Compli-
ance?” in Proceedings of the Technology, Mind, and Society (TechMindSociety ’18), New York, New
York, USA: ACM Press.
Gruber, D. S. 2018. The Effects of Mid-range Visual Anthropomorphism on Human Trust and Performance
Using a Navigation-based Automated Decision Aid, Ph.D. thesis, University of Minnesota.
Guido, G., and Peluso, A. M. 2015. “Brand Anthropomorphism: Conceptualization, Measurement, and Im-
pact on Brand Personality and Loyalty,” Journal of Brand Management (22:1), pp. 1–19.
Hosmer, D. W., Lemeshow, S., and Sturdivant, R. X. 2013. Applied Logistic Regression, New Jersey, USA:
John Wiley & Sons, 3rd ed.
Hu, R., and Pu, P. 2009. “Acceptance Issues of Personality-Based Recommender Systems,” RecSys’09 – Proceedings of the 3rd ACM Conference on Recommender Systems, pp. 221–224.
Jonas, K., Broemer, P., and Diehl, M. 2000. “Attitudinal Ambivalence,” European Review of Social Psychol-
ogy (11:1), pp. 35–74.
Khasawneh, O. Y. 2018. “Technophobia without Boarders: The Influence of Technophobia and Emotional
Intelligence on Technology Acceptance and the Moderating Influence of Organizational Climate,” Com-
puters in Human Behavior (88), pp. 210–218.
Komiak, S., and Benbasat, I. 2008. “A Two-Process View of Trust and Distrust Building in Recommendation
Agents: A Process-Tracing Study,” Journal of the Association for Information Systems (9:12), pp. 727–
747.
Komiak, S. Y. X., and Benbasat, I. 2006. “The Effects of Personalization and Familiarity on Trust and Adop-
tion of Recommendation Agents,” MIS Quarterly (30:4), pp. 941–960.
Laumer, S., Gubler, F., Maier, C., and Weitzel, T. 2018. “Job Seekers’ Acceptance of Job Recommender
Systems: Results of an Empirical Study,” Proceedings of the 51st Hawaii International Conference on
System Sciences, pp. 3914–3923.
LeCun, Y., Bengio, Y., and Hinton, G. 2015. “Deep learning,” Nature (521:7553), pp. 436–444.
Logg, J. M., Minson, J. A., and Moore, D. A. 2019. “Algorithm Appreciation: People Prefer Algorithmic to
Human Judgment,” Organizational Behavior and Human Decision Processes (151), pp. 90–103.
Longoni, C., Bonezzi, A., and Morewedge, C. K. 2019. “Resistance to Medical Artificial Intelligence,” Journal
of Consumer Research (46:4), pp. 629–650.
Lu, J., Wu, D., Mao, M., Wang, W., and Zhang, G. 2015. “Recommender System Application Developments:
A Survey,” Decision Support Systems (74), pp. 12–32.
Malinowski, J., Wendt, O., Keim, T., and Weitzel, T. 2006. “Matching People and Jobs: A Bilateral Rec-
ommendation Approach,” Proceedings of the 39th Hawaii International Conference on System Sciences
(HICSS), pp. 1–9.
Meehl, P. E. 1954. Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evi-
dence., Minneapolis: University of Minnesota Press.
Mori, M., MacDorman, K., and Kageki, N. 2012. “The Uncanny Valley [From the Field],” IEEE Robotics &
Automation Magazine (19:2), pp. 98–100.
Mourey, J. A., Olson, J. G., and Yoon, C. 2017. “Products as Pals: Engaging with Anthropomorphic Products
Mitigates the Effects of Social Exclusion,” Journal of Consumer Research (44:2), pp. 414–431.
Moussawi, S., Koufaris, M., and Benbunan-Fich, R. 2020. “How Perceptions of Intelligence and Anthropo-
morphism Affect Adoption of Personal Intelligent Agents,” Electronic Markets.
Odom, M. D., Kumar, A., and Saunders, L. 2002. “Web Assurance Seals: How and Why They Influence
Consumers’ Decisions,” Journal of Information Systems (16:2), pp. 231–250.
Özpolat, K., Gao, G. G., Jank, W., and Viswanathan, S. 2013. “The Value of Third-Party Assurance Seals in
Online Retailing: An Empirical Investigation,” Information Systems Research (24:4), pp. 1100–1111.
Pak, R., Fink, N., Price, M., Bass, B., and Sturre, L. 2012. “Decision Support Aids with Anthropomorphic
Characteristics Influence Trust and Performance in Younger and Older Adults,” Ergonomics (55:9), pp.
1059–1072.
Palan, S., and Schitter, C. 2018. “Prolific.ac—A Subject Pool for Online Experiments,” Journal of Behavioral
and Experimental Finance (17), pp. 22–27.
Park, D. H., Kim, H. K., Choi, I. Y., and Kim, J. K. 2012. “A Literature Review and Classification of Recom-
mender Systems Research,” Expert Systems with Applications (39:11), pp. 10,059–10,072.
Peer, E., Brandimarte, L., Samat, S., and Acquisti, A. 2017. “Beyond the Turk: Alternative Platforms for
Crowdsourcing Behavioral Research,” Journal of Experimental Social Psychology (70), pp. 153–163.
Pfeuffer, N., Benlian, A., Gimpel, H., and Hinz, O. 2019. “Anthropomorphic Information Systems,” Business
& Information Systems Engineering (61:4), pp. 523–533.
Promberger, M., and Baron, J. 2006. “Do Patients Trust Computers?” Journal of Behavioral Decision Mak-
ing (19:5), pp. 455–468.
Pryce-Jones, J. 2010. Happiness at Work: Maximizing Your Psychological Capital for Success., Oxford,
UK: Wiley-Blackwell.
Qiu, L., and Benbasat, I. 2009. “Evaluating Anthropomorphic Product Recommendation Agents: A Social
Relationship Perspective to Designing Information Systems,” Journal of Management Information Sys-
tems (25:4), pp. 145–182.
Resnick, P., and Varian, H. R. 1997. “Recommender Systems,” Communications of the ACM
(40:3), pp. 56–58.
Riegelsberger, J., Sasse, M. A., and McCarthy, J. D. 2003. “Shiny Happy People Building Trust?” in Pro-
ceedings of the Conference on Human Factors in Computing Systems (CHI 2003), New York, New York,
USA: ACM Press.
Rothman, N. B., Pratt, M. G., Rees, L., and Vogus, T. J. 2017. “Understanding the Dual Nature of Ambiva-
lence: Why and When Ambivalence Leads to Good and Bad Outcomes,” Academy of Management Annals
(11:1), pp. 33–72.
Schultze, U., and Brooks, J. A. M. 2019. “An Interactional View of Social Presence: Making the Virtual Other
“Real”,” Information Systems Journal (29:3), pp. 707–737.
Sharma, A., Hofman, J. M., and Watts, D. J. 2015. “Estimating the Causal Impact of Recommendation Sys-
tems from Observational Data,” Proceedings of the 16th ACM Conference on Economics and Computation
(EC’15), pp. 453–470.
Short, J., Williams, E., and Christie, B. 1976. The Social Psychology of Telecommunications, London: Wiley.
Sinkovics, R. R., Stöttinger, B., Schlegelmilch, B. B., and Ram, S. 2002. “Reluctance to Use Technology-
related Products: Development of a Technophobia Scale,” Thunderbird International Business Review
(44:4), pp. 477–494.
Steinbrück, U., Schaumburg, H., Duda, S., and Krüger, T. 2002. “A Picture Says More Than a Thousand
Words,” in Proceedings of the Conference on Human Factors in Computing Systems (CHI 2002), New
York, New York, USA: ACM Press.
van Esch, P., Black, J. S., and Ferolie, J. 2019. “Marketing AI Recruitment: The Next Phase in Job Application
and Selection,” Computers in Human Behavior (90), pp. 215–222.
Wanberg, C., Zhu, J., and Van Hooft, E. 2010. “The Job Search Grind: Perceived Progress, Self-Reactions,
and Self-Regulation of Search Effort,” Academy of Management Journal (53:4), pp. 788–807.
Wang, W., Qiu, L., Kim, D., and Benbasat, I. 2016. “Effects of Rational and Social Appeals of Online Recom-
mendation Agents on Cognition- and Affect-Based Trust,” Decision Support Systems (86), pp. 48–60.
Yeomans, M., Shah, A., Mullainathan, S., and Kleinberg, J. 2019. “Making Sense of Recommendations,”
Journal of Behavioral Decision Making (32:4), pp. 403–414.
Zemborain, M., and Johar, G. 2007. “Attitudinal Ambivalence and Openness to Persuasion: A Framework
for Interpersonal Influence,” Journal of Consumer Research (33:4), pp. 506–514.