Acceptance conditions of algorithmic decision support in management
Kiram Iqbal
Ruhr-Universität Bochum
Junior Management Science 8(4) (2023) 887-925
DOI: https://doi.org/10.5282/jums/v8i4pp887-925
Abstract
This thesis explores the acceptance of decision-aiding technologies in management, which is a challenging component in their
use. To address the lack of research on algorithmic decision support at the managerial level, the thesis conducted a vignette
study with two scenarios, varying the degree of anthropomorphizing features in the system interface. Results from the study,
which included 281 participants randomly assigned to one of the scenarios, showed that the presence of anthropomorphized
features did not significantly affect acceptance. However, results showed that trust in the system was a crucial factor for
acceptance and that trust was influenced by users’ understanding of the system. Participants blindly trusted the system when
it was anthropomorphized, but the study emphasized that system design should not focus on the benefits of blind trust. Instead,
comprehensibility of the system results is more effective in creating acceptance. This thesis provided practical implications
for managers on system design and proposed a structural model to fill a research gap on acceptance at the managerial level.
Overall, the findings may assist companies in developing decision support systems that are more acceptable to users.
Keywords: Decision support systems; Algorithmic management; Artificial intelligence; Anthropomorphizing; Technology
acceptance.
1. Introduction and problem area
Recent advances in technology enable support for businesses in the context of problem-solving (J. R. Evans & Lindner, 2012). In practice, however, the usage of decision-aiding systems is low. It is therefore necessary to research their acceptance conditions. This introduction outlines the practical and theoretical necessity of deriving acceptance conditions for research. Furthermore, the structure of the thesis is outlined.
1.1. Objective and research question
The scientific field of business analytics and business intelligence has gained high importance in strategic management. In this context, it is important to differentiate between these terms. Business analytics is defined as a process in which data is converted into actions through analysis in the context of organizational problem-solving or decision-making (J. R. Evans & Lindner, 2012). Business intelligence is defined as the use of various technologies, such as information technology, to help managers gain insights about their business and improve decision-making (Gluchowski, 2016). Since analytic procedures are based on algorithms, the term business analytics can be used as a synonym for algorithmic decision support.
Despite the rise of opportunities for algorithmic decision support, the accompanying challenges should not be neglected. These include legal issues such as ownership and privacy of data, as well as technical obstacles such as the analysis of complex data and the scaling of algorithms (Mishra & Silakari, 2012).
One of the most challenging components in the use of algorithmic decision support in business is the acceptance of these systems by users. From a user's perspective, one major problem is that precise algorithms generate a perception of authoritative correctness, so that human beings can feel inferior to algorithms. In particular, the introduction of deep learning algorithms in artificial intelligence (Linardatos, Papastefanopoulos, & Kotsiantis, 2020) and the scaling of algorithms (Mishra & Silakari, 2012) lead to higher accuracy and precision, which in turn reinforces this feeling of inferiority. In this regard, it is necessary to do further research on the acceptance conditions of algorithmic decision support.
Therefore, this paper analyzes the following research question: Which conditions lead to an acceptance of algorithmic decision support in management?
1.2. Theoretical and practical research gap
To answer the research question, it is necessary to analyze the state of the art in research and to elucidate the research gap. Various studies investigate the acceptance of artificial intelligence-based technologies. Hastenteufel and Ganster (2021) apply this topic to the digital transformation in banking. To this end, they use the technology acceptance model by Davis, Bagozzi, and Warshaw (1989). Hastenteufel and Ganster (2021) identify trustworthiness, perceived usability and social influence as acceptance conditions for algorithmic decision support. Gersch et al. (2021) investigate the challenges, particularly regarding trust, in collaborative service delivery with artificial intelligence in the field of radiology. To this end, they conduct interviews with various stakeholders in radiology. They identify trust as a means to cope with uncertainties. Furthermore, they find that cognitive trust is built during the first contact with the user; with repeated experience, the user develops affective trust. Understandability and comprehensibility are important for users. Further challenges are the change of one's own position in the workplace due to the introduction of support through artificial intelligence, as well as new duties and prerequisites in the design of the socio-technical system. Therefore, explainable artificial intelligence should take into account the perspectives of different stakeholders. Rathje, Laschet, and Kenning (2021) investigate trust in banking. To this end, they develop their own research model based on the models by Mayer, Davis, and Schoorman (1995), Gefen, Karahanna, and Straub (2003) and Davis (1989). They conducted a survey with 119 participants with a high affinity for technology. Rathje et al. (2021) identify that trust is related to the intention to use the technology. Pütz, Düppre, Roth, and Weiss (2021) investigate the acceptance of voice and chat bots. They use the technology acceptance model (TAM) of Davis (1989) and the extended versions of Venkatesh and Davis (2000) and Venkatesh and Bala (2008) to analyze the acceptance of this technology. The approach used by Pütz et al. (2021) is literature-based. They identify a relation between perceived usability and perceived user-friendliness, as well as a relation between perceived user-friendliness and the intention to use the technology.
Scheuer (2020) develops an acceptance model for the use of artificial intelligence. The model developed by Scheuer is called the KIAM model, an extension of the TAM that is considered the Artificial Intelligence Acceptance Model; KI is the German abbreviation for AI. The AI acceptance model (KIAM) is a holistic acceptance model that addresses the characteristics of the theoretical properties of an AI compared to a classical computer system. Scheuer (2020) assumes that an AI is accessible via a technology (e.g., a smartphone application) enriched with Narrow AI services (e.g., a chatbot integration, Speech-To-Text, or Text-To-Speech), through which a user can interact with the AI in natural language. Based on this, two essential components emerge: first, the classical technology in the form of a software application, and second, the dialog component for interacting with the AI in the background. For the classical technology and the investigation of its acceptance, Scheuer uses the existing TAM 3 model by Venkatesh and Bala (2008). However, for the dialog component and the resulting interaction between the AI and the user, Scheuer (2020) differentiates to what extent the user accepts the AI as a personality or even as a complete person. For this, he considers that psychological models for measuring sympathy and affection apply, as personality acceptance takes precedence over pure technology acceptance. In this regard, Scheuer highlights that if the filter of the perception of the system as a personality is taken into account, an AI is recognized as a personality. This relationship with the technology can be described as interpersonal acceptance. According to interpersonal acceptance-rejection theory (IPART) (Rohner & Khaleque, 2002), interpersonal acceptance is generated by warmth and affection in the relationship and is based on sympathy. Sympathy, in turn, depends on reciprocity in communicative behavior and on the sameness of character traits. Reciprocity of behavior is in turn influenced by a perceived and radiated attractiveness of and to the other person and by a positive external perception. Interpersonal acceptance in decision support is a new component for analyzing acceptance conditions. Therefore, this thesis considers interpersonal acceptance for deriving acceptance conditions. Due to a lack of research findings on algorithmic decision support at the managerial level, this thesis aims to identify acceptance conditions in order to contribute to research and practice. This section aimed to emphasize the research gap and to outline what has already been covered in the academic literature. Summing up, the section shows that there is a need to investigate the conditions for accepting algorithmic decision support systems from a managerial perspective.
1.3. Outline of the thesis
This thesis aims to answer the following research ques-
tion: which conditions lead to an acceptance of algorithmic
decision support in management? In order to answer the re-
search question and derive the conditions that lead to an
acceptance of algorithmic decision support in management,
it is necessary to provide a better understanding of the theo-
retical foundation regarding algorithmic decision support in
management and explain how this takes place in practice.
This will be presented in section two where the relevance of
algorithmic decision support is outlined. Hereby, the advan-
tages of the integration of business analytics into business are
examined. Necessary technological foundations are given in
order to understand the underlying technology behind algo-
rithmic decision support and understand the rapid develop-
ment in performance of computing architecture.
Furthermore, acceptance conditions are derived from the literature. First, theories explaining increased technology usage are examined. In addition, the term acceptance plays an important role in the context of the research question, as the conditions that lead to an acceptance of algorithmic decision support in management are investigated. To further elaborate on the role of acceptance from a theoretical point of view, different acceptance models from the literature are presented. Findings from the literature on non-managerial levels are used to derive hypotheses for acceptance conditions.
Afterward, a structural equation model will be derived
based on the thoughts of the TAM for conducting a quantita-
tive study (vignette study) to provide empirical evidence to
answer the research question. The target group for the em-
pirical study will be managers and students in future man-
agement positions as the research question focuses on the
acceptance of algorithmic decision support in management.
The items are derived from Scheuer (2020), who introduced the KIAM model, which contains the TAM of Venkatesh and Bala (2008). The items are used in a vignette study (Wason,
Polonsky, & Hyman,2002). The results are analyzed empiri-
cally and descriptive statistics are provided.
Before estimating the structural equation model, the
quality indicators for the measurement models and struc-
tural models are examined.
In the next section, the survey data is analyzed by esti-
mating a structural equation model. The results of the anal-
ysis are discussed in a further section and contextualized with findings in the literature. This section emphasizes the interpretation of the results, where the quantitative results are translated into qualitative insights and reflected against the theoretical foundations. In addition, the findings are compared to the state of the art in the literature. Afterward, the theoretical and practical implica-
tions are presented along with the limitations of the study.
The conclusion summarizes the findings of the thesis.
2. Understanding acceptance of algorithmic decision
support
In order to answer the research question, it is necessary
to outline theoretical foundations. The following section will
emphasize the importance of algorithmic decision support for
strategic management. First, the relevance of algorithmic decision support is derived on a general level. Then, algorithmic decision support is applied to the business context, where the advantages of applying this technology are outlined. Afterward, the underlying technological components and related technologies are addressed to provide a sufficient technological foundation.
2.1. Relevance of algorithmic decision support
In order to understand the relevance of algorithmic deci-
sion support, it is important to understand what decisions are
and when they occur. According to Mallach (1994), decisions
are part of the problem-solving process and are defined as a
reasoned choice between available alternatives. The litera-
ture identifies two types of decision-making processes. The
intuitive decision-making approach and the rational decision-
making approach (Alvarez, Barney, & Young,2010). These
approaches are based on the two types of cognitive processes
of Stanovich and West (2000) and are defined as System 1
(based on intuition) and System 2 (based on reasoning). An
intuitive decision-making approach is defined as a decision
based on biases and heuristics (Alvarez et al.,2010). Indi-
viduals tend to use various kinds of heuristics in judgmental
decisions (Tversky & Kahneman,1974).
Managers tend more toward the intuitive decision-
making approach than the rational decision-making ap-
proach (Anderson, 2015). Anderson (2015) identified that, of 1,135 senior executives surveyed, only 29% base their decisions on data and analysis, while 30% use their intuition or experience and 28% use the advice or experience of others as a source of decision. The majority of the surveyed managers thus use availability heuristics to make decisions, which implies that most managers tend to use the intuitive decision-making approach. The use of heuristics and biases may lead to efficient decision-making or to decreased decision quality. Various studies show that the occurrence of biases lowers the quality of decisions (Camerer & Lovallo, 1999; Carr & Blettner, 2010; Everett & Fairchild, 2015; Forbes, 2005; Kahneman & Tversky, 1996; Koellinger, Minniti, & Schade, 2007; Hayward, Forster, Sarasvathy, & Fredrickson, 2010). According to Carr and Blettner (2010), especially the quality of hot decisions¹ is strongly related to the success or survival of companies. This literature shows that wrong decisions resulting from an intuitive decision-making approach can lead to the failure of a company. On the other hand, the advantage of intuitive decision-making is that it
hand, the advantage of intuitive decision-making is that it
may be faster than rational decision-making. Intuitive de-
cision making is based on System 1 which is faster than
System 2 (Kahneman,2003). The rational decision-making
approach is based on System 2.
In the literature, there is no mutual agreement on an exact description of the rational decision-making process. Bazerman and Moore (2012) specify the rational decision-making approach as a rational model of decision-making that assumes people follow a certain process. They segment the rational decision-making process into six phases: (1) perfectly define the problem, (2) identify all criteria, (3) accurately weigh all of the criteria according to preferences, (4) know all relevant alternatives, (5) accurately assess each alternative based on each criterion, and (6) accurately calculate and choose the alternative with the highest perceived value (Bazerman & Moore, 2012). The main problem with the rational decision-making approach is that human beings do not have complete information (Biswas, 2015).
The Prospect Theory addresses the problem of bounded
rationality and gives the advice to use biases and heuristics
when rational decision-making is not applicable (Kahneman, Slovic, & Tversky, 1982). Despite incomplete information, a manager may use the rational decision-making approach for the problem-solving process. It is tautological to imply that decisions based on incomplete information lead to a decreased decision quality, because the use of incomplete information is referred to as the availability heuristic. The effects of heuristics and biases on decision quality are mentioned above.

¹ Hot decisions are defined as decisions that are critical for a company's success (Janis & Mann, 1977).
To overcome this vicious cycle, the literature suggests different kinds of decision aids. Decision support systems (DSS) are a particular technological form of decision aid. First, DSS help decision-makers by giving them more information and extending their decision-relevant knowledge (Mallach, 1994). Referring to the previous thoughts, extended information would increase decision quality. Huber (1990) identifies that managers using computer-assisted decision aiding make better decisions. Mcafee, Brynjolfsson, Davenport, Patil, and Barton (2012) consider data-driven decisions better than intuitive decisions because they are based on evidence. Despite the dynamic development of technology, computer-aided decision support is not new; in fact, it is more than 50 years old. The first DSS application was built in 1970 (Watson & Wixom, 2007). The usage of DSS has various advantages.
Carlson (1977) identifies that DSS can be used in all decision-making phases. DSS can help to improve the rational decision-making process by partially reducing the previously incomplete information. Moreover, the past 50 years have led to an increase in computing power by a factor of approximately 67.41 million² according to Moore's law (Moore, 1965). An example illustrates this increase in computing power: assuming no change in algorithms, operations that needed approximately 2.13 years of calculation to give decision aid in 1970 can now be processed within one second. Considering the rise of new and better algorithms, which differ in performance since they are evaluated by runtime (Güting & Dieker, 1992; Mcafee et al., 2012), the performance of algorithmic decision support has increased further. In fact, new algorithmic technologies like artificial intelligence, big data analytics, neural networks, etc., leverage the performance of decision support systems. This increase in the performance of decision support systems may theoretically lead to an extensive improvement of the rational decision-making process by reducing time and incomplete information. At the practical level, the necessary data for information processing must be available, since information is processed out of data by analytics. The analysis of data to support decision-making is considered business analytics (Shanks & Bekmamedova, 2012). Besides supporting decisions, business analytics has a wide range of impacts on business. Therefore, it is necessary to understand the impact of algorithmic decision support on business and the underlying technologies of algorithmic decision support.
² Meaning that computing power doubles every second year at constant transistor costs. The necessary mathematical operation is 2^26: 52 years have passed, and dividing these years by two results in the exponent 26.
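The arithmetic behind footnote 2 and the 2.13-year example can be written out explicitly. The following is a small reconstruction of the calculation implied above (the exact factor depends on rounding):

    \[
      \frac{52\ \text{years}}{2\ \text{years per doubling}} = 26
      \quad\Rightarrow\quad
      2^{26} \approx 6.71 \times 10^{7},
    \]
    \[
      2^{26}\,\text{s} \;\approx\; \frac{6.71 \times 10^{7}\,\text{s}}{3.16 \times 10^{7}\,\text{s/year}} \;\approx\; 2.13\ \text{years}.
    \]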
2.1.1. Advantage of business analytics in management
In order to understand the impact of business analytics
on management, it is necessary to understand the role and
tasks of management.
Management is defined as leadership in the efficient, in-
formed, purposeful and planned conduct of complex orga-
nized activity (Andrews,1980). The activity is characterized
by high complexity and the desirability to increase the intu-
itive competence of the executing manager. Andrews (1980)
suggests the need for a unitary concept for reducing the com-
plexity of the manager’s job and identifies strategy as a pos-
sible solution to reduce complexity. Therefore it is important
to distinguish between operational and strategic activities.
According to Porter (1996), operational activities are about
performing similar activities. They differ only if they are per-
formed in a more efficient way than rivals. Porter (1996)
defines strategy as the creation of a unique and valuable po-
sition, involving various sets of actions. Therefore Andrews
(1980) delivers the approach of a schematic development of
an economic strategy. According to Andrews (1980), it is
necessary to identify external opportunities and risks and get
insights into the corporate capabilities and resources in terms
of strengths and weaknesses and consider all combinations
of internal and external analysis to evaluate and determine
the best match for opportunity and resources. In the end, a
choice is derived which is called an economic strategy. This
schematic development of an economic strategy is relevant
in theory and practice because the SWOT-Analysis is based
on this scheme (Kotler, Berger, & Bickhoff,2010). Andrews’s
(1980) approach shows that strategy is all about the evaluation and selection of choices, similar to the definition of decision-making. Porter (1996) confirms that strategy is the
deliberate disregard of other alternatives by purposefully lim-
iting what a company should do. Strategic management can
be considered as the reasoned choice or decision between
the combination of strategies from the internal and external
analysis. As mentioned before there are two decision-making
approaches.
The highest-valued companies in the world can be considered successful in competition based on this financial indicator. The top five companies with the highest valuation in May 2022 are Apple, Saudi Aramco, Microsoft, Alphabet and Amazon (Companiesmarketcap, 2022). Except for Saudi Aramco, these highest-valued companies were able to establish their market position through the use of algorithmic support, explicitly through the use of artificial intelligence (Rainsberger, 2021). Rainsberger (2021) shows four dimensions in which algorithmic aid (artificial intelligence) revolutionizes business activities: strategy, performance, effectiveness and competence. In the following, the wide range of impacts of business analytics is outlined. Especially strategic management is affected by business analytics.
The assumption that an alternative future can be derived
from certain past events (Luhmann,1990) is essential for
algorithmic aid, since analytics is based on historical data (descriptive analytics), estimates future outcomes (predictive analytics) and determines actions for optimizing business outcomes (prescriptive analytics) (Apté, Dietrich, & Fleming, 2012). Descriptive ana-
lytics enable organizations to calibrate opportunities by pro-
viding insights into what happened previously in their inter-
nal and external environment (van Rijmenam, Erekhinskaya,
Schweitzer, & Williams,2019). Anticipating a possible future
leads to a competitive advantage (Koch,2015). Côrte-Real,
Oliveira, and Ruivo (2017) specify that algorithmic aid al-
lows effective internal and external knowledge management
enhancing organizational agility. Côrte-Real et al. (2017) ad-
dress the scheme of economic strategy by Andrews (1980)
for sensing opportunities and threats and seizing possible
chances.
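As a purely illustrative sketch of the three analytics levels mentioned above, the following Python snippet contrasts descriptive, predictive and prescriptive analytics on hypothetical sales data; the column names, the simple trend model and the assumed discount effects are illustrative assumptions, not part of the cited studies.

    import numpy as np
    import pandas as pd

    # Hypothetical monthly sales data (column names are illustrative only).
    sales = pd.DataFrame({
        "month": np.arange(1, 25),
        "revenue": 100 + 5 * np.arange(1, 25) + np.random.normal(0, 10, 24),
    })

    # Descriptive analytics: what happened previously?
    print("average monthly revenue:", sales["revenue"].mean())

    # Predictive analytics: estimate a future outcome with a simple linear trend.
    slope, intercept = np.polyfit(sales["month"], sales["revenue"], deg=1)
    forecast = slope * 25 + intercept
    print("forecast for month 25:", round(forecast, 1))

    # Prescriptive analytics: choose the action with the best predicted outcome
    # (the revenue effect of each hypothetical action is assumed, not estimated).
    actions = {"no_discount": 1.00, "small_discount": 1.05, "large_discount": 0.95}
    best_action = max(actions, key=lambda a: forecast * actions[a])
    print("recommended action:", best_action)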
The implementation of algorithmic aid in the internal and
external analysis of a company can gain insights into the in-
ternal processes and external events (Benaben et al.,2019)
with the possibility to analyze this data and make predictions
of future internal processes and external events. Consider-
ing all combinations of internal and external analysis, a more precise evaluation and determination of the best match between opportunities and resources is possible. The literature sug-
gests that algorithmic aid (predictive analytics) leads to bet-
ter decision-making by improving business value and com-
petitive performance (LaValle, Lesser, Shockley, Hopkins, &
Kruschwitz,2011;Shanks & Bekmamedova,2012). A pos-
sible explanation for this relation is that predictive analyt-
ics helps companies to remain competitive by anticipating
changing environments and adapting to these changes (Ha-
jkowicz et al.,2016).
The distinction between strategic and operational activi-
ties was outlined. We showed that business analytics can en-
hance strategic activities. Other dimensions of Rainsberger
(2021) address operational activities. In the following, a
detailed description of enhanced operational activities is de-
rived.
The second dimension of Rainsberger (2021) is perfor-
mance. Operational activities can be enhanced by business
analytics since we showed that algorithmic aid (Big Data An-
alytics) improves business performance (Mcafee et al.,2012).
Furthermore, Chen, Preston, and Swink (2015) and Apté et
al. (2012) show that algorithmic aid enhances operational
efficiency. An example of operational efficiency is improved
workforce planning and reduced need for new hires and a re-
duction in overtime (D. Barton & Court,2012). Further ben-
efits are decreased costs for IT infrastructure and efficient data delivery, resulting in time savings (Watson & Wixom, 2007). An example of cost reduction is the prevention and monitoring of fraud in organizations: analytics enable fraud detection at reasonable cost (Mishra & Silakari, 2012). All in all, algorithmic aid helps to make effective decisions faster (Reid, McClean, Petley, Jones, & Ruck, 2015) and even enables the automation of operational workflows (Iansiti & Lakhani, 2020), resulting in greater performance.
The third dimension of Rainsberger (2021) is compe-
tence. Gartz (2004) shows that business intelligence can
enhance the representation and evaluation of companies’
knowledge using knowledge-based systems. Therefore al-
gorithmic aid can help to preserve knowledge within the
company and make information flow more efficient (Watson
& Wixom,2007).
The fourth dimension of Rainsberger (2021) is effective-
ness. The literature suggests explicit effectiveness of the
use of algorithmic aid in sales and marketing (Halper,2014;
Mishra & Silakari,2012;Rainsberger,2021). The effective-
ness is captured by the term pervasive business intelligence. Pervasive business intelligence means providing users with information for better job performance (Watson & Wixom, 2007). Its advantage is that data is delivered to the specific user who needs it to make an effective decision (Rainsberger, 2021). Furthermore, algorithmic aid can provide insights into customer habits and patterns by analyzing customer data (Hamilton & Koch, 2015). Therefore, the use of algorithms enables personalized contextual interaction with customers (Brahm, Cheris, & Sherer, 2016). Customization to customers' needs is a very effective form of gaining competitive advantage at the operational level, since data-based customization to customers' needs creates value (Davenport, 2013). In addition, customer prioritization by analyzing customer profitability through digital devices can increase the effectiveness of a business (Davenport, 2013). The effectiveness of a business can be measured by
financial indicators. J. R. Evans and Lindner (2012) suggest
that algorithmic aid can increase profitability, revenue and
shareholder return. Furthermore, companies’ goals can be
reached faster with the use of analytics (Rainsberger,2021).
2.1.2. Technological foundations for algorithmic decision
support
The main goal of algorithmic decision support is to gain
value-creating information (Mikalef, Pappas, Krogstie, &
Pavlou,2020). The information is derived from data (Azvine,
Cui, Nauck, & Majeed,2006;Benaben et al.,2019). The ab-
straction levels of data, information, decision and knowledge
are shown in Figure 1.
According to Benaben et al. (2019), data is a formalized observation of reality. Information is defined as the result
of the interpretation of data through algorithmic methods
(Benaben et al.,2019). The process of applying data analysis
and discovery algorithms over the data is described as Data
Mining (Fayyad, Piatetsky-Shapiro, & Smyth,1996). Ben-
aben et al. (2019) define the exploitation of information gen-
erated by data mining as a decision. The definition of Ben-
aben is not contradictory to the definition of Mallach (1994)
mentioned earlier in this paper due to the fact that the infor-
mation provides the ability for reasoning in choice-settings.
The last distinction by Benaben et al. (2019) is knowledge.
Knowledge is capitalized, static information about extracted abstract concepts or previous experience (Benaben et al., 2019). As described earlier, the interpretation of data
is executed by algorithms. Therefore, it is necessary to de-
fine algorithms. The literature has a broad definition of algo-
rithms. Moschovakis (2001) outlines the necessity to define
algorithms precisely. According to Moschovakis, a rigorous definition is needed to avoid a wrong identification of algorithms with abstract machines or mathematical models of computers.

Figure 1: K-DID framework presenting the abstraction levels of data, information, decision and knowledge (Source: Benaben et al. (2019))
According to Güting and Dieker (1992), an algorithm is
defined as a specific process consisting of tasks in a clear order, run by mechanical or technical devices to receive an
expected output for a task. Furthermore, they state that every task has to be described clearly and must be executable with finite effort in finite time, leading to the termination of the algorithm. Algorithms can therefore metaphorically be seen as recipes for a problem-solving process. The recipes for a given problem can vary in their tasks; in the end, the best-performing recipe matters.
The implementation of a certain data type at the algorithmic level is characterized by data structures (Güting & Dieker, 1992). Algorithms differ in performance if they are used with data structures other than those intended. A short runtime is the performance measure for algorithms. The selection of algo-
rithms is based on a runtime analysis (Güting & Dieker,1992;
Knebl, 2019). Runtime analysis does not take into account the computing power of the underlying hardware on which algorithms run. In the evaluation of algorithms, it is therefore necessary to distinguish between runtime and computing time, since computing time reflects the performance of hardware and algorithm combined. In practice, computing time is the desirable performance measure for algorithms. It can be reduced by aiming for a low runtime of an algorithm or by using performant hardware. Therefore, decision support can aid well in the decision-making process, since the quality of algorithms is evaluated by time.
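To illustrate how runtime analysis guides the selection of algorithms independently of hardware, the following minimal sketch compares a linear search with a binary search on the same sorted input; the data size and the number of timing runs are arbitrary choices for illustration.

    import bisect
    import timeit

    data = list(range(1_000_000))  # sorted input; size chosen only for illustration
    target = 999_999

    def linear_search(items, value):
        # O(n): inspect elements one by one until the value is found
        for i, item in enumerate(items):
            if item == value:
                return i
        return -1

    def binary_search(items, value):
        # O(log n): repeatedly halve the search interval (requires sorted input)
        i = bisect.bisect_left(items, value)
        return i if i < len(items) and items[i] == value else -1

    # Runtime analysis in practice: measure both algorithms on identical input.
    for fn in (linear_search, binary_search):
        seconds = timeit.timeit(lambda: fn(data, target), number=10)
        print(f"{fn.__name__}: {seconds:.4f} s for 10 runs")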
Recent advances in hardware show a leveraging effect on
computing power. Besides Moore’s law, other advances in
hardware can be seen in Butters or Kryder’s law. Butter’s
law indicates that the amount of data transmitted by fiber-
glass doubles every 9th month (Rainsberger,2021). Fur-
thermore, Kryder’s law states that storage capacity doubles
every 13th month proportional to one square-centimeters of
a hard drive (Rainsberger,2021). These technological ad-
vances have exponential growth by definition leading to rad-
ical advances of exploitation in the business context. Despite
the rapid development of technology, the conception of com-
puting hardware exhibits weakness in performance due to
architectural issues. Computing architecture nowadays is di-
vided into Central Processing Unit (CPU) and Random Ac-
cess Memory (RAM) defined as Von-Neumann-architecture
(Leimeister,2019). The CPU interprets and executes com-
mands in sequential order, and the RAM stores the data needed at the time of processing (Leimeister, 2019). Shi (2021) and Rosenberg (2017) show several problems of Von-Neumann architectures. Shi (2021) states that Moore's law will reach its physical limit within the coming 10 to 15 years. Furthermore, the sequential processing of commands leads to inefficiency in comparison to the actual brain. The human brain has advantages over the computer in coping with novelty, complexity and ambiguity (Furber, 2016). The calculating speed and precision of a computer
is higher than that of human brains but the level of intel-
ligence of computers is low (Furber,2016;Shi,2021). In
fact, various research fields of computer science are inspired
by the human brain. Therefore neuromorphic computer ar-
chitecture is a solution toward the challenges faced by Von-
Neumann architecture. Neuromorphic computing is inspired
by the research findings of the structure and operation of
the brain (Furber,2016). Neuromorphic computing aims
to extract the formidable complexity of the biological brain
and apply this knowledge to practical engineering systems
(Furber,2016). An example of neuromorphic computer ar-
chitecture is the product of the german startup from Bochum
called GEMESYS Technologies. This startup develops a neu-
romorphic chip that substitutes Von-Neumann architectures
(GEMESYS Technologies,2022). Recent breakthroughs in
neuromorphic computing research show that computing ar-
chitecture can become intelligent. Kagan et al. (2021) intro-
duce a new system architecture called DishBrain. The Dish-
Brain integrates neurons into digital systems to leverage their
innate intelligence. Kagan et al. create a synthetic biolog-
ical intelligence by harnessing the computational power of
living neurons. The DishBrain can therefore exhibit natural intelligence and create a new computing architecture, potentially substituting Von-Neumann architectures (Kagan et al., 2021). Future developments in computing architecture aim to use the human brain itself as a processing unit by creating an interface between the human brain and the computing system (Kreutzer & Sirrenberg, 2019). These interfaces are called Brain Machine Interfaces (Kreutzer & Sirrenberg, 2019).
Since software is a lever for enhancing computing performance (Rosenberg, 2017), it is necessary to put emphasis on recent advances in algorithmic developments.
AI is one particular form of algorithms. The term AI has a wide range of definitions, and the selection of a specific definition results in path dependency for research (Wang, 2019). Considering the path dependency described by Wang, a more general definition of AI is used here. According to Rich
(1985), AI is the science of enabling computers to do things
that humans are currently better at. According to Kreutzer
and Sirrenberg (2019), AI is the ability of a machine to per-
form cognitive tasks. This includes reasoning skills, learning,
and finding solutions to problems independently. The idea of
an AI was first introduced by Turing. Turing (1950) investigates the ability of machines to think. He argues that a successful imitation game played by a computer can lead to the suggestion that machines can think. In the imitation game, the computer is programmed to hold realistic conversations. The imitation game is successful if a human being cannot differentiate whether the subject in the conversation is a computer or another human. According to Turing (1950), the perception of the interrogator plays a role in the evaluation of the question of whether machines can think or not. Therefore, if a machine is perceived as human, Turing considers that this machine can think. The test for considering a system as intelligent according to Turing's definition of intelligence is known as the Turing test.
Dellermann, Ebel, Söllner, and Leimeister (2019) define
intelligence as the ability to achieve complex goals, reason,
learn and adaptively perform effective activities within an
environment. Moreover, they extend the concept of intelli-
gence by dividing it into human intelligence and machine in-
telligence to gain complementary capabilities and augment
each other (Dellermann et al., 2019). As mentioned before, computer architectures perform worse than human brains in terms of intelligence. Therefore, computer architectures are inspired by the human brain. Dellermann et al. (2019) introduce the term hybrid intelligence to combine the advantages of human brains and computer systems. The same applies to algorithms. The research field of compu-
tational intelligence aims to develop algorithms devised to
imitate human information processing and reasoning mech-
anisms for processing complex and uncertain data sources
(Iqbal, Doctor, More, Mahmud, & Yousuf,2020). Further
technologies inspired by humans are neural networks (Iqbal
et al.,2020;Kreutzer & Sirrenberg,2019).
A neural network is a computer system containing hardware and software inspired by the human brain (Kreutzer & Sirrenberg, 2019). A neural network has multiple CPUs in order to approximate simultaneous information processing. Figure 2 shows the structure of a neural network. The first layer is the Input Layer, where data is stored as input for further processing by the following Hidden Layer. The following layers are defined as Hidden Layers. A Hidden Layer can take the outputs of previous Hidden Layers and process them further, generating a new output which is processed by the next Hidden Layer. The last layer is defined as the Output Layer; it generates a final output from the outputs of the previous Hidden Layer. Each processing algorithm within a neural network can vary from the others. In Hidden Layers, machine learning algorithms are also used (Kreutzer & Sirrenberg, 2019).
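The layered processing described above can be sketched as a minimal forward pass through such a network; the layer sizes, random weights and ReLU activation are purely illustrative assumptions, not a model from the cited literature.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def layer(inputs, n_outputs):
        # One fully connected layer: weighted sum of the previous layer's
        # outputs followed by a simple non-linear activation (ReLU).
        weights = rng.normal(size=(inputs.shape[0], n_outputs))
        return np.maximum(0, inputs @ weights)

    # Input Layer: raw data enters the network.
    x = np.array([0.2, 0.7, 0.1])

    # Hidden Layers: each takes the previous output and processes it further.
    h1 = layer(x, n_outputs=4)
    h2 = layer(h1, n_outputs=4)

    # Output Layer: produces the final result from the last hidden output.
    y = layer(h2, n_outputs=1)
    print("network output:", y)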
Machine learning (ML), on a general level, is defined as a set of methods that can automatically detect patterns in data and use the uncovered patterns to predict future data or to support other kinds of decision-making under uncertainty (Murphy, 2012). Murphy (2012) states that ML provides automation in data analysis. He distinguishes three types of learning algorithms: supervised learning, unsupervised learning and reinforcement learning (Murphy, 2012). In supervised learning, the results the machine should produce are given a priori, and the machine is trained to produce the right results (Rainsberger, 2021). When results are not given a priori, an ML algorithm is defined as unsupervised learning; here the machine automatically identifies patterns in the data and creates results (Rainsberger, 2021). Reinforcement learning is inspired by human learning: the machine receives rewards for right results and punishments for wrong results (Buxmann & Schmidt, 2021; Murphy, 2012; Rainsberger, 2021). Punishments and rewards are normally associated with the teaching process (Turing, 1950). Reinforcement learning is thus inspired by the findings of Turing (1950), who suggests that, instead of programming a simulation of an adult mind, programming a simulation of a child's brain can lead to a simulation of an adult brain in the future. If ML is applied in neural networks, the term Deep Learning is used in the literature (Kreutzer & Sirrenberg, 2019).
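The first two learning types can be illustrated with a minimal sketch, assuming scikit-learn is installed; the data points and model choices are illustrative only.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    X = np.array([[1.0], [2.0], [3.0], [4.0]])

    # Supervised learning: the desired results (labels y) are given a priori,
    # and the model is trained to reproduce them.
    y = np.array([2.1, 3.9, 6.2, 7.8])
    model = LinearRegression().fit(X, y)
    print("prediction for x=5:", model.predict([[5.0]]))

    # Unsupervised learning: no labels are given; the algorithm finds
    # patterns (here: two clusters) in the data on its own.
    clusters = KMeans(n_clusters=2, n_init=10).fit(X)
    print("cluster assignments:", clusters.labels_)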
As mentioned before, data is necessary for data mining. Besides the technological advances in hardware and algorithms, data itself is developing rapidly in volume and variety. It is nec-
essary to define the term Big Data due to the fact that Big
Data is essential for the previously mentioned technologies.
Mashingaidze and Backhouse (2017) show various defini-
tions of Big Data in literature and practice. Considering the
broad range of definitions for Big Data, Mashingaidze and
Backhouse (2017) synthesize all definitions into a new one.
According to them, Big Data is data that is high in volume
gathered from a variety of sources or data formats and is
generated at high velocity. Conventional technologies are in-
sufficient for the management of Big Data due to the high
level of complexity of Big Data. Therefore new advanced
technologies and techniques for storage and analysis of data
are required (Mashingaidze & Backhouse, 2017).

Figure 2: Structure of a neural network (Source: Own illustration according to Kreutzer and Sirrenberg (2019))

Data Warehouses are the necessary technology for coping with the high
level of complexity of Big Data (Leimeister,2019). Accord-
ing to Leimeister (2019), Data Warehouses are a specialized
technology for the storage of data and information. Further-
more, they are the underlying technology for business intelli-
gence. Their function is to gather data and ensure data qual-
ity considering uniformity, consistency and freedom from er-
ror of data (Leimeister,2019). They provide data for other
systems using interfaces (Leimeister,2019).The mentioned
technologies can be used to assist humans in their work. The
collaboration of technology and humans is called human-
machine interface (Blutner et al.,2009). Human-machine
interfaces are used to execute hybrid intelligence. Sheridan
and Verplank (1978) develop ten automation levels for assisting humans. Figure 3 shows the ten automation levels; the terms used in the illustration were developed by Blutner et al. (2009).
The first level of automation is the manual control where
the human executes tasks without the aid of the computer.
In the second level (Action Support) the computer suggests
choices for task execution. Batch Processing reduces the number of choices by making a preselection; the selection among these choices is done by the human. In the next
level (Shared Control), the computer reduces the degree of
choices and processes a result for one alternative which is
suggested to the human. The human has the possibility to
select the processed choice. The next level is Decision Sup-
port where the confirmation of the human is necessary for
the computer-aided execution of the processed result. In the
Blended Decision Making level, the human only has the right
for interventions. The computer does the tasks automati-
cally. The next level is the Rigid System, where task execution is automated through computer aid; the human's role here is to monitor task execution. At the Automated Decision Making level, monitoring by the human is only provided on request. The level of control by the computer is extended in the next level, where the computer executes the tasks automatically and a monitoring possibility is only provided after a decision. Finally, the last level of automation is full automation, where the computer has full control over the task and the human is ignored (Blutner et al., 2009; Sheridan & Verplank, 1978). Systems in which various technologies and
concepts are combined to aid in management, are defined
as business intelligence systems (Nedelcu,2013). Therefore,
a decision support system can be seen as an assistant sys-
tem for management tasks. The use of various technologies
to aid managers in decision-making is defined as business
intelligence (Baars & Kemper,2021;Gluchowski,2016). In
fact, the term business intelligence has no uniform definition.
Furthermore, it is important to distinguish between business
analytics and business intelligence. Mashingaidze and Back-
house (2017) show various definitions of business intelli-
gence (BI) and business analytics (BA) in literature and prac-
tice. BI is defined as a set of integrated strategies, applica-
tions, technologies, architectures, processes and methodolo-
gies in order to gather, store, retrieve and analyze data to sup-
port decision-making (Mashingaidze & Backhouse,2017).
According to Mashingaidze and Backhouse (2017), BA is de-
fined as a set of skills, applications, technologies, architec-
tures, processes and methodologies used to collect, store and
retrieve data for the purpose of data mining in order to sup-
port decision-making, inform business strategy and drive per-
formance. The data mining techniques can be descriptive,
predictive and prescriptive from scientific disciplines such as
mathematics and statistics. Since this paper has outlined the technological foundations and benefits of algorithmic decision support, it is useful, by way of summary, to illustrate the components of a decision support system. Figure 4 shows the framework of Delen and Demirkan (2013). Busi-
ness processes create transaction data which are gathered
in data warehouses. These data warehouses include various
data sources for data mining. Interfaces with other systems
process new data, information or knowledge. The processed
findings are used for consultation of decision makers or to
identify opportunities and risks (environmental monitoring).
Figure 3: Illustration of ten automation levels for assisting humans (Source: Own illustration according to Sheridan and
Verplank (1978) and Blutner et al. (2009))
Figure 4: A conceptual framework for service-oriented decision support systems (Source: Delen and Demirkan (2013))
2.2. Deriving the need for acceptance conditions: technol-
ogy acceptance models
Since this paper has shown a wide range of benefits of algorithmic decision support for business activities, one might suggest simply implementing algorithmic aid to outperform the competition. In fact, the benefits of algorithmic aid do not stem from the applications themselves. Rather, it is the integration of algorithmic aid into the business, where business processes are transformed and redesigned, that delivers these kinds of benefits (Apté et al., 2012). Further literature shows that the application of algorithmic aid can even lead to a failure to realize expected performance gains (Mikalef, Boura, Lekakos, & Krogstie, 2019). Mikalef et al. (2019) outline that organizational aspects and managerial skills are more important in an uncertain environment than the application of technology itself. Laudon, Laudon, and Schoder (2016) outline factors
that determine the success or failure of an information sys-
tem. They state that four factors influence the result of the
implementation in terms of design, cost, usage and data. The
first factor of Laudon et al. (2016) is the involvement of users and the consideration of their influence, which is also supported by Korsgaard, Schweiger, and Sapienza (1995). The next factor of Laudon et al. (2016) is the support from management. Since this paper focuses on decision support systems used by managers, managers and users are the same people. However, the need for support from management is a crucial factor for decision support systems (Mcafee et al., 2012; Rainsberger, 2021). A further factor of Laudon et al. (2016) is the
degree of complexity and the risk of the implementation pro-
cess. The next factor of Laudon et al. (2016) is the manage-
ment of the implementation process. Laudon et al. (2016)
refer here to the change management process.
In general, the technological change management pro-
cess is related to challenges (Orlikowski, 1992). Specifically, Larson and Chang (2016) show that the adoption of BI applications and services is challenging for organizations. According to Savolainen (2016), the commitment to a change process can be predicted by acceptance. Scheuer (2020) shows that the aim of acceptance research is to explain the behavior of users in terms of rejection or affirmation of material or non-material (artificial) technologies. He therefore defines acceptance as the willingness of someone to voluntarily accept, acknowledge, approve or agree with a subject. In fact, a decision suggested by a system exhibits quality only if the decision fulfills the goals and is accepted by users (Sharma, Mithas, & Kankanhalli, 2014).
Scheuer (2020) shows various kinds of acceptance mod-
els in the literature. Acceptance research varies in the point
of technology usage. Various research fields focus on ac-
ceptance: information system research, marketing research,
behavioral consumer research, psychology and philosophy
(Königstorfer,2008). The information system research is
characterized by the technology acceptance model. The
technology acceptance model (TAM) was first introduced
by Davis (1989) and states that the attitude toward using
a technology is influenced by the perceived usefulness or
perceived ease of use. Furthermore, TAM by Davis (1989)
was extended by social influence mechanisms and cognitive
instrumental processes which influence decision-making or
an attitude towards using a technology (Venkatesh & Davis,
2000). This extension is defined as TAM 2 (Venkatesh &
Davis,2000). Moreover, a further extension of TAM 2 was
introduced and defined as TAM 3 (Venkatesh & Bala,2008).
TAM 2 by Venkatesh and Davis (2000) is extended by de-
terminants of perceived ease of use by Venkatesh (2000).
The determinants of perceived ease of use are defined in the
following. Perceived Enjoyment is defined as the perception
of joy resulting from system use (Venkatesh,2000). The
second determinant of perceived ease of use is Computer Self-Efficacy, which is defined as the subjective perception of having the necessary capabilities to perform a specific task using a computer. A further determinant by Venkatesh (2000) is Computer Playfulness, defined as the degree of cognitive spontaneity while interacting with the computer (Venkatesh & Bala, 2008). Perception of External Control is defined as the perception of support for the use of the computer from organizational and technological resources. A further determinant is Computer Anxiety, which stands for the perception of fear while using the computer. The last determinant of Venkatesh (2000) is Objective Usability, which is defined as a comparison of systems in terms of task completion.
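Relationships of this kind are typically examined with a structural equation model, as this thesis later does. The following is a minimal illustrative sketch of how a TAM-style model could be specified, assuming the semopy package and a hypothetical survey file with item scores (PU = perceived usefulness, PEOU = perceived ease of use, BI = behavioral intention); it is not the model estimated in this thesis.

    import pandas as pd
    from semopy import Model

    # Hypothetical TAM-style specification: intention (BI) is driven by
    # perceived usefulness (PU) and perceived ease of use (PEOU), and PEOU
    # also influences PU, following the basic TAM structure by Davis (1989).
    model_desc = """
    PU ~ PEOU
    BI ~ PU + PEOU
    """

    # 'survey.csv' is a placeholder for a dataset with columns PU, PEOU and BI.
    data = pd.read_csv("survey.csv")

    model = Model(model_desc)
    model.fit(data)
    print(model.inspect())  # parameter estimates for the structural paths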
Scheuer (2020) develops an acceptance model for the use
of artificial intelligence. The model developed by Scheuer is
called the KIAM model (Scheuer, 2020). The KIAM model is an extension of the TAM and is considered the Artificial Intelligence Acceptance Model; KI is the German abbreviation for AI. The AI acceptance model (KIAM) is a holistic acceptance model that addresses the characteristics of the theoretical properties of an AI compared to a classical computer system. Scheuer assumes that an AI is accessible via a technology (e.g., a smartphone application) enriched with Narrow AI services (e.g., a chatbot integration, Speech-To-Text, or Text-To-Speech), through which a user can interact with the AI in natural language. Based on this, two essential components emerge: first, the classical technology in the form of a software application, and second, the dialog component for interacting with the AI in the background. For the classical technology and the investigation of its acceptance, Scheuer (2020) uses the existing TAM 3 model by Venkatesh and Bala (2008). However, for the dialog component and the resulting interaction between the AI and the user, Scheuer (2020) differentiates to what extent the user accepts the AI as a personality and sees the system as a complete person or as a technology. Furthermore, he shows that the perception of the system as a technology or as a person determines the suitability of acceptance models.
For this, he considers that psychological models for measuring sympathy and affection apply, as personality acceptance takes precedence over pure technology acceptance if the system is seen as a person. In this regard, Scheuer (2020) highlights that if the filter of the perception of the system as a personality is considered, an AI is recognized as a personality. This relationship with the technology can be described by interpersonal acceptance models.
On the other hand, if a system is perceived as a technology, the TAM is suitable (Scheuer, 2020). He shows that the perception of a system as a technology has an influence on acceptance.
Since Scheuer (2020) shows that the perception of a system as a technology or as a person determines how acceptance is created, it is necessary to consider the perception of the system when deriving acceptance conditions. Therefore, it is necessary to derive further acceptance conditions for decision support systems.
Since decision support systems may influence the decision of the user, it is important to understand the concept of persuasive technologies in order to derive acceptance conditions. A persuasive computer is an interactive technology that changes the attitude or behavior of the user (Fogg, 1998). The intention behind technology usage is key here. If a person uses the interactive technology with the intent to extend or change his or her own attitude or behavior, this type of intent is defined as autogenous by Fogg (1998). The intent is defined as exogenous if access to the interactive technology is given by others (Fogg, 1998). Furthermore, the intent embedded in the creation or production of an interactive technology is defined as endogenous by Fogg. Additionally, Fogg (1998) differentiates three types of computer functions.
First, Fogg (1998) sees the computer as a tool for reducing barriers, increasing self-efficacy, supporting decisions, and changing mental models. Second, the computer is seen as a medium for providing experience through insights and visualization, promoting the understanding of causal relationships, and motivating through experience. Third, Fogg (1998) sees the computer as a social actor that creates relationships by establishing social norms, invoking social rules and dynamics, and providing social support or sanction. Fogg (1998) thus demonstrates that it is important to see decision support systems as persuasive technologies with various kinds of functions.
The structuration approach of DeSanctis and Poole (1994) focuses on the social structures for human activity provided by technology. They differentiate between two social structures: first, the features of the technology, called the structural features, and second, the general intent underlying those features, defined as the spirit. The adaptive structuration approach of DeSanctis and Poole (1994) can be seen as a structuration approach that highlights user-centricity in terms of user experience (UX) and user interface (UI). The spirit of the social structure can be related to UX following the definition by Hassenzahl and Tractinsky (2006), who define UX as a consequence of a user's internal state while interacting with the technology. UI is defined as comprising all aspects of system design that affect a user's participation in handling the system (Smith & Aucella, 1983). If the features of the system are not comprehensible, a mismatch between system and user arises, leading to a decreased effectiveness of the information system (Barbosa & Hirko, 1980).
UX and UI optimization may be a fundamental part of acceptance research for information technologies, since Gong (2008) shows that an anthropomorphized interface elicits stronger social responses from users. Scheuer (2020) identified that anthropomorphizing a system interface has a positive influence on the perception of the system both as technology and as a person. Therefore, the hypothesis that anthropomorphizing a system leads to acceptance of the system (H1) should be tested.
In the following, the necessity of user-centricity within the process of implementing algorithmic decision support is outlined. Makarius, Mukherjee, Fox, and Fox (2020) investigate how to successfully integrate AI into an organization. They outline that employees' comprehension is crucial for a successful integration of AI. Furthermore, Makarius et al. (2020) formulate a need for research on the trust of decision-makers in the output of decision support systems and on fostering team identification between AI systems and users. Scheuer (2020) identified a positive influence of trust on acceptance. Trust may therefore be an acceptance condition for decision support systems, so the hypothesis that trust leads to acceptance (H2) should be tested.
Rainsberger (2021) identifies major challenges in the adoption of AI systems. According to Rainsberger, insufficient knowledge about the benefits of the technology, lacking trust towards the technology, and insufficient resources for technology implementation hinder AI adoption in an organization. He summarizes that these obstacles arise from the ignorance of the decision-makers. The TAM by Venkatesh and Bala (2008) considers result demonstrability, i.e., the transparency of the information system in processing results. Since Scheuer identifies that trust is influenced by the transparency of the system, the hypothesis that higher transparency/comprehensibility of a system leads to more trust (H3) is tested.
Gartz (2004) also sees challenges in the implementation of technologies for decision support due to missing awareness or a lack of interest and motivation of management. It is necessary for senior leaders to recognize the importance of decision support (Grossman & Siegel, 2014; Mcafee et al., 2012). One major challenge here is the change in the decision-making culture. In important decisions, companies often rely on the "HIPPO" (highest paid person's opinion) (Mcafee et al., 2012). If a decision support system, which is by definition a persuasive technology, is introduced, does the senior leader or decision maker allow themselves to be overruled by data (Mcafee et al., 2012)? Orlikowski and Robey (1991) assume that more information in the decision-making process leads to a higher power of the decision maker. Does more information lead to a shift in the decision-making process from the HIPPO as an expert to the HIPPO as an interrogator (Mcafee et al., 2012), and thereby to decreased power? Does the senior leader accept a decision support system that may shift their role in the form of decreased power? Therefore, the hypothesis that more transparency leads to higher perceived participation of the user in the decision-making process (H4) is tested.
Participation in decision-making leads to a higher perception of fairness (Korsgaard et al., 1995). Newman, Fast, and Harmon (2020) show that participation possibilities in the decision-making process increase trust. Do participation possibilities in the decision-making process lead to an increased perception of trust even if the output is not comprehensible due to the black box of algorithmic aid? In order to answer this question, the hypothesis that the higher the perceived participation of the system-user in the decision-making process, the higher the perceived trust towards the system (H5) is tested.
The TAM model considers perceived usefulness as an indicator of acceptance. By analogy, the perceived intelligence of the system should be considered as an acceptance condition, since for an AI-based decision support system the perceived intelligence may reflect how useful the system appears to the user. Furthermore, Scheuer (2020) tests the effect of perceived intelligence on trust. Although Scheuer finds no evidence for a relationship between trust and perceived intelligence, trust may be a mediator of the relationship between perceived intelligence and acceptance. Therefore, the hypothesis that the higher the perceived intelligence of the system is, the higher the trust (H6) should be tested.
As mentioned earlier, Scheuer (2020) identifies that the perception of a system as a technology or as a person creates acceptance. The following hypotheses should therefore be tested: the higher the perception of the system as technology is, the higher the acceptance (H7), and the higher the perception of the system as a person is, the higher the acceptance (H8).
2.3. State of the art in literature: acceptance conditions of
algorithmic decision support
In order to answer the research question, it is necessary to analyze the state of the art in research. Various studies address the acceptance of artificial intelligence-based technologies, which are one class of technologies for algorithmic decision support.
Hastenteufel and Ganster (2021) apply this topic to the digital transformation in banking. They analyze the acceptance of robo-advisors using a modified TAM, building on the technology acceptance model by Davis et al. (1989). Hastenteufel and Ganster (2021) identify trustworthiness, perceived usability, and social influence as acceptance conditions for algorithmic decision support. Similarly to Hastenteufel and Ganster (2021), Rathje et al. (2021) research trust in banking. For this purpose, they develop their own research model based on the models by Mayer et al. (1995), Gefen et al. (2003), and Davis (1989). They conducted a survey with 119 participants with a high affinity for technology. Rathje et al. (2021) identify that trust is related to the intention to use the technology. Despite relevant findings for this thesis, these papers analyze acceptance at the consumer level.
Gersch et al. (2021) research the challenges, particularly regarding trust, of collaborative service delivery with artificial intelligence in the field of radiology. For this purpose, they conduct interviews with various stakeholders in radiology. They identify trust as a means to cope with uncertainties. Furthermore, Gersch et al. (2021) find that cognitive trust is built during the first contact with the user; with repeated experience, the user develops affective trust. Understandability and comprehensibility are important for users. Further challenges are the change of one's own position in the workplace due to the introduction of support through artificial intelligence as well as new duties and prerequisites in the design of the socio-technical system (Gersch et al., 2021). They therefore suggest that explainable artificial intelligence should consider the perspectives of different stakeholders. Since this research is applied to the health industry, its results are only partially applicable to this thesis.
Pütz et al. (2021) research the acceptance of voice bots and chatbots. They use the technology acceptance model of Davis (1989) and the extended versions of Venkatesh and Davis (2000) and Venkatesh and Bala (2008) to analyze the acceptance of this technology. The approach used by Pütz et al. (2021) is literature-based. They identify a relation between perceived usability and perceived user-friendliness, as well as a relation between perceived user-friendliness and the intention to use the technology. Since the research of Pütz et al. (2021) focuses on the acceptance of voice bots and chatbots, these results are applicable to the research question of this thesis only if these kinds of technology are considered.
Lee’s (2018) study shows how people perceive decisions
made by algorithms compared with decisions made by hu-
mans. He made an online experiment by using four manage-
rial decisions that required human or mechanical skills. By
manipulating the decision maker in terms of algorithm and
human he measured the perceived fairness, trust and emo-
tional response. In mechanical tasks, decisions made by al-
gorithms and humans were perceived as equally in fairness,
trustworthiness and evoked similar emotions. Decision made
by humans for mechanical tasks differ in terms of trustworthi-
ness due to the attribution to managers’ authority. Decisions
made by algorithms were perceived as fair and trustworthy
due to attribution perceived efficiency and objectivity. In hu-
man tasks, decisions made by humans evoke positive emo-
tions which can be attributed to social recognition. Further
Lee (2018) identifies that human task made by algorithms are
perceived as less fair and trustworthy. Furthermore, decision
made by algorithms in human task evoke negative emotions
due to the perception of a dehumanizing experience of be-
ing tracked and evaluated by machines. The perceived lack
of intuition and subjective decision capabilities caused lower
perception of fairness and trustworthiness. Newman et al.
(2020) analyze the perceived fairness of decision-making by
algorithms in human resource management. They assume
that algorithms increase procedural fairness. Further they
assume that decisions made by algorithms are less accurate
than identical decisions made by humans. Newman et al.
(2020) prove that individuals perceive decisions made by
algorithms as less fair than comparable decisions made by
humans. Further they outline that algorithms are perceived
as reductionistic leading to a decreased perception of fair-
ness. Newman et al. (2020) show that organizational com-
mitment is affected in a negative way by decisions made by
algorithms where the perception of fairness has a mediat-
ing effect. However, the negative effect of decisions made
by algorithms is mitigated in decisions made by hybrid in-
telligence where the human has more involvement. Further-
more, high transparency in algorithmic decisions has a neg-
ative effect on perceived fairness and leads to decontextual-
ization. On the other hand transparency in human decision
leads to an increase in the perception of fairness and cause
less decontextualization.
Lee (2018) shows that the perception of algorithms depends on the decision context and the characteristics of the decision. Newman et al. (2020) show that the perception of fairness is human-centered. Although Lee (2018) focuses on acceptance and Newman et al. (2020) on perceived fairness at the worker level, these results are applicable to this thesis. Newman et al.'s (2020) findings on the role of transparency will be considered in this thesis.
Panagiotarou, Stamatiou, Pierrakeas, and Kameas (2020) confirm Lee's (2018) results, as they reveal that task characteristics matter for understanding people's experiences with algorithmic technologies. Furthermore, they show that participants with different levels of technical skills differ significantly in perceived usefulness of the technology, perceived ease of use, intention to use
the technology, and actual use of the technology. Sagnier, Loup-Escande, Lourdeaux, Thouvenin, and Valléry (2020) analyze the acceptance of Virtual Reality (VR). They identify an indirect effect of personal innovativeness on the intention to use: people with high personal innovativeness are interested in new technologies and are more likely to perceive a higher usefulness, which in turn leads to an intention to use VR. Besides task characteristics, the literature thus suggests that the characteristics of the person who is going to use the technology matter. The results of Panagiotarou et al. (2020) and Sagnier et al. (2020) can be considered for further analysis.
Uysal, Alavi, and Bezençon (2022) analyze potentially harmful and beneficial effects of using artificial intelligent assistants (AIA) such as Alexa. They identify that anthropomorphism of artificial intelligent agents increases consumer satisfaction through increased trust, but that a high degree of anthropomorphism also threatens user identity and thereby undermines user comfort. Further, the perceived threat to user identity increases the closer and longer the consumer relationship is (Uysal et al., 2022). The perceived threat to human identity can be mitigated when consumers are aware of data security solutions and adopt them in their relationship with the AIA. The hypothesis by Uysal et al. (2022) that higher anthropomorphism reduces consumer satisfaction and consumer well-being was not supported. Further findings of Uysal et al. (2022) indicate that a higher threat to human identity reduces consumer comfort by decreasing the consumer's AI empowerment. This effect is attenuated when consumers with a long relationship to the AIA are aware of the data issues of their usage (Uysal et al., 2022).
Scheuer's (2020) findings imply a new acceptance model based on dual-process theory (J. S. B. T. Evans & Stanovich, 2013; Kahneman & Schmidt, 2012). He distinguishes between acceptance based on System 1 (IPART) and System 2 (TAM). Scheuer (2020) identifies that the acceptance of AI systems depends on the acceptance of the specific technology medium, the acceptance of AI as a technology, and interpersonal acceptance. If the degree of anthropomorphism is high, the AI is considered a personality, and the use of AI systems that are considered a personality is emotion-driven. Therefore, Scheuer (2020) states that TAMs are not suitable to measure the acceptance of an AI system if the system is considered a personality; interpersonal acceptance models should be used instead. On the other hand, TAMs are suitable if the user perceives the AI system as a technology (Scheuer, 2020). Furthermore, Scheuer identifies that users seek to use AI systems whose degree of machine learning is controllable and transparent. While Panagiotarou et al. (2020) and Sagnier et al. (2020) showed the relevance of task characteristics, Uysal et al. (2022) and Scheuer (2020) put emphasis on the relevance of system features created by anthropomorphizing the interface. The findings of Uysal et al. (2022) and Scheuer (2020) are highly applicable to this thesis. The specialty of Scheuer's research approach is that he refers to Turing's idea of intelligence, which considers the computer developed to the point where it is perceived as a human being by the user (Turing, 1950). Since no literature except Scheuer (2020) focuses on interpersonal acceptance, Scheuer's findings contribute to this thesis.
Bader and Kaiser (2019) research the role of artificial intelligence in workplace decisions and outline the spatial and temporal detachment of decision-making. They explore how users deal with algorithmic decision-making and how user interfaces influence the involvement in decision-making. Bader and Kaiser (2019) argue that, due to sociomateriality, the detachment from decision-making is reduced. They outline that AI has a dual role in workplace decisions: on the one hand, AI creates human attachment through emotion-driven affective entanglement; on the other hand, AI facilitates detachment through deferred decisions and manipulated data. This dual role of AI results in both high and low involvement in interactions with the interface. The involvement created by interfaces will therefore be considered in this thesis.
Merendino et al. (2018) explore whether Big Data has changed strategic decision-making processes at board level. They identify a lack of cognitive capabilities to cope with Big Data. Furthermore, they outline a friction in group cohesion at board level which has consequences for the decision-making process. Merendino et al. (2018) show that boards seek new ways of working in order to avoid information silos and rely on the capabilities of third parties, such as consultants, to handle Big Data. Merendino et al.'s (2018) findings are applicable to this thesis due to their focus on managers. However, their results address the decision-making of managers in a group.
Abhari, Vomero, and Davidson (2020) analyze the psychological motivation behind the use of BI tools, using the Needs-Affordances-Features framework by Karahanna, Xin Xu, Xu, and Zhang (2018). They identify, first, that the need for autonomy and competence in the business environment motivates the use of BI tools that address the psychological affordance features of autonomy, collaboration, and communication. Further, they outline that the needs for relatedness, having a place, and self-realization motivate the use of BI tools that afford the psychological features of collaboration and communication. Since Abhari et al. (2020) research the adoption of BI at a voluntary user level, their findings are applicable to this thesis.
Meske, Bunde, Schneider, and Gersch (2022) show that explainability is a prerequisite for fair AI. For this purpose, Meske et al. define explainable AI (XAI) by distinguishing it from interpretable AI. Interpretable AI is given if humans can directly make sense of a machine's decision without additional explanation (Guidotti et al., 2018). Giving additional information as a proxy to comprehend the reasoning process is defined as XAI (Adadi & Berrada, 2018).
The TAM by Venkatesh and Bala (2008) considers the perceived comprehensibility of the system as an acceptance condition. An explanation as a proxy tries to create acceptance through higher result demonstrability. XAI tries to overcome the boundary between the artificial and the material by approaching
the cognitive System 2 of the human. At the same time, by relying on availability heuristics, it reduces the amount of information addressed to the human's cognitive System 1. System 1 may lead to misinterpretation, which can be reduced through personalized XAI according to Meske et al. (2020). As the literature shows a substantial lack of comprehensibility of decision support systems, it may be challenging to create acceptance through XAI.
The state of the art in the literature shows that research on acceptance of algorithmic decision support is young, as the oldest literature is from 2018. Furthermore, only a few studies, such as Merendino et al. (2018), analyze the acceptance of algorithmic decision support at the managerial level, and they do not consider the acceptance of a single manager since they focus on board-level decision-making. Other studies show various results on the acceptance of algorithmic decision support, but they do not focus on the managerial level. The study of Uysal et al. (2022) shows that anthropomorphism can lead to increased consumer satisfaction and increased trust. Since trust is related to the intention to use the technology (Rathje et al., 2021), it is necessary to consider the anthropomorphizing of decision support systems for an analysis of acceptance. Anthropomorphizing an interface can therefore be viewed through the lens of the structuration approach by DeSanctis and Poole (1994).
No literature except Scheuer (2020) and Uysal et al. (2022) focuses on the acceptance of anthropomorphized systems. Furthermore, Scheuer (2020) showed that anthropomorphizing AI systems leads to emotion-driven use. Since Bader and Kaiser (2019) show a dual role of AI in the workplace, it is necessary to analyze empirically whether anthropomorphizing the AI system may mitigate the detachment from the AI system. The answer to this question may lead to conditions for an acceptance of algorithmic support at the managerial level.
3. Analyzing acceptance conditions: methodological ap-
proach
This thesis aims to answer the following research question: which conditions lead to an acceptance of algorithmic decision support in management? To answer the research question, two approaches are combined: a vignette study is conducted along with a quantitative survey. The results are analyzed empirically, and afterwards a structural equation model is derived to illustrate the conditions that may lead to the acceptance of algorithmic decision support. In this regard, it is first important to explain why, in the context of the research question, the approach of a vignette study along with a quantitative survey was selected. Accordingly, the following section presents the rationale behind the methodological approach of this thesis.
3.1. Theoretical foundation of a vignette study
One of the most frequent tools to investigate the beliefs, attitudes, and judgments of respondents is the combination of quantitative research and a vignette study. Vignette studies are particularly helpful when the research is designed to assess respondents' judgments about specific scenarios. In the academic literature, quantitative surveys combined with vignette analysis were an innovative breakthrough, as they allowed a new way of assessing public opinion in the form of a survey while integrating the contextual perception of specific situations. In the past, quantitative vignette studies have been used in different disciplines, such as psychology by Barrera and Buskens (2007), Dülmer (2001), or Walster (1966), marketing by Wason et al. (2002), and sociology by Alves and Rossi (1978), Beck and Opp (2001), or Jasso and Webster Jr (1999). Atzmüller and Steiner (2010) define a vignette as a "carefully constructed description of a person, object, or a situation representing a systematic combination of characteristics" (Atzmüller & Steiner, 2010). According to Dubinsky, Jolson, Kotabe, and Lim (1991), vignettes help identify management decisions. They especially outline that "Vignettes can be particularly illuminating with respect to managerial implications; an appropriately constructed and relevant [vignette] can help management discern where specific action is necessary" (Dubinsky et al., 1991). Another benefit of vignette studies is that the design of vignettes allows several explanatory as well as context-dependent factors to be presented simultaneously, through which more realistic scenarios become possible (Atzmüller & Steiner, 2010). Moreover, vignettes can be presented in different forms such as text, dialog, cartoons, pictures, audio, or video. Depending on the research setting and research question, a vignette can incorporate an experimental design feature.
Several researchers argue that vignette surveys are superior to conventional question-based surveys. Wason and Cox (1996) support this statement by outlining that vignette surveys provide greater realism. Robertson (1993) underlines that vignette studies offer a greater range of situational and contextual factors. Similarly, Barnett, Bass, and Brown (1994) state that vignette studies "approximate real-life decision-making situations". Alexander and Becker (1978) further explain that vignette studies supply standardized stimuli to all respondents, which makes a replication of the study easier and enhances the measurement reliability. Cavanagh and Fritzsche (1985) argue that vignette studies also improve construct validity (Wason et al., 2002). Furthermore, they outline that vignette studies increase the involvement of the respondents and decrease the potential for errors caused by not paying attention to questions or giving the same answer throughout the survey.
Researchers also note that the target group plays an essential role in vignette studies and that an appropriate population should be selected. Stevenson and Bodkin (1998) argue that, with regard to the decision-making process, vignette studies can be targeted toward students, as students are tomorrow's business professionals. Regarding the design of vignette studies, researchers suggest that vignettes should be designed adequately and not in too much detail. Hyman and Steiner (1996) argue that vignettes should not be so detailed that they overburden respondents.
Grant and Wall (2009) highlight that especially in the context of management research it is important to understand causal relationships, which in turn requires the use of experimental or quasi-experimental designs. Vignette studies address exactly this aspect, as this research design improves our knowledge about causal relationships (Aguinis & Bradley, 2014). The vignette survey methodology presents participants with carefully constructed and realistic scenarios in order to assess dependent variables including intentions, attitudes, and behaviors. An example of gaining insights into causal relationships through vignette surveys is provided by McKelvie, Haynie, and Gustavsson (2011), who addressed the impact of uncertainty on the decision-making process of entrepreneurs. In particular, they provided evidence on which type of uncertainty affects whether entrepreneurs choose to exploit or not to exploit opportunities (Aguinis & Bradley, 2014; McKelvie et al., 2011). Aguinis and Bradley (2014) conducted a review of 30 management journals from 1994 to 2003 and provided evidence that vignette surveys are a way to address the problem of internal and external validity.
Vignette surveys have been used in several contexts and formats. Cook (1979) investigates whether Americans support programs for social groups in need of aid, using text vignettes. Atzmüller, Kromer, and Elisabeth (2014), on the other hand, took a closer look at peer violence among adolescents using short video vignettes. Audio vignettes have also been used, for example by Atzmüller et al. (2014) to investigate radio news on crimes. Several scholars note that vignette surveys are flexible and have a wide range, as they allow participants to come out of their comfort zone and perceive different experimental settings in the form of videos, audio, text, etc. Moreover, vignette surveys allow participants to move away from socially desirable or politically correct answers, which in turn reduces biases (Steiner, Atzmüller, & Su, 2016).
Based on the aforementioned aspects, a vignette survey appears to be an appropriate tool, first, to identify management decisions (Dubinsky et al., 1991); second, to include different experimental settings in a survey such as videos, audio, or text (Steiner et al., 2016); and third, to construct realistic scenarios and consider context-dependent aspects (Atzmüller & Steiner, 2010). Because of this, the approach of a vignette survey was selected to answer the research question. The next section explains why structural equation modeling is relevant in the context of the research question, why it is used for the empirical approach of this thesis, and in particular why it is used to illustrate the results of the survey.
3.2. Structural equation modeling
With the method of structural equation modeling (SEM) it is possible to simultaneously model complex relationships among multiple dependent and independent variables (Hair Jr. et al., 2021). There are two common options in SEM: common factor-based SEM (CB-SEM) and partial least squares SEM (PLS-SEM). CB-SEM is mostly used in the context of accepting or rejecting hypotheses, which serves as an indicator to confirm or reject theories; in practical terms, this approach investigates how closely a proposed theoretical model is able to reproduce the covariance matrix of the considered dataset. For this thesis, PLS-SEM was used. In the following, it is explained why PLS-SEM is more appropriate than CB-SEM for this thesis. PLS-SEM should be chosen if the objective of the research is exploratory research for theory development (Hair Jr. et al., 2021). Since the objective of this thesis is to identify acceptance conditions, the PLS-SEM approach fits the research objective.
According to Jöreskog and Wold (1982), PLS-SEM is a "causal predictive" approach that aims at explaining the variance of the dependent variable. A partial least squares (PLS) path model consists of two essential elements. One element is the inner model, the other is the outer model. The inner model, referred to as the structural model, links the constructs together. The outer model, referred to as the measurement model, shows the relationships between the constructs and their indicator variables (depicted as rectangles). Figure 5 illustrates the inner and outer model. Another benefit of PLS-SEM is its high efficiency in parameter estimation and its flexibility in terms of modeling properties. According to Hair Jr., Matthews, Matthews, and Sarstedt (2017), PLS-SEM is a prediction-oriented approach and is mostly used in exploratory research. PLS-SEM maximizes the amount of explained variance of the endogenous constructs in a path model and thereby provides a better understanding of the underlying causes and predictions (Shmueli et al., 2019).
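To make the distinction between the outer (measurement) and inner (structural) model concrete, the following minimal R sketch specifies a toy path model with the seminr package that is also used later in this thesis. The construct and item names are purely illustrative assumptions, not part of the thesis's model.

# Minimal sketch of a PLS path model in seminr (toy constructs A and B;
# item names a_1..a_3 and b_1..b_3 are assumptions for illustration only).
library(seminr)

# Outer (measurement) model: constructs and their indicator variables.
measurement_model <- constructs(
  composite("A", multi_items("a_", 1:3)),
  composite("B", multi_items("b_", 1:3))
)

# Inner (structural) model: hypothesized path between the constructs.
structural_model <- relationships(
  paths(from = "A", to = "B")
)

# Estimation would require a data frame 'survey_data' containing the items:
# pls_model <- estimate_pls(data = survey_data,
#                           measurement_model = measurement_model,
#                           structural_model  = structural_model)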
In addition, PLS-SEM makes it possible to include control variables to account for variation in the target construct. Furthermore, PLS-SEM allows the assessment not only of reflective but also of formative measurement models, along with single-item constructs, without identification problems. Regarding single items, it can be said that they have the advantage of being uncomplicated in terms of scales and of resulting in higher response rates, as the questions are easily understood and answered (Fuchs & Diamantopoulos, 2009; Sarstedt & Wilczynski, 2009). Hair Jr. et al. (2021) further point out that a global single item is sufficient and captures the essence of the construct, especially in the context of executing a redundancy analysis.
Building on the preceding section, the path model for PLS-SEM is now presented. This thesis aims to answer the following research question: which conditions lead to an acceptance of algorithmic decision support in management? In the theoretical foundation, acceptance conditions for algorithmic decision support were derived and formulated as hypotheses. The state of the art shows that the degree of anthropomorphizing an AI system may lead to acceptance. Therefore, a new model (figure 6) is derived from the previously formulated hypotheses.
Figure 5: A simple path model (Source: Hair Jr. et al. (2021))
Based on the hypotheses, the path model is tested for validity using PLS-SEM. The hypothesis tests are executed in a two-step process according to Hair Jr. et al. (2021): first, the reliability and validity of the measurement model are confirmed, and then the structural model is tested for validity.
In order to create a measurement model, a survey is conducted in which the degree of anthropomorphizing of the system interface is manipulated. The items from the survey are used to create a reflective measurement model, and the variables in the path model are measured as latent constructs in this reflective measurement model. The constructs are defined in table 1, and the hypotheses are summarized in table 2.
In an experimental setting, these hypotheses are tested for validity. The interaction with a system is simulated by a vignette study; a sketch of how the hypothesized paths could be specified is shown below.
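As an illustration only, the following R sketch shows one way the hypothesized relationships from table 2 could be expressed as a seminr path model using the construct abbreviations from table 1; the item assignments follow the operationalization in table 3, while the item set actually retained in the thesis was determined only after the indicator analysis in section 4.

# Hedged sketch of the hypothesized path model (H2-H8). H1 concerns the
# manipulation of anthropomorphizing between the two interfaces and is
# therefore not modeled as a path here; this mapping is an assumption.
library(seminr)

measurement_model <- constructs(
  composite("TRAN",  multi_items("tran_", 1:3)),
  composite("CPOW",  single_item("part_1")),
  composite("INT",   single_item("int_1")),
  composite("TEC",   multi_items("tec_", c(1, 3))),
  composite("PER",   multi_items("per_", 1:3)),
  composite("TRU",   multi_items("tru_", 1:3)),
  composite("AIACC", multi_items("aiacc_", 1:3))
)

structural_model <- relationships(
  paths(from = "TRAN", to = c("TRU", "CPOW")),          # H3, H4
  paths(from = c("CPOW", "INT"), to = "TRU"),           # H5, H6
  paths(from = c("TRU", "TEC", "PER"), to = "AIACC")    # H2, H7, H8
)

# One estimation per interface group (data frames are placeholders):
# pls_lisa  <- estimate_pls(data_lisa,  measurement_model, structural_model)
# pls_maria <- estimate_pls(data_maria, measurement_model, structural_model)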
3.3. Design and parameters of the survey
A vignette study is conducted in German, in which a scenario is described. In this scenario, the survey participant takes the role of a manager in a dynamic market environment who has to make a hot decision in the sense of Janis and Mann (1977). A decision support system aids in the decision-making and is introduced in the scenario as having high prediction capabilities. The decision support system is implemented as an interface, and the survey participants simulate an interaction with it. Through this interaction, the survey participants are involved in the problem-finding process. The system then suggests a solution to the problem without further explanation. The survey participants have to choose whether to accept the suggestion or to reject it and make their own decision. The structure of the survey is shown in figure 7.
There are two interfaces with different degrees of anthropomorphizing features. The interface with low anthropomorphizing features is created by embedding HTML and JavaScript code into the survey tool; this interface is named Lisa and is shown in figure 8. The interface with high anthropomorphizing features is implemented as an interactive video and named Maria, shown in figure 9.
The degree of anthropomorphizing features is relatively high due to the use of professional tools. In order to create the interface Maria, an AI actor is created with the tool Colossyan (Colossyan, 2022) and saved as a video. Furthermore, the tool Tolstoy (Tolstoy, 2022) is used to make the interface more interactive: the videos created with Colossyan (2022) are sequenced through various conditions, leading to a high degree of anthropomorphizing features. The degree of anthropomorphizing could have been maximized further through voice input. Although voice input was replaced by textual and button components for the user's interaction, voice output could be implemented in the interface. Furthermore, the AI actress uses gestures while speaking.
The participants are randomly assigned to one interface. After the interaction with the interface, the participant has to make a decision: they can approve the decision suggested by the system, or reject the suggestion and choose an alternative decision.
Figure 6: Path model for hypotheses testing (Source: Own illustration)
Table 1: Construct definition
Constructs Description
Perception as technology (TEC): The extent to which the system-user perceives the anthropomorphized system as technology. A low value (1) indicates that the system is not perceived as a technology.
Perception as person (PER): The extent to which the system-user perceives the anthropomorphized system as a person. A low value (1) indicates that the system is not perceived as a person.
Trust (TRU): The extent to which the user has trust towards the system. A low value (1) indicates that the user does not trust the system.
Transparency (TRAN): The perceived comprehensibility of the system results. A low value (1) indicates that the results processed by the system were not perceived as transparent in terms of comprehensibility of the decision-making.
Computer Power (CPOW): The perceived control of the system within the decision-making process. A low value (1) indicates that the system-user perceives his own participation in the decision-making process as high; a high value (5) indicates that the system-user perceives the participation of the system in the decision-making process as high.
Intelligence (INT): The perceived intelligence of the system. A low value (1) indicates that the user perceives the system as not intelligent.
Acceptance (AIACC): The willingness to voluntarily approve the system. A low value (1) indicates that the user is not willing to use the presented system in the future.
Decision (ACC): The final decision after receiving aid from the system. A value of 0 indicates that the user rejected the suggested decision from the system; a value of 1 indicates that the user accepted the suggested decision from the system.
Table 2: Formulation of hypotheses
Construct No. Hypothesis
Acceptance (ACC; AIACC) H1: An anthropomorphizing of a system leads to an acceptance of the system.
Acceptance (ACC; AIACC) H2: A higher trust in a system leads to higher acceptance.
Trust (TRU) H3: A higher transparency/comprehensibility of a system has a positive effect on the trust towards the system.
Computer Power (CPOW) H4: A higher comprehensibility of a system has a positive effect on the perceived participation of the user in the decision-making process.
Trust (TRU) H5: The higher the perceived participation of the system-user in the decision-making process is, the higher the perceived trust towards the system.
Trust (TRU) H6: The higher the perceived intelligence of the system is, the higher the trust.
Acceptance (ACC; AIACC) H7: The higher the perception of the system as technology is, the higher the acceptance.
Acceptance (ACC; AIACC) H8: The higher the perception of the system as person is, the higher the acceptance.
Figure 7: Structure of the survey (Source: Own illustration)
Figure 8: System interface of Lisa (Source: Own illustration)
Furthermore, the participants are surveyed about their experience while interacting with the system. At the end, demographic data were collected.
The survey was created with the survey tool Unipark. This tool saves cookies on the devices of the participants and prevents multiple participations in the survey by the same user. The survey was online from 25.07.2022 to 07.08.2022 and was distributed via various channels. No specific target group was defined; instead, the addressed groups varied across the distribution channels.
Figure 9: System interface of Maria (Source: Own illustration)
The survey was shared on social networks such as LinkedIn, Instagram, and WhatsApp. The target group on LinkedIn was specified as managers or people in leadership positions. The LinkedIn post sharing the survey had 6824 impressions (as of 07.08.2022), meaning the call for participation reached 6824 people. Furthermore, students and researchers were targeted through the distribution of flyers; 1000 flyers were printed. People at the university were asked to participate in the survey while being handed a flyer. Since a flyer could be used multiple times, a minimum of 1000 students or researchers could be approached by flyers. The survey was also shared multiple times on several WhatsApp and Instagram accounts with daily views of approximately 200 people, leading to a distribution to approximately 3000 people as a non-specifiable target group. In total, the sharing activities led to a distribution of the survey to about 11000 people, some of whom may have been reached multiple times.
746 people clicked on the survey; 253 of them canceled their participation or did not give their consent, leaving 493 people who started the survey. 212 people canceled their participation after starting, leaving 281 people who fully completed the survey. Due to these cancellations, an equal distribution of the interfaces among the participants could not be guaranteed.
3.4. Assessment of measurement model
The latent constructs were measured with the previously described survey. The operationalization of the constructs was derived from the study of the KIAM model by Scheuer (2020). Since Scheuer considers various items from the TAM by Venkatesh and Bala (2008) in his research, the TAM items are also used for this study. The items are operationalized in German because the target group of the study consists of German-speaking students and employees. All operationalized items for the interface Lisa are shown in table 3. The items for the interface Maria are mostly identical to those for Lisa and differ only where an item contains the name of the interface. As in the study of Scheuer (2020), the items are measured on a 5-point Likert scale. Scheuer argues that this scale minimizes the time required from survey participants and delivers a higher precision than a 7-point Likert scale because responses on perception are more intuitive.
On the 5-point Likert scale, full rejection of the statement is coded as one, and the code increases by one for each step towards affirmation, with maximum affirmation of the statement coded as five.
The construct of acceptance was additionally measured with a construct defined as ACC, whose variable is named "acc" and is measured on a binary scale. The binary scale assesses acceptance through the interaction with the system: a zero reflects rejection of the suggested decision and a one reflects affirmation of the suggested decision.
The construct of acceptance could also have been measured as a higher-order construct, with the constructs AIACC and ACC as reflective measures of that higher-order construct. This study does not combine the constructs AIACC and ACC into a single construct, because the SEM is estimated separately for each acceptance construct, which allows the acceptance conditions to be attributed more precisely. Combining both constructs into one higher-order construct might distort the results due to their different scaling. Therefore, two SEMs are estimated, where the construct ACC is used to validate the results from the estimation with the construct AIACC.
As shown previously, Scheuer (2020) states that the perception of a system as a technology or as a person determines how acceptance is created. Therefore, the measurements for the two interfaces are separated into individual measurement models.
Table 3: Operationalization of items
Construct Item Question Reference
INT int_1 Das System wirkt intelligent Das System wirkt intelligent”
(Scheuer,2020)
TEC tec_1 “Ich habe das System als Technologie
wahrgenommen”
Ich habe das System als Technologie
wahrgenommen“ (Scheuer,2020)
TEC tec_3 Ich habe Lisa als Technologie
wahrgenommen
Ich habe das System als Technologie
wahrgenommen“ (Scheuer,2020)
PER per_1 Ich habe das System als Persön-
lichkeit wahrgenommen“
Ich habe das System als Persön-
lichkeit wahrgenommen“ (Scheuer,
2020)
PER per_2 Ich habe Lisa als Persönlichkeit
wahrgenommen“
Ich habe das System als Persön-
lichkeit wahrgenommen“ (Scheuer,
2020)
PER per_3 Ich habe in Lisa Menschlichkeit
wahrgenommen“
-
CPOW part_1 „Ich habe keine Kontrolle über die
Nutzung des Systems“
Ich habe Kontrolle über meine
Nutzung des Systems”–TAM
(Scheuer,2020)
TRAN tran_1 “Die Entscheidung des Systems ist
transparent”
-
TRAN tran_2 “Ich kann die Entscheidung des Sys-
tems nachvollziehen“
-
TRAN tran_3 “Für mich ist hinreichend transpar-
ent, wie das System funktioniert“
“Für mich ist hinreichend trans-
parent, wie das System funktion-
iert“(Scheuer,2020)
TRU tru_1 Ich vertraue dem System “Ich vertraue dem System” (Scheuer,
2020)
TRU tru_2 Das System wirkt vertrauensvoll -
TRU tru_3 „Ich vertraue auf die Ergebnisse des
Systems“
“Ich vertraue auf die Ergebnisse
des Systems” (Scheuer,2020)
AIACC aiacc_1 „Die zuvor vorgestellte künstlichen
Intelligenz würde ich aktiv verwen-
den, wenn ich Zugriff auf dieses Sys-
tem habe und die Rahmenbedingun-
gen gegeben sind“
“Eine künstliche Intelligenz wie
dieses System würde ich aktiv ver-
wenden, wenn ich Zugriff auf diese
habe und die Rahmenbedingungen
gegeben sind” (Scheuer,2020)
AIACC aiacc_2 „Angenommen ich hätte Zugriff auf
das System, würde ich es nutzen
wollen“
Angenommen ich hätte Zugriff auf
das System, würde ich es nutzen
wollen”-TAM (Scheuer,2020)
AIACC aiacc_3 „Ich würde das System freiwillig
nutzen, wenn die Rahmenbedingun-
gen gegeben wären“
Ich würde das System, wenn
die Rahmenbedingungen gegeben
wären, freiwillig nutzen”-TAM
(Scheuer,2020)
ACC acc Decision of the user -
Separating the measurement models increases the attributability of the estimation to the respective interface; however, it also leads to major challenges regarding the minimum required sample size.
For path coefficients of at least 0.11, a minimum sample size of 113 is required to detect significant path coefficients at a 10% significance level, and a minimum sample size of 155 is required at a 5% significance level (Hair Jr. et al., 2021). Since the measurement model of Maria has a sample size of 127 and the measurement model of Lisa has a sample size of 154, the requirements for significant path coefficients at the 10% significance level are fulfilled. The minimum sample size required for path coefficients of at least 0.21 is 112 at a 1% significance level (Hair Jr. et al., 2021). Both measurement models exceed the minimum sample size required for significant path coefficients of at least 0.21.
In the following, the sample of the survey is described.
4. Analysis and findings of the survey
In the next section, the descriptive statistics of both measurement models are presented to describe the underlying sample. Furthermore, the data from the survey are analyzed following Hair Jr. et al. (2021). First, the quality indicators of both measurement models are assessed; then the quality indicators of the structural models are evaluated. At the end of the section, the results of the analysis are discussed.
4.1. Descriptive statistics: first findings
The sample size is 281, of which eight people did not respond to the sociodemographic questions. The average age of the participants is 25.66 years, with the youngest participant being 18 years old and the oldest 66 years old. The distribution of ages is shown in the appendix. 57.14% of the participants were male and 41.03% were female; 1.47% of respondents did not specify their gender and 0.37% classified their gender as diverse.
More than 50% of the survey participants were students. The second largest group is classified as managers. Further job descriptions of the survey participants are shown in figure 10.
Furthermore, 23.13% of survey participants stated that they had already gathered more than two years of management or leadership experience. The Standard Industrial Classification (SIC) was used to determine the industry classification of the participants' companies. The majority of industries could not be specified; 62 respondents classified their industry as "Services", while the industries "Agriculture, Forestry, Fishing" and "Mining" were not present in the sample. Furthermore, the survey participants were asked to classify their organization. 32.6% of participants responded that the classification is not specifiable and 17.95% stated that they are working in public institutions. Further classifications of the organizations are shown in figure 11.
Since Panagiotarou et al. (2020) show the relevance of personal innovativeness for acceptance, it is necessary to gain insights into the technical affinity and personal innovativeness of the survey participants. First, 91.21% of survey participants stated that they have the necessary capabilities for handling office software. Furthermore, 68.5% of participants responded that they have no programming experience, 15.02% stated that they have recently gathered programming experience (less than years), and 16.48% reported more than years of programming experience. Since programming capabilities presuppose a high level of technical affinity, 31.5% of survey participants can be regarded as the minimum share of participants with a high level of technical affinity. People who spend their free time gaming tend to keep themselves informed about the latest hardware; therefore, the survey measured technical affinity by asking participants whether they like to spend their free time gaming, which 40.66% of participants answered with "Yes". Besides gaming experience, experience with VR technology is relevant for assessing personal innovativeness: since Sagnier et al. (2020) argue that the use of new technologies such as VR is related to personal innovativeness, the participants were asked whether they had used VR technology before, which 51.28% answered with "Yes". Overall, there is a high affinity for technology and a high personal innovativeness among the survey participants. Experience in handling office software may not be a suitable measure of technical affinity, because such capabilities are relevant in most jobs and are often expected as general knowledge in practice; the high share of participants with office capabilities rather indicates the representativeness of the sample. Furthermore, 31.5% to 51.28% of survey participants can be classified as participants with a higher level of technical affinity, leading to an overall high technical affinity of the sample.
4.1.1. Descriptive statistics: measurement model of Lisa
The descriptive statistics of the measurement model of Lisa are summarized in table 4.
A uniformly distributed five-point Likert scale has a variance of 2.00³. Against this benchmark, the measurement model of Lisa shows a moderate to high variance within the variables, while the acceptance variables show a low to moderate variance. As anticipated, the interface Lisa is perceived more as a technology (mean = 3.9870) than as a person (mean = 2.4156).
Furthermore, the correlation matrix of the constructs (table 5) shows that CPOW is negatively correlated with the other constructs, as expected. The perception of the system as technology is negatively correlated with the perception of the system as a person. These negative correlations support the validity of the measurement.
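To illustrate how such a construct-level correlation matrix can be computed, the following R sketch averages the items of each construct into simple construct scores and correlates them. The column names follow table 4; whether the thesis used item means or the latent variable scores from the PLS estimation is an assumption here.

# Hedged sketch: construct correlations from item means (the thesis may
# instead have used latent variable scores from the PLS-SEM estimation).
# 'lisa' is assumed to be a data frame containing the item columns of table 4.
items <- list(
  TEC   = c("l_tec_1", "l_tec_3"),
  PER   = c("l_per_1", "l_per_2", "l_per_3"),
  TRAN  = c("l_tran_1", "l_tran_2", "l_tran_3"),
  INT   = "l_int_1",
  CPOW  = "l_part_1",
  TRU   = c("l_tru_1", "l_tru_2", "l_tru_3"),
  AIACC = c("l_aiacc_1", "l_aiacc_2", "l_aiacc_3")
)

# Average the items of each construct into one score per respondent.
scores <- sapply(items, function(cols) rowMeans(lisa[, cols, drop = FALSE]))

# Correlation matrix of the construct scores, rounded as in table 5.
round(cor(scores), 3)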
4.1.2. Descriptive statistics: measurement model of Maria
The descriptive statistics of the measurement model of Maria are shown in table 6. As noted above, a uniformly distributed five-point Likert scale has a variance of 2.00; against this benchmark, the measurement model of Maria shows a moderate to high variance within the variables. Similar to the measurement model of Lisa, the acceptance variables show a low to moderate variance.
Furthermore, the interface Maria is perceived more as a technology (mean = 4.1417) than as a person (mean = 2.5669). Since Maria is an anthropomorphized interface, its perception as a technology would have been expected to be lower than that of the non-anthropomorphized interface Lisa.
³ High dispersion on a five-point Likert scale is modeled here by assuming that responses are distributed equally across the scale points; for simplicity, one observation at each point of the scale is considered. The mean of the scale then equals the median and is three. Using this mean in the variance calculation, the sum of the squared differences between the observations and the mean equals 10; dividing 10 by the number of observations yields a variance of 2.
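Written out, the arithmetic behind this benchmark value of 2 (assuming one response at each scale point) is:

\bar{x} = \frac{1+2+3+4+5}{5} = 3, \qquad
\sigma^{2} = \frac{(1-3)^{2}+(2-3)^{2}+(3-3)^{2}+(4-3)^{2}+(5-3)^{2}}{5} = \frac{10}{5} = 2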
Figure 10: Job description of survey participants (Source: Own illustration)
Figure 11: Classification of job institutions (Source: Own illustration)
In fact, the system Lisa has a lower mean value for the perception as technology than the system Maria, which may indicate that the anthropomorphized system has failed the Turing test (Turing, 1950). Furthermore, the mean values of the acceptance parameters for the system Lisa are slightly higher than for the system Maria. The correlation matrix in table 7 shows that CPOW is no longer negatively correlated with all other constructs, which was not expected. Similar to the measurement model of Lisa, the correlation between the perception of the system as technology and the perception of the system as a person is negative, which supports the validity of the measurement.
4.2. Quality indicators for measurement and structural
model
Hair Jr. et al. (2021) state that the first step in the assessment of measurement models is the examination of indicator reliability. Hair Jr., Risher, Sarstedt, and Ringle (2019) recommend indicator loadings above 0.708 as reliable; indicators below this threshold should be considered for removal, and indicator loadings below 0.4 should always be eliminated from the measurement model (Hair Jr. et al., 2021).
Therefore, a factor analysis was conducted in R using the package "seminr" (Hair Jr. et al., 2021). The results of the factor analysis are shown in the appendix. The measurement model was adjusted by deleting indicators with low indicator loadings.
Table 4: Descriptive summary of the measurement model of Lisa
Descriptive statistics
n min q25 mean median q75 max sd var
l_per_1 154 1 2 2.4091 2 3 5 1.1411 1.3021
l_per_2 154 1 1 2.3312 2 3 5 1.1719 1.3733
l_per_3 154 1 2 2.4156 2 3 5 1.0646 1.1334
l_int_1 154 1 3 3.3766 4 4 5 1.1265 1.2690
l_tran_1 154 1 1 2.0974 2 3 5 1.1130 1.2388
l_tran_2 154 1 2 2.6299 3 4 5 1.1713 1.3719
l_tran_3 154 1 1 2.2338 2 3 5 1.2250 1.5006
l_part_1 154 1 2 2.9481 3 4 5 1.1013 1.2130
l_tru_1 154 1 2 2.9870 3 4 5 1.0846 1.1763
l_tru_2 154 1 2 3.1039 3 4 5 1.0427 1.0872
l_tru_3 154 1 2 3.1494 3 4 5 1.0833 1.1736
l_tec_1 154 1 4 3.9870 4 5 5 0.9768 0.9541
l_tec_3 154 1 3 3.8571 4 5 5 1.0125 1.0252
l_aiacc_1 154 1 3 3.5974 4 4 5 0.9533 0.9088
l_aiacc_2 154 1 3 3.5649 4 4 5 0.9283 0.8618
l_aiacc_3 154 1 3 3.5779 4 4 5 0.9754 0.9514
l_acc 154 0 1 0.7987 1 1 1 0.4023 0.1618
Table 5: Correlation matrix of Lisa
Correlations of constructs of Lisa
TEC PER TRAN INT CPOW TRU AIACC
TEC 1 -0.312 0.109 0.064 -0.095 0.056 0.177
PER -0.312 1 0.347 0.469 -0.169 0.381 0.258
TRAN 0.109 0.347 1 0.557 -0.274 0.536 0.268
INT 0.064 0.469 0.557 1 -0.353 0.581 0.414
CPOW -0.095 -0.169 -0.274 -0.353 1 -0.492 -0.385
TRU 0.056 0.381 0.536 0.581 -0.492 1 0.514
AIACC 0.177 0.258 0.268 0.414 -0.385 0.514 1
The final stage of the factor analysis is shown in table 8. Indicator loadings from the initial measurement model are shown in the appendix. Due to low loadings, the items "tran_4" and "part_2" had to be removed from the measurement model. Further measurement errors in construct validity were identified. Therefore "tec_2", "int_2", "int_3", "part_4" and "part_5" were eliminated from the measurement model.
After removing the previously stated indicators, the measurement models were tested for final indicator loadings. The item "part_3" had an indicator loading of 0.589 for the measurement model of Maria. In such cases Hair Jr. et al. (2021) suggest examining internal consistency. The necessary threshold for internal consistency was not reached in either measurement model. Deleting an indicator should be considered when its removal leads to an increase in reliability (Hair Jr. et al., 2021). Therefore the item "part_3" was removed from the measurement model.
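As an illustration, the final measurement model of the Lisa group (after the removals described above) can be specified with "seminr" roughly as follows; this is a minimal sketch, the data frame name lisa_data is an assumption, and the item names follow the survey codebook used in the tables:

library(seminr)

# Reflective (composite) measurement model for the Lisa group
lisa_measurement <- constructs(
  composite("TEC",   multi_items("l_tec_",   c(1, 3))),
  composite("PER",   multi_items("l_per_",   1:3)),
  composite("TRAN",  multi_items("l_tran_",  1:3)),
  composite("INT",   single_item("l_int_1")),
  composite("CPOW",  single_item("l_part_1")),
  composite("TRU",   multi_items("l_tru_",   1:3)),
  composite("AIACC", multi_items("l_aiacc_", 1:3))
)

The measurement model of Maria is specified analogously with the m_-prefixed items.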
The second step in evaluating reflective measurement models according to Hair Jr. et al. (2021) is the examination of internal consistency reliability. Hair Jr. et al. state that Cronbach's alpha is a very conservative reliability measure, while composite reliability (rhoC) may be too liberal a measure. Therefore they suggest using the reliability measure rhoA. The reliability summary is shown in table 9.
The results from the reliability summary show high reliability of the measurement models. However, the rhoA-value for the construct "TEC" in the measurement model of Maria is higher than 1, which may imply measurement errors. The item correlations (Appendix 12) for the measurement model of Maria show no anomalies, since the two remaining items correlate at 0.716. The composite reliability values meet or exceed the range of 0.7 to 0.9 suggested by Hair Jr. et al. (2021). They state that values above 0.9, and especially above 0.95, imply a redundancy of indicators. As stated by Hair Jr. et al. (2021), composite reliability may be too liberal a measure of internal consistency.
Table 6: Descriptive summary of measurement model of Maria
Descriptive statistics
n min q25 mean median q75 max sd var
m_per_1 127 1 2 2.4961 2 3 5 1.0902 1.1885
m_per_2 127 1 2 2.5118 2 3 5 1.1117 1.2360
m_per_3 127 1 2 2.5669 3 3 5 1.1026 1.2157
m_int_1 127 1 3 3.4720 4 4 5 0.9048 0.8187
m_tran_1 127 1 1 2.3150 2 3 5 1.1866 1.4079
m_tran_2 127 1 2 2.8413 3 4 5 1.1298 1.2764
m_tran_3 127 1 1 2.2598 2 3 5 1.1071 1.2256
m_part_1 127 1 2 2.8898 3 4 5 1.0633 1.1306
m_tru_1 127 1 2.5 2.9921 3 4 5 0.9128 0.8333
m_tru_2 127 1 3 3.1969 3 4 5 0.9001 0.8101
m_tru_3 127 1 3 3.0709 3 4 5 0.9101 0.8283
m_tec_1 127 1 4 4.1417 4 5 5 0.8704 0.7575
m_tec_3 127 2 3.5 4 4 5 5 0.8909 0.7937
m_aiacc_1 127 1 3 3.3780 4 4 5 0.9994 0.9989
m_aiacc_2 127 1 3 3.4016 4 4 5 1.0333 1.0676
m_aiacc_3 127 1 3 3.4488 4 4 5 1.0213 1.0430
m_acc 127 0 1 0.7953 1 1 1 0.4051 0.1641
Table 7: Correlation matrix of Maria
Correlations of constructs of Maria
TEC PER TRAN INT CPOW TRU AIACC
TEC 1 -0.386 -0.154 -0.049 0.028 -0.077 -0.128
PER -0.386 1 0.311 0.373 -0.060 0.349 0.418
TRAN -0.154 0.311 1 0.424 -0.185 0.428 0.276
INT -0.049 0.373 0.424 1 -0.102 0.591 0.531
CPOW 0.028 -0.060 -0.185 -0.102 1 0.045 -0.057
TRU -0.077 0.349 0.428 0.591 0.045 1 0.657
AIACC -0.128 0.418 0.276 0.531 -0.057 0.657 1
The results for Cronbach's alpha, which is a conservative reliability measure, show that the constructs "TEC", "PER" and "TRAN" can be considered "good" in terms of reliability. Furthermore, the constructs "TRU" and "AIACC" slightly exceed the threshold of 0.9. The constructs "INT" and "CPOW" have an alpha-value of 1.0 because they are single-item constructs.
The third step in the assessment of reflective measurement models according to Hair Jr. et al. (2021) is convergent validity. They suggest examining the average variance extracted (AVE) and state that the AVE should exceed the value of 0.5. The results from the examination of convergent validity are shown in table 9. All constructs exceed the threshold suggested by Hair Jr. et al. (2021).
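For reference, the AVE of a construct is the mean of the squared standardized indicator loadings, AVE = (λ1² + … + λk²) / k for k indicators; a value above 0.5 therefore means that the construct explains more than half of the variance of its indicators.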
The fourth step of Hair Jr. et al. (2021) in evaluating reflective measurement models is the assessment of discriminant validity. They suggest avoiding the Fornell-Larcker criterion by Fornell and Larcker (1981) due to the inability of the criterion to identify discriminant validity issues. Instead, Hair Jr. et al. recommend examining the heterotrait-monotrait ratio of correlations (HTMT) (Henseler, Ringle, & Sarstedt, 2015). The results of the examination of discriminant validity are shown in table 10.
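As defined by Henseler et al. (2015), the HTMT of two constructs is the mean of the correlations between the indicators of different constructs (heterotrait-heteromethod correlations) divided by the geometric mean of the average correlations among indicators of the same construct (monotrait-heteromethod correlations); values approaching 1 indicate a lack of discriminant validity.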
Hair Jr. et al. (2021) suggest that the values for HTMT should be significantly lower than the threshold of 0.85. The HTMT values shown in table 10 are below the suggested threshold. Furthermore, a significance test is conducted in which the structural equation model is bootstrapped with 10,000 samples to generate standard errors and confidence intervals. The significance test shows that the upper bound of the 95% confidence interval is lower than the suggested threshold of 0.85 (Hair Jr. et al., 2021). The bootstrapped HTMT values are shown in the appendix. All bootstrapped values are significantly lower than the suggested threshold, indicating discriminant validity.
4.3. Analyzing acceptance conditions and robustness checks of the study
Since the previous tests show that the measurement models fulfill the reliability and validity requirements, the structural model can be evaluated in order to test the hypotheses.
Table 8: Loadings summary for Lisa and Maria
Loadings summary of Lisa
TEC PER TRAN INT CPOW TRU AIACC
l_per_1 0 0.929 0 0 0 0 0
l_per_2 0 0.908 0 0 0 0 0
l_per_3 0 0.870 0 0 0 0 0
l_int_1 0 0 0 1 0 0 0
l_tran_1 0 0 0.890 0 0 0 0
l_tran_2 0 0 0.881 0 0 0 0
l_tran_3 0 0 0.880 0 0 0 0
l_part_1 0 0 0 0 1 0 0
l_tru_1 0 0 0 0 0 0.946 0
l_tru_2 0 0 0 0 0 0.883 0
l_tru_3 0 0 0 0 0 0.929 0
l_tec_1 0.946 0 0 0 0 0 0
l_tec_3 0.954 0 0 0 0 0 0
l_aiacc_1 0 0 0 0 0 0 0.919
l_aiacc_2 0 0 0 0 0 0 0.908
l_aiacc_3 0 0 0 0 0 0 0.916
Loadings summary of Maria
TEC PER TRAN INT CPOW TRU AIACC
m_per_1 0 0.895 0 0 0 0 0
m_per_2 0 0.872 0 0 0 0 0
m_per_3 0 0.865 0 0 0 0 0
m_int_1 0 0 0 1 0 0 0
m_tran_1 0 0 0.902 0 0 0 0
m_tran_2 0 0 0.879 0 0 0 0
m_tran_3 0 0 0.918 0 0 0 0
m_part_1 0 0 0 0 1 0 0
m_tru_1 0 0 0 0 0 0.923 0
m_tru_2 0 0 0 0 0 0.888 0
m_tru_3 0 0 0 0 0 0.909 0
m_tec_1 0.851 0 0 0 0 0 0
m_tec_3 0.976 0 0 0 0 0 0
m_aiacc_1 0 0 0 0 0 0 0.934
m_aiacc_2 0 0 0 0 0 0 0.929
m_aiacc_3 0 0 0 0 0 0 0.954
Table 9: Internal consistency reliability and convergent validity
Summary of internal consistency reliability and convergent validity
Lisa alpha rhoC AVE rhoA Maria alpha rhoC AVE rhoA
TEC 0.892 0.949 0.902 0.896 TEC 0.835 0.912 0.838 1.349
PER 0.890 0.930 0.815 0.967 PER 0.850 0.909 0.770 0.854
TRAN 0.863 0.914 0.781 0.892 TRAN 0.883 0.927 0.810 0.888
INT 1 1 1 1 INT 1 1 1 1
CPOW 1 1 1 1 CPOW 1 1 1 1
TRU 0.908 0.943 0.846 0.911 TRU 0.892 0.933 0.822 0.897
AIACC 0.902 0.938 0.835 0.902 AIACC 0.933 0.957 0.882 0.935
Before assessing the structural model, the hypothesis that anthropomorphizing a system leads to acceptance of the system (H1) is tested.
Table 10: Summary of discriminant validity
HTMT table of Lisa
TEC PER TRAN INT CPOW TRU AIACC
TEC
PER 0.365
TRAN 0.125 0.392
INT 0.069 0.490 0.592
CPOW 0.101 0.176 0.278 0.353
TRU 0.065 0.419 0.593 0.611 0.516
AIACC 0.198 0.272 0.289 0.435 0.405 0.566
HTMT table of Maria
TEC PER TRAN INT CPOW TRU AIACC
TEC
PER 0.464
TRAN 0.183 0.366
INT 0.043 0.403 0.446
CPOW 0.025 0.067 0.191 0.102
TRU 0.100 0.394 0.476 0.619 0.047
AIACC 0.127 0.468 0.297 0.549 0.059 0.717
For H1 to be supported, the mean of the acceptance constructs in the measurement model of Maria has to be significantly higher than the mean of the acceptance constructs in the measurement model of Lisa. Therefore a T-test is conducted in R (R Core Team, 2013), shown in table 11.
The T-test shows that the mean of the anthropomorphized system is not significantly higher than the mean of the textual interface. Therefore the hypothesis that anthropomorphizing a system leads to acceptance of the system (H1) is not supported.
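A minimal sketch of this comparison for the AIACC construct in base R (the data frame names lisa_data and maria_data are assumptions; the item names correspond to the descriptive tables above):

# Mean acceptance (AIACC) score per respondent in each vignette group
lisa_aiacc  <- rowMeans(lisa_data[,  c("l_aiacc_1", "l_aiacc_2", "l_aiacc_3")])
maria_aiacc <- rowMeans(maria_data[, c("m_aiacc_1", "m_aiacc_2", "m_aiacc_3")])

# H1 expects the anthropomorphized interface (Maria) to score higher
t.test(maria_aiacc, lisa_aiacc, alternative = "greater")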
In fact, the mean value of the textual interface for the construct AIACC is higher than the mean value of the anthropomorphized system, although this difference in AIACC is not significant. Furthermore, the construct ACC has equal means for both observation groups. Moreover, the mean values of the other constructs do not differ significantly, leading to the assumption that anthropomorphizing the system has no significant effect on the acceptance measures. To examine this assumption further, the structural models are assessed in order to understand how acceptance is created. Similar findings in both structural models would support the assumption that there is no significant effect of anthropomorphizing the system on acceptance.
In order to evaluate the structural model, Hair Jr. et al. (2021) suggest first examining potential collinearity issues in the constructs. Therefore the structural model is tested for variance inflation factors. According to Becker, Ringle, Sarstedt, and Völckner (2015), variance inflation factors above the threshold of 3.0 indicate issues with collinearity. The results shown in table 12 indicate no potential collinearity issues.
In the second step, Hair Jr. et al. (2021) suggest examining the significance of the structural model. Before evaluating the significance of the structural model, it is important to outline the estimation approach.
As mentioned earlier, the dataset is divided into two measurement models. Furthermore, the construct AIACC is considered the main indicator for measuring acceptance. To confirm the results from the PLS-SEM estimation, the construct ACC is considered in a second structural model. The two models only differ in the path coefficients from the predictors of acceptance to acceptance. To reduce illustrative complexity, the construct ACC is added to the illustration of the SEM estimation in figure 12 and figure 13. It is important to note that both acceptance constructs were not estimated in a single SEM. All in all, two structural models with two measurement models were estimated in R, specifically with the "seminr" package (Hair Jr. et al., 2021), leading to four PLS-SEM estimations.
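A sketch of one of these estimations, assuming the path structure described by the hypotheses and shown in the figures (lisa_data and lisa_measurement are carried over from the measurement-model sketch above and remain assumptions):

# Structural model: TRAN -> CPOW; TRAN, INT, CPOW -> TRU; TRU, TEC, PER -> AIACC
lisa_structural <- relationships(
  paths(from = "TRAN",                   to = "CPOW"),
  paths(from = c("TRAN", "INT", "CPOW"), to = "TRU"),
  paths(from = c("TRU", "TEC", "PER"),   to = "AIACC")
)

pls_lisa <- estimate_pls(data = lisa_data,
                         measurement_model = lisa_measurement,
                         structural_model  = lisa_structural)

# 10,000 bootstrap resamples for standard errors and confidence intervals
boot_lisa <- bootstrap_model(seminr_model = pls_lisa, nboot = 10000)
summary(boot_lisa)   # bootstrapped path coefficients, t-statistics and confidence bounds

The second structural model replaces AIACC with the construct ACC, and both estimations are repeated for the Maria group.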
In order to examine the significance of the path coefficients, it is necessary to bootstrap standard errors for calculating confidence intervals (Hair Jr. et al., 2021). The summary of the bootstrapped paths is shown in the appendix. Figure 12 and figure 13 show the path coefficients after bootstrapping. Green paths indicate positive path coefficients, whereas red paths indicate negative path coefficients. The significance of paths is marked by stars: a p-value smaller than 0.01 is marked with three stars, a p-value between 0.01 and 0.05 is marked with two stars, and a p-value between 0.05 and 0.1 is marked with one star. The significance of the paths is used to assess the previously formulated hypotheses.
The path coefficients from Trust to Acceptance show a positive influence of trust on acceptance. This path is significant for both acceptance measures and in both measurement models. The SEM for Lisa shows a path coefficient of β = 0.502 (p < 0.001; 5% CI = 0.372; 95% CI = 0.625) for the construct Decision (ACC).
Table 11: T-test summary for anthropomorphizing interfaces
System PER INT CPOW TRAN TRU TEC AIACC ACC
Lisa 2.3852 3.3762 2.9480 2.3203 3.0800 3.3766 3.5800 0.7952
Maria 2.5249 3.4720 2.8897 2.4682 3.0866 3.4645 3.4094 0.7952
p-value 0.2407 0.4352 0.6531 0.2343 0.9518 0.4997 0.1223 1.0000
Table 12: VIF-values for structural model evaluation
Structural model of Lisa
AIACC TRU CPOW
TEC PER TRU TRAN INT CPOW TRAN
1.153 1.344 1.217 1.464 1.546 1.154 .
Structural model of Maria
AIACC TRU CPOW
TEC PER TRU TRAN INT CPOW TRAN
1.180 1.336 1.144 1.250 1.220 1.036 .
For the construct Acceptance (AIACC), the SEM for Lisa shows a coefficient of 0.445 (p < 0.001; 5% CI = 0.308; 95% CI = 0.571). Furthermore, the SEM for Maria shows a path coefficient of β = 0.223 (p = 0.008; 5% CI = 0.068; 95% CI = 0.371) for the construct Decision and 0.582 (p < 0.001; 5% CI = 0.472; 95% CI = 0.683) for the construct Acceptance (AIACC).
The path coefficients from Transparency to Trust show a positive influence of transparency on trust. This path is significant for both measurement models. The SEM for Lisa shows a path coefficient of β = 0.277 (p < 0.001; 5% CI = 0.155; 95% CI = 0.399). Furthermore, the SEM for Maria shows a path coefficient of β = 0.237 (p = 0.005; 5% CI = 0.084; 95% CI = 0.388).
The path coefficient from Transparency to Computer Power shows a negative influence of transparency on the perceived power of the system. This path is significant for both measurement models. The SEM for Lisa shows a path coefficient of β = -0.277 (p < 0.001; 5% CI = -0.396; 95% CI = -0.149). Furthermore, the SEM for Maria shows a path coefficient of β = -0.186 (p = 0.027; 5% CI = -0.336; 95% CI = -0.028).
The path coefficients from Computer Power to Trust show differing results for trust towards the system. This path is significant for both measurement models, but with opposite signs. The SEM for Lisa shows a path coefficient of β = -0.301 (p < 0.001; 5% CI = -0.395; 95% CI = -0.205). Furthermore, the SEM for Maria shows a path coefficient of β = 0.142 (p = 0.038; 5% CI = 0.011; 95% CI = 0.272).
The path coefficient from Intelligence to Trust shows a positive influence of Intelligence on Trust. This path is significant for both measurement models. The SEM for Lisa shows a path coefficient of β = 0.321 (p < 0.001; 5% CI = 0.172; 95% CI = 0.460). Furthermore, the SEM for Maria shows a path coefficient of β = 0.504 (p < 0.001; 5% CI = 0.374; 95% CI = 0.633).
The path coefficients from Perception as technology to Acceptance show a positive influence of the perception of the system as technology on acceptance. The results on path significance differ between the measurement models. The SEM for Lisa shows a path coefficient of β = 0.096 (p = 0.160; 5% CI = -0.004; 95% CI = 0.225) for the construct Decision (ACC) and 0.205 (p < 0.001; 5% CI = 0.073; 95% CI = 0.336) for the construct Acceptance (AIACC). Furthermore, the SEM for Maria shows a path coefficient of β = -0.084 (p = 0.17; 5% CI = -0.220; 95% CI = 0.057) for the construct Decision and 0.003 (p = 0.5; 5% CI = -0.145; 95% CI = 0.175) for the construct Acceptance (AIACC).
The path coefficients from Perception as person to Acceptance show a positive influence of the perception of the system as a person on acceptance. This path is significant for both measurement models, whereas the path to the acceptance measure Decision (ACC) is not significant. The SEM for Lisa shows a path coefficient of β = -0.069 (p = 0.160; 5% CI = -0.239; 95% CI = 0.225) for the construct Decision (ACC) and 0.159 (p = 0.025; 5% CI = 0.033; 95% CI = 0.283) for the construct Acceptance (AIACC). Furthermore, the SEM for Maria shows a path coefficient of β = -0.091 (p = 0.170; 5% CI = -0.035; 95% CI = 0.261) for the construct Decision and 0.214 (p = 0.003; 5% CI = 0.091; 95% CI = 0.339) for the construct Acceptance (AIACC).
After assessing the significance of the path coefficients, it is necessary to evaluate the explanatory power of the model (Hair Jr. et al., 2021). The measure R² describes how much of the variance of a construct is explained by the model. R²-values of 0.75 indicate substantial, values of 0.50 moderate and values of 0.25 low explanatory power (Hair Jr., Ringle, & Sarstedt, 2011). R² may increase merely with the number of explanatory variables. Therefore it is important to also consider the Adjusted R². The limitation of the Adjusted R² measure is that, although it accounts for the number of explanatory variables, it is not as precise an indicator as R² (Hair Jr. et al., 2021).
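For reference, the adjusted measure penalizes model complexity relative to the sample size: Adjusted R² = 1 − (1 − R²) · (n − 1) / (n − k − 1), where n is the number of observations and k the number of explanatory variables.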
Figure 12: SEM for Lisa after Bootstrapping (Source: Own illustration)
Figure 13: SEM for Maria after Bootstrapping (Source: Own illustration)

Table 13: Explanatory power of the model: R²
R-squared
Lisa AIACC TRU CPOW ACC Maria AIACC TRU CPOW ACC
R² 0.303 0.483 0.075 0.257 0.473 0.407 0.034 0.090
Adj. R² 0.289 0.473 0.069 0.242 0.460 0.393 0.027 0.068

The results shown in table 13 indicate a moderate to low
explanatory power of both models. Furthermore, the models of Lisa and Maria describe only 7.5% and 3.4%, respectively, of the variance of the construct Computer Power. This may indicate that Computer Power is influenced by unobserved variables which are not measured by the model. The effect size f² may be a further measure to evaluate the explanatory power of the model (Hair Jr. et al., 2021). Results of the f²-measure are shown in the appendix.
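The effect size quantifies the change in R² when a single predictor construct is omitted from the model: f² = (R²included − R²excluded) / (1 − R²included).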
Furthermore, the assessment of the predictive power of the model is the next step in the evaluation of the structural
model (Hair Jr. et al., 2021). Predictive power is defined as the ability of a model to predict new observations (Hair Jr. et al., 2021). Therefore the sample is divided into a holdout sample and multiple training samples. The model is estimated on the training samples and evaluated by predicting the holdout sample and comparing the predictions to the observed values (Hair Jr. et al., 2021). In order to perform cross-validation, this process is repeated for the number of subsamples, where the holdout sample becomes a training sample and another training sample becomes the holdout sample. The measures of root-mean-square error (RMSE) and mean absolute error (MAE) are calculated. Furthermore, prediction errors from a linear regression model are calculated for each indicator. The structural model needs to show lower prediction errors than the benchmark of prediction errors generated by the linear model.
In order to perform the prediction, the sample was divided into 10 subsamples. Furthermore, the calculation of prediction errors was repeated 10 times. The results generated by the "seminr" package in R (Hair Jr. et al., 2021) are shown in table 14.
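A sketch of this prediction step, assuming the predict_pls() interface of the seminr package (pls_lisa is carried over from the estimation sketch above):

# 10-fold cross-validated predictions, repeated 10 times
pred_lisa <- predict_pls(model = pls_lisa, technique = predict_DA,
                         noFolds = 10, reps = 10)
summary(pred_lisa)   # RMSE and MAE of the PLS model versus the linear-model benchmark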
The results show that six out of seven indicators of the model Lisa have lower prediction errors (RMSE) than the benchmark of the linear model. Furthermore, five out of seven indicators of the model Maria have lower prediction errors than the benchmark of the linear model. According to Hair Jr. et al. (2021), a majority of indicators below the linear-model benchmark implies medium predictive power.
The evaluation of the structural model showed no collinearity issues in the constructs, significant path coefficients, moderate explanatory power and medium predictive power for new observations.
4.4. Discussion of study results
After evaluating the structural model, the next section
will interpret the results. Therefore the previously formu-
lated hypotheses are evaluated for validity. Furthermore,
the findings of the study are reflected in prior findings in
research. Possible explanations for the findings are given,
derived from previous research.
4.4.1. Interpreting study results
The hypothesis that an anthropomorphizing of a system
leads to an acceptance (H1) is not supported by the study.
The descriptive statistics show higher mean values for the
perception of the system as technology for the anthropomor-
phized system than for the textual system. On the other
hand, the perception of the system as a person has higher
mean values of the anthropomorphized system than the tex-
tual system which may indicate appropriate system design in
terms of anthropomorphizing. Furthermore, the results from
the t-test indicate that there is no significant influence of an-
thropomorphizing the system on acceptance conditions. De-
spite these findings, this study shows significant differences
in acceptance conditions which may be referred to an influ-
ence of anthropomorphizing features of the system on the
acceptance. The results are shown in table 15.
Furthermore, the study provides empirical evidence that trust towards the system is the main indicator for creating acceptance by users, since the path coefficients from Trust to AI acceptance are the highest of all predictors of acceptance (0.445 for the textual and 0.582 for the anthropomorphized system). The hypothesis that higher trust in a system leads to higher acceptance (H2) is supported by this study. The anthropomorphized system has a higher path coefficient from Trust to AI acceptance, which may indicate that trust towards the system has a stronger influence on acceptance in more anthropomorphized systems.
The results show that trust towards the system is influenced by the transparency or comprehensibility of the system. This study shows a significant influence of Transparency on Trust with a path coefficient of 0.277 for the textual system and a path coefficient of 0.237 for the anthropomorphized system. Therefore the hypothesis that higher transparency of a system has a positive effect on the trust towards the system (H3) is supported. Furthermore, the total effects shown in the appendix imply that, for the textual interface, the mediator variables influenced by transparency increase the influence of the transparency of the system on trust towards the system, with a significant total effect of 0.362. The total effects statistic also shows that transparency is a significant predictor of acceptance with a path coefficient of 0.161. For the anthropomorphized system, the total effects statistic implies that the mediating variable influenced by transparency decreases the influence of transparency on trust; the path from Transparency to Trust nevertheless remains significant with a path coefficient of 0.123. Similar to the textual system, transparency is a significant predictor of acceptance with a path coefficient of 0.125 in the anthropomorphized system.
The tautological relation between Transparency and Computer Power could be supported empirically. This study shows a significant negative effect of Transparency on Computer Power with a path coefficient of -0.276 for the textual interface and a path coefficient of -0.186 for the anthropomorphized system. Therefore the hypothesis that a higher comprehensibility of a system has a positive effect on the perceived participation of the user in the decision-making process (H4) is supported. Since the construct Computer Power has low explanatory power, the suggested predictor may not be sufficient, and other, unobserved variables may be more suitable predictors. The anthropomorphized system has a lower path coefficient than the textual interface, which may indicate that anthropomorphizing features weaken the effect by which transparency lowers the perceived role of the system in decision-making (Computer Power).
Further findings of the study are that Computer Power has a significant effect on Trust, with a path coefficient of -0.301 for the textual system and a path coefficient of 0.142 for the anthropomorphized system. Therefore the hypothesis that the higher the perceived participation of the system user in the decision-making process, the higher the perceived trust towards the system (H5), is partially supported. These findings were not expected, since the literature showed that higher participation possibilities lead to an increase in trust.
Table 14: Summary of prediction errors
Prediction error measures for PLS of Lisa
l_aiacc_1 l_aiacc_2 l_aiacc_3 l_tru_1 l_tru_2 l_tru_3 l_part_1
RMSE 0.865 0.832 0.874 0.842 0.825 0.866 1.068
MAE 0.677 0.626 0.683 0.660 0.663 0.680 0.906
Prediction error measures for LM of Lisa
l_aiacc_1 l_aiacc_2 l_aiacc_3 l_tru_1 l_tru_2 l_tru_3 l_part_1
RMSE 0.913 0.863 0.930 0.847 0.856 0.885 1.027
MAE 0.696 0.632 0.720 0.651 0.657 0.709 0.866
Prediction error measures for PLS of Maria
m_aiacc_1 m_aiacc_2 m_aiacc_3 m_tru_1 m_tru_2 m_tru_3 m_part_1
RMSE 0.811 0.789 0.797 0.801 0.694 0.801 1.060
MAE 0.650 0.637 0.673 0.634 0.568 0.632 0.867
Prediction error measures for LM of Maria
m_aiacc_1 m_aiacc_2 m_aiacc_3 m_tru_1 m_tru_2 m_tru_3 m_part_1
RMSE 0.871 0.816 0.854 0.813 0.646 0.778 1.171
MAE 0.692 0.653 0.700 0.639 0.502 0.579 0.975
In line with the literature, the results suggest that a perception of higher participation of the system in decision-making decreases trust towards the system for the textual interface. For the anthropomorphized interface, the perception of higher participation of the system in the decision-making process increases the trust in the system. This effect may be explained by a lower perception of high participation of the system in decision-making, as shown in the comparison of means in the descriptive statistics. Furthermore, the total effects statistic shows that Computer Power is a significant predictor of acceptance with a path coefficient of 0.082 for anthropomorphized interfaces. On the other hand, the path coefficient of 0.135 is not significant for textual interfaces. These total effect statistics show that, for anthropomorphized interfaces, acceptance increases if the system has higher power in decision-making. On the other hand, there is no significant influence of the perceived power of the system in decision-making on acceptance for textual interfaces.
The results show that the perceived intelligence of the system has an influence on the trust in the system. This study shows a significant influence of Intelligence on Trust with a path coefficient of 0.321 for the textual interface and a path coefficient of 0.504 for the anthropomorphized interface. Therefore the hypothesis that the higher the perceived intelligence of the system, the higher the trust (H6), is supported. The path coefficients show that perceived intelligence plays a greater role in predicting trust for the anthropomorphized system than for the textual interface. Furthermore, the total effects statistic shows that Intelligence is a significant predictor of acceptance with a path coefficient of 0.142 for the textual interface and a path coefficient of 0.294 for the anthropomorphized interface. This total effect statistic shows that a higher perceived intelligence of the system should be considered in order to create acceptance among users.
The results show that the perception of the system as a technology has an influence on acceptance. This study shows a significant influence of the perception of the system as technology on the acceptance by users with a path coefficient of 0.205 for the textual interface. For the anthropomorphized interface, the path coefficient of 0.003 is not significant. Therefore the hypothesis that the higher the perception of the system as technology, the higher the acceptance (H7), is partially supported.
Furthermore, the study results show that the perception of the system as a person has an influence on the acceptance of users. A significant influence of the perception of the system as a person is identified, with a path coefficient of 0.159 for the textual interface and a path coefficient of 0.214 for the anthropomorphized system. Therefore the hypothesis that the higher the perception of the system as a person, the higher the acceptance (H8), is supported. The results may indicate that adding anthropomorphizing features to textual interfaces may not be necessary in order to create acceptance, since, for the textual interface, the path coefficient for the perception of the system as a technology is higher than the path coefficient for the perception of the system as a person.
4.4.2. Theoretical relevance of study results
Acceptance research at the non-managerial level showed that trust is a major condition for creating acceptance by users (Hastenteufel & Ganster, 2021; Rathje et al., 2021; Scheuer, 2020; Uysal et al., 2022). This study shows that these findings from the literature are applicable at the managerial level. Specifically, trust is identified as a major condition for creating acceptance by users. Furthermore, this study shows how trust is influenced.
Table 15: Results on acceptance conditions for algorithmic decision support
From INT CPOW TRAN TRAN PER TEC TRU CPOW TRAN INT
To TRU TRU CPOW TRU AIACC AIACC AIACC AIACC AIACC AIACC
Lisa
Path coefficients 0.321 -0.301 -0.276 0.277 0.159 0.205 0.445 -0.135 0.161 0.142
T-Statistics 3.678 -5.276 -3.606 3.691 1.973 2.490 5.574 -3.532 3.625 3.097
5% CI 0.172 -0.395 -0.396 0.155 0.033 0.073 0.308 -0.201 0.094 0.094
95% CI 0.460 -0.205 -0.149 0.399 0.283 0.336 0.571 -0.076 0.238 0.238
Significance
Maria
Path coefficients 0.504 0.142 -0.186 0.237 0.214 -0.003 0.582 0.082 0.123 0.294
T-Statistics 6.420 1.790 -1.985 2.607 2.834 -0.009 9.048 1.739 2.182 4.599
5% CI 0.374 0.011 -0.336 0.084 0.091 -0.145 0.472 0.006 0.028 0.193
95% CI 0.633 0.272 -0.028 0.388 0.339 0.175 0.683 0.161 0.217 0.402
Significance
Makarius et al. (2020) identified cognitive issues in strategic decision-making and the necessity for further research on how decision-makers trust the output received from AI systems; this study shows how that trust is influenced. Furthermore, Venkatesh and Davis (2000) show that perceived usefulness is an indicator of acceptance. The study results show that the perceived intelligence of the system is an acceptance condition for decision support. Furthermore, the results suggest that the system should be perceived as useful and exhibit a certain intelligence, confirming the research on the TAM by Venkatesh and Davis (2000). Although Scheuer (2020) found no influence of perceived intelligence on trust, this study showed empirical evidence that the perceived intelligence of the system has an influence on trust. Research on acceptance states that it is necessary to have a transparent system in terms of the comprehensibility of the decision process (Gersch et al., 2021; Meske et al., 2022; Scheuer, 2020; Venkatesh & Davis, 2000). On the other hand, Newman et al. (2020) state that an increase in transparency may lead to a decontextualization of workers. The results of this study show that the findings from acceptance research are applicable at the managerial level, specifically that the transparency of a result-generating process positively influences trust in the system. Transparency of the decision-making process of the system is identified as a necessary condition in order to create acceptance by users.
Further results of this study show evidence for the tautological relationship between the transparency of result processing and the perceived participation of the system in decision-making. Orlikowski and Robey (1991) assume that more information in the decision-making process leads to a higher power of the decision-maker. Considering the manager as a decision-maker, the study showed that more information for the manager increases their perceived power in decision-making, while the perceived power of the system in decision-making decreases. This effect can be explained by the perception of authoritative correctness of algorithms: precise algorithms may generate the perception of correctness, and human beings can therefore feel inferior to algorithms (Martini, 2019).
On the other hand, a higher perceived power of the system in the decision-making process may lead to an increase in trust towards the system if the system is anthropomorphized. Mcafee et al. (2012) questioned whether managers would accept a decision support system that may lead to a shift in their role in the form of decreased power. The results showed that a higher perceived power of the system in decision-making is an acceptance condition for anthropomorphized systems. These results are contradictory to prior research and to attribution theory by Kelley and Michela (1980).
Attribution theory states that individuals seek to understand the cause of their own behavior (Kelley & Michela, 1980). Since this study shows that a higher perceived power of the system in the decision-making process can be achieved by reducing the transparency of the system, it can be assumed that users may place increased trust in the system even if the system is not comprehensible. Furthermore, a low comprehensibility of a system may prevent users from understanding their own contribution to the success caused by the decision. Specifically, anthropomorphized systems cause blind trust. This assumption may be irrational in light of research findings on the necessity of explainable systems regarding result processing (Gersch et al., 2021; Meske et al., 2022; Scheuer, 2020; Venkatesh & Bala, 2008). This irrational assumption may be
explained by interpersonal acceptance. As mentioned before, insufficient knowledge and a lack of trust hinder the adoption of decision support systems (Rainsberger, 2021). Anthropomorphizing a system may build up a personal relationship by generating sympathy and affection towards the system, which results in interpersonal acceptance (Rohner & Khaleque, 2002). Furthermore, an anthropomorphized system may lead to a higher perception of the effectiveness of the system, causing automation bias. Anthropomorphized systems may be perceived as more effective, leading to the tendency to over-rely on decisions made by algorithms (Meske et al., 2022). Goddard, Roudsari, and Wyatt (2012) show that automation bias leads to a potential failure to detect mistakes made by algorithms. Expectancy theory by Isaac, Zerbe, and Pitt (2001) may also explain this irrational assumption. Isaac et al. (2001) state that individuals choose a decision based on the expected outcome of that decision. Therefore a high perception of intelligence may lead to greater expectancy regarding the outcome of the decision. A further possible explanation for the positive effect of higher perceived power of the system on trust towards the system may be a perception of fairness.
Korsgaard et al. (1995) show that participation possibilities, such as the consideration of input brought into decision-making or the influence of that input on the outcome of a decision, create procedural justice, which is a prerequisite for fairness. As Lee (2018) and Newman et al. (2020) show, the perception of a fair or trustworthy decision depends on by whom the decision is made. Decisions made by the anthropomorphized system may be perceived as fairer due to the anthropomorphizing features. Since the anthropomorphized system was perceived more as a technology than as a person, this explanation may be only partially valid. Since the perception of the system as technology was higher for the anthropomorphized system than for the textual system, the Turing test (Turing, 1950) failed. The failure of the Turing test may suggest that the system was not perceived as intelligent by the users. In fact, the perceived intelligence of the system was higher for the anthropomorphized system than for the textual system, which may suggest that Turing's definition of intelligence is outdated. Intelligence may be connected to the perception of anthropomorphizing features such as cognition, emotions or interactivity (Waytz, Cacioppo, & Epley, 2010). Furthermore, Lee (2018) shows that decisions made by humans evoke positive emotions due to the possibility of social recognition. Since anthropomorphized systems are characterized by the perception of cognitive capabilities, such as emotions, in technology (Waytz et al., 2010), users may see a psychological pleasure or social gain in interacting with the technology. Therefore, the social exchange theory
(Emerson, 1976) may be applicable in order to confirm the findings on acceptance conditions. Social exchange theory states that the interaction between two humans is characterized by an exchange of costs and utilities. Utilities may be the effectiveness of the system (Goddard et al., 2012; Lee, 2018; Martini, 2019), psychological pleasure or the enjoyment of system usage (Waytz et al., 2010), while costs may be the perception of inferiority (Baumann-Habersack, 2021; Lee, 2018; Newman et al., 2020), a possible detachment from decision-making (Bader & Kaiser, 2019) in terms of involvement, or the risk of failure due to data discrimination (Newman et al., 2020). Lawler and Thye (1999) show that emotions deepen the nature of the relationship between humans. Furthermore, they show that, due to the rise of emotions, humans tend to focus on the decision rather than on the decision process in a group. Therefore social exchange theory may be a possible explanation for blind trust.
The Uncanny Valley theory by Mori, MacDorman, and Kageki (2012) shows that anthropomorphizing features lead to an increase in acceptance influenced by trust (Scheuer, 2020). They state that increasing anthropomorphizing features up to a certain point leads to a radical reduction in acceptance. Furthermore, Mori et al. (2012) outline that, after this critical point of reduction, a sufficiently high degree of anthropomorphizing again has increasing effects on acceptance. Since the anthropomorphized system had strong anthropomorphizing features such as gestures, human embodiment and voice output, the survey participants may have felt an imperfection in the anthropomorphizing, leading to a higher perception of the system as technology.
5. Acceptance conditions of algorithmic decision support
for practice and research
The literature shows that research on acceptance conditions in management is critical in order to enhance the potential of algorithmic decision support (Grossman & Siegel, 2014; Laudon et al., 2016; Mcafee et al., 2012; Mikalef et al., 2019; Rainsberger, 2021; Reid et al., 2015). This paper identified a number of acceptance conditions. Therefore it is necessary to categorize the findings for practice and to identify limitations for further research.
5.1. System design implications
This paper showed that an optimization of an interface in terms of anthropomorphizing has no effect on acceptance. Despite this null finding, the way acceptance is created differs between the interfaces. Therefore practitioners should first define their goal for algorithmic decision support and specify the role of the user. It is necessary to adjust the optimization of the interface to the intended use of the system. If a user is supposed to question the output of the decision support system, the decision processing of the system should be transparent, leading to a higher power of the user during usage. In this case an anthropomorphized system would not be suitable.
On the other hand, if a user is supposed to rely on the output of the decision support system, the system should exhibit a higher perceived intelligence, leading to higher trust. Furthermore, the system should be perceived as a person in order to create acceptance. In this case anthropomorphized systems would be suitable. The research showed that the expected outcomes of anthropomorphizing depend on the system design. Therefore practitioners should pay attention to a suitable degree of anthropomorphizing in order to avoid the Uncanny Valley proposed by Mori et al. (2012). Practitioners should examine which degree of anthropomorphizing is beneficial in order to fulfill their goals. These implications show that system design is key to optimizing the interface to create acceptance.
Furthermore, the decision support system should be trustworthy, since trust is identified as the main indicator for creating acceptance. In order to create trust, ethical principles should be considered while designing the system (Lemke, Monett, & Mikoleit, 2021). Specifically, beneficence, transparency, nonmaleficence, autonomy, justice, and privacy are principles for an ethical usage of AI according to M. C. Barton and Pöppelbuß (2022). The decision processing of the system should be transparent, leading to a higher power of the user. Furthermore, performance measurement of the decision may lead to the realization of a positive impact of one's own contribution on the decision (attribution theory). Systems with high power in the decision-making process should be avoided, since they have a negative effect on trust. On the other hand, anthropomorphized systems with high power in decision-making lead to an increase in trust (blind trust).
(blind trust). Future technology advances in hardware like
neuromorphic computer architecture, DishBrain and Brain
Machine Interface or advances in algorithms like Computa-
tional Intelligence or Super Artificial Intelligence may lead
to an affordance of blind trust. Since anthropomorphized
systems should be introduced when a reliance on the system
is afforded, practitioners have the possibility to avoid ethical
principles by designing a system with low transparency lead-
ing to blind trust. They should carefully evaluate whether
they want to benefit from blind trust. It may be beneficial
in order to create acceptance. The research showed that
the benefits (total effects) from blind trust are smaller than
the benefits from trust created by comprehensibility (total
effects). Therefore systems with high effectiveness due to
technological advances should be transparent for the user
because they lead to higher trust and make it possible to
identify their own contribution to the outcome of the deci-
sion (attribution theory). Furthermore, advanced systems
have to exhibit intelligence. The manager should rely on the
system knowing that the system processes decision aid with
high precision. Therefore the effectiveness of the system
should be communicated properly in order to benefit from
expectancy theory (Isaac et al.,2001).
This study showed that anthropomorphizing may not have a direct effect on acceptance. One of the first anthropomorphized systems, Clippy, was introduced by Microsoft (Swartz, 2003). The rejection of this assistant was
high due to malfunctions and the low effectiveness of the system (Swartz, 2003). Due to complaints, Microsoft removed the Clippy function from Office (Swartz, 2003). Despite the failure of Clippy, an interface with a similar degree of anthropomorphizing may be beneficial for advanced decision support systems in order to avoid the Uncanny Valley.
5.2. Limitations and future research
More precise implications for practice could be derived if the study did not have limitations. The study results were based on an interaction of users with the system. Therefore vignettes that imitate a realistic scenario have to be designed carefully. Vignettes may distort the perception of the user through the framing of information. This study carefully examined the framing of information. The vignette was framed in terms of the transparency of the system: the system used in the vignette was not comprehensible. Accordingly, the descriptive statistics confirm that on average the users do not understand the decision processing behind the results of both systems. This distortion was necessary in order to examine whether users would accept the system even if it is not comprehensible. The vignette therefore described an interaction with hybrid intelligence at a level that can be classified as a Decision Support System. Further levels of hybrid intelligence were not specified. Since this study showed that the participation of the user in the decision-making process is important for building a trustworthy system, further levels of hybrid intelligence should be considered in future research.
Due to measurement errors, some constructs of the study consist of single items, which may not be ideal since exogenous variables are not directly measurable. Nevertheless, the literature shows that single-item constructs are appropriate measures for exploratory research. Since the research question focused on the exploration of acceptance conditions, this study obtained valid results through the use of PLS-SEM. In order to validate the constructs on a theoretical level, a further study should be conducted in which the data are analyzed by a common factor-based structural equation model (CB-SEM).
One major problem of the study is that the perception of the anthropomorphized system as technology was higher than that of the textual system. This may indicate that the system design of the vignette was affected by the Uncanny Valley described by Mori et al. (2012). Since this study aimed to maximize the level of anthropomorphizing, a specific high degree of anthropomorphizing was reached. The degree of anthropomorphizing is not uniformly specifiable. Therefore research has to develop a scale for identifying the degree of anthropomorphizing in which features of system design are specified in order to derive this degree. Due to the non-existence of such a scale, the degree of anthropomorphizing was chosen arbitrarily, which may distort the results. Further research can focus on the acceptance of anthropomorphized systems with different degrees of anthropomorphizing. The effect of the Uncanny Valley is also visible in the cancellation statistics of the survey: most cancellations occurred on the page introducing the anthropomorphized system (75 survey participants). The results may be distorted since users who were annoyed by the presence of a human-like system cancelled the survey; this group might have provided other results. Further research could examine whether a maximization of anthropomorphizing features leads to a perception of the system as a person and examine the effect of interpersonal acceptance on acceptance.
The T-test showed that anthropomorphizing has no effect on acceptance or on the acceptance-creating variables. Furthermore, two PLS-SEMs were estimated to identify how acceptance is created. This approach could be optimized by using anthropomorphizing as a moderating variable. Since both models show similar effects except for the aspect of blind trust in anthropomorphized systems, a lack of explanatory power exists regarding the difference in the results of both systems. R² is low for the construct of system power in both SEM models, highlighting the need for a research setting with anthropomorphizing as a moderating effect. Furthermore, the PLS-SEM method maximizes the explanatory power of the model, R² (Hair Jr. et al., 2021). Low R² values indicate that variables were omitted in the research, which may lead to a problem of causal identification (endogeneity). Since the research focuses on an exploration of acceptance conditions, an examination of endogeneity was not necessary. Future research should therefore examine endogeneity to identify causalities underlying the acceptance conditions.
This study examined the acceptance conditions for a single decision-maker. In practice, decision situations may be more complex. Merendino et al. (2018) show that algorithmic decision support can create tension in boards. Therefore it is necessary to examine acceptance conditions for further decision scenarios. Future research should identify whether the acceptance conditions for single managers are applicable to more complex decision scenarios, such as group decisions.
6. Conclusion
The aim of this thesis was to investigate the conditions that lead to the acceptance of algorithmic decision support systems. In this study, it was especially important to consider the decision-making process of managers. Accordingly, the target group of this study was German-speaking students and employees, including managers. To analyze the different conditions that may lead to the acceptance of algorithmic decision support systems, it was necessary to choose a methodological approach that considers different scenarios but also provides insights into the perceptions, beliefs and attitudes of the target group. Based on this, a vignette study along with a quantitative survey was used for the data collection for the thesis. In total, 281 German-speaking students and employees, including managers, participated in the study during the period from 25.07.2022 to 07.08.2022.
Furthermore, to analyze the conditions of acceptance, a PLS-SEM model was estimated.
In the theoretical section, it was assumed that anthropomorphizing features may lead to a situation in which the user perceives the system as a person and accordingly shows more trust and acceptance towards it. The results, however, show the opposite behavior of the users. In the vignette study, two scenarios were presented: a textual scenario and a scenario with anthropomorphizing features. The users perceived the anthropomorphized scenario as a technology and showed more trust and acceptance towards the scenario that is not anthropomorphized. Accordingly, the results indicate that there is no significant influence of anthropomorphizing the system on acceptance.
On the other hand, this thesis shows how acceptance differs across the two distinct systems. This study confirms that higher trust in a system leads to higher acceptance. In addition, the results show that trust in the system is influenced by the transparency or comprehensibility of the system. In this regard it might be interesting to investigate how a system can be designed to receive more trust. In other words, how can the variable transparency or comprehensibility be further elucidated to generate more trust, which in the end leads to a situation where the user accepts a system? Different vignette settings might be helpful to investigate scenarios that lead to more transparency and, in turn, to more trust and acceptance.
Moreover, this study presented several implications for managers and academics. It needs to be mentioned that the exponential development of technology can help to aid strategic and operational decisions in management and can be crucial in order to stay competitive in dynamic markets. Nevertheless, decision support systems are often not used in practice, for many reasons. The literature shows that major challenges arise in the domain of management. Studies show that only a few decision-makers understand data concepts well. Therefore the acceptance of algorithmic decision support is not given in practice. Research on acceptance has identified many conditions that foster the acceptance of information systems. Nevertheless, this research focuses on acceptance at the worker or user level. This study addresses the gap in the existing literature at the management level. The research question is which conditions lead to an acceptance of algorithmic decision support in management.
Summing up, the literature on persuasive technology shows that an optimization of interfaces leads to more interaction with the technology. Anthropomorphizing is identified as an appropriate way to optimize interfaces. Therefore a vignette study was conducted in which the survey participants simulated an interaction with a decision support system, with anthropomorphizing manipulated through two alternating degrees of anthropomorphizing (low and high). The data for both systems were measured with distinct measurement models. The results show that there is no effect of anthropomorphizing on acceptance, which may be biased by the Uncanny Valley.
Practitioners should first define the level of hybrid intelligence in order to design the system. The system design should consider the effects identified in this study. Relying on the benefits of blind trust is not recommendable, since trust created through transparency has higher total effects than the total effect of the perceived power of the system in the decision-making process. Furthermore, the system has to be effective, which may be realized by technological advances. The effectiveness of the system has to be communicated at an appropriate level to enhance the perceived intelligence of the system.
This study showed which conditions lead to an acceptance of algorithmic decision support in management in an explorative study design. These acceptance conditions could be confirmed by further research through a CB-SEM. All in all, it needs to be mentioned that this study firstly provided a theoretical contribution by deriving a structural model based on the thoughts of the TAM. Secondly, this study provided an empirical contribution at the managerial level, as 281 survey respondents participated in this study and shared their perceptions of and attitudes towards two scenarios constituting two systems.
Finally, this study provided a practical contribution by showing how companies can use this model as an indicator for designing systems and which conditions are necessary in order to create acceptance among users. Overall, this study contributes to closing the research gap on acceptance at the managerial level.
References
Abhari, K., Vomero, A., & Davidson, E. (2020). Psychology of Business
Intelligence Tools: Needs-Affordances-Features Perspective. In Pro-
ceedings of the Annual Hawaii International Conference on System Sci-
ences. Hawaii International Conference on System Sciences.
Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: a survey
on explainable artificial intelligence (XAI). IEEE Access,6, 52138–
52160.
Aguinis, H., & Bradley, K. J. (2014). Best Practice Recommendations for De-
signing and Implementing Experimental Vignette Methodology Stud-
ies. Organizational Research Methods,17(4), 351–371.
Alexander, C. S., & Becker, H. J. (1978). The Use of Vignettes in Survey
Research. Public Opinion Quarterly,42(1), 93–104.
Alvarez, S. A., Barney, J. B., & Young, S. L. (2010). Debates in entrepreneur-
ship: Opportunity formation and implications for the field of en-
trepreneurship. In Handbook of entrepreneurship research (pp. 23–
45). Springer.
Alves, W. M., & Rossi, P. H. (1978). Who should get what? fairness judg-
ments of the distribution of earnings. American Journal of Sociology,
84(3), 541–564.
Anderson, C. (2015). Creating a data-driven organization: Practical advice
from the trenches (1st ed.). O’Reilly Media Inc.
Andrews, K. R. (1980). The concept of corporate strategy.
Apté, C., Dietrich, B., & Fleming, M. (2012). Business leadership through
analytics. IBM Journal of Research and Development,56(6), 7: 1–7:
5.
Atzmüller, C., Kromer, I., & Elisabeth, R. (2014). Peer Delinquency:
Wahrnehmung und Bewertung typischer Jugenddelikte aus der Sicht
Jugendlicher als Grundlage für Präventionsmaßnahmen. Innovation
und Technologie (BMVIT), Wien, Österreich.
Atzmüller, C., & Steiner, P. M. (2010). Experimental Vignette Studies in
Survey Research. Methodology,6(3), 128–138.
Azvine, B., Cui, Z., Nauck, D. D., & Majeed, B. (2006). Real Time Busi-
ness Intelligence for the Adaptive Enterprise. In The 8th IEEE In-
ternational Conference on E-Commerce Technology and The 3rd IEEE
International Conference on Enterprise Computing, E-Commerce, and
E-Services (CEC/EEE’06). IEEE.
Baars, H., & Kemper, H.-G. (2021). Business Intelligence & Analytics – Grundlagen und praktische Anwendungen. Aufl., Wiesbaden in Druck. Retrieved from https://link.springer.com/content/pdf/10.1007/978-3-8348-2344-1.pdf
Bader, V., & Kaiser, S. (2019). Algorithmic decision-making? the user inter-
face and its role for human involvement in decisions supported by
artificial intelligence. Organization,26(5), 655–672.
Barbosa, L. C., & Hirko, R. G. (1980). Integration of Algorithmic Aids into
Decision Support Systems. MIS Quarterly,4(1), 1.
Barnett, T., Bass, K., & Brown, G. (1994). Ethical ideology and ethical
judgment regarding ethical issues in business. Journal of Business
Ethics,13(6), 469–480.
Barrera, D., & Buskens, V. (2007). Imitation and learning under uncertainty:
a vignette experiment. International sociology,22(3), 367–396.
Barton, D., & Court, D. (2012). Making advanced analytics work for you.
Harvard Business Review,90(10), 78–83.
Barton, M. C., & Pöppelbuß, J. (2022). Prinzipien für die ethische Nutzung
künstlicher Intelligenz. HMD Praxis der Wirtschaftsinformatik,59(2),
468–481.
Baumann-Habersack, F. H. (2021). Autorität, Algorithmen und Konflikte
Die digitalisierte Renaissance autoritärer Führungsprinzipien. In
Kooperation in der digitalen Arbeitswelt (pp. 279–291). Wiesbaden,
Springer Gabler.
Bazerman, M. H., & Moore, D. A. (2012). Judgment in managerial decision
making. John Wiley & Sons.
Beck, M., & Opp, K. (2001). Der faktorielle Survey und die Messung von
Normen. KZfSS Kölner Zeitschrift für Soziologie und Sozialpsychologie,
53(2), 283–306.
Becker, J.-M., Ringle, C. M., Sarstedt, M., & Völckner, F. (2015). How
collinearity affects mixture regression results. Marketing Letters,
26(4), 643–659.
Benaben, F., Lauras, M., Montreuil, B., Faugere, L., Gou, J., & Mu, W. (2019).
Physics of Organization Dynamics: An AI Framework for opportunity
and risk management. In 2019 International Conference on Industrial
Engineering and Systems Management (IESM). IEEE.
Biswas, T. T. (Ed.). (2015). Measuring Intrinsic Quality of Human Decisions
(Vol. 9346). Cham, Springer.
Blutner, D., Cramer, S., Krause, S., Mönks, T., Nagel, L., Reinholz, A., &
Witthaut, M. (2009). Assistenzsysteme für die Entscheidungsunter-
stützung. In Große Netze der Logistik (pp. 241–270). Berlin, Heidel-
berg, Springer.
Brahm, C., Cheris, A., & Sherer, L. (2016). What Big Data Means for Cus-
tomer Loyalty. Brief, Bain and Company, August, 7.
Buxmann, P., & Schmidt, H. (2021). Grundlagen der Künstlichen Intelligenz
und des Maschinellen Lernens. In Künstliche Intelligenz (pp. 3–25).
Berlin, Heidelberg, Springer Gabler.
Camerer, C., & Lovallo, D. (1999). Overconfidence and excess entry: An
experimental approach. American economic review,89(1), 306–318.
Carlson, E. D. (1977). Decision support systems: personal computing ser-
vices for managers. Management Review,66(1), 4–11.
Carr, J. C., & Blettner, D. P. (2010). Cognitive control bias and decision-
making in context: Implications for entrepreneurial founders of
small firms. Frontiers of Entrepreneurship Research,30(6), 2.
Cavanagh, G. F., & Fritzsche, D. J. (1985). Using vignettes in business ethics
research. Research in Corporate Social Performance and Policy,7, 279–
293.
Chen, D. Q., Preston, D. S., & Swink, M. (2015). How the use of big data
analytics affects value creation in supply chain management. Journal
of management information systems,32(4), 4–39.
Colossyan. (2022). Create videos with AI actors, real easy. https://www
.colossyan.com/.
Companiesmarketcap. (2022). Largest Companies by Market Cap. https://
companiesmarketcap.com/.
Cook, F. L. (1979). Who should be helped? Public support for social services:
Public Support for Social Services.
Côrte-Real, N., Oliveira, T., & Ruivo, P. (2017). Assessing business value of
Big Data Analytics in European firms. Journal of Business Research,
70, 379–390.
Davenport, T. H. (2013). Analytics 3.0. Harvard Business Review,91(12),
64–72.
Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User
Acceptance of Information Technology. MIS Quarterly,13(3), 319–
340.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance
of computer technology: A comparison of two theoretical models.
Management Science,35(8), 982–1003.
Delen, D., & Demirkan, H. (2013). Data, information and analytics as ser-
vices. Decision Support Systems,55(1), 359–363.
Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid In-
telligence. Business & Information Systems Engineering,61(5), 637–
643.
DeSanctis, G., & Poole, M. S. (1994). Capturing the Complexity in Advanced
Technology Use: Adaptive Structuration Theory. Organization Sci-
ence,5(2), 121–147.
Dubinsky, A. J., Jolson, M. A., Kotabe, M., & Lim, C. U. (1991). A
Cross-National Investigation of Industrial Salespeople’s Ethical Per-
ceptions. Journal of International Business Studies,22(4), 651–670.
Dülmer, H. (2001). Bildung und der Einfluss von Argumenten auf
das Moralische Urteil. KZfSS Kölner Zeitschrift für Soziologie und
Sozialpsychologie,53(1), 1–27.
Emerson, R. M. (1976). Social Exchange Theory. Annual Review of Sociology,
2(1), 335–362.
Evans, J. R., & Lindner, C. H. (2012). Business Analytics: The Next Frontier
for Decision Sciences. College of Business, University of Cincinnati.
Decision Science Institute,21(12). Retrieved from http://www.cbpp
.uaa.alaska.edu/afef/business{_}analytics.htm
Evans, J. S. B. T., & Stanovich, K. E. (2013). Dual-process theories of higher
cognition: Advancing the debate. Perspectives on Psychological Sci-
ence,8(3), 223–241.
Everett, C. R., & Fairchild, R. J. (2015). A theory of entrepreneurial over-
confidence, effort, and firm outcomes. Journal of Entrepreneurial
Finance,17(1), 1–27.
Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to
knowledge discovery in databases. AI magazine,17(3), 37–54.
K. Iqbal /Junior Management Science 8(4) (2023) 887-925 923
Fogg, B. J. (1998). Persuasive computers: perspectives and research di-
rections. Proceedings of the SIGCHI conference on Human factors in
computing systems, 225–232.
Forbes, D. P. (2005). Are some entrepreneurs more overconfident than oth-
ers? Journal of business venturing,20(5), 623–640.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation mod-
els with unobservable variables and measure-ment error. Journal of
marketing research,18(1), 39–50.
Fuchs, C., & Diamantopoulos, A. (2009). Using single-item measures for
construct measurement in management research: Conceptual issues
and application guidelines. Die Betriebswirtschaft,69(2), 195.
Furber, S. (2016). Large-scale neuromorphic computing systems. Journal of
Neural Engineering,13(5), 051001.
Gartz, U. (2004). Enterprise information management. In Business intelli-
gence in the digital economy: opportunities, limitations and risks (pp.
48–75). IGI Global.
Gefen, Karahanna, & Straub. (2003). Trust and TAM in Online Shopping:
An Integrated Model. MIS Quarterly,27(1), 51.
GEMESYS Technologies. (2022). Wir bauen einen vom menschlichen Gehirn
inspirierten Computer. https://gemesys.tech/.
Gersch, M., Meske, C., Bunde, E., Aldoj, N., Wesche, J. S., Wilkens, U.,
& Dewey, M. (2021). Vertrauen in KI-basierte Radiologie Erste
Erkenntnisse durch eine explorative Stakeholder-Konsultation. In
Künstliche intelligenz im dienstleistungsmanagement (pp. 309–335).
Wiesbaden, Springer Gabler.
Gluchowski, P. (2016). Business Analytics–Grundlagen, Methoden und Ein-
satzpotenziale. HMD Praxis der Wirtschaftsinformatik,53(3), 273–
286.
Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: a system-
atic review of frequency, effect mediators, and mitigators. Journal of
the American Medical Informatics Association,19(1), 121–127.
Gong, L. (2008). How social is social responses to computers? the function
of the degree of anthropomorphism in computer representations.
Computers in Human Behavior,24(4), 1494–1509.
Grant, A. M., & Wall, T. D. (2009). The neglected science and art of quasi-
experimentation: Why-to, when-to, and how-to advice for organi-
zational researchers. Organizational Research Methods,12(4), 653–
686.
Grossman, R., & Siegel, K. (2014). Organizational models for big data and
analytics. Journal of Organization Design,3(1), 20–25.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi,
D. (2018). A survey of methods for explaining black box models.
ACM computing surveys (CSUR),51(5), 1–42.
Güting, R. H., & Dieker, S. (1992). Datenstrukturen und Algorithmen.
Springer.
Hair Jr., J. F., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., &
Ray, S. (2021). Partial Least Squares Structural Equation Modeling
(PLS-SEM) Using R: A Workbook. Springer International Publishing
AG.
Hair Jr., J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use
and how to report the results of PLS-SEM. European Business Review,
31(1), 2–24.
Hair Jr., J. F., Matthews, L. M., Matthews, R. L., & Sarstedt, M. (2017).
PLS-SEM or CB-SEM: updated guidelines on which method to use.
International Journal of Multivariate Data Analysis,1(2), 107–123.
Hair Jr., J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a Silver
Bullet. Journal of Marketing Theory and Practice,19(2), 139–152.
Hajkowicz, S., Reeson, A., Rudd, L., Bratanova, A., Hodgers, L., Mason, C.,
& Boughen, N. (2016). Tomorrow’s digitally enabled workforce:
Megatrends and scenarios for jobs and employment in Australia over
the coming twenty years.
Halper, F. (2014). Predictive analytics for business advantage. TDWI Re-
search, 1–32.
Hamilton, B., & Koch, R. (2015). From predictive to prescriptive analytics.
Strategic Finance,96(12), 62.
Hassenzahl, M., & Tractinsky, N. (2006). User experience - a research
agenda. Behaviour & Information Technology,25(2), 91–97.
Hastenteufel, J., & Ganster, F. (2021). Einflussfaktoren auf die Akzeptanz
von Robo Advisors: Digitale Kommunikation in der Anlageberatung.
Springer Fachmedien Wiesbaden.
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for
assessing discriminant validity in variance-based structural equation
modeling. Journal of the Academy of Marketing Science,43(1), 115–
135.
Huber, G. P. (1990). A theory of the effects of advanced information tech-
nologies on organizational design, intelligence, and decision making.
Academy of Management Review,15(1), 47–71.
Hyman, M. R., & Steiner, S. D. (1996). The vignette method in business
ethics research: Current uses and recommendations. Marketing:
Moving Toward the 21st Century, 261–265.
Iansiti, M., & Lakhani, K. R. (2020). Competing in the age of AI: How
machine intelligence changes the rules of business. Harvard Business
Review,98(1), 60–67.
Iqbal, R., Doctor, F., More, B., Mahmud, S., & Yousuf, U. (2020). Big Data an-
alytics and Computational Intelligence for Cyber–Physical Systems:
Recent trends and state of the art applications. Future Generation
Computer Systems,105, 766–778.
Isaac, R. G., Zerbe, W. J., & Pitt, D. C. (2001). Leadership and motivation:
The effective application of expectancy theory. Journal of managerial
issues, 212–226.
Janis, I. L., & Mann, L. (1977). Decision making: A psychological analysis of
conflict, choice, and commitment. Free press.
Jasso, G., & Webster Jr, M. (1999). Assessing the gender gap in just earnings
and its underlying mechanisms. Social Psychology Quarterly, 367–
380.
Jöreskog, K. G., & Wold, H. O. A. (1982). Systems under indirect observation:
Causality, structure, prediction. North Holland.
Kagan, B. J., Kitchen, A. C., Tran, N. T., Parker, B. J., Bhat, A., Rollo,
B., . . . Friston, K. J. (2021). In vitro neurons learn and exhibit
sentience when embodied in a simulated game-world. bioRxiv.
Retrieved from https://www.biorxiv.org/content/biorxiv/
early/2021/12/03/2021.12.02.471005.full.pdf
Kahneman, D. (2003). A perspective on judgment and choice: mapping
bounded rationality. American psychologist,58(9), 697.
Kahneman, D., & Schmidt, T. (2012). Schnelles Denken, langsames Denken.
Siedler Verlag.
Kahneman, D., Slovic, S. P., Slovic, P., & Tversky, A. (1982). Judgment under
uncertainty: Heuristics and biases. Cambridge university press.
Kahneman, D., & Tversky, A. (1996). On the reality of cognitive illusions.
1939-1471.
Karahanna, E., Xin Xu, S., Xu, Y., & Zhang, N. (2018). The Needs–
Affordances–Features Perspective for the Use of Social Media. MIS
Quarterly,42(3), 737–756.
Kelley, H. H., & Michela, J. L. (1980). Attribution theory and research.
Annual review of psychology,31(1), 457–501.
Knebl, H. (2019). Algorithmen und Datenstrukturen: Grundlagen und
probabilistische Methoden für den Entwurf und die Analyse. Springer
Vieweg.
Koch, R. (2015). From business intelligence to predictive analytics. Strategic
Finance,96(7), 56–58.
Koellinger, P., Minniti, M., & Schade, C. (2007). “I think I can, I think I can”:
Overconfidence and entrepreneurial behavior. Journal of economic
psychology,28(4), 502–527.
Königstorfer, J. (2008). Akzeptanz von technologischen Innovationen:
Nutzungsentscheidungen von Konsumenten dargestellt am Beispiel von
mobilen Internetdiensten.
Korsgaard, M. A., Schweiger, D. M., & Sapienza, H. J. (1995). Building Com-
mitment, Attachment, and Trust in Strategic Decision-Making Teams:
The Role of Procedural Justice. Academy of Management Journal,
38(1), 60–84.
Kotler, P., Berger, R., & Bickhoff, N. (2010). The quintessence of strategic
management. What You Really Need to Know to Survive in Business,
Berlin.
Kreutzer, R. T., & Sirrenberg, M. (2019). Künstliche Intelligenz verstehen ([1.
Auflage]ed.). Wiesbaden, Springer Fachmedien.
La Hayward, M., Forster, W. R., Sarasvathy, S. D., & Fredrickson, B. L.
(2010). Beyond hubris: How highly confident entrepreneurs re-
bound to venture again. Journal of business venturing,25(6), 569–
578.
Larson, D., & Chang, V. (2016). A review and future direction of agile, busi-
ness intelligence, analytics and data science. International Journal of
Information Management,36(5), 700–710.
K. Iqbal /Junior Management Science 8(4) (2023) 887-925924
Laudon, K. C., Laudon, J. P., & Schoder, D. (2016). Wirtschaftsinformatik:
Eine Einführung (3rd ed.). Pearson Studium.
LaValle, S., Lesser, E., Shockley, R., Hopkins, M. S., & Kruschwitz, N. (2011).
Big data, analytics and the path from insights to value. MIT sloan
management review,52(2), 21–32.
Lawler, E. J., & Thye, S. R. (1999). Bringing Emotions into Social Exchange
Theory. Annual Review of Sociology,25(1), 217–244.
Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fair-
ness, trust, and emotion in response to algorithmic management. Big
Data & Society,5(1), 205395171875668.
Leimeister, J. M. (2019). Dienstleistungsengineering und-management: Data-
driven service innovation. Springer.
Lemke, C., Monett, D., & Mikoleit, M. (2021). Digitale Ethik
in datengetriebenen Organisationen und deren Anwendung am
Beispiel von KI-Ethik. In Data Science anwenden (pp. 33–52).
Springer.
Linardatos, P., Papastefanopoulos, V., & Kotsiantis, S. (2020). Explainable
AI: a review of machine learning interpretability methods. Entropy,
23(1), 18.
Luhmann, N. (1990). Risiko und Gefahr. In Soziologische aufklärung 5 (pp.
131–169). Springer.
Makarius, E. E., Mukherjee, D., Fox, J. D., & Fox, A. K. (2020). Rising
with the machines: A sociotechnical framework for bringing artificial
intelligence into the organization. Journal of Business Research,120,
262–273.
Mallach, E. G. (1994). Understanding Decision Support Systems and Expert
Systems. Richard D. Irwin. Inc., USA.
Martini, M. (2019). Blackbox Algorithmus. Grundfragen einer Regulierung
künstlicher Intelligenz, Berlin.
Mashingaidze, K., & Backhouse, J. (2017). The relationships between def-
initions of big data, business intelligence and business analytics: a
literature review. International Journal of Business Information Sys-
tems,26(4), 488–505.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An Integrative Model
Of Organizational Trust. Academy of Management Review,20(3),
709–734.
Mcafee, A., Brynjolfsson, E., Davenport, T. H., Patil, D. J., & Barton, D.
(2012). Big data: the management revolution. Harvard Business
Review,90(10), 60–68.
McKelvie, A., Haynie, J. M., & Gustavsson, V. (2011). Unpacking the uncer-
tainty construct: Implications for entrepreneurial action. Journal of
business venturing,26(3), 273–292.
Merendino, A., Dibb, S., Meadows, M., Quinn, L., Wilson, D., Simkin, L.,
& Canhoto, A. (2018). Big data, big decisions: The impact of big
data on board level decision-making. Journal of Business Research,
93, 67–78.
Meske, C., Bunde, E., Schneider, J., & Gersch, M. (2022). Explainable Ar-
tificial Intelligence: Objectives, Stakeholders, and Future Research
Opportunities. Information Systems Management,39(1), 53–63.
Mikalef, P., Boura, M., Lekakos, G., & Krogstie, J. (2019). Big data analyt-
ics and firm performance: Findings from a mixed-method approach.
Journal of Business Research,98, 261–276.
Mikalef, P., Pappas, I., Krogstie, J., & Pavlou, P. A. (2020). Big data and
business analytics: A research agenda for realizing business value.
0378-7206.
Mishra, N., & Silakari, S. (2012). Predictive analytics: a survey, trends, ap-
plications, oppurtunities & challenges. International Journal of Com-
puter Science and Information Technologies,3(3), 4434–4438.
Moore, G. E. (1965). Cramming more components onto integrated circuits.
McGraw-Hill New York.
Mori, M., MacDorman, K. F., & Kageki, N. (2012). The Uncanny Valley [From
the Field].IEEE Robotics & Automation Magazine,19(2), 98–100.
Moschovakis, Y. N. (2001). What Is an Algorithm? In Mathematics unlimited
2001 and beyond (pp. 919–936). Berlin, Heidelberg, Springer.
Murphy, K. P. (2012). Machine learning: A probabilistic perspective. MIT
Press.
Nedelcu, B. (2013). Business intelligence systems. Database Systems Jour-
nal,4(4), 12–20.
Newman, D. T., Fast, N. J., & Harmon, D. J. (2020). When eliminating bias
isn’t fair: Algorithmic reductionism and procedural justice in human
resource decisions. Organizational Behavior and Human Decision Pro-
cesses,160, 149–167.
Orlikowski, W. J., & Robey, D. (1991). Information Technology and the
Structuring of Organizations. Information Systems Research,2(2),
143–169.
Panagiotarou, A., Stamatiou, Y. C., Pierrakeas, C., & Kameas, A. (2020).
Gamification Acceptance for Learners with Different E-Skills. In-
ternational Journal of Learning, Teaching and Educational Research,
19(2), 263–278.
Porter, M. E. (1996). What is strategy? Harvard Business Review,74(6),
61–78.
Pütz, C., Düppre, S., Roth, S., & Weiss, W. (2021). Akzeptanz und Nutzung
von Chat-/Voicebots. In Künstliche intelligenz im dienstleistungsman-
agement (pp. 361–383). Wiesbaden, Springer Gabler.
R Core Team. (2013). R: R: A language and environment for statistical comput-
ing : reference index. http://www.R-project.org/. R Foundation
for Statistical Computing.
Rainsberger, L. (2021). KI–die neue Intelligenz im Vertrieb. Springer Books.
Rathje, R., Laschet, F.-Y., & Kenning, P. (2021). Künstliche Intelligenz in der
Finanzdienstleistungsbranche Welche Bedeutung hat das Kunden-
vertrauen? In Künstliche Intelligenz im Dienstleistungsmanagement
(pp. 265–286). Wiesbaden, Springer Gabler.
Reid, C., McClean, J., Petley, R., Jones, K., & Ruck, P. (2015). Seiz-
ing the information advantage: How organisations can un-
lock value and insight from the information they hold: A
PwC report in conjunction with Iron Mountain. https://
www.pwc.es/es/publicaciones/tecnologia/assets/
Seizing-The-Information-Advantage.pdf.
Rich, E. (1985). Artificial intelligence and the humanities. Computers and
the Humanities,19(2), 117–122.
Robertson, D. C. (1993). Empiricism in business ethics: Suggested research
directions. Journal of Business Ethics,12(8), 585–599.
Rohner, R. P., & Khaleque, A. (2002). Parental acceptance-rejection and
life-span development: A universalist perspective. Online readings in
psychology and culture,6(1), 1–10.
Rosenberg, J. (2017). Security in embedded systems: Important Security
Concepts, Security And Network Architecture, Software Vulnerabil-
ity And Cyber Attacks, Security And Operating System Architecture.
In A. Vega, P. Bose, & A. Buyuktosunoglu (Eds.), Rugged Embedded
Systems: Computing in Harsh Environments (pp. 149–205). Else-
vier/Morgan Kaufmann.
Sagnier, C., Loup-Escande, E., Lourdeaux, D., Thouvenin, I., & Valléry, G.
(2020). User Acceptance of Virtual Reality: An Extended Technol-
ogy Acceptance Model. International Journal of Human–Computer
Interaction,36(11), 993–1007.
Sarstedt, M., & Wilczynski, P. (2009). More for less? a comparison of single-
item and multi-item measures. Die Betriebswirtschaft,69(2), 211.
Savolainen, S. (2016). Could Acceptance Predict Commitment in Organisa-
tional Change? Impact of Changes Caused by Succession From the
Viewpoint of Non-family Employees in Small Family Firms. Manage-
ment,4(5), 197–215.
Scheuer, D. S. (2020). Akzeptanz von Künstlicher Intelligenz. Springer.
Shanks, G., & Bekmamedova, N. (2012). Achieving benefits with business
analytics systems: An evolutionary process perspective. Journal of
Decision Systems,21(3), 231–244.
Sharma, R., Mithas, S., & Kankanhalli, A. (2014). Transforming decision-
making processes: a research agenda for understanding the impact
of business analytics on organisations. European Journal of Informa-
tion Systems,23(4), 433–441.
Sheridan, T. B., & Verplank, W. L. (1978). Human and Computer Control of
Undersea Teleoperators. Defense Technical Information Center.
Shi, Z. (2021). Intelligence Science: Leading the Age of Intelligence. Elsevier
and Tsinghua University Press.
Shmueli, G., Sarstedt, M., Hair, J. F., Cheah, J., Ting, H., Vaithilingam,
S., & Ringle, C. M. (2019). Predictive model assessment in PLS-
SEM: guidelines for using PLSpredict. European Journal of Market-
ing,53(11), 2322–2347.
Smith, S. L., & Aucella, A. F. (1983). Design guidelines for the user interface to
computer-based information systems. MITRE CORP BEDFORD MA.
Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning:
Implications for the rationality debate? Behavioral and brain sciences,
23(5), 645–665.
K. Iqbal /Junior Management Science 8(4) (2023) 887-925 925
Steiner, P. M., Atzmüller, C., & Su, D. (2016). Designing valid and reliable
vignette experiments for survey research: A case study on the fair
gender income gap. Journal of Methods and Measurement in the Social
Sciences,7(2), 52–94.
Stevenson, T. H., & Bodkin, C. D. (1998). A Cross-National Comparison
of University Students’ Perceptions Regarding the Ethics and Accept-
ability of Sales Practices. Journal of Business Ethics,17(1), 45–55.
Swartz, L. (2003). Why people hate the paperclip: Labels, appearance,
behavior, and social re-sponses to user interface agents.
Tolstoy. (2022). Tolstoy: A new way to communicate, with interactive video.
https://www.gotolstoy.com/.
Turing, A. M. (1950). Mind. Mind,59(236), 433–460.
Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuris-
tics and Biases: Biases in judgments reveal some heuristics of think-
ing under uncertainty. Science,185(4157), 1124–1131.
Uysal, E., Alavi, S., & Bezençon, V. (2022). Trojan horse or useful helper?
a relationship perspective on artificial intelligence assistants with
humanlike features. Journal of the Academy of Marketing Science,
1–23. Retrieved from https://link.springer.com/article/
10.1007/s11747-022-00856-9
van Rijmenam, M., Erekhinskaya, T., Schweitzer, J., & Williams, M.-A.
(2019). Avoid being the Turkey: How big data analytics changes
the game of strategy in times of ambiguity and uncertainty. Long
Range Planning,52(5), 101841.
Venkatesh, V. (2000). Determinants of Perceived Ease of Use: Integrating
Control, Intrinsic Motivation, and Emotion into the Technology Ac-
ceptance Model. Information Systems Research,11(4), 342–365.
Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a
research agenda on interventions. Decision sciences,39(2), 273–315.
Venkatesh, V., & Davis, F. D. (2000). A Theoretical Extension of the Tech-
nology Acceptance Model: Four Longitudinal Field Studies. Manage-
ment Science,46(2), 186–204.
Walster, E. (1966). Assignment of responsibility for an accident. Journal of
personality and social psychology,3(1), 73.
Wang, P. (2019). On Defining Artificial Intelligence. Journal of Artificial
General Intelligence,10(2), 1–37.
Wason, K. D., & Cox, K. C. (1996). Scenario utilization in marketing re-
search. Advances in Marketing. Texas: Southwestern Marketing Asso-
ciation, 155–162.
Wason, K. D., Polonsky, M. J., & Hyman, M. R. (2002). Designing vignette
studies in marketing. Australasian Marketing Journal,10(3), 41–58.
Watson, H. J., & Wixom, B. H. (2007). The current state of business intelli-
gence. Computer,40(9), 96–99.
Waytz, A., Cacioppo, J., & Epley, N. (2010). Who Sees Human? The Stabil-
ity and Importance of Individual Differences in Anthropomorphism.
Perspectives on psychological science : a journal of the Association for
Psychological Science,5(3), 219–232.