Building Trust in Artificial Intelligence,
Machine Learning, and Robotics
TRUST IS HARD TO COME BY
by Keng Siau and Weiyu Wang
In 2016, Google DeepMind's AlphaGo beat 18-time world champion Lee Sedol in the abstract strategy board game Go.1 This win was a triumphant moment for artificial intelligence (AI) that thrust AI more prominently into the public view. Both AlphaGo Zero and AlphaZero have since demonstrated the enormous potential of artificial intelligence and deep learning. Furthermore, self-driving cars, drones, and home robots are proliferating and advancing rapidly. AI is now part of our everyday life, and its encroachment is expected to intensify.2, 3 Trust, however, is a potential stumbling block. Indeed, trust is key to ensuring the acceptance and continuing progress and development of artificial intelligence.
In this article, we look at trust in artificial intelligence, machine learning (ML), and robotics. We first review the concept of trust in AI and examine how trust in AI may be different from trust in other technologies. We then discuss the differences between interpersonal trust and trust in technology and suggest factors that are crucial in building initial trust and developing continuous trust in artificial intelligence.
What Is Trust?
The level of trust a person has in someone or something can determine that person's behavior. Trust is a primary reason for acceptance.4 Trust is crucial in all kinds of relationships, such as human-social interactions,5, 6 seller-buyer relationships,7, 8 and relationships among members of a virtual team.9 Trust can also define the way people interact with technology.10, 11
Trust is viewed as: (1) a set of specific beliefs dealing with benevolence, competence, integrity, and predictability (trusting beliefs); (2) the willingness of one party to depend on another in a risky situation (trusting intention); or (3) the combination of these elements. Table 1 lists some concepts and antecedents of trust in interpersonal relationships and trust people have toward specific types of technology, such as mobile technology and information systems. The conceptualization of trust in human-machine interaction is, however, slightly different (see Table 2).
Compared to trust in an interpersonal relationship, in which the trustor and trustee are both humans, the trustee in a human-technology/human-machine relationship could be the technology per se, the technology provider, or both. Further, trust in technology and trust in the provider will influence each other (see Figure 1).
Trust is dynamic. Empirical evidence has shown that trust is typically built up in a gradual manner, requiring ongoing two-way interactions.12, 13 Sometimes, however, a trustor can decide whether to trust the trustee, such as an object or a relationship, before getting firsthand knowledge of, or any kind of experience with, that trustee. For example, when two persons meet for the first time, the first impression will affect the trust between them. In such situations, trust is built based on an individual's disposition or institutional cues.14 This kind of trust is called initial trust, and it is essential for promoting the adoption of a new technology.15 Both initial trust formation and continuous trust development deserve special attention,16 and both should be considered in the context of trust in AI.
Table 1 — Conceptualization of trust and its antecedents.
What Is the Difference Between Trust
in AI and Trust in Other Technologies?
Trust in technology is determined by human characteristics,17 environment characteristics,18 and technology characteristics.19 Figure 2 shows the factors and dimensions of trust in technology.
Human characteristics basically consider the human's personality, the trustor's disposition to trust, and the trustee's ability to deal with risks. The trustor's personality or disposition to trust could be thought of as the general willingness to trust others, and it depends on different experiences, personality types, and cultural backgrounds. Ability usually refers to a trustee's competence/group of skills to complete tasks in a specific domain. For instance, if an employee is very competent in negotiation, the manager may trust the employee to take charge of negotiating contracts.
Environment characteristics focus on elements such as the nature of the tasks, cultural background, and institutional factors. Tasks can be of different natures; for example, a task can be very important or quite trivial. Cultural background can be based on ethnicity, race, religion, and socioeconomic status. It can also be associated with a country or a particular region. For instance, Americans tend to trust strangers who share the same group memberships, while Japanese tend to trust those who share direct or indirect relationship links.20 Institutional factors indicate the impersonal structures that enable one to act in anticipation of a successful future endeavor. According to the literature, institutional factors include two main aspects: situational normality and structural assurances. Situational normality means the situation is normal and everything is in proper order. Structural assurances refer to contextual conditions such as promises, contracts, guarantees, and regulations.
No matter who or what the trustee is, whether it is a human, a form of AI, or an object such as an organization or a virtual team, the impact of human characteristics and environment characteristics will be roughly similar. For instance, a person with a high-trusting stance would be more likely to accept and depend on others, such as a new technology or a new team member. Similarly, it will be easier for a technology provided by an institution/organization with a high reputation to gain trust from users than it would be for a similar technology from an institution/organization without such a reputation.
Technology characteristics can be analyzed from three perspectives: (1) the performance of the technology, (2) its process/attributes, and (3) its purpose. Although human and environment characteristics are fairly similar irrespective of trustee, the technology characteristics impacting trust will be different for AI, ML, and robotics than they are for other objects or humans. Since artificial intelligence has many new features compared to other technologies, its performance, process, and purpose need to be defined and considered. Using a two-stage model of trust building,21 Table 3 shows the technology features related to AI's performance, process, and purpose, and their impact on trust.
Building Initial Trust in AI
Several factors are at play during trust building. These
factors include the following:
• Representation. Representation plays an important role in initial trust building, which is why humanoid robots are so popular. The more a robot looks like a human, the more easily people can establish an emotional connection with it. A robot dog is another example of an AI representation that humans find easier to trust; dogs are humans' best friends and represent loyalty and diligence.
• Image/perception. Sci-fi books and movies have given AI a bad image, depicting what happens when the intelligence we create gets out of control. Artificial general intelligence (AGI) or "strong AI"22 can be a serious threat. This image and perception will affect people's initial trust in AI.
• Reviews from other users. Reading online reviews is common these days. Compared with a negative review, a positive review leads to greater initial trust.23 Reviews from other users will affect the initial trust level.
• Transparency and "explainability." To trust AI applications, we need to understand how they are programmed and what functions will be performed in certain conditions. This transparency is important, and AI should be able to explain/justify its behaviors and decisions. One of the challenges in machine learning and deep learning is the black box in the ML and decision-making processes. If the explainability of the AI application is poor or missing, trust is affected (see the explainability sketch after this list).
• Trialability. Trialability means the opportunity for people to have access to the AI application and to try it before accepting or adopting it. Trialability enhances understanding. In an article in Technological Forecasting and Social Change, Monika Hengstler et al. state that "when you survey the perception of new technologies across generations, you typically see resistance appear from people who are not users of technology."24 Thus, providing chances for potential users to try the new technology will promote higher initial trust.
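The transparency and explainability factor above has a concrete engineering side. As one hedged illustration, the sketch below trains an inherently interpretable model, a shallow decision tree, and renders its learned rules as human-readable text that an application could show users to justify a decision. It is a minimal sketch assuming Python with scikit-learn installed; the iris data set and the depth limit are illustrative choices, not anything prescribed in this article.

```python
# A minimal explainability sketch, assuming Python with scikit-learn.
# A shallow decision tree is a "white box" model: its learned rules can be
# rendered as readable text and shown to users as a justification.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = ["sepal length", "sepal width", "petal length", "petal width"]

# Capping the depth keeps the rule set small enough for a person to read.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the tree as indented if/else rules, one per branch.
print(export_text(model, feature_names=feature_names))
```

The depth cap reflects the underlying trade-off: a shallow tree is usually less accurate than a deep neural network, but every prediction it makes can be traced to a few readable rules, which is precisely the kind of transparency that supports initial trust.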
Developing Continuous Trust in AI
Once trust is established, it must be nurtured and
maintained. This happens through the following:
• Usability and reliability. Performance includes the competence of AI in completing tasks and finishing those tasks in a consistent and reliable manner. The AI application should be designed to operate easily and intuitively. There should be no unexpected downtime or crashes. Usability and reliability contribute to continuous trust.
• Collaboration and communication. Although most AI applications are developed to perform tasks independently, the most likely scenario in the short term is that people will work in partnership with intelligent machines. Whether collaboration and communication can be carried out smoothly and easily will affect continuous trust.
• Sociability and bonding. Humans are social animals. Continuous trust can be enhanced through social activities. A robot dog that can recognize its owner and show affection may be treated like a pet dog, establishing an emotional connection and trust.
• Security and privacy protection. Operational safety and data security are two prominent factors that influence trust in technology.25 People are unlikely to trust anything that is too risky to operate. Data security, for instance, is important because machine learning relies on large data sets, making privacy a concern (one privacy technique is sketched after this list).
• Interpretability. With a black box, most ML models are inscrutable. To address this problem, it is necessary to design interpretable models and provide the ability for the machine to explain its conclusions or actions. This could help users understand the rationale for the outcomes and the process of deriving the results. Transparency and explainability, as discussed in initial trust building and illustrated in the earlier sketch, are important for continuous trust as well.
• Job replacement. Artificial intelligence can surpass human performance in many jobs and replace human workers, and AI will continue to enhance its capability and infiltrate more domains. Concern about AI taking jobs and replacing human employees will impede people's trust in artificial intelligence; those whose jobs may be replaced by AI may not want to trust it. Some predict that more than 800 million workers, about a fifth of the global labor force, might lose their jobs soon.26 Lower-skilled, repetitive, and dangerous jobs are among those most likely to be taken over by machines. Providing retraining and education to affected employees will help mitigate this effect on continuous trust.
• Goal congruence. Since artificial intelligence has the potential to demonstrate and even surpass human intelligence, it is understandable that people treat it as a threat. And AI should be perceived as a potential threat, especially AGI! Making sure that AI's goals are congruent with human goals is a prerequisite for maintaining continuous trust. Ethics and governance of artificial intelligence are areas that need more attention.
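The security and privacy factor above can also be made concrete in code. The sketch below illustrates one well-known privacy technique, differential privacy, in which an aggregate statistic from a sensitive data set is released with calibrated Laplace noise so that no single individual's record can be inferred from the output. It is a hedged, minimal sketch assuming Python with NumPy; the toy data, the private_mean helper, and the epsilon value are all hypothetical choices for illustration, not mechanisms described in this article.

```python
# An illustrative differential-privacy sketch, assuming Python with NumPy.
# All names and values here are hypothetical, chosen to make the idea concrete.
import numpy as np

rng = np.random.default_rng(seed=42)
ages = rng.integers(18, 90, size=10_000)  # toy stand-in for a sensitive data set

def private_mean(values, epsilon, lower, upper):
    """Release a mean with Laplace noise scaled to the query's sensitivity."""
    clipped = np.clip(values, lower, upper)
    # Changing one person's record moves a bounded mean by at most this amount.
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

# Smaller epsilon adds more noise: stronger privacy, less precise answers.
print(private_mean(ages, epsilon=0.5, lower=18, upper=90))
```

The epsilon parameter makes the trust trade-off explicit: stronger privacy guarantees cost statistical precision, so a provider must balance data utility against the privacy expectations on which continuous trust depends.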
Practical Implications and Conclusions
Artificial intelligence is here, and AI applications will become more and more prevalent. Trust is crucial in the development and acceptance of AI. In addition to the human and environment characteristics that affect trust in other humans, objects, and AI, trust in AI, ML, and robotics is affected by the unique technology features of artificial intelligence.
To enhance trust, practitioners can try to maximize the technological features in AI systems based on the factors listed in Table 3. The representation of an AI as a humanoid or a loyal pet (e.g., a dog) will facilitate initial trust formation. The image and perception of AI as a terminator (as in the Terminator movies) will hinder initial trust. In this Internet age, reviews are critical, as is the ability of artificial intelligence to be transparent and to explain its behavior/decisions. These are important for initial trust formation. The ability to try out AI applications will also have an impact on initial trust.
Trust building is a dynamic process, involving movement from initial trust to continuous trust development. Continuous trust will depend on the performance and purpose of the artificial intelligence. AI applications that are easy to use and reliable, that collaborate and interface well with humans, that have social ability and facilitate bonding with humans, that provide good security and privacy protection, and that explain the rationale behind conclusions or actions will facilitate continuous trust development. A lack of clarity over job replacement and displacement by AI, along with AI's potential threat to the existence of humanity, breeds distrust and hampers continuous trust development.
Trust is the cornerstone of humanity's relationship with artificial intelligence. Like any type of trust, trust in AI takes time to build, seconds to break, and forever to repair once it is broken!
Endnotes
1"AlphaGo Versus Lee Sedol." Wikipedia (https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol).
2Siau, Keng. "Impact of Artificial Intelligence, Robotics, and Automation on Higher Education." Proceedings of the 23rd Americas Conference on Information Systems (AMCIS 2017), Boston, Massachusetts, USA, 10-12 August 2017 (http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1579&context=amcis2017).
3Siau, Keng, and Ying Yang. "Impact of Artificial Intelligence, Robotics, and Machine Learning on Sales and Marketing." Proceedings of the 12th Annual Midwest Association for Information Systems Conference (MWAIS 2017), Springfield, Illinois, USA, 18-19 May 2017 (https://aisel.aisnet.org/mwais2017/48).
4Gefen, David, Elena Karahanna, and Detmar W. Straub. "Trust and TAM in Online Shopping: An Integrated Model." MIS Quarterly, Vol. 27, No. 1, March 2003 (https://pdfs.semanticscholar.org/e2e8/6748e6abeb7c1077ee7a93029fa4d7843d70.pdf).
5Hengstler, Monika, Ellen Enkel, and Selina Duelli. "Applied Artificial Intelligence and Trust — The Case of Autonomous Vehicles and Medical Assistance Devices." Technological Forecasting and Social Change, Vol. 105, April 2016 (https://www.sciencedirect.com/science/article/pii/S0040162515004187).
6McKnight, D. Harrison, Larry L. Cummings, and Norman L. Chervany. "Initial Trust Formation in New Organizational Relationships." Academy of Management Review, Vol. 23, No. 3, 1998 (https://www.jstor.org/stable/259290).
7Gefen, David. "E-Commerce: The Role of Familiarity and Trust." Omega, Vol. 28, No. 6, 2000 (https://www.sciencedirect.com/science/article/pii/S0305048300000219).
8Siau, Keng, and Zixing Shen. "Building Customer Trust in Mobile Commerce." Communications of the ACM, Vol. 46, No. 4, April 2003 (https://cacm.acm.org/magazines/2003/4/6855-building-customer-trust-in-mobile-commerce/abstract).
9Coppola, Nancy W., N. Rotter, and Starr Roxanne Hiltz. "Building Trust in Virtual Teams." IEEE Transactions on Professional Communication, Vol. 47, No. 2, 2004 (https://www.researchgate.net/profile/N_Rotter/publication/3230326_Building_Trust_in_Virtual_Teams/links/00b495177d91ab09c8000000/Building-Trust-in-Virtual-Teams.pdf).
10Li, Xin, Traci J. Hess, and Joseph S. Valacich. "Why Do We Trust New Technology? A Study of Initial Trust Formation with Organizational Information Systems." Journal of Strategic Information Systems, Vol. 17, No. 1, 2008 (https://www.sciencedirect.com/science/article/abs/pii/S0963868708000036).
11Siau, Keng, et al. "A Qualitative Investigation on Consumer Trust in Mobile Commerce." International Journal of Electronic Business, Vol. 2, No. 3, 2004 (http://cba.unl.edu/research/articles/645/).
12Gefen (see 7).
13McKnight et al. (see 6).
14McKnight et al. (see 6).
15Li et al. (see 10).
16Siau and Shen (see 8).
17Hengstler et al. (see 5).
18Oleson, Kristin E., et al. "Antecedents of Trust in Human-Robot Collaborations." Proceedings of the 1st International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support. IEEE, 2011 (http://ieeexplore.ieee.org/document/5753439/).
19Schaefer, Kristin E., et al. "A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems." Human Factors, Vol. 58, No. 3, 2016 (https://www.ncbi.nlm.nih.gov/pubmed/27005902).
20Yuki, Masaki, et al. "Cross-Cultural Differences in Relationship- and Group-Based Trust." Personality and Social Psychology Bulletin, Vol. 31, No. 1, 2005 (http://journals.sagepub.com/doi/abs/10.1177/0146167204271305).
21Siau and Shen (see 8).
22AGI, or "strong AI," refers to AI technology that can perform almost all (if not all) of the intellectual tasks that a human being can.
23Kusumasondjaja, Sony, Tekle Shanka, and Christopher Marchegiani. "Credibility of Online Reviews and Initial Trust: The Roles of Reviewer's Identity and Review Valence." Journal of Vacation Marketing, Vol. 18, No. 3, 2012 (http://journals.sagepub.com/doi/abs/10.1177/1356766712449365).
24Hengstler et al. (see 5).
25Hengstler et al. (see 5).
26Gray, Alex. "These Are the Jobs Most Likely to Be Taken by Robots." World Economic Forum, 15 December 2017 (https://www.weforum.org/agenda/2017/12/robots-coming-for-800-million-jobs).
Keng Siau is Chair of the Department of Business and Information
Technology at the Missouri University of Science and Technology.
Previously, he was the Edwin J. Faulkner Chair Professor and Full
Professor of Management at the University of Nebraska-Lincoln
(UNL), where he was Director of the UNL-IBM Global Innovation
Hub. Dr. Siau has written more than 250 academic publications and is
consistently ranked as one of the top information systems researchers
in the world based on h-index and productivity rate. His research has
been funded by the US National Science Foundation, IBM, and other
IT organizations. Dr. Siau has received numerous teaching, research,
service, and leadership awards, including the International Federation
for Information Processing Outstanding Service Award, the IBM
Faculty Award, and the IBM Faculty Innovation Award. He received
his PhD in business administration from the University of British
Columbia. He can be reached at siauk@mst.edu.
Weiyu Wang is currently pursuing a master of science degree in
information science and technology at Missouri University of Science
and Technology. Ms. Wang received an MBA from the Missouri
University of Science and Technology. She can be reached at
wwpmc@mst.edu.