Building Trust in Artificial Intelligence,
Machine Learning, and Robotics
TRUST IS HARD TO COME BY
by Keng Siau and Weiyu Wang
In 2016, Google DeepMind's AlphaGo beat 18-time world champion Lee Sedol in the abstract strategy board game Go.1 This win was a triumphant moment for artificial intelligence (AI) that thrust AI more prominently into the public view. Both AlphaGo Zero and AlphaZero have since demonstrated the enormous potential of artificial intelligence and deep learning. Furthermore, self-driving cars, drones, and home robots are proliferating and advancing rapidly. AI is now part of our everyday life, and its encroachment is expected to intensify.2, 3 Trust, however, is a potential stumbling block. Indeed, trust is key to ensuring the acceptance and continuing progress and development of artificial intelligence.
In this article, we look at trust in articial intelligence,
machine learning (ML), and robotics. We rst review
the concept of trust in AI and examine how trust in
AI may be dierent from trust in other technologies.
We then discuss the dierences between interpersonal
trust and trust in technology and suggest factors that
are crucial in building initial trust and developing
continuous trust in articial intelligence.
What Is Trust?
The level of trust a person has in someone or something can determine that person's behavior. Trust is a primary reason for acceptance.4 Trust is crucial in all kinds of relationships, such as human-social interactions,5, 6 seller-buyer relationships,7, 8 and relationships among members of a virtual team.9 Trust can also define the way people interact with technology.10, 11
Trust is viewed as: (1) a set of specific beliefs dealing with benevolence, competence, integrity, and predictability (trusting beliefs); (2) the willingness of one party to depend on another in a risky situation (trusting intention); or (3) the combination of these elements. Table 1 lists some concepts and antecedents of trust in interpersonal relationships and trust people have toward specific types of technology, such as mobile technology and information systems. The conceptualization of trust in human-machine interaction is, however, slightly different (see Table 2).

Table 1 Conceptualization of trust and its antecedents.

Table 2 Trust conceptualization in human-machine interaction.
Compared to trust in an interpersonal relationship, in which the trustor and trustee are both humans, the trustee in a human-technology/human-machine relationship could be either the technology per se and/or the technology provider. Further, trust in technology and trust in the provider will influence each other (see Figure 1).

Figure 1 Trust in technology interacts with trust in the provider of the technology.
Trust is dynamic. Empirical evidence has shown that trust is typically built up in a gradual manner, requiring ongoing two-way interactions.12, 13 However, sometimes a trustor can decide whether to trust the trustee, such as an object or a relationship, before getting firsthand knowledge of the trustee or having any kind of experience with the trustee. For example, when two persons meet for the first time, the first impression will affect the trust between these two persons. In such situations, trust will be built based on an individual's disposition or institutional cues.14 This kind of trust is called initial trust, which is essential for promoting the adoption of a new technology.15 Both initial trust formation and continuous trust development deserve special attention.16 In the context of trust in AI, both initial trust formation and continuous trust development should be considered.
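As a purely illustrative aside (this sketch is ours, not drawn from the cited literature), the two-phase dynamic can be expressed in a few lines of code. The weights and update rates are hypothetical, chosen only to exhibit the pattern: initial trust is seeded from the trustor's disposition and institutional cues, and continuous trust then grows gradually with successful interactions but drops sharply after a failure.

```python
# Illustrative toy model of initial vs. continuous trust.
# All coefficients are hypothetical; they only demonstrate the
# two-phase dynamic described in the text.

def initial_trust(disposition: float, institutional_cues: float) -> float:
    """Trust before any firsthand experience (inputs in [0, 1])."""
    return 0.5 * disposition + 0.5 * institutional_cues

def update_trust(trust: float, interaction_success: bool) -> float:
    """Continuous trust: builds slowly on success, drops sharply on failure."""
    if interaction_success:
        return trust + 0.05 * (1.0 - trust)  # gradual gain toward 1.0
    return trust * 0.5                       # a single failure halves trust

trust = initial_trust(disposition=0.6, institutional_cues=0.8)
for outcome in [True, True, True, False, True]:
    trust = update_trust(trust, outcome)
    print(f"after outcome={outcome}: trust={trust:.2f}")
```

The asymmetry in the update rule anticipates the observation in our conclusion: trust takes time to build and seconds to break.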
What Is the Difference Between Trust
in AI and Trust in Other Technologies?
Trust in technology is determined by human characteristics,17 environment characteristics,18 and technology characteristics.19 Figure 2 shows the factors and dimensions of trust in technology.

Figure 2 Factors and dimensions of trust in technology.
Human characteristics basically consider the human's personality, the trustor's disposition to trust, and the trustee's ability to deal with risks. The trustor's personality or disposition to trust could be thought of as the general willingness to trust others, and it depends on different experiences, personality types, and cultural backgrounds. Ability usually refers to a trustee's competence/group of skills to complete tasks in a specific domain. For instance, if an employee is very competent in negotiation, the manager may trust the employee when he or she takes charge of negotiating contracts.
Environment characteristics focus on elements such as the nature of the tasks, cultural background, and institutional factors. Tasks can be of different natures. For example, a task can be very important or a task can be trivial. Cultural background can be based on ethnicity, race, religion, and socioeconomic status. Cultural background can also be associated with a country or a particular region. For instance, Americans tend to trust strangers who share the same group memberships, and Japanese tend to trust those who share direct or indirect relationship links.20 Institutional factors indicate the impersonal structures that enable one to act in anticipation of a successful future endeavor. Institutional factors, according to the literature, include two main aspects: situational normality and structural assurances. Situational normality means the situation is normal and everything is in proper order. Structural assurances refer to contextual conditions such as promises, contracts, guarantees, and regulations.
No maer who or what the trustee is, whether it is a
human, a form of AI, or an object such as an organiza-
tion or a virtual team, the impact of human character-
istics and environment characteristics will be roughly
similar. For instance, a person with a high-trusting
stance would be more likely to accept and depend
on others, such as a new technology or a new team
member. Similarly, it will be easier for a technology
provided by an institution/organization with a high
reputation to gain trust from users than it would be for
a similar technology from an institution/organization
without such a reputation.
Technology characteristics can be analyzed from three perspectives: (1) the performance of the technology, (2) its process/attributes, and (3) its purpose. Although human and environment characteristics are fairly similar irrespective of trustee, the technology characteristics impacting trust will be different for AI, ML, and robotics than they are for other objects or humans. Since artificial intelligence has many new features compared to other technologies, its performance, process, and purpose need to be defined and considered. Using a two-stage model of trust building,21 Table 3 shows the technology features related to AI's performance, process, and purpose, and their impact on trust.

Table 3 Technology features of AI that affect trust building.
Building Initial Trust in AI
Several factors are at play during trust building. These
factors include the following:
Representation. Representation plays an important role in initial trust building, and that is why humanoid robots are so popular. The more a robot looks like a human, the more easily people can establish an emotional connection with it. A robot dog is another example of an AI representation that humans find easier to trust. Dogs are humans' best friends and represent loyalty and diligence.
Image/perception. Sci-fi books and movies have given AI a bad image, one in which the intelligence we create gets out of control. Artificial general intelligence (AGI) or strong AI22 can be a serious threat. This image and perception will affect people's initial trust in AI.
Reviews from other users. Reading online reviews is common these days. Compared with a negative review, a positive review leads to greater initial trust.23 Reviews from other users will affect the initial trust level.
Transparency and explainability. To trust AI applications, we need to understand how they are programmed and what functions will be performed in certain conditions. This transparency is important, and AI should be able to explain/justify its behaviors and decisions. One of the challenges in machine learning and deep learning is the black-box nature of the ML and decision-making processes. If the explainability of the AI application is poor or missing, trust is affected. (A small sketch of what such an explanation might look like follows this list.)
Trialability. Trialability means the opportunity for people to have access to the AI application and to try it before accepting or adopting it. Trialability enhances understanding. In an article in Technological Forecasting and Social Change, Monika Hengstler et al. state that when you survey the perception of new technologies across generations, you typically see resistance appear from people who are not users of the technology.24 Thus, providing chances for potential users to try the new technology will promote higher initial trust.
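To make the transparency and explainability factor concrete, consider a small, hypothetical sketch of an AI application justifying its decision. The loan-screening features, weights, and threshold below are invented for illustration; the point is that a model with transparent structure can report, in human-readable terms, why it decided what it decided.

```python
# Toy example of an explainable decision: a linear scoring model
# that reports each feature's contribution alongside its verdict.
# Features, weights, and the approval threshold are hypothetical.

WEIGHTS = {"income": 0.5, "years_employed": 0.3, "debt_ratio": -0.8}
THRESHOLD = 0.6

def score_and_explain(applicant: dict) -> tuple[bool, list[str]]:
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    total = sum(contributions.values())
    reasons = [
        f"{name}: {value:+.2f}"
        for name, value in sorted(contributions.items(),
                                  key=lambda kv: abs(kv[1]), reverse=True)
    ]
    reasons.append(f"total score {total:.2f} vs. threshold {THRESHOLD}")
    return total >= THRESHOLD, reasons

approved, reasons = score_and_explain(
    {"income": 1.2, "years_employed": 0.8, "debt_ratio": 0.3})
print("approved" if approved else "declined")
print("\n".join(reasons))
```

Deep models are rarely this transparent, which is why the black-box problem resurfaces under interpretability in the next section.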
Developing Continuous Trust in AI
Once trust is established, it must be nurtured and
maintained. This happens through the following:
Usability and reliability. Performance includes the competence of AI in completing tasks and finishing those tasks in a consistent and reliable manner. The AI application should be designed to operate easily and intuitively. There should be no unexpected downtime or crashes. Usability and reliability contribute to continuous trust.
Collaboration and communication. Although most AI applications are developed to perform tasks independently, the most likely scenario in the short term is that people will work in partnership with intelligent machines. Whether collaboration and communication can be carried out smoothly and easily will affect continuous trust.
Sociability and bonding. Humans are social animals. Continuous trust can be enhanced with social activities. A robot dog that can recognize its owner and show affection may be treated like a pet dog, establishing an emotional connection and trust.
Security and privacy protection. Operational safety and data security are two prominent factors that influence trust in technology.25 People are unlikely to trust anything that is too risky to operate. Data security, for instance, is important because machine learning relies on large data sets, making privacy a concern. (A toy privacy-protection sketch appears after this list.)
Interpretability. Most ML models are black boxes, inscrutable to their users. To address this problem, it is necessary to design interpretable models and provide the ability for the machine to explain its conclusions or actions. This could help users understand the rationale for the outcomes and the process of deriving the results. Transparency and explainability, as discussed in initial trust building, are important for continuous trust as well. (The permutation-importance sketch after this list illustrates one way to probe a black-box model.)
Job replacement. Artificial intelligence can surpass human performance in many jobs and replace human workers. AI will continue to enhance its capability and infiltrate more domains. Concern about AI taking jobs and replacing human employees will impede people's trust in artificial intelligence. For example, those whose jobs may be replaced by AI may not want to trust it. Some predict that more than 800 million workers, about a fifth of the global labor force, might lose their jobs soon.26 Lower-skilled, repetitive, and dangerous jobs are among those likely to be taken over by machines. Providing retraining and education to affected employees will help mitigate this effect on continuous trust.
Goal congruence. Since artificial intelligence has the potential to demonstrate and even surpass human intelligence, it is understandable that people treat it as a threat. And AI should be perceived as a potential threat, especially AGI! Making sure that AI's goals are congruent with human goals is a prerequisite for maintaining continuous trust. Ethics and governance of artificial intelligence are areas that need more attention.
Practical Implications and Conclusions
Articial intelligence is here, and AI applications will
become more and more prevalent. Trust is crucial in the
development and acceptance of AI. In addition to the
human and environment characteristics, which aect
trust in other humans, objects, and AI, trust in AI, ML,
and robotics is aected by the unique technology
features of articial intelligence.
To enhance trust, practitioners can try to maximize the technological features in AI systems based on the factors listed in Table 3. The representation of an AI as a humanoid or a loyal pet (e.g., dog) will facilitate initial trust formation. The image and perception of AI as a terminator (like in the Terminator movies) will hinder initial trust. In this Internet age, reviews are critical, as is the ability of artificial intelligence to be transparent and able to explain its behavior/decisions. These are important for initial trust formation. The ability to try out AI applications will also have an impact on initial trust.
Trust building is a dynamic process, involving movement from initial trust to continuous trust development. Continuous trust will depend on the performance and purpose of the artificial intelligence. AI applications that are easy to use and reliable, collaborate and interface well with humans, have social ability, facilitate bonding with humans, provide good security and privacy protection, and explain the rationale behind their conclusions or actions will facilitate continuous trust development. A lack of clarity over job replacement and displacement by AI, along with AI's potential threat to the existence of humanity, breeds distrust and hampers continuous trust development.

Trust is the cornerstone of humanity's relationship with artificial intelligence. Like any type of trust, trust in AI takes time to build, seconds to break, and forever to repair once it is broken!
Endnotes
1. "AlphaGo Versus Lee Sedol." Wikipedia (https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol).

2. Siau, Keng. "Impact of Artificial Intelligence, Robotics, and Automation on Higher Education." Proceedings of the 23rd Americas Conference on Information Systems 2017 (AMCIS 2017), Boston, Massachusetts, USA, 10-12 August 2017 (http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1579&context=amcis2017).

3. Siau, Keng, and Ying Yang. "Impact of Artificial Intelligence, Robotics, and Machine Learning on Sales and Marketing." Proceedings of the 12th Annual Midwest Association for Information Systems Conference (MWAIS 2017), Springfield, Illinois, USA, 18-19 May 2017 (https://aisel.aisnet.org/mwais2017/48).

4. Gefen, David, Elena Karahanna, and Detmar W. Straub. "Trust and TAM in Online Shopping: An Integrated Model." MIS Quarterly, Vol. 27, No. 1, March 2003 (https://pdfs.semanticscholar.org/e2e8/6748e6abeb7c1077ee7a93029fa4d7843d70.pdf).

5. Hengstler, Monika, Ellen Enkel, and Selina Duelli. "Applied Artificial Intelligence and Trust — The Case of Autonomous Vehicles and Medical Assistance Devices." Technological Forecasting and Social Change, Vol. 105, April 2016 (https://www.sciencedirect.com/science/article/pii/S0040162515004187).

6. McKnight, D. Harrison, Larry L. Cummings, and Norman L. Chervany. "Initial Trust Formation in New Organizational Relationships." Academy of Management Review, Vol. 23, No. 3, 1998 (https://www.jstor.org/stable/259290?seq=1#page_scan_tab_contents).

7. Gefen, David. "E-Commerce: The Role of Familiarity and Trust." Omega, Vol. 28, No. 6, 2000 (https://www.sciencedirect.com/science/article/pii/S0305048300000219).

8. Siau, Keng, and Zixing Shen. "Building Customer Trust in Mobile Commerce." Communications of the ACM, Vol. 46, No. 4, April 2003 (https://cacm.acm.org/magazines/2003/4/6855-building-customer-trust-in-mobile-commerce/abstract).

9. Coppola, Nancy W., N. Rotter, and Starr Roxanne Hiltz. "Building Trust in Virtual Teams." IEEE Transactions on Professional Communication, Vol. 47, No. 2, 2004 (https://www.researchgate.net/profile/N_Rotter/publication/3230326_Building_Trust_in_Virtual_Teams/links/00b495177d91ab09c8000000/Building-Trust-in-Virtual-Teams.pdf).

10. Li, Xin, Traci J. Hess, and Joseph S. Valacich. "Why Do We Trust New Technology? A Study of Initial Trust Formation with Organizational Information Systems." Journal of Strategic Information Systems, Vol. 17, No. 1, 2008 (https://www.sciencedirect.com/science/article/abs/pii/S0963868708000036).

11. Siau, Keng, et al. "A Qualitative Investigation on Consumer Trust in Mobile Commerce." International Journal of Electronic Business, Vol. 2, No. 3, 2004 (http://cba.unl.edu/research/articles/645/).

12. Gefen (see 7).

13. McKnight et al. (see 6).

14. McKnight et al. (see 6).

15. Li et al. (see 10).

16. Siau and Shen (see 8).

17. Hengstler et al. (see 5).

18. Oleson, Kristin E., et al. "Antecedents of Trust in Human-Robot Collaborations." Proceedings of the 1st International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support. IEEE, 2011 (http://ieeexplore.ieee.org/document/5753439/).

19. Schaefer, Kristin E., et al. "A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems." Human Factors, Vol. 58, No. 3, 2016 (https://www.ncbi.nlm.nih.gov/pubmed/27005902).

20. Yuki, Masaki, et al. "Cross-Cultural Differences in Relationship- and Group-Based Trust." Personality and Social Psychology Bulletin, Vol. 31, No. 1, 2005 (http://journals.sagepub.com/doi/abs/10.1177/0146167204271305).

21. Siau and Shen (see 8).

22. "AGI" or "strong AI" refers to the AI technology that can perform almost (if not) all the intellectual tasks that a human being can.

23. Kusumasondjaja, Sony, Tekle Shanka, and Christopher Marchegiani. "Credibility of Online Reviews and Initial Trust: The Roles of Reviewer's Identity and Review Valence." Journal of Vacation Marketing, Vol. 18, No. 3, 2012 (http://journals.sagepub.com/doi/abs/10.1177/1356766712449365).

24. Hengstler et al. (see 5).

25. Hengstler et al. (see 5).

26. Gray, Alex. "These Are the Jobs Most Likely to Be Taken by Robots." World Economic Forum, 15 December 2017 (https://www.weforum.org/agenda/2017/12/robots-coming-for-800-million-jobs?utm_content=buffer8f3&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer).
Keng Siau is Chair of the Department of Business and Information Technology at the Missouri University of Science and Technology. Previously, he was the Edwin J. Faulkner Chair Professor and Full Professor of Management at the University of Nebraska-Lincoln (UNL), where he was Director of the UNL-IBM Global Innovation Hub. Dr. Siau has written more than 250 academic publications and is consistently ranked as one of the top information systems researchers in the world based on h-index and productivity rate. His research has been funded by the US National Science Foundation, IBM, and other IT organizations. Dr. Siau has received numerous teaching, research, service, and leadership awards, including the International Federation for Information Processing Outstanding Service Award, the IBM Faculty Award, and the IBM Faculty Innovation Award. He received his PhD in business administration from the University of British Columbia. He can be reached at siauk@mst.edu.

Weiyu Wang is currently pursuing a master of science degree in information science and technology at Missouri University of Science and Technology. Ms. Wang received an MBA from the Missouri University of Science and Technology. She can be reached at wwpmc@mst.edu.