A Theoretical Model for Trust in Automated Systems

Kevin Hoff
Information Trust Institute
University of Illinois
Urbana, IL 61801
hoff1@illinois.edu

Masooda Bashir
Information Trust Institute
University of Illinois
Urbana, IL 61801
mnb@illinois.edu

Copyright is held by the author/owner(s).
CHI 2013 Extended Abstracts, April 27-May 2, 2013, Paris, France.
ACM 978-1-4503-1952-2/13/04.
Abstract
The concept of trust in automation has received a great
deal of attention in recent years in response to modern
society’s ever-increasing usage of automated systems.
Researchers have used a variety of different automated
systems in unique experimental paradigms to identify
factors that regulate the trust formation process. In this
work-in-progress report, we propose a preliminary,
theoretical model of factors that influence trust in
automation. Our model utilizes three layers of analysis
(dispositional trust, situational trust, and learned trust)
to explain the variability of trust in a wide range of
circumstances. We are in the process of verifying
certain aspects of the model empirically, but our
current framework provides a useful perspective for
future investigations into the intricacies of trust in
automated systems.
Author Keywords
Trust; Automation; Human-computer Interaction;
Human Factors
ACM Classification Keywords
H.1.2 [Models and Principles]: User/Machine Systems - Human Factors
Introduction
In today’s world, humans trust automated systems on a day-to-day basis, often without realizing it. This trust is usually rewarded, as automation has greatly improved the efficiency of many processes. Still, automation-related accidents occur regularly, and they are sometimes caused by humans misusing (over-trusting) or disusing (under-trusting) systems.
Researchers have addressed this issue by studying the
intricacies of human trust in automation. Trust is a
particularly influential variable that can determine the
willingness of humans to rely on automated systems to
perform tasks that could also be completed manually.
Experimental research has identified numerous factors
that regulate both the formation and deterioration of
trust in automated systems. Much of this research has
concentrated on the design of interactive automation
such as collision warning systems [12] and automated
decision aids [9]. This research has confirmed that a
user’s trust in an automated system depends largely on
the performance and design of the system. However,
this is but one part of the larger picture. Research has
also shown that trust varies depending on the trust
propensity of a user [8] and the context of an
interaction [10]. In this work-in-progress report, we
integrate the findings of a wide variety of studies
focused on factors that influence trust in interactive
automated systems to present a theoretical model of
trust in automation.
Our model (see figure 1) is designed to capture the
dynamic nature of a user’s interaction with an
automated system. The model displays the complexities
of trust by incorporating three broad sources of trust
variability at the dispositional, situational, and learned
level of analysis. While it remains purely theoretical at
this point, we are in the process of empirically verifying
some of its untested components. Meanwhile, our
unique three-layered perspective provides a simple,
useful framework for future investigations into the
variability of human trust in automation.
Background and Related Work
Trust may seem like a straightforward concept, but its
significance varies greatly depending on the context.
On the Internet, trust guides users’ acceptance of
information. Prior work in the CHI community has
concentrated on factors that influence trust in websites
such as credibility and usability [2]. Our trust model
includes similar interface design features that impact
trust, but we also incorporate dispositional and
situational factors. Additionally, our analysis focuses on
automation, rather than websites. Automation can be
defined as “technology that actively selects data,
transforms information, makes decisions, or controls
processes” [5]. Automated systems are everywhere in the modern world, from motion-sensing lights to GPS route-planning systems. Examples of complex automation can be found in hospitals, nuclear power plants, aircraft, and countless other industries.
Our analysis builds on both an existing review of trust
in automation [5] and a meta-analysis of factors that
influence trust in robots [3]. Hancock et al. [3]
quantitatively examined the relative weight of human,
environmental, and robot-related factors that influence
human-robot trust. In their meta-analysis of eleven
experimental studies, the authors found trust to be
most influenced by characteristics of the robot, followed
by the environment. Human-related trust factors were
found to be the least significant, but the authors
attributed this finding to the shortage of experiments
focused on individual differences. Our model shares
some of the same factors used in Hancock et al.’s
meta-analysis, but focuses on all types of automation.
Additionally, our independent analysis of the literature led us to somewhat different conclusions.
Lee and See [5] provide an extensive review of studies
on both interpersonal and human-automation trust to
explain the importance of trust in guiding our behavior
towards automation. Since that review was published in 2004, numerous studies have provided additional
insights into the variability of trust in automated
systems. For example, new details have emerged on
specific factors related to trust such as age, gender,
personality traits, preexisting moods, sleep loss,
distinct automation error types, and automation design
features [9, 12, 13, 14, 15]. Our model expands on Lee
and See’s analysis by incorporating and organizing
these recent findings into our three-layered framework.
Theoretical Model Development
The basic structure of our model (figure 1) corresponds
to the three sources of variability in all types of trusting
relationships: the truster (who gives trust), the
situation, and the trustee (who receives trust). These
variables respectively reflect the three layers of trust
displayed in our model: dispositional trust, situational
trust, and learned trust. Dispositional trust represents
the variability of individuals’ instinctive tendencies to
trust automation and does not change in the short term.
Situational trust varies depending on the specific
context of an interaction. Finally, learned trust is based
on past experiences of a user relevant to a specific
automated system and varies depending on the
characteristics of a system [7].
The scope of our model is a single interaction with an
automated system. However, during the course of one
interaction, an automated system may perform variably
and its user’s trust will likely fluctuate to correspond
with the system’s real-time performance. To capture
the interactive nature of this relationship, learned trust
is divided into two categories: initial and dynamic. Both
forms of learned trust are relative to characteristics of
the automated system; however, initial learned trust
represents a user’s trust prior to using a system, while
dynamic learned trust represents a user’s trust while
operating a system. It is also important to note that the model’s sequential structure is designed for ease of comprehension. In the real world, trust does not develop in discrete steps, and the various factors that influence a user’s trust are all interconnected.
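To make the layered structure concrete, the sketch below encodes the three layers as simple Python data classes. This is only one illustrative reading of the model; the specific field names and numeric encodings (for example, trust as a value between 0 and 1) are our own assumptions and do not appear in the paper.

```python
# Illustrative sketch only: one possible way to encode the model's three
# layers of trust variability as data structures. Field names and numeric
# ranges are hypothetical assumptions, not values taken from the paper.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class DispositionalTrust:
    # Long-term, person-level traits; assumed fixed within one interaction.
    culture: str = "unspecified"
    age: int = 0
    gender: str = "unspecified"
    personality: Dict[str, float] = field(default_factory=dict)  # e.g. {"extraversion": 0.6}

@dataclass
class SituationalTrust:
    # Context of a single interaction: environment plus context-dependent user state.
    task_type: str = "unspecified"
    perceived_risk: float = 0.0     # 0 = no perceived risk, 1 = very high risk
    self_confidence: float = 0.5    # user's confidence in performing the task manually
    mood: float = 0.0               # negative to positive affect

@dataclass
class LearnedTrust:
    # Initial learned trust is set before use (reputation, past experience,
    # understanding); dynamic learned trust is updated while operating the system.
    initial: float = 0.5
    dynamic: float = 0.5

@dataclass
class TrustState:
    dispositional: DispositionalTrust
    situational: SituationalTrust
    learned: LearnedTrust
```

Under this reading, a user’s overall trust at any moment is some combination of the three layers, with only the dynamic learned component expected to change during the interaction itself.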
The following sections describe the model’s components
in greater detail. Most of the factors listed below have
been supported by empirical evidence, but some factors
are purely theoretical at this point. Research on
individual differences in dispositional trust is especially
scarce. Our current research aims to fill this void by
studying specific personality traits and cultural values
that influence trust.
Figure 1. Full model of factors that influence trust in automation (see figures 2-5 for more details on each aspect of the model). The dotted arrows represent factors that can change within the course of one interaction.
Dispositional Trust
Dispositional trust represents an individual’s instinctive
tendency to trust automation independent of context or
a specific system. Research has shown that certain
types of individuals are more likely to trust automated
systems than others [14]. Culture, age, gender, and
personality traits are the primary sources of variability
in this most basic layer of trust (see figure 2).
The cultural identity of an individual can significantly
influence his or her trusting tendencies. Unfortunately,
very few studies have focused on the role of culture in
trust in automation, but one study found that German
participants were less likely to accept advice from a
social robot compared to Chinese participants [11].
Aging-related changes in dispositional trust can result from declines in working memory, the ability to selectively focus attention, and mental workload capacity [4]. Consistent gender differences
have not yet surfaced, but male and female children
prefer different levels of anthropomorphism in social
robots [15]. Finally, specific personality traits such as
extraversion [8], neuroticism, and agreeableness [14]
have been found to impact trust in automation.

Figure 2. Dispositional Trust Factors
Situational Trust
Trust develops differently in distinct situations, even when the user and automated system remain constant. This highlights the importance of understanding the
context of an interaction. As displayed in figure 3, there
are two broad sources of variability in situational trust:
the environment and the context-dependent
characteristics of the user.
Environmental Variability
Environmental factors must be taken into account to
fully understand the variability of trust in automation. A
user’s trust in an automated system is largely
determined by the type of system, its complexity, the
task for which it is used, and the perceived risks and
benefits of using it. For example, in one study, participants relied less on route-planning advice from a GPS system when situational risk was increased by the presence of driving hazards [10]. Even the framing of a
task can alter the trust formation process [1]. The
physical environment is significant because it can alter
situational risk, a system’s performance, and a user’s
ability to monitor a system. Lastly, trust may vary due
to organizational factors such as the existence of
human teammates or supervisors.
Context-dependent User Variability
Humans have different strengths and weaknesses in different contexts, and these individual differences can impact trust. For example, when automation users have low self-confidence in their ability to perform a task, they will likely trust and utilize automation more. Similarly, a
user’s expertise or familiarity with a particular subject
matter can impact trust [12]. The mental well-being of
a user is also significant. People in positive moods are
more likely to express greater initial trust in automation
[13] and users who are unable to carefully monitor a
system may miss automation errors that would
otherwise degrade trust.

Figure 3. Situational Trust Factors
Learned Trust
Learned trust is based on evaluations of a system that
draw from past experiences or insight gained from the
current interaction. This layer of trust varies depending
on three categories of information: preexisting knowledge, automation design features, and the
performance of an automated system. As previously
mentioned, learned trust is static when based on
preexisting knowledge (see figure 4), but variable when
based on a system’s performance (see figure 5). Design
features are significant because they can alter a user’s
subjective assessment of a system’s performance.

Figure 4. Initial Learned Trust (trust prior to use)

Figure 5. Dynamic Learned Trust (trust during use)
A noteworthy feature of our model is the
interdependent relationship between a system’s
performance, the user’s dynamic learned trust, and the
user’s reliance on the system (note the dotted arrows
in figure 5). When the performance of an automated
system impacts its user’s trust, the user’s reliance
strategy may change. In turn, the extent to which a
system is relied upon can affect its performance, thus
completing the cycle.
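As a rough illustration of this cycle, the toy simulation below updates dynamic learned trust from observed performance and lets the resulting trust level drive a simple reliance decision. The update rule, learning rate, and reliance threshold are hypothetical choices made for the sketch; the paper does not specify any quantitative form for these relationships.

```python
# Illustrative sketch only: a toy loop for the performance -> dynamic learned
# trust -> reliance -> performance cycle (dotted arrows in figure 5). All
# numeric parameters are assumptions, not values from the paper.
import random

def simulate_interaction(initial_trust=0.5, steps=20, reliability=0.8,
                         learning_rate=0.2, reliance_threshold=0.5, seed=0):
    """Return the trajectory of dynamic learned trust over one interaction."""
    rng = random.Random(seed)
    trust = initial_trust
    trajectory = [trust]
    for _ in range(steps):
        relies = trust >= reliance_threshold  # reliance strategy depends on current trust
        if relies:
            # When the system is relied upon, it either succeeds or errs,
            # and the observed outcome feeds back into trust.
            outcome = 1.0 if rng.random() < reliability else 0.0
            trust += learning_rate * (outcome - trust)
        else:
            # Without reliance, the user gathers little new evidence, so trust
            # is assumed to drift only slightly back toward its initial level.
            trust += 0.05 * (initial_trust - trust)
        trajectory.append(round(trust, 3))
    return trajectory

if __name__ == "__main__":
    print(simulate_interaction())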
Preexisting Knowledge
Before interacting with an automated system, a user’s trust is biased by preexisting expectations and by the reputation of the system or its brand. Past experience with
a system or similar systems can also alter the trust
formation process [12]. Likewise, a user’s level of
understanding about how a specific automated system
works is significant. Little empirical research has studied this effect, but it is likely that users with
greater knowledge of how a system functions can more
accurately calibrate their trust following system errors.
Design Features
The design of automation is a critical factor because it
can alter a user’s perceptions of a system’s
performance. In order to facilitate appropriate trust in
automation, designers must carefully consider the
interface’s ease-of-use, transparency, and appearance.
For example, a recent study showed that adding a
picture of a physician to the interface of a diabetes
management application led younger participants to place greater trust in the system’s advice [9]. A system’s
communication style [11], the feedback it provides, and
the level of control a user has over its functions are
also significant variables.
Performance
Research has shown that automation users adjust their
trust to reflect a system’s real-time performance [12].
As a result, a system’s reliability, validity, predictability,
and dependability are key antecedents of trust.
Additionally, a user’s trust may respond differently to
automation errors depending on the timing of an error
[12], the perceived difficulty of the error [6], and
whether the error is a “miss” or “false alarm” [12]. The
perceived usefulness of a system is another factor that
can impact trust, but to date this has primarily been
researched in the e-commerce domain.
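One way to picture these performance-related effects is an update rule in which different error types carry different trust penalties, scaled by how easy the failed task appeared. The penalty values and scaling below are purely hypothetical; the cited studies report directional differences, not magnitudes.

```python
# Illustrative sketch only: a hypothetical, event-dependent trust update in
# which misses, false alarms, and correct detections affect trust differently,
# and errors on seemingly easy tasks are weighted more heavily [6, 12].
# All numeric values are assumptions chosen for illustration.
DELTAS = {
    "correct": +0.05,      # correct detection nudges trust upward
    "false_alarm": -0.10,  # alarm without a real event
    "miss": -0.20,         # real event without an alarm (assumed more damaging here)
}

def update_trust(trust, event, perceived_difficulty=0.5):
    """Return trust in [0, 1] after one event, weighting errors on easy tasks more."""
    delta = DELTAS[event]
    if delta < 0:
        # Failures on tasks perceived as easy are assumed to erode trust more.
        delta *= 1.0 + (1.0 - perceived_difficulty)
    return max(0.0, min(1.0, trust + delta))

print(update_trust(0.7, "miss", perceived_difficulty=0.2))          # larger drop
print(update_trust(0.7, "false_alarm", perceived_difficulty=0.8))   # smaller drop
```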
Conclusion
Trust is a particularly influential variable that governs
human reliance on automation. However, trust is not
the only important variable. In certain circumstances,
other situational factors such as the effort to engage a
system, the alternatives to using automation, and a
user’s physical well-being can alter the user’s reliance
strategy without affecting his or her trust. These factors
are included in the model because they can indirectly
influence trust by altering a user’s reliance strategy.
Displaying the variability of trust through an all-inclusive lens will help expand the current conceptualization of how human-automation interaction works. As our model shows, the complexities of trust can be reduced to three broad layers of variability:
dispositional trust, situational trust, and learned trust.
Distinct factors influence each layer, but any given
user’s trust in an automated system is a compilation of
the user’s trusting tendencies, the situation, and the
user’s perceptions of the system.
Our current research is focused on empirically verifying
certain aspects of the model. We are in the process of
identifying additional personality traits and cultural
values that alter trust. In the meantime, our model
provides a useful framework for future studies focused
on factors that influence trust in automated systems.
References
[1] Bisantz, A.M., and Seong, Y. (2001). Assessment of
operator trust in and utilization of automated decision-
aids under different framing conditions. International
Journal of Industrial Ergonomics, 28 (2), 85-97.
[2] Fogg, B.J., et al. (2000). Elements that affect web
credibility: Early results from a self-report study. Ext.
Abstracts CHI 2000, ACM Press, 287-288.
[3] Hancock, P.A., et al. (2011). A meta-analysis of
factors impacting trust in human-robot interaction.
Human Factors, 53 (5), 517-527.
[4] Ho, G., Kiff, L.M., Plocher, T., and Haigh, K.Z.
(2005). A model of trust & reliance of automation
technology for older users. AAAI-2005 Fall Symposium:
Caring Machines: AI in Eldercare.
[5] Lee, J.D. and See, K.A. (2004). Trust in
automation: designing for appropriate reliance. Human
Factors, 46 (1), 50-80.
[6] Madhavan, P., Wiegmann, D.A. and Lacson, F.C.
(2004). Occasional automation failures on easy tasks
undermines trust in automation. Proc. of 112th Annual
Meet. of the American Psych. Association.
[7] Marsh, S. and Dibben, M.R. (2003). The role of
trust in information science and technology. Annual
Review of Information Science and Technology, 37,
465-498.
[8] Merritt, S.M., and Ilgen, D.R. (2008). Not all trust
is created equal: Dispositional and history-based trust
in human-automation interaction. Human Factors, 50, 194-210.
[9] Pak, R., et al. (2010). Decision support aids with
anthropomorphic characteristics influence trust and
performance in younger and older adults.
Ergonomics, 55 (9), 1059-1072.
[10] Perkins, L.A., Miller, J.E., Hashemi, A. and Burns,
G. (2010). Designing for Human-Centered Systems:
Situational Risk as a Factor of Trust in Automation.
Proc. of the 54th Annual Meet. of the Human Fact. and
Erg. Soc., 2130-2134.
[11] Rau, P.L., Li, Y., and Li, D. (2009). Effects of
communication style and culture on ability to accept
recommendations from robots. Computers in Human
Behavior, 25 (2), 587-595.
[12] Sanchez, J., Rogers, W.A., Fisk, A.D., & Rovira, E.
(2011). Understanding reliance on automation: effects
of error type, error distribution, age and experience. Theoretical Issues in Ergonomics Science, 1-27.
[13] Stokes, C.K., et al. (2010). Accounting for the
human in cyberspace: Effects of mood on trust in
automation. Proc. of the IEEE Internat. Symp. on
Collaborative Technologies and Systems, 180-187.
[14] Szalma, J.L. and Taylor, G. S. (2011). Individual
differences in response to automation: the five factor
model of personality. Journal of Experimental
Psychology: Applied, 17 (2), 71-96.
[15] Tung, F-W. (2011). Influence of gender and age on the attitudes of children towards humanoid robots. In J.A. Jacko (Ed.), Human-Computer Interaction, Part IV, 637-646.