A Theoretical Model for Trust in Automated Systems

Abstract
The concept of trust in automation has received a great
deal of attention in recent years in response to modern
society’s ever-increasing usage of automated systems.
Researchers have used a variety of different automated
systems in unique experimental paradigms to identify
factors that regulate the trust formation process. In this
work-in-progress report, we propose a preliminary,
theoretical model of factors that influence trust in
automation. Our model utilizes three layers of analysis
(dispositional trust, situational trust, and learned trust)
to explain the variability of trust in a wide range of
circumstances. We are in the process of verifying
certain aspects of the model empirically, but our
current framework provides a useful perspective for
future investigations into the intricacies of trust in
automation.

Author Keywords
Trust; Automation; Human-computer Interaction;
ACM Classification Keywords
H.1.2 [Models and Principles]: User/Machine Systems--
Introduction
In today’s world, humans trust automated
systems on a day-to-day basis, often without realizing
it. This trust is usually rewarded as automation has
greatly improved the efficiency of many processes. Still,
automation-related accidents occur regularly, and they
are sometimes caused by humans misusing (over-
trusting) or disusing (under-trusting) systems.
Researchers have addressed this issue by studying the
intricacies of human trust in automation. Trust is a
particularly influential variable that can determine the
willingness of humans to rely on automated systems to
perform tasks that could also be completed manually.
Experimental research has identified numerous factors
that regulate both the formation and deterioration of
trust in automated systems. Much of this research has
concentrated on the design of interactive automation
such as collision warning systems and automated
decision aids. This research has confirmed that a
user’s trust in an automated system depends largely on
the performance and design of the system. However,
this is but one part of the larger picture. Research has
also shown that trust varies depending on the trust
propensity of a user and the context of an
interaction. In this work-in-progress report, we
integrate the findings of a wide variety of studies
focused on factors that influence trust in interactive
automated systems to present a theoretical model of
trust in automation.
Our model (see figure 1) is designed to capture the
dynamic nature of a user’s interaction with an
automated system. The model displays the complexities
of trust by incorporating three broad sources of trust
variability at the dispositional, situational, and learned
levels of analysis. While it remains purely theoretical at
this point, we are in the process of empirically verifying
some of its untested components. Meanwhile, our
unique three-layered perspective provides a simple,
useful framework for future investigations into the
variability of human trust in automation.
Background and Related Work
Trust may seem like a straightforward concept, but its
significance varies greatly depending on the context.
On the Internet, trust guides users’ acceptance of
information. Prior work in the CHI community has
concentrated on factors that influence trust in websites
such as credibility and usability [2]. Our trust model
includes similar interface design features that impact
trust, but we also incorporate dispositional and
situational factors. Additionally, our analysis focuses on
automation, rather than websites. Automation can be
defined as “technology that actively selects data,
transforms information, makes decisions, or controls
processes” [5]. Automated systems are everywhere in
the modern world, from motion-sensing lights to GPS
route-planning systems. Examples of complex
automation can be found in hospitals, nuclear power
plants, aircraft, and countless other industries.
Our analysis builds on both an existing review of trust
in automation [5] and a meta-analysis of factors that
influence trust in robots [3]. Hancock et al. [3]
quantitatively examined the relative weight of human,
environmental, and robot-related factors that influence
human-robot trust. In their meta-analysis of eleven
experimental studies, the authors found trust to be
most influenced by characteristics of the robot, followed
by the environment. Human-related trust factors were
found to be the least significant, but the authors
attributed this finding to the shortage of experiments
focused on individual differences. Our model shares
some of the same factors used in Hancock et al.’s
meta-analysis, but focuses on all types of automation.
Additionally, our independent analysis of the literature
led us to somewhat different conclusions.
Lee and See [5] provide an extensive review of studies
on both interpersonal and human-automation trust to
explain the importance of trust in guiding our behavior
towards automation. Since this paper was published in
2004, numerous studies have provided additional
insights into the variability of trust in automated
systems. For example, new details have emerged on
specific factors related to trust such as age, gender,
personality traits, preexisting moods, sleep loss,
distinct automation error types, and automation design
features [9, 12, 13, 14, 15]. Our model expands on Lee
and See’s analysis by incorporating and organizing
these recent findings into our three-layered framework.
Theoretical Model Development
The basic structure of our model (figure 1) corresponds
to the three sources of variability in all types of trusting
relationships: the truster (who gives trust), the
situation, and the trustee (who receives trust). These
variables respectively reflect the three layers of trust
displayed in our model: dispositional trust, situational
trust, and learned trust. Dispositional trust represents
the variability of individuals’ instinctive tendencies to
trust automation and cannot change in the short term.
Situational trust varies depending on the specific
context of an interaction. Finally, learned trust is based
on past experiences of a user relevant to a specific
automated system and varies depending on the
characteristics of the system.
The context of our model is one interaction with an
automated system. However, during the course of one
interaction, an automated system may perform variably
and its user’s trust will likely fluctuate to correspond
with the system’s real-time performance. To capture
the interactive nature of this relationship, learned trust
is divided into two categories: initial and dynamic. Both
forms of learned trust are relative to characteristics of
the automated system; however, initial learned trust
represents a user’s trust prior to using a system, while
dynamic learned trust represents a user’s trust while
operating a system. It is also important to note that the
model’s sequential structure is a simplification intended
to ease comprehension. In the real world, trust does
not develop in discrete steps, and the various factors
that influence a user’s trust are interconnected.
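To make the three-layered structure concrete, the following minimal Python sketch encodes the layers and the initial/dynamic split of learned trust. It is purely illustrative: the model itself is qualitative, and every type and field name below is a hypothetical placeholder of ours, not part of the model.

    from dataclasses import dataclass

    @dataclass
    class DispositionalTrust:
        """Stable, context-independent trusting tendency shaped by
        culture, age, gender, and personality; fixed in the short term."""
        propensity: float  # e.g., a score from a trust-propensity scale

    @dataclass
    class SituationalTrust:
        """The context of one interaction: the environment plus the
        context-dependent state of the user."""
        perceived_risk: float
        user_self_confidence: float
        user_mood: float

    @dataclass
    class LearnedTrust:
        """Evaluations of the specific system, split as in the model."""
        initial: float  # trust prior to use (reputation, past experience)
        dynamic: float  # trust during use, tracking real-time performance

    @dataclass
    class TrustState:
        """One user's trust toward one system during one interaction."""
        dispositional: DispositionalTrust
        situational: SituationalTrust
        learned: LearnedTrust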
The following sections describe the model’s components
in greater detail. Most of the factors listed below have
been supported by empirical evidence, but some factors
are purely theoretical at this point. Research on
individual differences in dispositional trust is especially
scarce. Our current research aims to fill this void by
studying specific personality traits and cultural values
that influence trust.
Figure 1. Full model of factors that influence trust in
automation (see figures 2-5 for more detail on each
aspect of the model). The dotted arrows represent
factors that can change within the course of one
interaction.
Dispositional Trust
Dispositional trust represents an individual’s instinctive
tendency to trust automation independent of context or
a specific system. Research has shown that certain
types of individuals are more likely to trust automated
systems than others [8]. Culture, age, gender, and
personality traits are the primary sources of variability
in this most basic layer of trust (see figure 2).
The cultural identity of an individual can significantly
influence his or her trusting tendencies. Unfortunately,
very few studies have examined how culture affects
trust in automation, but one study found that German
participants were less likely to accept advice from a
social robot than Chinese participants [11].
Aging-related changes in dispositional trust can result
from cognitive declines in working memory, the
ability to selectively focus attention, and mental
workload capacity [4]. Consistent gender differences
have not yet surfaced, but male and female children
prefer different levels of anthropomorphism in social
robots [15]. Finally, specific personality traits such as
extraversion [8], neuroticism, and agreeableness [14]
have been found to impact trust in automation.
Situational Trust
Trust develops differently in distinct situations, even
when the user and automated system remain constant.
This highlights the importance of understanding the
context of an interaction. As displayed in figure 3, there
are two broad sources of variability in situational trust:
the environment and the context-dependent
characteristics of the user.
Environmental Variability
Environmental factors must be taken into account to
fully understand the variability of trust in automation. A
user’s trust in an automated system is largely
determined by the type of system, its complexity, the
task for which it is used, and the perceived risks and
benefits of using it. For example, in one study
participants used route-planning advice from a GPS
system less when situational risk increased through the
presence of driving hazards [10]. Even the framing of a
task can alter the trust formation process [1]. The
physical environment is significant because it can alter
situational risk, a system’s performance, and a user’s
ability to monitor a system. Lastly, trust may vary due
to organizational factors such as the existence of
human teammates or supervisors.
Context-dependent User Variability
Humans have different strengths and weaknesses in
unique contexts and these individualities can impact
trust. For example, when automation users have low
self-confidence in their ability to perform a task, they
will likely trust and utilize automation more. Similarly, a
user’s expertise or familiarity with a particular subject
matter can impact trust.
a user is also significant. People in positive moods are
more likely to express greater initial trust in automation
[13], and users who are unable to carefully monitor a
system may miss automation errors that would
otherwise degrade trust.
Learned Trust
Learned trust is based on evaluations of a system that
draw from past experiences or insight gained from the
current interaction. This layer of trust varies depending
on three categories of information: preexisting
knowledge, automation design features, and the
performance of an automated system. As previously
mentioned, learned trust is static when based on
preexisting knowledge (see figure 4), but variable when
based on a system’s performance (see figure 5). Design
features are significant because they can alter a user’s
subjective assessment of a system’s performance.
A noteworthy feature of our model is the
interdependent relationship between a system’s
performance, the user’s dynamic learned trust, and the
user’s reliance on the system (note the dotted arrows
in figure 5). When the performance of an automated
system impacts its user’s trust, the user’s reliance
strategy may change. In turn, the extent to which a
system is relied upon can affect its performance, thus
completing the cycle.
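As a purely illustrative sketch of this cycle (the dotted arrows in figure 5), the short Python loop below couples a system’s performance, the user’s dynamic learned trust, and the user’s reliance. All probabilities, thresholds, and update constants are invented placeholders of ours, not quantities proposed by the model.

    import random

    def simulate_interaction(steps=20, trust=0.5, seed=None):
        """Toy loop: performance -> trust -> reliance -> performance."""
        rng = random.Random(seed)
        for _ in range(steps):
            # The user relies on the system only when trust is high enough.
            relies = trust > 0.4
            # Reliance can in turn affect performance; here, a system that
            # is relied upon is assumed to succeed slightly more often.
            p_success = 0.8 if relies else 0.7
            succeeded = rng.random() < p_success
            # Real-time performance feeds back into dynamic learned trust;
            # an error is assumed to damage trust more than a success builds it.
            trust = min(1.0, max(0.0, trust + (0.05 if succeeded else -0.15)))
        return trust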
Before interacting with an automated system, a user’s
trust is biased due to preexisting expectations and the
system and/or brand’s reputation. Past experience with
a system or similar systems can also alter the trust
formation process [8]. Likewise, a user’s level of
understanding about how a specific automated system
works is significant. Little empirical research has
studied this effect, but it is likely that users with
greater knowledge of how a system functions can more
accurately calibrate their trust following system errors.
The design of automation is a critical factor because it
can alter a user’s perceptions of a system’s
performance. In order to facilitate appropriate trust in
automation, designers must carefully consider the
interface’s ease-of-use, transparency, and appearance.
For example, a recent study showed that adding a
picture of a physician to the interface of a diabetes
management application led younger participants to
place greater trust in the system’s advice [9]. A system’s
communication style [11], the feedback it provides, and
the level of control a user has over its functions are
also significant variables.
Research has shown that automation users adjust their
trust to reflect a system’s real-time performance.
As a result, a system’s reliability, validity, predictability,
and dependability are key antecedents of trust.
Additionally, a user’s trust may respond differently to
automation errors depending on the timing of an error
[12], the perceived difficulty of the error [6], and
whether the error is a “miss” or a “false alarm” [12]. The
perceived usefulness of a system is another factor that
can impact trust, but to date it has primarily been
researched in the e-commerce domain.
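Again as a hypothetical illustration only, a trust-update rule sensitive to error type and error timing, in the spirit of the findings above, might look like the following. The specific weights, and the assumption that misses hurt trust more than false alarms, are ours for illustration, not results from the cited studies.

    def update_trust(trust, event, step, horizon=100):
        """Adjust trust after one system event ('success', 'miss', or
        'false_alarm'); clamps the result to [0, 1]."""
        if event == "success":
            return min(1.0, trust + 0.02)
        # Placeholder penalties: misses assumed costlier than false alarms.
        penalty = {"miss": 0.15, "false_alarm": 0.05}[event]
        # Errors early in an interaction are assumed to damage trust more.
        timing_weight = 1.5 if step < 0.2 * horizon else 1.0
        return max(0.0, trust - penalty * timing_weight)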
Trust is a particularly influential variable that governs
human reliance on automation. However, trust is not
the only important variable. In certain circumstances,
other situational factors such as the effort to engage a
system, the alternatives to using automation, and a
user’s physical well-being can alter the user’s reliance
strategy without affecting his or her trust. These factors
are included in the model because they can indirectly
influence trust by altering a user’s reliance strategy.
Conclusion
Viewing the variability of trust through an all-inclusive
lens will help expand the current conceptualization of
how human-automation interaction works. As our model
shows, the complexities of trust
can be reduced to three broad layers of variability:
dispositional trust, situational trust, and learned trust.
Distinct factors influence each layer, but any given
user’s trust in an automated system is a composite of
the user’s trusting tendencies, the situation, and the
user’s perceptions of the system.
Our current research is focused on empirically verifying
certain aspects of the model. We are in the process of
identifying additional personality traits and cultural
values that alter trust. In the meantime, our model
provides a useful framework for future studies focused
on factors that influence trust in automated systems.
References
[1] Bisantz, A.M. and Seong, Y. (2001). Assessment of operator trust in and utilization of automated decision-aids under different framing conditions. International Journal of Industrial Ergonomics, 28 (2), 85-97.
[2] Fogg, B.J., et al. (2000). Elements that affect web credibility: Early results from a self-report study. Ext. Abstracts CHI 2000, ACM Press, 287-288.
[3] Hancock, P.A., et al. (2011). A meta-analysis of factors impacting trust in human-robot interaction. Human Factors, 53 (5), 517-527.
[4] Ho, G., Kiff, L.M., Plocher, T., and Haigh, K.Z. (2005). A model of trust & reliance of automation technology for older users. AAAI-2005 Fall Symposium: Caring Machines: AI in Eldercare.
[5] Lee, J.D. and See, K.A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46 (1), 50-80.
[6] Madhavan, P., Wiegmann, D.A. and Lacson, F.C. (2004). Occasional automation failures on easy tasks undermines trust in automation. Proc. of the 112th Annual Meet. of the American Psych. Association.
[7] Marsh, S. and Dibben, M.R. (2003). The role of trust in information science and technology. Annual Review of Information Science and Technology, 37.
[8] Merritt, S.M. and Ilgen, D.R. (2008). Not all trust is created equal: Dispositional and history-based trust in human-automation interaction. Human Factors, 50.
[9] Pak, R., et al. (2012). Decision support aids with anthropomorphic characteristics influence trust and performance in younger and older adults. Ergonomics, 55 (9), 1059-1072.
[10] Perkins, L.A., Miller, J.E., Hashemi, A. and Burns, G. (2010). Designing for human-centered systems: Situational risk as a factor of trust in automation. Proc. of the 54th Annual Meet. of the Human Fact. and Erg. Soc., 2130-2134.
[11] Rau, P.L., Li, Y., and Li, D. (2009). Effects of communication style and culture on ability to accept recommendations from robots. Computers in Human Behavior, 25 (2), 587-595.
[12] Sanchez, J., Rogers, W.A., Fisk, A.D., and Rovira, E. (2011). Understanding reliance on automation: Effects of error type, error distribution, age and experience. Theoretical Issues in Ergonomics Science, 1-27.
[13] Stokes, C.K., et al. (2010). Accounting for the human in cyberspace: Effects of mood on trust in automation. Proc. of the IEEE Internat. Symp. on Collaborative Technologies and Systems, 180-187.
[14] Szalma, J.L. and Taylor, G.S. (2011). Individual differences in response to automation: The five factor model of personality. Journal of Experimental Psychology: Applied, 17 (2), 71-96.
[15] Tung, F.-W. (2011). Influence of gender and age on the attitudes of children towards humanoid robots. In J.A. Jacko (Ed.), Human-Computer Interaction, Part IV, 637-