6/12/19 1
Framing Trust: Trust as a Heuristic
Roy J. Lewicki
Max M. Fisher College of Business
The Ohio State University
2100 Neil Avenue
Columbus, Ohio, 43221, USA
Chad T. Brinsfield
Max M. Fisher College of Business
The Ohio State University
2100 Neil Avenue
Columbus, Ohio, 43221, USA
Lewicki, R. and Brinsfield, C. (2011). Trust as a heuristic. In Donohue, W. A.,
Rogan, R. G., & Kaufman, S. (Eds.), Framing Matters: Perspectives on Negotiation
Research and Practice in Communication. New York: Peter Lang Publishing.
The purpose of this chapter is to explore the idea that trust can be studied as a heuristic.
A heuristic is a decisional shortcut: a mechanism for processing information that
allows the decision maker to select some information and ignore other information as a
way to make a quicker or ‘easier’ (less complex) decision. During the last three
decades, heuristics have been studied extensively by decision making theorists (cf.
Tversky and Kahneman, 1981; Bazerman, 2006). In addition, researchers have
proposed that judgments about interpersonal fairness act as a heuristic (cf. Lind, 2001).
This chapter proposes to extend our understanding of heuristics to demonstrate how
interpersonal trust judgments can also function as heuristics.
In the chapter, we will lay out the following line of argument. First, we will
make the case for why trust judgments should be viewed as heuristics. Trust and
distrust are cognitive frames that help people to organize and interpret new experiences.
As such, they provide a degree of structure and stability to one’s perception of a
situation or relationship. Once a trust- or distrust-frame is constructed it will generally
persist until influenced by new experience; hence, the frame reduces the need for
effortful monitoring and frequent reanalysis of a situation or relationship. Only when a
relevant new experience passes a certain threshold of perceptual salience will the frame
be adjusted. Moreover, once formed, a trust- or distrust-frame may function as a
higher-order decision heuristic with psychological underpinnings similar to the four
general heuristics that have received much attention among cognitive/decision
scientists: representativeness, availability, anchoring and adjustment, and affect.
As such, trust- and distrust-frames are likely subject to both the positive effects
(i.e., enabling the actor to more effortlessly assimilate large amounts of information),
and negative effects (i.e., biases and perceptual distortions) often associated with
heuristic use. In this chapter we put forth the argument for conceptualizing trust and
distrust as cognitive frames, and explain their differential implications for heuristic
processing. We will then draw theoretical and practical implications for viewing trust as
a heuristic.
The construct of trust and its importance to relationships
The construct of trust. As noted by Lewicki, Tomlinson and Gillespie (2006),
trust has been conceptualized in a number of different ways. There are two dominant
traditions that broadly characterize the development of research on trust:
• the behavioral tradition of trust, which views trust as rational-choice behavior, such as
more cooperative choices in a game (e.g. Hardin, 1993; Williamson, 1981);
• the psychological tradition, which has worked to understand the complex
interpersonal states associated with trust, such as expectations, intentions, affect,
dispositions and judgments (e.g. Mayer, Davis and Schoorman, 1995; Rousseau, Sitkin,
Burt & Camerer, 1998).
Since we are most interested in trust as a decisional heuristic—i.e. within the
psychological tradition of perceptions and judgments—the behavioral tradition is less
relevant to our current discussion. While trusting behavior may result from a trusting
judgment or decision, we are more interested in how trust functions at the level of
intentions, expectations and affect that shapes the judgment, and therefore we will
focus our attention on the psychological tradition in trust research.
Here are several definitions of trust from leading theoreticians in the field:
“Trust is a psychological state comprising the intention to accept vulnerability based on
positive expectations of the intentions or behavior of another” (Rousseau et al., 1998).
“Trust is a willingness to be vulnerable to another party based on both the trustor’s
propensity to trust others in general, and on the trustor’s perception that the particular
trustee is trustworthy…. A trustee will be seen as trustworthy if the trustee has ability in
the relevant domain, has benevolence or a positive orientation toward the trustor, and
has integrity, or adheres to a set of values deemed acceptable to the trustor” (Mayer,
Davis and Schoorman, 1995).
“Trust is a belief in, and willingness to act on the basis of, the words, actions and deeds
of another” (McAllister, 1995; Lewicki, McAllister & Bies, 1998).
What are some common elements across these definitions? First, these definitions
emphasize the cognitive and affective processes that lead to ‘trusting behavior’.
Researchers who espouse this approach gather data in settings that emphasize face-to-
face, direct interpersonal contexts and attempt to collect data on a party’s intentions,
motives and affect toward the other, and also to understand the perceptions and
attributions that the trustor makes about the other’s personality, intentions, affect and
behavior.
The Rousseau et al. definition suggests that trust is composed of two
interrelated cognitive processes: a willingness to accept vulnerability to the actions of
another, and, despite uncertainty about how the other will act, positive expectations
regarding the other’s intentions, motivations and behaviors. Similarly, Mayer et al.
define trust as a ‘willingness to be vulnerable’ based on a belief that others are
trustworthy in general, and specific expectations that the other is trustworthy (see also
Baier, 1985; Robinson, 1996). Since intentions and expectations are unquestionably
intertwined, including both will be critical to a full understanding of the cognitive
components of trust.
The cognitive component of trust includes the conscious beliefs and judgments
that an actor makes about another’s trustworthiness, and is the component which is
most frequently emphasized in prior research on trust. As Lewis and Weigert stated,
“We cognitively choose whom we will trust in which respects and under which
circumstances, and we base the choice on what we take to be ‘good reasons,
constituting evidence of trustworthiness’” (1985:970). This information is used to make
a ‘leap’ of inference about the other’s trustworthiness—or as the authors state, “beyond
the expectations that reason and experience alone would warrant” (Lewis & Weigert,
1985: 970). Considerable research about the characteristics of others that justify this
‘leap of inference’ has tended to focus on three qualities proposed by Mayer et al.
(1995): attributes that allow the trustor to infer the trustee’s ability, benevolence, and
integrity.
Intentions and expectations often include an affective or emotional component
as well. This component has often been underemphasized in trust research; for
example, Dirks and Ferrin’s (2002) meta-analysis of trust in leadership found that 94%
of the studies used a one-dimensional cognitive-based definition and measure of trust.
Nevertheless, affective or emotional components play an important role in shaping how
one views the other, particularly in close relationships (Lewis & Weigert, 1985).
Whether these emotions arise from positive attraction and affection for a close friend,
or anger and disappointment following trust betrayal by that same friend, both sets of
emotions tend to affect the overall cognitive evaluation of trust and the other’s
trustworthiness. Moreover, some have suggested that cognitive trust is more
challenging to sustain compared to affective trust, and that affective trust may better
withstand the inevitable challenges in interpersonal relationships over time (McAllister,
1995; Williams, 2001).
Finally, cognitive and affective components contribute to a behavioral
intention—that is, a willingness to undertake a risky course of action based on these
confident expectations (cognitions) and feelings (emotions) that the other will respect
and reciprocate the trust. Luhmann (1979) has argued that engaging in these trusting
behaviors actually reinforces and strengthens these cognitive judgments; Mayer et al.
(1995) point out that the outcomes of trusting behaviors provide information which
either enhances or diminishes the disposition to perform further trusting gestures and
judgments of the other’s trustworthiness. Several research studies (e.g. Cummings &
Bromiley, 1996; Clark & Payne, 1997) have developed measures to incorporate all of
these components, as well as a measure of behavioral intentions to act in a trusting manner.
Trust as a core component of a ‘relationship’. In addition to understanding
the important components of trust, it is also important to emphasize that trust is an
essential component of any relationship. Fiske (1991) argued that there were four
fundamentally different types of relationships: communal sharing (a community bound
together by collective identity, solidarity and unity); authority ranking (a relationship
characterized by formal distinctions of status or power); equality matching (a
relationship in which parties are relative equals, make equal contributions and expect
equal reward distributions); and market pricing (a market system in which people
exchange payments for goods). Regardless of the differences across these relationship
forms, trust plays a critical role in building and sustaining each type of relationship.
For example, Greenhalgh and Chapman (1996) studied numerous relationships and
derived a list of 14 key components that characterized different kinds of relationships;
trust was the most important element in the “rapport with others” category, and
included relationship qualities such as reliability, interpersonal integrity, and altruism.
Greenhalgh and Chapman specifically point out that qualities like trust often act
reciprocally in a relationship—that is, trust can determine how a relationship proceeds
forward, and as the relationship proceeds forward, key qualities like trust will be
affected. Similarly, in a more practical application, Bacon (2006), proposing ‘a
manager’s guide to building relationships that work’, features trust building and trust
management as the core process for sustaining effective work relationships.
In summary, trust has been conceptualized in different ways; our interest here is
less in the behavioral manifestations of trust (although they reveal the ‘results’ of trust
judgments), and more in the cognitive and affective components of trust judgments that
eventually shape trusting behaviors. We now turn to an understanding of frames, and
the nature of heuristics as one type, or category, of ‘frame’.
Framing and Heuristics
‘Framing’ is the dynamic by which people selectively focus on, shape, and
organize the world around them. Framing is about making sense of a complex reality
and defining it in terms that are meaningful to us. Frames act like a ‘figure’, in that they
define a person, event, or process and separate it from the ‘ground’ or background
around it. Frames “impart meaning and significance to elements within the frame and
set them apart from what is outside the frame” (Buechler, 2000, p. 41). Two people
walk into a room full of people and see different things: one (the extrovert) sees a great
party; the other (the introvert) sees a scary and intimidating unfriendly crowd. Because
people bring different backgrounds, experiences, expectations, and needs to a situation,
they frame the world they encounter—people, events, and processes—differently. But
even with the same individual, frames can change depending on perspective, or can
change over time. What starts out as a game of tag between two boys may turn into an
ugly fistfight. A favorite football quarterback may be a “hero” when he throws a
touchdown, but a “loser” five minutes later when he throws an interception.
Frames are often part of a broader worldview that reflects an individual’s
interpretations of what is going on, how actors see themselves, and how actors see
others involved in a situation. Frames and framing signal what is important about a
situation; define how an individual names or labels a situation; define how that
individual or situation is viewed over time; bracket what counts as a ‘fact’; provide a
rationale and direction for taking action; are shaped through speaking and listening to
others, which creates memory traces that reflect an individual’s past experience as it is
reflected in the current dialogue; and shape a collective conversation that includes some
information or perspectives and excludes others.
Framing has become a popular concept among social scientists who study
cognitive processes, decision-making, persuasion, and communication. The concepts of
frames and framing have regained the attention of researchers in a broad range of
disciplines, and a variety of definitions for these concepts and the dynamics have been
proposed. But the growth of this work has also generated considerable confusion
among researchers in how these concepts have been defined, used and operationalized.
One recent integrative review of the field has been offered by Dewulf,
Gray, Putnam, Lewicki, Aarts, Bouwen and van Woerkum (2009). In this review, they
address two questions: what is the nature of a frame? and what is it that gets framed?
With regard to the first question, they proposed that there have been two different
approaches to understanding the nature of a frame: frames as cognitive representations
and frames as interactional co-constructions. The first approach—frames as cognitive
representations—is evidenced in a variety of approaches presented by cognitive
psychologists. For example, Minsky (1975) indicates that cognitive frames are memory
structures that help us organize and interpret new experiences. Frames are created and
stored in memory to assemble information we have gained in the past, and then are
called upon to incorporate and classify new information. As a result, people with
different past experiences may experience the same present situation, but interpret it
differently as a result of the frame shaped by their unique past experience. For example,
these interpretations are often applied to people and issues in disputes:
Disputes, like other social situations, are ambiguous and subject to interpretation.
People can encounter the same dispute and perceive it in very different ways as a result
of their backgrounds, professional training or past experiences. One label that has been
placed on this form of individualized definition of a situation based on interplay of past
experiences and knowledge, and the existing situation, is a “frame.” (Roth & Sheppard,
1995, p. 94)
A second approach to the question “What is the nature of a frame?” is espoused by
sociolinguists and communication theorists, who define frames as social
constructions—that is, collectively agreed-upon or socially negotiated ways to “make
sense of a situation” (Follett, 1942; Tannen, 1979; Kolb, 1995). In describing the
process by which parties with different views about an issue arrive at a joint agreement,
Follett suggests that the parties achieve some form of unity, “not from giving in
[compromise] but from ‘getting the desires of each side into one field of vision’”
(Follett, 1942, quoted in Putnam and Holmer, 1992). Tannen and Wallat (1993) have
called these ‘interactive frames’, because they define what is going on in an interaction
and are constituted through interaction. Thus, frames emerge as the parties talk about
their preferences and priorities; collective frames emerge as the parties develop shared
or common definitions of the issues related to a situation, ‘what is going on’, who did
what (and why), and a process for managing or resolving these differences. As noted by
Gray (2003):
“When parties frame conflicts, they create interpretations of what a dispute is about,
why it occurred, the other disputants, and whether and how they envision its potential
resolution. The frames we construct during a conflict often attribute blame and offer
predictions about how the conflict will unfold” (Gray, 2003, p. 13).
Trust can function as a frame under both approaches—as a cognitive representation and
as an interactional co-construction. In the first case, trust is a way of cognitively
representing our judgment about another party or process. A judgment of
‘trust’ in another works as a ‘memory structure’ that allows us to organize information
about another person or thing. Our accumulated judgments of people we ‘trust’ are
different from our judgments of people we do not trust; similarly, the age and reliability
of our automobile determines whether we ‘trust it to start in the morning’ or not. In the
second case—and following from most of the construct definitions of trust we
introduced earlier—trust can emerge as an interactional co-construction as people
interact and get to know each other. Thus, while trust falls within either form of these
authors’ distinctions about ‘what is the nature of a frame’, the most common definitions
of trust have been as interactional co-constructions. In this chapter, we wish to develop
the arguments that provide additional support for also viewing trust as a cognitive
frame.
Further exploring the nature of frames as cognitive representations, Dewulf et
al. (2009) suggest that there are three ways to approach the second question: what is it
that gets framed? These three approaches include cognitive frames of the substantive
issues that are part of a decision; cognitive frames about the self, other and the
relationship between the people; and cognitive frames about past or anticipated
interactional processes between people (prior to actually engaging in the interaction).
The first approach focuses on the substantive issues, topics or choices of interest, and is
most classically represented by the Tversky and Kahneman line of research on risky
choice. Levin, Schneider & Gaeth (1998) reviewed much of the experimental research
in this tradition and identified three major types of framing effects: ways that parties
frame risk (e.g. describing it as a gain or loss), ways that parties frame an anticipated
interaction goal (e.g. describing it as an opportunity for personal gain or joint gain) and
ways that parties frame attributes of a choice (e.g. describing them as an ‘opportunity’
or a ‘threat’). The second approach focuses on the way a party invokes and shapes
cognitive representations of the self, other and relationships. Frames of self are often
called identity frames: parties see themselves as heroes or victims, winners or losers. In
contrast, frames of others are often called characterization frames, often with the same
positive or negative terminology. Finally, frames of relationships can focus on power,
trust or justice (fairness); the first conveys expectations about relative status and
relationship control; the second conveys whether the party expects that the relationship
will be characterized by trust or distrust; and the third conveys expectations about
whether the parties anticipate fair or unfair treatment in the relationship exchange.
Finally, the third approach (frames as cognitive representations of the anticipated
interactional processes) reveals the ways that parties enact their cognitive view of an
expected interaction sequence: dining in a restaurant with a friend, dealing with an
adversary in an upcoming meeting, or making a speech to a large audience. Our
primary interest here is in this second category: judgments about the self, other and
relationships. Trust judgments are integral to self-definitions of identity (how one
judges oneself as trusting or trustworthy), characterizations of others (whether one
trusts another or judges another as trusting or trustworthy) and the status of the
relationship between self and others (trusting or distrusting).
Trust as a heuristic. Although the term “heuristics” has a broad range of
meanings in different fields and even within fields (see Gigerenzer & Todd, 1999 for a
brief history), we use the term in a general psychological sense to refer to heuristics as
cognitively based shortcuts used to accomplish a psycho-cognitive objective whose
attainment might ordinarily entail more in-depth or elaborate mental
processing were it not for this ‘shortcut’. Because complex, optimal judgment and
decision-making strategies typically require large amounts of cognitive capacity and
effort, humans have necessarily evolved numerous strategies for effectively dealing
with (simplifying) the vast array of complex situations they may encounter in their
lives. Elaborate and systematic evaluation of every situation a person may encounter
can overwhelm the cognitive capacity of even the most adept human being; therefore,
heuristics perform a vital function for the successful navigation of increasingly complex
interpersonal environments by simplifying the information-processing task.
A key tenet of heuristic processing is the least effort principle (see the
Heuristic-Systematic Model [HSM]; Chaiken, Liberman, & Eagly, 1989). According
to Moskowitz (2005), the least effort principle contends that when performing cognitive
tasks, “people prefer less mental effort to more mental effort,” suggesting that “the
default [mental] processing strategy will be the one requiring the least amount of effort
and usurping the least amount of capacity — the heuristic route” (p. 203). One
example of this least effort principle is provided by Petty, Cacioppo, and Schumann
(1983), who examined why people are frequently persuaded by celebrities who endorse
products in various media advertisements. Their studies compared subjects’ reactions to
two ads for a razor. One ad was pitched by a famous celebrity while the other was
pitched by a “regular person.” The results suggest that people overwhelmingly
believed that the famous celebrity was more trustworthy and therefore picked the razor
the celebrity endorsed. This research suggests that not only are perceptions of trust influenced
by the use of heuristics (i.e., if the person is famous, they must be trustworthy), but also
how trust functions as a heuristic (i.e., if the product is endorsed by a trusted person
then there is less need to engage in elaborate investigation concerning the quality of the
product or the credentials of the endorser).
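The two routes in this example can be sketched as a simple decision rule. The sketch below is our own illustration, not a model from the chapter or from the Petty, Cacioppo, and Schumann studies; the function name and the threshold value are invented.

```python
# Illustrative sketch only: trust short-circuiting systematic evaluation,
# per the least effort principle. The function name and threshold are
# hypothetical; they are not drawn from the studies discussed above.

def evaluate_claim(claim_strength: float, source_trusted: bool,
                   elaboration_threshold: float = 0.6) -> bool:
    """Return True if the claim is accepted."""
    if source_trusted:
        # Heuristic route: trust in the endorser substitutes for
        # scrutiny of the product or the endorser's credentials.
        return True
    # Systematic route: effortful evaluation of the claim itself.
    return claim_strength >= elaboration_threshold

# A weak claim from a trusted source is accepted heuristically; the same
# claim from an untrusted source is evaluated systematically and rejected.
print(evaluate_claim(0.3, source_trusted=True))   # True
print(evaluate_claim(0.3, source_trusted=False))  # False
```

The point of the sketch is only that the trusted-source branch never inspects the claim at all, which is precisely the effort savings the least effort principle describes.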
Many different forms of heuristics have been proposed across a wide variety
of decision making literatures. For example, Shah (2008) identified over 40 distinct
heuristics across numerous disciplines, many of which are very domain-specific, others
more general. These various heuristics may be used for tasks of search, assessment, or
selection of alternatives. For example, a simple search heuristic may be, ‘given that
this situation is similar to past situations, use whatever search method performed best in
the past’. By employing this simple rule, an individual can reduce the cognitive strain
and costs associated with searching haphazardly. Similarly, an assessment heuristic
may be used to assess the outcomes of a search by applying simple ordering or ranking
criteria according to the individual’s specific needs. And finally, selection heuristics
are often used to select between a set of alternatives by picking the highest-ranking
alternative or by applying simple stopping rules which discontinue the search as soon as
an option that meets a pre-established criterion is found.
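A stopping rule of this kind (often called satisficing) can be made concrete in a few lines of code. This is an illustrative sketch of our own; the function and the price example are invented, not drawn from the heuristics literature cited above.

```python
# Hypothetical sketch of a selection heuristic with a stopping rule:
# examine alternatives in order and stop at the first one that meets a
# pre-established criterion, rather than ranking every option.

def satisfice(options, meets_criterion):
    """Return the first option satisfying the criterion, else None."""
    for option in options:
        if meets_criterion(option):
            return option  # stopping rule: end the search here
    return None

# Example: accept the first price quote under $50 instead of exhaustively
# comparing all quotes; later (possibly better) quotes are never examined.
quotes = [72, 65, 48, 31, 55]
print(satisfice(quotes, lambda q: q < 50))  # 48
```

Note that the rule returns 48 rather than the globally best quote of 31: the cognitive saving comes precisely from never examining the remaining alternatives.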
Although many domain specific heuristics have been proposed, there are four
general heuristics that have received much attention among cognitive/decision
scientists, and which we believe are related to trust judgments: Representativeness,
Availability, Anchoring and Adjustment, and Affect heuristics. We will review each
and consider the applicability to trust.
Representativeness heuristic. Tversky and Kahneman (1981) identified what
has become known as the “representativeness heuristic,” in which some probability
judgments are mediated by assessments of similarity (the degree to which X is similar
to Y) even when the observed sample is statistically small. That is, people tend to
judge the probability of an event by finding a ‘comparable known’ event and assuming
that the probabilities will be similar. The primary fallacy occurs because people
assume that similarity in one aspect implies similarity in other aspects. In doing so,
people tend to ignore relevant information such as base rates (the relative frequency
with which an event occurs), the conjunction fallacy (an error of judgment in which
a combination of two or more attributes is judged to be more probable than
either attribute on its own), as well as regression toward the mean (where an extreme
value is likely to be followed by one which is much closer to the mean).
In a classic experiment, Kahneman and Tversky (1973) showed people a
personality profile of a student and asked them if they thought this student was an
engineering or law student. Subjects appeared to base their predictions on the
resemblance of the student’s image to stereotypical notions of people in those
respective disciplines. Subjects made their judgments based on the stereotype, and
appeared to ignore additional information that the student was drawn randomly from a
group of 70 engineering students and 30 law students, or from a group of 30
engineering students and 70 law students, although that information was highly relevant
to the judgment at hand.
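The base-rate information that subjects ignored can be made concrete with Bayes’ rule. In the sketch below, the 2:1 likelihood ratio (the assumption that the personality profile fits an engineering student twice as well as a law student) is an invented number for illustration; only the 70/30 and 30/70 base rates come from the experiment as described above.

```python
# Hedged illustration of why ignoring base rates is an error: the same
# description should yield very different probability judgments under the
# two base rates. The 2:1 likelihood ratio is assumed, not from the study.

def posterior_engineer(p_engineer: float, likelihood_ratio: float = 2.0) -> float:
    """P(engineering student | description), via Bayes' rule."""
    p_law = 1.0 - p_engineer
    return (likelihood_ratio * p_engineer) / (likelihood_ratio * p_engineer + p_law)

# The same description, under the two base rates used in the experiment:
print(round(posterior_engineer(0.30), 2))  # 0.46 (30 engineering, 70 law)
print(round(posterior_engineer(0.70), 2))  # 0.82 (70 engineering, 30 law)
```

A subject attending only to the stereotype gives the same answer in both conditions; a subject using the base rates should not, which is exactly the discrepancy Kahneman and Tversky observed.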
In another, now classic study, Tversky and Kahneman (1983) presented the
following problem to a number of people: “Linda is 31, single, outspoken and very
bright. She majored in philosophy. As a student, she was deeply concerned with issues
of discrimination and social justice and also participated in antinuclear
demonstrations”. The researchers then asked whether she was more likely to be (a) a
bank teller, or (b) a bank teller and active in the feminist movement. 86% answered (b).
Statistically speaking, however, it is clear that the probability of (b) cannot be higher
than the probability of (a). Kahneman and Tversky attribute this error in judgment to
most people’s use of the representativeness heuristic to make this kind of
judgment: option (b) seems more “representative” of Linda based on the provided
description of her, even though it is statistically less likely. Not surprisingly, the
representativeness heuristic has been implicated in the formation and operation of
stereotype biases. That is, when making judgments about an individual or event,
people look for traits or qualities that correspond with previously formed stereotypes, a
judgment which may be highly relevant to judgments of trust or distrust.
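The statistical point behind the Linda problem can be stated in one line. For any two events A (Linda is a bank teller) and B (Linda is active in the feminist movement),

P(A and B) = P(A) × P(B | A) ≤ P(A), since P(B | A) can never exceed 1.

The conjunction can therefore never be more probable than either of its component attributes, however ‘representative’ the combination may seem.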
We propose that the representativeness heuristic can explain how trust leads to
certain types of trust-related judgments in a negotiation. For example, if an individual
trusts another person, this conceptualization of this other as trustworthy in one domain
likely spills over into other domains relative to interactions with that individual. It may
also be that trust in a certain individual may also result in trusting feelings toward other,
similar individuals. Brewer (e.g. 1981) has repeatedly demonstrated how ‘category-
based judgments’—knowing that an individual is a member of a particular social or
organizational category—can dramatically affect judgments of their trustworthiness. For
example, if we have previously had firsthand experience with an honest auto mechanic
and subsequently developed a high degree of trust for this particular mechanic, we may
be more inclined to trust other auto mechanics. Similarly, trust violations by another
person may lead us to not trust individuals who are similar to the individual who
previously violated our trust.
Availability of information heuristic. According to Tversky and Kahneman’s
(1973) availability heuristic, people estimate the frequency, probability and typicality
of an event “by the ease with which instances or associations come to mind” (p.208).
For example, a person who is asked whether there are more English words that begin
with the letter K or the letter T might try to think of words that begin with both of these
letters. Because the person can most likely think of more words which begin with the
letter T, they would likely (and correctly) conclude that T is more frequent than K as
the first letter of English words (Schwarz & Vaughn, 2002).
It is important to note that the diagnostic value of the availability (ease of recall)
heuristic is influenced by both a person’s relative knowledge pertaining to the particular
subject matter, and the personal significance or relevance of the situation or event.
That is, a person will rely less on ease of recall as an indicator of the probability of an
event if they are aware of their own lack of knowledge in a particular area. For
example, if an individual is asked to recall the names of Russian opera singers, the
relative difficulty of the task would not likely cause them to conclude that there are not
many Russian opera singers, but rather they would realize that their own lack of
expertise in this area nullifies the efficacy of ease of recall as an indicator. Relative to
trust, if a person has had many direct previous experiences with another person, then
the scope of that experience may impact the diagnostic efficacy of trust as an
availability heuristic. That is, a person who has formed a high level of trust with
another due to repeated interactions will have more experiences of trust with that other
to recall, thus increasing the ease of recall of trusting behaviors (cf. the nature of
knowledge-based trust, Lewicki & Bunker, 1996). Moreover, they will also likely have
more confidence in their own knowledge relative to the trustworthiness of the other
(because their trusting beliefs were formed based on more substantial experience); thus
they consider themselves relatively more knowledgeable, and therefore are more likely to
rely on availability of recall as an indicator of relative probability or frequency. In
contrast, trust based on less extensive personal interactions (cf. swift trust, Meyerson,
Weick & Kramer, 1996), the other’s reputation, or position, may function less
heuristically because the individual will have more reason to question their own
knowledge relative to the other’s trustworthiness and therefore lessen the heuristic
efficacy of trust. Similarly, the level of personal significance of the situation or event
may also impact the use of availability of information as a cognitive shortcut for
making inferences about future expectations. That is, when an event or situation is of
relatively lower personal relevance, an individual is more likely to engage in heuristic
processing, but as the personal relevance increases, individuals become more likely to
engage in more in-depth cognitive elaborations (see Chaiken & Trope, 1999; Grayson
& Schwarz, 1999; Rothman & Schwarz, 1998).
Anchoring and adjustment heuristic. When making judgments about an
individual or event, people make assessments by starting from some relative initial
value, and then adjusting from that initial value toward a final judgment. For example,
most people may not be able to tell you the exact date on which the Cuban missile crisis
ended, but by knowing that it occurred during the administration of President John F.
Kennedy, they can deduce that it must have occurred in the early 1960s and sometime
before the assassination of President Kennedy on November 22nd, 1963 (a date many
people remember exactly). When making estimates such as this, people often use what
Tversky and Kahneman (1974) termed the anchoring and adjustment heuristic. In this
judgment, people anchor their initial estimation on information that readily comes to
mind, and then adjust their thinking around that anchor in what seems like an
appropriate direction. In this case, they may start at January, 1960 or November 1963
and then adjust to a closer date.
Anchoring and adjustment effects have been demonstrated in a variety of
contexts including estimates of risk and uncertainty (Wright & Anderson, 1989),
probability estimates (Fischhoff & Beyth, 1975), anticipations of future performance
(Switzer & Sniezek, 1991), perceptions of self-efficacy (Cervone & Peake, 1986),
answers to general knowledge questions (Jacowitz & Kahneman, 1995), and trait
inference (Gilbert & Scher, 1989; Kruger, 1999), just to mention a few. Anchoring
effects have been attributed to insufficient adjustment from an initially irrelevant value
(Tversky & Kahneman, 1974) as well as by the increased accessibility of anchor
consistent information which results from a biased search for information which
supports the initial hypothesis (see Chapman & Johnson, 2002). We propose that under
certain conditions, initial perceptions of trust can function as an anchor that influences
subsequent judgments of risk and uncertainty relative to a specific individual or event.
When people use perceptions of trust as an anchor to form expectations regarding an
individual's future behavior, their subsequent "cognitive load" (the complexity of
information sorting and processing) may be reduced, because elaborate analysis of all
the relevant factors regarding that individual is avoided. However, as the anchoring
and adjustment heuristic indicates, insufficient adjustment away from this initial
anchor, together with a biased search for information that supports the initial
perception, may yield an inaccurate perception of the target's likely behavior.
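To make this dynamic concrete, the anchoring-and-adjustment process can be sketched as a toy numerical model (our illustrative sketch, not part of the studies cited above; the anchor values and adjustment rate are assumptions): an initial trust level serves as the anchor, and each new piece of evidence moves the judgment only a fraction of the way toward what that evidence alone would imply.

```python
def adjusted_trust(anchor, observations, adjustment_rate=0.3):
    """Toy model of anchoring and adjustment: start from an initial
    trust anchor (on a 0..1 scale) and move only partially toward each
    new piece of evidence -- the 'insufficient adjustment' described in
    the anchoring literature. All parameter values are illustrative."""
    estimate = anchor
    for evidence in observations:
        estimate += adjustment_rate * (evidence - estimate)
    return estimate

# Two perceivers see the same mixed evidence but start from
# different anchors; their final judgments remain far apart.
evidence = [0.5, 0.4, 0.6]
high_anchor = adjusted_trust(0.9, evidence)  # stays relatively high
low_anchor = adjusted_trust(0.2, evidence)   # stays relatively low
```

Because the adjustment is insufficient, the two perceivers never converge on the evidence itself; the gap between their final judgments is the residue of the initial anchor.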
Affect heuristic. For nearly thirty years researchers have been systematically
studying the role of mood and emotions (affect) on memory, judgment, and decision
making (see Bower, 1981; Bower & Forgas, 2000; Forgas, 1995; Schwarz & Clore,
1983; Zajonc, 1980). Three processes by which affect has been demonstrated to impact
judgment and decision making are affect-as-information, affect-congruent recall, and
affect-congruent sensitivity effects. We propose that under certain conditions,
judgments of trust and distrust may be impacted by any or all of these affective
processes, and the resultant trust judgments may further influence the extent to which
these affective processes are operative in later judgment and decision making tasks.
Building on a significant body of research (see Clore, Schwarz & Conway, 1994;
Forgas, 1995; Zajonc, 1980) on the role of affect in decision making, Finucane,
Alhakami, Slovic, and Johnson (2000) coined the term affect heuristic to refer to a
process by which people use affect as a shortcut to make judgments and decisions.
Using a readily available affective impression can be far less cognitively demanding
than weighing the pros and cons, or recalling from memory many relevant examples,
particularly when the judgment or decision task is complex or cognitive resources are
limited (Finucane et al., 2000; Slovic, Finucane, Peters, & MacGregor, 2002). In
addition, actors may also misattribute their preexisting affective state to another
person, object, or event (see also affect-as-information; Schwarz & Clore, 1983).
According to this affect-as-information view, when computing judgments, individuals
'may…ask themselves: "How do I feel about it?", and, in doing so, may mistake
feelings due to a preexisting state as a reaction to the target' (Schwarz, 1990, p.
529). Thus, affective influences occur because of an inferential error.
Furthermore, mood and emotions may also impact an individual’s sensitivity to
trust related stimuli through information processing strategy and subsequent affect-as-
information effects. When in a bad mood, individuals may be especially likely to use
more elaborate, systematic processing strategies (Forgas, 1995). A negative mood may
be a source of information which signals that there is a problem that needs to be dealt
with. In such instances individuals mobilize their cognitive resources to solve the
problem. Similarly, when an individual is in a positive mood, they may be more likely
to use heuristic, or simplified processing strategies (Forgas, 1995). A good mood
signals that all is right with the world and there is no need for more vigilant, cognitively
intense, systematic processing strategies (for an overview, see Clore, Schwarz &
Conway, 1994). In addition, because good moods are pleasant, an individual may wish
to avoid any task that might disturb their good mood, such as engaging in cognitively
intense processes that may be strenuous and alter the mood. According to the affect
infusion model (Forgas, 1995), when an individual is engaged in heuristic processing of
information, their judgments may be impacted through affect-as-information. In this
case an individual in a positive mood will likely engage in more heuristic processing
strategies and misattribute their preexisting positive mood as their feelings about the
trust related stimuli, thus, in effect, becoming less sensitive to it. When in a negative
mood an individual will be motivated to engage in more deliberate processing of the
stimulus and subsequently react more strongly to the same event.
Moreover, according to affect congruent processing, individuals become
sensitized to take in information that agrees with their current affective state (Bower &
Forgas, 2000). During emotionally charged situations, concepts, words, themes and
rules of inference that are associated with that affective state will become primed and
more readily available for use. That is, a person’s affective state will bring into
readiness certain perceptual categories, themes, or ways of interpreting the world that
are congruent with their affective state. These mental sets function as interpretive filters
of reality and subsequently bias judgment (Bower, 1983).
In this sense, we can construct arguments for trust influencing mood and mood
influencing trust. In the first case, trust is not really operating as a heuristic, but rather
trust is thought to be an important factor in influencing the extent to which affect-as-
information processes become operative. If an individual trusts another, it is likely that
she will feel more positive emotions regarding this individual, which will influence
subsequent judgments about that individual by making it easier to bring to mind mood-
congruent information about that individual. In the case of trust, it will be easier to
recall positive information regarding this individual; in the case of distrust it will be
easier to bring to mind negative information regarding this individual. Similarly, the
emotional byproduct of trust or distrust will influence a person’s sensitivity to
information in the environment, such that positive emotions resulting from trust will
make an individual more sensitive to mood-congruent information. For example, if a
person distrusts another, this distrust may elicit negative emotions regarding that
individual, which will make the person more sensitive to mood-congruent (in this case
negative) information concerning that individual. In such cases, the trust-outcome
relationship is mediated by an emotional reaction to the trust state and the resultant
affect-congruent information-processing bias. The converse relationship between mood
and trust is also true, i.e. that mood influences trust. In a series of studies, Lount (2009)
has shown that positive mood can increase trust, but that the impact of mood is
influenced by situational cues associated with trust or distrust. When available cues
about the other party promoted trust, people in a positive mood increased their trust;
when cues promoted distrust, people in a positive mood decreased their trust. Thus,
affect can influence trust development although the relationship is more complex than
simple models of an ‘affect heuristic’ might predict.
Thus far, we have worked to create the foundation for supporting an argument
that trust can operate as a frame and a decision heuristic. The foundation for this
argument was to first review a recent overview of the framing literature, and to point
out that trust can operate both as a cognitive category for viewing others--that is, a
decisional ‘short cut’ or heuristic—as well as an interactional co-construction which
emerges as parties come to create a shared meaning of their relationship. While the
dominant tendency in the trust research literature has been to interpret trust as an
interactional co-construction, our primary interest in this chapter is to establish the groundwork for
examining it as a cognitive shortcut. We have attempted to build support for this second
approach to trust as a frame by grounding trust judgments as being similar to four other
established decision heuristics, and showing how trust-related information has the same
powerful influence on perception and decision making as these four heuristics. Based
on our understanding of the ways that these heuristics shape decision making, trust
appears to perform as a decision heuristic. To quote well-known early researchers on trust:
“Trust is a functional alternative to rational prediction for the reduction of complexity.
Indeed, trust succeeds where rational prediction alone would fail, because to trust is to
live as if certain rationally possible futures will not occur. Thus, trust reduces
complexity far more quickly, economically and thoroughly than does prediction.”
(Lewis & Weigert, 1985, p. 969).
As we indicated, these heuristics have enormous value for individuals, as they
improve the speed and efficiency of accumulating information and making decisions
about others. However, they can also be the source of major decision errors because of
the systematic information processing biases associated with developing and using
these heuristics.
Since we cannot directly access or measure these cognitive mental structures,
but must induce their existence through measuring the observable consequences of
cognitive processing—verbalizations, decisions, and behaviors—we will now attempt
to summarize the evidence that can be mustered from existing writing and research.
Evidence of Trust Operating as a Heuristic, with Special Attention to Negotiations
Here are some examples from the trust literature that indicate that trust can
operate as a heuristic:
Trust is integral to a first impression of the other. Note that several of the
definitions of trust state that trust includes a belief that the other will act benevolently;
that is, the willingness to accept vulnerability based on positive expectations of the
other's intentions or behavior, i.e. that the other will work on our behalf and help us achieve
our goals. Research has shown that even when an individual has not previously met
another party, that individual most commonly enters the interaction with some
positive expectations of trust (Kramer, 1994; Meyerson, Weick & Kramer, 1996).
Several authors (Quigley-Fernandez, Malkis & Tedeschi, 1985; McKnight, Cummings
& Chervany, 1998) argue that this effect is caused by early cognitive cues in the
interaction, which trigger first impressions. Rapidly forming first impressions cue
broader judgments about the other, including judgments of trust.
Past experience with the other generates information about the other and
shapes expectations of the other. Past experience with the other directly shapes our
expectations of what others will do in the future. This past experience may be limited to
one or more single events with a specific other or may generalize such that individuals
may broadly come to differentially trust others based on this experience. Research by
Rotter (1961) indicates that individuals differ in their dispositions to trust, which Rotter
called a ‘generalized expectancy’ to trust. Rotter defined trust as a ‘stable expectancy
that one can trust others based on past experiences in which trust has been affirmed’,
and demonstrated that individual differences in disposition to trust could be
systematically measured. The representativeness heuristic is at work here.
Reputation of the other shapes expectations about the other. Moreover,
information about the other does not have to be directly obtained, but may be indirectly
obtained through the other’s reputation. A reputation is a “perceptual identity reflective
of the combination of salient personal characteristics and accomplishments;
demonstrated behavior; and intended images presented over a period of time as
observed directly and/or as reported from secondary sources.” (Ferris, Blass, Douglas,
Kolodinsky & Treadway, 2004) Individuals can have a number of different, even
conflicting identities or reputations. Moreover, reputations are perceptual and highly
subjective in nature—thus, “in the eye of the beholder”. Finally, like trust,
reputations are seen as fragile: like an egg, a positive reputation may take a long
time to develop, but it can be broken easily by a single careless action and is difficult
to rebuild once broken. In a distributive bargaining study, Tinsley, O’Connor and
Sullivan showed that knowing the other had a reputation for being a distributive
bargainer led the other party to trust that negotiator less, exchange little critical
information, and achieve poorer outcomes in the negotiation. A related study by O’Connor
and Tinsley (2004) indicated that knowing the other had a reputation for integrative
negotiation led parties to engage in more productive negotiation behaviors and be
more optimistic about their ability to reach a mutually beneficial agreement. Again, the
representativeness heuristic is dominantly at work here.
Direct experience with the other over time shapes expectations of the other.
The definitions of trust that we proposed above also suggest that reliability and
consistency are key components of trust. Information about reliability and consistency
of the other is only gained through interaction with the other party over time. If an
individual behaves reliably and consistently, then trust in the other should be affirmed
because the individual ‘does what they say they will do’ or behaves in a manner that is
consistent with the earlier expectations. However, inconsistent behavior—that the
other does not do what they said they would do—does not necessarily trigger
immediately lowered trust. Rather, it should trigger a search for cues to determine ways
to explain the inconsistent behavior, and whether that behavior is worthy of continued
trust. For example, Olekalns, Lau, and Smith (2002) found that when behavior was
consistent over time, trust remained high. Trust changed over time with behavioral
inconsistency, dropping when negotiators had a competitive motive, but increasing
when they had an individualistic motive. Here the cognitive representation heuristic of
trust interacts with an interactional co-construction frame of trust.
Motives shape expectations of the other. While there is a generalized
tendency to trust others in the absence of specific information about the other,
variations in individual dispositions, or the contextual cues that shape these
dispositions, will shape the expectations of the other. Thus, individuals can bring these
dispositions into the relationship with the other party (as a function of past experience
with this other party or others), but they can also be shaped by situational cues as to
what is appropriate (for example, information about what the other is likely to do in the
present situation based on context). A long stream of work, starting with the early
studies by Deutsch (1960) and reaffirmed by others (e.g. De Dreu, Weingart & Kwon,
2000; Beersma & De Dreu, 1999), indicates that different social motives are linked to
differences in negotiator behavior and to higher levels of trust. In the Olekalns et al.
(2002) study cited above, overall trust was higher in cooperatively-motivated than in
individualistically-motivated dyads. Moreover, social motives also influenced first
impressions of the other, such that cooperatively oriented negotiators rated the other as
significantly higher in communality than individualistically motivated negotiators.
Again, anchoring and adjustment heuristics are at work here.
The trust heuristic is strongly related to the justice heuristic. Researchers
who study justice in organizations have established the existence of a “fairness
heuristic” (See Lind, 2001; Lind, Kulik, Ambrose, & de Vera Park, 1993). Lind’s
(2001) fairness heuristic theory suggests that people may use perceptions of fairness as
a surrogate or proxy for interpersonal trust. According to Lind, organizational realities
are often overwhelmingly complex and uncertain, making the synthesis and evaluation
of all information relevant to a particular situation impractical. To alleviate this
cognitive overload people may use fairness judgments in much the same way that they
would refer to feelings of trust. Because the information required to make an
independent trust judgment is frequently unavailable, or would require too much
cognitive processing, people simply reason that if they are being treated fairly, that is
enough to act as they would had they formed an independent trust judgment of the
person-situation (c.f. Lind, 2001; Lewicki, Wiethoff & Tomlinson, 2005, for reviews).
Indeed, research has supported the view that fairness judgments can act as a
surrogate for trust. For example, in two studies Van den Bos, Wilke, and Lind (1998)
found that procedural justice manipulations had stronger effects on the evaluation of an
authority’s decisions when the perceiver had little independent information concerning
the trustworthiness of the authority than when the perceiver knew the authority to be
either trustworthy or untrustworthy.
Lind’s fairness heuristic theory suggests that as individuals move into a
relationship they will need to arrive at justice judgments relatively quickly, therefore
these fairness judgments will be developed with the first relevant information exerting
the greatest influence on perceptions of fairness (i.e., primacy effects). According to
the theory, whichever type of justice information is first available (be it information
concerning the fairness of outcome distributions, procedures, or interpersonal
treatment) will have the strongest initial influence on the formation of the fairness
heuristic.
Similarly, we expect primacy effects to impact the trust heuristic as a person
initially relies on readily available information relative to any of the various dimensions
of trust previously described (e.g., benevolence, integrity, ability). Through primacy
effects, this initial information will have a disproportionate weighting in the initial
overall perceptions of trustworthiness, as well as anchoring the interpretation of new
information toward the initial trust evaluation (thereby attenuating its impact; see the
previous discussion of ‘swift trust’).
Furthermore, a series of research studies have drawn an explicit link between
fairness and trust, in that personally fair treatment by another person generates trust in
that person (reference needed). Thus, trust and justice ‘anchor and adjust’ for each
other; evidence of the presence of one quality in another leads people to induce the
other quality.
Trust cues cooperative behavior. Individuals with high trust behave in a
cooperative manner, which elicits cooperation from the other. Thus, trust produces
positive expectations of the other’s conduct, which lead them to behave cooperatively
and to expect that the other will behave cooperatively (similar to the predictions made by
Greenhalgh and Chapman, 1996, above). Research by Butler (1995; 1999) has shown
that trust cues cooperative behavior (see also Kimmel, Magenau, Konar-Goldband &
Carnevale, 1980), and that cooperative behavior further increases trust. Anchoring and
adjustment may also be in play here in that relational behavior begins from the
comparatively advanced anchored state of relative trust and is adjusted based on
continuing evaluations of trustworthiness.
The empirical research by Olekalns et al. (2002) also supports this relationship.
In their research, students were placed in a negotiation simulation and were measured on
trust and impressions of the other shortly after the simulation began, and at the end of
the simulation. The authors constructed a scale to measure perceptions of the other
negotiator: power (dominant-submissive, divisive-cohesive, takes control-yields
control) and communality (good-bad, sincere-insincere and relationship-oriented—task
oriented). Initial ratings of the other predicted later ratings of the other; initial trust
predicted later impressions of communality, and early communality predicted later
ratings of trust.
Distrust operates as the negative mirror image of trust. At the beginning of
these arguments, we included a definition of trust that also specified a separate
definition of distrust (Lewicki, McAllister, & Bies, 1998). In that paper, we argued that
distrust should be considered as a separate construct from trust; that no trust was the
polar opposite of trust, and no distrust was the polar opposite of distrust; and that trust
and distrust judgments could simultaneously exist in complex relationships with the
other party. In subsequent papers (e.g. McAllister, Lewicki & Bies, 2003), we have
provided empirical support for two distinct constructs that predict the ways managers will
differentially use influence attempts in an effort to gain another’s compliance. We
would argue that while trust and distrust are separate constructs, distrust functions as a
heuristic in much the same way as trust.
In this chapter, we have argued that there are two fundamentally different
approaches to ‘framing’ interpersonal trust: as a cognitive representation or heuristic,
and as an interactional co-construction. After briefly reviewing the trust literature and
reviewing the various ways that framing dynamics have been articulated in the research
literature, we offered illustrations of commonly identified heuristics and discussed the
parallels to the way trust judgments are made. We then offered evidence of previous
trust studies which exemplify the ‘trust heuristic’. In this final section, we draw
implications for future model building about trust dynamics and future empirical
research on trust.
A variety of different approaches have been used in an attempt to model trust.
For the most part, these attempts have taken one of three different approaches (c.f.
Lewicki, Tomlinson and Gillespie, 2006):
• trust as a ‘personality variable’—i.e. an individual difference in a propensity to
trust (e.g. Rotter, 1971);
• trust as an intention or expectation toward the other party, often containing both
cognitive and affective components (e.g. Rousseau et al., 1998; McAllister, 1995)
that are shaped and enhanced over time;
• trust as behavior, which is usually calibrated by more ‘trusting’ (cooperative)
choices in mixed-motive game situations (e.g. Flores & Solomon, 1998).
Although the framing and heuristic effects of trust are implicit in many of these
conceptualizations (i.e. trust facilitating risk taking without the need for cognitively
intense probability calculations—see Dirks & Ferrin, 2001), the systematic integration
of framing and heuristic theory with trust research has been lacking. This is especially
important when attempting to model how trust and distrust evolve over the course of a
relationship, how the various facets of trust and distrust interact, and how trust and
distrust lead to various outcomes. As previously discussed, heuristics enable decisions
to be made with nominal cognitive effort and therefore enable people to function
efficiently in complex environments. However, heuristic processing may entail an
unrecognized price in the form of impulsive, incomplete, or biased decisions, and
unanticipated outcomes. These heuristic effects are generally not accounted for in
models of trust, and may provide a theoretical vantage from which a deeper
understanding of trust can be derived. Moreover, accounting for these heuristic effects
may move the model building (and data collection) forward in the following ways:
• Accounting for trust heuristics may allow researchers to distinguish trust
judgments which are described as arising from ‘individual differences’ (Rotter) from
trust decisions which arise from information-processing short cuts that lead to quick
judgments of the level of trust in another party;
• Accounting for trust heuristics will allow researchers to understand how trust
heuristics are closely linked to other cognitive constructions of other parties, including
perceptions and judgments of self-identity, characterizations of others and strategies for
negotiating with others and resolving conflicts (e.g. Lewicki, Gray and Elliott, 2003;
Brewer, 1981);
• Accounting for trust heuristics may help researchers understand why some
trust breaches may be repaired, while other trust breaches may not be repaired. If trust
is an interactional co-construction, and trust is breached within that interaction, then
traditional ‘trust repair’ efforts (apologies, reparations, forgiveness) may restore the
trust. However, if some trust is heuristically driven, and that trust is breached, the
consequences of ‘fracturing’ the validity of the heuristic may not permit traditional trust
repair tools to be used.
• Finally, and most importantly, accounting for the framing dynamics of trust
may allow researchers to discriminate among approaches for building trust that are
based on trust as a heuristic (allowing ‘quick’ trust judgments of others based on very
little information) from those approaches which suggest that trust is dominantly created
through the interactional co-construction approach (e.g. trust judgments are the result of
complex information processing of the benefits and costs of trusting weighted against
the benefits and costs of not trusting). In the first case, we have evidence that people
make ‘swift trust’ judgments based on personal past history and a number of cues about
the other party and the context (e.g. Meyerson, Weick & Kramer, 1996; Kramer, 1998).
On the other hand, we have numerous theories that describe trust building as more slow
and systematic calculations about costs and benefits (e.g. Lewicki & Bunker, 1996), or
as nicely stated by Kramer, ‘trust thickens or thins as a function of the cumulative
history of interaction between interdependent parties’ (1996, p. 218). These
approaches have become more complex as parties have attempted to incorporate affect
in their approaches, understand how trust drives judgments of the other’s
trustworthiness, understand how trust and distrust may operate differently, and
understand how trust may change and transform within a relationship over time.
Theory and research about these two approaches to trust have evolved over the past
thirty-five years, with greater attention given to the ‘co-construction’ approach to trust.
Yet much trust theory and research implicitly suggests that trust operates as a
heuristic, and that many trust judgments can be viewed in these terms. Systematic work
is necessary to separate these streams of work, to model articulately how they are
proposed to work differently, to grant legitimacy to both approaches, and then to draw
out in detail the theoretical and empirical implications of treating them separately, as
well as to understand how they work together.
This chapter extends the study of trust into the domain of framing and heuristic
effects. Trust enables people to enter into relational interactions without the need to
engage in in-depth analysis of the probabilities of potential outcomes. Trust operates as
a probability heuristic in that it supplies an a priori probability of expected behavior
be applied to emergent situations involving a referent person or institution. In
situations of relative uncertainty, trust provides assurance that a desirable course of
events will be realized in the unknowable future. As Luhmann (1979, p.32) wrote,
“trust rests on illusion. In actuality, there is less information available than would be
required to give assurance of success.”
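The notion of trust as an a priori probability can be illustrated with a minimal decision sketch (our construction, not the authors'; the payoff values and threshold are illustrative assumptions): a single prior probability that the other will behave as expected substitutes for an elaborate analysis of every possible outcome.

```python
def expected_value(trust, gain_if_honored=10.0, loss_if_betrayed=-25.0):
    """Toy decision rule: 'trust' is treated as an a priori probability
    that the other will honor the interaction. The payoff values are
    illustrative assumptions, not drawn from the chapter."""
    return trust * gain_if_honored + (1 - trust) * loss_if_betrayed

def decide(trust):
    # A single prior stands in for case-by-case scenario analysis.
    return "engage" if expected_value(trust) > 0 else "withhold"
```

With these payoffs the trustor engages only when trust exceeds 25/35 ≈ 0.71; as Luhmann's remark suggests, this single prior rests on far less information than a full assurance of success would require.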
Moreover, trusting cognitions and behaviors depend on how an individual
“perceives” a potentially ambiguous or unclear situation. Considering that trust is
rooted in perception, it is likely that the same situation will be interpreted differently by
different individuals, because estimates of risk and expected gains or losses are subjective
and therefore susceptible to systematic heuristic and framing effects. These effects may, in
part, explain why some individuals may take unwise risks, thereby trusting unwisely,
whereas other individuals may experience the increased cognitive and behavioral
efficiencies associated with “well-placed” trust.
Bacon, T.R. (2006) What People Want. Mountain View, CA: Davies Black.
Baier, A. (1985). Trust and antitrust. Ethics, 96, 231-260.
Bazerman, M. (2006) Judgment and Managerial Decision Making. Sixth Edition. New
York: John Wiley.
Beersma, B. & De Dreu, C. (1999) Negotiation processes and outcomes in prosocially
and egoistically motivated groups. International Journal of Conflict Management, 10,
Bower, G. H. (1981). Mood and memory. American Psychologist, 36, 129-148.
Bower, G. H. (1983). Affect and cognition. Philosophical Transactions of the Royal
Society of London, 302(B), 387-402.
Bower, G. H., & Forgas, J. P. (2000). Affect, memory, and social cognition. In E. Eich,
J. F. Kihlstrom, G. H. Bower, J. P. Forgas & P. M. Niedenthal (Eds.), Cognition and
emotion. (pp. 87-168). New York: Oxford University Press.
Brewer, M. (1981) Ethnocentrism and its role in interpersonal trust. In M. Brewer & B.
Collins (Eds.), Scientific Inquiry and the Social Sciences. New York: Jossey Bass, pp.
Buechler, S.M. (2000). Social motives in advanced capitalism. New York: Oxford
University Press.
Butler, J.K. (1995) Behaviors, trust and goal achievement in a win-win negotiating
role-play. Group & Organization Management, 20, 486-501.
Butler, J.K. (1999). Trust expectations, information sharing, climate of trust and
negotiation effectiveness and efficiency. Group & Organization Management, 24, 217-
Cervone, D., & Peake, P. K. (1986). Anchoring, efficacy, and action: The influence of
judgmental heuristics on self-efficacy judgments and behavior. Journal of Personality
and Social Psychology, 50, 492-501.
Chaiken, S., Liberman, A., & Eagly, A. H. (1989). Heuristic and systematic
information processing within and beyond the persuasion context. In J. S. Uleman & J.
A. Bargh (Eds.), Unintended thought (pp. 212-252). New York: Guilford Press.
Chaiken, S., & Trope, Y. (1999). Dual-process theories in social psychology. New
York: Guilford Press.
Chapman, G. B., & Johnson, E. J. (2002). Incorporating the irrelevant: Anchors in
judgments of belief and value. In T. Gilovich, D. Griffin & D. Kahneman (Eds.),
Heuristics and biases: The psychology of intuitive judgment. New York: Cambridge
University Press.
Clark, M. C., & Payne, R. L. (1997). The nature and structure of workers' trust in
management. Journal of Organisational Behaviour, 18, 205-224.
Clore, G. L., Schwarz, N., & Conway, M. (1994). Affective causes and consequences of
social information processing. Hillsdale, NJ: Lawrence Erlbaum Associates.
Cummings, L. L., & Bromiley, P. (1996). The organizational trust inventory (OTI):
Development and validation. In R. M. Kramer & T. R. Tyler (Eds.), Trust in
organizations: Frontiers of theory and research (pp. 302-330). Thousand Oaks, CA: Sage.
Deutsch, M. (1960). The effect of motivational orientation upon trust and suspicion.
Human Relations, 13, 123-139.
De Dreu, C., Weingart, L.R. & Kwon, S. (2000) Influence of social motives on
integrative negotiation: A meta-analytic review and test of two theories. Journal of
Personality and Social Psychology, 78, 889-905.
Dewulf, A., Gray, B., Putnam, L.L., Lewicki, R., Aarts, N., Bouwen, R. & van
Woerkum, C. (2009). Disentangling approaches to framing in conflict and negotiation
research: Mapping the terrain. Human Relations, 62(2), 1-39.
Dirks, K. T., & Ferrin, D. L. (2001). The role of trust in organizational settings.
Organization Science, 12, 450-467.
Fein, S. (1996). Effects of suspicion on attributional thinking and the correspondence
bias. Journal of Personality and Social Psychology, 70, 1164-1184.
Ferris, G.R., Blass, F.R., Douglas, C., Kolodinsky, R. & Treadway, D. Personal
reputations in organizations. In J. Greenberg (Ed.), Organizational behavior: The state
of the science. Mahwah, NJ: Lawrence Erlbaum.
Finucane, M. L., Alhakami, A., Slovic, P., & Johnson, S. M. (2000). The affect
heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making,
13, 1-17.
Fischhoff, B., & Beyth, R. (1975). "I knew it would happen": Remembered
probabilities of once-future things. Organizational Behavior & Human Performance,
13, 1-16.
Fiske, A. P. (1991) Structures of social life. New York: Free Press.
Follett, M. P. (1942). Constructive conflict. In H.C. Metcalf & L. Urwick (Eds)
Dynamic Administration: The collected papers of Mary Parker Follett (pp. 30-49).
New York: Harper & Brothers.
Forgas, J. P. (1995). Mood and judgment: The affect infusion model (AIM).
Psychological Bulletin, 117, 39-66.
Gigerenzer, G., & Todd, P. M. (1999). Fast and frugal heuristics: The adaptive toolbox.
In G. Gigerenzer & P. M. Todd (Eds.), Simple heuristics that make us smart (pp. 3-34).
New York: Oxford University Press.
Gilbert, L. A., & Scher, M. (1989). The power of unconscious belief: Male entitlement
and sexual intimacy with clients. Professional Practice of Psychology, 8, 94-108.
Gray, B. (2003). Framing of environmental disputes. In R. Lewicki, B. Gray & M.
Elliott (Eds.), Making sense of intractable environmental conflicts: Concepts and cases.
Washington, D.C.: Island Press.
Grayson, C. E., & Schwarz, N. (1999). Beliefs influence information processing
strategies: Declarative and experiential information in risk assessment. Social
Cognition, 17, 1-18.
Greenhalgh, L. & Chapman, D. (1996). Relationships between disputants: Analysis of
their characteristics and impact. In S. Gleason (Ed.) Frontiers in dispute resolution and
human resources. (pp. 203-228). East Lansing, MI: Michigan State University Press.
Greenberg, J. & Wiethoff, C. (2001). Organizational justice as proaction and reaction:
Implications for research and application. In R. Cropanzano (Ed.), Justice in the
workplace, Vol. 2: From theory to practice (pp. 271-301). Mahwah, NJ: Lawrence
Erlbaum Associates.
Hardin, R. (1993). The street-level epistemology of trust. Politics and Society, 21, 505-
Jacowitz, K. E., & Kahneman, D. (1995). Measures of anchoring in estimation tasks.
Personality and Social Psychology Bulletin, 21, 1161-1166.
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological
Review, 80, 237-251.
Kimmel, M.J., Pruitt, D., Magenau, J.M., Konar-Goldband, E. & Carnevale, P.J.
(1980). Effects of trust, aspiration, and gender on negotiation tactics. Journal of
Personality and Social Psychology, 38, 9-23.
Kolb, D. (1985). The mediators. Cambridge, MA: MIT Press.
Kramer, R. (1994). The sinister attribution error: Paranoid cognition and collective
distrust in organizations. Motivation & Emotion., 18, 199-230.
Kruger, J. (1999). Lake Wobegon be gone! The "below-average effect" and the
egocentric nature of comparative ability judgments. Journal of Personality and Social
Psychology, 77, 221-232.
Levin, I. P., Schneider, S. L., & Gaeth, G. J. (1998). All frames are not created equal: A
typology and critical analysis of framing effects. Organizational Behavior and Human
Decision Processes, 76, 149-188.
Lewicki, R., Barry, B., Saunders, D. and Minton, J. (2006). Negotiation. Fifth Edition.
Burr Ridge, IL: McGraw Hill Higher Education.
Lewicki, R. J., & Bunker, B. B. (1996). Developing and maintaining trust in work
relationships. In R. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers of
theory and research (pp. 114-139). Thousand Oaks, CA: Sage.
Lewicki, R., Gray, B. & Elliott, M. (2003). Making sense of intractable environmental
conflicts: Concepts and cases. Washington, D.C.: Island Press.
Lewicki, R., McAllister, D. & Bies, R. (1998) Trust and distrust: New relationships and
realities. Academy of Management Review, 23, 3, 438-458.
Lewicki, R.J., Tomlinson, E.C. & Gillespie, N. (2006) Models of interpersonal trust
development: Theoretical approaches, empirical evidence and future directions. Journal
of Management, 32, 6, 991-1022.
Lewicki, R.J., Wiethoff, C. & Tomlinson, E. (2005) What is the role of trust in
organizational justice? In Greenberg, J. and Colquitt, J. Handbook of Organizational
Justice: Fundamental Questions about Fairness in the Workplace. Mahwah, NJ:
Lawrence Erlbaum Associates.
Lewis, J.D. & Weigert, A. (1985). Trust as a social reality. Social Forces, 63, 967-985.
Lind, E. A. (2001). Fairness heuristic theory: Justice judgments as pivotal cognitions in
organizational relations. In J. Greenberg & R. Cropanzano (Eds.), Advances in
organizational justice. (pp. 56-68). Stanford, CA: Stanford University Press.
Lind, E. A., Kulik, C. T., Ambrose, M., & de Vera Park, M. V. (1993). Individual and
corporate dispute resolution: Using procedural fairness as a decision heuristic.
Administrative Science Quarterly, 38, 224-251.
Lount, R. (2009) The impact of positive mood on trust in interpersonal and intergroup
interactions. Unpublished doctoral dissertation.
Luhmann, N. (1979). Trust and power. Chichester, England: Wiley.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of
organizational trust. Academy of Management Review, 20, 709-734.
McAllister, D. (1995). Affect- and cognition-based trust as foundations for
interpersonal cooperation in organizations. Academy of Management Journal, 38, 24-
McAllister, D., Lewicki, R.J. & Bies, R. (2001). Hardball: How trust and distrust
interact to predict hard influence tactic use. Paper presented in a symposium at the
annual meeting of the Academy of Management. Unpublished manuscript.
McKnight, D.H., Cummings, L. & Chervany, N.L. (1998). Initial trust formation in
new organizational relationships. Academy of Management Review, 23, 473-491.
Meyerson, D., Weick, K.E. & Kramer, R.M. (1996). Swift trust and temporary groups.
In R.M. Kramer & T.R. Tyler (Eds.) Trust in organizations. Thousand Oaks, CA: Sage
Publications, 166-195.
Minsky, M. (1975). A framework for representing knowledge. In P. Winston (Ed.), The
psychology of computer vision. New York: McGraw-Hill.
Moskowitz, G. B. (2005). Social cognition: Understanding self and others. New York:
Guilford Press.
O’Connor, K. & Tinsley, C. H. (2004). Looking for an edge in negotiations? Cultivate
an integrative reputation. Unpublished manuscript.
Olekalns, M., Lau, F. & Smith, P.L. (2002). The dynamics of trust in negotiation.
Paper presented to the International Association of Conflict Management, Park City,
Utah.
Parks, C. D., Henager, R. F., & Scamahorn, S. D. (1996). Trust and reactions to
messages of intent in social dilemmas. Journal of Conflict Resolution, 40, 134-151.
Petty, R. E., Cacioppo, J. T., & Schumann, D. (1983). Central and peripheral routes to
advertising effectiveness: The moderating role of involvement. Journal of Consumer
Research, 10, 135-146.
Putnam, L. & Holmer, M. (1992). Framing, reframing and issue development. In L.
Putnam & M. Roloff (Eds.), Communication and negotiation (pp.128-155). Newbury
Park, CA: Sage.
Quigley-Fernandez, B., Malkis, F.S. & Tedeschi, J. (1985) Effects of first impressions
and reliability of promises on trust and cooperation. British Journal of Social
Psychology, 24, 29-36.
Robinson, S. L. (1996). Trust and breach of the psychological contract. Administrative
Science Quarterly, 41, 574-599.
Roth, J. & Sheppard, B. (1995). Opening the black box of framing research: The
relationship between frames, communication and outcomes. Academy of Management
Rothman, A. J., & Schwarz, N. (1998). Constructing perceptions of vulnerability:
Personal relevance and the use of experiential information in health judgments.
Personality and Social Psychology Bulletin, 24, 1053-1064.
Rotter, J. B. (1971). Generalized expectancies for interpersonal trust. American
Psychologist, 26, 443-452.
Rousseau, D., Sitkin, S., Burt, R. & Camerer, C. (1998). Not so different after all: A
cross-discipline view of trust. Academy of Management Review, 23, 393-404.
Schwarz, N. (1990). Feelings as information: Informational and motivational functions
of affective states. In E. T. Higgins & R. M. Sorrentino (Eds.), Handbook of motivation
and cognition: Foundations of social behavior (pp. 527-562). New York: Guilford Press.
Schwarz, N., & Clore, G. L. (1983). Mood, misattribution, and judgments of well-
being: Informative and directive functions of affective states. Journal of Personality
and Social Psychology, 45, 513-523.
Schwarz, N., & Vaughn, L. A. (2002). The availability heuristic revisited: Ease of
recall and content of recall as distinct sources of information. In T. Gilovich, D. Griffin
& D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment.
New York: Cambridge University Press.
Shah, A. K., & Oppenheimer, D. M. (2008). Heuristics made easy: An effort-reduction
framework. Psychological Bulletin, 134, 207-222.
Slovic, P., Finucane, M., Peters, E., & MacGregor, D. (2002). The affect heuristic. In T.
Gilovich, D. Griffin & D. Kahneman (Eds.), Heuristics and biases: The psychology of
intuitive judgment. New York: Cambridge University Press.
Switzer, F. S., & Sniezek, J. A. (1991). Judgment processes in motivation: Anchoring
and adjustment effects on judgment and behavior. Organizational Behavior and Human
Decision Processes, 49, 208-229.
Tannen, D. (1979). What is a frame? Surface evidence of underlying expectations. In R.
Freedle (Ed.), New directions in discourse processes (pp. 137-181). Norwood, NJ: Ablex.
Tannen, D. & Wallat, C. (2003). Interactive frames and knowledge schemas in
interaction: Examples from a medical examination/interview. In D. Tannen (Ed.),
Framing in discourse (pp. 57-76). New York: Oxford University Press.
Tinsley, C.H., O’Connor, K. & Sullivan, B. (2002). Tough guys finish last: The perils
of a distributive reputation. Organizational Behavior and Human Decision
Processes, 621-642.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and
biases. Science, 185, 1124-1131.
Tversky, A. & Kahneman, D. (1981). The framing of decisions and the psychology of
choice. Science, 211, 453-458.
Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The
conjunction fallacy in probability judgment. Psychological Review, 90, 293-315.
Williams, M. (2001). In whom we trust: Group membership as an affective context for
trust development. Academy of Management Review, 26, 377-396.
Williamson, O. E. (1981). The economics of organization: The transaction cost approach.
American Journal of Sociology, 87, 548-577.
Wright, W. F., & Anderson, U. (1989). Effects of situation familiarity and financial
incentives on use of the anchoring and adjustment heuristic for probability assessment.
Organizational Behavior and Human Decision Processes, 44, 68-82.
Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American
Psychologist, 35, 151-175.