Modelling biased human trust dynamics 1
Mark Hoogendoorn a,*, Syed Waqar Jaffry a,b, Peter-Paul van Maanen a,c and Jan Treur a
a Agent Systems Research Group, VU University Amsterdam, De Boelelaan 1081, 1081 HV Amsterdam,
The Netherlands
E-mail: {mhoogen,swjaffry,treur}@few.vu.nl
b Punjab University College of Information Technology (PUCIT), University of The Punjab,
Shahrah-e-Quaid-i-Azam, Lahore, Pakistan
E-mail: swjaffry@pucit.edu.pk
c Department of Perceptual and Cognitive Systems, Netherlands Organisation for Applied Scientific Research
(TNO), P.O. Box 23, 3769 ZG Soesterberg, The Netherlands
E-mail: peter-paul.vanmaanen@tno.nl
Abstract. According to literature from the domains of Psychology and the Social Sciences, non-rational behaviour can often be observed within human trust-related behaviour. Current trust models typically do not incorporate such non-rational elements in the trust formation dynamics. In order to enable agents that interact with humans to estimate human trust well, and to take this into account in their behaviour, trust models that incorporate such human aspects are a necessity. A specific non-rational element in humans is that they are often biased in their behaviour. In this paper, models for human trust dynamics are presented that incorporate human biases. To show that they describe human behaviour more accurately, they have been evaluated against empirical data, which shows that the models perform significantly better.
Keywords. Trust, biases, modelling, validation
1. Introduction
Within the domain of multi-agent systems, a variety of trust models have been proposed (e.g., see [13,14] for an overview). Often, such trust models are utilized in an environment in which software agents should make choices based upon their levels of trust; hence, such models aim to optimize the behavior of the agent by using the most appropriate trust function. An example of such a model is described in [12]. In situations where software agents interact with humans, the trust models incorporated in these agents may have a completely different purpose: to estimate the trust levels of the human over time, and take these into consideration in the agent's behavior, for example, by providing advice from other trustees that are trusted more. If this is the purpose of the trust model, then the model should also explicitly incorporate non-rational human aspects. Examples of models taking into account various human aspects are [3,7,11].

1 The work presented in this paper is a significant extension by more than 40% of (Hoogendoorn, Jaffry, Maanen, and Treur, 2011).
* Corresponding author.
In the literature in the domains of Psychology and the Social Sciences it has been shown that one important non-rational aspect within the formation of trust is the incorporation of biases. Several biases have been observed, of which the culture bias is one of the most frequently reported. In [20] it is shown that humans from collectivistic cultures tend to have a bias towards trusting members that belong to the same group and distrusting persons from outside the group. In [8] a comparison between individualistic and collectivistic cultures is made as well, which shows that the trust of the members of an individualistic society is less negatively biased towards persons from outside their group. Other authors also emphasize the existence of such a bias in general, e.g., [15]. If the objective of a computational model of trust is to represent human trust in a natural and accurate manner, such biases need to be taken into account in the model.
Web Intelligence and Agent Systems: An International Journal 11 (2013) 21–40
DOI 10.3233/WIA-130260
IOS Press
1570-1263/13/$27.50 © 2013 – IOS Press and the authors. All rights reserved
In this paper, a model has been developed that incorporates biases in a model for trust dynamics. To this end, an existing trust model is taken as a point of departure (cf. [11]), which was applied, for example, in [17–19]. Biases have been added to this model using a number of different approaches for the manner in which biases affect the level of trust. Introducing a trust model with the purpose of modelling human behaviour in a more realistic way requires a thorough evaluation of the model. Therefore, in this paper a number of approaches have been used to evaluate the introduced models. First of all, the behaviour of the models themselves has been rigorously compared and analyzed using identified emergent properties. Also, an extensive mathematical analysis of monotonicity, equilibria and behaviour around equilibria has been performed for this purpose.
In addition to these types of formal analysis, an empirical analysis has been performed as well. The models have been validated against empirical data obtained from an experiment conducted with human subjects. Such a full empirical validation is not common for computational trust models. However, some authors have performed some form of validation. For instance, in (Jonker, Schalken, Theeuwes and Treur, 2004) an experiment has been conducted whereby the trends in human trust behaviour have been analyzed to verify properties underlying trust models developed in the domain of multi-agent systems. However, no attempt was made to fit the model exactly to the trusting behaviour of the human. The outcome of the validation experiment presented in the current paper shows that the introduced bias-based models perform significantly better than comparable models without an explicit representation of biases.
This paper is organized as follows. First, in Section 2 six new human bias-based trust models are introduced across computational and human cognitive dimensions. Thereafter, simulation results of these bias-based trust models are presented in Section 3. The formal analyses of the newly designed bias-based trust models through logical and mathematical means are described in Sections 4 and 5, respectively. Thereafter, the human-based trust experiment is explained in Section 6. The validation results of the models, based on the empirical data collected in this experiment, are presented in Section 7, and finally, Section 8 is a discussion.
2. Models for biased trust dynamics
In this section a number of trust models are proposed that incorporate biased human behaviour. In order to be able to model bias-based trust dynamics, an existing trust model aimed at representing human trust is taken as a basis. This is a well-known model presented in [11] and applied, for example, in [17–19]. The model is expressed as follows:

T(t + Δt) = T(t) + γ · (E(t) − T(t)) · Δt    (1)

In this trust model, it is assumed that the human receives a certain experience E(t) at each time point, represented by a value in the interval [0, 1]. It is compared with the current trust level T(t), and the difference is multiplied by a trust update speed factor γ. This difference is then multiplied by the chosen step size Δt and added to the current trust level to obtain the new trust level.
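In code, one update step of this base model can be sketched as follows (function and parameter names are illustrative, with the parameter values used later in this paper as defaults):

```python
def update_trust(trust, experience, gamma=0.25, dt=0.5):
    # One step of Eq. (1): T(t + dt) = T(t) + gamma * (E(t) - T(t)) * dt.
    # Trust moves a fraction gamma * dt of the remaining distance
    # towards the received experience.
    return trust + gamma * (experience - trust) * dt
```

For example, starting from neutral trust 0.5 and a maximally positive experience 1.0, one step yields 0.5 + 0.25 · 0.5 · 0.5 = 0.5625.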
The model described above does not include biases; therefore, in this paper extensions of the model are introduced that incorporate biases. This can be done in different manners. It is assumed that human biases can affect trust in a number of ways. More specifically, there are different ways in which the bias plays a role in the formation of a new trust value; this is referred to as the cognitive dimension in Fig. 1. In this paper, three options are distinguished:

(a) the bias solely plays a role in the way in which the human perceives an experience with a specific trustee: the experience is transformed from a certain objective value to a subjective biased experience value, which is then used to derive a new trust value,
(b) the experience is again perceived differently based upon the bias, but the current trust value also plays a role in the perception of the experience,
(c) the experiences are not biased, but the trust value itself is biased.

Besides these different possibilities for modelling the point at which the bias plays a role in the trust formation process, the precise way in which the bias is incorporated within the model can also be varied. A more linear trend in the bias behaviour can be assumed, or a logistic type of trend; this is referred to as the computational dimension in Fig. 1. Given these dimensions, in total six models for incorporating a bias into the unbiased model expressed in Eq. (1) can now be formulated (see Fig. 1):
1. linear model with biased experience,
2. linear model with biased experience influenced by current trust,
3. linear model with bias solely determined by current trust,
4. logistic model with biased experience,
5. logistic model with biased experience influenced by current trust,
6. logistic model with bias solely determined by current trust.

The above models are abbreviated as LiE, LiET, LiT, LoE, LoET, and LoT respectively. In order to incorporate the biased behaviour in the model presented in Eq. (1), functions have been defined that take the current experience (for models LiE and LoE), the experience and the trust (for models LiET and LoET), or the trust value itself (for models LiT and LoT) and transform it into a biased value. This biased value can then be used to calculate the new trust value based upon Eq. (1).
2.1. Trust models with biased experience
For the models that express the bias solely based upon the experience, the following two equations are used (for the linear and logistic case, respectively):

LiE:
f(E(t)) = E(t) + (2β − 1) · (1 − E(t))    if β ≥ 0.5
f(E(t)) = 2β · E(t)    if β < 0.5

LoE:
f(E(t)) = 1 / (1 + e^(−σ·(E(t) − τ)))

In the first equation, β is the bias parameter from the interval [0, 1]. Here values for β of 0.0, 0.5 and 1.0 represent an absolute negative, neutral and absolute positive bias, respectively. It can be seen that in the case of a positive bias (i.e., β > 0.5) the current experience is increased by an amount dependent on the positiveness of the bias (the more positive the bias, the more the objective experience is increased). For the logistic equation (LoE), σ and τ are the steepness and threshold parameters of the logistic transformation. In the logistic transformation τ is assumed to represent the human's bias. It is assumed that this value has an inverse relationship with β (i.e., τ = 1 − β). Furthermore, E(t) and T(t) are the experience and the human trust level in the given trustee at time point t, respectively. The resulting value of the function f(E(t)) is the biased experience.

This function can be incorporated into the base model (Eq. (1)) in a general setting as follows:

T(t + Δt) = T(t) + γ · (f(E(t)) − T(t)) · Δt    (2)

For the specific (linear and logistic) cases considered this becomes:

T(t + Δt) = T(t) + γ · (E(t) + (2β − 1) · (1 − E(t)) − T(t)) · Δt    if β ≥ 0.5
T(t + Δt) = T(t) + γ · (2β · E(t) − T(t)) · Δt    if β < 0.5

T(t + Δt) = T(t) + γ · (1 / (1 + e^(−σ·(E(t) − τ))) − T(t)) · Δt
2.2. Trust models with biased experience affected by current trust
In the second set of bias equations, the bias plays a role in combination with the current trust value and the experience, as expressed below.

LiET:
g(E(t), T(t)) = β · (1 − (1 − E(t)) · (1 − T(t)))

LoET:
g(E(t), T(t)) = 1 / (1 + e^(−σ·(E(t) + T(t) − τ)))

The first equation (linear model) expresses that the more positive the bias is, the more the evaluation is increased, depending on the distances of the experience and the trust to the highest value. The second is the logistic variant of the model, whereby the combination of the experience and the trust is used in the threshold function.

The function can be inserted into the base model in a general setting as follows:

T(t + Δt) = T(t) + γ · (g(E(t), T(t)) − T(t)) · Δt    (3)

For the specific (linear and logistic) cases considered this becomes:
Fig. 1. Bias-based trust models.
∆
"111 
1#∆
∆
1 1ିఙା்ିఛ
∆
2.3. Trust models with bias solely determined by current trust
The final set of equations concerns a bias solely based upon the trust level, and not on the experience itself. The following two equations are used for this purpose:

LiT:
h(T(t)) = T(t) + (2β − 1) · (1 − T(t))    if β ≥ 0.5
h(T(t)) = 2β · T(t)    if β < 0.5

LoT:
h(T(t)) = 1 / (1 + e^(−σ·(T(t) − τ)))

The equations follow the same structure as seen for the experience-based bias, except that now the trust value is used.

For the general setting it is combined with the base model as follows:

T(t + Δt) = h(T(t)) + γ · (E(t) − h(T(t))) · Δt    (4)

For the specific (linear and logistic) cases considered this becomes:

T(t + Δt) = h(T(t)) + γ · (E(t) − h(T(t))) · Δt with h(T(t)) = T(t) + (2β − 1) · (1 − T(t))    if β ≥ 0.5
T(t + Δt) = h(T(t)) + γ · (E(t) − h(T(t))) · Δt with h(T(t)) = 2β · T(t)    if β < 0.5

T(t + Δt) = h(T(t)) + γ · (E(t) − h(T(t))) · Δt with h(T(t)) = 1 / (1 + e^(−σ·(T(t) − τ)))
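A sketch of the trust-bias variants and the corresponding update step (the combination of the biased trust with the update rule follows the reconstruction above and is an assumption; names are illustrative):

```python
import math

def biased_trust_linear(t, beta):
    # LiT: the same branch structure as the LiE bias, applied to trust.
    if beta >= 0.5:
        return t + (2 * beta - 1) * (1 - t)
    return 2 * beta * t

def biased_trust_logistic(t, beta, sigma=5.0):
    # LoT: logistic transformation of the trust value, tau = 1 - beta.
    tau = 1.0 - beta
    return 1.0 / (1.0 + math.exp(-sigma * (t - tau)))

def update_trust_biased(t, e, bias_fn, gamma=0.25, dt=0.5):
    # Eq. (4) (sketch): the biased trust h(T(t)) replaces T(t) in the
    # base update of Eq. (1).
    h = bias_fn(t)
    return h + gamma * (e - h) * dt
```

With a neutral linear bias (β = 0.5) the biased trust equals the trust itself, so the update reduces to the base model.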
3. Simulation results for the biased human trust models

In order to observe the behaviour of the bias-based trust models described in the previous section, several simulation experiments have been performed. In these experiments each model is first simulated independently against a set of experience values, and then the models are compared using a novel technique called mutual mirroring of models, as described in [9].

3.1. Single model comparisons

In this first experiment, merely one trustee for which an agent has to form trust is considered. In this section the results of one of these experiments are presented in detail. In Table 1 the experimental configuration for this simulation is described. The bias parameter β is varied over the values 0.0, 0.5 and 1.0, which represent a negative, neutral and positive bias, respectively. For comparison purposes, the bias parameter τ for the logistic models is calculated by means of the equation τ = 1 − β. The trust change rate γ is taken as 0.25. Furthermore, the initial trust value is taken as 0.50, which means that the human has neutral trust at time point 0. The step size Δt is set to 0.50.
The experience sequence used in this experiment is shown in Fig. 2. The experience provided in this experiment changes periodically between the values 0.0, 0.5 and 1.0, with a period of 10 time steps; these values represent a negative, neutral and positive experience, respectively. This experience sequence is used to observe the behaviour of the models on and between these varying extremes.
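The experience sequence and a simulation run under the configuration of Table 1 can be sketched as follows (names are illustrative; the bias_fn argument plugs in any of the experience-bias functions of Section 2.1, and the identity function reproduces the unbiased baseline model):

```python
def experience_sequence(length=180, period=10, levels=(0.0, 0.5, 1.0)):
    # Periodic sequence: each level in `levels` is held for `period` steps.
    return [levels[(step // period) % len(levels)] for step in range(length)]

def simulate(experiences, bias_fn, gamma=0.25, dt=0.5, t0=0.5):
    # Iterate the biased-experience update (Eq. (2)) over the sequence.
    trust, trace = t0, [t0]
    for e in experiences:
        trust = trust + gamma * (bias_fn(e) - trust) * dt
        trace.append(trust)
    return trace

# Unbiased run: the identity bias gives the baseline (unbiased) model.
baseline = simulate(experience_sequence(), bias_fn=lambda e: e)
```

Since each step is a convex combination of the current trust and the (biased) experience, the trust trace stays within [0, 1] whenever the biased experiences do.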
In Figs 3–5 the results of the simulations given the experience sequence introduced above are shown. In Fig. 3 the agent has a negative bias towards the trustee. A simulation for a neutral bias is shown in Fig. 4, whereas a positive bias is used in Fig. 5. It can be observed in the case of the negative bias that both LiE and LiET converge to no trust (value 0) despite the fact that the trustee gives some positive experiences. The LiT, LoT, and LoE variants show trends similar to the base trust model, but with a much lower trust value (which is precisely as desired, given the negative bias). The final variant of the model (LoET) shows an undesired result: the trust is actually higher than for the base model. This is due to the high value of the steepness parameter σ, which is 5. For lower values of the steepness (<3) this model shows the desired results as well (not shown for the sake of brevity).

Table 1
Experimental configuration for simulation experiments

Quantity           Symbol               Value
Bias parameter     β (linear model)     0.00, 0.50, 1.00
                   τ (logistic model)   1.00, 0.50, 0.00
Trust change rate  γ                    0.25
Time step          Δt                   0.50
Initial trust      T(0)                 0.50
Steepness          σ                    5
Experiences        E(t)                 Periodic (0.0, 0.5, 1.0), 10 time steps each
In Fig. 4 a neutral bias (β = 0.5, τ = 0.5, σ = 5) is used, and all the models except one show behaviour similar to the baseline model (which is as expected, as there is no bias). LoET does, however, show very different and undesirable behaviour, as it converges to the maximum trust value. This relates to the fact that for this type of model the value 0.5 does not show the upward-downward symmetry required for a non-biased case. Therefore this model does not qualify well in this respect.

In Fig. 5 an absolute positive bias is set (β = 1, τ = 0, σ = 5). In the figure, LiE, LiET, and LoET converge to maximum trust (value 1) despite the fact that the trustee gives some negative experiences. This behaviour is not completely as desired, but could be adjusted by taking a different steepness value. LoE, LiT and LoT show trends similar to the baseline trust model, but with a higher trust value, which is precisely as desired.
3.2. Mutual mirroring of the bias-based trust models
To analyze the generalization capacity of these models a novel technique named mutual mirroring of models is used, as introduced in [9]; see also [7]. In this method, a specific trace (simulation run) of a source model is taken as a basis, and a parameter tuning approach (e.g., exhaustive search within the parameter space) for a target model is performed to see how closely the target model can describe the trace of the source model (i.e., what the set of parameters with minimum error is). This gives a good indication of how well the models can describe each other's behaviour, and some indication of similarity. The mirroring is also done in the opposite direction (i.e., from a trace of the target model to parameters of the source model). This process of mirroring both
Fig. 2. Experience sequence.
Fig. 3. Simulation results for absolute negative bias (β = 0, τ = 1, σ = 5).
Fig. 4. Simulation results for neutral or no bias (β = 0.5, τ = 0.5, σ = 5).
Fig. 5. Simulation results for absolute positive bias (β = 1, τ = 0, σ = 5).
models into each other is called mutual mirroring of models. The mirroring process can provide a good indication of the similarity of the models. For more details on the approach, see [7,9].
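The exhaustive-search step of the mirroring process can be sketched as a minimal grid search minimizing the root mean squared error between the target model's trace and the source trace (all names and the step-function interface are illustrative):

```python
import itertools
import math

def rmse(xs, ys):
    # Root mean squared error between two equally long traces.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

def mirror(source_trace, experiences, target_step, grid, t0=0.5):
    # Exhaustively try every parameter combination of the target model
    # and keep the one whose trace is closest to the source trace.
    best_err, best_params = float("inf"), None
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        trust, trace = t0, [t0]
        for e in experiences:
            trust = target_step(trust, e, **params)
            trace.append(trust)
        err = rmse(trace, source_trace)
        if err < best_err:
            best_err, best_params = err, params
    return best_params, best_err
```

As a sanity check, mirroring a base-model trace onto the base model itself should recover the original γ with zero error.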
The mirroring technique has been applied to the models introduced in Section 2. The results are shown in Table 2. Here, the columns represent the target models while the rows represent the source models.

For a specific trace of the source model (given a certain set of parameter settings) the parameters of the target model are exhaustively searched to generate behaviour similar to the trace of the source model with minimum root mean squared error. The values in each cell of the table represent the average error over nine different source model traces generated with different bias values and experience sequences. In the first row of the table it can be seen that on average the source model LiE can be approximated using the LiE, LiET, LiT, LoE, LoET and LoT variants with errors of 0.00, 0.04, 0.22, 0.12, 0.14 and 0.22, respectively. Furthermore, in the last column of the first row it can be seen that the average error of the mirroring process with all other models is 0.12. This seems to be the most difficult behaviour to approximate on average, as the other rows show lower average values. Especially the behaviour of LiT and LoE can be approximated very well by the other models. Furthermore, the last row shows values that indicate how well a model can describe the other models' behaviour. This shows that LiE and LiET can describe many of the source models very well.
4. Logical verification of the bias-based trust models

When developing a new model, a thorough analysis of its behaviour is required to have sufficient confidence in the appropriate behaviour of the model. One way to perform such an analysis is to conduct a mathematical analysis (see Section 5). However, given the complexity of the models proposed in this paper, the analysis of more complex (temporal) patterns might not be feasible using such techniques. Therefore, in this section, certain desired emergent properties that express complex patterns over time are discussed with respect to the bias-based trust models. To show that the models indeed generate the desired behaviour, these properties have been verified against the simulation traces produced by the models proposed in Section 2. This does not prove complete adherence of the models to the properties, but it does show whether the selected simulation runs (which are of course carefully selected in order to have representative results) adhere to the properties. In order to perform this verification in an automated fashion, the hybrid temporal language TTL (Temporal Trace Language, cf. [2,16]) and its software environment have been used. In addition to a dedicated editor, TTL features an automated verification tool that verifies specified properties against traces that have been loaded into the tool. The language TTL is explained first, followed by a presentation of the desired properties related to trust.
4.1. Temporal trace language (TTL)
The hybrid temporal language TTL supports formal specification and analysis of dynamic properties, covering both qualitative and quantitative aspects. TTL is built on atoms referring to states of the world, time points and traces, i.e., trajectories of states over time. In addition, dynamic properties are temporal statements that can be formulated with respect to traces based on a state ontology Ont in the following manner. Given a trace γ over state ontology Ont, the state in γ at time point t is denoted by state(γ, t). These states can be related to state properties via the formally defined satisfaction relation denoted by the infix predicate |=, i.e., state(γ, t) |= p denotes that state property p holds in trace γ at time t. Based on these statements, dynamic properties can be formulated in a formal manner in a sorted first-order predicate logic, using quantifiers over time and traces and the usual first-order logical connectives such as ¬, ∧, ∨, ⇒, ⇔, ∀, ∃. As a built-in construct in TTL, summations can be expressed, indexed by elements X of a sort S:

Σ X:S case(ϕ(X), V1, V2)
Table 2
Results for mutual mirroring of the models

Source model   Target model
               LiE   LiET  LiT   LoE   LoET  LoT   AVG
LiE            0.00  0.04  0.22  0.12  0.14  0.22  0.12
LiET           0.02  0.00  0.19  0.10  0.13  0.19  0.11
LiT            0.01  0.03  0.00  0.01  0.06  0.00  0.02
LoE            0.01  0.03  0.09  0.00  0.08  0.09  0.05
LoET           0.03  0.05  0.23  0.11  0.00  0.22  0.11
LoT            0.01  0.02  0.00  0.01  0.05  0.00  0.02
AVG            0.02  0.03  0.12  0.06  0.08  0.12
Here for any formula ϕ(X), the expression case(ϕ(X), V1, V2) indicates the value V1 if ϕ(X) is true, and V2 otherwise. For example,

Σ X:S case(ϕ(X), 1, 0)

simply denotes the number of elements X in S for which ϕ(X) is true. As expressing counting and summation in a logical format in an elementary manner generally leads to rather complex formulae, this built-in construct is very convenient in use. For more details on TTL and the precise functioning of the checker tool, see [2,16].
4.2. Verification of bias-based trust models
This section describes the verification process for the bias-based trust models presented in Section 2. First, in Section 4.2.1 the properties that have been identified for bias-based trust models are introduced, and then in Section 4.2.2 the results of the checks are presented.

4.2.1. Properties for bias-based trust models
Four properties have been identified with respect to biased behaviour of human trust. The first property expresses the general principle of the bias, namely that once a person has a more positive bias towards a trustee, this trustee will more frequently be the most trusted trustee, as expressed in property P1 below. Note that in this property (and also in properties P2 and P3) it is assumed that the bias does not change during the simulation, and hence the value at the first time point is selected.
P1: General bias property. If within two traces with the same experience sequence an agent has a more positive bias towards a trustee in one trace compared to the other, and the agent has the same biases for the other trustees, then this trustee will more frequently be the trustee with the highest trust value in the trace with the higher bias compared to the trace with the lower bias. For example, this then results in this trustee being selected more frequently.

The formalization of the property is shown below. First, it is checked whether the traces that are being compared contain the same experience sequence. Furthermore, it is checked whether the biases for the trustee tr1 considered are different (and, in fact, higher in the first trace). Note that this comparison is done at time point 0, as it is assumed that the bias does not change over time in a single run. Furthermore, it is checked whether for each other trustee there exists a single bias value the agent has in both traces; then one sums the cases where trustee tr1 is the trustee with the highest trust value, and this amount should be higher in the first trace compared to the second.

P1 ≡ ∀γ1, γ2:TRACE, tr1:TRUSTEE, b1, b2:REAL
[ [ same_experience_sequence(γ1, γ2) &
state(γ1, 0) |= bias_for_trustee(tr1, b1) &
state(γ2, 0) |= bias_for_trustee(tr1, b2) & b1 > b2 &
∀tr2:TRUSTEE ≠ tr1 ∃b3:REAL
[ state(γ1, 0) |= bias_for_trustee(tr2, b3) &
state(γ2, 0) |= bias_for_trustee(tr2, b3) ] ]
⇒ [ Σ t:TIME case(highest_trust_value(γ1, t, tr1), 1, 0) ≥
Σ t:TIME case(highest_trust_value(γ2, t, tr1), 1, 0) ] ]
Here same_experience_sequence is simply a property expressing that the experience values in both traces should be the same:

same_experience_sequence(γ1:TRACE, γ2:TRACE) ≡
∀t:TIME, tr:TRUSTEE, v:REAL
[ state(γ1, t) |= objective_experience_value(tr, v) ⇔
state(γ2, t) |= objective_experience_value(tr, v) ]

In the formalisation of the predicate indicating the highest trust value, which is used in P1, the trust value for the trustee considered is bound by the existential quantifier. For this value it is then checked that for all other trustees and trust values encountered, no higher value than the value for trustee tr1 is encountered.

highest_trust_value(γ:TRACE, t:TIME, tr1:TRUSTEE) ≡
∃v1:REAL
[ state(γ, t) |= trust_value(tr1, v1) &
∀tr2:TRUSTEE ≠ tr1, ∀v2:REAL
[ state(γ, t) |= trust_value(tr2, v2) ⇒ v2 < v1 ] ]
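An executable analogue of this check over logged traces might look as follows (the trace encoding, a list of dicts mapping trustee names to trust values per time point, is an assumption for illustration):

```python
def highest_trust_value(state, tr1):
    # True if trustee tr1 has the strictly highest trust value
    # in this state (one time point of a trace).
    v1 = state[tr1]
    return all(v < v1 for tr, v in state.items() if tr != tr1)

def p1_holds(trace_high_bias, trace_low_bias, tr1):
    # P1 (sketch): with the higher bias, tr1 is the most trusted
    # trustee at least as often as with the lower bias.
    count_high = sum(1 for s in trace_high_bias if highest_trust_value(s, tr1))
    count_low = sum(1 for s in trace_low_bias if highest_trust_value(s, tr1))
    return count_high >= count_low
```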
The second property expresses that the trust level itself will be higher in the case of a more positive bias.

P2: Trust comparison. Trustees for which an agent has a more positive bias have a higher trust value compared to a trace in which the agent has a lower bias with respect to the trustee (given that the experiences are equal, as are the biases for the other trustees).

The formalization of this property is very similar to P1, except that now a comparison is made between the trust values themselves.

P2 ≡ ∀γ1, γ2:TRACE, tr1:TRUSTEE, b1, b2:REAL
[ [ same_experience_sequence(γ1, γ2) &
state(γ1, 0) |= bias_for_trustee(tr1, b1) &
state(γ2, 0) |= bias_for_trustee(tr1, b2) & b1 > b2 &
∀tr2:TRUSTEE ≠ tr1 ∃b3:REAL
[ state(γ1, 0) |= bias_for_trustee(tr2, b3) &
state(γ2, 0) |= bias_for_trustee(tr2, b3) ] ]
⇒ ∀t:TIME, tv1, tv2:REAL
[ [ state(γ1, t) |= trust_value(tr1, tv1) &
state(γ2, t) |= trust_value(tr1, tv2) ] ⇒ tv1 ≥ tv2 ] ]
In order to facilitate the addition of a bias to existing models, a translation scheme has been proposed that translates objective experiences into subjective experiences (i.e., experiences coloured by the bias). In case of a more positive bias, the biased experiences will be at least as high.

P3: Experience comparison. The objective experience provided by a trustee is translated into a higher subjective experience for trustees for which the agent has a higher bias (given the same experience sequence).

The formalization of this property takes the first part, which is by now well-known from P1 and P2, as an antecedent, and checks whether the subjective experiences are indeed at least as high in the trace in which the higher bias is encountered.

P3 ≡ ∀γ1, γ2:TRACE, tr:TRUSTEE, b1, b2:REAL
[ [ same_experience_sequence(γ1, γ2) &
state(γ1, 0) |= bias_for_trustee(tr, b1) &
state(γ2, 0) |= bias_for_trustee(tr, b2) & b1 > b2 ]
⇒ ∀t:TIME, ev1, ev2:REAL
[ [ state(γ1, t) |= subjective_experience_value(tr, ev1) &
state(γ2, t) |= subjective_experience_value(tr, ev2) ]
⇒ ev1 ≥ ev2 ] ]
Finally, in some of the bias models, trust is explicitly considered to colour the experiences. In case the trust level is higher, the same objective experience gets an even more positive value.

P4: Influence of trust upon experience. If the trust level for a certain trustee at time point t is higher than the trust level at another time point t', whereas the objective experience is equal and not on the boundary of the scale (i.e., 0 or 1), then the subjective experience will be higher at time point t.

The formalization of this property is a bit more complicated. First, the property binds the trust value at a time point t for a certain trustee, as well as the objective experience. Hereby, a check is performed to make sure the objective experience is neither 0 nor 1, as this would sometimes make it impossible to have a higher subjective value. Given that this is the case, and given that the objective experience is the same at another time point t' at which the trust value is lower compared to the trust value at time t, the subjective value at time t must be higher.

P4 ≡ ∀γ:TRACE, t, t':TIME, tr:TRUSTEE,
tv1, tv2, ov, sv1, sv2:REAL
[ [ state(γ, t) |= trust_value(tr, tv1) &
state(γ, t) |= objective_experience_value(tr, ov) &
ov > 0 & ov < 1 &
state(γ, t) |= subjective_experience_value(tr, sv1) &
state(γ, t') |= trust_value(tr, tv2) & tv1 > tv2 &
state(γ, t') |= objective_experience_value(tr, ov) &
state(γ, t') |= subjective_experience_value(tr, sv2) ]
⇒ sv1 > sv2 ]
4.2.2. Verification results for bias-based trust models
Based upon the traces resulting from simulations
of the trust models so-called traces have been gener-
ated. These traces are essentially logs of the simula-
tions that indicate for each time point what states
hold. These traces are loaded into the TTL Checker
software which then expresses whether a property
(i.e., P1–P4) holds for the trace (or a combination of
traces) or not. The results of the verification are
shown in Table 3. It can be seen that property P1 is
satisfied for all bias models presented in this paper.
When looking at the properties P2 and P3 however,
the properties also hold for the various models that
have been identified. Finally, property P4 is only
satisfied for the models where trust is considered
when forming the subjective experience, which
makes sense as this property precisely describes this
influence. Properties P3 and P4 are actually not
relevant for models LoET and LoT as they do not
incorporate the notion of subjective experience; for
these models the properties are trivially satisfied (due
to the fact that the antecedent of the implication never
holds).
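The kind of check the TTL Checker performs can be sketched in a few lines, assuming a trace is simply a list of per-trustee (trust, objective, subjective) records; the data layout, names and values below are this sketch's own assumptions, not the actual TTL software:

```python
# Sketch: checking a P4-style property over a logged trace. A trace is a list
# of states; each state maps a trustee to (trust, objective_exp, subjective_exp).

def holds_p4(trace):
    for s1 in trace:            # state at time t
        for s2 in trace:        # state at time t'
            for tr in s1:
                tv1, ov1, sv1 = s1[tr]
                tv2, ov2, sv2 = s2[tr]
                # antecedent: equal objective experience, not on the boundary,
                # and a higher trust level at t than at t'
                if ov1 == ov2 and 0 < ov1 < 1 and tv1 > tv2:
                    if not sv1 > sv2:       # the consequent must then hold
                        return False
    return True

trace_ok = [{"A": (0.8, 0.5, 0.62)}, {"A": (0.4, 0.5, 0.55)}]
trace_bad = [{"A": (0.8, 0.5, 0.50)}, {"A": (0.4, 0.5, 0.55)}]
print(holds_p4(trace_ok), holds_p4(trace_bad))   # True False
```

A property holds for a trace exactly when no pair of states falsifies the implication, which mirrors the vacuous satisfaction noted above: if the antecedent never holds, the check trivially returns True.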
5. Mathematical analysis of bias-based trust
models
The models explored in this paper are adaptive
with respect to the experiences of the agent. This
means, for example, that in a time period with
very positive experiences trust will reach higher
levels, and in periods with less positive experiences
trust levels will go down. For very long periods of
experiences of the same level, the trust level will
reach some stable level, which is an equilibrium for
the model for the given experience level. It provides
deeper insight into the model to know what the
value of such an equilibrium is for a given experience
level: the model will drive the trust level in the
direction of that value. Moreover, the speed with
which such a convergence process takes place is also
useful information about a model. For these types of
analyses the techniques used in the previous section
are not practical, but mathematical techniques
are available that serve this purpose well.
The properties addressed here by such mathemati-
cal techniques focus, for a given point in time t, in
particular on criteria that determine whether due to a
given experience the trust level will increase, de-
crease or be in equilibrium. Moreover, for the
equilibria of the models, the behaviour near such
equilibria is addressed: whether they are attracting or
not, and how fast the convergence takes place. These
properties are much more specific and limited com-
pared to the wider types of properties addressed in
Section 4, but the mathematical methods allow for
more in-depth results.
First the general case is addressed; Table 4
summarises the results for the general case. Next,
the analysis is made more specific for the case of
linear functions; at the end of the section Table 5
presents an overview of the results for these specific
linear functions. Note that the analysis is done for
any given time point t, which is sometimes indicated
as an argument, but will sometimes be left out to
make the expressions more transparent.
5.1. Mathematical analysis of trust models with
biased experience
Recall that for the models that express the bias
solely based upon the experience, the following dif-
ference equation is used:

T(t + Δt) = T(t) + γ (f(β, E(t)) − T(t)) Δt

where it is assumed that γ > 0. Note that from the
equation above it immediately follows:

dT(t)/dt = γ (f(β, E(t)) − T(t))
So, in this case the following criteria can be obtained
for trust models with biased experiences:
Equilibrium, increasing and decreasing: trust models
with biased experiences
(a) T is in equilibrium for a given E if and only if
T = f(β, E),
(b) T is increasing if and only if T < f(β, E),
(c) T is decreasing if and only if T > f(β, E).
For example, (b) shows a criterion for an experi-
ence to let the trust level increase. If the trust already
has some level T, it can only increase when an ex-
perience with level E satisfying f(β, E) > T
occurs; otherwise trust will decrease or stay the same.
Another way to use this is to determine directly to
which equilibrium trust can go if a given experience
level E is constantly offered; according to criterion
(a) this equilibrium level for trust is f(β, E). Further-
more, from the monotonicity criteria above it can be
derived in the following manner that the equilibrium
is always attracting. Suppose Teq = f(β, E) is an
equilibrium for E, and T < Teq; this implies

T < f(β, E)

and therefore T is increasing for the given E by
criterion (b) above. Similarly, when T > Teq for the
given E it is found that T is decreasing by crite-
rion (c). This proves that the process will always
converge to the equilibrium, independent of the func-
tion f. This will also be confirmed by the analysis of
the behaviour around the equilibrium below.
Determining the behaviour around an equilibrium
Independent of the precise form of the function f
(and hence also independent of the bias parameter β),
the behaviour around an equilibrium for a given
constant experience E can be found here as follows.
Write T(t) = Teq + δ(t), with δ(t) the deviation of
T from the equilibrium Teq, for which it holds
Teq = f(β, E).
Table 3
Result of verification
LiE LiET LiT LoE LoET LoT
P1 satisfied satisfied satisfied satisfied satisfied satisfied
P2 satisfied satisfied satisfied satisfied satisfied satisfied
P3 satisfied satisfied satisfied satisfied satisfied satisfied
P4 failed satisfied failed satisfied satisfied satisfied
Then the following is obtained:

dδ(t)/dt = dT(t)/dt = γ (f(β, E) − T(t))
= γ (f(β, E) − Teq − δ(t))
= −γ δ(t)
As a differential equation this can be solved analyti-
cally using an exponential function:

δ(t) = δ(0) e^(−γt)

This shows that the speed of convergence directly
relates to the parameter γ, and the convergence rate,
defined as the reduction factor of the deviation per
time unit, is

e^(−γ)
This is independent of β and of the function f. More
specifically, since γ > 0, the convergence rate is al-
ways < 1; from this it follows that the equilibrium is
always attracting.
This shows that the speed by which trust adapts to
a certain experience level is independent of the spe-
cific function f and bias parameter β; it is higher
when γ is higher and lower when γ is lower.
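The convergence behaviour derived above can be illustrated with a small numerical sketch. The Euler integration, parameter values and the particular linear bias function below are assumptions of this sketch, not taken from the paper's experiments:

```python
# Numerical check (sketch): with trust driven by a biased experience,
# dT/dt = gamma (f(beta, E) - T), trust converges to T_eq = f(beta, E).
# The linear bias function below is an illustrative assumption.

def f(beta, E):
    # a positive bias (beta >= 0.5) pushes the experience toward 1
    if beta >= 0.5:
        return 1.0 - 2.0 * (1.0 - beta) * (1.0 - E)
    return 2.0 * beta * E

def simulate(T0, beta, E, gamma=0.5, dt=0.01, steps=5000):
    T = T0
    for _ in range(steps):
        T += gamma * (f(beta, E) - T) * dt  # Euler step
    return T

T_end = simulate(T0=0.2, beta=0.75, E=0.6)
T_eq = f(0.75, 0.6)                         # predicted equilibrium (= 0.8)
print(round(T_end, 6), T_eq)
```

Halving γ slows the convergence, while changing β only moves the equilibrium value, exactly as the analysis predicts: the per-time-unit reduction factor e^(−γ) does not involve β or the form of f.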
5.2. Mathematical analysis of trust models with
biased experience also affected by trust
For the models that express the bias based both
upon the experience and the current trust level, the
following difference equation was used:

T(t + Δt) = T(t) + γ (f(β, E(t), T(t)) − T(t)) Δt
with γ > 0. In a similar manner as above the follow-
ing criteria are obtained:
Equilibrium, increasing and decreasing: Biased ex-
perience also affected by trust
(a) T is in equilibrium for a given E if and only if
f(β, E, T) − T = 0,
(b) T is increasing if and only if f(β, E, T) − T > 0,
(c) T is decreasing if and only if f(β, E, T) − T < 0.
This again shows a criterion, for example, for an
experience to let the trust level increase. If the trust
already has some level T, it can only increase when
an experience with level E at time t satisfying
f(β, E, T) − T > 0 is obtained; otherwise trust will
decrease or stay the same.
Furthermore, some criterion on the function f can
be found in order that the equilibrium Teq for E is
attracting. Attracting means that if T is close to Teq
with T < Teq, then for the given E it should be the
case that T increases, which according to the above is
equivalent with f(β, E, T) − T > 0. So, starting from
T = Teq with f(β, E, Teq) − Teq = 0, when T is taken
lower, the value of f(β, E, T) − T has to become
higher. This is equivalent with the criterion that in
(E, Teq) the function f(β, E, T) − T is decreasing in its
second argument:

∂(f(β, E, T) − T)/∂T (E, Teq) < 0

Below this will be confirmed from the analysis of the
behaviour around an equilibrium. This shows that not
all functions f will provide the property that the trust
levels converge to such an equilibrium value. For a
choice to be made for some function f this has to be
considered. Below it will be shown that for the
choices made in the current paper this criterion is
always fulfilled.
Determining the behaviour around an equilibrium
Depending on the form of the function f and also on
the bias parameter β, the behaviour around an equi-
librium for a given constant experience E can be
found as follows. Write T(t) = Teq + δ(t), with δ(t)
the deviation from the equilibrium Teq for which it
holds f(β, E, Teq) − Teq = 0. For f the first-order Taylor
approximation around Teq in its second argument is
used, where ∂f/∂T denotes the partial derivative of f
with respect to its second argument T:

f(β, E, T) ≈ f(β, E, Teq) + ∂f/∂T(E, Teq) (T − Teq)

Using this it holds

f(β, E, Teq + δ(t)) − (Teq + δ(t))
≈ f(β, E, Teq) − Teq + ∂f/∂T(E, Teq) δ(t) − δ(t)
= −(1 − ∂f/∂T(E, Teq)) δ(t)
Then the following is obtained:

dδ(t)/dt = dT(t)/dt = γ (f(β, E, Teq + δ(t)) − (Teq + δ(t)))
= −γ (1 − ∂f/∂T(E, Teq)) δ(t)

As a differential equation this can be solved analyti-
cally using an exponential function:

δ(t) = δ(0) e^(−γ (1 − ∂f/∂T(E, Teq)) t)
The convergence rate is defined as the reduction
factor of the deviation per time unit; this is

e^(−γ (1 − ∂f/∂T(E, Teq)))

This provides a condition on f for when an equilibrium
is attracting, namely ∂(f(β, E, T) − T)/∂T (E, Teq) < 0.
Note that in this case the convergence speed does not
only depend on γ but also on f, which in principle
relates to the bias β. This speed is higher when γ is
higher, but also when ∂f/∂T(E, Teq) is more negative.
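As a sketch, this predicted convergence rate can be checked numerically for one simple choice of f. The function and all parameter values below are illustrative assumptions of the sketch:

```python
import math

# Sketch: for dT/dt = gamma (f(E, T) - T), the deviation from an equilibrium
# shrinks per time unit by about e^(-gamma (1 - df/dT)). The function f and
# the values below are illustrative assumptions.

beta, gamma, E = 0.7, 0.8, 0.5

def f(E, T):
    return E + (2 * beta - 1) * T * (1 - E)   # df/dT = (2 beta - 1)(1 - E)

dfdT = (2 * beta - 1) * (1 - E)
T_eq = E / (2 * (1 - beta) + (2 * beta - 1) * E)  # solves f(E, T) = T

def advance(T, t_total=1.0, dt=1e-4):
    for _ in range(int(round(t_total / dt))):     # small Euler steps
        T += gamma * (f(E, T) - T) * dt
    return T

d0 = 0.05                                  # initial deviation
d1 = advance(T_eq + d0) - T_eq             # deviation after one time unit
predicted = math.exp(-gamma * (1 - dfdT))  # predicted reduction factor
print(round(d1 / d0, 4), round(predicted, 4))
```

Because df/dT < 1 here, the attracting criterion is met and the measured reduction factor agrees with the analytical one up to the Euler discretisation error.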
5.3. Mathematical analysis of trust models with bias
solely determined by current trust
For the models that express the bias based only
upon the current trust level, the following difference
equation was used:

T(t + Δt) = T(t) + γ (E(t) − f(β, T(t))) Δt

where γ > 0. Similarly the following criteria are
found:
Equilibrium, increasing and decreasing: Bias solely
determined by current trust
(a) T is in equilibrium for a given E if and only if
E = f(β, T),
(b) T is increasing if and only if E > f(β, T),
(c) T is decreasing if and only if E < f(β, T).
Like before, this shows a criterion, for example, for
an experience to let the trust level increase. If the
trust already has some level T, it can only increase
when an experience with level E satisfying
E > f(β, T) is obtained; otherwise trust will decrease
or stay the same. Moreover, a criterion on the func-
tion f can be found in order that the equilibrium Teq
for E is attracting. As before, note that attracting
means that if T is close to Teq with T < Teq, then for the
given E it should be the case that T increases, which
according to criterion (b) above is equivalent with
E > f(β, T). So, starting from T = Teq with
f(β, Teq) = E, when T is taken lower, the value of f(β, T)
has to become lower:

f(β, T) < f(β, Teq) for T < Teq close to Teq

This means that in Teq the function f has to be in-
creasing: ∂f/∂T(Teq) > 0. Below, this criterion for
being attracting will be confirmed when the behav-
iour around an equilibrium is analysed. This shows
again that not all functions f will provide the prop-
erty that the trust levels converge to an equilibrium
value. For a choice to be made for some function f
this criterion ∂f/∂T(Teq) > 0 has to be taken into
account. Below it will be shown that for the choices
made in the current paper this criterion is always
fulfilled.
Determining the behaviour around an equilibrium
Again, depending on the form of the function f and
also on the bias parameter β, the behaviour around an
equilibrium for a given constant experience E can be
found as follows. Write T(t) = Teq + δ(t), with δ(t)
the deviation from the equilibrium Teq for which it
holds f(β, Teq) = E. For f the first-order Taylor ap-
proximation around Teq is used:

f(β, T) ≈ f(β, Teq) + ∂f/∂T(Teq) (T − Teq)

Using this it is obtained:

dδ(t)/dt = dT(t)/dt = γ (E − f(β, Teq + δ(t)))
≈ γ (E − f(β, Teq) − ∂f/∂T(Teq) δ(t))
= −γ ∂f/∂T(Teq) δ(t)
As a differential equation this can be solved analyti-
cally using an exponential function:

δ(t) = δ(0) e^(−γ ∂f/∂T(Teq) t)

This shows that the speed of convergence does not
only relate to the parameter γ, but also to ∂f/∂T(Teq),
which in principle relates to the bias β. The conver-
gence rate defined as reduction factor of the devia-
tion per time unit is

e^(−γ ∂f/∂T(Teq))

So, also in this case the convergence speed does not
only depend on γ but also on f, which in principle
relates to the bias β. This speed is higher when γ is
higher, but also when ∂f/∂T(Teq) is higher.
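A numerical sketch of this third model type follows. The increasing function f below is an assumption of the sketch (any increasing f behaves qualitatively the same, by the criterion just derived):

```python
# Sketch: third model type, dT/dt = gamma (E - f(beta, T)), where f gives the
# biased perception of the own trust level. The f used here is an assumed
# increasing function, so by the criterion above the equilibrium is attracting.

beta, gamma, E = 0.75, 1.0, 0.5

def f(beta, T):
    return T + (2 * beta - 1) * T * (1 - T) / (2 * beta)

T, dt = 0.9, 1e-3
for _ in range(20000):                 # integrate up to t = 20
    T += gamma * (E - f(beta, T)) * dt

print(round(T, 4), round(E - f(beta, T), 8))
```

At the end of the run the residual E − f(β, T) is essentially zero, so the trajectory has settled at the equilibrium solving E = f(β, Teq), confirming the attraction argument.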
5.4. Mathematical analysis of the example biased
trust models for the three types
In this section, for each of the three general types
of biased trust models analysed above, it will be
investigated how the criteria can be formulated more
specifically for the linear functions used in the current
paper as instances for the function f: LiE, LiET,
and LiT.
5.4.1. More specific analysis for the linear case of
bias only depending on experience (LiE)
For the first case the following linear function was
addressed (LiE):

f(β, E) = 1 − 2 (1 − β) (1 − E)  when β ≥ 0.5
f(β, E) = 2 β E                  when β ≤ 0.5
Case β ≥ 0.5
Criterion for increasing for LiE with β ≥ 0.5

T < f(β, E)
T < 1 − 2 (1 − β) (1 − E)
2 (1 − β) (1 − E) < 1 − T
1 − E < ½ (1 − T)/(1 − β)
E > 1 − ½ (1 − T)/(1 − β)
Criterion for decreasing for LiE with β ≥ 0.5
E < 1 − ½ (1 − T)/(1 − β)
Criterion for equilibrium for LiE with β ≥ 0.5
E = 1 − ½ (1 − T)/(1 − β)
= (2 (1 − β) − (1 − T))/(2 (1 − β))
= (T − (2β − 1))/(2 (1 − β))
Note that for β = 0.5 (no bias) the criterion for an
equilibrium is E = T, which is to be expected. For β =
0.75, the criterion is

E = (T − 0.5)/0.5 = 2T − 1

Note that for lower values of T this can provide a
negative number. However, as the experience cannot
be lower than 0, this implies that for such values of T
no equilibrium occurs. For β = 0.875, the criterion is

E = (T − 0.75)/0.25 = 4T − 3

For β approaching 1, the criterion always becomes a
negative number (implying increase), unless T = 1;
this implies that for this value of β no equilibrium
occurs except for T = 1 and any value for E.
Behaviour around the equilibrium for LiE with
β ≥ 0.5 For this case the behaviour around the equi-
librium does not depend on the specific form of the
function f. The convergence rate is e^(−γ), which
is independent, for example, of β. As γ > 0, the equi-
librium is always attracting.
Case β ≤ 0.5
Criterion for increasing for LiE with β ≤ 0.5
T < 2 β E
E > ½ T/β
Criterion for decreasing for LiE with β ≤ 0.5
E < ½ T/β
Criterion for equilibrium for LiE with β ≤ 0.5
E = ½ T/β
T = 2 β E
Behaviour around the equilibrium for LiE with
β ≤ 0.5 For this case the behaviour around the equi-
librium does not depend on the specific form of the
function f. The convergence rate is e^(−γ), which
is independent, for example, of β or E. As γ > 0, the
equilibrium is always attracting.
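These closed-form equilibria can be verified pointwise in a short sketch (the function name is this sketch's own, and the LiE form used is the one given above):

```python
# Sketch: for LiE the equilibrium is simply T_eq = f(beta, E), so the closed
# forms derived above can be checked at a few sample points.

def f_lie(beta, E):
    if beta >= 0.5:
        return 1.0 - 2.0 * (1.0 - beta) * (1.0 - E)
    return 2.0 * beta * E

# no bias: the equilibrium criterion reduces to E = T
assert abs(f_lie(0.5, 0.3) - 0.3) < 1e-12
# beta = 0.75: criterion E = 2 T - 1, i.e. T_eq = (E + 1) / 2
assert abs(f_lie(0.75, 0.6) - (0.6 + 1) / 2) < 1e-12
# beta = 0.25 (bias toward negative experiences): T_eq = 2 beta E
assert abs(f_lie(0.25, 0.6) - 0.3) < 1e-12
print("LiE equilibria check out")
```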
5.4.2. More specific analysis for the linear case of
bias depending on both experience and trust
(LiET)
For the second case the following linear function
was addressed (LiET):

f(β, E, T) = E + (2β − 1) T (1 − E)

For the linear example the inequalities and equa-
tion can be explicitly solved as follows.
Table 4
Results of the mathematical analysis for the general case

Bias depends only on experience:
increasing iff T < f(β, E), decreasing iff T > f(β, E);
equilibrium value Teq = f(β, E);
convergence rate e^(−γ); attracting: always.

Bias depends on experience and trust:
increasing iff f(β, E, T) − T > 0, decreasing iff f(β, E, T) − T < 0;
equilibrium value: the solution Teq of f(β, E, Teq) − Teq = 0;
convergence rate e^(−γ (1 − ∂f/∂T(E, Teq)));
attracting iff ∂(f(β, E, T) − T)/∂T (E, Teq) < 0.

Bias depends only on trust:
increasing iff E > f(β, T), decreasing iff E < f(β, T);
equilibrium value: the solution Teq of f(β, Teq) = E;
convergence rate e^(−γ ∂f/∂T(Teq));
attracting iff ∂f/∂T(Teq) > 0.
Criterion for increasing for LiET

T < f(β, E, T)
T < E + (2β − 1) T (1 − E)
T − (2β − 1) T + (2β − 1) T E < E
2 (1 − β) T < E (1 − (2β − 1) T)
E > 2 (1 − β) T/(1 − (2β − 1) T)

Criterion for decreasing for LiET
E < 2 (1 − β) T/(1 − (2β − 1) T)

Criterion for equilibrium for LiET
E = 2 (1 − β) T/(1 − (2β − 1) T)
E (1 − (2β − 1) T) = 2 (1 − β) T
E = (2 (1 − β) + (2β − 1) E) T
T = E/(2 (1 − β) + (2β − 1) E)
Behaviour around the equilibrium for LiET For the
specific linear function used above, it holds:

∂f/∂T = (2β − 1) (1 − E)

Using this, for the linear case it is obtained:

dδ(t)/dt = −γ (1 − (2β − 1) (1 − E)) δ(t)

and the convergence rate is e^(−γ (1 − (2β − 1) (1 − E))). This
shows that for this case the speed of convergence not
only relates to the parameter γ, but also to β and E. More
specifically, the convergence rate is < 1 if and only if

(2β − 1) (1 − E) < 1

This is a condition for an equilibrium to be attracting.
It can be rewritten into an explicit criterion for E as
follows:

(2β − 1) (1 − E) < 1
(2β − 1) − 1 < (2β − 1) E
E > (2β − 2)/(2β − 1)  for β > 0.5

Since the right hand side is negative for β < 1, and
since for β ≤ 0.5 the left hand side of the condition is
at most 0, this is always the case.
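The derived equilibrium expression can be checked directly in a few lines (a sketch; the names are the sketch's own, and f is the linear form f(E, T) = E + (2b − 1)T(1 − E) used above):

```python
# Sketch: verify that T_eq = E / (2(1-b) + (2b-1)E) satisfies the fixed-point
# condition f(E, T_eq) = T_eq for f(E, T) = E + (2b-1) T (1-E).

def f_liet(E, T, b):
    return E + (2 * b - 1) * T * (1 - E)

for b in (0.25, 0.5, 0.9):
    for E in (0.1, 0.5, 0.8):
        T_eq = E / (2 * (1 - b) + (2 * b - 1) * E)
        assert abs(f_liet(E, T_eq, b) - T_eq) < 1e-12
        assert 0.0 <= T_eq <= 1.0    # the equilibrium stays on the trust scale
print("LiET equilibrium formula checked")
```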
5.4.3. More specific analysis for the case of bias
depending only on trust (LiT)
For the third case the following function was ad-
dressed (LiT):

f(β, T) = T + (2β − 1) T (1 − T)/(2β)        when β ≥ 0.5
f(β, T) = T − (1 − 2β) T (1 − T)/(2 (1 − β))  when β ≤ 0.5
This can be analysed more specifically as follows.
Case β ≥ 0.5
Criterion for increasing for LiT with β ≥ 0.5
E > T + (2β − 1) T (1 − T)/(2β)
Criterion for decreasing for LiT with β ≥ 0.5
E < T + (2β − 1) T (1 − T)/(2β)
Criterion for equilibrium for LiT with β ≥ 0.5
E = T + (2β − 1) T (1 − T)/(2β)
2 β E = 2 β T + (2β − 1) T (1 − T)
2 β E = (4β − 1) T − (2β − 1) T²
(2β − 1) T² − (4β − 1) T + 2 β E = 0
For the special case that β = 0.5 (no bias) this latter
criterion reduces to a linear equation −T + E = 0
with solution T = E. For the general case β > 0.5 the
above expression is a quadratic equation in T with
discriminant

D = (4β − 1)² − 8 β (2β − 1) E
= 16β² − 8β + 1 − 16β²E + 8βE
= 1 + 8 β (2β − 1) (1 − E)

From this expression for D, which is linear in E,
given that β ≥ 0.5 it can easily be seen that D is
always ≥ 1:

for β = 0.5 it holds D = 4 − 4 + 1 − 4E + 4E = 1,
for β = 1 it holds D = 16 − 8 + 1 − 16E + 8E
= 9 − 8E ≥ 1 since E ≤ 1.

Alternatively, considering special values of E:

for E = 1 it holds D = 1,
for E = 0 it holds D = (4β − 1)² ≥ 1
since β ≥ 0.5.
Therefore D is positive and the quadratic equation
has two solutions for T:

T1,2 = ((4β − 1) ± √D)/(2 (2β − 1))

Since √D ≥ 1, for the highest solution it holds

T ≥ ((4β − 1) + 1)/(2 (2β − 1))
= 4β/(2 (2β − 1))
= 2β/(2β − 1)
> 1

Similarly, from √D ≥ 1 it follows that for the lowest
solution (for the −) it holds

T ≤ ((4β − 1) − 1)/(2 (2β − 1)) = (4β − 2)/(2 (2β − 1)) = 1

Therefore the equilibrium Teq for a given E is the
lowest solution

Teq = ((4β − 1) − √D)/(2 (2β − 1))

This is a positive number since √D ≤ (4β − 1), as can
be seen from the initial expression

D = (4β − 1)² − 8 β (2β − 1) E
Behaviour around the equilibrium for LiT with β ≥
0.5 It holds

∂f/∂T = (4β − 1 − 2 (2β − 1) T)/(2β)

Therefore for this case the convergence rate is

e^(−γ (4β − 1 − 2 (2β − 1) Teq)/(2β))

This depends both on γ and β, and via Teq also on E.
The criterion for the equilibrium being attracting is
that ∂f/∂T(Teq) > 0. This is equivalent to:

Teq < (4β − 1)/(2 (2β − 1))

As Teq ≤ 1 and the right hand side is > 1 for β ≥ 0.5,
this is always the case.
Case β ≤ 0.5
Criterion for increasing for LiT with β ≤ 0.5
E > T − (1 − 2β) T (1 − T)/(2 (1 − β))
= T (1 + (1 − 2β) T)/(2 (1 − β))
Criterion for decreasing for LiT with β ≤ 0.5
E < T (1 + (1 − 2β) T)/(2 (1 − β))
Criterion for equilibrium for LiT with β ≤ 0.5
E = T (1 + (1 − 2β) T)/(2 (1 − β))
2 (1 − β) E = T + (1 − 2β) T²
(1 − 2β) T² + T − 2 (1 − β) E = 0

This is a quadratic equation in T with discriminant

D = 1 + 8 (1 − 2β) (1 − β) E

Then

T1,2 = (−1 ± √D)/(2 (1 − 2β))

As E ≥ 0 it holds D ≥ 1, so solutions always exist,
and the solution for the − is negative. Moreover, the
solution for the + is ≤ 1; this is equivalent to:

√D ≤ 1 + 2 (1 − 2β) = 3 − 4β
D ≤ (3 − 4β)²
1 + 8 (1 − 2β) (1 − β) E ≤ 1 + 8 (1 − 2β) (1 − β)
E ≤ 1

As E ≤ 1, this is always fulfilled. Therefore the
equilibrium value Teq is the solution for the +

Teq = (√D − 1)/(2 (1 − 2β))

As above it can be seen that this is a nonnegative
number.
Behaviour around the equilibrium for LiT with β ≤
0.5 It holds

∂f/∂T = (1 + 2 (1 − 2β) T)/(2 (1 − β))

Therefore for this case the convergence rate is

e^(−γ (1 + 2 (1 − 2β) Teq)/(2 (1 − β)))

This depends both on γ and β, and via Teq also on E.
The criterion for the equilibrium being attracting is
that ∂f/∂T(Teq) > 0. This is equivalent to:

1 + 2 (1 − 2β) Teq > 0

As β ≤ 0.5, this is always the case.
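The quadratic equilibrium for the trust-only bias can be validated numerically (a sketch; it assumes the form f(β, T) = T + (2β − 1)T(1 − T)/(2β) for β ≥ 0.5 as given above, and the names are the sketch's own):

```python
import math

# Sketch: check that, for beta >= 0.5, the lowest root of
# (2b-1) T^2 - (4b-1) T + 2 b E = 0 lies on the trust scale and satisfies
# the equilibrium criterion E = f(beta, T).

def f_lit(b, T):
    return T + (2 * b - 1) * T * (1 - T) / (2 * b)

def T_eq(b, E):
    D = (4 * b - 1) ** 2 - 8 * b * (2 * b - 1) * E
    return ((4 * b - 1) - math.sqrt(D)) / (2 * (2 * b - 1))

for b in (0.6, 0.75, 0.9):
    for E in (0.2, 0.5, 0.9):
        T = T_eq(b, E)
        assert 0.0 <= T <= 1.0              # the chosen root stays in [0, 1]
        assert abs(f_lit(b, T) - E) < 1e-9  # and satisfies E = f(beta, T)
print("LiT (beta >= 0.5) equilibrium checked")
```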
6. Human-based trust experiment
In this section the human-based trust experiment is
explained. In Section 6.1 the participants are de-
scribed. In Section 6.2 an overview of the used ex-
perimental environment is given. Thereafter, the
procedure of the experiment and data collection is
explained in Section 6.3.
6.1. Participants
Eighteen participants (eight male and ten female)
with an average age of 23 (SD = 3.8) participated in
the experiment as paid volunteers. Only participants
without colour blindness were selected. All were
experienced computer users, with an average of 16.2
hours of computer usage each week (SD = 9.32).
6.2. Task
As the bias-based trust models are designed to
work in situations in which humans have to decide to
trust either one of multiple heterogeneous trustees,
the experimental task used involved three different
trustees, namely two human participants and a sup-
port system. The task was a classification task in
which the two participants on two separate personal
computers had to classify geographical areas accord-
ing to specific criteria as areas that either needed to
be attacked, helped or left alone by ground troops.
The participants needed to base their classification on
real-time computer generated video images that re-
sembled video footage of real unmanned aerial vehi-
cles (UAVs). On the camera images, multiple objects
were shown. There were four kinds of objects: civil-
ians, rebels, tanks and cars. The identification of the
number of each of these object types was needed to
perform the classification. Each object type had a
score (–2, –1, 0, 1 or 2) and the total score within an
area had to be determined. Based on this total score
the participants could classify a geographical area
(i.e., attack when above 2, help when below –2 or do
nothing when in between).
Participants had to classify two areas at the same
time and in total 98 areas had to be classified. Both
participants did the same areas with the same UAV
video footage.
During the time a UAV flew over an area, three
phases occurred: The first phase was the advice
phase. In this phase both participants and a support-
ing software agent gave an advice about the proper
classification (attack, help, or do nothing). This
means that there were three advices at the end of this
phase. It was also possible for the participants to
refrain from giving an advice, but this hardly oc-
curred. The second phase was the reliance phase. In
this phase the advices of both the participants and
that of the supporting software agent were communi-
cated to each participant. Based on these advices the
participants had to indicate which advice, and there-
fore which of the three trustees (self, other or soft-
ware agent), they trusted the most. Participants were
instructed to maximize the number of correct classi-
fications at both phases (i.e., advice and reliance
phase). The third phase was the feedback phase, in
which the correct answer was given to both partici-
pants. Based on this feedback the participants could
update their internal trust models for each trustee
(self, other, software agent).
In Fig. 6 the interface of the task is shown. The
map is divided into 10 × 10 areas. These boxes are the
areas that were classified. The first UAV starts in the
top left corner and the second one at the middle left.
The UAVs fly a predefined route so participants do
not have to pay attention to navigation. The camera
footage of the upper UAV is positioned top right and
the other one bottom right.
The advice of the self, other and the software agent
was communicated via dedicated boxes below the
camera images. The advice to attack, help, or do
nothing was communicated by red, green and yellow,
respectively. On the overview screen on the left,
feedback was communicated by the appearance of a
green tick or a red cross. The reliance decision of the
participant is also shown on the overview screen
Table 5
Results of the mathematical analysis for the specific linear functions

LiE (bias only on experience), β ≥ 0.5:
increasing iff E > 1 − ½ (1 − T)/(1 − β), decreasing iff E < 1 − ½ (1 − T)/(1 − β);
equilibrium value Teq = 1 − 2 (1 − β) (1 − E);
convergence rate e^(−γ); attracting: always.

LiE (bias only on experience), β ≤ 0.5:
increasing iff E > ½ T/β, decreasing iff E < ½ T/β;
equilibrium value Teq = 2 β E;
convergence rate e^(−γ); attracting: always.

LiET (bias on experience and trust):
increasing iff E > 2 (1 − β) T/(1 − (2β − 1) T), decreasing iff E < 2 (1 − β) T/(1 − (2β − 1) T);
equilibrium value Teq = E/(2 (1 − β) + (2β − 1) E);
convergence rate e^(−γ (1 − (2β − 1) (1 − E))); attracting: always.

LiT (bias only on trust), β ≥ 0.5:
increasing iff E > T + (2β − 1) T (1 − T)/(2β), decreasing iff E < T + (2β − 1) T (1 − T)/(2β);
equilibrium value Teq = ((4β − 1) − √D)/(2 (2β − 1)) with D = (4β − 1)² − 8 β (2β − 1) E;
convergence rate e^(−γ (4β − 1 − 2 (2β − 1) Teq)/(2β)); attracting: always.

LiT (bias only on trust), β ≤ 0.5:
increasing iff E > T (1 + (1 − 2β) T)/(2 (1 − β)), decreasing iff E < T (1 + (1 − 2β) T)/(2 (1 − β));
equilibrium value Teq = (√D − 1)/(2 (1 − 2β)) with D = 1 + 8 (1 − 2β) (1 − β) E;
convergence rate e^(−γ (1 + 2 (1 − 2β) Teq)/(2 (1 − β))); attracting: always.
behind the feedback (feedback only shown in the
feedback phase). The phase depicted in Fig. 6 was
the reliance phase before the participant indicated his
reliance decision.
6.3. Data collection
During the above described experiment, input and
output were logged using a client-server application.
The interface of this application is shown in Fig. 7.
Two other client machines, that were responsible for
executing the task as described in the previous sub-
section, were able to connect via a local area network
to the server, which was responsible for logging all
data and communication between the clients. The
interface shown in Fig. 7 could be used to set the
client’s IP-addresses and ports, as well as several
experimental settings, such as how to log the data. In
total the experiment lasted approximately 15 minutes
per participant.
Experienced performance feedback of each trustee
and reliance decisions of each participant were
logged in temporal order for later analysis. During
the feedback phase the given feedback was translated
to a penalty of either 0, 0.5 or 1, representing a good,
Fig. 6. Interface of the task.
Fig. 7. Interface of the application used for gathering validation
data (Connect), for parameter adaptation (Tune) and validation of
the trust models (Validate).
neutral or poor experience of performance, respec-
tively. During the reliance phase the reliance deci-
sions were translated to either 0 or 1 for each trustee
Si, which represented that one relied or did not rely
on Si.
7. Validation of bias-based trust models
In this section the validation process of the trust
models described in Section 2 is presented. In Sec-
tion 7.1 the parameter adaptation technique is ex-
plained, and Sections 7.2 and 7.3 explain the model
validation process and results for the bias-based trust
models, respectively.
7.1. Parameter adaptation
The data collection described in Section 6.3 was
repeated twice on each group of two participants,
called condition 1 and condition 2, respectively. The
data from one of the conditions was used for parame-
ter adaptation purposes for each model, and the data
from the other condition for model validation (see
Section 6.3). This process of parameter adaptation
and validation was balanced over conditions, which
means that conditions 1 and 2 switch roles, so condi-
tion 1 is initially used for parameter adaptation and
condition 2 for model validation, and thereafter con-
dition 2 is used for parameter adaptation and condi-
tion 1 for model validation (i.e., cross-validation).
Then the average was calculated of the two calcu-
lated validities, per participant, per model. This last
value is called the accuracy of the models. The re-
sults are in the form of accuracies per trust model and
their differences are detected using a repeated meas-
ures analysis of variance (ANOVA) and post-hoc
Bonferroni t-tests.
After the different models were tuned, the best fit
model is selected based on the maximum accuracy
for the participant at hand. This was done because
one does not know beforehand which bias type will
be suitable for the specific participant. The results of the valida-
for the specific participant. The results of the valida-
tion process are in the form of accuracies per trust
model (unbiased model (UM), LiE, LiT, LiET, LoE,
LoT, LoET and the best fit model (MAX)).
Both the parameter adaptation and model valida-
tion procedure was done using the same application
as was used for gathering the empirical data. The
interface shown in Fig. 7 could also be used to alter
validation and adaptation settings, such as the granu-
larity of the adaptation.
The limited number of parameters of the models
presented in Section 2 to be adapted for each model
and each participant suggests that an exhaustive
search as described in [6] for the optimal parameter
values is feasible. This means that the entire parameter search
space is explored to find a vector of parameter set-
tings resulting in the maximum accuracy (i.e., the
amount of overlap between the model’s predicted
reliance decisions and the actual human reliance
decisions) for each of the models and each partici-
pant. The corresponding code of the implemented
exhaustive search method is shown in Algorithm 1.
In this algorithm, E(t) is the set of experiences (i.e.,
performance feedback) at time point t for all trustees,
RH(e) is the actual reliance decision the participant
made (on either one of the trustees) given a certain
experience e, RM(e,X) is the predicted reliance deci-
sion of the trust model M, given an experience e and
candidate parameter vector X (reliance on either one
of the trustees), δX is the distance between the esti-
mated and actual reliance decisions given a certain
candidate parameter vector X, and δbest is the distance
resulting from the best parameter vector Xbest found
so far. The best parameter vector Xbest is returned
when the algorithm finishes. This parameter adapta-
tion procedure was implemented in the Microsoft®
C#.NET 2005 development environment.
In order to compare the different bias-based trust
models described in Section 2, the measurements of
experienced performance feedback were used as
input for the models (i.e., as experiences) and the
output (predicted reliance decisions) of the models
was compared with the actual reliance decisions of
the participant as described in Section 6. It is hereby
assumed that the human always consults the most
trusted trustee. The resulting set of parameters is the
set with minimum error in the prediction of the reli-
ALGORITHM 1: ES-PARAMETER-ADAPTATION(E, RH)
1 δbest= ∞, X = 0
2 for all parameters x in vector X do
3 for all settings of x do
4 δx= 0
5 for all time points t do
6 e = E(t), rM = RM(e, X), rH = RH(e)
7 if rM not equal rH then
8 δx= δx+1
9 end if
10 end for
11 if δx < δbest then
12 Xbest= X, δbest= δx
13 end if
14 end for
15 end for
16 return Xbest
ance decisions for that specific participant. Hence,
the relative overlap of the predicted and the actual
reliance decisions was a measure for the accuracy of
the models.
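The adaptation loop of Algorithm 1 can be sketched as follows. The toy trust model, the synthetic data and the parameter grid here are all illustrative assumptions, not the experiment's models or data:

```python
import itertools

# Sketch of the exhaustive-search parameter adaptation: score every candidate
# parameter vector by the number of mismatches between predicted and actual
# reliance decisions, and keep the best one found.

def predict_reliance(experiences, gamma, beta):
    trust = [0.5] * len(experiences[0])       # one trust value per trustee
    decisions = []
    for e_t in experiences:
        for i, e in enumerate(e_t):
            biased = min(1.0, 2.0 * beta * e)          # illustrative bias
            trust[i] += gamma * (biased - trust[i])    # trust update
        decisions.append(max(range(len(trust)), key=lambda i: trust[i]))
    return decisions

def es_parameter_adaptation(experiences, human_decisions, grid):
    x_best, d_best = None, float("inf")
    for x in itertools.product(grid, grid):   # full grid: exhaustive search
        d = sum(p != h for p, h in
                zip(predict_reliance(experiences, *x), human_decisions))
        if d < d_best:
            x_best, d_best = x, d
    return x_best, d_best

# trustee 0 always performs well, trustee 1 poorly; the human relies on 0
experiences = [(1.0, 0.0)] * 4
human = [0, 0, 0, 0]
grid = [i / 10 for i in range(1, 11)]
x_best, d_best = es_parameter_adaptation(experiences, human, grid)
print(x_best, d_best)
```

The number of candidate vectors is the product of the per-parameter setting counts, which is exactly what makes the search cost exponential in the number of parameters, as discussed in the next subsection.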
7.2. Computational complexity
As the models described in Section 2 have different numbers of parameters, the parameter tuning process took a different amount of time for each of the models. Assume that S is the number of subjects, M the number of model types (namely unbiased, linear and logistic), B the number of bias types (using experience, trust, or experience and trust), P the number of parameters, each with precision α over the range 0–1, T the number of time steps, and N the number of trustees. The complexity is then O(S·M·B·(1/α)^P·T·N), which is exponential in the number of parameters, with a base determined by their precision. The models presented here have different numbers of parameters with different precisions. The baseline model has one parameter, γ (with precision 0.01); the linear models have four (γ, β1, β2 and β3, all with precision 0.01, where β1, β2 and β3 represent the bias of the subject towards each trustee); and the logistic models have seven (γ, τ1, τ2, τ3, σ1, σ2 and σ3, where γ and the τi have precision 0.01 and the σi have precision 1 within the range 1 to 20).
If the time required is, for example, calculated for tuning LoT on one subject, then S = 1, M = 1, B = 1, P = 7 (four parameters with precision 0.01, and three parameters with precision 1 in the range 1–20), T = 100 × 3 (to calculate the trust value at each time point, predict the reliance decision, and calculate the distance from the empirical data), and N = 3. This amounts to 1 × 1 × 1 × 10⁸ × 20³ × (3 × 10²) × 3 = 7.2 × 10¹⁴ computation steps, which on a 2.4 GHz computer would take approximately 3.47 days. For a linear model the computation time is about 37.5 seconds, so validating all seven models against one subject takes 10.41 days. If all subjects were validated for all seven models in a serial fashion (one by one) on a 2.4 GHz machine, this would cost 166.66 days. Hence, two approaches were followed during tuning: (a) decreasing the granularity of the parameters from 0.01 to 0.025 (for γ, τ and β) and from 1 to 2 (for σ), and (b) using DAS-4 [1] (the Distributed ASCI Supercomputer, version 4), which can distribute the validation of each of the subjects to a separate machine in a cluster. Sixteen machines on DAS-4 were utilized for this purpose, on average providing 0.31 GHz of computation power each. These steps sped up the process considerably: the whole process took approximately 6.19 hours on DAS-4 with these settings.
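The arithmetic above is easy to verify. The following sketch reproduces the reported counts under the stated assumption of one computation step per clock cycle on a 2.4 GHz machine:

```python
# Settings per parameter: precision 0.01 on [0, 1] gives 100 values;
# precision 1 on [1, 20] gives 20 values.
T = 100 * 3   # time points x (trust update, reliance prediction, distance)
N = 3         # number of trustees

logistic_steps = (100 ** 4) * (20 ** 3) * T * N  # gamma, tau1-3 at 0.01; sigma1-3 at 1
linear_steps = (100 ** 4) * T * N                # gamma, beta1-3 at 0.01

CLOCK = 2.4e9  # steps per second on a 2.4 GHz machine (assumed)
days_logistic = logistic_steps / CLOCK / 86400   # ~3.47 days
seconds_linear = linear_steps / CLOCK            # ~37.5 s
```

This confirms the reported 7.2 × 10¹⁴ steps and 3.47 days for a logistic model, and about 37.5 seconds for a linear one.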
7.3. Validation results
From the data of 18 participants, two outliers were removed, leaving a data set of 16 accuracies per model type (UM, LiE, LiT, LiET, LoE, LoT, LoET and MAX). The tuned parameter values found per model type per participant are too numerous to show in the paper; hence only the resulting accuracies are shown.
In Fig. 8a the subjects are shown on the x-axis and the prediction accuracies of the models on the y-axis. It can be seen that the LiE and LoET variants are mostly at the upper bound of the prediction accuracy, whereas LiT, LiET and LoT are at the lower bound. In Fig. 8b the average accuracy of the models over the participants is shown: the LiE and LoET variants provide better predictions, while LiT, LiET, LoE and LoT perform worse than the baseline model (UM).
In Fig. 9 the main effect of model type on accuracy for known data is shown. A repeated measures analysis of variance (ANOVA) showed a significant main effect (F(7, 105) = 61.04, p < .01). A post-hoc Bonferroni test showed a significant difference between all biased model types and the unbiased model (UM), p < 0.01, for all tests. For the models UM, LiT, LiET and LoT a significantly higher accuracy was found for the best-fit model (MAX), p < 0.01, for all tests.
Finally, for unknown data, a paired t-test showed a significantly improved accuracy of the best-fit model (M = 0.70, SD = 0.16) compared to the unbiased model (M = 0.66, SD = 0.15), t(15) = 3.13, p < 0.01. This means that at least one of the biased models shows an increased capability to estimate the trust of the tested participants, also for unknown data.
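As an illustration of this final test, the paired t statistic is the mean per-participant accuracy difference divided by its standard error (df = n − 1). The sketch below uses hypothetical accuracy values, not the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(xs, ys):
    """Paired t statistic for two matched samples (df = len(xs) - 1)."""
    d = [x - y for x, y in zip(xs, ys)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Hypothetical best-fit vs. unbiased accuracies for four participants:
best_fit = [0.72, 0.68, 0.75, 0.69]
unbiased = [0.66, 0.65, 0.70, 0.64]
t = paired_t(best_fit, unbiased)
```

The resulting t is compared against the t distribution with n − 1 degrees of freedom to obtain the p-value.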
8. Discussion and conclusions
In this paper, approaches have been presented that allow for modelling biases in human trust dynamics. In order to arrive at models incorporating such approaches, an existing model [11], which is often applied (e.g., [17–19]), has been extended with additional constructs. A number of different variants have hereby been introduced:
(1) a model that places the bias strictly on the experience obtained from the trustee,
(2) a model that first combines the trust and the experience and then applies the bias,
(3) a model that applies the bias to the previous trust value.
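The difference between the three variants is only where the bias enters the trust update. As a minimal sketch (the exact formulations are those of Section 2; the logistic bias function and the experience-based update T(t + Δt) = T(t) + γ·(E(t) − T(t)) of [11] are used here for illustration only):

```python
import math

def bias(v, sigma=10.0, tau=0.5):
    """Hypothetical logistic bias function mapping [0, 1] onto (0, 1)."""
    return 1.0 / (1.0 + math.exp(-sigma * (v - tau)))

def update_trust(T, e, gamma=0.1, variant="experience"):
    """One trust-update step; the variants place the bias differently."""
    if variant == "experience":    # (1) bias applied to the experience
        return T + gamma * (bias(e) - T)
    if variant == "combined":      # (2) trust and experience combined, then biased
        return bias(T + gamma * (e - T))
    if variant == "trust":         # (3) bias applied to the previous trust value
        bT = bias(T)
        return bT + gamma * (e - bT)
    raise ValueError(variant)
```

With these (assumed) defaults a maximally positive experience (e = 1) raises trust under all three variants; what differs is where the distortion enters the update.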
Simulation results of the behaviour of each of the
models have been shown, as well as a comparison of
the behaviour of the models via the mutual model
mirroring method presented in [9]. Furthermore, the
resulting simulation traces have been formally ana-
lysed by means of the verification of formal proper-
ties and were shown to behave as expected. In addi-
tion, a detailed mathematical analysis has been per-
formed to investigate dynamic properties of bias-
based trust models. The properties addressed include
aspects such as when trust is increasing or decreasing,
which equilibria are possible (i.e., T(t + Δt) = T(t)), and how the models behave near the equilibria, in particular whether the equilibria are attracting and what the rate of convergence to such an equilibrium is. The main goal of the research presented here
is to model and validate human bias-based trust.
Therefore, an extensive validation has taken place in
which the bias-based trust models were used to de-
scribe and forecast human trust levels. In this paper,
to tailor the model to a specific human, a simple
parameter estimation technique has been used, but
more complex estimation techniques could also be
applied. The tuning technique used for the personal-
ization of trust models was inspired by the techniques
presented in [6]. The technique applied, being exhaustive in nature, consumes a lot of computation power. Hence, two approaches were followed during tuning: (a) decreasing the granularity of the parameters, and (b) using DAS-4 [1], which distributes the validation of each subject to a separate machine in a cluster; in total 16 DAS-4 machines were utilized for this purpose. These steps sped up the process significantly: approximately 6.19 hours on DAS-4 instead of 166 days on a personal computer.
The validation study of the bias-based trust models showed that for each participant at least one of the biased models has an increased capability to estimate trust, also for unknown data. For known data (i.e., data the models were tuned to), all of the biased models perform better than the tuned unbiased model. The latter means that if a form of on-line tuning can be developed, the accuracies of the models will certainly benefit. The former means that the identification of personal characteristics might enable an online selection of the best-fit model for unknown data, which in turn leads to improved accuracy.
Fig. 9. Main effect of model type on accuracy.

Fig. 8. (a) Prediction accuracy of the models across subjects; (b) average prediction accuracy of the models over all subjects.

Within the domain of agent systems, many trust models have been developed; see e.g., [13,14] for an overview. Although the focus of this paper has been on the design of bias-based trust models and the validation of these models, other trust models can also be validated using the experimental data obtained, in combination with parameter estimation.
This is part of future work. Furthermore, other parameter adaptation methods will be explored or extended for the purpose of real-time adaptation. In addition, we aim to implement a personal assistant software agent that is able to monitor and balance the functional state of the human in a timely and knowledgeable manner. Applications in other domains, such as the military and air traffic control domains, can also be explored.
In the future, given the approach presented in this paper, other models of human trust from the literature, for example those addressing trust in agents as teammates (see e.g., [3a,13,14]), could also be extended with the notion of human biases. Furthermore, it could be investigated how far such extensions improve the accuracy of those models.
References
[1] H. Bal, R. Bhoedjang, R. Hofman, C. Jacobs, T. Kielmann,
J. Maassen et al., The distributed ASCI supercomputer
project, SIGOPS Operating System Review 34(4) (2000),
76–96.
[2] T. Bosse, C. Jonker, L.v.d. Meij, A. Sharpanskykh, and
J. Treur, Specification and verification of dynamics in agent
models, International Journal of Cooperative Information
Systems 18 (2009), 167–193.
[3] R. Falcone and C. Castelfranchi, Trust dynamics: How trust
is influenced by direct experiences and by trust itself, in:
Proc. of the Third International Joint Conference on
Autonomous Agents and Multiagent Systems, 2004, pp. 740–
747.
[3a] X. Fan, S. Oh, M. McNeese, J. Yen, H. Cuevas, L. Strater,
and M.R. Endsley, The influence of agent reliability on trust
in human–agent collaboration, in: Proc. of the European
Conference on Cognitive Ergonomics, 2008.
[4] L. Huff and L. Kelley, Levels of organizational trust in
individualist versus collectivist societies: A seven nation
study, Organization Science 14 (2003), 81–90.
[5] M. Hoogendoorn, S.W. Jaffry, P.-P. van Maanen, and
J. Treur, Modeling and validation of biased human trust, in:
Proc. of the Eleventh IEEE/WIC/ACM International
Conference on Intelligent Agent Technology, O. Boissier
et al., eds, IEEE Computer Society Press, 2011, pp. 256–
263.
[6] M. Hoogendoorn, S.W. Jaffry, and J. Treur, Modeling
dynamics of relative trust of competitive information agents,
in: Proc. of the Twelfth International Workshop on
Cooperative Information Agents, M. Klusch, M. Pechoucek,
and A. Polleres, eds, Lecture Notes in Artificial Intelligence,
Vol. 5180, Springer Verlag, 2008, pp. 55–70.
[7] M. Hoogendoorn, S.W. Jaffry, and J. Treur, An adaptive
agent model estimating human trust in information sources,
in: Proc. of the Ninth IEEE/WIC/ACM International
Conference on Intelligent Agent Technology, R. Baeza-
Yates, J. Lang, S. Mitra, S. Parsons and G. Pasi, eds, IEEE
Computer Society Press, 2009, pp. 458–465.
[8] M. Hoogendoorn, S.W. Jaffry, and J. Treur, Cognitive and
neural modeling of dynamics of trust in competitive
trustees, Cognitive Systems Research, 2012, in press.
[9] S.W. Jaffry and J. Treur, Comparing a cognitive and a neural
model for relative trust dynamics, in: Proc. of Sixteenth
International Conference on Neural Information Processing,
Part I, C.S. Leung, M. Lee, and J.H. Chan, eds, Lecture
Notes in Computer Science, Vol. 5863, Springer Verlag,
2009, pp. 72–83.
[10] C.M. Jonker, J.J.P. Schalken, J. Theeuwes, and J. Treur,
Human experiments in trust dynamics, in: Proc. of the
Second International Conference on Trust Management,
Lecture Notes in Computer Science, Vol. 2995, Springer
Verlag, 2004, pp. 206–220.
[11] C.M. Jonker and J. Treur, Formal analysis of models for the
dynamics of trust based on experiences, in: Multi-Agent
System Engineering, Proc. of the Ninth European Workshop
on Modelling Autonomous Agents in a Multi-Agent World,
F.J. Garijo and M. Boman, eds, Lecture Notes in Computer
Science, Vol. 1647, Springer Verlag, 1999, pp. 221–232.
[12] P.-P.v. Maanen, T. Klos, and K.v. Dongen, Aiding human
reliance decision making using computational models of
trust, in: Proc. of the Workshop on Communication Between
Human and Artificial Agents, IEEE Computer Society Press,
2007, pp. 372–376.
[13] S. Ramchurn, D. Huynh, and N. Jennings, Trust in multi-
agent systems, The Knowledge Engineering Review 19
(2004), 1–25.
[14] J. Sabater and C. Sierra, Review on computational trust and
reputation models, Artificial Intelligence Review 24 (2005),
33–60.
[15] D.O. Sears, The person positivity bias, Journal of
Personality and Social Psychology 44 (1983), 233–250.
[16] A. Sharpanskykh and J. Treur, A temporal trace language for
formal modelling and analysis of agent systems, in:
Specification and Verification of Multi-Agent Systems,
M. Dastani, K.V. Hindriks, and J.J.Ch. Meyer, eds, Springer
Verlag, 2010, pp. 317–352.
[17] S.I. Singh and S.K. Sinha, A new trust model based on time
series prediction and Markov model, in: Proc. of the
International Conference on Information and
Communication Technologies, V.V. Das and R. Vijaykumar,
eds, Communications in Computer and Information Science,
Vol. 101, Springer Verlag, 2010, pp. 148–156.
[18] F. Skopik, D. Schall, and S. Dustdar, Modeling and mining
of dynamic trust in complex service-oriented systems,
Information Systems 35 (2010), 735–757.
[19] F.E. Walter, S. Battiston, and F. Schweitzer, Personalised
and dynamic trust in social networks, in: Proc. of the Third
ACM Conference on Recommender Systems, L. Bergman,
A. Tuzhilin, R. Burke, A. Felfernig, and L. Schmidt-Thieme,
eds, ACM Press, 2009, pp. 197–204.
[20] T. Yamagishi, N. Jin, and A.S. Miller, In-group bias and
culture of collectivism, Asian Journal of Social Psychology
1 (1998), 315–328.