Figure 4. Interface of the experimental task.

Source publication
Conference Paper
Full-text available
When considering intelligent agents that interact with humans, having an idea of the trust levels of the human, for example in other agents or services, can be of great importance. Most models of human trust that exist are based on some rationality assumption, and biased behavior is not represented, whereas a vast literature in Cognitive and Socia...

Context in source publication

Context 1
... second property expresses that the trust level itself will be higher in the case of a more positive bias. In order to facilitate the addition of a bias to existing models, some models transform the experience (i.e., experiences colored by the bias). In the case of a more positive bias, the biased experiences will generally be higher (the formalizations have been omitted due to the limited space available). Finally, in some of the bias models, trust is explicitly considered to color the experiences: when the trust level is higher, the same objective experience gets an even more positive value. Note that in the property, the objective experiences on the boundaries are not considered, as the influence of trust cannot always be distinguished there (e.g., if an experience of 1 is encountered, the experience can never become higher than 1). The results of the verification are shown in Table 3. It can be seen that property P1 is satisfied for all bias models presented in this paper. Properties P2 and P3 likewise hold for the various models that have been identified. Finally, property P4 is only satisfied for the models where trust is considered when forming the subjective experience, which makes sense as this property precisely describes that influence. Properties P3 and P4 are not relevant for models LoET and LoT, as these do not incorporate the notion of subjective experience; the properties are therefore trivially satisfied (the antecedent of the implication never holds).

V. VALIDATION

Besides showing that the models exhibit the desired behavior, the most interesting question is whether the models describe human behavior better. Therefore, data from a validation experiment (as presented in [5]) has been used to perform a validation, and the results are presented in this section. First, the experiment setup is addressed, followed by the results.

The experimental task was a classification task in which two participants on two separate personal computers had to classify geographical areas at the same time. These areas had to be classified as areas that needed to be attacked, helped, or left alone by ground troops, according to specific criteria. The participants needed to base their classification on real-time computer-generated video images that resembled video footage of real unmanned aerial vehicles (UAVs). On the camera images, multiple objects were shown. There were four kinds of objects: civilians, rebels, tanks, and cars. The number of objects of each type had to be identified to perform the classification. Each object type had a score (−2, −1, 0, 1, or 2), and the total score within an area was determined. Based on this total score the participants could classify a geographical area (i.e., attack when above 2, help when below −2, or do nothing when in between). Participants had to classify two areas at the same time, and in total 98 areas had to be classified. Both participants did the same areas with the same UAV video footage. During the time a UAV flew over an area, three phases occurred.

The first phase was the advice phase. In this phase both participants and a supporting software agent gave an advice about the proper classification (attack, help, or do nothing), so that three advices were available at the end of this phase. It was also possible for the participants to refrain from giving an advice, but this hardly occurred.
The second phase was the reliance phase. In this phase the advices of both participants and that of the supporting software agent were communicated to each participant. Based on these advices the participants had to indicate which advice, and therefore which of the three trustees (self, other, or software agent), they trusted the most. Participants were instructed to maximize the number of correct classifications in both phases (i.e., the advice and reliance phases). The third phase was the feedback phase, in which the correct answer was given to both participants. Based on this feedback (i.e., the experience in the model explained in Section 2) the participants could update their internal trust models for each trustee (self, other, software agent). In Figure 4 the interface of the task is shown.

In order to compare the different models described in Section 2, the measurements of experienced performance feedback were used as input for the models (i.e., as experiences), and the output of the models (predicted reliance decisions) was compared with the actual reliance decisions of the participant. It is hereby assumed that the human always consults the most trusted trustee; hence, the reliance decision indicates which trustee is trusted most. Of course, each model still has a number of parameters that need to be tuned to the specific participant. Therefore, an exhaustive search approach has been taken to tune the parameters of the trust models (cf. [6]). The resulting set of parameters is the set with minimum error in the prediction of the reliance decisions for that specific participant. Hence, the relative overlap of the predicted and the actual reliance decisions was a measure for the accuracy of the models.

As the models have different numbers of parameters, the parameter tuning process took a different amount of time for each model. Let S be the number of subjects, M the number of model types (namely unbiased, linear, and logistic), B the number of bias types (using experience, trust, and experience and trust), P the number of parameters (each tuned over the range 0-1 at a given precision), T the number of time steps, and N the number of trustees; the complexity is then O(S · M · B · 10^P · T · N), which is exponential in the number of parameters and their precision. The models presented here have different numbers of parameters with different precisions. The baseline model has one parameter, γ (with precision 0.01), which is assumed to be the same for all trustees. The linear models have four parameters (γ, β1, β2, and β3, all with precision 0.01), where β1, β2, and β3 represent the bias of the subject towards each trustee. The logistic models have seven parameters (γ, τ1, τ2, τ3, σ1, σ2, and σ3, where γ and τ have precision 0.01 and σ has precision 1 within the range 1 to 20). In order to enable the parameter estimation to be completed within a reasonable time, the DAS-4 cluster has been used [1]. The full estimation took 6.19 hours on the DAS-4 cluster, whereas on a regular computer it would have taken 166.66 days (based on the complexity function).

The results of the validation process are in the form of accuracies per trust model (unbiased model (UM), LiE, LiT, LiET, LoE, LoT, LoET, and the best fit model (MAX)). The differences in accuracy are detected using a repeated measures analysis of variance (ANOVA) and post-hoc Bonferroni t-tests.
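To make the exhaustive tuning concrete, below is a minimal sketch for the one-parameter baseline model. The trust update used, T(t+1) = T(t) + γ·(E(t) − T(t)), is an assumed stand-in for the baseline model of Section 2 (whose equations are not reproduced in this excerpt); reliance is predicted as the most trusted trustee, and accuracy is the overlap with the actual reliance decisions, as described above.

```python
import numpy as np

def predict_reliance(experiences: np.ndarray, gamma: float) -> np.ndarray:
    """Predict reliance decisions for one subject.

    experiences: shape (T, N) performance feedback per time step and trustee
    (self, other, software agent), scaled to [0, 1]. The update rule
    T(t+1) = T(t) + gamma * (E(t) - T(t)) is an assumed stand-in for the
    baseline model; the predicted reliance is the most trusted trustee.
    """
    n_steps, n_trustees = experiences.shape
    trust = np.full(n_trustees, 0.5)               # assumed neutral initial trust
    predictions = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        predictions[t] = int(np.argmax(trust))     # reliance phase precedes feedback
        trust += gamma * (experiences[t] - trust)  # feedback phase updates trust
    return predictions

def tune_gamma(experiences: np.ndarray, reliance: np.ndarray) -> tuple[float, float]:
    """Exhaustive search over gamma in [0, 1] at precision 0.01, minimizing the
    reliance prediction error (i.e., maximizing the overlap/accuracy)."""
    best_gamma, best_acc = 0.0, -1.0
    for gamma in np.arange(0.0, 1.01, 0.01):
        acc = float(np.mean(predict_reliance(experiences, gamma) == reliance))
        if acc > best_acc:
            best_gamma, best_acc = round(float(gamma), 2), acc
    return best_gamma, best_acc
```

Scaling this inner loop over subjects, model types, bias types, and the full parameter grids yields the O(S · M · B · 10^P · T · N) cost noted above, which motivated the use of the DAS-4 cluster.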
Following this, to test for robustness, the best fit model was also cross-validated against the unbiased model (i.e., tuning on one half of the data (known data) and validating on the other half (unknown data), and vice versa), using a paired t-test. From the data of 18 participants (eight male and ten female, with an average age of 23 (SD = 3.8)), two outliers have been removed, which leaves a data set of 16 accuracies per model type (UM, LiE, LiT, LiET, LoE, LoT, LoET, and MAX).

In Fig. 5a the subjects are shown on the x-axis and the prediction accuracies of the models on the y-axis. Here it can be seen that the LiE and LoET variants are mostly on the upper bound of the prediction accuracy, whereas LiT, LiET, and LoT are on the lower bound. In Fig. 5b the average accuracy of the models over the participants is shown. It can be seen that the LiE and LoET variants provide better predictions, while LiT, LiET, LoE, and LoT perform worse compared to the baseline model (UM). In Figure 6 the main effect of model type on accuracy for known data is shown. A repeated measures analysis of variance (ANOVA) showed a significant main effect (F(7, 105) = 61.04, p << .01). A post-hoc Bonferroni test showed a significant difference between all biased model types and the unbiased model (UM), p << 0.01, for all tests. For models UM, LiT, LiET, and LoT a significantly higher accuracy was found for the best fit model (MAX), p << 0.01, for all tests. Finally, for unknown data, a paired t-test showed a significantly improved accuracy of the best fit model (M = 0.70, SD = 0.16) compared to the unbiased model (M = 0.66, SD = 0.15), t(15) = 3.13, p << 0.01. This means that ...

VI. DISCUSSION

In this paper, an approach has been presented that allows for the modeling of biases in human trust behavior. In order to come to such an approach, an existing model (cf. [9]) has been extended with additional concepts. A number of different variants have hereby been introduced: (1) a model that strictly places the bias on the experience obtained from the trustee; (2) a model that combines the trust and experience and then applies the bias; and (3) a model that uses the previous trust value, on which the bias is applied. Simulation results of the behavior of each of the models have been shown, as well as a ...
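For unknown data, the robustness check described above reduces to a paired t-test over the 16 per-subject accuracies under the two models. A minimal sketch with placeholder data follows; the reported statistics (M = 0.70 vs. M = 0.66, t(15) = 3.13) come from the study's data, not from this example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 16

# Placeholder per-subject accuracies on unknown (held-out) data -- illustrative
# only, not the study's data set. The text reports best-fit M=0.70 (SD=0.16)
# versus unbiased M=0.66 (SD=0.15) with t(15)=3.13 for the real data.
acc_unbiased = rng.normal(0.66, 0.15, n_subjects).clip(0, 1)
acc_best_fit = (acc_unbiased + rng.normal(0.04, 0.03, n_subjects)).clip(0, 1)

# Paired t-test: the same 16 subjects are measured under both models,
# so the samples are paired rather than independent.
t_stat, p_value = stats.ttest_rel(acc_best_fit, acc_unbiased)
print(f"t({n_subjects - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```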

Similar publications

Article
Full-text available
Model structure selection is one of the crucial steps in system identification, and an information criterion is needed to carry it out. It plays an important role in determining an optimum model structure, with the aim of selecting an adequate model to represent a real system. A good information criterion should not only evaluate predictiv...

Citations

... The model is represented by means of differential equations to also enable a formal analysis of the proposed model. Although this approach and its developments (Hoogendoorn et al. 2010, 2011a) provide a very good basis for relative trust modeling, the proposed solution is not inherently capable of taking into account the order of recommendations and received evidence. The model is built by adding extra parameters to the classical models of trust. ...
Article
Full-text available
Trust models play an important role in decision support systems and computational environments in general. The common goal of the existing trust models is to provide a representation as close as possible to the social phenomenon of trust in computational domains. In recent years, the field of quantum decision making has been significantly developed. Researchers have shown that the irrationalities, subjective biases, and common paradoxes of human decision making can be better described based on a quantum theoretic model. These decision and cognitive theoretic formulations that use the mathematical toolbox of quantum theory (i.e., quantum probabilities) are referred to by researchers as quantum-like modeling approaches. Based on the general structure of a quantum-like computational trust model, in this paper, we demonstrate that a quantum-like model of trust can define a powerful and flexible trust evolution (i.e., updating) mechanism. After the introduction of the general scheme of the proposed model, the main focus of the paper is the proposition of an amplitude amplification-based approach to trust evolution. By performing four different experimental evaluations, it is shown that the proposed trust evolution algorithm inspired by Grover's quantum search algorithm is an effective and accurate mechanism for trust updating compared to other commonly used classical approaches.
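As background for the amplitude-amplification mechanism this abstract refers to, here is a generic numpy sketch of Grover-style iterations (an oracle phase flip followed by inversion about the mean) concentrating probability on one marked option. It illustrates only the underlying primitive, not the paper's trust-evolution algorithm.

```python
import numpy as np

# Generic illustration of the Grover-style amplitude amplification primitive;
# this is textbook background, not the paper's trust model.
n = 8                                # number of options (e.g., trustee candidates)
marked = 3                           # index of the option to amplify
state = np.full(n, 1 / np.sqrt(n))   # uniform superposition of real amplitudes

for _ in range(2):                   # ~optimal number of iterations for n = 8
    state[marked] *= -1              # oracle: phase-flip the marked option
    state = 2 * state.mean() - state # diffusion: inversion about the mean

probs = state ** 2
print(f"P(marked) = {probs[marked]:.3f}")   # probability concentrates on 'marked'
```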
... Human behavior inference for decision making is critical for building synergistic relationships between humans and autonomous systems. Researchers have attempted to predict human behavior using dynamic models that rely on the behavioral responses or self-reported behavior of humans [3], [4]. An alternative is the use of psychophysiological signals like the electroencephalogram (EEG) that represents the electrical activity of the brain. ...
... There is no experimentally verified model for describing the comprehensive dynamics of human trust level in HMI contexts. Existing trust models are either nonlinear or do not capture the human behavior that is not based on rationale [8]. They also ignore the influence of the cumulative effect of past interactions on the present trust level. ...
... More recently, researchers have incorporated elements that are not based on rationale in the human trust model. Hoogendoorn et al. introduced 'bias' into their model to account for this [8]. They formulated models with biased experience and/or trust and then validated these models via a geographical area classification task. ...
Conference Paper
Full-text available
In an increasingly automated world, trust between humans and autonomous systems is critical for successful integration of these systems into our daily lives. In particular, for autonomous systems to work cooperatively with humans, they must be able to sense and respond to the trust of the human. This inherently requires a control-oriented model of dynamic human trust behavior. In this paper, we describe a gray-box modeling approach for a linear third-order model that captures the dynamic variations of human trust in an obstacle detection sensor. The model is parameterized based on data collected from 581 human subjects, and the goodness of fit is approximately 80% for a general population. We also discuss the effect of demographics, such as national culture and gender, on trust behavior by re-parameterizing our model for subpopulations of data. These demographic-based models can be used to help autonomous systems further predict variations in human trust dynamics.
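As an illustration of what a control-oriented, linear third-order trust model can look like, here is a discrete-time state-space sketch. The matrices are placeholder values chosen only for stability; they are not the parameters fitted to the 581-subject data set.

```python
import numpy as np

# Illustrative discrete-time, third-order linear state-space trust model of the
# general form described above: x(k+1) = A x(k) + B u(k), trust y(k) = C x(k).
# A, B, C are placeholder values, NOT the parameters fitted in the paper.
A = np.array([[0.8, 0.1, 0.0],
              [0.0, 0.7, 0.2],
              [0.0, 0.0, 0.6]])
B = np.array([[0.2], [0.1], [0.3]])
C = np.array([[1.0, 0.0, 0.0]])

def simulate_trust(u: np.ndarray) -> np.ndarray:
    """Simulate the trust response to an input sequence u, e.g., sensor
    performance (1 = correct detection, 0 = fault)."""
    x = np.zeros((3, 1))
    y = []
    for u_k in u:
        y.append((C @ x).item())
        x = A @ x + B * u_k
    return np.array(y)

# Example: trust builds during reliable operation and dips after faults.
u = np.array([1] * 20 + [0] * 5 + [1] * 20)
print(simulate_trust(u).round(2))
```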
... Although this work provides a good quantitative parametric formulation for calculating bias, the use of such bias, the effect of considering it, and the advantages of taking it into account are not discussed. The same drawbacks exist in the work of Hoogendoorn et al. [36]. ...
Article
Trust models play an important role in computational environments. One of the main aims of the work undertaken in this domain is to provide a model that can better describe the socio-technical nature of computational trust. It has been recently shown that quantum-like formulations in the field of human decision making can better explain the underlying nature of these types of processes. Based on this research, the aim of this paper is to propose a novel model of trust based on quantum probabilities as the underlying mathematics of quantum theory. It will be shown that by using this new mathematical framework, we will have a powerful mechanism to model the contextuality property of trust. Also, it is hypothesized that many events or evaluations in the context of trust can be and should be considered as incompatible, which is unique to the noncommutative structure of quantum probabilities. The main contribution of this paper will be that, by using the quantum Bayesian inference mechanism for belief updating in the framework of quantum theory, we propose a biased trust inference mechanism. This mechanism allows us to model the negative and positive biases that a trustor may subjectively feel toward a certain trustee candidate. It is shown that by using this bias, we can model and describe the exploration versus exploitation problem in the context of trust decision making, recency effects for recently good or bad transactions, filtering pessimistic and optimistic recommendations that may result in good-mouthing or bad-mouthing attacks, the attitude of the trustor toward risk and uncertainty in different situations and the pseudo-transitivity property of trust. Finally, we have conducted several experimental evaluations in order to demonstrate the effectiveness of the proposed model in different scenarios.
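The role of the interference terms mentioned above can be illustrated generically: with two reasoning paths represented by complex amplitudes, the quantum-like probability deviates from the classical sum by a cross term whose sign can encode a positive or negative subjective bias. A generic sketch, not the paper's model:

```python
import numpy as np

# Generic illustration of a quantum-like interference term (not the paper's
# model). Two "reasoning paths" toward trusting a candidate carry complex
# amplitudes a1, a2; theta is their relative phase.
p1, p2 = 0.30, 0.20                        # classical path probabilities
for theta in (0.0, np.pi / 3, np.pi):
    a1 = np.sqrt(p1)
    a2 = np.sqrt(p2) * np.exp(1j * theta)
    p_quantum = abs(a1 + a2) ** 2          # |a1 + a2|^2
    interference = p_quantum - (p1 + p2)   # equals 2*sqrt(p1*p2)*cos(theta)
    print(f"theta={theta:.2f}: P={p_quantum:.3f}, interference={interference:+.3f}")
```

A positive cross term (theta near 0) plays the role of a positive bias toward the trustee; a negative one (theta near pi) suppresses the trust probability below the classical value.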
... In situations where software agents interact with humans, trust models that are incorporated in these agents may have a completely different purpose: to estimate the trust levels of the human over time, and take that into consideration in its behavior, for example, by providing advices from other trustees that are trusted more. If this is the purpose of the trust model, then the model¹ ... (¹ The work presented in this paper is a significant extension by more than 40% of (Hoogendoorn, Jaffry, Maanen, and Treur, 2011).) ...
Article
Full-text available
Within human trust-related behaviour, non-rational behaviour can often be observed, according to the literature from the domains of Psychology and the Social Sciences. Current trust models typically do not incorporate non-rational elements in the trust formation dynamics. In order to enable agents that interact with humans to have a good estimation of human trust, and to take this into account in their behaviour, trust models that incorporate such human aspects are a necessity. A specific non-rational element is that humans are often biased in their behaviour. In this paper, models for human trust dynamics are presented that incorporate human biases. In order to show that they more accurately describe human behaviour, they have been evaluated against empirical data, which shows that the models perform significantly better.
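To give a feel for how "coloring" an experience by a bias can work, below is an assumed simple form in which a bias parameter pulls an objective experience toward 1 (positive bias) or toward 0 (negative bias). The published linear and logistic variants (LiE, LoE, etc.) are not reproduced in this excerpt, so the formula here is illustrative only.

```python
def biased_experience(e: float, beta: float) -> float:
    """Assumed linear bias on an objective experience e in [0, 1].

    beta = 0.5 is neutral; beta > 0.5 pulls the subjective experience toward 1
    (positive bias), beta < 0.5 toward 0 (negative bias). Illustrative only --
    not the published LiE/LoE equations.
    """
    if beta >= 0.5:
        w = 2 * (beta - 0.5)
        return (1 - w) * e + w     # blend toward the maximal experience 1
    w = 2 * (0.5 - beta)
    return (1 - w) * e             # blend toward the minimal experience 0

# A more positive bias never lowers the subjective experience, mirroring the
# monotonicity property discussed in the source publication's verification.
for beta in (0.3, 0.5, 0.8):
    print(beta, round(biased_experience(0.6, beta), 2))
```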
... In a real-world context an important part of a validation process is tuning the model's parameters to the situation at hand, for example, a person's characteristics. Automated parameter tuning methods are available in the literature (e.g., [52]) and have been successfully applied in AmI applications; see, for example, [10,31,32]. ...
Article
Full-text available
Within agent-based Ambient Intelligence applications, agents react to humans based on information obtained by sensoring and their knowledge about human functioning. Appropriate types of reactions depend on the extent to which an agent understands the human and is able to interpret the available information (which is often incomplete, and hence multi-interpretable) in order to create a more complete internal image of the environment, including humans. Such an understanding requires that the agent has knowledge to a certain depth about the human's physiological and mental processes in the form of an explicitly represented model of the causal and dynamic relations describing these processes. In addition, given such a model representation, the agent needs reasoning methods to derive conclusions from the model and interpret the (partial) information available by sensoring. This paper presents the development of a toolbox that can be used by a modeller to design Ambient Intelligence applications. This toolbox contains a number of model-based reasoning methods and approaches to control such reasoning methods. Formal specifications in an executable temporal format are offered, which allows for simulation of reasoning processes and automated verification of the resulting reasoning traces in a dedicated software environment. A number of such simulation experiments and their formal analysis are described. The main contribution of this paper is that the reasoning methods in the toolbox can reason with both quantitative and qualitative aspects in combination with a temporal dimension, and can perform focused reasoning based upon certain heuristic information.
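The "executable temporal format" idea can be illustrated generically: state properties hold at time points, and executable rules derive properties at later time points, so reasoning traces can be simulated step by step. A minimal sketch (generic, with hypothetical domain atoms; not the toolbox's actual specification language):

```python
# Generic sketch of forward simulation with executable temporal rules: each
# rule states "if the antecedent holds at time t, the consequent holds at
# t + delay". The rules below use hypothetical domain atoms for illustration.
Rule = tuple  # (antecedent, consequent, delay)

rules: list[Rule] = [
    ("high_heart_rate", "stress_believed", 1),
    ("stress_believed", "support_offered", 1),
]

# facts[t] = set of state properties holding at time point t
facts: dict[int, set] = {0: {"high_heart_rate"}}

for t in range(0, 5):
    for antecedent, consequent, delay in rules:
        if antecedent in facts.get(t, set()):
            facts.setdefault(t + delay, set()).add(consequent)

# The resulting trace can then be inspected or checked against properties.
for t in sorted(facts):
    print(t, sorted(facts[t]))
```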
Article
Full-text available
In this paper, we propose a new formulation of computational trust based on quantum decision theory (QDT). By using this new formulation, we can divide the assigned trustworthiness values into objective and subjective parts. First, we create a mapping between the QDT definitions and the trustworthiness constructions. Then, we demonstrate that it is possible for the quantum interference terms to appear in the trust decision making process. By using the interference terms, we can quantify the emotions and subjective preferences of the trustor in various contexts with different amounts of uncertainty and risk. The non-commutative nature of quantum probabilities is a valuable mathematical tool to model the relative nature of trust. In relative trust models, the evaluation of a trustee candidate is not only dependent on the trustee itself, but also on the other existing competitors. In other words, the first evaluation is performed in an isolated context whereas the rest of the evaluations are performed in a comparative one. It is shown that a QDT-based model of trust can account for these order effects in the trust decision making process. Finally, based on the principles of risk and uncertainty aversion, the interference alternation theorem, and the interference quarter law, quantitative values are assigned to the interference terms. By performing empirical evaluations, we have demonstrated that various scenarios can be better explained by a quantum model of trust than by the commonly used classical models.
Conference Paper
Full-text available
The existing approach towards agent-based safety risk analysis in Air Traffic Management (ATM) covers hazards that may potentially occur within air traffic operations in two ways. One way is to cover hazards through agent-based model constructs. The second way is to cover hazards through bias and uncertainty analysis in combination with sensitivity analysis of the agent-based model. The disadvantage of the latter approach is that it is more limited in capturing potential emergent behaviour that could be caused by unmodelled hazards. The research presented in this paper explores to what extent agent-based model constructs that have been developed for other purposes are capable of modelling more hazards in ATM through the agent-based framework. The focus is on model constructs developed by VU Amsterdam, mainly addressing human factors and interaction between multiple (human and computer) agents within teams. Inspired by a large database of ATM hazards analysed earlier, a number of VU model constructs are identified that have the potential to model the remaining hazards. These model constructs are described at a conceptual level and analysed with respect to the extent to which they increase the percentage of modelled hazards in ATM.