TABLE 3
RESULT OF VERIFICATION

Source publication
Conference Paper
When considering intelligent agents that interact with humans, having an idea of the trust levels of the human, for example in other agents or services, can be of great importance. Most models of human trust that exist are based on some rationality assumption, and biased behavior is not represented, whereas a vast literature in Cognitive and Socia...

Context in source publication

Context 1
... results of the verification are shown in Table 3. It can be seen that property P1 is satisfied for all bias models presented in this paper. When looking at properties P2 and P3, however, these properties also hold for the various models that have been identified. ...
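To give a concrete, if simplified, impression of what such automated verification of properties against simulation traces can look like, the Python sketch below checks two purely hypothetical properties over a small trust trace; the actual properties P1-P3 and the verification environment are those defined in the source paper.

```python
# Minimal sketch of automated property checking over simulated trust traces.
# The properties below are HYPOTHETICAL stand-ins; the real P1-P3 are defined
# in the source paper and checked there with a dedicated verification tool.

def p1_trust_in_unit_interval(trace):
    """Hypothetical P1: every trust value in the trace stays within [0, 1]."""
    return all(0.0 <= trust <= 1.0 for _, trust in trace)

def p2_positive_experience_never_decreases_trust(trace, experiences):
    """Hypothetical P2: a positive experience never lowers trust at the next step."""
    return all(t_next >= t
               for (_, t), (_, t_next), e in zip(trace, trace[1:], experiences)
               if e > 0.5)

# trace: list of (time, trust_value); experiences: one experience value per step
trace = [(0, 0.50), (1, 0.55), (2, 0.53), (3, 0.60)]
experiences = [0.9, 0.2, 0.8]

print("P1 holds:", p1_trust_in_unit_interval(trace))
print("P2 holds:", p2_positive_experience_never_decreases_trust(trace, experiences))
```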

Similar publications

Article
Model structure selection is one of the crucial steps in system identification, and an information criterion is needed to carry it out. It plays an important role in determining an optimum model structure, with the aim of selecting an adequate model to represent a real system. A good information criterion should not only evaluate predictiv...
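As a brief, hedged illustration of how an information criterion is typically used for model structure selection, the sketch below scores candidate model orders with the standard Akaike Information Criterion (AIC); the criterion evaluated in the article itself may differ.

```python
import numpy as np

# Illustrative use of a standard information criterion (AIC) to pick a model
# structure: fit polynomial models of increasing order and keep the order
# that best balances goodness of fit against model complexity.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(scale=0.05, size=x.size)

def aic(order):
    coeffs = np.polyfit(x, y, order)              # candidate model structure
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = order + 1                                  # number of estimated parameters
    return x.size * np.log(rss / x.size) + 2 * k

scores = {order: aic(order) for order in range(1, 6)}
best = min(scores, key=scores.get)
print("selected model order:", best)
```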

Citations

... The model is represented by means of differential equations to also enable a formal analysis of the proposed model. Although this approach and its developments (Hoogendoorn et al. 2010, 2011a) provide a very good basis for relative trust modeling, the proposed solution is not inherently capable of taking into account the order of recommendations and received evidence. The model is built by adding extra parameters to the classical models of trust. ...
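A minimal sketch, under assumed parameter names and an assumed functional form rather than the cited differential equations, of how a relative-trust update could be stepped numerically:

```python
import numpy as np

# Illustrative, simplified relative-trust dynamics; the parameters and form
# are assumptions, not the differential equations of the cited model.
def step_relative_trust(trust, experience, gamma=0.2, beta=0.1, dt=1.0):
    """Euler step: trust moves toward each trustee's experience, weighted by
    how that experience compares with the average over all competitors."""
    relative = experience - experience.mean()        # standing relative to others
    d_trust = gamma * (experience - trust) + beta * relative
    return np.clip(trust + dt * d_trust, 0.0, 1.0)

trust = np.array([0.5, 0.5, 0.5])
for experience in [np.array([0.9, 0.4, 0.6]), np.array([0.8, 0.3, 0.7])]:
    trust = step_relative_trust(trust, experience)
print(trust)
```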
Article
Trust models play an important role in decision support systems and computational environments in general. The common goal of the existing trust models is to provide a representation as close as possible to the social phenomenon of trust in computational domains. In recent years, the field of quantum decision making has been significantly developed. Researchers have shown that the irrationalities, subjective biases, and common paradoxes of human decision making can be better described based on a quantum theoretic model. These decision and cognitive theoretic formulations that use the mathematical toolbox of quantum theory (i.e., quantum probabilities) are referred to by researchers as quantum-like modeling approaches. Based on the general structure of a quantum-like computational trust model, in this paper we demonstrate that a quantum-like model of trust can define a powerful and flexible trust evolution (i.e., updating) mechanism. After the introduction of the general scheme of the proposed model, the main focus of the paper is the proposition of an amplitude amplification-based approach to trust evolution. By performing four different experimental evaluations, it is shown that the proposed trust evolution algorithm, inspired by Grover's quantum search algorithm, is an effective and accurate mechanism for trust updating compared to other commonly used classical approaches.
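For readers unfamiliar with the mechanism the paper draws on, the following sketch shows plain Grover-style amplitude amplification over a small amplitude vector; how the cited model maps such amplitudes onto trust states is the paper's own contribution and is not reproduced here.

```python
import numpy as np

# Plain Grover-style amplitude amplification, shown only to illustrate the
# mechanism: an oracle phase-flips the marked state and a diffusion step
# inverts all amplitudes about their mean, concentrating probability on the
# marked state. The mapping to trust updating is the cited paper's own model.
n = 8                                  # number of basis states
target = 3                             # index of the marked state to amplify
state = np.full(n, 1 / np.sqrt(n))     # uniform superposition

for _ in range(2):                     # ~ (pi/4) * sqrt(n) iterations
    state[target] *= -1                # oracle: phase-flip the target
    state = 2 * state.mean() - state   # diffusion: inversion about the mean

print(np.round(state**2, 3))           # probability concentrates on the target
```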
... Human behavior inference for decision making is critical for building synergistic relationships between humans and autonomous systems. Researchers have attempted to predict human behavior using dynamic models that rely on the behavioral responses or self-reported behavior of humans [3], [4]. An alternative is the use of psychophysiological signals like the electroencephalogram (EEG) that represents the electrical activity of the brain. ...
... There is no experimentally verified model for describing the comprehensive dynamics of human trust level in HMI contexts. Existing trust models are either nonlinear or do not capture the human behavior that is not based on rationale [8]. They also ignore the influence of the cumulative effect of past interactions on the present trust level. ...
... More recently, researchers have incorporated elements that are not based on rationale in the human trust model. Hoogendoorn et al. introduced 'bias' into their model to account for this [8]. They formulated models with biased experience and/or trust and then validated these models via a geographical areas classification task. ...
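A rough sketch of what a trust update with a biased perception of experiences could look like; the bias and learning-rate parameters below are illustrative assumptions, not the formulation of Hoogendoorn et al.

```python
# Rough sketch of a trust update with a biased perception of experiences,
# in the spirit of the cited approach; the parameter names and values are
# illustrative assumptions, not the model of Hoogendoorn et al.
def biased_trust_step(trust, experience, bias=0.3, rate=0.2):
    """Bias pulls the perceived experience toward the current trust level
    (a confirmation-like bias), then trust moves toward that perception."""
    perceived = (1 - bias) * experience + bias * trust
    return trust + rate * (perceived - trust)

trust = 0.5
for experience in [0.9, 0.8, 0.2, 0.9]:
    trust = biased_trust_step(trust, experience)
    print(round(trust, 3))
```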
Conference Paper
In an increasingly automated world, trust between humans and autonomous systems is critical for successful integration of these systems into our daily lives. In particular, for autonomous systems to work cooperatively with humans, they must be able to sense and respond to the trust of the human. This inherently requires a control-oriented model of dynamic human trust behavior. In this paper, we describe a gray-box modeling approach for a linear third-order model that captures the dynamic variations of human trust in an obstacle detection sensor. The model is parameterized based on data collected from 581 human subjects, and the goodness of fit is approximately 80% for a general population. We also discuss the effect of demographics, such as national culture and gender, on trust behavior by re-parameterizing our model for subpopulations of data. These demographic-based models can be used to help autonomous systems further predict variations in human trust dynamics.
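As an illustration of the kind of model meant here, the sketch below steps a discrete-time third-order linear state-space trust model driven by an experience input; the matrices are placeholders, not the parameters identified from the 581-subject data.

```python
import numpy as np

# Sketch of a discrete-time third-order linear (state-space) trust model driven
# by an experience/reliability input; A, B, C are placeholder values, not the
# parameters identified from the data set in the cited paper.
A = np.array([[0.7, 0.2, 0.0],
              [0.1, 0.6, 0.1],
              [0.0, 0.1, 0.8]])        # internal trust-state dynamics (assumed)
B = np.array([[0.2], [0.1], [0.05]])   # effect of the experience input (assumed)
C = np.array([[1.0, 0.0, 0.0]])        # reported trust read out from the first state

x = np.zeros((3, 1))
for u in [1.0, 1.0, 0.0, 1.0]:         # 1 = reliable sensor report, 0 = faulty
    x = A @ x + B * u
    print((C @ x).item())              # predicted trust response over time
```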
... Although this work provides a good quantitative parametric formulation for calculating bias, the use of such bias, the effect of considering it, and the advantages of taking it into account are not discussed. The same drawbacks exist in the work of Hoogendoorn et al. [36]. ...
Article
Trust models play an important role in computational environments. One of the main aims of the work undertaken in this domain is to provide a model that can better describe the socio-technical nature of computational trust. It has been recently shown that quantum-like formulations in the field of human decision making can better explain the underlying nature of these types of processes. Based on this research, the aim of this paper is to propose a novel model of trust based on quantum probabilities as the underlying mathematics of quantum theory. It will be shown that by using this new mathematical framework, we have a powerful mechanism to model the contextuality property of trust. Also, it is hypothesized that many events or evaluations in the context of trust can be and should be considered incompatible, which is unique to the noncommutative structure of quantum probabilities. The main contribution of this paper is a biased trust inference mechanism, obtained by using the quantum Bayesian inference mechanism for belief updating in the framework of quantum theory. This mechanism allows us to model the negative and positive biases that a trustor may subjectively feel toward a certain trustee candidate. It is shown that by using this bias, we can model and describe the exploration versus exploitation problem in the context of trust decision making, recency effects for recently good or bad transactions, the filtering of pessimistic and optimistic recommendations that may result in good-mouthing or bad-mouthing attacks, the attitude of the trustor toward risk and uncertainty in different situations, and the pseudo-transitivity property of trust. Finally, we have conducted several experimental evaluations in order to demonstrate the effectiveness of the proposed model in different scenarios.
... In situations where software agents interact with humans, trust models that are incorporated in these agents may have a completely different purpose: to estimate the trust levels of the human over time, and take that into consideration in the agent's behavior, for example by providing advice from other trustees that are trusted more. If this is the purpose of the trust model, then the model ... [footnote: The work presented in this paper is a significant extension, by more than 40%, of (Hoogendoorn, Jaffry, Maanen, and Treur, 2011).] ...
Article
Within human trust-related behaviour, non-rational behaviour can often be observed, according to the literature from the domains of Psychology and the Social Sciences. Current trust models that have been developed typically do not incorporate non-rational elements in the trust formation dynamics. In order to enable agents that interact with humans to have a good estimation of human trust, and to take this into account in their behaviour, trust models that incorporate such human aspects are a necessity. A specific non-rational element in humans is that they are often biased in their behaviour. In this paper, models for human trust dynamics are presented that incorporate human biases. In order to show that they more accurately describe human behaviour, they have been evaluated against empirical data, which shows that the models perform significantly better.
... In a real-world context, an important part of a validation process is the tuning of the model's parameters to the situation at hand, for example, a person's characteristics. Automated parameter tuning methods are available in the literature (e.g., [52]) and have been successfully applied in AmI applications; see, for example, [10,31,32]. ...
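A minimal sketch of such automated parameter tuning, fitting a single learning-rate parameter of a simple trust-update model to an assumed empirical trace with a standard optimiser; the methods cited in the text may use different search strategies.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimal sketch of automated parameter tuning: fit one learning-rate
# parameter of a simple trust-update model to an (assumed) empirical trust
# trace by minimising squared error. The cited tuning methods may differ.
experiences = np.array([0.9, 0.8, 0.2, 0.9, 0.7])
observed_trust = np.array([0.58, 0.63, 0.55, 0.62, 0.64])   # assumed data

def simulate(rate, trust0=0.5):
    trust, out = trust0, []
    for e in experiences:
        trust = trust + rate * (e - trust)
        out.append(trust)
    return np.array(out)

def error(rate):
    return np.sum((simulate(rate) - observed_trust) ** 2)

result = minimize_scalar(error, bounds=(0.0, 1.0), method="bounded")
print("tuned learning rate:", round(result.x, 3))
```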
Article
Within agent-based Ambient Intelligence applications, agents react to humans based on information obtained by sensoring and their knowledge about human functioning. Appropriate types of reactions depend on the extent to which an agent understands the human and is able to interpret the available information (which is often incomplete, and hence multi-interpretable) in order to create a more complete internal image of the environment, including humans. Such an understanding requires that the agent has knowledge to a certain depth about the human's physiological and mental processes, in the form of an explicitly represented model of the causal and dynamic relations describing these processes. In addition, given such a model representation, the agent needs reasoning methods to derive conclusions from the model and interpret the (partial) information available by sensoring. This paper presents the development of a toolbox that can be used by a modeller to design Ambient Intelligence applications. This toolbox contains a number of model-based reasoning methods and approaches to control such reasoning methods. Formal specifications in an executable temporal format are offered, which allows for simulation of reasoning processes and automated verification of the resulting reasoning traces in a dedicated software environment. A number of such simulation experiments and their formal analysis are described. The main contribution of this paper is that the reasoning methods in the toolbox have the possibility to reason using both quantitative and qualitative aspects in combination with a temporal dimension, and the possibility to perform focused reasoning based upon certain heuristic information.
Article
In this paper, we propose a new formulation of computational trust based on quantum decision theory (QDT). By using this new formulation, we can divide the assigned trustworthiness values into objective and subjective parts. First, we create a mapping between the QDT definitions and the trustworthiness constructions. Then, we demonstrate that it is possible for the quantum interference terms to appear in the trust decision making process. By using the interference terms, we can quantify the emotions and subjective preferences of the trustor in various contexts with different amounts of uncertainty and risk. The non-commutative nature of quantum probabilities is a valuable mathematical tool to model the relative nature of trust. In relative trust models, the evaluation of a trustee candidate is not only dependent on the trustee itself, but also on the other existing competitors. In other words, the first evaluation is performed in an isolated context whereas the rest of the evaluations are performed in a comparative one. It is shown that a QDT-based model of trust can account for these order effects in the trust decision making process. Finally, based on the principles of risk and uncertainty aversion, the interference alternation theorem, and the interference quarter law, quantitative values are assigned to the interference terms. By performing empirical evaluations, we have demonstrated that various scenarios can be better explained by a quantum model of trust than by the commonly used classical models.
Conference Paper
The existing approach to agent-based safety risk analysis in Air Traffic Management (ATM) covers hazards that may potentially occur within air traffic operations in two ways. One way is to cover hazards through agent-based model constructs. The second way is to cover hazards through bias and uncertainty analysis in combination with sensitivity analysis of the agent-based model. The disadvantage of the latter approach is that it is more limited in capturing potential emergent behaviour that could be caused by unmodelled hazards. The research presented in this paper explores to what extent agent-based model constructs that have been developed for other purposes are capable of modelling additional hazards in ATM through the agent-based framework. The focus is on model constructs developed by VU Amsterdam, mainly addressing human factors and interaction between multiple (human and computer) agents within teams. Inspired by a large database of ATM hazards analysed earlier, a number of VU model constructs are identified that have the potential to model the remaining hazards. These model constructs are described at a conceptual level and analysed with respect to the extent to which they increase the percentage of modelled hazards in ATM.