Conference Paper (PDF available)

Modeling and Validation of Biased Human Trust

Abstract

When considering intelligent agents that interact with humans, having an idea of the trust levels of the human, for example in other agents or services, can be of great importance. Most existing models of human trust are based on some rationality assumption and do not represent biased behavior, whereas a vast literature in the Cognitive and Social Sciences indicates that humans often exhibit non-rational, biased behavior with respect to trust. This paper reports how several variations of biased human trust models have been designed, analyzed, and validated against empirical data. The results show that such biased trust models predict human trust significantly better.
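To make the idea concrete, here is a minimal sketch of one way a bias can enter a trust model: a standard exponential-smoothing trust update whose input experiences are warped toward optimism or pessimism by a bias parameter. The transform, parameter names, and values below are illustrative assumptions, not the models validated in the paper.

    def biased_experience(e, beta):
        # Warp a raw experience e in [0, 1] toward optimism (beta > 0.5) or
        # pessimism (beta < 0.5); beta = 0.5 leaves e unchanged. The transform
        # is an illustrative assumption, not the paper's exact formulation.
        return (beta * e) / (beta * e + (1 - beta) * (1 - e))

    def update_trust(trust, e, gamma, beta):
        # Exponential-smoothing update driven by the biased experience.
        return trust + gamma * (biased_experience(e, beta) - trust)

    # Same experiences, different biases: the optimist ends up trusting more.
    t_opt = t_pess = 0.5
    for e in [0.6, 0.7, 0.4, 0.8]:
        t_opt = update_trust(t_opt, e, gamma=0.3, beta=0.7)
        t_pess = update_trust(t_pess, e, gamma=0.3, beta=0.3)
    print(round(t_opt, 3), round(t_pess, 3))

With beta above 0.5 the same experience stream yields systematically higher trust than with beta below 0.5, which is the kind of biased behavior the paper validates against empirical data.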
... There is no experimentally verified model for describing the comprehensive dynamics of human trust level in HMI contexts. Existing trust models are either nonlinear or do not capture the human behavior that is not based on rationale [8]. They also ignore the influence of the cumulative effect of past interactions on the present trust level. ...
... More recently, researchers have incorporated elements that are not based on rationale in the human trust model. Hoogendoorn et al. introduced 'bias' into their model to account for this [8]. They formulated models with biased experience and/or trust and then validated these models via a geographical areas classification task. ...
Conference Paper
Full-text available
In an increasingly automated world, trust between humans and autonomous systems is critical for successful integration of these systems into our daily lives. In particular, for autonomous systems to work cooperatively with humans, they must be able to sense and respond to the trust of the human. This inherently requires a control-oriented model of dynamic human trust behavior. In this paper, we describe a gray-box modeling approach for a linear third-order model that captures the dynamic variations of human trust in an obstacle detection sensor. The model is parameterized based on data collected from 581 human subjects, and the goodness of fit is approximately 80% for a general population. We also discuss the effect of demographics, such as national culture and gender, on trust behavior by re-parameterizing our model for subpopulations of data. These demographic-based models can be used to help autonomous systems further predict variations in human trust dynamics.
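To illustrate the structure of such a model, the sketch below simulates a generic discrete-time linear third-order system: three latent states driven by an experience input, with reported trust as the output. The matrices A, B and C are invented placeholders; the paper identifies its actual parameters from the data of 581 subjects.

    import numpy as np

    # Three latent states (e.g., trust, cumulative trust, expectation) driven
    # by an experience input u; reported trust is the output y. A, B and C
    # hold invented placeholder values, not the identified parameters.
    A = np.array([[0.90, 0.05, 0.02],
                  [0.10, 0.85, 0.00],
                  [0.00, 0.05, 0.95]])
    B = np.array([[0.08], [0.02], [0.01]])
    C = np.array([[1.0, 0.0, 0.0]])

    x = np.zeros((3, 1))
    for u in [1, 1, 0, 1, 0, 0, 1]:   # 1 = positive experience, 0 = negative
        x = A @ x + B * u
        y = (C @ x).item()
        print(f"u={u}  predicted trust={y:.3f}")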
... Human behavior inference for decision making is critical for building synergistic relationships between humans and autonomous systems. Researchers have attempted to predict human behavior using dynamic models that rely on the behavioral responses or self-reported behavior of humans [3], [4]. An alternative is the use of psychophysiological signals like the electroencephalogram (EEG) that represents the electrical activity of the brain. ...
... The model is represented by means of differential equations to also enable a formal analysis of the proposed model. Although this approach and its developments (Hoogendoorn et al. 2010, 2011a) provide a very good basis for relative trust modeling, the proposed solution is not inherently capable of taking into account the order of recommendations and received evidence. The model is built by adding extra parameters to the classical models of trust. ...
Article
Full-text available
Trust models play an important role in decision support systems and computational environments in general. The common goal of existing trust models is to provide a representation as close as possible to the social phenomenon of trust in computational domains. In recent years, the field of quantum decision making has developed significantly. Researchers have shown that the irrationalities, subjective biases, and common paradoxes of human decision making can be better described by a quantum theoretic model. These decision and cognitive theoretic formulations that use the mathematical toolbox of quantum theory (i.e., quantum probabilities) are referred to by researchers as quantum-like modeling approaches. Based on the general structure of a quantum-like computational trust model, in this paper we demonstrate that a quantum-like model of trust can define a powerful and flexible trust evolution (i.e., updating) mechanism. After the introduction of the general scheme of the proposed model, the main focus of the paper is an amplitude-amplification-based approach to trust evolution. Four different experimental evaluations show that the proposed trust evolution algorithm, inspired by Grover's quantum search algorithm, is an effective and accurate mechanism for trust updating compared to other commonly used classical approaches.
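For a flavor of the amplitude-amplification mechanism, the toy example below runs a single Grover-style iteration over four trustee candidates: the marked candidate's amplitude is sign-flipped and all amplitudes are then inverted about their mean, which concentrates probability mass on the marked candidate. This is a schematic illustration of the underlying idea, not the paper's trust evolution algorithm.

    import numpy as np

    # One Grover-style iteration: sign-flip the marked candidate's amplitude
    # (the "oracle"), then invert all amplitudes about their mean (the
    # "diffusion" step). For 4 candidates a single iteration suffices.
    amps = np.full(4, 0.5)            # uniform superposition; squares sum to 1
    marked = 2                        # candidate singled out by good evidence
    amps[marked] *= -1                # oracle: flip the marked amplitude's sign
    amps = 2 * amps.mean() - amps     # diffusion: inversion about the mean
    print(np.round(amps ** 2, 3))     # -> [0. 0. 1. 0.]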
... Although this work provides a good quantitative parametric formulation for calculating bias, the use of such bias, the effect of considering it, and the advantages of taking it into account are not discussed. The same drawbacks exist in the work of Hoogendoorn et al. [36]. ...
Article
Trust models play an important role in computational environments. One of the main aims of work undertaken in this domain is to provide a model that can better describe the socio-technical nature of computational trust. It has recently been shown that quantum-like formulations in the field of human decision making can better explain the underlying nature of these types of processes. Based on this research, the aim of this paper is to propose a novel model of trust based on quantum probabilities as the underlying mathematics of quantum theory. It is shown that by using this new mathematical framework, we have a powerful mechanism to model the contextuality property of trust. Also, it is hypothesized that many events or evaluations in the context of trust can and should be considered incompatible, which is unique to the noncommutative structure of quantum probabilities. The main contribution of this paper is that, by using the quantum Bayesian inference mechanism for belief updating in the framework of quantum theory, we propose a biased trust inference mechanism. This mechanism allows us to model the negative and positive biases that a trustor may subjectively feel toward a certain trustee candidate. It is shown that by using this bias, we can model and describe the exploration-versus-exploitation problem in the context of trust decision making, recency effects for recently good or bad transactions, the filtering of pessimistic and optimistic recommendations that may result in good-mouthing or bad-mouthing attacks, the attitude of the trustor toward risk and uncertainty in different situations, and the pseudo-transitivity property of trust. Finally, we have conducted several experimental evaluations in order to demonstrate the effectiveness of the proposed model in different scenarios.
... In situations where software agents interact with humans, trust models that are incorporated in these agents may have a completely different purpose: to estimate the trust levels of the human over time, and take that into consideration in their behavior, for example, by providing advice from other trustees that are trusted more. If this is the purpose of the trust model, then the model¹ ... [¹ The work presented in this paper is a significant extension, by more than 40%, of (Hoogendoorn, Jaffry, Maanen, and Treur, 2011).]
Article
Full-text available
Within human trust-related behaviour, literature from the domains of Psychology and the Social Sciences indicates that non-rational behaviour can often be observed. Current trust models typically do not incorporate non-rational elements in the trust formation dynamics. In order to enable agents that interact with humans to have a good estimation of human trust, and to take this into account in their behaviour, trust models that incorporate such human aspects are a necessity. A specific non-rational element in humans is that they are often biased in their behaviour. In this paper, models for human trust dynamics are presented that incorporate human biases. In order to show that they describe human behaviour more accurately, they have been evaluated against empirical data, which shows that the models perform significantly better.
... In a real-world context an important part of a validation process is tuning of the model's parameters to a situation at hand, for example, a person's characteristics. Automated parameter tuning methods are available in the literature (e.g., [52]) and have been successfully applied in AmI applications; see, for example, [10,31,32]. ...
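A minimal version of such tuning, assuming a simple experience-driven trust update, invented data, and squared error as the fit criterion (the cited methods may differ):

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Invented experience and observed-trust traces; gamma is a hypothetical
    # flexibility (learning-rate) parameter being tuned to the person.
    experiences = np.array([0.9, 0.8, 0.2, 0.9, 0.7])
    observed = np.array([0.55, 0.62, 0.50, 0.58, 0.63])

    def simulate(gamma, t0=0.5):
        t, trace = t0, []
        for e in experiences:
            t = t + gamma * (e - t)
            trace.append(t)
        return np.array(trace)

    res = minimize_scalar(lambda g: np.sum((simulate(g) - observed) ** 2),
                          bounds=(0.0, 1.0), method="bounded")
    print(f"tuned gamma = {res.x:.3f}")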
Article
Full-text available
Within agent-based Ambient Intelligence applications agents react to humans based on information obtained by sensoring and their knowledge about human functioning. Appropriate types of reactions depend on the extent to which an agent understands the human and is able to interpret the available information (which is often incomplete, and hence multi-interpretable) in order to create a more complete internal image of the environment, including humans. Such an understanding requires that the agent has knowledge to a certain depth about the human's physiological and mental processes in the form of an explicitly represented model of the causal and dynamic relations describing these processes. In addition, given such a model representation, the agent needs reasoning methods to derive conclusions from the model and interpret the (partial) information available by sensoring. This paper presents the development of a toolbox that can be used by a modeller to design Ambient Intelligence applications. This toolbox contains a number of model-based reasoning methods and approaches to control such reasoning methods. Formal specifications in an executable temporal format are offered, which allows for simulation of reasoning processes and automated verification of the resulting reasoning traces in a dedicated software environment. A number of such simulation experiments and their formal analysis are described. The main contribution of this paper is that the reasoning methods in the toolbox have the possibility to reason using both quantitative and qualitative aspects in combination with a temporal dimension, and the possibility to perform focused reasoning based upon certain heuristic information.
Article
Full-text available
In this paper, we propose a new formulation of computational trust based on quantum decision theory (QDT). By using this new formulation, we can divide the assigned trustworthiness values to objective and subjective parts. First, we create a mapping between the QDT definitions and the trustworthiness constructions. Then, we demonstrate that it is possible for the quantum interference terms to appear in the trust decision making process. By using the interference terms, we can quantify the emotions and subjective preferences of the trustor in various contexts with different amounts of uncertainty and risk. The non-commutative nature of quantum probabilities is a valuable mathematical tool to model the relative nature of trust. In relative trust models, the evaluation of a trustee candidate is not only dependent on the trustee itself, but on the other existing competitors. In other words, the first evaluation is performed in an isolated context whereas the rest of the evaluations are performed in a comparative one. It is shown that a QDT-based model of trust can account for these order effects in the trust decision making process. Finally, based on the principles of risk and uncertainty aversion, interference alternation theorem and interference quarter law, quantitative values are assigned to interference terms. By performing empirical evaluations, we have demonstrated that various scenarios can be better explained by a quantum model of trust rather than the commonly used classical models.
Conference Paper
Full-text available
The existing approach towards agent-based safety risk analysis in Air Traffic Management (ATM) covers hazards that may potentially occur within air traffic operations in two ways. One way is to cover hazards through agent-based model constructs. The second is to cover hazards through bias and uncertainty analysis in combination with sensitivity analysis of the agent-based model. The disadvantage of the latter approach is that it is more limited in capturing potential emergent behaviour that could be caused by unmodelled hazards. The research presented in this paper explores to what extent agent-based model constructs that have been developed for other purposes are capable of modelling more hazards in ATM through the agent-based framework. The focus is on model constructs developed by VU Amsterdam, mainly addressing human factors and interaction between multiple (human and computer) agents within teams. Inspired by a large database of ATM hazards analysed earlier, a number of VU model constructs are identified that have the potential to model remaining hazards. These model constructs are described at a conceptual level and analysed with respect to the extent to which they increase the percentage of modelled hazards in ATM.
Conference Paper
Full-text available
Trust dynamics can be modelled in relation to experiences. Both cognitive and neural models for trust dynamics in relation to experiences are available, but they have not yet been related or compared in detail. This paper presents a comparison between a cognitive and a neural model. As each of the models has its own specific set of parameters, with values that depend on the type of person modelled, such a comparison is nontrivial. In this paper a comparison approach is presented that is based on mutual mirroring of the models in each other. More specifically, for given parameter values set for one model, automated parameter estimation processes determine the most optimal values for the parameters of the other model to show the same behaviour. Roughly speaking, the results are that the models can mirror each other up to an accuracy of around 90%. Keywords: trust dynamics, cognitive, neural, comparison, parameter tuning.
Conference Paper
Full-text available
Computational Trust and Reputation (CTR) systems are essential in electronic commerce to encourage interactions and suppress deceptive behaviours. This paper focuses on comparing two different kinds of approaches to evaluating the trustworthiness of suppliers. One is based on calculating the weighted mean of past results. The second applies basic properties of the dynamics of trust. Different scenarios are investigated, including a more problematic one that results from introducing newcomers during the simulation. Experimental results presented in this paper demonstrate the benefits of engaging properties of the dynamics of trust in CTR systems, as it noticeably improves the process of business partner selection and increases the utility.
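The first kind of approach can be as simple as a recency-weighted mean of past outcomes. The sketch below shows one such estimator; the decay rate lam is an assumed value, not one taken from the paper.

    # Recency-weighted mean of past outcomes as a trustworthiness estimate;
    # lam (the decay rate) is an illustrative assumption.
    def weighted_mean_trust(outcomes, lam=0.8):
        # outcomes: oldest-to-newest results in [0, 1]; lam in (0, 1].
        n = len(outcomes)
        weights = [lam ** (n - 1 - i) for i in range(n)]
        return sum(w * o for w, o in zip(weights, outcomes)) / sum(weights)

    print(weighted_mean_trust([1.0, 1.0, 0.0, 1.0, 0.0]))  # recent failures count more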
Conference Paper
Full-text available
In this paper, the results of a validation experiment for two existing computational trust models describing human trust are reported. One model uses experiences of performance in order to estimate the trust in different trustees. The second model carries the notion of relative trust. The idea of relative trust is that trust in a certain trustee does not depend solely on the experiences with that trustee, but also on trustees that are considered competitors of that trustee. In order to validate the models, parameter adaptation has been used to tailor the models towards human behavior. A comparison between the two models has also been made to see whether the notion of relative trust describes human trust behavior in a more accurate way. The results show that taking trust relativity into account indeed leads to a higher accuracy of the trust model. Finally, a number of assumptions underlying the two models are verified using an automated verification tool.
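A minimal sketch of the relative-trust idea follows, with illustrative parameters rather than the calibrated models from the experiment: absolute trust tracks each trustee's own experiences, while the reported trust is normalized over all competitors.

    import numpy as np

    # Absolute trust follows each trustee's own experiences; the reported
    # (relative) trust is normalized over all competitors, so one trustee's
    # success lowers the others' relative standing. gamma and the
    # normalization are illustrative assumptions.
    def step(trust, experiences, gamma=0.3):
        trust = trust + gamma * (np.asarray(experiences) - trust)
        return trust, trust / trust.sum()

    trust = np.array([0.5, 0.5, 0.5])
    for experiences in [(0.9, 0.4, 0.5), (0.8, 0.3, 0.5)]:
        trust, relative = step(trust, experiences)
    print(np.round(relative, 3))  # trustee 0 gains at the competitors' expense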
Conference Paper
Full-text available
For an information agent to support a human in a personalized way, having a model of the trust the human has in information sources may be essential. As humans differ a lot in their characteristics with respect to trust, a trust model crucially depends on specific personalized values for a number of parameters. This paper contributes an adaptive agent model for trust with parameters that are automatically tuned over time to a specific individual. To obtain this adaptation, four different techniques have been developed. In order to evaluate these techniques, simulations have been performed, and their results were formally verified.
Article
Full-text available
The scientific research in the area of computational mechanisms for trust and reputation in virtual societies is a recent discipline oriented to increasing the reliability and performance of electronic communities. Computer science has moved from the paradigm of isolated machines to the paradigm of networks and distributed computing. Likewise, artificial intelligence is quickly moving from the paradigm of isolated and non-situated intelligence to the paradigm of situated, social and collective intelligence. The new paradigm of so-called intelligent or autonomous agents and Multi-Agent Systems (MAS), together with the spectacular emergence of information society technologies (especially reflected by the popularization of electronic commerce), is responsible for the increasing interest in trust and reputation mechanisms applied to electronic societies. This review offers a panoramic view of current computational trust and reputation models.
Article
A "person-positivity bias" is proposed such that attitude objects are evaluated more favorably the more they resemble individual humans. Because perceived similarity should increase liking, individuals should attract more favorable evaluations than should less personal attitude objects, such as inanimate objects or even aggregated or grouped versions of the same persons. Findings from 11 studies with undergraduate Ss support this view. Individuals were overwhelmingly evaluated favorably. Personal versions of a given attitude object were evaluated more favorably than impersonal versions of it. Individual persons, as wholes, were evaluated more favorably than were their specific attributes. Individuals were evaluated more favorably than were the same individuals in aggregates or groups. Attitudes toward groups were cognitively compartmentalized from attitudes toward individual group members. Perceivers tended to underestimate the positivity of their own and others' attitudes toward individual persons. (38 ref)
Article
Competitiveness in global industries increasingly requires the ability to develop trusting relationships. This requires organizations, and the individuals who comprise them, to be both trustworthy and trusting. An important question is whether societal culture influences the tendency of individuals and organizations to trust. Based largely on Yamagishi's (1994, 1998a, b) theories explaining trust, commitment, and in-group bias in collectivist cultures, this study examines potential differences in levels of trust between individualist and collectivist cultures. Survey data were collected from 1,282 mid-level managers from large banks in Japan, Korea, Hong Kong, Taiwan, China, Malaysia, and the United States. We first study differences in how individuals from individualist and collectivist societies trust in-groups versus out-groups. This provides an important foundation for hypotheses regarding differences in individual propensities to trust and two measures of organizational trust: internal trust (trust within the organization) and external trust (an organization's trust for suppliers, customers, etc.). Findings show higher levels of propensity to trust and of organizational external trust in the United States than in Asia.
Article
In this paper I present an argument that the culture of collectivism which characterizes Japanese society is to be conceived in terms of an equilibrium between socio-relational and cognitive traits, in which people have acquired expectations for generalized reciprocity within, not across, group boundaries. Maintenance of harmony among group members and voluntary cooperation toward group goals, the characteristics of collectivist culture, are often considered to be fundamentally psychological in nature. It is usually considered that members of a collectivist culture like to maintain harmony and cooperate toward group goals, or that "culture" sneaks into the minds of people and drives them to behave in such a manner. According to this view, culture is a fundamentally psychological or subjective matter. This is the view that I want to challenge in this paper.
Article
The global scale and distribution of companies have changed the economy and dynamics of businesses. Web-based collaborations and cross-organizational processes typically require dynamic and context-based interactions between people and services. However, finding the right partner to work on joint tasks or to solve emerging problems in such scenarios is challenging due to the scale and temporary nature of collaborations. Furthermore, actor competencies evolve over time, thus requiring dynamic approaches for their management. Web services and SOA are the ideal technical framework to automate interactions spanning people and services. To support such complex interaction scenarios, we discuss mixed service-oriented systems that are composed of both humans and software services, interacting to perform certain activities. As an example, consider a professional online support community consisting of interactions between human participants and software-based services. We argue that trust between members is essential for successful collaborations. Rather than taking a security perspective, we focus on the notion of social trust in collaborative networks. We present an interpretative rule-based approach that enables humans and services to establish trust based on interactions and experiences, considering their context and subjective perceptions.
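One way to picture such an interpretative rule is as a mapping from an interaction outcome and a context feature to a bounded trust adjustment. The rule below is a made-up example with invented thresholds, not a rule from the paper.

    # A made-up interpretative rule: map an interaction outcome and a simple
    # context feature (response time) to a bounded trust adjustment. All
    # thresholds and step sizes are invented for illustration.
    def trust_rule(success, response_time_s, trust):
        if success and response_time_s < 2.0:
            return min(1.0, trust + 0.10)   # fast success: strong positive evidence
        if success:
            return min(1.0, trust + 0.05)   # slow success: weaker evidence
        return max(0.0, trust - 0.15)       # failure: trust drops faster than it grows

    print(trust_rule(True, 1.2, 0.6))   # -> 0.7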
Conference Paper
In this paper, we propose a new statistical predictive model of trust based on the well-known methodologies of Markov models and local learning. Repeatedly appearing similar subsequences in the trust time series, constructed from the history of direct interactions or from recommended trust values collected from intermediaries over a sequence of time slots, are clustered into regimes. Each regime is learnt by a local model called a local expert. The time series is then modeled as a coarse-grained transition network of regimes using a Markov process, and the value of trust at any future time is predicted by selecting the local expert with the help of the Markov matrix.
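A simplified sketch of this pipeline, assuming each regime is summarized by its mean trust value rather than by a fitted local expert: discretize the series into regimes, count regime transitions into a smoothed Markov matrix, and predict the next value as the expected regime centroid.

    import numpy as np

    # Discretize a trust series into K regimes, count regime transitions into
    # a (Laplace-smoothed) Markov matrix, and predict the next value as the
    # expected regime centroid. Data and K are invented for illustration.
    series = np.array([0.2, 0.3, 0.7, 0.8, 0.75, 0.3, 0.25, 0.8, 0.85, 0.3])
    K = 2
    edges = np.quantile(series, np.linspace(0, 1, K + 1))
    states = np.clip(np.digitize(series, edges[1:-1]), 0, K - 1)

    T = np.ones((K, K))                       # smoothed transition counts
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    T /= T.sum(axis=1, keepdims=True)

    centroids = np.array([series[states == k].mean() for k in range(K)])
    print(f"predicted next trust: {T[states[-1]] @ centroids:.3f}")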