
Trust models play an important role in computational environments. One of the main aims of work in this domain is to provide a model that better describes the socio-technical nature of computational trust. It has recently been shown that quantum-like formulations in the field of human decision making can better explain the underlying nature of these processes. Building on this research, the aim of this paper is to propose a novel model of trust based on quantum probabilities, the underlying mathematics of quantum theory. It is shown that this mathematical framework provides a powerful mechanism for modeling the contextuality property of trust. It is also hypothesized that many events or evaluations in the context of trust can, and should, be considered incompatible, a notion unique to the noncommutative structure of quantum probabilities. The main contribution of this paper is a biased trust inference mechanism built on the quantum Bayesian inference mechanism for belief updating in the framework of quantum theory. This mechanism allows us to model the negative and positive biases that a trustor may subjectively feel toward a certain trustee candidate. It is shown that this bias can model and describe the exploration-versus-exploitation problem in trust decision making, recency effects for recently good or bad transactions, the filtering of pessimistic and optimistic recommendations that may result in good-mouthing or bad-mouthing attacks, the trustor's attitude toward risk and uncertainty in different situations, and the pseudo-transitivity property of trust. Finally, we have conducted several experimental evaluations to demonstrate the effectiveness of the proposed model in different scenarios.
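The projection-based belief update behind such a quantum Bayesian mechanism can be sketched numerically. The state representation, function names, and the evidence-angle encoding below are illustrative assumptions for a two-dimensional trust space, not the paper's actual implementation:

```python
import numpy as np

# Minimal sketch: the trust state is a unit vector in a 2-D Hilbert space
# spanned by |distrust> = (1, 0) and |trust> = (0, 1).  All names here are
# hypothetical illustrations of the general idea, not the paper's method.

def trust_probability(state):
    """Born rule: probability of observing 'trust' on evaluation."""
    return abs(state[1]) ** 2

def quantum_bayes_update(state, evidence_angle):
    """Project the state onto the evidence direction and renormalise.

    evidence_angle encodes how strongly a new transaction points toward
    trust (0 = pure distrust evidence, pi/2 = pure trust evidence).
    """
    e = np.array([np.cos(evidence_angle), np.sin(evidence_angle)])
    projector = np.outer(e, e)                 # rank-1 projector |e><e|
    new_state = projector @ state
    return new_state / np.linalg.norm(new_state)

# Start maximally uncertain, then absorb a strongly positive transaction.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
psi = quantum_bayes_update(psi, 0.45 * np.pi)
print(round(trust_probability(psi), 3))        # → 0.976
```

A negative or positive bias could then be expressed by tilting the measurement basis before the update, which is one way to read the "biased inference" idea in the abstract.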


... One of the starting points for considering computers and trust in the same context was Marsh's Ph.D. thesis (Marsh 1994), which is deemed to be the first publication merging these two concepts. Research regarding (computational) trust and reputation has not only been conducted with regard to supply chains, e-commerce, or information systems, but also spans a wide variety of academic disciplines, including psychology, economics, and sensor networks (Ashtiani and Azgomi 2014; Brinkhoff et al. 2015; Grandison and Sloman 2000; Mui et al. 2002a, 2002b; Pinyol and Sabater-Mir 2013b; Sabater and Sierra 2005). Many practical approaches remain in intensive use in high-profile applications. ...

... Looking at the results that have been discovered and selected for this review (see Table 6), it becomes apparent that the presented models have been developed in various areas of academia. These range from business-related topics, like e-commerce (Majd and Balakrishnan 2015; Ransi and Kobti 2014) and supply chains, over wireless communication systems, to general approaches not explicitly aimed at a certain domain (Ashtiani and Azgomi 2014; Yu et al. 2014a). As a consequence, the source journals are also representatives of various research areas. ...

... It can be learned that most presented models and mechanisms are not purebred but contain associations to both trust and reputation. A common notion in such papers is the use of reputation to establish trust (Ashtiani and Azgomi 2014), which can be found in different formulations and degrees of interconnection. Considering the used Information Sources, one can observe that most models (26 out of 40 (65%)) use multiple sources, with only 11 models being restricted to a single source (27.5%). ...

Over recent years, computational trust and reputation models have become an invaluable method for improving computer-computer and human-computer interaction. As a result, a considerable amount of research has been published trying to solve open problems and improve existing models. This survey brings additional structure to the research already conducted on both topics. After recapitulating the major underlying concepts, a new integrated review and analysis scheme for reputation and trust models is put forward. Using highly recognized review papers in this domain as a basis, this article also introduces additional evaluation metrics to account for characteristics so far unstudied. A subsequent application of the new review scheme to 40 recent top publications in this scientific field revealed interesting insights. While the area of computational trust and reputation models is still a very active research branch, the analysis carried out here shows that some aspects have already started to converge, whereas others are still subject to vivid discussion.

... These new findings were the primary motivations for the proposition of a quantum-like model of computational trust in recent works (Ashtiani et al. 2014;Ashtiani and Abdollahi Azgomi 2016). In these works, instead of considering trust or distrust with different degrees, the trust state was considered as a superposition of trust and distrust. ...

... In different trust models, this metric can be a number, a label, or a range of values. 5. Trust operations: the main operations performed in a trust model are (a) trust bootstrapping, the challenging task of assigning initial values to the entities involved in a trust-based relationship; (b) trust propagation from one entity to another, most commonly constructed by assuming a transitivity property for trust (i.e., when A trusts B and B trusts C, then A trusts C), which was the main focus of our previous work (Ashtiani et al. 2014; Ashtiani and Abdollahi Azgomi 2016); (c) trust evolution (i.e., updating), which should occur when new evidence or a recommendation is received by the trustor; the new information may increase or decrease the trustworthiness value assumed for a candidate in the mind of the trustor, and this operation is the main focus of this paper; and (d) aggregation of the evidence gathered from different sources, which may even contradict each other. ...
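As a reference point, the four classical operations listed in this passage can be sketched in a few lines. The concrete rules (product propagation, exponential smoothing, averaging) are common textbook choices, not those of any particular model:

```python
# Illustrative sketch of the four trust operations (a)-(d); all function
# names and update rules are conventional examples, not a specific model.

def bootstrap(entities, initial=0.5):
    """(a) Assign a neutral initial trust value to every entity."""
    return {e: initial for e in entities}

def propagate(t_ab, t_bc):
    """(b) Pseudo-transitive propagation A->C via B, commonly the product."""
    return t_ab * t_bc

def evolve(old, outcome, rate=0.3):
    """(c) Update trust after a new transaction outcome in [0, 1]."""
    return (1 - rate) * old + rate * outcome

def aggregate(values):
    """(d) Combine evidence from several (possibly conflicting) sources."""
    return sum(values) / len(values)

trust = bootstrap(["A", "B", "C"])
trust["C"] = propagate(0.9, 0.8)              # A trusts B (0.9), B trusts C (0.8)
trust["C"] = evolve(trust["C"], outcome=1.0)  # a successful transaction with C
print(round(trust["C"], 3))                   # → 0.804
```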

... In this section, first, we provide an overview of the quantum-like model of computational trust we have introduced in our previous research (Ashtiani et al. 2014; Ashtiani and Abdollahi Azgomi 2016). When a trust state is defined in the form of a superposition state such as |ψ⟩ = α₀|distrust⟩ + α₁|trust⟩, the trustor is uncertain about trusting or distrusting the trustee candidate. ...
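A minimal numerical sketch of such a superposition state |ψ⟩ = α₀|distrust⟩ + α₁|trust⟩ and its collapse when the candidate is evaluated (the amplitudes and seed below are arbitrary illustrative values):

```python
import numpy as np

# Sketch of a superposition trust state collapsing under evaluation.
# Purely illustrative; amplitudes are arbitrary.

rng = np.random.default_rng(7)

a0, a1 = np.sqrt(0.2), np.sqrt(0.8)      # |a0|^2 = 0.2 distrust, |a1|^2 = 0.8 trust

def measure():
    """Collapse the superposition: 'trust' with probability |a1|^2."""
    return "trust" if rng.random() < a1 ** 2 else "distrust"

outcome = measure()
# After collapse, the state is the basis vector for `outcome`; until new
# evidence arrives, repeated queries deterministically return `outcome`,
# unlike a classical probabilistic model that re-samples on every query.
print(outcome in ("trust", "distrust"))  # → True
```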

Trust models play an important role in decision support systems and computational environments in general. The common goal of the existing trust models is to provide a representation as close as possible to the social phenomenon of trust in computational domains. In recent years, the field of quantum decision making has developed significantly. Researchers have shown that the irrationalities, subjective biases, and common paradoxes of human decision making can be better described by a quantum theoretic model. These decision and cognitive theoretic formulations that use the mathematical toolbox of quantum theory (i.e., quantum probabilities) are referred to by researchers as quantum-like modeling approaches. Based on the general structure of a quantum-like computational trust model, in this paper, we demonstrate that a quantum-like model of trust can define a powerful and flexible trust evolution (i.e., updating) mechanism. After the introduction of the general scheme of the proposed model, the main focus of the paper is the proposition of an amplitude amplification-based approach to trust evolution. Through four different experimental evaluations, it is shown that the proposed trust evolution algorithm, inspired by Grover's quantum search algorithm, is an effective and accurate mechanism for trust updating compared to other commonly used classical approaches.
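To make the amplitude-amplification idea concrete, here is a toy Grover-style iteration that boosts the amplitude of one "marked" state. This is the generic search primitive the abstract says inspired the algorithm, not the paper's actual trust-evolution mechanism:

```python
import numpy as np

# Toy Grover iteration on an 8-dimensional real amplitude vector:
# oracle phase flip on the marked index, then inversion about the mean.
# Illustrative only; the paper's algorithm is more elaborate.

def grover_iterate(amps, target):
    """One Grover iteration over a real amplitude vector."""
    amps = amps.copy()
    amps[target] *= -1.0            # oracle marks the target state
    mean = amps.mean()
    return 2 * mean - amps          # reflect every amplitude about the mean

n = 8
amps = np.full(n, 1 / np.sqrt(n))   # uniform superposition over 8 candidates
for _ in range(2):                  # ~(pi/4)*sqrt(8) ≈ 2 iterations
    amps = grover_iterate(amps, target=3)

probs = amps ** 2
print(round(probs[3], 3))           # → 0.945, up from the initial 0.125
```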

... These new findings were the primary motivation for us to propose a quantum-like model of trust. In a recent work, we have introduced the benefits of the quantum Bayesian inference rule for modeling trust inference compared to its classical counterpart (Ashtiani and Azgomi 2014). ...

... In (Ashtiani and Azgomi 2014), we demonstrated the flexibility of quantum mathematics for modeling the context and bias in computational trust models. We discussed that by presenting the trust state as a vector in various bases, a quantum-like model of trust can provide a powerful mechanism to represent and change the context. ...

... In most probabilistic trust models, every time we ask about the trustworthiness of an entity, we get a probabilistic outcome based on the probability assigned to that candidate. But in quantum systems, upon measuring the system (i.e., evaluating the trustee candidate), the superposition corresponding to a trust state collapses to a basis state, and until a further event occurs, the trust degree remains a deterministic value (Ashtiani and Azgomi 2014). 3. The non-commutativity property of the Hilbert space-based mathematical foundation of quantum theory is a great tool for modeling the relativity and order effects in the context of trust. ...

In this paper, we propose a new formulation of computational trust based on quantum decision theory (QDT). Using this new formulation, we can divide the assigned trustworthiness values into objective and subjective parts. First, we create a mapping between the QDT definitions and the trustworthiness constructions. Then, we demonstrate that it is possible for quantum interference terms to appear in the trust decision-making process. Using the interference terms, we can quantify the emotions and subjective preferences of the trustor in various contexts with different amounts of uncertainty and risk. The non-commutative nature of quantum probabilities is a valuable mathematical tool for modeling the relative nature of trust. In relative trust models, the evaluation of a trustee candidate depends not only on the trustee itself, but also on the other existing competitors. In other words, the first evaluation is performed in an isolated context whereas the rest of the evaluations are performed in a comparative one. It is shown that a QDT-based model of trust can account for these order effects in the trust decision-making process. Finally, based on the principles of risk and uncertainty aversion, the interference alternation theorem, and the interference quarter law, quantitative values are assigned to the interference terms. Through empirical evaluations, we have demonstrated that various scenarios can be better explained by a quantum model of trust than by the commonly used classical models.
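The QDT decomposition described here, an objective utility factor plus a subjective interference (attraction) term, can be sketched as follows. The quarter law fixes the typical magnitude |q| = 0.25 and the alternation theorem makes the interference terms sum to zero; everything else (names, clipping, the example numbers) is an illustrative assumption:

```python
# Sketch of a QDT-style probability: p(prospect) = utility factor +
# attraction (interference) term.  Quarter law: |q| ≈ 0.25; alternation:
# the q terms sum to zero.  Names and numbers are illustrative.

QUARTER = 0.25

def qdt_probabilities(utility_factors, trustor_prefers):
    """utility_factors must sum to 1; trustor_prefers marks the prospect
    the trustor is subjectively biased toward."""
    n = len(utility_factors)
    q = [QUARTER if i == trustor_prefers else -QUARTER / (n - 1)
         for i in range(n)]                      # alternation: sum(q) == 0
    p = [max(0.0, min(1.0, f + qi)) for f, qi in zip(utility_factors, q)]
    total = sum(p)
    return [x / total for x in p]                # renormalise after clipping

# Two trustee candidates, objectively near-equal, but the trustor feels a
# positive subjective bias toward candidate 0:
print([round(x, 3) for x in qdt_probabilities([0.5, 0.5], 0)])  # → [0.75, 0.25]
```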

... Unitary transformations correspond to an individual's shift in perspective and relate one point of view to another. Recently, using this mechanism, Ashtiani and Azgomi introduced a quantum-like model of computational trust (Ashtiani and Azgomi, 2014). Trust is one of the most widely used concepts in various areas of the computational domain, such as computer security, recommender systems, and so on. ...

... Quantum Bayesian inference, as a generalized version of the classical Bayesian inference mechanism, can play a key role in modeling more complex inferences in computational domains. For example, we have shown that quantum Bayesian inference can be used to model a biased inference mechanism in the context of computational trust (Ashtiani and Azgomi, 2014). Of course, this is only preliminary research, and developments in a wide range of directions can be envisioned for these mechanisms. ...

There has always been steady interest among researchers from various fields in how humans make decisions. Based on this interest, many approaches, such as rational choice theory and the expected utility hypothesis, have been proposed. Although these approaches provide a suitable ground for modeling the human decision-making process, they are unable to explain the corresponding irrationalities and existing paradoxes and fallacies. Recently, a new formulation of decision theory has been proposed that can correctly describe these paradoxes and possibly provide a unified and general theory of decision making. This new formulation is founded on the application of the mathematical structure of quantum theory to the fields of human decision making and cognition. It is shown that by applying these quantum-like models, one can better describe the uncertainty, ambiguity, emotions, and risks involved in the human decision-making process. Even in computational environments, an agent that follows the correct patterns of human decision making will function better in its role as a proxy for a real user. In this paper, we present a comprehensive survey of this research and the corresponding recent developments. Finally, the benefits of leveraging quantum-like modeling approaches in computational domains, and the challenges and limitations currently facing the field, are discussed.

... Actually, it is often more important to predict distrust for a trustor toward a trustee [18,19]. Studies in [20,21] show that trust and distrust coexist in the human brain. Sometimes we have no or little contact with the party to be predicted, so simply classifying the party as trustful or distrustful is biased. ...

Trust assessment is of great significance to related issues such as privacy protection and rumor transmission in online social networks. At present, there are mainly two types of trust models: discrete and continuous. Little work has been done on considering distrust and trust at the same time, especially on the propagation mechanism of distrust. To this end, this paper proposes a multilevel trust model and a corresponding trust evaluation method based on a reliability model called multivalued decision diagrams (MDDs). The proposed trust model combines characteristics of both discrete and continuous trust and considers the dynamic changing mechanism of multilevel trust. The propagation of distrust and conflicts of opinions are also handled. Experimental results show that the proposed method outperforms other existing methods.

... We have found that the above two types of behavior go back to the application of two operators, risk aversion and ambiguity aversion: their sequential application leads to different results depending on which is applied first, a property called non-commutativity. This is related to the uncertainty principle in quantum mechanics; therefore, we identify information seeking as another link to quantum decision theory (Wittek et al., 2013; Ashtiani and Azgomi, 2014; Aerts and Sozzo, 2016). ...

... At the same time, since [Dominich, 2001] proposed to treat precision and recall as complementary operators regulating the surface of effectiveness in information retrieval, and [van Rijsbergen, 2004] argued that relevance is an operator on Hilbert space and as such part of the quantum measurement process, our insight was not totally unexpected. Rather, connected to the uncertainty principle, we see noncommuting measurements surface also in information seeking, another link to quantum decision theory [Wittek et al., 2013a; Ashtiani & Azgomi, 2014; Aerts & Sozzo, 2016]. ...

The current deliverable summarizes the work conducted within task T4.5 of WP4, presenting our proposed approaches for contextualized content interpretation, aimed at gaining insightful contextualized views on content semantics. This is achieved through the adoption of appropriate context-aware semantic models developed within the project, and via enriching the semantic descriptions with background knowledge, deriving thus higher level contextualised content interpretations that are closer to human perception and appraisal needs.
More specifically, the main contributions of the deliverable are the following:
- A theoretical framework using physics as a metaphor to develop different models of evolving semantic content.
- A set of proof-of-concept models for semantic drifts due to field dynamics, introducing two methods to identify quantum-like (QL) patterns in evolving information searching behavior, and a QL model akin to particle-wave duality for semantic content classification.
- Integration of two specific tools, Somoclu for drift detection and Ncpol2spda for entanglement detection.
- An “energetic” hypothesis accounting for contextualized evolving semantic structures over time.
- A proposed semantic interpretation framework, integrating (a) an ontological inference scheme based on Description Logics (DL), (b) a rule-based reasoning layer built on SPARQL Inference Notation (SPIN), (c) an uncertainty management framework based on non-monotonic logics.
- A novel scheme for contextualized reasoning on semantic drift, based on LRM dependencies and OWL’s punning mechanism.
- An implementation of SPIN rules for policy and ecosystem change management, with the adoption of LRM preconditions and impacts. Specific use case scenarios demonstrate the context under development and the efficiency of the approach.
- Respective open-source implementations and experimental results that validate all the above.
All these contributions are tightly interlinked with the other PERICLES work packages: WP2 supplies the use cases and sample datasets for validating our proposed approaches, WP3 provides the models (LRM and Digital Ecosystem models) that form the basis for our semantic representations of content and context, WP5 provides the practical application of the technologies developed to preservation processes, while the tools and algorithms presented in this deliverable can be deployed in combination with test scenarios, which will be part of the WP6 test beds.

... At the same time, since Ref. [49] proposed to treat precision and recall as complementary operators regulating the surface of effectiveness in information retrieval, and Ref. [50] argued that relevance is an operator on Hilbert space and as such part of the quantum measurement process, our insight was not totally unexpected. Rather, connected to the uncertainty principle, we see noncommuting measurements surface also in information seeking, another link to quantum decision theory [51,52,53]. ...

Information foraging connects optimal foraging theory in ecology with how humans search for information. The theory suggests that, following an information scent, the information seeker must optimize the tradeoff between exploration by repeated steps in the search space vs. exploitation, using the resources encountered. We conjecture that this tradeoff characterizes how a user deals with uncertainty and its two aspects, risk and ambiguity in economic theory. Risk is related to the perceived quality of the actually visited patch of information, and can be reduced by exploiting and understanding the patch to a better extent. Ambiguity, on the other hand, is the opportunity cost of having higher quality patches elsewhere in the search space. The aforementioned tradeoff depends on many attributes, including traits of the user: at the two extreme ends of the spectrum, analytic and wholistic searchers employ entirely different strategies. The former type focuses on exploitation first, interspersed with bouts of exploration, whereas the latter type prefers to explore the search space first and consume later. Based on an eye-tracking study of experts' interactions with novel search interfaces in the biomedical domain, we demonstrate that perceived risk shifts the balance between exploration and exploitation in either type of users, tilting it against vs. in favour of ambiguity minimization. Since the pattern of behaviour in information foraging is quintessentially sequential, risk and ambiguity minimization cannot happen simultaneously, leading to a fundamental limit on how good such a tradeoff can be. This in turn connects information seeking with the emergent field of quantum decision theory.

There are few trust models capable of incorporating the co-existence of trust and distrust as distinct concepts. In this regard, most existing trust models implicitly use distrust parameters to refine and calculate trust values. However, recent studies have indicated that trust and distrust are two distinct but co-existing concepts. In other words, although trust and distrust are constructed from different characteristics, they can be used together in decision-making and recommendation processes. In this paper, we present a trust-distrust model for social networks considering subjective and objective characteristics of trust and distrust simultaneously. Competence, honesty, satisfaction, similarity, motivation, availability, tendency to be trusted, the existence of a long-term connection/friendship, and centrality are the trustworthiness characteristics covered by the model. Surprisal, dishonesty, dissatisfaction, conflict degree, account lifetime, and sudden changes in the number of friends, likes, and comments are the distrust characteristics considered by the model. The proposed model takes into account the uncertainty, sharpness, and vagueness of the beliefs by using subjective logic. The results of the conducted evaluations demonstrate that the proposed model is highly accurate in the decision-making process and achieves 90% accuracy in calculating trust and distrust. We have also compared the results with other similar approaches, over which the proposed model showed a 34% improvement.
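Since the model is built on subjective logic, a minimal sketch of the standard subjective-logic opinion (Jøsang's formulation) that such trust-distrust models typically start from may help; the evidence counts and base rate below are illustrative:

```python
# Standard subjective-logic opinion: belief + disbelief + uncertainty = 1,
# derived from r positive and s negative evidence observations.
# Example numbers are illustrative.

def opinion(r, s, base_rate=0.5):
    """Return (belief, disbelief, uncertainty, expected_trust)."""
    denom = r + s + 2                  # the constant 2 encodes prior uncertainty
    b, d, u = r / denom, s / denom, 2 / denom
    return b, d, u, b + base_rate * u  # expectation projects uncertainty

b, d, u, e = opinion(r=8, s=2)         # 8 good, 2 bad interactions
print(round(e, 3))                     # → 0.75
```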

Transitivity in trust is very often considered as a quite simple property, trivially inferable from the classical transitivity defined in mathematics, logic, or grammar. In fact the complexity of the trust notion suggests evaluating the relationships with the transitivity in a more adequate way. In this paper, starting from a socio-cognitive model of trust, we analyze the different aspects and conceptual frameworks involved in this relation and show how different interpretations of these concepts produce different solutions and definitions of trust transitivity.

A forager in a patchy environment faces two types of uncertainty: ambiguity regarding the quality of the current patch and risk associated with the background opportunities. We argue that the order in which the forager deals with these uncertainties has an impact on the decision whether to stay at the current patch. The order effect is formalised with a context-dependent quantum probabilistic framework. Using Heisenberg’s uncertainty principle, we demonstrate the two types of uncertainty cannot be simultaneously minimised, hence putting a formal limit on rationality in decision making. We show the applicability of the contextual decision function with agent-based modelling. The simulations reveal order-dependence. Given that foraging is a universal pattern that goes beyond animal behaviour, the findings help understand similar phenomena in other fields.
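The order effect formalised in this abstract reduces to the fact that non-commuting measurement operators give different sequential probabilities depending on their order. A small sketch with two rank-1 projectors on a qubit (all angles are arbitrary illustrative choices, not the paper's parameters):

```python
import numpy as np

# Two non-commuting projectors ("risk resolved" vs. "ambiguity resolved")
# applied to the same state in both orders give different probabilities,
# the quantum signature of the order effect.  Angles are illustrative.

def projector(theta):
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)              # rank-1 projector |v><v|

P_risk = projector(0.0)                # "risk" measurement direction
P_amb = projector(np.pi / 5)           # "ambiguity" direction, tilted

psi = np.array([np.cos(np.pi / 7), np.sin(np.pi / 7)])  # initial state

p_risk_then_amb = np.linalg.norm(P_amb @ P_risk @ psi) ** 2
p_amb_then_risk = np.linalg.norm(P_risk @ P_amb @ psi) ** 2

print(round(p_risk_then_amb, 3), round(p_amb_then_risk, 3))  # → 0.531 0.634
```

Because the projectors do not commute, no ordering minimises both uncertainties at once, which is the formal limit on rationality the abstract refers to.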

Preface; Part I. Physics Concepts in Social Science? A Discussion: 1. Classical, statistical and quantum mechanics: all in one; 2. Econophysics: statistical physics and social science; 3. Quantum social science: a non-mathematical motivation; Part II. Mathematics and Physics Preliminaries: 4. Vector calculus and other mathematical preliminaries; 5. Basic elements of quantum mechanics; 6. Basic elements of Bohmian mechanics; Part III. Quantum Probabilistic Effects in Psychology: Basic Questions and Answers: 7. A brief overview; 8. Interference effects in psychology - an introduction; 9. A quantum-like model of decision making; Part IV. Other Quantum Probabilistic Effects in Economics, Finance and Brain Sciences: 10. Financial/economic theory in crisis; 11. Bohmian mechanics in finance and economics; 12. The Bohm-Vigier Model and path simulation; 13. Other applications to economic/financial theory; 14. The neurophysiological sources of quantum-like processing in the brain; Conclusion; Glossary; Index.

In cognitive psychology, experiments on games have been reported demonstrating that real players do not use the "rational strategy" provided by classical game theory and based on the notion of the Nash equilibrium. This psychological phenomenon is called the disjunction effect. Recently, we proposed a model of decision making which can explain this effect (the "irrationality" of players) (Asano et al., 2010, 2011). Our model is based on the mathematical formalism of quantum mechanics, because the psychological fluctuations inducing the irrationality are formally represented as quantum fluctuations (Asano et al., 2011) [55]. In this paper, we reconsider the process of quantum-like decision making more closely and redefine it as a well-defined quantum dynamics by using the concept of a lifting channel, an important concept in quantum information theory. We also present numerical simulations of this quantum-like mental dynamics. It is non-Markovian by nature. Stabilization to the steady-state solution (determining subjective probabilities for decision making) is based on the collective effect of mental fluctuations collected in the working memory of the decision maker.

Exploration and exploitation have emerged as the twin concepts underpinning organizational adaptation research, yet some central issues related to them remain ambiguous. We address four related questions here: What do exploration and exploitation mean? Are they two ends of a continuum or orthogonal to each other? How should organizations achieve balance between exploration and exploitation: via ambidexterity or punctuated equilibrium? Finally, must all organizations strive for a balance, or is specialization in exploitation or exploration sometimes sufficient for long-run success? We summarize the contributions of the work in this special research forum and highlight important directions for future research.

In a recent paper, Michael Friedman and Hilary Putnam argued that the Lüders rule is ad hoc from the point of view of the Copenhagen interpretation but that it receives a natural explanation within realist quantum logic as a probability conditionalization rule. Geoffrey Hellman maintains that quantum logic cannot give a non-circular explanation of the rule, while Jeffrey Bub argues that the rule is not ad hoc within the Copenhagen interpretation. As I see it, all four are wrong. Given that there is to be a projection postulate, there are at least two natural arguments which the Copenhagen advocate can offer on behalf of the Lüders rule, contrary to Friedman and Putnam. However, the argument which Bub offers is not a good one. At the same time, contrary to Hellman, quantum logic really does provide an explanation of the Lüders rule, and one which is superior to that of the Copenhagen account, since it provides an understanding of why there should be a projection postulate at all.
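As a probability-conditionalization rule, the Lüders rule maps a state ρ and projector P to PρP / tr(PρP). A small numerical illustration with a mixed qubit state (the particular state and projector are arbitrary examples):

```python
import numpy as np

# Lüders conditionalization: rho -> P rho P / tr(P rho P), the quantum
# analogue of Bayesian conditioning on the outcome associated with P.
# The example state and projector are arbitrary illustrations.

def luders_update(rho, P):
    """Condition the state rho on the outcome with projector P."""
    numerator = P @ rho @ P
    prob = np.trace(numerator).real      # probability of that outcome
    return numerator / prob, prob

rho = np.diag([0.7, 0.3]).astype(complex)        # a mixed qubit state
plus = np.array([1, 1]) / np.sqrt(2)             # |+> = (|0> + |1>)/sqrt(2)
P = np.outer(plus, plus).astype(complex)         # projector onto |+>

rho_post, prob = luders_update(rho, P)
print(round(prob, 3), round(np.trace(rho_post).real, 3))  # → 0.5 1.0
```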

We propose a new theoretical framework for understanding simultaneous trust and distrust within relationships, grounded in assumptions of multidimensionality and the inherent tensions of relationships, and we separate this research from prior work grounded in assumptions of unidimensionality and balance. Drawing foundational support for this new framework from recent research on simultaneous positive and negative sentiments and ambivalence, we explore the theoretical and practical significance of the framework for future work on trust and distrust relationships within organizations.

Many models of trust consider the trust an agent has in another agent (the trustee) to be the result of experiences with that specific agent in combination with certain personality attributes. In the case of multiple trustees, however, there may be dependencies between the trust levels in different trustees. In this paper, two alternatives are described to model such dependencies: (1) development of a new trust model which incorporates dependencies explicitly, and (2) an extension of existing trust models that is able to express these interdependencies using a translation mechanism from objective experiences to subjective ones. For the latter, placing the interdependencies in the experiences enables the reuse of existing trust models that are typically based upon certain experiences over time as input. Simulation runs are performed using the two approaches, showing that both are able to generate realistic patterns of interdependent trust values. Keywords: trust modeling, interdependence, trust dynamics.

Assumptions behind optimal foraging theory are: 1) an individual's contribution to the next generation (its fitness) depends on behaviour during foraging; 2) there should be a heritable component of foraging behaviour (the actual innate foraging responses or the rules by which such responses are learned), and the proportion of individuals in a population foraging in ways that enhance fitness will tend to increase over time; 3) the relationship between foraging behaviour and fitness is known; 4) evolution of foraging behaviour is not prevented by genetic constraints; 5) such evolution is subject to 'functional' constraints (e.g. related to the animal's morphology); and 6) foraging behaviour evolves more rapidly than the rate at which relevant environmental conditions change. The review considers recent theoretical and empirical developments dealing with the behaviour of animals while they are foraging (but ignoring the timing of and the amount of time allocated to such behaviour). Ideas considered include risk aversion and risk proneness; optimal diet; optimal patch choice; optimal patch departure rules; optimal movements; and optimal central place foraging. Means of evaluating optimal foraging theory are discussed. -P.J.Jarvis

Purpose
To elaborate a theory for modeling concepts that incorporates how a context influences the typicality of a single exemplar and the applicability of a single property of a concept. To investigate the structure of the sets of contexts and properties.
Design/methodology/approach
The effect of context on the typicality of an exemplar and the applicability of a property is accounted for by introducing the notion of “state of a concept”, and making use of the state‐context‐property formalism (SCOP), a generalization of the quantum formalism, whose basic notions are states, contexts and properties.
Findings
The paper proves that the set of contexts and the set of properties of a concept are complete orthocomplemented lattices, i.e. sets with a partial order relation such that for each subset there exists a greatest lower bound and a least upper bound, and such that for each element there exists an orthocomplement. This structure describes the "and", "or", and "not" operations for contexts and properties, respectively. It shows that the context lattice as well as the property lattice are non-classical, i.e. quantum-like, lattices.
Originality/value
Although the effect of context on concepts is widely acknowledged, formal mathematical structures of theories that incorporate this effect have not been successful. The study of this formal structure is a preparation for the elaboration of a theory of concepts that allows the description of the combination of concepts.

The expected utility hypothesis is one of the building blocks of classical economic theory and is founded on Savage's Sure-Thing Principle. It has been put forward, e.g. by situations such as the Allais and Ellsberg paradoxes, that real-life situations can violate Savage's Sure-Thing Principle and hence also expected utility. We analyze how this violation is connected to the presence of the 'disjunction effect' of decision theory and use our earlier study of this effect in concept theory to put forward an explanation of the violation of Savage's Sure-Thing Principle, namely the presence of 'quantum conceptual thought' next to 'classical logical thought' within a double-layer structure of human thought during the decision process. Quantum conceptual thought can be modeled mathematically by the quantum mechanical formalism, which we illustrate by modeling the Hawaii problem situation, a well-known example of the disjunction effect, generated by the entire conceptual landscape surrounding the decision situation.

Keywords: expected utility, disjunction effect, quantum modeling, quantum conceptual thought, ambiguity aversion, concept combinations

We contest a reductive view of trust, quite diffused in economics and in studies influenced by the Game-Theory framework: the idea that trust necessarily has to do with contexts requiring “reciprocation”, or that trust is trust in the other's reciprocation. A multi-layer cognitive model of trust is proposed. Trust is conceived not only as an attitude towards the other, implying different kinds of beliefs (evaluations, expectations, beliefs about the other's motives, etc.), but also as a willingness to rely on others that makes us dependent on and vulnerable to them, as well as a concrete act of reliance based on this. We do not necessarily trust people because they will be willing to reciprocate, and we do not necessarily reciprocate for the sake of reciprocation. Trust (even “genuine” trust) is based on a variety of motivations ascribed to others that make the adoption of our needs and goals prevail: from “altruism” to “self-interest”, from reciprocation to norms or to affective reasons.

We present a general theory of quantum information processing devices, that can be applied to human decision makers, to atomic multimode registers, or to molecular high-spin registers. Our quantum decision theory is a generalization of the quantum theory of measurement, endowed with an action ring, a prospect lattice and a probability operator measure. The algebra of probability operators plays the role of the algebra of local observables. Because of the composite nature of prospects and of the entangling properties of the probability operators, quantum interference terms appear, which make actions noncommutative and the prospect probabilities nonadditive. The theory provides the basis for explaining a variety of paradoxes typical of the application of classical utility theory to real human decision making. The principal advantage of our approach is that it is formulated as a self-consistent mathematical theory, which allows us to explain not just one effect but actually all known paradoxes in human decision making. Being general, the approach can serve as a tool for characterizing quantum information processing by means of atomic, molecular, and condensed-matter systems.

The broader scope of our investigations is the search for the way in which concepts and their combinations carry and influence meaning and what this implies for human thought. More specifically, we examine the use of the mathematical formalism of quantum mechanics as a modeling instrument and propose a general mathematical modeling scheme for the combinations of concepts. We point out that quantum mechanical principles, such as superposition and interference, are at the origin of specific effects in cognition related to concept combinations, such as the guppy effect and the overextension and underextension of membership weights of items. We work out a concrete quantum mechanical model for a large set of experimental data of membership weights with overextension and underextension of items with respect to the conjunction and disjunction of pairs of concepts, and show that no classical model is possible for these data. We put forward an explanation by linking the presence of quantum aspects that model concept combinations to the basic process of concept formation. We investigate the implications of our quantum modeling scheme for the structure of human thought, and show the presence of a two-layer structure consisting of a classical logical layer and a quantum conceptual layer. We consider connections between our findings and phenomena such as the disjunction effect and the conjunction fallacy in decision theory, violations of the sure thing principle, and the Allais and Ellsberg paradoxes in economics.

In order for personal assistant agents in an ambient intelligence context to provide good recommendations, or pro-actively support humans in task allocation, a good model of what the human prefers is essential. One aspect that can be considered to tailor this support to the preferences of humans is trust. This measurement of trust should incorporate the notion of relativeness since a personal assistant agent typically has a choice of advising substitutable options. In this paper such a model for relative trust is presented, whereby a number of parameters can be set that represent characteristics of a human.

A diverse collection of trust-modeling algorithms for multi-agent systems has been developed in recent years, resulting in significant breadth-wise growth without unified direction or benchmarks. Based on enthusiastic response from the agent trust community, the Agent Reputation and Trust (ART) Testbed initiative has been launched, charged with the task of establishing a testbed for agent trust- and reputation-related technologies. This testbed serves in two roles: (1) as a competition forum in which researchers can compare their technologies against objective metrics, and (2) as a suite of tools with flexible parameters, allowing researchers to perform customizable, easily-repeatable experiments. This paper first enumerates trust research objectives to be addressed in the testbed and desirable testbed characteristics, then presents a competition testbed specification that is justified according to these requirements. In the testbed's artwork appraisal domain, agents, who valuate paintings for clients, may gather opinions from other agents to produce accurate appraisals. The testbed's implementation architecture is discussed briefly, as well.

In this paper we claim the importance of a cognitive view of trust (its articulate, analytic and founded view), in contrast with a mere quantitative and opaque view of trust supported by Economics and Game Theory. We argue in favour of a cognitive view of trust as a complex structure of beliefs and goals, implying that the trustor must have a "theory of the mind" of the trustee. Such a structure of beliefs determines a "degree of trust" and an estimation of risk, and then a decision to rely or not on the other, which is also based on a personal threshold of risk acceptance/avoidance.

Many real-life graphs such as social networks and peer-to-peer networks capture the relationships among the nodes by using trust scores to label the edges. Important usage of such networks includes trust prediction, finding the most reliable or trusted node in a local subgraph, etc. For many of these applications, it is crucial to assess the prestige and bias of a node. The bias of a node denotes its propensity to trust/mistrust its neighbours and is closely related to truthfulness. If a node trusts all its neighbours, its recommendation of another node as trustworthy is less reliable. The model is based on the idea that the recommendation of a highly biased node should weigh less. In this paper, we propose an algorithm to compute the bias and prestige of nodes in networks where the edge weight denotes the trust score. Unlike most other graph-based algorithms, our method works even when the edge weights are not necessarily positive. The algorithm is iterative and runs in O(km) time where k is the number of iterations and m is the total number of edges in the network. The algorithm exhibits several other desirable properties. It converges to a unique value very quickly. Also, the error in bias and prestige values at any particular iteration is bounded. Further, experiments show that our model conforms well to social theories such as the balance theory (enemy of a friend is an enemy, etc.).
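The iterative idea can be sketched as follows; this is a simplified variant of the mutually recursive updates described above (the paper's exact update rules and discounting may differ): prestige of a node is the average incoming trust score discounted by the rater's bias, and bias of a node is half the average gap between the scores it gives and the recipients' prestige. Edge weights lie in [-1, 1].

```python
# Iterative bias/prestige computation on a signed trust graph
# (simplified illustrative variant of the scheme described above).

def bias_prestige(edges, n, iters=50):
    """edges: list of (u, v, w) with trust score w in [-1, 1]; n nodes."""
    bias = [0.0] * n
    prestige = [0.0] * n
    for _ in range(iters):
        # Prestige from incoming edges, discounting biased raters.
        incoming = [[] for _ in range(n)]
        for u, v, w in edges:
            incoming[v].append(w * (1.0 - bias[u]))
        prestige = [sum(ws) / len(ws) if ws else 0.0 for ws in incoming]
        # Bias from outgoing edges: tendency to over- or under-rate.
        outgoing = [[] for _ in range(n)]
        for u, v, w in edges:
            outgoing[u].append(w - prestige[v])
        bias = [0.5 * sum(ds) / len(ds) if ds else 0.0 for ds in outgoing]
    return bias, prestige
```

A node that rates everyone +1 accumulates positive bias, so its ratings are discounted when computing the prestige of its neighbours.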

There has been a lot of research and development in the field of computational trust in the past decade. Much of it has acknowledged or claimed that trust is a good thing. We think it’s time to look at the other side of the coin and ask the questions why is it good, what alternatives are there, where do they fit, and is our assumption always correct?
We examine the need for an addressing of the concepts of Trust, Mistrust, and Distrust, how they interlink and how they affect what goes on around us and within the systems we create. Finally, we introduce the phenomenon of ‘Untrust,’ which resides in the space between trusting and distrusting. We argue that the time is right, given the maturity and breadth of the field of research in trust, to consider how untrust, distrust and mistrust work, why they can be useful in and of themselves, and where they can shine.

When considering intelligent agents that interact with humans, having an idea of the trust levels of the human, for example in other agents or services, can be of great importance. Most models of human trust that exist, are based on some rationality assumption, and biased behavior is not represented, whereas a vast literature in Cognitive and Social Sciences indicates that humans often exhibit non-rational, biased behavior with respect to trust. This paper reports how some variations of biased human trust models have been designed, analyzed and validated against empirical data. The results show that such biased trust models are able to predict human trust significantly better.

One of the most complex systems is the human brain whose formalized functioning is characterized by decision theory. We present a "Quantum Decision Theory" of decision making, based on the mathematical theory of separable Hilbert spaces. This mathematical structure captures the effect of superposition of composite prospects, including many incorporated intentions, which allows us to explain a variety of interesting fallacies and anomalies that have been reported to particularize the decision making of real human beings. The theory describes entangled decision making, non-commutativity of subsequent decisions, and intention interference of composite prospects. We demonstrate how the violation of Savage's sure-thing principle (disjunction effect) can be explained as a result of the interference of intentions, when making decisions under uncertainty. The conjunction fallacy is also explained by the presence of the interference terms. We demonstrate that all known anomalies and paradoxes, documented in the context of classical decision theory, are reducible to just a few mathematical archetypes, all of which find straightforward explanations in the frame of the developed quantum approach.

This paper is concerned with two theories of probability judgment: the Bayesian theory and the theory of belief functions. It illustrates these theories with some simple examples and discusses some of the issues that arise when we try to implement them in expert systems. The Bayesian theory is well known; its main ideas go back to the work of Thomas Bayes (1702-1761). The theory of belief functions, often called the Dempster-Shafer theory in the artificial intelligence community, is less well known, but it has even older antecedents; belief-function arguments appear in the work of George Hooper (1640-1723) and James Bernoulli (1654-1705). For elementary expositions of the theory of belief functions, see Shafer (1976, 1985).
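The core computational step of the belief-function theory is Dempster's rule of combination, which pools two independent bodies of evidence over the same frame of discernment and renormalizes away the mass assigned to contradictory intersections. A minimal sketch:

```python
# Dempster's rule of combination for two basic mass assignments.
# Focal elements are frozensets over the frame; each mass sums to 1.

def combine(m1, m2):
    joint = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            a = b & c
            if a:
                joint[a] = joint.get(a, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass committed to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Renormalize the non-conflicting mass.
    return {a: v / (1.0 - conflict) for a, v in joint.items()}
```

Unlike Bayesian conditioning, mass can remain on non-singleton sets (e.g. the whole frame), representing undistributed ignorance.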

Much of our understanding of human thinking is based on probabilistic models. This innovative book by Jerome R. Busemeyer and Peter D. Bruza argues that, actually, the underlying mathematical structures from quantum theory provide a much better account of human thinking than traditional models. They introduce the foundations for modelling probabilistic-dynamic systems using two aspects of quantum theory. The first, 'contextuality', is a way to understand interference effects found with inferences and decisions under conditions of uncertainty. The second, 'quantum entanglement', allows cognitive phenomena to be modeled in non-reductionist ways. Employing these principles drawn from quantum theory allows us to view human cognition and decision in a totally new light. Introducing the basic principles in an easy-to-follow way, this book does not assume a physics background or a quantum brain and comes complete with a tutorial and fully worked-out applications in important areas of cognition and decision.

We proceed towards an application of the mathematical formalism of quantum mechanics to cognitive psychology - the problem of decision-making in games of the Prisoner's Dilemma type. These games were used as tests of rationality of players. Experiments performed in cognitive psychology by Shafir and Tversky (1992), Croson (1999), Hofstadter (1983, 1985) demonstrated that in general real players do not use the “rational strategy” provided by classical game theory; this psychological phenomenon was called the disjunction effect. We elaborate a model of quantum-like decision making which can explain this effect (“irrationality” of players). Our model is based on quantum information theory. The main result of this paper is the derivation of the Gorini-Kossakowski-Sudarshan-Lindblad equation whose equilibrium solution gives the quantum state used for decision making. It is the first application of this equation in cognitive psychology.

One of the most cited books in physics of all time, Quantum Computation and Quantum Information remains the best textbook in this exciting field of science. This 10th anniversary edition includes an introduction from the authors setting the work in context. This comprehensive textbook describes such remarkable effects as fast quantum algorithms, quantum teleportation, quantum cryptography and quantum error-correction. Quantum mechanics and computer science are introduced before moving on to describe what a quantum computer is, how it can be used to solve problems faster than 'classical' computers and its real-world implementation. It concludes with an in-depth treatment of quantum information. Containing a wealth of figures and exercises, this well-known textbook is ideal for courses on the subject, and will interest beginning graduate students and researchers in physics, computer science, mathematics, and electrical engineering.

Inferring the pair-wise trust relationship is a core building block for many real applications. State-of-the-art approaches for such trust inference mainly employ the transitivity property of trust by propagating trust along connected users, but largely ignore other important properties such as trust bias, multi-aspect, etc. In this paper, we propose a new trust inference model to integrate all these important properties. To apply the model to both binary and continuous inference scenarios, we further propose a family of effective and efficient algorithms. Extensive experimental evaluations on real data sets show that our method achieves significant improvement over several existing benchmark approaches, for both quantifying numerical trustworthiness scores and predicting binary trust/distrust signs. In addition, it enjoys linear scalability in both time and space.

Trust has been recognized as a key component of agent decision making in the context of multiagent systems (MASs). Though diverse trust models and mechanisms, influenced by various fields of study, have been proposed, implemented, and evaluated, we believe that the literature has ignored key aspects of pragmatic and holistic trust-based reasoning. In particular, the focus of trust research has been on a posteriori evaluation of the trustworthiness of another agent and relatively few efforts have investigated the issue of establishment, engagement, and usage of trusted relationships. We envision that a holistic agent architecture will not use a trust module as a black-box for evaluating others but as a core component that will inform and shape interactions with other agents in the environment to best serve the decision-maker's interests. Accordingly, we present a general and comprehensive trust management scheme (CTMS) that addresses key issues surrounding trust development, maintenance, and use. We present an operational definition of trust motivated by uncertainty management and utility optimization. We identify the various components required of a CTMS and their relationships and overview their use in the existing literature on trust in MAS. We welcome the MAS community to develop on the ideas presented here and build effective agent designs and implementations with fully-integrated CTMS cores.

Large-scale multiagent systems have the potential to be highly dynamic. Trust and reputation are crucial concepts in these environments, as it may be necessary for agents to rely on their peers to perform as expected, and learn to avoid untrustworthy partners. However, aspects of highly dynamic systems introduce issues which make the formation of trust relationships difficult. For example, they may be short-lived, precluding agents from gaining the necessary experiences to make an accurate trust evaluation. This article describes a new approach, inspired by theories of human organizational behavior, whereby agents generalize their experiences with previously encountered partners as stereotypes, based on the observable features of those partners and their behaviors. Subsequently, these stereotypes are applied when evaluating new and unknown partners. Furthermore, these stereotypical opinions can be communicated within the society, resulting in the notion of stereotypical reputation. We show how this approach can complement existing state-of-the-art trust models, and enhance the confidence in the evaluations that can be made about trustees when direct and reputational information is lacking or limited. Furthermore, we show how a stereotyping approach can help agents detect unwanted biases in the reputational opinions they receive from others in the society.

Security and privacy issues have become critically important with the fast expansion of multiagent systems. Most network applications such as pervasive computing, grid computing, and P2P networks can be viewed as multiagent systems which are open, anonymous, and dynamic in nature. Such characteristics of multiagent systems introduce vulnerabilities and threats to providing secure communication. One feasible way to minimize the threats is to evaluate the trust and reputation of the interacting agents. Many trust/reputation models have done so, but they fail to properly evaluate trust when malicious agents start to behave in an unpredictable way. Moreover, these models are ineffective in providing quick response to a malicious agent's oscillating behavior. Another aspect of multiagent systems which is becoming critical for sustaining good service quality is the even distribution of workload among service providing agents. Most trust/reputation models have not yet addressed this issue. So, to cope with the strategically altering behavior of malicious agents and to distribute workload as evenly as possible among service providers, we present in this paper a dynamic trust computation model called "SecuredTrust." In this paper, we first analyze the different factors related to evaluating the trust of an agent and then propose a comprehensive quantitative model for measuring such trust. We also propose a novel load-balancing algorithm based on the different factors defined in our model. Simulation results indicate that our model, compared to other existing models, can effectively cope with strategic behavioral change of malicious agents and at the same time efficiently distribute workload among the service providing agents under stable conditions.

We present a quantum-like model of decision making in games of the Prisoner's Dilemma type. In this model the brain processes information by using a representation of mental states in complex Hilbert space. Driven by the master equation, the mental state of a player, say Alice, approaches an equilibrium point in the space of density matrices. By using this equilibrium point, Alice determines her mixed (i.e., probabilistic) strategy with respect to Bob. Thus our model is a model of thinking through decoherence of an initially pure mental state. Decoherence is induced by interaction with memory and the external environment. In this paper we study (numerically) the dynamics of the quantum entropy of Alice's state in the process of decision making. Our analysis demonstrates that this dynamics depends nontrivially on the initial state of Alice's mind regarding her own actions and her prediction state (for possible actions of Bob).

Quantum-like structure is present practically everywhere. Quantum-like (QL) models, i.e. models based on the mathematical formalism of quantum mechanics and their generalizations, can be successfully applied to cognitive science, psychology, genetics, economics, finances, and game theory. This book is not about quantum mechanics as a physical theory. The short review of quantum postulates is therefore mainly of historical value: quantum mechanics is just the first example of the successful application of non-Kolmogorov probabilities, the first step towards a contextual probabilistic description of natural, biological, psychological, social, economical or financial phenomena. A general contextual probabilistic model (V model) is presented. It can be used for describing probabilities in both quantum and classical (statistical) mechanics as well as in the above mentioned phenomena. This model can be represented in a quantum-like way, namely, in complex and more general Hilbert spaces. In this way quantum probability is totally demystified: Born's representation of quantum probabilities by complex probability amplitudes, wave functions, is simply a special representation of this type.

In this paper we develop a general quantum-like model of decision making. Here updating of probability is based on linear algebra, the von Neumann–Lüders projection postulate, Born’s rule, and the quantum representation of the state space of a composite system by the tensor product. This quantum-like model generalizes the classical Bayesian inference in a natural way. In our approach the latter appears as a special case corresponding to the absence of relative phases in the mental state. By taking into account a possibility of the existence of correlations which are encoded in relative phases we developed a more general scheme of decision making. We discuss natural situations inducing deviations from the classical Bayesian scheme in the process of decision making by cognitive systems: in situations that can be characterized as objective and subjective mental uncertainties. Further, we discuss the problem of base rate fallacy. In our formalism, these “irrational” (non-Bayesian) inferences are represented by quantum-like bias operations acting on the mental state.
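A minimal two-hypothesis sketch of this scheme (not the paper's full tensor-product construction; the angles and phases below are illustrative): joint amplitudes encode prior times likelihood, a "bias operation" (here a unitary rotation on the hypothesis factor) acts on the mental state, and the evidence is then read out via the von Neumann-Lüders projection and Born's rule. With no bias operation the relative phases drop out and classical Bayes is recovered; with a bias operation, the phases produce non-Bayesian posteriors.

```python
import numpy as np

# Quantum-like inference sketch: state on H_hyp (x) H_ev, bias
# operation before evidence measurement (illustrative assumption).

def posterior(prior, lik, phases, bias_angle):
    """prior: (p1, p2); lik: P(e=1|i); phases: relative phases per hypothesis."""
    # Joint amplitudes c[i, e] = sqrt(P(i) P(e|i)) * exp(1j * phi_i).
    c = np.zeros((2, 2), dtype=complex)
    for i in range(2):
        for e in range(2):
            p_e = lik[i] if e == 1 else 1.0 - lik[i]
            c[i, e] = np.sqrt(prior[i] * p_e) * np.exp(1j * phases[i])
    # Bias operation: rotation on the hypothesis factor.
    t = bias_angle
    U = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    c = U @ c
    # Observe evidence e = 1: Lueders projection + Born's rule.
    col = np.abs(c[:, 1]) ** 2
    return col / col.sum()
```

With `bias_angle=0` and any phases, the result is exactly the classical Bayesian posterior; with a nonzero bias angle, changing only the relative phase changes the inferred posterior.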

A (directed) network of people connected by ratings or trust scores, and a model for propagating those trust scores, is a fundamental building block in many of today's most successful e-commerce and recommendation systems. We develop a framework of trust propagation schemes, each of which may be appropriate in certain circumstances, and evaluate the schemes on a large trust network consisting of 800K trust scores expressed among 130K people. We show that a small number of expressed trusts/distrust per individual allows us to predict trust between any two people in the system with high accuracy. Our work appears to be the first to incorporate distrust in a computational trust propagation setting.
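Propagation schemes of this kind can be sketched as repeated multiplication by a combined one-step operator built from "atomic" propagations (direct propagation, co-citation, transpose trust, trust coupling). The particular combination and the weights `alpha` below are assumptions of this sketch, not the evaluated framework's fitted values:

```python
import numpy as np

# One-step trust propagation operator combining atomic schemes,
# then accumulated over several propagation steps (illustrative).

def propagate(T, alpha=(0.4, 0.3, 0.2, 0.1), steps=3):
    """T: n x n matrix of expressed trust (+) / distrust (-) scores."""
    a1, a2, a3, a4 = alpha
    # direct, co-citation, transpose trust, trust coupling.
    C = a1 * T + a2 * T.T @ T + a3 * T.T + a4 * T @ T.T
    # Accumulate multi-step propagated beliefs.
    F = np.zeros_like(T, dtype=float)
    P = np.eye(T.shape[0])
    for _ in range(steps):
        P = P @ C
        F = F + P
    return F
```

For a chain 0 trusts 1, 1 trusts 2, the accumulated matrix assigns positive propagated trust from 0 to 2 even though no edge was expressed.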

Trust is a fundamental concept in many real-world applications such as e-commerce and peer-to-peer networks. In these applications, users can generate local opinions about the counterparts based on direct experiences, and these opinions can then be aggregated to build trust among unknown users. The mechanism to build new trust relationships based on existing ones is referred to as trust inference. State-of-the-art trust inference approaches employ the transitivity property of trust by propagating trust along connected users. In this paper, we propose a novel trust inference model (MaTrust) by exploring an equally important property of trust, i.e., the multi-aspect property. MaTrust directly characterizes multiple latent factors for each trustor and trustee from the locally-generated trust relationships. Furthermore, it can naturally incorporate prior knowledge as specified factors. These factors in turn serve as the basis to infer the unseen trustworthiness scores. Experimental evaluations on real data sets show that the proposed MaTrust significantly outperforms several benchmark trust inference models in both effectiveness and efficiency.
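The multi-aspect idea can be illustrated as low-rank factorization (a sketch, not MaTrust's exact algorithm): each user gets a latent trustor vector and a latent trustee vector, an observed score t(u, v) is approximated by their dot product, the factors are fit by SGD on observed pairs, and unseen pairs are scored the same way.

```python
import random

# Illustrative multi-aspect trust inference via matrix factorization.
# P[u]: latent trustor aspects of u; Q[v]: latent trustee aspects of v.

def fit(observed, n, k=2, lr=0.05, reg=0.01, epochs=500, seed=0):
    """observed: list of (trustor, trustee, score); n users; k aspects."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n)]
    for _ in range(epochs):
        for u, v, t in observed:
            pred = sum(P[u][i] * Q[v][i] for i in range(k))
            err = t - pred
            for i in range(k):  # regularized gradient step
                pu, qv = P[u][i], Q[v][i]
                P[u][i] += lr * (err * qv - reg * pu)
                Q[v][i] += lr * (err * pu - reg * qv)
    return P, Q

def predict(P, Q, u, v):
    return sum(pi * qi for pi, qi in zip(P[u], Q[v]))
```

Prior knowledge could be injected by fixing some columns of the factor matrices to known features, in the spirit of the "specified factors" mentioned above.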

Much literature attests to the existence of order effects in the updating of beliefs. However, under what conditions do primacy, recency, or no order effects occur? This paper presents a theory of belief updating that explicitly accounts for order-effect phenomena as arising from the interaction of information-processing strategies and task characteristics. Key task variables identified are complexity of the stimuli, length of the series of evidence items, and response mode (Step-by-Step or End-of-Sequence). A general anchoring-and-adjustment model of belief updating is proposed. This has two forms depending on whether information is processed in a Step-by-Step or End-of-Sequence manner. In addition, the model specifies that evidence can be encoded in two ways, either as a deviation relative to the size of the preceding anchor or as positive or negative vis-à-vis the hypothesis under consideration. Whereas the former (labeled estimation mode) results in data consistent with averaging models of judgment, the latter (labeled evaluation mode) implies adding models. Conditions are specified under which (a) evidence is encoded in estimation or evaluation modes and (b) use is made of the Step-by-Step or End-of-Sequence processing strategies. The theory is shown both to account for much existing data and to make novel predictions for combinations of task characteristics where current data are sparse. Some of these predictions are examined and validated in a series of five experiments. Finally, both the theory and the experimental results are discussed with respect to the structure of models of updating processes, limitations and extensions of the present work, and the importance of developing a procedural theory of judgment.
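The Step-by-Step, evaluation-mode form of the anchoring-and-adjustment model described above can be sketched as follows (a common formalization; the sensitivity parameters alpha and beta and the values used here are illustrative): each evidence item s in [-1, 1] adjusts the current belief S in [0, 1], with negative evidence scaled by the current anchor and positive evidence by the remaining headroom.

```python
# Step-by-step anchoring-and-adjustment belief updating (evaluation mode).

def update_beliefs(evidence, s0=0.5, alpha=1.0, beta=1.0):
    """evidence: items in [-1, 1]; s0: initial anchor in [0, 1]."""
    S = s0
    for s in evidence:
        if s <= 0:
            S = S + alpha * S * s          # negative evidence scales by anchor
        else:
            S = S + beta * (1.0 - S) * s   # positive evidence scales by headroom
    return S
```

Because each adjustment is relative to the current anchor, later items move the belief more when it has drifted toward the opposite pole, which is the model's account of recency effects under Step-by-Step processing.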

In experiments on games, players frequently make choices which are regarded as irrational in game theory. In papers of Khrennikov (Information Dynamics in Cognitive, Psychological and Anomalous Phenomena. Fundamental Theories of Physics, Kluwer Academic, Norwell, 2004; Fuzzy Sets Syst. 155:4–17, 2005; Biosystems 84:225–241, 2006; Found. Phys. 35(10):1655–1693, 2005; in QP-PQ Quantum Probability and White Noise Analysis, vol. XXIV, pp. 105–117, 2009), it was pointed out that statistics collected in such experiments have “quantum-like” properties, which cannot be explained in classical probability theory. In this paper, we design a simple quantum-like model describing the decision-making process in a two-player game and try to explain a mechanism of the irrational behavior of players. Finally we discuss a mathematical frame of non-Kolmogorovian systems in terms of liftings (Accardi and Ohya, in Appl. Math. Optim. 39:33–59, 1999).
Keywords: Game theory, decision-making, non-Kolmogorovian probability, quantum-like model

We describe methodology of cognitive experiments (based on interference of probabilities for mental observables) which could verify the quantum-like structure of mental information, namely, interference of probabilities for incompatible observables. In principle, such experiments can be performed in psychology, cognitive, and social sciences. In fact, the general contextual probability theory predicts not only quantum-like trigonometric (cos-type) interference of probabilities, but also hyperbolic (cosh-type) interference of probabilities (as well as hyper-trigonometric). In principle, statistical data obtained in experiments with cognitive systems can produce hyperbolic (cosh-type) interference of probabilities. We introduce a wave function of (e.g., human) population. In general, we should not reject the possibility that cognitive functioning is neither quantum nor classical. We discuss the structure of state spaces for cognitive systems.

We present a quantum-like (QL) model in which contexts (complexes of, e.g., mental, social, biological, economic or even political conditions) are represented by complex probability amplitudes. This approach gives the possibility to apply the mathematical quantum formalism to probabilities induced in any domain of science. In our model quantum randomness appears not as irreducible randomness (as is commonly accepted in conventional quantum mechanics, e.g., by von Neumann and Dirac), but as a consequence of obtaining incomplete information about a system. We pay main attention to the QL description of the processing of incomplete information. Our QL model can be useful in cognitive, social and political sciences as well as in economics and artificial intelligence. In this paper we consider in more detail one special application: QL modeling of the brain's functioning. The brain is modeled as a QL-computer.

This paper proposes a time-decay based trust model for peer-to-peer networks. By fading the influence of older transactions, the proposed model lets the trust value change dynamically with time decay, achieving the goal that the closer a transaction is to the present, the more reliable it is as evidence. Finally, the paper presents the experimental protocol and simulation. The experimental results show that the proposed model approaches the true trust value more closely, resists malicious collusion attacks and dynamic strategy attacks, and exhibits favorable performance.
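A minimal sketch of the time-decay idea (the paper's exact decay function and aggregation are not specified here; exponential decay with a half-life parameter is one common choice): each transaction outcome is weighted by how recent it is, so fresh evidence dominates the trust value.

```python
import math

# Time-decayed trust aggregation: outcomes in [0, 1] weighted by an
# exponential decay in transaction age (half_life is an assumption).

def decayed_trust(transactions, now, half_life=10.0):
    """transactions: list of (timestamp, outcome) with outcome in [0, 1]."""
    lam = math.log(2.0) / half_life
    num = den = 0.0
    for t, outcome in transactions:
        w = math.exp(-lam * (now - t))  # weight halves every half_life
        num += w * outcome
        den += w
    return num / den if den else 0.5  # neutral prior when no history
```

An old positive transaction followed by a recent negative one yields low trust, and vice versa, which is the behavior the model uses to track dynamically changing peers.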

In two experiments, Ss were read sets of 6 or 8 personality adjectives, and asked to rate their liking of the person so described. In some conditions, S was also requested to recall the adjectives just read. The personality impression data showed a primacy (first impression) effect when recall was not required. Introduction of recall reduced the primacy and, in one condition, caused a recency effect. These results were interpreted as indicating that the primacy was primarily caused by decreased attention to the later adjectives, and that the use of concomitant recall destroyed this primacy by causing S to attend to the later adjectives more completely. The serial recall curves showed a small to moderate primacy component, and a very strong recency component. Further detailed analyses of the recall data were also given. Two implications were drawn from the data. First, it was concluded that the impression memory is distinct from the verbal memory for the adjectives. This conclusion was based on contrasts between the observed impression effects and those that would be expected if the impression depended on the verbal memory. Three objections to this conclusion, based on the possibility that recall probability was an inappropriate index of verbal-memory strength, were also discussed. Second, it was tentatively suggested that a linear model, together with the attention decrement notion, gave the best account of the data. It was finally noted that the linear model also provides a representation of the impression memory that is in harmony with the first conclusion.
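The linear model with attention decrement can be sketched as a weighted average whose weights fall off with serial position (the geometric decrement used here is an illustrative assumption): later adjectives receive less attention, producing primacy, while equal weights eliminate the order effect.

```python
# Linear impression model with attention decrement: impression is a
# weighted average of adjective values, weights decaying by position.

def impression(values, decrement=0.8):
    """values: adjective evaluations in [-1, 1], in presentation order."""
    weights = [decrement ** i for i in range(len(values))]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

With a decrement below 1, a good-then-bad sequence leaves a better impression than the same adjectives in bad-then-good order; with decrement 1.0 the two orders are indistinguishable.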

There are at least two general theories for building probabilistic–dynamical systems: one is Markov theory and another is quantum theory. These two mathematical frameworks share many fundamental ideas, but they also differ in some key properties. On the one hand, Markov theory obeys the law of total probability, but quantum theory does not; on the other hand, quantum theory obeys the doubly stochastic law, but Markov theory does not. Therefore, the decision about whether to use a Markov or a quantum system depends on which of these laws are empirically obeyed in an application. This article derives two general methods for testing these theories that are parameter free, and presents a new experimental test. The article concludes with a review of experimental findings from cognitive psychology that evaluate these two properties.
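The two distinguishing laws can be checked numerically. A minimal sketch for a 2-state system (the matrices are illustrative): a Markov transition matrix is stochastic (rows sum to 1) but generally not doubly stochastic, whereas the transition probabilities of a unitary evolution form a doubly stochastic matrix (rows and columns both sum to 1):

```python
import numpy as np

# A Markov transition matrix: each row sums to 1 (stochastic),
# but the columns need not.
T = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# A unitary evolution (a rotation): the matrix of transition
# probabilities |U_ij|^2 is doubly stochastic -- rows AND columns
# both sum to 1.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P = np.abs(U) ** 2

print(T.sum(axis=1))               # rows:    [1. 1.]
print(T.sum(axis=0))               # columns: [1.3 0.7] -> not doubly stochastic
print(P.sum(axis=1), P.sum(axis=0))  # both [1. 1.]
```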

The notion of context (a complex of physical conditions) is basic in this paper. We show that the main structures of quantum theory (interference of probabilities, Born's rule, complex probabilistic amplitudes, Hilbert state space, representation of observables by operators) are present in a latent form in the classical Kolmogorov probability model. However, this model should be considered as a calculus of contextual probabilities. In our approach, it is forbidden to consider abstract context-independent probabilities: "first context, then probability." We start with the conventional formula of total probability for contextual (conditional) probabilities and then rewrite it by eliminating combinations of incompatible contexts from consideration. In this way we obtain interference of probabilities without appealing to the Hilbert space formalism or wave mechanics. Our contextual approach is important for the demystification of the quantum probabilistic formalism. It makes it possible to apply quantum-like models in domains of science other than quantum theory, e.g., in economics, finance, social science, cognitive science and psychology.
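The contextual reworking of the formula of total probability yields an interference term. In two-context form (this is the standard quantum-like presentation of the result; the notation is assumed, not quoted from the paper):

```latex
% Classical formula of total probability for contexts C_1, C_2:
P(B) = P(C_1)\,P(B \mid C_1) + P(C_2)\,P(B \mid C_2)
% Contextual (quantum-like) version: an interference term appears,
% with phase \theta encoding the incompatibility of the contexts:
P(B) = P(C_1)\,P(B \mid C_1) + P(C_2)\,P(B \mid C_2)
     + 2\cos\theta\,\sqrt{P(C_1)\,P(B \mid C_1)\,P(C_2)\,P(B \mid C_2)}
```

For $\theta = \pi/2$ the interference term vanishes and the classical law is recovered.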

In this paper, we deal with the sequential decision making problem of agents operating in computational economies, where there is uncertainty regarding the trustworthiness of service providers populating the environment. Specifically, we propose a generic Bayesian trust model, and formulate the optimal Bayesian solution to the exploration-exploitation problem facing the agents when repeatedly interacting with others in such environments. We then present a computationally tractable Bayesian reinforcement learning algorithm to approximate that solution by taking into account the expected value of perfect information of an agent's actions. Our algorithm is shown to dramatically outperform all previous finalists of the international Agent Reputation and Trust (ART) competition, including the winner from both years the competition has been run.
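A minimal sketch of the generic Bayesian trust idea using a Beta posterior over a provider's reliability. The class and its parameters are illustrative assumptions, not the paper's actual model; the posterior variance hints at why exploration is valuable when evidence is scarce:

```python
class BetaTrust:
    """Beta-distribution trust model: binary interaction outcomes
    update a Beta posterior over the provider's reliability."""

    def __init__(self, alpha=1.0, beta=1.0):
        # uniform Beta(1, 1) prior
        self.alpha, self.beta = alpha, beta

    def update(self, success):
        # Bayesian update: each outcome increments one pseudo-count
        if success:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def mean(self):
        # expected trustworthiness under the posterior
        return self.alpha / (self.alpha + self.beta)

    @property
    def variance(self):
        # posterior uncertainty -- a rough proxy for the value of
        # exploring this provider further
        a, b = self.alpha, self.beta
        return a * b / ((a + b) ** 2 * (a + b + 1))

t = BetaTrust()
for outcome in [True, True, False, True]:
    t.update(outcome)
print(t.mean)  # 4/6 after 3 successes and 1 failure
```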

This paper is concerned with two theories of probability judgment: the Bayesian theory and the theory of belief functions. It illustrates these theories with some simple examples and discusses some of the issues that arise when we try to implement them in expert systems. The Bayesian theory is well known; its main ideas go back to the work of Thomas Bayes (1702-1761). The theory of belief functions, often called the Dempster-Shafer theory in the artificial intelligence community, is less well known, but it has even older antecedents; belief-function arguments appear in the work of George Hooper (1640-1723) and James Bernoulli (1654-1705). For elementary expositions of the theory of belief functions, see Shafer (1976, 1985).
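Dempster's rule of combination, the core operation of the belief-function calculus, can be sketched as follows (the two-witness example is illustrative):

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements
    are frozensets (the empty set carries no mass)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass lost to contradiction
    k = 1.0 - conflict  # renormalize by the non-conflicting mass
    return {s: v / k for s, v in combined.items()}

# Two sources: one assigns 0.6 to "the culprit is x", one assigns 0.7
# to "x or y"; each source's remaining mass expresses ignorance
# (it sits on the whole frame, not on the complement).
frame = frozenset({'x', 'y', 'z'})
m1 = {frozenset({'x'}): 0.6, frame: 0.4}
m2 = {frozenset({'x', 'y'}): 0.7, frame: 0.3}
m12 = dempster_combine(m1, m2)
print(m12[frozenset({'x'})])  # ~0.6: both sources reinforce x
```

The ability to place mass on sets (ignorance) rather than forcing it onto singletons is what distinguishes belief functions from the Bayesian approach.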

One of the great strengths of public-key cryptography is its potential to allow the localization of trust. This potential is greatest when cryptography is present to guarantee data integrity rather than secrecy, and where there is no natural hierarchy of trust. Both these conditions are typically fulfilled in the commercial world, where CSCW requires sharing of data and resources across organizational boundaries. One property which trust is frequently assumed or proved to have is transitivity (if A trusts B and B trusts C, then A trusts C) or some generalization of transitivity such as *-closure. We use the loose term unintentional transitivity of trust to refer to a situation where B can effectively put things into A's set of trust assumptions without A's explicit consent (or sometimes even awareness). Any account of trust which allows such situations to arise clearly poses major obstacles to the effective confinement (localization) of trust. In this position paper, we argue against the need to accept unintentional transitivity of trust. We distinguish the notion of trust from a number of other (transitive) notions with which it is frequently confused, and argue that proofs of the unintentional transitivity of trust typically involve unpalatable logical assumptions as well as undesirable consequences.

There are two related purposes of this tutorial. One is to generate interest in a new and fascinating approach to understanding behavioral measures based on quantum probability principles. The second is to introduce and provide a tutorial of the basic ideas in a manner that is interesting and easy for social and behavioral scientists to understand.

A (directed) network of people connected by ratings or trust scores, and a model for propagating those trust scores, is a fundamental building block in many of today's most successful e-commerce and recommendation systems. We develop a framework of trust propagation schemes, each of which may be appropriate in certain circumstances, and evaluate the schemes on a large trust network consisting of 800K trust scores expressed among 130K people. We show that a small number of expressed trusts/distrusts per individual allows us to predict trust between any two people in the system with high accuracy. Our work appears to be the first to incorporate distrust in a computational trust propagation setting.
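One simple propagation scheme can be sketched with a direct-trust matrix whose powers capture trust chains; the damping factor and chain-length cutoff below are assumptions for illustration, not the schemes evaluated in the paper:

```python
import numpy as np

# Direct trust matrix: T[i, j] is i's stated trust in j.
# 0 trusts 1, and 1 trusts 2, but 0 never rated 2 directly.
T = np.array([[0.0, 0.9, 0.0],
              [0.0, 0.0, 0.8],
              [0.0, 0.0, 0.0]])

damping = 0.5  # longer chains of trust count less
# Accumulate chains of length 1..3: T + d*T^2 + d^2*T^3.
propagated = np.zeros_like(T)
step = T.copy()
for k in range(3):
    propagated += (damping ** k) * step
    step = step @ T  # extend all chains by one hop

# Trust from 0 to 2 is inferred through 1: 0.5 * 0.9 * 0.8 = 0.36
print(propagated[0, 2])
```

Incorporating distrust (negative entries) is precisely where such naive matrix schemes get subtle, which motivates the framework the paper develops.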

Order of information plays a crucial role in the process of updating beliefs across time. In fact, the presence of order effects makes a classical or Bayesian approach to inference difficult. As a result, the existing models of inference, such as the belief-adjustment model, merely provide an ad hoc explanation for these effects. We postulate a quantum inference model for order effects based on the axiomatic principles of quantum probability theory. The quantum inference model explains order effects by transforming a state vector with different sequences of operators for different orderings of information. We demonstrate this process by fitting the quantum model to data collected in a medical diagnostic task and a jury decision-making task. To further test the quantum inference model, a new jury decision-making experiment is developed. Using the results of this experiment, we compare the quantum inference model with two versions of the belief-adjustment model, the adding model and the averaging model. We show that both the quantum model and the adding model provide good fits to the data. To distinguish the quantum model from the adding model, we develop a new experiment involving extreme evidence. The results from this new experiment suggest that the adding model faces limitations when accounting for tasks involving extreme evidence, whereas the quantum inference model does not. Ultimately, we argue that the quantum model provides a more coherent account for order effects that was not possible before.
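The core mechanism, applying non-commuting operators in different orders, can be illustrated with a toy two-dimensional example (the angles and initial state are arbitrary, chosen only to make the effect visible):

```python
import numpy as np

def projector(theta):
    # rank-1 projector onto the direction at angle theta
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])             # initial belief state
A, B = projector(0.3), projector(1.0)  # two incompatible evidence items

def joint_prob(state, first, second):
    # probability of accepting `first` and then `second`
    return np.linalg.norm(second @ (first @ state)) ** 2

p_ab = joint_prob(psi, A, B)  # = cos^2(0.3) * cos^2(0.7)
p_ba = joint_prob(psi, B, A)  # = cos^2(1.0) * cos^2(0.7)
print(abs(p_ab - p_ba) > 0.1)  # True: the projectors do not commute
```

A classical (commutative) joint probability would force p_ab == p_ba; the gap between them is exactly the order effect the quantum inference model captures.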