Article

Notes on Trust as a Causal Basis for Social Science

Authors:
  • Mark Burgess, Researcher and Advisor at ChiTek-i

... Experiments to scrutinize these notions are notoriously difficult to perform at scale; however, the community structures surrounding Wikipedia have provided us with a serendipitous opportunity for a direct test of these claims, since they gather and publish data in a completely open manner. In the course of studying Wikipedia in relation to matters of online trust, we stumbled across a signal in the data concerning the dynamics of groups that we report on here [15]. ...
... This combines the roles of user intent and the work effort involved in coming together. Readers can find a comprehensive discussion of the physics of this model by Burgess [6,15]. (Figure caption: crosses indicate estimated uncertainty.) ...
... In this respect, information is a cybernetic concept in the sense that it can involve the exchange of actual information about some aspect of the world or some token such as grooming that defines the quality of a relationship (see also West et al. [2,3]). The link between intentions, promises, and trust points to this intentionality as the seed attractor in the explanation of process network dynamics [15]. The attractive 'force' during growth is not a network effect but a kinetic attention effect over these networked bonds. ...
Article
Full-text available
Human communities have self-organizing properties in which specific Dunbar Numbers may be invoked to explain group attachments. By analysing Wikipedia editing histories across a wide range of subject pages, we show that there is an emergent coherence in the size of transient groups formed to edit the content of subject texts, with two peaks averaging at around N=8 for the size corresponding to maximal contention, and at around N=4 as a regular team. These values are consistent with the observed sizes of conversational groups, as well as the hierarchical structuring of Dunbar graphs. We use a model of bipartite trust to derive a scaling law that fits the data and may apply to all group size distributions when these are based on attraction to a seeded group process. In addition to providing further evidence that even spontaneous communities of strangers are self-organizing, the results have important implications for the governance of the Wikipedia commons and for the security of all online social platforms and associations.
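For illustration of the kind of scaling argument the abstract refers to (a schematic sketch only; the symbols and functional forms below are our assumptions for exposition, not the paper's actual derivation), group growth can be written as a balance between attraction to the seeded process and the contention cost of maintaining mutual bonds:

\[ \frac{dn}{dt} \;=\; \alpha s \;-\; \beta\,\frac{n(n-1)}{2}, \qquad n^{*} \;\approx\; \sqrt{\frac{2\alpha s}{\beta}} \quad (n \gg 1), \]

where s measures the strength of the seed intention, \alpha the rate at which it attracts attention, and \beta the per-bond maintenance cost; a characteristic group size n^{*} emerges where the two rates cancel.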
... Experiments to scrutinize these notions are notoriously difficult to perform at scale; however, the community structures surrounding Wikipedia have provided us with a serendipitous opportunity for a direct test of these claims, since they gather and publish data in a completely open manner. In the course of studying Wikipedia in relation to matters of online trust, we stumbled across a signal in the data concerning the dynamics of groups that we report on here [15]. ...
... In this respect, information is a cybernetic concept in the sense that it can involve the exchange of actual information about some aspect of the world or some token such as grooming that defines the quality of a relationship (see also West et al. [2,3]). The link between intentions, promises, and trust points to this intentionality as the seed attractor in the explanation of process network dynamics [15]. The attractive 'force' during growth is not a network effect but a kinetic attention effect over these networked bonds. ...
... The attractive 'force' during growth is not a network effect but a kinetic attention effect over these networked bonds. Attraction does not therefore necessarily imply proximity in physical spacetime, but rather in intention space (i.e. the alignment is in intention first and in position only as a secondary consequence of attention [15]). ...
Preprint
Full-text available
This is a modification of an earlier work titled 'Group Phenomena ...'. Human communities have self-organizing properties in which specific Dunbar Numbers may be invoked to explain group attachments. By analyzing Wikipedia editing histories across a wide range of subject pages, we show that there is an emergent coherence in the size of transient groups formed to edit the content of subject texts, with two peaks averaging at around N = 8 for the size corresponding to maximal contention, and at around N = 4 as a regular team. These values are consistent with the observed sizes of conversational groups, as well as the hierarchical structuring of Dunbar graphs. We use the Promise Theory model of bipartite trust to derive a scaling law that fits the data and may apply to all group size distributions, when based on attraction to a seeded group process. In addition to providing further evidence that even spontaneous communities of strangers are self-organizing, the results have important implications for the governance of the Wikipedia commons and for the security of all online social platforms and associations.
... Promises are not commands or deterministic rules whose outcomes are assumed; they are statements of intent to be fulfilled by best effort. In earlier work, it was shown how trust is related to the notion of promise keeping and that Promise Theory offers a convenient framework in which to formulate an agent-based model of its semantics, its dynamics, and its scaling [3,5,7]. Trust also has a number of related and derived interpretations to consider, with variations on semantics, which are adapted to different purposes. ...
... For discrete agents, continuum functions map to assessments made by agents about one another. In configuration space, this requires special definitions (see [7]). We shall mainly focus on the interior states, and represent changes schematically as if they were continuum variables for familiarity. ...
... It seems doubtful that such a strategy would be optimal in the long run, since individual potential will only be realized on the scale of the group through sustained learning. Such matters remain to be formalized in future work, though some network effects were discussed in [3,7]. ...
Preprint
Full-text available
This work presents a summary of an operational model for trust and trustability (trustworthiness), building on Promise Theory, as a memory mechanism for managing cooperation over time against future returns. The model was proposed and partially developed more extensively in previously available notes, which also contain detailed reference literature. Trust is effectively a policy for inattentiveness in scenarios that balance opportunities and risks. The economics of trust are not altruistic: trust is a cost-saving default policy whose complement, 'antitrust' or 'risk taking', allows 'selfish agents' to allocate resources for self-interest.
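The cost accounting described here lends itself to a small sketch. The following Go stub (hypothetical names and a linear expected-loss rule, assumed purely for illustration rather than taken from the paper) shows trust acting as a policy for inattentiveness: verification is only purchased when it is cheaper than the expected loss of trusting blindly.

```go
// Illustrative sketch: trust as a cost-saving default policy.
package main

import "fmt"

// Agent keeps a learned reliability estimate for each counterpart.
type Agent struct {
	reliability map[string]float64 // learned fraction of promises kept
}

// ShouldVerify compares the cost of checking a promise against the
// expected loss of trusting it blindly (the 'antitrust' alternative).
func (a *Agent) ShouldVerify(who string, verifyCost, lossIfBroken float64) bool {
	p := a.reliability[who]                // prior assessment (trustworthiness)
	expectedLoss := (1 - p) * lossIfBroken // expected cost of inattentiveness
	return verifyCost < expectedLoss       // verify only when it pays off
}

func main() {
	a := &Agent{reliability: map[string]float64{"alice": 0.95, "mallory": 0.2}}
	fmt.Println(a.ShouldVerify("alice", 1.0, 10.0))   // false: trusting saves effort
	fmt.Println(a.ShouldVerify("mallory", 1.0, 10.0)) // true: distrust costs attention
}
```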
... Promises are not commands or deterministic rules whose outcomes are assumed; they are statements of intent to be fulfilled by best effort. In earlier work, it was shown how trust is related to the notion of promise keeping and that Promise Theory offers a convenient framework in which to formulate an agent-based model of its semantics, its dynamics, and its scaling [2,4,6]. Trust also has a number of related and derived interpretations to consider, with variations on semantics, which are adapted to different purposes. ...
... For discrete agents, continuum functions map to assessments made by agents about one another. In configuration space, this requires special definitions (see [6]). We shall mainly focus on the interior states, and represent changes schematically as if they were continuum variables for familiarity. ...
... It seems doubtful that such a strategy would be optimal in the long run, since individual potential will only be realized on the scale of the group through sustained learning. Such matters remain to be formalized in future work, though some network effects were discussed in [2,6]. ...
Research
Full-text available
This document is background material for the forthcoming Promise Theory of trust. It contains notes on the research literature from a number of disciplines. Its secondary motivation is to comment informally on the extent to which independent research is compatible a priori with Promise Theory, its definitions, and its predictions. A precise summary in Promise Theory is work in progress and will be reported elsewhere. The supposition that trust is a purely human phenomenon is ubiquitous outside Computer Science, and undermines the generality of its significance for human-technological interactions; thus, it is of interest to abstract away specifically human attributes and identify the signalling mechanisms that generalize trust for 'agents' in the general case. Trust emerges as a way of trading 'savings of effort' (process debt) for possible gain. As such, it is an important cost-saving strategy for agents that have finite resources, and which therefore need to prioritize activities of greater value. This is a way to avoid being taxed by accountability and verification protocols to hedge against uncertainty. The story is linked to the 'tragedy of the commons', or shared resource depletion, in this case where a single agent's resources are being shared between possibly competing tasks. In terms of process models, the appearance of an action potential that we call 'trust' is a sign that we are dealing with learning processes, also called memory processes, not transactional Markov processes. In order to understand how we shall use trust to lubricate human-technology (cyborg) relations in an increasingly augmented semi-virtual world, we need a consistent and formalized model of trust that everyone can agree on. In particular, we look for a way to avoid using moral judgements in trust questions, as these are frequently misleading.
... Like most humanistic notions, the history of trust has been dominated by ideas of moral philosophy [32-35]. Some progress has been made in social sciences by attempting to model certain scenarios by analogy to simple physical systems [10,11]; however, a more agent-centric view of trust can be given by using Promise Theory to capture the simple information relationship between trust and intentionality [36]. In this view, the trust about some subject X is related to the work saved by not verifying X [36]. ...
... Some progress has been made in social sciences by attempting to model certain scenarios by analogy to simple physical systems [10,11]; however, a more agent-centric view of trust can be given by using Promise Theory to capture the simple information relationship between trust and intentionality [36]. In this view, the trust about some subject X is related to the work saved by not verifying X [36]. However complex the semantics of these processes, they ultimately flatten out insofar as they simply involve different expenditures of effort. ...
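Read dimensionally, the stated relationship can be written schematically (our notation, an interpretive sketch rather than an equation from the source):

\[ T(X) \;\propto\; W_{\mathrm{verify}}(X) - W_{\mathrm{trust}}(X), \]

where W_verify is the work an agent would spend on accountability and verification protocols concerning X, and W_trust is the smaller effort of accepting X's promise at face value.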
Preprint
Full-text available
The capacity to engage with technology requires cooperation within communities. Cooperation depends on trust, with trust dependent on the frequency of interaction. This natural basis for human interaction gives rise to a 'fractal' structure for human communities that reflects the patterns of contact between individuals. To understand how innovations and information diffuse through group populations over time, we present an argument based on the Promise Theory of trust and derive a universal scaling relation for the formation of social groups by dynamic equilibrium. The implications of our method suggest a broad applicability beyond purely social groupings to general resource-constrained interactions, e.g. in work, technology, cybernetics, and generalized socioeconomic systems of all kinds. Models of cultural evolution and the adoption of novel technologies or ideologies need to take these constraints into account.
... In this respect, information is a cybernetic concept in the sense that it can involve the exchange of actual information about some aspect of the world or some token such as grooming that defines the quality of a relationship (see also West et al. [2,3]). The link between intentions, promises, and trust points to this intentionality as the seed attractor in the explanation of process network dynamics [20]. The attractive 'force' during growth is not a network effect but a kinetic attention effect over these networked bonds. ...
... The attractive 'force' during growth is not a network effect but a kinetic attention effect over these networked bonds. Attraction does not therefore necessarily imply proximity in physical spacetime, but rather in intention space (i.e. the alignment is in intention first and in position only as a secondary consequence of attention [20]). ...
Preprint
Full-text available
We report on the emergence of predictable group sizes for content editing on Wikipedia, and use these to propose an explanation of group dynamics. The data show an emergent coherence in the sizes of groups formed transiently to edit the content of subject texts, with two peaks averaging at around N = 8 for the size corresponding to maximal contention, and peaking at around N = 4 over the whole distribution, with a long tail. The numbers are consistent with Dunbar's conversational group predictions, as well as general group hierarchy. We propose an explanation building on the Promise Theory of trust, and offer a scaling law that we hypothesize may apply to all group distributions based on seeded attraction. Some caveats may apply for direct comparison with the hierarchy of social group sizes owing to the activity of bots. The results have some implications for the governance of the Wikipedia commons and the security of the platform and other similar platforms and associations.
... The limit on conversational group size appears to be set directly by the capacity to manage the mental states or viewpoints of other individuals [10]. The proposal by Burgess to examine the role of trust as a dynamical currency in social interactions between arbitrary agents [13] motivated an empirical study looking at group phenomena on a large scale in Wikipedia editing; there the familiar group patterns for humans were observed in an unusually large sample of data, and could be viewed through the new theoretical lens of Promise Theory [14] for calculating the sizes based on emergent scales [15]. ...
... In Promise Theory, impositions play a significant role in reducing trust [25]. This dynamical picture is consistent with a 'work/energy' interpretation of trust alluded to in [13]. ...
Preprint
Full-text available
We present a simple argument using Promise Theory and dimensional analysis for the Dunbar scaling hierarchy, supported by recent data from group formation in Wikipedia editing. We show how the assumption of a common priority seeds group alignment until the costs associated with attending to the group outweigh the benefits in a detailed balance scenario. Subject to partial efficiency of implementing promised intentions, we can reproduce a series of compatible rates that balance growth with entropy.
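Schematically (a sketch in our own notation, with assumed symbols rather than the paper's), the detailed balance condition described here reads:

\[ \epsilon\,B(N) \;>\; C(N) \;\Rightarrow\; \text{growth}, \qquad \epsilon\,B(N^{*}) \;=\; C(N^{*}), \]

where B(N) is the benefit of attending to a group of size N, C(N) the attention cost of doing so, and \epsilon the partial efficiency with which promised intentions are implemented; growth stops at the size N^{*} where the two sides balance.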
... From the discussion in [17], there are some pitfalls in trying to visualize untrustworthiness. We might think of a barrier sticking up ahead of us as something untrustworthy: indeed, it garners attention initially. ...
... Unlike the classical (essentially moral) view of trust, Promise Theory takes a pragmatic stance. It has two components: a memory part that acts as a potential shaping the assessment of reliability, analogous to the role of a potential energy, and a kinetic part associated with the immediate attention 'sampling rate' of an agent [17]. The less an agent assesses a process to be potentially trustworthy, the more frequently it will come back to kinetically check on the process. ...
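The inverse relationship between assessed trustworthiness and checking frequency lends itself to a one-function sketch. This Go stub (the clamping and the 1/(1 - trust) rate law are illustrative assumptions, not the model's actual functional form) makes the kinetic part concrete:

```go
// Sketch: potential trust (a learned assessment in [0,1]) sets the
// kinetic sampling interval at which an agent re-checks a process.
package main

import (
	"fmt"
	"time"
)

// checkInterval grows with assessed trustworthiness: a distrusted
// process is re-sampled at the base rate, a trusted one only rarely.
func checkInterval(trust float64, base time.Duration) time.Duration {
	if trust < 0 {
		trust = 0
	}
	if trust > 0.99 {
		trust = 0.99 // never stop checking entirely
	}
	return time.Duration(float64(base) / (1 - trust))
}

func main() {
	fmt.Println(checkInterval(0.0, time.Minute)) // 1m0s: constant attention
	fmt.Println(checkInterval(0.9, time.Minute)) // 10m0s: trust saves attention
}
```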
Method
Full-text available
This is part 7 of the time-limited project with NLnet, whose goal is to think about the possible role of trust and trustworthiness in shaping our online cyborg societal future, and in particular to review the use of machine learning analytics to explore tools which help agents assess and adapt to trust. In particular, how do we secure the reliability and resilience to attack of all social phenomena, where humans are interacting with machinery in a more integrated way? We've learned a lot of unexpected things about trust since starting this project, which tend to alter the simplistic ideas of the original proposal. Rather than jumping into coding some nonsense, I've spent time examining and understanding behavioural data in detail. This installment tries to get the thinking straight before considering how to make services and assessments of trustworthiness robust to attack. I've spent some time reading related work and discussing with colleagues in psychology, anthropology, and neuroscience. I continue to look at the data from Wikipedia to examine:
  • Disruption of groups and individuals.
  • Disruption of collective purpose.
  • Stealing of resources or data.
Although this is only a single source of data, it resonates well with other work in the literature and offers insights about how to use all that investment of knowledge to go beyond the kind of speculation usually employed in Computer Science. We have a societal interest in securing the reliability and stability of purpose in online interactions, which include API interactions but also information dissemination in blogs and chat rooms. To address these points, there are still some details to work out from the previous analysis of group dynamics. I start by making a connection between the trust picture developed from Wikipedia data and the socio-anthropological studies and neuroscience.
... Semantics of the promises were left as an exercise to the reader, since there is no connection between semantic assessment (quality) of promise keeping and timing. The two parts of trust, discussed in [3,4], represent coarse potentials for guiding the interactions of agents. Potential trust, or trustworthiness, is a prior assessment formed by learning from possibly multiple sources of information. ...
... 1. A sub-sample of sentences from the most intentionally significant paragraphs (or "legs" in the nomenclature of the program), where sentences are defined by the punctuation system of the language concerned. 2. A list of n-grams (word fragments of length n) ranked by an "intentionality" score that combines frequency of repetition with the work involved in writing. 3. A list of conceptually stable fragments, or 'longitudinal invariants': fragments that are repeated (not too often) throughout a text in a bursty manner. ...
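Step 2 of this recipe is easy to prototype. The sketch below (using word-level n-grams, and length times frequency as a stand-in for the 'work involved in writing'; both are our assumptions for illustration, not the program's actual scoring) ranks repeated fragments by such an intentionality score:

```go
// Sketch: rank repeated n-grams by a toy "intentionality" score.
package main

import (
	"fmt"
	"sort"
	"strings"
)

// ngrams returns the sliding n-word fragments of a word list.
func ngrams(words []string, n int) []string {
	var out []string
	for i := 0; i+n <= len(words); i++ {
		out = append(out, strings.Join(words[i:i+n], " "))
	}
	return out
}

func main() {
	text := "trust is a policy for inattentiveness and trust is a saving of effort"
	words := strings.Fields(text)

	score := map[string]float64{}
	for _, g := range ngrams(words, 3) {
		// frequency accumulates per repetition; length proxies writing effort
		score[g] += float64(len(g))
	}

	ranked := make([]string, 0, len(score))
	for g := range score {
		ranked = append(ranked, g)
	}
	sort.Slice(ranked, func(i, j int) bool { return score[ranked[i]] > score[ranked[j]] })

	for _, g := range ranked[:3] {
		fmt.Printf("%6.1f  %s\n", score[g], g) // repeated fragments float to the top
	}
}
```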
Method
Full-text available
This is the second part of a set of notes describing a proof-of-concept implementation for using the Promise Theory of trust. This part of the project concerns an approach for assessing the intentional semantics of agents over short-lived interactions: i) for first-time meetings, and ii) for rarefied transient encounters, where we can't rely on steady-state dynamics to measure a consistent signal stream for reliability and adjust kinetic trust accordingly. The method used here is not based on deep learning with a large energy footprint, but rather on the spacetime model of semantics as a form of signal processing.
... In previous work [1-4], I proposed a two-part model of trust and how it might work in human-machine systems, based on a self-consistent Promise Theory. This document is about the details of what trust does in such systems, and how we might implement it to guide real-time adaptation of behaviours. ...
... The two parts of trust, discussed in [2,4], represent coarse potentials for guiding the interactions of agents. Potential trust, or trustworthiness, is a prior assessment formed by learning from possibly multiple sources of information. ...
Method
Full-text available
This is not a scientific paper. It is the first part of a set of notes describing and illustrating a proof of concept for using the Promise Theory of trust to i) trace, and ii) adapt online interactions based on learning. Trust plays a role in human behaviour, and here we consider how it operates as a summary potential for human-machine interactions (including implicitly behind the scenes of large machine learning models). It accompanies a set of code stub examples that focus specifically on cases in which users interact with one another through a third-party service. The goal here is to investigate how we might use trust as a guiding potential in human-information systems. For the semantic elaboration, the specific example of users interacting with Wikipedia to read and to write contributions is used for concreteness. These notes should be viewed in parallel with the example code at https://github.com/markburgess/Trustability.
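As a flavour of what such a code stub might look like (an illustrative Go fragment in the spirit of, not copied from, the linked repository; all names and the update rule are hypothetical), a third-party service can maintain a trust potential per user and adjust it as promised behaviours are kept or broken:

```go
// Sketch: a service accumulating a trust potential per user.
package main

import "fmt"

type Service struct {
	trust map[string]float64 // accumulated potential per user
}

// Observe updates the learned potential: kept promises build trust
// slowly, broken ones erode it faster (an assumed asymmetry).
func (s *Service) Observe(user string, kept bool) {
	if kept {
		s.trust[user] += 0.1
	} else {
		s.trust[user] -= 0.3
	}
}

func main() {
	s := &Service{trust: map[string]float64{}}
	for _, kept := range []bool{true, true, false, true} {
		s.Observe("editor42", kept)
	}
	fmt.Printf("trust potential for editor42: %.1f\n", s.trust["editor42"])
}
```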