
(WORKING DRAFT 0.2)

Motion of the Third Kind (II)

Notes on kinematics, dynamics, and relativity in Semantic Spacetime

Mark Burgess

June 10, 2022

Abstract

In part I of these notes, it was shown how to give meaning to the concept of virtual motion, based on position and velocity, from the more fundamental perspective of autonomous agents and promises. In these follow-up notes, we examine how to scale mechanical assessments like energy, position, and momentum. These may be translated, with the addition of contextual semantics, into richer semantic processes at scale. The virtualization of process by Motion Of The Third Kind thus allows us to identify a causally predictive basis in terms of local promises, assessments, and impositions. As in physics, the coarser the scale, the less deterministic predictions can be, but the richer the semantics of the representations can be. This approach has immediate explanatory applications to quantum computing, socio-economic systems, and large-scale causal models that have previously lacked a formal method of prediction.

Contents

1 Introduction
  1.1 Notation and terminology
  1.2 Relationship between assessments and promises
  1.3 Process relativity: how agents observe one another
  1.4 Covariance between observers

2 Kinematics and dynamics of promises and assessments
  2.1 Qualitative and quantitative description
  2.2 Embedded spacetime view, configuration and phase space
  2.3 Derivatives on a static graph topology
  2.4 Separation of boundary value information and guiderails
  2.5 Equations of motion and constraint
  2.6 Canonical and derived currencies, e.g. energy, money, etc.
  2.7 Promise dynamical manifesto

3 Laws of behaviour with fixed boundary conditions
  3.1 First order static equilibrium solutions
  3.2 Second order flow from promises with channel diffusion
  3.3 Currency landscapes vs process semantics
  3.4 Direction and current
  3.5 The meaning of momentum in physics
  3.6 The meaning of momentum in Promise Theory

4 The semantics of force, momentum, and energy in physics
  4.1 Quantifying gradients and exchange in Newtonian physics
  4.2 Quantifying gradients and exchange in Quantum Mechanics
  4.3 Quantifying gradients and exchange in Promise Theory
  4.4 Currency accounting patterns in an agent viewpoint
  4.5 Example: Coupled oscillators in Newtonian mechanics and Promise Theory
  4.6 Coupled agent interpretation in Semantic Spacetime
  4.7 Some examples from different semantic scales
  4.8 The variational energy (currency) formulation in physics
  4.9 A variational formulation for agents?
  4.10 Reinterpreting the action principle in terms of locality
  4.11 Conservation requirements
  4.12 Ballistic or impulsive change
  4.13 Loops, sampling processes, and fixed 'currency price' and energy levels

5 Promise lifecycle and boundary dynamics

6 Summary and discussion

1 Introduction

In part I of these notes [1], we did not write down a comprehensive dynamical picture of promise graphs when defining virtual motion. Typical uses of Promise Theory have involved solving only static collections of agents and promises, as preordained circuitry on a low level. However, some hints about the dynamical evolution of the promise graph were provided in [2–6]. In these follow-up notes, I want to explain how we can use Promise Theory to expose the semantics latent in physical law and construct a causal dynamical account of virtual processes by appealing to the same quantitative techniques that have proven successful in physics.

Promise Theory is founded on the assumption that the autonomy of local agents is primary, i.e. that all motive impetus originates from the inside out. The agents are suggestive of the concept of an 'aether', except that our agent substrate is the active party, not a passive medium. The agents are more akin to the cellular substrate of a cellular automaton, but without a lattice topology. This contrasts with the fully exterior viewpoint taken by the Newtonian mechanics of rigid bodies, where activity was essentially ballistic and measured by continuous variables for position and momentum. In an agent view, we need to expose the assumptions that make such a point of view possible. In our model of Motion Of The Third Kind, the world is made of agents that promise to share interior properties with one another. These sharing channels are the result of promise bindings between an offer or donor (+) and an acceptance or receptor (−) by causally independent agents. As the agents coordinate their autonomous activities, processes are formed between them. The simplest processes are transactions, then oscillations, waves, and then the formation of large-scale structures that can move relative to one another.

The nature of agents' promises is undefined. Signals may be passed over any kind of channel: stigmergic (publish-subscribe), ballistic (push), telescopic (pull), etc. Rather than the transport mechanism, we are interested in understanding the resulting semantics and dynamics of the changes that follow from it. Agents do not move in any sense; indeed, we don't consider agents to be bodies embedded in some theatrical spacetime as Newton did. Only their promised attributes move, by changing host, like musical chairs. It turns out that this model of motion is more like the model of state evolution in Quantum Mechanics, which is surely interesting in its own right. The symmetries and objective relativity of Euclidean and Minkowski spacetime are not presumed in advance. Such assumptions of continuity and global uniformity are to be determined as virtual constructs built on top of the basic agent-to-agent interactions. In a semantic spacetime, space and time are not a passive theatrical backdrop but an active causal circuitry by which agents interact and larger processes can form on a collective scale. Process first, embedding for convenience.

Promise Theory formally separates concerns into promises, impositions, and assessments. In building a theory of cooperation, assessments (including observations and measurements) are of crucial importance, since they are the origin of relativity. Without assessment, an agent would not have the ability to measure external promises and alter its own state in response to external influence. Without assessment, we would have to assume that agents were altered ballistically by imposition. The process by which assessments are made, on whatever level an agent has the capability to perform, has to play a central role in determining its response. The default model of response in physics is a direct driving, like a contact potential, but this is known to be simplistic even in elementary physical systems.

In the Promise Theory view of virtual motion, the ground state of all agents is to be fully autonomous, and each agent's assessments inform only the receiving agent itself. We thus need to explain how virtual processes can take shape and propagate across large numbers of agents, when the agents are themselves causally independent.

In this picture, it's natural to ask whether fully autonomous agents might not simply refuse to comply with information offered to them by neighbours. Indeed, that's entirely possible, and we see this behaviour on scales at which agents become more connected and more sophisticated. However, on an elementary scale agents will not necessarily have the interior capacity to 'know enough' about the process they partake in to resist or oppose the general flow (think of birds in a flock, or so-called emergent swarm intelligence). Separation of scales in a pattern of behaviour will tend to lead to mean field behaviours that are collective in nature [7].

1.1 Notation and terminology

For completeness, let's summarize the assumed nomenclature. The notation of Promise Theory follows that described in [8]. There are many indices in the expressions that follow. Readers used to vector labels should take care. Subscripts like $A_i$ do not refer to coordinate components of spacetime vectors but rather refer to agents, which are effectively point locations. In that sense, a label $A_i$ signifies a complete location, not a component of a pointer to one. This gives primacy to locally active sites rather than to passive symmetries invoked for simplicity. I'll also avoid using the Einstein convention over repeated indices, in order to make summations explicit and avoid confusion with coordinate systems.

We may recall that agents $A_i$ make promises to offer (+) or accept (−) information independently. Promises have a type and magnitude, which are contained in the body $b$:
$$A_i \xrightarrow{\pm b} A_j, \qquad (1)$$
and the aggregation of all such promises forms a graph, whose matrix may be written $\Pi^{(\pm)}_{ij}$.

• Since any graph is equivalent to an adjacency matrix, we may consider a quantity $\alpha_i$ to be part of a column vector on the total graph. The rows and columns label agent identity or effective 'position', as discussed already. We must not assume that positions form a regular lattice as in a Cartesian basis. The promise graph could be fluctuating in any fashion, e.g. the structure of the Internet is a diverse graph with varying dimensionality [9].

• Continuum derivatives have their usual meanings on coordinatized spaces, when we refer to conventional physics.

• The partial derivative $\partial_t$ refers to an interior change, i.e. a change at the same agent location $A_i$ or Euclidean location $x$. The clock referred to by $t$ is to be explained in the semantics of its usage.

• $\vec{\nabla}_j$ refers to an exterior change, between agents, starting from the current agent location $A_i$ in the direction of $A_j$, if that direction exists.

• $\vec{\nabla}$ without an index refers to the sum over all directions leading from the agent on which it acts, like a divergence.

• $\psi_i$ refers to the interior state of an agent $A_i$, i.e. the self-graph of interior promises $\Pi_{ii}$.

• A quantity $b_i$ refers to an exterior promised partial state belonging to $A_i$, i.e. is equivalent to stating $A_i \xrightarrow{+b} *$. Thus, when any promised quantity $A_i \xrightarrow{+b} *$ is promised by a domain of many agents, we can write it as an effective function of position, $b(A_i)$, as from a third-party observer view.
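To make these derivative conventions concrete, the exterior change toward a neighbour can be sketched as a finite difference along a promised channel, and the index-free form as a sum over outgoing channels. A minimal Python sketch, with an invented three-agent graph and invented field values:

```python
# Sketch: exterior difference and divergence on a promise graph.
# Agents are labels; 'adj' lists the directed channels A_i -> A_j.
# 'phi' maps each agent to a promised scalar quantity b(A_i).
# The graph and values are illustrative, not taken from the text.

adj = {"A1": ["A2", "A3"], "A2": ["A3"], "A3": []}
phi = {"A1": 1.0, "A2": 4.0, "A3": 6.0}

def grad(phi, i, j, adj):
    """Exterior change at A_i in the direction of A_j, if that channel exists."""
    if j not in adj[i]:
        raise ValueError(f"no channel {i} -> {j}")
    return phi[j] - phi[i]

def div(phi, i, adj):
    """Index-free form: sum of exterior changes over all outgoing channels."""
    return sum(phi[j] - phi[i] for j in adj[i])

print(grad(phi, "A1", "A2", adj))  # 3.0
print(div(phi, "A1", adj))         # (4-1) + (6-1) = 8.0
```

Note that `grad` is only defined where a channel exists: on a graph there is no gradient 'in a direction' unless some agent promises that direction.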

The closest analogue of the adjacency matrix from traditional graph theory for a promise graph is the matrix of combined ± promises:
$$\Pi^{(\pm)}_{ij} \sim A_{ij}. \qquad (2)$$
Note that the label $(\pm)$ does not denote a matrix product: $\Pi^{(\pm)}_{ij}$ is unrelated to the usual matrix product over agent indices $(\Pi^{(+)}\Pi^{(-)})_{ij}$. We can use the wedge notation to represent the logical semantics of 'AND' (not geometrical forms), and the product notation
$$\Pi^{(\pm)}_{ij} = \left(\Pi^{(+)} \wedge \Pi^{(-)}\right)_{ij} \qquad (3)$$
thus represents the element-by-element product $\Pi^{(\pm)}_{ij} = \Pi^{(+)}_{ij}\,\Pi^{(-)}_{ij}$, not the usual matrix product. This comes about because linearity is applied over an exterior scale, whereas matching occurs locally at what we associate with 'point locations'.

$$\Pi^{(\pm)}_{ij}(b) = \left( A_i \xrightarrow{+b_+} A_j \right) \wedge \left( A_i \xleftarrow{-b_-} A_j \right) = A_i \left( \substack{\xrightarrow{+b_+} \\ \xleftarrow{-b_-}} \right) A_j. \qquad (4)$$

We can define the assessment of the degree to which this promise binding $b_+ \cap b_-$ is kept, by the capability matrix:
$$C^k_{ij}(b) = \alpha_k\, \Pi^{(\pm)}_{ij}(b), \qquad (5)$$
i.e. an assessment, by some agent $A_k$, that $A_i$ and $A_j$ keep their promises about $b$ to one another with expected magnitude $b_+ \cap b_-$. This matrix $C^k_{ij}(b)$ is what we must use instead of a plain adjacency matrix, as it contains crucial information about the inhomogeneity, or intrinsic local characteristics, of the agents involved in our society.
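The distinction between the element-by-element binding and an ordinary matrix product is easy to check numerically. A small numpy sketch, with invented offer and acceptance matrices over two agents:

```python
import numpy as np

# Invented (+) offer and (-) acceptance promise matrices over two agents.
Pi_plus  = np.array([[0, 1],
                     [1, 0]])   # A_i offers b to A_j
Pi_minus = np.array([[0, 1],
                     [0, 0]])   # acceptance recorded on the same i->j channel

# Eq. (3): the (+/-) binding is an element-by-element product...
Pi_pm = Pi_plus * Pi_minus            # only channels promised on both sides
# ...and is NOT the matrix product over agent indices:
assert not np.array_equal(Pi_pm, Pi_plus @ Pi_minus)

# Eq. (5): a third party A_k scales the binding by its assessment alpha_k.
alpha_k = 0.8
C_k = alpha_k * Pi_pm                 # capability matrix C^k_ij(b)
print(Pi_pm)                          # a 1 only on the doubly-promised channel
print(C_k)
```

Only the channel promised with both signs survives the match; the unreciprocated offer from the second agent contributes nothing.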

$C^k_{ij}$ is the adjacency matrix for a directed graph. Readers used to reasoning by fiat relationships (by imposition-like behaviour), e.g. in Category Theoretic bindings, should take care. The symmetrization over ± promises does not render a binding undirected, in the sense of graphs. An undirected link would require four promises, with ± in both directions. Any promise which is not complemented by its opposite sign is null potent and can be ignored. This is somewhat analogous to the way quantum transitions require both a $\psi$ and a $\psi^\dagger$, or a bra and a ket, to complete a channel.

An assessment, in Promise Theory, is an interior property of an agent. Since agents may have different interior properties and state resources, we can't make a general rule about the resolution of assessments. An assessment is some mapping from set-valued promise bodies $b$ to a supportable internal representation $\alpha(b)$:
$$\alpha(\cdot) : \mathrm{Set} \mapsto \mathbb{R}^n, \qquad (6)$$
which is assumed only to be some general internal vector of real values. In practice, assessments will be constrained by interior and exterior boundary conditions. This is the most general scalar mapping. At this stage we needn't assume more than this¹.

¹Some readers might be tempted to delve into Category Theory to express some of the relations; however, Category Theory's goal is abstract rather than operational, which would distract from the mechanical view I want to focus on here.
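As a toy illustration of this signature, one can code an assessment as a function from a set-valued promise body to a real vector. The scoring rule below is an invented placeholder; only the shape of the mapping, Set to R^n, comes from the text:

```python
# Sketch of eq. (6): alpha(): Set -> R^n.
# The weights are a hypothetical interior scale belonging to one agent;
# Promise Theory fixes only the signature, not the particular mapping.

def assess(body: frozenset) -> tuple:
    """Map a set-valued promise body to an internal vector, here n = 2:
    (how many body elements are promised, a crude weighted valuation)."""
    weights = {"offer-data": 1.0, "reply": 0.5}   # invented interior resource
    return (float(len(body)), sum(weights.get(x, 0.0) for x in body))

a = assess(frozenset({"offer-data", "reply"}))
print(a)  # (2.0, 1.5)
```

A different agent, with different interior resources, would legitimately return a different vector for the same body, which is the point of leaving the mapping unconstrained.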


In a graph, topology and continuity of direction are separate ideas. In a continuum, the coincidence limit in derivatives essentially removes this distinction, and average directions are normalized in terms of regional or global basis vectors. Thus, in a continuum, one can have concepts like the velocity of a pointlike object, and position itself is a vector. In a graph, position is only a name or label, and velocity is only promised along point-to-point channels. We'll make extensive use of these facts in what follows.

1.2 Relationship between assessments and promises

The 'measurement problem', or the separation of a process of measurement sampling from the evolution of a dynamical system, is an issue that only surfaced in the twentieth century, first in Special Relativity and later in Quantum Mechanics. Before this, it was assumed that the equations for dynamical variables would yield a precise causal answer that decoupled from direct observation. In other words, the process of observation was assumed to be ubiquitous and instantaneous, and was taken for granted.

Questions like 'how does one body observe the properties of another?' didn't need to be answered, because equations represented all behaviours as expressions of universal law, which bodies had no choice but to obey. This decoupling between the agents of study and the processes of interaction that trace them (also referred to as decoherence in quantum theory) becomes less and less tenable as the agents in a system probe smaller scales.

Information theory introduced the idea of communication channels in [10]. Promise Theory brings these ideas together so that every agent is expected to observe every other by an explicit process. The interactions are not assumed to be independent of the agents' ability to observe one another, as they are in most physical equations of motion. Instead, we have to make the interactions of agents compatible with Information Theory: agents sample one another's states, something like the dynamics of a game between players. For every exchange in which one agent is sampled by another, there has to be a local communication of state between them. This implies that agents have interior subprocesses (interior timescales) that are unobservable but faster than observable exterior changes.

In Promise Theory, measurements are replaced by 'assessments', which are process valuations made within agents, in the role of observer. Spacetime is discrete, and each agent forms its own assessments, so the detection of gradients can only be made from these assessments by each single agent. The observation of gradients thus relies on information shared between agents, and thus forces are always non-local properties². By contrast with the Newtonian calculus, where limits are used to define all properties at a point, the observation of a force (as an external party) requires the assessments of at least two successive agents (e.g. by a third party) to form a concept of a gradient. The assessment of gradients by third parties is how we calibrate our agent view to form a universal Newtonian kind of description.

All models of spacetime play a basic role in discriminating change. In simple mechanical formulations of physics, the equilibration of forces is treated as instantaneous, which really means 'much faster than any timescale in the resulting behaviour'; e.g. force is transmitted instantaneously throughout a rigid body, or by a connecting spring or a potential $V(x)$, but the same does not extend to the displacements of normal modes. These assumptions are usually left implicit and are built into the semantics of the formulae.

Example 1 (Schrödinger equation) The outcome of the Schrödinger equation is a representation of an 'instantaneous' equilibration of changes, leading to a state we call the wavefunction. The name refers to the deliberate injection of wave semantics into the representation of dynamics, based on the observation that $p = h/\lambda$ for waves. Wave semantics for momentum lead directly to the apparent non-locality of predictions at different locations. The idea of momentum as a time derivative $m\dot{x}$ is replaced with momentum as a gradient of a distributed process over space (which may be static or moving).

²In physics the term 'information' is used to imply a bulk measure of possible configurations, represented by an entropy. We need to distinguish this measure of bulk information from specific information (a particular state configuration or message). Information channels are thus things that can transmit specific messages, not just thermodynamic bulk.


Example 2 (Game theory) An instantaneous response is not allowed in Game Theory. Each player takes their turn in a synchronous manner. The cycle of assessment intervenes in the causal flow through the Nyquist sampling law. The game proceeds in rounds of interaction [11–13].

Finally, it’s worth remarking that, in elementary physics, we typically deal with (memoryless) Markov

processes, but as we increase the sophistication or ‘complexity’ of agents at a given semantic scale, the

role of interior process memory (represented by the interior degrees of freedom of agents) becomes more

important. This begins with phenomena like hysteresis, and extends to phenomena in biology, where

agents like DNA encode complex processes. It also clearly applies to cloud computing where agents are

virtual computers.
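The difference between a memoryless response and one mediated by interior memory can be sketched in a few lines. The thresholds below are invented; the point is only that the hysteretic agent's response to the same input depends on its interior state, whereas the Markov response does not:

```python
# Sketch: a memoryless (Markov) agent responds only to the current input,
# while an agent with interior degrees of freedom exhibits hysteresis.
# All thresholds are illustrative assumptions.

def markov_agent(x):
    """No interior memory: the response is a function of x alone."""
    return 1 if x > 0.5 else 0

class HystereticAgent:
    """Interior memory: the switching threshold depends on current state."""
    def __init__(self):
        self.state = 0
    def respond(self, x):
        if self.state == 0 and x > 0.7:
            self.state = 1
        elif self.state == 1 and x < 0.3:
            self.state = 0
        return self.state

inputs = [0.6, 0.8, 0.6, 0.2, 0.6]
h = HystereticAgent()
print([markov_agent(x) for x in inputs])   # [1, 1, 1, 0, 1]
print([h.respond(x) for x in inputs])      # [0, 1, 1, 0, 0]
```

The input 0.6 occurs three times and the hysteretic agent answers it differently each time it matters, which is exactly the departure from Markov behaviour described above.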

1.3 Process relativity: how agents observe one another

Observing agent processes is a surprisingly difficult problem. It suggests the need for apparent constraints on the homogeneity of systems. Suppose we have an agent $A_k$ that wishes to observe a process $P$ as it moves between agents $A_1$ and $A_2$ (see figure 1).

Figure 1: Trying to observe a remote process in agent spacetime. To help discriminate paths, an agent can cooperate with others along different routes, like a grating (black circles) that labels different routes. An autonomous agent can't promise that an observation will follow the same path on repeated measurement, so we can never expect to measure velocity remotely, unless the measurement involves so many agents and paths, over a consistent equilibrium guiderail, that the differences are irrelevant (such as in a dense lattice or manifold).

The first possibility for observing the motion is for agents $A_1$ and $A_2$ to promise to voluntarily signal $A_k$ when they are involved with the process. In order to receive such a signal, $A_k$ has to be tuned in to the signals, i.e.
$$A_1, A_2 \xrightarrow{+\,\mathrm{signal}\,|\,P} A_k \qquad (7)$$
$$A_k \xrightarrow{-\,\mathrm{signal}} A_1, A_2. \qquad (8)$$
In this case, the agents either have to know the observer's location or they need to broadcast the signal. It's impossible for $A_k$ to distinguish these cases, or to know the time it took for the signal to arrive. The observer is a passive receiver. Ultimately, $A_k$ is beholden to the uncertainties of the arrangement.


A second possibility is that the agent $A_k$ probes the agents, assuming that they will respond with a reply, i.e.
$$A_k \xrightarrow{+\,\mathrm{probe}} A_1, A_2 \qquad (9)$$
$$A_1, A_2 \xrightarrow{-\,\mathrm{probe}} A_k \qquad (10)$$
$$A_1, A_2 \xrightarrow{+\,\mathrm{reply}\,|\,\mathrm{probe} \wedge P} A_k \qquad (11)$$
$$A_k \xrightarrow{-\,\mathrm{reply}} A_1, A_2. \qquad (12)$$
This relies on $A_k$'s ability to send and sample messages fast enough (at the Nyquist frequency) to capture a frame in which the process is passing through the agents, and to measure the round-trip time. This relies on a number of key assumptions.

The intermediate agent theorem [8] tells us that we can't take any agent's cooperation for granted, and we clearly can't guarantee the density of intermediate agents needed to provide a sufficient number of unique paths by which $A_k$ might distinguish $A_1$ from $A_2$. In the Newtonian view, we simply take this for granted, based on the continuum of points in Euclidean geometry.

To go beyond the simple description of velocity, from a local information channel to a collaborative network, is impossible without some high degree of order. Such long-range order would amount to a homogeneous phase of the agents forming spacetime. This can only occur above a certain density of paths. Without some kind of agent model, such a claim would have to be speculative.

Departing from the view of spacetime as a continuum, i.e. a dense, homogeneous, and isotropic medium, leads us to model an unreliable network of observational locations (see figure 1). Path variations can lead to insurmountable distortions of the observational medium. We know this kind of trouble occurs in solid state media. The phenomenon of Anderson localization is an example in which random impurity sites, acting as holes, confound transport by trapping waves in local wells [14]. Relativity plays a role in two ways:

• There are observational delays in observing remote phenomena, due to the need to pass information along some intermediate process, through paths that may or may not be inert; e.g. light or sound signals can be absorbed, refracted, etc.

• We may need to be able to transform the view of one agent into the view of another. This can only be imagined on a large coherent scale, since one isn't even assured of equivalent information being available to all agents on a fundamental level. We have to assume certain equivalences and homogeneities between observers in order to satisfy the requirements of a theory like Galilean or Lorentz invariance, for instance. Symmetry is not a guarantee.

There is only one experiment a process can perform to try to observe the remote motion of another process without basing measurement on the assumption of a regular measure: the observer $A_k$ can offer (+ promise) repeated signals at constant time intervals, and hope that these will be accepted and returned by the remote process along a consistent path. The observer can measure the round-trip time and the time interval between repeated measurements. If the paths taken by probes are random (as they are in Internet traffic, for example, or in randomly vibrating lattices), then the round-trip time can vary.

All this assumes that the agents intermediate between observer and observed are cooperative, reliable, indeed invariant in their behaviour, so that repeated measurements can average away uncertainties and detect patterns and trends. We have to confront the idea that we simply don't know this, and it may well be that we have to take it as an assumption and hope for its consistency. Note particularly that the geometry of such paths cannot be determined, since one is completely dependent on them for cooperation. In fact, not all agents may participate. It is only the formation of a stable guiderail that can allow observation to take place at all. For example, the stabilization of a quantum wavefunction weights the chance of certain paths participating in change, due to the cooperative distribution of local energy density.


Longitudinal motion could be inferred by a temporal dilation of the intervals (red shift). Transverse motion relies on there being a sufficient number of different paths for the observer to be able to distinguish position, by an angle $\theta$ observed to change with lateral motion. If we assume a consistent and fixed geometry with a constant speed of signalling, then the angular change observed could be estimated repeatedly, taking a longer time as $\Delta t_2$:
$$\cos\theta = \frac{\langle \Delta t_1 \rangle \pm \Delta(\Delta t_1)}{\langle \Delta t_2 \rangle \pm \Delta(\Delta t_2)}, \qquad (13)$$
where $\Delta t_1$ and $\Delta t_2$ are the round-trip times between $A_k$ and $A_1$, $A_2$ (see figure 1), with statistical uncertainties. One can try to measure the drift over long times and large distances and find an average for motion. Then, by geometric arguments and the belief in a constant speed, the answer has to converge on the usual Newtonian and Lorentzian results.
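A crude numerical reading of this estimate: feed noisy round-trip samples into the ratio above and recover a bearing angle, with a rough error band from the sample spreads. The sample values are invented for illustration:

```python
import statistics, math

# Sketch of the angle estimate: round-trip times A_k <-> A_1 and A_k <-> A_2.
# The samples are invented; real paths could fluctuate far more wildly.
dt1 = [2.0, 2.1, 1.9, 2.0]
dt2 = [4.1, 3.9, 4.0, 4.0]

m1, s1 = statistics.mean(dt1), statistics.stdev(dt1)
m2, s2 = statistics.mean(dt2), statistics.stdev(dt2)

cos_theta = m1 / m2                                       # central estimate
spread = (m1 + s1) / (m2 - s2) - (m1 - s1) / (m2 + s2)    # crude error band

theta = math.degrees(math.acos(cos_theta))
print(round(cos_theta, 3), round(theta, 1))   # 0.5 60.0
```

The error band only makes sense if the intermediate agents behave consistently enough for repetition to average away the fluctuations, which is precisely the assumption questioned in the text.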

In the long-time limit, measurement could be approached statistically and might converge to a constant answer. However, over timescales that are short compared to the changing of paths (e.g. in quantum systems or the Internet), there is effectively no way of obtaining the necessary information to calculate motion. The assumption of a regular path and a consistent velocity relative to the observer's interior clock is not a reliable one. This problem faces everyone trying to measure data flows on the Internet, for example. The observation of a suitably cooperative propagating process may in the worst case be impossible. In the best case, it might appear random and Brownian in nature, as in field path integrals³.

Assuming control over a local region, one might try to use assemblies of agents in cooperation to form gratings that label different paths (the dark agents in figure 1). These can act as alternative satellite receptors to discriminate between different directions. However, there is no guarantee that the paths taken by the observational signals will not join up later. We need to make some assumptions in order to be able to observe:

• The distance or path from $A_k$ to $A_i$ and back should be single valued, on average, but may fluctuate. This is known to be untrue for the Internet, whose dimensionality is not even constant over the short range [9].

• The agents or their observational paths have to be distinguishable by $A_k$. However, paths that begin separate can still cross later, with topological irregularities. This is directly observable in the Internet, and in lattices with holes.

• One has to assume that there is a consistent field of partially ordered intermediate agents. Again, if the number of possible paths is not infinite, then a guiderail can route several paths, like a lens, through common agents, leading to indistinguishability.

These are essentially the conditions one applies to solutions of constraint equations when modelling a dynamical problem, e.g. in a diffusion distribution, or for the wavefunction in Quantum Mechanics.

1.4 Covariance between observers

We normally think of the transformation of a spacetime viewpoint as a tensor transformation. We can

try to imagine such an object for autonomous agents—though the idea that one would have sufﬁcient

information to construct this is far from clear. Suppose we use agent labels k, ` to refer to a neutral third

party agents, whereas i, j will refer to the active agents in a pair process. The capabilities Ck

ij (b)of an

agent are assumed to be intrinsic to the agent, but the assessment of those capabilities is determined by

the agent making an assessment Ak. Thus, different agents will make different assessments and we can

postulate some transformation matrix Ωto transform these:

Ck

ij (b)7→ C`

ij (b) = X

k

Ω`

kCk

ij (b).(14)

What kind of transformation group would this represent? How might this change of perspective be

transported around a loop, as parallel transport? These questions remain to be answered.

³See for example the discussion in [15] of Brownian support for path-integral measures for a quantum field.


2 Kinematics and dynamics of promises and assessments

Let's review some of the kinematic and dynamic concepts used to describe change in physics, and relate them to agents and promises. This should cast the accounting procedures that are applied to dynamics in a familiar light. Concepts like force and rate of change are ubiquitous in classical physics, and are based on gradients within a differential description. These smooth functions apply over scales that can be considered sufficiently differentiable⁴.

Partial order is a discrete concept. Gradients are the effective differential characterization of the partial ordering of locations within a field. Thus the existence of a changing concentration of some property over 'space' allows us to observe that space. Without such labels, the points would be indistinguishable, and the idea of space and distance would be a purely theoretical construct. In physics, we typically ignore this and justify the existence of empty space via the processes that move through it, or introduce fields and potentials. In Promise Theory, general gradients are effectively replaced by chains of conditional promises that prescribe an effective order in space and time over the processes that satisfy them as boundary conditions. Spacetime has no invariant meaning outside of processes in an agent model.

As we'll see, classical mechanics favours smooth behaviours because it assumes there is an in-built rigidity of the 'causal channel' acting between the entities embedded within spacetime. Such a rigid guarantee of transmitted influence is not reasonable in general. Indeed, as noted in part I, there is no obvious reason why a transition from one location to another in a trajectory would continue from the new position in the same direction. Such an effect must be a large-scale collaborative property of agents. Without large-scale coherence, changes could appear at apparently random locations over a guiderail (as is the case in quantum mechanics), because interior causality may not be directional or even be determined by what can be observed on the exterior between local entities. Solutions for agent behaviour may therefore appear to fluctuate at disparate locations at different times, appearing disordered or even 'random'.

2.1 Qualitative and quantitative description

Computer Science is strongly focused on the semantics of processes. Physics focuses on mainly on

the dynamics of processes, and embeds semantics only by assumption, through the properties of alge-

bra. One of the goals of Promise Theory is to unify semantics (qualitative functional behaviours) with

dynamics (quantitative behaviours). The suppression of semantics leads to famous confusions when

interpreting different theories. In the general case (and thus for virtual motion) we need to separate qualitative and quantitative issues carefully. Both can be represented as promises. The price we pay is that the equations of a dynamical system must be supplemented with additional expressions, as we'll see in the coupled oscillator example to follow.

In the general case, we need equations to determine exterior inﬂuences as well as interior states

for every active location in spacetime. This includes the connectivity of agents via possibly multiple

independent channels of influence. These channels are the equivalent of forces, and the promises they make reflect their semantic roles. This is closer to the situation in modern particle physics.

Remark 1 (Dynamical variables) The principal kinematic variables in classical systems stem from bal-

listic origins: they are positions ~x and momenta ~p. Directionality is built into vector spaces, so the vector

representation compactiﬁes a lot of reasoning by submerging it as part of the vector infrastructure. From

these, the concept of energy emerges as a link between properties over space and time through the vehicle of ‘forces’ F and potential energy V, where the forces arise from gradients of some underlying

landscape function of ‘potential’. Both forces and potentials appear in Newtonian mechanics as ﬁelds

over space itself, providing a surrogate labelling. The equations of change in physics refer to material

4 The continuum limit uses dx, dt → 0, which seems to refer to the very small. In truth, this artificial limit is a representation of ‘zooming out’ of the system to a low degree of resolution. Thus it makes most sense in the limit of very large systems, where the rough small scale details are negligible.


‘bodies’ that are acted on by these forces. The bodies are the only active parts of space in the Newtonian

scheme. This changes in Einstein’s General Relativity.

Example 3 (Music) Music is an example of how virtual processes take place on many levels. If one

considers a piano to be a semantic spacetime, then the playing of music amounts to virtual motion of

the third kind across the keys. In frequency space, melody is a virtual promise encoded by dimensionless

ratios of the string frequencies, not directly by the actual physical strings—but denying the existence of

the ‘aether’ of physical strings would be foolish. There are lessons here about how we should not take

dogmatic positions of ideology when describing phenomena.

2.2 Embedded spacetime view, conﬁguration and phase space

The principal function of spacetime is to chart causality. In a Promise Theory model, exterior spacetime

‘exists’ only in terms of the sequences of interacting agents and their connective channels, through which

a process passes. Snapshots of all agents form spacelike hypersurfaces. Processes can only access one

or more cones of connected agents, through promised channels of different kinds, analogous to the light

cone in physics.

In classical physics, the assumption of trajectories as the solutions of equations of motion within a

system of coordinates suggests the ﬁrst confusion of notational semantics. When we write a trajectory as

x(t) or a force field as F(x, t), the quantities labelled ‘x’ in these two cases have very different semantics.

The former is a speciﬁc association representing an ordered sequential orbit of positions taking its values

from the possible x, t. The latter is an ordered ﬁeld with no preferred locations or times.

The spacetime notion of a continuous path trajectory is ﬁrmly planted in our expectations by Newto-

nian physics, thus we ﬁnd behaviours that seem to jump from location to location to be ‘weird’. However,

this is perfectly normal where behaviour arises from within agents rather than by transport between them.

In Promise Theory, our goal is to maintain clear semantics at all times. Agents connect together

to form guiderails rather than trajectories, by virtue of the explicit promises they offer and accept5.

The interior states of agents become causally dependent only if they promise to receive and respond to

promised signals from one another. This is not a quantum idea; indeed, rigid entanglement is built into

classical mechanics as an axiom, and we are usually looking at ways to relax this assumption. In other

words, Promise Theory makes minimal assumptions about interaction, and consequently looks more

similar to quantum than classical mechanics. Thus, when we talk about equations of ‘motion’, we really

mean equations about changes to a process that involves and spans multiple agents.

2.3 Derivatives on a static graph topology

Rates of change are an intrinsic part of processes: without change there is no process. Change in time

is the fundamental or intrinsic change (at a single agent or location). Change over space (at constant

time) refers to state which is imprinted in the local memory of agents. Virtual processes on graphs

muddle spacetime concepts, since a single hop represents both forward motion and a tick of a clock,

with effectively constant velocity, so we need to take care in deﬁning rates over spacetime intervals with

semantics that are appropriate for the process6.

In Promise Theory, with its successive causal boundaries, we are always faced with a distinction

between interior time and exterior time. Interior state may be accumulated from the change processes

over past times (memory processes)—and may be externalized as an instantaneous conﬁguration, as in

centre of mass motion. This induces purely exterior timelike changes (i.e. Markov processes), such as

when a force acts on some rigid body.

5 This is similar to particle physics, where messenger particles of different types distinguish different meanings. It contrasts

with the idea that meaning comes from the order and continuity of a set of points alone.

6 This subtlety is responsible for much confusion in quantum mechanics, and for the inability of a quantum oscillator to come to rest.


The deﬁnition of a derivative on a graph is thus more closely connected with ﬂow systems, and

the deﬁnition of advanced, retarded, and Feynman propagators for ﬁelds. We need to be careful about

the semantics of derivatives for graphs, since the coincidence limits δ(x, t) → 0 cannot be taken. A difference over some set of values ε(A_i), distributed over the agents A_i, may be written

$$\overrightarrow\nabla_j \varepsilon_i \equiv \overrightarrow\nabla_j \varepsilon(A_i) = v_j - v_i, \qquad (15)$$
$$\overrightarrow\nabla_j \varepsilon_i = A_{ij}\, v_j - v_i. \qquad (16)$$

It refers effectively to an arrow, anchored (by one end) at the agent A_i. The question here is what values of j are available at a given location A_i? The only directions j available to form a difference of values for ε_i are constrained by the adjacency matrix (A_ij) for the graph of agents A_i. However, this is not a

homogeneous matrix, so Aij = 0 for a large range of i, j. For this reason, we need to be more careful

about deﬁning the channels that connect agents, by using the promise matrices deﬁned in (4) [16].

Taking the effective adjacency to be A_ij ↔ C^k_ij, as viewed by some agent k, we account for the channel capacities of the flows from past or future into the local state.

There are now three choices for the proper time derivatives, depending on how we anchor the arrow to the root agent A_i. We define the total derivatives, in basis k, by:

$$\overrightarrow\nabla \varepsilon_i = \sum_j C^k_{ij}\,\varepsilon_j - \varepsilon_i \qquad \text{(retarded)} \qquad (17)$$
$$\overleftarrow\nabla \varepsilon_i = \varepsilon_i - \sum_j C^k_{ji}\,\varepsilon_j \qquad \text{(advanced)} \qquad (18)$$
$$\overleftrightarrow\nabla \varepsilon_i = \sum_j C^k_{ij}\,\varepsilon_j - \sum_j C^k_{ji}\,\varepsilon_j \qquad \text{(Feynman mixed)}, \qquad (19)$$

where the repeated j indices are summed over. If we don't sum over repeated indices, we can define the partial derivatives

$$\overrightarrow\nabla_j \varepsilon_i = C^k_{ij}\,\varepsilon_j - \varepsilon_i \qquad (20)$$
$$\overleftarrow\nabla_j \varepsilon_i = \varepsilon_i - C^k_{ji}\,\varepsilon_j \qquad (21)$$
$$\overleftrightarrow\nabla_j \varepsilon_i = C^k_{ij}\,\varepsilon_j - C^k_{ji}\,\varepsilon_j \qquad (j \text{ not summed}). \qquad (22)$$

The mixed derivative is analogous to the Feynman propagator in electrodynamics, and also to the Wigner

function, since it is anchored at the mid-point of a two-hop arrow conjunction.
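To make the three derivative forms concrete, the following minimal sketch computes the retarded, advanced, and mixed derivatives of (17)-(19) for every agent at once. The 3-agent directed cycle standing in for the channel matrix C^k_ij, and the values ε_i, are invented for illustration:

```python
import numpy as np

# Invented channel matrix standing in for C^k_ij: a directed 3-agent cycle.
C = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])

# Scalar assessments eps_i held by each agent A_i (also invented).
eps = np.array([1.0, 2.0, 4.0])

# Eq. (17): retarded derivative, sum_j C_ij eps_j - eps_i.
retarded = C @ eps - eps

# Eq. (18): advanced derivative, eps_i - sum_j C_ji eps_j.
advanced = eps - C.T @ eps

# Eq. (19): Feynman mixed derivative, sum_j (C_ij - C_ji) eps_j.
mixed = C @ eps - C.T @ eps

# The eps_i terms cancel, so the mixed form is the sum of the other two.
assert np.allclose(mixed, retarded + advanced)
```

The cancellation in the last line is one way to see why the mixed form behaves like a mid-point anchor between the forward and backward hops.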

Notice that if the matrix C^k_ij is sufficiently smooth and we consider paths that are sufficiently large, then the gradient will scale so that

$$\overrightarrow\nabla_j \sim -\overleftarrow\nabla_j, \qquad (23)$$

since this is the limit in which the Newtonian limit dx → 0 emulates the scaling properties of the promise graph. This leads, in turn, to the conservation of momentum, or conservation of scaled promise alignments. On a smaller scale, we should not take this for granted.

Finally, there is a purely exterior view derivative one could define, as a scaled instantaneous snapshot of the graph:

$$D_j \varepsilon_i = C^k_{ij}\,(\varepsilon_j - \varepsilon_i). \qquad (24)$$

Each interval is scaled by its channel width, but this leads to no constraint on the flows, so we discard this.


2.4 Separation of boundary value information and guiderails

Boundary information is the complement of spacetime trajectories on an interior region. The purpose of

boundary values is to separate a space into what one knows with certainty, and what can be predicted

based on causality. In a differential model, with continuum parameters, a boundary consists of spacetime

points at which we have deﬁnite information. This is normally at the edge of the system, at the perimeter,

at the start or at the end.

Causal equations, which predict interpolated behaviour, use some notion of continuity in the changes,

and rely on gradients of properties available to the observer. The semantics of these representations may

be subtle. Are the gradients those measured by the observer or by the neighbouring entities? Whose point

of view is the right one? In the Newtonian realm, there is no difference between viewpoints since space

and time are universal. However, in Einsteinian relativity, in quantum theory, and in Promise Theory, the

role of observer is crucial—and every agent can form its own assessment, even a random one.

We showed in part I that there is often a natural decomposition of a system into the formation of a

‘guiderail’ or map of resources supporting a process, thus revealing where it can take place, and the

execution of the process itself as a measurable phenomenon. The hierarchy of information describing

processes thus begins with the separation of boundary conditions from equations of motion, and then

subsequent separation of measurement from the equations of motion. For example, in Quantum Me-

chanics one has a boundary condition described by a static potential, equations of motion describing

the distribution of energy throughout the space (guiderail) and a separate process of measurement of

dynamic variables. We return to this issue in this second part since it plays a key role in the non-locality

of agent processes.

2.5 Equations of motion and constraint

Equations of motion and constraint take on a variety of forms. The most common representation for these

is through the algebra of rings and ﬁelds, in which addition and multiplication represent superposition and

modulation of values directly and instantaneously. The local response in conﬁguration space, analogous

to Newton’s F = ma, must take the form:

$$F_{ij} = \overrightarrow\nabla_j\, \alpha_k(\Pi^{(\pm)}_{ij}) = \overrightarrow\nabla_j\, C^k_{ij} \qquad (j \text{ not summed}) \qquad (25)$$

where α() is an assessment function and Π^(±)_ij is the promise graph. In a system with many promises at play, it will be convenient to use the derivative notation to look at a dynamic variable as a local state ρ_i at each agent A_i, i.e. simply writing

$$F_{ij} = \overrightarrow\nabla_j\, \rho_i. \qquad (26)$$

Delayed responses can be incorporated using Green functions, as needed in electrodynamics and material

physics, for example, but these are a last resort where simpler equations simply won’t do. Fourier trans-

forms to wave space or momentum space provide other techniques for simplifying non-local behaviours.

We have dispersion relations, which describe the composition of dynamics in terms of complementary

superpositions of dynamical processes—waves being the simplest distributed process.
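As a sketch of how eq. (26) can be used in practice, the ‘force’ felt by each agent along each available channel can be tabulated directly from the channel matrix and a local state. The chain graph and state values below are invented for illustration, and only entries where a channel exists are meaningful:

```python
import numpy as np

# Invented channel matrix for a 3-agent chain A0 - A1 - A2.
C = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

rho = np.array([0.0, 1.0, 3.0])   # invented local state at each agent

# Eq. (26): F_ij = C_ij rho_j - rho_i, masked to existing channels only.
F = np.where(C != 0, C * rho[None, :] - rho[:, None], 0.0)
```

Each row of F shows the graph gradient an agent sees along each of its channels; here the middle agent A1 sees a gradient of +2 along its channel to A2 and -1 along its channel to A0.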

Remark 2 (Static systems) The dynamics expressed in the diffusive equations above should not mislead

us into thinking that all systems must diffuse and equilibrate. A document is a valid semantic spacetime,

and thus it should obey the equations we write down here. In a static system, which has no interior degrees of freedom to change, the self-assessments must all vanish, since there is no interior time, so α_i ↦ 0, and thus nothing changes in the frame of the document. An exterior observer A_k could still observe some changes as a result of its own receiver promises changing, so α_k ≠ 0, and it may therefore

observe changes as artifacts of the channel connecting it to the document (perhaps failing eyesight).


2.6 Canonical and derived currencies, e.g. energy, money, etc

Newton’s great accomplishment in formulating change was to describe local behaviour as being guided (and even determined) by the distribution of an underlying energy currency. Yet, hidden in the expressions, was an implicit algebraic formulation to imply the semantics of these interactions and make them true. Dynamical currencies are an artifice or accounting device associated with counting activity. Physics’ ballistic origins and economic analogies lead us to think of energy and money, whimsically, as being

like the fuel for activity within a system. This view is not strictly correct as we’ll see from a promise

theoretic analysis, and can distract from the autonomy of processes.

In Promise Theory, currencies like energy can only be changed by assessments, i.e. ‘valuations’ of

an agent’s promises, made by some other agent’s promise. Nothing material or even manifest needs to

change hands in order for this to happen; the changes can be understood in terms of internal accounting

of information alone—at least for autonomous agents. Currency can be counted by different kinds of

measure, some semantic or symbolic, some dynamic or quantitative (see table 1). In quantitative physics,

one tends to focus on numerical counters that describe average state in terms of ﬂows. This owes its origin

to the history of essentially ballistic descriptions of behaviour, in which all changes were externalized as

motion in space and time. However, in more advanced descriptions, labels like force-charge types and

biochemical signatures are needed to account for the multi-dimensional interactions.

GENERAL            | PHYSICS                    | SOCIO            | ECONOMIC        | COMPUTING
Graph              | Configuration              | Society          | Economy         | Software agent
Agent A_i          | Position x                 | Person/Tool      | Account         | Processor
Promise Π^(±)_ij   | Momentum p                 | Promise          | Money           | Data
Interior promise   | Hidden variables           | Hidden state     | Account balance | Internal state
Exterior promise   | Tensor property            | Character        | Wealth?         | API
Offer (+)          | Emission rate              | Capability       | Supply          | Capability
Acceptance (-)     | Affinity / absorption rate | Availability     | Demand          | Availability
Imposition (+)     | Collision                  | Immigration      | Innovation      | Write/Operation
Proxy agent        | Particle                   | Token/credential | Coin / IOU      | Packet/Transaction
Assessment α()     | Measurement/sample         | Judgement        | Valuation       | Read
Currency           | Energy                     | Trust            | Money           | Protocol codes
Interior currency  | Potential V                | Self trust       | Balance M       | Subroutine
Exterior currency  | Kinetic energy T           | Engagement/risk  | Payment P       | API function
Binding            | Bound state                | Relationship     | Relationship    | Session
Currency Gradient  | Force / field              | Bias             | Incentive       | Preference/policy

Table 1: Approximate correspondences between different promise concepts at a range of scales, grouped by similar semantics. Semantic complexity increases from left to right.

In the promise model, we favour an interpretation in which changes and activity come entirely from

within agents. This view is apparently the opposite of the Newtonian ballistic approach, though we’ll be

able to compare them in section 4. The difference of viewpoint doesn’t invalidate the need for currency

counters. Indeed, quantitative predictions still need explicit counting. Any agent Ak, with access to the

information about a promised process, can evaluate an assessment in its currency αk:

$$\alpha_k(\pi) = \alpha_k\!\left( A_i \xrightarrow{+b} A_j \right) \qquad (27)$$

These assessments belong to $\mathbb{R}^n$, so they can be added. Agents A_i can also assess their own states α_i():

$$\alpha_i = \alpha_i(\Pi^{(+)}_{ij}) = \alpha_i\!\left( A_i \xrightarrow{+b} A_j \right). \qquad (28)$$

2.7 Promise dynamical manifesto

Based on the foregoing observations, our manifesto for discerning quantitative motion is thus as follows:


•In a given system of agents, identify the dominant exchange currencies that count outcomes.

•In a given system of agents, identify the dominant exchange currencies that count outcomes.

Wait—replace scope begins at the Capture bullet. • Capture the semantics of the exchange processes to some level of approximation. What shall we consider to be fixed (slowly varying) or fluid, stochastic, etc. (quickly varying), over a timescale that relates to the intrinsic scales of the observer's cognitive process (sampling rate, internal memory, etc.)?

•Identify the semantics of prediction: what do we consider to be meaningful measures? This is

usually based on invariances.

3 Laws of behaviour with ﬁxed boundary conditions

The assumption that there are laws of nature that are the same everywhere is reﬂected in the existence of

‘global symmetries’ in physical law, which—in turn—directs us to a high level top-down view of

causation7. Even when the details of laws are local (e.g. in gauge symmetries), certain aspects are still

assumed to be global: e.g. the charge of the electron8. There are two possibilities here: either there are

rigid non-local symmetries that violate causality, or whatever local differences there may be are simply

unobservable, hidden by the nature of measurement.

In [2, 18] Burgess and Fagernes described how promises can be developed as a scalable system

of state, in the manner of statistical or Quantum Mechanics. This was further reﬁned in [5] with the

separation of interior and exterior degrees of freedom.

3.1 First order static equilibrium solutions

Let’s define an intrinsic process over a whole connected region, defined by some promise graph Π^(±)_ij that leads to a landscape potential ρ_i. The promises that are part of this graph of bindings form a matrix of channel availability for that subset of promise types. Since the gradient

incorporates an implicit adjacency matrix C^k_ij, the eigenvector ρ_i is an implicit function of C^k_ij too. The keeping of those promises thus has an equilibrium distribution:

$$\overrightarrow\nabla \rho_i = \overrightarrow\nabla\, C^k_{ij}\,\rho_i(C^k_{ij}) = 0, \qquad (29)$$

where ρ^π_i are the components of the principal eigenvector of the matrix C^k_ij for the promises concerned. The ρ_i for each agent is a shared property, which becomes intrinsic for its own degree of involvement with its neighbours. We can label ρ_i the mass, encumbrance, inertia, drag, or connectedness of agent A_i (see section 3.6).

At equilibrium, we consider there to be no net force on the graph to ﬁrst order in the gradient:

$$F_j(A_i) = \overrightarrow\nabla_j\, \rho_i = C^k_{ij}\,\rho_j - \rho_i = 0, \qquad (30)$$

for the partial derivative along the direction j(i), so rearranging gives

$$C^k_{ij}\,\rho_j = \rho_i. \qquad (31)$$

If we sum over the j, this is an eigenvector equation to ﬁrst order. By the Frobenius-Perron theorem, the

normalized eigenvector of the promise binding matrix is thus the equilibrium distribution of exchange

currency in the system [3, 19]. We know that, for an asymmetric directed graph, this network cannot be

sustained, and trust will end up at the sources or sinks of the graph. An external forcing term (called

pumping in [3]) is needed to balance an equilibrium.
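The equilibrium distribution implied by (31) can be found by power iteration, which also matches the interpretation below of interior time as repeated neighbour exchanges. This is a minimal sketch under the assumption of an invented symmetric, connected, non-negative binding matrix, so that the Perron-Frobenius conditions hold and the principal eigenvector is positive:

```python
import numpy as np

# Invented symmetric promise-binding matrix (non-negative, connected),
# so the Perron-Frobenius theorem guarantees a positive principal eigenvector.
C = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])

rho = np.ones(len(C))
for _ in range(500):              # 'interior time': repeated exchanges
    rho = C @ rho
    rho /= np.linalg.norm(rho)    # renormalize to avoid overflow

lam = rho @ C @ rho               # Rayleigh quotient: principal eigenvalue

# Eq. (31) up to normalization: C rho is proportional to rho at equilibrium.
assert np.allclose(C @ rho, lam * rho, atol=1e-8)
```

For a directed or negatively-weighted graph, as the text warns, this iteration may fail to converge or the eigenvector may lose positivity, which is where an external pumping term becomes necessary.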

7 One sometimes talks about the existence of godlike observers with access to all information instantaneously.

8 Feynman and Wheeler recounted their story about why there is only one universal charge on all electrons. Feynman

recalled that John Wheeler said it’s because there is only one electron! If all electrons appear in pairs, by a strict tree process,

this is assured [17].


We should be cautious in interpreting this equation as a representation of currency. Which currency

is represented by this αi()? It is a purely local quantity based on conserved ﬂow, so it satisﬁes the

conditions to be a currency. However, what is it an assessment of? The condition of zero derivative

is more a constraint on homogeneity than an expression of dynamics. We see that the signiﬁcance is

equivalent to the existence of an equilibrium eigenvector. If trust is purely positive, this corresponds

to the eigenvector centrality of the effective adjacency matrix. The principal eigenvector is then a well

understood importance rank based on social connectivity. On the other hand, if the graph is directed, or its assessments can be negative, there are associated issues with the Perron-Frobenius theorem [19].

In this formulation there is no global exterior time. The interior time for arriving at an equilibrium

solution is represented by iterations of the eigenvalue matrix operator to yield the steady state solution,

until it becomes stable and the time effectively stops at equilibrium. In practice this is an idealization

of many exchange interactions between connected agents. This whole discussion of autonomous agents

has some interesting analogues to procedures used in effective ﬁeld theory [20]—the methods are framed

rather differently in the guise of externalized continuum physics, but the methods ﬁnd a way to represent

interior processes through algebraic representations.

Remark 3 (Mass localizes non-local effects) The essential feature of this mass is that its equilibrium

decouples from the other processes once the promise graph topology (long range order) has settled. So

it becomes just a local parameter representing the degree of inﬂuence that an agent has on neighbours

and vice versa.

If the promises behind ρiare co-dependent, i.e. symmetrically bi-directional, then the process is locked

in phase step by entanglement over the region, and we can interpret ρ_i ∼ m_i as the effective mass of A_i, with respect to those processes. The magnitude of m_i determines an effective local momentum for processes that pass through the agent.

The mass concept for agents is thus a kind of detailed balance equilibrium. Local dynamics, e.g. in

statistical systems and queueing theory, are based on instantaneous detailed-balance conditions for junctions, which occur at the level of vertex rules Π^(±)_ij. A major distinction between graph circuitry and continuum

trajectories is that vertex conditions act as effective boundary conditions that are distributed throughout

the system, at every agent. These can’t easily be transplanted by a homogeneous equation of motion,

since the agents might themselves be inhomogeneous. Caution is required. As an effective equilibrium

interaction, m_i has an instantaneous value and a stable equilibrium value. The relaxation time is fairly

quick, requiring only a few neighbour exchanges to stabilize. Nevertheless, this effective mass is deﬁned

over the same co-time timescale as entanglement, so it’s consistent with a classical limit.

3.2 Second order ﬂow from promises with channel diffusion

Diffusion is a process that occurs relative to a ﬁxed coordinate system. While the processes of diffusion

must reflect the symmetries of translation, these are broken by the boundary conditions, just as the

temporal invariance of a thermal system is broken by the selection of a rest frame for relative motion.

Without changing the conclusions in a significant way, we can introduce an explicit exterior time t and make the trust a function of it: α(A_i, t) = α_i(t). The diffusion equation in (41) for a homogeneous

network

$$\overrightarrow\nabla_j \alpha_i = C^k_{ij}\,\alpha_j - \alpha_i, \qquad j \text{ not summed} \qquad (32)$$
$$\nabla^2 \alpha_i = \sum_j \overrightarrow\nabla_j \left( C^k_{ij}\,\alpha_j - \alpha_i \right), \qquad j \text{ summed} \qquad (33)$$
$$= \left( C^k_{ij} - \delta_{ij} \right)^2 \alpha_j \qquad (34)$$
$$\propto \frac{\partial \alpha_i}{\partial t}. \qquad (35)$$


In the last line, I wrote α_i to indicate that the equation is solving for some kind of consistent assessment

of the ambient state. It combines interior state with the state of neighbouring agents probed through

kinetic processes. It relates the kinetic process to something that can drive the system from the outside,

as a boundary condition.
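The relaxation behaviour of this diffusive process can be checked with a minimal Euler-step sketch. The ring-of-six averaging matrix, step size, and the choice of proportionality constant (taken as 1, using the first-order flow dα/dt = (C − I)α as the simplest form of the graph heat flow) are all illustrative assumptions, not the paper's prescription:

```python
import numpy as np

# Invented doubly stochastic channel matrix: each agent on a ring of six
# averages its two neighbours' assessments.
n = 6
C = np.zeros((n, n))
for i in range(n):
    C[i, (i - 1) % n] = 0.5
    C[i, (i + 1) % n] = 0.5

alpha = np.zeros(n)
alpha[0] = 1.0            # currency initially localized at one agent
total0 = alpha.sum()

dt = 0.1
for _ in range(2000):     # Euler steps of d(alpha)/dt = (C - I) alpha
    alpha = alpha + dt * (C @ alpha - alpha)

# A doubly stochastic C conserves the total, and relaxation halts at
# the uniform equilibrium distribution.
assert np.isclose(alpha.sum(), total0)
assert np.allclose(alpha, total0 / n, atol=1e-3)
```

Replacing the right hand side by i(C − I)α, in the spirit of the Schrödinger comparison, would turn this damped relaxation into persistent oscillatory modes for a symmetric C.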

The similarity to the heat-diffusion and the Schrödinger equation is both intentional and suggestive. Since this is postulated, like the Schrödinger equation, based on the structural conditions of the problem, we are free to suggest the constant of proportionality in the last line. The difference between diffusion and Schrödinger mechanics is that the latter includes a factor of i on the right hand side, turning relaxation into waves. Physicists will tend to argue that this is a fundamental difference, and indeed it's a fundamental change of semantics. A relaxation phenomenon comes to a halt when it reaches equilibrium, but a wave process doesn't. The complex factor is more natural for autonomous agents, since they have no reason to ‘stop’ due to exterior changes: their action comes from within. Relaxation is an effective response to externalized change. Thus, it's easy to argue that the Schrödinger complex form is more natural for local autonomous agents.

Apart from looking like a classic diffusion or Schrödinger wave equation, there is another interpretation in graph dynamics. The time derivative is a diagonal part of the graph matrix. This is associated with ‘pumping’ of the system [19], or the injection of currency α_i from a source A_i. Thus, we might postulate this type of equation to calculate the dynamic equilibrium of currency from source to sink along different promise channels:

Wave-Diffusion process ← Pumping source (36)

This aligns with the role of the time derivative in Schrödinger's equation too: as a total energy, whose time variable is for the whole composite system, not for the interior dynamics, where interior velocities are replaced by implicit waves.

3.3 Currency landscapes vs process semantics

To preserve the causal independence or locality of the agents, we can only relate this to each agent's assessments of one another, so the force can be represented as a gradient of an assessment of the promises α_i(V_j) in some currency:

$$\vec F_{(ij)} \equiv \overrightarrow\nabla_{ij}\, \alpha_i(V_j). \qquad (37)$$

Now α(V) becomes the fitness landscape or potential surface that we're familiar with in smooth classical

systems. This assessment maps straightforwardly to concepts like utility in von Neumann’s economic

game theory [11], and to statistical expectation values for processes that are algorithmically simple in

nature. As agents become more complex in their interior processes and interactions, assessments can be

based on an increasing number of issues, and the simple algorithmics of Newtonian thinking will not

represent all the interior details.

At every stage in this formulation, agents are behaving independently, but signalling one another

at a certain rate with information, through channels formed from promises. How could this lead to

phenomena in which momentum is conserved in collisions? Recall that, in motion of the third kind, it

isn’t the agents that are moving but what they promise. This is more like the way we think about a wave

in classical mechanics than an extended body or abstract centre of mass.

Example 4 (Process semantics) Waves interfere and are absorbed. The collision of bodies is a distinct

concept, where the abstraction of a ‘body’ has clear boundaries of causal independence. Bodies are

themselves separate collective processes (more like wave packets) that become temporarily entangled

with each other. When bodies collide and become conjoined, or when they split, the relative proportions

of the promises are not directly related to the process that runs on top of them: the effective mass and velocities are virtual things, determined by the underlying collective processes, but they can behave as virtual agents. It's natural to express these fractions using a shared currency (energy, trust, money, etc.) in order to describe the ratios and directions of the split.


Example 5 (Attraction and repulsion) The energy-currency concept is also useful for describing the

tendency for agents to create and destroy processes (see ﬁgure 2). In classical mechanics, an attractive

potential well tends to induce an attraction, e.g. in gravity or electrostatics. In this kind of directional

landscape view, gradients guide behaviour by setting up an effective guiderail. In economics, rather

than saying that successful economic agents make a lot of money, we would say that successful agents

are those that can attract money, since they cannot create or destroy it themselves. In sociology, we

would say that collaboration is attracted by reputational trustworthiness. Counter-concepts like mass

represent resistance to such forces due to orthogonal interactions (obligations).

[Figure 2 diagram: two agents A1 and A2, each with interior stored currency (potential V and mass), joined by a shared-process link carrying exchange currency (kinetic T), annotated with location, kinetics/risk, force, trustworthiness, currency potential, and p = mv → ‘intent’.]

Figure 2: Currency is the generalization of energy in mechanics. Its manifestation on different scales comes

with new semantics, such as the emergence of ‘intent’ from momentum. One should be careful when interpreting

these classical mechanics-style illustrations of energy as continuous landscapes, as in this mnemonic. Energy is

an interior assessment of an agent or a process. Moreover, Promise Theory predicts that stored currency (potential

energy) is accounted on the interior of agents, while exchange currency is accounted in links between agents, i.e.

a property of shared processes formed by agent interactions. In motion of the third kind, there are no bodies or

exterior potentials that store energy in the Newtonian sense.

3.4 Direction and current

At the simplest level, there are two orthogonal kinds of promise involved in motion of the third kind. In

the semantic spacetime model, these are referred to as EXPRESS and FOLLOWS promises. Expression

promises refer to interior scalar properties that are exposed for observation. Following promises form

partially ordered trajectories of agents along which motion can occur (these are guiderails, in the lan-

guage of part I). The collaboration between interior and exterior promises leads to a continuity equation

for the interior states ρ ∼ b_EXPRESS, for each agent A_i, of the form:

$$\partial_t\, \alpha_i(\text{interior}) + \overrightarrow\nabla \cdot \vec\alpha_i(\text{exterior}) = 0 \qquad (38)$$

Assessments of agents, by one another, behave as a kind of local density ρ and current J, measured in currency units α:

• α_i(interior) ∼ α_i(b_express) ↦ ρ_i, and the exchange of influence between agents

• α_i(exterior) ∼ α_i(b_follows) ∼ J_i ∝ ∇_i V would be measured by interior assessments also, leading to the equivalent of Fick's law J_i = J_i(∇α(b)).


Locally, this then satisfies a continuity equation that becomes a heat-diffusion equation subject to boundary conditions and forces. The continuity equation for the currency locally at a point is

$$\partial_t \rho(x,t) + \overrightarrow\nabla \cdot \vec J(x,t) = 0, \qquad (39)$$

which becomes the diffusion equation on use of Fick's law:

$$\vec J(x,t) = -D(\rho, x)\, \overrightarrow\nabla \rho(x,t). \qquad (40)$$

These combine to give a diffusion equation:

$$\partial_t \rho = \overrightarrow\nabla \cdot \left( D\, \overrightarrow\nabla \rho \right) \qquad (41)$$

If we translate the variables to the present case of agents and promises:

$$\frac{\partial \alpha_i}{\partial t} + \sum_j \overrightarrow\nabla_j\, \vec J_{ij} = F(A_i, t), \qquad (42)$$

and by analogy with Fick's law

$$\vec J_{ij} = -D(A_i, t)\, \overrightarrow\nabla_j\, \rho_i, \qquad (43)$$

for some D(A_i, t), which we would expect to be constant for the most elementary agents with few degrees of freedom internally. Kirchhoff's laws at each agent junction [21] tell us that the flow part of trust would be conserved:

$$\frac{\partial \alpha}{\partial t} = \text{incoming} - \text{outgoing} + F \qquad (44)$$
$$= A_{ji}\, \delta\alpha_j - A_{ij}\, \delta\alpha_j + F_i, \qquad (45)$$

as seen from the perspective of any single agent.
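The conservation expressed by (44)-(45) is easy to verify numerically. In this sketch the graph, diffusion constant, and initial values are invented for illustration; edge currents follow the Fick-style form of (43), the pumping term F is zero, and the total currency stays constant while the distribution equilibrates:

```python
import numpy as np

# Invented symmetric channel adjacency for four agents.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])

alpha = np.array([4.0, 1.0, 1.0, 0.0])   # initial currency at each agent
D, dt = 0.2, 0.1
total0 = alpha.sum()

for _ in range(2000):
    # Fick-style current on each promised channel: J[i, j] flows from i to j.
    J = D * A * (alpha[:, None] - alpha[None, :])
    # Kirchhoff balance at each agent: net outflow, with no pumping term F.
    alpha = alpha - dt * J.sum(axis=1)

# Antisymmetric currents on a symmetric graph conserve the total currency.
assert np.isclose(alpha.sum(), total0)
```

Adding a nonzero F_i at chosen agents would model the pumping sources and sinks discussed in section 3.2.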

Remark 4 (Autonomy or not?) We need a reasonable answer to the question, why would an agent

voluntarily reduce its currency and under what circumstances? A simple answer is that the reduction is

not always voluntary, because promises may be conditional on some shared property between the agents.

How this comes about is a separate question that we can try to discuss towards the end. We should be

careful not to take the ﬂuid dynamical or monetary analogy too far. The pictures we use to describe

ﬂows and ballistic processes have become ubiquitous and are misleading in the general case. A picture

based on internal states looks somewhat different to the effective picture we might perceive. A quantity corresponding to potential or internal energy is an interior resource, like a savings account, which can be used to 'finance' activity. This is not directly observable (just as potential energy is not observable in physics). It can only be taken as a hypothesis for predicting behaviour.

3.5 The meaning of momentum in physics

In physics, we also use the term momentum for a number of different process characteristics with inequivalent semantics. The common feature of these is that they are locally directional. Momentum represents a measure of 'alignment' or similarity between the interior states of dynamical entities.

For point particles, momentum summarizes an instantaneous guiderail for the direction of motion—where am I going next, based on where I am now (essentially like a train following a track). The Newtonian momentum $\vec p = m\dot{\vec x}$ is a surprisingly subtle quantity, involving a time derivative of a vector. It's an oddity in dynamics, but it dominates our idea of momentum, because calculus has established an unreasonably idealized expectation for what momentum is—and that leads to much confusion.

The concept of a trajectory, or vector ﬁeld, relies on the continuity of this alignment concept too, as

measured relative to a ﬁxed coordinate system in which ~p can be described globally. The coincidence


[Figure 3 (schematic): axes $\Delta t$ versus semantic complexity; layers labelled Agents, Interior assessment layer, Semantic coupling layer (+ b.c.); regions labelled STATE, GAMES, ACTION POTENTIAL, with symbols $\psi$, $\Pi$, $\alpha()$, $\tau$, $\epsilon$.]

Figure 3: The semantic layers (scales) of action for a general virtual phenomenon. At the fundamental level, there are agents with interior activity (time). The agents can modify one another's activity by effectively exchanging a canonical currency through promised channels. Their activity level is an assessment that affects the frequency of their interactions sampling each other's state. Thus there's a layer of state, which is driven by (dependent on) the activity level of each agent, but which controls the direction and type of state-changing interactions. Exterior promises represent exterior states, which are channels of interaction. Interior promises represent internal states that compute the processes within each agent, responsible for sampling and delivering on the promises.

limit, used in the calculus of derivative gradients, means that we can deﬁne momentum as a point quantity,

involving only a time derivative. We transform something non-local into something that appears to

be purely local! This sleight of hand means that it now looks as though every entity ‘contains’ its

momentum. In fact, this is only true in this limit. No such thing is possible in an agent model, and indeed

it isn’t true in Quantum Mechanics either. Think of a game of musical chairs. The chairs cannot contain

momentum. Momentum lies only in the process that takes place between them.

Momentum is carried inside an agent in Newtonian thinking. How could this be? In terms of agents,

this is only possible if all the agents hosting the trajectory have sufﬁcient interior resources for the

collective process to remember its own trajectory. How, for instance, could a dimensionless point ‘know’

about coordinate systems and directions? A pointlike object has no degrees of freedom to align with

and thus encode or ‘remember’ any direction. The locality of Newtonian momentum is an artifact of

the idealizations of Newtonian mechanics that doesn’t survive generalization. We know that it doesn’t

generalize:

• In the presence of electromagnetic fields, the momentum has to become $m\vec v - e\vec A(x,t)$, where $\vec A(x,t)$ is a non-local field. The point momentum doesn't even survive a simple coupling! An extended field cannot be represented by a purely local change at a single point.

• In Quantum Mechanics, a similar generalization is needed. The time-derivative momentum is replaced by a non-local gradient over space that inserts explicit wave semantics into momentum, based on the observation from experiment that $p = h/\lambda$. It doesn't play the same role as the Newtonian momentum, since it's carried by the wave, whereas the Newtonian momentum is carried by the centre of mass process. We usually extrapolate the momentum to be something carried by the entire body by virtue of the linearity of momentum under composition.

In the differential prescription, momentum can exist instantaneously at a point. However, this is impossible for either a wave or a promise, since both are non-local processes. The fact that we can take a differential limit of $\delta x \mapsto 0$ or $\delta t \mapsto 0$ is an artifact that was heavily disputed in Newton's time, but which has become accepted without discussion in modern times.


3.6 The meaning of momentum in Promise Theory

In Promise Theory, it is promise bindings that describe alignment. The analogue of momentum belongs not to agents, which are stationary, but to virtual processes that use agents and their promises to propagate information between them. Two promises $A_i \xrightarrow{+b} A_j$ and $A_i \xleftarrow{-b'} A_j$ are partially or fully aligned if and only if $b \cap b' \neq \emptyset$. Note that this alignment doesn't refer to a universal direction in a Euclidean meaning. The concept of direction only exists between pairs of agents on a small scale. Long range order, and long range direction, are expected to emerge at scale.

The graph matrix $\Pi^{(\pm)}_{ij}$ refers to transfers over a channel, whereas the momentum in the sense of classical kinematics is a property of a body, transmitted by coincidence or contact. However, it's also a vector, meaning that it points in a certain direction. These facts can only be reconciled in a Euclidean embedding. For graph circuitry, a vector is a channel or link between communicating agents. So the only corresponding possibility is for the momentum to be linked to the magnitude of what one agent can pass on to another:

$$\Pi^{(\pm)}_{ij} : \quad A_i \xrightarrow{+\vec p_+} A_j, \quad A_i \xleftarrow{-\vec p_-} A_j, \qquad (46)$$

which we denote by the causal overlap $C^k_{ij}(p_+ \cap p_-)$, when assessed by a third party $A_k$ in scope. The momentum is thus a guiderail for a process, kept as a series of impulse events between the agents.
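The alignment condition $b \cap b' \neq \emptyset$ can be illustrated with promise bodies as sets of constraint tokens (a toy rendering of my own, not the formal Promise Theory definition), where the causal overlap is simply the intersection of an offer body and an acceptance body:

```python
# Toy rendering of promise-body alignment: bodies as sets of constraint tokens.
# Two promises are (partially) aligned iff their bodies overlap, b ∩ b' ≠ ∅;
# the causal overlap is taken here as that intersection itself.

def overlap(b_plus, b_minus):
    """Causal overlap of an offer (+b) and an acceptance (-b')."""
    return b_plus & b_minus

b_give = {"report", "weekly", "pdf"}       # A_i promises (+) to deliver
b_take = {"report", "weekly", "html"}      # A_j promises (-) to accept
C = overlap(b_give, b_take)
print(sorted(C))                           # ['report', 'weekly']
print(bool(C))                             # True: partially aligned, b ∩ b' ≠ ∅
```

Only the overlapping tokens can carry influence between the two agents; the disjoint remainder ('pdf' versus 'html' here) is promised but never sampled.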

In Promise Theory, one distinguishes impositions from promises:

• An imposition $S \xrightarrow{+b_+} R$ is an offer of influence that is not aligned with a sampling process $R \xrightarrow{-b_-} S$ in co-time.

• A promise $S \xrightarrow{+b_+} R$ is an offer of influence that is aligned with the sampling process $R \xrightarrow{-b_-} S$ of the receiver in co-time.

These two modes of interaction correspond roughly to the semantics of ballistic and force-field transfers. Impositions are like collisions. When they hit, an imposition arrives to meet a pre-existing promise, but with no particular phase alignment to the receiver. Thus, impositions are often ineffective.

A pair of agents can exchange influence through an interplay between interior agent promises (and the processes that keep them) and exterior promises. In collisions, momenta may be aligned or anti-aligned. When bodies split and move off in opposite directions (opposite promise alignments), the ratio of the split involves another parameter in Newtonian thinking: the mass. Thus, the semantics of the express may be taken as the co-modulation of interior and exterior promises:

$$\text{momentum} \sim \text{interior promises AND exterior promises} \qquad (47)$$

$$\sim m \times v \qquad (48)$$

The mass is a property that encodes the relative encumbrance of the process 'body' in making changes due to other interactions or commitments. When the encumbrances that lead to effective mass are decoupled from those that lead to motion, we can write momentum as a product of independent variables $p = mv$. The mass refers to effectively interior properties codifying purely local encumbrances, and the velocity refers to non-local exterior properties between agents.

These points tell us something about what momentum must correspond to in an agent theory. A momentum difference $p_1 - p_2$ can be positive or negative. The promise bodies $b_1 \cap b_2$ cannot be negative unless we include their orientation relative to the graph of oriented connections $C^k_{ij}$. If we allow $C^k_{ij}$ to depend on a proper time (either $A_k$'s time, or $A_i$ and $A_j$'s co-time, for global versus relative formulations) then we can make the identification:

$$\vec p(t) \sim m_i \vec C_k \qquad (49)$$


Or, from the perspective of agent $A_k$, over a direction from $A_i \xrightarrow{\pm} A_j$:

$$p^k_{ij}(t) = m_i C^k_{ij}. \qquad (50)$$

The force acting to change this momentum is then relative to the co-time $t_{(ij)}$:

$$\vec F \sim \frac{dp}{dt} = \partial_{t_{(ij)}}\, m_i C^{(ij)}_{ij} = F^{(ij)} \qquad (51)$$

Finally, the assessed contributions to changes in some promise body $b$ contribute to $b$ as a memory function. Just as the momentum 'remembers' additively the effects of past interactions in a phase space state with direction and magnitude, so the overlap $b_{ij} = b^+_i \cap b^-_j$ is the route by which we can accumulate kinetic contributions as a locally stored potential. In general, this will not be possible: each agent can only effect its own changes.

4 The semantics of force, momentum, and energy in physics

To better understand how to bring quantitative analysis into Promise Theory, let’s compare the strategies

that emerged for describing physics with what makes sense for an agent view. Physics has gone through

three main paradigms: I’ll call them the Newtonian, the Einsteinian, and the Quantum Mechanical. They

join up in certain limits, but they don’t form a seamless whole. We want to extend their ideas to a more

neutral or ‘unifying’ description within semantic spacetime, so as to apply a similar method to agent

systems—such as we would meet in biology, computing, sociology, and economics.

4.1 Quantifying gradients and exchange in Newtonian physics

In the Newtonian description of elementary mechanics9, inﬂuence is transmitted by direct modulation

of variables, i.e. through the addition and multiplication of dynamical quantities. This leads to an in-

stantaneous response. In more complicated scenarios, this direct modulation doesn’t represent the nature

of responses, and 'response functions' typically involve time delay and spatial shifts that introduce non-local aspects into the description. This is worth mentioning since, in discussions of Quantum Mechanics, one often attributes non-local effects to something uniquely odd about Quantum Mechanics, and we should put such ideas out of our minds. Non-locality is a necessary part of interaction on an elementary level—however, it can sometimes be scaled away in certain effective descriptions over a sufficient scale.

Since the agent point of view aims to expose semantics (or qualitative assumptions) on an equal

footing to the dynamics (quantitative assumptions), let’s start by examining the semantics of a local

Newtonian description and expressing these in terms of agents and their promises. In this way, we can

make explicit the hidden assumptions for comparison with other scenarios. There are essentially two parts to each decomposition:

• An accounting principle defining flow as an exchange of currency, using the concepts of 'energy and momentum' (or equivalents).

• A specification of the semantics of momentum for each process, e.g. pointlike phase space for classical mechanics, wave delocalization for quantum mechanics, diffusion for hydrodynamics, etc. These are the in-out association functions for each process type.

Force is defined to be that influence which appears to extend over space in order to accelerate or retard a body's motion. On a high level, we can interpret force as an incentive to change. The anthropomorphism is frowned upon in physics (though generally harmless), but it's exactly what we want in

describing socio-economics! Similarly, the concept rendered impartial as momentum has a directionality

that associates with intentional behaviours at scale. No concept of free will—or of human uniqueness—is

[9] We should acknowledge that many thinkers contributed over time to the views that now bear Newton's name.


Role       | E | Physical | Social | Economic
Kinetic    | T | Energy   | Risk   | Investment
Potential  | V | Well     | Trust  | Savings

Table 2: Heuristic roles for canonical currency at different scales of semantic complexity. Ultimately one might be able to reduce all counting to energy, but it will not reflect the semantics of different scales in a convincing way.

needed to make this or other semantic identiﬁcations, as they provide no more explanation for phenomena

than physics does. We’re merely looking for dynamical representations for the relevant semantics.

A mechanical 'body' is represented by a point-like proxy agent, which is the seat of the centre of mass. We define the work done as an investment of energy, where $\vec v \equiv \frac{d\vec x}{dt}$, and adding the mass (or encumbrance) scale for momentum

$$\vec F = \frac{d\vec p}{dt}, \qquad (52)$$

we can identify the relative rates of change by using the collective notion of a trajectory or process in spacetime to relate these ideas:

$$\begin{aligned}
\vec F \cdot d\vec x &= \vec F \cdot \vec v\, dt & (53)\\
&= \frac{d\vec p}{dt} \cdot \vec v\, dt & (54)\\
&= \vec v \cdot d\vec p & (55)\\
&= m \vec v \cdot d\vec v & (56)\\
&= \tfrac{1}{2} m\, d(\vec v \cdot \vec v) & (57)\\
&= d\left(\tfrac{1}{2} m v^2\right) & (58)\\
&= dT. & (59)
\end{aligned}$$

Since this is a total derivative, it depends only on the initial and ﬁnal states. It’s not path dependent. We

can integrate it along any path and it will appear to be conserved, by continuity.
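The chain (52)-(59) can also be checked numerically: accumulating $\vec F \cdot \vec v\, dt$ along a trajectory reproduces the change in $T = \tfrac{1}{2}mv^2$, independently of the path details. The linear restoring force below is my own arbitrary choice for the demonstration:

```python
# Numerical check of (52)-(59): the accumulated work F.v dt equals the change
# in kinetic energy T = (1/2) m v^2, whatever the force law along the path.

m, kspring, dt = 2.0, 3.0, 1e-4
x, v = 0.0, 1.0                       # start at equilibrium with unit speed
T0 = 0.5 * m * v ** 2
work = 0.0
for _ in range(20000):                # integrate a couple of seconds of motion
    F = -kspring * x                  # assumed linear restoring force
    work += F * v * dt                # F . dx = F . v dt
    v += (F / m) * dt                 # semi-implicit Euler update
    x += v * dt

dT = 0.5 * m * v ** 2 - T0
print(abs(work - dT) < 1e-2)          # work done matches the change in T: True
```

The small residual is the $O(dt)$ discretization error of the integrator, not a failure of the accounting identity.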

4.2 Quantifying gradients and exchange in Quantum Mechanics

In Quantum Mechanics, the reasoning about an energy currency is similar but the reasoning about force

is completely different (see the correspondence in table 2). In particular, the momentum goes from being

a scaled time derivative, i.e. a velocity or time rate of change in some parameter, at a ﬁxed location,

to being a spatial gradient of such a state displacement, with the stipulation that spatial gradient and

temporal evolution are linked by a wave process rather than a uniform motion in a straight line.

Thus we have the peculiar idea that momentum is no longer carried with the location of the centre

of mass, but is rather something like a potential or ﬁeld whose bias deliberately smears the concept of

motion over a wave process. The justiﬁcation for this was, of course, the empirical fact of interference

phenomena. The effect is to replace interior state changing over time with a diffusion of state over space.

This leads to many of the confusions concerning the so-called non-intuitive behaviours in Quantum

Mechanics.


$$\begin{aligned}
\vec F \cdot d\vec x &= \frac{d\langle \vec p\rangle}{dt} \cdot d\vec x, & (60)\\
&= \frac{d}{dt}\langle\psi|\vec p|\psi\rangle \cdot d\vec x, & (61)\\
&= d\langle\psi|-i\hbar\vec\nabla|\psi\rangle \cdot \frac{d\vec x}{dt}, & (62)\\
&\mapsto -\hbar^2\,\langle\psi|\frac{\vec\nabla}{m}|\psi\rangle \cdot d\langle\psi|\overleftrightarrow{\nabla}|\psi\rangle, & (63)\\
&= d\left(\frac{-\hbar^2}{2m}\langle\psi|\vec\nabla|\psi\rangle \cdot \langle\psi|\vec\nabla|\psi\rangle\right) & (64)\\
&= d\left(\frac{-\hbar^2}{2m}\langle\psi|\nabla^2|\psi\rangle\right). & (65)
\end{aligned}$$

Notice how the mnemonic $dx/dt$ is replaced by a current $\vec J \sim \overleftrightarrow{\nabla}$, which is the only process in Quantum Mechanics that represents a time-ordered relationship between general $x$ and general $t$, and which embodies the wave assumption again by virtue of the use of the gradient operator. The result is equivalent, but the implicit velocity (here represented by the quantum probability current, which is the only travelling quantity) is no longer the velocity of the interior state process but the relative velocity of propagation of the exterior propagation process. This is because we no longer identify state $\psi$ as being related to a local displacement within the spacetime coordinate $x$. It has become encoded as the phases of some superposition of wave processes. In other words, we've shifted from a purely local process to one where state is externalized and always moves as a wave, exchanging its energy currency with neighbouring locations. Schrödinger mechanics is clearly not even defined for point particles.
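The replacement of the time-derivative momentum by a spatial gradient can be seen in a one-line computation: for a plane wave $\psi \sim e^{ikx}$, the expectation $\langle\psi|-i\hbar\vec\nabla|\psi\rangle$ returns $\hbar k$ with no time derivative in sight. Units with $\hbar = 1$ and the grid resolution are my own choices for the demonstration:

```python
import numpy as np

# Sketch: the quantum momentum is a spatial gradient, <p> = <psi|-i hbar grad|psi>.
# For a plane wave psi ~ exp(i*k*x) it returns hbar*k, purely from spatial data.

hbar, k = 1.0, 3                               # integer k keeps psi periodic on [0, 2*pi)
N = 2048
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]

psi = np.exp(1j * k * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize <psi|psi> = 1

grad = (np.roll(psi, -1) - np.roll(psi, 1)) / (2 * dx)   # periodic central difference
p_expect = (np.sum(np.conj(psi) * (-1j * hbar) * grad) * dx).real
print(round(p_expect, 3))                      # ≈ hbar * k = 3.0
```

The small deviation from $\hbar k$ is the truncation error of the central difference, which vanishes as the grid is refined.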

Minimizing energy locally now implicates the spacetime around the location, which is why the simple harmonic oscillator can't ever have a stationary state. Its ground state has to be a wave and thus it must carry some non-zero energy. Autonomy is gradually eroded at scale in order to maximize future potential.

4.3 Quantifying gradients and exchange in Promise Theory

For a general promise theory, about an isolated process with promise type $\rho$, we define the state memory at agent $A_i$ to be $\rho_i$, and we can let this consist of a slowly varying background value $\rho_i$ and a fluctuating flow process $\tilde\rho_i$. We ask: what is the equivalent chain of reasoning for kinetic currency transactions between agents in Promise Theory? Let the change in potential value of a transaction be realized by a promised process $\Pi^{(\pm)}_{ij}$. The effective momentum $p$ for the process is defined in terms of a current $J$:

$$p_{ij} = m_i J_{ij}, \qquad (66)$$

where $m_i$ is some effective mass (encumbrance, drag, inertia, connectedness, etc.), which we would expect to be characteristic of the agent.

Then the effective work done by a force $dp/dt$ at $A_i$ may be expressed both as a gradient of a guiderail potential $V$, and as the rate of change of the momentum over some proper timescale intrinsic to the process. The work is:

$$\vec F \cdot dA_i \equiv (\vec\partial_i V)\cdot \Delta_i \equiv \frac{d\vec p_{ij}}{dt}\cdot \Delta_i. \qquad (67)$$

Now, we define the momentum in terms of the current to obtain the trivial likeness for the kinetic currency:

$$\begin{aligned}
F_{ij}\, dA_i &= \frac{dp_{ij}}{d\tau}\,\Delta_i & (68)\\
&= \frac{d}{d\tau}(m_i J_{ij})\,\Delta_i & (69)\\
&= d(m_i J_{ij}) \times \frac{d_i}{d\tau}, & (70)
\end{aligned}$$

where we use the proper time $\tau$ to underline that this is an interior process co-time for spatial exchange interactions, not the externalized global time of Newtonian-Galilean physics. We can compute the effective value of $d_i/d\tau$ simply to be the velocity or current $J_{ij}$. So,

$$\begin{aligned}
F_{ij}\, dA_i &= J_{ij}\, d(m_i J_{ij}) & (71)\\
&= d\left(\tfrac{1}{2} m_i J_{ij}^2\right). & (72)
\end{aligned}$$

The latter is the change of kinetic process currency for the virtual process, corresponding to the promise $\rho_i$. To complete this, we need to specify the interaction semantics, or the nature of the current. One possibility is to invoke the analogy of Fick's law in diffusion, or the quantum wave momentum (which turns a time derivative into a change over space, as waves couple space and proper time in their process).

For an arbitrary cooperative graph process, we have to observe the non-locality of interaction. There is no way of taking a $dx \mapsto 0$ type of limit as Newton could for the continuum, so we'll inevitably take on flow semantics analogous to Fick's law. We can take

$$J_{ij} \mapsto -\Lambda\, \vec\nabla_j \tilde\rho_i, \qquad (73)$$

for some scale $\Lambda$, giving for (67),

$$V_i = \tfrac{1}{2} m_i \Lambda^2 \tilde\rho_i^2, \qquad (74)$$

for the currency constraint. For a specific potential $V = V_i \tilde\rho_i$, the guiderail potential can modulate the changes of state, as in the quantum formulation, for convenience. Finally, we can choose causal semantics for the gradient, giving the naturally retarded proper time interpretation (setting observer $k \mapsto i$):

$$V_i = \tfrac{1}{2} m_i \Lambda^2 \sum_{ijk} (\tilde\rho_i - C_{ij}\tilde\rho_j)(\tilde\rho_i - C_{ij}\tilde\rho_j), \qquad (75)$$

which is to be understood as an equation for $\rho_i$, alongside the first order

$$\vec\nabla_j \rho_i = 0, \qquad (76)$$

which implies that $\rho_i$ is an eigenvector, $(C_{ij}/\lambda)\rho_i = \rho_i$, resulting from self-consistent steady state flow. Note that the assessments implied by $C_{ij}$ are those of the participating agents now (a proper time process), not of a godlike exterior observer.

When $C_{ij} > 0$, the Frobenius-Perron theorem comes into play, and the principal eigenvector is real and positive, with a probabilistic interpretation. In other cases, potentially negative regions lead to more

than a single relevant eigenvector, and the solution is more akin to a Wigner function decomposition $\rho_i \sim b^+ b^-$. The currency constraints (75) and (76) are analogous to the Cauchy momentum equation for a stress tensor $C_{ij}$ in hydrodynamics. This is not unexpected, since we're basically describing a graph as a confluence of forward-flowing channels scaled by mutual channel strength. This steady state flow approximation, a result of the continuum assessment values and derivatives used, implicitly assumes an average flow behaviour. Thus, we derive the analogous utility of a currency concept for large scale behaviours guided by slow (adiabatic) boundary conditions for an isolated process. What this helps to underline is that the sometimes mysterious looking results $T = \tfrac{1}{2}mv^2$, $p = mv$, etc., are simply accounting practices that conceal assumed interaction semantics. If the counting is similar, the results have to be similar in structure too.
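The Frobenius-Perron point can be demonstrated directly: repeated application of a strictly positive $C_{ij}$, renormalized at each step, converges to a principal eigenvector that is real and strictly positive, which is what licenses a probabilistic reading of the steady state. The matrix below is invented for the demonstration:

```python
import numpy as np

# Frobenius-Perron demo: for strictly positive C_ij, self-consistent steady
# state flow (power iteration) converges to a real, strictly positive
# principal eigenvector rho satisfying (C/lambda) rho = rho.

rng = np.random.default_rng(1)
C = rng.random((4, 4)) + 0.1          # C_ij > 0, invented coupling strengths

rho = np.ones(4)                      # arbitrary positive start
for _ in range(500):                  # repeated flow = power iteration
    rho = C @ rho
    rho /= rho.sum()                  # renormalize the currency

lam = (C @ rho).sum()                 # principal eigenvalue estimate
print(bool(np.all(rho > 0)))          # the Perron vector is strictly positive: True
print(bool(np.allclose(C @ rho, lam * rho)))  # rho is an eigenvector: True
```

With mixed-sign couplings the same iteration may oscillate between several eigenvectors, which is the discrete analogue of the Wigner-like decomposition mentioned above.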

We should, of course, ask what happens when these conditions of continuity break down: for individual transactions between agents over small timescales, where external conditions are not adiabatic, and so forth. In that case, we need to develop new rule sets to encompass these semantics. The result might look more like a cellular automaton, and the usefulness of counting with a conserved currency may become less clear.


4.4 Currency accounting patterns in an agent viewpoint

The general pattern of all these currency accounting formulae assumes a similar form:

$$\Delta(\text{reservoir of influence}) \propto (\text{interior interactions}) \times (\Delta\,\text{exterior})^2, \qquad (77)$$

e.g. $mv^2$, $I^2R$, $CV^2$, etc. This is the 'work' done by a force. Why would such a force be experienced by an autonomous agent? Why would an autonomous agent even 'agree' to accept or react to an exterior influence10? This may not be the correct question to ask in every case, as the promises linking agents may be co-dependent and thus unavoidable. In the broader case, on socio-economic scales, obstinate refusal might seem more likely, though this too depends on an assumed absence of mutual dependency.

A full answer requires us to understand how promises are made in the ﬁrst place (see section 5).

When all descriptions are externalized, we can only imagine transfer by collision. However, recognizing interior properties of spacetime too, distinctions can be distinctions of local state. The field becomes a kind of fabric of agents, as in solid state physics. An autonomous agent needs to accept promises and form its assessments in advance of the process moving into new territory—and thus accept a currency transaction. This must involve non-locality at the edge, or equivalently, the existence of a pre-existing guiderail that primes the space for the potential transactions.

The guiderail is natural when we think of Motion Of The Third Kind (and also of the second kind). However, in a material interpretation (on which most classical physics is still fixated), it stumbles into problems of interpretation—which is surely the reason why classical thinking makes Quantum Mechanics seem 'weird'.

If potential energy (or currency) is a store of assessed wealth, what is kinetic energy? In physics, this

is carried by moving bodies, but in virtual motion there are no moving bodies, only moving promises11.

So the counting of transferred currency has to be promised as part of the transfer. In order for the

accounting to be fair and conserved, the promises need to be kept homogeneously. This is Noether’s

theorem. Kinetic energy is the outcome of exchanging locally stored energy currency. An intrinsic policy of seeking the most potential, trusted, hoarded positions would explain that. Risk taking would be the opposite, allowing processes to tunnel through apparent barriers.

4.5 Example: Coupled oscillators in Newtonian mechanics and Promise Theory

To better understand how promises and assessments take on the roles of dynamical variables, it’s helpful

to consider an example from the repertoire of basic dynamical systems—the familiar case of coupled

harmonic oscillators. This exhibits aspects of transmission and cooperation that need to be explained in

the context of autonomous agents, as well as accounting of position, momentum, and energy. We can

look at counterparts at other semantic scales.

In the classical view, interactions are based on the forces transmitted by direct contact of idealized

springs. A body with mass m1is connected to an immovable wall by a spring of stiffness k1; another

body with mass m2is connected to another immovable wall by a spring of stiffness k2, and the two

masses are connected by a third spring of stiffness k12 (see ﬁgure 4). The springs are representations of

force and shouldn’t be taken too literally. Most potentials are contactless; what matters is that the force is

approximately linear in relation to the separation of the two bodies—and we treat the length of a spring

as a proxy for that promise.

It appears superficially, from the use of a name $x_i$ rooted in an association to Euclidean coordinates $(x,t)$, that we are embedding the system in a single Euclidean space, but this is misleading. The parameters entering the equations are displacements from equilibrium positions, which I'll call $\Delta x_i$ to clearly distinguish them from the actual coordinate positions. The distinction is important, because each agent

[10] This anthropomorphism applies directly on a large scale, but we're not implying any kind of 'anthropic principle' for reverse inference.

[11] Note that kinetic energy is only apparent by relative velocity, so it can't be carried as an intrinsic property. It can only be assessed as a quantity moving inside something relative to the observer. So stored potential becomes moving stored potential of the proxy agent.


[Figure 4 (schematic): left, masses $m_1$ and $m_2$ at positions $x_1$ and $x_2$, coupled by springs $k_1$, $k_{12}$, $k_2$ between walls; right, agents $A_L$, $A_1$, $A_2$, $A_R$ linked by promise channels.]

Figure 4: Classical representation of coupled oscillators as materialized bodies with internal properties and externalized dynamics within a Euclidean embedding, alongside an agent view of coupled oscillators with completely interior state, communicating via promise channels.

formally has its own internal space, but assumes a common time. It makes the relativity of the entities in the system play a latent role through the magnitude of their interaction via the spring. We take this sleight of hand entirely for granted for physical springs and ballistic forces, but the origin of these effects in virtual systems is far from obvious.

The displacement is negative if to the left, and positive if to the right. The signs of the forces are aligned with the signs of the displacements. To find the equations of motion, we can consider either the forces acting on each body locally, or the total energy expressed in terms of the local bodies. Let's begin with the forces, starting with the body of mass $m_1$:

• The force on entity 1 at equilibrium position $x_1$ due to moving the mass is:

$$F_{11} = -k_1 \Delta x_1 - k_{12}\Delta x_1 \qquad (78)$$

The restoring force due to the springs acts against the direction of the displacement from both springs, even though one is distended and the other is compressed.

• The force on entity 1 due to the movement of entity 2, with position $x_2$ and displacement $\Delta x_2$, is:

$$F_{12} = k_{12}\Delta x_2. \qquad (79)$$

The sign is now opposite, since it pulls to the right, and there is no local back-reaction on entity 1 from the spring with stiffness $k_2$.

By symmetry, with some signs reversed, we obtain two similar equations for the two entities:

$$F_1 = m_1 \ddot x_1 = -(k_1 + k_{12})\Delta x_1 + k_{12}\Delta x_2 \qquad (80)$$

$$F_2 = m_2 \ddot x_2 = -(k_2 + k_{12})\Delta x_2 + k_{12}\Delta x_1. \qquad (81)$$

There are no equations for the walls, since we assume that they are fixed and therefore experience no deformation. Note first how these equations express the essential locality of the processes taking place at each of the entities, yet some force is effectively transmitted from the other. Note also that we don't need to know the initial equilibrium positions $x_i$, $i = 1,2$ in order to write these equations in terms of the displacements $\Delta x_i$. It only matters that there exists such a state, which could be considered a boundary or an initial condition on the configuration.

An interesting outcome of the solutions of these equations is that the behaviour doesn't depend on the actual entities at all. This is easiest to see if we simplify away the details, by making $k_1 = k_2 = k$ and $m_1 = m_2 = m$, and introducing average (slow) and fluctuation (fast) coordinates $\bar x = \tfrac{1}{2}(x_1 + x_2)$ and $\tilde x = (x_1 - x_2)$.

Adding and subtracting the equations now gives simply an equation for the average centre of mass location and one for their relative position, a location that doesn't actually exist except implicitly in the embedding:

$$m\,\Delta\ddot{\bar x} = -k\,\Delta\bar x, \qquad (82)$$

$$m\,\Delta\ddot{\tilde x} = -(k + 2k_{12})\,\Delta\tilde x, \qquad (83)$$

which has natural harmonic solutions for the centre of mass and for the total system of the combined 'molecule'—not for the individual entities. This means the system naturally behaves like a single entity, not two separate ones. Why would this be? These facts are all well known and stated in any elementary textbook, though one does not usually draw attention to the assumptions or question the outcome. However, this has special significance when we come to think of a system representation in terms of agents and promises.
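The mode frequencies in (82)-(83) can be verified by diagonalizing the stiffness matrix of the coupled equations of motion (the parameter values below are arbitrary choices for the check):

```python
import numpy as np

# Check of (82)-(83): with k1 = k2 = k and m1 = m2 = m, the coupled system has
# eigenfrequencies omega^2 = k/m (centre-of-mass mode) and
# omega^2 = (k + 2*k12)/m (relative mode).

m, k, k12 = 1.0, 4.0, 1.5
K = np.array([[k + k12, -k12],
              [-k12, k + k12]])       # m * x'' = -K x, from the equations of motion
omega2 = np.linalg.eigvalsh(K / m)    # eigenvalues, ascending
print(np.round(omega2, 6).tolist())   # [4.0, 7.0], i.e. [k/m, (k + 2*k12)/m]
```

The eigenvectors are $(1,1)$ and $(1,-1)$: the collective sum and difference coordinates, not anything belonging to either mass alone, which is exactly the point made above.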

4.6 Coupled agent interpretation in Semantic Spacetime

Now let's re-frame this problem as one of communicating agents $A_i$, $i = 1,2$ without any springs or physical paraphernalia. This is an unfamiliar exercise for most readers, since we are programmed to accept the tenets of Newtonian mechanics from an early age.

We need to explain how force can be transmitted (over what channel), why independent agents would accept information from other agents, and how intermediate agents would pass on forces they experience to others when connected by springs. In particular, we remove any reference to an embedding space containing the agents, and express the displacements from equilibrium with states that are fully internal to each agent—effectively drawing a boundary around the formerly externalized motion. We expect a careful decomposition to expose the difference between interior and exterior processes, at the scale of description. Indeed, this gives some insight into the nature of the common kinematic variables.

How do agents decide what their equilibrium states should be in the ﬁrst place? In the formulation

above, the equilibrium conﬁguration is considered to be an initial state, and is therefore taken for granted.

We can’t compute the initial positions of the masses in an exterior spacetime, because that information

is not contained in the formulation: the coordinates are displacements! This might come as a surprise

to the reader, since Newtonian mechanics is generally formulated in a single Euclidean theatre. In this

case, the coupled oscillators are formulated each in its own Euclidean theatre; these are coupled through

Hooke’s law of springs and the assumption of transmitted force. What we have to do now is ask: how

could we simulate the same behaviour using messages passed between independent agents, e.g. by a

series of agents writing letters to one another?

We begin by postulating two agents A1and A2in the role of the masses (see ﬁgure 4) that make

promises to one another. Next, since the position of the agents is undeﬁned, we ask what corresponds

to ‘displacement’ for agents. There are no positions, so the displacements must refer to some change

of the interior states of the agents. Suppose we postulate a vector $\psi_i$, with various components, some combination of which represents the position. This might be extracted with a filter, say by some operator $\hat x$ (we are always free to choose such a representation, without knowing the precise nature of $\psi$), as in Quantum Mechanics. We still have to explain what initial value the agents would have for their position, i.e. corresponding to displacement $\Delta x_i = 0$, and how that comes about.

The role of the springs is even more subtle than the question of displacement, and forms a much

longer discussion, if we’re going to be able to tick all the boxes. In a Newtonian view, force is a property

that extends in actual space between the agents and transmits an inﬂuence non-locally, i.e. it implicates

a process in the spacetime continuum, realized by the spring, which therefore cannot exist outside the

agents of Promise Theory. Reaction and back-reaction are assumed equal from the start. That’s an

external imposition, or obligation for the agents. Autonomous agents needn’t accept such an imposition,

so we can’t assume this to be true once we place all the control in the hands of local autonomous agents.


Each spring is a physical entity in the Newtonian model, which spans two massive entities, effectively

‘entangling’ their behaviours [22]—a material representation of the influences, i.e. nothing exists without

a material entity to carry it. There need be no such physical entity when information is passed by message

alone, except insofar as information itself requires some medium to represent it. It would be ridiculous

to imagine a kind of classical force transmitted by the ballistic pressure of the message itself (as one

sometimes does with messenger particles in Quantum Field Theory).

Interactions between agents are now handled as the autonomous behaviours, promises, and assess-

ments of agents individually. So if an agent changes its internal displacement, it does so only because

it has independently promised to do so, e.g. when it accepts a certain message, and not because such a

change is imposed and involuntary. Promise Theory thus shifts the externalized magic of ballistic im-

position to an internal magic of localized interior change. This isn’t more explicable, only more locally

accountable, and divorced from a redundant embedding in spacetime.

The walls of the classical system are boundary conditions, assumed immovable and are thus ignored

in the Newtonian formulation, but we must introduce these as agents here because our formulation of

springs requires a ‘source’ location for the information channel replacing the spring to connect to, i.e.

the message of force has to come from somewhere. In Promise Theory that means there must exist an

agent to make that promise. The walls offer their part of the ‘spring’ force (+) that joins them, and the

adjacent agents accept this (-):

A_L \xrightarrow{+k_1} A_1    (84)
A_L \xleftarrow{-k_1} A_1    (85)
A_R \xrightarrow{+k_2} A_2    (86)
A_R \xleftarrow{-k_2} A_2.    (87)

The fact that we need to write four statements here is a consequence of strict autonomy or locality. Notice

that although there are arrows in both directions, the force is only transmitted one way from the walls

to A_i, i = 1, 2. This encodes the immovability of the walls. The forces between A_i, i = 1, 2 are bi-directional, by assumption of the semantics of springs. If we use the strict notation k^{(k)}_{ij} to mean the assessment by agent A_k of the spring constant k_{ij} offered by A_i to A_j, then we have:

A_1 \xrightarrow{+k^{(1)}_{12}} A_2    (88)
A_1 \xleftarrow{-k^{(2)}_{12}} A_2    (89)
A_2 \xrightarrow{+k^{(2)}_{21}} A_1    (90)
A_2 \xleftarrow{-k^{(1)}_{21}} A_1.    (91)

Can the agents here still be considered to have an independent choice? Without withdrawing from these

voluntary promises, they have indeed bound themselves into a condition of co-dependence. The effective

transmitted overlap values are k_{12} = k^{(1)}_{12} ∩ k^{(2)}_{12} and k_{21} = k^{(2)}_{21} ∩ k^{(1)}_{21}. Without further constraint, there is

no need to assume that k12 =k21, however Newton’s third law tells us this must be true, so we shall

assume it here. This can be assured by promising conditional co-dependence or mutual calibration of the

interior values:

A_1 \xrightarrow{\pm k_{12} | k_{21}} A_2
A_2 \xrightarrow{\pm k_{21} | k_{12}} A_1,    (92)

and the ± notation is a shorthand for mutual offer and acceptance. Now we have established that both

agents know and agree on the value of k12, due to some co-dependent entanglement relationship of

communicating and assessing one another. If either changes its value, the other will follow in step, though


several iterations of interior action may be needed to settle on this equilibrium, which has implications

for the local interior time rate at each agent. This equilibration may be considered the formation of the

guiderail, as discussed in part I [1]. It corresponds to a Nash equilibrium in game theory [12, 23].
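The equilibration can be sketched as a simple relaxation loop (an illustrative toy, not from the text: the update rule and the gain value are assumptions). Each agent repeatedly nudges its interior value toward its assessment of the other's, and the pair settles step by step on a common k_{12}:

```python
def calibrate(k1, k2, gain=0.3, rounds=40):
    """Mutual calibration via conditional promises like (92): in each round,
    every agent moves its interior spring value toward the value it assesses
    from its partner. The difference shrinks by a factor (1 - 2*gain) per round,
    while the average of the two values is preserved."""
    for _ in range(rounds):
        k1, k2 = k1 + gain * (k2 - k1), k2 + gain * (k1 - k2)
    return k1, k2

# Two agents start with discrepant assessments and converge on their average:
k1, k2 = calibrate(0.8, 1.2)
```

The number of rounds needed to agree within a given tolerance is the "several iterations of interior action" mentioned above, and sets the local interior time cost of forming the guiderail.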

The same argument, given for k_{12}, also applies to the sharing of displacement state ∆x_i between the agents, since the responses to one another depend on each agent's assessment α_i of one another's state.

Let’s simplify the notation slightly to avoid carrying the assessment process α() around with us:

∆x^{(i)}_i = α_i(∆x_1, ∆x_2).    (93)

Thus we can write,

A_1 \xrightarrow{+∆x^{(1)}_1} A_2
A_1 \xleftarrow{-∆x^{(2)}_1} A_2
A_2 \xrightarrow{+∆x^{(2)}_2} A_1
A_2 \xleftarrow{-∆x^{(1)}_2} A_1.    (94)

The effective overlaps are then ∆x_1 = ∆x^{(1)}_1 ∩ ∆x^{(2)}_1 and ∆x_2 = ∆x^{(1)}_2 ∩ ∆x^{(2)}_2.

Given all these local promises, whose ‘decisions’ are made entirely autonomously, we can now form

a promise rule for the dynamics of the oscillators, by promising to obey Newton’s second law locally (as

an interior promise to self). This is just like a programmed algorithm that guides the process within the

agent’s interior. The promise is only observable by its outside effect, by assessing ∆xi. It requires no

communication outside the agents: all changes occur within the agent Ai, so we can write it like this:

A_i \xrightarrow{+(F = m ∂_t²(∆x))} A_i    (95)

Thus the ‘law of nature’ is only a promise made by Ai. It doesn’t extend beyond its interior, except by

indirect coupling. That coupling is expressed by the rigid spatial channel equations (92), and what (95)

implies is the additional rigid coupling between space and time also. The question of time has so far

been suppressed. Autonomy implies that each agent will act independently, at its own interior rate, thus

each agent has its own interior view of time. However, the entanglement or co-dependence between the

agents in (94) implies that both must wait for one another’s information in order to equilibrate:

A_1 \xrightarrow{∆x_1 | ∆x_2} A_2    (96)
A_2 \xrightarrow{∆x_2 | ∆x_1} A_1.    (97)

Thus processes advance at the shared rate of this feedback loop. We can call this co-time and denote it

by t and ∂_t. All the pieces are now in place, and we can formulate each agent's assessments by:

F_1 = m_1 ∂_t² ∆x^{(1)}_1 = −(k_1 + k_{12}) ∆x^{(1)}_1    (98)
F_2 = m_2 ∂_t² ∆x^{(2)}_2 = −(k_2 + k_{12}) ∆x^{(2)}_2.    (99)
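Before continuing, the message-passing scheme behind (94)–(99) can be made concrete with a minimal numerical sketch (not from the text: the class structure, the semi-implicit integrator, and the explicit Hooke coupling −k_{12}(x_i − x_j) to the assessed neighbour value are all illustrative assumptions). Each agent only updates interior state after voluntarily accepting the neighbour's promised displacement:

```python
class Agent:
    """An autonomous oscillator agent: all state is interior to the agent."""
    def __init__(self, m, k_wall, k_couple, x0):
        self.m, self.k, self.k12 = m, k_wall, k_couple
        self.x, self.v = x0, 0.0
        self.inbox = 0.0   # last displacement message accepted from the neighbour

    def accept(self, message):
        # (-) promise: voluntarily accept the neighbour's promised displacement
        self.inbox = message

    def step(self, dt):
        # Interior promise to self (cf. eq. 95): obey F = m d^2(dx)/dt^2 locally,
        # with a wall spring and a Hooke coupling to the assessed neighbour state.
        F = -self.k * self.x - self.k12 * (self.x - self.inbox)
        self.v += F / self.m * dt      # semi-implicit Euler (stable for oscillators)
        self.x += self.v * dt

def co_time_loop(steps=5000, dt=1e-3):
    a1 = Agent(m=1.0, k_wall=1.0, k_couple=0.5, x0=1.0)
    a2 = Agent(m=1.0, k_wall=1.0, k_couple=0.5, x0=-1.0)
    for _ in range(steps):
        # One tick of 'co-time': exchange promised displacements, then let
        # each agent keep its interior second-law promise independently.
        m1, m2 = a1.x, a2.x
        a1.accept(m2)
        a2.accept(m1)
        a1.step(dt)
        a2.step(dt)
    return a1, a2

a1, a2 = co_time_loop()
```

Because each agent reacts to the neighbour's last communicated value rather than its instantaneous one, the feedback loop of (96)–(97) introduces a small lag-dependent correction to the Newtonian result, which shrinks as the mutual sampling rate grows.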

Since we’ve hopefully made the point about locally determined values, for simplicity, let’s assume once

again that m_1 = m_2 and k_1 = k_2, and suppress all the non-locality that implies, and quickly compare to

the Newtonian formulation. Adding these two equations, and deﬁning complementary contra-oriented

non-local variables analogous to a Wigner formulation:

X = ½ (∆x^{(1)}_1 + ∆x^{(2)}_2)    (100)
X† = ½ (∆x^{(2)}_1 + ∆x^{(1)}_2),    (101)


we have

m ∂_t² X = k_{12}(X − X†) − kX.    (102)

or with X − X† ≡ X_{12} representing the average equilibrium of the forward and backward processes (like a Feynman propagator), then

m ∂_t² X = k_{12} X_{12} − kX.    (103)

Promise Theory thus predicts corrections to the standard model of oscillations, when viewed from the

perspective of the uncorrelated agents involved. This looks familiar, but let's remind ourselves that we suppressed the local sampling and assessment process α() in these expressions, for notational convenience.

The variables X, X† are values that belong to the equilibrium entanglement or shared state assessments

of the agents, not to their actual state or to the assessment of either one of them alone. They are the

average self-assessments and co-assessments of the shared interior states. The subtlety here is that there

is a hidden loop of interior time involved in making these assessments, which reveals itself in the way

currency is transmitted (energy is counted). If the assessments can be assumed to be mutually calibrated

(say by a third party observer) then X† ↦ X and this reduces to the Newtonian case of simple harmonic

oscillations over a common frequency:

m ∂_t² X = −kX.    (104)

There are thus observer based corrections, which mean each observer has its own ‘world’ interpretation

of events. Collapsing these is not necessary for a shared interpretation as long as a single observer

calibrates the interpretation over some scale. Calibration of sender and receiver is built into the unitary

symmetry of Hilbert space in Quantum Mechanics¹². This is natural as long as observations are made

by a consistent agent. For quantum information pipelines, it might not hold. This could be an issue for

quantum computers in principle.

In closing, we might ask whether the purely unobservable interior states of different agents have any

external signiﬁcance? From this, it seems that they are largely unimportant, at least in simple cases of

entanglement. Differences in interior time can be absorbed into local interpretations or assessments αi()

of the mass and spring constant variables. Thus, as long as we attempt to extract a common description

for all the components, the differences are effectively averaged away (analogous to quantum decoher-

ence). This is ultimately the reason why it makes sense to write down a shared view, at least over a range

at which the equilibration feedback of these promised states, by mutual Nyquist sampling, is negligible

compared to their rate of change. It’s known from the Fourier uncertainty principle—which becomes the

Heisenberg uncertainty in Quantum Mechanics [24, 25]—that this is the limit on observability for wave

decompositions of shared processes.

It’s interesting to speculate on how far this equivalence scales, when one considers agents with dif-

ferent interior resources at different scales. Representation of state and long range co-dependence, e.g.

over long chains of agents, requires an interior memory to capture and transmit and account for the cor-

related states. This is not something structureless point particles can reasonably do. As agents increase

in complexity, propagation can become less reliable, and may include more factors that compete for the

limited internal resources. So, if the agents were people and the promises were written letters, it seems

likely that the reliability of the promises could be compromised in more ways and the predictability of

the answer would fall within much wider margins.

12In Quantum Mechanics, one is used to the idea that all process agents, represented by bra and ket states, will behave

symmetrically, so that which is emitted must be absorbed, i.e. (b^+)† = b^−. This leads to the conserved norm of the Hilbert

space. This need not be true on a larger scale, where there are more open channels to account with. Conservation is only

a simplifying global convention. For each agent individually, the interior accounting has its own calibration. It’s natural to

assume that the accounting is lossless when assessments are made from the perspective of exterior, neutral third party agents.


4.7 Some examples from different semantic scales

We are focusing on physics, given its position as the undisputed leader in process descriptions. However,

the aim is to carry these techniques on to more general interactions in semantic spacetime. Let’s pause

here to sketch some characteristics that we would expect to see in the semantics of different regimes over

a range of scales.

Example 6 (Physics) There are signiﬁcant differences here between coherent ﬁeld (wave) excitations

and an assembly of locally contained displacements ∆x_i at agent locations A_i. In the former, ψ is already a superposition of Fourier modes that assumes an equilibrium has been reached. In the relativistic case, there's a delay added for the finite speed of propagation, but in the Schrödinger case it's instantaneous. There is thus a separation of fast and slow variables. In the latter case (∆x_i etc.), long range response is calculated from local response by looking for propagation modes.

•Local interior responses ∆x_i differ from distributed fields like ψ(x, t) in representation, but their effects are similar.

•Particles/waves ﬂow away from contentious regions, resource sources and sinks (scattering), guided

by momentum.

•Particles follow internalized momenta, distributed processes like waves and collective phenomena

follow potential guiderails determined by energy/forces.

•Spins are interior state, depicted as a change of alignment in external spacetime.

Example 7 (Computing) •Workloads ﬂow away from contention or resource sinks (scattering), in

response to promised policies.

•Workload deployment follows potential guiderails determined by space potential capacity.

•Internal configurations change in response to policy desired-state Π^{(±)}_{ij}.

•Message interactions follow guiderails determined by neighbouring agent activity in BGP, IP.

Example 8 (Biology) •Cells have semantic ‘intent’ or relative purpose based in interior epigenetic

state.

•Cellular processes are attracted by biochemical gradients, e.g. nutrients, hormonal signals.

•Any agent can create its own proteins. These can only be used for exchange if other cells accept them.

•Organisms become wealthy relative to others by attracting cooperation from other agents.

Example 9 (Economics) •Investment/jobs ﬂow away from contention/loss.

•Any agent can create its own money. This can only be used for exchange if the other party accepts

it.

•Agents become wealthy relative to others by attracting common currency from other agents.

Example 10 (Leadership/society) •Alignment around a seed leader or diffusion away.

•Firing accusations (impositions) leads to a reduction in trust (antitrust).

•Continued negative promises may bleed trust from agents and disconnect them from society.

•Authority is concentration of ‘trustedness’ or trustworthiness τ_i = α_i ↦ max.

•Power is ability to influence |ψ|?


4.8 The variational energy (currency) formulation in physics

The energy concept has taken on a deep philosophical meaning in physics. It provides a compelling narrative as a basic ‘substance’ behind everything. It appeals to our penchants both for materialism and mysticism, with principles of minimization of energy and action! Ultimately, energy's role lies in being a means of counting. Agent models, however, can be applied to all kinds of problems, so we need to explore

the role of such a measure in describing more general problems too. Energy has simple semantics, but

we also need to explore the idea that more sophisticated measures of realized and potential activity might

also be important.

The Hamilton and Lagrange formulations expose the deeper connection between spacetime and process.

Energy is introduced for a number of reasons: it is the complementary variable to time, and this has the

role of a proxy for counting temporal activity in the system. Assuming continuity of temporal change

is equivalent to assuming the local conservation of energy at a point. The latter follows from Noether’s

theorem and is exposed directly by the action principle [26]. It doesn’t imply global energy conservation,

which would require some magic to accomplish, and would seem to violate direct observation of an

expanding universe.

The action principle underlines the connection to spacetime through its variational approach, whereas

the equations of Hamilton and Lagrange are coordinate dependent. What these methods do is to embed

the variables of state within a spacetime framework in order to deal with rates of change over extended

scales, from the perspective of an external observer.

If we try to extend this thinking to other types of currency, such as trust and money, it’s not enough

to talk about trust and money alone13. We still need to build the structure and meaning of interactions

between agents, to make the equivalent of the T ± V expressions in terms of dynamical variables. The

simplicity of this linear separation between saved and transactional energy is a misleading idealization.

A counter-example (such as the Lagrangian of the Standard Model) shows that the choice of independent

variables and their detailed balance conditions are not essential to explain dynamics in terms of energy exchange.

In the Hamiltonian and Lagrangian formulations, and their equations of motion, dynamical variables

are not strongly related to physical manifestations of the system, i.e. as a conﬁguration of observable

entities in spacetime. Extended structures are replaced by effective average coordinates (centre of mass,

etc) and associated conditions of rigidity which are separated as far as possible. Such idealizations

drive the major ‘laws of physics’ because they can be reduced to these detailed balance requirements for

conservation of currency.

As agents become larger and incorporate more complex semantics, the idea that agents can be treated

as generic (essentially indistinguishable) entities (particles) falls into disrepute, but there remains the

possibility that a single currency and interaction process might dominate the observed behaviour. So,

let’s consider that possibility. How should we write down kinetic and potential currency accounting for

something like generic agents and promises? Is it possible? We need both semantics and dynamics. How

might this apply to something like sociology?

Let’s consider oscillators again, from the perspective of a single third party assessment. What gen-

erating principle might replace the action principle for variables of state (∆x_i, p ↦ A_i, Π^{(±)}_{ij})? The

classical formulation starts with the action:

S = ∫ dt Σ_i [ ½m (∂_t ∆x_i)² − ½k ∆x_i² − ½k_{ij} (∆x_2 − ∆x_1)² ],    (105)

or the Hamiltonian

H = Σ_i [ p_i²/2m + ½k ∆x_i² + ½k_{ij} (∆x_2 − ∆x_1)² ].    (106)
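For equal masses and wall constants, the normal modes of (106) follow from the 2×2 stiffness matrix; a small stdlib-only sketch (the function name and the parameter values are illustrative assumptions, not from the text):

```python
import math

def normal_mode_frequencies(m, k, k12):
    """Eigenfrequencies of the two coupled oscillators in (106), equal m and k.
    The stiffness matrix is [[k + k12, -k12], [-k12, k + k12]]; a symmetric
    matrix [[a, b], [b, a]] has eigenvalues a + b and a - b."""
    a, b = k + k12, -k12
    lam_inphase = a + b      # = k, agents move together and the k12 spring is idle
    lam_antiphase = a - b    # = k + 2*k12, agents move oppositely
    return math.sqrt(lam_inphase / m), math.sqrt(lam_antiphase / m)

w_sym, w_anti = normal_mode_frequencies(m=1.0, k=1.0, k12=0.5)
```

The in-phase and anti-phase frequencies are the coordinated collective behaviours that the sum and difference variables of the following section single out.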

13It’s tempting to look at money alone in economics, since it’s apparently ubiquitous; but while useful for local accounting,

money is not conserved globally, and is only a proxy for a deeper issue: trust.


In Quantum Mechanics, we replace state ∆x_i with a generic vector ψ_i containing this and other information, which can be filtered with operators, and we replace a local kinetic encapsulation using a double time differentiation from ½mv² by a spatial gradient over a wave-diffusion process p ↦ −iℏ∇. This explicit choice was motivated by experimental observations of wave behaviour and it leads to excellent results.

Remark 5 (Classical and Quantum Mechanics) Let’s note in passing that while the quantization of

energy is sometimes highlighted as the important step in going from classical to quantum mechanics,

this is partly misleading. The quantization is not of energy, but of a transfer relative to a sampling process with frequency ω, by ε_n = nℏω. The confusion arises because we try to embed all the dynamics within a single Euclidean theatre, in the usual way. It's the same confusion that could be held in the case of coupled oscillators. If we look at the centre of mass variables, which are natural positions, using coordinates x̄ = ½(x + x′), x̃ = (x − x′) between each dynamical transition between entities in the embedding space, e.g. using the Wigner function, we see that the expansion in terms of ℏ is actually an expansion in x̃ or ∂/∂p around the average position x̄¹⁴. In other words, the quantum case

is a relaxing of the strict entanglement of neighbouring points in the classical scheme, allowing each

location to behave as if it were more independent, linked more loosely by wavelike currency processes.

The classical formulation, which uses a single Euclidean theatre, attempts to imply that exchanges are

implicitly immediate and localized without finite extent at every point x_i, but ∆x_i and x_i are completely

independent variables. Their association is an implicit overreach of semantics. Indeed, it represents an

oversampling of spacetime, and the Fourier uncertainty theorem pushes back on that by pointing out that

wavelike transfer implies ∆p∆x ≥ ℏ/2 [24, 25].

The action principle works well to generate the equations of motion in both classical and quantum

physics, but the energy interactions are idealized. There are more general equations of motion that

include ‘latency’ or ﬁnite response times:

m \ddot{φ} = ∫ dt′ R(t − t′) J(t′, φ),    (107)

even ‘iterated game interactions’, where each step in the evolution is the outcome of an interior game

with a Nash equilibrium or ﬁxed point outcome [23, 27]. The latter interaction has no simple algebraic

form, but is represented by cellular automata [28, 29]. In Effective Field Theories [20], these non-local

interactions can be rewritten as integrals over locally modulated variables. This is a redeﬁnition of the

semantic boundary between interior and exterior for the effective agents. We call these ‘dressed particles’

in particle physics.
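To make the latency idea concrete, here is a minimal numerical sketch (not from the text: the kernel choice, the parameters, and the recursive convolution update are all assumptions) of an oscillator whose restoring force acts through an exponential response kernel R(s) = e^{−s/τ}/τ, in the spirit of (107). With no other damping, the delayed restoring force slowly pumps the oscillation, a well-known delay-induced instability:

```python
import math

def memory_oscillator(steps=4000, dt=0.01, tau=0.1, k=1.0, m=1.0):
    """Integrate m*phi'' = -k * Int_0^t R(t - t') phi(t') dt' with the
    exponential kernel R(s) = exp(-s/tau)/tau.  The convolution integral I
    obeys dI/dt = (phi - I)/tau, so it can be updated recursively instead of
    re-summing the whole history at every step."""
    phi, v, I = 1.0, 0.0, 1.0   # start I at phi for a smooth kernel history
    decay = math.exp(-dt / tau)
    for _ in range(steps):
        # exact one-step solution of dI/dt = (phi - I)/tau with phi held fixed
        I = I * decay + phi * (1.0 - decay)
        v += (-k * I / m) * dt
        phi += v * dt
    return phi, v

phi, v = memory_oscillator()
```

In the limit τ → 0 the kernel becomes a delta function and the instantaneous Newtonian oscillator is recovered; for finite τ the amplitude grows at a rate of order τ/2, so latency alone changes the qualitative dynamics.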

The relationships that come out of the Hamiltonian and Lagrangian formulations can be deceptively

simple, because they completely take for granted interaction semantics from the realm of classical bal-

listic thinking. Hence, when the semantics change, as they do in Quantum Mechanics, this leads to

understandable confusion.

We should remember that, in Promise Theory, currencies are like energy assessments, about the

state of a promise kept over time. It might be a self-assessment or the assessment of a neighbouring

relationship. It’s hard to get out of the habit of thinking about instantaneous responses and ballistic

physics. Imagine instead the regular relationship between your home and the garbage collection service.

If the currency is the amount of goodwill, then your assessment of the service changes according to

the rate at which the service keeps its promise, and the amount of goodwill you pass on by feeding

the service. This ‘energy level’ is not an instantaneous judgement, it involves observing and learning

behaviour over time to summarize in a single potential.
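The learning view of such a potential can be sketched as an exponentially weighted assessment (an illustrative toy; the update rule and the learning rate are assumptions, not from the text):

```python
def update_goodwill(goodwill, promise_kept, rate=0.1):
    """Assessment of a promise kept over time: goodwill relaxes toward 1
    while the promise is being kept, and back toward 0 when it is not.
    The running value summarizes a history of observations in a single
    'potential', rather than an instantaneous judgement."""
    outcome = 1.0 if promise_kept else 0.0
    return goodwill + rate * (outcome - goodwill)

g = 0.5                      # initial, uncommitted assessment
for kept in [True] * 20:     # a garbage service that keeps its promise weekly
    g = update_goodwill(g, kept)
```

A single broken promise lowers the assessment only slightly, which captures the slow, integrated character of currency assessments described above.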

¹⁴This procedure is what allows us to take the classical limit and recover the Poisson bracket from the quantum commutator.


4.9 A variational formulation for agents?

How might we formulate the same kind of energy argument for agents, given that currency is an assessment made by individual agents? In the Lagrangian and Hamiltonian formulations of Newtonian and Schrödinger physics, the assumption is that there is a single godlike observer with instantaneous access to the state of the system at every point. All states at all locations advance in lockstep, by a rigid invisible hand. This third party observer forms assessments of every agent's promise to externalize state and constructs the action from them.

Variations are in the perception of the receiver.

∆x_i ↦ α_k(∆x(A_i))    (108)
δ∆x_i ↦ δα_k(∆x(A_i)).    (109)

So that Ak’s understanding of the system is represented by the action principle:

S = Σ_i [ ½m (∂_t α_k(∆x(A_i)))² − ½k (α_k(∆x(A_i)))² − ½k_{ij} (α_k(∆x(A_2)) − α_k(∆x(A_1)))² ].    (110)

The time derivative in this expression measures the single calibrated clock of the observer A_k, independently of how time passes for the agents A_1, A_2.

4.10 Reinterpreting the action principle in terms of locality

The action principle feels rooted in ideas about the continuum and the exterior embeddings of Newtonian

physics, and yet it also generates the exterior equations of Quantum Mechanics, leaving the interior

aspects of measurement something of a controversial appendage. It’s natural to ask what the action

principle could mean for agent based systems and virtual processes. Is there an interpretation of the

action principle that applies to agent based systems?

If we recall that kinetic energy T corresponds to exterior exchange currency, while potential V is about interior accumulation of currency, then maximizing T − V has the semantics of maximizing autonomy, or a principle of least dependency. The separation of interior and exterior processes and

compositionality of agents at different scales seems to preserve this.

Certainly this preference for an autonomous state is appealing on a number of levels, but it’s also

clearly not a panacea. Just as the action principle doesn’t always minimize energy, nor does Fermat’s

Principle of least time in optics necessarily minimize the time—we also have to take into account con-

straints on this from the pre-existing interactions with an ambient environment. We still haven't discussed why agents might create or deprecate promises themselves. How do particles in physics get their

charges? Why do they start responding to one kind of ﬁeld or another? How and why do interactions get

switched on and off? This requires as much explanation as least action, but it aligns well with the most

basic axiom of Promise Theory, which is agent independence.

Part of the answers come from the semantics we derive at each level. Physics suppresses semantics to

appear impartial—a side effect of its manifesto for universality. When we reach biochemical and socio-economic scales, agents are too rich in semantics to be neatly boxed as commodity particles. When we

minimize T − V we are trying to reduce the amount of kinetic activity (relative exchanges, risk, activity, but also accusation and other impositions) and simultaneously increase the stored potential (reservoir of savings, trusted potential, etc.) from all the promises between agents. If we include semantics, then part of the stored potential of high level agents includes encoded information, algorithms, personality profiles, etc.

One of the weaknesses of the action principle and Hamilton’s exterior formulation lies in the use of

momenta as a key part of the construction for dynamics. This doesn’t naturally translate to systems such

as (anti)ferromagnetic spin systems, where alignment of interior quantum numbers only has a shadow

representation as exterior vectors. This has always felt like a confusion: spin is angular momentum, but

not translational—not exterior. Then where is the relevant boundary between interior and exterior? On


the other hand, spin waves and phonons are precisely the kind of virtual processes that we are studying

here. Agent models and virtual processes seem to have an interesting role to play in exposing

more reasonable semantics for phenomena that have remained essentially mysterious by convention for

a century.

Remark 6 (Equilibrium rather than minimum) The variational principle seeks out a generic equilib-

rium rather than a minimum. An equilibrium outcome would make a steady state, but even a changing

state can minimize in stepwise (adiabatic) legs. There’s a difference between minimizing one’s depen-

dence on an external agent and doing nothing at all. Agents are constrained by certain promises built

into the description (Lagrangian or promise matrix). Minimizing dependence could mean trying to move

in step so as to reduce the tension between agents. Thus a ‘minimization’ of dependence could still come

about by redeﬁning agents into entangled groups—molecules from atoms, etc.

4.11 Conservation requirements

In Euclidean and Minkowski spacetime, the conservation of quantities is explained by Noether’s theorem

as a necessary property of the homogeneity and continuity of spacetime. Under variations in space and

time, and functions that depend on these, conservation follows from

δS = ∫ dV δL = 0.    (111)

In order for the variation to be nought, a change in one place must be compensated by a change in

another. If we vary with respect to ξ,

δS = ∫ dV R δξ = 0    (112)

then implies conservation of the complementary quantity R across any change contour for ξ. In Promise

Theory, a similar condition would be

Σ_{ij} δξ C^k_{ij} = 0.    (113)

Notice that this is continuity of assessment, which is the internalized process of agents, rather than the

externalized view of forces. Thus continuity doesn’t require us to sacriﬁce the principle of autonomy.

Conservation laws, or process continuity, are prominent features of processes that we rely on for

predicting outcome. The binding of ±promises in Promise Theory gives a way to account for such laws

without assuming material constraints. One can form as many agents as one likes without violating a

conservation principle as long as it relies on binding. Any agents created without a binding partner are

unobservable and may therefore be discounted. Thus conservation laws don’t rely on the counting of

agents, but rather of information channels.

4.12 Ballistic or impulsive change

Let’s return brieﬂy to the question of Newton’s ballistic world. In a ballistic view, large scale processes

and forces rule the system from the exterior. We describe motion in terms of ‘rigid bodies’ of coherent

material, which behave as singular entities (by centre of mass) that move of their own right. The motion

of such a body isn’t usually described as a virtual phenomenon—indeed, during Einstein’s time, physi-

cists even denied the possibility of an absolute motion, associated with the idea of an aether, because

Einstein showed that one would not be able to measure such motion. But we should be careful here. The

fact that we are unable to measure something doesn’t imply its non-existence. It seems perfectly possible

that all phenomena we consider to be fundamental today are in fact virtual phenomena on some deeper

set of agents and nothing changes.


Waves are the most natural kind of elementary process that involves exterior motion. Ballistic in-

teractions are different from waves. A ballistic impulse transfer is a singular transaction, like a single

sample. In Promise Theory, ballistic interactions have the semantics of impositions (see section 3.6),

which translates into the extent to which timescales can be separated between sender and receiver pro-

cesses. This goes back to the Nyquist sampling law, where events that are undersampled can appear

as random fluctuations, out of the blue. The inexplicable nature of impositions makes them more like

boundary conditions than continuous evolution. The same is true for impulses in Newtonian mechanics.
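The Nyquist folding behind this remark is easy to state concretely (a stdlib-only sketch; the function is an illustrative assumption, not from the text):

```python
def apparent_frequency(f_signal, f_sample):
    """Frequency an observer sampling at f_sample assigns to a sinusoid of
    true frequency f_signal: components above the Nyquist rate f_sample/2
    fold back into the band [0, f_sample/2]."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# An agent sampling at 10 Hz cannot distinguish a 9 Hz process from a 1 Hz one,
# so undersampled events are misattributed rather than resolved.
```

In this picture, a ballistic imposition is an event whose true timescale lies far above the receiver's sampling band, so it cannot be attributed to any resolvable process and appears as a boundary condition.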

To go deeper into this issue, we need to study a range of examples at different scales—in biology and

in socio-economic systems. That’s for another occasion.

4.13 Loops, sampling processes, and ﬁxed ‘currency price’ and energy levels

In the agent description, agents are characterized by a boundary (whether physical or virtual). There are

interior processes and exterior processes with respect to this boundary. Since activity originates locally

and independently in all agents, it has to be interior processes that are the source of activity. Similarly,

interior processes must be responsible for interactions between agents. The most basic sampling loop is

the process that can detect changes in other agents. This cyclic process leads to a natural connection with

Fourier representations, and the Nyquist-Shannon theorem of sampling implies a fundamental limit on

observation between agents, which is the uncertainty principle associated with the average wavenumber

and position spectra.

Let’s suppose that an agent samples a promised process at a rate of ν = ω/2π samples per second,

and that any proper change of configuration requires the reception of a whole number n of symbols.

The units of ‘action’ are energy ×time. The energy associated with a symbol is a reﬂection of the time

it takes to receive it. If symbols can be absorbed at a constant rate proportional to a sampling loop of

circular frequency ω, then we can read ω/2 symbols per second (by the Shannon-Nyquist law). Now, if

the energy cost is proportional to the time taken to read a symbol then we can introduce a constant with

dimensions of action, and a fixed energy currency cost per unit of symbol encoding, ∆ε.

H = 2∆ε × ∆t_symbol    (114)

radian-energy seconds per unit length of symbol, so that

∆ε(ψ ↦ ψ′) = nωH.    (115)

Thus the rate of symbolic information transferred (as opposed to entropy, which is average information per

unit symbol), by a promise binding, results from a whole number of cycles. Thus energy is quantized

due to the cyclic nature of the interior time sampling process.

This relation effectively expresses the idea that a change of state requires a bounded cyclic process

to sample it.

Example 11 (Atomic transitions) Electron orbitals form cyclic processes, which have spectra that can absorb or emit single photons, as long as the frequency of the transition matches

∆ε = hν (116)

with a high degree of accuracy in emission. The orbitals comprise a whole number of wavelengths, so an atomic orbital can accept a photon. Only agents with such interior processes will be able to absorb information in this way. In the usual Fourier variables, we use ω = 2πν, so we can define

∆ε ∝ ∆ρ = nℏω,  n = 1, 2, 3, . . . , (117)

where H ↦ ℏ. Only complete strings will be stable under absorption, since these correspond to stable eigenstates.
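A minimal numeric check of relations (116) and (117) may be helpful (illustrative only; the transition frequency is an arbitrary choice, corresponding roughly to the hydrogen Hα line, while the Planck constant is the exact SI value):

```python
import math

# With omega = 2*pi*nu, the photon energy h*nu equals hbar*omega, and
# allowed energy exchanges come in whole multiples n*hbar*omega.

h = 6.62607015e-34              # Planck constant, J*s (exact SI value)
hbar = h / (2 * math.pi)
nu = 4.57e14                    # Hz; roughly the hydrogen H-alpha line
omega = 2 * math.pi * nu

# Delta-eps = h*nu = hbar*omega (equal up to floating-point rounding)
assert abs(h * nu - hbar * omega) < 1e-30

quanta = [n * hbar * omega for n in (1, 2, 3)]
print(quanta[0])                # ~3.03e-19 J, the single-photon energy
```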


Nomenclature is important here. In physics, information is associated with the number of degrees of freedom in a system, or the entropy—not with a change of state (which is how one normally thinks of information in computing). A data signal alters the existing states that compose information. The Shannon entropy is the information density, or expected symbol frequency per unit length of message:

S = ⟨I(ε)⟩/L = −Σᵢ pᵢ log pᵢ, (118)

where pᵢ is the probability, per unit path length L, of randomly finding an agent in state ψᵢ = {εᵢ, mᵢ, . . .}, i.e. expressing the symbol εᵢ from the alphabet spanned by ψ. The alphabet is constrained as the outcome of an equilibrium process, compatible with the boundary conditions, e.g. the usual eigenstates of the spatial boundary dynamics in the Schrödinger equation. If we constrain this with additional normalization:

Σᵢ pᵢ = 1, (119)

Σᵢ pᵢ εᵢ = ⟨ε⟩, (120)

then pᵢ has the Boltzmann form pᵢ ∝ exp(−βεᵢ). The Shannon entropy is related to the interpretation of the von Neumann entropy

S = −Tr ρ log ρ. (121)

The dimensions of entropy are related to energy and work by the thermodynamic relations dU ∼ dW ∼ P dV ∼ T dS.
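The following sketch (illustrative, with an arbitrary four-state energy ladder and β = 1) computes the Shannon entropy of equation (118) for a distribution of the Boltzmann form, checking the normalization (119) and the mean-energy constraint (120):

```python
import math

# Shannon entropy of a symbol distribution, with the Boltzmann form
# p_i ∝ exp(-beta*eps_i) as the equilibrium distribution compatible
# with a fixed mean energy <eps>.

def shannon_entropy(p):
    """Equation (118): S = -sum_i p_i log p_i (natural log, nats)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def boltzmann(eps, beta):
    """Normalized Boltzmann weights over the energy ladder eps."""
    w = [math.exp(-beta * e) for e in eps]
    z = sum(w)                       # partition function enforces (119)
    return [wi / z for wi in w]

eps = [0.0, 1.0, 2.0, 3.0]           # state energies eps_i (arbitrary)
p = boltzmann(eps, beta=1.0)

mean_eps = sum(pi * ei for pi, ei in zip(p, eps))   # constraint (120)
print(round(sum(p), 6))              # 1.0 -- normalization (119) holds
print(round(shannon_entropy(p), 4))  # ~0.9475 nats for this ladder
```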

5 Promise lifecycle and boundary dynamics

Before ending, we need to (at least begin to) address the ﬁnal elephant. There is more to promise

dynamics than the ﬂow of currencies like energy and trust. There is conﬁguration.

At the start we alluded to the idea that the separation of promises from agents leads to a separation of

underlying infrastructure and virtual processes on top. All the counting of the dynamics that we would

expect to see belongs to the virtual layer, but it relies on the precise matching of promises underneath.

How does the mutuality of promise bindings emerge?

How does the number of agents change in time, and how do the numbers and types of promises between them change? What makes promises correlate into matching (+) and (−) pairs? So far we've only considered flows of activity concerned with the keeping of promises—this is what corresponds to dynamics in physical science. Fixed promises on existing agents sustain the activity with effectively fixed boundary conditions. The missing piece in this picture is how such large scale changes in structure come about, with a degree of homogeneity that allows promises to be compatible in the first place. This is analogous to a scale on which the boundary conditions of classical equations of motion also change. It leads us away from linearity.

As we rise up the scale hierarchy to increasingly complex or sophisticated agent scales, the roles of

semantics and boundary conditions become increasingly intertwined. In Promise Theory, the topology of

the promises, i.e. the boundary conditions can lead to the formation of promises on new scales through

combinatoric dependencies. This is how electronic circuits and biological cells operate. The missing pieces in our story are: how are connections made, how are new bodies introduced, and how do the boundary conditions change?

There is a slight paradox in these questions for a model of autonomous agents, since the formation of components with correlated roles can't be a purely autonomous activity. To yield a cooperative system, change has to be mutual. Locks and keys have to be made together in order to fit. There are basically two extreme answers for how agents get to know new agents and form promises with them:


• It's random: all agents can make promises with others equally, and one hopes for Darwinian selection. This approach seems to work in immunology for antibody epitope generation [30], but it isn't the only alternative.

• The types of promise could be preordained, with the topology given by some underlying process. This leads to the turtle or god problem of endless dependence (an infinite regress).

A Promise Theory approach predicts two possible parts to the process. First, one needs to generate

generic agents with basic capabilities to represent states and form channels. These are process entities

like cells or computers, on some scale. Once we have a number of these agents, like stem cells, their

properties can be discriminated by ﬁssion and fusion.

Agent numbers can be altered by combinatoric processes—either spontaneously or deterministically

by ﬁssion and fusion. If agents appear spontaneously, then somehow they have to do so pre-entangled.

A simpler answer is that they arise from the ﬁssion of a single agent. Then the two agents split hand and

glove, lock and key, as do complex molecules in biochemistry (see ﬁgure 5).

[Figure 5 diagram: an agent A dissociates into A₊ and A₋, which carry the complementary conditional promises +b | ξ and −b | ξ.]

Figure 5: Complementary pairs of promises are naturally calibrated when they emerge from the dissociation of a single agent. In a hierarchy of such dissociation types, correlations between ± agent promises might lie quite far apart.

Example 12 (Molecular agents) At the level of chemistry, interaction channels are expressed through

electronic donor and receptor states in molecules embedded in three dimensional space. In biology,

larger molecular shapes form donor and receptor sites for biochemical interaction. On larger scales,

organisms can exchange smaller organisms or organelles, or respond to gradients of diffusing molecules

like nutrients or hormone signals. At each scale there are thus representations of information using

whatever proxies are available to carry it.

In promise notation, a stem cell making no particular promise, but having a common process origin ξ, expresses:

A −(∅ | ξ)→ ∗. (122)

After fission, this becomes

A₊ −(+b₁ | ξ)→ A₋, ∗ (123)

A₋ −(−b₁ | ξ)→ A₊, ∗, (124)


such that the sum is maintained. If the common origin dependency ξ goes away, or if one agent can replace it by also promising ξ, perhaps by absorbing some other agent, then the entanglement of the two agents also dissolves. This is how agents can become 'standalone' or entangled. These mechanisms are known in industrial and biological terms, but they must also apply on any scale. ξ can include conditions such as the sufficiency of interior resources, e.g. if kinetic energy or trust are below a certain threshold relative to an interior potential level, certain behaviours may be suppressed. This is the case of tunnelling in Quantum Mechanics. In our Promise Theory model, we find similarly that classically forbidden processes with T < V are not forbidden, but are only less effective.
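A toy sketch of the fission picture in equations (122)–(124) may make this concrete (an illustrative invention, not from the source; the names Agent, fission, and entangled are hypothetical): a neutral agent splits into a complementary ± pair calibrated by the common origin ξ, and the entanglement dissolves when the shared dependency disappears.

```python
from dataclasses import dataclass, field

# A neutral agent splits into two agents carrying complementary promises
# +b|xi and -b|xi, calibrated by their common process origin xi.

@dataclass
class Agent:
    name: str
    promises: set = field(default_factory=set)   # pairs (body, condition)

def fission(parent, body="b", origin="xi"):
    """Split a neutral agent into a complementary (+)/(-) pair, eqs (123)-(124)."""
    plus = Agent(parent.name + "+", {("+" + body, origin)})
    minus = Agent(parent.name + "-", {("-" + body, origin)})
    return plus, minus

def entangled(a1, a2):
    """The pair stays calibrated while both condition on a shared origin."""
    o1 = {cond for _, cond in a1.promises}
    o2 = {cond for _, cond in a2.promises}
    return bool(o1 & o2)

a_plus, a_minus = fission(Agent("A"))
print(entangled(a_plus, a_minus))     # True: both still depend on xi

# If the common dependency xi goes away, the entanglement dissolves:
a_plus.promises = {(p, None) for p, _ in a_plus.promises}
print(entangled(a_plus, a_minus))     # False: a_plus is now standalone
```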

In Promise Theory, we have the general problem of understanding why some agents would have +b and some would have −b, and how these come and go under scaling. The solution of the end-to-end delivery problem in [8] shows how semantic scaling works by composition. The fusion of several agents into a single agent with combined promises will only satisfy a strict algebra if there is some prerequisite calibration of the agents by a common standard, i.e. what we label ξ. For independent agents to manage this, they need to be able to recognize one another's promises. Simply pouring transistors and other components into a bag does not make a computer¹⁵. Logically, the prerequisites for promises to form at an agent are:

• The capability to align with a certain intention (to form an arrow of the intended kind).

• The ability to distinguish promisees (to point to a specific recipient).

• Sufficient currency in the binding.
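These three prerequisites can be sketched as a simple guard function (a hypothetical illustration; the field names and structure are invented, not part of the source's formalism):

```python
# A promise binding can only form when all three prerequisites hold:
# alignment with the intention, distinguishability of the promisee,
# and sufficient currency.

def can_promise(agent, intention, promisee):
    return (intention in agent["capabilities"]       # can align with the intent
            and promisee in agent["known_agents"]    # can distinguish recipient
            and agent["currency"] > 0)               # sufficient currency

a = {"capabilities": {"+b"}, "known_agents": {"A2"}, "currency": 3}
print(can_promise(a, "+b", "A2"))   # True: all three prerequisites met
print(can_promise(a, "+b", "A3"))   # False: cannot distinguish the promisee
print(can_promise(a, "-b", "A2"))   # False: cannot align with that intention
```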

Scaling plays a role in computational distinguishability too [4–6]. No agent has enough memory to

recognize and remember the identities of every other agent in a system, so characteristic scalar promises

will tend to lead to agents that are indistinguishable, i.e. which promise only labels that fall

into generic classes. The usual algebras of group theory can represent the combinatorics of promises in

ﬁssion and fusion processes, but they cannot explain the processes or mechanisms responsible for their

implementation.

6 Summary and discussion

In these lecture notes, we’ve been looking at just a single aspect of agent systems, through the lens of

Promise Theory or the Semantic Spacetime model, namely processes and their representations of virtual

motion. Using one simple well-known example of coupled oscillators, we’ve expressed a physical system

in the alternative framework of Promise Theory to expose the layers of hidden assumptions that easily

bind us to a narrow and incorrect view of spacetime phenomena. In a Promise Theory view, what

appeared to be impartial dynamical quantities become individual local assessments of outcome, and forces

become promises expressing individual alignments over different channels of interaction. Energy and

momentum become surrogates for information and similarity of interior state.

Motion Of The Third Kind is something we experience all around us, from waves to biology, in

music, and even the changes we observe in a society. By formulating these ideas generally, we ﬁnd a

hierarchy of processes to study in a common language of information. While learning a new language

won’t change the virtues of knowing the old, such a lingua franca helps us to see beyond a narrow ﬁeld

of interest. The need for this expansive thinking has only grown in the Information Age, yet the most

important processes in our world remain poorly understood.

The exploration of more general phenomena requires us to combine both dynamics and semantics in

order to see patterns—if not to actually make quantitative predictions. The latter is likely too ambitious

in the short term, but even the characterization of phenomena based on ﬂows of inﬂuence would be a

step forward in the socio-economic sciences, which rely far too much on moral conjecture. Only a handful of authors have tried to formalize a physics of social scales [31]. Von Neumann was probably aware of a deep connection between these areas in describing cellular automata and formal games, though he made progress mainly in quantitative terms [28, 29, 32]. As a contemporary generation of researchers seeks to create quantum computers, based on the analogy between virtual and physical processes, it has never been more important to forge an understanding that unifies classical, quantum, and computational behaviours [33]. Promise Theory can surely continue to play a role in this synthesis.

¹⁵ In physics, the particle-antiparticle process e⁺e⁻ → γ may be viewed as a fusion of two opposite promise agents into a single neutral agent.

Where could this go next? Specialization could be the next step, in order to expose a number of

examples from the general approach. We need to pick a scale to go deeper. In modern physics, there

are four known promise channels for interaction that express influence: the electromagnetic, the strong and weak nuclear, and the gravitational. Then there is the Higgs field, which contributes a uniform mass

‘encumbrance’ to the different types of agent process, at least for the Standard Model. Gravitation, on the

other hand, has been connected with the geometry of spacetime itself, which may actually underpin the

other three as a basic communication channel between points. As such it can form guiderails of its own

and inﬂuence generalized momenta (promises). In cloud computing, the natural currencies are identity

tokens and process counters, network protocols and process control blocks. In socioeconomic systems,

the currencies are things like trust, property, and money [34].

The interactions and potentials of general phenomena differ from the elementary conﬁgurations in

the world of physics, but we should be able to model them too. Proxy carriers feature prominently: so-called messenger particles explicitly keep the promises between agents, like coins in an economy. The idea of

an embedding space is convenient and comfortable, but is ultimately unhelpful in describing processes

on a detailed level. The beauty of continuum representations lies in the large scale world that Newton and his forebears knew. The beauty of agent models is in directly understanding individual concerns. Macroeconomics is an affront to the concerns of an individual's microeconomics. There is nothing here to suggest that any one kind of equation representing scenarios could be the last word in explaining all behaviours.

It seems reasonable to treat all behaviours as virtual (as motion of the third kind) in some space of agents, wherever that is helpful. In this age of computers and rich information, it is indeed very useful, as this is a new realm of dynamical behaviour that we've only just begun to understand.

References

[1] M. Burgess. Motion of the third kind (i): notes on the causal structure of virtual processes for privileged observers. DOI: 10.13140/RG.2.2.30483.35361 (notes available on ResearchGate), 2021.

[2] M. Burgess and S. Fagernes. Laws of human-computer behaviour and collective organization.

submitted to the IEEE Journal of Network and Service Management, 2008.

[3] J.A. Bergstra and M. Burgess. Local and global trust based on the concept of promises. Technical

report, arXiv.org/abs/0912.4637 [cs.MA], 2006.

[4] M. Burgess. Spacetimes with semantics (i). arXiv:1411.5563, 2014.

[5] M. Burgess. Spacetimes with semantics (ii). arXiv.org:1505.01716, 2015.

[6] M. Burgess. Spacetimes with semantics (iii). arXiv:1608.02193, 2016.

[7] M. Burgess and S. Fagernes. Norms and swarms. Lecture Notes on Computer Science, 4543

(Proceedings of the ﬁrst International Conference on Autonomous Infrastructure and Security

(AIMS)):107–118, 2007.

[8] J.A. Bergstra and M. Burgess. Promise Theory: Principles and Applications (second edition). χtAxis Press, 2014/2019.


[9] M. Burgess. On the scale dependence and spacetime dimension of the internet with causal sets,

2022.

[10] C.E. Shannon and W. Weaver. The Mathematical Theory of Communication. University of Illinois

Press, Urbana, 1949.

[11] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behaviour. Princeton University Press, Princeton, 1944.

[12] J.F. Nash. Essays on Game Theory. Edward Elgar, Cheltenham, 1996.

[13] R.B. Myerson. Game theory: Analysis of Conﬂict. (Harvard University Press, Cambridge, MA),

1991.

[14] P.W. Anderson. Absence of diffusion in certain random lattices. Phys. Rev., 109:1492–1505, Mar

1958.

[15] R. Rivers. Path Integral Methods in Quantum Field Theory. Cambridge, 1987.

[16] M. Burgess. On the scaling of functional spaces, from smart cities to cloud computing.

arXiv:1602.06091 [cs.CY], 2016.

[17] J. Gleick. Genius: The Life and Science of Richard Feynman. Random House, 1992.

[18] M. Burgess and S. Fagernes. Laws of systemic organization and collective behaviour in ensembles.

In Proceedings of MACE 2007, volume 6 of Multicon Lecture Notes. Multicon Verlag, 2007.

[19] J. Bjelland, M. Burgess, G. Canright, and K. Eng-Monsen. Eigenvectors of directed graphs and

importance scores: dominance, t-rank, and sink remedies. Data Mining and Knowledge Discovery,

20(1):98–151, 2010.

[20] C.P. Burgess. Introduction to Effective Field Theory. Cambridge University Press, Cambridge,

2021.

[21] J. von Below. Kirchhoff laws and diffusion on networks. Linear Algebra and its Applications,

121:692–697, 1989.

[22] P. Borrill, M. Burgess, A. Karp, and A. Kasuya. Spacetime-entangled networks (i) relativity and

observability of stepwise consensus. arXiv:1807.08549 [cs.DC], 2018.

[23] R. Axelrod. The Complexity of Cooperation: Agent-based Models of Competition and Collabora-

tion. Princeton Studies in Complexity, Princeton, 1997.

[24] P.A. Millette. The Heisenberg uncertainty principle and the Nyquist-Shannon sampling theorem. Progress in Physics, 3:9–14, 2013.

[25] M. Cartwright. Fourier Methods for mathematicians, scientists, and engineers. Ellis Horwood,

1990.

[26] M. Burgess. Classical Covariant Fields. Cambridge University Press, Cambridge, 2002.

[27] R. Axelrod. The Evolution of Co-operation. Penguin Books, 1990 (1984).

[28] J. von Neumann. The general and logical theory of automata. Reprinted in vol 5 of his Collected

Works (Oxford, Pergamon), 1948.

[29] S. Wolfram. A New Kind of Science. Wolfram Media, 2002.


[30] A.S. Perelson and G. Weisbuch. Immunology for physicists. Reviews of Modern Physics, 69:1219,

1997.

[31] S. Galam. Sociophysics. Springer, 2012.

[32] J. von Neumann. Probabilistic logics and the synthesis of reliable organisms from unreliable components. Reprinted in vol 5 of his Collected Works, 1952.

[33] B. Coecke and A. Kissinger. Picturing Quantum Processes: A First Course in Quantum Theory and Diagrammatic Reasoning. Cambridge, 2017.

[34] J. Bergstra and M. Burgess. Money, Ownership, and Agency. χt-axis Press, 2019.
