(WORKING DRAFT 0.2)
Motion of the Third Kind (I)
Notes on the causal structure of virtual processes for privileged observers
Mark Burgess
June 30, 2021
Abstract
Virtual motion is a description of how observable properties move from location to location as a
side effect of interior agent processes. Waves are one example of virtual motion—where a displace-
ment function changes against the fixed positions of some medium as information. Other examples
can be found in cloud computing, mobile telecommunications, and biology. Virtual transmission
is qualitatively different from particle motion, where one assumes the existence of material carriers
that are distinct from an empty background space. A collection of agents, which passes observable
markers from agent to agent, is like a transport logistics chain. Because of the reversal of hierar-
chy, or ‘inside out’ representation, virtual motion has a structure much like quantum interactions, as
well as the movement of money, embedded sensor signals, tasks, and information by computational
processes. We define the concepts of position, time, velocity, mass, and acceleration for simple in-
stantaneous transitions, and show that finiteness of agent resources implies a maximum speed for
virtual motion at each location.
The evolution of artificial network communications and advances in bioinformatics, in recent
decades, underline a need to write down the dynamical and semantic relationships for virtual motion,
thus exposing dynamically similar phenomena that span disparate scales and bodies of knowledge.
This work fuses interaction semantics in Promise Theory with ordinary scaling. In physics, it is
normal to extrapolate dynamical models causally downwards, by correspondence: the study of virtual
motion offers an alternative bottom-up extrapolation.
Contents
1 Introduction
1.1 Virtual motion and scale dependence
1.2 Generalized relative change per unit time
1.3 Outline
1.4 Notation
2 Agent based spacetime
2.1 Conservation of process observables, cloning, exclusion
2.2 Distance by increments (hops) and work interpretation
2.3 Time increments: interior, exterior, and channel time
2.4 Inhomogeneous information channel capacity
3 Privileged instantaneous observer motion
3.1 Causal semantics of q, v, and t
3.2 Instantaneous velocity at a point
3.3 Mass, instantaneous force, and acceleration
3.4 Composition of transitions—scaling propagation in space and time
4 Brownian motion
4.1 Representation of transition chains by generating functionals
5 Coherent directional motion (translation)
5.1 Chains with long range order for continuity
5.2 Causal structure of linear motion
5.3 Average rate and velocity
5.4 Directed motion of ‘large’ or composite virtual strings and bodies
5.5 Longitudinal exterior process rate
6 Guiderails
6.1 Maps of topography
6.2 Symmetry breaking directed processes
6.3 Continuity of motion and scattering
6.4 Path guides
6.5 Promising dimensionality
6.6 Guiderail formation
6.7 Process interference and channel reservation
6.8 Trajectory mapping and path availability
6.9 Process vertices or junctions
7 Summary
A Continuum motion and conservation laws
B Density matrix representation for sender receiver interactions
C Wigner function and interior/exterior time
D Causal advanced and retarded propagation of vector promises
1 Introduction
Virtual motion is a common but often unrecognized phenomenon. When information is passed from one
anchored location to another, with or without replication, it characterizes virtual motion, or Motion of
the Third Kind [1–3]. When scheduled tasks or data packets move from one computer to another, the
computers stay fixed and information is passed transactionally from point to point: that’s virtual motion.
When jobs move from virtual computer to virtual computer, and virtual computers (software processes)
hop from physical computer to physical computer, that’s two levels of virtual motion. When proteins
are transacted from cell to cell by extracellular vesicles, or when a cell phone user reconnects from base
station to base station, that’s virtual motion between cells. The physical motion of users in conventional
spacetime is irrelevant to the processes in a cell phone network, just as the existence of vesicles is irrelevant
to the cells, because these incidental mechanisms are unobservable and therefore unknowable to those processes.
Cellular automata exhibit virtual motion. Virtual motion is composed of discrete transitions. When bits
are shifted or rotated in a computer register, that’s virtual motion too. Virtual motion is everywhere,
because there are always questions about underlying mechanism that are unknowable.
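The register example can be made concrete with a short sketch (illustrative only; the choice of Python and of an 8-bit register is ours, not the paper’s): the bit positions play the role of fixed agents, and the set bit is the observable marker that appears to move.

```python
# Illustrative sketch: a register's bit positions are the fixed 'agents';
# the set bit is the observable marker that appears to move between them.

WIDTH = 8

def rotate_left(register: int, width: int = WIDTH) -> int:
    """Rotate the register one place left; the positions stay fixed,
    only the pattern of observable state changes."""
    mask = (1 << width) - 1
    return ((register << 1) | (register >> (width - 1))) & mask

state = 0b00000001            # marker at position 0
positions = []
for _ in range(WIDTH):
    positions.append(state.bit_length() - 1)   # where the marker is 'seen'
    state = rotate_left(state)

print(positions)   # the marker visits each fixed position in turn
```

Nothing material is transported between positions; each position merely updates its interior state, yet an observer sees a marker traversing the register.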
Virtual motion differs from the more usual independent motion of the first and second kinds
in two ways (see figure 1): i) by assuming all observables result from interior states of some anchored
‘computable’ process—usually hosted by a finite number of discrete locations, i.e. a collection of inter-
connected state machines, and ii) by not interpolating a continuum of intermediate locations in between,
only a generic communications channel. In other words, Motion of the Third Kind happens by propa-
gating a distinguishable state from location to location, like motion on a graph. This process description
leads to a curiously ‘upside down’ point of view, particularly in the way that processes are separated into
interior (local) and exterior (non-local) parts. Systems are ‘open’ from within, rather than without—and
this leads to behaviours that bear more than a passing resemblance to quantum mechanics.
Whether we consider a phenomenon to be motion of the first, second, or third kind is a choice
to some extent (see figure 1). Virtual motion of the third kind has qualities that make it of special interest
to technology, biology, and physics alike. As long as we observe the movement of information then we
are dealing with virtual motion.
Figure 1: Motion of the first, second, and third kinds, from reference [1]: agent exchange, satellite
rebinding and virtual motion.
Virtual motion is hardly new—the motion of waves and currents could be characterized in this way—
though these are normally described in the continuum limit. What is new lies in treating the problem
seriously as a phenomenon in its own right. In physics, the distinctions between particle motion and
virtual motion are often clumsy and muddled, which impedes the identification of processes across
cultural barriers. One importance of understanding virtual motion lies in its relevance to technology and
biology. Today, we recognize the importance of information itself, rather than dwelling on its material
representation—but this leads to obvious questions: is all motion really virtual motion? Is matter really
separate from spacetime? Is virtual motion compatible with Newton’s laws or quantum mechanics?
There is a school of thought in physics that the strangeness of quantum mechanics is fundamentally
different from other areas of natural science. Here, we’ll see that this is not the case.
It would be impractical to address every possible question in an introductory paper, so
many issues will be deferred to a number of sequels. To begin, we’ll show that the common understanding
of motion is largely biased by the set of abstractions used to frame it. Reproducing familiar
kinematics as virtual motion and making them compatible with Information Theory is a revealing exer-
cise, which helps to demystify the apparent rift between classical, quantum, biological, and technological
systems.
1.1 Virtual motion and scale dependence
Explanations of motion in physics are typically scale dependent—both in terms of their extent (spacetime
scale) and their number (bulk combinatoric complexity). This divides disciplines into cultural chapters,
each with their own representations and vernaculars. Whether some of these descriptions are more ‘real’
than others is not a question we need to consider here. By starting with the concept of virtual motion,
in terms of state information alone, we start at the bottom of the pile. Nor should this be confused
with the modern discipline of quantum information, whose agenda is to apply quantum mechanics,
not to question its origin. So, descriptions differ, but the formulae for counting change are straightforward
and universal: distance equals speed multiplied by time. We can take this as an axiom, yet even after
centuries of taking it for granted, it holds surprises.
A key consideration for thinking about change is the choice of whether to adopt a ‘god’s eye view’
(looking from outside the system, with instantaneous privileged access to all locations) or a ‘local ob-
server view’ (trapped within the system, relying only on signals propagated within it for knowledge). The
former is a Galilean or Newtonian formulation of the world—and distance, speed, and time are treated as
almost independently specifiable quantities, although the equation says that only two can be independent.
The latter is an Einsteinian view, which is more subtle and more complicated. Quantum mechanics often
seems to cast doubt on the veracity of both, leading to a kind of deification of its apparent strangeness.
However, virtual motion also appears different, because the source of motion is from within rather than
assumed from without. It’s neither based on the Newtonian idea of an independent space, an indepen-
dent time, and independent material bodies that move within them, viewed with unlimited access—nor
the Einsteinian idea of an independent spacetime with material phenomena observing one another while
trapped inside a closed communications network. Virtual kinematic phenomena present themselves out-
side that core model of physics, because processes happen largely within bounded regions whose interior
processes can’t be observed. They exchange exterior information over some kind of information channel,
but there is no familiar notion of straight lines, paths, or coordinates.
Other differences concern the laws for motion. The Galilean concept of uniform motion in a straight
line (or geodesic) is somewhat exceptional in nature—it doesn’t appear in a natural way for systems of
state. Could this ever be true for virtual processes? If so, it would provide a link between transitional
motion and ballistic Newtonian behaviour. In ballistic motion, energy is passed around from without,
but in virtual motion all active resources come from within. The concepts of trajectory and momentum,
which were enormously helpful to Galileo and Newton for predicting macro-phenomena, have no obvi-
ous relevance to the class of processes that we consider to be ‘virtual’ or derivative of other underlying
processes, whether at the spectroscopic, biological, socio-economic, or technological levels. Their his-
torical importance (closely connected with the associated ‘particle’ concept) makes them unavoidable
issues to contend with, so it’s interesting to see how they relate to information on an intuitive level.
The energy question, applied to a local process model, relates to the processing rate or capacity of
agents. If multiple processes need to pass through a region of agent spacetime they may need to reserve
capacity for the sharing of finite resources [4–7]. Even in classical and quantum physics, energy lies at
the heart of equations of motion. There is thus a natural connection between kinematics in physics and
information theoretical transport. Securing sufficient channel capacity to carry a communication process
is a key part of these technological implementations, but one could also ask how a moving classical or
quantum particle secures sufficient ‘channel bandwidth’ to propagate through a busy vacuum, in order to
move as a wave or particle with a consistent linear momentum. The narratives for these descriptions of
motion are somewhat different, so we shall reconcile them.
Virtual motion is everywhere in computer systems. In modern cloud computing, workloads migrate
from data centre to data centre, as virtual machines or tasks are moved from physical host to physical host
to find available capacity or to optimize availability. Information processes move with respect to both
physical and logical networks in mobile communications on a continuous basis, thanks to virtualization
and resource sharing.
Example 1 (Database backup transport) Database backup is virtual motion in which a structured
data set is translated from one structured superagent location to another. The intermediate states of
the database may or may not be observable. This leads to problems in modern computer systems when
observers are not sure about the completeness or integrity of data. The consensus problem refers to the
problem of copying bulk data as an effective transaction by ‘locking’ or rendering data unobservable
during macrotransitions [8].
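The locking idea in Example 1 can be sketched in a few lines (a hedged illustration; the class and method names are hypothetical, not from any particular database system): data becomes unobservable while it ‘moves’, so readers see only the state before or after the macrotransition.

```python
# Hypothetical sketch of Example 1: bulk data moves as an effective
# transaction by becoming unobservable (locked) during the macrotransition.
import threading

class Database:
    def __init__(self, data: dict):
        self._data = dict(data)
        self._lock = threading.Lock()

    def read(self) -> dict:
        with self._lock:            # readers see before or after, never mid-copy
            return dict(self._data)

    def restore_from(self, other: "Database") -> None:
        snapshot = other.read()     # a consistent exterior observation
        with self._lock:            # data is unobservable while it 'moves'
            self._data = snapshot

primary = Database({"alice": 10, "bob": 20})
backup = Database({})
backup.restore_from(primary)
print(backup.read())   # {'alice': 10, 'bob': 20}
```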
Example 2 (Mobile phone session migration) As mobile phone users (e.g. 5G sessions) move phys-
ically around the landscape, their radio signals bind to point-like cell transmitter towers, called base
stations. The network of these base stations forms a discrete set of agent locations, and users appear to
jump from tower to tower.
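A minimal sketch of Example 2 (with hypothetical names): the session’s observable ‘position’ is nothing but the identity of the base station it is currently bound to, and its motion is a discrete sequence of rebinding transitions.

```python
# Sketch: a mobile session's 'position' is the identity of the base station
# it is bound to; motion is a series of discrete rebinding events.

towers = ["BS-A", "BS-B", "BS-C", "BS-D"]   # fixed agent locations

class Session:
    def __init__(self, tower: str):
        self.tower = tower               # the only observable position
        self.trajectory = [tower]

    def rebind(self, new_tower: str) -> None:
        # The user may move physically, but from the network's viewpoint
        # only this discrete transition is observable.
        self.tower = new_tower
        self.trajectory.append(new_tower)

s = Session("BS-A")
for t in ["BS-B", "BS-C", "BS-B"]:
    s.rebind(t)

print(s.trajectory)   # ['BS-A', 'BS-B', 'BS-C', 'BS-B']
```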
Example 3 (Nerve signalling) When nerves are stimulated, signals formed from chemical processes
move from nerve cell to nerve cell to reach the brain. These signals approximate wave packets that
traverse the stationary nerve cells.
Example 4 (Monetary flows) In the conventional money system, monetary transactions are sent as data
between banks. The network of payment terminals where we swipe credit cards plays the role of ‘cell
towers’ for mobile payments. Relative to this network, credit cards seem to jump from location to location
in a stochastic manner. Banks that receive the payments act as routers, deciding how to forward transactions
to a destination.
Example 5 (Euclidean space) If we think of a coordinate system as a grid of fixed addresses, numbered
in a total ordering along the axes, then the motion of a body in this imaginary theatre appears as
virtual motion to each location. Timeseries changes measured at particular locations (a passing train, a
river, etc) appear as changes in time to the promises made by the coordinate addresses.
1.2 Generalized relative change per unit time
We face two challenges in defining virtual motion: one is that such motion is typically composed of
discrete changes, and the second is that we have to select measures for distance and time from a variety
of possible candidates, given that there are no Euclidean coordinates and the relative positions of agents
are unknown—whatever that means. To describe the motion of information we need to relate these
concepts to an underlying process execution, on which the virtual process unfolds. In the basic equation,
distance = speed × time,    (1)
only two out of the three quantities can be independently determined. Initially, it was assumed that
distance and time were fundamental and speed was derivative. However, because we need travelling
signals to measure distance, the speed of light in a vacuum became the only possible invariant.
Our basic understanding of the speed of bodies is from a characterization of their progress per unit
time along some trajectory, relative to some fixed journey markers. For extended bodies, rectilinear mo-
tion is defined for the centre of mass. The calibration of these markers is crucial to the determination of
distance as a measure of change. To understand how speed and motion arise from the bottom up, espe-
cially in the less familiar territory of information systems, we need to be aware of all those assumptions.
Figure 2: Coordinates are fictitious markers, based on the assumption of spacetime as a rigid crystalline array
of points. To define coordinate locations in practice, we have to compare distances and times to other periodic
processes to infer the existence of a scale—something like painting lines on a road. The assumed regularity
of timing in the (cyclic) reference process is how we define intervals of distance. In other words, time is the
fundamental measure by which we measure distance. Further complications arise if we can’t assume that the
measure of comparative distance can be performed instantaneously—so called ‘latency’ or processing delay by
agents is a further complication that distorts our ability to define distance.
Detecting change is relatively easy; quantifying change according to some calibrated scale is a much
more difficult problem that involves many subtleties. Newton’s brilliant achievement was to associate
coordinates with a grid of regular markers, called distances and to define space and time according to
this imaginary world of graduated scale (see figure 2). A deaf and blind person would not find much
solace in Newton’s model of the universe, however. One can imagine that observer waiting for taps
on his shoulder to indicate changes. He could count in his head to infer the speed of the tapping and
define that as a velocity. How do we know he counts at a constant rate? Of course, we can’t know that.
We would have to measure that counting relative to something else, like the tapping on his shoulder.
Suddenly we realize that we have to simply take one of these processes as a standard and measure the
other relative to it. This is where Information Theory comes in. Nyquist’s sampling law tells us that
we need a sampling process (or clocked process) that moves twice as fast as the fastest change we want
to count to be sure of capturing its essence. This notion of relative time and clocks is fundamental to
our ability to measure anything. Newton ingeniously avoided this question entirely by introducing the
concept of absolute distance and time based on Descartes’ coordinate systems. Since this is the familiar
case, we’ll start with this view.
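The sampling constraint above can be illustrated with a small sketch (our construction, using exact integer arithmetic; the square wave stands in for the ‘tapping’ process): a clock running well above twice the signal frequency resolves both phases of the change, while a clock running at the signal frequency itself never sees the change at all.

```python
# Sketch of the Nyquist condition: the reference (clock) process must tick
# at least twice as fast as the observed process to resolve each change.

def sample_square(f: int, s: int, n_samples: int = 20) -> set:
    """Observe a square wave of frequency f (it flips 2f times per unit
    time) with a sampling clock of frequency s. Sample k occurs at time
    k/s; exact integer arithmetic avoids floating-point edge cases."""
    return {+1 if (2 * f * k) // s % 2 == 0 else -1
            for k in range(n_samples)}

print(sample_square(f=5, s=20))   # sampled at 4x: both phases seen
print(sample_square(f=5, s=5))    # undersampled: change is invisible
```

At exactly twice the signal frequency the outcome depends on the relative phase of the two clocks, which is why a margin above the critical rate is used in practice.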
1.3 Outline
This paper follows on directly from [1–3]. We take a synthesis of i) Promise Theory to describe local
interaction semantics, ii) Information Theory for transmission and process mechanics, and iii) the usual
kinematic counting methods used to quantify changes. The approach is not intended to be (directly)
associated with quantum computing (i.e. the application of quantum mechanical formalism to repre-
sent computational problems). The goal of this study is not to impute any assumptions about scale or
representation—rather, we’ll look for straightforward representations of kinematic concepts for virtual
processes1. Motion is defined in terms of the changing information of the agents, i.e. information promised by a
set of agents that can pass messages, rather than by independent bodies moving in empty space. Also
note that the concept of information is the elementary idea of coded symbolic strings as used in automata
theory, not the statistical approach that relies on the estimation of entropy to quantify it.
With these qualifications, there are two ways we can look at properties like position, speed, as well
as causative force and acceleration: one is to consider only what an observer can see by direct observa-
tion; the other is to infer what happened at one time by measuring distances and times using separate
processes—and then to employ a theoretical model, together with the assumed invariance of the config-
uration, to argue what happened through the lens of the model. This latter approach is the conventional
representation of kinematics as a Galilean-Newtonian geometrical theory, based on idealized coordinate
grids and instantaneous knowledge of changes. Since that approach is best known, we begin there and
defer a discussion of local observer relativity for the sequels.
1.4 Notation
From Promise Theory, agents are denoted A_i, for locations i = 1, 2, 3, . . ., and Latin indices i, j are
always agent identities. Agents need not be thought of as ‘point-like’ at any scale, but their size is not
observable unless explicitly promised in some way. Agents expose information by promising it [10].
They keep promises by interior processes, based on a finite interior processing capacity C_i (bits per
second). Sometimes, in discussing communication, we shall write S for an agent in the role of ‘sender’
and R for receiver, indicating causally ordered roles for the process.
The position x is only used when discussing the continuum limit. For a measurable dynamical
quantity, we’ll use q, with trajectory q(t). For virtual motion, we cannot write x(t) to mean q(t), because
there is no parameterized curve that represents geometrical motion. The quantity q(t) takes integer values
in i, representing A_i. The position is the location of the promise π_i made by A_i.
The model of agents as housing spatially bounded processes leads to natural separation of variables
and processes, and thus of interior and exterior clocks and therefore times (see figures 3 and 4, and the
discussion in reference [11]). τ(i, j) is an exterior unit of time for a process between two agents,
e.g. for a promise of some state to be offered and accepted from agent S to R. t_i is a unit of interior
time for agent A_i. Interior time is unobservable unless another agent is entangled with it, i.e. is part of
some co-dependent process. t(i, j) is co-time for the entanglement of agents A_i and A_j.
When discussing information, a model of state is the natural description. When discussing states of
an agent process, on some scale, it’s revealing to adopt the kind of algebra and notation which has become
1The use of state vectors to represent local dynamics, while analogous to quantum theory, does not imply the use of quantum
mechanics—reader beware! This is also normal practice in statistical mechanics and hydrodynamics. See also Koopman-von
Neumann mechanics [9].
Figure 3: ‘Semantic’ spacetime is formed from elementary agents at each scale, where finite interior processing
resources are responsible for exterior interactions. At the lowest level, interior process ticks measure interior time.
When agents are entangled into co-dependent processes, forming superagents, they share a common clock. When
agents or entangled superagents interact weakly, changes observed between them form a macroscopic clock des-
ignated exterior time. Exterior time is what we can usually observe, and is the variable that appears in differential
equations.
the norm in quantum mechanics—and is adopted in statistical hydrodynamics. We use a quasi-Dirac notation
with ‘bra’ and ‘ket’ symbols ⟨⟨ and ⟩⟩ respectively for the state space of processes, in order to distinguish
them from the quantum mechanical Hilbert space brackets ⟨ and ⟩, so we don’t muddle these.
2 Agent based spacetime
Agent spacetime has been defined in a number of papers [1–3], and extended to include the causal
exchange of information, with conservation, in [11]. Within this arena, three kinds of motion can be
defined (see figure 1), but we shall look only at motion of the Third Kind, which corresponds to the
handover of observable information between agents, something like a relay race or a transport logistics
chain of warehouses. The value of this model is that it allows us to reexamine motion from the viewpoint
of local reservoirs of information where information processes interact, using only the assumption of
strong locality, and information theoretic sampling as constraints [12,13].
In agent spacetime [1], agents are the elementary ‘places’ that constitute space—they may be mapped
to points, atoms, molecules, cells, computers, etc, at different scales, in order to compare virtual motion
with other scenarios. Since the goal is to describe processes in terms of their natural representation, we
avoid imputing a model of an underlying spacetime that can’t be the source of an observation. Agent
space is unlike Euclidean space, where points are infinitesimal, inert, and densely packed, where there is
no notion of being ‘inside’ or ‘outside’ points—everything takes place outside points or ‘at’ the points.
Any non-trivial phenomena are thus attributed to the existence of ‘matter’ which occupies space2.
The ability to separate interior processing from exterior observables turns out to be crucial to a clear
presentation of virtual motion in observable phenomena, as well as to a definition of time that can later
be compatible with local observer relativity. Virtual motion is an exterior (cooperative) process, driven
by interior processes. In literature, this separation may be seen in descriptions of spacetime developed
for computing, such as Milner’s bigraphs, the Pi calculus, and Process Algebras [14], which are based
on agents and graphs, spanning sets, and hierarchical graph embeddings [1, 15], rather than on manifold
embeddings of graphs. This may lead to some confusion on first reading, as it’s hard to dissociate
visualization from Euclidean space thinking. Trajectories and symmetries for translational degrees of
freedom are no longer the axiomatic logical primitives that they are in Euclidean space, rather we think
of the world in terms of cells at some scale—‘surfaces’ that enclose regions where work is processed,
2This view is not always fully consistent, as revealed by the inconsistencies for observers of relativistic quantum fields, and
it’s unclear whether matter occupies, replaces, or displaces space. Luckily we don’t need to worry about that here.
Figure 4: Comparing the time rates of different agents in spacetime during a transition q. Each agent A_i has
its own, possibly different rate of ticking, and this includes a third party observer O. When q is executed, pairs
of agents entangle and default to a common rate, which is less than or equal to the slower of the two agents’ time rates. Since
the transition involves multiple exchanges to complete a reliable (conservative) protocol, q will be measured as
a longer interval, whose length depends on what fraction of available time is given to that process by each agent.
In Newtonian mechanics, all measurements are given in an imagined universal time coordinate, on the assumption
that all clocks can be synchronized. Although this is a false assumption, it works for slow processes and fast
observations.
more akin to the concentric enclosing surfaces of Gauss’s law. Each location in agent spacetime is an active
region, where processes take place with finite resources3. When discussing scaling, we imagine the
composition of agents whereby ‘smaller’ agents are virtually or actually inside superagents [2], rather
than merely being ‘next to’ them.
Consider a set of agents A_i, which are the locations in this graph-like spacetime, forming a one
dimensional path through the set of ordered agents. To order agents, and form something like a coordinate
system, familiar from conventional geometry, we have to assume a basic distinguishability on some level.
A path is a set of successive agents, here labelled along the chosen trajectory incrementally, so that each
step i counts the ‘proper time’ of the trajectory.
A_{-n}, . . . , A_0, A_1, . . . , A_{i-1}, A_i, A_{i+1}, . . . , A_n    (2)
This is similar to the construction of causal sets, where increasing i is determined by a successor relation
succ(i) ↦ i + 1 [16–21]. Labelling the agents in this manner is equivalent to having each agent
promise a locally unique name or address, as any distinguishable property. In order to exhibit dynamical
behaviours, and maintain compatibility with information theory, agents must have interior states and
processes that correspond to observable changes of the states on the exterior. This is essentially deterministic,
but does not imply that observation is deterministic, since agent processes can only reliably
sample messages passed over channels between agents at the Shannon-Nyquist frequency, which would
imply an exponential succession of rates for error free capture. Every A_i must have its own interior clock
time (called interior time). The strict locality of agent processes is equivalent to assuming no a priori
simultaneity of events within different agents, i.e. no synchrony between the clocks or their ticking rates.
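The following toy model (our construction, not the paper’s formalism; all names and rates are assumptions for illustration) sketches interior clocks on such a chain: a marker q hops from one agent to its successor only once both agents have completed an interior tick, so the exterior transition rate is paced by the slower member of each pair—anticipating the claim that finite agent resources bound the speed of virtual motion.

```python
# Toy model: agents on a chain, each with a private interior clock rate.
# The marker hops to the successor only when both sender and receiver
# have completed an interior tick, so the slower clock paces the hop.

class Agent:
    def __init__(self, name: str, rate: float):
        self.name = name
        self.rate = rate      # interior ticks per unit of 'universal' time
        self.phase = 0.0

    def tick(self, dt: float) -> bool:
        """Advance interior time; return True when an interior tick completes."""
        self.phase += self.rate * dt
        if self.phase >= 1.0:
            self.phase -= 1.0
            return True
        return False

chain = [Agent(f"A{i}", rate=r) for i, r in enumerate([2.0, 0.5, 1.0, 3.0])]
pos, exterior_time, dt = 0, 0, 0.25
ready = [False, False]
while pos < len(chain) - 1:
    exterior_time += 1                    # one exterior observation step
    ready[0] |= chain[pos].tick(dt)       # sender's interior tick
    ready[1] |= chain[pos + 1].tick(dt)   # receiver's interior tick
    if all(ready):                        # both ready: the hop completes
        pos, ready = pos + 1, [False, False]

print(f"marker reached {chain[pos].name} after {exterior_time} exterior steps")
# prints "marker reached A3 after 20 exterior steps" with these rates
```

With these rates the slow agent A1 (rate 0.5) dominates the first two hops, illustrating how a local capacity bound becomes a local speed limit for virtual motion.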
3In some ways this is reminiscent of Maxwell’s mechanical model of electromagnetism, which worked inexplicably well.
Briefly summarizing essential Promise Theory, agents interact by promising one another process in-
teractions from their exterior. This contains a richer model of adjacency than a simple successor relation,
and makes a connection to information channels. Every offer (+) has to be matched with an acceptance
promise (-) to satisfy locality:
A −(+M)→ A′    (3)
A′ −(−M′)→ A,    (4)
and the effect of the promise is the overlap or convolution M ∩ M′, which preserves each agent’s autonomy
and locality of control at this scale, but which will smear into an average effect on a larger scale. Promise
semantics require that no agent can make a promise on behalf of another, nor control the outcome of a
process within any other than itself. Notice that a promise interaction is oriented in space, and that the
(+) promise is causal, while the (-) promise appears acausal with respect to the direction of the joint
process. This acausality has no meaning on larger scales, as it is confined to the interior of the virtual
process. Agents can combine in a simple way to extend the boundary between ‘interior processes’ and
‘exterior processes’ [2]. We shall not consider that here, nor do we assume any particular scale for the
agents. They could be computers, bank accounts, cellular automata, or atoms (see figure 5).
Figure 5: The Weyl transformation (appendix C) to agent-centric coordinates gives an effective separation of
interior and exterior time for phase space process variables. This leads to a natural agent-centric view of phase
space for traditional continuum variables. In the discrete spacetime formulation, this is the natural promise-centric
view.
An agent can promise an observable scalar property $q$ by promising it to any third party observer agent $O$ (see figure 6):
$$A_i \xrightarrow{+q} O \qquad (5)$$
$$O \xrightarrow{-q} A_i. \qquad (6)$$
This neutral third party agent is essential to calibrate processes as a single arbiter of measurements, since the $A_i$ are independent and we can't assume any global symmetry principle to regularize the meaning of $+q$ promised by different agents. Its role will become more important in the sequel, where we consider relativity. If multiple agents make a promise of the form (5), then the only common factor is the equivalence of the promise (6).
2.1 Conservation of process observables, cloning, exclusion
The notion of a particle as a material body, i.e. a coherent countable entity that cannot be destroyed, was
invented to represent matter as a substance distinct from empty space. With the discovery of quantum
theory, the particle notion and its nomenclature have gradually mutated beyond recognition, leading to
obfuscation, so we’ll avoid the term particle, though not the original concept. It’s helpful to begin by
trying to construct a 'countable entity' view of motion using information. In virtual motion, agents can only observe information promised to them. Thus, an agent $A_i$ has to 'promise' some property $X$ with an offer (written $+$) to a potential recipient
$$A_i \xrightarrow{+X} A_j, \qquad (7)$$
and in order to be receptive to this information, receivers have to promise to accept $X$ (written $-$)
$$A_j \xrightarrow{-X} A_i, \qquad (8)$$
resulting in a transfer measure of $X^{(+)} \cap X^{(-)}$ with type or domain $X$.
To trace the movements of 'token-like' entities represented by $X$, information or 'scalar promises' [1] can be passed along by such interactions—copied or translated. During copying, past changes to states along the trajectory are left in their changed state, resulting in a wake. In translating, states are reverted from an occupied back to an empty state. A material model automatically behaves like the latter, with 'translation semantics'. A localized token-like entity, whether real or virtual, is not copied but shifted, cleaning up traces of information behind itself to retain a finite localized size. This allows $X$ to be a conserved quantity.
In classical mechanics, representative counters, real or virtual, such as energy, mass, charge, etc, are thought of as different $X$, leading to countable numbers of $X$. Their status as real or virtual is a matter of opinion (no pun intended). Computer programmers similarly make use of such counting every day in games and databases, etc. The assumption of matter as an immutable counter simplifies the rules we need to encode conservation in classical mechanics. We needn't read more into it than that. For virtual motion, or indeed any information-based description that does not build its arguments on classical mechanics, we need to explain this behaviour.
Suppose we have a token counter $X$. An agent $A$ passes a message $M(X)$ to agent $A'$ to pass along a counter $X$, if and only if it is in possession of the counter $X$ already, as a conditional promise. The receiver $A'$ may in turn promise to accept the message $M(X)$ if and only if it does not ($\neg$) already have $X$ (written with the $|$ as the 'if'):
$$\pi^{(+)}_{M}: A \xrightarrow{+M(X)\,|\,X} A' \qquad \text{conditional offer} \qquad (9)$$
$$\pi^{(-)}_{M}: A' \xrightarrow{-M(X)\,|\,\neg X} A. \qquad \text{conditional acceptance} \qquad (10)$$
Then, in order to maintain a fixed $X$ count, agents have to continue this protocol or handshake: $A$ must promise that it no longer has $X$ after the message to accept the transition has been received, and $A'$ promises that it does have it:
$$\pi^{(+)}_{\neg X}: A \xrightarrow{+\neg X\,|\,-M(X)} * \qquad (11)$$
$$\pi^{(+)}_{X}: A' \xrightarrow{+X\,|\,+M(X)} * \qquad (12)$$
This has the semantics of an exclusion principle, forbidding an agent to accept a counter that it already has. While these promises are being kept, the system is locked (unobservable); otherwise we could potentially observe or infer the properties to be in several locations 'at the same time' according to some localized observer. This method of locking is the basis of concurrent programming for shared resources in computer science.
The promises in (11) and (12), written with a wildcard '*' recipient, are made to all agents. If any of the promises above can't be kept, the reliable transfer can't be maintained either. Notice that the transfer requires non-trivial processing of algebraic 'logic' for the exchange at both ends. Such a conservative exchange cannot be performed by a simple ballistic process unless one insists on the literal existence of a class of immutable material counters. One can go some way with this idea, but it falls apart quickly once the finiteness of agents is required.
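The conservative exchange of promises (9)–(12) can be sketched as a small simulation. This is an illustrative toy, assuming the simplest possible state model; the names `Agent` and `transfer` are our own, not part of the formalism.

```python
# Toy sketch of the conservative token transfer in promises (9)-(12).
# All names here (Agent, transfer) are illustrative, not from the paper.

class Agent:
    def __init__(self, name, has_token=False):
        self.name = name
        self.has_token = has_token   # does this agent currently promise X?

def transfer(sender, receiver):
    """Pass the counter X from sender to receiver, keeping the total count fixed.

    The offer is conditional on the sender holding X (promise (9));
    acceptance is conditional on the receiver not holding X (promise (10)).
    """
    if not sender.has_token:        # conditional offer: +M(X) | X
        return False
    if receiver.has_token:          # conditional acceptance: -M(X) | not X
        return False                # exclusion: can't accept a counter it has
    # While the handshake completes, the pair is 'locked' (unobservable);
    # afterwards the sender promises not-X and the receiver promises X (11)-(12).
    sender.has_token = False
    receiver.has_token = True
    return True

A, B = Agent("A", has_token=True), Agent("B")
assert transfer(A, B)                          # X moves from A to B
assert not transfer(A, B)                      # A no longer has X: no offer
assert not transfer(B, B)                      # B already has X: refused
assert sum(a.has_token for a in (A, B)) == 1   # X is conserved
```

The point of the sketch is that conservation is not built in as a material substance; it emerges from the conditional logic at both ends of the exchange, which is exactly the locking discipline used for shared resources in concurrent programming.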
From a computational (information process) point of view, the invariant in this interaction is the length of the message $|M(X)|$ that implements a transition of the counter. The time taken is, in general, a relative assessment based on successive agents' interior resources. In this first paper, we'll assume a Newtonian world in which clocks are synchronized, so this issue will be less important. During message transfer, agents' process clocks advance in lockstep for the duration, but what happens between different agents can't be readily compared. Adopting the Newtonian singular 'great universal clock in the sky', we imagine measuring faster and slower agent transitions, leading to different rates of processing and thus of propagation on a pairwise basis.
2.2 Distance by increments (hops) and work interpretation
In virtual motion, distance must be the assumed invariant, because it is unobservable. There are two ways we could quantify the incremental distance $d(i, j)$ in the processes in (11):
1. Assume a regular grid of coordinates, analogous to the Cartesian tradition, but only along a given causal trajectory—then attribute unit increments to successive 'hops' along the chain. Because there is no independent universal speed by which to define distance in terms of time, we can only count these as atomic increments4. Since we assume that agents can be recognized by their labels,
$$d(i, i+1) = |(i+1) - i| \times \text{length unit } d, \qquad (13)$$
where $d$ is just a dimensional scale; then along any causal path $P$, but not in general,
$$d_P(i, j) = (j - i)_P \times d, \qquad (14)$$
$$\text{iff } \exists P: A_{i+1} = \mathrm{succ}_P(A_i). \qquad (15)$$
This is the simplicity of assuming universal coordinates, but in some ways it's a tautology, and it becomes problematic where the relationships between agents change dynamically. For example, if we use a steel ruler to measure distance, but the temperature varies, then the interatomic (inter-agent) distances that make up the calibrated process spacing may vary relative to finer grained measuring apparatus—so we have to trust the standard and believe in the randomness of 'errors' as part of calibration. For that reason wave processes in 'uniform media' (or empty vacuum) are often used for counting. This too is an arbitrary choice, based on assumed invariance. This issue never fully disappears, so for agent models, one relies on the invariance of the cellular structure of spacetime.
2. A second approach could be to measure effective distance by the length of message or number of process steps required to implement a transition, during the entangled interaction [11]. However, the semantics of this measure are closer to the progress we associate with 'work done'. The workload $|M|$ or $|q|$ is the length of the message needed to perform a transfer of some promise from one location to another over the entangled connection—a finite state message, in information theoretic terms, i.e. a number of transition symbols to be processed. If the agents are to give rise to a homogeneous transfer of $q$, then the length of this message should be the same in each case, depending only on the nature of $q$, not on the agent concerned. In that case, this becomes equivalent to the first measure—an arbitrary dimensional scale representing a 'hop'. This is consistent with Noether's theorem, stating that the continuity of spacetime implies the conservation of work:
$$d_M(i, i+1) = |M| \times \text{length unit}. \qquad (16)$$
Although this definition of distance is constant, its message length could depend on the nature of the property $q$ being measured. So the effective distance or work could vary from process to process if we choose this definition. In practice, measuring distance in this way is a separate path from more conventional geometry, so we'll not consider it further.
Accordingly, we may conclude that this measure of distance has the character of 'work done', i.e. distance measured in 'process space' (the generalization of phase space), and as such can be associated with the proper time of the process, which is the conventional complementary quantity for energy, as per Noether's theorem.
4In Special Relativity, the choice to take $c$ as the constant comes from Maxwell's prediction for $c$ in terms of the independent constants of the electromagnetic field. Here, there is no corresponding anchor, so we take the unit of distance to be the invariant.
There is no fully satisfactory answer for imagining extended distance, at least for elementary agents. The problem with the first proposal is that agents could be in any 'gaseous' structureless state, and therefore this intuition deviates from the Euclidean instinct to impute a topologically rigid coordinate system. Because the configuration of agents is not necessarily regular and smooth near the point of measurement—even when seeking an instantaneous point velocity—we don't escape the need for two-point functions, like the familiar response functions in physics that encode causal response to source disturbances in fields.
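The two candidate measures can be made concrete with a toy calculation; the function names, the unit value, and the example message are our own illustration under the assumptions of (13)–(16), not part of the formalism.

```python
# Sketch of the two incremental distance measures of section 2.2:
# hop counting vs message length. Names and values are illustrative.

LENGTH_UNIT = 1.0   # the arbitrary dimensional scale d of eq. (13)

def hop_distance(i, j):
    """Measure 1: unit increments along a causal chain, eqs. (13)-(14)."""
    return abs(j - i) * LENGTH_UNIT

def work_distance(message, hops):
    """Measure 2: |M| symbols of 'work' per hop, generalizing eq. (16)."""
    return len(message) * hops * LENGTH_UNIT

assert hop_distance(3, 7) == 4.0       # four hops along the chain
assert work_distance("MX", 4) == 8.0   # 2 symbols processed per hop, 4 hops
```

When the message length is the same for every hop, the second measure reduces to the first up to a constant, which is the equivalence noted in the text.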
2.3 Time increments: interior, exterior, and channel time
The fact that causally independent processes have to expend interior process cycles to partake in (enact and observe) transitions arriving from other agents is the basis of the Shannon-Nyquist formula [13]. Further, the fact that agents may have different capacities for executing their own sampling process means 'process ticks', or local time, will run at different rates at different locations. This is obvious in computer networks, where different CPU rates exist, and in biochemical processes, where different chemical concentrations and temperatures exist. In elementary spacetime, we have no idea what the criteria may be, so we can simply assume that spacetime may be inhomogeneous in this way.
The fundamental kind of tick of time is interior time (analogous to CPU ticks), which we write $\Delta t_i$ (see figures 3 and 4). A process tick at $A_i$ satisfies
$$\Delta t_i \geq \frac{1}{C_i}, \qquad (17)$$
where $C_i$ is the fundamental finite rate of ticking at $A_i$; the effective rate can be some fraction of this limit if processes share time resources. An interior process of length $|M|$ will therefore take
$$\Delta t_i(M) \geq \frac{|M|}{C_i}. \qquad (18)$$
By the Nyquist theorem, we must have a sampling penalty during transmission of processes, so the co-time $\Delta t(i, j)$ of a process entangled between $A_i$ and $A_j$ is the number of co-ticks during which the interior time clocks of the entangled agents are in lockstep, as a result of waiting for a shared message resource. Each sample of a local symbol transition $\Delta t(i, j)$ costs:
$$\Delta t(i, j) \geq \nu_N \Delta t_j, \qquad (19)$$
where $\nu_N = 2$ is the Nyquist sampling factor. Notice that directed communication is asymmetric, and limited by the receiver. So, each successive agent in a chain of identical agents has only a fifty percent chance of receiving a symbol in one shot. Similarly, each potential promise change transmitted by a reliable channel costs
$$\Delta\tau \geq \nu_B \Delta t(i, j), \qquad (20)$$
where $\nu_B = 4$ is the Borrill handshaking factor [11, 22]. Similarly, the effective channel capacity for such a change is
$$C(i, j) \leq \frac{\min(C_i, C_j)}{\nu_B}, \qquad (21)$$
since reliable transmission requires at least a $\nu_B$-step protocol. In a Newtonian universe, with instantaneous universal time, we can compare any of these timescales to a universal time $\Delta T$ as long as $\Delta T \ll \Delta t_i$ for all $i$. These time relationships are simple consequences of the finite interior resources enclosed within the agent boundary, versus the observable resources exposed at the boundary to adjacent agents.
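The tick relations (17)–(21) can be illustrated numerically; the rates and message length below are made-up example values, and combining (19) with (20) into a single per-change bound is our own shorthand.

```python
# Illustrative numbers for the tick/co-time relations (17)-(21).
# The C_i rates and message length are arbitrary example values.

NU_N = 2   # Nyquist sampling factor, eq. (19)
NU_B = 4   # Borrill handshaking factor, eq. (20)

def tick(C_i):
    """Minimum interior tick at an agent with process rate C_i, eq. (17)."""
    return 1.0 / C_i

def interior_time(msg_len, C_i):
    """Minimum interior time to process a message of |M| symbols, eq. (18)."""
    return msg_len / C_i

def co_tick(C_j):
    """Minimum co-time per sampled symbol, limited by the receiver, eq. (19)."""
    return NU_N * tick(C_j)

def exterior_tick(C_i, C_j):
    """Minimum exterior time per reliably transmitted change,
    combining (19) and (20), bounded by the slower agent."""
    return NU_B * NU_N * tick(min(C_i, C_j))

def channel_capacity(C_i, C_j):
    """Effective channel capacity bound, eq. (21)."""
    return min(C_i, C_j) / NU_B

assert interior_time(8, 2.0) == 4.0          # 8 symbols at rate 2
assert co_tick(10.0) == 0.2                  # Nyquist penalty at the receiver
assert exterior_tick(4.0, 2.0) == 4.0        # reliable handshake overhead
assert channel_capacity(100.0, 10.0) == 2.5  # a fast sender can't outrun
                                             # a slow receiver
```

The last assertion is the point of (21): the pairwise capacity inherits the bottleneck of the slower agent, divided by the handshaking overhead.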
Example 6 (Hamiltonian mechanics) In classical and quantum mechanics (especially in the Hamiltonian formulation), the Hamiltonian total energy is an exterior (macroscopic) operator which is the generator of exterior time $\tau$. The momenta of 'particles' $q(t)$ are an exterior representation of the interior process states passed along. The Hamiltonian describes or prescribes an on-going evolution of the interior distribution of resource patterns over the whole of the system, while the unobservable interior processes are responsible for the transitions from location to location in phase space.
2.4 Inhomogeneous information channel capacity
In order to be compatible with information theory, we relate time ticks to the concepts of an information channel [12, 13]. A channel is not a completely defined concept; rather, it's a partial abstraction for a conduit along which information can be passed. The physical characteristics of different channels (waveguides, cables, wireless frequency ranges, etc) mainly play a role in terms of the noise characteristics. They may be represented here as different promise types.
Consider an information channel connecting agents $A_i$ and $A_j$ (relabelled $S_i$ and $R_j$ to characterize the polarization into sender and receiver). Shannon characterized only channels' average properties, by estimating the abstract uncertainty of them having a shared state—in terms of generically assumed statistics, thus defining the informational entropy:
$$H = -\sum_{n=1}^{N} p_n \log_2 p_n, \qquad (22)$$
where $p_n$ is the 'probability' of encountering symbol $n$ from an alphabet $\Sigma$ of $N$ distinct symbols. The type of probability (frequentist, Bayesian, or propensity interpretations) is undefined (as is probability in quantum theory). In a quantum (information) system, the von Neumann entropy is generally used to represent average information in terms of the density matrix5:
$$H_{QM} = -\mathrm{Tr}\,\rho \log \rho. \qquad (23)$$
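The Shannon entropy (22) is easy to evaluate directly; the example distributions below are arbitrary.

```python
# Shannon entropy of a symbol distribution, eq. (22).
from math import log2

def entropy(p):
    """H = -sum_n p_n log2 p_n over an alphabet of N symbols.

    Terms with p_n = 0 contribute nothing (the 0 log 0 = 0 convention).
    """
    return -sum(p_n * log2(p_n) for p_n in p if p_n > 0)

assert entropy([0.5, 0.5]) == 1.0     # a fair binary symbol carries one bit
assert entropy([1.0]) == 0.0          # a certain symbol carries no information
assert abs(entropy([0.25] * 4) - 2.0) < 1e-12   # uniform over 4 symbols: 2 bits
```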
Here, we'll try to distinguish clearly between what is a statistical characterization and a low level causal structure—even if that leads to a speculative interpretation in the quantum case6. The question about the interpretation of the probabilities arises again here: one can choose between frequentist (spacelike, transverse, or 'out of band' ensembles), Bayesian (timelike, longitudinal, or 'in band' ensembles) or propensity (affinity) interpretations. For virtual motion, there are too many unknowns about the underlying channel to claim frequentist determinism or causal invariance, where past behaviour implies future behaviour. One is therefore left to try to estimate $C(i, j)$ by finite resource arguments7.
A message $M$ can be characterized as a linear combination of symbols $\sigma_n \in \Sigma$ with coefficients $\xi_n$:
$$M = \sum_{n=1}^{N} \xi_n T_n, \qquad (24)$$
5This just kicks the interpretation can down the road, since the quantum probability is also not fully defined.
6Following a universal prescription for all scales doesn't violate any quantum mechanical formulations, but ends up looking like a mixture of the Bohm pilot wave interpretation and the Transactional Interpretation [23, 24] for quantum mechanics. The cellular automaton interpretation is also of interest [25].
7In quantum mechanics, the finite energy conservation determines a Hamiltonian channel allocation process, subject to boundary conditions, whose outcome is a plausible distribution $\psi(x)$ to guide transitions. The Hamiltonian is a generator for exterior time. This interpretation is quite close to the Bohm interpretation [26, 27].
and $T_n$ is the generator of symbol $\sigma_n$. Over an ensemble of messages, the average number of bits transmitted per possibly redundant symbol must be less than or equal to this limit. The channel capacity for a transition from an agent $S_i$ to $R_j$ is given by the 'mutual information' passed from sender to receiver:
$$I(S_i, R_j) = I(i, j) = \sum_{s,r=1}^{N} p_{sr} \log_2 \frac{p_{sr}}{p_s p_r}, \qquad (25)$$
$$C(S_i, R_j) = C(i, j) = \max_{p_s, p_r} I(S_i, R_j), \qquad (26)$$
where $p_{sr}$ is the joint probability of a symbol being at both $S$ and $R$ 'simultaneously'. In other words, the channel capacity is the maximum value for the information that is common to both agents during a transition, where $p_{sr}$ is taken relative to the probability $p_s p_r$ that both accidentally and independently share the same information.
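The mutual information (25) can be computed directly from a joint distribution; the two example channels below (a noiseless binary channel and a pair of independent ends) are our own illustrations.

```python
# Mutual information between sender and receiver symbols, eq. (25).
from math import log2

def mutual_information(p_joint):
    """I = sum_{s,r} p_sr log2( p_sr / (p_s p_r) ).

    p_joint[s][r] is the joint probability of symbol s at the sender
    and symbol r at the receiver; marginals are computed from it.
    """
    p_s = [sum(row) for row in p_joint]          # sender marginal
    p_r = [sum(col) for col in zip(*p_joint)]    # receiver marginal
    I = 0.0
    for s, row in enumerate(p_joint):
        for r, p_sr in enumerate(row):
            if p_sr > 0:
                I += p_sr * log2(p_sr / (p_s[s] * p_r[r]))
    return I

# A noiseless binary channel shares one full bit per symbol:
assert abs(mutual_information([[0.5, 0.0], [0.0, 0.5]]) - 1.0) < 1e-12
# Statistically independent ends share nothing:
assert abs(mutual_information([[0.25, 0.25], [0.25, 0.25]])) < 1e-12
```

The capacity (26) would then be the maximum of this quantity over admissible symbol distributions, which for the noiseless channel above is already attained.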
Note that, while the measurement of channel capacity is a statistical outcome (as all empirical measures are), this does not imply that information is necessarily a statistical quantity. Specific states measured as bits and bytes are information, not only averages. Much attention is given to this statistical representation, though far less attention is given to the assumed interpretation of the probabilities. Empirically, the $p_i$ are measured in a frequency interpretation, but more often (especially in quantum theory) a propensity interpretation ('affinity' for future likelihood) is assumed, as is the norm for quantum mechanics [28]8. The channel capacity represents a statistical characterization of the maximum throughput in bits per second for a channel. Shannon's theorem tells us that attempts to transmit at a lower rate can always be error corrected to be reliable, while attempts to transmit at a higher rate will lead to symbol errors that grow without bound as the rate increases. Thus, in principle, a channel has no maximum rate from information theoretic reasoning, only a reliability limit. This is all one can infer statistically. However, here we are assuming finite agent resources, so we'll interpret the limit as a hard limit on agent capacity (e.g. related to the maximum CPU rate in a computer). We should also not discount the possibility that different types of message could have private channels, so we can add a subscript $C_q(i, j)$ to accommodate this. Thus, a channel for passing $M \sim \delta q$ may have private capacity compared to a channel for $M \sim \delta q'$. If not, processes that fall into the same class would contend with one another for resources. This is where the exclusion principle may enter.
The channel capacity $C(i, j)$, as defined, is effectively a statistical invariant, standing for an infinitesimal property of spacetime, which effectively replaces spatial 'adjacency' $x \rightarrow x + dx$ with a transition availability given by the channel utilization:
$$\text{Utilization} = \frac{I(i, i+1)}{C(i, i+1)} \equiv \beta, \qquad (27)$$
where $I$ is the actual rather than the maximal mutual information in a series of messages, over some timescale. This measure is the same as the 'utilization' described in Queueing Theory [29, 30]. The key point is that the information channel is not a separable pure state function, $C(i, j) \neq X(i)Y(j)$, even though it is a process composed from initially independent agents. If we suppose that the local state of $A_i$ is defined as $\psi_i$, then $I(i, j)$ is not a product state, but a convolution. This alone tells us that the natural language of the states is a Fourier transform, which immediately admits signal uncertainty effects and effective momenta of the quantum ilk, from the Fourier transform variable [31]. This in turn admits uncertainty relations, analogous to Heisenberg's, due to different scale separations.
Based on the information transmission mechanism, using the dimensions of the channel (bits per second of exterior time $\tau$), we can express the time to perform a transition assuming a rational fraction $\beta$ of the maximum resource utilization. For a message $M$, the co-time to transmit across a single channel hop from $A_i$ to $A_j$ is:
$$\Delta t_q(i, j) = \frac{|M|}{\beta_M C_M(i, j)}. \qquad (28)$$
This gives us an expression for the instantaneous velocity rooted in virtual information transmission:
$$v_M = \frac{d}{\Delta t(i, j)} = \beta_M \frac{d\, C_M(i, j)}{|M|}. \qquad (29)$$
8The propensity interpretation for probability relates two ontological concepts in physics: potential and transition probability. It's not clear that these are distinct concepts in a causal theory where both potentials and probabilities have the interpretation of a guide to future behaviour. Both summarize a kind of memory, or gradient, that guides future outcomes.
We can thus define a shorthand for the maximum effective instantaneous speed by
$$c_M = \frac{d\, C_M(i, j)}{|M|}, \qquad (30)$$
in terms of the process utilization or fractional velocity:
$$\beta_M = \frac{v_M}{c_M}, \qquad (31)$$
for signals of type $M$, assuming that messages of fixed type have fixed length. As we combine agent transitions into longer paths, the minimum bottleneck rate will play a role.
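Relations (28)–(31) can be checked numerically; the message length, capacity, and utilization fraction below are arbitrary example values, and the unit hop length stands for the dimensional scale $d$.

```python
# Instantaneous virtual velocity from channel resources, eqs. (28)-(31).
# All numerical values here are illustrative.

LENGTH_UNIT = 1.0    # the dimensional hop length d

def transition_time(msg_len, beta, capacity):
    """Co-time for one hop, eq. (28): dt = |M| / (beta * C)."""
    return msg_len / (beta * capacity)

def velocity(msg_len, beta, capacity):
    """v = d / dt = beta * d * C / |M|, eq. (29)."""
    return LENGTH_UNIT / transition_time(msg_len, beta, capacity)

def max_speed(msg_len, capacity):
    """c_M = d * C / |M|, eq. (30): full utilization, beta = 1."""
    return LENGTH_UNIT * capacity / msg_len

M, C = 10, 50.0                                  # message length, capacity
assert velocity(M, 0.4, C) == 2.0                # partial utilization
assert max_speed(M, C) == 5.0                    # the local speed limit c_M
assert abs(velocity(M, 0.4, C) / max_speed(M, C) - 0.4) < 1e-12  # eq. (31)
assert velocity(M, 1.0, C) == max_speed(M, C)    # v cannot exceed c_M
```

The sketch makes the finiteness argument tangible: for fixed $|M|$ and finite $C_M(i,j)$, no allocation of resources can push $v_M$ past $c_M$ at that location.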
3 Privileged instantaneous observer motion
A Newtonian observer experiences (effectively) instantaneous knowledge, meaning that a theoretical coordinate view and a direct observer view can be treated as the same. This is partly due to neglecting the finite speed of communication, and partly due to the assumption of invariances that eliminate time limits and distortions. We begin with a Newtonian model to describe virtual motion.
3.1 Causal semantics of q, v, and t
In classical mechanics, the trajectory of a particle is written $q(t)$ (see figure 6), whose value is interpreted as the coordinate position $x(t)$, in a regular system of spacetime coordinates $(t, x)$. The analogue of this quantity in agent spacetime is the scalar promise in (5). Motion of the third kind involves a succession of translations of this scalar promise from $A_i$ to $A_{i+1}$ in some exterior time interval shared with the observer. Subsequent transitions move the promise along some one-dimensional path or chain of promise interactions—as if by a series of 'teleportations'. The promises ensure that only one agent at a time can make the promise, so that the promise acts as a virtual particle. Motion of the third kind can be characterized in two ways:
1. Observer follows the motion along its trajectory:
$$\left(A_i \xrightarrow{+q} O \;\wedge\; A_{i+1} \xrightarrow{+\neg q} O\right) \longrightarrow \left(A_i \xrightarrow{+\neg q} O \;\wedge\; A_{i+1} \xrightarrow{+q} O\right), \qquad (32)$$
where the notation $\neg q$ is the promise of 'NOT $q$', or the absence of $q$. We also take for granted that the observer is attuned to detect the promise by accepting
$$O \xrightarrow{-q} A_i, \;\forall i \qquad (33)$$
$$O \xrightarrow{-\neg q} A_i, \;\forall i \qquad (34)$$
This promise is as important as the promise of $q$ itself in order to account for conservation of the property labelled $q$. In classical mechanics this accounts for the conservation of particle number; in electromagnetism, it is the conservation of charge, etc. It's almost clear from this that one
Figure 6: Motion of the third kind (virtual motion) involves the passing of a promise, say $q$, like a baton from agent to agent. The promise of the observable property can be accepted by an observer—itself a process promised by some agent—which assesses the location of the promise at each time $t$ on its own clock, only if it can distinguish between the agents $A_i$.
has a Noether theorem-like connection between spacetime continuity and conservation of token counting. In more conventional language, this transition would be labelled as a closed trajectory within a 'Cartesian theatre':
$$q(i, \tau) \rightarrow q(i+1, \tau+1), \qquad (35)$$
representing a tick of universal exterior time. We note that this transition time is an exterior time, belonging essentially to the observer's clock, and is different from whatever interior process time is required in order to implement the transition from $A_i$ to $A_{i+1}$. An instantaneous transition conflicts with information theory, where measurement can only be performed at half the sampling frequency of the process keeping the $(-q)$ promise in (33). We'll return to that issue below. In a Newtonian world, one assumes that an observer is infinitely fast.
2. The stationary observer watches a single location and sees the promise of $+q$ come and go over a time interval, in units of the regular sampling frequency:
$$t_{i-1}: A_i \xrightarrow{+\neg q} O \qquad (36)$$
$$t_i: A_i \xrightarrow{+q} O \qquad (37)$$
$$t_{i+1}: A_i \xrightarrow{+\neg q} O \qquad (38)$$
The distance moved is then assumed to be one unit, and the time is assumed to be the time between two consecutive samples. In this picture, relative motion is easy to define, because both the watcher and the process $q(\tau)$ would use the same global coordinate system. The assumption of instantaneous knowledge guarantees that (however it happens) we can simply calculate the relative velocity by checking the coordinate labels promised by the agents that promise $-q$ and $+q$. That gives the relative velocities according to the usual assumptions.
In the reality of agent based systems, this is not so easy to do, because the paths taken by the observer and the particle trajectory need not be parallel. This is where the convenient Newtonian picture of regularity breaks down, due to the roughness of spacetime on the scale of networks and agents.
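The two observation modes can be mimicked with a toy chain of agents: a single $+q$ promise hops along the chain, and a privileged (Newtonian) observer records which agent promises it at each exterior tick. The construction below is entirely our own illustration.

```python
# Toy trajectory of a virtual 'particle': a single +q promise hopping
# along a chain of agents, sampled by a privileged Newtonian observer.

def run(n_agents, n_ticks):
    q_holder = 0                      # index of the agent promising +q
    trajectory = []                   # the observer's record q(t)
    for t in range(n_ticks):
        trajectory.append(q_holder)   # exactly one agent holds q at a time
        if q_holder + 1 < n_agents:   # eq. (35): q(i, tau) -> q(i+1, tau+1)
            q_holder += 1
    return trajectory

traj = run(n_agents=5, n_ticks=5)
# Mode 1 (trajectory-following observer): one unit of distance per tick.
assert traj == [0, 1, 2, 3, 4]
# Mode 2 (stationary observer at agent 2): sees ¬q, ¬q, +q, ¬q, ¬q,
# as in eqs. (36)-(38).
assert [i == 2 for i in traj] == [False, False, True, False, False]
```

Both readings are derived from the same underlying transitions; only the sampling strategy of the observer differs.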
The implementation of the transition is by conditional promises, as discussed in section 2. Let's refer to the invariant message that transfers a promise of $+q$ by $\Delta q$:
$$\pi^{(+)}_{\Delta q}: A_i \xrightarrow{+\Delta q\,|\,q} A_{i+1} \qquad (39)$$
$$\pi^{(-)}_{\Delta q}: A_{i+1} \xrightarrow{-\Delta q\,|\,\neg q} A_i. \qquad (40)$$
It's assumed that the number of steps $|\Delta q|$ required to complete this transition $\Delta q$ is an invariant; however, different agents might execute these steps with greater or lesser efficiency, and at different maximum rates of utilization. In words, these promises declare that some agent, $A_i$, offers a transition to its successor, that we name $\Delta q$, iff it already has $q$, and $A_{i+1}$ accepts such an offer iff it currently does not have $q$. In this way, there will be only one agent with a $q$. The process by which $q$ passes from $A_i$ to $A_{i+1}$ was described by entanglement of the agents' interior processes in [11]. In computer science, this kind of process is often called a handshake9.
3.2 Instantaneous velocity at a point
The instantaneous velocity of this process, formed from paired transition promises $\pi^{(+)}_{\Delta q}\pi^{(-)}_{\Delta q}$, is:
$$v_i\left(\pi^{(+)}_{\Delta q}\pi^{(-)}_{\Delta q}\right) = \frac{d(i, i+1)}{\Delta t(i, i+1)} \leq v^{\max}_i, \qquad (41)$$
where we note that this does not necessarily generalize to non-adjacent intervals $i, i'$. The co-time $\Delta t(i, j)$ is not directly observable by outside agents, so this will have to be related to the exterior time (see figure 4). To be strictly correct, the time here should be the exterior time, since velocity is measured by exterior agents. In the privileged observer view, the difference doesn't matter, since the clocks are synchronized—but later a correction will be in order.
We can now define the Newtonian velocity, based on all the prior assumptions, including the universality of observer time (all clocks synchronized) and standardized invariant distances. We use the usual formula for instantaneous exterior velocity over an interval:
$$\text{Velocity} = \frac{\text{distance}}{\text{time}} \qquad (42)$$
$$\text{Exterior time velocity} = \frac{d(A_i, A_j)}{\Delta\tau(A_i, A_j)}, \qquad (43)$$
$$\text{Co-time proper velocity} = \frac{d(A_i, A_j)}{\Delta t(A_i, A_j)}, \qquad (44)$$
where $d(A_i, A_j)$ is some (yet to be defined) function for the distance between agents $A_i$ and $A_j$, and $\Delta\tau(i, j)$ is a time interval, measured for the transition by an observer $O$'s clock. The effective velocity should be defined in terms of exterior (observable) time, while the 'proper' velocity should be measured in interior co-time. So, we need to relate these in the general case. This definition of velocity is the
9The time-reversal or direction symmetry of spacetime is, of course, violated by this interaction, which prefers transitions in a certain direction. In any transition system, motion is only enabled by a kind of symmetry breaking potential, which is already ingrained in the description at a basic level. This observation will feature importantly in section 6. Reversibility can be restored on average, if boundary conditions on a larger scale are conducive. In classical ballistic physics, the symmetry breaking is usually encoded in the material particle processes, in the form of 'momentum' or velocity, which is initially broken by boundary conditions. In a general transition system, such boundary conditions have to form a non-local 'field' that affects every pair of agents along the path, because there is no consistent notion of direction for a transition process to rely on (a point cannot know its orientation). These are details to be understood and made consistent (see section 6). We could restore the symmetry by hand, allowing the trajectory to make transitions in either direction, but then we'd have to account for the symmetry breaking during each transition, in order to maintain a constant classical velocity in a straight line (without spontaneous Brownian reversals). This should be a clear indication that conservation of motion in a straight line is a memory process, not a Markov process, which would be a Brownian motion.
discrete network analogue of the derivative $dx/dt$, where the displacement $x$ corresponds to the hop $i \rightarrow j$. The conceptual difference here is that there is no infinity of locations between the end points to make this smooth, and to argue consistency between locations of the interval here, so this cannot be understood as an average velocity over the distance, or a limit.
3.3 Mass, instantaneous force, and acceleration
There is still more we can learn from looking at Newtonian reasoning applied to virtual motion. If velocity can be less than the natural maximum value, then how do we change it? In universal time, acceleration can be defined in the usual way, but now 'instantaneous acceleration' involves a minimum of three agents. The Newtonian derivative $d^2x/dt^2$ conceals a non-local dependency on three locations, whose properties can't be assumed away as a continuum limit approximation based on differentiability of smooth functions. An acceleration is thus a change in velocity, now involving the promises of three agents, $A_{i-1}, A_i, A_{i+1}$, where the force transformation is applied at the middle agent, and time intervals for the two transitions, as measured by the observer, refer to $\Delta t_1(A_{i-1}, A_i)$ and $\Delta t_2(A_i, A_{i+1})$. We can write this as $\Delta v$ over a time interval $\Delta t$ for some $t$. Assuming an instantaneous positive acceleration from $v_1$ to $v_2 > v_1$:
$$a_i \simeq \frac{v_2 - v_1}{\frac{1}{2}\left(\Delta T(i-1, i) + \Delta T(i, i+1)\right)} \equiv F_i/m. \qquad (45)$$
As before, we have to deal with the subtlety of exactly which time is represented by the denominator in practice. Here, we can write the universal $\Delta T$ as a placeholder. In the continuum idealized view, this is unambiguous, but in the agent model we have to choose between interior and exterior times of different agents. Acceleration is observed by exterior agents, and forces are also considered exterior, so we expect the time to be based on an average exterior time $\Delta\tau$. We return to this in section 5.5.
Having defined distance earlier, the only possibilities for an instantaneous change in velocity are either a discontinuous change in the work length $|\Delta q|$, or a change in the time resource allocation $\Delta t$ for a constant process. We can define the origin of instantaneous force to be that perceived influence that leads to the change.
• If the message length $|\Delta q|$ is not constant, then increased speed means reducing the length of $|\Delta q|$. This has an obvious limit once the length of the work is zero.
• If there is a change in the time utilization of processes executed at each agent, i.e. the rate of work allocated to instantaneous transitions, then we have to pay attention to the maximum fraction of finite resource available to allocate, and again there is a limit on the achievable instantaneous velocity.
The counterpoint to force is the Newtonian mass. The concept of mass occurs in two places in physics:
as a ‘coupling constant’ in Newton’s generalized force-acceleration law and as both a ‘source charge’
and a ‘coupling constant’ in Newton’s specific gravitational force-acceleration law. The origin of mass
has long been debated by philosophers [32, 33]. Although lay-persons associate mass with
bulk and substance, technically, mass only appears in the context of force and acceleration. At constant
velocity, mass plays no role in kinematics.
The effect of force and mass with respect to velocity occurs only in the ratio $F/m$. The consistent
application of a force over many points along a trajectory, for some property $q$, could increase the velocity
indefinitely. This is inconsistent with the finite limits of agents for processing virtual motion. In order
for the effect of constant force to peak at a maximum velocity, due to finite local processing capacity, the
effective mass coupling cannot also be constant; it must be a function of velocity:
$$\frac{\Delta v}{\Delta t} = \frac{F}{m(v)} \tag{46}$$
where $m(v)$ must have the general form:
$$m(\beta) = m\!\left(\frac{v}{c}\right) = \frac{m_0}{(1 - |\beta|^{\eta})^{\xi}} \times \left(\sum_{m=0}^{\infty} f_m |\beta|^m\right), \qquad \eta, \xi > 0, \tag{47}$$
for some constant polynomial coefficients $f_m \geq 0$, and it's assumed that mass can't depend on the sign
of the velocity. Significantly, the mass approaches infinity as $|\beta| \to 1$, and must have a nominal value
for $\beta = 0$, since the 'cost' or resistance to reserving a change in processing capacity does not vanish
simply because the initial velocity is zero. This expression has plenty of unknowns, as it's based only
on asymptotic behaviour. Without further constraints, we can't say much more about the dependence
here; it will be process dependent. The invariant $m_0$ refers to a property of inertia at rest, whose origin
is yet to be explained. It has yet to be shown how the effective mass, which is experienced by virtual
processes in relative impulses and accelerations, is related to this quantity in a relativistic treatment10.
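The saturation implied by equations (46) and (47) can be sketched numerically. In the following minimal illustration, the parameter choices ($m_0 = 1$, $\eta = \xi = 1$, a single coefficient $f_0 = 1$, and the force $F$) are assumptions for the sake of the demonstration, not values fixed by the text; the point is only that the velocity utilization $\beta$ approaches, but never exceeds, the capacity limit $|\beta| = 1$:

```python
# Hedged sketch of eqs. (46)-(47): constant force with velocity-dependent
# effective mass. Parameter values (m0, eta, xi, F) are illustrative
# assumptions, not taken from the text.

def effective_mass(beta, m0=1.0, eta=1.0, xi=1.0):
    """m(beta) = m0 / (1 - |beta|**eta)**xi, diverging as |beta| -> 1."""
    return m0 / (1.0 - abs(beta) ** eta) ** xi

def accelerate(beta=0.0, F=0.05, steps=2000, dt=1.0):
    """Integrate d(beta)/dt = F / m(beta), per eq. (46)."""
    for _ in range(steps):
        beta += dt * F / effective_mass(beta)
        beta = min(beta, 1.0 - 1e-12)  # finite capacity bounds |beta| < 1
    return beta

print(accelerate())  # saturates just below beta = 1
```

The diverging mass makes each successive increment of $\beta$ smaller, so a constant force produces a bounded terminal velocity, as the finite-resource argument requires.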
We expect finiteness of local resources to return later in connection with capacity allocation. If a
transition process message $|q|$ continues to grow towards infinite size, then eventually no agent will have
the interior states to be able to accept the message in one go, so the process would either have to terminate
or split into several smaller processes (see example 34). This tells us that virtual motion must place an
upper limit on transition information. Process splitting of a conserved quantity could mean potentially
violating the exclusion principle, i.e. the creation of two processes that could not coexist on the same
agent, unless the processes separated. We would thus have to imagine something analogous to 'pair
creation' as a non-local interaction of a new kind, implicit in the 'operating system' of the virtual process.
The spawned processes could continue to propagate close together, either slightly out of phase in the
same direction, or be split into separate branches which would have to travel along different paths.
3.4 Composition of transitions—scaling propagation in space and time
So far, we've only considered the instantaneous velocity between neighbouring agents—i.e. the quantity
which is analogous to the derivative $dx/dt$ in an absolute spacetime—in order to expose the inhomogeneous
properties of each location, which make agent spacetime less susceptible to assumptions of
translational symmetry. It's now a simple matter, in principle, to sum the intervals for distance and time
to find average velocities over larger distances. How signals and outcomes are transported through extended
space, and at what rate (for each observer), is the question that enables the building of systems
from components on a variety of scales. Although the issues are quite universal, we confront them in
different ways at different scales. Propagation involves not only promising and accepting but the cooperation
of individual agents, and a certain homogeneity in their local assessments. Only then can we
form channels for information to pass along reliably and maintain an illusion of homogeneous 'order'.
10A priori, it may appear difficult to offer a precise reason for a concept as generic as the effect of mass, across all scales
and processes. One way to imagine the reason for a cost associated with acceleration is that a change of channel utilization
involves clearing room for a larger share of processing—a process overhead. We might suppose that agents are busy with
some kind of ground state ‘process idling’ (something like vacuum fluctuations in physics, where locations are always on active
standby), and that pushing aside this idling to schedule other processes requires a process cost of its own, analogous to a context
switching overhead in computing. This overhead is easily identifiable in macroscopic systems, and represents an interesting
speculation for quantum spacetime. But then why doesn’t the cost of increasing process share have to be applied at every point
along the trajectory? The information about reservation must pass along the entire process’s guiderail, which suggests that
the cost of force concerns not only the cost of editing the message, but of redistributing local capacity reservations along the
trajectory on a larger scale. This is easy to imagine for a computer program, or for an RNA strand. For a quantum spacetime
process it would have to be related to a change of the system-wide wavefunction (Einstein’s spooky action at a distance). Such
action at a distance occurs here due to separation of allocation from execution by scale. The assumption that mass is 'carried
with' a process as a property (immortalized as momentum $mv$) also suggests that it would be encoded in the message length $|q|$
somehow. However, in the Einsteinian view of mass as a geometry of spacetime, mass seems to be baked into the environment
rather than the trajectory. Einstein's spacetime geometry is itself a process, not a static geometry of space alone—spacetime is
exactly what we mean by a process.
There are two kinds of process. In structureless Euclidean space, the difference between them can only
be encoded through potential functions. In an agent spacetime, the inhomogeneous discrete nature of
agents means we have to confront these carefully:
Memoryless (Markov) processes:
This is the usual case in physics, where motion is apparently 'memoryless' because spacetime is
structureless and the memory used has to be hidden as an implicit extra—potentials, coordinates, etc11.
At each Markov transition, the direction is random. This is stochastic motion.
Memory (stigmergic) processes:
Ballistic motion of an elastic body is a memory process in which the momentum remembers direction
and rate. In a memory process, some kind of pre-calculated guiderail is needed to keep
a memory of the process that initiated it, and that shape guides the virtual trajectory $q(t)$. All
non-Brownian motion in fact relies on memory of this kind; when the spatial size of a promised
outcome $q(\tau)$ is large compared to the spatial size of inhomogeneities, a guiderail becomes ineffective
and $q(\tau)$ follows a random path. Cohesion of a large body, on the other hand, tends to
keep motion uniform by spontaneous polarization, so the path appears to follow a straight line. We
return to this in section 6.
There are two corresponding issues:
Consistent direction and its functional semantics.
Consistent transport rate, i.e. velocity, and for whom.
Example 7 (Phase space) Descriptions of systems in terms of microstates, with abstract flows, have a
long history. In physics, we are used to describing behaviours in terms of variables that correspond to
motion using continuum ‘shadow variables’ like energy, potential, etc instead of distances and times.
This is reflected in non-local descriptions based on Fourier transforms, etc. We take it for granted that
changes of state are changes of position—i.e. configuration space is a part of state space. However, classical
motion needs two variables to fully describe state: position and momentum. Momentum remembers
total energy (through mass) and rate of change (velocity). Phase space is the memory of the system. Only
later when interior states (thermodynamics and quantum numbers) were described was there a need to
separate interior and exterior degrees of freedom. Thus, we are conditioned to think of momentum
as a key variable—as witnessed by Hamiltonian equations of motion—and we take for granted the connection
between the energy of a system and the momentum of its constituents, expressed by Hamilton's
equations.
Example 8 (Semantics of open and closed systems) The technical semantics of an agent model turn
the picture of interior and exterior upside down with respect to normal differential descriptions. If we
compare a bulk continuum flow picture with the interior-exterior split, it's easy to see that 'externality'
can be inside a finite region, but has to be 'outside' an infinitesimal point. An open system is usually one
connected to an exterior reservoir of some kind. The reservoir affects the system globally. In an agent
model, the resources of an agent are unexplained and thus 'open', but purely local. Consider the total
time derivative:
$$\frac{d}{dt} = \partial_t + v_i \partial_i, \tag{48}$$
by the chain rule. We would typically frame this as interior change to the positions of the fluid, plus an
exterior change to a system (e.g. thermodynamic bulk temperature change), but in the opposite manner
11Memoryless systems are not really memoryless, as they must have states in order to respond to transitions—so we should
really think of them as minimal memory states.
to an agent system. For example, consider a temperature field $T(x, t)$ in a river. The partial derivative
with respect to time (at constant position) describes warming, say by the sun—an external input—while the
velocity-dependent term describes change by flow of the water, i.e. the temperature moving downstream.
In an agent view, this is upside down: partial changes come from inside the agents, not outside a system
boundary, and flow terms are exterior to the agents. Thus, the semantics of an agent system seem upside
down. This affects the way we calculate quantities we consider to be primary.
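The river example can be sketched numerically. In the following minimal upwind-advection loop, the grid size, flow speed, and initial warm patch are all illustrative assumptions; the point is that the flow term $-v\,\partial_x T$ of eq. (48) alone carries the temperature feature downstream, so $\partial T/\partial t$ at a fixed position is nonzero even though each fluid parcel keeps its own temperature:

```python
import numpy as np

# Upwind advection of a warm patch T(x, t) at flow speed v. Only the
# flow term -v dT/dx of eq. (48) acts; there is no interior heating.
# Grid and initial condition are illustrative assumptions.
nx, dx, dt, v = 400, 1.0, 0.25, 1.0
x = np.arange(nx) * dx
T = np.exp(-((x - 100.0) / 10.0) ** 2)       # warm patch centred at x = 100

for _ in range(400):                          # evolve to t = 100
    T = T - (v * dt / dx) * (T - np.roll(T, 1))

print(x[np.argmax(T)])                        # peak has drifted to x ~ 200
```

The simple first-order upwind scheme is chosen for stability; it slightly broadens the patch but transports its peak at the flow speed $v$.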
4 Brownian motion
The fundamental kind of motion—viewed from the bottom up—is stochastic motion. Brownian motion
has no preferred orientation, regularity, nor bias. Any conservation laws are true only on average. That
observation suggests that conservation laws are not fundamental to a discrete view of spacetime processes,
but only emerge as effective accounting balances over ensembles. The start of a trajectory is an
arbitrary point: this is effectively a boundary condition which is directionless in spacetime. Completing
the transition promises begun in (39):
$$\pi^{(+)}_q: A_I \xrightarrow{+\Delta q\,|\,q} A_{I+1} \qquad \text{(initial)} \tag{49}$$
Thence, each agent will process the conditional sequence:
$$\pi^{(-)}_q: A_{i+1} \xrightarrow{-\Delta q\,|\,q} A_i \tag{50}$$
$$\pi^{(+)}_q: A_{i+1} \xrightarrow{+q\,|\,q} * \tag{51}$$
$$\pi^{(+)}_q: A_{i+1} \xrightarrow{+\Delta q\,|\,q} A_{i+2} \qquad \text{(continue)} \tag{52}$$
where the last line repeats the promise to pass on the relay baton, but without any clear direction. These
promises sustain a causal continuity of motion along a random path, without direction or constant velocity.
As long as there is a 'next agent' in the chain $i \to i+1$, making identical promises, this will
continue. As a final boundary condition, we might assume that there is a destination, defined by a final
promise (50):
$$\pi^{(-)}_q: A_i \xrightarrow{-\Delta q\,|\,q} A_F \qquad \text{(final)} \tag{53}$$
which does not lead to a new transition (52). Such a state would be called an absorbing state in graph
theory. In a random walk, this kind of singular point would presumably be justified on the basis of other
special promises made by that particular agent. The inhomogeneities of semantic spacetime are thus
ultimately responsible for the starting and stopping of motion.
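The promise chain (49)-(53) behaves like a simple random walk with absorbing endpoints. A minimal sketch (the agent count, start position, and random seed are arbitrary choices for illustration) shows motion continuing until an absorbing agent is reached:

```python
import random

# Brownian relay of promises (49)-(53): the marker Delta q hops to a
# random neighbour (promise (52) has no preferred direction) until it
# reaches an absorbing agent A_F, which makes no onward promise.
def brownian_relay(start=5, absorbing=(0, 10), seed=1):
    random.seed(seed)
    i, steps = start, 0
    while i not in absorbing:
        i += random.choice((-1, +1))   # unbiased next-hop choice
        steps += 1
    return i, steps

final, steps = brownian_relay()
print(final, steps)   # walk ends at one of the absorbing agents
```

The walk terminates only because a special agent promises no onward transition, mirroring the claim that inhomogeneities, not dynamics, start and stop the motion.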
The question that jumps out from this simple model is: how does a transition process know where
it's going? Why should spacetime agents be pre-programmed to keep promises in this fashion to
begin with? How did they reach this promise configuration? In technology this is by design; in biology
it's a result of evolution. Newtonian physics doesn't answer this question. In agent spacetime, the
instantaneous velocity of each transition will depend on the pre-allocation of resources at each agent.
This is what needs explanation. Continuity of direction implies a kind of long range order amongst the
agents, leading to semantics of directional behaviour. Thus, in order for elementary agents' velocity (and
thus momentum) to be homogeneous and continuous, a pre-allocation of local resources has to be assumed
possible. This leads us in section 6.8 to the separation of virtual motion into two processes.
4.1 Representation of transition chains by generating functionals
Assuming that we solve the conundrum of destination, and can allocate some route, either locally to the
next hop or all the way to the final boundary condition like a scattering problem, then we must be able
to compose transitions from atomic jumps to larger superagent regions. Thus we're led to consider a
transition function formalism, as in quantum theory [34]:
$$\langle\langle \text{final} \,|\, \text{initial} \rangle\rangle \tag{54}$$
The time-series correlated by promises between input and output processes [35] is what generates a
scattering formalism based on a promised interior process:
$$\langle\langle \text{output} \,|\, \Pi_{\text{process}} \,|\, \text{input} \rangle\rangle \tag{55}$$
The nature of the input and output states is not implied here. They may be of different sizes (see figure
15). To compute this transition, whatever its interpretation, we need to know the effective channel
capacity $C(i, j)$ between the initial and final agents, which may be a scaled capacity for a cross
section over several parallel agent paths, like a current (see section 6.3).
Example 9 (Network capacity) In fixed and cellular networks the effective transition rate capability is
enshrined as a Quality of Service (QoS) promise [36, 37]. How the total capacity across all parallel
channels is composed and shared affects the experience of transmission velocity by individual users.
Figure 7: Composed trajectories (whether random or ordered) have two main features: i) conditional 'path integrals', and ii) absorbing states, i.e. convergent fixed points (stable subgraphs) [38].
A solution for the trajectory of some promised message $M$, or property $q$, which takes into account
the coordination of finite resources across a region of spacetime, suggests a separation of concerns involving
the allocation of resources and then transport along the transition chain. We can easily create a
method for composing paths. The Feynman path integral and the Schwinger action principle are the
standard means for quantum theory, but their relation to quantum theory is spurious—they are generic
methods for composition of graphs. Suppose we imagine the state of a region of spacetime by composition
of many local agents:
$$\psi = \psi_1 \psi_2 \ldots \tag{56}$$
There are two main kinds of state transitions (shown in figure 7). The usual kind of trajectory, in figure 7
a), is the classic ballistic path followed by a linear conditional process, or causal set path. This is a path
through state space $q$, but with conditional promises over a scaled set (in which agent promises become
the states in a superagent), this can also be a path through a set of agents (i.e. a spacetime path).
$$\psi_{12}\,|q_1\rangle\rangle = |q_2\rangle\rangle \tag{57}$$
A second class of states is absorbing: the so-called stable subgraph or convergent states that are fixed
points $q_p$ of a class of convergent transitions:
$$\psi_{np}\,|q_n\rangle\rangle = |q_p\rangle\rangle \tag{58}$$
$$\psi_{np}\,|q_p\rangle\rangle = |q_p\rangle\rangle. \tag{59}$$
Any process trajectory is an ordered composition of transition operators, mapping from an initial
state configuration to a final state. If we use the useful Dirac bra-ket shorthand notation for states,
$\psi \mapsto |\psi\rangle\rangle$, then a spacelike trajectory has the form:
$$|\psi_{\text{out}}\rangle\rangle = \Delta\psi(q_n) \ldots \Delta\psi(q_1)\,|\psi_{\text{in}}\rangle\rangle \tag{60}$$
i.e. the path ordered composition of changes from agent location to agent location. A timelike trajectory
takes the form:
$$|\psi_{\text{out}}\rangle\rangle = \Delta\psi_n(q) \ldots \Delta\psi_1(q)\,|\psi_{\text{in}}\rangle\rangle \tag{61}$$
i.e. a sequence of changes on the same set of states. The patterns in (57) and (58) represent the main
cases for retarded and advanced propagation respectively [39]. The body of a promise might consist of
a detailed path or merely a desired end state, and what the receiver accepts might be only a subset of
this offer. This makes a promise theoretic interaction different from what is conventionally assumed in
physics, where conservation laws insist on the equality of offer and acceptance during transactions12.
It's surely worth a brief mention that, if one assumes a description based on quasi-infinitesimal
changes (say, by arguing for sufficiently large-scale statistical coarse-graining as we do in physics), then
there will be conservation of accounting measure within each agent, allowing transitions to be formulated
in the usual path integral representation of a partition-transition function at each location along time-like
trajectories, with interior conservation along the path:
$$\Delta M |q\rangle\rangle = \Delta q |q\rangle\rangle = \Delta A_i |q\rangle\rangle \;\mapsto\; \Delta\psi(q) \simeq \xi_\pi T_{\delta\pi}, \tag{62}$$
where the matrix $T_{\delta\pi}$ is the generator of a transition for a step $\delta\pi$ towards keeping the promise $\pi$. There
is no basis for making this assumption along spacelike trajectories. So, by the usual technique of exponentiation,
$1 + \xi_\pi T_{\delta\pi} \to \exp(\xi_\pi T_{\delta\pi})$ for infinitesimal $\xi_\pi$:
$$\Delta\psi\,|\psi\rangle\rangle = e^{\xi_\pi T_{\delta\pi}}\,|\psi\rangle\rangle, \tag{63}$$
analogous to the generators of a canonical group [34], and adding a boundary condition constraint on
the allowed paths from the current state, the path count, which approximates a density of collective path
states, is
$$\ln Z \sim \sum_\pi e^{\xi_\pi T_{\delta\pi}} + G_\pi, \tag{64}$$
for the unspecified generator $G_\pi$ of the initial conditions and subsequent accounting constraints. So
transition function 'amplitudes' can be obtained in the analogous way as for statistical mechanics or
quantum theory, by an effective action of the Boltzmann-Shannon entropy partition form along the paths:
$$\frac{\delta \langle\langle \psi_{\text{out}} | \psi_{\text{in}} \rangle\rangle}{\langle\langle \psi_{\text{out}} | \psi_{\text{in}} \rangle\rangle} \sim \delta\left(\ln \sum_\pi e^{\xi_\pi T_{\delta\pi}} + G_\pi\right). \tag{65}$$
This amounts to applying ordered sequences of transition matrix operators $T_{\delta\pi}$ to the composition of
microtransitions that keep promises statistically. The expression (65) is known as the path integral, and
is often used to explain the emergence of classical paths from Brownian quantum motion based on the
interference of phases generated by Hamiltonian energy conservation. It can be justified by several phase
space analogous means, e.g. Feynman's multipath argument [40], or the Schwinger Action Principle
[34]. This is consistent with the view that virtual motion naturally separates into guiderail and guided
walk—as it acts as a generator for that exact process, leaving the distribution of $\psi$ implicit in boundary
conditions and an equilibration (relaxation) assumption implicit in the wave modes used to calculate it.
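The exponentiation step leading to (63) can be checked numerically: composing many small steps $(1 + \xi T)$ converges to the exponentiated generator $\exp(\xi_{\text{total}} T)$. The $2\times 2$ generator below is an arbitrary illustrative choice, not one prescribed by the text:

```python
import numpy as np

# Composing n infinitesimal transition steps (1 + xi*T) versus the
# exponentiated generator exp(xi_total * T), per the step before eq. (63).
T = np.array([[-1.0, 1.0],
              [ 1.0, -1.0]])        # illustrative symmetric generator
n, xi_total = 1000, 0.5
xi = xi_total / n

composed = np.linalg.matrix_power(np.eye(2) + xi * T, n)

w, V = np.linalg.eigh(xi_total * T)  # exact exponential by diagonalization
exact = V @ np.diag(np.exp(w)) @ V.T

print(np.max(np.abs(composed - exact)))   # O(1/n) discrepancy
```

The discrepancy shrinks as $1/n$, which is the usual justification for replacing products of microtransitions by an exponential generator.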
12The accounting status of what is promised is not defined a priori. Whether one should assume conservation of ‘token’
exchange is an open question for a particular formulation.
Example 10 (Phase space) The quality that distinguishes quantum mechanics from particle diffusion is
parallelism, which could be viewed as the process that determines the guiderail function $\psi(x)$. For a coherent
body, the centre of mass can only take a single path, by assumption. However, for processes that can
be parallelized, different parts of the coherent process can take different paths, as long as they are recombined
somewhere along the way to their destination. This is why constructive interference predicts successful
outcomes (promises of outcome, conditional on all dependencies):
$$R_j \xrightarrow{-q\,|\,q_i,\,i} A_i \tag{66}$$
$$R_j \xrightarrow{-q_i} A_i \tag{67}$$
In the continuum limit, there are more options: Hamilton's equations, Liouville's equation, the action
principle, etc. are alternative generating strategies:
$$\frac{\partial \langle\langle A \rangle\rangle}{\partial t} = \Big\langle\Big\langle \{A, H\}_{PQ} \Big\rangle\Big\rangle, \tag{68}$$
where the bracket $\{A, B\}_{PQ}$ is defined according to the algebra of the case. Phase space formulations
are important for normalizability of transport process descriptions, as long as we try to go through the
stage of expressing probabilities. The need to dwell on phase space methods is less obvious for a purely
transition system.
Example 11 (Cloud datacentre job spillover, effective dipole moment) Single transitions between different
agents can also take the form of transitions in bound states (like atomic energy levels). The analogy
of Einstein's stimulated emission phenomenon in a laser is the case of job migration from one datacentre
(degenerate energy level) to another.
Let $A_1$ and $A_2$ be superagents representing datacentres with interior capacities $C_1$ and $C_2$ respectively.
Suppose that process jobs $J$ normally run in $A_1$, because $A_1$ is closer to the source of customer
traffic and has lower latency. A transfer of job traffic from one to the other can be expressed as a rate difference
(analogous to a photon):
$$\Delta J/\tau = C_2 - C_1, \tag{69}$$
on dimensional grounds. Similarly, on dimensional grounds, we can follow Einstein's argument for the
transitions. The rate of jobs moving from $A_1$ to $A_2$ is proportional to the local job traffic level $\rho(J)$:
$$R_{1\to2} = B_{12}\,\rho(J). \tag{70}$$
The rate in the opposite direction is similarly proportional to the traffic level (jobs could go in either
direction between cooperating datacentres). However, there is also a rate at which jobs are moved from
$A_2$ to $A_1$ at random, independently of the traffic level. This is analogous to the spontaneous emission
rate. It could be due to failures or noise due to contention with other processes at the secondary site.
$$R_{2\to1} = A_{21} + B_{21}\,\rho(J). \tag{71}$$
At 'memoryless' equilibrium (where memoryless refers to the scale of the superagent), one may therefore
write a detailed balance relation for the number of jobs $n_i$ at each site $i = 1, 2$:
$$n_1\left[B_{12}\,\rho(J)\right] = n_2\left[A_{21} + B_{21}\,\rho(J)\right] \tag{72}$$
whence the traffic density must have the equilibrium form:
$$\rho(J) = \frac{A_{21}/B_{21}}{\left[\dfrac{n_1}{n_2}\dfrac{B_{12}}{B_{21}} - 1\right]} \tag{73}$$
At maximum entropy, we expect a Boltzmann distribution of states for long co-times $t(1, 2)$ for the combined
system:
$$\frac{n_2}{n_1} = \frac{g_2}{g_1}\, e^{(C_1 - C_2)/C}, \tag{74}$$
where $g_i$ are the 'degeneracies' of each datacentre, i.e. the number of potentially available redundant
host job slots, and $C$ is a dimensional scale that corresponds to the latency cost of a transition.
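The detailed balance relation (72) and the closed form (73) can be cross-checked in a few lines. The rate constants and job counts below are purely hypothetical values chosen for illustration:

```python
# Detailed balance check for the datacentre spillover example, eqs. (72)-(73).
# The rate constants and job counts below are hypothetical.
A21, B12, B21 = 0.2, 1.0, 0.5
n1, n2 = 300.0, 100.0

rho = (A21 / B21) / (n1 / n2 * B12 / B21 - 1.0)   # eq. (73)

lhs = n1 * B12 * rho                # stimulated transfer rate out of A_1
rhs = n2 * (A21 + B21 * rho)        # spontaneous + stimulated return rate
print(rho, lhs, rhs)                # lhs == rhs at equilibrium
```

Substituting the equilibrium density back into both sides of (72) confirms that the stationary traffic level balances the stimulated outflow against the spontaneous-plus-stimulated return flow.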
The fact that $A_{12} = 0$ implies there is a preferred or default direction to the transitions, which selects
a causal direction. In the spontaneous emission case, this corresponds to there being no spontaneous
absorption (which is a causal symmetry broken by presumed circumstances and might not be true in
general). There is thus a symmetry breaking that corresponds to the 'dipole' moment formed by the
superagent $A_1 A_2$, whose direction is determined by the promise to offload traffic in a preferred direction.
If we compare the traffic density to that predicted by the queueing theory of an $M/M/1$ memoryless
queue [29, 41], then the average number of transitions waiting in a queue is
$$\langle\langle n \rangle\rangle = \frac{\rho}{\rho - 1} \tag{75}$$
where $\rho = \mu/\lambda$ is the completion rate, for arrival rate $\lambda$ and service rate $\mu$, and the fraction of those
that complete without rejection is the average capacity or service availability:
$$A \sim \langle\langle n \rangle\rangle / f \simeq \frac{\rho/f}{\rho - 1}. \tag{76}$$
So the deliberate promotion of a job corresponds to a stimulated transition, and the spontaneous decay
corresponds to a rejection rate, possibly due to insufficient capacity or a forbidden onward transition,
etc. (see discussion in section 6.8). This shows that the dipole moment can be interpreted simply as a
virtual queue for transitions, whose direction implies a gradient. The semantics of this relationship are
described by promises:
$$* \xrightarrow{+J} A_i \qquad \text{Incoming jobs} \tag{77}$$
$$A_i \xrightarrow{-J} * \qquad \text{Stimulated absorb jobs} \tag{78}$$
$$A_i \xrightarrow{+C_i} * \qquad \text{Interior capacity} \tag{79}$$
$$A_1 \xrightarrow{+\Delta J\,|\,J,\,C_1} A_2 \qquad \text{Stimulated transfer jobs} \tag{80}$$
$$A_2 \xrightarrow{-\Delta J_2} A_1 \qquad \text{Accept at a rate } \Delta J_2 \leq \Delta J_1 \tag{81}$$
$$A_2 \xrightarrow{+\Delta J\,|\,J,\,C_2} A_1 \tag{82}$$
$$A_1 \xrightarrow{-\Delta J_1} A_2 \tag{83}$$
The identification of a gradient with a preferred process direction, even in this example of stochastic two-agent
transitions, turns out to be a persistent feature of directional motion, especially when composed
from multiple transitions. Although we often ignore the origin of such a gradient, local or global, and
bake its presence into coordinate directions and unexplained memory properties of momenta, the need to
explain this for virtual motion becomes critical in order to understand coherent motion.
5 Coherent directional motion (translation)
Motion in a straight line, in the usual sense of Newtonian momentum, requires functional semantics that
are not explicit in the picture so far. Without boundary conditions to break the spacetime reversal symmetry,
there can be no motion other than unbiased Brownian motion. So, to go beyond random behaviour
to coherent directed transport, we need to break the symmetry by fixing both direction and uniform
transition channel availability.
The full causal sequence of promise-keeping events for translation to continue indefinitely requires
that each agent promise not only to accept handovers from foregoing agents in a chain, but also that it
will make the identical offer to its neighbours. It's natural for this instruction to be passed along as part of
the promised information, interpreted as $+q$, conditional on some guiderail condition.
Example 12 (Guided bioinformatic gradient signalling) This is visibly the case for DNA, for example,
where stem cells are switched epigenetically by transmitted growth factors in gradients [42] that allow
different promises to propagate to different locations in a highly complex network of cells, whose topology
is determined by a network of signalling interactions.
5.1 Chains with long range order for continuity
Suppose that a collection of agents effectively (and emergently) promises to form a chain, or even a
coordinate lattice (figure 8), by each promising its role of being next to a neighbour. Now, we need basic
distinguishability of agents by their neighbours, in order to specify their precedence:
$$A_1 \xrightarrow{X_1} A_2 \xrightarrow{X_2|X_1} A_3 \ldots \tag{84}$$
This is a local ordering, i.e. a polarization field on space. It can be implemented as a memory process,
in which agents have names and addresses (like the index labels), or it can be formed by local interactions as a
Markov chain, analogous to an anti-ferromagnetic state in physics. Filling out the promises in full [1],
$$A_1 \xrightarrow{+X_1,\;-X_1} A_2 \xrightarrow{+X_2|X_1,\;-X_1,\;-X_2} A_3 \xrightarrow{+X_3|X_2,\;-X_2,\;-X_3} A_4 \ldots A_n \tag{85}$$
This leads to serial order, which can be symmetrized away [21]. Notice the apparently acausal feed-forward
promises at each step that complete the logical semantics 'I have made a conditional promise, and I
promise that I have the condition in hand'.
The notation in (85) suggests that every agent needs to know a unique name for its nearest neighbours.
That may not be strictly true, since in a Markov process it suffices to know that one has a predecessor
in order to build a ladder, through a form of Long Range Order [43, 44]. However, if we are interested
in maintaining a causal order relation, in which each agent knows in which direction a certain vector
continues, that information is not clear without labels that go beyond a simple predecessor.
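The predecessor-only ordering can be sketched concretely: given only local 'I follow X' promises, a global serial order is recoverable by walking the links, a minimal form of long range order. The agent names below are illustrative:

```python
# Recovering global serial order from purely local predecessor promises,
# in the spirit of (84)-(85). Agent names are illustrative.
predecessor = {"A2": "A1", "A3": "A2", "A4": "A3"}

def recover_order(links):
    """Invert the local links and walk forward from the unique head."""
    successor = {pred: agent for agent, pred in links.items()}
    head = (set(links.values()) - set(links.keys())).pop()
    chain, node = [head], head
    while node in successor:
        node = successor[node]
        chain.append(node)
    return chain

print(recover_order(predecessor))   # ['A1', 'A2', 'A3', 'A4']
```

Each agent promises only its immediate predecessor, yet a privileged observer who collects all the local promises can reconstruct the whole chain; no agent holds the global order itself.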
Again, there is no reversibility yet. For that, we would need an independent chain of promises in the
opposite direction:
$$A_1 \xleftarrow{X_2|X_3} A_2 \xleftarrow{X_3|X_4} A_3 \ldots \tag{86}$$
The property of reversibility is not automatic, if one takes locality as an intrinsic starting point—it has
to be promised independently by some or all agents. Why we should observe symmetry between left
and right, forwards and backwards, up and down, is unclear. We've come to assume that symmetry is an
absence of information, but if we treat all agents as a coordinate lattice, that is not the case. Indeed, the
opposite is true. There is long range order.
From equation (85), we see that each agent has to be 'aware' of its neighbours by continuous
interaction—like swarm dynamics [45–47]—and this is reflected in the Borrill model [11] for network
communication. Dynamically, this means having a Nyquist sampling loop to observe and maintain state
updates, in order to form a cooperative lattice—because agents are assumed to be a priori autonomous,
or causally independent. The loop is a minimal requirement for communication between local agents.
Normally in mathematics, we impose the ordering of spacetime points as a topological requirement. A
strongly local view, as taken in Promise Theory, offers no reason to assume this. The 'awareness' of
agents by one another implies a looping process of observation13; the most primitive example could be
an oscillation—or what physicists might model as a spring.
13This sampling loop, in the Nyquist sense, was popularized as ‘cognitive agent’ model in [48].
Figure 8: A number of agents $A_i$ promise that they are greater than their neighbours and less than the preceding
one. Their gradient promises all have only local significance, as if a vanishing Newtonian limit were taken—all
agents could promise the same, from their own perspective, in opposition to one another. For an external observer,
there is nothing in the information supplied by the agents that suggests their relative order, unless the observer
accepts such information from the remote agents on trust. Otherwise it has to trust its own ability to discriminate
messages by sensory criteria (e.g. the angle of incidence of the signal on its boundary), which assumes macroscopic
interior structure on the observer—suggesting that the size of the observer, and its relation to promise channels,
play a role in the ability to assess order. The labels on the $A_i$ are for convenience. Nothing should be implied by
their order. Indeed, this exemplifies a common problem we have in assuming our Newtonian abilities to label and
order by virtue of unlimited powers of access and observability.
5.2 Causal structure of linear motion
Setting aside the counting of changes for a moment, we now have enough to express the causal structure
for uniform motion along a guiderail in terms of promise interaction semantics:
Source, emission, initial condition or potential $p_A$.
$$\pi^{(+)}_q: A_I \xrightarrow{+q} * \tag{87}$$
$$\pi^{(+)}_q: A_I \xrightarrow{+\Delta q\,|\,q} A_{I+1} \qquad \text{(initial)} \tag{88}$$
Resource allocation equilibration. The agents establish a channel, out of band, as part of a wider
resource equilibration process $\psi$:
$$A_i \xrightarrow{+C_i\,|\,\psi} A_j \tag{89}$$
$$A_j \xrightarrow{-C_i} A_i \tag{90}$$
$$A_j \xrightarrow{+C_j\,|\,\psi} A_i \tag{91}$$
$$A_i \xrightarrow{-C_j} A_j \tag{92}$$
$$A_i, A_j \xrightarrow{\pm C(i,j)\,|\,C_i, C_j} A_j, A_i \tag{93}$$
where the process $\psi$ is a collective interior promise by the combined superagent, dependent on the
individual agent capacities $C_i(A_i)$:
$$\{A_i\} \xrightarrow{\pm\psi\,|\,\{C_i\}} \{A_i\}, \tag{94}$$
where $\psi$ has the form of a distribution over the agents:
$$\psi(A_i) = \langle\langle A_i \,|\, \psi \rangle\rangle. \tag{95}$$
Equation of motion, propagator.
Availability of processing slots or 'absorption acceptance states' is conditional on the existence of
receptors for a type of promise exchange, as well as on channel capacity. The channel capacity of
a link between two agents $A_i$ and $A_j$ is a function of both, $C(i, j)$, due to locality (agent independence).
So, in order to quantify the propensity or availability for transitions, we can acknowledge
that the transition is in fact conditional not only on acceptance but on capacity.
The continuous leapfrog waveform that propagates $+q$ from $A_I$, one step at a time along a path of
agents ordered as in section 5.1, takes the form:
$$\Delta^{\text{ret}}_q(i, i+1) \equiv \left\{ \pi^{(+)}_q: A_i \xrightarrow{+\Delta q\,|\,q,\,C_q(i,i+1)} A_{i+1}, \quad \pi^{(-)}_q: A_{i+1} \xrightarrow{-\Delta q\,|\,q,\,C_q(i+1,i)} A_i \right\}, \tag{96}$$
where $\Delta^{\text{ret}}_q(i, i+1)$ is the retarded propagator (see appendix D). Note that, unlike the Brownian
case, this now depends on the local channel capacity and guiderail $C_q(i, j)$. Then the pattern
repeats for $i \to i+1$.
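A toy version of the leapfrog relay (96) makes the capacity dependence concrete: each hop's duration is set by the local channel capacity $C(i, i+1)$, so a single congested link bounds the end-to-end rate. The capacities and message length below are illustrative assumptions:

```python
# Leapfrog relay along an ordered chain, per eq. (96): the time to pass
# the marker q across each link is q_len / C(i, i+1). All values are
# illustrative, not taken from the text.
def propagate(capacities, q_len=8.0):
    """Total transit time over the chain of links."""
    t = 0.0
    for C in capacities:
        t += q_len / C          # lower capacity -> slower virtual motion
    return t

hops = [4.0, 4.0, 1.0, 4.0]     # one congested link in the middle
print(propagate(hops))          # 2.0 + 2.0 + 8.0 + 2.0 = 14.0
```

The congested link contributes more than half of the total transit time, illustrating why the instantaneous velocity is a local property of each transition rather than of the trajectory as a whole.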
If the velocity utilization $\beta$ is encoded self-contained in the virtual body, like a momentum, then the
channel reservation 'request' is information passed along in the message, so the channel capacity
is a function of the information received in the message $q$:
$$\pi^{(+)}_{C^+}: A_i \xrightarrow{+C(i,i+1)\,|\,q} A_{i+1} \tag{97}$$
On the other hand, if the motion is like a wave, which depends only on the medium (or field), then
the velocity is programmed into (a function of) spacetime, preallocated by a reservation process
spanning all points $\psi(A_i)$, which is an effective invariant of the motion, and no other information
needs to be carried with the process:
$$\pi^{(+)}_{C^+}: A_i \xrightarrow{+C(i,i+1)\,|\,\psi(A_i)} A_{i+1} \tag{98}$$
And promises for directional continuity path reservation ...but this can’t always be promised!
We assume that a spacetime channel has been identified, per the non-local process in (85). Without
this guiderail, motion can only be Brownian.
Sink, absorption, final condition
To end a process, there has to be explicit absorption:

\pi^{(-)}_q:\; A_i \xrightarrow{\,-\Delta q\,|\,q\,} A_F. (99)
In all such boundary conditions, conservation of momentum, etc., is violated by the unnatural termination condition. This is the price of bounding a problem in spacetime, and is of no significance; we simply ignore it.
5.3 Average rate and velocity
The ability to count microscopic point transitions from agent to agent exceeds experimental capability in most cases. It’s more common to measure average velocity over an assumed series of transitions.
Moreover, when we talk about flows, we also imagine velocity fields (dispersion), e.g. in laminar flow
through a pipe, which reflects boundary conditions along a guide. These are two very different scenarios
for a state-based model of virtual motion.
Single agent motion (longitudinal average): summed over a serial path length, in a single di-
rection (velocity) or in any direction (speed). This applies to the rate of transport in data streams,
wavefronts, percolation, the transport of data in a pipeline, logistics supply chains, etc.
Suppose the adjacency matrix for the network is \Pi_{ij}(\psi), based on the allocations summarized in \psi. If we apply this to a vector with a single non-zero element, an agent in the boundary set A_i, successive applications of the matrix yield the downstream trajectory locations:

(\Pi_{ij}(\psi))^n\,|q(0)\rangle\rangle \longmapsto |q(n)\rangle\rangle. (100)
The length of this path is Nd.
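As an illustration of (100), the repeated action of the adjacency matrix on a one-hot occupancy vector can be sketched in a few lines of Python. The 4-agent serial chain and the matrix entries below are invented purely for the example:

```python
# Sketch (illustrative, not the paper's code): a permutation-like adjacency
# matrix Pi acts on a one-hot occupancy vector |q(0)>>, and repeated
# application yields the downstream trajectory locations, as in (100).

def apply_transition(Pi, q):
    """One application of the adjacency matrix Pi to occupancy vector q."""
    n = len(q)
    return [sum(Pi[i][j] * q[j] for j in range(n)) for i in range(n)]

# A 4-agent serial chain: agent i forwards the promise to agent i+1.
Pi = [[0, 0, 0, 0],
      [1, 0, 0, 0],
      [0, 1, 0, 0],
      [0, 0, 1, 0]]

q = [1, 0, 0, 0]           # promise +q starts at the boundary agent
trajectory = [q.index(1)]
for _ in range(3):
    q = apply_transition(Pi, q)
    trajectory.append(q.index(1))

print(trajectory)          # agent indices visited in causal order
```

Each application moves the non-zero occupancy one hop downstream, tracing out the serial path.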
The average velocity along this single-agent strand of the motion can be denoted v_q(A_i). It tends to a limit in terms of the exterior time from start to finish. For a single promise q, carried by a single agent at a time, the average transition velocity over a serial path of L adjacent agents reduces to the simple Newtonian form:

v_q(A_i) = \frac{L\,d}{\sum_{i=1}^{L}\tau_i} = \frac{d}{\bar\tau} \le c_q, (101)
as we’d expect in rigid coordinates—and we assume that exterior time τis the same for all
observers.
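A minimal numerical sketch of (101), with invented hop length and transition times:

```python
# Sketch of the longitudinal average in (101): velocity of a single promise q
# over a serial path of L adjacent agents, each a hop of length d taking
# exterior time tau_i. All numbers here are illustrative.

d = 1.0                          # hop length between adjacent agents
taus = [0.5, 0.7, 0.6, 0.8]      # exterior transition times tau_i (invented)
L = len(taus)
c_q = d / min(taus)              # the maximum local speed on the path

v_q = (L * d) / sum(taus)        # equation (101): v_q = L d / sum(tau_i)

assert v_q <= c_q                # the average cannot exceed the local maximum
print(round(v_q, 4))
```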
This expression assumes serial exterior timesteps, and no scattering or interference, so it will not hold for all superagent process scalings, e.g. those in which a coherent ‘body’, ‘string’, or cluster of locations, all promising some property in a particular order, is transformed. One can then imagine a situation, as in traffic, where parts of the string bunch up in a queue due to bottlenecks, while ahead of the bottleneck the string is dispersed. Coherent motion of a string introduces necessary ideas such as compressibility (see section 5.4).
\tau(\text{out},\text{in}) = \sum_{i=1}^{N} \tau_i = T_{\rm out} - T_{\rm in}. (102)
Superagent bulk motion (transverse average): a hydrodynamic flow, in which parallel transitions move together, is difficult to describe without the scaffolding of exterior coordinates, as there is no natural way to define simultaneity without continuum constructions. In virtual motion of the third kind, agents do not pass a coordinate finishing line that can be used to conveniently measure their velocity. Instead, we can define a transverse slice of agents S = \{A_i\}, over which we form an ensemble for averaging. The average velocity is now:

v_{\rm group} = \frac{|S_i|\,d}{\sum_{j=1}^{|S_i|} \tau_{j\in S}} \le c_q. (103)
The challenge with this measure is that, as processes converge or diverge along the path, the size of the set S_i at nominal time \tau_i may not be constant. The velocity dispersion over the slice also creates a challenge for continuing this notion of flow: the superagent body is necessarily compressible, since different parts of the slice will ‘arrive’ at their successors after different times, assuming exterior time can be measured with high enough resolution to tell the difference.
This approach is the way one calculates the rate of mixing, or the velocity of money in economics, for example—an averaged Brownian process. It also applies to the effective ‘centre of mass’ position of bulk bodies moving coherently as a single body, but aligned longitudinally in the direction of virtual motion.
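A matching sketch of the transverse ensemble average (103), again with invented numbers; the spread of per-strand times is the dispersion that makes the superagent body compressible:

```python
# Sketch of the transverse average in (103): a slice S of parallel agents,
# each taking its own time tau_j to hand the promise to its successor.
# All numbers are illustrative.

d = 1.0
slice_taus = [0.4, 0.5, 0.9, 0.6, 0.5]     # per-strand transition times over S_i
S = len(slice_taus)

v_group = (S * d) / sum(slice_taus)         # equation (103)
spread = max(slice_taus) - min(slice_taus)  # arrival-time dispersion over the slice

print(round(v_group, 3), round(spread, 3))  # -> 1.724 0.5
```

The non-zero spread means that different parts of the slice ‘arrive’ at different times, so the body cannot remain rigid.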
In spite of minor subtleties, the conclusion from defining virtual velocity is straightforward. There is a maximum speed limit due to the finiteness of interior resources. There will be dispersion from individual agent variations, and capacity allocations need to be determined somehow—likely in a separate process. The variations in rates further imply that the position of a composite virtual promise (one that spans parallel agents) will eventually spread out due to this dispersion in a wavelike manner.
Example 13 (Cloud job migration) Computational data processing jobs are often migrated from location to location—to places where processing is cheap, or where large data sets are located. Data streaming services, such as television streaming, often shift between redundant backups that are close to clients in order to minimize response times. In this case there are two layers of virtual motion: the migration of the streaming service, with all its software, and the motion of the data packets. Many companies have data centres in the USA, Asia, and Europe—the major planetary regions—to track peak demand across timezones; these data services thus track the rotation of the globe (see figure 9).
Figure 9: If motion of a coherent process continues for long enough, dispersion will eventually increase entropy, leading to delocalization of the process. This can be expected to be most pronounced over short cyclic chains, like cloud computing data centre migrations.
These three agents promise one another transfers every 6-12 hours, and the transfer time may vary
from short to long depending on the amount of work. The agent spacetime is one dimensional and
circular—toroidal. A job that starts at A_1 will travel to A_2 and then A_3 and back to A_1 within a 24
hour period, unless capacity is unavailable. Over time, parallel processes that were initially congruent
will disperse due to different velocities and channel properties. Promises that began as delta functions
moving around the agents will spread out in space as entropy increases due to random dispersion σ. For
some maximum-entropy form with average position p_i \propto \exp(-(q - q_i)^2), we expect this to decay statistically to p_i = 1/3 over time as its average state disperses.
a particular location becomes described by the ‘plane wave’ \psi(A_i) with no preferred location. The job
is never located at more than one location at a time (except while in transit and unobservable), but the
probability is completely delocalized.
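The dispersion toward p_i = 1/3 can be illustrated with a toy Markov chain on the three-site ring. The transition probabilities below are invented, but any noisy, doubly stochastic circulation yields the same uniform limit:

```python
# Sketch of Example 13: a job hopping around a 3-site ring (USA -> Asia ->
# Europe -> USA) with slightly noisy transition probabilities (invented).
# A delta-function start disperses toward the uniform distribution p_i = 1/3.

n = 3
T = [[0.0] * n for _ in range(n)]
for j in range(n):
    T[(j + 1) % n][j] = 0.85   # dominant forward migration
    T[j][j] = 0.05             # job stays put (capacity unavailable)
    T[(j - 1) % n][j] = 0.10   # occasional backward redirection

p = [1.0, 0.0, 0.0]            # delta function: the job starts at A_1
for _ in range(200):           # many migration periods
    p = [sum(T[i][j] * p[j] for j in range(n)) for i in range(n)]

print([round(x, 3) for x in p])   # -> [0.333, 0.333, 0.333]
```

Because the transition matrix is doubly stochastic, the stationary distribution is uniform: after enough cycles the job’s probability is completely delocalized.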
This example shows a separation of the description of the agents into two levels: a kinematic motion, and a complementary probabilistic description that is easier to compute. Such a separation is a natural effect of trying to describe bulk localized processes at scale—we look at these briefly in section 5.4.
Example 14 (Velocity of money) The economic flow known as the velocity of money is a bulk current
rather than a velocity, calculated with respect to a fixed time period (yearly). Let Mbe the total money
supply over such a longitudinal time period, e.g. a year; let Pbe the average price level of goods and
services, and Qbe the quantity of these sold per unit time period.
[Q] = N/[t], \quad P = [M]/N, \quad v = [M]/[t]. (104)
Then, the GDP in a single period is defined as:

M v = P Q, (105)
where vis called the velocity of money. Here too, we note that time is a secondary variable even though
it is the generator of all changes. It’s averaged away according to a fixed exterior coordinate system of
the yearly calendar. Clearly this is not a transactional velocity, but rather a bulk statistical flow. In the
information age, we could separate the velocity by type of promise, by decomposing the promise structure
as in relations (50).
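A one-line check of (105), with invented figures for M, P, and Q:

```python
# Sketch of Example 14: the quantity-theory relation M v = P Q in (105).
# The figures for M, P, and Q are invented for illustration; v is a bulk
# statistical flow per period, not a transactional velocity.

M = 2.0e12       # total money supply over the period
P = 50.0         # average price level per unit of goods/services
Q = 1.6e11       # quantity of goods/services sold in the period

v = (P * Q) / M  # velocity of money for the period
GDP = M * v      # equals P*Q by construction, equation (105)

print(v)         # -> 4.0
assert abs(GDP - P * Q) < 1e-3
```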
5.4 Directed motion of ‘large’ or composite virtual strings and bodies
The movement of a ‘body’ or superagent, formed from several smaller agents, involves exterior transla-
tion with preservation of interior information and structure. The motion of large scale coherent phenom-
ena is unlikely to resemble the motion of microscopic properties, just on the basis of scaling arguments.
In physics, the motion of bodies on a Newtonian scale (ballistics to planets), begins with smooth, rigid
bodies, engaging in uniform and persistent translation, which can be compared to the basically stochastic
motion of atoms and molecules. Motion on a subatomic level is not well understood on a causal level—
we have only a statistical description partly due to the difficulty of probing and partly due to the need
for repeatability (see figure 10). Each superagent in the figure can be assumed to contain an interior process, formed by cooperation of subagents A_i, and promising a property +Q_A. Then the previous expressions for velocity, etc., apply to this case also.
Figure 10: Scaling up processes based on agents that cooperate on a smaller scale takes us to virtual particles (su-
peragents in PT), which are more likely candidates for expressing different physical properties and distinguishable
information than elementary agents. These virtual entities can move with respect to one another, like amoeboid
motion, recall their direction like particles with momentum, etc. What are the minimal sets of promises that would
enable such behaviours?
The formalism of agent spacetime gives us an advantage over Euclidean coordinates when discussing memory processes. In terms of continuum variables, we have to represent memory processes in which \tilde{x} follows a guiderail x. Markov processes are random, but—if the inhomogeneities are small compared to the size of the body:

\Delta x \ll \tilde{x}, (106)

then there will be apparent stability of the trajectory. The Markov process cannot know that it’s following the shortest path without a precalculated guiderail, but we can’t see the difference because the background is homogeneous.
In virtual motion, there is also an abundance of cases where large coherent bodies are formed from
properties that are promised by superagents, which move together, as serial trains or as parallel cross
sections.
Example 15 (Coherent parallel motion) In cognition, an image from the retina propagates along nerve
fibres as an approximately coherent entity, allowing us to see or hear phenomena projected at a scaled
‘moment in time’.
The configuration

A_i \xrightarrow{\,+A\,} *, (107)
A_{i+1} \xrightarrow{\,+B\,} *, (108)
A_{i+2} \xrightarrow{\,+C\,} *, (109)
is a non-correlated spurious cluster of promises. To conjoin these into a superagent, the agents need to
cooperate in passing on relative relationships:
A_i \xrightarrow{\,+A\,|\,P(\emptyset)\,N(B)\,} *, (110)
A_{i+1} \xrightarrow{\,+B\,|\,P(A)\,N(C)\,} *, (111)
A_{i+2} \xrightarrow{\,+C\,|\,P(B)\,N(\emptyset)\,} *, (112)
and thus as a superagent, the string can make a coherent promise:

\{A_i, A_{i+1}, A_{i+2}\} \xrightarrow{\,+ABC\,} *. (113)
Using the notation in appendix D, we can define ‘next’ and ‘previous’ string operators from advanced
and retarded promise functions, along the process timeline, to express the correlations along the symbols
A, B, C promised by the agents.
N(\cdot) \equiv \Delta^{\rm ret}_{jk}(\cdot), (114)
P(\cdot) \equiv \Delta^{\rm adv}_{ij}(\cdot). (115)
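The conditional string promises (110)-(112) can be sketched as data: each agent's promise carries its symbol plus its P() and N() conditions, and the superagent promise (113) is only warranted when every link agrees. The tuple encoding below is an invented illustration, not the paper's formalism:

```python
# Sketch of the string promises (110)-(112): each tuple is
# (symbol, P(previous), N(next)) as promised by A_i, A_{i+1}, A_{i+2}.
# The superagent promise (113) is coherent iff every adjacent link agrees.

promises = [("A", None, "B"),
            ("B", "A", "C"),
            ("C", "B", None)]

def coherent(chain):
    """Check that every next/previous conditional promise is mutually kept."""
    for k in range(len(chain) - 1):
        sym, _, nxt = chain[k]
        nsym, prev, _ = chain[k + 1]
        if nxt != nsym or prev != sym:   # a broken N() or P() link
            return False
    return True

if coherent(promises):
    superagent_promise = "".join(sym for sym, _, _ in promises)
    print(superagent_promise)            # -> ABC, the joint promise in (113)
```

Moving any one symbol out of order breaks a link, which is the rigidity-breaking discussed next.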
The coherent motion of such a string cannot maintain precise rigidity, as moving any one of the symbols breaks the next and previous relationship promises; these can be restored after some cycles of interior time. So, during the interior times when the pure state of the string is broken by being in a partial (mixed) state of transfer, the promise is unobservable, and this necessarily slows the effective clock of the superagent. Generally, as agents become larger in size, by composition, their clocks run slower relative to a privileged fast observer.
Example 16 (Composite motion and swarms in biology) Motion in biology takes the form of swim-
ming or crawling motion on the substrate of other structures [2]. Biological systems are partly in a
fluid state and partly in a rough crystalline tissue state. Swarm motion may be loosely coherent with-
out there being explicit forces to constrain the motion. Physicists might imagine a swarm of birds or
bees held together by springs, but in reality only information is passed between autonomous processes.
Swarms are in a gaseous state, so they are not virtual motion. On the other hand, slime moulds and clusters of caterpillars form ‘rolling swarms’: self-contained virtual processes that move relative to one another and thus exhibit virtual motion on a couple of levels at the same time.
Large bodies may be defined in terms of promises between superagents, i.e. promises that span
multiple spacetime agents, formed by hierarchies of composition. The promises between subagents that
hold them together would be represented as forces in physics. How these come about remains to be
described for virtual information.
A large ‘body’ Qis a virtual process which, at every timestep, spans a region or superagent formed
from many elementary agents. The process may be held together (resist diffusion) by a kind of
non-repulsive force, or simply by sharing a co-moving frame.
A large body is cohesive, i.e. the promises persist beyond the scale of transitions so that there is an effective attraction between the promises at different linked agents. Note that agents need not be nearest neighbours relative to all promises in order to be linked in this way. The order of promises should be preserved in a causal (timelike) direction and in a transverse direction.
Wave phenomena may not have a localizable shape on agent space. Waves are a process that spans many elementary locations, which cooperate in the manner of a field to shift some displacement measure periodically, by passing relative displacements along in a coherent fashion. Fourier analysis provides a continuum representation for the composition of such phenomena.
A centre of mass or ‘of promise’ is effectively determined by promises from members within the su-
peragent boundary to point away from the exterior boundary or interface. So, independent of dimension,
there is a net tendency to be pointing away from the exterior.
Example 17 (Centre of mass behaviour and database consensus) The centre of mass is the average
‘not exterior’ direction, which is one way of promising a spherical or radial coordinate system without
coordinates or dimension. When agents move more or less coherently on average so that the centre of
mass has some integrity, then we can reduce the effective system by coarse graining and use a centre
of virtual promising as a scaled promise to represent the body. The origin of the coherence requires
promises on the interior, such as those leading to database cluster consensus in information technology.
5.5 Longitudinal exterior process rate
Transitions are the result of a number of interior process steps, which are invisible to exterior agents.
The times for transitions can’t just be added naively, because we don’t know whether local processes at A_i are in phase with those at A_j, as measured with respect to an observer’s exterior time. The situation is
analogous to signal processing and timeseries analysis for input/output processes [35], where correlation
functions such as the Wigner function may be used to study spacetime propagation of signals [49–54]
(see appendix C).
Example 18 (Cloud tripole migration) Technological systems are small in terms of the agent numbers at which non-Brownian motion applies. Within a cloud datacentre, there is plenty of Brownian motion at the level of computer agents. However, there are also large coordinated transfers of processes between
datacentres (at the superagent level). Small chains are driven by the independent process gradient of
timezones, which track the human working day. Typically companies have datacentres in USA, Asia, and
Europe to handle local timezone issues. As the Earth rotates, and users wake and begin their days, jobs
migrate noisily from datacentre to datacentre. DNS services create noise by redirecting service sessions
to equivalents in other locations as part of resource allocation. Thus, at best, one has only a probabilistic
notion of where any job is located in this spacetime.
In agent spacetime, a process looks like an information circuit. At each node along the path A_i, the acceptance of a promise +q involves a ‘job’ or task to be executed on the interior. A job J is observable on the exterior of the agent only insofar as it leads to a delay in the exterior time for which +q is promised by A_i. This is a generic input-output process.
\langle\langle\,\text{output}\,|\,\hat{J}\,|\,\text{input}\,\rangle\rangle (116)
Suppose the job Jis conveyed by a message Mof length |M|.
J = \lambda_a\, j_a, (117)
for some complete set of basis functions j_a, e.g. memory, CPU, etc. The exterior time required to execute this job at A_i must be some (unobservable) function f:

\tau_i(J) = f(\lambda_a, M, j_i, \beta_i). (118)
The propagation time with latency for q to move from A_i to A_j is thus:

\tau(i,j) = \Delta t(i,j) + \tfrac{1}{2}\tau_i(J_q) + \tfrac{1}{2}\tau_j(J_q), (119)
or, if we write this in a Weyl-symmetric local form, sharing the co-time before and after:

\Delta\tau_i = \tfrac{1}{2}\Delta t(i, i-1) + \Delta\tau_i(J_q) + \tfrac{1}{2}\Delta t(i+1, i), (120)
which may be written, combining the two transition-overhead \Delta t terms as \Delta_\pm:

\Delta\tau_i = \frac{|J|}{\beta_i C_i} + \Delta_\pm, (121)
which—in turn—depends on the allocation of resources \lambda_a and channel utilization \beta_i across the distributed process space. Because this time depends on unobservable (hidden) variables, we can only know it as a probability distribution over completion times. Based on this exterior time, we can now look at the minimum transition time for passing a job J of length |J| through A_i:
\min_\beta(\Delta\tau_i) = \frac{|J|}{C_i} + \Delta_\pm. (122)
This is the transition time corresponding to the maximum velocity:

v_i = \frac{\beta_i C_i\, d}{|J| + \beta_i C_i \Delta_\pm}, (123)
where

\max_\beta(v_i) = \frac{d}{\min_\beta(\Delta\tau_i)} = \frac{d}{\Delta_\pm}, (124)
which is clearly the minimum entanglement time for passing a promise of zero length, independent of the
interior time generator Ci. This time is a theoretical minimum as it can’t be achieved by any observable.
The maximum normal velocity for a simple transition J = \Delta q could be

\max_\beta(v_i(q)) = \frac{C_i\, d}{|q| + C_i \Delta_\pm}. (125)
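A numerical sketch of (123)-(125), with invented values for d, C_i, |J|, and Delta_pm, confirming that v_i is monotone in beta and that the zero-length limit is d/Delta_pm:

```python
# Sketch of equations (123)-(125): transition velocity as a function of
# channel utilization beta, capacity C_i, hop length d, job length |J|,
# and co-time overhead Delta_pm. All numbers are illustrative.

d = 1.0          # hop length between adjacent agents
C = 10.0         # interior capacity C_i
delta_pm = 0.2   # co-time (entanglement) overhead Delta_pm

def v(beta, J):
    """Equation (123): v_i = beta C d / (|J| + beta C Delta_pm)."""
    return beta * C * d / (J + beta * C * delta_pm)

# v is monotone increasing in beta, so the maximum velocity is at beta = 1,
# which is equation (125):
v_max = v(1.0, 5.0)
assert v(0.5, 5.0) < v(0.9, 5.0) < v_max

# As |J| -> 0 the bound approaches d / Delta_pm, the limit in (124):
assert abs(v(1.0, 0.0) - d / delta_pm) < 1e-12

print(round(v_max, 4))   # -> 1.4286
```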
At this stage, it’s interesting to return to the question of acceleration as discussed in section 3.3. On
the level of a transition, a change of velocity can only be a change of utilization \beta_i or of job length |J|.
There is no simple relationship between the Newtonian F=ma and an expression we could derive for
the acceleration. Based on the symmetry around an agent location (Weyl coordinates) it seems natural to
define the quantity corresponding to \ddot{q} as:

a_q \equiv \frac{v_2 - v_1}{\tfrac{1}{2}(\Delta_q\tau_1 + \Delta_q\tau_2)}. (126)
This can be evaluated to give

a = \frac{2\,\Delta d\,(\Delta_q\tau_1 - \Delta_q\tau_2)}{\Delta_q\tau_1\,\Delta_q\tau_2\,(\Delta_q\tau_1 + \Delta_q\tau_2)}, (127)
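A quick numerical check that the closed form (127) agrees with the definition (126), using v_k = Delta d / Delta_q tau_k and arbitrary test values:

```python
# Numerical check that (127) follows from the definition (126), with
# v_k = Delta_d / Delta_q tau_k. The values are arbitrary test numbers.

dd = 1.0                   # Delta_d, hop length
t1, t2 = 0.8, 0.5          # Delta_q tau_1, Delta_q tau_2

v1, v2 = dd / t1, dd / t2
a_def = (v2 - v1) / (0.5 * (t1 + t2))                   # definition (126)
a_closed = 2 * dd * (t1 - t2) / (t1 * t2 * (t1 + t2))   # closed form (127)

assert abs(a_def - a_closed) < 1e-12
print(round(a_def, 6))     # -> 1.153846
```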
which is a muddle of interlocking processes with no obvious relationship to a force or a mass. This problem is endemic to state descriptions of change, such as quantum mechanics, where one has to rely on an exterior potential to explain the balance of payments in changes, without describing the mechanism for the change. We merely assume that a change results from one body striking another, in the ballistic Galilean view, or from a mysterious charge-field interaction that imparts a change. Equations of motion only count these changes. All we can say here is that the fact that velocity peaks at a maximum value for each job type J means that acceleration for the same type approaches zero (as \Delta_q\tau_1 \to \Delta_q\tau_2 \to \Delta_q\tau_i) if and only if the co-time entanglement \Delta_\pm is homogeneous between all pairs of agents. This is the fundamental limit of spacetime homogeneity. A meaningful force and mass concept will always be by construction.
For large moving bodies formed by composition and averaging of fast processes, it makes sense to try to define quantities in terms of generating functionals based on the state generators, as one does in quantum theory (see section 5.4). The only reason for a body on a large scale to follow an average path direction is if it is following a guiderail14, in which the direction has already been negotiated or selected by some carrier signalling process. This is a part of channel reservation.
6 Guiderails
As hinted already, the implementation of coherent motion, analogous to Galilean motion in a straight line, is a highly unnatural state of affairs that requires significant infrastructure to support it. How the processing utilization is allocated longitudinally and transverse to the direction of motion cannot be explained by purely local processes on the scale of an agent. A cooperative process is necessary.
A natural way to address this is by the separation of process scales, which also describes promise
scaling [2]. Why would a virtual body continue in the same direction? What even is a persistent direction
through a network cluster of discrete agents, ordered or disordered? We take for granted the ability to
reach a specific destination along a guided path, e.g. by ‘seeing the way’ (line of sight); however, this is not possible on a larger scale. This capability is possible in Euclidean space, because it has a rigid
and regular geometry. What information process accounts for this effective privileged observer view
on a local scale? Trapped within a system, an elementary process needs a local map of neighbourhood
‘terrain’ which is an approximate invariant of the space.
6.1 Maps of topography
Maps are inhomogeneities formed from a separation between fast and slow variables x, \tilde{x}, where the map
would depend on x. In a continuum space, a priori there’s an infinity of possible paths. In a network,
there may be no solution at all to connect points. How do we even name the destination and know when
we’ve arrived there? The destinations need distinguishable names or features that make them uniquely
recognizable. In a coordinate system, we can name nodes by counting distance. No such property exists
in a network. The outcome is that we need to think about distance and location differently. This is not
news in computer networking, where the problem is well understood.
Example 19 (Lost in space without guiderails) Imagine being in a children’s playroom in weightless
outer space, in a room filled with randomly coloured balls. How would you define a direction? How
would you count the number of dimensions?
Let’s call the outcome of a process by which a persistent process capacity and direction are reserved a ‘guiderail’. Spatial guiderails discriminated by semantics, in the macroscopic world, are everywhere—
roads and homing signals, selectable by agent discrimination. In the language of forces, we have train
tracks, potentials, fields, hills and valleys, etc. None of these are explanations, merely representations.
14This is similar in idea to the pilot-wave idea in quantum mechanics.
Definition 1 (Guiderail, transport channel, or route reservation) The selection of a bounded collec-
tion of agents oriented along a path of causal transitions, defining a trajectory. The guiderail may have
finite cross section (width) and length. The path formed from discrete promise relationships defines the
meaning of a geodesic through spacetime—a local route from start to finish, subject to some boundary
conditions (see figure 11).
Why don’t we need guiderails in classical mechanics? In fact we do. Embedded in Newton’s laws
is a cleverly implicit assumption that spacetime itself remembers direction for all observers at the same
time. By naming axes in a coordinate system, and describing processes relative to this rigid basis set,
we have implicitly imbued continuity and direction into the description. Momentum is the generator
of translations only when space is uniform and homogeneous and there is a constraint on position as a
function of time, generated by the Hamiltonian or kinematic energy. So it’s reasonable to expect linear
momentum to be a feature supported by spacetime itself. What programming does spacetime need to
execute to support this?
Figure 11: Coherence of direction at scale behaves like a self-organizing waveguide in which agents’ promises
are approximately aligned to contain motion along the line of the guide. Trajectories for macroscopic bodies are
scaled by the formation of such guiderails, requiring long range order along the direction and transition weight
directing along the line.
6.2 Symmetry breaking directed processes
To explain uniform virtual motion in a straight line, we have to explain two things:
How the concept of a persistent direction arises for a virtual process in a random network of agents.
How average channel availability can be secured for continuity over a trajectory.
These contentions may be of little interest in a technology setting, where the paths are short—but to forego an answer to these issues would be to miss an obvious opportunity. They translate into two related notions that are in need of comment:
The emergence of long range order in agents’ transition promises, with or without reversibility.
How do consistent directions crystallize from gaseous, ballistic agent interactions? This can
only be explained as a memory process in spacetime itself. No memoryless process can maintain
uniform coherence—we have to explain spacetime bias.
The irreversible directionality of promises on a large scale—why do agents keep moving in a consistent direction that has already emerged?15 This information must be carried with the virtual body that’s moving.
Although modern physics elevates symmetry as a key principle, and sets aside the origin of ‘boundary conditions’ in favour of isolated generality and covariance, this is slightly dishonest—interesting things only happen when symmetries are broken. Brownian motion preserves directional symmetry on average, but not on a microscopic level. Mathematically, it is the imposition of boundary conditions on a larger scale that breaks local symmetry along directed paths—but the processes by which such conditions emerge are what we are asking about here.
6.3 Continuity of motion and scattering
Currents and hydrodynamic flows are examples of bulk motion, involving many promises moving to-
gether. In virtual motion, such flows are represented by bundled parallel processes across spacelike
sections of agents (like a fibre bundle). In such flows, dispersion of velocity across the cross section and
length of the path is a normal feature, too extensive to discuss here. We can note briefly how direction
and continuity are guided in bulk processes.
Example 20 (Probability current in quantum mechanics) In quantum mechanics, the expression for
bulk transport is the probability current, which has a telling structure. The non-oriented local position
of a ‘particle’ is given by the expectation value of the position operator:
\langle x(t)\rangle \equiv \int \psi^*(x,t)\,\hat{x}\,\psi(x,t). (128)
This is a Brownian process, since it has no orientation except for the pre-programmed coordinates x.
The oriented, non-local transport of the particle relies on a gradient of the non-local guiding function
ψ(x, t), which stands in for the effective momentum of the process:
m J_i \equiv \int \psi^*\,(-i\hbar\,\partial_i)\,\psi \equiv \int \psi^*\,\hat{p}_i\,\psi. (129)
Since there is no inhomogeneous gradient for plane wave solutions, the meaning of a current is not as
one would expect for a diffusion process—there is a constant density, just a rolling wave, and the velocity
comes from the imaginary phase part. This shows how QM, in the absence of a guiding potential,
represents steady state of Brownian fluctuating change as interior phases, not a classical type of directed
motion.
In quantum conduction, a motion gradient encoding the direction of travel is based on exterior boundary conditions and exterior promises between the agents. There is a representation ambiguity in quantum mechanics, since there isn’t a clean separation between interior and exterior processes, except in
the pilot wave approach [27]. Memory resources are interior to the agents. We solve this in an exterior
representation by introducing an effective potential field, e.g. for charges, the momentum p \mapsto p - eA,
where the vector potential Ais a convenient field representation of the boundary condition as a local
memory function. For a superconducting current, this is the exact value of the persistent momentum
remembered by the supercurrent [55, 56].
Example 21 (Buffer pooling and waiting (dynamic flow)) When an agent allows messages M from more than one source, which have passed along different paths, the arrivals may not preserve the order in which
they are sent. By buffering arrivals, the agent can wait for all dependencies to arrive before reassembling
the dependencies to keep the next conditional promise. Buffering also allows other processes to ‘flow
15The average reversibility of ‘physical law’, i.e. the generators of possible motion, is something of a red herring that gets more attention than it deserves. It’s only irreversible trajectories that are of interest for describing change. Entropy is sometimes associated with the arrow of time, but this is another diversion. While it’s true that average statistical state can be used to measure global time, local time is measured by local directed processes, without which entropy would be impossible.
through’ the agent while the agent is waiting, by timesharing. This is a well-known technique in Computer Science, e.g. in TCP/IP networking. Interior agent memory is vitally important to understanding parallelizable multipath phenomena. It’s likely that no agents involved in dynamical phenomena manage without a buffer of internal memory. This would explain interference and wave phenomena in a simple way. Buffered data are in a mixed state—neither in one place nor another, but locked in a transition state (see appendix B).
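A minimal sketch of the buffering idea, in the style of TCP-like sequence reassembly; the message format and function name are invented for the illustration:

```python
# Sketch of Example 21: an agent buffers out-of-order arrivals from multiple
# paths and releases the next conditional promise only once all dependencies
# are present, in the manner of TCP-style sequence reassembly.

def reassemble(arrivals, expected):
    """Buffer (seq, payload) arrivals and emit payloads in sequence order."""
    buffer = {}
    output = []
    next_seq = 0
    for seq, payload in arrivals:
        buffer[seq] = payload            # held in a mixed 'transition' state
        while next_seq in buffer:        # dependencies satisfied: flow through
            output.append(buffer.pop(next_seq))
            next_seq += 1
    return output if next_seq == expected else None  # still waiting otherwise

# Arrivals along different paths do not preserve the order of sending:
print(reassemble([(2, "c"), (0, "a"), (3, "d"), (1, "b")], 4))
# -> ['a', 'b', 'c', 'd']
```

While buffered, a message is neither delivered nor absent, matching the ‘mixed state’ description above.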
Example 22 (Process pipelines and circuits (semantic flow)) In electronics, service design, and data
processing, components are chained together to form pipelines through which information is passed and
transmuted. These are promise chains which can be considered as exhibiting virtual motion, where a
simple ‘process marker’ is passed along to track the changing interior state of the process. All state
machines including Turing machines operate in this way. More mundane examples can be found at all
scales:
1. Hospital: patient arrives, registers, waiting area, treated by doctor, wait, checkout. This is virtual
motion relative to the tracking registration process.
2. Airport: passenger check-in, security check, wait for all independent passengers to form super-
agent, flying, arrive at destination, wait for all bags, collect bags, end. This is virtual motion
relative to the process markers.
3. Road toll: Cars arrive at toll station, splay out to parallel channels, cars re-merge into a smaller
number of branches by contention. The physical motion becomes virtual motion when observed or
projected by traffic cameras.
4. Money transfer: Start bank account, next bank account, next bank account, final bank account,
etc.
Note the waiting steps, which correspond to a semantic composition analogous to constructive interfer-
ence (after a delay). Contention corresponds to destructive interference.
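The process-marker view of such pipelines can be sketched as follows, using the hospital chain as the example; the stage names and data structure are invented for the illustration:

```python
# Sketch of Example 22: a process marker passed along a promise chain
# (the hospital pipeline) exhibits virtual motion relative to the tracking
# registration process. Stage names are illustrative.

stages = ["arrive", "register", "waiting area", "doctor", "wait", "checkout"]

def run_pipeline(stages):
    """Move a process marker through each stage; return its final state."""
    marker = {"position": None, "history": []}
    for stage in stages:
        marker["position"] = stage       # a virtual transition to the next agent
        marker["history"].append(stage)
    return marker

marker = run_pipeline(stages)
print(marker["position"])                # -> checkout
```

The patient’s body never ‘moves’ in the pipeline’s coordinates; only the marker does, which is what makes the motion virtual.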
6.4 Path guides
In virtual motion, the possible directions of motion follow the structure of the underlying agents on a
mesoscopic level. On a macroscopic level, direction depends on how each local agent counts independent
changes. The concept of causal sets allows us to define paths or chains of transitions in causal (pre-
requisite) order [16–20], but this is only a tautology associated with timelike propagation.
Example 23 (Internet Routing) Routers are a special class of memory nodes in a graph that have
maps—preprogrammed local directional fields, with non-local knowledge about how to reach named
locations (figure 12). Thus, in routed traffic, the notion of an invariant direction is programmed into
spacetime, not into the virtual promises that are propagated. A kind of relatively ‘memoryless’ propagation is used within superagents for ‘broadcasting’; however, in that case the burden of memory is just shunted to the edge, where each receiver agent has to remember its name. Such unnamed Brownian signals are used to diffuse and negotiate the polarized memory about local spacetime direction and structure. Once spacetime ‘knows’ how to reach certain destinations or directions, local motion only needs the local ‘directory’ information in the guiderail’s routing tables.
At different semantic scales, Internet routing information is maintained in memory in the form of
routing tables, service discovery directories, etc, and is collected and updated by separate ‘out of band’
protocol interactions, during which the routes are inaccessible and agents are unobservable.
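As a toy illustration of this ‘directory’ structure (a sketch of our own, not part of the paper’s formalism; the prefixes and interface names are invented), a router’s forwarding decision is a longest-prefix match against locally stored rules compiled from non-locally gathered knowledge:

```python
import ipaddress

# Sketch (our own): a router's forwarding table as a preprogrammed directional
# field. Non-local knowledge about destinations is compiled into local
# longest-prefix rules; the packet itself carries no route.
table = {
    ipaddress.ip_network("10.0.0.0/8"):  "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",   # more specific rule wins
    ipaddress.ip_network("0.0.0.0/0"):   "eth2",   # default guiderail
}

def next_hop(dst):
    """Longest-prefix match: the most specific local rule points the way."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in table if addr in net]
    return table[max(matches, key=lambda n: n.prefixlen)]

print(next_hop("10.1.2.3"))   # eth1
print(next_hop("8.8.8.8"))    # eth2
```

The direction is thus a property of the spacetime (the table), not of the travelling datum, as the example above describes.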
Lemma 1 (Direction or route) Path order reservation is equivalent to the formation of a causal set.
Figure 12: Internet routing involves a hierarchy of agents, separated into two main types with respect to the motion
of data. Clusters of host agents each promise a unique address within the cluster, which broadcast messages to
one another on their interior. They form collective superagents called ‘subnets’, which also promise a collective
address (the IP prefix) to other routers. Router agents maintain memory of a field of directed promises to forward
virtual messages by destination address; thus routers form a guiderail field for virtual packets to follow based on
source and destination promises. (see example 23).
This separation is readily observed in macroscopic processes like Internet routing, where direction
is precalculated and packets are carried along by the currents like messages in bottles. By contrast, in wireless radio transmission, signals are broadcast as waves to flood a region, hoping to hit a target indiscriminately. Without named receivers (encoding identity into spacetime locations) there can be no
direction to wave transmission. In biology, chemical and electric potential gradients form a guiderail for
cells with interior polarization to align with and follow. In the nervous system, signalling passes along
specific waveguides that form a pre-allocated map. What map explains Galilean-Newtonian motion in a
straight line? Einstein showed that geodesics on curved spacetime represent the temporal guiderails for
particle motion.
Not all dimensions may necessarily be available for motion. Some can be reserved for signalling
‘out of band’. In figure 13, a common topology for calibrated control in dynamical feedback is depicted.
Agents form an observable spacetime on the rim of a hub star-configuration space. The agents on the edge
form an ordered configuration space. They can pass signals to neighbours or through the hub ‘short cut’.
This interior channel can be of a qualitatively different type to the ordinary spacetime connections on
the rim, with a transition rate that is independent of spatial rim distance. Agents can therefore be entangled
over long distances without a distance penalty, by using a ‘tunnel’ or ‘wormhole’. This model is widely
used for virtualized control in information systems.
6.5 Promising dimensionality
If a path is a collection of agents ordered by a successor relation, then spatial dimensions correspond to
the existence of independent degrees of freedom—alternative successors in a trajectory at each point in
a graph. In a coordinate view of spacetime, the number of dimensions is assumed. It is constant and
immutable. In agent based spacetime, the possibilities for motion depend on the effective adjacencies for
agents in the manner of a graph [1].
Example 24 (Statistical dimensionality) The relation between linear motion and Brownian motion suggests that paths form from the possibility of percolation in random graphs [1, 57]. A related model is the
path integral, by which wavelike interference brings about a favoured path on average by interference
and ‘stationary phase’ leading to a potential guiderail for classical paths, while non-classical paths
remain essentially Brownian in form and become irrelevant details under scaling [58]. The effective
dimensionality of spacetime may even be emergent from statistical percolation in a discrete agent-like model, as proposed by Myrheim et al. in [16–20].

Figure 13: One way to ensure rapid distance-independent correlation between agents is a hub star-configuration space (see the larger picture in figure 12). Neighbours on the edge form a configuration space, but don’t need to pass all signals through each other to communicate. This interior channel can be qualitatively different from an ordinary spacetime connection, with a transition rate that is independent of spatial distance. This model is widely used for virtualized control in information systems.
In a graph, dimension is a local property, which suggests that dimensionality can only be an emergent
property of a memory process relating to the channels between agents. Let each node have an in-degree
kin and an out-degree kout which represent incoming and outgoing degrees of freedom [59, 60]. The
outgoing degree represents a number of information channels, and the effective dimension of spacetime
at a point. In Promise Theory, the existence of these channels depends on both agents, by offer (+)
and acceptance (-) at every point. Suppose then we denote these directions at each point by a tuple of basis vectors $\hat{e}^{(\pm)}_k(A_i)$ for the in and out degrees—these are just symbols in some alphabet, without a representation. In our local agent model, each location $A_i$ promises its contribution to the total system-wide state vector $\psi$ as $\psi_i(\hat{e}^{(\pm)}_1, \hat{e}^{(\pm)}_2, \hat{e}^{(\pm)}_3, \ldots)$, where the different $k$ are distinguishable types, related to promises expressed at the agent boundary:
$$
A_i \xrightarrow{\ \pm\hat{e}_1,\,\pm\hat{e}_2,\,\pm\hat{e}_3,\,\ldots\ } \ast . \tag{130}
$$
The assumption here is that promise offers $+\hat{e}_1$ only bind to promise receptors $-\hat{e}_1$, and so on, so that spacetime forms a kind of anti-ferromagnetic stability. Without this long range order, the different dimensions will be irrelevant and non-aligned. This is a prediction of Promise Theory—consistent dimensional transmission requires both (+) and (-) promises to bind and lead to an ordered state.
Example 25 (Promise receptor types as dimensions) We see such receptors clearly in chemistry (holes) and in cell biology (protein bindings), for example. In computer networks, receptors take the form of ‘ports’ with different numbers that correspond to different protocols or ‘forces’, e.g. port 80 is HTTP, port 22 is SSH, etc. In fundamental physics, the different interaction types (the so-called ‘fundamental forces’, except gravity) play this type of role. In biology, cell protein markers and receptors play the role in these transfers; in computing, service ports; in vision, the optical receptors for R, G, B. The semantics of these cases may be different, but the causal independence in each case is what renders the information ‘typed’.
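The typed binding rule can be sketched as a toy matching function (our own construction; the port names are only an illustration borrowed from the networking example):

```python
# Sketch (our own toy model): a channel of type t forms between agents only
# when one side offers (+t) and the other accepts (-t), as in the
# anti-ferromagnetic binding rule above.

def channels(offers, accepts):
    """Return the promise types that bind between an offering and an accepting agent."""
    return sorted(offers & accepts)

server = {"emit": {"http", "ssh"}}   # (+) promises offered
client = {"accept": {"http"}}        # (-) promises accepted

# A binding needs a (+) offer on one side and a matching (-) acceptance on the other:
print(channels(server["emit"], client["accept"]))   # ['http']
```

Only the matched types carry information; unmatched offers bind nothing, which is the sense in which the information is ‘typed’.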
Example 26 (Crystals and alloys) In chemistry, the donors and receptors for electrons and their symmetry groups determine the local order of crystal formation, in conjunction with Pauling’s rules. The symmetry groups of the constituent agents determine the dimensionality of the grains, i.e. the separation of donor and receptor sites. Close-packing structures may result if there are more donors and receptors, i.e. greater symmetry, than an agent can afford to maintain channels with. One way for this to occur is by covalent sharing. In alloy formation, even high-entropy alloys have some regularity according to effective size, which translates into an effective interaction co-time between the agents along a particular bond.
Example 27 (Cell structures) In a quasi-Euclidean lattice, exterior promises correspond to lattice bindings between nodes. In biology, the exterior promises would correspond to protein spikes and receptors expressed by cells. In chemistry, they correspond to electron valencies, etc. Weaker bonds are slower to respond than stronger bindings. As we scale the size of a multi-agent system, these timings manifest as effective forces.
The affinity or propensity for an agent in a receiver role $R$ to accept a promise from another agent $S$ is a typed (-) promise, and may be expressed as:
$$
R \xrightarrow{\ -b\ } S. \tag{131}
$$
This allows for a number of different interactions, which we assume have dedicated channel capacity per
type. Since the promises that result in structural integrity are only a possibly small subset of the total
promises expressed by the agent, we don’t have to assume that a semantic spacetime is homogeneous
in its functional characteristics [1–3]. Agents need only to be able to form bindings that match with a
potential neighbour. The jigsaw pieces need to match over some allocated channel (whose origin remains
mysterious).
Direction and orientation are clearly memory processes. The local orientation of an agent $A_i$ in an effective Euclidean embedding is determined by the values of these promised vector components, and the dimension of the tuple representation is $d_R \times D$, where $d_R$ is the dimension of the representation of each promise and $D$ is the effective exterior dimension of agent spacetime at the point. The existence of typed bindings allows cells of long range order to form. Thus dimensionality can be consistent over grains with
such order, but might change in other grains. This is a typical situation in solid state physics of materials,
for example. Kinematically, a direction is only consistent if a process qcontinues in that direction as
an internal property. This would be the origin of momentum continuity. As long as momentum is carried
with a moving observable, its direction has to be distinguished by a consistent vector with respect to a
consistent set of conditional forwarding rules, e.g.
$$
A_i \xrightarrow{\ +\Delta q(\hat{e}'_1,\,\hat{e}_1)\ |\ \hat{e}_1\ } A_j, \tag{132}
$$
which would take $q$ arriving from direction $\hat{e}_1$ and propagate it on through in a consistent direction to the next $\hat{e}'_1$, conditionally on having received the promise $\hat{e}_1$ from the previous agent. If, on the other hand, $A_i$ promised to forward in direction $\hat{e}_2$, the motion would go through a right-angle bend. This has
exactly the semantics of a forwarding table, as is used in networking technology. Clearly the notion of
momentum conservation, which is embedded within ballistic assumptions, is not an inevitable behaviour
and has only emergent significance.
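The forwarding-table semantics of direction-preserving motion can be sketched in a few lines (a toy model of our own; the agent indices and direction labels are invented):

```python
# Sketch (our own illustration): each agent holds a forwarding table mapping an
# incoming direction to a promised (outgoing direction, successor) pair, in the
# spirit of equation (132). Straight-line 'momentum' is just the special case
# where every table forwards each direction onto itself.

def propagate(agents, start, direction, steps):
    """Follow a signal through agents' forwarding tables, returning the path."""
    path = [start]
    pos, d = start, direction
    for _ in range(steps):
        table = agents[pos]          # this agent's conditional promises
        if d not in table:
            break                    # no promise accepted: motion halts
        d, pos = table[d]            # promised outgoing direction and successor
        path.append(pos)
    return path

# A 1-D chain A0 -> A1 -> A2 -> A3 whose tables all preserve direction '+e1':
chain = {i: {"+e1": ("+e1", i + 1)} for i in range(3)}
chain[3] = {}                        # boundary agent promises nothing
print(propagate(chain, 0, "+e1", 10))   # straight-line motion: [0, 1, 2, 3]
```

Replacing an entry such as `("+e1", i + 1)` by a different direction would bend the trajectory, so conservation of direction is a property of the tables, not of the moving signal.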
6.6 Guiderail formation
In scaling transport phenomena, we have to explain how local direction translates into non-local direction
(global or persistent, including curvature). Typically, if we assume the primacy of locality, this has to
happen by the emergence of Long Range Order [1]—involving a phase transition. With limited memory
per agent, on whatever scale we are describing, maps of spacetime will have only finite efficiency to
point and sustain promise trajectories q(τ)along the shortest path to their destination.
If only knowledge of nearest neighbours can be carried, the motion is Brownian. If the message q
can align with a scalar gradient field like a hill-climbing or gradient descent method, then guided motion
is possible by diffusion. The question then shifts to what process sets up the gradient field in a certain
form. A guiderail is thus an overlay field which acts as a directed map, against which transitions are
made conditionally. With conservation and homogeneity of state density assumed, this necessitates a
Hermitian structure, since that is the symmetry which preserves those norms. Transition probabilities $\psi_2^\dagger\psi_1$ are interpreted as the probability of emission at $A_1$ AND conditional absorption at $A_2$.
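The contrast between Brownian motion and gradient-guided diffusion described above can be sketched as a toy model (entirely our own construction; the 1-D chain and the scalar field are invented for illustration):

```python
import random

# Sketch (our own): a walker on a 1-D chain of agents. With only
# nearest-neighbour knowledge it moves as a Brownian walk; given a scalar
# guiderail field it hill-climbs, stepping toward the neighbour with the
# higher field value.

def step(pos, field=None):
    left, right = pos - 1, pos + 1
    if field is None:                       # no map: Brownian motion
        return random.choice([left, right])
    return left if field(left) > field(right) else right  # follow the gradient

field = lambda x: -abs(x - 50)   # invented scalar field peaked at agent 50

pos = 0
for _ in range(100):
    pos = step(pos, field)
print(pos)   # the guided walk reaches the peak and oscillates beside it
```

The interesting question, as the text notes, is not the walk itself but which prior process set up `field` in that form.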
Example 28 (Matching emission and absorption guides) In the transactional interpretation of quantum mechanics, emitter $\psi$ and acceptor $\psi^\dagger$ waves propagate from start to end states, forming the guiderail $\psi^\dagger\psi$. From start to finish, there is an implicit signalling that is (once again) out of band.
In practice, we don’t need perfect information to enable routing of a trajectory q(τ). The virtual
promise only has to get closer to its destination for the channels to sense the gradient change as a function of the exterior time, so that another leg can take over. This is the causal structure of cell-phone
networks and propagation in metals. This local granularity opens a new set of questions concerning
the specificity of effective ‘intent’ in the processes, which is scale-dependent. Intent, in a mathematical
sense, is basically a form of directional alignment within a space of possibility. Consistent and oriented
average dimensions, such as one invents for vector spaces, are characterized by having ‘named’ tuple
properties, e.g. (x, y , z), up to symmetry transformations. In a similar way, this is why one assigns
names and addresses to Internet subnets and computers. Without consistent naming and alignment, motion would be indistinguishable and Brownian (random). If we can distinguish properties q1, q2, . . . in
such a way that an increase in one does not lead to a change in the other, then we have a set of local
basis vectors. Consistency implies a form of long range order—a field gradient that can be detected by
the virtual phenomena that ride along it like a guiderail.
The most basic discrimination of input and output along different channels is thus what we call
spacetime dimension. The input degree is the number of channels that can interfere to select an output.
The output degree is the number of possible degrees of freedom for branching the result of a directed
process along independent timelines. These input and output directions are encoded by type-specific
promises that act as routing networks. Thus directionality in agent spacetime is determined only by
the timeline of a process on some scale. On a small scale, transitions between agents determine local
dimensionality. On a larger scale, transitions between coarse grained superagents determine the effective
dimensionality. Euclidean dimensions have no other natural correspondence than named receptor/emitter
types. A collection of agents all with receptors and emitters of type ±x, ±y, ±z, . . . would behave like
a discrete lattice of dimension N, i.e. each with long range order and N independent successor relations
(see section 5.1).
Lemma 2 (Dimensional bindings) To represent N independent dimensions, agents need at least N independent donor (+) promises and N receptors (-).
If a signal were to propagate (percolate) through a graph connected by such bindings, local services could act as transducers, converting one kind of signal into another. The rules for accounting these type transmutations differ. Conservation in physical law is based on group symmetries that allow transmutations for certain non-Abelian groups.
Example 29 (Artificial Neural Networks) ANNs are regular arrays of interwoven agents that promise
to route information in a mesh that resembles a probabilistic forwarding plane in a network switch.
The boundary conditions at the ends effectively determine a distribution of weights over the array that
approximates a Markov process, during ‘training’ of the network. The weights form a guiderail for
subsequent motion through the array, which interfere according to the potential set up in training to
reproduce similar interference patterns for similar inputs. The key aspect of neural networks lies in their
application to dimensional reduction: input agents greatly outnumber output agents. The latter are thus used with the functional semantics of an index type, classifying the inputs.
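A minimal sketch of this dimensional-reduction role (our own construction; the hard-coded weights are stand-ins for what training would produce):

```python
# Sketch (our own): a trained weight array acting as a guiderail for
# dimensional reduction. Many input agents are projected onto a few output
# agents; the winning output acts as an index type classifying the input.

def classify(inputs, weights):
    """Forward inputs through the weight mesh; return the index of the winner."""
    scores = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
    return scores.index(max(scores))

# Four input agents reduced to two output agents (an index classification):
weights = [[1.0, 1.0, 0.0, 0.0],    # output 0 favours the first two inputs
           [0.0, 0.0, 1.0, 1.0]]    # output 1 favours the last two

print(classify([1, 1, 0, 0], weights))   # 0
print(classify([0, 0, 1, 1], weights))   # 1
```

Similar inputs ride the same weight guiderail to the same output index, which is the interference-pattern reproduction described in the example.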
The agent model might not bring us any closer to understanding the reasons for elementary processes,
i.e. the origins of change or communication between agents. However, we are led to an important
separation of concerns that is not obvious in either the classical or quantum pictures. The necessity of a
process by which extended spacetime trajectories form (spacelike extended objects and timelike paths)
and interact becomes quite clear for virtual motion.
6.7 Process interference and channel reservation
The logical need for non-local information about direction and availability to set up a guiderail for transitions means there has to be an on-going or causally prior process which can propagate and ‘relax’ or equilibrate local agent information about resources within a bounded region we call the system. This implies that the setup time for a system-wide ‘state function’ $\psi(A_1, A_2, \ldots)$, which summarizes these matters, takes a finite time. This setup time is neglected in quantum mechanics, so the propagation of information leads to non-local correlation effects.
Interference, like dispersion, is normally associated with waves, because waves are the most elementary representation of non-local processes and come naturally from any Fourier decomposition, and Fourier spectra are the coordinate system of continuous and discrete processes. However, interference can occur between any two processes that share resources, because these are not product states but convolutions. Interference between generalized virtual processes is called resource contention in cybernetics. Parallel processes can contend constructively and destructively, but we might overlook the process for that contention by separating the trajectory into a determination part and a realization part (analogous to the separation of boundary conditions and equation of motion).
Example 30 (Resource contention in infrastructure) In cloud computing, the high level of parallelism
and interference, contending for shared resources, leads to dynamically similar resource constrained
phenomena. For instance, see example 35. Because resource control is virtual, process flows are not
constrained by rigid guiderails, as they are in the electricity net, for instance. Process migration can
equilibrate dynamically. This leads to many quantum-like uncertainty phenomena concerning location
and predictability of computational jobs.
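Generalized contention of the kind described in example 30 can be sketched as a capacity-limited admission rule (a toy of our own; the process names and capacities are invented):

```python
# Sketch (our own): interference as resource contention. Parallel processes
# share a channel of finite capacity; compatible demands compose
# ('constructive'), while excess demand is queued ('destructive').

def contend(demands, capacity):
    """Admit demands in order until capacity is exhausted; queue the rest."""
    admitted, queued, used = [], [], 0
    for name, amount in demands:
        if used + amount <= capacity:
            admitted.append(name)
            used += amount          # processes superpose within capacity
        else:
            queued.append(name)     # contention: wait, like a delayed phase
    return admitted, queued

print(contend([("p1", 3), ("p2", 4), ("p3", 5)], capacity=8))
# (['p1', 'p2'], ['p3'])
```

The queued process is not destroyed, only delayed, which is the waiting-step analogue of destructive interference mentioned earlier.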
Figure 14: Scaled macroscopic motion through a guiderail is effectively a succession of multiple-slit interference transitions formed from discrete transitions. On a large scale, these have the same structure as quantum mechanical wavefunction algebra, only referring to the states $|A_i\rangle\rangle$. The transverse confinement of the paths is what leads to coherent dimensionality of semantic spacetime processes.
The compositional nature of a discrete spacetime is closely related to the quantum algebra for multi-slit experiments (see figure 14), because agents are slits through which processes must flow. By attempting to allocate availability along paths by routing through multiple intermediate agents, we recreate
the double slit experiment over and over again. Computing probabilities along such alternative paths
necessarily involves the same kind of algebras, since each location projects its own local set of interior
states and divergences and convergences of partial processes inevitably lead to interference effects on a
symbolic level16. The path integral formulation was motivated by the wave interpretation of the double
slit experiment—for photon and electron waves, which led to the notion of wave-particle duality and
quantum ‘weirdness’. From the perspective of virtual motion, non-local interference is an entirely natural consequence of the separation of process by scale, into two independent component processes: a
resource allocation or channel availability mapping of the system, and the subsequent motion relative
to that map17. On a small scale, there is insufficient stability in the processes that determine direction
and continuity. Appealing to an information theoretic model of virtual change, we have no choice but
to confront contention as a basic feature of kinematics of virtual motion. It’s intriguing and relevant to
compare this information theoretic view with the more elusive origins of the Schrödinger equation, and see where the guiderails are hiding in plain sight.
Example 31 (Quantum Mechanics as agent spacetime) In quantum mechanics, one builds a picture of mechanics as a state machine (whose interior details are unknown). Process paths can propagate independently or contend at ‘junctions’, which are precisely interference fringes in a continuum approximation to spacetime. A state vector $\psi$ represents an interior vector, whose behaviour is revealed only through certain exterior ‘promises’ known as observables. The average evolution of the state vector in exterior time is described by the Schrödinger (or Hamilton-Liouville) equation, in which the Hamiltonian operator is the generator of exterior time. In Dirac notation, one can write the projection of the state vector $|\psi\rangle$ onto exterior quantities like position $\langle x|$ as the inner product. The inner product $\langle x|\psi\rangle$ is interpreted as $\psi$ revealed through the lens of variables $x$.
In agent spacetime, agents $A_i$ replace the $x$ coordinate:
$$
\langle x|\psi\rangle \leftrightarrow \langle\langle A_i|\psi\rangle\rangle \tag{133}
$$
and the bra-kets of quantum Dirac notation map directly to the state vectors used here: $\langle \ldots \rangle \to \langle\langle \ldots \rangle\rangle$.
The causal separation of motion into channel pre-reservation and guiderail motion is implicit in the expansion of the state vector’s relationship to observable ‘promises’. If we take a representation of the wavefunction, as a field spanning all the agent locations in a spacetime region, subject to the constraints of exterior time development, and we introduce the identity operator, as a complete set of states, in the usual algebraic way:
$$
I = \sum_{q} |q\rangle\rangle \langle\langle q| \tag{134}
$$
then, projecting locally over the virtual promise states:
$$
\langle\langle A_i|\psi\rangle\rangle = \sum_{q} \underbrace{\langle\langle A_i|q\rangle\rangle}_{\text{fast: guided motion}} \times \underbrace{\langle\langle q|\psi\rangle\rangle}_{\text{slow: mapping}}, \tag{135}
$$
or, projecting non-locally over agent locations:
$$
\langle\langle q|\psi\rangle\rangle = \sum_{i} \underbrace{\langle\langle q|A_i\rangle\rangle}_{\text{fast: position}} \times \underbrace{\langle\langle A_i|\psi\rangle\rangle}_{\text{slow: wavefn}}. \tag{136}
$$
In quantum mechanics, we can easily miss this connection by assuming the property qis identical to a
real material particle at x, rather than being an independent set of interior states.
This structure is only a matter of dynamical representation in terms of finite sets of states—an extension of a non-continuum phase space approach. So there is no reason why we could not apply it to other
systems. It is not unique to quantum theory. The interference of paths is usually considered to be the
point of departure for quantum mechanics, but this is simply a property of interference between parallel
processes that are somehow related and therefore either reinforce or cancel out outcomes in regions that
are (in)compatible with resources.
16 In quantum mechanics, discreteness is overlaid through processes on top of a spacetime which is assumed continuous, but the continuity of spacetime is an irrelevance used only as a convenient representation that eventually leads to awkward infinities.
17 A similar idea was espoused by de Broglie and Bohm in their attempts to make sense of quantum mechanics [26, 27], and has since been repeated by Bell and others.
Guiderails can’t tell us where a process is and where it isn’t. They act on a longer timescale, weighting the successor relation in a causal transition function. Rather, they map out the distribution of resources for which promises contend, i.e. interference patterns, and thus guide (in advance of actual change) the local accounting for virtual processes to be accepted by a region of spacetime.
6.8 Trajectory mapping and path availability
The clues from section 4 tell us that motion along a predictable path structure and random transitions are
two separate kinds of promise. The division between channel reservation and actual transition processing
is analogous to the division of phase space between position and momentum. Momentum, as the formal
canonical generator of translations, is the analogue of a channel reservation. Channel capacity reservation
is a key problem in technological models of virtual motion [4–7].
The problem of sustained motion in a straight line thus divides into two parts (a separation of scales):
path allocation or the establishment of a channel that encodes continuity of direction and velocity, and
the subsequent dynamics of the agent transitions, relative to the allocated guiderail as a memory process.
Out of Band Allocation: pre-allocation of route solution. A process resolves the solution of
the distribution of resources relative to boundary conditions and finite agent capacities. Gradient
fields drive directed motion in the direction of boundary information18. The boundary condition
or distinction between source and receiver has to be encoded into this guiderail to account for
the scaled average direction, and this must be realized on a small scale by composition of many
information channels.
Example 32 (Information superhighway) In biology, cells form channels (arteries and organs)
to channel processes in advance as part of morphology. Morphology is the prior guiderail process
that enables the organism to create its map for functional circuitry. The electromagnetic field
encodes direction by field polarization represented as abstract potential. Agents move in this
potential, which thus transmits information non-locally to them.
In Band Allocation: In some cases, a promised signal qcan be passed along a trajectory, and find
its way in ‘realtime’, or ‘in band’—by probabilistic scattering off the effective potential of resource
availability. If there is insufficient capacity or forbidden transitions along the possible directions
for agent output, alternatives could come into play. This is normal in Brownian motion. The
unusual case is for straight line persistent motion, since that is hard to define without an imaginary
coordinate system.
Example 33 (Stigmergic trails) In biology, cells move by polarizing their shapes along gradients
that are negotiated on an on-going basis. Insects leave pheromone trails for themselves and others
to follow. This quasi-ballistic memory process leads to swarm dynamics. Lightning strikes follow
a path of least electrical resistance. The strong feedback between the process and the environment
means that these paths can’t be preallocated, except with lightning conductors on buildings.
The main difference between ‘in band’ and ‘out of band’ allocation is whether the conditions and pro-
cesses are stable and separable. If there is sufficiently weak coupling (allowing separability, linear super-
position, etc), the channel and route conditions can be effective invariants of the virtual motion (or slowly
varying, adiabatic) during the process translation and there is predictability. If the boundary conditions
are changing on the same timescale as the translation process, then the result may be non-linear, even
chaotic like coupled oscillators.
18 We should not confuse forces and potential gradients with a temporal gradient, though the former generally implies the
latter plus continuous acceleration. The former leads to an acceleration, the latter acts like a sensory map for counting location
along allowed paths—virtual train tracks for a process to follow irreversibly, and correlated over long distances.
If a promised job in motion Jdepends on a minimum resource which cannot be met, it may have
to be transferred to a new agent, else the motion cannot proceed. It has to either tunnel through to
another agent as a signal, or be dropped or suspended until it can proceed. This is illustrated by a
rather mundane example 34. The arrival of noisy contention with an agent’s processes, due to exterior
interactions of different types, could also be enough to dislodge a job and force a migration to another host
agent (stimulated emission as in example 11).
Example 34 (Train schedules) Consider the set of agents in semantic spacetime composed of train stations that are connected by guiderails. A train journey is a message $T$, which station $A$ promises to transfer to station $A'$ given that it has a train $T$:
$$
A \xrightarrow{\ T\,|\,T\ } A'. \tag{137}
$$
In order for the train to transfer to station $A'$, the station has to accept the train. If no such promise is made, the transfer cannot proceed. Moreover, the train is an exclusive promise, so it blocks the track. What happens depends on the larger promises in the spacetime around A and A′.
1. The train can’t leave A on its journey.
2. The train blocks the track between A and A′.
3. The train is removed from A and is ‘dropped’, i.e. it disappears without conservation of trains.
4. The train cannot stop at A′ but it can pass through to some other station A″.
5. The train is diverted at A to another train station along a different track, which does not block the rail between A and A′.
6. The train is reflected back in the direction along which it came, if there is a track aligned along that direction.
We see that there are many possibilities. The narrative about conservation is different in each case.
Consider how this story would change for a car instead of a train, or a cell inside an organism, a data
packet passing through a data processing pipeline, electricity passing through the distribution grid, or a
photon passing between atoms.
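The branching outcomes of example 34 can be collected into a single decision rule (our own sketch; the precedence among outcomes is an arbitrary illustrative choice, not part of the example):

```python
# Toy sketch (our own): possible fates of an exclusive message T at station A,
# depending on which promises the surrounding spacetime makes (example 34).

def dispatch(accepts, passes_through, divertable, reflectable, conserved=True):
    """Return the narrative outcome for a train waiting to leave A."""
    if accepts:
        return "transfer to A_next"
    if passes_through:
        return "pass through to A_next_next"   # cannot stop, continues onward
    if divertable:
        return "divert along another track"
    if reflectable:
        return "reflect back along incoming track"
    if not conserved:
        return "dropped (no conservation of trains)"
    return "blocked at A"                      # train waits, track stays occupied

print(dispatch(accepts=False, passes_through=False,
               divertable=True, reflectable=True))
```

Each branch corresponds to a different conservation narrative, as the example notes; swapping the precedence changes the narrative, not the set of possibilities.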
It’s now clear why virtual motion by local operators is qualitatively different from classical Newtonian motion, and rather similar to quantum state motion. It’s a resource-constrained local picture, whose paths are guided by the distribution of resources (as in the Schrödinger equation), and this leads to inside-out behaviour. Once the location of a specific inhomogeneous resource limit becomes irrelevant, due to the size of an ensemble with interior coherence, one can create an effective ‘in band’ description that looks like Newton’s ballistic model.
6.9 Process vertices or junctions
To round off the overview of guided motion, we can mention briefly the effect of interaction vertices,
already mentioned above in connection with continuum wave interference fringes. This is another example of conditional promise semantics with dependencies. Just as scattering due to contention can fan
out trajectories along multiple paths, so interactions that depend on multiple channels integrate causal
pre-requisites into single outcomes. These two processes (figure 15) are the basis for all causal circuitry
from Feynman diagrams to electronics and biology.
Within agents, interior processes are unobservable. However, a branching process could (re)route a
virtual process along several exterior paths with availability, rather than forwarding to the emitter pointing
in the incoming direction. A single incoming promise could lead to the emission of several outputs from
a single input (see figure 15). The finiteness of agent resources would reduce the effective rate of launch,
Figure 15: A process trajectory can converge from several sub-processes (a), as in constructive interference of waves, or a single process can scatter into several (b), if interior processes permit. Note that the diagram shows only the (+) promises for simplicity—this shouldn’t suggest a ballistic transfer.
leading to a splitting of the effective momentum. This is the basis of modern data circuitry [61–63] in
cloud data processing. For brevity, we needn’t speculate about how the effective conservation rules work
here.
The summary is that finite interior resources may lead to fragmentation and reassembly of processes over exterior channels. Recombination is also possible, and this accounts for
semantic interference over whatever alphabet of states the process is expressed in (the generalization of
wave interference fringes). This might represent a job execution on a computer, or a biochemical process
in a cell, a factory assembly, etc, or a quantum interaction. What matters is not the scale but the nature
of the interior-exterior split. Figure 15 shows process junctions where paths converge (a)
$$
A_i \xrightarrow{\ q\,|\,J_1,J_2,J_3,\ldots\ } A_j, \tag{138}
$$
or diverge from a single interaction agent (b),
$$
A_i \xrightarrow{\ J_1\,|\,q\ } A_1, \tag{139}
$$
$$
A_i \xrightarrow{\ J_4\,|\,q\ } A_2, \tag{140}
$$
$$
A_i \xrightarrow{\ J_5\,|\,q\ } A_3. \tag{141}
$$
Thus either a tuple of promised values is accepted by an agent and is transmuted into a single promise, superposing parallel causal paths, or a single value is received and is split into several independent trajectories. There are various possible semantics for such interactions, including waiting for all dependent values to arrive [63].
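The junction semantics of equations (138)-(141) can be sketched as a pair of toy functions (our own construction; the labels follow figure 15):

```python
# Sketch (our own): junction semantics. A converging vertex waits for all
# conditional inputs J1..J3 before emitting a single q (constructive
# composition); a diverging vertex splits one q into several trajectories.

def converge(inputs, required={"J1", "J2", "J3"}):
    """Emit q only once every prerequisite promise has arrived; else wait."""
    return "q" if required <= set(inputs) else None

def diverge(q):
    """Split a single received value into independent trajectories."""
    return [f"{q}->A1", f"{q}->A2", f"{q}->A3"]

print(converge(["J1", "J2"]))          # None: still waiting for J3
print(converge(["J1", "J2", "J3"]))    # 'q'
print(diverge("q"))                    # ['q->A1', 'q->A2', 'q->A3']
```

The waiting-for-all semantics in `converge` is one of the possible interaction semantics mentioned above [63].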
Example 35 (Internet route splitting) When a large packet of Internet data meets a flow constriction, such as a smaller 'MTU' promise along a network route, it will be broken up and the fragments may travel along different paths. This is analogous to a decay scattering process in particle physics, where the combined momentum cannot be sustained by the underlying spacetime channel. The resulting fragments will only be absorbed by a location that has access to all the components, so multiple paths must interfere constructively for Internet traffic. This is the opposite of resource contention: it is determined by the receiver, i.e. by the role of acceptor states, predicted by the (-) promises in Promise Theory.
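The receiver-determined reassembly in this example can be sketched in a few lines. This is a toy illustration, not any real IP stack; the function names and MTU value are hypothetical:

```python
# Toy sketch of MTU-driven fragmentation and receiver-side reassembly.
# Names and values are illustrative assumptions, not real networking code.

def fragment(payload, mtu):
    """Split a payload into MTU-sized fragments tagged with their offsets."""
    return [(i, payload[i:i + mtu]) for i in range(0, len(payload), mtu)]

def reassemble(fragments, total_len):
    """Acceptor-side (-) role: absorb only when every component is present."""
    got = dict(fragments)
    data = "".join(got[i] for i in sorted(got))
    return data if len(data) == total_len else None

frags = fragment("virtual-motion", mtu=4)
# with one fragment missing, the message is not absorbed
assert reassemble(frags[:-1], len("virtual-motion")) is None
# with all fragments present, the paths 'interfere constructively'
assert reassemble(frags, len("virtual-motion")) == "virtual-motion"
```

The acceptor, not the sender, decides when absorption happens, which is the point of the example.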
Example 36 (Capacity allocation in virtual motion) Carrier capacity in phone networks for mobile
users is an active area of research [4–7,64,65]. So-called quality of service guarantees are the analogue
of the desire to conserve momentum in mechanics. However, the scale ratio between agent resources and
promise continuity is far more precarious in a data network than in mechanical bodies formed from $10^{30}$ smaller agents.
EXTERIOR MEMORY/STATE              INTERIOR MEMORY/STATE
-----------------------------------------------------------------------
Force/mass                         Position, momentum, (kinetic energy)
Boundary conditions                Phase space
Input/output                       Promises q, q
Exterior time τ                    t
Channel capacity                   Process capacity C_i
Guiderail channel capacity         Neighbour channel capacity C(i, j)
Entropy                            Fragmentation
Guardrail                          Interior process transitions
Potential (energy) function        Algorithm/transition matrix
Direction                          Agent node degree k_in, k_out
Gradient field                     Dependency chain
Causal successor relation          Dependency chain
                                   Distribution of affinity
Exterior promise graph             Interior promise graph
Table 1: Kinematic and dynamical qualities and quantities in rough correspondence for different models and
scales, separated by interior and exterior status.
7 Summary
Virtual motion is a rich area of study. There are many topics remaining to cover in a sequel. Virtual
motion offers an inside out and upside down view of traditional motion, in which resources and changes
are intrinsic or interior to the locations along a path which forms the exterior. This contrasts with information being transmitted ballistically to form a mean free collision path through an empty coordinate
theatre. Instead of extrapolating downwards from Galilean-Newtonian descriptions, we extrapolate up
from primitive information exchanges. This inevitably leads to a state-based description, with much in common with quantum mechanics. The myth that quantum weirdness is intrinsic to quantum mechanics
is easily exploded, as we see how the strictly local agent-centric process view exhibits basically the same
behaviours. Table 1 shows how different descriptions match in their model semantics.
To describe virtual motion at scale, we need to confront two very different modes of transfer: transitions (hops) and translations (sliding). The latter have to be explained in terms of the former, which has specific consequences. Amongst other things, there is a natural separation into a large scale (slow) process that allocates a guiderail or channel path for transitions from end to end, and a local (fast) process that propagates signals along the guide. Transitions can be composed to form random walks, or stochastic Brownian motion, without any new assumptions. Translations, on the other hand, add significant new assumptions: continuity of direction, rate, and perhaps 'mass'. At scale, composite agent motion must lead to coherent hydrodynamic transport phenomena.
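The claim that composed transitions give diffusive rather than ballistic behaviour can be illustrated with a minimal sketch; the symmetric ±1 hop rule below is an assumption for illustration:

```python
import random

def hop(pos):
    # one virtual transition: an assumed symmetric +1/-1 step
    return pos + random.choice((-1, 1))

def walk(steps):
    # compose hops into a random walk, with no further assumptions
    pos = 0
    for _ in range(steps):
        pos = hop(pos)
    return pos

random.seed(1)
trials, steps = 2000, 100
msd = sum(walk(steps)**2 for _ in range(trials)) / trials
# mean square displacement grows like the number of steps (diffusive),
# not like steps**2 (ballistic)
assert 85 < msd < 115
```

A translation, by contrast, would require correlating successive hops, which is exactly the extra continuity assumption named above.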
At present, the practical value of this work may be only academic curiosity. However, the rapid growth in scale of information systems with complex notions of location and transmission, in wired and wireless networks, as well as developments in biological processes, suggests that a fuller understanding of processes and their relationships is forthcoming. Other curiosities, like what the model could have to say about quantum gravity, also make for fascinating speculation or even toy models, analogous to spin networks [66, 67]. A few appendices have been added to declare points of contact bridging these and other points.
The question of relative motion and intrinsic spacetime observability is conspicuously absent from this paper, but as we see, virtual motion is a non-trivial problem that touches on many existing descriptions that live in different branches of physics, and are generally considered unique and fundamental. There is much more to be explained, including the role of mass, which is a largely inert parameter for kinematic behaviour. During accelerative changes it remains to explain the meaning of an effective mass parameter. We leave this and other topics for the sequel.
Appendices
In the interest of making cross-disciplinary connections for a wider readership, the following appendices explain the correspondences.
A Continuum motion and conservation laws
We can recall the differential generators of motion from the action principle. For a simple Newtonian representation the action is
\[ S = \int dt \left[ \frac{1}{2} m (\partial_t q)^2 - \phi(q) \right] \qquad (142) \]
where q(t) is the trajectory, v = dq/dt is the velocity, a = dv/dt is the acceleration, and the mass can now be velocity dependent. Here we are not distinguishing interior and exterior time.
\[ \delta m = \frac{\partial m}{\partial v}\frac{\partial v}{\partial t}\frac{\partial t}{\partial q}\, \delta q \qquad (143) \]
\[ \phantom{\delta m} = \frac{1}{v}\frac{\partial m}{\partial v}\, a\, \delta q \qquad (144) \]
so that the variation of the action becomes
\[ \delta S = \int dt \left[ \frac{1}{2}\delta m\, v^2 - \partial_t(mv)\,\delta q - \frac{d\phi(q)}{dq}\,\delta q \right] \qquad (145) \]
\[ \phantom{\delta S} = -\int dt \left[ \left( m + \frac{v}{2}\frac{\partial m}{\partial v} \right) a + \frac{d\phi(q)}{dq} \right] \delta q + \left[ \frac{1}{2} m v\, \delta q \right] \qquad (146) \]
where the parenthesis represents an effective comoving mass, having assumed that the mass is not explicitly time dependent. The coefficients of the variations must vanish both over the integral and on the boundary terms, giving the familiar equation of motion,
\[ F = -\frac{d\phi(q)}{dq} = \left( m + \frac{v}{2}\frac{\partial m}{\partial v} \right) a, \qquad (147) \]
with the exception of a correction for velocity dependent mass. The surface term represents the generator of infinitesimal translations (or transitions) δq ≠ 0, and tells us that the canonical momentum p ≡ mv must remain constant over each infinitesimal translation, effectively showing Noether's tautology for the equivalence between spatial homogeneity and momentum conservation. If we add an impulse to the potential at t_i, then
\[ \phi(q) \rightarrow \phi(q) + \Delta\phi\, \delta(t - t_i), \qquad (148) \]
and the surface term becomes
\[ \left( \frac{1}{2} m v + \Delta\phi \right) \delta q(t_0) = 0, \qquad (149) \]
which accommodates an impulsive force at a single moment. Converting to a time translation, δq → v δt, gives the conservation of energy in a similar way. All this is familiar to a physicist. For an agent model,
however, we have no convenient calculus of variations, and the conservation across an interface is hidden
in the processes that link each transition: interior processes that vanish in the continuum limit at a point,
and exterior processes that survive as a smooth trajectory. For an agent model, these need to be dealt with separately. In the discrete case, conservation across each transition means:
\[ \delta p = m_{i+1} v_{i+1} - m_i v_i = \Delta\phi, \qquad (150) \]
but it's unclear whether this is the relevant criterion for a virtual transition. The conservation of momentum, in canonical dynamics, follows, by Noether's theorem, from the homogeneity of space. Here this means
\[ \Delta\!\left( \frac{m(v)\,|\Delta q|}{\Delta t} \right) = 0 \qquad (151) \]
across some interface, where Δq is constant, thus:
\[ \frac{m_1}{\Delta t_1} = \frac{m_2}{\Delta t_2}. \qquad (152) \]
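The effective comoving mass factor in (146)-(147) can be checked symbolically. A minimal sketch using sympy, treating v and a as independent symbols and applying the chain rule dm/dt = (∂m/∂v) a by hand:

```python
import sympy as sp

v, a = sp.symbols('v a')
m = sp.Function('m')

# d/dt (m(v) v), expanded by the chain rule with dv/dt = a
dt_mv = sp.diff(m(v), v) * a * v + m(v) * a

# coefficient of delta-q in the integrand of the variation:
# (1/2) v m'(v) a  -  d/dt(m v)
coeff = sp.Rational(1, 2) * v * sp.diff(m(v), v) * a - dt_mv

# expected effective comoving mass form: -(m + (v/2) m'(v)) a
expected = -(m(v) + sp.Rational(1, 2) * v * sp.diff(m(v), v)) * a

assert sp.simplify(coeff - expected) == 0
```

The half in the correction term comes from the δm v²/2 contribution, which carries no time derivative and so produces no boundary term.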
B Density matrix representation for sender receiver interactions
Along each hop of a path, we can consider the state evolution from the perspective of either sender or
receiver, using a density matrix approach [28]. From the perspective of any agent, at any scale, the picture
we have of interaction is one where a local process observes another remote process, and we can view
the interaction as a composite system. The density matrix for a local agent lives on the same variable
space as the evolving interior processes. In a privileged observer view, the space and time variables are
exterior variables that are common to the composite system. The density operator ρ on these states is an operator on A_i, which has interior states I_i. The exterior states belong to another co-entangled system A_j. It's convenient to relabel these as sender and receiver along an oriented path, S_i and R_j (figure 16).
Figure 16: Motion along a channel from coarse-grained superagent to superagent provides the most general scaled
picture of transitions. This view leads naturally to a density matrix formulation due to the separation of interior
and exterior process variables. The scale at which we describe this process now matters. If the circles represent a
scale S, then the arrows are shown at scale S − 1.
To foster associations, we introduce a Dirac notation for the states of promise agents in semantic
spacetime. The double bra-ket lines should remind us that the nature of these states is not necessarily to be associated with Hilbert space; indeed, they need not be defined in detail here, except to say that they are assumed normalizable. Depending on the nature of the agents, a complete set of states spanning the agent's 'possibility space' may take various forms. Let
\[ |A_i\rangle\rangle \qquad (153) \]
be the interior states of agent A_i, which are unobservable to other agents.
Consider a capacity allocation process ψ(A_i, A_j). The joint distribution of available resources can be written:
\[ \psi(S_i, R_j) = \sum_i c_i(R_j)\, I_i(S_i), \qquad (154) \]
for some set of interior states I_i and exterior states E_j:
\[ I_i(S_i) = \langle\langle S_i | I_i \rangle\rangle = \langle\langle S_i | i \rangle\rangle \qquad (155) \]
\[ E_j(R_j) = \langle\langle R_j | E_j \rangle\rangle = \langle\langle R_j | j \rangle\rangle \qquad (156) \]
The most general capacity allocation distribution would be a product state:
\[ \psi_{SR} = I \otimes E, \qquad (157) \]
so we could write the outcome of the channel reservation process ψ as a complete set:
\[ |\psi\rangle\rangle = \sum_{i,j} c_{ij}\, |I_i\rangle\rangle |E_j\rangle\rangle. \qquad (158) \]
In Dirac notation, (154) becomes
\[ \psi(A_i, A_j) = \langle\langle A_j | \langle\langle A_i | \psi \rangle\rangle \qquad (159) \]
\[ \phantom{\psi(A_i, A_j)} = \sum_{i,j} c_{ij}\, \langle\langle A_i | I_i \rangle\rangle \langle\langle A_j | E_j \rangle\rangle, \qquad (160) \]
and comparing to (154), we have the coefficients expressed as projected linear combinations of exterior process availability:
\[ c_i(A_j) = \sum_j c_{ij}\, \langle\langle A_j | E_j \rangle\rangle. \qquad (161) \]
For example, as a two-state system, we can write the interior states for an observable q as a vector:
\[ \begin{pmatrix} q \\ \neg q \end{pmatrix} = q \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \neg q \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad (162) \]
in the manner familiar from quantum theory. We may note that a state can be mixed locally between these two conditions (as a result of being in an indeterminate representation; this could only plausibly happen if A_i were a superagent with interior structure), or mixed between S and R as a result of the subtime transition steps [11]. So if both agents are in a pure state (only one of them holds the counter q), then the combined co-entangled superagent is also in a pure state, and there needs to be no correlation between the agents, as the agents are not engaged in a transition. If superagents S and R are both in a locally mixed state, then their co-entangled composition could also be in an interior mixed state, by virtue of both sides' condition being correlated, or they could be independently mixed. If only one of the agents is in a pure state, then the combined state must be mixed even if they are not correlated. These possibilities only arise for scaled composite agents (agents that distribute their state amongst sub-agents), and the non-coarse grained entanglement mechanism is more complicated than that described in [11]. We are not used to thinking in these terms for classical systems, but that's more a failure of imagination than an impossibility. The failure lies in assuming that no state can be part of a process that has interior steps (subtime transitions) on the interior of an agent. As long as we are forced to confront those details, for strict locality, the picture becomes the natural one.
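These pure/mixed combinations can be made concrete with a small numerical sketch, using a hypothetical two-state counter q and the rank of the density matrix as a purity test:

```python
import numpy as np

q  = np.array([1.0, 0.0])   # agent holds the counter q
nq = np.array([0.0, 1.0])   # agent does not hold q

# both sides locally pure: the composite is pure (rank-1 density matrix)
pure_SR = np.kron(q, nq)
rho_pure = np.outer(pure_SR, pure_SR)
assert np.linalg.matrix_rank(rho_pure) == 1

# locally mixed sender (50/50 over holding q), pure receiver:
# the composite is mixed even though the two sides are uncorrelated
rho_S = 0.5 * np.outer(q, q) + 0.5 * np.outer(nq, nq)
rho_R = np.outer(nq, nq)
rho_mixed = np.kron(rho_S, rho_R)
assert np.linalg.matrix_rank(rho_mixed) == 2
```

Nothing here is specifically quantum; the construction only assumes that an observer cannot resolve the agent's interior steps.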
So, if we consider this simple two-state promised agent transition process π⁽⁻⁾π⁽⁺⁾ to be an operation on these states, without needing to explain its representation, then we can express this transition:
\[ \pi^{(-)}_{\delta q} \pi^{(+)}_{\delta q} = \sum_{i,j} |I_i\rangle\rangle |E_j\rangle\rangle \cdot \langle\langle E_j | \langle\langle I_i |, \qquad (163) \]
so that the expectation value, which corresponds to the availability of a transition with operator
\[ T_q \equiv \pi^{(-)}_{\delta q} \pi^{(+)}_{\delta q} \qquad (164) \]
is
\[ \langle\langle T_q \rangle\rangle = \sum_{i,j,i',j'} c^{(-)}_{ij} c^{(+)}_{i'j'}\, \langle\langle E_j | \langle\langle I_i |\; \pi^{(-)}_{\delta q} \pi^{(+)}_{\delta q} \;| I_{i'} \rangle\rangle | E_{j'} \rangle\rangle \qquad (165) \]
\[ \phantom{\langle\langle T_q \rangle\rangle} = \sum_{i,i'} \rho_{ii'}\, \langle\langle I_i | T_q | I_{i'} \rangle\rangle \qquad (166) \]
\[ \phantom{\langle\langle T_q \rangle\rangle} = \mathrm{Tr}\,(\rho\, T_q) \qquad (167) \]
for density matrix ρ, which can be written as a convex combination of projections for the pure states, each with availabilities or weightings w_i:
\[ \rho = \sum_i w_i\, |i\rangle\rangle \langle\langle i |, \quad \text{where} \quad \sum_i w_i = 1, \qquad (168) \]
\[ w_i = \sum_j c^{(-)}_{ij} c^{(+)}_{ij}, \qquad (169) \]
and w_i is the relative availability for a channel capacity reservation.
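As a small numerical illustration of (167)-(168), with assumed weights w_i and σ₃ standing in for a transition operator T_q:

```python
import numpy as np

w = np.array([0.7, 0.3])                   # assumed channel availabilities, sum to 1
basis = np.eye(2)                          # |0>>, |1>> as column vectors
rho = sum(wi * np.outer(b, b) for wi, b in zip(w, basis))

T_q = np.array([[1.0, 0.0], [0.0, -1.0]])  # sigma_3 as a stand-in operator
expectation = np.trace(rho @ T_q)          # eq (167): <<T_q>> = Tr(rho T_q)

assert np.isclose(np.trace(rho), 1.0)      # eq (168): the weights are convex
assert np.isclose(expectation, 0.4)        # 0.7*(+1) + 0.3*(-1)
```

A pure state would put all the weight on a single projector, reducing the trace to a single matrix element.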
A pure state is \( \psi_i = \sum_i w_i |i\rangle\rangle \), where \( \sum_i w_i^2 = 1 \); the location of q can be rotated from one location to the next by a phase. A mixed state is \( \psi_i = \sum_i w_i |i\rangle\rangle \), where \( \sum_i w_i = 1 \); the location of q is neither in one location nor another.
The off-diagonal elements express the entanglements of neighbouring agents that establish channels for transmission of q. For a so-called pure state, only one weight affinity w_i is non-zero. While messages are queued in a buffer during transmission, they are in a mixed state, unobservable. Note that this linear structure depends on the causal independence of the agents, not on any particular scale. As long as we trace non-independence through interactions, this remains true. The Pauli matrices, well known as the fundamental generators for rotations, are useful in describing two-state flip-flop transitions:
\[ \sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (170) \]
If we associate π⁽⁺⁾_q = σ₃ pointing from A_i to A_{i-1} and π⁽⁻⁾_q = σ₂ pointing from A_i to A_{i+1}, on the two-state basis in (162), then we can write a current density at A_i, which is completely analogous to that for the Schrödinger wavefunction process:
\[ J_i = \frac{i}{2} \left\langle\!\left\langle \neg q_i \left[ \pi^{(-)}_q, \pi^{(+)}_q \right] q_i \right\rangle\!\right\rangle. \qquad (171) \]
This includes the exclusion of states, so that an agent can only promise one +q at any place and time.
On the other hand, we can easily form representations with a ring of n states as in [68]:
\[ a_+ |n\rangle\rangle = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} n \\ 1 \end{pmatrix} = |n+1\rangle\rangle \qquad (172) \]
\[ a_- |n\rangle\rangle = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} n \\ 1 \end{pmatrix} = |n-1\rangle\rangle \qquad (173) \]
with convergent and idempotent absorbing end states generated by
\[ a_N |x\rangle\rangle = \begin{pmatrix} 0 & N \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ 1 \end{pmatrix} = |N\rangle\rangle. \qquad (175) \]
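The matrix representation in (172)-(175) is easy to verify numerically, encoding |n⟩⟩ as the column (n, 1):

```python
import numpy as np

def ket(n):
    # the ring states encoded as |n>> = (n, 1)^T
    return np.array([n, 1])

a_plus  = np.array([[1,  1], [0, 1]])
a_minus = np.array([[1, -1], [0, 1]])

assert (a_plus  @ ket(3) == ket(4)).all()   # a+ |n>> = |n+1>>
assert (a_minus @ ket(3) == ket(2)).all()   # a- |n>> = |n-1>>

# absorbing end state: a_N sends any |x>> to |N>> and is idempotent
N = 5
a_N = np.array([[0, N], [0, 1]])
assert (a_N @ ket(2) == ket(N)).all()
assert (a_N @ (a_N @ ket(2)) == ket(N)).all()
```

The second component of the column is a constant carrier that lets an affine shift be written as a linear map.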
The current operation can be used to define an instantaneous bulk velocity at A_i by looking at a channel guide rail of constant cross section σ:
\[ v = J_i \times \sigma, \qquad (176) \]
where σ is some constant scale. This is analogous to the definition of a classical velocity in quantum mechanics (which is a virtual construction of the state space) by
\[ m v = m\, \frac{J_{QM}}{\psi^\dagger \psi} = \hbar\, \frac{\mathrm{Im}\,(\psi^\dagger \nabla \psi)}{\psi^\dagger \psi}. \qquad (177) \]
C Wigner function and interior/exterior time
The Wigner function is used in time series analysis and in coarse-grained path trajectory descriptions, where the distinction between agent (coordinate) location and average grain location are at odds due to the finite size of the granular decomposition. This distinction is crucial in the classical Hamiltonian limit, which is an average coarse-grained picture. It's an example of the use of a Weyl transformation from two-point coordinates x_1, x_2 to transition-oriented coordinates about a 'centre of mass' for grains:
\[ x \equiv \frac{1}{2}(x_1 + x_2), \qquad (178) \]
\[ \tilde{x} \equiv (x_1 - x_2), \qquad (179) \]
so that
\[ x_{1,2} = x \pm \frac{\tilde{x}}{2}. \qquad (180) \]
A Weyl transform in phase space is
\[ A(x, p) = \int (d\tilde{x})\, e^{-i p \tilde{x}/\hbar} \left\langle\!\left\langle x + \frac{\tilde{x}}{2} \right| \hat{A} \left| x - \frac{\tilde{x}}{2} \right\rangle\!\right\rangle \qquad (181) \]
\[ \phantom{A(x, p)} = \int (d\tilde{p})\, e^{i x \tilde{p}/\hbar} \left\langle\!\left\langle p + \frac{\tilde{p}}{2} \right| \hat{A} \left| p - \frac{\tilde{p}}{2} \right\rangle\!\right\rangle, \qquad (182) \]
for which a key property is
\[ \mathrm{Tr}(\hat{A}\hat{B}) = \int (dx)(dp)\, A(x, p)\, B(x, p). \qquad (183) \]
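As a concrete check of the transform, the Wigner function of a Gaussian state can be computed by direct numerical integration (ħ = 1 in assumed units; for this state the exact result is W(x, p) = e^{−x²−p²}/π):

```python
import numpy as np

hbar = 1.0  # assumed units

def psi(x):
    # Gaussian ground state, analytic so it can be evaluated at shifted points
    return np.pi**-0.25 * np.exp(-x**2 / 2)

def wigner(x, p, ylim=12.0, n=4001):
    # W(x,p) = (1/(2 pi hbar)) * Int dy e^{-i p y/hbar} psi*(x+y/2) psi(x-y/2)
    y = np.linspace(-ylim, ylim, n)
    f = np.exp(-1j * p * y / hbar) * np.conj(psi(x + y/2)) * psi(x - y/2)
    return (f.sum() * (y[1] - y[0])).real / (2 * np.pi * hbar)

assert abs(wigner(0.0, 0.0) - 1/np.pi) < 1e-6
assert abs(wigner(1.0, 1.0) - np.exp(-2)/np.pi) < 1e-6
```

The rectangle rule is accurate here because the integrand is smooth and decays rapidly inside the integration window.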
The Wigner function is the application of this to a density (matrix) operator (see appendix B). It is used in signal processing (time series analysis), which is directly applicable to virtual motion from the perspective of a series of transition functions. The time series fluctuation correlator
Cq(t2, t1) = hh((q(t