The Semantic Spacetime Hypothesis
A Guide to the Semantic Spacetime Project Work
Mark Burgess
Abstract—This note is a guide to ongoing work and literature
about the Semantic Spacetime Hypothesis: a model of cognition
rooted in Promise Theory and the physics of scale. This article
may be updated with new developments.
Semantic Spacetime is a model of space and time in terms of
agents and their interactions. It places dynamics and semantics
on an equal footing. The Spacetime Hypothesis proposes that
cognitive processes can be viewed as the natural scaling (semantic
and dynamic) of memory processes, from an agent-centric local
observer view of interactions. Observers record ‘events’ and distinguish basic spacetime changes; spacetime thus serves as the causal origin of all cognitive representation.
If the Spacetime Hypothesis prevails, it implies that relative
spacetime scales are crucial to bootstrapping cognition and that
the mechanics of cognition are directly analogous to sequencing representations in bioinformatic processes, subject to an interferometric process of selection. The hypothesis remains
plausible (has not been ruled out). Experiments with text mining,
i.e. natural language processing, illustrate how the method shares
much in common with bioinformatic analysis. The implications
of this are broad.
I. INTRODUCTION
Semantic Spacetime was proposed as an approach to the
modelling of space and time in human-technology interactions.
It overcomes some of the shortcomings of alternatives like
bigraphs, vector spaces, coordinates, and descriptive logics [1].
However, it was quickly apparent that it also has applicability
in physics, chemistry, biology, and in Artificial Intelligence
research, where the traditional spacetime focus on symmetries
is more a hindrance than a help. Semantic Spacetime is a graph
theoretic model of space and time, which treats qualitative and
quantitative aspects of phenomena on an equal footing. Initially,
it was about unifying descriptions of process in technology,
humanity, and materials [1]–[4], but has since revealed links to
information theory, quantum theory, and bioinformatics. Early
work on Semantic Spacetime was motivated by the desire for a
model to understand highly distributed computing, such as the
Internet of Things, where every element of spacetime can, in
principle, make and keep a variety of promises. Those promises
are realized in the form of services, attributes, and states [1].
The scaling of behaviours and boundaries was explored in later
works [2], [3].
Building on this framework, the Semantic Spacetime Hypo-
thesis expresses a simple proposition, which arose naturally
from the scaling of observation: namely that exterior spacetime
processes (over multiple scales) are the only likely informatic
origin of all aspects of cognitive processes and representations.
Here ‘cognition’ is understood to mean the evolution of
the agent’s interior state based on localized sampling of
exterior information. The model implies that cognition can be
understood as one end of a scale of phenomena, which ranges
from a single atom absorbing a photon, to a brain observing with
multiple senses. The understanding of high level cognition (e.g.
in humans), including all its symbology, is hypothesized to be
a natural extension of those agent-centric models accumulated
over multiple scales. The underpinnings for Semantic Spacetime
began a decade ago with work on knowledge representations,
using Promise Theory in collaboration with Alva Couch,
Tufts University [5]–[7], and work related to the autonomous
software agent system CFEngine [8]. The scope of issues
covered is large and draws on ideas from physics, information
theory, and computer science. Expositions exist on a variety
of levels in the reference articles, as well as in semi-popular
form in [4].
II. SEMANTIC SPACETIME
Semantic Spacetime is based on Promise Theory [9] (a
form of labelled Graph Theory), in which promises represent
properties, attributes, and the semantics of interactions between
agents. Normally space is considered to be an ‘empty’ arena,
and dynamical phenomena are attributed to a separate concept
of ‘matter’, which ‘occupies’ the space. This view leads to as
many problems of semantics as it simplifies. In the Semantic
Spacetime model, the roles of matter and space are simplified
to be different states of the same basic fabric. Locations in
Semantic Spacetime are represented by agents which, analogous
to atoms, can be in a fluid or solid phase, depending on their
bindings (represented as generalized ‘promises’ [9]). A promise binding is complete when a particular type of promise is offered ($+b_1$) by one agent and accepted ($-b_2$) by another, with non-zero overlap $b_1 \cap b_2$.
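To make the binding rule concrete, here is a minimal sketch in Python (an illustration under assumed representations, not the project’s actual API), treating promise bodies as sets of properties:

```python
# Minimal sketch of a promise binding check (hypothetical names).
# A promise body is modelled as a set of properties.

def binding(offered: set, accepted: set) -> set:
    """Return the effective binding b1 ∩ b2 between an offer (+b1)
    and an acceptance (-b2). An empty set means no binding forms."""
    return offered & accepted

# One agent offers a service description (+b1); another accepts part of it (-b2).
b1 = {"http", "port:80", "port:443"}
b2 = {"http", "port:443"}

effective = binding(b1, b2)
print(effective or "no binding")   # {'http', 'port:443'}: the binding is complete
```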
The combinatorics of promises lead to a rich chemistry
of interactions between agents at any scale. The scaling of
roles and agents is a non-trivial issue within that chemistry
[2], [4]. In particular, Semantic Spacetime concerns itself with
how one understands locality [1] and whether processes lie inside or outside of agent boundaries [2]. In this view,
the observability of phenomena also depends on what agents
promise, and agents view the world through a series of boundary ‘event horizons’ beyond which interior state is hidden from
view [10], [11].
Semantic Spacetime is especially relevant at the scale of
human-technology systems, where interactions have rich and
complex semantics. It’s also a model of bioinformatic processes,
where molecular fragments act as agents that express promises
in the form of potential bindings. The concept of reasoning,
normally associated with logic, becomes an implicit outcome
of memory processes within each observer above a certain
level of sophistication. Visualizing the many working parts of the model and its hypothesis is challenging; some explicit representations have therefore been coded in software to implement and test the reasoning [12], [13].
In classical mathematical thinking, space has no ‘function’
per se, but it has the semantics of symmetry and invariance
under group transformations. The emphasis is on coordinates
and their translational and rotational invariances, not on
phenomena that occur within space. In Semantic Spacetime,
there is no distinction between space and matter, and there is
no assumption of symmetry. The function of agents, which
form space, is to act as the repository for memory or
process states. Without states, change and process cannot
be expressed. Space is thus the ledger on which patterns of
state can be drawn. Many of the non-deterministic aspects of
behaviours can be understood as natural consequences of the
incomplete information available to observers [14]. As clusters
of interacting agents (spacetime regions) grow large, the speed
of communication channels between them becomes a limitation
on the scaling of those promises¹.

¹ The relationship to Einstein’s view of relativity is straightforward. Any bounded region of space defines (and acts like) a clock: the state changes countable (observable) by a bounded (local) observer, on any scale, count time. In practice, the finite speed of communication between agents is a limit on the ability to scale dynamics, so descriptions in which process ‘latency’ (signal delay) is significant tend to focus on the perception of this limitation.
III. LOCAL OBSERVER SEMANTICS
Agents experience the world by accepting and sampling
interactions, as per the Shannon-Nyquist sampling law [15],
[16]. Processes experience a ‘proper timeline’ by sampling the
linearized sequence of changes arriving at each agent. Timelike
changes (changes in time at the same location or agent) are
called longitudinal. Spacelike changes (variations experienced
by different agents, in parallel) are transverse within a single
snapshot of time. Due to finite communication rate, observations
are dominated by the observer’s longitudinal process (see figure
1), in which events may appear to be co-active (i.e. measured during the same sampling interval). The coactivation
model is particularly relevant in biology, e.g. immune systems,
where the presence of several parts is necessary to assemble
the key to a certain process.
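As a toy illustration of this coarse-graining (a sketch in Python; the frame representation is an assumption, not taken from the referenced software), a longitudinal stream can be chopped into sampling frames and co-active symbols counted:

```python
from collections import defaultdict
from itertools import combinations

def coactivations(stream, frame_size):
    """Coarse-grain a longitudinal stream into sampling frames and count
    which symbols appear co-actively (within the same frame)."""
    counts = defaultdict(int)
    for start in range(0, len(stream), frame_size):
        frame = sorted(set(stream[start:start + frame_size]))
        for a, b in combinations(frame, 2):
            counts[(a, b)] += 1   # perceived 'simultaneity' within a frame
    return dict(counts)

print(coactivations(list("abcabdacbd"), frame_size=3))
```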
Fig. 1: The longitudinal data stream is coarse-grained into sampling frames and may be fractionated into events, which are identified as invariant symbols by spectrographic or interferometric analysis. The perceived ‘simultaneity’ of symbols refers to their co-activation within a frame. (The figure shows successive sampling frames t1, t2, t3, with co-active symbols grouped by context.)
IV. SCALING
In Computer Science, scaling is a concept which usually
refers to the addition of computational processors to parallelize queueing services (Amdahl’s law). In physics, scaling concerns
the manner by which any quantitative measure may depend
on a combination of others, linked through master equations
(Equations of Motion) that represent a phenomenon by relating different variables. Scaling allows observers to predict when
two systems, of different size or composition, will be similar or
dissimilar in their behaviours. If two systems are dynamically
similar then one can be ‘renormalized’ to predict the other.
This is the basis for the testing of scale models, e.g. in wind-
tunnels. The Buckingham-Pi theorem states that such dynamical
similarity depends on the values of key dimensionless ratios, i.e.
combinations of the variables whose engineering dimensions
all cancel out. Here the term ‘engineering dimension’ is used in
the sense of the measurement units (mass, length, time, etc), not
the degrees of freedom associated with direction in Euclidean
space.
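As a worked instance of such a dimensionless ratio (a standard textbook example, not taken from the referenced papers), consider the Reynolds number for fluid flow:

$$\mathrm{Re} = \frac{\rho\, v\, L}{\mu}, \qquad [\mathrm{Re}] = \frac{(\mathrm{kg\,m^{-3}})(\mathrm{m\,s^{-1}})(\mathrm{m})}{\mathrm{kg\,m^{-1}\,s^{-1}}} = 1,$$

where ρ is density, v a characteristic speed, L a characteristic length, and μ the dynamic viscosity. Two flows with the same Re (say, a wind-tunnel scale model and a full-size aircraft) are dynamically similar even though every individual variable differs.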
Generalized scaling is different from the increasingly popu-
lar notion of scale-free phenomena, where probabilistic models
lead to power laws and other signature effects. Probabilities
are, of course, themselves dimensionless ratios based on a
normative scale of the same measurement process. Numerator
and denominator refer to the same kind of process (with the
same engineering dimension). Generalized scaling, on the other
hand, refers to combinations of different types of measurements.
Measurement scales (engineering dimensions) are the basis of
all descriptions of natural processes in physics and chemistry.
The Spacetime Hypothesis contends that we have to return
to a natural scale analysis (not a scale-free probabilistic one)
to understand phenomena and their semantics [1]–[4]. The
treatment of scales has a long history in physics [17], [18].
V. THE FOUR SEMANTIC RELATIONS OF SPACETIME
Physics purposely eliminates many explicit semantic distinc-
tions in order to reduce problems to a standardized quantitative
form. Quantitative measurements are harder when agents and
their interactions fall into too many categories. The abstractions
of space and time which emerge from that general manifesto
lead to the introduction of coordinate systems (comparing
phenomena to a standardized periodic process to allow counting
in cycles). This separation of measurement coordinates from
semantics of phenomena can have disadvantages.
In Computer Science, all actions take place at localized point locations, in a network or graph, described by network addresses. This is modelled in various ways, e.g. as Milner’s bigraphs [19]². In the absence of a Euclidean coordinate space,
there remain basic semantic relationships that can be attributed
to interactions, whether discrete or continuous, in order to
measure them. These are promises in Promise Theory language.
In Computer Science, rich environmental characteristics also
provide information with functional and computational value.
Semantic relationships may entail a wide variety of subtle
interpretations, or ‘subtypes’. In physics, there is a simplifying
separation of concerns in standard separable representations of
spacetime. The Spacetime Hypothesis proposes a compromise,
by postulating that all semantic distinctions ultimately map to
four basic spacetime types (see figure 2):
² Actually, this is also true in quantum theory, which explains the form of Feynman diagrams, which are graphs with lines and vertices. So the underlying form of physics is probably graphical too, and the Euclidean embedding is merely convenient coordinate scaffolding.

Fig. 2: Spacetime structure is constructed semantically from four kinds of relation. The privileged axis of this diagram lies along event chains, which express pattern fragments φn downwards and are aggregated within concepts upwards. The accretion of concepts from events may lead to significant overlap, which places some concepts closer to others by virtue of similarity of constituents. This is a form of detailed co-activation in the network of the lower layers. Concepts and themes arise from fragments accumulated through the sensory process. (The figure shows layers labelled FRAGMENT, EVENT, CONTEXT, THEME, and CONCEPT, connected by ‘followed by’, ‘part of’, and ‘close to’ relations along a proper-time axis.)
• FOLLOWS: Sampled events follow one another in process time $t$, according to some partial order relation ‘$>$’. When applied to narrative or other sequential processes, there is a partial order which can be retained from episodic events into a chain, leading to a strong promise binding.
• CONTAINS: Collections of agents can be considered parts of a larger whole. Thus a collection contains member agents, which allows scaling of identity. Membership or containment is also the basis for measuring similarity (distance). This relationship distinguishes what is interior and exterior to an agent, which is central to all relativity.
• EXPRESSES: Invariant labels express ‘identity information’, which distinguishes agents from one another. The identity or proper names of agents are thus ‘expressed’ as ‘scalar promises’, or self-properties. On a larger scale, the same is true of aggregations of agents, acting as superagents.
• SIMILAR TO (NEAR): Agents can be compared on the basis of their expressed promises for a degree of similarity. Closeness, in fragment space, therefore comes from the interferometry of fractional sets.
These types have been discussed earlier in [3], [10], [11] and
are based on an original proposal by A. Couch. Within this
graph representation, relations can be represented by links or
edges of a graph. Scalar quantities can be represented either
with or without edges [1].
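As an illustrative sketch (Python; the type names follow the four relations above, but the encoding is an assumption, not the schema used in [12], [13]), the relations can be realized as typed edges in a labelled graph:

```python
from enum import Enum, auto

class SSType(Enum):
    """The four irreducible spacetime relation types."""
    FOLLOWS = auto()    # causal / proper-time order of events
    CONTAINS = auto()   # membership, part-whole aggregation
    EXPRESSES = auto()  # scalar promises: identity and properties
    NEAR = auto()       # similarity by overlap of expressed promises

links = []  # a labelled graph as (agent, relation, agent) triples

links.append(("event:2",       SSType.FOLLOWS,   "event:1"))
links.append(("theme:weather", SSType.CONTAINS,  "concept:rain"))
links.append(("agent:A",       SSType.EXPRESSES, "property:red"))
links.append(("concept:rain",  SSType.NEAR,      "concept:storm"))

for a, rel, b in links:
    print(f"{a} --{rel.name}--> {b}")
```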
VI. COGNITION LOOP
Using an agent-centric model, which places semantics
and dynamics on equal footing, Semantic Spacetime captures
the essence of the (relativistic) measurement problem in a
natural way, to represent processes in a form familiar in
computer science and interaction systems like quantum theory.
The inherently local perspective allows an agent observer,
with interior memory, to bootstrap knowledge of its exterior
cumulatively throughout an extended cognitive process. That
process may be viewed on any scale, from atom to human
brain or supercomputer. Knowledge representations can be
constructed within the interior memory states of the agent,
based on the serialized sensory inputs. Memories may then
be encoded using the four spacetime relations. The Spacetime
Hypothesis posits that this must suffice to generate all the
processes necessary to explain cognition and reasoning [3],
[10].
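A minimal sketch of such a loop (Python, with hypothetical names; the real implementations are described in [12], [13]): serialized sensory input is coarse-grained into events, and memory is encoded using the spacetime relations:

```python
def observe(stream, frame_size=3):
    """Encode a serialized sensory stream as memory: each sampling frame
    becomes an event that CONTAINS its co-active symbols and FOLLOWS the
    previous event in the observer's proper time."""
    memory, prev = [], None
    for start in range(0, len(stream), frame_size):
        event = f"event:{start // frame_size}"
        for symbol in set(stream[start:start + frame_size]):
            memory.append((event, "CONTAINS", symbol))
        if prev is not None:
            memory.append((event, "FOLLOWS", prev))
        prev = event
    return memory

for triple in observe(list("abcabd")):
    print(triple)
```

Similarity (NEAR) links would then be added transversely, in post-processing, by comparing the fragment sets of different events.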
A cognitive agent doesn’t only recall memories from the
episodes it has observed in the past; it is able to construct
new linearized narratives of its own by mixing old and new
experiences together—from emitting a new photon to explaining
ideas. Both dynamically and semantically, old and new can
only be compared on the semantic basis of a fragment pattern
space. These new and invented narratives include processes that
we call ‘reasoning’, ‘explanation’, ‘description’, etc. To address
its memories, it needs a context-driven semantic lookup key:
numerical addressing, as used in contemporary computing, is
not the natural model for memory recall for an agent adapted to
cognition, as there is no obvious lookup index key for memories
that uses quantitative addressing. The natural numbers do not
arise naturally outside the order of basic samples. This was
discussed in [10], [13]. A context-based qualitative addressing
is possible, however, using semantic context as the associative
memory key [13]. The fractionated information in samples can
promise this role. The idea is analogous to the way DNA/RNA
fragments of genetic code act as the keys to unlock outcomes
based in surrounding bioinformatic processes. The process of
co-activation, or simultaneous incidence of parts, within a frame
of coarse sampling (figure 1), is what associates patterns with
context [12]. This is essentially how immune systems recognize
patterns, providing another bioinformatic analogue. An agent is
not limited only to experiences from novel sensory input. It can
treat its own patterns of recall, i.e. recycled memories, alongside
the novel sensory stream, on an equal footing. The mental model
of the agent is thus a mixture of reflexive introspection and
sampling of the environment.
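A minimal sketch of context-keyed recall (Python; the representation is an assumption for illustration): memories are indexed by context fragments rather than numerical addresses, and whatever overlaps the currently active context is recalled:

```python
from collections import defaultdict

class AssociativeMemory:
    """Memories addressed by semantic context, not by numerical index."""
    def __init__(self):
        self.index = defaultdict(set)   # context fragment -> memory ids
        self.episodes = {}

    def store(self, mid, episode, context):
        self.episodes[mid] = episode
        for fragment in context:        # fragments act as lookup keys
            self.index[fragment].add(mid)

    def recall(self, active_context):
        hits = set()
        for fragment in active_context: # co-activation selects memories
            hits |= self.index.get(fragment, set())
        return [self.episodes[m] for m in hits]

m = AssociativeMemory()
m.store(1, "it rained on the picnic", {"rain", "outdoors"})
m.store(2, "the server crashed at noon", {"failure", "server"})
print(m.recall({"rain", "summer"}))     # -> ['it rained on the picnic']
```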
VII. TESTING THE SEMANTIC SPACETIME HYPOTHESIS
One of the interesting results to emerge from the studies
of episodic experiences in this model, from natural language
sources [12], [13], is that episodic sensory experiences don’t
lead to a generalized understanding of phenomena without extra
help. The normal longitudinal processing of episodes can’t reference other contexts in a natural way, so some kind of explicit out-of-band process is needed to connect experiences transversely between contexts in post-processing³. Thus, while
sensory experience can be driven as a causal pipeline, cross
referencing needs an interior clock, like a batch process. The
transverse integration of memory is a process by which semantic
fragments join cumulatively into themes and higher level
concepts [13]. These lead, in turn, to higher level narratives in principle, though convincing demonstrations have not yet been shown at sufficient scale to confirm this part of the hypothesis.

³ It’s interesting to speculate whether this corresponds to the function of sleep, and whether dreams are the byproduct of such a process.
The Semantic Spacetime Hypothesis is limited in detail but
far-reaching in scope. It implies that cognitive phenomena are
based exclusively on distinctions arising from the measurement
of spacetime processes, calibrated by an observer’s relative
sampling of scales. This makes sense from an evolutionary
perspective: in the beginning of evolutionary process, there
were no concepts or distinctions except for the existence of
differences in properties and locations at different snapshots of
history. Sequences of events, grammars, semantics etc, could
only emerge in a self-consistent way from causal processes.
Measurement, in turn, is a fundamental process within a receiver
agent, from which statistical inferences can be made only if
there is local memorization of repeated phenomena. An agent’s
perception of change (time) can further only arise from a
regular and repeated sampling (on a self-calibrated timescale)
to measure exterior phenomena as its interior calibrations allow.
In [12], the process of sampling and scale detection was
demonstrated for data based on text analysis⁴. The virtue of text
analysis is that it contains clear semantic representations that
are easy for humans to perceive. Setting aside the linguistics of
narrative, one may consider a stream as a sequence of concepts
about things and their changes, described by the four semantics.
The approach was basically similar to molecular spectrography:
large samples are broken up into small parts and one looks for
the smallest elements from which all others can be constructed,
relative to the sampling. A chemistry of composition for the
parts can then be learned from a process of fractionation and
frequency measurement.
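In that spirit, a toy sketch (Python; the actual pipeline in [12] differs in detail): fractionate a text stream into n-gram fragments and measure their recurrence frequencies:

```python
from collections import Counter

def fractionate(text, nmax=3):
    """Break a stream into n-gram fragments (n = 1..nmax) and count their
    recurrence, as in a crude spectrographic/frequency analysis."""
    words = text.lower().split()
    spectrum = Counter()
    for n in range(1, nmax + 1):
        for i in range(len(words) - n + 1):
            spectrum[" ".join(words[i:i + n])] += 1
    return spectrum

sample = "the cat sat on the mat and the cat slept on the mat"
for fragment, freq in fractionate(sample).most_common(5):
    print(freq, fragment)
```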
VIII. WORK TO BE DONE
So far, testing of the Spacetime Hypothesis has been limited
to the bottom-up demonstration of concept extraction and
representation of narrative, based on small amounts of data
and within limited ranges of the dimensionless ratios that
govern the combinatorics of the promise graph. The ability to
represent sophisticated concepts and themes doubtless requires
a much more extensive exploration over a wider range of
scales. Observable concepts are not all that exists in the
world of language and semantics: higher generalizations of
concepts appear to come from the nuances that emerge in
human constructions: e.g. ‘all dogs are fluffy’. The concept of
‘all dogs’ (a category) is not obviously sensory or contextual
in nature. It has to be a byproduct of coarse-grained reasoning
over different contexts. It could be understood reflexively from
processes of introspection, analogous to the transverse ‘sleep’
processes. This remains to be shown from a more detailed
understanding of context.
So far, one has the sense of only scratching the surface of the problem. This might already be sufficient to make progress in technological applications, but still falls short of something akin to human cognitive sophistication. Our human propensity for keeping separate multiple nuanced distinctions amongst related concepts, while at the same time being able to bridge others into generalizations in different contexts, is impressive. There is currently no model to explain how this might work. Experiments thus far are incapable of demonstrating any such capability. However, they have worked with deliberately small data sets in order to understand the scaling of processes without resorting to ‘big data’. The studies show how little data are actually needed to establish the basic mechanisms for concepts and themes to be plucked from a sensory experience.

⁴ The method of learning scales from a stream and associating semantics to them was also used implicitly in the CFEngine software [20]. Attempting to extend this to more general data sources, associated with the technology of the Internet of Things, was a goal of the Cellibrium project [21], but was hampered by lack of data from external sources. A breakthrough was made in the two papers [12], [13] by using natural language text as the data. Sources of text are available everywhere, and English text is both discrete, with a simple combinatoric alphabet, and easy to parse and process.
IX. CONCLUSIONS
The degree of precision by which humans understand and
express concepts is impressive. The Spacetime Hypothesis is
simple, and bottom-up, as fundamental theories should be. It
draws on elementary ideas from a wide range of sources. The
scope of the hypothesis makes it challenging to test and validate,
though relatively easy to falsify. Studies made thus far indicate
its general plausibility, but have explored only the bottommost
scales in the hierarchy of representations. Even at this level, it
has shown promise in the realm of technology systems, where
concepts are more primitive and mechanical.
Does the Spacetime Hypothesis explain cognition? What
does it look like? Perhaps it offers one step along the way,
at the level of evolutionary bootstrapping. If so, the present
answer is basically ‘biochemistry’. The lack of precision doesn’t
diminish the importance of that primary insight. A link between
this work and Artificial Neural Networks could be explored; these dominate modern approaches to artificial cognition. The
graphical structure bears some passing similarities, which is
also intriguing; it would surely be interesting to compare the
memory graph structures developed from this work to the
effective architecture in working neural networks to see if it
sheds any light on why they are effective at sensory processing.
No doubt these problems will be addressed as time permits.
REFERENCES
[1] M. Burgess. Spacetimes with semantics (i). arXiv:1411.5563, 2014.
[2] M. Burgess. Spacetimes with semantics (ii). arXiv:1505.01716, 2015.
[3] M. Burgess. Spacetimes with semantics (iii). arXiv:1608.02193, 2016.
[4] M. Burgess. Smart Spacetime. χtAxis Press, 2019.
[5] M. Burgess. Knowledge management and promises. Lecture Notes in Computer Science, 5637:95–107, 2009.
[6] A. Couch and M. Burgess. Compass and direction in topic maps. (Oslo University College preprint), 2009.
[7] A. Couch and M. Burgess. Human-understandable inference of causal relationships. In Proc. 1st International Workshop on Knowledge Management for Future Services and Networks, Osaka, Japan, 2010.
[8] M. Burgess. A site configuration engine. Computing Systems (MIT Press: Cambridge, MA), 8:309, 1995.
[9] J.A. Bergstra and M. Burgess. Promise Theory: Principles and Applications (second edition). χtAxis Press, 2014, 2019.
[10] M. Burgess. A spacetime approach to generalized cognitive reasoning in multi-scale learning. arXiv:1702.04638, 2017.
[11] M. Burgess. From observability to significance in distributed systems. arXiv:1907.05636 [cs.MA], 2019.
[12] M. Burgess. Testing the quantitative spacetime hypothesis using artificial narrative comprehension (i): Bootstrapping meaning from episodic narrative viewed as a feature landscape. preprint, 2020.
[13] M. Burgess. Testing the quantitative spacetime hypothesis using artificial narrative comprehension (ii): Establishing the geometry of invariant concepts, themes, and namespaces from narrative. preprint, 2020.
[14] M. Burgess. In Search of Certainty: The Science of Our Information Infrastructure. χtAxis Press, 2013.
[15] C.E. Shannon and W. Weaver. The Mathematical Theory of Communication. University of Illinois Press, Urbana, 1949.
[16] T.M. Cover and J.A. Thomas. Elements of Information Theory. J. Wiley & Sons, New York, 1991.
[17] G.I. Barenblatt. Scaling, Self-similarity, and Intermediate Asymptotics. Cambridge University Press, 1996.
[18] G.I. Barenblatt. Scaling. Cambridge University Press, 2003.
[19] R. Milner. The Space and Motion of Communicating Agents. Cambridge University Press, 2009.
[20] M. Burgess. Probabilistic anomaly detection in distributed computer networks. Science of Computer Programming, 60(1):1–26, 2006.
[21] M. Burgess. Cellibrium project. https://github.com/markburgess/Cellibrium, 2015.