
A scaled replica of a quantum Bell experiment is constructed using autonomous (local) software agents communicating deterministically over a simple read-write channel. This demonstrates the phenomenon of entanglement in a localized or distributed concurrent computer system. By controlling time rates carefully, we replicate a 'godlike view' of a low-resolution probabilistic quantum system, and can thus compare it to the predictions of quantum mechanics, step by step. The results offer an explicit proof by demonstration that entanglement and superposition are not limited to quantum systems, as well as interesting insights into how quantum non-locality may work in physics. To make the point convincingly, the experiment is constructed in such a way as to replicate results equivalent to the textbook examples of a singlet Bell state violating the CHSH inequality.
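The CHSH setup the abstract refers to is the standard textbook one. As a point of reference (this is the quantum prediction the agent experiment aims to replicate, not the paper's own code), the singlet-state correlation E(a,b) = -cos(a-b) at the usual analyser angles yields the maximal violation 2√2 of the classical bound 2:

```python
import math

def E(a, b):
    """Singlet-state correlation for analyser angles a, b (radians)."""
    return -math.cos(a - b)

# Standard angle choices that maximise the CHSH violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # 2.828... = 2*sqrt(2); any classical (local hidden variable) model is bounded by 2
```

A deterministic agent simulation that reproduces S > 2, step for step, is what makes the demonstration explicit.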

In part I of these notes, it was shown how to give meaning to the concept of virtual motion, based on position and velocity, from the more fundamental perspective of autonomous agents and promises. In these follow-up notes, we examine how to scale mechanical assessments like energy, position, and momentum. These may be translated, with the addition of contextual semantics, into richer semantic processes at scale. The virtualization of process by Motion Of The Third Kind thus allows us to identify a causally predictive basis in terms of local promises, assessments, and impositions. As in physics, the coarser the scale, the less deterministic predictions can be, but the richer the semantics of the representations can be. This approach has immediate explanatory applications to quantum computing, socio-economic systems, and large-scale causal models that have previously lacked a formal method of prediction.

We describe a policy-based approach to the scaling of shared data services, using a hierarchy of calibrated data pipelines to automate the continuous integration of data flows. While there is no unique solution to the problem of time order, we show how to use a fair interleaving to reproduce reliable 'latest version' semantics in a controlled way, by trading locality for temporal resolution. We thus establish an invariant global ordering from a spanning tree over all shards, with controlled scalability. This forms a versioned coordinate system (or versioned namespace) with consistent semantics and self-protecting rate-limited versioning, analogous to the publish-subscribe addressing schemes of Content Delivery Networks (CDN) or Named Data Networking (NDN).
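The abstract does not give the interleaving algorithm; as an illustrative sketch only, one way to see how a fair interleaving trades locality for a deterministic 'latest version' is a round-robin merge of per-shard event streams under a single global version counter:

```python
from itertools import zip_longest

def fair_interleave(shards):
    """Merge per-shard event streams round-robin under one global version
    counter. Local order within each shard is preserved; the global order is
    an arbitrary-but-fair convention, so 'latest version' is well defined."""
    versioned, v = [], 0
    for round_ in zip_longest(*shards):     # one event per shard per round
        for event in round_:
            if event is not None:           # shorter shards run out early
                versioned.append((v, event))
                v += 1
    return versioned

log = fair_interleave([["a1", "a2"], ["b1"], ["c1", "c2", "c3"]])
print(log[-1])  # the 'latest version' under this interleaving: (5, 'c3')
```

In the paper's scheme the merge points form a spanning tree over shards rather than a single flat merge, but the trade-off is the same: global order is a convention, not an observation.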

A statistical measure of dimension is used to compute the effective average space dimension for the Internet and other graphs, based on typed edges (links) from an ensemble of starting points. The method is applied to CAIDA's ITDK data for the Internet. The effective dimension at different scales is calibrated to the conventional Euclidean dimension using low-dimensional hypercubes. Internet spacetime has a 'foamy' multiscale containment hierarchy, with interleaving semantic types. There is an emergent scale for approximate long-range order in the device node spectrum, but this is not evident at the AS level, where there is finite distance containment. Statistical dimension is thus a locally varying measure, which is scale-dependent, giving a visual analogy for the hidden scale-dependent dimensions of Kaluza-Klein theories. The characteristic exterior dimensions of the Internet lie between 1.66±0.00 and 6.12±0.00, and maximal interior dimensions rise to 7.7.
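The abstract does not spell out its dimension formula; a common way to define such a scale-dependent statistical dimension (used here only as a sketch) is the growth rate of ball volumes under graph distance, N(r) ~ r^d, calibrated on a low-dimensional lattice as the abstract describes for hypercubes:

```python
import math
from collections import deque
from itertools import product

def ball_sizes(adj, start, rmax):
    """N(r) = number of nodes within graph distance r of start, r = 0..rmax (BFS)."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        if dist[u] == rmax:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return [sum(1 for d in dist.values() if d <= r) for r in range(rmax + 1)]

def effective_dimension(adj, start, r1, r2):
    """Slope of log N(r) vs log r between two radii: d_eff in N(r) ~ r^d."""
    N = ball_sizes(adj, start, r2)
    return math.log(N[r2] / N[r1]) / math.log(r2 / r1)

# Calibration on a 2D grid lattice: d_eff approaches 2 as r grows.
n = 41
adj = {}
for x, y in product(range(n), repeat=2):
    adj[(x, y)] = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= x + dx < n and 0 <= y + dy < n]
d = effective_dimension(adj, (n // 2, n // 2), 5, 10)
print(d)  # ~1.86 at these radii, tending to 2 for larger r
```

Averaging this over an ensemble of starting points, and restricting the BFS to edges of a given semantic type, gives a locally varying, typed, scale-dependent measure of the kind the abstract describes.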

Series of articles on Medium about Semantic Spacetime and graph-based process modelling
https://mark-burgess-oslo-mb.medium.com/list/semantic-spacetime-and-data-analytics-28e9649c0ade

Virtual motion is a description of how observable properties move from location to location as a side effect of interior agent processes. Waves are one example of virtual motion, where a displacement function changes against the fixed positions of some medium, as information. Other examples can be found in cloud computing, mobile telecommunications, and biology. Virtual transmission is qualitatively different from particle motion, where one assumes the existence of material carriers that are distinct from an empty background space. A collection of agents, which passes observable markers from agent to agent, is like a transport logistics chain. Because of the reversal of hierarchy, or 'inside out' representation, virtual motion has a structure much like quantum interactions, as well as the movement of money, embedded sensor signals, tasks, and information by computational processes. We define the concepts of position, time, velocity, mass, and acceleration for simple instantaneous transitions, and show that finiteness of agent resources implies a maximum speed for virtual motion at each location. The evolution of artificial network communications and advances in bioinformatics, in recent decades, underlines a need to write down the dynamical and semantic relationships for virtual motion, thus exposing dynamically similar phenomena that span disparate scales and bodies of knowledge. This work fuses interaction semantics in Promise Theory with ordinary scaling. In physics, it is normal to extrapolate dynamical models causally downwards, by correspondence: the study of virtual motion offers an alternative bottom-up extrapolation.
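A toy illustration of the speed-limit claim (not the paper's formalism): a marker hops along a chain of agents, nothing material moves, and each agent needs a finite number of interior ticks before it can forward, which caps the marker's apparent speed.

```python
def simulate(n_agents, ticks, service_time):
    """Marker hopping along a chain of agents. service_time is the number of
    interior ticks an agent needs before it can hand the marker on; the
    marker's 'position' is just which agent currently holds it.
    Returns the marker position after each tick."""
    pos, busy, trace = 0, 0, []
    for _ in range(ticks):
        busy += 1
        if busy >= service_time and pos < n_agents - 1:
            pos, busy = pos + 1, 0   # hand the marker to the next agent
        trace.append(pos)
    return trace

# Finite interior resources imply a maximum speed: 1 hop per service_time ticks.
print(simulate(10, 6, 2))  # [0, 1, 1, 2, 2, 3]
```

Varying service_time per agent gives a locally varying maximum speed, which is the point the abstract makes about finiteness of agent resources.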

The problem of extracting important and meaningful parts of a sensory data stream, without prior training, is studied for symbolic sequences, by using textual narrative as a test case. This is part of a larger study concerning the extraction of concepts from spacetime processes, and their knowledge representations within hybrid symbolic-learning `Artificial Intelligence'. Most approaches to text analysis make extensive use of the evolved human sense of language and semantics. In this work, streams are parsed without knowledge of semantics, using only measurable patterns (size and time) within the changing stream of symbols, as an event `landscape'. This is a form of interferometry. Using lightweight procedures that can be run in just a few seconds on a single CPU, this work studies the validity of the Semantic Spacetime Hypothesis, for the extraction of concepts as process invariants. This `semantic preprocessor' may then act as a front-end for more sophisticated long-term graph-based learning techniques. The results suggest that what we consider important and interesting about sensory experience is not solely based on higher reasoning, but on simple spacetime process cues, and this may be how cognitive processing is bootstrapped in the beginning.
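The idea of parsing by measurable pattern alone can be illustrated with a deliberately naive sketch (not the paper's procedure): slice the symbol stream into overlapping fragments and rank them purely by recurrence, a dynamical signal that requires no knowledge of what the symbols mean.

```python
from collections import Counter

def fragment(stream, n=3):
    """Slice a symbol stream into overlapping n-grams and rank them by how
    often the same pattern recurs. No linguistic knowledge is used: recurrence
    over the stream's own history is the only signal."""
    grams = [stream[i:i + n] for i in range(len(stream) - n + 1)]
    return Counter(grams).most_common()

text = "the cat sat on the mat and the cat ran"
for gram, count in fragment(text, 4)[:3]:
    print(repr(gram), count)   # most recurrent 4-symbol fragments
```

The process invariants the abstract refers to are fragments that remain stable across scales and episodes, which requires the multiscale bookkeeping this sketch omits.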

Given a pool of observations selected from a sensor stream, input data can be robustly represented, via a multiscale process, in terms of invariant concepts and themes. Applying this to episodic natural language data, one may obtain a graph geometry associated with the decomposition, which is a direct encoding of spacetime relationships for the events. This study contributes to an ongoing application of the Semantic Spacetime Hypothesis, and demonstrates the unsupervised analysis of narrative texts using inexpensive computational methods without knowledge of linguistics. Data streams are parsed and fractionated into small constituents, by multiscale interferometry, in the manner of bioinformatic analysis. Fragments may then be recombined to construct original sensory episodes, or to form new narratives by a chemistry of association and pattern reconstruction, based only on the four fundamental spacetime relationships.

The link between Promise Theory and Information Theory, while perhaps obvious, is laid out explicitly here. It is shown how causally related observations of promised behaviours relate to the probabilistic formulation of causal information in Shannon's theory, and thus clarify the meaning of autonomy, or causal independence, and further the connection between information and causal sets. Promise Theory helps to make clear a number of assumptions that are commonly taken for granted in causal descriptions. The concept of a promise is hard to escape. It serves as a proxy for intent, whether a priori or by inference, and it is intrinsic to the interpretation of observations in the latter case.

Semantic spacetime is a model of configuration space, created within the framework of Promise Theory, a form of extended graph theory that embodies a strong form of locality and observer relativity for discrete processes. These notes lay out the structure of semantic spacetime in a way that will hopefully shed light on related work. In conventional descriptions of spacetime, homogeneity, isotropy, and global ordering are assumed by decree at all locations. Causal Set theory makes a minimal set of assumptions about discrete spacetime, but leaves many unanswered questions and an incomplete explanation of the nature of time. Promise Theory (PT) may have something to add here. PT has no global quantities or symmetries, so conventional assumptions about symmetry have to be constructed explicitly by local interaction. In this way, Promise Theory rejects symmetry as the default assumption and gives primacy to locality instead, in a largely scale-independent way. PT incorporates information channel theory and therefore serves as a bridge to incorporating information as the basic foundation of physics.

The concept of spacetime is limited and outdated ... here's how it can be generalized
See also blog: http://markburgess.org/blog_spacetime19.html
See on Amazon: https://www.amazon.com/dp/1797773704/ref=sr_1_fkmrnull_1?keywords=smart+space+time+burgess&qid=1551778123&s=gateway&sr=8-1-fkmrnull

Relationships between objects constitute our notion of space. When these relationships change we interpret this as the passage of time. Observer interpretations are essential to the way we understand these relationships. Hence observer semantics are an integral part of what we mean by spacetime. Semantics make up the essential difference in how one describes and uses the concept of space in physics, chemistry, biology and technology. In these notes, I have tried to assemble what seems to be a set of natural, and pragmatic, considerations about discrete, finite spacetimes, to unify descriptions of these areas. It reviews familiar notions of spacetime, and brings them together into a less familiar framework of promise theory (autonomous agents), in order to illuminate the goal of encoding the semantics of observers into a description of spacetime itself. Autonomous agents provide an exacting atomic and local model for finite spacetime, which quickly reveals the issues of incomplete information and non-locality. From this we should be able to reconstruct all other notions of spacetime. The aim of this exercise is to apply related tools and ideas to an initial unification of real and artificial spaces, e.g. databases and information webs with natural spacetime. By reconstructing these spaces from autonomous agents, we may better understand naming and coordinatization of semantic spaces, from crowds and swarms to datacentres and libraries, as well as the fundamental arena of natural science.


Using Promise Theory as a calculus, I review how to define agency in
a scalable way, for the purpose of understanding semantic
spacetimes. By following simple scaling rules, replacing individual
agents with `super-agents' (sub-spaces), it is shown how agency can
be scaled both dynamically and semantically.
The notion of occupancy and tenancy, or how space is used and filled
in different ways, is also defined, showing how spacetime can be
shared between independent parties, both by remote association and
local encapsulation. I describe how to build up dynamic and semantic
continuity, by joining discrete individual atoms and molecules of
space into quasi-continuous lattices.
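The coarse-graining step can be sketched in a few lines (an illustrative reading of the scaling rules, not the paper's calculus): partition agents into super-agents, hide promises interior to a block, and promote promises that cross blocks to promises between the super-agents.

```python
def coarse_grain(promises, partition):
    """Replace individual agents by super-agents. Promises with both ends in
    the same block become interior (hidden) structure; promises crossing
    blocks are promoted to exterior promises between super-agents."""
    block = {a: b for b, members in partition.items() for a in members}
    exterior = set()
    for giver, receiver, body in promises:
        if block[giver] != block[receiver]:
            exterior.add((block[giver], block[receiver], body))
    return sorted(exterior)

promises = [("a1", "a2", "data"), ("a2", "b1", "data"), ("b1", "b2", "ack")]
partition = {"A": ["a1", "a2"], "B": ["b1", "b2"]}
print(coarse_grain(promises, partition))  # [('A', 'B', 'data')]
```

Iterating this over nested partitions is what joins discrete agents into the quasi-continuous lattices described above, with exterior promises playing the role of the coarse dynamics.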

The study of spacetime, and its role in understanding functional systems, has received little attention in information science. Recent work on the origin of universal scaling in cities and biological systems provides an intriguing insight into the functional use of space, and its measurable effects. Cities are large information systems, with many similarities to other technological infrastructures, so the results shed new light, indirectly, on the expected scaling behaviour of smart pervasive infrastructures and the communities that make use of them. Using promise theory, I derive and extend the scaling laws for cities to expose what may be extrapolated to technological systems. From the promise model, I propose an explanation for some anomalous exponents in the original work, and discuss what changes may be expected due to technological advancement.
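The scaling laws in question have the form Y ~ c N^β, and the exponent β is what the anomalies concern. As a minimal sketch of how such an exponent is measured (with synthetic, not real, data; β = 1.15 is the superlinear value commonly reported for socio-economic outputs of cities):

```python
import math

def scaling_exponent(N, Y):
    """Least-squares slope of log Y vs log N: the exponent beta in Y ~ c N^beta."""
    x = [math.log(n) for n in N]
    y = [math.log(v) for v in Y]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Synthetic city data with an exact superlinear exponent (illustrative only).
N = [10_000, 100_000, 1_000_000, 10_000_000]   # populations
Y = [2.0 * n ** 1.15 for n in N]               # e.g. economic output
print(scaling_exponent(N, Y))  # 1.15
```

Explaining why β deviates from simple geometric expectations, and how technology shifts it, is the substance of the promise-theoretic derivation.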