Laws of Systemic Organization and Collective
Behaviour in Ensembles
Mark Burgess and Siri Fagernes
Oslo University College, Norway
Mark.Burgess@iu.hio.no
Siri.Fagernes@iu.hio.no
Abstract. Can the whole be greater than the sum of its parts? The phenomenon of emergence claims that it can. Autonomics suggests that emergence can be harnessed to solve problems in self-management and behavioural regulation without human involvement, but the definitions of these key terms are unclear. Using promise theory, and the related operator theory of Burgess and Couch, we define behaviour in terms of promises and their outcomes. We describe the interaction of agents and their collective properties.
1 Introduction
The organization of functional entities within a system of components is a key aspect
of delegation and specialization in any system, and is closely related to a description
of its behaviour. In a system of interacting agents, agents can differentiate themselves
into different functional roles, like cells in an organism, which work together to extend
the scope of their collective behaviour. Regardless of whether such roles and groupings
emerge from a dialogue with environmental conditions or are pre-programmed, it is
important for an engineer, scientist or analyst to be able to identify these functional
elements and understand their relationship to the whole. The term organization is often
used for this.
In this paper we use promise theory to define and distinguish different kinds of
singular and collective behaviour in systems whose components can be isolated and
modelled as autonomous agents. We also define organization and emergent outcomes
in a way that we believe offers a new insight into these phenomena.
Computer science often has a limited view of system behaviour. Typically it describes only what we shall call programmed behaviour, i.e. that which can be represented by a state machine. In the Unified Modelling Language, for instance, behaviour is represented as algorithms, flow diagrams and state charts. However, this is an optimistic view of the scope of behaviour. Any system capable of basing its actions on input events from its environment, or whose resources are governed and modified by the same environmental conditions, is necessarily unpredictable from a state-machine viewpoint. Some authors have used the idea of swarms as a model for collective behaviour amongst agents, but they generally approach this by programmed mimicry rather than by an identification of phenomenological characters. Here we examine an approach that deliberately avoids an algorithmic view of behaviour.
2 Axioms of promise theory
Promises are now well described as a modelling framework (see [1–3]). The framework builds from an atomic and fully disintegrated view of behaviour up to systems that exhibit coordination and familiar programming, using the concept of agents that are completely impenetrable to outside influence, with private knowledge, and the promises that they make to one another. A promise with body $+b$ is understood to be a specification of behaviour to "give" from one agent to another (possibly in the manner of a service), while a promise with body $-b$ is a specification of what behaviour will be "received" or "used" by one agent from another (see table 1). A promise valuation $v_i(a_j \xrightarrow{b} a_k)$ is a subjective interpretation by agent $a_i$ (in any local currency) of the promise in the parentheses. Usually an agent can only evaluate promises in which it is involved.
Symbol                         | Interpretation
$a \xrightarrow{+b} a'$        | Promise with body $b$
$a \xrightarrow{-b} a'$        | Promise to accept $b$
$v_a(a \xrightarrow{+b} a')$   | The value of the promise to $a$
$v_a(a \xrightarrow{-b} a')$   | The value of the promise to $a$
$\oplus$                       | Combination of promises in parallel
$\otimes$                      | Combination of promises in series

Table 1. Summary of promise notation
Promises are considered to be basic ("genetic") characteristic properties of agents (see table 2 for a whimsical analogy between genes and promises). A promise body $b$ has a type $\tau$, which describes the nature or subject of the promise, and a constraint $\chi$, which specifies what restricted subset of the total possible degrees of freedom is being promised. Since any dynamical, systematic behaviour is a balance between degrees of freedom and constraints [4], this should be sufficient to describe a wide variety of phenomena.
The environment in which agents live and act can itself be represented as an autonomous agent with extensive internal resources. We denote this agent E.
Promise theory is mainly about the analysis of snapshots in which promises are fixed. If basic promises change, we enter a new "epoch" of the system in which basic behaviours change. For a fixed static set of promises, behaviour continues according to the same basic pattern of interactions between agents and environment.
We can couch the latter statements as a rough paraphrasing, analogous to Newton's laws of motion for agents, to be revisited later:

1. An agent continues in a state of uniform behaviour, like a deterministic automaton, unless it makes at least one promise of type $-b$, for some non-empty promise body.
2. The observable behaviour of an agent changes in relation to a coupling to input from an outside source (see section 3).
3. For every external influence received through a promise with body $-b = \langle -\tau, \chi_1 \rangle$, there must be an 'equal' opposite promise $+b = \langle +\tau, \chi_2 \rangle$, which is the source of the influence. If $\chi_1 \neq \chi_2$, then the interaction is of magnitude $\chi_1 \cap \chi_2$.
Genetic attribute | Promise attribute
Genes             | Promise type $\tau$
Alleles           | Constraint $\chi$
Phenotype         | Promise role
Proteins          | Operators $\hat{O}$ [5]
Gene network      | Promise graph

Table 2. A whimsical association between promise theory and genetics. In both cases the role of the environment is crucial to understanding the resulting behaviour, regardless of what is promised.
Definition 1 ((In)exact promises). A promise $a_1 \xrightarrow{b} a_2$ is inexact if the constraint $\chi(b)$ has residual degrees of freedom, i.e. if it is not a complete and unambiguous behavioural specification. For example, $q = 5$ is an exact specification, while $1 < q < 5$ is inexact. The same principle applies to the possible outcome of a promise; however, the actual outcome is naturally exact.
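As a concrete aside, the vocabulary above can be encoded in a few lines of Python. This is a minimal sketch of our own, not the paper's formalism; the names (Body, Promise, is_exact) and the representation of a constraint as a finite set of permitted values are assumptions made purely for illustration.

```python
# Illustrative sketch only: a promise body couples a type (tau) with a constraint (chi)
# over the promised degrees of freedom; a promise attaches a signed body to two agents.
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Body:
    tau: str              # promise type, e.g. "q"
    chi: FrozenSet[str]   # constraint: the values still permitted (an assumption)

@dataclass(frozen=True)
class Promise:
    promiser: str
    promisee: str
    sign: int             # +1 for a give-promise (+b), -1 for a use-promise (-b)
    body: Body

def is_exact(body: Body) -> bool:
    """Exact in the sense of Definition 1: no residual degrees of freedom."""
    return len(body.chi) == 1

# q = 5 is exact; 1 < q < 5 (say q in {2, 3, 4}) leaves residual freedom.
assert is_exact(Body("q", frozenset({"5"})))
assert not is_exact(Body("q", frozenset({"2", "3", "4"})))
```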
One of the strengths of promise theory is that it defines groups and roles as empirical observables, rather than as necessarily pre-designed features [2, 6]. For this it is important to be able to compare agents using an impartial third party. The coordination promise is used for this (see fig. 1).

[Figure: agents 1, 2 and 3, with a C(b) promise from agent 1 to agent 2 and a b promise from agent 2 to agent 3; a dashed arrow runs from agent 1 to agent 3.]
Fig. 1. Serial composition of a promise and a coordination promise. The dashed arrow is implied by the C(b) promise.

The reduction rule for coordination promises, for the case in which $n_1$ promises $n_2$ that it will coordinate on the matter of $b$, given that $n_2$ promises $b$ to $n_3$, follows. The symbol $\otimes$ is used to signify the composition of these promises:

$$\left( n_1 \xrightarrow{C(b)} n_2 \right) \otimes \left( n_2 \xrightarrow{b} n_3 \right) \;\rightarrow\; n_1 \xrightarrow{b} n_3. \qquad (1)$$

We use this below in the identification of observable properties, since it implies a basis for $n_3$ to compare $n_1$ and $n_2$.
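The reduction rule (1) can also be phrased as a small executable sketch; the tuple representation of promises below is our own assumption, made only to show how the composition chains and what the implied promise looks like to $n_3$.

```python
# Illustrative sketch of eq. (1): (n1 -C(b)-> n2) composed with (n2 -b-> n3)
# implies an effective promise (n1 -b-> n3), giving n3 a basis for comparison.
from typing import NamedTuple, Optional

class P(NamedTuple):
    promiser: str
    promisee: str
    body: str             # "b" for an ordinary promise, "C(b)" for coordination on b

def compose(p1: P, p2: P) -> Optional[P]:
    """Serial composition: valid only when p1 coordinates on p2's body."""
    if p1.body == f"C({p2.body})" and p1.promisee == p2.promiser:
        return P(p1.promiser, p2.promisee, p2.body)
    return None           # the promises do not chain

implied = compose(P("n1", "n2", "C(b)"), P("n2", "n3", "b"))
assert implied == P("n1", "n3", "b")   # the dashed arrow of fig. 1
```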
3 Behaviour and measurement defined
We must take care to distinguish between a promise and the outcome of a promise. If
one assumes that promises are necessarily kept, this distinction is moot. However, in all
realistic systems promises are only kept with a certain probability.
A goal, consequence or result is an outcome, i.e. a specification of the states of all involved agents' observable behaviours at some agreed time after a promise was made.
A goal could conceivably be associated with the promise body, but we prefer to keep it
as a separate concept, since one’s goals and one’s promises are not necessarily perfectly
correlated. A promise, on the other hand, is an announcement of future behaviour, which
could be the intention to attain a goal. The outcome is what actually happened, whether
a goal was announced or not. The projected outcome (or goal) might be exact, in which
case it is a precise specification of the final state, or it might be inexact, allowing for a
range of possible values, e.g. aim within the circle.
Behaviour is a pattern of change in the observable measures of a system. It is governed by the interplay between degrees of freedom and constraints [4]. In promise theory, internal changes to observables are assumed to occur through operations [5]. The operations required to fulfil the terms of a promise are not necessarily defined, since there might be many acceptable ways to achieve the promise.
Promises also define the only observables in promise theory. No knowledge is exchanged without it being promised. For instance, the location and nature of an agent can only be observed if the agent promises to make itself visible. This includes interactions with an environment. The environment itself must promise its secrets to agents. This has the advantage of making explicit all communication, including observations of environment, boundary conditions, etc.
To discuss behaviour over time we need the notion of a trajectory. This is the path, or set of intermediate states, between the start and the current value of an agent's observables, e.g. the history of an agent or the complete history of state transitions in a finite state machine. Let $q$ be a vector of state information (which might include position, internal registers and other details) [7]. Such a trajectory begins at a certain time $t_0$ with a certain coordinate value $q_0$, known as the initial conditions. The trajectory is then a parameterized function $q(t, \sigma)$, for some vector of parameters $\sigma$ arriving from an outside source, and we identify the behaviour of an isolated system as the triplet determining the trajectory:

$$\langle q_0, t_0, \hat{O}(t, \sigma) \rangle, \quad t > t_0. \qquad (2)$$

The function $\hat{O}(t, \sigma)$ is a constant transition matrix or operator which takes $q(t_i)$ to $q(t_{i+1})$ for integer time index $i$, or alternatively $q(t)$ to $q(t + dt)$ in a differential form. In other words, any change in an agent's behaviour is generated by operators:

$$q \rightarrow q' = q + \delta q = \hat{O}(\sigma) q = (1 + \hat{G}(\sigma)) q, \qquad (3)$$

i.e. $\delta q = \hat{G}(\sigma) q$. This can also be given meaning differentially.
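A minimal numerical sketch of eqs. (2) and (3), with invented matrices, shows how a constant operator $\hat{O} = 1 + \hat{G}$ fully determines a trajectory from its initial conditions:

```python
# Illustrative sketch: iterate q(t_{i+1}) = O q(t_i) with O = 1 + G constant.
# The matrices are invented; only the structure mirrors eqs. (2)-(3).
import numpy as np

def trajectory(q0: np.ndarray, O: np.ndarray, steps: int) -> list:
    """The behaviour <q0, t0, O> as an explicit path of states."""
    qs = [q0]
    for _ in range(steps):
        qs.append(O @ qs[-1])   # delta q = G q at each step
    return qs

G = np.array([[0.0, 0.1],
              [-0.1, 0.0]])     # generator of change
O = np.eye(2) + G               # O = 1 + G, as in eq. (3)
q0 = np.array([1.0, 0.0])

# With no external sigma, re-running from the same q0 reproduces the same path:
assert all(np.allclose(a, b)
           for a, b in zip(trajectory(q0, O, 5), trajectory(q0, O, 5)))
```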
We now wish to provide a theorem, which we express as a basic law.

Law 1 (Law of Inertia). An agent's observable properties hold a constant, deterministic trajectory $q(t)$ unless it also promises to use the value of an external source $\sigma$ to modify its transition matrix $\hat{O}(\sigma)$.

Proof. Each agent has access only to information promised to it, or already internal to it. A local promise $a_i \xrightarrow{f(\sigma)} a_j$ that depends on an externally promised parameter $\sigma$ is clearly a conditional promise $a_i \xrightarrow{f(\sigma)/\sigma} a_j$, where $\sigma$ is the value promised by another agent. In order to acquire the value of $\sigma$, we require $a_i \xrightarrow{-\sigma} a_j$ and a corresponding promise to provide $\sigma$ to $a_i$, either from the environment or from another agent. Thus, if an agent does not promise to use any input $\sigma$ from another agent, all of its internal variables and transition matrices must be constant.

Note also that, by the definitions in [2], a conditional promise is not a promise without a use-promise. This fits naturally with the argument in the theorem.
Corollary 1 (A conditional promise is not exact). By reversing the theorem we see that a conditional promise must, by definition, have a residual degree of freedom, which is the value of the dependent condition.
Based on this property of promises, we can now classify behaviours by several characters:

Rigid behaviour (deterministic): An agent whose observable properties do not depend on any external circumstances has rigid behaviour [8, 9]. This is true if and only if the agent has no use-promises ($-b$, for some $b$), and all its other promises $+b$ are exact promises. In this case the internal change operator $\hat{O}$ cannot depend on any external information and must therefore be constant; hence the trajectory is constant and deterministic.

Reactive or adaptive behaviour: The observable properties of an agent $a_i$ change in response to a change in its input from another agent $a_j$ (or from the environment). This requires the promises:

$$a_{\rm local} \xrightarrow{-I} a_{\rm ext}, \qquad (4)$$
$$a_{\rm local} \xrightarrow{+O(I)/I} a_{\rm ext}, \qquad (5)$$

where $I$ represents a promise of input from an external agent, and $O(I)$ represents a promise of some observable output to another external agent, which is conditionally a function of the input.
It makes sense to distinguish:
Ordered behaviour: An agent whose observable properties change according to a
deterministic algorithmic pattern.
Disordered (“random”) behaviour: The observable properties of an agent react or
adapt to information from the environment which changes in a non-monotonic and
unpredictable manner.
Disorder (randomness) is assumed to come only from the environment, since no other
agent has sufficient complexity. However, we should be careful in making judgements
about this, as “randomness” is to some extent a philosophical admission of ignorance
rather than a real phenomenon, except in the quantum world. Finally, there is behaviour
that pertains to more than one agent.
Collective behaviour: This refers to a collection of agents that are connected by promises of any type within a promise graph. Collective behaviour is assumed to involve interaction (i.e. we should disregard coincidentally similar behaviour with zero mutual information [4]). In any given temporal snapshot, agents can exhibit:

Differentiated behaviour: Agents are partitioned into a division of labour. This requires that agents be identifiable, and that some process or entity within the ensemble decide the division. It is possible for the environment (e.g. boundary conditions) to determine this partitioning.

Undifferentiated behaviour: Agents play identical roles in the ensemble and require no specific labels, since all promises are to any accessible agent.
Note that undifferentiated behaviour could be coincidental, like a disordered gaseous phase of matter (all the component elements make identical unconditional promises but never interact with one another), or it could imply interaction (as in a possibly ordered solid phase). Normally we are only interested in the possibility of coordinated, collective phenomena, but the possibility of a phase transition from one to the other is clearly interesting.
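The classification above can be condensed into a hedged sketch: by Law 1, an agent is rigid precisely when it makes no use-promises and all its give-promises are exact. The data model here is invented for illustration and is not the paper's notation.

```python
# Illustrative classifier for a single agent's behaviour, following Law 1 and
# the categories in this section. "E" stands for the environment agent.
from dataclasses import dataclass
from typing import List

@dataclass
class AgentPromise:
    sign: int            # +1 give (+b), -1 use/accept (-b)
    exact: bool          # is the constraint chi(b) fully specified?
    counterpart: str     # the other agent; "E" denotes the environment

def classify(promises: List[AgentPromise]) -> str:
    uses = [p for p in promises if p.sign < 0]
    if not uses and all(p.exact for p in promises):
        return "rigid"   # constant operator O, deterministic trajectory q(t)
    if any(p.counterpart == "E" for p in uses):
        return "reactive, environment-driven (possibly disordered)"
    return "reactive/adaptive"

print(classify([AgentPromise(+1, True, "a2")]))                      # rigid
print(classify([AgentPromise(-1, False, "E"),
                AgentPromise(+1, False, "a2")]))                     # environment-driven
```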
We now consider a rule that allows us to reduce a pattern of pre-existing promises into an effective promise of coordination.

Law 2 (Rewriting rule for empirically collective behaviour). Let $a_i$ be a group of undifferentiated agents, for $i, j$ in their index set, all making the same promises to all others (a complete graph) within the group. The parallel combination of the promise bodies may be considered equivalent to a single new promise body of type $C(b)$, i.e.

$$\bigoplus_b \left( a_i \xrightarrow{b} a_j \right) \;\equiv\; a_i \xrightarrow{C(b)} \ast, \qquad (6)$$

where $\bigoplus$ runs over all the promise bodies that are common to the group and $\ast$ represents all agents in the ensemble.

This law is easily justified, as it reflects the observation that when a number of agents behave similarly in response to one another, and have no labels that otherwise distinguish special roles, an external observer can only say that all of the agents are behaving in a coordinated way. Thus the observer sees a coordinated group for all intents and purposes, although it was not formally agreed by the agents.
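How an external observer might apply this rewriting rule can be sketched as follows; the encoding of promises as tuples and the group-scan procedure are assumptions of the example, not part of the law itself.

```python
# Illustrative sketch of Law 2: if every agent promises the same body b to every
# other member (a complete graph), rewrite the bundle as coordination, C(b) -> *.
from itertools import permutations
from typing import List, Optional, Set, Tuple

PromiseT = Tuple[str, str, str]   # (promiser, promisee, body)

def rewrite_coordinated(agents: Set[str], promises: Set[PromiseT],
                        body: str) -> Optional[List[PromiseT]]:
    """Return the effective C(body) promises of eq. (6) if the graph is complete."""
    complete = all((i, j, body) in promises for i, j in permutations(agents, 2))
    if complete:
        return [(i, "*", f"C({body})") for i in agents]   # "*" = whole ensemble
    return None

agents = {"a1", "a2", "a3"}
promises = {(i, j, "flock") for i in agents for j in agents if i != j}
print(rewrite_coordinated(agents, promises, "flock"))
```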
4 Leaky agents: noisy environment
Almost no real systems behave like the strictly autonomous agents of promise theory. It is impossible to isolate systems from physical reality. However, there is a long tradition in the natural sciences of making laws for idealized "closed" systems. To obtain a more realistic model of a system, we must give every agent a use-promise from the environment, to allow non-specified environmental conditions to be explicitly modelled. This is the route by which input will be admitted to an emergent functionality of the group. The environment agent is naturally assumed to promise its information to all other agents. Thus every real-world system plays the appointed role of a user of environmental information. The exact nature of the environmental influence on each agent might be different.
Definition 2 (Leaky agents). We define a leaky agent to be an agent making any promise to receive information from the environment $E$: $a_i \xrightarrow{-{\rm env}_i} E$.
5 Organization
The term 'organization' is widely used to refer to collective behaviour in the institutions of business and society. Its intended meaning is not entirely clear in natural language, but we shall provide a definition below that is unambiguous. One source of ambiguity is the difference between organization and order. This is particularly so in the use of the terms self-organized and self-ordered for presumably emergent phenomena (see table 3).

Name              | Conscious?           | Result
Organization      | Pre-planned          | Purpose/goal
Self-organization | Observed/identified  | Recognizable outcome

Table 3. Is organization planned or observed? Was it designed or did it evolve? It turns out that the main property of organization is a separation into identifiable components, allowing easy location of services within the whole.

The now established term self-organization forces us to define the meaning of organization clearly, since it implies that organization may be something that is either identified a priori, by design, or a posteriori, as a system property.
Intuitively we think of an organization as the implementation of a plan, or a discrete structural pattern, on a particular group of entities. "Organization" (from the Greek word for tool or instrument) implies to us a tidy compartmentalization of functions. We can represent such functions as agents' internal operations, and we know that all discrete combinatoric patterns are classified by grammars of the Chomsky hierarchy [10, 4], which may be formed from the alphabet of such operators. Moreover, patterns can be formed for different degrees of freedom; for instance:

– Spatial or role-based partitioning of operations between parallel agents.
– Temporal (schedule), i.e. serial ordering of operations at an agent.

For some, an organization also implies a conscious decision amongst a number of agents to work together, with a hierarchical structure and a leader. There is a separation of concerns, or division of labour, in the solution of a task. Many also believe in the value of reusability (a subjective valuation of implementation which could lead to an economic criterion for selection of one structure over another). These ideas are complementary.
We define an organization as a discrete pattern that is formed from interacting agents and which facilitates the achievement of a desired trajectory or task, i.e. a change from an initial state $Q_i$ to a final state $Q_f$ over a certain span of time. We refer to the discussion of systems in ref. [7] for the definition of a task.

Definition 3 (Organization). The term organization is both an adjective for a phenomenon and a noun for the arena in which that phenomenon takes place. We describe the arena as follows. Let $\mathcal{E}$ be an ensemble of agents. The observables of $\mathcal{E}$ can formally be written as a direct sum $Q = q_1 \oplus q_2 \oplus \ldots \oplus q_N$, but we do not assume that these are public knowledge to an actual agent. An organizational entity within the ensemble consists of the tuple $Z = \langle G, Q, A, S \rangle$, where:

1. $G \subseteq \mathcal{E}$ is a subset of agents with a connected promise graph $A_{ij}$.
2. $S$ is a string of matrix operators $\hat{O}_A(a_i \xrightarrow{\pm\ast} a_j)$ providing the observable changes, for some sequence index $A$, whose diagonal elements include the operations $\hat{O}_A(t, \sigma)$. $S$ spans all the observables in the ensemble, with column dimension $\sum_{i \in \mathcal{E}} \dim(q_i)$, and modifies the observables of all agents: $\hat{O} Q = Q'$.
The property of organization can now be understood as a discrete pattern induced by $Z$. We discern two orthogonal types of organization (analogous to the longitudinal and transverse nature of wave patterns in a continuum description):

Serial organization is the syntax of $S$. It is a property of single or multiple agents, and is a pattern in the ordering of the operational changes $\hat{O}_A$, classified by a Chomsky grammar.

Parallel organization is the partitioning of $Q$ induced by the irreducible modules of $\hat{O}_A$ at each serial step. This is a property of multiple agents and is characterized by the eigenstructure of $\hat{O}_A$, which defines natural regions in the graph [11].

An organization is more than a collection of agents, or a number of promises: it has to do with the observable properties of a group of agents and their trajectories. It is itself a form of structured behaviour. The degree of parallel organization is least when every agent evolves independently, with all agent operators on the diagonal and no inter-communication, and it grows as the average size of the irreducible modules of $\hat{O}$ grows. The largest organization includes all agents, with input and output only to the environment.
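As an illustration of parallel organization, the sketch below (with invented matrices) reads irreducible modules off a step operator by treating its non-zero couplings as edges of a graph; larger modules then correspond to a higher degree of parallel organization. Interpreting the modules as connected components is a simplifying assumption of the example.

```python
# Illustrative sketch: partition observables into modules using the non-zero
# off-diagonal couplings of the step operator O as (undirected) edges.
import numpy as np

def modules(O: np.ndarray, tol: float = 1e-12) -> list:
    n = O.shape[0]
    adj = (np.abs(O) > tol) | (np.abs(O.T) > tol)
    seen, comps = set(), []
    for s in range(n):
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:                      # depth-first search over couplings
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(w for w in range(n) if adj[v, w] and w not in comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps                          # larger modules = more parallel organization

O = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.0],
              [0.0, 0.0, 1.0]])           # agents 0 and 1 coupled; agent 2 independent
print(modules(O))                         # -> [[0, 1], [2]]
```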
A service or task here is a function that generally requires input and provides output.
We must therefore ask where this comes from and goes to. One often imagines that
this arrives from and leaves to “outside” an organization, so that an organization is
often understood to have some kind of boundary. However, note the following: The
interaction of two organizations, formed when one organization makes promises to the
other, and perhaps vice versa, is itself an organization. This is trivial from the definition,
but the implications are extensive.
First of all, since the implementation of promises is only partially ordered, several
inequivalent serial patterns could emerge; moreover, if the concept of organization is
extensible then we cannot argue to limit its boundary when it interacts with other agents
or organizations.
The only natural boundary for interaction is the limit of the whole promise graph (no inside and outside), i.e. a system in which every promised output feeds into a promised input of another agent, and every input is saturated with an output from another source. One is then free to see organization and to partition the complete graph into organizational entities internally, by any criteria. Roles are one approach: the associated group (centralization) and the coordinated group (peer to peer) are the two models for coordinating behaviour. Centrality based on local maxima of importance measures [11] is another.
What is the purpose of identifying organizational boundaries? Organization can cheaply facilitate the location of necessary services within an ensemble, to enable the task being performed by it [12]. In an undifferentiated swarm there is nothing to identify a discrete structure, so any internal specialization is suppressed and the structure is hence amorphous, more like a liquid condensate than a polycrystalline structure.
6 Fixed point and irreducible behaviour (symbiosis)
Symbiosis is what happens when the output of one agent or organization is promised to the input of another agent or organization and vice versa, and the result is a sustainable state or trajectory. The parts of the symbiote do not need to perform a function for agents outside their partnership, as long as they mutually quench each other's roles. This mutual closure is a basic topological configuration that allows the persistence of an operational relationship (an ecosystem)^1. The result is equilibrium. Although there is no room to pursue this here, our definition allows us to see stable generalizations of organization as the eigenvectors of strings of operators. Thus the natural axes of an organization can be computed by a kind of principal component analysis. The operators $\hat{O}_i$ can be internally reducible [13], in which case the eigenvector analysis would reveal a natural structure of separable entities within the boundaries of the closed behaviour, each of which is built from atomic agents. This is like examining a cell under a microscope.
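The eigenvector picture can be sketched numerically: compose the operators for one round trip of the mutual closure and extract the leading eigenvector as the sustained mode. The matrices below are invented for illustration and stand in for the effect of each partner's kept promises on the other's observables.

```python
# Illustrative sketch: the 'natural axis' of a two-part symbiote as the leading
# eigenvector of one full cycle of mutual influence.
import numpy as np

A_to_B = np.array([[0.6, 0.4],
                   [0.4, 0.6]])    # effect of A's kept promises on B (invented)
B_to_A = np.array([[0.7, 0.3],
                   [0.3, 0.7]])    # and vice versa (invented)

loop = B_to_A @ A_to_B             # one round trip of the mutual closure
vals, vecs = np.linalg.eig(loop)
lead = int(np.argmax(vals.real))
axis = vecs[:, lead].real
axis /= axis.sum()                 # normalize the sustained mode
print(vals.real[lead], axis)       # the state pattern the closed loop can sustain
```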
7 Subjective reduction: emergence and swarms
When is behaviour designed and when does it emerge? Promises are given or designed
and outcomes emerge. Neither promises nor programming exactly determine behaviour
in general. Rather we must look to the spectrum of observable outcomes, as the interplay
between freedoms and constraints in the agents[7, 4].
^1 Creationists believe that life has a purpose and that, therefore, living things must have been designed. Evolutionists believe that life has no purpose, but that ecologies are self-satisfying organizations that persist because they can.
Definition 4 (Emergent behaviour). A pattern of change in the observable properties
of a group of leaky agents exhibiting non-rigid, collective behaviour which, in dialogue
with its environment, leads to a perceived organization that was not explicitly promised.
This definition is intentionally subjective, and requires some work on the part of the
reader. We believe the identification of emergent behaviour is subjective, as its utility is
essentially fortuitous and depends on what probes it from outside.
The interplay between locally determined collective behaviour and the value it holds for participating agents, within its environmental context, is key to shaping the patterns that are observed. The value to an observer depends on whether it fulfils a role that is needed.
We believe that this idea of serendipitous recognition fits with the spirit with which the
term is used and intended in the literature.
For example, consider a number of ad hoc transceiver agents who promise to follow some behavioural algorithm that results in the formation of an aesthetic pattern, or in an unexpected functional side effect, like message delivery. In either case there is a subjective value associated with the outcome. Ants forming concentric circles in an environment have been considered emergent [14]; however, the significance of the circles is a human aesthetic.
We assume that emergent behaviour can be understood in terms of promises only through an algebraic reduction, since by its very definition it is unexpected. Emergent behaviour should be measurable by the same standards as any other kind of behaviour. We propose that the residual freedoms in the agents, those not constrained, are in fact selected by information received from outside the agent (symbiosis with the environment), resulting in patterns of behaviour that are unexpected, but which nevertheless lie within the bounds of the promises given (see fig. 2).
[Figure: an agent's promises bound its behaviour; environment (agent) observables select within those constraints, yielding emergent agent behaviour and its functional consequences.]
Fig. 2. Emergent behaviour requires the environment, or other agents, to supply additional constraints that select a particular policy from the residual degrees of freedom. If the promised consequences of the promised behaviour are valued by the promiser, then it will likely continue to keep its promise.
An example of emergent behaviour often cited is the idea of a swarm. Many definitions of swarms have been offered [14–21]. It is interesting to ask what causes swarm-like behaviour to be recognizable, i.e. what are the necessary and sufficient conditions for a swarm^2.

^2 In the literature, researchers have been more interested in what swarms do than in what they are.
Definition 5 (Emergent group or Swarm). A collection of leaky agents that may be
seen by any external observer as exhibiting undifferentiated, collective behaviour.
What remains to be discussed at length is the economic aspect of swarm formation, in motivating the right promises. Curiously, this is somewhat analogous to classical mechanics, in which Newton's laws offer the promise of behaviour but energy shapes the forces and hence the changes; or likewise to quantum mechanics, in which a wavefunction offers a promise of behaviour but energy drives the observable transitions. Swarm behaviour is an interplay between locally promised changes and the value of these in relation to environmental factors (see fig. 2). There will not be room to address this issue here, but it must be seen as a central challenge in completing this description of emergent behaviour, and it is encouraging that we recognize familiar patterns of mechanics in our story. The analogy to Newton's laws is natural, even comforting, because they represent the simplest, most basic statements about the meaning of change, which must be present in any dynamical system. The laws are essentially the same, within the framework of a different descriptive model.
Example: routing. It is possible to interpret traffic routing [22] as an example of emergent behaviour amongst autonomous agents that make simple identical promises to one another. Each agent in the routing cloud promises to provide and receive topology change information, and to relay traffic to at least one neighbour based on a metric condition. Let $a_i, a_j$ be agents in a group with promises:

$$a_i \xrightarrow{\pm{\rm traffic}} a_j, \qquad a_i \xrightarrow{\pm{\rm updates}} a_j, \qquad a_i \xrightarrow{\pm{\rm relay(metric)}} a_j, \qquad (7)$$

where it is assumed that "updates" include the receipt of metric information. Using law 2 for undifferentiated groups, we can conjoin the latter two promises into a single compound type of cooperative promise and call it "routing":

$$a_i \xrightarrow{\pm{\rm updates}} a_j \;\oplus\; a_i \xrightarrow{\pm{\rm relay(metric)}} a_j \;\equiv\; a_i \xrightarrow{\pm C({\rm routing})} a_j, \qquad (8)$$

and hence, from ref. [2], this implies the existence of an effective promise, named "routing", to an arbitrary or even hypothetical external observer by each pair conjoined by $\pm C({\rm routing})$ promises. Thus, if promises are kept, there will be a consequence of routing that we can interpret as emergent, since each agent exhibits behaviour based on information from its neighbouring agents. The sceptical reader might see this as unnatural, since routing was designed to work in this way. Our contention is that, designed or not, whether one considers behaviour to be emergent is a subjective matter. Emergent behaviour is simply ordinary functional behaviour that requires no special magic to understand, only the right point of view.
A final question we must ask about this routing 'swarm': if it is to have the emergent function that is claimed (routing), not only as seen by an external observer but as used by an external agency for communication, how will data get into the cloud in order to be routed? We must hence define the boundaries of the group ("inside" and "outside") and make sure that the boundary is not closed to input and output, in order to complete the picture.
8 Laws of interaction
In swarm intelligence, authors often speak of stigmergic communication: communication through an intermediate medium, such as leaving a trail or a message.

Theorem 1 (Non-rigidity and intermediate leaky agents). Stigmergic communication (involving an intermediate agent) in an environment can never guarantee rigid behaviour, unless all agents are fully isolated from the environment.

Proof. Consider the transfer of information from $a_1$ via $a_2$ to $a_3$. Let $d$ represent a promise body to make data available. The transfer of information through $a_2$ requires (at a minimum) the promises:

$$a_1 \xrightarrow{+d_{12}} a_2, \qquad (9)$$
$$a_2 \xrightarrow{+d_{13}/+d_{12}} a_3, \qquad (10)$$
$$a_2 \xrightarrow{-d_{12}} a_3, \qquad (11)$$
$$a_i \xrightarrow{-{\rm env}} E, \quad i = 1, 2, 3. \qquad (12)$$

Agent $a_2$ is leaky and therefore makes a use-promise to the environment. Hence $a_3$ cannot know whether environmental information or information from $a_2$ has been injected into the data it received. Suppose $a_1$ were to encode the data in such a way that $a_3$ could verify transmission. This would require a direct promise from one to the other, using a direct carrier, since $a_3$ knows only what it receives, not where the information came from. As long as an intermediate agent is present, corruption is possible.
We note that all communication is, in fact, stigmergic to some degree, and therefore this theorem has fundamental consequences for system design. It shows the need for local maintenance of local promise goals.
Corollary 2 (Stigmergic communication implies possible emergence). If the communication underlying a use-promise between leaky agents is not direct, then one should expect emergent consequences.
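Theorem 1 can be illustrated with a toy script in which the intermediate leaky agent mixes environmental input into the relayed data, leaving the receiver unable to verify provenance. All names and values are invented for the illustration.

```python
# Illustrative sketch of Theorem 1: a3 receives data relayed through a leaky a2,
# which also uses input from the environment E; the two sources are indistinguishable.
import random

def a1_send() -> float:
    return 42.0                           # a1 keeps its promise +d12

def a2_relay(d12: float, env: float) -> float:
    # a2 promises +d13 / +d12, but its use-promise to E lets the environment leak in.
    return d12 + env

random.seed(1)
received = a2_relay(a1_send(), env=random.gauss(0.0, 0.5))
# a3 sees only `received`: it cannot tell "42.0 plus noise" from an honest 42.37,
# so rigid behaviour downstream of an intermediate leaky agent cannot be guaranteed.
print(received)
```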
We can now restate our paraphrased version of mechanics, using the formulations of the previous laws, and in terms of clear statements about state transitions:

Law 3 (External interaction). The change or variation of promised behaviour $\delta(a_i \xrightarrow{\delta q} a_{\rm ext})$ in an agent is proportional to the promised action $A(a_i \xrightarrow{\Sigma} a_{\rm ext})$ of an external source $\Sigma$ used by the agent, for small disturbances.

Proof. We define the generalized trajectory or behavioural momentum for type $\tau$ by

$$\delta_\tau q = \hat{G}_\tau q, \qquad (13)$$

where $\hat{G}_\tau$ is the matrix-valued generator of behaviours of type $\tau$; see eqn. (3). Now, let $\Sigma_\tau$ be a transformation matrix from outside the agent, where $a_i \xrightarrow{-\Sigma_\tau} a_{\rm ext}$; this generates the transformation $\hat{G}_\tau \rightarrow \Sigma_\tau^T \hat{G}_\tau \Sigma_\tau$. For small transformations we may write $\Sigma_\tau \simeq I + \sigma_\tau$, and

$$\delta \hat{G}_\tau = \sigma_\tau^T \hat{G}_\tau + \hat{G}_\tau \sigma_\tau + \sigma_\tau^T \hat{G}_\tau \sigma_\tau. \qquad (14)$$

Thus from eqn. (13) we have

$$\delta_\tau^2 q = (\delta_\tau \hat{G}_\tau)\, q + \hat{G}_\tau\, \delta_\tau q = \frac{\delta V}{\delta q^T}, \qquad (15)$$

where the scalar interaction potential is defined by $V = q^T \left[ \delta_\tau(\sigma_\tau)\, \hat{G}_\tau + \hat{G}_\tau^2 \right] q$. For completeness, we can write (15) suggestively in promise notation, defining the interaction valuation function $A(\cdot)$ of a promise:

$$\delta \left( a_i \xrightarrow{\delta_\tau q} a_{\rm ext} \right) = A \left( a_i \xrightarrow{\Sigma_\tau} a_{\rm ext} \right). \qquad (16)$$

With the action-reaction law, which is an axiom in promise theory, this completes the mechanics of behaviour in promise theory.
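The small-disturbance expansion used in the proof can be checked numerically; the sketch below verifies eq. (14) for an invented generator $\hat{G}$ and disturbance $\sigma$, under the reconstruction $\hat{G}_\tau \rightarrow \Sigma_\tau^T \hat{G}_\tau \Sigma_\tau$ adopted above.

```python
# Illustrative check of eq. (14): for Sigma = I + sigma, the transformed generator
# Sigma^T G Sigma differs from G by sigma^T G + G sigma + sigma^T G sigma, exactly.
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(3, 3))               # invented generator of behaviour
sigma = 1e-3 * rng.normal(size=(3, 3))    # a small external disturbance
Sigma = np.eye(3) + sigma

delta_G = Sigma.T @ G @ Sigma - G
expansion = sigma.T @ G + G @ sigma + sigma.T @ G @ sigma
assert np.allclose(delta_G, expansion)    # the identity holds exactly here
print(np.linalg.norm(delta_G))            # first order in sigma, as Law 3 requires
```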
9 Falsification
Our use of "promise theory" raises two questions: is this a theory (can it make falsifiable predictions?), and is emergent behaviour a real phenomenon? A full answer to these questions is beyond the scope of this paper, but we make the following comments. The language of promises alone is not a theory; it is merely a language, and hence statements in it can only be proven true or false as a matter of definition. The present paper is about completing the definitions required to describe promises that change dynamically.
There are, however, other properties that arise from the assumptions of promise theory themselves. Given agent autonomy, for instance, the assumption of leaky agents predicts that there will be emergent behaviour, according to the definitions we have provided. This is a prediction that is verifiable only by observation. The language of promises allows us to be clear about the predictions that follow from these assumptions, within the promise language. We have also predicted that certain promise structures will have fixed points that lead to equilibrium stability. This follows from the network nature of promises, and it is non-trivial in general to complete the description leading to this, for reasons of algebra that we cannot go into here. We predict that symbiosis will be common, from the economics of promises. These predictions must also be observed in actual systems.
As with Newton's laws (to which we have alluded in the paper), the expressions we have written down essentially define what is meant by change in the basic observables of the system. Our paper is one of definition and clarification, not verification, so we can at best observe that changes occur and that our definitions change appropriately with them. This is why we state these rules as laws, not models. Some work has to be done before all the necessary relationships between promises can be incorporated into a detailed, falsifiable model of dynamic system behaviour.
10 Conclusions
Can the whole be greater than the sum of its parts? What does this mean? Recall the performance of $N$ single-server queues versus the performance of a single queue with $N$ servers [4]. The latter performs provably "greater than or equal to" the former, and yet the sum of the parts is the same. Functional behaviour allows for a reinterpretation of resources. If one is allowed to re-order the relationships between input and output, then one can take advantage of resources that would normally go to waste; but the actual physical properties of the system have not changed. Thus we see how the phrase can come about, without any magic, provided there is interaction.
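The queueing claim can be illustrated with a short, self-contained simulation; the arrival and service rates are invented, and the pooled variant simply assigns each arrival to the earliest-free server, which is equivalent to one FCFS queue feeding $N$ servers.

```python
# Illustrative simulation: N separate queues vs one shared queue with N servers.
# Same total resources; pooling reuses capacity that would otherwise sit idle.
import random

def mean_wait(shared: bool, n: int = 4, lam: float = 3.2, mu: float = 1.0,
              jobs: int = 20000, seed: int = 7) -> float:
    random.seed(seed)
    t, free, wait, nxt = 0.0, [0.0] * n, 0.0, 0
    for _ in range(jobs):
        t += random.expovariate(lam)               # Poisson arrivals, total rate lam
        k = min(range(n), key=free.__getitem__) if shared else nxt
        nxt = (nxt + 1) % n                        # round-robin when not shared
        start = max(t, free[k])
        wait += start - t
        free[k] = start + random.expovariate(mu)   # exponential service, rate mu
    return wait / jobs

print("N separate queues:", mean_wait(shared=False))   # larger mean wait
print("one shared queue: ", mean_wait(shared=True))    # provably <= the former
```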
When writers speak of emergence, they tend to think of self-organization, one special kind of emergence. The behaviour of ants and termites is particularly hallowed. The interesting property here is a reduction in entropy, or the disorder of the system, according to some measuring scale. According to our definitions, all collective behaviour is emergent behaviour, since the environment behaves just like a super-agent. We feel that this is an advantage. As long as simple definitions explain the observations, Occam's razor allows us to cut away the mysticism that infects descriptions of emergent behaviour.
The term "behaviour" is wafted about loosely in computer science, often with little clarification. Many of the questions about behaviour have been answered in the natural sciences. We have attempted to offer a usable description of organization and behaviour based on the long scientific tradition of describing observable characters, within the language of promises. The descriptions we offer here provide a platform from which to clarify a number of issues in autonomic systems in particular. There are unanswered questions about the subjective nature of agent perceptions that motivate the need for a proper theory of measurement based on promise agents. From such a theory it should be possible to decide whether peer-to-peer and centralized systems are comparable organizations with interchangeable properties, or whether they are two fundamentally different things.
We believe, moreover, that it is possible to go further and define mechanical, material properties for promise graphs, by analogy with how physics describes the large-scale properties of matter from an atomic model. Why is wood strong and glass brittle? Why is one computational structure robust and another fragile? These are analogous questions, about scale as well as about the underlying promises that bind the parts into a whole. We must work towards suitable and useful definitions of these properties. We believe that such definitions must follow from promise theory or something like it. We return to these issues in future work.
This work is supported by the EC IST-EMANICS Network of Excellence (#26854).
References

1. M. Burgess. An approach to understanding policy based on autonomy and voluntary cooperation. In IFIP/IEEE 16th International Workshop on Distributed Systems: Operations and Management (DSOM), LNCS 3775, pages 97–108, 2005.
2. M. Burgess and S. Fagernes. Pervasive computing management: A model of network policy with local autonomy. IEEE Transactions on Software Engineering (submitted).
3. M. Burgess and S. Fagernes. Voluntary economic cooperation in policy based management. IEEE Transactions on Network and Service Management (submitted).
4. M. Burgess. Analytical Network and System Administration: Managing Human-Computer Systems. J. Wiley & Sons, Chichester, 2004.
5. M. Burgess and A. Couch. Autonomic computing approximated by fixed point promises. In Proceedings of the 1st IEEE International Workshop on Modelling Autonomic Communications Environments (MACE), Multicon Verlag, ISBN 3-930736-05-5, pages 197–222, 2006.
6. M. Burgess and S. Fagernes. Autonomic pervasive computing: A smart mall scenario using promise theory. In Proceedings of the 1st IEEE International Workshop on Modelling Autonomic Communications Environments (MACE), Multicon Verlag, ISBN 3-930736-05-5, pages 133–160, 2006.
7. M. Burgess. On the theory of system administration. Science of Computer Programming, 49:1, 2003.
8. J.M. Hendrickx et al. Rigidity and persistence of three and higher dimensional formations. In Proceedings of the MARS 2005 Workshop on Multi-Agent Robotic Systems, page 39, 2005.
9. J.M. Hendrickx et al. Structural persistence of three dimensional autonomous formations. In Proceedings of the MARS 2005 Workshop on Multi-Agent Robotic Systems, page 47, 2005.
10. H. Lewis and C. Papadimitriou. Elements of the Theory of Computation, second edition. Prentice Hall, New York, 1997.
11. G. Canright and K. Engø-Monsen. A natural definition of clusters and roles in undirected graphs. Science of Computer Programming, 53:195, 2004.
12. S. Johnson. Emergence. Penguin Press, 2001.
13. M. Burgess, G. Canright, and K. Engø. Importance-ranking functions from the eigenvectors of directed graphs. Journal of the ACM (submitted), 2004.
14. E. Bonabeau, M. Dorigo, and G. Theraulaz. Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, Oxford, 1999.
15. J. Kennedy and R.C. Eberhart. Swarm Intelligence. Morgan Kaufmann (Academic Press), 2001.
16. M. Wooldridge. An Introduction to MultiAgent Systems. Wiley, Chichester, 2002.
17. G. Di Caro and M. Dorigo. AntNet: Distributed stigmergetic control for communications networks. Journal of Artificial Intelligence Research, 9:317–365, 1998.
18. L. Arlotti, A. Deutsch, and M. Lachowicz. On a discrete Boltzmann type model of swarming. Mathematical and Computer Modelling, 41:1193–1201, 2005.
19. S. Kazadi. Swarm Engineering. PhD thesis, California Institute of Technology, 2000.
20. F. Heylighen. Why is Open Access Development so Successful? Stigmergic organization and the economics of information. In Open Source Jahrbuch. Lehmanns Media, 2007.
21. J.H. Holland. Emergence: From Chaos to Order. Oxford University Press, 1998.
22. C. Huitema. Routing in the Internet (2nd edition). Prentice Hall, 2000.
This paper describes a mean field approach to defining and implementing policy-based system administration. The concepts of regulation and optimization are used to define the notion of maintenance. These are then used to evaluate stable equilibria of system configuration, that are associated with sustainable policies for system management. Stable policies are thus associated with fixed points of a mapping that describes the evolution of the system. In general, such fixed points are the solutions of strategic games. A consistent system policy is not sufficient to guarantee compliance; the policy must also be implementable and maintainable. The paper proposes two types of model to understand policy driven management of Human-Computer systems: (i) average dynamical descriptions of computer system variables which provide a quantitative basis for decision, and (ii) competitive game theoretical descriptions that select optimal courses of action by generalizing the notion of configuration equilibria. It is shown how models can be formulated and simple examples are given.