Laws of Systemic Organization and Collective
Behaviour in Ensembles
Mark Burgess and Siri Fagernes
Oslo University College, Norway
Mark.Burgess@iu.hio.no
Siri.Fagernes@iu.hio.no
Abstract. Can the whole be greater than the sum of its parts? The phenomenon
of emergence claims that it can. Autonomics suggests that emergence can be har-
nessed to solve problems in self-management and behavioural regulation without
human involvement, but the definitions of these key terms are unclear. Using
promise theory, and the related operator theory of Burgess and Couch, we define
behaviour in terms of promises and their outcomes. We describe the interaction
of agents and their collective properties.
1 Introduction
The organization of functional entities within a system of components is a key aspect
of delegation and specialization in any system, and is closely related to a description
of its behaviour. In a system of interacting agents, agents can differentiate themselves
into different functional roles, like cells in an organism, which work together to extend
the scope of their collective behaviour. Regardless of whether such roles and groupings
emerge from a dialogue with environmental conditions or are pre-programmed, it is
important for an engineer, scientist or analyst to be able to identify these functional
elements and understand their relationship to the whole. The term organization is often
used for this.
In this paper we use promise theory to define and distinguish different kinds of
singular and collective behaviour in systems whose components can be isolated and
modelled as autonomous agents. We also define organization and emergent outcomes
in a way that we believe offers a new insight into these phenomena.
Computer science often takes a limited view of system behaviour. Typically it describes only what we shall call programmed behaviour, i.e. that which can be represented by a state machine. In the Unified Modelling Language, for instance, behaviour is represented as algorithms, flow diagrams and state charts. However, this is an optimistic view of the scope of behaviour. Any system capable of basing its actions on input events from its environment, or whose resources are governed and modified by the same environmental conditions, is necessarily unpredictable from a state-machine viewpoint. Some authors have used the idea of swarms as a model for collective behaviour amongst agents, but they generally approach this by programmed mimicry rather than an identification of phenomenological characters. Here we examine an approach that deliberately avoids an algorithmic view of behaviour.
2 Axioms of promise theory
Promises are now well described as a modelling framework (see [1–3]). The framework builds from an atomic and fully disintegrated view of behaviour into systems that exhibit coordination and familiar programming, using the concept of agents that are completely impenetrable to outside influence, with private knowledge, and the promises that they make to one another. A promise with body $+b$ is understood to be a specification to "give" behaviour from one agent to another (possibly in the manner of a service), while a promise with body $-b$ is a specification of what behaviour will be "received" or "used" by one agent from another (see table 1). A promise valuation $v_i(a_j \xrightarrow{b} a_k)$ is a subjective interpretation by agent $a_i$ (in any local currency) of the promise in the parentheses. Usually an agent can only evaluate promises in which it is involved.
Symbol                           Interpretation
$a \xrightarrow{+b} a'$          Promise with body $b$
$a' \xrightarrow{-b} a$          Promise to accept $b$
$v_a(a \xrightarrow{b} a')$      The value of the promise to $a$
$v_{a'}(a \xrightarrow{b} a')$   The value of the promise to $a'$
$\oplus$                         Combination of promises in parallel
$\otimes$                        Combination of promises in series

Table 1. Summary of promise notation.
– Promises are considered to be basic ("genetic") characteristic properties of agents (see table 2 for a whimsical analogy between genes and promises). A promise body $b$ has a type, which describes the nature or subject of the promise, and a constraint, which explains what restricted subset of the total possible degrees of freedom is being promised. Since any dynamical, systematic behaviour is a balance between degrees of freedom and constraints [4], this should be sufficient to describe a wide variety of phenomena.
– The environment in which agents live and act can itself be represented as an autonomous agent with extensive internal resources. We denote this agent $E$.
– Promise theory is mainly about the analysis of snapshots in which promises are fixed. If basic promises change, we enter a new "epoch" of the system in which basic behaviours change. For a fixed, static set of promises, behaviour continues according to the same basic pattern of interactions between agents and environment.
We can couch the latter statements as a rough paraphrasing, analogous to Newton's laws of motion for agents, to be revisited later:

1. An agent continues in a state of uniform behaviour, like a deterministic automaton, unless it makes at least one promise of type $-b$, for some non-empty promise body.
2. The observable behaviour of an agent changes in relation to a coupling to input from an outside source (see section 3).
3. For every external influence received through a promise with body $-b = \langle -\tau, \chi_1 \rangle$, there must be an 'equal' opposite promise $+b = \langle +\tau, \chi_2 \rangle$, which is the source of the influence. If $\chi_1 \neq \chi_2$, then the interaction is of magnitude $\chi_1 \cap \chi_2$.
Genetic attribute    Promise attribute
Genes                Promise type $\tau$
Alleles              Constraint $\chi$
Phenotype            Promise role
Proteins             Operators $\hat{O}$ [5]
Gene network         Promise graph

Table 2. A whimsical association between promise theory and genetics. In both cases the role of the environment is crucial to understanding resulting behaviour, regardless of what is promised.
Definition 1 ((In)exact promises). A promise $a_1 \xrightarrow{b} a_2$ is inexact if the constraint $\chi(b)$ has residual degrees of freedom, i.e. if it is not a complete and unambiguous behavioural specification. For example, $q = 5$ is an exact specification, while $1 < q < 5$ is inexact. The same principle applies to the possible outcome of a promise; the actual outcome, however, is naturally exact.
One of the strengths of promise theory is that it defines groups and roles as empirical observables, rather than as necessarily pre-designed features [2, 6]. For this it is important to be able to compare agents using an impartial third party. The coordination promise is used for this (see fig. 1).

Fig. 1. Serial composition of a promise and a coordination promise. The dashed arrow is implied by the $C(b)$ promise.

The reduction rule for coordination promises, for the case in which $n_1$ promises $n_2$ that it will coordinate on the matter of $b$, given that $n_2$ promises $b$ to $n_3$, follows. The symbol "$\otimes$" is used to signify the composition of these promises:

$$\underbrace{n_1 \xrightarrow{C(b)} n_2}_{\text{`Coordinate with'}} \;\otimes\; \underbrace{n_2 \xrightarrow{b} n_3}_{\text{Promise}} \;\Rightarrow\; n_1 \xrightarrow{b} n_3. \qquad (1)$$
We use this below in the identification of observable properties, since it implies a basis
for n3to compare n1and n2.
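As an illustration, the reduction rule (1) can be sketched in a few lines of Python. The representation of promises as (promiser, body, promisee) triples and the "C(b)" string encoding are our own assumptions for this sketch, not notation fixed by the theory.

```python
# Sketch of reduction rule (1). Promises are (promiser, body, promisee)
# triples; a coordination body is encoded as the string "C(b)". Both
# representations are illustrative assumptions.

def reduce_coordination(promises):
    """If n1 promises n2 to coordinate on b, and n2 promises b to n3,
    infer the effective promise n1 -b-> n3 (the dashed arrow of fig. 1)."""
    inferred = set()
    for (n1, body, n2) in promises:
        if body.startswith("C(") and body.endswith(")"):
            b = body[2:-1]  # the body being coordinated on
            for (giver, body2, n3) in promises:
                if giver == n2 and body2 == b:
                    inferred.add((n1, b, n3))
    return inferred

promises = {("n1", "C(b)", "n2"), ("n2", "b", "n3")}
print(reduce_coordination(promises))  # {('n1', 'b', 'n3')}
```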
3 Behaviour and measurement defined
We must take care to distinguish between a promise and the outcome of a promise. If
one assumes that promises are necessarily kept, this distinction is moot. However, in all
realistic systems promises are only kept with a certain probability.
A goal, consequence, or result is an outcome, i.e. a specification of the states of all involved agents' observable behaviours at some agreed time after a promise was made.
A goal could conceivably be associated with the promise body, but we prefer to keep it
as a separate concept, since one’s goals and one’s promises are not necessarily perfectly
correlated. A promise, on the other hand, is an announcement of future behaviour, which
could be the intention to attain a goal. The outcome is what actually happened, whether
a goal was announced or not. The projected outcome (or goal) might be exact, in which
case it is a precise specification of the final state, or it might be inexact, allowing for a
range of possible values, e.g. aim within the circle.
Behaviour is a pattern of change in the observable measures of a system. It is governed by the interplay between its degrees of freedom and its constraints [4]. In promise theory, internal changes to observables are assumed to occur through operations [5].
The operations required to fulfil the terms of a promise are not necessarily defined,
since there might be many acceptable ways to achieve the promise.
Promises also define the only observables in promise theory. No knowledge is ex-
changed without it being promised. For instance, the location and nature of an agent
can only be observed if the agent promises to make itself visible. This includes inter-
actions with an environment. The environment itself must promise its secrets to agents.
This has the advantage of making explicit all communication, including observations of
environment and boundary conditions etc.
To discuss behaviour over time we need the notion of a trajectory. This is the path or set of intermediate states between the start and the current value of an agent's observables, e.g. the history of an agent or the complete history of state transitions in a finite state machine. Let $q$ be a vector of state information (which might include position, internal registers and other details) [7]. Such a trajectory begins at a certain time $t_0$ with a certain coordinate value $q_0$, known as the initial conditions. The trajectory is then a parameterized function $q(t, \sigma)$, for some vector of parameters $\sigma$ arriving from an outside source, and we identify the behaviour of an isolated system as the triplet determining the trajectory:

$$\langle q_0, t_0, \hat{O}(t, \sigma) \rangle, \quad t > t_0. \qquad (2)$$

The function $\hat{O}(t, \sigma)$ is a constant transition matrix or operator which takes $q(t_i)$ to $q(t_{i+1})$ for integer time index $i$, or alternatively $q(t)$ to $q(t + dt)$ in a differential form. In other words, any change in an agent's behaviour is generated by operators:

$$q \to q' = q + \delta q = \hat{O}(\sigma)\, q = (1 + \hat{G}(\sigma))\, q, \qquad (3)$$

i.e. $\delta q = \hat{G}(\sigma)\, q$. This can also be given meaning differentially.
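The following numerical sketch (ours, with an invented two-dimensional state and generator) illustrates how the triplet of eqn. (2) determines a trajectory by repeated application of the operator of eqn. (3).

```python
# Numerical sketch of eqns (2)-(3): a toy 2-dimensional observable q evolved
# by a fixed transition operator O = 1 + G. With sigma held constant there
# are no use-promises, so the trajectory is rigid and fully determined by
# the triplet <q0, t0, O>.

import numpy as np

G = np.array([[0.0, 0.1],
              [-0.1, 0.0]])   # generator G(sigma), invented for illustration
O_hat = np.eye(2) + G         # O(sigma) = 1 + G(sigma), cf. eqn (3)

q = np.array([1.0, 0.0])      # initial conditions q0 at time t0
for t in range(10):           # q(t_{i+1}) = O_hat q(t_i)
    q = O_hat @ q

print(np.round(q, 3))         # the state after ten deterministic steps
```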
We now wish to provide a theorem, which we express as a basic law.

Law 1 (Law of Inertia). An agent's observable properties follow a constant, deterministic trajectory $q(t)$ unless the agent also promises to use the value of an external source $\sigma$ to modify its transition matrix $\hat{O}(\sigma)$.
Proof. Each agent has access only to information promised to it, or already internal to it. A local promise $a_i \xrightarrow{f(\sigma)} a_j$ that depends on an externally promised parameter $\sigma$ is clearly a conditional promise $a_i \xrightarrow{f(\sigma)/\sigma} a_j$, where $\sigma$ is the value promised by another agent. In order to acquire the value of $\sigma$, we require $a_i \xrightarrow{-\sigma} a_j$ and a corresponding promise to provide $\sigma$ to $a_i$, either from the environment or from another agent. Thus, if an agent does not promise to use any input $\sigma$ from another agent, all of its internal variables and transition matrices must be constant.

Note also that, by the definitions in [2], a conditional promise is not a promise without a use-promise. This fits naturally with the argument in the theorem.
Corollary 1 (A conditional promise is not exact). By reversing the theorem, we see that a conditional promise must, by definition, have a residual degree of freedom, namely the value of the dependent condition.

Based on this property of promises, we can now classify behaviours by several characters:
– Rigid behaviour (deterministic): An agent whose observable properties do not depend on any external circumstances has rigid behaviour [8, 9]. This is true if and only if the agent has no use-promises ($-b$ for some $b$), and all other promises $+b'$ are exact promises. In this case the internal change operator $\hat{O}$ cannot depend on any external information and must therefore be constant; hence the trajectory is constant and deterministic.
– Reactive or adaptive behaviour: The observable properties of an agent $a_i$ change in response to a change in its input from another agent $a_j$ (or from the environment). This requires the promises:

$$a_{\rm local} \xrightarrow{-I} a_{\rm ext} \qquad (4)$$

$$a_{\rm local} \xrightarrow{+O(I)/I} a'_{\rm ext} \qquad (5)$$

where $I$ represents a promise of input from an external agent, and $O(I)$ represents a promise of some observable output to another external agent, which is conditionally a function of the input.
It makes sense to distinguish:
– Ordered behaviour: An agent whose observable properties change according to a deterministic algorithmic pattern.
– Disordered ("random") behaviour: The observable properties of an agent react or adapt to information from the environment, which changes in a non-monotonic and unpredictable manner.
Disorder (randomness) is assumed to come only from the environment, since no other
agent has sufficient complexity. However, we should be careful in making judgements
about this, as “randomness” is to some extent a philosophical admission of ignorance
rather than a real phenomenon, except in the quantum world. Finally, there is behaviour
that pertains to more than one agent.
– Collective behaviour: This refers to a collection of agents that are connected by promises of any type within a promise graph. Collective behaviour is assumed to involve interaction (i.e. we should disregard coincidentally similar behaviour with zero mutual information [4]). In any given temporal snapshot, agents can exhibit:
  • Differentiated behaviour: Agents are partitioned into a division of labour. This requires that agents be identifiable and that some process or entity within the ensemble decide the division. It is possible for the environment (e.g. boundary conditions) to determine this partitioning.
  • Undifferentiated behaviour: Agents play identical roles in the ensemble and require no specific labels, since all promises are to any accessible agent.
Note that undifferentiated behaviour could be coincidental, like a disordered gaseous phase of matter (all the component elements make identical unconditional promises but never interact with one another), or it could imply interaction (as in a possibly ordered solid phase). Normally we are only interested in the possibility of coordinated, collective phenomena, but the possibility of a phase transition from one to the other is clearly interesting.
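A minimal sketch of this classification, under our own assumptions: promises are encoded as (promiser, sign, body, promisee) tuples, "E" names the environment agent, and the exactness condition on + promises is not modelled.

```python
# Sketch of the behaviour classification. The tuple encoding is illustrative,
# and exactness of + promises is not modelled here.

def classify(agent, promises):
    uses = [p for p in promises if p[0] == agent and p[1] == "-"]
    if not uses:
        return "rigid"                           # no use-promises: Law 1
    if any(p[3] == "E" for p in uses):
        return "adaptive (possibly disordered)"  # environment admits disorder
    return "reactive/adaptive"

promises = [("a1", "-", "I", "a2"),              # a1 uses input I from a2
            ("a1", "+", "O(I)/I", "a3"),         # conditional output, eqn (5)
            ("a2", "+", "I", "a1")]
print(classify("a1", promises))                  # reactive/adaptive
print(classify("a2", promises))                  # rigid
```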
We now consider a rule that allows us to reduce a pattern of pre-existing promises
into an effective promise of coordination.
Law 2 (Rewriting rule for empirically collective behaviour). Let $a_i$ be a group of undifferentiated agents, for $i, j$ in their index set, all making the same promises to all others (a complete graph) within the group. The parallel combination of the promise bodies may be considered equivalent to a single new promise body of type $C(b)$, i.e.

$$\bigoplus_{\ell} \left( a_i \xrightarrow{b_\ell} * \right) \;\equiv\; a_i \xrightarrow{C(\oplus_\ell b)} * \qquad (6)$$

where $\ell$ runs over all the promise bodies that are common to the group and $*$ represents all agents in the ensemble.
This law is easily justified, as it reflects the observation that when a number of agents behave similarly in response to one another, and have no labels that otherwise distinguish special roles, an external observer can only say that all of the agents are behaving in a coordinated way. Thus the observer sees a coordinated group for all intents and purposes, although coordination was not formally agreed by the agents.
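A sketch of the rewriting rule, assuming an ensemble is given as a map from each agent to the set of promise bodies it makes to all others; if all sets are identical (an undifferentiated complete graph), the bundle reduces to a single coordination body.

```python
# Sketch of Law 2. The dict encoding of the ensemble is an illustrative
# assumption; the rule only applies when body sets are identical.

def rewrite_collective(ensemble):
    body_sets = list(ensemble.values())
    if any(bodies != body_sets[0] for bodies in body_sets):
        return None                     # differentiated: rule does not apply
    combined = " ⊕ ".join(sorted(body_sets[0]))
    return {agent: f"C({combined})" for agent in ensemble}

ensemble = {"a1": {"updates", "relay"},
            "a2": {"updates", "relay"},
            "a3": {"updates", "relay"}}
print(rewrite_collective(ensemble))     # each agent: 'C(relay ⊕ updates)'
```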
4 Leaky agents: noisy environment
Almost no real systems behave like the strictly autonomous agents of promise theory.
It is impossible to isolate systems from physical reality. However, there is a long tradi-
tion in the natural sciences of making laws for idealized “closed” systems. To obtain a
more realistic model of a system, we must give every agent a use-promise from the en-
vironment to allow non-specified environmental conditions to be explicitly modelled.
This is the route by which input will be admitted to an emergent functionality of the
group. The environment agent is naturally assumed to promise its information to all
other agents. Thus every real-world system plays the appointed role of a user of envi-
ronmental information. The exact nature of the environmental influence on each agent
might be different.
Definition 2 (Leaky agents). We define a leaky agent to be an agent making any promise to receive information from the environment $E$: $a_i \xrightarrow{-{\rm env}_i} E$.
5 Organization
The term ‘organization’ is widely used to refer to collective behaviour in the institutions
of business and society. Its intended meaning is not entirely clear in natural language,
but we shall provide a definition below that is unambiguous. One source of ambiguity
is the difference between organization and order. This is particularly so in the use of the terms self-organized and self-ordered for presumably emergent phenomena (see table 3).
Name                 Conscious               Result
Organization         Pre-planned             Purpose/Goal
Self-organization    Observed/Identified     Recognizable outcome

Table 3. Is organization planned or observed? Was it designed or did it evolve? It turns out that the main property of organization is a separation into identifiable components, allowing easy location of services within the whole.
The now-established term self-organization forces us to define the meaning of organization clearly, since it implies that organization may be something that is either identified a priori by design, or a posteriori as a system property.
Intuitively, we think of an organization as the implementation of a plan, or a discrete structural pattern, on a particular group of entities. "Organization" (from the Greek word for tool or instrument) implies to us a tidy compartmentalization of functions. We can represent such functions as agents' internal operations, and we know that all discrete combinatoric patterns are classified by grammars of the Chomsky hierarchy [10, 4], which may be formed from the alphabet of such operators. Moreover, patterns can be formed for different degrees of freedom; for instance:
– Spatial or role-based partitioning of operations between parallel agents.
– Temporal (schedule), i.e. serial ordering of operations at an agent.
For some, an organization also implies a conscious decision amongst a number of agents to work together, with a hierarchical structure and a leader. There is a separation of concerns, or division of labour, in the solution of a task. Many also believe in the value of reusability (a subjective valuation of implementation which could lead to an economic criterion for selection of one structure over another). These ideas are complementary.
We define an organization as a discrete pattern that is formed from interacting agents and which facilitates the achievement of a desired trajectory or task, i.e. a change from an initial state $Q_i$ to a final state $Q_f$ over a certain span of time. We refer to the discussion of systems in ref. [7] for the definition of a task.
Definition 3 (Organization). The term organization is both an adjective for a phenomenon and a noun for the arena in which that phenomenon takes place. We describe the arena as follows. Let $E$ be an ensemble of agents. The observables of $E$ can formally be written as a direct sum $Q = q_1 \oplus q_2 \oplus \ldots \oplus q_N$, but we do not assume that these are public knowledge to an actual agent. An organizational entity within the ensemble consists of the tuplet $Z = \langle G, Q, A, S \rangle$, where:

1. $G \subset E$ is a subset of agents with a connected promise graph $A_{ij}$.
2. $S$ is a string of matrix operators $\hat{O}_A(a_i \xrightarrow{\pm *} a_j)$ providing the observable changes, for some sequence index $A$, whose diagonal elements include the operations $\hat{O}_A(t, \sigma)$. $S$ spans all the observables in the ensemble, with column dimension $\sum_{i \in E} \dim(q_i)$, and modifies the observables of all agents: $\hat{O} Q = Q'$.
The property of organization can now be understood as a discrete pattern induced by $Z$. We discern two orthogonal types of organization (analogous to the longitudinal and transverse nature of wave patterns in a continuum description):
– Serial organization is the syntax of $S$. It is a property of single or multiple agents, and is a pattern in the ordering of operational changes $\hat{O}_A$, classified by a Chomsky grammar.
– Parallel organization is the partitioning of $Q$ induced by the irreducible modules of $\hat{O}_A$ at each serial step. This is a property of multiple agents and is characterized by the eigenstructure of $\hat{O}_A$, which defines natural regions in the graph [11].
An organization is more than a collection of agents, or a number of promises: it has to do with the observable properties of a group of agents and their trajectories. It is itself a form of structured behaviour. The degree of parallel organization is least when every agent evolves independently, with all agent operators on the diagonal and no intercommunication, and grows as the average size of the irreducible modules of $\hat{O}$ grows. The largest organization includes all agents, with input and output only to the environment.
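A numerical sketch of parallel organization, with an invented block operator: the irreducible modules of $\hat{O}$ can be read off as the connected components of its non-zero coupling pattern (here using SciPy, which we assume is available).

```python
# Sketch: the irreducible modules of O_hat acting on Q = q1 ⊕ q2 ⊕ q3
# appear as connected components of its non-zero coupling pattern.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import connected_components

O_hat = np.array([[1.0, 0.2, 0.0],    # agents 1 and 2 are mutually coupled;
                  [0.3, 1.0, 0.0],    # agent 3 evolves independently
                  [0.0, 0.0, 1.0]])

n_modules, labels = connected_components(sp.csr_matrix(O_hat != 0))
print(n_modules, labels)              # 2 modules: labels [0 0 1]
```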
A service or task here is a function that generally requires input and provides output.
We must therefore ask where this comes from and goes to. One often imagines that
this arrives from and leaves to “outside” an organization, so that an organization is
often understood to have some kind of boundary. However, note the following: The
interaction of two organizations, formed when one organization makes promises to the
other, and perhaps vice versa, is itself an organization. This is trivial from the definition,
but the implications are extensive.
First of all, since the implementation of promises is only partially ordered, several
inequivalent serial patterns could emerge; moreover, if the concept of organization is
extensible then we cannot argue to limit its boundary when it interacts with other agents
or organizations.
The only natural boundary for interaction is the limit of the whole promise graph (no
inside and outside), i.e. a system in which every promised output feeds into a promised
input of another agent, and every input is saturated with an output from another source.
One is then free to see organization and to partition the complete graph into organiza-
tional entities internally by any criteria. Roles are one approach: associated group (cen-
tralization) and coordinated group (peer to peer) are the two models for coordinating
behaviour. Centrality based on local maxima of importance measures [11] is another.
What is the purpose of identifying organizational boundaries? Organization can cheaply facilitate the location of necessary services within an ensemble, enabling the task being performed by it [12]. In an undifferentiated swarm there is nothing to identify a discrete structure, so any internal specialization is suppressed and the structure is hence amorphous, more like a liquid condensate than a polycrystalline structure.
6 Fixed point and irreducible behaviour (symbiosis)
Symbiosis is what happens when the output of one agent or organization is promised to the input of another agent or organization, and vice versa, and the result is a sustainable state or trajectory. The parts of the symbiote do not need to perform a function for agents outside their partnership, as long as they mutually quench each other's roles. This mutual closure is a basic topological configuration that allows the persistence of an operational relationship (an ecosystem)¹. The result is equilibrium. Although there is no room to pursue this here, our definition allows us to see stable generalizations of organization as the eigenvectors of strings of operators. Thus the natural axes of an organization can be computed by a kind of principal component analysis. The operators $\hat{O}_i$ can be internally reducible [13], in which case the eigenvector analysis would reveal a natural structure of separable entities within the boundaries of the closed behaviour, each of which is built from atomic agents. This is like examining a cell under a microscope.
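A toy numerical reading of this fixed-point picture (ours, with invented matrices): two agents feed each other's inputs, the combined exchange operator is $M = O_2 O_1$, and a sustainable state is an eigenvector of $M$ with eigenvalue 1, found here by power iteration.

```python
# Toy reading of symbiosis as a fixed point. The matrices are invented
# for illustration; real operators come from the promise graph.

import numpy as np

O1 = np.array([[0.5, 0.5], [0.5, 0.5]])   # agent 1's response to agent 2
O2 = np.array([[0.5, 0.5], [0.5, 0.5]])   # agent 2's response to agent 1
M = O2 @ O1                               # one full mutual exchange

q = np.array([1.0, 0.0])
for _ in range(50):                       # power iteration toward the
    q = M @ q                             # dominant eigenvector
    q /= np.linalg.norm(q)

print(np.round(q, 3), round(float(q @ M @ q), 3))  # fixed state, eigenvalue 1
```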
7 Subjective reduction: emergence and swarms
When is behaviour designed and when does it emerge? Promises are given or designed
and outcomes emerge. Neither promises nor programming exactly determine behaviour
in general. Rather, we must look to the spectrum of observable outcomes, as the interplay between freedoms and constraints in the agents [7, 4].
¹ Creationists believe that life has a purpose and that it must therefore have been designed. Evolutionists believe that life has no purpose, but that ecologies are self-satisfying organizations that persist because they can.
Definition 4 (Emergent behaviour). A pattern of change in the observable properties
of a group of leaky agents exhibiting non-rigid, collective behaviour which, in dialogue
with its environment, leads to a perceived organization that was not explicitly promised.
This definition is intentionally subjective, and requires some work on the part of the reader. We believe the identification of emergent behaviour is subjective, as its utility is essentially fortuitous and depends on what probes it from outside.

The interplay between locally determined collective behaviour and the value it holds for participating agents, within their environmental context, is key to shaping the patterns that are observed. The value to an observer depends on whether the behaviour fulfils a role that is needed. We believe that this idea of serendipitous recognition fits the spirit in which the term is used and intended in the literature.
For example, consider a number of ad hoc transceiver agents who promise to follow some behavioural algorithm that results in the formation of an aesthetic pattern, or an unexpected functional side effect, like message delivery. In either case there is a subjective value associated with the outcome. Ants forming concentric circles in an environment have been considered emergent [14]; however, the significance of the circles is a human aesthetic.
We assume that emergent behaviour can be understood in terms of promises only through an algebraic reduction, since by its very definition it is unexpected. Emergent behaviour should be measurable by the same standards as any other kind of behaviour. We propose that the residual freedoms in the agents that are not constrained are, in fact, selected from by information received from outside the agent (symbiosis with the environment), resulting in patterns of behaviour that are unexpected, but which nevertheless lie within the bounds of the promises given (see fig. 2).
Fig. 2. Emergent behaviour requires the environment, or other agents, to supply additional constraints that select a particular policy from the residual degrees of freedom. If the promised consequences of the promised behaviour are valued by the promiser, then it will likely continue to keep its promise.
An example of emergent behaviour often cited is the idea of a swarm. Many definitions of swarms have been offered [14–21]. It is interesting to ask what causes swarm-like behaviour to be recognizable, i.e. what are the necessary and sufficient conditions for a swarm².
Definition 5 (Emergent group or Swarm). A collection of leaky agents that may be
seen by any external observer as exhibiting undifferentiated, collective behaviour.
What remains to be discussed at length is the economic aspect of swarm formation, in motivating the right promises. Curiously, this is somewhat analogous to classical mechanics, in which Newton's laws offer the promise of behaviour but energy shapes the forces and hence the changes; or likewise quantum mechanics, in which a wavefunction offers a promise of behaviour but energy drives the observable transitions. Swarm behaviour is an interplay between locally promised changes and the value of these in relation to environmental factors (see fig. 2). There is not room to address this issue here, but it must be seen as a central challenge in completing this description of emergent behaviour, and it is encouraging that we recognize familiar patterns of mechanics in our story. The analogy to Newton's laws is natural, even comforting, because they represent the simplest, most basic statements about the meaning of change, which must be present in any dynamical system. The laws are essentially the same, within the framework of a different descriptive model.
Example: routing. It is possible to interpret traffic routing [22] as an example of emergent behaviour amongst autonomous agents that make simple, identical promises to one another. Each agent in the routing cloud promises to provide and receive topology-change information and to relay traffic to at least one neighbour based on a metric condition. Let $a_i, a_j$ be agents in a group with promises:

$$a_i \xrightarrow{\pm {\rm traffic}} a_j$$
$$a_i \xrightarrow{\pm {\rm updates}} a_j$$
$$a_i \xrightarrow{\pm {\rm relay}/\chi({\rm metric})} a_j \qquad (7)$$

where it is assumed that "updates" include the receipt of metric information. Using law 2 of undifferentiated groups, we can conjoin the latter two promises into a single compound type of cooperative promise and call it "routing":

$$a_i \xrightarrow{\pm {\rm updates}} a_j \;\oplus\; a_i \xrightarrow{\pm {\rm relay}/\chi({\rm metric})} a_j \;\equiv\; a_i \xrightarrow{\pm C({\rm routing})} a_j \qquad (8)$$
and hence, from ref. [2], this implies the existence of an effective promise, named "routing", to an arbitrary or even hypothetical external observer, by each pair conjoined by $\pm C({\rm routing})$ promises. Thus, if promises are kept, there will be a consequence of routing that we can interpret as emergent, since each agent exhibits behaviour based on information from its neighbouring agents. The sceptical reader might see this as unnatural, since routing was designed to work in this way. Our contention is that, designed or not, whether one considers behaviour to be emergent is a subjective matter. Emergent behaviour is simply ordinary functional behaviour that requires no special magic to understand, only the right point of view.

² In the literature, researchers have been more interested in what swarms do than in what they are.
A final question we must ask about this routing 'swarm': if it is to have the emergent function that is claimed (routing), not only seen by an external observer but used by an external agency for communication, how will data get into the cloud in order to be routed? We must hence define the boundaries of the group ("inside" and "outside") and make sure that the boundary is not closed to input and output, in order to complete the picture.
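A minimal sketch of the conjunction in eqn. (8), with promise bodies encoded as signed strings (our illustrative encoding, not the paper's notation):

```python
# Sketch of eqn (8): a parallel bundle of signed promise bodies is conjoined
# into a single cooperative body ±C(routing).

def bundle(promise_bodies, compound_name):
    """Conjoin a parallel bundle of signed promise bodies into ±C(compound)."""
    signs = {body[0] for body in promise_bodies}
    assert signs <= {"+", "-"}
    return {f"{sign}C({compound_name})" for sign in signs}

bodies = {"+updates", "-updates", "+relay/chi(metric)", "-relay/chi(metric)"}
print(bundle(bodies, "routing"))   # {'+C(routing)', '-C(routing)'}
```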
8 Laws of interaction
In swarm intelligence, authors often speak of stigmergic communication, or communication through an intermediate medium, such as leaving a trail or a message.

Theorem 1 (Non-rigidity and intermediate leaky agents). Stigmergic communication (involving an intermediate agent) in an environment can never guarantee rigid behaviour, unless all agents are fully isolated from the environment.
Proof. Consider the transfer of information from $a_1$ via $a_2$ to $a_3$. Let $d$ represent a promise body to make data available. The transfer of information through $a_2$ requires (at a minimum) the promises:

$$a_1 \xrightarrow{+d_{12}} a_2 \qquad (9)$$
$$a_2 \xrightarrow{+d_{13}/+d_{12}} a_3 \qquad (10)$$
$$a_2 \xrightarrow{-d_{12}} a_1 \qquad (11)$$
$$a_i \xrightarrow{-{\rm env}} E, \quad i = 1, 2, 3 \qquad (12)$$

Agent $a_2$ is leaky and therefore makes a use-promise to the environment. Hence $a_3$ cannot know whether environmental information or information from $a_2$ has been injected into the data it received. Suppose $a_1$ were to encode the data in such a way that $a_3$ could verify transmission. This would require a direct promise from one to the other, using a direct carrier, since $a_3$ knows only what it receives, not where the information came from. As long as an intermediate agent is present, corruption is possible.
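A toy sketch of the scenario in the proof, with invented numeric data: the leaky intermediary $a_2$ mixes an environmental term into the value it relays, and $a_3$ receives only the sum.

```python
# Toy sketch of Theorem 1's scenario: a3 sees one number and cannot
# separate a2's relayed data from the environmental injection.

import random

def a1_provide():              # promise +d12
    return 42.0

def a2_relay(d12, env):        # promise +d13/+d12, plus a -env use-promise
    return d12 + env           # environmental injection, invisible to a3

env_term = random.gauss(0.0, 1.0)        # promised by the environment E
d13 = a2_relay(a1_provide(), env_term)
print(d13)                     # origin of any corruption is ambiguous to a3
```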
We note that all communication is, in fact, stigmergic to some degree, and therefore this theorem has fundamental consequences for system design. It shows the need for local maintenance of local promise goals.

Corollary 2 (Stigmergic communication implies possible emergence). If the communication underlying a use-promise between leaky agents is not direct, then one should expect emergent consequences.
We can now restate our paraphrased version of mechanics, using the formulations of the previous laws, and in terms of clear statements about state transitions:

Law 3 (External interaction). The change or variation of promised behaviour $\delta\big(a_i \xrightarrow{\delta q} a_{\rm ext}\big)$ in an agent is proportional to the promised action $A\big(a_i \xrightarrow{-\Sigma} a_{\rm ext}\big)$ of an external source $\Sigma$ used by the agent, for small disturbances.
Proof. We define the generalized trajectory or behavioural momentum for type $\tau$ by

$$\delta_\tau q = \hat{G}_\tau\, q \qquad (13)$$

where $\hat{G}_\tau$ is the matrix-valued generator of behaviours of type $\tau$; see eqn. (3). Now, let $\Sigma_\tau$ be a transformation matrix from outside the agent, where $a_i \xrightarrow{-\Sigma_\tau} a_{\rm ext}$, and this generates the transformation $\hat{G}_\tau \to \Sigma_\tau \hat{G}_\tau \Sigma_\tau^T$. For small transformations we may write $\Sigma_\tau \simeq I + \sigma_\tau$, and

$$\delta \hat{G}_\tau = \sigma_\tau^T \hat{G} + \hat{G} \sigma_\tau + \sigma_\tau^T \hat{G} \sigma_\tau \qquad (14)$$

Thus from eqn. (13) we have

$$\delta_\tau^2 q = \delta_\tau \hat{G}_\tau\, q + \hat{G}_\tau\, \delta_\tau q = -\frac{\delta V}{\delta q^T}, \qquad (15)$$

where the scalar interaction potential is defined by $V = -q^T \big[\delta_\tau(\sigma_\tau) \hat{G} + \hat{G}_\tau^2\big] q$. For completeness, we can write (15) suggestively in promise notation, defining the interaction valuation function of a promise $A(\cdot)$:

$$\delta\left(a_i \xrightarrow{\delta_\tau q} a_{\rm ext}\right) = A\left(a_i \xrightarrow{-\Sigma_\tau} a_{\rm ext}\right) \qquad (16)$$

With the action–reaction law, which is an axiom in promise theory, this completes the mechanics of behaviour in promise theory.
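Eqn. (14) can be checked numerically. The following sketch (ours) uses the index placement $\hat{G} \to \Sigma^T \hat{G} \Sigma$, which is the convention under which (14) is exact; the matrices are randomly generated for illustration only.

```python
# Numerical check of eqn (14): the transformed generator differs from G by
# exactly sigma^T G + G sigma + sigma^T G sigma.

import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(3, 3))               # generator G_tau
sigma = 1e-2 * rng.normal(size=(3, 3))    # small external transformation
Sigma = np.eye(3) + sigma                 # Sigma ~ I + sigma

dG_exact = Sigma.T @ G @ Sigma - G
dG_formula = sigma.T @ G + G @ sigma + sigma.T @ G @ sigma
print(np.allclose(dG_exact, dG_formula))  # True: the expansion is exact
```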
9 Falsification
Our use of "promise theory" begs two questions: is this a theory (can it make falsifiable predictions), and is emergent behaviour a real phenomenon? A full answer to these questions is beyond the scope of this paper, but we make the following comments. The language of promises alone is not a theory; it is merely a language, and hence statements in it can only be proven true or false as a matter of definition. The present paper is about completing the definitions required to describe promises that change dynamically.
There are, however, other properties that arise from the assumptions of promise theory themselves. Given agent autonomy, for instance, the assumption of leaky agents predicts that there will be emergent behaviour, according to the definitions we have provided. This is a prediction that is verifiable only by observation. The language of promises allows us to be clear about the predictions that follow from these assumptions, within the promise language. We have also predicted that certain promise structures will have fixed points that lead to equilibrium stability. This follows from the network nature of promises, and it is non-trivial in general to complete the description leading to this, for reasons of algebra that we cannot go into here. We predict, from the economics of promises, that symbiosis will be common. These predictions must also be observed in actual systems.
As with Newton’s laws (to which we have alluded in the paper), the expressions we
have written down essentially define what is meant by change in the basic observables
of the system. Our paper is one of definition and clarification, not verification, so we
can at best observe that changes occur and that our definitions change appropriately
with them. This is why we state these rules as laws not models. Some work has to be
done before all the necessary relationships between promises can be incorporated into
a detailed falsifiable model of dynamic system behaviour.
10 Conclusions
Can the whole be greater than the sum of its parts? What does this mean? Recall the performance of $N$ single-server queues, compared with the performance of a single queue with $N$ servers [4]. The latter performs provably "greater than or equal to" the former, and yet the sum of the parts is the same. Functional behaviour allows for reinterpretation of resources. If one is allowed to re-order the relationships between input and output, then one can take advantage of resources that would normally go to waste; but the actual physical properties of the system have not changed. Thus we see how the phrase can come about, without any magic, provided there is interaction.
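This comparison can be made concrete with a standard M/M/c calculation (our worked example, not one computed in the paper): $N$ separate M/M/1 queues versus one shared queue with $N$ servers, at the same total load.

```python
# Worked queueing comparison under standard M/M/c assumptions: N separate
# M/M/1 queues versus one shared queue with N servers at equal total load.

from math import factorial

def erlang_c(c, a):
    """Probability of waiting in an M/M/c queue with offered load a = lam/mu."""
    top = a**c / factorial(c) * c / (c - a)
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

lam, mu, N = 0.8, 1.0, 4                         # per-queue rates, N servers
w_separate = lam / (mu * (mu - lam))             # M/M/1 mean queueing delay
a = N * lam / mu                                 # offered load, shared system
w_shared = erlang_c(N, a) / (N * mu - N * lam)   # M/M/N mean queueing delay
print(round(w_separate, 3), round(w_shared, 3))  # 4.0 vs ~0.746: shared wins
```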
When writers speak of emergence, they tend to think of self-organization, one special kind of emergence. The behaviour of ants and termites is particularly hallowed. The interesting property here is a reduction in entropy, or the disorder of the system, according to some measuring scale. According to our definitions, all collective behaviour is emergent behaviour, since the environment behaves just like a super-agent. We feel that this is an advantage. As long as simple definitions explain the observations, Occam's razor allows us to cut away the mysticism that infects descriptions of emergent behaviour.
The term "behaviour" is used loosely in computer science, often with little clarification. Many of the questions about behaviour have been answered in the natural sciences. We have attempted to offer a usable description of organization and behaviour based on the long scientific tradition of describing observable characters, within the language of promises. The descriptions given here offer a platform from which to clarify a number of issues in autonomic systems in particular. There are unanswered questions about the subjective nature of agent perceptions that motivate the need for a proper theory of measurement based on promise agents. From such a theory it should be possible to decide whether peer-to-peer and centralized systems are comparable organizations with interchangeable properties, or whether they are two fundamentally different things.
We believe, moreover, that it is possible to go further and define mechanical, material properties for promise graphs, by analogy to how physics describes the large-scale properties of matter from an atomic model. Why is wood strong and glass brittle? Why is one computational structure robust and another fragile? These are analogous questions, about scale as well as the underlying promises that bind the parts into a whole. We must work towards suitable and useful definitions of these properties. We believe that such definitions must follow from promise theory or something like it. We return to these issues in future work.

This work is supported by the EC IST-EMANICS Network of Excellence (#26854).
References
1. Mark Burgess. An approach to understanding policy based on autonomy and voluntary
cooperation. In IFIP/IEEE 16th international workshop on distributed systems operations
and management (DSOM), in LNCS 3775, pages 97–108, 2005.
2. M. Burgess and S. Fagernes. Pervasive computing management: A model of network policy
with local autonomy. IEEE Transactions on Software Engineering, page (submitted).
3. M. Burgess and S. Fagernes. Voluntary economic cooperation in policy based management.
IEEE Transactions on Network and Service Management, page (submitted).
4. M. Burgess. Analytical Network and System Administration — Managing Human-Computer
Systems. J. Wiley & Sons, Chichester, 2004.
5. M. Burgess and A. Couch. Autonomic computing approximated by fixed point promises.
Proceedings of the 1st IEEE International Workshop on Modelling Autonomic Communica-
tions Environments (MACE); Multicon verlag 2006. ISBN 3-930736-05-5, pages 197–222,
2006.
6. M. Burgess and S. Fagernes. Autonomic pervasive computing: A smart mall scenario using
promise theory. Proceedings of the 1st IEEE International Workshop on Modelling Auto-
nomic Communications Environments (MACE); Multicon verlag 2006. ISBN 3-930736-05-5,
pages 133–160, 2006.
7. M. Burgess. On the theory of system administration. Science of Computer Programming,
49:1, 2003.
8. J.M. Hendrickx et al. Rigidity and persistence of three and higher dimensional forms. In
Proceedings of the MARS 2005 Workshop on Multi-Agent Robotic Systems, page 39, 2005.
9. J.M. Hendrickx et al. Structural persistence of three dimensional autonomous formations. In
Proceedings of the MARS 2005 Workshop on Multi-Agent Robotic Systems, page 47, 2005.
10. H. Lewis and C. Papadimitriou. Elements of the Theory of Computation, Second edition.
Prentice Hall, New York, 1997.
11. G. Canright and K. Engø-Monsen. A natural definition of clusters and roles in undirected
graphs. Science of Computer Programming, 53:195, 2004.
12. S. Johnson. Emergence. Penguin Press, 2001.
13. M. Burgess, G. Canright, and K. Engø. Importance-ranking functions from the eigenvectors of directed graphs. Journal of the ACM (Submitted), 2004.
14. E. Bonabeau, M. Dorigo, and G. Theraulaz. Swarm Intelligence: From Natural to Artificial
Systems. Oxford University Press, Oxford, 1999.
15. J. Kennedy and R.C. Eberhart. Swarm Intelligence. Morgan Kaufmann (Academic Press),
2001.
16. M. Wooldridge. An Introduction to MultiAgent Systems. Wiley, Chichester, 2002.
17. G. Di Caro and M. Dorigo. Antnet: Distributed stigmergetic control for communications
networks. Journal of Artificial Intelligence Research, 9:317–365, 1998.
18. L. Arlotti, A. Deutsch, and M. Lachowicz. On a discrete boltzmann type model of swarming.
Math. Comp. Model, 41:1193–1201, 2005.
19. S. Kazadi. Swarm Engineering. PhD thesis, California Institute of Technology, 2000.
20. F. Heylighen. Open Source Jahrbuch, chapter Why is Open Access Development so Success-
ful? Stigmergic organization and the economics of information. Lehrmanns Media, 2007.
21. J.H. Holland. Emergence: from chaos to order. Oxford University Press, 1998.
22. C. Huitema. Routing in the Internet (2nd edition). Prentice Hall, 2000.