Modeling Multi-Order Adaptive Processes
by Self-Modeling Networks
Jan TREUR1
Social AI Group, Vrije Universiteit Amsterdam, De Boelelaan 1111, 1081HV
Amsterdam, the Netherlands
Abstract. A self-modeling network for some base network is a network extension
that represents part of the base network structure by a self-model in terms of added
network nodes and connections for them. By iterating this construction, multi-order
network adaptation is easily obtained. A dedicated software environment for self-
modeling networks that has been developed supports the modeling and simulation
processes. This will be illustrated for a number of adaptation principles from a
number of application domains.
Keywords. Adaptive network, self-modeling network, multi-order adaptive
1. Introduction
A self-modeling network is a network that represents part of its own network structure
by a self-model in terms of dedicated network nodes and connections for them. A
network structure can be described by network characteristics for connectivity for
connections between nodes, aggregation for combining multiple incoming impacts on a
node, and timing for the speed of node state dynamics; e.g., [1, 2, 3]. Any base network
can be extended to a self-modeling network for it, by adding a self-model for part of the
base network’s structure. In this case, the added self-model consists of a number of added
nodes representing specific characteristics of the base network structure, such as
connection weights and excitability thresholds, plus connections for these added self-
model nodes. For the approach considered here, in general nodes in a network are
assumed to have activation levels that can change over time due to impact from other
nodes from which they have incoming connections. If, in particular, the nodes of a self-model representing some of the network characteristics of a base network are dynamic, these base network characteristics become adaptive. An adaptive base network is thus obtained, in the sense that adaptation of the base network is modeled by the dynamics within the self-modeling network extending the base network.
Moreover, multi-order network adaptation can be obtained by iterating this self-
modeling construction. If multi-order self-models are included in a self-modeling
network, any included self-model (of some order) can have its own (next-order) self-
model within the overall network where the latter self-model represents some of the
network characteristics of the former self-model. For example, this makes it possible to control the dynamics of self-models, so that self-controlled adaptive networks are obtained.
1 Corresponding Author: Jan Treur, Social AI Group, Vrije Universiteit Amsterdam, De Boelelaan 1111,
1081HV Amsterdam, the Netherlands; Email: j.treur@vu.nl.
Machine Learning and Artificial Intelligence
A.J. Tallón-Ballesteros and C. Chen (Eds.)
© 2020 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/FAIA200784
A dedicated software environment for self-modeling networks that has been
developed supports these modeling and simulation processes; see [3], Ch. 9. In this paper,
for a number of adaptation principles from different application domains, it will be
illustrated how they can be modeled by proper pre-specified self-models that can be used
as building blocks to extend any base network to make it adaptive.
In the paper, first in Section 2 the modeling approach from [3] based on self-
modeling networks is briefly described. In Section 3 nine different adaptation principles
from the Cognitive Neuroscience and Social Science literature are described. Next, in
Section 4, for the adaptation principles described in Section 3 it is shown in more detail
how they can be modeled by self-models. Section 5 is a discussion.
2. Networks Using Self-Models: Self-Modeling Networks
In this section, the network-oriented modeling approach from [3] that is used here is briefly introduced.
Following [3, 4], a temporal-causal network model is characterized by (here X and Y denote nodes of the network, also called states):
• Connectivity characteristics: connections from a state X to a state Y and their weights ωX,Y
• Aggregation characteristics: for any state Y, some combination function cY(..) (usually with some parameters) defines the aggregation that is applied to the impacts ωX,Y X(t) on Y from its incoming connections from states X
• Timing characteristics: each state Y has a speed factor ηY defining how fast it changes for given impact.
The following difference (or differential) equations that are used for simulation purposes and also for analysis of temporal-causal networks incorporate these network characteristics ωX,Y, cY(..), ηY in a standard numerical format:

Y(t+Δt) = Y(t) + ηY [cY(ωX1,Y X1(t), …, ωXk,Y Xk(t)) − Y(t)] Δt        (1)

for any state Y, where X1 to Xk are the states from which Y gets its incoming connections. Here the overall combination function cY(..) for state Y is the weighted average of the available basic combination functions cj(..) by specified weights γj,Y (and parameters π1,j,Y, π2,j,Y of cj(..)) for Y:

cY(V1, …, Vk) = [γ1,Y c1(V1, …, Vk) + … + γm,Y cm(V1, …, Vk)] / [γ1,Y + … + γm,Y]        (2)
Equations (1) and (2) are hidden inside the dedicated software environment; see [3], Ch. 9. Within this software environment, currently around 40 useful basic combination functions are included in a combination function library; see Table 1 for some of them. The above concepts make it possible to design network models and their dynamics in a declarative manner, based on mathematically defined functions and relations.
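As an informal illustration of Eq. (1) and (2), the following minimal Python sketch runs this update rule for a toy network; the state names, weights, and the scaled-sum function used here are assumptions for the example, and this is not the dedicated software environment of [3].

```python
# Minimal sketch of the temporal-causal update rule of Eq. (1) and (2).
# Illustrative only; state names, weights, and functions are assumed for the example.

def aggregate(c_funcs, gammas, impacts):
    """Eq. (2): weighted average of basic combination functions applied to the impacts."""
    total = sum(g * c(impacts) for g, c in zip(gammas, c_funcs))
    return total / sum(gammas)

def simulate(initial, connections, c_funcs, gammas, speed, dt=0.5, steps=20):
    """Eq. (1): Y(t+dt) = Y(t) + eta_Y [ c_Y(omega_X1,Y X1(t), ..., omega_Xk,Y Xk(t)) - Y(t) ] dt."""
    state = dict(initial)
    for _ in range(steps):
        new_state = dict(state)
        for Y in state:
            incoming = connections.get(Y, [])
            if not incoming:
                continue  # states without incoming connections keep their value
            impacts = [w * state[X] for X, w in incoming]
            agg = aggregate(c_funcs[Y], gammas[Y], impacts)
            new_state[Y] = state[Y] + speed[Y] * (agg - state[Y]) * dt
        state = new_state
    return state

# Example: one state Y driven by X1 and X2 via a scaled-sum (first-order Euclidean) function.
ssum = lambda impacts: sum(impacts) / 2.0   # scaling factor lambda = 2
result = simulate(
    initial={'X1': 1.0, 'X2': 0.6, 'Y': 0.0},
    connections={'Y': [('X1', 0.8), ('X2', 0.5)]},   # (source, omega_X,Y)
    c_funcs={'Y': [ssum]}, gammas={'Y': [1.0]},
    speed={'X1': 0.0, 'X2': 0.0, 'Y': 0.5},
)
print(result['Y'])   # Y gradually moves toward the aggregated impact 0.55
```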
Table 1. Examples of basic combination functions from the library.
Euclidean: eucln,λ(V1, …, Vk) = ((V1^n + … + Vk^n)/λ)^(1/n); parameters: order n > 0, scaling factor λ > 0
Advanced logistic sum: alogisticσ,τ(V1, …, Vk) = [1/(1 + e^(−σ(V1+…+Vk−τ))) − 1/(1 + e^(στ))] (1 + e^(−στ)); parameters: steepness σ > 0, excitability threshold τ
Scaled maximum: smaxλ(V1, …, Vk) = max(V1, …, Vk)/λ; parameter: scaling factor λ > 0
Scaled minimum: sminλ(V1, …, Vk) = min(V1, …, Vk)/λ; parameter: scaling factor λ > 0
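As a rough illustration, two of these library functions could be rendered in Python as follows; the list-based calling convention is an assumption for this sketch and may differ from the actual library implementation in [3].

```python
import math

def eucl(impacts, n=1, lam=1.0):
    """Euclidean combination function: n-th root of (V1^n + ... + Vk^n) / lambda."""
    return (sum(v ** n for v in impacts) / lam) ** (1.0 / n)

def alogistic(impacts, sigma=5.0, tau=0.5):
    """Advanced logistic sum: shifted and rescaled logistic of the summed impacts (maps 0 to 0)."""
    s = sum(impacts)
    raw = 1.0 / (1.0 + math.exp(-sigma * (s - tau))) - 1.0 / (1.0 + math.exp(sigma * tau))
    return raw * (1.0 + math.exp(-sigma * tau))

print(eucl([0.3, 0.4], n=2, lam=2.0))          # scaled second-order Euclidean value
print(alogistic([0.3, 0.4], sigma=8, tau=0.5)) # logistic aggregation with threshold 0.5
```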
Note that there is a crucial distinction for network models between network characteristics and network states. Network states have values (their activation levels) and are explicit representations that may be accessible to other network states by connections to and from them, and can be handled or manipulated in that way. They can be considered to provide an informational view on the network; usually the states are assumed to have a certain informational content. In contrast, network characteristics (such as connection weights and excitability thresholds) have values (their strengths) and determine (e.g., cognitive) processes and behavior in an implicit, automatic manner. They can be considered to provide an embodiment view on the network. In principle, these characteristics are by themselves not directly accessible or observable for network states: connections can be made between states, but not between network characteristics or between states and characteristics.
As indicated above, ‘network characteristics’ and ‘network states’ are two distinct
concepts for a network. Self-modeling is a way to relate these distinct concepts to each
other in an interesting and useful way. A self-model makes network characteristics (such as connection weights and excitability thresholds) explicit by adding states (called self-model states) for these characteristics, together with connections for these additional states. Thus, the network gets an internal self-model of part of its network
structure: it explicitly represents information about its own network structure. In this way,
by iteration different self-modeling levels can be created where network characteristics
from one level relate to network states at a next level. Thus, an arbitrary number of self-
modeling levels can be modeled, covering second-order or higher-order effects.
More specifically, adding a self-model to a temporal-causal base network is done in such a way that for some of the states Y of the base network and some of the network structure characteristics for connectivity, aggregation and timing (i.e., some of ωX,Y, γj,Y, πi,j,Y, ηY), additional network states WX,Y, Cj,Y, Pi,j,Y, HY (self-model states or reification states) are introduced and connected to other states:
a) Connectivity self-model
• Self-model states WX,Y are added representing connectivity characteristics, in particular connection weights ωX,Y
b) Aggregation self-model
• Self-model states Cj,Y are added representing aggregation characteristics, in particular combination function weights γj,Y
• Self-model states Pi,j,Y are added representing aggregation characteristics, in particular combination function parameters πi,j,Y
c) Timing self-model
• Self-model states HY are added representing timing characteristics, in particular speed factors ηY
The notations WX,Y, Cj,Y, Pi,j,Y, HY for the self-model states indicate the referencing relation with respect to the characteristics ωX,Y, γj,Y, πi,j,Y, ηY: here W refers to ω, C refers to γ, P refers to π, and H refers to η, respectively. For the processing, these self-model states define the dynamics of any state Y in a canonical manner according to Eq. (1) and (2), whereby the values of ωX,Y, γj,Y, πi,j,Y, ηY are replaced by the state values of WX,Y, Cj,Y, Pi,j,Y, HY at time t, respectively.
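A minimal sketch of this substitution idea in Python, assuming a simple dictionary-based representation (not the actual implementation): the value of a self-model state such as WX,Y, when present, is used in place of the corresponding static characteristic ωX,Y.

```python
# Sketch: during the update of Y, the value of self-model state W_X,Y (if present)
# is used in place of the static connection weight omega_X,Y.
# Names and the dictionary-based representation are assumptions for illustration.

def weight(state, X, Y, omega):
    """Use the W_X,Y self-model state value when it exists, else the static weight."""
    return state.get(('W', X, Y), omega.get((X, Y), 0.0))

state = {'X': 1.0, 'Y': 0.2, ('W', 'X', 'Y'): 0.7}   # W_X,Y is itself a (dynamic) state
omega = {('X', 'Y'): 0.3}                            # static value, overridden by W_X,Y
impact_on_Y = weight(state, 'X', 'Y', omega) * state['X']
print(impact_on_Y)   # 0.7, not 0.3: the self-model state value is the one actually used
```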
Note that concerning the terminology used, only the states that represent some
network characteristics are called self-model states. The states to which these self-model
states are connected still belong to the self-model (e.g., as depicted in Figure 1 and
further) but they can either be other self-model states or other states that are not self-
model states, such as the states X and Y. An example of an aggregation self-model state Pi,j,Y for a combination function parameter πi,j,Y is the one for the excitability threshold τY of state Y, which is the second parameter of a logistic sum combination function; then Pi,j,Y is usually indicated by TY, where T refers to τ. The network constructed by the addition of
a self-model to a base network is called a self-modeling network or a reified network for
this base network. This constructed network is also a temporal-causal network model
itself, as has been shown in [3], Ch. 10; for this reason, this construction can easily be
applied iteratively to obtain multiple levels or orders of self-models, in which case the
resulting network is called a multi-order or higher-order self-modeling network or reified
network.
3. Adaptation Principles from Different Domains
In this section, a number of adaptation principles of different orders are described as can
be found in the literature on Cognitive Neuroscience and Social Sciences.
3.1. First-order Adaptation Principles
First-order adaptation principles for some base network address adaptation of some of
the base network’s characteristics concerning its connectivity, aggregation of multiple
connections and timing of node state dynamics. Much research has focused in particular
on learning of connectivity characteristics based on adaptive connections, but also other
characteristics can be made adaptive, as will be discussed.
3.1.1. The Hebbian Learning Adaptation Principle
As a first example, for mental or neural networks, the Hebbian learning adaptation
principle [5] can be formulated by:
‘When an axon of cell A is near enough to excite B and repeatedly or persistently (3)
takes part in firing it, some growth process or metabolic change takes place in one
or both cells such that A’s efficiency, as one of the cells firing B, is increased.’
[5], p. 62
This is sometimes simplified (neglecting the phrase ‘one of the cells firing B’) to:
‘What fires together, wires together’ [6, 7]
This can easily be modeled by using a connectivity self-model based on self-model states
WX,Y representing connection weights ωX,Y.
3.1.2. The Bonding by Homophily Adaptation Principle
An example of the use of a network’s self-model for the social domain is the bonding by
homophily adaptation principle
‘Birds of a feather flock together’ (4)
This expresses how being ‘birds of a feather’ or ‘being alike’ strengthens the connection
between two persons [8-13]. Similar to the Hebbian learning case, this can be modeled
by a social network’s connectivity self-model based on self-model states WX,Y
representing connection weights ωX,Y.
3.1.3. The More Becomes More Adaptation Principle
Another first-order adaptation principle for social networks is the ‘more becomes more’
principle expressing that more popular people attract more connections:
‘Persons with more connections attract more connections’ [4], p. 311 (5)
In a wider context this more becomes more principle relates to what sometimes is
called ‘the rich get richer’ [14, 15], ‘cumulative advantage’ [16], ‘the Matthew effect’
[17] or ‘preferential attachment’ [18]. Similar to the Hebbian learning and bonding by
homophily cases, this can be modeled by a social network’s connectivity self-model
based on self-model states WX,Y representing connection weights ωX,Y.
3.1.4. The Interaction Connects Adaptation Principle
The idea behind the Interaction Connects adaptation principle from Social Science is that
‘The more interaction you have with somebody, the stronger you will become connected’ (6)
See, for example, [19-23]. Similar to the Hebbian learning and bonding by homophily
cases, this can be modeled by a social network’s connectivity self-model based on self-
model states WX,Y representing connection weights ωX,Y.
3.1.5. The Enhanced Excitability Adaptation Principle
Although connectivity adaptation has gained some popularity in the literature, other characteristics can be made adaptive as well. Instead of a connectivity self-model to model adaptive connection weights, an aggregation self-model can also be used, for example, to model intrinsic neuronal excitability, as described in [24]:
‘Long-lasting modifications in intrinsic excitability are manifested in changes (7)
in the neuron's response to a given extrinsic current (generated by synaptic
activity or applied via the recording electrode).’ [24], p. 30
This form of adaptation can be modeled by an aggregation self-model based on self-
model states TY for adaptive excitability thresholds. For example, this type of self-model
has been used to model adaptation (desensitization) to spicy food; see [25].
3.2. Second-Order Adaptation Principles
The examples of adaptation principles in Section 3.1 refer to forms of plasticity, which
can be described by a first-order adaptive network that is modelled using a dynamic first-
order self-model for connectivity or aggregation characteristics of the base network, in
particular for the connection weights and/or the excitability thresholds used in
aggregation. Whether or not and to which extent such plasticity as described above
actually takes place is controlled by a form of metaplasticity; e.g. [26-31].
3.2.1. The Exposure Accelerates Adaptation Speed Adaptation Principle
For example, in [29] the following compact quote is found indicating that due to stimulus
exposure, the adaptation speed will increase:
‘Adaptation accelerates with increasing stimulus exposure’ [29], p. 2. (8)
This indeed refers to a form of metaplasticity, which can be described by a second-order
adaptive network that is modeled using a dynamic second-order timing self-model, for
timing characteristics of a first-order self-model for the first-order adaptation, based on
self-model states HWX,Y for adaptive learning speed.
3.2.2. The Exposure Modulates Persistence Adaptation Principle
A similar perspective can be applied to obtain a principle for modulation of persistence.
‘Stimulus exposure modulates persistence of adaptation’ (9)
Depending on further context factors, this can be applied in different ways. Reduced
persistence can be used in order to be able to get rid of earlier learnt connections that do
not apply. However, enhanced persistence can be used to keep what has been learnt. This
also refers to a form of metaplasticity, which can be described by a second-order adaptive
network that is modeled using a dynamic second-order aggregation self-model, for
persistence characteristics of a first-order self-model for the first-order adaptation, based
on self-model states MWX,Y for an adaptive persistence factor.
3.2.3. The Plasticity Versus Stability Adaptation Principle
In a similar direction, in [31] it is discussed more generally how it depends on the circumstances when the extent of plasticity is or should be high and when it is or should be low in favor of stability:
‘The Plasticity Versus Stability Conundrum’ [31], p. 773. (10)
This principle relates to the previous two and can use these second-order self-models.
3.2.4. The Stress Blocks Adaptation Principle
Yet another principle that is indicated in the literature refers to the effect of high stress
levels on the extent of plasticity:
‘High stress levels slow down or block adaptation’ (11)
See, for example, the following quote from [27], where such slowing down or blocking
of adaptation is called negative metaplasticity:
‘Numerous electrophysiological studies have shown that ‘negative’ metaplasticity develops
in brain areas such as the hippocampus and its related structures (e.g., the lateral septum and
the nucleus accumbens) following stress.’ [27], p. 631
This can be described by a second-order adaptive network modeled using a dynamic
second-order timing self-model, for timing characteristics of a first-order self-model for
the first-order adaptation, based on self-model states HWX,Y for adaptive learning speed.
The first- and second-order adaptation principles such as the ones summarized in (3) to (11) above have been formalized in the form of self-models used in first- and second-order adaptive network models that have been designed, as discussed in Section 4.
4. Using Self-Models to Formalize Adaptation Principles
In this section, it will be shown how the modeling approach for self-modeling network
models described in Section 2 can be used to model the adaptation principles of different
orders discussed in Section 3. In particular the connectivity and aggregation
characteristics of the addressed self-models are discussed. Timing characteristics for
these self-models are just values (speed factors for each of the states) that will usually be
set depending on a specific application. When self-models change over time in a proper manner, this offers a useful method to model adaptive networks based on any adaptation principle. This applies not only to first-order adaptive networks, but also to higher-order adaptive networks, by using higher-order self-models.
4.1. First-Order Self-Models for First-Order Adaptation Principles
First the adaptation principle for Hebbian Learning will be addressed, as described in
Section 3.1.1. To incorporate the ‘firing together’ part, for the self-model’s connectivity
characteristics, upward causal connections to connectivity self-model state WX,Y from X and Y are used to formulate a Hebbian learning adaptation principle; see Figure 1. The
upward connections have weight 1 here. Also a connection from WX,Y to itself with
weight 1 is used; in pictures they are usually left out.
So, the connectivity characteristics of the self-model here consist of the three nodes
WX,Y, X, and Y, together with the two incoming upward connections (the blue arrows)
from X and Y to WX,Y, one outgoing connection from WX,Y to Y (the pink downward
arrow), and the leveled connection (black arrow) from X to Y. Note that, as mentioned in the last paragraph of Section 2, only the states that represent a network characteristic are called self-model states, in this case WX,Y. In connectivity pictures such as Figure 1
and further, the self-model states are the states with an outgoing (pink) downward
connection. Some other states to which they are connected such as in this case X and Y
are still part of the self-model, but will not be called self-model states; they do not have
an outgoing downward connection. The downward connection takes care that the value
of WX,Y is actually used for the connection weight of the connection from X to Y. For the
aggregation characteristics of the self-model, one of the options for a learning rule is
defined by the combination function hebbμ(V1, V2, W) from Table 2, where V1, V2 refer to the activation levels of the connected states X and Y, and W to the value of WX,Y
representing the connection weight. For more options of Hebbian learning combination
functions and further mathematical analysis of them, see, for example [3], Ch. 14.
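As a small numeric illustration of this hebbμ function (a Python sketch, with the speed factor of Eq. (1) left out for simplicity):

```python
def hebb(V1, V2, W, mu=0.9):
    """Hebbian learning combination function: V1*V2*(1 - W) + mu*W (see Table 2)."""
    return V1 * V2 * (1.0 - W) + mu * W

# Repeatedly applying the function for co-active states X and Y (V1 = 0.9, V2 = 0.8)
# moves the represented weight upward toward its equilibrium; in the full model this
# value enters Eq. (1) together with a speed factor rather than being assigned directly.
W = 0.2
for _ in range(5):
    W = hebb(0.9, 0.8, W)
print(round(W, 3))   # 'firing together' increases the represented connection weight
```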
Table 2. Combination functions for self-models modeling the first- and second-order adaptation principles. The first five rows cover the first-order adaptation principles from Section 3.1 and the last four rows the second-order adaptation principles from Section 3.2.
Hebbian Learning, WX,Y (3.1.1): hebbμ(V1, V2, W) = V1 V2 (1−W) + μ W; V1, V2 activation levels of connected states; W activation level of self-model state for connection weight; μ persistence factor
Bonding by Homophily, WX,Y (3.1.2): slhomoα,τ(V1, V2, W) = W + α W (1−W) (τ − |V1 − V2|); V1, V2 activation levels of connected persons; W connection weight; α modulation factor; τ tipping point
More Becomes More, WX,Y (3.1.3): eucln,λ(W1, …, Wk) or alogisticσ,τ(W1, …, Wk); W1, …, Wk activation levels of self-model states for connection weights of persons connected to B
Interaction Connects, WX,Y (3.1.4): eucln,λ(V1, …, Vk) or alogisticσ,τ(V1, …, Vk); V1, …, Vk impacts from interaction states for the connected person
Enhanced Excitability, TY (3.1.5): eucln,λ(V1, …, Vk) or alogisticσ,τ(V1, …, Vk); V1, …, Vk impacts from base states
Exposure Accelerates Adaptation Speed, HWX,Y (3.2.1): eucln,λ(V1, …, Vk) or alogisticσ,τ(V1, …, Vk); V1, …, Vk impacts from base states and first-order self-model states
Exposure Modulates Persistence, MWX,Y (3.2.2): eucln,λ(V1, …, Vk) or alogisticσ,τ(V1, …, Vk); V1, …, Vk impacts from base states and first-order self-model states
Plasticity Versus Stability, HWX,Y, MWX,Y (3.2.3): eucln,λ(V1, …, Vk) or alogisticσ,τ(V1, …, Vk); V1, …, Vk impacts from base states and first-order self-model states
Stress Blocks Adaptation, HWX,Y (3.2.4): eucln,λ(V1, …, Vk) or alogisticσ,τ(V1, …, Vk); V1, …, Vk impacts from base states for stress level and first-order self-model states
Next, the adaptation principle for Bonding by homophily will be addressed, as
described in Section 3.1.2. It happens that for this connectivity self-model exactly the
same connectivity characteristics apply as for Hebbian learning, as depicted in Figure 1.
Figure 1. Connectivity characteristics of the self-model for the Hebbian Learning adaptation principle for
Mental Networks or the Bonding by Homophily adaptation principle for Social Networks
For aggregation characteristics of this self-model, an option for an adaptation rule
is defined by the combination function slhomoα,τ(V1, V2, W) from Table 2, where V1, V2
refer to the activation levels (for example, for some opinion) of the connected persons
and W to the value of WX,Y representing the connection weight. For more options and
further mathematical analysis, see, for example [3], Ch. 13, or [13].
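A small numeric sketch of this slhomoα,τ function in Python (illustrative values only, again leaving out the speed factor of Eq. (1)):

```python
def slhomo(V1, V2, W, alpha=2.0, tau=0.2):
    """Simple linear homophily function: W + alpha * W * (1 - W) * (tau - |V1 - V2|)."""
    return W + alpha * W * (1.0 - W) * (tau - abs(V1 - V2))

# Similar opinions (|V1 - V2| < tau) push the represented weight up;
# dissimilar opinions push it down.
print(round(slhomo(0.80, 0.75, W=0.5), 3))   # |difference| = 0.05 < tau: value above 0.5
print(round(slhomo(0.90, 0.20, W=0.5), 3))   # |difference| = 0.70 > tau: value below 0.5
```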
The More Becomes More adaptation principle as described in Section 3.1.3 has
connectivity characteristics as shown in Figure 2. Here, the connectivity self-model
states for different connections affect each other, as a connection of a person X3 to a given
person Y depends on the existence and strengths of connections from other persons Xi to
the same person Y; see the black leveled arrows in the upper plane.
Figure 2. Connectivity characteristics of a self-model for the More Becomes More adaptation principle for
person X3 with respect to person Y
So, in this case the connectivity characteristics of the self-model are the nodes WX1,Y,
WX2,Y, WX3,Y, and Y, together with leveled connections (black arrows) from each WXj,Y to
WX3,Y and downward connections (pink arrows) from each WXj,Y to Y. Again, these (pink) downward connections take care that the value of WXj,Y is actually used for the connection weight of the connection from Xj to Y. For the aggregation characteristics of
this self-model, some form of aggregation of the weights of these other connections
represented by the WXj,Y can be used, such as by using a Euclidean or logistic sum
combination function; see Table 2. For example, in [32] a logistic sum function was used,
and in [33] a scaled sum (with scaling factor the number of existing connections for Y
resulting in an average weight), which is a first-order Euclidean combination function.
For the Interaction Connects adaptation principle described in Section 3.1.4, the
connectivity self-model states for the connection weights are affected by certain states
IXj,Y representing the strength of (actual) interaction. Therefore, the connectivity
characteristics of a self-model for this adaptation principle are as shown in Figure 3,
with (blue) upward connections from interaction states IXi,Y to the self-model states WXi,Y
and (pink) downward connections from WXi,Y to Y. Note that multiple interaction states can also be used for one connection, for example, for different interaction
channels. The aggregation characteristics of the self-model states WXi,Y can be specified,
for example, by a Euclidean or logistic sum function, as shown in Table 2.
Figure 3. Connectivity characteristics of a self-model for the Interaction Connects adaptation principle for persons X1, X2 and X3 with respect to person Y.
For the Enhanced Excitability adaptation principle described in Section 3.1.5, an
aggregation self-model with connectivity characteristics depicted in Figure 4 can be used.
Figure 4. Connectivity characteristics of a self-model for the Enhanced Excitability adaptation principle for state Y.
In this case state Y is assumed to use a logistic sum combination function, which has an excitability threshold parameter τ (or any other function with such a parameter). Here
this excitability threshold is represented by aggregation self-model state TY which is
affected by exposure from activation of the involved states. Note that to enhance
excitability, the value of self-model state TY representing the excitability threshold has
to decrease. Therefore, these upward connections need to get negative connection
weights, whereas a positive connection weight from TY itself can be used. In this case,
the (pink) downward connection from TY to Y takes care that the value of TY is actually
used for the threshold value of the logistic sum function of Y. Also a connection from a
related connectivity self-model state WX,Y to TY with positive connection weight might
be added in this self-model to obtain some balancing effect. For the aggregation
characteristics, for example, a Euclidean (with odd order n to keep the negative impacts
negative) or logistic sum function can be used for TY, as shown in Table 2.
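A minimal Python sketch of this use of TY, assuming the alogistic function from Table 1 and illustrative impact values: a decreasing threshold value yields a higher activation of Y for the same impacts.

```python
import math

def alogistic(impacts, sigma, tau):
    """Advanced logistic sum combination function (see Table 1)."""
    s = sum(impacts)
    return ((1 / (1 + math.exp(-sigma * (s - tau))) - 1 / (1 + math.exp(sigma * tau)))
            * (1 + math.exp(-sigma * tau)))

# Sketch: the current value of self-model state T_Y is used as the excitability
# threshold tau when aggregating the impacts on Y. A lower T_Y (enhanced
# excitability) yields a higher activation value for the same impacts.
impacts_on_Y = [0.3, 0.2]
for T_Y in (0.6, 0.4, 0.2):   # threshold decreasing over time due to exposure
    print(T_Y, round(alogistic(impacts_on_Y, sigma=8, tau=T_Y), 3))
```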
4.2. Second-Order Self-Models for Second-Order Adaptation Principles
The first second-order adaptation principle discussed is the Exposure Accelerates
Adaptation Speed principle described in Section 3.2.1. This is modeled by a second-order
timing self-model. As it is a second-order adaptation principle for some first-order adaptation principle, for the sake of clarity it is described here with respect to the first-order adaptation principle for Hebbian Learning, although it might be applied to other first-order adaptation principles as well; it will then have a similar structure to what is shown here. The connectivity characteristics of this timing self-model are shown in
Figure 5; they consist of the states HWX,Y, WX,Y, X, and Y, together with the (positive,
blue) upward connections from the two base states X and Y to the self-model state HWX,Y
expressing the part of the principle referring to ‘exposure’, the (negative, blue) upward
connection from WX,Y to the self-model state HWX,Y, and the downward (pink) connection
from HWX,Y to WX,Y that takes care that the value of HWX,Y is actually used as speed factor
for WX,Y. By the upward connections, stronger activation of the base states X and Y will
lead to an increased value of HWX,Y, as indicated by the part of the principle referring to
‘accelerates’. The (negative) upward connection from the considered state WX,Y to HWX,Y
can be used for (counter)balancing. For the aggregation characteristics, for example a
Euclidean (with odd order n to keep the negative impacts negative) or logistic sum
function can be used for HWX,Y, as shown in Table 2.
Figure 5. Connectivity of a second-order self-model for the Exposure Accelerates Adaptation Speed adaptation
principle with a first-order self-model for Hebbian learning.
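The downward role of HWX,Y can be sketched in Python as follows, assuming the hebbμ function from Table 2 and illustrative activation values; the value of HWX,Y acts as the speed factor of WX,Y in Eq. (1).

```python
def hebb(V1, V2, W, mu=0.95):
    """First-order Hebbian learning combination function."""
    return V1 * V2 * (1.0 - W) + mu * W

# Sketch of the downward role of H_W_X,Y: its current value is used as the speed
# factor of W_X,Y in Eq. (1). Here exposure (high X, Y activation) is assumed to
# have raised H from 0.1 to 0.5, which visibly accelerates the learning of W.
def update_W(W, H, X, Y, dt=1.0):
    return W + H * (hebb(X, Y, W) - W) * dt

for H in (0.1, 0.5):                      # low vs. raised adaptation speed
    W = 0.1
    for _ in range(10):                   # ten steps of exposure to X = Y = 0.9
        W = update_W(W, H, X=0.9, Y=0.9)
    print(H, round(W, 3))                 # the higher H, the further W has learnt
```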
Next, the second-order Exposure Modulates Persistence adaptation principle (for the
first-order Hebbian Learning principle) described in Section 3.2.2 is addressed, based on
second-order aggregation self-model state MWX,Y representing persistence of the first-
order adaptation. For the connectivity characteristics of this self-model, see Figure 6.
Figure 6. Connectivity of a second-order self-model for the Exposure Modulates Persistence adaptation principle with a first-order self-model for Hebbian learning.
The upward connections from base states X and Y to MWX,Y may suppress the persistence (when they are negative). This paves the road to get rid of the learnt effects from the past in case they are no longer applicable. The positive upward connection from first-order state WX,Y to MWX,Y can be used for counterbalancing. However, the upward connections from base states X and Y to MWX,Y can also be made positive, in which case they increase persistence during a learning process to keep the learnt effect well. This also illustrates the Plasticity Versus Stability Conundrum adaptation principle described in Section 3.2.3. The (pink) downward connection from MWX,Y to WX,Y takes care that the value of MWX,Y is actually used as the persistence factor μ in the Hebbian learning combination function for WX,Y. For the aggregation characteristics, for example a Euclidean (with odd order n) or logistic sum function can be used for MWX,Y, as shown in Table 2.
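A minimal Python sketch of this downward role of MWX,Y, assuming the hebbμ function from Table 2, a speed factor of 1 for simplicity, and illustrative values: its value acts as the persistence factor μ, so a value near 1 retains a learnt weight while a lower value lets it decay once co-activation stops.

```python
def hebb(V1, V2, W, mu):
    """First-order Hebbian learning combination function with persistence factor mu."""
    return V1 * V2 * (1.0 - W) + mu * W

# Sketch of the downward role of M_W_X,Y: its value is used as the persistence
# factor mu of the Hebbian rule for W_X,Y. With X and Y no longer co-active
# (V1 = V2 = 0), a learnt weight decays fast for low M and is retained for M near 1.
for M in (0.7, 0.99):
    W = 0.8                                # previously learnt connection weight
    for _ in range(10):                    # ten steps without co-activation
        W = hebb(0.0, 0.0, W, mu=M)
    print(M, round(W, 3))
```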
Finally, a second-order self-model for the Stress Blocks Adaptation principle described in Section 3.2.4 can be obtained in a similar way as the one for the Exposure Accelerates Adaptation Speed principle (see connectivity in Figure 5), but this time with connectivity characteristics based on a negative upward connection from a base state representing the stress level, which brings the timing self-model state HWX,Y to low values or even 0. For the aggregation characteristics, again for example a
Euclidean or logistic sum function can be used for HWX,Y; see Table 2.
5. Discussion
In this paper the use of self-modeling networks to model adaptive biological, mental and
social processes of any order of adaptation was addressed. Following the network-
oriented modeling approach described in [3], it was shown how self-models for networks
provide useful pre-specified building blocks to design complex multi-order adaptive
network models in the form of self-modeling networks. This was illustrated for a number
of adaptation principles from different application domains. A dedicated software
environment for self-modeling networks that has been developed supports the modeling
and simulation: https://www.researchgate.net/project/Network-Oriented-Modeling-Software.
As an illustration, in [3], Ch. 4, four of the adaptation principles known from the
literature and specified in Section 4 were applied to obtain a network model involving
both plasticity and metaplasticity. In particular, two first-order adaptation principles (for
Hebbian Learning and for Enhanced Excitability) and two second-order adaptation
principles (for Exposure Accelerates Adaptation Speed and for Exposure Modulates
Persistence) are covered in this network model.
References
[1] Treur J. Multilevel network reification: representing higher order adaptivity in a network. In: Aiello L,
Cherifi C, Cherifi H, Lambiotte R, Lió P, Rocha L. editors. Proc. of the 7th Int. Conf. on Complex
Networks and their Applications, ComplexNetworks'18, vol. 1. Studies in Computational Intelligence,
vol. 812, Springer Nature, 2018, p. 635-51.
[2] Treur J. Modeling higher-order adaptivity of a network by multilevel network reification. Network
Science 2020;8: S110-44.
[3] Treur J. Network-oriented modeling for adaptive networks: designing higher-order adaptive biological,
mental, and social network models. Cham, Switzerland: Springer Nature Publishing; 2020. 412 p.
[4] Treur J. Network-Oriented Modeling: Addressing Complexity of Cognitive, Affective and Social
Interactions. Cham, Switzerland: Springer Publishers; 2016. 499 p.
[5] Hebb DO. The organization of behavior: A neuropsychological theory. New York: John Wiley and Sons;
1949. 335 p.
[6] Shatz CJ. The developing brain. Sci. Am. 1992; 267:60–67. (10.1038/scientificamerican0992-60)
[7] Keysers C, Gazzola V. Hebbian learning and predictive mirror neurons for actions, sensations and
emotions. Philos Trans R Soc Lond B Biol Sci 2014;369: 20130175.
[8] Pearson M, Steglich C, Snijders T. Homophily and assimilation among sport-active adolescent substance
users. Connections 2006;27(1):47–63.
[9] McPherson M, Smith-Lovin L, Cook JM. Birds of a feather: homophily in social networks. Annu. Rev.
Sociol. 2001; 27:415–44.
[10] Levy DA, Nail PR. Contagion: A theoretical and empirical review and reconceptualization. Genetic,
social, and general psychology monographs 1993;119(2):233-284.
[11] Holme P, Newman MEJ. Nonequilibrium phase transition in the coevolution of networks and opinions. Phys. Rev. E 2006;74(5):056108.
[12] Sharpanskykh A, Treur J. Modelling and analysis of social contagion in dynamic networks.
Neurocomputing 2014; 146:140–50.
[13] Treur J. Mathematical analysis of the emergence of communities based on coevolution of social
contagion and bonding by homophily. Applied Network Science 2019;4: article 1.
[14] Simon HA. On a class of skew distribution functions. Biometrika 1955; 42: 425–40.
[15] Bornholdt S, Ebel H. World Wide Web scaling exponent from Simon's 1955 model. Phys. Rev. E 2001; 64: article 035104.
[16] Price DJ de S. A general theory of bibliometric and other cumulative advantage processes. J. Amer. Soc. Inform. Sci. 1976; 27: 292–306.
[17] Merton RK. The Matthew effect in science. Science 1968;159: 56–63.
[18] Barabási AL, Albert R. Emergence of scaling in random networks. Science 1999; 286: 509-512.
[19] Hove MJ, Risen JL. It’s all in the timing: interpersonal synchrony increases affiliation. Soc. Cognit.
2009; 27: 949–60. (doi:10.1521/soco. 2009.27.6.949)
[20] Pearce E, Launay J, Dunbar RIM. The Ice-breaker Effect: singing together mediates fast social bonding. Royal Society Open Science 2015; 2: article 150221. http://dx.doi.org/10.1098/rsos.150221
[21] Weinstein D, Launay J, Pearce E, Dunbar RIM, Stewart L. Singing and social bonding: Changes in
connectivity and pain threshold as a function of group size. Evolution & Human Behaviour
2016;37(2):152-58. doi: 10.1016/j.evolhumbehav.2015.10.002
[22] Gilbert E, Karahalios K. Predicting tie strength with social media. Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems CHI’09, 2009, p. 211-20.
[23] Morris MR, Teevan J, Panovich K. What do people ask their social networks, and why? a survey study
of status message Q&A behavior. CHI 2010. 2010.
[24] Chandra N, Barkai E. A non-synaptic mechanism of complex learning: modulation of intrinsic neuronal
excitability. Neurobiology of Learning and Memory 2018; 154: 30-36.
[25] Choy M, El Fassi S, Treur J. An adaptive network model for pain and pleasure through spicy food and
its desensitization. Cognitive Systems Research 2020: in press
[26] Abraham WC, Bear MF. Metaplasticity: the plasticity of synaptic plasticity. Trends in Neuroscience
1996;19(4):126-130.
[27] Garcia R. Stress, metaplasticity, and antidepressants. Current Molecular Medicine 2002; 2: 629-38.
[28] Magerl W, Hansen N, Treede RD, Klein T. The human pain system exhibits higher-order plasticity
(metaplasticity). Neurobiology of Learning and Memory 2018; 154:112-20.
[29] Robinson BL, Harper NS, McAlpine D. Meta-adaptation in the auditory midbrain under cortical
influence. Nat. Commun. 2016; 7: article 13442.
[30] Sehgal M, Song C, Ehlers VL, Moyer Jr JR. Learning to learn – intrinsic plasticity as a metaplasticity
mechanism for memory formation. Neurobiology of Learning and Memory 2013; 105: 186-99.
[31] Sjöström PJ, Rancz EA, Roth A, Hausser M. Dendritic excitability and synaptic plasticity. Physiol Rev
2008; 88: 769–840.
[32] Beukel S van den, Goos SH, Treur J. An adaptive temporal-causal network model for social networks
based on the homophily and more-becomes-more principle. Neurocomputing 2019; 338: 361-71
[33] Blankendaal R, Parinussa S, Treur J. A temporal-causal modelling approach to integrated contagion and network change in social networks. Proceedings of the Twenty-second European Conference on Artificial Intelligence, ECAI'16, 2016, p. 1388–96.