See a video presentation on YouTube here: https://www.youtube.com/channel/UCCO3i4_Fwi22cEqL8M_PgeA. This paper covers the contents of the Keynote Speech with the same title. The paper addresses the use of self-modeling networks to model adaptive biological, mental, and social processes of any order of adaptation. A self-modeling network for some base network is a network extension that represents part of the base network structure by a self-model in terms of added network nodes and connections for them. A network structure, in general, involves network characteristics for connectivity (connections between nodes), aggregation (combining multiple incoming impacts on a node), and timing (node state dynamics speed). By representing some of these network characteristics by a self-model using dynamic node states, these characteristics become adaptive. By iterating this construction, multi-order network adaptation is easily obtained. A dedicated software environment for self-modeling networks that has been developed supports the modeling and simulation processes. This will be illustrated for a number of adaptation principles from a number of application domains, for example, for Cognitive Neuroscience by a second-order adaptive network model to model plasticity of connections and node excitability, and metaplasticity to control such plasticity.
Modeling Multi-Order Adaptive Processes
by Self-Modeling Networks
Jan TREUR1
Social AI Group, Vrije Universiteit Amsterdam, De Boelelaan 1111, 1081HV
Amsterdam, the Netherlands
Abstract. A self-modeling network for some base network is a network extension
that represents part of the base network structure by a self-model in terms of added
network nodes and connections for them. By iterating this construction, multi-order
network adaptation is easily obtained. A dedicated software environment for self-
modeling networks that has been developed supports the modeling and simulation
processes. This will be illustrated for a number of adaptation principles from a
number of application domains.
Keywords. Adaptive network, self-modeling network, multi-order adaptive
1. Introduction
A self-modeling network is a network that represents part of its own network structure
by a self-model in terms of dedicated network nodes and connections for them. A
network structure can be described by network characteristics for connectivity for
connections between nodes, aggregation for combining multiple incoming impacts on a
node, and timing for the speed of node state dynamics; e.g., [1, 2, 3]. Any base network
can be extended to a self-modeling network for it, by adding a self-model for part of the
base network’s structure. In this case, the added self-model consists of a number of added
nodes representing specific characteristics of the base network structure, such as
connection weights and excitability thresholds, plus connections for these added self-
model nodes. For the approach considered here, in general nodes in a network are
assumed to have activation levels that can change over time due to impact from other
nodes from which they have incoming connections. If, in particular, the nodes of a self-model representing some of the network characteristics of a base network are dynamic, these base network characteristics become adaptive; thus an adaptive base network is obtained, in the sense that adaptation of the base network is modeled by the dynamics within the self-modeling network extending the base network.
Moreover, multi-order network adaptation can be obtained by iterating this self-
modeling construction. If multi-order self-models are included in a self-modeling
network, any included self-model (of some order) can have its own (next-order) self-
model within the overall network where the latter self-model represents some of the
network characteristics of the former self-model. For example, this makes it possible to control the dynamics of self-models, so that self-controlled adaptive networks are obtained.
1 Corresponding Author: Jan Treur, Social AI Group, Vrije Universiteit Amsterdam, De Boelelaan 1111,
1081HV Amsterdam, the Netherlands; Email: j.treur@vu.nl.
Machine Learning and Artificial Intelligence
A.J. Tallón-Ballesteros and C. Chen (Eds.)
© 2020 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/FAIA200784
A dedicated software environment for self-modeling networks that has been
developed supports these modeling and simulation processes; see [3], Ch. 9. In this paper,
for a number of adaptation principles from different application domains, it will be
illustrated how they can be modeled by proper pre-specified self-models that can be used
as building blocks to extend any base network to make it adaptive.
In the paper, first in Section 2 the modeling approach from [3] based on self-
modeling networks is briefly described. In Section 3 nine different adaptation principles
from the Cognitive Neuroscience and Social Science literature are described. Next, in
Section 4, for the adaptation principles described in Section 3 it is shown in more detail
how they can be modeled by self-models. Section 5 is a discussion.
2. Networks Using Self-Models: Self-Modeling Networks
In this section, the network-oriented modeling approach from [3] used here is introduced.
Following [3, 4], a temporal-causal network model is characterized by (here X and Y
denote nodes of the network, also called states):
• Connectivity characteristics: connections from a state X to a state Y and their weights ωX,Y
• Aggregation characteristics: for any state Y, some combination function cY(..) (usually with some parameters) defines the aggregation that is applied to the impacts ωX,Y X(t) on Y from its incoming connections from states X
• Timing characteristics: each state Y has a speed factor ηY defining how fast it changes for given impact.
The following difference (or differential) equations, which are used for simulation purposes and also for analysis of temporal-causal networks, incorporate these network characteristics ωX,Y, cY(..), ηY in a standard numerical format:

$$Y(t + \Delta t) = Y(t) + \eta_Y\,[\mathbf{c}_Y(\omega_{X_1,Y} X_1(t), \ldots, \omega_{X_k,Y} X_k(t)) - Y(t)]\,\Delta t \qquad (1)$$

for any state Y, where X1 to Xk are the states from which Y gets its incoming connections. Here the overall combination function cY(..) for state Y is the weighted average of available basic combination functions cj(..) by specified weights γj,Y (and parameters π1,j,Y, π2,j,Y of cj(..)) for Y:

$$\mathbf{c}_Y(V_1, \ldots, V_k) = \frac{\gamma_{1,Y}\, c_1(V_1, \ldots, V_k) + \ldots + \gamma_{m,Y}\, c_m(V_1, \ldots, V_k)}{\gamma_{1,Y} + \ldots + \gamma_{m,Y}} \qquad (2)$$
Such equations (1) and (2) are hidden inside the dedicated software environment; see [3], Ch. 9. Within this software environment, currently around 40 useful basic combination functions are included in a combination function library; see Table 1 for some of them. The above concepts make it possible to design network models and their dynamics in a declarative manner, based on mathematically defined functions and relations.
Table 1. Examples of basic combination functions from the library.

• Euclidean: $\text{eucl}_{n,\lambda}(V_1, \ldots, V_k) = \left(\frac{V_1^n + \ldots + V_k^n}{\lambda}\right)^{1/n}$. Parameters: order n > 0, scaling factor λ > 0.
• Advanced logistic sum: $\text{alogistic}_{\sigma,\tau}(V_1, \ldots, V_k) = \left[\frac{1}{1 + e^{-\sigma(V_1 + \ldots + V_k - \tau)}} - \frac{1}{1 + e^{\sigma\tau}}\right](1 + e^{-\sigma\tau})$. Parameters: steepness σ > 0, excitability threshold τ.
• Scaled maximum: $\text{smax}_{\lambda}(V_1, \ldots, V_k) = \max(V_1, \ldots, V_k)/\lambda$. Parameter: scaling factor λ > 0.
• Scaled minimum: $\text{smin}_{\lambda}(V_1, \ldots, V_k) = \min(V_1, \ldots, V_k)/\lambda$. Parameter: scaling factor λ > 0.
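To make these definitions concrete, the following minimal Python sketch implements Eq. (1) together with the Euclidean and advanced logistic sum functions from Table 1. This is an illustrative reconstruction, not the dedicated software environment from [3]; all function and variable names here are assumptions.

```python
import math

def eucl(values, n=1, lam=1.0):
    # Euclidean combination function eucl_{n,lambda} from Table 1
    return (sum(v ** n for v in values) / lam) ** (1.0 / n)

def alogistic(values, sigma=5.0, tau=0.5):
    # Advanced logistic sum alogistic_{sigma,tau} from Table 1;
    # the correction term makes the function 0 when the summed impact is 0
    s = sum(values)
    raw = 1.0 / (1.0 + math.exp(-sigma * (s - tau)))
    low = 1.0 / (1.0 + math.exp(sigma * tau))
    return (raw - low) * (1.0 + math.exp(-sigma * tau))

def step(Y, incoming, eta, c, dt):
    # One Euler step of Eq. (1) for a state Y.
    # incoming: list of (X_value, omega_X_Y) pairs for Y's incoming connections;
    # eta: speed factor eta_Y; c: combination function c_Y(..)
    impacts = [omega * x for x, omega in incoming]
    return Y + eta * (c(impacts) - Y) * dt

# Example: one update of a state Y with two incoming connections
Y = 0.1
Y = step(Y, incoming=[(0.8, 0.9), (0.6, 0.5)], eta=0.5,
         c=lambda v: alogistic(v, sigma=8.0, tau=0.7), dt=0.5)
```

In such a declarative setup, changing a model only means changing the network characteristics passed to the update step, which is exactly the property that the self-models discussed below exploit.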
Note that there is a crucial distinction for network models between network
characteristics and network states. Network states have values (their activation levels)
and are explicit representations that may be accessible to other network states via connections to and from them, and can be handled or manipulated in that way. They can be considered
to provide an informational view on the network; usually the states are assumed to have
a certain informational content. In contrast, network characteristics (such as connection
weights and excitability thresholds) have values (their strengths) and determine (e.g.,
cognitive) processes and behavior in an implicit, automatic manner. They can be
considered to provide an embodiment view on the network. In principle, these
characteristics by themselves are not directly accessible nor observable for network
states; in principle you can make connections between states but you cannot make
connections between network characteristics or between states and characteristics.
As indicated above, ‘network characteristics’ and ‘network states’ are two distinct
concepts for a network. Self-modeling is a way to relate these distinct concepts to each
other in an interesting and useful way. A self-model makes the network characteristics (such as connection weights and excitability thresholds) explicit by adding states (called self-model states) for these characteristics, together with connections for these additional states. Thus, the network gets an internal self-model of part of its network
structure: it explicitly represents information about its own network structure. In this way,
by iteration different self-modeling levels can be created where network characteristics
from one level relate to network states at a next level. Thus, an arbitrary number of self-
modeling levels can be modeled, covering second-order or higher-order effects.
More specifically, adding a self-model for a temporal-causal base network is done in such a way that for some of the states Y of the base network and some of the network structure characteristics for connectivity, aggregation and timing (i.e., some from ωX,Y, γj,Y, πi,j,Y, ηY), additional network states WX,Y, Cj,Y, Pi,j,Y, HY (self-model states or reification states) are introduced and connected to other states:

a) Connectivity self-model
• Self-model states WX,Y are added representing connectivity characteristics, in particular connection weights ωX,Y

b) Aggregation self-model
• Self-model states Cj,Y are added representing aggregation characteristics, in particular combination function weights γj,Y
• Self-model states Pi,j,Y are added representing aggregation characteristics, in particular combination function parameters πi,j,Y

c) Timing self-model
• Self-model states HY are added representing timing characteristics, in particular speed factors ηY

The notations WX,Y, Cj,Y, Pi,j,Y, HY for the self-model states indicate the referencing relation with respect to the characteristics ωX,Y, γj,Y, πi,j,Y, ηY: here W refers to ω, C refers to γ, P refers to π, and H refers to η, respectively. For the processing, these self-model states define the dynamics of any state Y in a canonical manner according to Eq. (1) and (2), whereby the values of ωX,Y, γj,Y, πi,j,Y, ηY are replaced by the state values of WX,Y, Cj,Y, Pi,j,Y, HY at time t, respectively.
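As a minimal sketch of this canonical replacement (reusing the update scheme of the sketch in Section 2; the dictionaries W and H and all names are hypothetical), the update of a base state simply reads its connection weights and speed factor from the current self-model state values where these exist:

```python
def effective_weight(W, omega, X, Y):
    # value of self-model state W_{X,Y} if present, else static weight omega_{X,Y}
    return W.get((X, Y), omega[(X, Y)])

def effective_speed(H, eta, Y):
    # value of self-model state H_Y if present, else static speed factor eta_Y
    return H.get(Y, eta[Y])

def step_with_self_model(state, X_names, Y, W, H, omega, eta, c, dt):
    # state: dict mapping node name -> activation level; c: combination function
    impacts = [effective_weight(W, omega, X, Y) * state[X] for X in X_names]
    return state[Y] + effective_speed(H, eta, Y) * (c(impacts) - state[Y]) * dt
```

Because the self-model states in W and H are themselves updated by Eq. (1), the base network's characteristics become dynamic without any separate procedural adaptation code.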
Note that concerning the terminology used, only the states that represent some
network characteristics are called self-model states. The states to which these self-model
states are connected still belong to the self-model (e.g., as depicted in Figure 1 and
further) but they can either be other self-model states or other states that are not self-
model states, such as the states X and Y. An example of an aggregation self-model state Pi,j,Y for a combination function parameter πi,j,Y is the one for the excitability threshold τY of state Y, which is the second parameter of a logistic sum combination function; then Pi,j,Y is usually indicated by TY, where T refers to τ. The network constructed by the addition of
a self-model to a base network is called a self-modeling network or a reified network for
this base network. This constructed network is also a temporal-causal network model
itself, as has been shown in [3], Ch. 10; for this reason, this construction can easily be
applied iteratively to obtain multiple levels or orders of self-models, in which case the
resulting network is called a multi-order or higher-order self-modeling network or reified
network.
3. Adaptation Principles from Different Domains
In this section, a number of adaptation principles of different orders are described, as they can be found in the literature on Cognitive Neuroscience and the Social Sciences.
3.1. First-order Adaptation Principles
First-order adaptation principles for some base network address adaptation of some of
the base network’s characteristics concerning its connectivity, aggregation of multiple
connections and timing of node state dynamics. Much research has focused in particular on learning of connectivity characteristics based on adaptive connections, but other characteristics can be made adaptive as well, as will be discussed.
3.1.1. The Hebbian Learning Adaptation Principle
As a first example, for mental or neural networks, the Hebbian learning adaptation
principle [5] can be formulated by:
‘When an axon of cell A is near enough to excite B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.’ [5], p. 62    (3)
This is sometimes simplified (neglecting the phrase ‘one of the cells firing B’) to:
‘What fires together, wires together’ [6, 7]
This can easily be modeled by using a connectivity self-model based on self-model states
WX,Y representing connection weights ωX,Y.
3.1.2. The Bonding by Homophily Adaptation Principle
An example of the use of a network’s self-model for the social domain is the bonding by
homophily adaptation principle
‘Birds of a feather flock together’ (4)
This expresses how being ‘birds of a feather’ or ‘being alike’ strengthens the connection
between two persons [8-13]. Similar to the Hebbian learning case, this can be modeled
by a social network’s connectivity self-model based on self-model states WX,Y
representing connection weights ωX,Y.
3.1.3. The More Becomes More Adaptation Principle
Another first-order adaptation principle for social networks is the ‘more becomes more’
principle expressing that more popular people attract more connections:
‘Persons with more connections attract more connections’ [4], p. 311 (5)
In a wider context this more becomes more principle relates to what sometimes is
called ‘the rich get richer’ [14, 15], ‘cumulative advantage’ [16], ‘the Matthew effect’
[17] or ‘preferential attachment’ [18]. Similar to the Hebbian learning and bonding by
homophily cases, this can be modeled by a social network’s connectivity self-model
based on self-model states WX,Y representing connection weights ωX,Y.
3.1.4. The Interaction Connects Adaptation Principle
The idea behind the Interaction Connects adaptation principle from Social Science is that
‘The more interaction you have with somebody, the stronger you will become connected’ (6)
See, for example, [19-23]. Similar to the Hebbian learning and bonding by homophily
cases, this can be modeled by a social network’s connectivity self-model based on self-
model states WX,Y representing connection weights ωX,Y.
3.1.5. The Enhanced Excitability Adaptation Principle
Although connectivity adaptation enjoys some popularity in the literature, other characteristics can be made adaptive as well. Instead of a connectivity self-model to model adaptive connection weights, an aggregation self-model can also be used, for example, to model intrinsic neuronal excitability, as described in [24]:
‘Long-lasting modifications in intrinsic excitability are manifested in changes in the neuron's response to a given extrinsic current (generated by synaptic activity or applied via the recording electrode).’ [24], p. 30    (7)
This form of adaptation can be modeled by an aggregation self-model based on self-
model states TY for adaptive excitability thresholds. For example, this type of self-model
has been used to model adaptation (desensitization) to spicy food; see [25].
3.2. Second-Order Adaptation Principles
The examples of adaptation principles in Section 3.1 refer to forms of plasticity, which
can be described by a first-order adaptive network that is modelled using a dynamic first-
order self-model for connectivity or aggregation characteristics of the base network, in
particular for the connection weights and/or the excitability thresholds used in
aggregation. Whether or not and to which extent such plasticity as described above
actually takes place is controlled by a form of metaplasticity; e.g. [26-31].
3.2.1. The Exposure Accelerates Adaptation Speed Adaptation Principle
For example, in [29] the following compact quote is found indicating that due to stimulus
exposure, the adaptation speed will increase:
‘Adaptation accelerates with increasing stimulus exposure’ [29], p. 2. (8)
This indeed refers to a form of metaplasticity, which can be described by a second-order
adaptive network that is modeled using a dynamic second-order timing self-model, for
timing characteristics of a first-order self-model for the first-order adaptation, based on
self-model states HWX,Y for adaptive learning speed.
3.2.2. The Exposure Modulates Persistence Adaptation Principle
A similar perspective can be applied to obtain a principle for modulation of persistence.
‘Stimulus exposure modulates persistence of adaptation’ (9)
Depending on further context factors, this can be applied in different ways. Reduced
persistence can be used in order to be able to get rid of earlier learnt connections that do
not apply. However, enhanced persistence can be used to keep what has been learnt. This
also refers to a form of metaplasticity, which can be described by a second-order adaptive
network that is modeled using a dynamic second-order aggregation self-model, for
persistence characteristics of a first-order self-model for the first-order adaptation, based
on self-model states MWX,Y for an adaptive persistence factor.
3.2.3. The Plasticity Versus Stability Adaptation Principle
In a similar direction, in [31] it is discussed more generally how it depends on the circumstances when the extent of plasticity is or should be high and when it is or should be low in favor of stability:
‘The Plasticity Versus Stability Conundrum’ [31], p. 773. (10)
This principle relates to the previous two and can use these second-order self-models.
3.2.4. The Stress Blocks Adaptation Principle
Yet another principle that is indicated in the literature refers to the effect of high stress
levels on the extent of plasticity:
‘High stress levels slow down or block adaptation’ (11)
See, for example, the following quote from [27], where such slowing down or blocking
of adaptation is called negative metaplasticity:
‘Numerous electrophysiological studies have shown that ‘negative’ metaplasticity develops
in brain areas such as the hippocampus and its related structures (e.g., the lateral septum and
the nucleus accumbens) following stress.’ [27], p. 631
This can be described by a second-order adaptive network modeled using a dynamic
second-order timing self-model, for timing characteristics of a first-order self-model for
the first-order adaptation, based on self-model states HWX,Y for adaptive learning speed.
The first- and second-order adaptation principles such as the ones summarized in (3) to (11) above have been formalized in the form of self-models used in first- and second-order adaptive network models that have been designed, as discussed in Section 4.
4. Using Self-Models to Formalize Adaptation Principles
In this section, it will be shown how the modeling approach for self-modeling network
models described in Section 2 can be used to model the adaptation principles of different
orders discussed in Section 3. In particular the connectivity and aggregation
characteristics of the addressed self-models are discussed. Timing characteristics for
these self-models are just values (speed factors for each of the states) that will usually be
set depending on a specific application. When self-models are changing over time in a
proper manner, this offers a useful method to model adaptive networks based on any
adaptation principles. This does not only apply to first-order adaptive networks, but also
to higher-order adaptive networks, by using higher-order self-models.
4.1. First-Order Self-Models for First-Order Adaptation Principles
First the adaptation principle for Hebbian Learning will be addressed, as described in
Section 3.1.1. To incorporate the ‘firing together’ part, for the self-model’s connectivity characteristics, upward causal connections to connectivity self-model state WX,Y from X and Y are used to formulate a Hebbian learning adaptation principle; see Figure 1. The upward connections have weight 1 here. Also a connection from WX,Y to itself with weight 1 is used; in pictures such self-connections are usually left out.
So, the connectivity characteristics of the self-model here consist of the three nodes
WX,Y, X, and Y, together with the two incoming upward connections (the blue arrows)
from X and Y to WX,Y, one outgoing connection from WX,Y to Y (the pink downward
arrow), and the leveled connection (black arrow) from X to Y. Note that, as mentioned in the last paragraph of Section 2, only the states that represent a network characteristic
are called self-model states, in this case WX,Y. In connectivity pictures such as Figure 1
and further, the self-model states are the states with an outgoing (pink) downward
connection. Some other states to which they are connected such as in this case X and Y
are still part of the self-model, but will not be called self-model states; they do not have
an outgoing downward connection. The downward connection takes care that the value
of WX,Y is actually used for the connection weight of the connection from X to Y. For the
aggregation characteristics of the self-model, one of the options for a learning rule is
defined by the combination function hebbμ(V1, V2, W) from Table 2, where V1, V2 refer to the activation levels of the connected states X and Y, and W to the value of WX,Y
representing the connection weight. For more options of Hebbian learning combination
functions and further mathematical analysis of them, see, for example [3], Ch. 14.
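As an illustrative sketch in the same assumed Python style as in Section 2, hebbμ follows the formula in Table 2, and the W-state itself is updated by Eq. (1) with the upward impacts from X and Y:

```python
def hebb(V1, V2, W, mu=0.9):
    # Hebbian learning combination function hebb_mu(V1, V2, W) from Table 2:
    # co-activation V1*V2 drives W toward 1; mu controls how much is retained
    return V1 * V2 * (1.0 - W) + mu * W

def step_W(W, X_act, Y_act, eta_W=0.4, dt=0.5):
    # Euler step of Eq. (1) for self-model state W_{X,Y} with speed factor eta_W
    return W + eta_W * (hebb(X_act, Y_act, W) - W) * dt
```

For fully active states (X_act = Y_act = 1) the weight converges to 1/(2 − μ), so the persistence factor μ also determines the maximal learnt weight.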
Table 2. Combination functions for self-models modeling the first- and second-order adaptation principles. The first five rows cover the first-order adaptation principles from Section 3.1 and the last four rows the second-order adaptation principles from Section 3.2.

• Hebbian Learning, WX,Y (Section 3.1.1): $\text{hebb}_{\mu}(V_1, V_2, W) = V_1 V_2 (1 - W) + \mu W$, where V1, V2 are the activation levels of the connected states, W the activation level of the self-model state for the connection weight, and μ the persistence factor.
• Bonding by Homophily, WX,Y (Section 3.1.2): $\text{slhomo}_{\alpha,\tau}(V_1, V_2, W) = W + \alpha W (1 - W)(\tau - |V_1 - V_2|)$, where V1, V2 are the activation levels of the connected persons, W the connection weight, α the modulation factor, and τ the tipping point.
• More Becomes More, WX,Y (Section 3.1.3): $\text{eucl}_{n,\lambda}(W_1, \ldots, W_k)$ or $\text{alogistic}_{\sigma,\tau}(W_1, \ldots, W_k)$, where W1, …, Wk are the activation levels of the self-model states for the connection weights of the persons connected to Y.
• Interaction Connects, WX,Y (Section 3.1.4): $\text{eucl}_{n,\lambda}(V_1, \ldots, V_k)$ or $\text{alogistic}_{\sigma,\tau}(V_1, \ldots, V_k)$, where V1, …, Vk are the impacts from the interaction states for the connected person.
• Enhanced Excitability, TY (Section 3.1.5): $\text{eucl}_{n,\lambda}(V_1, \ldots, V_k)$ or $\text{alogistic}_{\sigma,\tau}(V_1, \ldots, V_k)$, where V1, …, Vk are the impacts from base states.
• Exposure Accelerates Adaptation Speed, HWX,Y (Section 3.2.1): $\text{eucl}_{n,\lambda}(V_1, \ldots, V_k)$ or $\text{alogistic}_{\sigma,\tau}(V_1, \ldots, V_k)$, where V1, …, Vk are the impacts from base states and first-order self-model states.
• Exposure Modulates Persistence, MWX,Y (Section 3.2.2): $\text{eucl}_{n,\lambda}(V_1, \ldots, V_k)$ or $\text{alogistic}_{\sigma,\tau}(V_1, \ldots, V_k)$, where V1, …, Vk are the impacts from base states and first-order self-model states.
• Plasticity Versus Stability, HWX,Y and MWX,Y (Section 3.2.3): $\text{eucl}_{n,\lambda}(V_1, \ldots, V_k)$ or $\text{alogistic}_{\sigma,\tau}(V_1, \ldots, V_k)$, where V1, …, Vk are the impacts from base states and first-order self-model states.
• Stress Blocks Adaptation, HWX,Y (Section 3.2.4): $\text{eucl}_{n,\lambda}(V_1, \ldots, V_k)$ or $\text{alogistic}_{\sigma,\tau}(V_1, \ldots, V_k)$, where V1, …, Vk are the impacts from base states for the stress level and first-order self-model states.
Next, the adaptation principle for Bonding by homophily will be addressed, as
described in Section 3.1.2. It happens that for this connectivity self-model exactly the
same connectivity characteristics apply as for Hebbian learning, as depicted in Figure 1.
Figure 1. Connectivity characteristics of the self-model for the Hebbian Learning adaptation principle for Mental Networks or the Bonding by Homophily adaptation principle for Social Networks.
For the aggregation characteristics of this self-model, an option for an adaptation rule is defined by the combination function slhomoα,τ(V1, V2, W) from Table 2, where V1, V2
refer to the activation levels (for example, for some opinion) of the connected persons
and W to the value of WX,Y representing the connection weight. For more options and
further mathematical analysis, see, for example [3], Ch. 13, or [13].
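In the same assumed sketch style, the slhomo rule from Table 2 can be written as follows; the factor W(1 − W) keeps the weight within [0, 1], and the sign of τ − |V1 − V2| determines whether the bond strengthens or weakens:

```python
def slhomo(V1, V2, W, alpha=1.0, tau=0.2):
    # Simple linear homophily slhomo_{alpha,tau}(V1, V2, W) from Table 2:
    # persons whose states differ less than tipping point tau bond more
    # strongly; persons differing more than tau lose connection strength
    return W + alpha * W * (1.0 - W) * (tau - abs(V1 - V2))
```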
The More Becomes More adaptation principle as described in Section 3.1.3 has
connectivity characteristics as shown in Figure 2. Here, the connectivity self-model
states for different connections affect each other, as a connection of a person X3 to a given
person Y depends on the existence and strengths of connections from other persons Xi to
the same person Y; see the black leveled arrows in the upper plane.
Figure 2. Connectivity characteristics of a self-model for the More Becomes More adaptation principle for person X3 with respect to person Y.
So, in this case the connectivity characteristics of the self-model are the nodes WX1,Y,
WX2,Y, WX3,Y, and Y, together with leveled connections (black arrows) from each WXj,Y to
WX3,Y and downward connections (pink arrows) from each WXj,Y to Y. Again, these (pink) downward connections take care that the value of WXj,Y is actually used for the connection weight of the connection from Xj to Y. For the aggregation characteristics of
this self-model, some form of aggregation of the weights of these other connections
represented by the WXj,Y can be used, such as by using a Euclidean or logistic sum
combination function; see Table 2. For example, in [32] a logistic sum function was used,
and in [33] a scaled sum (with scaling factor the number of existing connections for Y
resulting in an average weight), which is a first-order Euclidean combination function.
For the Interaction Connects adaptation principle described in Section 3.1.4, the
connectivity self-model states for the connection weights are affected by certain states
IXj,Y representing the strength of (actual) interaction. Therefore, the connectivity
characteristics of a self-model for this adaptation principle are as shown in Figure 3,
with (blue) upward connections from interaction states IXi,Y to the self-model states WXi,Y
and (pink) downward connections from WXi,Y to Y. Note that multiple interaction states can also be used for one connection, for example, for different interaction
channels. The aggregation characteristics of the self-model states WXi,Y can be specified,
for example, by a Euclidean or logistic sum function, as shown in Table 2.
Figure 3. Connectivity characteristics of a self-model for the Interaction Connects adaptation principle for persons X1, X2 and X3 with respect to person Y.
For the Enhanced Excitability adaptation principle described in Section 3.1.5, an
aggregation self-model with connectivity characteristics depicted in Figure 4 can be used.
Figure 4. Connectivity characteristics of a self-model for the Enhanced Excitability adaptation principle for state Y.
In this case state Y is assumed to use a logistic sum combination function, which has an excitability threshold parameter τ (or any other function with such a parameter). Here this excitability threshold is represented by aggregation self-model state TY, which is affected by exposure from activation of the involved states. Note that to enhance excitability, the value of self-model state TY representing the excitability threshold has to decrease. Therefore, these upward connections need to get negative connection weights, whereas a positive weight for the connection from TY to itself can be used. In this case, the (pink) downward connection from TY to Y takes care that the value of TY is actually used for the threshold value of the logistic sum function of Y. Also, a connection from a related connectivity self-model state WX,Y to TY with positive connection weight might be added in this self-model to obtain some balancing effect. For the aggregation characteristics, for example, a Euclidean (with odd order n to keep the negative impacts negative) or logistic sum function can be used for TY, as shown in Table 2.
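A sketch of this construction, reusing the eucl and alogistic functions from the Section 2 sketch (the specific connection weights used here are illustrative assumptions): exposure enters TY with negative weights so that the represented threshold decreases, and the current TY value is then passed as the τ parameter of Y's logistic function.

```python
def step_T(T, exposures, eta_T=0.2, dt=0.5):
    # Negative upward connections from the involved states plus a positive
    # self-impact; eucl with odd order n=1 keeps the negative impacts negative
    impacts = [-0.5 * v for v in exposures] + [1.0 * T]
    return T + eta_T * (eucl(impacts, n=1, lam=1.0) - T) * dt

def step_Y_excitable(Y, impacts, T, eta_Y=0.5, dt=0.5, sigma=8.0):
    # Downward connection: the TY value is used as threshold tau of state Y
    return Y + eta_Y * (alogistic(impacts, sigma=sigma, tau=T) - Y) * dt
```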

4.2. Second-Order Self-Models for Second-Order Adaptation Principles
The first second-order adaptation principle discussed is the Exposure Accelerates
Adaptation Speed principle described in Section 3.2.1. This is modeled by a second-order
timing self-model. As it is a second-order adaptation principle for some first-order
adaptation principle, for the sake of clarity it is described here with respect to the first-
order adaptation principle for Hebbian Learning; it might equally be applied to other first-order adaptation principles, in which case it has a structure similar to what is shown here. The connectivity characteristics of this timing self-model are shown in
Figure 5; they consist of the states HWX,Y, WX,Y, X, and Y, together with the (positive,
blue) upward connections from the two base states X and Y to the self-model state HWX,Y
expressing the part of the principle referring to ‘exposure’, the (negative, blue) upward
connection from WX,Y to the self-model state HWX,Y, and the downward (pink) connection
from HWX,Y to WX,Y that takes care that the value of HWX,Y is actually used as speed factor
for WX,Y. By the upward connections, stronger activation of the base states X and Y will
lead to an increased value of HWX,Y, as indicated by the part of the principle referring to
‘accelerates’. The (negative) upward connection from the considered state WX,Y to HWX,Y
can be used for (counter)balancing. For the aggregation characteristics, for example a
Euclidean (with odd order n to keep the negative impacts negative) or logistic sum
function can be used for HWX,Y, as shown in Table 2.
Figure 5. Connectivity of a second-order self-model for the Exposure Accelerates Adaptation Speed adaptation
principle with a first-order self-model for Hebbian learning.
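A sketch of this second-order self-model, reusing the alogistic and hebb functions from the earlier sketches (the weights, steepness and threshold values are illustrative assumptions): the base states X and Y push HWX,Y up, WX,Y counterbalances, and the resulting value serves as the speed factor of the first-order W-state.

```python
def step_HW(HW, X_act, Y_act, W, eta_H=0.3, dt=0.5):
    # Positive upward impacts from the base states ('exposure') and a
    # negative counterbalancing impact from first-order state W_{X,Y}
    impacts = [0.5 * X_act, 0.5 * Y_act, -0.2 * W]
    return HW + eta_H * (alogistic(impacts, sigma=5.0, tau=0.3) - HW) * dt

def step_W_adaptive_speed(W, X_act, Y_act, HW, dt=0.5):
    # Downward connection: H_{W_{X,Y}} is used as the speed factor of W_{X,Y},
    # so stronger exposure makes the Hebbian learning itself run faster
    return W + HW * (hebb(X_act, Y_act, W) - W) * dt
```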
Next, the second-order Exposure Modulates Persistence adaptation principle (for the
first-order Hebbian Learning principle) described in Section 3.2.2 is addressed, based on
second-order aggregation self-model state MWX,Y representing persistence of the first-
order adaptation. For the connectivity characteristics of this self-model, see Figure 6.
Figure 6. Connectivity of a second-order self-model for the Exposure Modulates Persistence adaptation principle with a first-order self-model for Hebbian learning.
The upward connections from base states X and Y to MWX,Y may suppress the persistence (when they are negative). This paves the road to get rid of the learnt effects from the past in case they are no longer applicable. The positive upward connection from first-order state WX,Y to MWX,Y can be used for counterbalancing. However, the upward connections from base states X and Y to MWX,Y can also be made positive, in which case they increase persistence during a learning process to keep the learnt effect well. This also illustrates the Plasticity Versus Stability Conundrum adaptation principle described in Section 3.2.3. The (pink) downward connection from MWX,Y to WX,Y takes care that the value of MWX,Y is actually used as persistence factor μ for the adaptation of WX,Y. For the aggregation characteristics, for example, a Euclidean (with odd order n) or logistic sum function can be used for MWX,Y, as shown in Table 2.
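Analogously, a sketch for the persistence self-model (again reusing the earlier sketch functions, with assumed illustrative weights): MWX,Y is driven by the base states and counterbalanced by WX,Y, and its value replaces the persistence factor μ in the hebb rule. Flipping the sign of the upward weights switches between the forgetting variant and the stability variant described above.

```python
def step_MW(MW, X_act, Y_act, W, upward=-0.4, eta_M=0.3, dt=0.5):
    # upward < 0: exposure suppresses persistence (earlier learning decays);
    # upward > 0: exposure enhances persistence (the learnt effect is kept)
    impacts = [upward * X_act, upward * Y_act, 0.3 * W]
    return MW + eta_M * (alogistic(impacts, sigma=5.0, tau=0.2) - MW) * dt

def step_W_adaptive_persistence(W, X_act, Y_act, MW, eta_W=0.4, dt=0.5):
    # Downward connection: M_{W_{X,Y}} is used as persistence factor mu
    return W + eta_W * (hebb(X_act, Y_act, W, mu=MW) - W) * dt
```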
Finally, a second-order self-model for the Stress Blocks Adaptation principle described in Section 3.2.4 can be obtained in a way similar to the one for the Exposure Accelerates Adaptation Speed principle (see the connectivity in Figure 5), but this time with connectivity characteristics based on a negative upward connection from a base state representing the stress level, which brings the timing self-model state HWX,Y to low values or even 0. For the aggregation characteristics, again, for example, a Euclidean or logistic sum function can be used for HWX,Y; see Table 2.
5. Discussion
In this paper the use of self-modeling networks to model adaptive biological, mental and
social processes of any order of adaptation was addressed. Following the network-
oriented modeling approach described in [3], it was shown how self-models for networks
provide useful pre-specified building blocks to design complex multi-order adaptive
network models in the form of self-modeling networks. This was illustrated for a number
of adaptation principles from different application domains. A dedicated software
environment for self-modeling networks that has been developed supports the modeling
and simulation: https://www.researchgate.net/project/Network-Oriented-Modeling-Software.
As an illustration, in [3], Ch. 4, four of the adaptation principles known from the
literature and specified in Section 4 were applied to obtain a network model involving
both plasticity and metaplasticity. In particular, two first-order adaptation principles (for
Hebbian Learning and for Enhanced Excitability) and two second-order adaptation
principles (for Exposure Accelerates Adaptation Speed and for Exposure Modulates
Persistence) are covered in this network model.
References
[1] Treur J. Multilevel network reification: representing higher order adaptivity in a network. In: Aiello L,
Cherifi C, Cherifi H, Lambiotte R, Lió P, Rocha L. editors. Proc. of the 7th Int. Conf. on Complex
Networks and their Applications, ComplexNetworks'18, vol. 1. Studies in Computational Intelligence,
vol. 812, Springer Nature, 2018, p. 635-51.
[2] Treur J. Modeling higher-order adaptivity of a network by multilevel network reification. Network
Science 2020;8: S110-44.
[3] Treur J. Network-oriented modeling for adaptive networks: designing higher-order adaptive biological,
mental, and social network models. Cham, Switzerland: Springer Nature Publishing; 2020. 412 p.
[4] Treur J. Network-Oriented Modeling: Addressing Complexity of Cognitive, Affective and Social
Interactions. Cham, Switzerland: Springer Publishers; 2016. 499 p.
[5] Hebb DO. The organization of behavior: A neuropsychological theory. New York: John Wiley and Sons;
1949. 335 p.
[6] Shatz CJ. The developing brain. Sci. Am. 1992; 267:60–67. (10.1038/scientificamerican0992-60)
[7] Keysers C, Gazzola V. Hebbian learning and predictive mirror neurons for actions, sensations and
emotions. Philos Trans R Soc Lond B Biol Sci 2014;369: 20130175.
[8] Pearson M, Steglich C, Snijders T. Homophily and assimilation among sport-active adolescent substance
users. Connections 2006;27(1):47–63.
[9] McPherson M, Smith-Lovin L, Cook JM. Birds of a feather: homophily in social networks. Annu. Rev.
Sociol. 2001; 27:415–44.
[10] Levy DA, Nail PR. Contagion: A theoretical and empirical review and reconceptualization. Genetic,
social, and general psychology monographs 1993;119(2):233-284.
[11] Holme P, Newman MEJ. Nonequilibrium phase transition in the coevolution of networks and opinions. Phys. Rev. E 2006;74(5):056108.
[12] Sharpanskykh A, Treur J. Modelling and analysis of social contagion in dynamic networks.
Neurocomputing 2014; 146:140–50.
[13] Treur J. Mathematical analysis of the emergence of communities based on coevolution of social
contagion and bonding by homophily. Applied Network Science 2019;4: article 1.
[14] Simon HA. On a class of skew distribution functions. Biometrika 1955; 42: 425–40.
[15] Bornholdt S, Ebel H. World wide web scaling exponent from Simon's 1955 model. Phys. Rev. E 2001; 64: article 035104.
[16] Price DJ de S. A general theory of bibliometric and other cumulative advantage processes. J. Amer. Soc. Inform. Sci. 1976; 27: 292–306.
[17] Merton RK. The Matthew effect in science. Science 1968;159: 56–63.
[18] Barabási AL, Albert R. Emergence of scaling in random networks. Science 1999; 286: 509-512.
[19] Hove MJ, Risen JL. It's all in the timing: interpersonal synchrony increases affiliation. Soc. Cognit. 2009; 27: 949–60. doi:10.1521/soco.2009.27.6.949
[20] Pearce E, Launay J, Dunbar RIM. The Ice-breaker Effect: singing together mediates fast social bonding. Royal Society Open Science 2015;2: article 150221. http://dx.doi.org/10.1098/rsos.150221
[21] Weinstein D, Launay J, Pearce E, Dunbar RIM, Stewart L. Singing and social bonding: changes in connectivity and pain threshold as a function of group size. Evolution & Human Behaviour 2016;37(2):152-58. doi: 10.1016/j.evolhumbehav.2015.10.002
[22] Gilbert E, Karahalios K. Predicting tie strength with social media. Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems CHI’09, 2009, p. 211-20.
[23] Morris MR, Teevan J, Panovich K. What do people ask their social networks, and why? a survey study
of status message Q&A behavior. CHI 2010. 2010.
[24] Chandra N, Barkai E. A non-synaptic mechanism of complex learning: modulation of intrinsic neuronal
excitability. Neurobiology of Learning and Memory 2018; 154: 30-36.
[25] Choy M, El Fassi S, Treur J. An adaptive network model for pain and pleasure through spicy food and
its desensitization. Cognitive Systems Research 2020: in press
[26] Abraham WC, Bear MF. Metaplasticity: the plasticity of synaptic plasticity. Trends in Neuroscience
1996;19(4):126-130.
[27] Garcia R. Stress, metaplasticity, and antidepressants. Current Molecular Medicine 2002; 2: 629-38.
[28] Magerl W, Hansen N, Treede RD, Klein T. The human pain system exhibits higher-order plasticity
(metaplasticity). Neurobiology of Learning and Memory 2018; 154:112-20.
[29] Robinson BL, Harper NS, McAlpine D. Meta-adaptation in the auditory midbrain under cortical
influence. Nat. Commun. 2016; 7: article 13442.
[30] Sehgal M, Song C, Ehlers VL, Moyer Jr JR. Learning to learn – intrinsic plasticity as a metaplasticity
mechanism for memory formation. Neurobiology of Learning and Memory 2013; 105: 186-99.
[31] Sjöström PJ, Rancz EA, Roth A, Hausser M. Dendritic excitability and synaptic plasticity. Physiol Rev
2008; 88: 769–840.
[32] Beukel S van den, Goos SH, Treur J. An adaptive temporal-causal network model for social networks
based on the homophily and more-becomes-more principle. Neurocomputing 2019; 338: 361-71
[33] Blankendaal R, Parinussa S, Treur J. A temporal-causal modelling approach to integrated contagion and network change in social networks. Proceedings of the Twenty-second European Conference on Artificial Intelligence, ECAI'16, 2016, p. 1388–96.