Network Science (2020), 1–35
doi:10.1017/nws.2019.56
ORIGINAL ARTICLE
Modeling higher order adaptivity of a network
by multilevel network reification
Jan Treur
Social AI Group, Vrije Universiteit Amsterdam, Amsterdam, Netherlands (email: j.treur@vu.nl)
Action Editor: Hocine Cherifi
Abstract
In network models for real-world domains, often network adaptation has to be addressed by incorporating
certain network adaptation principles. In some cases, also higher order adaptation occurs: the adaptation
principles themselves also change over time. To model such multilevel adaptation processes, it is useful to
have some generic architecture. Such an architecture should describe and distinguish the dynamics within
the network (base level), but also the dynamics of the network itself by certain adaptation principles (first-
order adaptation level), and also the adaptation of these adaptation principles (second-order adaptation
level), and possibly still more levels of higher order adaptation. This paper introduces a multilevel network
architecture for this, based on the notion of network reification. Reification of a network occurs when a base
network is extended by adding explicit states representing the characteristics of the structure of the base
network. It will be shown how this construction can be used to explicitly represent network adaptation
principles within a network. When the reified network is itself reified in turn, second-order adaptation
principles can also be explicitly represented. The multilevel network reification construction introduced here
is illustrated for an adaptive adaptation principle from social science for bonding based on homophily and
one for metaplasticity in cognitive neuroscience.
Keywords: network reification; higher order network adaptation; adaptive social networks; plasticity and metaplasticity
1. Introduction
Within the complex dynamical systems area, adaptive behavior is an interesting and quite relevant
challenge, addressed in various ways, see, for example, Helbing et al. (2015) and Perc & Szolnoki
(2010). In particular for network-oriented dynamic modeling approaches, network models for
real-world domains often show some form of network adaptation based on certain network adap-
tation principles. Such principles describe how certain characteristics of the network structure
change over time, for example, the connection weights in mental networks with Hebbian learning
(Hebb, 1949) or in social networks with bonding based on homophily, for example, Byrne (1986),
McPherson et al. (2001), Pearson et al. (2006), Sharpanskykh & Treur (2014). Sometimes higher
order adaptation also occurs in the sense that the adaptation principles for a network themselves
also change over time. For example, plasticity in mental networks as described, for example, by
Hebbian learning is not a constant feature but usually varies over time, according to what in cog-
nitive neuroscience has been called metaplasticity, for example, Abraham & Bear (1996), Magerl
et al. (2018), Parsons (2018), Schmidt et al. (2013), Sehgal et al. (2013), Zelcer et al. (2006). To
model such multilevel network adaptation processes in a principled manner, it is useful to have
some generic architecture. Such an architecture should be able to distinguish and describe
© The Author(s) 2020. Published by Cambridge University Press. This is an Open Access article, distributed under the terms of the
Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Downloaded from https://www.cambridge.org/core. Vrije Universiteit Bibliotheek, on 04 Mar 2020 at 08:19:32, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/nws.2019.56
1. the dynamics within the base network;
2. the dynamics of the base network structure by network adaptation principles (first-order
adaptation);
3. the adaptation of these adaptation principles (second-order adaptation);
4. and maybe still more levels of higher order adaptation.
In the current paper, it is shown how such distinctions indeed can be made within a network-
oriented modeling framework using the notion of reified network architecture.
Reification is known from different scientific areas. It literally means representing something
abstract as a material or concrete thing, or making something abstract more concrete or real
(Merriam Webster and Oxford dictionaries). Reification has been shown to provide advantages
in modeling and programming languages in other areas of AI and computer science, for example,
Bowen & Kowalski (1982), Demers & Malenfant (1995), Galton (2006), Hofstadter (1979), Sterling
& Shapiro (1996), Sterling & Beer (1989), Weyhrauch (1980). Such advantages concern, for example, enhanced expressive power and more support for modeling adaptivity. Network reification
provides similar advantages. In Treur (2018a), it has been shown how network reification can be
used to explicitly represent adaptation principles for networks in a transparent and unified man-
ner. Examples of such adaptation principles are, among others, principles for Hebbian learning
(to model plasticity in the brain) and for bonding based on homophily (to model adaptive social
networks). Using network reification, adaptive mental networks and adaptive social networks can
be addressed well, as shown in Treur (2018a).
Including reification states for the characteristics of the base network structure (connection
weights, speed factors, and combination functions) in the extended network is one step. The next
step is defining proper temporal–causal relations for them and relations with the other states.
Then, a reified network is obtained that explicitly represents the characteristics of the base net-
work, and, moreover, how this base network evolves over time based on adaptation principles
that change the causal network relations. In Treur (2018a), it was shown how this can be used for
a variety of adaptation principles known from cognitive neuroscience and social science.
However, this is not the end of the story. Such reified adaptive networks form again a basic
network structure defined by certain characteristics, such as learning rate or adaptation speed of
change of connections. Adaptation principles may be adaptive themselves too, according to cer-
tain second-order adaptation principles. From recent literature, it has become quite clear that in
real-world domains such characteristics can still change over time. The notion of metaplasticity
or second-order adaptation has become a focus of study in literature such as Arnold et al. (2015),
Chandra & Barkai (2018), Daimon et al. (2017), Magerl et al. (2018), Parsons (2018), Robinson
et al. (2016), Sehgal et al. (2013), Schmidt et al. (2013), Zelcer et al. (2006). This area of higher order
adaptivity is a next challenge to be addressed. To this end, network reification can be applied again
on reified structures, thus obtaining repeated or multilevel reification. In the current paper, such
a construction of multilevel reification is illustrated for a network-oriented modeling approach
based on temporal–causal networks (Treur, 2016;2019a). This multilevel reification construction
introduced here can be used to model higher order adaptivity of any level. The multilevel reifica-
tion architecture has been implemented by the author in Excel (limited version) and in MATLAB
(general version). The homophily context will be used as a first application for the social sci-
ence area and the context of plasticity and metaplasticity as a second application for the cognitive
neuroscience area.
The current paper is an extended (by more than 90%) and rewritten version of Treur (2018b).
In the discussion, the differences are discussed in more detail. In Section 2, the network-oriented
modeling approach based on temporal–causal networks is briefly summarized. Section 3 provides
an overview of different application domains for higher order adaptation. Next, in Section 4, the
network reification concept is introduced, and in Section 5, the more general multilevel network
reification construction is defined. Moreover, it is shown how it can model second-order network
Table 1. Conceptual and numerical representation of a temporal–causal network structure, adopted from Treur (2019a).

Conceptual representation:
• States and connections — X, Y, X→Y: describes the nodes and links of a network structure (e.g., in graphical or matrix format).
• Connection weight — ωX,Y: the connection weight ωX,Y ∈ [−1, 1] represents the strength of the causal impact of state X on state Y with X→Y.
• Aggregating multiple impacts — cY(..): for each state Y, a combination function cY(..) is chosen to combine the causal impacts of other states on state Y.
• Timing of the causal effect — ηY: for each state Y, a speed factor ηY ≥ 0 is used to represent how fast a state is changing upon causal impact.

Numerical representation:
• State values over time t — Y(t): at each time point t, each state Y in the model has a real number value in [0, 1].
• Single causal impact — impactX,Y(t) = ωX,Y X(t): at t, state X with a connection to state Y has an impact on Y, using weight ωX,Y.
• Aggregating multiple impacts — aggimpactY(t) = cY(impactX1,Y(t), ..., impactXk,Y(t)) = cY(ωX1,Y X1(t), ..., ωXk,Y Xk(t)): the aggregated impact of multiple states Xi on Y at t is determined using combination function cY(..).
• Timing of the causal effect — Y(t + Δt) = Y(t) + ηY [aggimpactY(t) − Y(t)] Δt = Y(t) + ηY [cY(ωX1,Y X1(t), ..., ωXk,Y Xk(t)) − Y(t)] Δt: the causal impact on Y is exerted over time gradually, using speed factor ηY.
adaptivity. This is illustrated by a second-order adaptive network based on a first-order adapta-
tion principle for bonding based on homophily and a second-order adaptation principle for the
characteristics of this first-order adaptation principle. In Section 6, example simulations for this
multilevel network reification example are presented. Section 7 shows a mathematical analysis of
this example model. Section 8 presents a complexity analysis of network reification. In Section 9,
it is shown how the modeling approach based on reified network models can be applied to plas-
ticity and metaplasticity from cognitive neuroscience. Section 10 discusses a specification format
for this method and an implemented modeling environment for it. Section 11 is a discussion.
2. Structure and dynamics of temporal–causal networks
The network structure of a temporal–causal network model can be described conceptually by a
graph with nodes and directed connections and a number of labels for such a graph representing
the network characteristics: connection weights ωX,Y, speed factors ηY of states Y, and combination functions cY(..) for states Y; see Table 1, upper part, and Figure 1 for an example of a basic fragment of a network with states X1, X2, and Y, and labels ωX1,Y and ωX2,Y for connection weights, cY(..) for the combination function, and ηY for the speed factor. A library with a number
of standard combination functions is available as an option, but new functions can also be added. In
the lower part of Table 1, it is shown how the numerical representation of the network’s dynamics
is defined in terms of the above labels (see also Treur 2016, Chapter 2). Here, X1, ..., Xk are
the states that have outgoing connections to state Y. These formulas in the last row in Table 1
define the detailed dynamic semantics within a temporal–causal network. They can be used for
mathematical analysis and for simulation and can be written in differential equation format as
follows:
Figure 1. Fragment of a temporal–causal network structure in a labeled graph representation. The basic elements are nodes and their connections, with for each node Y a speed factor ηY and a combination function cY(..), and for each connection from X to Y a connection weight ωX,Y.
dY(t)/dt = ηY [cY(ωX1,Y X1(t), ..., ωXk,Y Xk(t)) − Y(t)]
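To make the dynamics concrete, the difference equation variant from Table 1 can be iterated directly. The following is a minimal Python sketch, not the author's implementation (which is in Excel and MATLAB); the state names, weights, speed factor, and the scaled-sum combination function are illustrative assumptions only:

```python
# Minimal Euler-style simulation of a tiny temporal-causal network:
#   Y(t + dt) = Y(t) + eta_Y * (aggimpact_Y(t) - Y(t)) * dt
# with aggimpact_Y computed by a scaled-sum combination function.

def ssum(lam, impacts):
    """Scaled-sum combination function ssum_lambda."""
    return sum(impacts) / lam

def simulate(steps=100, dt=0.5):
    X1, X2, Y = 1.0, 1.0, 0.0       # constant source states; Y starts at 0
    omega1, omega2 = 0.8, 0.4       # connection weights omega_{Xi,Y}
    eta_Y = 0.5                     # speed factor of Y
    lam = omega1 + omega2           # scaling factor keeps Y in [0, 1]
    for _ in range(steps):
        aggimpact = ssum(lam, [omega1 * X1, omega2 * X2])
        Y = Y + eta_Y * (aggimpact - Y) * dt
    return Y

print(simulate())  # Y gradually approaches the aggregated impact (here 1.0)
```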
Examples of combination functions are the identity id(.) for states with impact from only
one other state, the scaled sum ssumλ(..) with scaling factor λ, the scaled minimum function
sminλ(..) and maximum function smaxλ(..), and the advanced logistic sum combination function
alogisticσ,τ(..) with steepness σ and threshold τ (see also Treur 2016, Chapter 2, Table 2.10):

id(V) = V
ssumλ(V1, ..., Vk) = (V1 + ··· + Vk)/λ
sminλ(V1, ..., Vk) = min(V1, ..., Vk)/λ
smaxλ(V1, ..., Vk) = max(V1, ..., Vk)/λ
alogisticσ,τ(V1, ..., Vk) = [1/(1 + e^(−σ(V1 + ··· + Vk − τ))) − 1/(1 + e^(στ))] (1 + e^(−στ))
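These functions translate directly into code. The sketch below is an illustrative Python rendering (function names follow the text; the paper's own library is in MATLAB); note how the correction term in alogistic makes the output 0 for zero input and the final factor rescales the upper bound to 1:

```python
from math import exp

# Sketches of the combination functions listed above.

def identity(V):
    return V

def ssum(lam, *V):
    return sum(V) / lam

def smin(lam, *V):
    return min(V) / lam

def smax(lam, *V):
    return max(V) / lam

def alogistic(sigma, tau, *V):
    # Advanced logistic sum: logistic of the summed input, shifted so
    # that zero input maps to 0, and rescaled so the supremum is 1.
    s = sum(V)
    return (1 / (1 + exp(-sigma * (s - tau)))
            - 1 / (1 + exp(sigma * tau))) * (1 + exp(-sigma * tau))

print(alogistic(8, 0.5))        # zero input -> 0.0
print(alogistic(8, 0.5, 10.0))  # large input -> close to 1.0
```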
Note that for basic combination functions, two parameters may be considered. Examples are the
scaling factor λ, the steepness σ, and the threshold τ above. These parameters may also be included
as arguments in the function, for example, alogistic(σ, τ, V1, ..., Vk).
Examples of combination functions applied in particular for adaptive networks are the
following
• Hebbian learning (see Section 9)
hebbμ(V1, V2, W) = V1 V2 (1 − W) + μW

Here, V1 and V2 refer to the activation levels of two connected states and W to their connection
weight; μ is a parameter for the persistence factor.
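As an illustration of how hebbμ acts as the combination function of a reified connection-weight state W, the Python sketch below iterates the standard update with a speed factor (all values are illustrative assumptions, not taken from the paper):

```python
# Sketch: hebb_mu used as combination function for a reified
# connection-weight state W, updated Euler-style with speed factor eta.

def hebb(mu, V1, V2, W):
    return V1 * V2 * (1 - W) + mu * W

def learn(V1=1.0, V2=1.0, mu=0.9, eta=0.5, dt=0.5, steps=200):
    W = 0.0
    for _ in range(steps):
        W = W + eta * (hebb(mu, V1, V2, W) - W) * dt
    return W

# For constant activations, W converges to the equilibrium
# V1*V2 / (1 - mu + V1*V2); for V1 = V2 = 1 and mu = 0.9 that is
# 1/1.1, approximately 0.909.
print(learn())
```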
• Simple linear homophily (see Section 5)
slhomoα,τ(V1, V2, W) = W + αW(1 − W)(τ − |V1 − V2|)

Here, V1 and V2 refer to the activation levels of the states of two connected persons and W to
their connection weight; τ is a tipping point parameter, and α is a modulation parameter. This is
applied to model bonding based on homophily (see Sections 5 and 6).
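The tipping point behavior of slhomo can be seen directly in code: the weight grows when the two persons' states differ by less than τ and shrinks otherwise. A minimal sketch with illustrative parameter values:

```python
# Sketch of the simple linear homophily function slhomo_{alpha,tau}.
# W increases when |V1 - V2| < tau (similar persons bond more strongly)
# and decreases when |V1 - V2| > tau; W = 0 and W = 1 are fixed points
# because of the factor W * (1 - W).

def slhomo(alpha, tau, V1, V2, W):
    return W + alpha * W * (1 - W) * (tau - abs(V1 - V2))

W = 0.5
print(slhomo(1.0, 0.2, 0.6, 0.65, W))  # similar states: result above 0.5
print(slhomo(1.0, 0.2, 0.1, 0.9, W))   # dissimilar states: result below 0.5
```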
The set of already available combination functions forms a combination function library (with
currently 35 functions) from which functions can be chosen during the design of a network model.
3. Adaptivity and higher order adaptivity of networks
Adaptivity of a system or model is always relative to a chosen basic structure. Such a structure is
defined by a number of characteristics that for a nonadaptive case are assumed constant but for an
adaptive case (some of them) are assumed to change over time. In principle, any adaptive system
could also be modeled as just some type of complex dynamic system, defined as a state-determined
system in the sense of Ashby (1960). However, from a conceptual viewpoint, the choice of a suit-
able basic structure often has advantages. For example, for a network structure as a basis such
characteristics concern the connections and their weights, the way in which different connections
to the same node are aggregated, and the characteristics indicating how flexible or rigid nodes
are (for the speed by which nodes can change). For the case that a network structure is chosen as
a basis, adaptive means that the network structure is changing; in particular, given the concepts
used to define a network structure in Section 2, this means that some of the connection weights,
the combination functions to aggregate multiple impacts, and/or the speed factors are changing
over time. In Section 4, it will be shown how adaptation principles defining an adaptive network
can be modeled in a reified network.
3.1 First-order adaptation principles
There are many well-known examples of first-order adaptive networks, for example, related to
or inspired by adaptation principles from neuroscience, cognitive science, or social science. Just
some examples are
• neural networks equipped with (machine) learning mechanisms such as back propagation or
deep learning to adapt connection weights over time;
• mental or neural networks equipped with a Hebbian learning mechanism (Hebb, 1949)to
adapt connection weights over time;
• social networks equipped with an adaptation mechanism for bonding based on homophily
(McPherson et al., 2001) to adapt connection weights over time.
Although the majority of the known first-order adaptive networks consider adaptations of con-
nection weights over time, also other elements of the network structure can be considered to be
adaptive, such as
• adaptive combination functions for aggregation and activation of nodes, for example, their
threshold values to model adaptive intrinsic properties of neurons such as their excitability
(e.g., Chandra & Barkai, 2018);
• adaptive speed factors of nodes to model adaptive processing speed, for example, Robinson
et al. (2016): “Adaptation accelerates with increasing stimulus exposure” (p. 2).
As several real-world examples show, adaptation principles may be adaptive themselves too,
according to certain second-order adaptation principles. This will be discussed next.
3.2 Second-order adaptation principles
Second-order adaptivity can occur in many forms and applications. From recent literature,
it has become quite clear that in real-world domains, characteristics representing adaptation
principles can still change over time, and often for good reasons. The notion of metaplasticity
or second-order adaptation has become a focus of study in the domains of cognitive neuroscience
and social sciences. Some examples are briefly discussed here:
• In an adaptive mental network based on Hebbian learning, the learning speed or persistence
factor may change over time, for example, due to age or due to experiences or medication. Such
second-order adaptation has been discussed for plasticity of the brain and in evolutionary
context, for example, see Arnold et al. (2015), Daimon et al. (2017), Robinson et al. (2016).
• In literature such as Arnold et al. (2015), Chandra & Barkai (2018), Daimon et al. (2017),
Magerl et al. (2018), Parsons (2018), Robinson et al. (2016), Sehgal et al. (2013), Schmidt
et al. (2013), Zelcer et al. (2006), various studies are reported which show how adaptation of
synapses as described, for example, by Hebbian learning can be modulated by suppressing or
amplifying the adaptation process; thus, some form of metaplasticity is described. Factors
affecting synaptic plasticity as reported are temporarily enhanced excitability of neurons, for
example, based on previous (learning) experiences or stress.
• From the social science area, in an adaptive social network based on an adaptation principle
for bonding based on homophily (McPherson et al., 2001), the similarity measure determin-
ing how similar two persons are may change over time, for example, due to age or other
varying circumstances. As an example, for somebody who is very busy or already has a lot of
connections, the requirements for being similar might become more strict.
• Also from the social science area is the second-order adaptation concept called “inhibit-
ing adaptation” described in Carley (2002; 2006). The idea is that networked organizations
need to be adaptive in order to survive in a dynamic world. However, some types of circum-
stances affect the adaptivity in a negative manner, for example, frequent changes of persons or
(other) resources. Such circumstances can be considered as inhibiting the adaptation capa-
bilities of the organization. Especially, in Carley (2006), it is described in some detail how
such inhibiting of adaptation can be exploited as a strategy to attack organizations that are
considered harmful or dangerous such as terrorist networks, by creating circumstances that
indeed achieve inhibiting adaptation.
The third topic above on adaptive adaptation principles for bonding based on homophily will be
illustrated in more detail in Sections 5 and 6; here, the first-order adaptation principle, describing
adaptation of connections based on bonding by homophily, is adapted by a second-order adapta-
tion principle that takes into account how many (and how strong) connections a person already
has. For the first two topics above, to address metaplasticity, adaptive adaptation principles for
Hebbian learning can be considered, for example, in which the adaptation speed factor (learn-
ing rate) for the first-order adaptation principle is changing based on a second-order adaptation
principle.
3.3 Higher order adaptation principles
A reasonable question is whether also adaptation principles of third or even higher order can
make sense. Interestingly, a Google Scholar search (on February 3, 2019) on “adaptation” provides
4 million results, “second order adaptation” 360, “third order adaptation” only 14, and “fourth
order adaptation” only 3. The graphs in Figure 2show the logarithm and double logarithm of
the number of results (hits) for the different orders. The dotted line is a linear trend line with
linear formula in x and y as indicated: the double logarithm seems to approximate the pattern
best, so the number of hits is in the order of a double (negative) exponential pattern e^(35.19 e^(−0.8684 n))
as a function of the order n. This very strong pattern might suggest that adaptation of order higher
than 2 is not often considered a very useful or applicable notion, to say the least. However, at least
in Fessler et al. (2015), some interesting ideas are put forward on higher order adaptation for the
area of evolutionary adaptive processes.
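The fitted double-exponential pattern can be checked numerically against the reported Google Scholar counts. The following small sketch (an illustration added here, not part of the original analysis) evaluates the trend formula for orders 1 through 4:

```python
from math import exp

# Fitted double (negative) exponential trend for the number of
# Google Scholar hits as a function of the adaptation order n:
#   hits(n) ~ e^(35.19 * e^(-0.8684 * n))
# Reported counts: ~4,000,000 / 360 / 14 / 3 for n = 1..4.

def fitted_hits(n):
    return exp(35.19 * exp(-0.8684 * n))

for n, reported in [(1, 4_000_000), (2, 360), (3, 14), (4, 3)]:
    print(n, round(fitted_hits(n)), reported)
```

For the higher orders, the fitted values land close to the reported counts, which is what the near-linear double-logarithmic trend line in Figure 2 expresses.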
For example, the S-curve in the human spine reflects the determinative influence of
the original function of the spine as a suspensory beam in a quadrupedal mammal, in
contrast to its current function as a load-bearing pillar: whereas the original design
functioned efficiently in a horizontal position, the transition to bipedality required
Figure 2. Logarithm (left graph) and double logarithm (right graph) of the number of hits of adaptivity of different orders
(vertical axis) in Google Scholar vs the order of adaptation (horizontal axis). The dotted linear trend line shows that the
double logarithm of the number of hits has an almost linear dependence on the order of adaptivity.
the introduction of bends in the spine to position weight over the pelvis (Lovejoy,
2005). The resulting configuration makes humans prone to lower back injury, illus-
trating how path dependence can both set the stage for kludgy designs and constrain
their optimality. Moreover, the combination of bipedality and pressures favoring
large brain size in humans exacerbates a conflict between the biomechanics of loco-
motion (favoring a narrow pelvis) and the need to accommodate a large infant skull
during parturition. This increases the importance of higher-order adaptations such
as relaxin, a hormone that loosens ligaments during pregnancy, allowing the pelvic
bones to separate. (Fessler et al., 2015)
Here, it is suggested that the following types of adaptation can be considered for the human
spine:
• First-order adaptation:
For quadrupedal mammals, a straight horizontal spine is an advantage.
• Second-order adaptation:
Transition to bipedality requires introduction of bends in the spine to position weight over
the pelvis, making humans prone to lower back injury.
Similarly, the following types of adaptation can be considered for the human pelvis:
• First-order adaptation:
Bipedality favors a narrow pelvis.
• Second-order adaptation:
Larger brain size needs a wider pelvis: relaxin is used to allow the pelvic bones to separate
during birth.
In another part of Fessler et al. (2015), the following is put forward:
Also of relevance here, one form of disgust, pathogen disgust, functions in part as
a third-order adaptation, as disease-avoidance responses are upregulated in a man-
ner that compensates for the increases in vulnerability to pathogens that accompany
pregnancy and preparation for implantation—changes that are themselves a second-
order adaptation addressing the conflict between maternal immune defenses and the
parasitic behavior of the half-foreign conceptus (Fessler et al., 2005, Jones et al., 2005,
Fleischman & Fessler, 2011). (Fessler et al., 2015)
Here, it is suggested that the following types of adaptation can be considered for the human way
of handling pathogens in the external world, in particular during the first trimester of pregnancy:
• First-order adaptation:
Immune system attacks pathogens.
• Second-order adaptation:
During pregnancy, the immune system is less active as foreign material has to enter.
• Third-order adaptation:
During pregnancy, disgust causes potential pathogens to be avoided, so that fewer risks of
pathogen entry are taken while the immune system is functioning at a low level.
It could be argued that this domain of evolutionary development is not exactly comparable to
the types of application domains considered in the current paper. For example, in evolutionary
processes, no organisms in their daily life are considered but species on an evolutionary relevant
long-term timescale. However, at least in a metaphorical sense, this evolutionary domain might
provide an interesting source of inspiration.
4. Addressing network adaptation by network reification
Network reification is a construction principle by which a base network is extended by extra states
that represent the characteristics of the network structure. These are what are called reification
states for the characteristics; in other words, the characteristics are reified by these states. More
specifically, these reification states represent the labels for connection weights, combination functions,
and speed factors shown in Table 1. For connection weights ωXi,Y for the connection from state Xi to state Y
and speed factors ηY for state Y, their reification states WXi,Y and HY represent their values,
and the vector CY = (C1,Y, C2,Y, ...) represents the chosen combination functions for state Y.
In Figure 3, the reification states are depicted in the upper (blue) plane, whereas the states of the
base network are in the lower (pink) plane.
Causal relations for these reified characteristics of a network can be defined within the reified
network: incoming connections affecting them and outgoing connections from them to base net-
work states. Such connections are the way in which adaptation principles are explicitly represented
within the (reified) network (Treur, 2018a). The downward pink arrows in Figure 3 define how
the reification states contribute to an aggregated impact on the related base network state.
These downward connections in Figure 3 and the combination functions for the base states are
defined in a generic manner. The general pattern is that each of the reification states WXi,Y, HY,
CY, and PY for connection weights, speed factors, combination functions, and their parameters has a causal
connection to state Y in the base network, as they all affect Y in their own way. All depicted
connections get weight 1, and in the reified network, the speed factors of the base states are set at
1 too. For the base states, new combination functions are needed that will be defined below. The
different components

C1,Y, C2,Y, ...

for CY are explained as follows. During modeling, a sequence of basic combination functions

bcf1(...), ..., bcfm(...)

is chosen from the function library discussed in Section 2 (if desired, new functions can also be
added to that library), to be used in the specific application addressed. For example,

bcf1(...) = ssumλ(..)
bcf2(...) = alogisticσ,τ(..)
Figure 3. Network reification for a temporal–causal network: within the upper, blue plane, reification states HY for the speed factor of base state Y, Cj,Y for the combination functions of Y, WXi,Y for the weights of the connections from Xi to Y, and Pi,j,Y for the combination function parameters. The downward connections from these reification states to state Y in the base network (in the lower, pink plane) indicate their causal effect on Y.
Figure 4. Multilevel reified network model picture of a second-order adaptive social network based on homophily. At the first reification level (middle, blue plane), the reification states WX1,Y and WX2,Y represent the adaptive connection weights for Y; they are changing based on a homophily principle. At the second reification level (upper, purple plane), the reified tipping point states TPWX1,Y and TPWX2,Y represent the adaptive tipping point values for the connection adaptation based on homophily, and HWX1,Y and HWX2,Y represent the connection weight adaptation speed factors.
Each basic combination function bcfj(...) is assumed to have two parameters for each state:
π1,j,Y and π2,j,Y. These combination function parameters π1,1,Y, π2,1,Y, ..., π1,m,Y, π2,m,Y in the m
selected combination functions can also be explicitly represented by parameter value reification
states

P1,1,Y, P2,1,Y, ..., P1,m,Y, P2,m,Y

so that they can also become adaptive. Their values are considered as the first arguments in
the basic combination functions bcfj(..) and also included as arguments in cY(...). Note that for applications, often more
informative names are used for these parameters πi,j,Y and their reification states Pi,j,Y, for
example, reification state TPWXi,Y in Figure 4 for the tipping point parameter τ for bonding by
homophily and MWsrss,psa in Figure 10 for the persistence parameter μ for Hebbian learning.
Inthebasenetwork,foreachstateY,combination function weights γ
i,Yare assumed: numbers
that may change over time such that the combination function cY(..)is expressed by:
Box 1. Derivation of the universal combination function for a reified network; see also Treur (2018a).

    cY(t, π1,1,Y, π2,1,Y, ..., π1,m,Y, π2,m,Y, V1, ..., Vk) =
        [γ1,Y(t) bcf1(π1,1,Y, π2,1,Y, V1, ..., Vk) + ... + γm,Y(t) bcfm(π1,m,Y, π2,m,Y, V1, ..., Vk)]
        / [γ1,Y(t) + ... + γm,Y(t)]

This describes that for Y a weighted average of basic combination functions is used. But if exactly
one of the γi,Y(t) is nonzero, just one basic combination function is selected for cY(...). This
approach makes it possible, for example, to gradually switch from one combination function bcfi
to another one bcfj over time by decreasing the value of γi,Y(t) and increasing the value of γj,Y(t).
The basic combination function weights γi,Y are represented by the reification states Ci,Y. In Treur
(2018a), a universal combination function c∗Y(...) has been found for any base state Y in the reified
network (for a derivation, see Box 1):

    c∗Y(H, C1, ..., Cm, P1,1, P2,1, ..., P1,m, P2,m, W1, ..., Wk, V1, ..., Vk, V)
    = H [C1 bcf1(P1,1, P2,1, W1V1, ..., WkVk) + ... + Cm bcfm(P1,m, P2,m, W1V1, ..., WkVk)]
        / [C1 + ... + Cm] + (1 − H) V
    = H ([C1 bcf1(P1,1, P2,1, W1V1, ..., WkVk) + ... + Cm bcfm(P1,m, P2,m, W1V1, ..., WkVk)]
        / [C1 + ... + Cm] − V) + V
where

• H refers to the speed factor reification HY(t)
• Pi,j to the parameter reification value Pi,j,Y(t) of parameter i = 1, 2 of basic combination function j = 1, ..., m
• Ci to the combination function weight reification Ci,Y(t)
• Vi to the state value Xi(t) of base state Xi
• Wi to the connection weight reification WXi,Y(t)
• V to the state value Y(t) of base state Y.
This combination function c∗Y(...) makes that the dynamics of any base state Y within the reified
network are described by the following universal difference equation:

    Y(t+Δt) = Y(t) + [c∗Y(HY(t), C1,Y(t), ..., Cm,Y(t), P1,1,Y(t), P2,1,Y(t), ...,
        P1,m,Y(t), P2,m,Y(t), WX1,Y(t), ..., WXk,Y(t), X1(t), ..., Xk(t), Y(t)) − Y(t)] Δt

which can be rewritten into equivalent forms such as

    Y(t+Δt) = Y(t) +
        [HY(t) [C1,Y(t) bcf1(P1,1,Y(t), P2,1,Y(t), WX1,Y(t)X1(t), ..., WXk,Y(t)Xk(t)) + ...
            + Cm,Y(t) bcfm(P1,m,Y(t), P2,m,Y(t), WX1,Y(t)X1(t), ..., WXk,Y(t)Xk(t))]
            / [C1,Y(t) + ... + Cm,Y(t)]
        + (1 − HY(t)) Y(t) − Y(t)] Δt

    Y(t+Δt) = Y(t) +
        [HY(t) [C1,Y(t) bcf1(P1,1,Y(t), P2,1,Y(t), WX1,Y(t)X1(t), ..., WXk,Y(t)Xk(t)) + ...
            + Cm,Y(t) bcfm(P1,m,Y(t), P2,m,Y(t), WX1,Y(t)X1(t), ..., WXk,Y(t)Xk(t))]
            / [C1,Y(t) + ... + Cm,Y(t)]
        − HY(t) Y(t)] Δt

    Y(t+Δt) = Y(t) +
        HY(t) [ [C1,Y(t) bcf1(P1,1,Y(t), P2,1,Y(t), WX1,Y(t)X1(t), ..., WXk,Y(t)Xk(t)) + ...
            + Cm,Y(t) bcfm(P1,m,Y(t), P2,m,Y(t), WX1,Y(t)X1(t), ..., WXk,Y(t)Xk(t))]
            / [C1,Y(t) + ... + Cm,Y(t)]
        − Y(t)] Δt
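The universal combination function and the universal difference equation can be sketched directly in code. The following is a minimal Python sketch; the names `bcf_ssum`, `c_star`, and `euler_step` are my own, and the two-parameter signature of each basic combination function follows the convention stated above:

```python
def bcf_ssum(lam, _p2, *v):
    """Scaled sum ssum_lambda(V1,...,Vk); the second parameter slot is unused,
    following the convention that each basic combination function has two parameters."""
    return sum(v) / lam

def c_star(H, C, P, W, V, Vy, bcfs):
    """Universal combination function
    c*_Y(H, C1..Cm, P1,1, P2,1, ..., P1,m, P2,m, W1..Wk, V1..Vk, V):
    a C-weighted average of the basic combination functions applied to the
    impacts Wi*Vi, interpolated with the current value V by speed factor H."""
    impacts = [w * v for w, v in zip(W, V)]
    num = sum(c * f(p[0], p[1], *impacts) for c, p, f in zip(C, P, bcfs))
    return H * num / sum(C) + (1.0 - H) * Vy

def euler_step(Y, dt, **kw):
    """Universal difference equation Y(t+dt) = Y(t) + [c*_Y(...) - Y(t)] dt."""
    return Y + (c_star(Vy=Y, **kw) - Y) * dt
```

With H = 0 the state is frozen (c∗Y reduces to V), and with H = 1 the state moves toward the weighted average of the selected basic combination functions, matching the derivation in Box 1.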
Note that structures added by the reification process are not reified themselves. However, the
structure of the reified network can also be reified in a next step, providing what is then called a
second-order reification. In the next section, it is explored how such second-order reification can be
done and how it can be used to model second-order adaptation: adaptive adaptation principles.
5. Multilevel network reification
In this section, the multilevel reification architecture is introduced that allows modeling of
networks with arbitrary orders of adaptation. In this architecture, the base network has its own
internal dynamics, but it also evolves through one or more adaptation principles (called first-
order adaptation principles). Moreover, these first-order adaptation principles themselves can
change based on other adaptation principles (called second-order adaptation principles). So the
architecture offers n reification levels for an arbitrary n, where on reification level i adaptation
principles are defined for ith-order adaptation.
In the current section, an example inspired by social science will be used as an illustration
of this for n = 2: level 0 provides the base network, and reification levels 1 and 2 are used
to represent first- and second-order adaptation principles, respectively. This
example describes the way in which connections between two persons change over time based
on similarity between the persons. This concerns the bonding-by-homophily adaptation principle
as a first-order adaptation principle, represented at reification level 1 of the multilevel reification
architecture. In this adaptation principle, there is an important role for the homophily similarity
tipping point τ. This indicates the value such that

• when the dissimilarity between two persons is less than this value, their connection will become stronger, and
• when the difference is more, their connection will become weaker.

Such tipping points are usually considered constant, but it may be more realistic to assume that they
are adaptive over time. Here, an adaptive form of the bonding-by-homophily adaptation
principle is used where (for good reasons) the tipping points change over time by a second-order
adaptation principle represented at level 2 in the multilevel reification architecture. In addition,
a second-order adaptation principle is included for the speed factor of the connection weight
adaptation based on the first-order adaptation principle.
The architecture of the example multilevel reified network is shown in Figure 4. The middle
(blue) plane shows how the reification states WXi,Y are used for first-order reification of the
connection weights ωXi,Y. The downward arrows show the network relations of these reification
states WXi,Y to the states Xi and Y in the base network. Such network relations (including their
labels, such as combination functions; see below) for reification states define the first-order adaptation
principle based on homophily. Note that for this example, speed factors and combination
functions for the base level states Y are considered constant, so speed factor reification states HY
and combination function reification states Ci,Y for the base network states Y have been left out
of the middle plane.

On top of the first-order reified network, a second reification level has been added (the upper,
purple plane), in order to obtain a second-order reified network. Here, the following reification states
are added:

• reification states TPWXi,Y for the similarity tipping point parameter of the homophily adaptation principle for the connection weight reified by state WXi,Y
• reification states HWXi,Y for the speed factor characteristic of the homophily adaptation principle.

Also, for these reification states, (upward and downward) connections have been added.
These connections (together with the relevant combination functions discussed below) define
the second-order adaptation principle based on them. After having defined the overall architecture,
the combination functions for the new network are defined. Note that the speed factors and
connection weights in this new network are kept simple: all of them are set at 1.
5.1 Base-level combination functions (the lower plane)
For this level, the combination functions are

    c∗Y(W1, ..., Wk, V1, ..., Vk) = [γ1,Y bcf1(W1V1, ..., WkVk) + ... + γm,Y bcfm(W1V1, ..., WkVk)]
        / [γ1,Y + ... + γm,Y]

where

• Vi stands for Xi(t);
• Wi stands for WXi,Y(t).

See also Treur (2018a). Here, for this example, the coefficients γi,Y(t) are assumed constant. For
example, if cY(...) is chosen as the advanced logistic sum function alogisticσ,τ(...), which is the
second in the row bcf1(...), bcf2(...), ..., then γ1,Y = 0, γ2,Y = 1, and one obtains
    c∗Y(W1, ..., Wk, V1, ..., Vk) = bcf2(W1V1, ..., WkVk) = alogisticσ,τ(W1V1, ..., WkVk)
        = [1/(1 + e−σ(W1V1+...+WkVk−τ)) − 1/(1 + eστ)] (1 + e−στ)
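This advanced logistic sum can be implemented directly; a small illustrative Python sketch:

```python
import math

def alogistic(sigma, tau, *v):
    """Advanced logistic sum alogistic_{sigma,tau}(V1,...,Vk) =
    [1/(1+e^(-sigma(V1+...+Vk-tau))) - 1/(1+e^(sigma*tau))] (1+e^(-sigma*tau)).
    The correction terms make it exactly 0 at zero impact and keep it below 1."""
    s = sum(v)
    return ((1.0 / (1.0 + math.exp(-sigma * (s - tau)))
             - 1.0 / (1.0 + math.exp(sigma * tau))) * (1.0 + math.exp(-sigma * tau)))
```

Here it would be applied to the impacts WiVi, for example `alogistic(sigma, tau, W1*V1, W2*V2)`.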
5.2 First reification level combination functions (the middle plane)
At this first reification level, the combination function for the homophily adaptation principle
is needed (see Section 2 above or Treur, 2016, Chapter 11, Section 11.7):

    c∗WXi,Y(H, V1, V2, T, W) = H (W + αWXi,Y W(1 − W)(T − |V1 − V2|)) + (1 − H) W

where

• H refers to the speed factor reification HWXi,Y(t) for WXi,Y
• V1 to Xi(t), V2 to Y(t)
• T to the homophily tipping point reification TPWXi,Y(t) for WXi,Y
• W to the connection weight reification WXi,Y(t)
• αWXi,Y is a homophily modulation factor

This combination function (together with connection weights and speed factor 1) defines the
following difference equation for WXi,Y (see Section 2, Table 1):

    WXi,Y(t+Δt) = WXi,Y(t) +
        [HWXi,Y(t)(WXi,Y(t) + αWXi,Y WXi,Y(t)(1 − WXi,Y(t))(TPWXi,Y(t) − |Xi(t) − Y(t)|))
        + (1 − HWXi,Y(t)) WXi,Y(t) − WXi,Y(t)] Δt

This indeed means that the connection weight increases (in a linear fashion) when the difference
|Xi(t) − Y(t)| between the states of two persons is less than the tipping point TPWXi,Y(t), and
decreases when this difference is more.
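The behavior of this difference equation is easy to check in code; a minimal Python sketch (the function name `homophily_step` is my own):

```python
def homophily_step(W, H, TP, alpha, Xi, Y, dt):
    """One Euler step of the bonding-by-homophily difference equation:
    W(t+dt) = W(t) + [H (W + alpha W (1-W)(TP - |Xi - Y|)) + (1-H) W - W] dt."""
    c = H * (W + alpha * W * (1.0 - W) * (TP - abs(Xi - Y))) + (1.0 - H) * W
    return W + (c - W) * dt
```

A quick check of the two regimes: with states closer together than the tipping point the weight grows, and with states further apart it shrinks.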
5.3 Second reification level combination functions (the upper plane)
At this level, it is defined how the tipping points should be adapted to circumstances. The principle
used is that the tipping point of a person will become higher if the person lacks strong connections
(the person becomes less strict) and will become lower if the person already has strong
connections (the person becomes more strict). This is handled using an average norm weight ν
for connections. This can be considered to relate to the amount of time or energy available for
social contacts. So the effect is

• if the connections of a person are on average stronger than ν, downward regulation takes place: the tipping point will become lower;
• when the connections of this person are on average weaker than ν, upward regulation takes place: the tipping point will become higher.

This is expressed in the following combination function for the second reification level:

    c∗TPWY,Xi(W1, ..., Wk, T) = T + αTPWY,Xi T(1 − T)(νTPWY,Xi − (W1 + ... + Wk)/k)

where

• T refers to the homophily tipping point reification value TPWY,Xi(t) for WY,Xi;
• Wj refers to the connection weight reification value WY,Xj(t);
• αTPWY,Xi is a modulation factor for the tipping point;
• νTPWY,Xi is a norm for Y for the average connection weight over WY,X1 to WY,Xk.

Together with connection weights and speed factor 1, this combination function defines the
following difference equation for TPWY,Xi(t) (see Section 2, Table 1):

    TPWY,Xi(t+Δt) = TPWY,Xi(t) +
        [TPWY,Xi(t) + αTPWY,Xi TPWY,Xi(t)(1 − TPWY,Xi(t))(νTPWY,Xi
        − (WY,X1(t) + ... + WY,Xk(t))/k) − TPWY,Xi(t)] Δt

Note that in a slightly different variant the division by k can be left out. Then,
the norm does not concern the average but the cumulative connection weights.
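The regulation toward the norm ν can be sketched similarly in Python (illustrative; `tipping_point_step` is my own name, with speed factor 1 as above):

```python
def tipping_point_step(TP, alpha, nu, weights, dt):
    """One Euler step of the second-order tipping point adaptation:
    TP(t+dt) = TP(t) + alpha TP (1-TP)(nu - avg(W)) dt.
    The tipping point rises when the average connection weight is below the
    norm nu (the person becomes less strict) and falls when it is above."""
    avg = sum(weights) / len(weights)
    return TP + alpha * TP * (1.0 - TP) * (nu - avg) * dt
```

Note that the logistic-style factor TP(1 − TP) makes TP = 0 and TP = 1 stationary, which returns in the equilibrium analysis of Section 7.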
For the opposite connections, similarly, the following combination function can be used:

    c∗TPWXi,Y(W1, ..., Wk, T) = T + αTPWXi,Y T(1 − T)(νTPWXi,Y − (W1 + ... + Wk)/k)

where

• T refers to the WXi,Y homophily tipping point reification value TPWXi,Y(t);
• Wj refers to the connection weight reification value WXj,Y(t);
• αTPWXi,Y is a modulation factor for TPWXi,Y;
• νTPWXi,Y is a norm for the average (incoming) connection weights for Y.
For the adaptive connection adaptation speed factor, the following combination function can be
considered, making use of a similar mechanism with a norm for connection weights:

    c∗HWY,Xi(W1, ..., Wk, S) = S + βHWY,Xi S(1 − S)(νHWY,Xi − (W1 + ... + Wk)/k)

where

• S refers to the WY,Xi speed factor reification value HWY,Xi(t);
• Wj to the connection weight reification value WY,Xj(t);
• βHWY,Xi is a modulation factor for HWY,Xi;
• νHWY,Xi is a norm for the average of the (outgoing) connection weights for Y.

This combination function defines the following difference equation for HWY,Xi:

    HWY,Xi(t+Δt) = HWY,Xi(t) +
        [HWY,Xi(t) + βHWY,Xi HWY,Xi(t)(1 − HWY,Xi(t))(νHWY,Xi
        − (WY,X1(t) + ... + WY,Xk(t))/k) − HWY,Xi(t)] Δt
Also, here, an opposite variant is possible:

    c∗HWXi,Y(W1, ..., Wk, S) = S + βHWXi,Y S(1 − S)(νHWXi,Y − (W1 + ... + Wk)/k)

where

• S is used for the WXi,Y speed factor reification value HWXi,Y(t);
• Wj for the connection weight reification value WXj,Y(t);
• βHWXi,Y is a modulation factor for HWXi,Y;
• νHWXi,Y is a norm for the average of the (incoming) connection weights for Y.
Table 2. Main network characteristics for Scenario 1/Scenario 2.

Base level:
• Contagion alogistic steepness σXi for Xi: 1
• Contagion alogistic threshold τXi for Xi: 1.5
• Speed factor ηXi for base state Xi: 0.5

First reification level:
• Homophily modulation factor αWX1,Xi for WX1,Xi: 1
• Connection weight speed factor ηWX1,Xi for WX1,Xi: 1

Second reification level:
• Tipping point speed factors ηTPWX1,Xi for TPWX1,Xi: 1
• Tipping point modulation factors αTPWX1,Xi for TPWX1,Xi: 0.1/0.9
• Tipping point connection norms νTPWX1,Xi for TPWX1,Xi: 0.6
Table 3. Scenarios 1 and 2: Initial values for connection weights and tipping points.

Connections  X1  X2  X3  X4  X5  X6  X7  X8  X9  X10
X1    0.5  0.3  0.1  0.2  0.6  0.5  0.2  0.3  0.4
X2    0.5  0.6  0.3  0.4  0.7  0.7  0.9  0.5
X3    0.3  0.6  0.7  0.4  0.4  0.6  0.8
X4    0.6  0.4  0.6  0.4  0.6  0.7  0.8
X5    0.2  0.5  0.7  0.6  0.4  0.9
X6    0.6  0.6  0.7  0.5  0.7  0.7  0.5  0.7
X7    0.2  0.8  0.6  0.7  0.6  0.7  0.7
X8    0.6  0.5  0.6  0.5  0.4  0.5
X9    0.6  0.6  0.7  0.4  0.7  0.6
X10   0.6  0.7  0.7  0.4  0.6  0.8

Initial tipping points TPWX1,Xi(0):
TPWX1,X2: 0.4   TPWX1,X3: 0.35   TPWX1,X4: 0.5   TPWX1,X5: 0.65   TPWX1,X6: 0.2
TPWX1,X7: 0.3   TPWX1,X8: 0.25   TPWX1,X9: 0.55   TPWX1,X10: 0.6
Table 4. Scenario 3: Main network characteristics.

Base level:
• Contagion alogistic steepness σXi for Xi: 0.8
• Contagion alogistic threshold τXi for Xi: 0.15
• Speed factor ηXi for base state Xi: 0.5

First reification level:
• Homophily modulation factor αWXj,Xi for WXj,Xi: 1
• Connection weight speed factor ηWXj,Xi for WXj,Xi: 1

Second reification level:
• Tipping point speed factors ηTPWXj,Xi for TPWXj,Xi: 0.5
• Tipping point modulation factors αTPWXj,Xi for TPWXj,Xi: 0.4
• Tipping point connection norms νTPWXj,Xi for TPWXj,Xi: 0.4
6. Simulation scenarios
The example simulations concern adaptive social network scenarios with 10 persons X1 to X10.
The first two scenarios address a case in which only the outgoing connections of X1 are adaptive
and the other connection weights are kept constant. For all simulations, Δt = 1 was used, and the
focus in all three scenarios was on the homophily adaptation with constant connection weight
speed factor HWXj,Xi = ηWXj,Xi = 1. In Table 2, the main network characteristics for Scenarios 1
and 2 can be found, and in Table 4 those for Scenario 3. In Table 3, the initial values for connection
weights and tipping points are shown for Scenarios 1 and 2.
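A toy version of such a scenario can be sketched end to end. The following Python sketch simulates a small second-order adaptive network (4 persons instead of 10, randomly chosen initial values); all names and the simplified synchronous update scheme are my own, with parameter values loosely following Table 2:

```python
import math, random

def alogistic(sigma, tau, *v):
    """Advanced logistic sum combination function."""
    s = sum(v)
    return ((1.0 / (1.0 + math.exp(-sigma * (s - tau)))
             - 1.0 / (1.0 + math.exp(sigma * tau))) * (1.0 + math.exp(-sigma * tau)))

def simulate(n=4, steps=200, dt=1.0, sigma=1.0, tau=1.5, eta=0.5,
             alpha_w=1.0, alpha_tp=0.1, nu=0.6, seed=0):
    """Simulate a second-order adaptive social network:
    base level X (social contagion), first reification level W (homophily),
    second reification level TP (tipping points regulated toward norm nu)."""
    rng = random.Random(seed)
    X = [rng.random() for _ in range(n)]
    W = [[rng.random() if i != j else 0.0 for j in range(n)] for i in range(n)]
    TP = [[0.4 if i != j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(steps):
        # base level: each Xj aggregates the impacts W[i][j] * X[i]
        newX = [X[j] + eta * (alogistic(
                    sigma, tau, *[W[i][j] * X[i] for i in range(n) if i != j]) - X[j]) * dt
                for j in range(n)]
        # first reification level: bonding by homophily (speed factor 1)
        newW = [[W[i][j] + alpha_w * W[i][j] * (1.0 - W[i][j])
                     * (TP[i][j] - abs(X[i] - X[j])) * dt if i != j else 0.0
                 for j in range(n)] for i in range(n)]
        # second reification level: tipping points move toward the norm nu
        # for the average outgoing connection weight of person i
        newTP = [[TP[i][j] + alpha_tp * TP[i][j] * (1.0 - TP[i][j])
                      * (nu - sum(W[i][m] for m in range(n) if m != i) / (n - 1)) * dt
                  if i != j else 0.0 for j in range(n)] for i in range(n)]
        X, W, TP = newX, newW, newTP
    return X, W, TP
```

Running `simulate()` and plotting the rows of W over time gives a small-scale impression of the dynamics discussed in the scenarios below; all state, weight, and tipping point values stay within [0, 1] under these parameter choices.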
Figure 5. Scenario 1: Upper graph: The adaptive weights WX1,Xi of outgoing connections from X1 over time, with the thick
pink line showing their average weight. This average weight is initially 0.344, which is below the desired value 0.6
of the norm νTPWX1,Xi, but finally it approximates the value 0.6 of this norm. Before that is achieved, however, over-controlling
reactions cause strong fluctuations: connections are too strong until time point 300, and again too weak
between time points 400 and 750. Lower graph: The adaptive tipping points TPWX1,Xj over time. Initially,
the tipping point values were much too high, which explains why in the first phase (until time 400) too many connections
were strengthened. After the short initial phase, the tipping point values were adapted so that they became very low, so
that the connections were weakened; finally, they reached some equilibrium values.
6.1 Scenario 1: Adaptive connections from X1; αTPWX1,Xi = 0.1
For this scenario, the initial values for connection weights and tipping points can be found in
Table 3. The average of the initial values of WX1,Xi is 0.344, which is below the norm νTPWX1,Xi
of 0.6. The example simulation for this scenario shown in Figures 5 and 7 may look a bit
chaotic, as some connections seem to meander between high and low. However, in this scenario,
it can be seen that the average connection weight, indicated by the thick pink line, converges
to 0.60145 (at time point 1750), which is close to 0.6, the value chosen as the norm νTPWX1,Xi for
the average connection weight. So at least this convergence of the average connection weight to
νTPWX1,Xi makes sense. As can be seen in Figure 5, there is some variation of the connection weights
around the average connection weight 0.60145 at time 1750. Note that the connection weights at
time 1750 do not correlate with the initial connection weights; they are determined by the similarity
in states via the homophily principle. With all of these nine persons, X1 initially developed
very strong connections (above 0.97) around time 50, but that turned out to be too much. Therefore, six
of the nine were reduced between time 100 and 500, while three stayed high all the time: WX1,X3,
WX1,X5, and WX1,X9. Two of these six stayed very low: WX1,X8 and WX1,X10.
Figure 6. Scenario 1: Resulting connection weights WX1,Xi at time 1750 compared to their initial values.
As by the homophily principle the similarity of state values has an important effect on the dynamics of
these connection weights, it can be seen that there is not much correlation between initial and final values
of the weights.
Figure 7. Scenario 2: Upper graph: The adaptive weights of outgoing connections from X1 over time, with the thick pink
line showing the average weight for X1. Compared to Scenario 1, this time it turns out to be more difficult to reach an equilibrium.
Lower graph: The adaptive tipping points TPWX1,Xj over time. The tipping point values react in a rather sensitive manner to
the fluctuations shown in the upper graph.
With six connections very low, the average of the connections became too low; therefore, from these
six, three were increased after time 750, and a fourth one after time 1000. Eventually, two of them,
WX1,X2 and WX1,X7, are around 0.6, one, WX1,X4, is around 0.8, and one, WX1,X6, is around 0.35. So
what has emerged is that the person eventually has developed and kept three very good contacts
X3, X5, and X9, has lost two contacts X8 and X10, and has kept the other four contacts with
intermediate strengths of different levels. The lower graph of Figure 7 shows the variation in tipping point
reification states over time.
Table 5. Scenario 3: Initial connection weights.

Connections  X1  X2  X3  X4  X5  X6  X7  X8  X9  X10
X1    0.5  0.3  0.1  0.2  0.6  0.5  0.2  0.3  0.4
X2    0.5  0.6  0.3  0.4  0.7  0.7  0.9  0.5
X3    0.3  0.6  0.7  0.7  0.4  0.4  0.6  0.8
X4    0.6  0.4  0.6  0.4  0.6  0.7  0.8  0.9
X5    0.2  0.5  0.7  0.4  0.4  0.9  0.4
X6    0.6  0.6  0.7  0.5  0.7  0.7  0.5  0.7
X7    0.2  0.8  0.6  0.7  0.6  0.7  0.7
X8    0.6  0.5  0.4  0.6  0.5  0.4  0.5
X9    0.6  0.6  0.7  0.4  0.7  0.6
X10   0.6  0.7  0.7  0.4  0.6  0.8
6.2 Scenario 2: Adaptive connections from X1; αTPWX1,Xi = 0.9
Scenario 1 shown above is actually not one of the most chaotic scenarios; some other scenarios
show a much more chaotic pattern. As an example, when for the tipping point adaptation the
much higher modulation factor αTPWX1,Xi = 0.9 is chosen (instead of 0.1 as in Scenario 1; all other
values stay the same), the pattern is still more chaotic, as shown in Figure 7. Yet, in the long run,
the average connection weight in this case moves around the set point 0.6; notice, however, that around
time point 1250 the process seemed close to an equilibrium, but this was violated by
what happened later. Moreover, the fluctuating pattern of the tipping points in Figure 7 also does
not suggest that it will become stable.
6.3 Scenario 3: All connections adaptive
In Scenario 3, all connections are adaptive, with main network characteristics shown in Table 4
and initial connection weight values shown in Table 5. The norm for the average connection weight is
0.4 this time.

In Figure 8, the simulation results are shown for Scenario 3. As can be seen in Figure 9, eventually
all connection weights converge to 0 or 1. The upper graph of Figure 8 shows in particular the
values of the connection weights from X1 and their average, and the middle graph of Figure 8 shows the
corresponding tipping points.

Figure 8 shows that in the process the average connection weights per person eventually converge
in some initially unclear manner to a discrete set of values: 0.111111 (X10), 0.222222 (X5), 0.333333
(X3, X9), and 0.555555 (X1, X2, X4, X6, X7, X8), all multiples of 0.111111; the overall average ends
up at 0.433333 (recall that the norm νTPWXi,Xj for the average connection weight of each person was
0.4). Also, in other simulations, this discrete set of multiples of 0.111111 shows up. In Section 7, it
will be analyzed where these values come from.

Figure 9 shows that all connection weights converge to 0 or 1. This too will be analyzed in
Section 7. The tipping points for all outgoing connections of X1 converge to 0 (see also Figure 8),
and for all outgoing connections of the other persons they converge to 1.
7. Analysis of equilibrium values
In this section, the possible values to which certain states in the second-order reified network may
converge are analyzed.
Figure 8. Scenario 3: Upper graph: The adaptive weights of outgoing connections from X1 over time, with the thick pink line
showing the average weight for X1. Here, after time 750 all connection weights become 0 or 1. Middle graph: The adaptive
tipping points TPWX1,Xj over time. Due to the very low connection weights between times 500 and 750 (see upper graph), the tipping
point values show a strong temporary increase to enable strengthening of connections. Lower graph: Average connection
weights for each of X1 to X10 and the overall average of all connections over time.
Figure 9. Scenario 3: All connection weights are 0 or 1 at time 1750.
7.1 Definition (stationary point and equilibrium)
A state Y has a stationary point at t if dY(t)/dt = 0. The network is in equilibrium at t if every state
Y of the model has a stationary point at t.

Note that this Y also applies to the reification states. Given the differential equation for a
temporal–causal network model, a more specific criterion can be found.
7.2 Criterion for a stationary point in a temporal–causal network
Let Y be a state with speed factor ηY > 0, and let X1, ..., Xk be the states with outgoing connections
to state Y. Then, Y has a stationary point at t iff cY(ωX1,Y X1(t), ..., ωXk,Y Xk(t)) = Y(t).

This can be applied to the states at all levels in the second-order reified network, in particular
to the first and second reification levels. As a first step, the above criterion applied to the tipping point
reification states at the second reification level is as follows:

    c∗TPWXi,Xj(W1, ..., Wk, T) = T

with T = TPWXi,Xj(t) and Wl = WXi,Xl(t). This provides the following equation:

    T + αTPWXi,Xj T(1 − T)(νTPWXi,Xj − (W1 + ... + Wk)/k) = T

which can be rewritten as follows:

    αTPWXi,Xj T(1 − T)(νTPWXi,Xj − (W1 + ... + Wk)/k) = 0

Assuming that αTPWXi,Xj is nonzero, this equation has three solutions:

    T = 0 or T = 1 or (W1 + ... + Wk)/k = νTPWXi,Xj
Similarly, the criterion can be applied to the connection weights at the first reification level:

    c∗WXi,Xj(H, V1, V2, T, W) = W

with

    H = HWXi,Xj(t)
    V1 = Xi(t), V2 = Xj(t)
    T = TPWXi,Xj(t)
    W = WXi,Xj(t)

This provides the following equation:

    H(W + αWXi,Xj W(1 − W)(T − |V1 − V2|)) + (1 − H) W = W
Table 6. Overview of the different solutions of the equilibrium equations for WXi,Xj and TPWXi,Xj.

For TPWXi,Xj(t) = 0, combined with:
• HWXi,Xj(t) = 0: HWXi,Xj(t) = 0 and TPWXi,Xj(t) = 0
• WXi,Xj(t) = 0: WXi,Xj(t) = 0 and TPWXi,Xj(t) = 0
• WXi,Xj(t) = 1: WXi,Xj(t) = 1 and TPWXi,Xj(t) = 0
• |Xi(t) − Xj(t)| = TPWXi,Xj(t): Xi(t) = Xj(t) and TPWXi,Xj(t) = 0

For TPWXi,Xj(t) = 1, combined with:
• HWXi,Xj(t) = 0: HWXi,Xj(t) = 0 and TPWXi,Xj(t) = 1
• WXi,Xj(t) = 0: WXi,Xj(t) = 0 and TPWXi,Xj(t) = 1
• WXi,Xj(t) = 1: WXi,Xj(t) = 1 and TPWXi,Xj(t) = 1
• |Xi(t) − Xj(t)| = TPWXi,Xj(t): (Xi(t) = 0 and Xj(t) = 1) or (Xi(t) = 1 and Xj(t) = 0), and TPWXi,Xj(t) = 1

For Σm WXi,Xm(t)/k = νTPWXi,Xj, combined with:
• HWXi,Xj(t) = 0: HWXi,Xj(t) = 0 and Σm WXi,Xm(t)/k = νTPWXi,Xj
• WXi,Xj(t) = 0: XXXXXXX
• WXi,Xj(t) = 1: XXXXXXX
• |Xi(t) − Xj(t)| = TPWXi,Xj(t): |Xi(t) − Xj(t)| = TPWXi,Xj(t) and Σm WXi,Xm(t)/k = νTPWXi,Xj
which can be rewritten as follows
HαWXi,XjW(1−W)(
T−|V1−V2|)+W=W
HαWXi,XjW(1−W)(
T−|V1−V2|)=0
Assuming that αWXi,Xjis nonzero, this equation has four solutions:
H=0orW=0orW=1or |V1−V2|=T
In combination with the three solutions for TWXi,Xj(t), the matrix in Table 6can be found.
In cases that H_{W_{Xi,Xj}}(t) is nonzero and |X_i(t) − X_j(t)| ≠ TP_{W_{Xi,Xj}}(t) (which was the case for the simulations displayed in Section 5), all W_{Xi,Xj}(t) are 0 or 1, so with k = 9 the average (W_{Xi,X1}(t) + ... + W_{Xi,Xk}(t))/k is a multiple of 1/9. In the simulation case, the norm ν_{TP_{W_{Xi,Xj}}} = 0.4 is not a multiple of 1/9. Therefore, the cases (W_{Xi,X1}(t) + ... + W_{Xi,Xk}(t))/k = ν_{TP_{W_{Xi,Xj}}} cannot actually occur, so then TP_{W_{Xi,Xj}}(t) = 0 or TP_{W_{Xi,Xj}}(t) = 1, and W_{Xi,Xj}(t) = 0 or W_{Xi,Xj}(t) = 1 are the only solutions. This is also shown by the simulations; indeed, all averages are multiples of 1/9 = 0.111111, as found above (e.g., see Figure 8). This explains the discrete set of numbers 0.111111, 0.222222, 0.333333, and so on, observed in the simulations; as shown here, this strongly depends on the number of states. Note that although in general the reified speed factor H_{W_{Xi,Xj}}(t) may be assumed nonzero, there may also be specific processes in which it converges to 0, for example, like the temperature in simulated annealing.
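The counting argument above can be checked directly by enumeration; this is a small Python sketch (k = 9 and ν = 0.4 as in the simulations; the enumeration itself is mine, not the paper's):

```python
from itertools import product

# With k = 9 weights that have each settled at 0 or 1, the average can
# only be one of the ten values 0/9, 1/9, ..., 9/9, and never the norm
# nu = 0.4, so the third equilibrium case cannot occur.

k, nu = 9, 0.4
averages = {sum(ws) / k for ws in product([0, 1], repeat=k)}

print(sorted(averages))   # multiples of 1/9 = 0.111111...
print(nu in averages)     # False
```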
8. On the added complexity
Note that, as for any dynamical system, by adding adaptivity to a network always complexity is
added. In this section, it is discussed how complexity of a network increases when reification is
applied: first for first-order reification and next for higher order reification.
8.1 Added complexity for first-order adaptation
To start with the outcome: network reification will increase complexity, but this increase is at most quadratic in the number of nodes N and linear in the number of connections M of the original network. More specifically, if m is the number of basic combination functions considered, then the number of nodes in the reified network is at most N (original nodes) + N (nodes for speed factors) + N² (nodes for connection weights) + mN (nodes for combination functions), which is

(2 + m + N)N
If not all connections are used but only a number M of them, the outcome is

(2 + m)N + M

This is linear in the number of nodes and connections. The number of connections in the reified network is M (original connection weights) + N (speed factors to their states) + Σ_Y indegree(Y) = M (connection weights to their states) + mN (combination function weights to their states), which is

(m + 1)N + 2M

Again, this is linear in the number of nodes and connections.
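The counts above can be put into a small function; this Python sketch (function name and example numbers are mine) reproduces the formulas (2 + m + N)N, (2 + m)N + M, and (m + 1)N + 2M:

```python
# Size of a first-order reified network: N original nodes, M original
# connections, m basic combination functions.

def reified_counts(N, M, m, reify_all_connections=True):
    if reify_all_connections:
        nodes = N + N + N * N + m * N    # = (2 + m + N) * N
    else:
        nodes = N + N + M + m * N        # = (2 + m) * N + M
    connections = M + N + M + m * N      # = (m + 1) * N + 2 * M
    return nodes, connections

print(reified_counts(N=10, M=25, m=3))                               # (150, 90)
print(reified_counts(N=10, M=25, m=3, reify_all_connections=False))  # (75, 90)
```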
8.2 Added complexity for higher order adaptation
If this analysis is applied in an iterative manner for second-order network reification, then the increase in complexity is still polynomial: at most in the fourth power of the number of nodes:

(N²)² = N⁴

Can this iteration be continued still further, thus obtaining nth-order reification for any n? Yes, theoretically there is no end to this. But also practically: for example, in the case used as illustration in the current paper, the parameter ν_{TP_{W_{Xi,Xj}}} for the norm of the average connection weight for the tipping point adaptation, used as a characteristic at the second reification level, could still be made adaptive (e.g., related to how busy someone is) and reified at a third reification level. For third-order reification, the increase in complexity is still polynomial: at most in the order of

((N²)²)² = N⁸

If n reification levels are added, then it is in the order of

N^(2^n)
which is still polynomial in N, but double exponential in n. The latter may suggest to limit the number of reification levels in practical applications to just a few, or alternatively, to add only a few new reification states in each reification step: for each step, reification can be done in a partial manner as well. For example, if only speed factors are reified, the number of states will only increase in a linear way: one extra state for each existing state. Recall the double negative exponential pattern of hits in the order of

e^(35.19 e^(−0.8684 n))

discussed in Section 3 and illustrated in Figure 2. In the current literature, adaptation of order higher than 2 is extremely rare, and of order higher than 3 practically absent. This supports the idea that for now adaptation of order > 3 is not considered interesting enough to be addressed. As shown above, for adaptation of order 3, the added complexity is in the order of an 8th degree polynomial, and for order 2 a 4th degree polynomial. Note, however, that Section 10.3 points out how efficient simulation of large-scale reified networks of thousands or even millions of states can be achieved by applying a form of precompilation.
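The growth of the upper bound with the order n can be tabulated with a one-line sketch (illustrative only):

```python
# Upper bound on the number of states after n reification levels:
# the state count can at most square at each level, giving N ** (2 ** n).

def max_nodes(N, n):
    return N ** (2 ** n)

for n in range(4):
    print(n, max_nodes(10, n))   # 10, 100, 10000, 100000000
```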
9. Application to plasticity and metaplasticity in cognitive neuroscience
In this section, it is shown how the reified temporal–causal network modeling approach can be used to model important developments in empirical science, in particular concerning plasticity and metaplasticity. These are important notions in state-of-the-art research in cognitive neuroscience, introduced not from a computational modeling perspective but by purely empirical researchers to clarify what was found empirically. This section shows how these notions can be
Table 7. State names for the plasticity and metaplasticity network model with their explanations.

X1  ss_s                 Base level                Sensor state for stimulus s
X2  srs_s                Base level                Sensory representation state for stimulus s
X3  bs_s                 Base level                Belief state for stimulus s
X4  ps_a                 Base level                Preparation state for response a
X5  W_{srs_s,ps_a}       First reification level   Reified representation state for connection weight ω_{srs_s,ps_a}
X6  T_{srs_s}            First reification level   Reified representation state for threshold parameter τ_{srs_s} of base state srs_s
X7  T_{ps_a}             First reification level   Reified representation state for threshold parameter τ_{ps_a} of base state ps_a
X8  H_{W_{srs_s,ps_a}}   Second reification level  Reified representation state for speed factor η_{W_{srs_s,ps_a}} of reified representation state W_{srs_s,ps_a}
X9  M_{W_{srs_s,ps_a}}   Second reification level  Reified representation state for persistence factor parameter μ_{W_{srs_s,ps_a}} of reified representation state W_{srs_s,ps_a}
connected to the reified temporal–causal network modeling approach described in the current paper. This particular example shows the essential elements but is kept relatively simple; it can easily be extended by adding more states and connections.
Mental networks equipped with a Hebbian learning mechanism (Hebb, 1949) are able to adapt connection weights over time and in this way learn or form memories. Within neuroscience, this is usually called plasticity. In some circumstances, it is better to learn (and change) fast, but in other circumstances, it is better to stay stable and preserve what has been learnt in the past. To control this, a type of (higher order) adaptation called metaplasticity is used. It has become an important focus of study in neuroscience. In literature such as Abraham & Bear (1996), Chandra & Barkai (2018), Magerl et al. (2018), Parsons (2018), Robinson et al. (2016), Sehgal et al. (2013), Schmidt et al. (2013), and Sjöström et al. (2008), various studies are reported which show how adaptation of synapses, as described, for example, by Hebbian learning, is modulated by suppressing or amplifying the adaptation process. Among the reported factors affecting synaptic plasticity are stimulus exposure, activation, previous experiences, and stress, which can accelerate or decelerate learning, or induce temporarily enhanced excitability of neurons, which in turn positively affects learning; see, for example, Chandra & Barkai (2018) and Oh et al. (2003).
9.1 Plasticity and metaplasticity in cognitive neuroscience
The reified temporal–causal network modeling approach was applied to a case involving both plasticity and metaplasticity, acquired from the literature mentioned above. Recall the different types of reification states depicted in Figure 3. A network picture of the designed reified network model for plasticity and metaplasticity is shown in Figure 10. Table 7 displays the explanations of the states. Section 10.1 shows the complete specification. Here, the plasticity of the response connection from srs_s to ps_a is considered, modeled by Hebbian learning. Note that the two T-states and the M-state are combination function parameter states here, respectively for the excitability thresholds τ of srs_s and ps_a, and for the persistence parameter μ of the Hebbian learning of the connection from srs_s to ps_a. The alternative path via the belief state bs_s supports this learning by contributing to the activation of ps_a, thus relating to the original formulation of the (first-order) Hebbian learning adaptation principle in Hebb (1949):
Figure 10. Overview of the reified network architecture for plasticity and metaplasticity, with the base level (lower plane, pink), first reification level (middle plane, blue), and second reification level (upper plane, purple), and upward causal connections (blue) and downward causal connections (red) defining the interlevel relations. The downward causal connections from the two T-states affect the excitability of the (presynaptic and postsynaptic) states srs_s and ps_a. The downward causal connections from the H-state and M-state affect the adaptation speed and the persistence factor of the connection weight reification state W_{srs_s,ps_a}.
When an axon of cell A is near enough to excite B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased. (Hebb, 1949, p. 62)
In principle, this will start to work when the external stimulus s is sensed through sensor state ss_s. However, as discussed in Section 1, whether or not and to which extent learning actually takes place is controlled by a form of metaplasticity, and also depends on factors such as the excitability characteristics of the involved states. To model metaplasticity, the model includes a second reification level with states H_{W_{srs_s,ps_a}} representing the speed of the learning (learning rate) of ω_{srs_s,ps_a}, and M_{W_{srs_s,ps_a}} representing the persistence μ_{W_{srs_s,ps_a}} of the connection weight ω_{srs_s,ps_a}. They have dynamic values depending on the other states. For example, if at some point in time the value of H_{W_{srs_s,ps_a}} is 0, no learning will take place, and if M_{W_{srs_s,ps_a}} has value 0, no learnt effects will persist; the values of these second-order reification states depend on activation of the presynaptic and postsynaptic states srs_s and ps_a; also see the following second-order adaptation principle in Robinson et al. (2016):

Adaptation accelerates with increasing stimulus exposure. (Robinson et al., 2016, p. 2)

To address dynamic levels of excitability of base states, first-order reification states T_{srs_s} and T_{ps_a} have been included that model the intrinsic excitability of the presynaptic and postsynaptic states srs_s and ps_a, respectively, by the value of the thresholds τ_{srs_s} and τ_{ps_a} of their logistic sum combination functions (also see Chandra & Barkai, 2018):

Learning-related cellular changes can be divided into two general groups: modifications that occur at synapses and modifications in the intrinsic properties of the neurons. While it is commonly agreed that changes in strength of connections between neurons in the relevant networks underlie memory storage, ample evidence suggest that modifications in intrinsic neuronal properties may also account for learning-related behavioral changes. Long-lasting modifications in intrinsic excitability are
Figure 11. Upper graph: dynamics of the base states and the adaptive connection weight ω_{srs_s,ps_a}. Lower graph: dynamics of the reification states, including the first-order reification state W_{srs_s,ps_a} for the adaptive connection weight, T_{srs_s} and T_{ps_a} for the activation thresholds of the presynaptic and postsynaptic states srs_s and ps_a, and the second-order reification states H_{W_{srs_s,ps_a}} and M_{W_{srs_s,ps_a}} for the adaptation speed and persistence factor of the connection weight reification state W_{srs_s,ps_a}.
manifested in changes in the neuron's response to a given extrinsic current (generated by synaptic activity or applied via the recording electrode). (Chandra & Barkai, 2018, p. 30)
For most states, the combination function used is the alogistic_{σ,τ}(..) function. The only exceptions are the sensor state ss_s, which uses the Euclidean function eucl_{1,λ}(..), and W_{srs_s,ps_a}, which uses the Hebbian combination function hebb_μ(..).
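For readers who want to experiment outside MATLAB, here is a Python sketch of these three combination functions, following Treur's standard definitions; treat it as illustrative and check the exact parameterization against the paper's function library:

```python
import math

def eucl(n, lam, *V):
    """Euclidean combination function eucl_{n,lambda}(V1, ..., Vk)."""
    return (sum(v ** n for v in V) / lam) ** (1 / n)

def alogistic(sigma, tau, *V):
    """Advanced logistic sum alogistic_{sigma,tau}(V1, ..., Vk)."""
    s = sum(V)
    return ((1 / (1 + math.exp(-sigma * (s - tau)))
             - 1 / (1 + math.exp(sigma * tau))) * (1 + math.exp(-sigma * tau)))

def hebb(mu, V1, V2, W):
    """Hebbian combination function hebb_mu(V1, V2, W)."""
    return V1 * V2 * (1 - W) + mu * W

print(eucl(1, 1, 0.8))           # identity for n = 1, lambda = 1
print(hebb(0.9, 1.0, 1.0, 0.5))  # 1*1*(1 - 0.5) + 0.9*0.5 = 0.95
```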
9.2 Simulation experiments for plasticity and metaplasticity
Following what is reported in the literature on metaplasticity, a number of simulation experiments have been performed. In particular, a scenario is shown here in which the focus was on the effect of activation of the postsynaptic state ps_a on plasticity; the effect of the presynaptic state srs_s on reification states was blocked (the weights of the upward links from srs_s were set to 0). In Figure 11, the simulation results are shown. For settings, see the specification in Section 5. The upper graph shows the activation levels of the base states and how the weight of the connection from srs_s to ps_a
is learnt. Here, the activation levels and also the exact shape of the learning curve depend on the controlling factors shown in the lower graph of Figure 11. As can be seen there, following exposure to stimulus s, the threshold values T_{srs_s} and T_{ps_a} for the activation of srs_s and ps_a decrease to low levels, which substantially increases the excitability of srs_s and ps_a, conform Chandra & Barkai (2018), and therefore gives a boost to the activation levels of these base states, which in turn strengthens the Hebbian learning. Also, it is shown that following exposure to stimulus s, the learning speed H_{W_{srs_s,ps_a}} strongly increases, which conforms to Robinson et al. (2016).

These controlling measures together result in a quite steep increase of the connection weight reification state. However, after the learnt level of the weight has become high, the thresholds increase again, and the learning speed decreases again. This makes the excitability of srs_s and ps_a lower and stops the boosts on learning; this has a positive effect on stabilizing the situation, in accordance with what, for example, Sjöström et al. (2008) call “The Plasticity Versus Stability Conundrum” (p. 773).
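The control loop just described can be illustrated with a deliberately compressed Python sketch: a single Hebbian weight W whose learning speed H itself adapts to joint activation, so learning accelerates under stimulus exposure and slows down once the weight has been learnt. This is not the paper's 9-state model or its settings; all equations and parameter values here are simplified and illustrative.

```python
# Compressed metaplasticity sketch: second-order adaptation (H) controls
# the speed of first-order adaptation (Hebbian learning of W).

dt, mu = 0.1, 0.95
V1 = V2 = 0.0          # pre- and postsynaptic activation
W, H = 0.1, 0.0        # connection weight and its adaptive learning speed

trace = []
for step in range(400):
    stimulus = 1.0 if step < 250 else 0.0
    V1 += dt * 0.5 * (stimulus - V1)     # activation follows the stimulus
    V2 += dt * 0.5 * (W * V1 - V2)       # downstream activation via W
    # second-order adaptation: speed rises with joint activation
    H += dt * 0.5 * (V1 * V2 - H)
    # first-order adaptation: Hebbian learning at the adaptive speed H
    hebb = V1 * V2 * (1 - W) + mu * W
    W += dt * H * (hebb - W)
    trace.append(W)

print(trace[0], trace[-1])   # W grows while the stimulus and H are high
```

Once the stimulus is removed, H decays toward 0, so the learnt value of W barely changes anymore: the sketch shows the stabilizing effect discussed above.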
10. A modeling environment for reified temporal–causal network models
The modeling environment as developed by the author and described in this section covers a modeling format for a type of computational architecture (a reified network architecture), which is implementation-independent (it can be used for simulation by implementations in different ways), and an example implementation of a computational reified temporal–causal network engine in MATLAB.
Section 10.1 introduces the implementation-independent notions of role matrices and a new
specification format for dynamic and adaptive causal network models. This format has a basis
in the implementation-independent mathematical notions of matrix and function. The notion
of role matrix provides a new, alternative, and more compact specification format for causal
networks in contrast to connection matrices that are often used to specify networks at an
implementation-independent level.
The basic elements of the specification format are declarative: connection weights, combination
functions, speed factors, and role matrices grouping them are declarative mathematical objects.
Together these elements assemble in a standard manner a set of first-order difference or differential equations (as shown in Table 1), which are declarative temporal specifications. The model's
behavior is fully determined by these declarative specifications, given some initial values. The
modeling process is strongly supported by using these declarative building blocks. Very complex
adaptive patterns can be modeled easily, and in (temporal) declarative form.
In Section 10.2, the computational reified temporal–causal network engine is described that
has been developed in MATLAB. This is the software that takes the declarative specification of
Section 10.1 and makes it run. The design of this software is structure-preserving in relation to
the mathematical description and can be easily translated into any other software environment
that supports the mathematical notions of function and matrix.
10.1 Specification format for a reified temporal–causal network model
For networks, matrices are often used to describe them in a way that is easily accessible computationally. Moreover, MATLAB (“Matrix Laboratory”) is available as a software environment dedicated to handling matrices. Therefore, the choice was made to use specific types of matrices (called role matrices) to define a specification format for reified network models, and as a well-defined basis for implementation in MATLAB or any other environment supporting matrices. In role matrices, it is specified which other states have impact on a given state (the incoming arrows in Figures 4 and 10), but distinguished according to their role: base or nonbase connections, where for the latter a distinction is made between the roles connection weight reification, speed factor reification, combination function weight reification, and combination function parameter
reification (see also Figure 3). During execution of the model, each of the nonbase roles (indicated by the downward connections in the model pictures) defines a special effect on the target state. The role specification indicates which special effect is exerted by which state. Note that this means that more informative naming of states is not needed to specify such information. For example, the reification state names such as H, W, C, or P as used in descriptions of the models are only meant for human understanding and are not used in execution of the model; instead, in execution, the role matrices define which are the downward causal connections and which of the network characteristics they affect. Here, the state names used are just X1, X2, ..., based on numbering of the states. Role matrices make it possible to apply a structure-preserving implementation. As their first dimension, the matrices have rows according to the states X1, X2, X3, ..., numbered by 1, 2, 3, ...

Note that the matrices used in the reified network model specification format have a compact format and (in contrast to network pictures as in Figure 10) also specify an ordering, which is important as the combination functions used to integrate the impact from multiple connections are not always symmetric.
A first role matrix mb specifies on each row, for a given state, from which states at the same or a lower level that state gets incoming connections (see Box 2). These play the role of the base connectivity. This matrix contains the information depicted in Figures 4 and 10 by upward or leveled arrows, plus for each state a numbering of the incoming base connections (see the 1 to 4 in the top row). For example, the third row indicates that state X3 (= bs_s) has only one incoming base connection, from state X2 (= srs_s). As another example, the fifth row indicates that state X5 (= W_{srs_s,ps_a}) has incoming base connections from X2 (= srs_s), X4 (= ps_a), and X5 (= W_{srs_s,ps_a}) itself, in that order, which is important as the Hebbian combination function hebb_μ(..) used here is not symmetric in its arguments. Note that the second column with more informative state names in each of the matrices depicted in Box 2 is not part of the specification but has just been added for human understanding.
In a similar way, the four types of role matrices for nonbase connectivity (i.e., connectivity from reification states at a higher level of reification: the downward arrows in Figures 4 and 10) were defined (see Box 2): role matrix mcw for connection weights, ms for speed factors, mcfw for combination function weights, and mcfp for combination function parameters. Within each role matrix, a difference is made between cell entries indicating (in red) a reference to the name of another state that, as a form of reification, represents an adaptive network characteristic in a dynamic manner, and entries indicating (in green) fixed values for nonadaptive characteristics. The red cells represent the downward causal connections from the reification states in pictures as shown in Figures 4 and 10, with their specific roles W, H, C, P indicated by the type of role matrix. The type of role matrix in which they are represented actually defines the roles of the reification states, so that there is no need to computationally use information from their names W, H, C, P; they may have any given names.

For example, in Box 2, the name X5 in the red cell at row-column (4, 1) of role matrix mcw indicates that the value of the connection weight from srs_s to ps_a can be found as the value of the fifth state X5. In contrast, the 1 in the green cell (5, 1) of mcw indicates the static value of the connection weight from X2 (= srs_s) to X5 (= W_{srs_s,ps_a}). Similarly, role matrix ms indicates (in red) that X8 represents the adaptive speed factor of X5, and (in green) that the speed factors of all other states have fixed values.
For a given application, a limited fixed sequence of combination functions is specified by mcf = [1 2 3], where the numbers 1, 2, and 3 refer to the numbering in the function library, which currently contains 35 combination functions, the first three being eucl_{n,λ}(..), alogistic_{σ,τ}(..), and hebb_μ(..). In Box 2, the role matrices mcfw and mcfp are shown for combination function weights and parameters, respectively. Here, the matrix mcfp is a 3D matrix, with the first dimension for the states, the second dimension for the two combination function parameters, and the third dimension for the combination functions.
Box 2. Specification in role matrices format for the second-order adaptive reified example network, covering role matrix mb for connected base states (= states of the same or a lower level with outgoing connections to the given state), role matrix mcw for connection weights, role matrix ms for speed factors, role matrix mcfw for combination function weights, and role matrix mcfp (3D) for combination function parameters.

Role matrix mb (base connectivity):
                         1     2     3     4
X1  ss_s                 X1
X2  srs_s                X1
X3  bs_s                 X2
X4  ps_a                 X2    X3
X5  W_{srs_s,ps_a}       X2    X4    X5
X6  T_{srs_s}            X2    X4    X6
X7  T_{ps_a}             X2    X4    X7
X8  H_{W_{srs_s,ps_a}}   X2    X4    X5    X8
X9  M_{W_{srs_s,ps_a}}   X2    X4    X5    X9

Role matrix mcw (connection weights; red entries are state references):
                         1     2     3     4
X1  ss_s                 1
X2  srs_s                1
X3  bs_s                 1
X4  ps_a                 X5    1
X5  W_{srs_s,ps_a}       1     1     1
X6  T_{srs_s}            −0.4  −0.4  1
X7  T_{ps_a}             −0.4  −0.4  1
X8  H_{W_{srs_s,ps_a}}   1     1     −0.4  1
X9  M_{W_{srs_s,ps_a}}   1     1     1     1

Role matrix ms (speed factors):
X1 ss_s: 0.5; X2 srs_s: 0.5; X3 bs_s: 0.2; X4 ps_a: 0.5; X5 W_{srs_s,ps_a}: X8; X6 T_{srs_s}: 0.3; X7 T_{ps_a}: 0.3; X8 H_{W_{srs_s,ps_a}}: 0.5; X9 M_{W_{srs_s,ps_a}}: 0.1

Role matrix mcfw (combination function weights; columns eucl, alogistic, hebb):
X1 ss_s: 1 for eucl; X5 W_{srs_s,ps_a}: 1 for hebb; all other states: 1 for alogistic

Role matrix mcfp (3D; combination function parameters per state):
X1  ss_s                 eucl n = 1, λ = 1
X2  srs_s                alogistic σ = 5, τ = X6
X3  bs_s                 alogistic σ = 5, τ = 0.2
X4  ps_a                 alogistic σ = 5, τ = X7
X5  W_{srs_s,ps_a}       hebb μ = X9
X6  T_{srs_s}            alogistic σ = 5, τ = 0.7
X7  T_{ps_a}             alogistic σ = 5, τ = 0.7
X8  H_{W_{srs_s,ps_a}}   alogistic σ = 5, τ = 1
X9  M_{W_{srs_s,ps_a}}   alogistic σ = 5, τ = 1
10.2 The computational reified temporal–causal network engine developed in MATLAB
The computational reified network engine developed takes a specification in the format as described in Section 10.1 and runs it; for more details, see Treur (2019b). It consists of two parts. First, each role matrix (which can be specified easily as a table in Word or Excel, for example) is copied or read into MATLAB in two variants:

• a values matrix for the static values (the letter v added to the name), taken from the green cells with constant values, and
• an adaptivity matrix for the adaptive values represented by reification states (the letter a added to the name), taken from the red cells with state names, thereby replacing Xi by the index i.
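The split can be mimicked outside MATLAB as well; here is a Python sketch (plain lists with float('nan') for empty cells, and state-name references kept as strings such as 'X5'; the rows mirror the mcw matrix of Box 2 for the first five states only):

```python
# Split one role matrix into an adaptivity matrix (state indices) and a
# values matrix, with NaN in all unused cells.

mcw = [
    [1],        # X1 sss
    [1],        # X2 srss
    [1],        # X3 bss
    ['X5', 1],  # X4 psa: first incoming weight is adaptive, found in X5
    [1, 1, 1],  # X5 W_srss,psa
]

width = max(len(row) for row in mcw)
mcwa = [[float('nan')] * width for _ in mcw]   # adaptive entries
mcwv = [[float('nan')] * width for _ in mcw]   # static values
for i, row in enumerate(mcw):
    for j, cell in enumerate(row):
        if isinstance(cell, str):       # reference such as 'X5'
            mcwa[i][j] = int(cell[1:])  # keep only the state index
        else:
            mcwv[i][j] = cell

print(mcwa[3][0])  # 5: the weight is found in reification state X5
print(mcwv[4])     # [1, 1, 1]
```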
Table 8. Role matrices mcwa and mcwv for the connection weights.

mcwa (adaptive connection weights; entries are state numbers):
                         1     2     3     4
X1  ss_s
X2  srs_s
X3  bs_s
X4  ps_a                 5
X5  W_{srs_s,ps_a}
X6  T_{srs_s}
X7  T_{ps_a}
X8  H_{W_{srs_s,ps_a}}
X9  M_{W_{srs_s,ps_a}}

mcwv (connection weight values):
                         1     2     3     4
X1  ss_s                 1
X2  srs_s                1
X3  bs_s                 1
X4  ps_a                       1
X5  W_{srs_s,ps_a}       1     1     1
X6  T_{srs_s}            −0.4  −0.4  1
X7  T_{ps_a}             −0.4  −0.4  1
X8  H_{W_{srs_s,ps_a}}   1     1     −0.4  1
X9  M_{W_{srs_s,ps_a}}   1     1     1     1
Note that states Xj are represented in MATLAB by their index number j. For example, from mcw two matrices mcwa (adaptive connection weights matrix) and mcwv (connection weight values matrix) are derived in this way (see Table 8). The numbers in mcwa indicate the state numbers of the reification states where the values can be found, and in mcwv, the numbers indicate the static values directly.

Empty cells are filled with NaN (Not a Number) indications. After copying to MATLAB, this results in the matrices in MATLAB representation as shown in Box 3.

Note that the values in the value matrices can differ per scenario addressed; they represent the settings of the simulation, as usual for any type of simulation.
As another example, for the 3D role matrices mcfpa and mcfpv, their representations in MATLAB are as depicted in Boxes 4 and 5.

During a simulation, for each step from k to k + 1 (with step size Δt, in MATLAB denoted by dt), based on the above role matrices, first for each state Xj the right values (either the fixed value, or the adaptive value found in the indicated reification state) are assigned to:

s(j, k)          speed factor for state Xj
b(j, p, k)       value of the pth state with an outgoing base connection to state Xj
cw(j, p, k)      connection weight for the pth state with an outgoing base connection to state Xj
cfw(j, m, k)     weight for the mth combination function for Xj
cfp(j, p, m, k)  the pth parameter value of the mth combination function for Xj
Note that what is called an outgoing base connection to state Xj here is an outgoing connection of a base state to state Xj in the software, as specified in the base matrix mb (Box 2).

Box 3. Role matrices mcwa and mcwv for adaptive connection weights (mcwa) and connection weight values (mcwv) as represented in MATLAB.

Box 4. Role matrix mcfpa for adaptive combination function parameters as represented in MATLAB. This consists of three 2D matrices, each with the first (vertical) dimension for the states and the second (horizontal) dimension for the two parameters, grouped in the vertical direction according to the third dimension for the three combination functions used in this model (eucl(.), alogistic(.), and hebb(.), respectively). Note that here default values are used for parameter values in cases in which they are actually not used.

Box 5. Role matrix mcfpv for combination function parameter values as represented in MATLAB. These are three 2D matrices, each with the first (vertical) dimension for the states and the second (horizontal) dimension for the two parameters, grouped in the vertical direction according to the third dimension for the three combination functions used in this model (eucl(.), alogistic(.), and hebb(.), respectively). Note that here default values are used for parameter values in cases in which they are actually not used.

For the adaptive network characteristics, this part processes the downward connections from the reification states (indicated in the adaptation matrices, as for these adaptive characteristics the values can be found there); for the nonadaptive characteristics, it simply assigns the fixed value taken from the value matrix.
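The logic of this first, value-resolving part can be sketched as follows in Python (a minimal illustration with hypothetical names; the actual engine is implemented in MATLAB and works directly on the role matrices):

```python
def resolve(adaptive_entry, fixed_entry, X, k):
    """Pick the value of one network characteristic at simulation step k.

    adaptive_entry: index of the reification state holding the value,
      or None (NaN in the MATLAB matrices) if the characteristic is static.
    fixed_entry: the static value from the corresponding value matrix.
    X[j][k]: value of state j at step k.
    """
    if adaptive_entry is not None:
        # adaptive characteristic: follow the downward connection and
        # read the current value from the indicated reification state
        return X[adaptive_entry][k]
    # nonadaptive characteristic: just use the fixed value
    return fixed_entry

# Hypothetical example: the connection weight of the first incoming base
# connection of some state is adaptive and represented by reification state 5
X = {5: {0: 0.8}}              # toy state trajectory
cw = resolve(5, None, X, k=0)  # -> 0.8, read from state X5
```

In the engine, this resolution is performed for s(j, k), b(j, p, k), cw(j, p, k), cfw(j, m, k), and cfp(j, p, m, k) in turn, before the actual update step is computed.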
Then, as a second part of the computational reified network engine, for the step from k to k+1, the following is applied for each j; here X(j, k) denotes Xj(t) for t = t(k) = k dt; see Box 6.
Note that functions with multiple groups of arguments here get vector arguments in MATLAB, where groups of arguments become vectors of variable length. For example, the basic combination function bcfi(P1,i, P2,i, W1V1, ..., WkVk) as expressed in Section 3 becomes bcf(i, p, v) in MATLAB, with vectors p = [P1,i, P2,i] for the function parameters and v = [W1V1, ..., WkVk] for the values of the function arguments. This format bcf(i, p, v) is used as the basis of the combination function library developed (currently numbered by i = 1–35); for an overview of this basic combination function library, see Treur (2019b). As can be seen, the structure of the code of this computational reified network engine is quite compact, based on the universal difference equation discussed in Section 3, and closely resembles the formulae in Table 1: a structure-preserving implementation.
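As an illustration of this second part, the universal difference equation can be rendered in Python roughly as follows (a simplified sketch with hypothetical names; the characteristics are taken as constants here, whereas the reified engine re-resolves them at every step k):

```python
def eucl(p, v):
    """Euclidean combination function of order n = p[0], scaling factor p[1]."""
    n, lam = p
    return (sum(x ** n for x in v) / lam) ** (1 / n)

def simulate(n_steps, dt, s, b_idx, cw, cfw, cfp, bcf, X0):
    """Euler simulation based on the universal difference equation
    X(j, k+1) = X(j, k) + s(j) * (aggimpact(j, k) - X(j, k)) * dt."""
    X = [list(X0)]
    for _ in range(n_steps):
        prev, nxt = X[-1], []
        for j in range(len(X0)):
            # impacts cw * V of the states with outgoing base connections to j
            v = [w * prev[i] for w, i in zip(cw[j], b_idx[j])]
            # weighted aggregation over the combination functions used for j
            agg = sum(cfw[j][m] * bcf[m](cfp[j][m], v)
                      for m in range(len(bcf))) / sum(cfw[j])
            nxt.append(prev[j] + s[j] * (agg - prev[j]) * dt)
        X.append(nxt)
    return X

# Two states: X1 a constant source, X2 following X1 with speed factor 0.5
traj = simulate(1, 1.0, s=[0.5, 0.5], b_idx=[[0], [0]],
                cw=[[1.0], [1.0]], cfw=[[1.0], [1.0]],
                cfp=[[[1, 1]], [[1, 1]]], bcf=[eucl], X0=[1.0, 0.0])
# traj[1] == [1.0, 0.5]
```

The loop body is a near-literal transcription of the difference equation, which is what makes such an implementation structure-preserving.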
10.3 On simulating large-scale reified networks
In the above description, the role matrices defining the model are inspected at every simulation step. There is a second option for implementation: separating this work from simulation time in the form of precompiling.

Box 6. Main loop in the developed MATLAB implementation of the Reified Temporal–Causal Network Engine; see also Treur (2019b).

With precompiling, the one universal difference equation in the code shown above is instantiated for each of the states, so it is replaced by n specific difference equations, with n the number of states. The resulting set of difference (or differential) equations can be run in any software environment for differential equation simulation. As quite efficient software environments exist for this, such environments can be used for successful simulation of large-scale reified networks with thousands or even millions of states.
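The precompiling route can be sketched as follows (again a hypothetical Python illustration, here with a simple scaled-sum aggregation): the universal equation is instantiated once into n state-specific right-hand-side functions, so no role-matrix inspection remains inside the time loop, and any standard ODE solver can integrate the result.

```python
def precompile(s, b_idx, cw):
    """Instantiate the universal difference equation into one specific
    derivative function per state, binding that state's characteristics."""
    def make(j):
        sj, idx, w = s[j], b_idx[j], cw[j]   # constants bound at compile time
        def dXj(X):                          # dXj/dt = s_j * (agg - X_j)
            agg = sum(wi * X[i] for wi, i in zip(w, idx))
            return sj * (agg - X[j])
        return dXj
    return [make(j) for j in range(len(s))]

def euler(derivs, X0, dt, steps):
    """Plain Euler integration; an off-the-shelf ODE solver could be used."""
    X = list(X0)
    for _ in range(steps):
        dX = [f(X) for f in derivs]
        X = [x + d * dt for x, d in zip(X, dX)]
    return X

derivs = precompile(s=[0.5, 0.5], b_idx=[[0], [0]], cw=[[1.0], [1.0]])
euler(derivs, [1.0, 0.0], dt=1.0, steps=1)   # -> [1.0, 0.5]
```

Because each dXj closes over its own constants, the time loop does only arithmetic, which is what makes this route attractive for large-scale networks.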
11. Discussion
The multilevel network reification architecture described here has advantages similar to those
found for reification in modeling and programming languages in other areas of AI and Computer
Science, for example, Bowen & Kowalski (1982), Demers & Malenfant (1995), Galton (2006),
Hofstadter (1979), Sterling & Shapiro (1996), Sterling & Beer (1989), Weyhrauch (1980). A reified network makes it possible to model dynamics of the original network by dynamics within the reified network, thus representing an adaptive network by a nonadaptive network. Network reification
provides a unified manner of modeling adaptation principles and allows comparison of such principles across different domains, as has been illustrated in Treur (2018a). In the current paper, it was shown how a multilevel reified network architecture provides a structured and transparent manner to model network adaptation of any order.
This was illustrated first for a first-order adaptation principle based on bonding-by-homophily from social science (Byrne, 1986; McPherson et al., 2001; Pearson et al., 2006; Sharpanskykh & Treur, 2014) represented at the first reification level, and in addition for a second-order adaptation principle describing change of the characteristics "similarity tipping point" and "speed factor" of this first-order adaptation principle. This second-order adaptive network model for bonding
by homophily was not yet compared to empirical data. First-order adaptive network models for
bonding by homophily that can be considered precursors of the current second-order network
model have been compared to empirical data sets in Beukel et al. (2019), Blankendaal et al. (2016),
Boomgaard et al. (2018). In order to compare the new second-order network model described here to empirical data, the second-order adaptation effect most likely will impose further requirements on the data sets that are needed. This is left for a future enterprise. Also, in further social science literature, cases are reported where network adaptation is itself adaptive; for example, in Carley (2002, 2006) the second-order adaptation concept called "inhibiting adaptation" for network organizations is described. For further work, it would be interesting to explore the applicability of the introduced modeling environment for such domains further.
In this paper, the introduced modeling environment for reified temporal–causal networks was
also applied to model plasticity and metaplasticity known from the empirical neuroscientific liter-
ature (see Section 9). Although some specific computational models for metaplasticity have been
put forward with interesting perspectives for artificial neural networks, for example, in Marcano-
Cedeno et al. (2011), Andina et al. (2007), Andina et al. (2009), and Fombellida et al. (2017),
the modeling environment proposed here provides a more general architecture. Application may
extend well beyond the neuro-inspired area (as already shown in Sections 5–7).
The causal modeling area has a long history in AI, for example, Kuipers & Kassirer (1983) and Kuipers (1984). The current paper can be considered a new branch in this causal modeling area. It
adds dynamics to causal models, making them temporal, but the main contribution in the current
paper is that it adds a way to specify (multiorder) adaptivity in causal models, thereby conceptually
using ideas on meta-level architectures that also have a long history in AI, for example, Weyhrauch
(1980), Bowen & Kowalski (1982), Sterling & Beer (1989). So the modeling approach connects
two different areas with a long tradition in AI, thereby strongly extending the applicability of
causal modeling to dynamic and adaptive notions such as plasticity and metaplasticity of any
order, which otherwise are out of reach of causal modeling.
The dedicated modeling environment as described in Section 10 includes a new specification
format for reified networks and comes with a newly implemented dedicated computational reified
network engine, which can simply run such specifications. Moreover, a library of currently 35
combination functions is offered, which can be extended easily. Using this software environment, the development process of a model can focus in a declarative manner on the reified network specification and is therefore quite efficient, while all kinds of complex (higher order) adaptive dynamics are still covered, without the modeler being bothered by implementation details.
This construction can be continued to obtain a network architecture that is adaptive up to any order n. In Section 3, it was discussed to what extent adaptation principles of order 3 or higher are considered useful in the current literature, and a double negative exponential pattern was found for the number of hits in Google Scholar against the order of adaptation. In an nth-order reified network, there still will be network structures, introduced in the last step from n−1 to n, that have no reification within the nth-order reified network. From a theoretical perspective, the construction can be repeated countably infinitely many times, for all natural numbers n; then ω-order reification is obtained, where ω is the ordinal for the natural numbers. This is well defined as a mathematical structure. All network structures in this ω-order reified network are reified within the network itself, so it is closed under reification. Whether such an ω-order construction has a useful application in practice, or can be used to explore theoretical research questions, is still an open question, another subject for future research.
Compared to Treur (2018b), the following elements have been added, which together make up more than 90% extra material:
• Many more references.
• More examples and explanation on combination functions in Section 2.
• A new Section 3 was added to relate the work more to the literature on adaptivity and higher order adaptivity.
• More technical details were added in Section 4.
• In Section 6, two more simulation scenarios were added.
• A new Section 8 was added with a more extensive complexity analysis.
• A new Section 9 was added to describe applicability to plasticity and metaplasticity in cognitive neuroscience.
• A new Section 10 was added to describe the specification format used to design reified network models and the computational reified network engine that was implemented in MATLAB.
Acknowledgments. The author is grateful to the anonymous reviewers for their comments, which led to improvements in the text.
Conflict of interest. The author has nothing to disclose.
References
Abraham, W. C., & Bear, M. F. (1996). Metaplasticity: The plasticity of synaptic plasticity. Trends in Neuroscience, 19(4), 126–130.
Andina, D., Jevtic, A., Marcano, A., & Adame, J. M. B. (2007). Error weighting in artificial neural networks learning interpreted as a metaplasticity model. In Proceedings of IWINAC'07, Part I (pp. 244–252), Lecture notes in computer science. Springer.
Andina, D., Alvarez-Vellisco, A., Jevtic, A., & Fombellida, J. (2009). Artificial metaplasticity can improve artificial neural network learning. Intelligent Automation and Soft Computing, 15(4), 681–694.
Arnold, S., Suzuki, R., & Arita, T. (2015). Selection for representation in higher-order adaptation. Minds and Machines, 25(1), 73–95.
Ashby, W. R. (1960). Design for a brain (2nd ed.). London: Chapman and Hall.
Beukel, S. V. D., Goos, S. H., & Treur, J. (2019). An adaptive temporal-causal network model for social networks based on the homophily and more-becomes-more principle. Neurocomputing, 338, 361–371.
Blankendaal, R., Parinussa, S., & Treur, J. (2016). A temporal-causal modelling approach to integrated contagion and network change in social networks. In Proceedings of the 22nd European Conference on Artificial Intelligence, ECAI'16 (pp. 1388–1396), Frontiers in artificial intelligence and applications, vol. 285. IOS Press.
Boomgaard, G., Lavitt, F., & Treur, J. (2018). Computational analysis of social contagion and homophily based on an adaptive social network model. In O. Koltsova, D. I. Ignatov, & S. Staab (Eds.), Proceedings of the 10th international conference on social informatics, SocInfo'18, vol. 1 (pp. 86–101), Lecture notes in computer science, vol. 11185. Springer.
Bowen, K. A., & Kowalski, R. (1982). Amalgamating language and meta-language in logic programming. In K. Clark & S. Tarnlund (Eds.), Logic programming (pp. 153–172). New York: Academic Press.
Byrne, D. (1986). The attraction hypothesis: Do similar attitudes affect anything? Journal of Personality and Social Psychology, 51(6), 1167–1170.
Carley, K. M. (2002). Inhibiting adaptation. In Proceedings of the 2002 command and control research and technology symposium (pp. 1–10). Monterey, CA: Naval Postgraduate School.
Carley, K. M. (2006). Destabilization of covert networks. Computational & Mathematical Organization Theory, 12, 51–66.
Chandra, N., & Barkai, E. (2018). A non-synaptic mechanism of complex learning: Modulation of intrinsic neuronal excitability. Neurobiology of Learning and Memory, 154, 30–36.
Daimon, K., Arnold, S., Suzuki, R., & Arita, T. (2017). The emergence of executive functions by the evolution of second-order learning. Artificial Life and Robotics, 22, 483–489.
Demers, F. N., & Malenfant, J. (1995). Reflection in logic, functional and object-oriented programming: A short comparative study. In Workshop on reflection and meta-level architecture and their application in AI, IJCAI'95 (pp. 29–38).
Fessler, D. M. T., Clark, J. A., & Clint, E. K. (2015). Evolutionary psychology and evolutionary anthropology. In D. M. Buss (Ed.), The handbook of evolutionary psychology (pp. 1029–1046). New York: Wiley and Sons.
Fessler, D. M. T., Eng, S. J., & Navarrete, C. D. (2005). Elevated disgust sensitivity in the first trimester of pregnancy: Evidence supporting the compensatory prophylaxis hypothesis. Evolution & Human Behavior, 26(4), 344–351.
Fleischman, D. S., & Fessler, D. M. T. (2011). Progesterone's effects on the psychology of disease avoidance: Support for the compensatory behavioral prophylaxis hypothesis. Hormones and Behavior, 59(2), 271–275.
Fombellida, J., Ropero-Pelaez, F. J., & Andina, D. (2017). Koniocortex-like network unsupervised learning surpasses supervised results on the WBCD breast cancer database. In Proceedings of IWINAC'17, Part II (pp. 32–41), Lecture notes in computer science, vol. 10338. Springer.
Galton, A. (2006). Operators vs. arguments: The ins and outs of reification. Synthese, 150, 415–441.
Hebb, D. O. (1949). The organization of behavior: A neuropsychological theory. New York: Wiley and Sons.
Helbing, D., Brockmann, D., Chadefaux, T., Donnay, K., Blanke, U., Woolley-Meza, O., ... Perc, M. (2015). Saving human lives: What complexity science and information systems can contribute. Journal of Statistical Physics, 158, 735–781.
Hofstadter, D. R. (1979). Gödel, Escher, Bach. New York: Basic Books.
Jones, B. C., Perrett, D. I., Little, A. C., Boothroyd, L., Cornwell, R. E., Feinberg, D. R., ... Moore, F. R. (2005). Menstrual cycle, pregnancy and oral contraceptive use alter attraction to apparent health in faces. Proceedings of the Royal Society B, 272, 347–354.
Kuipers, B. J. (1984). Commonsense reasoning about causality: Deriving behavior from structure. Artificial Intelligence, 24, 169–203.
Kuipers, B. J., & Kassirer, J. P. (1983). How to discover a knowledge representation for causal reasoning by studying an expert physician. In Proceedings of the eighth international joint conference on artificial intelligence, IJCAI'83 (pp. 49–56). Los Altos, CA: William Kaufman.
Lovejoy, C. O. (2005). The natural history of human gait and posture. Part 2: Hip and thigh. Gait & Posture, 21(1), 113–124.
Magerl, W., Hansen, N., Treede, R. D., & Klein, T. (2018). The human pain system exhibits higher-order plasticity (metaplasticity). Neurobiology of Learning and Memory, 154, 112–120.
Marcano-Cedeno, A., Marin-De-La-Barcena, A., Jimenez-Trillo, J., Pinuela, J. A., & Andina, D. (2011). Artificial metaplasticity neural network applied to credit scoring. International Journal of Neural Systems, 21(4), 311–317.
McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27, 415–444.
Oh, M. M., Kuo, A. G., Wu, W. W., Sametsky, E. A., & Disterhoft, J. F. (2003). Watermaze learning enhances excitability of CA1 pyramidal neurons. Journal of Neurophysiology, 90(4), 2171–2179.
Parsons, R. G. (2018). Behavioral and neural mechanisms by which prior experience impacts subsequent learning. Neurobiology of Learning and Memory, 154, 22–29.
Pearl, J. (2000). Causality. New York: Cambridge University Press.
Pearson, M., Steglich, C., & Snijders, T. (2006). Homophily and assimilation among sport-active adolescent substance users. Connections, 27(1), 47–63.
Perc, M., & Szolnoki, A. (2010). Coevolutionary games - A mini review. BioSystems, 99, 109–125.
Robinson, B. L., Harper, N. S., & McAlpine, D. (2016). Meta-adaptation in the auditory midbrain under cortical influence. Nature Communications, 7, 13442.
Sehgal, M., Song, C., Ehlers, V. L., & Moyer, J. R., Jr. (2013). Learning to learn - Intrinsic plasticity as a metaplasticity mechanism for memory formation. Neurobiology of Learning and Memory, 105, 186–199.
Schmidt, M. V., Abraham, W. C., Maroun, M., Stork, O., & Richter-Levin, G. (2013). Stress-induced metaplasticity: From synapses to behavior. Neuroscience, 250, 112–120.
Sharpanskykh, A., & Treur, J. (2014). Modelling and analysis of social contagion in dynamic networks. Neurocomputing, 146, 140–150.
Sjöström, P. J., Rancz, E. A., Roth, A., & Hausser, M. (2008). Dendritic excitability and synaptic plasticity. Physiological Reviews, 88, 769–840.
Sterling, L., & Shapiro, E. (1996). The art of Prolog. Chapter 17 (pp. 319–356). Cambridge, MA: MIT Press.
Sterling, L., & Beer, R. (1989). Metainterpreters for expert system construction. Journal of Logic Programming, 6, 163–178.
Treur, J. (2016). Network-oriented modeling: Addressing complexity of cognitive, affective and social interactions. Cham: Springer.
Treur, J. (2018a). Network reification as a unified approach to represent network adaptation principles within a network. In Proceedings of the 7th International Conference on Natural Computing (pp. 344–358), Lecture notes in computer science, vol. 11324. Springer.
Treur, J. (2018b). Multilevel network reification: Representing higher order adaptivity in a network. In Proceedings of the 7th International Conference on Complex Networks and their Applications, ComplexNetworks'18, vol. 1 (pp. 635–651), Studies in computational intelligence, vol. 812. Cham: Springer.
Treur, J. (2019a). The ins and outs of network-oriented modeling: From biological networks and mental networks to social networks and beyond. In N. T. Nguyen (Ed.), Transactions on computational collective intelligence 32 (pp. 120–139). Heidelberg: Springer. Contents of keynote lecture at ICCCI'18.
Treur, J. (2019b). Design of a software architecture for multilevel reified temporal-causal networks. doi:10.13140/RG.2.2.23492.07045. Retrieved from https://www.researchgate.net/publication/333662169.
Weyhrauch, R. W. (1980). Prolegomena to a theory of mechanized formal reasoning. Artificial Intelligence, 13, 133–170.
Zelcer, I., Cohen, H., Richter-Levin, G., Lebiosn, T., Grossberger, T., & Barkai, E. (2006). A cellular correlate of learning-induced metaplasticity in the hippocampus. Cerebral Cortex, 16, 460–468.
Cite this article: Treur J. Modeling higher order adaptivity of a network by multilevel network reification. Network Science
https://doi.org/10.1017/nws.2019.56