Network-Oriented Modeling for Simulation and Analysis of a Dynamic, Adaptive and Evolving World
Abstract
See a video on the YouTube channel on Self-Modeling Networks here: https://www.youtube.com/channel/UCCO3i4_Fwi22cEqL8M_PgeA. This paper describes the contents of a Keynote Speech with the same title. Networks have become a useful concept for modeling a wide variety of processes and phenomena in the world, resulting in, for example, biological networks, neural networks, mental networks, economic networks, and social networks. Networks by themselves have a transparent structure based on nodes and the connections between them, which can be described by declarative mathematical relations and functions, for example, a graph of follow-relationships in Social Media.
However, phenomena in the world are usually not static but dynamic, adaptive or evolving. Dynamics within a network can be described as nodes affecting each other through their connections, for example, social contagion in a social network, propagation of activation in a neural network, or mental causation in a mental network. Adaptive phenomena can be described by adaptation principles for changing network structure characteristics such as connection weights or excitability thresholds, for example, for plasticity of the brain, for connections between mental states that become stronger due to learning, or for connections in a social network that become stronger due to the extent of similarity of two persons.
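As an illustration of such network dynamics, the following is a minimal sketch of the generic update rule commonly used for temporal-causal networks, in which each node moves toward the aggregated impact it receives through its incoming connections. All names, weights and speed factors here are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of network dynamics, assuming the generic temporal-causal
# update rule dY/dt = eta_Y * (c_Y(impacts) - Y), with a normalised
# scaled-sum combination function. Weights and speeds are illustrative.
import numpy as np

def simulate(W, eta, y0, steps=200, dt=0.25):
    """Euler simulation: node i receives impacts W[j, i] * y[j] from its
    incoming connections and moves toward their normalised sum."""
    y = np.array(y0, dtype=float)
    indeg = np.maximum(W.sum(axis=0), 1e-9)   # total incoming weight per node
    for _ in range(steps):
        c = W.T @ y / indeg                    # combination: scaled sum
        y = y + dt * eta * (c - y)             # move toward aggregated impact
    return y

# Three-person social contagion: person 0 starts with a high opinion level,
# and the opinions spread toward a shared equilibrium value.
W = np.array([[0.0, 0.8, 0.6],
              [0.2, 0.0, 0.5],
              [0.1, 0.4, 0.0]])
eta = np.array([0.3, 0.5, 0.5])                # per-node speed factors
print(simulate(W, eta, [1.0, 0.0, 0.0]))
```

Because the network is strongly connected and the combination function is a normalised scaled sum, the three opinion levels converge toward a common value, a simple instance of social contagion.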
This is not the end of the story. Such network adaptation principles can themselves be adaptive as well, which can be modeled by second-order adaptive networks: for example, in neural or mental networks, metaplasticity describes under what circumstances plasticity should accelerate, decelerate, or even be blocked completely; in social networks, criteria describe when the similarity of two persons is strong enough to strengthen their connection. This may lead to multiple orders of adaptation.
The story does not end here either. In evolutionary processes, still more complex phenomena may take place. Given causal pathways developed at some stage of evolution, a next step in the evolution may be the addition of a causal pathway that modifies an earlier developed causal pathway in a dynamic manner. This phenomenon can also be described by networks, in which the behavioural plasticity of individuals can be integrated as well. Such an integrated network model covers not only a biological network model but also a mental network model for the adaptive decision making about the behaviour (for example, the choice for a specific type of food) and the learning of skills for such a choice, which in turn affect the physical evolutionary steps.
Given the landscape sketched above, one can easily fail to see the forest for the trees. Networks themselves may have transparent declarative descriptions as long as they are static, but as soon as dynamics, adaptation (of multiple orders) or evolution are involved, all kinds of procedural, algorithmic, programming-like descriptions are usually added for modeling, so that a less transparent and less declarative hybrid form of model results.
In the current paper, an alternative way of network modeling is described that makes such procedural specifications unnecessary for dynamic, adaptive or evolving networks; instead, more transparent declarative descriptions based on mathematical functions and relations can be used for them as well, in conjunction with a transparent notion of architecture for which a dedicated modeling environment has been implemented.
The influence of acute severe stress or extreme emotion is addressed here based on a Network-Oriented Modeling methodology. Adaptive temporal-causal network modeling is an approach to address phenomena whose complexity cannot, or can hardly, be explained in a real-world experiment. In the first phase, the suppression of existing network connections as a consequence of acute stress is modeled; in the second phase, the suppression is relaxed after some time, and new learning of the decision making in the presence of stress starts again.
Decision making plays an important role in many situations. This applies both to individual decision making and collective decision making, although the scope of the consequences of a decision may be quite different in the two cases. In the collective case, a worldwide scale can be reached, whereas in an individual situation the scale is often limited to personal life. Nevertheless, collective decisions, too, usually have a basis in decisions of individuals within a population. In this paper, it is discussed how in certain cases individual decisions can indeed lead to collective decisions with a worldwide scope of consequences. Two mechanisms for this are considered in particular: influencer-driven social contagion within social networks and plasticity-driven evolution within biological populations.
For related videos, see the YouTube channel on Self-Modeling Networks here: https://www.youtube.com/channel/UCCO3i4_Fwi22cEqL8M_PgeA. In network models for real-world domains, network adaptation often has to be addressed by incorporating certain network adaptation principles. In some cases, higher-order adaptation also occurs: the adaptation principles themselves also change over time. To model such multilevel adaptation processes, it is useful to have some generic architecture. Such an architecture should describe and distinguish the dynamics within the network (base level), the dynamics of the network itself by certain adaptation principles (first-order adaptation level), the adaptation of these adaptation principles (second-order adaptation level), and maybe still more levels of higher-order adaptation. This paper introduces a multilevel network architecture for this, based on the notion of network reification. Reification of a network occurs when a base network is extended by adding explicit states representing the characteristics of the structure of the base network. It will be shown how this construction can be used to explicitly represent network adaptation principles within a network. When the reified network is itself also reified, second-order adaptation principles can also be explicitly represented. The multilevel network reification construction introduced here is illustrated for an adaptive adaptation principle from Social Science for bonding based on homophily. This first-order adaptation principle describes how connections change, whereas the principle itself changes over time by a second-order adaptation principle. As a second illustration, it is shown how plasticity and metaplasticity from Cognitive Neuroscience can be modeled.
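The reification idea can be sketched in a few lines: a connection weight becomes an explicit first-order state governed by a Hebbian-style adaptation principle, and the learning rate of that principle becomes a second-order state (metaplasticity). Everything below is an illustrative toy, with made-up parameters, and is not the paper's dedicated software environment:

```python
# Illustrative sketch of network reification: the weight w of a connection
# X -> Y is an explicit first-order state, and the Hebbian learning rate mu
# is a second-order state (metaplasticity). Parameters are assumptions.
def step(x, y, w, mu, dt=0.1):
    # Base level: Y is driven by the impact w * x.
    y += dt * 1.0 * (w * x - y)
    # First-order level: Hebbian adaptation with a small persistence term.
    w += dt * mu * (x * y * (1 - w) - 0.01 * w)
    # Second-order level: sustained co-activity accelerates learning,
    # low activity suppresses it.
    mu += dt * 0.5 * (x * y - mu)
    return y, w, mu

# With a persistently active source (x = 1), the weight and the learning
# rate both grow: the adaptation itself adapts.
y, w, mu = 0.0, 0.2, 0.1
for _ in range(300):
    y, w, mu = step(1.0, y, w, mu)
print(w, mu)
```

The point of the construction is that all three levels are ordinary network states updated by the same kind of declarative rule; no separate procedural adaptation code is needed.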
In this paper it is addressed how network structure can be related to asymptotic network behaviour. Where such a relation has been studied, it usually concerns only strongly connected networks and only linear functions describing the dynamics. In this paper both conditions are generalised. A couple of general theorems are presented that relate the asymptotic behaviour of a network to the network's structure characteristics. On the one hand, the network structure characteristics concern the network's strongly connected components and their mutual connections; this generalises the condition of being strongly connected to a very general condition. On the other hand, the network structure characteristics considered generalise from linear functions to functions that are normalised, monotonic and scalar-free, so that many nonlinear functions are also covered. Thus the contributed theorems generalise existing theorems on the relation between network structure and asymptotic network behaviour that only address specific cases such as acyclic networks, fully and strongly connected networks, or only linear functions.
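The scalar-free condition mentioned above means c(aV1, ..., aVk) = a·c(V1, ..., Vk). A quick numeric illustration (not the paper's proof; function and values are assumed for the example) checks this for a Euclidean combination function, a nonlinear function of the kind such theorems cover:

```python
# Numeric check of the scalar-free property c(a*V) = a*c(V) for a
# Euclidean combination function; weights and values are illustrative.
def euclidean(vs, weights, n=2, lam=None):
    """Normalised Euclidean combination: (sum w_i v_i^n / lam)^(1/n)."""
    lam = lam or sum(weights)                 # normalising factor
    return (sum(w * v**n for w, v in zip(weights, vs)) / lam) ** (1 / n)

vs, ws, a = [0.3, 0.6, 0.9], [1.0, 2.0, 0.5], 0.5
lhs = euclidean([a * v for v in vs], ws)      # scale inputs first
rhs = a * euclidean(vs, ws)                   # scale output instead
print(abs(lhs - rhs) < 1e-12)                 # scalar-free: True
print(euclidean([1.0, 1.0, 1.0], ws))         # normalised: 1.0
```

The function is also monotonic in each argument, so it satisfies all three conditions of the generalised theorems despite being nonlinear for n > 1.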
In this paper, a fourth-order adaptive agent model based on a multilevel reified network model is introduced to describe different orders of adaptivity of the agent's biological embodiment, as found in a case study on evolutionary processes. The adaptive agent model describes how the causal pathways for newly developed features in this case study affect the causal pathways of already existing features, which makes the pathways of these new features one order of adaptivity higher than the existing ones, as they adapt a previous adaptation. A network reification approach is shown to be an adequate means to model this in a transparent manner.
Videos of lectures on several chapters of this book can be found at: https://www.youtube.com/playlist?list=PLtJH8O7BvdydRVu9RXuhdtAo2S2wMPtgp. For more applications, see the Self-Modeling Networks channel at https://www.youtube.com/@self-modelingnetworks4255. This book addresses the challenging topic of modeling (multi-order) adaptive dynamical systems, which often have inherently complex behaviour. This is addressed by using their network representations. Networks by themselves can usually be modeled using a neat, declarative and conceptually transparent Network-Oriented Modeling approach. For adaptive networks, which change the network's structure, it is different: often separate procedural specifications are added for the adaptation process. This leaves you with a less transparent, hybrid specification, part of which is often more at a programming level than at a modeling level. This book presents an overall Network-Oriented Modeling approach by which designing adaptive network models becomes much easier, as the adaptation processes are also modeled in a neat, declarative and conceptually transparent network-oriented manner, like the base network itself. Due to this dedicated overall Network-Oriented Modeling approach, no procedural, algorithmic or programming skills are needed to design complex adaptive network models.
A dedicated software environment is available to run these adaptive network models from their high-level specifications. Moreover, as adaptive networks are described in a network format as well, the approach can simply be applied iteratively, so that higher-order adaptive networks in which network adaptation itself is adaptive too, can be modeled just as easily; for example, this can be applied to model metaplasticity from Cognitive Neuroscience. The usefulness of this approach is illustrated in the book by many examples of complex (higher-order) adaptive network models for a wide variety of biological, mental and social processes.
The book has been written with multidisciplinary Master and Ph.D. students in mind without assuming much prior knowledge, although some elementary mathematical analysis is not completely avoided. The detailed presentation makes it usable as an introduction to Network-Oriented Modeling for adaptive networks. Some overlap between chapters occurs in order to make each chapter easier to read separately. In each of the chapters, in the Discussion section, specific publications and authors are indicated that relate to the material presented in the chapter. The specific mathematical details concerning difference and differential equations have been concentrated in Chapters 10 to 15 in Part IV and Part V, which can easily be skipped if desired. For a modeler who just wants to use this modeling approach, Chapters 1 to 9 provide a good introduction.
The material in this book is being used in teaching undergraduate and graduate students with a multidisciplinary background or interest. Lecturers can contact me for additional material such as slides, assignments, and software. Videos of lectures for many of the chapters can be found at https://www.youtube.com/watch?v=8Nqp_dEIipU&list=PLF-Ldc28P1zUjk49iRnXYk4R-Jm4lkv2b.
Neural adaptation is central to sensation. Neurons in auditory midbrain, for example, rapidly adapt their firing rates to enhance coding precision of common sound intensities. However, it remains unknown whether this adaptation is fixed, or dynamic and dependent on experience. Here, using guinea pigs as animal models, we report that adaptation accelerates when an environment is re-encountered—in response to a sound environment that repeatedly switches between quiet and loud, midbrain neurons accrue experience to find an efficient code more rapidly. This phenomenon, which we term meta-adaptation, suggests a top–down influence on the midbrain. To test this, we inactivate auditory cortex and find acceleration of adaptation with experience is attenuated, indicating a role for cortex—and its little-understood projections to the midbrain—in modulating meta-adaptation. Given the prevalence of adaptation across organisms and senses, meta-adaptation might be similarly common, with extensive implications for understanding how neurons encode the rapidly changing environments of the real world.
The human pain system can be bidirectionally modulated by high-frequency (HFS; 100 Hz) and low-frequency (LFS; 1 Hz) electrical stimulation of nociceptors leading to long-term potentiation or depression of pain perception (pain-LTP or pain-LTD). Here we show that priming a test site by very low-frequency stimulation (VLFS; 0.05 Hz) prevented pain-LTP probably by elevating the threshold (set point) for pain-LTP induction. Conversely, prior HFS-induced pain-LTP was substantially reversed by subsequent VLFS, suggesting that preceding HFS had primed the human nociceptive system for pain-LTD induction by VLFS. In contrast, the pain elicited by the pain-LTP-precipitating conditioning HFS stimulation remained unaffected. In aggregate these experiments demonstrate that the human pain system expresses two forms of higher-order plasticity (metaplasticity) acting in either direction along the pain-LTD to pain-LTP continuum with similar shifts in thresholds for LTD and LTP as in synaptic plasticity, indicating intriguing new mechanisms for the prevention of pain memory and the erasure of hyperalgesia related to an already established pain memory trace. There were no apparent gender differences in either pain-LTP or metaplasticity of pain-LTP. However, individual subjects appeared to present with an individual balance of pain-LTD to pain-LTP (a pain plasticity "fingerprint").
Training rats in a particularly difficult olfactory discrimination task initiates a period of accelerated learning of other odors, manifested as a dramatic increase in the rats' capacity to acquire memories for new odors once they have learned the first discrimination task, implying that rule learning has taken place.
At the cellular level, pyramidal neurons in the piriform cortex, hippocampus and basolateral amygdala of olfactory-discrimination trained rats show enhanced intrinsic neuronal excitability that lasts for several days after rule learning. Such enhanced intrinsic excitability is mediated by a long-term reduction in the post-burst after-hyperpolarization (AHP), which is generated by repetitive spike firing and is maintained by persistent activation of key second messenger systems. Much like late-LTP, the induction of long-term modulation of intrinsic excitability is protein synthesis dependent. Learning-induced modulation of intrinsic excitability can be bi-directional, depending on the valence of the outcome of the learned task.
In this review we describe the physiological and molecular mechanisms underlying the rule learning-induced long-term enhancement in neuronal excitability and discuss the functional significance of such a widespread modulation of the neurons' ability to sustain repetitive spike generation.
Memory is often thought about in terms of its ability to recollect and store information about the past, but its function likely rests with the fact that it permits adaptation to ongoing and future experience. Thus, the brain circuitry that encodes memory must act as if stored information is likely to be modified by subsequent experience. Considerable progress has been made in identifying the behavioral and neural mechanisms supporting the acquisition and consolidation of memories, but this knowledge comes largely from studies in laboratory animals in which the training experience is presented in isolation from prior experimentally-controlled events. Given that memories are unlikely to be formed upon a clean slate, there is a clear need to understand how learning occurs upon the background of prior experience. This article reviews recent studies from an emerging body of work on metaplasticity, memory allocation, and synaptic tagging and capture, all of which demonstrate that prior experience can have a profound effect on subsequent learning. Special attention will be given to discussion of the neural mechanisms that allow past experience to affect future learning and to the time course by which past learning events can alter subsequent learning. Finally, consideration will be given to the possible significance of a non-synaptic component of the memory trace, which in some cases is likely responsible for the priming of subsequent learning and may be involved in the recovery from amnestic treatments in which the synaptic mechanisms of memory have been impaired.