Efficient simulation of non-Markovian dynamics on complex networks

Gerrit Großmann1*, Luca Bortolussi1,2, Verena Wolf1

1 Saarland Informatics Campus, Saarland University, Saarbrücken, Germany
2 Department of Mathematics and Geosciences, University of Trieste, Trieste, Italy

Abstract
We study continuous-time multi-agent models, where agents interact according to a
network topology. At any point in time, each agent occupies a specific local node state.
Agents change their state at random through interactions with neighboring agents. The
time until a transition happens can follow an arbitrary probability density.
Stochastic (Monte-Carlo) simulations are often the preferred—sometimes the only
feasible—approach to study the complex emerging dynamical patterns of such systems.
However, each simulation run comes with high computational costs mostly due to
updating the instantaneous rates of interconnected agents after each transition.
This work proposes a stochastic rejection-based, event-driven simulation algorithm
that scales extremely well with the size and connectivity of the underlying contact
network and produces statistically correct samples. We demonstrate the effectiveness of
our method on different information spreading models.
Author summary
Epidemic spreading, diffusion of opinions and memes in social networks, neuron activity,
spreading of malware, blackouts of financial institutions are only some examples of
non-linear dynamical processes on complex networks whose understanding is very
important for our society. The evolution of these processes and the emergence of
behavioral patterns is strongly influenced by the topology of the underlying network.
The most widespread modeling approach for the analysis of such systems are
agent-based models in which agents correspond to nodes that interact with their
neighbors according to an underlying network topology (e.g. the connections in the
social network for rumor spreading). In these models, agents have a finite number of
internal states, the internal state changes over time due to interaction with nearby
nodes. These changes happen after some random time delays, which can in principle
follow arbitrary probability distributions, depending on the scenario considered. In the
case that only exponentially distributed delays are considered, the underlying stochastic
process has the Markov property. However, in many applications the distribution of the
delays is different from exponential. Such non-Markovian processes are usually very
challenging to analyze with the only feasible way often being stochastic simulations.
Stochastic simulation algorithms generate samples of the process, in this case
statistically correct trajectories of the dynamics over time. However, the simulation of
non-Markovian models can be computationally expensive. In particular, the analysis of
large networks can become infeasible.
October 30, 2020 1/20
In this paper, we propose a novel stochastic simulation algorithm based on the idea
of rejection sampling. We show that our algorithm is more efficient than other existing
methods, particularly for what concerns its scalability with respect to the network size.
We investigate and compare its performance and effectiveness on several models,
including epidemic spreading, a voting model, and a neuron spiking.
Introduction
Networks provide a general language for the representation of interconnected systems. Computational modeling of stochastic dynamical processes happening on top of network topologies is a thriving research area [1–3]. Here, we consider continuous-time spreading dynamics on networks. That is, at each point in time all agents (i.e. nodes) occupy a specific local state (resp. compartment, internal state, or node state). The node states change over time but the underlying contact network, which specifies the connectivity, remains the same.
The most common framework for such processes is the Susceptible-Infected-Susceptible (SIS) model and its many variants [4–6]. In the SIS model, agents are either susceptible (healthy) or infected. Infected agents can recover (become susceptible again) or infect neighboring susceptible agents. SIS-type diffusion models have proven to be useful for the analysis, prediction, and reconstruction of opinion- and meme-spreading in online social networks [7, 8] as well as for the propagation of neural activity [9, 10], the spread of malware [11], and blackouts in financial institutions [12, 13].
Agents change their state either by interacting with another agent (e.g., they become infected) or spontaneously and independently from their neighbors (e.g., when they recover). We call the state change of an agent an event. Previous work focused primarily on so-called Markovian models, in which the probability of an agent changing its state in the next infinitesimal time unit is constant (that is, independent of the time the agent has already spent in its current state). We call these agents memoryless, because they do not "remember" how much time they have already spent in their internal state.
As a consequence of the memoryless property, the time until an agent changes its local state follows an exponential distribution. The exponential distribution is parameterized by some rate λ ∈ ℝ≥0. This rate can vary for different types of events (recovery, infection, etc.) and depend on the direct neighborhood.
It has long been known that it is unrealistic to assume exponentially distributed inter-event times in many real-world scenarios. As empirical results show, this holds for instance for the spread of epidemics [14–18], opinions in online social networks [19, 20], and neural spike trains [21]. Assuming inter-event times that can follow arbitrary distributions complicates the analysis of such processes. Often Monte-Carlo simulations are the only feasible way to investigate the emerging dynamics, but even these suffer from high computational costs. Specifically, they often scale badly with the size of the contact networks.
Recently, Masuda and Rocha introduced the Laplace-Gillespie algorithm (LGA) for the simulation of non-Markovian dynamics on networks in [22]. The method is based on an earlier approach, the non-Markovian Gillespie algorithm (nMGA) by Boguñá et al. [23]. Masuda and Rocha aim at minimizing the computational burden of sampling inter-event times. However, both methods, nMGA and LGA, require a computationally expensive update of an agent's neighborhood in each simulation step. We explain both methods in more detail later. For Markovian spreading models, rejection-based simulation was recently successfully applied to overcome these limitations [24–26].
Contribution. This work is an extension of [26] in terms of theoretical analysis and experimental evaluation of our method. Specifically, we provide an additional case study, add a correctness and runtime analysis, and investigate the limitations of our method. Moreover, we provide additional examples of models and of commonly used inter-event time densities. We also compare our method with an additional baseline.
Generally speaking, this work extends the idea of rejection-based simulation to networked systems that admit non-Markovian behavior. We propose RED, a rejection-based, event-driven simulation approach. RED is based on three main ideas:
1. We express the distributions of inter-event times as time-varying instantaneous rates (referred to as intensity or rate functions).
2. We sample inter-event times based on an over-approximation of the intensity function, which we counter-balance by using a rejection step.
3. We utilize a priority (resp. event) queue to decide which agent fires next.
The combination of these ideas allows us to reduce the computational costs of each simulation step. More precisely, if an agent transitions from one local state to another one, no update of neighboring agents is required, even though their instantaneous rates might change as a result of the event. In short, the reason that it is not necessary to update an agent if its neighborhood changes is that (by using the rate over-approximation) we always assume the "worst-case" behavior of an agent. If a neighboring agent is updated, the (actual) instantaneous rate of an agent might change, but it will never exceed the rate over-approximation, which was used to sample the firing time. Hence, the sampled firing time is always an under-approximation of the true one, regardless of what happens to adjacent nodes.
Naturally, this comes with a cost, in our case rejection (or null) events. Rejection events counter-balance the over-approximation of the instantaneous rate. The larger the difference between the actual rate and the over-approximated rate, the more rejection events will happen. Rejection events and the over-approximated rates complement each other and, in combination, yield a statistically correct (i.e. exact) algorithm. Utilizing a priority queue to order prospective events renders the computational cost of each rejection step extremely small. We provide numerical results showing the efficiency of our method. In particular, we investigate how the runtime of our method scales with the size and the connectivity of the contact network.
Multi-agent model
Here we introduce our formalism for agent-based dynamics on complex networks. Our goal is to have a framework that is as expressive as possible while remaining intuitive. In particular, we make the following assumptions:
- The state transitions of an agent can depend on its whole (direct) neighborhood in a non-linear way.
- The time delay until an agent changes its state and the choice of a successor state can follow an arbitrary probability distribution whose parameterization depends on the agent's neighborhood.
- The number of local states can be arbitrarily high so that they are expressive enough to encode all information of interest.
- Individual nodes and edges may carry specific information that the model may take into account (e.g., some agents might be more reactive than others or connections might vary in strength).
With the above assumptions, it is possible to describe a wide range of epidemic-type applications (e.g., SIS-type, threshold, and voter models) as well as inter-event times following arbitrary distributions. We will also ensure that the framework is easily adaptable (e.g. to directed or temporal networks).
Next, we specify how, at any given point in time, a (global) network state is defined. After that, we explain how the dynamics can be formalized, that is, how agents change under the influence of two functions: φ, for choosing the time of an agent's state change, and ψ, for choosing a local successor state. Note that, instead of explicitly using a probability density function (PDF) to encode the firing times of agents, we formulate so-called intensity functions. They have the same expressiveness but are more intuitive to use and easier to parametrize on the neighborhood of an agent. An intensity function determines how likely an agent is to fire in the following infinitesimal time interval. We will discuss intensity functions in detail in a dedicated section below.
Let G = (V, E) be an undirected finite graph called the contact network, specifying the connectivity of the system. We assume that G is strongly connected, that is, all nodes are reachable from all other nodes. Each node v ∈ V is an agent.

Network state. At any given time point, the current (global) state of the network is described by two functions:
- S : V → S assigns to each agent v a local state S(v) ∈ S, where S is a finite set of local states (e.g., S = {S, I} for the SIS model);
- R : V → ℝ≥0 describes the residence time of each agent (the time elapsed since the agent changed its local state the last time).
We say an agent fires when it transitions from one local state to another. The time between two firings of an agent is denoted as inter-event time. Moreover, we refer to the remaining time until it fires as its time delay. The firing time depends on the direct neighborhood of an agent. At any point in time, the neighborhood state M(v) of an agent v is a set of triplets containing the local states and residence times of all neighboring agents and the agents themselves:

M(v) = { (S(v′), R(v′), v′) | (v, v′) ∈ E }.

We use M to denote the set of all possible neighborhood states in a given model.
Network dynamics. Next, we specify how the network state evolves over time. Therefore, we assign to each agent v ∈ V two functions φv and ψv. φv governs when v fires and ψv defines its successor state. Both functions depend on the current state (first parameter), the residence time of v (second parameter), and the neighborhood (third parameter).
- φv : S × ℝ≥0 × M → ℝ≥0 defines the instantaneous rate of v. If λ = φv(S(v), R(v), M(v)), then the probability that v fires in the next infinitesimal time interval ∆t is λ∆t (assuming ∆t → 0);
- ψv : S × ℝ≥0 × M → P(S) determines the successor state when a transition occurs. More precisely, it determines for each local state the probability to be the successor state. Here, P(S) denotes the set of all probability distributions over S. Hence, if p = ψv(S(v), R(v), M(v)), then, assuming agent v fires, the subsequent local state of v is s ∈ S with probability p(s).
Note that we assume that these functions have no pathological behavior. That is, we exclude the cases in which φv is defined so that it is not integrable or where some intensity function φv would cause an infinite number of simulation steps in finite time (see Examples).
A multi-agent network model is fully specified by a tuple (G, S, {φv}, {ψv}, S0), where S0 denotes a function that assigns to each agent its initial local state.
Examples
Standard Markovian SIS model. Consider the classical SIS model with S = {S, I}. φv and ψv are the same for all agents:

φv(s, t, m) = { cr                              if s = I
              { ci · Σ_{(s′,t′,v′)∈m} 1I(s′)    if s = S

ψv(s, t, m) = { S   if s = I
              { I   if s = S

Here, ci, cr ∈ ℝ≥0 are the infection and recovery rate constants. The infection rate is proportional to the number of infected neighbors while the recovery rate is independent of the neighborhood. Moreover, 1s : S → {0, 1} is such that 1s(s′) is one if s = s′ and zero otherwise. The model is Markovian (w.r.t. all local states) as neither φv nor ψv depends on the residence time of any agent. As in most binary-state models, ψv is deterministic in the sense that an agent in state I always transitions to S with probability one and vice versa.
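To make the definitions concrete, the two functions of this SIS example can be written down directly; a minimal sketch, where the constants `C_I` and `C_R` and the triplet encoding of neighborhoods are illustrative assumptions:

```python
C_I, C_R = 0.6, 0.8  # assumed infection/recovery rate constants c_i, c_r

def phi(s, t, m):
    """Instantaneous rate of an agent in local state s with residence time t;
    m is the neighborhood: a set of (state, residence_time, node) triplets."""
    if s == "I":
        return C_R  # recovery is independent of the neighborhood
    # infection rate is proportional to the number of infected neighbors
    return C_I * sum(1 for (s2, t2, v2) in m if s2 == "I")

def psi(s, t, m):
    """Successor-state distribution; deterministic for the binary SIS model."""
    return {"S": 1.0} if s == "I" else {"I": 1.0}
```

Note that neither function reads t or the neighbors' residence times, which is precisely why this model is Markovian.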
Complex cascade model. Consider a modification of the independent cascade model [27] with local states susceptible (S), infected (I), and immune/removed (R). Infected nodes try to infect their susceptible neighbors. The infection attempts can be successful (turning the neighbor from S to I) or unsuccessful (turning the neighbor from S to R). Agents that are infected or immune remain in these states.
This model can be used to describe the propagation of some type of information on social media. Infected agents can be seen as having shared (e.g. re-tweeted) the information. A user exposed to the information decides right away if she considers it to be worth sharing. Seeing the information multiple times or from multiple sources does not increase the chances of sharing it (i.e. becoming infected). However, multiple infected neighbors might decrease the time until the information is perceived by an agent. Again, φv and ψv are the same for all agents. We define the instantaneous rate

φv(s, t, m) = { e^(−t_dist)   if s = S and Σ_{(s′,t′,v′)∈m} 1I(s′) > 0
              { 0             otherwise.

Here, t_dist denotes the time elapsed since the latest infected neighbor became infected. Thus, the intensity at which infected agents "attack" their neighbors decreases exponentially and only the most recently infected neighbor counts. Moreover, the next internal state of an agent is selected according to the distribution:

ψv(s, t, m) = {I ↦ pi, R ↦ 1 − pi, S ↦ 0},
where pi ∈ [0, 1] denotes the infection probability. This example is both non-Markovian, because the residence times of the neighbors influence the rate, and non-linear, because the individual effects from neighboring agents do not simply add up.
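Under the same triplet encoding of neighborhoods used above, the cascade model's rate and successor functions might look as follows (a sketch; the value of `P_I` and the encoding are assumptions for illustration):

```python
import math

P_I = 0.4  # assumed infection probability p_i

def phi(s, t, m):
    """Intensity e^(-t_dist) for susceptible agents with at least one
    infected neighbor; t_dist is the residence time of the most recently
    infected neighbor (the smallest residence time among infected ones)."""
    infected = [t2 for (s2, t2, v2) in m if s2 == "I"]
    if s == "S" and infected:
        return math.exp(-min(infected))
    return 0.0

def psi(s, t, m):
    # infection attempt: successful with probability p_i, else immunization
    return {"I": P_I, "R": 1.0 - P_I, "S": 0.0}
```

Taking the minimum over the infected neighbors' residence times implements "only the most recently infected neighbor counts".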
Fig 1. (a–c) Sampling event times with an intensity function λ(t) = 1_{t∈[1,2]} / (2 − t). (a) Generate a random variate from the exponential distribution with rate λ = 1; the sample here is 0.69. (b) We integrate the intensity function until the area is 0.69, here tv = 1.5. (c) This is the intensity function corresponding to the uniform distribution with density γ(t) = 1_{t∈[1,2]}. (d) Rejection sampling example: sampling tv from a time-varying intensity function λ(t) = sin²(2t) using an upper bound of c = 1. Two iterations are shown with rejection probabilities shown in red. After one rejection step, the method accepts in the second iteration and returns tv = 1.3.
Pathological behavior. Assume two connected agents. Agent A always stays in state S. Agent B switches between states I1 and I2. The frequency at which B alternates increases with the residence time of A (denoted by R(A)). Let the rate to jump from I1 to I2 (and vice versa) be 1/(1 − R(A)) for R(A) < 1. Assume that we want to perform a simulation for the time interval [0, 1]. It is easy to see that the instantaneous rate of B approaches infinity and the number of simulation steps (state transitions) will not converge. Generally speaking, pathological behavior may occur if φv(s, t, m) approaches infinity with growing R(v′) (within the simulation time), where v′ is a neighbor of v. However, it is allowed that φv(s, t, m) approaches infinity with growing t (= R(v)). Then no pathological behavior can occur because of the reset of R(v) with each state transition of v.
Semantics
We specify the semantics of a multi-agent model in a generative manner. That is, we describe a stochastic simulation algorithm that generates trajectories of the system. Recall that the network state is specified by the mappings S and R. Let t_global denote the global simulation time.
The simulation is based on a race condition among all agents: each agent picks a random firing time, but only the one with the shortest time delay wins and actually fires. Because each agent only generates a potential/proposed time delay, we might refer to this sampled value as a time delay candidate. The algorithm starts by initializing the global clock t_global = 0 and setting R(v) = 0 for all v ∈ V. The algorithm performs simulation steps until a predefined time horizon is reached. Each simulation step contains the following sub-steps:
1. Generate a random time delay candidate tv for each agent v. Identify the agent v0 with the smallest time delay tv0.
2. Select the successor state s′ for v0 using ψv0(S(v0), R(v0) + tv0, M(v0)) and set S(v0) = s′. Set R(v0) = 0 and R(v) = R(v) + tv0 for all v ≠ v0.
3. Set t_global = t_global + tv0 and go to Step 1.
While intuitive, this simulation approach is very inefficient. Our approach, RED, will be statistically equivalent while being much faster.
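The three sub-steps above can be sketched as a straightforward (and deliberately naive) loop; `sample_delay` and `choose_successor` stand in for the φ- and ψ-driven sampling and are assumptions of this sketch:

```python
def simulate(agents, sample_delay, choose_successor, horizon, initial_state):
    """Naive race-based semantics: in every step each agent proposes a time
    delay candidate; the agent with the smallest delay wins and fires."""
    S = dict(initial_state)              # local state of each agent
    R = {v: 0.0 for v in agents}         # residence times
    t_global = 0.0
    trajectory = [(0.0, dict(S))]
    while True:
        # 1. every agent proposes a delay candidate; the smallest wins
        delays = {v: sample_delay(v, S, R) for v in agents}
        v0 = min(delays, key=delays.get)
        dt = delays[v0]
        if t_global + dt > horizon:
            break
        # 2. fire v0: choose a successor state, update residence times
        S[v0] = choose_successor(v0, S, R, dt)
        for v in agents:
            R[v] += dt
        R[v0] = 0.0
        # 3. advance the global clock and repeat
        t_global += dt
        trajectory.append((t_global, dict(S)))
    return trajectory
```

Every step costs at least O(|V|) delay samples, which is exactly the inefficiency that RED avoids.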
Generating time delays
Recall that, in this manuscript, we encode inter-event time distributions using intensity functions. The intensity function of agent v is used to generate time delay candidates which then compete in a race condition (the shortest time delay "wins"). The relationship between time delays and intensities is further discussed in the next section.
There are several ways to generate a time delay candidate for an agent v. In one way or another, we have to sample from an exponential distribution with a time-varying rate parameter. In principle, there are many different possible methods for this. For an overview, we refer to [28–31].
An obvious way is to turn the intensity function induced by φv into a PDF (cf. Fig 1) and sample from it using inverse transform sampling. A more direct way is to perform numerical integration on φv, assuming the neighborhood of v stays the same. Let us, therefore, define for each v the effective rate λv(·), which is the evolution of the intensity starting from the current time point, assuming no changes in the neighboring agents:

λv(t) = φv(S(v), R(v) + t, Mt(v)),   where   Mt(v) = { (S(v′), R(v′) + t, v′) | (v, v′) ∈ E }.

Here, t denotes the time increase of the algorithm.
The effective rate makes it possible to sample the time delay tv after which agent v fires (if it wins the race), using the inverse transform method. First, we sample an exponentially distributed random variate x with rate 1, then we integrate λv(·) to find tv. Formally, tv is chosen such that the equation

∫₀^{tv} λv(t) dt = x    (1)

is satisfied. The idea is the following: we first sample a random variate x assuming a fixed rate (intensity function) of 1. The corresponding density is exp(−x), leading to P(X > x) = exp(−x) (sometimes referred to as the survival function). Next, we consider the "real" time-varying intensity function λv(·) and choose [0, tv] such that the area under the time-varying intensity function is equal to x (cf. Eq. (1)). Hence,

P(X > x) = exp(−x) = exp(−∫₀^{tv} λv(t) dt) = P(Y > tv),

and tv is thus distributed according to the time-varying rate λv(·). Intuitively, by sampling the integral, we a priori define the number of infinitesimal time steps we take until the agent will eventually fire. This number naturally depends on the rate function. If the rate decreases, more steps will be taken. We refer the reader to [29] for a proof.
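A minimal numerical version of this inverse transform might look as follows (a sketch; the fixed step size `dt` and the left Riemann sum are crude assumptions, not the integration scheme used in the paper):

```python
import math
import random

def sample_delay(intensity, dt=1e-3, t_max=1e6):
    """Solve the integral equation (1): draw x ~ Exp(1) and integrate the
    effective rate forward in time until the accumulated area reaches x."""
    x = random.expovariate(1.0)          # reference area
    area, t = 0.0, 0.0
    while area < x:
        if t > t_max:
            return math.inf              # finite integral: may never fire
        area += intensity(t) * dt        # left Riemann sum
        t += dt
    return t

# sanity check: a constant intensity λ(t) = 2 must reproduce Exp(2),
# whose mean delay is 1/2
random.seed(0)
mean = sum(sample_delay(lambda t: 2.0) for _ in range(1000)) / 1000
```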
An alternative approach to sample time delays is to use rejection sampling (this is not the rejection sampling that is the key of the RED method, though), which is illustrated in Fig 1d. Assume that we have λv(t) ≤ c for all t. We start with tv = 0. Next, we sample a random variate t′v which is exponentially distributed with rate c. Next, we set tv = tv + t′v and accept tv with probability λv(tv)/c. Otherwise, we reject t′v and repeat the process. If a reasonably tight over-approximation can be found, rejection sampling is much faster than numerical integration. The correctness can be shown similarly to the correctness of RED. That is, one creates a complementing (or shadow) process which accounts for the difference between the upper bound c and λ(t). In total, the null events and the complementing process cancel out, yielding statistically correct samples of tv.
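This per-agent rejection scheme (often called thinning) is easy to state in code; a sketch assuming a constant bound c with λ(t) ≤ c for all t:

```python
import math
import random

def sample_delay_thinning(intensity, c):
    """Propose candidate firing times from a homogeneous Poisson process
    with rate c and accept each candidate t with probability λ(t)/c."""
    t = 0.0
    while True:
        t += random.expovariate(c)           # next candidate time
        if random.random() < intensity(t) / c:
            return t                         # accepted firing time

# the example from Fig 1d: λ(t) = sin²(2t) with upper bound c = 1
random.seed(0)
samples = [sample_delay_thinning(lambda t: math.sin(2 * t) ** 2, 1.0)
           for _ in range(4000)]
# compare against the exact survival function:
# P(T > 2) = exp(-(1 - sin(8)/8)) ≈ 0.42
frac = sum(1 for t in samples if t > 2.0) / len(samples)
```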
Table 1. Schematic illustration of intensity functions and inter-event time densities.

Distribution   Parameters               Intensity λ(t)              PDF γ(t)
Exponential    λ ∈ ℝ>0                  λ                           λ e^{−λt}
Uniform        a, b ∈ ℝ>0, a < b        1_{t∈[a,b]} / (b − t)       1_{t∈[a,b]} / (b − a)
Weibull        c, u ∈ ℝ>0               (c/u)(t/u)^{c−1}            (c/u)(t/u)^{c−1} e^{−(t/u)^c}
Rayleigh       σ ∈ ℝ>0                  t/σ²                        (t/σ²) e^{−t²/(2σ²)}
Power law      α, t_min ∈ ℝ>0, α > 1    (α − 1)/t for t ≥ t_min     ((α − 1)/t_min)(t/t_min)^{−α} for t ≥ t_min

Examples of the relationship between common PDFs for inter-event times and their corresponding intensity functions. All functions are only defined on t ≥ 0.
Intensities and inter-event times
In our framework, the distribution of inter-event times is expressed using intensity functions. This is advantageous for the combination with rejection sampling. Here, we want to further establish the relationship between intensity functions and probability densities. Let us assume that, at a given time point and for an agent v, the probability density that the agent fires after exactly t time units is given by a PDF γ(·). Leveraging the theory of renewal processes [31–33], we find the relationship

λ(t) = γ(t) / (1 − ∫₀^t γ(t′) dt′)   and   γ(t) = λ(t) e^{−∫₀^t λ(y) dy}.

We set λ(t) to zero if the denominator is zero. Using this equation, we can derive intensity functions from any given inter-event time distribution (e.g., uniform, log-normal, gamma, power-law, etc.). In cases where it is not possible to derive λ(·) analytically, we can still compute it numerically. Some examples of λ(·) for common PDFs are shown in Table 1.
All density functions of time delays can be expressed as time-varying rates (i.e. intensities). However, only intensity functions with an infinite integral can be expressed as a PDF. If ∫₀^∞ λ(t) dt is finite, the process might, with positive probability, not fire at all. This follows directly from Eq. (1): the sampled reference area x can be arbitrarily large; if it is larger than ∫₀^∞ λ(t) dt, the process does not fire. For instance, consider an intensity function λ(t) which is 1 if t ∈ [0, 1] and zero otherwise. If x > 1, the process reaches t = 1 (without having already fired) and will also not fire for t > 1.
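The displayed relation also gives a direct recipe for computing λ(·) numerically from any density; a sketch using a crude left Riemann sum, checked against the Rayleigh intensity t/σ² from Table 1:

```python
import math

def intensity_from_pdf(gamma, t, dt=1e-4):
    """λ(t) = γ(t) / (1 - ∫_0^t γ(t') dt'), integral via left Riemann sum."""
    cdf = sum(gamma(k * dt) for k in range(int(t / dt))) * dt
    return gamma(t) / (1.0 - cdf)

sigma = 1.5
rayleigh_pdf = lambda t: (t / sigma**2) * math.exp(-t**2 / (2 * sigma**2))

# the recovered intensity at t = 1 should match the known hazard t/σ²
lam = intensity_from_pdf(rayleigh_pdf, 1.0)
```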
Previous simulation approaches

Most recent work on non-Markovian dynamics focuses on formal models of such processes and their analysis [34–38]. Research has mostly focused on how specific distributions (e.g. uniformly distributed curing times) alter the behavior of the epidemic spreading, for instance, the epidemic threshold (see [3, 4] for an overview). Most of this work is, however, rather limited in scope, in the sense that only certain distributions, or only networks with infinitely many nodes, or only the epidemic threshold but not the full emerging dynamics are considered. Even though substantial effort was dedicated to the usage of rejection sampling in the context of Markovian stochastic processes on networks [24, 25, 39], only a few approaches are known to us that are dedicated to non-Markovian dynamics [22, 23].
Here, we briefly summarize the relevant algorithms in order to lay the grounds for our RED algorithm, which was first introduced in [26]. We present an adaptation of the classical Gillespie method for networked processes as well as the non-Markovian Gillespie algorithm (nMGA) and its adaptation, the Laplace-Gillespie algorithm (LGA). To keep this contribution focused, we discuss all algorithms only for use in networked systems and within the notation of this paper.
Non-Markovian Gillespie algorithm
Boguñá et al. developed a modification of the Gillespie algorithm for non-Markovian systems, nMGA [23]. Their method is statistically exact but computationally expensive. Conceptually, nMGA is similar to the baseline in Section Semantics but computes the time delays using so-called survival functions, which simplifies the computation of the minimal time delay over all agents. An agent's survival function describes the probability that the time until its firing is larger than a certain threshold t (for all t). The joint survival function of all agents determines the probability that all time delays are larger than t. The joint survival function can then be used to sample the next event time.
Unfortunately, in nMGA, it is necessary to iterate over all agents in each simulation step in order to construct the joint survival function. The authors also propose a fast approximation. There, only the current instantaneous rate (at the beginning of each step) is used, and one assumes that all instantaneous rates remain constant until the next event. This is reasonable when the number of agents is very high because, if the number of agents approaches infinity, the time delay of the fastest agent approaches zero.
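The fast approximation amounts to one step of the classical Gillespie algorithm on the frozen instantaneous rates; a sketch (the dictionary-of-rates interface is an assumption of this illustration):

```python
import random

def nmga_fast_step(rates):
    """One approximate nMGA step: freeze all current instantaneous rates,
    draw the waiting time from Exp(sum of rates), and pick the firing
    agent with probability proportional to its rate."""
    total = sum(rates.values())
    dt = random.expovariate(total)
    r = random.uniform(0.0, total)       # roulette-wheel selection
    acc = 0.0
    for v, lam in rates.items():
        acc += lam
        if r <= acc:
            return v, dt
    return v, dt                         # guard against float round-off
```

After each step, the caller would have to recompute the instantaneous rates of the fired agent and its neighbors, which is where the per-step cost arises.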
Laplace-Gillespie algorithm
Masuda and Rocha introduced the Laplace-Gillespie algorithm (LGA) in [22]. The method aims at reducing the computational costs of finding the next event time compared to nMGA. They only consider inter-event time densities that can be expressed as a continuous mixture of exponentials:

γv(t) = ∫₀^∞ pv(λ) λ e^{−λt} dλ.    (2)

Here, pv(λ) is a PDF over the rate λ. The restriction on inter-event times limits the scope of the method to survival functions which are completely monotone [22]. The advantage is that we can sample the time delay tv of an agent v by first sampling λv according to pv(λ) and then sampling from an exponential distribution with rate λv. That is, tv = −ln(u)/λv for a uniformly (in (0, 1)) distributed random variate u.
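Sampling with LGA is therefore a two-stage draw; a sketch where the uniform mixing density on [1, 2] is an assumed example, not one taken from the paper:

```python
import math
import random

def lga_delay(sample_rate):
    """Laplace-Gillespie draw: sample a rate λ_v from the mixing density
    p_v, then an exponential delay with that rate (t_v = -ln(u)/λ_v)."""
    lam = sample_rate()
    u = 1.0 - random.random()            # uniform in (0, 1]
    return -math.log(u) / lam

# example mixing density: λ uniform on [1, 2]; the resulting mixture
# has mean delay E[1/λ] = ln 2 ≈ 0.69
random.seed(0)
samples = [lga_delay(lambda: random.uniform(1.0, 2.0)) for _ in range(5000)]
mean = sum(samples) / len(samples)
```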
Our method
In this section, we propose the RED algorithm for the generation of statistically correct trajectories of non-Markovian spreading models on networks. The main idea is to use rejection sampling to reduce the computational cost of each simulation step. Specifically, when an agent changes its local state, no update of the rates of the agent's neighbors is necessary.
Rate over-approximation

Recall that we use the effective rate λv(·) to express how the instantaneous rate of v changes over time, assuming that no neighboring agent changes its state (colloquially, we extrapolate the rate into the future). A key ingredient of our approach is the construction of λ̂v(·), which upper-bounds the instantaneous rate of v, taking into consideration all possible state changes of v's neighboring agents. That is, at all times, λ̂v(·) is larger than (or equal to) λv(·) while we allow that arbitrary state changes of neighbors occur at arbitrary times in the future. In other words, λ̂v(·) upper-bounds λv(·) even when we have to re-compute λv(·) due to a changing neighborhood. Formally, the upper-bound always satisfies:

λ̂v(t) ≥ φv(S(v), R(v) + t, M′)   for all M′ ∈ Mv,t and t ≥ 0 ,   (3)

where Mv,t denotes the set of reachable neighborhoods (that is, those with positive probability) of agent v after t time units. Sometimes λ̂v(·) is referred to as a dominator of λv(·) [30].

October 30, 2020 9/20
Note that it is generally not feasible to compute the over-approximation algorithmically, so we derive it analytically. Upper-bounds can be constant or time-dependent. For multi-agent models (with a finite number of local states), time-dependent upper-bounds exist for all practically relevant intensity functions, since we can derive the maximal instantaneous rate w.r.t. all reachable neighborhood states, which is typically finite except for some pathological cases (cf. Section Limitations).
Example. How does one find an over-approximation, and why does it eliminate the need to update an agent's neighborhood? Consider again the Markovian SIS example from earlier. The recovery of an infected agent does not depend on its neighborhood. Hence, the rate is always cr, which is also a trivial upper-bound. The rate at which a susceptible agent becomes infected is given by ci times the number of infected neighbors. This means that the instantaneous infection rate of v can be bounded by λ̂v(t) = kv ci, where kv is the degree (number of neighbors) of v. Note that this upper-bound does not depend on t. When we use this upper-bound to sample the time delay candidate of an agent, this time point will always be an under-approximation. When a neighbor changes (e.g., becomes infected), the under-approximation remains valid.
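As a minimal illustration of this example (function names are ours, and the paper's implementation is in Julia, not Python), the constant bound and the candidate delay drawn from it look as follows:

```python
import math
import random

def infection_rate_bound(degree, c_i):
    """Constant upper bound on a susceptible node's infection rate:
    pretend every neighbor is infected at all times."""
    return degree * c_i

def candidate_delay(rate_bound, u=None):
    """Exponential candidate firing delay drawn from the bound; since
    the bound dominates the true rate, the candidate is stochastically
    smaller than (i.e., under-approximates) the true firing time."""
    if u is None:
        u = random.random()
    return -math.log(1.0 - u) / rate_bound
```

Drawing with a larger rate yields a smaller delay for the same uniform variate, which is why the candidate times stay valid under neighborhood changes.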
However, consider for instance a recovery time that is uniformly distributed on [1, …]. In this case, λv(·) approaches infinity (cf. Fig 1b), making a constant upper-bound impossible (even without considering any changes in the neighborhood).
The RED algorithm

As input, our algorithm takes a multi-agent model (G, S, {φv}, {ψv}, S0) and corresponding upper-bounds {λ̂v}. As output, the method produces statistically exact trajectories (samples) following the semantics introduced earlier. RED is based on two main data structures:

Labeled graph

A graph represents the contact network. In each simulation step, each agent (node) v is annotated with its current state S(v) and T(v), the time point of its last state change.
Event queue

The event queue stores all (potential) future events (i.e., firings). An event is encoded as a tuple (v, µ̂, t̂v), where v is the agent that wants to fire, t̂v the prospective absolute time point of firing, and µ̂ is an over-approximation of the true effective rate (at time point t̂v). The queue is sorted according to t̂v.
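A binary heap keyed on t̂v realizes this queue; below is a minimal Python sketch (the class name is ours, not from the paper):

```python
import heapq

class EventQueue:
    """Min-heap of events (t_hat, v, mu_hat): prospective absolute
    firing time, agent id, and the over-approximated rate used to
    generate the event."""
    def __init__(self):
        self._heap = []

    def push(self, t_hat, agent, mu_hat):
        heapq.heappush(self._heap, (t_hat, agent, mu_hat))

    def pop(self):
        """Return the event with the smallest firing time."""
        t_hat, agent, mu_hat = heapq.heappop(self._heap)
        return agent, mu_hat, t_hat
```

Pushing and popping are O(log n) in the number of queued events, matching the complexity assumptions made later.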
A global clock, tglobal, keeps track of the elapsed time since the simulation started. We initialize the simulation by setting tglobal = 0 and generating one event per agent. Using T(v) (as in Line 2) is a viable alternative to using R(v) in order to encode residence times, since R(v) = tglobal − T(v). Practically, T(v) is more convenient, as it avoids the explicit updates of R(v) for all agents after any event happens. Again, we simulate until some stopping criterion is fulfilled. Each simulation step contains the sub-steps:
1. Take the first event (v, µ̂, t̂v) from the event queue and update tglobal = t̂v.

2. Evaluate the true instantaneous rate µ = φv(S(v), tglobal − T(v), M(v)) of v at the current system state.

3. With probability 1 − µ/µ̂, reject the firing and go to Line 5.

4. Randomly choose the next state s′ of v according to the distribution ψv(S(v), tglobal − T(v), M(v)). If S(v) ≠ s′: set S(v) = s′ and T(v) = tglobal.

5. Generate a new event for agent v and push it to the event queue.

6. Go to Line 1.
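The six sub-steps can be condensed into the following Python sketch. It is a simplified illustration rather than the paper's Julia implementation: it assumes constant rate over-approximations (the general method also supports time-dependent bounds), and `phi`, `psi`, and the bound dictionary are model-specific inputs we supply:

```python
import heapq
import math
import random

def red_simulate(phi, phi_hat_rate, psi, state, t_end, seed=1):
    """Minimal RED main loop under constant dominating rates.
    phi(v, state, residence) is the true instantaneous rate,
    phi_hat_rate[v] a constant rate that dominates it, and
    psi(v, state) draws the next local state."""
    rng = random.Random(seed)
    t_global = 0.0
    last_change = {v: 0.0 for v in state}  # T(v): time of last state change
    queue = []

    def push_event(v):
        mu_hat = phi_hat_rate[v]
        # exponential candidate delay drawn from the dominating rate
        t_hat = t_global - math.log(1.0 - rng.random()) / mu_hat
        heapq.heappush(queue, (t_hat, v, mu_hat))

    for v in state:                        # one initial event per agent
        push_event(v)
    while queue:
        t_hat, v, mu_hat = heapq.heappop(queue)        # 1. earliest event
        if t_hat > t_end:
            break
        t_global = t_hat
        mu = phi(v, state, t_global - last_change[v])  # 2. true rate
        if rng.random() < mu / mu_hat:                 # 3. accept w.p. mu/mu_hat
            s_new = psi(v, state)                      # 4. sample next state
            if s_new != state[v]:
                state[v] = s_new
                last_change[v] = t_global
        push_event(v)          # 5. new event for the firing agent only
    return state
```

Note that step 5 touches only the firing agent, never its neighborhood, which is the source of the speed-up.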
The main difference to previous approaches is that, traditionally, the rates of all neighbors of a firing agent have to be updated. In RED, only the rate of the firing agent has to be updated.
Event generation. Here, we specify how the event generation in Line 5 is done. We sample a random time delay tv according to λ̂v(·) and set t̂v = tglobal + tv (because the event contains the absolute time). To sample tv according to the over-approximated rate, we either use the numerical integration of Eq. (1) or sample directly from an exponential distribution which upper-bounds the intensity function (cf. Fig 1d). Finally, we set µ̂ = λ̂v(tv). Alternatively, when appropriate for λ̂v(t), we can even use an LGA-like approach to sample tv (and also set µ̂ = λ̂v(tv)) [23].
Asymptotic time complexity

Here, we discuss how the runtime of RED scales with the size of the underlying contact network (and the number of agents). Assume that a binary heap is used to implement the event queue and that the graph structure is implemented using a hashmap. Each step starts by popping an element from the queue, which has constant time complexity. Next, we compute µ. To this end, we have to look up all neighbors of v in the graph structure and iterate over them. We also have to look up all states and residence times. This step has linear time complexity in the number of neighbors. More precisely, lookups in the hashmaps have constant time complexity on average and are linear in the number of agents in the worst case. Computing the rejection probability has constant time complexity. When no rejection event takes place, we update S and T. Again, this has constant time complexity on average. Generating a new event does not depend on the neighborhood of an agent and has, therefore, constant time complexity. Note that this step can still be somewhat expensive when it requires integration to sample tv, but not in an asymptotic sense. Thus, a step in the simulation is linear in the number of neighbors of the agent under consideration.
In contrast, previous methods require that, after each event, the rate of each neighbor v′ is re-computed. The rate of v′, however, depends on the whole neighborhood of v′. Hence, it is necessary to iterate over all neighbors of every single neighbor v′ of v.
Correctness

The correctness of RED can be shown similarly to [24]. Here, we provide a proof sketch. First, consider the rejection-free version of the algorithm:
1. Take the first event (v, µ, t̂v) from the event queue and update tglobal = t̂v.

2. Randomly choose the next state s′ of v according to the distribution ψv(S(v), tglobal − T(v), M(v)).

3. If s′ = S(v): generate a new event for agent v, push it to the event queue, and go to Line 1 (no state transition of v).

4. Otherwise, set S(v) = s′, generate a new event for agent v, and push it to the event queue.

5. For each neighbor v′ of v: remove the event corresponding to v′ from the queue and generate a new event (taking the new state of v into account).

6. Go to Line 1.
Rejection events are not necessary for this version of the algorithm because all events in the queue are generated using the "real" rate and are therefore consistent with the current system state. In other words, the rejection probability would always be zero. It is easy to see that the rejection-free version is a direct event-driven implementation of the naïve simulation algorithm which was introduced in the Semantics Section. The correspondence between Gillespie approaches and event-driven simulations is exploited in the literature, for instance in [4]. Thus, it is sufficient to show that the above rejection-free simulation and RED are statistically equivalent.
First, note that it is possible to include self-loop events in our model without changing the underlying dynamics (resp. statistical properties). These are events in which an agent fires but transitions into the same internal state it already occupies. Until now, we did not allow such self-loop behavior. In the algorithm, self-loop events correspond to the condition S(v) = s′ in the third step. Such events do not alter the network state and, therefore, do not change the statistical properties of the generated trajectories. The key idea is now to change φv and ψv to φ̂v and ψ̂v, respectively, such that the events related to φ̂v and ψ̂v also admit self-loop events with a certain probability. Specifically, self-loops have the same probability as rejection events in the RED method but, apart from that, φ̂v and ψ̂v admit the same behavior as φv and ψv. Formally, this is achieved by using so-called shadow-processes [24, 39], sometimes also referred to as complementing processes [30]. A shadow-process does not change the state of the corresponding agent but still fires at a certain rate. In the end, we can interpret the rejection events not as rejections but as the statistically necessary application of the shadow-process.
We define the rate of the shadow-process, denoted by λ̃v(·), to be the difference between the rate over-approximation and the true rate. For all v, t, this gives rise to the invariance: λ̂v(t) = λv(t) + λ̃v(t). We define φ̂v such that it includes the shadow-process. The only thing remaining is to define ψ̂v such that the shadow-process does not influence the system state. To this end, we simply trigger a null event (or self-loop) with a probability that is proportional to how much of φ̂v is induced by the shadow-process. Hence, the probability for a null event is λ̃v(t)/λ̂v(t). Consequently, ψ̂v(s, t, m) = p, where p is defined such that:

p(s) = λ̃v(t)/λ̂v(t) ,   p(s′) = (λv(t)/λ̂v(t)) ψv(s′, t, m)   (s′ ≠ s) .

W.l.o.g., we assume that the original system has no inherent self-loops. In summary, the model specifications with and without the shadow-process are equivalent (i.e., they admit the same dynamics). This is because the shadow-process has no actual effect on the system state; all the additional reactivity is compensated by the null event. Secondly, simulating the rejection-free algorithm including the shadow-process directly yields RED. In particular, the rejection events have the same likelihood as the shadow-process being chosen in ψ̂v.
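The null-event construction can be sanity-checked numerically: in the sketch below (hypothetical Python names; not the paper's code), the resulting probabilities always sum to one, and the null event carries exactly the shadow-process share λ̃v/λ̂v:

```python
def shadow_next_state_dist(lam, lam_hat, psi):
    """Next-state distribution including the shadow-process: the key
    None marks the null event (self-loop), occurring with probability
    lam_tilde / lam_hat; the original distribution psi is scaled by
    lam / lam_hat."""
    lam_tilde = lam_hat - lam                      # shadow-process rate
    p = {s: (lam / lam_hat) * q for s, q in psi.items()}
    p[None] = lam_tilde / lam_hat                  # null event
    return p
```

Since lam_tilde/lam_hat + (lam/lam_hat) · Σ ψ = 1, the construction is a proper distribution for any true rate dominated by the bound.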
Limitations

The practical and theoretical applicability of our approach depends on how well the intensity function of an agent can be over-approximated. The larger the difference between λ(·) and λ̂(·) becomes, the more rejection events occur and the slower our method becomes. In general, since rejection events are extremely cheap, it is not a problem for our method when most of the events in the event queue are rejected. However, it is easy to think of examples where RED performs exceptionally badly. For instance, consider an SIS-type model in which nodes can only become infected if exactly half of their neighbors are infected. In this case, the over-approximation would assume that, for all susceptible nodes, this is always the case, causing too many rejection events. Likewise, the problem can also occur in the time domain. Consider the case that infected nodes only infect their susceptible neighbors in the first t time-units of their infection with rate λ, where t is extremely short (e.g., 0.001) and λ is extremely high (e.g., 1000). Given a susceptible node, we do not know how many of its neighbors will be newly infected in the future, so we have to assume that all neighbors are infectious all of the time.
Similarly, in some cases, it might not be possible to find a theoretical upper-bound for the rate at all. Consider the case where an infected agent with residence time t "attacks" its neighbors at rate |−log(t)| (which converges to infinity for t → 0). This still gives rise to a well-defined stochastic process because the integration of |−log(t)| leads to non-zero inter-event times and, therefore, it is possible to sample inter-event times even though the rate starts at infinity. However, we cannot build an upper-bound because, again, we have to assume that all neighbors of a susceptible node are always newly infected.

There are also more practical examples, like networked (self-exciting) Hawkes processes [40]. Here, the number of firings of a neighbor increases the instantaneous rate of an agent. As it is not possible to bound (in advance) the number of times the neighbors fire (at least not without additional assumptions), it is not possible to construct an upper-bound for the intensity function for any future point in time.
Case studies

We demonstrate the effectiveness of RED on three case studies. We generate synthetic graphs to use as contact networks. To this end, we use the stochastic configuration model, where the degree distribution is specified by a truncated power-law [41]. That is, for a random node, the probability of having degree k is proportional to k^(−β) for 3 ≤ k ≤ |N|. We use β ∈ {2.0, 2.5} (a smaller value for β leads to a larger average degree and higher connectivity).
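Such a degree sequence can be drawn, for instance, by inverse-transform sampling on the discrete truncated power-law. The Python sketch below (function name is ours; the paper's code is in Julia) only produces the kind of input a configuration-model generator would consume:

```python
import random

def truncated_power_law_degrees(n, beta, k_min=3, seed=0):
    """Draw n degrees where P(k) is proportional to k**(-beta) for
    k_min <= k <= n, via inverse-transform sampling on the discrete
    distribution."""
    rng = random.Random(seed)
    ks = list(range(k_min, n + 1))
    weights = [k ** (-beta) for k in ks]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    degrees = []
    for _ in range(n):
        u = rng.random()
        # first index whose cumulative mass covers u
        idx = next(i for i, c in enumerate(cdf) if u <= c)
        degrees.append(ks[idx])
    return degrees
```

A smaller exponent puts more mass on large degrees, so β = 2.0 yields a larger average degree than β = 2.5, as stated above.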
RED is implemented in Julia and
Fig 3. Results. Computation time of a single simulation step w.r.t. network size and connectivity (smaller β means higher connectivity). We measure the CPU time per simulation step by dividing the simulation time by the number of successful (i.e., non-rejection) steps.
publicly available (…). The evaluation was performed on a 2017 MacBook Pro with a 3.1 GHz Intel Core i5 CPU. Runtime results for different models are shown in Fig 3. To compute the step-wise CPU time, we completely ignore the rejection steps so as not to give our method an advantage. We remark that RED and the baseline are both statistically correct, meaning that they sample from the correct distribution in the sense of the model semantics, while nMGA provides an approximation.
Baseline

We compare the performance of RED with a baseline algorithm and an nMGA-type approach. As a baseline, we use the rejection-free variant of the algorithm where, when an agent fires, all of its neighbors are updated (described in more detail in Section Correctness). In the Voter-model experiment, the baseline uses an LGA-type approach to sample inter-event times (following Eq. 2). In the other experiments, we sample inter-event times using the rejection-based approach from Fig 1d. We do note that LGA and RED are not directly comparable, as they are associated with different objectives. In short, LGA focuses on optimizing the generation of inter-event times, while RED aims at reducing the number of times that it is necessary to generate inter-event times. We want to emphasize that the reason we include an LGA-type and a rejection-type sampling approach is to highlight that our performance gain is not a consequence of the specifics of how inter-event times are generated.
We use an nMGA-type method as a second comparison. It is a re-implementation of the approximate version of nMGA from [23]. The method stores all agents with their associated residence times in a list. In each step, we iterate over the list and generate a new firing time (candidate) for each agent, assuming that the instantaneous rate remains constant (note that assuming a constant rate means sampling an exponentially distributed time delay). Then, the agent with the shortest time delay candidate fires, and the residence times of all agents are updated. The approximation error decreases with increasing network size because the time periods between events become smaller.
SIS model

As our first test case, we use a non-Markovian modification of the classical SIS model. Specifically, we assume that infected nodes become (exponentially) less infectious over time. That is, the rate at which an infected agent with residence time t "attacks" its susceptible neighbors is u(t) = 0.4 · e^(−t). This does not directly relate to a probability density because the infection event might not happen at all. Empirically, we choose parameters which ensure that the infection actually spreads over the network. We upper-bound the rate at which a susceptible agent v (with degree kv) gets infected with λ̂v(t) = u(0) kv. The upper-bound is constant in time and conceptually similar to the earlier example (cf. Section Rate over-approximation). We sample tv using an exponential distribution (i.e., without numerical integration). The time until an infected agent recovers is independent of its neighborhood and uniformly distributed in [0, …] (similar to [42]). Hence, we can sample it directly. We start with 5% randomly selected infected agents.
Voter model

The voter model describes the spread of two competing opinions, denoted as A and B. Agents in state A switch to B and vice versa; thus, ψ is deterministic.

In this experiment, we use an inter-event time that can be sampled using an LGA-type approach (cf. Eq. 2). Moreover, to take full advantage of the LGA formulation, we assume that the neighborhood of an agent modulates the PDF pv which specifies the continuous mixture of rates (otherwise, we could simply pre-compute it). Here, we choose pv to be a uniform distribution on [0, ov], where ov is the fraction of opposing neighbors of agent v. That is, if v is in state A (resp. B), then ov is the number of neighbors in B (resp. A) divided by v's degree kv.

Hence, we can sample a time delay candidate by sampling a uniformly distributed random variate λv ∈ [0, ov] and then sampling the time delay candidate tv, which is exponentially distributed with rate λv. The resulting inter-event time distribution resembles a power-law with a slight exponential cut-off [22]. The cut-off becomes more dominant for larger ov. Formally,

γv(t) = ∫₀^ov (1/ov) λ e^(−λt) dλ = (1 − e^(−ov t)(1 + ov t)) / (ov t²) ,
λv(t) = 1/t − ov / (e^(ov t) − 1) .

To upper-bound the instantaneous rate, we set ov = 1. To sample tv in RED, we use rejection sampling (Fig 1d). The baseline uses the LGA-based approach, but changing to rejection sampling does not noticeably change the performance. We initialize the simulation with 50% of agents in A and B, respectively.
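The closed-form expression for γv(t) can be cross-checked against a direct numerical integration of the mixture (a small Python verification sketch; the function names are ours):

```python
import math

def gamma_closed(t, ov):
    """Closed-form inter-event density for p_v uniform on [0, o_v]."""
    x = ov * t
    return (1.0 - math.exp(-x) * (1.0 + x)) / (ov * t * t)

def gamma_numeric(t, ov, steps=20000):
    """Midpoint-rule approximation of
    (1/o_v) * integral_0^{o_v} lam * exp(-lam * t) d lam."""
    h = ov / steps
    total = 0.0
    for i in range(steps):
        lam = (i + 0.5) * h
        total += lam * math.exp(-lam * t)
    return total * h / ov
```

The two agree to high precision for moderate t and ov, which supports the closed form used above.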
Neural spiking

To model neural spiking behavior, we propose a networked (i.e., multivariate) temporal point-process [43]. In temporal point-processes, agents are naturally excitable and can get activated for an infinitesimally short period of time (I). After that, they become immediately excitable again. Point-processes model scenarios where one is only interested in the firing times of an agent, not in their local state. They are commonly used to model spiking behavior of neurons [44, 45] or information propagation in social networks (like re-tweeting) [40]. A random trajectory of a system identifies each agent with a list of time points Hv of its activations. Here, we consider multivariate point-processes, where each agent (node) represents one point-process and neighboring agents influence each other by inhibition or excitement. To this end, we identify each (undirected) edge (v, u) with a weight wv,u of either 1 (excitatory connection) or −1 (inhibitory connection). Moreover, neurons can spontaneously fire with a baseline intensity of µ ∈ R≥0. Formally,

φv(s, t, m) = f( µ + Σu wv,u / (1 + t′u) )   with f(x) = max(0, tanh(x)) ,

where the sum ranges over the neighbors u of v and t′u denotes the residence time of u.
The function f is called a response function; it converts the synaptic input into the actual firing rate. We use the same one as in [46]. Without f, the intensity could become negative. Note that ψv is deterministic. Our model can be seen as a non-Markovian modification of the model of Benayoun et al. in [46]. Contrary to Benayoun et al., we do not assume that active neurons stay in their active state for a specific (in their case, exponentially distributed) amount of time. Instead, we assume that they become immediately excitable again and that an activation affects the neighboring neurons through a kernel function 1/(1 + t′). The kernel ensures that neighbors who fired more recently (i.e., have a smaller residence time t′) have a higher influence on an agent.
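A direct transcription of the intensity reads as follows (illustrative Python; the function name is ours). The bounded response function is what makes a trivial constant upper-bound possible:

```python
import math

def neural_intensity(mu, weights, residence_times):
    """Instantaneous firing rate of a neuron: baseline mu plus the
    kernel-weighted input of its neighbors (weight +1 excitatory,
    -1 inhibitory; a smaller residence time means a more recent
    spike and a larger contribution), passed through the response
    function f(x) = max(0, tanh(x))."""
    synaptic = sum(w / (1.0 + t) for w, t in zip(weights, residence_times))
    return max(0.0, math.tanh(mu + synaptic))
```

Since tanh is bounded by one, λ̂v ≡ 1 dominates the rate for every reachable neighborhood.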
The residence time of an agent itself does not influence its rate. In contrast to multivariate self-exciting Hawkes processes, only the most recent firing (and not the whole event history Hv) contributes to the intensity of neighboring agents [40, 47]. Taking the whole history into account is not easily possible with a finite number of local states and introduces intensity functions which cannot be upper-bounded (cf. Limitations). For our experiments, we set µ = 0.01, define 20% of the edges to be inhibitory, and use the trivial upper-bound of one (induced by the response function).
Discussion

Our experimental results provide a clear indication that rejection-based simulation (and the corresponding over-approximation of the instantaneous rate) can dramatically reduce the computational costs of stochastic simulation in the context of non-Markovian dynamics on networks.

As expected, we see that the runtime behavior is influenced by the number of agents (nodes) and the number of interconnections (edges). Interestingly, for RED, the number of edges seems to be much more relevant than the number of agents. Most noticeably, the CPU time of each simulation step practically does not increase (beyond statistical noise) with the number of nodes. Moreover, one can clearly see that RED consistently outperforms the baseline by up to several orders of magnitude (cf. Fig 3), while the gain in computational time (i.e., baseline CPU time divided by RED CPU time) ranges from 10.2 (10^3 nodes, voter model, β = 2.5) to 674 (10^5 nodes, SIS model, β = 2.0).
Note that we only compared an LGA-type sampling approach with our method in the voter model experiment. The other case studies could not straightforwardly be simulated with LGA due to its constraints on the inter-event time distributions. However, we still assume that the rejection-free baseline algorithm is comparable with LGA in the other experiments, as both of them only update the rates of the relevant agents after an event. We also tested an nMGA-like implementation where rates are considered to remain constant until the next event. However, the method, albeit only approximate, scales worse than the baseline.
Note that the SIS model is somewhat unfavorable for RED, as it leads to the generation of a large number of rejection events, especially when only a small fraction of agents is infected overall. For concreteness, consider an agent with many neighbors of which only very few are infected. The over-approximation simply assumes that all neighboring agents are infected all the time. Nevertheless, the low computational costs of each rejection event seem to easily atone for their large number. In contrast, the neural spiking model is very favorable for our method, as the tanh(·) response function provides a global upper-bound for the instantaneous rate of each agent. Performance-wise, the differences between the two models are, surprisingly, rather small.
Conclusion

We proposed RED, a rejection-based algorithm for the simulation of non-Markovian agent models on networks. The key advantage, and most significant contribution of our method, is that it is no longer required to update the instantaneous rates of the whole neighborhood in each simulation step. This practically and theoretically reduces the time complexity of each step compared to previous simulation approaches and makes our method viable for the simulation of dynamical processes on real-world networks, which often have millions of nodes. In addition, for some inter-event time distributions, rejection steps provide a fast alternative to integrating the intensity function.

Currently, the most notable downside of the method is that the over-approximations have to be constructed manually. It remains to be determined whether it is possible to automate the construction of λ̂v in an efficient way, as the trivial approach of searching the state space of all reachable neighborhoods is not feasible. We also plan to investigate how correlated events (as in [22, 48]) can be integrated into RED.
References

1. Barabási AL. Network science. Cambridge University Press; 2016.

2. Goutsias J, Jenkinson G. Markovian dynamics on complex reaction networks. Physics Reports. 2013;529(2):199–264.
3. Pastor-Satorras R, Castellano C, Van Mieghem P, Vespignani A. Epidemic
processes in complex networks. Reviews of modern physics. 2015;87(3):925.
4. Kiss IZ, Miller JC, Simon PL. Mathematics of epidemics on networks.
Forthcoming in Springer TAM series. 2016;.
5. Porter M, Gleeson J. Dynamical systems on networks: A tutorial. vol. 4.
Springer; 2016.
6. Rodrigues HS. Application of SIR epidemiological model: new trends. arXiv
preprint arXiv:161102565. 2016;.
7. Kitsak M, Gallos LK, Havlin S, Liljeros F, Muchnik L, Stanley HE, et al.
Identification of influential spreaders in complex networks. Nature physics.
8. Zhao L, Wang J, Chen Y, Wang Q, Cheng J, Cui H. SIHR rumor spreading
model in social networks. Physica A: Statistical Mechanics and its Applications.
9. Goltsev A, De Abreu F, Dorogovtsev S, Mendes J. Stochastic cellular automata
model of neural networks. Physical Review E. 2010;81(6):061921.
10. Meier J, Zhou X, Hillebrand A, Tewarie P, Stam CJ, Van Mieghem P. The
epidemic spreading model and the direction of information flow in brain networks.
NeuroImage. 2017;152:639–646.
11. Gan C, Yang X, Liu W, Zhu Q, Zhang X. Propagation of computer virus under
human intervention: a dynamical model. Discrete Dynamics in Nature and
Society. 2012;2012.
12. May RM, Arinaminpathy N. Systemic risk: the dynamics of model banking
systems. Journal of the Royal Society Interface. 2009;7(46):823–838.
13. Peckham R. Contagion: epidemiological models and financial crises. Journal of
Public Health. 2013;36(1):13–17.
14. Lloyd AL. Realistic distributions of infectious periods in epidemic models:
changing patterns of persistence and dynamics. Theoretical population biology.
15. Yang G. Empirical study of a non-Markovian epidemic model. Mathematical
Biosciences. 1972;14(1-2):65–84.
16. Blythe S, Anderson R. Variable infectiousness in HIV transmission models. Mathematical Medicine and Biology: A Journal of the IMA. 1988;5(3):181–200.
17. Hollingsworth TD, Anderson RM, Fraser C. HIV-1 transmission, by stage of
infection. The Journal of infectious diseases. 2008;198(5):687–693.
18. Feng Z, Thieme H. Endemic models for the spread of infectious diseases with
arbitrarily distributed disease stages I: General theory. SIAM J Appl Math.
19. Barabasi AL. The origin of bursts and heavy tails in human dynamics. Nature.
20. Vázquez A, Oliveira JG, Dezső Z, Goh KI, Kondor I, Barabási AL. Modeling bursts and heavy tails in human dynamics. Physical Review E. 2006;73(3):036127.
21. Softky WR, Koch C. The highly irregular firing of cortical cells is inconsistent
with temporal integration of random EPSPs. Journal of Neuroscience.
22. Masuda N, Rocha LE. A Gillespie algorithm for non-Markovian stochastic
processes. SIAM Review. 2018;60(1):95–115.
23. Boguñá M, Lafuerza LF, Toral R, Serrano MÁ. Simulating non-Markovian stochastic processes. Physical Review E. 2014;90(4):042108.
24. Cota W, Ferreira SC. Optimized Gillespie algorithms for the simulation of
Markovian epidemic processes on large and heterogeneous networks. Computer
Physics Communications. 2017;219:303–312.
25. St-Onge G, Young JG, Hébert-Dufresne L, Dubé LJ. Efficient sampling of spreading processes on complex networks using a composition and rejection algorithm. arXiv preprint arXiv:180805859. 2018;.
26. Großmann G, Bortolussi L, Wolf V. Rejection-Based Simulation of
Non-Markovian Agents on Complex Networks. In: International Conference on
Complex Networks and Their Applications. Springer; 2019. p. 349–361.
27. D’Angelo G, Severini L, Velaj Y. Influence Maximization in the Independent
Cascade Model. In: ICTCS; 2016. p. 269–274.
28. Keeler P. Simulating an inhomogeneous Poisson point-processes; 2019.
29. Pasupathy R. Generating nonhomogeneous poisson processes;.
30. Gerhard F, Gerstner W. Rescaling, thinning or complementing? On
goodness-of-fit procedures for point-processes models and Generalized Linear
Models. In: Advances in neural information processing systems; 2010. p. 703–711.
31. Daley DJ, Jones DV. An Introduction to the Theory of point-processes:
Elementary Theory of point-processes. Springer; 2003.
32. Cox DR. Renewal theory. 1962;.
33. Ma D. Applied Probability and Statistics - The hazard rate function; 2011.
34. Kiss IZ, Röst G, Vizi Z. Generalization of pairwise models to non-Markovian epidemics on networks. Physical Review Letters. 2015;115(7):078701.
35. Pellis L, House T, Keeling MJ. Exact and approximate moment closures for
non-Markovian network epidemics. Journal of theoretical biology.
36. Jo HH, Perotti JI, Kaski K, Kertész J. Analytically solvable model of spreading dynamics with non-Poissonian processes. Physical Review X. 2014;4(1):011041.
37. Sherborne N, Miller J, Blyuss K, Kiss I. Mean-field models for non-Markovian
epidemics on networks: from edge-based compartmental to pairwise models.
arXiv preprint arXiv:161104030. 2016;.
38. Starnini M, Gleeson JP, Boguñá M. Equivalence between non-Markovian and Markovian dynamics in epidemic spreading processes. Physical Review Letters.
39. Großmann G, Wolf V. Rejection-based simulation of stochastic spreading
processes on complex networks. In: International Workshop on Hybrid Systems
Biology. Springer; 2019. p. 63–79.
40. Farajtabar M, Wang Y, Rodriguez MG, Li S, Zha H, Song L. Coevolve: A joint
point-processes model for information diffusion and network co-evolution. In:
Advances in Neural Information Processing Systems; 2015. p. 1954–1962.
41. Fosdick BK, Larremore DB, Nishimura J, Ugander J. Configuring random graph models with fixed degree sequences. SIAM Review. 2018;60(2):315–355.
42. Röst G, Vizi Z, Kiss IZ. Impact of non-Markovian recovery on network epidemics. In: BIOMAT 2015: International Symposium on Mathematical and Computational Biology. World Scientific; 2016. p. 40–53.
43. Wu W, Liu H, Zhang X, Liu Y, Zha H. Modeling Event Propagation via Graph
Biased Temporal point-processes. arXiv preprint arXiv:190801623. 2019;.
44. Truccolo W. Stochastic models for multivariate neural point-processes: Collective dynamics and neural decoding. In: Analysis of parallel spike trains. Springer; 2010. p. 321–341.
45. DELLA NATURA SDS. Comparative correlation analyses of high-dimensional
point-processes: applications to neuroscience;.
46. Benayoun M, Cowan JD, van Drongelen W, Wallace E. Avalanches in a
stochastic model of spiking neurons. PLoS computational biology. 2010;6(7).
47. Dassios A, Zhao H, et al. Exact simulation of Hawkes process with exponentially decaying intensity. Electronic Communications in Probability. 2013;18.
48. Jo HH, Lee BH, Hiraoka T, Jung WS. Copula-based algorithm for generating
bursty time series. arXiv preprint arXiv:190408795. 2019;.
49. Ogata Y. On Lewis’ simulation method for point-processes. IEEE Transactions
on Information Theory. 1981;27(1):23–31.
Full-text available
This paper introduces a novel extension of the edge-based compartmental model to epidemics where the transmission and recovery processes are driven by general independent probability distributions. Edge-based compartmental modelling is just one of many different approaches used to model the spread of an infectious disease on a network; the major result of this paper is the rigorous proof that the edge-based compartmental model and the message passing models are equivalent for general independent transmission and recovery processes. This implies that the new model is exact on the ensemble of configuration model networks of infinite size. For the case of Markovian transmission the message passing model is re-parametrised into a pairwise-like model which is then used to derive many well-known pairwise models for regular networks, or when the infectious period is exponentially distributed or is of a fixed length.
Full-text available
Numerical simulation of continuous-time Markovian processes is an essential and widely applied tool in the investigation of epidemic spreading on complex networks. Due to the high heterogeneity of the connectivity structure through which epidemics is transmitted, efficient and accurate implementations of generic epidemic processes are not trivial and deviations from statistically exact prescriptions can lead to uncontrolled biases. Based on the Gillespie algorithm (GA), in which only steps that change the state are considered, we develop numerical recipes and describe their computer implementations for statistically exact and computationally efficient simulations of generic Markovian epidemic processes aiming at highly heterogeneous and large networks. The central point of the recipes investigated here is to include phantom processes, that do not change the states but do count for time increments. We compare the efficiencies for the susceptible-infected-susceptible, contact process and susceptible-infected-recovered models, that are particular cases of a generic model considered here. We numerically confirm that the simulation outcomes of the optimized algorithms are statistically indistinguishable from the original GA and can be several orders of magnitude more efficient.
Temporal point process is widely used for sequential data modeling. In this article, we focus on the problem of modeling sequential event propagation in graph, such as retweeting by social network users and news transmitting between websites. Given a collection of event propagation sequences, the conventional point process model considers only the event history, i.e., embed event history into a vector, not the latent graph structure. We propose a graph biased temporal point process (GBTPP) leveraging the structural information from graph representation learning, where the direct influence between nodes and indirect influence from event history is modeled. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic data set and two real-world data sets show the efficacy of our model compared with conventional methods and state-of-the-art ones.
Dynamical processes in various natural and social phenomena have been described by a series of events or event sequences showing non-Poissonian, bursty temporal patterns. Temporal correlations in such bursty time series can be understood not only by heterogeneous interevent times (IETs) but also by correlations between IETs. Modeling and simulating various dynamical processes requires us to generate event sequences with a heavy-tailed IET distribution and memory effects between IETs. For this, we propose a Farlie-Gumbel-Morgenstern copula-based algorithm for generating event sequences with correlated IETs when the IET distribution and the memory coefficient between two consecutive IETs are given. We successfully apply our algorithm to the cases with heavy-tailed IET distributions. We also compare our algorithm to the existing shuffling method to find that our algorithm outperforms the shuffling method for some cases. Our copula-based algorithm is expected to be used for more realistic modeling of various dynamical processes.
Stochastic processes can model many emerging phenomena on networks, like the spread of computer viruses, rumors, or infectious diseases. Understanding the dynamics of such stochastic spreading processes is therefore of fundamental interest. In this work we consider the wide-spread compartment model where each node is in one of several states (or compartments). Nodes change their state randomly after an exponentially distributed waiting time and according to a given set of rules. For networks of realistic size, even the generation of only a single stochastic trajectory of a spreading process is computationally very expensive.
Conference Paper
Information diffusion in online social networks is affected by the underlying network topology, but it also has the power to change it. Online users are constantly creating new links when they are exposed to new information sources, and in turn these links are alternating the way information spreads. However, these two highly intertwined stochastic processes---information diffusion and network evolution---have been typically studied separately, ignoring their co-evolutionary dynamics. In this work, we propose a temporal point process model, COEVOLVE, for such joint dynamics, allowing the intensity of one process to be modulated by that of the other. The model allows us to efficiently simulate interleaved diffusion and network events, and generate traces obeying common diffusion and network patterns observed in real-world networks. Moreover, we develop a convex optimization framework to learn the parameters of the model from historical diffusion and network evolution traces. Experiments in both synthetic data and real data gathered from Twitter show that our model provides a good fit to the data as well as more accurate predictions than alternatives.
The Gillespie algorithm provides statistically exact methods for simulating stochastic dynamics modeled as interacting sequences of discrete events including systems of biochemical reactions or earthquake occurrences, networks of queuing processes or spiking neurons, and epidemic and opinion formation processes on social networks. Empirically, the inter-event times of various phenomena obey long-tailed distributions. The Gillespie algorithm and its variants either assume Poisson processes (i.e., exponentially distributed inter-event times), use particular functions for time courses of the event rate, or work for non-Poissonian renewal processes, including the case of long-tailed distributions of inter-event times, but at a high computational cost. In the present study, we propose an innovative Gillespie algorithm for renewal processes on the basis of the Laplace transform. The algorithm makes use of the fact that a class of point processes is represented as a mixture of Poisson processes with different event rates. The method is applicable to multivariate renewal processes whose survival function of inter-event times is completely monotone. It is an exact algorithm and works faster than a recently proposed Gillespie algorithm for general renewal processes, which is exact only in the limit of infinitely many processes. We also propose a method to generate sequences of event times with a tunable amount of positive correlation between inter-event times. We demonstrate our algorithm with exact simulations of epidemic processes on networks, finding that a realistic amount of positive correlation in inter-event times only slightly affects the epidemic dynamics.
We present an overview of existing methods to generate pseudorandom numbers from homogeneous Poisson processes. We provide three well-known definitions of the homogeneous Poisson process, present results that form the basis of various existing generation algorithms, and provide algorithm listings. With the intent of guiding users seeking an appropriate algorithm for a given setting, we note the computationally burdensome operations within each algorithm. Our treatment includes one-dimensional and two-dimensional homogeneous Poisson processes.