Rejection-Based Simulation of Non-Markovian
Agents on Complex Networks
Gerrit Großmann1, Luca Bortolussi1,2, and Verena Wolf1
1 Saarland University, 66123 Saarbrücken, Germany
2 University of Trieste, Trieste, Italy
Abstract. Stochastic models in which agents interact with their neigh-
borhood according to a network topology are a powerful modeling frame-
work to study the emergence of complex dynamic patterns in real-world
systems. Stochastic simulations are often the preferred—sometimes the
only feasible—way to investigate such systems. Previous research focused
primarily on Markovian models where the random time until an interac-
tion happens follows an exponential distribution.
In this work, we study a general framework to model systems where each
agent is in one of several states. Agents can change their state at random,
influenced by their complete neighborhood, while the time to the next
event can follow an arbitrary probability distribution. Classically, these
simulations are hindered by high computational costs of updating the
rates of interconnected agents and sampling the random residence times
from arbitrary distributions.
We propose a rejection-based, event-driven simulation algorithm to over-
come these limitations. Our method over-approximates the instantaneous
rates corresponding to inter-event times while rejection events counter-
balance these over-approximations. We demonstrate the effectiveness of
our approach on models of epidemic and information spreading.
Keywords: Gillespie Simulation, Complex Networks, Epidemic Model-
ing, Rejection Sampling, Multi-Agent System
1 Introduction
Computational modeling of dynamic processes on complex networks is a thriving
research area [1–3]. Arguably, the most common formalism for spreading pro-
cesses is the continuous-time SIS model and its variants [4–6]. Generally speak-
ing, an underlying contact network specifies the connectivity between nodes (i.e.,
agents) and each agent occupies one of several mutually exclusive (local) states
(or compartments). In the well-known SIS model, these states are susceptible
(S) and infected (I). Infected nodes can recover (become susceptible again) and
propagate their infection to neighboring susceptible nodes.
SIS-type models have been shown to be extremely useful for analyzing and predicting the spread of opinions, rumors, and memes in online social networks [7, 8], as well as neural activity [9, 10], the spread of malware [11], and blackouts in financial institutions [12, 13].
Previous research focused mostly on models where the probability of an event
(e.g. infection or recovery) happening in the next (infinitesimal) time unit is
constant, i.e. independent of the time the agent has already spent in its current
state. We call such agents memoryless and the corresponding stochastic process
Markovian. The semantics of such a model can be described using a so-called
(discrete-state) continuous-time Markov chain (CTMC).
One particularly important consequence of the memoryless property is that
the random time until an agent changes its state, either because of an inter-
action with another agent or because of a spontaneous transition, follows an
exponential distribution. The distribution of this residence time is parameterized by an (interaction-specific) rate λ ∈ ℝ≥0 [4]. Each agent has an associated event-modulated Poisson process whose rate depends on the agent's state and the state of its neighbors [14]. For instance, the infection of an agent increases the rate at which its susceptible neighbors switch to the infected state.
However, exponentially distributed residence times are an unrealistic assumption in many real-world systems. As empirical results show, this holds in particular for the spread of epidemics [15–19], for the diffusion of opinions in online social networks [20, 21], and for the interspike times of neurons [22]. However, time delays that follow non-exponential distributions complicate the analysis of such processes and typically only allow Monte-Carlo simulations, which suffer from high computational costs.
Recently, the Laplace-Gillespie algorithm (LGA) for the simulation of non-
Markovian dynamics has been introduced by Masuda and Rocha in [14]. It is
based on the non-Markovian Gillespie algorithm (nMGA) by Boguñá et al. [23]
and minimizes the costs of sampling inter-event times. However, both methods
require computationally expensive updating of an agent’s neighborhood in each
simulation step, which renders them inefficient for large-scale networks. In the
context of Markovian processes on networks, it has recently been shown that
rejection-based simulation can overcome this limitation [24–26].
Here, we extend the idea of rejection-based simulation to non-Markovian net-
worked systems, proposing RED-Sim, a rejection-based, event-driven simulation
approach. Specifically, we combine three ideas to obtain an efficient simulation
of non-Markovian processes: (i) we express the distributions of inter-event times
through time-varying instantaneous rates, (ii) we sample events based on an
over-approximation of these rates and compensate via a rejection step, and (iii)
we use a priority queue to sample the next event. The combination of these ele-
ments makes it possible to reduce the time-complexity of each simulation step.
Specifically, if an agent changes its state, no update of the rate of neighboring
agents is necessary. This comes at the cost of rejection events that counterbalance the missing information about the neighborhood. However, using a priority queue renders the computational burden of each rejection event very small.
The remainder of the paper is organized as follows: We describe our frame-
work for non-Markovian dynamics in Section 2 and provide a review of previous
simulation approaches in Section 3. Next, we propose our rejection-based simula-
tion algorithm in Section 4. Section 5 presents numerical results and we conclude
our work in Section 6.
2 Multi-Agent Model
This section introduces the underlying formalism to express agent-based dy-
namics on networks. Let G = (N, E) be an undirected, finite graph without self-loops, called the contact network. Nodes n ∈ N are also referred to as agents.
Network State. The current state of a network G is described by two functions:
– S : N → S assigns to each agent n a local state S(n) ∈ S, where S is a finite set of local states (e.g., S = {S, I} for the SIS model);
– R : N → ℝ≥0 describes the residence time of each agent, i.e. the time since the last change of state of the agent.
We say that an agent fires when it changes its state and refer to the remaining time until it fires as its time delay. The neighborhood state M(n) of an agent n is a multi-set containing the states of all neighboring agents together with their respective residence times:

M(n) = { (S(n′), R(n′)) | (n, n′) ∈ E } .

The set of all possible neighborhood states of all agents in a given system is denoted by M.
Network Dynamics. The dynamics of the network is described by assigning to each agent n two functions φ_n and ψ_n:
– φ_n : S × ℝ≥0 × M → ℝ≥0 defines the instantaneous rate of n, i.e. if λ = φ_n(S(n), R(n), M(n)), then the probability that n fires in the next infinitesimal time interval ∆t is λ∆t;
– ψ_n : S × ℝ≥0 × M → P(S) determines the state probabilities when a transition occurs. Here, P(S) denotes the set of all probability distributions over S. Hence, if p = ψ_n(S(n), R(n), M(n)), then, when agent n fires, the next local state is s with probability p(s).
Note that we do not consider cases of pathological behavior here, e.g. where φ_n is defined in such a way that an infinite number of simulation steps is possible in finite time.
A multi-agent network model is completely specified by a tuple (G, S, {φ_n}, {ψ_n}, S_0), where S_0 denotes a function that assigns to each node an initial state.
Example. In the classical SIS model, we have S = {S, I}, and φ and ψ are the same for all agents, i.e.,

φ_n(s, t, m) = { c_r                            if s = I
             { c_i · Σ_{(s′,t′) ∈ m} 1_I(s′)   if s = S

ψ_n(s, t, m) = { S   if s = I
             { I   if s = S

Here, c_i, c_r ∈ ℝ≥0 denote the infection and recovery rate constants, respectively. Note that the infection rate is proportional to the number of infected neighbors, whereas the rate of recovery is independent of neighboring agents. Moreover, 1_s : S → {0, 1} maps a state s′ to one iff s = s′ and to zero otherwise. The model is Markovian, as neither φ nor ψ depends on the residence time of any agent.
2.1 Semantics
We will specify the semantics of a multi-agent model by describing a stochastic
simulation algorithm that generates trajectories of the system. It is based on a
race condition among agents: each agent picks a random time until it will fire,
but only the one with the shortest time delay wins and changes its state.
Time Delay Density. Assume that t is the time increment of the algorithm. We define for each n the effective rate λ_n(t) as

λ_n(t) = φ_n(S(n), R(n) + t, M_t(n)) ,   where
M_t(n) = { (S(n′), R(n′) + t) | (n, n′) ∈ E }

describes the neighborhood state of n in t time units, assuming that all agents remain in their current state. Next, we assume that for each node n, the probability density of the (non-negative) time delay is γ_n, i.e. γ_n(t) is the density of firing after t time units. Leveraging the theory of renewal processes [27], we find the relationship

λ_n(t) = γ_n(t) / (1 − ∫₀ᵗ γ_n(y) dy)   and   γ_n(t) = λ_n(t) e^{−∫₀ᵗ λ_n(y) dy} .   (1)

We assume λ_n(t) to be zero if the denominator is zero. Note that, using this equation, we can derive rate functions from a given time delay distribution (e.g. uniform, log-normal, gamma, and so on). If it is not possible to derive λ_n analytically, it can be computed numerically.
For example, a constant rate function λ(t) = c corresponds to an exponential time delay distribution γ(t) = c e^{−ct} with rate c. Fig. 1 (b) illustrates the rate function when γ is the uniform distribution on [1, 2].
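The relationship in Eq. (1) can also be checked numerically. The following sketch (plain Python with a simple trapezoidal rule; the helper names are ours, not from the paper) recovers the rate function of the uniform delay distribution on [1, 2], whose closed form on that interval is 1/(2 − t).

```python
# Numerical check of Eq. (1): for the uniform density on [1, 2],
# gamma(t) = 1 for t in [1, 2], the instantaneous (hazard) rate is
# lambda(t) = gamma(t) / (1 - integral_0^t gamma(y) dy) = 1 / (2 - t).

def gamma_uniform(t):
    """Density of the uniform time delay distribution on [1, 2]."""
    return 1.0 if 1.0 <= t <= 2.0 else 0.0

def rate_from_density(gamma, t, dt=1e-5):
    """Compute lambda(t) from gamma via Eq. (1), trapezoidal integration."""
    steps = int(t / dt)
    integral = sum(0.5 * (gamma(i * dt) + gamma((i + 1) * dt)) * dt
                   for i in range(steps))
    denom = 1.0 - integral
    return gamma(t) / denom if denom > 0 else 0.0

# At t = 1.5, the closed form gives 1 / (2 - 1.5) = 2.
print(rate_from_density(gamma_uniform, 1.5))  # approx. 2.0
```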
Fig. 1: (a–c) Sampling event times with the rate function λ(t) = 1_{t∈[1,2]} / (2 − t). (a) Generate a random variate from the exponential distribution with rate λ = 1; the sample here is 0.69. (b) We integrate the rate function until the area is 0.69, here t_n = 1.5. (c) The rate function corresponding to the uniform distribution γ(t) = 1_{t∈[1,2]}. (d) Sampling t_n from a time-varying rate function using an upper bound of c = 1; rejection probabilities shown in red.
Sampling Time Delays. The effective rate λ_n(t) allows us to sample the time delay t_n after which agent n fires, using the inversion method. First, we sample an exponential random variate x with rate 1, then we integrate λ_n(t) to find t_n such that

∫₀^{t_n} λ_n(t) dt = x .   (2)

In general, it is possible to pre-compute the integral [28], but its parameterization (on states, residence times, etc.) renders this difficult.
Another viable approach is to use rejection sampling. Assume that we have c ∈ ℝ≥0 such that λ_n(t) ≤ c for all t. We start with t_n = 0. In each step, we sample an exponentially distributed random variate t′_n with rate c and set t_n = t_n + t′_n. We accept t_n with probability λ_n(t_n)/c. Otherwise we reject it and repeat the process. If a reasonable over-approximation can be constructed, this is typically much faster than the integral approach in (2).
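The rejection scheme just described can be sketched as follows. The example rate function lam is a hypothetical illustration (not from the paper) chosen so that c = 1 is a valid constant upper bound.

```python
import math
import random

# Sketch of the rejection (thinning) scheme: sample the time delay t_n
# for a time-varying rate lam(t) that is bounded by a constant c.

def sample_delay_thinning(lam, c, rng=random.random):
    """Thinning with a constant upper bound c >= lam(t) for all t."""
    t_n = 0.0
    while True:
        # Exponential variate with rate c via inversion.
        t_n += -math.log(rng()) / c
        # Accept the candidate with probability lam(t_n) / c.
        if rng() <= lam(t_n) / c:
            return t_n

# Example: a rate that saturates at 1, so c = 1 is a valid upper bound.
lam = lambda t: 1.0 - math.exp(-t)
print(sample_delay_thinning(lam, c=1.0))
```

Note that a constant rate lam(t) = c is accepted on the first draw, in which case the scheme degenerates to ordinary exponential sampling.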
Naïve Simulation Algorithm. The following simulation algorithm generates statistically correct trajectories of the model. It starts by initializing the global clock t_global = 0 and setting R(n) = 0 for all n. The algorithm repeatedly performs simulation steps until a predefined time horizon or some other stopping criterion is reached. Each simulation step is as follows:
1. Generate a random time delay (candidate) t_n for each agent n using γ_n. Identify the agent n′ with the smallest time delay t_{n′}.
2. Pick the next state s′ for n′ according to ψ_{n′}(S(n′), R(n′) + t_{n′}, M(n′)) and set S(n′) = s′. Set R(n′) = 0 and R(n) = R(n) + t_{n′} for all n ≠ n′.
3. Set t_global = t_global + t_{n′} to update the global clock and go to Step 1.
Note that this algorithm is very inefficient, as it requires an expensive iteration over all agents and sampling of time delays in each step.
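One step of the naïve algorithm can be sketched as follows, for a toy model in which every agent fires at a constant rate (so the delays are exponential). The model structure, state names, and helper signatures are hypothetical illustrations.

```python
import math
import random

def naive_step(states, residence, rate, next_state, t_global):
    """Advance the system by one event; returns the new global time."""
    # 1. Sample a candidate delay for every agent; keep the smallest.
    delays = {n: -math.log(random.random()) / rate(n, states, residence)
              for n in states}
    winner = min(delays, key=delays.get)
    dt = delays[winner]
    # 2. The winner changes state and resets its residence time ...
    states[winner] = next_state(winner, states)
    residence[winner] = 0.0
    # ... while all other agents age by dt.
    for n in states:
        if n != winner:
            residence[n] += dt
    # 3. Advance the global clock.
    return t_global + dt

# Toy usage: two agents that flip between 'S' and 'I'.
states = {0: 'I', 1: 'S'}
residence = {0: 0.0, 1: 0.0}
t = naive_step(states, residence,
               rate=lambda n, s, r: 1.0,
               next_state=lambda n, s: 'S' if s[n] == 'I' else 'I',
               t_global=0.0)
print(t)
```

The expensive parts are visible directly in the code: every step samples a delay for every agent and touches every residence time.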
3 Previous Simulation Approaches
Most recent work on non-Markovian dynamics focuses on the mathematical mod-
eling of such processes [29–33]. In particular, research has focused on how spe-
cific distributions (e.g. constant recovery times) alter the properties of epidemic
spreading, such as the epidemic threshold (see [3, 4] for an overview). However, only a few approaches are known for the simulation of non-Markovian dynamics [23, 14]. We briefly review them in the following.
3.1 Non-Markovian Gillespie Algorithm
Boguñá et al. propose a direct generalization of the Gillespie algorithm to non-Markovian systems, nMGA, which is statistically exact but computationally expensive [23]. The algorithm is conceptually similar to our baseline in Section
2.1 but computes the time delay using so-called survival functions. An agent’s
survival function determines the probability that its time delay is larger than a
certain time t. The joint survival function of all agents determines the proba-
bility that all time delays are larger than twhich can be used to sample the
next event time.
The drawback of the nMGA is that it is necessary to iterate over all agents in each step in order to construct their joint survival function. As a fast approximation, the authors suggest using only the current instantaneous rate at t = 0 (i.e., λ_n(0)) and assuming that all rates remain constant until the next event. This is correct in the limit of infinitely many agents, because when the number of agents approaches infinity, the time until the next firing of any agent approaches zero.
3.2 Laplace-Gillespie Algorithm
The LGA, introduced by Masuda and Rocha in [14], aims at reducing the compu-
tational cost of finding the next event time compared to nMGA, while remaining
statistically correct. It assumes that the time delay distributions can be expressed as a weighted average of exponential distributions

γ_n(t) = ∫₀^∞ p_n(λ) λ e^{−λt} dλ ,

where p_n is a PDF over the rate λ ∈ ℝ≥0. This formulation of γ_n, while being very elegant, limits the applicability to cases where the corresponding survival function is completely monotone [14], which restricts the set of possible inter-event time distributions.
The LGA has two advantages. Firstly, we can sample t_n by first sampling λ according to p_n and then, instead of the numerical integration in Eq. (2), computing t_n = −ln(u)/λ, where u is uniformly distributed on (0, 1). Secondly, we can assume that the sampled λ for a particular agent remains constant until one of its neighbors fires. Thus, in each step, it is only necessary to update the rates of the neighbors of the firing agent, and not of all agents.
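The first advantage can be sketched in a few lines. The mixing density below (λ uniform on [0.5, 1.5]) is a hypothetical illustration; the paper only requires that some density p_n over rates exists.

```python
import math
import random

# Sketch of the LGA sampling idea: draw a rate lambda from the mixing
# density p_n, then sample the delay as t_n = -ln(u) / lambda.

def sample_delay_lga(sample_rate, rng=random.random):
    """Sample a delay from a mixture of exponentials."""
    lam = sample_rate()          # lambda ~ p_n
    u = rng()                    # u ~ Uniform(0, 1)
    return -math.log(u) / lam    # exponential delay with rate lambda

sample_rate = lambda: random.uniform(0.5, 1.5)
print(sample_delay_lga(sample_rate))
```

For this toy mixing density, the expected delay is E[1/λ] = ln 3 ≈ 1.10, which a quick Monte-Carlo estimate confirms.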
4 Our Method
Rejection sampling for the efficient simulation of Markovian stochastic processes on complex networks has been proposed recently [24–26, 34], but not for the non-Markovian case, where arbitrary distributions for the inter-event times are allowed.
Here, we propose the RED-Sim algorithm for the generation of statistically correct simulations of non-Markovian network models, as described in Section 2. The main idea of RED-Sim is to rely on rejection sampling to reduce the computational cost, making it unnecessary to update the rates of the neighbors of a firing agent. Independently from that, rejection sampling can also be utilized to sample t_n without numerical integration.
4.1 Rate Over-Approximation
Recall that λ_n(·) expresses how the instantaneous rate of n changes over time, assuming that no neighboring agent changes its state. A key ingredient of our method is now λ̂_n(·), which upper-bounds the instantaneous rate of n, assuming that all neighbors are allowed to freely change their state as often as possible. That is, at all times, λ̂_n(t) is an upper bound of λ_n(t) taking into consideration all possible states of the neighborhood.
Consider again the Markovian SIS example. The curing of an infected node does not depend on an agent's neighborhood anyway; the rate is always c_r, which is a trivial upper bound. A susceptible node becomes infected with rate c_i times the number of infected neighbors. Thus, the instantaneous infection rate of an agent n can be bounded by λ̂_n(t) = k_n c_i, where k_n is the degree of n. Upper bounds may be constant or depend on time. Consider, for example, a recovery time that is uniformly distributed on [1, 2]. In this case, λ_n(·) approaches infinity (cf. Fig. 1b), making a constant upper bound impossible. For multi-agent models, a time-dependent upper bound always exists, since we can compute the maximal instantaneous rate w.r.t. all reachable neighborhood states.
4.2 The RED-Sim Algorithm
For a given multi-agent model specification (G,S,{φn},{ψn}, S0) and given
upper-bounds {b
λn}, we propose a statistically exact simulation algorithm, which
is based on two basic data structures:
Labeled Graph
A graph represents the contact network, and each agent (node) n is annotated with its current state S(n) and T(n), the time point of its last state change.
Event Queue
The event queue stores the list of future events, where an event is a tuple (n, μ̂, t̂_n). Here, n is the agent that fires, t̂_n the prospective absolute time point of firing, and μ̂ ∈ ℝ≥0 is an over-approximation of the true effective rate (at time point t̂_n). The queue is sorted w.r.t. t̂_n and initialized by generating one event per agent.
A global clock, t_global, keeps track of the elapsed time since the simulation started. We use T(n) instead of R(n) to avoid updates for all agents after each event (i.e., R(n) = t_global − T(n)). We perform simulation steps until some termination criterion is fulfilled; each step is as follows:
1. Take the first event (n, μ̂, t̂_n) from the event queue and update t_global = t̂_n.
2. Evaluate the true instantaneous rate μ = φ_n(S(n), t_global − T(n), M(n)) of n at the current system state.
3. With probability 1 − μ/μ̂, reject the firing and go to Step 5.
4. Randomly choose the next state s′ of n according to the distribution ψ_n(S(n), t_global − T(n), M(n)). If S(n) ≠ s′: set S(n) = s′ and T(n) = t_global.
5. Generate a new event for agent n and push it to the event queue.
6. Go to Step 1.
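The loop above can be sketched as follows for a toy two-state model, assuming a constant upper bound on every agent's true rate. The ring network, state names, and rate function are hypothetical illustrations, not the paper's case studies.

```python
import heapq
import math
import random

RATE_BOUND = 2.0  # assumed constant c with phi(...) <= c at all times

def phi(n, state, neighbors):
    """True instantaneous rate: proportional to disagreeing neighbors."""
    frac = sum(state[m] != state[n] for m in neighbors[n]) / len(neighbors[n])
    return frac * RATE_BOUND

def red_sim(state, neighbors, horizon, seed=0):
    random.seed(seed)
    t_change = {n: 0.0 for n in state}  # T(n): time of last state change
    queue = []                          # entries (t_hat, n, mu_hat)
    for n in state:                     # one initial event per agent
        delay = -math.log(random.random()) / RATE_BOUND
        heapq.heappush(queue, (delay, n, RATE_BOUND))
    t_global = 0.0
    while True:
        t_hat, n, mu_hat = heapq.heappop(queue)  # 1. earliest event
        if t_hat > horizon:
            break
        t_global = t_hat
        mu = phi(n, state, neighbors)            # 2. true rate now
        if random.random() < mu / mu_hat:        # 3./4. accept -> fire
            state[n] = 'B' if state[n] == 'A' else 'A'
            t_change[n] = t_global
        # 5. schedule a fresh event for n only; neighbors stay untouched
        delay = -math.log(random.random()) / RATE_BOUND
        heapq.heappush(queue, (t_global + delay, n, RATE_BOUND))
    return t_global

# Usage: 10 agents on a ring with alternating opinions.
N = 10
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
state = {i: 'A' if i % 2 == 0 else 'B' for i in range(N)}
t_end = red_sim(state, neighbors, horizon=5.0)
print(t_end)
```

Note that a firing agent's neighbors keep their queued events; only the rejection step (line 3 of the algorithm) corrects for their outdated rates.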
The correctness of RED-Sim can be shown similarly to [26, 24] (see also the proof sketch in Appendix A). Note that, in all approaches, evaluating an agent's instantaneous rate is linear in the number of its neighbors. In previous approaches, the rate has to be updated for all neighbors of a firing agent; in RED-Sim, only the rate of the firing agent has to be updated. The key asset of RED-Sim is that, due to the over-approximation of the rate function, we do not need to update the neighborhood of the firing agent n, even though the neighbors' respective rates might change as a result of the event. We provide a more detailed analysis of the time complexity of RED-Sim in Appendix B.
Event Generation. To generate new events in Step 5, we sample a random time delay t_n and set t̂_n = t_global + t_n. To sample t_n according to the over-approximated rate λ̂_n(·), we either use the integration approach of Eq. (2) or sample directly from an upper-bounding exponential distribution (cf. Fig. 1d).
To sample t_n from an exponential distribution, we need to be able to find an upper bound that is constant in time, λ̂_n(t) = c for all t. Hence, we simply set μ̂ = c and sample t_n from an exponential distribution with rate c. Otherwise, when a constant upper bound either does not exist or is unfeasible to construct, we use numerical integration over λ̂_n(·) (see Eq. (2)) and set μ̂ = λ̂_n(t_n). Alternatively, when λ̂_n(t) has the required form (cf. Section 3), we can even use an LGA-like approach to sample t_n [23] (and also set μ̂ = λ̂_n(t_n)).
Discussion. We expect RED-Sim to perform poorly only in some special cases, where either the construction of an upper bound is numerically too expensive or where the difference between the upper bound and the actual average rate is very large, which would render the number of rejection events too high.
It is easy to extend RED-Sim to different types of non-Markovian behavior. For instance, we might keep track of the number of firings of an agent and parameterize φ and ψ accordingly to generate the behavior of self-exciting point processes or to cause correlated firings among agents [35, 36].
Note that we can turn the method into a rejection-free approach by generating a new event for n and all of its neighbors in Step 5, while taking the new state of n into consideration (see also Appendix A).
5 Case Studies
We demonstrate the effectiveness of our approach on classical epidemic-type processes and synthetically generated networks following the configuration model with a truncated power-law degree distribution [37]. That is, P(k) ∝ k^{−β} for 3 ≤ k ≤ |N|. We use β ∈ {2, 2.5} (a smaller β corresponds to a larger average degree). The implementation is written in Julia and publicly available3. As a baseline for comparison, we use the rejection-free variant of the algorithm where neighbors are updated after an event (as described at the end of Section 4.2). The evaluation was performed on a 2017 MacBook Pro with a 3.1 GHz Intel Core i5 CPU, and results are shown in Fig. 2.
SIS Model. We consider an SIS model (with ψ and φ as defined above), but infected nodes become less infectious over time. That is, the rate at which an infected agent with residence time t "attacks" its susceptible neighbors is u e^{−ut} for u = 0.4. This shifts the exponential distribution to the left. We upper-bound the infection rate of an agent n with degree k_n by λ̂_n(t) = u k_n, which is constant in time. Thus, we sample t_n using an exponential distribution. The time until an infected agent recovers is, independently from its neighborhood, uniformly distributed on [0, 1] (similar to [38]). Hence, we can sample it directly. We start with 5% infected agents.
Voter Model. The voter model describes the competition between two opinions: agents in state A switch to B and vice versa (i.e. ψ is deterministic). The time until an agent switches follows a Weibull distribution (similar to [23, 39]):

γ_n(t) = c u (t u)^{c−1} e^{−(t u)^c}   and   λ_n(t) = c u (t u)^{c−1} ,   t ≥ 0 ,

where we set c = c_A = 2.0, u = u_A if S(n) = A and c = c_B = 2.05, u = u_B if S(n) = B. We let the fraction of opposing neighbors modulate u, i.e., u_A = B_n/k_n, where B_n denotes the number of neighbors currently in state B and k_n is the degree of agent n (and analogously for u_B). Hence, the instantaneous rate depends on the current residence time and the states of the neighboring agents. To get an upper bound for the rate, we set u_A = u_B = 1 and get λ̂_n(t) = c t^{c−1}. We use numerical integration to sample t_n to show that RED-Sim performs well also in the case of this more costly sampling. We start with 50% of the agents in each state.
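For a Weibull hazard of the form λ(t) = c·u·(t·u)^{c−1}, the inversion approach of Eq. (2) even has a closed form, since the integrated rate is (t·u)^c; the sketch below uses this shortcut (the parameter values are illustrative, mirroring c = 2.0 and a fraction u ∈ (0, 1]).

```python
import math
import random

# Inversion sampling for a Weibull-distributed switching time: solve
# integral_0^{t_n} lambda(t) dt = (t_n * u)^c = x with x ~ Exp(1).

def sample_weibull_delay(c, u, rng=random.random):
    """Inversion method for the hazard lambda(t) = c*u*(t*u)^(c-1)."""
    x = -math.log(rng())        # x ~ Exp(1)
    return x ** (1.0 / c) / u   # solves (t_n * u)^c = x

print(sample_weibull_delay(c=2.0, u=0.5))
```

In the voter model, u changes whenever a neighbor fires, so this closed form only applies to the constant upper bound; sampling against the true rate still goes through the rejection step.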
Discussion. Our results provide strong evidence for the usefulness of rejection sampling for non-Markovian simulation. As expected, we find that the number of interconnections (edges) and the number of agents influence the runtime behavior. Especially for RED-Sim, the number of edges turns out to be much more relevant than the number of agents alone. Our method consistently outperforms the baseline by up to several orders of magnitude. The gain (RED-Sim speed divided by baseline speed) ranges from 10.2 (10³ nodes, voter model, β = 2.5) to 674 (10⁵ nodes, SIS model, β = 2.0).
Fig. 2: Computation time of a single simulation step w.r.t. network size and connectivity for the SIS model (a) and the voter model (b). We measure the CPU time per simulation step by dividing the simulation time by the number of successful (i.e., non-rejection) steps.
We expect the baseline algorithm to be comparable with LGA as both of
them only update the rates of the relevant agents after an event. Moreover,
in the SIS model, sampling the next event times is very cheap. However, a
detailed statistical comparison remains to be performed (both case-studies could
not straightforwardly be simulated with LGA due to its constraints on the time
delays). Note that, when LGA is applicable, its key asset, the fast sampling of time
delays, can also be used in RED-Sim. We also tested an nMGA-like implementation where rates are considered to remain constant until the next event. However, this method was, even though it is only approximate, slower than the baseline.
Note that the SIS model is somewhat unfavorable for RED-Sim, as it generates a large number of rejection events when only a small fraction of agents is infected. Consider, for instance, an agent with many neighbors of which only a few are infected. The over-approximation essentially assumes that all neighbors are infected when sampling the next event time (and, in addition, over-approximates the rate of each individual neighbor), leading to a high rejection probability. Nevertheless, the low computational cost of each rejection event compensates for this.
6 Conclusions
We presented a rejection-based algorithm for the simulation of non-Markovian
agent models on networks. The key advantage of the rejection-based approach
is that in each simulation step it is no longer necessary to update the rates of
neighboring agents. This greatly reduces the time complexity of each step com-
pared to previous approaches and makes our method viable for the simulation
of dynamical processes on real-world networks. As future work, we plan to automate the computation of the over-approximation λ̂ and to investigate correlated time delays [40, 14] and self-exciting point processes [35, 36].
Acknowledgements. We thank Guillaume St-Onge for helpful comments on
non-Markovian dynamics. This research has been partially funded by the German Research Council (DFG) as part of the Collaborative Research Center "Methods and Tools for Understanding and Controlling Privacy".
References
1. Albert-László Barabási. Network science. Cambridge University Press, 2016.
2. John Goutsias and Garrett Jenkinson. Markovian dynamics on complex reaction
networks. Physics Reports, 529(2):199–264, 2013.
3. Romualdo Pastor-Satorras, Claudio Castellano, Piet Van Mieghem, and Alessandro
Vespignani. Epidemic processes in complex networks. Reviews of modern physics,
87(3):925, 2015.
4. István Z. Kiss, J. C. Miller, and P. L. Simon. Mathematics of epidemics on networks: from exact to approximate models. Forthcoming in Springer TAM series, 2016.
5. Mason Porter and James Gleeson. Dynamical systems on networks: A tutorial,
volume 4. Springer, 2016.
6. Helena Sofia Rodrigues. Application of sir epidemiological model: new trends.
arXiv preprint arXiv:1611.02565, 2016.
7. Maksim Kitsak, Lazaros K Gallos, Shlomo Havlin, Fredrik Liljeros, Lev Muchnik,
H. Eugene Stanley, and Hernán A. Makse. Identification of influential spreaders in
complex networks. Nature physics, 6(11):888, 2010.
8. Laijun Zhao, Jiajia Wang, Yucheng Chen, Qin Wang, Jingjing Cheng, and Hongxin
Cui. Sihr rumor spreading model in social networks. Physica A: Statistical Me-
chanics and its Applications, 391(7):2444–2453, 2012.
9. AV Goltsev, FV De Abreu, SN Dorogovtsev, and JFF Mendes. Stochastic cellular
automata model of neural networks. Physical Review E, 81(6):061921, 2010.
10. Jil Meier, X Zhou, Arjan Hillebrand, Prejaas Tewarie, Cornelis J Stam, and Piet
Van Mieghem. The epidemic spreading model and the direction of information
flow in brain networks. NeuroImage, 152:639–646, 2017.
11. Chenquan Gan, Xiaofan Yang, Wanping Liu, Qingyi Zhu, and Xulong Zhang. Prop-
agation of computer virus under human intervention: a dynamical model. Discrete
Dynamics in Nature and Society, 2012, 2012.
12. Robert M May and Nimalan Arinaminpathy. Systemic risk: the dynamics of model
banking systems. Journal of the Royal Society Interface, 7(46):823–838, 2009.
13. Robert Peckham. Contagion: epidemiological models and financial crises. Journal
of Public Health, 36(1):13–17, 2013.
14. Naoki Masuda and Luis EC Rocha. A gillespie algorithm for non-markovian
stochastic processes. SIAM Review, 60(1):95–115, 2018.
15. Alun L Lloyd. Realistic distributions of infectious periods in epidemic models:
changing patterns of persistence and dynamics. Theoretical population biology,
60(1):59–71, 2001.
16. GL Yang. Empirical study of a non-markovian epidemic model. Mathematical
Biosciences, 14(1-2):65–84, 1972.
17. SP Blythe and RM Anderson. Variable infectiousness in hfv transmission models.
Mathematical Medicine and Biology: A Journal of the IMA, 5(3):181–200, 1988.
18. T D. Hollingsworth, R. M Anderson, and C. Fraser. Hiv-1 transmission, by stage
of infection. The Journal of infectious diseases, 198(5):687–693, 2008.
19. Z Feng and HR Thieme. Endemic models for the spread of infectious diseases
with arbitrarily distributed disease stages i: General theory. SIAM J. Appl. Math,
61(3):803–833, 2000.
20. Albert-László Barabási. The origin of bursts and heavy tails in human dynamics.
Nature, 435(7039):207, 2005.
21. Alexei Vázquez, Joao Gama Oliveira, Zoltán Dezső, Kwang-Il Goh, Imre Kondor, and Albert-László Barabási. Modeling bursts and heavy tails in human dynamics.
Physical Review E, 73(3):036127, 2006.
12 G. Großmann et al.
22. William R Softky and Christof Koch. The highly irregular firing of cortical cells is
inconsistent with temporal integration of random epsps. Journal of Neuroscience,
13(1):334–350, 1993.
23. Marian Boguñá, Luis F. Lafuerza, Raúl Toral, and M. Ángeles Serrano. Simulating non-markovian stochastic processes. Physical Review E, 90(4):042108, 2014.
24. Wesley Cota and Silvio C Ferreira. Optimized gillespie algorithms for the sim-
ulation of markovian epidemic processes on large and heterogeneous networks.
Computer Physics Communications, 219:303–312, 2017.
25. Guillaume St-Onge, Jean-Gabriel Young, Laurent Hébert-Dufresne, and Louis J. Dubé. Efficient sampling of spreading processes on complex networks using a
composition and rejection algorithm. arXiv preprint arXiv:1808.05859, 2018.
26. Gerrit Großmann and Verena Wolf. Rejection-based simulation of stochastic
spreading processes on complex networks. In International Workshop on Hybrid
Systems Biology, pages 63–79. Springer, 2019.
27. David Roxbee Cox. Renewal theory. 1962.
28. Raghu Pasupathy. Generating homogeneous poisson processes. Wiley encyclopedia
of operations research and management science, 2010.
29. I. Z. Kiss, G. Röst, and Z. Vizi. Generalization of pairwise models to non-markovian
epidemics on networks. Physical review letters, 115(7):078701, 2015.
30. Lorenzo Pellis, Thomas House, and Matt J Keeling. Exact and approximate moment closures for non-Markovian network epidemics. Journal of Theoretical Biology, 382:160–177, 2015.
31. Hang-Hyun Jo, Juan I Perotti, Kimmo Kaski, and János Kertész. Analytically solvable model of spreading dynamics with non-Poissonian processes. Physical Review X, 4(1):011041, 2014.
32. N Sherborne, JC Miller, KB Blyuss, and IZ Kiss. Mean-field models for non-Markovian epidemics on networks: from edge-based compartmental to pairwise models. arXiv preprint arXiv:1611.04030, 2016.
33. Michele Starnini, James P Gleeson, and Marián Boguñá. Equivalence between non-Markovian and Markovian dynamics in epidemic spreading processes. Physical Review Letters, 118(12):128301, 2017.
34. Christian L Vestergaard and Mathieu Génois. Temporal Gillespie algorithm: Fast simulation of contagion processes on time-varying networks. PLoS Computational Biology, 11(10):e1004579, 2015.
35. Yosihiko Ogata. On Lewis' simulation method for point processes. IEEE Transactions on Information Theory, 27(1):23–31, 1981.
36. Angelos Dassios, Hongbiao Zhao, et al. Exact simulation of Hawkes process with exponentially decaying intensity. Electronic Communications in Probability, 18, 2013.
37. Bailey K Fosdick, Daniel B Larremore, Joel Nishimura, and Johan Ugander.
Configuring random graph models with fixed degree sequences. SIAM Review,
60(2):315–355, 2018.
38. Gergely Röst, Zsolt Vizi, and István Z Kiss. Impact of non-Markovian recovery on network epidemics. In BIOMAT 2015: International Symposium on Mathematical and Computational Biology, pages 40–53. World Scientific, 2016.
39. P Van Mieghem and R Van de Bovenkamp. Non-Markovian infection spread dramatically alters the susceptible-infected-susceptible epidemic threshold in networks. Physical Review Letters, 110(10):108701, 2013.
40. Hang-Hyun Jo, Byoung-Hwa Lee, Takayuki Hiraoka, and Woo-Sung Jung.
Copula-based algorithm for generating bursty time series. arXiv preprint
arXiv:1904.08795, 2019.
Rejection-Based Simulation of Non-Markovian Agents 13
A Correctness
First, consider the rejection-free version of the algorithm:
1. Take the first event $(n, \hat{\mu}, \hat{t}_n)$ from the event queue and update $t_{\mathrm{global}} = \hat{t}_n$.
2. Evaluate the true instantaneous rate $\mu = \phi_n\big(S(n),\, t_{\mathrm{global}} - T(n),\, M(n)\big)$ of $n$ at the current system state.
3. With probability $1 - \frac{\mu}{\hat{\mu}}$, reject the firing and go to Line 5.
4. Randomly choose the next state $s'$ of $n$ according to the distribution $\psi_n\big(S(n),\, t_{\mathrm{global}} - T(n),\, M(n)\big)$. If $S(n) \neq s'$: set $S(n) = s'$ and $T(n) = t_{\mathrm{global}}$.
5. Generate a new event for agent $n$ and push it to the event queue.
6. For each neighbor $n'$ of $n$: remove the event corresponding to $n'$ from the queue and generate a new event (taking the new state of $n$ into account).
7. Go to Line 1.
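The steps above can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: the names (`simulate`, `sample_event_time`, `phi`, `psi`) are placeholders, states are encoded as integers, and — since a binary heap does not support efficient removal of arbitrary elements — the explicit removal in Step 6 is replaced by lazy invalidation via per-agent version counters, which is statistically equivalent.

```python
import heapq
import random

def simulate(agents, neighbors, phi, psi, sample_event_time, t_end):
    """Event-driven loop sketched from Steps 1-7 above.
    phi(n, s, tau, m): true instantaneous rate of agent n.
    psi(n, s, tau, m): samples the next state of agent n.
    sample_event_time(n, t): returns (candidate firing time, rate bound mu_hat)."""
    S = {n: 0 for n in agents}        # current state of each agent
    T = {n: 0.0 for n in agents}      # time of last state change (residence-time anchor)
    version = {n: 0 for n in agents}  # counters for lazy deletion of stale events
    queue = []
    t = 0.0
    for n in agents:                  # initial event for every agent
        t_hat, mu_hat = sample_event_time(n, t)
        heapq.heappush(queue, (t_hat, n, mu_hat, version[n]))
    while queue:
        t_hat, n, mu_hat, v = heapq.heappop(queue)        # Step 1
        if v != version[n]:
            continue                  # stale event, superseded by a neighbor update
        if t_hat > t_end:
            break
        t = t_hat
        m = [S[n2] for n2 in neighbors[n]]                # neighborhood states
        mu = phi(n, S[n], t - T[n], m)                    # Step 2
        if random.random() < mu / mu_hat:                 # Step 3: accept w.p. mu/mu_hat
            s_new = psi(n, S[n], t - T[n], m)             # Step 4
            if s_new != S[n]:
                S[n], T[n] = s_new, t
        version[n] += 1                                   # Step 5: new event for n
        t_next, b = sample_event_time(n, t)
        heapq.heappush(queue, (t_next, n, b, version[n]))
        for n2 in neighbors[n]:                           # Step 6: refresh neighbors
            version[n2] += 1
            t2, b2 = sample_event_time(n2, t)
            heapq.heappush(queue, (t2, n2, b2, version[n2]))
    return S
```

For the rejection-free version discussed here, `mu_hat` always equals the true rate, so the rejection branch in Step 3 never fires.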
Rejection events are not necessary in this version of the algorithm because all events in the queue are generated by the "real" rate and are therefore consistent with the current system state. It is easy to see that the rejection-free version is a direct event-driven implementation of the Naïve Simulation Algorithm which specifies the semantics in Section 2.1. The correspondence between Gillespie approaches and event-driven simulation has been exploited in the literature, for instance in [4]. Thus, it is sufficient to show that the rejection-free version and RED-Sim (Section 4.2) are statistically equivalent.
We do this with the following trick: we modify $\phi_n$ and $\psi_n$ into $\hat{\phi}_n$ and $\hat{\psi}_n$, respectively. When we simulate the rejection-free algorithm with these modified functions, it exhibits exactly the same behavior as RED-Sim. The key ingredient is the so-called shadow process [24, 26]. A shadow process does not change the state of the corresponding agent but still fires with a certain rate; it is conceptually similar to a self-loop in a Markov chain. In the end, we can interpret rejection events not as rejections, but as the statistically necessary applications of the shadow process.
Here, we consider the case that a constant upper bound $c \in \mathbb{R}_{\geq 0}$ exists for all $\phi_n$, that is, $c \geq \phi_n(s, t, m)$ for all reachable $(s, t, m)$. The case of a time-dependent upper bound is analogous. Now, for each $n$, we define the shadow process
$$\tilde{\phi}_n(s, t, m) = c - \phi_n(s, t, m).$$
Consequently, for all $n, s, t, m$:
$$\hat{\phi}_n(s, t, m) = c = \phi_n(s, t, m) + \tilde{\phi}_n(s, t, m).$$
The only thing remaining is to define $\hat{\psi}_n$ such that the shadow process really has no influence on the system state. Therefore, we simply trigger a null event (or self-loop) with a probability proportional to how much of $\hat{\phi}_n$ is induced by the shadow process. Formally,
$$\hat{\psi}_n(s, t, m) = \begin{cases} p(s) = 1 \text{ (self-loop)} & \text{with probability } \tilde{\phi}_n(s, t, m)/\hat{\phi}_n(s, t, m), \\ \psi_n(s, t, m) & \text{otherwise}. \end{cases}$$
Note that, firstly, the model specifications with $(\hat{\phi}_n, \hat{\psi}_n)$ or $(\phi_n, \psi_n)$ are equivalent, because $(\tilde{\phi}, \tilde{\psi})$ has no actual effect on the system state. Secondly, simulating the rejection-free algorithm with $(\hat{\phi}_n, \hat{\psi}_n)$ directly yields RED-Sim. In particular, the rejection events have the same likelihood as the shadow process being chosen in $\hat{\psi}$. Moreover, updating the rates of all neighbors is redundant because all rates remain at $c$: whatever the change in $\phi_n$ after an event, the shadow process balances it out such that the sum actually remains constant.
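The construction above can be made concrete with a toy rate function (an assumption for illustration only; `phi` below is not from the paper):

```python
# Shadow-process construction for a constant bound c, as defined above.
c = 5.0

def phi(s, t, m):
    # toy true rate, capped so that phi <= c always holds
    return min(c, 1.0 + 0.5 * t + 0.1 * sum(m))

def phi_shadow(s, t, m):
    # tilde-phi: the shadow rate fills the gap between phi and c
    return c - phi(s, t, m)

def phi_hat(s, t, m):
    # hat-phi = phi + tilde-phi, which is constant and equal to c
    return phi(s, t, m) + phi_shadow(s, t, m)

def null_event_prob(s, t, m):
    # probability that a fired event is a self-loop,
    # i.e., a rejection event in RED-Sim
    return phi_shadow(s, t, m) / phi_hat(s, t, m)
```

Because `phi_hat` is identically $c$, the event queue never needs to be refreshed when a neighbor changes state; the varying part of the dynamics is absorbed entirely into the self-loop probability.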
For the case that an upper bound $c$ does not exist, we can still consider the limit $c \to \infty$. In particular, we truncate all rate functions at $c$ and find that, as $c$ approaches infinity, the simulated model approaches the real model.
B Time-Complexity
Next, we discuss how the runtime of RED-Sim scales with the size of the underlying contact network (and the number of agents). Assume that a binary heap is used to implement the event queue and that the graph structure is implemented using a hashmap. Each step starts by popping an element from the queue, which has constant time complexity. Next, we compute $\mu$. To this end, we look up all neighbors of $n$ in the graph structure and iterate over them. We also look up their states and residence times. This step has linear time complexity in the number of neighbors. More precisely, lookups in the hashmaps have constant time complexity on average and are linear in the number of agents in the worst case. Computing the rejection probability has constant time complexity. In the case of a real event, we update $S$ and $T$. Again, this has constant time complexity on average. Generating a new event does not depend on the neighborhood of an agent and therefore has constant time complexity. Note that this step can still be somewhat expensive when it requires integration to sample $t_e$, but not in an asymptotic sense. Thus, a step in the simulation is linear in the number of neighbors of the agent under consideration.
In contrast, previous methods require that, after each update, the rate of each neighbor $n'$ is re-computed. The rate of $n'$, however, depends on the whole neighborhood of $n'$. Hence, it is necessary to iterate over all neighbors $n''$ of every single neighbor $n'$ of $n$.
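The difference in per-event work can be illustrated with a small operation-count sketch (illustrative only; it counts neighborhood accesses rather than measuring a real implementation):

```python
# Per-event cost model: RED-Sim evaluates one rate over the neighborhood
# of the firing agent n, while rate-updating schemes recompute the rate
# of every neighbor n2, each of which scans its own neighborhood.

def redsim_cost(neighbors, n):
    # one rate evaluation: iterate once over n's neighbors
    return len(neighbors[n])

def rate_update_cost(neighbors, n):
    # recompute each neighbor's rate: sum of their neighborhood sizes
    return sum(len(neighbors[n2]) for n2 in neighbors[n])
```

On a star graph, for instance, a firing leaf costs one neighborhood access under RED-Sim but forces a full scan of the hub's neighborhood under rate-updating schemes.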