# Rejection-Based Simulation of Stochastic Spreading Processes on Complex Networks

Gerrit Großmann [0000-0002-4933-447X] and Verena Wolf
Saarland University, 66123 Saarbrücken, Germany
mosi.cs.uni-saarland.de
{gerrit.grossmann,verena.wolf}@uni-saarland.de
Abstract. Stochastic processes can model many emerging phenomena
on networks, like the spread of computer viruses, rumors, or infectious
diseases. Understanding the dynamics of such stochastic spreading pro-
cesses is therefore of fundamental interest. In this work we consider the
wide-spread compartment model where each node is in one of several
states (or compartments). Nodes change their state randomly after an
exponentially distributed waiting time and according to a given set of
rules. For networks of realistic size, even the generation of only a sin-
gle stochastic trajectory of a spreading process is computationally very
expensive.
Here, we propose a novel simulation approach, which combines the ad-
vantages of event-based simulation and rejection sampling. Our method
outperforms state-of-the-art methods in terms of absolute runtime and
scales signiﬁcantly better while being statistically equivalent.
Keywords: Spreading Process · SIR · Epidemic Modeling · Monte-Carlo Simulation · Gillespie Algorithm
1 Introduction
Computational modeling of spreading phenomena is an active research ﬁeld
within network science with many applications ranging from disease prevention
to social network analysis [1–6]. The most widely used approach is a continuous-
time model where each node of a given graph occupies one of several states
(e.g. infected and susceptible) at each point in time. A set of rules determines
the probabilities and random times at which nodes change their state depending
on the node’s direct neighborhood (as determined by the graph). The application
of a rule is always stochastic and the waiting time before a rule “ﬁres” (i.e. is
applied) is governed by an exponential distribution.
The underlying stochastic dynamics are given by a continuous-time Markov
chain (CTMC) [6–9]. Each possible assignment from nodes to local node states
constitutes an individual state of the CTMC (here referred to as CTMC state or
network state to avoid confusion with the local state of a single node). Hence, the
corresponding CTMC state space grows exponentially in the number of nodes,
which renders its numerical solution infeasible.
As a consequence, mean-field-type approximations and sampling approaches have emerged as the cornerstones for their analysis. Mean-field equations originate from statistical physics and typically provide a reasonably good approximation of the underlying dynamics [10–14]. Generally speaking, they propose a set of ordinary differential equations that model the average behavior of each component (e.g., for each node, or for all nodes of a certain degree). However, mean-field approaches only give information about the average behavior of the system, for example, about the expected number of infected nodes for each degree. Naturally, this restricts the scope of their application. In particular, they cannot answer questions about individual nodes or specific realizations of the process.
For example, one might be interested in finding the specific source of an epidemic [15, 16] or in knowing where an intervention (e.g., vaccination) would be most effective [17–20].
Consequently, stochastic simulations remain an essential tool in the computational analysis of complex network dynamics. Different simulation approaches for complex networks have been suggested, which can all be seen as adaptations of the Gillespie algorithm (GA) [6]. Recently, a more efficient extension of the GA, called the Optimized GA (OGA), has been proposed [21]. It uses a rejection step to reduce the number of network updates.
Here, we propose an event-driven simulation method which also utilizes rejection sampling. Our method is based on an event queue which stores infection and curing events. Unlike traditional methods, we ensure that it is not necessary to iterate over the entire neighborhood of a node after it has changed its state. To this end, we allow the creation of events which are inconsistent with the current CTMC state. These might lead to rejections when they reach the beginning of the queue. We introduce our method for the well-known SIS (Susceptible-Infected-Susceptible) model and show that it can easily be generalized to other epidemic-type processes. Code will be made available.1
We formalize the semantics of spreading processes in Section 2 and explain how the CTMC is constructed. Previous simulation approaches, such as the GA and the OGA, are presented in Section 3. In Section 4 we present our rejection sampling algorithm and discuss to what extent our method generalizes to different network models and spreading models. We demonstrate the effectiveness of our approach on three different case studies in Section 5.
Let G = (N, E) be an undirected, unweighted, finite graph without self-loops. We assume the edges are tuples of nodes and that (n1, n2) ∈ E always implies (n2, n1) ∈ E. At each time point t ∈ ℝ≥0 each node occupies one of m (local) states (also called labels or compartments), denoted by S = {s1, s2, . . . , sm}. Consequently, the (global) network state is fully specified by a labeling L : N → S. We use L = {L | L : N → S} to denote the set of all possible network states. As each
1 github.com/gerritgr/Rejection-Based-Epidemic-Simulation
of the |N| nodes occupies one of m states, we know that |L| = m^|N|. Nodes change their state by the application of a stochastic rule. A node's state and its neighborhood determine which rules are applicable to a node and the probability density of the random delay until a rule fires. If several rules can fire, the one with the shortest delay is executed.
We allow two types of rules: node-based (independent, spontaneous) rules and edge-based (contact, spreading) rules. The application of a node-based rule $A \xrightarrow{\mu} B$ results in a transition of a node from state A ∈ S to state B ∈ S (A ≠ B) with rate µ ∈ ℝ>0. That is, the waiting time until the rule fires is governed by the exponential distribution with rate µ. An edge-based rule has the form $A + C \xrightarrow{\lambda} B + C$, where A, B, C ∈ S, A ≠ B, λ ∈ ℝ>0. Its application changes an edge (more precisely, the state of one of the edge's nodes). It can be applied to each edge (n, n′) ∈ E where L(n) = A, L(n′) = C. Again, the node in state A changes after a delay that is exponentially distributed with rate λ. Note that, if a node in state A has more than one direct C-neighbor, it is "attacked" independently by each neighbor. Due to the properties of the exponential distribution, the rate at which a node changes its state according to a certain contact rule is proportional to the number of neighbors which induce the change.
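The race semantics above (the applicable rule with the shortest exponentially distributed delay fires) can be sketched in a few lines of Python; the function name and the example setup are our own illustration, not part of the paper's method:

```python
import random

def time_to_next_event(rates, rng):
    """Race of independent exponential clocks: the applicable rule with
    the shortest delay fires.  The minimum of independent Exp(r_i)
    delays is itself Exp(sum(r_i))-distributed."""
    delays = [rng.expovariate(r) for r in rates]
    winner = min(range(len(rates)), key=lambda i: delays[i])
    return delays[winner], winner

# Example: an infected node that recovers at rate mu while three
# edge-based rules (one per attacking neighbor) fire at rate lam each.
mu, lam = 1.0, 0.6
delay, rule = time_to_next_event([mu, lam, lam, lam], random.Random(42))
```

Because the minimum of the delays is again exponential with the summed rate, a simulator may equivalently draw a single delay with rate µ + 3λ instead of four separate ones.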
SIS Model In the sequel, we use the well-known Susceptible-Infected-Susceptible (SIS) model as a running example. Consider S = {I, S} and the rules:
$$S + I \xrightarrow{\lambda} I + I, \qquad I \xrightarrow{\mu} S.$$
In the SIS model, infected nodes propagate their infection to neighboring
susceptible nodes using an edge-based rule. Thus, only susceptible nodes with
at least one infected neighbor can become infected. Infection of a node occurs
at a rate that increases proportionally with the number of infected neighbors.
Infected nodes can, independently of their neighborhood, recover (i.e., become susceptible again) using a node-based rule.
3 Previous Approaches
In this section, we briefly revisit techniques that have been previously suggested for the simulation of SIS-type processes. For a more comprehensive description, we refer the reader to [6, 21].
3.1 Standard Gillespie Algorithm
The Standard Gillespie Algorithm (here simply referred to as GA) is also known as Gillespie's direct method and is a popular method for the simulation of coupled chemical reactions. Its adaptation to complex networks uses two key data structures, which are constantly updated: a list of all infected nodes (denoted by L_I) and a list of all S–I edges (denoted by L_SI).
In each simulation step, we first draw an exponentially distributed delay for the time until the next rule fires. That is, instead of sampling a waiting time for each rule and each position where the rule can be applied, we directly sample the time until the network state changes. For this, we compute an aggregated rate c = µ|L_I| + λ|L_SI|. Then we randomly decide whether an infection or a curing event happens. The probability of the latter is proportional to its rate, i.e. µ|L_I|/c, and thus, the probability of an infection is λ|L_SI|/c. After that, we pick an infected node (in case of a curing) or an S–I edge (in case of an infection) uniformly at random. We update the two lists accordingly. The expensive part in each step is keeping L_SI updated. For this, we iterate over the whole neighborhood of the node and, for each susceptible neighbor, we remove (after a curing) or add (after an infection) the corresponding edge to the list. Thus, we need one add/remove operation on the list for each susceptible neighbor.
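One GA step as described above might be sketched as follows. This is a simplified illustration with our own data layout (dicts and plain lists); `list.remove` is linear-time here, whereas efficient implementations use constant-time swap-and-pop:

```python
import random

def ga_step(graph, state, L_I, L_SI, mu, lam, t, rng):
    """One step of the standard Gillespie algorithm for SIS.
    `graph` maps node -> neighbor list, `state` maps node -> 'S'/'I',
    L_I is the list of infected nodes, L_SI the list of (S, I) edges."""
    c = mu * len(L_I) + lam * len(L_SI)       # aggregated rate
    t += rng.expovariate(c)                   # time until next event
    if rng.random() < mu * len(L_I) / c:      # curing event
        node = rng.choice(L_I)
        state[node] = 'S'
        L_I.remove(node)
        for nb in graph[node]:                # update L_SI
            if state[nb] == 'I':
                L_SI.append((node, nb))       # new S-I edge appears
            else:
                L_SI.remove((nb, node))       # old S-I edge disappears
    else:                                     # infection event
        s, i = rng.choice(L_SI)
        state[s] = 'I'
        L_I.append(s)
        for nb in graph[s]:                   # update L_SI
            if state[nb] == 'I':
                L_SI.remove((s, nb))
            else:
                L_SI.append((nb, s))
    return t
```

The per-step cost is dominated by the loop over the changed node's neighborhood, exactly as noted in the text.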
Note that there are different possibilities to sample the node that becomes infected next. Instead of keeping an updated list of all S–I edges, one can also use a list of all susceptible nodes. In that case, we cannot sample uniformly but must select a susceptible node for infection with probability proportional to its number of infected neighbors.
Likewise, we can randomly pick the starting point of the next infection by only considering L_I. To generate an infection event, we first sample an infected node from this list and then (uniformly) sample a susceptible neighbor, which becomes infected. Since infected nodes with many susceptible neighbors have a higher probability of being the starting point of an infection (i.e., they have more S–I edges associated with them), we sample from L_I such that the probability of picking an infected node is proportional to its number of susceptible neighbors.
All three approaches are statistically equivalent but the last one motivates
the Optimized Gillespie Algorithm (OGA) [21].
3.2 Optimized Gillespie Algorithm
As discussed earlier, sampling from L_I with a probability proportional to the number of susceptible neighbors is expensive, because after each event the number of susceptible neighbors may change for many nodes, and updating this information for all elements of L_I is costly.
In [21], Cota and Ferreira suggest sampling nodes from L_I with a probability that is proportional to the degree k of a node, which is an upper bound for the maximal possible number of susceptible neighbors. Then they uniformly choose a neighbor of that node and update the global clock. If this neighbor is already infected, they reject the infection event, which yields a rejection probability of (k − k_S)/k, where k_S is the number of susceptible neighbors. Note that the rejection probability exactly corrects for the over-approximation of using k instead of k_S. This is illustrated in Fig. 1.
Compared to the GA, updating the list of infected nodes becomes cheaper, because only the node which actually changes its state is added to (or removed from) L_I. The sampling probabilities of the neighbors remain the same because their degrees remain the same. On the other hand, sampling a node is more expensive compared to the GA, where we sample edges uniformly.
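The OGA infection attempt described above might be sketched as follows (a hedged illustration with our own function name and data layout; the weighted node selection is done here with `random.choices`, while [21] uses rejection sampling on the maximal degree):

```python
import random

def oga_infection_attempt(graph, state, L_I, rng):
    """One OGA-style infection attempt: pick an infected node with
    probability proportional to its degree k (an upper bound on its
    number k_S of susceptible neighbors), pick a neighbor uniformly,
    and reject if that neighbor is already infected.  The rejection
    probability (k - k_S)/k exactly corrects the over-approximation."""
    degrees = [len(graph[n]) for n in L_I]
    source = rng.choices(L_I, weights=degrees)[0]
    target = rng.choice(graph[source])
    if state[target] == 'I':
        return None                  # rejected (shadow) attempt
    state[target] = 'I'
    L_I.append(target)
    return target
```

Note that once most neighbors of the infected nodes are themselves infected, almost every call returns `None`, which is exactly the regime where the OGA degrades.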
Fig. 1: Example of an infection event. We sample from L_I proportionally to k_S. Alternatively, we can weight according to the number of neighbors k, which is constant and over-approximates k_S. To correct for the over-approximation, we reject a sample with probability (k − k_S)/k.

Naturally, the speedup in each step comes at the cost of a potentially enormous number of rejection events. Even a single infected node with many infected but few susceptible neighbors will continuously lead to rejected events. This is especially problematic when many nodes are infected and have no or very few susceptible neighbors, which makes rejections many orders of magnitude more likely than actual events. Hence, the authors of [21] propose the algorithm primarily for simulations close to the epidemic threshold, where the number of infected nodes is typically very small.
Note that, to sample a node, Cota and Ferreira also propose rejection sampling based on the maximal degree. However, St-Onge et al. point out that, in the case of heterogeneous networks, a binary tree can be used to speed up this step significantly. Specifically, this allows them to derive an upper bound for the rejection probability [22]. This, however, does not overcome the fundamental limitation of the OGA approach for models with a large fraction of infected nodes, that is, where infected nodes are mostly surrounded by other infected nodes, causing most infection attempts to be rejected.
3.3 Event-Based Simulation
In the event-driven approach, the primary data structure is an event queue, in
which events are sorted and executed according to the time points at which they will occur. This eliminates the costly process of randomly selecting a node in each step (retrieving the first element from the queue has constant time complexity). Events are either the curing of a specific node or an infection via a specific
edge. Moreover, it is easy to adapt the event-driven approach to rules with
non-Markovian waiting times or to a network where each node has individual
recovery and infection rates [6]. Event-based simulation of an SIS process is
done as follows: For the initialization, we draw for each infected node an exponentially distributed time until recovery with rate µ and add the respective curing event to the queue. Likewise, for each susceptible node with at least one infected neighbor, we draw an exponentially distributed time until infection with rate λ · (number of infected neighbors). We add the resulting events to the queue.
During the simulation, we always take the earliest event from the queue,
change the network accordingly and update the global clock. If the current event
is the infection of a node, the infection rates of its susceptible neighbors increase. Thus, it is necessary to iterate over all neighbors of the corresponding node, draw renewed waiting times for their infection events, and update the event queue accordingly. Although efficient strategies have been suggested [6], these queue updates remain costly.
Since each step requires an iteration over all neighbors of the node under consideration, the worst-case runtime depends on the maximal degree of the network. Moreover, for each neighbor, it might be necessary to reorder the event queue. The time complexity of reordering the queue depends (typically logarithmically) on the number of elements in the queue and adds significant additional costs to each step. Note that trajectories generated using the event-driven approach are statistically equivalent to those generated with the GA because all delays are exponentially distributed and thus have the memoryless property. A variant of this algorithm can also be found in [23].
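The event-queue machinery itself is standard: with a binary heap, events are kept sorted by time and the earliest one is always popped first. A minimal sketch using Python's `heapq` (event layout is our own choice):

```python
import heapq
import random

# Events are (time, kind, payload) tuples kept in a binary heap, so
# the earliest event is always popped first without scanning all nodes.
rng = random.Random(1)
mu, lam = 1.0, 0.5
queue = []
heapq.heappush(queue, (rng.expovariate(mu), 'curing', 3))
heapq.heappush(queue, (rng.expovariate(lam), 'infection', (2, 3)))
heapq.heappush(queue, (rng.expovariate(mu), 'curing', 7))

t, kind, payload = heapq.heappop(queue)   # earliest event first
```

Pushing and popping cost O(log n) in the number n of queued events; the expensive part of the traditional event-driven approach is not the heap itself but the renewed waiting times for all neighbors after each infection.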
4 Our Method
In this section, we propose a method for the simulation of SIS-type processes. The key idea is to combine an event-driven approach with rejection sampling while keeping the number of rejections to a minimum. We later generalize the algorithm to different epidemic processes as well as to weighted and temporal networks. First, we introduce the main data structures:
Event queue. It stores all future infection and curing events generated so far.
Each event is associated with a time point and with the node(s) aﬀected by the
event. Curing events contain a reference to the recovering node and infection
events to a pair of connected nodes, an infected (source) node and a susceptible
(target) node.
Graph. In this graph structure, each node is associated with its list of neighbors, its current state, its degree, and, if infected, a prospective recovery time.
We also keep track of the time in a global clock. We assume that an initial
network, a time horizon (or another stopping criterion), and the rate parameters
(µ, λ) are given as input. In Alg. 1-4 we provide pseudocode for the detailed steps
of the method.
Initialization Initially, we iterate over the network and sample a recovery time
(exponentially distributed with rate µ) for each infected node (cf. Line 2, Alg.
1). We push the recovery event to the queue and annotate each infected node
with its recovery time (cf. Line 5, Alg. 2). Next, we iterate over the network a
second time and generate an infection event for each infected node (cf. Line 5,
Alg. 1). The procedure for the generation of infection events is explained later.
In Alg. 1 we need two iterations because the recovery time of each infected node has to be available when the infection events are generated. These events identify the earliest infection attempt of each node.
Iteration The main procedure of the simulation is illustrated in Alg. 4. We schedule events until the global clock reaches the specified time horizon (cf. Line 9). In each step, we take the earliest event from the queue (Line 7) and set the global clock to the event time (Line 8). Then we "apply" the event (Lines 11-20). In case of a recovery event, we simply change the state of the corresponding node from I to S and are done (Line 12). Since we generate exactly one recovery event for each infected node, the queue always contains one recovery event per infected node, and each recovery event is always consistent with the current network state.
If the event is an infection event, we apply the event if possible (Lines 14-18) and reject it otherwise (Lines 19-20). We update the global clock either way. Each infection event is associated with a source node and a target node (i.e., the node under attack). The infection event is applicable if the current state of the target node is S (which might no longer be the case) and the current state of the source node is I (which will always be the case). After a successful infection event, we generate a new recovery event for the target node (Line 16) and two new infection events, one for the source node (Line 17) and one for the target node, which is now also infected (Line 18). If the infection attempt was rejected, we only generate a new infection event for the source node (Line 20). Thus, we always have exactly one infection event in the queue for each infected node.
Generating Infection Events The generation of infection events and the distinction between unsuccessful and potentially successful infection attempts is an essential part of the algorithm.
In Alg. 3, for each infected node we only generate the earliest infection attempt and add it to the queue. Therefore, we first sample an exponentially distributed waiting time with rate λk, where k is the degree of the node, and compute the time point of the infection attempt (Line 5). If this time point lies after the node's recovery event, we stop and no infection event is added to the queue (Lines 6-7). Note that in the graph structure, each node is annotated with its recovery time (node.recovery_time) to have it immediately available.
Next, we uniformly select a random neighbor which will be attacked (Line 8). If the neighbor is currently susceptible, we add the event to the event queue and the current iteration step ends (Lines 9-12).
If the neighbor is currently infected, we check the recovery time of the neighbor (Line 9). If the infection attempt happens before the neighbor's recovery time point, we know that the attempt is surely unsuccessful (infected nodes cannot become infected). Thus, we perform an early reject (Lines 10-12 are not executed). That is, instead of pushing the surely unsuccessful infection event to the queue, we directly generate another infection attempt, i.e., we re-enter the while-loop in Lines 4-12. We repeat this procedure until the recovery time of the current node is reached or an infection event can be added to the queue (i.e., no early rejection happens).
Fig. 3 provides a minimal example of a potential execution of our method.
Algorithm 1 Graph Initialization
1: procedure InitGraph(G, µ, λ, Q)
2:   for each node in G do
3:     if node.state = I then
4:       GenerateRecoveryEvent(node, µ, 0, Q)
5:   for each node in G do              ▷ recovery times are available now
6:     if node.state = I then
7:       GenerateInfectionEvent(node, λ, 0, Q)

Algorithm 2 Generation of a Recovery Event
1: procedure GenerateRecoveryEvent(node, µ, t_global, Q)
2:   t_event = t_global + draw_exp(µ)
3:   e = Event(src_node = node, t = t_event, type = recovery)
4:   node.recovery_time = t_event
5:   Q.push(e)

Algorithm 3 Generation of an Infection Event
1: procedure GenerateInfectionEvent(node, λ, t_global, Q)
2:   t_event = t_global
3:   rate = λ · node.degree
4:   while true do
5:     t_event += draw_exp(rate)
6:     if node.recovery_time < t_event then       ▷ no event is generated
7:       break
8:     attacked_node = draw_uniform(node.neighbor_list)
9:     if attacked_node.state = S
         or attacked_node.recovery_time < t_event then   ▷ check for early reject
10:      e = Event(src_node = node, target = attacked_node,
             time = t_event, type = infection)
11:      Q.push(e)                                 ▷ attempt was successful
12:      break
Algorithm 4 SIS Simulation
Input: Graph (G) with initial states, time horizon (h), recovery rate (µ), infection rate (λ)
Output: Graph at time h                ▷ or any other measure of interest
1: Q = emptyQueue()                    ▷ sorted w.r.t. time
2: InitGraph(G, µ, λ, Q)
3: t_global = 0
4: while true do
5:   if Q.is_empty() then
6:     break
7:   e = Q.pop()
8:   t_global = e.time
9:   if t_global > h then
10:    break
11:  if e.type = recovery then
12:    G[e.src_node].state = S
13:  else
14:    if G[e.target_node].state = S then
15:      G[e.target_node].state = I
16:      GenerateRecoveryEvent(e.target_node, µ, t_global, Q)
17:      GenerateInfectionEvent(e.src_node, λ, t_global, Q)
18:      GenerateInfectionEvent(e.target_node, λ, t_global, Q)
19:    else                            ▷ late reject
20:      GenerateInfectionEvent(e.src_node, λ, t_global, Q)

Fig. 2: Pseudocode for our event-based rejection sampling method.
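A direct Python translation of Alg. 1-4 might look as follows. This is a sketch under our own representation choices (a dict of node records and `heapq` as the priority queue), not the authors' reference implementation:

```python
import heapq
import random

def generate_recovery_event(n, nodes, mu, t_global, Q, rng):
    # Alg. 2: sample the recovery time and remember it at the node.
    t_event = t_global + rng.expovariate(mu)
    nodes[n]['recovery_time'] = t_event
    heapq.heappush(Q, (t_event, 'recovery', n, None))

def generate_infection_event(n, nodes, lam, t_global, Q, rng):
    # Alg. 3: generate only the earliest not-surely-futile attempt.
    neighbors = nodes[n]['neighbors']
    if not neighbors:
        return
    t_event, rate = t_global, lam * len(neighbors)
    while True:
        t_event += rng.expovariate(rate)
        if nodes[n]['recovery_time'] < t_event:
            return                       # no event is generated
        attacked = rng.choice(neighbors)
        if (nodes[attacked]['state'] == 'S'
                or nodes[attacked]['recovery_time'] < t_event):
            heapq.heappush(Q, (t_event, 'infection', n, attacked))
            return                       # otherwise: early reject, retry

def simulate_sis(nodes, horizon, mu, lam, rng):
    # Alg. 1: recovery events first, so recovery times are available.
    Q = []
    for n, rec in nodes.items():
        if rec['state'] == 'I':
            generate_recovery_event(n, nodes, mu, 0.0, Q, rng)
    for n, rec in nodes.items():
        if rec['state'] == 'I':
            generate_infection_event(n, nodes, lam, 0.0, Q, rng)
    # Alg. 4: main loop.
    t_global = 0.0
    while Q:
        t_event, kind, src, target = heapq.heappop(Q)
        if t_event > horizon:
            break
        t_global = t_event
        if kind == 'recovery':
            nodes[src]['state'] = 'S'
        elif nodes[target]['state'] == 'S':  # successful infection
            nodes[target]['state'] = 'I'
            generate_recovery_event(target, nodes, mu, t_global, Q, rng)
            generate_infection_event(src, nodes, lam, t_global, Q, rng)
            generate_infection_event(target, nodes, lam, t_global, Q, rng)
        else:                                # late reject
            generate_infection_event(src, nodes, lam, t_global, Q, rng)
    return t_global
```

Note the ordering on a successful infection: the target's recovery event is generated before its infection event, so its recovery time is available for the early-reject check, mirroring the two-pass initialization.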
Fig. 3: First four steps of our method for a toy example (I: red, S: blue): (a) Initialization: generate the recovery events (left queue) and an infection event for each infected node (right queue). The first infection attempt from node 1 is an early reject. (b) The infection from 1 to 4 was successful; we generate a recovery event for 4 and two new infection events for 1 and 4. The infection event of node 4 is directly rejected because it happens after its recovery. (c) (Late) reject of the infection attempt from 3 to 4, as 4 is already infected. A new infection event starting from 3 is inserted into the queue. (d) Node 4 recovers; the remaining queue is shown.
4.1 Analysis
Our approach combines the advantages of event-based simulation with the advantages of rejection sampling. In contrast to the Optimized Gillespie Algorithm, finding the node for the next event can be done in constant time. More importantly, the number of rejection events is dramatically reduced because the queue only contains events that are realistically possible. Therefore, it is crucial that each node "knows" its own curing time and that the curing events are always generated before the infection events. In contrast to traditional event-based simulation, we do not have to iterate over all neighbors of a newly infected node, followed by a potentially costly reordering of the queue.
Runtime For the runtime analysis, we assume that a binary heap is used to implement the event queue and that the graph structure is implemented using a hashmap. Each simulation step starts by taking an element from the queue (cf. Line 7, Alg. 4), which can be done in constant time. Applying the change of state to a particular node has constant time complexity on average and linear time complexity (in the number of nodes) in the worst case, as it is based on lookups in the hashmap.
Now consider the generation of infection events. Generating a waiting time (Line 3, Alg. 3) can be done in constant time because we know the degree (and therefore the rate) of each node. Likewise, sampling a random neighbor (Line 8) takes constant time (assuming the number of neighbors fits in an integer). Checking for an early reject (Line 9) can also be done in constant time because each neighbor is sampled with the same (uniform) probability and is annotated with its recovery time. Even though each early rejection can be computed in constant time, the number of early rejections can of course increase with the mean (and maximal) degree of the network. Inserting the newly generated infection event(s) into the event queue (Line 11) has a worst-case time complexity of O(log n), where n is the number of elements in the heap. In our case, n is bounded by twice the number of infected nodes. However, we can expect constant insertion costs on average [24, 25].
Correctness Here, we argue that our method generates correct sample trajectories of the underlying Markov model. To see this, we assume some hypothetical changes to our method that do not change the sampled trajectories but make it easier to reason about correctness. First, assume that we abandon early rejects and insert all events into the event queue regardless of their possibility of success. Second, assume that we change the generation of infection events such that we do not only generate the earliest attempt but all infection attempts until the recovery of the node. Note that we do not do this in practice, as it would lead to more (late instead of early) rejections.
Similar to [21], we find that our algorithm is equivalent to the direct event-based implementation of the following spreading process:
$$I \xrightarrow{\mu} S, \qquad S + I \xrightarrow{\lambda} I + I, \qquad I + I \xrightarrow{\lambda} I + I.$$
In [21], $I + I \xrightarrow{\lambda} I + I$ is called a shadow process, because the application of this rule does not change the network state. Hence, rejections of infections in the SIS model can be interpreted as applications of the shadow process. Note that the rate at which this rule is applied to the network is the rate of the rejection events. Hence, the rate at which an infected node attacks its neighbors (no matter whether in state I or S) is exactly λk, where k is the degree of the node. Our method incorporates the shadow process into the simulation in the following way: for each S–I edge and each I–I edge, an infection event is generated with rate λ and inserted into the queue. The decision whether this event will be a real or a "shadow" infection is postponed until the event is actually applied. This is possible because both rules have the same rate; in particular, the joint rate at which an infected k-degree node attacks its neighbors will always be kλ.
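A small seeded Monte Carlo experiment illustrates this argument (our own illustration, not part of the paper): a degree-k infected node attacks a uniformly chosen neighbor at total rate λk; attacks on infected neighbors are shadow events, so the accepted fraction should approach k_S/k.

```python
import random

# A node with k = 5 neighbors, of which k_S = 2 are susceptible.
# Each attack picks a neighbor uniformly; attacks on infected
# neighbors are shadow events (rejections), so the fraction of
# accepted (real) infections should approach k_S / k = 0.4.
rng = random.Random(0)
neighbor_states = ['S', 'S', 'I', 'I', 'I']
accepted = sum(rng.choice(neighbor_states) == 'S' for _ in range(100000))
```

The effective rate of real infections is thus λk · (k_S/k) = λk_S, exactly the SIS infection rate.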
4.2 Generalizations
So far we have only considered SIS processes on static and unweighted networks. This section briefly discusses how to generalize our simulation method to SIS-type processes on temporal and weighted networks, and to more general epidemic models.
General Epidemic Models A key ingredient of our algorithm is the early rejection of infection events. This is possible because we can compute a node's curing time already when the node gets infected. In particular, we exploit that there is only one way to leave state I, namely by the application of a node-based rule. This gives us a guarantee about the remaining time in state I. Other epidemic models have a similar structure. For instance, consider the Susceptible-Infected-Recovered (SIR) model, where infected nodes first become recovered (immune) before becoming susceptible again:
$$S + I \xrightarrow{\lambda} I + I, \qquad I \xrightarrow{\mu_1} R, \qquad R \xrightarrow{\mu_2} S.$$
We also consider the competing pathogens model [26], where two infectious diseases, denoted by I and J, compete over the susceptible nodes:
$$S + I \xrightarrow{\lambda_1} I + I, \qquad S + J \xrightarrow{\lambda_2} J + J, \qquad I \xrightarrow{\mu_1} S, \qquad J \xrightarrow{\mu_2} S.$$
In both cases, we can exploit that certain states (I, J, R) can only be left via node-based rules and thus their residence time is independent of the neighborhood. This makes it simple to annotate each node in any of these states with its exact residence time and to perform early rejections accordingly.
If we do not have such guarantees, early rejection cannot be applied, for instance in the (fictional) system
$$S + I \xrightarrow{\lambda_1} I + I, \qquad I + I \xrightarrow{\lambda_2} I + S.$$
It is likely that our method will still perform better than the traditional event-based approach; however, the number of rejection events might significantly decrease its performance.
Weighted Networks In weighted networks, each edge e ∈ E is associated with a positive real-valued weight w(e) ∈ ℝ>0. An edge-based rule of the form $A + C \xrightarrow{\lambda} B + C$ fires on a particular edge e with rate w(e) · λ. Applying our method to weighted networks is simple: let n be a node. During the generation of infection events, instead of sampling the waiting time with rate λk, we now use $\lambda \sum_{n' \in N(n)} w(n, n')$ as the rate, where N(n) is the set of neighbors of n. Moreover, instead of choosing the neighbor that will be attacked with uniform probability, we choose it with probability proportional to the corresponding edge weight. This can be done by rejection sampling or with O(log k) time complexity, where k is the degree of n.
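The O(log k) variant of the weighted neighbor choice can be sketched with a binary search on cumulative weights (function name is our own; for brevity the cumulative array is rebuilt per call, which is O(k) — a real implementation would cache and incrementally update it to obtain the O(log k) bound):

```python
import bisect
import itertools
import random

def sample_weighted_neighbor(neighbors, weights, rng):
    """Pick a neighbor with probability proportional to its edge
    weight via binary search on the cumulative weight array."""
    cum = list(itertools.accumulate(weights))   # e.g. [1.0, 4.0]
    u = rng.random() * cum[-1]                  # uniform in [0, total)
    return neighbors[bisect.bisect_right(cum, u)]
```

Rejection sampling against the maximal edge weight is the alternative mentioned in the text; it avoids maintaining the cumulative array at the cost of a weight-dependent number of retries.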
Temporal Networks Temporal (time-varying, adaptive, dynamic) networks are an intriguing generalization of static networks which generally complicates the analysis of spreading behavior [27–30]. Even generalizing the Gillespie algorithm to Markovian epidemic-type processes on temporal networks is far from trivial [27].
In order to keep our model as general as possible, we assume here that an external process governs the temporal changes in the network. This process runs concurrently with our simulation and might or might not depend on the current network state. It changes the current graph by adding or removing edges, one edge at a time. For instance, after processing one event, the external process could add or remove an arbitrary number of edges at specific time points until the time of the next event is reached. It is simple to integrate this into our simulation.
When the external process removes an edge, we can simply update the
neighbor lists and the degrees in our graph. For each infection event that reaches
the top of the queue, we first check whether the corresponding edge is still present.
If not, we reject the event. This is sound because edge removals only decrease
infection rates, which we can correct for by using rejections. When an edge is added
to the graph and at least one of its endpoints is infected, the infection rate
increases. Thus, it is not sufficient to only update the graph; we also generate an
infection event which accounts for the new edge. In order to minimize the number
of generated events, we change the algorithm such that each infected node is
annotated with the time point of its subsequent infection attempt. Consider
now an infected node. When it obtains a new edge, we generate an exponentially
distributed waiting time with rate λ, modeling the infection attempt through this
speciﬁc link. We only generate a new event if this time point lies before the time
point of the subsequent infection attempt of the node. In that case, we also
remove the old event associated with this node from the queue.
Since most changes in the graph do not require changes to the event queue
(and those that do cause at most two operations), we expect our
method to handle temporal networks with a reasonably high number of graph
updates very eﬃciently. In the case that an extremely large number of edges in
the graph change at once, we can always decide to iterate over the whole network
and newly initialize the event queue.
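The edge-update handling described above can be sketched as follows. Class and method names are hypothetical, and for brevity stale events are discarded lazily when they are popped, rather than being removed from the queue immediately as in the full algorithm:

```python
import heapq
import random

class TemporalQueue:
    """Minimal sketch: infection events survive edge removals and are
    rejected lazily; edge additions may schedule an earlier attempt."""

    def __init__(self):
        self.edges = set()       # current edges, stored as frozensets {u, v}
        self.events = []         # heap of (time, source, target)
        self.next_attempt = {}   # infected node -> time of its next attempt

    def remove_edge(self, u, v):
        # removals only lower rates, so stale events can simply be
        # rejected when they reach the top of the queue
        self.edges.discard(frozenset((u, v)))

    def add_edge(self, u, v, t_now, lmbda, infected):
        self.edges.add(frozenset((u, v)))
        for src, dst in ((u, v), (v, u)):
            if src in infected:
                t = t_now + random.expovariate(lmbda)
                # only schedule if the new link fires before the node's
                # currently planned infection attempt
                if t < self.next_attempt.get(src, float("inf")):
                    self.next_attempt[src] = t
                    heapq.heappush(self.events, (t, src, dst))

    def pop_event(self):
        # return the next event whose edge still exists, rejecting the rest
        while self.events:
            t, src, dst = heapq.heappop(self.events)
            if frozenset((src, dst)) in self.edges:
                return t, src, dst
        return None
```

The lazy-deletion variant trades a few extra rejections for simpler bookkeeping; the exact trade-off depends on how often edges change relative to infection events.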
5 Case Studies
We demonstrate the effectiveness of our approach on three classical epidemic-type
processes. We compare the performance of our method with the Standard
Gillespie Algorithm (GA) and the Optimized Gillespie Algorithm (OGA) for
different network sizes. We use synthetically generated networks following the
configuration model [31] with a truncated power-law degree distribution, that is,
P(k) ∝ k^{−γ} for 3 ≤ k ≤ 1000. We compare the performance on degree distri-
butions with γ ∈ {2, 3}. This yields a mean degree of around 30 (γ = 2) and 10
(γ = 3). We use models from the literature but adapt rate parameters freely to
generate interesting dynamics. Nevertheless, we find that our observations gen-
eralize to a wide range of parameters that yield networks with realistic degree
distributions.
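The network generation step can be reproduced with a short stub-matching sketch. This is a simplified configuration model that may produce self-loops and multi-edges; the function names are ours:

```python
import random

def truncated_powerlaw_degrees(n, gamma, kmin=3, kmax=1000):
    """Sample n node degrees with P(k) proportional to k^-gamma on [kmin, kmax]."""
    ks = list(range(kmin, kmax + 1))
    weights = [k ** -gamma for k in ks]
    degrees = random.choices(ks, weights=weights, k=n)
    if sum(degrees) % 2:      # stub matching needs an even number of stubs
        degrees[0] += 1
    return degrees

def configuration_model(degrees):
    """Pair up half-edge 'stubs' uniformly at random."""
    stubs = [node for node, k in enumerate(degrees) for _ in range(k)]
    random.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))
```

Library implementations (e.g. the configuration-model generators discussed in [31]) additionally handle self-loops, multi-edges, and graph labeling more carefully.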
We also report how the number of nodes in a network is related to the CPU
time of a single step. This is more informative than the total runtime of a
simulation because the number of steps obviously increases with the number of
nodes when the time horizon is fixed. The CPU time per step is defined as the
total runtime of the simulation divided by the number of steps, only counting
the steps that actually change the network state (i.e., excluding rejections). We
do not count rejection events, because that would give an unfair advantage to
the rejection-based approach. The evaluation was performed on a 2017 MacBook
Pro with a 3.1 GHz Intel Core i5 CPU and 16 GB of RAM.
Note that an implementation of the OGA was only available for the SIS
model; the comparison is therefore limited to that model. Due to
the high number of rejection steps in all models, we expect a similar difference
in performance between our approach and the OGA for the other models as well.
5.1 SIS Model
For the SIS model we used rate parameters of (µ, λ) = (1.0, 0.6) and an initial
distribution of 95% susceptible nodes and 5% infected nodes. CPU times are
reported in Fig. 4a, where “reject” refers to our rejection-based algorithm (as
described in Section 4). For a sample trajectory, we plot the fraction of nodes in
each state w.r.t. time (Fig. 4b). For a comparison with the OGA we used the
official Fortran implementation from [21] and estimated the average CPU time per
step based on the absolute runtime. Note that the comparison is not perfectly
fair due to implementation differences and additional input/output of the OGA
code. It is not surprising that the OGA performs comparatively poorly, as the method
is suited for simulations close to the epidemic threshold. Moreover, our maximal
degree is very large, which negatively affects the performance of the OGA.
We also conducted experiments on models closer to the epidemic threshold
(i.e., where the number of infection events is very small, e.g. λ = 0.1) and with a
smaller maximal degree (e.g. kmax = 100). The relative speed-up over the GA in-
creased slightly compared to the results in Fig. 4a. The performance of the OGA
improved significantly, leading to a performance similar to that of our method
(results not shown).
Fig. 4: SIS model. (a): Average CPU time for a single step (i.e., change of network
state) for different networks. The GA method ran out of memory for γ = 2.0,
|N| = 10^7. (b): Sample dynamics for a network with γ = 3.0 and 10^5 nodes.
Fig. 5: SIR model. (a): Average CPU time for a single step (i.e., change of network
state) for different networks. (b): Sample dynamics for a network with γ = 2.0
and 10^5 nodes.
5.2 SIR Model
Next, we considered the SIR model, which has more complex dynamics. We used
rate parameters of (µ1, µ2, λ) = (1.1, 0.3, 0.6) and an initial distribution of 96%
susceptible nodes and 2% infected and recovered nodes, respectively. As
above, CPU times and example dynamics are reported in Fig. 5. We see that the
runtime behavior is almost the same as in the SIS model.
5.3 Competing Pathogens Model
Finally, we considered the Competing Pathogens model. We used rate param-
eters of (λ1, λ2, µ1, µ2) = (0.6, 0.63, 0.6, 0.7) and an initial distribution of 96%
susceptible nodes and 2% infected nodes for each of the two pathogens (denoted
by I and J). CPU times and network dynamics are reported in Fig. 6. The model
Fig. 6: Competing pathogens model. (a): Average CPU time for a single step (i.e.,
change of network state) for different networks. (b): Mean fractions and standard
deviations for a network with γ = 2.0 and 10^4 nodes.
is interesting because we see that in the beginning J dominates I due to its higher
infection rate. However, nodes infected with pathogen J recover faster than those
infected with I. This gives pathogen I the advantage that infected nodes have
more time to attack their neighbors. In the limit, I takes over and J dies out.
For this model, stochastic noise has a significant influence on the macroscopic
dynamics. Therefore, we also report the standard deviation of the fractions
(cf. Fig. 6). Note that the fraction of susceptible nodes is almost deterministic.
Performance-wise, our rejection method performs slightly worse than in the pre-
vious models (w.r.t. the baseline). We believe that this is due to the even larger
number of infection events and rejections.
6 Conclusions
In this paper, we presented a novel rejection algorithm for the simulation of
epidemic-type processes. We combined the advantages of rejection sampling and
event-driven simulation. In particular, we exploited the fact that nodes can only
leave certain states via node-based rules, which made it possible to precompute
their residence times, which in turn allowed us to perform early rejection of
certain events.
Our numerical results show that our method outperforms previous approaches,
especially for networks which are not close to the epidemic threshold. In particular,
the speed-up increases as the maximal degree of the network increases.
As future work, we plan to extend the method to compartment models with
arbitrary rules, including an automated decision for which states early rejections
can be computed and are useful.
Acknowledgments We thank Michael Backenköhler for his comments on the
manuscript.
References
1. Albert-László Barabási. Network science. Cambridge university press, 2016.
2. Alain Barrat, Marc Barthelemy, and Alessandro Vespignani. Dynamical processes
on complex networks. Cambridge university press, 2008.
3. Mason Porter and James Gleeson. Dynamical systems on networks: A tutorial,
volume 4. Springer, 2016.
4. John Goutsias and Garrett Jenkinson. Markovian dynamics on complex reaction
networks. Physics Reports, 529(2):199–264, 2013.
5. Romualdo Pastor-Satorras, Claudio Castellano, Piet Van Mieghem, and Alessandro
Vespignani. Epidemic processes in complex networks. Reviews of modern physics,
87(3):925, 2015.
6. Istvan Z Kiss, Joel C Miller, and Péter L Simon. Mathematics of epidemics on
networks: from exact to approximate models. Forthcoming in Springer TAM series,
2016.
7. Péter L Simon, Michael Taylor, and Istvan Z Kiss. Exact epidemic models on
graphs using graph-automorphism driven lumping. Journal of mathematical biology,
62(4):479–508, 2011.
8. Piet Van Mieghem, Jasmina Omic, and Robert Kooij. Virus spread in networks.
IEEE/ACM Transactions on Networking (TON), 17(1):1–14, 2009.
9. Faryad Darabi Sahneh, Caterina Scoglio, and Piet Van Mieghem. Generalized
epidemic mean-field model for spreading processes over multilayer complex networks.
IEEE/ACM Transactions on Networking (TON), 21(5):1609–1620, 2013.
10. James P Gleeson. High-accuracy approximation of binary-state dynamics on net-
works. Physical Review Letters, 107(6):068701, 2011.
11. James P Gleeson, Sergey Melnik, Jonathan A Ward, Mason A Porter, and
Peter J Mucha. Accuracy of mean-field theory for dynamics on real-world networks.
Physical Review E, 85(2):026106, 2012.
12. James P Gleeson. Binary-state dynamics on complex networks: Pair approximation
and beyond. Physical Review X, 3(2):021004, 2013.
13. K Devriendt and P Van Mieghem. Unified mean-field framework for
susceptible-infected-susceptible epidemics on networks, based on graph partitioning
and the isoperimetric inequality. Physical Review E, 96(5):052314, 2017.
14. Luca Bortolussi, Jane Hillston, Diego Latella, and Mieke Massink. Continuous
approximation of collective system behaviour: A tutorial. Performance Evaluation,
70(5):317–349, 2013.
15. B Aditya Prakash, Jilles Vreeken, and Christos Faloutsos. Spotting culprits in
epidemics: How many and which ones? In Data Mining (ICDM), 2012 IEEE 12th
International Conference on, pages 11–20. IEEE, 2012.
16. Hongyuan Zha, and Le Song. Back to the past: Source identification in diffusion
networks from partially observed cascades. In Artificial Intelligence and Statistics,
2015.
17. Christian M Schneider, Tamara Mihaljev, Shlomo Havlin, and Hans J Herrmann.
Suppressing epidemics with a limited amount of immunization units. Physical
Review E, 84(6):061911, 2011.
18. Reuven Cohen, Shlomo Havlin, and Daniel Ben-Avraham. Efficient immunization
strategies for computer networks and populations. Physical review letters,
91(24):247901, 2003.
19. Camila Buono and Lidia A Braunstein. Immunization strategy for epidemic spread-
ing on multilayer networks. EPL (Europhysics Letters), 109(2):26001, 2015.
20. Qingchu Wu, Xinchu Fu, Zhen Jin, and Michael Small. Influence of dynamic
immunization on epidemic spreading in networks. Physica A: Statistical Mechanics
and its Applications, 419:566–574, 2015.
21. Wesley Cota and Silvio C Ferreira. Optimized Gillespie algorithms for the
simulation of Markovian epidemic processes on large and heterogeneous networks.
Computer Physics Communications, 219:303–312, 2017.
22. Guillaume St-Onge, Jean-Gabriel Young, Laurent Hébert-Dufresne, and Louis J
Dubé. Efficient sampling of spreading processes on complex networks using a
composition and rejection algorithm. arXiv preprint arXiv:1808.05859, 2018.
23. Faryad Darabi Sahneh, Aram Vajdi, Heman Shakeri, Futing Fan, and Caterina
Scoglio. GEMFsim: a stochastic simulator for the generalized epidemic modeling
framework. Journal of computational science, 22:36–44, 2017.
24. Ryan Hayward and Colin McDiarmid. Average case analysis of heap building by
repeated insertion. J. Algorithms, 12(1):126–153, 1991.
25. Thomas Porter and Istvan Simon. Random insertion into a priority queue struc-
ture. IEEE Transactions on Software Engineering, (3):292–298, 1975.
26. Naoki Masuda and Norio Konno. Multi-state epidemic processes on complex net-
works. Journal of Theoretical Biology, 243(1):64–75, 2006.
27. Christian L Vestergaard and Mathieu Génois. Temporal Gillespie algorithm: Fast
simulation of contagion processes on time-varying networks. PLoS computational
biology, 11(10):e1004579, 2015.
28. Naoki Masuda and Petter Holme. Temporal Network Epidemiology. Springer, 2017.
29. Petter Holme and Jari Saramäki. Temporal networks. Physics reports, 519(3):97–
125, 2012.
30. Petter Holme. Modern temporal network theory: a colloquium. The European
Physical Journal B, 88(9):234, 2015.
31. Bailey K Fosdick, Daniel B Larremore, Joel Nishimura, and Johan Ugander.
Configuring random graph models with fixed degree sequences. SIAM Review,
60(2):315–355, 2018.