Mixing time for a noisy SIS model on graphs
Wasiur R. KhudaBukhsh and Yangrui Xiang
School of Mathematical Sciences, University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom.
*Corresponding author(s). E-mail(s):
Wasiur.KhudaBukhsh@nottingham.ac.uk;
Yangrui.Xiang@nottingham.ac.uk;
Abstract
We study the mixing time of the noisy SIS (Susceptible-Infected-Susceptible) model on graphs. The noisy SIS model is a variant of the standard SIS model, which allows individuals to become infected not just due to contacts with infected individuals but also due to external noise. We show that, under strong external noise, the mixing time is of order $O(n \log n)$. Additionally, we demonstrate that the mixing time on random graphs, namely Erdős–Rényi graphs, regular multigraphs, and Galton–Watson trees, is also of order $O(n \log n)$ with high probability.
Keywords: Mixing times, Markov chains, Epidemics, Random graphs
MSC Classification: 60J10, 60J20, 92D25
1 Introduction
The noisy SIS (Susceptible-Infected-Susceptible) model is an irreducible discrete-time Markov chain Norris (1997). It is a variant of the standard stochastic SIS model Andersson and Britton (2000). In the standard SIS model, individuals are segregated into two compartments, namely, $S$ (the susceptible individuals) and $I$ (the infected individuals). Susceptible individuals get infected when contacted by infected individuals. Once infected, they are able to infect others before recovering and becoming susceptible again (i.e., moving back to the $S$ compartment). While this model is used to model several infectious diseases by mathematical epidemiologists, it is rather simplistic and ignores external sources of infection, which play a crucial role in the spread of a disease Sharker et al. (2024). The noisy SIS model allows for external sources of infection, so that susceptible individuals can get infected not only through an infectious contact with an infected individual within the closed population but also from an external source of infection. The other crucial drawback of the standard SIS model is that it ignores individual heterogeneity and the underlying contact network structure. We, therefore, consider epidemic dynamics on a variety of graphs (see Durrett (2007); KhudaBukhsh et al. (2019); Kiss et al. (2017); KhudaBukhsh et al. (2022); Decreusefond et al. (2012); van der Hofstad (2017); Cui et al. (2022); Bandyopadhyay and Sajadi (2015); Ball (2021) and references therein for a glimpse of the recent literature on this topic), and study the mixing properties of the model using $n$, the number of vertices (population size), as a scaling parameter.
1.1 Mixing properties
It is well known that when the size of the state space of an irreducible Markov chain is a finite and fixed integer, the chain converges to its equilibrium measure exponentially fast (Levin and Peres, 2017, Chapter 4). We, however, adopt the approach introduced by Aldous and Diaconis (1986). This approach revolves around a parameterized family of Markov chains, typically parameterized by the size of the state space as a scaling factor. The key question is to determine how long it takes for the chain to reach a close approximation of its stationary state. The mixing time, roughly speaking, is the amount of time required for the chain to approach stationarity in total variation distance, such that this distance is less than a given value (typically chosen to be 1/4). We will employ the path coupling method, introduced by Wilson (2004), to derive the upper and the lower bounds on the mixing time.
Related to the concept of mixing times is the cutoff phenomenon, an abrupt change
in the total variation distance as the Markov chain converges to its equilibrium. For
random walks on dynamical Erdős–Rényi random graphs, the cutoff phenomenon was proven in Sousi and Thomas (2020). The universality of the cutoff for quasi-random graphs, particularly those with an added random matching, is characterized in Hermon et al. (2022).
For mean-field models on complete graphs, the noisy voter model shares similarities with the noisy SIS model. In Cox et al. (2016), it is shown that the noisy voter model, a continuous-time Markov chain, has a mixing time of $\frac{1}{2}\log n$ and exhibits a cutoff phenomenon. Furthermore, in Aljovin et al. (2024), the convergence profile of the noisy voter model in Kantorovich distance is characterized. This model does not exhibit a cutoff under natural noise intensity conditions; however, the time required for the model to forget the initial state of the particles, a process known as thermalization, does exhibit a cutoff profile. The most recent work on the noisy SIS model on complete graphs in He et al. (2024) proves that, in the continuous-time setting, the noisy SIS model has a sharp mixing time of order $O(\log n)$ and also exhibits a cutoff phenomenon.
1.2 Mathematical contributions
We show that the upper and the lower bounds on the mixing time with respect to the scaling parameter $n$ are both of order $O(n \log n)$. That is, even if the Markov chain starts from the "worst possible" initial condition, the total variation distance between the distribution of the chain at time $t = O(n \log n)$ and its stationary state is small. This argument stays valid for all general graphs under strong noise intensity. Furthermore, we establish the mixing properties for random graphs, namely Erdős–Rényi graphs, regular multigraphs, and Galton–Watson trees. More specifically, given an instantiation of a random graph, we run the Markov chain with the noisy SIS dynamics (*) described in Section 2.1, and show that the mixing time is of order $O(n \log n)$ with high probability.

The rest of the paper is structured as follows: We describe the noisy SIS model on graphs in Section 2. The upper and the lower bounds on the mixing time are established in Sections 3 and 4, respectively. Finally, we study the mixing properties of the noisy SIS model on random graphs in Section 5.
2 The noisy SIS model on graphs
2.1 Definition and dynamics of the model
Let $n \in \mathbb{N}$ be the scaling parameter. Consider the sequence of graphs $G_n := (V_n, E_n)$ with $V_n$ denoting the set of vertices and $E_n \subseteq V_n \times V_n$, the set of edges. We fix $|V_n| = n$. We define the maximal degree $\Delta^{(n)}_{\max}$ of the graph $G_n$ to be
$$\Delta^{(n)}_{\max} := \max_{x \in V_n} \deg(x),$$
where $\deg(x)$ denotes the degree of the vertex $x$. We define $Q := \{0, 1\}$ and the state space $\Omega_n := Q^{V_n}$. Each vertex $x \in V_n$ represents an individual at site $x$ within a population of size $n$. From the perspective of an interacting particle system (IPS) (see Kipnis and Landim (1999); Liggett (1985)), we say that a configuration $\sigma \in \Omega_n$ has a particle at site $x \in V_n$ if $\sigma_x = 1$. If $\sigma_x = 0$, we say that the site $x$ is empty.
In our context, we interpret a configuration $\sigma \in \Omega_n$ as having an infected individual at site $x \in V_n$ if $\sigma_x = 1$. If $\sigma_x = 0$, the individual at site $x$ is considered susceptible. We study a discrete-time Markov chain $(\sigma^n(t);\ t = 0, 1, 2, \dots)$ on $\Omega_n$ Norris (1997); Levin and Peres (2017). We define $n_{I,\sigma}(x)$ as the number of infected neighbours of $x$, i.e.,
$$n_{I,\sigma}(x) := |\{y \in V_n : x \sim y \text{ and } \sigma_y = 1\}|,$$
where $|A|$ denotes the cardinality of the set $A$.
The dynamics of the Markov chain are described as follows: Suppose an initial configuration is given. At each time step, we choose a vertex $x$ to update uniformly at random, i.e., each vertex is chosen with probability $\frac{1}{n}$. We consider three parameters $a, \lambda, \kappa > 0$. Here, $\lambda$ represents the infection probability for each infected individual, meaning each infected individual can infect its susceptible neighbours with probability $\lambda$. The parameter $\kappa$ is the recovery probability, i.e., each infected individual recovers with this uniform probability. Finally, the parameter $a$ denotes the external infection probability. Each susceptible individual can become infected independently by an external source of infection with probability $a$. We define the rates
$$p(\sigma, x) := a + \lambda \cdot n_{I,\sigma}(x),$$
and
$$p^{(n)}_{*}(a, \lambda) := a + \lambda \cdot \Delta^{(n)}_{\max}.$$
We choose the parameter $\lambda$ such that $p^{(n)}_{*}(a, \lambda) < 1$. Then, the transition probabilities of the Markov chain $(\sigma^n(t);\ t = 0, 1, 2, \dots)$ are described as follows:
$$p(\sigma_x = 0 \to \sigma_x = 1) = \frac{1}{n} \cdot p(\sigma, x) \quad \text{and} \quad p(\sigma_x = 0 \to \sigma_x = 0) = \frac{1}{n} \cdot \big(1 - p(\sigma, x)\big),$$
$$p(\sigma_x = 1 \to \sigma_x = 0) = \frac{1}{n} \cdot \kappa \quad \text{and} \quad p(\sigma_x = 1 \to \sigma_x = 1) = \frac{1}{n} \cdot (1 - \kappa). \tag{*}$$
Note that only one vertex can change its local state in each time step. The transition probabilities in Equation (*) describe the transition probabilities associated with the chosen vertex $x \in V_n$.
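To make the dynamics (*) concrete, here is a minimal Python sketch (our own illustration, not from the paper) of one update step on a graph given as an adjacency list. The names `step` and `adj` are ours; the sketch assumes the parameters satisfy $p^{(n)}_{*}(a, \lambda) < 1$, so that every quantity below is a valid probability.

```python
import random

def step(sigma, adj, a, lam, kappa, rng=random):
    """One update of the noisy SIS chain (*).

    Pick a vertex x uniformly at random (probability 1/n each). If x is
    susceptible, infect it with probability p(sigma, x) = a + lam * n_{I,sigma}(x);
    if x is infected, let it recover with probability kappa.
    """
    n = len(sigma)
    x = rng.randrange(n)
    if sigma[x] == 0:
        n_inf = sum(sigma[y] for y in adj[x])   # n_{I,sigma}(x)
        if rng.random() < a + lam * n_inf:      # external noise + contacts
            sigma[x] = 1
    else:
        if rng.random() < kappa:                # recovery
            sigma[x] = 0
    return sigma
```

Running `for _ in range(T): step(sigma, adj, a, lam, kappa)` simulates the chain for $T$ steps; the $\frac{1}{n}$ factors in (*) are exactly the probability of selecting the vertex $x$ at a given step.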
We call the discrete-time Markov chain $(\sigma^n(t);\ t \ge 0)$ under this dynamics on the graph $G_n$ the noisy susceptible-infected-susceptible (SIS) model on the graph. The superscript $n$ on the Markov chain $\sigma^n(t)$ emphasizes that the law of the Markov chain $\sigma^n(t)$ depends on the parameter $n$. When there is no confusion, we omit the superscript. We denote the law of the Markov chain at time $t$ as $(\mu^n_t;\ t \ge 0)$. If we specify the initial condition, we denote the law at time $t$ of the Markov chain starting at state $\eta_0$ as $(\mu^n_{\eta_0,t};\ t \ge 0)$. Due to the external infection probability $a > 0$, this Markov chain is irreducible, as the absorbing state, namely the disease-free state, is eliminated. For each $n$, since it is a Markov chain on a finite state space, there exists a unique invariant measure $\mu^n_{a,\lambda,\kappa}$ with full support on $\Omega_n$.

This model is useful for infections that do not confer permanent immunity, such as the flu. Our model does not capture the nuances of the modern theory of infectious disease epidemiology. However, the spread of an infection in a middle-sized city near a metropolis could be realistically modelled by the Markov chain $\sigma^n(t)$. The metropolis serves as a constant source of infection, facilitated by professional and commercial activities, with an impact comparable to local contacts with infected individuals.
2.2 Mixing time
Given two probability measures $\mu$ and $\nu$ on $\Omega_n$, the total variation distance between $\mu$ and $\nu$ is defined as
$$d_{\mathrm{TV}}(\mu, \nu) := \frac{1}{2} \sum_{\sigma \in \Omega_n} |\mu(\sigma) - \nu(\sigma)|.$$
Also, for $t \ge 0$ and given $\eta_0 \in \Omega_n$, we define the distance to the stationary state at time $t$ starting from $\eta_0$ as
$$d^{(n)}(t; \eta_0) := d_{\mathrm{TV}}(\mu^n_{\eta_0,t}, \mu^n_{a,\lambda,\kappa}).$$
For $\eta_0 \in \Omega_n$, we define
$$d^{(n)}(t) := \sup_{\eta_0 \in \Omega_n} d_{\mathrm{TV}}(\mu^n_{\eta_0,t}, \mu^n_{a,\lambda,\kappa}),$$
and finally,
$$t^{(n)}_{\mathrm{mix}}(\epsilon) := \inf\{t \ge 0 : d^{(n)}(t) \le \epsilon\}.$$
The time $t^{(n)}_{\mathrm{mix}}(\epsilon)$ is called the $\epsilon$-mixing time of the $n$-th Markov chain $(\sigma^n(t);\ t \ge 0)$, which can be interpreted as follows: even if it starts from the "worst" initial condition, after $t^{(n)}_{\mathrm{mix}}(\epsilon)$ units of time the distance to stationarity will nevertheless be smaller than $\epsilon$. For $\sigma_0, \eta_0 \in \Omega_n$, we furthermore define
$$\bar{d}^{(n)}(t) := \sup_{\sigma_0, \eta_0 \in \Omega_n} d_{\mathrm{TV}}(\mu^n_{\sigma_0,t}, \mu^n_{\eta_0,t}).$$
This represents the total variation distance between two laws at time $t$, starting from the two "farthest" initial states $\sigma_0$ and $\eta_0$. It follows directly from the definitions that
$$d^{(n)}(t) \le \bar{d}^{(n)}(t) \le 2 d^{(n)}(t).$$
The inequality is standard. See Levin and Peres (2017) for a proof. To simplify notation, the superscript will be omitted whenever the dependence on the parameter $n$ is clear from the context.
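For very small $n$, all of the quantities above can be computed exactly by enumerating $\Omega_n$. The following sketch (our own illustration; the helper names `transition_matrix` and `worst_case_tv` are hypothetical) builds the $2^n \times 2^n$ transition matrix of the dynamics (*), extracts the invariant measure as the leading left eigenvector, and evaluates $d^{(n)}(t)$; it is only feasible for roughly $n \le 12$.

```python
import itertools
import numpy as np

def transition_matrix(adj, a, lam, kappa):
    """Exact 2^n x 2^n transition matrix of the noisy SIS chain (*)."""
    n = len(adj)
    states = list(itertools.product([0, 1], repeat=n))
    index = {s: i for i, s in enumerate(states)}
    P = np.zeros((2 ** n, 2 ** n))
    for s in states:
        i = index[s]
        for x in range(n):
            n_inf = sum(s[y] for y in adj[x])
            flip = (a + lam * n_inf) if s[x] == 0 else kappa  # flip prob. at x
            t = list(s)
            t[x] ^= 1
            P[i, index[tuple(t)]] += flip / n    # vertex x chosen and flipped
            P[i, i] += (1.0 - flip) / n          # vertex x chosen, state kept
    return P, states

def worst_case_tv(P, t):
    """d(t): worst-case total variation distance to the invariant measure."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi /= pi.sum()                               # invariant measure
    Pt = np.linalg.matrix_power(P, t)
    return 0.5 * np.abs(Pt - pi).sum(axis=1).max()
```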
3 Upper bound on the mixing time
In this section, we prove an upper bound on the mixing time of the noisy SIS model
on graphs. We consider the strong external infection regime:
$$0 < \kappa < \frac{1}{4(n-1)} \quad \text{and} \quad a > 1 - \kappa > \frac{1}{2}. \tag{**}$$
In order to prove an upper bound, we need to introduce some notation and concepts. First, we define a natural partial order on the state space $\Omega_n$. For two configurations $\sigma, \eta \in \Omega_n$, we say $\sigma \le \eta$ if and only if
$$\sigma_x \le \eta_x \quad \forall x \in V_n.$$
We consider the Hamming distance between two configurations $\sigma, \eta \in \Omega_n$, defined as follows:
$$\rho(\sigma, \eta) := \sum_{x \in V_n} \mathbb{1}\{\sigma_x \neq \eta_x\}.$$
The nonnegative integer $\rho(\sigma, \eta)$ is the number of vertices where $\sigma$ and $\eta$ disagree.
We use coupling techniques to find upper bounds on the mixing times. In a pairwise coupling, two copies of the Markov chain, denoted $(\sigma^n(t);\ t \ge 0)$ and $(\eta^n(t);\ t \ge 0)$, are run simultaneously. Each of the two sequences marginally adheres to the Markov chain's transition rules specified in (*). However, the joint distribution of $(\sigma^n(t+1), \eta^n(t+1))$ conditioned on $(\sigma^n(t), \eta^n(t))$ is designed in such a way that the two copies of the Markov chain coalesce in finite time, after which the two chains move together. We consider $(\sigma^n(t);\ t \ge 0)$ and $(\eta^n(t);\ t \ge 0)$ to be coupled Markov chains with initial conditions $\sigma_0$ and $\eta_0$. We denote by $\mathbb{P}$ the probability measure on the coupled state space where both $(\sigma^n(t);\ t \ge 0)$ and $(\eta^n(t);\ t \ge 0)$ are defined, and by $\mathbb{P}_{\sigma_0, \eta_0}$, the probability measure under the initial conditions $\sigma_0$ and $\eta_0$. The corresponding expectations are denoted by $\mathbb{E}$ and $\mathbb{E}_{\sigma_0, \eta_0}$, respectively. We define the coalescence time of the coupling $\tau^{(n)}_{\mathrm{couple}}$ to be
$$\tau^{(n)}_{\mathrm{couple}} := \min\{t \ge 0 : \sigma_s = \eta_s \text{ for all } s \ge t\}.$$
By (Levin and Peres, 2017, Theorem 5.4 and Corollary 5.5), the coalescence time provides an upper bound on the mixing time via the following inequality:
$$d^{(n)}(t) \le \sup_{\sigma_0, \eta_0 \in \Omega_n} \mathbb{P}_{\sigma_0, \eta_0}\big(\tau^{(n)}_{\mathrm{couple}} > t\big). \tag{3.1}$$
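In simulation, one convenient coupling of this kind is a "grand coupling": both copies update the same uniformly chosen vertex and reuse the same uniform random variable for the accept/reject decision, so that once the two configurations agree they move together forever. The sketch below is our own illustration of this idea and of estimating $\tau^{(n)}_{\mathrm{couple}}$ by simulation; it is one valid coupling, not necessarily the one used in the proofs.

```python
import random

def coupled_step(sigma, eta, adj, a, lam, kappa, rng=random):
    """One step of a grand coupling: same vertex and same uniform for both copies."""
    x = rng.randrange(len(sigma))
    u = rng.random()                              # shared randomness
    for chain in (sigma, eta):                    # marginally, each follows (*)
        if chain[x] == 0:
            n_inf = sum(chain[y] for y in adj[x])
            if u < a + lam * n_inf:
                chain[x] = 1
        elif u < kappa:
            chain[x] = 0

def coalescence_time(sigma, eta, adj, a, lam, kappa, max_steps=10**7):
    """First time the two coupled copies agree, capped at max_steps."""
    t = 0
    while sigma != eta and t < max_steps:
        coupled_step(sigma, eta, adj, a, lam, kappa)
        t += 1
    return t
```

Averaging `coalescence_time` over many runs started from two different configurations gives a Monte Carlo proxy for the right-hand side of (3.1).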
To begin with, fix a vertex $x$. Then, we consider two initial configurations $\sigma(0) = \sigma_0$ and $\eta(0) = \eta_0$ with Hamming distance $\rho(\sigma(0), \eta(0)) = 1$, such that they disagree at the vertex $x$. Without loss of generality, we assume that $\sigma_0 \le \eta_0$, meaning $(\sigma_x(0), \eta_x(0)) = (0, 1)$. We denote the Markov chains starting from $\sigma_0$ and $\eta_0$ by $(\sigma^{\sigma_0}(t);\ t \ge 0)$ and $(\eta^{\eta_0}(t);\ t \ge 0)$. We first prove a contraction property.
Lemma 3.1 (Contraction property for the strong external infection regime). For the coupled Markov chains $(\sigma^{\sigma_0}(t);\ t \ge 0)$ and $(\eta^{\eta_0}(t);\ t \ge 0)$, after one step of the update, we have that
$$\mathbb{E}_{\sigma_0, \eta_0}\big[\rho(\sigma(1), \eta(1))\big] \le 1 - \frac{\gamma}{n} < 1, \tag{3.2}$$
with $\gamma := (1 - \kappa)a + (1 - a)\kappa - 2(n-1)\kappa(1 - \kappa)$ and $\gamma > 0$.
Proof of Lemma 3.1. Note that
$$\begin{aligned}
\mathbb{E}_{\sigma_0, \eta_0}\big[\rho(\sigma(1), \eta(1))\big]
&= 1 \cdot \mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(1), \eta(1)) = 1\big) + 2 \cdot \mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(1), \eta(1)) = 2\big) \\
&= 1 \cdot \Big(1 - \mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(1), \eta(1)) = 0\big) - \mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(1), \eta(1)) = 2\big)\Big) \\
&\quad + 2 \cdot \mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(1), \eta(1)) = 2\big) \\
&= 1 - \mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(1), \eta(1)) = 0\big) + \mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(1), \eta(1)) = 2\big).
\end{aligned}$$
The only way to reduce the Hamming distance after one update is to update at the vertex where $\sigma_0$ and $\eta_0$ disagree, setting them to the same state. There are two possibilities:
$$\mathbb{P}_{\sigma_0, \eta_0}\big(\sigma_x(1) = 1, \eta_x(1) = 1\big) = \frac{1}{n} \cdot (1 - \kappa) \cdot p(\sigma_0, x),$$
and
$$\mathbb{P}_{\sigma_0, \eta_0}\big(\sigma_x(1) = 0, \eta_x(1) = 0\big) = \frac{1}{n} \cdot \big(1 - p(\sigma_0, x)\big) \cdot \kappa.$$
It follows that
$$\mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(1), \eta(1)) = 0\big) = \frac{1}{n} \cdot (1 - \kappa) \cdot p(\sigma_0, x) + \frac{1}{n} \cdot \big(1 - p(\sigma_0, x)\big) \cdot \kappa.$$
In order to increase the Hamming distance, an update should occur at a vertex other than $x$, and the update at such a vertex $w$ should result in disagreement. There are two cases: when $\sigma_w(0) = \eta_w(0) = 1$ or when $\sigma_w(0) = \eta_w(0) = 0$.
For the case when $\sigma_w(0) = \eta_w(0) = 1$, we have the estimate
$$\mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(1), \eta(1)) = 2\big) \le \frac{n-1}{n} \cdot 2\kappa(1 - \kappa).$$
For the case when $\sigma_w(0) = \eta_w(0) = 0$, we have the estimate
$$\mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(1), \eta(1)) = 2\big) \le \frac{n-1}{n} \cdot \Big(p(\sigma, w)\big(1 - p(\eta, w)\big) + p(\eta, w)\big(1 - p(\sigma, w)\big)\Big).$$
Therefore, we are able to get the following estimate:
$$\begin{aligned}
\mathbb{E}_{\sigma_0, \eta_0}\big[\rho(\sigma(1), \eta(1))\big] \le\ & 1 - \frac{1}{n}\Big((1 - \kappa)p(\sigma_0, x) + \big(1 - p(\sigma_0, x)\big)\kappa\Big) \\
& + \frac{n-1}{n} \max\Big\{2\kappa(1 - \kappa),\ p(\sigma, w)\big(1 - p(\eta, w)\big) + p(\eta, w)\big(1 - p(\sigma, w)\big)\Big\}.
\end{aligned} \tag{3.3}$$
Under the strong external infection regime (**), we have
$$\max\Big\{2\kappa(1 - \kappa),\ p(\sigma, w)\big(1 - p(\eta, w)\big) + p(\eta, w)\big(1 - p(\sigma, w)\big)\Big\} = 2\kappa(1 - \kappa).$$
On the other hand, recall that the two initial configurations $\sigma_0$ and $\eta_0$ have Hamming distance $\rho(\sigma(0), \eta(0)) = 1$ and differ at the vertex $x$. Consequently, we have
$$\begin{aligned}
(1 - \kappa)p(\sigma_0, x) + \big(1 - p(\sigma_0, x)\big)\kappa &= \kappa + (1 - 2\kappa)p(\sigma_0, x) \\
&\ge \min_{\substack{\sigma_0 \in \Omega_n \\ x \in V_n}} \Big(\kappa + (1 - 2\kappa)p(\sigma_0, x)\Big) \\
&\ge \kappa + (1 - 2\kappa)a \\
&= (1 - \kappa)a + (1 - a)\kappa. \tag{3.4}
\end{aligned}$$
This immediately implies
$$\mathbb{E}_{\sigma_0, \eta_0}\big[\rho(\sigma(1), \eta(1))\big] \le 1 - \frac{\gamma}{n} < 1,$$
with $\gamma$ as given in the statement of the lemma.
We now consider two initial configurations with Hamming distance greater than 1. Suppose that $\sigma(0) = \sigma_0$ and $\eta(0) = \eta_0$ with Hamming distance $\ell$ strictly greater than one, i.e., $\rho(\sigma(0), \eta(0)) = \ell$ with $\ell \ge 2$. Without loss of generality, we assume $\sigma_0 \le \eta_0$. Additionally, there exist configurations $\omega_0 = \sigma_0, \omega_1, \omega_2, \dots, \omega_\ell = \eta_0$ such that $\rho(\omega_{k-1}, \omega_k) = 1$ for all $k$. Since $\rho$ is a distance, we get the following estimate:
$$\mathbb{E}_{\sigma_0, \eta_0}\big[\rho(\sigma(1), \eta(1))\big] = \mathbb{E}\big[\rho\big(\sigma^{\sigma_0}(1), \eta^{\eta_0}(1)\big)\big] \le \mathbb{E}\bigg[\sum_{k=1}^{\ell} \rho\big(\sigma^{\omega_{k-1}}(1), \eta^{\omega_k}(1)\big)\bigg] = \sum_{k=1}^{\ell} \mathbb{E}_{\omega_{k-1}, \omega_k}\big[\rho(\sigma(1), \eta(1))\big] \le \ell \cdot \Big(1 - \frac{\gamma}{n}\Big) = \rho(\sigma(0), \eta(0)) \cdot \Big(1 - \frac{\gamma}{n}\Big).$$
The last inequality follows from Lemma 3.1. Now, after $t$ steps, we obtain
$$\mathbb{E}\Big[\rho\big(\sigma^{\sigma_0}(t), \eta^{\eta_0}(t)\big) \,\Big|\, \big(\sigma^{\sigma_0}(t-1), \eta^{\eta_0}(t-1)\big) = (\sigma_{t-1}, \eta_{t-1})\Big] = \mathbb{E}\big[\rho\big(\sigma^{\sigma_{t-1}}(1), \eta^{\eta_{t-1}}(1)\big)\big] \le \rho(\sigma_{t-1}, \eta_{t-1}) \cdot \Big(1 - \frac{\gamma}{n}\Big).$$
By taking an expectation over $\big(\sigma^{\sigma_0}(t-1), \eta^{\eta_0}(t-1)\big)$, we get that
$$\mathbb{E}\big[\rho\big(\sigma^{\sigma_0}(t), \eta^{\eta_0}(t)\big)\big] \le \mathbb{E}\big[\rho\big(\sigma^{\sigma_0}(t-1), \eta^{\eta_0}(t-1)\big)\big] \cdot \Big(1 - \frac{\gamma}{n}\Big). \tag{3.5}$$
Iterating the inequality $t - 1$ more times, we obtain
$$\mathbb{E}_{\sigma_0, \eta_0}\big[\rho(\sigma(t), \eta(t))\big] = \mathbb{E}\big[\rho\big(\sigma^{\sigma_0}(t), \eta^{\eta_0}(t)\big)\big] \le \rho(\sigma_0, \eta_0) \cdot \Big(1 - \frac{\gamma}{n}\Big)^t. \tag{3.6}$$
Now we are ready to state and prove Theorem 3.1, which is the main result of this section. The idea is to use the inequality in Equation (3.1).
Theorem 3.1. Under the strong external infection regime (**), we have the following upper estimate for the mixing time:
$$t^{(n)}_{\mathrm{mix}}(\epsilon) \le \frac{n}{\gamma}\Big(\log n + \log\big(\tfrac{1}{\epsilon}\big)\Big), \tag{3.7}$$
where $\gamma = (1 - \kappa)a + (1 - a)\kappa - 2(n-1)\kappa(1 - \kappa)$.
Proof of Theorem 3.1. For the coupled Markov chains $(\sigma^{\sigma_0}(t);\ t \ge 0)$ and $(\eta^{\eta_0}(t);\ t \ge 0)$, we can write
$$\sup_{\sigma_0, \eta_0 \in \Omega_n} \mathbb{P}_{\sigma_0, \eta_0}\big(\tau^{(n)}_{\mathrm{couple}} > t\big) = \sup_{\sigma_0, \eta_0 \in \Omega_n} \mathbb{P}_{\sigma_0, \eta_0}\big(\sigma(t) \neq \eta(t)\big) = \sup_{\sigma_0, \eta_0 \in \Omega_n} \mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(t), \eta(t)) \ge 1\big).$$
By Markov's inequality and the inequality in Equation (3.6), we obtain
$$\sup_{\sigma_0, \eta_0 \in \Omega_n} \mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(t), \eta(t)) \ge 1\big) \le \sup_{\sigma_0, \eta_0 \in \Omega_n} \mathbb{E}_{\sigma_0, \eta_0}\big[\rho(\sigma(t), \eta(t))\big] \le \sup_{\sigma_0, \eta_0 \in \Omega_n} \rho(\sigma_0, \eta_0) \cdot \Big(1 - \frac{\gamma}{n}\Big)^t \le n \cdot \Big(1 - \frac{\gamma}{n}\Big)^t.$$
By inequality (3.1), we get the following upper bound on the total variation distance:
$$d^{(n)}(t) \le n \cdot \Big(1 - \frac{\gamma}{n}\Big)^t \le n \cdot e^{-\frac{t\gamma}{n}}.$$
It follows that if $t \ge \frac{n}{\gamma}\big(\log n + \log(\tfrac{1}{\epsilon})\big)$, then the total variation distance satisfies $d^{(n)}(t) \le \epsilon$, as desired. This completes the proof.
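As a quick numerical illustration of Equation (3.7) (our own example; the parameter values are chosen only to satisfy the regime (**)):

```python
import math

def t_mix_upper(n, a, kappa, eps=0.25):
    """Right-hand side of (3.7): (n / gamma) * (log n + log(1/eps))."""
    gamma = (1 - kappa) * a + (1 - a) * kappa - 2 * (n - 1) * kappa * (1 - kappa)
    assert gamma > 0, "parameters must satisfy the regime (**)"
    return n / gamma * (math.log(n) + math.log(1.0 / eps))

n = 1000
kappa = 1.0 / (8 * (n - 1))      # kappa < 1/(4(n-1))
a = 0.9999                       # a > 1 - kappa > 1/2
print(t_mix_upper(n, a, kappa))  # about 1.1e4 steps, i.e. O(n log n)
```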
Remark 3.1. For $0 < \kappa < \kappa^*$ and $a > 1 - \kappa > \frac{1}{2}$, we have $\gamma = a - \frac{1}{2} + o(1)$. In this case, Theorem 3.1 yields the following upper bound:
$$t^{(n)}_{\mathrm{mix}}(\epsilon) \le \frac{n}{a - 1/2}\Big(\log n + \log\big(\tfrac{1}{\epsilon}\big)\Big).$$
4 Lower bound on the mixing time.
In this section, we prove a lower bound on the mixing time of the noisy SIS model on
graphs. We consider the following strong external infection regime:
$$0 < \kappa < \frac{1}{4(n-1)^2}, \qquad a > 1 - \frac{1}{n^{\alpha}} > 1 - \kappa > \frac{1}{2}, \qquad \alpha > 1. \tag{***}$$
As before, let us consider two starting configurations $\sigma(0) = \sigma_0$ and $\eta(0) = \eta_0$ such that the Hamming distance satisfies $\rho(\sigma(0), \eta(0)) = 1$, with the configurations differing at the vertex $x$. Without loss of generality, we assume $\sigma_0 \le \eta_0$, implying $(\sigma_x(0), \eta_x(0)) = (0, 1)$. We again denote the respective Markov chains initiated at $\sigma_0$ and $\eta_0$ by $(\sigma^{\sigma_0}(t);\ t \ge 0)$ and $(\eta^{\eta_0}(t);\ t \ge 0)$. We first bound the expectation of the one-step update $\mathbb{E}_{\sigma_0, \eta_0}[\rho(\sigma(1), \eta(1))]$ from below:
$$\begin{aligned}
\mathbb{E}_{\sigma_0, \eta_0}\big[\rho(\sigma(1), \eta(1))\big] &\ge 1 - \mathbb{P}_{\sigma_0, \eta_0}\big(\rho(\sigma(1), \eta(1)) = 0\big) \\
&= 1 - \frac{1}{n}\Big((1 - \kappa)p(\sigma_0, x) + \big(1 - p(\sigma_0, x)\big)\kappa\Big) \\
&\ge 1 - \frac{1}{n} \cdot \max_{x \in V_n}\Big\{(1 - \kappa)p(\sigma_0, x) + \big(1 - p(\sigma_0, x)\big)\kappa\Big\}.
\end{aligned}$$
Since the Hamming distance between $\sigma_0$ and $\eta_0$ is $\rho(\sigma(0), \eta(0)) = 1$, with the configurations differing at the vertex $x$, we compute
$$\max_{x \in V_n}\Big\{(1 - \kappa)p(\sigma_0, x) + \big(1 - p(\sigma_0, x)\big)\kappa\Big\} = \max_{x \in V_n}\Big\{\kappa + p(\sigma_0, x)(1 - 2\kappa)\Big\} \le \kappa + p^{(n)}_{*}(a, \lambda)(1 - 2\kappa),$$
with $p^{(n)}_{*}(a, \lambda) := a + \lambda \cdot \Delta^{(n)}_{\max} < 1$. Next, we define $\beta := \kappa + p^{(n)}_{*}(a, \lambda)(1 - 2\kappa)$. It follows that
$$\mathbb{E}_{\sigma_0, \eta_0}\big[\rho(\sigma(1), \eta(1))\big] \ge 1 - \frac{\beta}{n}.$$
Therefore, we conclude that
$$\mathbb{E}_{\sigma_0, \eta_0}\big[\rho(\sigma(t), \eta(t))\big] \ge \rho(\sigma(0), \eta(0)) \cdot \Big(1 - \frac{\beta}{n}\Big)^t. \tag{4.8}$$
Having established the inequality in Equation (4.8), we make the following claim.
Claim 4.1. The expected squared Hamming distance at time $t$ admits an upper bound of the form
$$\mathbb{E}_{\sigma_0, \eta_0}\big[\rho^2(\sigma(t), \eta(t))\big] \le \rho^2(\sigma_0, \eta_0) \cdot \Big(1 - \frac{2\gamma}{n}\Big)^t + \frac{n}{2\gamma}. \tag{4.9}$$
Proof of Claim 4.1. To simplify notation, we write $\rho_t := \rho\big(\sigma^{\sigma_0}(t), \eta^{\eta_0}(t)\big)$. Then, it follows in a straightforward manner that
$$\begin{aligned}
\mathbb{E}\big[\rho_{t+1}^2 \,\big|\, \rho_t\big] &= \rho_t^2 + 2\rho_t\,\mathbb{E}\big[\rho_{t+1} \,\big|\, \rho_t\big] - 2\rho_t^2 + \mathbb{E}\big[(\rho_{t+1} - \rho_t)^2 \,\big|\, \rho_t\big] \\
&\le \rho_t^2 + 2\rho_t\Big(1 - \frac{\gamma}{n}\Big)\rho_t - 2\rho_t^2 + 1.
\end{aligned}$$
The inequality follows from Equation (3.5) and the fact that $(\rho_{t+1} - \rho_t)^2 \le 1$. By induction and taking expectations over $\rho_t$, the claim follows.
With the estimate of the second moment of $\rho_t$, we can find an upper bound for $\mathrm{Var}(\rho_t)$. Indeed, it follows that
$$\begin{aligned}
\mathrm{Var}(\rho_t) = \mathbb{E}\big[\rho_t^2\big] - \big(\mathbb{E}[\rho_t]\big)^2 &\le \rho_0^2\Big(1 - \frac{2\gamma}{n}\Big)^t + \frac{n}{2\gamma} - \rho_0^2\Big(1 - \frac{\beta}{n}\Big)^{2t} \\
&= \rho_0^2\bigg(\Big(1 - \frac{2\gamma}{n}\Big)^t - \Big(1 - \frac{\beta}{n}\Big)^{2t}\bigg) + \frac{n}{2\gamma}.
\end{aligned}$$
The first inequality is due to Equation (4.8) and Claim 4.1. Under the strong external infection regime (***), we have that
$$1 - \frac{2\gamma}{n} < \Big(1 - \frac{\beta}{n}\Big)^2.$$
Consequently, it follows that
$$\mathrm{Var}(\rho_t) \le \frac{n}{2\gamma}.$$
Now we are ready to state and prove Theorem 4.1, which states that the mixing time of the noisy SIS model on graphs has a lower bound of order $O(n \log n)$.
Theorem 4.1. Under the strong external infection regime (***), we have that
$$t^{(n)}_{\mathrm{mix}}(\epsilon) \ge \frac{n}{2\beta}\Big(\log n + \log\big(\tfrac{\gamma\epsilon}{4}\big)\Big), \tag{4.10}$$
where $\beta = \kappa + p^{(n)}_{*}(a, \lambda)(1 - 2\kappa)$ and $\gamma = (1 - \kappa)a + (1 - a)\kappa - 2(n-1)\kappa(1 - \kappa)$.
Proof of Theorem 4.1. Recall that
$$d^{(n)}(t) := \sup_{\eta_0 \in \Omega_n} d_{\mathrm{TV}}(\mu^n_{\eta_0,t}, \mu^n_{a,\lambda,\kappa}).$$
We consider the event $\big\{\rho_t \le \sqrt{\tfrac{n}{\gamma\epsilon}}\big\}$. Then, note that
$$\begin{aligned}
d^{(n)}(t) &\ge d_{\mathrm{TV}}(\mu^n_{\eta_0,t}, \mu^n_{a,\lambda,\kappa}) \\
&\ge \mu^n_{a,\lambda,\kappa}\Big(\rho_t \le \sqrt{\tfrac{n}{\gamma\epsilon}}\Big) - \mu^n_{\eta_0,t}\Big(\rho_t \le \sqrt{\tfrac{n}{\gamma\epsilon}}\Big) \\
&= 1 - \mu^n_{a,\lambda,\kappa}\Big(\rho_t \ge \sqrt{\tfrac{n}{\gamma\epsilon}}\Big) - \mu^n_{\eta_0,t}\Big(\rho_t \le \sqrt{\tfrac{n}{\gamma\epsilon}}\Big).
\end{aligned}$$
To obtain the lower bound on the mixing time, we need to bound the two terms $\mu^n_{a,\lambda,\kappa}\big(\rho_t \ge \sqrt{\tfrac{n}{\gamma\epsilon}}\big)$ and $\mu^n_{\eta_0,t}\big(\rho_t \le \sqrt{\tfrac{n}{\gamma\epsilon}}\big)$. We do that separately.
For the term $\mu^n_{a,\lambda,\kappa}\big(\rho_t \ge \sqrt{\tfrac{n}{\gamma\epsilon}}\big)$, given that $\mu^n_{a,\lambda,\kappa}$ is the invariant measure, we have that $\mathbb{E}_{\mu^n_{a,\lambda,\kappa}}[\rho_t] = 0$. By Chebyshev's inequality, we obtain that
$$\mu^n_{a,\lambda,\kappa}\Big(\rho_t \ge \sqrt{\tfrac{n}{\gamma\epsilon}}\Big) = \mu^n_{a,\lambda,\kappa}\Big(\rho_t - \mathbb{E}_{\mu^n_{a,\lambda,\kappa}}[\rho_t] \ge \sqrt{\tfrac{n}{\gamma\epsilon}}\Big) \le \epsilon.$$
For the term $\mu^n_{\eta_0,t}\big(\rho_t \le \sqrt{\tfrac{n}{\gamma\epsilon}}\big)$, let us assume for now that
$$\mathbb{E}_{\mu^n_{\eta_0,t}}[\rho_t] \ge \sqrt{\frac{4n}{\gamma\epsilon}}. \tag{4.11}$$
Under this condition, we have that
$$\begin{aligned}
\mu^n_{\eta_0,t}\Big(\rho_t \le \sqrt{\tfrac{n}{\gamma\epsilon}}\Big) &\le \mu^n_{\eta_0,t}\Big(\rho_t \le \sqrt{\tfrac{4n}{\gamma\epsilon}} - \sqrt{\tfrac{n}{2\gamma\epsilon}}\Big) \\
&\le \mu^n_{\eta_0,t}\Big(\rho_t \le \mathbb{E}_{\mu^n_{\eta_0,t}}[\rho_t] - \sqrt{\tfrac{n}{2\gamma\epsilon}}\Big) \\
&\le \mu^n_{\eta_0,t}\Big(\big|\rho_t - \mathbb{E}_{\mu^n_{\eta_0,t}}[\rho_t]\big| \ge \sqrt{\tfrac{n}{2\gamma\epsilon}}\Big) \\
&\le \epsilon.
\end{aligned}$$
The last inequality is again by Chebyshev's inequality. Consequently, we conclude that
$$d^{(n)}(t) \ge 1 - 2\epsilon. \tag{4.12}$$
Now, let us estimate the timescale at which the condition in Equation (4.11) holds. To that end, we maximize the Hamming distance of the initial condition. Then, it follows that the inequalities
$$n\Big(1 - \frac{\beta}{n}\Big)^t \ge \mathbb{E}_{\mu^n_{\eta_0,t}}[\rho_t] \ge \sqrt{\frac{4n}{\gamma\epsilon}}$$
hold whenever
$$t \le \frac{\log\Big(n\sqrt{\tfrac{\gamma\epsilon}{4n}}\Big)}{-\log\big(1 - \tfrac{\beta}{n}\big)}.$$
Now, when $n$ is large, we obtain that
$$\frac{\log\Big(n\sqrt{\tfrac{\gamma\epsilon}{4n}}\Big)}{-\log\big(1 - \tfrac{\beta}{n}\big)} \sim \frac{n}{\beta}\log\sqrt{\frac{n\gamma\epsilon}{4}} = \frac{n}{2\beta}\log\Big(\frac{n\gamma\epsilon}{4}\Big) = \frac{n}{2\beta}\Big(\log n + \log\big(\tfrac{\gamma\epsilon}{4}\big)\Big).$$
This implies that, starting from the worst possible initial condition, for times up to $\frac{n}{2\beta}\big(\log n + \log(\tfrac{\gamma\epsilon}{4})\big)$ steps, the total variation distance between $\mu^n_{\eta_0,t}$, the law of the Markov chain at time $t$, and $\mu^n_{a,\lambda,\kappa}$, the invariant measure, is still at least $1 - 2\epsilon$. This proves Theorem 4.1.
Remark 4.1. Under the strong external infection regime (***), we have $\gamma = 1 - \kappa + o(1)$ and $\beta = 1 - \kappa + o(1)$. Then, Theorem 4.1 yields the following lower estimate:
$$t^{(n)}_{\mathrm{mix}}(\epsilon) \ge \frac{n}{2(1 - \kappa)}\Big(\log n + \log\big(\tfrac{(1 - \kappa)\epsilon}{4}\big)\Big) \sim \frac{n}{2}\big(1 - o(1)\big)\log n.$$
Theorems 3.1 and 4.1 tell us that
$$\frac{n}{2}\log n\,\big(1 - o(1)\big) \le t^{(n)}_{\mathrm{mix}}(\epsilon) \le 2n\log n\,\big(1 + o(1)\big). \tag{4.13}$$
This implies that the $\epsilon$-mixing time of the noisy SIS model on graphs, under the strong external infection regime, is of order $O(n \log n)$. In the next section, we study the noisy SIS model on a number of random graph models.
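For tiny $n$, the sandwich (4.13) can be checked directly with the exact machinery sketched in Section 2.2. The snippet below (ours) reuses the hypothetical helpers `transition_matrix` and `worst_case_tv` from that sketch on a path graph with $n = 8$, with parameters chosen inside the regime (***) for $\alpha = 3$.

```python
import numpy as np

n = 8
adj = [[w for w in (x - 1, x + 1) if 0 <= w < n] for x in range(n)]
kappa = 1.0 / 300                  # 1/n**3 < kappa < 1/(4 (n-1)**2)
a, lam = 0.999, 0.0004             # a > 1 - 1/n**3 and lam * Delta_max < 1/n**3
P, _ = transition_matrix(adj, a, lam, kappa)
for t in [1, n, int(n * np.log(n)), 4 * int(n * np.log(n))]:
    print(t, worst_case_tv(P, t))  # decays below 1/4 on the n log n timescale
```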
5 Mixing properties on random graphs.
As before, we focus on the strong external infection regime (***) with parameters:
$$0 < \kappa < \frac{1}{4(n-1)^2}, \qquad a > 1 - \frac{1}{n^{\alpha}} > 1 - \kappa > \frac{1}{2}, \qquad \alpha > 1.$$
For the choice of the strong external infection probability $a > 1 - \frac{1}{n^{\alpha}}$ with $\alpha > 1$, and $p^{(n)}_{*}(a, \lambda) := a + \lambda \cdot \Delta^{(n)}_{\max} < 1$, we require that
$$\lambda \cdot \Delta^{(n)}_{\max} < \frac{1}{n^{\alpha}}. \tag{5.14}$$
We consider three different models of random graphs, namely, Erdős–Rényi graphs, Galton–Watson trees, and regular multigraphs.
5.1 Mixing property on Erdős–Rényi graphs.
Let $G(n, p)$ denote an Erdős–Rényi graph on $n$ vertices, where each unordered pair of vertices has an edge independently with probability $p$. We consider the case when $p \gg \frac{1}{n}$. Given a realisation of the Erdős–Rényi graph $G(n, p)$ (with $p \gg \frac{1}{n}$), which we continue to denote by $G(n, p)$, we run the Markov chain with the noisy SIS dynamics described in Section 2.1 on the random graph $G(n, p)$. The following theorem holds.
Theorem 5.1. Let $G(n, p)$ be an Erdős–Rényi graph, with $p \gg \frac{1}{n}$. Under the strong external infection regime (***), if we furthermore assume $\lambda \le \frac{1}{n^{1+\alpha}p}$, then for any $\epsilon' > 0$, for $n$ large enough, we have that
$$\mathbb{P}\bigg(\frac{n}{2}\log n\,\big(1 - o(1)\big) \le t^{(n)}_{\mathrm{mix}}(\epsilon) \le 2n\log n\,\big(1 + o(1)\big)\bigg) \ge 1 - \epsilon'.$$
Proof of Theorem 5.1. By an application of the Chernoff bound to $\deg(v)$ for $v \in V_n$, we have that
$$\mathbb{P}\big(|\deg(v) - np| \ge \delta np\big) = \mathbb{P}\big(|\deg(v) - \mathbb{E}[\deg(v)]| \ge \delta np\big) \le 2\exp\Big(-\frac{\delta^2 np}{3}\Big),$$
for any $0 < \delta < 1$. It follows that
$$\mathbb{P}\big((1 - \delta)np < \deg(v) < (1 + \delta)np\big) \ge 1 - 2\exp\Big(-\frac{\delta^2 np}{3}\Big). \tag{5.15}$$
When $p \gg \frac{1}{n}$, we get
$$\lim_{n \to \infty}\bigg(1 - 2\exp\Big(-\frac{\delta^2 np}{3}\Big)\bigg)^n = 1.$$
This means that for any $\epsilon' > 0$, for $n$ large enough, we have
$$\mathbb{P}\big(\forall v \in V_n : (1 - \delta)np < \deg(v) < (1 + \delta)np\big) \ge 1 - \epsilon'.$$
Therefore, it follows that for any $\epsilon' > 0$, for $n$ large enough,
$$\mathbb{P}\big((1 - \delta)np < \Delta^{(n)}_{\max} < (1 + \delta)np\big) \ge 1 - \epsilon'.$$
If we furthermore take $\lambda \le \frac{1}{n^{1+\alpha}p}$, then with high probability (5.14) is satisfied. That is,
$$\mathbb{P}\Big(\lambda \cdot \Delta^{(n)}_{\max} < \frac{1}{n^{\alpha}}\Big) \ge 1 - \epsilon',$$
for any $\epsilon' > 0$ and $n$ large enough. Therefore, with high probability, the upper and lower bounds on the $\epsilon$-mixing time are given by
$$\mathbb{P}\bigg(\frac{n}{2}\log n\,\big(1 - o(1)\big) \le t^{(n)}_{\mathrm{mix}}(\epsilon) \le 2n\log n\,\big(1 + o(1)\big)\bigg) \ge 1 - \epsilon'.$$
This completes the proof.
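The degree concentration driving this proof is easy to check numerically. In $G(n, p)$ each degree is $\mathrm{Bin}(n - 1, p)$, so a minimal sketch (ours) can sample i.i.d. binomial proxies for the degrees, ignoring the mild dependence between the degrees of different vertices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, delta = 10_000, 0.01, 0.5           # p = 0.01 >> 1/n = 1e-4
degrees = rng.binomial(n - 1, p, size=n)  # deg(v) ~ Bin(n-1, p) in G(n, p)
inside = (degrees > (1 - delta) * n * p) & (degrees < (1 + delta) * n * p)
print(inside.mean(), degrees.max() / (n * p))  # ~1.0, and max degree = O(np)
```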
5.2 Mixing property on regular multigraphs.
A multigraph is a graph in which more than one edge may be present between a pair of vertices. A multigraph $G_n = (V_n, E_n)$ is called regular if there exists a constant $d \in \mathbb{Z}_+$ such that for every vertex $x \in V_n$, the degree of $x$ is equal to $d$. That is, $\deg(x) = d$ for all $x \in V_n$.
We will follow the configuration model (CM) to build a regular multigraph. Here we provide a brief description of its construction (a simulation sketch is given after the revised neighbourhood definition below), but we refer the readers to van der Hofstad (2017) for a more detailed account.
Let $n$ be the number of vertices and $d$, the degree of each vertex in a $d$-regular multigraph. To each vertex, assign $d$ half-edges (stubs). A half-edge, as the name suggests, is half of an edge that has not yet been paired with another half-edge. The total number of half-edges is $dn$. Denote the set of half-edges by $H$. At each step, we pick two unpaired half-edges uniformly at random from the collection of all available half-edges and pair them to form an edge. We repeat this process until no half-edges are left unpaired. Denote by $\Omega$ the state space of all possible pairings of half-edges. Each pairing corresponds to an edge in the multigraph. The total number of ways to pair $dn$ half-edges is given by
$$|\Omega| = \frac{(dn)!}{(dn/2)!\,2^{dn/2}}.$$
We assign uniform probability to each possible pairing of half-edges. That is, the probability assigned to each pairing is given by
$$\frac{(dn/2)!}{\binom{dn}{2}\binom{dn-2}{2}\binom{dn-4}{2}\cdots\binom{2}{2}}.$$
Since in the case of a regular multigraph we allow self-loops, i.e., edges $e = (x, x)$ with $x \in V_n$, as well as multiple edges $e = (x, y)$, $e' = (x, y)$ with $e \neq e'$ and $x, y \in V_n$, a modification of the definition of $n_{I,\sigma}(x)$, the number of infected neighbours of $x$, is required. Consider the following revised definition:
$$n_{I,\sigma}(x) := |\{e \in E_n : e = (x, y),\ x \neq y,\ \text{and } \sigma_y = 1\}|.$$
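A minimal simulation sketch of this construction (our own illustration; the function names are hypothetical): shuffling the $dn$ half-edges and pairing them off consecutively produces exactly a uniform pairing, and the infected-neighbour count then discards self-loops, as in the revised definition above.

```python
import random

def random_regular_multigraph(n, d, rng=random):
    """Configuration model: pair the dn half-edges uniformly at random.

    Returns an edge list; self-loops and multiple edges are kept.
    """
    assert (n * d) % 2 == 0
    half_edges = [v for v in range(n) for _ in range(d)]  # d stubs per vertex
    rng.shuffle(half_edges)                               # uniform pairing
    return [(half_edges[i], half_edges[i + 1]) for i in range(0, n * d, 2)]

def n_infected_neighbours(x, edges, sigma):
    """Revised n_{I,sigma}(x): edges (x, y) with y != x and sigma_y = 1,
    counted with multiplicity."""
    count = 0
    for u, v in edges:
        if u == x and v != x and sigma[v] == 1:
            count += 1
        elif v == x and u != x and sigma[u] == 1:
            count += 1
    return count

edges = random_regular_multigraph(2000, 4)
print(sum(u == v for u, v in edges))  # near (d - 1)/2 = 1.5; see Lemma 5.1 below
```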
We are now ready to state and prove the bounds on the mixing time of the noisy SIS model on the random $d$-regular multigraph.
Theorem 5.2. Let $G_n$ be a random $d$-regular multigraph, with $d$ fixed and independent of $n$. Under the strong external infection regime (***), if we furthermore assume $\lambda \le \frac{1}{dn^{\alpha}}$, then for any $\epsilon' > 0$, for $n$ large enough, we have that
$$\mathbb{P}\bigg(\frac{n}{2}\log n\,\big(1 - o(1)\big) \le t^{(n)}_{\mathrm{mix}}(\epsilon) \le 2n\log n\,\big(1 + o(1)\big)\bigg) \ge 1 - \epsilon'.$$
Before proving Theorem 5.2, we first analyze the concentration of the number of self-loops in a $d$-regular multigraph. A self-loop is a pairing (edge) in which two half-edges at the same vertex are chosen during the random pairing process. We aim to show that the number of self-loops concentrates around its expected value Durrett (2007); van der Hofstad (2017) using McDiarmid's inequality. To this end, let $S$ be the total number of self-loops in the $d$-regular multigraph. This can be written as the sum of indicator random variables $I_{st,j}$ with $1 \le s < t \le d$ and $1 \le j \le n$:
$$S = \sum_{j=1}^{n} \sum_{1 \le s < t \le d} I_{st,j},$$
where $n$ denotes the total number of vertices, and $I_{st,j}$ is an indicator random variable representing the pairing of half-edge $s$ with half-edge $t$, where $s$ and $t$ are half-edges associated with vertex $j$. That is,
$$I_{st,j} = \begin{cases} 1, & \text{if } s \text{ and } t \text{ associated with vertex } j \text{ form a self-loop}, \\ 0, & \text{otherwise}. \end{cases}$$
The following lemma is then immediate.
Lemma 5.1. For a random $d$-regular multigraph $G_n$ and any $\delta > 0$, we have
$$\mathbb{P}\Big(\big|S - \tfrac{d-1}{2}\big| \ge \delta\Big) \le 2\exp\Big(-\frac{2\delta^2}{n}\Big).$$
Proof of Lemma 5.1. At each pairing, the probability of forming a self-loop with half-edges $s$ and $t$ associated with vertex $j$ in a single pairing is proportional to the number of ways to pair two half-edges:
$$\mathbb{P}(s \text{ and } t \text{ form a self-loop}) = \frac{1}{nd - 1}.$$
This probability does not depend on $s$ and $t$. The expected number of self-loops $\mathbb{E}[S]$ is
$$\mathbb{E}[S] = n \cdot \binom{d}{2} \cdot \mathbb{P}(1 \text{ and } 2 \text{ form a self-loop}) = n \cdot \frac{d(d-1)}{2} \cdot \frac{1}{nd - 1} \approx \frac{d-1}{2}.$$
See (Durrett, 2007, Theorem 3.1.2). Then, by McDiarmid's inequality, we have
$$\mathbb{P}\Big(\big|S - \tfrac{d-1}{2}\big| \ge \delta\Big) \le 2\exp\Big(-\frac{2\delta^2}{n}\Big),$$
for any $\delta > 0$.
Now, we are prepared to prove Theorem 5.2.
Proof of Theorem 5.2. By Lemma 5.1, with high probability, the total number of self-loops is close to $\frac{d-1}{2}$, and the self-loops are uniformly distributed across the vertices. This implies that, with high probability, $\Delta^{(n)}_{\max}$ is of order $d$. This means that for any $\epsilon' > 0$, for $n$ large enough,
$$\mathbb{P}\big((1 - \delta)d < \Delta^{(n)}_{\max} < (1 + \delta)d\big) \ge 1 - \epsilon'.$$
In addition, if we take $\lambda \le \frac{1}{dn^{\alpha}}$, then with high probability (5.14) is satisfied. That is,
$$\mathbb{P}\Big(\lambda \cdot \Delta^{(n)}_{\max} < \frac{1}{n^{\alpha}}\Big) \ge 1 - \epsilon',$$
for any $\epsilon' > 0$ and $n$ large enough. Therefore, with high probability, we find the following upper and lower bounds on the $\epsilon$-mixing time:
$$\mathbb{P}\bigg(\frac{n}{2}\log n\,\big(1 - o(1)\big) \le t^{(n)}_{\mathrm{mix}}(\epsilon) \le 2n\log n\,\big(1 + o(1)\big)\bigg) \ge 1 - \epsilon'.$$
5.3 Mixing property on Galton–Watson trees
Let $\{Z_n\}_{n \ge 0}$ be a Galton–Watson branching process, where $Z_n$ denotes the number of individuals in the $n$-th generation. The process evolves according to the following rules: We start with a single individual at generation 0, i.e., $Z_0 = 1$. Each individual in generation $n$ independently produces offspring according to a random variable $X$, where $X$ takes values in $\mathbb{N}_0$ (the non-negative integers) with probability mass function $\{p_k\}_{k \ge 0}$, i.e.,
$$p_k = \mathbb{P}(X = k), \quad k = 0, 1, 2, \dots$$
Given that generation $n$ consists of $Z_n$ individuals, the number of individuals in generation $n + 1$, denoted $Z_{n+1}$, is the sum of $Z_n$ independent random variables $X_1, X_2, \dots, X_{Z_n}$, where each $X_i$ has the same distribution as $X$. Hence, for $n \ge 0$,
$$Z_{n+1} = \sum_{i=1}^{Z_n} X_i,$$
where $X_1, X_2, \dots$ are i.i.d. random variables with the same distribution as $X$.
We denote the Galton–Watson tree by $T_n := (V_n, E_n)$. Thus, the Galton–Watson tree is a random rooted tree where the offspring of each individual is determined by the random variable $X$, and the total number of individuals in each generation depends on the offspring distribution of all individuals in the previous generation. We consider two cases: a) the random variable $X$ has a binomial distribution; b) the random variable $X$ has a Poisson distribution.
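To illustrate, here is a minimal sketch (ours; all names are hypothetical) that grows a Galton–Watson tree generation by generation from a given offspring sampler and reports the maximal degree, the quantity against which condition (5.14) is checked in the two cases below.

```python
import numpy as np

def galton_watson_tree(offspring, max_vertices, rng):
    """Grow a GW tree breadth-first; offspring(rng) samples the offspring law X.

    Returns parent pointers (the root has parent -1); growth stops once
    max_vertices individuals exist.
    """
    parent = [-1]
    frontier = [0]
    while frontier and len(parent) < max_vertices:
        nxt = []
        for v in frontier:
            for _ in range(int(offspring(rng))):
                if len(parent) >= max_vertices:
                    break
                parent.append(v)
                nxt.append(len(parent) - 1)
        frontier = nxt
    return parent

def max_degree(parent):
    deg = np.zeros(len(parent), dtype=int)
    for v in range(1, len(parent)):   # every non-root vertex has one parent edge
        deg[v] += 1
        deg[parent[v]] += 1
    return int(deg.max())

rng = np.random.default_rng(0)
tree = galton_watson_tree(lambda r: r.poisson(3.0), 10_000, rng)
print(max_degree(tree))   # for Poisson offspring this grows like log n / log log n
```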
5.3.1 Binomial offspring
Suppose $X \sim \mathrm{Bin}(n, p)$ and $p \gg \frac{1}{n}$. We run the Markov chain with the noisy SIS dynamics described in Section 2.1 on the random tree $T_n$. Consequently, the mixing time, with high probability, is also of order $O(n \log n)$.
Theorem 5.3. Let $T_n$ be a Galton–Watson tree with offspring having a binomial law $\mathrm{Bin}(n, p)$, with $p \gg \frac{1}{n}$. Under the strong external infection regime (***), if we furthermore assume $\lambda \le \frac{1}{n^{1+\alpha}p}$, then for any $\epsilon' > 0$, for $n$ large enough, we have that
$$\mathbb{P}\bigg(\frac{n}{2}\log n\,\big(1 - o(1)\big) \le t^{(n)}_{\mathrm{mix}}(\epsilon) \le 2n\log n\,\big(1 + o(1)\big)\bigg) \ge 1 - \epsilon'.$$
Proof of Theorem 5.3. We begin by providing a Chernoff bound for the concentration of $\deg(x)$ for $x \in V_n$. Notice that $\mathbb{E}[X] = np$, thus $\mathbb{E}[\deg(x)] = \mathbb{E}[X + 1] = np + 1 = O(np)$. In the same way as in Section 5.1, by Chernoff's inequality, we have
$$\mathbb{P}\big(|\deg(x) - np| \ge \delta np\big) = \mathbb{P}\big(|\deg(x) - \mathbb{E}[\deg(x)]| \ge \delta np\big) \le 2\exp\Big(-\frac{\delta^2 np}{3}\Big),$$
for any $0 < \delta < 1$. This implies that
$$\mathbb{P}\big((1 - \delta)np < \deg(x) < (1 + \delta)np\big) \ge 1 - 2\exp\Big(-\frac{\delta^2 np}{3}\Big).$$
In the case when $p \gg \frac{1}{n}$, we obtain
$$\lim_{n \to \infty}\bigg(1 - 2\exp\Big(-\frac{\delta^2 np}{3}\Big)\bigg)^n = 1.$$
Thus, for any $\epsilon' > 0$ and $n$ sufficiently large,
$$\mathbb{P}\big(\forall x \in V_n : (1 - \delta)np < \deg(x) < (1 + \delta)np\big) \ge 1 - \epsilon'.$$
Therefore, it follows that for any $\epsilon' > 0$, for $n$ large enough,
$$\mathbb{P}\big((1 - \delta)np < \Delta^{(n)}_{\max} < (1 + \delta)np\big) \ge 1 - \epsilon'.$$
Furthermore, if we take $\lambda \le \frac{1}{n^{1+\alpha}p}$, then with high probability (5.14) is satisfied. This implies that
$$\mathbb{P}\Big(\lambda \cdot \Delta^{(n)}_{\max} < \frac{1}{n^{\alpha}}\Big) \ge 1 - \epsilon',$$
for any $\epsilon' > 0$ and $n$ large enough. Consequently, we have
$$\mathbb{P}\bigg(\frac{n}{2}\log n\,\big(1 - o(1)\big) \le t^{(n)}_{\mathrm{mix}}(\epsilon) \le 2n\log n\,\big(1 + o(1)\big)\bigg) \ge 1 - \epsilon'.$$
5.3.2 Poisson offspring
Let us now assume $X \sim \mathrm{Poi}(\theta)$ for any $\theta > 0$. We run the Markov chain with the noisy SIS dynamics described in Section 2.1 on the random tree $T_n$. The following theorem describes the behaviour of the mixing time.
Theorem 5.4. Let $T_n$ be a Galton–Watson tree with offspring having a Poisson law $\mathrm{Poi}(\theta)$, with $\theta > 0$. Under the strong external infection regime (***), if we furthermore assume $\lambda \le \frac{\log\log n}{n^{\alpha}\log n}$, then for any $\epsilon' > 0$ and $n$ large enough, we have that
$$\mathbb{P}\bigg(\frac{n}{2}\log n\,\big(1 - o(1)\big) \le t^{(n)}_{\mathrm{mix}}(\epsilon) \le 2n\log n\,\big(1 + o(1)\big)\bigg) \ge 1 - \epsilon'.$$
Proof of Theorem 5.4. Let $X_1, X_2, \dots, X_n$ be independent random variables with $X_i \sim \mathrm{Poi}(\theta)$. Instead of using a Chernoff bound as in the previous sections, we use the asymptotic behaviour of the maximum of a set of independent, identically distributed Poisson random variables, which is given in Kimber (1983). As shown in Kimber (1983), for any $\epsilon' > 0$ and $n$ large enough,
$$\mathbb{P}\bigg(\max_{1 \le i \le n} X_i \sim \frac{\log n}{\log\log n}\bigg) \ge 1 - \epsilon'.$$
It follows that for any $\epsilon' > 0$ and $n$ large enough, we have
$$\mathbb{P}\bigg(\Delta^{(n)}_{\max} \sim \frac{\log n}{\log\log n}\bigg) \ge 1 - \epsilon'.$$
Now, if we take $\lambda \le \frac{\log\log n}{n^{\alpha}\log n}$, then with high probability (5.14) is satisfied. This can be restated as
$$\mathbb{P}\Big(\lambda \cdot \Delta^{(n)}_{\max} < \frac{1}{n^{\alpha}}\Big) \ge 1 - \epsilon',$$
for any $\epsilon' > 0$ and $n$ large enough. As a result, with high probability, the upper and lower bounds on the $\epsilon$-mixing time are given by
$$\mathbb{P}\bigg(\frac{n}{2}\log n\,\big(1 - o(1)\big) \le t^{(n)}_{\mathrm{mix}}(\epsilon) \le 2n\log n\,\big(1 + o(1)\big)\bigg) \ge 1 - \epsilon'.$$
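The Poisson-maximum asymptotics from Kimber (1983) can be sanity-checked numerically; the following sketch (ours) shows that the ratio of $\max_{1 \le i \le n} X_i$ to $\log n / \log\log n$ stays of constant order, although the convergence is very slow because of the iterated logarithm.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 3.0
for n in [10**4, 10**5, 10**6, 10**7]:
    m = rng.poisson(theta, size=n).max()
    scale = np.log(n) / np.log(np.log(n))
    print(n, m, round(m / scale, 2))   # the ratio m / scale remains O(1) in n
```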
Statements and Declarations
Competing Interests
The authors declare that they have no competing interests.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Data availability
Not applicable.
Materials availability
Not applicable.
Code availability
Not applicable.
Author contribution
Both authors contributed equally to the work.
Funding
The work was supported by the Engineering and Physical Sciences Research Council
(EPSRC) under grant number EP/Y027795/1.
References
Andersson, H., Britton, T.: Stochastic Epidemic Models and Their Statistical Analysis, vol. 151. Springer (2000)

Aldous, D., Diaconis, P.: Shuffling cards and stopping times. Am. Math. Mon. 93, 333–348 (1986) https://doi.org/10.2307/2323590

Aljovin, E., Jara, M., Xiang, Y.: Thermalization and convergence to equilibrium of the noisy voter model. ArXiv preprint: 2409.05722 (2024)

Ball, F.: Central limit theorems for SIR epidemics and percolation on configuration model random graphs. Ann. Appl. Probab. 31(5), 2091–2142 (2021) https://doi.org/10.1214/20-aap1642

Bandyopadhyay, A., Sajadi, F.: On the expected total number of infections for virus spread on a finite network. Ann. Appl. Probab. 25(2), 663–674 (2015) https://doi.org/10.1214/14-AAP1007

Cui, K., KhudaBukhsh, W.R., Koeppl, H.: Motif-based mean-field approximation of interacting particles on clustered networks. Phys. Rev. E 105(4), 042301–7 (2022) https://doi.org/10.1103/physreve.105.l042301

Cox, J.T., Peres, Y., Steif, J.E.: Cutoff for the noisy voter model. Ann. Appl. Probab. 26(2), 917–932 (2016) https://doi.org/10.1214/15-AAP1108

Decreusefond, L., Dhersin, J.-S., Moyal, P., Tran, V.C.: Large graph limit for an SIR process in random network with heterogeneous connectivity. Ann. Appl. Probab. 22(2), 541–575 (2012) https://doi.org/10.1214/11-AAP773

Durrett, R.: Random Graph Dynamics. Cambridge Series in Statistical and Probabilistic Mathematics, vol. 20. Cambridge University Press, Cambridge (2007)

He, R., Luczak, M., Ross, N.: Cutoff for the logistic SIS epidemic model with self-infection. ArXiv preprint: 2407.18446 (2024)

Hermon, J., Sly, A., Sousi, P.: Universality of cutoff for graphs with an added random matching. Ann. Probab. 50(1), 203–240 (2022) https://doi.org/10.1214/21-aop1532

KhudaBukhsh, W.R., Auddy, A., Disser, Y., Koeppl, H.: Approximate lumpability for Markovian agent-based models using local symmetries. Journal of Applied Probability 56(3), 647–671 (2019) https://doi.org/10.1017/jpr.2019.44

Kimber, A.C.: A note on Poisson maxima. Z. Wahrsch. Verw. Gebiete 63(4), 551–552 (1983) https://doi.org/10.1007/BF00533727

Kipnis, C., Landim, C.: Scaling Limits of Interacting Particle Systems. Springer (1999). https://doi.org/10.1007/978-3-662-03752-2

Kiss, I.Z., Miller, J.C., Simon, P.L.: Mathematics of Epidemics on Networks: From Exact to Approximate Models. Interdisciplinary Applied Mathematics, vol. 46. Springer (2017). https://doi.org/10.1007/978-3-319-50806-1

KhudaBukhsh, W.R., Woroszylo, C., Rempała, G.A., Koeppl, H.: A functional central limit theorem for SI processes on configuration model graphs. Advances in Applied Probability 54(3), 880–912 (2022) https://doi.org/10.1017/apr.2022.52

Liggett, T.M.: Interacting Particle Systems. Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 276. Springer (1985). https://doi.org/10.1007/978-1-4613-8542-4

Levin, D.A., Peres, Y.: Markov Chains and Mixing Times, 2nd edn. American Mathematical Society, Providence, RI (2017). With contributions by Elizabeth L. Wilmer, and a chapter on "Coupling from the past" by James G. Propp and David B. Wilson. https://doi.org/10.1090/mbk/107

Norris, J.R.: Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press (1997). https://doi.org/10.1017/cbo9780511810633

Sharker, Y., Diallo, Z., KhudaBukhsh, W.R., Kenah, E.: Pairwise accelerated failure time regression models for infectious disease transmission in close-contact groups with external sources of infection. Stat. Med. 43(27), 5138–5154 (2024)

Sousi, P., Thomas, S.: Cutoff for random walk on dynamical Erdős–Rényi graph. Ann. Inst. Henri Poincaré Probab. Stat. 56(4), 2745–2773 (2020) https://doi.org/10.1214/20-AIHP1057

van der Hofstad, R.: Random Graphs and Complex Networks, Vol. 1. Cambridge Series in Statistical and Probabilistic Mathematics, vol. 43. Cambridge University Press, Cambridge (2017). https://doi.org/10.1017/9781316779422

Wilson, D.B.: Mixing times of Lozenge tiling and card shuffling Markov chains. Ann. Appl. Probab. 14(1), 274–325 (2004) https://doi.org/10.1214/aoap/1075828054