arXiv:1203.1113v3 [math.PR] 29 May 2013
CYCLES AND EIGENVALUES OF SEQUENTIALLY GROWING
RANDOM REGULAR GRAPHS
TOBIAS JOHNSON AND SOUMIK PAL
Abstract. Consider the sum of d many iid random permutation matrices on
n labels along with their transposes. The resulting matrix is the adjacency ma-
trix of a random regular (multi)-graph of degree 2d on n vertices. It is known
that the distribution of smooth linear eigenvalue statistics of this matrix is
given asymptotically by sums of Poisson random variables. This is in contrast
with Gaussian fluctuation of similar quantities in the case of Wigner matrices.
It is also known that for Wigner matrices the joint fluctuation of linear eigen-
value statistics across minors of growing sizes can be expressed in terms of
the Gaussian Free Field (GFF). In this article we explore joint asymptotic (in
n) fluctuation for a coupling of all random regular graphs of various degrees
obtained by growing each component permutation according to the Chinese
Restaurant Process. Our primary result is that the corresponding eigenvalue
statistics can be expressed in terms of a family of independent Yule processes
with immigration. These processes track the evolution of short cycles in the
graph. If we now take d to infinity, certain GFF-like properties emerge.
1. Introduction
We consider graphs that have labeled vertices and are regular, i.e., every ver-
tex has the same degree. We allow our graphs to have loops and multiple edges
(such graphs are sometimes called multigraphs or pseudographs). Additionally, our
graphs will be sparse in the sense that the degree will be negligible compared to
the order. Every such graph has an associated adjacency matrix whose (i,j)th
element is the number of edges between vertices i and j, with loops counted twice.
When the graph is randomly selected, the matrix is random, and we are interested
in studying the eigenvalues of the resulting symmetric matrix. Note that, due to
regularity, it does not matter whether we consider the eigenvalues of the adjacency
or the Laplacian matrix.
The precise distribution of this random regular graph is somewhat ad hoc. We
will use what is called the permutation model. Consider the permutation digraphs
generated by d many iid random permutations on n labels. We remove the directions
of the edges and collapse all these graphs on one another. This results in a 2d-regular
graph on n vertices, denoted by G(n,2d). At the matrix level this is given by adding
all the d permutation matrices and their transposes.
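The matrix-level description above can be sketched in a few lines; the following is a minimal simulation of the permutation model (the function names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_adjacency(n, d):
    """Adjacency matrix of G(n, 2d): sum of d iid uniform permutation
    matrices on n labels and their transposes (a loop counts twice)."""
    A = np.zeros((n, n), dtype=int)
    for _ in range(d):
        pi = rng.permutation(n)      # uniform random permutation of [n]
        for i in range(n):
            A[i, pi[i]] += 1         # directed edge i -> pi(i)
            A[pi[i], i] += 1         # ... and its transpose
    return A

A = permutation_adjacency(100, 3)
```

Every row of the result sums to 2d (here 6), and a fixed point of one of the permutations contributes 2 to the diagonal, i.e. a loop counted twice.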
Date: May 29, 2013.
2000 Mathematics Subject Classification. 60B20, 05C80.
Key words and phrases. Random regular graphs, eigenvalue fluctuations, Chinese Restaurant
Process, minors of random matrices.
This research is partially supported by NSF grant DMS-1007563.
Our present work is an extension of the study of eigenvalue fluctuations carried
out in [DJPP12]. We are motivated by the recent work by Borodin on joint eigen-
value fluctuations of minors of Wigner matrices and the (massless or zero-boundary)
Gaussian Free Field (GFF) [Bor10a, Bor10b]. Eigenvalues of minors are closely re-
lated to interacting particle systems [Fer10, FF10], and the KPZ universality class
of random surfaces [BF08]. See [JN06] for more on eigenvalues of minors of GUE
and [ANvM11] for those of Dyson’s Brownian motion.
Let us consider a particular but important case of Borodin's result in [Bor10a] (single sequence, the entire N). An n × n real symmetric Wigner matrix has iid upper triangular off-diagonal elements with four moments identical to the standard Gaussian. The diagonal elements are usually taken to be iid with mean zero and variance two. Notice that every principal submatrix (called a minor in this context) of a Wigner matrix is again a Wigner matrix of a smaller order. Thus, on some probability space one can construct an infinite-order Wigner matrix W whose n × n minor W(n) is a Wigner matrix of order n.
Let z be a complex number in the upper half plane H. Define y = |z|^2 and x = 2ℜ(z). Consider the minor W(⌊ny⌋), and let N(z) be the number of its eigenvalues that are greater than or equal to √n x. Define the height function

(1)    Hn(z) := √(π/2) N(z).

Then Borodin shows that {Hn(z) − EHn(z), z ∈ H}, viewed as distributions, converges in law to a generalized Gaussian process on H with a covariance kernel

(2)    C(z,w) = (1/2π) ln |(z − w̄)/(z − w)|.

The above is the covariance kernel for the GFF on the upper half plane.
An equivalent assertion is the following. Let [n] denote the set of integers {1,2,...,n}. Consider the Chebyshev polynomials of the first kind, {Tn, n = 0,1,2,...}, on the interval [−1,1]. These polynomials are given by the identity Tn(cos(θ)) ≡ cos(nθ). We specialize [Bor10a, Proposition 3] for the case of GOE (β = 1). Fix m positive real numbers t1 < t2 < ... < tm. In the notation of [Bor10a], we take L = n and Bi(n) = [⌊tin⌋]. Then, for any positive integers j1, j2, ..., jm, the random vector

( tr Tji(W(⌊tin⌋)/2√(tin)) − E tr Tji(W(⌊tin⌋)/2√(tin)),  i ∈ [m] )

converges in law, as n tends to infinity, to a centered Gaussian vector. For s ≤ t,

(3)    lim_{n→∞} Cov( tr Ti(W(⌊tn⌋)/2√(tn)), tr Tk(W(⌊sn⌋)/2√(sn)) ) = δik (k/2) (s/t)^{k/2},

which gives the covariance kernel of the limiting vector. In particular, all such covariances are zero when i ≠ k. Note that the traces can be expressed as integrals of the height function of the corresponding submatrices. Thus, by approximating continuous compactly supported functions of z by a function that is piecewise constant in y and polynomial in x, one gets the kernel (2).
1.1. Main results. By a tower of random permutations we mean a sequence of
random permutations (π(n),n ∈ N) such that
(i) π(n) is a uniformly distributed random permutation of [n] for each n, and
(ii) for each n, if π(n) is written as a product of cycles, then π(n−1) is derived from π(n) by deletion of the element n from its cycle.
The stochastic process that grows π(n) from π(n−1) by sequentially inserting an element n randomly is called the Chinese Restaurant Process (CRP). We will review its basic principles in a later section. In [KOV04] and other related work, a sequence of permutations satisfying condition (ii) is called a virtual permutation, and the distribution on virtual permutations satisfying condition (i) is considered as a substitute for Haar measure on S(∞), the infinite symmetric group. This is used to study the representation theory of S(∞), with connections to Random Matrix Theory. A recent extension of this idea is [BNN11].
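Conditions (i) and (ii) can be realized concretely by the CRP insertion rule: the new element n either starts its own cycle (with probability 1/n) or is spliced into the cycle of a uniformly chosen old element. A hedged sketch (the function names are ours):

```python
import random

random.seed(1)

def crp_step(pi):
    """Grow a uniform permutation of [m] into a uniform permutation of
    [m+1]; pi is a dict mapping each element to its image."""
    n = len(pi) + 1
    j = random.randrange(1, n + 1)   # uniform over {1, ..., n}
    if j == n:
        pi[n] = n                    # n starts a new cycle (fixed point)
    else:
        pi[n] = pi[j]                # splice n into j's cycle, right after j
        pi[j] = n
    return pi

pi = {}
for _ in range(20):
    crp_step(pi)                     # pi is now a uniform permutation of [20]
```

Deleting the element n from its cycle inverts the step, which is exactly condition (ii) above.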
Now suppose we construct a countable collection {Πd, d ∈ N} of towers of random permutations. We will denote the permutations in Πd by {π(n)_d, n ∈ N}. Then it is possible to model every possible G(n,2d) by adding the permutation matrices (and their transposes) corresponding to {π(n)_j, 1 ≤ j ≤ d}. In what follows we will keep d fixed and consider n as a growing parameter. Thus, Gn will represent G(n,2d) for some fixed d. Here and later, G0 will represent the empty graph. We construct a continuous-time version of this by inserting new vertices into Gn at rate n + 1. Formally, define independent times Ti ∼ Exp(i), and let

Mt = max{ m : T1 + ··· + Tm ≤ t },

and define the continuous-time Markov chain G(t) = G_{Mt}. When d = 1, this process is essentially just a continuous-time version of the CRP itself. Though this case is unusual compared to the rest—for example, G(t) is likely to be disconnected when d = 1 and connected when d is larger—our results do still hold.

Our first result is about the process of short cycles in the graph process G(t). By a cycle of length k in a graph, we mean what is sometimes called a simple cycle: a walk in the graph that begins and ends at the same vertex, and that otherwise repeats no vertices. We will give a more formal definition in Section 2.2. Let (C(s)_k(t), k ∈ N) denote the number of cycles of various lengths k that are present in G(s + t). This process is not Markov, but nonetheless it converges to a Markov process (indexed by t) as s tends to infinity.

To describe the limit, define

a(d,k) = (2d − 1)^k − 1 + 2d when k is even,  and  a(d,k) = (2d − 1)^k + 1 when k is odd.

Consider the set of natural numbers N = {1,2,...} with the measure

μ(k) = (1/2)[a(d,k) − a(d,k − 1)],    k ∈ N,  a(d,0) := 0.

Consider a Poisson point process χ on N × [0,∞) with an intensity measure given on N × (0,∞) by the product measure μ ⊗ Leb, where Leb is the Lebesgue measure, and with additional masses of a(d,k)/2k on (k,0) for k ∈ N.

Let P̂x denote the law of a one-dimensional pure-birth process on N, starting from x ∈ N, given by the generator

Lf(k) = k (f(k + 1) − f(k)),    k ∈ N.

This is also known as the Yule process.
Suppose we are given a realization of χ. For any atom (k,y) of the countably many atoms of χ, we start an independent process (X_{k,y}(t), t ≥ 0) with law P̂k. Define the random sequence

Nk(t) := Σ_{(j,y) ∈ χ ∩ ([k]×[0,t])} 1{X_{j,y}(t − y) = k}.

In other words, at time t, for every site k, we count how many of the processes that started at time y ≤ t at site j ≤ k are currently at k. Note that both (Nk(·), k ∈ N) and (Nk(·), k ∈ [K]), for some K ∈ N, are Markov processes, while Nk(·) for fixed k is not.
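This construction is easy to simulate. The sketch below (our own code, assuming the paper's a(d,k) and μ) seeds Poisson-many atoms at time 0, lets fresh atoms immigrate at rate μ(j), and runs an independent Yule process from each atom:

```python
import random

random.seed(2)

def a(d, k):
    # a(d,k) = (2d-1)^k - 1 + 2d for even k, (2d-1)^k + 1 for odd k; a(d,0) = 0
    if k == 0:
        return 0
    return (2 * d - 1) ** k + (2 * d - 1 if k % 2 == 0 else 1)

def poisson_sample(lam):
    # Poisson sample via cumulative exponential gaps
    n, s = 0, random.expovariate(1.0)
    while s < lam:
        n += 1
        s += random.expovariate(1.0)
    return n

def yule_position(start, t):
    # state of a pure-birth process (birth rate k in state k) after time t
    k, clock = start, random.expovariate(start)
    while clock <= t:
        k += 1
        clock += random.expovariate(k)
    return k

def simulate_N(d, K, t):
    """One sample of (N_1(t), ..., N_K(t)). Atoms (j,0) carry Poisson
    mass a(d,j)/2j; immigration on (0,t] arrives at rate mu(j).
    Sites above K never re-enter [K], so only j <= K matters."""
    N = [0] * (K + 2)
    for j in range(1, K + 1):
        for _ in range(poisson_sample(a(d, j) / (2 * j))):
            pos = yule_position(j, t)
            if pos <= K:
                N[pos] += 1
        mu = (a(d, j) - a(d, j - 1)) / 2
        for _ in range(poisson_sample(mu * t)):
            y = random.uniform(0.0, t)
            pos = yule_position(j, t - y)
            if pos <= K:
                N[pos] += 1
    return N[1:K + 1]
```

Averaging many samples of N_1(t) for d = 2 reproduces the stationary mean a(2,1)/2 = 2 at any t, consistent with the stationarity claimed in Theorem 1 below.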
Theorem 1. As s → ∞, the process (C(s)_k(t), k ∈ N, 0 ≤ t < ∞) converges in law, in the product topology on D∞[0,∞), to the Markov process (Nk(t), k ∈ N, 0 ≤ t < ∞). The limiting process is stationary.

Remark 2. In fact, the same argument used to prove Theorem 1 shows that the process (C(s)_k(t), −∞ < t < ∞) converges in law to the Markov process (Nk(t), −∞ < t < ∞) running in stationarity. The same conclusion holds for all the following theorems in this section.
We now explore the joint convergence across various d's. Define C(s)_{d,k}(t) naturally, stressing the dependence on the parameter d.

Theorem 3. There is a joint process convergence of (C(s)_{i,k}(t), k ∈ N, i ∈ [d], t ≥ 0) to a limiting process (N_{i,k}(t), k ∈ N, i ∈ [d], t ≥ 0). This limit is a Markov process whose marginal law for every fixed d is described in Theorem 1. Moreover, for any d ∈ N, the process (N_{d+1,k}(·) − N_{d,k}(·), k ∈ N) is independent of the process (N_{i,k}(·), k ∈ N, i ∈ [d]) and evolves as a Markov process. Its generator (defined on functions dependent on finitely many coordinates) is given by

Lf(x) = Σ_{k=1}^{∞} k x_k [f(x + e_{k+1} − e_k) − f(x)] + Σ_{k=1}^{∞} ν(d,k) [f(x + e_k) − f(x)],

where x is a nonnegative sequence, (e_k, k ∈ N) is the canonical orthonormal basis of ℓ^2, and

ν(d,k) = (1/2)[a(d + 1,k) − a(d + 1,k − 1) − a(d,k) + a(d,k − 1)].
Remark 4. Theorems 1 and 3 show an underlying branching process structure. We
actually prove a more general decomposition where cycles are tracked by edge labels.
The additive structure also imparts a natural intertwining relationship between the
Markov operators. See [CPY98, Section 2] and [DF90, Bor10a].
We now focus on eigenvalues of G(t). Note that there is no easy exact relationship between the eigenvalues of Gn for various n, since the eigenvectors play a role in determining any such identity. In fact, the eigenvalues of Gn and Gn+1 need not be interlaced. However, one can consider linear eigenvalue statistics for the graph G(n,2d). That is, for any d-regular graph G on n vertices and any function f : R → R, define the random variable

tr f(G) := Σ_{i=1}^{n} f(λi),
where λ1 ≥ ... ≥ λn are the eigenvalues of the adjacency matrix of G divided by 2(2d − 1)^{1/2}. The scaling is necessary to take a limit with respect to d.
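For a concrete reading of tr f(G), here is a sketch (names ours) that evaluates the Chebyshev statistics appearing later in Theorem 6 directly from the scaled spectrum:

```python
import numpy as np

def tr_chebyshev(A, k, d):
    """tr T_k of a 2d-regular adjacency matrix A, with eigenvalues
    scaled by 2*(2d-1)**0.5 as in the text."""
    lam = np.linalg.eigvalsh(A) / (2.0 * np.sqrt(2 * d - 1))
    Tk = np.polynomial.chebyshev.Chebyshev.basis(k)   # T_k as a polynomial
    return float(Tk(lam).sum())

# d = 1 sanity check: the n-cycle is 2-regular with eigenvalues
# 2*cos(2*pi*j/n); the scaled spectrum is cos(2*pi*j/n), so
# tr T_k = sum_j cos(2*pi*k*j/n) vanishes for 0 < k < n
n = 10
A = np.zeros((n, n), dtype=int)
for i in range(n):
    A[i, (i + 1) % n] += 1
    A[(i + 1) % n, i] += 1
```

Here tr T_0 of the 10-cycle is simply n = 10, while tr T_3 vanishes up to floating-point error.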
By a polynomial basis we refer to a sequence of polynomials {f0 ≡ 1, f1, f2, ...} such that fk is a polynomial of degree k in a single real argument. In the statement below [∞] will refer to N.

Theorem 5. There exists a polynomial basis {fi, i ∈ N} (depending on d) such that for any K ∈ N ∪ {∞}, the process (tr fk(G(s + t)), k ∈ [K], t ≥ 0) converges in law, as s tends to infinity, to the Markov process (Nk(t), k ∈ [K], t ≥ 0) of Theorem 1. (The polynomials are given explicitly in (15).) Hence, for any polynomial f, the process (tr f(G(s + t))) converges to a linear combination of the coordinate processes of (Nk(t), k ∈ N).

The Markov property is especially intriguing since, to the best of our knowledge, no similar property of eigenvalues of the standard Random Matrix ensembles is known. For the special case of minors of the Gaussian Unitary/Orthogonal Ensembles, the entire distribution of eigenvalues across minors of various sizes does satisfy a Markov property. However, this is facilitated by the known symmetry properties of the eigenvectors and does not extend to other examples of Wigner matrices.

For our final result we will take d to infinity. We will make the following notational convention: for any polynomial f, we will denote the limiting process of (tr f(G(s + t)), t ≥ 0) by (tr f(G(∞ + t)), t ≥ 0). Recall that this process is a linear combination of (Nk(t), k ∈ N, t ≥ 0).

Theorem 6. Let {Tk, k ∈ N} denote the Chebyshev orthogonal polynomials of the first kind on [−1,1]. As d tends to infinity, the collection of processes

(tr Tk(G(∞ + t)) − E tr Tk(G(∞ + t)), t ≥ 0, k ∈ N)

converges weakly in D∞[0,∞) to a collection of independent Ornstein-Uhlenbeck processes (Uk(t), t ≥ 0, k ∈ N), running in equilibrium. Here the equilibrium distribution of Uk is N(0, k/2), Uk satisfies the stochastic differential equation

dUk(t) = −k Uk(t) dt + k dWk(t),    t ≥ 0,

and (Wk, k ∈ N) are iid standard one-dimensional Brownian motions.

Thus, the collection of random variables (tr Tk(G(∞ + t)) − E tr Tk(G(∞ + t))), indexed by k and t, converges as d tends to infinity to a centered Gaussian process with covariance kernel given by

(4)    lim_{d→∞} Cov( tr Ti(G(∞ + t)), tr Tk(G(∞ + s)) ) = δik (k/2) e^{k(s−t)}    for s ≤ t.
A comparison of (4) with Borodin’s result (3) shows that the above limit captures
a key property of the GFF covariance structure. The appearance of the exponential
is merely due to a deterministic time-change of the process. A somewhat more
detailed discussion can be found in the following section.
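The limiting OU processes of Theorem 6 admit exact simulation from their Gaussian transition kernel; a sketch under our own naming:

```python
import numpy as np

rng = np.random.default_rng(4)

def ou_path(k, times):
    """Exact sample path of dU = -k U dt + k dW at the given times,
    started from the equilibrium law N(0, k/2)."""
    u = rng.normal(0.0, np.sqrt(k / 2.0))
    path = [u]
    for h in np.diff(times):
        mean = u * np.exp(-k * h)                      # e^{-kh} mean decay
        var = (k / 2.0) * (1.0 - np.exp(-2 * k * h))   # fresh Gaussian noise
        u = rng.normal(mean, np.sqrt(var))
        path.append(u)
    return np.array(path)
```

In equilibrium Cov(Uk(s), Uk(t)) = (k/2) e^{k(s−t)} for s ≤ t, matching the kernel in (4).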
Remark 7. A common model for random regular graphs is the configuration model or pairing model (see [Wor99] for more information). The model is defined as follows: Start with n buckets, each containing d prevertices. Then, separate these dn prevertices into pairs, choosing uniformly from every possible pairing. Finally, collapse each bucket into a single vertex, making an edge between one vertex and another if a prevertex in one bucket is paired with a prevertex in the other bucket.
This model has the advantage that choosing a graph from it conditional on it
containing no loops or parallel edges is the same as choosing a graph uniformly
from the set of graphs without loops and parallel edges. The model also allows for
graphs of odd degrees, unlike the permutation model.
It is possible to construct a process of growing random regular graphs simi-
lar to the one in this paper using a dynamic version of this model. Given some
initial pairing of prevertices labeled {1,...,dn}, extend it to a random pairing of
{1,...,dn+2} by the following procedure: Choose X uniformly from {1,...,dn+1}.
Pair dn+2 with X. If X = dn+1, leave the other pairs unchanged; if not, pair the
previous partner of X with dn + 1. This is an analogue of the CRP in the setting
of random pairings, in that if the initial pairing is uniformly chosen, then so is the
extended one.
If d is odd, we repeat this procedure a total of d times to extend a random
d-regular graph on n vertices to have n + 2 vertices (when d is odd, the number of
vertices in the graph must be even). When d is even, repeat d/2 times to add one
new vertex to a random graph. In this way, we can construct a sequence of growing
random regular graphs. We believe that all the results of this paper hold in this
model with minor changes, with similar proofs.
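The CRP-like extension step for pairings described above can be sketched as follows (0-indexed, with our own names):

```python
import random

random.seed(3)

def extend_pairing(partner):
    """Extend a uniform perfect matching of {0,...,m-1} (partner[x] is
    the pair of x) to a uniform matching of {0,...,m+1}: pair the new
    point m+1 with a uniform X among the old points and m; if X is an
    old point, its former partner is re-paired with m."""
    m = len(partner)
    x = random.randrange(m + 1)      # uniform over {0, ..., m}
    if x == m:
        partner[m], partner[m + 1] = m + 1, m
    else:
        old = partner[x]
        partner[x], partner[m + 1] = m + 1, x
        partner[old], partner[m] = m, old
    return partner

pairing = {0: 1, 1: 0}
for _ in range(5):
    extend_pairing(pairing)          # now a perfect matching of 12 points
```

If the input matching is uniformly distributed, so is the output, mirroring the CRP property for permutations.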
1.2. Existing literature. The study of the spectral properties of sparse regu-
lar random graphs is motivated by several different problems. These matrices do
not fall within the purview of the standard techniques of Random Matrix Theory
(RMT) due to their sparsity and lack of independence between entries. However, ex-
tensive simulations ([JMRR99]) point to conjectures that these matrices still belong
to the universality class of random matrices. For example, it is conjectured via simu-
lations ([MNS08]) that the distribution of the second largest eigenvalue (in absolute
value) is given by the Tracy-Widom distribution. In the physics literature, eigen-
values of random regular graphs have been considered as a toy model of quantum
chaos ([Smi10], [OGS09], [OS10]). Simulations suggest that the eigenvalue spacing
distribution has the same limit as that of the Wigner matrices. A limiting Gaussian wave character of eigenvectors has also been conjectured ([Elo08, Elo10, ES10]).
Some fine properties of eigenvalues and eigenvectors can indeed be proved for a
single permutation matrix; see [Wie00] and [BAD11].
Somewhat complicating the matter is the fact that when the degree d is kept fixed
and we let n go to infinity, several classical results about random matrix ensembles
fail. A bit more elaboration on this point is needed. The two parameters in the
ensemble of random graphs are the degree d and the order n. In the permutation
model it is possible to construct random regular graphs for every possible value of
(d,n) where d is an even positive integer and n is any positive integer. Hence one can
consider various kinds of limits of these parameters. We will refer to the procedure of taking a sequence of (d,n) in which both parameters go to infinity simultaneously as the diagonal limit. To maintain sparsity¹, it is usually assumed that
d is at most poly-logarithmic in n. No lower bound on the growth rate of d is
assumed. However, results are often easier to prove when d is kept fixed and we let
n go to infinity. Suppose for each d one gets a limiting object (say a probability
distribution); one can now take d to infinity and explore limits of the sequence of
¹The non-sparse case can typically be absorbed within the standard techniques of RMT by comparing with a corresponding Erdős–Rényi graph whose adjacency matrix has independent entries.
the bound

|I^l_α ∩ I_j| ≤ Σ_{p=1}^{l∧(j−l)} (k/p) (l−1 choose p−1) (j−l−1 choose p−1) (k−l−1 choose p−1) (p−1)! · 2^{p−1} [n−p−l]_{j−p−l} (2d−1)^{j−l}.
We apply the bounds

(l−1 choose p−1), (k−l−1 choose p−1) ≤ r^{p−1}/(p−1)!,    (j−l−1 choose p−1) ≤ (er/(p−1))^{p−1},

to get
|I^l_α ∩ I_j| ≤ k(2d−1)^{j−l} [n−1−l]_{j−1−l} ( 1 + Σ_{p=2}^{l∧(j−l)} (1/p) (2e^2 r^3/(p−1)^2)^{p−1} · 1/[n−1−l]_{p−1} ).
Since r ≤ n^{1/10}, the sum in the above equation is bounded by an absolute constant. Applying this bound and (24), for any α ∈ Ik and l ≥ 1,
Σ_{β∈I^l_α} Cov(Iα, Iβ) ≤ Σ_{j=l+1}^{r} Σ_{β∈I^l_α∩I_j} 1/[n]_{k+j−l} ≤ Σ_{j=l+1}^{r} O( k(2d−1)^{j−l}/n^{k+1} ) = O( k(2d−1)^{r−l}/n^{k+1} ).
Therefore

Σ_{α∈I} Σ_{l≥1} Σ_{β∈I^l_α} Cov(Iα, Iβ) = Σ_{k=1}^{r} Σ_{α∈Ik} Σ_{l=1}^{k−1} Σ_{β∈I^l_α} Cov(Iα, Iβ)
    ≤ Σ_{k=1}^{r} Σ_{α∈Ik} Σ_{l=1}^{k−1} O( k(2d−1)^{r−l}/n^{k+1} )
    = Σ_{k=1}^{r} ([n]_k a(d,k)/2k) · O( k(2d−1)^{r−1}/n^{k+1} )
    = Σ_{k=1}^{r} O( (2d−1)^{r+k−1}/n ) = O( (2d−1)^{2r−1}/n ).    (25)
Last, we must bound Σ_{α∈I} Σ_{β∈I^0_α} Cov(Iα, Iβ). For any word w, let e^w_i be the number of appearances of πi and π_i^{−1} in w. Let α and β be cycles with words w and u respectively, and let k = |α| and j = |β|. Suppose that β ∈ I^0_α. Then

Cov(Iα, Iβ) = Π_{i=1}^{d} 1/[n]_{e^w_i + e^u_i} − Π_{i=1}^{d} 1/([n]_{e^w_i} [n]_{e^u_i}) ≤ (⟨e^w, e^u⟩/n) Π_{i=1}^{d} 1/[n]_{e^w_i + e^u_i} ≤ ⟨e^w, e^u⟩/(n [n]_{k+j})

by Lemma 22. For any pair of words w ∈ Wk and u ∈ Wj, there are at most [n]_k [n]_j pairs of cycles α, β ∈ I with words w and u, respectively. Enumerating over all w ∈ Wk and u ∈ Wj, we count each pair of cycles α, β exactly 4kj times. Thus
Σ_{α∈Ik} Σ_{β∈I^0_α∩Ij} Cov(Iα, Iβ) ≤ ([n]_k [n]_j/(4kj n [n]_{k+j})) Σ_{w∈Wk} Σ_{u∈Wj} ⟨e^w, e^u⟩ ≤ ((1 + O(r^2/n))/(4kj n)) ⟨ Σ_{w∈Wk} e^w, Σ_{u∈Wj} e^u ⟩.

The vector Σ_{w∈Wk} e^w has every entry equal by symmetry, as does Σ_{u∈Wj} e^u. Thus each entry of Σ_{w∈Wk} e^w is k a(d,k)/d, and each entry of Σ_{u∈Wj} e^u is j a(d,j)/d. The inner product in the above equation comes to kj a(d,k) a(d,j)/d, giving us

Σ_{α∈Ik} Σ_{β∈I^0_α∩Ij} Cov(Iα, Iβ) ≤ a(d,k) a(d,j)(1 + O(r^2/n))/(4dn) = O( (2d−1)^{j+k−1}/n ).
Summing over all 1 ≤ k, j ≤ r,

Σ_{α∈I} Σ_{β∈I^0_α} Cov(Iα, Iβ) = O( (2d−1)^{2r−1}/n ).    (26)
We can now combine equations (22), (23), (25), and (26) with Proposition 20 to show that

dTV(I, Y) = O( (2d−1)^{2r−1}/n ).    (27)
Step 3. Approximation of Y by Z. By Lemma 21 and (21),

dTV(Y, Z) ≤ Σ_{α∈I} |EYα − EZα| ≤ Σ_{k=1}^{r} Σ_{α∈Ik} ( 1/[n]_k − 1/n^k ) = Σ_{k=1}^{r} (a(d,k)/2k)( 1 − [n]_k/n^k ).
Since [n]_k ≥ n^k(1 − k^2/2n),

dTV(Y, Z) ≤ Σ_{k=1}^{r} a(d,k) k/(4n) = O( r(2d−1)^r/n ).
Together with (27), this bounds the total variation distance between the laws of I and Z and proves the theorem. □
The distributions of any functionals of I and Z satisfy the same bound in total
variation distance. This gives us several results as easy corollaries, including an
improvement on [DJPP12, Theorem 11].
Corollary 24.
(i) Let (Zk, 1 ≤ k ≤ r) be a vector of independent Poisson random variables with EZk = a(d,k)/2k. Let Ck denote the number of k-cycles in Gn, a 2d-regular permutation random graph on n vertices. Then for some absolute constant c,

dTV( (Ck, 1 ≤ k ≤ r), (Zk, 1 ≤ k ≤ r) ) ≤ c(2d−1)^{2r−1}/n.

(ii) Let (Zw, w ∈ W′_K) be a vector of independent Poisson random variables with EZw = 1/h(w). Let Cw denote the number of cycles with word w in Gn, a 2d-regular permutation random graph on n vertices. Then for some absolute constant c,

dTV( (Cw, w ∈ W′_K), (Zw, w ∈ W′_K) ) ≤ c(2d−1)^{2K−1}/n.

Proof. Observe that Ck = Σ_{α∈Ik} Iα, and that if we define Zk = Σ_{α∈Ik} Zα, then (Zk, 1 ≤ k ≤ r) is distributed as described. Thus (i) follows from Theorem 14.
To prove (ii), note that Cw = Σ_α Iα, where the sum is over all cycles in I with word w. We then define Zw as the analogous sum over Zα. Since the number of cycles in I with word w is [n]_k/h(w), we have EZw = 1/h(w), and the total variation bound follows from Theorem 14. □
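For d = 1 the Poisson means a(1,k)/2k of part (i) reduce to the classical value 1/k for cycle counts of a uniform permutation, which is easy to check empirically (code and names ours):

```python
import random

random.seed(5)

def cycle_counts(n, r):
    """Counts of k-cycles, k = 1..r, in a uniform permutation of [n]."""
    pi = list(range(n))
    random.shuffle(pi)
    counts = [0] * (r + 1)
    seen = [False] * n
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = pi[j]
                length += 1
            if length <= r:
                counts[length] += 1
    return counts[1:]

trials, r = 4000, 3
means = [0.0] * r
for _ in range(trials):
    for k, c in enumerate(cycle_counts(200, r)):
        means[k] += c / trials
# means is approximately [1, 1/2, 1/3], i.e. a(1,k)/2k = 1/k
```

Both parities give a(1,k) = 2, so a(1,k)/2k = 1/k, recovering the classical Poisson(1/k) limit for permutation cycle counts.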
We can also use Theorem 14 to bound the likelihood that Gn contains two
overlapping cycles of size r or less.
Corollary 25. Let Gn be a 2d-regular permutation random graph on n vertices. Let E be the event that Gn contains two cycles of length r or less with a vertex in common. Then for some absolute constant c′, for all d ≥ 2 and n, r ≥ 1,

P[E] ≤ c′(2d−1)^{2r}/n.

Proof. Let E′ be the event that Zα = Zβ = 1 for two cycles α, β ∈ I that have a vertex in common. By Theorem 14,

P[E] ≤ P[E′] + c(2d−1)^{2r−1}/n.

For any cycle α ∈ Ik, there are at most k[n−1]_{j−1} a(d,j) cycles in Ij that share a vertex with α. For any such cycle β, the chance that Zα = 1 and Zβ = 1 is less than 1/([n]_k [n]_j). By a union bound,

P[E′] ≤ Σ_{k=1}^{r} ([n]_k a(d,k)/2k) Σ_{j=1}^{r} k[n−1]_{j−1} a(d,j)/([n]_k [n]_j) ≤ Σ_{k=1}^{r} Σ_{j=1}^{r} a(d,k) a(d,j)/(2n) = O( (2d−1)^{2r}/n ). □
Proof of Corollary 15. When d = 1, there is only one word of each length in W′_K, and statement (i) reduces to the well-known fact that the cycle counts of a random permutation converge to independent Poisson random variables (see [AT92] for much more on this subject). In this case, G(t) is made up of disjoint cycles for all times t, so that statement (ii) is trivially satisfied.

When d ≥ 2, let C(n)_w be the number of cycles with word w in Gn, as in Corollary 24(ii). The random vector (Cw(t), w ∈ W′_K) is a mixture of the random vectors (C(n)_w, w ∈ W′_K) over different values of n. That is,

P[ (Cw(t), w ∈ W′_K) ∈ A ] = Σ_{n=1}^{∞} P[Mt = n] P[ (C(n)_w, w ∈ W′_K) ∈ A ]

for any set A, recalling that G(t) = G_{Mt}. Corollary 24(ii) together with the fact that P[Mt > N] → 1 as t → ∞ for any N imply that (Cw(t), w ∈ W′_K) converges in law to (Zw, w ∈ W′_K), establishing statement (i). Statement (ii) follows in the same way from Corollary 25. □
Proof of Lemma 21. We will apply the Stein-Chen method directly. Define the operator A by

Ah(x) = Σ_{α∈I} E[Zα]( h(x + eα) − h(x) ) + Σ_{α∈I} xα( h(x − eα) − h(x) )

for any h : Z^{|I|}_+ → R and x ∈ Z^{|I|}_+. This is the Stein operator for the law of Z, and EAh(Z) = 0 for any bounded function h. By Proposition 10.1.2 and Lemma 10.1.3 in [BHJ92], for any set A ⊆ Z^{|I|}_+, there is a function h such that

Ah(x) = 1{x ∈ A} − P[Z ∈ A],

and this function has the property that

sup_{x∈Z^{|I|}_+, α∈I} |h(x + eα) − h(x)| ≤ 1.    (28)

Thus we can bound the total variation distance between the laws of Y and Z by bounding |EAh(Y)| over all such functions h.

We write Ah(Y) as

Ah(Y) = Σ_{α∈I} E[Yα]( h(Y + eα) − h(Y) ) + Σ_{α∈I} Yα( h(Y − eα) − h(Y) ) + Σ_{α∈I} (EZα − EYα)( h(Y + eα) − h(Y) ).

The first two of these sums have expectation zero, so

|EAh(Y)| ≤ Σ_{α∈I} |EZα − EYα| E|h(Y + eα) − h(Y)|.

By (28), |h(Y + eα) − h(Y)| ≤ 1, which proves the lemma. □
Proof of Lemma 22. We define a family of independent random maps σi and τi for 1 ≤ i ≤ d. Choose σi uniformly from all injective maps from [ai] to [n], and choose τi uniformly from all injective maps from [bi] to [n]. Effectively, σi and τi are random ordered subsets of [n]. We say that σi and τi clash if their images overlap. Then

P[σi and τi clash for some i] = 1 − Π_{i=1}^{d} [n]_{ai+bi}/([n]_{ai} [n]_{bi}).

For any 1 ≤ i ≤ d, 1 ≤ j ≤ ai, and 1 ≤ k ≤ bi, the probability that σi(j) = τi(k) is 1/n. By a union bound,

P[σi and τi clash for some i] ≤ Σ_{i=1}^{d} ai bi/n = ⟨a,b⟩/n.

We finish the proof by dividing both sides of this inequality by Π_{i=1}^{d} [n]_{ai+bi}. □
References
[ANvM11] Mark Adler, Eric Nordenstam, and Pierre van Moerbeke. The Dyson Brownian minor
process. Preprint. Available at arXiv:1006.2956, 2011.
[AT92] Richard Arratia and Simon Tavaré. The cycle structure of random permutations. Ann. Probab., 20(3):1567–1591, 1992.
[BAD11] Gérard Ben Arous and Kim Dang. On fluctuations of eigenvalues of random permutation matrices. Preprint. Available at arXiv:1106.2108, 2011.
[BF08] Alexei Borodin and Patrik Ferrari. Anisotropic growth of random surfaces in 2+1
dimensions. Preprint. Available at arXiv:0804.3035, 2008.
[BHJ92] A. D. Barbour, Lars Holst, and Svante Janson. Poisson approximation, volume 2 of
Oxford Studies in Probability. The Clarendon Press Oxford University Press, New
York, 1992. Oxford Science Publications.
[Bil99] Patrick Billingsley. Convergence of probability measures. Wiley Series in Probability
and Statistics: Probability and Statistics. John Wiley & Sons Inc., New York, second
edition, 1999. A Wiley-Interscience Publication.
[BNN11] Paul Bourgade, Joseph Najnudel, and Ashkan Nikeghbali. A unitary extension of vir-
tual permutations. Preprint. Available at arXiv:1102.2633, 2011.
[Bor10a] Alexei Borodin. CLT for spectra of submatrices of Wigner random matrices. Preprint.
Available at arXiv:1010.0898, 2010.
[Bor10b] Alexei Borodin. CLT for spectra of submatrices of Wigner random matrices II. Sto-
chastic evolution. Preprint. Available at arXiv:1011.3544, 2010.
[CPY98] Philippe Carmona, Frédérique Petit, and Marc Yor. Beta-gamma random variables and
intertwining relations between certain Markov processes. Rev. Mat. Iberoamericana,
14(2):311–367, 1998.
[DF90] Persi Diaconis and James Allen Fill. Strong stationary times via a new form of duality.
Ann. Probab., 18(4):1483–1522, 1990.
[DJPP12] Ioana Dumitriu, Tobias Johnson, Soumik Pal, and Elliot Paquette. Functional limit
theorems for random regular graphs. Probab. Theory Related Fields, pages 1–55, 2012.
Published online, 25 August 2012.
[DP12] Ioana Dumitriu and Soumik Pal. Sparse regular random graphs: Spectral density and
eigenvectors. Ann. Probab., 40(5):2197–2235, 2012.
[EK86] Stewart N. Ethier and Thomas G. Kurtz. Markov processes. Wiley Series in Probability
and Mathematical Statistics: Probability and Mathematical Statistics. John Wiley &
Sons Inc., New York, 1986. Characterization and convergence.
[Elo08] Yehonatan Elon. Eigenvectors of the discrete Laplacian on regular graphs—a statistical
approach. J. Phys. A, 41(43):435203, 17, 2008.
[Elo10] Yehonatan Elon. Gaussian waves on the regular tree. Preprint. Available at arXiv:0907.5065, 2010.
[ES10] Yehonatan Elon and Uzy Smilansky. Percolating level sets of the adjacency eigenvectors
of d-regular graphs. J. Phys. A, 43(45):455209, 13, 2010.
[Fer10] Patrik L. Ferrari. From interacting particle systems to random matrices. J. Stat. Mech.
Theory Exp., (10):P10016, 15, 2010.
[FF10] Patrik L. Ferrari and René Frings. On the partial connection between random matrices
and interacting particle systems. J. Stat. Phys., 141(4):613–637, 2010.
[JMRR99] Dmitry Jakobson, Stephen D. Miller, Igor Rivin, and Zeév Rudnick. Eigenvalue spacings for regular graphs. In Emerging applications of number theory (Minneapolis, MN,
1996), volume 109 of IMA Vol. Math. Appl., pages 317–327. Springer, New York, 1999.
[JN06] Kurt Johansson and Eric Nordenstam. Eigenvalues of GUE minors. Electron. J.
Probab., 11:no. 50, 1342–1371, 2006.
[KOV04] Sergei Kerov, Grigori Olshanski, and Anatoly Vershik. Harmonic analysis on the infi-
nite symmetric group. Invent. Math., 158(3):551–642, 2004.
[LP10] Nati Linial and Doron Puder. Word maps and spectra of random graph lifts. Random
Structures Algorithms, 37(1):100–135, 2010.
[MNS08] Steven J. Miller, Tim Novikoff, and Anthony Sabelli. The distribution of the largest
nontrivial eigenvalues in families of random regular graphs. Experiment. Math.,
17(2):231–244, 2008.
[OGS09] Idan Oren, Amit Godel, and Uzy Smilansky. Trace formulae and spectral statistics for
discrete Laplacians on regular graphs. I. J. Phys. A, 42(41):415101, 20, 2009.
[OS10] Idan Oren and Uzy Smilansky. Trace formulas and spectral statistics for discrete Lapla-
cians on regular graphs (II). J. Phys. A, 43(22):225205, 13, 2010.
[Pit06] Jim Pitman. Combinatorial stochastic processes, volume 1875 of Lecture Notes in
Mathematics. Springer-Verlag, Berlin, 2006. Lectures from the 32nd Summer School
on Probability Theory held in Saint-Flour, July 7–24, 2002, With a foreword by Jean
Picard.
[She07] Scott Sheffield. Gaussian free fields for mathematicians. Probability Theory and Related
Fields, 139(3-4):521–541, 2007.
[Smi10] Uzy Smilansky. Discrete graphs - a paradigm model for quantum chaos. In Séminaire Poincaré, volume XIV, pages 1–26. 2010.
[Spo98] Herbert Spohn. Dyson's model of interacting Brownian motions at arbitrary coupling strength. Markov Proc. Rel. Fields, pages 649–661, 1998.
[TVW13] Linh V. Tran, Van H. Vu, and Ke Wang. Sparse random graphs: Eigenvalues and
eigenvectors. Random Structures Algorithms, 42(1):110–134, 2013.
[Wie00] Kelly Wieand. Eigenvalue distributions of random permutation matrices. Ann.
Probab., 28(4):1563–1587, 2000.
[Wor99] Nicholas C. Wormald. Models of random regular graphs. In Surveys in combinatorics,
1999 (Canterbury), volume 267 of London Math. Soc. Lecture Note Ser., pages 239–
298. Cambridge Univ. Press, Cambridge, 1999.
Department of Mathematics, University of Washington, Seattle, WA 98195
E-mail address: toby@math.washington.edu
Department of Mathematics, University of Washington, Seattle, WA 98195
E-mail address: soumik@u.washington.edu