
arXiv:1203.1113v3 [math.PR] 29 May 2013

CYCLES AND EIGENVALUES OF SEQUENTIALLY GROWING RANDOM REGULAR GRAPHS

TOBIAS JOHNSON AND SOUMIK PAL

Abstract. Consider the sum of d many iid random permutation matrices on n labels along with their transposes. The resulting matrix is the adjacency matrix of a random regular (multi)graph of degree 2d on n vertices. It is known that the distribution of smooth linear eigenvalue statistics of this matrix is given asymptotically by sums of Poisson random variables. This is in contrast with the Gaussian fluctuation of similar quantities in the case of Wigner matrices. It is also known that for Wigner matrices the joint fluctuation of linear eigenvalue statistics across minors of growing sizes can be expressed in terms of the Gaussian Free Field (GFF). In this article we explore the joint asymptotic (in n) fluctuation for a coupling of all random regular graphs of various degrees obtained by growing each component permutation according to the Chinese Restaurant Process. Our primary result is that the corresponding eigenvalue statistics can be expressed in terms of a family of independent Yule processes with immigration. These processes track the evolution of short cycles in the graph. If we now take d to infinity, certain GFF-like properties emerge.

1. Introduction

We consider graphs that have labeled vertices and are regular, i.e., every vertex has the same degree. We allow our graphs to have loops and multiple edges (such graphs are sometimes called multigraphs or pseudographs). Additionally, our graphs will be sparse in the sense that the degree will be negligible compared to the order. Every such graph has an associated adjacency matrix whose (i,j)th element is the number of edges between vertices i and j, with loops counted twice. When the graph is randomly selected, the matrix is random, and we are interested in studying the eigenvalues of the resulting symmetric matrix. Note that, due to regularity, it does not matter whether we consider the eigenvalues of the adjacency or the Laplacian matrix.

The precise distribution of this random regular graph is somewhat ad hoc. We will use what is called the permutation model. Consider the permutation digraphs generated by d many iid random permutations on n labels. We remove the directions of the edges and collapse all these graphs on one another. This results in a 2d-regular graph on n vertices, denoted by G(n,2d). At the matrix level, this is given by adding all the d permutation matrices and their transposes.
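The matrix-level description above can be sketched in a few lines of numpy. This is a minimal illustration of the permutation model; the function name `permutation_model` and the parameter choices are ours, not the paper's.

```python
# Sketch of the permutation model G(n, 2d): sum d iid uniform permutation
# matrices and their transposes. Loops and multiple edges are allowed; a loop
# contributes 2 to the diagonal, so every row and column sums to exactly 2d.
import numpy as np

def permutation_model(n, d, rng):
    A = np.zeros((n, n), dtype=int)
    for _ in range(d):
        sigma = rng.permutation(n)          # a uniform random permutation of [n]
        P = np.eye(n, dtype=int)[sigma]     # its permutation matrix
        A += P + P.T                        # undirected edge i -- sigma(i)
    return A

rng = np.random.default_rng(0)
A = permutation_model(10, 3, rng)
assert (A.sum(axis=0) == 6).all() and (A == A.T).all()   # 2d-regular, symmetric
```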

Date: May 29, 2013.
2000 Mathematics Subject Classification. 60B20, 05C80.
Key words and phrases. Random regular graphs, eigenvalue fluctuations, Chinese Restaurant Process, minors of random matrices.
This research is partially supported by NSF grant DMS-1007563.


Our present work is an extension of the study of eigenvalue fluctuations carried out in [DJPP12]. We are motivated by the recent work by Borodin on joint eigenvalue fluctuations of minors of Wigner matrices and the (massless or zero-boundary) Gaussian Free Field (GFF) [Bor10a, Bor10b]. Eigenvalues of minors are closely related to interacting particle systems [Fer10, FF10] and the KPZ universality class of random surfaces [BF08]. See [JN06] for more on eigenvalues of minors of the GUE and [ANvM11] for those of Dyson's Brownian motion.

Let us consider a particular but important case of Borodin's result in [Bor10a] (single sequence, the entire N). An n × n real symmetric Wigner matrix has iid upper triangular off-diagonal elements with four moments identical to the standard Gaussian. The diagonal elements are usually taken to be iid with mean zero and variance two. Notice that every principal submatrix (called a minor in this context) of a Wigner matrix is again a Wigner matrix of a smaller order. Thus, on some probability space one can construct an infinite order Wigner matrix W whose n × n minor W(n) is a Wigner matrix of order n.

Let z be a complex number in the upper half plane H. Define y = |z|² and x = 2ℜ(z). Consider the minor W(⌊ny⌋), and let N(z) be the number of its eigenvalues that are greater than or equal to √n x. Define the height function

(1)  H_n(z) := √(π/2) N(z).

Then Borodin shows that {H_n(z) − EH_n(z), z ∈ H}, viewed as distributions, converges in law to a generalized Gaussian process on H with covariance kernel

(2)  C(z,w) = (1/2π) ln | (z − w̄) / (z − w) |.

The above is the covariance kernel for the GFF on the upper half plane.

An equivalent assertion is the following. Let [n] denote the set of integers {1,2,...,n}. Consider the Chebyshev polynomials of the first kind, {T_n, n = 0,1,2,...}, on the interval [−1,1]. These polynomials are given by the identity T_n(cos(θ)) ≡ cos(nθ). We specialize [Bor10a, Proposition 3] for the case of the GOE (β = 1). Fix m positive real numbers t_1 < t_2 < ... < t_m. In the notation of [Bor10a], we take L = n and B_i(n) = [⌊t_i n⌋]. Then, for any positive integers j_1, j_2, ..., j_m, the random vector

( tr T_{j_i}( W(⌊t_i n⌋) / 2√(t_i n) ) − E tr T_{j_i}( W(⌊t_i n⌋) / 2√(t_i n) ),  i ∈ [m] )

converges in law, as n tends to infinity, to a centered Gaussian vector. For s ≤ t,

(3)  lim_{n→∞} Cov( tr T_i( W(⌊tn⌋) / 2√(tn) ), tr T_k( W(⌊sn⌋) / 2√(sn) ) ) = δ_{ik} (k/2) (s/t)^{k/2},

which gives the covariance kernel of the limiting vector. In particular, all such covariances are zero when i ≠ k. Note that the traces can be expressed as integrals of the height function of the corresponding submatrices. Thus, by approximating continuous compactly supported functions of z by a function that is piecewise constant in y and polynomial in x, one gets the kernel (2).
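Since the identity T_n(cos(θ)) ≡ cos(nθ) recurs throughout the paper, here is a quick numerical check of it. The use of numpy's chebyshev module is a convenience choice on our part; the three-term recurrence T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x) works equally well.

```python
# Verify T_n(cos(theta)) == cos(n*theta) for the first several Chebyshev
# polynomials of the first kind, evaluated on a grid of angles.
import numpy as np
from numpy.polynomial import chebyshev as C

theta = np.linspace(0.1, 3.0, 25)
for n in range(8):
    coeffs = [0] * n + [1]   # Chebyshev-series coefficient vector of T_n
    assert np.allclose(C.chebval(np.cos(theta), coeffs), np.cos(n * theta))
```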

1.1. Main results. By a tower of random permutations we mean a sequence of random permutations (π(n), n ∈ N) such that

(i) π(n) is a uniformly distributed random permutation of [n] for each n, and
(ii) for each n, if π(n) is written as a product of cycles, then π(n−1) is derived from π(n) by deletion of the element n from its cycle.

The stochastic process that grows π(n) from π(n−1) by sequentially inserting the element n at a random position is called the Chinese Restaurant Process (CRP). We will review its basic principles in a later section. In [KOV04] and other related work, a sequence of permutations satisfying condition (ii) is called a virtual permutation, and the distribution on virtual permutations satisfying condition (i) is considered as a substitute for Haar measure on S(∞), the infinite symmetric group. This is used to study the representation theory of S(∞), with connections to Random Matrix Theory. A recent extension of this idea is [BNN11].
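The insertion dynamics in (ii) can be sketched directly. This is our own minimal implementation, using a 0-indexed array representation of the permutation (a convention of ours, not the paper's): the new element becomes a fixed point with probability 1/n, and otherwise is spliced into the cycle immediately after a uniformly chosen existing element, so each step is uniform and deleting the newest element recovers the previous permutation.

```python
# Grow a uniform random permutation of {0,...,n-1} from one of {0,...,n-2}
# by CRP insertion, producing a "virtual permutation" in the sense of [KOV04].
import random

def grow(pi):
    """pi maps i -> pi[i]; returns the permutation grown by one element."""
    n = len(pi) + 1
    j = random.randrange(n)   # uniform over the n possible insertion positions
    pi = pi + [n - 1]         # tentatively make the new element a fixed point
    if j < n - 1:             # otherwise splice it into j's cycle: j -> n-1 -> old pi[j]
        pi[n - 1] = pi[j]
        pi[j] = n - 1
    return pi

random.seed(1)
pi = []
for _ in range(8):
    pi = grow(pi)
assert sorted(pi) == list(range(8))   # still a bijection after every insertion
```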

Now suppose we construct a countable collection {Π_d, d ∈ N} of towers of random permutations. We will denote the permutations in Π_d by {π_d(n), n ∈ N}. Then it is possible to model every possible G(n,2d) by adding the permutation matrices (and their transposes) corresponding to {π_j(n), 1 ≤ j ≤ d}. In what follows we will keep d fixed and consider n as a growing parameter. Thus, G_n will represent G(n,2d) for some fixed d. Here and later, G_0 will represent the empty graph. We construct a continuous-time version of this by inserting new vertices into G_n with rate n + 1. Formally, define independent times T_i ∼ Exp(i), let

M_t = max{ m : T_1 + ··· + T_m ≤ t },

and define the continuous-time Markov chain G(t) = G_{M_t}. When d = 1, this process is essentially just a continuous-time version of the CRP itself. Though this case is unusual compared to the rest—for example, G(t) is likely to be disconnected when d = 1 and connected when d is larger—our results do still hold.

Our first result is about the process of short cycles in the graph process G(t). By a cycle of length k in a graph, we mean what is sometimes called a simple cycle: a walk in the graph that begins and ends at the same vertex, and that otherwise repeats no vertices. We will give a more formal definition in Section 2.2. Let (C_k^(s)(t), k ∈ N) denote the number of cycles of various lengths k that are present in G(s + t). This process is not Markov, but nonetheless it converges to a Markov process (indexed by t) as s tends to infinity.

To describe the limit, define

a(d,k) = (2d − 1)^k − 1 + 2d when k is even,  a(d,k) = (2d − 1)^k + 1 when k is odd,

with a(d,0) := 0. Consider the set of natural numbers N = {1,2,...} with the measure

µ(k) = (1/2)[a(d,k) − a(d,k − 1)],  k ∈ N.

Consider a Poisson point process χ on N × [0,∞) with an intensity measure given on N × (0,∞) by the product measure µ ⊗ Leb, where Leb is the Lebesgue measure, and with additional masses of a(d,k)/2k on (k,0) for k ∈ N.

Let P̃_x denote the law of a one-dimensional pure-birth process on N given by the generator

Lf(k) = k (f(k + 1) − f(k)),  k ∈ N,

starting from x ∈ N. This is also known as the Yule process.

Suppose we are given a realization of χ. For any atom (k,y) of the countably many atoms of χ, we start an independent process (X_{k,y}(t), t ≥ 0) with law P̃_k. Define the random sequence

N_k(t) := Σ_{(j,y) ∈ χ ∩ ([k] × [0,t])} 1{X_{j,y}(t − y) = k}.

In other words, at time t, for every site k, we count how many of the processes that started at time y ≤ t at site j ≤ k are currently at k. Note that both (N_k(·), k ∈ N) and (N_k(·), k ∈ [K]), for some K ∈ N, are Markov processes, while N_k(·) for fixed k is not.
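This construction lends itself to a Monte Carlo sketch. The code below is our own reading of it (with µ(k) = (1/2)[a(d,k) − a(d,k−1)] and the initial atoms making the process stationary, so E N_k(t) = a(d,k)/2k for all t); the truncation level K and the sample sizes are arbitrary choices.

```python
# Particles immigrate at site j at rate mu(j), an initial Poisson(a(d,k)/2k)
# batch sits at each site k at time 0, and each particle then follows an
# independent Yule process. We check E N_1(t) = a(1,1)/2 = 1 for d = 1.
import numpy as np

def a(d, k):
    if k == 0:
        return 0
    return (2*d - 1)**k - 1 + 2*d if k % 2 == 0 else (2*d - 1)**k + 1

def yule_state(start, s, rng):
    """State at elapsed time s of a Yule process (birth rate m in state m)."""
    m, clock = start, 0.0
    while True:
        clock += rng.exponential(1.0 / m)
        if clock > s:
            return m
        m += 1

def sample_N1(d, t, K, rng):
    n1 = 0
    for k in range(1, K + 1):
        # initial atoms of chi at (k, 0), plus immigrants on (0, t] at rate mu(k)
        starts = [0.0] * rng.poisson(a(d, k) / (2 * k))
        mu = 0.5 * (a(d, k) - a(d, k - 1))
        starts += list(rng.uniform(0.0, t, size=rng.poisson(mu * t)))
        n1 += sum(yule_state(k, t - y, rng) == 1 for y in starts)
    return n1

rng = np.random.default_rng(42)
est = np.mean([sample_N1(1, 1.0, 6, rng) for _ in range(4000)])
assert abs(est - 1.0) < 0.2   # stationary mean of N_1 is a(1,1)/2 = 1
```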

Theorem 1. As s → ∞, the process (C_k^(s)(t), k ∈ N, 0 ≤ t < ∞) converges in law, in the product topology on D_∞[0,∞), to the Markov process (N_k(t), k ∈ N, 0 ≤ t < ∞). The limiting process is stationary.

Remark 2. In fact, the same argument used to prove Theorem 1 shows that the process (C_k^(s)(t), −∞ < t < ∞) converges in law to the Markov process (N_k(t), −∞ < t < ∞) running in stationarity. The same conclusion holds for all the following theorems in this section.

We now explore the joint convergence across various d's. Define C_{d,k}^(s)(t) naturally, stressing the dependence on the parameter d.

Theorem 3. There is a joint process convergence of (C_{i,k}^(s)(t), k ∈ N, i ∈ [d], t ≥ 0) to a limiting process (N_{i,k}(t), k ∈ N, i ∈ [d], t ≥ 0). This limit is a Markov process whose marginal law for every fixed d is described in Theorem 1. Moreover, for any d ∈ N, the process (N_{d+1,k}(·) − N_{d,k}(·), k ∈ N) is independent of the process (N_{i,k}(·), k ∈ N, i ∈ [d]) and evolves as a Markov process. Its generator (defined on functions dependent on finitely many coordinates) is given by

Lf(x) = Σ_{k=1}^∞ k x_k [f(x + e_{k+1} − e_k) − f(x)] + Σ_{k=1}^∞ ν(d,k) [f(x + e_k) − f(x)],

where x is a nonnegative sequence, (e_k, k ∈ N) is the canonical orthonormal basis of ℓ², and

ν(d,k) = (1/2)[a(d + 1,k) − a(d + 1,k − 1) − a(d,k) + a(d,k − 1)].
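As a sanity check on the weights appearing above, a(d,k), µ(k), and ν(d,k) can be tabulated directly (note ν(d,k) = µ_{d+1}(k) − µ_d(k), writing µ_d for the measure at parameter d). Nonnegativity of µ and ν, needed for them to serve as immigration intensities, holds at least for small d and k.

```python
# Tabulate a(d,k), mu_d(k) = (1/2)[a(d,k) - a(d,k-1)], and
# nu(d,k) = mu_{d+1}(k) - mu_d(k), and check they are valid (nonnegative) rates.
def a(d, k):
    if k == 0:
        return 0
    return (2*d - 1)**k - 1 + 2*d if k % 2 == 0 else (2*d - 1)**k + 1

def mu(d, k):
    return 0.5 * (a(d, k) - a(d, k - 1))

def nu(d, k):
    return mu(d + 1, k) - mu(d, k)

for d in range(1, 6):
    for k in range(1, 10):
        assert mu(d, k) >= 0 and nu(d, k) >= 0
```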

Remark 4. Theorems 1 and 3 show an underlying branching process structure. We actually prove a more general decomposition where cycles are tracked by edge labels. The additive structure also imparts a natural intertwining relationship between the Markov operators. See [CPY98, Section 2] and [DF90, Bor10a].

We now focus on eigenvalues of G(t). Note that there is no easy exact relationship between the eigenvalues of G_n for various n, since the eigenvectors play a role in determining any such identity. In fact, the eigenvalues of G_n and G_{n+1} need not be interlaced. However, one can consider linear eigenvalue statistics for the graph G(n,2d). That is, for any 2d-regular graph G on n vertices and function f : R → R, define the random variable

tr f(G) := Σ_{i=1}^n f(λ_i),

where λ_1 ≥ ... ≥ λ_n are the eigenvalues of the adjacency matrix of G divided by 2(2d − 1)^{1/2}. The scaling is necessary to take a limit with respect to d.
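A sketch of the statistic tr f(G) for the permutation model, with the scaling by 2(2d − 1)^{1/2} applied to the spectrum. For f(x) = x² the result can be cross-checked against the sum of squared matrix entries, since tr(A²) = Σ_{ij} A_{ij}² for symmetric A; the graph construction and seeds below are our illustrative choices.

```python
# Linear eigenvalue statistic tr f(G) for a permutation-model graph G(n, 2d).
import numpy as np

def tr_f(A, f, d):
    lam = np.linalg.eigvalsh(A) / (2.0 * np.sqrt(2*d - 1))   # scaled spectrum
    return f(lam).sum()

rng = np.random.default_rng(7)
n, d = 50, 2
A = np.zeros((n, n), dtype=int)
for _ in range(d):
    P = np.eye(n, dtype=int)[rng.permutation(n)]
    A += P + P.T

# cross-check for f(x) = x^2: tr f(G) = tr(A^2) / (4(2d-1))
val = tr_f(A, lambda x: x**2, d)
assert np.isclose(val, (A * A).sum() / (4 * (2*d - 1)))
```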

By a polynomial basis we refer to a sequence of polynomials {f_0 ≡ 1, f_1, f_2, ...} such that f_k is a polynomial of degree k in a single real variable. In the statement below, [∞] will refer to N.

Theorem 5. There exists a polynomial basis {f_i, i ∈ N} (depending on d) such that for any K ∈ N ∪ {∞}, the process (tr f_k(G(s + t)), k ∈ [K], t ≥ 0) converges in law, as s tends to infinity, to the Markov process (N_k(t), k ∈ [K], t ≥ 0) of Theorem 1. (The polynomials are given explicitly in (15).) Hence, for any polynomial f, the process (tr f(G(s + t))) converges to a linear combination of the coordinate processes of (N_k(t), k ∈ N).

The Markov property is especially intriguing since, to the best of our knowledge, no similar property of eigenvalues of the standard Random Matrix ensembles is known. For the special case of minors of the Gaussian Unitary/Orthogonal Ensembles, the entire distribution of eigenvalues across minors of various sizes does satisfy a Markov property. However, this is facilitated by the known symmetry properties of the eigenvectors and does not extend to other examples of Wigner matrices.

For our final result we will take d to infinity. We will make the following notational convention: for any polynomial f, we will denote the limiting process of (tr f(G(s + t)), t ≥ 0) by (tr f(G(∞ + t)), t ≥ 0). Recall that this process is a linear combination of (N_k(t), k ∈ N, t ≥ 0).

Theorem 6. Let {T_k, k ∈ N} denote the Chebyshev orthogonal polynomials of the first kind on [−1,1]. As d tends to infinity, the collection of processes

( tr T_k(G(∞ + t)) − E tr T_k(G(∞ + t)),  t ≥ 0, k ∈ N )

converges weakly in D_∞[0,∞) to a collection of independent Ornstein-Uhlenbeck processes (U_k(t), t ≥ 0, k ∈ N), running in equilibrium. Here the equilibrium distribution of U_k is N(0, k/2), U_k satisfies the stochastic differential equation

dU_k(t) = −k U_k(t) dt + k dW_k(t),  t ≥ 0,

and (W_k, k ∈ N) are iid standard one-dimensional Brownian motions.

Thus, the collection of random variables ( tr T_k(G(∞ + t)) − E tr T_k(G(∞ + t)) ), indexed by k and t, converges as d tends to infinity to a centered Gaussian process with covariance kernel given by

(4)  lim_{d→∞} Cov( tr T_i(G(∞ + t)), tr T_k(G(∞ + s)) ) = δ_{ik} (k/2) e^{k(s−t)},  for s ≤ t.
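The limiting dynamics can be sketched with an Euler-Maruyama discretization (step size, horizon, and sample count below are arbitrary choices of ours). Started from the stationary law N(0, k/2), the empirical variance of many independent copies should stay near k/2.

```python
# Euler-Maruyama simulation of dU_k = -k U_k dt + k dW_k, whose stationary
# distribution is N(0, k/2): variance = (diffusion)^2 / (2 * drift) = k/2.
import numpy as np

rng = np.random.default_rng(3)
k, dt, steps, reps = 2, 1e-3, 2000, 20000
U = rng.normal(0.0, np.sqrt(k / 2), size=reps)        # stationary start
for _ in range(steps):
    U += -k * U * dt + k * np.sqrt(dt) * rng.normal(size=reps)
assert abs(U.var() - k / 2) < 0.1                     # variance stays near k/2
```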

A comparison of (4) with Borodin's result (3) shows that the above limit captures a key property of the GFF covariance structure. The appearance of the exponential is merely due to a deterministic time-change of the process. A somewhat more detailed discussion can be found in the following section.

Remark 7. A common model for random regular graphs is the configuration model or pairing model (see [Wor99] for more information). The model is defined as follows: start with n buckets, each containing d prevertices. Then, separate these dn prevertices into pairs, choosing uniformly from every possible pairing. Finally, collapse each bucket into a single vertex, making an edge between one vertex and