Unimodular Random Measured Metric Spaces
and Palm Theory on Them
Ali Khezeli ∗†
April 6, 2023
Abstract
In this work, we define the notion of unimodular random measured
metric spaces as a common generalization of various other notions. This
includes the discrete cases like unimodular graphs and stationary point
processes, as well as the non-discrete cases like stationary random mea-
sures and the continuum metric spaces arising as scaling limits of graphs.
We provide various examples and prove many general results; e.g., on
weak limits, re-rooting invariance, random walks, ergodic decomposition,
amenability and balancing transport kernels. In addition, we generalize
the Palm theory to point processes and random measures on a given uni-
modular space. This is useful for Palm calculations and also for reducing
some problems to the discrete cases.
1 Introduction
The mass transport principle (MTP) refers to some key equations that appear
in various forms in a number of subfields of probability theory and other fields
of mathematics. These equations capture (and sometimes formalize) the intu-
itive notion of stochastic homogeneity, which, in different contexts, appears as
stationarity, unimodularity, typicality of a point, re-rooting invariance, involu-
tion invariance, or just invariance. The similarity of the different versions of the
MTP results in fruitful connections between various fields.
In Subsection 1.1, we provide a quick survey of the MTP in the literature.
Then, the contributions of the present paper are introduced in the next parts
of the introduction.
1.1 The Mass Transport Principle in the Literature
The mass transport principle (MTP) was first used in [20] for studying per-
colation on trees. It was developed further in [7] for studying group-invariant
∗Inria Paris, ali.khezeli@inria.fr
†Department of Applied Mathematics, Faculty of Mathematical Sciences, Tarbiat Modares
University, P.O. Box 14115-134, Tehran, Iran, khezeli@modares.ac.ir
percolation on graphs and in [8] for proving recurrence of local limits of planar
graphs. [4] also established a property called involution invariance for local
limits of finite graphs1. This property turned out to be a special case of the
MTP, and in fact, equivalent to it. Then, [3] defined unimodular random graphs
by simply using the MTP as the definition, provided various general examples,
and established many properties for them. The term unimodular was chosen
in [3] because, in order that a fixed transitive graph G satisfies the MTP, it
is necessary and sufficient that the automorphism group of G is a unimodular
group.
It is useful to recall the MTP for unimodular graphs. A mass transport or a
transport function is a function g that takes as input a graph G and two vertices
of G and outputs a value in R≥0. We think of g(G, u, v) as the mass going from
u to v. The assumptions on g are measurability (to be defined suitably) and
invariance, where the latter means here that g is invariant under isomorphisms.
A random rooted graph [G, o] (where the graph G and the root vertex o are
random) is called unimodular if it satisfies the following MTP for any transport
function g:
\[
E\Big[\sum_{x\in G} g(G, o, x)\Big] = E\Big[\sum_{x\in G} g(G, x, o)\Big]. \tag{1.1}
\]
Similar versions of the MTP exist for point processes in Rd. For a recent
application, one can mention the use of the MTP in constructing perfect match-
ing between point processes and fair tessellations (see e.g., [25] and [13]). These
MTPs are special cases of much older theorems in stochastic geometry; e.g.,
Mecke’s formula, Mecke’s theorem and Neveu’s exchange formula, which had
been widely used in stochastic geometry. In particular, Mecke’s theorem (see
e.g., [33]) can be rephrased as follows: If Φ0 is the Palm version of a stationary
point process (i.e., conditioned to contain the origin, or equivalently, seeing the
point process from a typical point), then
\[
E\Big[\sum_{x\in \Phi_0} g(\Phi_0, 0, x)\Big] = E\Big[\sum_{x\in \Phi_0} g(\Phi_0, x, 0)\Big]. \tag{1.2}
\]
This equation looks similar to (1.1) with the difference that the random objects
have different natures and the invariance condition for the transport function
g=g(Φ, u, v) is the invariance under Euclidean translations. In fact, (1.2)
characterizes a larger class of point processes which are called point-stationary
point processes in [33] (e.g., the zero set or the graph of the simple random
walk). See Subsection 4.1 for further discussion.
Due to the similarity of the MTP in (point-) stationary point processes and
unimodular random graphs, a lot of connections have been observed between
the two theories. In particular, [3] noted that any graph that is constructed
on a stationary point process (with some equivariance and measurability condition)
gives rise to a unimodular random graph. Also, various results and proof
techniques in one theory can be translated into the other one with minor
modifications (see [6]). The previous work [6] unifies these two notions by defining
unimodular random discrete spaces.
1 The MTP for local limits of graphs was also observed in [8].
The theorems mentioned above for stationary point processes are in fact
more general and apply to stationary random measures. In particular, Mecke’s
theorem in the general form can be rephrased equivalently in an MTP form as
follows: If Ψ0 is the Palm version of a stationary random measure on Rd, then
\[
E\Big[\int g(\Psi_0, 0, x)\, d\Psi_0(x)\Big] = E\Big[\int g(\Psi_0, x, 0)\, d\Psi_0(x)\Big] \tag{1.3}
\]
for every function g=g(Ψ0, u, v)≥0 that is translation-invariant and mea-
surable. As in the previous case, (1.3) characterizes a larger class of random
measures, which are called mass-stationary random measures in [33]. See Sub-
section 4.1 for further discussion.
As mentioned above, the MTP (1.1) is satisfied for local limits of graphs.
Similar properties have been established for some instances of scaling limits
of graphs, where scaling limit means that the graph-distance metric is scaled
by some small factor and the limit means the limit of a sequence of random
metric spaces (sometimes, the metric spaces are rooted; i.e., have a distinguished
point, and/or measured; i.e., have a distinguished measure). Most notably,
various instances that have a compact rooted measured scaling limit satisfy the
so called re-rooting invariance property; i.e., the distribution is invariant under
changing the root randomly (according to the distinguished measure on the
model). This is the case for the Brownian continuum random tree [2], stable
trees [15] and the Brownian map [34]. Also, in a few examples where the scaling
limit is not compact, an MTP similar to (1.3) has been observed (e.g., in [10]).
However, a general rigorous MTP result for scaling limits seems to be missing
in the literature. This is done in this work by providing a general form of the
MTP and by showing that it is preserved under weak limits. This general result
readily implies the MTP and re-rooting invariance in the known examples of
scaling limits. We will also provide some re-rooting invariance properties in the
non-compact case.
On this occasion, we should also mention the mass transport principle in the
theory of countable Borel equivalence relations. This will be introduced in the
next subsection and will be discussed further in Subsection 4.6.
1.2 Unification of the MTP by Unimodular rmm Spaces
In the previous subsection, we recalled that the theories of unimodular graphs,
point processes, random measures and scaling limits exhibit the mass transport
principle in various forms. The similarity of these formulas provides fruitful
connections between these theories. So, it is natural to ask for a general theory of
the MTP that unifies these notions. We saw that the discrete cases (unimodular
graphs and point processes) can be unified by the notion of unimodular discrete
spaces [6] (these cases are also tightly connected to the theory of countable Borel
equivalence relations).
The first main task of this work is providing a unification of the MTP in
the discrete and non-discrete cases. The unification is provided by introducing
unimodular random rooted measured metric spaces, or in short, unimodular rmm
spaces in Section 2. This can be thought of as a common generalization of uni-
modular graphs, (point-) stationary point processes, (mass-) stationary random
measures and the random continuum spaces arising in scaling limits. In this
part, establishing the MTP for general scaling limits is novel and is one of the
main motivations of this work.
Unimodular rmm spaces are defined as random triples [X, o, µ], where X
is a boundedly-compact metric space, o is a point of X called the root, and µ
is a boundedly-finite Borel measure on X. The Gromov-Hausdorff-Prokhorov
metric, given in the most general form in [31], provides the measure-theoretic
requirements for this definition. Then, the definition of unimodularity in this
setting is given by modifying the MTP (1.3) accordingly. Note that a distinguished
measure is needed on X in order that the MTP makes sense. Many
specific examples and general categories of unimodular rmm spaces are discussed
in Section 4.
In Sections 3 and 5, we provide general results on unimodular spaces. In
particular, we prove that unimodularity is preserved under weak convergence.
This implies that all (measured) scaling limits are unimodular (which is one
of the main motivations of this work), and hence, in the compact case satisfy
the re-rooting invariance property. We also study re-rooting in the non-compact
case and the properties of random walks on a unimodular rmm space. We define
ergodicity and prove an ergodic decomposition theorem. Also, various defini-
tions of amenability are provided and proved to be equivalent, based on similar
definitions in other fields of mathematics; e.g., definition by local means, approximate
means, hyperfiniteness and the Følner condition. It is also proved that the
number of topological ends of a unimodular rmm space belongs to {0,1,2,∞}
almost surely.
We should mention that the discrete cases of the MTP, discussed above,
are connected to the theory of countable Borel equivalence relations (CBER),
which has totally different roots (ergodic theory of dynamical systems, group
actions and orbit equivalence theory)2. There exists an MTP-like formula in this
theory, which in fact defines when an equivalence relation is measure-preserving;
i.e., the measure is invariant. It is a generalization of the measure-preserving
property for dynamical systems and group actions. As discussed in [3], this
notion is connected to unimodular graphs via graphings. While the two theories
have substantial overlap, their viewpoints and motivations are different. In fact,
the theory of CBERs can be thought of as the mathematical ground under the
theory of unimodular graphs, just like the distinction between measure theory and
probability theory. The unification of the MTP in this work does not cover
CBERs since we focus on random metric spaces. We will still use this theory in
proving some of the results. This will be discussed further in Subsection 4.6.
2 This theory is almost as old as Mecke's theorem and is very useful in probability theory,
but still deserves to be more recognized in the probability community. Some results in this
theory have been re-invented later by probabilists; e.g., the existence of one-ended treeings in
the amenable case (see e.g., [12], [24] and [40]) and factor point processes (see [32]).
1.3 Generalization of Palm Theory
In this work, we also consider random measures and point processes on a given
unimodular rmm space. In this view, the base space can be thought of as a gen-
eralization of the Euclidean or hyperbolic spaces. Heuristically, the intensity of
a point process Φ on [X, o, µ] is the expected number of points of Φ per unit
measure (measured by µ). Also, the mean of some quantity on the points of Φ
is the expectation of that quantity at a typical point of Φ. These notions will
be formalized by generalizing the Palm theory in Section 6, which is the second
main task of this work. We will generalize various notions and theorems in
stochastic geometry; e.g., the Campbell measure, the Palm distribution, the Camp-
bell formula, Mecke’s theorem, Neveu’s exchange formula and the existence of
balancing transport kernels. We also show that the Palm theory generalizes
Palm inversion at the same time (i.e., reconstructing the probability measure
from the Palm distribution) and also generalizes various constructions of uni-
modular graphs by adding vertices and edges to another given unimodular graph
(e.g., the dual of a unimodular planar graph).
An important application of the Palm theory in this work is to construct a
countable Borel equivalence relation equipped with an invariant measure. This
is done by means of adding a Poisson point process and considering its Palm
version and is used to prove some of the theorems (e.g., amenability and invari-
ant disintegration) using the results of the theory of countable Borel equivalence
relations. These applications are included in Section 7.
2 Unimodular rmm Spaces
In this section, we define unimodular rmm spaces as a common generalization
of unimodular graphs, stationary point processes, stationary random measures,
etc. The definition is based on a mass transport principle similar to (1.1), (1.2)
and (1.3). Note that in the MTP, a distinguished point and a measure should
be presumed (since the spaces are not necessarily discrete). So we start by
discussing rooted measured metric spaces and some notation.
2.1 Notation
If X is a metric space, the closed ball of radius r centered at x ∈ X is denoted
by B_r(x) := {y ∈ X : d(x, y) ≤ r}. The open ball is denoted by B_r(x).
Also, all measures are assumed to be Borel measures. If f : X → R≥0 is
measurable, the measure fµ on X is defined by (fµ)(A) := ∫_A f dµ. If µ is a
probability measure, biasing µ by f means considering the probability measure
(1/c)(fµ), where c = ∫_X f dµ. The Prokhorov metric between finite measures on X
is denoted by d_P.
2.2 The Space of Rooted Measured Metric Spaces
A rooted measured metric space, abbreviated by a rmm space, is a tuple
(X, o, µ) where X is a metric space, µ is a non-negative Borel measure on X
and o is a distinguished point of X called the root. For simplicity of notation,
the metric on X is always denoted by d if there is no ambiguity; otherwise, it
will be denoted by d_X. In this paper, we always assume that the metric space is
boundedly compact (i.e., every closed ball is compact) and µ is boundedly
finite (i.e., every ball has finite measure under µ). Similarly, a doubly-rooted
measured metric space is a tuple (X, o1, o2, µ), where µ is as before and o1 and
o2 are two ordered distinguished points of X. Note that we do not require that
X is a geodesic space. It can even be disconnected.
An isomorphism (or a GHP-isometry) between two rmm spaces (X, o, µ)
and (X′, o′, µ′) is a bijective isometry ρ : X → X′ such that ρ(o) = o′ and
ρ∗µ = µ′, where ρ∗(µ) represents the push-forward of the measure µ under ρ.
If such a ρ exists, then (X, o, µ) and (X′, o′, µ′) are called isomorphic. Isomorphisms
between doubly-rooted spaces are defined similarly. Under this equivalence
relation, the equivalence class containing (X, o, µ) (resp. (X, o1, o2, µ)) is
denoted by [X, o, µ] (resp. [X, o1, o2, µ]).
Let M∗ be the set of equivalence classes of rmm spaces under isomorphisms.
Similarly, define M∗∗ for doubly-rooted measured metric spaces. These sets become
Polish spaces under the Gromov-Hausdorff-Prokhorov (GHP) metric
(see [31] for the general case of the GHP metric and its history). To keep
focus on the main goals, we skip the definition of the metric and we just mention
the GHP topology (see Section 3 of [29]): A sequence [Xn, on, µn] converges
to [X, o, µ] when these spaces can be embedded isometrically in a common
boundedly-compact metric space Z such that, after the embedding, on converges
to o, Xn converges to X as closed subsets of Z (with the Fell topology) and µn
converges to µ in the vague topology (convergence against compactly-supported
continuous functions).
Polishness of M∗ allows one to use classical tools in probability theory for
studying random elements of M∗, which are called random rmm spaces here.
In particular, every random rooted graph defines a random rmm space (equipped
with the graph-distance metric and the counting measure on the vertices). The
same is true for point processes. Also, Rd, rooted at the origin and equipped
with a random measure on Rd, defines a random rmm space. The condition
of bounded compactness matches the local-finiteness conditions in each
example.
Convention 2.1. A random rmm space is shown by a tuple of bold symbols
like [X, o, µ]. By convention, we will use the familiar symbols P and E for
most random objects (instead of writing them in the form of long integrals)
even if they live in different spaces and even if extra randomness is introduced.
The following explanation helps to reduce possible confusion. Note that
the tuple determines just one random element of M∗ and the three symbols
X, o and µ are meaningless separately. Any formula containing these symbols
should be well defined for an isomorphism class of rmm spaces. By contrast, in
Section 6, we will consider more than one probability measure on M∗ or similar
spaces. In this case, one may also think of [X, o, µ] as a symbol for a generic
element of M∗ and use formulas like P[[X, o, µ] ∈ A] and Q([X, o, µ] ∈ A) for
different probability measures P and Q.
2.3 Unimodularity
In this subsection, we define unimodular rmm spaces and a few basic examples.
Definition 2.2. A transport function is a measurable function g : M∗∗ →
R≥0. The value g(X, u, v, µ) is interpreted as the mass that u sends to v
and is also abbreviated by g(u, v) if X and µ are understood. Let g^+(u) :=
∫ g(u, x) dµ(x) and g^−(u) := ∫ g(x, u) dµ(x) represent the outgoing mass from
u and the incoming mass to u, respectively.
Definition 2.3. A random rmm space is called unimodular if the following
mass transport principle holds: For every transport function g,
\[
E\Big[\int_X g(o, x)\, d\mu(x)\Big] = E\Big[\int_X g(x, o)\, d\mu(x)\Big]. \tag{2.1}
\]
It is called nontrivial if µ ≠ 0 a.s. and proper if supp(µ) = X a.s.
In words, the expected outgoing mass from the root should be equal to the
expected incoming mass to the root. The case µ:= 0 trivially satisfies the
MTP and is not very interesting. However, there are some interesting classes of
non-proper examples (e.g., when extending a unimodular graph, Example 6.23,
or Rd or any other space equipped with a point process or a random measure,
Example 4.1 and Definition 6.1). Another basic example is when [X, o] is arbitrary
and µ := δo is the Dirac measure at o. More generally, finite measures
provide basic examples explained below.
Example 2.4 (Finite Measure). When µ is a finite measure a.s., unimodularity
is equivalent to re-rooting invariance: If o′ is an additional random point of
X chosen with distribution proportional to µ, then [X, o′, µ] has the same
distribution as [X, o, µ] (see Theorem 3.8). Loosely speaking, the root is a
random point of X chosen with distribution proportional to µ.
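A short sketch of how the MTP gives this re-rooting invariance in the finite case (assuming the nontrivial case, so that µ(X) > 0 a.s.): for a bounded measurable function f : M∗ → R≥0, apply (2.1) to the transport function g(X, u, v, µ) := f(X, v, µ)/µ(X). Then g^+(o) is the conditional expectation of f(X, o′, µ) given [X, o, µ], while g^−(o) = f(X, o, µ), so
\[
E\big[f(X, o', \mu)\big] = E\Big[\frac{1}{\mu(X)} \int_X f(X, x, \mu)\, d\mu(x)\Big] = E\big[f(X, o, \mu)\big],
\]
where the second equality is the MTP. The converse implication follows by a similar computation applied to the functions v ↦ ∫ g(v, x) dµ(x) and v ↦ ∫ g(x, v) dµ(x) for a given transport function g.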
Although the infinite case is more interesting for our purpose, the finite
case appears in many important examples of scaling limits when the limiting
space is compact (see Subsection 4.3). We will see that the re-rooting invariance
property in these examples is a quick corollary of the fact that weak convergence
preserves unimodularity (Lemma 3.1).
Example 2.5 (Product). Let [Xi, oi, µi] be unimodular for i = 1, 2. Then,
[X1 × X2, (o1, o2), µ1 ⊗ µ2] is also unimodular. Here, the metric on X1 × X2
can be the max-metric or the sum-metric.
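A sketch of the verification, assuming (as seems implicit in the example) that the two factors are independent and ignoring measurability details: apply the MTP of the first factor to the transport function g̃(X1, u, v, µ1) := E[∫_{X2} g((u, o2), (v, y)) dµ2(y)] (the expectation is over the second factor), and then the MTP of the second factor to ĝ(X2, u, v, µ2) := E[∫_{X1} g((x, u), (o1, v)) dµ1(x)]. The two steps give
\[
E\Big[\int g\big((o_1, o_2), \cdot\,\big)\, d(\mu_1 \otimes \mu_2)\Big]
= E\Big[\iint g\big((x, o_2), (o_1, y)\big)\, d\mu_2(y)\, d\mu_1(x)\Big]
= E\Big[\int g\big(\cdot\,, (o_1, o_2)\big)\, d(\mu_1 \otimes \mu_2)\Big],
\]
which is the MTP for the product space.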
Example 2.6 (Biasing). Let [X, o, µ] be unimodular and b : M∗ → R≥0 be
a measurable function such that E[b(o)] = 1. Let bµ be the measure on X
defined by bµ(A) := ∫_A b(x) dµ(x). Let [X′, o′, µ′] be obtained by changing the
measure µ to bµ and then by biasing the probability measure by b(o); i.e.,
\[
E\big[h(X', o', \mu')\big] = E\big[b(X, o, \mu)\, h(X, o, b\mu)\big].
\]
Then, [X′, o′, µ′] is a unimodular rmm space. This can be seen by verifying
the MTP directly. In fact, in Example 6.22, this will be shown to be the Palm
version of bµ regarded as an additional measure on [X, o, µ].
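The direct verification mentioned in the example can be sketched as follows (ignoring measurability details): for a transport function g, apply the MTP of [X, o, µ] to the transport function ĝ(X, u, v, µ) := b(X, u, µ) b(X, v, µ) g(X, u, v, bµ). Unfolding the definitions of µ′ = bµ and of the biasing by b(o),
\[
E\Big[\int g(X', o', x, \mu')\, d\mu'(x)\Big]
= E\Big[\int \hat g(o, x)\, d\mu(x)\Big]
= E\Big[\int \hat g(x, o)\, d\mu(x)\Big]
= E\Big[\int g(X', x, o', \mu')\, d\mu'(x)\Big],
\]
where the middle equality is the MTP (2.1) for [X, o, µ].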
A heuristic interpretation of unimodularity is that the root is a typical point
of the measure µ. When µ is a finite measure, this heuristic is rigorous according
to Example 2.4. In the general case, the heuristic is that the expectation of any
(equivariant) quantity evaluated at the root is an interpretation of the average of
that quantity over the points of the metric space. More precisely, for measurable
functions f(o) := f(X, o, µ), the expectation E[f(o)] is an interpretation of the
average of f(X, ·, µ) over the points of X, where the average is taken according
to the measure µ (in fact, this should be modified in the non-ergodic case; see
Subsection 5.4). See Corollary 5.8 for a rigorous statement.
3 Basic Properties of Unimodularity
In this section, we will provide basic properties of unimodularity, which are
useful in the discussion of the examples in the next section. Further properties
will be provided in Section 5.
3.1 Weak Limits
Lemma 3.1 (Weak Limits).Unimodularity is preserved under weak limits.
This is similar to the case of unimodular graphs, but the proof is more
involved since a convergent sequence of rmm spaces does not necessarily stabi-
lize in a given window. This will be handled by a generalization of Strassen’s
theorem.
Lemma 3.1 and Example 2.4 naturally give the following generalization of
soficity:
Problem 3.2. Is every unimodular rmm space sofic? I.e., is it the weak limit
of a sequence of deterministic compact measured metric spaces (Xn, µn) rooted
at a random point with distribution proportional to µn?
This generalizes Question 10.1 of [3], which involves unimodular graphs.
Note that every such weak limit is unimodular. Also, one can assume that Xn
is a finite metric space and µn is a multiple of the counting measure without
loss of generality.
Proof of Lemma 3.1. Assume [Xn, on, µn] is unimodular (n = 1, 2, . . .) and
converges to [X, o, µ]. To prove unimodularity of [X, o, µ], it is enough to prove
the MTP (2.1) for bounded continuous functions g. Define f : M∗ → R≥0 by
\[
f(X, o, \mu) := \int_X g(X, o, p, \mu)\, d\mu(p).
\]
By approximating g by simpler functions, it is enough to assume that for some
R < ∞, g(X, o, p, µ) = 0 whenever d(o, p) ≥ R and also f and g are bounded
by R. In this case, we will prove in the next paragraph that f is a bounded
continuous function. So, weak convergence implies that E[f(Xn, on, µn)] →
E[f(X, o, µ)]. A similar argument holds when o and p are swapped in the
definition of f. Now, the MTP for [X, o, µ] is implied by taking the limit in the
MTP for [Xn, on, µn], and the claim is proved.
The rest is devoted to proving the continuity of f and can be skipped at
first reading. Assume [Xn, on, µn] are deterministic rmm spaces converging to
[X, o, µ]. One can assume that the Xn's are subspaces of a common boundedly-compact
metric space Z converging to X ⊆ Z, on → o and µn converges vaguely
to µ. So there exist ϵn ≥ 0 and measures µ′n sandwiched between the restrictions
of µn to B_{R+3−ϵn}(o) and B_{R+3+ϵn}(o) (the balls are in Z) such that, if µ′
is the restriction of µ to B_{R+3}(o), then d_P(µ′n, µ′) ≤ ϵn (see Section 3 of [29]).
We may assume ϵn < 1. By the generalized Strassen theorem (Theorem 2.1
of [31]), there exists an approximate coupling of µ′n and µ′; i.e., a measure αn
on Xn × X such that
\[
|\pi_{1*}\alpha_n - \mu'_n| + |\pi_{2*}\alpha_n - \mu'| + \alpha_n(\{(x, y) \in X_n \times X : d_Z(x, y) > \epsilon_n\}) \le \epsilon_n.
\]
Here, π1 and π2 are the projections from Xn × X to Xn and X respectively. Also,
by the assumptions on g, the convergence and a compactness argument, there
exists M < ∞ such that for all n, sup{g(Xn, on, p, µn) : p ∈ Xn} ≤ M and the
same holds for X. In addition, by considering the modulus of continuity of g on a
suitable compact set, one finds that δn := sup{|g(Xn, on, p, µn) − g(X, o, q, µ)| :
d_Z(p, q) ≤ ϵn} → 0. Now,
\[
\begin{aligned}
&|f(X_n, o_n, \mu_n) - f(X, o, \mu)| \\
&\quad = \Big| \int_{X_n} g(X_n, o_n, p, \mu_n)\, d\mu_n(p) - \int_X g(X, o, q, \mu)\, d\mu(q) \Big| \\
&\quad \le M|\pi_{1*}\alpha_n - \mu'_n| + M|\pi_{2*}\alpha_n - \mu'| \\
&\qquad + \Big| \int_{X_n} g(X_n, o_n, p, \mu_n)\, d(\pi_{1*}\alpha_n)(p) - \int_X g(X, o, q, \mu)\, d(\pi_{2*}\alpha_n)(q) \Big| \\
&\quad \le M|\pi_{1*}\alpha_n - \mu'_n| + M|\pi_{2*}\alpha_n - \mu'| + \int |g(X_n, o_n, p, \mu_n) - g(X, o, q, \mu)|\, d\alpha_n(p, q) \\
&\quad \le M\big( |\pi_{1*}\alpha_n - \mu'_n| + |\pi_{2*}\alpha_n - \mu'| + \alpha_n(\{(x, y) \in X_n \times X : d(x, y) > \epsilon_n\}) \big) \\
&\qquad + \int |g(X_n, o_n, p, \mu_n) - g(X, o, q, \mu)|\, 1_{\{d(p, q) \le \epsilon_n\}}\, d\alpha_n(p, q) \\
&\quad \le M\epsilon_n + \delta_n|\alpha_n| \\
&\quad \le M\epsilon_n + \delta_n\epsilon_n + \delta_n|\mu'|.
\end{aligned}
\]
It follows that |f(Xn, on, µn) − f(X, o, µ)| → 0 and the continuity of f is proved.
This finishes the proof of the lemma.
3.2 Subset Selection
Unimodularity means heuristically that the root is a typical point. In particular,
every property of the points that has zero chance to be seen at the root, is
observed at almost no other point. This is formalized in the following easy but
important lemma.
Definition 3.3. Let [X, o, µ] be a random rmm space and A ⊆ M∗ be measurable.
The set S := S(X, µ) := {p ∈ X : (X, p, µ) ∈ A} is called a factor
subset of X. Note that the factor subset is a function of X and µ and does
not depend on o.
Lemma 3.4 (Everything Happens At Root). Let [X, o, µ] be a nontrivial unimodular
rmm space. For every factor subset S,
\[
o \in S \ \text{a.s.} \iff S \ \text{has full measure w.r.t. } \mu, \ \text{a.s.},
\]
\[
P[o \in S] > 0 \iff P[\mu(S) > 0] > 0.
\]
Proof. It is enough to prove the first claim. Define g(u, v) := 1_{{v ∉ S}}. Then,
g^+(o) = µ(X \ S) and g^−(o) = µ(X) 1_{{o ∉ S}}. Since µ(X) > 0 a.s., the claim
follows by the MTP (2.1).
This is a generalization of Lemma 2.3 of [3]. It can also be generalized by
allowing extra randomness, but some care is needed, as will be discussed
in Subsection 5.1.
By letting S := supp(µ), one immediately obtains:
Corollary 3.5. If [X, o, µ] is a nontrivial unimodular rmm space, then o ∈
supp(µ) a.s.
Remark 3.6. Note that, unlike the discrete cases, one cannot replace the last
statement in Lemma 3.4 with P[S ≠ ∅] > 0. In the language of Borel equivalence
relations, in contrast with the case of countable equivalence relations, the saturation of
a null set can have positive measure.
Lemma 3.7 (Bounded Selection). If µ(X) = ∞ a.s., then every factor subset
S of X is either empty or unbounded a.s. Also, µ(S) ∈ {0, ∞} a.s.
This generalizes Corollary 2.10 of [5] for unimodular graphs. As mentioned
before Proposition 4 of [35], this is related to Poincaré's recurrence theorem.
See also Lemma 5.15 for a further generalization.
Proof. Let A be an event on which S is nonempty and bounded. Hence, µ(S) <
∞ on A. By replacing S with a suitable neighborhood if necessary, one
may assume µ(S) > 0 on A. So it is enough to prove the second claim. Let B be
the event 0 < µ(S) < ∞. Let g(u, v) := (µ(S))^{−1} 1_{{v ∈ S}} 1_B. Then, g^+(o) = 1_B
and g^−(o) = ∞ if o ∈ S and B holds. If P[B] > 0, then the latter holds with
positive probability by Lemma 3.4. This contradicts the MTP for g.
3.3 Re-rooting Invariance
In the following proposition, re-rooting a unimodular rmm space is considered
even in the non-compact case. Assume for each rmm space (X, o, µ) a probability
measure k_o = k_{(X,o,µ)} is given on X. This will be considered as the law for
changing the root from o to a new root. By keeping (X, µ) fixed and letting o
vary, this can be regarded as a Markovian kernel k_{(X,µ)} on X (assuming the
following measurability condition). It is called an equivariant Markovian
kernel if it is invariant under the isomorphisms of rmm spaces and, for each
measurable set A ⊆ M∗∗, the function [X, o, µ] ↦ k_o({y ∈ X : [X, o, y, µ] ∈ A})
is a measurable function on M∗.
Given a deterministic pair (X, µ), an equivariant Markovian kernel k transports
µ to another measure on X defined by µ′(·) := ∫_X k_y(·) dµ(y). If µ′ = µ,
then µ is called a stationary measure for the kernel on X.
Let [X, o, µ] be a unimodular rmm space and k be an equivariant Markovian
kernel. Conditionally on [X, o, µ], choose o′ ∈ X randomly with distribution
k_o(·), which is regarded as a new root.
Theorem 3.8 (Re-Rooting Invariance). Assume [X, o, µ] is unimodular, k is
an equivariant Markovian kernel and o′ is a new root chosen with law k_o(·). If,
almost surely, µ is a stationary measure for the Markovian kernel k_{(X,µ)} on X,
then [X, o′, µ] has the same distribution as [X, o, µ].
In particular, in the compact case, this theorem implies the re-rooting in-
variance mentioned in Example 2.4. This also generalizes the invariance of Palm
distributions under bijective point-shifts and a similar statement for unimodular
graphs (Proposition 3.6 of [5]). See also Subsection 5.2 for a generalization to
the case where an initial biasing is considered.
Before proving the theorem, we need some lemmas. We will first use the
MTP when k_o is absolutely continuous w.r.t. µ, and then, we will deduce the
general case from the first case.
Lemma 3.9. If k_o is absolutely continuous w.r.t. µ almost surely, then the law
of [X, o′, µ] is absolutely continuous w.r.t. the law of [X, o, µ]. In addition,
if the Radon-Nikodym derivative dk_o/dµ is given by f(X, o, ·, µ), where f :
M∗∗ → R≥0 is a measurable function, then the distribution of [X, o′, µ] is
obtained by biasing the distribution of [X, o, µ] by f^−(o).
In fact, the proof shows that there always exists such a measurable Radon-
Nikodym derivative f.
Proof. Let α and β be the σ-finite measures on M∗∗ defined by
\[
\alpha(A) = E\Big[\int_X 1_A[X, o, x, \mu]\, dk_o(x)\Big], \qquad
\beta(A) = E\Big[\int_X 1_A[X, o, x, \mu]\, d\mu(x)\Big].
\]
α is just the distribution of [X, o, o′, µ]. It can be easily seen that α is absolutely
continuous w.r.t. β. Let f(X, o, o′, µ) be the Radon-Nikodym derivative of
α w.r.t. β at [X, o, o′, µ]. It can be shown that f satisfies the assumptions
mentioned in the lemma. Now, let A be an event in M∗. One has
\[
\begin{aligned}
P[[X, o', \mu] \in A] &= E\Big[\int_X 1_A[X, x, \mu]\, dk_o(x)\Big] \\
&= E\Big[\int_X f(o, x)\, 1_A[X, x, \mu]\, d\mu(x)\Big] \\
&= E\Big[\int_X f(x, o)\, 1_A[X, o, \mu]\, d\mu(x)\Big] \\
&= E\big[1_A[X, o, \mu]\, f^-(o)\big],
\end{aligned}
\]
where the third equality holds by the MTP (2.1). So the claim is proved.
Lemma 3.10. There exists a transport function h such that h is symmetric,
h > 0 and h^+(·) = h^−(·) = 1 except on an event that has zero measure w.r.t.
every nontrivial unimodular rmm space [X, o, µ]. Indeed, if g > 0 is an arbitrary
transport function such that g^+(o) = 1 a.s., then one can let
\[
h(X, u, v, \mu) := \int_X \frac{g(u, x)\, g(v, x)}{g^-(x)}\, d\mu(x). \tag{3.1}
\]
In fact, this is just the composition of the Markovian kernel corresponding
to g (given (X, µ)) with its time-reversal. This ensures that the new kernel
preserves µ.
Proof. One can construct g as follows. Given (X, o, µ), let N > 0 be the smallest
integer such that µ(B_N(o)) > 0 (unless µ = 0). For each k ≥ 1, let g_k(o, ·) be a
constant function on B_{N+k}(o) such that g_k^+(o) = 1. Then, let g := Σ_k 2^{−k} g_k.
Now, define h by (3.1). Assuming
\[
0 < g^-(x) < \infty \ \text{ for } \mu\text{-a.e. } x \in X, \ \text{almost surely}, \tag{3.2}
\]
it is straightforward to check that h is well defined and h^+(o) = h^−(o) = 1
a.s. So it remains to prove (3.2). Since g > 0, one has g^−(o) > 0 a.s. Also,
the MTP (2.1) gives E[g^−(o)] = 1, and hence, g^−(o) < ∞ a.s. So, Lemma 3.4
implies (3.2) and the claim is proved.
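The 'straightforward check' above amounts to the following computation (a sketch; it uses Tonelli's theorem, (3.2), and the fact that, by Lemma 3.4, g^+ = 1 holds µ-a.e. almost surely):
\[
h^+(u) = \int_X \int_X \frac{g(u, x)\, g(v, x)}{g^-(x)}\, d\mu(x)\, d\mu(v)
= \int_X g(u, x)\, \frac{g^-(x)}{g^-(x)}\, d\mu(x) = g^+(u) = 1
\]
for µ-a.e. u ∈ X, and h^−(u) = h^+(u) = 1 since h is symmetric in its two arguments.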
Remark 3.11. Given any measurable function b ≥ 0 on M∗, one can modify
the proof of Lemma 3.10 to have h^+(o) = h^−(o) = b(o) a.s. Note that in
this case, the kernel has b(·)µ as a stationary measure (this is also readily
implied by Lemma 3.10 and Example 2.6). Also, the kernel $\tilde h$ defined by h (i.e.,
$\tilde h_o := b(o)^{-1} h(o, \cdot)\mu$) can be arbitrarily close to the trivial kernel in the sense
that d_P($\tilde h_o$, δ_o) is less than an arbitrary constant ϵ > 0 a.s.
Proof of Theorem 3.8. In the first case, assume that, as in Lemma 3.9, k_o(·) is
absolutely continuous w.r.t. µ a.s. and its Radon-Nikodym derivative is given
by f : M∗∗ → R≥0. Since k_o(·) is a probability measure, f^+(o) = 1 a.s. Also,
since the Markovian kernel k_{(X,µ)} preserves µ, one has f^−(·) = 1 a.e. on X.
Therefore, f^−(o) = 1 a.s. by Lemma 3.4. Therefore, by the second part of
Lemma 3.9, [X, o′, µ] has the same distribution as [X, o, µ].
Now consider the general case. By Lemma 3.10, there exists a transport
function h > 0 such that h^+(o) = h^−(o) = 1 a.s. Compose the Markovian
kernel corresponding to h with k to obtain a new equivariant kernel η:
\[
\eta_o(\cdot) := \int_X h(o, x)\, k_x(\cdot)\, d\mu(x).
\]
Now, η is an equivariant Markovian kernel that preserves µ a.s. and one can
check that η_o(·) is absolutely continuous w.r.t. µ a.s. So, the first case implies
that the composition of the two root changes preserves the distribution
of [X, o, µ]. But the first root-change (corresponding to h) already preserves
the distribution of [X, o, µ] since h^−(o) = 1 a.s. This implies that the second
root-change also preserves it and the claim is proved.
4 General Categories of Examples
In this section, we present various general and specific examples of unimodular
rmm spaces. We also study the connection with Borel equivalence relations in
Subsection 4.6.
4.1 Point Processes and Random Measures
A point process in Rd is a random discrete subset of Rd. More generally, a
random measure on Rd is a random boundedly-finite measure on Rd. A point
process or random measure Φ is stationary if Φ + t has the same distribution
as Φ for every t ∈ Rd. If so, the Palm version of Φ can be defined in various
ways and means heuristically seeing Φ from a typical point or conditioning on
0 ∈ Φ. Formally, if B ⊆ Rd is an arbitrary bounded Borel set, bias by Φ(B)
and move the origin to a random point chosen with distribution proportional to Φ restricted to B; i.e.,
\[
P[\Phi_0 \in A] = \frac{1}{\lambda_\Phi}\, E\Big[\int_B 1_A(\Phi - x)\, d\Phi(x)\Big], \tag{4.1}
\]
where λ_Φ is a constant called the intensity of Φ, which is assumed to be positive
and finite here. For other methods to define the Palm version, one can mention
the use of the Campbell measure and tessellations, which will be generalized in
Section 6.
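As a concrete illustration (a standard fact, recalled here for orientation rather than derived from the text): for a homogeneous Poisson point process Φ in Rd with intensity λ ∈ (0, ∞), formula (4.1) gives Slivnyak's theorem, namely that the Palm version is obtained by adding a point at the origin,
\[
\Phi_0 \overset{d}{=} \Phi \cup \{0\};
\]
equivalently, the Mecke equation for the Poisson process reads E[Σ_{x∈Φ} f(x, Φ)] = λ ∫_{Rd} E[f(x, Φ ∪ {x})] dx for every measurable f ≥ 0.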
The Palm version satisfies Mecke's theorem [37], which can be rephrased
equivalently3 in the MTPs (1.2) and (1.3) (see also [30] and [33]). Mecke proved
in addition that, with some finite moment condition, this equation characterizes
the Palm versions of stationary point processes. Also, one can reconstruct the
stationary version from the Palm version via a formula which is known as Palm
inversion. Without the finite moment condition, the MTPs (1.2) and (1.3)
define larger classes of point processes and random measures, which are called
point-stationary point processes and mass-stationary random measures
in [33].4 For example, one can mention the zero set of the simple random walk,
its graph, and the local time at zero of Brownian motion (see [6] for further
examples). The MTPs (1.2) and (1.3) directly imply the following.
Example 4.1. If Φ0 is the Palm version of a stationary point process in Rd (or,
more generally, a point-stationary point process), then [Φ0, 0] is a unimodular
rmm space (equipped with the counting measure). If Ψ0 is the Palm version
of a stationary random measure (or, more generally, a mass-stationary random
measure), then [supp(Ψ0), 0, Ψ0] and [Rd, 0, Ψ0] are unimodular. Note that the
latter may be improper.
Remark 4.2. Since rmm spaces are considered up to isomorphisms, some geometry
is lost (e.g., the coordinate axes) when considering point processes and
random measures as rmm spaces. To fix this, one can extend M∗ to rmm spaces
equipped with some additional geometric structure, which will be discussed in
Subsection 5.1. In this sense, one can say that unimodular rmm spaces generalize
(point-) stationary point processes and (mass-) stationary random measures.
We will also provide a further generalization in Section 6 by defining stationary
point processes and random measures on a given unimodular rmm space. In
this viewpoint, unimodular rmm spaces generalize the base space Rd.
3 Mecke's theorem is stated in different notations in the literature, but it is easy to transform
the equation into this form.
4 The definition of point-stationarity in [33] is more complicated, but it is proved in [33]
that it is equivalent to Mecke's condition.
4.2 Unimodular Graphs and Discrete Spaces
As mentioned in the introduction, a unimodular (random) graph [3] is a
random rooted graph that satisfies the MTP (1.1). Also, the MTP provides
many connections between unimodular graphs and (point-) stationary point
processes. As a common generalization, [6] defines unimodular discrete spaces,
which are random boundedly-finite discrete metric spaces that satisfy a similar
MTP. Since the Benjamini-Schramm topology for rooted graphs is consistent
with the GHP topology (see e.g., [29]), the MTP implies the following.
Example 4.3. If [G, o] is a unimodular graph or a unimodular discrete space,
then it is also a unimodular rmm space (equipped with the counting measure).
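Indeed, when µ is the counting measure on the vertex set, the integrals in (2.1) become sums over the vertices, so (2.1) reads
\[
E\Big[\sum_{x \in G} g(G, o, x)\Big] = E\Big[\sum_{x \in G} g(G, x, o)\Big],
\]
which is exactly the MTP (1.1) defining unimodular graphs (identifying a graph with its vertex set equipped with the graph-distance metric).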
In considering graphs as rmm spaces, some information might be lost (e.g.,
parallel edges and loops). However, one can keep this information by considering
rmm spaces equipped with some additional structure (see Subsection 5.1).
In this sense, unimodular rmm spaces generalize unimodular graphs.
More generally, [3] defines unimodular networks, which are unimodular graphs
equipped with some marks on the vertices and edges. Also, [6] defines unimod-
ular marked discrete spaces. Similarly to the above example, these can be
regarded as unimodular rmm spaces equipped with suitable additional struc-
tures.
Example 4.4. The following are examples of unimodular rmm spaces which are
graphs (or discrete spaces) equipped with a measure different from the counting
measure. In these examples, the biasing is a special case of the Palm theory
developed in Section 6 (see Example 6.22).
(i) The Palm version of non-simple stationary point processes.
(ii) If [G,o] is a unimodular graph, then it is known that a stationary distri-
bution of the simple random walk is obtained by biasing the probability
measure by deg(o). It can be seen that, after this biasing, [G,o,deg(·)] is
unimodular.
(iii) Let S be the image of a process (Yn)n∈Z on Rd that has stationary increments
and Y0 = 0 (e.g., a random walk). Let µ be the counting measure
on S and m be the multiplicity measure on S. Assuming that S is discrete
and m is finite, [S, 0, m] is unimodular (this is easily implied by the MTP
on the index set Z). However, to make [S, 0, µ] unimodular, one needs to
bias by m(0)^{−1} (see Subsection 4.3 of [6]).
(iv) In some random rooted graphs [G, o], the MTP holds only when the sum is
made on a specific subset S containing the root. These cases are sometimes
called locally unimodular and are unimodular rmm spaces by letting µ be
the counting measure on S (not on G). An example is the graph of a
null-recurrent Markov chain on Z, where S is a level set of the graph.
Another example appears in extending a unimodular graph as described
in Example 6.23.
4.3 Scaling Limits
Let Gn be a sequence of finite graphs (or metric spaces), which might be deterministic
or random, and let on be a vertex in Gn chosen uniformly at random.
A (measured) scaling limit of Gn is the weak limit of a sequence of the form
[ϵnGn, on, δnµn] as random elements in M∗. Here, µn is the uniform measure
on the vertices of Gn and ϵnGn means that the graph-distance metric is scaled
by ϵn. Likewise, a subsequential (measured) scaling limit is defined as the
limit along a subsequence. The coefficients ϵn and δn may also depend on the
non-rooted graph Gn (but not on o). Since on is chosen uniformly, [Gn, on]
is a unimodular graph (in fact, it is enough to have the re-rooting invariance
property, see Example 2.4).
Another setting for scaling limits is zooming-out a given unimodular graph
or rmm space [X, o, µ]; i.e., (subsequential) limits of [ϵnX, o, δnµ] when the
factors ϵn and δn do not depend on o and converge to zero appropriately. There
are also examples of zooming-in a given rmm space (even compact) at the root,
which is the case when ϵn, δn → ∞ appropriately. Various examples will be
mentioned below.
In each of these settings, Lemma 3.1 implies the following.
Lemma 4.5 (Scaling Limit).Under the above assumptions, every measured
scaling limit is unimodular and satisfies the MTP (2.1). In particular, if the
limit is compact a.s., then it satisfies the re-rooting invariance property. The
same is true for subsequential scaling limits, if the subsequence is chosen not
depending on o.
Non-compact scaling limits also have some re-rooting invariance property,
which is stated in Theorem 3.8.
Remark 4.6. Unimodularity does not make sense for non-measured scaling
limits. However, in many cases, by a pre-compactness argument, it is possible
to deduce the existence of a subsequential measured scaling limit. For instance,
this is always the case if the scaling limit is compact.
Example 4.7 (Brownian Motion). The zero set of the simple random walk
(SRW) on Z scales to the zero set of Brownian motion equipped with the local
time measure. The graph of the SRW scales to the graph of Brownian motion
with the measure induced from the time axis (the metric is distorted differently,
but unimodularity is preserved anyway). The image of the SRW on Zd (d ≥ 3)
scales to the image of Brownian motion on Rd equipped with the push-forward
of the measure on R. By Example 4.4, the latter is unimodular. These examples
provide unimodular rmm spaces (and also mass-stationary random measures).
Example 4.8 (Brownian Trees). The Brownian continuum random tree (BCRT)
[1] is the scaling limit of random trees on n vertices. The re-rooting invariance
property (observed in [2]) is a direct corollary of Lemma 4.5 above. Aldous also
proved that, by choosing larger scaling factors suitably, a non-compact scaling
limit is obtained, which is called the self-similar continuum random tree (SSCRT).
Lemma 4.5 implies that the SSCRT is also a unimodular rmm space and
satisfies the MTP, which seems to be a new result.
Example 4.9 (Stable Trees). Stable trees generalize the BCRT and are the
scaling limits of Galton-Watson trees with infinite variance conditioned to be
large [14]. A re-rooting invariance property of stable trees is proved in [15].
Note that the Galton-Watson trees are not re-rooting invariant. However, one
can prove that the stable trees are the scaling limits of critical unimodular
Galton-Watson (UGW) trees (Example 1.1 of [3]) as well. Since UGW trees
are unimodular, the re-rooting invariance property is implied by Lemma 3.1
directly.
Example 4.10 (Self-Similar Unimodular Spaces). Let K ⊆ Rd be a self-similar
set such that K = ∪_i f_i(K), where each f_i is a homothety (see [9]). Equip K
with the unique self-similar probability measure µ on it and choose o ∈ K
with distribution µ. Assume all homothety ratios are equal and the open set
condition holds. In this case, by zooming-in at o, there exists a subsequential
scaling limit. This can be proved similarly to Theorem 4.14 of [6] (regarding
self-similar unimodular discrete spaces) and the proof is skipped for brevity (the
limit can be constructed by adding to K some isometric copies and continuing
recursively, similarly to Remark 4.21 of [6]). In addition, the limit is the same
as the subsequential scaling limit when zooming-out the unimodular discrete
self-similar set defined in [6]. Hence, the scaling limit is unimodular.
Example 4.11 (Micro-Sets). More generally than Example 4.10, micro-sets are
defined for an arbitrary compact set K ⊆ Rd by zooming-in at a given point
(see Subsection 2.4 of [9]). Let µ be a probability measure on K and choose o
randomly with distribution µ. By modifying the definition of micro-sets slightly
to allow non-compact micro-sets (using convergence of closed subsets of Rd),
and also by scaling µ at the same time, one obtains that every subsequential
measured micro-set at o defined as a weak limit (the subsequence and the scaling
parameters should not depend on o) is unimodular.
Example 4.12 (Brownian Web).Brownian web is the scaling limit of various
drainage network models (see e.g., [18]), which are random trees embedded in
the plane. The limit is usually defined with a different notion of convergence
(as collections of paths in the plane), but it has also been proved recently in [11]
that the scaling limit exists in the Gromov-Hausdorff topology as well. It is
natural to expect that the measured scaling limit also exists. Nevertheless, the
limit is the completion of the skeleton of the Brownian web, and hence, has a
natural measure induced from the Lebesgue measure on R2. By stationarity,
the resulting measured continuum tree is a unimodular rmm space.
Example 4.13 (Uniform Spanning Forest). Let C be the connected component
containing 0 of the uniform spanning forest of Zd (see Section 7 of [3]). It is
known that [C, 0] is unimodular (the same is true in any unimodular graph);
indeed, it is point-stationary. As random closed subsets of Rd equipped with a
measure, one can use precompactness to show that there exist subsequential
measured scaling limits. If one proves the existence of the scaling limit (or if
the subsequence is chosen in a translation-invariant way), then the limit is a
unimodular rmm space. It is also conjectured that the scaling limit of C exists
as a random real tree embedded in Rd, and for d ≥ 5, the limit is identical to
the Brownian-embedded SSCRT (see Example 4.8 and [41]).
Example 4.14 (Planar Maps). The scaling limits of random planar graphs and
maps have been of great interest in probability theory and in physics. For in-
stance, the Brownian map and the Brownian disk are compact random metric
spaces arising as the scaling limits of some models of uniform random planar
triangulations and quadrangulations (see e.g., [34]). The Brownian plane is a
non-compact model obtained by zooming-out the uniform infinite planar quad-
rangulation and also by zooming-in the Brownian map. Therefore, the Brownian
plane, equipped with the volume measure (i.e., the scaling limit of the counting
measure), is a unimodular rmm space.
In addition, [10] defines the hyperbolic Brownian plane as a limit of a sequence
of planar triangulations scaled properly. It is observed in [10] that this model
satisfies a version of the MTP. Indeed, this means that the hyperbolic Brownian
plane is a unimodular rmm space and is a direct corollary of Lemma 3.1.
Also, in the uniform infinite half-plane triangulation, the root is on the bound-
ary, which is a bi-infinite path. It is proved that the scaling limit of this model
exists. Equipping it with the length measure, which is the scaling limit of the
counting measure on the boundary, a unimodular rmm space is obtained (which
is improper).
4.4 Groups and Deterministic Spaces
In this subsection, we investigate when a deterministic rmm space [X, o, µ] is a
unimodular rmm space. By Lemma 3.4, it is necessary that (X, µ) is transitive;
i.e., [X, o, µ] is isomorphic to [X, y, µ] for every y ∈ X. However, transitivity
is not enough. The following generalizes the analogous result about transitive
graphs (see [7]).
Proposition 4.15. Let [X, o, µ] be a deterministic rmm space.
(i) [X, o, µ] is unimodular if and only if (X, µ) is transitive and the automorphism
group of (X, µ) is a unimodular group.
(ii) Assume Γ is a closed subgroup of the automorphism group of (X, µ). Then,
the MTP holds for all Γ-invariant functions on X × X if and only if Γ is
unimodular and acts transitively on X.
The first part is proved in a more general form in Theorem 4.18. The second
part can also be proved similarly and the proof is skipped for brevity.
Corollary 4.16 (Unimodular Groups). Every unimodular group Γ equipped
with the Haar measure and a boundedly-compact left-invariant metric d (e.g.,
any finitely generated group equipped with a Cayley graph) is a unimodular rmm
space.
It is also easy to prove this corollary by verifying the MTP directly.
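Here is a sketch of that direct verification: for a deterministic unimodular group Γ with Haar measure m and identity e, the MTP for [Γ, e, m] is the identity ∫ g(e, γ) dm(γ) = ∫ g(γ, e) dm(γ) for every transport function g. Since the metric is left-invariant, every left translation is an isomorphism of rmm spaces, so the invariance of g gives g(e, γ^{−1}) = g(γ, e); combining this with the inversion-invariance of the Haar measure of a unimodular group yields
\[
\int_\Gamma g(e, \gamma)\, dm(\gamma) = \int_\Gamma g(e, \gamma^{-1})\, dm(\gamma) = \int_\Gamma g(\gamma, e)\, dm(\gamma).
\]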
Example 4.17. The Euclidean space Rd is a unimodular group, and hence,
[Rd, 0, Leb] is unimodular. The hyperbolic space Hn is not a group, but Proposition
4.15 implies that it is unimodular. However, the hyperbolic plane with one
distinguished ideal point (which can be regarded as a rmm space with some additional
structure; see Subsection 5.1) is not unimodular, since its automorphism
group is not unimodular.
4.5 Quasi-Transitive Spaces
Let (X, µ) be a deterministic measured metric space. In this subsection, we
investigate when one can choose a random root o ∈ X such that [X, o, µ] is
unimodular. This generalizes the case of quasi-transitive graphs;5 see [7] and
Section 3 of [3].
Let Γ be the automorphism group of (X, µ). If Γ contains only the identity
function, then it is straightforward to see that µ should be a finite measure and
the distribution of o should be proportional to µ (see Example 2.4). However,
if Γ is nontrivial, the situation is more involved.
The orbit of x ∈ X is the set Γx = {γx : γ ∈ Γ}. Bounded compactness of
X implies that Γ is a locally-compact topological group. Let |·| be a left-invariant
Haar measure on Γ. Let B be an arbitrary open set that intersects every orbit in
a nonempty bounded set. For instance, one can let B := ∪_{n≥1} B_n(o) \ ΓB_{n−2}(o)
for an arbitrary o ∈ X. For x ∈ X, define
\[
h(x) := |\Gamma_{x,B}|^{-1}, \quad \text{where } \Gamma_{x,B} := \{\gamma \in \Gamma : \gamma^{-1}x \in B\}.
\]
The assumptions imply 0 < h(x) < ∞. Also, for α ∈ Γ, one has Γ_{αx,B} = αΓ_{x,B},
and hence, h(αx) = h(x).
Theorem 4.18. There exists a random point o ∈ X such that [X, o, µ] is
unimodular if and only if Γ is a unimodular group and ∫_B h dµ < ∞. In this
case, the distribution of [X, o, µ] is unique. In addition, o can be chosen with
distribution proportional to (hµ)|_B.
This generalizes Theorem 3.1 of [3]. Indeed, in the case where X is a graph
or network and µ is the counting measure, one can choose B by choosing exactly
one point from every orbit. In this case, for x ∈ B, h(x) is the inverse of the
measure of the stabilizer of x. One can also generalize the 'if' side of the claim
to the case where Γ is a closed subgroup of the automorphism group that acts
transitively.
Example 4.19. Let X be a horoball in the hyperbolic plane corresponding to an
ideal point ω and let µ be the volume measure on X. Theorem 4.18 shows that
one can choose a random point o ∈ X such that [X, o, µ] is unimodular. Indeed,
if B is the region between any two lines passing through ω, then it is enough to
choose o uniformly in B (note that Γ is isomorphic to the automorphism group
of R). This can be thought of as a continuum version of the canopy tree.
For a simpler example, let α be a finite measure on R and let µ := α × Leb.
Then, [R2, o, µ] is unimodular, where o is chosen on the x-axis with distribution
proportional to α.
5 This is the only motivation for the title of this subsection.
Proof of Theorem 4.18. For every measurable function f : X → R≥0, one has
\[
\int_X f(y)\, d\mu(y) = \int_B \int_\Gamma h(x)\, f(\gamma x)\, d\gamma\, d\mu(x).
\]
This can be seen by the change of variable y := γx in the right hand side and
noting that h and µ are Γ-invariant. First, assume that a := ∫_B h(x) dµ(x) < ∞
and o ∼ (1/a)(hµ)|_B. Therefore, for every transport function g,
\[
\begin{aligned}
a\, E\Big[\int g(o, y)\, d\mu(y)\Big] &= \int_B h(z) \int_X g(z, y)\, d\mu(y)\, d\mu(z) \\
&= \int_B h(z) \int_B \int_\Gamma h(x)\, g(z, \gamma x)\, d\gamma\, d\mu(x)\, d\mu(z) \\
&= \int_\Gamma \int_B \int_B h(z)\, h(x)\, g(z, \gamma x)\, d\mu(x)\, d\mu(z)\, d\gamma.
\end{aligned}
\]
Similarly,
\[
a\, E\Big[\int g(y, o)\, d\mu(y)\Big] = \int_\Gamma \int_B \int_B h(z)\, h(x)\, g(\gamma z, x)\, d\mu(x)\, d\mu(z)\, d\gamma.
\]
Note that g(γz, x) = g(z, γ^{−1}x). If Γ is unimodular, then the change of variable
γ → γ^{−1} preserves the Haar measure. This implies that the two above formulas
are equal and the unimodularity of [X, o, µ] is proved.
Conversely, assume o ∈ X is a random point such that [X, o, µ] is unimodular.
Let P and ν be the distributions of o and [X, o, µ] respectively. For A ⊆ X,
define [A] := {[X, u, µ] : u ∈ A}. So ν([A]) = P[o ∈ ΓA]. Let C, D ⊆ X be
arbitrary and define g(u, v) := |{γ : γ^{−1}u ∈ C, γ^{−1}v ∈ D}|. One has
\[
\begin{aligned}
E\Big[\int_X g(o, y)\, d\mu(y)\Big] &= E\Big[\int_X \int_\Gamma 1_C(\gamma^{-1}o)\, 1_D(\gamma^{-1}y)\, d\gamma\, d\mu(y)\Big] \\
&= E\Big[\int_X \int_\Gamma 1_C(\gamma^{-1}o)\, 1_D(y)\, d\gamma\, d\mu(y)\Big] \\
&= \mu(D)\, E\Big[\int_\Gamma 1_C(\gamma^{-1}o)\, d\gamma\Big],
\end{aligned}
\]
where the second equality is by the change of variable y′ := γ^{−1}y. Similarly,
\[
E\Big[\int_X g(y, o)\, d\mu(y)\Big] = \mu(C)\, E\Big[\int_\Gamma 1_D(\gamma^{-1}o)\, d\gamma\Big].
\]
Since g is Γ-invariant, the MTP implies that these two expressions are equal.
Since this holds for arbitrary C and D, there exists a constant b such that
\[
E\Big[\int_\Gamma 1_C(\gamma^{-1}o)\, d\gamma\Big] = b\, \mu(C), \qquad \forall C \subseteq X. \tag{4.2}
\]
Thus, for every measurable function f on X, one has E[∫_Γ f(γ^{−1}o) dγ] =
b ∫_X f dµ. In particular, let f = h 1_A 1_B, where A ⊆ X is any Γ-invariant set.
Hence,
\[
\begin{aligned}
b \int_X h\, 1_A\, 1_B\, d\mu &= E\Big[\int_\Gamma h(\gamma^{-1}o)\, 1_A(\gamma^{-1}o)\, 1_B(\gamma^{-1}o)\, d\gamma\Big] \\
&= E\Big[h(o)\, 1_A(o) \int_\Gamma 1_B(\gamma^{-1}o)\, d\gamma\Big] \\
&= E[1_A(o)] = \nu([A]).
\end{aligned}
\]
This proves the uniqueness of ν. Also, by letting A := X, one gets that ∫_B h dµ <
∞ and b = 1/a. It also implies that, by choosing another point with distribution
b(hµ)|_B, ν would not change. Hence, one may choose o ∼ b(hµ)|_B from the
beginning. In addition, for every β ∈ Γ, since µ(C) = µ(βC), (4.2) implies
\[
E\Big[\int_\Gamma 1_C(\gamma^{-1}o)\, d\gamma\Big] = E\Big[\int_\Gamma 1_{\beta C}(\gamma^{-1}o)\, d\gamma\Big] = m(\beta)^{-1}\, E\Big[\int_\Gamma 1_C(\gamma^{-1}o)\, d\gamma\Big],
\]
where the last equality is by the change of variable γ′ := γβ, which changes
the Haar measure by the constant factor m(β), where m is the modular function
of Γ. The above equation implies that m(β) = 1; i.e., Γ is unimodular. So the
claim is proved.
4.6 Connection with Borel Equivalence Relations
Let S be a Polish space and R be an equivalence relation on S. It is called
a countable Borel equivalence relation (CBER) if it is a Borel subset of
S × S and every equivalence class is countable. A probability measure ν on
S is invariant under R if, for all measurable functions f : S × S → R≥0, one
has
\[
\int \sum_{y \in R(x)} f(x, y)\, d\nu(x) = \int \sum_{y \in R(x)} f(y, x)\, d\nu(x),
\]
where R(x) is the equivalence class containing x. As discussed in Example 9.9 of [3],
this notion is tightly connected to unimodular graphs: if one has a graphing of R and o ∈ S
is a random point with distribution ν, then the component of the graphing containing
o is unimodular. Conversely, if P is a unimodular probability measure
on G∗, where G∗ ⊆ M∗ is the space of connected rooted graphs, then P is invariant
under the equivalence relation on G∗ defined by (G, o) ∼ (G, v) for all
v ∈ G. The last claim is an if and only if in the case where P is supported
on graphs with no nontrivial automorphism. As mentioned in [3], there is a
substantial overlap between the two theories, but their viewpoints and motivations
are different. In fact, graphings of CBERs are quite more general and
they can capture some features of graph limits that unimodular graphs don't
(see local-global convergence in [22]). Also, some of the results for unimodular
graphs are proved by the results on CBERs; e.g., ergodic decomposition and
amenability. It should be noted that, due to the possibility of automorphisms,
some geometry is lost when passing to graphings. This issue should be dealt
with; e.g., by adding extra randomness to break the automorphisms.
For unimodular rmm spaces, the analogous equivalence relation on M∗ defined
by (X, o, µ) ∼ (X, v, µ) for v ∈ X is not countable. Hence, the theory of CBERs
is not directly applicable. In Section 7, we will construct a CBER by introducing
the Poisson point process on unimodular rmm spaces. Also, by the Palm theory
developed in Section 6, we construct an invariant measure for the CBER. This
enables us to use the results in the theory of CBERs.
5 Further Properties of Unimodular rmm Spaces
5.1 Allowing Extra Randomness
Lemma 3.4 deals with factor subsets, which are functions of the underlying
measured metric space. The lemma still holds if one allows extra randomness
in choosing the subset, provided that unimodularity is preserved. However, this
is not straightforward to formalize due to measurability issues (the space of all
(X, o, µ, S), where S is a Borel subset of X, does not have a natural topology).6
To do this generalization, assume there exists already a random geometric
structure m on X and [X, o, µ, m] is a random rmm space equipped with some
additional structure. This makes sense as soon as there is a suitable generalization
of the GHP metric. For instance, m can be a random measure on X,
a random closed subset of X, etc. In the previous work [29], a quite general
framework is presented for extending the GHP metric using the notion of func-
tors. This can be applied to various types of additional geometric structures and
sufficient criteria for Polishness are also provided (it is necessary that for every
deterministic (X, o, µ), the set of additional structures on (X, o, µ) is Polish, but
more assumptions are needed). Here, we assume that Polishness holds as well.
Definition 5.1. [X,o,µ,m] is unimodular if the MTP (2.1) holds even if g
depends on the additional structure m.
In fact, this means that there exists a version of the conditional distribution
of m given [X, o, µ] such that the conditional law does not depend on the
root. This will be formalized in Section 6 (see Definition 6.1, Lemma 6.3 and
Remark 6.7).
Once we have such an additional structure, one can extend the notion
of factor subsets by allowing the subset to be a function of (X, µ, m). Then, the results
of this paper, like Lemmas 3.4 and 3.1, can be generalized to this more general
setting.
6 This is similar to the issue of defining random Borel subsets of Rd, where one has to use
a random field instead. The following idea is similar to the use of random fields.
Additional geometric structures are interesting in their own right and various special
cases have been considered in the literature separately (see [29] for an account
of the literature and for a unification). In this work, we use this setting
in various places; e.g., for developing Palm theory in Section 6 (where m is a
tuple of k random measures on X), for studying random walks on unimodular
spaces in Subsection 5.2 (where m is a sequence of points of X), for ergodicity
(Subsection 5.3), for hyperfiniteness (Subsection 5.6), and in the proofs in
Section 7 (where m is a marked measure or a closed subset).
5.2 Random Walk
In Subsection 3.3, re-rooting a unimodular rmm space is defined by means of
equivariant Markovian kernels. By iterating a re-rooting, one obtains a random
walk on the unimodular rmm space, as formalized below.
Let M∞ be the space of all (X, o, µ, (y_n)_{n∈Z}), where (y_n)_n is a sequence in X and y_0 = o. By the discussion in Subsection 5.1, one can show that M∞ can be turned into a Polish space. Consider the following shift operator on M∞:

S(X, o, µ, (y_n)_n) := (X, y_1, µ, (y_{n+1})_n). (5.1)

An initial bias is a measurable function b : M∗ → R_{≥0}. Then, for every deterministic (X, µ), one obtains a measure bµ on X defined by

bµ(A) := ∫_A b(X, y, µ) dµ(y). (5.2)
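For orientation (an illustrative special case, not part of the original text): the constant bias b ≡ 1 gives bµ = µ, while in the graph setting recalled in Remark 5.4 below, taking µ to be the counting measure on the vertices and b(G, y, µ) := deg(y) yields

bµ(A) = ∑_{y∈A} deg(y),    A ⊆ V(G),

which is exactly the degree biasing used for random rooted graphs in [3].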
Let k be an equivariant Markovian kernel (Subsection 3.3). For every deterministic (X, o, µ), the kernel k_{(X,µ)} defines a Markov chain (x_n)_{n∈Z} on X such that x_0 = o and x_1 has law k_o. Let θ_{(X,o,µ)} be the law of (X, o, µ, (x_n)_n) on M∞. Let [X, o, µ] be a random rmm space such that E[b(o)] < ∞ and let Q be the distribution of [X, o, µ, (x_n)_n] biased by b(o).
Theorem 5.2. Assume [X, o, µ] is a unimodular rmm space. Under the above assumptions, if for almost every sample [X, o, µ] of [X, o, µ], bµ is a stationary (resp. reversible) measure for the Markovian kernel k_{(X,µ)} on X, then Q is a stationary (resp. reversible) measure under the shift S.
This generalizes Theorem 4.1 of [3], which is for unimodular graphs. A further generalization is provided in Subsection 6.3.4 by allowing the stationary measure to be singular with respect to µ.
Proof. First, assume that k_o is absolutely continuous w.r.t. µ almost surely. In this case, a slight modification of the proof of Theorem 4.1 of [3] (similarly to Lemma 3.9) can be used to prove the claim. In the general case, the idea of the proof of Theorem 3.8 (composing k with a continuous kernel) does not work. Instead, it is enough to approximate k by a sequence of equivariant kernels that have the same properties as k (stationarity or reversibility) plus absolute continuity. For this, let h̃_n be a sequence of kernels converging to the trivial kernel, given by Remark 3.11. Note that the kernel h̃_n ∘ k ∘ h̃_n converges to k as n → ∞, preserves bµ if k does, is reversible if k is reversible (since h̃_n is symmetric) and has the absolute continuity property. This proves the claim.
Theorem 5.3 (Characterization of Unimodularity). Let h be a fixed symmetric transport function as in Lemma 3.10 (h > 0 and h^+(·) = h^−(·) = 1). Let (x_n)_n be the random walk given by the kernel k defined by k_o := h(o, ·)µ. Then, a random rmm space [X, o, µ] is unimodular if and only if the law of [X, o, µ, (x_n)_n] (defined above) is stationary and reversible under the shift S. The condition h > 0 can also be relaxed to ∑_n h^{(n)} > 0, where h^{(n)} is given by the n-fold composition of the kernel with itself.
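In other words (a restatement for the reader, not in the original text), the kernel appearing in Theorem 5.3 is the probability kernel

k_o(A) = ∫_A h(o, y) dµ(y),    k_o(X) = h^+(o) = 1,

so the walk jumps from the current root to a point chosen with µ-density h(o, ·).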
Remark 5.4. One can similarly extend this theorem to allow an arbitrary initial bias b by requiring h^+ = h^− = b. This generalizes the fact that, for random rooted graphs, unimodularity is equivalent to stationarity and reversibility of the simple random walk after biasing by the degree of the root (see Section 4 of [3]). The benefit of the above theorem is that it characterizes all unimodular rmm spaces, not only those satisfying the moment condition E[b(o)] < ∞. This seems to be new even for graphs (see also the proof of Lemma 4 of [30]).
Proof of Theorem 5.3. The 'only if' part is implied by Theorem 5.2. For the 'if' part, note that if h > 0, then the two sides of (2.1) are E[g′(o, x_1)] and E[g′(x_1, o)], where g′ := g/h. So, since [X, o, x_1, µ] has the same distribution as [X, x_1, o, µ] by reversibility, the MTP holds; i.e., [X, o, µ] is unimodular. Under the weaker condition ∑_n h^{(n)} > 0, let A_n be the event h^{(n)} > 0 and write g = ∑_n g_n, where g_n is the restriction of g to A_n \ (∪_{i=1}^{n−1} A_i). Each g_n satisfies the MTP by the first part of the proof. Hence, g also satisfies the MTP.
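For the reader's convenience, the identification of the two sides of (2.1) used in the first step can be written out explicitly (a routine computation, not in the original text): since, conditionally on [X, o, µ], the point x_1 has law k_o = h(o, ·)µ and h is symmetric,

E[∫_X g(o, y) dµ(y)] = E[∫_X g′(o, y) h(o, y) dµ(y)] = E[g′(o, x_1)],
E[∫_X g(y, o) dµ(y)] = E[∫_X g′(y, o) h(o, y) dµ(y)] = E[g′(x_1, o)].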
Proposition 5.5 (Speed Exists). Under the setting of Theorem 5.2, the speed of the random walk (x_n)_n, defined by lim_n d(o, x_n)/n, exists almost surely.
This is a generalization of Proposition 4.8 of [3] and can be proved by the same argument using Kingman's subadditive ergodic theorem. In fact, Theorem 5.7 implies that the speed does not depend on o and is measurable with respect to the invariant sigma-field (up to modifying a null event).
5.3 Ergodicity
An event A ⊆ M∗ is invariant if it does not depend on the root; i.e., if [X, o, µ] ∈ A, then ∀y ∈ X: [X, y, µ] ∈ A. Let I be the sigma-field of invariant events. A unimodular rmm space [X, o, µ] (or a unimodular probability measure on M∗) is ergodic if P[A] ∈ {0, 1} for all A ∈ I.
One can express ergodicity in terms of ergodicity of random walks as follows. Let h be a fixed symmetric transport function such that h^+(·) = h^−(·) = 1 and h > 0 (given by Lemma 3.10). Let (x_n)_{n∈Z} be the resulting two-sided random walk as in Theorem 5.3 and let Q be the distribution of [X, o, µ, (x_n)_n]. Let Γ be the group of automorphisms of the time axis Z; i.e., transformations of the form t ↦ t + t_0 or t ↦ −t + t_0. By Theorem 5.3, [X, o, µ] is unimodular if and only if Q is invariant under the action of Γ on M∞ defined by shifts and time-reversals (note that considering time-reversals is important at this point).
Here, we prove:
Theorem 5.6. [X, o, µ] is ergodic if and only if the law of [X, o, µ, (x_n)_n] is ergodic under the action of Γ.
The latter means that every Γ-invariant event has probability zero or one. This result is straightforwardly implied by the following theorem.
Theorem 5.7. Under the above setting, for every shift-invariant event B ⊆ M∞, there exists an invariant event A ∈ I such that B Δ {(X, o, µ, (x_n)_n) : (X, o, µ) ∈ A} has zero probability for every unimodular rmm space equipped with the random walk (x_n)_n.
This extends Theorem 4.6 of [3], with the additional point that A does not depend on the choice of the unimodular probability measure (this is important in the next subsection). The same proof works here and is skipped for brevity. This result also holds for an arbitrary random walk preserving an initial bias (as in Theorem 5.2). Also, one can relax the condition h > 0 to ∑_n h^{(n)} > 0 as in Theorem 5.3.
Corollary 5.8. Let [X, o, µ] and (x_n)_n be as above. Then, for every f : M∗ → R such that E[|f(o)|] < ∞, ave(f) := lim_n (1/(2n)) ∑_{i=−n}^{n} f(x_i) exists and E[ave(f)] = E[f(o)]. In particular, if [X, o, µ] is ergodic, then ave(f) = E[f(o)] a.s.
5.4 Ergodic Decomposition
In this subsection, we will prove ergodic decomposition for unimodular rmm spaces; i.e., we express the distribution as a mixture of ergodic probability measures. This does not follow immediately from the existing results in the literature; e.g., those for measure-preserving semigroup actions (see [16]) or for countable Borel equivalence relations. We will deduce the claim from ergodic decomposition for the random walk.
Let U (resp. E) denote the set of unimodular (resp. ergodic) probability measures on M∗. These are subsets of the set of probability measures on M∗ and can be equipped with the topology of weak convergence. Also, U is a closed and convex subset by Lemma 3.1.
Proposition 5.9. E is the set of extreme points of U.
An extreme point is a point that cannot be expressed as a convex combination of two other points. This claim is implied by the following stronger result, and the proof is skipped.
Theorem 5.10 (Ergodic Decomposition). For every unimodular probability measure ν on M∗, there exists a unique probability measure λ on E such that ν = ∫_E φ dλ(φ).
See also Theorem 5.12 for another formulation. The proof is by using random walks. Let h be an arbitrary symmetric transport function as in Lemma 3.10 (h > 0 and h^+(·) = h^−(·) = 1) and let (x_n)_n be the resulting random walk as in Subsection 5.2. Let I′ be the sigma-field of Γ-invariant events in M∞, where Γ is the automorphism group of Z (as in Subsection 5.3). Let U′ (resp. E′) be the set of Γ-invariant probability measures on M∞ (resp. of those that are ergodic under the action of Γ). Let π : M∞ → M∗ be the projection defined by forgetting the trajectory of points. By using Theorems 5.3 and 5.6, it is straightforward to deduce that π_∗U′ = U and π_∗E′ = E.
Proof of Theorem 5.10 (Existence). Let ν be the distribution of a unimodular rmm space [X, o, µ] and let ν̂ be the distribution of [X, o, µ, (x_n)_n]. By Theorem 5.2, ν̂ ∈ U′. Therefore, by the ergodic decomposition for the action of Γ (see e.g., [16]), there exists a probability measure α on E′ such that ν̂ = ∫_{E′} φ dα(φ). This implies that ν = ∫_{E′} (π_∗φ) dα(φ). Use the change of variables ψ := π_∗φ and note that π_∗φ ∈ E. One gets ν = ∫_E ψ dλ(ψ), where λ = (π_∗)_∗α. So, the existence of the ergodic decomposition is proved.
Lemma 5.11. For every event C ⊆ M∗ and every 0 ≤ p ≤ 1, there exists an invariant event A ∈ I such that {ν ∈ E : ν(C) ≤ p} = {ν ∈ E : ν(A) = 1}.
Proof. Let C′ := {(X, o, µ, (x_n)_n) : (X, o, µ) ∈ C} ⊆ M∞. By Theorems 1 and 3 of [16], there exists B ∈ I′ such that {ν′ ∈ E′ : ν′(C′) ≤ p} = {ν′ ∈ E′ : ν′(B) = 1}. Now, let A be the event given by Theorem 5.7 applied to B.
Proof of Theorem 5.10 (Uniqueness). Let ν_1 = ∫_E φ dλ_1(φ) and ν_2 = ∫_E φ dλ_2(φ), where λ_1 and λ_2 are distinct probability measures on E. The sets E_{C,p} := {ν ∈ E : ν(C) ≤ p}, where C ⊆ M∗ and 0 ≤ p ≤ 1, generate the weak topology and its corresponding Borel sigma-field. Therefore, there exist C and p such that λ_1(E_{C,p}) ≠ λ_2(E_{C,p}). By Lemma 5.11, there exists an invariant event A ∈ I such that E_{C,p} = {ν ∈ E : ν(A) = 1}. Since φ(A) ∈ {0, 1} for every φ ∈ E, one gets ν_i(A) = ∫ φ(A) dλ_i(φ) = λ_i(E_{C,p}). Thus, ν_1(A) ≠ ν_2(A), and hence, ν_1 ≠ ν_2.
Lemma 5.11 proves condition (b) of [16]. Condition (c) is also implied by the proof of existence in Theorem 5.10. One can similarly prove condition (a) of [16] using random walks. Therefore, one can leverage the direct construction of the ergodic decomposition in [16] to prove the following formulation of ergodic decomposition (see the paragraph of [16] that follows the definition of condition (c)).
Theorem 5.12. There exists an I-measurable map β : M∗ → E such that ν = ∫_{M∗} β(ξ) dν(ξ) for every unimodular probability measure ν on M∗. In addition, every ergodic φ ∈ E is concentrated on β^{−1}(φ) and no other ergodic measure is concentrated on β^{−1}(φ). Such a map β is unique in the sense that any two such maps are equal ν-a.s. for every unimodular probability measure ν.
See Theorem 4.11 of [28] for a similar statement for countable Borel equiv-
alence relations.
5.5 Ends
Ends are defined for every topological space [19]. In particular, if X is boundedly-compact and locally-connected and o ∈ X, every end of X can be represented uniquely by a sequence (U_n)_n, where U_n is an unbounded connected component of B_n(o)^c and U_1 ⊇ U_2 ⊇ ···. There is also a natural compact topology on the union of X and the set of its ends.
For unimodular graphs, it is proved that the number of ends (defined sim-
ilarly) is either 0, 1, 2 or ∞. In addition, in the last case, there is no isolated
end (Proposition 6.10 of [3]). Here, we extend these results to unimodular rmm
spaces.
Definition 5.13. Let (X, o, µ) be a locally-connected rmm space. An end of X, represented by a sequence (U_n)_n as above, is light if µ(U_n) < ∞ for some n. Otherwise, it is called heavy. It is called isolated if U_n has only one end for some n.
If µ(X) < ∞, there is no heavy end, but the number of light ends can be arbitrary (Example 2.4). Also, even if µ(X) = ∞, there might exist light isolated ends (e.g., when X = ∪_{n∈Z}({n} × R) ∪ (R × {0}) and µ is the sum of the Lebesgue measure on R × {0} and some finite measures on the vertical lines). Here, we focus on the other cases.
Proposition 5.14. Assume [X, o, µ] is a unimodular rmm space that is connected and locally-connected a.s. and assume µ(X) = ∞ a.s.
(i) The number of heavy ends of X is either 1, 2 or ∞. In the last case, there is no isolated heavy end.
(ii) The set of light ends is either empty or is an open dense subset of the set of ends of X (and hence, is infinite).
If X is disconnected but locally-connected, the same result holds for almost every connected component of X by Lemma 3.7 (note that every component is clopen by local connectedness).
The proof of the claim for heavy ends is a modification of that of Proposi-
tion 3.9 of [36] for unimodular graphs. So this part is only sketched for brevity.
First, we provide the following generalization of Lemma 3.7.
Lemma 5.15. Let [X, o, µ] be as in Proposition 5.14 and let S be a factor subset. Then, almost surely, if S ≠ ∅, then every heavy end of (X, µ) is a limit point of S.
Proof. If not, then with positive probability there exist a bounded subset C and a component U of X \ C such that C ∩ S ≠ ∅, U ∩ S = ∅ and µ(U) = ∞. One may assume that C is open and connected and has diameter at most n (for some fixed n). Let C′ be the union of all such sets C. Then, C′ is a factor subset, it intersects S, and one can see that X \ C′ has some components U′ with infinite mass. For every such U′, send unit mass from every x ∈ U′ to the compact set ∂U′ (or, if µ(∂U′) = 0, to some neighborhood of ∂U′). This contradicts the MTP, since the outgoing mass is at most 1 and the incoming mass is ∞ at some points.
Proof of Proposition 5.14. The existence of heavy ends is proved by constructing the sets U_n inductively such that µ(U_n) = ∞. If the number of heavy ends is finite and at least three, then one can construct a compact subset S ⊆ X, as a factor of X, that separates all of the heavy ends (consider the union of all open connected subsets with diameter at most n that separate all ends, and note that every two such subsets intersect). This contradicts Lemma 3.7. Also, if e is an isolated heavy end and there are at least three heavy ends, one can assign to e a bounded set S(e) ⊆ X equivariantly that separates e from all other heavy ends and separates at least two other heavy ends from each other (consider the union of all such sets that are open and connected and have diameter at most n, and note that every two of them intersect). The set ∪_e S(e) violates Lemma 5.15. If the set of light ends is nonempty and not dense, there exists some open connected set C with diameter at most n (for some fixed n) such that some component of X \ C has only heavy ends, and some other component U′ of X \ C has light ends and µ(U′) ≤ 1. One can see that the union of all such C violates Lemma 5.15.
5.6 Amenability
The notion of amenability is originally defined for countable groups. It has also been extended to locally-compact topological groups and to countable Borel equivalence relations. The latter is extended to unimodular graphs in [3]. There are various equivalent definitions, some functional-analytic and some combinatorial. These existing definitions do not apply directly to unimodular rmm spaces, since no group or countable Borel equivalence relation is present.
In this subsection, we extend some of the definitions of amenability of count-
able Borel equivalence relations (in [12] and [26]) to unimodular rmm spaces. In
Theorem 5.21, we prove that these definitions are equivalent by reducing them
to analogous conditions for some specific countable Borel equivalence relation.
This reduction requires Palm theory developed in Section 6. So the proof of the
main result is postponed to Subsection 7.3.
5.6.1 Definition by Local Means
A definition of amenability of groups is the existence of an invariant mean; that is, the existence of a map that assigns a mean value to every bounded measurable function, such that the map is group-invariant and finitely additive. For countable Borel equivalence relations, two definitions, via global means and via local means, are provided (see [26] and [12]). Here, we extend the local mean to unimodular rmm spaces (global means are based on partial bijections and seem more difficult to extend to the continuum setting).
Definition 5.16 (Local Mean). Let [X, o, µ] be a unimodular rmm space. A local mean m is a map that assigns to (some class of) deterministic rmm spaces (X, o, µ) a state m_{(X,o,µ)} =: m_o on (X, µ) (i.e., m_o : L^∞(X, µ) → R is a positive linear functional such that m_o(1) = ∥m_o∥_∞ = 1), such that:
(i) m is isomorphism-invariant and is defined for a.e. realization of [X, o, µ],
(ii) if m_o is defined, then m_y is defined for all y ∈ X and m_o = m_y,
(iii) for all bounded measurable functions f : M∗∗ → R, the map [X, o, µ] ↦ m_o(f(o, ·)) is measurable.
Then, the following condition is a definition of amenability of a unimodular rmm space [X, o, µ] and is analogous to the condition (AI) of [26]:

There exists a local mean. (LM)
5.6.2 Definition by Approximate Means
Definition 5.17. Let [X, o, µ] be a unimodular rmm space. An approximate mean is a sequence of measurable functions λ_n : M∗∗ → R_{≥0} such that, almost surely, ∀y ∈ X: ∫_X λ_n(y, ·) dµ = 1 and ∀y ∈ X: ∥λ_n(o, ·) − λ_n(y, ·)∥_1 → 0.
Here, λ_n(y, ·) is regarded as an element of L^1(X, µ). Then, the following condition is another definition of amenability and is analogous to the condition (AI) of [26] for Borel equivalence relations:

There exists an approximate mean. (AM)

The name approximate mean comes from the fact that a local mean is obtained by taking an ultralimit of ∫ λ_n(o, ·) f(·) dµ as n → ∞.
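For orientation (an illustration not in the original text, using the standard identification of [R^d, 0, Leb] as a unimodular rmm space): the normalized indicators of large balls, λ_n(x, ·) := 1_{B_n(x)}(·)/Leb(B_n), form an approximate mean, since ∫ λ_n(y, ·) dLeb = 1 and, for every fixed y,

∥λ_n(o, ·) − λ_n(y, ·)∥_1 = Leb(B_n(o) Δ B_n(y))/Leb(B_n) → 0    as n → ∞.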
5.6.3 Definition by Hyperfiniteness
Roughly speaking, a countable Borel equivalence relation is hyperfinite if it can
be approximated by finite equivalence sub-relations. The following is analogous
to equivalence sub-relations of the natural equivalence relation on M∗(Subsec-
tion 4.6):
Definition 5.18. A factor partition Π is a map that assigns to every (X, µ) a partition of X such that the map is invariant under isomorphisms and, if Π(o) denotes the element containing o ∈ X, then {(X, o, y, µ) : y ∈ Π(o)} is a measurable subset of M∗∗. It is called finite if every element of Π has finite measure under µ (but the elements need not be finite or bounded sets). A factor sequence of nested partitions (Π_n)_n is defined similarly, by the condition that the map (X, o, y, µ) ↦ min{n : y ∈ Π_n(o)} is measurable.
If (X, µ) has nontrivial automorphisms, not all partitions of X can appear in the above definition. So, we allow extra randomness. But due to the topological issues of defining a random partition of X (see Subsection 5.1), we use another form of extra randomness as follows.
Definition 5.19. An equivariant (random) partition is defined similarly
to factor partitions with the difference that the partition can be a factor of
(X, o, Φ), where Φ is an equivariant random additional structure as in Subsec-
tion 5.1. An equivariant nested sequence of partitions is defined similarly.
Remark 5.20. In fact, the arguments in Subsection 7.3 show that it is enough
that the partition is a factor of the marked Poisson point process (see Exam-
ple 6.10 and Subsection 7.3). Also, if there is no nontrivial automorphism a.s.,
then factor partitions are enough.
These definitions allow us to define the following three forms of hyperfiniteness, which will be seen to be equivalent.

There exist equivariant nested finite partitions Π_n such that P[∪_n Π_n(o) = X] = 1. (HF1)

This does not imply that Π_n(o) contains a large neighborhood of o. The following condition is another form of hyperfiniteness.

There exist equivariant nested finite partitions Π_n such that ∀r < ∞: P[∃n : B_r(o) ⊆ Π_n(o)] = 1. (HF2)

Hyperfiniteness can also be phrased in terms of a single partition as follows.

∀r < ∞, ∀ϵ > 0, there exists an equivariant finite partition Π such that P[B_r(o) ⊈ Π(o)] < ϵ. (HF3)

5.6.4 Definition by the Folner Condition
The Folner condition is a combinatorial way to define amenability of groups (and deterministic graphs); i.e., the existence of a finite set A such that the boundary of A is arbitrarily small compared to A. Modified versions of this condition are provided for countable Borel equivalence relations (based on equivariant graphings; see [12] and [26]) and for unimodular graphs (Definition 8.1 of [3]). In particular, A is required to be an element of some factor partition. The extension to the continuum setting is not straightforward. Here, we provide two Folner-type conditions as follows. For A ⊆ X and r ≥ 0, let ∂_r A := {y ∈ A : B_r(y) ⊈ A} denote the inner r-boundary of A.

∀r < ∞, ∀ϵ > 0, there exists an equivariant finite partition Π such that E[µ(∂_r Π(o)) / µ(Π(o))] < ϵ. (FO1)
The MTP easily implies that this condition is equivalent to (HF3) (see the proof of Lemma 5.23).

There exist equivariant nested finite partitions Π_n such that ∀r: µ(∂_r Π_n(o)) / µ(Π_n(o)) → 0 a.s. (FO2)

It is not clear to the author whether one can use the outer boundary or the full boundary in the above definitions.
5.6.5 Equivalence of the Definitions
The following is the main result of this subsection.
Theorem 5.21. For unimodular rmm spaces, the conditions (LM), (AM), (HF1), (HF2), (HF3), (FO1) and (FO2) are equivalent.
Definition 5.22 (Amenability).A unimodular rmm space is called amenable
if the equivalent conditions in Theorem 5.21 hold.
The proofs of the analogous results for countable groups and for countable Borel equivalence relations (Theorem 1 of [26]) rely heavily on discreteness. So, these proofs cannot be directly extended to unimodular rmm spaces. We will prove the above theorem by reducing it to the analogous result for a specific countable Borel equivalence relation. The reduction is based on the Palm theory developed in Section 6 and the proof is postponed to Subsection 7.3. In short, a countable Borel equivalence relation is constructed using the marked Poisson point process on X (Example 6.10). An invariant measure is constructed by the Palm distribution (Theorem 6.24). Then, the reduction is proved using the Voronoi tessellation and balancing transport kernels (Subsection 7.1).
Here, we only prove the implications that do not rely on Section 6:
Lemma 5.23. (HF2) ⇒(HF3) ⇒(HF1) and (HF3) ⇔(FO1) ⇔(FO2).
Proof. The implication (HF2) ⇒ (HF3) is clear.
(HF3) ⇒ (HF1). Let Π_n be the equivariant partition given by (HF3) for r := n and ϵ := 2^{−n}. Then, one can let Π′_n be the superposition of Π_n, Π_{n+1}, . . . and use the Borel–Cantelli lemma to deduce (HF1).
(HF3) ⇔ (FO1). By letting g(o, y) := (1/µ(Π(o))) 1{y ∈ Π(o)} 1{B_r(o) ⊈ Π(o)}, the MTP gives that P[B_r(o) ⊈ Π(o)] = E[µ(∂_r Π(o))/µ(Π(o))].
(FO1) ⇔ (FO2). The ⇒ part is obtained by taking a subsequence along which the almost sure convergence holds. The ⇐ part is obtained by the bounded convergence theorem, noting that ∂_r A ⊆ A.
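For the reader's convenience, the MTP computation behind (HF3) ⇔ (FO1) can be written out (a routine verification, not in the original text). With g as above,

∫_X g(o, y) dµ(y) = 1{B_r(o) ⊈ Π(o)},
∫_X g(y, o) dµ(y) = (1/µ(Π(o))) ∫_{Π(o)} 1{B_r(y) ⊈ Π(o)} dµ(y) = µ(∂_r Π(o))/µ(Π(o)),

and taking expectations and applying the MTP (2.1) gives the claimed identity.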
Remark 5.24. Theorem 1 of [26] assumes ergodicity, but the claim holds in the non-ergodic case as well. In fact, for each of the definitions, [X, o, µ] is amenable if and only if almost all of its ergodic components are amenable.
6 Palm Theory on Unimodular rmm Spaces
Recall from Subsection 4.1 that the Palm version of a stationary point process (or random measure) is defined and heuristically means re-rooting at a typical point of the point process. In this section, we generalize Palm theory to unimodular rmm spaces. Here, a unimodular rmm space [X, o, µ] is thought of as a generalization of the Euclidean or hyperbolic spaces. The Palm theory is intended for random measures Φ on X which are chosen equivariantly (e.g., the Poisson point process with intensity measure µ). The notion of equivariant random measures is discussed in Subsection 6.1 and the Palm theory is given in the subsequent subsections.
6.1 Additional Random Measures on a Unimodular Random rmm Space
For k ≥ 1, let M^k_∗ be the space of all tuples (X, o, µ_1, . . . , µ_k), where (X, o, µ_1) is a rmm space and µ_2, . . . , µ_k are measures on X. Likewise, let M^k_∗∗ be the space of all doubly-rooted tuples (X, o_1, o_2, µ_1, . . . , µ_k). By the discussion in Subsection 5.1 and using the results of [29], one can equip M^k_∗ and M^k_∗∗ with generalizations of the GHP metric such that they are Polish spaces.
Let [X, o, µ] be a unimodular rmm space. Roughly speaking, an equivariant random measure on X assigns an additional measure Φ on X (in every realization of [X, o, µ]), possibly using extra randomness, in a way that unimodularity is preserved. Intuitively, the law of Φ (given [X, o, µ]) should not depend on the root, should be isomorphism-invariant, and should satisfy some measurability condition. More precisely:
Definition 6.1. An equivariant random measure Φ is a map that assigns to every deterministic rmm space (X, o, µ) a random measure Φ_{(X,o,µ)} on X such that:
(i) for all (X, o, µ) and all y ∈ X, Φ_{(X,y,µ)} ∼ Φ_{(X,o,µ)},
(ii) if ρ : (X, o, µ) → (X′, o′, µ′) is an isomorphism, then ρ_∗Φ_{(X,o,µ)} ∼ Φ_{(X′,o′,µ′)},
(iii) for all Borel subsets A ⊆ M^2_∗, the map (X, o, µ) ↦ P[[X, o, µ, Φ_{(X,o,µ)}] ∈ A] is measurable.
In addition, if [X, o, µ] is a unimodular rmm space, an equivariant random measure on X is a map with the above conditions, relaxed to be defined on an invariant event with full probability. We denote Φ_{(X,o,µ)} simply by Φ for brevity.
As usual, if Φ_{(X,o,µ)} is a counting measure a.s. for all (X, o, µ), then Φ is called an equivariant (simple) point process, and if Φ_{(X,o,µ)}(·) ∈ Z a.s., it is called an equivariant (non-simple) point process.
By integrating the distribution of Φ over the distribution of [X, o, µ], one obtains a probability measure on M^2_∗ (similarly to (2.3) of [6]). This determines a random object of M^2_∗. By an abuse of notation, we denote the latter by [X, o, µ, Φ] and we use the same symbols P and E for its distribution. It is easy to deduce that [X, o, µ, Φ] is unimodular in the sense of Subsection 5.1 (see Lemma 6.3); i.e.,
Definition 6.2. A random element [Y, p, µ_1, µ_2] of M^2_∗ is unimodular if the following MTP holds for all measurable functions g : M^2_∗∗ → R_{≥0}:

E[∫_Y g(p, z) dµ_1(z)] = E[∫_Y g(z, p) dµ_1(z)].

Note that the integral is against µ_1, but g can depend on µ_2 as well.
One can also extend the above definition to define jointly equivariant pairs (or tuples) of random measures (Φ_{(X,o,µ)}, Ψ_{(X,o,µ)}), and the results of this section remain valid.
6.1.1 An Equivalent Definition
A simpler definition of equivariant random measures on X would be a unimodular tuple [Y, p, µ_1, µ_2] such that [Y, p, µ_1] has the same distribution as [X, o, µ]. Indeed, the two definitions are equivalent in the following sense.
Lemma 6.3. Let [X, o, µ] be a nontrivial unimodular rmm space. If Φ is an equivariant random measure on X, then [X, o, µ, Φ] is unimodular. Conversely, if [Y, p, µ_1, µ_2] is unimodular and [Y, p, µ_1] ∼ [X, o, µ], then there exists an equivariant random measure Φ such that [Y, p, µ_1, µ_2] ∼ [X, o, µ, Φ].
The proof of this lemma is based on the Palm theory developed in the next subsections and will be given in Subsection 7.2 (the lemma will not be used in this section). To prove the converse, one basically needs to consider the regular conditional distribution of [Y, p, µ_1, µ_2] w.r.t. [Y, p, µ_1]. However, a difficult step is proving that a version of the conditional law exists which does not depend on the root (property (i) in Definition 6.1). This will be proved using the Palm theory and invariant disintegration [27].
Remark 6.4. The above alternative definition of equivariant measures is useful for weak convergence of equivariant random measures and for tightness. See the following definition and corollary. It also enables us to regard µ as an equivariant measure on (the Palm version of) [X, o, Φ], which will be explained in Subsection 6.3.1. In contrast, Definition 6.1 is useful for explicit constructions and also for dealing with couplings of two equivariant random measures; e.g., to define the independent coupling (conditionally on [X, o, µ]) of two equivariant random measures (Example 6.11).
Definition 6.5. Two equivariant random measures Φ and Ψ on X are equivalent if [X, o, µ, Φ] ∼ [X, o, µ, Ψ] (equivalently, if on almost every realization of [X, o, µ], one has Φ(·) ∼ Ψ(·)). Also, we say that Φ_n converges to Φ if [X, o, µ, Φ_n] converges weakly to [X, o, µ, Φ].
Corollary 6.6. Let b : M∗ × N → R be a lower semi-continuous function (e.g., (X, o, µ, n) ↦ µ(B_n(o))). Then, the set of equivariant random measures Φ on X such that ∀n: Φ(B_n(o)) ≤ b([X, o, µ], n) a.s. is compact.
Remark 6.7. Lemma 6.3 can be generalized to the case where Φ is any other type of additional random structure on X, discussed in Subsection 5.1, as long as the Polishness property holds. The same proof works in the general case, but for simplicity, we provide the proof for random measures only.
6.1.2 Examples
For basic examples of equivariant random measures, one can mention Φ = 0, Φ = µ, Φ = bµ (see (5.2)), or more generally, any factor measure; i.e., the case where Φ_{(X,o,µ)} is a deterministic function of (X, µ) satisfying the assumptions in Definition 6.1. The following are further examples.
Example 6.8 (Stationary Random Measures). When [X, o, µ] := [R^d, 0, Leb], every stationary point process or random measure on R^d is an equivariant random measure on X in the sense of Subsection 6.1.1. However, in order to have the conditions in Definition 6.1, one should assume that it is isometry-invariant (or just apply a random isometry fixing 0). By the modification mentioned in Remark 4.2, one can say that stationarity is equivalent to being an equivariant random measure on R^d.
Example 6.9 (Intensity Measure). If Φ is an equivariant random measure, then the intensity measure of Φ, defined by Ψ_{(X,o,µ)}(·) := E[Φ_{(X,o,µ)}(·)], is also equivariant. In fact, the intensity measure is a factor measure.
Example 6.10 (Poisson Point Process). Let Φ be an equivariant random measure. The Poisson point process with intensity measure Φ is an equivariant random measure (defined by considering, in every realization of Φ_{(X,o,µ)}, the classical definition of the Poisson point process with intensity measure Φ_{(X,o,µ)}). In addition, Φ and the Poisson point process are jointly equivariant. Note that if Φ has atoms, then the Poisson point process has multiple points and is not a simple point process.
One can prove that if [X, o, µ] is ergodic and µ(X) = ∞ a.s., then [X, o, µ, Φ] is also ergodic (this fails if 0 < µ(X) < ∞). The proof is similar to Lemma 4 of [30] and is skipped.
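To recall the classical definition being used (standard background, not specific to this paper): conditionally on Φ, a point process Ψ is Poisson with intensity measure Φ if, for disjoint Borel sets A_1, . . . , A_m of finite Φ-measure, the counts Ψ(A_1), . . . , Ψ(A_m) are independent with

P[Ψ(A_i) = j | Φ] = e^{−Φ(A_i)} Φ(A_i)^j / j!,    j ≥ 0,

so that, in particular, the void probabilities are P[Ψ(A) = 0 | Φ] = e^{−Φ(A)}.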
Example 6.11 (Independent Coupling). Let each of Φ and Ψ be an equivariant random measure on X. Using the definition of Subsection 6.1.1, it is not trivial to define a coupling of Φ and Ψ, but it is easy using Definition 6.1: for every deterministic (X, o, µ), consider an independent coupling of Φ_{(X,o,µ)} and Ψ_{(X,o,µ)}. Then, (Φ, Ψ) becomes jointly equivariant and is called the independent coupling (conditionally on [X, o, µ]) of Φ and Ψ.
6.2 Palm Distribution and Intensity
Let [X, o, µ] be a nontrivial unimodular rmm space and let Φ be an equivariant random measure on X (in the sense of either Definition 6.1 or Subsection 6.1.1). Inspired by the analogous notions in stochastic geometry (Subsection 4.1), we are going to define the Palm distribution and the intensity. The approach (4.1) does not work here since the base space is not fixed. We will extend two other approaches, using the Campbell measure and tessellations.
The Campbell measure C_Φ is the measure on M^2_∗∗ defined by

C_Φ(A) := E[∫_X 1_A(o, y) dΦ(y)],

where, as before, we use the abbreviation 1_A(o, y) for 1_A(X, o, y, µ, Φ). It is straightforward to see that C_Φ is a sigma-finite measure. The Palm distribution will be obtained by a disintegration of the Campbell measure as follows.
Theorem 6.12. There exists a unique sigma-finite measure Q_Φ on M^2_∗ such that for all measurable functions g : M^2_∗∗ → R_{≥0},

E[∫_X g(o, y) dΦ(y)] = ∫ g dC_Φ = ∫_{M^2_∗} ∫_X g(y, o) dµ(y) dQ_Φ([X, o, µ, φ]). (6.1)

Note that the left hand side is just the definition of C_Φ.
Definition 6.13 (Intensity and Palm Distribution). The intensity λ_Φ of Φ (w.r.t. µ) is the total mass of Q_Φ. If 0 < λ_Φ < ∞, then one can normalize Q_Φ to obtain a probability measure P_Φ := (1/λ_Φ) Q_Φ on M^2_∗, which is called the Palm distribution of Φ.
By an abuse of notation, we use the same symbol [X, o, µ, Φ] when dealing with P_Φ; e.g., we use formulas like P_Φ[[X, o, µ, Φ] ∈ A] and E_Φ[g(o)], keeping in mind that the probability measure has changed.
With this notation, we can rewrite (6.1) as follows:

E[∫_X g(o, y) dΦ(y)] = λ_Φ E_Φ[∫_X g(y, o) dµ(y)]. (6.2)

Example 6.8 shows that the above definition generalizes the Palm distribution of stationary random measures on R^d. In addition, (6.2) is a generalization of the refined Campbell theorem (see [33] or [38]).
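As a quick consistency check (a one-line verification, not in the original text), taking g := h in (6.2), where h is a transport function with h^−(·) = 1, recovers (6.3) below:

E[∫_X h(o, y) dΦ(y)] = λ_Φ E_Φ[∫_X h(y, o) dµ(y)] = λ_Φ E_Φ[h^−(o)] = λ_Φ.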
The intuition behind the Palm distribution is that, under P_Φ, the root is a typical point of Φ. Equation (6.2) means that, if µ (resp. Φ) is interpreted as a measure on a set of senders (resp. receivers), then the expected mass sent from a typical sender is equal to λ_Φ times the expected mass received by a typical receiver.
We will prove Theorem 6.12 by constructing the Palm distribution directly. The construction (4.1) does not work since the underlying space X is not deterministic and one cannot fix a subset B. We replace it by a transport function h such that h^−(·) = 1, as follows. This extends Mecke's construction in (2.8) and Satz 2.3 of [37]. It also extends the construction of a typical cell for equivariant tessellations of stationary point processes.
Theorem 6.14 (Construction of Palm). Let h : M∗∗ → R_{≥0} be a transport function such that ∀y ∈ X: h^−(y) = 1 a.s. Then, for every equivariant random measure Φ, the intensity of Φ and the measure Q_Φ are obtained as follows and satisfy (6.1):

λ_Φ = E[h^+_Φ(o)] := E[∫_X h(o, z) dΦ(z)], (6.3)
Q_Φ(A) = E[∫_X 1_A(X, z, µ, Φ) h(o, z) dΦ(z)]. (6.4)

So, if 0 < λ_Φ < ∞, the Palm distribution is obtained by biasing by h^+_Φ(o), and then re-rooting to a point chosen with law proportional to h(o, ·)Φ; i.e.,

P_Φ(A) = (1/λ_Φ) E[∫_X 1_A(X, z, µ, Φ) h(o, z) dΦ(z)]. (6.5)

Note that such a transport function h exists by Lemma 3.10.
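As a sanity check (an illustration under the identification of Example 6.8, not part of the original text): on [R^d, 0, Leb] one may take h(x, y) := 1{d(x, y) ≤ 1}/Leb(B_1), which satisfies h^−(·) = 1. Then (6.3) gives

λ_Φ = E[∫ h(0, z) dΦ(z)] = E[Φ(B_1(0))]/Leb(B_1),

the classical intensity of a stationary random measure Φ, and (6.5) becomes a standard local construction of its Palm distribution.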
Proof. It is enough to prove that Q_Φ satisfies (6.1). Let g be a transport function on M^2_∗∗. The right hand side of (6.1) equals

∫_{M^2_∗} g^−(o) dQ_Φ([X, o, µ, φ]) = E[∫_X g^−(z) h(o, z) dΦ(z)]
= E[∫_X ∫_X g(y, z) h(o, z) dΦ(z) dµ(y)]
= E[∫_X ∫_X g(o, z) h(y, z) dΦ(z) dµ(y)]
= E[∫_X g(o, z) dΦ(z)],

where the first equality is by the definition of Q_Φ, the third one is by the MTP, and the last one is because h^−(z) = 1. This proves (6.1).
Proof of Theorem 6.12. The existence is proved in Theorem 6.14. For uniqueness, assume Q_Φ is a measure satisfying (6.1). Let h be a transport function such that h^−(·) = 1 a.s. For an arbitrary event A ⊆ M^2_∗, let g(o, z) := h(o, z) 1_A(z). Inserting g into (6.1) gives Q_Φ(A) = E[∫ 1_A(y) h(o, y) dΦ(y)]; i.e., Q_Φ is the measure given in Theorem 6.14.
As a first application of the Palm calculus, we prove the following extension of Lemma 3.4.
Proposition 6.15. Let Φ be an equivariant random measure on X. If µ(X) = ∞ a.s., then Φ(X) ∈ {0, ∞} a.s.
Proof. Let A be the event that µ(X) = ∞ and 0 < Φ(X) < ∞, and assume P[A] > 0. Since A is an invariant event, (6.4) implies that Q_Φ(A) > 0. On A, let g(u, v) := Φ(X)^{−1}. Then, g^+_Φ(·) = 1 and g^−(·) = ∞ on A. This contradicts (6.1).
6.3 Properties of Palm
6.3.1 Unimodularity of the Palm Version
The Palm distribution of an equivariant random measure is generally not unimodular. However, it satisfies the following mass transport principle, similarly to the case of stationary random measures (Subsection 4.1).
Definition 6.16. A random element [Y, p, µ_1, µ_2] of M^2_∗ is unimodular with respect to µ_2 if the following MTP holds for all measurable functions g : M^2_∗∗ → R_{≥0}:

E[∫_Y g(p, z) dµ_2(z)] = E[∫_Y g(z, p) dµ_2(z)].

This is equivalent to the unimodularity of [Y, p, µ_2, µ_1], which is obtained by swapping the two measures.
Theorem 6.17. Let [X, o, µ] be a unimodular rmm space and Φ be an equivariant random measure on X with positive finite intensity. Then, under the Palm distribution P_Φ, [X, o, µ, Φ] is unimodular w.r.t. Φ. In other words, [X, o, Φ, µ] is unimodular under the Palm distribution.
Proof. Let g be a transport function and let h be as in Theorem 6.14. By (6.5),

E_Φ[g^+_Φ(o)] = (1/λ_Φ) E[∫_X g^+_Φ(y) h(o, y) dΦ(y)]
= (1/λ_Φ) E[∫_X ∫_X g(y, z) h(o, y) dΦ(y) dΦ(z)]
= E_Φ[∫_X ∫_X g(y, o) h(z, y) dΦ(y) dµ(z)]
= E_Φ[∫_X g(y, o) dΦ(y)],

where in the third equality we have swapped o and z by the refined Campbell formula (6.2), which cancels the factor 1/λ_Φ, and the last equality holds since h^−(y) = 1. This proves the claim.
6.3.2 Exchange Formula
Neveu's exchange formula (see Theorem 3.4.5 of [38]) is a form of the MTP between two jointly-stationary random measures on R^d. The refined Campbell formula (6.2) can be thought of as an exchange formula between µ and Φ. More generally, assume (Φ, Ψ) is a pair of random measures on [X, o, µ] which are jointly equivariant. Assume the intensities λ_Φ and λ_Ψ are positive and finite and consider the Palm distributions P_Φ and P_Ψ.
Proposition 6.18 (Exchange Formula). For jointly-equivariant random measures Φ and Ψ on X as above, and for all transport functions g : M^3_∗∗ → R_{≥0},

λ_Φ E_Φ[∫_X g(o, y) dΨ(y)] = λ_Ψ E_Ψ[∫_X g(y, o) dΦ(y)], (6.6)

where g(u, v) abbreviates g(X, u, v, µ, Φ, Ψ) as usual.
Proof. Let h be as in Theorem 6.14. By (6.5), the left hand side is equal to

E[∫∫ h(o, z) g(z, y) dΨ(y) dΦ(z)] = E[∫ f(o, y) dΨ(y)],

where f(u, v) := ∫_X h(u, z) g(z, v) dΦ(z). By (6.2), the last term is equal to

λ_Ψ E_Ψ[∫ f(y, o) dµ(y)] = λ_Ψ E_Ψ[∫∫ h(y, z) g(z, o) dΦ(z) dµ(y)].

Since h^−(z) = 1, the last term is equal to the right hand side of (6.6) and the claim is proved.
Another interpretation of the exchange formula is as follows. By Theorem 6.17, under the Palm distribution of Φ, [X, o, µ, Φ, Ψ] is unimodular w.r.t. Φ. Now, one may regard Ψ as a random measure on the latter and think of µ as a decoration. Then, the exchange formula (6.6) turns into the refined Campbell formula for Ψ with respect to Φ. More precisely, one can deduce the following.
Corollary 6.19. Under the Palm distribution of Φ, and thinking of Φ as the base measure on X, the intensity of Ψ would be λ_Ψ/λ_Φ and the Palm distribution of Ψ would be identical to P_Ψ.
6.3.3 Palm Inversion = Palm
Palm inversion refers to the construction of the stationary distribution from the Palm distribution. Similarly, we wish to reconstruct the distribution of [X, o, µ, Φ] from the Palm distribution P_Φ.
By Theorem 6.17, [X, o, Φ, µ] is unimodular under P_Φ. Now, µ can be regarded as a random measure (in the sense of Subsection 6.1.1) on the Palm version of [X, o, Φ]. Therefore, it makes sense to speak of the Palm distribution of µ, namely P′. Using the symmetry of µ and Φ in the refined Campbell formula (6.2), it is straightforward to deduce that P′ is equal to the distribution of [X, o, µ, Φ] (see Corollary 6.19). An explicit construction of P′ can also be provided by Theorem 6.14.
The above discussion shows that the generalization of Palm theory in this section unifies Palm and Palm inversion as well. In particular, if Φ_0 is the Palm version of a stationary point process in R^d, the stationary version is obtained by considering the Palm distribution of the Lebesgue measure w.r.t. Φ_0. Indeed, for Palm inversion, one desires to move the origin to a typical point of the Euclidean space.
6.3.4 Stationary Distribution of Random Walks
In Theorem 5.2, we considered random walks on [X, o, µ] such that the Markovian kernel preserves a measure of the form bµ a.s. If the Markovian kernel has a stationary measure which is not absolutely continuous w.r.t. µ, one can extend that result as follows.
Let Φ be an equivariant random measure with positive and finite intensity on a unimodular rmm space [X, o, µ]. Let k be a Markovian kernel (which might depend on Φ as well) and let (x_n)_n be the random walk with kernel k as in Theorem 5.2. Define a shift operator S similarly to (5.1). Since [X, o, µ, Φ, (x_n)_n] is unimodular, one can define the Palm distribution of Φ on it, which we call Q.
Proposition 6.20. In the above setting, if in almost every sample [X, o, µ, φ] of [X, o, µ, Φ], φ is a stationary (resp. reversible) measure for the Markovian kernel, then the Palm distribution Q of Φ is a stationary (resp. reversible) measure for the shift operator S.
Proof. By Theorem 6.17, the claim is reduced to Theorem 5.2.
6.4 Examples
We already mentioned that the Palm distribution defined in this section generalizes the case of stationary point processes and random measures (by Example 6.8). The following are further examples of the Palm distribution.
Example 6.21 (Conditioning). Let S be a factor subset (Definition 3.3) and let Φ := µ_S. Then, the intensity of Φ is P[o ∈ S] and the Palm distribution is obtained by conditioning on o ∈ S.
Example 6.22 (Biasing). Let b : M∗ → R_{≥0} be an initial bias and let Φ := bµ as defined in (5.2). Then, the intensity of Φ is E[b(o)] and the Palm distribution of Φ is obtained by simply biasing the probability measure by b(o) (if 0 < E[b(o)] < ∞). In particular, Theorem 6.17 implies that, after biasing by b, [X, o, bµ] is unimodular, which was already shown in Example 2.6. This fact can be used to reduce some results to the case without initial biasing (e.g., Theorem 5.2).
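For the record (a routine verification, not in the original text), both intensity claims follow from (6.3), the MTP and h^−(·) = 1. For Φ = bµ,

λ_Φ = E[∫_X h(o, z) b(z) dµ(z)] = E[∫_X h(z, o) b(o) dµ(z)] = E[b(o) h^−(o)] = E[b(o)],

and the same computation with b := 1_S gives the intensity P[o ∈ S] in Example 6.21.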
Example 6.23 (Extending a Unimodular Graph). In many examples in the literature, a unimodular graph is constructed by adding new vertices and edges to a given unimodular graph, and then applying a biasing and a root-change (see [30] for various examples in the literature). Here, we show that all these examples are instances of the Palm distribution developed in this work. As an example, the unimodularization of the dual of a unimodular planar graph (Example 9.6 of [3]) is reduced to the Palm distribution of the counting measure on the set of faces (see below for the details).
We use the most general construction of this form, mentioned in Section 5 of [30]. In short, assume [G, o] is a random rooted graph that is not unimodular, but the MTP holds on a subset S of G (see Definition 10 of [30]). In other words, G is obtained by adding vertices and edges to [S, o]. Let µ and Φ be the counting measures of S and G respectively. In the setting of this section, [G, o, µ] is unimodular (but not [G, o]) and Φ can be regarded as an equivariant random measure on [G, o, µ]. Therefore, one can define the Palm distribution of Φ using Definition 6.13 (if the intensity is finite). Then, Theorem 6.17 implies that, under the Palm distribution, [G, o] is a unimodular graph, as desired. In this setting, the general construction in Theorem 5 of [30] (which covers the similar examples in the literature) is reduced to the explicit construction of the Palm distribution given in Theorem 6.14.
For a continuum example, it can be seen that the product of a unimodular rmm space [X, o, µ] with an arbitrary compact measured metric space can be made unimodular (Example 2.5), and this is an instance of the Palm distribution. See the proof of Theorem 5.21 in Subsection 7.3 for more explanation.
The final example is the Poisson point process, as follows.
Theorem 6.24 (Palm of Poisson). Let c > 0 and let Φ be the Poisson point process on X with intensity measure cµ (Example 6.10). Then, the Palm version has the same distribution as [X, o, µ, Φ + δ_o].
Note that Φ + δ_o is different from Φ ∪ {o} if µ has atoms. This result generalizes Mecke's theorem for the ordinary Poisson point process (see Theorem 3.3.5 of [38]). We conjecture that this property characterizes the Poisson point process (which would generalize Slivnyak's theorem), but we do not discuss this here.
Proof. By (6.3), it is straightforward to show that λ_Φ = c. We claim that for every measurable function f : M^2_∗∗ → R_{≥0}, one has

E[∑_{y∈Φ} f(X, o, y, µ, Φ)] = c E[∫_X f(X, o, y, µ, Φ + δ_y) dµ(y)], (6.7)

which generalizes Mecke's theorem (see Theorem 3.2.5 of [38]). Assuming this, let B ⊆ M^2_∗ be any event. By (6.5), (6.7), the MTP and the fact h^−(o) = 1, respectively,

P_Φ[B] = (1/c) E[∑_{z∈Φ} 1_B(X, z, µ, Φ) h(o, z)]
= E[∫_X 1_B(X, z, µ, Φ + δ_z) h(o, z) dµ(z)]
= E[∫_X 1_B(X, o, µ, Φ + δ_o) h(z, o) dµ(z)]
= P[[X, o, µ, Φ + δ_o] ∈ B]

and the theorem is proved. To prove (6.7), for simplicity, we assume that (supp(µ), µ) has no automorphism a.s. (otherwise, one may add extra randomness like an independent Poisson rain to break the automorphisms). In this case, it is enough to prove the claim when the function f is of the form f = g(X, o, y, µ) 1{Φ(S) = k}, where k ∈ N ∪ {0}, S := S(X, o, µ) := {y ∈ X : (X, o, y, µ) ∈ A} and A ⊆ M∗∗ is measurable. The left hand side of (6.7) is

E[(gΦ)(X) 1{Φ(S)=k}] = E[(gΦ)(X \ S) 1{Φ(S)=k}] + E[(gΦ)(S) 1{Φ(S)=k}] =: E[a_1] + E[a_2].
Conditioned on [X, o, µ], the random variables Φ(S) and (gΦ)(X \ S) are independent. In addition, by Mecke's formula, E[(gΦ)(X \ S) | [X, o, µ]] = c(gµ)(X \ S). Hence,

E[a_1] = c E[(gµ)(X \ S) 1{Φ(S)=k}] = c E[∫_{X\S} f(X, o, y, µ, Φ + δ_y) dµ(y)].

For the second term, conditioned on [X, o, µ] and on Φ(S) = k, Φ restricted to S is distributed as a set of k random points with distribution proportional to µ_S. So, E[a_2 | [X, o, µ], Φ(S) = k] = k (gµ)(S)/µ(S). By the formula for the Poisson distribution with parameter cµ(S), one obtains E[a_2 | [X, o, µ]] = c (gµ)(S) P[Φ(S) = k − 1 | [X, o, µ]]. Thus,

E[a_2] = c E[(gµ)(S) 1{Φ(S)=k−1}] = c E[∫_S f(X, o, y, µ, Φ + δ_y) dµ(y)].

This proves (6.7) and the theorem is proved.
7 Some Applications of Palm Theory
In Subsections 7.2 and 7.3, we prove equivariant disintegration and the amenability theorem (Lemma 6.3 and Theorem 5.21) using the Palm theory developed in Section 6. The proofs proceed by constructing a countable Borel equivalence relation using the Poisson point process, and then constructing an invariant measure using the Palm theory. Then, the results on Borel equivalence relations are used to prove the claims. Before that, we study the existence of balancing transport kernels in Subsection 7.1, which will be used in Subsection 7.3.
7.1 Balancing Transport Kernels
Let Φ and Ψ be jointly-equivariant random measures on a unimodular rmm space [X, o, µ] with positive and finite intensities. A balancing transport kernel from Φ to Ψ is a Markovian kernel k on X (in the sense of Subsection 3.3, but possibly depending on Φ and Ψ as well) that transports Φ to Ψ; i.e., ∫_X k(x, ·) dΦ(x) = Ψ(·) a.s. This extends the notion of invariant transports for stationary random measures [33] and fair tessellations for point processes [25]. On the Euclidean space, using the shift-coupling result of Thorisson [39], a necessary and sufficient condition for the existence of invariant transports is provided in [33]. An analogous result is provided for unimodular graphs in [30]. Here, we extend these results to unimodular rmm spaces.
An event A ⊆ M^3_∗ is called invariant if it does not depend on the root. Let I denote the invariant sigma-field. The sample intensity of Φ is defined by E[h^+_Φ(o) | I], where h is the function in (6.3).
Theorem 7.1. If [X, o, µ, Φ, Ψ] is ergodic, then the existence of balancing transport kernels is equivalent to the condition that Φ and Ψ have the same intensities. In the non-ergodic case, the condition is that the sample intensities of Φ and Ψ are equal a.s.
The latter is equivalent to the condition that, after conditioning on any invariant event, Φ and Ψ have the same intensities. It is also equivalent to the condition that P_Φ and P_Ψ agree on I.
This theorem is an extension of Theorem 5.1 of [33]. To prove it, one might try to extend the shift-coupling result of [39] (as done in [30] for unimodular graphs), but we give another proof by extending the constructive proof of [21]. The latter is an extension of [23] for point processes, which is an infinite version of the Gale–Shapley stable marriage algorithm.
First, note that the existence of a balancing transport kernel is equivalent to the existence of a balancing transport density; i.e., a measurable function K : M^3_∗∗ → R_{≥0} such that ∀x, y ∈ X: ∫_X K(x, ·) dΨ = ∫_X K(·, y) dΦ = 1 on an invariant event with full probability. To show this, if k(x, ·) is absolutely continuous w.r.t. Ψ for all x, it is enough to consider the Radon–Nikodym derivative equivariantly (as in Lemma 3.9) and to modify it on a null event. In the singular case, it is enough to smooth k by composing it with another kernel that preserves Ψ, given by Lemma 3.10.
Sketch of the Proof of Theorem 7.1. For simplicity, we assume ergodicity. Assume a balancing transport density K exists. Under P_Φ, by letting h := K in (6.3), one obtains that the intensity of Ψ is equal to 1. So, the exchange formula in Corollary 6.19 implies that λ_Φ = λ_Ψ. For the converse, it is straightforward to extend Algorithm 4.4 of [21] to construct a density K (the stable constrained density). Assuming λ_Φ = λ_Ψ, one can use the Palm theory developed in Section 6 and mimic the proof of Theorem 4.8 of [21] to prove that K is balancing. The proof is not short, but the extension is straightforward and is skipped for brevity.
Theorem 7.1 allows us to prove the following result, which generalizes a result of [39] (see the end of Section 1 of [39]).
Theorem 7.2. Let Φ be an equivariant random measure on a unimodular rmm space [X, o, µ]. Then, the following are equivalent:
(i) The Palm distribution of Φ is obtained by a random re-rooting of [X, o, µ, Φ].
(ii) There exists a balancing transport kernel between cµ and Φ for some c.
(iii) The sample intensity of Φ is constant.
In particular, these conditions hold if [X, o, µ, Φ] is ergodic.
Proof. The equivalence (iii) ⇔ (ii) is implied by Theorem 7.1.
(i) ⇒ (iii). It is straightforward to deduce from (i) that P_Φ(A) = P[A] for every invariant event A. This implies (iii).
(ii) ⇒ (i). Let K be such a balancing transport density. In this case, Theorem 6.14 applied to h := cK shows that the Palm version is obtained by re-rooting according to the kernel K(o, ·)Φ (and there is no biasing since h^+_Φ ≡ c is constant).
Example 7.3. Assume Ψ is the Poisson point process with intensity measure Φ. Then, if Φ(X) = ∞ a.s., the ergodicity mentioned in Example 6.10 implies that the conditions of Theorem 7.1 are satisfied (if [X, o, µ, Φ] is not ergodic, use its ergodic decomposition). So, there exists a balancing transport density. In addition, in the case Φ = µ, Theorems 7.2 and 6.24 imply that there exists a random re-rooting that is equivalent (in distribution) to adding a point at the origin to Ψ. This is an extension of extra head schemes [25].
7.2 Proof of Equivariant Disintegration
In this subsection, we will prove Lemma 6.3 in the following steps.
Lemma 7.4. The claim of Lemma 6.3 holds if µ is a counting measure a.s. and every automorphism of (X, µ) fixes every atom of µ a.s.
In this case, we will use results on countable Borel equivalence relations to
deduce the claim from invariant disintegration for group actions.
Proof. Let A be the event that µ is a counting measure and every automorphism of (X, µ) fixes every atom of µ. Consider the following countable equivalence relation on M∗: outside A, or if o is not an atom of µ, [X, o, µ] is equivalent only to itself; on A, if o_1 and o_2 are atoms of µ, let [X, o_1, µ] be equivalent to [X, o_2, µ]. Then, this is a Borel equivalence relation on M∗ and every equivalence class is countable. Therefore, by Theorem 1 of [17], it is generated by the action of some countable group G on M∗. Note that, on A, the map y ↦ [X, y, µ] maps the atoms of µ bijectively to an equivalence class. So, on A, G acts on the set of atoms of µ. Let gy denote this action if g ∈ G and y is an atom of µ (outside A, or if y is not an atom, one has gy = y). This property allows us to extend the action of G to M^2_∗ as follows (the assumption on automorphisms is essential for this goal): given [Y, p, µ_1, µ_2] ∈ M^2_∗ and g ∈ G, let g[Y, p, µ_1, µ_2] := [Y, gp, µ_1, µ_2], where gp is defined by forgetting µ_2 and using the above definition.
Let Q and Q̃ be the distributions of [X, o, µ] and [Y, p, µ_1, µ_2] respectively. The two actions of G on M∗ and M^2_∗ are compatible with the projection π : M^2_∗ → M∗, which is defined by forgetting the second measure. In addition, since every g ∈ G acts bijectively on the set of atoms of µ and µ_1, the distributions Q and Q̃ are invariant under these actions (by Theorem 3.8). Hence, we can use Kallenberg's invariant disintegration theorem (Theorem 3.5 of [27]). This gives a disintegration kernel k from M∗ to M^2_∗ such that for all events B ⊆ M^2_∗,

Q̃(B) = ∫_{M∗} k([X, o, µ], B) dQ([X, o, µ]) (7.1)

and k(g[X, o, µ], gB) = k([X, o, µ], B) for all g ∈ G. This implies that k([X, o, µ], ·) is concentrated on {[X, o, µ, φ] : φ ∈ M(X)}, where M(X) is the set of boundedly-finite measures on X. Assuming [X, o, µ] ∈ A, by applying a random element of the automorphism group of (X, µ) (which is a compact group since the atoms are fixed points) to k([X, o, µ], ·), one obtains an automorphism-invariant probability measure on M(X), which we call k′(o, ·). Invariance under the action of G implies that k′(o_1, ·) = k′(o_2, ·) for all atoms o_1 and o_2 of µ. So, for all y ∈ X, one can let k′(y, ·) := k′(o, ·), where o is an arbitrary atom of µ. Now, by choosing Φ_{(X,y,µ)} randomly with law k′(y, ·), one obtains an equivariant random measure which satisfies all of the assumptions of Definition 6.1. In addition, (7.1) implies that [X, o, µ, Φ] has law Q̃ and the claim is proved.
Lemma 7.5. The claim of Lemma 6.3 holds if there exists a factor point process S (depending only on X and µ) with finite intensity such that every automorphism of (X, µ) fixes every element of S a.s. and S is nonempty a.s.
Proof. Let Q and Q̃ be the distributions of [X, o, µ, S] and [Y, p, µ_1, µ_2, S], as random elements of M^2_∗ and M^3_∗ respectively, where in the latter, S = S(Y, µ_1). The Palm distributions of S in these two unimodular objects give probability measures Q_0 and Q̃_0 respectively. By Theorem 6.17, under Q_0, [X, o, µ, S] is unimodular with respect to the counting measure on S (Definition 6.16), and the same holds for [Y, p, µ_1, µ_2, S]. In addition, it is straightforward to show that, under Q̃_0, [Y, p, µ_1, S] has the same distribution as Q_0. Now, we can use Lemma 7.4 to find an equivariant disintegration of Q̃_0 w.r.t. Q_0 (the same argument works if the counting measure of S is regarded as the base measure and µ and µ_1 are thought of as decorations). This gives an equivariant random measure Φ (whose distribution depends on (X, µ, S)) such that

Q̃_0(B) = ∫ P[[X, o, µ, Φ, S] ∈ B] dQ_0([X, o, µ, S])

for all events B. By using Palm inversion to reconstruct Q and Q̃ (Subsection 6.3.3), it is straightforward to deduce that the above equation holds if Q̃_0 and Q_0 are replaced by Q̃ and Q respectively. This shows that Φ is the desired equivariant random measure and the claim is proved.
We are now ready to prove Lemma 6.3.
Proof of Lemma 6.3. First, assume that µ(X) = ∞ a.s. We start by adding a marked Poisson point process to ensure that the assumptions of Lemma 7.5 hold. Let Ψ = Ψ(X, µ) be the Poisson point process on X with intensity measure µ and equip the points of Ψ with i.i.d. marks in [0, 1] chosen with the uniform distribution. By a suitable extension of the GHP metric discussed in Subsection 5.1, one can think of [X, o, µ, Ψ] and [Y, p, µ_1, µ_2, Ψ] as unimodular rmm spaces equipped with additional structures, where in the latter, Ψ = Ψ(Y, µ_1). In the former, the probability space is the set M′_∗ of tuples [X, o, µ, ψ], where (X, o, µ) is a rmm space and ψ is a marked measure on X; i.e., a boundedly-finite measure on X × [0, 1].
Note that the support of Ψ is a factor point process (depending on µ and Ψ) such that every point of this subset is fixed under every automorphism of (X, µ, Ψ) a.s. (due to the i.i.d. marks). In addition, the support is not empty a.s. since µ(X) = ∞ a.s. So, we can use an argument similar to Lemma 7.5 to obtain an equivariant random measure Φ such that [X, o, µ, Φ, Ψ] has the same distribution as [Y, p, µ_1, µ_2, Ψ].
Note that Φ = Φ(X, µ, ψ) is defined for deterministic spaces (X, µ, ψ) and not for spaces of the form (X, µ). To resolve this issue, one can take an expectation w.r.t. Ψ as follows. Let (X, o, µ) be a realization of [X, o, µ]. Given a realization ψ of the marked Poisson point process on (X, µ), let α = α(X, µ, ψ) be the distribution of Φ = Φ(X, µ, ψ). Here, α is a probability measure on M(X). Let β(X, µ) := E[α(X, µ, Ψ)], where the expectation is w.r.t. the randomness of Ψ. It is straightforward to see that β is invariant under the automorphisms of (X, µ). Now, by choosing a random measure Φ′ on X with distribution β(X, µ), one obtains an equivariant random measure such that [X, o, µ, Φ′] has the same distribution as [Y, p, µ_1, µ_2], as desired. So the claim is proved.
Finally, assume µ(X) < ∞ with positive probability. Conditioned on the event µ(X) = ∞, the above arguments provide the equivariant disintegration. Conditioned on the event µ(X) < ∞, the situation is easier. Under this conditioning, [X, µ] and [Y, µ_1, µ_2] make sense as random non-rooted measured metric spaces and the corresponding probability spaces are Polish (under suitable topologies, which are skipped here). So, it is enough to consider the regular conditional distribution of [Y, µ_1, µ_2] w.r.t. [Y, µ_1], and then apply a random automorphism of (Y, µ_1) to obtain an equivariant random measure. This automatically does not depend on the root and has the desired properties.
7.3 Proof of the Amenability Theorem
In this subsection, we prove that the different notions of amenability defined in
Subsection 5.6 are equivalent (Theorem 5.21).
Let [X,o,µ] be a unimodular rmm space. First, we assume that µdoes not
have atoms a.s. and µ(X) = ∞a.s. As in Subsection 7.2, let Φ be the Poisson
point process on Xwith intensity measure µ(Example 6.10), equipped with
i.i.d. marks in [0,1] chosen with the uniform distribution. Then, Φ is infinite,
there are no multiple points and Φ has no nontrivial automorphism a.s. Let
PΦbe the Palm distribution of Φ. By Theorem 6.17, under PΦ, [X,o,µ,Φ] is
unimodular w.r.t. Φ.
The above definitions give a countable equivalence relation as follows. The
45
probability measure PΦis concentrated on the subset M′′
∗⊆ M′
∗of tuples
[X, o, µ, φ] in which µhas no atom, µ(X) = ∞,φis the counting measure on an
infinite discrete subset of X,o∈φand the marks of the points of φare distinct.
Let Rbe the equivalence relation on M′
∗defined as follows: Outside M′′
∗,
everything is equivalent to only itself. If [X, o, µ, φ]∈ M′′
∗, then it is equivalent
to [X, y, µ, φ] for all y∈φ. Then, Ris a countable equivalence relation on the
Polish space M′
∗. In addition, unimodularity of PΦis equivalent to invariance
of PΦunder R(see Subsection 4.6 and note that having no automorphism is
important for this equivalence).
Lemma 7.6. If µ has infinite total mass and no atoms a.s., then each of the conditions of the existence of local means, the existence of approximate means, and hyperfiniteness is equivalent to the analogous condition for the countable measured equivalence relation (M′∗, R, PΦ) defined above.
Proof. An essential ingredient of the proof is the Voronoi tessellation: for every [X, o, µ, φ] ∈ M′′∗, let τ(o) be the closest point of φ to o. If the closest point is not unique, choose the one with the smallest mark. For y ∈ φ, τ−1(y) is the Voronoi cell of y. Another ingredient is the balancing transport density constructed in Subsection 7.1 (Example 7.3). This gives an equivariant function k(x, y) for x ∈ X and y ∈ Φ such that ∑_{z∈Φ} k(x, z) = 1 and ∫ k(z, y) dµ(z) = 1 a.s. (for all x and y). From now on, we always assume [X, o, µ, φ] ∈ M′′∗ and that the above balancing property of k holds. In the next paragraphs, we prove each equivalence claimed in the lemma.
Local mean. Given f ∈ L∞(X, µ), define f′ ∈ L∞(φ) by f′(y) := ∫ f(z)k(z, y) dµ(z) (the balancing property of k is important here). Conversely, given g ∈ L∞(φ), define g′ ∈ L∞(X, µ) by g′(x) := ∑_{z∈φ} g(z)k(x, z). Using this, every mean on L∞(φ) gives a mean on L∞(X, µ) and vice versa. This implies the claim.
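To spell out the last sentence (a sketch; the means m, n and the induced means below are named only for this display): a mean m on L∞(φ) induces a mean on L∞(X, µ), and a mean n on L∞(X, µ) induces a mean on L∞(φ), via
\[
\tilde m(f) := m(f'), \quad f \in L^{\infty}(X, \mu), \qquad\qquad \tilde n(g) := n(g'), \quad g \in L^{\infty}(\varphi).
\]
Positivity is preserved since f ≥ 0 implies f′ ≥ 0 and g ≥ 0 implies g′ ≥ 0, and normalization is preserved because the balancing property of k gives f′ ≡ 1 when f ≡ 1 and g′ ≡ 1 when g ≡ 1.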
Approximate mean. Assume (λn)n is an approximate mean. For x, y ∈ φ, define Λn(x, y) := ∫ λn(x, z)k(z, y) dµ(z). This gives approximate means for φ. Equivalently, (Λn)n satisfies condition (AI) of [26] (it is important that φ has no nontrivial automorphism). Conversely, given approximate means Λn for (R, PΦ), for x, y ∈ X, define λn(x, y) := ∑_{z,t∈φ} k(x, z)Λn(z, t)k(y, t). It is left to the reader to prove that (λn)n is an approximate mean.
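As a quick sanity check (under the usual normalization ∫ λn(x, z) dµ(z) = 1 of approximate means, which we take as given here), each Λn(x, ·) is again normalized, by the balancing property of k:
\[
\sum_{y \in \varphi} \Lambda_n(x, y) \;=\; \int \lambda_n(x, z) \sum_{y \in \varphi} k(z, y)\, \mathrm{d}\mu(z) \;=\; \int \lambda_n(x, z)\, \mathrm{d}\mu(z) \;=\; 1.
\]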
Hyperfiniteness. In Lemma 5.23, it is proved that (HF2) ⇒ (HF3) ⇒ (HF1). Assume that (Πn)n satisfies (HF1) and, conditioned on [X,o,µ], the extra randomness in the definition of (Πn)n is independent of Φ (use Example 6.11). Since each element of Πn has finite mass, it also contains finitely many points of the Poisson point process a.s. So, (Πn)n induces equivariant nested finite partitions of Φ. If there is no extra randomness (i.e., (Πn)n is a factor), it also induces a nested sequence of Borel equivalence sub-relations of R, and R is hyperfinite. If not, one can enlarge M′∗ according to the extra randomness of the partitions, but this does not affect the hyperfiniteness of the Borel equivalence relation (using Theorem 1 of [26], find a local mean and then take its expectation w.r.t. the extra randomness to find a local mean for (R, PΦ)).
Finally, assume (R, PΦ) is hyperfinite. This gives equivariant nested finite partitions Πn of φ such that ∀y ∈ φ, ∀r < ∞, ∃n : Br(y) ∩ φ ⊆ Πn(y). Define the partition Π′n of X as follows: x ∈ Π′n(y) if and only if τ(x) ∈ Πn(τ(y)). The sequence (Π′n)n satisfies (HF2) since τ(Br(o)) is a finite set a.s. for every r (indeed, τ(Br(o)) ⊆ B2r+s(o), where s := d(o, τ(o)), which is straightforward to verify). This finishes the proof.
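For the reader's convenience, the inclusion τ(Br(o)) ⊆ B2r+s(o) follows from the triangle inequality and the fact that τ(x) is a nearest point of φ to x: for every x ∈ Br(o),
\[
d(o, \tau(x)) \le d(o, x) + d(x, \tau(x)) \le d(o, x) + d(x, \tau(o)) \le 2\, d(o, x) + d(o, \tau(o)) \le 2r + s.
\]
Hence τ(Br(o)) ⊆ φ ∩ B2r+s(o), which is a.s. finite.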
Proof of Theorem 5.21. The Følner conditions are already treated in Lemma 5.23. Here, we prove the equivalence of the rest of the conditions. On the event µ(X) < ∞, all of the conditions hold. So it is enough to assume µ(X) = ∞ a.s. If µ has no atoms a.s., then Lemma 7.6 implies that the different notions of amenability of [X,o,µ] are equivalent to those for (M′∗, R, PΦ), defined above. So, the equivalence of the conditions is implied by Theorem 1 of [26].
Finally, if µ is allowed to have atoms, we multiply X by [0,1] to destroy the atoms as follows. Let X′ := X × [0,1], equipped with the sum metric d((x1, t1), (x2, t2)) := d(x1, x2) + |t1 − t2|. Let µ′ := µ × Leb. Then, by choosing o′ in {o} × [0,1] uniformly, [X′,o′,µ′, X × {0}] is unimodular, where X × {0} is kept as a distinguished closed subset of X′ (Example 2.5; this is in fact the Palm version of µ′ by Theorem 6.14). Note that the product structure can be recovered from (X′, X × {0}). It is left to the reader to show that the different notions of amenability for [X,o,µ] are equivalent to those for [X′,o′,µ′, X × {0}]. Since µ′ has no atoms, the first part of the proof implies the claim.
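As a final remark on the construction above, the claim that µ′ has no atoms is immediate from the definition µ′ = µ × Leb: for every (x, t) ∈ X′,
\[
\mu'(\{(x, t)\}) = \mu(\{x\}) \cdot \mathrm{Leb}(\{t\}) = 0,
\]
since Lebesgue measure has no atoms, regardless of whether µ({x}) > 0.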
Acknowledgments
This work was supported by the ERC NEMO grant, under the European Union’s
Horizon 2020 research and innovation programme, grant agreement number
788851 to INRIA. A major part of the work was done when the author was
affiliated with IPM. The research was in part supported by a grant from IPM
(No. 98490118).
References
[1] D. Aldous. The continuum random tree. I. Ann. Probab., 19(1):1–28, 1991.
[2] D. Aldous. The continuum random tree. II. An overview. In Stochastic
analysis (Durham, 1990), volume 167 of London Math. Soc. Lecture Note
Ser., pages 23–70. Cambridge Univ. Press, Cambridge, 1991.
[3] D. Aldous and R. Lyons. Processes on unimodular random networks. Elec-
tron. J. Probab., 12:no. 54, 1454–1508, 2007.
[4] D. Aldous and J. M. Steele. The objective method: probabilistic combina-
torial optimization and local weak convergence. In Probability on discrete
structures, volume 110 of Encyclopaedia Math. Sci., pages 1–72. Springer,
Berlin, 2004.
[5] F. Baccelli, M.-O. Haji-Mirsadeghi, and A. Khezeli. Eternal family trees
and dynamics on unimodular random graphs. In Unimodularity in ran-
domly generated graphs, volume 719 of Contemp. Math., pages 85–127.
Amer. Math. Soc., Providence, RI, 2018.
[6] F. Baccelli, M.-O. Haji-Mirsadeghi, and A. Khezeli. Unimodular Hausdorff
and Minkowski dimensions. Electron. J. Probab., 26:Paper No. 155, 64,
2021.
[7] I. Benjamini, R. Lyons, Y. Peres, and O. Schramm. Group-invariant per-
colation on graphs. Geom. Funct. Anal., 9(1):29–66, 1999.
[8] I. Benjamini and O. Schramm. Recurrence of distributional limits of finite
planar graphs. Electron. J. Probab., 6:no. 23, 13, 2001.
[9] C. J. Bishop and Y. Peres. Fractals in probability and analysis, volume 162
of Cambridge Studies in Advanced Mathematics. Cambridge University
Press, Cambridge, 2017.
[10] T. Budzinski. The hyperbolic Brownian plane. Probab. Theory Re-
lated Fields, 171(1-2):503–541, 2018.
[11] G. Cannizzaro and M. Hairer. The Brownian web as a random R-tree.
arXiv preprint arXiv:2102.04068, 2021.
[12] A. Connes, J. Feldman, and B. Weiss. An amenable equivalence relation
is generated by a single transformation. Ergodic Theory Dynam. Systems,
1(4):431–450 (1982), 1981.
[13] M. Deijfen, A. E. Holroyd, and J. B. Martin. Friendly frogs, stable marriage,
and the magic of invariance. Amer. Math. Monthly, 124(5):387–402, 2017.
[14] T. Duquesne and J. F. Le Gall. Random trees, Lévy processes and spatial
branching processes. Astérisque, (281):vi+147, 2002.
[15] T. Duquesne and J. F. Le Gall. Probabilistic and fractal aspects of Lévy
trees. Probab. Theory Related Fields, 131(4):553–603, 2005.
[16] R. H. Farrell. Representation of invariant measures. Illinois J. Math.,
6:447–467, 1962.
[17] J. Feldman and C. C. Moore. Ergodic equivalence relations, cohomology,
and von Neumann algebras. I. Trans. Amer. Math. Soc., 234(2):289–324,
1977.
[18] L. R. G. Fontes, M. Isopi, C. M. Newman, and K. Ravishankar. The Brown-
ian web: characterization and convergence. Ann. Probab., 32(4):2857–2883,
2004.
[19] H. Freudenthal. Über die Enden topologischer Räume und Gruppen. Math.
Z., 33(1):692–713, 1931.
[20] O. Häggström. Infinite clusters in dependent automorphism invariant per-
colation on trees. Ann. Probab., 25(3):1423–1436, 1997.
[21] M. O. Haji-Mirsadeghi and A. Khezeli. Stable transports between station-
ary random measures. Electron. J. Probab., 21:Paper No. 51, 25, 2016.
[22] H. Hatami, L. Lovász, and B. Szegedy. Limits of locally-globally convergent
graph sequences. Geom. Funct. Anal., 24(1):269–296, 2014.
[23] C. Hoffman, A. E. Holroyd, and Y. Peres. A stable marriage of Poisson
and Lebesgue. Ann. Probab., 34(4):1241–1272, 2006.
[24] A. E. Holroyd and Y. Peres. Trees and matchings from point processes.
Electron. Comm. Probab., 8:17–27, 2003.
[25] A. E. Holroyd and Y. Peres. Extra heads and invariant allocations. Ann.
Probab., 33(1):31–52, 2005.
[26] V. A. Kaimanovich. Amenability, hyperfiniteness, and isoperimetric in-
equalities. C. R. Acad. Sci. Paris Sér. I Math., 325(9):999–1004, 1997.
[27] O. Kallenberg. Invariant measures and disintegrations with applications to
Palm and related kernels. Probab. Theory Related Fields, 139(1-2):285–310,
2007.
[28] A. S. Kechris. The theory of countable Borel equivalence relations.
[29] A. Khezeli. A unified framework for generalizing the Gromov-Hausdorff
metric. Preprint.
[30] A. Khezeli. Shift-coupling of random rooted graphs and networks. To
appear in the special issue of Contemporary Mathematics on Unimodularity
in Randomly Generated Graphs, 2018.
[31] A. Khezeli. Metrization of the Gromov-Hausdorff (-Prokhorov) topology
for boundedly-compact metric spaces. Stochastic Process. Appl., 2019.
[32] A. Khezeli and S. Mellick. On the existence of balancing allocations
and factor point processes. arXiv preprint arXiv:2303.05137, 2023.
[33] G. Last and H. Thorisson. Invariant transports of stationary random mea-
sures and mass-stationarity. Ann. Probab., 37(2):790–813, 2009.
[34] J. F. Le Gall. Brownian geometry. Jpn. J. Math., 14(2):135–174, 2019.
[35] L. Lovász. Compact graphings. Acta Math. Hungar., 161(1):185–196, 2020.
[36] R. Lyons and O. Schramm. Indistinguishability of percolation clusters.
Ann. Probab., 27(4):1809–1836, 1999.
[37] J. Mecke. Stationäre zufällige Masse auf lokalkompakten Abelschen Grup-
pen. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 9:36–58, 1967.
[38] R. Schneider and W. Weil. Stochastic and integral geometry. Probability
and its Applications (New York). Springer-Verlag, Berlin, 2008.
[39] H. Thorisson. Transforming random elements and shifting random fields.
Ann. Probab., 24(4):2057–2064, 1996.
[40] A. Timár. Tree and grid factors for general point processes. Electron.
Comm. Probab., 9:53–59, 2004.
[41] R. van der Hofstad. Infinite canonical super-Brownian motion and scaling
limits. Comm. Math. Phys., 265(3):547–583, 2006.