Hunter, Cauchy Rabbit, and Optimal Kakeya Sets
Yakov Babichenko1, Yuval Peres2, Ron Peretz3, Perla Sousi4, and Peter Winkler5
1The Hebrew University of Jerusalem, Israel
2Microsoft Research, Redmond, WA
3Tel Aviv University, Israel
4University of Cambridge, Cambridge, UK
5Dartmouth College, Hanover, NH
Abstract
A planar set that contains a unit segment in every direction is called a Kakeya set. We relate these sets to a game of pursuit on a cycle $\mathbb{Z}_n$. A hunter and a rabbit move on the nodes of $\mathbb{Z}_n$ without seeing each other. At each step, the hunter moves to a neighbouring vertex or stays in place, while the rabbit is free to jump to any node. Adler et al. (2003) provide strategies for hunter and rabbit that are optimal up to constant factors and achieve probability of capture in the first $n$ steps of order $1/\log n$. We show these strategies yield a Kakeya set consisting of $4n$ triangles with minimal area (up to a constant factor), namely $\Theta(1/\log n)$. As far as we know, this is the first non-iterative construction of a boundary-optimal Kakeya set. Considering the continuum analog of the game yields a construction of a random Kakeya set from two independent standard Brownian motions $\{B(s) : s \ge 0\}$ and $\{W(s) : s \ge 0\}$. Let $\tau_t := \min\{s \ge 0 : B(s) = t\}$. Then $X_t = W(\tau_t)$ is a Cauchy process, and $K := \{(a, X_t + at) : a, t \in [0,1]\}$ is a Kakeya set of zero area. The area of the $\varepsilon$-neighborhood of $K$ is as small as possible, i.e., almost surely of order $\Theta(1/|\log \varepsilon|)$.
Keywords and phrases. Pursuit games, graph games, Kakeya sets, Cauchy process.
MSC 2010 subject classifications. Primary 49N75; secondary 05C57, 60G50.
Figure 1: The hunter and rabbit construct a Kakeya set (see Section 5)
1 Introduction
A subset $S$ of $\mathbb{R}^2$ is called a Kakeya set if for every point $P$ on the unit circle in $\mathbb{R}^2$ there is a translation of the line segment $(0, P)$ that is contained in $S$. A deterministic construction of a Kakeya set of zero area was first given by Besicovitch [3] in 1928. Perron [11] in 1928 published a new proof of that theorem. Schoenberg [12] constructed a Kakeya set consisting of $4n$ triangles of area $\Theta(1/\log n)$; his construction is explained in [4]. A similar construction was given by Keich [8], who also proved that any Kakeya set consisting of $4n$ triangles cannot have area of smaller order; see [8, Theorem 2].
In the present work we construct a new class of optimal Kakeya sets using optimal strategies in a
certain game of pursuit and evasion.
Definition 1.1. Let $G_n$ be the following two-player zero-sum game. At every time step each player occupies a vertex of the cycle $\mathbb{Z}_n$. At time 0 the hunter and rabbit choose arbitrary initial positions. At each subsequent step the hunter may move to an adjacent vertex or stay where she is; simultaneously, the rabbit may stay where he is or move to any vertex on the cycle. Neither player can see the other's position. The game ends at the "capture time", when the two players occupy the same vertex at the same time. The hunter's goal is to minimize expected capture time; the rabbit's goal is to maximize it.
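For concreteness, the rules of $G_n$ can be summarised in the following minimal simulation sketch (illustrative only, not from the paper; the strategy interfaces `hunter_step` and `rabbit_step` are hypothetical placeholders). Each player sees only its own history.

```python
import random

def play_game(n, hunter_step, rabbit_step, max_steps=10**6):
    """Simulate one play of the pursuit game G_n on the cycle Z_n.

    hunter_step(own_history) -> move in {-1, 0, +1} (neighbour or stay);
    rabbit_step(own_history) -> any vertex in {0, ..., n-1} (free jump).
    Neither callback sees the opponent. Returns the capture time, i.e. the
    first time both players occupy the same vertex simultaneously.
    """
    hunter = random.randrange(n)   # arbitrary initial positions
    rabbit = random.randrange(n)
    h_hist, r_hist = [hunter], [rabbit]
    for t in range(max_steps):
        if hunter == rabbit:
            return t
        hunter = (hunter + hunter_step(h_hist)) % n
        rabbit = rabbit_step(r_hist) % n
        h_hist.append(hunter)
        r_hist.append(rabbit)
    return max_steps  # no capture within the horizon
```

For instance, `hunter_step = lambda h: random.choice((-1, 0, 1))` and `rabbit_step = lambda h: random.randrange(n)` pit a random-walk hunter against a uniformly jumping rabbit.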
Theorem 1.2 ([1]). There exists a randomized strategy for the rabbit in the game $G_n$ so that against any strategy for the hunter, the capture time $\tau$ satisfies
$$\mathbb{E}[\tau] \ge c_1 n \log n,$$
where $c_1$ is a fixed positive constant.
The rabbit’s strategy is based on a discretized Cauchy walk; in Section 3 we give a new proof of
this theorem that relies on symmetry properties of simple random walk in two dimensions.
The bound $n \log n$ given in Theorem 1.2 is sharp, in the following sense:
Theorem 1.3 ([1]). There exists a randomized strategy for the hunter in the game $G_n$ so that against any strategy for the rabbit, the capture time $\tau$ satisfies
$$\mathbb{E}[\tau] \le c_2 n \log n,$$
where $c_2$ is a positive constant.
In Section 4 we give a self-contained proof of this theorem that will be useful in making the
connection to Kakeya sets. Combining the randomized strategies of the hunter and the rabbit of
Theorems 1.2 and 1.3 we prove the following theorem in Section 5.
Theorem 1.4. For all $n$ there exists a Kakeya set of area at most of order $1/\log n$, which is the union of $4n$ triangles.
A central open problem regarding Kakeya sets is to understand their Hausdorff dimension and Minkowski dimension. Davies [6] proved that every Kakeya set in $\mathbb{R}^2$ has full Hausdorff and Minkowski dimension, i.e., dimension 2. In dimension $d > 2$, it is an open question whether every Kakeya set has full Hausdorff or Minkowski dimension.
The Minkowski dimension of a set $K \subset \mathbb{R}^n$ is closely related to the area of its $\varepsilon$-neighbourhood, denoted by $K(\varepsilon)$. It is therefore natural to examine the sets $K(\varepsilon)$, and the first question that arises is: what is the minimal area of $K(\varepsilon)$? The answer is known to be $\Theta(1/|\log \varepsilon|)$.
Proposition 1.5. For every Kakeya set $K \subset \mathbb{R}^2$ and every sufficiently small $\varepsilon > 0$ we have
$$\mathrm{vol}(K(\varepsilon)) \ge \frac{1}{3 |\log \varepsilon|}.$$
This proposition was proved by Keich [8] using a maximal inequality from Bourgain’s paper [5]. In
Section 6 we give an alternative elementary proof, which can also be found in Ben Green’s lecture
notes [7].
Proposition 1.5 motivates the following definition. A Kakeya set $K$ will be called optimal if $\mathrm{vol}(K(\varepsilon)) = O(1/|\log \varepsilon|)$ as $\varepsilon \to 0$. Note that every optimal Kakeya set must have zero area. Constructions of optimal Kakeya sets are known in the literature (see Keich [8]), but they are quite involved. Next, we describe a continuum analog of the Kakeya construction of Theorem 1.4. This simple probabilistic construction almost surely yields an optimal Kakeya set.
Let $\{B(s) : s \ge 0\}$ and $\{W(s) : s \ge 0\}$ be two independent standard Brownian motions, and let $\tau_t := \min\{s \ge 0 : B(s) = t\}$. Then $X_t = W(\tau_t)$ is a Cauchy process, i.e., a Lévy process whose increments $X_{s+t} - X_s$ have the same law as $t X_1$, where $X_1$ has the Cauchy density $\frac{1}{\pi(1 + x^2)}$. See, e.g., [2] or [10, Theorem 2.37].
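As an illustration (our sketch, not part of the paper), this construction can be simulated by running the pair $(B, W)$ as discretized random walks and reading off $W$ each time $B$ first reaches a new level $t$; the time step `dt` below is an arbitrary resolution parameter.

```python
import numpy as np

def cauchy_from_brownian(t_grid, dt=1e-4, rng=np.random.default_rng(0)):
    """Approximate X_t = W(tau_t) with tau_t = min{s >= 0 : B(s) = t},
    where B, W are independent Brownian motions simulated as random
    walks with Gaussian steps of variance dt. t_grid must be increasing."""
    sqrt_dt = np.sqrt(dt)
    B, W, X = 0.0, 0.0, []
    for t in t_grid:
        # tau_t is a.s. finite but heavy-tailed, so this loop can occasionally be long.
        while B < t:  # run (B, W) until B first crosses level t
            B += sqrt_dt * rng.standard_normal()
            W += sqrt_dt * rng.standard_normal()
        X.append(W)   # value of W at the (approximate) hitting time tau_t
    return np.array(X)

# One approximate sample path of the Cauchy process on [0, 1]:
path = cauchy_from_brownian(np.linspace(0.0, 1.0, 101))
```

The increments of the output are heavy-tailed, consistent with the Cauchy density above.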
Theorem 1.6. Let $\{X_t : t \ge 0\}$ be a Cauchy process and let $\Lambda := \{(a, X_t + at) : a, t \in [0,1]\}$. Then the union $\cup_{k=0}^{3} e^{i\pi k/4} \Lambda$ of four rotated copies of $\Lambda$ is almost surely an optimal Kakeya set, i.e. there exist positive constants $c_1, c_2$ such that as $\varepsilon \to 0$ we have
$$\frac{c_1}{|\log \varepsilon|} \le \mathrm{vol}(\Lambda(\varepsilon)) \le \frac{c_2}{|\log \varepsilon|} \quad \text{a.s.}$$
This theorem is proved in Section 6.
2 Probability of collision
In this section we define a win-lose variant of $G_n$, called $G_n'$, in which only $n$ moves are made and the hunter wins if she captures the rabbit.
Let
$$\mathcal{H} = \big\{(H_t)_{t=0}^{n-1} : H_t \in \mathbb{Z}_n,\ |H_{t+1} - H_t| \le 1\big\} \quad \text{and} \quad \mathcal{R} = \big\{(R_t)_{t=0}^{n-1} : R_t \in \mathbb{Z}_n\big\}.$$
Then the sets of mixed strategies $\Delta_h$ for the hunter and $\Delta_r$ for the rabbit are given by
$$\Delta_h = \Big\{x \in \mathbb{R}^{|\mathcal{H}|} : \forall f \in \mathcal{H},\ x_f \ge 0,\ \sum_{f \in \mathcal{H}} x_f = 1\Big\} \quad \text{and} \quad \Delta_r = \Big\{y \in \mathbb{R}^{|\mathcal{R}|} : \forall g \in \mathcal{R},\ y_g \ge 0,\ \sum_{g \in \mathcal{R}} y_g = 1\Big\}.$$
The hunter wins $G_n'$ if she captures the rabbit. The payoff matrix $M = (m_{fg})_{f,g}$, where $f \in \mathcal{H}$ and $g \in \mathcal{R}$, is given by
$$m_{fg} = \mathbf{1}(\exists\, \ell \le n-1 : f(\ell) = g(\ell)).$$
When the hunter and the rabbit use the mixed strategies $x$ and $y$ respectively, then
$$x^T M y = \mathbb{P}_{xy}(\tau < n),$$
where $\tau$ is the capture time.
By the minimax theorem we have
$$\max_{x \in \Delta_h} \min_{y \in \Delta_r} x^T M y = \min_{y \in \Delta_r} \max_{x \in \Delta_h} x^T M y = \mathrm{Val}(G_n'),$$
where $\mathrm{Val}(G_n')$ stands for the value of the game (to the hunter). Thus, there exists a randomized strategy for the hunter so that against every strategy of the rabbit the probability that they collide in the first $n$ steps is at least $\mathrm{Val}(G_n')$; and there exists a randomized strategy for the rabbit so that against every strategy of the hunter, the probability that they collide is at most $\mathrm{Val}(G_n')$.
Remark 2.1. In Sections 3 and 4 we give randomized strategies for the hunter and the rabbit that achieve $\mathrm{Val}(G_n')$ up to multiplicative constants. In particular, in Propositions 3.2 and 4.1 we show there are positive constants $c_3$ and $c_4$ such that
$$\frac{c_3}{\log n} \le \mathrm{Val}(G_n') \le \frac{c_4}{\log n}. \tag{2.1}$$
Lemma 2.2. Let $\tilde{x} \in \Delta_h$ be a randomized hunter strategy in the game $G_n'$ satisfying $\min_{y \in \Delta_r} \tilde{x}^T M y = p_n$. Then there exists a randomized hunter strategy in the game $G_n$ so that against any rabbit strategy, the capture time $\tau$ satisfies
$$\mathbb{E}[\tau] \le \frac{2n}{p_n}.$$
Let $\tilde{y} \in \Delta_r$ be a randomized rabbit strategy in the game $G_n'$ satisfying $\max_{x \in \Delta_h} x^T M \tilde{y} = q_n$. Then there exists a randomized rabbit strategy in the game $G_n$ so that against any hunter strategy, the capture time $\tau$ satisfies
$$\frac{n}{q_n} \le \mathbb{E}[\tau].$$
Proof. We divide time into rounds of length $n$. In rounds $1, 3, 5, \ldots$ the hunter employs independent copies of the randomized strategy $\tilde{x}$, and she uses the even rounds to move to the proper starting points.
This way we get a new hunter strategy $\xi$ so that against any rabbit strategy $\eta$ in $G_n$
$$\mathbb{P}_{\xi\eta}(\tau < 2n) \ge \mathbb{P}_{\tilde{x}\eta'}(\tau < n) = \tilde{x}^T M \eta' \ge p_n,$$
where $\eta'$ is the restriction of the strategy $\eta$ to the first $n$ steps. Therefore, bounding the capture time $\tau$ by $2n$ times a geometric random variable with success probability $p_n$, we get $\mathbb{E}[\tau] \le \frac{2n}{p_n}$.
For the lower bound we look at the process in rounds of length $n$. In each round the rabbit employs an independent copy of the randomized strategy $\tilde{y}$. Thus the capture time stochastically dominates $n$ times a geometric random variable with parameter $q_n$, and hence $\mathbb{E}[\tau] \ge \frac{n}{q_n}$.
3 The rabbit’s strategy
In this section we give the proof of Theorem 1.2. We start with a standard result for random walks
in 2 dimensions and include its proof for the sake of completeness.
Lemma 3.1. Let $Z = (X, Y)$ be a simple random walk in $\mathbb{Z}^2$ starting from 0. For every $i \in \mathbb{Z}$ define $T_i = \inf\{t \ge 0 : Y_t = i\}$. Then for all $i \ge 1$ and all $k \in \{-i, \ldots, i\}$ we have
$$\mathbb{P}_0(X_{T_i} = k) \ge \frac{1}{96 i}.$$
Proof. Notice that $X_{T_i}$ has the same distribution for both a lazy simple random walk and a non-lazy one. So it suffices to prove the lemma in the case of a lazy simple random walk in $\mathbb{Z}^2$. We realize a lazy simple random walk in $\mathbb{Z}^2$ as follows.
Figure 2: Hitting times. (a) Escaping the $2i \times 2i$ square. (b) Escaping the $2k \times 2k$ square.
Let $V, W$ be two independent lazy simple random walks in $\mathbb{Z}$. Let $\xi_1, \xi_2, \ldots$ be i.i.d. random variables taking values 1 or 2 with equal probability. Now for all $k$ define $r(k) = \sum_{i=1}^{k} \mathbf{1}(\xi_i = 1)$ and let
$$(X_k, Y_k) = (V_{r(k)}, W_{k - r(k)}).$$
Then it is elementary to check that $Z = (X, Y)$ is a lazy simple random walk in $\mathbb{Z}^2$.
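The coupling is easy to implement; the following sketch (illustrative, not from the paper) generates $Z$ exactly as described, advancing $V$ when $\xi_i = 1$ and $W$ when $\xi_i = 2$.

```python
import numpy as np

def lazy_walk_2d(n_steps, rng=np.random.default_rng(1)):
    """Return (X_k, Y_k) = (V_{r(k)}, W_{k-r(k)}), k = 0..n_steps, where V, W
    are independent lazy simple random walks on Z (hold prob. 1/2) and
    r(k) = #{i <= k : xi_i = 1} for i.i.d. xi_i uniform on {1, 2}."""
    lazy = lambda m: rng.choice([-1, 0, 0, 1], size=m)   # P(0)=1/2, P(+-1)=1/4
    V = np.concatenate([[0], np.cumsum(lazy(n_steps))])
    W = np.concatenate([[0], np.cumsum(lazy(n_steps))])
    xi = rng.integers(1, 3, size=n_steps)                # i.i.d. uniform on {1, 2}
    r = np.concatenate([[0], np.cumsum(xi == 1)])
    k = np.arange(n_steps + 1)
    return V[r], W[k - r]
```

Each combined step stays put with probability $1/2$ and moves to each of the four neighbours with probability $1/8$, which is the lazy simple random walk on $\mathbb{Z}^2$.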
We first show that for all $k \in \{-i, \ldots, i\}$,
$$\mathbb{P}_0(X_{T_i} = 0) \ge \mathbb{P}_0(X_{T_i} = k). \tag{3.1}$$
Since $V$ is independent of $T_i$ and of $r(\ell)$ for all values of $\ell$, we get for all $k$
$$\mathbb{P}_0(X_{T_i} = k) = \sum_{m, \ell} \mathbb{P}_0(X_m = k, T_i = m, r(m) = \ell) = \sum_{m, \ell} \mathbb{P}_0(V_\ell = k)\, \mathbb{P}_0(r(m) = \ell, T_i = m). \tag{3.2}$$
It is standard (see, for example, [9, Lemma 12.2]) that for a lazy simple random walk on $\mathbb{Z}$, if $P^t$ stands for the transition probability in $t$ steps, then $P^t(x, y) \le P^t(x, x)$ for all $x$ and $y$. Applying this to $V_\ell$ and using (3.2) we obtain
$$\mathbb{P}_0(X_{T_i} = 0) \ge \sum_{m, \ell} \mathbb{P}_0(V_\ell = k)\, \mathbb{P}_0(r(m) = \ell, T_i = m) = \mathbb{P}_0(X_{T_i} = k)$$
and this concludes the proof of (3.1).
For $k \in \mathbb{Z}$ we let
$$\tau_k = \min\{t \ge 0 : Z_t \notin [-|k|+1, |k|-1]^2\}.$$
Setting $A = \{Y_{\tau_i} = i\}$, we have by symmetry
$$\mathbb{P}_0(X_{T_i} \in \{-i, \ldots, i\}) \ge \mathbb{P}_0(A) = \frac{1}{4}.$$
Hence this together with (3.1) gives that
$$\mathbb{P}_0(X_{T_i} = 0) \ge \frac{1}{8i + 4} \ge \frac{1}{12 i}. \tag{3.3}$$
To finish the proof of the lemma it remains to show that for all $(k, i)$ with $k \in \{-i, \ldots, i\}$ we have
$$\mathbb{P}_0(X_{T_i} = k) \ge \frac{1}{96 i}. \tag{3.4}$$
For any $k \in \{-i, \ldots, i\}$ we have
$$\mathbb{P}_0(X_{T_i} = k) \ge \mathbb{P}_0(X_{\tau_k} = k, X_{T_i} = k) = \sum_{\ell = -|k|, \ldots, |k|} \mathbb{P}_0(X_{\tau_k} = k, Y_{\tau_k} = \ell, X_{T_i} = k) = \sum_{\ell} \mathbb{P}_0(X_{T_i} = k \mid X_{\tau_k} = k, Y_{\tau_k} = \ell)\, \mathbb{P}_0(X_{\tau_k} = k, Y_{\tau_k} = \ell). \tag{3.5}$$
Notice that by the strong Markov property, translation invariance and applying (3.3) to $i - \ell$ we get
$$\mathbb{P}_0(X_{T_i} = k \mid X_{\tau_k} = k, Y_{\tau_k} = \ell) = \mathbb{P}_{(k,\ell)}(X_{T_i} = k) = \mathbb{P}_0(X_{T_{i-\ell}} = 0) \ge \frac{1}{12(i - \ell)} \wedge 1 \ge \frac{1}{24 i},$$
since $-k < \ell < k$ and $k \le i$. Therefore, plugging this into (3.5), we obtain
$$\mathbb{P}_0(X_{T_i} = k) \ge \frac{1}{24 i} \sum_{\ell = -|k|, \ldots, |k|} \mathbb{P}_0(X_{\tau_k} = k, Y_{\tau_k} = \ell) = \frac{1}{24 i}\, \mathbb{P}_0(X_{\tau_k} = k) \ge \frac{1}{96 i},$$
since by symmetry we have $\mathbb{P}_0(X_{\tau_k} = k) \ge 1/4$. This concludes the proof of the lemma.
Proposition 3.2. There exists a randomized rabbit strategy in the game $G_n$ so that against any hunter strategy the capture time $\tau$ satisfies
$$\mathbb{P}(\tau < n) \le \frac{c}{\log n},$$
where $c$ is a universal constant.
Proof. It suffices to prove the upper bound for a pure strategy of the hunter, i.e. a fixed path $(H_i)_{i < n}$.
Let $U$ be uniformly distributed on $\{0, \ldots, n-1\}$ and let $Z = (X, Y)$ be an independent simple random walk in $\mathbb{Z}^2$. We define a sequence of stopping times as follows: $T_0 = 0$ and inductively, for $k \ge 0$,
$$T_{k+1} = \inf\{t \ge T_k : Y_t = k+1\}.$$
By recurrence of the two-dimensional random walk, $T_k < \infty$ a.s. for all $k$.
Define the position of the rabbit at time 0 to be $R_0 = U$ and at time $k$ to be $R_k = (X_{T_k} + U) \bmod n$. Define $K_n$ to be the total number of collisions in the first $n$ steps, i.e. $K_n = \sum_{i=0}^{n-1} \mathbf{1}(H_i = R_i)$. Since $\{\tau < n\} = \{K_n > 0\}$, it suffices to show that for a positive constant $c$,
$$\mathbb{P}(K_n > 0) \le \frac{c}{\log n}. \tag{3.6}$$
For the rest of the proof we extend the sequence $(H_i)_{i < n}$ by defining $H_{i+n} = H_n$ for all $i < n$. Then we have
$$\mathbb{P}(K_n > 0) \le \frac{\mathbb{E}[K_{2n}]}{\mathbb{E}[K_{2n} \mid K_n > 0]}, \tag{3.7}$$
since $\mathbb{E}[K_{2n}] \ge \mathbb{E}[K_{2n} \mathbf{1}(K_n > 0)] = \mathbb{E}[K_{2n} \mid K_n > 0]\, \mathbb{P}(K_n > 0)$. In order to prove (3.6) we will bound the numerator and the denominator separately.
Since $U$ is uniform on $\{0, \ldots, n-1\}$ and $X$ is independent of $U$, it follows that $R_i$ is uniform on $\{0, \ldots, n-1\}$ for every $i$. Using this and the fact that the hunter and the rabbit move independently, we deduce
$$\mathbb{E}[K_{2n}] = \sum_{i=0}^{2n-1} \mathbb{P}(R_i = H_i) = \sum_{i=0}^{2n-1} \frac{1}{n} = 2. \tag{3.8}$$
For the term $\mathbb{E}[K_{2n} \mid K_n > 0]$ we have
$$\mathbb{E}[K_{2n} \mid K_n > 0] = \sum_{k=0}^{n-1} \mathbb{E}\Big[\sum_{i=k}^{2n-1} \mathbf{1}(R_i = H_i) \,\Big|\, \tau = k\Big] \frac{\mathbb{P}(\tau = k)}{\mathbb{P}(\tau < n)}. \tag{3.9}$$
Define $\tilde{R}_i = (R_{k+i} - R_k) \bmod n$ and $\tilde{H}_i = (H_{i+k} - H_k) \bmod n$. By the definition of the process $R$ it follows that $\tilde{R}$ has the same law as the process $R$. By the Markov property of the process $(R_i)$ we get
$$\mathbb{E}\Big[\sum_{i=k}^{2n-1} \mathbf{1}(R_i = H_i) \,\Big|\, \tau = k\Big] \ge \mathbb{E}\Big[\sum_{i=0}^{n-1} \mathbf{1}(\tilde{R}_i = \tilde{H}_i)\Big] = 1 + \sum_{i=1}^{n-1} \mathbb{P}_0\big(R_i = \tilde{H}_i\big). \tag{3.10}$$
For all $i \ge 1$ and all $\ell \in \{-i, \ldots, i\}$, since $R_i = X_{T_i} \bmod n$, using Lemma 3.1 we get
$$\mathbb{P}_0(R_i = \ell \bmod n) \ge \mathbb{P}_0(X_{T_i} = \ell) \ge \frac{1}{96 i}. \tag{3.11}$$
For all $i$ we have $\tilde{H}_i \in \{-i \bmod n, \ldots, i \bmod n\}$, since $\tilde{H}_0 = 0$ and the hunter moves at most one step at a time. Using (3.11) we get that for all $i \ge 1$
$$\mathbb{P}_0\big(R_i = \tilde{H}_i\big) \ge \frac{1}{96 i}.$$
The above inequality together with (3.9) and (3.10) yields
$$\mathbb{E}[K_{2n} \mid K_n > 0] \ge 1 + \sum_{i=1}^{n-1} \frac{1}{96 i} \ge c_1 \log n,$$
where $c_1$ is a positive constant. Thus (3.7), the above inequality and (3.8) conclude the proof of (3.6).
Remark 3.3. We refer to the strategy used by the rabbit in the proof above as the Cauchy strategy,
because it is the discrete analogue of the hitting distribution of planar Brownian motion on a line
at distance 1 from the starting point, which is the Cauchy distribution.
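For illustration (our sketch, not part of the paper), the Cauchy strategy can be simulated directly from its definition in the proof of Proposition 3.2; note that reaching level $n$ takes the underlying planar walk on the order of $n^2$ steps, so this is practical only for moderate $n$.

```python
import numpy as np

def cauchy_rabbit(n, rng=np.random.default_rng(2)):
    """Rabbit positions R_0, ..., R_{n-1} on Z_n under the Cauchy strategy:
    R_k = (X_{T_k} + U) mod n, where T_k is the first time the y-coordinate
    of a planar simple random walk hits k, and U is uniform on {0,...,n-1}."""
    U = int(rng.integers(n))
    positions, x, y, level = [U], 0, 0, 0
    while len(positions) < n:
        if rng.integers(2):            # step in the x-direction
            x += int(rng.choice((-1, 1)))
        else:                          # step in the y-direction
            y += int(rng.choice((-1, 1)))
        if y == level + 1:             # first visit to a new level: time T_{level+1}
            level += 1
            positions.append((x + U) % n)
    return np.array(positions)
```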
Proof of Theorem 1.2. The proof of the theorem follows by combining Lemma 2.2 and Proposition 3.2.
Figure 3: Typical paths. (a) Typical hunter path, zigzag strategy. (b) Typical rabbit path, counter to zigzag strategy.
4 The hunter’s strategy
In this section we give the proof of Theorem 1.3 by constructing a randomized strategy for the
hunter. Before doing so, it is perhaps useful to consider the following natural strategy for the
hunter: at time 0 she chooses a random location and a random direction. Subsequently, at each time $t$ she continues in the direction she has been walking with probability $(n-2)/n$, stops for one move and then continues in the same direction with probability $1/n$, and reverses direction with probability $1/n$. We call this the "zigzag" strategy. We can prove that the zigzag strategy achieves expected capture time of order $n^{3/2}$ against any rabbit strategy. The following counter-strategy of the rabbit yields expected capture time of order $n^{3/2}$ against the zigzag strategy: he starts at a random position, walks $\sqrt{n}$ steps to the right in unit steps, then jumps $2\sqrt{n}$ to the left and repeats.
To achieve minimal expected capture time (up to a constant) our hunter moves not only in a
random direction but at a random rate.
Proposition 4.1. There exists a randomized hunter strategy in the game $G_n'$ so that against any rabbit strategy the capture time $\tau$ satisfies
$$\mathbb{P}(\tau < n) \ge \frac{c'}{\log n},$$
where $c'$ is a universal positive constant.
Proof. Let $R_\ell$ be the location of the rabbit on the cycle at time $\ell$, i.e. $R_\ell \in \{0, \ldots, n-1\}$. We now describe the strategy of the hunter. Let $a, b$ be independent random variables, each uniformly distributed on $[0,1]$. We define the location of the hunter at time $\ell$ to be $H_\ell = \lceil an + b\ell \rceil \bmod n$.
We again let $K_n$ denote the number of collisions before time $n$, i.e. $K_n = \sum_{i=0}^{n-1} \mathbf{1}(R_i = H_i)$. Then by the second moment method we have
$$\mathbb{P}(K_n > 0) \ge \frac{(\mathbb{E}[K_n])^2}{\mathbb{E}[K_n^2]}.$$
We now compute the first and second moments of $K_n$. For that, let $I_\ell$ denote the event that there is a collision at time $\ell$, i.e. $I_\ell = \{H_\ell = R_\ell\}$. We first calculate $\mathbb{P}(I_\ell)$. We have
$$I_\ell = \{\lceil an + b\ell \rceil = R_\ell\} \cup \{\lceil an + b\ell \rceil - n = R_\ell\},$$
which gives that
$$I_\ell = \{R_\ell - 1 < an + b\ell \le R_\ell\} \cup \{R_\ell + n - 1 < an + b\ell \le R_\ell + n\}.$$
Hence $\mathbb{P}(I_\ell) = 1/n$ and
$$\mathbb{E}[K_n] = \sum_{\ell=0}^{n-1} \mathbb{P}(I_\ell) = 1.$$
Let $j > 0$; then it is easy to check that $\mathbb{P}(I_\ell \cap I_{\ell+j}) \le \frac{c}{jn}$ for a positive constant $c$. Therefore
$$\mathbb{E}[K_n^2] = \mathbb{E}\Big[\Big(\sum_{\ell=0}^{n-1} \mathbf{1}_{I_\ell}\Big)^2\Big] = \mathbb{E}[K_n] + \sum_{\ell \ne m} \mathbb{P}(I_\ell \cap I_m) = 1 + 2 \sum_{\ell=0}^{n-1} \sum_{j=1}^{n-\ell-1} \mathbb{P}(I_\ell \cap I_{\ell+j}) \le 1 + 2 \sum_{\ell=0}^{n-1} \sum_{j=1}^{n} \frac{c}{jn} \le c' \log n,$$
for a positive constant $c'$. This way we get
$$\mathbb{P}(\tau < n) = \mathbb{P}(K_n > 0) \ge \frac{c_1}{\log n}$$
and this finishes the proof of the proposition.
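As a numerical sanity check (our illustrative code, not part of the paper), one can estimate the collision probability of this hunter strategy against any fixed rabbit path; by Proposition 4.1 the estimate should be at least of order $1/\log n$.

```python
import numpy as np

def collision_probability(rabbit_path, trials=10**4, rng=np.random.default_rng(3)):
    """Monte Carlo estimate of P(tau < n) for the hunter strategy
    H_l = ceil(a*n + b*l) mod n against a fixed rabbit path R_0,...,R_{n-1}."""
    R = np.asarray(rabbit_path)
    n = len(R)
    l = np.arange(n)
    hits = 0
    for _ in range(trials):
        a, b = rng.random(2)                          # independent uniforms on [0,1]
        H = np.ceil(a * n + b * l).astype(int) % n    # hunter's random line
        hits += bool(np.any(H == R))                  # collision at some common time?
    return hits / trials
```

For example, feeding in the Cauchy-strategy positions from Section 3 should produce values of order $1/\log n$, matching (2.1).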
Proof of Theorem 1.3. The proof of the theorem follows from Lemma 2.2 and Proposition 4.1.
5 The Kakeya connection
In this section we prove Theorem 1.4. We start by showing how to get a Kakeya set given a strategy of the rabbit with probability at most $p_n$ of collision against any strategy of the hunter.
Proposition 5.1. Given a strategy $\tilde{y}$ of the rabbit which ensures capture probability at most $p_n$ against any hunter strategy, there is a Kakeya set of area at most $8 p_n$ which is the union of $4n$ triangles.
Proof. Recall the definition of the set $\mathcal{H}$ of the allowed hunter paths.
First we slightly change the game and enlarge the set of allowed paths for the hunter, to include all functions $f : [0, n) \to [0, n)$ that are 1-Lipschitz. Then we say that there is a collision in $[0, n)$ if for some $m \le n-1$ there exists $t \in [m, m+1)$ such that $f(t) = R_m$. We first show that if $f$ is 1-Lipschitz, then
$$\min_{y \in \Delta_r} \mathbb{P}_{fy}(\text{collision in } [0, n)) \le p_n, \tag{5.1}$$
Figure 4: The triangles $T_1, \ldots, T_n$, each of height 1 and base $1/n$.
where $f$ stands for $\delta_f$ with a slight abuse of notation.
In order to prove (5.1), for every $f$ that is 1-Lipschitz we will construct a path $h \in \mathcal{H}$ such that for all $y \in \Delta_r$
$$\mathbb{P}_{fy}(\text{collision in } [0, n)) \le \mathbb{P}_{hy}(\tau < n). \tag{5.2}$$
We define $h(m)$ for every $m \in \{0, \ldots, n-1\}$. By the 1-Lipschitz property of $f$, note that the image $f([m, m+1))$ can contain at most one integer. If there exists $k \in \mathbb{Z}_n$ such that $k \in f([m, m+1))$, then we set $h(m) = k$. If there is no such integer $k$, then we set $h(m) = \lfloor f(m) \rfloor$. The 1-Lipschitz property then gives that $h \in \mathcal{H}$. Since the rabbit only jumps on $\mathbb{Z}_n$, the function $h$ constructed this way satisfies (5.2).
Applying (5.2) to the strategy $\tilde{y}$ of the rabbit and using the assumption gives that for all $f$ that are 1-Lipschitz
$$\mathbb{P}_{f\tilde{y}}(\text{collision in } [0, n)) \le p_n. \tag{5.3}$$
Next we consider the hunter strategy in which she chooses a linear function $f_{a,b}(t) = (an + tb) \bmod n$, where $a, b$ are independent and uniformly distributed on the unit interval $[0,1]$. Suppose that during the time segment $[m, m+1)$ the rabbit is located at position $z_m$. Then the set of values $(a, b)$ such that $z_m \in f_{a,b}([m, m+1))$ is $T(m, z_m) = T_\ell(m, z_m) \cup T_r(m, z_m)$, where
$$T_\ell(m, z_m) = \{(a,b) : an + bm \le z_m < an + b(m+1)\} \cap [0,1]^2 \quad \text{and}$$
$$T_r(m, z_m) = \{(a,b) : an + bm - n \le z_m < an + b(m+1) - n\} \cap [0,1]^2,$$
as illustrated in Figure 5. If the rabbit chooses the sequence of locations $(z_k)_{k=0}^{n-1}$, then he will be caught by a hunter using the strategy above with probability $A(z)$, the area of $\cup_m T(m, z_m)$. Therefore the objective of the rabbit is to minimize $A(z)$. We have
$$A(R) = \mathbb{P}(\text{collision in } [0, n) \mid (R_m)),$$
and hence, since from (5.3) we have $\int_0^1\!\!\int_0^1 \mathbb{P}_{f_{a,b}\tilde{y}}(\text{collision in } [0, n))\, da\, db \le \int_0^1\!\!\int_0^1 p_n\, da\, db = p_n$, we deduce that for the strategy $\tilde{y}$
$$\mathbb{E}[A(R)] \le p_n.$$
Now we slightly modify the sets $T(m, z_m)$, since they may consist of two disjoint triangles, as illustrated in Figure 5. If we write $T'(m, z_m) = T_\ell(m, z_m) \cup (T_r(m, z_m) - (1, 0))$, then it is easy to see that $T'(m, z_m)$ is always a triangle.
Figure 5: $T(m, z_m) = T_\ell(m, z_m) \cup T_r(m, z_m)$.
Hence taking the union $\cup_m T'(m, z_m)$ gives a union of $n$ triangles with
$$\mathrm{Area}(\cup_m T'(m, z_m)) \le 2\, \mathrm{Area}(\cup_m T(m, z_m)),$$
and hence $\mathbb{E}[\mathrm{Area}(\cup_m T'(m, z_m))] \le 2 p_n$.
The triangles $T_i$ in Figure 4 contain unit segments in all directions that form an angle in $[0, \pi/4]$ with the vertical axis. Since the triangles $T'(m, z_m)$ are obtained from the triangles $T_i$ by horizontal translation, the union $\cup_m T'(m, z_m)$ also contains a unit segment in each of these directions. Hence taking 4 suitably rotated copies of this construction yields $4n$ triangles whose union is a Kakeya set of area at most $8 p_n$.
Proof of Theorem 1.4. If the rabbit uses the Cauchy strategy, then by Proposition 3.2 the probability of collision in $n$ steps against any strategy of the hunter is at most $p_n = c/\log n$. Now we can apply Proposition 5.1 to get $4n$ triangles of area at most $8 p_n = 8c/\log n$. For a sample of this random construction with $n =$ see Figure 1.
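For readers who wish to reproduce a picture like Figure 1, here is an illustrative sketch (ours, not from the paper) of the triangles $T'(m, z_m)$; the vertex formulas are our derivation from the two linear inequalities defining $T_\ell$ and the shifted $T_r$ above, not formulas stated in the paper.

```python
import numpy as np

def kakeya_triangles(z):
    """Vertices of the n triangles T'(m, z_m), m = 0..n-1, in the (a, b)-plane,
    built from rabbit positions z_m in {0, ..., n-1}: T'(m, z_m) is bounded by
    the lines a*n + b*m = z_m and a*n + b*(m+1) = z_m with 0 <= b <= 1, giving
    apex (z_m/n, 0) and top edge from ((z_m-m-1)/n, 1) to ((z_m-m)/n, 1)."""
    n = len(z)
    return [np.array([(z[m] / n, 0.0),
                      ((z[m] - m - 1) / n, 1.0),
                      ((z[m] - m) / n, 1.0)])
            for m in range(n)]

# E.g. tris = kakeya_triangles(cauchy_rabbit(64)); taking four copies rotated
# by k*pi/4, k = 0,...,3, yields the 4n triangles of Theorem 1.4.
```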
6 Kakeya sets from the Cauchy process
Our goal in this section is to prove Theorem 1.6. We first recall some notation.
Let $(X_t)$ be a Cauchy process, i.e., $X$ is a stable process with values in $\mathbb{R}$ and the density of $X_1$ is given by $(\pi(1 + x^2))^{-1}$. Let $\mathcal{F}_t = \sigma(X_s, s \le t)$ be its natural filtration and let $\tilde{\mathcal{F}}_t = \cap_n \mathcal{F}_{t + 1/n}$. Then $(\tilde{\mathcal{F}}_t)$ is right continuous and $X$ is adapted to $(\tilde{\mathcal{F}}_t)$.
For any set $A$ we denote by $A(\varepsilon)$ the $\varepsilon$-neighbourhood of $A$, i.e. $A(\varepsilon) = A + B(0, \varepsilon)$.
Let $F$ be a subset of $[0,1]$ and $\delta > 0$. For $a \in [0,1]$ we define
$$V_a(F, \delta) = \cup_{s \in F}\, B(X_s + as, \delta).$$
Recall the definition $\Lambda = \{(a, X_t + at) : 0 \le a \le 1,\ 0 \le t \le 1\}$.
Lemma 6.1. Let $M > 0$ be a constant, $t > r$ and $I = [u, u + t]$ be a subinterval of $[0,1]$. Then there exists a constant $c = c(M)$ so that for all $a \in [-M, M]$
$$\mathbb{E}[\mathrm{vol}(V_a(I, r))] \le \frac{ct}{\log(t/r)} + 2r.$$
Proof. By translation invariance of Lebesgue measure and the stationarity of $X$, it suffices to prove the lemma in the case $I = [0, t]$.
If $\tau_{B(x,r)} = \inf\{s \ge 0 : X_s + as \in B(x, r)\}$, then we can write
$$\mathbb{E}[\mathrm{vol}(\cup_{s \le t} B(X_s + as, r))] = \int_{\mathbb{R}} \mathbb{P}\big(\tau_{B(x,r)} \le t\big)\, dx = 2r + \int_{\mathbb{R} \setminus B(0,r)} \mathbb{P}\big(\tau_{B(x,r)} \le t\big)\, dx. \tag{6.1}$$
For $x \notin B(0, r)$ we define $Z_x = \int_0^t \mathbf{1}(X_s + as \in B(x, r))\, ds$ and $\tilde{Z}_x = \int_0^{2t} \mathbf{1}(X_s + as \in B(x, r))\, ds$. By the càdlàg property of $X$ we deduce that, up to events of zero probability, $\{\tau_{B(x,r)} \le t\} = \{Z_x > 0\}$. So it follows that
$$\mathbb{P}\big(\tau_{B(x,r)} \le t\big) = \mathbb{P}(Z_x > 0) \le \frac{\mathbb{E}[\tilde{Z}_x]}{\mathbb{E}[\tilde{Z}_x \mid Z_x > 0]}. \tag{6.2}$$
For the numerator we have
$$\mathbb{E}[\tilde{Z}_x] = \int_0^{2t} \int_{B(x,r)} p_s(0, y)\, dy\, ds = \int_0^{2t} \int_{B(0,r)} p_s(0, x + y)\, dy\, ds,$$
where $p_s(0, y)$ stands for the transition density in time $s$ of the process $(X_u + au)_u$. We now drop the dependence on $B(x, r)$ from $\tau_{B(x,r)}$ to simplify notation. For the conditional expectation appearing in the denominator of (6.2) we have
$$\mathbb{E}[\tilde{Z}_x \mid Z_x > 0] = \mathbb{E}\Big[\int_\tau^{2t} \mathbf{1}(X_s + as \in B(x, r))\, ds \,\Big|\, \tau \le t\Big] = \mathbb{E}\Big[\int_0^{2t - \tau} \mathbf{1}\big(X_{s+\tau} + a(s + \tau) \in B(x, r)\big)\, ds \,\Big|\, \tau \le t\Big]$$
$$= \mathbb{E}\Big[\int_0^{2t - \tau} \mathbf{1}\big(X_{s+\tau} - X_\tau + as + (X_\tau + a\tau) \in B(x, r)\big)\, ds \,\Big|\, \tau \le t\Big] \ge \min_{y \in B(x,r)} \mathbb{E}\Big[\int_0^{t} \mathbf{1}(X_s + as + y \in B(x, r))\, ds\Big],$$
where in the last step we used the strong Markov property of $X$ and the fact that $X_\tau + a\tau = y \in B(x, r)$.
We now bound from below the expectation appearing in the minimum above. Since we assumed that $r < t$, we have
$$\mathbb{E}\Big[\int_0^t \mathbf{1}(X_s + as + y \in B(x, r))\, ds\Big] = \int_0^t \int_{B\left(\frac{x - y}{s} - a,\, \frac{r}{s}\right)} \frac{1}{\pi(1 + z^2)}\, dz\, ds \ge \int_r^t \frac{2r}{(1 + (M + 3)^2)\pi s}\, ds = c_1 r \log\frac{t}{r}.$$
The inequality follows from the observation that when $s \ge r$ and $z \in B\big(\frac{x - y}{s} - a, \frac{r}{s}\big)$, then $|z| \le M + 3$, since $|x - y| \le r$ and $a \in [-M, M]$. Hence we infer that for all $x$
$$\mathbb{E}[\tilde{Z}_x \mid Z_x > 0] \ge c_1 r \log(t/r).$$
So putting everything together we have
$$\int_{\mathbb{R} \setminus B(0,r)} \mathbb{P}(Z_x > 0)\, dx \le \frac{\int_{\mathbb{R} \setminus B(0,r)} \int_0^{2t} \int_{B(0,r)} p_s(0, x + y)\, dy\, ds\, dx}{c_1 r \log(t/r)} \le \frac{4rt}{c_1 r \log(t/r)} = \frac{c_2 t}{\log(t/r)}$$
and this together with (6.1) completes the proof of the lemma.
Claim 6.2. Let $(\mathcal{F}_t)$ be a right continuous filtration and $(X_t)$ a càdlàg adapted process taking values in $\mathbb{R}^d$, $d \ge 1$. Let $D$ be an open set in $\mathbb{R}^d$ and $F$ a subset of $[0,1]$. Then
$$\tau = \inf\{t \in F : X_t \in D\}$$
is a stopping time.
Proof. Let $F_\infty$ be a countable dense subset of $F$. Then for all $t \in [0,1]$ we deduce
$$\{\tau < t\} = \cup_{q \in F_\infty,\, q < t} \{X_q \in D\},$$
since $X$ is càdlàg and $D$ is an open set. Hence $\{\tau < t\} \in \mathcal{F}_t$. Writing
$$\{\tau \le t\} = \bigcap_n \{\tau < t + 1/n\},$$
we get that $\{\tau \le t\} \in \mathcal{F}_{t+} = \mathcal{F}_t$.
Lemma 6.3. Let $I$ be a subinterval of $[0,1]$ of length $\sqrt{\varepsilon}$. Define $Y = \int_b^d \mathrm{vol}(V_a(I, 2\varepsilon))\, da$, where $-2 < b < d < 2$. Then there exists a constant $c$ such that for all $\varepsilon > 0$ sufficiently small
$$\mathbb{E}[Y^2] \le \frac{c \varepsilon}{(\log(1/\varepsilon))^2}.$$
Proof. By Jensen's inequality we get
$$\mathbb{E}[Y^2] \le (d - b) \int_b^d \mathbb{E}\big[\mathrm{vol}(V_a(I, 2\varepsilon))^2\big]\, da. \tag{6.3}$$
We will first show that for all $\delta > 0$ and all $a \in \mathbb{R}$
$$\mathbb{E}\big[\mathrm{vol}(V_a(I, \delta))^2\big] \le 2\, \big(\mathbb{E}[\mathrm{vol}(V_a(I, \delta))]\big)^2. \tag{6.4}$$
For all $x$ define $\tau_x = \inf\{t \in I : X_t + at \in B(x, \delta)\}$. We then have
$$\mathbb{E}\big[\mathrm{vol}(V_a(I, \delta))^2\big] = \int_{\mathbb{R}} \int_{\mathbb{R}} \mathbb{P}(\tau_x < \infty,\, \tau_y < \infty)\, dx\, dy = 2 \int_{\mathbb{R}} \int_{\mathbb{R}} \mathbb{P}(\tau_x \le \tau_y < \infty)\, dx\, dy$$
$$= 2 \int_{\mathbb{R}} \mathbb{P}(\tau_x < \infty) \int_{\mathbb{R}} \mathbb{P}(\tau_x \le \tau_y < \infty \mid \tau_x < \infty)\, dy\, dx = 2 \int_{\mathbb{R}} \mathbb{P}(\tau_x < \infty)\, \mathbb{E}\big[\mathrm{vol}(V_a(I \cap [\tau_x, 1], \delta)) \,\big|\, \tau_x < \infty\big]\, dx. \tag{6.5}$$
Since $(X_u + au)$ is càdlàg and the filtration $\tilde{\mathcal{F}}$ is right continuous, it follows from Claim 6.2 that $\tau_x$ is a stopping time. By the stationarity, the independence of increments and the càdlàg property of $X$, we get that $X$ satisfies the strong Markov property (see [2, Proposition I.6]). In other words, on the event $\{\tau_x < \infty\}$, the process $(X_{\tau_x + t})_{t \ge 0}$ is càdlàg and has independent and stationary increments. Thus we deduce
$$\mathbb{E}\big[\mathrm{vol}(V_a(I \cap [\tau_x, 1], \delta)) \,\big|\, \tau_x < \infty\big] \le \mathbb{E}[\mathrm{vol}(V_a(I, \delta))],$$
and this finishes the proof of (6.4).
Applying Lemma 6.1 with $t = \sqrt{\varepsilon}$ and $r = 2\varepsilon$ gives that there exists a constant $c$ so that for all $\varepsilon$ sufficiently small and all $a \in [-2, 2]$
$$\mathbb{E}[\mathrm{vol}(V_a(I, 2\varepsilon))] \le \frac{c \sqrt{\varepsilon}}{\log(1/\varepsilon)}.$$
Therefore from (6.3) and (6.4), since the above bound is uniform over $a \in [-2, 2]$, we deduce that for all $\varepsilon$ sufficiently small
$$\mathbb{E}[Y^2] \le \frac{c' \varepsilon}{(\log(1/\varepsilon))^2}$$
and this completes the proof.
Proof of Theorem 1.6. It is easy to see that $\cup_{k=0}^{3} e^{ik\pi/4} \Lambda$ is a Kakeya set. Indeed, if we fix $t$ and vary $a$, then we see that $\Lambda$ contains unit segments in all directions from $0$ up to $45°$. It then follows that the set $\cup_{k=0}^{3} e^{ik\pi/4} \Lambda$ contains a unit line segment in all directions.
It remains to show that there is a constant $c$ so that almost surely, for all $\varepsilon$ sufficiently small,
$$\mathrm{vol}(\Lambda(\varepsilon)) \le \frac{c}{\log(1/\varepsilon)}; \tag{6.6}$$
the lower bound of the theorem then follows from Proposition 6.4, since by rotation invariance of the volume $\mathrm{vol}\big((\cup_{k=0}^{3} e^{ik\pi/4} \Lambda)(\varepsilon)\big) \le 4\, \mathrm{vol}(\Lambda(\varepsilon))$.
Note that it suffices to show the above inequality for $\varepsilon$ which goes to 0 along powers of 4. It is easy to see that for all $\varepsilon > 0$ we have
$$\Lambda(\varepsilon) \subseteq \bigcup_{-\varepsilon \le a \le 1 + \varepsilon} \{a\} \times V_a([0,1], 2\varepsilon).$$
Indeed, let $(x, y) \in \Lambda(\varepsilon)$. Then $x \in [-\varepsilon, 1 + \varepsilon]$ and $(x - b)^2 + (y - (X_t + bt))^2 < \varepsilon^2$ for some $b, t \in [0,1]$. By the triangle inequality and since $t \in [0,1]$, we get
$$|y - (X_t + xt)| \le |y - (X_t + bt)| + |(b - x)t| \le 2\varepsilon.$$
Take $\varepsilon = 2^{-2n}$. Thus, in order to show (6.6), it suffices to prove that almost surely, for all $n$ sufficiently large, we have
$$\mathrm{vol}\Big( \bigcup_{-2^{-2n} \le a \le 1 + 2^{-2n}} \{a\} \times V_a([0,1], 2^{-2n+1}) \Big) \le \frac{c}{\log(2^{2n})}. \tag{6.7}$$
For $j = 1, \ldots, 2^n$ define $I_j = [(j-1) 2^{-n}, j 2^{-n}]$. Since $V_a([0,1], 2^{-2n+1}) = \cup_{i \le 2^n} V_a(I_i, 2^{-2n+1})$ for all $a$, writing
$$Y_i = \mathrm{vol}\Big( \bigcup_{-2^{-2n} \le a \le 1 + 2^{-2n}} \{a\} \times V_a(I_i, 2^{-2n+1}) \Big) = \int_{-2^{-2n}}^{1 + 2^{-2n}} \mathrm{vol}\big(V_a(I_i, 2^{-2n+1})\big)\, da$$
we have by the subadditivity of the volume that
$$\mathrm{vol}\Big( \bigcup_{-2^{-2n} \le a \le 1 + 2^{-2n}} \{a\} \times V_a([0,1], 2^{-2n+1}) \Big) \le \sum_{i=1}^{2^n} Y_i.$$
Hence it suffices to show that almost surely, eventually in $n$,
$$\sum_{i=1}^{2^n} Y_i \le \frac{c}{\log(2^{2n})}. \tag{6.8}$$
Since $X$ has independent and stationary increments, it follows that the random variables $Y_i$ are independent and identically distributed. From Lemma 6.3 we obtain that $\mathrm{var}(Y_i) \le c\, 4^{-n} (\log(2^{2n}))^{-2}$ for all $n$ sufficiently large, and thus we conclude by independence that, eventually in $n$,
$$\mathrm{var}\Big( \sum_{i=1}^{2^n} Y_i \Big) \le \frac{c\, 2^{-n}}{(\log(2^{2n}))^2}.$$
From Chebyshev's inequality we now get
$$\mathbb{P}\Big( \Big| \sum_{i=1}^{2^n} Y_i - \mathbb{E}\Big[\sum_{i=1}^{2^n} Y_i\Big] \Big| \ge \frac{1}{\log(2^{2n})} \Big) \le \frac{c}{2^n},$$
which is summable, and hence Borel-Cantelli now gives that almost surely, for all $n$ large enough,
$$\sum_{i=1}^{2^n} Y_i \le \mathbb{E}\Big[\sum_{i=1}^{2^n} Y_i\Big] + \frac{1}{\log(2^{2n})}.$$
Using Lemma 6.1 gives that $\mathbb{E}[Y_i] \le c\, 2^{-n} (\log(2^{2n}))^{-1}$, and hence this together with the inequality above finishes the proof.
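For intuition, the bound can be probed numerically; the sketch below (ours, not part of the proof) replaces the Cauchy process by a random walk with increments $dt \cdot C$, $C$ standard Cauchy, and counts the $\varepsilon \times \varepsilon$ grid cells met by $\Lambda$, which approximates $\mathrm{vol}(\Lambda(\varepsilon))$ up to constant factors.

```python
import numpy as np

def lambda_covered_area(eps, rng=np.random.default_rng(4)):
    """Rough estimate of vol(Lambda(eps)), Lambda = {(a, X_t + a*t) : a, t in [0,1]},
    by counting occupied eps x eps cells over a grid of directions a."""
    dt = eps / 4                                    # time resolution finer than eps
    t = np.arange(0.0, 1.0, dt)
    # 1-stable scaling: X_{t+dt} - X_t has the law of dt * (standard Cauchy).
    X = np.concatenate([[0.0], np.cumsum(dt * rng.standard_cauchy(len(t) - 1))])
    cells = set()
    for j, a in enumerate(np.arange(0.0, 1.0, eps)):
        for c in np.unique(np.floor((X + a * t) / eps).astype(int)):
            cells.add((j, c))
    return len(cells) * eps * eps

# Expect lambda_covered_area(eps) * np.log(1 / eps) to remain bounded as eps -> 0.
```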
We now show that our construction is optimal in terms of boundary size up to a constant factor.
Proposition 6.4. Let $K$ be a Kakeya set, i.e. a set that contains a unit line segment in all directions. Then for all $\varepsilon$ sufficiently small
$$\mathrm{vol}(K(\varepsilon)) \ge \frac{1}{2 \log(1/\varepsilon)}.$$
Proof. Let $\varepsilon > 0$ and $n = \lfloor \varepsilon^{-1} \rfloor$. Suppose $x_i, v_i \in \mathbb{R}^2$ for $i = 1, \ldots, n$ are such that the unit line segments $\ell_i = \{x_i + t v_i : t \in [0,1]\}$, $i = 1, \ldots, n$, are contained in the set $K$ and satisfy
$$\angle(\ell_{i-1}, \ell_i) = \frac{\pi}{n} \text{ for all } i = 2, \ldots, n \quad \text{and} \quad \angle(\ell_1, \ell_n) = \frac{\pi}{n}.$$
For each $i$ take a unit vector $w_i \perp v_i$ and define the set
$$\tilde{\ell}_i(\varepsilon) = \{x_i + t v_i + s w_i : t \in [0,1],\ s \in [-\varepsilon, \varepsilon]\}$$
as in Figure 6. Then it is clear that
$$\tilde{\ell}_i(\varepsilon) \subseteq K(\varepsilon) \text{ for all } i. \tag{6.9}$$
Figure 6: Line $\ell$ and the $1 \times 2\varepsilon$ rectangle $\tilde{\ell}(\varepsilon)$.
For all $i = 1, \ldots, n$ we define a function $\Psi_i : \mathbb{R}^2 \to \{0,1\}$ via $\Psi_i(x) = \mathbf{1}(x \in \tilde{\ell}_i(\varepsilon))$, and let $\Psi = \sum_{i=1}^{n} \Psi_i$. Then from (6.9) we obtain $\{x : \Psi(x) > 0\} \subseteq K(\varepsilon)$, and hence it suffices to show that for all $\varepsilon$ sufficiently small
$$\mathrm{vol}(\{\Psi > 0\}) \ge \frac{1}{2 \log(1/\varepsilon)}. \tag{6.10}$$
By Cauchy-Schwarz we get
$$\mathrm{vol}(\{\Psi > 0\}) \ge \frac{\big( \int_{\mathbb{R}^2} \Psi(x)\, dx \big)^2}{\int_{\mathbb{R}^2} \Psi^2(x)\, dx}. \tag{6.11}$$
By the definition of the function $\Psi$ we immediately get that
$$\int_{\mathbb{R}^2} \Psi(x)\, dx = \sum_{i=1}^{n} \int_{\mathbb{R}^2} \mathbf{1}(x \in \tilde{\ell}_i(\varepsilon))\, dx = 2 \varepsilon n. \tag{6.12}$$
Since $\Psi_i^2 = \Psi_i$ for all $i$, we have
$$\int_{\mathbb{R}^2} \Psi^2(x)\, dx = \int_{\mathbb{R}^2} \Psi(x)\, dx + 2 \sum_{i=1}^{n} \sum_{k=1}^{n-i} \mathrm{vol}\big(\tilde{\ell}_i(\varepsilon) \cap \tilde{\ell}_{i+k}(\varepsilon)\big).$$
The angle between the lines $\ell_i$ and $\ell_{i+k}$ is $k\pi/n$. From Figure 7 we see that if $k\pi/n \le \pi/2$, then
$$\mathrm{vol}\big(\tilde{\ell}_i(\varepsilon) \cap \tilde{\ell}_{i+k}(\varepsilon)\big) \le \frac{4 \varepsilon^2}{\sin(k\pi/n)} \le \frac{2 \varepsilon^2 n}{k},$$
while if $k\pi/n > \pi/2$, then
$$\mathrm{vol}\big(\tilde{\ell}_i(\varepsilon) \cap \tilde{\ell}_{i+k}(\varepsilon)\big) \le \frac{4 \varepsilon^2}{\sin(\pi - k\pi/n)} \le \frac{2 \varepsilon^2 n}{n - k}.$$
Hence using these two inequalities we deduce that
$$\sum_{i=1}^{n} \sum_{k=1}^{n-i} \mathrm{vol}\big(\tilde{\ell}_i(\varepsilon) \cap \tilde{\ell}_{i+k}(\varepsilon)\big) \le \sum_{i=1}^{\lfloor n/2 \rfloor} \Bigg( \sum_{k=1}^{\lfloor n/2 \rfloor} \frac{2 \varepsilon^2 n}{k} + \sum_{k=\lfloor n/2 \rfloor + 1}^{n-i} \frac{2 \varepsilon^2 n}{n - k} \Bigg) + \sum_{i=\lfloor n/2 \rfloor + 1}^{n} \sum_{k=1}^{n-i} \frac{2 \varepsilon^2 n}{k} \le 3 \varepsilon^2 n^2 \log n + 3 \varepsilon^2 n^2.$$
Figure 7: Intersection of two infinite strips of width $2\varepsilon$ at angle $\theta$.
Thus putting all these bounds into (6.11) we obtain
$$\mathrm{vol}(\{\Psi > 0\}) \ge \Big( \frac{3}{2} \log n + \frac{3}{2} + \frac{1}{2 \varepsilon n} \Big)^{-1}.$$
Since $n = \lfloor \varepsilon^{-1} \rfloor$, we conclude that for $\varepsilon$ sufficiently small
$$\mathrm{vol}(\{\Psi > 0\}) \ge \frac{1}{2 \log(1/\varepsilon)}$$
and this finishes the proof.
Acknowledgement
We thank Abraham Neyman for useful discussions.
Yakov Babichenko’s work is supported in part by ERC grant 0307950, and by ISF grant 0397679.
Ron Peretz’s work is supported in part by grant #212/09 of the Israel Science Foundation and
by the Google Inter-university center for Electronic Markets and Auctions. Peter Winkler’s work
was supported by Microsoft Research, by a Simons Professorship at MSRI, and by NSF grant
DMS-0901475. We also thank MSRI, Berkeley, where part of this work was completed.
References
[1] Micah Adler, Harald Räcke, Naveen Sivadasan, Christian Sohler, and Berthold Vöcking. Randomized pursuit-evasion in graphs. Combin. Probab. Comput., 12(3):225–244, 2003. Combinatorics, probability and computing (Oberwolfach, 2001).
[2] Jean Bertoin. Lévy processes, volume 121 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 1996.
[3] A. S. Besicovitch. On Kakeya’s problem and a similar one. Math. Z., 27(1):312–320, 1928.
[4] A. S. Besicovitch. The Kakeya problem. Amer. Math. Monthly, 70:697–706, 1963.
[5] J. Bourgain. Besicovitch type maximal operators and applications to Fourier analysis. Geom.
Funct. Anal., 1(2):147–187, 1991.
[6] Roy O. Davies. Some remarks on the Kakeya problem. Proc. Cambridge Philos. Soc., 69:417–
421, 1971.
[7] Ben Green. Restriction and Kakeya Phenomena. Lecture notes from a course at Cambridge, https://www.dpmms.cam.ac.uk/~bjg23/rkp.html.
[8] U. Keich. On $L^p$ bounds for Kakeya maximal functions and the Minkowski dimension in $\mathbb{R}^2$. Bull. London Math. Soc., 31(2):213–221, 1999.
[9] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov chains and mixing times.
American Mathematical Society, Providence, RI, 2009. With a chapter by James G. Propp
and David B. Wilson.
[10] P. Mörters and Y. Peres. Brownian Motion. Cambridge University Press, 2010.
[11] Oskar Perron. Über einen Satz von Besicovitsch. Math. Z., 28(1):383–386, 1928.
[12] I. J. Schoenberg. On the Besicovitch-Perron solution of the Kakeya problem. In Studies in
mathematical analysis and related topics, pages 359–363. Stanford Univ. Press, Stanford, Calif.,
1962.