Hunter, Cauchy Rabbit, and Optimal Kakeya Sets
Yakov Babichenko1, Yuval Peres2, Ron Peretz3, Perla Sousi4, and Peter Winkler5
1The Hebrew University of Jerusalem, Israel
2Microsoft Research, Redmond, WA
3Tel Aviv University, Israel
4University of Cambridge, Cambridge, UK
5Dartmouth College, Hanover, NH
Abstract
A planar set that contains a unit segment in every direction is called a Kakeya set. We relate
these sets to a game of pursuit on a cycle Z_n. A hunter and a rabbit move on the nodes of Z_n
without seeing each other. At each step, the hunter moves to a neighbouring vertex or stays
in place, while the rabbit is free to jump to any node. Adler et al. (2003) provide strategies for
hunter and rabbit that are optimal up to constant factors and achieve probability of capture in
the first n steps of order 1/log n. We show these strategies yield a Kakeya set consisting of 4n
triangles with minimal area (up to a constant), namely Θ(1/log n). As far as we know, this is the
first non-iterative construction of a boundary-optimal Kakeya set. Considering the continuum
analog of the game yields a construction of a random Kakeya set from two independent standard
Brownian motions {B(s) : s ≥ 0} and {W(s) : s ≥ 0}. Let τ_t := min{s ≥ 0 : B(s) = t}. Then
X_t = W(τ_t) is a Cauchy process, and K := {(a, X_t + at) : a, t ∈ [0, 1]} is a Kakeya set of zero
area. The area of the ε-neighbourhood of K is as small as possible, i.e., almost surely of order
Θ(1/|log ε|).
Keywords and phrases. Pursuit games, graph games, Kakeya sets, Cauchy process.
MSC 2010 subject classifications. Primary 49N75; secondary 05C57, 60G50.
Figure 1: The hunter and rabbit construct a Kakeya set (see Section 5)
arXiv:1207.6389v1 [math.PR] 26 Jul 2012
1 Introduction
A subset S of R^2 is called a Kakeya set if for every point P on the unit circle in R^2 there is a
translation of the line segment (0, P) that is contained in S. A deterministic construction of a
Kakeya set of zero area was first given by Besicovitch [3] in 1928. Perron [11] in 1928 published a
new proof of that theorem. Schoenberg [12] constructed a Kakeya set consisting of 4n triangles of
area Θ(1/log n); his construction is explained in [4]. A similar construction was given by Keich [8],
who also proved that any Kakeya set consisting of 4n triangles cannot have area of smaller order; see [8, Theorem 2].
In the present work we construct a new class of optimal Kakeya sets using optimal strategies in a
certain game of pursuit and evasion.
Definition 1.1. Let G_n be the following two-player zero-sum game. At every time step each
player occupies a vertex of the cycle Z_n. At time 0 the hunter and rabbit choose arbitrary initial
positions. At each subsequent step the hunter may move to an adjacent vertex or stay where she
is; simultaneously the rabbit may stay where he is or move to any vertex on the cycle. Neither
player can see the other's position. The game ends at the "capture time", when the two players occupy
the same vertex at the same time. The hunter's goal is to minimize the expected capture time; the
rabbit's goal is to maximize it.
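To make the game concrete, the following minimal Python sketch (ours, not from [1] or the present paper) simulates one play of G_n; the callables hunter_step and rabbit_step are hypothetical placeholders for arbitrary randomized strategies, each seeing only its own past positions.

import random

def play_Gn(n, hunter_step, rabbit_step, max_steps=10**6):
    # Simulate G_n: both players move simultaneously and never see each other.
    # hunter_step(history) must return the previous vertex or an adjacent one;
    # rabbit_step(history) may return any vertex of Z_n.
    # Returns the capture time, or None if max_steps is exceeded.
    hunter_hist, rabbit_hist = [], []
    for t in range(max_steps):
        h = hunter_step(hunter_hist) % n
        r = rabbit_step(rabbit_hist) % n
        if hunter_hist:
            # enforce the hunter's constraint: move to a neighbour or stay put
            assert min((h - hunter_hist[-1]) % n, (hunter_hist[-1] - h) % n) <= 1
        hunter_hist.append(h)
        rabbit_hist.append(r)
        if h == r:
            return t
    return None

# Example: a lazy-random-walk hunter against a rabbit that jumps uniformly at random.
n = 64
lazy_hunter = lambda hist: (hist[-1] + random.choice((-1, 0, 1))) if hist else random.randrange(n)
uniform_rabbit = lambda hist: random.randrange(n)
print(play_Gn(n, lazy_hunter, uniform_rabbit))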
Theorem 1.2. [1] There exists a randomized strategy for the rabbit in the game G_n so that against
any strategy for the hunter, the capture time τ satisfies
E[τ] ≥ c_1 n log n,
where c_1 is a fixed positive constant.
The rabbit's strategy is based on a discretized Cauchy walk; in Section 3 we give a new proof of
this theorem that relies on symmetry properties of simple random walk in two dimensions.
The bound n log n given in Theorem 1.2 is sharp, in the following sense:
Theorem 1.3. [1] There exists a randomized strategy for the hunter in the game G_n so that against
any strategy for the rabbit, the capture time τ satisfies
E[τ] ≤ c_2 n log n,
where c_2 is a positive constant.
In Section 4 we give a self-contained proof of this theorem that will be useful in making the
connection to Kakeya sets. Combining the randomized strategies of the hunter and the rabbit of
Theorems 1.2 and 1.3 we prove the following theorem in Section 5.
Theorem 1.4. For all n there exists a Kakeya set of area at most of order 1/log n which is the
union of 4n triangles.
A central open problem regarding Kakeya sets is to understand their Hausdorff dimension and
Minkowski dimension. Davies [6] proved that every Kakeya set in R^2 has full Hausdorff and
Minkowski dimension, i.e., dimension 2. In dimension d > 2, it is an open question whether
every Kakeya set has full Hausdorff or Minkowski dimension.
The Minkowski dimension of a set K ⊆ R^n is closely related to the area of its ε-neighbourhood, denoted
by K(ε). Therefore it is natural to examine the sets K(ε). The first question that arises is the
following: what is the minimal area of K(ε)? The answer is known to be Θ(1/|log ε|).
Proposition 1.5. For every Kakeya set K ⊆ R^2 and every sufficiently small ε > 0 we have
vol(K(ε)) ≥ 1/(3|log ε|).
This proposition was proved by Keich [8] using a maximal inequality from Bourgain’s paper [5]. In
Section 6 we give an alternative elementary proof, which can also be found in Ben Green’s lecture
notes [7].
Proposition 1.5 motivates the following definition. A Kakeya set K will be called optimal if
vol(K(ε)) = O(1/|log ε|) as ε → 0. Note that every optimal Kakeya set must have zero area.
A construction of an optimal Kakeya set is known in the literature, see Keich [8], but the construction
is quite involved. Next, we describe a continuum analog of the Kakeya construction of Theorem 1.4.
This simple probabilistic construction almost surely yields an optimal Kakeya set.
Let {B(s) : s ≥ 0} and {W(s) : s ≥ 0} be two independent standard Brownian motions, and let
τ_t := min{s ≥ 0 : B(s) = t}. Then X_t = W(τ_t) is a Cauchy process, i.e., a Lévy process where the
increments X_{s+t} − X_s have the same law as tX_1, and X_1 has the Cauchy density (π(1 + x^2))^{-1}. See,
e.g., [2] or [10, Theorem 2.37].
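The subordination X_t = W(τ_t) is easy to simulate. The sketch below is our own illustration (not code from the paper), under our own discretization choices: τ_t is approximated by the first grid time at which B reaches level t.

import numpy as np

rng = np.random.default_rng(0)
ds = 1e-3                                     # mesh of the driving Brownian motions

# Simulate B on a grid until it first reaches level 1, so that tau_t is defined for t in [0, 1].
B = [0.0]
while max(B) < 1.0:
    B.extend(B[-1] + np.cumsum(rng.normal(0.0, np.sqrt(ds), 100_000)))
B = np.asarray(B)
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(ds), len(B) - 1))])

ts = np.linspace(0.0, 1.0, 1000)
running_max = np.maximum.accumulate(B)
tau_idx = np.searchsorted(running_max, ts)    # index of the (approximate) hitting time tau_t
X = W[np.minimum(tau_idx, len(W) - 1)]        # X_t = W(tau_t): an approximate Cauchy process

# Points of Lambda = {(a, X_t + a*t) : a, t in [0, 1]}; plotting them gives a picture in the
# spirit of Figure 1 (one of the four rotated copies).
a_grid = np.linspace(0.0, 1.0, 200)
pts = np.array([(a, x + a * t) for t, x in zip(ts, X) for a in a_grid])
print(pts.shape)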
Theorem 1.6. Let {X_t : t ≥ 0} be a Cauchy process and let Λ := {(a, X_t + at) : a, t ∈ [0, 1]}. Then
the union ∪_{k=0}^{3} e^{ikπ/4} Λ of four rotated copies of Λ is almost surely an optimal Kakeya set, i.e. there
exist positive constants c_1, c_2 such that as ε → 0 we have
c_1/|log ε| ≤ vol(Λ(ε)) ≤ c_2/|log ε|   a.s.
This theorem is proved in Section 6.
2 Probability of collision
In this section we define a win-lose variant of G_n called G^0_n, in which only n moves are made and
the hunter wins if she captures the rabbit.
Let
H = {(H_t)_{t=0}^{n−1} : H_t ∈ Z_n, |H_{t+1} − H_t| ≤ 1}   and   R = {(R_t)_{t=0}^{n−1} : R_t ∈ Z_n}.
Then the sets of mixed strategies Δ_h for the hunter and Δ_r for the rabbit are given by
Δ_h = {x ∈ ℝ^{|H|} : x_f ≥ 0 for all f ∈ H, Σ_{f∈H} x_f = 1}   and   Δ_r = {y ∈ ℝ^{|R|} : y_g ≥ 0 for all g ∈ R, Σ_{g∈R} y_g = 1}.
The hunter wins G^0_n if she captures the rabbit. The payoff matrix M = (m_{fg})_{f,g}, where f ∈ H and
g ∈ R, is given by
m_{fg} = 1(∃ ℓ ≤ n − 1 : f(ℓ) = g(ℓ)).
When the hunter and the rabbit use the mixed strategies x and y respectively, then
x^T M y = P_{xy}(τ < n),
where τ is the capture time.
By the minimax theorem we have
max_{x∈Δ_h} min_{y∈Δ_r} x^T M y = min_{y∈Δ_r} max_{x∈Δ_h} x^T M y = Val(G^0_n),
where Val(G^0_n) stands for the value of the game (to the hunter). Thus, there exists a randomized
strategy for the hunter so that against every strategy of the rabbit the probability that they collide
in the first n steps is at least Val(G^0_n); and there exists a randomized strategy for the rabbit, so that against
every strategy of the hunter, the probability they collide is at most Val(G^0_n).
Remark 2.1. In Sections 3 and 4 we give randomized strategies for the hunter and the rabbit that
achieve Val(G^0_n) up to multiplicative constants. In particular, in Propositions 3.2 and 4.1 we show
there are positive constants c_3 and c_4 such that
c_3/log n ≤ Val(G^0_n) ≤ c_4/log n.   (2.1)
Lemma 2.2. Let x̃ ∈ Δ_h be a randomized hunter strategy in the game G^0_n satisfying min_{y∈Δ_r} x̃^T M y = p_n. Then there exists a randomized hunter strategy in the game G_n so that against any rabbit strategy, the capture time τ satisfies
E[τ] ≤ 2n/p_n.
Let ỹ ∈ Δ_r be a randomized rabbit strategy in the game G^0_n satisfying max_{x∈Δ_h} x^T M ỹ = q_n. Then
there exists a randomized rabbit strategy in the game G_n so that against any hunter strategy, the
capture time τ satisfies
E[τ] ≥ n/q_n.
Proof. We divide time into rounds of length n. In rounds 1, 3, 5, . . . the hunter employs independent
copies of the randomized strategy x̃, and she uses the even rounds to move to the proper starting
points.
This way we get a new hunter strategy ξ so that against any rabbit strategy η in G_n,
P_{ξη}(τ < 2n) ≥ P_{x̃ η_0}(τ < n) = x̃^T M η_0 ≥ p_n,
where η_0 is the restriction of the strategy η to the first n steps. Therefore, by bounding the capture
time τ by 2n times a geometric random variable of success probability p_n, we get E[τ] ≤ 2n/p_n.
For the lower bound we look at the process in rounds of length n. In each round the rabbit employs
an independent copy of the randomized strategy ỹ. Thus the capture time stochastically dominates
n times a geometric random variable of parameter q_n, and hence E[τ] ≥ n/q_n.
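For completeness, the geometric bound used for the hunter's upper estimate can be spelled out as follows (a routine step, written here in LaTeX notation):

\[
\mathbb{E}[\tau] \;=\; \sum_{t \ge 0} \mathbb{P}(\tau > t)
\;\le\; \sum_{k \ge 1} 2n\, \mathbb{P}\bigl(\tau > 2(k-1)n\bigr)
\;\le\; 2n \sum_{k \ge 1} (1 - p_n)^{k-1} \;=\; \frac{2n}{p_n},
\]

since each pair of rounds of total length 2n produces a capture with probability at least p_n, independently of the previous rounds.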
3 The rabbit’s strategy
In this section we give the proof of Theorem 1.2. We start with a standard result for random walks
in 2 dimensions and include its proof for the sake of completeness.
Lemma 3.1. Let Z = (X, Y) be a simple random walk in Z^2 starting from 0. For every i ∈ Z
define T_i = inf{t ≥ 0 : Y_t = i}. Then for all k ∈ {−i, . . . , i} we have
P_0(X_{T_i} = k) ≥ 1/(96 i).
Proof. Notice that X_{T_i} has the same distribution for both a lazy simple random walk and a non-lazy one. So it suffices to prove the lemma in the case of a lazy simple random walk in Z^2. We
realize a lazy simple random walk in Z^2 as follows:
Figure 2: Hitting times. (a) Escaping the 2i × 2i square. (b) Escaping the 2k × 2k square.
Let V, W be two independent lazy simple random walks in Z. Let ξ_1, ξ_2, . . . be i.i.d. random
variables taking values 1 or 2 with equal likelihood. Now for all k define r(k) = Σ_{i=1}^{k} 1(ξ_i = 1)
and let
(X_k, Y_k) = (V_{r(k)}, W_{k−r(k)}).
Then it is elementary to check that Z = (X, Y) is a lazy simple random walk in Z^2.
We first show that for all k ∈ {−i, . . . , i},
P_0(X_{T_i} = 0) ≥ P_0(X_{T_i} = k).   (3.1)
Since V is independent of T_i and of r(ℓ) for all values of ℓ, we get for all k
P_0(X_{T_i} = k) = Σ_{m,ℓ} P_0(X_m = k, T_i = m, r(m) = ℓ) = Σ_{m,ℓ} P_0(V_ℓ = k) P_0(r(m) = ℓ, T_i = m).   (3.2)
It is standard (see, for example, [9, Lemma 12.2]) that for a lazy simple random walk on Z, if P^t
stands for the transition probability in t steps, then P^t(x, x) ≥ P^t(x, y) for all x and y. Applying
this to V_ℓ and using (3.2) we obtain
P_0(X_{T_i} = 0) ≥ Σ_{m,ℓ} P_0(V_ℓ = k) P_0(r(m) = ℓ, T_i = m) = P_0(X_{T_i} = k),
and this concludes the proof of (3.1).
For k ∈ Z we let
τ_k = min{t ≥ 0 : Z_t ∉ [−|k| + 1, |k| − 1]^2}.
Setting A = {Y_{τ_i} = i} we have by symmetry
P_0(X_{T_i} ∈ {−i, . . . , i}) ≥ P_0(A) = 1/4.
Hence this together with (3.1) gives that
P_0(X_{T_i} = 0) ≥ 1/(8i + 4) ≥ 1/(12i).   (3.3)
To finish the proof of the lemma it remains to show that for all (k, i) with k ∈ {−i, . . . , i} we have
P_0(X_{T_i} = k) ≥ 1/(96i).   (3.4)
For any k ∈ {−i, . . . , i} we have
P_0(X_{T_i} = k) ≥ P_0(X_{τ_k} = k, X_{T_i} = k) = Σ_{ℓ=−|k|,...,|k|} P_0(X_{τ_k} = k, Y_{τ_k} = ℓ, X_{T_i} = k)
= Σ_ℓ P_0(X_{T_i} = k | X_{τ_k} = k, Y_{τ_k} = ℓ) P_0(X_{τ_k} = k, Y_{τ_k} = ℓ).   (3.5)
Notice that by the strong Markov property, translation invariance and applying (3.3) to i − ℓ we
get
P_0(X_{T_i} = k | X_{τ_k} = k, Y_{τ_k} = ℓ) = P_{(k,ℓ)}(X_{T_{i−ℓ}} = k) = P_0(X_{T_{i−ℓ}} = 0) ≥ 1/(12(i − ℓ)) ≥ 1/(24i),
since −|k| ≤ ℓ ≤ |k| and |k| ≤ i, so that i − ℓ ≤ 2i. Therefore, plugging this into (3.5), we obtain
P_0(X_{T_i} = k) ≥ (1/(24i)) Σ_{ℓ=−|k|,...,|k|} P_0(X_{τ_k} = k, Y_{τ_k} = ℓ) = (1/(24i)) P_0(X_{τ_k} = k) ≥ 1/(96i),
since by symmetry we have P_0(X_{τ_k} = k) ≥ 1/4. This concludes the proof of the lemma.
Proposition 3.2. There exists a randomized rabbit strategy in the game G_n so that against any
hunter strategy the capture time τ satisfies
P(τ < n) ≤ c/log n,
where c is a universal constant.
Proof. It suffices to prove the upper bound for a pure strategy of the hunter, i.e. a fixed path
(H_i)_{i<n}.
Let U be uniformly distributed on {0, . . . , n − 1} and let Z = (X, Y) be an independent simple
random walk in Z^2. We define a sequence of stopping times as follows: T_0 = 0 and inductively, for
k ≥ 0,
T_{k+1} = inf{t ≥ T_k : Y_t = k + 1}.
By recurrence of the two-dimensional random walk, for all k we have T_k < ∞ a.s.
Define the position of the rabbit at time 0 to be R_0 = U and R_k = (X_{T_k} + U) mod n at time k.
Define K_n to be the total number of collisions in the first n steps, i.e. K_n = Σ_{i=0}^{n−1} 1(H_i = R_i).
Since {τ < n} = {K_n > 0}, it suffices to show that for a positive constant c,
P(K_n > 0) ≤ c/log n.   (3.6)
For the rest of the proof we extend the sequence (H_i)_{i<n} by defining H_{i+n} = H_{n−1} for all i < n. Then
we have
P(K_n > 0) ≤ E[K_{2n}] / E[K_{2n} | K_n > 0].   (3.7)
In order to prove (3.6) we will bound the numerator and denominator separately.
Since U is uniform on {0, . . . , n − 1} and X is independent of U, it follows that R_i is uniform on
{0, . . . , n − 1} for every i. Using that and the fact that the hunter and the rabbit move independently,
we deduce
E[K_{2n}] = Σ_{i=0}^{2n−1} P(R_i = H_i) = Σ_{i=0}^{2n−1} 1/n = 2.   (3.8)
For the term E[K_{2n} | K_n > 0] we have
E[K_{2n} | K_n > 0] = Σ_{k=0}^{n−1} E[ Σ_{i=k}^{2n−1} 1(R_i = H_i) | τ = k ] · P(τ = k)/P(τ < n).   (3.9)
Define R̃_i = (R_{k+i} − R_k) mod n and H̃_i = (H_{i+k} − H_k) mod n. By the definition of the process R
it follows that R̃ has the same law as the process R. By the Markov property of the process (R_i)
we get
E[ Σ_{i=k}^{2n−1} 1(R_i = H_i) | τ = k ] ≥ E[ Σ_{i=0}^{n−1} 1(R̃_i = H̃_i) ] = 1 + Σ_{i=1}^{n−1} P_0(R̃_i = H̃_i).   (3.10)
For all i ≥ 1 and all ℓ ∈ {−i, . . . , i}, since R_i = X_{T_i} mod n under P_0, using Lemma 3.1 we get
P_0(R_i = ℓ mod n) ≥ P_0(X_{T_i} = ℓ) ≥ 1/(96i).   (3.11)
For all i we have H̃_i ∈ {−i mod n, . . . , i mod n}, since H̃_0 = 0. Using (3.11), and since R̃ has the
same law as R, we get that for all i ≥ 1
P_0(R̃_i = H̃_i) ≥ 1/(96i).
The above inequality together with (3.9) and (3.10) yields
E[K_{2n} | K_n > 0] ≥ 1 + Σ_{i=1}^{n−1} 1/(96i) ≥ c_1 log n,
where c_1 is a positive constant. Thus (3.7), the above inequality and (3.8) conclude the proof
of (3.6).
Remark 3.3. We refer to the strategy used by the rabbit in the proof above as the Cauchy strategy,
because it is the discrete analogue of the hitting distribution of planar Brownian motion on a line
at distance 1 from the starting point, which is the Cauchy distribution.
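For illustration, here is a small Python sketch of the Cauchy strategy (our own simulation code, not from the paper; the driving walk is capped and restarted in the rare event of a very long excursion, since the hitting times T_k have heavy tails).

import numpy as np

def cauchy_rabbit_path(n, rng=np.random.default_rng(1), max_walk_steps=2_000_000):
    # Sample R_0, ..., R_{n-1} under the Cauchy strategy of Proposition 3.2:
    # R_k = (X_{T_k} + U) mod n, where (X, Y) is a simple random walk on Z^2
    # and T_k is the first time Y equals k.
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    while True:
        U = int(rng.integers(n))
        R = [U]                                  # R_0 = U
        x = y = 0
        level = 1
        for _ in range(max_walk_steps):
            dx, dy = steps[rng.integers(4)]
            x, y = x + dx, y + dy
            if y == level:                       # first time Y reaches `level`, i.e. T_level
                R.append((x + U) % n)
                level += 1
                if level == n:
                    return R
        # the cap was reached: restart the walk with fresh randomness

n = 64
print(cauchy_rabbit_path(n)[:10])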
Proof of Theorem 1.2. The proof of the theorem follows by combining Lemma 2.2 and Proposition 3.2.
Figure 3: Typical paths. (a) A typical hunter path under the zigzag strategy. (b) A typical rabbit path countering the zigzag strategy.
4 The hunter’s strategy
In this section we give the proof of Theorem 1.3 by constructing a randomized strategy for the
hunter. Before doing so, it is perhaps useful to consider the following natural strategy for the
hunter: at time 0 she chooses a random location and a random direction. Subsequently, at each
time t she continues in the direction she has been walking with probability (n − 2)/n, stops for
one move and then continues in the same direction with probability 1/n, and reverses direction with
probability 1/n. We call this the "zigzag" strategy. We can prove that the zigzag strategy achieves
expected capture time of order n^{3/2} against any rabbit strategy. The following counter-strategy of
the rabbit yields expected capture time of order n^{3/2} against the zigzag strategy: he starts at random,
walks for √n steps to the right in unit steps, then jumps 2√n to the left and repeats.
To achieve minimal expected capture time (up to a constant) our hunter moves not only in a
random direction but at a random rate.
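The following Monte Carlo sketch (our own illustration; the exact form of the pause and reversal moves and the √n step counts are our reading of the description above) pits the zigzag hunter against this counter-strategy and prints the empirical mean capture time next to n^{3/2}.

import numpy as np

def zigzag_vs_counter(n, rng, max_steps=None):
    # One game of the zigzag hunter against the rabbit that walks sqrt(n) steps to
    # the right and then jumps 2*sqrt(n) to the left; returns the capture time.
    if max_steps is None:
        max_steps = 50 * int(n ** 1.5)
    h, d, paused = int(rng.integers(n)), int(rng.choice((-1, 1))), False
    s = int(round(n ** 0.5))
    r, walked = int(rng.integers(n)), 0
    for t in range(max_steps):
        if h == r:
            return t
        # rabbit's move
        walked += 1
        r = (r - 2 * s) % n if walked % s == 0 else (r + 1) % n
        # hunter's move
        if paused:
            paused = False
            h = (h + d) % n                       # resume in the same direction
        else:
            u = rng.random()
            if u < (n - 2) / n:
                h = (h + d) % n                   # keep going
            elif u < (n - 1) / n:
                paused = True                     # stop for one move
            else:
                d = -d                            # reverse direction (and move)
                h = (h + d) % n
    return max_steps

rng = np.random.default_rng(2)
n = 64
times = [zigzag_vs_counter(n, rng) for _ in range(100)]
print("empirical mean capture time:", np.mean(times), "  n^{3/2} =", n ** 1.5)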
Proposition 4.1. There exists a randomized hunter strategy in the game G^0_n so that against any
rabbit strategy the capture time τ satisfies
P(τ < n) ≥ c'/log n,
where c' is a universal positive constant.
Proof. Let R_ℓ be the location of the rabbit on the cycle at time ℓ, i.e. R_ℓ ∈ {0, . . . , n − 1}. We now
describe the strategy of the hunter. Let a, b be independent random variables uniformly distributed
on [0, 1]. We define the location of the hunter at time ℓ to be H_ℓ = ⌈an + bℓ⌉ mod n.
We again let K_n denote the number of collisions before time n, i.e. K_n = Σ_{i=0}^{n−1} 1(R_i = H_i). Then
by the second moment method we have
P(K_n > 0) ≥ (E[K_n])^2 / E[K_n^2].
We now compute the first and second moments of K_n. For that, let I_ℓ denote the event that there
is a collision at time ℓ, i.e. I_ℓ = {H_ℓ = R_ℓ}. We first calculate P(I_ℓ). We have
I_ℓ = {⌈an + bℓ⌉ = R_ℓ} ∪ {⌈an + bℓ⌉ − n = R_ℓ},
which gives that
I_ℓ = {R_ℓ − 1 < an + bℓ ≤ R_ℓ} ∪ {R_ℓ + n − 1 < an + bℓ ≤ R_ℓ + n}.
Hence P(I_ℓ) = 1/n and
E[K_n] = Σ_{ℓ=0}^{n−1} P(I_ℓ) = 1.
Let j > 0; then it is easy to check that P(I_ℓ ∩ I_{ℓ+j}) ≤ c/(jn) for a positive constant c. Therefore
E[K_n^2] = E[(Σ_{ℓ=0}^{n−1} 1(I_ℓ))^2] = E[K_n] + Σ_{ℓ≠m} E[1(I_ℓ)1(I_m)] = 1 + 2 Σ_{ℓ=0}^{n−1} Σ_{j=1}^{n−ℓ−1} P(I_ℓ ∩ I_{ℓ+j})
≤ 1 + 2 Σ_{ℓ=0}^{n−1} Σ_{j=1}^{n} c/(jn) ≤ c'' log n,
for a positive constant c''. This way we get
P(τ < n) = P(K_n > 0) ≥ c'/log n
and this finishes the proof of the proposition.
Proof of Theorem 1.3. The proof of the theorem follows from Lemma 2.2 and Proposition 4.1.
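The hunter strategy of Proposition 4.1 is a one-liner to implement. The sketch below (our own check, not from the paper) estimates the capture probability against an arbitrary fixed rabbit path; for a naive rabbit the probability is much larger than the 1/log n lower bound, and it is the Cauchy rabbit of Section 3 that pushes it down to that order.

import math
import numpy as np

def capture_probability(rabbit_path, trials=20_000, rng=np.random.default_rng(3)):
    # Estimate P(collision within n steps) for the hunter H_l = ceil(a*n + b*l) mod n,
    # with a, b independent and uniform on [0, 1], against the given rabbit path.
    R = np.asarray(rabbit_path)
    n = len(R)
    ls = np.arange(n)
    hits = 0
    for _ in range(trials):
        a, b = rng.random(), rng.random()
        H = np.ceil(a * n + b * ls).astype(int) % n
        hits += bool(np.any(H == R))
    return hits / trials

n = 512
rabbit = np.random.default_rng(4).integers(0, n, size=n)   # an arbitrary (here uniform) rabbit path
print("estimate:", capture_probability(rabbit), "  lower bound ~ 1/log n =", round(1 / math.log(n), 3))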
5 The Kakeya connection
In this section we prove Theorem 1.4. We start by showing how to get a Kakeya set given a strategy
of the rabbit with probability of collision at most p_n against any strategy of the hunter.
Proposition 5.1. Given a strategy ỹ of the rabbit which ensures capture probability at most p_n
against any hunter strategy, there is a Kakeya set of area at most 8p_n which is the union of 4n
triangles.
Proof. Recall the definition of the set H of the allowed hunter paths.
First we slightly change the game and enlarge the set of allowed paths for the hunter, to include
all functions f : [0, n) → [0, n) that are 1-Lipschitz. Then we say that there is a collision in [0, n)
if for some m ≤ n − 1 there exists t ∈ [m, m + 1) such that f(t) = R_m. We first show that if f is
1-Lipschitz, then
min_{y∈Δ_r} P_{fy}(collision in [0, n)) ≤ p_n,   (5.1)
Figure 4: The triangles T_1, T_2, . . . , T_{n−1}, T_n (each of unit height and base 1/n).
where f stands for δ_f, with a slight abuse of notation.
In order to prove (5.1), for every f that is 1-Lipschitz we will construct a path h ∈ H such that for
all y ∈ Δ_r
P_{fy}(collision in [0, n)) ≤ P_{hy}(τ < n).   (5.2)
We define h(m) for every m ∈ {0, . . . , n − 1}. By the 1-Lipschitz property of f, note that the image
f([m, m + 1)) can contain at most one integer. If there exists k ∈ Z_n such that k ∈ f([m, m + 1)),
then we set h(m) = k. If there is no such integer k, then we set h(m) = ⌊f(m)⌋. The 1-Lipschitz
property then gives that h ∈ H. Since the rabbit only jumps on Z_n, the function h constructed
this way satisfies (5.2).
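A sketch of this rounding step in Python (an illustration only, not the paper's code; the continuous image f([m, m+1)) is approximated by fine sampling):

import math

def round_lipschitz_path(f, n, samples_per_step=1000):
    # Convert a 1-Lipschitz f: [0, n) -> [0, n) into a hunter path h(0), ..., h(n-1):
    # if f([m, m+1)) contains an integer k, set h(m) = k, otherwise h(m) = floor(f(m)).
    h = []
    for m in range(n):
        values = [f(m + j / samples_per_step) for j in range(samples_per_step)]
        lo, hi = min(values), max(values)
        k = math.ceil(lo)                       # the only candidate integer in the image
        h.append(k if k <= hi else math.floor(f(m)))
    return h

# Example with a hypothetical 1-Lipschitz path
print(round_lipschitz_path(lambda t: 0.6 * t + 2.3, 10))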
Applying (5.2) to the strategy ỹ of the rabbit and using the assumption gives that for all f that
are 1-Lipschitz
P_{fỹ}(collision in [0, n)) ≤ p_n.   (5.3)
Next we consider the hunter strategy in which she chooses a linear function f_{a,b}(t) = (an + tb) mod n,
where a, b are independent and uniformly distributed on the unit interval [0, 1]. Suppose that during
the time segment [m, m + 1) the rabbit is located at position z_m. Then the set of values (a, b) such
that z_m ∈ f_{a,b}([m, m + 1)) is T(m, z_m) = T_ℓ(m, z_m) ∪ T_r(m, z_m), where
T_ℓ(m, z_m) = {(a, b) : an + bm ≤ z_m < an + b(m + 1)} ∩ [0, 1]^2   and
T_r(m, z_m) = {(a, b) : an + bm − n ≤ z_m < an + b(m + 1) − n} ∩ [0, 1]^2,
as illustrated in Figure 5. If the rabbit chooses the sequence of locations (z_k)_{k=0}^{n−1}, then he will
be caught by the hunter using the strategy above with probability A(z), which is the area of
∪_m T(m, z_m). Therefore the objective of the rabbit is to minimize the area A(z). We have
A(R) = P(collision in [0, n) | (R_m)),
and hence, since from (5.3) we have P_{f_{a,b} ỹ}(collision in [0, n)) ≤ ∫_0^1 ∫_0^1 p_n da db = p_n, we deduce that
for the strategy ỹ
E[A(R)] ≤ p_n.
Now we slightly change the sets T(m, z_m), since they could consist of two disjoint triangles as
illustrated in Figure 5. So if we write T'(m, z_m) = T_ℓ(m, z_m) ∪ (T_r(m, z_m) − (1, 0)), then it is easy
to see that T'(m, z_m) is always a triangle.
Figure 5: The set T(m, z_m) = T_ℓ(m, z_m) ∪ T_r(m, z_m) inside the unit square of values (a, b).
Hence taking the union ∪_m T'(m, z_m) gives a union of n triangles with
Area(∪_m T'(m, z_m)) ≤ 2 Area(∪_m T(m, z_m)),
and hence E[Area(∪_m T'(m, z_m))] ≤ 2p_n.
The triangles T_i in Figure 4 contain unit segments in all directions that have an angle in [0, π/4]
with the vertical axis. Since the triangles T'(m, z_m) are obtained from the triangles T_i by horizontal
translation, the union ∪_m T'(m, z_m) also contains a unit segment in all these directions. Hence
taking 4 copies of this construction, suitably rotated, gives 4n triangles that form a Kakeya set with
area at most 8p_n.
Proof of Theorem 1.4. If the rabbit uses the Cauchy strategy, then by Proposition 3.2 the probability
of collision in n steps against any strategy of the hunter is at most p_n = c/log n. Now we can apply
Proposition 5.1 to get 4n triangles of area at most 8p_n = 8c/log n. For a sample of this random
construction, see Figure 1.
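To see the construction of Theorem 1.4 in action, the following sketch (our own code; constants are not tracked) samples a rabbit path with the Cauchy strategy, forms the n triangles T'(m, z_m) on the (a, b) square, and estimates the area of their union on a grid; the result should be comparable to 1/log n, while the full Kakeya set consists of four suitably rotated copies of this family.

import math
import numpy as np

rng = np.random.default_rng(5)
n = 64

# Rabbit path (z_m) from the Cauchy strategy (same recipe as the sketch in Section 3).
def cauchy_rabbit_path(n, rng, max_walk_steps=2_000_000):
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    while True:
        U, x, y, level = int(rng.integers(n)), 0, 0, 1
        R = [U]
        for _ in range(max_walk_steps):
            dx, dy = steps[rng.integers(4)]
            x, y = x + dx, y + dy
            if y == level:
                R.append((x + U) % n)
                level += 1
                if level == n:
                    return R

z = cauchy_rabbit_path(n, rng)

# Rasterize the union of the triangles T'(m, z_m) = {(a, b) in [-1,1] x [0,1] :
#   a*n + b*m <= z_m < a*n + b*(m+1)} and estimate its area.
A, B = np.meshgrid(np.linspace(-1, 1, 2001), np.linspace(0, 1, 1001), indexing="ij")
covered = np.zeros(A.shape, dtype=bool)
for m, zm in enumerate(z):
    covered |= (A * n + B * m <= zm) & (zm < A * n + B * (m + 1))
area = covered.mean() * 2.0                    # the (a, b) domain [-1,1] x [0,1] has area 2
print("area of the union of n triangles:", round(area, 3), "  1/log n =", round(1 / math.log(n), 3))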
6 Kakeya sets from the Cauchy process
Our goal in this section is to prove Theorem 1.6. We first recall some notation.
Let (X_t) be a Cauchy process, i.e., X is a stable process with values in R and the density of X_1
is given by (π(1 + x^2))^{-1}. Let F_t = σ(X_s, s ≤ t) be its natural filtration and let F̃_t = ∩_n F_{t+1/n}.
Then (F̃_t) is right continuous and X is adapted to (F̃_t).
For any set A we denote by A(ε) the ε-neighbourhood of A, i.e. A(ε) = A + B(0, ε).
Let F be a subset of [0, 1] and δ > 0. For a ∈ [0, 1] we define
V_a(F, δ) = ∪_{s∈F} B(X_s + as, δ).
Recall the definition Λ = {(a, X_t + at) : 0 ≤ a ≤ 1, 0 ≤ t ≤ 1}.
Lemma 6.1. Let M > 0 be a constant, let t > r and let I = [u, u + t] be a subinterval of [0, 1]. Then
there exists a constant c = c(M) so that for all a ∈ [−M, M]
E[vol(V_a(I, r))] ≤ ct/log(t/r) + 2r.
Proof. By translation invariance of Lebesgue measure and the stationarity of X, it suffices to
prove the lemma in the case when I = [0, t].
If τ_{B(x,r)} = inf{s ≥ 0 : X_s + as ∈ B(x, r)}, then we can write
E[vol(∪_{s≤t} B(X_s + as, r))] = ∫_R P(τ_{B(x,r)} ≤ t) dx = 2r + ∫_{R\B(0,r)} P(τ_{B(x,r)} ≤ t) dx.   (6.1)
For x ∉ B(0, r) we define Z_x = ∫_0^t 1(X_s + as ∈ B(x, r)) ds and Z̃_x = ∫_0^{2t} 1(X_s + as ∈ B(x, r)) ds. By the
càdlàg property of X we deduce that up to zero probability events we have {τ_{B(x,r)} ≤ t} = {Z_x > 0}.
So it follows that
P(τ_{B(x,r)} ≤ t) = P(Z_x > 0) ≤ E[Z̃_x] / E[Z̃_x | Z_x > 0].   (6.2)
For the numerator we have
E[Z̃_x] = ∫_0^{2t} ∫_{B(x,r)} p_s(0, y) dy ds = ∫_0^{2t} ∫_{B(0,r)} p_s(0, x + y) dy ds,
where p_s(0, y) stands for the transition density in time s of the process (X_u + au)_u. We now drop the
dependence on B(x, r) from τ_{B(x,r)} to simplify notation. For the conditional expectation appearing
in the denominator in (6.2) we have
E[Z̃_x | Z_x > 0] = E[ ∫_τ^{2t} 1(X_s + as ∈ B(x, r)) ds | τ ≤ t ]
= E[ ∫_0^{2t−τ} 1(X_{s+τ} + a(s + τ) ∈ B(x, r)) ds | τ ≤ t ]
= E[ ∫_0^{2t−τ} 1(X_{s+τ} − X_τ + as + (X_τ + aτ) ∈ B(x, r)) ds | τ ≤ t ]
≥ min_{y∈B(x,r)} E[ ∫_0^t 1(X_s + as + y ∈ B(x, r)) ds ],
where in the last step we used the strong Markov property of X, the fact that 2t − τ ≥ t on {τ ≤ t}, and that X_τ + aτ is a point y ∈ B(x, r).
We now bound from below the expectation appearing in the minimum above. Since we assumed
that r < t, we have
E[ ∫_0^t 1(X_s + as + y ∈ B(x, r)) ds ] = ∫_0^t ∫_{B(x/s − y/s − a, r/s)} 1/(π(1 + z^2)) dz ds
≥ ∫_r^t 2r/((1 + (M + 3)^2)πs) ds = c_1 r log(t/r).
The inequality follows from the observation that when s ≥ r and z ∈ B(x/s − y/s − a, r/s), then
|z| ≤ M + 3, since a ∈ [−M, M]. Hence we infer that for all x
E[Z̃_x | Z_x > 0] ≥ c_1 r log(t/r).
So putting all things together we have
∫_{R\B(0,r)} P(Z_x > 0) dx ≤ ( ∫_{R\B(0,r)} ∫_0^{2t} ∫_{B(0,r)} p_s(0, x + y) dy ds dx ) / (c_1 r log(t/r)) ≤ 4rt/(c_1 r log(t/r)) = c_2 t/log(t/r),
and this together with (6.1) completes the proof of the lemma.
Claim 6.2. Let (F_t) be a right continuous filtration and (X_t) a càdlàg adapted process taking values
in R^d, d ≥ 1. Let D be an open set in R^d and F a subset of [0, 1]. Then
τ = inf{t ∈ F : X_t ∈ D}
is a stopping time.
Proof. Let F' be a countable dense subset of F. Then for all t ∈ [0, 1] we deduce
{τ < t} = ∪_{q∈F', q<t} {X_q ∈ D},
since X is càdlàg and D is an open set. Hence {τ < t} ∈ F_t. Writing
{τ ≤ t} = ∩_n {τ < t + 1/n},
we get that {τ ≤ t} ∈ F_{t+} = F_t.
Lemma 6.3. Let I be a subinterval of [0, 1] of length √ε. Define Y = ∫_b^d vol(V_a(I, 2ε)) da, where
−2 < b < d < 2. Then there exists a constant c such that for all ε > 0 sufficiently small
E[Y^2] ≤ cε/(log(1/ε))^2.
Proof. By Jensen's inequality we get
E[Y^2] ≤ (d − b) ∫_b^d E[vol(V_a(I, 2ε))^2] da.   (6.3)
We will first show that for all δ > 0 and all a ∈ R
E[vol(V_a(I, δ))^2] ≤ 2 E[vol(V_a(I, δ))]^2.   (6.4)
For all x define τ_x = inf{t ∈ I : X_t + at ∈ B(x, δ)}. We then have
E[vol(V_a(I, δ))^2] = ∫_R ∫_R P(τ_x < ∞, τ_y < ∞) dx dy = 2 ∫_R ∫_R P(τ_x ≤ τ_y < ∞) dx dy
= 2 ∫_R P(τ_x < ∞) ∫_R P(τ_x ≤ τ_y < ∞ | τ_x < ∞) dy dx
= 2 ∫_R P(τ_x < ∞) E[vol(V_a(I ∩ [τ_x, 1], δ)) | τ_x < ∞] dx.   (6.5)
Since (X_u + au) is càdlàg and the filtration F̃ is right continuous, it follows from Claim 6.2 that τ_x
is a stopping time. By the stationarity, the independence of increments and the càdlàg property of
X, we get that X satisfies the strong Markov property (see [2, Proposition I.6]). In other words,
on the event {τ_x < ∞}, the process (X(τ_x + t))_{t≥0} is càdlàg and has independent and stationary
increments. Thus we deduce
E[vol(V_a(I ∩ [τ_x, 1], δ)) | τ_x < ∞] ≤ E[vol(V_a(I, δ))],
and this finishes the proof of (6.4).
Applying Lemma 6.1 with t = √ε and r = 2ε gives that there exists a constant c so that for all ε
sufficiently small and for all a ∈ [−2, 2]
E[vol(V_a(I, 2ε))] ≤ c√ε/log(1/ε).
Therefore from (6.3) and (6.4), since the above bound is uniform over all a ∈ [−2, 2], we deduce
that for all ε sufficiently small
E[Y^2] ≤ c'ε/(log(1/ε))^2
and this completes the proof.
Proof of Theorem 1.6. It is easy to see that ∪_{k=0}^{3} e^{ikπ/4} Λ is a Kakeya set. Indeed, if we fix t and
vary a, then we see that Λ contains a unit segment in every direction making an angle between 0 and 45
degrees with the horizontal axis. It then follows that
the set ∪_{k=0}^{3} e^{ikπ/4} Λ contains a unit line segment in all directions.
It remains to show that there is a constant c so that almost surely for all ε sufficiently small
vol(Λ(ε)) ≤ c/log(1/ε).   (6.6)
Note that it suffices to show the above inequality for ε going to 0 along powers of 1/4. It is easy
to see that for all ε > 0 we have
Λ(ε) ⊆ ∪_{−ε≤a≤1+ε} {a} × V_a([0, 1], 2ε).
Indeed, let (x, y) ∈ Λ(ε). Then x ∈ [−ε, 1 + ε] and (x − b)^2 + (y − (X_t + bt))^2 < ε^2 for some
b, t ∈ [0, 1]. By the triangle inequality and since t ∈ [0, 1], we get
|y − (X_t + xt)| ≤ |y − (X_t + bt)| + |(b − x)t| ≤ 2ε.
Take ε = 2^{−2n}. Thus in order to show (6.6), it suffices to prove that almost surely for all n
sufficiently large we have
vol( ∪_{−2^{−2n}≤a≤1+2^{−2n}} {a} × V_a([0, 1], 2^{−2n+1}) ) ≤ c/log(2^{2n}).   (6.7)
For j = 1, . . . , 2^n define I_j = [(j − 1)2^{−n}, j2^{−n}]. Since V_a([0, 1], 2^{−2n+1}) = ∪_{i≤2^n} V_a(I_i, 2^{−2n+1}) for all a,
writing
Y_i = vol( ∪_{−2^{−2n}≤a≤1+2^{−2n}} {a} × V_a(I_i, 2^{−2n+1}) ) = ∫_{−2^{−2n}}^{1+2^{−2n}} vol(V_a(I_i, 2^{−2n+1})) da,
we have by the subadditivity of the volume that
vol( ∪_{−2^{−2n}≤a≤1+2^{−2n}} {a} × V_a([0, 1], 2^{−2n+1}) ) ≤ Σ_{i=1}^{2^n} Y_i.
Hence it suffices to show that almost surely, eventually in n,
Σ_{i=1}^{2^n} Y_i ≤ c/log(2^{2n}).   (6.8)
Since X has independent and stationary increments, it follows that the random variables Y_i are independent and identically distributed. From Lemma 6.3 we obtain that var(Y_i) ≤ c4^{−n}(log(2^{2n}))^{−2}
for all n sufficiently large, and thus we conclude by independence that eventually in n
var( Σ_{i=1}^{2^n} Y_i ) ≤ c2^{−n}(log(2^{2n}))^{−2}.
From Chebyshev's inequality we now get
P( Σ_{i=1}^{2^n} Y_i ≥ E[Σ_{i=1}^{2^n} Y_i] + 1/log(2^{2n}) ) ≤ c/2^n,
which is summable, and hence Borel–Cantelli now gives that almost surely for all n large enough
Σ_{i=1}^{2^n} Y_i ≤ E[Σ_{i=1}^{2^n} Y_i] + 1/log(2^{2n}).
Using Lemma 6.1 gives that E[Y_i] ≤ c2^{−n}(log(2^{2n}))^{−1}, and hence this together with the inequality
above finishes the proof.
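As a rough numerical illustration of this upper bound (our own sketch; the Cauchy process is sampled on a grid, which slightly underestimates the true neighbourhood, and the increments are drawn directly from the Cauchy law, which agrees with the subordinated construction X_t = W(τ_t)), one can check that vol(Λ(ε))·log(1/ε) stays roughly constant as ε decreases:

import numpy as np

rng = np.random.default_rng(6)
dt = 1e-4
ts = np.arange(0.0, 1.0 + dt, dt)
# A Cauchy process on [0,1]: the increment over a time step dt is dt times a standard Cauchy variable.
X = np.concatenate([[0.0], np.cumsum(dt * rng.standard_cauchy(len(ts) - 1))])

def neighbourhood_area(X, ts, eps, a_grid=np.linspace(0.0, 1.0, 101)):
    # For each a, measure the union of the intervals [X_t + a*t - eps, X_t + a*t + eps]
    # over the sampled times t (this approximates the section V_a([0,1], eps)),
    # then average over a in [0, 1].
    lengths = []
    for a in a_grid:
        lo = np.sort(X + a * ts) - eps
        hi = lo + 2 * eps
        total, cur_lo, cur_hi = 0.0, lo[0], hi[0]
        for l, h in zip(lo[1:], hi[1:]):         # merge overlapping intervals
            if l <= cur_hi:
                cur_hi = max(cur_hi, h)
            else:
                total += cur_hi - cur_lo
                cur_lo, cur_hi = l, h
        total += cur_hi - cur_lo
        lengths.append(total)
    return float(np.mean(lengths))

for k in (4, 6, 8):
    eps = 2.0 ** (-k)
    v = neighbourhood_area(X, ts, eps)
    print(f"eps = 2^-{k}:  vol(Lambda(eps)) ~ {v:.3f},  vol * log(1/eps) ~ {v * np.log(1 / eps):.3f}")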
We now show that our construction is optimal in terms of boundary size up to a constant factor.
Proposition 6.4. Let K be a Kakeya set, i.e. a set that contains a unit line segment in all
directions. Then for all ε sufficiently small
vol(K(ε)) ≥ 1/(2 log(1/ε)).
Proof. Let ε > 0 and n = ⌊ε^{−1}⌋. Suppose x_i, v_i ∈ R^2 for i = 1, . . . , n are such that the unit
line segments ℓ_i = {x_i + tv_i : t ∈ [0, 1]} for i = 1, . . . , n are contained in the set K and satisfy
∠(ℓ_{i−1}, ℓ_i) = π/n for all i = 2, . . . , n   and   ∠(ℓ_1, ℓ_n) = π/n.
For each i take a unit vector w_i ⊥ v_i and define the set
ℓ̃_i(ε) = {x_i + tv_i + sw_i : t ∈ [0, 1], s ∈ [−ε, ε]}
as in Figure 6. Then it is clear that
ℓ̃_i(ε) ⊆ K(ε) for all i.   (6.9)
Figure 6: The line segment ℓ and the rectangle ℓ̃(ε) of dimensions 1 × 2ε.
For all i = 1, . . . , n we define a function Ψ_i : R^2 → {0, 1} via Ψ_i(x) = 1(x ∈ ℓ̃_i(ε)) and let
Ψ = Σ_{i=1}^{n} Ψ_i. Then from (6.9) we obtain {x : Ψ(x) > 0} ⊆ K(ε), and hence it suffices to show
that for all ε sufficiently small
vol({Ψ > 0}) ≥ 1/(2 log(1/ε)).   (6.10)
By Cauchy–Schwarz we get
vol({Ψ > 0}) ≥ ( ∫_{R^2} Ψ(x) dx )^2 / ∫_{R^2} Ψ^2(x) dx.   (6.11)
By the definition of the function Ψ we immediately get that
∫_{R^2} Ψ(x) dx = Σ_{i=1}^{n} ∫_{R^2} 1(x ∈ ℓ̃_i(ε)) dx = 2εn.   (6.12)
Since Ψ_i^2 = Ψ_i for all i, we have
∫_{R^2} Ψ^2(x) dx = ∫_{R^2} Ψ(x) dx + 2 Σ_{i=1}^{n} Σ_{k=1}^{n−i} vol(ℓ̃_i(ε) ∩ ℓ̃_{i+k}(ε)).
The angle between the lines ℓ_i and ℓ_{i+k} is kπ/n. From Figure 7 we see that if kπ/n ≤ π/2, then
vol(ℓ̃_i(ε) ∩ ℓ̃_{i+k}(ε)) ≤ 4ε^2/sin(kπ/n) ≤ 2ε^2 n/k,
while if kπ/n > π/2, then
vol(ℓ̃_i(ε) ∩ ℓ̃_{i+k}(ε)) ≤ 4ε^2/sin(π − kπ/n) ≤ 2ε^2 n/(n − k).
Figure 7: Intersection of two infinite strips of width 2ε meeting at angle θ.
Hence using these two inequalities we deduce that
Σ_{i=1}^{n} Σ_{k=1}^{n−i} vol(ℓ̃_i(ε) ∩ ℓ̃_{i+k}(ε)) ≤ Σ_{i=1}^{⌊n/2⌋} ( Σ_{k=1}^{⌊n/2⌋} 2ε^2 n/k + Σ_{k=⌊n/2⌋+1}^{n−i} 2ε^2 n/(n − k) ) + Σ_{i=⌊n/2⌋+1}^{n} Σ_{k=1}^{n−i} 2ε^2 n/k
≤ 3ε^2 n^2 log n + 3ε^2 n^2.
Thus putting all these bounds in (6.11) we obtain
vol({Ψ > 0}) ≥ ( (3/2) log n + 3/2 + 1/(2εn) )^{−1}.
Since n = ⌊ε^{−1}⌋, we conclude that for ε sufficiently small
vol({Ψ > 0}) ≥ 1/(2 log(1/ε))
and this finishes the proof.
Acknowledgement
We thank Abraham Neyman for useful discussions.
Yakov Babichenko’s work is supported in part by ERC grant 0307950, and by ISF grant 0397679.
Ron Peretz’s work is supported in part by grant #212/09 of the Israel Science Foundation and
by the Google Inter-university center for Electronic Markets and Auctions. Peter Winkler’s work
was supported by Microsoft Research, by a Simons Professorship at MSRI, and by NSF grant
DMS-0901475. We also thank MSRI, Berkeley, where part of this work was completed.
References
[1] Micah Adler, Harald Räcke, Naveen Sivadasan, Christian Sohler, and Berthold Vöcking. Randomized pursuit-evasion in graphs. Combin. Probab. Comput., 12(3):225–244, 2003. Combinatorics, probability and computing (Oberwolfach, 2001).
[2] Jean Bertoin. Lévy processes, volume 121 of Cambridge Tracts in Mathematics. Cambridge
University Press, Cambridge, 1996.
[3] A. S. Besicovitch. On Kakeya’s problem and a similar one. Math. Z., 27(1):312–320, 1928.
[4] A. S. Besicovitch. The Kakeya problem. Amer. Math. Monthly, 70:697–706, 1963.
[5] J. Bourgain. Besicovitch type maximal operators and applications to Fourier analysis. Geom.
Funct. Anal., 1(2):147–187, 1991.
[6] Roy O. Davies. Some remarks on the Kakeya problem. Proc. Cambridge Philos. Soc., 69:417–421, 1971.
[7] Ben Green. Restriction and Kakeya Phenomena. Lecture notes from a course at Cambridge,
https://www.dpmms.cam.ac.uk/bjg23/rkp.html.
[8] U. Keich. On L^p bounds for Kakeya maximal functions and the Minkowski dimension in R^2.
Bull. London Math. Soc., 31(2):213–221, 1999.
[9] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov chains and mixing times.
American Mathematical Society, Providence, RI, 2009. With a chapter by James G. Propp
and David B. Wilson.
[10] P. Mörters and Y. Peres. Brownian Motion. Cambridge University Press, 2010.
[11] Oskar Perron. Über einen Satz von Besicovitsch. Math. Z., 28(1):383–386, 1928.
[12] I. J. Schoenberg. On the Besicovitch-Perron solution of the Kakeya problem. In Studies in
mathematical analysis and related topics, pages 359–363. Stanford Univ. Press, Stanford, Calif.,
1962.