
Spoke Darts for Efficient High Dimensional Blue Noise Sampling

Mohamed S. Ebeida (Sandia National Laboratories), Scott A. Mitchell (Sandia National Laboratories), Muhammad A. Awad (Alexandria University), Chonhyon Park (UNC Chapel Hill), Laura P. Swiler (Sandia National Laboratories), Dinesh Manocha (UNC Chapel Hill), Li-Yi Wei (Univ. Hong Kong)

Abstract

Blue noise refers to sample distributions that are random and well-spaced, with a variety of applications in graphics, geometry, and optimization. However, prior blue noise sampling algorithms typically suffer from the curse of dimensionality, especially when striving to cover a domain maximally. This hampers their applicability for high dimensional domains.

We present a blue noise sampling method that can achieve high quality and performance across different dimensions. Our key idea is spoke-dart sampling: sampling locally from hyper-annuli centered at prior point samples, using lines, planes, or, more generally, hyperplanes. Spoke-dart sampling is more efficient at high dimensions than the state-of-the-art alternatives: global sampling and advancing-front point sampling. Spoke-dart sampling achieves good quality as measured by differential domain spectrum and spatial coverage. In particular, it probabilistically guarantees that each coverage gap is small, whereas global sampling can only guarantee that the sum of gaps is not large. We demonstrate the advantages of our method through empirical analysis and applications across dimensions 8 to 23 in Delaunay graphs, global optimization, and motion planning.

Keywords: blue noise, sampling, high dimension, Delaunay graph, global optimization, motion planning

1 Introduction

Blue noise refers to sample distributions that are random and well-spaced. These are desirable properties for sampling in computer graphics, as randomness avoids aliasing while uniformity reduces noise. Blue noise sampling has been applied to a variety of graphics applications, such as rendering [Cook 1986; Sun et al. 2013; Chen et al. 2013], imaging [Balzer et al. 2009; Fattal 2011; de Goes et al. 2012], geometry [Alliez et al. 2003; Öztireli et al. 2010], animation [Schechter and Bridson 2012], visualization [Li et al. 2010], and numerical computation [Ebeida et al. 2014b].

Both the random and well-spaced properties are agnostic with respect to the dimensionality of the underlying sample domain. Thus, both high and low dimensional applications can and should benefit from blue noise. However, most existing applications are limited to low dimensions, predominantly 2-d, in part because of the difficulty of producing good blue noise distributions in high dimensions. For example, blue noise has been deployed to construct meshes for 2-d and 3-d domains, with questionable practicality for much higher dimensions; in global optimization, point placement in higher dimensions becomes a critical issue to efficiently explore the parameter space; and in robotic planning, blue noise has been used for configuration spaces up to 6-d [Park et al. 2013] but not in higher dimensions for more complex or realistic agents and environments.

A key reason most results are low dimensional is the curse of dimensionality: prior blue noise algorithms do not scale well to high dimensions, especially when striving to fill in the domain maximally [Ebeida et al. 2011]. No prior algorithm can guarantee local saturation within tractable runtime. The algorithms closest to obtaining this goal are based on advancing fronts [Liu 1991; Bridson 2007] or k-d darts [Ebeida et al. 2014b]. Our method combines these two approaches.

Advancing-front methods generate samples locally from the distribution boundary and gradually advance towards the rest of the domain. Building the geometric boundaries explicitly [Liu 1991] through the intersections of sample disks results in a combinatorial explosion in complexity in high dimensions. Advancing-Front Point dart-throwing (AFP) [Bridson 2007] avoids building the front geometry. Each accepted sample has a disk around it that rejects future samples. For each sample, AFP does rejection sampling from an annulus around its disk, and proceeds to the next sample after a fixed number (30) of consecutive rejections. Its advantage is locality: the advancing front mitigates the effects of domain size, and ensures that the saturation properties around the current sample also apply to future and past samples. Its disadvantage is that point rejection yields a volume-fraction saturation guarantee which decreases exponentially with dimension; worse, this guarantee applies only within the annulus, and there is no known bound on the void volume outside all annuli.

k-d darts [Ebeida et al. 2014b] selects samples using hyperplanes: select a random axis-aligned hyperplane, find its uncovered subset, and select a point from this subset. A rejection occurs only when the entire hyperplane is covered. Its advantage is that hyperplanes mitigate the effects of high dimensions, because rejection is much less likely than for point samples. Its disadvantage is that it does not provide local saturation, because hyperplanes are selected globally from the entire domain.

Our method achieves guaranteed local saturation within tractable runtime. Our key idea is to combine the advantages of the two prior methods: the local saturation of AFP and the dimension mitigation of k-d darts. Specifically, our method replaces the point sampling of AFP with hyperplane sampling, especially line sampling.

We call our method spoke-dart sampling. A spoke-dart is a set of spokes passing through a prior sample point. Each spoke is a hyper-annulus, such as a 1-d line segment or a 2-d planar ring, as illustrated in Figure 2. In contrast to constructing the front explicitly, we trim each spoke by existing sample disks, and select the next sample from the remaining regions. Since each spoke has a local scope, the trimming can be performed efficiently and locally, instead of globally as in k-d darts.

The advancing-front nature also helps our method saturate the domain better than prior dimensionality-agnostic methods such as brute-force dart throwing [Cook 1986] or AFP. Moreover, global methods such as dart throwing and k-d darts only provide global saturation. If parameters are chosen so that a global method and spoke-dart sampling produce the same total gap volume, spoke-dart sampling will likely have that volume distributed in small gaps throughout the domain, whereas a global method might have the gap volume concentrated in one large component. Thus spoke-dart sampling ensures that the maximum distance from a domain point to its nearest sample, r_c, is smaller. With high probability (1 − ε), it achieves the user-desired saturation, as measured by the desired ratio β = r_c / r_f between the coverage radius r_c and conflict radius r_f of sample disks [Mitchell et al. 2012a].

arXiv:1408.1118v1 [cs.GR] 5 Aug 2014

Figure 1: Spoke-dart sampling is an effective method for high-dimensional sampling with applications in (a) Delaunay graph construction, (b) global optimization, and (c) motion planning.

Spoke-dart sampling has polynomial (instead of exponential) time complexity with respect to the sample-space dimension d and the number of samples n. The only exponential-in-d dependency is when high saturation is desired. The time is O(d (−log ε) (β − 1)^(1−d) n²); if β = 2 and a fixed epsilon (say 10^−5) is desired, then this reduces to O(d n²). We do not know of explicit formulas for the run-time needed for dart-throwing and k-d darts to achieve a given local β, but for large domains it appears that they scale much worse than spoke-dart sampling. Another key advantage of spoke-dart sampling is that it requires only linear memory, O(d n), regardless of saturation β. This is in sharp contrast to grid-based methods such as Simple MPS [Ebeida et al. 2012], where the memory is exponential, O(2^d), when the top-level grid is first refined; and careful management is needed to avoid the O((−log₂(β − 1))^d) memory of naive grid refinement. To our knowledge, we provide the first feasible method to produce probabilistically-guaranteed locally-saturated blue-noise point sets in high dimensions.

We apply spoke-dart sampling to high dimensional Delaunay graphs, global optimization, and motion planning; see Figure 1. We generate an approximate Delaunay graph in high dimensions, where the exact version is too expensive to generate. For global optimization, the well-known DIRECT algorithm [Jones et al. 1993] for Lipschitzian optimization might not scale well to high dimensions due to its hyperrectangular sample neighborhoods and deterministic patterns; our method places hyperspheres stochastically, and significantly improves convergence speed. For motion planning, a common method is RRT (Rapidly-exploring Random Tree) [Kuffner and LaValle 2000]. MPS has been used in RRT for up to six dimensions [Park et al. 2013]; our method can efficiently handle configuration spaces with more than 20 dimensions.

The contributions of this paper can be summarized as follows:

• The idea of spoke-dart sampling, which combines the advantages of state-of-the-art methods: the locality of advancing fronts and the dimension mitigation of k-d darts;

• Efficient trimming and sampling algorithms for spoke-darts in different dimensions;

• Empirical and theoretical bounds for key measures such as time and memory complexity, and β for coverage (a.k.a. saturation, maximality, or well-spacedness);

• Applications in high dimensional Delaunay graphs, global optimization, and motion planning.

2 Background

Blue noise sampling algorithms and applications have much prior art. Here we focus on those most relevant to high dimensionality, maximal sampling, non-point samples, and advancing fronts.

High dimensionality The curse of dimensionality refers to the difficulty of obtaining efficient data structures and algorithms in high dimensions. The goal is to avoid time and memory complexities that have higher than polynomial growth.

Examining nearby samples (e.g. for conflict) is a key step in all known blue noise methods. However, the number of potential neighbors grows exponentially with dimension d, related to the mathematical "kissing number." Finding them efficiently is also difficult. Neighborhood queries are an active research topic in computational geometry [Samet 2005; Arya and Mount 2005; Miller et al. 2013]. In high dimensions, it is hard to be more efficient than examining every point. The good news is that this is only linear in the output size.

Maximal sampling A maximal disk sampling is fully saturated and can receive no more samples. As a consequence, the distribution is well-spaced. Such saturation is important for many applications, as described in the extensive literature on Maximal Poisson-disk Sampling (MPS) [Cline et al. 2009; Gamito and Maddock 2009; Ebeida et al. 2011; Ebeida et al. 2012; Yan and Wonka 2013]. There are practical algorithms that achieve maximality in low dimensions, but none do so in high dimensions. The maximal methods for low dimensions use data structures (e.g. grids or trees) which do not scale well beyond six dimensions, as surveyed in Ebeida et al. [2014b]. Brute-force dart throwing [Dippé and Wold 1985; Cook 1986] scales easily to high dimensions, but does not reach maximality even in low dimensions. The notion of a "relaxed MPS" [Ebeida et al. 2014b], one that is measurably saturated but short of maximal, is attractive for higher dimensions. Our method achieves a form of relaxed MPS and is more efficient than the prior k-d darts method [Ebeida et al. 2014b].

Two-radii MPS methods [Mitchell et al. 2012b; Ebeida et al. 2014a] provide a way to adjust and measure the saturation of Poisson-disk distributions. The key idea is to define two radii around each sample: r_c for domain coverage and r_f for inter-sample distances. In particular, r_c is the maximum distance between a domain point and its nearest sample, and r_f is the minimum distance between two samples. Their ratio β = r_c / r_f quantifies saturation, and affects the randomness and well-spacedness of the distribution. We use β as a control parameter, as do some prior works. Ebeida et al. [2013] present a post-processing algorithm that can reduce the number of sample points while preserving β = 1.
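To make the two-radii measures concrete, the following is a minimal sketch (our own illustration, not the authors' code) that estimates β for a 2-d point set in the unit box. The name `saturation_beta` and the grid-probe approximation of r_c are assumptions of this sketch; computing r_c exactly would require a Voronoi computation.

```python
import math
import itertools

def saturation_beta(samples, grid_n=50):
    """Estimate beta = rc/rf for a 2-d point set in the unit box.
    rf: minimum inter-sample distance.  rc: maximum distance from a
    domain point to its nearest sample, approximated by probing a
    grid_n x grid_n lattice of domain points."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # rf is the smallest pairwise distance among the samples
    rf = min(dist(p, q) for p, q in itertools.combinations(samples, 2))
    # rc is approximated by the worst-covered probe point
    rc = 0.0
    for i in range(grid_n):
        for j in range(grid_n):
            probe = ((i + 0.5) / grid_n, (j + 0.5) / grid_n)
            rc = max(rc, min(dist(probe, p) for p in samples))
    return rc / rf
```

For instance, four samples at the unit-square corners have r_f = 1 and r_c ≈ √2/2 at the square's center, so β ≈ 0.707.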

Beyond point samples k-d darts [Ebeida et al. 2014b] is the state-of-the-art for high-dimensional relaxed Poisson-disk sampling, and uses axis-aligned hyperplanes to find regions of interest. A hyperplane is especially efficient for dealing with arrangements of spheres, because its intersection with a sphere is a simple analytic sphere in the lower dimensional space of the hyperplane. However, k-d dart hyperplanes extend throughout the entire domain, and it can be expensive to intersect them with densely sampled subregions. Here we seek to avoid global computation. Our method is essentially a local, advancing-front version of the relaxed MPS method in k-d darts [Ebeida et al. 2014b].

Sun et al. [2013] sample lines and line segments for rendering applications, including 3-d motion blur, 4-d lens blur, and 5-d temporal light fields. For determining sample positions, they rely on subroutines that do not scale well to high dimensions.

Advancing front Advancing-front methods were initially proposed for meshing [Liu 1991; Li et al. 1999; Liu et al. 2008] and later adopted for sampling in graphics [Dunbar and Humphreys 2006]. The basic idea is to draw new samples from the boundaries (fronts) of existing sample sets and gradually expand towards the rest of the domain. These methods are designed mainly for low dimensions and can run fast in 2-d and 3-d. However, they maintain geometric and combinatorial structures for the fronts which do not scale well to high dimensions. Our method is based on advancing fronts, but maintains and samples from an implicit front, to avoid this combinatorial complexity.

3 Method

Our spoke-dart sampling method builds and improves upon both advancing front and k-d darts. Similar to advancing front, our method generates new samples from the current sample-set boundary and gradually expands towards the rest of the domain. The key differences between our method and prior methods are (1) how the front is constructed and (2) how new samples are drawn. In particular, prior methods compute fronts and samples from intersections of existing sample disks, which can be quite complex, whereas our method uses spoke-darts with very simple structures.

Similar to k-d darts [Ebeida et al. 2014b], spoke-darts are sets of k-dimensional hyperplanes. However, unlike k-d dart hyperplanes, which are global and axis-aligned, spokes are local hyper-annuli and randomly oriented. A spoke-dart passes through an extant sample. These locality and randomness properties give spoke-darts computational advantages over k-d darts.

Below, we first describe the basic representation and operations for spoke-darts, followed by how we use them for blue noise sampling.

3.1 Representation

A spoke-dart is a set of randomly-oriented hyper-annuli that pass through a given sample point s. Each such annulus is a spoke with radius spanning [r, r + w], where w is its size (width) and r is its starting distance away from the sample. For blue noise we use r equal to the Poisson-disk radius, and for some later applications we use r = 0. Spokes can have various dimensions, up to the dimension of the sample domain. For example, a 2-d domain can have 1-d line-spokes and 2-d plane-spokes, as illustrated in Figure 2. Line-spokes are line segments starting from a random point on the disk D(s) of a given sample s and extending in the radial direction for a distance of w. Similarly, a plane-spoke is an annulus starting from a random great circle on D(s) and extending in the radial direction for a distance of w.

Spokes can be degenerate in the sense that they have w = 0. A degenerate line-spoke reduces to a point, and a degenerate plane-spoke reduces to a great circle, on D(s).

Spokes with three or higher dimensions can be defined analogously. In principle, spoke-dart sampling works with spokes of any dimension. Since we have used only line-spokes and plane-spokes in our current implementation, we describe only those below.

Figure 2: Spokes in a 2-d domain: (a) line-spokes; (b) plane-spokes. In (a), a line-spoke (blue) is a radial line segment of length w starting from the disk surface D(s) of a sample s. Degenerate line-spokes (yellow) have zero length and lie on the disk surface. In (b), a plane-spoke (blue) is a planar annulus. Degenerate plane-spokes (yellow) are circles on the disk. Here p and q indicate example samples drawn from degenerate and full spokes.

3.2 Operations

As in Poisson-disk sampling, spoke-darts are used to explore the gaps or voids: the uncovered space that can accept a new sample. Specifically, we trim a spoke with existing sample disks, and select a new sample from the remaining uncovered hyperplane region(s). The key to our design is that all trimming operations are efficient in high dimensions, even with many nearby sample disks. In particular, we leverage the degenerate versions of spokes to avoid wasting time while trimming their full versions.

Spoke generation A random line-spoke is generated by selecting its starting position p on the disk surface D(s) uniformly by area. A random plane-spoke is generated by selecting a great circle with uniform random orientation, i.e. by selecting two such points that the plane passes through. To generate p, we generate each of its d coordinates independently from a normal (Gaussian) distribution, then linearly scale the vector of coordinates to the disk radius [Muller 1959]. (For r = 0 spokes, p defines direction only.) In our current implementation, w is initialized by a global constant.
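This direction-sampling step can be sketched as follows (a minimal illustration of Muller's normal-deviate method; the function name `sample_on_sphere` is ours, not the paper's):

```python
import random
import math

def sample_on_sphere(center, radius):
    """Pick a point uniformly at random on the sphere of the given
    radius around `center`.  Independent Gaussian coordinates are
    isotropic, so rescaling the vector to the target radius yields a
    uniformly distributed direction [Muller 1959]."""
    d = len(center)
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    return [c + radius * x / norm for c, x in zip(center, v)]
```

The same routine serves both line-spokes (one point p on D(s)) and plane-spokes (two points defining a great circle).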

Line-spoke trimming Our line-spoke trimming method is summarized in Algorithm 1. Our implementation uses two simplifications for efficiency, to avoid the problem that trimming a full spoke directly by nearby disks can involve a lot of wasted effort. There could be many (up to the kissing number) nearby disks. If we iterate over the disks, many segments are generated early on that are completely covered later. Often there is nothing left, but it takes many operations to discover this. Our first simplification is to check whether the degenerate version of a line-spoke (a point) is covered before trimming its full version. This involves no partitioning, and the point is discarded by the first disk we find that covers it. It is possible that we might miss an uncovered segment of a line-spoke, but overall it is more efficient. The second simplification is to keep only the line segment that touches D(s), and ignore the rest of the partition. Hence each trimming operation shortens the length of the line-spoke instead of fragmenting it into pieces. This favors inserting nearer samples, but the net effect is small; see Figure 4.

Input: input line spoke ℓ₁ at sample s
Output: trimmed spoke ℓ′₁
1: ℓ′₁ ← ℓ₁
2: for each sample s′ near s do
3:   ℓ′₁ ← ℓ′₁ − D(s′)
4:   ℓ′₁ ← only the segment of ℓ′₁ touching D(s)
5:   if ℓ′₁ is empty then
6:     return empty
7: return ℓ′₁

Algorithm 1: Trimming a line spoke.
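A minimal sketch of Algorithm 1, under the simplifying assumption (ours, not the paper's) that a line-spoke is stored as a start point p on D(s), a unit radial direction u, and a length w. Each neighbor disk either kills the spoke, if it covers the end touching D(s), or shortens it to the point where the spoke first enters the disk; the function name `trim_line_spoke` is hypothetical.

```python
import math

def trim_line_spoke(p, u, w, neighbors, rf):
    """Trim the line-spoke starting at p (on D(s)) with unit direction
    u and length w by the disks of radius rf around each neighbor.
    As in the paper, only the piece still touching D(s) (the t = 0
    end) is kept, so each disk merely shortens the spoke.
    Returns (p, u, remaining_length) or None if nothing is left."""
    t_max = w
    for sp in neighbors:
        # Solve |p + t*u - sp|^2 = rf^2 for t (u is unit, so a = 1).
        diff = [pi - si for pi, si in zip(p, sp)]
        b = 2.0 * sum(di * ui for di, ui in zip(diff, u))
        c = sum(di * di for di in diff) - rf * rf
        disc = b * b - 4.0 * c
        if disc <= 0.0:
            continue  # this disk does not cut the spoke's line
        sq = math.sqrt(disc)
        t0, t1 = (-b - sq) / 2.0, (-b + sq) / 2.0
        if t1 <= 0.0 or t0 >= t_max:
            continue  # covered interval misses the remaining spoke
        if t0 <= 0.0:
            return None  # the end touching D(s) is covered: spoke dies
        t_max = min(t_max, t0)  # shorten to the disk's entry point
    return (p, u, t_max)
```

For example, a spoke of length 1 from (1, 0) in direction (1, 0) trimmed by a unit disk at (2.5, 0) is shortened to length 0.5, while a disk at (1.2, 0) covers the spoke's start and discards it entirely.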

Input: input plane spoke ℓ₂ at sample s
Output: trimmed line spoke ℓ₁ ⊂ ℓ₂
1: ℓ′₂ ← Degenerate(ℓ₂) // circle
2: p ← RandomSample(ℓ′₂)
3: while p has not traversed a full revolution of ℓ′₂ do
4:   if p is covered by a disk D(s′) then
5:     p ← LeftIntersection(ℓ′₂, D(s′))
6:   else
7:     w ← thickness of ℓ₂
8:     ℓ₁ ← LineSpoke(p, w)
9:     return TrimLineSpoke(ℓ₁) // Algorithm 1
10: return empty // ℓ′₂ was covered

Algorithm 2: Trimming a plane spoke.

Plane-spoke trimming We apply similar concepts for trimming plane-spokes, as summarized in Algorithm 2. For efficiency we start with a degenerate spoke, a circle. We search for an uncovered point on this circle as follows. Take any point p from the circle; a point we used to generate the plane is a good choice. If p is covered by a disk, we move p "left" along the circle to the intersection point of the circle and that disk. We repeat this until p is uncovered, or we have passed our starting point, in which case the circle is completely covered. For a full plane-spoke, we then throw a line-spoke passing through p and trim it as in Algorithm 1.

3.3 Sampling

With the representation and operations of spoke-darts above, our blue noise generation method works as follows. We initialize the output set with one random sample and put it into the active pool of front points. When this pool becomes empty, our algorithm terminates. We remove a random sample s from the pool and try to generate new samples s′ from random spokes ℓ through s. Accepted samples are added to the pool. We keep throwing spokes from the same sample s until m consecutive spokes fail to generate an acceptable sample. Our method is summarized in Algorithm 3.

Input: sample domain Ω
Output: output sample set S
1: s ← RandomSample(Ω)
2: S ← {s}
3: P ← {s} // active pool
4: while P not empty do
5:   s ← RandomSelectAndRemove(P)
6:   reject ← 0
7:   while reject ≤ m do
8:     ℓ ← RandomSpoke(s, r, r + w)
9:     ℓ ← Trim(ℓ) // Algorithm 1 or 2
10:    if ℓ not empty then
11:      s′ ← RandomSample(ℓ)
12:      S ← S ∪ {s′}
13:      P ← P ∪ {s′}
14:      reject ← 0
15:    else
16:      reject ← reject + 1
17: return S

Algorithm 3: Sampling with spoke-darts.
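Putting Algorithms 1 and 3 together, the following is a compact, self-contained sketch of line-spoke sampling in the unit box. It is our illustration only: the names, the brute-force neighbor scan, and the treatment of the box walls as one more trimming constraint are all assumptions not fixed by the paper.

```python
import math
import random

def spoke_dart_sample(d, rf, w, m, seed=0):
    """Sketch of Algorithm 3 with line-spokes in the unit box.
    rf = Poisson-disk radius, w = spoke width, m = consecutive
    misses before a front sample is retired."""
    rng = random.Random(seed)

    def random_direction():
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    def trim(s, u):
        """Largest t_max such that [rf, t_max] along u from s is
        uncovered and inside the box; None if the spoke is empty."""
        t_max = rf + w
        for si, ui in zip(s, u):          # box walls trim the spoke too
            if ui > 0.0:
                t_max = min(t_max, (1.0 - si) / ui)
            elif ui < 0.0:
                t_max = min(t_max, -si / ui)
        if t_max <= rf:
            return None
        for q in samples:
            if q is s:
                continue                  # spoke starts on s's own disk
            diff = [a - b for a, b in zip(s, q)]
            b2 = 2.0 * sum(di * ui for di, ui in zip(diff, u))
            c = sum(di * di for di in diff) - rf * rf
            disc = b2 * b2 - 4.0 * c
            if disc <= 0.0:
                continue                  # disk misses the spoke's line
            sq = math.sqrt(disc)
            t0, t1 = (-b2 - sq) / 2.0, (-b2 + sq) / 2.0
            if t1 <= rf or t0 >= t_max:
                continue
            if t0 <= rf:
                return None               # end touching D(s) is covered
            t_max = min(t_max, t0)        # keep only the touching piece
        return t_max

    samples = [[rng.random() for _ in range(d)]]
    pool = [samples[0]]                   # active front
    while pool:
        s = pool.pop(rng.randrange(len(pool)))
        reject = 0
        while reject <= m:                # retire s after m misses
            u = random_direction()
            t_max = trim(s, u)
            if t_max is None:
                reject += 1
                continue
            t = rf + (t_max - rf) * rng.random()
            cand = [si + t * ui for si, ui in zip(s, u)]
            samples.append(cand)
            pool.append(cand)
            reject = 0
    return samples
```

Every accepted point lies on an uncovered spoke segment, so the output is at least r_f from all prior samples by construction; a spoke fails only when the entire segment is covered, which is the dimension-mitigation advantage over AFP's point rejection.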

3.4 Implementation

Note that for each sample on the front, for each spoke-dart, we iterate over the nearby samples, at distances less than 3r. It saves time to collect a sample's neighbors once before throwing any of its spokes, if the number of neighbors happens to be less than the total number of points, N < n. Those wishing to reproduce our output may simply iterate over all the points and gather these neighbors in an array. In our implementation, we have found that using a k-d tree saves time in moderate dimensions. We maintain a k-d tree of the entire point set, collect the subtree of neighbors, and update them as we successfully add new disks. If the neighbor list is huge, N → n, as can happen when d is very large, then these trees do not save any time over an array, but they are not significantly more costly either. (None of our run-time proofs depend on these trees.)

4 Analysis

Here we compare our method against the state-of-the-art, and analyze the quality and performance of the variations of our method: {line, plane} × {full, degenerate} spokes.

To the best of our knowledge, k-d darts [Ebeida et al. 2014b] is the state-of-the-art method for high dimensional blue noise sampling with high saturation, so we compare to it. For dimensions below six, we may also compare to MPS output produced by the Simple MPS algorithm [Ebeida et al. 2012].

4.1 Performance Analysis

To consider both speed and saturation, we measure the accumulated computation time with respect to the number of generated samples across different dimensions for all candidate methods. Since the distributions generated by the different methods are not the same, especially for degenerate versus non-degenerate spokes, an equal number of points does not mean equal saturation.

k-d dart versus spoke-darts As shown in Figure 3, when the domain is relatively empty, line darts [Ebeida et al. 2014b] are better than our methods. However, when the domain is relatively full,

Figure 3: Comparison between k-d dart and the variations of our method in different dimensions: (a) d = 4, r_f = 0.05; (b) d = 6, r_f = 0.15; (c) d = 8, r_f = 0.28; (d) d = 10, r_f = 0.4. Each panel plots CPU time (seconds, log scale) against the number of inserted points for line darts, line-spokes, plane-spokes, degenerate line-spokes, degenerate plane-spokes, and (for d = 4) Simple MPS. Each result is sampled from a d-dimensional unit box domain with Poisson-disk spacing r_f. The goal is to fill in the domain with as many samples as possible under the same amount of computation time. Note that towards the end game with higher fill rates, our methods consistently outperform k-d dart, with plane-spokes outperforming line-spokes, and the degenerate versions outperforming the full versions.

our methods are better. It appears that after a critical level of saturation, line darts nearly stall, while spoke-darts continue to add points. These thresholds depend on the dimension and radius. Figure 3 shows that if one desires a highly saturated (low β) distribution, our method is orders of magnitude faster. These empirical results are consistent with our earlier theoretical observations about the global versus local nature of k-d darts and spoke-dart sampling.

Variations among spoke-darts Figure 3 also points towards the general trends that degenerate spokes are more efficient than full spokes, and plane-spokes are more efficient than line-spokes, at producing points.

4.2 Quality Analysis

We measure quality via coverage β [Mitchell et al. 2012b; Ebeida et al. 2014a] and the inter-point distance distribution.

Local saturation Figure 4b shows the β guaranteed by the theory (β_guaranteed) and achieved in practice (β_achieved) for different values of m. We see that line-spokes typically achieve a nearly-maximal (β ≈ 1) distribution, and can do so using many fewer spokes than required in theory. Using Δ_d = (β_guaranteed − 1)/(β_achieved − 1) for dimension d, we see Δ₄ ≈ 8. Figure 4a shows n by m.

1" 100" 10000"

Number'of'points

Number'of'successive'misses'(m)

d'='4'

Line'Spokes'

(a) Total inserted points by m.

1"

2"

4"

8"

0" 200" 400" 600" 800" 1000"

β"

m"

(βguaranteed-1)"/"(βachieved-1)"

d"="4"

ϵ"="1E-5"

βguaranteed"

βachieved"

(b) βfor different m

Figure 4: Local saturation (coverage) for line-spokes in theory and

practice, for the number of spoke misses m. Here βguaranteed is

the probabilistically-guaranteed saturation upper-bound in theory, and

βachieved is the βobserved in experiments. In practice βis about 8×

closer to 1 than the theory guarantee. For example, for m= 60 we

have βachieved ≈1.08, almost maximality, whereas βguaranteed ≈1.6.Since

rf≈rin practice, it is rcthat is determining β=rc/rf.

Figure 5: Radial profiles, from differential domain distributions [Wei and Wang 2011], for d = 4. (a) Different spoke types (line darts, line-spokes, plane-spokes, degenerate line-spokes, degenerate plane-spokes, and Simple MPS); spokes all use the same m = 1000. Line-spokes, line darts (k-d darts with lines), and Simple MPS produce nearly identical peaks, but line darts is farthest from saturation. (b) Line-spokes with different m, from 10 to 100000. The peak becomes more pronounced as saturation is approached. All distributions are quite flat for larger distances.

Distribution We analyze sample distributions via Differential Domain Analysis (DDA) [Wei and Wang 2011], which essentially computes histograms of spatial distances between samples. We use DDA instead of Fourier spectral analysis [Lagae and Dutré 2008] because it is faster and easier to compute, especially in high dimensions; and the two are equivalent, differing only by the choice of Gaussian versus sinusoidal kernels [Wei and Wang 2011]. In Figure 5 we plot the 1-d radial means of the high dimensional distributions with respect to different spoke types and m. As shown in Figure 5a, all spoke types exhibit MPS-like characteristics. Degenerate spokes tend to have sharper profiles than full spokes, analogous to the sharper profiles of boundary sampling methods [Dunbar and Humphreys 2006]. Figure 5b shows that higher m produces sharper profiles, due to higher saturation as measured by β.
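A crude, box-kernel stand-in for such a radial profile can be computed by histogramming pairwise sample distances and normalizing each bin by its annulus area (our sketch only; real DDA uses Gaussian kernels and proper normalization as in Wei and Wang [2011]):

```python
import math
import itertools

def radial_distance_histogram(samples, r_max, bins):
    """Crude radial profile for a 2-d point set: histogram of pairwise
    sample distances, normalized by each annulus's area so that an
    uncorrelated (Poisson) set gives a roughly flat profile.  A blue
    noise set shows an empty zone below rf and a peak just above it."""
    counts = [0] * bins
    for p, q in itertools.combinations(samples, 2):
        dd = math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
        if dd < r_max:
            counts[int(bins * dd / r_max)] += 1
    profile = []
    for k in range(bins):
        r0, r1 = k * r_max / bins, (k + 1) * r_max / bins
        area = math.pi * (r1 * r1 - r0 * r0)   # 2-d annulus area
        profile.append(counts[k] / area)
    return profile
```

For a Poisson-disk set with radius r_f, every bin below r_f is zero (the characteristic empty zone), while a first peak appears just beyond r_f.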

4.3 Parameter Trade-offs

The more spokes we generate, the longer the run-time, but the more saturated the output. Our main control parameter is m, the number of successively-failed spokes for a given extant sample disk before removing it from the front. For line-spokes, our guarantee is that for a given m, with high probability (1 − ε), the achieved β_achieved is less than β.

How many spoke misses are enough? If the user selects the desired β, then Equation (1) says how large m must be. Conversely, the user may pick m based on a computational budget, and Equation (1) describes what the probabilistically-guaranteed β will be. Note β > 1, −ln ε > 0, and m ≥ 1.

m = ⌈(−ln ε)(β − 1)^(1−d)⌉  ⇔  β = 1 + ((−ln ε)/m)^(1/(d−1))     (1)
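Equation (1) is easy to evaluate directly (a small sketch; the function names are ours):

```python
import math

def misses_needed(beta, eps, d):
    """Equation (1): the number m of consecutive spoke misses so that,
    with probability at least 1 - eps, the achieved saturation ratio
    stays below beta in dimension d."""
    return math.ceil((-math.log(eps)) * (beta - 1.0) ** (1 - d))

def beta_guaranteed(m, eps, d):
    """Equation (1) inverted: the beta guaranteed by m misses."""
    return 1.0 + ((-math.log(eps)) / m) ** (1.0 / (d - 1))
```

For β = 2 the factor (β − 1)^(1−d) is 1, so m = ⌈−ln ε⌉ = 12 for ε = 10^−5 in every dimension, matching the value quoted below.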

One can also pick m and β and bound the probability that β was exceeded: ε < exp(−m(β − 1)^(d−1)). In Figure 6, we see that m = 12 suffices for β = 2 and ε = 10^−5 in all d. The bound on m in Equation (1) is quite practical for moderate dimensions and β.

Figure 6: Illustration of Equation (1) with ε = 10^−5: m as a function of β for d = 2, 4, 6, 8, 10, and 20, at (a) full scale and (b) zoomed in. Only 12 successive misses per sample are required to achieve β = 2, regardless of d.

We provide some proof intuition for Equation (1) here; the actual proofs are in the supplementary material, Appendix A. Let us suppose that the algorithm has terminated and there is a void: some connected part of the domain whose points are farther than r_f from all samples. This void is bounded by some sample disks, and we have thrown at least m spokes from each of these disks. Each of these spokes must have missed this void, otherwise we would have inserted a sample and reset our miss count. The chance of getting m successive misses is the chance of one miss to the m-th power, which shows m is dependent on the log of ε. The chance of a single spoke missing this void is the area the void shares with the spoke's disk, divided by the surface area of the disk. Combining this for all disks bounding the void shows that the natural log of the chance that they all missed is (at most) proportional to the area of one disk divided by the total area of the void boundary. Since the void was not hit, the area of the void is probabilistically guaranteed to be small. A void with small area has a small maximum distance from its interior to its boundary (this is r_c − r_f), in particular smaller than the radius of a ball with the same surface area as the void. Thus we get a bound on r_c. The exponential-in-(d − 1) dependence on β is precisely the dependence of the surface area of a d-ball on its radius. For β = 2, we only care about voids with at least the surface area of one of our disks, and this dependence disappears.

In practice, we achieve a much better βthan the theoretical guar-

antee, for all m; See Section 4.2 for a description. This is expected

because the proof makes several worst-case assumptions, such as

the void being shaped like a ball, and ignores chains of misses less

than m.

To bound the overall run-time, we must account for these small miss chains. We assign the cost of a small miss chain to the successful sample disk insertion following it, not to the disk generating the chain. Thus each sample accounts for the (< m − 1) misses preceding it, the spoke that created it, plus its own m final successive misses, for a total of at most 2m spokes. Thus in the entire algorithm we throw at most 2mn spokes. Each line-spoke takes time O(dN) to trim, where N < n is the number of nearby disks in the pool. Multiplying these gives time O(dmn²) = O(d(−ln ε)(β − 1)^{1−d} n²).

5 Applications

We present applications of our method in Delaunay graphs (d = 6–14, Section 5.1), global optimization (d = 6–15, Section 5.2), and motion planning (d > 20, Section 5.3). In particular, we use our spoke-dart operations (Section 3.2) for constructing approximate Delaunay graphs from given samples, while global optimization and motion planning can benefit from our placement of new samples in blue noise distributions (Section 3.3). All these applications rely on the underlying domains being sampled as maximally as possible, as measured by β.

5.1 High-dimensional Delaunay Graph

There is an increasing demand for high dimensional meshes in various fields such as uncertainty quantification [Witteveen and Iaccarino 2012] and computational topology [Gerber et al. 2010]. Many applications rely on knowing the distance- and directionally-significant neighbors of points. These applications often rely on Delaunay graphs as a core component. Many methods for constructing the exact Delaunay graph, D, suffer from the curse of dimensionality, and their effectiveness deteriorates very rapidly as the dimension increases. Some recent theoretical papers [Miller and Sheehy 2013; Miller et al. 2013] have considered approximate graphs and the problem of dimension from the standpoint of complexity analysis, although no implementations or experimental results of these algorithms are available.

In this section we apply our spoke-dart method to generate an approximate Delaunay graph D∗, which with high probability contains those edges whose dual Voronoi faces subtend a large solid angle with respect to the site vertex. We call these edges significant Delaunay edges, and the corresponding D∗ a significant Delaunay graph. The significant edges are a subset of the true Delaunay edges, and the Voronoi cell defined by the significant neighbors geometrically contains the true Voronoi cell. Many high dimensional applications, such as the classic approximate nearest neighbor problem, accept approximate Delaunay graphs. One such application is high dimensional global optimization, as shown in Section 5.2. To the best of our knowledge, we present the first practical technique to find a significant Delaunay graph in high dimensions. As a further benefit, for each edge our method produces a witness, a domain point on its true Voronoi face, which can be used to estimate the radial extent δ of the Voronoi cell. This is demonstrated in our global optimization application, Section 5.2.

Our basic idea is to throw random line-spokes to tease out the significant Delaunay edges from a set of spatial neighbors. This is a very simple method that scales well across different dimensions. It is summarized in Algorithm 4, with details as follows. We construct the graph D∗ for each vertex s in turn. We initialize its edge pool with all vertices that are close enough to possibly share a Delaunay edge with s. We next identify vertices from this pool that are actual Delaunay neighbors of s with the following probabilistic method. Using spoke-darts, we throw m line-spokes. We trim each spoke ℓ using the separating hyperplane between s and each vertex s′ in the pool. There is one pool vertex s∗ whose hyperplane trims ℓ the most. (In so-called "degenerate" cases multiple vertices trim the spoke the most and equally; then we can pick an arbitrary one for s∗.) The far end of the trimmed spoke, ω, is equidistant from s and s∗, and no other vertex is closer. Hence ω is the witness that s and s∗ share a Voronoi face (Delaunay edge), and ss∗ is added to D∗.

The reason we tend to find the significant neighbors with high probability is clear from the above algorithm description. Spokes sample the solid angle around each vertex s uniformly, so the probability that a given spoke hits a given Voronoi face is proportional to the solid angle the face subtends at s. As the number of spokes m increases, we are more likely to also find less significant neighbors, and D∗ → D. (This is analogous to our blue noise sampling algorithm in Algorithm 3, where larger m increases our chance of finding even the small voids.)

Input: vertex s, Delaunay graph D∗, neighbor candidates M, recursion flag R
Output: D∗ with s added
1:  N ← ∅ // approx. Delaunay neighbors of s
2:  δ(s) ← 0 // approx. cell radius of s
3:  for i = 1 to m do
4:    ℓ ← RandomLineSpoke(s, 0, |Ω|) // Section 3.2
5:    for each sample s′ ∈ M do
6:      π(s, s′) ← hyperplane between s and s′
7:      trim ℓ with π(s, s′)
8:      if ℓ got shorter then
9:        s∗ ← s′
10:   D∗ ← D∗ ∪ {ss∗} // set union without duplication
11:   N ← N ∪ {s∗}
12:   δ(s) ← max(δ(s), length(ℓ))
13: if R = true then
14:   // update edges of neighbors, removing some
15:   for each sample s′ ∈ N do
16:     M ← Neighbors(s′) ∪ {s}
17:     D∗ ← D∗ \ Edges(s′) // remove all edges
18:     Recurse(s′, D∗, M, false) // restore some
19: return D∗

Algorithm 4: Adding a vertex to the approximate Delaunay graph via our method. For a new vertex, R = true.
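The core trimming step can be sketched in a few lines. In this illustrative Python version (function and variable names are ours, not code from the paper), one spoke is a ray from s that each candidate's bisector hyperplane may shorten; the candidate that shortens it the most is a Delaunay neighbor of s, and the trimmed endpoint is its witness:

```python
import numpy as np

def spoke_delaunay_neighbor(s, pool, rng, extent=1e9):
    """One line-spoke step of the significant-Delaunay search (a sketch).

    Throw a ray from site s in a uniformly random direction and trim it by
    the bisector hyperplane of s and every candidate s2 in the pool.  The
    candidate whose bisector trims the ray the most is a Delaunay neighbor
    of s, and the trimmed endpoint is a witness on their shared Voronoi face.
    """
    u = rng.normal(size=len(s))
    u /= np.linalg.norm(u)             # uniform random direction on the sphere
    t_min, best = extent, None
    for s2 in pool:
        w = np.asarray(s2) - s
        denom = u @ w
        if denom <= 0.0:               # bisector does not cut this ray
            continue
        t = (w @ w) / (2.0 * denom)    # ray meets bisector of s, s2 at s + t*u
        if t < t_min:
            t_min, best = t, s2
    if best is None:                   # spoke escaped: no neighbor found
        return None, None
    return best, s + t_min * u         # (neighbor, witness)
```

By construction the witness lies on the bisector of s and the returned neighbor, and no other pool vertex is closer to it.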

[Figure 7 plots versus dimension 6–14: (a) computation time (seconds) for SpokeDarts with m = 1000, 10,000, and 25,000, and for Qhull; (b) memory (MB), with SpokeDarts under 2.5 MB versus Qhull.]

Figure 7: Comparison of speed (a) and memory (b) between Qhull and spoke-dart sampling for an approximate Delaunay graph. Qhull becomes infeasible beyond d = 10 whereas our method scales well.

[Figure 8 plots versus m (4 to 40,000), for d = 8, 9, 10: (a) % of Delaunay edges missing; (b) Qhull / SpokeDarts time ratio.]

Figure 8: Effects of m on the approximate Delaunay graph. As m increases, fewer Delaunay edges are missed (a) but run-time increases (b).

We demonstrate the efficiency of our approach against Qhull [Barber et al. 1996], a commonly-used code for convex hulls and Delaunay triangulations. As test input, we used Poisson-disk point sets over the unit-box domain in various dimensions. For each case, we used Qhull to generate the exact solution D and our method for the approximate solution D∗. As Figure 7 shows, the memory and time requirements of Qhull grow significantly as d increases. Qhull required memory that might not be practical for d ≥ 11. On the other hand, our method shows linear growth in time and memory with d. We see that our method became competitive for d ≥ 9. Figure 8 shows the effect of m on the time and the number of missed edges.

5.2 Rethinking Lipschitzian Optimization

A variety of disciplines (science, engineering, even economics) seek the "absolutely best" answer. This usually involves solving a global optimization problem, where one explores a high-dimensional design space to find the optimum of some objective function under a set of feasibility constraints. Local optimality is not enough. For simple analytical functions, some algorithms are guaranteed to find the global minimum. However, no method is guaranteed to find the global minimum for all functions, or even come close in finite time; for example, no method is guaranteed to find the minimum of a function resembling white noise. Heuristic stochastic techniques are usually the best in practice, and sometimes the only option [Horst et al. 2002]. In particular, we consider the important category of Lipschitzian optimization methods for complex but well-behaved, high-dimensional functions. We demonstrate how spoke-dart sampling can improve upon DIRECT, which for many decades has been a preferred method. We believe this opens the door to new approaches.

Lipschitzian optimization [Shubert 1972] explores the parameter space and provides convergence based on the Lipschitz constant of the objective function. Specifically, a function f is Lipschitz continuous with constant K > 0 if

|f(x_i) − f(x_j)| ≤ K |x_i − x_j|    (2)

for all x_i ≠ x_j in the feasible domain of f. One can use this condition to show that a neighborhood around a sample point cannot contain the best solution, and hence can be discarded. In particular, if K is known and the best currently-known answer is f∗, then a ball around x_i of radius |f(x_i) − f∗|/K has values above f∗. We only need to search the space outside this ball.
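This exclusion ball is easy to state in code. The sketch below uses our own naming (not code from the paper) and assumes K is known; it discards a candidate point that provably cannot beat the incumbent:

```python
import numpy as np

def can_discard(x, samples, f_values, f_star, K):
    """Lipschitz pruning (Eq. 2): inside the ball of radius
    |f(x_i) - f_star| / K around an evaluated sample x_i, the function
    cannot drop below the incumbent f_star, so such a point x is skipped."""
    for xi, fi in zip(samples, f_values):
        if np.linalg.norm(np.asarray(x) - xi) < abs(fi - f_star) / K:
            return True
    return False
```

For example, with one evaluated sample x_i = 2 where f(x_i) = 4, incumbent f∗ = 0, and K = 4, candidate points within distance 1 of x_i are discarded.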

The method of Shubert [1972] has two limitations: poor scaling to high dimensions, and reliance on a global K, whose exact value is often unknown. The DIRECT algorithm [Jones et al. 1993] generalizes [Shubert 1972] to higher dimensions and does not require knowledge of the Lipschitz constant. DIRECT partitions the domain into hyperrectangles. It refines those rectangles that could contain a better point than the currently best-known f∗. This refinement recurses until reaching the maximum number of iterations, or until the remaining possible improvement is small. In particular, DIRECT deterministically decides to refine the jth rectangle if

f(c_j) − K̃ δ_j ≤ f(c_i) − K̃ δ_i   ∀ i = 1, 2, ..., m, and    (3)

f(c_j) − K̃ δ_j ≤ f∗ − ε |f∗|.    (4)

Here c_j is the center of the rectangle, and δ_j is the distance from the rectangle's center to its (farthest) corner. Also K̃ runs over all positive real numbers, and the index i runs through all cells in the domain. The best currently-known value is f∗, and ε is a small positive number. Intuitively, DIRECT avoids the need to know K via Equation (3), in which we consider whether any K̃ could allow the cell to contain the global optimum. For small values of K̃, Equation (4) avoids selecting cells which can lead only to minor improvements. The set L of all rectangles satisfying these equations, including their K̃ values, can be computed efficiently via the convex hull [Jones et al. 1993]. Specifically, L is the lower envelope of the convex hull of points (representing rectangles) plotted in the two-dimensional domain δ by f. Since DIRECT uses rectangles, many cells have the same δ, and the data points tend to stack above one another.

[Figure 9 illustrations: (a) rectangular cells, (b) Voronoi cells, each with sample points A–G and a point p.]

Figure 9: Main weakness of DIRECT [Jones et al. 1993]. Rectangular partitions (a) can give misleading estimates of sample-neighborhood sizes. Voronoi cell partitions (b) improve these estimates, especially for high dimensions. For example, the size of the relevant neighborhood of point A in (a) is overestimated, since all its corners are actually closer to other sample points; e.g., p is closer to D, as seen in (b).
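The lower envelope itself is a standard two-dimensional lower convex hull of the (δ, f) pairs. Here is a minimal sketch via Andrew's monotone chain (our own reconstruction, not the paper's implementation):

```python
def lower_hull(points):
    """Lower convex hull of (delta, f) pairs, via Andrew's monotone chain.
    DIRECT keeps exactly the cells on this envelope: for each of them,
    some Lipschitz estimate K~ makes the cell potentially optimal."""
    pts = sorted(points)
    hull = []
    for px, py in pts:
        # Pop the last hull point while it lies on or above the segment
        # from hull[-2] to the new point (non-positive cross product).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append((px, py))
    return hull
```

Points stacked above another point with the same δ are dropped, as only the lowest value at each cell size can lie on the envelope.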

However, we question the efficacy of rectangular cells in DIRECT. As illustrated in Figure 9, they do not appear to be the best way to describe or measure local neighborhoods around sample points. Rectangles give misleading δ_i, and slow DIRECT's convergence rate, especially in high dimensions.

Input: target function f over domain Ω
Output: minimum f∗ found
1: s ← Center(Ω) // any sample point
2: S ← {s} // sample set
3: F ← {f(s)} // function values at samples
4: f∗ ← f(s) // minimum value found so far
5: {δ} ← {|Ω|} // set of cell size estimates
6: while computational budget is not exhausted do
7:   L ← LowerHull(F, {δ}, f∗)
8:   s ← RandomSelect(L)
9:   s′ ← OneSpokeSample(s, D∗(s))
10:  S ← S ∪ {s′}
11:  F ← F ∪ {f(s′)}
12:  f∗ ← min(f∗, f(s′))
13:  (D∗, {δ}) ← DelaunayAdd(D∗, s′) // Algorithm 4
14: return f∗

Algorithm 5: Lipschitzian optimization via our method.

Our method We follow the basic steps in DIRECT but improve it in two major aspects: using Voronoi regions instead of hyperrectangular cells, and placing samples via stochastic blue noise instead of deterministic cell division. In particular, to refine a cell, we first add a new sample within it via our spoke-dart sampling algorithm. We set the conflict radius to the cell's inscribed hypersphere radius, to avoid adding a sample point that is too close to a prior sample. We then divide the cell (and update its neighboring cells) via the approximate Delaunay graph as described in Section 5.1, and use the computed witnesses to estimate the δ values in Equation (3). These two steps replace the corresponding deterministic center-sample and rectangular cell division in DIRECT, respectively. See Algorithm 5 for a summary.
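The conflict radius follows directly from the Delaunay neighbors. A minimal sketch (illustrative naming; the neighbors would come from Algorithm 4):

```python
import numpy as np

def conflict_radius(s, delaunay_neighbors):
    """Inscribed-hypersphere radius of the Voronoi cell of s: half the
    distance to the nearest Delaunay neighbor, since each Voronoi facet
    lies on a bisector halfway to a neighbor.  New samples inside the
    cell are rejected if they fall within this radius of s."""
    return 0.5 * min(np.linalg.norm(np.asarray(q) - s)
                     for q in delaunay_neighbors)
```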

To our knowledge, our method is the first exact stochastic Lipschitzian optimization technique that combines the benefits of the guaranteed convergence in [Jones et al. 1993] and the high dimensional efficiency in [Spall 2005]. Computing blue noise and Voronoi regions has been intractable in high dimensions, and this is probably why this direction has not been explored before.

[Figure 10 panels, listing # samples as (DIRECT, ours): (a) iteration 0, (1, 1); (b) iteration 1, (3, 2); (c) iteration 2, (5, 3); (d) iteration 3, (7, 4); (e) (31, 31); (f) (101, 101); (g) (301, 301); (h) (601, 601); (i) (999, 999); (j) the smooth Herbie function.]

Figure 10: Comparing DIRECT (left) and our method (right) while exploring the smooth Herbie function (j). We list the number of samples used by each: (DIRECT, us).

Demonstration A didactic comparison of DIRECT and our method is illustrated in Figure 10. It uses the smooth Herbie function, a 2-d test function popular in the optimization community because it has four local optima of similar value, located in different quadrants. Notice how DIRECT partitions the space via deterministic rectangles while our method uses blue-noise Voronoi cells. Table 1 shows the superior performance of our method over a set of benchmark high-dimensional functions, where the standard measure of performance is the number of function evaluations needed, because in real problems those tend to be very expensive and dominate the overall cost.

Benchmark     Dimension   DIRECT   Our method   Speedup
Easom         6           4987     1912         2.60×
Easom         8           64405    8480         7.59×
Easom         10          816937   19081        42.81×
Bohachevsky   7           10315    2125         4.85×
Exponential   10          13481    7807         1.72×
Exponential   15          36890    10316        3.57×

Table 1: Performance comparison of DIRECT and our method, measured by the number of function evaluations needed to find the global minimum within relative error ε = (f∗ − f_min)/(f_max − f_min) = 10⁻⁴. Since our method is random, results are the averages over 100 runs.

5.3 High-dimensional Motion Planning

Motion planning algorithms are frequently used in robotics, gaming, CAD/CAM, and animation [Yamane et al. 2004; Overmars 2005; Pan et al. 2010]. The main goal is to compute a collision-free path for real or virtual robots among obstacles. Furthermore, the resulting path may need to satisfy additional constraints, including path smoothness, dynamics constraints, and plausible motion for gaming or animation. This problem has been extensively studied in many areas for more than three decades. Two main challenges are:

Speed The computation needs to be fast enough for interactive applications and dynamic environments.

Dimensionality High degrees-of-freedom (DOF) robots are very common. For example, the simplest models for humans (or humanoid robots) have tens of DOF, capable of motions like walking, sitting, bending, or picking up objects.

Some of the most popular algorithms for high-DOF robots use sample-based planning [LaValle and Kuffner 2001]. The main idea is to generate random collision-free sample points in the high-dimensional configuration space, and join nearby points using local collision-free paths. Connected paths provide a roadmap or tree for path computation or navigation. In particular, RRT (Rapidly-exploring Random Tree) [Kuffner and Lavalle 2000] incrementally builds a tree from the initial point towards the goal configuration. RRT is relatively simple to implement and widely used in many applications.

However, prior RRT methods generate samples via white noise (a.k.a. a Poisson process). These samples are not uniformly spaced in the configuration space, leading to suboptimal computation. Using Poisson-disk sampling instead can lead to more efficient exploration of the configuration space, as demonstrated in recent work by Park et al. [2013]. As summarized in Algorithm 6, the method uses a precomputed Poisson-disk sample set to guide the generation of new points that are not too close to prior points. Adaptive sampling can also be used to generate more samples in tight spaces. The performance of RRT planning can be further improved by multi-core GPUs.
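The "not too close to prior points" test is a plain Poisson-disk conflict check. A sketch with our own names, where r is the Poisson-disk radius:

```python
import numpy as np

def accept_configuration(x, tree_nodes, r):
    """Poisson-disk style conflict check: accept a candidate
    configuration x only if it is at least r away from every existing
    tree node, keeping the tree well-spaced in configuration space."""
    return all(np.linalg.norm(np.asarray(x) - q) >= r for q in tree_nodes)
```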

Due to the curse of dimensionality, the method of Park et al. [2013] has been restricted to relatively low dimensional spaces, d ≤ 6. Our method offers help here, by simply precomputing the sample set via spoke-dart sampling. We use three well-known motion planning benchmark scenarios from OMPL [Şucan et al. 2012] to evaluate the performance of the planning algorithm. These scenarios all have 6 DOF, and vary in their level of difficulty. We also compute the motion of the HRP-4 robot with 23 DOF; see Figure 1c. The total times taken by the planner are shown in Table 2. For sampling time, the only competition comes from line darts, and we have demonstrated in Figure 3 that our spoke-dart sampling is more efficient.

Input: start/goal configurations x_init and x_goal within domain Ω
Input: Poisson-disk sample set P precomputed via Algorithm 3
Output: RRT tree T
1: T.add(x_init)
2: P.add(x_goal)
3: for i = 1 to m do in parallel // multiple threads
4:   while x_goal ∉ T do
5:     y ← RandomSample(Ω)
6:     T ← Extend(T, y, P)
7: end for
8: return T

Algorithm 6: Parallel Poisson-RRT with precomputed samples.

Benchmark     DOF   RRT (1 CPU core)   GPU Poisson-RRT   Speed-up
Easy          6     0.34               0.03              12.14×
AlphaPuzzle   6     32.76              1.31              24.93×
Apartment     6     191.79             11.88             16.15×
HRP-4         23    6.17               0.32              19.28×

Table 2: Comparison of the performance of our GPU-based Poisson-RRT planning algorithm and a reference single-core CPU algorithm. We compared the planning times for different benchmarks using 100 trials.

6 Conclusions and Future Work

In summary, we have presented spoke-dart sampling as a new algorithm for generating well-spaced blue noise distributions in high dimensions. The method combines the advantages of state-of-the-art methods: the locality of advancing-front and the simplicity of k-d darts. We demonstrated the usefulness of our method for a variety of applications.

Our method has several parameters. Usually the user has no choice over the domain dimension d. If a quick run-time is desired, then select m = 12 consecutive misses. If higher saturation (β < 2) is desired, use Equation (1) to select m, but be prepared to wait in high dimensions. In any event, memory should be a minor issue. Degenerate spokes are faster than full spokes in terms of the number of points inserted; this advantage tends to disappear as the dimension increases beyond six. Plane-spokes are faster than line-spokes; since they effectively reduce the dimension by (only) one, this advantage tends to disappear as the dimension increases. Moreover, both degenerate spokes and plane-spokes may be producing more points more quickly merely because they are inserting more points at distance r_c and creating a tighter, less-random packing than maximal Poisson-disk sampling. As such, there is little to recommend degenerate spokes. Our overall recommendation is to use full line-spokes. We use line-spokes of extent twice the Poisson-disk radius, and select the next sample uniformly from the nearest segment. We would like to understand the effect of the extent and the selection criteria on the final output distribution.

We would like to analyze and improve the accuracy of generating approximate Delaunay graphs. We speculate that our approximate Delaunay graphs may supplant the use of k-nearest neighbors for computational topology and manifold learning. The benefit is that our Delaunay graph considers all directions; in contrast, for a point near a dense cluster, k-nearest neighbors can miss significant neighbors in directions opposite to the cluster.

Spoke-darts may inspire further research in global optimization. We presented the approach and demonstrated it on a small set of benchmarks. In our current implementation for motion planning we precompute all samples. We are investigating the possibility of sampling on the fly by exploiting the similarity between our method and RRT tree growth. A potential application for high dimensional blue noise sampling is rendering. Beyond sampling, we believe spoke-darts can also benefit numerical integration, as demonstrated in Ebeida et al. [2014b].

References

ALLIEZ, P., COHEN-STEINER, D., DEVILLERS, O., LÉVY, B., AND DESBRUN, M. 2003. Anisotropic polygonal remeshing. In SIGGRAPH '03, 485–493.

ARYA, S., AND MOUNT, D. M. 2005. The Handbook of Data Structures and Applications. Chapman & Hall/CRC, Boca Raton, ch. Computational Geometry: Proximity and Location, 63.1–63.22. Eds. D. Mehta and S. Sahni.

BALZER, M., SCHLOMER, T., AND DEUSSEN, O. 2009. Capacity-constrained point distributions: A variant of Lloyd's method. In SIGGRAPH '09, 86:1–8.

BARBER, C. B., DOBKIN, D. P., AND HUHDANPAA, H. 1996. The quickhull algorithm for convex hulls. ACM Transactions on Mathematical Software (TOMS) 22, 4, 469–483.

BRIDSON, R. 2007. Fast Poisson disk sampling in arbitrary dimensions. In SIGGRAPH '07: ACM SIGGRAPH 2007 Sketches & Applications.

CHEN, J., GE, X., WEI, L.-Y., WANG, B., WANG, Y., WANG, H., FEI, Y., QIAN, K.-L., YONG, J.-H., AND WANG, W. 2013. Bilateral blue noise sampling. ACM Trans. Graph. 32, 6 (Nov.), 216:1–216:11.

CLINE, D., JESCHKE, S., RAZDAN, A., WHITE, K., AND WONKA, P. 2009. Dart throwing on surfaces. In EGSR '09, 1217–1226.

COOK, R. L. 1986. Stochastic sampling in computer graphics. ACM Trans. Graph. 5, 1, 51–72.

ŞUCAN, I. A., MOLL, M., AND KAVRAKI, L. E. 2012. The Open Motion Planning Library. IEEE Robotics & Automation Magazine 19, 4, 72–82. http://ompl.kavrakilab.org.

DE GOES, F., BREEDEN, K., OSTROMOUKHOV, V., AND DESBRUN, M. 2012. Blue noise through optimal transport. In SIGGRAPH Asia '12, 171:1–171:11.

DIPPÉ, M. A. Z., AND WOLD, E. H. 1985. Antialiasing through stochastic sampling. In SIGGRAPH '85, 69–78.

DUNBAR, D., AND HUMPHREYS, G. 2006. A spatial data structure for fast Poisson-disk sample generation. In SIGGRAPH '06, 503–508.

EBEIDA, M. S., PATNEY, A., MITCHELL, S. A., DAVIDSON, A., KNUPP, P. M., AND OWENS, J. D. 2011. Efficient maximal Poisson-disk sampling. In SIGGRAPH '11, 49:1–12.

EBEIDA, M. S., MITCHELL, S. A., PATNEY, A., DAVIDSON, A. A., AND OWENS, J. D. 2012. A simple algorithm for maximal Poisson-disk sampling in high dimensions. Comp. Graph. Forum 31, 2pt4, 785–794.

EBEIDA, M. S., MAHMOUD, A. H., AWAD, M. A., MOHAMMED, M. A., MITCHELL, S. A., RAND, A., AND OWENS, J. D. 2013. Sifted disks. Comp. Graph. Forum 32, 2.

EBEIDA, M. S., AWAD, M. A., GE, X., MAHMOUD, A. H., MITCHELL, S. A., KNUPP, P. M., AND WEI, L.-Y. 2014. Improving spatial coverage while preserving blue noise of point sets. Computer-Aided Design 46 (January), 25–36.

EBEIDA, M. S., PATNEY, A., MITCHELL, S. A., DALBEY, K. R., DAVIDSON, A. A., AND OWENS, J. D. 2014. k-d darts: Sampling by k-dimensional flat searches. ACM Trans. Graph. 33, 1 (Feb.), 3:1–3:16.

FATTAL, R. 2011. Blue-noise point sampling using kernel density model. In SIGGRAPH '11, 48:1–12.

GAMITO, M. N., AND MADDOCK, S. C. 2009. Accurate multidimensional Poisson-disk sampling. ACM Trans. Graph. 29, 1, 1–19.

GERBER, S., BREMER, P., PASCUCCI, V., AND WHITAKER, R. 2010. Visual exploration of high dimensional scalar functions. IEEE Transactions on Visualization and Computer Graphics 16, 6, 1271–1280.

HORST, R., PARDALOS, P. M., AND ROMEIJN, H. E. 2002. Handbook of Global Optimization, vol. 2. Springer.

JONES, D. R., PERTTUNEN, C. D., AND STUCKMAN, B. E. 1993. Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications 79, 1, 157–181.

KUFFNER, J. J., AND LAVALLE, S. M. 2000. RRT-Connect: An efficient approach to single-query path planning. In Proc. IEEE Conf. on Robotics and Automation, 995–1001.

LAGAE, A., AND DUTRÉ, P. 2008. A comparison of methods for generating Poisson disk distributions. Computer Graphics Forum 21, 1, 114–129.

LAVALLE, S., AND KUFFNER, J. 2001. Randomized kinodynamic planning. International Journal of Robotics Research 20, 5, 378–400.

LI, X.-Y., TENG, S.-H., AND ÜNGÖR, A. 1999. Biting: Advancing front meets sphere packing. In Int. Jour. for Numerical Methods in Eng.

LI, H., WEI, L.-Y., SANDER, P., AND FU, C.-W. 2010. Anisotropic blue noise sampling. In SIGGRAPH Asia '10, 167:1–12.

LIU, J., LI, S., AND CHEN, Y. 2008. A fast and practical method to pack spheres for mesh generation. Acta Mechanica Sinica 24, 4, 439–447.

LIU, J. 1991. Automatic triangulation of N-dimensional Euclidean domains. In Proceedings of CAD/Graphics '91, 238–241.

MILLER, G. L., AND SHEEHY, D. R. 2013. A new approach to output-sensitive Voronoi diagrams and Delaunay triangulations. In SOCG: Proceedings of the 29th ACM Symposium on Computational Geometry.

MILLER, G. L., SHEEHY, D. R., AND VELINGKER, A. 2013. A fast algorithm for well-spaced points and approximate Delaunay graphs. In SOCG: Proceedings of the 29th ACM Symposium on Computational Geometry.

MITCHELL, S. A., RAND, A., EBEIDA, M. S., AND BAJAJ, C. 2012. Variable radii Poisson-disk sampling, extended version. In Proceedings of the 24th Canadian Conference on Computational Geometry, 1–9.

MITCHELL, S. A., RAND, A., EBEIDA, M. S., AND BAJAJ, C. 2012. Variable radii Poisson-disk sampling. In Proceedings of the 24th Canadian Conference on Computational Geometry, 185–190.

MULLER, M. E. 1959. A note on a method for generating points uniformly on n-dimensional spheres. Communications of the ACM 2, 4 (Apr.), 19–20.

OVERMARS, M. H. 2005. Path planning for games. In Proc. 3rd Int. Game Design and Technology Workshop, 29–33.

ÖZTIRELI, A. C., ALEXA, M., AND GROSS, M. 2010. Spectral sampling of manifolds. In SIGGRAPH Asia '10, 168:1–8.

PAN, J., ZHANG, L., LIN, M. C., AND MANOCHA, D. 2010. A hybrid approach for simulating human motion in constrained environments. Computer Animation and Virtual Worlds 21, 3-4, 137–149.

PARK, C., PAN, J., AND MANOCHA, D. 2013. Real-time optimization-based planning in dynamic environments using GPUs. In ICRA '13, 4090–4097.

SAMET, H. 2005. Foundations of Multidimensional and Metric Data Structures (The Morgan Kaufmann Series in Computer Graphics and Geometric Modeling). Morgan Kaufmann Publishers Inc.

SCHECHTER, H., AND BRIDSON, R. 2012. Ghost SPH for animating water. ACM Trans. Graph. 31, 4, 61:1–61:8.

SHUBERT, B. O. 1972. A sequential method seeking the global maximum of a function. SIAM Journal on Numerical Analysis 9, 3, 379–388.

SPALL, J. C. 2005. Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control, vol. 65. Wiley.

SUN, X., ZHOU, K., GUO, J., XIE, G., PAN, J., WANG, W., AND GUO, B. 2013. Line segment sampling with blue-noise properties. In SIGGRAPH '13.

WEI, L.-Y., AND WANG, R. 2011. Differential domain analysis for non-uniform sampling. In SIGGRAPH '11, 50:1–10.

WITTEVEEN, J. A., AND IACCARINO, G. 2012. Simplex stochastic collocation with random sampling and extrapolation for non-hypercube probability spaces. SIAM Journal on Scientific Computing 34, 2, A814–A838.

YAMANE, K., KUFFNER, J. J., AND HODGINS, J. K. 2004. Synthesizing animations of human manipulation tasks. ACM Transactions on Graphics (TOG) 23, 3, 532–539.

YAN, D.-M., AND WONKA, P. 2013. Gap processing for adaptive maximal Poisson-disk sampling. ACM Trans. Graph. 32, 5 (Oct.), 148:1–148:15.

A Bound Proofs for Section 4.3

Here we provide bounds on m, β, and ε in terms of d. We consider line-spokes only. A void is an uncovered region. It is bounded by some disks. The chance of hitting a void will depend on its surface area Area(void), the (d−1)-dimensional volume of its boundary.

A.1 Chance of missing the void from one disk

Let us quantify the chance p_1(miss) that a line-spoke from disk D_1 missed a void. See Figure 11a. Let R_1 = Area(void ∩ D_1)/Area(D_1). Since line-spokes are chosen uniformly from the surface area of the disk, p_1(hit) = R_1, and p_1(miss) = 1 − R_1. (We may multiply the hit chance by 2 if the extent of the void is small enough that it does not contain antipodal points of the disk and we use a line rather than a ray for a spoke.)

The chance of missing the void consecutively m times is then p_1^m(miss) = ∏_{j=1}^{m} (1 − R_1) = (1 − R_1)^m. Using the well-known inequality e^{−x} = exp(−x) > 1 − x, we have p_1^m(miss) < exp(−m R_1).

A.2 Chance of missing the void from all disks

The chance of missing m times consecutively from all N bounding disks is then p_all^m(miss) = ∏_{i=1}^{N} p_i^m(miss) < exp(−m ∑_{i=1}^{N} R_i) = exp(−mR), where all sample disks have the same radius and size so we can drop their subscripts and R = Area(void)/Area(D).

If we wish this miss chance to be less than ε, then it is sufficient to have exp(−mR) < ε, or mR > −ln(ε) > 0.

A.3 Bound in terms of β

Now we bound R in terms of β. Suppose there is a domain point v in the void at distance r_c from all samples. Then a ball at v of radius r_void = r_c − r_f is strictly inside the void, and Area(void) > Area(D(r_void)); see Figure 11b. Since we are in d dimensions and β = r_c / r_f,

R = Area(void)/Area(D) > r_void^{d−1} / r_f^{d−1} = (β − 1)^{d−1}.

Hence a sufficient condition is m (β − 1)^{d−1} > −ln ε, or

m = ⌈(−ln ε)(β − 1)^{1−d}⌉  ⇔  β = 1 + ((−ln ε)/m)^{1/(d−1)}.    (5)

A.4 Example m Values

Table 3 gives example m values using Equation (5).
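Equation (5) is straightforward to evaluate directly; the following sketch (our own naming) reproduces the entries of Table 3:

```python
import math

def min_misses(beta, eps, d):
    """Number of consecutive misses m sufficient for coverage quality
    beta with failure chance eps in dimension d, per Equation (5):
    m = ceil((-ln eps) * (beta - 1)^(1 - d))."""
    return math.ceil(-math.log(eps) * (beta - 1.0) ** (1 - d))
```

For instance, min_misses(1.5, 1e-5, 3) gives 47 and min_misses(2.0, 1e-5, 20) gives 12, matching the table.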

A.5 Other issues

Boundary caveats The astute reader may have noticed that we made no mention of the domain boundary. The void disk D(r_void) must be inside the domain, and Area(void) > Area(D(r_void)) is only guaranteed to hold for non-periodic domains. Here we assumed that the void was bounded by disks only, and not by the domain boundary. For bounded domains, this may be finessed by throwing spokes on or near the domain boundary to ensure it is covered.

d \ β   2     1.5       1.25      1.125
2       12    24        47        93
3       12    47        185       737
4       12    93        737       5900
5       12    185       3.0e3     4.8e4
6       12    369       1.2e4     3.8e5
7       12    737       4.8e4     3.1e6
8       12    1.5e3     1.9e5     2.5e7
9       12    3.0e3     7.6e5     2.0e8
10      12    5.9e3     3.1e6     1.6e9
20      12    6.1e6     3.2e12    1.7e18
30      12    6.2e9     3.4e18    1.8e27
40      12    6.4e12    3.5e24    2.0e36
50      12    6.5e15    3.7e30    2.1e45
100     12    7.3e30    4.7e60    3.0e90

Table 3: Values of m by d and β for ε = 10⁻⁵, as computed from Equation (5).

Order independence There is a statistical subtlety in Appendix A.2: it does not matter that the consecutive spokes from one bounding disk were not consecutive with the spokes from another disk. The misses for each of the remaining boundary pieces are independent of whether the void was hit and reduced by some spokes from a later front disk. The important thing is that no spoke ever hit the boundary of the terminal, remaining void.

[Figure 11 illustrations: (a) a void sharing boundary area with a disk; (b) a void containing an empty ball of radius r_void = r_c − r_f.]

Figure 11: Hitting a void from a neighboring disk. (a) Shared area of a void and a disk. (b) An empty ball in a void has smaller surface area than the void.