
Smoothed Motion Complexity⋆

Valentina Damerow1, Friedhelm Meyer auf der Heide2, Harald Räcke2,

Christian Scheideler3, and Christian Sohler2

1PaSCo Graduate School and

2Heinz Nixdorf Institute, Paderborn University, D-33102 Paderborn, Germany

{vio, fmadh, harry, csohler}@upb.de

3Dept. of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA,

scheideler@cs.jhu.edu

Abstract. We propose a new complexity measure for the movement of objects, the smoothed motion complexity. Many applications are based on algorithms dealing with moving objects, but usually data of moving objects is inherently noisy due to measurement errors. Smoothed motion complexity considers this imprecise information and uses smoothed analysis [13] to model noisy data. The input is subject to slight random perturbation and the smoothed complexity is the worst case expected complexity over all inputs w.r.t. the random noise. We think that the usually applied worst case analysis of algorithms dealing with moving objects, e.g., kinetic data structures, often does not reflect the real world behavior and that smoothed motion complexity is much better suited to estimate dynamics.

We illustrate this approach on the problem of maintaining an orthogonal bounding box of a set of n points in R^d under linear motion. We assume speed vectors and initial positions from [−1,1]^d. The motion complexity is then the number of combinatorial changes to the description of the bounding box. Under perturbation with Gaussian normal noise of deviation σ the smoothed motion complexity is only polylogarithmic: O(d · (1 + 1/σ) · log^{3/2} n) and Ω(d · √(log n)). We also consider the case when only very little information about the noise distribution is known. We assume that the density function is monotonically increasing on R_{≤0} and monotonically decreasing on R_{≥0} and bounded by some value C. Then the smoothed motion complexity is O(√(C · n log n) + log n) and Ω(d · min{(n/σ)^{1/5}, n}).

Keywords: Randomization, Kinetic Data Structures, Smoothed Analysis

1 Introduction

The task to process a set of continuously moving objects arises in a broad vari-

ety of applications, e.g., in mobile ad-hoc networks, traffic control systems, and

computer graphics (rendering moving objects). Therefore, researchers investi-

gated data structures that can be efficiently maintained under continuous motion,

e.g., to answer proximity queries [5], maintain a clustering [8], a convex hull [4],

⋆The third and the fifth author are partially supported by DFG-Sonderforschungsbereich 376,

DFG grant 872/8-1, and the Future and Emerging Technologies program of the EU under

contract number IST-1999-14186 (ALCOM-FT).


or some connectivity information of the moving point set [9]. Within the frame-

work of kinetic data structures the efficiency of such a data structure is analyzed

w.r.t. the worst case number of combinatorial changes in the description of

the maintained structure that occur during linear (or low degree algebraic) mo-

tion. These changes are called (external) events. For example, the smallest orthogonal bounding box of a point set in R^d has a unique description

at a certain point of time consisting of the 2d points that attain the minimum and

maximum value in each of the d coordinates. If any such minimum/maximum

point changes then an event occurs. We call the worst case number of events

w.r.t. the maintenance of a certain structure under linear motion the worst case

motion complexity.

We introduce an alternative measure for the dynamics of moving data called

the smoothed motion complexity. Our measure is based on smoothed analysis, a

hybrid between worst case analysis and average case analysis. Smoothed anal-

ysis has been introduced by Spielman and Teng [13] in order to explain the

typically good performance of the simplex algorithm on almost every input. It

asks for the worst case expected performance over all inputs where the expec-

tation is taken w.r.t. small random noise added to the input. In the context of

mobile data this means that both the speed value and the starting position of an

input configuration are slightly perturbed by random noise. Thus the smoothed

motion complexity is the worst case expected motion complexity over all inputs

perturbed in such a way. Smoothed motion complexity is a very natural measure

for the dynamics of mobile data since in many applications the exact position

of mobile data cannot be determined due to errors caused by physical measure-

ments or fixed precision arithmetic. This is, e.g., the case when the positions

of the moving objects are determined via GPS, sensors, and basically in any

application involving 'real life' data.

We illustrate our approach on the problem of maintaining the smallest orthogonal bounding box of a point set moving in R^d. The bounding box is a fundamental measure for the extent of a point set and it is useful in many applications,

e.g., to estimate the sample size in sublinear clustering algorithms [3], in the

construction of R-trees, for collision detection, and visibility culling.

1.1 The Problem Statement

We are given a set P of n points in R^d. The position pos_i(t) of the ith point at time t is given by a linear function of t. Thus we have pos_i(t) = s_i · t + p_i, where p_i is the initial position and s_i the speed. We normalize the speed vectors and initial positions such that p_i, s_i ∈ [−1,1]^d.

The motion complexity of the problem is the number of combinatorial changes to the set of 2d extreme points defining the bounding box. Clearly this motion


complexity is O(d · n) in the worst case, 0 in the best case, and O(d · log n) in the

average case. When we consider smoothed motion complexity we add to each

coordinate of the speed vector and each coordinate of the initial position an i.i.d.

random variable from a certain probability distribution, e.g., Gaussian normal

distribution. Then the smoothed motion complexity is the worst case expected

complexity over all choices of p_i and s_i.
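To make the 1D quantity concrete: under linear motion x_i(t) = p_i + s_i · t, the maximum coordinate at time t is the upper envelope of n lines, and each envelope breakpoint at a time t > 0 is one combinatorial change of the extreme point. The following sketch (ours, purely for illustration; it is not part of the analysis in this paper) counts these events exactly via the standard convex-hull-trick envelope construction:

```python
import random

def count_events(points):
    """Count combinatorial changes of the maximum of x_i(t) = p_i + s_i * t
    over t >= 0, for points given as (p_i, s_i) pairs.  The upper envelope
    of the corresponding lines is built slope by slope; every breakpoint
    at a positive time is one external event."""
    lines = sorted((s, p) for p, s in points)   # sort by slope, then offset

    def redundant(l1, l2, l3):
        # l2 never appears on the upper envelope of l1, l2, l3
        (s1, p1), (s2, p2), (s3, p3) = l1, l2, l3
        return (p1 - p2) * (s3 - s2) >= (p2 - p3) * (s2 - s1)

    hull = []
    for ln in lines:
        if hull and hull[-1][0] == ln[0]:       # equal slopes: keep upper line
            if hull[-1][1] >= ln[1]:
                continue
            hull.pop()
        while len(hull) >= 2 and redundant(hull[-2], hull[-1], ln):
            hull.pop()
        hull.append(ln)

    return sum(1 for (s1, p1), (s2, p2) in zip(hull, hull[1:])
               if (p1 - p2) / (s2 - s1) > 0)    # breakpoint at positive time

if __name__ == "__main__":
    n, sigma = 200, 0.05
    # lines tangent to a parabola: every point becomes extreme once
    worst = [(-(i / n) ** 2 / 2, i / n) for i in range(1, n + 1)]
    noisy = [(p + random.gauss(0, sigma), s + random.gauss(0, sigma))
             for p, s in worst]
    print(count_events(worst), count_events(noisy))
```

On the tangent construction the first count is n − 1; the perturbed copy typically produces far fewer events, which is the gap that smoothed motion complexity quantifies.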

1.2 Related Work

In [4] Basch et al. introduced kinetic data structures (KDS) which is a frame-

work for data structures for moving objects. In KDS the (near) future motion of

all objects is known and can be specified by so-called pseudo-algebraic func-

tions of time specified by linear functions or low-degree polynomials. This

specification is called a flight plan. The goal is to maintain the description of

a combinatorial structure as the objects move according to this flight plan. The

flight plan may change from time to time and these updates are reported to the

KDS. The efficiency of a KDS is analyzed by comparing the worst case num-

ber of internal (events needed to maintain auxiliary data structures) and exter-

nal events it processed against the worst case number of external events. Using

this framework many interesting kinetic data structures have been developed,

e.g., for connectivity of discs [7] and rectangles [9], convex hulls [4], proximity

problems [5], and collision detection for simple polygons [10]. In [4] the au-

thors developed a KDS to maintain a bounding box of a moving point set in Rd.

The number of events these data structures process is O(n log n), which is close

to the worst case motion complexity of Θ(n). In [1] the authors showed that it

is possible to maintain a (1 + ε)-approximation of such a bounding box. The

advantage of this approach is that the motion complexity of this approximation

is only O(1/√ε). The average case motion complexity has also been considered

in the past. If n particles are drawn independently from the unit square then it

has been shown that the expected number of combinatorial changes in the convex hull is Θ(log² n), in the Voronoi diagram Θ(n^{3/2}), and in the closest pair Θ(n) [15].

Smoothed analysis has been introduced by Spielman and Teng [13] to ex-

plain the polynomial run time of the simplex algorithm on inputs arising in

applications. They showed that the smoothed run time of the shadow-vertex

simplex algorithm is polynomial in the input size and 1/σ. In many follow-up

papers other algorithms and values have been analyzed via smoothed analysis,

e.g., the perceptron algorithm [6], condition numbers of matrices [12], quick-

sort, left-to-right maxima, and shortest paths [2]. Recently, smoothed analysis

has been used to show that many existing property testing algorithms can be

viewed as sublinear decision algorithms with low smoothed error probability


[14]. In [2] the authors analyzed the smoothed number of left-to-right maxima

of a sequence of n numbers. We will use the left-to-right maxima problem as

an auxiliary problem but we will use a perturbation scheme that fundamentally

differs from that analyzed in [2].

1.3 Our Results

Typically, measurement errors are modelled by the Gaussian normal distribu-

tion and so we analyze the smoothed complexity w.r.t. Gaussian normally dis-

tributed noise with deviation σ. We show that the smoothed motion complexity

of a bounding box under Gaussian noise is O(d · (1 + 1/σ) · log^{3/2} n) and Ω(d · √(log n)). In order to get a more general result we consider monotone probability

distributions, i.e., distributions where the density function f is bounded by some

constant C and monotonically increasing on R_{≤0} and monotonically decreasing on R_{≥0}. Then the smoothed motion complexity is O(d · (√(C · n log n) + log n)). Polynomial smoothed motion complexity is, e.g., attained by the uniform distribution, where we obtain a lower bound of Ω(d · min{(n/σ)^{1/5}, n}).

Note that in the case of speed vectors from some arbitrary range [−S,S]^d instead of [−1,1]^d, the above upper bounds hold if we replace σ by σ/S.

These results make it very unlikely that in a typical application the worst

case bound of Θ(d · n) is attained. As a consequence, it seems reasonable to

analyze KDS’s w.r.t. the smoothed motion complexity rather than the worst case

motion complexity.

Our upper bounds are obtained by analyzing a related auxiliary problem: the

smoothed number of left-to-right maxima in a sequence of n numbers. For this

problem we also obtained lower bounds, which can only be stated here: in the case of uniform noise we have Ω(√(n/σ)) and in the case of normally distributed noise we can apply the average case bound of Ω(log n). These bounds differ only by a factor of √(log n) from the corresponding upper bounds. In the second

case the bounds are even tight for constant σ. Therefore, we can conclude that

our analysis is tight w.r.t. the number of left-to-right maxima. To obtain better

results a different approach that does not use left-to-right maxima as an auxiliary

problem is necessary.

2 Upper Bounds

To show upper bounds for the number of external events while maintaining the

bounding box for a set of moving points we make the following simplifications.

We only consider the 1D problem. Since all dimensions are independent of each other, an upper or lower bound for the 1D problem can be multiplied by d

to yield a bound for the problem in d dimensions.


Further, we assume that the points are ordered by their increasing initial

positions and that they are all moving to the left with absolute speed values be-

tween 0 and 1. We only count events that occur because the leftmost point of the

1D bounding box changes. Note that these simplifications do not asymptotically

affect the results in this paper.

A necessary condition for the jth point to cause an external event is that all its preceding points have smaller absolute speed values, i.e., s_i < s_j for all i < j. If this is the case we call s_j a left-to-right maximum. Since we are interested in an upper bound we can neglect the initial positions of the points and need only to focus on the sequence of absolute speed values S = (s_1, ..., s_n) and count the left-to-right maxima in this sequence.
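This reduction is easy to probe empirically. The sketch below (ours, for illustration; the increasing base sequence and the noise parameters are arbitrary choices) counts left-to-right maxima of an adversarial speed sequence after Gaussian perturbation:

```python
import random

def ltr_maxima(seq):
    """Count the left-to-right maxima of a sequence."""
    count, best = 0, float("-inf")
    for x in seq:
        if x > best:
            count, best = count + 1, x
    return count

def smoothed_ltr(n, sigma, trials=200):
    """Average number of left-to-right maxima of the monotonically
    increasing speed sequence (1/n, 2/n, ..., 1) after adding
    i.i.d. Gaussian noise with standard deviation sigma."""
    base = [i / n for i in range(1, n + 1)]
    total = sum(ltr_maxima([s + random.gauss(0, sigma) for s in base])
                for _ in range(trials))
    return total / trials
```

With σ = 0 all n elements are maxima; moderate noise already pushes the average down dramatically, in line with the polylogarithmic bound proved in this section.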

The general concept for estimating the number of left-to-right maxima within the sequence is as follows. Let f and F denote the density function and distribution function, respectively, of the noise that is added to the initial speed values. (This means ŝ_i = s_i + φ_i, where φ_i is chosen according to density function f.) Let Pr[LTR_j] denote the probability that ŝ_j is a left-to-right maximum. We can write this probability as

Pr[LTR_j] = ∫_{−∞}^{∞} ∏_{i=1}^{j−1} F(x − s_i) · f(x − s_j) dx .   (1)

This holds since F(x − s_i) is the probability that the ith element is not greater than x after the perturbation. Since all perturbations are independent of each other, ∏_{i=1}^{j−1} F(x − s_i) is the probability that all elements preceding ŝ_j are below x. Consequently, ∏_{i=1}^{j−1} F(x − s_i) · f(x − s_j) dx can be interpreted as the probability that the jth element reaches x and is a left-to-right maximum. Hence, integration over x gives the probability Pr[LTR_j].

In the following we describe how to derive a bound on the above integral. First suppose that all s_i are equal, i.e., s_i = s for all i. Then

Pr[LTR_j] = ∫_{−∞}^{∞} F(x − s)^{j−1} · f(x − s) dx = ∫_0^1 z^{j−1} dz = 1/j ,

where we substituted z := F(x − s). (Note that this result only reflects the fact that the probability for the jth element to be the largest is 1/j.)

Now, suppose that the speed values are not equal but come from some interval [s_min, s_max]. In this case Pr[LTR_j] can be estimated by

Pr[LTR_j] = ∫_{−∞}^{∞} ∏_{i=1}^{j−1} F(x − s_i) · f(x − s_j) dx
          ≤ ∫_{−∞}^{∞} F(x − s_min)^{j−1} · f(x − s_max) dx
          = ∫_{−∞}^{∞} F(z + δ)^{j−1} f(z) dz ,


where we use δ to denote s_max − s_min. Let Z^f_{δ,r} := {z ∈ R | f(z)/f(z + δ) ≥ r} denote the subset of R that contains all elements z for which the ratio f(z)/f(z + δ) is at least r. Using this notation we get

Pr[LTR_j] ≤ ∫_{R \ Z^f_{δ,r}} F(z + δ)^{j−1} f(z) dz + ∫_{Z^f_{δ,r}} F(z + δ)^{j−1} f(z) dz
          ≤ ∫_{R \ Z^f_{δ,r}} F(z + δ)^{j−1} · (f(z)/f(z + δ)) · f(z + δ) dz + ∫_{Z^f_{δ,r}} f(z) dz
          ≤ r · ∫_{R \ Z^f_{δ,r}} F(z + δ)^{j−1} f(z + δ) dz + ∫_{Z^f_{δ,r}} f(z) dz
          ≤ r · 1/j + ∫_{Z^f_{δ,r}} f(z) dz .   (2)
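The equal-speeds value Pr[LTR_j] = 1/j is easy to confirm with a throwaway Monte Carlo check (j, σ, and the trial count below are arbitrary choices of ours):

```python
import random

def prob_last_is_max(j, sigma, trials=20000):
    """Estimate the probability that the j-th of j equally fast elements
    is the maximum after i.i.d. Gaussian perturbation (ties have
    probability zero); the estimate should be close to 1/j."""
    hits = 0
    for _ in range(trials):
        vals = [0.5 + random.gauss(0, sigma) for _ in range(j)]
        hits += vals[-1] == max(vals)
    return hits / trials
```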

Now, we can formulate the following lemma.

Lemma 1. Let f denote the density function of the noise distribution and define for positive parameters δ and r the set Z^f_{δ,r} ⊆ R as Z^f_{δ,r} := {z ∈ R | f(z)/f(z + δ) ≥ r}. Further, let Z denote the probability of the set Z^f_{δ,r} with respect to f, i.e., Z := ∫_{Z^f_{δ,r}} f(z) dz. Then the expected number of left-to-right maxima in a sequence of n elements that are perturbed with noise density f is at most

r · ⌈1/δ⌉ · log n + n · Z .

Proof. We are given an input sequence S of n speed values from (0,1]. Let L(S) denote the expected number of left-to-right maxima in the corresponding sequence of speed values perturbed with noise density f. We are interested in an upper bound on this value. The following claim shows that we only need to consider input sequences of monotonically increasing speed values.

Claim. The maximum expected number of left-to-right maxima in a sequence of n perturbed speed values is obtained for an input sequence S of initial speed values that is monotonically increasing. ⊓⊔

From now on we assume that S is a sequence of monotonically increasing speed values. We split S into ⌈1/δ⌉ subsequences such that the ℓth subsequence S_ℓ, ℓ ∈ {1,...,⌈1/δ⌉}, contains all speed values between (ℓ − 1)δ and ℓδ, i.e., S_ℓ := (s ∈ S : (ℓ − 1) · δ < s ≤ ℓ · δ). Note that each subsequence is monotonically increasing.

Let L(S_ℓ) denote the expected number of left-to-right maxima in subsequence S_ℓ. Now we first derive a bound on each L(S_ℓ) and then we utilize L(S) ≤ Σ_ℓ L(S_ℓ) to get an upper bound on L(S).

Fix ℓ ∈ {1,...,⌈1/δ⌉}. Let k_ℓ denote the number of elements in subsequence S_ℓ. We have

L(S_ℓ) = Σ_{j=1}^{k_ℓ} Pr[LTR_j] ,

where Pr[LTR_j] is the probability that the jth element of subsequence S_ℓ is a left-to-right maximum within this subsequence. We can utilize Inequality (2) for Pr[LTR_j] because the initial speed values in a subsequence differ by at most δ. This gives

L(S_ℓ) ≤ Σ_{j=1}^{k_ℓ} (r · 1/j + Z) ≤ r · log n + k_ℓ · Z .

Hence, L(S) ≤ Σ_ℓ L(S_ℓ) ≤ r · ⌈1/δ⌉ · log n + n · Z, as desired. ⊓⊔

2.1 Normally distributed noise

In this section we show how to apply the above lemma to the case of normally distributed noise. We prove the following theorem.

Theorem 1. The expected number of left-to-right maxima in a sequence of n speed values perturbed by random noise from the normal distribution N(0, σ²) is O((1/σ) · (log n)^{3/2} + log n).

Proof. Let φ(z) := (1/(√(2π) σ)) · e^{−z²/(2σ²)} denote the normal density function with expectation 0 and variance σ². In order to utilize Lemma 1 we choose δ := σ/√(log n). For z ≤ 2σ√(log n) it holds that

φ(z)/φ(z + δ) = e^{(δ/σ²)·z + δ²/(2σ²)} = e^{z/(σ√(log n)) + 1/(2 log n)} ≤ e³ .

Therefore, if we choose r := e³ we have Z^φ_{δ,r} ⊂ [2σ√(log n), ∞). Now, we derive a bound on ∫_{Z^φ_{δ,r}} φ(z) dz. It is well known from probability theory that for the normal density function with expectation 0 and variance σ² it holds that ∫_{kσ}^{∞} φ(z) dz ≤ e^{−k²/4}. Hence,

∫_{Z^φ_{δ,r}} φ(z) dz ≤ ∫_{2σ√(log n)}^{∞} φ(z) dz ≤ 1/n .

Altogether we can apply Lemma 1 with δ = σ/√(log n), r = e³, and Z = 1/n. This gives that the expected number of left-to-right maxima is at most O((1/σ) · (log n)^{3/2} + log n), as desired. ⊓⊔
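The density-ratio bound at the heart of this proof can be sanity-checked numerically. In the snippet below (n and σ are arbitrary example values of ours) the ratio φ(z)/φ(z + δ) is evaluated on a grid over [0, 2σ√(log n)]:

```python
import math

def ratio(z, delta, sigma):
    """phi(z) / phi(z + delta) for the N(0, sigma^2) density; the quotient
    of the two Gaussians is exp((delta / sigma^2) z + delta^2 / (2 sigma^2))."""
    return math.exp((delta / sigma ** 2) * z + delta ** 2 / (2 * sigma ** 2))

n, sigma = 10 ** 6, 0.1
delta = sigma / math.sqrt(math.log(n))
z_max = 2 * sigma * math.sqrt(math.log(n))
worst = max(ratio(i * z_max / 100, delta, sigma) for i in range(101))
print(worst, math.e ** 3)   # the ratio stays below e^3, so r := e^3 is a valid choice
```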


2.2 Monotonic noise distributions

In this section we investigate upper bounds for general noise distributions. We call a noise distribution monotonic if the corresponding density function is monotonically increasing on R_{≤0} and monotonically decreasing on R_{≥0}. The following theorem gives an upper bound on the number of left-to-right maxima for arbitrary monotonic noise distributions.

Theorem 2. The expected number of left-to-right maxima in a sequence of n speed values perturbed by random noise from a monotonic noise distribution is O(√(f(0) · n log n) + log n).

Proof. Let f denote the density function of the noise distribution and let f(0) denote the maximum of f. We choose r := 2, whereas δ will be chosen later. In order to apply Lemma 1 we only need to derive a bound on ∫_{Z^f_{δ,r}} f(z) dz. Therefore, we first define sets Z_i, i ∈ N, such that ∪_i Z_i ⊇ Z^f_{δ,r}, and then we show how to estimate ∫_{∪_i Z_i} f(z) dz.

First note that for z + δ < 0 we have f(z) < f(z + δ) because of the monotonicity of f. Hence Z^f_{δ,r} ⊆ [−δ, ∞). We partition [−δ, ∞) into intervals of the form [(ℓ − 1) · δ, ℓ · δ] for ℓ ∈ N_0. Now, we define Z_i to be the ith interval that has a non-empty intersection with Z^f_{δ,r}. (If less than i intervals have a non-empty intersection then Z_i is the empty set.) By this definition we have ∪_i Z_i ⊇ Z^f_{δ,r}, as desired.

We can derive a bound on ∫_{∪_i Z_i} f(z) dz as follows. Suppose that all Z_i ⊂ R_{≥0}. Let ẑ_i denote the start of interval Z_i. Then ∫_{Z_i} f(z) dz ≤ δ · f(ẑ_i), because Z_i is an interval of length δ and the maximum density within this interval is f(ẑ_i). Furthermore it holds that f(ẑ_{i+2}) ≤ (1/2) · f(ẑ_i) for every i ∈ N. To see this, consider some z_i ∈ Z_i ∩ Z^f_{δ,r}. We have f(ẑ_i) ≥ f(z_i) ≥ 2 · f(z_i + δ) ≥ 2 · f(ẑ_{i+2}), where we utilized that z_i ∈ Z^f_{δ,r} and that z_i + δ ≤ ẑ_{i+2}. If Z_1 = [−δ, 0] we have ∫_{Z_1} f(z) dz ≤ δ · f(0) for similar reasons. Now we can estimate ∫_{∪_i Z_i} f(z) dz by

∫_{∪_i Z_i} f(z) dz ≤ Σ_{i∈N} ∫_{Z_{2i−1}} f(z) dz + Σ_{i∈N} ∫_{Z_{2i}} f(z) dz + ∫_{[−δ,0]} f(z) dz
                  ≤ Σ_{i∈N} (1/2)^{i−1} δ · f(ẑ_1) + Σ_{i∈N} (1/2)^{i−1} δ · f(ẑ_2) + δ · f(0)
                  ≤ 2δ f(ẑ_1) + 2δ f(ẑ_2) + δ · f(0) ≤ 5δ · f(0) .

Lemma 1 yields that the expected number of left-to-right maxima is at most 2 · ⌈1/δ⌉ · log n + n · 5δ · f(0). Now, choosing δ := √(log n/(f(0) · n)) gives the theorem. ⊓⊔
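The distribution dependence can be seen directly by running the same experiment with uniform and with Gaussian noise of equal standard deviation (an illustrative sketch with arbitrary parameters of ours; it demonstrates the qualitative gap, not the exact bounds):

```python
import random

def ltr_maxima(seq):
    count, best = 0, float("-inf")
    for x in seq:
        if x > best:
            count, best = count + 1, x
    return count

def avg_ltr(n, noise, trials=100):
    """Average number of left-to-right maxima of the increasing sequence
    (1/n, ..., 1) when each element is perturbed by a fresh noise() sample."""
    base = [i / n for i in range(1, n + 1)]
    return sum(ltr_maxima([s + noise() for s in base])
               for _ in range(trials)) / trials

n, sigma = 2000, 0.05
eps = (12 ** 0.5) * sigma   # uniform noise on [-eps/2, eps/2] has deviation sigma
uni = avg_ltr(n, lambda: random.uniform(-eps / 2, eps / 2))
gau = avg_ltr(n, lambda: random.gauss(0, sigma))
print(uni, gau)
```

In such runs the uniform case typically yields several times more maxima than the Gaussian case, reflecting the polynomial versus polylogarithmic behavior of the two bounds.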


[Figure 1 not reproduced; it shows (a) the unit circle with the inscribed polygon V_0,...,V_4, an extremal point E_1, the angle γ_1, the central angle α, and shaded invalid regions of width ε_σ; (b) a boundary region with vertices V_i, V_{i+1}, extremal point E_i, segment δ_i, and the range square R of side length ε_σ.]

Fig. 1. (a) The partitioning of the plane into different regions. If the extreme point E_i of a boundary region i falls into the shaded area the corresponding boundary region is not valid. (b) The situation where the intersection between a boundary region i and the corresponding range square R_i is minimal.

3 Lower Bounds

For showing lower bounds we consider the 1D problem and map each point with initial position p_i and speed s_i to a point P_i = (p_i, s_i) in 2D. We utilize that the number of external events when maintaining the bounding box in 1D is strongly related to the number of vertices of the convex hull of the P_i's. If we can arrange the points in the 2D plane such that after perturbation L points lie on the convex hull in expectation, we can deduce a lower bound of L/2 on the number of external events.
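The reduction to convex hulls is easy to experiment with. Below, a standard monotone-chain hull (our illustrative sketch; the random input is only an example, not the polygon construction developed in this section):

```python
import random

def hull_size(pts):
    """Number of vertices of the convex hull of a 2-D point set,
    via Andrew's monotone chain."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return len(pts)
    def chain(points):
        h = []
        for q in points:
            # pop while the last three points make a non-left turn
            while len(h) >= 2 and (
                    (h[-1][0] - h[-2][0]) * (q[1] - h[-2][1])
                    - (h[-1][1] - h[-2][1]) * (q[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(q)
        return h
    return len(chain(pts)) + len(chain(pts[::-1])) - 2

if __name__ == "__main__":
    n, sigma = 1000, 0.05
    eps = (12 ** 0.5) * sigma      # uniform noise of standard deviation sigma
    pts = [(random.uniform(-1, 1) + random.uniform(-eps / 2, eps / 2),
            random.uniform(-1, 1) + random.uniform(-eps / 2, eps / 2))
           for _ in range(n)]
    print(hull_size(pts))
```

By the argument above, half the printed hull size is a lower bound on the number of external events of the corresponding 1D instance.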

By this method the results of [11] directly imply a lower bound of Ω(√(log n)) for the case of normally distributed noise. For the case of monotonic noise distributions we show that the number of vertices on the convex hull is significantly larger than for the case of normally distributed noise.

We choose the uniform distribution with expectation 0 and variance σ². The density function f of this distribution is

f(x) = 1/ε_σ if |x| ≤ ε_σ/2, and 0 else, where ε_σ = √12 σ.

We construct an input of n points that has a large expected number of vertices on the convex hull after perturbation. For this we partition the plane into different regions. We inscribe an ℓ-sided regular polygon into a unit circle centered at the origin. The interior of the polygon belongs to the inner region while everything outside the unit circle belongs to the outer region. Let V_0,...,V_{ℓ−1} denote the vertices of the polygon. The ith boundary region is the segment of the unit circle defined by the chord V_iV_{i+1}, where the indices are taken modulo ℓ, cf. Figure 1a). An important property of these regions is expressed in the following observation.


Observation 1 If no point lies in the outer region then every non-empty bound-

ary region contains at least one point that is a vertex of the convex hull.

⊓ ⊔

In the following, we select the initial positions of the input points such that it is guaranteed that after the perturbation the outer region is empty and the expected number of non-empty boundary regions is large.

We need the following notations and definitions. For an input point j we define the range square R to be the axis-parallel square with side length ε_σ centered at position (p_j, s_j). Note that for the uniform distribution with standard deviation σ the perturbed position of j will lie in R. Further, the intersection between the circle boundary and the perpendicular bisector of the chord V_iV_{i+1} is called the extremal point of boundary region i and is denoted by E_i. The line segment from the midpoint of the chord to E_i is denoted by δ_i, cf. Figure 1b).

The general outline for the proof is as follows. For a boundary region i we try to place a bunch of n/ℓ input points in the plane such that a vertex of their common range square R lies in the extremal point E_i of the boundary region. Furthermore we require that no point of R lies in the outer region. If this is possible it can be shown that the range square and the boundary region have a large intersection. Therefore it will be likely that one of the n/ℓ input points corresponding to the square lies in the boundary region after perturbation. Then, we can derive a bound on the number of vertices of the convex hull by exploiting Observation 1, because we can guarantee that no perturbed point lies in the outer region.

Now, we formalize this proof. We call a boundary region i valid if we can place input points in the described way, i.e., such that their range square R_i is contained in the unit circle and a vertex of it lies in E_i. Then R_i is called the range square corresponding to boundary region i.

Lemma 2. If σ ≤ 1/8 and ℓ ≥ 23 then there are at least ℓ/2 valid boundary regions.

Proof. If σ ≤ 1/8 then the relationship between ε_σ and σ gives ε_σ = 2√3 σ ≤ 1/2. Let γ_i denote the angle of vector E_i with respect to the positive x-axis. A boundary region is valid iff sin(γ_i) ≥ ε_σ/2 and cos(γ_i) ≥ ε_σ/2. The invalid regions are depicted in Figure 1a). If ε_σ ≤ 1/2 these regions are small. To see this, let β denote the central angle of each region. Then 2 sin(β/2) = ε_σ ≤ 1/2 and β ≤ 2 · arcsin(1/4) ≤ 0.51. At most β/(2π/ℓ) + 1 boundary regions can have their extreme point in a single invalid region. Hence the total number of invalid boundary regions is at most 4(β/(2π/ℓ) + 1) ≤ ℓ/2. ⊓⊔

The next lemma shows that a valid boundary region has a large intersection with

the corresponding range square.


Lemma 3. Let R_i denote the range square corresponding to boundary region i. Then the area of the intersection between R_i and the ith boundary region is at least min{(2/ℓ)^4, ε_σ²/2} if ℓ ≥ 4.

Proof. Let α denote the central angle of the polygon. Then α = 2π/ℓ and δ_i = 1 − cos(α/2). By utilizing the inequality cos(φ) ≤ 1 − (1/2)φ² + (1/24)φ⁴ we get δ_i ≥ (11/96)α² for α ≤ 2. Plugging in the value for α this gives δ_i ≥ (2/ℓ)² for ℓ ≥ 4.

The intersection between the range square and the boundary region is minimal when one diagonal of the square is parallel to δ_i, cf. Figure 1b). Therefore, the area of the intersection is at least δ_i² ≥ (2/ℓ)^4 if δ_i ≤ ε_σ/√2, and at least ε_σ²/2 if δ_i ≥ ε_σ/√2. ⊓⊔

Lemma 4. If ℓ ≤ min{(n/ε_σ²)^{1/5}, n/2} then every valid boundary region is non-empty after perturbation with probability at least 1 − 1/e.

Proof. We place n/ℓ input points at the center of a valid range square. The probability that none of these points lies in the boundary region after perturbation is

Pr[boundary region is empty] ≤ (1 − min{δ_i², ε_σ²/2}/ε_σ²)^{n/ℓ} ,

because the area of the intersection is at least min{δ_i², ε_σ²/2} and the whole area of the range square is ε_σ². If δ_i² = min{δ_i², ε_σ²/2} the result follows since

ε_σ²/δ_i² ≤ ε_σ² · ℓ⁴ = ε_σ² · ℓ⁵/ℓ ≤ n/ℓ .

Here we utilized that δ_i² ≥ 1/ℓ⁴, which follows from the proof of Lemma 3. In the case that ε_σ²/2 = min{δ_i², ε_σ²/2} the result follows since n/ℓ ≥ 2. ⊓⊔

Theorem 3. If σ ≤ 1/8 then the smoothed worst case number of vertices on the convex hull is Ω(min{(n/σ)^{1/5}, n}).

Proof. By combining Lemmas 2 and 4 with Observation 1 the theorem follows immediately if we choose ℓ = Θ(min{(n/ε_σ²)^{1/5}, n}). ⊓⊔

4 Conclusions

We introduced smoothed motion complexity as a measure for the complexity

of maintaining combinatorial structures of moving data. We showed that for the

problem of maintaining the bounding box of a set of points the smoothed motion

complexity differs significantly from the worst case motion complexity which

makes it unlikely that the worst case is attained in typical applications.


A remarkable property of our results is that they heavily depend on the

probability distribution of the random noise. In particular, our upper and lower

bounds show that there is an exponential gap in the number of external events

between the cases of uniformly and normally distributed noise. Therefore we

have identified an important sub-task when applying smoothed analysis. It is

mandatory to precisely analyze the exact distribution of the random noise for a

given problem since the results may vary drastically for different distributions.

References

1. AGARWAL, P. K., AND HAR-PELED, S. Maintaining approximate extent measures of moving points. In Proceedings of the 12th ACM-SIAM Symposium on Discrete Algorithms (SODA) (2001), pp. 148–157.

2. BANDERIER, C., BEIER, R., AND MEHLHORN, K. Smoothed analysis of three combi-

natorial problems. In Proceedings of the 28th International Symposium on Mathematical

Foundations of Computer Science (MFCS) (2003), pp. 198–207.

3. BAREQUET, G., AND HAR-PELED, S. Efficiently approximating the minimum-volume

bounding box of a point set in three dimensions. In Proceedings of the 10th ACM-SIAM

Symposium on Discrete Algorithms (SODA) (1999), pp. 82–91.

4. BASCH, J., GUIBAS, L. J., AND HERSHBERGER, J. Data structures for mobile data. Journal of Algorithms 31, 1 (1999), 1–28.

5. BASCH, J., GUIBAS, L. J., AND ZHANG, L. Proximity problems on moving points. In

Proceedings of the 13th ACM Symposium on Computational Geometry (1997), pp. 344–351.

6. BLUM, A., AND DUNAGAN, J. Smoothed analysis of the perceptron algorithm. In Proceed-

ings of the 13th ACM-SIAM Symposium on Discrete Algorithms (SODA) (2002), pp. 905–

914.

7. GUIBAS, L. J., HERSHBERGER, J., SURI, S., AND ZHANG, L. Kinetic connectivity for

unit disks. Discrete & Computational Geometry 25, 4 (2001), 591–610.

8. HAR-PELED, S. Clustering motion. In Proceedings of the 42nd IEEE Symposium on Foun-

dations of Computer Science (FOCS) (2001), pp. 84–93.

9. HERSHBERGER, J., AND SURI, S. Simplified kinetic connectivity for rectangles and hyper-

cubes. In Proceedings of the 12th ACM-SIAM Symposium on Discrete Algorithms (SODA)

(2001), pp. 158–167.

10. KIRKPATRICK, D. G., SNOEYINK, J., AND SPECKMANN, B. Kinetic collision detection

for simple polygons. International Journal of Computational Geometry and Applications

12, 1-2 (2002), 3–27.

11. RÉNYI, A., AND SULANKE, R. Über die konvexe Hülle von n zufällig gewählten Punkten. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 2, 1 (1963), 75–84.

12. SANKAR, A., SPIELMAN, D. A., AND TENG, S.-H. Smoothed analysis of the condition

numbers and growth factors of matrices. SIAM Journal on Matrix Analysis and Applications

28, 2 (2006), 446–476.

13. SPIELMAN, D. A., AND TENG, S.-H. Smoothed analysis of algorithms: Why the simplex

algorithm usually takes polynomial time. In Proceedings of the 33rd ACM Symposium on

Theory of Computing (STOC) (2001), pp. 296–305.

14. SPIELMAN, D. A., AND TENG, S.-H. Smoothed analysis of property testing. Manuscript,

2002.

15. ZHANG, L., DEVARAJAN, H., BASCH, J., AND INDYK, P. Probabilistic analysis for com-

binatorial functions of moving points. In Proceedings of the 13th ACM Symposium on Com-

putational Geometry (1997), pp. 442–444.
