An Empirical Comparison of Techniques for Updating Delaunay Triangulations
Computer Science Department
Stanford, CA 94305
The computation of Delaunay triangulations from static point sets
has been extensively studied in computational geometry. When the
points move with known trajectories, kinetic data structures can be
used to maintain the triangulation. However, there has been little
work so far on how to maintain the triangulation when the points
move without explicit motion plans, as in the case of a physical
simulation. This paper addresses the problem of updating Delaunay trian-
gulations after small displacements of the defining points, as might
be provided by a physics-based integrator. We have implemented a
variety of update algorithms, many new, toward this purpose. We
ran these algorithms on a corpus of data sets to provide running
time comparisons and determined that updating Delaunay can be
significantly faster than recomputing.
Categories and Subject Descriptors: F.2.2 [Theory of Computa-
tion]: Analysis of Algorithms and Problem Complexity—Nonnu-
merical Algorithms and Problems; I.6.m [Computing Methodolo-
gies]: Simulation and Modeling—Miscellaneous
General Terms: Algorithms, Experimentation
Keywords: Delaunay triangulation, update, motion
1. INTRODUCTION
Delaunay triangulations, and their duals Voronoi diagrams, are
fundamental to computational geometry. They provide a decompo-
sition of the space surrounding a set of points into well-shaped cells
in two and three dimensions [30, 29]. However, they are still fairly
expensive to compute, despite extensive investigations of how to
build them quickly and robustly [10, 28, 33, 11]. Recently, modifications to existing algo-
rithms have been proposed which allow the Delaunay triangulation
of millions of points to be computed , extending the domains in
which such techniques can be applied.
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
SCG’04, June 8–11, 2004, Brooklyn, New York, USA.
Copyright 2004 ACM 1-58113-885-7/04/0006 ...$5.00.
In many situations where the Delaunay triangulation or Voronoi
diagram is required, the Delaunay triangulation of a perturbed con-
figuration of the points is available. For example, in a physical
simulation where Delaunay is used for collision detection, the De-
launay triangulation must be recomputed at each time step as the
coordinates are updated by the integrator. These updates are neces-
sarily small, in order to ensure the accuracy of the simulation; thus
the Delaunay triangulation of the original and perturbed point sets
often have very similar combinatorial structure.
Our problem is then, given a point set with its Delaunay triangu-
lation, and a perturbation of each point in the original set, compute
the Delaunay triangulation of the perturbed point set. We present
several update techniques and compare their running times on var-
ious simulation data sets.
A motivation for this work came from molecular simulations.
There, the Delaunay triangulation is used to compute the molecular
surface area and volume and their various derivatives [14, 15]. The
combinatorial structure of the Delaunay triangulations before and
after an integrator updates the atom coordinates generally differ by
only a small number of flips. By performing flips on the initial triangulation structure, in practice,
we can update the triangulation of a molecule after a time step in
approximately half to three quarters the time it takes to recompute.
The details of this method are discussed in Section 4. A similar
method involving a mix of point removals and flips to update a
Delaunay triangulation where the points were constrained to stay
within their Voronoi cells was independently explored in .
Kinetic data structures, first introduced in , can be used to
maintain a Delaunay triangulation under smooth motion of the un-
derlying points. In Section 3 we discuss the trade-offs involved
in, and several techniques for, using kinetic data structures to up-
date a Delaunay triangulation. In addition, we discuss possible ar-
eas where improved understanding and optimization might make
kinetic data structure based update methods competitive with recomputing.
There has been some work exploring using a kinetic Delaunay
triangulation to perform inter-frame collision detection and dynam-
ically adjust the integrator step size in the context of a particle sim-
ulation . In general, the issues and advantages associated with
looking at the configuration between frames are quite application
dependent. Therefore, we will restrict ourselves to finding the De-
launay of the perturbed conformation and ignore any intermediate
configurations.
An alternative would be to allow the triangulation to deviate
slightly from Delaunay in the hopes of achieving greater stability.
Figure 1: The number of certificate failures can be mislead-
ing: (a) shows the Delaunay triangulation of an arrangement
of points. They are then moved to configuration (b) without
changing the triangulation. The resulting triangulation has
only one edge with a failed certificate (shown dashed), even though
none of the triangles are Delaunay. Ω(n²) flips are necessary to
make triangulation (b) Delaunay.
computing adjacency. However, there has been no investigation of
how to maintain such a set under motion, or maintain a subset of
the almost Delaunay simplices which forms a triangulation.
2. DELAUNAY PROPERTIES
The global correctness of a Delaunay triangulation can be veri-
fied using a set of local certificates, namely that for each facet (edge
in 2D), the sphere (circle) defined by the four (three) points of one
cell (triangle) incident to the facet (edge) does not contain the re-
maining point from the other incident cell (triangle). This test is
known as the InCircle predicate. As a result, it is easy to test if a
given triangulation is the Delaunay triangulation of a set of points.
Unfortunately, even a single failed certificate can mean that the tri-
angulation is arbitrarily far from Delaunay. For an example, see
Figure 1. In simulations, when points do not move far, such situ-
ations do not occur and the number of failed certificates is a good
measure of the amount of work that needs to be done to update a
triangulation to Delaunay.
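The certificate just described is a small determinant test. The following is a minimal sketch (not the paper's implementation) of the 2D version of the test; the 3D InSphere certificate is analogous, with a z coordinate and one more row:

```python
# Sketch of the 2D InCircle certificate: for a, b, c in counterclockwise
# order, the determinant is positive exactly when d lies strictly inside
# the circle through a, b, c -- i.e. when the Delaunay certificate for
# the edge shared by the two incident triangles fails.

def in_circle(a, b, c, d):
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return ((ax * ax + ay * ay) * (bx * cy - by * cx)
            - (bx * bx + by * by) * (ax * cy - ay * cx)
            + (cx * cx + cy * cy) * (ax * by - ay * bx))

# (0.5, 0.5) is the circumcenter of the triangle, so it lies inside the
# circumcircle; (0, 2) lies outside; (0, 1) is cocircular (degenerate).
print(in_circle((0, 0), (1, 0), (1, 1), (0.5, 0.5)))  # positive
print(in_circle((0, 0), (1, 0), (1, 1), (0, 2)))      # negative
```

Exactly this sign (evaluated with exact or filtered arithmetic in a real implementation) is what a single certificate checks.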
In 2D, a triangulation which is not Delaunay can be made De-
launay by repeatedly finding an edge with an invalid certificate,
and replacing it with the other diagonal of the quadrilateral defined
by the two triangles sharing the edge. Such flips are called Delau-
nay flips. Converting an arbitrary triangulation to Delaunay using
this technique requires O(n²) flips. The concept of a Delaunay flip
can be extended to 3D, where a facet is flipped to an edge and vice
versa. However, it is not always possible to convert a triangulation
to Delaunay using Delaunay flips alone . Nevertheless, we have
found that flipping works most of the time in 3D, as is discussed in Section 4.
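In 2D, the flip loop is short enough to sketch directly. The code below is an illustrative sketch (not the paper's code), assuming an embedded input triangulation given as vertex-index triples; the quadratic worst case is visible in that each flip rescans the edge list:

```python
from itertools import combinations

def orient(a, b, c):
    # twice the signed area of triangle abc; positive if counterclockwise
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_circle(a, b, c, d):
    # positive iff d is inside the circle through a, b, c (a, b, c CCW)
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return ((ax * ax + ay * ay) * (bx * cy - by * cx)
            - (bx * bx + by * by) * (ax * cy - ay * cx)
            + (cx * cx + cy * cy) * (ax * by - ay * bx))

def make_delaunay(pts, tris):
    """Repeatedly perform Delaunay flips until every certificate holds.
    pts: list of (x, y); tris: iterable of index triples.  Assumes the
    triangulation is embedded, so every flip is valid and the loop
    terminates."""
    tris = {tuple(sorted(t)) for t in tris}
    changed = True
    while changed:
        changed = False
        edge_to_tris = {}
        for t in tris:
            for e in combinations(t, 2):
                edge_to_tris.setdefault(e, []).append(t)
        for e, ts in edge_to_tris.items():
            if len(ts) != 2:
                continue                 # convex hull edge: no certificate
            t1, t2 = ts
            p = next(v for v in t1 if v not in e)
            q = next(v for v in t2 if v not in e)
            a, b = e
            if orient(pts[a], pts[b], pts[p]) < 0:
                a, b = b, a              # make (a, b, p) counterclockwise
            if in_circle(pts[a], pts[b], pts[p], pts[q]) > 0:
                # certificate failed: replace edge (a, b) by diagonal (p, q)
                tris -= {t1, t2}
                tris |= {tuple(sorted((p, q, a))), tuple(sorted((p, q, b)))}
                changed = True
                break
    return tris

# Quadrilateral whose diagonal 1-3 is not locally Delaunay:
pts = [(0, 0), (2, 0), (2, 1), (0, 3)]
print(make_delaunay(pts, [(0, 1, 3), (1, 2, 3)]))
```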
If the triangulation is not embedded, flipping becomes problem-
atic even in 2D. Figure 2 shows such an example where Delaunay
flips cannot be used to convert a non-embedded triangulation to De-
launay. As a result, in order to use Delaunay flips we will have to
ensure that the triangulation is embedded first.
In 3D there are relatively few local operations that can be per-
formed on a Delaunay triangulation, the most important being point
insertion and point removal. A point can be inserted by removing
all cells whose circumsphere contains the point (i.e. those whose
certificate would be invalidated by the new point). The resulting
hole is star-shaped around the new vertex and can be filled by con-
necting each facet on the boundary to the new point with a cell.
Point insertion was first proposed using flips  and forms the
basis for most recent implementations of Delaunay triangulation
construction. The cells which are removed by the point insertion
can be kept around and used to build a hierarchical point location
structure, which speeds insertions.
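The insertion procedure just described (remove the conflicting cells, then star the hole) can be sketched as follows, again in 2D for brevity; this is an illustrative sketch, not the paper's implementation:

```python
from itertools import combinations

def orient(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_circle(a, b, c, d):
    # positive iff d is inside the circle through a, b, c (a, b, c CCW)
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return ((ax * ax + ay * ay) * (bx * cy - by * cx)
            - (bx * bx + by * by) * (ax * cy - ay * cx)
            + (cx * cx + cy * cy) * (ax * by - ay * bx))

def insert_point(pts, tris, pi):
    """Insert pts[pi] into the Delaunay triangulation tris (sorted index
    triples).  Removes every cell whose circumcircle contains the new
    point, then connects each facet on the boundary of the star-shaped
    hole to the new vertex."""
    p = pts[pi]
    bad = set()
    for t in tris:
        a, b, c = pts[t[0]], pts[t[1]], pts[t[2]]
        if orient(a, b, c) < 0:
            a, c = c, a                  # reorder counterclockwise
        if in_circle(a, b, c, p) > 0:
            bad.add(t)
    # hole-boundary facets belong to exactly one removed cell
    count = {}
    for t in bad:
        for e in combinations(t, 2):
            count[e] = count.get(e, 0) + 1
    boundary = [e for e, n in count.items() if n == 1]
    return (set(tris) - bad) | {tuple(sorted(e + (pi,))) for e in boundary}

# Insert an interior point into a two-triangle square:
pts = [(0, 0), (4, 0), (4, 4), (0, 4), (1, 1)]
print(sorted(insert_point(pts, {(0, 1, 2), (0, 2, 3)}, 4)))
```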
Figure 2: Flips do not work when the triangulation is not embed-
ded: (a) shows the Delaunay triangulation of an arrangement
of points. The point marked with a dot is then moved as shown
in (b) without changing the triangulation, making it no longer
embedded. The only edge that is not locally Delaunay is the
dashed edge, and this edge cannot be flipped (since the edge
that would be created by the flip already exists).
Figure 3: Stability and Delaunay triangulations: (a) shows an ar-
rangement of points with a very unstable Delaunay triangula-
tion due to the degenerate conformation. In most cases there is
some tolerance around each vertex, as shown in (b).
A point can also be deleted from a Delaunay triangulation.
This operation is slightly more complicated and is covered in .
We will use both of these operations for updating Delaunay triangulations.
Delaunay triangulations can be very brittle structures. In certain
degenerate configurations, such as that shown in Figure 3(a), arbitrarily small displacements can result in large changes to
the triangulation. While the example shown is highly degenerate
and is unlikely to arise in practice, in much of the data we looked
at the distance the vertices must be moved in order to invalidate a
certificate is quite small. The tolerance of a certificate is the min-
imal amount each vertex in the certificate must move in order to
invalidate the certificate and can be easily computed . We found
the average tolerance to be around 10% of the local edge length and
that perturbations of 1% of the edge length could invalidate 20% of
the certificates.
2.1 A Note on Terminology
In the remainder of the paper we will restrict ourselves to 3D
Delaunay triangulations, although much of what is said also applies
to 2D. We will use cell to mean the full dimensional simplex (a
tetrahedron) and facet to mean the simplex with a dimensionality
of d − 1 (a triangle).
The lifting map is a convenient way of thinking about Delaunay
triangulations . It is the mapping lifting the 3D point set to a
paraboloid in four dimensional space, namely
(x, y, z) → (x, y, z, x² + y² + z²)
In the lifted space, the InCircle test is a point/plane orientation test
and the Delaunay triangulation is the lower convex hull of the lifted points.
Once we are in the lifted space, it is natural to generalize De-
launay triangulations to power complexes  by allowing the lift-
ing coordinate to be specified independently, namely (x, y, z) →
(x, y, z, l). A way to do this is by giving each point a weight w so
that l = x² + y² + z² − w. The weight can be interpreted as the square
of the radius of a sphere around the point. The resulting lower con-
vex hull is the dual of the power diagram, a Voronoi diagram of
the weighted points using the power distance. This interpretation
is used in the molecular surface area computations mentioned in
the introduction. For simplicity, we will restrict the initial and final
weights of the points to be zero.
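As a quick numerical illustration of the lifting map and of weights (a sketch with arbitrary sample points, using the 2D analogue (x, y) → (x, y, x² + y² − w)), the InCircle test is literally the same determinant as the point/plane orientation test on the lifted points:

```python
# 2D analogue of the lifting map: a point (x, y) with weight w lifts to
# (x, y, x^2 + y^2 - w).  The InCircle determinant on the original
# points equals the orientation determinant on the lifted points.

def lift(p, w=0.0):
    return (p[0], p[1], p[0] ** 2 + p[1] ** 2 - w)

def orient3d(a, b, c, d):
    # determinant of the rows (a - d, b - d, c - d): positive when d
    # lies below the plane through a, b, c
    m = [[u[i] - d[i] for i in range(3)] for u in (a, b, c)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def in_circle(a, b, c, d):
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return ((ax * ax + ay * ay) * (bx * cy - by * cx)
            - (bx * bx + by * by) * (ax * cy - ay * cx)
            + (cx * cx + cy * cy) * (ax * by - ay * bx))

a, b, c = (0, 0), (3, 0), (3, 3)
assert orient3d(lift(a), lift(b), lift(c), lift((1, 1))) == in_circle(a, b, c, (1, 1))

# A weight acts like a squared sphere radius: (4, 0) is outside the
# circle through a, b, c, but with weight 5 its lift drops below the
# plane of the lifts, so it is "inside" in the power-distance sense.
e = (4, 0)
print(in_circle(a, b, c, e))                              # negative
print(orient3d(lift(a), lift(b), lift(c), lift(e, 5.0)))  # positive
```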
Since all of our current approaches only look at one time step at
a time, we will let P be the coordinates of the points before the time
step and P′ the perturbed coordinates. The Delaunay triangulation
of a point set will be denoted D(P). So the overall problem is to
compute D(P′) given P, P′, and D(P).
3. KINETIC UPDATE TECHNIQUES
Given a set of continuous trajectories for the points, we can use a
kinetic data structure to maintain the Delaunay triangulation during
the motion. In contrast to most studies of kinetic data structures ,
in our problem, no trajectory between P and P′ is specified. As a
result we are free to choose it as we see fit in order to minimize the
amount of work that is performed. In Section 3.3 we discuss the
trade-offs involved. We are only interested in events which occur
during a narrow interval of time, which affects our choice of type
of solver to use. These issues are discussed in Section 3.5. As with
all known extant work on kinetic data structures, we will restrict
our trajectories to be polynomials of time.
3.1 Kinetic Data Structures Overview
Computational geometry is built on the idea of predicates —
functions of parameters defining the geometric data set (e.g. point
coordinates) which return discrete sets of values. Many predicates
reduce to determining the sign of an algebraic or even arithmetic
expression on the coordinates of the primitive objects. For exam-
ple, to test whether a point lies above or below a plane (i.e. the
InCircle test under the lifting map), we compute the dot product
of the point with the normal of the plane and subtract the plane’s
offset along the normal. If the result is positive, the point is above
the plane; if zero, it lies on the plane; and if negative, below. The validity of many
combinatorial structures built on top of geometric primitives can
be proved by checking a finite number of predicates of the geomet-
ric primitives, called certificates. For a Delaunay triangulation, the
certificates are one InCircle test per facet of the triangulation, plus
a point/plane orientation test for each facet and edge of the convex hull.
The kinetic data structures framework is built on top of this view
of computational geometry . Let the geometric objects move
by replacing each of their coordinates with a function of time. As
time advances, the objects now trace out paths through space called
trajectories. The values of the algebraic functions of the coordi-
nates used to evaluate the certificates now also become functions of
time. We call these certificate functions. As long as these functions
maintain the correct sign, the original data structure is still correct.
However, if one of the certificate functions changes sign, the orig-
inal structure must be updated and some new predicate functions
computed. We call such occurrences events.
Maintaining a kinetic data structure is then a matter of determin-
ing which certificate function changes sign next (i.e. determining
which predicate function has the first root after the current time)
and then updating the structure and certificate functions.
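This event loop can be illustrated with a toy kinetic data structure (a sketch, not from the paper): kinetic sorting of points moving linearly on a line, where each certificate asserts that two rank-adjacent points stay ordered, and each event swaps them and reschedules the affected certificates:

```python
import heapq

def kinetic_sort(pos, vel, t_end):
    """Toy kinetic data structure: maintain the sorted order of points
    moving on a line as x_i(t) = pos[i] + vel[i] * t.  The certificate
    for slot k asserts order[k] is left of order[k+1]; its certificate
    function is the linear polynomial x_j(t) - x_i(t), and an event
    fires at the first future root of that polynomial."""
    order = sorted(range(len(pos)), key=lambda i: pos[i])

    def schedule(k, now, queue):
        # compute the first future root of the certificate for slot k
        if 0 <= k < len(order) - 1:
            i, j = order[k], order[k + 1]
            dv = vel[i] - vel[j]
            if dv > 0:                   # i catches up with j
                root = (pos[j] - pos[i]) / dv
                if root > now:
                    heapq.heappush(queue, (root, k, i, j))

    queue = []
    for k in range(len(order) - 1):
        schedule(k, 0.0, queue)
    while queue and queue[0][0] <= t_end:
        t, k, i, j = heapq.heappop(queue)
        if (order[k], order[k + 1]) != (i, j):
            continue                     # stale event: structure changed
        order[k], order[k + 1] = order[k + 1], order[k]  # the "flip"
        for kk in (k - 1, k, k + 1):     # recompute affected certificates
            schedule(kk, t, queue)
    return order

# Point 0 overtakes points 1 and 2 before t = 1:
print(kinetic_sort([0.0, 1.0, 2.0], [3.0, 0.0, 0.0], 1.0))
```

The kinetic Delaunay structure follows the same pattern, with degree-d certificate polynomials in place of the linear ones and a flip in place of the swap.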
3.2 Maintaining a Delaunay Triangulation
Maintaining a Delaunay triangulation using a kinetic data struc-
ture is well understood in theory and has been implemented numer-
ous times .
The predicate functions for kinetic Delaunay triangulation are
the determinants of the matrices corresponding to the lifted point/
hyperplane orientation test mentioned in Section 2.1. The certificate
function for a facet is the determinant

    | x_1(t)  y_1(t)  z_1(t)  l_1(t)  1 |
    | x_2(t)  y_2(t)  z_2(t)  l_2(t)  1 |
    | x_3(t)  y_3(t)  z_3(t)  l_3(t)  1 |    (1)
    | x_4(t)  y_4(t)  z_4(t)  l_4(t)  1 |
    | x_5(t)  y_5(t)  z_5(t)  l_5(t)  1 |

where l_i(t) = x_i(t)² + y_i(t)² + z_i(t)² − w_i(t). If the initial and final
triangulations are Delaunay rather than power complexes, w_i(t_0) =
w_i(t_f) = 0.
When a certificate fails, a flip must be performed and seven new
certificate functions must be computed. However, the negation of
the certificate function of the facet/edge being flipped is the certifi-
cate function of the edge/facet created by the flip. The function and
roots can be cached and reused, leaving only six new ones to be
handled. Additionally, each time the motion of a point changes, all
the predicate functions corresponding to facets of cells incident to
the point must be recomputed and re-solved.
In order to compare the costs of the various trajectory types con-
sidered, it helps to establish some notation. There are two logical
components to the cost of using a kinetic Delaunay data structure
to interpolate between two point sets:
• trajectory change cost: the cost of computing and solving
the predicate functions for each of the facets of the Delaunay
triangulation each time the motion of the points changes (in-
cluding the initial specification of the trajectory).
• per flip cost: the cost associated with computing and solving
the six predicate functions mentioned in the previous paragraph.
Let f be the number of facets in the final triangulation, S(d)
the cost to generate and solve a certificate polynomial of degree
d, e_m the number of flips (events) that occur during the motion m,
and let p be the number of times the motion of all the points is
changed after the initial specification. Then the total cost of the
kinetic data structure is (f(p + 1) + 6e_m)S(d). Alternatively, we
could move each point independently, one after another, and only
recompute the certificates involving each point when its trajectory
changes. Then, the cost is (5f + 6e)S(d) since each certificate
must be recomputed once for each of the five points involved.
There are also additional cer-
tificate functions. Some points, called redundant points, are points
which are above the convex hull in lifted space and therefore not
part of the triangulation. As a result, for each point which is inci-
dent to four other points in the triangulation, we need to maintain
a certificate to verify that the point has not moved above the hy-
perplane defined by its four neighbors and off the convex hull. In
addition we need certificates to track the location of each redundant
point. These add a cost of (t + 3er)S(d), where t is the number of
degree-four vertices and r is the number of redundant points per cell
of the triangulation. In our examples, t and r are close to zero since
the weights are never very uneven. As a result those components
of the cost can be ignored.
3.3 Interpolating Motions
We are now free to choose the motion interpolating between P
and P′ to minimize our total work. We found that for the data sets
investigated, the initialization cost dominated, so we chose to fo-
cus on minimizing that and ignore the cost of flips. Thus the cost
we consider is the product of the number of times the motion is
changed and the cost of generating and solving a certificate function.
Minimize Trajectory Changes: Linear Interpolation
We can minimize the number of times the motion of each point
changes. This means picking a single motion for each point that
interpolates its initial and perturbed positions. The simplest way to
do this is by allowing x, y, z to change linearly at the same time.
The resulting certificates have degree five.
If we are already dealing with regular triangulations, or are will-
ing to bear the added complexity, we can instead interpolate lin-
early in the lifted space — i.e. interpolate x, y, z and l linearly.
The corresponding Cartesian space motion is the same as before
for x, y, and z. The weight varies quadratically with time. If the
initial and final weights are 0 and we vary t from 0 to 1 during the
interpolation, then w(t) = (t² − t)|d|², where d = (dx, dy, dz) is
the perturbation vector for the point. Note that the weight is always
negative and its magnitude is bounded by |d|²/4.
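The weight formula can be checked directly: writing x(t) = x0 + t·d and interpolating the lift l(t) linearly between |x0|² and |x0 + d|², the induced weight |x(t)|² − l(t) works out to (t² − t)|d|². A quick numerical sketch, with arbitrary sample values:

```python
# Numerical check of the lifted-linear-interpolation weight: x(t) moves
# linearly from x0 to x0 + d while the lifted coordinate l(t) is
# interpolated linearly; the induced weight |x(t)|^2 - l(t) must equal
# (t^2 - t)|d|^2, which is always negative and bounded by |d|^2 / 4.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def induced_weight(x0, d, t):
    x1 = tuple(a + b for a, b in zip(x0, d))
    xt = tuple(a + t * b for a, b in zip(x0, d))
    lt = (1 - t) * dot(x0, x0) + t * dot(x1, x1)  # linear in lifted space
    return dot(xt, xt) - lt

x0, d = (1.0, -2.0, 0.5), (0.3, 0.4, -1.2)
for t in (0.0, 0.1, 0.25, 0.5, 0.9, 1.0):
    w = induced_weight(x0, d, t)
    assert abs(w - (t * t - t) * dot(d, d)) < 1e-12
    assert w <= 0 and -w <= dot(d, d) / 4 + 1e-12
print("weight at t = 1/2:", induced_weight(x0, d, 0.5))
```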
Interpolating in lifted space reduces the degree of the certificates
to four at little additional cost. We call these methods linear in-
terpolation and lifted linear interpolation respectively. The latter
is the lowest degree interpolating motion which does not require
modifying the motions during the interpolation. The cost is then
(f + 6e_lli)S(4).
Minimize Degree: Linear Certificate Functions
Generating and solving high degree certificate functions is ex-
pensive compared to computing static predicates. Polynomial mul-
tiplication is quadratic in the degree (the algorithms with better
asymptotic bounds have constants which are too large to be useful
for the polynomials in question) and there is overhead from allo-
cating space for all the intermediate values. In addition, solving a
degree five polynomial takes three times as long as solving a linear
one as is shown in Figure 4.
For all these reasons it is advantageous to minimize the degree
of the certificate functions. There are two ways to make the certifi-
cate functions linear, either allow one row of the certificate matrix
(Equation 1) to vary linearly, or allow one column.
If we allow one row to vary linearly and hold the others constant,
it corresponds to moving each point as in lifted linear interpolation,
but moving them one at a time. We call this method point at a time
interpolation. Using the above notation, the work is (5f + 6e_pat)S(1),
since each certificate must be recomputed five times, once for each
of the five points involved.
If, instead, we allow one column to vary linearly and hold the
other columns constant, it corresponds to linearly interpolating all
points along each coordinate successively (including l). The mo-
tion will need to be changed three times during the interpolation,
as the coordinate being interpolated shifts from x to y to z to l. We
call this method coordinate at a time interpolation. The work is
(4f + 6e_cat)S(1). This has a better trajectory change cost than point
at a time interpolation. In this interpolation method the weight can
become quite large. If x, y, z are all moved before l, then the max-
imum weight of a point is 2(x · d) + |d|², where x is the initial
position of the point and d is the displacement vector, as be-
fore. In practice this large change in the weight results in many
more events occurring than in point at a time interpolation.
Certain aspects of the costs associated with the kinetic data struc-
tures can be compared a priori, namely the costs associated with evaluating
the static predicates and generating the kinetic certificate functions.
• Naive computation of an InCircle certificate function where
the result is linear takes twice as many multiplications as for
the static predicate. We can reduce this to approximately the
same number of multiplications as for the static predicates by
reordering the input to the determinant computation to avoid
linear intermediate results.
• An InCircle test where all of the Cartesian space motions
are linear, which results in a degree five certificate function,
takes approximately five times as many multiplications to
compute as the static determinant.
• An InCircle test where all the trajectories are quadratic (in-
cluding the lifted coordinate), takes approximately ten times
as many multiplications as a static determinant.
These are all underestimates of the amount of work necessary
to compute the certificate functions, as they ignore the extra overhead
associated with memory management and branches.
In our data sets, building a Delaunay triangulation from scratch
takes approximately four InCircle predicate evaluations per facet in
the final triangulation. Using this we can compare the lower bound
on the kinetic data structure cost with the static algorithm. Table 1
shows this comparison. The estimates of the lower bounds on the
kinetic data structure update cost and the rebuilding cost are all
quite close for the methods we tried, and prohibitive for any higher-degree motions.
This analysis ignores many types of work associated both with
building a Delaunay triangulation and with maintaining a kinetic
data structure; however, it does capture the most important compo-
nents of the work. On the static Delaunay side, one study  found that
40-100% of the running time of Delaunay computation was taken
by predicate evaluation. That 40-100% includes point/plane orien-
tation tests used during point location, of which there are typically
50% more than the InCircle tests. However, those are lower degree
and as a result require fewer than one fourth as many operations, so
are not a large fraction of the running time.
There are a number of aspects of the Delaunay update problem
which create different requirements on the solvers than with normal
kinetic data structure implementations. Unfortunately, we have not
yet been able to exploit these differences to our advantage. The key
differences are:
• We only care about certificate failures within the time win-
dow corresponding to the interval between the two frames in
question, or even to some shorter interval until the motion
is next due to change. Effort spent computing and tracking
certificate failure times outside this interval is wasted.
• We need to robustly handle degeneracies and numerical issues.
Both issues can be addressed by using interval based solvers.
Method                 Motion                                                  Cost
linear                 move the points linearly in Cartesian space             (f + 6e_li)S(5)
lifted linear          move the points linearly in lifted space                (f + 6e_lli)S(4)
point at a time        move the points one at a time, linearly in lifted space (5f + 6e_pat)S(1)
coordinate at a time   move all the points linearly along each coordinate,     (4f + 6e_cat)S(1)
                       one after another
quadratic              move all the points along quadratic trajectories        (5f + 6e_q)S(8)
                       in lifted space
rebuild                rebuild the Delaunay triangulation from scratch         ≈ 4f static InCircle tests

Table 1: A comparison of the various kinetic data structure based
methods: f is the number of facets in the final triangulation, e_m the
number of flips (events) caused by motion m, and S(d) the cost to
generate and solve a certificate polynomial of degree d. The cost
column gives how many certificate functions will be generated and
solved by the kinetic Delaunay update; the determinant cost is an
estimate of the initialization cost in units of the cost of a static
determinant evaluation (8f for quadratic trajectories). Note that the
base costs of the low-degree kinetic data structures are very close to
that of rebuilding, which agrees with our experimental findings. The
cost for quadratic trajectories is too high to be of practical interest.

Figure 4: Solver costs: the table shows the time in µs of the
various solvers for isolating roots of polynomials of the specified
degree. The exact Descartes solver and the filtered Descartes solver
both perform exact root isolation and allow exact root comparisons.
GSL is a numerical, eigenvalue-based solver using the GNU Scientific
Library , which wraps the ATLAS  linear algebra package. Currently
the eigenvalue-based solver is faster than our solvers for low-degree
polynomials; we expect to be able to bring down the costs of the
interval-based solvers. The polynomials were generated with integer
coefficients chosen uniformly between -1000 and 1000, and the roots
were isolated over the interval from 0 to 1. The timings were done
on a 1.8 GHz Pentium 4.

Extending earlier work published in , we have implemented
solvers that use Descartes' rule of signs and Sturm sequences to
isolate roots in intervals. These solvers naturally act only on an
interval of the real line, and so can ignore roots outside of the interval of
interest. Root isolation and comparison can all be implemented us-
ing field operations, and so exact comparison of roots can be done.
We have also implemented interval based solvers which use float-
ing point filters to accelerate the root isolation and comparison.
Unfortunately, the necessary operations in the kinetic setting are
fundamentally predicates (root comparisons) acting on construc-
tions (roots of certificate polynomials), rather than pure predicates,
making the filtering process more difficult and costly than in the
static case. As shown in Figure 4, the filtered interval solver out-
performs the eigenvalue based solver for high degree polynomials,
but is more than ten times slower for the sorts of low degree poly-
nomials we are interested in. We expect that we can reduce the gap
significantly in the future, but doubt the penalty for using an exact
solver will be as low as that for filtered static computations except
in the case of linear certificates.
We plan to publish a more detailed discussion of our solver pack-
age in a future paper.
4. STATIC UPDATE TECHNIQUES
The kinetic data structure based approaches involve creating a
smooth morph between the initial and final triangulation. In most
instances this is doing too much work, since we do not care about
any of the intermediate state. Therefore, we propose a set of al-
ternative update schemes which only involve computing predicates
on the initial and final coordinates and directly transforming the
initial triangulation into the final answer. These perform better in
practice than the kinetic data structures based approaches, at least
in part because they leverage much of the work that has gone into
optimizing static geometric algorithms.
The most straightforward such technique is to take each point in
turn, remove it from the triangulation, walk to the cell containing its
new location, and then reinsert it. We call this method placement.
Single point removals and insertions preserve the Delaunayhood
of the triangulation, so the final triangulation is guaranteed to be
Delaunay. If motions are small, the walk is cheaper than traversing
the point location hierarchy. In practice, this is an extremely poor
way to update Delaunay triangulations since all the structure, even
the conserved part, is rebuilt. In addition, deleting a point from a
triangulation is more expensive than inserting one.
A particularly poor case would be P = P′. The placement
method would take every point, remove it, re-triangulate the hole