Article (PDF available)

A Greedy Delaunay Based Surface Reconstruction Algorithm

Authors: David Cohen-Steiner, Frank Da

Abstract and Figures

In this paper, we present a new greedy algorithm for surface reconstruction from unorganized point sets. Starting from a seed facet, a piecewise linear surface is grown by adding Delaunay triangles one by one. The most plausible triangles are added first and in such a way as to prevent the appearance of topological singularities. The output is thus guaranteed to be a piecewise linear orientable manifold, possibly with boundary. Experiments show that this method is very fast and achieves topologically correct reconstruction in most cases. Moreover, it can handle surfaces with complex topology, boundaries, and nonuniform sampling.
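As a rough illustration of the growing process described in this abstract, the sketch below grows a surface from a seed facet using a priority queue keyed by a plausibility score. It is not the authors' implementation: the names `candidate_triangles`, `plausibility`, and `valid_extension` are hypothetical placeholders standing in for the paper's Delaunay candidate sets, selection criteria (Section 2), and topological-singularity test (Section 1), and the stitching and boundary handling of the full algorithm are omitted.

```python
import heapq

def edges_of(triangle):
    """Return the three (sorted) vertex-index pairs bounding a triangle."""
    a, b, c = triangle
    return [tuple(sorted(e)) for e in ((a, b), (b, c), (c, a))]

def greedy_reconstruction(seed, candidate_triangles, plausibility, valid_extension):
    """Grow a surface from a seed facet by repeatedly stitching the most
    plausible candidate Delaunay triangle onto the current boundary.

    candidate_triangles: dict mapping a boundary edge to the Delaunay
        triangles incident on it (triangles given as sorted index triples)
    plausibility: lower score = more plausible candidate
    valid_extension: returns False when stitching the triangle onto the
        given edge would create a topological singularity
    """
    surface = {seed}
    heap = []
    for edge in edges_of(seed):
        for t in candidate_triangles.get(edge, []):
            heapq.heappush(heap, (plausibility(t), edge, t))

    while heap:
        _, edge, t = heapq.heappop(heap)
        if t in surface or not valid_extension(surface, edge, t):
            continue  # already used, or it would break the manifold property
        surface.add(t)
        for new_edge in edges_of(t):
            if new_edge == edge:
                continue  # the edge just stitched across is no longer on the boundary
            for cand in candidate_triangles.get(new_edge, []):
                heapq.heappush(heap, (plausibility(cand), new_edge, cand))
    return surface
```

Because the most plausible candidates are popped first and singular configurations are skipped, the output remains an orientable manifold with boundary at every step, which is the invariant the paper maintains.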
ISSN 0249-6399   ISRN INRIA/RR--4564--FR+ENG
Research Report (Rapport de recherche), Theme 2
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE
A Greedy Delaunay Based Surface Reconstruction Algorithm
David Cohen-Steiner — Frank Da
N° 4564
September 2002
INRIA Sophia Antipolis Research Unit
2004, route des Lucioles, BP 93, 06902 Sophia Antipolis Cedex (France)
Telephone: +33 4 92 38 77 77 — Fax: +33 4 92 38 77 65
INTRODUCTION
1. TOPOLOGICAL CONSTRAINTS
Figure 1: Valid candidates
2. SELECTION CRITERIA
2.1 Choice of a candidate triangle for a boundary edge
Figure 2: Curve reconstruction: edges incident on p and their radii
Figure 3: A sliver
2.2 Selection of candidates
3. OVERVIEW OF THE ALGORITHM
3.1 Stitching of a triangle
3.2 Main algorithm
Figure 4: A difficult case at first sight
4. DEALING WITH MULTIPLE COMPONENTS, BOUNDARIES AND SHARP EDGES
4.1 Multiple components
4.2 Boundaries
4.3 Sharp edges
5. EXPERIMENTAL RESULTS
CONCLUSION
Acknowledgements
References
Table 1: Running times (seconds).
Figure 5: Hand
Figure 6: British Museum
Figure 7: Large models
Figure 8: Engine
Figure 9: Hypersheet
Figure 10: Knuckle
Figure 11: Tomo
... Representative methods [33], [34] based on Delaunay triangulation first generate a set of triangular faces directly from the observed P, and then select the optimal subset from them to generate the final triangular mesh. Greedy Delaunay (GD) [33] proposes a greedy algorithm based on topological constraints that selects valid triangles sequentially, where the initial triangles are generated by Delaunay triangulation; the Ball-Pivoting Algorithm (BPA) [34] rolls balls of various radii over the points in P to generate triangles, where every three points touched by a rolling ball form a new triangle if the triangle does not encompass any other points, which can also be regarded as an approximation of Delaunay triangulation. ...
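Both methods described in this excerpt hinge on an empty-ball test: a candidate triangle is retained only if some ball through its three vertices contains no other sample point. The following self-contained numpy sketch, which is illustrative only and not taken from [33] or [34], performs that test for a prescribed ball radius rho, as in the ball-pivoting setting; a spatial index would replace the brute-force distance check in practice.

```python
import numpy as np

def circumcenter_and_radius(p0, p1, p2):
    """Circumcenter and circumradius of a (non-degenerate) triangle in 3D."""
    a, b = p1 - p0, p2 - p0
    ab = np.cross(a, b)
    c = p0 + np.cross(np.dot(a, a) * b - np.dot(b, b) * a, ab) / (2.0 * np.dot(ab, ab))
    return c, np.linalg.norm(c - p0)

def empty_ball_exists(p0, p1, p2, rho, points, eps=1e-9):
    """True if some ball of radius rho through p0, p1, p2 contains no point of
    `points` in its interior (the triangle's own vertices lie on the sphere and
    pass the tolerance test)."""
    c, r = circumcenter_and_radius(p0, p1, p2)
    if rho < r:
        return False  # no ball of radius rho passes through all three points
    n = np.cross(p1 - p0, p2 - p0)
    n /= np.linalg.norm(n)
    h = np.sqrt(max(rho * rho - r * r, 0.0))
    for center in (c + h * n, c - h * n):  # one candidate ball on each side
        dist = np.linalg.norm(points - center, axis=1)
        if np.all(dist >= rho - eps):      # brute force; use a kd-tree in practice
            return True
    return False
```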
... It is hardly feasible to study and empirically compare all the existing methods; instead, we take the strategy of selecting representative ones from each method group of geometric priors, assuming that our studies and conclusions would generalize within the same groups of existing methods. More specifically, we adopt the most representative Greedy Delaunay (GD) [33] and BPA [34] as the methods to be studied for the triangulation-based prior; for priors of surface smoothness, we adopt SPSR [36], which uses the first manner of surface smoothness (cf. Eq. (9)), and RIMLS [53], which uses both manners of surface smoothness (cf. ...
Preprint
Reconstruction of a continuous two-dimensional manifold surface from its raw, discrete point cloud observation is a long-standing problem. The problem is technically ill-posed, and becomes more difficult when various sensing imperfections appear in the point clouds obtained by practical depth scanning. In the literature, a rich set of methods has been proposed, and reviews of existing methods are also available. However, existing reviews fall short of thorough investigations on a common benchmark. The present paper aims to review and benchmark existing methods in the new era of deep learning surface reconstruction. To this end, we contribute a large-scale benchmarking dataset consisting of both synthetic and real-scanned data; the benchmark includes object- and scene-level surfaces and takes into account various sensing imperfections that are commonly encountered in practical depth scanning. We conduct thorough empirical studies by comparing existing methods on the constructed benchmark, and pay special attention to the robustness of existing methods against various scanning imperfections; we also study how different methods generalize in terms of reconstructing complex surface shapes. Our studies help identify the best conditions under which different methods work, and suggest some empirical findings. For example, while deep learning methods are increasingly popular, our systematic studies suggest that, surprisingly, a few classical methods perform even better in terms of both robustness and generalization; our studies also suggest that the practical challenges of misalignment of point sets from multi-view scanning, missing surface points, and point outliers remain unsolved by all existing surface reconstruction methods. We expect that the benchmark and our studies will be valuable both for practitioners and as guidance for new innovations in future research.
... More advanced methods might consider different rules for the selection of neighboring triangles. For instance, in the greedy algorithm [CD04], the neighboring triangles should not create topological singularities and should form a small dihedral angle in order to guarantee a certain smoothness of the output surface. Each iteration chooses the triangle whose smallest empty circumscribed ball has the smallest radius. ...
... The different algorithms developed in this contribution are formulated as global optimizations according to a lexicographic order. The quantities involved in the computation of this lexicographic order (the radii of the smallest enclosing and circumscribing balls of triangles) are natural quantities for measuring triangles and bear resemblance to the criteria used either in the carving algorithm [Boi84] or in the greedy algorithm [CD04]. The global lexicographic optimization along these quantities will be strongly justified by the connections between lexicographic optimal chains and Delaunay triangulations. ...
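The two quantities named in this excerpt, the radius of a triangle's smallest circumscribing ball and the radius of its smallest enclosing ball, coincide for acute (and right) triangles and differ exactly when the triangle is obtuse. The short sketch below illustrates the definitions using standard edge-length formulas; it is a minimal illustration, not the cited implementations, and it assumes a non-degenerate triangle.

```python
import numpy as np

def triangle_radii(p0, p1, p2):
    """Radii of the smallest circumscribing ball and of the smallest enclosing
    ball of a (non-degenerate) 3D triangle.

    The circumradius is R = abc / (4 * area); the smallest enclosing ball is
    the circumscribed ball for an acute (or right) triangle, and the ball
    having the longest edge as a diameter for an obtuse one.
    """
    a = np.linalg.norm(p1 - p2)
    b = np.linalg.norm(p0 - p2)
    c = np.linalg.norm(p0 - p1)
    s = 0.5 * (a + b + c)                                       # semi-perimeter
    area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))   # Heron's formula
    circumradius = a * b * c / (4.0 * area)
    sq = sorted((a * a, b * b, c * c))
    obtuse = sq[0] + sq[1] < sq[2]                              # law of cosines test
    enclosing = max(a, b, c) / 2.0 if obtuse else circumradius
    return circumradius, enclosing
```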
... As urban scenes present many more open surfaces than closed surfaces, we illustrate the closed surface reconstruction on an interior structure (Figure 4.24). We observe that the formulation as a global optimization allows filtering out noise as well as small thin structures, compared with either a level-set approximating method such as Poisson reconstruction [KH13] or the scale-space version of the advancing-front interpolating reconstruction [DMSL11, CD04], both implemented in CGAL [ASG21, DC21, van21]. ...
Thesis
Full-text available
Creating mesh representations for urban scenes is a requirement for numerous modern applications of urban planning, ranging from visualization and inspection to simulation. Adding to the diversity of possible input data -- photography, laser-based acquisitions, and existing geographical information system (GIS) data -- the variety of urban scenes as well as the large-scale nature of the problem makes for a challenging line of research. Working towards an automatic approach to this problem suggests that a one-size-fits-all method is hardly realistic. Two independent approaches to reconstruction from point clouds have thus been investigated in this work, with radically different points of view intended to cover a large number of use cases. In the spirit of the GIS community, the first approach makes strong assumptions on the reconstructed scenes and creates a 2.5D piecewise-planar representation of buildings using an intermediate 2D cell decomposition. Constructing these decompositions from noisy or incomplete data often leads to overly complex representations, which lack the simplicity or regularity expected in this context of reconstruction. Loosely inspired by clustering problems such as mean-shift, the focus is put on simplifying such partitions by formulating an optimization process based on a tradeoff between attachment to the original partition and objectives striving to simplify and regularize the arrangement. This method involves working with point-line duality, defining local metrics for line movements, and optimizing using Riemannian gradient descent. The second approach is intended to be used in contexts where the strong assumptions on the representation of the first approach do not hold. We strive here to be as general as possible and investigate the problem of point cloud meshing in the context of noisy or incomplete data. By considering a specific minimization, corresponding to lexicographic orderings on simplicial chains, polynomial-time algorithms finding lexicographic optimal chains, homologous to a given chain or bounded by a given chain, are derived from algorithms for the computation of simplicial persistent homology. For pseudomanifold complexes in codimension 1, leveraging duality and an augmented version of the disjoint-set data structure improves the complexity of these problem instances to quasi-linear time. By combining these algorithms with a sharp-feature detector in the point cloud, we illustrate different use cases in the context of urban reconstruction.
... For mesh reconstruction, the I&I resampling improves the accuracy of local region detection. Using the classical Delaunay triangulation method [61] on the resampling result, an accurate reconstructed mesh can be obtained directly. ...
... To show the advantages of our method for reconstruction, we compare several popular mesh reconstruction methods, including Scale Space [74], Screened Poisson [27], Advancing Delaunay Reconstruction [61], [75], CVT [19], and Particle-based reconstruction [20]. The CVT and Particle-based reconstruction methods have been discussed before. ...
Article
Full-text available
With the rapid development of 3D scanning technology, 3D point cloud based research and applications are becoming more popular. However, major difficulties still exist that affect the performance of point cloud utilization, including the lack of local adjacency information, non-uniform point density, and control of point numbers. In this paper, we propose a two-step intrinsic and isotropic (I&I) resampling framework to address these three major difficulties. The efficient intrinsic control provides geodesic measurement for a point cloud to improve local region detection and avoids redundant geodesic calculation. Then the geometrically-optimized resampling uses a geometric update process to optimize a point cloud into an isotropic or adaptively-isotropic one. The point cloud density can be adjusted to be globally uniform (isotropic) or locally uniform with geometric feature keeping (adaptively isotropic). The number of points can be controlled based on application requirements or user specification. Experiments show that our point cloud resampling framework achieves outstanding performance in different applications: point cloud simplification, mesh reconstruction and shape registration. We provide the implementation codes of our resampling method at https://github.com/vvvwo/II-resampling.
... However, the reconstruction of a mesh from a point cloud is not trivial for vegetation. Moreover, methods described in the literature for estimating a triangle mesh depend on several parameters that must be adapted to each individual scenario (Cohen-Steiner and Da, 2004). Occlusion is mainly handled using the well-known z-buffer (or depth buffer), where pixels of every image are mapped to at most one 3D point, i.e., the point nearest to the image viewpoint (Jeong et al., 2021; López et al., 2021b, 2021c). ...
Article
Full-text available
Three-dimensional (3D) image mapping of real-world scenarios has a great potential to provide the user with a more accurate scene understanding. This will enable, among others, unsupervised automatic sampling of meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. This path is already being taken by the recent and fast-developing research in computational fields, however, some issues related to computationally expensive processes in the integration of multi-source sensing data remain. Recent studies focused on Earth observation and characterization are enhanced by the proliferation of Unmanned Aerial Vehicles (UAV) and sensors able to capture massive datasets with a high spatial resolution. In this scope, many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and multi-source data fusion. This survey aims to present a summary of previous work according to the most relevant contributions for the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and hyperspectral imagery. Surveyed applications are focused on agriculture and forestry since these fields concentrate most applications and are widely studied. Many challenges are currently being overcome by recent methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches that are also summarized in this work. Finally, as a conclusion, some open issues and future research directions are presented.
... Numerous methods have been proposed to tackle this problem, most commonly variational methods (Zhao et al., 2000), tensor voting (Medioni, Lee and Tang, 2000), implicit surfaces (Hoppe et al., 1994), and Delaunay triangulations (Cohen-Steiner and Da, 2004). Delaunay-based greedy algorithms reconstruct the surface as the union of sequentially selected triangles. ...
Conference Paper
Full-text available
The 27th EG-ICE International Workshop 2020 brings together international experts working at the interface between advanced computing and modern engineering challenges. Many engineering tasks require open-world resolutions to support multi-actor collaboration, coping with approximate models, providing effective engineer-computer interaction, search in multi-dimensional solution spaces, accommodating uncertainty, including specialist domain knowledge, performing sensor-data interpretation and dealing with incomplete knowledge. While results from computer science provide much initial support for resolution, adaptation is unavoidable and most importantly, feedback from addressing engineering challenges drives fundamental computer-science research. Competence and knowledge transfer goes both ways.
... Multiple algorithms are found in the bibliography; some require the normal for each point, while others are robust enough to work without surface orientation, e.g. Advancing Front reconstruction (Cohen-Steiner and Da, 2004). Furthermore, surface reconstructions are commonly based on time-consuming algorithms that rely on multiple parameters that are not consistent across different point clouds, e.g. a search radius. ...
Article
Full-text available
Thermal infrared (TIR) images acquired from Unmanned Aircraft Vehicles (UAV) are gaining scientific interest in a wide variety of fields. However, the reconstruction of three-dimensional (3D) point clouds from consumer-grade TIR images presents multiple drawbacks as a consequence of low resolution and induced aberrations. Consequently, these problems may lead photogrammetric techniques, such as Structure from Motion (SfM), to generate poor results. This work proposes the use of RGB point clouds estimated from SfM as the input for building thermal point clouds. For that purpose, RGB and thermal imagery are registered using the Enhanced Correlation Coefficient (ECC) algorithm after removing acquisition errors, thus allowing us to project TIR images onto an RGB point cloud. Furthermore, we consider several methods to provide accurate thermal values for each 3D point. First, the occlusion problem is solved through two different approaches, so that points that are not visible from a viewing angle do not erroneously receive values from foreground objects. Then, we propose a flexible method to aggregate multiple thermal values that accounts for the dispersion between the aggregation and the image samples, thereby minimizing measurement error. A naive classification algorithm is then applied to the thermal point clouds as a case study for evaluating the temperature of vegetation and ground points. As a result, our approach builds thermal point clouds with up to 798.69% higher point density than results from other commercial solutions. Moreover, it minimizes the build time by using parallel computing for time-consuming tasks. Despite obtaining larger point clouds, we report up to 96.73% less processing time per 3D point.
Article
We introduce neural dual contouring (NDC), a new data-driven approach to mesh reconstruction based on dual contouring (DC). Like traditional DC, it produces exactly one vertex per grid cell and one quad for each grid edge intersection, a natural and efficient structure for reproducing sharp features. However, rather than computing vertex locations and edge crossings with hand-crafted functions that depend directly on difficult-to-obtain surface gradients, NDC uses a neural network to predict them. As a result, NDC can be trained to produce meshes from signed or unsigned distance fields, binary voxel grids, or point clouds (with or without normals); and it can produce open surfaces in cases where the input represents a sheet or partial surface. During experiments with five prominent datasets, we find that NDC, when trained on one of the datasets, generalizes well to the others. Furthermore, NDC provides better surface reconstruction accuracy, feature preservation, output complexity, triangle quality, and inference time in comparison to previous learned (e.g., neural marching cubes, convolutional occupancy networks) and traditional (e.g., Poisson) methods. Code and data are available at https://github.com/czq142857/NDC.
Article
Ceramics analysis, classification, and reconstruction are essential for understanding an archaeological site's history, economy, and art. Traditional methods used by archaeologists for their investigations are time-consuming and neither reproducible nor repeatable. The results depend on the operator's subjectivity, specialization, personal skills, and professional experience. Consequently, only a few indicative samples with characteristic components are studied, with wide uncertainties. Several automatic methods for analysing sherds have been published in recent years to overcome these limitations. To help the researchers involved, this paper aims to provide a complete and critical analysis of the state of the art, up to the end of 2021, of the most important published methods on pottery analysis, classification, and reconstruction from a 3D discrete manifold model. To this end, papers in English indexed by the Scopus database are selected using the following keywords: “computer methods in archaeology”, “3D archaeology”, “3D reconstruction”, “3D puzzling”, “automatic feature recognition and reconstruction”. Additional references found through the reading of selected papers complete the list. The 125 selected papers, referring only to archaeological potteries, are divided into six groups: 3D digitalization, virtual prototyping, fragment feature processing, geometric model processing of whole-shape pottery, 3D vessel reconstruction from its fragments, classification, and 3D information systems for archaeological pottery visualization and documentation. In the present review, the techniques considered for these issues are critically analysed to highlight their pros and cons and provide recommendations for future research.
Article
The task of explicit surface reconstruction is to generate a surface mesh by interpolating a given point cloud. Explicit surface reconstruction is necessary when the point cloud is required to appear exactly on the surface. However, for non-perfect input, e.g. lack of normals, low density, irregular distribution, thin and tiny parts, high genus, etc., a robust explicit reconstruction method that can generate a high-quality manifold triangulation is missing. We propose a robust explicit surface reconstruction method that starts from an initial simple surface mesh, alternately performs a Filmsticking step and a Sculpting step on the initial mesh, and converges when the surface mesh interpolates all input points (except outliers) and remains stable. The Filmsticking step minimizes the geometric distance between the surface mesh and the point cloud by iteratively applying a restricted Voronoi diagram technique to the surface mesh, while the Sculpting step bootstraps the Filmsticking iteration out of local minima by applying appropriate geometric and topological changes to the surface mesh. Our algorithm is fully automatic and produces high-quality surface meshes for non-perfect inputs that are typically considered challenging for prior state-of-the-art methods. We conducted extensive experiments on simulated scans and real scans to validate the effectiveness of our approach.
Article
Full-text available
We give a simple combinatorial algorithm that computes a piecewise-linear approximation of a smooth surface from a finite set of sample points. The algorithm uses Voronoi vertices to remove triangles from the Delaunay triangulation. We prove the algorithm correct by showing that for densely sampled surfaces, where density depends on a local feature size function, the output is topologically valid and convergent (both pointwise and in surface normals) to the original surface. We briefly describe an implementation of the algorithm and show example outputs.
Conference Paper
Full-text available
Current surface reconstruction algorithms perform satisfactorily on well-sampled, smooth surfaces without boundaries. However, these algorithms face difficulty with undersampling. Cases of undersampling are prevalent in real data, since such data often sample only part of the boundary of an object, or are derived from a surface with high curvature or nonsmoothness. In this paper we present an algorithm to detect the boundaries where dense sampling stops and undersampling begins. This information can be used to reconstruct surfaces with boundaries, and also to localize small and sharp features where undersampling usually happens. We report the effectiveness of the algorithm with a number of experimental results. Theoretically, we justify the algorithm under some mild assumptions that are valid for most practical data.
Article
We construct a graph on a planar point set, which captures its shape in the following sense: if a smooth curve is sampled densely enough, the graph on the samples is a polygonalization of the curve, with no extraneous edges. The required sampling density varies with the local feature size on the curve, so that areas of less detail can be sampled less densely. We give two different graphs that, in this sense, reconstruct smooth curves: a simple new construction which we call the crust, and the β-skeleton, using a specific value of β.
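In 2D the crust summarized above admits a very short implementation: add the Voronoi vertices of the samples to the point set, compute the Delaunay triangulation of the augmented set, and keep only those edges whose endpoints are both original samples. The sketch below, using scipy, is a minimal illustration of that construction and assumes the paper's sampling-density condition holds.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

def crust_2d(samples):
    """Crust of a planar point set: Delaunay edges of (samples + Voronoi
    vertices of samples) whose endpoints are both original samples."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    augmented = np.vstack([samples, Voronoi(samples).vertices])
    tri = Delaunay(augmented)
    edges = set()
    for i, j, k in tri.simplices:                 # each simplex is a triangle
        for u, v in ((i, j), (j, k), (k, i)):
            if u < n and v < n:                   # keep sample-to-sample edges only
                edges.add((min(u, v), max(u, v)))
    return sorted(edges)
```

On a sufficiently dense sampling of a smooth closed curve, the returned edge set is exactly the polygon connecting consecutive samples; undersampled regions show up as missing edges.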
Article
It has been proposed recently that the skeleton of a shape can be computed using the Voronoi diagram of a discrete sample set of the shape boundary. This method avoids many of the complications encountered when computing the skeleton directly from an image because it is based on a continuous-domain model for shapes. In order to make better use of this new approach, it is necessary to establish a bridge between the continuous domain skeleton and its approximation obtained from the discrete boundary sample set. In this paper, the skeleton and Voronoi diagram formulations are briefly reviewed and elaborated upon to establish criteria for the functions to be continuous. Then the new continuity results are related to the discrete sample set model in order to establish conditions under which the skeleton approximation converges to the exact continuous skeleton.
Article
In this paper, we address the problem of curve and surface reconstruction from sets of points. We introduce regular interpolants, which are polygonal approximations of curves and surfaces satisfying a new regularity condition. This new condition, which is an extension of the popular notion of r-sampling to the practical case of discrete shapes, seems much more realistic than previously proposed conditions based on properties of the underlying continuous shapes. Indeed, contrary to previous sampling criteria, our regularity condition can be checked on the basis of the samples alone and can be turned into a provably correct curve and surface reconstruction algorithm. Our reconstruction methods can also be applied to non-regular and unorganized point sets, revealing a larger part of the inner structure of such point sets than past approaches. Several real-size reconstruction examples validate the new method.
Article
In this paper we consider a fundamental visualization problem: shape reconstruction from an unorganized data set. A new minimal-surface-like model and its variational and partial differential equation (PDE) formulation are introduced. In our formulation only distance to the data set is used as our input. Moreover, the distance is computed with optimal speed using a new numerical PDE algorithm. The data set can include points, curves, and surface patches. Our model has a natural scaling in the nonlinear regularization that allows flexibility close to the data set while it also minimizes oscillations between data points. To find the final shape, we continuously deform an initial surface following the gradient flow of our energy functional. An offset (an exterior contour) of the distance function to the data set is used as our initial surface. We have developed a new and efficient algorithm to find this initial surface. We use the level set method in our numerical computation in order to capture the deformation of the initial surface and to find an implicit representation (using the signed distance function) of the final shape on a fixed rectangular grid. Our variational/PDE approach using the level set method allows us to handle complicated topologies and noisy or highly nonuniform data sets quite easily. The constructed shape is smoother than any piecewise linear reconstruction. Moreover, our approach is easily scalable for different resolutions and works in any number of space dimensions.