Conference Paper

Algorithmic Approach to Planar Void Detection and Validation in Point Clouds


Abstract

When using exploratory robotics for nuclear decommissioning, the generation of accurate and complete maps of the environment is critical. One format these maps can take is point clouds. However, when generating point clouds it is likely that voids (resulting, for example, from shadows cast by features within the environment) will exist within them. Previous research studies have developed techniques that enable such voids to be interpolated. Unfortunately, for hazardous environments, this interpolation can be detrimental and have severe safety implications. This paper proposes a new algorithmic method for simplifying the detection of voids in point clouds. Once detected, the voids are validated to confirm they lie within the scannable part of the environment; if they do, they are marked for further investigation by the robot. This enables a more complete point cloud to be generated. To demonstrate the capabilities of the method, it was initially applied to a set of simplified scenes, where it detected all the voids present, whilst also being compared to another algorithm. The method was then applied to a more realistic scenario, in which many, but not all, of the voids were correctly identified. A discussion follows explaining why some voids were not detected and how future research aims to address this.
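The detect-then-validate idea described in the abstract can be illustrated with a minimal 2-D occupancy-grid sketch. This is not the authors' algorithm: the grid resolution and the per-row footprint heuristic used to approximate the "scannable" region are assumptions for illustration only.

```python
import numpy as np

def find_void_cells(points_xy, cell=0.5):
    """Bin 2-D points into a grid and flag empty interior cells.

    Empty cells inside the scan footprint are candidate voids;
    cells outside the footprint were never scannable, which
    mirrors the validation idea in the abstract.
    """
    mins = points_xy.min(axis=0)
    idx = np.floor((points_xy - mins) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    occ = np.zeros(shape, dtype=bool)
    occ[idx[:, 0], idx[:, 1]] = True
    # Crude footprint proxy: per-row span between first and last
    # occupied columns stands in for the scannable region.
    voids = []
    for r in range(shape[0]):
        cols = np.flatnonzero(occ[r])
        if cols.size < 2:
            continue
        for c in range(cols[0], cols[-1] + 1):
            if not occ[r, c]:
                voids.append((r, c))
    return occ, voids

# A 5x5 grid of points with the centre missing: one interior void.
ring = [(x, y) for x in range(5) for y in range(5) if (x, y) != (2, 2)]
occ, voids = find_void_cells(np.array(ring, float), cell=1.0)
```

The flagged cells would then be handed to the robot as targets for re-scanning rather than being interpolated away.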


... Prior efforts towards improving the accuracy of generated 3D meshes often focused on regularising the camera depth maps [1], using expensive high-quality sensors [2], or regularising the dense-reconstruction output [3]. While regularisation-based approaches impose structure to smooth local surfaces, gross reconstruction errors still remain. ...
Article
Dense reconstructions often contain errors that prior work has so far minimised using high-quality sensors and regularising the output. Nevertheless, errors still persist. This paper proposes a machine learning technique to identify errors in three-dimensional (3D) meshes. Beyond simply identifying errors, our method quantifies both the magnitude and the direction of depth estimate errors when viewing the scene. This enables us to improve the reconstruction accuracy. We train a suitably deep network architecture with two 3D meshes: a high-quality laser reconstruction, and a lower-quality stereo image reconstruction. The network predicts the amount of error in the lower-quality reconstruction with respect to the high-quality one, having only viewed the former through its input. We evaluate our approach by correcting two-dimensional (2D) inverse-depth images extracted from the 3D model, and show that our method improves the quality of these depth reconstructions by up to 10% relative RMSE.
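The training signal this abstract describes, a signed per-pixel depth error carrying both magnitude and direction, can be illustrated on hypothetical inverse-depth arrays. The 50% correction factor below is an arbitrary stand-in for a trained network's imperfect prediction, not a figure from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inverse-depth images: a "laser" ground truth and a
# noisier "stereo" estimate of the same scene.
truth = rng.uniform(0.1, 1.0, size=(4, 4))
stereo = truth + rng.normal(0.0, 0.05, size=truth.shape)

# The regression target: signed per-pixel error, which encodes
# both how large the depth mistake is and in which direction.
target = stereo - truth

# A perfect prediction would subtract the error away entirely;
# applying half of it mimics a partial correction.
corrected = stereo - 0.5 * target

def rmse(a):
    return np.sqrt(np.mean((a - truth) ** 2))

improvement = 1.0 - rmse(corrected) / rmse(stereo)
```

Halving the error halves the RMSE, so `improvement` comes out at 0.5 in this toy setup.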
Article
Full-text available
We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We more particularly focus our work on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we have to search for the 3D model which maximizes some probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.
Conference Paper
Full-text available
Decomposing sensory measurements into relevant parts is a fundamental prerequisite for solving complex tasks, e.g., in the field of mobile manipulation in domestic environments. In this paper, we present a fast approach to surface reconstruction in range images by means of approximate polygonal meshing. The obtained local surface information and neighborhoods are then used to 1) smooth the underlying measurements, and 2) segment the image into planar regions and other geometric primitives. An evaluation using publicly available data sets shows that our approach does not rank behind state-of-the-art algorithms while still processing range images at high frame rates.
Conference Paper
Full-text available
With the advent of new, low-cost 3D sensing hardware such as the Kinect, and continued efforts in advanced point cloud processing, 3D perception gains more and more importance in robotics, as well as other fields. In this paper we present one of our most recent initiatives in the areas of point cloud perception: PCL (Point Cloud Library - http://pointclouds.org). PCL presents an advanced and extensive approach to the subject of 3D perception, and it is meant to provide support for all the common 3D building blocks that applications need. The library contains state-of-the-art algorithms for: filtering, feature estimation, surface reconstruction, registration, model fitting and segmentation. PCL is supported by an international community of robotics and perception researchers. We provide a brief walkthrough of PCL including its algorithmic capabilities and implementation strategies.
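As an illustration of one of PCL's filtering building blocks, the NumPy sketch below mimics what PCL's VoxelGrid filter does: replace all points falling in a voxel by their centroid. This is not PCL's API (in PCL one would use `pcl::VoxelGrid` with `setLeafSize`); it only reproduces the idea.

```python
import numpy as np

def voxel_downsample(points, leaf=0.1):
    """Replace the points in each voxel by their centroid,
    in the spirit of PCL's VoxelGrid filter."""
    keys = np.floor(points / leaf).astype(int)
    # Group points by voxel key and accumulate per-voxel sums.
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True,
                               return_counts=True)
    sums = np.zeros((counts.size, points.shape[1]))
    np.add.at(sums, inv, points)
    return sums / counts[:, None]

pts = np.array([[0.01, 0.0, 0.0],
                [0.03, 0.0, 0.0],   # same voxel as the first point
                [0.25, 0.0, 0.0]])  # a different voxel
down = voxel_downsample(pts, leaf=0.1)
```

The two points sharing a voxel collapse to their centroid, so three input points become two output points.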
Article
Full-text available
Optical acquisition devices often produce noisy and incomplete data sets, due to occlusion, unfavorable surface reflectance properties, or geometric restrictions in the scanner setup. We present a novel approach for obtaining a complete and consistent 3D model representation from such incomplete surface scans, using a database of 3D shapes to provide geometric priors for regions of missing data. Our method retrieves suitable context models from the database, warps the retrieved models to conform with the input data, and consistently blends the warped models to obtain the final consolidated 3D shape. We define a shape matching penalty function and corresponding optimization scheme for computing the non-rigid alignment of the context models with the input data. This allows a quantitative evaluation and comparison of the quality of the shape extrapolation provided by each model. Our algorithms are explicitly designed to accommodate uncertain data and can thus be applied directly to raw scanner output. We show on a variety of real data sets how consistent models can be obtained from highly incomplete input. The information gained during the shape completion process can be utilized for future scans, thus continuously simplifying the creation of complex 3D models.
Conference Paper
Full-text available
Creating models of real scenes is a complex task for which the use of traditional modelling techniques is inappropriate. For this task, laser rangefinders are frequently used to sample the scene from several viewpoints, with the resulting range images integrated into a final model. In practice, due to surface reflectance properties, occlusions and accessibility limitations, certain areas of the scenes are usually not sampled, leading to holes and introducing undesirable artifacts in the resulting models. We present an algorithm for filling holes on surfaces reconstructed from point clouds. The algorithm is based on moving least squares and can recover both geometry and shading information, providing a good alternative when the properties to be reconstructed are locally smooth. The reconstruction process is mostly automatic and the sampling rate in the reconstructed areas follows the given samples. We demonstrate the use of the algorithm on both real and synthetic data sets to obtain complete geometry and reasonable shading.
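The core moving-least-squares step the abstract relies on, fitting a locally weighted plane and projecting a point onto it, can be sketched as follows. This is degree-0 MLS only, a simplification of the full method (which also recovers shading); the Gaussian weight radius is an arbitrary choice.

```python
import numpy as np

def mls_project(q, points, radius=1.0):
    """Project query point q onto a moving-least-squares plane
    fitted to nearby samples (degree-0 MLS: a local plane)."""
    d = np.linalg.norm(points - q, axis=1)
    w = np.exp(-(d / radius) ** 2)
    centroid = (w[:, None] * points).sum(0) / w.sum()
    # Weighted covariance; its smallest eigenvector is the normal.
    diffs = points - centroid
    cov = (w[:, None, None] * diffs[:, :, None] * diffs[:, None, :]).sum(0)
    normal = np.linalg.eigh(cov)[1][:, 0]
    # Remove the component of (q - centroid) along the normal.
    return q - np.dot(q - centroid, normal) * normal

# Noisy samples of the z = 0 plane; a query above it snaps down.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                       rng.normal(0, 0.01, 200)])
proj = mls_project(np.array([0.0, 0.0, 0.5]), pts)
```

Evaluating such projections at new sample locations inside a hole is what lets the method resample missing regions at a rate that follows the given samples.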
Conference Paper
We present an automatic system for planar 3D modeling of building interiors from point cloud data generated by range scanners. This is motivated by the observation that most building interiors may be modeled as a collection of planes representing ceilings, floors, walls and staircases. Our proposed system, which employs model-fitting and RANSAC, is capable of detecting large-scale architectural structures, such as ceilings and floors, as well as small-scale architectural structures, such as staircases. We experimentally validate our system on a number of challenging point clouds of real architectural scenes.
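The model-fitting-plus-RANSAC core of such a system can be sketched with a minimal plane-RANSAC loop. This is a generic dominant-plane fit, not the paper's full pipeline for ceilings, floors, walls and staircases; the iteration count and inlier tolerance are arbitrary.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, rng=None):
    """Fit a dominant plane with RANSAC: repeatedly sample three
    points, form their plane, and keep the plane with most inliers."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n /= norm
        dist = np.abs((points - a) @ n)  # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# A floor (z = 0) with a few outliers floating well above it.
rng = np.random.default_rng(2)
floor = np.column_stack([rng.uniform(0, 5, (300, 2)), np.zeros(300)])
outliers = rng.uniform(0, 5, (20, 3)) + [0, 0, 1]  # z in [1, 6]
mask = ransac_plane(np.vstack([floor, outliers]))
```

Removing the inliers and repeating the loop recovers the next-largest plane, which is how a scene decomposes into ceilings, floors and walls.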
Article
Models of non-trivial objects resulting from a 3D data acquisition process (e.g. laser range scanning) often contain holes due to occlusion, reflectance or transparency. As point set surfaces are unstructured surface representations with no adjacency or connectivity information, defining and detecting holes is a non-trivial task. In this paper we investigate properties of point sets to derive criteria for automatic hole detection. For each point, we combine several criteria into an integrated boundary probability. A final boundary loop extraction step uses this probability and exploits additional coherence properties of the boundary to derive a robust and automatic hole detection algorithm.
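One such criterion, the angular gap a point sees among its nearest neighbours, can be sketched in 2-D. The paper combines several criteria evaluated in 3-D tangent planes; this shows only the gap ingredient, with an arbitrary neighbourhood size.

```python
import numpy as np

def angle_gap(p, points, k=8):
    """Largest angular gap among a point's k nearest neighbours,
    seen from the point itself (2-D for clarity). Interior points
    are surrounded (small gap); boundary points show a gap near
    pi or more, one ingredient of a boundary probability."""
    d = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(d)[1:k + 1]] - p   # skip p itself
    ang = np.sort(np.arctan2(nbrs[:, 1], nbrs[:, 0]))
    # Gaps between consecutive angles, including the wrap-around.
    gaps = np.diff(np.append(ang, ang[0] + 2 * np.pi))
    return gaps.max()

# A dense grid: its centre is interior, its corner is on the boundary.
g = np.array([(x, y) for x in range(5) for y in range(5)], float)
interior = angle_gap(np.array([2.0, 2.0]), g)
corner = angle_gap(np.array([0.0, 0.0]), g)
```

Thresholding (or softly weighting) this gap per point, then chaining high-probability points into loops, is the shape of the full algorithm.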
Article
The recognition of objects in three-dimensional space is a desirable capability of a computer vision system. Range images, which directly measure 3-D surface coordinates of a scene, are well suited for this task. In this paper we report a procedure to detect connected planar, convex, and concave surfaces of 3-D objects. This is accomplished in three stages. The first stage segments the range image into "surface patches" by a square error criterion clustering algorithm using surface points and associated surface normals. The second stage classifies these patches as planar, convex, or concave based on a non-parametric statistical test for trend, curvature values, and eigenvalue analysis. In the final stage, boundaries between adjacent surface patches are classified as crease or noncrease edges, and this information is used to merge compatible patches to produce reasonable faces of the object(s). This procedure has been successfully applied to a large number of real and synthetic images, four of which we present in this paper.
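The eigenvalue-analysis ingredient of the second stage can be illustrated by the covariance spectrum of a patch: a planar patch has one near-zero eigenvalue, while a curved patch does not. This is a simplified stand-in for the paper's full trend and curvature tests; the synthetic patches are assumptions for illustration.

```python
import numpy as np

def surface_variation(patch):
    """Smallest-eigenvalue fraction of a patch's covariance
    spectrum: near zero for planar patches, larger when curved."""
    cov = np.cov(patch.T)
    evals = np.linalg.eigvalsh(cov)   # ascending order
    return evals[0] / evals.sum()

rng = np.random.default_rng(3)
xy = rng.uniform(-1, 1, (500, 2))
plane = np.column_stack([xy, np.zeros(500)])            # flat patch
bowl = np.column_stack([xy, 0.5 * (xy ** 2).sum(1)])    # curved patch

flat = surface_variation(plane)
curved = surface_variation(bowl)
```

A classifier would threshold this variation (plus the sign of the curvature) to label patches planar, convex or concave.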
Article
In the process of generating a surface model from point cloud data, a segmentation that extracts the edges and partitions the three-dimensional (3D) point data is necessary and plays an important role in fitting surface patches and applying the scan data to the manufacturing process. Many researchers have tried to develop segmentation methods by fitting curves or surfaces in order to extract geometric information, such as edges and smooth regions, from the scan data. However, the surface- or curve-fitting tasks take a long time and it is also difficult to extract the exact edge points because the scan data consist of discrete points and the edge points are not always included in these data. In this research, a new method for segmenting the point cloud data is proposed. The proposed algorithm uses the octree-based 3D-grid method to handle a large amount of unordered sets of point data. The final 3D-grids are constructed through a refinement process and iterative subdivision of cells using the normal values of points. This 3D-grid method enables us to extract edge-neighborhood points while considering the geometric shape of a part. The proposed method is applied to two quadric models and the results are discussed.
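A single refinement step of such a normal-driven grid can be sketched as follows: cells whose point normals disagree are flagged as edge-neighbourhood cells and would be subdivided. The one-pass structure and the dot-product threshold are simplifications of the octree recursion described above.

```python
import numpy as np

def split_cells(points, normals, cell=1.0, thresh=0.95):
    """One refinement step of a grid segmentation: flag cells whose
    normals disagree (min pairwise dot product below `thresh`).
    These are the edge-neighbourhood cells to subdivide next."""
    keys = np.floor(points / cell).astype(int)
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    flagged = []
    for i, key in enumerate(uniq):
        n = normals[inv == i]
        dots = n @ n.T          # pairwise normal agreement
        if dots.min() < thresh:
            flagged.append(tuple(key))
    return flagged

# Two flat faces whose crease falls inside cell (1, 0): that cell
# mixes +z and +x normals and gets flagged; pure cells do not.
pts = np.array([[0.2, 0.2], [0.8, 0.2],   # face A, cell (0, 0)
                [1.2, 0.2], [1.8, 0.2]])  # crease cell (1, 0)
nrm = np.array([[0, 0, 1], [0, 0, 1],
                [0, 0, 1], [1, 0, 0]], float)
flagged = split_cells(pts, nrm, cell=1.0)
```

Recursing on the flagged cells with a halved cell size reproduces the octree behaviour: edge neighbourhoods get fine cells while smooth regions stay coarse.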
Article
In this paper we present an automatic algorithm to detect basic shapes in unorganized point clouds. The algorithm decomposes the point cloud into a concise, hybrid structure of inherent shapes and a set of remaining points. Each detected shape serves as a proxy for a set of corresponding points. Our method is based on random sampling and detects planes, spheres, cylinders, cones and tori. For models with surfaces composed of these basic shapes only, for example, CAD models, we automatically obtain a representation solely consisting of shape proxies. We demonstrate that the algorithm is robust even in the presence of many outliers and a high degree of noise. The proposed method scales well with respect to the size of the input point cloud and the number and size of the shapes within the data. Even point sets with several millions of samples are robustly decomposed within less than a minute. Moreover, the algorithm is conceptually simple and easy to implement. Application areas include measurement of physical parameters, scan registration, surface compression, hybrid rendering, shape classification, meshing, simplification, approximation and reverse engineering.
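The minimal-sample strategy can be sketched for the sphere case: four points determine a sphere through a linear system, and random sampling scores each candidate by its inlier count. The paper's efficiency tricks (localised sampling, lazy scoring) are omitted, and the tolerances here are arbitrary.

```python
import numpy as np

def sphere_from_points(p):
    """Sphere through four points: solve the linear system for
    x^2 + y^2 + z^2 + D x + E y + F z + G = 0."""
    A = np.column_stack([p, np.ones(4)])
    b = -(p ** 2).sum(axis=1)
    D, E, F, G = np.linalg.solve(A, b)
    centre = -0.5 * np.array([D, E, F])
    r2 = centre @ centre - G
    return centre, np.sqrt(max(r2, 0.0))

def ransac_sphere(points, iters=100, tol=0.01, rng=None):
    """Minimal-sample RANSAC for a single sphere; planes, cylinders,
    cones and tori are handled analogously with their own samples."""
    if rng is None:
        rng = np.random.default_rng(0)
    best = (None, None, -1)
    for _ in range(iters):
        try:
            sample = points[rng.choice(len(points), 4, replace=False)]
            c, r = sphere_from_points(sample)
        except np.linalg.LinAlgError:
            continue  # coplanar sample, no unique sphere
        n = (np.abs(np.linalg.norm(points - c, axis=1) - r) < tol).sum()
        if n > best[2]:
            best = (c, r, n)
    return best

# Points sampled on a unit sphere centred at (1, 2, 3).
rng = np.random.default_rng(4)
v = rng.normal(size=(200, 3))
pts = v / np.linalg.norm(v, axis=1, keepdims=True) + [1, 2, 3]
centre, radius, inliers = ransac_sphere(pts)
```

Each accepted shape then becomes a proxy for its inliers, which are removed before the loop searches for the next shape.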
Conference Paper
We address the problem of building watertight 3D models from surfaces that contain holes - for example, sets of range scans that observe most but not all of a surface. We specifically address situations in which the holes are too geometrically and topologically complex to fill using triangulation algorithms. Our solution begins by constructing a signed distance function, the zero set of which defines the surface. Initially, this function is defined only in the vicinity of observed surfaces. We then apply a diffusion process to extend this function through the volume until its zero set bridges whatever holes may be present. If additional information is available, such as known-empty regions of space inferred from the lines of sight to a 3D scanner, it can be incorporated into the diffusion process. Our algorithm is simple to implement, is guaranteed to produce manifold non-interpenetrating surfaces, and is efficient to run on large datasets because computation is limited to areas near holes.
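The diffusion step can be sketched on a small 2-D grid: cells with known signed distances stay clamped while unknown cells relax toward the average of their neighbours, so the zero level set extends across the hole. The periodic boundaries implied by `np.roll` are a shortcut acceptable for this toy interior hole, not part of the described method.

```python
import numpy as np

def diffuse_sdf(sdf, iters=500):
    """Fill unknown (NaN) cells of a signed-distance grid by
    repeated neighbour averaging; known cells stay clamped.
    The zero level set then bridges whatever holes are present."""
    known = ~np.isnan(sdf)
    f = np.where(known, sdf, 0.0)
    for _ in range(iters):
        avg = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
        f = np.where(known, sdf, avg)   # clamp observed cells
    return f

# A vertical surface at column 2 (negative left, positive right),
# with the middle row unobserved: a hole in the scan.
sdf = np.tile(np.array([-2., -1., 0., 1., 2.]), (5, 1))
sdf[2, :] = np.nan                      # the hole
filled = diffuse_sdf(sdf)
```

After relaxation the unobserved row changes sign around column 2, so extracting the zero set yields a surface that bridges the hole.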
Conference Paper
Presents a simple, robust and practical method for object simplification for applications where gradual elimination of high-frequency details is desired. This is accomplished by sampling and low-pass filtering the object into multi-resolution volume buffers and applying the marching cubes algorithm to generate a multi-resolution triangle-mesh hierarchy. Our method simplifies the genus of objects and can also help existing object simplification algorithms achieve better results. At each level of detail, a multi-layered mesh can be used for an optional and efficient antialiased rendering.
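One level of the described low-pass pyramid can be sketched as 2x2x2 block averaging of the volume buffer; marching cubes would then mesh each level (that step is omitted here, and the binary test volume is an assumption for illustration).

```python
import numpy as np

def halve_volume(vol):
    """One pyramid level: average 2x2x2 blocks, i.e. a box
    low-pass filter followed by downsampling. Marching cubes
    would then extract a mesh at this coarser resolution."""
    a, b, c = (s // 2 for s in vol.shape)
    v = vol[:2 * a, :2 * b, :2 * c]      # trim odd remainders
    return v.reshape(a, 2, b, 2, c, 2).mean(axis=(1, 3, 5))

# A 4^3 binary occupancy volume with one solid 2x2x2 corner block.
vol = np.zeros((4, 4, 4))
vol[:2, :2, :2] = 1.0
small = halve_volume(vol)
```

Repeated halving yields the multi-resolution hierarchy, with small high-frequency features (and small topological handles) averaged away at coarser levels.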