Article

Feature preserving multiresolution subdivision and simplification of point clouds: A conformal geometric algebra approach

Abstract

Because of their huge volume and complex structure, the simplification of point clouds is an important technique in practical applications. However, traditional algorithms often lose geometric information and have no dynamically expandable structure. In this paper, a new simplification algorithm based on conformal geometric algebra is proposed. First, a multiresolution subdivision is constructed by a sphere tree, which computes the minimal bounding spheres with the help of k-means clustering; then two kinds of simplification methods, which take full advantage of the convenience of distance computation in conformal space, are applied to carry out self-adapting simplification. Finally, several comparisons with the original data and with other algorithms are performed, ranging from visualization to parameter contrast. The results show that the proposed algorithm performs well not only on local details but also on the overall error rate.
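The subdivision step described in the abstract can be sketched in a few lines. Below is a minimal, illustrative Python sketch, not the authors' code: scikit-learn's KMeans partitions the cloud, and each cluster's minimal bounding sphere is approximated by its centroid and farthest-member radius rather than by an exact smallest-enclosing-sphere solver.

```python
import numpy as np
from sklearn.cluster import KMeans

def sphere_tree_level(points, k):
    """One subdivision level of a sphere tree: cluster the cloud with
    k-means and bound each cluster by a sphere (center, radius)."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(points)
    spheres = []
    for c in range(k):
        cluster = points[labels == c]
        center = cluster.mean(axis=0)  # centroid approximates the sphere center
        radius = np.linalg.norm(cluster - center, axis=1).max()
        spheres.append((center, radius, cluster))
    return spheres

# Recursing on each cluster whose radius exceeds a tolerance yields a
# multiresolution subdivision; points in a leaf sphere can then be
# replaced by a representative during simplification.
pts = np.random.rand(10000, 3)
for center, radius, members in sphere_tree_level(pts, 8):
    print(center.round(3), round(radius, 3), len(members))
```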

... The algorithm is suitable for point clouds with different densities, and the reconstruction effect of the simplified point cloud is close to that of the original point cloud model. Yuan et al. [20] proposed a simplification algorithm based on conformal geometric algebra: a sphere tree is used to construct the multi-resolution subdivision, k-means clustering is used to calculate the minimum bounding spheres, and adaptive point cloud simplification is then carried out. ...
... The feature sharpness of these four data sets varies, and their underlying surfaces are uneven. The experiment evaluated the method from two aspects, namely simplification and reconstruction effects, and compared it with the grid-based and curvature-based simplification methods in Geomagic and the feature-preserving [20] simplification method. ...
... To further verify the effectiveness of our method, in this chapter we used the grid-based and curvature-based simplification methods in Geomagic and the feature-preserving [20] simplification method to simplify the data of the Bunny and Mask A models and compared the results with our method. ...
Article
Full-text available
High-precision and high-density three-dimensional point cloud models usually contain redundant data, which implies extra time and hardware costs in the subsequent data processing stage. To analyze and extract data more effectively, the point cloud must be simplified before data processing. Given that point cloud simplification must be sensitive to features to ensure that more valid information can be saved, in this paper a new simplification algorithm for scattered point clouds with feature preservation is proposed, which reduces the amount of data while retaining the features of the data. First, the Delaunay neighborhood of the point cloud is constructed, and the edge points of the point cloud are extracted from the edge distribution characteristics. Second, the moving least-squares method is used to obtain the normal vectors of the point cloud and the valley-ridge points of the model. Then, potential feature points are identified and retained on the basis of the discrete gradient idea. Finally, non-feature points are extracted. Experimental results show that our method can be applied to models with different curvatures and effectively avoids the hole phenomenon in the simplification process. To further improve the robustness and anti-noise ability of the method, the neighborhood of the point cloud can be extended to multiple levels, and a balance between simplification speed and accuracy needs to be found.
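For illustration, the normal estimation step can be approximated by a plain PCA fit over k nearest neighbors; the paper itself uses the moving least-squares method, so the sketch below (using SciPy's cKDTree) is a simplified stand-in.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """Approximate each point's normal as the eigenvector of the smallest
    eigenvalue of its k-nearest-neighbor covariance matrix."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        q = points[nbrs] - points[nbrs].mean(axis=0)
        _, vecs = np.linalg.eigh(q.T @ q)  # eigenvalues in ascending order
        normals[i] = vecs[:, 0]            # direction of least spread
    return normals
```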
... Han et al. [19] proposed a simplification algorithm based on edge preservation. Yuan et al. [20] used a conformal geometric algebra approach to reduce points. Wei et al. [21] simplified point clouds based on curvature co-occurrence histograms. ...
... We have also verified the effectiveness of the DFPSA. Moreover, the simplified results of the proposed algorithm have been compared with the results of five simplification methods: the method based on Gaussian spheres, the method based on octree coding, the k-means clustering method based on boundary preservation [10], the uniform simplification method [16], and the conformal geometric algebra method [20]. The data processing platform is Windows 10 on a laptop PC with a 1.7 GHz processor and 4 GB of memory. ...
... This is because the curvature difference can select sharp detail feature points. To sum up, for data sets with uniform sampling and sharp features, the scale factor of the spatial distance operator is set to a small value. Table I shows the simplification rates of the six simplification algorithms applied to the four data sets; the conformal geometric algebra method [20] reaches 84.9%, 85.2%, 84.6%, and 84.4%. ...
Article
Full-text available
3D point cloud simplification is an important pretreatment in surface reconstruction for sparing computer resources and improving reconstruction speed. However, existing methods often sacrifice simplification precision to improve speed, or sacrifice speed to improve precision. A proper balance between simplification speed and accuracy is still a challenge. In this paper, we propose a new simplification method based on the importance of points. Named the detail feature points simplified algorithm (DFPSA), this algorithm achieves improvements in three aspects. First, a rule for the k-neighborhood search is set to ensure that the points found are the closest to the sample point. In this way, the accuracy of the calculated normal vector of the point cloud is significantly improved, and the search speed is largely increased. Second, a formula that considers multiple characteristics for measuring the importance of a point is proposed, whereby the main detail features of the point cloud are preserved. Finally, an octree structure is employed to simplify the remaining points, through which holes in the reconstructed point cloud are noticeably reduced. The DFPSA is applied to four different data sets, and the corresponding results are compared with those of five other algorithms. The experimental results demonstrate that the DFPSA brings better simplification effects than existing counterparts; it not only simplifies the point cloud but also performs well on narrow contours of the subject.
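The importance formula itself is not reproduced here; the sketch below only shows the general shape of such a score, mixing two plausible characteristics (normal-angle deviation and local spacing) with an assumed weight alpha. It is an illustration, not the DFPSA formula.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_importance(points, normals, k=16, alpha=0.5):
    """Mix two local characteristics into one importance score: mean
    normal-angle deviation from neighbors (sharpness) and mean neighbor
    distance (local sparseness). Higher score = more important."""
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k + 1)
    dist, idx = dist[:, 1:], idx[:, 1:]          # drop the query point itself
    cos = np.abs(np.einsum('ij,ikj->ik', normals, normals[idx]))
    angle = np.arccos(np.clip(cos, 0.0, 1.0)).mean(axis=1)
    spread = dist.mean(axis=1)
    scale = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-12)
    return alpha * scale(angle) + (1 - alpha) * scale(spread)

# Keeping e.g. the top 20% of points before an octree pass:
# keep_idx = np.argsort(-point_importance(pts, nrm))[: len(pts) // 5]
```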
... Before simplification, the measured space is divided into different sub-spaces [10]. Then, the simplification algorithm is implemented for each sub-space [2]. To achieve more efficient algorithms, simplification based on space division is frequently adopted. ...
... In order to reduce the data volume, many algorithms have been proposed so far. Among these algorithms, point cloud simplification by voxelization is the most widely used method, especially in reverse engineering [2][3][4], for example grid-based simplification. For the purpose of preserving geometric features, this paper provides the FPPS algorithm, whose processing starts with the extraction of key points from the point cloud. ...
... The FPPS algorithm provides an adaptive reduction ratio, which changes with different shape features. (2) The simplification entropy of each key point is defined. The key points carry the preserved geometric features, and this effect is quantified through the simplification entropy. ...
Article
Full-text available
With the development of 3D scanning technology, huge volumes of point cloud data can be collected at low cost. The huge data set is the main burden during the data processing of point clouds, so point cloud simplification is critical. Its main aim is to reduce the data volume while preserving the data features. Therefore, this paper provides a new method for point cloud simplification, named FPPS (feature-preserved point cloud simplification). In FPPS, a point cloud simplification entropy is defined, which quantifies the features hidden in point clouds. According to the simplification entropy, key points that include the majority of the geometric features are selected. Then, based on the natural quadric shape, we introduce a point cloud matching model (PCMM), by which the simplification rules are set. Additionally, the similarity between the PCMM and the neighborhoods of the key points is measured by the shape operator. This provides the criteria for the adaptive simplification parameters in FPPS. Finally, experiments verify the feasibility of FPPS and compare it with four other point cloud simplification algorithms. The results show that FPPS is superior to the other simplification algorithms. In addition, FPPS can partially recognize noise.
... These methods are mainly based on: a partition strategy to preserve the features and integrity of the point cloud [12], the discrete gradient idea [13], local condition information [14], point importance [15], a point cloud rapid refinement algorithm based on sample point space neighborhood [16], and a method for obtaining important spatial features of points [17]. In addition, some methods, such as an edge-sensitive simplification method based on scanned point clouds [18], a method based on planar feature fitting [19], a method based on salient region detection [20], and other simplification methods that recognize and retain feature points [21][22][23][24][25][26][27][28][29], represent current research achievements in point cloud simplification. These methods promote the development of point cloud simplification technology. ...
Article
Full-text available
Most point cloud simplification algorithms use k-order neighborhood parameters that are set by human experience; thus, the accuracy of point feature information is not high, and each point is repeatedly calculated. The proposed method avoids this problem. The first ordinal point of the original point cloud file is used as the starting point, and its spatial domain is then described. The method filters out points located in the same spatial domain and stores them in the same V-P container. The normal vector angle information entropy is calculated for each point in each container. Points whose information entropy values meet the threshold requirements are extracted and stored as simplified points and new seed points. In the second pass, a point from the seed point set is selected as the starting point and the same process is repeated, after which that point is deleted from the seed point set. This continues until the seed point set is empty and the algorithm ends. The point set thus obtained is the simplified result. Five experimental datasets were selected and compared using five advanced methods. The results indicate that the proposed method maintains a simplification rate of over 82% and reduces the maximum error, average error, and Hausdorff distance by up to 0.1099, 0.074, and 0.0062 (the highest values among the five datasets), respectively. The method performs well on single-object and multi-object point cloud sets, and can serve as a reference for the study of simplification algorithms for more complex, multi-object, and ultra-large point cloud sets obtained using terrestrial and mobile laser scanning.
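A hedged sketch of the entropy computation at the heart of this method; the bin count and angle range below are assumptions, not values from the paper.

```python
import numpy as np

def normal_angle_entropy(center_normal, neighbor_normals, bins=8):
    """Shannon entropy of the angles between a point's normal and its
    neighbors' normals; a high value signals varied local geometry,
    so the point is worth keeping."""
    cos = np.clip(neighbor_normals @ center_normal, -1.0, 1.0)
    angles = np.arccos(np.abs(cos))              # fold the sign ambiguity
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi / 2))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```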
... At present, there have been some studies on lightweight processing of laser point clouds. Yuan et al. [1] proposed a conformal geometric algebra simplification method that optimizes point clouds by means of distance calculation. Wei et al. [2] realized point cloud simplification by constructing co-occurrence histograms of curvature, which can retain more local geometric features. ...
Article
Full-text available
In order to reduce the loss of appearance contours caused by point cloud lightweighting, this paper takes the laser point cloud data of a segment beam of an expressway viaduct as a sample. After comparing downsampling algorithms from many aspects and angles, the voxel grid method is selected as the basic theory of the research. By combining the characteristics of the normal vector data of the laser point cloud, the top-surface point cloud edge data are extracted and fused with the voxel grid method to establish an optimized point cloud lightweighting algorithm. The research shows that the voxel grid method performs better than farthest point sampling and curvature downsampling in retaining the top-surface data, reducing computation time, and optimizing the edge contour. Moreover, the average offset of the geometric contour is reduced from 2.235 mm to 0.664 mm by the edge-optimized voxel grid method, which retains the contour better. In summary, the edge-optimized voxel grid method outperforms the existing methods in point cloud lightweighting.
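The voxel grid method the study builds on reduces to a few lines of NumPy; here is a generic sketch (centroid per occupied voxel), without the paper's edge-optimization step.

```python
import numpy as np

def voxel_grid_downsample(points, voxel_size):
    """Replace all points falling in the same cubic voxel by their centroid."""
    keys = np.floor((points - points.min(axis=0)) / voxel_size).astype(np.int64)
    # Group points by voxel key via a lexicographic sort.
    order = np.lexsort(keys.T)
    keys, points = keys[order], points[order]
    boundaries = np.any(np.diff(keys, axis=0) != 0, axis=1)
    starts = np.r_[0, np.flatnonzero(boundaries) + 1, len(points)]
    return np.array([points[a:b].mean(axis=0)
                     for a, b in zip(starts[:-1], starts[1:])])
```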
... Hierarchical data structures that allow processing and visualization of massive point clouds are usually used, such as kd-trees, quadtrees, and octrees. These structures reduce the data that need to be managed by employing multiresolution techniques; moreover, they allow out-of-core or external memory strategies to load and unload data that exceed available memory [4,5]. ...
Article
Full-text available
Due to the increasingly large amount of data acquired as point clouds, from LiDAR (Light Detection and Ranging) sensors and 2D/3D sensors, massive point cloud processing has become a topic of high interest in several fields. Current client-server applications usually use multiresolution out-of-core approaches; nevertheless, the construction of the required data structures is very time-consuming. Furthermore, these multiresolution approaches present problems regarding point density changes between different levels of detail and artifacts due to the rendering of elements entering and leaving the field of view. We present an autotuning multiresolution out-of-core strategy to avoid these problems. Other objectives are reducing loading times while maintaining low memory requirements and high visualization quality, and achieving interactive visualization of massive point clouds. This strategy identifies certain parameters, called performance parameters, and defines a set of premises to reach the goals mentioned above. The optimal parameter values depend on the number of points per cell in the multiresolution structure. We test our proposal in our web-based visualization software, designed to work with the structures and storage format used, and achieve interactive visualization of point clouds with more than 27 billion points.
... Another type of point cloud simplification algorithm is based on curvature. Multiple studies have investigated this approach [10], [11], [12], [13], [14]; the differences between them mainly concern the calculation of curvature, the setting of k-order neighborhoods, and the removal of redundant points. Abdul Rahman El Sayed et al. [15] proposed a simplification algorithm based on a weighted graph representation, which first uses the salient characteristics of each shape vertex to identify geometric regions and the feature points within them. ...
Article
Full-text available
Using three-dimensional spatial information, this method constructs a detection condition, compares it against the conditions formed by the points in the file to be simplified, and then determines the redundancy of the points. Moving windows are used to speed up the algorithm and to generate many tiny approximately vertical planes, from which the simplified points are generated. On experimental data for the rabbit and horse models, the proposed method increased the simplification rate from 3.022% and 11.123% (mesh method), and 5.704% and 15.316% (curvature method), to 83.387% and 84.296%, respectively, while the standard deviation changed from 0.01051 and 0.0157 (mesh method), and 0.0817 and 0.0013 (curvature method), to 0.02179 and 0.01507, respectively. For the simplification of multiple objects, the proposed method increased the simplification rate from 89.113% and 91.826% (mesh method), and 84.79% and 88.91% (curvature method), to 93.458% and 96.916%, respectively, reducing the time by approximately 20 s.
... Belon et al. [14] implemented a GPU-based calculation that smoothly interpolates the normal information of a CG model represented by vertices and faces, and showed that even a model with millions of vertices can be processed with CGA at a practical speed. Yuan et al. [189] proposed a CGA-based method to extract only geometric features by performing k-means clustering on a large amount of point cloud data. ...
Article
Full-text available
The new applications of Clifford’s geometric algebra surveyed in this paper include kinematics and robotics, computer graphics and animation, neural networks and pattern recognition, signal and image processing, applications of versors and orthogonal transformations, spinors and matrices, applied geometric calculus, physics, geometric algebra software and implementations, applications to discrete mathematics and topology, geometry and geographic information systems, encryption, and the representation of higher order curves and surfaces.
... Yu et al. [21] first built a hierarchical cluster tree of the point cloud data, and then simplified the data with three criteria: cardinality, radius, and surface variation. Yuan et al. [22] combined a center-substitution algorithm and a layer-by-layer algorithm to reduce point cloud data based on the hierarchical structure of a sphere tree constructed in advance. In spite of the wide adoption of clustering-based methods, they suffer from the problem that replacing all points in a cluster with a few points causes a geometric deviation between the original dataset and the simplified dataset [23]. ...
Article
Full-text available
Laser scan data are popularly adopted to capture the conditions of existing buildings for as-is BIM reconstruction, also known as scan-to-BIM. Down-sampling of massive laser scan data is a critical pre-processing step for efficient scan-to-BIM. It is also crucial to accurately retain the geometric and semantic information of existing buildings during the down-sampling process. However, existing down-sampling approaches mainly focus on the preservation of geometric features only, without considering semantic features. This study developed an adaptive down-sampling method, which is able to maintain scan points containing critical geometric or semantic information and only down-sample scan points containing no critical information. The proposed method first conducts a geometry-based segmentation to identify edge points and non-planar points, which contain critical geometric information. Then, a semantic-based segmentation is performed to identify points with critical semantic information (e.g. labels and information boards). All the points containing either geometric or semantic information are retained, while points containing no critical information are down-sampled to reduce data redundancy. Experiments were conducted on four scenes, which showed that the proposed method was suitable for point cloud data with different accuracies and could achieve a better performance in preserving both geometric and semantic information than the traditional voxel-based method and the curvature-based method. It is proved that down-sampled data with the proposed method could generate as-is BIMs with more accurate geometric and semantic information.
... Subsequently, CGA was proposed by Li [29] and is widely used in other fields, such as computer graphics [30], robotics [31], and computer vision [32]. Some scholars have also applied CGA to indoor space reconstruction [33] and point cloud simplification [34]. ...
Article
Full-text available
To remove space debris actively, the key step is choosing a suitable capturing method. The size characteristics of space debris are an important factor that affects the capturing method. This paper presents a contour-extracting algorithm for rolling targets, such as space debris. The representation of the point in Euclidean space is converted to a conformal space to obtain candidate contour points through the computational geometric and topological relationships among the points, lines, and surfaces in a high-dimensional space. By combining this conversion with the multiview stitching algorithm, the complete candidate contour point model is obtained. The final contour points are extracted by determining the concave and convex points in conformal space. Simulation results prove that the unified expression of the geometric features in conformal space not only reduces the computational complexity but also increases the speed of the contour point extraction. The method proposed in this paper is superior to other methods in terms of measurement accuracy, speed, and robustness; in particular, the accuracy increased by 60%.
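The Euclidean-to-conformal conversion mentioned here is standard in conformal geometric algebra; below is a minimal numeric sketch of the embedding and of why distances become a single inner product. The five-component coefficient layout is one common convention, not this paper's code.

```python
import numpy as np

def to_conformal(p):
    """Embed a Euclidean 3D point as a null vector of conformal space:
    P = e0 + p + 0.5*|p|^2 * einf, stored as (e0, x, y, z, einf) coefficients."""
    return np.array([1.0, *p, 0.5 * np.dot(p, p)])

def conformal_inner(P, Q):
    """Inner product under the Minkowski metric (e0 . einf = -1); it equals
    -0.5*|p - q|^2, so Euclidean distance drops out of one bilinear form."""
    return -P[0] * Q[4] - P[4] * Q[0] + P[1:4] @ Q[1:4]

p, q = np.array([1.0, 2.0, 3.0]), np.array([4.0, 6.0, 3.0])
d2 = -2.0 * conformal_inner(to_conformal(p), to_conformal(q))
assert np.isclose(d2, np.sum((p - q) ** 2))   # |p - q|^2 = 25
```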
... clustering [2][3][4], and spatial indexes such as the kd-tree [5], octree [6], and sphere tree [7]. The mesh-based methods simplify the overall point cloud structure by reducing the number of meshes [8,9]. ...
Article
Full-text available
A grating projection shape measurement system has become a commonly used method in the field of three-dimensional (3D) reconstruction in recent years, and global point cloud registration is a key step in this method. However, in the registration process, a large amount of low-precision overlapping redundant data (ORD) is generated between adjacent camera stations, which seriously affects the speed and accuracy of later modeling. Eliminating these low-precision ORD is therefore a major problem to be solved. The key is to determine all overlapping 3D point pairs between two adjacent stations and to delete the points with lower precision in each pair. Based on an omnidirectional rotation measurement system, combined with the constraint relationships between the projection space and the acquisition space in the global registration process and a stereo-matching method with space conversion, an elimination algorithm for ORD with a two-dimensional (2D) phase constraint and a 2D pixel constraint is proposed. The experimental results show that the proposed algorithm locates overlapping 3D point pairs between adjacent stations faster, achieves a higher elimination rate, and yields higher overall point cloud accuracy after redundancy elimination.
Article
The digital twin of the ocean (DTO) is a groundbreaking concept that uses interactive simulations to improve decision-making and promote sustainability in earth science. The DTO effectively combines ocean observations, artificial intelligence (AI), advanced modeling, and high-performance computing to unite digital replicas, forecasting, and what-if scenario simulations of the ocean systems. However, there are several challenges to overcome in achieving the DTO’s objectives, including the integration of heterogeneous data with multiple coordinate systems, multidimensional data analysis, feature extraction, high-fidelity scene modeling, and interactive virtual–real feedback. Hypercomplex signal processing offers a promising solution to these challenges, and this study provides a comprehensive overview of its application in DTO development. We investigate a range of techniques, including geometric algebra, quaternion signal processing, Clifford signal processing, and hypercomplex machine learning, as the theoretical foundation for hypercomplex signal processing in the DTO. We also review the various application aspects of the DTO that can benefit from hypercomplex signal processing, such as data representation and information fusion, feature extraction and pattern recognition, and intelligent process simulation and forecasting, as well as visualization and interactive virtual–real feedback. Our research demonstrates that hypercomplex signal processing provides innovative solutions for DTO advancement and resolving scientific challenges in oceanography and broader earth science.
Article
Full-text available
To obtain a higher simplification rate while retaining geometric features, a simplification framework for point clouds is proposed. First, multi-angle images of the original point cloud are obtained with a virtual camera. Then, feature lines in each image are extracted by a deep neural network. Furthermore, according to the proposed mapping relationship between the acquired 2D feature lines and the original point cloud, feature points of the point cloud are extracted automatically. Finally, the simplified point cloud is obtained by fusing the feature points and the simplified non-feature points. The proposed simplification method is applied to four data sets and compared with six other algorithms. The experimental results demonstrate that our method is superior in terms of both retaining geometric features and achieving a high simplification rate.
Article
A partial overlap between adjacent strips during airborne Light Detection And Ranging (LiDAR) data scanning is required to ensure data integrity. The overlap area is observed twice and contains richer target details. To reduce data density while retaining target details in the overlap area, a method based on reducing the influence of repeated observation data is presented. The proposed method defines the repeated observation data as multiple samples of the same location from two adjacent strips, identifies them by locating the pairwise closest points from two adjacent strips whose distance is below a distance threshold, and eliminates unimportant points among the repeated observation data according to the criterion that LiDAR points with low curvature or a high incidence angle can be removed without affecting the sharp features and the overall quality of the data representation of the overlap area. The optimal distance threshold is determined adaptively by fitting a Gaussian model to the mean distance between all nearest neighbors in one of the original LiDAR strips. Finally, a reduction rate and an information entropy metric are put forward to quantitatively evaluate the effectiveness of the proposed method. The method is applied to two real airborne LiDAR datasets with urban and forestry scenes for experimental testing. The quantitative evaluation results show that the reduction rate can reach 25.1% and 12.8% in the urban and forestry areas, respectively. Furthermore, the data density of the overlap area can be reduced while the information entropy and DTM accuracy are maintained.
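Locating the pairwise closest points below a distance threshold is a nearest-neighbor query; a sketch with SciPy's cKDTree follows, using a fixed threshold where the paper fits a Gaussian model to choose it adaptively.

```python
import numpy as np
from scipy.spatial import cKDTree

def repeated_observations(strip_a, strip_b, d_max):
    """Pair each point of strip A with its nearest neighbor in strip B and
    flag the pair as a repeated observation when their distance is below d_max."""
    tree = cKDTree(strip_b)
    dist, idx = tree.query(strip_a)
    mask = dist < d_max
    return np.flatnonzero(mask), idx[mask]   # paired indices in A and in B

# Points in each flagged pair can then be ranked by curvature and incidence
# angle, and the less important one removed, as described above.
```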
Article
In recent years, the evolution and improvement of LiDAR (Light Detection and Ranging) hardware has increased the quality and quantity of the gathered data, making their storage, processing, and management particularly challenging. In this work we present a novel multi-resolution, out-of-core technique for web-based visualization, implemented through a non-redundant data point organization method, which we call Hierarchically Layered Tiles (HLT), and a tree-like structure called the Tile Grid Partitioning Tree (TGPT). The design of these elements is mainly focused on attaining very low levels of memory consumption, disk storage usage, and network traffic on both client and server side, while delivering high-performance interactive visualization of massive LiDAR point clouds (up to 28 billion points) on multiplatform environments (mobile devices or desktop computers). HLT and TGPT were incorporated and tested in ViLMA (Visualization for LiDAR data using a Multi-resolution Approach), our own web-based visualization software specially designed to work with massive LiDAR point clouds.
Article
Full-text available
Monitoring vehicular road traffic is a key component of any autonomous driving platform. Detecting moving objects and tracking them is crucial to navigating around objects and predicting their locations and trajectories. Laser sensors provide an excellent observation of the area around vehicles, but the point cloud of objects may be noisy, occluded, and prone to different errors. Consequently, object tracking is an open problem, especially for low-quality point clouds. This paper describes a pipeline to integrate various sensor data and prior information, such as a Geospatial Information System (GIS) map, to segment and track moving objects in a scene. We show that even a low-quality GIS map, such as OpenStreetMap (OSM), can improve tracking accuracy as well as decrease processing time. A bank of Kalman filters is used to track moving objects in a scene. In addition, we apply a non-holonomic constraint to provide a better orientation estimate for moving objects. The results show that moving objects can be correctly detected and accurately tracked over time, based on modest-quality Light Detection And Ranging (LiDAR) data, a coarse GIS map, and a fairly accurate Global Positioning System (GPS) and Inertial Measurement Unit (IMU) navigation solution.
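One element of such a filter bank is an ordinary Kalman filter per tracked object. Below is a minimal constant-velocity sketch with illustrative dt and noise levels, and without the paper's non-holonomic constraint.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2D constant-velocity Kalman filter; state = [x, y, vx, vy].
    One such filter per tracked object forms the 'bank' of filters."""
    def __init__(self, dt=0.1, q=1.0, r=0.5):
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                 # we observe position only
        self.Q = q * np.eye(4)                # process noise (assumed)
        self.R = r * np.eye(2)                # measurement noise (assumed)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with position measurement z = [x, y].
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```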
Article
Full-text available
To address the weaknesses of existing spatial indexes, such as complex subdivision structures, high node overlap probability, and poor support for multi-dimensional entity object retrieval and computation, this paper presents a boundary-restricted non-overlapping sphere tree as a unified multidimensional solid object index. Using the outer product expression of a sphere in geometric algebra, an approach is developed for intersection judgments and point extraction between lines and planes and between lines and spherical surfaces based on meet operators. An element subdivision for multi-dimensional object voxelization and a boundary-restricted non-overlapping sphere-filling algorithm are developed to balance the conditions (e.g. granularity and non-overlapping subdivision of object voxelization) against the duplication and approximation degrees of the approached objects. Furthermore, an algorithm for generating and updating minimum bounding spheres and a volume-adjusted adaptive batch Neural Gas hierarchical clustering algorithm are presented, which classify the filling spheres quickly, robustly, and uniformly and keep the index level system steady with each branch of the sphere tree balanced. With the advantages of clear geometric meaning, simple calculation of geometric relations among spheres, and dynamically updatable parameters, the index structure can be generated and updated dynamically. Finally, on top of the unified multidimensional solid object index, a query mechanism for any location and range on and in the solid objects is proposed. The simulation of different physical objects and a performance comparison with commonly used sphere tree indexes suggest that the proposed index can effectively query any position or region on and in the solid objects, and supports nearest-linkage distance and dynamic overlapping queries under limited time restrictions with high precision.
Article
Full-text available
Traditional Euclidean geometry-based Geographical Information Systems (GIS) are not multidimension-unified and have weak ability to express and analyze high-dimensional objects. Geometric algebra (GA) can connect different geometric and algebraic systems, and provides a rigorous and elegant foundation for expression, modeling, and analysis in GIS. This paper proposes implementation methods for the system construction and key components of a multidimension-unified GIS. Based on such properties of GA as multidimensional unification and coordinate-freeness, data models, data indexes, and data analysis algorithms for multidimensional vector data and for raster and vector field data are developed. This study indicates that GA provides a new mathematical tool for the development of GIS characterized by multidimension-unified expression and computation. For the development of geographical analysis methods, it can conveniently represent multidimensional spatio-temporal changes.
Article
Full-text available
Classical geometry has emerged from efforts to codify perception of space and motion. With roots in ancient times, the great flowering of classical geometry came in the 19th century, when Euclidean, non-Euclidean, and projective geometries were given precise mathematical formulations and the rich properties of geometric objects were explored. Though fundamental ideas of classical geometry are permanently embedded and broadly applied in mathematics and physics, the subject itself has practically disappeared from the modern mathematics curriculum. Many of its results have been consigned to the seldom-visited museum of mathematics history, in part because they are expressed in splintered and arcane language. To make them readily accessible and useful, they need to be reexamined and integrated into a coherent mathematical system.
Conference Paper
Full-text available
This study presents a novel, rapid, and effective point simplification algorithm based on point clouds without using either normal or connectivity information. Sampled points are clustered based on shape variation using an octree data structure, and the inner point distribution of each cluster is used to judge whether its points are coplanar. Accordingly, a relevant point is chosen from each coplanar cluster. The relevant points are reconstructed to a triangular mesh whose error rate remains within a certain tolerance level, significantly reducing the number of calculations needed for reconstruction. A hierarchical triangular mesh based on the octree data structure is presented. This study offers hierarchical simplification and hierarchical rendering for the reconstructed model to suit user demand, and produces a uniform or feature-sensitive simplified model that facilitates rapid further mesh-based applications, especially level of detail.
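The coplanarity judgment for a cluster is commonly made from the covariance eigenvalues; here is a sketch using the surface-variation ratio, with an assumed tolerance (the paper's exact criterion may differ).

```python
import numpy as np

def is_coplanar(cluster, tol=0.01):
    """Judge a cluster's coplanarity by its surface variation
    sigma = l0 / (l0 + l1 + l2), the share of the smallest covariance
    eigenvalue; near zero means the points lie close to a plane."""
    q = cluster - cluster.mean(axis=0)
    eig = np.linalg.eigvalsh(q.T @ q / len(cluster))   # ascending order
    sigma = eig[0] / max(eig.sum(), 1e-12)
    return sigma < tol

# Octree cells failing the test are split further; coplanar cells keep a
# single relevant point, as described above.
```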
Conference Paper
Full-text available
With the advent of new, low-cost 3D sensing hardware such as the Kinect, and continued efforts in advanced point cloud processing, 3D perception is gaining more and more importance in robotics as well as other fields. In this paper we present one of our most recent initiatives in the area of point cloud perception: PCL (Point Cloud Library - http://pointclouds.org). PCL presents an advanced and extensive approach to the subject of 3D perception, and it is meant to provide support for all the common 3D building blocks that applications need. The library contains state-of-the-art algorithms for filtering, feature estimation, surface reconstruction, registration, model fitting, and segmentation. PCL is supported by an international community of robotics and perception researchers. We provide a brief walkthrough of PCL, including its algorithmic capabilities and implementation strategies.
Conference Paper
Full-text available
This paper presents some basics for the analysis of point clouds using the geometrically intuitive mathematical framework of conformal geometric algebra. In this framework it is easy to compute with osculating circles for the description of local curvature. Methods for the fitting of spheres as well as bounding spheres are also presented. In a nutshell, this paper provides a starting point for shape analysis based on this new, geometrically intuitive, and promising technology.
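Sphere fitting of the kind discussed here has an equivalent linear least-squares form (in CGA terms, seeking a vector s with P·s ≈ 0 for all embedded points P); a self-checking sketch of that algebraic fit:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: solve |p|^2 = 2 c.p + (r^2 - |c|^2)
    for center c and radius r; the system is linear in the unknowns (c, d)."""
    A = np.c_[2.0 * points, np.ones(len(points))]
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)
    return center, radius

# Quick check on noiseless samples of a known sphere.
rng = np.random.default_rng(0)
u = rng.normal(size=(200, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
c, r = fit_sphere(np.array([1.0, -2.0, 0.5]) + 3.0 * u)
assert np.allclose(c, [1.0, -2.0, 0.5]) and np.isclose(r, 3.0)
```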
Article
Full-text available
In the past few years, many efficient rendering and surface reconstruction algorithms for point clouds have been developed. However, collision detection of point clouds has not been considered until now, although it is a prerequisite for using them in interactive or animated 3D graphics. We present a novel approach for time-critical collision detection of point clouds. Based solely on the point representation, it can detect intersections of the underlying implicit surfaces, which do not need to be closed. We construct a point hierarchy in which each node stores a sufficient sample of the points plus a sphere covering of a part of the surface. These are used to derive criteria that guide our hierarchy traversal so as to increase convergence: one criterion prunes pairs of nodes, the other prioritizes the pairs of nodes still to be visited. At the leaves we efficiently determine an intersection by estimating the smallest distance. We have tested our implementation on several large point cloud models. The results show that a very fast and precise answer to collision detection queries can always be given.
Book
This tutorial survey brings together two lines of research and development whose interaction promises to have significant practical impact on the area of spatial information processing in the near future: geographic information systems (GIS) and geometric computation or, more particularly, geometric algorithms and spatial data structures. In nine uniformly structured and coherent chapters, the authors present a unique survey ranging from the history and basic characteristics to current issues of precision and robustness of geometric computing. This textbook is ideally suited for advanced courses on GIS and applied geometric algorithms. Research and design professionals active in the area will find it valuable as a state-of-the-art survey.
Article
In order to simplify dense point cloud data efficiently, this paper proposes a quick and simple point cloud simplification algorithm based on the standard deviation of the normal vector. First, the normal distribution is calculated from the down-sampled dense point cloud data. Second, the separation threshold between feature points and other points is calculated from the normal angle between adjacent points. Finally, step-by-step down-sampling is performed on the feature points and the other points to realize self-adapting simplification of the point cloud. The results show that this algorithm efficiently simplifies the point cloud model in a short time while largely preserving the characteristics and shape of the original model.
Article
This paper extends the mean shift algorithm to 3D color point clouds and successfully realizes clustering segmentation. Within each segmented block, the Random Sample Consensus (RANSAC) algorithm is then used to calculate the block's approximate plane, and the points nearest to the plane are kept; thus the point cloud is simplified. Experiments show good simplification results.
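The RANSAC plane step can be sketched directly; a minimal version with illustrative iteration count and inlier tolerance:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, rng=np.random.default_rng(0)):
    """Minimal RANSAC plane fit: repeatedly fit a plane to 3 random points
    and keep the plane with the most inliers within distance tol."""
    best_inliers, best = np.array([], dtype=int), None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                              # degenerate sample
        n = n / np.linalg.norm(n)
        d = np.abs((points - p0) @ n)             # point-to-plane distances
        inliers = np.flatnonzero(d < tol)
        if len(inliers) > len(best_inliers):
            best_inliers, best = inliers, (n, p0 @ n)
    return best, best_inliers

# Keeping, per segment, only the inliers nearest the fitted plane yields
# the simplified cloud described above.
```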
Conference Paper
This paper describes a novel method for surface mesh simplification. Given an initial surface mesh, the goal is to reduce the number of mesh elements while preserving the geometric approximation as well as the shape quality of the resulting mesh. We present a novel operation, triangle contraction, to simplify the mesh, and introduce two tolerance areas with respect to the reference mesh to preserve the geometry of the surface. The reference mesh is then simplified and optimized so that the resulting mesh stays within these tolerance areas.
Conference Paper
Based on human visual sensitivity, this paper presents a new point cloud simplification method based on point saliency. First, the space of the point cloud is subdivided in order to find the nearest k points. Second, the normal vector and saliency of every point are computed. Finally, the octree structure and saliency are combined to simplify the point cloud data. Experiments show that our method preserves the point cloud well in visually sensitive areas.
Article
To ensure the realism of the visual effects, the 3D reconstruction of surgical instruments has become an important research topic in virtual surgery systems. In this paper, we propose a method for obtaining models of surgical instruments from point cloud data. The point cloud data are first triangulated using the Delaunay algorithm, and the triangular mesh is then simplified and smoothed for optimization. To verify the effectiveness of the proposed method, we reconstructed a scalpel model; the experimental results show that the reconstructed model not only keeps the geometric characteristics of the original model but also improves the real-time performance of the simulation.
Article
We present an efficient technique for out-of-core multi-resolution construction and high quality interactive visualization of massive point clouds. Our approach introduces a novel hierarchical level of detail (LOD) organization based on multi-way kd-trees, which simplifies memory management and allows control over the LOD-tree height. The LOD tree, constructed bottom up using a fast high-quality point simplification method, is fully balanced and contains all uniformly sized nodes. To this end, we introduce and analyze three efficient point simplification approaches that yield a desired number of high-quality output points. For constant rendering performance, we propose an efficient rendering-on-a-budget method with asynchronous data loading, which delivers fully continuous high quality rendering through LOD geo-morphing and deferred blending. Our algorithm is incorporated in a full end-to-end rendering system, which supports both local rendering and cluster-parallel distributed rendering. The method is evaluated on complex models made of hundreds of millions of point samples.
Article
This investigation presents a novel, rapid, and effective point simplification algorithm that utilizes point clouds without normals. Local coplanar analysis is used to obtain the relevant points from a point set sampled from 3D objects. The local coplanar analysis, based on an octree data structure with the inner point distribution of each cube, determines whether these points are coplanar. The relevant points, called the base model, are reconstructed to a triangular mesh. In addition to the successful reconstruction, the error rate of the base model remains within a specific tolerance level. Using the octree data structure, this study proposes hierarchical rendering for the base model to suit user demand and to produce a uniform or feature-sensitive simplified model that facilitates rapid further mesh-based applications. Finally, the output of the proposed method is a hierarchical triangular mesh that inherently supports the generation of multi-resolution representations for level-of-detail applications.
Article
We propose a robust algorithm for valley-ridge extraction from point sets. The algorithm separately flags potential valley and ridge points according to the principal curvature of every point, then enhances the valley-ridge points by projecting them onto their local nearest potential valleys or ridges. Using an optimized principal covariance analysis approach, we smooth the projected points; finally, valley-ridge lines are obtained after resolving gaps and relaxing the results.
Conference Paper
Simplification of scattered point clouds is one of the key preprocessing technologies in reverse engineering. Most simplification algorithms lose geometric features excessively in the process. On the basis of feature extraction, a new algorithm is proposed for the simplification of scattered point clouds with unit normal vectors. First, the points of the point cloud are distributed into uniform cubes. Next, bounding spheres are constructed with their centers at each point, and k-nearest neighbors are searched within the relevant sphere. Then, a specified function is defined to measure the curvature of each point so that feature points can be extracted. Finally, feature points and non-feature points are simplified according to the radius of the bounding sphere and a threshold on the inner product of normal vectors. Experiments show that the proposed algorithm has the advantages of fast speed and high preservation of the geometric features of the point cloud.
Article
Building information models (BIMs) are maturing as a new paradigm for storing and exchanging knowledge about a facility. BIMs constructed from a CAD model do not generally capture details of a facility as it was actually built. Laser scanners can be used to capture dense 3D measurements of a facility's as-built condition and the resulting point cloud can be manually processed to create an as-built BIM — a time-consuming, subjective, and error-prone process that could benefit significantly from automation. This article surveys techniques developed in civil engineering and computer science that can be utilized to automate the process of creating as-built BIMs. We sub-divide the overall process into three core operations: geometric modeling, object recognition, and object relationship modeling. We survey the state-of-the-art methods for each operation and discuss their potential application to automated as-built BIM creation. We also outline the main methods used by these algorithms for representing knowledge about shape, identity, and relationships. In addition, we formalize the possible variations of the overall as-built BIM creation problem and outline performance evaluation measures for comparing as-built BIM creation algorithms and tracking progress of the field. Finally, we identify and discuss technology gaps that need to be addressed in future research.
Article
Due to the popularity of computer games and computer-animated movies, 3D models are fast becoming an important element in multimedia applications. In addition to the conventional polygonal representation of these models, the direct adoption of the original scanned 3D point set for model representation has recently gained more and more attention, owing to the possibility of bypassing the time-consuming mesh construction stage, and various approaches have been proposed for directly processing point-based models. In particular, the design of a simplification approach that can be directly applied to 3D point-based models to reduce their size is important for applications such as 3D model transmission and archival. Given a point-based 3D model defined by a point set P and a desired reduced number of output samples ns, the simplification approach finds a point set Ps which (i) satisfies |Ps| = ns (|Ps| being the cardinality of Ps) and (ii) minimizes the difference between the corresponding surface Ss (defined by Ps) and the original surface S (defined by P). Although a number of approaches have been proposed for simplification, most of them (i) do not focus on point-based 3D models, (ii) do not consider efficiency, quality, and generality together, and (iii) do not consider the distribution of the output samples. In this paper, we propose the Adaptive Simplification Method (ASM), an efficient technique for simplifying point-based complex 3D models. Specifically, the ASM consists of three parts: a hierarchical cluster tree structure, the specification of simplification criteria, and an optimization process. The ASM achieves a low computation time by clustering the points locally based on the preservation of geometric characteristics. We analyze the performance of the ASM and show that it outperforms most of the current state-of-the-art methods in terms of efficiency, quality, and generality.
Conference Paper
With increasing data complexity, the need for out-of-core simplification has become evident. However, most existing point-based simplification algorithms adopt an in-core scheme. We present an adaptive out-of-core algorithm for simplifying point-sampled models. Our approach uses a quadric matrix to analyze the detailed regions of the initial simplified model generated by out-of-core uniform clustering. We then use point-pair contraction to further simplify the flat regions and point splitting to refine the detailed regions. Since the algorithm is input insensitive, it obtains high quality with low memory requirements.
Conference Paper
We present a rapid and effective point simplification algorithm for surface reconstruction that can represent different levels of detail. The core of this algorithm is to generate an approximately minimal set of adaptive balls covering the whole surface by defining and minimizing local quadric error functions. First, the feature points are extracted by simple thresholding of curvatures. Second, the non-feature points are covered by distinct balls. The size of each ball varies and reflects how curved the local surface is. Once the radius is fixed, the points in each ball are substituted by an optimized point. Thus, the simplified surface consists of the extracted feature points and the optimized points. We can employ this algorithm to produce coarse-to-fine models by controlling a general error level, and name it ESimp for short. Notably, the error level of each ball may be adaptively adjusted according to the local curvature and density at the center of the ball, which avoids the generation of holes. Finally, the simplified points are triangulated by the Cocone algorithm. The algorithm has been applied to a set of large scanned models. Experimental results demonstrate that it can generate high-quality surface approximations with feature preservation.
Conference Paper
This survey has explained a number of concepts on terrains and some algorithms for various computations. The emphasis has been on TIN algorithms, because the TIN model for terrains is more elegant than the grid and contour line models. A common argument for using grids is the simplicity of the algorithms. However, current trends in GIS research and in the field of computational geometry have shown that algorithms on TINs need not be difficult either. More programming effort is required, but this need not outweigh the advantages that TINs have to offer. We will not repeat the arguments of the raster-vector debate; a summary of algorithmic methods and specific algorithms for TINs is useful in any case. The search for efficient algorithms on terrains is an interesting area of research where GIS developers, GIS researchers, and computational geometers can work together to develop a variety of elegant and efficient solutions to practical problems on terrains. The analysis of the efficiency of these solutions should be based on realistic assumptions on terrains.
Article
3D scanning devices usually produce huge amounts of dense points, which require excessively large storage space and long post-processing times. This paper presents a new adaptive simplification method to reduce the number of scanned dense points. An automatic recursive subdivision scheme is designed to pick out representative points and remove redundant points. It employs the k-means clustering algorithm to gather similar points together in the spatial domain and uses the maximum normal vector deviation as a measure of cluster scatter to partition the gathered point sets into a series of sub-clusters in the feature field. To maintain the integrity of the original boundary, a special boundary detection algorithm is developed and run before the recursive subdivision procedure. To keep the final distribution of the simplified points from becoming locally greedy and unbalanced, a refinement algorithm is put forward and run after the recursive subdivision procedure. The proposed method generates uniformly distributed sparse sampling points in flat areas and the necessary higher density in high-curvature regions. The effectiveness and performance of the novel simplification method are validated and illustrated through experimental results and comparison with other point sampling methods.
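A compressed sketch of the recursive subdivision idea follows: the spatial k-means gathering and the feature-field split on maximum normal deviation are folded into one routine here (normals assumed precomputed, thresholds illustrative), so it is a stand-in rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def recursive_split(points, normals, max_dev_deg=15.0, leaves=None):
    """Recursively bisect a cluster until the maximum angle between any
    point normal and the cluster's mean normal is below max_dev_deg;
    each leaf then contributes one representative point."""
    if leaves is None:
        leaves = []
    mean_n = normals.mean(axis=0)
    mean_n = mean_n / (np.linalg.norm(mean_n) + 1e-12)
    dev = np.degrees(np.arccos(np.clip(normals @ mean_n, -1.0, 1.0))).max()
    if dev < max_dev_deg or len(points) <= 3:
        leaves.append(points.mean(axis=0))    # representative point
        return leaves
    labels = KMeans(n_clusters=2, n_init=5).fit_predict(points)
    for c in (0, 1):
        recursive_split(points[labels == c], normals[labels == c],
                        max_dev_deg, leaves)
    return leaves
```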
Article
For rendering curved surfaces, one of the most popular techniques is metaballs, an implicit model based on isosurfaces of potential fields. This technique is suitable for deformable objects and CSG models. For rendering metaballs, intersection tests between rays and isosurfaces are required. By defining higher-degree field functions, richer capability, i.e., smoother surfaces, can be expected. However, one problem is that the intersection between the ray and the isosurfaces cannot then be solved analytically. Even though the field function in this paper is expressed by a degree-six polynomial (which means a degree-six equation must be solved for the intersection test), our algorithm expresses the field function along the ray in Bézier form and employs Bézier clipping, so that the roots of this function can be found very effectively and precisely. This paper also discusses deformed distribution functions such as ellipsoids and a method for displaying transparent objects such as clouds.
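For a single metaball with field (1 - r^2/R^2)^3, the degree-six polynomial along a ray is easy to assemble; in the sketch below, np.roots plainly stands in for the paper's Bézier-clipping root finder, and summing several balls would be handled analogously.

```python
import numpy as np

def ray_metaball_hits(o, d, center, R, threshold=0.5):
    """Ray/isosurface test for one metaball with field (1 - r^2/R^2)^3:
    assemble the degree-six polynomial in the ray parameter t and take its
    real positive roots (np.roots stands in for Bezier clipping)."""
    oc = o - center                                   # assumes |d| == 1
    q = np.array([1.0, 2.0 * (d @ oc), oc @ oc])      # |o + t d - c|^2, high degree first
    g = np.array([-1.0, -2.0 * (d @ oc), R**2 - oc @ oc]) / R**2  # 1 - q(t)/R^2
    poly = np.polymul(np.polymul(g, g), g)            # field along the ray, degree 6
    poly[-1] -= threshold                             # solve f(t) - T = 0
    roots = np.roots(poly)
    t = roots[(np.abs(roots.imag) < 1e-9) & (roots.real > 0.0)].real
    return np.sort(t[np.polyval(q, t) < R**2])        # keep hits inside the support
```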
Article
This paper studies the problem of point cloud simplification by searching for a subset of the original input data set according to a specified data reduction ratio (desired number of points). The unique feature of the proposed approach is that it aims at minimizing the geometric deviation between the input and simplified data sets. The underlying simplification principle is based on clustering of the input data set. The cluster representation essentially partitions the input data set into a fixed number of point clusters and each cluster is represented by a single representative point. The set of the representatives is then considered as the simplified data set and the resulting geometric deviation is evaluated against the input data set on a cluster-by-cluster basis. Due to the fact that the change to a representative selection only affects the configuration of a few neighboring clusters, an efficient scheme is employed to update the overall geometric deviation during the search process. The search involves two interrelated steps. It first focuses on a good layout of the clusters and then on fine tuning the local composition of each cluster. The effectiveness and performance of the proposed approach are validated and illustrated through case studies using synthetic as well as practical data sets.
Book
This well-accepted introduction to computational geometry is a textbook for high-level undergraduate and low-level graduate courses. The focus is on algorithms and hence the book is well suited for students in computer science and engineering. Motivation is provided from the application areas: all solutions and techniques from computational geometry are related to particular applications in robotics, graphics, CAD/CAM, and geographic information systems. For students this motivation will be especially welcome. Modern insights in computational geometry are used to provide solutions that are both efficient and easy to understand and implement. All the basic techniques and topics from computational geometry, as well as several more advanced topics, are covered. The book is largely self-contained and can be used for self-study by anyone with a basic background in algorithms. In this third edition, besides revisions to the second edition, new sections discussing Voronoi diagrams of line segments, farthest-point Voronoi diagrams, and realistic input models have been added.
Article
This report is organized as follows. Section 2 describes map generalization problems and introduces algorithms that solve some of them. In Section 3, map labeling problems are discussed and solutions are proposed. Section 4 describes data structures for GIS. In Section 5, data structures for spatial queries on past states of the data are discussed.
Advances in Image and Video Technology, Proceedings
  • Lee PF
  • Chiang CH
  • Tseng JL
  • Jong BS
  • Lin TW
A method for displaying metaballs by using Bézier clipping
  • Nishita
Chapter 7 in Handbook of Computational Geometry
  • De Floriani L
  • Magillo P
  • Puppo E
Simplification of scattered point cloud based on feature extraction
  • Peng
ESimp: error-controllable simplification with feature preservation for surface reconstruction
  • Wei
An algorithm for extracting geometric features from point clouds
  • Pang XF
  • Pang MY
Analysis of point clouds using conformal geometric algebra
  • Hildenbrand D
  • Hitzer E