
A new parameter-free graph-morphology-based segmentation algorithm is proposed to address the problem of partitioning a 3D triangular mesh into disjoint submeshes that correspond to the physical parts of the underlying object. Curvedness, which is a rotation and translation invariant shape descriptor, is computed at every vertex in the input triangulation. Iterative graph dilation and morphological filtering of the outlier curvedness values result in multiple disjoint maximally connected submeshes such that each submesh contains a set of vertices with similar curvedness values, and vertices in disjoint submeshes have significantly different curvedness values. Experimental evaluations using the triangulations of a number of complex objects demonstrate the robustness and the efficiency of the proposed algorithm and the results prove that it compares well with a number of state-of-the-art mesh segmentation algorithms.
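
The curvedness descriptor used by this algorithm is derived from the two principal curvatures at a vertex. As a minimal illustration (not the paper's implementation; the function name and the assumption that principal curvatures are already estimated are ours), it can be computed as:

```python
import math

def curvedness(k1: float, k2: float) -> float:
    """Koenderink-style curvedness from the principal curvatures k1, k2.
    It is invariant under rotation and translation of the surface."""
    return math.sqrt((k1 * k1 + k2 * k2) / 2.0)

# A planar patch (k1 = k2 = 0) has curvedness 0; a unit sphere
# (k1 = k2 = 1) has curvedness 1, independent of its pose.
```

Because the descriptor depends only on the principal curvatures, vertices on congruent surface patches receive the same value regardless of the mesh's position or orientation, which is what makes it usable as a segmentation criterion.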


... Seed segmentation is a region-based method that does not require much prior knowledge to acquire regional nodes. In this method, multiple seed points are first selected; then, using a predetermined similarity criterion, each region grows by adding neighboring points [22]. The disadvantages of this method are that it is highly sensitive to background noise and requires lengthy execution times [17]. ...

... To address the problem of algorithms being excessively sensitive to background noise, some studies have used a combination of normal and curvature information to smooth the constraints between different regions [22], [24]. This approach reduces the impact of background noise on the clustering results to a certain extent. ...

Light detection and ranging (LIDAR) scanning is a common method of substation scene modeling that extracts point clouds of electrical equipment from the point cloud scene of a substation. The extraction effect is limited by uncertainty regarding the noise level, nonuniform point cloud density, and the computational complexity. In this paper, we propose a point cloud extraction solution for electrical equipment models. First, a statistical analysis of substation ground elevation is performed to obtain the point clouds of devices at the feature height and remove large numbers of redundant underground point clouds. Second, based on the statistically derived power equipment feature heights, the point cloud data are sliced according to the featured elevation intervals. Based on voxelization, the point cloud slices are then clustered using horizontal hierarchical clustering. The clustering results at different elevation intervals are then reclustered using vertical hierarchical clustering. Finally, we use filters combined with the DBSCAN algorithm to perform fine segmentation on the point cloud data. The results show that our slice clustering approach reduces the computational burden involved in point cloud processing, and the comprehensive clustering strategy ensures the accuracy of the clustering results.
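
The first step above, statistical analysis of ground elevation, can be sketched as finding the densest elevation bin and discarding points below it. This is a simplified stand-in for the paper's procedure; the function name, bin size, and input format are assumptions:

```python
from collections import Counter

def remove_below_ground(points, bin_size=0.1):
    """Estimate the ground elevation as the most populated z-bin
    (a simple statistical stand-in for the elevation analysis)
    and discard points below it."""
    bins = Counter(int(z // bin_size) for _, _, z in points)
    ground_bin = max(bins, key=bins.get)      # densest elevation slice
    ground_z = ground_bin * bin_size
    return [p for p in points if p[2] >= ground_z]
```

In a real substation scan the ground dominates the elevation histogram, so its bin is easy to identify; the surviving points can then be sliced by elevation interval as described above.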

... Therefore, numerous studies on single-wavelength lidar point cloud segmentation have been conducted. The four main geometry-based methods are edge-based [15], model-based [16], clustering-feature-based [17], and region-growing-based methods [18]. In edge-based methods, the ...

... In region-growing-based methods, neighboring points that share similar characteristics are collected into one component [18,27]. A region (component) is grown around the seed point based on a predefined similarity criterion over geometric features, until an end condition is satisfied. ...

Light detection and ranging (lidar) can record a 3D environment as point clouds, which are unstructured and difficult to process efficiently. Point cloud segmentation is an effective technology to solve this problem and plays a significant role in various applications, such as forestry management and 3D building reconstruction. The spectral information from images can improve the segmentation result, but suffers from varying illumination conditions and the registration problem. New hyperspectral lidar sensor systems can solve these problems, with the capacity to obtain spectral and geometric information simultaneously. Previous segmentation work on hyperspectral lidar was mainly based on spectral information; the geometric segmentation methods widely used for single-wavelength lidar have not yet been employed for hyperspectral lidar. This study aims to fill this gap by proposing a hyperspectral lidar segmentation method with three stages. First, Connected-Component Labeling (CCL) using the geometric information is employed for base segmentation. Second, the output components of the first stage are split by spectral difference using Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Third, the components of the second stage are merged based on spectral similarity using Spectral Angle Match (SAM). Two indoor experimental scenes were set up for validation. We compared the performance of our method with that of the 3D and intensity feature based method. The quantitative analysis indicated that our proposed method improved the point-weighted score by 19.35% and 18.65% in the two experimental scenes, respectively. These results showed that the geometric segmentation methods for single-wavelength lidar can be combined with spectral information and contribute to more effective hyperspectral lidar point cloud segmentation.
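
The Spectral Angle Match used in the third, merging stage compares two spectra by the angle between them, which is insensitive to overall brightness scaling. A minimal sketch (function name ours; the paper's merging threshold logic is omitted):

```python
import math

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra: 0 means identical
    spectral shape, larger values mean more dissimilar spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp for numerical safety before arccos.
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))
```

Two components whose mean spectra give an angle below some threshold would be merged; a spectrum scaled by a constant factor yields an angle of zero, so illumination-like intensity changes do not trigger a split.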

... MLS, also known as mobile LiDAR, is the laser scanning system that is mounted on a mobile ground-based platform, such as a vehicle. Among the three categories of laser scanners, TLS has the highest ranging accuracy as the scanner keeps still during operation. Therefore, TLS has been utilized for applications that require high accuracy such as surveying [10], documentation [39], and monitoring [40] of buildings and civil infrastructures. ...

... Segmentation algorithms based on region usually start a region from one or a few seed points and then iteratively grow the region to include neighboring points according to certain criteria [146,185]. These algorithms basically involve two steps: identification and growing of seed surface [154]. ...
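
The two steps described above, seed identification and growing, can be sketched as a breadth-first region growing that absorbs neighbors whose normals are nearly parallel to the current point's normal. This is a generic illustration, not any cited paper's exact algorithm; the neighbor lists and similarity threshold are assumed inputs:

```python
from collections import deque

def region_grow(points, normals, neighbors, angle_thresh=0.95):
    """Greedy region growing: each unlabeled point seeds a region that
    expands to neighbors whose normals are nearly parallel to the
    current point's normal (dot product above angle_thresh)."""
    labels = [-1] * len(points)
    region = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:
            cur = queue.popleft()
            for nb in neighbors[cur]:
                if labels[nb] == -1 and dot(normals[cur], normals[nb]) > angle_thresh:
                    labels[nb] = region
                    queue.append(nb)
        region += 1
    return labels

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))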

3D point cloud data from sensing technologies such as 3D laser scanning and photogrammetry are able to capture the 3D surface geometries of target objects in an accurate and efficient manner. Due to these advantages, the construction industry has been capturing 3D point cloud data of construction sites, construction works, and construction equipment to enable better decision making in construction project management. The captured point cloud data are utilized to reconstruct 3D building models, check construction quality, monitor construction progress, improve construction safety, etc., throughout the project lifecycle from the design phase to the construction and facilities management phases. This paper aims to review the state-of-the-art methods to acquire and process 3D point cloud data for construction applications. The different approaches to 3D point cloud data acquisition are reviewed and compared, including 3D laser scanning, photogrammetry, videogrammetry, RGB-D cameras, and stereo cameras. Furthermore, the processing methods of 3D point cloud data are reviewed according to the four common processing procedures: (1) data cleansing, (2) data registration, (3) data segmentation, and (4) object recognition. For each processing procedure, the different processing methods and algorithms are compared and discussed in detail, which provides useful guidance to both researchers and industry practitioners for adopting point cloud data in the construction industry.

... In this paper, the authors present a method for dynamic garment simulation based on a hybrid bounding volume hierarchy. The method collects the basic collision units of a human body model by using MCASG graph theory [20] and the K-means clustering algorithm [21]. Then it constructs cylinder, elliptical-cylinder, and sphere bounding boxes to approximate these basic units. ...

... We use the MCASG graph algorithm [20] to perform the initial segmentation. The geometric aspect of an object is described as curvedness in the segmentation, which is also known as the bending energy. ...

In order to solve the computing speed and efficiency problems of existing dynamic clothing simulation, this paper presents a dynamic garment simulation based on a hybrid bounding volume hierarchy. It first uses MCASG graph theory to perform the primary segmentation of a given three-dimensional human body model. It then applies K-means clustering to perform the secondary segmentation, collecting the body's upper arms, lower arms, upper legs, lower legs, trunk, hips, and, for female models, the chest as the elementary units of dynamic clothing simulation. According to the shapes of these elementary units, it chooses the closest and most efficient hybrid bounding box for each unit, such as a cylinder or elliptic-cylinder bounding box. During the construction of these bounding boxes, it uses the least-squares method and slices of the human body to obtain the related parameters. This approach makes it possible to use the least number of bounding boxes to create tight collision-detection regions around the human body. A spring-mass model based on a triangular mesh of the clothing model is finally constructed for dynamic simulation. The simulation results show the feasibility and superiority of the described method.
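
The K-means clustering used for the secondary segmentation follows Lloyd's algorithm: assign each point to its nearest centroid, then recompute centroids as cluster means. A self-contained sketch (farthest-point seeding is our choice for determinism, not necessarily the paper's):

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm on 3D points with farthest-point seeding:
    assign points to the nearest centroid, then move centroids to means."""
    centroids = [points[0]]
    while len(centroids) < k:   # seed next centroid far from existing ones
        centroids.append(max(points,
                             key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: dist2(p, centroids[j]))
            clusters[i].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))
```

For body segmentation the "points" would be mesh vertices (or per-vertex features) and k would be the number of desired body parts.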

... 3D segmentation is the process of decomposing a 3D model into functionally meaningful regions. Several traditional methods, such as edge-based [36], region-based [37], and model-fitting [38] methods, have been proposed to group point clouds into homogeneous groups with similar local features. With the ever-growing number of 3D shape databases [39,40] and annotated RGB-D datasets [41,42] becoming available, data-driven approaches have started to play an important role in 3D object recognition and have achieved impressive progress [43,44]. ...

This paper presents a semantic-based interactive system that enables virtual content placement using natural language. We propose a novel computational framework composed of three components including 3D reconstruction, 3D segmentation, and 3D annotation. Based on the framework, the system can automatically construct a semantic representation of the environment from raw point cloud data. Users can then assign virtual content to a specific physical location by referring to its semantic label. Compared with traditional projection mapping which may involve tedious manual adjustments, the proposed system can facilitate intuitive and efficient manipulation of virtual content in immersive environments through speech inputs. The technical evaluation and user study results show that the system can provide users with accurate semantic information for effective virtual content placement at room scale.

... Generally speaking, these methods form point groups by checking each point (or derivatives, like voxels/supervoxels [13,14]) around one or more seeds against specific criteria. Researchers advocating RG-based methods have therefore developed several criteria considering orientation [10,15], surface normals [10,16,17], curvatures [18], etc. Li et al. [19] explore the possibility of applying the RG algorithm to multiple-conclusion cases (i.e., leaf phenotyping) and prove it to be feasible. ...

Segmentation from point cloud data is essential in many applications, such as remote sensing, mobile robots, or autonomous cars. However, the point clouds captured by 3D range sensors are commonly sparse and unstructured, challenging efficient segmentation. A fast solution for point cloud instance segmentation with small computational demands is lacking. To this end, we propose a novel fast Euclidean clustering (FEC) algorithm which applies a point-wise scheme over the cluster-wise scheme used in existing works. The proposed method avoids constantly traversing every point in each nested loop, which is time- and memory-consuming. Our approach is conceptually simple, easy to implement (40 lines in C++), and runs two orders of magnitude faster than classical segmentation methods while producing high-quality results.
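
Euclidean clustering, as used here, groups points whose pairwise distance falls below a radius. The sketch below uses a brute-force pair scan with union-find purely for clarity; the FEC paper's contribution is precisely to replace such scans with a fast point-wise neighborhood query, so this illustrates the clustering criterion, not FEC itself:

```python
def euclidean_cluster(points, radius):
    """Brute-force Euclidean clustering: points closer than `radius`
    end up in the same cluster (union-find over all pairs)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    r2 = radius * radius
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist2(points[i], points[j]) <= r2:
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(points))]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))
```

Note that clusters are transitive: a chain of points each within `radius` of the next forms a single cluster even when its endpoints are far apart.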

... The margin area has a higher probability of passage of a margin line than the fuzzy area. When the region growing method is applied to the mesh model, starting from a given node, it continues to expand while searching for nodes with homogeneous properties (Lavoue et al., 2004; Jagannathan & Miller, 2007; Fei et al., 2014). Therefore, before applying the region growing method, we assigned a convex or concave property to each node of the mesh model. ...

A margin line, defined as the boundary curve of the contact area between a prepared tooth and a dental restoration, considerably influences the end shape of the dental restoration design. Most studies that have extracted margin lines from mesh models representing prepared teeth have faced convergence problems in the path search and therefore pose the inconvenience of specifying multiple input points as intermediate goal points. To overcome these problems, in this study, we propose a bidirectional path-search algorithm using a single input point. First, the algorithm classifies all nodes in a mesh model into a margin or fuzzy region to increase search efficiency. Then, the search starts from one point and proceeds on two paths in opposite directions, using the current node of the opposite path as the temporary goal of the currently searched path. During the search, a dynamic evaluation function that varies weights according to the region type is employed to improve the path convergence. Finally, to increase the practicality of the algorithm, the jagged initial margin line is converted into a smooth spline curve using an energy-minimisation function specialised for margin lines. To evaluate the proposed algorithm, margin lines extracted from various types of prepared teeth are demonstrated and compared with those created using some relevant previous works and a commercial dental computer-aided design system. The comparison verified that accurate margin lines could be calculated with only one input point using the proposed algorithm. Moreover, the proposed algorithm showed better performance for crown and inlay/onlay experimental models compared with a commercial dental CAD system under the same conditions.

... Its principle is to divide point cloud data into several nonintersecting subsets according to the characteristic properties of point clouds. Traditional point cloud segmentation methods include edge-based, region-based, and model-fitting-based methods [17][18][19], which obtain only relatively coarse segmentation results. Recently, with the development of data-driven deep learning algorithms, end-to-end methods have been proposed to analyze point clouds. ...

Existing point cloud semantic segmentation approaches do not perform well on details, especially for the boundary regions. However, supervised-learning-based methods depend on costly artificial annotations for performance improvement. In this paper, we bridge this gap by designing a self-supervised pretext task applicable to point clouds. Our main innovation lies in the mixed feature prediction strategy during the pretraining stage, which facilitates point cloud feature learning with boundary-aware foundations. Meanwhile, a dynamic feature aggregation module is proposed to regulate the range of receptive field according to the neighboring pattern of each point. In this way, more spatial details are preserved for discriminative high-level representations. Extensive experiments across several point cloud segmentation datasets verify the superiority of our proposed method, including ShapeNet-part, ScanNet v2, and S3DIS. Furthermore, transfer learning on point cloud classification and object detection tasks demonstrates the generalization ability of our method.

... The main idea is to group points that are geometrically homogeneous. These methods first choose seed points and then merge neighbors that have similar surface point properties such as orientation [4,5], surface normal, and curvature [6,7]. However, these methods are sensitive to the location of the initial seeds and to inaccurate estimation of normals and curvatures near boundaries, which leads to over- or under-segmentation. ...

Segmentation from point cloud data is essential in many applications such as remote sensing, mobile robots, or autonomous cars. However, the point clouds captured by 3D range sensors are commonly sparse and unstructured, challenging efficient segmentation. In this paper, we present a fast solution to point cloud instance segmentation with small computational demands. To this end, we propose a novel fast Euclidean clustering (FEC) algorithm which applies a pointwise scheme over the clusterwise scheme used in existing works. Our approach is conceptually simple, easy to implement (40 lines in C++), and runs two orders of magnitude faster than classical segmentation methods while producing high-quality results.

... Region-based segmentation [14][15][16][17][18] aims to directly identify specific zones characterized by homogeneous geometric properties. Methods based on this approach generally start from one or more seed points and merge neighbouring points with similar characteristics, such as surface orientation and curvature. Region-based methods are relatively less sensitive to noise in the data and usually perform better than edge-based methods. ...

In modern industry, multi-sensor metrology methods are increasingly applied for fast and accurate 3D data acquisition. These methods typically start with fast initial digitization by an optical digitizer; the obtained 3D data are then analyzed to extract information that guides precise re-digitization and multi-sensor data fusion. The raw measurement output from an optical digitizer consists of dense, unsorted points with defects. Therefore, a new analysis method has to be developed to process the data and prepare it for metrological verification. This article presents a novel algorithm to manage measured data from optical systems. A robust edge-point recognition method is proposed to segment edge points from a 3D point cloud. The remaining point cloud is then divided into different patches by applying Euclidean distance clustering. A simple RANSAC-based method is used to identify the feature of each segmented data patch and derive its parameters. Subsequently, a special region growing algorithm is designed to refine the segmentation of under-segmented regions. The proposed method is experimentally validated on various industrial components. Comparisons with state-of-the-art methods indicate that the proposed method for feature surface extraction is feasible and capable of achieving favorable performance, facilitating automation for industrial components.
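
The RANSAC-based feature identification mentioned above repeatedly fits a model to a minimal sample and keeps the hypothesis with the most inliers. A minimal plane-fitting sketch (function names, threshold, and iteration count are ours, not the paper's):

```python
import random

def ransac_plane(points, thresh=0.05, iters=200, seed=1):
    """Minimal RANSAC plane fit: sample 3 points, build the plane
    normal via a cross product, and count inliers within `thresh`."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        a, b, c = rng.sample(points, 3)
        n = cross(sub(b, a), sub(c, a))
        norm = sum(x * x for x in n) ** 0.5
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        n = tuple(x / norm for x in n)
        inliers = [p for p in points if abs(dot(sub(p, a), n)) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
```

The same loop structure works for other primitives (cylinders, spheres), changing only the minimal sample size and the point-to-model distance.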

... The surface-based region segmentation method groups points belonging to the same basic geometric feature into the same region; its main steps are seed-point selection and region growth. Jagannathan and colleagues partitioned the point cloud into several regions by computing the curvature value at each vertex, such that the curvature values of points within a region are similar while those of points in different regions differ markedly, thereby achieving point cloud segmentation [3]. The key points of the surface-based point cloud segmentation method are the selection of seed points and the growth criteria, so it requires strong a priori knowledge, and the edges it produces are prone to distortion. ...

In order to analyze the machining errors of parts more accurately, a point cloud segmentation method based on model registration is proposed for mechanical parts. First, the surface information of the standard model of the part is extracted from the Initial Graphics Exchange Specification (IGES) file. Then, the part point cloud is registered to the standard model using the moment principal axis method and the Iterative Closest Point (ICP) algorithm. Next, the distance between each point in the point cloud and each surface in the standard model is calculated to determine the correspondence between model surfaces and points, and the point cloud is segmented accordingly. Finally, the segmentation results are surface-fitted and benchmarked to obtain a segmentation with better alignment quality. The experimental results show that this method can automatically segment the part point cloud according to the surface features of the part when only the IGES model and the point cloud model are input, without setting any empirical parameters, and obtains segmentation results with clear boundaries.

... In order to avoid high complexity and operational difficulty as well as to guarantee robustness in a changing scenario, the region growing (RG) and random sample consensus (RANSAC) algorithms are considered for multiple-sphere segmentation or detection from the 3D point cloud in this monitoring task, due to their respective advantages. RG was first proposed in the field of intensity image segmentation [21] and was later introduced to 3D point cloud segmentation [22][23][24][25][26]. This method has the advantage of not requiring prior sphere parameters (such as the design of the sphere layout and the coordinate information). ...

Accepting the ecological necessity of a drastic reduction of resource consumption and greenhouse gas emissions in the building industry, the Institute for Lightweight Structures and Conceptual Design (ILEK) at the University of Stuttgart is developing graded concrete components with integrated concrete hollow spheres. These components weigh a fraction of conventional components while exhibiting the same performance. Throughout the production process of a component, the positions of the hollow spheres and the level of the fresh concrete have to be monitored with high accuracy and in close to real time, so that the quality and structural performance of the component can be guaranteed. In this contribution, effective solutions for multiple-sphere detection and concrete surface modeling based on terrestrial laser scanning (TLS) during the casting process are proposed and realized by the Institute of Engineering Geodesy (IIGS). A complete monitoring concept is presented to acquire the point cloud data quickly and with high quality. The data processing method for multiple-sphere segmentation, based on the efficient combination of region growing and random sample consensus (RANSAC), exhibits great computational efficiency and robustness. The feasibility and reliability of the proposed methods are verified and evaluated by an experiment monitoring the production of an exemplary graded concrete component. Some suggestions to improve the monitoring performance and relevant future work are given as well.
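
A RANSAC sphere detector of the kind described above draws minimal samples of four points and fits the unique sphere through them. A sketch of that minimal fit (names ours; the surrounding RANSAC loop and inlier test are omitted):

```python
def sphere_through(p0, p1, p2, p3):
    """Center and radius of the sphere through four non-coplanar points
    (the minimal sample a RANSAC sphere detector would draw). Derived by
    subtracting the implicit sphere equation at p0 from those at p1..p3,
    which leaves a 3x3 linear system in the center coordinates."""
    rows, rhs = [], []
    for p in (p1, p2, p3):
        rows.append([2 * (p[i] - p0[i]) for i in range(3)])
        rhs.append(sum(p[i] ** 2 - p0[i] ** 2 for i in range(3)))
    center = solve3(rows, rhs)
    radius = sum((p0[i] - center[i]) ** 2 for i in range(3)) ** 0.5
    return center, radius

def solve3(m, b):
    """Cramer's rule for a 3x3 linear system m @ x = b."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    out = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = b[r]
        out.append(det(mc) / d)
    return tuple(out)
```

Within a RANSAC loop, points whose distance to the fitted center deviates from the radius by less than a tolerance would count as inliers of the hollow-sphere hypothesis.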

... These traditional algorithms usually cluster geometric elements with similar properties, using, for example, region growing, hierarchical clustering, and hierarchical decomposition. In region growing methods [9], [25], a set of triangular faces is first selected as seeds; then a region is grown from each seed based on local properties until all faces are assigned to a region. ...

Recognizing and fitting shape primitives from underlying 3D models is a key component of many computer graphics applications. Although many structure recovery methods exist, they usually fail to identify blending surfaces, which are small transition regions between relatively large primary patches. To address this issue, we present a novel approach for automatic segmentation and surface fitting with accurate geometric parameters from 3D models. Overall, we formulate the structural segmentation as a Markov Random Field labeling problem. We first propose a new clustering algorithm to build mesh superfacets by incorporating 3D local geometric information. It allows the extraction of general quadric and rolling-ball blend regions, as well as improving the robustness of further segmentation. Next, by defining the multi-label energy function on the superfacets, we apply a specially-designed MRF framework to efficiently partition the model into different meaningful patches of known surface type. Furthermore, we present an iterative optimization algorithm based on skeleton extraction to fit rolling-ball blend patches by recovering the parameters of rolling-center trajectories and ball radii. Experiments on a variety of complex models demonstrate the effectiveness and robustness of the proposed method, and its superiority is also verified by comparison with state-of-the-art approaches. We further apply our algorithm in applications such as mesh editing by changing the radius of the rolling balls.

... The region growing approach is one of the most widely used segmentation methods. This method [32][33][34][35][36] selects a set of seed faces under certain rules and then grows around each seed face until all faces are assigned to a single region. When expanding a region, the local surface features are the main criteria. ...

The Markov Random Field (MRF) energy function constructed by existing OpenMVS-based 3D texture reconstruction algorithms considers only the image label of the adjacent triangle face for the smoothness term and ignores the planar-structure information of the model. As a result, the generated texture charts have too many fragments, leading to serious local miscuts and color discontinuity between texture charts. This paper fully utilizes the planar-structure information of the mesh model and the visual information of the 3D triangle faces on the images, and proposes an improved, faster, and high-quality texture chart generation method based on the texture chart generation algorithm of OpenMVS. The methodology of the proposed approach is as follows: (1) The visual quality of each triangle face on its different visual images is scored using the visual information of the triangle face on each image in the mesh model. (2) A fully automatic Variational Shape Approximation (VSA) plane segmentation algorithm is used to segment the blocked 3D mesh models. The proposed fully automatic VSA-based plane segmentation algorithm is suitable for multi-threaded parallel processing, which addresses the VSA framework's need to manually set the number of planes and its low computational efficiency on large scene models. (3) The visual quality of the triangle faces on the different visual images is used as the data term, and the image labels of adjacent triangles and the result of plane segmentation are utilized as the smoothness term to construct the MRF energy function. (4) An image label is assigned to each triangle by minimizing the energy function. A texture chart is generated by clustering the topologically-adjacent triangle faces with the same image label, and the jagged boundaries of the texture chart are smoothed. Three sets of data of different types were used for quantitative and qualitative evaluation.
Compared with the original OpenMVS texture chart generation method, the experiments show that the proposed approach significantly reduces the number of texture charts, significantly improves miscuts and color differences between texture charts, and greatly boosts the efficiency of the VSA plane segmentation algorithm and OpenMVS texture reconstruction.

... The region-based methods utilize local information to cluster the points into regions. To determine the points to be added to a region, features such as surface orientation, curvature, and normal are investigated for points within a predetermined radius or a certain number of neighbors (Rabbani et al., 2006; Jagannathan and Miller, 2007). Therefore, a preprocessing step is required to define neighborhood relationships before using these methods with unorganized point cloud data (Vo et al., 2015). ...

... The segmentation of point clouds is the grouping of points into homogeneous regions. Traditionally, segmentation is done using edges [139] or surface properties such as normals, curvature and orientation [139,140]. Recently, feature-based deep learning approaches have been used for point cloud segmentation to segment the points into different aspects. The aspects could be different parts of an object, which is referred to as part segmentation, or different class categories, also referred to as semantic segmentation. ...

A point cloud is a set of points defined in a 3D metric space. Point clouds have become one of the most significant data formats for 3D representation and are gaining increased popularity as a result of the increased availability of acquisition devices, as well as seeing increased application in areas such as robotics, autonomous driving, and augmented and virtual reality. Deep learning is now the most powerful tool for data processing in computer vision and is becoming the most preferred technique for tasks such as classification, segmentation, and detection. While deep learning techniques are mainly applied to data with a structured grid, the point cloud, on the other hand, is unstructured. The unstructuredness of point clouds makes the use of deep learning for its direct processing very challenging. This paper contains a review of the recent state-of-the-art deep learning techniques, mainly focusing on raw point cloud data. The initial work on deep learning directly with raw point cloud data did not model local regions; therefore, subsequent approaches model local regions through sampling and grouping. More recently, several approaches have been proposed that not only model the local regions but also explore the correlation between points in the local regions. From the survey, we conclude that approaches that model local regions and take into account the correlation between points in the local regions perform better. Contrary to existing reviews, this paper provides a general structure for learning with raw point clouds, and various methods were compared based on the general structure. This work also introduces the popular 3D point cloud benchmark datasets and discusses the application of deep learning in popular 3D vision tasks, including classification, segmentation, and detection.

... Segmentation of a point cloud is the grouping of points into homogeneous regions. Traditionally, segmentation is done using edges [87] or surface properties such as normals, curvature and orientation [87,88]. Recently, feature-based deep learning approaches have been used for point cloud segmentation with the goal of segmenting the points into different aspects. ...

A point cloud is a set of points defined in 3D metric space. Point clouds have become one of the most significant data formats for 3D representation. They are gaining increased popularity as a result of the increased availability of acquisition devices, such as LiDAR, as well as increased application in areas such as robotics, autonomous driving, and augmented and virtual reality. Deep learning is now the most powerful tool for data processing in computer vision and has become the preferred technique for tasks such as classification, segmentation, and detection. While deep learning techniques are mainly applied to data with a structured grid, the point cloud is unstructured, which makes its direct processing with deep learning very challenging. Earlier approaches overcome this challenge by preprocessing the point cloud into a structured grid format, at the cost of increased computational cost or loss of depth information. Recently, however, many state-of-the-art deep learning techniques that operate directly on point clouds have been developed. This paper surveys recent state-of-the-art deep learning techniques that mainly focus on point cloud data. We first briefly discuss the major challenges faced when using deep learning directly on point clouds, as well as earlier approaches that overcome these challenges by preprocessing the point cloud into a structured grid. We then review the various state-of-the-art deep learning approaches that directly process point clouds in their unstructured form. We introduce the popular 3D point cloud benchmark datasets, and we further discuss the application of deep learning in popular 3D vision tasks, including classification, segmentation, and detection.

... Therefore, in the early years, people used deep learning to learn high-level features from low-level (usually hand-crafted) cues. [6][7][8][9] Unsupervised shape segmentation [10] was proposed by Shu et al.: the input model is over-segmented, patch-based local features are calculated, a stacked auto-encoder then learns high-level features, and a graph-based segmentation is performed. ...

... Region-based methods work with region growing algorithms. In this case, the segmentation starts from one or more points (seed points) featuring specific characteristics and then grows around neighboring points with similar characteristics, such as surface orientation, curvature, etc. [27,33]. The initial algorithm was introduced by Besl et al. [34], and several variations are presented in the literature [35][36][37][38][39][40]. ...

In recent years, the use of 3D models in cultural and archaeological heritage for documentation and dissemination purposes has been increasing. The association of heterogeneous information with 3D data by means of automated segmentation and classification methods can help to characterize, describe and better interpret the object under study. Indeed, the high complexity of 3D data, along with the large diversity of heritage assets themselves, has made segmentation and classification currently active research topics. Although machine learning methods have brought great progress in this respect, few advances have been made in relation to cultural heritage 3D data. Starting from the existing literature, this paper aims to develop, explore and validate reliable and efficient automated procedures for the classification of 3D data (point clouds or polygonal mesh models) of heritage scenarios. In more detail, the proposed solution works on 2D data ("texture-based" approach) or directly on the 3D data ("geometry-based" approach) with supervised or unsupervised machine learning strategies. The method was applied and validated on four different archaeological/architectural scenarios. Experimental results demonstrate that the proposed approach is reliable and replicable and that it is effective for restoration and documentation purposes, providing metric information, e.g. of damaged areas to be restored.

... Many studies conducted in the last few years have focused on ground segmentation. 2,[4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23] Various research groups have proposed techniques for handling 3D point cloud data, not only in the spatial domain, but also in the frequency domain. We roughly categorize these techniques in relation to our present research in the subsections below. ...

For an autonomous mobile robot operating in an unknown environment, distinguishing obstacles from the traversable ground region is an essential step in determining whether the robot can traverse the area. Ground segmentation thus plays a critical role in autonomous mobile robot navigation in challenging environments, especially in real time. In this article, a ground segmentation method is proposed that combines three techniques: gradient threshold, adaptive break point detection, and mean height evaluation. Based on three-dimensional (3D) point clouds obtained from a Velodyne HDL-32E sensor, and by exploiting the structure of a two-dimensional reference image, the 3D data are represented as a graph data structure. This process serves both as a preprocessing step and as a means of visualizing very large data sets, segmenting mobile-generated data, and building maps of the area. Various types of 3D data, such as ground regions near the sensor center, uneven regions, and sparse regions, need to be represented and segmented. For the ground regions, we apply the gradient threshold technique for segmentation. We address the uneven regions using adaptive break points. Finally, for the sparse regions, we segment the ground by using a mean height evaluation.
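The gradient-threshold step described above can be sketched briefly. A minimal sketch, assuming a scan line represented as ordered (distance, height) pairs; the function name and thresholds are illustrative and not the authors' implementation:

```python
import math

def segment_ground_by_gradient(points, max_slope_deg=10.0):
    """Label each point in a scan line as ground (True) or obstacle (False).

    `points` is an ordered list of (distance, height) pairs along one laser
    scan line; a point counts as ground if the slope from the previous
    ground point stays below `max_slope_deg`. Illustrative sketch only.
    """
    max_tan = math.tan(math.radians(max_slope_deg))
    labels = []
    last_ground = None
    for d, h in points:
        if last_ground is None:
            labels.append(True)          # assume the first return is ground
            last_ground = (d, h)
            continue
        dd = d - last_ground[0]
        grad = abs(h - last_ground[1]) / dd if dd > 1e-6 else float("inf")
        is_ground = grad <= max_tan
        labels.append(is_ground)
        if is_ground:
            last_ground = (d, h)
    return labels

# Flat ground, then a step (obstacle), then flat ground again
scan = [(1.0, 0.0), (2.0, 0.02), (3.0, 0.5), (4.0, 0.03)]
print(segment_ground_by_gradient(scan))  # → [True, True, False, True]
```

Tracking the last accepted ground point, rather than the immediately preceding point, is what lets the labeling recover after a single obstacle return.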

... In region growing approaches, a set of faces is first selected as seeds of regions, and then those seeds keep growing until all faces of the mesh are assigned to a region. Local surface properties, such as normals and principal curvatures, are usually used to guide the growing process [20,14,9]. In hierarchical clustering approaches, each face of the mesh is first regarded as a single region. ...

... These methods start from one or more points (seed points) featuring specific characteristics and then grow around neighbouring points with similar characteristics, such as surface orientation, curvature, etc. (Rabbani et al., 2006; Jagannathan and Miller, 2007). Region-based methods can be divided into: ...

Today 3D models and point clouds are very popular, being currently used in several fields, shared through the internet, and even accessed on mobile phones. Despite their broad availability, there is still a relevant need for methods, preferably automatic, to provide 3D data with meaningful attributes that characterize and give significance to the objects represented in 3D. Segmentation is the process of grouping point clouds into multiple homogeneous regions with similar properties, whereas classification is the step that labels these regions. The main goal of this paper is to analyse the most popular methodologies and algorithms to segment and classify 3D point clouds. Strong and weak points of the different solutions presented in the literature or implemented in commercial software will be listed and shortly explained. For some algorithms, the results of the segmentation and classification are shown using real examples at different scales in the Cultural Heritage field. Finally, open issues and research topics will be discussed.

... To decompose a facade into depth planes, the point set needs to be segmented into 2.5D segments. To this end, the point cloud is separated by the curvature-based region growing algorithm (Jagannathan and Miller, 2007), which merges neighboring points and adds a smoothness constraint. In particular, the point with the minimum curvature is first set as a seed, and then the angles between the normal of every neighbor and the normal of the current seed point are computed. ...
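The seeded, normal-angle-based growing described in this excerpt can be sketched as follows. All names, the neighbor-graph input, and the thresholds are illustrative assumptions; the cited algorithms differ in detail:

```python
import math
from collections import deque

def region_grow(normals, curvatures, neighbors, angle_thresh_deg=15.0,
                curvature_thresh=0.1):
    """Curvature-seeded, smoothness-constrained region growing sketch.

    The unlabelled point with minimum curvature becomes a seed; neighbours
    whose normals deviate from the current point's normal by less than
    `angle_thresh_deg` are absorbed, and only smooth points (curvature below
    `curvature_thresh`) propagate the growth. Illustrative sketch only.
    """
    cos_thresh = math.cos(math.radians(angle_thresh_deg))
    labels = [-1] * len(normals)
    region = 0
    for start in sorted(range(len(normals)), key=lambda i: curvatures[i]):
        if labels[start] != -1:
            continue
        labels[start] = region
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for j in neighbors[i]:
                if labels[j] != -1:
                    continue
                dot = sum(a * b for a, b in zip(normals[i], normals[j]))
                if abs(dot) >= cos_thresh:
                    labels[j] = region
                    if curvatures[j] < curvature_thresh:
                        queue.append(j)
        region += 1
    return labels

# Two flat patches meeting at a sharp crease: points 0-1 face up, 2-3 face sideways
normals = [(0, 0, 1), (0, 0, 1), (1, 0, 0), (1, 0, 0)]
curv = [0.01, 0.05, 0.05, 0.01]
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(region_grow(normals, curv, nbrs))  # → [0, 0, 1, 1]
```

The crease between points 1 and 2 exceeds the angle threshold, so the growth stops there and two regions emerge.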

The effective and automated parsing of building facades from terrestrial laser scanning (TLS) point clouds of urban environments is an important research topic in the GIS and remote sensing fields. It is also challenging because of the complexity and great variety of the available 3D building facade layouts as well as the noise and missing data in the input TLS point clouds. In this paper, we introduce a novel methodology for the accurate and computationally efficient parsing of urban building facades from TLS point clouds. The main novelty of the proposed methodology is that it is a systematic and hierarchical approach that considers, in an adaptive way, the semantic and underlying structures of the urban facades for segmentation and subsequent accurate modeling. Firstly, the available input point cloud is decomposed into depth planes based on a data-driven method; such layer decomposition enables similarity detection in each depth plane layer. Secondly, the labeling of the facade elements is performed using the SVM classifier in combination with our proposed BieS-ScSPM algorithm. The labeling outcome is then augmented with weak architectural knowledge. Thirdly, least-squares fitted normalized gray accumulative curves are applied to detect regular structures, and a binarization dilation extraction algorithm is used to partition facade elements. A dynamic line-by-line division is further applied to extract the boundaries of the elements. The 3D geometrical facade models are then reconstructed by optimizing facade elements across depth plane layers. We have evaluated the performance of the proposed method using several TLS facade datasets. Qualitative and quantitative performance comparisons with several other state-of-the-art methods dealing with the same facade parsing problem have demonstrated its superiority in performance and its effectiveness in improving segmentation accuracy.

... The region growing based approaches deal with segmentation by detecting continuous surfaces that have homogeneous geometrical properties. In the segmentation of unstructured 3D point clouds, these methods first choose a seed point from which to grow a region, and then local neighbors of the seed point are combined with it if they have similarities in terms of surface point properties such as orientation and curvature (Rabbani et al., 2006; Jagannathan and Miller, 2007). There are also algorithms which take a sub window (Xiao et al., 2013) or a line segment (Harati et al., 2007) as the growth unit. ...

In this paper, we first present a novel hierarchical clustering algorithm named Pairwise Linkage (P-Linkage), which can be used for clustering data of any dimensionality, and then effectively apply it to 3D unstructured point cloud segmentation. The P-Linkage clustering algorithm first calculates a feature value for each data point, for example, the density for 2D data points and the flatness for 3D point clouds. Then for each data point a pairwise linkage is created between itself and its closest neighboring point with a greater feature value than its own. The initial clusters can further be discovered by searching along the linkages in a simple way. After that, a cluster merging procedure is applied to obtain the final refined clustering result, which can be designed for specialized applications. Based on the P-Linkage clustering, we develop an efficient segmentation algorithm for 3D unstructured point clouds, in which the flatness of the estimated surface of a 3D point is used as its feature value. For each initial cluster a slice is created, then a novel and robust slice-merging method is proposed to get the final segmentation result. The proposed P-Linkage clustering and 3D point cloud segmentation algorithms require only one input parameter in advance. Experimental results on different dimensional synthetic data from 2D to 4D sufficiently demonstrate the efficiency and robustness of the proposed P-Linkage clustering algorithm, and a large number of experimental results on Vehicle-Mounted, Aerial and Stationary Laser Scanner point clouds illustrate the robustness and efficiency of our proposed 3D point cloud segmentation algorithm.
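The pairwise-linkage idea above (each point links to its nearest neighbor with a strictly greater feature value; local feature maxima become cluster roots) can be sketched in a few lines. The function name and the brute-force neighbor search are illustrative assumptions, not the paper's implementation:

```python
def p_linkage(points, feature):
    """Pairwise-Linkage clustering sketch.

    Each point links to its nearest neighbour with a strictly greater
    feature value; points with no such neighbour (local feature maxima)
    become cluster roots, and every point inherits the label of the root
    reached by following its linkage chain. O(n^2) search for clarity.
    """
    n = len(points)
    link = [-1] * n
    for i in range(n):
        best, best_d = -1, float("inf")
        for j in range(n):
            if j != i and feature[j] > feature[i]:
                d = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                if d < best_d:
                    best, best_d = j, d
        link[i] = best           # -1 => local maximum, i.e. a cluster root
    labels = [-1] * n
    roots = {}
    for i in range(n):
        chain, j = [], i
        while link[j] != -1 and labels[j] == -1:
            chain.append(j)
            j = link[j]
        root_label = labels[j] if labels[j] != -1 else roots.setdefault(j, len(roots))
        labels[j] = root_label
        for k in chain:
            labels[k] = root_label
    return labels

# Two well-separated 2D blobs; the feature plays the role of local density
pts = [(0, 0), (0.1, 0), (0.2, 0), (5, 5), (5.1, 5), (5.2, 5)]
feat = [1, 3, 2, 1, 3, 2]
print(p_linkage(pts, feat))  # → [0, 0, 0, 1, 1, 1]
```

Because every linkage points to a strictly greater feature value, the chains are acyclic and always terminate at a root.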

... Methods for 3D mesh model segmentation can be divided into automatic segmentation methods and sketch-based user interfaces. There are many automatic segmentation algorithms [7], such as the curvedness-based region growing approach [8], hierarchical face clustering [9], and iterative clustering [10]. Most automatic segmentation algorithms split the mesh using a threshold as a control parameter; the result is not intuitive and is difficult to control, and it takes several attempts to obtain satisfactory results. ...


... Jagannathan and Miller (Jagannathan & Miller 2007) use a metric known as curvedness (Koenderink & Doorn 1992; Dorai & Jain 1997) for segmentation, to extract regions of the mesh with high curvature. The curvedness is calculated for each point in the mesh. ...

This paper presents the main concepts of a project under development concerning the analysis process of a scene containing a large number of objects, represented as unstructured point clouds. To achieve what we call the "optimal scene interpretation" (the shortest scene description satisfying the MDL principle), we follow an approach for managing 3D objects based on a semantic framework that uses ontologies for adding and sharing conceptual knowledge about spatial objects.

Closely adjacent objects are a kind of challenging scene in three-dimensional (3D) point cloud segmentation. To address this issue, we propose a novel periphery-restrictive region-growing-based segmentation method. First, the clouds are smoothed by the moving least squares (MLS) method and the periphery of the target clouds is extracted. Then an initial seed generation based on minimum curvature first of non-periphery points, namely, non-periphery seed generation (NSG) algorithm is proposed. Finally, a nearest periphery restrictive growing (NPRG) algorithm is proposed for accurate segmentation. In addition, we establish a point cloud dataset of adjacent objects which contains three typically adjacent objects on the pipeline. We evaluate the effectiveness and accuracy of the proposed method on this dataset, and the extensive experiments show that the proposed method performs well on closely adjacent objects.

Management and maintenance of existing buildings remains a major problem due to the lack of existing three-dimensional (3D) models and accurate as-built representation. In this paper the authors propose to use 3D scanner technology to capture the as-built and existing conditions of the buildings combined with building information modeling (BIM) as the underlying technology, which is a 3D semantic representation of all the life cycle phases of a building. This paper presents the results from creating as-built BIM models of existing buildings, using point cloud (a set of points in 3D space) and machine learning as an intermediate medium. Machine learning methodologies are used to speed up the computation of segmentation and classification of point clouds from a 3D virtual indoor environment using procedural modeling, which focused on two attributes, point density and the level of random errors. In this paper we will present findings on the evaluation of the performance of machine learning segmentation and classification algorithm based on the comparison of ten different point cloud data sets. Different sets of segmentation and classification models with comparison between models and within themselves were provided, which included the mean loss and accuracy between models with different point density.

Journal of Physics: Conference Series
PAPER • OPEN ACCESS
Preface
Published under licence by IOP Publishing Ltd
Journal of Physics: Conference Series, Volume 2101, 2021 The 2nd International Conference on Mechanical Engineering and Materials (ICMEM 2021) 19-20 November 2021, Beijing, China
Citation 2021 J. Phys.: Conf. Ser. 2101 011001
Abstract
The 2nd International Conference on Mechanical Engineering and Materials (ICMEM 2021) will be held in Beijing, China during 19-20 November 2021. ICMEM 2021, organized by the Hubei Zhongke Institute of Geology and Environment Technology, intends to invite renowned scientists, experts, scholars, and researchers worldwide for academic presentations. The ICMEM 2021 proceedings cover all aspects of mechanical engineering and material sciences, such as Dynamics, Vibration and Sound Engineering, Materials and Technology, Fluids Engineering, Solid Mechanics and Design, Micro/Nano Engineering and Technology, Production Engineering, Robotics and Control, Thermal and Power Engineering, Vehicle Technology, Machine Fault Diagnostics and Prognostics, Intelligent Transport Systems, Intelligent Monitoring Technology, Metallic Alloys, Tool Materials, Ceramics and Glasses, Composite Materials, Amorphous Materials, Nanomaterials, Biomaterials, Polymers, etc. 88 articles were selected from more than 190 submissions after peer review, and authors participated in ICMEM 2021 with oral presentations and posters, promoting communication among researchers and experts from institutes and universities, including Northwestern Polytechnical University, University of Science and Technology Beijing, Beijing Institute of Space Mechanics & Electricity, State Key Laboratory of Engine Reliability, Chinese Academy of Sciences, etc.

Decomposing a complex object into simple components is a fundamental problem in geometry processing. Existing methods for decomposing point clouds rely on local or global features of an object, which leads to over-segmentation or unnatural component boundaries. In this paper, we propose a novel method for decomposing the point cloud by using internal and external critical points. First, we propose a novel shrinking strategy to build the global skeleton topology, from which we can extract internal critical points for locating components. External critical points are selected from the ridge and valley points for component segmentation. Then, we apply the constraint of internal and external critical points to decompose an object into semantic components by skeleton-based piecewise labeling. Experimental results demonstrate that our method is effective in decomposing 3D point clouds and is robust to limited noise and incomplete data.

Background
Shape segmentation is commonly required in many engineering fields to separate a 3D shape into pieces for some specific applications. Although there are different methods proposed to segment the 3D shape, there is a lack of analyses of their efficiency and accuracy. It is a challenge to select an effective method to meet a particular requirement of the shape segmentation.
Objective
This paper reviews existing methods of the shape segmentation to summarize the methods and processes to identify their pros and cons.
Method
The process of shape segmentation is summarized in two steps: feature extraction and model separation.
Results
Shape features are identified from the available methods. Different methods of shape segmentation are evaluated. The challenges and trends of shape segmentation are discussed.
Conclusion
Clustering is the most widely used method for shape segmentation. Machine learning methods are the trend in 3D shape segmentation for the identification, analysis and reconstruction of large-scale models.

Sidewalks are a critical infrastructure to facilitate essential daily trips for pedestrian and wheelchair users. The dependence on the infrastructure and the increasing demand from these users press public transportation agencies for cost-effective sidewalk maintenance and better Americans with Disabilities Act (ADA) compliance. Unfortunately, most of the agencies still rely on outdated sidewalk mapping data or manual survey results for their sidewalk management. In this study, a network-level sidewalk inventory method is proposed by efficiently segmenting the mobile light detection and ranging (LiDAR) data using a customized deep neural network, i.e., PointNet++, and followed by integrating a stripe-based sidewalk extraction algorithm. By extracting the sidewalk locations from the mobile LiDAR point cloud, the corresponding geometry features, e.g., width, grade, cross slope, etc., can be extracted for the ADA compliance and the overall condition assessment. The experimental test conducted on the entire State Route 9, Massachusetts has shown promising performance in terms of the accuracy for the sidewalk extraction (i.e., point-level intersect over union (IoU) value of 0.946) and the efficiency for network analysis of the ADA compliance (i.e., approximately 6.5 min/mile). A case study conducted in Columbus District in Boston, Massachusetts, demonstrates that the proposed method can not only successfully support transportation agencies with an accurate and efficient means for network-level sidewalk inventory, but also support wheelchair users with accurate and comprehensive sidewalk inventory information for better navigation and route planning.

Background
Patient‐specific instrumentation (PSI) improves accuracy of surgical operations. PSI needs software for preoperative planning and instrument design. In this study, we explain the methodology of developing a software tool for PSI guide design and preoperative planning in reverse shoulder arthroplasty (RSA).
Methods
Approaches used to prepare input data, transform them into meaningful features, and use those features to create special guide geometries are explained by describing the different algorithms and libraries involved.
Results
The developed software was tested on three different patients' data. Preoperative planning was performed, guides were designed by the software, and the patients' bones were manufactured and tested for RSA. The method of building software for preoperative planning and designing patient-specific guides is shown to be properly functional.
Conclusions
This study demonstrates the processes involved in developing the PSI software and designing a patient-specific guide for RSA.

Among different 3D data representations, the point cloud stands out for its efficiency and flexibility. Hence, many researchers have recently been involved in point cloud analysis. Existing approaches for the point cloud segmentation task typically suffer from two limitations: (1) They usually treat different neighbor points as equals, which cannot characterize the correlation between the center point and its neighborhood well. Moreover, different parts of a point cloud may have different local structures, but these methods learn only a single representation space, which is neither sufficient nor stable. (2) They often capture hierarchical information by heuristic sampling approaches, which cannot reveal the spatial relationships of points well enough to learn global features. To overcome these limitations, we propose a novel hierarchical attentive pooling graph network (HAPGN) which utilizes the gated graph attention network (GGAN) and hierarchical graph pooling module (HiGPool) as building blocks for point cloud segmentation. Specifically, GGAN can highlight not only the importance of different neighbor points but also the importance of different representation spaces to enhance the local feature extraction. HiGPool is a novel pooling module that can capture the spatial layouts of points to learn hierarchical features adequately. Experimental results on the ShapeNet part dataset show that HAPGN can achieve superior performance over the state-of-the-art segmentation approaches. Furthermore, we also combine our proposed HiGPool with some recent approaches for point cloud classification and achieve better results on the ModelNet40 dataset.

This paper introduces a multi-view recurrent neural network (MV-RNN) approach for 3D mesh segmentation. Our architecture combines convolutional neural networks (CNNs) and a two-layer long short-term memory (LSTM) to yield coherent segmentation of 3D shapes. The image-based CNNs are useful for effectively generating the edge probability feature map, while the LSTM correlates these edge maps across different views and outputs a well-defined per-view edge image. Evaluations on the Princeton Segmentation Benchmark dataset show that our framework significantly outperforms other state-of-the-art methods.

This paper presents a novel segmentation algorithm for mechanical CAD models (represented by either mesh or point cloud) constructed from planes, cylinders, cones, spheres, tori and easily extendable to surfaces of revolution. Our proposed approach differs from existing techniques in the following aspects. First, by assuming that common mechanical models only have a limited number of dominant orientations that their primitives are either parallel or orthogonal to, we narrow down the search space for detecting the primitives to the automatically estimated major orientations of the input model. Second, we employ a dimension reduction method which transforms the problem of detecting 3D primitives into the classical 2D problems such as circle and line detection in images. Third, we generate an over-complete set of primitives and formulate the segmentation as a set cover optimization problem. We demonstrate our method's robustness to noise and show that it compares favorably with state-of-the-art solutions such as the RANSAC-based (Schnabel et al., 2007) and GlobFit (Li et al., 2011) approaches on many synthetic and real scanned examples.

This paper presents a systematic theory for the construction of morphological operators on graphs. Graph morphology extracts structural information from graphs using predefined test probes called structuring graphs. Structuring graphs have a simple structure and are relatively small compared to the graph that is to be transformed. A neighbourhood function on the set of vertices of a graph is constructed by relating individual vertices to each other whenever they belong to a local instantiation of the structuring graph. This function is used to construct dilations and erosions. The concept of structuring graph is also used to define openings and closings. The resulting morphological operators are invariant under symmetries of the graph. Graph morphology resembles classical morphology (which uses structuring elements to obtain translation-invariant operators) to a large extent. However, not all results from classical morphology have correlates in graph morphology because the morphological transformations of a graph are restricted by its structure.

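A minimal sketch of morphological dilation on a vertex-weighted graph helps make the idea concrete. This is the flat, structuring-element-free form rather than the paper's structuring-graph formulation, and the names are illustrative:

```python
def graph_dilate(values, adjacency, iterations=1):
    """Flat morphological dilation on a vertex-weighted graph.

    Each vertex takes the maximum value over itself and its neighbours;
    erosion is the dual (replace max with min). Generic sketch, not the
    structuring-graph operators of the paper.
    """
    for _ in range(iterations):
        values = [max([values[v]] + [values[u] for u in adjacency[v]])
                  for v in range(len(values))]
    return values

# A path graph 0-1-2-3-4 with a single marked vertex in the middle
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(graph_dilate([0, 0, 1, 0, 0], adj))                # → [0, 1, 1, 1, 0]
print(graph_dilate([0, 0, 1, 0, 0], adj, iterations=2))  # → [1, 1, 1, 1, 1]
```

Iterating the dilation spreads a marked set outward one graph hop per pass, which is the mechanism the surveyed segmentation algorithm exploits to grow submeshes.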

A tool for constructing a "good" 3D triangulation of a given set of vertices in 3D is developed and studied. The constructed triangulation is "optimal" in the sense that it locally minimizes a cost function which measures a certain discrete curvature over the resulting triangle mesh. The algorithm for obtaining the optimal triangulation is that of swapping edges sequentially, such that the cost function is reduced maximally by each swap. In this paper three easy-to-compute cost functions are derived using a simple model for defining discrete curvatures of triangle meshes. The results obtained by the different cost functions are compared. Operating on data sampled from simple 3D models, we compare the approximation error of the resulting optimal triangle meshes to the sampled model in various norms. The conclusion is that all three cost functions lead to similar results, and none of them can be said to be superior to the others. The triangle meshes generated by our algorithm, when serving as initial triangle meshes for the butterfly subdivision scheme, are found to improve significantly the limit butterfly surfaces compared to arbitrary initial triangulations of the given sets of vertices. Based upon this observation, we believe that any algorithm operating on triangle meshes, such as subdivision, finite element solution of PDE, or mesh simplification, can obtain better results if applied to a "good" triangle mesh with small discrete curvatures. Thus our algorithm can serve for modelling surfaces from sampled data as well as for initialization of other triangle mesh based algorithms.

Computer graphics applications routinely generate geometric models consisting of large numbers of triangles. We present an algorithm that significantly reduces the number of triangles required to model a physical or abstract object. The algorithm makes multiple passes over an existing triangle mesh, using local geometry and topology to remove vertices that pass a distance or angle criterion. The holes left by the vertex removal are patched using a local triangulation process. The decimation algorithm has been implemented in a general scientific visualization system as a general network filter. Examples from volume modeling and terrain modeling illustrate the results of the decimation algorithm.

In this paper, we describe an algorithm called Fast Marching Watersheds that segments a triangle mesh into visual parts. This computer vision algorithm leverages a human vision theory known as the minima rule. Our implementation computes the principal curvatures and principal directions at each vertex of a mesh, and then our hill-climbing watershed algorithm identifies regions bounded by contours of negative curvature minima. These regions fit the definition of visual parts according to the minima rule. We present evaluation analysis and experimental results for the proposed algorithm.
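The watershed idea behind this kind of algorithm (follow the steepest slope to a local extremum and group vertices by the extremum they reach) can be sketched as follows, with illustrative names and a toy height function standing in for the curvature field:

```python
def hill_climb_watershed(height, neighbors):
    """Watershed by slope following on a graph: every vertex follows its
    steepest-descent neighbour until it reaches a local minimum; vertices
    draining to the same minimum form one region. Illustrative sketch,
    not the Fast Marching Watersheds implementation."""
    def sink(i):
        while True:
            j = min(neighbors[i] + [i], key=lambda k: height[k])
            if j == i:               # local minimum reached
                return i
            i = j
    labels_of_min = {}
    out = []
    for i in range(len(height)):
        m = sink(i)
        out.append(labels_of_min.setdefault(m, len(labels_of_min)))
    return out

# A path graph with a ridge in the middle: two basins, hence two regions
h = [0, 1, 2, 1, 0]
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(hill_climb_watershed(h, nbrs))  # → [0, 0, 0, 1, 1]
```

In the mesh setting the "height" would be a curvature measure per vertex, so region boundaries fall along the curvature extrema, mirroring the minima-rule contours described in the abstract.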

This paper discusses an efficient and effective framework to decompose polygon meshes into components. This is useful in various interactive graphics applications, such as mesh editing, establishing correspondence between objects for morphing, and computation of bounding volume hierarchies for collision detection and ray tracing. In this paper, we formalize the notion of a component as a sub-volume of an object with homogeneous geometric and topological features. Next, we describe the proposed framework, which adapts the ideas of edge contraction and space sweeping to decompose an object automatically. Finally, we demonstrate an application of this framework to improve bounding volume hierarchies constructed by state-of-the-art collision detection systems such as RAPID and QuickCD. Keywords: Geometric Modeling, Shape, Components, Collision Detection

Many real world polygonal surfaces contain topological singularities that represent a challenge for processes such as simplification, compression, smoothing, etc. We present an algorithm for removing such singularities, thus converting non manifold sets of polygons to manifold polygonal surfaces (orientable if necessary). We identify singular vertices and edges, multiply singular vertices, and cut through singular edges. In an optional stitching phase, we join surface boundary edges that were cut, or whose endpoints are sufficiently close, while guaranteeing that the surface is a manifold. We study two different stitching strategies called "edge pinching" and "edge snapping"; when snapping, special care is required to avoid re-creating singularities. The algorithm manipulates the polygon vertex indices (surface topology) and essentially ignores vertex coordinates (surface geometry). Except for the optional stitching, the algorithm has a linear complexity in the number of vertices edges and faces, and require no floating point operation.

This paper describes a computational model for deriving a decomposition of objects from laser rangefinder data. The process aims to produce a set of parts defined by compactness and smoothness of surface connectivity. Relying on a general decomposition rule, any kind of object made up of free-form surfaces can be partitioned. A robust method to partition the object based on Markov random fields (MRF), which allows prior knowledge to be incorporated, is presented. Shape index and curvedness descriptors, along with discontinuity and concavity distributions, are introduced to classify region labels correctly. In addition, a novel way to classify the shape of a surface is proposed, resulting in a better distinction of concave, convex, and saddle shapes. To achieve a reliable classification, a multiscale method provides a stable estimation of the shape index.

Different application fields have shown increasing interest in shape description oriented to recognition and similarity issues. Beyond the application aims, the capability of handling details by separating them from building elements, invariance to a set of geometric transformations, and uniqueness and stability under noise represent fundamental properties of any proposed model. This paper defines an affine-invariant skeletal representation; starting from global features of a 3D shape, located by curvature properties, a Reeb graph is defined using the topological distance as a quotient function. If the mesh has uniformly spaced vertices, this Reeb graph can also be rendered as a geometric skeleton defined by the barycenters of pseudo-geodesic circles sequentially expanded from all the feature points.

A novel approach to 3D part segmentation is presented. Beginning with range data of a 3D object, we simulate the charge density distribution over an object's surface, which has been tessellated by a triangular mesh. We then locate the object part boundary at deep surface concavities by tracing local charge density minima. Finally, we decompose the object into parts at the part boundary points.

We advocate the use of point sets to represent shapes. We provide a definition of a smooth manifold surface from a set of points close to the original surface. The definition is based on local maps from differential geometry, which are approximated by the method of moving least squares (MLS). The computation of points on the surface is local, which results in an out-of-core technique that can handle any point set. We show that the approximation error is bounded and present tools to increase or decrease the density of the points, thus allowing an adjustment of the spacing among the points to control the error. To display the point set surface, we introduce a novel point rendering technique. The idea is to evaluate the local maps according to the image resolution. This results in high quality shading effects and smooth silhouettes at interactive frame rates.

Many real-world polygonal surfaces contain topological singularities that represent a challenge for processes such as simplification, compression, and smoothing. We present an algorithm that removes singularities from nonmanifold sets of polygons to create manifold (optionally oriented) polygonal surfaces. We identify singular vertices and edges, multiply singular vertices, and cut through singular edges. In an optional stitching operation, we maintain the surface as a manifold while joining boundary edges. We present two different edge stitching strategies, called pinching and snapping. Our algorithm manipulates the surface topology and ignores physical coordinates. Except for the optional stitching, the algorithm has a linear complexity and requires no floating point operations. In addition to introducing new algorithms, we expose the complexity (and pitfalls) associated with stitching. Finally, several real-world examples are studied.

A novel approach to 3D part segmentation is presented. It is a well-known physical fact that electrical charge on the surface of a conductor tends to accumulate at a sharp convexity and vanish at a sharp concavity. Thus, object part boundaries, which are usually denoted by a sharp surface concavity, can be detected by simulating the electrical charge density over the object surface and locating surface points that exhibit local charge density minima. Beginning with single- or multiview range data of a 3D object, we simulate the charge density distribution over an object's surface, which has been tessellated by a triangular mesh. We detect the deep surface concavities by tracing local charge density minima and then decompose the object into parts at these points. The charge density computation does not require an assumption of surface smoothness and uses weighted global data to produce robust local surface features for part segmentation.

Underlying recognition is an organization of objects and their parts into classes and hierarchies. A representation of parts for recognition requires that they be invariant to rigid transformations, robust in the presence of occlusions, stable with changes in viewing geometry, and arranged in a hierarchy. These constraints are captured in a general framework using the notions of a PART-LINE and a PARTITIONING SCHEME. A proposed general principle of “form from function” motivates a particular partitioning scheme involving two types of parts, neck-based and limb-based. Neck-based parts arise from narrowings in shape, or the local minima in distance between two points on the boundary, while limb-based parts arise from a pair of negative curvature minima which have “co-circular” tangents. In this paper, we present computational support for the limb-based and neck-based parts by showing that they are invariant, robust, stable, and yield a hierarchy of parts. Examples illustrate that the resulting decompositions are robust in the presence of occlusion and clutter for a range of man-made and natural objects, and lead to natural and intuitive parts which can be used for recognition.

In this paper the formula for the optimal histogram bin width is derived which asymptotically minimizes the integrated mean squared error. Monte Carlo methods are used to verify the usefulness of this formula for small samples. A data-based procedure for choosing the bin width parameter is proposed, which assumes a Gaussian reference standard and requires only the sample size and an estimate of the standard deviation. The sensitivity of the procedure is investigated using several probability models which violate the Gaussian assumption.
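The data-based procedure described above reduces to the Gaussian reference rule h = 3.49 · s · n^(−1/3), where s is the sample standard deviation and n the sample size. A minimal sketch:

```python
import math

def optimal_bin_width(data):
    """Histogram bin width from the Gaussian reference rule:
    h = 3.49 * s * n**(-1/3), with s the sample standard deviation."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return 3.49 * s * n ** (-1.0 / 3.0)

h = optimal_bin_width([1.0, 2.0, 3.0, 4.0, 5.0])  # about 3.23 for this tiny sample
```

The rule only needs the sample size and a standard deviation estimate, which is what makes it practical as a default choice; it can over-smooth strongly non-Gaussian data, as the abstract's sensitivity study notes.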

Stack filters are widely used nonlinear filters based on threshold decomposition and positive Boolean functions. They have been shown to form a very large class of filters which includes rank-order operations as well as standard morphological operations. The stack filter representation of an order statistic filter provides an efficient tool for the theoretical analysis of the filter.
Soft morphological filters form a large subclass of stack filters. They were introduced to improve the behavior of standard morphological filters in noisy conditions. In this paper, different properties of soft morphological filters are analysed and illustrated. Their connection to stack filters is established, and that connection is used in the statistical analysis of soft morphological filters. Soft morphological filters are less sensitive to additive noise than standard morphological filters. The deterministic properties of soft morphological filters are also analysed, and it is shown that soft morphological filters form a class of filters with many desirable properties. For example, they preserve image details well.

Gray-scale soft mathematical morphology is the natural extension of binary soft mathematical morphology, which has been shown to be less sensitive to additive noise and to small variations. However, gray-scale soft morphological operations are difficult to implement in real time. In this Note, a superposition property called threshold decomposition and another property called stacking are applied successfully to gray-scale soft morphological operations. These properties allow gray-scale signals and structuring elements to be decomposed into their binary sets and operated on using only logic gates in new VLSI architectures; the binary results are then combined to produce the same output as the time-consuming gray-scale processing.
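The threshold decomposition and stacking properties can be sketched for the simplest case, a flat (standard, not soft) gray-scale dilation of a 1-D signal: slice the signal into binary layers, dilate each layer with OR (a pure logic-gate operation), and stack the layers back by summation. All names here are illustrative.

```python
def dilate_gray(signal, radius):
    """Flat gray-scale dilation: maximum over a window of half-width `radius`."""
    n = len(signal)
    return [max(signal[max(0, i - radius):min(n, i + radius + 1)]) for i in range(n)]

def dilate_by_threshold_decomposition(signal, radius):
    """The same operation via threshold decomposition: binary slices are
    dilated with OR, then stacked back by summation (stacking property)."""
    n = len(signal)
    out = [0] * n
    for t in range(1, max(signal) + 1):
        layer = [1 if v >= t else 0 for v in signal]  # binary slice at level t
        dilated = [max(layer[max(0, i - radius):min(n, i + radius + 1)])
                   for i in range(n)]
        out = [a + b for a, b in zip(out, dilated)]   # stack the slices
    return out

sig = [0, 2, 1, 3, 0, 1]
assert dilate_gray(sig, 1) == dilate_by_threshold_decomposition(sig, 1)
```

The equality holds because flat dilation commutes with thresholding; the soft-morphology case in the Note requires the weighted generalization of this decomposition.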

In several fields of image processing (e.g. geography, histology, 2-D electrophoretic gels), neighborhood relationships between objects may be modelled by graphs. Now, a graph being a lattice, it can be processed by Mathematical Morphology: this theory provides a great number of powerful tools for studying these graphs. The present paper deals first with some theoretical aspects of Mathematical Morphology on lattices and graphs. It then presents various graphs that can be defined on a given set S of objects, depending on the intensity of the desirable neighborhood relationships (e.g. Delaunay triangulation, Gabriel graph, relative neighborhood graph, etc.). Algorithms for constructing these graphs are also explained. Depending on S, computational geometry techniques or new digital procedures, based on Euclidean distance and zones of influence, will be preferred. In a third section, the main operators of Mathematical Morphology are defined on graphs (erosions and dilations, morphological filters, distance function, skeletons, reconstruction, labelling, geodesic operators, etc.). Their properties and interest are mentioned. Fast algorithms for computing the transforms are also introduced.
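The graph erosions and dilations mentioned above can be sketched for vertex-valued graphs; this is a minimal illustration assuming a flat structuring element given by each vertex's closed neighbourhood (names are hypothetical).

```python
def graph_dilate(values, adjacency):
    """Morphological dilation on a graph: each vertex takes the maximum
    value over itself and its neighbours (flat structuring element)."""
    return {v: max([values[v]] + [values[u] for u in adjacency[v]])
            for v in values}

def graph_erode(values, adjacency):
    """Dual erosion: minimum over the closed neighbourhood."""
    return {v: min([values[v]] + [values[u] for u in adjacency[v]])
            for v in values}

# A path graph 0-1-2-3 with a single peak at vertex 1:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
vals = {0: 0, 1: 5, 2: 0, 3: 0}
d = graph_dilate(vals, adj)  # the peak spreads to its neighbours
```

Composing these two primitives yields graph openings, closings, and the morphological filters the paper builds on; iterated dilation is also the core operation in the graph-morphology segmentation described at the top of this page.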

The classical surface curvature measures, such as the Gaussian and the mean curvature at a point of a surface, are not very indicative of local shape. The two principal curvatures (taken as a pair) are more informative, but one would prefer a single shape indicator rather than a pair of numbers. Moreover, the shape indicator should preferably be independent of the size i.e. the amount of curvature, as distinct from the type of curvature. We propose two novel measures of local shape, the ‘curvedness’ and the ‘shape index’. The curvedness is a positive number that specifies the amount of curvature, whereas the shape index is a number in the range [−1, +1] and is scale invariant. The shape index captures the intuitive notion of ‘local shape’ particularly well. The shape index can be mapped upon an intuitively natural colour scale. Two complementary shapes (like stamp and mould) map to complementary hues. The symmetrical saddle (which is very special because it is self-complementary) maps to white. When a surface is tinted according to this colour scheme, this induces an immediate perceptual segmentation of convex, concave, and hyperbolic areas. We propose it as a useful tool in graphics representation of 3D shape.
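Assuming principal curvatures k1 ≥ k2, the two measures can be sketched as follows. Note that sign conventions for the shape index differ between papers; this is one common form, and the umbilic fallback is an illustrative choice.

```python
import math

def shape_index(k1, k2):
    """Shape index S in [-1, 1]: the scale-invariant *type* of curvature.
    Convention: k1 >= k2 are the principal curvatures; sign conventions vary."""
    if k1 == k2:  # umbilic point: S is formally undefined; return the cap/cup limit
        return 1.0 if k1 > 0 else -1.0
    return (2.0 / math.pi) * math.atan((k2 + k1) / (k2 - k1))

def curvedness(k1, k2):
    """Curvedness C >= 0: the *amount* of curvature, independent of its type."""
    return math.sqrt((k1 * k1 + k2 * k2) / 2.0)

# The symmetric saddle (k1 = -k2) is self-complementary and maps to S = 0;
# a unit sphere (k1 = k2 = 1) has curvedness 1.
```

Scaling a surface uniformly scales both principal curvatures, which leaves S unchanged and scales C, matching the decomposition of local shape into type and size described above.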

A complete, fast and practical isolated object recognition system has been developed which is very robust with respect to scale, position and orientation changes of the objects as well as noise and local deformations of shape (due to perspective projection, ...

Reeb graphs have been shown to be effective for topology matching of 3D objects. Their effectiveness breaks down, however, when the individual models become very geometrically and topologically detailed - as is the case for complex machined parts. The result is that Reeb graph techniques, as developed for matching general shape and computer graphics models, produce poor results when directly applied to create engineering databases. This paper presents a framework for shape matching through scale-space decomposition of 3D models. The algorithm is based on recent developments in efficient hierarchical decomposition of metric data using its spectral properties. Through spectral decomposition, we reduce the problem of matching to that of computing a mapping and distance measure between vertex-labeled rooted trees. We use a dynamic programming scheme to compute distances between trees corresponding to solid models. Empirical evaluation of the algorithm on an extensive set of 3D matching trials demonstrates both robustness and efficiency of the overall approach.

Triangle meshes are a popular representation of surfaces in computer graphics. Our aim is to detect feature on such surfaces. Feature regions distinguish themselves by high curvature. We are using discrete curvature analysis on triangle meshes to obtain curvature values in every vertex of a mesh. These values are then thresholded resulting in a so called binary feature vector. By adapting morphological operators to triangle meshes, noise and artifacts can be removed from the feature. We introduce an operator that determines the skeleton of the feature region. This skeleton can then be converted into a graph representing the desired feature. Therefore a description of the surface's geometrical characteristics is constructed.
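The threshold-then-clean step described above can be sketched on a mesh's vertex adjacency graph. This is a minimal illustration of a binary opening (erosion then dilation) removing noise from a binary feature vector; all names are hypothetical.

```python
def neighborhood_op(feature, adjacency, op):
    """One morphological step on a mesh's vertex graph: apply `op`
    (any/all) over each vertex's closed neighbourhood."""
    return {v: int(op([feature[v]] + [feature[u] for u in adjacency[v]]))
            for v in feature}

def open_feature(feature, adjacency):
    """Opening = erosion then dilation: removes isolated noise vertices
    from a binary feature vector while keeping larger feature regions."""
    eroded = neighborhood_op(feature, adjacency, all)
    return neighborhood_op(eroded, adjacency, any)

# A path graph with one isolated 'feature' vertex (noise) and a 3-vertex run.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 6] for i in range(7)}
feat = {0: 0, 1: 1, 2: 0, 3: 1, 4: 1, 5: 1, 6: 0}
cleaned = open_feature(feat, adj)  # vertex 1 is removed; vertices 3-5 survive
```

In the paper's pipeline, `feature` would come from thresholding per-vertex curvature values, and the cleaned regions would then be skeletonized into a feature graph.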

Mathematical morphology is well suited to capturing geometric information. Hence, morphology-based approaches have been popular for object shape representation. The two primary morphology-based approaches-the morphological skeleton and the morphological shape decomposition (MSD)-each represent an object as a collection of disjoint sets. A practical shape representation scheme, though, should give a representation that is computationally efficient to use. Unfortunately, little work has been done for the morphological skeleton and the MSD to address efficiency. We propose a flexible search-based shape representation scheme that typically gives more efficient representations than the morphological skeleton and MSD. Our method decomposes an object into a number of simple components based on homothetics of a set of structuring elements. To form the representation, the components are combined using set union and set difference operations. We use three constituent component types and a thorough cost-based search strategy to find efficient representations. We also consider allowing object representation error, which may yield even more efficient representations.

A shape representation scheme that is typically more computationally efficient than the morphological skeleton and MSD (morphological shape decomposition) is proposed. This method greatly augments the MSD. The authors introduce new constituent component types and incorporate a cost-based search strategy for finding an efficient representation. If representation error is permissible, even more efficient representations are possible. However, search time is an issue for the method.

This paper describes a method for partitioning 3D surface meshes into useful segments. The proposed method generalizes morphological watersheds, an image segmentation technique, to 3D surfaces. This surface segmentation uses the total curvature of the surface as an indication of region boundaries. The surface is segmented into patches, where each patch has a relatively consistent curvature throughout and is bounded by areas of higher, or drastically different, curvature. This algorithm has applications for a variety of important problems in visualization and geometric modeling, including 3D feature extraction, mesh reduction, texture mapping of 3D surfaces, and computer-aided design.
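A descent-based variant of the watershed idea can be sketched on a vertex graph; this is an illustrative simplification (each vertex drains strictly downhill to a local curvature minimum), not the paper's algorithm, and all names are hypothetical.

```python
def watershed_labels(height, adjacency):
    """Descent-based watershed sketch: each vertex walks strictly downhill
    to its lowest neighbour until no neighbour is lower; vertices draining
    to the same local minimum form one segment."""
    def sink(v):
        while True:
            best = min(adjacency[v], key=lambda u: height[u])
            if height[best] >= height[v]:  # v is a local minimum (or plateau)
                return v
            v = best
    return {v: sink(v) for v in height}

# Two 'valleys' separated by a curvature ridge at vertex 3:
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 6] for i in range(7)}
h = {0: 0, 1: 1, 2: 2, 3: 5, 4: 2, 5: 1, 6: 0}
labels = watershed_labels(h, adj)  # two segments, split at the ridge
```

With per-vertex total curvature as the height function, segment boundaries fall along the high-curvature ridges, which is the behaviour the abstract describes; a full implementation would also merge shallow catchment basins to avoid over-segmentation.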

The medial axis transform (MAT) is a representation of an object which has been shown to be useful in design, interrogation, animation, finite element mesh generation, performance analysis, manufacturing simulation, path planning, and tolerance specification. In this paper, an algorithm for determining the MAT is developed for general 3D polyhedral solids of arbitrary genus without cavities, with nonconvex vertices and edges. The algorithm is based on a classification scheme which relates different pieces of the medial axis (MA) to one another, even in the presence of degenerate MA points. Vertices of the MA are connected to one another by tracing along adjacent edges, and finally the faces of the axis are found by traversing closed loops of vertices and edges. Representation of the MA and its associated radius function is addressed, and pseudocode for the algorithm is given along with recommended optimizations. A connectivity theorem is proven to show the completeness of the algorithm. Complexity estimates and a stability analysis for the algorithm are presented. Finally, examples illustrate the computational properties of the algorithm for convex and nonconvex 3D polyhedral solids with polyhedral holes.

The aim of this work is to exploit the potential strength of the skeleton of discrete objects in computer vision and pattern recognition, where features of objects are needed for classification. Algorithms are introduced for detecting skeleton characteristic points (end points, junction points, and curve points) and creating an attributed graph that can then be used as an input to graph matching algorithms, based on a morphological approach.

The Reeb graph represents the topological skeleton of a 3-D object and shows between which contours the surface patches should be generated. To construct the graph automatically, a weight function is defined for a pair of contours, with each contour lying on an adjacent cross section. First, the algorithm automatically generates the major parts of the edges of the Reeb graph where the number of contours does not change. Then the rest of the graph is determined by using the weight function and prior knowledge of the number of holes the object has. Specifically, the graph is completed by adding edges that do not contradict the known number of holes, in descending order of the weight.

We address the problem of representing and recognizing 3D free-form objects when (1) the object viewpoint is arbitrary, (2) the objects may vary in shape and complexity, and (3) no restrictive assumptions are made about the types of surfaces on the object. We assume that a range image of a scene is available, containing a view of a rigid 3D object without occlusion. We propose a new and general surface representation scheme for recognizing objects with free-form (sculpted) surfaces. In this scheme, an object is described concisely in terms of maximal surface patches of constant shape index. The maximal patches that represent the object are mapped onto the unit sphere via their orientations, and aggregated via shape spectral functions. Properties such as surface area, curvedness, and connectivity, which are required to capture local and global information, are also built into the representation. The scheme yields a meaningful and rich description useful for object recognition. A novel concept, the shape spectrum of an object, is also introduced within the framework of COSMOS for object view grouping and matching. We demonstrate the generality and the effectiveness of our scheme using real range images of complex objects.

Shape description is a very important issue in pictorial pattern analysis and recognition. Therefore, many theories exist that attempt to explain different aspects of the problem. The technique presented here decomposes a binary shape into a union of simple binary shapes. The decomposition is shown to be unique and invariant to translation, rotation, and scaling. The techniques used in the decomposition are based on mathematical morphology. The shape description produced can be used in object recognition and in binary image coding.

Cutting up a complex object into simpler sub-objects is a fundamental problem in various disciplines. In image processing, images are segmented while in computational geometry, solid polyhedra are decomposed. In recent years, in computer graphics, polygonal meshes are decomposed into sub-meshes. In this paper we propose a novel hierarchical mesh decomposition algorithm. Our algorithm computes a decomposition into the meaningful components of a given mesh, which generally refers to segmentation at regions of deep concavities. The algorithm also avoids over-segmentation and jaggy boundaries between the components. Finally, we demonstrate the utility of the algorithm in control-skeleton extraction.


Laws of Organization in Perceptual Forms

- M Wertheimer

M. Wertheimer, "Laws of Organization in Perceptual Forms," A Sourcebook of Gestalt Psychology, pp. 71-88, 1950.

Decimation of Triangle Meshes

- W J Schroeder
- J A Zarge
- W E Lorensen

W.J. Schroeder, J.A. Zarge, and W.E. Lorensen, "Decimation of Triangle Meshes," Proc. ACM SIGGRAPH '92, pp. 65-70, 1992.