Conference Paper
Sofia Catalucci1, Nicola Senin1,2, Samanta Piano1, Richard Leach1
1Manufacturing Metrology Team, Faculty of Engineering
University of Nottingham
Nottingham, United Kingdom
2Department of Engineering
University of Perugia
Perugia, Italy
This work addresses the development of
intelligent and adaptive optical form
measurement systems for quality inspection of
additively manufactured complex parts. The
ultimate objective is to obtain smart optical
measurement systems capable of automatically
reconfiguring themselves while inspecting new
geometries, and capable of assessing whether
completed measurements are sufficient or
whether further measurements are required.
Intelligent behaviour is achieved through
automated self-assessment of measurement
performance, while the measurement itself is
being executed [1]. The decisional process is
supported by multiple sources of information [2],
namely: knowledge of part specifications (CAD
model, dimensional and geometric tolerances,
materials); knowledge of the manufacturing
process and the material, leading to predictability
of likely types of form error; knowledge of the
measurement instrument itself (metrological
performance and behaviour), and how it is
expected to interact with any specific material and
part geometry. The optical measurement
technologies covered by the project produce
point clouds: the work presented in this paper
focuses on algorithmic processing of point
clouds, and deals with the following specific
challenges: a) automated point cloud localisation
within the part geometry, i.e. identifying what
surfaces have been captured by any given point
cloud, acquired from a part of unknown position
and orientation; b) automated assessment of
coverage and sampling density for the exposed
surfaces, including recognition of critical regions
(i.e. regions poorly represented by the point
cloud), in order to support automated planning for
further measurements.
Experimental set-up
The experimental set-up is based on a
combination of a commercial fringe projection
measurement system (GOM Atos Core 300,
blue-light technology), shown in Figure 1, and the
commercial point cloud processing software
Polyworks Inspector by Innovmetric. Automation
is achieved by interfacing Polyworks with
MATLAB.
FIGURE 1. The optical measurement system
while measuring one of the test parts.
Test cases
The selected test measurement parts are shown
in Figure 2. Sample A (Figure 2a) was fabricated
by selective laser sintering (SLS) using Nylon 12,
with a rectangular enclosing envelope of (50
× 50 × 28) mm; sample B (Figure 2b) was
fabricated by laser powder bed fusion (LPBF)
using stainless steel 316L, with dimensions of
(125 × 45 × 8) mm.
FIGURE 2. Test parts; a) Nylon 12 pyramid
sample (50 × 50 × 28) mm fabricated by SLS; b)
stainless steel 316L automotive sample (125 × 45
× 8) mm fabricated by LPBF.
The nominal geometries of the test parts are
available as triangle meshes. Example results of
single measurements on the test parts with
unknown pose are shown in Figure 3a for sample
A and Figure 3b for sample B.
FIGURE 3. Example measurements: a) sample
A; b) sample B.
As sample A has four nominally identical sides,
pose estimation only pertains to the accurate
identification of the angular orientation of the
visible corner in the point cloud.
The first data processing step consists of
detecting the pose by identification and best-
matching of landmark features present on both
the measured point cloud and the nominal
reference geometry (triangle mesh). In the
second step, once the point cloud has been
aligned to the mesh, the degree of coverage can
be assessed by identifying the surfaces that have
not been reached by the measurement
instrument. For the covered surfaces, the density
and spatial distribution of the measured points
can be computed by inspecting the positions of
the points falling within each triangle of the mesh.
Alignment, also referred to as registration,
consists of a coarse phase and a fine phase.
Coarse registration
Coarse registration is based on the identification
and matching of common landmarks both in the
measured point cloud and in the triangle mesh.
Landmarks can be identified through computation
of local feature descriptors [3-5]. In this work,
local curvatures are used.
Surface normal vectors are identified both on the
point cloud and in the triangle mesh, by using
principal component analysis [6] on local subsets
of neighbouring points selected via the k-nearest
neighbour algorithm [7]. The principal curvatures
𝑘1 and 𝑘2 are then computed [8]. From the
principal curvatures, the Gaussian curvature K
and mean curvature H are computed as follows:
𝐾 = 𝑘1 ∙ 𝑘2,
𝐻 = (𝑘1 + 𝑘2)/2.
Example results for curvature are shown in
Figures 4 to 7.
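The normal and curvature computation described above can be sketched in Python. This is a minimal numpy illustration, not the Polyworks/MATLAB pipeline used in this work; the brute-force k-NN search and function names are ours:

```python
import numpy as np

def knn(points, idx, k):
    # indices of the k nearest neighbours of points[idx] (self included)
    d = np.linalg.norm(points - points[idx], axis=1)
    return np.argsort(d)[:k]

def pca_normal(points, idx, k=12):
    # fit a local plane to the k-NN patch via PCA: the surface normal is
    # the covariance eigenvector with the smallest eigenvalue
    nbrs = points[knn(points, idx, k)]
    eigval, eigvec = np.linalg.eigh(np.cov(nbrs.T))
    return eigvec[:, 0]  # eigh returns eigenvalues in ascending order

def gaussian_mean_curvature(k1, k2):
    # K = k1 * k2,  H = (k1 + k2) / 2
    return k1 * k2, 0.5 * (k1 + k2)
```

For points sampled on a plane the estimated normal is the plane normal; for a sphere of radius r, the principal curvatures k1 = k2 = 1/r give K = 1/r² and H = 1/r.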
The next step involves the identification of
clusters of points with similar curvature values: a
first k-means clustering process [9] is used to
identify k classes of curvature values (k = 5). The
highest-curvature class is then isolated, and the
resulting points are subjected to a second
clustering process, this time aimed at isolating
spatially distant subsets of points with high
curvature values. This second clustering is,
therefore, hierarchical and based on Euclidean
distances between points (Figures 8 to 11).
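The two-stage clustering can be illustrated as follows. This is a numpy-only sketch under simplifying assumptions: Lloyd's algorithm on scalar curvature values stands in for the k-means step, and a greedy single-linkage grouping with a distance cutoff stands in for the hierarchical step:

```python
import numpy as np

def kmeans_1d(values, k=5, iters=50, seed=0):
    # Lloyd's algorithm on scalar curvature values
    rng = np.random.default_rng(seed)
    centres = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = values[labels == j].mean()
    return labels, centres

def euclidean_clusters(points, cutoff):
    # greedy single-linkage grouping: points closer than `cutoff`
    # (directly or through a chain) end up in the same cluster
    n = len(points)
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack = [i]
        labels[i] = current
        while stack:
            p = stack.pop()
            d = np.linalg.norm(points - points[p], axis=1)
            for q in np.where((d < cutoff) & (labels < 0))[0]:
                labels[q] = current
                stack.append(q)
        current += 1
    return labels
```

The highest-curvature class is the one whose centre is largest; its points are then fed to `euclidean_clusters`, and per-cluster centroids become the landmarks.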
FIGURE 4. Gaussian curvature K estimation on
extracted vertices of the triangle mesh (sample B).
FIGURE 5. Mean curvature H estimation on
extracted vertices of the triangle mesh (sample B).
FIGURE 6. Gaussian curvature K estimation on
point cloud dataset (sample B).
FIGURE 7. Mean curvature H estimation on point
cloud dataset (sample B).
FIGURE 8. k-means clustering on K curvature.
Cluster 2 refers to the extracted vertices of the
triangle mesh with the highest curvature values
(sample B).
FIGURE 9. Hierarchical clustering and centroids
computation of clustered extracted vertices of the
triangle mesh (sample B). The points taken into
account are the ones with the highest curvature
values.
FIGURE 10. k-means clustering on K curvature.
Cluster 2 refers to the points with the highest
curvature values (sample B).
FIGURE 11. Hierarchical clustering and centroids
computation of clustered point cloud (sample B).
The points taken into account are the ones with
the highest curvature values.
The identified common landmarks in both
datasets, characterised by high curvature values,
are then best-matched using random sample
consensus (RANSAC) [10,11]: at each iteration,
candidate matches are considered good if they
produce a spatial alignment, computed with the
Procrustes algorithm [12], that minimises the sum
of squared distances between matched points.
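A sketch of the matching step follows. For the handful of landmark centroids involved, an exhaustive search over correspondences stands in for random sampling, and `rigid_fit` is a standard Kabsch/Procrustes least-squares solution; this is an illustration, not the authors' exact implementation:

```python
import numpy as np
from itertools import permutations

def rigid_fit(A, B):
    # least-squares rigid transform (Kabsch): R @ A[i] + t ~= B[i]
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def match_landmarks(src, dst):
    # keep the correspondence whose rigid fit minimises the SSD residual
    best = (np.inf, None)
    for perm in permutations(range(len(dst))):
        R, t = rigid_fit(src, dst[list(perm)])
        res = np.sum((src @ R.T + t - dst[list(perm)]) ** 2)
        if res < best[0]:
            best = (res, (R, t, perm))
    return best[1]
```

With more landmarks, sampling random minimal subsets of three correspondences (true RANSAC) replaces the exhaustive loop.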
Fine registration
Fine registration is based on a best-fit algorithm
[13], which iteratively minimises the distances
from the measured dataset to the reference
entity, updating a rigid transformation (translation
and rotation) until the change in the squared error
falls below a convergence threshold.
The registration error function is defined as the
sum of squared Euclidean distances between
each point in the cloud and its closest neighbour
located on the triangular facets [13].
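The iteration can be sketched as point-to-point ICP against a reference point set. Note that the paper registers against triangular facets; a closest-point search over a sampled reference cloud is used here for brevity, and all names are illustrative:

```python
import numpy as np

def rigid_fit(A, B):
    # least-squares rigid transform (Kabsch): R @ A[i] + t ~= B[i]
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def icp(src, ref, iters=30, tol=1e-10):
    # pair every source point with its nearest reference point, refit
    # the rigid transform, repeat until the squared error stops changing
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        d2 = ((cur[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
        nn = ref[np.argmin(d2, axis=1)]   # closest reference points
        err = d2.min(axis=1).sum()        # registration error function
        if prev_err - err < tol:
            break
        prev_err = err
        R, t = rigid_fit(src, nn)
        cur = src @ R.T + t
    return cur, err
```

Because the coarse registration supplies a good initial pose, the nearest-neighbour correspondences are mostly correct from the first iteration, which is what makes this local refinement converge.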
After the fine registration process is completed,
each triangular facet belonging to the original
mesh will have a certain number of measured
points associated with it. Coverage expresses
how comprehensively each triangle is
represented by the associated measured points.
To assess coverage, the number of points falling
within each triangle is considered relative to the
area of the triangle, yielding a measure of spatial
sampling density, i.e. the number of points per
unit area. Sampling density is
computed on all the triangles (Figure 12). Then, a
percentage of the maximum density is set as a
threshold to discriminate between adequately
and inadequately covered triangles (simply
referred to as "uncovered"). Finally, a coverage
ratio can be defined as the percentage of
triangles with adequate coverage over the total
number of triangles in the mesh. Additionally, the
ratio between the total area occupied by triangles
classified as covered, and the total area of all the
triangles in the mesh, can be computed, and is
referred to as "covered area ratio".
Example results of coverage computation are
shown in Figures 13 to 14, where the threshold
has been set to 75% of the maximum sampling
density per triangle. The areal coverage is
estimated both as the number of triangular facets
associated with measured points over the total
number of triangles, and as the sum of the
covered area over the total area of the object
(Table 1).
FIGURE 12. Triangle facets; colouring
proportional to sampling density (sample B).
FIGURE 13. Covered and uncovered triangles for
sample A (threshold on sampling density at 75%).
FIGURE 14. Covered and uncovered triangles for
sample B (threshold on sampling density at 75%).
TABLE 1. Coverage ratio results: number of
triangles in the mesh, coverage ratio (%) and
covered area ratio (%).
Conclusions
In this paper, preliminary results from the early-
stage development of an intelligent system for
the measurement of complex shapes have been
presented.
Methods and algorithms for the automatic
assessment of part pose and measurement
coverage have been introduced and discussed
with the support of two test cases. The prototype
implementation is realised using a combination of
commercial measurement hardware and
software, and custom software modules
developed in-house.
Future work will address: 1) the estimation of
uncertainty associated with alignment and
assessment of coverage. Alignment in particular
may be affected by problems of geometric
stability (e.g. see [14] for ICP); 2) the
differentiation of part surfaces depending on
functional relevance, so that assessment of
coverage quality can be weighted; 3) the
implementation of feedback mechanisms based
on the results of pose and coverage estimation,
to automate planning for further measurements.
Acknowledgements
The authors would like to acknowledge Patrick
Bointon of the Manufacturing Metrology Team,
Leonidas Gargalis and Joe White of the Centre
for Additive Manufacturing, University of
Nottingham, for their assistance in designing and
producing the test cases. We also acknowledge
funding from EPSRC project EP/M008983/1.
References
[1] Stavroulakis P, Chen S, Derlome C, Bointon
P, Tzimiropoulos G, Leach R K. Rapid
calibration tracking of extrinsic projector
parameters in fringe projection using
machine learning. Opt. Lasers Eng. 2019;
114: 7-14.
[2] Senin N, Leach R K. Information-rich
surface metrology. Proc. CIRP. 2018; 75:
[3] Han X F, Jin J S, Xie J, Wang M J, Jiang
W. A comprehensive review of 3D point
cloud descriptors. 2018; arXiv:1802.02297.
[4] Tombari F, Salti S, Di Stefano L. Unique
signatures of histograms for local surface
description. European conference on
computer vision. Springer; 2010. 356-369.
[5] Bellekens B, Spruyt V, Berkvens R, Penne
R, Weyn M. A benchmark survey of rigid 3d
point cloud registration algorithms. Int. J.
Adv. Intell. Syst. 2015.
[6] Chung D H, Yun I D, Lee S U. Registration of multiple-
range views using the reverse-calibration
technique. Pattern Recogn. 1998; 31: 457-
[7] Friedman J H, Bentley J L, Finkel R A. An
Algorithm for Finding Best Matches in
Logarithmic Expected Time. ACM
Transactions on Mathematical Software.
1977; 3: 209-226.
[8] Merigot Q, Ovsjanikov M, Guibas L
J. Voronoi-based curvature and feature
estimation from point clouds. IEEE Trans.
Vis. Comput. Graph. 2011; 17: 743-756.
[9] Ding C, He X. K-means clustering via
Principal Component Analysis. Proceedings
of International Conference on Machine
Learning. 2004. 225-232.
[10] Moretti M, Gambucci G, Leach R K, Senin
N. Assessment of surface topography
modifications through feature-based
registration of areal topography data. Surf.
Topogr. Metrol. Prop. 2019; 7.
[11] Fischler M A, Bolles R C. Random sample
consensus: A paradigm for model fitting with
applications to image analysis and
automated cartography. Commun. ACM.
1981; 24: 381-395.
[12] Kendall D G. A survey of the statistical
theory of shape. Statistical Science. 1989; 4:
[13] Besl P, McKay N. A method for registration
of 3-D shapes. IEEE TPAMI. 1992; 14: 239-
[14] Gelfand N, Ikemoto L, Rusinkiewicz S,
Levoy M. Geometrically stable sampling for
the ICP algorithm. Proceedings of
International Conference on 3-D Digital
Imaging and Modeling, 3DIM. 2003.