Low Rank Matrix Approximation for 3D
Geometry Filtering
Xuequan Lu, Member, IEEE, Scott Schaefer, Jun Luo, Senior Member, IEEE,
Lizhuang Ma, Member, IEEE, and Ying He, Member, IEEE
Abstract—We propose a robust normal estimation method for both point clouds and meshes using a low rank matrix approximation
algorithm. First, we compute a local isotropic structure for each point and find its similar, non-local structures that we organize into a
matrix. We then show that a low rank matrix approximation algorithm can robustly estimate normals for both point clouds and meshes.
Furthermore, we provide a new filtering method for point cloud data to smooth the position data to fit the estimated normals. We show
the applications of our method to point cloud filtering, point set upsampling, surface reconstruction, mesh denoising, and geometric
texture removal. Our experiments show that our method generally achieves better results than existing methods.
Index Terms—3D Geometry filtering, Point cloud filtering, Mesh denoising, Point upsampling, Surface reconstruction, Geometric
texture removal.
1 Introduction

Filtering in 2D data like images is prevalent nowadays
[1], [2], [3], [4], and 3D geometry (e.g., point clouds,
meshes) filtering and processing has recently attracted more
and more attention in 3D vision [5], [6], [7]. Normal estimation for point cloud models or mesh shapes is important
since it is often the first step in a geometry processing
pipeline. This estimation is often followed by a filtering
process to update the position data and remove noise [8].
A variety of computer graphics applications, such as point
cloud filtering [9], [10], [11], point set upsampling [12], sur-
face reconstruction [13], mesh denoising [8], [14], [15] and
geometric texture removal [16] rely heavily on the quality of
estimated normals and subsequent filtering of position data.
Current state of the art techniques in mesh denoising [8],
[14], [15] and geometric texture removal [16] can achieve
impressive results. However, these methods are still limited
in their ability to recover sharp edges in challenging regions.
Normal estimation for point clouds has been an active area
of research in recent years [12], [17], [18]. However, these
methods perform suboptimally when estimating normals in
noisy point clouds. Specifically, [17], [18] are less robust in
the presence of considerable noise. The bilateral filter can
preserve geometric features but sometimes may fail due to
the locality of its computations and its lack of self-adaptation.
Updating point positions using the estimated normals in
point clouds has received sparse treatment so far [9], [10].
However, those position update approaches using the L0
or L1 norms are complex to solve and hard to implement.
Moreover, they restrict each point to only move along its
normal orientation, potentially leading to suboptimal results
or slow convergence.

X. Lu is with the School of Information Technology, Deakin University, Australia.
J. Luo and Y. He are with the School of Computer Science and Engineering, Nanyang Technological University, Singapore.
S. Schaefer is with the Department of Computer Science, Texas A&M University, College Station, Texas, USA.
L. Ma is with the Department of Computer Science, Shanghai Jiao Tong University, Shanghai, China.
Manuscript received May 19, 2020; revised 11 August, 2020. Preprint.
To address the issues shown above, we propose a new
normal estimation method for both meshes and point clouds
and a new position update algorithm for point clouds. Our
method benefits various geometry processing applications,
directly or indirectly, such as point cloud filtering, point set
upsampling, surface reconstruction, mesh denoising, and
geometric texture removal (Figure 1). Given a point cloud
or mesh as input, our method first estimates point or face
normals, then updates the positions of points or vertices
using the estimated normals. We observe that: (1) non-local
methods could be more accurate than local techniques; (2)
there usually exist similar structures of each local isotropic
structure (Section 3.1) in geometry shapes; (3) the matrix
constructed by similar structures should be low-rank (Sec-
tion 3.2). Motivated by these observations, we propose a
novel normal estimation technique, which consists of two
sub-steps: (i) locating non-local similar structures and (ii)
weighted nuclear norm minimization. We adopt the former
to find similar structures of each local isotropic structure. We
employ the latter [3] to handle the problem of recovering
low-rank matrices. We also present a fast and effective
point update algorithm for point clouds to filter the point
positions to better match the estimated normals. Extensive
experiments and comparisons show that our method gener-
ally outperforms current methods.
The main contributions of this paper are:
• a novel normal estimation technique for both point cloud shapes and mesh models;
• a new position update algorithm for point clouds;
• analysis of the convergence of the proposed normal estimation and point update techniques, experimentally or theoretically (see supplementary document).
Fig. 1. Overview of our approach (normal estimation followed by positional update, given a point set or mesh) and the applications it benefits: point set filtering, upsampling, surface reconstruction, mesh denoising, and point set/mesh texture removal. Our method can be applied to various geometry processing tasks directly or indirectly.

2 Related Work
In this section, we only review the research works that are
most related to this work. We first review the previous re-
search on normal estimation. Then we review some previous
works which employed the nuclear norm minimization or
its weighted version.
2.1 Normal Estimation
Normal estimation for geometric shapes can be classified
into two types: (1) normal estimation for point clouds, and
(2) normal estimation for mesh shapes.
Normal estimation for point clouds. Hoppe et al. [19]
estimated normals by computing the tangent plane at each
data point using principal component analysis (PCA) of the
local neighborhood. Later, a variety of variants of PCA have
been proposed [20], [21], [22], [23], [24] to estimate normals.
Nevertheless, the normals estimated by these techniques
tend to smear sharp features. Researchers also estimate
normals using Voronoi cells or Voronoi-PCA [25], [26]. Minimizing the L1 or L0 norm can preserve sharp features,
as these norms can be used to measure sparsity in the
derivative of the normal field [9], [10]. Yet, the solutions are
complex and computationally expensive. Li et al. [27] esti-
mated normals by using robust statistics to detect the best
local tangent plane for each point. Another set of techniques
attempted to better estimate normals near edges and corners
by point clustering in a neighborhood [28], [29]. Later they
presented a pair consistency voting scheme which outputs
multiple normals per feature point [30]. Boulch and Marlet
[17] use a robust randomized Hough transform to estimate
point normals. Convolutional neural networks have recently
been applied to estimate normals in point clouds [18]. Such
estimation methods are usually less robust for point clouds
with considerable amount of noise. Bilateral smoothing of
PCA normals [12], [13] is simple and effective, but it suffers
from inaccuracy due to the locality of its computations and
may blur edges with small dihedral angles. Mattei et al.
[31] presented a moving RPCA method for point cloud
denoising, inspired by sparsity. They modeled the RPCA
problem in a local sense by specifying an output rank of 2,
rather than considering similar structures. The computed
normals are only used to compute similarity.
Normal estimation for mesh shapes. Most methods
focus on the estimation of face normals in mesh shapes.
One simple, direct way is to compute the face normals by
the cross product of two edges in a triangle face. However,
such normals can deviate from the true normals significantly
even in the presence of small position noise. There exist
a considerable amount of research work to smooth these
face normals. One approach uses the bilateral filter [14],
[32], [33], inspired by the founding works [34], [35]. Mean,
median and alpha-trimming methods [36], [37], [38] are also
used to estimate face normals. Sun et al. [8], [39] present
two different methods to filter face normals. Recently, re-
searchers have presented filtering methods [15], [40], [41],
[42], [43] based on mean shift, total variation, guided nor-
mals, L1median, and normal voting tensor. Wang et al. [44]
estimated face normals via cascaded normal regression.
2.2 Nonlocal Methods for Point Clouds and Nuclear
Norm Minimization
Previous researchers proposed non-local methods for point
clouds. For example, Zheng et al. [45] applied non-local
filtering to 3D buildings that exhibit large scale repetitions
and self-similarities. Digne presented a non-local denoising
framework to unorganized point clouds by building an
intrinsic descriptor [46], and recently proposed a shape
analysis approach with colleagues based on the non-local
analysis of local shape variations [47].
The nuclear norm of a matrix is defined as the sum of the
absolute values of its singular values (see Eq. (4)). It has been
proved that most low-rank matrices can be recovered by
minimizing their nuclear norm [48]. Cai et al. [49] provided
a simple solution to the low-rank matrix approximation
problem by minimizing the nuclear norm. The nuclear norm
minimization has been broadly employed to matrix com-
pletion [48], [49], robust principle component analysis [50],
low-rank representation for subspace clustering [51] and
low-rank textures [52]. Gu et al. [3], [4] presented a weighted
version of the nuclear norm minimization, which has been
adopted to image processing applications such as image
denoising, background subtraction and image inpainting.
3 Normal Estimation

In this section, we take point clouds, consisting of positions
as well as normals, as input and further extend to meshes
later. As with [10], [11], [12], the normals are initialized by
the classical PCA method [19], which is robust and easy
to use (we use the implementation in [12]). First of all,
we present an algorithm to locate and construct non-local
similar structures for each local isotropic structure of a point
(Section 3.1). We then describe how to estimate normals via
weighted nuclear norm minimization on non-local similar
structures (Section 3.2).
3.1 Non-local Similar Structures
Local structure. We define for each point p_i a local structure S_i which consists of its k_local nearest neighbors. Locating
structures similar to a specific local structure is difficult due
to the irregularity of points.
Tensor voting. We assume each local structure embeds a
representative normal. To compute it, we first define the tensor at
a point p_i as

T_ij = η(‖p_i − p_j‖) φ(θ_ij, σ_θ) n_j^T n_j,  (1)

where p_j (1 × 3 vector) is one of the k_local nearest neighbors
of p_i, which we denote as j ∈ S_i, and n_j (1 × 3 vector)
(a) Local structure (b) Local isotropic
(c) Similar structures
Fig. 2. (a) The local structures (green points) of the centered red points,
respectively. (b) The local isotropic structures (green) of the correspond-
ing red points. (c) The similar local isotropic structures of the consistent
local isotropic structures denoted by the red points. Each blue or cyan
point denotes its isotropic structure.
is the normal of p_j. η and φ are the weights induced
by spatial distances and angles (θ_ij) of two neighboring
normals, which are given by [12], [14]: η(x) = e^{−(x/σ_p)^2}
and φ(θ, σ_θ) = e^{−((1−cos θ)/(1−cos σ_θ))^2}. σ_p and σ_θ are the scaling parameters, which are empirically set to two times the maximal
distance between any two points in the k_local nearest neighbors within the local structure and 30°, respectively.
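These two Gaussian weights can be sketched as follows (a minimal reconstruction of η and φ as written above; the function names are ours, angles are in radians):

```python
import numpy as np

def eta(x, sigma_p):
    # Spatial weight: decays with the distance between p_i and p_j.
    return np.exp(-(x / sigma_p) ** 2)

def phi(theta, sigma_theta):
    # Angular weight: decays with the angle between two normals,
    # expressed through (1 - cos(theta)) so it is smooth at theta = 0.
    return np.exp(-((1.0 - np.cos(theta)) / (1.0 - np.cos(sigma_theta))) ** 2)
```

Both weights equal 1 at zero distance/angle and reach e^{−1} when the argument equals the scaling parameter.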
For each local structure S_i, we can derive the accumulated tensor by aggregating all the induced tensor votes
{T_ij | j ∈ S_i}. This final tensor encodes the local structure,
which provides a reliable, representative normal that will be
later used to compute the local isotropic structure and locate
similar structures:

T_i = Σ_{j∈S_i} T_ij.  (2)
Let λ_i1 ≥ λ_i2 ≥ λ_i3 be the eigenvalues of T_i with the
corresponding eigenvectors e_i1, e_i2 and e_i3. In tensor voting
[53], λ_i1 − λ_i2 indicates surface saliency with a normal direction e_i1; λ_i2 − λ_i3 indicates curve saliency with a tangent
orientation e_i3; λ_i3 denotes junction saliency. Therefore, we
take e_i1 as the representative normal for the local structure
S_i of point p_i.
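A sketch of the accumulated tensor vote and its dominant eigenvector (our reading of Eqs. (1) and (2); the function name and the use of `numpy.linalg.eigh` are implementation choices, not from the paper):

```python
import numpy as np

def representative_normal(p_i, n_i, nbr_pts, nbr_normals, sigma_p, sigma_theta):
    """Accumulate T_i = sum_j eta * phi * n_j^T n_j and return e_i1,
    the eigenvector of the largest eigenvalue (the representative normal)."""
    T = np.zeros((3, 3))
    for p_j, n_j in zip(nbr_pts, nbr_normals):
        eta = np.exp(-(np.linalg.norm(p_i - p_j) / sigma_p) ** 2)
        cos_t = abs(float(np.dot(n_i, n_j)))   # cosine of the normal angle
        phi = np.exp(-((1.0 - cos_t) / (1.0 - np.cos(sigma_theta))) ** 2)
        T += eta * phi * np.outer(n_j, n_j)    # n_j^T n_j for a row vector n_j
    eigvals, eigvecs = np.linalg.eigh(T)       # eigenvalues in ascending order
    return eigvecs[:, -1]                      # e_i1
```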
Local isotropic structure. We assume that each local
structure has a subset of points that are on the same isotropic
surface with the representative normal. We call this subset
of points the local isotropic structure. Surface patches with
small variation in their dihedral angles are usually considered
isotropic surfaces (Figure 2(b)). To obtain a local isotropic
structure S_i^iso from a local structure S_i and locate similar
local isotropic structures for S_i^iso, we present a simple yet
effective scheme. Here we also employ the function φ(θ, σ_θ)
defined in Eq. (1), setting σ_θ to θ_th, where θ is the angle between
two normals and θ_th is the angle threshold. Specifically, to
obtain S_i^iso, we
• compute the angles θ between each point normal and the representative normal within a local structure S_i;
• add the current point to S_i^iso if φ(θ, θ_th) is sufficiently large (i.e., φ(θ, θ_th) ≥ e^{−1}).
For simplicity, we will refer to “similar local isotropic structures” as “similar structures” throughout the paper, unless
otherwise stated. Given an isotropic structure S_i^iso, we identify its non-local similar structures by computing φ(θ, θ_th)
between the representative normal of each structure and
that of S_i. If φ(θ, θ_th) ≥ e^{−1} (we use the same θ_th for
simplicity), we define the two isotropic structures to be
similar. The underlying rationale of our similarity search is:
similar. The underlying rationale of our similarity search is:
the point normals in a local isotropic structure are bounded
by the representative normal, indicating these points are
on the same isotropic surface; the similar structures search
is also bounded by the representative normals, implying
the similar structures are on the similar isotropic surfaces.
These similar structures will often overlap on the same
isotropic surface as shown in Figure 2. In the figure, we
show the local structure (a), the local isotropic structure (b),
and the similar structures (c). Each representative normal
is computed from all neighbors in the structure, with different
weights with respect to the current point (Eqs. (1) and (2)). This
indicates that the representative normal is isotropic with the
current point normal, and there is no need to iteratively
refine the representative normal using the local isotropic
neighbors. Note that the non-local similar structures are
searched in the context of isotropic surfaces rather than
anisotropic surfaces (see more analysis in Rotation-invariant
similarity in supplementary document).
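The two selection tests above can be sketched with the same angular weight (a hypothetical helper; `theta_th` in radians, threshold e^{−1} as in the text):

```python
import numpy as np

def phi_w(cos_t, theta_th):
    # Angular weight of Eq. (1) with sigma_theta set to theta_th.
    return np.exp(-((1.0 - cos_t) / (1.0 - np.cos(theta_th))) ** 2)

def isotropic_structure(rep_n, nbr_idx, normals, theta_th):
    """Keep the neighbors whose normals stay close to the representative normal."""
    t = np.exp(-1.0)
    return [j for j in nbr_idx
            if phi_w(abs(float(np.dot(normals[j], rep_n))), theta_th) >= t]

def similar_structures(i, rep_normals, candidates, theta_th):
    """Non-local structures whose representative normals agree with structure i's."""
    t = np.exp(-1.0)
    return [l for l in candidates
            if phi_w(abs(float(np.dot(rep_normals[l], rep_normals[i]))), theta_th) >= t]
```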
(a) without reshaping (b) with reshaping
Fig. 3. A normal estimation comparison without and with matrix reshaping.
3.2 Weighted Nuclear Norm Minimization
For each non-local similar structure S_l^iso of the isotropic
structure S_i^iso associated with the point p_i, we append the
point normals of S_l^iso as rows to a matrix M. Note that the
dimensions of this matrix are r̂ × 3, where r̂ is the number of
rows and 3 is the number of columns. This matrix already
has a maximal rank of 3 and is a low rank matrix. Therefore,
the low rank matrix approximation from rank 3 or 2 to rank
2 or 1 is less meaningful than from a high rank to a low
rank, in terms of “smoothing”. To make the low rank matrix
approximation more meaningful, we reshape the matrix M
to be close to a square matrix. Figure 3 illustrates a normal
estimation comparison without and with matrix reshaping,
and shows that the initial r̂ × 3 matrix M requires reshaping
to obtain more effective smoothing results. It also shows that
the reconstruction error between the initially reshaped matrix and the low rank optimized matrix is typically greater
than the error computed without reshaping, which further
validates the more effective smoothing enabled by reshaping. As
such, reshaping M is necessary.
We do so by finding dimensions r and c of a new matrix
Z0 such that r̂ × 3 = r × c while minimizing |r − c|. Given
that the structure in M is isotropic, removing one or more
points does not affect this structure significantly. Therefore,
we first find r and c that minimize |r − c| (r ≥ c) and check
whether |r − c| ≥ 6 and c = 3 are both satisfied, where 6 is an
empirical value and c = 3 tests whether the reshaping failed. If
so, we remove a point normal from M and solve for r and
c again. We repeat this process until the conditions are no longer
satisfied (r is not required to be a multiple of 3). Then
we simply copy the column entries in M to Z0, filling each
column of Z0 before continuing to the next column.
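The reshaping rules can be sketched as follows (our reading; the column-major copy realizes “filling each column of Z0 before continuing to the next column”):

```python
import numpy as np

def best_factorization(n):
    # Factor n = r * c with r >= c and |r - c| minimal.
    c = int(np.floor(np.sqrt(n)))
    while n % c != 0:
        c -= 1
    return n // c, c

def reshape_near_square(M):
    """Reshape an (r_hat x 3) normal matrix into a near-square matrix Z0,
    dropping one point normal at a time while |r - c| >= 6 and c == 3."""
    M = np.asarray(M, dtype=float)
    while True:
        r, c = best_factorization(M.shape[0] * 3)
        if not (abs(r - c) >= 6 and c == 3):
            break
        M = M[:-1]          # remove a point normal and retry
    # Copy M's column entries into Z0, column by column.
    return M.flatten(order='F').reshape((r, c), order='F')
```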
We take the size 8 × 3 for M as a simple example; the
reshaping process is illustrated in Eq. (3). The reshaped
matrix Z0 has a size of 6 × 4 and, in general, a higher rank
than M. It is known that the rank of a matrix is the
number of linearly independent columns. Intuitively, the
resulting matrix Z0 should be low rank since all normals
come from similar isotropic structures and each point may
contribute multiple normals, and the x, y and z values are
respectively gathered in columns. In Z0, most columns consist
of coordinates from a single dimension (only x coordinates,
for example). There are at most two columns involving both
x and y, or both y and z (Eq. (3)), which negligibly affects
the rank and the smoothing results (see supplementary
document). Experimentally, we followed the above rules to
construct matrices of similar local isotropic structures for
planar and curved surfaces in Figure 2, and observed the
matrices are indeed low rank (i.e., a considerable number of
negligible singular values). Figure 4 shows the histograms
of singular values of two reshaped matrices from Figure 2,
and confirms the low-rank property.
We then cast the normal estimation problem as a low-rank
matrix approximation problem. We attempt to recover
a low-rank matrix Z from Z0 using nuclear norm minimization.
We first present some fundamental knowledge about
nuclear norm minimization and then show how we estimate
normals with weighted nuclear norm minimization.
Fig. 4. Histograms of singular values of two reshaped matrices from
Figure 2. The horizontal axis denotes the singular values, and the vertical
axis denotes the number of singular values falling into the corresponding ranges.

Nuclear norm. The nuclear norm of a matrix is defined
as the sum of the absolute values of its singular values:

‖Z‖_* = Σ_m |δ_m|,  (4)

where δ_m is the m-th singular value of matrix Z and ‖Z‖_*
denotes the nuclear norm of Z.
Nuclear norm minimization. Nuclear norm minimization is frequently used to approximate a known matrix,
Z0, by a low-rank matrix, Z, while minimizing the nuclear
norm of Z. Cai et al. [49] demonstrated that the low-rank
matrix Z can be easily solved for by adding a Frobenius-norm
data term:

min_Z ‖Z0 − Z‖_F^2 + α‖Z‖_*,  (5)

where α is the weighting parameter. The minimizing matrix
Z is then

Z = U ψ(S, α) V^T,  (6)

where Z0 = U S V^T denotes the SVD of Z0 and S_{m,m} is
the m-th diagonal element of S. ψ is the soft-thresholding
function on S with the parameter α, i.e., ψ(S_{m,m}, α) =
max(0, S_{m,m} − α). Soft thresholding effectively clamps small
singular values to 0, thus creating a low rank approximation.
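A sketch of this closed-form solution via singular value soft-thresholding (assuming NumPy's SVD convention Z0 = U S V^T):

```python
import numpy as np

def nnm_soft_threshold(Z0, alpha):
    """Low-rank approximation Z = U psi(S, alpha) V^T, where psi
    clamps each singular value: max(0, s - alpha)."""
    U, s, Vt = np.linalg.svd(np.asarray(Z0, dtype=float), full_matrices=False)
    s_shrunk = np.maximum(s - alpha, 0.0)   # soft-threshold the spectrum
    return U @ np.diag(s_shrunk) @ Vt
```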
Nuclear norm minimization treats and shrinks each singular value equally. However, in general, larger singular
values should be shrunk less to better approximate the
known matrix and preserve the major components.
Weighted nuclear norm minimization solves this issue [3].
Weighted nuclear norm minimization. The weighted
nuclear norm of a matrix Z is

‖Z‖_{w,*} = Σ_m |w_m δ_m|,  (7)

where w_m is the non-negative weight imposed on the m-th
singular value and w = {w_m}. We can then write the
low-rank matrix approximation problem as

min_Z ‖Z0 − Z‖_F^2 + ‖Z‖_{w,*}.  (8)

Suppose the singular values {δ_m} are sorted in non-ascending
order; the corresponding weights {w_m} should then
be in non-descending order. Hence, we define the weight
function as a Gaussian function:

w_m = β e^{−δ_m^2/δ_1^2},  (9)

where β denotes the regularization coefficient, which defaults to 1.0,
and δ_1 is the first singular value after sorting {δ_m} in non-increasing
order. We did not use the original weight definition in [3] since it
requires the noise variance, which is unknown in normal estimation. Also, we found their
weight determination is not suitable for normal-constructed
matrices. We then solve Eq. (8) by a generalized soft-thresholding
operation on the singular values with weights:

Z = U ψ(S, w) V^T,  (10)

where ψ(S_{m,m}, w_m) = max(0, S_{m,m} − w_m). Here ψ becomes
the generalized soft-thresholding function by assigning
weights to the singular values, and Eq. (10) becomes the
weighted version of Eq. (6).

ALGORITHM 1: Weighted nuclear norm minimization
Input: non-local similar structures of each local isotropic structure
Output: new matrices {Z}
for each local isotropic structure S_i^iso do
  construct a matrix Z0;
  compute the SVD of Z0;
  compute the weights via Eq. (9);
  recover Z via Eq. (10);
end
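The weighted variant differs only in the per-singular-value thresholds. A sketch under our reading of the Gaussian weight above (the exact form of Eq. (9) is our reconstruction from the surrounding text):

```python
import numpy as np

def weighted_nnm(Z0, beta=1.0):
    """Generalized soft-thresholding: Z = U psi(S, w) V^T with
    psi(s_m, w_m) = max(0, s_m - w_m). Larger singular values receive
    smaller weights, so they are shrunk less."""
    U, s, Vt = np.linalg.svd(np.asarray(Z0, dtype=float), full_matrices=False)
    # Gaussian weights, non-descending as the (sorted) s descend.
    w = beta * np.exp(-(s / max(s[0], 1e-12)) ** 2)
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt
```

The dominant singular value keeps most of its magnitude while near-zero singular values are clamped to zero, which is the adaptivity the text contrasts with truncated SVD.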
Notice that the truncated SVD can also solve the low-rank
matrix approximation problem. However, we found it
is less effective here for two reasons. First, the truncated
SVD uses a fixed number K to determine the top singular
values, but the value of K is usually shape dependent.
Second, truncated SVD treats each selected singular value
equally. In contrast, our method treats singular values differently
to enable adaptivity.
3.3 Algorithm
Each point may have multiple normals in the recovered
matrices {Z}, as the similar structures often overlap. We
first reshape {Z} back to matrices like {M} (each row in each
matrix is a normal), and compute the final normal of each
point by simply averaging its corresponding normals in
{Z} after calling Algorithm 1. To achieve quality normal
estimations, we iterate the non-local similar structure search
(Section 3.1) and the weighted nuclear norm minimization
in Algorithm 1.
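The averaging step can be sketched as follows (hypothetical bookkeeping: `memberships[k]` records which point each row of the k-th recovered matrix belongs to):

```python
import numpy as np

def average_point_normals(recovered, memberships, n_points):
    """Average and renormalize the (possibly multiple) normals that the
    overlapping recovered matrices {Z} assign to each point."""
    acc = np.zeros((n_points, 3))
    for rows, idx in zip(recovered, memberships):
        for n, i in zip(np.asarray(rows, dtype=float), idx):
            acc[i] += n
    lengths = np.linalg.norm(acc, axis=1, keepdims=True)
    lengths[lengths == 0.0] = 1.0   # leave untouched points at zero
    return acc / lengths
```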
Extension to mesh models. Our algorithm can be easily
extended to handle mesh models. One natural way is to take
the vertices/normals of a mesh as points/normals in a point
cloud. However, to achieve desired results, face normals are
frequently used to update vertex positions [8], [14], [15].
Hence, we use the centers of faces and the corresponding
normals as points. Moreover, we use the mesh topology to
compute neighbors in Section 3.1.
4 Position Update

Besides normal estimation, we also present algorithms to
update point or vertex positions to match the estimated
normals, which is typically necessary before applying other
geometry processing algorithms.
Vertex update for mesh models. We use the algorithm
[8] to update vertices of mesh models, which minimizes the
square of the dot product between the normal and the three
edges of each face.
Point update for point clouds. Compared to the vertex
update for mesh models, updating point cloud positions is
more difficult due to the absence of topological information.
Furthermore, the local neighborhood information may vary
during this position update. We propose a modification of
the edge recovery algorithm in [10] to update points in a
feature-aware way and minimize

Σ_i Σ_{j∈S_i} [ (n_j · (p_i − p_j))^2 + (n_i · (p_i − p_j))^2 ],  (11)

where p_i and p_j are unknown, and n_i and n_j are computed
by our normal estimation algorithm. Eq. (11) encodes the
sum of distances to the tangent planes defined by the
neighboring points {p_j} and the corresponding normals
{n_j}, as well as the sum of distances to the tangent planes
defined by {p_i} and {n_i}. The differences between [10] and
our method are: (1) [10] utilized a least squares form for
alleviating artifacts at the intersection of two sharp edges;
(2) [10] only considered the distance to the planes defined
by each neighboring point and that point's corresponding
normal.
We use gradient descent to solve Eq. (11), assuming
the point p_i and its neighboring points {p_j | j ∈ S_i} from the
previous iteration are known. Here we use ball neighbors
instead of k nearest neighbors to ensure the convergence of
our point update. Therefore, the new position of p_i can be
computed by

p_i' = p_i − γ_i Σ_{j∈S_i} [ (n_j · (p_i − p_j)) n_j + (n_i · (p_i − p_j)) n_i ],  (12)

where p_i' is the new position and γ_i is the step size, which is
set to 1/(3|S_i|) to ensure convergence (see supplementary document).
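A minimal sketch of this update (our reconstruction of the gradient step on the tangent-plane distances of Eq. (11); `neighbors[i]` lists the ball neighbors of point i from the previous iteration):

```python
import numpy as np

def update_points(P, N, neighbors, n_iter=10):
    """Gradient descent on the tangent-plane distances of Eq. (11),
    with step size gamma_i = 1 / (3 |S_i|)."""
    P = np.asarray(P, dtype=float).copy()
    N = np.asarray(N, dtype=float)
    for _ in range(n_iter):
        P_prev = P.copy()
        for i, nbr in enumerate(neighbors):
            if len(nbr) == 0:
                continue
            gamma = 1.0 / (3.0 * len(nbr))
            step = np.zeros(3)
            for j in nbr:
                d = P_prev[i] - P_prev[j]
                # distances to the tangent planes at p_j and at p_i
                step += float(np.dot(N[j], d)) * N[j] + float(np.dot(N[i], d)) * N[i]
            P[i] = P_prev[i] - gamma * step
    return P
```

An off-plane point surrounded by coplanar neighbors is pulled back onto their common tangent plane, while motion within the plane is left untouched.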
(a) (b) (c) (d)
Fig. 5. (a) and (b): two overly-sharpened results (more unique colors
around the upper corner) obtained by fixing θ_th. (c) The smeared result
(smoothly changing colors around the lower corner) obtained by using a
greater θ_th^init. (d) The result obtained by using a smaller θ_th^init. Zoom
in to clearly observe the differences.
Fig. 6. Normal errors (mean square angular error in radians) of the (a) Cube
and (b) Dodecahedron point sets corrupted with different levels of noise
(1.0%, 2.0% and 3.0%, proportional to the diagonal length of the bounding box).
5 Applications and Experimental Results

In this section, we demonstrate some geometry processing
applications that benefit from our approach directly or indirectly,
including mesh denoising, point cloud filtering, point
(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours
Fig. 7. Position accuracies for Fig. 9. The root mean square errors are
(×10^−2): (a) 8.83, (b) 9.05, (c) 5.14, (d) 9.64, (e) 3.22. The RMSE of
the corresponding surface reconstructions are (×10^−2): 7.73, 6.45, 3.28,
7.71 and 2.41, respectively. (f) is the error bar for this figure and Figure 8.
(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours
Fig. 8. Position accuracies for Fig. 13. The root mean square errors are
(×10^−3): (a) 8.59, (b) 6.84, (c) 6.80, (d) 6.82, (e) 6.57. The RMSE of
the corresponding surface reconstructions are (×10^−3): 8.60, 6.75, 6.68,
6.74 and 6.40, respectively.
cloud upsampling, surface reconstruction, and geometric
texture removal. Moreover, we also compared state of the art
methods with our approach in each application. We utilize
freely available source code for each compared method or
implementations obtained from the original authors.
Parameter setting. As with image denoising [3], we set a
“window” size (i.e., a non-local search range) for the similar
structure search, which provides a trade-off between
accuracy and speed. The main parameters of our normal
estimation method are the local neighborhood size k_local,
the angle threshold θ_th, the non-local search range k_non,
and the maximum number of iterations for normal estimation n_nor. For
the position update procedure, our parameters are the local
neighborhood size k_local or the 1-ring neighbors (mesh models)
and the number of iterations for the position update n_pos.
To more accurately find similar local isotropic structures,
we set one initial value and one lower bound for θ_th, namely
θ_th^init and θ_th^low. We reduce the start value θ_th^init towards θ_th^low
at a rate of 1.1^n in the n-th iteration. We show the tests of
our parameters in Figure 5 and the supplementary document. In
general, normal errors decrease with an increasing number
of normal estimation iterations, but excessive iterations can
cause normal errors to increase. The estimated normals of
models with sharp features are more accurate with an increasing
local neighborhood size k_local or non-local search range k_non.
A fixed θ_th is likely to inaccurately locate similar local
isotropic structures and further generate erroneous normal
estimations (Figure 5(a-b)). Larger start values of θ_th^init smear
geometric features (Figure 5(c)).

TABLE 1
Normal errors (mean square angular error in radians) of two scanned
models. Dod vir is a virtual scan of a noise-free model, as opposed to
Figure 10, which is corrupted with synthetic noise.

Methods   [19]    [17]    [12]    [18]    Ours
Dod vir   0.0150  0.0465  0.0054  0.0553  0.0023
Fig. 13   0.0118  0.1274  0.0060  0.1208  0.0036
Based on our parameter tests and observations, for
point clouds we empirically set: k_local = 60, k_non = 150,
θ_th^init = 30.0° and θ_th^low = 15.0° for models with sharp
features, but set θ_th^init = 20.0° and θ_th^low = 8.0° for models with
low dihedral angle features. For mesh models, we replace
the local neighborhood with the 2-ring of neighboring faces.
We use 2 to 10 iterations for normal estimation and 5 to 30
iterations for the position update.
To make fair comparisons, we used the same local neigh-
borhood for all methods and tune the remaining parameters
of the other methods to achieve the best visual results.
Specifically, to tune one parameter, we fixed the other pa-
rameters and searched based on the suggested range and the
meaning of parameters in the original papers. We observed
that the other methods often take more iterations in normal
smoothing than ours. The methods [17], [18] have multiple
solutions, and we took the best results for comparison. For
the position update, we used the same parameters for the
compared normal estimation methods for each model.
Accuracy. Since we used the pre-filter [54] for meshes
with large noise, there exist few flipped normals in the
results so that different methods have limited difference in
normal accuracy. However, the visual differences are easy to
observe. Therefore, we compared the accuracy of normals
and positions over point cloud shapes. Note that state of
the art methods compute normals on edges differently: the
normals on edges are either sideling (e.g., [18], [19]) or
perpendicular to one of the intersected tangent planes (e.g.,
[12] and our method). The latter is more suitable for feature-
aware position update. For fair comparisons, we have two
ground truth models for each point cloud: the original
ground truth for [18], [19] and the other ground truth for
[12] and our method. The latter ground truth is generated by
adapting normals on edges to be perpendicular to one of the
intersected tangent planes. The ground truth model, which
has the smaller mean square angular error (MSAE) [54]
among the two kinds of ground truth models, is selected as
the ground truth for [17]. Figure 6 shows the normal errors
of different levels of noise on the cube and dodecahedron
models. We also compared our method with state of the
art techniques in Table 1. The ground truth for the Dod vir
model (Table 1) for [18], [19] is achieved by averaging the
neighboring face normals in the noise-free model. The other
kind of ground truth for [12] and our method is produced by
further adapting normals on edges to one of the intersected
tangent planes. We compute ground truth for Figure 13 and
6 in a similar way. The normal error results demonstrate that
our approach outperforms the state of the art methods. We
speculate that this performance is due to the use of non-local
similar structures as opposed to only local information.
In addition, we compared the position errors of differ-
ent techniques, see Figure 7 and 8. The position error is
measured using the average distance between points of the
ground truth and their closest points of the reconstructed
point set [11]. For visualization purposes, we rendered the
colors of position errors on the upsampling results. The
root mean square error (RMSE) of both the upsampling
and reconstruction results show that our approach is more
accurate than state of the art methods.
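The position error above can be sketched as follows (brute-force closest-point search for clarity; real implementations would typically use a k-d tree):

```python
import numpy as np

def position_rmse(result, ground_truth):
    """Distance from each ground-truth point to its closest point in the
    result set, reported as a root mean square error."""
    gt = np.asarray(ground_truth, dtype=float)
    res = np.asarray(result, dtype=float)
    d2 = ((gt[:, None, :] - res[None, :, :]) ** 2).sum(axis=-1)  # pairwise
    closest = d2.min(axis=1)   # squared distance to the nearest result point
    return float(np.sqrt(closest.mean()))
```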
(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours
Fig. 9. The first row: normal results of the Cube point cloud (synthetic noise: 3.0% of the diagonal length of the bounding box). The second row:
upsampling results of the filtered results by updating position with the normals in the first row. The third row: the corresponding surface reconstruction
(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours
Fig. 10. The first row: normal results of the Dodecahedron point cloud (synthetic noise: 2.0% of the diagonal length of the bounding box). The
second row: upsampling results of the filtered results by updating position with the normals in the first row. The third row: the corresponding surface
reconstruction results.
(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours
Fig. 11. The first row: normal results of the scanned Car point cloud. The second row: upsampling results of the filtered results by updating position
with the normals in the first row. The third row: the corresponding surface reconstruction results. Compared with other methods, [12] and our method preserve sharp edges better and hence generate sharper results.
(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours
Fig. 12. The first row: normal results of the scanned House point cloud. The second row: upsampling results of the filtered results by updating
position with the normals in the first row. The third row: the corresponding surface reconstruction results.
(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours
Fig. 13. The first row: normal results of the scanned Iron point cloud. The second row: upsampling results of the filtered results by updating position
with the normals in the first row. The third row: the corresponding surface reconstruction results.
(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours
Fig. 14. The first row: normal results of the scanned Toy point cloud. The second row: upsampling results of the filtered results by updating position
with the normals in the first row. The third row: the corresponding surface reconstruction results.
(a) Noisy input (b) [8] (c) [14] (local) (d) [14] (global) (e) [15] (f) Ours
Fig. 15. Denoised results of the Bunny (synthetic noise: 0.2 of the average edge length), the scanned Pyramid and Wilhelm.
(a) [24] (b) RIMLS over (a) (c) [55] (d) RIMLS over (c)
Fig. 16. Upsampling and reconstruction results over [24], [55]. The input is the same as in Figure 10.
5.1 Point Cloud Filtering
We compare our normal estimation method with several
state of the art normal estimation techniques. We then per-
form the same number of iterations of our position update
algorithm with the estimated normals of all methods.
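To make the two-step pipeline concrete, the position-update step can be sketched as below. This is our own minimal, generic normal-driven update in the spirit of edge-aware point updating (e.g., [12]); the Gaussian weight, sigma, and step size are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def update_positions(points, normals, neighbors, sigma=0.3, step=0.3):
    """One iteration of a normal-driven position update: each point is
    pulled toward the tangent planes of its neighbors, weighted by
    normal similarity so that sharp edges are not smeared."""
    new_points = points.copy()
    for i, nbrs in enumerate(neighbors):
        delta = np.zeros(3)
        w_sum = 0.0
        for j in nbrs:
            # Neighbors with similar normals contribute more.
            w = np.exp(-np.sum((normals[i] - normals[j]) ** 2) / sigma ** 2)
            # Signed distance of p_i to neighbor j's tangent plane.
            delta += w * np.dot(points[j] - points[i], normals[j]) * normals[j]
            w_sum += w
        if w_sum > 0:
            new_points[i] += step * delta / w_sum
    return new_points
```

Iterating this update moves points onto the piecewise surfaces implied by the estimated normals; the actual methods compared here each use their own variant of this idea.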
Figures 9 and 10 show two point cloud models corrupted
with heavy, synthetic noise. The results demonstrate that
our method performs better than the state of the art ap-
proaches in terms of sharp feature preservation and non-
feature smoothness. Figures 11, 12, 13, and 14 show the
methods applied to a variety of real scanned point cloud
models. Our approach outperforms other methods in terms
of the quality of the estimated normals. We demonstrate
our technique on point clouds with more complicated fea-
tures. Figure 17 shows that our method produces slightly
lower normal errors than [12]. Figures 17 (f) and (g) show our method with different parameters, which lead to less and more sharpened versions of the input, respectively. We also show
(a) [12] (b) Ours (c) [12] (d) Ours
(e) [12] (f) Ours (g) Ours
Fig. 17. Normal estimation results on David (a,b), a female statue (c,d)
and monkeys (e,f,g). The mean square angular errors of (a-g) are, respectively (×10⁻²): 10.684, 10.636, 9.534, 9.423, 5.004, 4.853 and 4.893. (b,d,f) used smaller k_non and k_local, and (g) used the default k_non and k_local.
some results using [24], [55], which do not preserve sharp
features (Figure 16). In addition, we show some filtering results by [56], which can preserve sharp features to some extent but introduces obvious outliers on surfaces (Figure 18). Our method can also successfully handle larger noise (Figures 9 and 10), which we found to be difficult for [56].
(a) [56] (b) [56]
Fig. 18. Filtering results by [56]. The input noise is 0.15% for (a) and
0.1% for (b). Red circles indicate outliers.
5.2 Point Cloud Upsampling
As described in Section 5.1, the point cloud filtering also
consists of a two-step procedure: normal estimation and
point update. However, unlike mesh shapes, point cloud
models often need to be resampled to enhance point density
after filtering operations have been applied.
We apply the edge-aware point set resampling technique
[12] to all the results after point cloud filtering and contrast
the different upsampling results. For fair comparisons, we
upsample the filtered point clouds of each model to reach
a similar number of points. Figures 9 to 14 display various
upsampling results on state of the art normal estimation
methods and different point cloud models. The figures show
that the upsampling results on our filtered point clouds are
substantially better than those filtered by other methods in
preserving sharp features. Bilateral normal smoothing [12] usually produces good results, but it sometimes blurs edges with low dihedral angles.
(a) [57] (b) [58] (c) [59] (d) [57] (e) [58] (f) [59]
(g) [57] (h) [58] (i) [59]
Fig. 19. Mesh denoising results of [57], [58], [59]. Only close-up views
are shown to highlight the differences.
5.3 Surface Reconstruction
One common application for point cloud models is to recon-
struct surfaces from the upsampled point clouds in Section
5.2 before use in other applications. Here, we select the
edge-aware surface reconstruction technique, RIMLS [13].
For fair comparisons, we use the same parameters for all
the upsampled point clouds of each model.
Figures 9 to 14 show a variety of surface reconstruction
results on different point cloud models. The comparison
results demonstrate that the RIMLS technique applied over our method's output produces the best surface reconstruction results in terms of sharp feature preservation.
5.4 Mesh Denoising
Many state of the art mesh denoising methods involve
a two-step procedure which first estimates normals and
then updates vertex positions. We selected several of these
methods [8], [14], [15] for comparisons in Figure 15. Note
that [14] provides both a local and global solution, and
we provide comparisons for both. We also compared our
method with other mesh denoising techniques [57], [58],
[59]. Consistent with Figure 15, the corresponding blown-up windows of these three methods are shown in Figure 19.
When the noise level is high, many of these methods
produce flipped face normals. For the Bunny model (Figure
15), which involves frequent flipped triangles, we utilize
the technique in [54] to estimate a starting mesh from the
original noisy mesh input for all involved methods.
The comparison results show that our method outper-
forms the selected state of the art mesh denoising methods
in terms of sharp feature preservation. Similar to the above analysis, this is because the other methods are mostly local techniques, while our method takes into account the information of similar structures (i.e., more useful information). Specifically, [8], [14], [15], [58], [59] are local methods ([15] and the global mode of [14] are still based on local information). [57] does not take sharp feature information into account and thus cannot preserve sharp features well.
5.5 Geometric Texture Removal
We also successfully applied our method to geometric de-texturing, whose task is to remove features of different scales [16]. Our normal estimation algorithm is feature-aware in the above applications because each matrix consists only of similar local isotropic structures. On the other hand, with larger values of θ_th, the constructed matrix can include local anisotropic structures, and the low rank matrix approximation result becomes smoother, thus smoothing anisotropic surfaces into isotropic ones.
Figure 20 shows comparisons of different methods
that demonstrate that our method outperforms other ap-
proaches. Note that [16] is specifically designed for geomet-
ric texture removal. However, that method cannot preserve
sharp edges well. Figure 21 shows the results of removing
different scales of geometric features on a mesh. We pro-
duced Figure 21 (d) by applying the pre-filtering technique
[54] in advance, since the vertex update algorithm [14] could
generate frequent flipped triangles when dealing with such
large and steep geometric features. As an alternative, our
normal estimation method can be combined with the vertex
update in [16] to handle such challenging mesh models.
Figure 22 shows the geometric texture removal on two
different point clouds, which are particularly challenging
due to a lack of topology.
5.6 Timings
Table 2 summarizes the timings of different normal estima-
tion methods on several point clouds. While our method
(a) Input (b) Laplacian (c) [14] (local) (d) [60] (e) [16] (f) Ours
Fig. 20. Geometric texture removal results of the Bunny and Cube. Please refer to the zoomed rectangular windows.
(a) Input (b) Small texture removal (c) Medium texture removal (d) Large texture removal
Fig. 21. Different scales of geometric texture removal results of the Circular model.
(a) Input (b) Ours (c) Input (d) Ours
Fig. 22. Geometric texture removal results of the Turtle point cloud and
embossed point cloud. We render point set surfaces of each point cloud
for visualization.
produces high quality output, the algorithm takes a long time to run due to the SVD computed for each normal estimation. Therefore, our method is more suitable for offline geometry processing. However, it is possible to accelerate our method using specialized SVD algorithms, such as the randomized SVD (RSVD) algorithm [61], as shown in Table 2. In addition, many parts of the algorithm could benefit from parallelization.
TABLE 2
Timing statistics for different normal estimation techniques over point clouds (in seconds).

Methods          [19]   [17]   [12]   [18]   Ours   Ours (RSVD)
Fig. 9 (#6146)   0.57   141.5  0.48   8      95.6   65.1
Fig. 13 (#100k)  18.7   2204   17.2   115    3147   2458
Fig. 12 (#127k)  10.8   3769   12.5   141    3874   2856
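The RSVD acceleration can be illustrated with a minimal sketch of the randomized SVD idea of [61] (our own implementation; the oversampling and power-iteration parameters are illustrative choices, not the paper's settings):

```python
import numpy as np

def randomized_svd(A, rank, n_oversamples=10, n_iter=2, seed=0):
    """Truncated SVD via random projection (in the style of [61]).
    Much cheaper than a full SVD when rank << min(A.shape)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + n_oversamples, n)
    # Sample the range of A with a Gaussian test matrix.
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k)))
    # Power iterations sharpen the spectrum for slowly decaying tails.
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(A @ (A.T @ Q))
    # Full SVD of the small projected matrix B = Q^T A.
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
```

Since each of our matrices stacks similar structures and is therefore close to low rank, only the leading singular components matter, which is exactly the regime where such a truncated decomposition pays off.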
6 Conclusion
In this paper, we have presented an approach consisting
of two steps: normal estimation and position update. Our
method can handle both mesh shapes and point cloud
models. We also show various geometry processing appli-
cations that benefit from our approach directly or indirectly.
The extensive experiments demonstrate that our method
performs substantially better than state of the art techniques,
in terms of both visual quality and accuracy.
While our method works well, speed is an issue when online processing is required. In addition, though we
mitigate issues associated with the point distribution in the
position update procedure (i.e., gaps near edges), the point
distribution could still be improved. One way to do so is
to re-distribute points after our position update through a
“repulsion force” from each point to its neighbors. We could
potentially accomplish this effect by adding this repulsion
force directly to Eq. (11).
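The repulsion idea sketched above could look roughly as follows (a hypothetical illustration only, since Eq. (11) is not reproduced here; the Gaussian falloff, sigma, and step size are our own choices):

```python
import numpy as np

def repulse(points, neighbors, sigma=0.1, step=0.1):
    """Push each point away from overly close neighbors to even out
    the point distribution (the 'repulsion force' discussed above)."""
    new_points = points.copy()
    for i, nbrs in enumerate(neighbors):
        force = np.zeros(3)
        for j in nbrs:
            d = points[i] - points[j]
            dist = np.linalg.norm(d)
            if dist > 1e-12:
                # Closer neighbors push harder.
                force += np.exp(-(dist / sigma) ** 2) * d / dist
        new_points[i] += step * force
    return new_points
```

In practice the force would be projected onto each point's tangent plane so that redistribution does not pull points off the underlying surface.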
Acknowledgments
Xuequan Lu is supported in part by Deakin University internal grant (CY01-251301-F003-PJ03906-PG00447) and industry grant (PJ06625). Ying He is supported by AcRF 20/20.
References
[1] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color
images,” in Sixth International Conference on Computer Vision (IEEE
Cat. No.98CH36271), Jan 1998, pp. 839–846.
[2] K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans-
actions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6,
pp. 1397–1409, June 2013.
[3] S. Gu, L. Zhang, W. Zuo, and X. Feng, “Weighted nuclear
norm minimization with application to image denoising,” in
Proceedings of the 2014 IEEE Conference on Computer Vision and
Pattern Recognition, ser. CVPR ’14. Washington, DC, USA:
IEEE Computer Society, 2014, pp. 2862–2869. [Online]. Available:
[4] S. Gu, Q. Xie, D. Meng, W. Zuo, X. Feng, and L. Zhang,
“Weighted nuclear norm minimization and its applications
to low level vision,” International Journal of Computer Vision,
vol. 121, no. 2, pp. 183–208, Jan 2017. [Online]. Available:
[5] S. Wu, P. Bertholet, H. Huang, D. Cohen-Or, M. Gong, and
M. Zwicker, “Structure-aware data consolidation,” IEEE Transac-
tions on Pattern Analysis and Machine Intelligence, vol. 40, no. 10,
pp. 2529–2537, Oct 2018.
[6] W. Yifan, S. Wu, H. Huang, D. Cohen-Or, and O. Sorkine-Hornung,
“Patch-based progressive 3d point set upsampling,” 2018.
[7] L. Yu, X. Li, C.-W. Fu, D. Cohen-Or, and P.-A. Heng, “Ec-net:
an edge-aware point set consolidation network,” in The European
Conference on Computer Vision (ECCV), September 2018.
[8] X. Sun, P. Rosin, R. Martin, and F. Langbein, “Fast and effective
feature-preserving mesh denoising,” IEEE Transactions on Visual-
ization and Computer Graphics, vol. 13, no. 5, pp. 925–938, Sept 2007.
[9] H. Avron, A. Sharf, C. Greif, and D. Cohen-Or, “L1-sparse
reconstruction of sharp point set surfaces,” ACM Trans. Graph.,
vol. 29, no. 5, pp. 135:1–135:12, Nov. 2010. [Online]. Available:
[10] Y. Sun, S. Schaefer, and W. Wang, “Denoising point sets
via l0 minimization,” Computer Aided Geometric Design,
vol. 35-36, pp. 2 – 15, 2015, geometric Modeling and
Processing 2015. [Online]. Available: http://www.sciencedirect.
[11] X. Lu, S. Wu, H. Chen, S. K. Yeung, W. Chen, and M. Zwicker,
“Gpf: Gmm-inspired feature-preserving point set filtering,” IEEE
Transactions on Visualization and Computer Graphics, vol. PP, no. 99,
pp. 1–1, 2017.
[12] H. Huang, S. Wu, M. Gong, D. Cohen-Or, U. Ascher, and
H. R. Zhang, “Edge-aware point set resampling,” ACM Trans.
Graph., vol. 32, no. 1, pp. 9:1–9:12, Feb. 2013. [Online]. Available:
[13] A. C. Öztireli, G. Guennebaud, and M. Gross, “Feature preserving
point set surfaces based on non-linear kernel regression,”
Computer Graphics Forum, vol. 28, no. 2, pp. 493–501, 2009. [Online].
[14] Y. Zheng, H. Fu, O.-C. Au, and C.-L. Tai, “Bilateral normal fil-
tering for mesh denoising,” IEEE Transactions on Visualization and
Computer Graphics, vol. 17, no. 10, pp. 1521–1530, Oct 2011.
[15] H. Zhang, C. Wu, J. Zhang, and J. Deng, “Variational mesh
denoising using total variation and piecewise constant function
space,” Visualization and Computer Graphics, IEEE Transactions on,
vol. 21, no. 7, pp. 873–886, July 2015.
[16] P.-S. Wang, X.-M. Fu, Y. Liu, X. Tong, S.-L. Liu, and B. Guo,
“Rolling guidance normal filter for geometric processing,” ACM
Trans. Graph., vol. 34, no. 6, pp. 173:1–173:9, Oct. 2015. [Online].
[17] A. Boulch and R. Marlet, “Fast and robust normal estimation
for point clouds with sharp features,” Comput. Graph. Forum,
vol. 31, no. 5, pp. 1765–1774, Aug. 2012. [Online]. Available:
[18] A. Boulch and R. Marlet, “Deep learning for robust normal
estimation in unstructured point clouds,” Computer Graphics
Forum, vol. 35, no. 5, pp. 281–290, 2016. [Online]. Available:
[19] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle,
“Surface reconstruction from unorganized points,” SIGGRAPH
Comput. Graph., vol. 26, no. 2, pp. 71–78, Jul. 1992. [Online].
[20] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, and
C. T. Silva, “Point set surfaces,” in Proceedings of the Conference
on Visualization ’01, ser. VIS ’01. Washington, DC, USA:
IEEE Computer Society, 2001, pp. 21–28. [Online]. Available:
[21] M. Pauly, M. Gross, and L. P. Kobbelt, “Efficient simplification
of point-sampled surfaces,” in Proceedings of the Conference
on Visualization ’02, ser. VIS ’02. Washington, DC, USA:
IEEE Computer Society, 2002, pp. 163–170. [Online]. Available:
[22] N. J. Mitra and A. Nguyen, “Estimating surface normals in
noisy point cloud data,” in Proceedings of the Nineteenth Annual
Symposium on Computational Geometry, ser. SCG ’03. New
York, NY, USA: ACM, 2003, pp. 322–328. [Online]. Available:
[23] C. Lange and K. Polthier, “Anisotropic smoothing of point
sets,” Computer Aided Geometric Design, vol. 22, no. 7,
pp. 680 – 692, 2005, geometric Modelling and Differential
Geometry. [Online]. Available:
[24] H. Huang, D. Li, H. Zhang, U. Ascher, and D. Cohen-
Or, “Consolidation of unorganized point clouds for surface
reconstruction,” ACM Trans. Graph., vol. 28, no. 5, pp. 176:1–
176:7, Dec. 2009. [Online]. Available:
[25] T. K. Dey and S. Goswami, “Provable surface reconstruction
from noisy samples,” in Proceedings of the Twentieth Annual
Symposium on Computational Geometry, ser. SCG ’04. New
York, NY, USA: ACM, 2004, pp. 330–339. [Online]. Available:
[26] P. Alliez, D. Cohen-Steiner, Y. Tong, and M. Desbrun, “Voronoi-
based variational reconstruction of unoriented point sets,” in
Proceedings of the Fifth Eurographics Symposium on Geometry
Processing, ser. SGP ’07. Aire-la-Ville, Switzerland, Switzerland:
Eurographics Association, 2007, pp. 39–48. [Online]. Available:
[27] B. Li, R. Schnabel, R. Klein, Z. Cheng, G. Dang, and
S. Jin, “Robust normal estimation for point clouds with sharp
features,” Computers & Graphics, vol. 34, no. 2, pp. 94 –
106, 2010. [Online]. Available:
[28] J. Zhang, J. Cao, X. Liu, J. Wang, J. Liu, and X. Shi, “Point cloud
normal estimation via low-rank subspace clustering,” Computers
& Graphics, vol. 37, no. 6, pp. 697 – 706, 2013, shape Modeling
International (SMI) Conference 2013. [Online]. Available: http://
[29] X. Liu, J. Zhang, J. Cao, B. Li, and L. Liu, “Quality point
cloud normal estimation by guided least squares representation,”
Computers & Graphics, vol. 51, no. Supplement C, pp.
106 – 116, 2015, international Conference Shape Modeling
International. [Online]. Available: http://www.sciencedirect.
[30] J. Zhang, J. Cao, X. Liu, C. He, B. Li, and L. Liu, “Multi-normal
estimation via pair consistency voting,” IEEE Transactions on Visu-
alization and Computer Graphics, pp. 1–1, 2018.
[31] E. Mattei and A. Castrodad, “Point cloud denoising via moving
rpca,” Computer Graphics Forum, vol. 36, no. 8, pp. 123–137, 2017.
[Online]. Available:
[32] K.-W. Lee and W.-P. Wang, “Feature-preserving mesh denoising
via bilateral normal filtering,” in Proc. of Int’l Conf. on Computer
Aided Design and Computer Graphics 2005, Dec 2005.
[33] C. C. L. Wang, “Bilateral recovering of sharp edges on feature-
insensitive sampled meshes,” Visualization and Computer Graphics,
IEEE Transactions on, vol. 12, no. 4, pp. 629–639, July 2006.
[34] T. R. Jones, F. Durand, and M. Desbrun, “Non-iterative,
feature-preserving mesh smoothing,” ACM Trans. Graph., vol. 22,
no. 3, pp. 943–949, Jul. 2003. [Online]. Available: http:
[35] S. Fleishman, I. Drori, and D. Cohen-Or, “Bilateral mesh
denoising,” ACM Trans. Graph., vol. 22, no. 3, pp. 950–953, Jul.
2003. [Online]. Available:
[36] H. Yagou, Y. Ohtake, and A. Belyaev, “Mesh smoothing via mean
and median filtering applied to face normals,” in Geometric Model-
ing and Processing, 2002. Proceedings, 2002, pp. 124–131.
[37] H. Yagou, Y. Ohtake, and A. Belyaev, “Mesh denoising via it-
erative alpha-trimming and nonlinear diffusion of normals with
automatic thresholding,” in Computer Graphics International, 2003.
Proceedings, July 2003, pp. 28–33.
[38] Y. Shen and K. Barner, “Fuzzy vector median-based surface
smoothing,” IEEE Transactions on Visualization and Computer Graph-
ics, vol. 10, no. 3, pp. 252–265, May 2004.
[39] X. Sun, P. L. Rosin, R. R. Martin, and F. C. Langbein, “Random
walks for feature-preserving mesh denoising,” Computer Aided
Geometric Design, vol. 25, no. 7, pp. 437 – 456, 2008,
solid and Physical Modeling Selected papers from the Solid
and Physical Modeling and Applications Symposium 2007
(SPM 2007) Solid and Physical Modeling and Applications
Symposium 2007. [Online]. Available: http://www.sciencedirect.
[40] J. Solomon, K. Crane, A. Butscher, and C. Wojtan, “A
general framework for bilateral and mean shift filtering,”
CoRR, vol. abs/1405.4734, 2014. [Online]. Available: http:
[41] W. Zhang, B. Deng, J. Zhang, S. Bouaziz, and L. Liu,
“Guided mesh normal filtering,” Comput. Graph. Forum,
vol. 34, no. 7, pp. 23–34, Oct. 2015. [Online]. Available:
[42] X. Lu, W. Chen, and S. Schaefer, “Robust mesh denoising via
vertex pre-filtering and l1-median normal filtering,” Computer
Aided Geometric Design, vol. 54, no. Supplement C, pp. 49
– 60, 2017. [Online]. Available:
[43] S. K. Yadav, U. Reitebuch, and K. Polthier, “Mesh denoising
based on normal voting tensor and binary optimization,” IEEE
Transactions on Visualization and Computer Graphics, vol. PP, no. 99,
pp. 1–1, 2017.
[44] P.-S. Wang, Y. Liu, and X. Tong, “Mesh denoising via
cascaded normal regression,” ACM Trans. Graph., vol. 35,
no. 6, pp. 232:1–232:12, Nov. 2016. [Online]. Available:
[45] Q. Zheng, A. Sharf, G. Wan, Y. Li, N. J. Mitra, D. Cohen-Or,
and B. Chen, “Non-local scan consolidation for 3d urban scenes,”
ACM Trans. Graph., vol. 29, no. 4, pp. 94:1–94:9, Jul. 2010. [Online].
[46] J. Digne, “Similarity based filtering of point clouds,” in 2012
IEEE Computer Society Conference on Computer Vision and Pattern
Recognition Workshops, June 2012, pp. 73–79.
[47] J. Digne, S. Valette, and R. Chaine, “Sparse geometric represen-
tation through local shape probing,” IEEE Transactions on Visual-
ization and Computer Graphics, vol. 24, no. 7, pp. 2238–2250, July
[48] E. J. Candès and B. Recht, “Exact matrix completion via
convex optimization,” Foundations of Computational Mathematics,
vol. 9, no. 6, p. 717, Apr 2009. [Online]. Available: https:
[49] J.-F. Cai, E. J. Candès, and Z. Shen, “A singular value thresholding
algorithm for matrix completion,” SIAM J. on Optimization,
vol. 20, no. 4, pp. 1956–1982, Mar. 2010. [Online]. Available:
[50] J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma, “Robust principal
component analysis: Exact recovery of corrupted low-rank matri-
ces via convex optimization,” in Advances in Neural Information
Processing Systems 22, Y. Bengio, D. Schuurmans, J. D. Lafferty,
C. K. I. Williams, and A. Culotta, Eds. Curran Associates, Inc.,
2009, pp. 2080–2088.
[51] G. Liu, Z. Lin, and Y. Yu, “Robust subspace segmentation by
low-rank representation,” in Proceedings of the 27th International
Conference on International Conference on Machine Learning,
ser. ICML’10. USA: Omnipress, 2010, pp. 663–670. [Online].
[52] Z. Zhang, A. Ganesh, X. Liang, and Y. Ma, “Tilt: Transform
invariant low-rank textures,” International Journal of Computer
Vision, vol. 99, no. 1, pp. 1–24, Aug 2012. [Online]. Available:
[53] T. P. Wu, S. K. Yeung, J. Jia, C. K. Tang, and G. Medioni, “A closed-
form solution to tensor voting: Theory and applications,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 34,
no. 8, pp. 1482–1495, Aug 2012.
[54] X. Lu, Z. Deng, and W. Chen, “A robust scheme for feature-
preserving mesh denoising,” IEEE Trans. Vis. Comput. Graph.,
vol. 22, no. 3, pp. 1181–1194, 2016.
[55] R. Preiner, O. Mattausch, M. Arikan, R. Pajarola, and M. Wimmer,
“Continuous projection for fast l1 reconstruction,” ACM Trans.
Graph., vol. 33, no. 4, pp. 47:1–47:13, Jul. 2014. [Online]. Available:
[56] S. K. Yadav, U. Reitebuch, M. Skrodzki, E. Zimmermann, and
K. Polthier, “Constraint-based point set denoising using normal
voting tensor and restricted quadratic error metrics,” Computers &
Graphics, vol. 74, pp. 234 – 243, 2018. [Online]. Available: http://
[57] X. Li, L. Zhu, C.-W. Fu, and P.-A. Heng, “Non-local low-
rank normal filtering for mesh denoising,” Computer Graphics
Forum, vol. 37, no. 7, pp. 155–166, 2018. [Online]. Available:
[58] W. Pan, X. Lu, Y. Gong, W. Tang, J. Liu, Y. He, and G. Qiu, “HLO:
half-kernel laplacian operator for surface smoothing,” Computer
Aided Design, 2019.
[59] S. K. Yadav, U. Reitebuch, and K. Polthier, “Robust and high
fidelity mesh denoising,” IEEE Transactions on Visualization and
Computer Graphics, vol. 25, no. 6, pp. 2304–2310, June 2019.
[60] L. He and S. Schaefer, “Mesh denoising via l0 minimization,”
ACM Trans. Graph., vol. 32, no. 4, pp. 64:1–64:8, Jul. 2013. [Online].
[61] N. Halko, P. G. Martinsson, and J. A. Tropp, “Finding structure
with randomness: Probabilistic algorithms for constructing
approximate matrix decompositions,” SIAM Review, vol. 53, no. 2,
pp. 217–288, 2011. [Online]. Available:
Xuequan Lu is a Lecturer (Assistant Professor)
at Deakin University, Australia. He spent more
than two years as a Research Fellow in Sin-
gapore. Prior to that, he earned his Ph.D. at
Zhejiang University (China) in June 2016. His
research interests mainly fall into the category of
visual computing, for example, geometry model-
ing, processing and analysis, 2D data process-
ing and analysis. More information can be found
Scott Schaefer is a Professor of Computer
Science at Texas A&M University. He re-
ceived a bachelor’s degree in Computer Sci-
ence/Mathematics from Trinity University in 2000
and an M.S. and Ph.D. in Computer Science
from Rice University in 2003 and 2006 respec-
tively. His research interests include graphics,
geometry processing, curve and surface repre-
sentations, and barycentric coordinates. Scott
received the Günter Enderle Award in 2011 and
an NSF CAREER Award in 2012.
Jun Luo received his BS and MS degrees in
Electrical Engineering from Tsinghua University,
China, and the Ph.D. degree in Computer Sci-
ence from EPFL (Swiss Federal Institute of Tech-
nology in Lausanne), Lausanne, Switzerland. He
is currently an Associate Professor at Nanyang
Technological University in Singapore. His re-
search interests include mobile and pervasive
computing, wireless networking, applied opera-
tions research, as well as network security.
Lizhuang Ma received the Ph.D. degree from
the Zhejiang University, Hangzhou, China. He
was the recipient of the national science fund
for distinguished young scholars from NSFC.
He is currently a Distinguished Professor and
the Head of the Digital Media & Computer Vi-
sion Laboratory, Shanghai Jiao Tong University,
Shanghai, China. His research interests include
digital media technology, vision, graphics, etc.
Ying He is currently an associate professor at
School of Computer Science and Engineering,
Nanyang Technological University, Singapore.
He received the BS and MS degrees in electri-
cal engineering from Tsinghua University, China,
and the PhD degree in computer science from
Stony Brook University, USA. His research inter-
ests fall into the general areas of visual com-
puting and he is particularly interested in the
problems which require geometric analysis and
... Lu et al. [76] extend the non-local method to the normal field and proposed a robust normal estimation method for point clouds using a low-rank matrix approximation algorithm. They defined a local isotropic structure as a subset of points that are on the same isotropic surface with the representative normal. ...
... Instead of packing similar patches in the normal field [76], Chen et al. [74] devised a multi-patch collaborative point cloud denoising method in the surface height field. ...
... However, the HMP requires a high density of point cloud. Inspired by [76] and [74], Zhou et al. [77] projected the neighboring points of each point onto its normal, called normal height projection. They developed a structure-aware descriptor called projective height vector to capture the local height variations by normal height projection. ...
Full-text available
Over the past decade, we have witnessed an enormous amount of research effort dedicated to the design of point cloud denoising techniques. In this article, we first provide a comprehensive survey on state-of-the-art denoising solutions, which are mainly categorized into three classes: filter-based, optimization-based, and deep learning-based techniques. Methods of each class are analyzed and discussed in detail. This is done using a benchmark on different denoising models, taking into account different aspects of denoising challenges. We also review two kinds of quality assessment methods designed for evaluating denoising quality. A comprehensive comparison is performed to cover several popular or state-of-the-art methods, together with insightful observations. Finally, we discuss open challenges and future research directions in identifying new point cloud denoising strategies.
... For classical methods, MLS-based [13,14,15] and LOPbased [16,17,18] methods usually hold the local smoothness assumption, and thus are less able to preserve sharp features well. Sparse and low-rank methods [19,20,21,22,23,24] can preserve sharp features well, but may over-sharpen smoothly curved features. For deep learning-based methods [25,26,11,12], they 30 crucially depend on expensive training over massive datasets, and are often impaired when the data to be denoised deviates significantly from the training set. ...
... Third, most of existing point updating methods, which aim to reposition each 45 point to match estimated normal by analyzing the geometry structure of local neighborhood, are not good at restoring feature points, such as edge and corner points. Compared with the normal estimation in the point cloud denoising, point updating has received sparse treatment so far [21,24,31,34,35,36,37]. ...
... Nonlocal self-similarity (NSS) prior has been shown to produce promising 55 results with regard to denoising and feature preserving in point cloud denoising [20,22,23,24], especially in image denoising and restoration [39,40,41,42,43,44]. Generally, they first cluster similar patches to form patch groups, and then apply structural sparse representation to achieve promising results for each patch group [47]. ...
Full-text available
Point cloud denoising is a crucial and fundamental step in geometry processing, which has achieved significant progress in the last two decades. Denoising real-world noisy point clouds is a very challenging problem since it is hard to describe the complex real-world noise by simple distributions such as Gaussian distribution. Furthermore, existing methods may suffer from performance degradation when dealing with real-world noisy point clouds with complex structures, which contain not only sharp features (sharp edges, sharp corners, etc.) but also smooth features, fine features, etc. To solve the above-mentioned problems, we propose a novel structure-aware denoising approach by exploiting the prior information in both external clean point clouds and the given noisy point cloud. We first group nonlocal self-similarity (NSS) patches from a set of external clean point clouds. Then, we employ the Gaussian Mixture Model (GMM) learning algorithm to learn external NSS priors over patch groups. Next, the internal priors are learned from the given noisy point cloud in the same way to refine the prior model. We integrate both the learned external and internal priors into a set of orthogonal dictionaries to efficiently estimate point normals. Finally, we propose a feature-aware point updating method through adaptive neighborhood selection to reposition points to match the estimated normals. Extensive experiments show that our approach achieves favorable comprehensive performance compared with many popular or state-of-the-art methods in terms of both objective and visual perception. The source code can be found at
... Principal Component Analysis (PCA) [10] is used for the initial estimation of normals. Secondly, we update the point positions in a local manner by reformulating an objective function consisting of an edge-aware data term and a repulsion term inspired by [23,25]. The two terms account for preserving geometric features and point distribution, respectively. ...
... GPF [25] incorporated normal information to Gaussian Mixture Model (GMM), which included two terms and performed well in preserving sharp features. A robust normal estimation method was proposed in [23] for both point clouds and meshes with a low-rank matrix approximation algorithm, where an application of point cloud filtering was demonstrated. To keep the geometry features, [19] first filtered the normals by defining discrete operators on point clouds, and then present a bi-tensor voting scheme for the feature detection step. ...
... Note that if the input point cloud only contains positional information, PCA is used to compute the initial normals. The point positions are then updated in a local manner with the bilaterally filtered normals [23]. We also add a repulsion term [23] to ensure a more uniform distribution for filtered points. ...
As a popular representation of 3D data, point clouds may contain noise and need to be filtered before use. Existing point cloud filtering methods either cannot preserve sharp features or result in an uneven point distribution in the filtered output. To address these problems, this paper introduces a point cloud filtering method that considers both point distribution and feature preservation during filtering. The key idea is to incorporate a repulsion term with a data term in energy minimization. The repulsion term is responsible for the point distribution, while the data term approximates the noisy surfaces while preserving geometric features. This method is capable of handling models with fine-scale features and sharp features. Extensive experiments show that our method yields better results with a more uniform point distribution ($5.8\times10^{-5}$ Chamfer Distance on average) in seconds.
... Under the assumptions that: i) surfaces are commonly composed of piecewise flat patches, and ii) geometric features are sparsely distributed over the entire shape, sparsity-based methods [38], [39], [40] show impressive results for those CAD-like models, especially in sharp feature preservation. Low-rank based methods [40], [41], [42], [43] explore the nonlocal geometric similarity to generate better normal fields on 3D surfaces. Some other methods, like Hough Transform [44], also yield pleasing results. ...
... where $n_i$ is the normal of $p_i$, $w_\sigma(n_i, n_j) = \exp(-\|n_i - n_j\|^2 / \sigma^2)$ is a weight function, $\lambda = 0.5$ is a trade-off parameter, and $\gamma_i$ is the step size, set to $\frac{1}{3|N_i|}$ by default. To prevent points from accumulating around edges, we keep the neighboring information unchanged in all iterations [42]. The iteration number is set to 20 in our experiment. ...
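The weight above slots into a normal-driven point update. The snippet does not give the full update rule, so the sketch below uses one common form as an assumption: a data term that projects each point towards its neighbours' tangent planes, weighted by $w_\sigma$, plus a unit-vector repulsion term with trade-off $\lambda = 0.5$, step size $\gamma_i = 1/(3|N_i|)$, fixed neighbour lists, and 20 iterations, as in the snippet.

```python
import numpy as np

def update_points(X, N, nbrs, sigma=0.35, lam=0.5, iters=20):
    """Normal-driven point update sketch. X: (n,3) positions, N: (n,3) unit
    normals, nbrs: fixed list of neighbour index lists (unchanged across
    iterations, so points do not accumulate around edges)."""
    X = X.astype(float).copy()
    for _ in range(iters):
        Xn = X.copy()
        for i, Ni in enumerate(nbrs):
            if not Ni:
                continue
            gamma = 1.0 / (3 * len(Ni))              # step size 1 / (3|N_i|)
            data = np.zeros(X.shape[1])
            rep = np.zeros(X.shape[1])
            for j in Ni:
                w = np.exp(-np.linalg.norm(N[i] - N[j]) ** 2 / sigma ** 2)
                d = X[j] - X[i]
                data += w * np.dot(d, N[j]) * N[j]   # move onto neighbour planes
                rep -= d / (np.linalg.norm(d) + 1e-12)  # push away from neighbours
            Xn[i] = X[i] + gamma * (data + lam * rep)
        X = Xn
    return X
```

With identical normals the data term reduces to a damped Laplacian along the normal, so out-of-plane noise shrinks while the repulsion keeps the in-plane spacing from collapsing.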
Point normal, as an intrinsic geometric property of 3D objects, not only serves conventional geometric tasks such as surface consolidation and reconstruction, but also facilitates cutting-edge learning-based techniques for shape analysis and generation. In this paper, we propose a normal refinement network, called Refine-Net, to predict accurate normals for noisy point clouds. Traditional normal estimation wisdom heavily depends on priors such as surface shapes or noise distributions, while learning-based solutions settle for single types of hand-crafted features. Differently, our network is designed to refine the initial normal of each point by extracting additional information from multiple feature representations. To this end, several feature modules are developed and incorporated into Refine-Net by a novel connection module. Besides the overall network architecture of Refine-Net, we propose a new multi-scale fitting patch selection scheme for the initial normal estimation, by absorbing geometry domain knowledge. Also, Refine-Net is a generic normal estimation framework: 1) point normals obtained from other methods can be further refined, and 2) any feature module related to the surface geometric structures can be potentially integrated into the framework. Qualitative and quantitative evaluations demonstrate the clear superiority of Refine-Net over the state-of-the-arts on both synthetic and real-scanned datasets. Our code is available at
... Non-local Methods. In contrast to the previous classes, these approaches rely on the assumption that geometric statistics are (approximately) shared by certain surface patches of a 3D model, i.e. local surface denoising is conducted based on collected neighborhoods with similar geometry [36], [37], [38], [39]. However, the definition of a suitable metric as well as the regular representation of local surface structures remain challenging. ...
We present incomplete gamma kernels, a generalization of Locally Optimal Projection (LOP) operators. In particular, we reveal the relation of the classical localized $L_1$ estimator, used in the LOP operator for surface reconstruction from noisy point clouds, to the common Mean Shift framework via a novel kernel. Furthermore, we generalize this result to a whole family of kernels that are built upon the incomplete gamma function, each representing a localized $L_p$ estimator. By deriving various properties of the kernel family concerning distributional, Mean Shift induced, and other aspects such as strict positive definiteness, we obtain a deeper understanding of the operator's projection behavior. From these theoretical insights, we illustrate several applications, ranging from an improved Weighted LOP (WLOP) density weighting scheme and a more accurate Continuous LOP (CLOP) kernel approximation to the definition of a novel set of robust loss functions. These incomplete gamma losses include the Gaussian and LOP losses as special cases and can be applied to reconstruction tasks such as normal filtering. We demonstrate the effects of each application in a range of quantitative and qualitative experiments that highlight the benefits induced by our modifications.
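The localized $L_1$ estimator at the heart of LOP-type operators can be sketched as a Weiszfeld-style fixed-point iteration: each sample attracts the projected point with weight $\theta(r)/r$, where $\theta(r) = \exp(-16 r^2 / h^2)$ localizes the estimator so distant outliers have essentially no influence. The code below is only an illustration of that classical iteration, not the paper's generalized kernels.

```python
import numpy as np

def localized_l1(P, q, h=2.0, iters=50):
    """Fixed-point iteration for the localized L1 estimator: q converges to
    a robust local median of the samples P within support radius ~h.
    P: (n,d) samples, q: (d,) start position."""
    P = np.atleast_2d(np.asarray(P, float))
    q = np.asarray(q, float)
    for _ in range(iters):
        r = np.linalg.norm(P - q, axis=1) + 1e-12   # distances to samples
        a = np.exp(-16.0 * r ** 2 / h ** 2) / r     # theta(r) / r weights
        q = (a[:, None] * P).sum(0) / a.sum()       # weighted re-centering
    return q
```

Because $\theta$ decays rapidly, a sample ten units away contributes a weight that underflows to zero, so the iteration settles on the near cluster.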
... As the focus on feature preservation increased, multiple schemes were proposed that classify vertices into features and non-features using tensor voting in combination with k-means clustering [39], eigenanalysis [40] or feature descriptors [41], [42] before applying a filtering technique. Arvanitis et al. [43] proposed a coarse-to-fine mesh denoising approach that uses graph spectral processing to preserve feature normals in the denoising process. ...
In this work, we propose a novel denoising technique, the icosahedral mesh denoising network (IMD-Net), for closed genus-0 meshes. IMD-Net is a deep neural network that produces a denoised mesh in a single end-to-end pass, preserving and emphasizing natural object features in the process. A preprocessing step, exploiting the homeomorphism between a genus-0 mesh and the sphere, remeshes an irregular mesh using the regular mesh structure of a frequency-subdivided icosahedron. Enabled by gauge-equivariant convolutional layers arranged in a residual U-net, IMD-Net denoises the remeshing invariantly to global mesh transformations as well as local feature constellations and orientations, doing so with the computational complexity of a traditional conv2D kernel. The network is equipped with a carefully crafted loss function that leverages differences between the positional, normal and curvature fields of the target and noisy mesh in a numerically stable fashion. For the first time, two large shape datasets commonly used in related fields, ABC and ShapeNetCore, are introduced to evaluate mesh denoising. IMD-Net's competitiveness with existing state-of-the-art techniques is established using both metric evaluations and visual inspection of denoised models.
... Research [23,24] proposes deep learning methods that filter point clouds automatically and robustly, removing noise while preserving clear features, and achieves automatic prediction of normals. Research [25] proposed a robust normal estimation method for point clouds using a low-rank matrix approximation algorithm and provided a new filtering method for point cloud data that smooths the positions to fit the estimated normals. At present, deep learning methods for point cloud preprocessing are very common. ...
Against the background of in situ automatic ultrasonic phased-array inspection of irregular porous castings, the limited number of in situ inspection stations, the complex shape of such castings, and their extreme multireflection structural features make it necessary to identify the positioning inspection features among the multiple features to be inspected and to plan an optimal inspection path. This research addresses the recognition of porous locations and the planning of detection paths for irregular porous castings. To this end, a point cloud-based multifeature contour recognition and location algorithm is proposed to simultaneously extract and locate hole and cylindrical features from an irregular porous casting. Furthermore, a detection path planning method is put forward to search for the shortest robot detection path based on the features acquired by visual recognition and positioning. First, through calibration of the industrial robot tool coordinate system and of the camera's internal and external parameters, the "eye-in-hand" hand-eye transformation is established. Second, the robot vision system collects point cloud data of the area to be inspected and performs point cloud splicing; to ensure the accuracy of the original data, noise such as invalid points, outliers, and internal noise points is removed. On this basis, the boundary curve of the hole to be inspected is extracted, the cylindrical equation is fitted, its geometric center is calculated, and the central coordinates and axis direction of the hole contour are obtained. Finally, all detection paths are traversed through a multibranch tree to obtain the optimal path across the multiple target detection points.
The experimental results show that the positioning accuracy of the hole feature by the vision system is 0.107 mm, the aperture extraction accuracy is 0.002 mm, the cylinder fitting accuracy is 0.04 mm, and the angle between the two axes is computed to within 0.4. With varying numbers of features to be inspected, the average moving distance of the end effector can be reduced by 10.7% after path optimization. The feasibility of in situ automatic ultrasonic phased-array detection of irregular porous castings using visual positioning is verified.
... Non-local Methods. Non-local approaches (Rosman et al. 2013; Lu et al. 2018; Chen et al. 2019) are inspired by the geometric statistics which indicate that a number of surface patches sharing approximately the same geometric properties always exist within a 3D model. These methods therefore collect multiple neighborhoods with similar geometry to collaboratively denoise a local structure. ...
The captured 3D point clouds by depth cameras and 3D scanners are often corrupted by noise, so point cloud denoising is typically required for downstream applications. We observe that: (i) the scale of the local neighborhood has a significant effect on the denoising performance against different noise levels, point intensities, as well as various kinds of local details; (ii) non-iteratively evolving a noisy input to its noise-free version is non-trivial; (iii) both traditional geometric methods and learning-based methods often lose geometric features with denoising iterations, and (iv) most objects can be regarded as piece-wise smooth surfaces with a small number of features. Motivated by these observations, we propose a novel and task-specific point cloud denoising network, named RePCD-Net, which consists of four key modules: (i) a recurrent network architecture to effectively remove noise; (ii) an RNN-based multi-scale feature aggregation module to extract adaptive features in different denoising stage; (iii) a recurrent propagation layer to enhance the geometric feature perception across stages; and (iv) a feature-aware CD loss to regularize the predictions towards multi-scale geometric details. Extensive qualitative and quantitative evaluations demonstrate the effectiveness and superiority of our method over state-of-the-arts, in terms of noise removal and feature preservation.
Mesh inpainting aims to fill the holes or missing regions of observed incomplete meshes while keeping consistent with prior knowledge. Inspired by the success of low-rank models in describing similarity, we formulate the mesh inpainting problem as a low-rank matrix recovery problem and present a patch-based mesh inpainting algorithm. Normal patch covariance is adopted to describe the similarity between surface patches. By analyzing the similarity of patches, the most similar patches are packed into a matrix with a low-rank structure. An iterative diffusion strategy is first designed to recover the patch vertex normals gradually. Then, the normals are refined by low-rank approximation to keep the overall consistency, and the vertex positions are finally updated. We conduct several experiments on different 3D models to verify the proposed approach. Compared with existing algorithms, our experimental results demonstrate the superiority of our approach, both visually and quantitatively, in recovering meshes with self-similarity patterns.
Existing position based point cloud filtering methods can hardly preserve sharp geometric features. In this paper, we rethink point cloud filtering from a non-learning non-local non-normal perspective, and propose a novel position based approach for feature-preserving point cloud filtering. Unlike normal based techniques, our method does not require the normal information. The core idea is to first design a similarity metric to search the non-local similar patches of a queried local patch. We then map the non-local similar patches into a canonical space and aggregate the non-local information. The aggregated outcome (i.e. coordinate) will be inversely mapped into the original space. Our method is simple yet effective. Extensive experiments validate our method, and show that it generally outperforms position based methods (deep learning and non-learning), and generates better or comparable outcomes to normal based techniques (deep learning and non-learning).
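The search / canonical-space aggregation / inverse-map pipeline described above can be illustrated in a deliberately simplified setting: a point cloud sampled as a regular height field, with patches canonicalised by subtracting their mean height before comparison. This is only a sketch of the idea under those assumptions; the actual method handles unstructured patches and aligns them geometrically.

```python
import numpy as np

def nlm_heights(Z, half=1, h=0.3):
    """Non-local aggregation sketch on a height field Z (n x m). Each
    (2*half+1)^2 patch is compared against all others; similar patches vote
    for the centre height with Gaussian weights on descriptor distance."""
    n, m = Z.shape
    pad = np.pad(Z, half, mode='edge')
    # patch descriptors: flattened windows, mean-centred so the comparison
    # happens in a "canonical space" independent of each patch's own height
    desc = np.array([pad[i:i + 2 * half + 1, j:j + 2 * half + 1].ravel()
                     for i in range(n) for j in range(m)])
    desc -= desc.mean(axis=1, keepdims=True)
    centres = Z.ravel()
    out = np.empty_like(centres)
    for k in range(len(desc)):
        d2 = ((desc - desc[k]) ** 2).sum(1)     # descriptor distances
        w = np.exp(-d2 / h ** 2)                # similarity weights
        out[k] = (w * centres).sum() / w.sum()  # aggregate similar centres
    return out.reshape(n, m)
```

On a noisy flat sample, many patches look alike, so each centre averages over a large population of similar patches and the noise variance drops.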
In many applications, point set surfaces are acquired by 3D scanners. During this acquisition process, noise and outliers are inevitable. For a high-fidelity surface reconstruction from a noisy point set, a feature-preserving point set denoising operation has to be performed to remove noise and outliers from the input point set. To suppress these undesired components while preserving features, we introduce an anisotropic point set denoising algorithm in the normal voting tensor framework. The proposed method consists of three different stages that are iteratively applied to the input: in the first stage, noisy vertex normals, which are initially computed using principal component analysis, are processed using a vertex-based normal voting tensor and binary eigenvalue optimization. In the second stage, feature points are categorized into corners, edges, and surface patches using a weighted covariance matrix, which is computed based on the processed vertex normals. In the last stage, vertex positions are updated according to the processed vertex normals using restricted quadratic error metrics. For the vertex updates, we add different constraints to the quadratic error metric based on feature (edge and corner) and non-feature (planar) vertices. Finally, we show our method to be robust and comparable to state-of-the-art methods in several experiments.
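The eigenvalue analysis behind normal voting can be sketched with the tensor $T = \sum_j w_j\, n_j n_j^T$ over a neighbourhood's normals: one dominant eigenvalue indicates a planar patch, two an edge, three a corner. The fixed relative threshold below is an assumed simplification; the cited method instead optimises binary eigenvalues.

```python
import numpy as np

def classify_by_voting(normals, weights=None, tau=0.1):
    """Classify a neighbourhood as planar / edge / corner by counting the
    significant eigenvalues of the normal voting tensor T = sum w_j n_j n_j^T.
    tau is a relative significance threshold (assumed, for illustration)."""
    n = np.asarray(normals, float)
    w = np.ones(len(n)) if weights is None else np.asarray(weights, float)
    # stack of weighted outer products n_j n_j^T, summed into a 3x3 tensor
    T = (w[:, None, None] * n[:, :, None] * n[:, None, :]).sum(0)
    ev = np.linalg.eigvalsh(T)[::-1]           # eigenvalues, descending
    k = int((ev / ev[0] > tau).sum())          # number of significant ones
    return {1: "planar", 2: "edge", 3: "corner"}[k]
```

A patch of parallel normals yields one nonzero eigenvalue, two orthogonal normal populations yield two, and three yield three, matching the planar / edge / corner labels.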
This paper presents a simple and effective two-stage mesh denoising algorithm. In the first stage, face normal filtering is done using bilateral normal filtering in the robust statistics framework. Tukey's bi-weight function is used as the similarity function in the bilateral weighting; it is a robust estimator that stops diffusion at sharp edges, which helps retain features and remove noise from flat regions effectively. In the second stage, an edge-weighted Laplace operator is introduced to compute the differential coordinates. These differential coordinates help the algorithm produce a high-quality mesh without any face-normal flips and also make the method robust against high-intensity noise.
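The first stage can be sketched as bilateral face-normal filtering with Tukey's bi-weight as the range kernel: because the bi-weight is exactly zero beyond its cut-off, normals across a sharp edge receive zero weight and diffusion stops there. All parameter values in this sketch are illustrative, not the paper's.

```python
import numpy as np

def tukey(x, c):
    """Tukey's bi-weight: (1 - (x/c)^2)^2 for |x| < c, exactly 0 beyond c,
    so it is redescending and cuts off influence across sharp edges."""
    x = np.asarray(x, float)
    return np.where(np.abs(x) < c, (1.0 - (x / c) ** 2) ** 2, 0.0)

def filter_face_normals(N, centers, areas, sigma_s=1.0, c=0.8, iters=5):
    """Bilateral face-normal filtering sketch: area-weighted spatial Gaussian
    on centroid distance times Tukey similarity on the normal difference."""
    N = np.asarray(N, float)
    for _ in range(iters):
        out = np.empty_like(N)
        for i in range(len(N)):
            ds = np.linalg.norm(centers - centers[i], axis=1)
            dn = np.linalg.norm(N - N[i], axis=1)
            w = areas * np.exp(-ds ** 2 / (2 * sigma_s ** 2)) * tukey(dn, c)
            v = (w[:, None] * N).sum(0)
            out[i] = v / (np.linalg.norm(v) + 1e-12)
        N = out
    return N
```

On two flat strips meeting at a right-angle edge, the normal difference across the edge ($\approx \sqrt{2}$) exceeds the cut-off, so each side is smoothed independently and the crease survives.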
Point set filtering, which aims at reconstructing noise-free point sets from their corresponding noisy inputs, is a fundamental problem in 3D geometry processing. The main challenge of point set filtering is to preserve geometric features of the underlying geometry while at the same time removing the noise. State-of-the-art point set filtering methods still struggle with this issue: some are not designed to recover sharp features, and others cannot well preserve geometric features, especially fine-scale features. In this paper, we propose a novel approach for robust feature-preserving point set filtering, inspired by the Gaussian Mixture Model (GMM). Taking a noisy point set and its filtered normals as input, our method can robustly reconstruct a high-quality point set which is both noise-free and feature-preserving. Various experiments show that our approach can soundly outperform the selected state-of-the-art methods, in terms of both filtering quality and reconstruction accuracy.
We propose a robust and effective mesh denoising approach consisting of three steps: vertex pre-filtering, $L_1$-median normal filtering, and vertex updating. Given an input noisy mesh model, our method generates a high-quality model that preserves geometric features. Our approach is more robust than state-of-the-art approaches when denoising models with different levels of noise, and can handle models with irregular surface sampling.
The normals of feature points, i.e., the intersection points of multiple smooth surfaces, are ambiguous and undefined. This paper presents a unified definition of point cloud normals for feature and non-feature points, which allows feature points to possess multiple normals. This definition facilitates several subsequent operations, such as feature point extraction and point cloud filtering. We also develop a feature-preserving normal estimation method which outputs multiple normals per feature point. The core of the method is a pair consistency voting scheme. All neighbor point pairs vote for the local tangent plane. Each vote takes into consideration the fitting residuals of the pair of points and their preliminary normal consistency. Thus the pairs from the same subspace and relatively far from features dominate the voting. An adaptive strategy is designed to overcome sampling anisotropy. In addition, we introduce an error measure compatible with traditional normal estimators, and present the first benchmark for normal estimation, composed of 152 synthesized datasets with various features and sampling densities, and 288 real scans with different noise levels. Comprehensive and quantitative experiments show that our method generates faithful feature-preserving normals and outperforms previous cutting-edge normal estimation methods, including the latest deep learning based method.
We present a structure-aware technique to consolidate noisy data, which we use as a pre-process for standard clustering and dimensionality reduction. Our technique is related to mean shift, but instead of seeking density modes, it reveals and consolidates continuous high density structures such as curves and surface sheets in the underlying data while ignoring noise and outliers. We provide a theoretical analysis under some assumptions, and show that our approach significantly improves the performance of many non-linear dimensionality reduction and clustering algorithms in challenging scenarios.
We consolidate an unorganized point cloud with noise, outliers, non-uniformities, and in particular interference between close-by surface sheets, as a preprocess to surface generation, focusing on reliable normal estimation. Our algorithm includes two new developments. First, a weighted locally optimal projection operator produces a set of denoised, outlier-free and evenly distributed particles over the original dense point cloud, so as to improve the reliability of local PCA for the initial estimate of normals. Next, an iterative framework for robust normal estimation is introduced, where a priority-driven normal propagation scheme based on a new priority measure and an orientation-aware PCA work complementarily and iteratively to consolidate particle normals. The priority setting is reinforced with front stopping at thin surface features and normal flipping to enable robust handling of the close-by surface sheet problem. We demonstrate how a point cloud that is well-consolidated by our method steers conventional surface generation schemes towards a proper interpretation of the input data.