
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. X, NO. Y, SEPTEMBER 2020 1

Low Rank Matrix Approximation for 3D Geometry Filtering

Xuequan Lu, Member, IEEE, Scott Schaefer, Jun Luo, Senior Member, IEEE,

Lizhuang Ma, Member, IEEE, and Ying He, Member, IEEE

Abstract—We propose a robust normal estimation method for both point clouds and meshes using a low rank matrix approximation algorithm. First, we compute a local isotropic structure for each point and find its similar, non-local structures, which we organize into a matrix. We then show that a low rank matrix approximation algorithm can robustly estimate normals for both point clouds and meshes. Furthermore, we provide a new filtering method for point cloud data that smooths the position data to fit the estimated normals. We show applications of our method to point cloud filtering, point set upsampling, surface reconstruction, mesh denoising, and geometric texture removal. Our experiments show that our method generally achieves better results than existing methods.

Index Terms—3D Geometry filtering, Point cloud filtering, Mesh denoising, Point upsampling, Surface reconstruction, Geometric texture removal.


1 INTRODUCTION

Filtering of 2D data such as images is prevalent nowadays [1], [2], [3], [4], and the filtering and processing of 3D geometry (e.g., point clouds and meshes) has recently attracted increasing attention in 3D vision [5], [6], [7]. Normal estimation for point cloud models or mesh shapes is important since it is often the first step in a geometry processing pipeline. This estimation is often followed by a filtering process that updates the position data and removes noise [8]. A variety of computer graphics applications, such as point cloud filtering [9], [10], [11], point set upsampling [12], surface reconstruction [13], mesh denoising [8], [14], [15], and geometric texture removal [16], rely heavily on the quality of the estimated normals and the subsequent filtering of the position data.

Current state-of-the-art techniques in mesh denoising [8], [14], [15] and geometric texture removal [16] can achieve impressive results. However, these methods are still limited in their ability to recover sharp edges in challenging regions. Normal estimation for point clouds has been an active area of research in recent years [12], [17], [18], yet these methods perform suboptimally when estimating normals in noisy point clouds. Specifically, [17], [18] are less robust in the presence of considerable noise. The bilateral filter can preserve geometric features but sometimes fails due to the locality of its computations and its lack of self-adapting parameters.

Updating point positions using the estimated normals in point clouds has received sparse treatment so far [9], [10]. However, those position update approaches using the L0 or L1 norms are complex to solve and hard to implement. Moreover, they restrict each point to move only along its normal orientation, potentially leading to suboptimal results or slow convergence.

• X. Lu is with the School of Information Technology, Deakin University, Australia. E-mail: xuequan.lu@deakin.edu.au.
• J. Luo and Y. He are with the School of Computer Science and Engineering, Nanyang Technological University, Singapore. E-mails: {junluo, YHe}@ntu.edu.sg.
• S. Schaefer is with the Department of Computer Science, Texas A&M University, College Station, Texas, USA. E-mail: schaefer@cs.tamu.edu.
• L. Ma is with the Department of Computer Science, Shanghai Jiao Tong University, Shanghai, China. E-mail: ma-lz@cs.sjtu.edu.cn.

Manuscript received May 19, 2020; revised 11 August, 2020. Preprint.

To address the issues above, we propose a new normal estimation method for both meshes and point clouds and a new position update algorithm for point clouds. Our method benefits various geometry processing applications, directly or indirectly, such as point cloud filtering, point set upsampling, surface reconstruction, mesh denoising, and geometric texture removal (Figure 1). Given a point cloud or mesh as input, our method first estimates point or face normals, then updates the positions of points or vertices using the estimated normals. We observe that: (1) non-local methods can be more accurate than local techniques; (2) each local isotropic structure (Section 3.1) usually has similar structures elsewhere in a geometric shape; (3) the matrix constructed from similar structures should be low rank (Section 3.2). Motivated by these observations, we propose a novel normal estimation technique that consists of two sub-steps: (i) locating non-local similar structures and (ii) weighted nuclear norm minimization. We adopt the former to find similar structures of each local isotropic structure, and employ the latter [3] to recover low-rank matrices. We also present a fast and effective point update algorithm for point clouds that filters the point positions to better match the estimated normals. Extensive experiments and comparisons show that our method generally outperforms current methods.

The main contributions of this paper are:
• a novel normal estimation technique for both point cloud shapes and mesh models;
• a new position update algorithm for point cloud data;
• analysis of the convergence of the proposed normal estimation and point update techniques, experimentally or theoretically (see the supplementary document).


Fig. 1. Overview of our approach and its benefited applications. Our algorithms (normal estimation followed by positional update) take a point set or mesh as input and can be applied, directly or indirectly, to point set filtering, mesh denoising, mesh and point set texture removal, upsampling, and surface reconstruction.

2 RELATED WORK

In this section, we review only the research most related to this work: first, previous research on normal estimation, and then previous works that employ nuclear norm minimization or its weighted version.

2.1 Normal Estimation

Normal estimation for geometric shapes can be classified into two types: (1) normal estimation for point clouds, and (2) normal estimation for mesh shapes.

Normal estimation for point clouds. Hoppe et al. [19] estimated normals by computing the tangent plane at each data point using principal component analysis (PCA) of the local neighborhood. Later, a variety of PCA variants were proposed [20], [21], [22], [23], [24] to estimate normals. Nevertheless, the normals estimated by these techniques tend to smear sharp features. Researchers have also estimated normals using Voronoi cells or Voronoi-PCA [25], [26]. Minimizing the L1 or L0 norm can preserve sharp features, as these norms measure sparsity in the derivative of the normal field [9], [10]; yet the solutions are complex and computationally expensive. Li et al. [27] estimated normals by using robust statistics to detect the best local tangent plane for each point. Another set of techniques attempted to better estimate normals near edges and corners by point clustering in a neighborhood [28], [29]; the same authors later presented a pair consistency voting scheme that outputs multiple normals per feature point [30]. Boulch and Marlet [17] used a robust randomized Hough transform to estimate point normals. Convolutional neural networks have recently been applied to estimate normals in point clouds [18]. Such estimation methods are usually less robust for point clouds with a considerable amount of noise. Bilateral smoothing of PCA normals [12], [13] is simple and effective, but it suffers from inaccuracy due to the locality of its computations and may blur edges with small dihedral angles. Mattei et al. [31] presented a moving RPCA method for point cloud denoising inspired by sparsity. They modeled the RPCA problem locally by specifying an output rank of 2, rather than considering similar structures, and the computed normals are used only to compute similarity weights.

Normal estimation for mesh shapes. Most methods focus on estimating face normals in mesh shapes. One simple, direct way is to compute the face normal as the cross product of two edges of a triangle face. However, such normals can deviate significantly from the true normals even in the presence of small positional noise. A considerable amount of research has addressed smoothing these face normals. One approach uses the bilateral filter [14], [32], [33], inspired by the seminal works [34], [35]. Mean, median, and alpha-trimming methods [36], [37], [38] have also been used to estimate face normals. Sun et al. [8], [39] presented two different methods to filter face normals. Recently, researchers have presented filtering methods [15], [40], [41], [42], [43] based on mean shift, total variation, guided normals, the L1 median, and the normal voting tensor. Wang et al. [44] estimated face normals via cascaded normal regression.

2.2 Nonlocal Methods for Point Clouds and Nuclear Norm Minimization

Previous researchers have proposed non-local methods for point clouds. For example, Zheng et al. [45] applied non-local filtering to 3D buildings that exhibit large-scale repetitions and self-similarities. Digne presented a non-local denoising framework for unorganized point clouds by building an intrinsic descriptor [46], and recently proposed, with colleagues, a shape analysis approach based on the non-local analysis of local shape variations [47].

The nuclear norm of a matrix is defined as the sum of the absolute values of its singular values (see Eq. (4)). It has been proved that most low-rank matrices can be recovered by minimizing their nuclear norm [48]. Cai et al. [49] provided a simple solution to the low-rank matrix approximation problem by minimizing the nuclear norm. Nuclear norm minimization has been broadly employed for matrix completion [48], [49], robust principal component analysis [50], low-rank representation for subspace clustering [51], and low-rank textures [52]. Gu et al. [3], [4] presented a weighted version of nuclear norm minimization, which has been adopted in image processing applications such as image denoising, background subtraction, and image inpainting.

3 NORMAL ESTIMATION

In this section, we take point clouds, consisting of positions as well as normals, as input; we extend the method to meshes later. As in [10], [11], [12], the normals are initialized by the classical PCA method [19], which is robust and easy to use (we use the implementation in [12]). First, we present an algorithm to locate and construct non-local similar structures for each local isotropic structure of a point (Section 3.1). We then describe how to estimate normals via weighted nuclear norm minimization on the non-local similar structures (Section 3.2).

3.1 Non-local Similar Structures

Local structure. We define that each point p_i has a local structure S_i consisting of its k nearest neighbors. Locating structures similar to a specific local structure is difficult due to the irregularity of points.

Tensor voting. We assume each local structure embeds a representative normal. To compute it, we first define the tensor at a point p_i as

T_{ij} = \eta(\|p_i - p_j\|)\, \phi(\theta_{ij}, \sigma_\theta)\, n_j^T n_j, \quad (1)

where p_j (a 1×3 vector) is one of the k nearest neighbors of p_i, which we denote as j ∈ S_i, and n_j (a 1×3 vector)


(a) Local structure (b) Local isotropic structure (c) Similar structures

Fig. 2. (a) The local structures (green points) of the centered red points. (b) The local isotropic structures (green) of the corresponding red points. (c) The similar local isotropic structures of the local isotropic structures denoted by the red points; each blue or cyan point denotes its isotropic structure.

is the normal of p_j. η and φ are the weights induced by the spatial distance and the angle θ_{ij} between two neighboring normals, given by [12], [14]:

\eta(x) = e^{-(x/\sigma_p)^2}, \qquad \phi(\theta, \sigma_\theta) = e^{-\left(\frac{1-\cos\theta}{1-\cos\sigma_\theta}\right)^2}.

σ_p and σ_θ are scaling parameters, which are empirically set to twice the maximal distance between any two points among the k nearest neighbors within the local structure, and to 30°, respectively.

For each local structure S_i, we derive the accumulated tensor by aggregating all the induced tensor votes {T_{ij} | j ∈ S_i}. This final tensor encodes the local structure and provides a reliable, representative normal that is later used to compute the local isotropic structure and to locate similar structures.

T_i = \sum_{j \in S_i} T_{ij} \quad (2)

Let λ_{i1} ≥ λ_{i2} ≥ λ_{i3} be the eigenvalues of T_i with corresponding eigenvectors e_{i1}, e_{i2}, and e_{i3}. In tensor voting [53], λ_{i1} − λ_{i2} indicates surface saliency with normal direction e_{i1}; λ_{i2} − λ_{i3} indicates curve saliency with tangent orientation e_{i3}; and λ_{i3} denotes junction saliency. Therefore, we take e_{i1} as the representative normal for the local structure S_i of point p_i.
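The tensor voting of Eqs. (1) and (2) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function name is ours, angles are assumed to be measured between each neighbor normal and the current point's normal, and the absolute dot product is used to ignore normal orientation flips.

```python
import numpy as np

def representative_normal(p_i, neighbors, normals, sigma_p, sigma_theta, n_i):
    """Accumulate tensor votes (Eqs. 1-2) and return the dominant
    eigenvector e_i1 as the representative normal."""
    T = np.zeros((3, 3))
    for p_j, n_j in zip(neighbors, normals):
        eta = np.exp(-(np.linalg.norm(p_i - p_j) / sigma_p) ** 2)
        # assumption: theta_ij is the angle between n_i and n_j,
        # orientation-agnostic via the absolute dot product
        cos_t = abs(float(np.dot(n_i, n_j)))
        phi = np.exp(-((1.0 - cos_t) / (1.0 - np.cos(sigma_theta))) ** 2)
        T += eta * phi * np.outer(n_j, n_j)     # n_j^T n_j for a 1x3 row vector
    w, v = np.linalg.eigh(T)                    # eigenvalues in ascending order
    return v[:, -1]                             # eigenvector of lambda_i1
```

Because `np.linalg.eigh` returns eigenvalues in ascending order, the last column is e_{i1}; the sign of the returned vector is arbitrary, as usual for eigenvectors.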

Local isotropic structure. We assume that each local structure has a subset of points that lie on the same isotropic surface as the representative normal. We call this subset of points the local isotropic structure. Surface patches with small variation in their dihedral angles are usually considered isotropic surfaces (Figure 2(b)). To obtain a local isotropic structure S_i^{iso} from a local structure S_i, and to locate similar local isotropic structures for S_i^{iso}, we present a simple yet effective scheme. Here we again employ the function φ(θ, σ_θ) defined in Eq. (1), setting σ_θ to θ_th, where θ is the angle between two normals and θ_th is the angle threshold. Specifically, to obtain S_i^{iso}, we
• compute the angle θ between each point normal and the representative normal within the local structure S_i;
• add the current point to S_i^{iso} if φ(θ, θ_th) ≥ e^{-((1-\cos\theta_{th})/(1-\cos\theta_{th}))^2} (i.e., φ(θ, θ_th) ≥ e^{-1}).
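The membership test above can be sketched as follows. Note that φ(θ, θ_th) ≥ e^{-1} is mathematically equivalent to θ ≤ θ_th, since both sides of the Gaussian are monotone in 1 − cos θ; the sketch below (with a hypothetical function name) keeps the paper's φ-based formulation for clarity.

```python
import numpy as np

def isotropic_subset(indices, normals, rep_normal, theta_th):
    """Keep the neighbors whose normals pass the phi(theta, theta_th) >= e^-1
    test against the representative normal (Section 3.1)."""
    keep = []
    denom = 1.0 - np.cos(theta_th)
    for idx, n in zip(indices, normals):
        cos_t = float(np.clip(np.dot(n, rep_normal), -1.0, 1.0))
        phi = np.exp(-((1.0 - cos_t) / denom) ** 2)
        if phi >= np.exp(-1.0):                 # equivalent to theta <= theta_th
            keep.append(idx)
    return keep
```

The same test, applied between the representative normals of two structures, decides whether the structures themselves are similar.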

For simplicity, we will refer to "similar local isotropic structures" as "similar structures" throughout the paper, unless otherwise stated. Given an isotropic structure S_i^{iso}, we identify its non-local similar structures by computing φ(θ, θ_th) between the representative normal of each structure and that of S_i. If φ(θ, θ_th) ≥ e^{-1} (we use the same θ_th for simplicity), we define the two isotropic structures to be similar. The underlying rationale of our similarity search is as follows: the point normals in a local isotropic structure are bounded by the representative normal, indicating that these points lie on the same isotropic surface; the similar structure search is also bounded by the representative normals, implying that the similar structures lie on similar isotropic surfaces. These similar structures often overlap on the same isotropic surface, as shown in Figure 2, which shows the local structure (a), the local isotropic structure (b), and the similar structures (c). Each representative normal is computed from the entire neighborhood with different weights with respect to the current point (Eq. (1) and (2)). This indicates that the representative normal is isotropic with the current point normal, so there is no need to iteratively refine the representative normal using the local isotropic neighbors. Note that the non-local similar structures are searched in the context of isotropic surfaces rather than anisotropic surfaces (see the analysis of rotation-invariant similarity in the supplementary document).

(a) without reshaping (b) with reshaping

Fig. 3. A normal estimation comparison without and with matrix reshaping.

3.2 Weighted Nuclear Norm Minimization

For each non-local similar structure S_l^{iso} of the isotropic structure S_i^{iso} associated with the point p_i, we append the point normals of S_l^{iso} as rows to a matrix M. Note that the dimensions of this matrix are r̂ × 3, where r̂ is the number of rows and 3 is the number of columns. This matrix already has a maximal rank of 3 and is therefore a low-rank matrix, so a low-rank approximation from rank 3 or 2 down to rank 2 or 1 is less meaningful, in terms of smoothing, than one from a high rank to a low rank. To make the low-rank matrix approximation more meaningful, we reshape the matrix M to be close to a square matrix. Figure 3 illustrates a normal estimation comparison without and with matrix reshaping, and shows that the initial r̂ × 3 matrix M requires reshaping to obtain more effective smoothing results. It also shows that the reconstruction error between the initially reshaped matrix and the low-rank optimized matrix is typically greater than the error computed without reshaping, which further validates that reshaping yields more effective smoothing. As such, reshaping M is necessary.

M = \begin{pmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \\ x_4 & y_4 & z_4 \\ x_5 & y_5 & z_5 \\ x_6 & y_6 & z_6 \\ x_7 & y_7 & z_7 \\ x_8 & y_8 & z_8 \end{pmatrix} \Rightarrow Z_0 = \begin{pmatrix} x_1 & x_7 & y_5 & z_3 \\ x_2 & x_8 & y_6 & z_4 \\ x_3 & y_1 & y_7 & z_5 \\ x_4 & y_2 & y_8 & z_6 \\ x_5 & y_3 & z_1 & z_7 \\ x_6 & y_4 & z_2 & z_8 \end{pmatrix} \quad (3)

We do so by finding dimensions r and c of a new matrix Z_0 such that r̂ × 3 = r × c while minimizing |r − c|. Given that the structure in M is isotropic, removing one or more points does not significantly affect this structure. Therefore, we first find r and c that minimize |r − c| (with r ≥ c) and test whether |r − c| ≥ 6 and c = 3 are both satisfied, where 6 is an empirical value and c = 3 indicates that the reshaping failed. If so, we remove a point normal from M and solve for r and c again, repeating this process until the conditions are no longer satisfied (r is not required to be a multiple of 3). Then we simply copy the column entries of M into Z_0, filling each column of Z_0 before continuing to the next column.
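The reshaping procedure above can be sketched in NumPy. This is our own illustrative reading of the rules (the function name is hypothetical): the column-by-column copy corresponds to a Fortran-order (column-major) flatten and reshape, and the removal loop follows the paper's stated condition that |r − c| ≥ 6 and c = 3 both hold.

```python
import numpy as np

def reshape_near_square(M):
    """Reshape an r_hat x 3 normal matrix into the most square r x c matrix
    (r >= c, r*c = 3*r_hat), copying entries column by column as in Eq. (3).
    Drops one row at a time while the best factorization is still skewed
    (|r - c| >= 6) and degenerate (c == 3)."""
    def best_factors(n):
        c = int(np.sqrt(n))
        while n % c != 0:           # largest divisor of n not exceeding sqrt(n)
            c -= 1
        return n // c, c            # r >= c and r * c == n
    M = np.asarray(M, dtype=float)
    r, c = best_factors(M.shape[0] * 3)
    while M.shape[0] > 1 and (r - c >= 6 and c == 3):
        M = M[:-1]                  # remove one point normal and retry
        r, c = best_factors(M.shape[0] * 3)
    # column-major flatten of M, filled column by column into Z0
    return M.flatten(order='F').reshape((r, c), order='F')
```

For the 8 × 3 example of Eq. (3), this yields a 6 × 4 matrix whose first row is (x_1, x_7, y_5, z_3), matching the illustration.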

We take the size 8 × 3 for M as a simple example; the reshaping process is illustrated in Eq. (3). The reshaped matrix Z_0 has a size of 6 × 4 and, in general, a higher rank than M. The rank of a matrix is the number of linearly independent columns. Intuitively, the resulting matrix Z_0 should be low rank since all normals come from similar isotropic structures, each point may contribute multiple normals, and the x, y, and z values are respectively gathered in columns. In Z_0, most columns consist of coordinates from a single dimension (only x coordinates, for example). There are at most two columns involving both x and y, or both y and z (Eq. (3)), which negligibly affects the rank and the smoothing results (see the supplementary document). Experimentally, we followed the above rules to construct matrices of similar local isotropic structures for the planar and curved surfaces in Figure 2, and observed that the matrices are indeed low rank (i.e., they have a considerable number of negligible singular values). Figure 4 shows the histograms of singular values of two reshaped matrices from Figure 2, confirming the low-rank property.

We then cast the normal estimation problem as a low-rank matrix approximation problem: we attempt to recover a low-rank matrix Z from Z_0 using nuclear norm minimization. We first present some fundamental background on nuclear norm minimization and then show how we estimate normals with weighted nuclear norm minimization.

Nuclear norm. The nuclear norm of a matrix is defined as the sum of the absolute values of its singular values:

\|Z\|_* = \sum_m |\delta_m|, \quad (4)

Fig. 4. Histograms of singular values of two reshaped matrices from Figure 2. The horizontal axis denotes the singular values, and the vertical axis denotes the number of singular values falling into the corresponding ranges.

where δ_m is the m-th singular value of the matrix Z, and \|Z\|_* denotes the nuclear norm of Z.

Nuclear norm minimization. Nuclear norm minimization is frequently used to approximate a known matrix, Z_0, by a low-rank matrix, Z, while minimizing the nuclear norm of Z. Cai et al. [49] demonstrated that the low-rank matrix Z can be easily solved for by adding a Frobenius-norm data term:

\min_Z \; \alpha \|Z\|_* + \|Z_0 - Z\|_F^2, \quad (5)

where α is the weighting parameter. The minimizing matrix Z is then

Z = U \psi(S, \alpha) V^T, \quad (6)

where Z_0 = U S V^T denotes the SVD of Z_0 and S_{m,m} is the m-th diagonal element of S. ψ is the soft-thresholding function on S with parameter α, i.e., ψ(S_{m,m}, α) = max(0, S_{m,m} − α). Soft thresholding effectively clamps small singular values to 0, thus creating a low-rank approximation.
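Eq. (6) can be sketched directly in NumPy, following the paper's statement of the thresholding rule ψ(S_{m,m}, α) = max(0, S_{m,m} − α); the function name is ours.

```python
import numpy as np

def svt(Z0, alpha):
    """Singular value soft-thresholding, the closed-form solution of Eq. (5)
    as stated in Eq. (6)."""
    U, s, Vt = np.linalg.svd(Z0, full_matrices=False)
    s_shrunk = np.maximum(s - alpha, 0.0)   # psi(S_mm, alpha)
    return (U * s_shrunk) @ Vt              # U diag(s_shrunk) V^T
```

Singular values below α are clamped to zero, so the result has reduced rank while staying close to Z_0 in the Frobenius norm.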

Nuclear norm minimization treats and shrinks each singular value equally. In general, however, larger singular values should be shrunk less in order to better approximate the known matrix and preserve the major components. Weighted nuclear norm minimization addresses this issue [3].

Weighted nuclear norm minimization. The weighted nuclear norm of a matrix Z is

\|Z\|_{*,w} = \sum_m |w_m \delta_m|, \quad (7)

where w_m is the non-negative weight imposed on the m-th singular value and w = {w_m}. We can then write the low-rank matrix approximation problem as

\min_Z \; \|Z\|_{*,w} + \|Z_0 - Z\|_F^2. \quad (8)

Assuming the singular values {δ_m} are sorted in non-ascending order, the corresponding weights {w_m} should be in non-descending order. Hence, we define the weight function as a Gaussian:

w_m = \beta e^{-(2\delta_m/\delta_1)^2}, \quad (9)

where β denotes the regularization coefficient, which defaults to 1.0, and δ_1 is the first singular value after sorting {δ_m} in non-increasing order. We did not use the original weight definition in [3], since it requires the noise variance, which is unknown in normal estimation; we also found their weight determination unsuitable for normal-constructed matrices. We then solve Eq. (8) by the generalized soft-

ALGORITHM 1: Weighted nuclear norm minimization
Input: non-local similar structures of each local isotropic structure
Output: new matrices {Z}
for each local isotropic structure S_i^{iso} do
  • construct a matrix Z_0
  • compute the SVD of Z_0
  • compute the weights via Eq. (9)
  • recover Z via Eq. (10)
end

thresholding operation on the singular values, with weights [3]:

Z = U \psi(S, \{w_m\}) V^T, \quad (10)

where ψ(S_{m,m}, w_m) = max(0, S_{m,m} − w_m). Here ψ becomes the generalized soft-thresholding function by assigning weights to the singular values, and Eq. (10) becomes the weighted version of Eq. (6).
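The weighted variant of Eqs. (9) and (10) differs from plain soft thresholding only in that each singular value gets its own threshold. A minimal sketch (our function name; β defaults to 1.0 as in the text):

```python
import numpy as np

def weighted_svt(Z0, beta=1.0):
    """Weighted nuclear norm minimization step (Eqs. 9-10): large singular
    values receive small Gaussian weights and are shrunk less."""
    U, s, Vt = np.linalg.svd(Z0, full_matrices=False)
    if s[0] == 0.0:                             # zero matrix: nothing to shrink
        return Z0
    w = beta * np.exp(-(2.0 * s / s[0]) ** 2)   # Eq. (9); s is sorted descending
    return (U * np.maximum(s - w, 0.0)) @ Vt    # generalized soft thresholding
```

Since NumPy returns singular values in non-ascending order, δ_1 is simply `s[0]`, and the weights come out non-descending as Eq. (9) requires.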

Notice that the truncated SVD can also solve the low-rank matrix approximation problem. However, we found it less effective here for two reasons. First, the truncated SVD uses a fixed number K to determine the top singular values, and a suitable K is usually shape dependent. Second, the truncated SVD treats each selected singular value equally, whereas our method treats singular values differently to enable adaptivity.

3.3 Algorithm

Each point may have multiple normals in the recovered matrices {Z}, as the similar structures often overlap. After calling Algorithm 1, we first reshape {Z} back to matrices like {M} (each row of each matrix is a normal), and compute the final normal of each point by simply averaging its corresponding normals in {Z}. To achieve quality normal estimates, we iterate the non-local similar structure search (Section 3.1) and the weighted nuclear norm minimization of Algorithm 1.

Extension to mesh models. Our algorithm can easily be extended to handle mesh models. One natural way is to take the vertices and vertex normals of a mesh as the points and normals of a point cloud. However, to achieve the desired results, face normals are frequently used to update vertex positions [8], [14], [15]. Hence, we use the centers of faces and the corresponding face normals as points. Moreover, we use the mesh topology to compute neighbors in Section 3.1.

4 POSITION UPDATE

Besides normal estimation, we also present algorithms to update point or vertex positions to match the estimated normals, which is typically necessary before applying other geometry processing algorithms.

Vertex update for mesh models. We use the algorithm of [8] to update the vertices of mesh models, which minimizes the square of the dot product between the face normal and the three edges of each face.

Point update for point clouds. Compared to the vertex update for mesh models, updating point cloud positions is more difficult due to the absence of topological information. Furthermore, the local neighborhood information may vary during the position update. We propose a modification of the edge recovery algorithm in [10] that updates points in a feature-aware way by minimizing

\sum_i \sum_{j \in S_i} |(p_i - p_j) n_j^T|^2 + |(p_i - p_j) n_i^T|^2. \quad (11)

p_i and p_j are unknown, while n_i and n_j are computed by our normal estimation algorithm. Eq. (11) encodes the sum of distances to the tangent planes defined by the neighboring points {p_j} and the corresponding normals {n_j}, as well as the sum of distances to the tangent planes defined by {p_i} and {n_i}. The differences between [10] and our method are: (1) [10] utilized a least-squares form to alleviate artifacts at the intersection of two sharp edges; (2) [10] considered only the distance to the planes defined by each neighboring point and that point's corresponding normal.

We use gradient descent to solve Eq. (11), treating the point p_i and its neighboring points {p_j | j ∈ S_i} from the previous iteration as known. Here we use ball neighbors instead of k nearest neighbors to ensure the convergence of our point update. The new position of p_i is computed by

p_i' = p_i + \gamma_i \sum_{j \in S_i} (p_j - p_i)(n_j^T n_j + n_i^T n_i), \quad (12)

where p_i' is the new position and γ_i is the step size, which is set to \frac{1}{3|S_i|} to ensure convergence (see the supplementary document).
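One iteration sweep of Eq. (12) can be sketched as follows. This is a simplified illustration (our function name; neighbor lists are precomputed and held fixed, whereas the paper recomputes ball neighbors as positions change): each offset p_j − p_i is projected onto the two normal directions and the point is moved by the damped sum.

```python
import numpy as np

def update_points(points, normals, neighbors, n_iter=10):
    """Gradient-descent point update of Eq. (12); neighbors[i] holds the
    (ball-)neighbor indices of point i, gamma_i = 1 / (3 * |S_i|)."""
    P = np.asarray(points, dtype=float).copy()
    N = np.asarray(normals, dtype=float)
    for _ in range(n_iter):
        P_new = P.copy()
        for i, nbrs in enumerate(neighbors):
            if not nbrs:
                continue
            step = np.zeros(3)
            for j in nbrs:
                d = P[j] - P[i]
                # (p_j - p_i)(n_j^T n_j + n_i^T n_i): projection of the
                # offset onto the two tangent-plane normals
                step += np.dot(d, N[j]) * N[j] + np.dot(d, N[i]) * N[i]
            P_new[i] = P[i] + step / (3.0 * len(nbrs))
        P = P_new                               # synchronous update per sweep
    return P
```

With all normals equal, the update pulls the points onto a common tangent plane while leaving their in-plane coordinates untouched, which matches the feature-aware intent of Eq. (11).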

(a) (b) (c) (d)

Fig. 5. (a) and (b): two overly-sharpened results (more unique colors around the upper corner) obtained by fixing θ_th. (c) The smeared result (smoothly changing colors around the lower corner) obtained with a greater θ_th^init. (d) The result obtained with a smaller θ_th^init. Zoom in to observe the differences clearly.

Fig. 6. Normal errors (mean square angular error, in radians) of the (a) Cube and (b) Dodecahedron point sets corrupted with different levels of noise (proportional to the diagonal length of the bounding box), comparing [19], [17], [12], [18], and our method.

5 APPLICATIONS AND EXPERIMENTAL RESULTS

In this section, we demonstrate some geometry processing applications that benefit from our approach directly or indirectly, including mesh denoising, point cloud filtering, point


(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours

Fig. 7. Position accuracies for Fig. 9. The root mean square errors are (×10^{-2}): (a) 8.83, (b) 9.05, (c) 5.14, (d) 9.64, (e) 3.22. The RMSEs of the corresponding surface reconstructions are (×10^{-2}): 7.73, 6.45, 3.28, 7.71, and 2.41, respectively. (f) is the error bar for this figure and Fig. 8.

(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours

Fig. 8. Position accuracies for Fig. 13. The root mean square errors are (×10^{-3}): (a) 8.59, (b) 6.84, (c) 6.80, (d) 6.82, (e) 6.57. The RMSEs of the corresponding surface reconstructions are (×10^{-3}): 8.60, 6.75, 6.68, 6.74, and 6.40, respectively.

cloud upsampling, surface reconstruction, and geometric texture removal. Moreover, we compared state-of-the-art methods with our approach in each application, using freely available source code for each compared method or implementations obtained from the original authors.

Parameter setting. As with image denoising [3], we set a "window" size (i.e., a non-local search range) for similar structure searching, which provides a trade-off between accuracy and speed. The main parameters of our normal estimation method are the local neighborhood size k_local, the angle threshold θ_th, the non-local search range k_non, and the maximum number of normal estimation iterations n_nor. For the position update procedure, the parameters are the local neighborhood size k_local (or 1-ring neighbors for mesh models) and the number of position update iterations n_pos. To find similar local isotropic structures more accurately, we set an initial value and a lower bound for θ_th, namely θ_th^init and θ_th^low, and reduce the start value θ_th^init towards θ_th^low at a rate of 1.1^n in the n-th iteration. We show tests of our parameters in Figure 5 and in the supplementary document. In general, normal errors decrease with an increasing number of normal estimation iterations, but excessive iterations can cause normal errors to increase. The estimated normals of models with sharp features become more accurate with an increasing local neighborhood size k_local or non-local search range k_non.

A fixed θ_th is likely to locate similar local isotropic structures inaccurately and thus generate erroneous normal

TABLE 1
Normal errors (mean square angular error, in radians) of two scanned models. Dod_vir is a virtual scan of a noise-free model, as opposed to Figure 10, which is corrupted with synthetic noise.

Methods   [19]    [17]    [12]    [18]    Ours
Dod_vir   0.0150  0.0465  0.0054  0.0553  0.0023
Fig. 13   0.0118  0.1274  0.0060  0.1208  0.0036

estimations (Figure 5(a-b)). Larger start values of θ_th^init smear geometric features (Figure 5(c)).

Based on our parameter tests and observations, for point clouds we empirically set k_local = 60, k_non = 150, θ_th^init = 30.0°, and θ_th^low = 15.0° for models with sharp features, but set θ_th^init = 20.0° and θ_th^low = 8.0° for models with low dihedral angle features. For mesh models, we replace the local neighborhood with the 2-ring of neighboring faces. We use 2 to 10 iterations for normal estimation and 5 to 30 iterations for the position update.

To make the comparisons fair, we used the same local neighborhood for all methods and tuned the remaining parameters of the other methods to achieve their best visual results. Specifically, to tune one parameter, we fixed the other parameters and searched within the suggested range, following the meaning of the parameters in the original papers. We observed that the other methods often take more iterations of normal smoothing than ours. The methods [17], [18] produce multiple solutions, and we took the best results for comparison. For the position update, we used the same parameters across the compared normal estimation methods for each model.

Accuracy. Since we used the pre-ﬁlter [54] for meshes

with large noise, there exist few ﬂipped normals in the

results so that different methods have limited difference in

normal accuracy. However, the visual differences are easy to

observe. Therefore, we compared the accuracy of normals

and positions over point cloud shapes. Note that state of

the art methods compute normals on edges differently: the

normals on edges are either sideling (e.g., [18], [19]) or

perpendicular to one of the intersected tangent planes (e.g.,

[12] and our method). The latter is more suitable for feature-

aware position update. For fair comparisons, we have two

ground truth models for each point cloud: the original

ground truth for [18], [19] and the other ground truth for

[12] and our method. The latter ground truth is generated by

adapting normals on edges to be perpendicular to one of the

intersected tangent planes. The ground truth model, which

has the smaller mean square angular error (MSAE) [54]

among the two kinds of ground truth models, is selected as

the ground truth for [17]. Figure 6 shows the normal errors

of different levels of noise on the cube and dodecahedron

models. We also compared our method with state of the

art techniques in Table 1. The ground truth for the Dod vir

model (Table 1) for [18], [19] is achieved by averaging the

neighboring face normals in the noise-free model. The other

kind of ground truth for [12] and our method is produced by

further adapting normals on edges to be perpendicular to one of the intersected tangent planes. We compute ground truth for Figures 13 and

6 in a similar way. The normal error results demonstrate that

our approach outperforms the state of the art methods. We

speculate that this performance is due to the use of non-local

similar structures as opposed to only local information.
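The angular error metric used above can be sketched as follows. This is a minimal interpretation assuming MSAE averages the squared angle (in radians) between corresponding unit normals, with the absolute dot product making the measure orientation-insensitive; the exact definition in [54] may differ in these details.

```python
import numpy as np

def msae(est_normals, gt_normals):
    """Mean square angular error between corresponding unit normals.

    est_normals, gt_normals: (n, 3) arrays of unit vectors. The absolute
    dot product is taken so that a flipped-orientation normal is not
    penalized; the per-normal angle is squared, then averaged.
    """
    dots = np.abs(np.sum(est_normals * gt_normals, axis=1))
    angles = np.arccos(np.clip(dots, -1.0, 1.0))  # per-normal angular error
    return float(np.mean(angles ** 2))

# Identical normals give zero error; one 90-degree deviation out of two
# normals contributes (pi/2)^2 / 2 to the mean.
n_gt = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
n_est = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
print(msae(n_gt, n_gt))   # 0.0
print(msae(n_est, n_gt))  # (pi/2)**2 / 2
```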

In addition, we compared the position errors of differ-

ent techniques; see Figures 7 and 8. The position error is

measured using the average distance between points of the

ground truth and their closest points of the reconstructed

point set [11]. For visualization purposes, we rendered the

colors of position errors on the upsampling results. The

root mean square error (RMSE) of both the upsampling and reconstruction results shows that our approach is more accurate than state of the art methods.
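The closest-point position error described above can be sketched as follows, assuming a brute-force nearest-neighbour search (the actual evaluation would use a spatial index such as a k-d tree for models of the sizes in this paper):

```python
import numpy as np

def position_error(gt_points, result_points):
    """Average distance from each ground-truth point to its closest
    point in the filtered/reconstructed set (cf. [11])."""
    diff = gt_points[:, None, :] - result_points[None, :, :]
    dists = np.linalg.norm(diff, axis=2)     # (n_gt, n_result) pairwise
    return float(np.mean(dists.min(axis=1)))

def rmse(gt_points, result_points):
    """Root mean square of the same closest-point distances."""
    diff = gt_points[:, None, :] - result_points[None, :, :]
    closest = np.linalg.norm(diff, axis=2).min(axis=1)
    return float(np.sqrt(np.mean(closest ** 2)))

gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
res = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.0]])
print(position_error(gt, res))  # (0.1 + 0.0) / 2 = 0.05
```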


(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours

Fig. 9. The ﬁrst row: normal results of the Cube point cloud (synthetic noise: 3.0% of the diagonal length of the bounding box). The second row:

upsampling results of the ﬁltered results by updating position with the normals in the ﬁrst row. The third row: the corresponding surface reconstruction

results.

(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours

Fig. 10. The ﬁrst row: normal results of the Dodecahedron point cloud (synthetic noise: 2.0% of the diagonal length of the bounding box). The

second row: upsampling results of the ﬁltered results by updating position with the normals in the ﬁrst row. The third row: the corresponding surface

reconstruction results.


(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours

Fig. 11. The ﬁrst row: normal results of the scanned Car point cloud. The second row: upsampling results of the ﬁltered results by updating position

with the normals in the first row. The third row: the corresponding surface reconstruction results. Compared with other methods, [12] and our method better preserve sharp edges and hence generate sharper results.

(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours

Fig. 12. The ﬁrst row: normal results of the scanned House point cloud. The second row: upsampling results of the ﬁltered results by updating

position with the normals in the ﬁrst row. The third row: the corresponding surface reconstruction results.


(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours

Fig. 13. The ﬁrst row: normal results of the scanned Iron point cloud. The second row: upsampling results of the ﬁltered results by updating position

with the normals in the ﬁrst row. The third row: the corresponding surface reconstruction results.

(a) [19] (b) [17] (c) [12] (d) [18] (e) Ours

Fig. 14. The ﬁrst row: normal results of the scanned Toy point cloud. The second row: upsampling results of the ﬁltered results by updating position

with the normals in the ﬁrst row. The third row: the corresponding surface reconstruction results.


(a) Noisy input (b) [8] (c) [14] (local) (d) [14] (global) (e) [15] (f) Ours

Fig. 15. Denoised results of the Bunny (synthetic noise: 0.2 of the average edge length), the scanned Pyramid and Wilhelm.

(a) [24] (b) RIMLS over (a) (c) [55] (d) RIMLS over (c)

Fig. 16. Upsampling and reconstruction results over [24], [55]. The input is the same as Figure 10.

5.1 Point Cloud Filtering

We compare our normal estimation method with several

state of the art normal estimation techniques. We then per-

form the same number of iterations of our position update

algorithm with the estimated normals of all methods.

Figures 9 and 10 show two point cloud models corrupted

with heavy, synthetic noise. The results demonstrate that

our method performs better than the state of the art ap-

proaches in terms of sharp feature preservation and non-

feature smoothness. Figures 11, 12, 13, and 14 show the

methods applied to a variety of real scanned point cloud

models. Our approach outperforms other methods in terms

of the quality of the estimated normals. We demonstrate

our technique on point clouds with more complicated fea-

tures. Figure 17 shows that our method produces slightly

lower normal errors than [12]. Figure 17 (f) and (g) show our method with different parameters, which lead to a less and a more sharpened version of the input, respectively. We also show

(a) [12] (b) Ours (c) [12] (d) Ours

(e) [12] (f) Ours (g) Ours

Fig. 17. Normal estimation results on David (a,b), a female statue (c,d)

and monkeys (e,f,g). The mean square angular errors of (a-g) are

respectively (×10−2): 10.684, 10.636, 9.534, 9.423, 5.004, 4.853 and 4.893. (b,d,f) used smaller knon and klocal, and (g) used the default knon and klocal.

some results using [24], [55], which do not preserve sharp

features (Figure 16). In addition, we show some filtering results by [56], which preserves sharp features to some extent but introduces obvious outliers on surfaces (Figure 18). Our method can even successfully handle larger noise (Figures 9 and 10), which we found difficult for [56].


(a) [56] (b) [56]

Fig. 18. Filtering results by [56]. The input noise is 0.15% for (a) and

0.1% for (b). Red circles indicate outliers.

5.2 Point Cloud Upsampling

As described in Section 5.1, the point cloud ﬁltering also

consists of a two-step procedure: normal estimation and

point update. However, unlike mesh shapes, point cloud

models often need to be resampled to enhance point density

after ﬁltering operations have been applied.

We apply the edge-aware point set resampling technique

[12] to all the results after point cloud ﬁltering and contrast

the different upsampling results. For fair comparisons, we

upsample the ﬁltered point clouds of each model to reach

a similar number of points. Figures 9 to 14 display various

upsampling results on state of the art normal estimation

methods and different point cloud models. The ﬁgures show

that the upsampling results on our ﬁltered point clouds are

substantially better than those ﬁltered by other methods in

preserving sharp features. Bilateral normal smoothing [12]

usually produces good results, but this method sometimes blurs edges with low dihedral angles.

(a) [57] (b) [58] (c) [59] (d) [57] (e) [58] (f) [59]

(g) [57] (h) [58] (i) [59]

Fig. 19. Mesh denoising results of [57], [58], [59]. Only close-up views

are shown to highlight the differences.

5.3 Surface Reconstruction

One common application for point cloud models is to recon-

struct surfaces from the upsampled point clouds in Section

5.2 before use in other applications. Here, we select the edge-aware surface reconstruction technique RIMLS [13].

For fair comparisons, we use the same parameters for all

the upsampled point clouds of each model.

Figures 9 to 14 show a variety of surface reconstruction

results on different point cloud models. The comparison

results demonstrate that the RIMLS technique over our

method produces the best surface reconstruction results, in

terms of sharp feature preservation.

5.4 Mesh Denoising

Many state of the art mesh denoising methods involve

a two-step procedure which ﬁrst estimates normals and

then updates vertex positions. We selected several of these

methods [8], [14], [15] for comparisons in Figure 15. Note

that [14] provides both a local and global solution, and

we provide comparisons for both. We also compared our

method with other mesh denoising techniques [57], [58],

[59]. Consistent with Figure 15, the corresponding blown-

up windows of these three methods are shown in Figure

19.

When the noise level is high, many of these methods

produce ﬂipped face normals. For the Bunny model (Figure

15), which involves frequent ﬂipped triangles, we utilize

the technique in [54] to estimate a starting mesh from the

original noisy mesh input for all involved methods.

The comparison results show that our method outper-

forms the selected state of the art mesh denoising methods

in terms of sharp feature preservation. Similar to the above analysis, this is because other methods are mostly local techniques, while our method takes into account the information of similar structures (i.e., more useful information).

Specifically, [8], [14], [15], [58], [59] are local methods ([15] and the global mode of [14] are still based on local information). [57] does not take sharp feature information into account and thus cannot preserve sharp features well.

5.5 Geometric Texture Removal

We also successfully applied our method to geometric de-

texturing, the task of which is to remove features of different

scales [16]. Our normal estimation algorithm is feature-

aware in the above applications because each matrix con-

sists only of similar local isotropic structures. On the other

hand, with larger values of θth, the constructed matrix can

include local anisotropic structures and the low rank matrix

approximation result becomes smoother, thus smoothing

anisotropic surfaces to isotropic surfaces.
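The role of θth can be illustrated with a simplified similarity test. Here each local structure is summarized by a single unit normal, which is only a stand-in for the paper's full structure comparison; the 8-degree and 45-degree thresholds below are illustrative values.

```python
import numpy as np

def select_similar(ref_normal, candidate_normals, theta_th_deg):
    """Pick candidate structures whose average normal deviates from the
    reference by less than theta_th (degrees). A small theta_th keeps
    only isotropic, feature-consistent structures; a large theta_th also
    admits anisotropic ones, which makes the low-rank result smoother.
    """
    cos_th = np.cos(np.radians(theta_th_deg))
    dots = candidate_normals @ ref_normal    # cosine of deviation angle
    return np.where(dots >= cos_th)[0]

ref = np.array([0.0, 0.0, 1.0])
cands = np.array([
    [0.0, 0.0, 1.0],                                          # 0 degrees
    [0.0, np.sin(np.radians(30)), np.cos(np.radians(30))],    # 30 degrees
    [1.0, 0.0, 0.0],                                          # 90 degrees
])
print(select_similar(ref, cands, 8.0))   # [0]    -> feature-aware
print(select_similar(ref, cands, 45.0))  # [0, 1] -> smoother result
```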

Figure 20 shows comparisons of different methods

that demonstrate that our method outperforms other ap-

proaches. Note that [16] is speciﬁcally designed for geomet-

ric texture removal. However, that method cannot preserve

sharp edges well. Figure 21 shows the results of removing

different scales of geometric features on a mesh. We pro-

duced Figure 21 (d) by applying the pre-ﬁltering technique

[54] in advance, since the vertex update algorithm [14] could

generate frequent ﬂipped triangles when dealing with such

large and steep geometric features. As an alternative, our

normal estimation method can be combined with the vertex

update in [16] to handle such challenging mesh models.

Figure 22 shows the geometric texture removal on two

different point clouds, which are particularly challenging

due to a lack of topology.

5.6 Timings

Table 2 summarizes the timings of different normal estima-

tion methods on several point clouds. While our method


(a) Input (b) Laplacian (c) [14] (local) (d) [60] (e) [16] (f) Ours

Fig. 20. Geometric texture removal results of the Bunny and Cube. Please refer to the zoomed rectangular windows.

(a) Input (b) Small texture removal (c) Medium texture removal (d) Large texture removal

Fig. 21. Different scales of geometric texture removal results of the Circular model.

(a) Input (b) Ours (c) Input (d) Ours

Fig. 22. Geometric texture removal results of the Turtle point cloud and

embossed point cloud. We render point set surfaces of each point cloud

for visualization.

produces high quality output, the algorithm takes a long

time to run due to the SVD operation for each normal estimation. Therefore, our method is more suitable for offline geometry processing. However, it is possible to accelerate our method using specific SVD decomposition algorithms, such as the randomized SVD (RSVD) algorithm [61], as shown in Table 2. In addition, many parts of

the algorithm could beneﬁt from parallelization.
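A minimal sketch of randomized SVD in the spirit of [61], assuming the standard Gaussian-sketch-then-QR scheme; the oversampling amount is an illustrative choice, and the paper's exact RSVD configuration (e.g., power iterations) is not specified here.

```python
import numpy as np

def randomized_svd(A, k, oversample=5, seed=0):
    """Truncated SVD via random projection (Halko et al. style).

    Sketches the range of A with a Gaussian test matrix, then runs an
    exact SVD on the much smaller projected matrix. When only the top
    few singular values are needed, as in per-normal low-rank
    approximation, this is considerably cheaper than a full SVD.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ omega)         # orthonormal range basis
    B = Q.T @ A                            # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k, :]

# An exactly rank-4 matrix is recovered (almost) exactly at k = 4.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 4)) @ rng.standard_normal((4, 40))
U, s, Vt = randomized_svd(A, 4)
print(np.allclose(U * s @ Vt, A))  # True
```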

TABLE 2
Timing statistics for different normal estimation techniques over point clouds (in seconds).

Model             [19]    [17]    [12]    [18]    Ours (SVD)    Ours (RSVD)
Fig. 9 (#6146)    0.57    141.5   0.48    8       95.6          65.1
Fig. 13 (#100k)   18.7    2204    17.2    115     3147          2458
Fig. 12 (#127k)   10.8    3769    12.5    141     3874          2856

6 CONCLUSION

In this paper, we have presented an approach consisting

of two steps: normal estimation and position update. Our

method can handle both mesh shapes and point cloud

models. We also show various geometry processing appli-

cations that beneﬁt from our approach directly or indirectly.

The extensive experiments demonstrate that our method

performs substantially better than state of the art techniques,

in terms of both visual quality and accuracy.

While our method works well, speed is an issue if online processing is required. In addition, though we

mitigate issues associated with the point distribution in the

position update procedure (i.e., gaps near edges), the point

distribution could still be improved. One way to do so is

to re-distribute points after our position update through a

“repulsion force” from each point to its neighbors. We could

potentially accomplish this effect by adding this repulsion

force directly to Eq. (11).
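Such a repulsion term could look like the following hypothetical sketch; the neighbor sets, inverse-distance weighting, and strength parameter are all illustrative assumptions, not a reproduction of the paper's Eq. (11).

```python
import numpy as np

def repulsion_step(points, neighbors, strength=0.1):
    """One hypothetical repulsion step: push each point away from its
    neighbors, weighted by inverse distance, to even out the point
    distribution after the position update.
    """
    new_points = points.copy()
    for i, nbrs in neighbors.items():
        force = np.zeros(3)
        for j in nbrs:
            d = points[i] - points[j]
            dist = np.linalg.norm(d)
            if dist > 1e-12:
                force += d / (dist ** 2)   # inverse-distance repulsion
        new_points[i] = points[i] + strength * force
    return new_points

pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [2.0, 0.0, 0.0]])
nbrs = {1: [0, 2]}  # only move the middle point
moved = repulsion_step(pts, nbrs)
print(moved[1])  # pushed away from the closer neighbor (toward +x)
```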

ACKNOWLEDGMENTS

Xuequan Lu is supported in part by Deakin University inter-

nal grant (CY01-251301-F003-PJ03906-PG00447) and indus-

try grant (PJ06625). Ying He is supported by AcRF 20/20.

REFERENCES

[1] C. Tomasi and R. Manduchi, “Bilateral ﬁltering for gray and color

images,” in Sixth International Conference on Computer Vision (IEEE

Cat. No.98CH36271), Jan 1998, pp. 839–846.

[2] K. He, J. Sun, and X. Tang, “Guided image ﬁltering,” IEEE Trans-

actions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6,

pp. 1397–1409, June 2013.

[3] S. Gu, L. Zhang, W. Zuo, and X. Feng, “Weighted nuclear

norm minimization with application to image denoising,” in

Proceedings of the 2014 IEEE Conference on Computer Vision and

Pattern Recognition, ser. CVPR ’14. Washington, DC, USA:

IEEE Computer Society, 2014, pp. 2862–2869. [Online]. Available:

http://dx.doi.org/10.1109/CVPR.2014.366

[4] S. Gu, Q. Xie, D. Meng, W. Zuo, X. Feng, and L. Zhang,

“Weighted nuclear norm minimization and its applications

to low level vision,” International Journal of Computer Vision,

vol. 121, no. 2, pp. 183–208, Jan 2017. [Online]. Available:

https://doi.org/10.1007/s11263-016-0930-5

[5] S. Wu, P. Bertholet, H. Huang, D. Cohen-Or, M. Gong, and

M. Zwicker, “Structure-aware data consolidation,” IEEE Transac-

tions on Pattern Analysis and Machine Intelligence, vol. 40, no. 10,

pp. 2529–2537, Oct 2018.


[6] W. Yifan, S. Wu, H. Huang, D. Cohen-Or, and O. Sorkine-Hornung,

“Patch-based progressive 3d point set upsampling,” 2018.

[7] L. Yu, X. Li, C.-W. Fu, D. Cohen-Or, and P.-A. Heng, “Ec-net:

an edge-aware point set consolidation network,” in The European

Conference on Computer Vision (ECCV), September 2018.

[8] X. Sun, P. Rosin, R. Martin, and F. Langbein, “Fast and effective

feature-preserving mesh denoising,” IEEE Transactions on Visual-

ization and Computer Graphics, vol. 13, no. 5, pp. 925–938, Sept 2007.

[9] H. Avron, A. Sharf, C. Greif, and D. Cohen-Or, “L1-sparse

reconstruction of sharp point set surfaces,” ACM Trans. Graph.,

vol. 29, no. 5, pp. 135:1–135:12, Nov. 2010. [Online]. Available:

http://doi.acm.org/10.1145/1857907.1857911

[10] Y. Sun, S. Schaefer, and W. Wang, “Denoising point sets

via l0minimization,” Computer Aided Geometric Design,

vol. 35-36, pp. 2 – 15, 2015, geometric Modeling and

Processing 2015. [Online]. Available: http://www.sciencedirect.

com/science/article/pii/S0167839615000345

[11] X. Lu, S. Wu, H. Chen, S. K. Yeung, W. Chen, and M. Zwicker,

“Gpf: Gmm-inspired feature-preserving point set ﬁltering,” IEEE

Transactions on Visualization and Computer Graphics, vol. PP, no. 99,

pp. 1–1, 2017.

[12] H. Huang, S. Wu, M. Gong, D. Cohen-Or, U. Ascher, and

H. R. Zhang, “Edge-aware point set resampling,” ACM Trans.

Graph., vol. 32, no. 1, pp. 9:1–9:12, Feb. 2013. [Online]. Available:

http://doi.acm.org/10.1145/2421636.2421645

[13] A. C. Öztireli, G. Guennebaud, and M. Gross, “Feature preserving

point set surfaces based on non-linear kernel regression,”

Computer Graphics Forum, vol. 28, no. 2, pp. 493–501, 2009. [Online].

Available: http://dx.doi.org/10.1111/j.1467-8659.2009.01388.x

[14] Y. Zheng, H. Fu, O.-C. Au, and C.-L. Tai, “Bilateral normal ﬁl-

tering for mesh denoising,” IEEE Transactions on Visualization and

Computer Graphics, vol. 17, no. 10, pp. 1521–1530, Oct 2011.

[15] H. Zhang, C. Wu, J. Zhang, and J. Deng, “Variational mesh

denoising using total variation and piecewise constant function

space,” Visualization and Computer Graphics, IEEE Transactions on,

vol. 21, no. 7, pp. 873–886, July 2015.

[16] P.-S. Wang, X.-M. Fu, Y. Liu, X. Tong, S.-L. Liu, and B. Guo,

“Rolling guidance normal ﬁlter for geometric processing,” ACM

Trans. Graph., vol. 34, no. 6, pp. 173:1–173:9, Oct. 2015. [Online].

Available: http://doi.acm.org/10.1145/2816795.2818068

[17] A. Boulch and R. Marlet, “Fast and robust normal estimation

for point clouds with sharp features,” Comput. Graph. Forum,

vol. 31, no. 5, pp. 1765–1774, Aug. 2012. [Online]. Available:

http://dx.doi.org/10.1111/j.1467-8659.2012.03181.x

[18] A. Boulch and R. Marlet, “Deep learning for robust normal

estimation in unstructured point clouds,” Computer Graphics

Forum, vol. 35, no. 5, pp. 281–290, 2016. [Online]. Available:

http://dx.doi.org/10.1111/cgf.12983

[19] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle,

“Surface reconstruction from unorganized points,” SIGGRAPH

Comput. Graph., vol. 26, no. 2, pp. 71–78, Jul. 1992. [Online].

Available: http://doi.acm.org/10.1145/142920.134011

[20] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, and

C. T. Silva, “Point set surfaces,” in Proceedings of the Conference

on Visualization ’01, ser. VIS ’01. Washington, DC, USA:

IEEE Computer Society, 2001, pp. 21–28. [Online]. Available:

http://dl.acm.org/citation.cfm?id=601671.601673

[21] M. Pauly, M. Gross, and L. P. Kobbelt, “Efﬁcient simpliﬁcation

of point-sampled surfaces,” in Proceedings of the Conference

on Visualization ’02, ser. VIS ’02. Washington, DC, USA:

IEEE Computer Society, 2002, pp. 163–170. [Online]. Available:

http://dl.acm.org/citation.cfm?id=602099.602123

[22] N. J. Mitra and A. Nguyen, “Estimating surface normals in

noisy point cloud data,” in Proceedings of the Nineteenth Annual

Symposium on Computational Geometry, ser. SCG ’03. New

York, NY, USA: ACM, 2003, pp. 322–328. [Online]. Available:

http://doi.acm.org/10.1145/777792.777840

[23] C. Lange and K. Polthier, “Anisotropic smoothing of point

sets,” Computer Aided Geometric Design, vol. 22, no. 7,

pp. 680 – 692, 2005, geometric Modelling and Differential

Geometry. [Online]. Available: http://www.sciencedirect.com/

science/article/pii/S0167839605000750

[24] H. Huang, D. Li, H. Zhang, U. Ascher, and D. Cohen-

Or, “Consolidation of unorganized point clouds for surface

reconstruction,” ACM Trans. Graph., vol. 28, no. 5, pp. 176:1–

176:7, Dec. 2009. [Online]. Available: http://doi.acm.org/10.1145/

1618452.1618522

[25] T. K. Dey and S. Goswami, “Provable surface reconstruction

from noisy samples,” in Proceedings of the Twentieth Annual

Symposium on Computational Geometry, ser. SCG ’04. New

York, NY, USA: ACM, 2004, pp. 330–339. [Online]. Available:

http://doi.acm.org/10.1145/997817.997867

[26] P. Alliez, D. Cohen-Steiner, Y. Tong, and M. Desbrun, “Voronoi-

based variational reconstruction of unoriented point sets,” in

Proceedings of the Fifth Eurographics Symposium on Geometry

Processing, ser. SGP ’07. Aire-la-Ville, Switzerland, Switzerland:

Eurographics Association, 2007, pp. 39–48. [Online]. Available:

http://dl.acm.org/citation.cfm?id=1281991.1281997

[27] B. Li, R. Schnabel, R. Klein, Z. Cheng, G. Dang, and

S. Jin, “Robust normal estimation for point clouds with sharp

features,” Computers & Graphics, vol. 34, no. 2, pp. 94 –

106, 2010. [Online]. Available: http://www.sciencedirect.com/

science/article/pii/S009784931000021X

[28] J. Zhang, J. Cao, X. Liu, J. Wang, J. Liu, and X. Shi, “Point cloud

normal estimation via low-rank subspace clustering,” Computers

& Graphics, vol. 37, no. 6, pp. 697 – 706, 2013, shape Modeling

International (SMI) Conference 2013. [Online]. Available: http://

www.sciencedirect.com/science/article/pii/S0097849313000824

[29] X. Liu, J. Zhang, J. Cao, B. Li, and L. Liu, “Quality point

cloud normal estimation by guided least squares representation,”

Computers & Graphics, vol. 51, no. Supplement C, pp.

106 – 116, 2015, international Conference Shape Modeling

International. [Online]. Available: http://www.sciencedirect.

com/science/article/pii/S0097849315000710

[30] J. Zhang, J. Cao, X. Liu, C. He, B. Li, and L. Liu, “Multi-normal

estimation via pair consistency voting,” IEEE Transactions on Visu-

alization and Computer Graphics, pp. 1–1, 2018.

[31] E. Mattei and A. Castrodad, “Point cloud denoising via moving

rpca,” Computer Graphics Forum, vol. 36, no. 8, pp. 123–137, 2017.

[Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.

1111/cgf.13068

[32] K.-W. Lee and W.-P. Wang, “Feature-preserving mesh denoising

via bilateral normal ﬁltering,” in Proc. of Int’l Conf. on Computer

Aided Design and Computer Graphics 2005, Dec 2005.

[33] C. C. L. Wang, “Bilateral recovering of sharp edges on feature-

insensitive sampled meshes,” Visualization and Computer Graphics,

IEEE Transactions on, vol. 12, no. 4, pp. 629–639, July 2006.

[34] T. R. Jones, F. Durand, and M. Desbrun, “Non-iterative,

feature-preserving mesh smoothing,” ACM Trans. Graph., vol. 22,

no. 3, pp. 943–949, Jul. 2003. [Online]. Available: http:

//doi.acm.org/10.1145/882262.882367

[35] S. Fleishman, I. Drori, and D. Cohen-Or, “Bilateral mesh

denoising,” ACM Trans. Graph., vol. 22, no. 3, pp. 950–953, Jul.

2003. [Online]. Available: http://doi.acm.org/10.1145/882262.

882368

[36] H. Yagou, Y. Ohtake, and A. Belyaev, “Mesh smoothing via mean

and median ﬁltering applied to face normals,” in Geometric Model-

ing and Processing, 2002. Proceedings, 2002, pp. 124–131.

[37] H. Yagou, Y. Ohtake, and A. Belyaev, “Mesh denoising via it-

erative alpha-trimming and nonlinear diffusion of normals with

automatic thresholding,” in Computer Graphics International, 2003.

Proceedings, July 2003, pp. 28–33.

[38] Y. Shen and K. Barner, “Fuzzy vector median-based surface

smoothing,” IEEE Transactions on Visualization and Computer Graph-

ics, vol. 10, no. 3, pp. 252–265, May 2004.

[39] X. Sun, P. L. Rosin, R. R. Martin, and F. C. Langbein, “Random

walks for feature-preserving mesh denoising,” Computer Aided

Geometric Design, vol. 25, no. 7, pp. 437 – 456, 2008,

solid and Physical Modeling Selected papers from the Solid

and Physical Modeling and Applications Symposium 2007

(SPM 2007) Solid and Physical Modeling and Applications

Symposium 2007. [Online]. Available: http://www.sciencedirect.

com/science/article/pii/S0167839608000307

[40] J. Solomon, K. Crane, A. Butscher, and C. Wojtan, “A

general framework for bilateral and mean shift ﬁltering,”

CoRR, vol. abs/1405.4734, 2014. [Online]. Available: http:

//arxiv.org/abs/1405.4734

[41] W. Zhang, B. Deng, J. Zhang, S. Bouaziz, and L. Liu,

“Guided mesh normal ﬁltering,” Comput. Graph. Forum,

vol. 34, no. 7, pp. 23–34, Oct. 2015. [Online]. Available:

http://dx.doi.org/10.1111/cgf.12742

[42] X. Lu, W. Chen, and S. Schaefer, “Robust mesh denoising via

vertex pre-ﬁltering and l1-median normal ﬁltering,” Computer

Aided Geometric Design, vol. 54, no. Supplement C, pp. 49


– 60, 2017. [Online]. Available: http://www.sciencedirect.com/

science/article/pii/S0167839617300638

[43] S. K. Yadav, U. Reitebuch, and K. Polthier, “Mesh denoising

based on normal voting tensor and binary optimization,” IEEE

Transactions on Visualization and Computer Graphics, vol. PP, no. 99,

pp. 1–1, 2017.

[44] P.-S. Wang, Y. Liu, and X. Tong, “Mesh denoising via

cascaded normal regression,” ACM Trans. Graph., vol. 35,

no. 6, pp. 232:1–232:12, Nov. 2016. [Online]. Available:

http://doi.acm.org/10.1145/2980179.2980232

[45] Q. Zheng, A. Sharf, G. Wan, Y. Li, N. J. Mitra, D. Cohen-Or,

and B. Chen, “Non-local scan consolidation for 3d urban scenes,”

ACM Trans. Graph., vol. 29, no. 4, pp. 94:1–94:9, Jul. 2010. [Online].

Available: http://doi.acm.org/10.1145/1778765.1778831

[46] J. Digne, “Similarity based ﬁltering of point clouds,” in 2012

IEEE Computer Society Conference on Computer Vision and Pattern

Recognition Workshops, June 2012, pp. 73–79.

[47] J. Digne, S. Valette, and R. Chaine, “Sparse geometric represen-

tation through local shape probing,” IEEE Transactions on Visual-

ization and Computer Graphics, vol. 24, no. 7, pp. 2238–2250, July

2018.

[48] E. J. Candès and B. Recht, “Exact matrix completion via

convex optimization,” Foundations of Computational Mathematics,

vol. 9, no. 6, p. 717, Apr 2009. [Online]. Available: https:

//doi.org/10.1007/s10208-009-9045-5

[49] J.-F. Cai, E. J. Candès, and Z. Shen, “A singular value thresholding

algorithm for matrix completion,” SIAM J. on Optimization,

vol. 20, no. 4, pp. 1956–1982, Mar. 2010. [Online]. Available:

http://dx.doi.org/10.1137/080738970

[50] J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma, “Robust principal

component analysis: Exact recovery of corrupted low-rank matri-

ces via convex optimization,” in Advances in Neural Information

Processing Systems 22, Y. Bengio, D. Schuurmans, J. D. Lafferty,

C. K. I. Williams, and A. Culotta, Eds. Curran Associates, Inc.,

2009, pp. 2080–2088.

[51] G. Liu, Z. Lin, and Y. Yu, “Robust subspace segmentation by

low-rank representation,” in Proceedings of the 27th International

Conference on International Conference on Machine Learning,

ser. ICML’10. USA: Omnipress, 2010, pp. 663–670. [Online].

Available: http://dl.acm.org/citation.cfm?id=3104322.3104407

[52] Z. Zhang, A. Ganesh, X. Liang, and Y. Ma, “Tilt: Transform

invariant low-rank textures,” International Journal of Computer

Vision, vol. 99, no. 1, pp. 1–24, Aug 2012. [Online]. Available:

https://doi.org/10.1007/s11263-012-0515-x

[53] T. P. Wu, S. K. Yeung, J. Jia, C. K. Tang, and G. Medioni, “A closed-

form solution to tensor voting: Theory and applications,” IEEE

Transactions on Pattern Analysis and Machine Intelligence, vol. 34,

no. 8, pp. 1482–1495, Aug 2012.

[54] X. Lu, Z. Deng, and W. Chen, “A robust scheme for feature-

preserving mesh denoising,” IEEE Trans. Vis. Comput. Graph.,

vol. 22, no. 3, pp. 1181–1194, 2016.

[55] R. Preiner, O. Mattausch, M. Arikan, R. Pajarola, and M. Wimmer,

“Continuous projection for fast l1 reconstruction,” ACM Trans.

Graph., vol. 33, no. 4, pp. 47:1–47:13, Jul. 2014. [Online]. Available:

http://doi.acm.org/10.1145/2601097.2601172

[56] S. K. Yadav, U. Reitebuch, M. Skrodzki, E. Zimmermann, and

K. Polthier, “Constraint-based point set denoising using normal

voting tensor and restricted quadratic error metrics,” Computers &

Graphics, vol. 74, pp. 234 – 243, 2018. [Online]. Available: http://

www.sciencedirect.com/science/article/pii/S0097849318300797

[57] X. Li, L. Zhu, C.-W. Fu, and P.-A. Heng, “Non-local low-

rank normal ﬁltering for mesh denoising,” Computer Graphics

Forum, vol. 37, no. 7, pp. 155–166, 2018. [Online]. Available:

https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.13556

[58] W. Pan, X. Lu, Y. Gong, W. Tang, J. Liu, Y. He, and G. Qiu, “HLO:

half-kernel Laplacian operator for surface smoothing,” Computer-Aided Design, 2019.

[59] S. K. Yadav, U. Reitebuch, and K. Polthier, “Robust and high

ﬁdelity mesh denoising,” IEEE Transactions on Visualization and

Computer Graphics, vol. 25, no. 6, pp. 2304–2310, June 2019.

[60] L. He and S. Schaefer, “Mesh denoising via l0 minimization,”

ACM Trans. Graph., vol. 32, no. 4, pp. 64:1–64:8, Jul. 2013. [Online].

Available: http://doi.acm.org/10.1145/2461912.2461965

[61] N. Halko, P. G. Martinsson, and J. A. Tropp, “Finding structure

with randomness: Probabilistic algorithms for constructing

approximate matrix decompositions,” SIAM Review, vol. 53, no. 2,

pp. 217–288, 2011. [Online]. Available: https://doi.org/10.1137/

090771806

Xuequan Lu is a Lecturer (Assistant Professor)

at Deakin University, Australia. He spent more

than two years as a Research Fellow in Sin-

gapore. Prior to that, he earned his Ph.D at

Zhejiang University (China) in June 2016. His

research interests mainly fall into the category of

visual computing, for example, geometry model-

ing, processing and analysis, 2D data process-

ing and analysis. More information can be found

at http://www.xuequanlu.com.

Scott Schaefer is a Professor of Computer

Science at Texas A&M University. He re-

ceived a bachelor’s degree in Computer Sci-

ence/Mathematics from Trinity University in 2000

and an M.S. and PhD. in Computer Science

from Rice University in 2003 and 2006 respec-

tively. His research interests include graphics,

geometry processing, curve and surface repre-

sentations, and barycentric coordinates. Scott

received the Günter Enderle Award in 2011 and

an NSF CAREER Award in 2012.

Jun Luo received his BS and MS degrees in

Electrical Engineering from Tsinghua University,

China, and the Ph.D. degree in Computer Sci-

ence from EPFL (Swiss Federal Institute of Tech-

nology in Lausanne), Lausanne, Switzerland. He

is currently an Associate Professor at Nanyang

Technological University in Singapore. His re-

search interests include mobile and pervasive

computing, wireless networking, applied opera-

tions research, as well as network security.

Lizhuang Ma received the Ph.D. degree from

the Zhejiang University, Hangzhou, China. He

was the recipient of the National Science Fund for Distinguished Young Scholars from NSFC.

He is currently a Distinguished Professor and

the Head of the Digital Media & Computer Vi-

sion Laboratory, Shanghai Jiao Tong University,

Shanghai, China. His research interests include

digital media technology, vision, graphics, etc.

Ying He is currently an associate professor at

School of Computer Science and Engineering,

Nanyang Technological University, Singapore.

He received the BS and MS degrees in electri-

cal engineering from Tsinghua University, China,

and the PhD degree in computer science from

Stony Brook University, USA. His research inter-

ests fall into the general areas of visual com-

puting and he is particularly interested in the

problems which require geometric analysis and

computation.