
Towards Uniform Point Distribution in Feature-preserving Point Cloud Filtering

Shuaijun Chen

Deakin University

Australia

Jinxi Wang

Northwest A&F University

Yangling, China

Wei Pan

South China University of Technology

China

Shang Gao

Deakin University

Australia

Meili Wang

Northwest A&F University

Yangling, China

Xuequan Lu* (corresponding author)

Deakin University, Australia

xuequan.lu@deakin.edu.au

Abstract

As a popular representation of 3D data, point clouds may contain noise and need to be filtered before use. Existing point cloud filtering methods either cannot preserve sharp features or result in an uneven point distribution in the filtered output. To address this problem, this paper introduces a point cloud filtering method that considers both point distribution and feature preservation during filtering. The key idea is to incorporate a repulsion term with a data term in energy minimization. The repulsion term is responsible for the point distribution, while the data term approximates the noisy surfaces while preserving the geometric features. The method is capable of handling models with fine-scale features and sharp features. Extensive experiments show that our method yields better results with a more uniform point distribution in seconds.

Key words: Uniform Point Distribution, Point Cloud Filtering, Feature-preserving

1. Introduction

Researchers have made remarkable achievements in point cloud filtering in recent years. Newly proposed methods typically aim at maintaining the sharp features of the original point cloud while projecting the noisy points onto the underlying surfaces. The filtered point cloud data can then be used for upsampling [12], surface reconstruction [13,27], skeleton learning [21,22], computer animation [24,28], etc.

Existing point cloud filtering methods can be divided into traditional and deep learning techniques. Among the traditional class, position-based methods [17,11,29] obtain good smoothing results, while normal-based methods [27,25] are better at maintaining the sharp edges of models (e.g., CAD models). Some of these methods incorporate repulsion terms to prevent the points from aggregating, but they still leave gaps near the edges of geometric features, which affects the reconstruction quality. Deep learning-based approaches [31,32,37] require a number of noisy point clouds with ground-truth models for training and often achieve promising denoising performance through a proper number of iterations. These methods are usually based on local information and lead to a less even distribution in the filtered results, even in the presence of a "repulsion" loss term. It is difficult for these methods to handle unevenly distributed and sparsely sampled point clouds, since the patch size is difficult to adjust automatically. Also, different patch sizes within a point cloud pose a significant challenge to the learning procedure.

The above analysis motivates us to produce filtered point clouds that preserve sharp features and exhibit a more uniform point distribution.

In this paper, we propose a filtering method that preserves features well while making the point distribution more uniform. Specifically, given a noisy point cloud with normals as input, we first smooth the input normals using bilateral filtering [12]; Principal Component Analysis (PCA) [10] is used for the initial estimation of normals. Secondly, we update the point positions in a local manner by formulating an objective function consisting of an edge-aware data term and a repulsion term inspired by [23,25]. The two terms account for preserving geometric features and controlling the point distribution, respectively. Uniformly distributed points with feature-preserving effects can be obtained within a few iterations. We conduct extensive experiments to compare our approach with various other approaches, including position-based learning/traditional approaches and normal-based learning/traditional approaches. The results demonstrate that our method outperforms state-of-the-art methods in most cases, in both visual and quantitative comparisons.

2. Related Work

In this paper, we only review the work most relevant to our research, including traditional point cloud filtering and deep learning-based point cloud filtering.

2.1. Traditional Point Cloud Filtering

Position-based methods. LOP was first proposed in [17]. It is a parameterization-free method and does not rely on normal estimation. Besides fitting the original model, a density repulsion term was added to control the point cloud distribution evenly. WLOP [11] provided a novel repulsion term to solve the problem that the original repulsion function in LOP dropped too fast when the support radius became larger; the filtered points were distributed more evenly under WLOP. EAR [12] added an anisotropic weighting function to WLOP to smooth the model while preserving sharp features. CLOP [29] is another LOP-based approach; it redefined the data term as a continuous representation of a set of input points.

Though based only on point positions, these approaches achieve fair smoothing results. Still, since they disregard normal information, they tend to smear sharp features such as sharp edges and corners.

Normal-based methods. FLOP [16] added normal information to a novel feature-preserving projection operator and preserved features well; meanwhile, a new Kernel Density Estimate (KDE)-based random sampling method was proposed to accelerate FLOP. MLS-based approaches [14,15] have also been applied to point cloud filtering. They rely on the assumption that the given set of points implicitly defines a surface. In [1], the authors presented an algorithm that allocates an MLS local reference domain for each point that is most suitable for its adjacent points and further projects the points onto the underlying plane. This approach uses the eigenvectors of a weighted covariance matrix to obtain the normals when the input point cloud has no normal information. APSS [7], RMLS [33], and RIMLS [27] were implemented on this basis, where RIMLS builds on robust local kernel regression and obtains better results under higher noise. GPF [25] incorporated normal information into a Gaussian Mixture Model (GMM), which included two terms and performed well in preserving sharp features. A robust normal estimation method with a low-rank matrix approximation algorithm was proposed in [23] for both point clouds and meshes, where an application to point cloud filtering was demonstrated. To keep the geometric features, [19] first filtered the normals by defining discrete operators on point clouds, and then presented a bi-tensor voting scheme for the feature detection step.

Inspired by image denoising, researchers have also investigated nonlocal aspects of point cloud denoising. Nonlocal point cloud filtering methods [3,4,36,2] often incorporate normal information and design different similarity descriptors to update point positions in a nonlocal manner. Among them, [3] proposed a similarity descriptor for point cloud patches based on MLS surfaces. [4] designed a height vector field to describe the difference between the neighborhood of a point and the neighborhoods of other points on the surface. Inspired by the low-dimensional manifold model, [36] extended it from image patches to point cloud surface patches, which thus serves as a similarity descriptor for nonlocal patches. [2] presented a new multi-patch collaborative method that regards denoising as a low-rank matrix recovery problem; they define a given patch as a rotation-invariant height-map patch and denoise the points by imposing a graph constraint.

Filtering methods that rely on normal information usually yield good results, especially for point clouds with sharp features (e.g., CAD models). However, these methods depend strongly on the quality of the input normals, and poor normal estimation may lead to worse filtering results.

Our proposed approach falls into the normal-based category. Inspired by GPF, we estimate the normals of the input point cloud based on bilateral filtering [12] in order to obtain high-quality normal information. Note that if the input point cloud contains only positional information, PCA is used to compute the initial normals. The point positions are then updated in a local manner with the bilaterally filtered normals [23]. We also add a repulsion term [23] to ensure a more uniform distribution of the filtered points.
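When the input lacks normals, the PCA initialization mentioned above can be sketched as follows. This is a minimal NumPy sketch with a brute-force neighbor search; the function name and defaults are ours, not the authors':

```python
import numpy as np

def pca_normals(points, k=30):
    """Estimate a unit normal per point as the eigenvector of the local
    covariance matrix associated with the smallest eigenvalue."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        # k nearest points to p (brute force; a k-d tree would be faster)
        idx = np.argsort(np.linalg.norm(points - p, axis=1))[:k]
        nbrs = points[idx]
        cov = np.cov(nbrs.T)                     # 3x3 covariance of the patch
        eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]               # smallest-eigenvalue direction
    return normals
```

Note that PCA normals are unoriented; producing a consistent orientation (e.g., by propagation from a seed point) is a separate step.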

2.2. Deep Learning-based Point Cloud Filtering

A variety of deep learning-based methods dealing with noisy point clouds have emerged [18,6,37,5,34,31,20]. In terms of point cloud filtering, PointProNets [32] introduced a novel generative neural network architecture that encodes geometric features in a local way and obtains an efficient underlying surface; however, the generated underlying surface could hardly fill the holes present in the input shapes. NPD [5] redesigned the framework on the basis of PointNet [30] to estimate normals from noisy shapes and then projected the noisy points onto the predicted reference planes. Another PointNet-inspired method is Pointfilter [37]. It starts from points and learns the displacement between the predicted points and the raw input points; moreover, this approach requires normals only in the training phase. In the testing phase, only the point positions are taken as input to obtain filtered shapes with feature-preserving effects. EC-NET [34] presented an edge-aware network (similar to PU-NET [35]) for connecting edges of the original points. This method achieved promising results in retaining sharp edges in 3D shapes, but the training stage required manual labeling of the edges. Inspired by PCPNet [8], PointCleanNet [31] developed a data-driven method for both classifying outliers and reducing noise in raw point clouds. A novel feature-preserving normal estimation method was designed in [20] for point cloud filtering with geometric feature preservation. Deep learning-based filtering methods usually yield good results with more automation and can often handle point clouds with high density; conversely, low-density shapes as input may lead to poor filtering outcomes. Also, such methods require "seeing" enough samples during training.

3. Method

Our approach consists of two phases. In phase one, we smooth the initial normals with a bilateral filter (refer to [12] for more details) to ensure the quality of the normals. In phase two, we update the point positions with the smoothed normals to obtain a uniformly distributed point cloud with geometric features preserved. Figure 1 shows an overview of the proposed approach. We explain the second phase in detail in the following sections.

3.1. Position Update

We first define a noisy input with M points as P = \{p_i\}_{i=1}^{M}, p_i \in \mathbb{R}^3, and N = \{n_i\}_{i=1}^{M}, n_i \in \mathbb{R}^3, as the corresponding filtered normals. To obtain local information from a given point p_i, we define a local structure s_i for each point in the point cloud, consisting of the k nearest points to the current point. We employ an edge-aware recovery algorithm [23] to obtain filtered points by minimizing

    D(P, N) = \sum_i \sum_{j \in s_i} \left( |(p_i - p_j) n_j^T|^2 + |(p_i - p_j) n_i^T|^2 \right),    (1)

where p_i denotes the point to be updated and p_j denotes a neighbor point in the corresponding set s_i. Eq. (1) essentially adjusts the angles between the tangential vector formed by p_i and p_j and the corresponding normal vectors n_i, n_j.
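For illustration, the data term of Eq. (1) can be evaluated directly from points, normals, and neighbor indices. This is a NumPy sketch under our own naming, where `neighbors[i]` stands for the index set s_i:

```python
import numpy as np

def data_term(points, normals, neighbors):
    """Evaluate D(P, N): for each point i and neighbor j in s_i, penalize
    the projections of (p_i - p_j) onto n_j and onto n_i."""
    energy = 0.0
    for i, nbr_idx in enumerate(neighbors):
        for j in nbr_idx:
            d = points[i] - points[j]
            energy += np.dot(d, normals[j]) ** 2 + np.dot(d, normals[i]) ** 2
    return energy
```

The energy is zero exactly when every offset p_i - p_j is perpendicular to both endpoint normals, i.e., when the points lie on their local tangent planes.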

Figure 2 demonstrates how the points are updated on an assumed local plane by this edge-aware technique. It can be seen that the quality of the filtered points depends heavily on the quality of the estimated normals. Our normals are generated by bilaterally filtering the original input normals, given the simplicity and effectiveness of this filter.

3.2. Repulsive Force

From Figure 3, it can be seen that the points continually move toward the sharp edges during the position update step, thus inducing gaps near sharp edges. It is also demonstrated in [23] that minimizing D(P, N) inevitably yields gaps near sharp edges, and filtered points with obvious gaps might greatly impact following applications such as upsampling and surface reconstruction. Thus, we introduce R(P, N) [25] to better control the distribution of points:

    R(P, N) = \sum_i \frac{\lambda_i}{M} \sum_{j \in s_i} \eta(r_{ij}) \theta(r_{ij}),    (2)

Eq. (2) obtains a repulsion force using both point coordinates and normals, where r_{ij} = \left\| (p_i - p_j) - (p_i - p_j) n_j^T n_j \right\|, the term \eta(r) equals -r, and the term \theta(r) = e^{-r^2/(h/2)^2} denotes a smoothly decaying weight function.

3.3. Minimization

By combining Eq. (1) and Eq. (2), our final position update optimization becomes:

    \arg\min_P \; D(P, N) + R(P, N).    (3)

The gradient descent method is employed to minimize Eq. (3) and obtain the updated point p'_i. The partial derivative of Eq. (3) with respect to p_i is:

    \frac{\partial}{\partial p_i} = \sum_{j \in s_i} \frac{\partial \left[ (n_j p_i^T - n_j p_j^T)(p_i n_j^T - p_j n_j^T) \right]}{\partial p_i} + \lambda_i \beta_{ij} \frac{\partial \left[ (p_i - p_j)(I - n_j^T n_j) \right]}{\partial p_i},    (4)

where \beta_{ij} denotes \frac{\theta(r_{ij})}{r_{ij}} \frac{\partial \eta(r_{ij})}{\partial r}, and I is the 3 \times 3 identity matrix.

The updated point p'_i can be calculated by:

    p'_i = p_i + \gamma_i \sum_{j \in s_i} (p_j - p_i)\left( n_j^T n_j + n_i^T n_i \right) + \mu \frac{\sum_{j \in s_i} w_j \beta_{ij} (p_i - p_j)(I - n_j^T n_j)}{\sum_{j \in s_i} w_j \beta_{ij}},    (5)

where \gamma_i is set to \frac{1}{3|s_i|} according to [23], w_j denotes 1 + \sum_{j' \in s_i} \theta(\|p_i - p_{j'}\|), and \mu is a parameter controlling the relative magnitude of the repulsive force.
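Putting the pieces together, one sweep of such a position update might look like the following. This is a NumPy sketch under stated assumptions (brute-force neighbor search, a simplified per-pair weight in place of w_j, and our own function names), not the authors' code:

```python
import numpy as np

def update_points(points, normals, k=30, mu=0.3, h=0.5):
    """One sweep of an Eq. (5)-style update: the data term pulls each point
    onto the tangent planes of its neighborhood, while the repulsion term
    spreads points out within those tangent planes."""
    new_points = points.copy()
    for i, p in enumerate(points):
        # local patch s_i: k nearest neighbors, excluding the point itself
        idx = np.argsort(np.linalg.norm(points - p, axis=1))[1:k + 1]
        gamma = 1.0 / (3 * len(idx))
        data = np.zeros(3)
        rep = np.zeros(3)
        rep_w = 0.0
        for j in idx:
            d = points[j] - p                        # p_j - p_i
            n_i, n_j = normals[i], normals[j]
            # edge-aware data term: project the offset onto both normals
            data += np.dot(d, n_j) * n_j + np.dot(d, n_i) * n_i
            # repulsion: tangential residual of (p_i - p_j) w.r.t. n_j
            t = -d - np.dot(-d, n_j) * n_j
            r = np.linalg.norm(t)
            if r > 1e-12:
                theta = np.exp(-r ** 2 / (h / 2.0) ** 2)
                w = 1.0 + theta                      # simplified weight (assumption)
                beta = -theta / r                    # theta(r)/r * d(eta)/dr, eta(r) = -r
                rep += w * beta * t
                rep_w += w * beta
        step = gamma * data
        if abs(rep_w) > 1e-12:
            step += mu * rep / rep_w                 # normalized repulsive displacement
        new_points[i] = p + step
    return new_points
```

Running this for t iterations, with freshly constructed patches each time, mirrors the loop of Algorithm 1 below.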

3.4. Algorithm

The proposed method is described in Algorithm 1. We first filter the normals using bilateral filtering. By feeding the filtered normals and raw point positions into Algorithm 1, we obtain the updated point positions. Depending on the number of points of each model and the noise level, we choose different k to generate the local patches and perform several iterations accordingly. Section 4 provides the filtered results of different models, and Table 1 lists our parameter settings.


Figure 1. Overview of our approach. (a) Noisy input. Red color denotes points corrupted with noise. (b) Filtered normals. Blue lines denote filtered results of the initial normals. (c) Our position update method (considering a data term for feature preservation and a repulsion term for uniform distribution). Multiple iterations are performed to achieve a better filtered result. (d) The filtered point cloud. (e) The reconstructed mesh based on (d).

Figure 2. The left side represents the original points and the right side represents the updated points. p_i and p_j denote the current point and a neighboring point; n_i and n_j denote the normals of p_i and p_j, respectively. A local planar surface is assumed here.

Figure 3. The movement of the filtered points around sharp edges. Blue points denote an underlying surface; yellow and green points indicate two neighboring points that need to be moved, respectively. (a-b) show the movement of p_j with fixed p_i; (c-d) show the movement of p_i with fixed p_j. This reveals that points move toward the sharp edges and concentrate there, leading to gaps around the sharp edges.

Algorithm 1 Towards uniform point distribution in feature-preserving point cloud filtering

Input: Noisy point set P, corresponding filtered normals N.
Output: Uniformly distributed set of filtered points P'.
Initialize: iteration count t, repulsion weight µ, local patch s_i
for each iteration do
    for each point p_i do
        construct a local patch s_i;
        update the point position via Eq. (5);
    end for
end for

4. Experimental Results

The proposed method is implemented in Visual Studio 2017 and runs on a PC equipped with an i9-9750H CPU and an RTX 2070 GPU. Most examples in this paper execute within 7 seconds; the most time-demanding is the object in Figure 11, which takes 28 seconds.

4.1. Parameter Setting

The parameters include the local neighborhood size k, the coefficient of the repulsion force µ, and the number of iterations t. Considering that the number of points significantly affects the range of neighbors, in order to find the appropriate k neighbors for different models we determine the size of k in the range [15, 45] (k = 30 by default) according to the number of points of each model. To make the distribution of points more even while preserving the features, we use the parameter µ to balance the magnitude of the repulsive force among points and t to control the number of iterations. For models with sharp edges, we set a relatively large µ with a low number of iterations, specifically µ = 0.3 and t = 10 or t = 5. For models with smooth surfaces (e.g., non-CAD models), we use a smaller µ and a higher number of iterations, with µ = 0.1 and t = 30. Table 1 gives all the parameters of the models used in the experiments.

Table 1. Parameter settings for different models.

Model       k    µ    t
Figure 4    30   0.3  5
Figure 5    30   0.3  5
Figure 6    30   0.3  5
Figure 7    30   0.3  3
Figure 8    30   0.3  5
Figure 9    30   0.3  5
Figure 10   30   0.3  10
Figure 11   30   0.3  5
Figure 12   30   0.3  5

4.2. Compared Approaches

The proposed method is compared with state-of-the-art techniques, including the non-deep-learning position-based method CLOP [29], the non-deep-learning normal-based methods GPF [25] and RIMLS [27], and the deep learning-based methods TotalDenoising (TD) [9], PointCleanNet (PCN) [31], and Pointfilter (PF) [37]. We employ the following rules for a fair comparison: (a) We first normalize and centralize the noisy input. (b) As GPF and RIMLS both require high-quality normals, we adopt the same bilateral filter [12] to obtain the same input normals for each model. (c) We tune the main parameters of each method to the best of our ability to produce their final visual results. (d) For the deep learning-based methods, we use the results of the 6th iteration for TD and iterate three times for both PCN and PF. (e) For visual comparison, we use EAR [12] for upsampling and achieve a similar number of upsampled points for the same model. As for surface reconstruction, we adopt the same parameters for the same model.

4.3. Evaluation Metrics

Before discussing the visual results, we introduce two common evaluation metrics for analyzing the performance quantitatively. Suppose the ground-truth point cloud and the filtered point cloud are respectively defined as S_1 = \{x_i\}_{i=1}^{|S_1|} and S_2 = \{y_i\}_{i=1}^{|S_2|}. Notice that the number of ground-truth points |S_1| and filtered points |S_2| may be slightly different.

1) Chamfer Distance:

    e_{CD}(S_1, S_2) = \frac{1}{|S_1|} \sum_{x \in S_1} \min_{y \in S_2} \|x - y\|_2^2 + \frac{1}{|S_2|} \sum_{y \in S_2} \min_{x \in S_1} \|y - x\|_2^2,    (6)

2) Mean Square Error:

    e_{MSE}(S_1, S_2) = \frac{1}{|S_2|} \frac{1}{|NN(y)|} \sum_{y \in S_2} \sum_{x \in NN(y)} \|x - y\|_2^2,    (7)

where NN(y) denotes the set of nearest neighbors in S_1 for point y in S_2. We set |NN(y)| = 10, similar to [37], which means we search for the 10 nearest neighbors in S_1 of each point y in the predicted point set S_2.
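Both metrics are straightforward to implement. A NumPy sketch with brute-force nearest-neighbor search (the function names are ours):

```python
import numpy as np

def chamfer_distance(s1, s2):
    """Symmetric Chamfer Distance between two point sets (Eq. (6) style)."""
    d = np.linalg.norm(s1[:, None, :] - s2[None, :, :], axis=2)  # pairwise distances
    return (d.min(axis=1) ** 2).mean() + (d.min(axis=0) ** 2).mean()

def mean_square_error(s1, s2, nn=10):
    """MSE (Eq. (7) style): average squared distance from each filtered point
    in s2 to its nn nearest ground-truth neighbors in s1."""
    d = np.linalg.norm(s2[:, None, :] - s1[None, :, :], axis=2)  # |S2| x |S1|
    nearest = np.sort(d, axis=1)[:, :nn]                         # nn nearest per point
    return (nearest ** 2).mean()
```

For large point sets the O(|S_1||S_2|) distance matrix is wasteful; a k-d tree query would be the usual substitute.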

4.4. Visual Comparisons

Point clouds with synthetic noise. To show the denoising effect of our method, we conduct experiments on models corrupted with synthetic Gaussian noise at levels of 0.5% and 1.0%, respectively. Compared to other state-of-the-art methods, our visual results outperform them in terms of both smoothing and feature preservation. The results benefit from the fact that the position update considers normal information, which makes the filtered points distribute more evenly.

Meanwhile, we also observe the traits of the other methods in the experiments. CLOP always obtains good results in terms of smoothing; however, since it is a position-based method, it may blur sharp features. While GPF adds a gap-filling step after projecting the points onto the underlying surface, it still has difficulty maintaining a uniform distribution, especially when points are near sharp edges. This method may also make models with less sharp features abnormally sharp. RIMLS yields promising results in both noise removal and feature preservation; still, its filtered points are often unevenly distributed, which affects the performance of following applications such as upsampling and surface reconstruction.

The learning-based method TD also yields good smoothing results, but it does not seem to maintain the fine features of the model well. PCN typically produces less sharp features and can hardly obtain good smoothing effects under a relatively high level of noise. PF does not need normal information at the test stage but can still achieve a good feature-preserving effect while denoising. However, when the noisy points are sparse, this method cannot extract sufficient information from the sparse point cloud, leading to distortion of the filtered points.


Figure 4. Results on the Bunnyhi model corrupted with 0.5% synthetic noise. Panels in Figures 4-12: (a) noisy input, (b) CLOP, (c) GPF, (d) RIMLS, (e) TD, (f) PCN, (g) PF, (h) ours. The second row gives the surface reconstruction results.

Figure 5. Results on Rockerman corrupted with 0.5% synthetic noise. The second row gives the corresponding upsampling results.

Figure 6. Results on Icosahedron corrupted with 1.0% synthetic noise. The second row gives the corresponding upsampling results.


Figure 7. Results on Dodecahedron corrupted with 0.5% synthetic noise. The first row gives the corresponding upsampling results and the reconstructed meshes are shown at the bottom.

Figure 8. Results on the Kitten model corrupted with 1.0% synthetic noise. Upsampling is included.

Since normal information is taken into account by our method, it keeps sharp features well. Importantly, the more uniform point distribution makes it stand out in point cloud filtering and following applications like upsampling and surface reconstruction.

From the first row of Figures 4, 5 and 6, it can easily be seen that our method obtains the most uniform point distribution. Figures 5, 6 and 8 give the upsampling results of three different models after filtering. As seen in the second rows of Figures 5 and 6, the sharp edges are maintained well during denoising. The enlarged box in Figure 8 also shows the effect of our filtering method, where the shape of the kitten's ears is maintained quite well. The results of surface reconstruction are presented in Figures 4 and 7. As can be seen from the enlarged box, we maintain the bunny's mouth and nose features in Figure 4 very well. Figure 7 also shows the filtered results of our method on a simple geometric model with sharp edges. Our method is the best in terms of maintaining details and sharp edges.

Point clouds with raw scan noise. In addition to synthetic noise, we also perform experiments on raw scanned point clouds. The results of our method and existing methods on different raw point clouds are given in Figures 9, 10, 11 and 12. The filtered results of ours and other approaches given in Figure 9 show that our method performs better in terms of smoothing and preserving details. As can be seen from the enlarged box, most methods make the mouth of the model blurry or make it disappear after denoising. Note that although the model we use here has the same shape as in PF [37], the filtered results may be different since our sampling points are sparser than theirs.

Figure 10 shows the filtered results on a raw scanned model named BuddhaStele. The first row gives the results after upsampling, and the second row provides the results of surface reconstruction using Screened Poisson [13]. From details such as the stairs in the model, it can be seen that our method still outperforms the other methods. In Figure 11, our method maintains the sharp edges well on this model; as seen in the zoom-in box, other state-of-the-art methods either distort the sharp edges or smooth them out. Figure 12 shows filtered results on a raw scanned model named David. Our method preserves features better during filtering; as the details in the zoom-in box show, our approach maintains the facial features better than the others.

Figure 9. Results on a raw scanned model named Nefertiti. Upsampling is included.

Figure 10. Results on the raw scanned model named BuddhaStele. The first row gives the corresponding upsampling results and the reconstructed meshes are shown at the bottom.

4.5. Quantitative Comparisons

We also make a quantitative comparison using the two evaluation metrics introduced in the previous section. Note that since there is no corresponding ground-truth model for the point clouds with raw scan noise, we choose the models with synthetic noise for quantitative evaluation. The results under the Chamfer Distance metric are given in Table 2. Despite the fact that the deep learning-based methods are trained on a large number of point clouds, our method still outperforms all of them and even achieves the lowest error on most models. In terms of the other evaluation metric, MSE, our method still outperforms most deep learning methods and holds the lowest quantitative error on most models, as shown in Table 3.

These quantitative results remain consistent with the visual results, demonstrating that our method generally outperforms existing methods both visually and quantitatively. We consider this is because our method provides a more uniform distribution of the filtered points and can handle both sparsely and densely sampled point clouds. In the case of sparse sampling, some deep learning-based methods are unable to obtain meaningful local geometric information from the sparse points of local neighbors. It is also worth noting that although RIMLS achieves results comparable to ours in some of the visual comparisons, its error values are greater than those of our method in most cases due to its uneven point cloud distribution.

Figure 11. Results on the raw scanned model named Realscan. Upsampling is included.

Figure 12. Results on the raw scanned model named David. Upsampling is included.

4.6. Ablation Study

Parameters. We first perform experiments on a point cloud containing 7,682 points with different values of k. From Figure 13, the best value of k is 30, which is also the default value for this parameter. The suitable range of k is highly related to the density of the point cloud. With a fixed value of k, when the model has a sparser distribution, the local range delimited by k becomes larger, which may lead to an excessive range that should not be treated as local information, resulting in a less desirable outcome. For point clouds with a denser distribution, the k-neighborhood range becomes smaller, meaning that it contains only a smaller range of local information, further leading to an uneven distribution of the point cloud. Generally, we use a larger k for point clouds with denser points to ensure an appropriate number of local neighbors.

As µ is related to the number of iterations t, we give filtered results for different values of µ under fixed iterations. Figure 14 demonstrates the filtered point clouds obtained for different µ values when t = 30 and k = 30. We can see from this figure that as µ increases, the distribution of the point cloud becomes more uniform, but a too-large µ makes the model turn into chaos again. Figure 14(b) shows the filtered result with a low value of µ when t = 30 and k = 30; as we can see, a smaller µ is better for maintaining the feature edges of the model.

We also conduct experiments under different numbers of iterations. Figure 15 indicates that with an increasing number of iterations, the distribution of the filtered point cloud becomes more uniform. However, Figure 15(d) shows that if the iteration parameter is set too large, the boundary of the model becomes unclear again.

Table 2. Quantitative evaluation results of the compared methods and our method on the synthetic point clouds in Figures 4, 5, 6, 7 and 8. Note that * represents deep learning methods. Chamfer Distance (×10^-5) is used here. The best method for each model is highlighted.

Methods       Figure 4   Figure 5   Figure 6   Figure 7   Figure 8   Avg.
CLOP [29]     7.84       25.35      26.46      23.83      6.73       16.70
GPF [25]      16.19      31.85      21.52      18.35      15.54      17.58
RIMLS [27]    3.72       5.22       15.70      10.98      4.16       7.12
TD* [9]       23.88      13.20      24.43      19.22      11.86      16.15
PCN* [31]     4.76       6.38       29.87      14.96      6.24       11.18
PF* [37]      4.01       6.63       33.38      30.60      3.68       14.93
Ours          3.11       5.48       12.26      8.14       3.18       5.80

Table 3. Quantitative evaluation results of the compared methods and our method on the synthetic point clouds in Figures 4, 5, 6, 7 and 8. Note that * represents deep learning methods. Mean Square Error (×10^-3) is used here. The best method for each model is highlighted.

Methods       Figure 4   Figure 5   Figure 6   Figure 7   Figure 8   Avg.
CLOP [29]     10.32      13.91      21.86      23.31      9.88       15.86
GPF [25]      11.64      17.07      22.43      23.88      11.97      17.40
RIMLS [27]    10.05      14.02      21.68      23.30      10.02      15.81
TD* [9]       13.22      14.78      22.44      23.36      11.45      17.05
PCN* [31]     10.30      14.28      23.68      23.79      10.59      16.53
PF* [37]      10.02      14.17      23.71      25.28      9.82       16.60
Ours          9.92       14.01      21.46      23.15      9.92       15.69

Figure 13. Filtered results with different k: (a) noisy input, (b) k = 1, (c) k = 5, (d) k = 15, (e) k = 30, (f) k = 45. Noise level: 1.0%; t = 30, µ = 0.3.

With/without the repulsion term. Local filtering approaches tend to make points converge at certain places when updating the positions. Obviously, this makes follow-up applications such as surface reconstruction very difficult. Our method adopts the repulsion term described in Section 3 to allow the points to be evenly distributed during filtering, thus improving the quality of the filtered point cloud. As shown in Figure 16(a), without the repulsion term some points clearly concentrate at the edges, whereas the distribution in Figure 16(b) is more even.

Figure 14. Filtered results with different µ: (a) noisy input, (b) µ = 0.1, (c) µ = 0.2, (d) µ = 0.3, (e) µ = 0.4, (f) µ = 0.5. Noise level: 1.0%; t = 30, k = 30.

Figure 15. Filtered results after different numbers of iterations: (a) noisy input, (b) 5 iterations, (c) 15 iterations, (d) 30 iterations. Noise level: 1.0%; other parameters: k = 30, µ = 0.3.

Point density. The performance under different point densities is tested. Figure 17 shows that our method yields promising results on both sparse and dense point clouds. It is worth noting that since our method requires only local information, for point clouds with greater point density a desirable filtered result can be obtained by setting a larger k.

Figure 16. Filtered results (a) without and (b) with the repulsion term.

Figure 17. Filtered results (noisy input vs. ours) on models with different sampling numbers of points: 7,682, 30,722 and 67,938 points.

Noise level. Different noise levels are applied to the

same model to verify the robustness of our approach. Fig-

ure 18 gives the ﬁltered results by our method under noise

levels of 0.5%, 1.0%, 1.5%, 2.0%, 2.5% and 3.0%. It can

be seen that our method is capable of handling models with

different levels of noise but may produce less desirable results under excessively high noise. As our method relies on

the quality of normals, it is difﬁcult to accurately keep the

geometric features if the model has less accurate normals

caused by higher noise levels.
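The noise levels above can be reproduced with a simple generator. A common convention, assumed here since the exact generator is not spelled out in this section, is zero-mean Gaussian noise whose standard deviation is the stated percentage of the bounding-box diagonal:

```python
import numpy as np

def add_gaussian_noise(points, level, seed=0):
    """Perturb a point cloud with zero-mean Gaussian noise whose
    per-axis standard deviation is `level` (e.g. 0.01 for 1.0%)
    times the length of the axis-aligned bounding-box diagonal."""
    rng = np.random.default_rng(seed)
    diag = np.linalg.norm(points.max(axis=0) - points.min(axis=0))
    return points + rng.normal(scale=level * diag, size=points.shape)
```

Under this convention, a 3.0% level displaces points by several percent of the model's extent, which is consistent with the visibly harder cases in Figure 18.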

Irregular sampling. We conduct experiments on mod-

els with irregular sampling. As shown in Figure 19, we pro-

vide the visual comparisons on an unevenly sampled model

for PCN [31], PF [37], and our method. It can be seen that

the ﬁltered point cloud of PCN still contains obvious noise

and PF blurs the fine features, while our method
smooths the model better while preserving its features.

Hole filling. Taking the cube as an example, we experiment on a model with holes. Figure 20 shows the filtered

results for holes of different sizes. Our method is capable of filling

relatively small holes because we consider the distribution

of the updated points. However, it will be challenging to ﬁll

big holes that severely disrupt the surfaces of the model.

Lowrank [23] versus ours. Figure 21 gives a compari-

son of our approach with Lowrank [23]. It demonstrates that

our method achieves a more uniform point distribution while
removing noise, yielding better quality than Lowrank.

Indoor scene data. We perform an experiment on the

more challenging indoor scene data, as shown in Figure 22.

The result shows that our method can also handle
indoor scene point clouds.

Runtime. The running time of the proposed method is

calculated under different k and iteration settings. It is
clearly shown in Table 4 that as k and the number of iterations increase, the runtime increases accordingly.

For each iteration, our method gathers k-local neighbors

for each point to obtain the updated points. Thus the larger

the parameter k is, the slower the computation becomes. The
number of iterations has a similar effect on the running time:
the more iterations, the longer the computation. However, since our method is locally

based, multiple iterations usually run fast.
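The per-iteration cost can be seen from the neighbor gathering itself: each of the t iterations touches k neighbors per point, so total work grows roughly like t x N x k, matching the trend in Table 4. A brute-force sketch follows; `knn_bruteforce` is an illustrative stand-in (a KD-tree would replace the O(N^2) distance matrix in any real implementation):

```python
import numpy as np

def knn_bruteforce(points, k):
    """Return, for each point, the indices of its k nearest neighbors
    (excluding the point itself), via a full pairwise-distance matrix."""
    # O(N^2) squared distances between all pairs of points.
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    # Sort each row; column 0 is the point itself, so take columns 1..k.
    return np.argsort(d2, axis=1)[:, 1:k + 1]
```

Doubling k doubles the neighbors gathered per point per iteration, which is consistent with the roughly 2x runtime gap between the k = 30 and k = 60 rows of Table 4.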

In addition, we also give the runtime of other methods for

comparison. Table 5 shows that our method is significantly

faster than the other methods.

Table 4. Runtime (in seconds) on Dodecahedron for different k and t.

Iterations   t = 5   t = 15   t = 30   t = 60
k = 30        1.41     3.77     7.37    14.69
k = 60        2.33     6.37    12.36    24.36

Table 5. Runtime (in seconds) comparison on different models.

Methods    CLOP      TD      PCN      PF    Ours
Fig. 4    59.66   16.65   294.00   67.38    6.27
Fig. 5    10.82    6.17    78.04   13.86    1.80
Fig. 6     3.89    4.91    83.69   38.98    1.89
Fig. 7     2.60    5.53    28.24   70.81    0.91
Fig. 8    42.85   16.24   186.30   49.99    4.66
Fig. 9    60.92  155.76   317.67   81.10    7.93
Fig. 10   70.58  102.41   642.01  247.77    6.37
Fig. 11  103.91   40.52   241.05   68.23   28.16
Fig. 12   50.58   30.01   352.99   74.68    6.98

Limitation. Though our method achieves good results,

it still has room for improvement. Similar to [25], since it

is a normal-based approach, it is inevitably dependent on

the normal quality. In each iteration of the position update,

each point is estimated with reference to the direction of

the normal. Therefore, less accurate input normals may

affect the ﬁltered results. Figure 23 shows an example of

this issue. Also, similar to previous methods, our method

may produce less desirable results when handling a very


Figure 18. Filtered results (noisy input vs. ours) under 0.5%, 1.0%, 1.5%, 2.0%, 2.5%, and 3.0% noise.

Figure 19. Filtered results of PCN [31], PF [37], and our method on an irregularly sampled point cloud.

Figure 20. Filtered results of the model with holes: (a) cube with big holes, (b) cube with small holes, (c) updated points of (a), (d) updated points of (b).

Figure 21. Filtered points of Lowrank [23] and ours: (a) noisy input, (b) Lowrank [23], (c) ours.

high level of noise. For instance, Figure 18 indicates 1.5%

noise is more challenging than the 0.5% and 1.0% noise.

In the future, we would like to develop effective techniques to

handle the above limitations, e.g., fusing evolutionary opti-

mization within the ﬁltering framework [26].

Figure 22. Filtered result on a noisy point cloud of an indoor scene: (a) noisy input, (b) filtered result.

Figure 23. A failure example: (a) noisy input, (b) filtered result.

5. Conclusion

In this paper, we presented a method to improve point

cloud ﬁltering by enabling a more even point distribution

for ﬁltered point clouds. Built on top of [23], our method

introduces a repulsion term into the objective function. It

not only removes noise while preserving sharp features but

also ensures a more uniform distribution of cleaned points.

Experiments show that our method obtains very promising

ﬁltered results under different levels of noise and densities.

Both visual and quantitative comparisons also show that it

generally outperforms the existing techniques in both visual
quality and quantitative metrics. Our method also runs faster
than the other compared methods.


References

[1] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin,

and C. T. Silva. Computing and rendering point set surfaces.

IEEE Transactions on Visualization and Computer Graphics,

9(1):3–15, 2003. 2

[2] H. Chen, M. Wei, Y. Sun, X. Xie, and J. Wang. Multi-patch

collaborative point cloud denoising via low-rank recovery

with graph constraint. IEEE Transactions on Visualization

and Computer Graphics, 26(11):3255–3270, 2019. 2

[3] J.-E. Deschaud and F. Goulette. Point cloud non local de-

noising using local surface descriptor similarity. IAPRS,

38(3A):109–114, 2010. 2

[4] J. Digne. Similarity based ﬁltering of point clouds. In 2012

IEEE Computer Society Conference on Computer Vision and

Pattern Recognition Workshops, pages 73–79. IEEE, 2012. 2

[5] C. Duan, S. Chen, and J. Kovacevic. 3d point cloud de-

noising via deep neural network based local surface estima-

tion. In ICASSP 2019-2019 IEEE International Conference

on Acoustics, Speech and Signal Processing (ICASSP), pages

8553–8557. IEEE, 2019. 2

[6] P. Erler, P. Guerrero, S. Ohrhallinger, N. J. Mitra, and

M. Wimmer. Points2Surf: learning implicit surfaces from

point clouds. In European Conference on Computer Vision,

pages 108–124. Springer, 2020. 2

[7] G. Guennebaud and M. Gross. Algebraic point set surfaces.

In ACM SIGGRAPH 2007 papers, page 23. 2007. 2

[8] P. Guerrero, Y. Kleiman, M. Ovsjanikov, and N. J. Mitra.

PCPNet: learning local shape properties from raw point clouds.

In Computer Graphics Forum, volume 37, pages 75–85. Wi-

ley Online Library, 2018. 2

[9] P. Hermosilla, T. Ritschel, and T. Ropinski. Total denois-

ing: Unsupervised learning of 3d point cloud cleaning. In

Proceedings of the IEEE/CVF International Conference on

Computer Vision (ICCV), October 2019. 5,10

[10] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and

W. Stuetzle. Surface reconstruction from unorganized points.

In Proceedings of the 19th annual conference on computer

graphics and interactive techniques, pages 71–78, 1992. 1

[11] H. Huang, D. Li, H. Zhang, U. Ascher, and D. Cohen-Or.

Consolidation of unorganized point clouds for surface recon-

struction. ACM Transactions on Graphics (TOG), 28(5):1–7,

2009. 1,2

[12] H. Huang, S. Wu, M. Gong, D. Cohen-Or, U. Ascher, and

H. Zhang. Edge-aware point set resampling. ACM Transac-

tions on Graphics (TOG), 32(1):1–12, 2013. 1,2,3,5

[13] M. Kazhdan and H. Hoppe. Screened poisson surface recon-

struction. ACM Transactions on Graphics (TOG), 32(3):1–

13, 2013. 1,8

[14] D. Levin. The approximation power of moving least-squares.

Mathematics of Computation, 67(224):1517–1531, 1998. 2

[15] D. Levin. Mesh-independent surface interpolation. In Ge-

ometric Modeling for Scientiﬁc Visualization, pages 37–49.

Springer, 2004. 2

[16] B. Liao, C. Xiao, L. Jin, and H. Fu. Efﬁcient feature-

preserving local projection operator for geometry recon-

struction. Computer-Aided Design, 45(5):861–874, 2013. 2

[17] Y. Lipman, D. Cohen-Or, D. Levin, and H. Tal-Ezer.

Parameterization-free projection for geometry reconstruc-

tion. ACM Transactions on Graphics (TOG), 26(3):22, 2007.

1,2

[18] Y. Liu, J. Guo, B. Benes, O. Deussen, X. Zhang, and

H. Huang. Treepartnet: neural decomposition of point clouds

for 3d tree reconstruction. ACM Transactions on Graphics

(TOG), 40(6):1–16, 2021. 2

[19] Z. Liu, X. Xiao, S. Zhong, W. Wang, Y. Li, L. Zhang, and

Z. Xie. A feature-preserving framework for point cloud de-

noising. Computer Aided Design, 127:102857, 2020. 2

[20] D. Lu, X. Lu, Y. Sun, and J. Wang. Deep feature-preserving

normal estimation for point cloud ﬁltering. Computer-Aided

Design, 125:102860, 2020. 2

[21] X. Lu, H. Chen, S.-K. Yeung, Z. Deng, and W. Chen. Un-

supervised articulated skeleton extraction from point set se-

quences captured by a single depth camera. Proceedings of

the AAAI Conference on Artiﬁcial Intelligence, 32(1), Apr.

2018. 1

[22] X. Lu, Z. Deng, J. Luo, W. Chen, S.-K. Yeung, and Y. He. 3d

articulated skeleton extraction using a single consumer-grade

depth camera. Computer Vision and Image Understanding,

188:102792, 2019. 1

[23] X. Lu, S. Schaefer, J. Luo, L. Ma, and Y. He. Low rank ma-

trix approximation for 3d geometry ﬁltering. IEEE Transac-

tions on Visualization and Computer Graphics, 2020. 1,2,

3,11,12

[24] X. Lu, Z. Wang, M. Xu, W. Chen, and Z. Deng. A personality

model for animating heterogeneous trafﬁc behaviors. Com-

puter Animation and Virtual Worlds, 25(3-4):361–371, 2014.

1

[25] X. Lu, S. Wu, H. Chen, S.-K. Yeung, W. Chen, and

M. Zwicker. Gpf: Gmm-inspired feature-preserving point

set ﬁltering. IEEE Transactions on Visualization and Com-

puter Graphics, 24(8):2315–2326, 2017. 1,2,3,5,10,11

[26] T. Nakane, N. Bold, H. Sun, X. Lu, T. Akashi, and C. Zhang.

Application of evolutionary and swarm optimization in com-

puter vision: a literature survey. IPSJ Transactions on Com-

puter Vision and Applications, 12(1):1–34, 2020. 12

[27] A. C. Öztireli, G. Guennebaud, and M. Gross. Feature pre-

serving point set surfaces based on non-linear kernel regres-

sion. In Computer Graphics Forum, volume 28, pages 493–

501. Wiley Online Library, 2009. 1,2,5,10

[28] Y. Pei, Z. Huang, W. Yu, M. Wang, and X. Lu. A cas-

caded approach for keyframes extraction from videos. In

F. Tian, X. Yang, D. Thalmann, W. Xu, J. J. Zhang, N. M.

Thalmann, and J. Chang, editors, Computer Animation and

Social Agents, pages 73–81, Cham, 2020. Springer Interna-

tional Publishing. 1

[29] R. Preiner, O. Mattausch, M. Arikan, R. Pajarola, and

M. Wimmer. Continuous projection for fast l1 reconstruc-

tion. ACM Transactions on Graphics (TOG), 33(4):1–13,

2014. 1,2,5,10

[30] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: deep

learning on point sets for 3d classiﬁcation and segmentation.

In Proceedings of the IEEE Conference on Computer Vision

and Pattern Recognition, pages 652–660, 2017. 2


[31] M.-J. Rakotosaona, V. La Barbera, P. Guerrero, N. J. Mi-

tra, and M. Ovsjanikov. Pointcleannet: Learning to denoise

and remove outliers from dense point clouds. In Computer

Graphics Forum, volume 39, pages 185–203. Wiley Online

Library, 2020. 1,2,5,10,11,12

[32] R. Roveri, A. C. Öztireli, I. Pandele, and M. Gross. Point-

pronets: Consolidation of point clouds with convolutional

neural networks. In Computer Graphics Forum, volume 37,

pages 87–99. Wiley Online Library, 2018. 1,2

[33] R. B. Rusu, N. Blodow, Z. Marton, A. Soos, and M. Beetz.

Towards 3d object maps for autonomous household robots.

In 2007 IEEE/RSJ International Conference on Intelligent

Robots and Systems, pages 3191–3198. IEEE, 2007. 2

[34] L. Yu, X. Li, C.-W. Fu, D. Cohen-Or, and P.-A. Heng. EC-Net:
an edge-aware point set consolidation network. In Pro-

ceedings of the European Conference on Computer Vision

(ECCV), pages 386–402, 2018. 2

[35] L. Yu, X. Li, C.-W. Fu, D. Cohen-Or, and P.-A. Heng. PU-Net:
point cloud upsampling network. In Proceedings of the

IEEE Conference on Computer Vision and Pattern Recogni-

tion, pages 2790–2799, 2018. 2

[36] J. Zeng, G. Cheung, M. Ng, J. Pang, and C. Yang. 3d point

cloud denoising using graph laplacian regularization of a low

dimensional manifold model. IEEE Transactions on Image

Processing, 29:3474–3489, 2019. 2

[37] D. Zhang, X. Lu, H. Qin, and Y. He. Pointﬁlter: Point cloud

ﬁltering via encoder-decoder modeling. IEEE Transactions

on Visualization and Computer Graphics, 27(3):2015–2027,

2020. 1,2,5,7,10,11,12
