To be presented at IEEE VCIP’17, St. Petersburg, FL, USA, Dec. 2017.
Exemplar-based Framework for 3D Point Cloud Hole Filling
Chinthaka Dinesh #1, Ivan V. Bajić #2, Gene Cheung ⋆3
#Simon Fraser University, Burnaby, BC, Canada; ⋆National Institute of Informatics, Tokyo, Japan
1hchintha@sfu.ca; 2ibajic@ensc.sfu.ca; 3cheung@nii.ac.jp
Abstract—Holes can arise in 3D point clouds due to a number
of reasons such as incomplete scans, occlusions, and packet
loss. We present an exemplar-based framework for hole filling
in 3D point clouds, which exploits non-local self similarity to
provide plausible reconstruction even for large holes and complex
surfaces. Points along the hole boundary are given priority that
determines the order in which they are processed. Hole filling is
performed iteratively and uses templates near the hole boundary
to find the best matching regions elsewhere in the cloud, from
where existing points are transferred to the hole. The proposed
method has been compared with several existing methods and
has shown superior results, both visually and in terms of the
Hausdorff distance.
Index Terms—Hole filling, 3D point cloud, 3D geometry inpainting, point cloud alignment, surface reconstruction
I. INTRODUCTION
With the introduction of inexpensive 3D scanning devices
like Microsoft Kinect and Time-of-Flight (ToF) cameras, 3D
point cloud acquisition is becoming increasingly popular [1].
However, the scanned 3D point cloud may be missing data in
certain regions due to occlusion, low reflectance of the scanned
surface, high grazing angles, limited number of scans from
different viewing directions, etc. [1]. In certain applications
such as remote telepresence, parts of the point cloud may be
damaged or lost due to unreliable communication links along
the way. Hence, filling in the missing regions (holes) is an
important problem in 3D point cloud processing.
Various approaches have been proposed for hole filling
of surfaces represented by meshes [2], [3], [4], [5], but the
related problem of hole filling for 3D point clouds has received
relatively less attention. In [6], first, a triangle mesh is created
from the input point cloud and then, the vicinity of the hole
is identified. Finally, the missing portion of the cloud is
interpolated using a moving least squares approach. However,
this method produces unsatisfactory results on large holes,
especially if the underlying surface is complex. In [7], first, a
k nearest neighbors graph is constructed from the input point
cloud. Then, a plane tangent to the missing part is determined
and the hole boundary is projected onto this plane. Next, its
convex hull is computed and points are generated so that the sampling covers a dilated version of the convex hull. Finally, a Partial Differential Equation (PDE)
solving method on graphs is used to deform the generated
points to fit into the hole. This method gives good results
for small holes or relatively smooth surfaces, but it faces
problems if the hole boundary contains folds or twists. In [8],
a hole-filling strategy based on the tangent plane for each
hole boundary point is proposed. Traversing the boundary
in a clockwise direction, points on each tangent plane are
computed and inserted into the hole. This process is repeated
and refined ring by ring from the hole boundary towards the
interior. While the method can handle small holes, it faces the
same problems as [6], [7] on large holes and complex surfaces.
In this paper, we present an exemplar-based framework
for hole filling in 3D point clouds. The approach is inspired
by [9] and its success in image inpainting, where it has shown
excellent performance, especially on large holes and images
with complex texture. However, due to the lack of structure
in 3D point clouds, transplanting the image-based method
from [9] into the point-cloud framework is non-trivial. The
main focus of the paper is on the technical challenges involved
in this transition from the image-based framework to the
point-cloud framework. The proposed approach is presented
in Section II, followed by comparisons with [6], [7], [8] in
Section III, and conclusions in Section IV.
II. PROPOSED HOLE FILLING APPROACH
A. Preliminaries
Given that a point cloud has no explicit structure in its
representation, the first question to answer is: how do we
know there is a hole in it? There are a number of methods for
detecting holes in point clouds based on variations of the point
density [8], [10], [11]. In remote telepresence, point cloud data
is ordered and packetized for transmission to a remote location.
Here, missing packets’ indices can be used to determine the
locations in 3D space where the missing data used to be, much
like packet loss can be mapped to missing blocks in packet
video. We assume that the hole has been identified using one
of these existing methods.
Following the terminology in [9], the set of available points
in the point cloud is called the source region, denoted Φ.
The hole is denoted Ω, and its boundary is denoted δΩ. The boundary evolves inwards as hole filling progresses, so we also refer to it as the fill front.
Fig. 1: Illustration of the source region Φ, hole Ω, and fill front δΩ of a 3D point cloud.
Fig. 2: Illustration of several normal vectors in a typical ψp.
A cube centered at point q, with edges parallel to the x, y, z axes, is denoted ψq. Unless otherwise stated, the size of each cube we consider is 10 × 10 × 10 voxels.
Fig. 1 illustrates these concepts in the context of a 3D point
cloud. Hole filling consists of three steps – priority calculation
(Section II-B), template matching (Section II-C), and point
transfer (Section II-D) – applied iteratively until the hole is
filled. At each iteration, the highest-priority cube ψp centered on the fill front (p ∈ δΩ) is selected. The available points in this cube (ψp ∩ Φ) are used as a template to search the source region Φ. Then the best-matching cube from the source region
is used to transfer points to the hole, the fill front is updated,
and the next iteration starts. Each step is explained in the
remainder of this section.
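To make the cube notation concrete, below is a minimal Python sketch (not from the paper) of gathering the points that fall inside the axis-aligned cube ψq around a point; the voxel size and the helper name cube_neighborhood are assumptions made only for illustration.

```python
# Hypothetical sketch: collect the points inside an axis-aligned cube of
# edge_voxels x edge_voxels x edge_voxels voxels centered at a point p.
# The voxel size is an assumed parameter, not specified here by the paper.
import numpy as np

def cube_neighborhood(points, p, voxel_size, edge_voxels=10):
    """Return the points of `points` lying inside the cube centered at p."""
    half_edge = 0.5 * edge_voxels * voxel_size
    inside = np.all(np.abs(points - p) <= half_edge, axis=1)
    return points[inside]

# Example on a synthetic cloud.
cloud = np.random.rand(5000, 3)
psi_p = cube_neighborhood(cloud, cloud[0], voxel_size=0.02)
```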
B. Priority calculation
Like the inpainting method in [9], the proposed algorithm uses a best-first fill strategy that depends on the priority values assigned to each ψp for p ∈ δΩ. The assigned priority is biased towards those ψp that appear to lie on the continuation of ridges, valleys, and other surface elements that could more reliably be extended into the hole.
For a given ψp with p ∈ δΩ, the priority P(p) is defined as a product of two terms:
P(p) = D(p) C(p),    (1)
where D(p) is the data term and C(p) is the confidence term. As in [9], the confidence term C(p) is defined as the number of available points in ψp, C(p) = |ψp ∩ Φ|, in order to give higher priority to cubes with a larger number of available points, because they lead to more reliable template matching.
The data term D(p) depends on the structure of the data in
ψp. One of the main contributions of [9] was to realize that in
the case of image inpainting, certain structures such as strong
edges incident on the fill front may be more reliably extended
into the hole compared to flat, textureless regions. Our goal
here is to develop an analogous strategy for 3D point clouds.
The challenge is that the image processing concepts such as
contour normal and isophote, which allow for computation of
the data term D(p) in [9], do not exist in the context of 3D
point clouds.
To overcome this challenge, we propose several new con-
cepts. First we note that a ridge or a valley on a 3D surface
may be considered as analogous to an edge in an image –
it is characterized by large variation in one direction (per-
pendicular to the edge/ridge/valley) and small variation in the
other direction (parallel to the edge/ridge/valley). We measure
variations in the point cloud ψp by examining surface normal
vectors. For each point in ψp, six nearest neighbors are used
to fit a plane [12], and the unit normal vector to that plane
is considered the surface normal vector at the corresponding
point. An illustration is given in Fig. 2, where the point cloud
is represented as a surface for better visualization, and several
normal vectors are indicated. Plane fitting requires finding six
nearest neighbors to each point in ψp. The complexity of
finding k-nearest neighbors in a point cloud is O(n + k log n), where n is the number of points in the point cloud, while finding the normal vector is an O(n) procedure [12]. Normal
vectors could also be obtained by first fitting a mesh to the
point cloud and then computing the normals to the mesh faces.
However, this would be more expensive since triangular mesh
construction is an O(n²) procedure [13].
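As an illustration of this step, the following sketch estimates a per-point normal from the six nearest neighbors using the standard PCA plane fit (the eigenvector of the local covariance matrix with the smallest eigenvalue), in the spirit of [12]; the use of SciPy/NumPy and the function name are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of per-point normal estimation via a PCA plane fit
# to each point's six nearest neighbors.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=6):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)        # k + 1: the query point is its own neighbor
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)            # 3 x 3 covariance of the local neighborhood
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]              # direction of smallest variance = plane normal
    return normals

normals = estimate_normals(np.random.rand(1000, 3))
```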
To identify a possible ridge/valley that is incident on the fill
front, we measure the variation of normal vectors along the
fill front. Specifically, for a given ψp, the boundary variation term v(p) is defined as the sum of the variances of the three components (x, y, z) of the normal vectors along ψp ∩ δΩ. A large v(p) is an indication of a possible ridge/valley, but it is not sufficient to conclude the presence of such a structure. A true ridge/valley would exhibit a similar profile of normal vectors as one moves away from the boundary δΩ and into Ω. To capture this idea, we cluster the normal vectors of points in ψp ∩ δΩ using (unsupervised) Mean Shift Clustering [14] and denote the resulting number of clusters nδ. This roughly corresponds to the number of distinct directions of normal vectors along the fill front in ψp. We apply the same (unsupervised) clustering to all normal vectors in ψp and denote the resulting number of clusters nψ, which corresponds to the number of distinct directions of normal vectors in the whole ψp. If the underlying surface exhibits a similar profile as one moves away from the boundary within ψp, we expect nδ and nψ to be approximately the same (see Fig. 2); otherwise, nψ would be larger than nδ. The continuity term is defined as the ratio of these two numbers, c(p) = nδ/nψ. Finally, the data term is computed as
D(p) = v(p) c(p).    (2)
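A possible realization of the priority terms in (1)-(2) is sketched below, assuming the normals of the cube's points and of the subset lying on the fill front have already been estimated; the Mean Shift bandwidth is an assumption, since the paper does not specify clustering parameters.

```python
# Hypothetical sketch of the data term D(p) = v(p) c(p) and priority P(p) = D(p) C(p).
# Normals are assumed consistently oriented; the bandwidth is an illustrative value.
import numpy as np
from sklearn.cluster import MeanShift

def data_term(cube_normals, front_normals, bandwidth=0.2):
    # Boundary variation v(p): sum of per-component variances of normals on the fill front.
    v = np.var(front_normals, axis=0).sum()
    # Continuity c(p): ratio of the numbers of normal-direction clusters.
    n_front = len(np.unique(MeanShift(bandwidth=bandwidth).fit(front_normals).labels_))
    n_cube = len(np.unique(MeanShift(bandwidth=bandwidth).fit(cube_normals).labels_))
    return v * n_front / n_cube

def priority(cube_points, cube_normals, front_normals):
    # C(p) is the number of available points in the cube.
    return data_term(cube_normals, front_normals) * len(cube_points)
```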
C. Template matching
Once all priorities on the fill front have been computed, we select the highest-priority cube ψp̂, where p̂ = arg max_{p ∈ δΩ} P(p). The available points in ψp̂ are called a template. Then we search the source region for the cube ψq (ψq ⊂ Φ) that best matches this template. Prior to the actual matching, we first reduce the number of candidate cubes as follows.
First, candidate cubes ψq that contain fewer points than ψp̂ are eliminated from the matching process. After that, further elimination is carried out based on the surface curvature, similar to [15]. For the set of points in a cube ψ, the 3 × 3 covariance matrix is determined as in [15], then its eigenvalues λi (λ0 ≤ λ1 ≤ λ2) are calculated. The surface curvature within ψ is quantified by
σ(ψ) = λ0 / (λ0 + λ1 + λ2).    (3)
The deviation of the curvatures between ψp̂ and a given candidate cube ψq is computed as e(ψq) = [σ(ψp̂) − σ(ψq)]². We keep only the 10% of candidate cubes with the lowest e(ψq) and generate additional candidates by mirroring these cubes through the xy plane. These are used as matching candidates.
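The elimination step around (3) could be implemented along the following lines, assuming the template and the candidate cubes are given as N×3 arrays of points; the function names are illustrative, while the 10% retention ratio and the xy-plane mirroring follow the text above.

```python
# Hypothetical sketch of curvature-based candidate pruning.
import numpy as np

def surface_curvature(cube_points):
    # sigma(psi) = lambda0 / (lambda0 + lambda1 + lambda2), eigenvalues of the 3x3 covariance.
    eigvals = np.linalg.eigvalsh(np.cov(cube_points.T))   # ascending order
    return eigvals[0] / eigvals.sum()

def prune_candidates(template, candidates, keep_fraction=0.10):
    # Drop candidates with fewer points than the template.
    candidates = [c for c in candidates if len(c) >= len(template)]
    sigma_t = surface_curvature(template)
    errors = np.array([(sigma_t - surface_curvature(c)) ** 2 for c in candidates])
    keep = np.argsort(errors)[: max(1, int(keep_fraction * len(candidates)))]
    kept = [candidates[i] for i in keep]
    # Augment with mirror images through the xy plane (negate the z coordinate).
    return kept + [c * np.array([1.0, 1.0, -1.0]) for c in kept]
```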
To find the best match for ψp̂ among the candidate cubes, we proceed as follows. First, ψp̂ is translated such that point p̂ coincides with the origin; the translated cube is denoted ψ′p̂. Then, ψq is translated such that point q coincides with the origin; this translated cube is denoted ψ′q. Next, the best 3D rotation matrix R̂ is determined to align ψ′q with ψ′p̂, and the rotated cube is denoted ψ′q^R̂. Specifically, the rotation matrix is found as
R̂ = arg min_R d(ψ′q^R, ψ′p̂),    (4)
where R is a 3D rotation matrix and d(ψ′q^R, ψ′p̂) is the One-sided Hausdorff Distance (OHD) [16] from ψ′q^R to ψ′p̂,
d(ψ′q^R, ψ′p̂) = max_{a ∈ ψ′q^R} min_{b ∈ ψ′p̂} ‖a − b‖2,    (5)
where ‖·‖2 is the Euclidean norm.
The Iterative Closest Point (ICP) algorithm [17] is used to find R̂ from (4) for each candidate cube ψq, and the corresponding OHD after alignment, d(ψ′q^R̂, ψ′p̂), is recorded. The rotated cube with the smallest aligned OHD, denoted ψ′q̂^R̂, is then selected for transfer. Specifically,
ψ′q̂^R̂ = arg min_{ψ′q^R̂} d(ψ′q^R̂, ψ′p̂).    (6)
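Assuming each surviving candidate has already been translated to the origin and aligned to the template with ICP as in (4) (the ICP step itself is omitted), the selection in (6) amounts to evaluating the one-sided Hausdorff distance (5) and keeping the minimizer, as in this sketch:

```python
# Hypothetical sketch of the OHD (5) and the selection step (6).
import numpy as np
from scipy.spatial import cKDTree

def one_sided_hausdorff(src, dst):
    # Largest distance from a point of src to its nearest point in dst.
    dists, _ = cKDTree(dst).query(src, k=1)
    return dists.max()

def select_best_cube(template, aligned_candidates):
    # Pick the ICP-aligned candidate cube with the smallest OHD to the template.
    ohds = [one_sided_hausdorff(c, template) for c in aligned_candidates]
    return aligned_candidates[int(np.argmin(ohds))]
```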
D. Point transfer
Once the cube that best matches the template is selected, we need to decide which points to transfer to ψp̂. In general, point clouds are not necessarily sampled uniformly in 3D, so it is not straightforward to decide which points correspond to the hole region within ψp̂. To make this decision, we first translate ψ′q̂^R̂ so that its center coincides with p̂, and we denote the resulting cube ψ″q̂^R̂. Then we match the points of ψ″q̂^R̂ with those of ψp̂. Let xp̂ ∈ ψp̂ be a point in ψp̂, and let xq̂ be the closest (by Euclidean distance) point to xp̂ within ψ″q̂^R̂,
xq̂ = arg min_{x ∈ ψ″q̂^R̂} ‖x − xp̂‖2.    (7)
Then we say that xq̂ has been matched with xp̂ and add it to the set of matched points of ψ″q̂^R̂, denoted Mq̂. Finally, all the unmatched points, ψ″q̂^R̂ \ Mq̂, are transferred to ψp̂. This completes one iteration of the filling procedure; the fill front δΩ is then updated and the process is repeated until the hole is filled.
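A sketch of the transfer rule (7): each available template point claims its nearest neighbor in the recentered best-match cube, and the remaining, unclaimed points are the ones inserted into the hole; the array names are illustrative.

```python
# Hypothetical sketch of the point-transfer step.
import numpy as np
from scipy.spatial import cKDTree

def transfer_points(template_points, best_cube_points):
    if len(template_points) == 0:
        return best_cube_points            # nothing to match against; transfer everything
    _, matched = cKDTree(best_cube_points).query(template_points, k=1)
    unmatched = np.setdiff1d(np.arange(len(best_cube_points)), matched)
    return best_cube_points[unmatched]     # points to insert into the hole
```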
III. EXPERIMENTAL RESULTS
We test the proposed hole filling method on 3D point
clouds from two datasets: 1) the Microsoft Voxelized Upper
Bodies [18] that consists of point cloud models with a certain
level of measurement noise; 2) the Stanford 3D Scanning
Repository [19] containing noise-free models. Both qualitative
and quantitative results are presented and compared to those
of [6], [7], [8].
Holes were generated in 3D point clouds by removing all
points in a relatively large parallelepiped whose sides were
aligned with the x, y, and z axes. Three examples are shown
in the second column of Fig. 3 for the models from the
Stanford dataset. In this figure, surfaces are fitted over the
point clouds for better visualization, using the Ball Pivoting
algorithm [20]. In Fig. 3, the first column shows the original
point cloud/surface, while columns three to six show the
results of hole filling by the proposed method and those
of [6], [7], [8], respectively. It can be clearly seen that
the proposed method provides better reconstruction of the
underlying surface compared to the other three methods, which
tend to over-smooth.
We also show a quantitative comparison using the Normalized Symmetric Hausdorff Distance (NSHD) between the reconstructed set of points (denoted Sr) and the original set of points (denoted So):
ds(Sr, So) = (1/V) max{d(Sr, So), d(So, Sr)},    (8)
where d(·,·) is the OHD introduced in (5) and V is the volume of the smallest axis-aligned parallelepiped enclosing the given 3D point cloud. Tests were performed on the Andrew
and Ricardo models from the Microsoft dataset and four
models from the Stanford dataset. In each 3D point cloud,
we randomly selected the locations of 15 holes (punched by
a parallelepiped whose dimensions were, on average, 20% of
the range of data in each direction) and filled them using the
various hole-filling methods. Since different point clouds have different dimensions, the hole size also varies from model to model. Table I shows the average NSHD (± standard deviation) over the 15 test cases. The proposed method produces a much lower NSHD than the other methods, at least three times lower than the next best method, [7].
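For reference, the NSHD of (8) can be computed along the following lines; here the normalizing volume V is taken as that of the axis-aligned bounding box of the original cloud, which is one reading of the definition above.

```python
# Hypothetical sketch of the NSHD metric (8).
import numpy as np
from scipy.spatial import cKDTree

def one_sided_hausdorff(src, dst):
    dists, _ = cKDTree(dst).query(src, k=1)
    return dists.max()

def nshd(reconstructed, original):
    extent = original.max(axis=0) - original.min(axis=0)
    volume = float(np.prod(extent))        # volume V of the axis-aligned bounding box
    return max(one_sided_hausdorff(reconstructed, original),
               one_sided_hausdorff(original, reconstructed)) / volume
```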
The run time results of the four methods are shown in
Table II. The fastest among these is [8] because it relies on
relatively simple computations in the immediate neighbour-
hood of the hole. However, its accuracy is the lowest. The
methods in [6], [7] use relatively expensive pre-processing
(mesh construction and k nearest neighbors graph construc-
tion, respectively), which significantly increases their run time.
The proposed method avoids mesh/graph construction and
therefore has a lower run time compared to [6], [7]. These
results were obtained in MATLAB R2015b on a 2.2 GHz MacBook Pro with an Intel Core i7 processor and 16 GB of memory. No special code optimization was performed for any of the methods.
Fig. 3: Hole filling results on Armadillo (row 1), Bunny (row 2), and Dragon (row 3); columns, from left to right: original, hole, proposed method, [6], [7], and [8]. A surface is fitted over the point cloud for better visualization.
TABLE I: The average NSHD ±standard deviation
NSHD (×10⁷)
Pt. cloud Proposed [6] [7] [8]
Andrew 0.8 ±0.1 2.3 ±0.3 2.1 ±0.3 20.4 ±3.7
Ricardo 0.2 ±0.1 0.9 ±0.2 0.8 ±0.2 4.2 ±0.3
Armadillo 2.6 ±1.2 9.4 ±1.8 8.8 ±1.6 22.6 ±3.8
Buddha 1.0 ±0.1 6.7 ±0.9 6.6 ±1.0 24.1 ±3.7
Bunny 2.4 ±0.3 7.8 ±1.1 7.1 ±1.0 21.1 ±2.5
Dragon 1.5 ±0.3 10.8 ±1.8 7.7 ±1.4 19.6 ±3.6
TABLE II: The average run time per hole ±standard deviation
Run time (seconds)
Pt. cloud Proposed [6] [7] [8]
Andrew 452 ±26 688 ±32 931 ±29 26 ±4.8
Ricardo 410 ±23 620 ±31 866 ±27 21 ±4.7
Armadillo 257 ±26 315 ±28 504 ±33 28 ±3.8
Buddha 502 ±28 652 ±42 821 ±39 27 ±4.9
Bunny 211 ±19 343 ±38 526 ±34 13 ±3.1
Dragon 648 ±38 864 ±39 1083 ±46 36 ±7.3
IV. CONCLUSION
In this paper, we have presented an exemplar-based frame-
work for hole filling in 3D point clouds sampled from sur-
faces. Visual and quantitative comparison with three existing
methods shows that the proposed framework is better able to
handle large holes with complex underlying surface structure,
with reasonable computational complexity. In the future, we
plan to extend this framework to include the recovery of point
cloud attributes such as color.
REFERENCES
[1] S. Setty, S. A. Ganihar, and U. Mudenagudi, “Framework for 3D object
hole filling,” in Proc. IEEE NCVPRIPG, Dec. 2015, pp. 1–4.
[2] T. Ju, “Robust repair of polygonal models,” ACM Trans. Graph., vol. 23,
no. 3, pp. 888–895, 2004.
[3] J. Podolak and S. Rusinkiewicz, “Atomic volumes for mesh completion,”
in Symposium on Geometry Processing, 2005, pp. 33–41.
[4] W. Zhao, S. Gao, and H. Lin, “A robust hole-filling algorithm for
triangular mesh,” Visual Comput., vol. 23, no. 12, pp. 987–997, 2007.
[5] J. X. Wu, M. Y. Wang, and B. Han, “An automatic hole-filling algorithm
for polygon meshes,” Computer-Aided Design and Applications, vol. 5,
no. 6, pp. 889–899, 2008.
[6] J. Wang and M. M. Oliveira, “Filling holes on locally smooth surfaces
reconstructed from point clouds,” Image and Vision Computing, vol. 25,
no. 1, pp. 103–113, Jan. 2007.
[7] F. Lozes, A. Elmoataz, and O. Lezoray, “PDE-based graph signal pro-
cessing for 3-D color point clouds: Opportunities for cultural heritage,”
IEEE Signal Process. Mag., vol. 32, no. 4, pp. 103–111, 2015.
[8] V. S. Nguyen, H. T. Manh, and T. N. Tien, “Filling holes on the surface
of 3D point clouds based on tangent plane of hole boundary points,” in
Proc. ACM SoICT, 2016, pp. 331–338.
[9] A. Criminisi, P. Perez, and K. Toyama, “Region filling and object
removal by exemplar-based image inpainting,” IEEE Trans. Image
Process., vol. 13, no. 9, pp. 1200–1212, Sept. 2004.
[10] G. H. Bendels, R. Schnabel, and R. Klein, “Detecting holes in point set
surfaces,” in Proc. WSCG’06, Jan.-Feb. 2006.
[11] V. S. Nguyen, T. H. Trinh, and M. H. Tran, “Hole boundary detection
of a surface of 3D point clouds,” in Proc. IEEE ACOMP’15, Nov. 2015,
pp. 124–129.
[12] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle,
“Surface reconstruction from unorganized points,” SIGGRAPH Comput.
Graph., vol. 26, no. 2, pp. 71–78, Jul. 1992.
[13] M. Zou, “An algorithm for triangulating 3D polygons,” Master’s thesis,
Washington University in St. Louis, USA, 2013.
[14] Y. Cheng, “Mean shift, mode seeking, and clustering,” IEEE Trans.
Pattern Anal. Mach. Intell., vol. 17, no. 8, pp. 790–799, Aug 1995.
[15] M. Pauly, M. Gross, and L. P. Kobbelt, “Efficient simplification of point-
sampled surfaces,” in Proc. Visualization’02, 2002, pp. 163–170.
[16] P. Cignoni, C. Rocchini, and R. Scopigno, “Metro: Measuring error on
simplified surfaces,” Computer Graphics Forum, vol. 17, no. 2, pp. 167–
174, 1998.
[17] S. Rusinkiewicz and M. Levoy, “Efficient variants of the ICP algorithm,”
in Int. Conf. 3-D Digital Imaging and Modeling, 2001, pp. 145–152.
[18] C. Loop, Q. Cai, S. O. Escolano, and P. A. Chou, “Microsoft voxelized
upper bodies - a voxelized point cloud dataset,” in ISO/IEC JTC1/SC29
Joint WG11/WG1 (MPEG/JPEG), m38673/M72012, 2016.
[19] M. Levoy, J. Gerth, B. Curless, and K. Pull, “The Stanford 3D scanning
repository,” [Online] https://graphics.stanford.edu/data/3Dscanrep/.
[20] F. Bernardini, J. Mittleman, H. Rushmeier, C. Silva, and G. Taubin,
“The ball-pivoting algorithm for surface reconstruction,” IEEE Trans.
Vis. Comput. Graphics, vol. 5, no. 4, pp. 349–359, 1999.

Article
Models of non-trivial objects resulting from a 3d data acquisition process (e.g. Laser Range Scanning) often contain holes due to occlusion, reflectance or transparency. As point set surfaces are unstructured surface representations with no adjacency or connectivity information, defining and detecting holes is a non-trivial task. In this paper we investigate properties of point sets to derive criteria for automatic hole detection. For each point, we combine several criteria into an integrated boundary probability. A final boundary loop extraction step uses this probability and exploits additional coherence properties of the boundary to derive a robust and automatic hole detection algorithm.