Multispectral LiDAR point cloud highlight
removal based on color information
ZHONGZHENG LIU,1,2 SHALEI SONG,1,* BINHUI WANG,3 WEI GONG,3 YANHONG RAN,1 XIAXIA HOU,1 ZHENWEI CHEN,1 AND FAQUAN LI1

1State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan, Hubei 430071, China
2University of Chinese Academy of Sciences, Beijing 100049, China
3State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, Hubei 430072, China
*songshalei@apm.ac.cn
Abstract: With the rapid development of light detection and ranging (LiDAR) technology, multispectral LiDAR (MSL) can realize three-dimensional (3D) imaging of ground objects by acquiring rich spectral information. Although color restoration has been achieved on the basis of the full-waveform data of MSL, further improvement of the visual effect of color point clouds still faces many challenges. In this paper, a highlight removal method for MSL color point clouds is proposed to explore the potential of 3D visualization. First, the MSL reflection model is introduced according to the radar equation and the Phong model, and the restored color of the MSL point clouds is shown to comprise diffuse and specular components. Second, a data conversion method is proposed to improve the processing efficiency of massive point clouds by spatial dimension reduction and data compression. Then, the visual saliency map after color denoising is used to obtain the highlight region, the unknown information of which is recovered from the global or local color information. Finally, three representative targets are selected and evaluated by qualitative and quantitative validation, which verifies that the method can effectively recover high-quality highlight-free MSL point clouds.
© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
Three-dimensional (3D) vision has been a research hotspot in computer vision in recent years. As the main technical means of 3D vision, light detection and ranging (LiDAR) can acquire spatial information to constitute point clouds, with the unique advantages of high-precision, all-day, and large-scale detection [1]. LiDAR point clouds have been widely used in many applications, such as autonomous driving, digital construction, and target detection [2–5].
Most LiDAR point clouds are currently monochromatic, lacking abundant spectral or color information. Many efforts based on active and passive imaging data fusion have been made to compensate for this insufficiency [6,7]. However, the fused data suffer from shortcomings such as dependency on solar illumination and shadowing effects [8,9]. Moreover, achieving complete matching between discrete echo points and continuous plane pixels is still difficult [10].
With the advancement of supercontinuum laser sources and photoelectric detection techniques, LiDAR technology is developing from single-wavelength toward multispectral and even hyperspectral systems [11–15]. This new kind of multispectral LiDAR (MSL) can acquire 3D point clouds with multispectral information at a single laser footprint by increasing the number of receiving bands. Numerous studies have promoted MSL data in target recognition and physical property discrimination, showing considerable superiority over traditional monochromatic LiDAR data [16–18].
#461764 https://doi.org/10.1364/OE.461764
Journal © 2022 Received 20 Apr 2022; revised 10 Jun 2022; accepted 5 Jul 2022; published 20 Jul 2022
In addition to the capability of detecting 3D physical properties, multispectral point clouds can be inverted into color point clouds, which provide great potential for 3D visualization [19]. Since passive imagery compensation is not required, the color point clouds generated by MSL can overcome the shortcoming of traditional monochrome point clouds and meet the increasing application requirements in 3D scene reconstruction.
However, specular reflection during laser transmission inevitably results in point cloud highlights in the color space. Highlights are a common phenomenon in active and passive imaging. An ideal Lambertian surface is generally assumed to produce only diffuse reflection, but most surfaces also produce specular reflection. Considerably strong specular highlights will induce saturated echo signals and seriously reduce the quality and visualization of color point clouds. Highlight removal has been extensively studied in passive imaging, and methods such as physical model analysis, mathematical estimation, and light source compensation have been proved effective [20–23]. However, for monochromatic point clouds, highlights are unfortunately often ignored or masked in point cloud processing due to the lack of color information. Some LiDAR reflection models with both diffuse and specular reflection have been proposed to analyze the effect of specular reflection on point clouds and correct the laser echo intensity [24–27]. This model-based approach relies on the selection or estimation of the surface roughness and specular reflection coefficient, which is difficult to apply to point cloud highlight removal in real scenes. Other studies have attempted to remove the virtual points generated by specular reflection in 3D point clouds by estimating multiple glass planes [28].
The above studies analyze the effect of specular reflection on point cloud data. However, there is still no effective solution for highlight removal in 3D color space. Benefiting from the 3D color point cloud visualization provided by MSL, a new solution to this inevitable problem of point cloud highlights becomes possible.
In this paper, we propose a new MSL point cloud highlight removal method based on color information. In addition, experiments were designed to prove the feasibility and accuracy of the method. The current work aims to provide a new idea for enhancing the visualization of color point clouds and to further promote the development of 3D imaging using the MSL system.
The contributions of this paper are as follows:
1. The problem of highlight removal for MSL color point clouds is raised and solved for the first time;
2. The reflection mechanism of MSL is investigated and a first highlight removal method for MSL point clouds is proposed. It is worth noting that this method is based on the local or global prior of color information and has universal applicability to most scanned targets;
3. A dimension reduction algorithm is proposed to improve the computational efficiency of massive color point clouds. The complete target color information is retained without considering complex geometric information.
The remainder of this paper is organized as follows. Section 2 introduces the MSL system and reflection model. Section 3 proposes the highlight removal methods for MSL point clouds, including the conversion between point clouds and images, highlight detection, and highlight inpainting. Section 4 presents the experimental results. Section 5 concludes the paper with future research issues.
2. MSL system and reflection model
2.1. MSL system
The instrument involved in this paper was an MSL system introduced in previous studies [19,29]. As presented in Fig. 1(a), a supercontinuum laser source covering almost the entire visible band (400–700 nm) emits a discrete illumination pulse for scanning detection. Considering the spectral energy distribution of this broadband laser source, the detector response at different receiving bands, and the CIE 1931 color space chromaticity, the most appropriate RGB bands in the visible spectral portion are selected for the receiving channels of color information, namely 434.5–474.5, 517–537, and 612–644 nm [16]. The laser reference signal and the return waveforms of each RGB channel are detected and recorded by a 12-bit digitizer. Subsequently, the color point clouds are obtained through field programmable gate array online processing and calculation.
Fig. 1. Data acquisition and processing of the MSL system.
Figure 1(b) illustrates multi-target detection by recording multi-echo time domains (t1, t2, and t3) at the RGB channels. A multispectral waveform decomposition method [29] is applied to the recorded time domains and intensity information to invert the color information of each echo, while the delay of each time window is measured as spatial information. The original data are recorded by the proposed system in full-waveform form and finally calculated into point clouds with RGB color. This new kind of color point cloud dataset integrates 3D point clouds and color information, which overcomes the shortcoming of traditional monochrome point clouds in color visualization.
2.2. MSL reflection model
In the field of LiDAR, the radar equation [30] widely used for the radiometric calibration of a diffuse (Lambertian) target is:

$$P_r = \frac{P_t D_r^2 \rho_d \cos\theta\, \eta_{sys}\eta_{atm}}{4R^2} \tag{1}$$

where $P_t$ is the transmitted laser power, $P_r$ is the received laser power, $D_r$ is the diameter of the receiver aperture, $R$ is the range from the laser to the target, $\theta$ is the incidence angle, $\eta_{sys}$ and $\eta_{atm}$ are two transmission factors, and $\rho_d$ is the diffuse reflectance, which is the ratio of $P_r$ to $P_t$ in each determined direction.
An ideal Lambertian object is assumed to produce only diffuse reflection. However, most objects in the real world are non-Lambertian, and the laser reflected from their surfaces comprises diffuse and specular components. When an incident laser strikes a surface that is smooth at the
microscopic level, part of the laser is reflected in the form of specular reflection, causing the
so-called highlight phenomenon. The empirical Phong surface model [31] describes the way a
surface reflects light as a combination of diffuse and specular reflection. Such a model has a
wide application in computer graphics and 3D model rendering. Based on the radar equation and
the Phong model, the received laser intensity of MSL can be described as follows.
$$
\begin{aligned}
I(\lambda) &= I_d(\lambda)(1 - k_s) + I_s(\lambda)\,k_s \\
I_d(\lambda) &= I_{in}(\lambda)\,\rho_d(\lambda)\cos\theta \\
I_s(\lambda) &= I_{in}(\lambda)\cos^{n(\lambda)}(2\theta) \\
I_{in}(\lambda) &= \frac{P_t(\lambda)\,D_r^2\,\eta_{sys}\eta_{atm}}{4R^2}
\end{aligned} \tag{2}
$$

where $\lambda \in \{r, g, b\}$ denotes the wavelengths at the R, G, and B spectral channels, $I$ is the MSL received intensity, $I_{in}$ is the MSL transmitted intensity, $k_s$ is the specular reflection proportion coefficient that depends on geometry, $I_d$ is the diffuse reflection component, $I_s$ is the specular reflection component, and $n$ is the surface roughness exponent that depends on geometry and wavelength.
Equation (2) indicates that the received intensity $I$, which is influenced by the receiving wavelength, incident angle, and object surface roughness, is determined by the combination of diffuse and specular components. The received intensity $I$ can be calibrated by a standard whiteboard, which is regarded as an ideal Lambertian reference, to achieve color restoration. The echo intensity of the whiteboard $I_0$ is taken as a normalized reference:
$$I_0(\lambda) = I_{in}(\lambda)\,\rho_0(\lambda)\cos\theta \tag{3}$$

where $\rho_0$ is the diffuse reflectance of the whiteboard.
To ensure color uniformity, it is assumed that the ambient light conditions are consistent with the CIE standard illuminant D65; in other words, the color restoration of the MSL point clouds is under illuminant D65. Using the ratio of $I$ to $I_0$, the target color obtained by MSL is:
$$
\begin{aligned}
I_{color}(\lambda) &= \frac{I(\lambda)}{I_0(\lambda)} = D(\lambda)(1 - k_s) + S(\lambda)\,k_s \\
D(\lambda) &= \frac{\rho_d(\lambda)}{\rho_0(\lambda)} \\
S(\lambda) &= \frac{\cos^{n(\lambda)}(2\theta)}{\rho_0(\lambda)\cos\theta}
\end{aligned} \tag{4}
$$

where $I_{color}$ is the restored color of the target, and $D$ and $S$ are the colors of the diffuse and specular reflection, respectively. Notably, $D$ is related to the target reflectance at each receiving band, and $S$ is related to the wavelength, incident angle, and target roughness.
$D$ and $S$ illustrate the influence of the target characteristics and laser characteristics on the color restoration, respectively. The additional term $S$ introduces the complex phenomenon of highlights. Compared with monochromatic LiDAR, the broadband laser source receives more potential interference during multi-channel detection, which results in more possible highlights. Meanwhile, the highlights are wavelength dependent due to the spectroscopic design and the photodetector response variation of the different receiving spectral channels.
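To make Eqs. (2)–(4) concrete, the following minimal sketch evaluates the restored color of one spectral channel as the weighted sum of its diffuse and specular parts. All parameter values (reflectances, $k_s$, $n$, the incidence angle) are illustrative assumptions for a smooth, near-normally illuminated surface, not measured MSL values.

```python
import numpy as np

def restored_color(rho_d, rho_0, theta, k_s, n):
    """Restored color I_color per Eq. (4) for one spectral channel.

    rho_d : diffuse reflectance of the target at this channel
    rho_0 : diffuse reflectance of the reference whiteboard
    theta : incidence angle in radians (assumed < 45 deg here)
    k_s   : specular reflection proportion coefficient (geometry dependent)
    n     : surface roughness exponent (geometry and wavelength dependent)
    """
    D = rho_d / rho_0                                       # diffuse color
    S = np.cos(2.0 * theta) ** n / (rho_0 * np.cos(theta))  # specular color
    return D * (1.0 - k_s) + S * k_s

# Illustrative comparison at near-normal incidence on a smooth surface:
theta = np.deg2rad(5.0)
print(restored_color(0.4, 0.95, theta, k_s=0.0, n=50))  # pure diffuse target
print(restored_color(0.4, 0.95, theta, k_s=0.6, n=50))  # highlight-prone target
```

The second call shows how a nonzero $k_s$ inflates the restored color well beyond its diffuse-only value, which is exactly the highlight behavior analyzed above.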
Figure 2 shows massive point clouds of a real scene; from left to right are the monochromatic point clouds at the RGB channels and the MSL color point clouds, respectively. In Fig. 2(a), the monochromatic point clouds at the RGB channels can acquire spatial information and some fuzzy textures. The MSL color point clouds, however, are directly obtained by an overall scan without overlaying passive images, which markedly improves the point cloud visualization. As shown in Fig. 2(b), the highlights of a writing board are marked in a blue rectangle. The highlights have different influences at the various spectral channels due to their wavelength dependence. In addition, MSL can realize multichannel simultaneous detection, which further improves the capability to detect and remove the highlights.
Fig. 2. Large-scale scene point clouds of a conference room with highlights. From left to right are the monochromatic point clouds at RGB channels and the MSL color point clouds, respectively. (a) Entire room scene. (b) A writing board with highlights.
3. Methods
As shown in Fig. 2, highlights can have a serious impact on point cloud visualization. According to Eq. (4), the calculation of the specular reflection component $S$ requires an estimate of $k_s$ and $n$, which are related to the surface roughness. However, these two factors are usually empirical values specific to particular targets, which complicates the direct estimation of the specular reflection component without prior knowledge of these values.
Highlight removal has always been a research hotspot of image processing in computer vision, and many proposed methods have achieved promising results. Different from monochromatic LiDAR, MSL can directly obtain color point clouds, which makes it possible to draw on existing image processing methods. To remove highlights from point clouds with unknown materials and unknown regions, color denoising is first applied to produce realistic colors of the MSL point clouds in non-highlight regions. The color of the MSL point clouds in the highlight regions is then recovered from the global or local color information. This approach is expected to achieve color uniformity of the MSL point clouds and fully eliminate the highlights.
The flowchart of the visual enhancement for MSL point clouds, containing four main steps, is displayed in Fig. 3. First, color restoration is conducted to obtain the initial color point clouds from the raw signals at the RGB channels. Next, the conversion between point clouds and images is performed with the color information retained. Then, the highlight region is detected by the visual saliency map after color denoising. Finally, the highlights are inpainted by solving the optimization problem of the objective function and similarity, and the final color point clouds are obtained through color assignment.
3.1. Conversion
Massive point cloud data are acquired to ensure the completeness of 3D imaging. However, this large amount of data reduces the processing efficiency of MSL color point clouds. To solve this problem, the 3D color point clouds can be converted into a 2D image in a certain field of view. The conversion by spatial dimension reduction and data compression improves the data processing efficiency; meanwhile, the visual enhancement is performed accurately without sacrificing color information. Figure 4 shows the projection of point clouds onto a plane, with red and blue respectively representing the point clouds and the plane.
Fig. 3. Flowchart of the point cloud highlight removal, which comprises preprocessing, conversion, highlight detection, and highlight inpainting.

Fig. 4. Projection of point clouds onto a plane, comprising three steps.
Before point cloud projection, the point cloud data are preprocessed using a traditional point cloud geometric denoising method. In this step, the outlier noise points near or far from the surface of the main target point clouds are removed, which effectively improves the precision of the projection. The 3D point clouds of the target can be projected onto 2D planes from different angles, which results in different images. The optimal plane for the projection of the target point clouds must first be identified to retain as much information as possible in the image. Least squares (LS) [32] and random sample consensus (RANSAC) [33] are common plane fitting methods. LS fits all the data, which leads to unsatisfactory fitting in the case of large data offsets. On the contrary, RANSAC can flexibly deal with large data offsets by fitting the main data. Therefore, RANSAC is suitable for point clouds. First, the threshold th1 is set to determine whether a point is invalid. A point is invalid when its distance to the fitting plane exceeds th1:
$$th_1 = \mu_{range} + 3\sigma_{range} \tag{5}$$

where $\mu_{range}$ and $\sigma_{range}$ are the mean value and the standard deviation of the nearest range for each point.
Then, a number of candidate planes are randomly fitted. By counting the number of invalid points for this series of fitting planes, the plane with the fewest invalid points is selected as the best fitting plane. Based on the idea of weighted voting, this method can overcome the deviation caused by a few discrete points in a specific perspective and ensure the optimal solution for most points.
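A minimal sketch of this plane search, assuming the point cloud is an (N, 3) NumPy array; the trial count and function names are our own, while the invalid-point criterion follows Eq. (5) and the voting idea described above.

```python
import numpy as np

def fit_projection_plane(points, th1, n_trials=500, rng=None):
    """Find the fitting plane with the fewest invalid points (RANSAC-style).

    A point is invalid when its distance to the candidate plane exceeds th1.
    Returns (point_on_plane, unit_normal) of the best plane found.
    """
    rng = np.random.default_rng(rng)
    best_plane, fewest_invalid = None, np.inf
    for _ in range(n_trials):
        # Three non-collinear points define a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:            # degenerate (collinear) sample, skip
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)   # point-to-plane distances
        n_invalid = int(np.sum(dist > th1))
        if n_invalid < fewest_invalid:          # vote: fewest invalid wins
            fewest_invalid, best_plane = n_invalid, (p0, normal)
    return best_plane
```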
The point clouds are then projected onto the plane according to Eq. (6):

$$p_1 = p_0 - d\cdot\vec{n} \tag{6}$$

where $p_0$ is the 3D coordinate before the projection, $p_1$ is the 3D coordinate after the projection, $d$ is the distance from the point to the plane, and $\vec{n}$ is the normal vector of the plane.
After projection, the target point clouds lie in the fitting plane. To further convert the projected plane into an image, the plane is then rotated to a reference plane, such as XOY, XOZ, or YOZ. The corresponding rotation matrices for plane rotation around the X-, Y-, or Z-axis are as follows:
$$
R_X(\alpha) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha & 0 \\ 0 & \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\quad
R_Y(\alpha) = \begin{pmatrix} \cos\alpha & 0 & \sin\alpha & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\alpha & 0 & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\quad
R_Z(\alpha) = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 & 0 \\ \sin\alpha & \cos\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \tag{7}
$$
The final rotation matrix $R_T$ is related to the selection of the reference plane and the rotation direction. For example, if the projected plane takes XOY as the reference plane and rotates around the X-axis, then the obtained $R_T$ is:

$$R_T = R_Z(-\alpha_1)\cdot R_X(\alpha_2)\cdot R_Z(\alpha_1) \tag{8}$$
After rotation, the point clouds in the rotated plane are obtained by:

$$\begin{pmatrix} p_2 \\ 1 \end{pmatrix} = R_T\cdot\begin{pmatrix} p_1 \\ 1 \end{pmatrix} \tag{9}$$

where $p_1 = (x\ \ y\ \ z)$, and $p_2 = (x'\ \ y'\ \ z')$ is the 3D coordinate after rotation. Since the selected reference plane is XOY, $z'$ is clearly zero.
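The rotation step of Eqs. (7)–(9) can be sketched as follows. How the angles $\alpha_1$ and $\alpha_2$ are derived from the fitted plane normal is not spelled out above, so they are taken here as given inputs; the matrices themselves are the standard homogeneous rotations of Eq. (7).

```python
import numpy as np

def rot_x(a):
    """Homogeneous rotation about the X-axis, Eq. (7)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def rot_z(a):
    """Homogeneous rotation about the Z-axis, Eq. (7)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rotate_to_xoy(points, alpha1, alpha2):
    """Apply R_T of Eq. (8) to homogeneous coordinates as in Eq. (9).

    alpha1 and alpha2 are assumed to come from the fitted plane normal
    (azimuth and tilt); after rotation, z' of every point is ~0.
    """
    R_T = rot_z(-alpha1) @ rot_x(alpha2) @ rot_z(alpha1)
    homo = np.hstack([points, np.ones((len(points), 1))])   # (N, 4)
    return (homo @ R_T.T)[:, :3]                             # p2 with z' ~ 0
```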
To convert the point clouds into an image, $p_2$ is divided into grids of an appropriate size, which is guided by the average point density. The average RGB values of the point clouds in each grid are taken as the corresponding pixel values of the grid to form the image. The size of each grid is related to the pixel resolution of the image: a small grid size indicates a high pixel resolution. Moreover, the initial 3D coordinates are stored in each grid to form the depth image. Therefore, the depth image can later be converted back into color point clouds through guided color assignment, that is, by taking the pixel values of the corresponding grid as the color of the point clouds.
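A sketch of this gridding step under the assumptions above: the rotated in-plane points are binned into square cells of edge grid_size, the per-cell RGB values are averaged into pixel values, and the original 3D coordinates are stored per cell so that the resulting depth image can later guide the color assignment back to the point cloud. All names are our own.

```python
import numpy as np

def rasterize(points_rot, points_orig, colors, grid_size):
    """Grid rotated, in-plane points into a color image plus a depth image.

    points_rot  : (N, 3) coordinates after Eq. (9); z' is ~0 on the XOY plane
    points_orig : (N, 3) original coordinates, kept per cell for the way back
    colors      : (N, 3) RGB values; grid_size follows the average point density
    """
    xy = points_rot[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / grid_size).astype(int)
    h, w = ij.max(axis=0) + 1
    image = np.zeros((h, w, 3))
    count = np.zeros((h, w))
    depth = [[[] for _ in range(w)] for _ in range(h)]
    for (i, j), c, p in zip(ij, colors, points_orig):
        image[i, j] += c              # accumulate RGB per cell -> pixel value
        count[i, j] += 1
        depth[i][j].append(p)         # keep 3D coordinates for color assignment
    image[count > 0] /= count[count > 0, None]   # average RGB per occupied cell
    return image, depth
```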
3.2. Highlight detection
The next procedure of our method is based on the converted depth images. Unlike a camera, which captures a 2D image at a time, the MSL obtains 3D point clouds through point-by-point scanning. Thus, the massive point clouds contain some uncertainties induced by system and measurement errors. The following two calibration methods are used to increase the confidence of the MSL point clouds:
1. The laser reference signal and the echo signals of the RGB channels are simultaneously recorded in full-waveform form; the pulse energy fluctuation can then be calibrated by calculating the intensity ratio of each echo to the corresponding laser reference signal.
2. A standard whiteboard is applied to calibrate the reflectance of different targets.
Despite the above calibrations, residual noise still influences the point cloud quality. Figure 5 analyzes the distribution and statistics of the intensity values (0–255) for the point clouds of the writing board in Fig. 2(c). In Fig. 5(a), the intensity distribution along the marked blue line reveals that noise and highlights in color space cause the intensity values to fluctuate or saturate. In Fig. 5(b), the intensity statistics in the marked blue rectangle show that the intensity probability is scattered by color noise and highlights. According to the influence on the intensity statistics, the color noise is mainly divided into impulse noise in region N1 and Gaussian noise in region N2, and the highlights are mainly divided into weak highlights in region H1 and saturated highlights in region H2. It is also found that color noise and highlights have various effects at different spectral channels due to the wavelength dependence. For example, the probabilities in regions N1 and H2, and the full width at half maximum (FWHM) and peak heights in region N2, differ between the R, G, and B channels.

Fig. 5. Distribution and statistics of intensity values (0–255) for the point clouds of the writing board in Fig. 2(c), where the R, G, and B channels are presented left to right. (a) The distribution of intensity values along the marked blue line. (b) The statistics of intensity values in the marked blue rectangle.
Therefore, highlight detection needs to eliminate the interference of color noise. Rather than using a fixed size, the filtering window $\Omega_1$ is chosen to accommodate converted images with different resolutions. Since the types of color noise have distinct probability distributions, color denoising is performed by a combination of a global bilateral filter [34] and a local median filter [35]. The specific color denoising strategies are as follows:
1. For all pixels, bilateral filtering is performed first;
2. If a pixel satisfies $\Delta I(i) > th_2$ according to Eq. (10), median filtering is then performed:

$$
\Delta I(i) = \max\big(\Delta I_r(i), \Delta I_g(i), \Delta I_b(i)\big), \qquad
\Delta I_\lambda(i) = \frac{1}{m}\sum_{j\in\Omega_1} I_\lambda(j) - I_\lambda(i), \qquad
th_2 = 5\sigma_{color,\lambda} \tag{10}
$$

where $\Delta I(i)$ is the chromatism of pixel $i$, $m$ is the size of $\Omega_1$, and $\sigma_{color,\lambda}$ is the standard deviation of the color at each channel. Notably, $m$ is related to the point cloud density, the total number of target point clouds, and the resolution requirements.
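A sketch of this two-stage strategy, assuming an 8-bit RGB image from the conversion step. OpenCV's bilateralFilter stands in for the global bilateral filter of [34], its parameters are illustrative, and the per-channel thresholds of Eq. (10) are collapsed into a single conservative value for brevity.

```python
import numpy as np
import cv2                                  # bilateral filter; any equivalent works
from scipy.ndimage import uniform_filter, median_filter

def denoise_color_image(image, m):
    """Two-stage color denoising: global bilateral, then local median (Eq. (10)).

    image : (H, W, 3) uint8 RGB image converted from the point clouds
    m     : number of pixels in the filtering window (window edge ~ sqrt(m))
    """
    # Stage 1: bilateral filtering on all pixels (smooths Gaussian noise while
    # preserving edges). These filter parameters are illustrative assumptions.
    out = cv2.bilateralFilter(image, d=9, sigmaColor=30, sigmaSpace=9)

    size = max(3, int(round(np.sqrt(m))))
    out_f = out.astype(np.float64)
    # Chromatism per Eq. (10): window mean minus pixel value, max over channels.
    local_mean = uniform_filter(out_f, size=(size, size, 1))
    delta = np.max(np.abs(local_mean - out_f), axis=2)
    th2 = 5.0 * out_f.reshape(-1, 3).std(axis=0).max()   # 5 * sigma_color

    # Stage 2: median filtering only on pixels flagged as impulse noise.
    med = np.stack([median_filter(out_f[..., c], size=size) for c in range(3)],
                   axis=2)
    mask = delta > th2
    out_f[mask] = med[mask]
    return out_f.astype(np.uint8)
```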
After color denoising, highlight detection can be conducted according to the highlight level. For pixels with saturated highlights, the specular reflection component occupies most of the color. According to Eq. (11), only the threshold $th_3$, set to 200, is needed to determine such pixels:

$$I_{min}(i) = \min\big(I_r(i), I_g(i), I_b(i)\big)\ \begin{cases} > th_3, & \text{saturated highlight} \\ < th_3, & \text{other} \end{cases} \tag{11}$$
Saturated highlights are relatively easy to detect, but the detection of weak highlights, which lie between saturated highlights and non-highlights, is the key point. Considering the inconsistency of the highlights, an improved visual saliency detection algorithm based on the Frequency-tuned algorithm [36] is proposed to effectively detect weak highlights at the RGB channels.
First, Gaussian smoothing with an adaptive filtering window $\Omega_1$ is used to preserve the overall information of the image. The saliency of the pixels at the RGB channels can then be calculated as follows:

$$J_\lambda(i) = \big\| \bar{I}^{G}_{\lambda} - I^{G}_{\lambda}(i) \big\| \tag{12}$$

where $J_\lambda$ is the normalized saliency of pixel $i$, $I^{G}_{\lambda}(i)$ is the pixel value after Gaussian smoothing, and $\bar{I}^{G}_{\lambda}$ is the mean value of $I^{G}_{\lambda}(i)$.
Pixels with weak highlights are then screened if the saliency at any channel is larger than $th_4$:

$$J_\lambda(i)\ \begin{cases} > th_4, & \text{weak highlight} \\ < th_4, & \text{other} \end{cases} \tag{13}$$

$$th_4 = \bar{J}_\lambda + 3\sigma_{saliency,\lambda} \tag{14}$$

where $\bar{J}_\lambda$ is the mean saliency of all pixels, and $\sigma_{saliency,\lambda}$ is the standard deviation of the saliency at each channel.
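The detection chain of Eqs. (11)–(14) can be sketched as follows; the fixed Gaussian sigma is an illustrative stand-in for the adaptive window $\Omega_1$, and the channel-wise normalization follows the frequency-tuned formulation of Eq. (12).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_highlights(image):
    """Detect saturated and weak highlight pixels per Eqs. (11)-(14).

    image : (H, W, 3) RGB image after color denoising, values in 0-255.
    Returns a boolean highlight mask.
    """
    img = image.astype(np.float64)
    th3 = 200.0
    saturated = img.min(axis=2) > th3                 # Eq. (11): all channels high

    # Frequency-tuned saliency per channel (Eq. (12)): distance between each
    # Gaussian-smoothed pixel and the channel mean. The fixed sigma stands in
    # for the adaptive window Omega_1 described in the text.
    smooth = gaussian_filter(img, sigma=(2.0, 2.0, 0.0))
    saliency = np.abs(smooth.mean(axis=(0, 1)) - smooth)
    saliency /= saliency.max(axis=(0, 1)) + 1e-12     # normalize per channel

    # Eqs. (13)-(14): a pixel is a weak highlight if any channel's saliency
    # exceeds that channel's mean saliency plus three standard deviations.
    th4 = saliency.mean(axis=(0, 1)) + 3.0 * saliency.std(axis=(0, 1))
    weak = np.any(saliency > th4, axis=2)
    return saturated | weak
```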
3.3. Highlight inpainting
After highlight detection, the color restoration of the highlight pixels is considered. However, most image highlight removal methods are not applicable to MSL point clouds due to the contradiction of highlight consistency. Besides, in MSL point clouds the highlight is wavelength dependent, which makes it difficult to estimate the highlight value without priors of the target reflectance. Therefore, the color restoration of highlight pixels for MSL is an ill-posed inverse problem that has no well-defined unique solution. To solve this problem, additional prior knowledge must be introduced. That is, MSL highlight inpainting follows the assumption that the known and unknown pixels have similar statistical characteristics and texture structures [37].
For this, it is acceptable to transform the assumption into local or global priors to provide images with reasonable textures and a satisfactory visual effect after completion. The highlight inpainting is conducted using the Space-time completion algorithm [38] with a variable window $\Omega_1$ depending on the image resolution. The Space-time completion algorithm presents a new framework for the completion of missing information based on local structures. It poses the task of completion as a global optimization problem with a well-defined objective function and a similarity measure. Since the completion of MSL highlights is static, we simplify the objective function and the similarity measure as:
$$
\mathrm{Coherence}(H, F) = \sum_{p\in H}\max_{q\in F}\,\mathrm{sim}(W_p, V_q), \qquad
\mathrm{sim}(W_p, V_q) = \exp\!\left(-\frac{\| W_p(x, y) - V_q(x, y) \|^2}{2\sigma^2}\right) \tag{15}
$$

where $H$ are the highlight pixels, $F$ are the non-highlight pixels, $p$ and $q$ run over all pixels in $H$ and $F$, $W$ and $V$ are patches of a given sampling window size measured only by RGB values, and $\sigma$ is a variable smoothness index.
Each iteration of the Space-time completion algorithm requires a global search of the image. To accelerate the convergence of the algorithm iterations, the PatchMatch algorithm [39] is introduced to optimize the process of finding the nearest neighbor of a certain patch. The core of PatchMatch is to greatly reduce the scope of the search by exploiting image continuity.
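The following sketch is a greedy, brute-force stand-in for one completion pass: it scores a random set of candidate source patches with the similarity measure of Eq. (15) and copies the best patch center into each highlight pixel. The published pipeline instead optimizes the coherence objective globally over several iterations and accelerates the nearest-patch search with PatchMatch, so this is only a structural illustration.

```python
import numpy as np

def patch_similarity(Wp, Vq, sigma=25.0):
    """Patch similarity of Eq. (15), measured only on RGB values."""
    return np.exp(-np.sum((Wp - Vq) ** 2) / (2.0 * sigma ** 2))

def inpaint_highlights(image, mask, half=3, n_candidates=1000, rng=None):
    """Fill highlight pixels (mask == True) from coherent non-highlight patches."""
    rng = np.random.default_rng(rng)
    img = image.astype(np.float64)
    out = img.copy()
    h, w = mask.shape

    # Sample candidate source patches lying fully inside the non-highlight region.
    ys, xs = np.nonzero(~mask)
    ok = (ys >= half) & (ys < h - half) & (xs >= half) & (xs < w - half)
    ys, xs = ys[ok], xs[ok]
    pick = rng.choice(len(ys), size=min(n_candidates, len(ys)), replace=False)
    sources = [img[y - half:y + half + 1, x - half:x + half + 1]
               for y, x in zip(ys[pick], xs[pick])]

    for y, x in zip(*np.nonzero(mask)):
        yc = int(np.clip(y, half, h - half - 1))   # clamp target patch to image
        xc = int(np.clip(x, half, w - half - 1))
        Wp = img[yc - half:yc + half + 1, xc - half:xc + half + 1]
        best = max(sources, key=lambda Vq: patch_similarity(Wp, Vq))
        out[y, x] = best[half, half]               # copy the best patch center
    return out
```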
After the highlight inpainting, the color assignment is guided to convert the depth image back into color point clouds. The final color point clouds are no longer interfered with by the specular reflection term $S$. Moreover, the continuity of texture and color is restored as much as possible.
4. Results and discussion
It is assumed that the color restoration of the MSL point clouds was conducted under the CIE illuminant D65. The performance of the point cloud highlight removal method was evaluated on three targets of different materials scanned by the MSL system.
4.1. Dataset
As analyzed in Section 2, targets of different materials have varying reflectance, which affects whether the detection signal contains a specular reflection component. One representative scene in Fig. 2, namely the writing board, is selected to evaluate the accuracy and feasibility of the proposed method. In addition, for the evaluation and reference of color calibration, a standard color checker and a colored deer model are selected.
The actual images of the datasets are shown in Fig. 6. The color checker in Fig. 6(a), which follows the CIE color standards, comprises 24 marked color squares of 4 × 4 cm, including natural object, chromatic, primary, and gray-scale colors. The writing board in Fig. 6(b) is aluminum framed and has a smooth, flat surface with a uniform color. The deer model in Fig. 6(c) comprises smooth fiberglass and has a complex color composition and 3D structure.
Among the above targets, the color checker is used to evaluate the color denoising result, while the writing board and the deer model, which are prone to producing highlights, are selected for the evaluation of the highlight removal method.
Fig. 6. Actual images of the datasets. (a) Color checker. (b) Writing board. (c) Deer model.
4.2. Qualitative validation
The visual perception ability of the human eye can distinguish the dynamic changes of objects in color, which can be used as a qualitative evaluation of color quality. Figure 7 shows the results of color denoising for the point clouds of the color checker. From left to right are the monochromatic point clouds at the RGB channels and the color point clouds, respectively. In Fig. 7(a), it is observed that the initial point clouds are influenced by color noise, which exhibits wavelength dependence. In other words, the R channel has evident impulse noise, while the G and B channels suffer from serious Gaussian noise. In Fig. 7(b), the Gaussian and impulse noise are filtered out by the color denoising method. As can be seen, our approach recovers realistic colors of the MSL point clouds in each color square, which lays the foundation for the subsequent highlight removal.
The highlight removal results for the point clouds of the writing board are shown in Fig. 8. From top to bottom are the monochromatic point clouds at the RGB channels and the combined color point clouds, respectively. The LiDAR system has a much smaller field of view, corresponding to a laser divergence angle of 0.5 mrad. This means that it tends to produce relatively small highlight areas rather than large-scale ones; the point cloud highlights shown in Fig. 8 are the largest areas in the MSL detection results. In Fig. 8(a), the presence of color noise and highlights considerably affects the visualization of the MSL point clouds. To avoid the interference of noise, we first perform the color denoising method to obtain the noise-free data in Fig. 8(b). Due to the highlight inconsistency, the saliency map of a single channel can hardly detect the highlights correctly. Figure 8(c) illustrates that a desired highlight region can only be obtained by combining the characteristics of multiple channels for highlight detection. Then, the modified Space-time completion algorithm is used to inpaint the detected highlights. After the highlight inpainting, the highlights marked in pink almost disappear in the saliency map of Fig. 8(d). Figure 8(e) further reveals that this approach can recover realistic-looking colors of the MSL point clouds despite saturated highlights.
Fig. 7. Results of color denoising for the point clouds of the color checker. From left to right are the monochromatic point clouds at RGB channels and the MSL color point clouds. (a) Initial point clouds. (b) Noise-free point clouds.
Fig. 8. Results of highlight removal for the point clouds of the writing board. From top to bottom are the monochromatic point clouds at RGB channels and the MSL color point clouds. (a), (b), and (e) Initial, noise-free, and highlight-free point clouds, respectively. (c) and (d) Visual saliency map with highlight detection marked in pink before and after highlight removal.
In addition to the low-textured writing board, the high-textured deer model with a complex 3D structure is further applied to test the feasibility of the highlight removal method in Fig. 9. From the 1st–3rd rows of Fig. 9(c), we can find that the highlight areas detected by a single channel differ significantly. In contrast, multi-channel highlight detection can robustly improve the accuracy of highlight recognition, as shown in the 4th row of Fig. 9(c). Besides, owing to the full consideration of texture continuity, the highlight removal algorithm can unveil the masked color and texture to improve the 3D imaging quality. For the highlight region, the unknown color information is finally replaced by similar colors after several iterations, based on the known color information of the neighboring non-highlight region. This method does not lead to color distortion in the repaired region, but achieves maximum consistency of the visualization effect. As shown in the 4th row of Fig. 9(e), the highlights of the point cloud of the colored deer model appear at the junction of the red and yellow regions, and the restored color maintains the continuity of its color and texture.
4.3. Quantitative validation
To intuitively compare the effects of the color denoising and highlight removal algorithms, the intensity values of the point clouds are analyzed in Fig. 10. Compared with the original intensity values in Fig. 5(a), the color denoising makes the intensity values more concentrated in Fig. 10(a), and the highlight removal method further corrects the deviation of the intensity values in Fig. 10(b). Figures 10(c) and (d) show the influence of the noise filtering and highlight removal on the intensity statistics. The color denoising concentrates the intensity probability around the mean value, with higher peaks. The highlight removal algorithm makes the probability statistics of the weak highlights in region H1 and the saturated highlights in region H2 converge to the mean. Combining the above analyses, the algorithm successfully eliminates the influence of color noise and highlights on the intensity values of the point clouds.
The quantitative evaluation of highlight removal for MSL point clouds is quite challenging, since obtaining ground-truth color point clouds without reflection distortion is difficult. To this end, except for the color checker, which has standard colors, the target image is taken as the color reference. For quantitative validation, the peak signal-to-noise ratio (PSNR) and the relative standard deviation (RSD) are used to evaluate the authenticity and stability of the color. A larger PSNR or a smaller RSD indicates a better result.
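For reference, both metrics can be computed as in the short sketch below; the per-area usage mirrors the evaluation protocol described in the text, and the array names are our own.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between reference and measured colors."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def rsd(values):
    """Relative standard deviation (%) of one channel over a uniform color area."""
    v = np.asarray(values, float)
    return 100.0 * v.std() / v.mean()

# Usage on one monochromatic area, e.g. a single color-checker square:
# 'square' is an (N, 3) array of RGB intensities, 'ref_rgb' the reference color.
# print(psnr(np.tile(ref_rgb, (len(square), 1)), square))
# print([rsd(square[:, c]) for c in range(3)])   # RSD of R/G/B
```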
Figure 11 displays the PSNR and RSD results of the color denoising for the point clouds of the color checker. In Fig. 11(a), squares 8, 13, 14, 17, and 18 cannot obtain a significant PSNR improvement because their original PSNR is limited by low intensity values. Nevertheless, the PSNR improvement in most of the other squares after filtering is larger than 1 dB. This PSNR result evidences the effectiveness of the color denoising in making most of the representative colors in the MSL point clouds of the color checker realistic.
The RSD at the R, G, and B channels of the 24 color squares, shown respectively in Figs. 11(b), (c), and (d), is calculated to accurately evaluate the uniformity of the color. Although the MSL system has achieved some success in color restoration by using the lognormal function and pulse accumulation methods [19], the proposed color denoising method can further reduce the RSD of each channel. Meanwhile, the RSD result also shows that the measured data have considerably higher accuracy at the R channel than at the other two channels, which is related to the original data stability. After noise filtering, the RSD of the R channel ranges from 0.1% to 6.0%, while those of the G and B channels exceed 6% in some squares.
The restored color of the MSL point clouds is under the CIE standard illuminant D65, while the images in Fig. 6 were taken under daytime light conditions. This explains the color difference observed in the MSL point clouds. Table 1 shows the quantitative evaluation of the highlight removal method for the MSL point clouds of the three targets, including the color checker, the writing board, and the deer model. Notably, the evaluation of each target is the comprehensive result over its monochromatic areas, such as the 24 color squares of the color checker, the writing board surface excluding the metal frame, and the five color areas of the deer model. Table 1 illustrates that the PSNR and RSD results of the initial point cloud data of the deer model cannot reach the level of the other targets due to the complexity of its 3D structure and texture. However, the PSNR and RSD results of all three targets were improved to varying degrees.
Fig. 9. Results of highlight removal for the point clouds of the deer model. From top to bottom are the monochromatic point clouds at RGB channels and the MSL color point clouds. (a), (b), and (e) Initial, noise-free, and highlight-free point clouds, respectively. (c) and (d) Visual saliency map with highlight detection marked in pink before and after highlight removal.
Fig. 10. Distribution and statistics of intensity values (0–255) for the color point clouds of the writing board (the same position as in Fig. 2(b)) after color denoising and highlight removal, where left to right are the R, G, and B channels. (a) and (b) Distribution of intensity values after noise filtering. (c) and (d) Statistics of intensity values after noise filtering and highlight removal.
Fig. 11. Local results of PSNR and RSD for the color point clouds of the color checker. (a) PSNR result: the triangle-marked pink lines and square-marked green lines represent the point clouds of the color checker before and after filtering, respectively. (b), (c), and (d) RSD results at the R, G, and B channels: the right-triangle-marked red lines and circle-marked blue lines represent the point clouds before and after filtering, respectively.
It is also noted that, as a substep of highlight removal, the color denoising also contributes to the visual enhancement of the point clouds. Except for the deer model, the PSNR of the targets increased to more than 20 dB, and the RSD decreased to less than 10%. The maximum PSNR reached 27.9 dB and the minimum RSD reached 2.2%/3.2%/5.9% (R/G/B), which is an acceptable result for the attempt of the MSL system to display the visualization effect of massive color point clouds.
Table 1. Quantitative evaluation of the MSL point cloud highlight removal method.

                  PSNR (dB)                            RSD (%) of R/G/B
Target            Initial  Noise-free  Highlight-free  Initial         Noise-free      Highlight-free
Color checker     19.5     20.4        -               6.2/8.6/11.8    2.2/3.2/5.9     -
Writing board     17.2     19.3        27.9            23.4/19.0/12.3  17.5/15.3/8.4   7.0/6.9/6.1
Deer model        15.1     15.8        16.5            24.9/23.6/67.3  20.1/21.4/51.6  18.3/17.2/30.4
The experiments on three representative targets show that the proposed method is not limited by the material, color, texture, or 3D structure of the target and achieves satisfactory highlight removal for MSL point clouds. For the color point clouds of large-scale complex scenes, further verification of the applicability of the method is left for future research.
5. Conclusion
The MSL system can obtain color point clouds directly, which is becoming a new trend in 3D imaging. Compared with traditional monochromatic point clouds, color point clouds have great potential for visualization. To deal with the highlights arising during this new form of data acquisition, we proposed an MSL point cloud highlight removal method. Based on the radar equation and the Phong illumination model, we analyzed the reflection characteristics of MSL and found that the color of the point clouds comprises diffuse and specular components. The point clouds are projected onto the optimal fitting plane to obtain the corresponding depth image, which simplifies the data processing. After color denoising, the specular highlights are detected by visual saliency. Then, the highlight inpainting is performed according to the global or local color information. Finally, the processed image is converted back into color point clouds through guided color assignment. Three targets with different textures and colors were selected for MSL scanning experiments to verify the validity of the algorithm. The qualitative and quantitative analyses reveal that the algorithm is effective and robust in highlight removal and provides a new idea for the visual enhancement of MSL point clouds. In future research, we will further improve the reflection model of MSL point cloud data and optimize the highlight removal approach for complex scenes.
Funding. National Natural Science Foundation of China (42171347); National Key Research and Development Program of China (2018YFB0504500).

Disclosures. The authors declare no conflicts of interest.

Data availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
References
1. W. Wagner, A. Ullrich, V. Ducic, T. Melzer, and N. Studnicka, "Gaussian decomposition and calibration of a novel small-footprint full-waveform digitising airborne laser scanner," ISPRS J. Photogramm. Remote Sens. 60(2), 100–112 (2006).
2. B. Yang, Y. Liu, Z. Dong, F. Liang, B. Li, and X. Peng, "3D local feature BKD to extract road information from mobile laser scanning point clouds," ISPRS J. Photogramm. Remote Sens. 130, 329–343 (2017).
3. R. Klokov and V. Lempitsky, "Escape from Cells: Deep Kd-Networks for the Recognition of 3D Point Cloud Models," in 2017 IEEE International Conference on Computer Vision (ICCV) (2017), pp. 863–872.
4. H. Jing and Y. Suya, "Point cloud labeling using 3D Convolutional Neural Network," in 2016 23rd International Conference on Pattern Recognition (ICPR) (2016), pp. 2670–2675.
5. Y. Guo, F. Sohel, M. Bennamoun, J. Wan, and M. Lu, "A novel local surface feature for 3D object recognition under clutter and occlusion," Inf. Sci. 293, 196–213 (2015).
6. T. Sankey, J. Donager, J. McVay, and J. B. Sankey, "UAV lidar and hyperspectral fusion for forest monitoring in the southwestern USA," Remote Sens. Environ. 195, 30–43 (2017).
7. M. Alonzo, B. Bookhagen, and D. A. Roberts, "Urban tree species mapping using hyperspectral and lidar data fusion," Remote Sens. Environ. 148, 70–83 (2014).
8. J. Zhang and X. Lin, "Advances in fusion of optical imagery and LiDAR point cloud applied to photogrammetry and remote sensing," Int. J. Image Data Fusion (2016).
9. E. Puttonen, J. Suomalainen, T. Hakala, E. Räikkönen, H. Kaartinen, S. Kaasalainen, and P. Litkey, "Tree species classification from fused active hyperspectral reflectance and LIDAR measurements," Forest Ecol. Manag. 260(10), 1843–1852 (2010).
10. G. Kereszturi, L. N. Schaefer, W. K. Schleiffarth, J. Procter, R. R. Pullanagari, S. Mead, and B. Kennedy, "Integrating airborne hyperspectral imagery and LiDAR for volcano mapping and monitoring through image classification," Int. J. Appl. Earth Obs. 73, 323–339 (2018).
11. B. Wang, S. Song, S. Shi, Z. Chen, Y.-S. Li, D. Wu, D. Liu, and W. Gong, "Multichannel Interconnection Decomposition for Hyperspectral LiDAR Waveforms Detected From Over 500 m," IEEE Trans. Geosci. Remote Sensing 1, 1–14 (2021).
12. L. Matikainen, K. Karila, J. Hyyppä, P. Litkey, E. Puttonen, and E. Ahokas, "Object-based analysis of multispectral airborne laser scanner data for land cover classification and map updating," ISPRS J. Photogramm. Remote Sens. 128, 298–313 (2017).
13. Z. Niu, Z. Xu, G. Sun, W. Huang, L. Wang, M. Feng, W. Li, W. He, and S. Gao, "Design of a New Multispectral Waveform LiDAR Instrument to Monitor Vegetation," IEEE Geosci. Remote Sensing Lett. 12(7), 1506–1510 (2015).
14. P. Hartzell, C. Glennie, K. Biber, and S. Khan, "Application of multispectral LiDAR to automated virtual outcrop geology," ISPRS J. Photogramm. Remote Sens. 88, 147–155 (2014).
15. T. Hakala, J. Suomalainen, S. Kaasalainen, and Y. Chen, "Full waveform hyperspectral LiDAR for terrestrial laser scanning," Opt. Express 20(7), 7119–7127 (2012).
16. B. Chen, S. Shi, J. Sun, W. Gong, J. Yang, L. Du, G. Kuanghui, B. Wang, and B. Chen, "Hyperspectral lidar point cloud segmentation based on geometric and spectral information," Opt. Express 27(17), 24043 (2019).
17. Z. Wang, Y. Chen, C. Li, M. Tian, M. Zhou, W. He, H. Wu, H. Zhang, L. Tang, Y. Wang, H. Zhou, E. Puttonen, and J. Hyyppä, "A Hyperspectral LiDAR with Eight Channels Covering from VIS to SWIR," in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium (2018), pp. 4293–4296.
18. J. C. Fernandez-Diaz, W. E. Carter, C. Glennie, R. L. Shrestha, Z. Pan, N. Ekhtari, A. Singhania, D. Hauser, and M. Sartori, "Capability Assessment and Performance Metrics for the Titan Multispectral Mapping Lidar," Remote Sens. 8, 1 (2016).
19. B. Wang, S. Song, W. Gong, X. Cao, D. He, Z. Chen, X. Lin, F. Li, and J. Sun, "Color Restoration for Full-Waveform Multispectral LiDAR Data," Remote Sens. 12, 1 (2020).
20. R. Saha, P. Pratim Banik, S. Sen Gupta, and K.-D. Kim, "Combining highlight removal and low-light image enhancement technique for HDR-like image generation," IET Image Processing 14(9), 1851–1861 (2020).
21. M. W. Tao, J. Su, T. Wang, J. Malik, and R. Ramamoorthi, "Depth Estimation and Specular Removal for Glossy Surfaces Using Point and Line Consistency with Light-Field Cameras," IEEE Trans. Pattern Anal. Mach. Intell. 38(6), 1155–1169 (2016).
22. Q. Yang, J. Tang, and N. Ahuja, "Efficient and Robust Specular Highlight Removal," IEEE Trans. Pattern Anal. Mach. Intell. 37(6), 1304–1311 (2015).
23. H. Kim, H. Jin, S. Hadap, and I. Kweon, "Specular Reflection Separation Using Dark Channel Prior," in 2013 IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 1460–1467.
24. X. Qian, J. Yang, S. Shi, W. Gong, L. Du, B. Chen, and B. Chen, "Analyzing the effect of incident angle on echo intensity acquired by hyperspectral lidar based on the Lambert-Beckman model," Opt. Express 29(7), 11055–11069 (2021).
25. J. Wagen, U. T. Virk, and K. Haneda, "Measurements based specular reflection formulation for point cloud modelling," in 2016 10th European Conference on Antennas and Propagation (EuCAP) (2016), pp. 1–5.
26. A. Tatoglu and K. Pochiraju, "Point cloud segmentation with LIDAR reflection intensity behavior," in 2012 IEEE International Conference on Robotics and Automation (2012), pp. 786–790.
27. Q. Ding, W. Chen, B. King, Y. Liu, and G. Liu, "Combination of overlap-driven adjustment and Phong model for LiDAR intensity correction," ISPRS J. Photogramm. Remote Sens. 75, 40–47 (2013).
28. J. S. Yun and J. Y. Sim, "Virtual Point Removal for Large-Scale 3D Point Clouds with Multiple Glass Planes," IEEE Trans. Pattern Anal. Mach. Intell. 43(2), 729–744 (2021).
29. S. Song, B. Wang, W. Gong, Z. Chen, X. Lin, J. Sun, and S. Shi, "A new waveform decomposition method for multispectral LiDAR," ISPRS J. Photogramm. Remote Sens. 149, 40–49 (2019).
30. W. Wagner, "Radiometric calibration of small-footprint full-waveform airborne laser scanner measurements: Basic physical concepts," ISPRS J. Photogramm. Remote Sens. 65(6), 505–513 (2010).
31. B. T. Phong, "Illumination for computer generated pictures," Commun. ACM 18(6), 311–317 (1975).
32. J. Steinier, Y. Termonia, and J. J. Deltour, "Smoothing and differentiation of data by simplified least square procedure," Anal. Chem. 44(11), 1906–1909 (1972).
33. M. A. Fischler and R. C. Bolles, "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography," in Readings in Computer Vision, M. A. Fischler and O. Firschein, eds. (Morgan Kaufmann, San Francisco, CA, 1987), pp. 726–740.
34. C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271) (1998), pp. 839–846.
35. S. J. Ko and Y. H. Lee, "Center weighted median filters and their applications to image enhancement," IEEE Trans. Circuits Syst. 38(9), 984–993 (1991).
36. R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, "Frequency-tuned salient region detection," in 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops) (2009), pp. 1597–1604.
37. C. Guillemot and O. Le Meur, "Image Inpainting: Overview and Recent Advances," IEEE Signal Process. Mag. 31(1), 127–144 (2014).
38. Y. Wexler, E. Shechtman, and M. Irani, "Space-Time Completion of Video," IEEE Trans. Pattern Anal. Mach. Intell. 29(3), 463–476 (2007).
39. C. Barnes, E. Shechtman, A. Finkelstein, and D. Goldman, "PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing," ACM Trans. Graph. 28, 1 (2009).
... Next, the saliency weight (SYW) is determined for both images A and G to highlight the salient items that their eminence is attenuated when captured in an underwater environment. This is done using a frequency-tuned (FT) algorithm for salient area recognition proposed by [27]. Both G and A images must be processed by the FT algorithm to produce two saliency weights that are needed later when computing the normalized weights required for the fusion process. ...
Article
Full-text available
Humanity currently lives in a technological era that witnesses rapid progress in multiple fields. Digital image processing is one of the modern technologies that has provided practical answers to many challenges including image enhancement, analysis, reconstruction, recovery, compression, processing, and understanding. One of these notable challenges relates to underwater photography. Underwater images are always exposed to less-than-ideal conditions due to environmental and physical factors. These include refraction of light in water, scattering of particles and dust in the aquatic medium, lack of illumination in deep water, and poor contrast. These challenges make it extremely difficult to analyze and extract valuable information without advanced processing. In this study, an improved color balance-fusion algorithm is provided by improving the image visuality and modifying some equations to obtain sharper and clearer images. The proposed algorithm begins by finding the white balance of the input RGB color image, after that, it improves the intensity. Next, the edges are improved using Gamma separately. The weights are then found for each image and combined to find naive fusion. The resulting image is processed using a color retrieval algorithm to produce the final image. along with comparisons to eleven other algorithms with various processing methods. Experimental results showed that this algorithm can significantly improve underwater images, increasing image clarity and making colors clearer. The improvement rates reached 5.8389 and 2.6778 for UISM and UICM metrics, respectively.
Article
Full-text available
In the acquisition process of 3D cultural relics, it is common to encounter noise. To facilitate the generation of high-quality 3D models, we propose an approach based on graph signal processing that combines color and geometric features to denoise the point cloud. We divide the 3D point cloud into patches based on self-similarity theory and create an appropriate underlying graph with a Markov property. The features of the vertices in the graph are represented using 3D coordinates, normal vectors, and color. We formulate the point cloud denoising problem as a maximum a posteriori (MAP) estimation problem and use a graph Laplacian regularization (GLR) prior to identifying the most probable noise-free point cloud. In the denoising process, we moderately simplify the 3D point to reduce the running time of the denoising algorithm. The experimental results demonstrate that our proposed approach outperforms five competing methods in both subjective and objective assessments. It requires fewer iterations and exhibits strong robustness, effectively removing noise from the surface of cultural relic point clouds while preserving fine-scale 3D features such as texture and ornamentation. This results in more realistic 3D representations of cultural relics.
Article
The information extracted from waveform data of full-waveform light detection and ranging (LiDAR) has been widely used in applications such as 3D urban modeling, target recognition, and classification However, the presence of weak signals is inevitable in LiDAR systems. To enhance its effective detection capability and extraction accuracy, we propose a multispectral LiDAR (MSL) weak signals extraction (MSL-WSE) method. The measurement data from our MSL system were used to evaluate the performance of the proposed method. The correlation coefficient (R 2 ), root mean square error (RMSE) and effective extraction rate show that the MSL-WSE method accurately detected and extracted the waveform parameters of weak echo signals, providing the more realistic and fine-grained true color 3D point cloud.
Article
Full-text available
Hyperspectral light detection and ranging (HSL) can acquire the spatial and spectral information simultaneously, which can provide more information than hyperspectral imaging and single band lidar. However, the echo intensity from targets is influenced by incident angle, and relative studies were still limited which result in the effect of incident angle on HSL not being completely understood. In this study, the incident angle effect in the whole band of HSL was analyzed and corrected. Then, five types of vegetation sample with different spectral characteristics were collected at the leaf level. Spectral range changing from 550 to 830 nm with a 1 nm spectral resolution was obtained. Lambert-Beckman model was applied to analyze the effect of the incident angle on the echo intensity. The experimental results demonstrated that the Lambert-Beckman model can efficiently apply in fitting the changing of echo intensity with incidence angle and efficiently eliminate the specular effect of target. In addition, the coefficient of variation ratio is significantly improved compared to the reference target-based model. The results illustrated that, compared to reference target-based model, the Lambert-Beckman model can efficiently explain and correct the incident angle effect with specular reflection in HSL. In addition, it was found that the specular fraction Ks, which is reduced with the increasing of reflectance, is dominating the incident angle effect in the whole band, while roughness m keeps stable at different wavelengths. Thus, this research will provide notably advanced insight into correcting the echo intensity of HSL.
Article
Full-text available
The current full-waveform data at a single wavelength can mainly retrieve the geometric attributes of targets along the light path by detecting waveform components, resulting in the lack of spectral or color attribute information. This kind of device relies on a digital camera for acquiring the color information, however, which is inevitably limited by the lighting conditions and geometric registration errors. With the development of multispectral light detection and ranging (LiDAR) or even hyperspectral LiDAR that often utilize a supercontinuum laser source covering the whole visible light band, including red, green and blue bands, the simultaneous acquisition of color and spatial information becomes possible and makes passive imaging data no longer necessary. In this study, we propose a color restoration method for a full-waveform multispectral LiDAR (FWMSL) system. Additionally, we develop a multispectral lognormal function to fit the tailing echoes measured by FWMSL further accurately. Experimental data from our FWMSL system are used to evaluate the performance of the proposed method. The relative standard deviation, correlation coefficient (R 2) and color difference (∆E) metrics suggest that the color restoration for the full-waveform multispectral data is feasible.
Article
Full-text available
A low dynamic range (LDR) image may contain low-light and highlight areas due to the limited dynamic range of conventional image sensors. Low-light and highlight phenomena limit the color richness and visibility of objects in an image, which can reduce image quality and degrade accuracy in image recognition applications. To overcome this, high dynamic range (HDR)-like images have been developed with rich colors close to those seen by the human eye. In this paper, we propose a method to obtain an HDR-like image from a single LDR image by removing the specular component from highlight pixels and strengthening the actual color. We then select low-light image enhancement via illumination map estimation (LIME) as the low-light enhancement technique, based on a comparison with the gamma-based expansion operator (GEO). We evaluate our HDR-like output images with non-reference and full-reference metrics and compare our proposed method with six other methods. Visually, our proposed method also delivers more pleasing output than that of the competing methods.
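To make the "removing the specular component from highlight pixels" step concrete, here is a minimal sketch of one simple dichromatic-model heuristic, in which the per-pixel minimum channel above a global offset is treated as an estimate of the achromatic specular component; this is not the paper's exact algorithm, only a commonly used baseline.

```python
import numpy as np

def remove_specular_simple(img):
    """Illustrative specular-removal heuristic based on the dichromatic
    reflection model: the per-pixel minimum channel above a global
    offset approximates the (achromatic) specular component.
    img: float RGB array in [0, 1], shape (H, W, 3)."""
    min_channel = img.min(axis=2)               # per-pixel min over R, G, B
    offset = min_channel.mean()                 # global diffuse-offset estimate
    specular = np.clip(min_channel - offset, 0.0, None)
    return np.clip(img - specular[..., None], 0.0, 1.0)
```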
Article
Full-text available
Light detection and ranging (lidar) can record a 3D environment as point clouds, which are unstructured and difficult to process efficiently. Point cloud segmentation is an effective technology for solving this problem and plays a significant role in various applications, such as forestry management and 3D building reconstruction. Spectral information from images can improve segmentation results but suffers from varying illumination conditions and registration problems. New hyperspectral lidar sensor systems can solve these problems, with the capacity to obtain spectral and geometric information simultaneously. Previous segmentation work on hyperspectral lidar was mainly based on spectral information; the geometric segmentation methods widely used with single-wavelength lidar have not yet been employed for hyperspectral lidar. This study aims to fill this gap by proposing a hyperspectral lidar segmentation method with three stages. First, Connected-Component Labeling (CCL) using the geometric information is employed for base segmentation. Second, the output components of the first stage are split by spectral difference using Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Third, the components of the second stage are merged based on spectral similarity using Spectral Angle Match (SAM). Two indoor experimental scenes were set up for validation. We compared the performance of our method with that of a 3D and intensity feature based method. The quantitative analysis indicated that our proposed method improved the point-weighted score by 19.35% and 18.65% in the two experimental scenes, respectively. These results show that the geometric segmentation methods used for single-wavelength lidar can be combined with spectral information and contribute to more effective hyperspectral lidar point cloud segmentation.
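A minimal sketch of the three-stage pipeline follows, with DBSCAN on coordinates standing in for the paper's CCL base segmentation (an assumption for brevity); all parameter values and names are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def spectral_angle(a, b):
    """Spectral Angle Match (SAM) between two mean spectra, in radians."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def segment(xyz, spectra, geo_eps=0.05, spec_eps=0.1, merge_thresh=0.1):
    # Stage 1: geometric grouping (DBSCAN standing in for CCL here)
    geo_labels = DBSCAN(eps=geo_eps, min_samples=5).fit_predict(xyz)
    segments = []
    for g in set(geo_labels) - {-1}:
        idx = np.where(geo_labels == g)[0]
        # Stage 2: split each geometric component by spectral difference
        spec_labels = DBSCAN(eps=spec_eps, min_samples=5).fit_predict(spectra[idx])
        for s in set(spec_labels) - {-1}:
            segments.append(idx[spec_labels == s])
    # Stage 3: merge spectrally similar segments via SAM
    means = [spectra[seg].mean(axis=0) for seg in segments]
    merged, used = [], set()
    for i in range(len(segments)):
        if i in used:
            continue
        group = list(segments[i])
        for j in range(i + 1, len(segments)):
            if j not in used and spectral_angle(means[i], means[j]) < merge_thresh:
                group.extend(segments[j])
                used.add(j)
        merged.append(np.array(group))
    return merged
```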
Article
Full-text available
Information derived from waveform decomposition of full-waveform light detection and ranging (LiDAR) data has been widely used in vegetation detection and three-dimensional urban terrain modeling to investigate and interpret the structural diversity of surface coverage. Most prevailing waveform decomposition methods involve only a single wavelength, so they do not apply to full-waveform multispectral LiDAR (FWMSL) systems that simultaneously acquire spectral and geometric information. In this paper, we propose a new multispectral waveform decomposition (MSWD) method to explore the potential advantages of the FWMSL system. Both simulated data and measured data from our FWMSL system were used to evaluate the performance of the proposed method. The coefficient of determination (R²), root mean square error (RMSE), and relative RMSE (rRMSE) metrics suggest that the decomposition results derived from MSWD exhibit overall fitting accuracy comparable to that of a single-wavelength waveform decomposition (SWWD) method. We also propose a new evaluation indicator, the relative neighbor distance error (RNDE), to represent the relative error in the distance between adjacent targets. The simulation results show a clear superiority of MSWD over SWWD in discovering weak or overlapping components and retrieving accurate waveform parameters. The experimental results demonstrated a considerable improvement in RNDE (0.0100-0.0610) over the prevailing SWWD method (0.0566-0.2833). Unlike SWWD, MSWD initializes waveform components using mutually complementary wavelengths, thus delivering higher completeness and accuracy. MSWD can be extended to other FWMSL or full-waveform hyperspectral LiDAR systems with additional wavelengths.
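Reading the RNDE indicator as described, the minimal interpretation is the relative error of the estimated separation between two adjacent waveform components; a sketch under that assumption:

```python
import numpy as np

def rnde(est_positions, true_positions):
    """Illustrative relative neighbor distance error (RNDE):
    relative error of the estimated separation between two
    adjacent targets (positions in range units)."""
    d_est = abs(est_positions[1] - est_positions[0])
    d_true = abs(true_positions[1] - true_positions[0])
    return abs(d_est - d_true) / d_true
```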
Article
Full-text available
Optical and laser remote sensing provide resources for monitoring volcanic activity and surface hydrothermal alteration. In particular, multispectral and hyperspectral imaging can be used to detect lithologies and mineral alterations on the surface of actively degassing volcanoes. This paper proposes a novel workflow that integrates existing optical and laser remote sensing data for geological mapping after the 2012 Te Maari eruptions (Tongariro Volcanic Complex, New Zealand). The image classification is based on layer-stacking of image features (optical and textural) generated from high-resolution airborne hyperspectral imagery, terrain models derived from Light Detection and Ranging (LiDAR) data, and aerial photography. The images were classified using a Random Forest algorithm with input layers added from multiple sensors. Maximum classification accuracy (overall accuracy = 85%) was achieved by adding textural information (e.g., mean, homogeneity, and entropy) to the hyperspectral and LiDAR data. This workflow returned a total surface alteration area of ~0.4 km² at Te Maari, which was confirmed by fieldwork, lab spectroscopy, and backscattered electron imaging. Hydrothermal alteration on volcanoes forms precipitation crusts on the surface that can mislead image classification; therefore, we also applied spectral matching algorithms to discriminate between fresh, crust-altered, and completely altered volcanic rocks. This workflow confidently recognized areas with only surface alteration, establishing a new tool for mapping structurally controlled hydrothermal alteration, evolving debris flows, and hydrothermal eruption hazards. We show that data fusion of remotely sensed data can be automated to map volcanoes and significantly benefit the understanding of volcanic processes and their hazards.
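The layer-stacking-plus-Random-Forest step can be sketched in a few lines; the shapes, band counts, and feature names below are placeholders, not the study's actual data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative layer stack: hyperspectral bands, LiDAR terrain derivatives,
# and texture features (e.g., mean, homogeneity, entropy), all assumed to be
# co-registered rasters of shape (H, W). All values are placeholders.
H, W = 100, 100
hyperspectral = np.random.rand(H, W, 30)   # placeholder band stack
lidar_terrain = np.random.rand(H, W, 3)    # e.g., elevation, slope, aspect
texture = np.random.rand(H, W, 3)          # e.g., mean, homogeneity, entropy

stack = np.dstack([hyperspectral, lidar_terrain, texture]).reshape(H * W, -1)
labels = np.random.randint(0, 4, H * W)    # placeholder training labels

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
clf.fit(stack, labels)
class_map = clf.predict(stack).reshape(H, W)
```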
Article
Full-waveform hyperspectral light detection and ranging (FWHSL) data have been widely used in surface topography, vegetation detection, and 3-D urban terrain modeling, as they reveal the spatial distribution of a target along with detailed spectral information in the vertical direction. However, the echo signals of a target vary significantly between spectral channels due to the target's reflectance characteristics and the uneven energy distribution of the supercontinuum laser source. In particular, channels with weak reflectance over a long distance degrade the extraction accuracy of waveform parameters, which are essential for retrieving the spatial and spectral information of targets. This article proposes a multichannel interconnection decomposition method to improve the extraction accuracy of distance and spectral information at each pulse using hyperspectral waveform data. Two experiments were conducted to verify the long-distance detection performance of FWHSL. The first experiment detected a standard whiteboard, a green leaf, and a yellow leaf at roughly 518 m. The results demonstrated a considerable improvement in ranging precision and spectral detection using the proposed method compared with using only the optimal channel with the best data quality. The second experiment simultaneously detected two adjacent targets at a distance of approximately 518 m. The results showed a clear advantage of adding waveform channels in discovering overlapping components and retrieving accurate waveform parameters. The success rate of extracting two targets 60 cm apart was greatly increased, from 47% to 73%, by the multichannel interconnection waveform decomposition (MIWD) method.
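The core idea of cross-channel interconnection, that target geometry is shared across wavelengths, can be illustrated by seeding a weak channel's fit with the component position and width extracted from a high-SNR channel. This is only a sketch of that seeding idea, not the MIWD algorithm itself; names and signatures are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, mu, sigma):
    return amp * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2))

def fit_weak_channel(t, weak_waveform, strong_mu, strong_sigma):
    """Illustrative cross-channel seeding: component position and width
    extracted from a high-SNR channel initialize the fit of a weak
    channel, since target geometry is shared across wavelengths."""
    p0 = (weak_waveform.max(), strong_mu, strong_sigma)
    params, _ = curve_fit(gaussian, t, weak_waveform, p0=p0, maxfev=5000)
    return params
```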
Article
This article presents an adaptive hybrid-tracking (AHT) algorithm designed to process GNSS-R signals with a sufficient coherent component. Coherent GNSS-R signals have the potential to enable high-precision, high-resolution carrier-phase measurements for altimetry, sea-level monitoring, soil-moisture monitoring, flood mapping, snow-water-equivalent measurements, and more. The AHT algorithm incorporates the model inputs typically used in the master-slave open-loop (MS-OL) architecture into a closed phase-locked loop. Raw IF data recorded by the CYGNSS satellites over inland water, land, and open-ocean surfaces are used to demonstrate the performance of the AHT. The results show that the AHT algorithm achieves robustness comparable to the MS-OL implementation while maintaining the centimeter-level accuracy and excellent carrier-phase continuity achievable with a fine-tuned Kalman filter (KF)-based adaptive closed-loop (ACL) system. Moreover, the AHT is suitable for real-time implementation and is applicable to other radio signals of opportunity.
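The general open-loop-plus-closed-loop idea can be sketched as a per-epoch phase update in which a model-driven phase prediction is refined by a discriminator-based correction; this is a generic sketch of the hybrid concept, not the AHT algorithm, and all names and the loop-gain value are assumptions.

```python
import numpy as np

def hybrid_phase_update(model_phase, prompt_i, prompt_q, prev_correction,
                        loop_gain=0.1):
    """Illustrative hybrid phase update: an open-loop model phase
    (as in master-slave open-loop tracking) is refined by a
    Costas-style closed-loop discriminator correction."""
    # Costas-style discriminator, insensitive to data-bit sign flips
    error = np.arctan(prompt_q / prompt_i)
    correction = prev_correction + loop_gain * error
    return model_phase + correction, correction
```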
Article
Large-scale 3D point clouds (LS3DPCs) captured by terrestrial LiDAR scanners often include virtual points generated by glass reflection. These virtual points may degrade the performance of various computer vision techniques applied to LS3DPCs. In this paper, we propose a virtual point removal algorithm for LS3DPCs with multiple glass planes. We first estimate the multiple glass regions by modeling the reliability with respect to each glass plane, assigning high reliability to regions that return multiple echo pulses for each emitted laser pulse. We then test each point to determine whether it is a virtual point: for a given point, we recursively traverse all possible reflection trajectories and select the optimal trajectory, namely the one that yields a point at the symmetric location with geometric features similar to those of the given point. We evaluate the performance of the proposed algorithm on various LS3DPC models with diverse numbers of glass planes. Experimental results show that the proposed algorithm estimates the multiple glass regions faithfully and detects the virtual points successfully. Moreover, the proposed algorithm yields much better reflection-artifact removal than the existing method, both qualitatively and quantitatively.
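The symmetric-location test rests on a standard geometric operation, mirroring a point across a glass plane; a minimal sketch of that operation (helper name and plane representation are illustrative):

```python
import numpy as np

def reflect_point(p, plane_normal, plane_d):
    """Mirror a 3D point across a glass plane n·x + d = 0. A virtual
    point created by glass reflection should map back near a real
    surface point at this symmetric location."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)           # ensure a unit normal
    dist = np.dot(n, np.asarray(p, dtype=float)) + plane_d
    return p - 2.0 * dist * n
```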