Article
A Single LiDAR-Based Feature Fusion Indoor
Localization Algorithm
Yun-Ting Wang 1, Chao-Chung Peng 1,*, Ankit A. Ravankar 2 and Abhijeet Ravankar 3
1 Department of Aeronautics and Astronautics, National Cheng Kung University, Tainan 701, Taiwan; omiwanggg@gmail.com
2 Division of Human Mechanical Systems and Design, Faculty of Engineering, Hokkaido University, Sapporo 060-8628, Japan; ankit@eng.hokudai.ac.jp
3 Lab of Smart Systems Engineering, Kitami Institute of Technology, Hokkaido, Kitami 090-8507, Japan; abhijeetravankar@gmail.com
* Correspondence: ccpeng@mail.ncku.edu.tw; Tel.: +886-6-275-7575 (ext. 63633)
Received: 17 March 2018; Accepted: 18 April 2018; Published: 23 April 2018


Abstract: In past years, there has been significant progress in the field of indoor robot localization. To precisely recover the position, robots usually rely on multiple on-board sensors. Nevertheless, this affects the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) with interpolation is presented. As compared to the traditional ICP, the point cloud is first processed to extract corner and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates. Similarly, points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are also fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experiment comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method is effective in significantly reducing computation effort and is simultaneously able to preserve localization precision.
Keywords: indoor localization; pose estimation; iterative closest point; SLAM; LiDAR
1. Introduction
Simultaneous localization and mapping (SLAM) is a method of building a map under exploration and estimating vehicle pose based on sensor information in an unknown environment. When exploring an unpredictable environment, an unmanned vehicle is generally employed for both exploration and localization. In this regard, the vehicle could be equipped with a single sensor for detecting and identifying surroundings, or with two or even more sensors attached to enhance its estimation capability.
When considering different kinds of sensors [1], laser range finders (LRFs), vision, and Wi-Fi networks are popular sensing techniques for indoor localization tasks. Recently, with advancements in computer vision and image processing, many researchers have started investigating vision-based SLAM [2,3]. Under the condition that the captured images are matched sufficiently, features can be extracted using Scale-Invariant Feature Transform (SIFT) [4] or Speeded Up Robust Features (SURF) [5]. Other indoor localization methods consider the amplitude of the received signal from Wi-Fi networks [6–10]. These localization strategies depend on pre-installed wireless hardware devices on the site and thus may not be applicable in Wi-Fi-denied environments.
Sensors 2018,18, 1294; doi:10.3390/s18041294 www.mdpi.com/journal/sensors
For an ultra-low-cost SLAM module, previous works have considered a set of on-board ultrasonic sensors [11], which provide sparse measurements about the environment. However, the robot might lose its pose under complicated environments without the aid of robot kinematics information. To achieve robust image recognition [12], robot navigation, and map construction, the depth sensor Kinect v2 was considered [13]. Kinect v2 is based on the time-of-flight measurement principle and can be used in outdoor environments. Since multi-depth sensors are able to provide highly dense 3D data [14], the real-time computation effort is relatively high. Furthermore, for a well-constructed indoor environment, such a hardware configuration is not necessary for 2D robot positioning.
Owing to their light weight and portability, light detection and ranging (LiDAR) devices have attracted more and more attention [15,16]. LiDAR possesses a high sampling rate, high angular resolution, good range detection, and high robustness against environment variability. As a result, in this research, a single LiDAR is used for indoor localization.
By analyzing the position of features in each frame at every movement of the vehicle, one can figure out the vehicle's traveling distance and heading. With different scanning data, an iterative closest point (ICP) [17] algorithm is employed to find the most appropriate robot pose matching conditions, including rotation and translation. However, the ICP may not always lead to good pattern matching if the point cloud registration issue is not well addressed. In other words, better point registration leads to better robot pose estimation. To address this, point cloud outliers must be identified and recognized. Another issue when applying the ICP is computation efficiency. Since the ICP algorithm considers the closest-point rule to establish correspondences between points in the current scan and a given layout, the searching effort can increase dramatically when the scan or the layout contains large amounts of data.
The research in [18–24] has addressed and solved some of the problems in applying ICP, including (1) wrong point matching under large initial errors, (2) expensive correspondence searching, (3) slow convergence speed, and (4) outlier removal. For a robot pose subjected to a large initial angular displacement, iterative dual correspondence (IDC) was proposed [16]. However, it demands higher computation due to its dual-correspondence process. Metric-based ICP (MbICP) [21] considers a geometric distance that takes translation and rotation into account simultaneously. The correspondence between scans is established with this measure, and the minimization of the error is carried out in terms of this distance. The MbICP shows superior robustness in the presence of large angular displacement. Among various planar scan matching strategies, the Normal Distribution Transform (NDT) [22] and Point-to-Line ICP (PLICP) [23] demonstrate state-of-the-art scan matching accuracy. NDT transforms scans onto a grid space and tends to find the best fit by maximizing the normal distribution in each cell. The NDT does not require point-to-point registration, so it enhances matching speed. However, the selection of the grid size dominates estimation stability. The PLICP considers normal information from the environment's geometric surface and tries to minimize the distance projected onto the normal vector of the surface. In addition, a closed-form solution is also given such that the convergence speed can be drastically improved.
To obtain precise and high-bandwidth robot pose estimation, the vehicle's dynamics and traveling status can be further integrated into a Kalman filter [25]. A Kalman filter provides the best estimate to eliminate noise and provides a better robot pose prediction. Other researchers have considered wheel odometry fusion-based SLAM [26,27], which integrates robot kinematics and encoder data for pose estimation. However, it might not be suitable for the realization of a portable localization system.
Based on the aforementioned issues, in this work, the LiDAR is considered as the only sensor for mapping and localization. To avoid mismatched point registration and to enhance matching speed, we propose a feature-based weighted parallel iterative closest point (WP-ICP) architecture inspired by [18,23,28–31]. The main advantages of the proposed method are as follows: (a) Point sizes of the model set and data set are significantly reduced so that the ICP speed can be enhanced. (b) A split and merge algorithm [28,29] is considered to divide the point cloud into two feature groups, namely, corner and line segments. The algorithm works by matching points labeled as corners to the corner candidates; similarly, points labeled as lines can only be matched to the line candidates. As a result, it attenuates any possibility of point cloud mismatch.
In this paper, it is supposed that the well-constructed indoor layout is given in advance. The main design objective is to reduce the computation effort and to maintain the indoor positioning precision. The rest of this paper is organized as follows. In Section 2, an adaptive breakpoint detector is first introduced for scan point segmentation. A clustering algorithm and a split–merge approach are further considered for point clustering and feature extraction, respectively. In Section 3, the WP-ICP algorithm is proposed. Section 4 presents real experiments to evaluate the effectiveness of the proposed method. Finally, Section 5 outlines conclusions and future work.
2. Feature Extraction
For feature extraction, a robot is employed in an indoor environment and moves within a given layout, and a localization algorithm is introduced. In addition, real-time sensing and pose estimation are key for practical realization. Feature extraction plays an important role in reducing the amount of point cloud data for computational speedup.
2.1. The Main Concept of Feature Extraction
Due to the high scan resolution of LiDAR and the given layout, construction of a KD-Tree will
be highly time-consuming if all data points are fed into the ICP algorithm. Therefore, extracting
informative feature points to represent the environment is one of the important tasks.
In this research, a feature extraction scan matching procedure is proposed, as summarized in Figure 1. First, all the scanning points are separated into many clusters by invoking the adaptive breakpoint detector (ABD) [28]. The main idea of an ABD is to find the breakpoints in each scan, where the scanning direction of a LiDAR is counterclockwise and continuous. Therefore, by detecting breakpoints, the algorithm can determine whether there exists a discontinuity between two consecutive scanning points. However, the threshold for the breakpoint determination should be adaptive with respect to the scanning distance, which is presented in the following subsection.
Figure 1. Flow chart of feature extraction.
2.2. A Novel Method for Finding Clusters
Consider that, for each scan, there exist n beams. The radius of each beam is denoted as r_i, where i = 1, ..., n. To enhance the clustering performance, a simple and straightforward adaptive radius clustering (ARC) algorithm is developed as follows:

S = R × θ = R × (α × 2π/360),  R = min(r_i, r_{i−1})    (1)

λ = N × S    (2)

where S denotes an adaptive arc length between points, α is the angular resolution provided by the LiDAR specification, N is a design scaling factor relative to the LiDAR noise level, and λ is the threshold for clustering. Therefore, if the distance between two scanning points in the same scan is larger than λ, those points are divided into two different clusters; that is,

‖p_i − p_{i−1}‖ > λ    (3)
Based on the ARC, the scan shown in Figure 2b can be clustered into two groups. Otherwise, it may be divided into several segments due to the radius variations, as illustrated in Figure 2a. Therefore, the ARC algorithm is able to separate clusters according to the LiDAR's measurement characteristics.
After utilizing the ARC algorithm, clusters with fewer scanning points are treated as outliers. Clusters with low point density cannot provide reliable information and are discarded before the feature extraction step.
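As a rough illustration, the ARC rule of Equations (1)–(3) can be sketched in Python as follows. This is a minimal sketch, not the authors' implementation: the function name, the default N = 3.0, and the array layout are assumptions for illustration.

```python
import numpy as np

def arc_clusters(radii, angles, points, N=3.0):
    """Adaptive radius clustering (ARC), a sketch of Equations (1)-(3).

    radii  : (n,) range r_i of each beam
    angles : (n,) beam angles in radians; consecutive beams differ by the
             LiDAR angular resolution alpha
    points : (n, 2) Cartesian scan points p_i
    N      : scaling factor relative to the LiDAR noise level (assumed)
    Returns a list of clusters, each a list of point indices.
    """
    clusters, current = [], [0]
    for i in range(1, len(points)):
        theta = abs(angles[i] - angles[i - 1])               # alpha (rad)
        R = min(radii[i], radii[i - 1])
        S = R * theta                                        # arc length, Eq. (1)
        lam = N * S                                          # threshold, Eq. (2)
        if np.linalg.norm(points[i] - points[i - 1]) > lam:  # split rule, Eq. (3)
            clusters.append(current)
            current = []
        current.append(i)
    clusters.append(current)
    return clusters
```

Because the threshold λ grows with the measured range, nearby points on a distant wall stay in one cluster, while a genuine range discontinuity still triggers a split.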
Figure 2. Illustration of the adaptive radius clustering (ARC) algorithm. (a) Without adaptive radius; (b) with adaptive radius.
2.3. Split and Merge for Corner and Line Extractions
Comparing several feature extraction methods in different aspects, including algorithm speed and correctness, the split and merge method [15,28,29] is adopted for feature extraction in this research. The algorithm is capable of extracting corner feature points in the environment, which can then be taken as stable and reliable feature points.
The splitting procedure also combines the idea of Iterative End Point Fitting (IEPF) [30,32], which is a recursive algorithm for extracting features. For a cluster C = {(x_i, y_i) | i = 1, ..., k}, the split and merge algorithm connects the first point and the last point to form a straight line. The algorithm then calculates the deviations of all the points from this line. If the maximum distance exceeds a predefined deviation threshold d_c, the corresponding point is labeled as a feature point, and it further splits the cluster into two subsets. The feature point P_c(x_i, y_i) can be determined by the following rule:

P_c(x_i, y_i) := argmax_{(x_i, y_i) ∈ C} |a x_i + b y_i + 1| / sqrt(a² + b²) > d_c    (4)
Sensors 2018,18, 1294 5 of 19
where a, b are the coefficients of the line equation L: ax + by + 1 = 0, which passes through P_1 and P_k. Figure 3 demonstrates the main process of the split and merge algorithm. The algorithm recursively splits the set of points into two subsets until the condition in Equation (4) is no longer satisfied.
Figure 3. Illustration of the split and merge scheme.
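The recursive splitting rule of Equation (4) can be sketched as follows. This is a hypothetical illustration: for numerical robustness the line through P_1 and P_k is written in the general form ax + by + c = 0 rather than the normalized form ax + by + 1 = 0 used in the text; the point-to-line distances are identical.

```python
import numpy as np

def split_cluster(points, d_c):
    """Recursive splitting step (IEPF), a sketch of the rule in Equation (4).

    points : (k, 2) ordered cluster points
    d_c    : deviation threshold
    Returns the indices of detected corner (feature) points within `points`.
    """
    if len(points) < 3:
        return []
    p1, pk = points[0], points[-1]
    # Line through p1 and pk in the form a*x + b*y + c = 0
    a = pk[1] - p1[1]
    b = p1[0] - pk[0]
    c = pk[0] * p1[1] - p1[0] * pk[1]
    dists = np.abs(a * points[:, 0] + b * points[:, 1] + c) / np.hypot(a, b)
    i_max = int(np.argmax(dists))
    if dists[i_max] <= d_c:
        return []                      # the cluster is a single line segment
    # Split at the corner point and recurse on both subsets
    left = split_cluster(points[: i_max + 1], d_c)
    right = split_cluster(points[i_max:], d_c)
    return left + [i_max] + [i_max + j for j in right]
```

For an L-shaped cluster, the point of maximum deviation is the corner of the L, and both resulting subsets are nearly collinear, so the recursion terminates after one split.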
3. The Proposed Method: WP-ICP
3.1. Pose Estimation Algorithm
SLAM is considered a chicken-and-egg problem, since precise localization needs a reference map, and a good mapping result comes from a correct estimation of the robot pose [33]. To achieve SLAM, these two issues must be solved simultaneously [34]. However, in this work, only localization is considered. Therefore, by assuming that a complete layout of the environment is given in advance, a novel scan matching algorithm is presented and will be introduced in the next subsection.
Supposing that the correspondences are already known, the pose estimation can be considered as the estimation of a rigid body transform, which can be solved efficiently via the singular value decomposition (SVD) technique [35].
Let P = {p_1, p_2, ..., p_N} be the data set from the current scan and Q = {q_1, q_2, ..., q_N} be the model set received from a given layout. The goal is to find a rigid body transformation pair (R, t) such that the best alignment is achieved in the least-error sense. It can be stated as

(R, t) = argmin Σ_{i=1}^{N} w_i ‖R p_i + t − q_i‖²    (5)

where w_i are the weights for each point pair.
The optimal translation vector can be calculated by

t = q̄ − R p̄    (6)

where

p̄ = (Σ_{i=1}^{N} w_i p_i) / (Σ_{i=1}^{N} w_i),  q̄ = (Σ_{i=1}^{N} w_i q_i) / (Σ_{i=1}^{N} w_i)    (7)

can be taken as the weighted centroids of the data set and the model set, respectively.
Let x_i = p_i − p̄ and y_i = q_i − q̄. Consider also the matrices X, Y, and W, defined by X = [x_1 ... x_N], Y = [y_1 ... y_N], and W = diag(w_1, ..., w_N), respectively.
Defining S = X W Yᵀ and then applying the singular value decomposition (SVD) on S yields

S = U Σ Vᵀ    (8)

where U and V are unitary matrices, and Σ is a diagonal matrix. It has been proved that the optimal rotation matrix is given by

R = V Uᵀ    (9)
Based on Equation (9), the translational vector given in Equation (6) can also be solved. According to the ARC, the split–merge algorithm, and the ICP, the procedure of the corner-feature-based ICP pose estimation is summarized in Figure 4. To reject outliers during the feature point registration, the weightings w_i can be designed according to the Euclidean distance, and the values for unreasonable feature pairs can be set to zero.
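A minimal sketch of the weighted SVD alignment of Equations (5)–(9) is given below. The determinant correction guarding against reflections is a common refinement of the SVD solution that the text does not state explicitly; function and variable names are illustrative.

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """Weighted rigid alignment, a sketch of Equations (5)-(9).

    P : (N, 2) data set points p_i (current scan, already in correspondence)
    Q : (N, 2) model set points q_i (layout)
    w : (N,)  weights w_i; zero entries reject outlier pairs
    Returns (R, t) minimizing sum_i w_i * ||R p_i + t - q_i||^2.
    """
    w = np.asarray(w, dtype=float)
    p_bar = (w[:, None] * P).sum(axis=0) / w.sum()     # Eq. (7)
    q_bar = (w[:, None] * Q).sum(axis=0) / w.sum()
    X = (P - p_bar).T                                  # centered data set
    Y = (Q - q_bar).T                                  # centered model set
    S = X @ np.diag(w) @ Y.T                           # weighted cross-covariance
    U, _, Vt = np.linalg.svd(S)                        # Eq. (8)
    # Reflection guard: the sign correction keeps R a proper rotation
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                 # Eq. (9)
    t = q_bar - R @ p_bar                              # Eq. (6)
    return R, t
```

Given exact correspondences, a single call recovers the ground-truth rotation and translation up to numerical precision; inside ICP it is called once per iteration on the current correspondence set.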
Figure 4. Corner-feature-based pose estimation.
3.2. A Weighted Parallel ICP Pose Estimation Algorithm
An environment that closely matches the given layout generally results in good estimates. However, in practice, this is not always the case. To verify the feasibility of the proposed method, the proposed algorithm needs to be robust even in the presence of environment uncertainties such as moving people.
Based on the results presented in Sections 2 and 3, the WP-ICP is proposed. The WP-ICP considers two features for point clouds that are pre-processed: one is the corner feature and the other is the line feature. Corners are important feature points in the environment as they are distinct, whereas walls are stable feature points and good candidates for feature extraction in structured environments [15,31,36]. Taking advantage of LiDAR for detecting surroundings, walls can be represented as line segments, which are composed of two corners. Furthermore, we also considered the center point of a line segment as another matching reference point.
The motivation for such features is that indoor environments, e.g., offices and buildings, are generally well structured. Therefore, feature-based localization is suitable for such environments. Examining the traditional full-points ICP algorithm, point cloud registration is achieved by means of the nearest neighbor search (NNS) concept. There is no further information attached to those points. Therefore, it is easy to obtain incorrect correspondences, as shown in Figure 5. Under this circumstance, several iterations are usually needed for the point registration to converge. In addition, the ICP incurs an obvious time cost for registration, especially when the size of the point cloud is large. To solve the incorrect correspondence and iteration time cost issues, a feature-based point cloud reduction method is developed.
For the proposed WP-ICP, the point cloud is first characterized by fewer corners and lines. This has two main advantages: (a) First, the size of the point cloud is reduced significantly, which enhances the ICP speed. (b) Second, the polished points are labeled as corner or line features. Only those points labeled as corners can be matched to the corner candidates; similarly, only those points labeled as lines can be matched to the line candidates. The result is shown in Figure 6. Figure 6a illustrates the matching condition for corners while Figure 6b represents the matching condition for lines. The parallel ICP results in correct point registration and thereby reduces the number of iterations.
Figure 5. Incorrect point registration caused by the nearest neighbor search (NNS) Iterative Closest Point (ICP) algorithm.
Figure 6. Illustration of feature-based point cloud registration. (a) Corner features are matched to corresponding corner features; (b) line features are matched to corresponding line features.
The main advantage of the WP-ICP is that the number of data points in a data set as well as a
model set can be significantly reduced. On the contrary, the full-points ICP algorithm includes all data
points for correspondence searching and thus leads to low computation efficiency.
Moreover, in the proposed WP-ICP, the scan points are clustered into the corner or the line groups,
respectively. Based on the parallel mechanism, the points from a corner set can never be matched
with the points from a line set. It can thus avoid mismatching during the point registration. However,
for full point ICP, many mismatches could happen once the distances between those two set points are
close enough.
Since the WP-ICP has two ICP processes at the same time, it generates two pairs of robot pose,
namely
(RC,tC)
and
(RL,tL)
. Therefore, it is desired to fuse these two poses to come out with a more
confident pose estimate.
Sensors 2018,18, 1294 8 of 19
The criterion for the confidence evaluation is designed as follows:

Γ_C = (# of matched corner feature points) / (# of total corner feature points),
Γ_L = (# of matched line feature points) / (# of total line feature points)   (10)

where Γ_C and Γ_L represent the confidences and can be treated as fusion weights for the corner feature ICP and the line feature ICP, respectively.
The final step is to calculate a fused pose estimate (R_fused, t_fused) as follows:

R_fused = α R_C + (1 − α) R_L
t_fused = α t_C + (1 − α) t_L,   α = Γ_C / (Γ_C + Γ_L)   (11)

where the heading rotation angle is used to obtain R_fused.
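The fusion of Equations (10) and (11) can be sketched as follows. This is an illustrative implementation only: the function and argument names are ours, and the 2D rotation matrix reconstructed from the blended heading angle is our reading of "the heading rotation angle is used to obtain R_fused".

```python
import numpy as np

def fuse_poses(theta_c, t_c, n_matched_c, n_total_c,
               theta_l, t_l, n_matched_l, n_total_l):
    """Blend the corner-ICP and line-ICP pose estimates by their
    match-ratio confidences, per Eqs. (10)-(11)."""
    gamma_c = n_matched_c / n_total_c        # corner confidence, Eq. (10)
    gamma_l = n_matched_l / n_total_l        # line confidence, Eq. (10)
    alpha = gamma_c / (gamma_c + gamma_l)    # fusion weight, Eq. (11)
    # Fuse the heading angles, then rebuild the 2D rotation matrix.
    theta_f = alpha * theta_c + (1 - alpha) * theta_l
    t_f = alpha * np.asarray(t_c) + (1 - alpha) * np.asarray(t_l)
    R_f = np.array([[np.cos(theta_f), -np.sin(theta_f)],
                    [np.sin(theta_f),  np.cos(theta_f)]])
    return R_f, t_f
```

Blending the heading angle rather than averaging the rotation matrices entrywise keeps R_fused a valid rotation, which a direct matrix average would not guarantee.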
Since the WP-ICP provides two sources of feature points, the real-time LiDAR scanning points are separated into two groups, corners and lines. Each group is then matched with its corresponding features. Based on this parallel matching mechanism, serious mismatching can be avoided, resulting in improved localization stability and precision. The flow chart of the WP-ICP is illustrated in Figure 7.
Figure 7. The weighted parallel iterative closed point (WP-ICP) pose estimation algorithm.
The practical benefits of the proposed WP-ICP include the following: (a) computation effort is reduced when KD-trees are built and nearest point registration is applied; (b) correct point cloud registration significantly reduces ICP iterations, enabling a fast robot pose estimate; (c) the robot pose is determined by feature-based ICP fusion of two features (corner and line), which makes the estimate less sensitive to uncertain environments.
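Benefit (a) can be made concrete with a brief sketch: the KD-tree is built over the small feature subset rather than the full scan, so both construction and each query are cheaper. The sizes below are illustrative only, and SciPy's cKDTree is our choice of tool, not necessarily the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
full_scan = rng.uniform(0.0, 10.0, size=(1081, 2))  # one full LiDAR scan
features = full_scan[::20]                          # ~5% retained as corner/line features

# The tree indexes only the reduced feature set, not all 1081 returns,
# so nearest-neighbor registration touches far fewer points per iteration.
tree = cKDTree(features)
dist, idx = tree.query([5.0, 5.0])                  # nearest feature to one query point
```

With the WP-ICP, one such small tree is built per feature class (corners, lines), preserving the parallel matching separation.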
4. Experiments and Discussions
For the experiments, a Hokuyo UST-20LX scanning laser rangefinder is used with a 20 Hz scan rate and a 15 m scan distance. The scanning angle resolution is 0.25°. Based on these settings, the maximum number of scanning points is 1081 per scan; that is, 20 × 1081 = 21,620 points per second. The experiment system is shown in Figure 8, which includes (1) a portable LiDAR module (the upper half) and (2) a mecanum-wheeled robot (the lower half). In this work, since the LiDAR is considered the single sensor, the robot vehicle is taken only as a moving platform; there is no communication between the LiDAR module and the vehicle. The maximum moving speed of the vehicle in the following experiments is restricted to 50 cm/s.
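The point counts quoted above follow directly from the scan geometry. The quick check below assumes the UST-20LX's 270° field of view (a sensor specification not stated in the text):

```python
# Sanity check of the quoted scan geometry, assuming a 270-degree field of view.
fov_deg = 270.0
resolution_deg = 0.25
scan_rate_hz = 20

# Fence-post count: 270/0.25 = 1080 angular intervals -> 1081 sample points.
points_per_scan = int(fov_deg / resolution_deg) + 1
points_per_second = scan_rate_hz * points_per_scan

assert points_per_scan == 1081
assert points_per_second == 21620
```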
Figure 8. A LiDAR-based portable module and a moving platform.
Based on the resolution and testing results, N in Equation (2) is set to 15. With regard to the ARC, clusters containing fewer than 5 points are considered outliers and are removed before the WP-ICP is applied. The distance d_c = 10 cm is used for the split-and-merge process. For each iteration, the ICP weights are set to zero if the distances between point correspondences are greater than 50 cm; this threshold is determined in accordance with the maximum moving speed of the robot.
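The 50 cm correspondence gate amounts to a simple weighting rule. The helper below is a minimal, hypothetical illustration of the zero-weight rule just described (the name and vectorized form are ours):

```python
import numpy as np

def gate_weights(distances, gate=0.5):
    """Zero-weight any correspondence farther apart than `gate` (meters).
    Gated pairs then contribute nothing to the ICP error, so spurious
    matches (e.g. against unknown obstacles) cannot drag the pose estimate."""
    d = np.asarray(distances, dtype=float)
    return np.where(d > gate, 0.0, 1.0)
```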
To verify that the WP-ICP algorithm is feasible, an experiment was first carried out in a clear environment with no obstacles, shown in Figure 9. The area of the testing environment was about 5 m × 6 m. Two experiments were carried out in this environment: one guides the vehicle along a rectangular path, and the other moves the vehicle randomly.
Figure 9. Scene-1: a robot moves in a clear environment with no obstacles. (a) Experimental environment.
(b) Robot is moved in a guided rectangular path. (c) Robot is moved randomly.
Taking the full-points ICP result (black line) as the ground truth, using the corner feature only (red line) yields an accurate pose estimate in a clear environment, as illustrated in Figure 10. Figure 11 shows the deviation comparisons of the pose estimation; the average deviations are 3.06786 and 4.29678 cm, respectively.
Figure 10. Robot pose comparison of the first/second experiments (full scanning points vs. corner feature only). (a) Exp-1; (b) Exp-2.
Figure 11. Pose estimation deviation comparison of the first/second experiments. (a) Exp-1; (b) Exp-2.
Moreover, to compare the real-time estimation capabilities of the full-points ICP and the corner-feature-based ICP, the total number of ICP iterations at each LiDAR scan loop is addressed; fewer ICP iterations are helpful for real-time realization. Figure 12 shows the number of ICP iterations of the different algorithms for the two experiments. It should be noted that, when using corners as the only feature, the maximum number of ICP iterations was no more than 4 (two iterations on average), whereas the full-points ICP required more than 25 iterations on average. The WP-ICP was also applied in this environment; its localization performance is very close to that of the corner-based ICP. Therefore, the main advantage of the WP-ICP is not obvious in this clear environment.
To illustrate the superior pose estimation robustness compared with the corner-based ICP, another experiment was carried out on the 3rd floor of the Department of Aeronautics and Astronautics (DAA), NCKU, shown in Figure 13. The area is about 15 × 40 m² and contains objects of different sizes, such as flowerpots and a water cooler, as shown in Figure 14, which can be taken as unknown disturbances for pose estimation. These objects were added to the environment later, and hence can be used to test the feasibility and robustness of the proposed WP-ICP algorithm in dynamic environments.
Figure 12. ICP iteration performance of the first/second experiments (full scanning points vs. corner feature only). (a) Exp-1; (b) Exp-2.
Figure 13. Scene-2: the 3rd floor, Department of Aeronautics and Astronautics, NCKU.
Figure 14. DAA-3F local snapshots with new objects to test the robustness of the algorithm. (a) New objects such as a water cooler added to the environment; (b) new objects such as flowerpots added to the environment.
In the experiment, the vehicle travels counterclockwise around the 3rd floor. The transient localization behavior is shown in Figure 15, where the upper-left and lower-left subplots show that all points in the data set (from a LiDAR scan) and the model set (from the layout) have already been labeled as corner features and line features, respectively. These two feature groups are fed into the parallel ICP process and finally fused into a single robot pose. When facing areas that differ from the layout and when passing through a straight corridor, the corner-feature-based ICP algorithm produces apparent localization deviations. Figure 16 shows long-range localization results at DAA-3F under different algorithms.
Figure 15. Localization by the WP-ICP with 10 cm interpolation. Upper left corner subplot: corner
features. Lower left corner subplot: line features. Right subplot: layout and estimated robot trajectory.
Figure 16. Robot trajectory comparison under different algorithms in the DAA-3F (full-points ICP, corner feature only, WP-ICP without interpolation, and WP-ICP with interpolation). (a) Complete view; (b) a corridor area; (c) area with unknown objects.
Examining Figure 16 again, since the WP-ICP provides both corner and line features as scan matching correspondences, its results are theoretically expected to be better than those of the corner-based ICP. However, Figure 16b indicates that estimation errors remain when passing through the corridor, because there are few line feature points in that area. To further improve the estimation robustness, interpolation of the line features is integrated into the WP-ICP. Note that the interpolation increases the number of line feature points by inserting a point every 10 cm. The result (the purple line) shows a good estimate when the WP-ICP algorithm with interpolation is utilized. Finally, the improvement can also be found at locations subject to static unknown objects, as shown in Figure 16c, where the corresponding snapshots are shown in Figure 14a,b, respectively.
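The 10 cm line-feature interpolation can be sketched as an even "divider" applied per segment. The helper name below is ours, and this is a minimal NumPy illustration rather than the paper's exact implementation:

```python
import numpy as np

def interpolate_segment(p0, p1, step=0.1):
    """Insert evenly spaced points along a line segment so that consecutive
    points are at most `step` (here 10 cm) apart."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    length = np.linalg.norm(p1 - p0)
    n = max(int(np.ceil(length / step)), 1)   # number of sub-intervals
    t = np.linspace(0.0, 1.0, n + 1)[:, None] # parameter values incl. endpoints
    return (1 - t) * p0 + t * p1              # (n+1, 2) array of points
```

Applying this to every line segment densifies the line feature set, which is what restores matching quality in feature-sparse corridors.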
The results of the WP-ICP with interpolation demonstrate robustness against external environment uncertainty. The computation efficiency of the different algorithms is detailed in Figure 17. Compared with the full-points ICP, using corner features only improves speed by about 50 times on average, while the WP-ICP (with interpolation) improves speed by about 5 times on average. However, applying corner features only can sometimes lead to unstable estimates. Therefore, the WP-ICP is a compromise between computation speed and localization accuracy.
Figure 17. ICP performance comparison of the third experiment.
Finally, it is worth discussing the localization performance under different interpolation sizes when applying the WP-ICP. In this study, the layouts of the localization environment are given as a few discrete data points. Therefore, the resolution of the layouts can be further enhanced by interpolation. For each LiDAR scan, the interpolation can be achieved by manipulating the raw data directly; the simplest way is to apply a divider on each segment.
Based on the previous experiments, it is obvious that the WP-ICP with interpolation has the minimum error compared with the corner-feature-based ICP and the pure WP-ICP. In the following, 10 cm and 50 cm interpolation resolutions are further considered for the same DAA-3F experiment. Figure 18 verifies that increasing the interpolation resolution size does increase the localization error. However, the number of ICP iterations can be significantly reduced: the average iterations of the WP-ICP with 50 cm and 10 cm interpolations are 7.063 and 11.7577, respectively. As a result, the interpolation resolution can be taken as a trade-off design factor between computation efficiency and localization precision.
A comparison study from the viewpoint of ICP iterations is summarized in Table 1. It is clear that the iteration performance obtained by corner-based ICP and WP-ICP is noticeably better than that of full-points ICP. To overcome the challenge of unknown objects during point cloud matching, WP-ICP with interpolation was further introduced, and it was shown that the localization robustness can be significantly enhanced. Although the number of ICP iterations increases, the increment is still acceptable for real-time operation.
Figure 18. Pose estimation deviation of different interpolations.
Table 1. ICP iteration performance.

  Experimental Environment   Pose Estimation Algorithm          Average ICP Iteration
  Scene-1 Exp.1              Full-Points ICP                    25.36
                             Corner-Based ICP                   2.06
                             WP-ICP                             2.06/2.04 (corner/line)
  Scene-1 Exp.2              Full-Points ICP                    30.10
                             Corner-Based ICP                   2.04
                             WP-ICP                             2.05/2.04 (corner/line)
  Scene-2                    Full-Points ICP                    49.92
                             Corner-Based ICP                   2.49
                             WP-ICP                             2.61/2.85 (corner/line)
                             WP-ICP with 10 cm Interpolation    2.87/9.25 (corner/line)
Finally, to further demonstrate the robustness of the proposed WP-ICP, we considered another experiment in the same environment (as illustrated in Figure 19) but with five people walking around the vehicle. The practical scenes are shown in Figure 20. First, to generate a ground truth for comparison, full-points ICP is used, which leads to good localization results, as illustrated in Figure 21. Because many unknown moving objects do not exist in the given layout, these obstacles produce many outliers. Under this condition, using corner features only would cause divergent localization, as demonstrated in Figure 22. On the contrary, as shown in Figure 23, applying the WP-ICP together with interpolation produces satisfactory localization results without inducing divergent behavior, even in the presence of unknown moving objects. For the WP-ICP under different interpolation resolutions, the results are depicted in Figure 24a,b, respectively. The performance details are given in Table 2. The experiments verify that WP-ICP can withstand dynamic uncertainties and produce satisfactory localization results with fewer iterations.
Figure 19. ICP performance of different interpolations.
Figure 20. An environment with unknown moving objects. (a) Random snapshot 1. (b) Random
snapshot 2.
[Figure 21 plot data: (a) corner-feature point registration at LiDAR scan loop 2477 (time = 61.90 s, ICP iteration = 13), showing the model set, rotated data set, and data set; (b) robot trajectory on the global map (mm). Frame rate 20.00 Hz; angular resolution 0.250 deg.]
Figure 21. Localization result by using full-points ICP. (a) Point registration. (b) Robot trajectories.
Figure 22. Localization result by using corner features only (where the line information was not fused
for the localization).
Figure 23. Localization result using the WP-ICP with 20 cm interpolation.
[Figure 24 plot data: (a) localization results (X/Y in mm) and (b) ICP iterations versus LiDAR scan round, comparing Full Point ICP, WP-ICP (10 cm Interp.), and WP-ICP (20 cm Interp.).]
Figure 24. Localization results and ICP iterations.
Table 2. Localization results for the environment subject to unknown moving objects.

  Localization Algorithm                Model Set Size   Average ICP   Total ICP    Local. Error (mm)
                                        (Points)         Iterations    Iterations   Avg/Max
  Full-Points ICP (Point-to-Point ICP)  1991             35            86687        taken as ground truth
  WP-ICP with 10 cm Interpolation       213              13            30838        53.7/261.3
  WP-ICP with 20 cm Interpolation       114              9             22981        60.5/264.6
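The Avg/Max localization errors reported above can be computed as per-scan Euclidean deviations between an estimated trajectory and the ground-truth (full-points ICP) trajectory. A minimal sketch, using hypothetical trajectory arrays rather than the paper's data:

```python
import numpy as np

def localization_error(estimate, ground_truth):
    """Average and maximum Euclidean deviation (mm) between two synchronized
    N x 2 trajectories of x/y positions."""
    d = np.linalg.norm(np.asarray(estimate, float) - np.asarray(ground_truth, float), axis=1)
    return d.mean(), d.max()

# Hypothetical example: an estimate offset from ground truth by a constant 50 mm in x.
gt = np.stack([np.linspace(0, 5000, 100), np.linspace(0, 7000, 100)], axis=1)
est = gt + np.array([50.0, 0.0])
avg, mx = localization_error(est, gt)  # both equal 50.0 for this constant offset
```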
5. Conclusions
In this work, to solve the mismatched point cloud registration problem and to enhance ICP efficiency, a parallel feature-based indoor localization algorithm was proposed. In the traditional ICP algorithm, point cloud registration is achieved by means of NNS, with no other information attached to the points; it is therefore prone to incorrect correspondences. In contrast, the proposed WP-ICP algorithm attaches feature information to the polished point cloud. WP-ICP consists of two ICP sources: one is corner features and the other is line features. Owing to this parallel mechanism, it attenuates the probability of mismatches from corner to line matching or from line to corner matching. As a result, the proposed algorithm converges faster for pose estimation. Moreover, since the full scan is reduced to fewer feature points, the algorithm also enhances ICP computation efficiency and is therefore suitable for low-cost CPUs. For environments that possess fewer feature points, WP-ICP together with line interpolation was further verified. Environments subject to static and dynamic unknown moving objects were also considered to verify the feasibility and robustness of the proposed method.
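The parallel label-restricted matching described above can be sketched as follows (an illustrative simplification, not the paper's implementation): corner-labeled scan points search for nearest neighbours only among corner-labeled map points, and line points only among line points, so a corner can never be paired with a line.

```python
import numpy as np

def nearest_neighbors(src, dst):
    """For each point in src (N x 2), return the index of its nearest point in dst (M x 2)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def parallel_correspondences(scan_pts, scan_labels, map_pts, map_labels):
    """Match corner->corner and line->line only (the WP-ICP mismatch-attenuation idea).
    Returns (scan_index, map_index) correspondence pairs."""
    pairs = []
    for label in ("corner", "line"):
        s_idx = np.flatnonzero(scan_labels == label)
        m_idx = np.flatnonzero(map_labels == label)
        if s_idx.size and m_idx.size:
            nn = nearest_neighbors(scan_pts[s_idx], map_pts[m_idx])
            pairs.extend(zip(s_idx, m_idx[nn]))
    return pairs

# Toy example: the corner scan point pairs with the corner map point even though
# the line map point is geometrically closer to it.
scan_pts = np.array([[0.0, 0.0], [10.0, 0.0]])
scan_labels = np.array(["corner", "line"])
map_pts = np.array([[9.0, 0.0], [1.0, 0.0]])   # map point 0 is a corner, 1 is a line
map_labels = np.array(["corner", "line"])
pairs = parallel_correspondences(scan_pts, scan_labels, map_pts, map_labels)
```

Each label group would then contribute its weighted residual to a single pose update, which is where the confidence fusion of the two ICP sources enters.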
Acknowledgments: This research is supported by the Ministry of Science and Technology under grant numbers MOST 106-2221-E-006-138 and MOST 107-2218-E-006-004.
Author Contributions: Y.T. Wang implemented the ideas and performed the experiments; C.C. Peng conceived and designed the algorithm; Y.T. Wang and C.C. Peng wrote the paper. A.A. Ravankar and A. Ravankar polished the paper and provided valuable concepts and comments.
Conflicts of Interest: The authors declare no conflict of interest.
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
... The literature thus discovered is presented in Table 8. Tracking Literature Wi-Fi [66], [67], [68], [69], [70], [71], [72], [73], [74], [75], [58], [76] UWB [77], [78], [79], [80], [81], [82], [83], [84], [85], [86], [87] RFID [88], [89], [90], [91], [92], [93], [94], [95], VLC [96], [97], [40], [43] IMU [98], [99], [100], [101], [61] Computer Vision [102], [103], [104], [105], [106], [107], [108], [109], [110], [111], [112], [113], [114] LiDAR [115], [116], [117], [118], [119], [120], [121] Seven different types of IPS-related technologies were included in the review. These technologies were classified into two types based on the measurement medium: wireless signals, such as Wi-Fi, UWB, and RFID, and other physical signals, like Computer Vision, LiDAR, IMU, and VLC. ...
... These applications highlight LiDAR's unparalleled ability to address the challenges of high-density, dynamic warehouse environments. [117] presented an efficient algorithm that employs LiDAR as the sole environmental detection sensor in IPS research, thus reducing computational effort while preserving localization robustness [118]. Recent advancements in LiDAR-based IPS systems have focused on improving point cloud data processing and reducing computational overhead. ...
Article
Full-text available
Advanced technologies and automation, driven by Indoor Positioning Systems (IPS), are transforming businesses by enhancing efficiency, intelligence, and digitalization. Despite the critical role of IPS, there remains a lack of comprehensive reviews focusing specifically on their applications in warehouse inventory management. To bridge this gap and provide actionable insights for both research and practical implementation, this study conducts a systematic literature review following the PRISMA checklist. Centered around three key research questions, this review explores the scope of IPS applications in warehouse environments, the specific technologies employed, and the methods to evaluate IPS performance. This paper analyzes the fundamental principles and recent applications of widely adopted indoor positioning technologies, including Wi-Fi, UWB, RFID, VLC, IMU, Computer Vision, and LiDAR. Furthermore, this paper evaluates IPS technologies through five key evaluation criteria, highlighting their advantages, limitations, and challenges. This study provides a comprehensive understanding of IPS technologies in warehouse inventory management, offering actionable methods to evaluate their performance. The insights presented aim to deliver strong decision support for researchers and practitioners seeking to optimize inventory operations in warehouse environments.
... Then, SLAM restores the real-time position and attitude of the AMR through the spatial-temporal correlation of the observations to realize its localization and mapping. There are two main categories according to the sensor types in the SLAM methods: visual SLAM (VSLAM) [2,3] and LiDAR SLAM [4][5][6][7]. ...
Article
Full-text available
This study aimed to develop a real-time localization system for an AMR (autonomous mobile robot), which utilizes the Robot Operating System (ROS) Noetic version in the Ubuntu 20.04 operating system. RTAB-MAP (Real-Time Appearance-Based Mapping) is employed for localization, integrating with an RGB-D camera and a 2D LiDAR for real-time localization and mapping. The navigation was performed using the A* algorithm for global path planning, combined with the Dynamic Window Approach (DWA) for local path planning. It enables the AMR to receive velocity control commands and complete the navigation task. RTAB-MAP is a graph-based visual SLAM method that combines closed-loop detection and the graph optimization algorithm. The maps built using these three methods were evaluated with RTAB-MAP localization and AMCL (Adaptive Monte Carlo Localization) in a high-similarity long corridor environment. For RTAB-MAP and AMCL methods, three map optimization methods, i.e., TORO (Tree-based Network Optimizer), g2o (General Graph Optimization), and GTSAM (Georgia Tech Smoothing and Mapping), were used for the graph optimization of the RTAB-MAP and AMCL methods. Finally, the TORO, g2o, and GTSAM methods were compared to test the accuracy of localization for a long corridor according to the RGB-D camera and the 2D LiDAR.
... For the past decades, light detection and ranging (LiDAR) sensors have been frequently exploited in various applications such as advanced driver assistance systems, robots, remote sensing systems, drones, mobile phones, and elder-care systems for senile dementia patients [1][2][3]. Figure 1a shows the block diagram of a typical pulsed time of flight (ToF) LiDAR system that is well known for providing a number of advantages including low implementation cost and design simplicity when compared to other schemes such as amplitude/frequency-modulated-continuous-wave (AMCW or FMCW) methods. Although these other schemes estimate the target distances by the phase or frequency variations between the transmitted (i.e., START) pulses and the reflected (i.e., STOP) pulses, the ToF LiDAR sensors emit light signals from a laser-diode driver directly to a target (e.g., the senile dementia patient shown in Figure 1) and the reflected light pulses are received by the analog front-end (AFE) preamplifier in the receiver. ...
Article
Full-text available
In this paper, a CMOS optoelectronic analog front-end (AFE) preamplifier with cross-coupled active loads for short range LiDAR applications is presented, which consists of a spatially modulated P+/N-well on-chip avalanche photodiode (APD), the differential input stage with cross-coupled active loads, and an output buffer. Particularly, another on-chip dummy APD is inserted at the differential input node to improve the common-mode noise rejection ratio significantly better than conventional single-ended TIAs. Moreover, the cross-coupled active loads are exploited at the output nodes of the preamplifier not only to help generate symmetric output waveforms, but also to enable the limiting operations even without the following post-amplifiers. In addition, the inductive behavior of the cross-coupled active loads extends the bandwidth further. The proposed AFE preamplifier implemented in a 180-nm CMOS process demonstrate the measured results of 63.5 dB dynamic range (i.e., 1 µApp~1.5 mApp input current recovery), 67.8 dBΩ transimpedance gain, 1.6 GHz bandwidth for the APD capacitance of 490 fF, 6.83 pA⁄√Hz noise current spectral density, 85 dB power supply rejection ratio, and 32.4 mW power dissipation from a single 1.8 V supply. The chip core occupies the area of 206 × 150 µm2.
... The most common method, GPS, often fails to work reliably indoors, as the signals are easily blocked, distorted, and weakened by building materials and structures, resulting in poor satellite visibility and geometry for accurate positioning. Vision [3], lidar [4], and bluetooth [5] are some common indoor localization methods, but they have their own shortcomings, such as low privacy and high requirements for line of sight in the case of vision, high cost of lidar, and low accuracy of bluetooth. ...
Preprint
Wi-Fi localization and tracking has shown immense potential due to its privacy-friendliness, wide coverage, permeability, independence from lighting conditions, and low cost. Current methods can be broadly categorized as model-based and data-driven approaches, where data-driven methods show better performance and have less requirement for specialized devices, but struggle with limited datasets for training. Due to limitations in current data collection methods, most datasets only provide coarse-grained ground truth (GT) or limited amount of label points, which greatly hinders the development of data-driven methods. Even though lidar can provide accurate GT, their high cost makes them inaccessible to many users. To address these challenges, we propose LoFi, a vision-aided label generator for Wi-Fi localization and tracking, which can generate ground truth position coordinates solely based on 2D images. The easy and quick data collection method also helps data-driven based methods deploy in practice, since Wi-Fi is a low-generalization modality and when using relevant methods, it always requires fine-tuning the model using newly collected data. Based on our method, we also collect a Wi-Fi tracking and localization dataset using ESP32-S3 and a webcam. To facilitate future research, we will make our code and dataset publicly available upon publication.
... Light detection and ranging (LiDAR) technologies are widely utilized in various fields, including autonomous vehicles, robotics, indoor mapping, and home monitoring systems [1][2][3][4]. Especially in long-term care facilities for elders with senile dementia, it is crucial to have compact, low-power, and cost-effective sensors that can monitor elders in real time with no violation of their portrait rights and alert caregivers in urgent situations or emergencies, such as falls or abnormal movements. Owing to the global and rapid increase in the number of aging societies, the maintenance costs associated with protecting elderly individuals from injuries (i.e., falls) have been exponentially increasing [5]. ...
Article
Full-text available
This paper presents a novel optoelectronic transimpedance amplifier (OTA) for short-range LiDAR sensors used in 180 nm CMOS technology, which consists of a main transimpedance amplifier (m-TIA) with an on-chip P+/N-well/Deep N-well avalanche photodiode (P+/NW/DNW APD) and a replica TIA with another on-chip APD, not only to acquire circuit symmetry but to also obtain concurrent automatic gain control (AGC) function within a narrow single pulse-width duration. In particular, for concurrent AGC operations, 3-bit PMOS switches with series resistors are added in parallel with the passive feedback resistor in the m-TIA. Then, the PMOS switches can be turned on or off in accordance with the DC output voltage amplitudes of the replica TIA. The post-layout simulations reveal that the OTA extends the dynamic range up to 74.8 dB (i.e., 1 µApp~5.5 mApp) and achieves a 67 dBΩ transimpedance gain, an 830 MHz bandwidth, a 16 pA/√Hz noise current spectral density, a −31 dBm optical sensitivity for a 10−12 bit error rate, and a 6 mW power dissipation from a single 1.8 V supply. The chip occupies a core area of 200 × 120 µm2.
... Light detection and ranging (LiDAR) sensors have been exploited in various applications, such as unmanned autonomous vehicles, navigation systems for robots, indoor mapping on mobile devices, remote sensing, and home monitoring LiDAR sensors for senile dementia patients residing in long-term care facilities [1][2][3][4]. In particular, home monitoring sensors for senile dementia patients are mandated to be very small and provide in-depth information in an emergency, thus alerting the situations promptly to supervisors. ...
Article
Full-text available
This paper presents an 8 × 8 channel optoelectronic readout array (ORA) realized in the TSMC 180 nm 1P6M RF CMOS process for the applications of short-range light detection and ranging (LiDAR) sensors. We propose several circuit techniques in this work, including an amplitude-to-voltage (A2V) converter that reduces the notorious walk errors by intensity compensation and a time-to-voltage (T2V) converter that acquires the linear slope of the output signals by exploiting a charging circuit, thus extending the input dynamic range significantly from 5 μApp to 1.1 mApp, i.e., 46.8 dB. These results correspond to the maximum detection range of 8.2 m via the action of the A2V converter and the minimum detection range of 56 cm with the aid of the proposed T2V converter. Optical measurements utilizing an 850 nm laser diode confirm that the proposed 8 × 8 ORA with 64 on-chip avalanche photodiodes (APDs) can successfully recover the narrow 5 ns light pulses even at the shortest distance of 56 cm. Hence, this work provides a potential CMOS solution for low-cost, low-power, short-range LiDAR sensors.
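The dynamic range quoted in this abstract can be checked directly from the stated current limits. The sketch below applies the standard 20·log₁₀ definition for amplitude quantities; it is a sanity check only, not taken from the cited work.

```python
import math

# Dynamic range in dB for current amplitudes: 20 * log10(I_max / I_min).
# Limits are the values quoted in the abstract above.
i_min = 5e-6      # 5 uA peak-to-peak
i_max = 1.1e-3    # 1.1 mA peak-to-peak
dr_db = 20 * math.log10(i_max / i_min)
print(round(dr_db, 1))  # 46.8, matching the quoted 46.8 dB
```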
Article
This paper presents a real-time, cost-effective navigation and localization framework tailored for quadruped robot-based indoor inspections. A 4D Building Information Model is utilized to generate a navigation map, supporting robotic pose initialization and path planning. The framework integrates a cost-effective, multi-sensor SLAM system that combines inertial-corrected 2D laser scans with fused laser and visual-inertial SLAM. Additionally, a deep-learning-based object recognition model is trained for multi-dimensional reality capture, enhancing comprehensive indoor element inspection. Validated on a quadruped robot equipped with an RGB-D camera, IMU, and 2D LiDAR in an academic setting, the framework achieved collision-free navigation, reduced localization drift by 71.77% compared to traditional SLAM methods, and provided accurate large-scale point cloud reconstruction with 0.119 m precision. Furthermore, the object detection model attained mean average precision scores of 73.7% for 2D detection and 62.9% for 3D detection.
Article
We introduce HiSC4D, a novel Human-centered interaction and 4D Scene Capture method, aimed at accurately and efficiently creating a dynamic digital world containing large-scale indoor-outdoor scenes, diverse human motions, rich human-human interactions, and human-environment interactions. By utilizing body-mounted IMUs and a head-mounted LiDAR, HiSC4D can capture egocentric human motions in unconstrained space without the need for external devices and pre-built maps. This affords great flexibility and accessibility for human-centered interaction and 4D scene capture in various environments. Taking into account that IMUs can capture spatially unrestricted human poses but are prone to drift over long periods of use, while LiDAR is stable for global localization but coarse for local positions and orientations, HiSC4D employs a joint optimization method, harmonizing all sensors and utilizing environment cues, yielding promising results for long-term capture in large scenes. To promote research on egocentric human interaction in large scenes and facilitate downstream tasks, we also present a dataset containing 8 sequences in 4 large scenes (200 to 5,000 m²), providing 36k frames of accurate 4D human motions with SMPL annotations and dynamic scenes, 31k frames of cropped human point clouds, and scene meshes of the environments. A variety of scenarios, such as a basketball gym and a commercial street, alongside challenging human motions, such as daily greeting, one-on-one basketball playing, and tour guiding, demonstrate the effectiveness and generalization ability of HiSC4D. The dataset and code will be publicly available for research purposes.
Article
Full-text available
Traditional indoor laser scanning trolleys/backpacks with multiple laser scanners, panoramic cameras, and an inertial measurement unit (IMU) installed are a popular solution to the 3D indoor mapping problem. However, these mapping suites are quite expensive and can hardly be replicated with consumer electronic components. The consumer RGB-Depth (RGB-D) camera (e.g., Kinect V2) is a low-cost option for gathering 3D point clouds. However, because of its narrow field of view (FOV), its collection efficiency and data coverage are lower than those of laser scanners. Additionally, the limited FOV increases the scanning workload, the data processing burden, and the risk of visual odometry (VO)/simultaneous localization and mapping (SLAM) failure. To find an efficient and low-cost way to collect 3D point cloud data with auxiliary information (i.e., color) for indoor mapping, in this paper we present a prototype indoor mapping solution built upon the calibration of multiple RGB-D sensors to construct an array with a large FOV. Three time-of-flight (ToF)-based Kinect V2 RGB-D cameras are mounted on a rig with different view directions to form a large field of view. The three RGB-D data streams are synchronized and gathered by the OpenKinect driver. The intrinsic calibration, which involves the geometry and depth calibration of the individual RGB-D cameras, is solved by a homography-based method and by ray correction followed by range-bias correction based on pixel-wise spline functions, respectively. The extrinsic calibration is achieved through a coarse-to-fine scheme that solves the initial exterior orientation parameters (EoPs) from sparse control markers and further refines the initial values with an iterative closest point (ICP) variant minimizing the distance between the RGB-D point clouds and the reference laser point clouds.
The effectiveness and accuracy of the proposed prototype and calibration method are evaluated by comparing the point clouds derived from the prototype with ground truth data collected by a terrestrial laser scanner (TLS). The overall analysis of the results shows that the proposed method achieves the seamless integration of multiple point clouds from three Kinect V2 cameras collected at 30 frames per second, resulting in low-cost, efficient, and high-coverage 3D color point cloud collection for indoor mapping applications.
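The ICP refinement step mentioned in this abstract alternates nearest-neighbor matching with a closed-form rigid alignment. The following is a minimal 2D point-to-point sketch of that idea; it is didactic only, and does not reproduce the cited paper's variant, which starts from a marker-based coarse initialization and operates on 3D RGB-D and laser clouds.

```python
import numpy as np

def icp_2d(src, ref, iters=30):
    """Minimal point-to-point ICP aligning src (N,2) onto ref (M,2)."""
    cur = src.copy()
    R_tot, t_tot = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # 1) nearest-neighbor correspondences (brute force, fine for small N)
        d2 = ((cur[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
        match = ref[d2.argmin(axis=1)]
        # 2) best rigid transform for these matches (Kabsch, via SVD)
        mu_c, mu_m = cur.mean(0), match.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_c).T @ (match - mu_m))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_c
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# Usage: recover a known 10-degree rotation plus a small translation.
rng = np.random.default_rng(0)
ref = rng.uniform(-1.0, 1.0, (120, 2))
th = np.deg2rad(10.0)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
src = ref @ R_true.T + np.array([0.10, -0.05])
R_est, t_est = icp_2d(src, ref)
err = np.abs(src @ R_est.T + t_est - ref).max()
```

With exact correspondences available, the estimate converges to the inverse of the applied transform and `err` drops to near zero.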
Article
Full-text available
Assuming a reliable and responsive spatial contextualization service is a must-have in IEEE 802.11 and 802.15.4 wireless networks, a suitable approach is to implement localization capabilities as an additional application layer on top of the communication protocol stack. In applicative scenarios where satellite-based positioning is denied, such as indoor environments, and where packet arrival-time measurements are excluded for lack of time resolution, received signal strength indicator (RSSI) measurements, obtained through IEEE 802.11 and 802.15.4 data access technologies, are the only data source suitable for indoor geo-referencing with COTS devices. Many RSSI-based localization systems have been introduced and experimentally validated in the existing literature; nevertheless, they require periodic calibration and significant information fusion from different sensors, which dramatically decreases overall system reliability and effective availability. This motivates the work presented in this paper, which introduces an approach for RSSI-based, calibration-free, real-time indoor localization. While switched-beam-array-based hardware (compliant with IEEE 802.15.4 router functionality) has already been presented by the author, the focus of this paper is an algorithmic layer for the pre-existing hardware that enables full localization and data contextualization over a standard 802.15.4 wireless sensor network using only RSSI information, without a lengthy offline calibration phase. System validation reports localization results in a typical indoor site, where the system showed high accuracy, with a sub-metric overall mean error and almost 100% site coverage within a 1 m localization error.
Article
Full-text available
A method of location fingerprinting based on the Wi-Fi received signal strength (RSS) in an indoor environment is presented. The method aims to overcome the RSS instability caused by time-varying channel disturbances by introducing the concept of invariant RSS statistics. The invariant RSS statistics represent the RSS distributions collected at individual calibration locations under minimal random spatiotemporal disturbances, and they serve as the reference pattern classes for fingerprinting. Fingerprinting is carried out at an unknown location by identifying the reference pattern class that maximally supports the spontaneous RSS sensed from individual Wi-Fi sources. A design guideline is also presented as a rule of thumb for estimating the number of Wi-Fi signal sources required for any given number of calibration locations under a certain level of random spatiotemporal disturbance. Experimental results show that the proposed method not only provides a 17% higher success rate than conventional ones but also removes the need for recalibration. Furthermore, the resolution is 40% finer, and the execution time is more than an order of magnitude faster than that of the conventional methods. These results are also backed up by theoretical analysis.
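The "maximal support" classification described in this abstract can be sketched as picking the calibration location whose per-access-point RSS statistics best explain an observation. The sketch below models each reference pattern class as independent Gaussians; all names and numbers are illustrative assumptions, not data from the cited work.

```python
import math

# Hypothetical calibration statistics: (mean dBm, std dev) of RSS per
# access point at each calibration location. Values are made up.
fingerprints = {
    "room_A": {"ap1": (-45.0, 3.0), "ap2": (-70.0, 4.0)},
    "room_B": {"ap1": (-68.0, 3.5), "ap2": (-48.0, 3.0)},
}

def log_support(obs, stats):
    """Sum of Gaussian log-likelihoods of observed RSS under one location's stats."""
    s = 0.0
    for ap, rss in obs.items():
        mu, sd = stats[ap]
        s += -0.5 * ((rss - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))
    return s

def locate(obs):
    # Pick the reference pattern class that maximally supports the observation.
    return max(fingerprints, key=lambda loc: log_support(obs, fingerprints[loc]))

print(locate({"ap1": -46.0, "ap2": -69.0}))  # room_A
```

A real deployment would estimate the per-location distributions from the invariant RSS statistics the paper describes rather than from hand-picked values.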
Article
Full-text available
Line detection is an important problem in computer vision, graphics, and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient line detection algorithm for LRS data based on a novel hopping-points Singular Value Decomposition (SVD) and Hough transform, in which SVD is applied to intermittent LRS points to accelerate processing. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted by the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Point (ICP) algorithm, whose performance degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
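The SVD step at the heart of such line extraction is a total-least-squares fit: the dominant right singular vector of the centered points gives the line direction. The sketch below shows only this building block; the cited paper's hopping-points scheme, Hough transform, and reverse-hop end-point recovery are not modeled.

```python
import numpy as np

def fit_line_svd(points):
    """Total-least-squares line fit via SVD.

    Returns (centroid, unit direction vector) of the best-fit line.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the largest singular value is the
    # direction of maximum variance, i.e. the line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

# Usage: noisy points along y = 0.5 x + 1 (synthetic, illustrative data).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 50)
pts = np.column_stack([x, 0.5 * x + 1.0 + rng.normal(0.0, 0.01, x.size)])
c, d = fit_line_svd(pts)
slope = d[1] / d[0]
print(round(slope, 2))  # ~0.5
```

Unlike ordinary least squares, this fit minimizes perpendicular distances, so it also handles near-vertical scan segments.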
Article
Full-text available
With the growth of cities and increased urban population, there is a growing demand for spatial information of large indoor environments. [...]
Article
Currently, most E-Bike navigation services rely on a network connection for real-time, turn-by-turn navigation. However, users in situations without network access may have trouble navigating. Hence, our aim is to develop a low-cost navigation system with offline, turn-by-turn navigation services. OpenStreetMap (OSM) is a collaborative project whose goal is to create a free, editable map of the world, and its community-contributed maps can be more detailed than Google Maps. While Google Maps is free to download and use, its offline functionality is restricted: users cannot perform searching or routing tasks without an internet connection, which means that when abroad or in low-signal areas they either have to pay expensive roaming costs or spend a significant amount of time without map coverage, limiting usability while traveling. For many developers, OSM has become a clear alternative to Google Maps. We used OSM-based Skobbler software to develop our navigation system. Google Maps Places APIs were also included in the system to help users search for nearby well-known places or stores.
Article
Current gesture recognition methods mostly adopt classification-based approaches such as neural networks (NN), support vector machines (SVM), and hidden Markov models (HMM). As for input image features, most studies combine color and depth images (e.g., RGB-D) to obtain more accurate information about the hand area, but such techniques may consume considerable computational resources and energy. To provide a low-cost gesture recognition method for wearable devices, this thesis uses only a time-of-flight depth camera to achieve lightweight gesture recognition. In most traditional gesture recognition methods, users have to wear gloves or bracelets so that depth cameras can accurately capture the hand area, from which hand contours, palm distances, and angle features can be obtained. Moreover, the Earth Mover's Distance (EMD) algorithm adopted in many gesture recognition approaches incurs high computational cost. In this study, to avoid the need for gloves or bracelets, we propose a new algorithm that computes the wrist cutting edges and captures the palm area. In addition, this thesis proposes an efficient finger detection algorithm that determines the number of fingers and significantly reduces computing time. In the experimental results, the proposed method achieves a recognition rate of 90% and runs at 5 frames per second on the NVIDIA TX1 embedded platform.
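One common way to count fingers from a hand contour, in the spirit of the contour/palm-distance features this abstract mentions, is to treat fingertips as local maxima of the distance from the palm center. The toy sketch below is purely illustrative: the threshold, the synthetic "spiky" contour, and the local-maximum rule are assumptions of ours, not the thesis's wrist-cutting or ToF-depth algorithm.

```python
import numpy as np

def count_fingers(contour, palm_center, min_ratio=1.6):
    """Toy finger counter: a fingertip is a contour point whose distance from
    the palm center is a local maximum and exceeds min_ratio * median distance."""
    d = np.linalg.norm(contour - palm_center, axis=1)
    med = np.median(d)
    n = len(d)
    tips = 0
    for i in range(n):
        if d[i] > min_ratio * med and d[i] >= d[i - 1] and d[i] > d[(i + 1) % n]:
            tips += 1
    return tips

# Usage: a unit circle "palm" with three radial spikes standing in for fingers.
angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
r = np.ones(360)
r[[30, 150, 270]] = 2.5  # three fingertip spikes
contour = np.column_stack([r * np.cos(angles), r * np.sin(angles)])
print(count_fingers(contour, np.zeros(2)))  # 3
```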