© The Author(s) 2023. Published by Oxford University Press on behalf of Central South University Press. This is an Open Access
article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/),
which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
ORIGINAL UNEDITED MANUSCRIPT
Correcting Unexpected Localization Measurements for Indoor
Automatic Mobile Robot Transportation Based on Neural
Networks
Jiahao Huang 1 , Steffen Junginger 2 , Hui Liu 3 , Kerstin Thurow 1,*
1. Center for Life Science Automation (CELISCA), University of Rostock, Rostock 18119,
Germany;
2. Institute of Automation, University of Rostock, Rostock 18119, Germany;
3. Institute of Artificial Intelligence & Robotics (IAIR), School of Traffic &
Transportation Engineering, Central South
University, Changsha 410075, Hunan, China.
*Corresponding author. E-mail: Kerstin.Thurow@celisca.de
Abstract:
The increasing use of mobile robots in laboratory settings has led to a higher degree of
laboratory automation. However, when mobile robots move in laboratory environments,
mechanical errors, environmental disturbances, and signal interruptions are inevitable. This
can compromise the accuracy of the robot's localization, which is crucial for the safety of
staff, robots, and the laboratory. A novel time-series prediction model based on a data
processing method is proposed to handle unexpected localization measurements of mobile
robots in laboratory environments. The proposed model serves as an auxiliary localization
system that can accurately correct unexpected localization errors by relying solely on the
historical data of mobile robots. The experimental results demonstrate the effectiveness of this
proposed method.
Keywords – Mobile Robots, Laboratory Automation, Indoor Localization, Neural Network
1. Introduction
In this study, the proposed method is based on the mobile robot transportation system
operating in the Center for Life Science Automation (CELISCA, University of Rostock) [1].
This system is designed to transport experimental materials in life sciences automatically by
connecting distributed automated experimental operation tables, as shown in Figure 1. Core
functions include: multi-floor mapping, real-time localization, path planning, access control,
elevator interaction, human-computer interaction, obstacle avoidance, automatic identification
and grasping of labware, etc. [2].
Downloaded from https://academic.oup.com/tse/advance-article/doi/10.1093/tse/tdad019/7153351 by guest on 06 May 2023
Figure 1 Mobile robots in CELISCA.
Figure 2 StarGazer-based localization system (ceiling landmark and StarGazer sensor)
This paper focuses on the robust localization of indoor mobile robots. The localization
system, developed by the Hagisonic Company, is composed of on-board StarGazer camera
sensors and localization landmarks mounted on the ceiling, as shown in Figure 2 [3]. The
localization method is based on infrared light emitted by the StarGazer's light-emitting
diodes (LEDs). The landmarks on the ceiling are made of special materials that reflect
infrared rays well. The infrared light emitted by the LEDs is reflected by the landmark
sheet, and the image is captured by the camera of the onboard StarGazer. From the unique
ID, X coordinate, Y coordinate, and angle information of each landmark, the relative
position between the robot and the landmark in the image is determined, and the pose of the
robot is then calculated. The StarGazer sensor offers real-time performance (<0.3 s) and
localization accuracy (<5 cm), making it an effective localization method for mobile robots
operating in indoor environments.
By adding additional landmarks, the StarGazer navigation maps can be extended to
match a laboratory environment of any size. However, under strong ambient light, the
system occasionally fails, especially on sunny days. Figure 3 shows the localization
coordinate error identification points recorded in the path trajectory of the mobile robot
work log. It can be seen that misidentification occurs during the operation of the robot and
may affect its actual operation. This is a potential threat to the safety of robots, the
laboratory, and staff.
Figure 3 Localization errors in mobile robots' daily transportation
To address the issue of unexpected indoor coordinates measured by the on-board
StarGazer sensors, a time-series data processing method for the robot localization system is
proposed in this paper. The method considers both real-time performance and data processing
accuracy.
This issue has been studied by other scholars as well. Slovák et al. proposed a
discrete Kalman filter-based trajectory optimization to correct mechanical errors and noisy
data in StarGazer-based navigation motion control, which are caused by direct sunlight on the
StarGazer camera sensor and by slippage of the Mecanum wheels [4]. The Hagisonic
StarGazer localization system is one of the passive landmark-based localization systems
widely used for indoor mobile robots. Since StarGazer is not open source, some scholars have
replicated and improved upon it based on its localization mechanism [5][6]. Wang et al. used
LED lights as active landmarks with a mix of duplicate IDs and unique IDs. A
method for estimating the pose of a mobile robot using the Monte-Carlo localization (MCL)
method to calculate the landmark probability of repeated IDs is proposed. This approach
makes landmark placement more reasonable, resulting in a localization error of less than 4 cm
and an orientation error of less than 6° [7]. Oh et al. applied Adaptive Extended Kalman Filter
(AEKF) to fuse StarGazer data and encoder data to obtain the pose of the mobile robot and
optimize the landmark distribution. The results indicate that this method improves the
maximum error based on StarGazer from 9.8 cm to 2.8 cm [8]. Andrzej et al. designed an
open-source localization system based on the localization principle of StarGazer.
Infrared LEDs with adjustable power are used to resist the influence of ambient light on the
localization device. Experiments show that the localization error is less than 5 cm and the
measurement frequency exceeds 30 times per second. Its open-source and low-cost nature
allows users to modify the code and hardware according to their needs [9].
In the field of indoor mobile robot localization, various intelligent algorithms [10],
machine learning [11], and cloud computing [12] have been applied to optimize the
localization and decision-making process. For instance, Liu et al. proposed a big data
prediction model based on the Apache Spark cloud platform. The model consists of three
parts: Empirical Wavelet Transform (EWT) decomposition, Gradient Boosting Decision Tree
(GBDT) big data prediction method, and Inverse Empirical Wavelet Transform (IEWT)
decomposition and reconstruction. Using the distributed computing characteristics of the big
data platform, the speed, accuracy, and robustness of mobile robot position big data
processing are improved, and the MAE reaches 0.9 m [13]. Eder et al. composed a robot
localization evaluation model using AdaBoost reinforcement learning and Convolutional
Neural Network (CNN). The evaluation model trains a CNN with the particle set of the
particle filter as input and the actual pose of the mobile robot as output. The probability of
correctly assessing the pose of the robot in the fixed environment reaches 83.25%, and the
AdaBoost improves the localization evaluation accuracy to 88.21%. The experimental results
demonstrate that the proposed method is an effective evaluation method for monitoring the
localization of mobile robots [14]. Li et al. developed a neural network-based localization
fusion method and established a three-layer neural network model to fuse encoder data and
Light Detection and Ranging (LiDAR) data to output coordinate data. The average
localization accuracy reaches 5.23 cm [15].
This paper has made several significant contributions. Firstly, a novel localization data
processing method has been proposed. By modeling the X and Y coordinate data separately,
and using five different decomposition methods to obtain sub-sequences of varying
frequencies, a composite trajectory prediction model has been developed. Secondly, the Long
Short-Term Memory (LSTM) has been employed as the predictor to accurately predict the
mobile robot's localization coordinates. Thirdly, the simulation results have demonstrated that
the proposed composite localization prediction model achieves a trajectory error of 0.15 m,
indicating its ability to effectively correct StarGazer sensor localization errors caused by sunlight
interference. Furthermore, the proposed method is not limited to the StarGazer localization
method, but can also be applied to other localization methods, as it only requires historical
data. Overall, the proposed method provides a promising solution for correcting mobile robot
localization errors in various scenarios.
2. Framework
A. Localization Coordinate Time-Series Prediction Model
The VMD-LSTM (Variational Mode Decomposition-LSTM) time-series prediction
model framework is illustrated in Figure 4, and the modeling process is described in detail as
follows:
(a) The localization coordinates data is obtained from the laboratory daily task log of the H20
robot based on the StarGazer localization system. First, errors in the data are removed, and
then the data is divided into the training part and the testing part according to the ratio of 3:1.
(b) To eliminate the influence of noise data caused by mechanical errors in the mobile robot,
five decomposition methods are used to decompose the coordinate time-series data into sub-
sequences of different frequencies. The prediction models are composed of VMD, Wavelet
(WT), EWT, Wavelet Packet Decomposition (WPD), and Fast Ensemble Empirical Mode
Decomposition (FEEMD) combined with LSTM.
(c) The LSTM is utilized as the predictor and is trained on the sub-sequences decomposed by
VMD. Once model training is complete, the test set is used to evaluate the stability and
scalability of the model. The Back Propagation (BP) neural
network and Nonlinear Autoregressive Network with Exogenous Inputs (NARX) neural
network are used as comparison models to verify the performance of the LSTM.
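The decompose-predict-recombine pipeline of steps (a)-(c) can be sketched as follows. This is a minimal illustration only: an FFT band split stands in for VMD, and a least-squares autoregressive fit stands in for the LSTM predictor (both are assumptions for illustration, not the paper's implementation).

```python
import numpy as np

def decompose(series, K):
    # Placeholder for VMD: split the spectrum into K frequency bands so the
    # sub-sequences sum back to the original series (assumption: any additive
    # decomposition slots into the pipeline the same way).
    F = np.fft.rfft(series)
    subs = []
    for idx in np.array_split(np.arange(F.size), K):
        Fk = np.zeros_like(F)
        Fk[idx] = F[idx]
        subs.append(np.fft.irfft(Fk, n=series.size))
    return subs

def predict_next(sub, order=7):
    # Stand-in predictor (least-squares AR fit) for the paper's LSTM.
    X = np.column_stack([sub[i:len(sub) - order + i] for i in range(order)])
    y = sub[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(sub[-order:] @ coef)

def decompose_predict(series, K=4):
    # Decompose, predict each sub-sequence one step ahead, and recombine.
    return sum(predict_next(s) for s in decompose(series, K))
```

The key property the sketch preserves is that the sub-sequences reconstruct the original series exactly, so the per-band one-step predictions can simply be summed.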
B. Localization Correction Strategy
The strategy of the proposed real-time processing method is illustrated in Figure 4. The
details of the proposed method are as follows:
(a) In the mobile robot transportation system, once the mobile robots begin to move, their
onboard StarGazer sensors will measure the robot’s indoor coordinates. The measured
coordinate data will be sent to the proposed model.
(b) The time-series analysis method is adopted to build the time-series models. Input the
acquired coordinate data into the trained model to get one-step ahead prediction position.
(c) The real robot indoor coordinates measured by the robot on-board StarGazer sensors will
always be compared to the prediction position. If the errors are abnormal, the robot onboard
controllers will stop the transportation movements and then use the predicted values to leave
the interference areas.
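Step (c) of this strategy can be sketched as a simple guard on each incoming measurement. The threshold value below is an assumption for illustration; the paper does not state the exact limit used.

```python
import numpy as np

# Threshold for flagging an abnormal measurement (assumed value; the
# paper does not specify the exact limit used on the robots).
ERROR_THRESHOLD_M = 0.5

def correct_measurement(measured, predicted, threshold=ERROR_THRESHOLD_M):
    """Return the coordinate to use and whether the robot should stop.

    measured / predicted are (x, y) pairs. If the distance between them
    is abnormal, the predicted position is used instead, mirroring step
    (c) of the correction strategy.
    """
    error = float(np.hypot(measured[0] - predicted[0],
                           measured[1] - predicted[1]))
    if error > threshold:
        return predicted, True   # stop, then navigate on predicted values
    return measured, False       # measurement accepted
```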
Figure 4 Framework of the proposed correction method
3. The Proposed Localization Correcting Model
To improve the analysis of the H20 mobile robot's localization coordinate data, the
LSTM network, known for its excellent performance in time-series data analysis, was chosen
as the predictor. However, due to mechanical errors and real-time image processing
instability, the data is prone to noise. Therefore, the VMD decomposition method was
employed to eliminate high-frequency noise from the localization data.
A. Long Short-Term Memory (LSTM)
The LSTM is a deep neural network. Based on the classic Recurrent Neural
Network (RNN), it enables the handling of long-term sequences. The LSTM
introduces a "gate" structure to control the state and output at different time points,
overcoming the vanishing gradient problem of the RNN. LSTM is an advanced version of
the RNN. Its structure consists of input gates, forget gates, output gates, and a cell state
with long-term memory [16]. Its structure is shown in Figure 5.
Figure 5 The structure of LSTM cell
where $x_t$ is the input sequence of the block at moment $t$; $h_t$ represents the short-term memory at time $t$; $c_t$ represents the long-term memory at time $t$; $\tilde{c}_t$ represents the current input cell state. An LSTM unit is used to implement the weight update of the memory unit. During the model training process, the correction coefficient corresponding to the error function returned by the backward pass can be selectively memorized by the action of these gates [17].

Forget gate $f_t$: according to formula (1), it controls how much of the long-term state $c_{t-1}$ from the last moment is retained.

$f_t = \sigma(w_f \cdot [h_{t-1}, x_t] + b_f)$ (1)

where $w_f$ represents the weight matrix of the forget gate, $b_f$ represents the bias term, and $\sigma$ represents the sigmoid function; $h_{t-1}$ is the output vector at the previous moment $t-1$, and $x_t$ is the input vector at the current moment $t$.

Input gate $i_t$: according to formulas (2)-(3), it controls how much of the current input is added to $c_t$.

$i_t = \sigma(w_i \cdot [h_{t-1}, x_t] + b_i)$ (2)

$\tilde{c}_t = \tanh(w_c \cdot [h_{t-1}, x_t] + b_c)$ (3)

where $w$ and $b$, analogous to the forget gate, represent the weight matrix and bias term, $\tilde{c}_t$ represents the current input cell state, and $\tanh$ is the hyperbolic tangent function.

Output gate $o_t$: according to formulas (4)-(6), it outputs the current long-term state $c_t$ and the current output $h_t$.

$o_t = \sigma(w_o \cdot [h_{t-1}, x_t] + b_o)$ (4)

$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$ (5)

$h_t = o_t \odot \tanh(c_t)$ (6)

where $w_o$, $b_o$, $h_{t-1}$ and $x_t$ are the same as in formulas (1)-(2), and $c_t$ is the cell state at time $t$.
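Equations (1)-(6) translate directly into code. The following is a minimal NumPy sketch of one LSTM cell step; the weight layout (one matrix per gate, applied to the concatenated $[h_{t-1}, x_t]$) and the initialization in the usage example are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    """One LSTM step implementing gates (1)-(6).

    W and b hold the weight matrices / bias vectors for the forget,
    input, candidate, and output gates, each applied to [h_{t-1}, x_t].
    """
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])       # forget gate, eq. (1)
    i_t = sigmoid(W["i"] @ z + b["i"])       # input gate, eq. (2)
    c_hat = np.tanh(W["c"] @ z + b["c"])     # candidate state, eq. (3)
    o_t = sigmoid(W["o"] @ z + b["o"])       # output gate, eq. (4)
    c_t = f_t * c_prev + i_t * c_hat         # cell state update, eq. (5)
    h_t = o_t * np.tanh(c_t)                 # hidden output, eq. (6)
    return h_t, c_t
```

Note that the output $h_t = o_t \odot \tanh(c_t)$ is always bounded in $(-1, 1)$, since $o_t \in (0, 1)$ and $|\tanh(c_t)| < 1$.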
B. Variational Mode Decomposition (VMD)
VMD is an adaptive signal decomposition method proposed by Dragomiretskiy
et al. on the basis of EMD [18]. It decomposes a signal into a chosen number of modes and
simultaneously finds the center frequency of each mode. The method reduces the impact of
the end-point effects seen in other decomposition methods and offers better robustness and
simplicity. The core of VMD is solving a variational problem, whose construction can be
explained as follows [19]:
Step 1: Obtain the unilateral spectrum of the analytic signal of each intrinsic mode function through the Hilbert transform:

$\left( \delta(t) + \dfrac{j}{\pi t} \right) * u_k(t)$ (7)

Step 2: Modulate the spectrum of each intrinsic mode function to the corresponding baseband by adding an exponential term to adjust the estimated center frequency of each intrinsic mode function:

$\left[ \left( \delta(t) + \dfrac{j}{\pi t} \right) * u_k(t) \right] e^{-j \omega_k t}$ (8)

Step 3: Calculate the squared norm of the gradient of the above modulated signal, and estimate the bandwidth of each mode. The corresponding constrained variational model is expressed as follows:

$\min_{\{u_k\},\{\omega_k\}} \left\{ \displaystyle\sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \dfrac{j}{\pi t} \right) * u_k(t) \right] e^{-j \omega_k t} \right\|_2^2 \right\}, \quad \text{s.t.} \ \displaystyle\sum_{k=1}^{K} u_k = f$ (9)

where $u_k, k = 1, 2, 3, \ldots, K$ are the VMD-decomposed Intrinsic Mode Functions (IMFs), and $\omega_k, k = 1, 2, 3, \ldots, K$ are the frequency centers of the IMFs; $\delta(t)$ is the impulse function; $u_k(t)$ is the modal component obtained by decomposition; $j$ is the imaginary unit; $\omega_k$ is the corresponding center frequency of each modal component; $*$ is the convolution operation; $K$ is the modal decomposition number; $t$ is the sample time.
The X-axis localization coordinates data of the mobile robot is decomposed by the VMD
method as shown in Figure 6.
Figure 6 The VMD decomposition results of the X localization coordinate data series
4. Localization Coordinates Prediction Experiments
A. Experimental Procedure
To verify the effectiveness of the proposed model, the proposed model is trained and
tested based on the log data during the transportation of the mobile robot in the CELISCA
laboratory. The localization coordinates data comes from the mobile robot transportation log,
which is divided into X coordinate and Y coordinate time-series data. The X coordinate and Y
coordinate data are modeled separately using a decomposition algorithm and a neural
network-based predictor. The experimental results are analyzed using three error criteria.
Then, the X coordinate and Y coordinate data are combined to form the mobile robot
trajectory. The simulation experiments are based on MATLAB R2020b. The computer is
configured with an i7 processor, an RTX 3060 graphics card, and 16 GB of memory.
B. Raw Localization Coordinates Data
The localization coordinates data used in the study were obtained from the robot's
transportation log. Before training, mislocalization points based on StarGazer were eliminated
from the raw data. The X and Y coordinates are recorded separately in the work log,
resulting in two datasets with a sample length of 1,200 each. The datasets were divided into
a training set and a test set in a ratio of 3:1 (900:300). Figures 7 and 8 show the X
coordinate and Y coordinate datasets. Figure 9 shows the real trajectory of the mobile robot.
Figure 7 Raw time-series of the X-axis coordinate
Figure 8 Raw time-series of the Y-axis coordinate
Figure 9 Robot movement trajectory combined by X-axis and Y-axis coordinates in CELISCA
C. Error Criteria for Proposed Models
To evaluate the prediction performance of each model, two error criteria, MAE (Mean
Absolute Error) and MAPE (Mean Absolute Percentage Error) are adopted in this paper. Both
MAE and MAPE are commonly used error evaluation indicators, but the emphasis is
different. MAE reflects the overall error of the data, while MAPE reflects the degree of error
for individual points. Additionally, the DISE (Distance Integrated Error) indicator is used in
evaluation, which calculates the average distance between predicted points and actual points
on a two-dimensional coordinate axis, to objectively reflect the overall error of the integrated
results of X-axis and Y-axis coordinates. The equations of the three indexes are as follows:
$MAE = \dfrac{1}{n} \displaystyle\sum_{t=1}^{n} \left| x(t) - \hat{x}(t) \right|$ (10)

$MAPE = \dfrac{1}{n} \displaystyle\sum_{t=1}^{n} \left| \dfrac{x(t) - \hat{x}(t)}{x(t)} \right|$ (11)

where $x(t)$ represents the coordinate localization series, $\hat{x}(t)$ represents the predicted values, and $n$ is the number of samples in $x(t)$.

$DISE = \dfrac{1}{n} \displaystyle\sum_{t=1}^{n} \sqrt{ \left( x(t) - \hat{x}(t) \right)^2 + \left( y(t) - \hat{y}(t) \right)^2 }$ (12)

where $x(t)$ and $y(t)$ represent the X-axis and Y-axis raw localization series respectively, and $\hat{x}(t)$ and $\hat{y}(t)$ represent the X-axis and Y-axis predicted values respectively.
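The three error criteria (10)-(12) translate directly into code; a small sketch:

```python
import numpy as np

def mae(x, x_hat):
    # Mean Absolute Error, eq. (10)
    return float(np.mean(np.abs(np.asarray(x, float) - np.asarray(x_hat, float))))

def mape(x, x_hat):
    # Mean Absolute Percentage Error, eq. (11); x(t) must be nonzero
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return float(np.mean(np.abs((x - x_hat) / x)))

def dise(x, x_hat, y, y_hat):
    # Distance Integrated Error, eq. (12): mean Euclidean distance
    # between predicted and actual points on the XY plane.
    dx = np.asarray(x, float) - np.asarray(x_hat, float)
    dy = np.asarray(y, float) - np.asarray(y_hat, float)
    return float(np.mean(np.hypot(dx, dy)))
```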
D. Parameter Settings
For the LSTM parameter setting, the X-axis raw data is used without decomposition.
When tuning, only one parameter is adjusted at a time, while the other parameters are
held at their best values.
Solver: ‘adam’. Comparing three solvers, according to the experimental results, adam
(MAE=0.4596) < rmsprop (MAE=0.5353) < sgdm (MAE=1.3283).
GradientThreshold: ‘1’. LSTM networks are susceptible to the vanishing and exploding
gradient problems, where the gradients become either too small or too large as they
propagate through the network. Set it to 1 to ensure the convergence speed and stability.
Shuffle: 'never'. Because the data has time-series characteristics, the order cannot be
disrupted for training, select 'never'.
InitialLearnRate: ‘0.001’. ‘0.001’ (MAE=0.4596) < ‘0.01’ (MAE=0.6914) < ‘0.0001’
(MAE=0.9720). Learning rate has a big impact on accuracy.
BatchSize: ‘70’. ‘70’ (MAE=0.4596, training time = 12s), ‘35’ (MAE=0.5459, training
time = 18s), ‘105’ (MAE=0.5409, training time = 9s). The batch size determines the
number of training examples used to compute gradients at each iteration of the training
process. Larger batch sizes can lead to more stable gradient estimates and faster
convergence, but it may also require more memory and computing resources.
MaxEpochs: ‘100’. ‘100’ (MAE=0.4596, training time = 12 s), ‘50’ (MAE=0.5415,
training time = 7 s), ‘150’ (MAE=0.4554, training time = 17 s). The maximum number
of epochs should be just large enough to reach the optimal solution; setting it too large
wastes time. Continuing to increase MaxEpochs beyond 100 brings little benefit.
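For reference, the selected training options can be collected in one place. The dictionary below simply records the values chosen above; the field names mirror MATLAB's trainingOptions and are used here only as labels.

```python
# Hyperparameters selected in the tuning study above, collected for reuse.
# Keys mirror MATLAB trainingOptions field names; values are from the paper.
LSTM_TRAINING_OPTIONS = {
    "Solver": "adam",            # best MAE = 0.4596 among adam/rmsprop/sgdm
    "GradientThreshold": 1,      # clip gradients for convergence stability
    "Shuffle": "never",          # preserve time-series order
    "InitialLearnRate": 0.001,   # best of {0.01, 0.001, 0.0001}
    "BatchSize": 70,             # best accuracy/time trade-off
    "MaxEpochs": 100,            # little gain beyond 100 epochs
}
```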
EKF parameter setting: the most important elements of the EKF are the state equation,
the measurement equation, and the Jacobian matrix.

State equation of the EKF model: the coefficients of formulas (13)-(14) are determined
by ARIMA (Autoregressive Integrated Moving Average). The autocorrelation of the data
is observed to obtain the closest relationship between the current value and the previous
seven values; the correlation coefficients are then obtained according to an AR(7) model.

$x(k) = 0.8942 x(k-1) - 0.017 x(k-2) + 0.493 x(k-3) + 0.0097 x(k-4) + 0.1132 x(k-5) + 0.0640 x(k-6) + 0.003 x(k-7), \quad k = 8, 9, 10, \ldots, N$ (13)

$y(k) = y(k-1) - 0.0822 y(k-2) + 0.3774 y(k-3) + 0.0209 y(k-4) + 0.1449 y(k-5) + 0.1363 y(k-6) + 0.0518 y(k-7) - 0.0350, \quad k = 8, 9, 10, \ldots, N$ (14)

where $x(k)$ and $y(k)$ are the current values, and $x(k-1), \ldots, x(k-7)$ are past data of the
time series.

Measurement equation: $z(k) = [x(k), y(k)]$. The real values are used as the measurement
equation.

The measurement noise covariance matrix: zero-mean Gaussian white noise.
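As a sketch of how such AR(7) state-equation coefficients can be estimated from the historical series, the snippet below uses a plain least-squares fit; this is a simplifying assumption standing in for the full ARIMA procedure described above.

```python
import numpy as np

def fit_ar(series, order=7):
    """Least-squares AR(order) fit, illustrating how the state-equation
    coefficients in formulas (13)-(14) can be estimated from history.
    (The paper uses ARIMA; plain least squares is a simplification.)"""
    s = np.asarray(series, float)
    # Column i holds the (i+1)-step-back values for each target sample.
    X = np.column_stack([s[order - i - 1: len(s) - i - 1] for i in range(order)])
    y = s[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # coef[i] multiplies the (i+1)-step-back value

def ar_predict(series, coef):
    # One-step-ahead prediction: x(k) = sum_i coef[i] * x(k-1-i)
    s = np.asarray(series, float)
    order = len(coef)
    lags = s[-1: -order - 1: -1]   # [x(k-1), x(k-2), ..., x(k-order)]
    return float(lags @ coef)
```

On a series generated exactly by a known recurrence, the fit recovers the generating coefficients, which is the property the EKF state equation relies on.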
E. Experimental Results and Analysis
In this section, seven models are compared to test the effectiveness of the proposed
model. The comparison includes the classic BP neural network and the NARX neural
network, which shares the same time-series characteristics; the results are shown in Figure
10 and Figure 11, and the error indicators are shown in Table 1. Four commonly used
decomposition algorithms are added to the comparison models: EWT, WT, WPD, and
FEEMD. These four decomposition algorithms are combined with LSTM. The results are
shown in Figure 12 and Figure 13, and the error indicators are shown in Table 2. Three
error metrics, MAE, MAPE, and DISE, are used to comprehensively evaluate the prediction
accuracy of the models. To observe the degree of fit between the predicted curve and the
raw data curve, Figures 10-13 each consist of an overall picture and a detail picture. Details
are identified by dashed boxes in the overall picture. The best single-term errors in Tables 1
and 2 are bolded. Table 3 is further calculated from the MAE data in Table 2 to obtain the
improvement ratio of the five decomposition algorithms over LSTM. Figure 14 is a visual
representation of the data in Table 3.
Figure 10 The prediction results of the X coordinate localization series: (a) overall picture; (b) detail picture
Figure 11 The prediction results of the Y coordinate localization series: (a) overall picture; (b) detail picture
Table 1. The prediction error indexes of the localization coordinates

Models | X MAE (m) | X MAPE (%) | Y MAE (m) | Y MAPE (%) | DISE (m)
LSTM   | 0.4596    | 0.1922     | 0.2269    | 0.2659     | 0.5853
NARX   | 0.8554    | 0.3312     | 0.4675    | 0.3475     | 0.9765
BP     | 0.9962    | 0.2902     | 0.3615    | 0.4345     | 1.1654
From the errors in Table 1, it can be found that LSTM has the lowest error for the X
coordinate, the Y coordinate, and the integrated trajectory. The LSTM can therefore be
considered a suitable predictor for localization coordinate sequences. It can be seen in
Figure 10 and Figure 11 that the NARX neural network has low precision and cannot
predict correctly when the value of the coordinate sequence is negative. The problem with
the BP neural network is that its prediction data fluctuates greatly, for example, at positions
70 and 240 on the X-axis, and more frequently on the Y-axis. Comparing Figures 7 and 8, it
can be seen that the data on the Y-axis fluctuates more frequently (more noise) and is more
difficult for the neural networks to predict, so the Y-axis MAPE values of all three models
are larger than those of the X-axis. Since the span of the X-axis (from -5 m to 20 m) is
larger than that of the Y-axis (from -1.5 m to 2.5 m), the MAE value of the X-axis is larger.
Figure 12 The prediction results of the X coordinate localization series (with decomposition algorithms): (a) overall picture; (b) detail picture
Figure 13 The prediction results of the Y coordinate localization series (with decomposition algorithms): (a) overall picture; (b) detail picture
Table 2. The prediction error indexes of the localization coordinates (with decomposition algorithms)

Models     | X MAE (m) | X MAPE (%) | Y MAE (m) | Y MAPE (%) | DISE (m)
VMD-LSTM   | 0.1341    | 0.0693     | 0.0901    | 0.0890     | 0.1649
WT-LSTM    | 0.1598    | 0.0712     | 0.0749    | 0.0701     | 0.2027
EWT-LSTM   | 0.2622    | 0.1474     | 0.0493    | 0.0445     | 0.2744
FEEMD-LSTM | 0.7154    | 0.4501     | 0.1163    | 0.1173     | 0.7527
LSTM       | 0.4596    | 0.1922     | 0.2269    | 0.2659     | 0.5853
EKF        | 0.2639    | 0.1003     | 0.1039    | 0.0837     | 0.3664
Table 3. Comparison of prediction models by MAE index, $P_{MAE} = (MAE_2 - MAE_1)/MAE_2$, where $MAE_1$ is the error of the decomposition-based model and $MAE_2$ is the error of the plain LSTM

Comparison models   | X-axis  | Y-axis
VMD-LSTM vs. LSTM   | 0.7082  | 0.6029
WT-LSTM vs. LSTM    | 0.6523  | 0.6698
EWT-LSTM vs. LSTM   | 0.4295  | 0.7827
WPD-LSTM vs. LSTM   | 0.0374  | -0.3327
FEEMD-LSTM vs. LSTM | -0.5565 | 0.4874
Figure 14 The promoting percentages of the comparison models by MAE index
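The improvement ratios in Table 3 can be reproduced from the MAE values in Table 2. A quick check, assuming $P_{MAE} = (MAE_{LSTM} - MAE_{model})/MAE_{LSTM}$; minor last-digit differences from the published table can occur because the published MAEs are themselves rounded.

```python
# MAE values copied from Table 2 (X and Y coordinates, in meters).
MAE_X = {"VMD-LSTM": 0.1341, "WT-LSTM": 0.1598, "EWT-LSTM": 0.2622,
         "FEEMD-LSTM": 0.7154, "LSTM": 0.4596}
MAE_Y = {"VMD-LSTM": 0.0901, "WT-LSTM": 0.0749, "EWT-LSTM": 0.0493,
         "FEEMD-LSTM": 0.1163, "LSTM": 0.2269}

def p_mae(table, model, baseline="LSTM"):
    # Improvement ratio of a decomposition-based model over plain LSTM.
    return round((table[baseline] - table[model]) / table[baseline], 4)

# p_mae(MAE_X, "VMD-LSTM") -> 0.7082, matching the first row of Table 3
```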
From the error results in Table 2, it can be found that VMD, WT, and EWT all
improve the LSTM significantly. This confirms the effectiveness of the VMD-LSTM model
for coordinate prediction. According to the MAPE index, WT and EWT are effective in
optimizing the discrete points of the curve. The MAE shows that VMD plays a more
significant role in reducing the overall error.
Observing the raw X and Y coordinate data, it can be found that the noise on the Y-axis
is more obvious. The VMD result has the lowest error for the X coordinate curve but is not
as good as WT and EWT for the Y coordinate curve. This suggests that the EWT and WT
decomposition methods are more suitable for curves with high-frequency noise, while VMD
decomposition is more suitable for curves with low-frequency noise. VMD decomposes the
time-series data into IMFs that capture different frequency bands in the data. These IMFs
are then fed into the LSTM network, which can effectively model the temporal
dependencies in the data, allowing more accurate predictions of future values in the time
series. VMD is a data-driven technique that can adapt to different signal characteristics,
making it more robust and flexible than other decomposition methods. The combination of
VMD and LSTM allows both frequency and temporal information to be incorporated into
the analysis, resulting in more accurate predictions and a deeper understanding of the
underlying patterns in the localization data.
According to Table 3 and Figure 14, it can be found that the decomposition algorithms
work differently for the X-axis and Y-axis. X-axis: VMD > WT > EWT > WPD > FEEMD;
Y-axis: EWT > WT > VMD > FEEMD > WPD. FEEMD even had a negative effect on the
X-axis data, as did WPD on the Y-axis data. This indirectly shows that the X-axis and
Y-axis data have different characteristics. Since the X-axis is the main moving direction of
the mobile robot, the Y-axis has more noise perturbations. Thus, even though the
VMD-LSTM prediction accuracy in the Y-axis direction is not the highest, its result is the
best among single models on the DISE index of the overall trajectory.
Furthermore, a hybrid model combining VMD-LSTM and EWT-LSTM is proposed.
The X coordinate is predicted using VMD-LSTM, and the Y coordinate is predicted by
EWT-LSTM. The DISE index obtained by the composite model is 0.1511 m, better than
the 0.1649 m of the best single model, VMD-LSTM, in Table 2. The predicted trajectory is
shown in Figure 15.
Figure 15 The trajectory prediction results of the proposed hybrid model
5. Conclusions
Accurate localization of mobile robots is essential for ensuring safety in the laboratory.
In this study, a novel robot localization correction method is proposed. The original
intention was to solve the problem that the infrared-based mobile robot localization system
fails in sunny conditions. The method predicts the position of the mobile robot entirely
based on the historical transportation data of the CELISCA laboratory. A decomposition
algorithm performs multi-frequency signal decomposition of the X and Y coordinate
time-series data, respectively. The LSTM neural network was chosen as the predictor and
compared with classic methods such as the BP network, the NARX network, and the EKF.
In the comparison of five decomposition algorithms, the VMD-LSTM method achieved the
best overall trajectory accuracy among single models, with a DISE index of 0.16 m.
VMD-LSTM works best on the X coordinate data and EWT-LSTM on the Y coordinate
data, so the two models were combined into a composite model, whose error reaches
0.15 m. This meets the localization needs of mobile robots in the laboratory. When a
localization error occurs, the localization coordinate data can be corrected by the proposed
method.
In conclusion, the proposed localization correction method has both limitations and
prospects. While it offers a viable solution for correcting mobile robot localization errors in
laboratory settings, it may not be suitable for environments that undergo significant changes,
in which case retraining is required. Additionally, the LSTM requires a certain amount of
computing resources that may not be available in some settings. On the other hand, the
method has the potential to be further improved with more advanced algorithms or
techniques, and advances in hardware will continue to lower the computing cost. Moreover,
the proposed method is sensor-free and only requires historical and real-time coordinate data,
which makes it adaptable to different scenarios and different robots. In summary, the
proposed method provides a promising direction for error correction in mobile robot
localization in laboratory settings, although its limitations and potential for improvement
should be considered. Future work can focus on addressing these limitations and further
exploring the prospects of the proposed method.