Adaptive Step Size Learning with Applications to
Velocity Aided Inertial Navigation System
Barak Or, Member, IEEE, and Itzik Klein, Senior Member, IEEE
Abstract—Autonomous underwater vehicles (AUVs) are commonly used in many underwater applications. Recently, the use of multi-rotor unmanned aerial vehicles (UAVs) for marine applications has received more attention in the literature. Usually, both platforms employ an inertial navigation system (INS) and aiding sensors for an accurate navigation solution. In AUV navigation, a Doppler velocity log (DVL) is mainly used to aid the INS, while for UAVs it is common to use global navigation satellite system (GNSS) receivers. The fusion between the aiding sensor and the INS requires the definition of a step size parameter in the estimation process. It determines the solution update frequency and, eventually, its accuracy. The choice of the step size poses a tradeoff between computational load and navigation performance. Generally, the aiding sensor update frequency is much slower than the INS operating frequency (hundreds of Hertz). Such a high rate is unnecessary for most platforms, specifically for low-dynamics AUVs. In this work, a supervised machine learning based adaptive tuning scheme to select the proper INS step size is proposed. To that end, a velocity error bound is defined, allowing the INS/DVL or the INS/GNSS filter to act in sub-optimal working conditions while minimizing the computational load. Results from simulations and a field experiment show the benefits of using the proposed approach. In addition, the proposed framework can be applied to other fusion scenarios between other types of sensors or platforms.
Index Terms—Autonomous underwater vehicles, inertial navigation, Kalman filtering, machine learning, step size, supervised learning, unmanned aerial vehicles.
I. INTRODUCTION
Autonomous vehicles, such as autonomous underwater vehicles (AUVs) or multi-rotor unmanned aerial vehicles, are commonly equipped with an inertial navigation system (INS) and other sensors [1] to provide real-time information about their position, velocity, and orientation [2]–[5]. The INS has two types of inertial sensors, namely gyroscopes and accelerometers. The former measure the angular velocity vector, and the latter measure the specific force vector. Since the inertial sensor measurements contain noise and error terms, the navigation solution drifts with time. In the underwater environment, a Doppler velocity log (DVL) is usually used to reduce the solution drift [6]–[9], while above the sea surface global navigation satellite system (GNSS) measurements are used instead [10], [11].
The inertial sensors, DVL, and GNSS provide discrete information regarding a vehicle's continuous motion. Hence, tracking a vehicle involves a discrete realization of continuous motion. Such a realization requires a step size selection, usually
Submitted: June 2022.
Barak Or and Itzik Klein are with the Hatter Department of Marine Technologies, Charney School of Marine Sciences, University of Haifa, Haifa, 3498838, Israel (e-mail: barakorr@gmail.com, kitzik@univ.haifa.ac.il).
made by the designer according to the scenario and computational constraints [12]–[14]. Moreover, to save power and extend the sensor/system life, the number of samples received from each source should be determined such that the information quality is maintained and the computational load is minimized [15]. Most of the time, AUVs move slowly underwater for long periods. Thus, there is no need to obtain a navigation solution at a high frequency, except in maneuvering situations, where the drift might grow and a momentarily high computational load is needed [3]. To cope with this trade-off, an adaptive step size may be used.
In [16], an adaptive scheme for the step size, based only
on the vehicle speed, was suggested for dealing with sensor
scheduling in target tracking scenarios. An adaptive scheme
was also suggested in [17], where the step size is based
only on the vehicle’s distance to the target. It improves the
energy efficiency during target tracking scenarios. In [12],
a simple criterion was suggested to define the step size for
sensor measurements to minimize computational load and still
provide moderate navigation performance. This approach is
based on the predictor and corrector of the linear discrete
Kalman filter (KF) [18], [19], where the main idea is to keep the discretized implementation of the continuous process at a low numerical error. Later, an adaptive scheme to update the step size in real-time scenarios with varying discrete noisy measurements was presented for constant velocity (CV) and constant acceleration (CA) models [20].
Focusing on the inertial measurement step size: in a sensor fusion scenario such as INS/DVL or INS/GNSS, the fusion is carried out using a nonlinear filter such as the extended KF (EKF). There, the inertial measurements are used in the system model, while the aiding measurements are used in the filter measurement model to update the navigation state. The inertial sensors operate at a much higher frequency (tens or hundreds of Hertz) than the aiding sensors (several Hertz). As a consequence, approaches such as [12], [15]–[17] are not suitable for such setups, as they assume constant step sizes.
To avoid the need for a constant step size, recent works explore the possibility of using classical machine learning (ML) or deep learning (DL) based approaches [21]. Such approaches have recently led to state-of-the-art performance in various fields such as computer vision [22] and natural language processing [23]. Recently, ML and DL approaches have been adopted in navigation applications. Among them is pedestrian dead reckoning, where DL approaches are used to regress the user's change in distance and heading [24]–[28]. In addition, DL based networks were used for attitude estimation using inertial sensors [29]–[31]. ML approaches also showed better performance in stationary coarse alignment, both in accuracy
and in time to converge [32]. This trend of integrating ML approaches with classical INS applications motivates adopting such approaches also for the problem at hand of finding an adaptive step size.
In this area, a recent work by Dias et al. [15] discusses an adaptive sampling interval for sensor networks, where an online reinforcement learning technique is adopted to minimize the number of transmissions of the reported data. They showed a large reduction in energy consumption while keeping the average information quality. However, that approach is limited to large step sizes and focuses on slow-dynamics systems, which is less relevant for navigation applications.
In this paper, a typical scenario of adaptive step size determination for a high-rate inertial measurement unit (IMU) aided by a low-rate sensor, such as a DVL or GNSS receiver, is considered. There, the quality/amount-of-measurements trade-off to minimize the velocity error as a function of the IMU step size is addressed in the EKF framework. In the proposed approach, ML models are used to predict the sub-optimal IMU step size and to handle the nonlinearity of the INS model. Establishing a relationship between navigation features and their sub-optimal IMU step sizes can be applied in a real-time setting to solve the IMU step size conflict (accuracy vs. computational load).
The main contributions of this paper are:
1) A numerical study of the effect of the inertial sensor step
size on the velocity error in INS/DVL and INS/GNSS
typical fusion scenarios.
2) Derivation of a learning-based scheme to determine an
adaptive IMU step size as a function of the velocity error.
3) Online integration of the proposed learning scheme with
EKF implementation for the navigation filter.
To validate the proposed approach, two numerical examples of an AUV and a quadrotor are addressed, as well as a quadcopter field experiment. Both simulation and experiment results show the benefits of implementing the proposed learning-based approach.
The rest of the paper is organized as follows: Section II deals with the problem formulation for the INS/DVL and INS/GNSS models. Section III presents the importance of step size selection, followed by the novel learning-based step size tuning approach, where the feature engineering, database generation process, and adaptive tuning scheme with the INS are discussed. Section IV presents the simulation and field experiment results, and Section V gives the conclusions.
II. ADAPTIVE NAVIGATION FILTER
The nonlinear nature of the INS equations requires a non-
linear filter. The most common filter for fusing INS with
external aiding sensors is the es-EKF [1]. There, the errors
are estimated and subtracted from the state vector. When
considering velocity aided INS, the position vector is not
observable [7]. Hence, it is not included in the error-state
vector, defined as:
δx = [δv^n  δε^n  b_a  b_g]^T ∈ R^(12×1),   (1)
where δv^n ∈ R^(3×1) is the velocity error-state vector expressed in the navigation frame, δε^n ∈ R^(3×1) is the misalignment vector expressed in the navigation frame, b_a ∈ R^(3×1) is the accelerometer bias residuals vector expressed in the body frame, and b_g ∈ R^(3×1) is the gyro bias residuals vector expressed in the body frame. The navigation frame is denoted by n and the body frame is denoted by b. The navigation frame center is located at the body center of mass, where the x-axis points to the geodetic north, the z-axis points down parallel to the local vertical, and the y-axis completes a right-handed orthogonal frame. The body frame center is located at the center of mass, the x-axis is parallel to the longitudinal axis of symmetry of the vehicle pointing forwards, the y-axis points right, and the z-axis points down such that it forms a right-handed orthogonal frame. The linearized, error-state, continuous-time model is
δẋ = F δx + G δw,   (2)
where F ∈ R^(12×12) is the system matrix, G ∈ R^(12×12) is the shaping matrix, and δw = [w_a  w_g  w_ab  w_gb]^T ∈ R^(12×1) is the system noise vector consisting of the accelerometer, gyro, and their bias random walk noises, respectively [33]. The system matrix F and the shaping matrix G are provided in the appendix. We define T^n_b as the transformation matrix between the body frame and the navigation frame. The corresponding discrete version of the navigation model in (2), valid for small step sizes, is
δx_(k+1) = Φ_k δx_k + G_k δw_k.   (3)
The transition matrix, Φ_k, is defined by a first-order approximation as
Φ_k ≜ I + F ∆t,   (4)
where k is a time index, δw_k is a zero mean white Gaussian noise, and I is an identity matrix.
The step size for the INS calculations is defined by
∆t_k ≜ t_k − t_(k−1),   (5)
where each step size is related to the IMU frequency, ν_IMU, by
∆t_k = 1 / ν_IMU.   (6)
The discretized process noise is given by
Q^d_k = G Q_c G^T ∆t_k,   (7)
where Q_c is the continuous process noise matrix.
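For illustration only (not the authors' code), the first-order discretization of (4) and (7) can be written as a short routine; the variable names mirror the symbols above:

```python
import numpy as np

def discretize(F, G, Qc, dt):
    """First-order discretization of the error-state model.

    F  : 12x12 continuous-time system matrix
    G  : 12x12 noise shaping matrix
    Qc : 12x12 continuous process noise matrix
    dt : IMU step size, dt = 1 / nu_IMU, per (6)
    """
    Phi = np.eye(F.shape[0]) + F * dt   # transition matrix, eq. (4)
    Qd = G @ Qc @ G.T * dt              # discretized process noise, eq. (7)
    return Phi, Qd
```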
The discrete es-EKF is used to fuse the INS with the external measurements. The initial error state and error-state covariance are defined as [1], [34]
δx̂_0 = 0_(12×1),  P_0 = Q^d,   (8)
where δx̂_0 is the initial estimated error-state vector, and P_0 is the initial error covariance.
The error-state vector is initialized every iteration, as follows:
δx̂⁻_k = 0.   (9)
The error covariance propagation (prediction) is given by
P⁻_k = Φ_(k−1) P_(k−1) Φ_(k−1)^T + Q^d_(k−1),   (10)
where P_(k−1) is the estimate from the previous state, k−1. The measurement arrives at time j, and then a filter update is made. The Kalman gain is given by
K_j = P⁻_j H_j^T [H_j P⁻_j H_j^T + R^d_j]^(−1),   (11)
where R^d is the discrete measurement noise covariance, assumed to be constant.
The error-state estimate update is given by
δx̂_j = K_j δz_j,   (12)
where
δz_j = ẑ_j − z_j   (13)
is the measurement residual vector, defined as the difference between the estimated (ẑ_j) and the actual (z_j) measurements. Finally, the error covariance update (correction) is given by
P_j = [I − K_j H_j] P⁻_j.   (14)
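A minimal sketch of the prediction and update steps (8)-(14) is given below; it assumes the matrices defined above are available as NumPy arrays and is not the authors' implementation:

```python
import numpy as np

def predict(P, Phi, Qd):
    """Error-state reset (9) and covariance propagation (10)."""
    dx = np.zeros((12, 1))                    # eq. (9)
    P = Phi @ P @ Phi.T + Qd                  # eq. (10)
    return dx, P

def update(P, H, Rd, dz):
    """Measurement update (11)-(14) for an aiding velocity residual dz."""
    S = H @ P @ H.T + Rd
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain, eq. (11)
    dx = K @ dz                               # error-state estimate, eq. (12)
    P = (np.eye(P.shape[0]) - K @ H) @ P      # covariance correction, eq. (14)
    return dx, P
```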
A. Velocity measurement models
Two types of velocity measurement models are considered: 1) DVL and 2) GNSS. Regardless of the measurement model, the velocity measurements are available at a constant frequency, with a different step size from the IMU. The step size of the aiding velocity sensor is given by
∆τ_j = τ_j − τ_(j−1),  j = 1, 2, ...   (15)
and is related to the aiding sensor sampling frequency, ν_Aiding, by
∆τ_j = 1 / ν_Aiding.   (16)
Without loss of generality, a constant step size is assumed for the aiding sensor measurements. Hence, the subscript j is omitted for ∆τ. Commonly, the IMU has a higher frequency rate than the aiding sensor, thus the following assumption is made:
∆τ ≫ ∆t_k,  ∀k.   (17)
1) DVL measurement model: After processing, the DVL outputs the AUV velocity vector in the DVL frame, v^d_DVL. Then, it is transformed to the body frame, v^b_DVL, and eventually to the navigation frame, v^n_DVL, where it is used in the navigation filter. Thus,
v^n_DVL = T^n_b T^b_d v^d_DVL,   (18)
where T^b_d is the transformation matrix from the DVL frame to the body frame. For simplicity, it is assumed that T^b_d is accurately known and is therefore omitted in further analysis. Linearizing (18) yields [35]
δv^b_DVL = T^b_n δv^n − T^b_n (v^n ×) δε^n.   (19)
The corresponding measurement residual is given by
δz^b_(DVL,j) = H_(DVL,j) δx_j + ς_(DVL,j),   (20)
where ς_(DVL,j) ∼ N(0, R^d_DVL) ∈ R^(3×1) is an additive discretized zero mean white Gaussian noise. It is assumed that ς_j and δw_j are uncorrelated. The corresponding time-variant DVL measurement matrix is given by
H_(DVL,j) = [T^b_n,j   −T^b_n,j (v^n ×)_j   0_(3×6)] ∈ R^(3×12).   (21)
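As a sketch (our notation, not the authors' code), the time-variant DVL measurement matrix of (21) can be assembled from the rotation matrix and the velocity skew-symmetric matrix:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix (v x) of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def H_dvl(T_bn, v_n):
    """DVL measurement matrix, eq. (21); T_bn rotates the navigation frame to the body frame."""
    return np.hstack([T_bn, -T_bn @ skew(v_n), np.zeros((3, 6))])  # 3x12
```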
2) GNSS velocity measurement model: The GNSS receiver outputs the velocity vector in the navigation frame. Hence, the corresponding time-invariant GNSS measurement matrix is given by
H_GNSS = [I_(3×3)   0_(3×9)] ∈ R^(3×12).   (22)
The corresponding measurement residual is given by
δz^n_(GNSS,j) = H_GNSS δx_j + ς_(GNSS,j),   (23)
where ς_(GNSS,j) ∼ N(0, R^d_GNSS) ∈ R^(3×1) is an additive discretized zero mean white Gaussian noise. It is assumed that ς_j and δw_j are uncorrelated.
III. ADAPTIVE STEP SIZE LEARNING
A. Motivation: the importance of step size selection
To demonstrate the influence of the step size on the vehicle's velocity error, a simplified simulated vehicle trajectory, shown in Figure 1, was used. The simulation parameters are summarized in Table I. As the example was conducted for a short period (T = 240 [s]), the IMU error model was simplified to include only zero mean white Gaussian noise:
f̄^b = f^b_imu + w_a,   (24)
and
ω̄_ib = ω^imu_ib + w_g,   (25)
where f^b_imu and ω^imu_ib are the true simulated outputs of the accelerometer and gyroscope, respectively.
In order to evaluate the navigation performance, a Monte Carlo (MC) simulation with 100 iterations was made. The averaged velocity root mean squared error (RMSE) for the entire scenario is 0.06 [m/s], obtained by setting ∆t = 0.01 [s].
TABLE I
INS/GNSS SIMULATION PARAMETERS

Description                   Symbol           Value
GNSS velocity noise (var)     R11, R22, R33    0.004^2 [m/s]^2
GNSS step size                ∆τ               1 [s]
Accelerometer noise (var)     Q11, Q22, Q33    0.02^2 [m/s^2]^2
Gyroscope noise (var)         Q44, Q55, Q66    0.002^2 [rad/s]^2
IMU step size                 ∆t               0.01 [s]
Num. Monte Carlo iterations   N                100
Simulation duration           T                240 [s]
Initial velocity              v^n_0            [5, 0, 0]^T [m/s]
Initial position              p^n_0            [32°, 34°, 5 m]^T
As the velocity accuracy is affected by the predetermined step size of the IMU, a sub-optimal step size satisfies the following condition:
∆t* = arg min_(∆t∈T) [E(∆t) − B],  B > 0,   (26)
where
E(∆t) ≜ E[ ‖δv^n_True(∆t)‖_2 ].   (27)
The argument ∆t ∈ T ⊂ [∆t_min, ∆t_max] minimizes the difference between the Euclidean 2-norm of the mean averaged velocity (speed) error vector, E, and a design value, B.
Fig. 1. Estimated and ground truth simulated trajectories. The blue line shows the ground truth and the black dotted line shows the estimated trajectory. The plots are presented in a local Cartesian coordinate system.
Thus, the criterion allows a velocity (speed) error of up to B for the sub-optimal step size. As B is a design parameter, one can choose it according to the platform and scenario at hand. In this work, we allow an averaged velocity error of 0.1 [m/s].
Obviously, when B goes to zero, ∆t → 0, and as a consequence the computational load increases. Hence, to find a trade-off between accuracy and computational load, the condition B > 0 must be satisfied. In due course, we performed 100 MC simulations, each with 10 different IMU step sizes, for three cases (different GNSS step sizes, ∆τ):
∆t ∈ T̃ = {0.002, 0.004, 0.008, 0.01, 0.016, 0.02, 0.032, 0.04, 0.05, 0.1}.   (28)
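The sketch below illustrates one practical reading of criterion (26): evaluate the averaged speed error for each candidate in the set (28) by Monte Carlo simulation and keep the largest step size whose error does not exceed B. The helper run_monte_carlo is hypothetical and stands in for the aided-INS simulation.

```python
def select_step_size(candidates, bound, run_monte_carlo, n_runs=100):
    """Largest step size whose averaged speed error E(dt) stays within B."""
    errors = {dt: run_monte_carlo(dt, n_runs) for dt in candidates}   # E(dt) per candidate
    feasible = [dt for dt, err in errors.items() if err <= bound]
    return max(feasible) if feasible else min(candidates)            # fall back to the smallest dt

candidates = [0.002, 0.004, 0.008, 0.01, 0.016, 0.02, 0.032, 0.04, 0.05, 0.1]
# dt_star = select_step_size(candidates, bound=0.1, run_monte_carlo=...)
```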
The results are summarized in Figure 2, where each point represents the averaged velocity (speed) RMSE. For all scenarios, as the step size increases, the averaged velocity (speed) RMSE also increases. As seen in the figure, the change in the averaged velocity (speed) RMSE slowly converges to steady-state values for all step sizes per scenario. Hence, there is a ∆t which is too large to carry the navigation information, and it is associated with a high error. On the other hand, given a bound for the averaged velocity RMSE, B, a moderate value of ∆t can be defined for which the computational load is minimized. For example, if ∆t = 0.05 [s] or ∆t = 0.1 [s] is chosen for case [2] (Figure 2), the same averaged velocity (speed) RMSE of 0.01 [m/s] is achieved. For this example, 50% of the computational load can be saved without affecting the velocity error accuracy. The other cases consider very small GNSS step sizes (∆τ), not necessarily available in the market yet, to demonstrate the impact of a high-rate update.
B. Supervised learning formulation
The power of ML arises from its ability to solve many difficult and non-conventional tasks. To determine the step size, the problem is formulated in a supervised learning (SL) approach.
Fig. 2. Velocity sensitivity to step size for five different scenarios with various GNSS step sizes (∆τ). The complete trajectory and variance values are provided in Table I.
A feature set is defined in which kinematic and statistical measures are considered. Formally, we search for a model that relates an instance space, X, and a label space, Y. We assume that there exists a target function, F, such that Y = F(X). Generally, the SL task is to find F, given a finite set of labeled instances:
{X_k, ∆t*_k}_(k=1)^N.   (29)
The SL aims to find a function F̃ that best estimates F. A loss function, l, is defined to quantify the quality of F̃ with respect to F. The overall loss is given by
L(Y, Ŷ(X)) ≜ (1/M) Σ_(m=1)^M l(y, ŷ)_m,   (30)
where M is the number of examples, and m is the example index. Minimizing L in a training/test procedure leads to the target function. The step size tuning problem is formulated as a classification problem, where only two classes are considered:
Y = {0.04, 0.002} ∈ R^(1×2).   (31)
Ideally, we would like to minimize the computational cost without influencing the accuracy. Yet, due to the inherent tradeoff, this is not possible. Therefore, we would like to minimize the computational cost with a minimum resulting accuracy degradation. As a consequence, only the step size is considered in the loss function, l, given by
l_m ≜ (∆t*_m − ∆t̂_m)^2,   (32)
where ∆t̂_m is the estimated step size value obtained by the learning model during the training process. The main reasons for considering this problem as a classification task and not a regression task are:
1) Filter robustness: minimizing the amount of step size switching along the navigation process.
2) ML model robustness: there are only two classes in the label space. By doing so, the deterministic label space avoids invalid values and improves real-time performance.
Notice that in (31) two different step sizes are considered. Yet, if needed, the proposed framework can be applied to more values, depending on the scenario at hand. The major benefit of defining the problem as a bi-classification predictor is that we minimize the amount of "chattering" between many step size values (which might lead to an unstable filter). Also, using two values, one large (0.04 [s]) and the other small (0.002 [s]), clearly presents the computational effort reduction. Later, an example supporting the bi-classification choice is provided.
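As a small illustration of (30)-(32) under the two-class label space of (31), the sketch below evaluates the average squared step-size loss; the labels and predictions are assumed values, not from the paper's dataset:

```python
import numpy as np

def step_size_loss(dt_true, dt_pred):
    """Average squared step-size loss, eqs. (30) and (32)."""
    dt_true = np.asarray(dt_true, dtype=float)
    dt_pred = np.asarray(dt_pred, dtype=float)
    return np.mean((dt_true - dt_pred) ** 2)   # (1/M) * sum_m l_m

# Hypothetical labels/predictions drawn from the label space (31):
y_true = [0.04, 0.04, 0.002, 0.04]
y_pred = [0.04, 0.002, 0.002, 0.04]
print(step_size_loss(y_true, y_pred))          # only the single mislabel contributes
```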
C. Feature engineering
Sixteen high-level and low-level features are considered:
X = {X_high, X_low} ∈ R^(1×16),   (33)
where
1) High-level features: a group of features that contains physical values of various filter and vehicle parameters in the scenario. Mean and square root operations are commonly used in many types of classification/regression problems and are thus used here:
X_high = {√(Q_g,11), √(Q_a,11), √(R^d_11), ∆τ, E(v̂^n)^2, E(φ̂)^2, E(θ̂)^2, E(ψ̂)^2} ∈ R^(1×10),   (34)
where the subscript 11 stands for the first element of a matrix, and E is the expected value operator, calculated based on a moving average of the last 50 values. E(v̂^n)^2 contains three features, for the north, east, and down velocity components. φ, θ, and ψ are the body angles (briefly explained in the appendix).
2) Low-level features: the low-level features are scalar values, created based on combinations and modifications of the high-level features, as summarized in Table II. The low-level features were chosen due to the dynamic body behavior and noise characteristics.
Generally, the designer can choose additional or different features for the classification task; a short construction sketch is given after Table II.
TABLE II
LOW-LEVEL FEATURES

Index      Feature                                          Description
X_low,1    √(Q_ω,11 + Q_fb,11 + R^d_11)                     Noise StD 2-norm
X_low,2    √(1/Q_ω,11 + 1/Q_fb,11 + 1/R^d_11)               Noise 'mean'
X_low,3    (√Q_ω,11 + √Q_fb,11) / √R^d_11                   IMU + aiding sensor noise ratio
X_low,4    R^d_11 · ∆τ                                      Aiding sensor noise variance
X_low,5    ‖v̂^n‖_2                                          Vehicle speed
X_low,6    (√Q_ω,11 + √Q_fb,11) / √R^d_11 · ‖v̂^n‖_2         X_low,3 · X_low,5
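The sketch below assembles the 16-element feature vector of (33)-(34) and Table II. The exact windowing of the expectation operator and the variable names are our assumptions (the text only states a 50-sample moving average), so treat this as illustrative:

```python
import numpy as np

def build_features(Qg11, Qa11, Rd11, d_tau, v_window, phi_w, theta_w, psi_w):
    """16 features of (33)-(34) and Table II.

    Qg11, Qa11, Rd11 : first diagonal elements of the gyro, accelerometer,
                       and aiding-sensor noise covariances
    d_tau            : aiding sensor step size
    v_window         : (N, 3) window of estimated NED velocities (last 50 samples)
    phi_w, theta_w, psi_w : windows of estimated roll, pitch, yaw angles
    """
    v_mean = np.mean(v_window, axis=0)
    x_high = [np.sqrt(Qg11), np.sqrt(Qa11), np.sqrt(Rd11), d_tau,
              *(v_mean ** 2),                                     # E(v_N)^2, E(v_E)^2, E(v_D)^2
              np.mean(phi_w) ** 2, np.mean(theta_w) ** 2, np.mean(psi_w) ** 2]
    speed = np.linalg.norm(v_mean)                                # ||v^n||_2
    ratio = (np.sqrt(Qg11) + np.sqrt(Qa11)) / np.sqrt(Rd11)       # X_low,3
    x_low = [np.sqrt(Qg11 + Qa11 + Rd11),                         # noise StD 2-norm
             np.sqrt(1.0 / Qg11 + 1.0 / Qa11 + 1.0 / Rd11),       # noise 'mean'
             ratio,
             Rd11 * d_tau,                                        # aiding noise x step size
             speed,
             ratio * speed]                                       # X_low,3 * X_low,5
    return np.array(x_high + x_low)                               # shape (16,)
```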
D. Database generation process
As a preliminary stage of training, a dataset should be generated. Then, it can be processed by the ML model. A velocity error bound, B, was defined and different trajectories were used. There, the vehicle traveled along them several times with various step sizes to find and store those that minimized the vehicle velocity vector (speed) error. The various trajectories were created by modifying the radius of a circular motion, straight lines, and general curves. The process of generating such trajectories is divided into two parts: an INS simulation with perfect IMU measurements to produce the GT trajectories and store them, and a noisy velocity aided INS simulation to create noisy examples with their corresponding ∆t* satisfying a desired velocity error bound, B. The IMU noise variance values, as well as the velocity aiding sensor noise values, are summarized in Table III.
TABLE III
VALUE RANGES FOR DATABASE GENERATION

Description                   Value Range                        Amount
Aiding velocity noise (var)   R_ii ∈ [0.0001, 0.1]^2 [m/s]^2     10
Aiding velocity step size     ∆τ ∈ [0.1, 2] [s]                  6
Accelerometer noise (var)     Q_ii ∈ [0.0005, 0.5]^2 [m/s^2]^2   10
Gyroscope noise (var)         Q_ii ∈ [0.0001, 0.1]^2 [rad/s]^2   10
The database generation process is described in Figure 3. The IMU noise variances were first set, similarly to the simplified IMU error model provided in (24)–(25), with different noise variance values. Then, the IMU readings and aiding sensor measurements (constant step size) are processed by the velocity aided INS scheme, which provides the vehicle navigation solution and its state errors. As the focus is to determine a sub-optimal ∆t that minimizes the velocity error, the error upper bound, B, was chosen to be 0.1 [m/s] (other values can be examined instead). If condition (26) is satisfied, the example is stored. Else, the step size of the IMU, ∆t, is reduced. Repeating this process for many scenarios yields a large dataset, enabling model training. Therefore, the stored examples have a step size that satisfies the defined error bound while keeping the computational load as low as possible, achieving the objective of balancing the trade-off between computation and accuracy.
Fig. 3. Database generation process. IMU and accurate velocity measurements enter the velocity aided INS, and then the velocity error is calculated. Given the GT velocity, the system decides whether the step size should be reduced or not.
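A minimal sketch of the loop in Figure 3 follows; simulate_aided_ins is a hypothetical helper returning the averaged speed error and the feature vector for a given scenario and IMU step size.

```python
def generate_example(scenario, step_sizes, bound, simulate_aided_ins):
    """Reduce the IMU step size until the velocity error bound B of (26) is met."""
    for dt in sorted(step_sizes, reverse=True):        # start from the cheapest (largest) dt
        err, features = simulate_aided_ins(scenario, dt)
        if err <= bound:                               # condition satisfied: store the example
            return features, dt                        # labeled pair (X_k, dt*_k)
    return features, min(step_sizes)                   # fall back to the smallest step size

# dataset = [generate_example(s, [0.04, 0.002], 0.1, simulate_aided_ins) for s in scenarios]
```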
E. Adaptive tuning scheme
Applying the suggested tuning approach in an online setting involves integrating the velocity aided INS with the classifier, as presented in Figure 4. Algorithm 1 summarizes the velocity aided INS with the adaptive step size tuning algorithm.
Algorithm 1 Velocity aided INS with adaptive step size tuning
Input: ω_ib, f_b, v_Aiding, ∆t_0, ∆τ, T, tuningRate
Output: v^n, ε^n
Initialization: v^n_0, ε^n_0
LOOP Process
 1: for t = 0 to T do
 2:   obtain ω_ib, f_b
 3:   solve navigation equations (3)
 4:   if mod(t, ∆τ) = 0 then
 5:     obtain v_Aiding (20), (23)
 6:     update navigation state using the es-EKF (8)-(14)
 7:   end if
 8:   Calculate features and predict ∆t*_(k+1):
 9:   if mod(t, tuningRate) = 0 then
10:     calculate X (33)-(34)
11:     ∆t*_(k+1) = F̃_trained(X)
12:   end if
13: end for
Fig. 4. Step size adaptive tuning by applying the ML classifier. The features (high level and low level) are calculated based on temporal data. Then, they are processed by a classifier that outputs the sub-optimal step size.
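A sketch of the online integration in Algorithm 1 and Figure 4 is given below. The ins, ekf, and build_features objects are hypothetical placeholders for the mechanization (3), the es-EKF (8)-(14), and the feature construction (33)-(34); a trained scikit-learn style classifier is assumed.

```python
import numpy as np

def run_adaptive_ins(imu_stream, aiding_stream, classifier, dt0, d_tau, T,
                     tuning_rate, ins, ekf, build_features):
    """Online velocity aided INS with adaptive step size tuning (Algorithm 1)."""
    dt, t = dt0, 0.0
    while t < T:
        w_ib, f_b = imu_stream(t)                      # gyro and accelerometer readings
        ins.propagate(w_ib, f_b, dt)                   # navigation equations (3)
        if np.isclose(t % d_tau, 0.0):                 # aiding measurement available
            ekf.update(ins, aiding_stream(t))          # es-EKF update, (8)-(14)
        if np.isclose(t % tuning_rate, 0.0):           # feature calculation and prediction
            X = np.asarray(build_features(ins, ekf)).reshape(1, -1)
            dt = float(classifier.predict(X)[0])       # sub-optimal step size for next steps
        t += dt
    return ins.state()
```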
IV. RESULTS
A. Classification methods comparison
In order to find the most suitable SL prediction model, various classification models were explored: fine tree (decision tree with many leaves), Naive Bayes, K-nearest neighbors (KNN), support vector machine (SVM), logistic regression, and ensemble (boosted trees) [21], [36]. The comparison process was made for 5 cases, with different IMU variance values, using a database (created as described in Section III.D) consisting of 18,000 examples. These include motion along straight lines with various velocities, and circles with different radii and velocities. All classifiers were bi-classifiers, with small and large step sizes, for better model robustness (briefly explained in the appendix). The IMU and aiding sensor noise covariances were tuned with different values to enrich the dataset. Note that this dataset was created to handle both INS/DVL and INS/GNSS fusion scenarios. Two training paradigms were considered: train/test with an 80/20 ratio, and cross validation with five folds. The vehicle dynamics were set by tuning the IMU and selecting the initial kinematic conditions.
To evaluate the performance of the proposed models, the area under the curve (AuC) measure was employed (see the appendix for further explanation). For that, the positive value was defined as P: ∆t = 0.002 [s], and the negative value as N: ∆t = 0.04 [s]. The second criterion used to evaluate the proposed models is the accuracy measure. Both criteria were calculated for each of the candidate models.
Classification results comparing the ML approaches are provided in Figure 5. Each approach obtains an AuC score and an accuracy score for its classification performance, once using the train/test paradigm and once using the cross-validation paradigm. All models obtained accuracy and AuC rates above 0.88 for both paradigms. The SVM obtained a high accuracy rate (0.95) using the train/test paradigm, and a high rate in the cross-validation paradigm (also, a good performance according to the AuC rate). The ensemble method slightly outperforms the SVM approach according to the AuC (both train/test and cross validation) as well as according to accuracy (cross validation). As the accuracy of both methods is similar, the SVM was chosen since, in general, it is known to be a robust classifier, as it maximizes the hyperplane margins [37]. Another justification for considering the SVM classifier is the trained model computational time. To that end, the execution time was measured in the algorithm working environment (Intel i7-6700HQ CPU@2.6GHz, 16GB RAM, with MATLAB). The ensemble model averaged iteration calculation time was 0.015 [s], while the SVM required 0.001 [s]. Hence, the SVM is 15 times faster than the ensemble, which is a very important property in real-time applications. These reasons lead us to choose the SVM as the classifier for this task, as we deal with real-time scenarios and aim to keep the navigation filter robust and efficient. The resulting ROC curve with the chosen classifier (FPR = 0.03, TPR = 0.87) received an AuC of 0.98, with an accuracy of 0.95 (obtained by the train/test paradigm). Hence, the SVM classifier was chosen for further analysis.
Fig. 5. Learning model comparison with AuC and accuracy performance
measures.
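The comparison in this subsection was carried out in MATLAB; the scikit-learn sketch below only illustrates an equivalent train/test and five-fold cross-validation evaluation with AuC and accuracy, and is not the authors' setup (the gradient boosting model stands in for the boosted-trees ensemble).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import roc_auc_score, accuracy_score

def compare_models(X, y):
    """Compare candidate classifiers; dt = 0.002 s is treated as the positive class."""
    y_bin = (np.asarray(y) == 0.002).astype(int)
    models = {"SVM": SVC(kernel="rbf", probability=True),
              "Ensemble": GradientBoostingClassifier()}
    X_tr, X_te, y_tr, y_te = train_test_split(X, y_bin, test_size=0.2, random_state=0)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        acc = accuracy_score(y_te, model.predict(X_te))
        cv_acc = cross_val_score(model, X, y_bin, cv=5).mean()   # five-fold cross validation
        print(f"{name}: AuC={auc:.2f}, accuracy={acc:.2f}, CV accuracy={cv_acc:.2f}")
```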
B. MRMR based feature rank
The learning-based models were trained using 16 features. In many real-time applications, the computation of these features is critical: it might take a long time and eventually result in system latency or memory constraints. To avoid that, feature dimensionality reduction methods are applied. There, a feature ranking approach is used to select the most contributing features. One of the classical and common feature ranking approaches is the minimum redundancy maximum relevance (MRMR) method [38]. It was used to rank the feature set (33). Figure 6 summarizes the results, showing that E(ψ̂)^2, √(Q_a,11), and √(R^d_11) are the three most important features. Although all features were used in the current research, the MRMR showed that the high-level features contribute more to the classification process. Thus, if a computational constraint is imposed, the less contributing features could be removed from the analysis/model.
Fig. 6. Feature importance ranking as obtained by the MRMR method.
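The MATLAB MRMR routine was used in the paper; the greedy difference-criterion sketch below is only a rough stand-in for the method of [38], built on scikit-learn mutual information estimators:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_rank(X, y):
    """Greedy minimum-redundancy maximum-relevance feature ranking (MI difference)."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=0)       # MI(feature, class label)
    selected, remaining = [], list(range(n_features))
    while remaining:
        def score(j):
            if not selected:
                return relevance[j]
            redundancy = np.mean([mutual_info_regression(X[:, [j]], X[:, s],
                                                         random_state=0)[0]
                                  for s in selected])
            return relevance[j] - redundancy                     # relevance minus redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected                                              # indices, most important first
```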
C. Simulations
Two different simulation scenarios were used to validate the velocity aided INS with adaptive step size tuning: first, an INS/GNSS scenario with parameters given in Table IV, and second, an INS/DVL scenario with parameters given in Table V. Both simulated trajectories are constructed from lines and curves that the ML classifier was not trained on. However, it is expected that the model will successfully capture the dynamics and statistics along the trajectories to predict the sub-optimal ∆t*, as it was trained over 18,000 examples (Section III.D) with various scenarios. For comparison, we add one classical approach for step size tuning as a function of the velocity, given by [16]:
∆t*_(k+1)(‖v^n‖_2) = { ∆t_min,  (‖v^n‖_2)_k > v_Thresh ;  ∆t_max,  (‖v^n‖_2)_k ≤ v_Thresh },   (35)
where the velocity threshold, v_Thresh, is determined by the designer according to the real-time scenario. In the adaptive setting, once the ML classifier predicts a step size different from the one used in the last 20 steps, the algorithm tunes the updated step size for the next iteration.
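For reference, the speed-threshold rule (35) reduces to a one-line condition; the numeric defaults below are the two step-size classes used in this work:

```python
def classical_step_size(speed, v_thresh, dt_min=0.002, dt_max=0.04):
    """Speed-threshold step size rule of (35): fast motion gets the small (accurate)
    step size, slow motion the large (cheap) one."""
    return dt_min if speed > v_thresh else dt_max

# Example: an AUV moving at 1.2 m/s with v_thresh = 1 m/s receives dt = 0.002 s.
print(classical_step_size(1.2, v_thresh=1.0))
```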
For the trajectory shown in Figure 1, graphs of the predicted ∆t as a function of time according to the ML classifier and the classical approach (35) are plotted in Figure 7 for the INS/GNSS simulation. There, the ML classifier predicted mostly ∆t* = 0.04 [s], except for a short time interval ([50, 58] [s]), where it predicted ∆t* = 0.002 [s]. The velocity error results are shown in Table VI, where, in addition to the adaptive tuning, two constant step sizes were used for comparison. Applying the small step size (∆t = 0.002 [s]) results in a mean velocity error of 0.145 [m/s], which is the lowest error, associated with a high computational load of 120,000 iterations, and a maximum velocity error of 0.41 [m/s]. On the other hand, applying the larger step size (∆t = 0.04 [s]), using only 6,000 iterations (only 5% of the smaller step size), results in a mean velocity error of 0.187 [m/s] (an increase of less than 0.05 [m/s]) and a maximum velocity error of 0.655 [m/s]. By applying the adaptive step size tuning approach, a mean velocity error of 0.181 [m/s] was obtained, with only 9,381 iterations. This is less than 10% of the conservative approach with an IMU step size of 0.002 [s], and it also yields a lower velocity error than the large step size. The maximum velocity error with the adaptive step size is 0.37 [m/s], lower than both constant cases.
In Figure 8, the INS/DVL simulated trajectory is shown. The changes of ∆t* over time according to the ML classifier and the classical approach (35) are plotted in Figure 9. There, the ML classifier predicted ∆t* = 0.002 [s] for the first 17 [s] of the trajectory, and then predicted ∆t* = 0.04 [s] until the end of the trajectory. The velocity error results are shown in Table VII, where, similarly to the INS/GNSS simulation, two constant step sizes were used for comparison with the adaptive tuning step size. Applying the smaller step size (∆t = 0.002 [s]) results in a 0.015 [m/s] mean velocity error and a maximum velocity error of 0.019 [m/s], which is the error associated with a high computational load of 20,000 iterations. On the other hand, applying the larger step size (∆t = 0.04 [s]), using only 1,000 iterations (only 5% of the smaller step size), results in a 0.0216 [m/s] mean velocity error (an increase of less than 0.007 [m/s]) and a maximum velocity error of 0.046 [m/s]. By applying the adaptive step size tuning approach, a mean velocity error of 0.012 [m/s] was obtained with only 9,360 iterations. This is less than half of the conservative approach with an IMU step size of 0.002 [s], and it also yields a lower mean velocity error. A maximum velocity error of 0.028 [m/s] was obtained. For both the INS/GNSS and INS/DVL simulations, the classical method for determining ∆t_k based on the vehicle speed, (35), obtained an unfavorable combination of computational load and mean velocity error: for the INS/GNSS, a mean velocity error of 0.203 [m/s] was obtained with over 70,000 iterations, and for the INS/DVL, a mean velocity error of 0.0283 [m/s] was obtained with nearly 10,000 iterations. The threshold was determined as the initial vehicle speed in both scenarios. To summarize, while using our proposed adaptive approach, the average velocity error increased by only 0.077 [m/s] while using only about 1/7 of the computational load.
TABLE IV
INS/GNSS WITH ADAPTIVE STEP SIZE SIMULATION PARAMETERS

Description                 Symbol           Value
GNSS velocity noise (var)   R11, R22, R33    0.02 [m/s]^2
GNSS step size              ∆τ               1 [s]
Accelerometer noise (var)   Q11, Q22, Q33    0.04^2 [m/s^2]^2
Gyroscope noise (var)       Q44, Q55, Q66    0.003^2 [rad/s]^2
Simulation duration         T                240 [s]
Initial velocity            v^n_0            [5, 0, 0]^T [m/s]
Initial position            p^n_0            [32°, 34°, 5 m]^T
Velocity threshold          v_Thresh         5 [m/s]
TABLE V
INS/DVL WITH ADAPTIVE STEP SIZE SIMULATION PARAMETERS

Description                 Symbol           Value
DVL noise (var)             R11, R22, R33    0.004 [m/s]^2
DVL step size               ∆τ               1 [s]
Accelerometer noise (var)   Q11, Q22, Q33    0.02^2 [m/s^2]^2
Gyroscope noise (var)       Q44, Q55, Q66    0.002^2 [rad/s]^2
Simulation duration         T                40 [s]
Initial velocity            v^n_0            [1, 0, 0]^T [m/s]
Initial position            p^n_0            [32°, 34°, −5 m]^T
Velocity threshold          v_Thresh         1 [m/s]
TABLE VI
INS/GNSS SIMULATION RESULTS

∆t [s]             Mean δv^n [m/s]   Max δv^n [m/s]   Iterations
Adaptive (ours)    0.181             0.370            9,381
0.002              0.145             0.410            120,000
0.04               0.187             0.655            6,000
∆t(‖v^n‖_2)        0.203             0.204            70,410
Fig. 7. INS/GNSS simulation with the sub-optimal step size, ∆t*, based on the ML classifier and the classical ∆t(‖v^n‖_2) rule, as a function of time for a duration of 4 minutes.
TABLE VII
INS/DVL SIMULATION RESULTS

∆t [s]             Mean δv^n [m/s]   Max δv^n [m/s]   Iterations
Adaptive (ours)    0.012             0.028            9,360
0.002              0.015             0.019            20,000
0.04               0.0216            0.046            1,000
∆t(‖v^n‖_2)        0.0283            0.076            9,884
Fig. 8. INS/DVL simulation trajectory for an AUV. The vehicle moves at a constant depth (−5 [m]) and performs a rectangular motion. The blue line is the GT trajectory and the dotted black line is the estimated trajectory.
Fig. 9. INS/DVL simulation with the sub-optimal step size, ∆t*, based on the ML classifier and the classical ∆t(‖v^n‖_2) rule, as a function of time for a duration of 40 seconds.
D. Field experiment
A field experiment using a quadrotor was performed (Figure 10). The altitude of the quadrotor was kept constant and the horizontal trajectory is shown in Figure 11. The 12 error-state model, described in Section II.A, was used to obtain the navigation solution. The filter was updated by velocity measurements from a GNSS receiver, and the resulting navigation solution was compared with GT measurements obtained using an RTK device. To examine scenarios that are different from the ones used in our simulations and challenging for the proposed ML algorithm, a figure-eight trajectory was applied, including accelerations/decelerations and turns for part of the time and almost straight lines for the rest. Also, different parameter values were examined in the field experiment, so that more cases were examined and covered, for better conclusions and generalization of the ML approach, while keeping the ML strategy the same. The experiment parameters are provided in Table VIII. For the experiment, the larger step size in Algorithm 1, line 11, is 0.02 [s]. There was no need for additional training, as the experimental dataset was treated as a new test dataset; that is, the ML model was trained only on the simulation training dataset.
The experiment error results using the classical approach, ∆t_k = 0.002 [s], ∆t_k = 0.02 [s], and ∆t* from the suggested adaptive tuning approach are summarized in Table IX. It appears that 1,800 iterations result in a 0.128 [m/s] mean velocity error, and increasing the number of iterations to 18,000 yields a meaningful reduction, where only a 0.01 [m/s] mean velocity error is obtained. A maximum velocity error of 6.25 [m/s] was obtained for a fractional initialization moment, both for the adaptive step size and for the constant step size of ∆t_k = 0.002 [s]. Our suggested adaptive tuning algorithm finds a sub-optimal solution, where only 9,900 iterations yield a 0.02 [m/s] mean velocity error (as defined by setting B). The designer controls the number of iterations according to the velocity RMSE criterion. In this experiment, compared to using ∆t_k = 0.02 [s], the number of iterations increases adaptively by a factor of almost 6 (from 1,800 to 9,900) to meet the designer's criterion. On the other hand, applying the adaptive tuning scheme results in a 45% reduction in the number of iterations relative to setting ∆t_k = 0.002 [s], with only a 0.01 [m/s] increase of the mean velocity error. The maximum velocity error is 5.82 [m/s] for a fractional initialization moment. In this experiment we obtained a nearly linear relationship between the mean velocity error and the step size. This result confirms our numerical study from Section III.A. The classical method for determining ∆t_k based on the vehicle speed, (35), used a lower computational load but obtained a higher mean velocity error: 0.037 [m/s] with 4,500 iterations. The changes of ∆t* over time according to the ML classifier and the classical approach (35) are plotted in Figure 12.
Fig. 10. DJI Matrice 300 in a field experiment.
V. CONCLUSIONS
A proper choice of the step size is important for implementing a velocity aided INS. The step size depends on various navigation parameters. In real life, these parameters and their behavior in the dynamic environment are only partly known. A novel ML-based scheme to adaptively tune an appropriate step size, together with the es-EKF implementation, was proposed. According to this scheme, the designer should set an averaged
Fig. 11. Field experiment: quadrotor GT trajectory and its estimated counterpart.
TABLE VIII
EXPERIMENT PARAMETERS

Description                          Symbol               Value
GNSS vel. noise (var) - horizontal   R11, R22             0.003 [m/s]^2
GNSS vel. noise (var) - vertical     R33                  0.005 [m/s]^2
GNSS step size                       ∆τ                   0.1 [s]
Accelerometer noise (MPU-9250)       Q*11, Q*22, Q*33     300 [µg/√Hz]
Gyroscope rate noise (MPU-9250)      Q*44, Q*55, Q*66     0.01 [dps/√Hz]
Experiment duration                  T                    35 [s]
Initial velocity                     v^n_0                [0, 0, 0]^T [m/s]
Initial position                     p^n_0                [32.10, 34.80, 0]^T
Accelerometer bias noise             Q*7, Q*8, Q*9        [1, 1, 1] [m/s^2]^2
Gyroscope bias noise                 Q*10, Q*11, Q*12     [1, 1, 1] [rad/s]^2
Velocity threshold                   v_Thresh             7 [m/s]
TABLE IX
ERRORS AS A FUNCTION OF STEP SIZE AND THE RESULTING NUMBER OF ITERATIONS

∆t_k [s]           Mean δv^n [m/s]   Max δv^n [m/s]   Iterations
Adaptive (ours)    0.0217            6.25             9,900
0.002              0.0111            6.25             18,000
0.02               0.1280            5.82             1,800
∆t(‖v^n‖_2)        0.037             5.82             4,500
Fig. 12. INS/GNSS experiment with the sub-optimal step size, ∆t*, based on the ML classifier and the classical ∆t(‖v^n‖_2) rule, as a function of time for a duration of 35 seconds.
velocity error bound, where the ML classifier predicts the sub-optimal step size to provide the navigation solution without exceeding this bound in real-time scenarios. Extensive simulations and a field experiment demonstrated the efficiency of this
methodology in commonly used vehicle tracking problems. The proposed scheme minimizes the computational load with minimum influence on the velocity estimation error. The scheme was validated using two simulations and a field experiment. There, the relationship between the velocity RMSE and the IMU step size was presented. In the INS/DVL and INS/GNSS simulations and in the field experiment, we measured the velocity only and found that, by applying the suggested scheme, a lower number of iterations can be used, thereby minimizing the computational load, while maintaining a sufficient velocity RMSE. In this work, for demonstration purposes, only two different step sizes were examined, yet the proposed approach can easily be extended to more step size values. In addition, this work focused on velocity aided INS; however, the proposed approach can be used with any other aiding sensors and on any other platform.
ACKNOWLEDGEMENT
BO was supported by The Maurice Hatter Foundation.
APPENDIX
A. INS Equations of Motion
The INS equations of motion include the rate of change of the position, the velocity, and the transformation between the navigation and body frames, as summarized below.
The position vector is given by
p^n = [φ  λ  h]^T ∈ R^(3×1),   (36)
where φ is the latitude, λ is the longitude, and h is the altitude. The velocity vector is Earth referenced and expressed in the North-East-Down (NED) coordinate system:
v^n = [v_N  v_E  v_D]^T ∈ R^(3×1),   (37)
where v_N, v_E, v_D denote the velocity vector components in the north, east, and down directions, respectively. The rate of change of the position is given by [1]
ṗ^n = [φ̇  λ̇  ḣ]^T = [ v_N / (R_M + h),   v_E / (cos(φ)(R_N + h)),   −v_D ]^T,   (38)
where R_M and R_N are the meridian radius and the normal radius of curvature, respectively. The rate of change of the velocity vector is given by [1]
v̇^n = T^n_b f^b + g^n − ([ω^n_en ×] + 2[ω^n_ie ×]) v^n,   (39)
where T^n_b ∈ R^(3×3) is the transformation matrix from the body frame to the navigation frame, f^b ∈ R^(3×1) is the accelerometer vector expressed in the body frame, and g^n ∈ R^(3×1) is the gravity vector expressed in the navigation frame. ω^n_en is the angular velocity vector between the earth-centered earth-fixed (ECEF) frame and the navigation frame. The angular velocity vector between the ECEF frame and the inertial frame is given by ω^n_ie, and the rate of change of the transformation matrix is given by [1]
Ṫ^n_b = T^n_b ([ω^b_ib ×] − [ω^b_in ×]),   (40)
where ω^b_ib = [p  q  r]^T ∈ R^(3×1) is the angular velocity vector as obtained by the gyroscopes and ω^b_in is the angular velocity vector between the navigation frame and the inertial frame expressed in the body frame. The angular velocity between the navigation frame and the inertial frame expressed in the navigation frame is given by ω^n_in. The alignment between the body frame and the navigation frame can be obtained from T^n_b as follows:
ε = [ϕ  θ  ψ]^T = [ atan2(T^b_n,31, T^b_n,32),   arccos(T^b_n,33),   −atan2(T^b_n,13, T^b_n,23) ]^T ∈ R^(3×1),   (41)
where ϕ is the roll angle, θ is the pitch angle, and ψ is the yaw angle. These three angles are called Euler angles.
The system matrix, F, is given by
F = [ F_vv     F_vε     T^n_b    0_(3×3) ;
      F_εv     F_εε     0_(3×3)  T^n_b   ;
      0_(3×3)  0_(3×3)  0_(3×3)  0_(3×3) ;
      0_(3×3)  0_(3×3)  0_(3×3)  0_(3×3) ],   (42)
where T^n_b is calculated by (18), and F_ij ∈ R^(3×3) can be found explicitly in the classical literature (see [1], [20], [34]).
The dynamic matrix terms F_ij are provided below:
F_εv = [ 0              −1/(R_N + ĥ)        0 ;
         1/(R_M + ĥ)     0                  0 ;
         0               tan(φ̂)/(R_N + ĥ)   0 ],   (43)

F_vε = [ 0      f_D    −f_E ;
        −f_D    0       f_N ;
         f_E   −f_N     0   ],   (44)
where the matrix terms are the specific forces in the navigation frame,

F_εε = [ 0      ω_D    −ω_E ;
        −ω_D    0       ω_N ;
         ω_E   −ω_N     0   ],   (45)
where
[ω_N  ω_E  ω_D]^T = [ (λ̂̇ + ω_ie) cos(φ̂),   −φ̂̇,   −(λ̂̇ + ω_ie) sin(φ̂) ]^T.   (46)
The F_vv matrix columns are given as follows:
F_vv^(1) = [ v̂_D/R_e,   −ω_D − ω_ie sin(φ̂),   2 v̂_N/R_e ]^T,
F_vv^(2) = [ 2 ω_D,   v̂_D/R_e + (v̂_N/R_e) tan(φ̂),   −2 ω_N ]^T,
F_vv^(3) = [ −v̂_N/R_e,   ω_N + ω_ie cos(φ̂),   0 ]^T.   (47)
The shaping matrix is given explicitly by
G = [ T^n_b    0_(3×3)  0_(3×3)  0_(3×3) ;
      0_(3×3)  T^n_b    0_(3×3)  0_(3×3) ;
      0_(3×3)  0_(3×3)  I_(3×3)  0_(3×3) ;
      0_(3×3)  0_(3×3)  0_(3×3)  I_(3×3) ].   (48)
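A short sketch assembling the block matrices (42) and (48) from the 3x3 sub-blocks of (43)-(47); the block names follow the symbols above and the inputs are assumed to be NumPy arrays:

```python
import numpy as np

def build_F_G(T_nb, F_vv, F_ve, F_ev, F_ee):
    """System matrix F of (42) and shaping matrix G of (48)."""
    Z, I = np.zeros((3, 3)), np.eye(3)
    F = np.block([[F_vv, F_ve, T_nb, Z],
                  [F_ev, F_ee, Z,    T_nb],
                  [Z,    Z,    Z,    Z],
                  [Z,    Z,    Z,    Z]])
    G = np.block([[T_nb, Z,    Z, Z],
                  [Z,    T_nb, Z, Z],
                  [Z,    Z,    I, Z],
                  [Z,    Z,    Z, I]])
    return F, G          # both 12x12
```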
B. Evaluation Criteria
1) AuC: Binary decision problems are commonly evaluated using the receiver operating characteristic (ROC) curve. One of the large advantages of the ROC is its ability to represent the accuracy on the test data. The ROC is a plot of the true positive rate (sensitivity) versus the false positive rate (one minus the specificity). These two rates are defined as
TPR = TP / (TP + FN)   (49)
and
FPR = FP / (FP + TN),   (50)
where TPR is the true positive rate (sensitivity) and FPR is the false positive rate. P is the number of positive values, N is the number of negative values, TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives (type one error), and FN is the number of false negatives (type two error). The AuC, area under the curve, is a measure of the two-dimensional area underneath the entire ROC curve. This measure is scale-invariant and classification-threshold invariant. Hence, it is a very useful criterion for classification performance evaluation.
2) Accuracy: The second criterion used to evaluate the proposed models is the accuracy measure, given by
ACC = (TP + TN) / (TP + TN + FP + FN).   (51)
The accuracy is used as a measure of how well a binary classification test correctly identifies a condition. It compares estimates of pre- and post-test probability. This is the ratio of the number of correctly classified examples to the total number of examples.
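The three criteria (49)-(51) follow directly from the confusion-matrix counts; the counts in the usage line below are hypothetical, chosen only to match the operating point reported in Section IV.A:

```python
def binary_metrics(tp, tn, fp, fn):
    """TPR (49), FPR (50), and accuracy (51) from confusion-matrix counts."""
    tpr = tp / (tp + fn)                      # sensitivity
    fpr = fp / (fp + tn)                      # false positive rate
    acc = (tp + tn) / (tp + tn + fp + fn)     # accuracy
    return tpr, fpr, acc

print(binary_metrics(tp=87, tn=97, fp=3, fn=13))   # (0.87, 0.03, 0.92)
```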
C. Bi- vs. Multi-Class Classification and Regression Formalization
The major benefit of defining the problem as a bi-classification predictor is that we minimize the amount of "chattering" between many step size values (which might lead to an unstable filter). Also, using two values, one large (0.04 [s]) and the other small (0.002 [s]), clearly presents the computational effort reduction. A confusion matrix for 10 step size classes is presented here (Figure 13) to demonstrate the lower robustness of considering too many classes for this task. This is part of our initial analysis and is not included in the main body of the paper.
Also, we formulated this problem as a regression task, where, unfortunately, the obtained trained models result in a high RMSE relative to the step size (linear regression RMSE: 0.025, tree RMSE: 0.016).
Fig. 13. Quadratic SVM classifier for 10 classes of ∆t.
REFERENCES
[1] J. Farrell, Aided navigation: GPS with high rate sensors. McGraw-Hill,
Inc., 2008, pp: 393-395.
[2] H. Taheri and Z. C. Xia, “SLAM; definition and evolution,” Engineering
Applications of Artificial Intelligence, vol. 97, p. 104032, 2021.
[3] L. Paull, S. Saeedi, M. Seto, and H. Li, “Auv navigation and localization:
A review,” IEEE Journal of oceanic engineering, vol. 39, no. 1, pp. 131–
149, 2013.
[4] Z. Huang, D. Zhu, and B. Sun, “A multi-auv cooperative hunting method
in 3-d underwater environment with obstacle,” Engineering Applications
of Artificial Intelligence, vol. 50, pp. 192–200, 2016.
[5] P. A. Miller, J. A. Farrell, Y. Zhao, and V. Djapic, “Autonomous
underwater vehicle navigation,” IEEE Journal of Oceanic Engineering,
vol. 35, no. 3, pp. 663–678, 2010.
[6] N. Karaboga and F. Latifoglu, “Adaptive filtering noisy transcranial
doppler signal by using artificial bee colony algorithm,” Engineering
Applications of Artificial Intelligence, vol. 26, no. 2, pp. 677–684, 2013.
[7] I. Klein and R. Diamant, “Observability analysis of heading aided INS
for a maneuvering AUV,” NAVIGATION, Journal of the Institute of
Navigation, vol. 65, no. 1, pp. 73–82, 2018.
[8] A. Tal, I. Klein, and R. Katz, “Inertial navigation system/doppler velocity
log (INS/DVL) fusion with partial dvl measurements,” Sensors, vol. 17,
no. 2, p. 415, 2017.
[9] O. Elhaki and K. Shojaei, “A robust neural network approximation-based
prescribed performance output-feedback controller for autonomous un-
derwater vehicles with actuators saturation,” Engineering Applications
of Artificial Intelligence, vol. 88, p. 103382, 2020.
[10] R. H. Rogne, T. H. Bryne, T. I. Fossen, and T. A. Johansen, “On the
usage of low-cost MEMS sensors, strapdown inertial navigation, and
nonlinear estimation techniques in dynamic positioning,” IEEE Journal
of Oceanic Engineering, 2020.
[11] J. Vasconcelos, C. Silvestre, and P. Oliveira, “INS/GPS aided by fre-
quency contents of vector observations with application to autonomous
surface crafts,” IEEE Journal of Oceanic Engineering, vol. 36, no. 2,
pp. 347–363, 2011.
[12] B. Or, B.-Z. Bobrovsky, and I. Klein, “Kalman filtering with adaptive
step size using a covariance-based criterion,” IEEE Transactions on
Instrumentation and Measurement, vol. 70, pp. 1–10, 2021.
[13] S. Shats, B. Bobrovsky, and U. Shaked, “Discrete-time state estimation
of analog double integrators,” IEEE transactions on aerospace and
electronic systems, vol. 24, no. 6, pp. 670–677, 1988.
[14] G. Y. Kulikov and M. V. Kulikova, “Accurate numerical implementation
of the continuous-discrete extended kalman filter,” IEEE Transactions on
Automatic Control, vol. 59, no. 1, pp. 273–279, 2014.
[15] G. M. Dias, M. Nurchis, and B. Bellalta, “Adapting sampling interval
of sensor networks using on-line reinforcement learning,” in 2016 IEEE
3rd World Forum on Internet of Things (WF-IoT). IEEE, 2016, pp.
460–465.
[16] Y. Liu and Z. Sun, “EKF-based adaptive sensor scheduling for target
tracking,” in 2008 International Symposium on Information Science and
Engineering, vol. 2. IEEE, 2008, pp. 171–174.
[17] Y. E. Hamouda and C. Phillips, “Metadata-based adaptive sampling
for energy-efficient collaborative target tracking in wireless sensor
networks,” in 2010 10th IEEE International Conference on Computer
and Information Technology. IEEE, 2010, pp. 313–320.
[18] R. E. Kalman and R. S. Bucy, “New results in linear filtering and
prediction theory,” Journal of basic engineering, vol. 83, no. 1, pp. 95–
108, 1961.
[19] A. H. Jazwinski, Stochastic processes and filtering theory. Courier
Corporation, 2007.
[20] R. G. Brown and P. Y. Hwang, Introduction to random signals and
applied Kalman filtering. Wiley New York, Third Edtion, 1992.
[21] I. Goodfellow, Y. Bengio, and A. Courville, Deep learning. MIT press,
2016.
[22] Y. Guo, Y. Liu, A. Oerlemans, S. Lao, S. Wu, and M. S. Lew, “Deep
learning for visual understanding: A review,” Neurocomputing, vol. 187,
pp. 27–48, 2016.
[23] E. Cambria and B. White, “Jumping nlp curves: A review of natural lan-
guage processing research,” IEEE Computational intelligence magazine,
vol. 9, no. 2, pp. 48–57, 2014.
[24] H. Yan, S. Herath, and Y. Furukawa, “RoNIN: Robust neural inertial
navigation in the wild: Benchmark, evaluations, and new methods,”
arXiv preprint arXiv:1905.12853, 2019.
[25] H. Yan, Q. Shan, and Y. Furukawa, “RIDI: Robust imu double integra-
tion,” in Proceedings of the European Conference on Computer Vision
(ECCV), 2018, pp. 621–636.
[26] I. Klein and O. Asraf, “StepNet—Deep learning approaches for step
length estimation," IEEE Access, vol. 8, pp. 85706–85713, 2020.
[27] O. Asraf, F. Shama, and I. Klein, “PDRNet: A deep-learning pedestrian
dead reckoning framework,” IEEE Sensors Journal, 2021.
[28] W. Liu, D. Caruso, E. Ilg, J. Dong, A. I. Mourikis, K. Daniilidis,
V. Kumar, and J. Engel, “TLIO: Tight learned inertial odometry,” IEEE
Robotics and Automation Letters, vol. 5, no. 4, pp. 5653–5660, 2020.
[29] M. Brossard, S. Bonnabel, and A. Barrau, “Denoising IMU gyroscopes
with deep learning for open-loop attitude estimation,” IEEE Robotics
and Automation Letters, vol. 5, no. 3, pp. 4796–4803, 2020.
[30] D. Weber, C. Gühmann, and T. Seel, "Neural networks versus conven-
tional filters for inertial-sensor-based attitude estimation,” arXiv preprint
arXiv:2005.06897, 2020.
[31] E. Vertzberger and I. Klein, “Attitude adaptive estimation with smart-
phone classification for pedestrian navigation,” IEEE Sensors Journal,
vol. 21, no. 7, pp. 9341–9348, 2021.
[32] I. Zak, R. Katz, and I. Klein, “MLCA—A machine learning framework
for ins coarse alignment,” Sensors, vol. 20, no. 23, p. 6959, 2020.
[33] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with applica-
tions to tracking and navigation: theory algorithms and software. John
Wiley & Sons, 2004.
[34] P. D. Groves, “Principles of GNSS, inertial, and multisensor integrated
navigation systems, [book review],” IEEE Aerospace and Electronic
Systems Magazine, vol. 30, no. 2, pp. 26–27, 2015.
[35] Y. Rothman, I. Klein, and S. Filin, “Analytical observability analysis of
INS with vehicle constraints,” NAVIGATION: Journal of the Institute of
Navigation, vol. 61, no. 3, pp. 227–236, 2014.
[36] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” nature, vol. 521,
no. 7553, pp. 436–444, 2015.
[37] C. Cortes and V. Vapnik, “Support-vector networks,” Machine learning,
vol. 20, no. 3, pp. 273–297, 1995.
[38] H. Peng, F. Long, and C. Ding, “Feature selection based on mu-
tual information criteria of max-dependency, max-relevance, and min-
redundancy,” IEEE Transactions on pattern analysis and machine intel-
ligence, vol. 27, no. 8, pp. 1226–1238, 2005.
Barak Or (Member, IEEE) received a B.Sc. degree
in aerospace engineering from the Technion–Israel
Institute of Technology, Haifa, Israel, a B.A. degree
(cum laude) in economics and management, and
an M.Sc. degree in aerospace engineering from the
Technion–Israel Institute of Technology in 2016 and
2018. He is currently pursuing a Ph.D. degree with
the University of Haifa, Haifa.
His research interests include navigation, deep learn-
ing, sensor fusion, and estimation theory.
Itzik Klein (Senior Member IEEE) received B.Sc.
and M.Sc. degrees in Aerospace Engineering from
the Technion-Israel Institute of Technology, Haifa,
Israel in 2004 and 2007, and a Ph.D. degree in Geo-
information Engineering from the Technion-Israel
Institute of Technology, in 2011. He is currently
an Assistant Professor, heading the Autonomous
Navigation and Sensor Fusion Lab, at the Hatter
Department of Marine Technologies, University of
Haifa. His research interests include data driven
based navigation, novel inertial navigation architec-
tures, autonomous underwater vehicles, sensor fusion, and estimation theory.