Copyright © 2020 IEEE. This is the accepted version of:
Koller, Tom L., and Udo Frese. "The Interacting Multiple Model Filter on Boxplus-Manifolds." 2020 International Conference on Multisensor Fusion and Integration (MFI). IEEE, 2020.
September 17, 2020
The Interacting Multiple Model Filter on Boxplus-Manifolds*
Tom L. Koller¹ and Udo Frese¹
Abstract— The interacting multiple model filter is the standard in state estimation when different dynamic models are required to model the behavior of a system. It performs a probabilistic mixing of estimates. Up to now, it has been undefined how to perform this mixing properly on manifold spaces, e.g. quaternions. We present the proper probabilistic mixing on differentiable manifolds based on the boxplus-method. The result is the interacting multiple model filter on boxplus-manifolds. We prove that our approach is a first order correct approximation of the optimum. The approach is evaluated in a simulation and performs as well as the ad-hoc solution for quaternions. A generic implementation of the boxplus interacting multiple model filter for differentiable manifolds is published alongside this paper.
I. INTRODUCTION
The interacting multiple model filter (IMM) is widely used in the field of target tracking. After its original invention [1] for radar-based aircraft tracking [2], [3], [4], it has been used in various applications such as attitude estimation [5], lane change prediction [6] and sensor fault detection [7].
The IMM is applied when a single dynamical model does not predict the behavior of the system accurately [8]. This is the case when the system dynamics depend on modes, i.e. discrete states that change abruptly. The original IMM runs one Kalman Filter (KF) per mode and fuses the estimates of the filters probabilistically based on the likelihood of the models. Up to now, several adaptations of the IMM have been published to use different nonlinear filters such as the Extended Kalman Filter (EKF) [8], the Unscented Kalman Filter (UKF) [4] or the Particle Filter (PF) [9].
Typically, these nonlinear filters operate on vector spaces (ℝ^N). Thus, it is difficult to maintain manifold structures such as the rotation quaternion in the filter. As it is often required to estimate orientations in target tracking, various extensions [10], [11], [12] have been developed to use rotation quaternions or matrices without destroying their manifold properties, e.g. unit norm or orthogonality. To our knowledge, no such extension exists for the IMM.
For quaternions, there is the approach of simply normalizing the quaternion after mixing [5]. However, such an approach usually degrades the performance of the estimator due to erroneous covariance matrices. Besides that approach, publications avoid using quaternions in the state by using 2D rather than 3D [6] or by using models that work in world coordinates only, as is common in aircraft tracking [2]. In [13] the so-called delta-quaternion is used in the state, while the quaternion orientation is predicted outside of the IMM.
We see a gap in the literature on how to handle manifolds in the IMM. Thus, we are motivated to develop an IMM which can handle manifolds properly.

*DFG-Funding: ZaVI FR 2620/3-1
¹Multi-Sensor Interactive Systems Group, University of Bremen, 28359 Bremen, Germany {tkoller,ufrese}@uni-bremen.de
The core difficulty of an IMM on manifolds is the probabilistic mixing of states. In the IMM, the estimates of all filters are mixed in a weighted sum. Unfortunately, the operator + breaks the manifold structure, so a sum cannot be computed [10]. Furthermore, the mixing degrades the covariance, which may result in inconsistency.
To overcome this problem for single mode filtering on quaternions, the multiplicative EKF (MEKF) [11] and the error state KF (ESKF) [12] were developed. Both methods update the quaternion estimate by quaternion multiplication only, which sustains the manifold structure in contrast to addition. The boxplus-method (⊞-method) of Hertzberg et al. [10] generalizes this concept for manifolds. It only allows changes to the manifold which do not break its structure. It has gained attention in pose tracking in recent years since it is a general approach to handle manifolds in nonlinear filtering [14], [15] and least squares optimization [16]. The ⊞-method encapsulates manifolds as black boxes, so that algorithms can handle them generically. Furthermore, it provides the necessary definitions to calculate a weighted sum of Gaussians over manifolds as required for the IMM on manifolds.
The contribution of this paper is the theoretic derivation of an IMM which uses the ⊞-method to properly handle the mixing of states and covariances in the manifold case. The proposed method is proved to perform a first order correct probabilistic mixing of Gaussians. The solution is a generalization of averaging rotations [17], since it applies to arbitrary ⊞-manifolds. The focus of this work is the theoretic extension of the ⊞-method. The method is extended to hybrid estimation, which underlines the character of the ⊞-method as a general solution to handle manifolds in state estimation.
The remainder of the paper is structured as follows: The theoretic foundation of probabilistic mixing on ⊞-manifolds is shown in Section II. The IMM on ⊞-manifolds is presented in Section III. In Section IV we give an example for the new algorithm and analyze its performance compared to a naive solution. At the end, we conclude and discuss future work.
II. WEIGHTED SUM OF GAUSSIANS ON ⊞-MANIFOLDS
Usually, the representations of manifolds S are overparametrizations, i.e., they are represented with more parameters than they have degrees of freedom (DOF). The key idea of the ⊞-method is to allow changes of the manifolds only in the direction of the DOFs [10]. This direction is called the tangent space V ⊂ ℝ^DOF. Small changes to the manifold instance can be expressed in the tangent space and are applied with the operator ⊞ : S × V → S. The ⊞-operator enforces the manifold structure.
The difference of two manifold instances is also expressed in the tangent space and can be derived by the complementary boxminus-operator ⊟ : S × S → V. The ⊟-operator calculates the geodesic between two manifold instances, i.e. the shortest path between them on the manifold. The quadruplet {S, ⊞, ⊟, V} is called a ⊞-manifold. The operators for commonly used manifolds can be found in [10].
The ⊞-method allows compounding a state of multiple manifolds and vectors. The ⊞/⊟-operators of a compound manifold x simply apply the operators elementwise:

x = {x_1, ..., x_n}    (1)
x ⊞ δ = {x_1 ⊞ δ_1, ..., x_n ⊞ δ_n}    (2)
y ⊟ x = {y_1 ⊟ x_1, ..., y_n ⊟ x_n}    (3)

For vectors, the operators naturally reduce to +/−. Thus, the method allows compounding states of manifolds and vectors seamlessly, which is essential for target tracking applications.
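As an illustration of this ⊞/⊟-interface, the following C++/Eigen sketch shows one common choice of the operators for the unit-quaternion manifold. The helper names expQ, logQ, boxplus and boxminus are ours for illustration only and are not taken from the published library.

```cpp
#include <Eigen/Dense>

// Exponential map: rotation vector (angle * axis) -> unit quaternion.
Eigen::Quaterniond expQ(const Eigen::Vector3d& delta) {
  const double angle = delta.norm();
  if (angle < 1e-12) return Eigen::Quaterniond::Identity();
  return Eigen::Quaterniond(Eigen::AngleAxisd(angle, delta / angle));
}

// Logarithm map: unit quaternion -> rotation vector (shortest rotation).
Eigen::Vector3d logQ(const Eigen::Quaterniond& q) {
  const Eigen::AngleAxisd aa(q);  // Eigen returns the representation with angle in [0, pi]
  return aa.angle() * aa.axis();
}

// boxplus: apply a small tangent-space change to the quaternion.
Eigen::Quaterniond boxplus(const Eigen::Quaterniond& q, const Eigen::Vector3d& delta) {
  return (q * expQ(delta)).normalized();
}

// boxminus: tangent-space difference between two quaternions (geodesic of y relative to x).
Eigen::Vector3d boxminus(const Eigen::Quaterniond& y, const Eigen::Quaterniond& x) {
  return logQ(x.conjugate() * y);
}
```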
Hertzberg et al. already stated that it is not possible to compute a weighted sum of manifolds with the classic definition [10]. Instead, they derived the implicit definition of the weighted sum using the expected value:

E(X ⊟ X̄) = 0    (4)

where E(···) computes the expected value, X ⊆ S is the set of all weighted manifold instances and X̄ is the expected value of X, i.e. the weighted sum.
In the IMM, Gaussian distributions are mixed instead of simple instances. Thus, X is a mixture of Gaussians X_j = N(x̄_j, P_j) with mean x̄_j and covariance P_j. For manifolds the Gaussian is defined as:

N(µ, P) := µ ⊞ N(0, P)    (5)

where P ∈ ℝ^(DOF×DOF).
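To make definition (5) concrete, a sample from such a manifold Gaussian can be drawn by sampling the tangent space and applying ⊞. The following minimal sketch does this for the quaternion manifold, reusing the boxplus helper from the listing above; the function name sampleGaussian is illustrative.

```cpp
#include <random>

// Draw a sample from the manifold Gaussian N(mu, P) = mu ⊞ N(0, P) of (5)
// for the unit quaternion.
Eigen::Quaterniond sampleGaussian(const Eigen::Quaterniond& mu,
                                  const Eigen::Matrix3d& P,
                                  std::mt19937& rng) {
  std::normal_distribution<double> n01(0.0, 1.0);
  const Eigen::Vector3d n(n01(rng), n01(rng), n01(rng));  // n ~ N(0, I) in the tangent space
  const Eigen::Matrix3d L = P.llt().matrixL();            // Cholesky factor, P = L * L^T
  return boxplus(mu, L * n);                              // mu ⊞ (L n) is distributed as N(mu, P)
}
```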
In the manifold case, the weighted sum x̄ of the mean values of the Gaussians is not guaranteed to be the weighted sum X̄ of the complete distribution. However, we prove that it is at least a first order correct approximation of X̄.
Theorem 1: Let X be a set of M Gaussian distributions X_j with probabilities p(X_j). Then, the weighted sum x̄ of their mean values x̄_j is a first order correct approximation of the weighted sum X̄ of all elements in X.
Proof: We have to show that

E(X ⊟ x̄) ≈ E(X ⊟ X̄) = 0    (6)

E(X ⊟ x̄) = ∫_X p(x)·(x ⊟ x̄) dx,  x ∈ X    (7)
         = Σ_{j=1}^{M} ∫_{X_j} p(x_j)·(x_j ⊟ x̄) dx_j,  x_j ∈ X_j    (8)

∫_{X_j} p(x_j) dx_j = p(X_j)    (9)

Each element x_j can be expressed relative to the mean of the respective Gaussian using the axioms of ⊞-manifolds [10]:

x_j = x̄_j ⊞ δ_j,  δ_j = x_j ⊟ x̄_j    (10)

E(X ⊟ x̄) = Σ_{j=1}^{M} ∫_{X_j} p(x_j)·((x̄_j ⊞ δ_j) ⊟ x̄) dx_j    (11)

We approximate the ⊟-operator with a first order Taylor series around δ_j = 0⃗:

E(X ⊟ x̄) ≈ Σ_{j=1}^{M} ∫_{X_j} p(x_j)·((x̄_j ⊞ 0⃗) ⊟ x̄ + J_j (δ_j − 0⃗)) dx_j    (12)

J_j = ∂((x̄_j ⊞ δ_j) ⊟ x̄) / ∂δ_j |_{δ_j = 0⃗}    (13)

Using (9) we can split into:

E(X ⊟ x̄) ≈ Σ_{j=1}^{M} ( p(X_j)(x̄_j ⊟ x̄) + J_j ∫_{X_j} p(x_j) δ_j dx_j )    (14)

The first summand is the expected value E(x̄_j ⊟ x̄), which is 0 by definition of x̄.

E(X ⊟ x̄) ≈ Σ_{j=1}^{M} J_j ∫_{X_j} p(x_j) δ_j dx_j    (15)

Using the definition of x̄_j:

E(X_j ⊟ x̄_j) = 0 = ∫_{X_j} p(x_j)(x_j ⊟ x̄_j) dx_j    (16)
                 = ∫_{X_j} p(x_j) δ_j dx_j    (17)

We can reduce to:

E(X ⊟ x̄) ≈ 0    (18)

Thus, the approximation is first order correct.
In the IMM, the mixed distribution is approximated with a Gaussian. Since the covariance of manifolds is expressed in the tangent space, the covariances cannot be summed up as in the original IMM. Instead, we use a first order propagation of the covariances.
The standard definition of covariance can be adapted to ⊞-manifolds [10]. We express the covariance P with respect to x̄ since it is a first order correct approximation of the real mean of the weighted sum:

P = E([x ⊟ x̄]⊗) = ∫_X p(x) [x ⊟ x̄]⊗ dx    (19)

where [·]⊗ is a shorthand notation for the outer product of a vector with itself, i.e.:

[v]⊗ = v ⊗ v = v vᵀ    (20)
Again, each element of X is expressed with respect to the mean of the corresponding Gaussian using (10):

P = Σ_{j=1}^{M} ∫_{X_j} p(x_j) [(x̄_j ⊞ δ_j) ⊟ x̄]⊗ dx_j    (21)

And linearise around δ_j = 0⃗ with J_j as in (13):

P ≈ Σ_{j=1}^{M} ∫_{X_j} p(x_j) [(x̄_j ⊞ 0⃗) ⊟ x̄ + J_j δ_j]⊗ dx_j    (22)
  = Σ_{j=1}^{M} ∫_{X_j} p(x_j) [x̄_j ⊟ x̄ + J_j δ_j]⊗ dx_j    (23)

By expansion we get:

P ≈ Σ_{j=1}^{M} ∫_{X_j} p(x_j) ( [x̄_j ⊟ x̄]⊗ + [J_j δ_j]⊗ + (x̄_j ⊟ x̄)(J_j δ_j)ᵀ + (J_j δ_j)(x̄_j ⊟ x̄)ᵀ ) dx_j    (24)
  = Σ_{j=1}^{M} ∫_{X_j} p(x_j) ( [x̄_j ⊟ x̄]⊗ + J_j δ_j δ_jᵀ J_jᵀ + (x̄_j ⊟ x̄) δ_jᵀ J_jᵀ + (J_j δ_j)(x̄_j ⊟ x̄)ᵀ ) dx_j    (25)

Which we can further reduce by rearranging the sums to:

P ≈ Σ_{j=1}^{M} ( p(X_j) [x̄_j ⊟ x̄]⊗ + p(X_j) J_j P_j J_jᵀ + (x̄_j ⊟ x̄) ∫_{X_j} p(x_j) δ_jᵀ dx_j · J_jᵀ + J_j ∫_{X_j} p(x_j) δ_j dx_j · (x̄_j ⊟ x̄)ᵀ )    (26)

Using (17) yields:

P ≈ Σ_{j=1}^{M} p(X_j) ( [x̄_j ⊟ x̄]⊗ + J_j P_j J_jᵀ )    (27)
Equation (27) is similar to the original equation of the IMM. The first summand expresses the spread of the means in ⊟-terms. The second summand propagates the covariances of the Gaussians to the new mean. The main difference from the original equation is the transformation of P_j with the Jacobian J_j. In the vector case, the Jacobian equals identity.
III. THE IMM ON ⊞-MANIFOLDS
The IMM runs a recursive filter, e.g. an EKF or UKF, for each mode of the system. At every time step, it performs the three steps interaction, filtering and combination [8] (see TABLE I). The interaction mixes the estimates of all filters according to their mode and transition probabilities. The filtering performs the prediction and update of each filter. It calculates the new mode probabilities based on the measurements. The details of this step depend on the chosen filter type. The combination step combines all estimates according to their mode probabilities to create the output of the IMM, which is the most likely state of the system.
TABLE I
ORIGINAL IMM [8] VS. ⊞-IMM. THE LINES MARKED [⊞] (BOXED IN THE ORIGINAL) EACH REPLACE THE PRECEDING EQUATION TO FORM THE ⊞-IMM. z(k) MAY BE A ⊞-MANIFOLD D.

State, input, process and measurement models:
x_j(k) ∈ S,  P_j(k) ∈ ℝ^(DOF×DOF),  u(k) ∈ ℝ^ν,  z(k) ∈ D    (28)
g_j : S × ℝ^ν × ℝ^n → S,  X_j(k+1) = g_j(X_j(k), u(k), ε_j(k))    (29)
ε_j(k) = N_n(0, Q_j(k)),  Q_j(k) ∈ ℝ^(n×n)    (30)
h : S → D,  z(k) = h(X(k)) ⊞ N_d(0, R_j(k)),  R_j(k) ∈ ℝ^(d×d)    (31)

Initialization: ∀j ∈ [1, M]
x_j(0) = x_0,  P_j(0) = P_0,  µ_j(0) = µ_j0    (32)

Interaction: ∀i, j ∈ [1, M]
c̄_j = Σ_{i=1}^{M} p_ij µ_i(k−1)    (33)
µ_i|j(k−1) = p_ij µ_i(k−1) / c̄_j,  p_ij are transition probabilities    (34)
x_0j(k−1) = Σ_{i=1}^{M} x_i(k−1) µ_i|j(k−1)    (35)
[⊞] x_0j(k−1) = ⊞-WeightedSum(x_j(k−1), x_i(k−1), µ_i|j(k−1))    (36)
P_0j(k−1) = Σ_{i=1}^{M} µ_i|j(k−1) ( P_i(k−1) + [x_i(k−1) − x_0j(k−1)]⊗ )    (37)
[⊞] P_0j(k−1) = ⊞-WeightedCovarianceSum(x_0j(k−1), P_i(k−1), x_i(k−1), µ_i|j(k−1))    (38)

Filtering (⊞-EKF): ∀j ∈ [1, M]
Prediction:
x̂_j(k) = g(x_0j(k−1), u(k−1), 0⃗)    (39)
F_j(k−1) = ∂g(x, u(k−1), 0⃗)/∂x |_{x = x_0j(k−1)}    (40)
U_j(k−1) = ∂g(x_0j(k−1), u(k−1), ε)/∂ε |_{ε = 0⃗}    (41)
P̂_j(k) = F_j(k−1) P_0j(k−1) F_j(k−1)ᵀ + U_j(k−1) Q_j(k−1) U_j(k−1)ᵀ    (42)
Update:
H_j(k) = ∂h(x)/∂x |_{x = x̂_j(k)}    (43)
S_j(k) = H_j(k) P̂_j(k) H_j(k)ᵀ + R_j(k)    (44)
W_j(k) = P̂_j(k) H_j(k)ᵀ S_j(k)⁻¹    (45)
r_j(k) = z(k) ⊟ h(x̂_j(k))    (46)
x_j(k) = x̂_j(k) ⊞ W_j(k) r_j(k)    (47)
P_j(k) = P̂_j(k) − W_j(k) S_j(k) W_j(k)ᵀ    (48)
Λ_j(k) = N(r_j(k); 0, S_j(k))    (49)
µ_j(k) = (1/c) Λ_j(k) c̄_j,  c is a normalization factor    (50)

Combination:
x(k) = Σ_{j=1}^{M} x_j(k) µ_j(k)    (51)
[⊞] x(k) = ⊞-WeightedSum(x(k−1), x_j(k), µ_j(k))    (52)
P(k) = Σ_{j=1}^{M} µ_j(k) ( P_j(k) + [x_j(k) − x(k)]⊗ )    (53)
[⊞] P(k) = ⊞-WeightedCovarianceSum(x(k), P_j(k), x_j(k), µ_j(k))    (54)
To use the IMM on ⊞-manifolds two changes are applied:
1) The inner filter must be able to handle ⊞-manifolds.
2) The weighted sum of Gaussians in the interaction and combination step needs to be calculated with the ⊞-equivalents.
The first change can be introduced by using the ⊞-EKF [14], [15] or ⊞-UKF [10]. The measurement likelihood can be calculated using Gaussians on ⊞-manifolds.
The second change has to be introduced with the results from Section II. Using Theorem 1, we approximate the mean of the weighted sum of Gaussians with the weighted sum of the means of the Gaussians. The weighted sum can be calculated using the iterative algorithm ⊞-WeightedSum adapted from [10]:

Input: X_0, x̄_j, p(X_j)  ∀j ∈ [1, M]    (55)

X_{k+1} = X_k ⊞ Σ_{j=1}^{M} p(X_j)(x̄_j ⊟ X_k)    (56)

x̄ = lim_{k→∞} X_k    (57)
In practice, the iteration can be stopped when the change of the calculated mean is small. The convergence speed depends greatly on the choice of the initial guess X_0. The algorithm is identical to the computation of the mean on compact Lie groups [18].
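For the quaternion case, the fixed-point iteration (55)-(57) can be written in a few lines. The sketch below reuses the boxplus/boxminus helpers from Section II and is only an illustration of the iteration, not the interface of the published library; maxIter and eps are assumed tuning parameters.

```cpp
#include <vector>

// Iterative weighted mean of unit quaternions, a sketch of ⊞-WeightedSum (55)-(57).
Eigen::Quaterniond weightedSum(Eigen::Quaterniond X,                    // initial guess X_0
                               const std::vector<Eigen::Quaterniond>& xj,
                               const std::vector<double>& p,            // weights p(X_j)
                               int maxIter = 100, double eps = 1e-9) {
  for (int k = 0; k < maxIter; ++k) {
    Eigen::Vector3d delta = Eigen::Vector3d::Zero();
    for (std::size_t j = 0; j < xj.size(); ++j)
      delta += p[j] * boxminus(xj[j], X);   // sum_j p(X_j) * (x_j ⊟ X_k)
    X = boxplus(X, delta);                  // X_{k+1} = X_k ⊞ delta, eq. (56)
    if (delta.norm() < eps) break;          // stop once the mean barely changes
  }
  return X;
}
```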
With the ⊞-WeightedSum algorithm the mode estimates can be mixed without destroying the manifold structure. In the IMM, (35) and (51) have to be exchanged with (36) and (52), respectively.
Following (27), the mixed covariance can be calculated using the function ⊞-WeightedCovarianceSum:

Input: x̄, P_j, x̄_j, p(X_j)  ∀j ∈ [1, M]    (58)

J_j = ∂((x̄_j ⊞ δ) ⊟ x̄) / ∂δ |_{δ = 0⃗}    (59)

P = Σ_{j=1}^{M} p(X_j) ( [x̄_j ⊟ x̄]⊗ + J_j P_j J_jᵀ )    (60)
With the ⊞-WeightedCovarianceSum function the covariances can be mixed properly. To apply it to the IMM, (37) and (53) need to be exchanged with (38) and (54), respectively.
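A corresponding quaternion sketch of (58)-(60) is given below. For brevity, the Jacobian J_j is approximated here by central differences, whereas the published implementation uses automatic differentiation; the function and parameter names are again illustrative, and the boxplus/boxminus helpers are those from Section II.

```cpp
// Mixed covariance on the quaternion manifold, a sketch of ⊞-WeightedCovarianceSum (58)-(60).
Eigen::Matrix3d weightedCovarianceSum(const Eigen::Quaterniond& xbar,
                                      const std::vector<Eigen::Matrix3d>& Pj,
                                      const std::vector<Eigen::Quaterniond>& xj,
                                      const std::vector<double>& p,
                                      double h = 1e-6) {
  Eigen::Matrix3d P = Eigen::Matrix3d::Zero();
  for (std::size_t j = 0; j < xj.size(); ++j) {
    Eigen::Matrix3d J;                      // J_j = d((x_j ⊞ δ) ⊟ xbar)/dδ at δ = 0, eq. (59)
    for (int c = 0; c < 3; ++c) {           // numeric Jacobian, one column per tangent direction
      Eigen::Vector3d dp = Eigen::Vector3d::Zero();
      dp[c] = h;
      J.col(c) = (boxminus(boxplus(xj[j], dp), xbar)
                  - boxminus(boxplus(xj[j], -dp), xbar)) / (2.0 * h);
    }
    const Eigen::Vector3d d = boxminus(xj[j], xbar);                // x_j ⊟ x̄
    P += p[j] * (d * d.transpose() + J * Pj[j] * J.transpose());    // eq. (60)
  }
  return P;
}
```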
This results in a generic IMM that properly mixes the state estimates based on the ⊞-method: the ⊞-IMM. It does not require any ad-hoc implementation to mix the states as it only uses the ⊞/⊟-interface of the manifold.
IV. EXAMPLE APPLICATION AND PERFORMANCE DISCUSSION
We test the ⊞-IMM in a simulated environment to provide first insights into the performance of the new algorithm. We choose the following setup inspired by classic radar tracking: A drone flies across known terrain. It has a stereo camera facing downwards. With the camera it detects known landmarks in the terrain. The task is to track the position of the drone. Since the camera is mounted on the drone, its measurements are in body coordinates. Hence, it is required to estimate the orientation of the drone to make use of the measurements.
The drone has two different flight modes. In the first mode
it flies straight with a constant velocity. In the second mode it
flies a curve with a constant angular rate.
We model the dynamics of the drone with the state x:

x = (q_b^w, p⃗^w, v⃗^w, ω⃗^w)ᵀ    (61)

where q_b^w is the rotation quaternion that rotates a world frame vector to body frame, p⃗^w is the position in world frame, v⃗^w is the velocity in world frame and ω⃗^w is the angular rate in world frame. The straight dynamic is modeled as:

g_s(x(k), ε_s(k)) = ( q_b^w,  p⃗^w + (v⃗^w + ε_s)∆t,  v⃗^w + ε_s,  ω⃗^w )ᵀ    (62)

where ∆t = 0.05 s is the time difference between time k and k+1. The constant turn dynamic is modeled as in [2] with a change for the orientation and angular rate:

g_c(x(k), ε_c(k)) = ( q_b^w ∗ exp((∆t/2) ω⃗^w)⁻¹,  p⃗^w + (∆t I_{3×3} + B) v⃗^w,  (I_{3×3} + A) v⃗^w,  ω⃗^w + ε_c )ᵀ    (63)

where exp(···) forms a quaternion from the given Euler-angle-axis [10] and A, B ∈ ℝ^(3×3) as given in [2].
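As an illustration, the straight-flight model (62) translates almost literally into code. The following sketch assumes a simple struct holding the state components; the struct and field names are illustrative and do not reflect the state layout of the published library.

```cpp
// Straight-flight process model, a sketch of (62).
struct State {
  Eigen::Quaterniond q_wb;        // rotation from world to body frame
  Eigen::Vector3d p_w, v_w, w_w;  // position, velocity, angular rate in world frame
};

State predictStraight(State x, const Eigen::Vector3d& eps_s, double dt = 0.05) {
  x.p_w += (x.v_w + eps_s) * dt;  // p^w + (v^w + eps_s) * dt
  x.v_w += eps_s;                 // v^w + eps_s
  return x;                       // q_wb and w_w stay unchanged in the straight mode
}
```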
For simplification, we assume that the camera measures the position of the landmark in body coordinates. The measurement model is:

h(x) = q_b^w ∗ p⃗^w ∗ (q_b^w)⁻¹    (64)

The covariance matrices can be found in the Appendix.
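The quaternion sandwich product in (64) maps directly to Eigen's quaternion-vector product, which computes q v q⁻¹. A minimal sketch, reusing the illustrative State struct from the previous listing (the function name measure is an assumption):

```cpp
// Measurement model, a sketch of (64): rotate the world-frame position into the body frame.
Eigen::Vector3d measure(const State& x) {
  return x.q_wb * x.p_w;  // Eigen applies q * v * q^{-1}
}
```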
The simulated drone flies a trajectory of two straights and two 180° curves over four visible landmarks. It is evaluated against the Naive-IMM with naive mixing, i.e. the quaternions are averaged in parameter space and normalized afterwards. The covariances are summed up as in the original IMM. However, the inner filter is a ⊞-EKF as well, since we only want to evaluate the effect of the ⊞-mixing of states on the estimation performance. Furthermore, the algorithm is compared against a ⊞-EKF on the constant turn model to show the overall benefit of the IMM. The root mean squared error (RMSE) to the ground truth is used for comparison.
The RMSE of the ⊞-IMM and Naive-IMM are shown in Table II. The developed ⊞-IMM has the same RMSE as the Naive-IMM. Thus, the ⊞-mixing of state estimates does not improve the estimation accuracy.

TABLE II
RMSE COMPARISON FOR AIRCRAFT TRACKING

⊞-EKF      ⊞-IMM      Naive-IMM   ⊞- / Naive-IMM diff.
0.502367   0.488076   0.488084    -7.79736e-06
This result is unsatisfying since the ⊞-mixing should yield better accuracy and consistency. However, the error of the naive mixing is negligible in the presented example.
We try to quantify the error induced by naive mixing. We calculate the weighted mean of two quaternion Gaussian distributions q1, q2 for different angular differences and different probabilities (see Fig. 1).

Fig. 1. The difference of means (⊟-norm, in rad) between ⊞- and naive mixing over the angular difference of q1 and q2, with one curve per weight p(q1) from 0.50 to 0.95.
The difference is in the range of 10⁻⁴ rad for angular differences below 0.35 rad (ca. 20°), for all probabilities. Since the IMM usually operates at small differences between the models, the error of the naive mixing is negligible for the mean.
Similarly, the effect of the ⊞-mixing on the covariance is small (see Fig. 2). Hence, the two mixing methods differ only for high differences of the mixed quaternions. In the presented simulation example, the angular differences are small, which is why the mixing methods give equal results.
Fig. 2. The covariance difference (Schur norm, in rad²) between ⊞- and naive mixing over the angular difference of q1 and q2, with one curve per weight p(q1) from 0.50 to 0.95.
In general, it is unlikely that the quaternion estimates of the IMM differ greatly. The mixing is always performed after the update step. Hence, even large differences between the dynamic models are compensated by the update.
The ⊞-mixing may perform better for larger differences. However, this works against its original purpose: with an increasing difference between the quaternions, the linearization errors also increase, since the mixing is only first order correct.
To show the increasing error, ⊞- and naive mixing are compared to an optimal solution in Fig. 3 and Fig. 4. Since there is no closed-form solution to mix the Gaussians, the optimal solution is obtained numerically. Quaternions are sampled uniformly from the Gaussians to approximate the complete distribution.

Fig. 3. The error of means (⊟-norm, in rad) of ⊞- and naive mixing compared to optimal mixing over the angular difference of q1 and q2.

Fig. 4. The covariance error (Schur norm, in rad²) of ⊞- and naive mixing compared to optimal mixing over the angular difference of q1 and q2.
The mean and covariance error of the ⊞-mixing increase with the distance between the Gaussian means. The ⊞-mixing outperforms the naive mixing at higher differences between the two quaternions. Still, the approaches are almost equal for small differences.
Overall, using the ⊞-mixing with a first order approximation does not give a performance boost for the IMM on quaternions. Instead, it consumes more computational power since it requires an iterative calculation of the mean and the calculation of additional Jacobians.
The ⊞-mixing does not improve the performance of the IMM, but it enables a generic IMM on differentiable ⊞-manifolds. The method encapsulates the manifold properties of the state so that it can be treated as a black box. Therefore, the IMM can be implemented independently of the used state representation. It does not require any ad-hoc solutions to mix the states. An open source C++ implementation of the generic ⊞-IMM is provided at https://github.com/TomLKoller/Boxplus-IMM. It uses automatic differentiation [19] to calculate all required Jacobians for state mixing and for the internal ⊞-EKF [20]. It simplifies the use of the IMM immensely, as the developer can focus on tuning the dynamic models without taking care of Jacobians or the mixing step. The repository also contains the presented simulation example.
V. CONCLUSION
The first order correct IMM on ⊞-manifolds has been derived. Proofs are provided of how the weighted mean and covariance of mixtures of Gaussians on ⊞-manifolds are calculated. With these, the ⊞-method is applied to the IMM.
The ⊞-IMM has been evaluated on a simulated aircraft tracking example. The evaluation of the algorithm shows that the accuracy of the IMM is not improved compared to a naive approach of mixing quaternions. Thus, it is shown that mixing in the parameter space of quaternions, followed by a normalization, is a simple but suitable way to handle the quaternion manifold structure in the IMM.
The ⊞-IMM still has high theoretical value as it can be directly derived from the basic definitions of the expected value and the covariance on ⊞-manifolds. Thus, it is a justified algorithm instead of an ad-hoc solution.
This paper extended the family of ⊞-algorithms to the IMM. The presented IMM is fully generic, since the ⊞-method encapsulates the manifold properties and separates them from the algorithm. No further ad-hoc implementations are required to perform the mixing, regardless of the state. Therefore, the ⊞-IMM enables the implementation of a generic IMM library that can handle ⊞-manifold states. A first prototype is published alongside this paper.
The presented method is only first order correct. Thus, one may develop higher order methods or use a UKF-style method to mix the Gaussians. Presumably, this will not reduce the error, as the error compared to the numerical solution was quite low for small distances anyway. Instead, it should be investigated whether the proposed method has a visible advantage on other ⊞-manifolds such as the rotation matrix. This may be the case, since the normalization of quaternions is quite simple in comparison.
APPENDIX
Covariances of dynamic models:

Q_s(k) = diag(10, 10, 10),  Q_c(k) = diag(0.1, 0.1, 0.1)

Covariance of measurement and transition probabilities:

R(k) = diag(1, 1, 1),  p_trans = [0.95 0.05; 0.05 0.95]
REFERENCES
[1] H. A. P. Blom, “An efficient filter for abruptly changing systems”, in The 23rd IEEE Conference on Decision and Control, Las Vegas, Nevada, USA, Dec. 1984, pp. 656–658, doi: 10.1109/CDC.1984.272089.
[2] Ronghui Zhan and Jianwei Wan, “Passive maneuvering target tracking
using 3D constant-turn model”, in 2006 IEEE Conference on Radar, Apr.
2006, p. 8, doi: 10.1109/RADAR.2006.1631832.
[3] J. D. Glass, W. D. Blair, and Y. Bar-Shalom, “IMM estimators with
unbiased mixing for tracking targets performing coordinated turns”,
in 2013 IEEE Aerospace Conference, Mar. 2013, pp. 1–10, doi:
10.1109/AERO.2013.6496912.
[4] W. Zhu, W. Wang, and G. Yuan, “An Improved Interacting Multiple
Model Filtering Algorithm Based on the Cubature Kalman Filter for
Maneuvering Target Tracking”, Sensors, vol. 16, no. 6, Art. no. 6, Jun.
2016, doi: 10.3390/s16060805.
[5] C.-D. Wann and J.-H. Gao, “Orientation Estimation for Sensor Motion
Tracking Using Interacting Multiple Model Filter”, IEICE TRANSAC-
TIONS on Fundamentals of Electronics, Communications and Computer
Sciences, vol. E93-A, no. 8, pp. 1565–1568, Aug. 2010.
[6] Jiang Liu, Baigen Cai, J. Wang, and Wei Shangguan, “GNSS/INS-based
vehicle lane-change estimation using IMM and lane-level road map”, in
16th International IEEE Conference on Intelligent Transportation Systems
(ITSC 2013), Oct. 2013, pp. 148–153, doi: 10.1109/ITSC.2013.6728225.
[7] N. Sadeghzadeh-Nokhodberiz and J. Poshtan, “Distributed Interact-
ing Multiple Filters for Fault Diagnosis of Navigation Sensors in
a Robotic System”, IEEE Transactions on Systems, Man, and Cy-
bernetics: Systems, vol. 47, no. 7, pp. 1383–1393, Jul. 2017, doi:
10.1109/TSMC.2016.2598782.
[8] E. Mazor, A. Averbuch, Y. Bar-Shalom, and J. Dayan, “Interacting
multiple model methods in target tracking: a survey”, IEEE Transactions
on Aerospace and Electronic Systems, vol. 34, no. 1, pp. 103–123, Jan.
1998, doi: 10.1109/7.640267.
[9] Y. Boers and J. N. Driessen, “Interacting multiple model particle filter”, IEE Proceedings - Radar, Sonar and Navigation, vol. 150, no. 5, pp. 344–349, Oct. 2003, doi: 10.1049/ip-rsn:20030741.
[10] C. Hertzberg, R. Wagner, U. Frese, and L. Schröder, “Integrating Generic
Sensor Fusion Algorithms with Sound State Representations through
Encapsulation of Manifolds”, Information Fusion, vol. 14, no. 1, pp.
57–77, 2013.
[11] F. L. Markley, “Attitude Error Representations for Kalman Filtering”,
Journal of Guidance, Control, and Dynamics, vol. 26, no. 2, pp. 311–317,
2003, doi: 10.2514/2.5048.
[12] V. Madyastha, V. Ravindra, S. Mallikarjunan, and A. Goyal, “Extended
Kalman Filter vs. Error State Kalman Filter for Aircraft Attitude Estima-
tion”, in AIAA Guidance, Navigation, and Control Conference, American
Institute of Aeronautics and Astronautics, Jun. 2012.
[13] H. Himberg, Y. Motai, and A. Bradley, “A Multiple Model Approach
to Track Head Orientation With Delta Quaternions”, IEEE Transac-
tions on Cybernetics, vol. 43, no. 1, pp. 90–101, Feb. 2013, doi:
10.1109/TSMCB.2012.2199311.
[14] J. Clemens and K. Schill, “Extended Kalman filter with manifold state
representation for navigating a maneuverable melting probe”, in 2016 19th
International Conference on Information Fusion (FUSION), Jul. 2016, pp.
1789–1796.
[15] D. Nakath, J. Clemens, and K. Schill, “Multi-Sensor Fusion and Active
Perception for Autonomous Deep Space Navigation”, in 2018 21st In-
ternational Conference on Information Fusion (FUSION), Jul. 2018, pp.
2596–2605, doi: 10.23919/ICIF.2018.8455788.
[16] T. L. Koller and U. Frese, “State Observability through Prior Knowledge:
Analysis of the Height Map Prior for Track Cycling”, Sensors, vol. 20,
no. 9, Art. no. 9, Jan. 2020, doi: 10.3390/s20092438.
[17] R. Hartley, J. Trumpf, Y. Dai, and H. Li, “Rotation Averaging”, Inter-
national Journal of Computer Vision, vol. 103, no. 3, pp. 267–305, Jul.
2013, doi: 10.1007/s11263-012-0601-0.
[18] J. H. Manton, “A globally convergent numerical algorithm for computing
the centre of mass on compact Lie groups”, in ICARCV 2004 8th Control,
Automation, Robotics and Vision Conference, 2004., Dec. 2004, vol. 3,
pp. 2211-2216 Vol. 3, doi: 10.1109/ICARCV.2004.1469774.
[19] P. H. W. Hoffmann, “A Hitchhiker’s Guide to Automatic Differentiation”,
Numerical Algorithms, vol. 72, no. 3, pp. 775–811, Jul. 2016, doi:
10.1007/s11075-015-0067-6.
[20] L. Post and T. L. Koller, “An Automatically Differentiating Extended Kalman Filter on Boxplus Manifolds”, 2020, https://github.com/TomLKoller/ADEKF