
Efficient Nonlinear Measurement Updating based on

Gaussian Mixture Approximation of Conditional Densities

Marco F. Huber, Dietrich Brunn, and Uwe D. Hanebeck

Abstract—Filtering or measurement updating for nonlinear stochastic dynamic systems requires approximate calculations, since an exact solution is impossible to obtain in general. We propose a Gaussian mixture approximation of the conditional density, which allows performing measurement updating in closed form. The conditional density is a probabilistic representation of the nonlinear system and depends on the random variable of the measurement given the system state. Unlike the likelihood, the conditional density is independent of actual measurements, which permits determining its approximation off-line. By treating the approximation task as an optimization problem, we use progressive processing to achieve high-quality results. Once the conditional density has been calculated, the likelihood can be determined on-line, which, in turn, offers an efficient approximate filter step. As a result, a Gaussian mixture representation of the posterior density is obtained. The exponential growth of Gaussian mixture components resulting from repeated filtering is avoided implicitly by the prediction step using the proposed techniques.

I. INTRODUCTION

Fusing information that has been acquired by measurements is a common challenge in many technical applications like sensor-actuator-networks or robotics. Especially in the presence of uncertainties described by random variables, Bayesian filtering provides exact probability density determinations of the system state. In practical settings, a recursive processing of this so-called posterior density is needed. Since no exact density representation of closed form and constant complexity is available, the filtering or measurement updating problem is computationally infeasible in general.

While for linear systems with Gaussian random variables the Kalman filter provides exact solutions in an efficient manner [8], the nonlinear case requires an approximation of the true density. The well-known extended Kalman filter uses linearization to apply the Kalman filter equations to nonlinear systems [12], while the unscented Kalman filter offers increased higher-order accuracy by using a deterministic sampling approach [7]. The resulting single Gaussian density of both estimation methods is typically not sufficient for characterizing the true complex density. One possibility is using a sample representation of the density, as in particle filters [2]. Another possibility is to use generic parameterized density functions. Due to their universal approximation property [10], Gaussian mixtures are very convenient for that purpose. The spectrum of estimators using Gaussian

Marco F. Huber, Dietrich Brunn, and Uwe D. Hanebeck are with the Intelligent Sensor-Actuator-Systems Laboratory, Institute of Computer Science and Engineering, Universität Karlsruhe (TH), Germany. {marco.huber|uwe.hanebeck}@ieee.org, brunn@ira.uka.de

mixtures ranges from the Gaussian sum filter [1], which is algorithmically straightforward, up to computationally more expensive but precise methods like the one presented in [4].

Unlike the previously mentioned estimation techniques, we introduce a new efficient measurement updating approach that approximates the conditional density, which is a probabilistic representation of the nonlinear measurement equation. For approximation purposes, Gaussian mixtures are used, whose parameters are calculated by means of progressively solving an optimization problem as proposed in [6]. Because it is independent of actual measurements, off-line optimization is possible. Given the approximate conditional density and an actual measurement, the likelihood is generated on-line. Since the likelihood is also represented by a Gaussian mixture, the filter step is reduced to simple multiplications of Gaussian densities, resulting in a Gaussian mixture representation of the posterior density. To avoid an exponential growth of the number of Gaussian mixture components, we propose a simultaneous prediction and Gaussian mixture reduction. Hence, an efficient filter step with constant complexity is obtained.

In the following section, we review Bayes' law for filtering discrete-time systems and point out the relation between conditional density and likelihood. The rest of the paper is structured as follows: In Section III, the progressive processing for approximating the conditional density is explained. An example application is also investigated. Approximating the likelihood and performing the filter step on-line is the subject of Section IV, while in Section V the closed-form prediction step and Gaussian mixture reduction are derived. In Section VI, the interaction of both techniques with the efficient filter step is demonstrated and compared to the unscented Kalman filter, the particle filter, and the Bayesian estimator by means of the example application. The paper closes with conclusions and an outlook on future work.

II. PROBLEM FORMULATION

In this paper we only consider scalar random variables, denoted by boldface letters, e.g. x. Thus, we consider scalar nonlinear time-invariant systems, where scalar measurements ŷ_k at time step k are related to the scalar system state x_k by means of the measurement equation

y_k = h(x_k) + v_k ,  (1)

where the additive noise v_k is assumed to be a white stationary Gaussian random process with density f_v(v_k) = N(v_k − μ_v, σ_v), mean μ_v, and standard deviation σ_v. Note that an actual measurement ŷ_k is a realization of (1).


Given a predicted density f^p_k(x_k) for x_k, a new measurement updates the system state via the filter step or measurement update according to Bayes' law [12]

f^e_k(x_k) = c_k f^p_k(x_k) f^L_k(x_k) ,  (2)

where c_k = 1 / ∫_R f^p_k(x_k) f^L_k(x_k) dx_k is a normalization constant and f^L_k(x_k) is the so-called likelihood

f^L_k(x_k) = f(ŷ_k | x_k) = f_v(ŷ_k − h(x_k)) .

The likelihood depends on the noise density of v_k, the structure of the measurement equation, and especially on the measurement ŷ_k. Hence, the likelihood's shape changes with every new measurement.
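As an illustration of (2), the update can be carried out numerically on a grid: the posterior is the pointwise product of prior and likelihood, renormalized. The following sketch is not part of the paper's method; the measurement function, noise level, prior, and grid are illustrative assumptions (the quadratic-decay function from the later examples is reused).

```python
import math

def normal_pdf(x, mean, std):
    """Density of a scalar Gaussian N(x - mean, std)."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def bayes_filter_step(grid, prior, h, y_meas, noise_std):
    """Discretized version of (2): posterior ~ prior * likelihood, then normalize."""
    dx = grid[1] - grid[0]
    likelihood = [normal_pdf(y_meas - h(x), 0.0, noise_std) for x in grid]
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    c = sum(unnorm) * dx                      # Riemann-sum approximation of 1/c_k
    return [u / c for u in unnorm]

# Illustrative setup: prior N(x + 0.5, 1), h(x) = 1/(1 + x^2), measurement 0.6
grid = [-5 + 0.01 * i for i in range(1001)]
prior = [normal_pdf(x, -0.5, 1.0) for x in grid]
post = bayes_filter_step(grid, prior, lambda x: 1.0 / (1.0 + x * x), 0.6, 0.5)
```

By construction the returned grid values integrate to one, mirroring the role of the normalization constant c_k.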

Recursively updating the predicted density f^p_k(x_k) according to (2) is of conceptual value only, since the complex shape of the likelihood prevents a closed-form and efficient solution. Furthermore, for the case of nonlinear systems with arbitrarily distributed random variables, in general there exists no analytical density that can be updated in the filter step without changing the type of representation. To overcome these insufficiencies, an appropriate approximation of the true posterior density f̃^e_k(x_k) is inevitable. From now on true densities will be indicated by a tilde, e.g. f̃(·), while the corresponding approximation will be denoted by f(·).

The typically enormous computational effort when directly approximating f̃^e_k(x_k) at every time step can be avoided, if we translate our approach for the prediction step proposed in [6] to the filter step. In doing so, we use a Gaussian mixture representation f^L_k(x_k, η_k) of f̃^L_k(x_k) for approximation purposes, which depends on the parameter vector η_k. The calculation of an appropriate parameter vector η_k for high-quality approximations is computationally very demanding. Since the likelihood is time-variant, the demanding computations would also be necessary at every time step.

Instead, we can approximate the time-invariant conditional density f̃(y_k|x_k) = f_v(y_k − h(x_k)) by the Gaussian mixture density f(y_k, x_k, η) with parameter vector η. The conditional density can be interpreted as the aggregation of all possible likelihoods and thus is of higher dimensionality. In presence of a new measurement ŷ_k, we can easily obtain the corresponding likelihood with

f̃^L_k(x_k) = f̃(y_k|x_k)|_{y_k = ŷ_k}
⇓ Approx. ⇓
f^L_k(x_k, η_k) = f(y_k, x_k, η)|_{y_k = ŷ_k} .

Thus, the approximate likelihood f^L_k(x_k, η_k) can be determined on-line when needed by calculating its time-variant parameter vector η_k from η and ŷ_k, as shown in Section IV. The more extensive Gaussian mixture conditional density approximation depending on the parameter vector η can be solved off-line as illustrated in Fig. 1. However, Gaussian mixture approximations are considered a tough problem. In the following section an effective approximation scheme is presented.

Fig. 1. Recursive, closed-form estimation. The conditional density approximation is performed off-line, while likelihood approximation and the filter step remain on-line tasks. A closed-form prediction step depending on transition density approximation as derived in [6] completes the efficient estimator for nonlinear systems.

III. APPROXIMATION OF THE CONDITIONAL DENSITY

The key idea is to reformulate the Gaussian mixture approximation problem as an optimization problem by minimizing a certain distance measure between f̃(y_k|x_k) and f(y_k, x_k, η). For solving this problem, we give a review of the progressive optimization scheme proposed in [6].

Since in real systems the system state is usually restricted to a finite interval, i.e.,

∀k : x_k ∈ [a, b] =: Ω ,

we are only interested in approximating the conditional density for x_k ∈ Ω.

Furthermore, we use the special case of a Gaussian mixture with axis-aligned Gaussian components (short: axis-aligned Gaussian mixture) for representing the Gaussian mixture approximation f(y_k, x_k, η). Here, each component is separable in every dimension, i.e.,

f(y_k, x_k, η) = Σ_{i=1}^{L} ω_i · N(y_k − μ^y_i, σ^y_i) · N(x_k − μ^x_i, σ^x_i)  (3)

with the parameter vector η = [η_1^T, η_2^T, ..., η_L^T]^T, where η_i = [ω_i, μ^y_i, σ^y_i, μ^x_i, σ^x_i]^T.

An axis-aligned Gaussian mixture has inferior approximation capabilities compared to a non-axis-aligned one. Hence, more components are required to achieve a comparable approximation quality. In exchange, the covariance matrices of the axis-aligned Gaussian mixture components are diagonal. Thus, fewer parameters have to be adjusted for a single component, and the necessary determination of the gradient ∂G/∂η proves to be easier. Altogether, representing f(y_k, x_k, η) as in (3) lowers the algorithmic complexity.
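Because each component of (3) factorizes into a y-part and an x-part, the axis-aligned mixture can be evaluated directly as a sum of products of scalar Gaussians. A minimal sketch, with a hypothetical two-component parameter vector η chosen purely for illustration:

```python
import math

def normal_pdf(v, mean, std):
    """Density of a scalar Gaussian N(v - mean, std)."""
    return math.exp(-0.5 * ((v - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def axis_aligned_mixture(y, x, components):
    """Evaluate (3): f(y, x, eta) = sum_i w_i N(y - mu_y_i, s_y_i) N(x - mu_x_i, s_x_i).
    Each component is a tuple (w_i, mu_y_i, s_y_i, mu_x_i, s_x_i)."""
    return sum(w * normal_pdf(y, my, sy) * normal_pdf(x, mx, sx)
               for (w, my, sy, mx, sx) in components)

# Hypothetical parameter vector eta with L = 2 components
eta = [(0.5, 0.0, 1.0, -1.0, 0.5),
       (0.5, 1.0, 0.5,  1.0, 0.5)]
val = axis_aligned_mixture(0.2, -0.8, eta)
```

The separability into one-dimensional factors is exactly what later makes the likelihood slice in Section IV cheap.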


A. The Optimization Problem

The quality of the approximation f^e_k(x_k) strongly depends on the similarity between f̃(y_k|x_k) and its Gaussian mixture approximation f(y_k, x_k, η) for x_k ∈ Ω. Thus, this section is concerned with solving the optimization problem

η_min = arg min_η G(η) ,  (4)

which yields the parameter vector for f(y_k, x_k, η) that minimizes the distance to f̃(y_k|x_k). The employed distance measure is the squared integral measure

G(η) = (1/2) ∫_R ∫_R ( f̃(y_k|x_k) − f(y_k, x_k, η) )² dx_k dy_k .  (5)

Although this measure has been selected for its simplicity

and convenience, it has been found to give excellent results.

Apart from the selected distance measure, the underlying

nonlinearity complicates solving (4) significantly. In general,

no closed-form solution can be derived. In addition, the high

dimension of η makes the selection of an initial solution

very difficult, so that the direct application of numerical

minimization routines leads to insufficient local optima of η.
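The measure (5) can be approximated numerically by a Riemann sum on a grid. The following sketch is only a sanity check of the measure itself; the grid bounds, step size, and test densities are illustrative assumptions, not the paper's optimization setup:

```python
import math

def normal_pdf(v, mean, std):
    """Density of a scalar Gaussian N(v - mean, std)."""
    return math.exp(-0.5 * ((v - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def distance_G(f_true, f_approx, xs, ys):
    """Squared integral measure (5), evaluated by a Riemann sum on a grid."""
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    total = 0.0
    for y in ys:
        for x in xs:
            total += (f_true(y, x) - f_approx(y, x)) ** 2
    return 0.5 * total * dx * dy

# Toy check: the measure vanishes when both densities coincide
f = lambda y, x: normal_pdf(y, x, 1.0)
xs = [-3 + 0.1 * i for i in range(61)]
ys = xs
g0 = distance_G(f, f, xs, ys)
```

Such a grid evaluation is useful for inspecting approximation quality, even though the actual optimization in [6] works with the analytic form of G(η) and its gradient.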

B. Progressive Processing

Instead of attempting the direct approximation of the conditional density, we pursue a progressive approach for finding η_min as shown in Fig. 2. This type of processing has been proposed in [4], [11]. In doing so, a parameterized conditional density f̃(y_k|x_k, γ) with the progression parameter γ ∈ [0, 1] is introduced. Incrementing this progression parameter by small Δγ ensures a continuous transformation of the solution of an initial, tractable optimization problem towards the desired true conditional density f̃(y_k|x_k). In every single so-called progression step we obtain a gradually changed distance measure G(η, γ), for which the necessary condition for a minimum

∂G(η, γ) / ∂η = 0

has to be satisfied. For this purpose, the BFGS formula [3], a standard numerical optimization method, is employed. G(η, γ) results from using the progressive version f̃(y_k|x_k, γ) of f̃(y_k|x_k) in (5).

For introducing the parameterized conditional density, we use the parameterized measurement function h(x_k, γ),

h(x_k, γ) = (1 − γ) H · x_k + γ h(x_k) ,

where in particular H ∈ R and

h(x_k, γ = 0) = H · x_k ,   h(x_k, γ = 1) = h(x_k) .  (6)

This yields the modified measurement equation

y_k = h(x_k, γ) + v_k .  (7)

Fig. 2. Flow chart of the progressive processing to determine η_min: initialize the progression with γ = 0; while γ ≤ 1, optimize the current progression step by minimizing G(η, γ) and increment γ = γ + Δγ; the result is f(y_k, x_k, η).

The dependence of f̃(y_k|x_k, γ) = f_v(y_k − h(x_k, γ)) on the measurement equation (7) automatically causes its parameterization.

Example 1 (Quadratic Decay Measurement Function)
In a wireless communication scenario the measurement equation y_k = (1 + x_k²)^{-1} + v_k relates the position x_k of a receiver to the relative signal strength (SNR) y_k according to the free-space propagation model [5]. Fig. 3 shows the progression of the corresponding parameterized measurement function h(x_k, γ) = (1 − γ) H · x_k + γ (1 + x_k²)^{-1} for Δγ = 0.2, H = 0 and Ω = [−3, 3]. The parameterized conditional density f̃(y_k|x_k, γ) performs the same transformation.

Initially, f̃(y_k|x_k, γ = 0) corresponds to the conditional density of a linear system (6). For this optimization problem, there exists just one single optimum. Thus, the choice of insufficient starting parameters by the user and starting the progression with a local optimum is bypassed [6]. For γ = 1 the parameterized conditional density corresponds to the true conditional density, i.e., f̃(y_k|x_k, γ = 1) = f̃(y_k|x_k).
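The progression loop of Fig. 2 can be sketched independently of the concrete optimizer. The surrogate objective below, G(η, γ) = (η − γ)², is a deliberately trivial stand-in chosen so that the warm-starting structure is visible; in the paper the inner step would instead minimize (5) with BFGS over the full parameter vector:

```python
def progressive_approximation(optimize_step, eta0, dgamma=0.2):
    """Progression loop of Fig. 2: start with the tractable problem at gamma = 0
    (linear measurement function), then repeatedly increment gamma and
    re-optimize, warm-starting each step from the previous solution."""
    eta = optimize_step(eta0, 0.0)
    gamma = 0.0
    while gamma < 1.0:
        gamma = min(gamma + dgamma, 1.0)
        eta = optimize_step(eta, gamma)   # minimize G(eta, gamma), e.g. with BFGS
    return eta

# Toy surrogate: G(eta, gamma) = (eta - gamma)^2 has the exact minimizer eta = gamma,
# so each progression step just tracks the slowly moving optimum.
eta_min = progressive_approximation(lambda eta, gamma: gamma, 0.0)
```

The point of the warm start is that each intermediate problem starts close to its optimum, which is what keeps the progression out of poor local minima.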

IV. THE FILTER STEP

Together with the closed-form prediction step proposed in [6], the Gaussian mixture approximations of the conditional density and the likelihood allow performing an efficient closed-form filter step on-line, as depicted in Fig. 1. For this purpose we assume that all involved densities are represented as Gaussian mixtures.

According to [6] we assume the predicted density f^p_k(x_k) to be given by

f^p_k(x_k) = Σ_{j=1}^{L^p} ω^p_{k,j} · N(x_k − μ^p_{k,j}, σ^p_{k,j}) ,  (8)

where L^p is the constant number of Gaussian components, N(x_k − μ^p_{k,j}, σ^p_{k,j}) is a Gaussian density with mean μ^p_{k,j} and standard deviation σ^p_{k,j}, and the ω^p_{k,j} are weighting coefficients with ω^p_{k,j} > 0 and Σ_{j=1}^{L^p} ω^p_{k,j} = 1.

Given a Gaussian mixture approximation f(y_k, x_k, η) according to (3), its axis-aligned structure allows the direct approximation of the likelihood f^L_k(x_k, η_k), if for time step

Fig. 3. Progression of the parameterized measurement function h(x_k, γ) = (1 − γ) H · x_k + γ (1 + x_k²)^{-1}.

k a measurement ŷ_k is present,

f^L_k(x_k, η_k) = f(y_k, x_k, η)|_{y_k = ŷ_k}
              = Σ_{i=1}^{L} ω_i · N(ŷ_k − μ^y_i, σ^y_i) · N(x_k − μ^x_i, σ^x_i)
              = Σ_{i=1}^{L} ω_{k,i} · N(x_k − μ^x_i, σ^x_i) ,  (9)

with ω_{k,i} := ω_i · N(ŷ_k − μ^y_i, σ^y_i) and

η_k = [η_{k,1}^T, η_{k,2}^T, ..., η_{k,L}^T]^T , where η_{k,i} = [ω_{k,i}, μ^x_i, σ^x_i]^T .
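Equation (9) amounts to evaluating the y-factor of every axis-aligned component at ŷ_k and folding the result into the weight, leaving a mixture over x_k alone. A sketch with hypothetical component parameters:

```python
import math

def normal_pdf(v, mean, std):
    """Density of a scalar Gaussian N(v - mean, std)."""
    return math.exp(-0.5 * ((v - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def slice_likelihood(eta, y_hat):
    """Equation (9): evaluate the y-part of every axis-aligned component
    (w_i, mu_y_i, s_y_i, mu_x_i, s_x_i) at y_hat, turning the joint mixture
    into a likelihood mixture over x with components (w_{k,i}, mu_x_i, s_x_i)."""
    return [(w * normal_pdf(y_hat, my, sy), mx, sx)
            for (w, my, sy, mx, sx) in eta]

# Hypothetical conditional-density approximation with two components
eta = [(0.6, 0.5, 0.3, -1.0, 0.4),
       (0.4, 0.9, 0.2,  1.0, 0.4)]
lik = slice_likelihood(eta, y_hat=0.6)
```

Note that the resulting weights ω_{k,i} need not sum to one; a likelihood is not a density in x_k, and the normalization happens later in the filter step (2).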

Example 2 (Quadratic Decay Measurement (cont'd.))
We consider again the measurement equation of Example 1, where now v_k ∼ N(v_k − 0, 0.25) and Ω = [−3, 3]. Using L = 20 Gaussian components leads to the conditional density approximation with quality G(η) = 0.0039 shown in Fig. 4. Applying a measurement ŷ_k = 0.6, we obtain the likelihood approximation depicted in Fig. 5. The little bumps at the interval borders of x_k result from sharply restricting x_k to the interval Ω. A more continuous windowing would alleviate this.

Incorporating an actual measurement and generating the likelihood corresponds to taking a slice parallel to the x_k-axis from the conditional density at position y_k = ŷ_k. The Gaussian mixture representation of f^L_k(x_k, η_k) itself is then very convenient for efficiently performing the filter step.

Theorem 1 (Approximate Posterior Density)
Given the Gaussian mixture representations (8) and (9) for f^p_k(x_k) and f^L_k(x_k, η_k), respectively, the approximate posterior density f^e_k(x_k) is also a Gaussian mixture that can be calculated analytically.

PROOF. Using Bayes' law (2) we obtain

f^e_k(x_k) = c_k f^p_k(x_k) f^L_k(x_k, η_k)
           = c_k Σ_{j=1}^{L^p} Σ_{i=1}^{L} ω^p_{k,j} ω_{k,i} · N(x_k − μ^p_{k,j}, σ^p_{k,j}) · N(x_k − μ^x_i, σ^x_i)
           = c_k Σ_{j=1}^{L^p} Σ_{i=1}^{L} ω^e_{k,i,j} · N(x_k − μ^e_{k,i,j}, σ^e_{k,i,j}) ,  (10)

since the product of the two Gaussian densities equals z_{i,j} · N(x_k − μ^e_{k,i,j}, σ^e_{k,i,j}) with

z_{i,j} = N( μ^p_{k,j} − μ^x_i , √( (σ^p_{k,j})² + (σ^x_i)² ) ) ,
ω^e_{k,i,j} = z_{i,j} · ω^p_{k,j} · ω_{k,i} ,
μ^e_{k,i,j} = (σ^e_{k,i,j})² · ( μ^p_{k,j} / (σ^p_{k,j})² + μ^x_i / (σ^x_i)² ) ,
σ^e_{k,i,j} = (σ^p_{k,j} · σ^x_i) / √( (σ^p_{k,j})² + (σ^x_i)² ) .

The normalization constant c_k = 1 / ( Σ_{j=1}^{L^p} Σ_{i=1}^{L} ω^e_{k,i,j} ) results from integrating over both sums in (10). □

For obtaining the result in (10), only multiplications of two Gaussian densities, denoted by z_{i,j} · N(x_k − μ^e_{k,i,j}, σ^e_{k,i,j}), have to be performed. Hence, (10) provides the closed-form and efficient solution for the filter step by means of the Gaussian mixture approximation of a likelihood. The accuracy of the approximation of f̃^e_k(x_k) strongly depends on the number of components of f(y_k, x_k, η) and f^L_k(x_k, η_k), respectively.

The obtained Gaussian mixture approximation for the posterior density comprises L^p · L components. Thus, unlike in the closed-form prediction step, the number of components in the approximation grows exponentially over time. To avoid this exponential growth it is standard practice to employ Gaussian mixture reduction techniques after the filter step. Instead, the subsequent closed-form prediction step, as depicted in Fig. 1, automatically leads to a Gaussian mixture reduction.
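The component-wise Gaussian products in the proof can be sketched as follows. The formulas for z_{i,j}, μ^e and σ^e are the standard product-of-Gaussians identities used in (10); the mixtures in the usage example are illustrative:

```python
import math

def normal_pdf(v, mean, std):
    """Density of a scalar Gaussian N(v - mean, std)."""
    return math.exp(-0.5 * ((v - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def filter_step(prior, likelihood):
    """Theorem 1: multiply two Gaussian mixtures over x component-wise.
    prior: list of (w_p, mu_p, s_p); likelihood: list of (w_l, mu_l, s_l).
    Returns the normalized posterior with len(prior)*len(likelihood) components."""
    post = []
    for (wp, mp, sp) in prior:
        for (wl, ml, sl) in likelihood:
            z = normal_pdf(mp - ml, 0.0, math.sqrt(sp**2 + sl**2))  # z_{i,j}
            var = (sp**2 * sl**2) / (sp**2 + sl**2)
            mu = var * (mp / sp**2 + ml / sl**2)
            post.append((z * wp * wl, mu, math.sqrt(var)))
    c = sum(w for (w, _, _) in post)          # 1/c_k from integrating over (10)
    return [(w / c, mu, s) for (w, mu, s) in post]

# Single-component check: N(0, 1) * N(1, 1) gives mean 0.5 and std sqrt(0.5)
post = filter_step([(1.0, 0.0, 1.0)], [(1.0, 1.0, 1.0)])
```

With L^p prior and L likelihood components the result has L^p · L terms, which is exactly the growth that the subsequent prediction step compensates.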

V. GAUSSIAN MIXTURE REDUCTION

Popular Gaussian mixture reduction methods, such as Salmond's joining algorithm [13] or Maybeck's ISE-based reduction algorithm [9], suffer from either poor reduction quality or a high computational burden. However, predictions with respect to a scalar nonlinear time-invariant system equation

x_{k+1} = a(x_k) + w_k ,

where w_k is white and stationary Gaussian noise, offer the opportunity for directly reducing the number of components to a constant value L^p. For this purpose, the closed-form prediction step

f^p_{k+1}(x_{k+1}) = ∫_R f^T(x_{k+1}, x_k, η) f^e_k(x_k) dx_k ,  (11)

proposed in [6] has to be performed. Here, f^T(x_{k+1}, x_k, η) is the axis-aligned Gaussian mixture approximation with L^p components of the true transition density f̃^T(x_{k+1}|x_k) for x_k ∈ Ω. Since f̃^T(x_{k+1}|x_k) is also a conditional density, its approximation is done similarly to that of f̃(y_k|x_k).

Theorem 2 (Approximate Predicted Density)
Given the Gaussian mixture representation (10) for f^e_k(x_k) and an axis-aligned Gaussian mixture representation for f^T(x_{k+1}, x_k, η) similar to (3) with L^p components, the approximate predicted density f^p_{k+1}(x_{k+1}) is also a Gaussian mixture with L^p components that can be calculated analytically.


Fig. 4. Top view of the Gaussian mixture approximation of the conditional density f̃(y_k|x_k) = N(y_k − (1 + x_k²)^{-1}, 0.25). The dashed black line displays the underlying measurement function h(x_k) = (1 + x_k²)^{-1}. A measurement ŷ_k = 0.6, indicated by the red line, leads to the likelihood depicted in Fig. 5.

PROOF. See [6]. □

The number of components in f^p_{k+1}(x_{k+1}) depends only on the number of components of the approximate transition density. In return, performing the closed-form prediction step (11) automatically reduces the Gaussian mixture f^e_k(x_k). Because f^p_{k+1}(x_{k+1}) can be calculated analytically, this simultaneous prediction and reduction is computationally very efficient.
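Under the axis-aligned structure, the integral (11) collapses to one Gaussian in x_{k+1} per transition component, since only the x_k-factors interact with the posterior. The component layout and the final renormalization below are assumptions of this sketch, not taken verbatim from [6]:

```python
import math

def normal_pdf(v, mean, std):
    """Density of a scalar Gaussian N(v - mean, std)."""
    return math.exp(-0.5 * ((v - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def prediction_step(transition, posterior):
    """Sketch of the closed-form prediction step (11). transition is an
    axis-aligned mixture with components (w_i, mu_next_i, s_next_i, mu_cur_i,
    s_cur_i); posterior is a mixture over x_k with components (w_j, mu_j, s_j).
    Integrating the x_k factor of each transition component against the
    posterior yields one Gaussian in x_{k+1} per transition component, so the
    predicted mixture has exactly len(transition) = L_p components."""
    pred = []
    for (wi, mu_next, s_next, mu_cur, s_cur) in transition:
        # closed-form integral of N(x_k - mu_cur, s_cur) * posterior over x_k
        mass = sum(wj * normal_pdf(mj - mu_cur, 0.0, math.sqrt(sj**2 + s_cur**2))
                   for (wj, mj, sj) in posterior)
        pred.append((wi * mass, mu_next, s_next))
    c = sum(w for (w, _, _) in pred)          # renormalize the approximate mixture
    return [(w / c, m, s) for (w, m, s) in pred]

# Hypothetical transition mixture (L_p = 2) applied to a 3-component posterior
transition = [(0.5, -1.0, 0.6, -1.0, 0.6), (0.5, 1.0, 0.6, 1.0, 0.6)]
posterior = [(0.3, -1.2, 0.4), (0.4, 0.1, 0.4), (0.3, 1.1, 0.4)]
pred = prediction_step(transition, posterior)
```

Regardless of how many components the posterior accumulated in the filter step, the prediction output keeps L^p components, which is the implicit mixture reduction the section describes.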

Remark 1 (Generalization) Until now we have assumed that v_k and w_k are Gaussian. Generalization of the introduced approximation techniques with respect to noise represented by a Gaussian mixture is straightforward. For general densities it is possible to first find a Gaussian mixture approximation of f_v(v_k) and f_w(w_k), e.g. using the method proposed in [4], and then to approximate the conditional and transition density afterwards.

VI. SIMULATION: QUADRATIC DECAY

In this section we investigate the estimation results for the measurement equation

y_k = 1 / (1 + x_k²) + v_k ,  (12)

introduced in Examples 1 and 2 with measurement noise v_k ∼ N(v_k, 0.1). We approximate the conditional density of (12) for x_k ∈ Ω = [−5, 5] with L = 70 Gaussian components, gaining a quality of G(η) = 0.225880.

The prediction step is based on the system equation

x_{k+1} = x_k + w_k ,  (13)

where w_k ∼ N(w_k, 0.25). Although (13) is linear, we use a Gaussian mixture approximation of the transition density with L^p = 50 components for mixture reduction purposes. The linearity allows on-line approximation for each prediction step to dynamically cover the spread of the posterior density.

Fig. 5. The likelihood f̃^L_k(x_k) = N(ŷ_k − (1 + x_k²)^{-1}, 0.25) with ŷ_k = 0.6 (blue, dashed) and its Gaussian mixture approximation (red, solid), generated from the conditional density approximation shown in Fig. 4. 20 Gaussian components (red, dotted) are used for representing the approximate likelihood f^L_k(x_k, η_k).

In the simulation, four consecutive filter and prediction steps are performed alternatingly, starting with the filter step and the density f^x_0(x_0) = N(x_0 + 0.5, 1) of the system state x_k at time step k = 0. The measurement sequence is

ŷ_0 = 0.4 ,  ŷ_1 = 0.75 ,  ŷ_2 = 0.5 ,  ŷ_3 = 0.9 .

We compare the posterior densities of our approach (denoted as Appr.) with those of the unscented Kalman filter (UKF) [7], a particle filter (PF) with 700 samples and systematic resampling [2], and the exact Bayesian estimator. Recursive estimation with the exact Bayesian estimator requires recursively applied numerical integration and is used as reference. Fig. 6 shows the resulting posterior densities of the system state x_k for the four consecutive filter and prediction step pairs at time k = 0, ..., 3. It is obvious that there is almost no shape difference between the estimates of the Bayesian estimator and our approach. Especially, both modes are approximated almost exactly. The same is true for the means and standard deviations, as shown in Table I. Since the prediction simultaneously provides a Gaussian mixture reduction, the number of components in f^p_{k+1}(x_{k+1}) and f^e_k(x_k) stays constant at 50 and 3500 respectively, without impairing the estimation results significantly.

Since the UKF provides a Gaussian density approximation, whose mean is accurate up to second order, the estimated mean is relatively close to the true one. In contrast, the difference in shape and standard deviation is significant. Due to the shape approximation of the conditional density, our approach is also able to cover higher-order moments and the shape of the posterior density. Like the proposed approach, the time consumption of the UKF is constant, but about two orders of magnitude less. However, our approach provides estimates with high accuracy, whose calculation needs computing time that is close to real-time.

The density representation provided by the particle filter depends on randomly drawn samples. Thus, this representation is inappropriate for well-fitting density approximations,

Fig. 6. The results of the approach of this paper (red, solid) in comparison with those of the Bayesian estimator (blue, dashed) and the unscented Kalman filter (green, dotted) are depicted. The particle filter provides only a sample representation and thus is omitted.

TABLE I
MEANS AND STANDARD DEVIATIONS OF THE POSTERIOR DENSITIES.

            mean μ^e_k                 standard deviation σ^e_k
 k   Bayes  Appr.   UKF     PF      Bayes  Appr.   UKF    PF
 0   -0.72  -0.73  -0.50  -0.70      1.07   1.08   0.97  1.11
 1   -0.33  -0.33  -0.35  -0.40      0.65   0.64   0.94  0.86
 2   -0.44  -0.45  -0.47  -0.43      0.84   0.83   0.94  0.90
 3   -0.22  -0.22  -0.16  -0.27      0.44   0.43   0.91  0.71

but very convenient for estimating moments. In Table I the average mean and standard deviation estimates of the PF over 50 simulation runs are recorded. The mean estimates are comparable to those provided by the UKF, while the standard deviations are more accurate. Only by drastically increasing the number of samples, and thus the computation time, would the PF results get close to those of the proposed approach.

VII. CONCLUSIONS AND FUTURE WORK

The novel approach for closed-form measurement updating of dynamic time-invariant nonlinear systems introduced in this paper is based on the approximation of conditional densities by means of axis-aligned Gaussian mixtures. Given the Gaussian mixture approximation of the conditional density and an actual measurement, an on-line generation of the likelihood for performing the filter step is provided and results in an analytic calculation of the approximate posterior density. In contrast to the extended Kalman filter or the unscented Kalman filter, the Gaussian mixture representation of our approach allows an accurate approximation of the posterior density, especially with regard to higher-order moments and a multimodal shape. Whereas particle filters only use a discrete approximation, the proposed estimation technique is able to give a continuous representation.

The exponential growth of the number of components of the approximate posterior density can be compensated by applying the closed-form prediction step derived in [6]. In doing so, performing predictions implicitly achieves Gaussian mixture reduction in an efficient manner.

The foundation of the proposed approach is an accurate conditional density approximation. To achieve approximation results of high quality and to avoid getting trapped in local optima, a progressive optimization algorithm is proposed. Since the conditional density is time-invariant, the computationally demanding approximation can be executed off-line.

The described approach has been introduced for scalar random variables for the sake of brevity and clarity. Generalization to random vectors is straightforward. At the moment, the approach is restricted to time-invariant systems. Extension to time-variant systems is part of further research.

VIII. ACKNOWLEDGEMENTS

This work was partially supported by the German

Research Foundation (DFG) within the Research Train-

ing Group GRK 1194 “Self-organizing Sensor-Actuator-

Networks”.

REFERENCES

[1] D. L. Alspach and H. W. Sorenson, “Nonlinear Bayesian Estimation

using Gaussian Sum Approximation,” IEEE Transactions on Automatic

Control, vol. 17, no. 4, pp. 439–448, August 1972.

[2] S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A Tutorial on

Particle Filters for Online Nonlinear/Non–Gaussian Bayesian Track-

ing,” IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174–

188, 2002.

[3] R. Fletcher, Practical Methods of Optimization, 2nd ed. John Wiley and Sons Ltd, 2000.

[4] U. D. Hanebeck, K. Briechle, and A. Rauh, “Progressive Bayes: A

New Framework for Nonlinear State Estimation,” in Proceedings of

SPIE, vol. 5099, AeroSense Symposium, 2003, pp. 256–267.

[5] S. Haykin, Communication Systems, 4th ed. John Wiley & Sons, Inc., 2001.

[6] M. Huber, D. Brunn, and U. D. Hanebeck, “Closed-Form Prediction of

Nonlinear Dynamic Systems by Means of Gaussian Mixture Approx-

imation of the Transition Density,” in IEEE International Conference

on Multisensor Fusion and Integration for Intelligent Systems, 2006,

pp. 98–103.

[7] S. J. Julier and J. K. Uhlmann, “Unscented Filtering and Nonlinear

Estimation,” in Proceedings of the IEEE, vol. 92, no. 3, 2004, pp.

401–422.

[8] R. E. Kalman, “A new Approach to Linear Filtering and Prediction

Problems,” Transactions of the ASME, Journal of Basic Engineering,

no. 82, pp. 35–45, 1960.

[9] P. S. Maybeck and B. D. Smith, “Multiple Model Tracker Based on

Gaussian Mixture Reduction for Maneuvering Targets in Clutter,” in

8th International Conference on Information Fusion, vol. 1, 2005, pp.

40–47.

[10] V. Maz’ya and G. Schmidt, “On Approximate Approximations using

Gaussian Kernels,” IMA Journal of Numerical Analysis, vol. 16, no. 1,

pp. 13–29, 1996.

[11] N. Oudjane and C. Musso, “Progressive Correction for Regularized

Particle Filters,” in Proceedings of the 3rd International Conference

on Information Fusion, 2000, pp. THB2/10–THB2/17.

[12] A. Papoulis, Probability, Random Variables and Stochastic Processes,

3rd ed. McGraw-Hill, 1991.

[13] D. J. Salmond, “Mixture Reduction Algorithms for Target Tracking

in Clutter,” in SPIE Signal and Data Processing of Small Targets, ser.

1305, April 1990, pp. 434–445.
