Multitarget Tracking
Ba-Ngu Vo, Mahendra Mallick, Yaakov Bar-Shalom, Stefano Coraluppi, Richard Osborne, III, Ronald Mahler,
and Ba-Tuong Vo
Abstract—Multitarget tracking (MTT) refers to the problem
of jointly estimating the number of targets and their states or
trajectories from noisy sensor measurements. MTT has a long
history spanning over 50 years, with a plethora of applications
in many fields of study. While numerous techniques have been
developed, the three most widely used approaches to MTT are
the joint probabilistic data association filter (JPDAF), multiple
hypothesis tracking (MHT), and random finite set (RFS). The
JPDAF and MHT have been widely used for more than two
decades, while the random finite set (RFS) based MTT algorithms
have received a great deal of attention during the last decade.
In this article, we provide an overview of MTT and succinct
summaries of popular state-of-the-art MTT algorithms.
List of mathematical symbols:
δ(·)             Dirac delta
δ_n[m]           Kronecker delta: δ_n[m] = 1 if m = n, 0 otherwise
∅                empty set
1_A(·)           indicator function on the set A
⟨·,·⟩            inner product between functions/sequences
|X|              cardinality (number of elements) of the set X
C^n_j            number of j-combinations of n: n!/(j!(n−j)!)
P^n_j            number of j-permutations of n: n!/(n−j)!
X                state space
x                state vector
F(X)             collection of finite subsets of X
Z                observation space
z                observation vector
z_{l:k}          (z_l, z_{l+1}, ..., z_k)
z^k              observation history z_{1:k}
Pr(A)            probability of event A
f_{k|k−1}(·|·)   state transition density
g_k(·|·)         likelihood function
p_{0:k}(·|·)     posterior density
p_{k|k−1}(·|·)   prediction density
p_k(·|·)         filtering density
N(·; m, P)       Gaussian probability density with mean m and covariance P
F_{k,k−1}        state transition matrix
H_k              measurement matrix
(·)′             transpose of a vector/matrix
1                column vector of ones [1, 1, ..., 1]′
P_{S,k|k−1}(x)   probability of survival of a target at time k given its previous state x
P_{D,k}(x)       probability of target detection at time k given its state x
B.-N. Vo and B.-T. Vo are with Curtin University, Bentley, WA, Australia.
M. Mallick is an Independent Consultant, Smith River, CA, USA.
Y. Bar-Shalom and R. Osborne, III are with University of Connecticut, Storrs, CT, USA.
S. Coraluppi is with STR, Woburn, MA, USA.
R. Mahler is an Independent Consultant, Minnesota, USA.
I. INTRODUCTION
In a multitarget scenario, the number of targets and their
trajectories vary with time due to targets appearing and disappearing;
examples include the locations, velocities and bearings
of commercial planes at an airport, ships in a harbour, and
pedestrians on the street. Multitarget tracking (MTT) refers
to the problem of jointly estimating the number of targets
and their trajectories from sensor data. Driven by aerospace
applications in the 1960’s, MTT has a long history spanning
over 50 years. During the last decade, advances in MTT
techniques, along with sensing and computing technologies,
have opened up numerous research avenues as well as application
areas. Today, MTT has found applications in diverse
disciplines, including air traffic control, surveillance, defence,
space applications, oceanography, autonomous vehicles and
robotics, remote sensing, computer vision, and biomedical
research, see for example the texts [10], [15], [22], [73], [86],
[96], [99], [144]. The goal of this article is to discuss the
challenges in MTT and present the state-of-the-art techniques.
In this article we only consider the standard setting where
sensor measurements at each instance have been preprocessed
into a set of points or detections. The multitarget tracker
receives a random number of measurements due to detection
uncertainty and false alarms (FAs). Consequently, apart from
process and measurement noises, the multitarget tracker has
to contend with much more complex sources of uncertainty,
such as measurement origin uncertainty, false alarm, missed
detection, and births and deaths of targets. Moreover, in
the multi-sensor setting, a multitarget tracker needs to pro-
cess measurements from multiple heterogeneous sensors such
as radar, sonar, electro-optical, infrared, camera, unattended
ground sensor etc.
A number of MTT algorithms are used at present in various
tracking applications, with the most popular being the joint
probabilistic data association filter (JPDAF) [10], multiple
hypothesis tracking (MHT) [15], and random finite set (RFS)
based multitarget filters [86], [96]. This article focuses on sum-
marizing JPDAF, MHT and RFS as the three main approaches
to MTT. The JPDAF and MHT approaches are very well
established and make up the bulk of the multitarget tracking
literature, while the RFS approach is an emerging paradigm.
JPDAF and MHT as well as many traditional MTT solutions,
are formulated via data association followed by (single-target)
filtering. Data association refers to the partitioning of the
measurements into potential tracks and false alarms while
filtering is used to estimate the state of the target given its
measurement history (note that algorithms that operate on
pre-detection signals do not involve data association). The
distinguishing feature of the RFS approach is that, instead of
focusing on the data association problem, the RFS formulation
directly seeks both optimal and suboptimal estimates of the
multitarget state. Indeed some RFS-based algorithms do not
require data association at all.
We begin by reviewing the fundamental principles of
Bayesian estimation and summarizing some of the commonly
used (single-target) filters for tracking in Section II. Section III
presents some background on the MTT problem and describes
the main challenges, setting the scene for the rest of the article.
The JPDAF, MHT, and RFS approaches to MTT are presented
in chronological order of developments in Sections IV, V, and
VI respectively, with JPDAF being the earliest and RFS being
the most recent. Nonetheless, Sections IV, V, and VI can be
read independently from each other.
II. BAYESIAN DYNAMIC STATE ESTIMATION
During the last two decades significant progress has been
made in nonlinear filtering. This section provides a brief
overview of the Bayesian paradigm for nonlinear filtering.
A. Bayesian Estimation
Consider the problem of estimating the state or parameter x ∈ X from an observation z ∈ Z, where the state and observation spaces X and Z are assumed to be finite dimensional vector spaces in this article. The relationship between the observation and the state is described by the likelihood function p(z|x), the likelihood of the observation z given a state x. Note that for each x ∈ X, p(·|x) is a probability density on Z, i.e. for any B ⊆ Z,

Pr(z ∈ B | x) = ∫_B p(z|x) dz.

In the Bayesian paradigm, prior information about the state is given by a prior probability density (or simply prior) p on X, i.e. for any A ⊆ X,

Pr(x ∈ A) = ∫_A p(x) dx.

All information about the state given the observation is contained in the posterior probability density (or simply posterior), which can be computed from the prior and likelihood function using Bayes rule

p(x|z) = p(z|x) p(x) / ∫ p(z|x) p(x) dx.    (1)
An estimator of the state is a function x̂ that assigns to the observation z a value x̂(z) ∈ X. A cost C(x̂(z), x) is associated with using x̂(z) to estimate x, and the Bayes risk R(x̂) is the expected cost over all possible realizations of the observation and state, i.e.

R(x̂) = ∫∫ C(x̂(z), x) p(z|x) p(x) dx dz.

A Bayes optimal estimator is any estimator that minimizes the Bayes risk [70], [128]. The most common estimators are the expected a posteriori (EAP) or conditional mean and the maximum a posteriori (MAP) estimators, given respectively by [1], [8], [70]

x̂_EAP = ∫ x p(x|z) dx,
x̂_MAP = arg sup_x p(x|z).

These estimators minimize the Bayes risks for certain costs and are consistent in the sense that they converge almost surely to the true state as the number of data points increases. The EAP estimate is the minimum mean squared error estimate [8] and corresponds to the case where C(x̂(z), x) = ‖x̂(z) − x‖².
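As a simple numerical illustration (not from the article), the sketch below evaluates Bayes rule (1) on a one-dimensional grid and extracts the EAP and MAP estimates; the prior, likelihood and observed value are assumptions chosen only for illustration.

```python
import numpy as np

# Hypothetical 1-D example: Gaussian prior on x and Gaussian likelihood p(z|x).
x_grid = np.linspace(-5.0, 5.0, 2001)             # discretization of the state space X
dx = x_grid[1] - x_grid[0]

prior = np.exp(-0.5 * x_grid ** 2 / 1.0)          # prior p(x) ~ N(0, 1), unnormalized
prior /= prior.sum() * dx

z = 1.2                                           # observed value (assumed)
lik = np.exp(-0.5 * (z - x_grid) ** 2 / 0.5)      # likelihood p(z|x) ~ N(x, 0.5)

# Bayes rule (1): posterior proportional to likelihood x prior, normalized numerically.
posterior = lik * prior
posterior /= posterior.sum() * dx

x_eap = np.sum(x_grid * posterior) * dx           # EAP / conditional-mean estimate
x_map = x_grid[np.argmax(posterior)]              # MAP estimate
print(x_eap, x_map)
```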
B. The Bayes Recursion
Target tracking is a dynamic state estimation problem, in
which the state varies with time. The dynamic model of a
target can be described by a discrete-time model [1], [8], [64]
or a continuous-time stochastic differential equation [8], [64].
This article only considers the discrete-time models.
The target state x_k evolves in time according to the state transition equation

x_k = f_{k,k−1}(x_{k−1}, v_{k−1}),    (2)

where f_{k,k−1}(·,·) is a nonlinear transformation and v_{k−1} is the process noise. In general, the state transition equation can be described by a Markov transition density

f_{k|k−1}(x_k | x_{k−1}),    (3)

i.e. the probability density of a transition to the state x_k at time k given a state x_{k−1} at time k−1. Note that for each x ∈ X, f_{k|k−1}(·|x) is a probability density on X. Some commonly used dynamic models are the nearly constant velocity, nearly constant acceleration, nearly coordinated turn, Ornstein-Uhlenbeck, and Singer models [8], [15]; see the survey [78] for more details.
At each time k, the state x_k generates an observation z_k according to the observation equation

z_k = g_k(x_k, w_k),    (4)

where g_k(·,·) is a nonlinear transformation and w_k is observation noise. In general, the observation equation can be described by the likelihood function

g_k(z_k | x_k),    (5)

i.e. the probability density of receiving the observation z_k ∈ Z given a state x_k. For compactness we denote an array of variables (y_l, ..., y_k) by y_{l:k}. It is further assumed that the probability density of the observation history z_{1:k} conditioned on x_{1:k} is given by

p_{1:k}(z_{1:k} | x_{1:k}) = g_k(z_k | x_k) g_{k−1}(z_{k−1} | x_{k−1}) ... g_1(z_1 | x_1).

All information about the state history to time k is encapsulated in the posterior density p_{0:k}(·|z^k), where z^k = z_{1:k} denotes the observation history. The posterior density can be computed recursively for any k ≥ 1, starting from an initial prior p_0, via the Bayes recursion:

p_{0:k}(x_{0:k} | z^k) ∝ g_k(z_k | x_k) f_{k|k−1}(x_k | x_{k−1}) p_{0:k−1}(x_{0:k−1} | z^{k−1}).    (6)
The filtering density p_k(·|z^k) is a marginal of the posterior density, which is defined as the probability density of the state at time k given the observation history z^k. From an initial density p_0, the filtering density at time k can be computed recursively using the Bayes (filtering) recursion, which consists of the Chapman-Kolmogorov equation and the Bayes update:

p_{k|k−1}(x_k | z^{k−1}) = ∫ f_{k|k−1}(x_k | x) p_{k−1}(x | z^{k−1}) dx,    (7)

p_k(x_k | z^k) = g_k(z_k | x_k) p_{k|k−1}(x_k | z^{k−1}) / ∫ g_k(z_k | x) p_{k|k−1}(x | z^{k−1}) dx,    (8)

where p_{k|k−1}(·|z^{k−1}) is called the prediction density. The smoothing density p_{k|k+l}(·|z_{1:k+l}), the probability density of the state at time k given the observation history z_{1:k+l}, is another marginal of the posterior density. Smoothing can yield significantly better estimates than filtering by delaying the decision time and using data at a later time [105], [58], [46], [161].
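A minimal grid-based implementation of the Bayes filtering recursion (7)-(8) for a scalar state is sketched below; the random-walk dynamics, Gaussian likelihood and measurement sequence are illustrative assumptions, not models from the article.

```python
import numpy as np

# Grid-based Bayes filter for a scalar state: Chapman-Kolmogorov prediction (7)
# followed by the Bayes update (8), all quantities evaluated on a fixed grid.
x_grid = np.linspace(-10.0, 10.0, 401)
dx = x_grid[1] - x_grid[0]

def gauss(x, m, var):
    return np.exp(-0.5 * (x - m) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Markov transition density f_{k|k-1}(x_k | x_{k-1}): random walk with variance 0.5.
trans = gauss(x_grid[:, None], x_grid[None, :], 0.5)    # trans[i, j] = f(x_i | x_j)

def predict(p_prev):
    """Chapman-Kolmogorov equation (7)."""
    return trans @ p_prev * dx

def update(p_pred, z):
    """Bayes update (8) with assumed likelihood g_k(z|x) = N(z; x, 1)."""
    post = gauss(z, x_grid, 1.0) * p_pred
    return post / (post.sum() * dx)

p = gauss(x_grid, 0.0, 4.0)                 # initial prior p_0
for z in [0.3, 0.8, 1.1]:                   # assumed observation sequence
    p = update(predict(p), z)
print("filtered mean:", np.sum(x_grid * p) * dx)
```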
C. The Kalman Filter
The Kalman filter (KF) is a closed form solution to the Bayes (filtering) recursion for linear Gaussian models [1], [8], [59], [64], [69], [126]. Specifically, the dynamical and observation models are linear transformations with additive Gaussian noise

x_k = F_{k,k−1} x_{k−1} + v_{k−1},
z_k = H_k x_k + w_k,

where F_{k,k−1} is the (square) transition matrix, H_k is the observation matrix, and v_{k−1} and w_k are independent zero-mean Gaussian noise variables with covariance matrices Q_{k−1} and R_k of appropriate dimensions. Thus, the transition density and likelihood function are

f_{k|k−1}(x_k | x_{k−1}) = N(x_k; F_{k,k−1} x_{k−1}, Q_{k−1}),    (9)
g_k(z_k | x_k) = N(z_k; H_k x_k, R_k),    (10)

where N(·; m, P) denotes a Gaussian density with mean m and covariance P. For example, the nearly constant velocity model has

x_k = [x_k, y_k, ẋ_k, ẏ_k]′,    F_{k,k−1} = [ I_2  ΔI_2 ; 0_2  I_2 ],
Q_{k−1} = σ_w² [ (Δ⁴/4) I_2  (Δ³/2) I_2 ; (Δ³/2) I_2  Δ² I_2 ],    H_k = [ I_2  0_2 ],    R_k = σ_v² I_2,

where I_n and 0_n denote the n×n identity and zero matrices respectively, Δ is the sampling period, and σ_w and σ_v are respectively the standard deviations of the process and measurement noise.
Under these assumptions, suppose that the initial prior is a Gaussian p_0 = N(·; m_0, P_0); then all subsequent filtering densities are Gaussian. Moreover, if at time k−1 the filtering density is a Gaussian of the form

p_{k−1}(x_{k−1} | z^{k−1}) = N(x_{k−1}; m_{k−1}, P_{k−1}),

then the predicted density to time k is a Gaussian

p_{k|k−1}(x_k | z^{k−1}) = N(x_k; m_{k|k−1}, P_{k|k−1}),

where

m_{k|k−1} = F_{k,k−1} m_{k−1},
P_{k|k−1} = Q_{k−1} + F_{k,k−1} P_{k−1} F′_{k,k−1},

and the filtering density at time k is a Gaussian

p_k(x_k | z^k) = N(x_k; m_k(z_k), P_k),

where

m_k(z_k) = m_{k|k−1} + K_k (z_k − H_k m_{k|k−1}),
P_k = [I − K_k H_k] P_{k|k−1},
K_k = P_{k|k−1} H′_k S_k^{−1},
S_k = R_k + H_k P_{k|k−1} H′_k.

The matrix K_k is referred to as the Kalman gain, the residual z_k − H_k m_{k|k−1} is referred to as the innovation, and the matrix S_k is the innovation covariance.
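For concreteness, the sketch below implements the Kalman prediction and update equations above for the nearly constant velocity model; the sampling period, noise standard deviations, initial condition and measurements are assumed values.

```python
import numpy as np

# Kalman filter predict/update for the nearly constant velocity model above.
dt, sw, sv = 1.0, 1.0, 0.5
I2, Z2 = np.eye(2), np.zeros((2, 2))

F = np.block([[I2, dt * I2], [Z2, I2]])                       # F_{k,k-1}
Q = sw**2 * np.block([[dt**4 / 4 * I2, dt**3 / 2 * I2],
                      [dt**3 / 2 * I2, dt**2 * I2]])          # Q_{k-1}
H = np.block([I2, Z2])                                        # H_k
R = sv**2 * I2                                                # R_k

def kf_predict(m, P):
    return F @ m, F @ P @ F.T + Q

def kf_update(m_pred, P_pred, z):
    S = H @ P_pred @ H.T + R                                  # innovation covariance S_k
    K = P_pred @ H.T @ np.linalg.inv(S)                       # Kalman gain K_k
    nu = z - H @ m_pred                                       # innovation
    return m_pred + K @ nu, (np.eye(4) - K @ H) @ P_pred

m, P = np.zeros(4), np.diag([10.0, 10.0, 1.0, 1.0])           # prior N(m_0, P_0), assumed
for z in [np.array([1.1, 0.2]), np.array([2.0, 0.5])]:        # assumed measurements
    m, P = kf_update(*kf_predict(m, P), z)
print(m)
```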
The dynamic and measurement models in many real-world
problems such as the bearing-only tracking, angle-only track-
ing, radar tracking, video tracking, etc. [10], [15], [100], [101],
[126] are nonlinear. The process noise and measurement noise
can also be non-additive and non-Gaussian. The Kalman filter
is not applicable to these problems and in general, closed-form
solutions are not possible. A number of approximate filtering
algorithms such as the extended Kalman filter (EKF)
[1], [8], [50], [64], [126], unscented Kalman filter (UKF) [67],
[68], Gaussian sum filter [143], particle filter [4], [21], [44]–
[46], [55], [126], quadrature filter, quasi Monte Carlo, grid
based filter, cubature Kalman filter [3], [65], particle flow filter
(PFF) [37], [42] have been proposed.
The EKF is a first order approximation to the Kalman
filter based on linearization using the Taylor series expansion.
The UKF uses the deterministic sampling principles of the
unscented transform (UT) to propagate the first and second
moments of the predicted and updated densities. The particle
filter uses the sequential Monte Carlo (SMC) approach to
approximate the posterior density using random sample points
or particles. Next, we present two important approximate
filters, the Gaussian sum filter and the particle filter.
D. The Gaussian Sum Filter
The Gaussian sum filter is a generalization of the Kalman
filter to Gaussian mixture models [143]. Suppose that at time k−1, the filtering density is a Gaussian mixture of the form

p_{k−1}(x_{k−1} | z^{k−1}) = Σ_{i=1}^{N} w^{(i)}_{k−1} N(x_{k−1}; m^{(i)}_{k−1}, P^{(i)}_{k−1}).

Then the predicted density to time k and the filtering density at time k are Gaussian mixtures

p_{k|k−1}(x_k | z^{k−1}) = Σ_{i=1}^{N} w^{(i)}_{k−1} N(x_k; m^{(i)}_{k|k−1}, P^{(i)}_{k|k−1}),    (11)

p_k(x_k | z^k) = Σ_{i=1}^{N} w^{(i)}_{k} N(x_k; m^{(i)}_{k}(z_k), P^{(i)}_{k}),    (12)

where

m^{(i)}_{k|k−1} = F_{k,k−1} m^{(i)}_{k−1},    (13)
P^{(i)}_{k|k−1} = Q_{k−1} + F_{k,k−1} P^{(i)}_{k−1} F′_{k,k−1},    (14)
m^{(i)}_{k}(z_k) = m^{(i)}_{k|k−1} + K^{(i)}_{k} (z_k − H_k m^{(i)}_{k|k−1}),    (15)
P^{(i)}_{k} = [I − K^{(i)}_{k} H_k] P^{(i)}_{k|k−1},    (16)
K^{(i)}_{k} = P^{(i)}_{k|k−1} H′_k (S^{(i)}_{k|k−1})^{−1},    (17)
S^{(i)}_{k|k−1} = R_k + H_k P^{(i)}_{k|k−1} H′_k,    (18)

and the updated weights satisfy w^{(i)}_{k} ∝ w^{(i)}_{k−1} N(z_k; H_k m^{(i)}_{k|k−1}, S^{(i)}_{k|k−1}), normalized to sum to one.
For clarity, we have presented the Gaussian sum filter
prediction and update for the linear Gaussian model. In the more
general case where the transition density and/or likelihood
function are Gaussian mixtures, the predicted density (11) in-
volves an additional sum over the components of the Gaussian
mixture transition density, and/or the filtering density (12) in-
volves an additional sum over the components of the Gaussian
mixture likelihood function [143]. The number of Gaussians
required to represent the exact filtering density increases expo-
nentially with time and Gaussian mixture reduction techniques
are required to manage memory and computational load [135],
[134], [137].
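The sketch below implements the Gaussian sum prediction and update (11)-(18) for the linear Gaussian model, together with the mixture weight update and a simple weight-threshold pruning step; the model matrices and pruning threshold are illustrative assumptions and not a specific scheme from [135], [134], [137].

```python
import numpy as np

# Gaussian sum filter sketch for a linear Gaussian model with mixture pruning.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = 0.1 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])

def gm_predict(ws, ms, Ps):
    ms_p = [F @ m for m in ms]
    Ps_p = [F @ P @ F.T + Q for P in Ps]
    return ws, ms_p, Ps_p                                   # weights unchanged, eq. (11)

def gm_update(ws, ms, Ps, z):
    new_w, new_m, new_P = [], [], []
    for w, m, P in zip(ws, ms, Ps):
        S = H @ P @ H.T + R                                 # eq. (18)
        K = P @ H.T @ np.linalg.inv(S)                      # eq. (17)
        nu = z - H @ m
        lik = np.exp(-0.5 * nu @ np.linalg.inv(S) @ nu) / \
              np.sqrt((2 * np.pi) ** len(z) * np.linalg.det(S))
        new_w.append(w * float(lik))                        # mixture weight update
        new_m.append(m + K @ nu)                            # eq. (15)
        new_P.append((np.eye(2) - K @ H) @ P)               # eq. (16)
    new_w = np.array(new_w) / np.sum(new_w)
    keep = new_w > 1e-4                                     # prune negligible components
    return new_w[keep], [m for m, k in zip(new_m, keep) if k], \
           [P for P, k in zip(new_P, keep) if k]

ws = np.array([0.5, 0.5])
ms = [np.array([0.0, 0.0]), np.array([2.0, 0.0])]
Ps = [np.eye(2), np.eye(2)]
ws, ms, Ps = gm_update(*gm_predict(ws, ms, Ps), np.array([1.4]))
print(ws, ms[0])
```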
E. The Particle Filter
The particle or sequential Monte Carlo (SMC) method is
a class of approximate numerical solutions to the Bayes re-
cursion that are applicable to nonlinear non-Gaussian dynamic
and observation models. The basis of the particle method is the
use of random samples (particles) to approximate probability
distributions of interest [4], [21], [44]–[46], [55], [126].
Consider N independently and identically distributed (i.i.d.) samples {x^(i)}_{i=1}^{N} from an arbitrary probability density p of x. For any function h of x, the (finite) expectation of h can be approximated by the empirical expectation, i.e.,

∫ h(x) p(x) dx ≈ (1/N) Σ_{i=1}^{N} h(x^(i)).

The empirical expectation is unbiased and tends to the true expectation almost surely as N tends to infinity. Moreover, the rate of convergence is not dependent on the dimension of the integral, but primarily on N, the number of independent samples. Hence, we can regard the samples {x^(i)}_{i=1}^{N} as a point mass approximation of p, i.e.,

p(x) ≈ (1/N) Σ_{i=1}^{N} δ(x − x^(i)),

where δ denotes the Dirac delta.
Now consider the case where the density p is only known up to a normalizing constant, i.e. p(x) ∝ p̃(x), such as in the Bayes recursion where the normalizing constant is difficult to compute. Since it is difficult to sample from p, we draw N i.i.d. samples {x^(i)}_{i=1}^{N} from a known density q, referred to as the proposal or importance density, and then weight these samples accordingly so as to obtain a weighted point mass approximation to p. More concisely, for any function h, the (finite) expectation of h can be approximated by the empirical expectation, i.e.

∫ h(x) p(x) dx ≈ Σ_{i=1}^{N} w^(i) h(x^(i)),

where

w^(i) = w̃(x^(i)) / Σ_{j=1}^{N} w̃(x^(j)),    w̃(x^(i)) = p̃(x^(i)) / q(x^(i)),

are known as the normalized importance weights and the importance weights respectively. A “good” proposal is one such that the weights {w^(i)}_{i=1}^{N} all have roughly the same value. For this so-called importance sampling approximation, the empirical expectation is biased. Nonetheless, it still tends to the true expectation almost surely as N tends to infinity. Hence, we can regard the weighted samples {(w^(i), x^(i))}_{i=1}^{N} as a weighted point mass approximation of p, i.e.,

p(x) ≈ Σ_{i=1}^{N} w^(i) δ(x − x^(i)).
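A short self-normalized importance sampling example is sketched below; the unnormalized target density, proposal and test function are assumptions chosen only to illustrate the weight construction above.

```python
import numpy as np

# Self-normalized importance sampling: approximate E_p[h(x)] when the target
# density p is known only up to a constant (p_tilde below).
rng = np.random.default_rng(0)
N = 10000

def p_tilde(x):                      # unnormalized target: a bimodal density
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

def q_pdf(x):                        # proposal density q ~ N(0, 3^2)
    return np.exp(-0.5 * (x / 3.0) ** 2) / (3.0 * np.sqrt(2.0 * np.pi))

x = rng.normal(0.0, 3.0, size=N)     # draw i.i.d. samples from q
w_tilde = p_tilde(x) / q_pdf(x)      # importance weights
w = w_tilde / w_tilde.sum()          # normalized importance weights

h = lambda x: x                      # test function h(x) = x
print("IS estimate of E_p[x]:", np.sum(w * h(x)))
```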
The key operation in particle filtering is the sequential
application of importance sampling to recursively approximate
the posterior. This is known as sequential importance sampling
(SIS) [55], [4], [44], [45], [126] and is described as follows:
Suppose that the posterior density p_{0:k−1}, at time k−1, is represented as a set of weighted particles {(w^(i)_{k−1}, x^(i)_{0:k−1})}_{i=1}^{N}, i.e.,

p_{0:k−1}(x_{0:k−1} | z^{k−1}) ≈ Σ_{i=1}^{N} w^(i)_{k−1} δ(x_{0:k−1} − x^(i)_{0:k−1}),

and that we are given a proposal density q_k(·| x^(i)_{k−1}, z_k) that we can easily sample from. Then the posterior density p_{0:k}, at time k, is represented as a new set of weighted particles {(w^(i)_k, x^(i)_{0:k})}_{i=1}^{N}, i.e.,

p_{0:k}(x_{0:k} | z^k) ≈ Σ_{i=1}^{N} w^(i)_k δ(x_{0:k} − x^(i)_{0:k}),

where

x^(i)_{0:k} = (x^(i)_{0:k−1}, x^(i)_k),
x^(i)_k ∼ q_k(·| x^(i)_{k−1}, z_k),
w^(i)_k = w̃^(i)_k / Σ_{j=1}^{N} w̃^(j)_k,
w̃^(i)_k = w^(i)_{k−1} g_k(z_k | x^(i)_k) f_{k|k−1}(x^(i)_k | x^(i)_{k−1}) / q_k(x^(i)_k | x^(i)_{k−1}, z_k).

The selection of optimal proposals, along with practical strategies for constructing good proposals, is considered in [44], [80], [118]. If we are only interested in the filtering density then only the most recent component of the samples is kept, i.e. the filtering density is represented by the weighted samples {(w^(i)_k, x^(i)_k)}_{i=1}^{N}.
The basic SIS algorithm suffers from particle depletion
or degeneracy where the variance of the importance weights
increases over time, thereby degrading the quality of the particle approximation. Particle depletion is generally mitigated by resampling the weighted particles {(w^(i)_k, x^(i)_{0:k})}_{i=1}^{N} to generate more replicas of particles with high weights and eliminate those with low weights [55]. There are many resampling schemes available, and the choice of resampling scheme affects the computational load as well as the quality of the particle approximation, see for example [19], [43], [46], [107]. An additional Markov Chain Monte Carlo (MCMC) step can then be used to rejuvenate particle diversity [25], [52] if necessary. Relevant convergence results for particle filtering can be found in [32], [41].
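The following bootstrap particle filter sketch combines SIS (with the transition density as proposal) and resampling triggered by a low effective sample size; the scalar nonlinear model, thresholds and multinomial resampling choice are illustrative assumptions.

```python
import numpy as np

# Bootstrap particle filter: SIS with the transition density as proposal, plus
# resampling when the effective sample size drops.
rng = np.random.default_rng(1)
N = 500

def propagate(x):                                   # draw from f_{k|k-1}(.|x), assumed
    return 0.9 * x + np.sin(x) + rng.normal(0.0, 0.5, size=x.shape)

def likelihood(z, x):                               # g_k(z|x) with z ~ N(x^2/20, 1)
    return np.exp(-0.5 * (z - x**2 / 20.0) ** 2)

particles = rng.normal(0.0, 2.0, size=N)            # samples from the initial prior
weights = np.full(N, 1.0 / N)

for z in [0.1, 0.4, 1.3]:                           # assumed measurement sequence
    particles = propagate(particles)                # bootstrap proposal q = f_{k|k-1}
    weights *= likelihood(z, particles)             # weight update reduces to g_k(z|x)
    weights /= weights.sum()

    n_eff = 1.0 / np.sum(weights**2)                # effective sample size
    if n_eff < N / 2:                               # resample when depletion sets in
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

print("filtered mean estimate:", np.sum(weights * particles))
```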
Various extensions of the particle filtering methodol-
ogy have been proposed to improve performance. Rao-
Blackwellization techniques can be incorporated with the
particle filter (PF) [45], [126] to improve performance for
particular classes of state space models, e.g. the Mixture
Kalman Filter (MKF) [24]. The underlying idea is to partition
the state vector into a linear Gaussian component and a
nonlinear non-Gaussian component. Then, the former is solved
analytically using a Kalman filter and the latter with a particle
filter so that the computational effort is appropriately focused.
Continuous approximations to the posterior density can be
obtained with kernel smoothing techniques. Examples of this
approach are the convolution or regularized particle filter
[45], [126]. Related approaches are the Gaussian particle and
Gaussian sum particle filters [74], [75].
F. Filtering Algorithms for Maneuvering Targets
The filtering algorithms discussed previously use a single
dynamic model and hence are known as single-model filters.
The motion of a maneuvering target involves multiple dy-
namic models. For example, an aircraft can fly with a nearly
constant velocity motion, accelerated/decelerated motion, and
coordinated turn [8], [10]. The multiple model approach is an
effective filtering algorithm for maneuvering targets in which
the continuous kinematic state and discrete mode or model
are estimated. This class of problems is known as jump Markov or hybrid state estimation problems. The discrete-time dynamic and measurement models for the hybrid state estimation problem [8], [10], [126] are given, respectively, by

x_k = f_{k,k−1}(x_{k−1}, μ_k, v_{k−1}),
z_k = g_k(x_k, μ_k, w_k),

where μ_k is the mode in effect from time k−1 to k. The interacting multiple model (IMM) and variable-structure IMM (VS-IMM) estimators [8], [10], [77], [79], [104] are two well known filtering algorithms for maneuvering targets. The number of modes in the IMM is kept fixed, whereas in the VS-IMM the number of modes is adaptively selected from a fixed set of modes for improved estimation accuracy and computational efficiency.
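A compact sketch of one IMM cycle with two linear Gaussian modes is given below; the model matrices, mode transition probabilities and measurement values are illustrative assumptions, and the per-mode filters are ordinary Kalman filters.

```python
import numpy as np

# IMM sketch: mixing, mode-matched Kalman filtering, and mode probability update.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
Qs = [0.01 * np.eye(2), 1.0 * np.eye(2)]            # mode-dependent process noise
PI = np.array([[0.95, 0.05], [0.05, 0.95]])         # mode transition probabilities

def kf(m, P, Q, z):
    m, P = F @ m, F @ P @ F.T + Q                   # per-mode prediction
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    nu = z - H @ m
    lik = float(np.exp(-0.5 * nu @ np.linalg.inv(S) @ nu) /
                np.sqrt(2 * np.pi * np.linalg.det(S)))
    return m + K @ nu, (np.eye(2) - K @ H) @ P, lik

mu = np.array([0.5, 0.5])                           # mode probabilities
ms = [np.zeros(2), np.zeros(2)]
Ps = [np.eye(2), np.eye(2)]

for z in [np.array([0.2]), np.array([1.5]), np.array([4.0])]:
    # Mixing step: mixed initial condition for each mode.
    c = PI.T @ mu                                   # predicted mode probabilities
    mix = (PI * mu[:, None]) / c[None, :]           # mix[i, j] = Pr(mode i | mode j)
    ms_mix = [sum(mix[i, j] * ms[i] for i in range(2)) for j in range(2)]
    Ps_mix = [sum(mix[i, j] * (Ps[i] + np.outer(ms[i] - ms_mix[j], ms[i] - ms_mix[j]))
                  for i in range(2)) for j in range(2)]
    # Mode-matched Kalman filtering and mode probability update.
    out = [kf(ms_mix[j], Ps_mix[j], Qs[j], z) for j in range(2)]
    ms, Ps = [o[0] for o in out], [o[1] for o in out]
    mu = c * np.array([o[2] for o in out])
    mu /= mu.sum()

# Combined output estimate (moment matching over modes).
print(sum(mu[j] * ms[j] for j in range(2)), mu)
```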
III. MULTITARGET TRACKING
This section provides some background on the MTT prob-
lem and the main challenges, setting the scene for the rest of
the article.
A. Multitarget Systems
Driven by aerospace applications, MTT was originally de-
veloped for tracking targets from radar measurements. Fig. 1
shows a typical scenario describing the measurements by a
radar in which five true targets are present in the radar dwell
volume (the volume of the measurement space sensed by a
sensor at a scan time) and six measurements are collected
by the radar. We see from Fig. 1 that three target-originated
measurements and three false alarms (FAs) are generated, one
target is not detected by the radar, and two closely spaced
targets are not resolved. This type of information regarding the
nature and origin of measurements is not known for real radar
measurements due to measurement origin uncertainty. At each
discrete dwell/scan time tj, a set of noisy radar measurements
with measurement origin uncertainty is sent to a tracker, as
shown in Fig. 2.
7ZRWDUJHWV
LQWKHVDPH
UHVROXWLRQFHOO
XQUHVROYHG
0LVVHG
GHWHFWLRQ
5DQJH
7UXHWDUJHW
)DOVHDODUP)$
'HWHFWLRQZLWK
PHDVHUURU
5DGDU
$]LPXWK
$UHVROXWLRQ
FHOO
;
<
PLQ
U
PD[
T
PLQ
T
'ZHOO
YROXPH
PLQ PD[
PLQ PD[
 
 
UU
TT
]
]
]
]
]
]
PD[
U
Fig. 1. A typical radar measurement scenario.
[Fig. 2. Varying number of noisy radar measurements in dwells: measurements from targets and clutter at successive dwell times t_1, ..., t_k.]
In a general multitarget system, not only do the states of the
targets vary with time, but the number of targets also changes
due to targets appearing and disappearing as illustrated in Fig.
3. The targets are observed by a sensor (or sensors) such as
radar, sonar, electro-optical, infrared, camera etc. The sensor
signals at each time step are preprocessed into a set of points
or detections. It is important to note that existing targets may
not be detected and that FAs (due to clutter) may occur. As
a result, at each time step the multitarget observation is a set
of detections, only some of which are generated by targets
and there is no information on which targets generated which
detections (see Fig. 3).
REVHUYDWLRQVHW
SURGXFHGE\WDUJHWV
VWDWHVHW
WDUJHWPRWLRQ
VWDWHVSDFH
REVHUYDWLRQVSDFH
WDUJHWV WDUJHWV
;N
;N
Fig. 3. Multiple-target system model: the number of targets changes from 5
to 3, targets generate at each time a random number of measurements.
Most MTT algorithms assume a standard multitarget transition model, in which each existing target with state x_{k−1} at time k−1 either continues to exist at time k with probability P_{S,k|k−1}(x_{k−1}) and moves to a new state x_k with probability density f_{k|k−1}(x_k | x_{k−1}), or dies with probability 1 − P_{S,k|k−1}(x_{k−1}). In addition, a random number of new targets can appear from random locations in the state space at time k. Each target is assumed to appear and evolve independently of the others. Different multitarget tracking approaches employ different models for target births and deaths.
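The following sketch simulates this standard multitarget transition model for a few time steps; the survival probability, linear Gaussian motion model, Poisson birth model and birth region are illustrative assumptions.

```python
import numpy as np

# Simulation of the standard multitarget transition model: survival with
# probability P_S, Markov motion for survivors, and Poisson births.
rng = np.random.default_rng(2)
P_S = 0.95                                          # survival probability (assumed)
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = 0.05 * np.eye(2)
birth_rate = 0.2                                    # expected births per step (assumed)

def transition(targets):
    survivors = []
    for x in targets:
        if rng.random() < P_S:                      # target survives ...
            survivors.append(rng.multivariate_normal(F @ x, Q))  # ... and moves
    n_birth = rng.poisson(birth_rate)               # new targets appear
    births = [rng.uniform([-10.0, -1.0], [10.0, 1.0]) for _ in range(n_birth)]
    return survivors + births

targets = [np.array([0.0, 1.0]), np.array([5.0, -0.5])]
for k in range(3):
    targets = transition(targets)
    print(f"time {k + 1}: {len(targets)} targets")
```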
In a standard multitarget observation model, each target with state x_k at time k is either detected with probability P_{D,k}(x_k) and generates an observation z_k with likelihood g_k(z_k | x_k), or missed with probability 1 − P_{D,k}(x_k). In addition to the detections, the tracker also receives a random number of FAs from random locations in the measurement space. It is assumed that each target generates observations independently of other targets and of the FAs, and that each detection can be generated by at most one target. The standard multitarget observation model is the most widely used. Other models include: the merged or unresolved measurement model [23], [71], [17], [91], [150], [14], where two or more targets can share a detection; the extended target/group measurement model, where each target/group can generate multiple detections [51], [72], [90], [114], [53], [48], [56], [81], [108]; the track-before-detect/image measurement model [136], [39], [160], [62], [63], [49]; and the superpositional measurement model [87], [111], where the observed signal is a superposition of observations generated by each of the targets present. This article only considers the standard multitarget measurement model.
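A simulation sketch of this standard observation model is given below; the detection probability, measurement noise, clutter rate and surveillance region are illustrative assumptions.

```python
import numpy as np

# Simulation of the standard multitarget observation model: detections with
# probability P_D plus uniformly distributed, Poisson-distributed false alarms.
rng = np.random.default_rng(3)
P_D = 0.9                                           # detection probability (assumed)
R = 0.1 * np.eye(2)                                 # measurement noise covariance
clutter_rate = 5.0                                  # expected number of false alarms
region = np.array([[-10.0, 10.0], [-10.0, 10.0]])   # surveillance region (x and y)

def observe(target_positions):
    measurements = []
    for p in target_positions:
        if rng.random() < P_D:                      # target detected
            measurements.append(rng.multivariate_normal(p, R))
    for _ in range(rng.poisson(clutter_rate)):      # false alarms
        measurements.append(rng.uniform(region[:, 0], region[:, 1]))
    rng.shuffle(measurements)                       # measurement origin is unknown
    return measurements

Z = observe([np.array([1.0, 2.0]), np.array([-3.0, 0.5])])
print(len(Z), "measurements received")
```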
B. The MTT Problem
The objective of MTT is to jointly estimate, at each obser-
vation time, the number of targets and their trajectories from
sensor data. Even at a conceptual level, MTT is a non-trivial
extension of single-target tracking. Indeed MTT is far more
complex in both theory and practice.
The concept of estimation error between a reference quantity and its estimated value plays a fundamental role in any estimation problem. In (single-target) filtering the system state is a vector and the notion of state estimation error is taken for granted. For example, the EAP estimator minimizes the expected squared Euclidean distance ‖x̂ − x‖² between the estimated state vector x̂ and the true state vector x. However, the concept of Euclidean distance is not suitable for the multitarget case. To see this, consider the scenario depicted in Fig. 4. Suppose that the multitarget state is formed by stacking individual states into a single vector, with the ground truth represented by X and the estimate represented by X̂. The estimate is correct as a set of targets, but because the individual states are stacked in a different order the Euclidean distance is nonzero (‖X̂ − X‖ = 2 in the depicted example). Moreover, when the estimated number of targets is different from the true number, the Euclidean distance is not even defined.
[Fig. 4. A possible vector representation of multi-target states when the estimated and true multi-target states have the same number of targets: the true multitarget state X and the estimated multitarget state X̂ stack the same individual target states in different orders.]
Central to Bayesian state estimation is the concept of Bayes
risk/optimality [70], [128]. A Bayes optimal solution is not
simply one that invokes Bayes rule. Criteria for optimality for
the single-target case, such as the squared Euclidean distance, are not appropriate. In addition, the concept of consistency (of an estimator) cannot be taken for granted, since it is not clear what the appropriate notion of convergence is in the multitarget realm.
From a practical point of view, MTT is not a simple
extension of classical (single-target) filtering. Even for the
simple special case with exactly one target in the scene,
classical filtering methods (described in Section II) cannot
be directly applied due to false detection, missed detection,
and measurement origin uncertainty. The simplest solution
is the nearest neighbor (NN) filter which applies the Bayes
filter to the measurement that is closest to the predicted
measurement [7], [10], [15]. A more sophisticated yet intu-
itively appealing solution is the Probabilistic Data Association
filter (PDAF) which applies the Bayes filter to the average
of all measurements weighted according to their association
probabilities [7], [10]. The solution based on enumerating
association hypotheses, proposed in [141], coincides with the
Bayes optimal filter in the presence of false detections, missed
detections, and measurement origin uncertainty proposed in
[158]. In the multitarget setting, even for the special case
where all targets are detected and no false detections occur,
classical filtering methods are not directly applicable since
there is no information on which target has generated which
measurements.
The simplest multitarget filter is the global nearest neighbor
(GNN) tracker, an extension of the NN filter to the multiple tar-
get case. The GNN tracker searches for the unique joint asso-
ciation of measurements to targets that minimizes/maximizes
a total cost, such as a total distance or likelihood. The GNN
filter then performs standard Bayes filtering for each target
using these associated measurements directly. Although the
GNN scheme is intuitively appealing and simple to implement,
it is susceptible to track loss and consequently exhibits poor
performance when targets are not well separated [15].
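As an illustration of the GNN idea (not a specific implementation from the literature), the sketch below builds a cost matrix of squared Mahalanobis distances between predicted and received measurements and solves the resulting 2D assignment; the predicted measurements, innovation covariances and gate threshold are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# GNN-style association: assignment minimizing the total statistical distance.
pred_z = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]      # predicted measurements
S = [np.eye(2), 2.0 * np.eye(2)]                           # innovation covariances
Z = [np.array([0.3, -0.2]), np.array([5.4, 4.8]), np.array([9.0, 1.0])]  # incl. clutter

cost = np.zeros((len(pred_z), len(Z)))
for t, (zhat, St) in enumerate(zip(pred_z, S)):
    for j, z in enumerate(Z):
        nu = z - zhat
        cost[t, j] = nu @ np.linalg.inv(St) @ nu           # squared Mahalanobis distance

rows, cols = linear_sum_assignment(cost)                   # best joint assignment
gate = 9.21                                                # ~99% chi-square gate, 2 dof
for t, j in zip(rows, cols):
    if cost[t, j] < gate:
        print(f"track {t+1} <- measurement {j+1} (cost {cost[t, j]:.2f})")
    else:
        print(f"track {t+1}: missed detection (best cost {cost[t, j]:.2f})")
```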
The JPDAF [7], [10] is an extension of the PDAF to a
fixed and known number of targets. The JPDAF uses joint
association events and joint association probabilities in order
to avoid conflicting measurement to track assignments in the
presence of multiple targets. The complexity of the calculation
for joint association probabilities grows exponentially with the
number of targets and the number of measurements. Several
approximation approaches have been proposed such as the
deterministic strategies in [130], [131], [103], [61], [12], [169],
[132] and the Markov Chain Monte Carlo (MCMC) based
strategies in [112]. Moreover, since the basic JPDAF can
only accommodate a fixed and known number of targets,
several novel extensions have been proposed to accommodate
an unknown and time varying number of targets, such as the
joint integrated PDAF (JIPDAF) [109] along with an efficient
implementation [110], and automatic track formation (ATF)
[6]. Further detail on the JPDAF is given in Section IV.
MHT [123], [76], [15], [16], [10], [101] is a deferred
decision approach to data association based MTT. At each ob-
servation time, the MHT algorithm attempts to propagate and
maintain a set of association hypotheses with high posterior
probability or track score. When a new set of measurements
arrives, a new set of hypotheses is created from the existing
hypotheses and their posterior probabilities or track scores are
updated using Bayes rule. In this way, the MHT approach
inherently handles initiation and termination of tracks, and
hence accommodates an unknown and time-varying number
of targets. Based on the best hypothesis, a standard Bayes
(or Kalman when the models are linear Gaussian) filter can
be used on the measurements in each track to estimate the
trajectories of individual targets. The total number of possible
hypotheses increases exponentially with time and heuristic
pruning/merging of hypotheses is performed to reduce com-
putational requirements. Further details on the MHT approach
are given in Section V.
Related deferred decision approaches, based on an intu-
itive and explicit formulation with hidden Markov models
and subsequent state estimation by application of the Viterbi
algorithm, can also be found in [171]. An innovative and
completely different approach proposed in [112] casts the
problem of finding the hypothesis with the highest posterior
probability as a combinatorial optimization problem, which
is solved using reversible jump Markov Chain Monte Carlo
(RJ-MCMC) techniques in order to generate samples from the
posterior density.
The probabilistic MHT (PMHT) is a tractable approach
that operates over several frames, reducing the complexity by
formulating the problem as one of maximum likelihood esti-
mation and applying Expectation-Maximization (EM) [146],
[168], [33]. The computation is simplified at a sacrifice in
performance, by removing the requirement for each target to
have a single measurement. An efficient implementation of
PMHT termed the turbo PMHT was proposed in [133] based
on the idea of turbo coding, which exhibits good tracking per-
formance with a very low computational complexity. PMHT
with track maintenance is described in [38].
The RFS approach retains the same Bayesian estimation methodology as in the single-target case (Section II) by representing the multitarget state as a finite set [86], [96], which admits suitable distances between multitarget states (see [60], [140]). This framework provides appropriate notions of multitarget probability density that enable concepts such as the state space model, Bayes recursion, and Bayes optimality to be directly translated to the multitarget case. Moreover, it covers more complex multitarget tracking problems such as non-Poisson, non-homogeneous FAs, state-dependent probability of detection [93], [95], extended targets [86], [56], merged measurements [14], non-standard measurements (including image, fuzzy and Dempster-Shafer) [86], distributed multitarget tracking [11], etc. under one single umbrella without any ad hoc modifications [96]. Further detail on the RFS approach is given in Section VI.
Two types of tracking architectures, centralized and dis-
tributed, are used in multisensor multitarget tracking (MTT)
[10], [15], [101]. This article only addresses centralized track-
ing. The three data association based MTT algorithms GNN,
JPDAF, MHT [10], [15], [101], have been widely used for
more than three decades, while the RFS-based algorithms [86],
[96] developed during the last decade have received consid-
erable interest. The computational cost of the MHT is much
higher than that of the GNN or JPDAF. Numerous studies
have shown that the MHT works significantly better than the
GNN and JPDAF for tracking scenarios with low signal-to-
noise ratio (SNR) and closely spaced targets [15, Section
6.8.1]. Recent independent studies [147], [149] demonstrated
that a sub-optimal RFS-based filter called the cardinalized
probability hypothesis density filter [85], [156] has comparable
performance to MHT with much lower computational cost.
IV. JOINT PROBABILISTIC DATA ASSOCIATION FILTER
List of mathematical symbols:
N_T            number of targets
Z_k            vector of observations at time k
Z^k            (Z_1, Z_2, ..., Z_k)
θ_{j t_j}      event that measurement j originated from target t_j
θ              joint association event [θ_{j t_j}], j = 1, ..., m
Ω              validation matrix: [ω_{jt}], j = 1, ..., m; t = 0, 1, ..., N_T
Ω̂(θ)           event matrix: [ω̂_{jt}(θ)], j = 1, ..., m; t = 0, 1, ..., N_T
δ_t(θ)         detection indicator for event θ
δ(θ)           vector of target detection indicators for event θ
τ_j(θ)         measurement association indicator for event θ
φ(θ)           number of false measurements in event θ
μ_F(φ)         prior pmf of the number of false measurements
V              volume of the surveillance region
λ              spatial density of false measurements
m_k            number of measurements in the union of the validation regions
P_D^t          detection probability of target t
j_t(θ)         index of the measurement associated with target t in event θ
β_{jt}         marginal association probability
A. Overview
The joint probabilistic data association filter (JPDAF) is the
multitarget extension of the probabilistic data association filter
(PDAF) for single target tracking [10].
1) Assumptions:
• There is a known number of established targets N_T in clutter.
• Measurements from one target can fall in the validation region of a neighboring target — this can happen over several sampling times and acts as a persistent interference.
• The past is summarized by an approximate sufficient statistic — state estimates (approximate conditional means) and covariances for each target.
• The states are assumed Gaussian distributed with the above means and covariances.
• The models for the various targets do not have to be the same.
• The targets are resolved — there are no unresolved (merged) measurements.
2) The Approach:
• The measurement to target association probabilities are computed jointly across the targets.
• The association probabilities are computed only for the latest set (scan) of measurements.1
• The state estimation is done
  – separately for each target as in the PDAF (decoupled), or
  – in a coupled manner using a stacked state vector in the JPDA coupled filter (JPDACF) (see [10] for details).
B. The Key Feature of the JPDAF
The key feature of the JPDAF is the evaluation of the conditional probabilities of the joint association events θ pertaining to the current time k (the time index k is omitted for simplicity where it does not cause confusion), each consisting of the events θ_{j t_j} that measurement j originated from target t_j, j = 1, ..., m, t_j = 0, 1, ..., N_T, where t_j is the index of the target to which measurement j is associated in the event under consideration.
1) Remark: For the purpose of deriving the joint probabilities, no individual validation gates will be assumed for the various targets. Instead, each measurement will be assumed validated for each target, i.e., every validation gate coincides with the entire surveillance region.
This approach is adopted in order to have the pdf of each false measurement the same, i.e., uniformly distributed in the entire validation region.
C. The Feasible Joint Events
Validation gates are used for the selection of the feasible
joint events but not in the evaluation of their probabilities.
This logic avoids considering events whose probabilities are negligible and thus have a negligible effect on the other probabilities.
1This is in view of the fact that, if a sufficient statistic is available, then there is no need to consider the past (previous measurements). However, it should be recalled that the Gaussian sufficient statistic is an approximation.
1) The Validation Matrix: Define the validation matrix

Ω = [ω_{jt}],  j = 1, ..., m;  t = 0, 1, ..., N_T    (19)

with binary elements that indicate if measurement j lies in the validation gate of target t. The index t = 0 stands for “none of the targets” and the corresponding column of Ω has all units since each measurement could have originated from clutter or false alarm.
2) The Event Matrix: A joint association event θ is represented by the event matrix

Ω̂(θ) = [ω̂_{jt}(θ)]    (20)

consisting of the units in Ω corresponding to the associations in θ, with ω̂_{jt}(θ) = 1 if θ_{jt} ⊂ θ and 0 otherwise.
A feasible association event is one where
(i) a measurement can have only one source,
(ii) at most one measurement can originate from a target, for which the detection indicator is denoted as δ_t(θ).
3) Generation of the Feasible Joint Association Events: The generation of the event matrices Ω̂ corresponding to feasible events can be done by scanning Ω and picking
(i) one unit per row, and
(ii) one unit per column except for t = 0, where the number of units (which is the number of false measurements) is not restricted.
The binary variable δ_t(θ) is called the target detection indicator since it is unity if one of the m measurements is associated to target t in event θ, i.e., target t has been detected. It is also convenient to define another binary variable, called the measurement association indicator τ_j(θ), to indicate if measurement j is associated with a target in event θ.
With this definition, the number of false (unassociated) measurements in event θ is

φ(θ) = Σ_{j=1}^{m} [1 − τ_j(θ)].    (21)
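For illustration, the following sketch enumerates the feasible joint association events encoded by a small validation matrix, applying rules (i) and (ii) above; the example matrix is an assumption.

```python
import itertools

# Enumerate feasible joint association events from a validation matrix Ω:
# each measurement has one source (a gated target or t = 0 for clutter), and
# each target receives at most one measurement.
# omega[j][t] for t = 0, 1, ..., N_T (column t = 0 is always 1).
omega = [[1, 1, 1],      # measurement 1 gates with targets 1 and 2
         [1, 1, 0],      # measurement 2 gates with target 1 only
         [1, 0, 1]]      # measurement 3 gates with target 2 only
m, NT = len(omega), len(omega[0]) - 1

feasible_events = []
# Candidate sources for each measurement j: clutter (0) or any gated target.
choices = [[t for t in range(NT + 1) if omega[j][t]] for j in range(m)]
for assignment in itertools.product(*choices):
    targets_used = [t for t in assignment if t > 0]
    if len(targets_used) == len(set(targets_used)):       # at most one meas. per target
        feasible_events.append(assignment)

for theta in feasible_events:
    print(theta)       # theta[j] = source of measurement j+1 (0 means false alarm)
print(len(feasible_events), "feasible joint association events")
```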
D. Evaluation of the Joint Probabilities
The joint association event probabilities are, with Bayes' formula,

Pr(θ_k | Z^k) = (1/c) p(Z_k | θ_k, m_k, Z^{k−1}) Pr(θ_k | m_k),    (22)

where c is the normalization constant.
1) Assumption: The states of the targets conditioned on the past observations are mutually independent.2
2) The Likelihood Function of a Joint Association Event: The likelihood function of the joint association event on the r.h.s. of (22) is

p(Z_k | θ_k, m_k, Z^{k−1}) = Π_{j=1}^{m_k} p(z_{j,k} | θ_{j t_j}, Z^{k−1}),    (23)
2This assumption can be relaxed, and results in the JPDACF [10].
where m_k is the number of measurements in the union of the validation regions at time k. The product form of (23) follows from the above assumption.
The conditional pdf of a measurement given its origin is

p(z_{j,k} | θ_{j t_j}, Z^{k−1}) = { f_{t_j}(z_{j,k})   if τ_j(θ_k) = 1,
                                    V^{−1}             if τ_j(θ_k) = 0,    (24)

where

f_{t_j}(z_{j,k}) = N(z_{j,k}; ẑ^{t_j}_{k|k−1}, S^{t_j}_k)    (25)

and ẑ^{t_j}_{k|k−1} is the predicted measurement for target t_j, with associated innovation covariance S^{t_j}_k.
Measurements not associated with a target are assumed uniformly distributed in the surveillance region of volume V. Using (24), the pdf (23) can be written as follows

p(Z_k | θ_k, m_k, Z^{k−1}) = V^{−φ} Π_j [f_{t_j}(z_{j,k})]^{τ_j}.    (26)

In the above, V^{−1} is raised to the power φ(θ_k), the total number of false measurements in event θ_k, and the indicators τ_j(θ) select the single measurement densities according to their associations in event θ_k.
3) The Prior Probability of a Joint Association Event: The prior (to time k) probability of an event θ_k, the last term in (22), is obtained next. Denote by δ(θ) the vector of target detection indicators corresponding to event θ_k.
The joint probability can be written as

Pr(θ_k | m_k) = Pr(θ_k | δ(θ), φ(θ), m_k) · Pr(δ(θ), φ(θ) | m_k).    (27)

The first term on the r.h.s. of the above is obtained from the following reasoning based on combinatorics:
(i) In event θ_k the set of targets assumed detected consists of m_k − φ targets.
(ii) The number of measurement to target assignment events θ_k in which the same set of targets is detected is given by the number of permutations of the m_k measurements taken as m_k − φ, the number of targets to which a measurement is assigned under the same detection event.
Therefore, assuming each such event a priori equally likely, one has

Pr(θ_k | δ(θ), φ(θ), m_k) = [ m_k! / φ! ]^{−1}.    (28)

After some manipulations [34] and assuming δ and φ independent, the last term in (27) becomes

Pr(δ(θ), φ(θ) | m_k) = Π_t (P_D^t)^{δ_t} (1 − P_D^t)^{1−δ_t}  μ_F(φ) / Pr(m_k),    (29)

where P_D^t is the detection probability of target t and μ_F(φ) is the prior pmf of the number of false measurements (the clutter model). The indicators δ_t(θ) have been used in (29) to select the probabilities of detection and no detection events according to the event θ_k under consideration. The term Pr(m_k) in (29) will be absorbed in the normalization constant in (31).
Combining (28) and (29) into (27) yields the prior probability of a joint association event θ_k as

Pr(θ_k | m_k) = [ φ! μ_F(φ) / (m_k! Pr(m_k)) ] Π_t (P_D^t)^{δ_t} (1 − P_D^t)^{1−δ_t}.    (30)
4) The Posterior Probability of a Joint Association Event: Combining (26) and (30) into (22) yields the posterior probability of a joint association event θ_k as

Pr(θ_k | Z^k) = (1/c) [ φ! / m_k! ] μ_F(φ) V^{−φ} Π_j [f_{t_j}(z_{j,k})]^{τ_j} · Π_t (P_D^t)^{δ_t} (1 − P_D^t)^{1−δ_t},    (31)

where φ, δ_t and τ_j are all functions of the event θ_k under consideration.
The above still needs the specification of the pmf of the number of false measurements μ_F(φ), carried out in the next section.
E. The Parametric and Nonparametric JPDAF
As in the case of the PDAF, the JPDAF has two versions, according to the model used for the pmf μ_F(φ) of the number of false measurements.
1) The Parametric JPDAF: The parametric JPDAF uses the Poisson pmf μ_F(φ) with parameter λV, which requires the spatial density λ of the false measurements.
Using the Poisson pmf in (31) leads to the cancellation of V^{−φ} and φ!. Furthermore, each term contains e^{−λV} and m_k!, which also cancel since they appear in the denominator c of (31), which is the sum of all the numerators.
Thus the joint association probabilities of the parametric JPDAF are

Pr(θ_k | Z^k) = (λ^φ / c₁) Π_j [f_{t_j}(z_{j,k})]^{τ_j} Π_t (P_D^t)^{δ_t} (1 − P_D^t)^{1−δ_t},    (32)

where c₁ is the appropriate normalization constant.
Since m_k is a fixed number, the joint association probabilities can be rewritten as

Pr(θ_k | Z^k) = [ 1 / (c₁ λ^{m_k}) ] Π_j [λ^{−1} f_{t_j}(z_{j,k})]^{τ_j} · Π_t (P_D^t)^{δ_t} (1 − P_D^t)^{1−δ_t}    (33)

by defining a new normalization constant.
Each term in the first product above is the likelihood ratio of the corresponding measurement having originated from a particular target vs. from clutter. The denominator of these likelihood ratios is the spatial density of the clutter, which plays the role of the pdf of clutter originated measurements. This is a consequence of the Poisson prior.
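A small numerical sketch of the parametric JPDAF joint association probabilities (32) is given below; the two-target, two-measurement scenario, detection probabilities and clutter density are illustrative assumptions, and normalization is performed by summing over the enumerated feasible events.

```python
import numpy as np
from itertools import product
from scipy.stats import multivariate_normal

# Parametric JPDAF joint event probabilities, eq. (32), for a toy scenario.
P_D = [0.9, 0.9]                                  # detection probabilities P_D^t
lam = 1e-3                                        # spatial FA density λ
z_pred = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]   # predicted measurements
S = [np.eye(2), np.eye(2)]                        # innovation covariances
Z = [np.array([0.3, 0.1]), np.array([3.6, -0.2])] # received measurements

def f(t, z):                                      # f_{t_j}(z_{j,k}), eq. (25)
    return multivariate_normal.pdf(z, mean=z_pred[t], cov=S[t])

events, weights = [], []
for theta in product(range(3), repeat=len(Z)):    # theta[j] in {0 (FA), 1, 2}
    used = [t for t in theta if t > 0]
    if len(used) != len(set(used)):               # at most one measurement per target
        continue
    w = 1.0
    for j, t in enumerate(theta):                 # λ^φ Π_j f_{t_j}(z_j)^{τ_j}
        w *= lam if t == 0 else f(t - 1, Z[j])
    for t in range(2):                            # Π_t (P_D^t)^δ_t (1-P_D^t)^(1-δ_t)
        w *= P_D[t] if (t + 1) in theta else (1.0 - P_D[t])
    events.append(theta); weights.append(w)

weights = np.array(weights) / np.sum(weights)     # normalize over all feasible events
for theta, p in zip(events, weights):
    print(theta, round(float(p), 4))
```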
2) The Nonparametric JPDAF: The nonparametric JPDAF uses the diffuse prior

μ_F(φ) = ε,    (34)

which does not require the parameter λ.
With this, (31) becomes, after canceling the constant ε and m_k!, which appear in each expression,

Pr(θ_k | Z^k) = (1/c₃) (φ! / V^φ) Π_j [f_{t_j}(z_{j,k})]^{τ_j} · Π_t (P_D^t)^{δ_t} (1 − P_D^t)^{1−δ_t},    (35)

where c₃ is the appropriate normalization constant.
Similarly to the parametric case, the joint association probabilities can be rewritten as

Pr(θ_k | Z^k) = [ φ! / (c₃ V^{m_k}) ] Π_j [V f_{t_j}(z_{j,k})]^{τ_j} · Π_t (P_D^t)^{δ_t} (1 − P_D^t)^{1−δ_t}.    (36)

As can be seen from (35), the nonparametric JPDAF expressions contain a term that can be called the pseudo sample spatial measurement density, φ!/V^φ, in place of λ^φ in the parametric JPDAF.
F. The State Estimation
1) Assumption: The states of the targets conditioned on the past observations are mutually independent.3
In this case one needs the marginal association probabilities, which are obtained from the joint probabilities by summing over all the joint events in which the marginal event of interest occurs. This summation can be written as follows

β_{jt} := Σ_{θ: θ_{jt} ⊂ θ} Pr(θ | Z^k).    (37)

The state estimation equations are then exactly the same as in the standard PDAF.
2) Standard PDAF Estimation Equations: The PDAF updates the target state by combining the predicted state with the combined innovation multiplied by the filter gain W_k. The combined innovation is the sum of the individual innovations, weighted by the marginal association probabilities, i.e.,

ν_k = Σ_{i=1}^{m_k} β_{i,k} (z_{i,k} − ẑ_{k|k−1}).    (38)

The covariance associated with the updated state is

P_{k|k} = P_{k|k−1} + [β_{0,k} − 1] W_k S_k W′_k + P̃_k,    (39)

where S_k is the innovation covariance and the spread of the innovations term is4

P̃_k := W_k [ Σ_{i=1}^{m_k} β_{i,k} (z_{i,k} − ẑ_{k|k−1})(z_{i,k} − ẑ_{k|k−1})′ − ν_k ν′_k ] W′_k.    (40)
3Considering the targets’ states, given the past, as correlated — character-
ized by means, covariances as well as cross-covariances — leads to coupled
estimation for the targets under consideration — the JPDA Coupled Filter
(JPDACF) (See [10] Section 6.2.7 for details).
4This assumes that all the measurements have the same noise covariance
and, hence, the same filter gain. The generalization to the case where each
measurement has a different covariance is straightforward.
Since it is not known which of the validated measurements is correct, the term P̃_k, which is positive semidefinite, increases the covariance of the updated state — this is the effect of the measurement origin uncertainty.
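The following sketch illustrates how the marginal association probabilities (37) are obtained from joint event probabilities and then used in the PDAF-style update (38)-(40) for one target; the joint events, their probabilities and the filter quantities are illustrative assumptions, and the example works directly in measurement coordinates for brevity.

```python
import numpy as np

# Marginal association probabilities from joint events (37), then the PDAF
# update (38)-(40) for target t = 1.
events = [(0, 0), (1, 0), (0, 1), (2, 1)]          # events[e][j] = target for meas. j+1
probs = np.array([0.05, 0.40, 0.15, 0.40])         # Pr(θ | Z^k), assumed, normalized

def beta(j, t):                                    # eq. (37): sum over events with θ_jt
    return sum(p for theta, p in zip(events, probs) if theta[j] == t)

t = 1
Z = [np.array([0.3, 0.1]), np.array([3.6, -0.2])]  # measurements z_{i,k}
z_pred = np.array([0.0, 0.0])                      # predicted measurement for target 1
W = 0.5 * np.eye(2)                                # filter gain W_k (assumed)
S = np.eye(2)                                      # innovation covariance S_k
P_pred = np.eye(2)                                 # predicted covariance (assumed)

betas = np.array([beta(j, t) for j in range(len(Z))])
beta0 = 1.0 - betas.sum()                          # probability that none is correct
nus = [z - z_pred for z in Z]
nu_comb = sum(b * nu for b, nu in zip(betas, nus)) # combined innovation, eq. (38)

correction = W @ nu_comb                           # added to the predicted state
spread = W @ (sum(b * np.outer(nu, nu) for b, nu in zip(betas, nus))
              - np.outer(nu_comb, nu_comb)) @ W.T  # eq. (40)
P_upd = P_pred + (beta0 - 1.0) * W @ S @ W.T + spread   # eq. (39)
print(betas, beta0)
print(correction)
```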
G. A Modification of the JPDAF: Coupled Filtering
In Sections IV-D and IV-E the JPDAF was developed
assuming that, conditioned on the past, the target states (and,
thus, the target originated measurements) are independently
distributed. Consequently, the joint association was followed
by decoupled filtering of the targets’ states — this is an
approximation that simplifies the resulting algorithm.
For targets that “share” measurements (in the JPDAF sense)
for several sampling times, a dependence of their estimation
errors ensues and this can be taken into account by calculating
the resulting error correlations.
The resulting algorithm, called JPDA Coupled Filter (JP-
DACF), does the filtering in a coupled manner for the targets
with “common” measurements, yielding a covariance matrix
with off-diagonal blocks — cross-covariances — that reflect
the correlation between the targets’ state estimation errors.
The conditional probability for a joint association event (31) becomes

Pr(θ_k | Z^k) = (1/c) [ φ! μ_F(φ) / (m_k! V^φ) ] f_{t_{j1}, t_{j2}, ...}(z_{j,k}, j : τ_j = 1) · Π_t (P_D^t)^{δ_t} (1 − P_D^t)^{1−δ_t},    (41)

where f_{t_{j1}, t_{j2}, ...} is the joint pdf of the measurements of the targets under consideration; t_{j1} is the target to which z_{j1,k} is associated in event θ.
The joint probabilities are not reduced to the marginal
association probabilities as in (37) for use in decoupled PDA
filters. Instead, these joint probabilities are used directly in a
coupled filter.
1) The JPDACF: Denote the stacked vector of the predicted states of the targets under consideration (assumed here to be 2) and the associated covariance matrix by

x̂^T_{k|k−1} = [ x̂^1_{k|k−1} ; x̂^2_{k|k−1} ],    (42)

P^T_{k|k−1} = [ P^{11}_{k|k−1}  P^{12}_{k|k−1} ; P^{21}_{k|k−1}  P^{22}_{k|k−1} ],    (43)

where x̂^t and P^{tt} correspond to target t; P^{t1 t2} is the cross-covariance between targets t1 and t2 (it will be zero before these targets become “coupled”).
The coupled filtering is done as follows

x̂^T_{k|k} = x̂^T_{k|k−1} + W^T_k Σ_θ Pr(θ_k | Z^k) [ z^T_k(θ) − ẑ^T_{k|k−1} ],    (44)

where

z^T_k(θ) = [ z_{j1(θ),k} ; z_{j2(θ),k} ],    (45)

and j_t(θ) is the index of the measurement associated with target t in event θ at time k.
The filter gain in (44) is

W^T_k = P^T_{k|k−1} (H^T_k)′ [ H^T_k P^T_{k|k−1} (H^T_k)′ + R^T_k ]^{−1},    (46)

where

H^T_k = [ δ^1_θ H^1_k  0 ; 0  δ^2_θ H^2_k ],    (47)
R^T_k = [ R^1_k  0 ; 0  R^2_k ],    (48)

are the (block diagonal) measurement matrix and noise covariance matrix, respectively, for the two targets under consideration. The (binary) detection indicator variables δ^t_θ above take care of the situation when only one of the targets is detected in event θ [129]. The predicted stacked measurement vector is

ẑ^T_{k|k−1} = H^T_k x̂^T_{k|k−1}.    (49)

The update of the covariance of the (stacked) state is as in (39).
H. Extensions
1) Multiple Source Measurements: One can have an unre-
solved (merged) measurement from, e.g., two nearby targets.
The JPDAM — JPDA with Merged measurement includes
a special model for a merged measurement (see [10] Section
6.4). A version of this is the JPDAMCF — JPDA with Merged
measurement and Coupled Filter.
2) Splitting Target: A possible situation of interest is a platform that launches a weapon, where one has the situation of a splitting target.
The JPDAF has been extended to cover such a situation by using multiple models with the IMM configuration:
• there is a single non-maneuvering target,
• there is a single maneuvering target,
• the target splits into two targets.
This provides a “warm start” for the new target. For details, see Chapter 4 in [7].
3) The Coalescence Problem: Track coalescence can occur
for the JPDAF if the tracks are close to each other for an
extended time. Modifications of the JPDAF that counter the
coalescence tendency are available [18], [35], [148], [170].
I. The JPDAF — Summary
Assumptions of the JPDAF:
• There are several targets to be tracked in the presence of false measurements.
• The number of targets is known.
• The track of each target has been initialized.
• The state equations of the targets are not necessarily the same.
• The validation regions of these targets can intersect and have “common” measurements.
• A target can give rise to at most one measurement — no multipath.
• The detection of a target occurs independently over time and from other targets according to a known probability.
• A measurement could have originated from at most one target (or none) — no unresolved measurements are considered here.
• The conditional pdf of each target's state given the past measurements is assumed Gaussian (a quasi-sufficient statistic that summarizes the past) and independent across targets, with means and covariances available from the previous cycle of the filter.
With the past summarized by an approximate sufficient statistic, the association probabilities are computed (only for the latest measurements) jointly across the measurements and the targets.
1) The JPDAF Steps:
• A validation matrix that indicates all the possible sources of each measurement is set up.
• From this validation matrix all the feasible joint association events are obtained according to the rules
  – one source for each measurement,
  – one measurement (or none) from each target.
• The probabilities of these joint events are evaluated according to the assumptions
  – target originated measurements are Gaussian distributed around the predicted location of the corresponding target's measurement,
  – false measurements are uniformly distributed in the surveillance region,
  – the number of false measurements is distributed according to
    · a Poisson prior — parametric JPDAF,
    · a diffuse prior — nonparametric JPDAF.
• Marginal (individual measurement to target) association probabilities are obtained from the joint association probabilities.
• The target states are estimated by separate (uncoupled) PDA filters using these marginal probabilities.
V. MULTIPLE HYPOTHESIS TRACKING
The MHT algorithm is described in a number of excellent
papers [16], [76], [123], and books [10], [15], [101]. We will
explain key concepts of the MHT algorithm through examples
while keeping the mathematics to a minimum. The interested
reader is encouraged to refer to books and papers mentioned
in this section.
A number of terms such as target, track, track hypothe-
sis, hypothesis, global hypothesis, association hypothesis, etc.
[10], [15], [16], [76], [101] are commonly used in the MTT
literature which are not clearly explained. Often a target and
a track are used interchangeably. In order to remove such
ambiguities we first explain these terms. To the best of our
knowledge, a standard taxonomy does not exist.
1) Target: A target refers to the true object.
2) True trajectory: A true trajectory of a target is a time
history of the true states {xk}of the target.
3) Track or track hypothesis: A track represents an esti-
mated trajectory of a target.
4) Track label or identity (ID): A distinct label or ID,
usually a positive integer to uniquely identify a track.
5) Compatible tracks: A number of tracks are said to be compatible if they do not have common measurements.
6) Target tree: In Kurien's track-oriented MHT (TOMHT) [76], an estimated target is represented by a target tree with a resolved track, a root node, and a number of tracks (branches) originating from the root node. The resolved track is a single branch from the first node to the current root node in a target tree. The first node is created when a new track is created using a measurement. The resolved tracks for a number of target trees are compatible and the branches in a target tree are not compatible. The location of the root node is moved forward by one scan when a new scan of measurements is processed.
7) Gating: A process which defines a volume in the mea-
surement space to determine if a measurement can be
associated with a predicted measurement corresponding
to a predicted track.
8) Association hypothesis: An association hypothesis is
generated when a measurement is associated with a
predicted track.
9) Global hypothesis or hypothesis: A global hypothesis is
a collection of compatible tracks representing a number
of estimated trajectories.
10) Assignment algorithm: An algorithm which generates a
number of compatible tracks using association between
a number of tracks and measurements in one or more
scans.
A. Single Hypothesis and Multiple Hypothesis Tracking
A simple multitarget tracking scenario is shown in Fig. 5 to illustrate the single and multiple hypothesis tracking methods, where two resolved tracks T1 and T2 are present at scan k−1. We have a single global hypothesis G1 = {T1, T2} at scan k−1, and at scans k and k+1 the tracker receives measurement sets {z1, z2, z3} and {z4, z5}, respectively. It is not known whether a measurement is from a target or due to clutter. Secondly, it is not known which measurement originates from which target. Thirdly, it is not known whether missed detection events have occurred due to a less-than-unity probability of detection. This phenomenon is known as measurement origin uncertainty.
Fig. 5. A multitarget tracking scenario.

In order to limit the number of candidate measurement-to-track associations (M2TAs), a data association based MTT
algorithm uses gating [10], [15], [76], [123]. A coarse gating
followed by a fine gating (ellipsoidal gating) is commonly
used. A coarse gating is based on rectangular gating with a
large value along each measurement coordinate. The coarse
gating eliminates many unlikely M2TAs for computational
efficiency. In Fig. 6, we assume that the measurements are 2D
position measurements. The tracks T1 and T2 at scan k−1 are predicted to scan time k to obtain the predicted tracks T̄1 and T̄2, respectively. Fig. 6 shows that measurements {z2, z3} and {z1, z2, z3} can be associated with the predicted tracks T̄1 and T̄2, respectively, by gating.
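As an illustration of the fine (ellipsoidal) gating step, the following sketch, which is ours and assumes a linear Gaussian measurement model and a chi-square gate threshold, tests whether a measurement falls inside the gate of a predicted track; H, P_pred, and R denote the measurement matrix, predicted state covariance, and measurement noise covariance.

```python
import numpy as np
from scipy.stats import chi2

def in_ellipsoidal_gate(z, x_pred, P_pred, H, R, gate_prob=0.99):
    """Return True if measurement z lies inside the ellipsoidal gate of a predicted track."""
    z_pred = H @ x_pred                      # predicted measurement
    S = H @ P_pred @ H.T + R                 # innovation covariance
    nu = z - z_pred                          # innovation
    d2 = nu @ np.linalg.solve(S, nu)         # squared Mahalanobis distance
    gamma = chi2.ppf(gate_prob, df=len(z))   # gate threshold for the chosen gate probability
    return d2 <= gamma

# Example: 2D position measurement, 4D constant-velocity state [x, vx, y, vy].
H = np.array([[1., 0., 0., 0.],
              [0., 0., 1., 0.]])
x_pred = np.array([10., 1., 5., 0.5])
P_pred = np.diag([2., 0.5, 2., 0.5])
R = 0.5 * np.eye(2)
print(in_ellipsoidal_gate(np.array([10.4, 5.6]), x_pred, P_pred, H, R))
```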
Fig. 6. Measurement-to-track association at scan k.
Fig. 7 shows the generation of five M2TA hypotheses based on the gating in Fig. 6, two missed detection hypotheses, and three new tracks T3, T4 and T5 corresponding to measurements z1, z2 and z3, respectively. We note that the two estimated targets are represented by two target trees with root nodes at scan k−1. The track from the first node at scan 1 to the root node at scan k−1 represents a resolved track for T1 or T2. We shall see in later discussion that this representation is used in the TOMHT first proposed by Kurien [76]. Table I shows ten possible global hypotheses corresponding to the track hypotheses in Fig. 7. It is a coincidence that the numbers of track hypotheses and global hypotheses are the same for this scenario.
Fig. 7. Generation of track hypotheses at scan k.

In the single
hypothesis tracking (SHT) algorithm such as the GNN, the best global hypothesis (BGH), based on the maximum total track score or the minimum total cost, is selected using the existing set of tracks and the current scan of measurements, and all other M2TAs are discarded from future consideration. Suppose G3 is the best global hypothesis. Then only the M2TAs {T1→z3, T2→z1, T4→z2} contained in G3 are kept and all other M2TAs are discarded in the GNN algorithm.
An MHT algorithm uses a deferred decision logic [15],
[16] by allowing more than one scan of measurements to be
used in the M2TA process. It is hoped that measurements in
more than one scan can provide more accurate M2TA than
those in a single scan. The assignment of measurements in
one scan to one set of tracks is known as the 2D assignment
problem [10], [15], [117]. The assignment of measurements
in s1,s > 3scans to a set of tracks is known as the sD
assignment (also known as the multi-frame assignment (MFA)
or multi-dimensional assignment (MDA)) problem [10], [15],
[16], [40], [116], [117].
The number of association hypotheses or tracks in a
TOMHT can grow exponentially as measurements in scans
are processed sequentially. An MHT algorithm usually uses a
number of techniques such as clustering, gating, N-scan prun-
ing, and track-score based pruning to limit this exponential
growth. These will be described in Section V-C.
For simplicity, the example described here has not con-
sidered false alarm hypotheses (i.e. all measurements are
accounted for in track hypotheses), nor have we considered
undetected target birth hypotheses. The same is true of the
example discussed later in Sec. VII.C. As discussed in [30],
it is beneficial to decouple data association and track ex-
traction processes; the latter discards spurious returns. Thus,
it is sufficient to consider hypotheses that account for all
returns. Further, a recent MHT generalization that accounts
for undetected target births is in [31].
Additionally, again for simplicity, we have not considered
target death events. Generally, MHT hypothesis generation
logic spawns only a target missed detection or target death
track hypothesis, the latter after a sufficient number of missed
detections.
B. Types of MHT Algorithms
There are two types of MHT, the hypothesis-oriented MHT
(HOMHT) [10], [15], [123] and TOMHT [10], [15], [76],
[101], [139], [149], [151]. Reid first proposed the HOMHT
[123]. There are two different types of the TOMHT, tree based
[29], [76], [101], [149] and non-tree based [138], [151]. Both
types of TOMHT solve the same binary MDA problem and
can yield the same binary (0-1) solutions. In the TOMHT approach, measurements in the last s−1 scans are associated with a number of tracks in the previous scan. The difference lies in how the tracks are represented. Kurien first formulated a computationally efficient version of the tree based TOMHT [76] in which a hypothesized target is represented by a target tree. The non-tree based TOMHT [10], [138], [151] does not use a target tree. When a hypothesized target is represented by a target tree, N-scan pruning [15], [29], [76], [101] can be performed to reduce the number of tracks. The non-tree based TOMHT cannot perform N-scan pruning. Therefore, the total number of tracks will be different in these two TOMHT implementations. Subsections V-C and V-D present tree based TOMHT and non-tree based TOMHT, respectively.
We model targets as points and assume that a tracker
receives measurements in scans. Each scan contains the scan
time, sensor state related information (e.g. sensor position,
velocity, etc.), measurements and associated measurement
error covariances, and sensor probability of detection. A
conventional tracker is based on the fundamental assumption
that a point target generates at most one measurement per
scan [10], [15]. Multiple detections per scan for a point target, which arise in the over-the-horizon radar (OTHR) tracking problem [57], [121], [139], require advanced algorithms in which this fundamental assumption is relaxed.
C. Tree Based TOMHT
The HOMHT keeps a number of global hypotheses between
consecutive scans whereas tree based TOMHT only maintains
a number of target trees, each containing a number of tracks
which are not compatible. In tree based TOMHT, the best global hypothesis is formed from the existing set of tracks, and N-scan pruning and track-score based pruning are used to keep the number of tracks from growing exponentially. In tree based TOMHT, there are many more tracks than the number of tracks in the best global hypothesis. For large-scale tracking problems, there may be several thousand comparable global hypotheses arising from several hundred tracks in a cluster [16]. From practical experience, several hundred tracks can be easily handled by tree based TOMHT. Therefore, tree based TOMHT has a computational advantage over the HOMHT. Based on considerations of software architecture, development, maintenance, debugging, and cost effectiveness, most tracking groups at present use the tree based TOMHT.
A block diagram in Fig. 8 shows various processing steps
of a tree based TOMHT. When the first scan is received,
the measurements are partitioned into a number of clusters
[123] first using a coarse method and then using the location
and measurement error covariances. The use of clustering
in MTT was first proposed by Reid [123] to partition the
tracking problem to a number of sub-problems so that an MHT
algorithm can be applied to each cluster for computational
efficiency. For each measurement a new track is initiated using
a single-point (SP) track initiation algorithm [8], [97], [98],
[100] which calculates the initial state estimate and associated
covariance. Additionally, the track score for the new track, which generates a new target tree, is also calculated. A sensor usually
collects measurements in a region of the measurement space
known as the dwell or scan volume. For a radar measuring
range and azimuth in a plane, the dwell volume (area in this
case) can be specified by the minimum and maximum values
of the range and azimuth. Thus a sensor has no information
about targets outside the dwell volume.
TABLE I
TEN POSSIBLE GLOBAL HYPOTHESES

Global hypothesis    Structure of a global hypothesis
G1     {T1→z2, T2→z1, T5→z3}
G2     {T1→z2, T2→z3, T3→z1}
G3     {T1→z3, T2→z1, T4→z2}
G4     {T1→z3, T2→z2, T3→z1}
G5     {T1, T2→z1, T4→z2, T5→z3}
G6     {T1, T2→z2, T3→z1, T5→z3}
G7     {T1, T2→z3, T3→z1, T4→z2}
G8     {T1→z2, T2, T3→z1, T5→z3}
G9     {T1→z3, T2, T3→z1, T4→z2}
G10    {T1, T2, T3→z1, T4→z2, T5→z3}
(a bare Ti denotes a missed detection for that track)

Fig. 8. Processing steps of a TOMHT.

Assumptions:
1) The number of FAs in the dwell volume is Poisson distributed [10]. Let λ_FA denote the expected number of FAs per unit volume of the measurement space, known as the spatial density of FAs.
2) The number of new targets appearing in the dwell
volume is also Poisson distributed [10]. Let λnew denote
the expected number of new targets per unit volume of
the measurement space, known as the spatial density of
new targets.
1) Track Score: Tracks can have different numbers of detections and missed detections. In order to treat the tracks in a normalized manner, the likelihood ratio (LR), normalized by the FA probability density, is used [9], [15]; it is dimensionless. For computational convenience, the logarithm of the LR (LLR) is used for the track score [9], [15]. High and low track scores represent high and low quality tracks, respectively. Let LLR_k denote the track score at scan k. Assuming that measurements at different scans are independent given the state, LLR_k is related to LLR_{k−1} by

LLR_k = LLR_{k−1} + ΔLLR_k,    (50)

where ΔLLR_k is the incremental log-likelihood ratio or incremental track score at scan k. As described in Section V-A, three possible cases arise: a measurement z_k can be associated with a track, a track can have a missed detection, and a new track corresponding to z_k can be created. Let P_D and P_G denote the probability of detection and the gate probability, respectively. The LLR for a new track is given by [9], [76]

LLR_new = log(λ_new / λ_FA).    (51)

Let Z^{k−1} denote the measurements associated with a track up to time t_{k−1}. For generality, we assume that Z^{k−1} includes detections and missed detections. Then the ΔLLR_k for the association of z_k with the track and for a missed detection event, respectively, are given by [9], [76]

ΔLLR_k = log( P_D p(z_k | Z^{k−1}) / λ_FA ),    (52)

ΔLLR_k = log(1 − P_D P_G).    (53)
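The LLR recursion (50)-(53) can be coded in a few lines. The sketch below is our illustration under a linear Gaussian measurement model, in which p(z_k | Z^{k−1}) is the predicted measurement density N(z_k; ẑ, S); the variable names and default values are assumptions, not code from the cited references.

```python
import numpy as np

def llr_new_track(lambda_new, lambda_fa):
    """Track score of a newly initiated track, eq. (51)."""
    return np.log(lambda_new / lambda_fa)

def llr_increment(z=None, z_pred=None, S=None, p_d=0.9, p_g=0.999, lambda_fa=1e-3):
    """Incremental track score: eq. (52) for a detection, eq. (53) for a missed detection."""
    if z is None:                                   # missed detection
        return np.log(1.0 - p_d * p_g)
    k = len(z)
    d = z - z_pred
    pdf = np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt((2 * np.pi) ** k * np.linalg.det(S))
    return np.log(p_d * pdf / lambda_fa)

# Eq. (50): accumulate the score along one track branch.
llr = llr_new_track(lambda_new=1e-4, lambda_fa=1e-3)
llr += llr_increment(z=np.array([1.2]), z_pred=np.array([1.0]), S=np.array([[0.5]]))
llr += llr_increment()                              # one missed detection
print(llr)
```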
2) Best Global Hypothesis Generation using MFA and
Pruning: As seen in Section V-A, the number of tracks can
grow exponentially as scans of measurements are processed
sequentially. This can lead to a serious computational problem
when a few hundred targets are present in the surveillance area.
In order to have practical solutions for real-world tracking
problems, the tree based TOMHT uses a number of pruning
methods to delete tracks with low track score while keeping
tracks with high scores. Next we describe the formation of
the best global hypothesis using the example shown in Fig. 9,
described in [101]. The notations and symbols used here are
slightly different from those in [101]. This example is different
from the example in Fig. 6.
We assume that at scan k−1 we have two resolved tracks T1, T2, and at scans k and k+1 the tracker receives measurement sets {z1, z2} and {z3}, respectively. Following the procedure described in Section V-A, ten tracks {h1, h2, ..., h10} are generated at scan k+1. The problem shown in Fig. 9 is a 3-dimensional (s = 3) assignment problem. Let N and M denote the number of tracks at scan k+1 and the sum of the number of resolved tracks at the root node and the number of measurements in the last s−1 scans, respectively. For our example, M = 5, N = 10. Let a and u be N-dimensional column vectors, where each element of a is zero or one and u_i = LLR_i, i = 1, 2, ..., N. Let b be an M-dimensional column vector where each element of b is one. We refer to a and u as the assignment and utility vectors, respectively. The best global hypothesis is determined by solving the binary programming problem in which {a_i ∈ {0, 1}} are determined by maximizing the total utility, as shown in Fig. 10.
%HVWJOREDO
K\SRWKHVLV
7
6FDQ
]
K
7
K
N
]
]
]
]
]
7
7
]
]
K
K
K
K
K
K
K

K
7
N
N
5HVROYHGWUDFNV 0LVVHG
GHWHFWLRQ
1VFDQ 
Fig. 9. Formation of best global hypothesis and N-scan pruning.

Fig. 10. Multi-frame assignment problem: â = arg max_a u′a subject to A a = b, with a_i ∈ {0, 1} for i = 1, ..., N; the M rows of A correspond to the resolved tracks T1, T2 and the measurements z1, z2, z3.
The Lagrangian relaxation algorithm [117], [119], [120] and
approximate linear programming (LP) [29], [145] can be used
to solve the MFA problem.
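For small problems such as the example of Fig. 9, the binary program of Fig. 10 can also be solved by brute force, which makes the structure of the MFA problem explicit. The sketch below is our illustration, not the Lagrangian relaxation or LP methods cited above; each candidate track is described by the target tree it extends, the measurements it uses, and its LLR, and the best global hypothesis is the compatible subset that covers every root track and every measurement exactly once with maximum total LLR.

```python
from itertools import combinations

def best_global_hypothesis(tracks):
    """Brute-force MFA. tracks: list of dicts with keys
    'name', 'tree' (root / resolved-track label), 'meas' (set of measurement labels), 'llr'.
    Every root label and every measurement must be used exactly once (cf. Aa = b in Fig. 10)."""
    roots = {t["tree"] for t in tracks}
    meas = set().union(*(t["meas"] for t in tracks))
    best, best_utility = None, -float("inf")
    for r in range(1, len(tracks) + 1):
        for subset in combinations(tracks, r):
            used_roots = [t["tree"] for t in subset]
            used_meas = [m for t in subset for m in t["meas"]]
            # each root and each measurement covered exactly once
            if sorted(used_roots) != sorted(roots):
                continue
            if sorted(used_meas) != sorted(meas):
                continue
            utility = sum(t["llr"] for t in subset)
            if utility > best_utility:
                best, best_utility = subset, utility
    return best, best_utility

# Toy example with two target trees (T1, T2) and measurements z1, z2, z3.
tracks = [
    {"name": "h2", "tree": "T1", "meas": {"z2"}, "llr": 4.1},
    {"name": "h3", "tree": "T1", "meas": {"z1", "z3"}, "llr": 2.0},
    {"name": "h5", "tree": "T2", "meas": {"z1", "z3"}, "llr": 5.0},
    {"name": "h6", "tree": "T2", "meas": {"z2", "z3"}, "llr": 1.5},
]
bgh, utility = best_global_hypothesis(tracks)
print([t["name"] for t in bgh], utility)   # ['h2', 'h5'] 9.1
```

The exhaustive search is exponential in the number of tracks, which is exactly why the Lagrangian relaxation and LP relaxations cited above are used in practice.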
Suppose the tracks h2 and h5 are included in the BGH and we choose N = 1 for N-scan pruning. In N-scan pruning, we move one scan back from the current scan k+1 to scan k and, for target trees T1 and T2, we delete the branches not included in the BGH. Thus, tracks h1, h4, h6, h7, h8, and h9 are deleted by N-scan pruning. In addition to N-scan pruning, track-score based pruning is used to delete tracks with low track scores. In order to remove spurious tracks and keep good tracks, the status of a track is specified as new, tentative, or confirmed. A number of track-confirmation logics described in [15], [30] are used to keep good tracks. Generally, track management is performed in a sliding window and is either logic-based or score-based, the latter making use of the sequential probability ratio test (SPRT) for quickest change detection [15].
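A minimal sketch of N-scan pruning under the tree representation, ours rather than any cited implementation: walk back N scans from a leaf that belongs to the best global hypothesis, and delete every branch of that target tree that does not pass through the resulting node. It assumes track nodes carrying a parent link, as in the Track sketch given earlier.

```python
def nscan_prune(leaves_in_tree, bgh_leaf, n_scan):
    """Keep only the branches of one target tree that pass through the ancestor of
    the BGH leaf located n_scan scans back. Each leaf is any object with a .parent link."""
    anchor = bgh_leaf
    for _ in range(n_scan):
        if anchor.parent is not None:
            anchor = anchor.parent

    def passes_through(leaf):
        node = leaf
        while node is not None:
            if node is anchor:
                return True
            node = node.parent
        return False

    return [leaf for leaf in leaves_in_tree if passes_through(leaf)]
```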
D. Non-tree Based TOMHT
Details of non-tree based TOMHT are presented in [10],
[138], [151]. There are a number of variants of this type
of non-tree based TOMHT. To illustrate this approach, we
consider the example in Fig. 9 and use the method de-
scribed in [151]. This multi-frame assignment problem is a
3D assignment problem. Following the approach in [151],
we have depicted this 3D assignment problem in Fig. 11,
where the two resolved tracks T1 and T2 at scan k−1 and the three measurements {z1, z2} and {z3} at scans k and k+1 are shown. Secondly, a dummy track corresponding to an "extraneous" measurement (new target or false alarm) at scan k−1 and dummy measurements corresponding to missed detections at scans k and k+1 are also shown in Fig. 11.
Fig. 11. Multi-frame assignment problem in non-tree based TOMHT.
Track initiation, track score computation, and track confir-
mation in this type of non-tree based TOMHT are similar to
those in tree based TOMHT. An sD assignment is used to
determine the best global hypothesis. The two tracks present in the best global hypothesis are shown in Fig. 11 by solid lines originating from the resolved tracks T1 and T2.
E. Track Filtering
A single-model filter (e.g., KF, EKF, UKF, PF, PFF) and a multiple-model filter (e.g., IMM, VS-IMM) are used in an MTT system for non-maneuvering and maneuvering targets, respectively. The dynamic and measurement models in a filter can be linear or nonlinear. Some of these filters have been described in Subsections II-C, II-D, and II-E. A detailed overview of the dynamic and measurement models and filters for non-maneuvering and maneuvering targets is presented in [101].
F. Applications of MHT
The MHT has been successfully used for solving many real-
world problems in ground target tracking, maritime tracking,
air target tracking, missile defense systems, computer vision
systems, video tracking, persistence surveillance, and space
object tracking (SOT) [15], [16], [101]. Computer hardware
and software have advanced significantly during the last two
decades. As a result, large-scale real-world problems involving
thousands of targets can now be solved by the TOMHT using
high performance computing (HPC) and cluster computing. As
mentioned in [16], due to military applications and company
proprietary policies, many of these studies are not available in
the open literature.
G. Future Work
Future areas of research include multiple detection systems
in which multiple measurements for a target can arise in a
scan. These problems arise in over the horizon radar (OTHR)
tracking [57], [121], [139], tracking using high range res-
olution radar (HRR), and passive coherent location system
(PCL) [152]. Multiple detections per scan can also arise for an
extended target [81]. Large-scale real-world problems which
will involve HPC and cluster computing are SOT [102] and
giga-pixel video surveillance system [2]. An important area of
research is the comparative evaluation of the TOMHT and RFS
based algorithms. Research in this area is quite limited [147].
An optimal solution to a complex MTT system is not possible. A key goal of an advanced MTT system is to have numerically efficient, near-optimal, robust, and scalable algorithms and software.
VI. THE RANDOM FINITE SET APPROACH
List of mathematical symbols:
h^X        multitarget exponential: ∏_{x∈X} h(x), with h^∅ = 1
X_k        (set-valued) multitarget state at time k
Z_k        (set-valued) multitarget observation at time k
Z^k        multitarget observation history (Z_1, Z_2, ..., Z_k)
π(·)       multitarget density
v(·)       Probability Hypothesis Density (PHD)
ρ(·)       cardinality distribution
L_k        space of labels for targets born at time k
L_{0:k}    space of labels for targets up to time k
L(X)       the set of labels of X
Δ(X)       distinct label indicator: δ_{|X|}[|L(X)|]
θ          association map taking L_{0:k} to {0, 1, ..., |Z_k|} such that θ(ℓ) = θ(ℓ′) > 0 implies ℓ = ℓ′
Θ_k        space of association maps at time k
Θ_k(L)     subset of Θ_k with domain L
The RFS approach represents the multitarget state as a finite
set of single-target states, and the MTT problem is formulated
as a dynamic multitarget state estimation problem, analogous
to the single-target case in Section II.
A. Random Finite Set
An RFS X, of X, is a random variable taking values in F(X), the collection of all finite subsets of X. While F(X) does not inherit the usual Euclidean notion of probability density from X, a measure-theoretic notion of probability density on F(X) is available [154]. However, we adopt the Finite Set Statistics (FISST) notion of density since it is convenient and bypasses measure theoretic constructs [54], [86].

The FISST density of an RFS X is a non-negative function π on F(X) such that for any region S ⊆ X,

Pr(X ⊆ S) = ∫_S π(X) δX,

where the integral above is a set integral defined by [54], [84]

∫_S π(X) δX = Σ_{i=0}^{∞} (1/i!) ∫_{S^i} π({x_1, ..., x_i}) d(x_1, ..., x_i),

i.e. the set integral of the FISST density over a region S yields the probability that X is contained in S. Although π is not a probability density, the function defined by π(X) K^{|X|} is, where K denotes the unit of hyper-volume on X [154].
A Bernoulli RFS X has probability 1−r of being empty, and probability r of being a singleton whose element is distributed according to a probability density p (on X). The density of a Bernoulli RFS is given by

π(X) = 1−r,     if X = ∅,
π(X) = r p(x),  if X = {x}.

A multi-Bernoulli RFS is a union of independent Bernoulli RFSs. The cardinality distribution of an RFS is defined by

ρ(n) = Pr(|X| = n).

An i.i.d. cluster RFS X has elements that are i.i.d. according to a probability density p (on X), and is completely characterized by ρ and p [36]. Its density is given by

π({x_1, ..., x_n}) = n! ρ(n) ∏_{i=1}^{n} p(x_i),

with π(∅) = ρ(0). A Poisson RFS is a special case of an i.i.d. cluster RFS with Poisson cardinality.
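To make these models concrete, the following is a small sketch, ours rather than any cited implementation, of how realizations of Bernoulli, multi-Bernoulli, and Poisson RFSs on a one-dimensional state space can be sampled; the densities and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bernoulli_rfs(r, sample_p):
    """Empty with probability 1-r, otherwise a singleton drawn from the density p."""
    return [sample_p()] if rng.random() < r else []

def sample_multi_bernoulli_rfs(components):
    """Union of independent Bernoulli RFSs; components = [(r, sample_p), ...]."""
    out = []
    for r, sample_p in components:
        out += sample_bernoulli_rfs(r, sample_p)
    return out

def sample_poisson_rfs(mean_cardinality, sample_p):
    """i.i.d. cluster RFS with Poisson cardinality (a Poisson RFS)."""
    n = rng.poisson(mean_cardinality)
    return [sample_p() for _ in range(n)]

# Example: two possible targets around 0 and 5, plus Poisson "clutter" uniform on [0, 10].
mb = sample_multi_bernoulli_rfs([(0.9, lambda: rng.normal(0.0, 1.0)),
                                 (0.6, lambda: rng.normal(5.0, 1.0))])
clutter = sample_poisson_rfs(3.0, lambda: rng.uniform(0.0, 10.0))
print(sorted(mb), sorted(clutter))
```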
The first moment of an RFS is the Probability Hypothesis Density (PHD), also known as the intensity function [84], [154]. The PHD is a non-negative function v (on X) whose integral over any region S ⊆ X gives the expected number of elements of the RFS that are in S, i.e.

E[|X ∩ S|] = ∫_S v(x) dx.    (54)

The PHD is computed from the multitarget density by [84]

v(x) = ∫ π({x} ∪ X) δX.    (55)

The local maxima of the PHD are points in X with the highest local concentration of the expected number of elements. Intuitively, we can use n̂ = E[|X|] or n̂ = arg max_n ρ(n) as the estimated number of targets, and the n̂ highest local maxima of the PHD as the estimated target states.
When a multitarget state, with prior density π, is observed as Z (e.g. a set of points, an image, or a function) modelled by the likelihood function π(Z|X), all information about the multitarget state given the observation is contained in the multitarget posterior density, given by Bayes rule (cf. (1))

π(X|Z) = π(Z|X) π(X) / ∫ π(Z|X) π(X) δX.    (56)

Bayes optimal multitarget estimators can be formulated by minimizing the Bayes risk as in the single-target case. One such estimator is the marginal multitarget estimator [86]: X̂ = arg sup_{X: |X| = n̂} π(X|Z), where n̂ = arg max_n ρ(n|Z).
B. Multitarget State Space Model
In a standard multitarget transition model (see Section III),
at time k−1, each target x_{k−1} of a multitarget state X_{k−1} generates a Bernoulli RFS S_{k|k−1}(x_{k−1}) at time k. New targets at time k are modeled by an RFS of spontaneous births Γ_k. Thus, the multitarget state X_k generated by X_{k−1} is given by the multitarget state transition equation (cf. (2))

X_k = [ ∪_{x_{k−1} ∈ X_{k−1}} S_{k|k−1}(x_{k−1}) ] ∪ Γ_k.    (57)

In general the multitarget transition equation can be described by a Markov multitarget transition density (cf. (3))

φ_{k|k−1}(X_k | X_{k−1}),    (58)

i.e. the probability density that a given multitarget state X_{k−1} evolves to X_k. The multitarget transition density captures the underlying models of target motion, births and deaths.
In a standard multitarget observation model (see Section
III), each target x_k of a multitarget state X_k generates a Bernoulli RFS D_k(x_k). The observation Z_k generated by X_k is given by the multitarget observation equation (cf. (4))

Z_k = [ ∪_{x_k ∈ X_k} D_k(x_k) ] ∪ F_k,    (59)

where F_k is an RFS of false detections. In general the multitarget observation model can be expressed as the multitarget likelihood function (cf. (5))

ϕ_k(Z_k | X_k),    (60)

i.e. the likelihood that the observation Z_k is generated by the multitarget state X_k. In a standard observation model, the multitarget observation likelihood captures underlying models of target detections, observation noise, and FAs. Unlike traditional techniques, this framework accommodates non-homogeneous, non-Poisson FAs, and state-dependent probability of detection in a principled way. Non-standard observations such as images, functions, etc. can also be described by the multitarget observation likelihood.
C. Multitarget Bayes Recursion
All information about the multitarget state history to time k is encapsulated in the multitarget posterior density π_{0:k}(·|Z_{1:k}), which can be computed recursively from an initial prior π_0, via the multitarget Bayes recursion (cf. (6))

π_{0:k}(X_{0:k} | Z^k) ∝ ϕ_k(Z_k | X_k) φ_{k|k−1}(X_k | X_{k−1}) π_{0:k−1}(X_{0:k−1} | Z^{k−1}).    (61)

Target trajectories are accommodated by incorporating a label in each target's state vector [54], [86], [163], [166]. The multitarget posterior thus contains all information on the RFS of target trajectories, given the observation history.

The multitarget filtering density π_k(·|Z^k) is a marginal of the posterior density at time k, which is of interest for on-line multitarget tracking. The multitarget filtering density can be computed recursively using the multitarget Bayes prediction and update equations (cf. (7), (8))

π_{k|k−1}(X | Z^{k−1}) = ∫ φ_{k|k−1}(X | Y) π_{k−1}(Y | Z^{k−1}) δY,    (62)

π_k(X | Z^k) = ϕ_k(Z_k | X) π_{k|k−1}(X | Z^{k−1}) / ∫ ϕ_k(Z_k | Y) π_{k|k−1}(Y | Z^{k−1}) δY.    (63)
A generic particle implementation of the multitarget re-
cursions (61) and (62)-(63) was given in [154]. Multitarget
trackers based on incorporating labels in the target states
include the generalized labeled multi-Bernoulli filter [163],
[164], which solves the filtering recursion (62)-(63) analyt-
ically, and the particle marginal Metropolis-Hastings tracker [166], which simulates the posterior (61). Algorithms that only
estimate the multitarget state include the PHD, Cardinalized
PHD and multi-Bernoulli filters, [84]–[86], [159], [160], which
are analytic approximations of the filtering recursion (62)-(63).
D. The PHD Filter
The PHD filter is a computationally inexpensive approxi-
mation of the multitarget Bayes filter derived by Mahler using
FISST [84]. An alternative derivation of the PHD filter based
on classical point process theory was given in [142], while an
intuitive interpretation was given in [47].
Instead of propagating the multitarget filtering density π_k(·|Z^k), the PHD filter propagates its first moment, the filtered PHD v_k(·|Z^k). In addition to the standard multitarget state space model with Poisson FAs, the PHD recursion assumes that the updated and predicted multitarget RFSs are Poisson. For compactness, we drop the dependence on Z^k, and denote by ⟨α, β⟩ the inner product ∫ α(ζ) β(ζ) dζ when α, β are functions, or Σ_{ℓ=0}^{∞} α(ℓ) β(ℓ) when α, β are sequences.

The PHD recursion consists of a prediction and an update:

v_{k|k−1}(x) = ⟨P_{S,k|k−1} f_{k|k−1}(x|·), v_{k−1}⟩ + γ_k(x),    (64)

v_k(x) = [1 − P_{D,k}(x)] v_{k|k−1}(x) + Σ_{z∈Z_k} ψ_k(z; x) v_{k|k−1}(x) / ( λ_{F,k} + ⟨ψ_k(z; ·), v_{k|k−1}⟩ ),    (65)

where γ_k is the PHD of the RFS of new targets,

ψ_k(z; x) = P_{D,k}(x) g_k(z|x) / p_{F,k}(z)    (66)

is the detection-to-FA ratio of z given a target x, λ_{F,k} is the expected number of FAs at time k, and p_{F,k} is the probability density of each FA.
The PHD recursion (64)-(65) admits a closed form solution called the Gaussian mixture PHD (GM-PHD) filter [155] under the linear Gaussian multitarget model: linear Gaussian single-target model, i.e. (9)-(10); constant survival and detection probabilities, i.e. P_{S,k|k−1}(x) = P_{S,k|k−1} and P_{D,k}(x) = P_{D,k} (Gaussian mixture P_{S,k|k−1}(x) and P_{D,k}(x) can also be accommodated); and a Gaussian mixture birth PHD

γ_k(x) = Σ_{j=1}^{J_{Γ,k}} w^{(j)}_{Γ,k} N(x; m^{(j)}_{Γ,k}, P^{(j)}_{Γ,k}).

In this case, if v_{k−1} is a Gaussian mixture of the form

v_{k−1}(x) = Σ_{i=1}^{J_{k−1}} w^{(i)}_{k−1} N(x; m^{(i)}_{k−1}, P^{(i)}_{k−1}),    (67)

then the predicted PHD to time k is given by

v_{k|k−1}(x) = γ_k(x) + Σ_{i=1}^{J_{k−1}} w^{(i)}_{k|k−1} N(x; m^{(i)}_{k|k−1}, P^{(i)}_{k|k−1}),    (68)

where w^{(i)}_{k|k−1} = P_{S,k|k−1} w^{(i)}_{k−1}, and m^{(i)}_{k|k−1}, P^{(i)}_{k|k−1} are given by the Gaussian sum filter prediction (13), (14), respectively. If we rewrite v_{k|k−1} as

v_{k|k−1}(x) = Σ_{i=1}^{J_{k|k−1}} w^{(i)}_{k|k−1} N(x; m^{(i)}_{k|k−1}, P^{(i)}_{k|k−1}),    (69)

then the updated PHD at time k is given by

v_k(x) = (1 − P_{D,k}) v_{k|k−1}(x) + P_{D,k} Σ_{z∈Z_k} Σ_{i=1}^{J_{k|k−1}} w^{(i)}_k(z) N(x; m^{(i)}_k(z), P^{(i)}_k),    (70)

where

w^{(i)}_k(z) = w^{(i)}_{k|k−1} q^{(i)}_k(z) / ( λ_{F,k} + P_{D,k} Σ_{ℓ=1}^{J_{k|k−1}} w^{(ℓ)}_{k|k−1} q^{(ℓ)}_k(z) ),    (71)

q^{(i)}_k(z) = N(z; H_k m^{(i)}_{k|k−1}, S^{(i)}_{k|k−1}) / p_{F,k}(z),    (72)

with m^{(i)}_k(z), P^{(i)}_k, S^{(i)}_{k|k−1} given by the Gaussian sum filter update (15)-(18), respectively.

Mixture reduction by pruning negligible components and merging similar components is needed to manage the growing number of components [155]. Multitarget state estimation in the GM-PHD filter involves first estimating the number of targets from the sum of the weights, and then extracting the corresponding number of components with the highest weights from the PHD as state estimates. Alternatively, we can choose the means of components whose weights exceed a prescribed threshold.
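The GM-PHD recursion (67)-(72) translates almost line by line into code. The sketch below is our illustration under the linear Gaussian model, assuming constant P_S and P_D and a uniform FA density over a surveillance region of volume V, so that the clutter intensity is λ_{F,k}/V; pruning and merging of components are omitted, and the function name is hypothetical.

```python
import numpy as np

def gaussian(z, m, S):
    d = z - m
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt((2 * np.pi) ** len(z) * np.linalg.det(S))

def gm_phd_step(weights, means, covs, births, Z, F, Q, H, R, p_s, p_d, lam_fa, volume):
    """One GM-PHD prediction (68) and update (70)-(72).
    births = (w_birth, m_birth, P_birth) lists; clutter intensity kappa(z) = lam_fa / volume."""
    # --- prediction (68): surviving components plus birth components
    w_pred = [p_s * w for w in weights] + list(births[0])
    m_pred = [F @ m for m in means] + list(births[1])
    P_pred = [F @ P @ F.T + Q for P in covs] + list(births[2])

    # --- update (70)-(72)
    kappa = lam_fa / volume
    w_upd = [(1.0 - p_d) * w for w in w_pred]        # missed-detection terms
    m_upd = list(m_pred)
    P_upd = list(P_pred)
    for z in Z:
        likel, m_z, P_z = [], [], []
        for w, m, P in zip(w_pred, m_pred, P_pred):
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            likel.append(p_d * w * gaussian(z, H @ m, S))
            m_z.append(m + K @ (z - H @ m))
            P_z.append((np.eye(len(m)) - K @ H) @ P)
        denom = kappa + sum(likel)                   # cf. the denominator of (71)
        w_upd += [l / denom for l in likel]
        m_upd += m_z
        P_upd += P_z
    # the expected number of targets is sum(w_upd)
    return w_upd, m_upd, P_upd
```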
For highly nonlinear problems, v_k can be approximated by a set of weighted particles {(w^{(i)}_k, x^{(i)}_k)}_{i=1}^{L_k}. A generic particle PHD filter is given in the algorithm below [154]. Convergence results similar to those in [32] also hold for the particle PHD filter under standard assumptions [154], [26], [66]. Multitarget state estimation for the particle PHD filter requires the clustering of particles into groups, which involves additional processing. Techniques such as the auxiliary particle PHD filter [167] and the measurement driven particle PHD filter [127] provide partial solutions to this problem.
Algorithm: Particle PHD filter

For i = 1, ..., L_{k−1}, sample x̃^{(i)}_k ~ q_k(·| x^{(i)}_{k−1}, Z_k) and compute

w^{(i)}_{k|k−1} = [ P_{S,k|k−1}(x^{(i)}_{k−1}) f_{k|k−1}(x̃^{(i)}_k | x^{(i)}_{k−1}) / q_k(x̃^{(i)}_k | x^{(i)}_{k−1}, Z_k) ] w^{(i)}_{k−1}.

For i = 1, ..., J_k, sample x̃^{(i+L_{k−1})}_k ~ r_k(·|Z_k) and compute

w^{(i+L_{k−1})}_{k|k−1} = γ_k(x̃^{(i+L_{k−1})}_k) / ( J_k r_k(x̃^{(i+L_{k−1})}_k | Z_k) ).

For each z ∈ Z_k, compute

C_k(z) = Σ_{j=1}^{L_{k−1}+J_k} ψ_k(z; x̃^{(j)}_k) w^{(j)}_{k|k−1}.

For i = 1, ..., L_{k−1}+J_k, update the weights

w^{(i)}_k = [ 1 − P_{D,k}(x̃^{(i)}_k) + Σ_{z∈Z_k} ψ_k(z; x̃^{(i)}_k) / ( λ_{F,k} + C_k(z) ) ] w^{(i)}_{k|k−1}.

Resample to obtain {(w^{(i)}_k, x^{(i)}_k)}_{i=1}^{L_k}.
E. The Cardinalized PHD Filter
The cardinalized PHD (CPHD) filter is a generalization
of the PHD filter that jointly propagates the PHD v_k and the cardinality distribution ρ_k to provide better performance albeit
at higher computational complexity [85]. In addition to the
standard multitarget state space model with i.i.d. cluster FAs,
the CPHD recursion assumes that the prior and predicted
multitarget densities are i.i.d. cluster. We use the alternative
form of the CPHD recursion given in [156] since it facilitates
implementations.
The CPHD prediction is the same as the PHD prediction except for the additional calculation of the predicted cardinality distribution ρ_{k|k−1}, which is the convolution

ρ_{k|k−1}(n) = Σ_{j=0}^{n} ρ_{Γ,k}(n−j) ρ_{S,k|k−1}(j),

of the birth cardinality distribution ρ_{Γ,k} (given in the birth model) and the surviving target cardinality distribution

ρ_{S,k|k−1}(j) = Σ_{ℓ=j}^{∞} C^ℓ_j [P̄_{S,k|k−1}]^j [1 − P̄_{S,k|k−1}]^{ℓ−j} ρ_{k−1}(ℓ),

P̄_{S,k|k−1} = ⟨P_{S,k|k−1}, v_{k−1}⟩ / ⟨1, v_{k−1}⟩.
The CPHD update is given by [85], [156]

ρ_k(n) = Υ^{(0)}_k[v_{k|k−1}, Z_k](n) ρ_{k|k−1}(n) / ⟨Υ^{(0)}_k[v_{k|k−1}, Z_k], ρ_{k|k−1}⟩,

v_k(x) = ( ⟨Υ^{(1)}_k[v_{k|k−1}, Z_k], ρ_{k|k−1}⟩ / ⟨Υ^{(0)}_k[v_{k|k−1}, Z_k], ρ_{k|k−1}⟩ ) (1 − P_{D,k}(x)) v_{k|k−1}(x)
    + Σ_{z∈Z_k} ( ⟨Υ^{(1)}_k[v_{k|k−1}, Z_k − {z}], ρ_{k|k−1}⟩ / ⟨Υ^{(0)}_k[v_{k|k−1}, Z_k], ρ_{k|k−1}⟩ ) ψ_k(z; x) v_{k|k−1}(x),

where

Υ^{(u)}_k[v, Z](n) = Σ_{S⊆Z} ( P^{n−|S|}_u / ⟨1 − P_{D,k}, v⟩^u ) Ξ_k[v, Z, S | n],

Ξ_k[v, Z, S | n] = P^n_{|S|} [ψ̄_k]^S (1 − P̄_{D,k})^{n−|S|} ρ_{F,k}(|Z − S|) |Z − S|!,

ψ̄_k(z) = ⟨ψ_k(z; ·), v⟩ / ⟨1, v⟩,

P̄_{D,k} = ⟨P_{D,k}, v⟩ / ⟨1, v⟩,

ρ_{F,k} = cardinality distribution of FAs.
Multitarget state estimation for the CPHD filter is similar
to that for the PHD filter. In addition, the number of targets
can be estimated using arg max ρk(·).
The bottleneck of the CPHD update is the evaluation of the elementary symmetric function

e_j(Y) = Σ_{S⊆Y, |S|=j} ∏_{ζ∈S} ζ,

for a finite subset Y of real numbers, with e_0(Y) = 1 by convention [20]. Using the Newton-Girard formulae (or Vieta's theorem), e_j(Y) can be evaluated efficiently by expanding out the polynomial with roots given by the elements of Y and collecting the coefficient of the (|Y|−j)-th power [156]. Both the PHD and CPHD filters are linear in the number of targets. The PHD filter is linear in the number of observations, while the CPHD filter has a complexity of O(|Z_k|^2 log |Z_k|) [156].
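Following the Vieta route described above, the elementary symmetric functions of a set Y can be read off the coefficients of ∏_{ζ∈Y} (x + ζ) = Σ_j e_j(Y) x^{|Y|−j}. A minimal sketch (ours) using repeated polynomial multiplication:

```python
import numpy as np

def elementary_symmetric_functions(Y):
    """Return [e_0(Y), e_1(Y), ..., e_{|Y|}(Y)] by expanding prod_{zeta in Y} (x + zeta)."""
    coeffs = np.array([1.0])                        # the polynomial "1"
    for zeta in Y:
        coeffs = np.convolve(coeffs, [1.0, zeta])   # multiply by (x + zeta)
    return coeffs                                   # coeffs[j] = e_j(Y)

print(elementary_symmetric_functions([1.0, 2.0, 3.0]))
# [1., 6., 11., 6.]  i.e. e_0 = 1, e_1 = 1+2+3, e_2 = 1*2+1*3+2*3, e_3 = 1*2*3
```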
Under the linear Gaussian multitarget model, the CPHD recursion also admits a closed form solution. If v_{k−1} is a Gaussian mixture of the form (67), then v_{k|k−1} is the same as the GM-PHD prediction (68), and ρ_{k|k−1} is the convolution of ρ_{Γ,k} and ρ_{S,k|k−1} with P̄_{S,k|k−1} = P_{S,k|k−1}. If v_{k|k−1} is a Gaussian mixture of the form (69), then

ρ_k(n) = Υ^{(0)}_k[w_{k|k−1}, Z_k](n) ρ_{k|k−1}(n) / ⟨Υ^{(0)}_k[w_{k|k−1}, Z_k], ρ_{k|k−1}⟩,

v_k(x) = (1 − P_{D,k}) ( ⟨Υ^{(1)}_k[w_{k|k−1}, Z_k], ρ_{k|k−1}⟩ / ⟨Υ^{(0)}_k[w_{k|k−1}, Z_k], ρ_{k|k−1}⟩ ) v_{k|k−1}(x)
    + P_{D,k} Σ_{z∈Z_k} Σ_{i=1}^{J_{k|k−1}} w^{(i)}_k(z) N(x; m^{(i)}_k(z), P^{(i)}_k),

where

w^{(i)}_k(z) = ⟨Υ^{(1)}_k[w_{k|k−1}, Z_k − {z}], ρ_{k|k−1}⟩ q^{(i)}_k(z) w^{(i)}_{k|k−1} / ⟨Υ^{(0)}_k[w_{k|k−1}, Z_k], ρ_{k|k−1}⟩,

w_{k|k−1} = [w^{(1)}_{k|k−1}, ..., w^{(J_{k|k−1})}_{k|k−1}]′,

Υ^{(u)}_k[w, Z](n) = Σ_{S⊆Z} ( P^{n−|S|}_u / [(1 − P_{D,k}) 1′w]^u ) Ξ_k[w, Z, S | n],

Ξ_k[w, Z, S | n] = P^n_{|S|} [ψ̄_k]^S (1 − P_{D,k})^{n−|S|} ρ_{F,k}(|Z − S|) |Z − S|!,

ψ̄_k(z) = P_{D,k} w′ q_k(z) / (1′w),

q_k(z) = [q^{(1)}_k(z), ..., q^{(J_{k|k−1})}_k(z)]′,

and m^{(i)}_k(z), P^{(i)}_k and q^{(i)}_k(z) are the same as in the GM-PHD filter.
As with the Kalman filter, the extended PHD/CPHD fil-
ters are GM-PHD/CPHD filters with linearized state space
equations, and the unscented PHD/CPHD filters are GM-
PHD/CPHD filters with unscented transform approximations
[155]. The particle implementation of the CPHD filter fol-
lows that for the PHD filter [154]. The PHD/CPHD filters
can be applied to jointly estimate the FA parameters, state-
dependent detection probability, and the multitarget state [13],
[93], [95]. They have also been extended to multiple models
[115], [94], extended targets [90], [28], [56], superpositional
measurements [111], multiple sensors [87], [89], [92], [28]
and distributed multitarget filtering [11], [153]. We refer the
reader to the text [96] for more details on advances in PHD
filtering.
F. The Generalized Labeled Multi-Bernoulli Tracker
In addition to the PHD/CPHD filters, other approximations of the multitarget Bayes filter include the multi-Bernoulli filters [159], [160] and their extensions [122], [113], [162], [170]. While these filters are not formulated to output tracks, their generalization, the Generalized Labeled Multi-Bernoulli (GLMB) filter, is [163], [164].

Targets are labeled by an ordered pair of integers ℓ = (k, i), where k is the time of birth, and i is a unique index to distinguish targets born at the same time. Figure 12 illustrates the assignment of labels to target trajectories. The label space for targets born at time k is denoted as L_k and the label space for targets at time k (including those born prior to k) is denoted as L_{0:k}. Note that L_{0:k} = L_{0:k−1} ∪ L_k.
VWDWHVSDFH
WLPH WUDFNV
 «

 
PXOWLWDUJHWVWDWHV
Fig. 12. A label assignment example: The two tracks born at time 1 are given
labels (1,1) and (1,2), while the track born at time 4 is given label (4,1).
An existing target at time k has state (x, ℓ) consisting of the kinematic/feature x ∈ X and label ℓ ∈ L_{0:k}, i.e. the (labeled) single-target state space is the Cartesian product X × L_{0:k}. To ensure that the labels of a multitarget state X ⊂ X × L_{0:k} are distinct, we require X and the set of labels of X, denoted as L(X), to have the same cardinality. The function Δ(X) ≜ δ_{|X|}[|L(X)|] is called the distinct label indicator.

An association map at time k is a function θ: L_{0:k} → {0, 1, ..., |Z|} such that θ(ℓ) = θ(ℓ′) > 0 implies ℓ = ℓ′. Such a function can be regarded as an assignment of labels to measurements, with undetected labels assigned to 0. The set of all such association maps is denoted as Θ_k; the subset of association maps with domain L is denoted by Θ_k(L); and Θ_{0:k} ≜ Θ_0 × ... × Θ_k.
In the GLMB filter, the multitarget filtering density at time k−1 is a GLMB of the form

π_{k−1}(X | Z^{k−1}) = Δ(X) Σ_{ξ∈Θ_{0:k−1}} w^{(ξ)}_{k−1}(L(X)) [p^{(ξ)}_{k−1}]^X,    (73)

where each p^{(ξ)}_{k−1}(·, ℓ) is a probability density, and each weight w^{(ξ)}_{k−1}(L) is non-negative with

Σ_{L∈F(L)} Σ_{ξ∈Θ_{0:k−1}} w^{(ξ)}_{k−1}(L) = 1.

The cardinality distribution of the GLMB in (73) is given by

ρ_{k−1}(n) = Σ_{L∈F(L)} Σ_{ξ∈Θ_{0:k−1}} δ_n(|L|) w^{(ξ)}_{k−1}(L).    (74)

Each ξ = (θ_0, ..., θ_{k−1}) ∈ Θ_{0:k−1} represents a history of association maps up to time k−1, and contains the history of target labels, encapsulating both births and deaths. A tractable suboptimal multitarget estimate is obtained by the following procedure: determine the MAP cardinality estimate n*; determine the label set L* and ξ* with the highest weight w^{(ξ)}_{k−1}(L) among those with cardinality n*; determine the expected values of the states from p^{(ξ*)}_{k−1}(·, ℓ), ℓ ∈ L* [163].
The set of targets born at time k is modelled by a GLMB with one term: f_{Γ,k}(X) = Δ(X) w_{Γ,k}(L(X)) [p_{Γ,k}]^X (a full GLMB birth can also be easily accommodated) [163]. Since the label of a target does not evolve with time, we have

f_{k|k−1}(x_k, ℓ_k | x_{k−1}, ℓ_{k−1}) = f_{k|k−1}(x_k | x_{k−1}, ℓ_{k−1}) δ_{ℓ_{k−1}}[ℓ_k].
The GLMB density is a conjugate prior with respect to the standard multitarget likelihood function and is also closed under the multitarget prediction. Under the standard multitarget model, if the multitarget filtering density at the previous time, π_{k−1}, is a GLMB of the form (73), then the multitarget prediction density π_{k|k−1} is a GLMB given by [163]

π_{k|k−1}(X | Z^{k−1}) = Δ(X) Σ_{ξ∈Θ_{0:k−1}} w^{(ξ)}_{k|k−1}(L(X)) [p^{(ξ)}_{k|k−1}]^X,

where

w^{(ξ)}_{k|k−1}(L) = w^{(ξ)}_{S,k|k−1}(L ∩ L_{0:k−1}) w_{Γ,k}(L ∩ L_k),

p^{(ξ)}_{k|k−1}(x, ℓ) = 1_{L_{0:k−1}}(ℓ) p^{(ξ)}_{S,k|k−1}(x, ℓ) + 1_{L_k}(ℓ) p_{Γ,k}(x, ℓ),

w^{(ξ)}_{S,k|k−1}(L) = [P̄^{(ξ)}_{S,k|k−1}]^L Σ_{I⊇L} [1 − P̄^{(ξ)}_{S,k|k−1}]^{I−L} w^{(ξ)}_{k−1}(I),

P̄^{(ξ)}_{S,k|k−1}(ℓ) = ⟨P_{S,k|k−1}(·, ℓ), p^{(ξ)}_{k−1}(·, ℓ)⟩,

p^{(ξ)}_{S,k|k−1}(x, ℓ) = ⟨P_{S,k|k−1}(·, ℓ) f_{k|k−1}(x|·, ℓ), p^{(ξ)}_{k−1}(·, ℓ)⟩ / P̄^{(ξ)}_{S,k|k−1}(ℓ),

and the multitarget filtering density π_k is a GLMB given by

π_k(X | Z^k) = Δ(X) Σ_{ξ∈Θ_{0:k−1}} Σ_{θ∈Θ_k} w^{(ξ,θ)}_k(L(X) | Z_k) [p^{(ξ,θ)}_k(·|Z_k)]^X,

where

w^{(ξ,θ)}_k(L | Z) ∝ 1_{Θ_k(L)}(θ) [Ψ̄^{(ξ,θ)}_{Z,k}]^L w^{(ξ)}_{k|k−1}(L),

Ψ̄^{(ξ,θ)}_{Z,k}(ℓ) = ⟨Ψ^{(θ)}_{Z,k}(·, ℓ), p^{(ξ)}_{k|k−1}(·, ℓ)⟩,

Ψ^{(θ)}_{Z,k}(x, ℓ) = ψ_k(z_{θ(ℓ)}; (x, ℓ)) / λ_{F,k},   if θ(ℓ) > 0,
Ψ^{(θ)}_{Z,k}(x, ℓ) = 1 − P_{D,k}(x, ℓ),               if θ(ℓ) = 0,

p^{(ξ,θ)}_k(x, ℓ | Z) = Ψ^{(θ)}_{Z,k}(x, ℓ) p^{(ξ)}_{k|k−1}(x, ℓ) / Ψ̄^{(ξ,θ)}_{Z,k}(ℓ).
The GLMB recursion above is the first exact closed form solution to the Bayes multitarget filter. Truncating the GLMB sum is needed to manage the growing number of components in the GLMB filter. In [164] an implementation of the GLMB filter based on discarding "insignificant" components was detailed, and it was shown that such truncation minimizes the L1 error in the multitarget density. This algorithm has a worst case complexity that is cubic in the number of observations. A one term approximation to the GLMB filter, known as the LMB filter [124], was used to track thousands of targets simultaneously in relatively dense FAs on a laptop computer [165]. Moreover, it has been deployed as a real-time multitarget tracker in automotive safety systems [125].
Recently, the GLMB filter was extended to the more realistic
and very challenging problem of multitarget tracking with
merged measurements [14].
ACKNOWLEDGMENT
The work on the Joint Probabilistic Data Association Filter
(JPDAF) has been supported by ARO W911NF-10-1-0369.
The work of the first author is supported by the Australian
Research Council under Discovery Project DP130104404.
REFERENCES
[1] B. Anderson and J. Moore, Optimal Filtering, Prentice Hall, 1979.
[2] 1.8 gigapixel ARGUS-IS. World’s highest resolution video surveillance
platform by DARPA,
https://www.youtube.com/watch?v=QGxNyaXfJsA.
[3] I. Arasaratnam and S. Haykin, “Cubature Kalman filters,” IEEE Trans.
Automatic Control, vol. 54, no. 6, pp. 1254–1269, 2009.
[4] M. Arulampalam, S. Maskell, N. Gordon and T. Clapp, “A tutorial on
particle filters for online nonlinear/non-Gaussian Bayesian tracking,”
IEEE Trans. Signal Processing, vol. 50, no. 2, pp. 174–188, 2002.
[5] Y. Bar-Shalom and T. E. Fortmann, Tracking and Data Association.
Academic Press, San Diego, 1988.
[6] Y. Bar-Shalom, Y. Chang, and H. Blom, “Automatic track formation in
clutter with a recursive algorithm,” in Y. Bar-Shalom (ed.) Multisensor
Multitarget Tracking: Advanced Applications, pp. 25–42, 1992.
[7] Y. Bar-Shalom, Ed., Multitarget-Multisensor Tracking: Applications
and Advances, vol. II, Norwood, MA, Artech House, 1992, Reprinted
by YBS Publishing, 1998.
[8] Y. Bar-Shalom, X. Li and T. Kirubarajan, Estimation with Applications
to Tracking and Navigation, Wiley, New York, 2001.
[9] Y. Bar-Shalom, S. S. Blackman, and R. J. Fitzgerald, “Dimen-
sionless score function for multiple hypothesis tracking,” IEEE Trans.
Aerospace & Electronic Systems, vol. 43, no. 1, pp. 392-400, 2007.
[10] Y. Bar-Shalom, P. Willett, and X. Tian, Tracking and Data Fusion: A
Handbook of Algorithms, YBS Publishing, 2011.
[11] G. Battistelli, L. Chisci, C. Fantacci, A. Farina, and A. Graziano,
“Consensus CPHD filter for distributed multitarget tracking,IEEE J.
Selected Topics in Signal Processing, vol. 7, no. 3, pp. 508-520, 2013.
[12] M. Baum, P. Willett, Y. Bar-Shalom, and U. Hanebeck, “Approximate
calculation of marginal association probabilities using a hybrid data
association model,” in SPIE Signal and Data Processing of Small
Targets, vol. 9092, 2014.
[13] M. Beard, B.-T. Vo, and B.-N. Vo, “Multi-target filtering with unknown
clutter density using a bootstrap GM-CPHD filter“, IEEE Signal
Processing Letters, vol. 20, no. 4, pp. 323-326, 2013.
[14] M. Beard, B.-T. Vo, and B.-N. Vo, “Bayesian multitarget tracking with
merged measurements using labelled random finite sets,” IEEE Trans.
Signal Processing, vol. 63, no. 6, pp. 1433-1447, 2015.
[15] S. Blackman and R. Popoli, Design and Analysis of Modern Tracking
Systems, Artech House, 1999.
[16] S. Blackman, “Multiple hypothesis tracking for multiple target track-
ing,” IEEE Aerospace & Electronic Systems Magazine, vol. 19, no. 1,
pp. 5–18, 2004.
[17] H. Blom and E. Bloem, “Bayesian tracking of two possibly unresolved
maneuvering targets,IEEE Trans. Aerospace & Electronic Systems,
vol. 43, no. 2, pp. 612-627, Apr. 2007.
[18] H. A. P. Blom and E. A. Bloem, “Probabilistic data association
avoiding track coalescence,IEEE Trans. Aerospace & Electronic
Systems, vol. 45, no. 2, pp. 247–259, 2000.
[19] M. Bolic, P. Djuric, and S. Hong, “Resampling algorithms and archi-
tectures for distributed particle filters,” IEEE Trans. Signal Processing,
vol. 53, no. 7, pp. 2442–2450, 2005.
[20] P. Borwein and T. Erdélyi, "Newton's Identities," Section 1.1.E.2 in Polynomials and Polynomial Inequalities, Springer-Verlag, New York, 1995.
[21] O. Cappé, S. J. Godsill, and E. Moulines, "An overview of existing methods and recent advances in sequential Monte Carlo," Proc. IEEE, vol. 95, no. 5, pp. 899–924, 2007.
[22] S. Challa, M. Morelande, D. Musicki, and R. Evans, Fundamentals of Object Tracking, Cambridge University Press, 2011.
[23] K.-C. Chang and Y. Bar-Shalom, “Joint probabilistic data association
for multitarget tracking with possibly unresolved measurements and
maneuvers,” IEEE Trans. Automatic Control, vol. 29, no. 7, pp. 585-
594, Jul. 1984.
[24] R. Chen and J. Liu, “Mixture Kalman filters,” Journal of the Royal
Statistical Society: Series B (Methodological), vol. 62, part 3, pp. 493-
508, 2000.
[25] N. Chopin, “A sequential particle filter method for static models,
Biometrika, vol. 89, no. 3, pp. 539–551, Aug 2002.
[26] D. Clark and J. Bell, “Convergence results for the particle PHD filter,”
IEEE Trans. Signal Processing, vol. 54, no. 7, pp. 2652–2661, 2006.
[27] D. Clark and B.-N. Vo, “Convergence analysis of the Gaussian mixture
PHD filter,IEEE Trans. Signal Processing, vol. 55, No. 4, pp. 1204–
1212, 2007.
[28] D. Clark, and R. Mahler “Generalized PHD filters via a general chain
rule,” Proc. Int’l. Conf. Information Fusion, Singapore, July 2012.
[29] S. Coraluppi, C. Carthel, M. Luettgen, and S. Lynch, “All-source track
and identity fusion,” Proc. Nat’l. Symp. Sensor & Data Fusion, San
Antonio TX, June 2000.
[30] S. Coraluppi and C. Carthel, “Modified scoring in multiple-hypothesis
tracking,” J. Advances in Information Fusion, vol 7, no. 2, pp. 153–164,
2012.
[31] S. Coraluppi and C. Carthel, “If a tree falls in the woods, it does make
a sound: Multiple-Hypothesis Tracking with undetected target births,
IEEE Trans. Aerospace & Electronic Systems, vol 50, no. 3, pp. 2379–
2388, 2014.
[32] D. Crisan, “Particle filters-A theoretical perspective,” in Sequential
Monte Carlo Methods in Practice, Doucet A., de Freitas N. and Gordon
N. J., (eds.), pp. 17-41, Springer-Verlag, May 2001.
[33] D. Crouse, M. Guerriero, and P. Willett, “A critical look at the PMHT,”
Journal of Advances in Information Fusion, vol. 4, no. 2, pp. 93-116,
Dec. 2009.
[34] D. Crouse, Private Communication, 2010.
[35] D. Crouse, Y. Bar-Shalom, P. K. Willett, and L. Svensson, “The JPDAF
in practical systems: Computation and snake oil,” Proc. SPIE Conf.
Signal & Data Processing of Small Targets, vol. 7698 , Orlando, FL,
April 2010.
[36] D. Daley and D. Vere-Jones, An introduction to the theory of point
processes. Springer-Verlag, 1988.
[37] F. Daum and J. Huang, “How to avoid normalization of particle flow
for nonlinear filters, Bayesian decisions, and transport,” Proc. SPIE
Conf. Defense & Security, pp. 90920B–90920B, International Society
for Optics and Photonics, 2014.
[38] S. Davey and D. Gray, “Integrated track maintenance for the PMHT via
the hysteresis model,” IEEE Trans. Aerospace & Electronic Systems,
vol. 43, no. 1, pp. 93-111, Jan. 2007.
[39] S. Davey, M. Rutten, and B. Cheung, “A comparison of detection
performance for several Track-Before-Detect algorithms,EURASIP J.
Advances in Signal Processing, vol. 2008, no. 1, article 41, 2008.
[40] S. Deb, M. Yeddanapudi, K. Pattipati, and Y. Bar-Shalom, “A gen-
eralized S-D assignment algorithm for multisensor-multitarget state
estimation,” IEEE Trans. Aerospace & Electronic Systems, vol. 33, no.
2, pp. 523-538, Apr. 1997.
[41] P. Del Moral, Mean field simulation for Monte Carlo integration, Chap-
man & Hall/CRC Monographs on Statistics & Applied Probability,
2013.
[42] T. Ding and M. J. Coates, “Implementation of the Daum-Huang exact-
flow particle filter,Proc. IEEE Statistical Signal Processing Workshop
(SSP), pp. 257–260, 2012.
[43] R. Douc, O. Cappé, and E. Moulines, "Comparison of resampling schemes for particle filtering," Proc. Int'l Symp. Image and Signal Processing and Analysis, Istanbul, Turkey, Sep 2005.
[44] A. Doucet, S. Godsill, and C. Andrieu, “On sequential Monte Carlo
sampling methods for Bayesian filtering,” Statist. Comput., vol. 10, no.
3, pp. 197–208, 2000.
[45] A. Doucet, N. de Freitas, and N. J. Gordon, "An introduction to sequential Monte Carlo methods," in Sequential Monte Carlo Methods in Practice, A. Doucet, N. de Freitas, and N. J. Gordon, Eds. New York: Springer-Verlag, 2001.
[46] A. Doucet and A. M. Johansen, “A tutorial on particle filtering and
smoothing: Fifteen years later,Handbook of Nonlinear Filtering, vol.
12, pp. 656–704, 2009.
[47] O. Erdinc, P. Willett, and Y. Bar-Shalom. “The bin-occupancy filter
and its connection to the PHD filters.“ IEEE Trans. Signal Processing,
vol. 57, No. 11, pp. 4232–4246, 2009.
[48] M. Feldmann, D. Franken, and W. Koch, “Tracking of extended
objects and group targets using random matrices,” IEEE Trans. Signal
Processing, vol. 59, no. 4, pp. 1409-1420, Apr. 2011.
[49] A. F. García-Fernández, J. Grajal, and M. R. Morelande, "Two-layer particle filter for multiple target detection and tracking," IEEE Trans. Aerospace & Electronic Systems, vol. 49, no. 3, pp. 1569–1588, July 2013.
[50] A. Gelb, Editor, Applied Optimal Estimation, MIT Press, 1974.
[51] K. Gilholm and D. Salmond, “Spatial distribution model for tracking
extended objects,” IEE Proceedings on Radar, Sonar and Navigation,
vol. 152, no. 5, pp. 364-371, Oct. 2005.
[52] W. R. Gilks and C. Berzuini, “Following a moving target–Monte Carlo
inference for dynamic Bayesian models,” J. R. Statist. Soc. B, vol. 63,
pp. 127–146, 2001.
[53] A. Gning, L. Mihaylova, S. Maskell, S. Pang, and S. Godsill, “Group
object structure and state estimation with evolving networks and Monte
Carlo methods,” IEEE Trans. Signal Processing, vol. 59, no. 4, pp.
1383-1396, Apr. 2011.
[54] I. Goodman, R. Mahler, and H. Nguyen, Mathematics of Data Fusion.
Kluwer Academic Publishers, 1997.
[55] N. J. Gordon, D. J. Salmond, and A. F. M. Smith, “Novel approach to
nonlinear/non-Gaussian Bayesian state estimation,” IEE Proceedings-
F, vol. 140, no. 2, pp. 107–113, 1993.
[56] K. Granstrom, and U. Orguner, “A PHD filter for tracking multiple ex-
tended targets using random matrices“ IEEE Trans. Signal Processing,
vol. 60, no. 11, pp. 5657–5671, 2012.
[57] B. K. Habtemariam, R. Tharmarasa, T. Thayaparan, M. Mallick, and T.
Kirubarajan, “A Multiple detection probabilistic data association filter,”
IEEE J. Selected Topics in Signal Processing, vol. 7, no. 3, pp. 461–
471, June 2013.
[58] A. C. Harvey, Forecasting, Structural Time Series Models and the
Kalman Filter, Cambridge University Press, 1989.
[59] Y-C. Ho and R. C. K. Lee, “A Bayesian approach to problems in
stochastic estimation and control,” IEEE Trans. Automatic Control, vol.
9, no. 4, pp. 333–339, 1964.
[60] J. Hoffman and R. Mahler, “Multitarget miss distance via optimal
assignment,” IEEE Trans. Sys., Man, and Cybernetics-Part A, vol. 34,
no. 3, pp. 327–336, 2004.
[61] P. Horridge and S. Maskell, “Real-time tracking of hundreds of
targets with efficient exact JPDAF implementation,” Proc. Int’l. Conf.
Information Fusion, Florence, Italy, July 2006.
[62] R. Hoseinnezhad, B.-N.Vo, B.-T. Vo, and D. Suter, “Visual tracking of
numerous targets via multi-Bernoulli filtering of image data,” Pattern
Recognition, vol. 45, no. 10, pp. 3625-3635, 2012.
[63] R. Hoseinnezhad, B.-N. Vo and B.-T. Vo, “Visual Tracking in Back-
ground Subtracted Image Sequences via Multi-Bernoulli Filtering,”
IEEE Trans. Signal Processing, vol. 61, no. 2, pp. 392-397, 2013.
[64] A. Jazwinski, Stochastic Processes and Filtering Theory, Academic
Press, 1970.
[65] B. Jia, M. Xin, and Y. Cheng, “High-degree cubature Kalman filter,”
Automatica, vol. 49, no. 2, pp. 510–518, 2013.
[66] A. Johansen, S. Singh, A. Doucet, and B.-N Vo, “Convergence of the
SMC implementation of the PHD filter,Methodology & Computing
in Applied Probability, vol. 8, no. 2, pp. 265–291, 2006.
[67] S. Julier, J. Uhlmann and H.F. Durrant-Whyte, “A new method for
the nonlinear transformation of means and covariances in filters and
estimators,” IEEE Trans. Automatic Control, vol. AC-45, no. 3, pp.
477–482, March 2000.
[68] S. J. Julier and J. K. Uhlmann, “Unscented filtering and nonlinear
estimation“, Proc. IEEE, vol. 92, no. 3, pp. 401–422, March 2004.
[69] R. E. Kalman, “A new approach to linear filtering and prediction
problems,” Trans. ASME J. Basic Engineering, vol. 82, pp. 35–45,
1960.
[70] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation
Theory, Prentice Hall, Englewood Cliffs, New Jersey, 1993.
[71] W. Koch and G. van Keuk, “Multiple hypothesis track maintenance
with possibly unresolved measurements,” IEEE Trans. Aerospace &
Electronic Systems, vol. 33, no. 3, pp. 883-892, Jul. 1997.
[72] J. Koch, “Bayesian approach to extended object and cluster tracking
using random matrices,” IEEE Trans. Aerospace & Electronic Systems,
vol. 44, no. 3, pp. 1042-1059, Jul. 2008.
[73] W. Koch, Tracking and Sensor Data Fusion: Methodological Frame-
work and Selected Applications, Springer, Heidelberg, 2014.
[74] J. H. Kotecha and P. M. Djuric, “Gaussian particle filtering,” IEEE
Trans. Signal Processing, vol. 51, no. 10, pp. 2592–2601, 2003.
[75] J. H. Kotecha and P. M. Djuric, “Gaussian sum particle filtering,”
IEEE Trans. Signal Processing, vol. 51, no. 10, pp. 2602–2612, 2003.
[76] T. Kurien, “Issues in the design of practical multitarget tracking
algorithms,” Chapter 3 in Multitarget-Multisensor Tracking: Advanced
Applications, Ed. Y. Bar-Shalom, Artech House, pp. 43–83, 1990.
[77] X. R. Li, “Engineer’s guide to variable-structure multiple-model esti-
mation for tracking,” Chapter 10, in Multitarget-Multisensor Tracking:
Applications and Advances, Volume III, Ed. Y. Bar-Shalom and W. D.
Blair, pp. 449–567, Aetech House, 2000.
[78] X. Li and V. Jilkov, “Survey of maneuvering target tracking, Part I:
Dynamic models,” IEEE Trans. Aerospace & Electronic Systems, vol.
39, no. 4, pp. 1333-1364, 2003.
[79] X. R. Li and V. P. Jilkov, “A survey of maneuvering target tracking,
Part V: Multiple-Model methods,IEEE Trans. Aerospace & Electronic
Systems, vol. 41, no. 4, pp. 1255–1321, 2005.
[80] J. S. Liu and R. Chen, “Sequential Monte Carlo methods for dynamical
systems,” J. Amer. Statist. Assoc., vol. 93, pp. 1032–1044, 1998.
[81] C. Lundquist, K. Granström, and U. Orguner, "An extended target CPHD filter and a Gamma Gaussian inverse Wishart implementation," IEEE J. Selected Topics in Signal Processing, vol. 7, no. 3, pp. 472–483, 2013.
[82] R. Mahler, “Global integrated data fusion,Proc. 7th Nat’l. Symp.
Sensor Fusion, vol. 1, (Unclassified) Sandia National Laboratories,
Albuquerque, ERIM Ann Arbor MI, pp. 187–199, 1994.
[83] R. Mahler, “A theoretical foundation for the Stein-Winter Probability
Hypothesis Density (PHD) multitarget tracking approach,” Proc. MSS
Nat’l. Symp. Sensor & Data Fusion, vol. I (Unclassified), San Antonio
TX, June 2000.
[84] R. Mahler, “Multitarget Bayes filtering via first-order multitarget mo-
ments,” IEEE Trans. Aerospace & Electronic Systems, vol. 39, no. 4,
pp. 1152–1178, 2003.
[85] R. Mahler, “PHD filters of higher order in target number,IEEE Trans.
Aerospace & Electronic Systems, vol. 43, no. 4, pp. 1523–1543, 2007.
[86] R. Mahler, Statistical Multisource-Multitarget Information Fusion,
Artech House, 2007.
[87] R. Mahler, “CPHD filters for superpositional sensors,” Proc. SPIE
Signal & Data Processing of Small Targets, O. E. Drummond (ed.),
vol. 7445, 2009.
[88] R. Mahler, “The multisensor PHD filter: I. General solution via
multitarget calculus,” Proc. SPIE Signal Processing, Sensor Fusion,
and Target Recognition XVIII, vol. 7336, pp. 73360E-12, May 2009.
[89] R. Mahler, “The multisensor PHD filter: II. Erroneous solution via
Poisson magic,” Proc. SPIE Signal Processing, Sensor Fusion, and
Target Recognition XVIII, vol. 7336, pp. 73360D-12, May 2009.
[90] R. Mahler, “PHD filters for nonstandard targets, I: Extended targets,
Proc. Int’l. Conf. Information Fusion, Seattle, July 2009.
[91] R Mahler, “PHD filters for nonstandard targets, II: Unresolved targets,
Proc. Int’l. Conf. Information Fusion, Seattle, July 2009.
[92] R. Mahler, “Approximate multisensor CPHD and PHD filters,” Proc.
Int’l. Conf. Information Fusion, Edinburg, UK, July, 2010.
[93] R. Mahler, B.-T. Vo and B.-N. Vo “CPHD filtering with unknown
clutter rate and detection profile,” IEEE Trans. Signal Processing, vol.
59, No. 8, pp. 3497–3513, 2011.
[94] R. Mahler, “On multitarget jump-Markov filters,Proc. Int’l Conf. on
Information Fusion, pp. 149–156, Singapore, July 2012.
[95] R. Mahler, and B.-T. Vo, “An improved CPHD filter for unknown
clutter backgrounds,” Proc. SPIE Defense & Security, 2014.
[96] R. Mahler, Advances in Statistical Multisource-Multitarget Information
Fusion, Artech House, 2014.
[97] M. Mallick and S. Arulampalam, “Comparison of nonlinear filtering
algorithms in ground moving target indicator (GMTI) target tracking,
Proc. SPIE, vol. 5204, San Diego, CA, August 2003.
[98] M. Mallick and B. F. La Scala, “Comparison of single-point and
two-point difference track initiation algorithms using position measure-
ments,” Proc. Int’l Colloquium on Information Fusion, Xian, China,
August 2007.
[99] M. Mallick, V. Krishnamurthy, and B.-N. Vo, Eds., Integrated Track-
ing, Classification, and Sensor Management: Theory and Applications,
Wiley/IEEE, 2012.
[100] M. Mallick, M. Morelande, L. Mihaylova, S. Arulampalam,
and Y. Yan, “Angle-only filtering in three dimensions,” Ch. 1, in
Integrated Tracking, Classification, and Sensor Management: Theory
and Applications, M. Mallick, V. Krishnamurthy, and B.-N. Vo, Eds.,
Wiley/IEEE, pp. 3–42, December 2012.
[101] M. Mallick, S. Coraluppi, and C. Carthel, “Multitarget tracking
using multiple hypotheses tracking,” Chapter 5, in Integrated Tracking,
Classification, and Sensor Management: Theory and Applications,M.
Mallick, V. Krishnamurthy, and B.-N. Vo, Eds., Wiley/IEEE, pp. 165–
201, December 2012.
[102] M. Mallick, S. Rubin, B.-N. Vo, “An introduction to force and
measurement modeling for space object tracking,” Proc. Int’l. Conf.
Information Fusion, Istanbul, Turkey, July 9-12, 2013.
[103] S. Maskell, M. Briers, and R. Wright, “Fast mutual exclusion,in SPIE
Signal and Data Processing of Small Targets, vol. 5428, 2004.
[104] E. Mazor, A. Averbuch, Y. Bar-Shalom, and J. Dayan, “Interacting
Multiple Model methods in target tracking: A survey,” IEEE Trans.
Aerospace & Electronic System., vol. 34, no. 1, pp. 103–123, 1998.
[105] J. S. Meditch, “A survey of data smoothing for linear and nonlinear
dynamic systems,” Automatica , vol. 9, no. 2, pp. 151–162, 1973.
[106] R. J. Meinhold and N. D. Singpurwalla, “Understanding the Kalman
filter,The American Statistician, vol. 37, no. 2, pp. 123–127, 1983.
[107] J. Miguez, “Analysis of parallelizable resampling algorithms for par-
ticle filtering,” Elsevier Signal Processing, vol. 87, no. 12, pp. 3155–
3174, Dec 2007.
[108] L. Mihaylova, A. Carmi, F. Septier, A. Gning, S. Pang, and S. Godsill,
“Overview of Bayesian sequential Monte Carlo methods for group and
extended object tracking,” Digital Signal Processing, vol. 25, 2014.
[109] D. Musicki and R. Evans, “Joint integrated probabilistic data associ-
ation,” IEEE Trans. Automatic Control, vol. AC-40, no. 3, pp. 1093–
1099, 2004.
[110] D. Musicki and B. La Scala, “Multi-target tracking in clutter without
measurement assignment,” IEEE Trans. Aerospace & Electronic Sys-
tems, vol. 44, no. 3, pp. 877-896, Jul. 2008.
[111] S. Nannuru, M. Coates, and R. Mahler, “Computationally-tractable
approximate PHD and CPHD filters for superpositional sensors, “ IEEE
J. Selected Topics in Signal Processing, vol. 7, no. 3, pp. 410–420,
2013.
[112] S. Oh, S. Russell, and S. Sastry, “Markov chain Monte Carlo data association for multitarget tracking,” IEEE Trans. Automatic Control, vol. 54, no. 3, pp. 481–497, 2009.
[113] C. Ouyang, H. Ji, and C. Li, “Improved multi-target multi-Bernoulli filter,” IET Radar, Sonar & Navigation, vol. 6, no. 6, pp. 458–464, Jul. 2012.
[114] S. Pang, J. Li, and S. Godsill, “Detection and tracking of coordinated
groups,” IEEE Trans. Aerospace & Electronic Systems, vol. 47, no. 1,
pp. 472-502, Jan. 2011.
[115] A. Pasha, B.-N. Vo, H. D. Tuan, and W. K. Ma, “A Gaussian mixture PHD filter for jump Markov system model,” IEEE Trans. Aerospace & Electronic Systems, vol. 45, no. 3, pp. 919–936, 2009.
[116] K. Pattipati, S. Deb, Y. Bar-Shalom, and R. Washburn, “A new relaxation algorithm and passive sensor data association,” IEEE Trans. Automatic Control, vol. 37, no. 2, pp. 198–213, Feb. 1992.
[117] K. R. Pattipati, R. L. Popp, and T. T. Kirubarajan, “Survey of assign-
ment techniques for multitarget tracking,” Chapter 2, in Multitarget-
Multisensor Tracking: Applications and Advances, vol. III, Eds. Y. Bar-
Shalom and D. Blair, pp. 77–159, Artech House, Norwood, MA, USA,
2000.
[118] M. Pitt and N. Shephard, “Filtering via simulation: Auxiliary particle
filters,” J. Amer. Statist. Assoc., vol. 94, no. 446, pp. 590–599, 1999.
[119] A. Poore and N. Rijavec, “A Lagrangian relaxation algorithm for mul-
tidimensional assignment problems arising from multitarget tracking,”
SIAM J. Optimization, vol. 3, no. 3, pp. 544-563, August 1993.
[120] A. B. Poore and A. J. Robertson, “A new Lagrangian relaxation
based algorithm for a class of multidimensional assignment problems,”
Computational Optimization & Applications, vol. 8, no. 2, pp. 129–150,
1997.
[121] G. W. Pulford and R. J. Evans, “A multipath data association tracker for over-the-horizon radar,” IEEE Trans. Aerospace & Electronic Systems, vol. 34, no. 4, pp. 1165–1183, 1998.
[122] V. Ravindra, L. Svensson, L. Hammarstrand, and M. Morelande, “A cardinality preserving multitarget multi-Bernoulli RFS tracker,” Proc. Int’l. Conf. Information Fusion, Singapore, July 2012.
[123] D. Reid, “An algorithm for tracking multiple targets,” IEEE Trans.
Automatic Control, vol. 24, no. 6, pp. 843–854, 1979.
[124] S. Reuter, B.-T. Vo, B.-N. Vo, and K. Dietmayer, “The labelled multi-Bernoulli filter,” IEEE Trans. Signal Processing, vol. 62, no. 12, pp. 3246–3260, 2014.
[125] S. Reuter, Multi-Object Tracking Using Random Finite Sets, Ph.D. Thesis, Universität Ulm, Ulm, Germany, 2014.
[126] B. Ristic, S. Arulampalam, and N. J. Gordon, Beyond the Kalman
Filter: Particle Filters for Tracking Applications. Artech House, 2004.
[127] B. Ristic, D. Clark, B.-N. Vo, and B.-T. Vo, “Adaptive target birth intensity in PHD and CPHD filters,” IEEE Trans. Aerospace & Electronic Systems, vol. 48, no. 2, pp. 1656–1668, 2012.
[128] C. P. Robert, The Bayesian Choice, Second Edition, Springer, New
York, 2007.
[129] A. Rodningsby, Y. Bar-Shalom, O. Hallingstad, and J. Glattetre, “Multitarget tracking in the presence of wakes,” J. Advances in Information Fusion, vol. 4, no. 2, pp. 117–145, 2009.
[130] J. Roecker, “Suboptimal joint probabilistic data association,” IEEE
Trans. Aerospace & Electronic Systems, vol. 29, no. 2, pp. 510–517,
1993.
[131] J. Roecker, “A class of near optimal JPDA algorithms,” IEEE Trans.
Aerospace & Electronic Systems, vol. 30, no. 2, pp. 504–510, 1994.
[132] K. Romeo, D. Crouse, Y. Bar-Shalom, and P. Willett, “The JPDAF in
practical systems: approximations,” in SPIE Signal and Data Process-
ing of Small Targets, vol. 7698, 2010.
[133] Y. Ruan and P. Willett, “The turbo PMHT,” IEEE Trans. Aerospace & Electronic Systems, vol. 40, no. 4, pp. 1388–1398, 2004.
[134] A. Runnalls, “Kullback-Leibler approach to Gaussian mixture reduc-
tion,” IEEE Trans. Aerospace & Electronic Systems, vol. 43, no. 3, pp.
989-999, Jul. 2007.
[135] D. Salmond, “Mixture reduction algorithms for target tracking in
clutter,” in SPIE Signal and Data Processing of Small Targets, vol.
1305, 1990.
[136] D. J. Salmond and H. Birch, “A particle filter for track-before-detect,” Proc. American Control Conference, vol. 5, pp. 3755–3760, Arlington, VA, USA, June 2001.
[137] D. Salmond, “Mixture reduction algorithms for point and extended object tracking in clutter,” IEEE Trans. Aerospace & Electronic Systems, vol. 45, no. 2, pp. 667–686, Apr. 2009.
[138] T. Sathyan, A. Sinha, T. Kirubarajan, M. McDonald, and T. Lang,
“MDA-based data association with prior track information for passive
multitarget tracking,” IEEE Trans. Aerospace & Electronic Systems,
vol. 47, no. 1, pp. 539–556, 2011.
[139] T. Sathyan, T-J. Chin, S. Arulampalam, and D. Suter, “A multiple
hypothesis tracker for multitarget tracking with multiple simultaneous
measurements,” IEEE J. Selected Topics in Signal Processing, vol. 7,
no. 3, pp. 448–460, 2013.
[140] D. Schuhmacher, B.-T. Vo, and B.-N. Vo, “A consistent metric for performance evaluation of multi-object filters,” IEEE Trans. Signal Processing, vol. 56, no. 8, pp. 3447–3457, 2008.
[141] R. Singer and J. Stein, “An optimal tracking filter for processing sensor
data of imprecisely determined origin in surveillance systems,” Proc.
IEEE Conf. Decision & Control, Florida, USA, pp. 171–175, Dec 1971.
[142] S. Singh, B.-N. Vo, A. Baddeley, and S. Zuev, “Filters for spatial point
processes,” SIAM J. Control and Optimization, vol. 48, no. 4, pp. 2275–
2295, 2009.
[143] H. W. Sorenson and D. L. Alspach, “Recursive Bayesian estimation
using Gaussian sum,” Automatica, vol. 7, pp. 465–479, 1971.
[144] L. Stone, R. Streit, T. Corwin, and K. Bell, Bayesian Multiple Target Tracking, 2nd ed., Artech House, 2013.
[145] P. Storms and F. Spieksma, “An LP-based algorithm for the data asso-
ciation problem in multitarget tracking,” Proc. Int’l. Conf. Information
Fusion, Paris, France, July 2000.
[146] R. Streit and T. Luginbuhl, “Probabilistic multiple hypothesis tracking,” Technical Report NUWC-NPT 10,428, Feb. 1995.
[147] D. Svensson, J. Wintenby, and L. Svensson, “Performance evaluation of MHT and GMCPHD in a ground target tracking scenario,” Proc. Int’l. Conf. Information Fusion, Seattle, Washington, USA, July 2009.
[148] L. Svensson, D. Svensson, M. Guerriero, and P. Willett, “Set JPDA
filter for multitarget tracking,” IEEE Trans. Signal Processing, vol. 59,
no. 10, pp. 4677-4691, Oct. 2011.
[149] D. Svensson, Target Tracking in Complex Scenarios, Ph.D. Thesis, Chalmers University of Technology, Göteborg, Sweden, 2010.
[150] D. Svensson, M. Ulmke and L. Hammarstrand, “Multitarget Sensor
Resolution Model and Joint Probabilistic Data Association”, IEEE
Trans. Aerospace & Electronic Systems, vol. 48, no. 4, pp. 3418-3434,
Oct. 2012.
[151] R. Tharmarasa, S. Sutharsan, T. Kirubarajan, and T. Lang, “Multiframe
assignment tracker for MSTWG data,” Proc. Int’l Conf. Information
Fusion, Seattle, WA, USA, pp. 1837–1844, July 2009.
[152] R. Tharmarasa, M. Subramaniam, N. Nadarajah, T. Kirubarajan, and
M. McDonald, “Multitarget passive coherent location with transmitter
origin and target-altitude uncertainties,” IEEE Trans. Aerospace &
Electronic Systems, vol. 48, no. 3, pp. 2530–2550, 2012.
[153] M. Uney, D. Clark, and S. Julier, “Distributed fusion of PHD filters via exponential mixture densities,” IEEE J. Selected Topics in Signal Processing, vol. 7, no. 3, pp. 521–531, 2013.
[154] B.-N. Vo, S. Singh, and A. Doucet, “Sequential Monte Carlo methods
for multitarget filtering with random finite sets,” IEEE Trans. Aerospace
& Electronic Systems, vol. 41, no. 4, pp. 1224–1245, 2005.
[155] B.-N. Vo and W.-K. Ma, “The Gaussian mixture probability hypothesis density filter,” IEEE Trans. Signal Processing, vol. 54, no. 11, pp. 4091–4104, 2006.
[156] B.-T. Vo, B.-N. Vo, and A. Cantoni, “Analytic implementations of the cardinalized probability hypothesis density filter,” IEEE Trans. Signal Processing, vol. 55, no. 7, pp. 3553–3567, 2007.
[157] B.-T. Vo, Random Finite Sets in Multi-Object Filtering, Ph.D Thesis,
University of Western Australia, 2008.
[158] B.-T. Vo, B.-N. Vo, and A. Cantoni, “Bayesian filtering with random
finite set observations,” IEEE Trans. Signal Processing, vol. 56, no. 4,
pp. 1313–1326, 2008.
[159] B.-T. Vo, B.-N. Vo, and A. Cantoni, “The Cardinality Balanced
Multitarget Multi-Bernoulli filter and its implementations,” IEEE Trans.
Signal Processing, vol. 57, no. 2, pp. 409–423, Feb. 2009.
[160] B.-N. Vo, B.-T. Vo, N.-T. Pham, and D. Suter, “Joint detection and estimation of multiple objects from image observations,” IEEE Trans. Signal Processing, vol. 58, no. 10, pp. 5129–5141, 2010.
[161] B.-N. Vo, B.-T. Vo and R. Mahler, “Closed form solutions to forward-
backward smoothing,” IEEE Trans. Signal Processing, vol. 60, no. 1,
pp. 2–17, 2012.
[162] B.-T. Vo, B.-N. Vo, R. Hoseinnezhad, and R. Mahler, “Robust multi-
Bernoulli filtering,” IEEE J. Sel. Topics Signal Processing, vol. 7, no.
3, pp. 399-409, Jun. 2013.
[163] B.-T. Vo, and B.-N. Vo, “Labeled Random Finite Sets and multi-object
conjugate priors,” IEEE Trans. Signal Processing, vol. 61, no. 13, pp.
3460–3475, 2013.
[164] B.-N. Vo, B.-T. Vo, and D. Phung, “Labeled random finite sets and the Bayes multitarget tracking filter,” IEEE Trans. Signal Processing, vol. 62, no. 24, pp. 6554–6567, 2014.
[165] B.-N. Vo, B.-T. Vo, S. Reuter, Q. Lam, and K. Dietmayer, “Towards large scale multi-target tracking,” Proc. SPIE Defense & Security, 90850W, June 2014.
[166] T. Vu, B.-N. Vo, and R. J. Evans, “A Particle Marginal Metropolis-Hastings multitarget tracker,” IEEE Trans. Signal Processing, vol. 62, no. 15, pp. 3953–3964, 2014.
[167] N. Whiteley, S. Singh, and S. Godsill, “Auxiliary Particle implementation of the Probability Hypothesis Density filter,” IEEE Trans. Aerospace & Electronic Systems, vol. 46, no. 3, pp. 1437–1454, 2010.
[168] P. Willett, Y. Ruan, and R. Streit, “PMHT: Some problems and
solutions,” IEEE Trans. Aerospace & Electronic Systems, vol. 38, no.
3, pp. 738–754, 2002.
[169] J. Williams and R. Lau, “Approximate evaluation of marginal association probabilities with belief propagation,” IEEE Trans. Aerospace & Electronic Systems, vol. 50, no. 4, Oct. 2014.
[170] J. Williams, “An efficient, variational approximation of the best fitting multi-Bernoulli filter,” IEEE Trans. Signal Processing, vol. 63, no. 1, pp. 258–273, Jan. 2015.
[171] X. Xie and R. Evans, “Multiple-target tracking and multiple frequency line tracking using hidden Markov models,” IEEE Trans. Signal Processing, vol. 39, no. 12, pp. 2659–2676, 1991.