A Probabilistic Model for Estimating Driver Behaviors
and Vehicle Trajectories in Traffic Environments
Tobias Gindele, Sebastian Brechtel and Rüdiger Dillmann
Institute for Anthropomatics
Karlsruhe Institute of Technology
D-76128 Karlsruhe, Germany
Email: {gindele | brechtel | dillmann}@kit.edu
Abstract—This paper presents a filter that is able to simultaneously estimate the behaviors of traffic participants and anticipate their future trajectories. This is achieved by recognizing the type of situation derived from the local situational context, which subsumes all information relevant for the drivers' decision making. By explicitly taking into account the interactions between vehicles, the filter achieves a comprehensive situational understanding, which is indispensable for autonomous vehicles and driver assistance systems and provides the necessary information for safe behavior decision making or motion planning. The filter is modeled as a Dynamic Bayesian Network. The factored state space, which models the causal dependencies, allows the models to be described in a compact fashion and reduces the computational complexity of the inference process. The filter is evaluated in the context of a highway scenario and shows good performance even with very noisy measurements. The presented framework is intended for traffic environments but can easily be transferred to other robotic domains.
I. INTRODUCTION
Autonomous vehicles need a comprehensive representation of their environment in order to make optimal behavior decisions or to conduct behaviors in a safe manner. The process of acquiring the necessary information is hindered by the fact that most of the environmental states are not directly observable and therefore need to be inferred. For example, there exists no sensor that can perceive the plans of traffic participants. Today's sensory information is often limited to noisy measurements of pose, velocity and some basic geometric features.
While human drivers naturally put themselves into the position of other traffic participants to reason about their behaviors, machine tracking algorithms are usually limited to physical models and simple heuristics. This is not sufficient for robust long-term predictions and, consequently, anticipatory driving. The incorporation of semantics and context information is a key aspect of forward-looking driver assistance and autonomous vehicles. The use of probabilistic methods helps to connect the symbolic and metric behavior representations, while providing a sound way to cope with the uncertainty and inaccuracy of a semantic formulation. The combination of both degrees of abstraction allows, on the one hand, the states of objects to be estimated more exactly, because they are enriched with a complex understanding of situations and their context. On the other hand, the system does not only obtain a symbolic situation classification, but also predicts it into the future, which is the basis for probabilistic decision making.
II. RELATED WORK
State estimation for traffic participants is often performed with a standard filtering approach. Usually the pose of a vehicle is the only state of interest, which allows the use of classical tracking methods like Kalman filters or extended Kalman filters. In the area of multi-target tracking, objects are often assumed to move independently in order to employ a process model that only describes the system dynamics. While this assumption induces some beneficial mathematical properties, since the objects can be tracked individually, it is definitely not valid for traffic scenarios, where the transitions of vehicle states are highly coupled. To solve this problem it is necessary to model these interactions and the different behaviors.
Dagli et al. [1], [2] recognized this problem and proposed a motivation-based approach. Rather than relying solely on the system dynamics, they conclude that it is necessary to infer the motivations and goals of a driver in order to predict his behavior. Another approach is chosen by Zhang et al. [3], who estimate the behaviors of vehicles in intersection scenarios. They choose a hierarchical approach that first estimates the likelihoods of possible paths through the intersection on the abstraction level of lanes in order to identify possible conflicts. In the second stage the behaviors are estimated with a Dynamic Bayesian Network (DBN). Forbes et al. [4] already suggested the use of a DBN to estimate the states of traffic participants, but their focus lay more on the subsequent decision process of the autonomous vehicle than on the state estimation problem. The subproblem of situation recognition is addressed by Meyer et al. [5], who use a relational hidden Markov model to recognize different classes of traffic situations based on a semantic representation. While this approach has the ability to predict situations on a high abstraction level, it cannot estimate quantitative properties like the pose of a vehicle. The problem of behavior estimation is often addressed using hidden Markov models. A special variant are interacting multiple model filters (IMM) [6], which explain the observed behavior of an object as an interaction of several models corresponding to different behavior modes. This shows some similarities to our approach, since we also use multiple models to describe different behaviors.
2010 13th International IEEE Annual Conference on Intelligent Transportation Systems, Madeira Island, Portugal, September 19-22, 2010
978-1-4244-7658-9/10/$26.00 ©2010 IEEE
III. BAYESIAN MODEL
A. Overview
Our approach to the estimation of the desired properties is to formulate it as a time-discrete filtering problem and to apply an appropriate probabilistic inference method to reason about the hidden states.
The main idea is a chain of reasoning steps that results in the prediction of the next state of the global situation. Beginning with the extraction of implicit context information from the vehicle states – like which vehicle drives in front of another – one is able to classify the type of situation each vehicle resides in. Each situation class implies certain behaviors that are likely to be conducted by a driver in this situation. Based on the behavior and context information, the metric realization in the form of a desired trajectory can be inferred. This ultimately closes the loop by enabling the prediction of the next vehicle states.
As can be seen, we use different levels of abstraction to achieve this goal, thereby combining symbolic and subsymbolic representations. The following sections describe the necessary models and representations in detail.
B. Vehicle Model
To represent a vehicle state we apply a kinematic one-track model, which virtually replaces the front and the rear wheels by one wheel in the middle. Neglecting any slipping effects, the car rotates around its instantaneous center of rotation (ICR). Therefore the car's state depends directly on the orientation of the wheels and the length of the vehicle, as depicted in Figure 1.
Fig. 1. One-track model of a nonholonomic car.
The set of vehicle states X ⊆ R^5 comprises the position (x1, x2)^T of the vehicle together with the gear angle ψ, the longitudinal velocity v in the direction of the gear angle, and the steering angle δ. A vehicle state is therefore defined as x = (x1, x2, ψ, v, δ)^T ∈ X. The length of the vehicle is given by l and assumed constant.
The basic controls that a driver can directly influence are the acceleration a and the steering rate ω of the vehicle, so a control u is defined as u = (a, ω)^T ∈ U. The total system equations are provided by

d/dt (x1, x2, ψ, v, δ)^T = (v cos ψ, v sin ψ, (v/l) tan δ, a, ω)^T    (1)

C. Trajectories
A trajectory is a time-ordered set of states of a dynamical system. In our case a trajectory defines a vehicle state over a time interval. Given a function of the vehicle position

y(t) : [ts, te] → R^2

together with its first and second derivatives and the system equations (1), all dimensions of the vehicle state x ∈ X are defined.
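For intuition, the system equations (1) can be integrated numerically. Below is a minimal explicit Euler sketch in Python; the state layout follows (1), while the vehicle length and step size are illustrative choices, not values from the paper:

```python
import math

def one_track_step(x, u, l=2.7, dt=0.1):
    """One explicit Euler step of the kinematic one-track model (1).

    x = (x1, x2, psi, v, delta): position, gear angle, speed, steering angle.
    u = (a, omega): acceleration and steering rate.
    l (vehicle length) and dt (step size) are illustrative values.
    """
    x1, x2, psi, v, delta = x
    a, omega = u
    return (
        x1 + dt * v * math.cos(psi),         # dx1/dt = v cos(psi)
        x2 + dt * v * math.sin(psi),         # dx2/dt = v sin(psi)
        psi + dt * v * math.tan(delta) / l,  # dpsi/dt = (v/l) tan(delta)
        v + dt * a,                          # dv/dt = a
        delta + dt * omega,                  # ddelta/dt = omega
    )

# Driving straight (delta = 0, no control input) advances the car along its heading.
state = one_track_step((0.0, 0.0, 0.0, 10.0, 0.0), (0.0, 0.0))
```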
The trajectories we need to represent are realizations of actual behaviors and therefore have to fulfill the nonholonomic constraints. To obtain a compact representation we further state that all trajectories are entirely defined by their start and end states and the corresponding time interval:

y = (xs, xe, ts, te)^T ∈ {X × R}^2 = T

This constrains the complexity of behaviors, since the space of possible trajectories is restricted.
The intermediate values are obtained by interpolation. We chose a Bézier interpolation scheme. Higher-order Bézier curves have proven to be useful in the context of trajectory representation [7]. Besides their compact representation, they produce jerk-continuous trajectories which obey the nonholonomic constraints of the vehicle. Other interpolation mechanisms that generate sufficiently smooth curves, like spline interpolation, could be used as well.
A Bézier curve of degree n with control points {P0, ..., Pn} is given by
P[ts,te](t) = Σ_{i=0}^{n} B_i^n(t) P_i    (2)

where the Bernstein polynomials B_i^n are defined as

B_i^n(t) = (n choose i) ((t − ts)/(te − ts))^i ((te − t)/(te − ts))^(n−i).

The derivative itself is again a Bézier curve of degree n − 1 with modified control points D_i:

d/dt P[ts,te](t) = Σ_{i=0}^{n−1} B_i^{n−1}(t) D_i    (3)

with

D_i = (n/(te − ts)) (P_{i+1} − P_i)    (4)

To fully define the cubic Bézier trajectory function yB(t) given y, the two inner control points need to be defined. They can be derived from the velocity and gear angle constraints
of the start and end states together with the first derivative of the Bézier curve (n = 3):

P1 = P0 + ((te − ts)/3) (vs cos ψs, vs sin ψs)^T
P2 = P3 − ((te − ts)/3) (ve cos ψe, ve sin ψe)^T

D. Behaviors
Behaviors are symbolic representations of context-dependent motion primitives that a vehicle is able to conduct. We assume that a finite set of behaviors is sufficient to describe all possible maneuvers of a real driver. We chose the classes based on the semantic equivalents that humans use to describe the maneuvers of a vehicle. This is not an optimal choice, but it has the advantage that the estimates can be interpreted by humans.
For the evaluation of the approach we modeled several basic behaviors for highway driving situations. Naturally, the set of behaviors can be extended to other domains, such as urban traffic, where e.g. behaviors for intersection handling are needed. It is important that the situation description comprises all features necessary to discriminate between the behavior classes.
The set of behaviors B consists of {free_ride, following, acceleration_phase, sheer_out, overtake, sheer_in}, which is sufficient to describe the most common maneuvers in highway scenarios. Examples of the different behavior patterns together with corresponding trajectories are shown in Figure 2.

Fig. 2. Behaviors with trajectories: A free_ride, B following, C acceleration_phase, D sheer_out, E overtake, F sheer_in.
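The cubic trajectory construction of Section III-C, i.e. equations (2)-(4) with the inner control points P1 and P2 derived from the boundary states, can be sketched in code. The following Python version is a simplified illustration that tracks only the 2-D position; the state tuples and helper names are hypothetical:

```python
import math

def cubic_bezier_from_states(xs, xe, ts, te):
    """Control points of a cubic Bezier trajectory from boundary states.

    xs, xe = (x1, x2, psi, v) at start and end; the steering angle is omitted
    for brevity. Implements P1 = P0 + ((te-ts)/3)(vs cos psis, vs sin psis)
    and P2 = P3 - ((te-ts)/3)(ve cos psie, ve sin psie).
    """
    p0 = (xs[0], xs[1])
    p3 = (xe[0], xe[1])
    h = (te - ts) / 3.0
    p1 = (p0[0] + h * xs[3] * math.cos(xs[2]), p0[1] + h * xs[3] * math.sin(xs[2]))
    p2 = (p3[0] - h * xe[3] * math.cos(xe[2]), p3[1] - h * xe[3] * math.sin(xe[2]))
    return p0, p1, p2, p3

def bezier_eval(points, ts, te, t):
    """Evaluate the Bezier curve (2) in its Bernstein form at time t."""
    n = len(points) - 1
    s = (t - ts) / (te - ts)  # normalized curve parameter in [0, 1]
    out = [0.0, 0.0]
    for i, p in enumerate(points):
        b = math.comb(n, i) * s**i * (1.0 - s) ** (n - i)  # Bernstein polynomial
        out[0] += b * p[0]
        out[1] += b * p[1]
    return tuple(out)
```

With equal headings and speeds at both ends of a straight segment, the construction reproduces uniform motion along the segment, as expected.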
a) free_ride: This behavior applies if there are no other vehicles or obstacles ahead. The main aspects a driver has to consider are speed limits, sight conditions and the curvature of the road.
b) following: A vehicle is in following mode if another vehicle drives ahead on the same lane. Unlike in the free_ride behavior, the driver has to adapt to the velocity of the vehicle ahead to keep the necessary safety distance.
c) acceleration_phase: The complete overtaking process is separated into four phases. The first is the acceleration phase, where the driver accelerates to become fast enough to drive past the vehicle ahead.
d) sheer_out: In the sheer-out phase the car is often still accelerating. The car moves smoothly in an S-shaped trajectory to the opposite lane. When it reaches the center of the opposite lane, the overtake phase begins.
e) overtake: In the overtake phase the overtaking car drives on the opposite lane until it is far enough ahead of the other vehicle to sheer back in.
f) sheer_in: By sheering in, the car changes back from the opposite lane to the original lane. As soon as the car reaches the original lane, it changes back to free_ride or following.
All the different behaviors induce constraints on the corresponding trajectories. A requirement for a trajectory of the sheer_out behavior is, e.g., that the end point lies on the opposite lane to the left and that the velocity is significantly higher than that of the overtaken vehicle. These constraints are expressed by a conditional probability distribution function (cdf) over the trajectory space, depending on the behavior and the situational context.
E. Situation Context
The situational context describes the properties of the vehicle's surroundings. It consists of relations between objects, which in our case are mainly distances. The relations are extracted by analyzing the implicit features of the situation, thereby making them explicit. These relations are the direct basis for the driver's decisions. E.g., if the distance between two cars falls below the safety distance, the driver of the following car will decelerate.

Fig. 3. Distance relations between two cars. The dashed line represents the baseline for the distance calculations.
Since the situational context is only needed to infer the behaviors, only those relations need to be considered that are relevant for the driver's decision making. They are evaluated for every car in the scene. Figure 3 illustrates the distance relations between two cars. For the modeled set of behaviors B the following distance relations are relevant:
longitudinal distance: The longitudinal distance dlon,f describes the distance of a vehicle to the next vehicle ahead. The distance is measured between the projected positions of the vehicles on the centerline of the lane. If there is no vehicle ahead in the observable area, the distance is set to infinity. The analogous distance relation dlon,b measures the distance to the next vehicle behind. Since the centerline follows the curvature of the road, it is a better measure than the Euclidean distance.
lateral distance: The lateral distance dlat measures the displacement between the position of a car and the centerline. Again, it is calculated by projecting the position onto the centerline.
relative velocity: Regarding a vehicle and the next vehicle ahead, the relative velocity ddv,f describes the difference between their velocities. Analogously, ddv,b describes the relative velocity to the vehicle behind.
The basic set of context relations is defined as C = {dlat, dlon,f, dlon,b, ddv,f, ddv,b}. Since for some driving maneuvers, like lane changing or overtaking, the objects on adjacent lanes are also relevant, similar relations are included in the overall set of context features. If the set of behaviors is extended for other applications, C needs to be extended to contain all relevant context features.
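For illustration, the basic context relations can be extracted as follows. This is a toy Python sketch that assumes a straight road with the lane centerline on the x-axis, so the projection reduces to reading off coordinates (the actual model uses a road map with curved centerlines); the handling of the relative velocity when no neighbor exists is an assumption:

```python
def context_relations(ego, others):
    """Toy extraction of the context relations C for one vehicle.

    Assumes a straight road whose lane centerline is the x-axis, so projecting
    onto the centerline reduces to reading off the x coordinate. Each vehicle
    is a tuple (x, y, v). Returns d_lat, d_lon_f, d_lon_b, d_dv_f, d_dv_b.
    """
    INF = float("inf")
    d_lat = ego[1]  # lateral displacement from the centerline
    ahead = [o for o in others if o[0] > ego[0]]
    behind = [o for o in others if o[0] < ego[0]]
    front = min(ahead, key=lambda o: o[0], default=None)  # nearest vehicle ahead
    back = max(behind, key=lambda o: o[0], default=None)  # nearest vehicle behind
    return {
        "d_lat": d_lat,
        "d_lon_f": front[0] - ego[0] if front is not None else INF,
        "d_lon_b": ego[0] - back[0] if back is not None else INF,
        # Relative velocities; defaulting to 0.0 with no neighbor is an assumption.
        "d_dv_f": front[2] - ego[2] if front is not None else 0.0,
        "d_dv_b": back[2] - ego[2] if back is not None else 0.0,
    }
```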
F. State Space
The state space of the filter consists of several random variables that describe the different aspects of the situation. We assume a finite number of n vehicles in the scene. The random variables are defined as follows:
X: Vector containing the vehicle states: X = (X1, ..., Xn)^T, Xi ∈ X.
C: Vector containing the features of the local situational context, primarily consisting of relevant distances between the traffic participants and other objects, e.g. the distance to the next vehicle ahead: C = (C1, ..., Cn)^T, Ci ∈ C.
S: Vector containing the recognized situations, where S is the set of all situations: S = (S1, ..., Sn)^T, Si ∈ S.
B: Vector of behaviors, where B is the set of all behaviors: B = (B1, ..., Bn)^T, Bi ∈ B.
T: Vector of the trajectories, representing the realizations of the behaviors: T = (T1, ..., Tn)^T, Ti ∈ T.
Z: Measurement vector providing information about the pose. The actual form depends on the sensor configuration. We assume the sensor signals can be linearly mapped onto a subspace of the vehicle pose to ease the formulation of the measurement model. A reasonable assumption is the observability of the position (x1, x2), orientation ψ and speed v of the vehicles: Z = (Z1, ..., Zn)^T, Zi = (x, y, ψ, v)^T.
All variables marked with a minus, like X−, correspond to the random variables at time index t − 1. Since we assume that the first-order Markov assumption is met, we only need to consider random variables with time indices t and t − 1.
G. Decomposition of the Joint Distribution
The filter is modeled as a first-order Markov process. The process satisfies the Markov property, since the state space subsumes all relevant information needed to infer the next system state. This property can be achieved for all higher-order Markov processes [8]. To exploit the conditional independencies between the random variables, it is beneficial to model the process as a Dynamic Bayesian Network (DBN) [8].
Assuming several conditional independencies, the joint distribution can be decomposed as follows:

P(X, X−, C, S, B, B−, T, T−, Z) =
P(X−) P(B−) P(T−) P(X|T−) P(C|X) P(S|C) P(B|B−, S) P(T|T−, B, C, X) P(Z|X)    (5)

The cdfs are explained in Section III-H.
Figure 4 shows the resulting DBN. Continuous nodes are depicted by circles and discrete nodes by rectangles. Solid lines represent direct dependencies within a time frame, while dashed lines represent dependencies between time steps. The behaviors, e.g., depend on the current situations and the last conducted behaviors.

Fig. 4. Bayesian network. Dashed lines represent temporal dependencies, while solid lines represent causal dependencies within a time frame.
H. Filtering Models
The factors of the decomposition of the joint distribution (5) are referred to as the models of the filter. They describe the dependencies between the random variables in the form of cdfs. Detailed descriptions and interpretations of the models, as well as the definitions of the corresponding cdfs, are given in the following sections.
1) Dynamics Model - P(X|T−): The dynamics model describes the change of the vehicle states after a time step ∆t, given their last planned trajectories. This model is often referred to as the motion model. Since we neglect the explicit handling of collisions between vehicles in the dynamics model, we are able to regard the individual motions as conditionally independent. This simplifies the cdf to

P(X|T−) = Π_{i=1}^{n} P(Xi|Ti−)
Since the planned trajectory already accounts for the nonholonomic motion constraints, the new vehicle state can be derived from the planned trajectory and the current time t using (2). Allowing some small deviations from the intended motion, represented by the error term ex, the cdf can be stated as follows:

P(Xi|Ti− = y) = gt(y) + ex

where gt(y) : T → X calculates the new vehicle state given the curve parameters y. We assume ex to be normally distributed with ex ∼ N(0, Σx).
2) Context Model - P(C|X): The context model describes how the local situational context is derived from the vehicle states. The context comprises all relevant information needed by a driver to choose an appropriate behavior. This results in a local view for each vehicle, allowing us to decompose the model as follows:

P(C|X) = Π_{i=1}^{n} P(Ci|X)

The distance relations are evaluated with the helper function dmap(x) as described in Section III-E. For this purpose a virtual map of the road network is employed, which allows the vehicles to be matched to their corresponding lanes, as well as finding the next vehicles ahead or behind. It also provides the centerlines needed for the projection. The cdf for a specific x is defined as a Dirac distribution

P(C|X = x) = δdmap(x).

Since the distances can be calculated without uncertainty, given the exact states of the vehicles and assuming a perfect map, no error term is needed.
This model couples all vehicle states together. By extracting all relevant information from the global object state and mapping it to the local context states, one is able to formulate the remaining models for every object individually. This simplifies the formulation of the other models drastically and enables fast convergence of machine learning methods and better generalization, since the models need not be learned for different numbers of objects.
3) Situation Model - P(S|C): If the local context of a vehicle is known, it is possible to distinguish several classes of situations. The matching of contexts to the finite set of possible situations S is done by the situation model. This matching is in fact a discretization that transfers the knowledge about the situation to a symbolic level. All contexts that are grouped in a situation class should share the characteristic that they imply similar distributions over behaviors. This maximizes the information a situation holds about the driver's behavior decision.
Since the type of situation is recognized individually for each vehicle on the basis of its local context, we can decompose the situation model to:

P(S|C) = Π_{i=1}^{n} P(Si|Ci)

To define the likelihood function for a situation class, we use a weighted multivariate normal pdf, so that all likelihood functions together can be interpreted as a finite normal mixture. Figure 5 illustrates the relationship between a context and the situation likelihoods. In this example the context only consists of the distance to a vehicle driving ahead, and two situation classes are distinguished, namely that the vehicle is close and that the vehicle is far away.

Fig. 5. Exemplary likelihood functions of a simplified situation model with two classes (vehicle close, vehicle far).
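For a single context feature, such a normal-mixture situation model reduces to a few weighted 1-D Gaussians that are normalized into a posterior over classes. A small sketch; the weights, means and standard deviations are invented for illustration:

```python
import math

def gaussian(x, mu, sigma):
    """1-D normal pdf."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Two situation classes over the distance to the vehicle ahead,
# each given as (weight, mean, standard deviation) -- illustrative values.
SITUATIONS = {
    "vehicle_close": (1.0, 10.0, 8.0),
    "vehicle_far":   (1.0, 60.0, 25.0),
}

def situation_posterior(d_front):
    """P(S_i | C_i): normalize the weighted class likelihoods."""
    scores = {s: w * gaussian(d_front, mu, sig)
              for s, (w, mu, sig) in SITUATIONS.items()}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}
```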
4) Policy Model - P(B|S, B−): On the basis of the last conducted behaviors, and considering the type of situation that a vehicle currently resides in, the policy model describes the behavior decision of a driver in a probabilistic way. For each situation class it defines a distribution over the behaviors likely to be chosen, which can be seen as a set of probabilistic rules covering every combination of behavior and situation. To simplify the model, we decompose the cdf:

P(B|S, B−) = Π_{i=1}^{n} P(Bi|Si, Bi−)

Regarding the behavior decisions as independent is in general a valid assumption, because we can presume that the drivers do not communicate to arrange their behaviors. But expecting that the current behavior decision is not influenced by the last behaviors of the other traffic participants is a strong assumption which is not always true. This is done only for complexity reasons, and analyzing the impact of this decision will be part of future research.
Currently the set of rules is defined by a human expert. An even better way to obtain the model – in contrast to engineering it – is to use machine learning methods or to employ a planner. This would ensure that the model fits the real policies of drivers.
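Such an expert-defined rule set amounts to a conditional probability table over (situation, last behavior) pairs. A hypothetical Python fragment; the situation names match the earlier two-class example and the probabilities are illustrative, not the authors' values:

```python
import random

# P(B_i | S_i, B_i^-): hypothetical expert rules for two situation classes.
POLICY = {
    ("vehicle_far", "free_ride"):   {"free_ride": 0.95, "following": 0.05},
    ("vehicle_close", "free_ride"): {"following": 0.7, "sheer_out": 0.3},
    ("vehicle_close", "following"): {"following": 0.8, "sheer_out": 0.2},
}

def sample_behavior(situation, last_behavior, rng=random):
    """Draw the next behavior from the policy model's conditional table."""
    dist = POLICY[(situation, last_behavior)]
    behaviors = list(dist)
    weights = [dist[b] for b in behaviors]
    return rng.choices(behaviors, weights=weights, k=1)[0]
```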
5) Behavior Realization Model - P(T|B, X, C, T−): The behavior realization model makes the actual step from abstract symbolic behaviors to concrete trajectories, thereby taking into account the overall situation. The cdf of the trajectories is also biased by the planned trajectories of the last time step.
In order to reduce the model complexity, the cdf is decomposed like the policy model, stating

P(T|B, X, C, T−) = Π_{i=1}^{n} P(Ti|Bi, Xi, Ci, Ti−).
Neglecting the planned trajectories of the other traffic participants is a critical choice, since the model is not able to enforce non-overlapping trajectories directly. This is accomplished only indirectly by means of the policy model, which favors behavior decisions that do not result in overlapping trajectories.
The most likely trajectories are the ones that optimize several criteria set by the driver. The most important are the length of the path to reach a goal, how economic and comfortable the trajectory is, and of course its safety properties. These facts need to be considered in the cdf; e.g., if a vehicle drives at high speed, the trajectory of an overtaking maneuver has a flatter shape than one driven at low speed.
Since we use a Bézier curve representation for the trajectories based on control points, as described in Section III-C, the cdf defines a conditional distribution over the control point parameter space. To define the cdf we use deterministic functions kb for each behavior type b together with additive white noise eb ∼ N(0, Σb). This results in

P(T|B = b, X = x, C = c, T− = y) = kb(x, c, y) + eb.

The function kb calculates the control point parameters for a specific behavior b. Like the context model, this function also makes use of the virtual road map to generate control points according to the centerlines of the lanes. The main difficulty is to find the appropriate end control point, since the values of the start control point are constrained by the vehicle state in order to produce smooth changeovers. E.g., for the sheer_in behavior, the function picks a position and orientation of the end control point based on the centerline of the opposite lane together with an appropriate desired velocity.
These relationships can become rather complex, so for more complex domains, like inner-city environments, it is a good idea to utilize a motion planner that generates distributions over likely trajectories. Using machine learning methods is another way to generate the model.
6) Observation Model - P(Z|X): The observation model describes the coherence between the overall system state and the likelihood of an observation. We assume that we receive independent observations zi for each vehicle individually, i.e., we do not have to cope with the data association problem [9], and are able to state the cdf as

P(Z|X) = Π_{i=1}^{n} P(Zi|Xi).

We assume a linear mapping H between the vehicle state and the expected observation in order to stay sensor agnostic. The sensor noise ez is assumed to be normally distributed with ez ∼ N(0, Σz). The cdf of the vehicle observation model can then be stated as

P(Zi|Xi = xi) = H xi + ez.
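As a concrete instance, H can simply select the observable components of the state. The sketch below assumes the (x1, x2, ψ, v, δ) state layout from Section III-B and an illustrative diagonal Σz; both H and the noise values are assumptions for illustration:

```python
import math

# Hypothetical linear measurement map H for the 5-D state (x1, x2, psi, v, delta):
# the observation keeps position, gear angle and speed and drops the steering angle.
H = [
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
]

def expected_observation(x):
    """Compute H x, the noise-free expected observation."""
    return [sum(h_ij * x_j for h_ij, x_j in zip(row, x)) for row in H]

def observation_likelihood(z, x, sigmas=(0.1, 0.1, 0.05, 0.5)):
    """P(z_i | x_i) under independent Gaussian sensor noise e_z ~ N(0, Sigma_z),
    with a diagonal Sigma_z given by `sigmas` (illustrative values)."""
    zhat = expected_observation(x)
    lik = 1.0
    for zj, zh, s in zip(z, zhat, sigmas):
        lik *= math.exp(-0.5 * ((zj - zh) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return lik
```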
IV. FILTER EQUATIONS
Given the joint distribution of all variables (5), the recursive update formulas can be derived using Bayes' rule. The distribution of the posterior state is estimated by marginalizing over the prior state and weighting with the likelihood of the measurement z. Thereby the update can be separated into two steps, classically called prediction and correction. The resulting update equation is

P(X, C, S, B, T|z) ∝ P(z|X) Σ_{B−} ∫ P(X, C, S, B, T|B−, T−) P(B−, T−) dT−    (6)

where the integral corresponds to the prediction and the other part to the correction.
The integral of the prediction step is not solvable in its general form. The existence of an analytical solution depends strongly on the distribution types and the cdfs. Classical representatives of this kind are, e.g., Kalman filters and histogram filters [10].
Since we are dealing with a mixed state space, covering discrete behaviors and continuous vehicle states in addition to nonlinear cdfs, we use a particle filter framework for inference. Particle filters approximate the true posterior and have been used successfully in many robotics applications [10]. The inference is achieved with likelihood weighting [8], where samples are drawn for all unobserved random variables according to their parent states. The overall particles are weighted with the evidences according to the corresponding cdfs. Since the number of samples is finite, the integral of the prediction is replaced by a finite sum over all prior particles.
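One such update step might be organized as follows. This is a schematic Python sketch in which the factored models of (5) are passed in as sampling functions; all names and the `models` API are placeholders, not the authors' implementation:

```python
import random

def particle_filter_step(particles, z, models, rng=random):
    """One prediction/correction step with likelihood weighting.

    particles: list of dicts with keys 'x', 'b', 'tau' (vehicle state, behavior,
    planned trajectory) from time t-1. `models` bundles sampling functions for
    the factored cdfs of (5) plus the observation likelihood; this API is a
    placeholder, not the authors' implementation.
    """
    new_particles, weights = [], []
    for p in particles:
        x = models["dynamics"](p["tau"], rng)                # x ~ P(X | T-)
        c = models["context"](x)                             # C is deterministic given X
        s = models["situation"](c, rng)                      # s ~ P(S | C)
        b = models["policy"](s, p["b"], rng)                 # b ~ P(B | B-, S)
        tau = models["realization"](b, x, c, p["tau"], rng)  # tau ~ P(T | T-, B, C, X)
        new_particles.append({"x": x, "b": b, "tau": tau})
        weights.append(models["obs_likelihood"](z, x))       # correction: weight by P(z | X)
    total = sum(weights)
    if total == 0.0:
        weights = [1.0] * len(weights)  # degenerate case: fall back to uniform
    # Resample with replacement proportionally to the weights.
    return rng.choices(new_particles, weights=weights, k=len(particles))
```

Sampling each factor in the topological order of the DBN is what makes the prediction a simple forward pass; the measurement enters only through the weights.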
V. EXPERIMENTS
To evaluate the presented filter we implemented a slightly
simplified version of the model, using only a subset of
the named behaviors. The behaviors used are free_ride,
following, sheer_out and overtake. These behaviors are sufficient
to show the principal properties of the filter. The
traffic scenario we used for the evaluation was a two-lane
road scenario with two vehicles, as depicted in Figure 6.
In the simulated runs, the vehicle coming from behind is
approaching the slower vehicle on the right and changes
lanes so it can pass the other vehicle. For the evaluation we
Fig. 6. Scenario of the experiment: a two-lane road with two cars, where one is overtaking the other.
assumed the measurability of the position, yaw angle and
velocity of the vehicles. The measurements arrive at a frequency
of 10 Hz. To obtain a valid ground truth, we simulated
the scenario several times and added white Gaussian noise
to the measurements. The accuracy of the filter is measured
by the average root mean square error (RMSE) of
the position of the overtaking vehicle. As position estimate
we used the expectation of the particle distribution. The
expectation shows a slightly more accurate and more stable
performance than using the best particle. The particle
inference employed 500 particles.
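The two position estimators compared here, the expectation of the particle distribution and the best (highest-weighted) particle, together with the RMSE metric, can be written as follows. The function names and the toy particle set are illustrative:

```python
import numpy as np

def expected_position(particles, weights):
    """Weighted mean of the particle positions (the estimator used
    for the RMSE evaluation)."""
    return np.average(particles, axis=0, weights=weights)

def best_particle(particles, weights):
    """Position of the highest-weighted particle (the alternative,
    slightly less stable estimator)."""
    return particles[np.argmax(weights)]

def rmse(estimates, ground_truth):
    """Root mean square position error over a simulated run."""
    err = np.linalg.norm(np.asarray(estimates) - np.asarray(ground_truth),
                         axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

# Toy particle set: three 2-D positions with normalized weights.
particles = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
weights = np.array([0.2, 0.5, 0.3])
mean_est = expected_position(particles, weights)   # weighted-mean estimate
map_est = best_particle(particles, weights)        # highest-weighted particle
```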
A. Filtering Results
The average estimation error of the vehicle's
position for different settings of the noise parameters is
shown in Table I. The filter yields a small average estimation
TABLE I
AVG. RMS ERROR OF THE MEAN POSITION FOR DIFFERENT NOISE PARAMETERS

σx1,x2 | σψ    | σv    | avg. RMS position error
0.050  | 0.025 | 0.250 | 0.06
0.100  | 0.050 | 0.500 | 0.09
0.150  | 0.075 | 0.750 | 0.13
0.200  | 0.100 | 1.000 | 0.18
0.250  | 0.125 | 1.250 | 0.21
error of 6 cm of the vehicle's position for low measurement
noise. As can be seen, the estimation error increases more slowly
than the applied noise, showing the smoothing effect of the
process model. This means that the filter is able to estimate
the vehicle states even under heavy noise, owing to the accuracy
of its models. Table II shows the average likelihood of the true
TABLE II
AVG. LIKELIHOOD OF THE TRUE CONDUCTED BEHAVIOR FOR DIFFERENT NOISE PARAMETERS

σx1,x2 | σψ    | σv    | avg. behavior likelihood
0.050  | 0.025 | 0.250 | 0.9168
0.100  | 0.050 | 0.500 | 0.9226
0.150  | 0.075 | 0.750 | 0.9059
0.200  | 0.100 | 1.000 | 0.8934
0.250  | 0.125 | 1.250 | 0.8748
conducted behavior, measuring the quality of the behavior
estimation. As expected, the reliability of the estimation
decreases slowly with increasing noise. For all considered sets
of noise parameters, a maximum likelihood estimator would
classify the behavior correctly in almost every case.
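Such a maximum likelihood classification amounts to marginalizing the weighted particle set over the behavior variable and taking the argmax. A minimal sketch, where the particle-to-behavior assignment and the weights are illustrative:

```python
BEHAVIORS = ["free_ride", "following", "sheer_out", "overtake"]

def ml_behavior(particle_behaviors, weights):
    """Marginalize the weighted particle set over the behavior
    variable and return the maximum-likelihood behavior with
    its estimated probability."""
    marginal = {b: 0.0 for b in BEHAVIORS}
    for b, w in zip(particle_behaviors, weights):
        marginal[b] += w
    best = max(marginal, key=marginal.get)
    return best, marginal[best]

# Illustrative weighted particles: most carry the overtake behavior.
behaviors = ["overtake"] * 7 + ["following"] * 2 + ["sheer_out"]
weights = [0.1] * 10
label, prob = ml_behavior(behaviors, weights)
```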
VI. CONCLUSION
In this paper we presented a filter that simultaneously estimates
the current pose and behavior as well as
the anticipated trajectory of traffic participants. These estimates
are needed by an autonomous vehicle for decision
making and motion planning. By first recognizing the type
of situation based on the local situational context, the filter
is able to infer the likely behaviors.
The filter is modeled as a Dynamic Bayesian Network,
thereby exploiting the conditional independencies, which
makes it possible to describe the models in a compact way and to
calculate the inference more efficiently. The probabilistic
approach makes the estimation robust under noisy measurements.
A distinctive feature is that the expected trajectories
are estimated via filtering without unrolling the DBN into
the future. This saves a lot of computational effort, since the latter
scales linearly with the number of unrolled time slices.
The evaluation showed a good performance of the filter
in the domain of highway scenarios. For an application of
this approach in an urban environment, the models have to be
more complex to deal with the variety of possible situations.
An interesting direction to explore is the estimation of
the plans of traffic participants. We expect that this will improve
the accuracy of the estimation of the other states. On the
computational side, the use of Rao-Blackwellized methods
[11] could reduce the complexity of the inference.
ACKNOWLEDGMENT
The authors gratefully acknowledge the contribution of the
German collaborative research center "SFB/TR 28 – Cognitive
Cars" granted by Deutsche Forschungsgemeinschaft.
REFERENCES
[1] I. Dagli, M. Brost, and G. Breuel, “Action recognition and prediction
for driver assistance systems using dynamic belief networks,” Lecture
Notes in Computer Science, pp. 179–194, 2003.
[2] I. Dagli and D. Reichardt, "Motivation-based approach to behavior
prediction," in Proceedings of the IEEE Intelligent Vehicles Symposium,
2002.
[3] J. Zhang and B. Rössler, "Situation Analysis and Adaptive Risk
Assessment for Intersection Safety Systems in Advanced Assisted
Driving," in Proceedings of the 21st Fachgespräch Autonome Mobile
Systeme, 2009, pp. 249–258.
[4] J. Forbes, T. Huang, K. Kanazawa, and S. Russell, "The BATmobile:
Towards a Bayesian automated taxi," in International Joint Conference
on Artificial Intelligence, vol. 14, 1995, pp. 1878–1885.
[5] D. Meyer-Delius, C. Plagemann, G. von Wichert, W. Feiten, G. Lawitzky,
and W. Burgard, "A Probabilistic Relational Model for Characterizing
Situations in Dynamic Multi-Agent Systems," in Proceedings
of the 31st Annual Conference for Data Analysis, Machine Learning
and Applications, 2007.
[6] E. Mazor, A. Averbuch, Y. Bar-Shalom, and J. Dayan, "Interacting
multiple model methods in target tracking: a survey," IEEE Transactions
on Aerospace and Electronic Systems, vol. 34, no. 1, pp. 103–123,
1998.
[7] J. Choi and G. Elkaim, "Bézier Curve for Trajectory Guidance," in
World Congress on Engineering and Computer Science, San Francisco,
CA, Oct. 22–24, 2008.
[8] K. Murphy, “Dynamic Bayesian Networks: Representation, Inference
and Learning,” Ph.D. dissertation, University of California, 2002.
[9] I. Cox, “A review of statistical data association techniques for motion
correspondence,” International Journal of Computer Vision, vol. 10,
no. 1, pp. 53–66, 1993.
[10] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics (Intelligent
Robotics and Autonomous Agents). MIT Press, Cambridge, Massachusetts,
USA, 2005.
[11] A. Doucet, N. de Freitas, K. Murphy, and S. Russell, "Rao-Blackwellised
particle filtering for dynamic Bayesian networks," in Proceedings
of the Sixteenth Conference on Uncertainty in Artificial
Intelligence, 2000, pp. 176–183.