Automotive Radar the Key Technology For Autonomous Driving: From Detection and Ranging to Environmental Understanding
Juergen Dickmann, Jens Klappstein, Markus Hahn, Nils Appenrodt, Hans-Ludwig Bloecher, Klaudius
Werber, Alfons Sailer
Daimler AG, Vehicle Automation and Chassis Systems, Ulm, Germany
E-Mail: juergen.dickmann@daimler.com
Abstract—An overview of state-of-the-art automotive radar usage is presented, and the changing requirements, from detection and ranging towards radar-based environmental understanding for highly automated and autonomous driving, are deduced. The traditional segmentation into driving, manoeuvring and parking tasks vanishes at the driverless stage. Situation assessment and trajectory/manoeuver planning need to operate in a more thorough way. Hence, fast situational updates, motion prediction for all kinds of dynamic objects, object dimensions, ego-motion estimation, (self-)localisation and richer semantic/classification information, which allows the static and dynamic world to be put into context with each other, are mandatory. All of these are new areas for radar signal processing and need revolutionary new solutions. The article outlines the benefits that make radar essential for autonomous driving and presents recent approaches in radar-based environmental perception.
Keywords—Radar, Environmental Perception, Landmark, SLAM, Driver Assistance, Active Safety, Highly Automated Driving, Driverless Driving
I. INTRODUCTION
Automotive radar has already reached a market penetration of several tens of millions of units and has found its way into the portfolios of nearly all car manufacturers in the world. Radar sensors are used on all platforms, from passenger cars via vans to heavy trucks and travel buses, down to even the smallest sedan platforms. With the introduction of Collision Prevention Assist®, radar sensors have even become standard equipment in passenger cars [1]. The major reason for the success story of automotive radar is its physical principle, which offers unique performance features at reasonable cost: among others, independence from environmental conditions (light, weather), directly measured parameters in space and Doppler velocity, multiple field-of-view capability and design-compatible vehicle integration. Radar performs under conditions where other sensor types fail, and it is capable of virtually looking through vehicles (transvision effect) by exploiting reflections between the road surface and the vehicle floor, hence making the invisible visible. Over the decades, the performance requirements have increased steadily, from simple detection and ranging tasks in blind spot monitoring or cruise control systems up to smart environment perception tasks for present-day semi-autonomous evasion and braking functions [2]. However, the utmost push in performance requirements is initiated by the trend towards highly automated driving and, down the road, driverless driving. Future automotive radar systems have to provide imaging-like capabilities and have to interact in radar networks, which allow for highly comprehensive 360° perception tasks. In former days, single-sensor concepts were used, while nowadays multi-sensor networks composed of four or more short-, mid- and far-range radars are being applied [3, 4]. In 2013, a first stride towards higher automation was made with the fully autonomous Bertha drive of a Mercedes-Benz research sedan [3, 4]. One design rule was that the vehicle had to appear as a series vehicle, which naturally brought radar into the game. The technical lesson learned was that a higher degree of automation, where the driver is increasingly relieved of the pure driving task, imposes much higher performance requirements on the environmental perception task radar has to deliver. One important consequence is that radar signal processing has to be extended with machine learning, image understanding and pattern recognition concepts to keep radar at the leading edge of remote sensing. This paper provides an overview of state-of-the-art automotive radar usage, deduces future requirements for highly automated driving and presents recent advances in radar-based environmental perception.
II. FUNCTIONAL MILESTONES TO DRIVERLESS DRIVING
A comprehensive overview of the evolution of driver assistance and active safety systems is given in [5]. Over the last decade, DAIMLER and other car manufacturers all over the world have introduced a large variety of active safety and driver assistance functions [1, 2, 3]. In general, those systems have been developed to operate on highways and, to some extent, on rural roads. The functional portfolio of those systems mainly covers the following key features: blind spot detection, cruise control with stop-and-go, emergency braking, 360° pre-crash sensing and pre-triggering of airbags. The introduction of semi-autonomous emergency braking and pre-crash systems was only possible through a dramatic improvement in radar technology and radar network architecture. The key improvement is the introduction of multimodality, covering long-range (250 m) and short-range (0.5-80 m) distances and azimuth angles from ±10° to ±70° in one sensor package. First steps towards imaging-like capability have been introduced with digital beam forming, allowing for SAR concepts combined with high-resolution algorithm techniques such as linear or autoregressive prediction and MUSIC, as well as architectural changes to achieve improved Doppler resolution [4-6]. Also important are high angular accuracy, a very fast update rate of a few tens of milliseconds (ms) and a small latency of a few ms [7, 13]. The evolution of driver assistance and active safety functions towards higher degrees of automation can be traced by considering the evolution of emergency braking systems. For example, the Mercedes-Benz PRE-SAFE® Brake improved from a simple braking force enhancement in 2006 to a system that, by 2013, intervenes by braking the car automatically and activates the maximum braking power around 600 ms before an unavoidable collision [2]. In 2015, an extension to urban areas with pedestrian classification was added [2]. A similar evolution takes place in the parking and manoeuvring area. Active parking assists of the former days enabled the vehicle to search for a suitable parking space and to park automatically at the press of a button, with the driver retaining control of the accelerator and brake at all times. The present evolution stage is the advancement to a parking pilot, where the driver can remotely park the car via a smartphone app from outside the vehicle [8]. Even in those state-of-the-art functions, radar mainly performs according to its traditional role: detection and ranging of dynamic objects, based on a point representation. One first example of how pattern recognition and image understanding concepts enable new safety functions is driving lane prediction. By exploiting the reflections from guard rails, gravel and lawn, this information enables emergency braking in curves and in snow conditions where optical lane information is missing [9, 10, 11]. Basic pedestrian classification for NCAP and braking functions is the first step of radar contributing semantic information for a system reaction.
It is quite obvious that the trend towards higher automation levels will continue up to autonomous driving on highways as well as in urban areas. At this utmost stage, there will be no driver in the loop. As a consequence, the autonomous car's driving performance will depend on the degree of completeness of the essential environmental information the sensor set-up provides. Comparable to a human being, who utilizes many different sensors (ears, eyes etc.), highly automated vehicles will use many different sensor types. The difference to present fusion concepts like "region of interest" is that all sensors will have to provide similar information in order to achieve the required robustness via a fusion concept like "n sensors out of m sensors see the same" [29]. As a consequence, the standard performance portfolio of radar has to be dramatically enhanced. The following chapter deduces the challenges for future automotive radar.
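To make the "n sensors out of m sensors see the same" idea concrete, the following minimal Python sketch confirms a detection only when enough distinct sensors report something within a spatial gate around it. All names, the gate value and the confirmation threshold are illustrative assumptions, not the fusion scheme of [29].

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float        # longitudinal position in the vehicle frame [m]
    y: float        # lateral position in the vehicle frame [m]
    sensor_id: int  # which radar reported this detection

def n_out_of_m_confirm(detections, n_required=2, gate=0.5):
    """Keep a detection only if at least n_required *distinct* sensors
    report something within a spatial gate [m] around it."""
    confirmed = []
    for d in detections:
        sensors = {e.sensor_id for e in detections
                   if abs(e.x - d.x) <= gate and abs(e.y - d.y) <= gate}
        if len(sensors) >= n_required:
            confirmed.append(d)
    return confirmed
```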
III. RADAR REQUIREMENTS
As the Bertha Drive demonstrated, autonomous driving on both interurban and inner-city routes is feasible even with a vehicle and sensor set-up that is not dramatically different from a standard series vehicle. The goal of this experiment was to show that autonomous driving is not limited to highways and similarly structured environments [3, 4, 12, 13]. On its way, the self-driving S-Class had to deal autonomously with a number of highly complex urban situations, which were either enabled or aided by radar.
In addition to far-range operation in the driving direction for highway and rural road operation, 360° near- and mid-range coverage of the vehicle's environment will also become important in urban scenarios, along with a wider azimuthal observation horizon in order to cover, e.g., crossing scenarios, roundabouts or pre-crash situations in the driving direction as well as side- and rear-crash situations. Dramatically shrinking time scales, in terms of observation and time-to-react horizons, as well as a much larger number of static and dynamic object and motion types compared to classical ACC and collision mitigation functions have to be coped with [14]. On top of that, urban areas provide manifold occasions for false detections, mirror targets and clutter. All of this together imposes dramatic challenges on the radar signal processing engineer.
The traditional segmentation into driving, manoeuvring and parking tasks vanishes at the driverless stage. Situation assessment and trajectory/manoeuver planning need to operate in a more thorough way. Hence, fast situational updates, motion prediction for all kinds of dynamic objects, object dimensions, ego-motion estimation, (self-)localisation and richer semantic/classification information, which allows the static and dynamic world to be put into context with each other, are mandatory. All of these are new areas for radar signal processing and need revolutionary new solutions. In addition, interoperability (interference avoidance/mitigation) among all radars of one vehicle, and with those already in other vehicles, has to be guaranteed at the same time. This becomes more relevant with further increasing market penetration and numbers of radars per vehicle [15]. The specific challenges for radar, deduced from the special traffic situations encountered during the Bertha drives, are described in detail in [13, 14]. A brief summary: roundabouts, crossings of all kinds, lane changes, over-/underride areas, different objects with various motion models such as cyclists, pedestrians, sedans, trucks, buses etc., pre-crash situations from all directions, cut-in situations in merging lanes, navigation and localisation in large areas, (self-)localisation in small parking areas, and parking lot identification.
As described, e.g., in [4, 13, 14, 16, 17, 18], the HW-architectural solution can be achieved, for example, via a two-step strategy. First, enhance the imaging performance of each radar sensor: in detail, provide higher spatial (range and angle) and Doppler resolution, endow each sensor with multi-field-of-view (range and azimuth angle) mode capability, employ appropriate interference countermeasures, and avoid mixing CW-like (PN-code/CDMA) with FMCW-based modulation schemes [15, 19, 20]. Second, equip the car with multiple radar sensors and enable them to operate as a common network, quasi as one radar organism. Adjust the radars such that dark areas vanish and the fields of view overlap as much as possible to provide redundancy. The output of this radar-radar fusion can be treated in the subsequent fusion step as if provided by a common electronic radar skin.
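As one illustration of the kind of interference countermeasure referred to above, the following Python sketch applies a generic and widely known technique: short interference bursts in an FMCW beat signal often appear as high-amplitude outliers in the time domain and can be thresholded and blanked. This is a simplified stand-in, not the specific methods of [15, 19, 20]; the threshold factor k is an assumption.

```python
import numpy as np

def blank_interference(beat_signal, k=4.0):
    """Detect and blank short interference bursts in an FMCW beat
    signal: samples whose magnitude exceeds k times the median
    magnitude are assumed to be interference and set to zero
    (interpolating across the gap would be a gentler repair)."""
    mag = np.abs(beat_signal)
    threshold = k * np.median(mag)
    cleaned = np.array(beat_signal, copy=True)
    cleaned[mag > threshold] = 0.0
    return cleaned
```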
The fusion of radar sensors with different cycle times can be solved, e.g., with out-of-sequence tracking/fusion techniques as described in [7]. The Bertha configuration is shown in Figure 1. A third step is purely based on signal processing: adopt machine learning, pattern recognition and mobile robot algorithm concepts for radar data. Compared with, e.g., laser scanner or vision-based data, radar provides measured data in all dimensions accompanied by the Doppler velocity, and the resolution and accuracy of the data are quasi-constant over the entire field of view.
Fig. 1. Example of a radar configuration for driverless driving [3, 4].
This is a huge advantage of radar technology. Although radar data will never reach a data density comparable to that of optical sensors (at automotive cost and vehicle integration conditions), settling and convergence times of filters will be shorter, dynamic parameters such as relative speed will be obtained faster, more maturely and more robustly, and system availability is enhanced.
IV. RADAR PERCEPTION APPROACHES
Approaches to some of the challenges listed above are described below.
Dense point cloud generation: This is the most important development target. The higher the number of detections per target (see Section III), the more likely is a successful application of machine learning and pattern recognition concepts. Since each detection comes along with a Doppler value, smart representation techniques can be applied [16, 21-24, 38, 39]. One example is shown in Fig. 2. Future object representations have to provide much richer information: object dimension, object orientation, motion prediction and classification information. Moreover, distributed targets tend to split into many objects, which causes problems for an unambiguous representation and tracking. In order to meet automotive cost targets, a compromise between HW-enabled resolution and the use of high-resolution algorithms has to be found.
Fig. 2. Example of dense point cloud representation of dynamic
objects [23].
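As a minimal illustration of how such point clouds are assembled, the sketch below converts per-detection range, azimuth and Doppler values from one sensor into vehicle-frame points that keep their Doppler value as a per-point attribute. The mounting parameters and array shapes are illustrative assumptions.

```python
import numpy as np

def detections_to_pointcloud(rng, azimuth, doppler, mount_xy, mount_yaw):
    """Convert one sensor's detections (range [m], azimuth [rad],
    radial velocity [m/s]) into vehicle-frame points that carry their
    Doppler value as an attribute."""
    # Cartesian position in the sensor frame.
    xs = rng * np.cos(azimuth)
    ys = rng * np.sin(azimuth)
    # Rotate by the mounting yaw and translate by the mounting position.
    c, s = np.cos(mount_yaw), np.sin(mount_yaw)
    x = c * xs - s * ys + mount_xy[0]
    y = s * xs + c * ys + mount_xy[1]
    return np.stack([x, y, doppler], axis=-1)  # (N, 3): x, y, v_r
```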
Radar-grids: Representation of the static environment is a relatively new area of radar signal processing. Occupancy grids are a method originally developed for indoor mobile robot trajectory planning, providing a detailed representation of the environment from low-resolution ultrasonic sensors [25]. Manifold modifications of this method, called radar-grids, have served as problem solvers for many different tasks during the Bertha drive and subsequent product developments. Among others, they can be used for driving lane prediction, free path description, parking lot detection, SLAM for parking, landmark extraction, sensor fusion and many more [9-11, 26-31, 39, 46]. Figure 3 shows one example of driving lane prediction under snow conditions.
Fig. 3. Radar-grid map as the basis for driving lane prediction [9-11].
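A minimal occupancy-grid sketch in Python, assuming a simple log-odds update per hit cell; free-space updates along the line of sight and the radar-specific extensions of [9-11, 26-31] are omitted. All parameter values are illustrative.

```python
import numpy as np

class RadarGrid:
    """Minimal 2D occupancy grid with a log-odds update per hit cell."""

    def __init__(self, size=400, resolution=0.25, l_occ=0.85):
        self.res = resolution            # cell edge length [m]
        self.l_occ = l_occ               # log-odds increment per detection
        self.log_odds = np.zeros((size, size))
        self.origin = size // 2          # grid frame origin at the center

    def update(self, points):
        """points: (N, 2) array of x, y detections in the grid frame [m]."""
        for x, y in points:
            i = self.origin + int(round(x / self.res))
            j = self.origin + int(round(y / self.res))
            if 0 <= i < self.log_odds.shape[0] and 0 <= j < self.log_odds.shape[1]:
                self.log_odds[i, j] += self.l_occ

    def occupancy(self):
        """Per-cell occupancy probability from the accumulated log-odds."""
        return 1.0 / (1.0 + np.exp(-self.log_odds))
```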
Co-representation of static and dynamic environment: As mentioned above, correlating the static and the dynamic world is mandatory for autonomous driving. From radar-grids, a free path can be derived and semantic information extracted. Correlating dynamic objects with a grid-based representation allows a better understanding of the actual situation. Tracking using local radar-grids is another method to correlate the dynamic and the stationary world. First results of both approaches are shown in Fig. 4 and Fig. 5 and described in [32-34, 36, 39].
Ego-motion estimation: As mentioned above, the estimation of the precise ego-vehicle motion is a key capability for the localization of mobile robots (hence of highly automated vehicles), for integrating new measurements into the radar-grid map, and for tracking filters, where the ego-motion has to be compensated to obtain the absolute motion of the tracked object. In [16, 35-38], algorithm concepts have been proposed that allow the complete 2D motion state of the ego vehicle (longitudinal velocity, lateral velocity and yaw rate) to be determined in a single shot. The key is a joint spatial- and Doppler-based ego-motion estimation: it evaluates the relative motion between radar sensors with excellent Doppler resolution and the stationary reflections (targets) they receive. Due to the Doppler information, the method is very robust against disturbances by moving objects and clutter. The motion estimation is also free of bias and drift, and it provides excellent results for highly nonlinear movements. The advantage compared to standard vehicle odometry sensors is that, especially on slippery terrain or during highly dynamic maneuvers, wheel speed sensors suffer from nonsystematic errors due to wheel slip and slide. They also have systematic errors caused by kinematic imperfections, unequal wheel diameters or uncertainties about the exact wheelbase. The Doppler approach, in contrast, is insensitive to the interaction of the vehicle with the ground.
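The core of the Doppler-based approach can be sketched as a least-squares fit: stationary targets at azimuth theta, seen from a sensor moving with velocity (v_x, v_y), produce radial velocities v_r = -(v_x cos theta + v_y sin theta). The snippet below shows only this fit; the outlier rejection (e.g. RANSAC) and the rigid-body step from sensor velocities to the full vehicle motion state used in [16, 35-38] are only indicated in comments.

```python
import numpy as np

def sensor_velocity_from_doppler(azimuth, v_radial):
    """Least-squares estimate of a radar sensor's 2D velocity over
    ground from a single scan of stationary targets, which satisfy
    v_r = -(v_x * cos(theta) + v_y * sin(theta)).
    azimuth: (N,) target azimuths [rad] in the sensor frame,
    v_radial: (N,) measured radial velocities [m/s].
    Detections from moving objects should be removed beforehand,
    e.g. by RANSAC (omitted here)."""
    A = -np.stack([np.cos(azimuth), np.sin(azimuth)], axis=-1)  # (N, 2)
    v_sensor, *_ = np.linalg.lstsq(A, v_radial, rcond=None)
    return v_sensor  # (v_x, v_y) of the sensor in its own frame

# With two sensors at known mounting positions, the vehicle's
# longitudinal/lateral velocity and yaw rate then follow from the
# rigid-body relation v_sensor = v_vehicle + omega x r_mount (not shown).
```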
Radar-based localization: Radar-Grid-Loc, Reliable-Radar-Objects-Map, Semantic-Radar-Grid-Map and Radar-Landmarks are algorithm concepts used for localization and parking tasks [39-47]. The parking manoeuver can be structured into the sub-tasks of self-localisation and mapping (SLAM), free driving path extraction, collision prevention, parking lot identification and, finally, parking itself into the parking lot. Localisation and mapping in urban environments face natural long-term variation of the surroundings; for example, parked cars leave their places and dustbins are transitional appearances. Robustness is achieved from multiple observations of the same location at different times, as these may provide important information on static and mobile objects. For efficient mapping, the environment should be explored in parallel. The approach operates through a stochastic analysis of previous observations of the area of interest. The model uses a grid-based Markov chain to instantly model changes; an extension of this model by a Lévy process allows statements about reliability and prediction for each cell of the grid [40]. The approach also provides a solution for how multiple observations represented by grid maps are aligned into one mutual frame. The solution uses an image processing approach of group-wise grid map registration. For registration, a rotation-invariant descriptor is proposed in order to provide the correspondences of points of interest in radar-based occupancy grid maps. As pairwise registration of multiple grid maps suffers from bias, a graph-based approach for robust registration of multiple grid maps is used. This facilitates highly accurate range sensor maps [40-42]. Classification using neural network or deep learning techniques allows the generation of semantic radar-grids, which eases situation analysis and parking lot identification [43, 44]. In [31, 46, 47], a novel RoughCough-based approach is pursued to extract landmarks using amplitude-based radar-grids for localisation in normal driving mode, where standard radar-grid-map-based SLAM approaches suffer from the required HW resources. The RoughCough algorithm enables online image recognition and registration. It is applicable to input images that can be aligned by a Euclidean transformation. Based on an extension of the Hough transform, it is well suited for massively parallel processing. Thus, the extraction of landmarks can be based on point-like features as well as on distributed areas the radar can detect. Radar landmarks are insensitive to environmental changes (dark vs. bright or winter vs. summer appearance), which provides robustness and quality of service for the system.
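As a drastically simplified stand-in for the grid-based Markov-chain/Lévy-process reliability model of [40], the following sketch estimates, per cell, how persistently an occupied cell stays occupied across a stack of already registered observations; cells with high persistence are candidates for reliable landmarks. The input format and the persistence criterion are assumptions, not the published model.

```python
import numpy as np

def cell_persistence(observations):
    """Per-cell probability that an occupied cell stays occupied
    between consecutive observations: a two-state Markov-chain
    simplification of the reliability model in [40].
    observations: (T, H, W) boolean stack of registered grid maps."""
    occ = observations[:-1]   # occupancy at time t
    nxt = observations[1:]    # occupancy at time t + 1
    stay = np.logical_and(occ, nxt).sum(axis=0).astype(float)
    total = occ.sum(axis=0).astype(float)
    with np.errstate(divide="ignore", invalid="ignore"):
        p_stay = np.where(total > 0, stay / total, 0.0)
    return p_stay  # cells with high p_stay are stable landmark candidates
```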
Motion prediction: Gaining milliseconds of reaction time and reducing the number of hypotheses are key issues in situation analysis and trajectory planning. Any sensor that could quickly provide information about changes in the motion state of dynamic objects in the car's vicinity would make trajectory planning much easier and more robust. By exploiting the azimuthal Doppler profile as described in [16, 24, 38], motion prediction of vehicles is possible even within a single shot, by adopting the dense point cloud approach using either single radars or a stereo-radar configuration. This is illustrated in Fig. 7: the figure shows the identification of a change in the yaw rate much earlier than present-day series tracking filters can achieve. Hence, radar can detect yaw rate changes earlier than a human eye can recognize any vehicle rotation. The Doppler distribution can be used as an input state in tracking filters. The benefit is manifold: the transition time of the filter is drastically reduced, nonlinear motion can be tracked easily, and rich object information, up to classification, can be deduced. For example, the fact that the wheels' velocities differ from the vehicle's chassis velocity can be exploited.
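A hedged sketch of the single-shot idea, in the spirit of [16]: for a rigid object with reference-point velocity (v_x, v_y) and yaw rate omega, every ego-motion-compensated Doppler detection yields one linear equation, and a least-squares fit recovers the full 2D motion state. With a single sensor the system can be poorly conditioned; a second (stereo) radar improves observability. All interfaces are illustrative assumptions.

```python
import numpy as np

def object_motion_single_shot(det_pos, v_radial, sensor_pos, x_ref):
    """Single-shot least-squares estimate of an extended object's 2D
    motion state (v_x, v_y, yaw rate) from ego-motion-compensated
    Doppler detections (illustrative sketch).
    det_pos: (N, 2) detection positions, v_radial: (N,) radial speeds,
    sensor_pos: (N, 2) position of the observing sensor per detection,
    x_ref: (2,) reference point on the object (e.g. its center)."""
    los = det_pos - sensor_pos
    u = los / np.linalg.norm(los, axis=1, keepdims=True)  # unit line of sight
    r = det_pos - x_ref
    # Velocity at each detection: v_ref + omega * (-r_y, r_x),
    # so v_r = u_x*v_x + u_y*v_y + omega*(u_y*r_x - u_x*r_y).
    A = np.column_stack([u[:, 0], u[:, 1],
                         u[:, 1] * r[:, 0] - u[:, 0] * r[:, 1]])
    sol, *_ = np.linalg.lstsq(A, v_radial, rcond=None)
    return sol  # (v_x, v_y, omega) of the object
```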
Object classification: A spin-off of azimuthal Doppler profile analysis is vehicle classification. In [38], a fully automated approach calculates the Normalized Doppler Moment, describing the Doppler signature of each reflection based on the Doppler distributions of the wheels. Locations with high values reveal the positions of the wheels. Besides the classification, the vehicle's orientation, and therefore its driving direction, can be estimated. Furthermore, the position of the rear axle is estimated, which is essential for a reliable prediction of rotational movements and for yaw rate estimation. Classification as a small- or large-scale vehicle as well as dimension estimation can be deduced; see Fig. 8.
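The Normalized Doppler Moment of [38] is not reproduced here; instead, the following simplified sketch captures the underlying effect: wheel reflections show micro-Doppler that deviates strongly from the rigid-body Doppler profile of the chassis, so large residuals mark wheel candidates. The threshold and the robust scale estimate are assumptions.

```python
import numpy as np

def wheel_candidates(det_pos, v_radial, rigid_body_vr, k=2.5):
    """Flag detections whose measured Doppler deviates strongly from
    the radial speed predicted by the fitted rigid-body motion:
    rotating wheels show micro-Doppler above and below the chassis
    velocity. det_pos: (N, 2), v_radial and rigid_body_vr: (N,)."""
    residual = np.abs(v_radial - rigid_body_vr)
    sigma = np.median(residual) + 1e-6   # robust scale estimate
    mask = residual > k * sigma
    return det_pos[mask]  # likely wheel locations, revealing the axles
```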
Sensor fusion between laser scanner and radar further improves the semantic information density and the dimension estimation of objects. Both sensor types are congenial: the laser scanner provides high-resolution information about an object's contour, while radar provides Doppler information and a dense point cloud also of the "inner" part of vehicles, thanks to the transvision effect. Thus, tracking of extended dynamic objects becomes more reliable and robust [48-51].
V. OPEN ISSUES
Although great progress has already been made, the following issues remain open and need further engagement and innovative solutions:
- Very-low-speed or standstill imaging performance.
- Size reduction while maintaining detection performance, in order to close dark areas in the 360° coverage and to ease sensor integration into the vehicle.
- Ultra-near-range detection performance, ideally down to nearly zero cm.
- Higher spatial resolution.
- Cognition/adaptability.
- Use of the 76-81 GHz band for situation-adaptive tailoring of range resolution and range coverage.
- Interoperability.
- Height measurement capability.
- Classification/semantic capability for a more mature situation understanding.
Fig. 4. Co-representation of static and dynamic objects. The dynamic object is represented by a dense point cloud within the static grid map [23].
Fig. 5. Using local radar-grids in combination with a radar-grid to combine the static and dynamic world in one representation, after [32].
Fig. 6. Integrated ego-motion data of two radar sensors combined (black) and standard vehicle odometry (blue). Targets are mapped using radar ego-motion, and their intensity is represented by the color (yellow to red). Start point: top left. (Aerial photography by GeoBasis-DE/BKG, Google) [35].
Fig. 7. The upper graphic shows the detection of the change in the yaw rate with the new motion-prediction approach. Lower left compares the velocity vector of a series tracking outcome (cube) with the new approach (dots). Lower right shows the test situation.
VI. CONCLUSION
The lesson learned from the Bertha drive experiment is that the present performance of series automotive radar is not sufficient for driverless driving tasks. The future development of imaging-like performance that allows a comprehensive understanding of the static as well as the dynamic environment, including height information, is a decisive factor in this regard.
Fig. 8. Accumulated wheel detections over the complete sequence in the target vehicle's coordinate system (contour: line, axles: dashed line) [38].
REFERENCES
[1] http://techcenter.mercedesBenz.com/en/collision_prevention_
assist/detail.html
[2] https://www.mercedes-benz.com/en/mercedes-
benz/innovation/mercedes-benz-intelligent-drive/
[3] http://next.mercedes-benz.com/en/autonomous-driving-in-the-
tracks-of-bertha-benz/
[4] J.Dickmann, N.Appenrodt, C. Brenk, “Making Bertha See”,
IEEE Spectrum, Aug. 2014, pp. 40-46
[5] H.Meinel and J.Dickmann, “Automotive Radar: From its
Origins to Future Directions”, MWJournal, 2013, vol.56,
No.9, pp.24-40
[6] W. Mayer, “Abbildender Radarsensor mit sendeseitig
geschalteter Gruppenantenne“, Institut fuer
Mikrowellentechnik, University of Ulm, Dissertation, February
2008, Germany
[7] M. Muntzinger, M. Aeberhard, S. Zuther, M. Schmid, J.
Dickmann and K. Dietmayer. Reliable Automotive Pre-Crash
System with Out-of- Sequence Measurement Processing.
IEEE Intelligent Vehicles Symposium, 2010, p. 1022-1027.
[8] https://www.mercedes-benz.com/de/mercedes-
benz/innovation/remote-park-pilot/
[9] F. Sarholz, J. Mehnert, J.Klappstein, J. Dickmann, B. Radig.
“Evaluation of Different Approaches for Road Course
Estimation using Imaging Radar”. Intelligent Robots and
Systems 2011, San Francisco, USA
[10] F. Sarholz, F. Diewald, J. Klappstein, J. Dickmann, B. Radig,
“Evaluation of Different Quality Functions for Road Course
Estimation using Imaging Radar”, Intelligent Vehicle
Symposium 2011, Baden-Baden, Germany
[11] F. Sarholz, J. Mehnert, J.Klappstein, J. Dickmann and B.
Radig, „Evaluation of Different Approaches for Road Course
Estimation using Imaging Radar”, Intelligent Robots and
Systems 2011, San Francisco, USA
[12] J.Dickmann, N.Appenrodt, C.Brenk, „Bertha fährt autonom“,
Automobil Elektronik, 03-2014, pp.44-47
[13] J. Dickmann, N. Appenrodt, J. Klappstein, H. L. Bloecher, M.
Muntzinger, A. Sailer, M. Hahn, C. Brenk, „Making Bertha
See Even More: Radar Contribution”, IEEE Access, July
2015
[14] J. Dickmann, J. Klappstein, M. Hahn, M. Muntzinger, N.
Appenrodt, C. Brenk, A. Sailer, „Present Research Activities
and Future Requirements on Automotive Radar from a car
manufacturer´s point of view”, 2015 IEEE MTT-S
International Conference on Microwaves for Intelligent
Mobility (ICMIM), April 2015, Heidelberg, Germany
[15] EU-Project MOSARIM, „MOre Safety for All by Radar
Interference Mitigation”, Proj.Ref.No: 248231,FP7-ICT, 2014
[16] D. Kellner, M. Barjenbruch, Jens Klappstein, J. Dickmann
and K. Dietmayer, “Instantaneous Full-Motion Estimation of
Arbitrary Objects using Dual Doppler Radar”, Intelligent
Vehicle Symposium 2014 (IV 2014), Jun. 2014, Dearborn,
USA
[17] A. Hosseini, F. Diewald, J. Klappstein, J. Dickmann, H.
Neumann, „Modification Of The Landweber Method Based
On The Conjugate Gradient Method To Restore Automotive
Radar Images”, International Conference on Systems, Signals
and Image Processing (IWSSIP), Vienna, Austria, pp. 544-547
[18] M. Andres, P. Feil, W. Menzel, „3D-Scattering Center
Detection of Automotive Targets Using 77GHz UWB Radar
Sensors”, EuCAP 2012, Prague, Czech Republic, March 2012,
pp. 3690-3693
[19] C. Fischer, M. Barjenbruch, H. L. Bloecher, W. Menzel,
„Detection of pedestrians in road environments with mutual
interference”, 14th International Radar Symposium (IRS),
June 2013, Dresden, Germany, pp. 746-751
[20] M. Barjenbruch, D. Kellner, J. Klappstein, J. Dickmann, K.
Dietmayer, „A Method for Interference Cancellation in
Automotive Radar”, 2015 IEEE MTT-S International
Conference on Microwaves for Intelligent Mobility (ICMIM),
April 2015, Heidelberg, Germany
[21] F. Roos, D. Kellner, J. Klappstein, J. Dickmann, K.
Dietmayer, K. D. Müller-Glaser, C. Waldschmidt,
„Estimation of the Orientation of Vehicles in High-Resolution
Radar Images”, ICMIM, 2015
[22] C. Fischer, F. Ruf, H.-L. Bloecher, J. Dickmann, W. Menzel,
„Evaluation of Different Super-Resolution Techniques for
Automotive Applications”, International conference on radar
systems, RADAR 2012, Oct. 2012, Glasgow, United
Kingdom, pp. 1-6
[23] J. Dickmann, N. Appenrodt, H. L. Bloecher, C. Brenk, T.
Hackbarth, M. Hahn, J. Klappstein, M. Muntzinger, A. Sailer,
„Radar contribution to highly automated driving”, EuRAD,
October 2014, Rome, Italy
[24] D. Kellner, M. Barjenbruch, J. Klappstein, J. Dickmann, K.
Dietmayer, „Joint Radar Alignment and Odometry
Calibration”, IEEE International Conference on Information
Fusion (FUSION), July 2015, Washington, D.C., USA
[25] S. Thrun and A. Bücken, "Integrating grid-based and
topological maps for mobile robot navigation", Proceedings
of the Thirteenth National Conference on Artificial
Intelligence: pp. 944–950. 1996, ISBN 0-262-51091-X.
[26] Matthias R. Schmid, M. Maehlisch, J. Dickmann, H.-J.
Wuensche,”Dynamic Level of Detail 3D Occupancy Grids for
Automotive Use”,Intelligent Vehicle Symposium 2010, San
Diego, CA, USA, June 2010
[27] Dirk T. Linzmeier; Michael Skutek; Temel Abay ; Moheb
Mekhaiel; Klaus C. J. Dietmayer, „Grid-based optimal sensor
arrangement within a sensor array for 2D position
estimation”, Proc. SPIE 5612, Electro-Optical and Infrared
Systems: Technology and Applications, 370 (December 6,
2004); doi:10.1117/12.577546
[28] Dirk Linzmeier, Tobias Baer and Moheb Mekhaiel. Fusion
von Radar- und Thermopilesensordaten zur Fuss
gaengerdetektion (Fusion of Radar and Thermopile Sensor
Data for Pedestrian Detection). tm - Technisches Messen
74(3):121–129, 2007
[29] Moheb Mekhaiel, „Radarbasierte Sensorfusion für zukünftige
Sicherheits-systeme“, Sensors 4 cars- Sensorsystemtechnik
und Sensortechnologie, October 2008, Kempten, Germany
[30] M. Schütz, N. Appenrodt, J. Dickmann, K. Dietmayer,
„Occupancy Grid Map-based Extended Object Tracking”,
Intelligent Vehicle Symposium 2014 (IV 2014), Jun. 2014,
Dearborn, USA
[31] K. Werber, M. Rapp, J. Klappstein, M. Hahn, J. Dickmann,
K. Dietmayer, C. Waldschmidt, „Automotive Radar Gridmap
Representations”, 2015 IEEE MTT-S International
Conference on Microwaves for Intelligent Mobility (ICMIM),
April 2015, Heidelberg, Germany
[32] M. Schütz, N. Appenrodt, J. Dickmann, K. Dietmayer,
„Multiple extended objects tracking with object-local
occupancy grid maps”, 17th International Conference on
Information Fusion (FUSION), July 2014, Salamanca, Spain
[33] M. Schütz, N. Appenrodt, J. Dickmann, K. Dietmayer, „A
Flexible Environment Perception Framework for Advanced
Driver Assistance Systems”, AMAA 2013, May 2013, Berlin,
Germany, pp. 21-29
[34] M. Schütz, Y. Wiyogo, M. Schmid, J. Dickmann, „Laser-
based Hierarchical Grid Mapping for Detection and Tracking
of Moving Objects”, AMAA 2012, Berlin, Germany, April
2012, pp. 167-176
[35] D. Kellner, M. Barjenbruch, J. Klappstein, J. Dickmann, K.
Dietmayer, „Instantaneous Ego-Motion Estimation using
Doppler Radar”,16th International Conference on Intelligent
Transport Systems (ITSC 2013), Oct. 2013, The Hague, The
Netherlands
[36] M. Rapp, M. Barjenbruch, M. Hahn, J. Dickmann, K.
Dietmayer, „A Fast Probabilistic Ego-Motion Estimation
Framework for Radar”, European Conference on Mobile
Robots 2015 (ECMR 2015), September 2015, Lincoln, UK
[37] M. Barjenbruch, D. Kellner, J. Klappstein, J. Dickmann, K.
Dietmayer, „Joint Spatial- and Doppler-based Ego-Motion
Estimation for Automotive Radars”, IEEE Intelligent
Vehicles (IV), June 2015, Seoul, South Korea
[38] D. Kellner, M. Barjenbruch, J. Klappstein, J. Dickmann, K.
Dietmayer, “Wheel Extraction based on Micro Doppler
Distribution using High-Resolution Radar”, 2015 IEEE MTT-
S International Conference on Microwaves for Intelligent
Mobility (ICMIM), April 2015, Heidelberg, Germany
[39] M. Hahn, J. Dickmann, “Autonomous Maneuvering with
Radars”, IWPC-Workshop, May 2014, Detroit, USA
[40] M. Rapp, M. Hahn, M. Thom, J. Dickmann and Klaus
Dietmayer, “Semi-Markov Process Based Localization using
Radar in Dynamic Environments“, to be published in IEEE
International Conference on Robotics and Automation (ICRA
2015)
[41] M. Rapp, M. Barjenbruch, M. Hahn, J. Dickmann, K.
Dietmayer, „Clustering improved Grid Map Registration
using the Normal Distribution Transform”, Intelligent Vehicle
Symposium 2014 (IV 2014), July 2015, Seoul, South Korea
[42] M. Rapp, T. Giese, M. Hahn, J. Dickmann, K. Dietmayer, „A
Feature-Based Approach For Group-Wise Grid Map
Registration”, Intelligent Transportation Systems Conference
2015 (ITSC 2015), September 2015, Las Palmas, Gran
Canaria
[43] M. R. Schmid, S. Ates, F. von Hundelshausen, J. Dickmann,
H.-J. Wünsche, „Parking Space Detection with Hierarchical
Dynamic Occupancy Grids”, Intelligent Vehicle Symposium
2011, Baden-Baden, Germany, June 2011
[44] Renaud Dubé, Markus Hahn, Markus Schütz, Jürgen
Dickmann. and Denis Gingras, “Detection of parked vehicles
from a radar based occupancy grid”, IEEE Intelligent Vehicles
Symposium, 2014.
[45] Matthias R. Schmid, M. Mählisch, J. Dickmann, H.-J.
Wünsche, „Straight-Feature-Based Self-Localization for
Urban Scenarios”, 8th International Workshop on Intelligent
Transportation, Hamburg, Germany, March 22-23, 2011
[46] K. Werber, M. Barjenbruch, J. Klappstein, J. Dickmann, C.
Waldschmidt, „RoughCough - A New Image Registration
Method for Radar Based Vehicle Self-Localization”, 18th
International Conference on Information Fusion, July 2015,
Washington, D.C., USA
[47] K. Werber, M. Barjenbruch, J. Klappstein, J. Dickmann, C.
Waldschmidt, „How do Traffic Signs look like in Radar?”,
44th European Microwave Conference (EuMC), October
2014, Rome, Italy
[48] M. Schütz, N. Appenrodt, J. Dickmann, K. Dietmayer,
„Simultaneous Tracking and Shape Estimation with
Laserscanners”, 16th International Conference on Information
Fusion (FUSION), July 2013, Istanbul, Turkey
[49] P. Steinemann, J. Klappstein, J. Dickmann, H.-J. Wünsche
and F. v. Hundelshausen, „Determining the Outline Contour
of Vehicles in 3D-LIDAR-Measurements”, Intelligent
Vehicle Symposium 2011, Baden-Baden, Germany
[50] Sylvia Pietzsch, Nils Appenrodt, Juergen Dickmann, Bernd
Radig, „Model-based Fusion of Laser Scanner and Radar
Data for Target Tracking”, 8th International Workshop on
Intelligent Transportation, Hamburg, Germany, March 22-23,
2011
[51] P. Brosseit, D. Kellner, C. Brenk, J. Dickmann, „Fusion of
Doppler Radar and Geometric Attributes for Motion
Estimation of Extended Objects”, Sensor Data Fusion:
Trends, Solutions, Applications, October 2015, Bonn,
Germany
Article
Improved sensors in the automotive field are leading to multi-object tracking of extended objects becoming more and more important for advanced driver assistance systems and highly automated driving. This paper proposes an approach that combines a PHD filter for extended objects, viz. objects that originate multiple measurements while also estimating the shape of the objects via constructing an object-local occupancy grid map and then extracting a polygonal chain. This allows tracking even in traffic scenarios where unambiguous segmentation of measurements is difficult or impossible. In this work, this is achieved using multiple segmentation assumptions by applying different parameter sets for the DBSCAN clustering algorithm. The proposed algorithm is evaluated using simulated data and real sensor data from a test track including highly accurate D-GPS and IMU data as a ground truth.