Automotive Radar the Key Technology for Autonomous Driving: From Detection and Ranging to Environmental Understanding
Juergen Dickmann, Jens Klappstein, Markus Hahn, Nils Appenrodt, Hans-Ludwig Bloecher, Klaudius
Werber, Alfons Sailer
Daimler AG, Vehicle Automation and Chassis Systems, Ulm, Germany
E-Mail: juergen.dickmann@daimler.com
Abstract— An overview of state-of-the-art automotive radar usage is presented, and the changing requirements, from detection and ranging towards radar-based environmental understanding for highly automated and autonomous driving, are deduced. The traditional segmentation into driving, manoeuvring and parking tasks vanishes at the driverless stage. Situation assessment and trajectory/manoeuvre planning need to operate in a more thorough way. Hence, fast situational updates, motion prediction for all kinds of dynamic objects, object dimensions, ego-motion estimation, (self-)localisation and more semantic/classification information, which allows the static and dynamic world to be put into correlation/context with each other, are mandatory. All of these are new areas for radar signal processing and require revolutionary new solutions. The article outlines the benefits that make radar essential for autonomous driving and presents recent approaches in radar-based environmental perception.
Keywords—Radar, Environmental Perception, Landmark, SLAM, Driver Assistance, Active Safety, Highly Automated Driving, Driverless Driving
I. INTRODUCTION
Automotive radar has already reached a market penetration of several tens of millions of units in use. It has found its way into the portfolios of nearly all car manufacturers in the world. Radars are used in all platforms, from passenger cars via vans to heavy trucks and travel buses, down to even the smallest sedan platforms. With the introduction of Collision Prevention Assist®, radar sensors have even become standard equipment in passenger cars [1]. The major reason for the success story of automotive radar is its physical principle, which offers unique performance features at reasonable cost. Among these are independence from environmental conditions (light, weather), directly measured parameters in space and Doppler velocity, multiple-field-of-view capability and design-compatible vehicle integration. Radar performs under conditions where other sensor types fail, and it is capable of virtually looking through vehicles (transvision effect) by exploiting reflections between the road surface and the vehicle floor, hence making the invisible visible. Over the decades, the performance requirements have increased steadily, from simple detection and ranging tasks in blind spot monitoring or cruise control systems up to smart environment perception tasks for present-day semi-autonomous evasion and braking functions [2].
However, the utmost push in performance requirements is initiated by the trend towards highly automated driving and, down the road, driverless driving. Future automotive radar systems have to provide imaging-like capabilities and have to interact in radar networks, which allow for highly comprehensive 360° perception tasks. In former days, single-sensor concepts were used, whereas nowadays multi-sensor networks composed of four or more short-, mid- and far-range radars are being applied [3, 4]. In 2013, the first stride towards higher automation was made with the fully autonomous Bertha drive of a Mercedes-Benz research sedan [3, 4]. One design rule was that the vehicle had to appear as a series-production vehicle, which naturally brought radar into the game. The technical lesson learned was that a higher degree of automation, where the driver is increasingly relieved of the pure driving task, imposes much higher performance requirements on the environmental perception that radar has to deliver. One important consequence is that radar signal processing has to be extended to machine learning, image understanding and pattern recognition concepts to keep radar at the leading edge of remote sensing. This paper provides an overview of state-of-the-art automotive radar usage, deduces future requirements for highly automated driving and presents recent advances in radar-based environmental perception.
II. FUNCTIONAL MILESTONES TO DRIVERLESS DRIVING
A comprehensive overview of the evolution of
driver assistance and active safety systems is given in
[5]. Over the last decade, DAIMLER and other car manufacturers all over the world have introduced a large variety of active safety and driver assistance functions [1, 2, 3]. In general, those systems have been developed to operate on highways and, to some extent, on rural roads. The functional portfolio of those systems mainly covers the following key features: blind spot detection, cruise control with stop and go, emergency braking, and 360° pre-crash sensing with pre-triggering of airbags. The introduction of semi-autonomous emergency braking and pre-crash systems was only possible through a dramatic improvement in radar technology and radar network architecture. The key improvements are the introduction of multimodality covering long-range (250 m) and short-range (0.5-80 m) distances and azimuth angles from ±10° to ±70° in one sensor package. First steps towards imaging-like capability have been introduced with digital beam forming, allowing for SAR concepts combined with high-resolution algorithm techniques like linear or autoregressive progression (APR) and MUSIC, as well as architectural changes to achieve improved Doppler resolution [4-6]. Also important are a high angular accuracy, a very fast update rate of a few tens of milliseconds (ms) and a small latency of a few ms [7, 13]. The evolution of driver assistance and active safety functions towards higher degrees of automation can be traced by considering the evolution of emergency braking systems. For example, the Mercedes-Benz PRE-SAFE® Brake improved from a simple braking-force enhancement in 2006 to a system that, by 2013, intervenes by braking the car automatically and activates the maximum braking power around 600 ms before an unavoidable collision [2]. In 2015, an extension to urban areas with pedestrian classification was added [2]. A similar evolution is taking place in the parking and maneuvering area. Active parking assists of former days enabled the vehicle to search for a suitable parking space and to park automatically at the press of a button, with the driver retaining control of the accelerator and brake at all times. The present evolutionary state is the advancement to a parking pilot, where the driver can remote-control park the car via a smartphone app from outside the vehicle [8]. Even in those state-of-the-art functions, radar mainly performs according to its traditional role, detection and ranging of dynamic objects, based on a point representation. A first example of how pattern recognition and image understanding concepts enable new safety functions is driving lane prediction. By exploiting the reflections from guard rails, gravel and lawn, this information enables emergency braking in curves and in snow conditions where optical lane information is missing [9, 10, 11]. Basic pedestrian classification for NCAP and braking functions is the first step of radar contributing semantic information for a system reaction.
It is quite obvious that the trend towards higher automation levels will continue up to autonomous driving on highways as well as in urban areas. At this utmost stage, there will be no driver in the loop. As a consequence, the autonomous car's driving performance will depend on the degree of completeness of the essential environmental information the sensor setup provides. Comparable to a human being, who utilizes many different senses (ears, eyes etc.), highly automated vehicles will use many different sensor types. The difference to present fusion concepts like "region of interest" is that all sensors will have to provide similar information in order to achieve the required robustness via a fusion concept like "n sensors out of m sensors see the same" [29]; a toy version of this rule is sketched below. As a consequence, the standard performance portfolio of radar has to be dramatically enhanced. The following chapter deduces the challenges for future automotive radar.
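As an illustration only, the following Python sketch confirms an object hypothesis when enough distinct sensors report a detection within a spatial gate of one another. The data structure, gate size and threshold are invented for this example and are not the fusion design of [29].

```python
# Toy "n out of m sensors see the same" confirmation rule.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor_id: str
    x: float   # longitudinal position in the vehicle frame [m]
    y: float   # lateral position in the vehicle frame [m]

def confirmed(detections, n_required=2, gate=1.5):
    """True if at least n_required distinct sensors report a detection
    within a spatial gate of one another (one object hypothesis)."""
    for ref in detections:
        supporters = {d.sensor_id for d in detections
                      if (d.x - ref.x) ** 2 + (d.y - ref.y) ** 2 <= gate ** 2}
        if len(supporters) >= n_required:
            return True
    return False

# Two of three radars see roughly the same object -> confirmed.
obs = [Detection("front_left", 20.1, 1.0),
       Detection("front_right", 20.4, 1.2),
       Detection("rear", -5.0, 0.0)]
print(confirmed(obs))   # True
```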
III. RADAR REQUIREMENTS
The Bertha Drive showed that autonomous driving on both interurban and inner-city routes is feasible even with a vehicle and sensor setup that is not dramatically different from a standard series-production vehicle. The goal of this experiment was to show that autonomous driving is not limited to highways and similarly structured environments [3, 4, 12, 13]. On its way, the self-driving S-Class had to deal autonomously with a number of highly complex urban situations, which were either enabled or aided by radar.
In addition to far-range operation in the driving direction for highway and rural-road operation, 360° near- and mid-range coverage of the vehicle's environment will also become important in urban scenarios, along with a wider azimuthal observation horizon, in order to cover e.g. crossing scenarios, roundabouts or pre-crash situations in the driving direction as well as side- and rear-crash situations. Dramatically shrinking time scales in terms of observation and time-to-react horizons, as well as a much larger number of static and dynamic object and motion types compared to classical ACC and collision mitigation functions, have to be coped with [14]. On top of that, urban areas provide manifold occasions for false detections, mirror targets and clutter. All of this together imposes dramatic challenges on the radar signal processing engineer.
The traditional segmentation into driving, manoeuvring and parking tasks vanishes at the driverless stage. Situation assessment and trajectory/manoeuvre planning need to operate in a more thorough way. Hence, fast situational updates, motion prediction for all kinds of dynamic objects, object dimensions, ego-motion estimation, (self-)localisation and more semantic/classification information, which allows the static and dynamic world to be put into correlation/context with each other, are mandatory. All of these are new areas for radar signal processing and require revolutionary new solutions. In addition, interoperability, i.e. interference avoidance/mitigation among all radars per vehicle and with those already in other vehicles, has to be guaranteed at the same time. This becomes more relevant with further increasing market penetration and numbers of radars per vehicle [15]. The specific challenges for radar, deduced from the special traffic situations encountered during the Bertha drives, are described in detail in [13, 14]. A brief summary: roundabouts, crossings of all kinds, lane changes, over-/underride areas, different objects with various motion models (cyclists, pedestrians, sedans, trucks, buses etc.), pre-crash situations from all directions, cut-in situations in merging lanes, navigation and localisation in large areas, (self-)localisation in small parking areas, and parking lot identification.
As described e.g. in [4, 13, 14, 16, 17, 18], the HW-architectural solution can be achieved, for example, via a stepwise strategy. First, enhance the imaging performance of each radar sensor: provide higher spatial (range and angle) and Doppler resolution, endow each sensor with multi-field-of-view (range and azimuth angle) mode capability, and employ appropriate interference countermeasures, avoiding the mixture of CW-like (PN-code/CDMA) and FMCW-based modulation schemes [15, 19, 20]. Second, equip the car with multiple radar sensors and enable them to operate as a common network, quasi as one radar organism. Adjust the radars in such a way that dark areas vanish and the FoVs overlap as much as possible to provide redundancy. The output of this radar-radar fusion can be considered in the subsequent fusion step as provided by a common electronic radar skin. The fusion of radar sensors with different cycle times can be solved e.g. with out-of-sequence tracking/fusion techniques as described in [7]; a toy illustration follows below. The Bertha configuration is shown in Fig. 1. The third step is purely based on signal processing: adopt machine learning, pattern recognition and mobile-robot algorithm concepts for radar data. Compared to e.g. laser scanner or vision-based data, radar provides measured data in all dimensions accompanied by the Doppler velocity. Resolution and accuracy of the data are quasi-constant over the entire field of view.
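To make the asynchronous-sensor problem concrete, the sketch below buffers timestamped measurements and replays a one-dimensional constant-velocity Kalman filter in time order, so that a late-arriving report from a slower radar is still processed at its correct point in time. This is a simplification for illustration only; [7] uses dedicated out-of-sequence processing rather than full replay, and all model parameters here are assumed.

```python
# Toy handling of out-of-sequence radar measurements by buffering and
# replaying a 1D constant-velocity Kalman filter in timestamp order.
# Illustration only: [7] uses dedicated out-of-sequence processing,
# and the noise parameters q, r here are invented.
import numpy as np

def kf_replay(measurements, x0, P0, q=0.5, r=1.0):
    """measurements: list of (timestamp [s], measured position [m])."""
    x, P = x0.copy(), P0.copy()
    H = np.array([[1.0, 0.0]])
    t_prev = None
    for t, z in sorted(measurements):          # reorder late arrivals
        dt = 0.0 if t_prev is None else t - t_prev
        F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        x, P = F @ x, F @ P @ F.T + Q          # predict to time t
        S = H @ P @ H.T + r                    # innovation covariance
        K = P @ H.T / S                        # Kalman gain
        x = x + (K * (z - H @ x)).ravel()      # correct with measurement
        P = (np.eye(2) - K @ H) @ P
        t_prev = t
    return x, P

# A far-range radar report (t=0.05 s) arrives after a short-range one
# (t=0.10 s); replay still processes it at the correct point in time.
reports = [(0.10, 2.1), (0.05, 1.0), (0.15, 3.2)]
x, _ = kf_replay(reports, x0=np.zeros(2), P0=np.eye(2) * 10.0)
print("state [pos m, vel m/s]:", x)
```

In a real tracker one would replay only from a checkpointed filter state, or use retrodiction as in [7], to bound the computational cost.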
Fig. 1. Example of a radar configuration for driverless driving, [3, 4]
This is a huge advantage of radar technology. Radar data will never reach a data density comparable to that of optical sensors (at automotive cost and vehicle-integration conditions), but settling and convergence times of filters will be shorter, dynamic parameters such as relative speed will be available faster and more robustly, and system availability is enhanced.
IV. RADAR PERCEPTION APPROACHES
Approaches to some of the challenges listed above are described below.
Dense Point Cloud Generation: This is the most important development target. The higher the number of detections per target (see Section III), the more likely is a successful application of machine learning and pattern recognition concepts. Since each detection comes along with a Doppler value, smart representation techniques can be applied [16, 21-24, 38, 39]. One example is shown in Fig. 2. Future object representations have to provide much more enhanced information: object dimension, object orientation, motion prediction and classification information. Moreover, distributed targets tend to split into many objects, which causes problems for an unambiguous representation and tracking. In order to meet automotive cost targets, a compromise between HW-enabled resolution and the use of high-resolution algorithms has to be found. One illustrative grouping of such a point cloud is sketched below.
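As an illustration of what a dense, Doppler-attributed point cloud enables, the sketch below groups simulated detections into object hypotheses by clustering jointly over position and Doppler, so that spatially close objects moving at different speeds separate. DBSCAN is a stand-in choice here; the cited works use their own representation techniques, and the scene and parameters are invented.

```python
# Hedged sketch: grouping a dense radar point cloud into object
# hypotheses by clustering over position and Doppler. DBSCAN is a
# stand-in choice; the scene below is simulated and all parameters
# (eps, min_samples, noise levels) are invented for illustration.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Simulated detections: rows of [x m, y m, radial velocity m/s].
car     = rng.normal([20.0,  3.0, 8.0], [0.5, 0.3, 0.4], size=(25, 3))
cyclist = rng.normal([12.0, -2.0, 3.0], [0.3, 0.2, 0.3], size=(10, 3))
clutter = rng.uniform([0.0, -10.0, -1.0], [40.0, 10.0, 1.0], size=(8, 3))
points = np.vstack([car, cyclist, clutter])

# Cluster jointly in (x, y, Doppler); 1 m/s is weighted like 1 m here.
labels = DBSCAN(eps=1.2, min_samples=5).fit_predict(points)
for lbl in sorted(set(labels) - {-1}):
    cluster = points[labels == lbl]
    print(f"object {lbl}: {len(cluster)} detections, "
          f"mean Doppler {cluster[:, 2].mean():+.1f} m/s")
```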
Fig. 2. Example of dense point cloud representation of dynamic
objects [23].
Radar-Grids: Representation of the static environment is a relatively new area of radar signal processing. Occupancy grids are a method originally developed for indoor mobile-robot trajectory planning; the method was devised to provide a detailed 3D representation using low-resolution ultrasonic sensors [25]. Manifold modifications of it served as problem solvers for many different tasks during the Bertha drive and subsequent product developments, and are called radar-grids. Among others, they can be used for driving lane prediction, free-path description, parking lot detection, SLAM for parking, landmark extraction, sensor fusion and many more [9-11, 26-31, 39, 46]. Figure 3 shows one example of driving lane prediction under snow conditions. A minimal grid-update scheme is sketched below.
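The following is a minimal log-odds occupancy update in the spirit of [25]; the cell size, the inverse sensor model and all constants are assumptions for illustration, not the radar-grid parameters of the cited works.

```python
# Minimal log-odds occupancy-grid update for radar detections in the
# spirit of [25]. Cell size, the inverse sensor model and all constants
# are illustrative assumptions, not the radar-grid tuning used on Bertha.
import numpy as np

CELL = 0.5                      # grid resolution [m]
L_OCC, L_FREE = 0.9, -0.4       # log-odds increments of the sensor model

def to_cell(x, y, shape):
    """Map metric coordinates to grid indices (vehicle at grid center)."""
    return int(x / CELL) + shape[0] // 2, int(y / CELL) + shape[1] // 2

def update_grid(logodds, detections, sensor_xy=(0.0, 0.0)):
    """Mark each detected cell occupied and the ray towards it free."""
    for x, y in detections:
        n = max(int(np.hypot(x - sensor_xy[0], y - sensor_xy[1]) / CELL), 1)
        for k in range(n):      # coarse ray tracing up to the detection
            i, j = to_cell(sensor_xy[0] + (x - sensor_xy[0]) * k / n,
                           sensor_xy[1] + (y - sensor_xy[1]) * k / n,
                           logodds.shape)
            logodds[i, j] += L_FREE
        i, j = to_cell(x, y, logodds.shape)
        logodds[i, j] += L_OCC
    return logodds

grid = update_grid(np.zeros((80, 80)), [(10.0, 2.0), (10.5, 2.0)])
p_occ = 1.0 / (1.0 + np.exp(-grid))   # log-odds back to probability
print("max occupancy probability:", p_occ.max())
```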
Fig. 3. Radar-Grid map as basis for driving lane prediction [9-11]
Co-representation of static and dynamic environment: As mentioned above, correlating the static and dynamic world is mandatory for autonomous driving. From radar-grids, a free path can be derived and semantic information extracted. Correlating dynamic objects with a grid-based representation allows a better understanding of the actual situation. Tracking using local radar-grids is another method to correlate the dynamic and stationary world. First results of both approaches are shown in Fig. 4 and Fig. 5 and described in [32-34, 36, 39]. A simple static/dynamic split preceding such a co-representation is sketched below.
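The sketch below performs this split by comparing each measured Doppler with the value expected for a stationary reflector given the ego speed; it assumes a sensor pointing along the driving direction and pure translation, and the threshold is an invented example value.

```python
# Hedged sketch of a static/dynamic split as a precursor to the
# co-representation in [32-34]: a detection whose measured Doppler
# deviates from the value expected for a stationary target (given the
# ego velocity) is treated as dynamic. The threshold is an assumption.
import numpy as np

def split_static_dynamic(detections, v_ego, thresh=0.5):
    """detections: rows of [azimuth rad, range m, Doppler m/s].
    For a forward-moving ego vehicle, a stationary reflector at azimuth
    theta is expected at radial velocity -v_ego * cos(theta)."""
    theta, doppler = detections[:, 0], detections[:, 2]
    expected = -v_ego * np.cos(theta)
    dynamic = np.abs(doppler - expected) > thresh
    return detections[~dynamic], detections[dynamic]

dets = np.array([
    [0.0, 30.0, -9.9],    # stationary object straight ahead
    [0.3, 20.0, -9.5],    # stationary, slightly off-axis
    [0.1, 25.0,  4.0],    # oncoming vehicle -> dynamic
])
static, dynamic = split_static_dynamic(dets, v_ego=10.0)
print(len(static), "static /", len(dynamic), "dynamic detections")
```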
Ego-motion estimation: As mentioned above, the estimation of the precise ego-vehicle motion is a key capability for the localization of mobile robots (hence highly automated vehicles), for integrating new measurements into the radar-grid map, and for tracking filters, where the ego-motion has to be compensated to obtain the absolute motion of the tracked object. For radar-grids it is of similar importance. In [16, 35-38], algorithm concepts have been proposed that allow the complete 2D motion state of the ego-vehicle (longitudinal and lateral velocity as well as yaw rate) to be determined in a single shot. The key is a joint spatial- and Doppler-based ego-motion estimation: it evaluates the relative motion between radar sensors with excellent Doppler resolution and their received stationary reflections (targets). Due to the Doppler information, the method is very robust against disturbances by moving objects and clutter. The motion estimation is also free of bias and drift, and it provides excellent results for highly nonlinear movements. The advantage compared to standard vehicle odometry sensors is that, especially in slippery terrain or during highly dynamic maneuvers, wheel-speed sensors suffer from nonsystematic errors due to wheel slip and slide, which the Doppler approach can compensate. Wheel-speed sensors also have systematic errors caused by kinematic imperfections, unequal wheel diameters or uncertainties about the exact wheelbase. The Doppler approach is insensitive to the interaction of the vehicle with the ground. The core estimation step is sketched below.
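The core of the Doppler-based estimation can be written as a least-squares problem: for stationary reflections, each measured radial velocity obeys v_r = -(v_x cos θ + v_y sin θ), where (v_x, v_y) is the sensor velocity over ground and θ the azimuth. The Python sketch below solves this for simulated data; the robust outlier rejection against moving targets and the two-sensor yaw-rate recovery described in [35] are omitted.

```python
# Sketch of the single-shot Doppler ego-motion idea of [16, 35-38]:
# for stationary reflections, each measured radial velocity satisfies
# v_r = -(v_x * cos(theta) + v_y * sin(theta)), so the sensor's 2D
# velocity follows from a least-squares fit. Robust outlier rejection
# (e.g. RANSAC against moving objects) is omitted here.
import numpy as np

def sensor_velocity(theta, v_r):
    """Least-squares estimate of (v_x, v_y) from stationary targets."""
    A = -np.column_stack([np.cos(theta), np.sin(theta)])
    v, *_ = np.linalg.lstsq(A, v_r, rcond=None)
    return v

# Simulated stationary detections for a sensor moving at (10, 0.5) m/s.
rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi / 3, np.pi / 3, size=40)
v_true = np.array([10.0, 0.5])
v_r = -(v_true[0] * np.cos(theta) + v_true[1] * np.sin(theta))
v_r += rng.normal(0.0, 0.05, size=theta.size)   # Doppler noise
print("estimated sensor velocity:", sensor_velocity(theta, v_r))
# With two sensors mounted at different positions, the yaw rate and
# the full 2D motion state of the vehicle can be recovered ([35]).
```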
Radar-based localization: Radar-Grid-Loc, Reliable-Radar-Objects-Map, Semantic-Radar-Grid-Map and Radar-Landmarks are algorithm concepts used for localization and parking tasks [39-47]. The parking manoeuvre can be structured into the sub-tasks of self-localisation and mapping (SLAM), free driving-path extraction, collision prevention, parking lot identification and, finally, parking itself into the parking lot. Localisation and mapping in urban environments face natural long-term variation of the surroundings; for example, parked cars leave their places and dustbins are transitional appearances. Robustness is achieved from multiple observations of the same location at different times, as these may provide important information on static and mobile objects. For efficient mapping, the environment should be explored in parallel. The approach operates through a stochastic analysis of previous observations of the area of interest. The model uses a grid-based Markov chain to instantly model changes; an extension of this model by a Levy process allows statements about reliability and prediction for each cell of the grid [40]. The approach also provides a solution for how multiple observations represented by grid maps have to be aligned into one mutual frame. The solution uses an image-processing approach of group-wise grid map registration. For registration, a rotation-invariant descriptor is proposed in order to provide the correspondences of points of interest in radar-based occupancy grid maps. As pairwise registration of multiple grid maps suffers from bias, a graph-based approach for robust registration of multiple grid maps is used. This will facilitate highly accurate range-sensor maps [40-42]. Classification using neural network or deep learning techniques allows the generation of semantic radar-grids, which eases situation analysis and parking lot identification [43, 44]. In [31, 46, 47], a novel RoughCough-based approach is pursued to extract landmarks using amplitude-based radar-grids for localisation in normal driving mode, where standard radar-grid-map-based SLAM approaches suffer from the required HW resources. The RoughCough algorithm enables online image recognition and registration. It is applicable to input images that can be aligned by a Euclidean transformation. Based on an extension of the Hough transform, it is well suited for massively parallel processing. Thus, the extraction of landmarks can be based on point-like features as well as on distributed areas the radar can detect. Radar landmarks are insensitive to environmental changes (dark vs. bright or winter vs. summer appearance), which provides robustness and quality of service of the system. A toy version of the per-cell change model is sketched below.
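The per-cell change model can be illustrated with a two-state Markov chain per grid cell: the occupancy belief is propagated between revisits with assumed persistence probabilities and corrected with a Bayes update, so transient objects (parked cars, dustbins) decay from the map while repeatedly confirmed structure persists. All probabilities below are invented and do not reproduce the semi-Markov and Levy-process model of [40].

```python
# Toy per-cell change model inspired by (but not reproducing) [40]:
# a two-state Markov chain propagates each cell's occupancy belief
# between revisits; a Bayes step corrects it with the new observation.
P_STAY_OCC  = 0.95   # P(occupied stays occupied between visits)
P_STAY_FREE = 0.98   # P(free stays free between visits)
P_HIT_OCC, P_HIT_FREE = 0.8, 0.1   # detection likelihoods (assumed)

def predict(p_occ):
    """Propagate the occupancy belief one revisit interval forward."""
    return p_occ * P_STAY_OCC + (1.0 - p_occ) * (1.0 - P_STAY_FREE)

def correct(p_occ, hit):
    """Bayes update of a cell with a radar hit (True) or miss (False)."""
    l_occ = P_HIT_OCC if hit else 1.0 - P_HIT_OCC
    l_free = P_HIT_FREE if hit else 1.0 - P_HIT_FREE
    num = l_occ * p_occ
    return num / (num + l_free * (1.0 - p_occ))

# A cell observed occupied twice (parked car), then empty on revisits:
# the belief rises, then decays back towards free.
p = 0.5
for hit in [True, True, False, False, False]:
    p = correct(predict(p), hit)
    print(f"belief after observation (hit={hit}): {p:.2f}")
```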
Motion-Prediction: Gaining milliseconds of reaction time and reducing the number of hypotheses is a key issue in situation analysis and trajectory planning. If any sensor could provide fast information about changes in the motion state of dynamic objects in the car's vicinity, trajectory planning would become much easier and more robust. By exploiting the azimuthal Doppler profile as described in [16, 24, 38], motion prediction of vehicles is possible even within a single shot by adopting the dense point cloud approach using either single-radar or stereo-radar configurations. This is illustrated in Fig. 7. The figure shows the identification of a change in the yaw rate much earlier than present-day production tracking filters can achieve. Hence, radar can detect yaw-rate changes earlier than a human eye can recognize any vehicle rotation. The Doppler distribution can be used as an input state in tracking filters. The benefit is manifold: the transition time of the filter is drastically reduced, non-linear motion can be easily tracked, and substantial object information up to classification can be deduced. For example, the fact that the wheels' velocities differ from the vehicle's chassis velocity can be exploited. The underlying least-squares estimation is sketched below.
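The underlying idea can be sketched as follows: each detection i on a rigid target constrains the target's motion state (v_x, v_y, ω) linearly through v_r,i = u_i · (v + ω × r_i), with u_i the unit line-of-sight vector and r_i the lever arm to a reference point, so a least-squares fit over the azimuthal Doppler profile yields the full planar motion in one shot. The Python example below uses simulated, noise-free data; with a single sensor the geometry can be ill-conditioned, which is one motivation for the stereo-radar configurations mentioned above.

```python
# Hedged sketch of single-shot rigid-body motion estimation from an
# azimuthal Doppler profile, in the spirit of [16, 24, 38]. Each
# detection constrains (v_x, v_y, omega) linearly; simulated data only.
import numpy as np

def rigid_motion(points, los, v_r, ref):
    """points: Nx2 reflection positions, los: Nx2 unit line-of-sight
    vectors, v_r: N radial velocities, ref: 2D reference point.
    Returns (v_x, v_y, yaw rate) of the reference point."""
    r = points - ref
    # Row i: [u_x, u_y, -u_x * r_y + u_y * r_x] @ [v_x, v_y, omega]
    A = np.column_stack([los[:, 0], los[:, 1],
                         -los[:, 0] * r[:, 1] + los[:, 1] * r[:, 0]])
    sol, *_ = np.linalg.lstsq(A, v_r, rcond=None)
    return sol

# Simulate a turning vehicle: v = (8, 0) m/s, omega = 0.3 rad/s.
rng = np.random.default_rng(2)
pts = rng.uniform([18, -1], [23, 1], size=(30, 2))    # reflections
ref = np.array([20.0, 0.0])
v, omega = np.array([8.0, 0.0]), 0.3
lever = pts - ref
vel = v + omega * np.column_stack([-lever[:, 1], lever[:, 0]])
u = pts / np.linalg.norm(pts, axis=1, keepdims=True)  # sensor at origin
v_r = np.einsum("ij,ij->i", u, vel)                   # radial velocities
print("estimated [v_x, v_y, omega]:", rigid_motion(pts, u, v_r, ref))
```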
Object classification: A spin-off of azimuthal Doppler profile analysis is vehicle classification. In [38], a fully automated approach calculates the Normalized Doppler Moment, which describes the Doppler signature of each reflection based on the Doppler distributions of wheels. Locations with high values reveal the positions of the wheels. Besides the classification, the vehicle's orientation and therefore its driving direction can be estimated. Furthermore, the position of the rear axle is estimated, which is essential for a reliable prediction of rotational movements and for yaw-rate estimation. Classification as a small- or large-scale vehicle as well as dimension estimation can be deduced, see Fig. 8. A simplified stand-in for this scoring is sketched below.
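The exact Normalized Doppler Moment of [38] is not reproduced here; as a simplified stand-in, the sketch below scores each reflection by the deviation of its Doppler from the rigid-body chassis prediction, since a rotating wheel spreads its reflections' velocities around the chassis velocity (near zero over ground at the contact patch, roughly twice the vehicle speed at the top).

```python
# Simplified stand-in for the wheel scoring in [38]: reflections whose
# Doppler deviates strongly from the rigid-body chassis prediction are
# flagged as wheel candidates. Values and threshold are invented.
import numpy as np

def wheel_score(v_r_measured, v_r_chassis):
    """Deviation of measured Doppler from the chassis prediction [m/s]."""
    return np.abs(v_r_measured - v_r_chassis)

# Chassis-predicted Doppler for 6 reflections, and measured values:
chassis  = np.array([-9.0, -9.1, -8.9, -9.0, -9.2, -9.0])
measured = np.array([-9.0, -9.1, -3.5, -14.2, -9.1, -9.0])
scores = wheel_score(measured, chassis)
print("likely wheel reflections:", np.nonzero(scores > 2.0)[0])
# Indices 2 and 3 stand out: micro-Doppler from the spinning wheels.
```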
Sensor fusion between laser scanners and radar further improves the semantic information density and the dimension estimation of objects. The two sensor types are complementary: laser scanners provide high-resolution information about an object's contour, while radar provides Doppler information and, due to the transvision effect, a dense point cloud also of the "inner" part of vehicles. Thus, tracking of extended dynamic objects becomes more reliable and robust [48-51].
V. OPEN ISSUES
Although great progress has already been made, the following issues remain open and need further engagement and innovative solutions.
• Very-low-speed or standstill imaging performance.
• Size reduction while maintaining detection performance, in order to close dark areas in the 360° coverage and to ease sensor integration into the vehicle.
• Ultra-near-range detection performance, ideally down to nearly zero cm.
• Higher spatial resolution.
• Cognition/adaptability.
• Use of the 76-81 GHz band for situation-adaptive tailoring of range resolution and range coverage.
• Interoperability.
• Height measurement capability.
• Classification/semantic capability for a more mature situation understanding.
Fig. 4. Co-representation of static and dynamic objects. The dynamic object is represented by a dense point cloud embedded in the static grid map, [23].
Fig. 5. Local radar-grids in combination with a radar-grid, combining the static and dynamic world in one representation, after [32].
Fig. 6. Integrated ego-motion data of two radar sensors combined
(black) and standard vehicle odometry (blue). Targets are mapped
using radar-ego-motion and their intensity is represented by the
color [yellow to red]. Start point: Top Left. (Aerial photography by
GeoBasis-DE/BKG, Google), [35].
Fig. 7. The upper graphic shows the detection of the change in the yaw rate with the new motion-prediction approach. Lower left compares the velocity vector of a production tracking filter (cube) with the new approach (dots). Lower right shows the test situation.
VI. CONCLUSION
The lesson learned from the Bertha drive experiment is that the present performance of series-production automotive radar is not sufficient for driverless driving tasks. The future development of imaging-like performance that allows for a comprehensive understanding of the static as well as the dynamic environment, including height information, is a decisive factor in this respect.
Fig. 8. Accumulated wheel detections over the complete sequence in the target vehicle's coordinate system (contour: solid line, axles: dashed line), [38].
REFERENCES
[1] http://techcenter.mercedesBenz.com/en/collision_prevention_
assist/detail.html
[2] https://www.mercedes-benz.com/en/mercedes-
benz/innovation/mercedes-benz-intelligent-drive/
[3] http://next.mercedes-benz.com/en/autonomous-driving-in-the-
tracks-of-bertha-benz/
[4] J. Dickmann, N. Appenrodt, C. Brenk, “Making Bertha See”,
IEEE Spectrum, Aug. 2014, pp. 40-46
[5] H. Meinel and J. Dickmann, “Automotive Radar: From its
Origins to Future Directions”, MWJournal, 2013, vol.56,
No.9, pp.24-40
[6] W. Mayer, “Abbildender Radarsensor mit sendeseitig geschalteter Gruppenantenne” (Imaging radar sensor with a transmit-side switched array antenna), Institut fuer Mikrowellentechnik, University of Ulm, Dissertation, February 2008, Germany
[7] M. Muntzinger, M. Aeberhard, S. Zuther, M. Schmid, J. Dickmann and K. Dietmayer, “Reliable Automotive Pre-Crash System with Out-of-Sequence Measurement Processing”, IEEE Intelligent Vehicles Symposium, 2010, pp. 1022-1027.
[8] https://www.mercedes-benz.com/de/mercedes-
benz/innovation/remote-park-pilot/
[9] F. Sarholz, J. Mehnert, J. Klappstein, J. Dickmann, B. Radig.
“Evaluation of Different Approaches for Road Course
Estimation using Imaging Radar”. Intelligent Robots and
Systems 2011, San Francisco, USA
[10] F. Sarholz, F. Diewald, J. Klappstein, J. Dickmann, B. Radig,
“Evaluation of Different Quality Functions for Road Course
Estimation using Imaging Radar”, Intelligent Vehicle
Symposium 2011, Baden-Baden, Germany
[11] F. Sarholz, J. Mehnert, J. Klappstein, J. Dickmann and B.
Radig, „Evaluation of Different Approaches for Road Course
Estimation using Imaging Radar”, Intelligent Robots and
Systems 2011, San Francisco, USA
[12] J. Dickmann, N. Appenrodt, C. Brenk, „Bertha fährt autonom“ (Bertha drives autonomously),
Automobil Elektronik, 03-2014, pp.44-47
[13] J. Dickmann, N. Appenrodt, J. Klappstein, H. L. Bloecher, M.
Muntzinger, A. Sailer, M. Hahn, C. Brenk, „Making Bertha
See Even More: Radar Contribution”, IEEE Access, July
2015
[14] J. Dickmann, J. Klappstein, M. Hahn, M. Muntzinger, N.
Appenrodt, C. Brenk, A. Sailer, „Present Research Activities
and Future Requirements on Automotive Radar from a car
manufacturer´s point of view”, 2015 IEEE MTT-S
International Conference on Microwaves for Intelligent
Mobility (ICMIM), April 2015, Heidelberg, Germany
[15] EU Project MOSARIM, „MOre Safety for All by Radar Interference Mitigation”, Proj. Ref. No. 248231, FP7-ICT, 2014
[16] D. Kellner, M. Barjenbruch, J. Klappstein, J. Dickmann
and K. Dietmayer, “Instantaneous Full-Motion Estimation of
Arbitrary Objects using Dual Doppler Radar”, Intelligent
Vehicle Symposium 2014 (IV 2014), Jun. 2014, Dearborn,
USA
[17] A. Hosseini, F. Diewald, J. Klappstein, J. Dickmann, H.
Neumann, „Modification Of The Landweber Method Based
On The Conjugate Gradient Method To Restore Automotive
Radar Images”, International Conference on Systems, Signals
and Image Processing (IWSSIP), Vienna, Austria, pp. 544-
547
[18] M. Andres, P. Feil, W. Menzel, „3D-Scattering Center
Detection of Automotive Targets Using 77GHz UWB Radar
Sensors”, EuCAP 2012, Prague, Czech Republic, March 2012,
pp. 3690-3693
[19] C. Fischer, M. Barjenbruch, H. L. Bloecher, W. Menzel,
„Detection of pedestrians in road environments with mutual
interference”, 14th International Radar Symposium (IRS),
June 2013, Dresden, Germany, pp. 746-751
[20] M. Barjenbruch, D. Kellner, J. Klappstein, J. Dickmann, K.
Dietmayer, „A Method for Interference Cancellation in
Automotive Radar”, 2015 IEEE MTT-S International
Conference on Microwaves for Intelligent Mobility (ICMIM),
April 2015, Heidelberg, Germany
[21] F. Roos, D. Kellner, J. Klappstein, J. Dickmann, K.
Dietmayer, K. D. Mueller-Glaser, C. Waldschmidt,
„Estimation of the Orientation of Vehicles in High-Resolution
Radar Images”, ICMIM, 2015
[22] C. Fischer, F. Ruf, H.-L. Bloecher, J. Dickmann, W. Menzel,
„Evaluation of Different Super-Resolution Techniques for
Automotive Applications”, International conference on radar
systems, RADAR 2012, Oct. 2012, Glasgow, United
Kingdom, pp. 1-6
[23] J. Dickmann, N. Appenrodt, H. L. Bloecher, C. Brenk, T.
Hackbarth, M. Hahn, J. Klappstein, M. Muntzinger, A. Sailer,
„Radar contribution to highly automated driving”, EuRAD,
October 2014, Rome, Italy
[24] D. Kellner, M. Barjenbruch, J. Klappstein, J. Dickmann, K.
Dietmayer, „Joint Radar Alignment and Odometry
Calibration”, IEEE International Conference on Information
Fusion (FUSION), July 2015, Washington, D.C., USA
[25] Thrun, S. and Bücken, A. "Integrating grid-based and
topological maps for mobile robot navigation", Proceedings
of the Thirteenth National Conference on Artificial
Intelligence: pp. 944–950. 1996, ISBN 0-262-51091-X.
[26] Matthias R. Schmid, M. Maehlisch, J. Dickmann, H.-J.
Wuensche,”Dynamic Level of Detail 3D Occupancy Grids for
Automotive Use”,Intelligent Vehicle Symposium 2010, San
Diego, CA, USA, June 2010
[27] Dirk T. Linzmeier; Michael Skutek; Temel Abay ; Moheb
Mekhaiel; Klaus C. J. Dietmayer, „Grid-based optimal sensor
arrangement within a sensor array for 2D position
estimation”, Proc. SPIE 5612, Electro-Optical and Infrared
Systems: Technology and Applications, 370 (December 6,
2004); doi:10.1117/12.577546
[28] Dirk Linzmeier, Tobias Baer and Moheb Mekhaiel. Fusion
von Radar- und Thermopilesensordaten zur Fussgaengerdetektion (Fusion of Radar and Thermopile Sensor
Data for Pedestrian Detection). tm - Technisches Messen
74(3):121–129, 2007
[29] Moheb Mekhaiel, „Radarbasierte Sensorfusion für zukünftige Sicherheitssysteme“ (Radar-based sensor fusion for future safety systems), Sensors 4 cars - Sensorsystemtechnik und Sensortechnologie, October 2008, Kempten, Germany
[30] M. Schütz, N. Appenrodt, J. Dickmann, K. Dietmayer,
„Occupancy Grid Map-based Extended Object Tracking”,
Intelligent Vehicle Symposium 2014 (IV 2014), Jun. 2014,
Dearborn, USA
[31] K. Werber, M. Rapp, J. Klappstein, M. Hahn, J. Dickmann,
K. Dietmayer, C. Waldschmidt, „Automotive Radar Gridmap
Representations”, 2015 IEEE MTT-S International
Conference on Microwaves for Intelligent Mobility (ICMIM),
April 2015, Heidelberg, Germany
[32] M. Schütz, N. Appenrodt, J. Dickmann, K. Dietmayer,
„Multiple extended objects tracking with object-local
occupancy grid maps”, 17th International Conference on
Information Fusion (FUSION), July 2014, Salamanca, Spain
[33] M. Schütz, N. Appenrodt, J. Dickmann, K. Dietmayer, „A
Flexible Environment Perception Framework for Advanced
Driver Assistance Systems”, AMAA 2013, May 2013, Berlin,
Germany, pp. 21-29
[34] M. Schütz, Y. Wiyogo, M. Schmid, J. Dickmann, „Laser-
based Hierarchical Grid Mapping for Detection and Tracking
of Moving Objects”, AMAA 2012, Berlin, Germany, April
2012, pp. 167-176
[35] D. Kellner, M. Barjenbruch, J. Klappstein, J. Dickmann, K.
Dietmayer, „Instantaneous Ego-Motion Estimation using
Doppler Radar”,16th International Conference on Intelligent
Transport Systems (ITSC 2013), Oct. 2013, The Hague, The
Netherlands
[36] M. Rapp, M. Barjenbruch, M. Hahn, J. Dickmann, K.
Dietmayer, „A Fast Probabilistic Ego-Motion Estimation
Framework for Radar”, European Conference on Mobile
Robots 2015 (ECMR 2015), September 2015, Lincoln, UK
[37] M. Barjenbruch, D. Kellner, J. Klappstein, J. Dickmann, K.
Dietmayer, „Joint Spatial- and Doppler-based Ego-Motion
Estimation for Automotive Radars”, IEEE Intelligent
Vehicles (IV), June 2015, Seoul, South Korea
[38] D. Kellner, M. Barjenbruch, J. Klappstein, J. Dickmann, K.
Dietmayer, “Wheel Extraction based on Micro Doppler
Distribution using High-Resolution Radar”, 2015 IEEE MTT-
S International Conference on Microwaves for Intelligent
Mobility (ICMIM), April 2015, Heidelberg, Germany
[39] M. Hahn, J. Dickmann, “Autonomous Maneuvering with
Radars”, IWPC-Workshop, May 2014, Detroit, USA
[40] M. Rapp, M. Hahn, M. Thom, J. Dickmann and K. Dietmayer, “Semi-Markov Process Based Localization using Radar in Dynamic Environments”, to be published in IEEE
International Conference on Robotics and Automation (ICRA
2015)
[41] M. Rapp, M. Barjenbruch, M. Hahn, J. Dickmann, K.
Dietmayer, „Clustering improved Grid Map Registration
using the Normal Distribution Transform”, Intelligent Vehicles Symposium 2015 (IV 2015), July 2015, Seoul, South Korea
[42] M. Rapp, T. Giese, M. Hahn, J. Dickmann, K. Dietmayer, „A
Feature-Based Approach For Group-Wise Grid Map
Registration”, Intelligent Transportation Systems Conference
2015 (ITSC 2015), September 2015, Las Palmas, Gran
Canaria
[43] M. R. Schmid, S. Ates, F. von Hundelshausen, J. Dickmann,
H.-J. Wünsche, „Parking Space Detection with Hierarchical
Dynamic Occupancy Grids”, Intelligent Vehicle Symposium
2011, Baden-Baden, Germany, June 2011
[44] Renaud Dubé, Markus Hahn, Markus Schütz, Jürgen
Dickmann. and Denis Gingras, “Detection of parked vehicles
from a radar based occupancy grid”, IEEE Intelligent Vehicles
Symposium, 2014.
[45] Matthias R. Schmid, M. Mählisch, J. Dickmann, H.-J.
Wünsche, „Straight-Feature-Based Self-Localization for
Urban Scenarios”, 8th International Workshop on Intelligent
Transportation, Hamburg, Germany, March 22-23, 2011
[46] K. Werber, M. Barjenbruch, J. Klappstein, J. Dickmann, C.
Waldschmidt, „RoughCough - A New Image Registration
Method for Radar Based Vehicle Self-Localization”, 18th
International Conference on Information Fusion, July 2015,
Washington, D.C., USA.
[47] K. Werber, M. Barjenbruch, J. Klappstein, J. Dickmann, C.
Waldschmidt, „How do Traffic Signs look like in Radar?”,
44th European Microwave Conference (EuMC), October
2014, Rome, Italy
[48] M. Schütz, N. Appenrodt, J. Dickmann, K. Dietmayer,
„Simultaneous Tracking and Shape Estimation with
Laserscanners”, 16th International Conference on Information
Fusion (FUSION), July 2013, Istanbul, Turkey
[49] P. Steinemann, J. Klappstein, J. Dickmann, H.-J. Wünsche
and F. v. Hundelshausen, „Determining the Outline Contour
of Vehicles in 3D-LIDAR-Measurements”, Intelligent
Vehicle Symposium 2011, Baden-Baden, Germany
[50] Sylvia Pietzsch, Nils Appenrodt, Juergen Dickmann, Bernd
Radig, „Model-based Fusion of Laser Scanner and Radar
Data for Target Tracking”, 8th International Workshop on
Intelligent Transportation, Hamburg, Germany, March 22-23,
2011
[51] P. Brosseit, D. Kellner, C. Brenk, J. Dickmann, „Fusion of
Doppler Radar and Geometric Attributes for Motion
Estimation of Extended Objects”, Sensor Data Fusion:
Trends, Solutions, Applications, October 2015, Bonn,
Germany