
Automated Vehicle System Architecture with Performance Assessment


©2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future
media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
2017 IEEE International Conference on Intelligent Transportation Systems, 16-19 October 2017.
Automated Vehicle System Architecture with Performance Assessment
Ömer Şahin Taş1, Stefan Hörmann2, Bernd Schäufele3, and Florian Kuhnt1
Abstract— This paper proposes a reference architecture to increase the reliability and robustness of an automated vehicle. The architecture exploits the benefits arising from the interdependencies of the system and provides self-awareness. Performance Assessment units attached to subsystems quantify the reliability of their operation and return performance values. The Environment Condition Assessment, which is another important novelty of the architecture, informs augmented sensors on current sensing conditions. Utilizing environment conditions and performance values for subsequent centralized integrity checks allows algorithms to adapt to current driving conditions and thereby to increase their robustness. We demonstrate the benefit of the approach with the example of false positive object detection and tracking, where the detection of a ghost object is resolved in centralized performance assessment using a Bayesian network.
Index Terms— System architecture, performance assessment,
integrity monitoring, self awareness, fully automated driving,
self driving vehicle, robust, reliable, RobustSENSE.
I. INTRODUCTION

The evolution of active safety and driver assistance applications is moving in a direction where the on-board vehicle computers can take over control of the vehicle from a human driver in increasingly more situations [1]. The
main challenge, which still remains, is preserving reliability
and robustness of the perception systems in all possible
outdoor conditions, and the ability to react appropriately
to the unexpected behavior of other traffic participants. In
order to address these issues, several components have to
be integrated in an automated driving architecture [2]. The
two most important ones are the consequent consideration of
uncertainties in completely probabilistic processing and the
introduction of system-wide performance awareness. These
two approaches are the focus of the pan-European project
RobustSENSE [3] to improve robustness of advanced driver
assistance systems and automated driving in all weather and
driving conditions.
In this paper we present the RobustSENSE system archi-
tecture, which exploits redundant sensor information from a
multi-sensor platform and thereby addresses system deactiva-
tion arising from a single sensor malfunction or degradation.
The coordination among the individual sensors and subsys-
tems is managed by performance assessment modules that
monitor system performance during the vehicle’s operation
and return metrics that quantify the performance of the sub-
systems (cf. Fig. 1). This allows the subsystems to observe
Corresponding author: 1 FZI Research Center for Information Technology at the Karlsruhe Institute of Technology, Karlsruhe, Germany; 2 University of Ulm, Institute for Measurement, Control and Microtechnology, Ulm, Germany; 3 Fraunhofer Institute for Open Communication Technologies (FOKUS), Berlin, Germany.
Fig. 1: An automated vehicle system architecture that benefits
from redundancy, interdependencies and disparity through
the utilization of performance assessment modules and an
Environment Condition Assessment module.
overall system performance and to review the reliability of
their own output. Such an architecture enables switching to
degraded operation modes and thereby allows to maintain
continuous operation.
The rest of the paper is structured as follows: In Section
II we give an overview of related work. Subsequently, in
Section III, we introduce the novel features of the Ro-
bustSENSE system architecture. Once the introduction of
the architecture is complete, we continue with Section IV
in which we present performance metrics defined for the
performance assessment of our algorithms and afterwards
demonstrate the success of the proposed approach. For the
demonstration, we artificially inject a ghost object due to
clutter and investigate the results on ghost object probability
delivered by our Bayesian network. In Section V, we give a detailed outlook highlighting the impact on future research.
Section VI concludes the paper by summarizing key elements
and the benefits of the proposed architecture.
II. RELATED WORK

Fault diagnosis and system monitoring are both well-developed topics in engineering [4], [5]. Although our previous work [2], which reviews the requirements for realizing fully automated driving, has shown that state-of-the-art automated vehicles utilizing health monitoring systems can deal with faults, a thorough application of system monitoring and performance assessment on automated vehicles remains an open task.
One of the first representatives of mode switching systems
was implemented in the VaMoRs-P [6]. The switching rules,
however, were implemented as behavior selection rather
than performance degradation. The notion of situational
awareness was investigated by Albus et al. for task-based
behavioral decisions [7]. In another work, they also presented
metrics and performance measures for intelligent ground
vehicles [8]. However, the metrics they presented are for
testing the overall performance capabilities of the end system
– or product.
In the DARPA Urban Challenge, the performance mon-
itoring system implemented in the winner vehicle, Boss,
monitored the progress in its mission. If the mission was
repeatedly obstructed, it issued recoveries [9], [10]. Another
advanced monitoring module was designed for Shelley. The
module was completely separated from the rest of the system
and executed three different kinds of stops in case of incon-
sistencies [11]. The winner of the Korean 2012 Autonomous Driving Challenge, A1, also utilized a very simple system management module, yet it rescued the vehicle from many failures and contributed significantly to the success of the vehicle [12].
Another state of the art automated vehicle, Jack, performs
probabilistic reasoning and deals with the discrepancies in
its perception system. Furthermore, the vehicle can switch
into degraded operation modes [13]. However, the automated
vehicle lacks system-wide situational awareness. The authors
that developed the system architecture of Jack propose an
implementation independent functional system architecture
for automated driving in their very recent publication [14].
However, as the paper inspects the architecture from the functional perspective, conclusions on robustness and reliability can only be drawn to a limited extent.
The literature, unfortunately, lacks an architectural consideration of module-based and system-wide performance assessment for automated vehicles.

III. SYSTEM ARCHITECTURE

In a layer-function based classification, current architec-
tures typically distinguish between a sensor layer, a percep-
tion and scene understanding layer, and a planning layer
[2], [15], [16]. By slightly diverging from this taxonomy,
we divide the system architecture into four different layers
and classify scene understanding and situation prediction
modules within the understanding and planning layer.
A main feature to maintain robustness and reliability is to realize performance awareness. The RobustSENSE architecture evaluates the sensors' performance under consideration of environmental conditions and combines the performance values of the subsystems built on top of them in a system-wide performance assessment.
The layers and their colors in the figures throughout this
paper are:
A. Sensor layer (grey)
B. Data Fusion layer (dark blue)
C. Understanding and Planning layer (light blue)
D. System Performance Assessment layer (magenta).
The layers strictly follow the information flow – all but
one: the System Performance Assessment is a horizon-
tal task that enables overall system monitoring and draws
conclusions by observing the orthogonal and independent
information flow of performance values.
A. Sensor Layer and RobustSENSE-Sensor
The Sensor layer is the lowest layer of the architecture.
Through this layer, the system retrieves the information
required for the basic environment perception.
Conventional sensors (dashed boxes in Fig. 2) deliver
transduced, partly denoised and smoothed outputs and are
unaware of the current environment conditions. However, in
particular on an outdoor moving platform, varying sensing
conditions highly influence the measurement. Therefore,
we define the RobustSENSE-Sensor, which continuously
evaluates the reliability and confidence of the measurement
data under consideration of environment conditions. The
information on environment conditions is delivered by the
Environment Condition Assessment, presented in the next subsection.
As shown in Fig. 2, a RobustSENSE-Sensor is set up by
a performance assessment module attached onto a conven-
tional sensor. Sensor models, possibly conditioned on the
environment, are usually confidential and not published by
manufacturers. Therefore, the environment conditions are fed
back into the RobustSENSE-Sensor. The resulting sensor
yields the ordinary output data augmented with probabilistic
assessment such as uncertainty of the measurement, or the
clutter probability in the field of view. Examples are given
in Section V.
B. Data Fusion Layer and Environment Condition Assessment
The Data Fusion layer contains functional modules and
a high level fusion module for data fusion, and a newly
introduced Environment Condition Assessment module (cf.
Fig. 3). During processing, input data is propagated and
consequently an environment model containing probabilistic
features, e.g. existence probability or state variance of objects
and their relations, is built.
The functional modules have access to the whole sensor
data stream and the main aim is to combine strengths of
different types of sensors, e.g. the velocity measurement of
radars and angular resolution of lasers. Functional modules
tackle specific tasks like ego motion estimation with a strict
no-feedback structure, to avoid self-sustaining trends.
In High-Level Fusion, on the other hand, the estimates
of the functional modules can be optimized using data from
all other functional modules, e.g. by performing ego-motion
and localization compensation. The module also resolves
ambiguities originating from functional modules and puts
data in relation.
Fig. 2: A system architecture that is capable of assessing the performance of the Sensor layer. The performance assessment
modules attached to individual sensors receive information on the environmental conditions from the Environment Condition
Assessment module and send status information to the Fusion Performance Assessment module.
The RobustSENSE strategy of being adaptable to varying
weather and light conditions is addressed by the Environment Condition Assessment module. The RobustSENSE-
Sensors are informed by this module to consider current
conditions in the measurement models. This way, the sensor
performance data can be adjusted, e.g. by increasing clutter
probability, or the sensor can switch the operation mode to
maintain its performance.
The Fusion Performance Assessment module receives its input from the performance assessment modules of the sensors, the
Environment Condition Assessment, individual functional
modules, and the High Level Fusion. The module evaluates
the validity of the environment model in a holistic manner
to express the trustworthiness of the resulting environment
model. The assessment is based on the confidence, existence
probability and consistency of the fused data. Statistical tests
comparing predicted measurements with actually obtained
measurements can be used for module consistency assess-
ment [17].
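A common instance of such a test (the paper cites statistical tests [17] without fixing a concrete one, so the shape below is an assumption) is the normalized innovation squared (NIS) gate known from Kalman filtering, sketched here for a two-dimensional measurement:

```python
# Illustrative module-consistency check via the normalized innovation
# squared (NIS) test. The paper refers to statistical tests comparing
# predicted with obtained measurements [17] but does not prescribe this
# particular one; all numbers here are hypothetical.

def nis_2d(innovation, S):
    """NIS = v^T S^-1 v for a 2-D innovation v and innovation covariance S."""
    (a, b), (c, d) = S
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    v = innovation
    Sv = [inv[0][0] * v[0] + inv[0][1] * v[1],
          inv[1][0] * v[0] + inv[1][1] * v[1]]
    return v[0] * Sv[0] + v[1] * Sv[1]

CHI2_95_2DOF = 5.991  # 95% quantile of chi-square with 2 degrees of freedom

def measurement_consistent(predicted, measured, S):
    """True if the obtained measurement is statistically consistent with
    the predicted one at the 95% level."""
    v = (measured[0] - predicted[0], measured[1] - predicted[1])
    return nis_2d(v, S) <= CHI2_95_2DOF

# A measurement close to the prediction passes the gate ...
print(measurement_consistent((10.0, 2.0), (10.3, 1.8), [[0.5, 0.0], [0.0, 0.5]]))
# ... while a far-off one is flagged as inconsistent.
print(measurement_consistent((10.0, 2.0), (14.0, 5.0), [[0.5, 0.0], [0.0, 0.5]]))
```

A persistent run of failed gates would then lower the consistency component of the fusion performance value.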
C. Understanding and Planning Layer
The Understanding and Planning layer consists of Scene
Understanding, Situation Prediction, Behavioral Planning
and Trajectory Planning modules that are mainly processed
sequentially and thereby enrich the fused environment model
by additional interpretations and conclusions (cf. Fig. 4). The
final result is a trajectory decision that is transmitted to the
vehicle controllers.
The Scene Understanding module provides situational
awareness by identifying relations and interactions among
traffic participants and traffic infrastructure [18]. Interrelat-
ing object motion models, which include discrete behavior
classes such as braking in front of an intersection or driving
right through, are estimated using the relational information.
Environment model
Data Fusion layer
Sensor data stream
High Level Fusion
Street Topology Estimation
Ego Motion
Environment Cond. Assessment
Grid Mapping
Object Tracking
Sensor performance stream
Fig. 3: Overview of an exemplary Data Fusion process chain.
A functional module can contribute to other modules, as
long as it is on a lower hierarchical level. This restriction
is done to prevent inner feedback loops. The order and the
components of the process chain can be changed.
The refined environment model for the current scene serves
as a basis for situation prediction.
The Situation Prediction module utilizes the dynamic environment, the street topology, and legal information from the environment model, such as speed limits or right-of-way rules, and couples these with the interpreted relations. Compared to other modules, this module plays a more important role in the overall performance assessment. If there are clear
contradictions between the predictions of the objects, or if the
predictions are not consistent with the previous results, the
whole automated driving system is switched into a degraded
operation mode.
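A minimal sketch of such a consistency-triggered mode switch could look as follows; the divergence measure and the threshold are illustrative assumptions, not the rule used in the paper:

```python
# Hypothetical sketch of the mode-switching rule in Situation Prediction:
# on clear contradictions between object predictions, or when the current
# predictions deviate too far from the previous cycle's results, the
# system falls back to a degraded operation mode. The divergence measure
# and thresholds are illustrative assumptions.

NOMINAL, DEGRADED = "nominal", "degraded"

def prediction_divergence(previous, current):
    """Mean absolute deviation between the previous cycle's predicted
    object positions and the current ones (keyed by object id)."""
    common = previous.keys() & current.keys()
    if not common:
        return 0.0
    return sum(abs(previous[i] - current[i]) for i in common) / len(common)

def select_mode(previous, current, contradiction, max_divergence=2.0):
    """Switch into degraded operation on contradictions between object
    predictions or on inconsistency with previous results."""
    if contradiction or prediction_divergence(previous, current) > max_divergence:
        return DEGRADED
    return NOMINAL

prev = {1: 42.0, 2: 55.0}
print(select_mode(prev, {1: 43.1, 2: 55.8}, contradiction=False))  # consistent
print(select_mode(prev, {1: 50.0, 2: 61.0}, contradiction=False))  # diverged
```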
Fig. 4: The Understanding and Planning layer consists
of Scene Understanding, Situation Prediction, Behavioral
Planning and Trajectory Planning modules which process and
enrich the environment model.
The Behavior Planning module is responsible for creating a set of behavior options and choosing one according to the acceptable uncertainties, the system performance status, and the postulations of the current traffic situation. It serves as a master module guiding the trajectory planner into a certain solution space. The variation in the number of alternative behaviors and their respective costs can, for example, be utilized as a metric by the performance assessment module.
The Trajectory Planning module calculates safe trajecto-
ries using the confidence values of the environment model
and choosing the confidence bounds according to the output
of the System Performance Assessment. It employs metrics
reflecting how hard it is to find an acceptable solution.
If numerical methods are utilized for trajectory planning,
metrics on convergence properties can be utilized. In case of
sampling-based planning algorithms, number of admissible
samples, or for example in case of RRT* [19] the number
of parent node changes after rewiring, can be used as a
metric instead. By receiving input from the central System
Performance Assessment module and the uncertainties of the
environment model, the planner adapts its confidence bounds.
D. System Performance Assessment
The System Performance Assessment module continu-
ously evaluates the performance status of the whole ve-
hicle by processing its inputs from the Data Fusion and
Understanding and Planning layers to create an awareness
of the system’s performance and to be able to react by ei-
ther function degradation or component adaptation. Decision-making algorithms utilize the information about the source of degradation and evaluate the appropriate system degradation for the current environment situation.
The Fusion Performance Assessment module and the
Overall Algorithm Performance Assessment module deliver
an abstract, qualitative understanding of the layers’ perfor-
mance. Thus, only a small set of performance measures
have to be evaluated in the System Performance Assessment
module. Performance values like the detection of failure
cases can be inferred in different ways, e.g. rule-based as
described in our previous work for small-scale vehicles [20]
or probabilistically as will be described in Section IV.
IV. EXPERIMENTS

In our experiments we tackle the scenario of a ghost object originating from clutter in a sensor, e.g. due to bad weather conditions. Fig. 5 illustrates the considered car following scenario. An automated vehicle follows two leading vehicles, while a slower third object, the ghost object, is falsely detected in between the leading vehicles. In our experiments vehicle 1 hits the ghost object but drives through it without any physical impact on its trajectory. Before the crash, the
Fig. 5: The considered scenario: A ghost object appears in a car following situation, between vehicle 1 and vehicle 2.
ego vehicle should be aware that the scene is inconsistent because of the abnormal behavior of vehicle 1 considering the ghost object – in the best case already inferring a high probability of having a ghost object. As soon as vehicle 1 drives through the ghost object, the ego vehicle should be completely aware of that object being a ghost object.
To exemplarily implement this desired system behavior
using the proposed system architecture, we put a focus on
the following components: Sensor Performance Assessment,
Fusion Performance Assessment, Scene Understanding Per-
formance Assessment, and Trajectory Planning Performance
Assessment. The performance measures are passed through
the different performance assessment modules and a proba-
bility for having a ghost object is derived.
For this experiment we recorded trajectories of objects in
car following scenarios using radar sensors. For recording we
used the automated vehicle of Ulm University [21], equipped
with a differential global positioning system and two radar
systems, LRR3 and ARS300. In particular, in a car following scenario, the radar sensors' capability to detect occluded cars is an advantage compared to lidar sensors. The LRR3 radar field of view covers up to 250 m with an opening angle of 30° at 12.5 Hz. The used ARS300 provides radar units for near-distance (up to 60 m) and far-distance (up to 200 m) objects. The opening angles are 56° (near) and 17° (far) with a 15 Hz update rate. The recorded data was filtered in
offline post processing by removing clutter, labeling target
measurements to objects and fitting trajectories in the noisy
measurements. The post-processed trajectories are considered as ground truth for a simulated Sensor and Data Fusion.
A. Simulated Sensor and Fusion
In the experiment, the perception stage sensor clutter prob-
ability and tracked object uncertainty were simulated. The
sensor clutter probability was always set high because of the
harsh weather conditions in the scene. For testing the ghost
object detection, we artificially modified the environment model.
In the first case, due to propagation of sensor clutter
probability, the ghost object has low existence probability.
This is how a good probabilistic implementation
should handle sensor uncertainties and thus suppress falsely
detected ghost objects in subsequent understanding and plan-
ning modules.
Nevertheless, improbable events can still occur, and thus it can happen that many false detections result in a high
existence probability of the ghost object. Thus, in the second
case, we simulate a high existence probability for the ghost
object, causing the subsequent modules to consider it in
processing. Probabilistic modelling alone cannot solve this, and an additional system-wide performance assessment is necessary to detect the algorithm failure.
B. Particle based Scene Understanding and Prediction
We use a particle based scene understanding and situation
prediction algorithm in these experiments [22]. In scene
understanding, the behavior defining parameters of a car
following model, the Intelligent Driver Model [23], are
estimated for tracked objects. In the subsequent situation
prediction, the particles are forward propagated using the
same model and a noise process. Performance assessment is related to the resampling process in the parameter estimator. Since resampling is performed at every update step, every particle has the same weight 1/N. The importance of a particle after resampling is given by its number of duplications. Ideally, every particle has the same importance, leading to a high effective number of particles. However, if the filter diverges and the underlying model does not fit the real observation, the number of unique particles decreases. We consider a model to be suitable when the ratio between the unique particles N̂ and the total number of particles N is greater than 0.5 and define a metric for model suitability

η_IDM = min(1, N̂ / (0.5 N)).    (1)
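The model-suitability metric η_IDM can be computed directly from the resampling indices. The sketch below is a minimal illustration, assuming the definition min(1, N̂ / (0.5 N)) implied by the 0.5 suitability threshold:

```python
# Sketch of the model-suitability metric from the particle-based scene
# understanding: after resampling, all N particles carry weight 1/N, and
# a low count of unique surviving particles indicates filter divergence.
# Assumed form: eta_IDM = min(1, N_unique / (0.5 * N)), as implied by the
# 0.5 suitability threshold in the text.

def model_suitability(particle_ids):
    """particle_ids: list of parent indices drawn during resampling."""
    n_total = len(particle_ids)
    n_unique = len(set(particle_ids))
    return min(1.0, n_unique / (0.5 * n_total))

# A healthy resampling step keeps many unique particles ...
print(model_suitability([0, 1, 2, 3, 4, 5, 6, 7]))   # -> 1.0
# ... while a diverging filter collapses onto a few ancestors.
print(model_suitability([0, 0, 0, 0, 1, 1, 2, 0]))   # -> 0.75
```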
C. Trajectory Planning
For trajectory planning we use a local-continuous opti-
mization based planner, as presented in [24]. The planning
problem is modeled as a quadratic objective function with
nonlinear constraints. The metrics we define in the follow-
ing are mainly for a numerical solution based approach.
However, the approach we present can be adapted to other
algorithms. We set the metrics so that they are normalized
and 1 reflects a good performance, whereas 0 reflects bad
performance. We choose the metrics as simple as possible
and try to avoid applying transformations.
The first metric we define is based on the final cost of the
planner. The value of cost reflects the quality of the planned
trajectory. The cost value is typically low if the planner
converges to a local minimum. However, in some cases even
though the algorithm has converged, the cost function can
still have high values due to environment conditions. In order to separate the plausible bounds from unacceptable values, we first clip the values and then choose the sigmoid function as a feature

η_cost = 1 / (1 + e^(c − c_ref)),    (2)
Fig. 6: The performance assessment is realized as a discrete Bayesian network combining the four exemplary performance measures into an overall estimation of the ghost object probability.
where c represents the current cost and c_ref the origin value.
Another metric for evaluating the algorithmic performance of the planner is the ratio of the number of cost-minimizer iterations over the total number of iterations

η_success = n_minimizer / n_total.    (3)

This metric, when treated together with η_cost, reflects whether the problem is ill-formed or not.
A further metric for evaluating the performance of the
planner given the current situation is an analysis on required
deceleration. Traffic participants have been shown to behave cooperatively, hence not requiring other participants to apply full braking [25]. An instantaneous requirement of hard
braking, when considered together with clutter and existence
probabilities, can be used as an indicator of ghost objects.
Such an analysis is complementary to the cost returned by
the planner. If hard braking is indispensable for collision
avoidance, then the optimization based planner will not be
able to deliver low cost values as the acceleration and jerk
is penalized.
To determine the criticality, we utilize the naïve approach of finding the minimum required braking acceleration a_req and normalizing it to the maximum admissible value a_max

η_criticality = 1 − a_req / a_max.    (4)

A more detailed criticality analysis can be done by incorporating sensor uncertainties [26], but this lies outside the focus of this paper.
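The three planner metrics can be sketched in a few lines. The clipping bounds, the reference cost c_ref, the admissible deceleration a_max, and the simple constant-deceleration model for the required braking are illustrative assumptions, not values from the paper:

```python
import math

# Illustrative computation of the three planner performance metrics.
# Clipping bounds, c_ref, and a_max are hypothetical example values.

def eta_cost(c, c_ref, c_min=0.0, c_max=10.0):
    """Sigmoid feature of the clipped final planner cost, cf. Eq. (2)."""
    c = min(max(c, c_min), c_max)
    return 1.0 / (1.0 + math.exp(c - c_ref))

def eta_success(n_minimizer, n_total):
    """Share of iterations spent in the cost minimizer."""
    return n_minimizer / n_total

def eta_criticality(v_rel, distance, a_max=8.0):
    """One minus the normalized minimum required braking deceleration.
    Assumes simple kinematics: a_req = v_rel^2 / (2 d) is the constant
    deceleration needed to cancel the closing velocity within the gap d."""
    a_req = v_rel ** 2 / (2.0 * distance)
    return max(0.0, 1.0 - a_req / a_max)

# Relaxed following situation: low cost, converged solver, mild braking.
print(round(eta_cost(1.0, c_ref=3.0), 3))
print(eta_success(45, 50))
print(round(eta_criticality(v_rel=5.0, distance=25.0), 3))
```

All three values are normalized to [0, 1], so they can be fed directly into the discrete performance assessment after thresholding.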
D. System Performance Assessment via Discrete Bayesian Network
As previously mentioned, inferring the overall system
performance from qualitative discrete performance measure-
ments can be realized in different ways. In this work, we
Fig. 7: Results of the online evaluation in the case of a falsely detected ghost object with high existence probability. The ghost object appears at 9.804 s, resulting in a bad match of the behavior model for vehicle 1 (bottom left). The performance metrics of the planning algorithm indicate that there is an anomaly (bottom center). The performance assessment using a Bayesian network to combine all performance metrics infers a high ghost object probability (bottom right), and vehicle 2 turns out to be a ghost object. The low performance values of the behavior model at the very beginning are due to initialization. Because the existence probability is always set high, the ghost object probability is not less than 0.336 over the entire record.
chose a discrete Bayesian network because of its ability to factorize a large, hardly modelable conditional dependency into subproblems of lower complexity.
The graph in Fig. 6 describes the joint probability dis-
tribution as a factorization of local conditional probabilities
using the hidden variables Ghost Object Occurrence, Plan-
ner Performance, Inconsistent Scene and Ghost Object. We
modeled the conditional probabilities by expert knowledge, but machine learning can also be applied to the subproblems to prevent wrong assumptions. Using the Bayesian network, the
a posteriori Ghost Object probability can be inferred given
the per module performance metrics described in Section IV-
A to IV-C. This Bayesian network is distributed into the
three modules Fusion Performance Assessment, Overall Al-
gorithm Performance Assessment and System Performance
Assessment and thus directly reflects the factorization in the
system architecture.
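To make the inference step concrete, here is a deliberately reduced sketch with a single hidden variable and two binarized performance measures; the structure and all conditional probability tables are illustrative assumptions, not the network of Fig. 6:

```python
# A minimal discrete Bayesian network in the spirit of the described
# performance assessment, inferring a ghost-object probability from two
# binarized performance measures. Structure and conditional probability
# tables are illustrative assumptions, not the values used in the paper.

P_GHOST = 0.05                            # prior on a ghost object
P_MODEL_LOW = {True: 0.9, False: 0.1}     # P(model suitability LOW | ghost?)
P_PLANNER_LOW = {True: 0.8, False: 0.15}  # P(planner performance LOW | ghost?)

def ghost_posterior(model_low, planner_low):
    """P(ghost | evidence) by enumeration over the single hidden variable.
    Both observations are conditionally independent given 'ghost'."""
    def joint(ghost):
        lm = P_MODEL_LOW[ghost] if model_low else 1.0 - P_MODEL_LOW[ghost]
        lp = P_PLANNER_LOW[ghost] if planner_low else 1.0 - P_PLANNER_LOW[ghost]
        prior = P_GHOST if ghost else 1.0 - P_GHOST
        return prior * lm * lp
    jt, jf = joint(True), joint(False)
    return jt / (jt + jf)

# Both metrics degraded: the ghost-object hypothesis dominates.
print(round(ghost_posterior(model_low=True, planner_low=True), 3))
# Both metrics healthy: the posterior falls below the prior.
print(round(ghost_posterior(model_low=False, planner_low=False), 3))
```

The full network additionally factorizes over the hidden variables Ghost Object Occurrence, Planner Performance, and Inconsistent Scene, but the enumeration principle is the same.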
E. Results
The results are demonstrated in several sequences and vi-
sualized on a GUI (cf. Fig. 7). A video visualizing the results
is also provided1. Scene understanding estimates behavior
parameters of the first leading vehicle approaching the ghost
object, resulting in low precision of the a posteriori distri-
bution and consequently in a low value of the performance
metric. The subsequent situation prediction provides results
with high variance. Without system performance assessment,
trajectory planning will evaluate a critical situation and will
brake to reduce the criticality. This will return a low performance of the planner. However, this result is consistent with the low performance value of the scene understanding and hence indicates successful operation of the planner. By incorporating planning and understanding performance values with the clutter and existence probabilities, we achieved a unified assessment and resolved the use case of confronting ghost objects.
V. OUTLOOK

The holistic probabilistic processing with incorporated self-monitoring starts in the lowest layer, the Sensor layer.
Since the performance of sensors highly depends on the envi-
ronment, in particular the weather conditions, it is necessary
to provide fused environment conditions from the sensor
fusion layer to sensor post processing. The Environment
Condition Assessment can for example refer to rain sensors,
weather forecast and the relative sun position.
Since the sensor performance assessment is done within a
RobustSENSE sensor, manufacturers do not have to publish
sensor models, but can adapt the model to the current
conditions. Examples of sensor model adjustments based on sensing conditions occur in lidar, radar, and camera sensors. Lidar sensors often suffer from rain and reflections on a wet road surface, increasing the clutter probability. The detection probability of static objects, as well as the existence probability of detected static objects, largely decreases for single-beam monopulse radars if the sensor itself is standing still [27]. Cameras might even drop out when directly
facing the sun (cf. Fig. 8). These effects can easily be taken
into account with Environment Condition Assessment.
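As a hypothetical illustration, such condition-dependent adjustments could be encoded as simple functions of the Environment Condition Assessment outputs; all base probabilities and scaling factors below are invented for illustration, not calibrated sensor models:

```python
# Hypothetical sketch of how a RobustSENSE-Sensor could adapt its sensor
# model to Environment Condition Assessment outputs. Base probabilities
# and scaling factors are illustrative assumptions only.

BASE_CLUTTER_PROB = 0.02    # assumed lidar clutter probability, clear weather
BASE_DETECTION_PROB = 0.95  # assumed camera object detection probability

def lidar_clutter_prob(rain_intensity, wet_road):
    """Rain and a reflective wet road surface raise the clutter probability."""
    p = BASE_CLUTTER_PROB * (1.0 + 10.0 * rain_intensity)
    if wet_road:
        p *= 2.0
    return min(p, 1.0)

def camera_detection_prob(sun_in_fov):
    """A camera directly facing the sun may effectively drop out."""
    return 0.05 if sun_in_fov else BASE_DETECTION_PROB

print(lidar_clutter_prob(rain_intensity=0.0, wet_road=False))  # clear weather
print(lidar_clutter_prob(rain_intensity=0.5, wet_road=True))   # heavy rain
print(camera_detection_prob(sun_in_fov=True))                  # sun glare
```

Downstream fusion would then consume these adjusted probabilities instead of static sensor parameters.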
Fig. 8: An exemplary situation highlighting the benefits of
sensor performance assessment. Object detection probability
for the front left fish-eye camera highly decreases consider-
ing the relative position of the sun provided by Environment
Condition Assessment.
Further processing can adapt to the augmented sensor
information. Object tracking with multiple sensors, e.g., can
provide an enriched existence probability based on detection
probabilities and clutter probabilities of different sensors.
Scene understanding, as well as situation prediction, can run parallel relative motion hypotheses when the existence probability of an object is low. Consequently, the planning modules exploit this information for a decision or behavior adaptation.
Finally, the consequent implementation of performance assessment will introduce performance awareness to autonomous systems, making them able to react with function degradation or algorithm adaptation to handle more environment conditions and to increase the robustness of automated driving.
VI. CONCLUSION

In this paper we presented a guideline description for achieving robust and reliable operation of automated vehicles. We defined metrics and presented methods which can
be utilized for performance evaluation. By means of these metrics, in parallel to the robustness and reliability of individual algorithms, we introduced a system-wide performance awareness that relies on the interdependencies of the subsystems.
Such a system maintains performance awareness and by
providing feedback to the individual system modules allows
the vehicle to adapt its function and algorithms to the current
driving condition.
We carried out experiments on real trajectory data, where we demonstrated the benefits of our system with the very common scenario of false object detection due to clutter measurements in a car following scenario. Propagating probabilistic performance assessment from the sensor to trajectory planning resolves the scenario where a radar sensor is aware of a high clutter probability. Using the introduced System Performance Assessment based on Bayesian inference, the potential responses of different submodules in the understanding and planning layers are considered, and we reliably resolved the situation of a falsely detected ghost object.
Our future work focuses on the implementation of the presented approach in the RobustSENSE automated vehicle demonstrators. Algorithms that utilize these metrics and make decisions under the uncertain information they convey will constitute the basis of our research. We will further focus on the Environment Condition Assessment and utilize its output in the RobustSENSE-Sensor together with renowned automotive suppliers.
The research leading to these results has received funding from the European Union under the H2020 ECSEL Programme, as part of the RobustSENSE project, contract number 661933. Responsibility for the information and views set out in this publication lies entirely with the authors. The authors would like to thank all partners within RobustSENSE for their cooperation and valuable contributions.