PRYSTINE - Technical Progress After Year 1
Norbert Druml*, Omar Veledar, Georg Macher, Georg Stettinger, Solmaz Selim, Jakob Reckenzaun, Sergio E. Diaz, Mauricio Marcano,
Jorge Villagra, Rutger Beekelaar, Johannes Jany-Luig, Marta Maria Corredoira Sagues, Paolo Burgio, Christian Ballato, Björn Debaillie,
Lars van Meurs, Andrei Terechko, Fabio Tango, Anna Ryabokon, Andrei Anghel, Oğuz İçoğlu, Sumeet S. Kumar, George Dimitrakopoulos
*Infineon Technologies Austria AG, Austria, norbert.druml@infineon.com
Abstract— Among the current trends that will affect society in
the coming years, autonomous driving stands out as having the
potential to disruptively change the automotive industry as we
know it today. For this, fail-operational behavior is essential in
the sense, plan, and act stages of the automation chain in order
to handle safety-critical situations autonomously, which
state-of-the-art approaches do not yet achieve, partly because
reliable environment perception and sensor fusion are still missing.
PRYSTINE will realize Fail-operational Urban Surround
perceptION (FUSION) which is based on robust Radar and
LiDAR sensor fusion and control functions in order to enable
safe automated driving in urban and rural environments. In
this paper, we detail the vision of the PRYSTINE project and
we showcase the results achieved during the first year.
Keywords— FUSION, fail-operational, perception
I. INTRODUCTION
PRYSTINE – PRogrammable sYSTems for INtelligence in
AutomobilEs – implements a matrix-style organization made of
work packages and supply chains. A supply chain is the logical /
virtual combination of partner activities fitting together within a
specific topic leading to a combined result (e.g., demonstrator). A
supply chain supplies other supply chains and project activities with
its results. Each supply chain addresses a specific PRYSTINE
objective. PRYSTINE’s objectives can be divided into
technical objectives (O1-O4) and impact/social objectives (O5-O6).
Thus, a supply chain’s result (demonstrator) addresses the specific
objective (for example, the sensor components developed in supply
chain 1 will demonstrate enhanced reliability and performance,
reduced cost and power). Every effort / activity of a specific partner
in a supply chain is mapped into the generic work packages, starting
with requirements, and ending with validation and test (which
represents the typical automotive V-development cycle). This
approach forms a clearly arranged matrix structure, which is
depicted in Fig. 1. PRYSTINE defines three types of supply chains.
The technology enabler supply chains (1-4) develop the
fundamental core technology bricks required by other supply chains
and for achieving PRYSTINE’s ambitious goals. These supply
chains 1-4 cover fail-operational semiconductor components,
embedded systems and fundamental algorithms, E/E vehicle
architecture as well as the important sensor fusion solutions, thus
realizing FUSION.
The output enabler (also known as applications or shiny
demonstrators) supply chains (5-7) employ and validate the results
achieved in the technology enabler supply chains. Therefore, the
advancements achieved during the PRYSTINE project will be
showcased by dedicated demonstrators of supply chains 5-7. In
particular, a heavy-duty vehicle demonstrator, a passenger vehicle
demonstrator, and a shared control demonstrator will be featured.
The impact supply chains (8-10) form the basis for generating
“European Values”. This includes the economic, societal, and pan-
European impact generated by the PRYSTINE project.
II. PRYSTINE’S SUPPLY CHAINS
SC1: Components LiDAR, Radar and safety controllers for
FUSION
SC1 represents one of the core technology enablers. The vision of
this supply chain and its project partners is to research and develop
PRYSTINE’s essential components for FUSION. Particularly
important is the focus on fail-operational environment perception
sensors (Radar & LiDAR) and on next-generation embedded
control components (such as safety controllers and processing
hardware). These core components will be employed and integrated
by most of PRYSTINE’s supply chains. Furthermore, SC1 will
address the following measurable key performance indicators:
• Fail-operational sensor compound vs. fail-silent individual sensing approaches
• Power reduction of 25% through semiconductor material improvements and functional convergence in sensor modules
• Up to 30% cost reduction and 10% margin improvement for perception sub-systems
• 30% fewer false-positive and/or false-negative detections compared to a separate sensing approach
SC1 pursues two approaches towards fail-operational LiDAR
sensing: an oscillating comb-drive based 1D MEMS scanning
solution and a piezoelectric 2D MEMS scanning approach. Both
approaches come with their distinct pros and cons. For example,
while 1D scanners typically enable high frame rates and show high
robustness against vibrations and shocks, 2D scanners (see also Fig.
2) concentrate the whole laser light into a single laser spot thus
enabling high SNR values and long-range scanning.
Fig. 2: Murata’s 2D scanning MEMS mirror and NXP’s Radar module.
Fig. 1: PRYSTINE’s WP and SC interaction, reflecting the automotive V-
development cycle. Adapted from [1].
Radar is considered one of the key sensors for automated driving
because it provides reliable detection over a wide range of weather
conditions (rain, snow, lighting, fog, etc.). The recent trend of
designing Radars in CMOS technologies comes with several
additional advantages. CMOS is a low-cost technology which
supports dense integration, and will continue to scale-down towards
future nodes. This is a clear benefit for automotive, as large-scale
deployment and seamless integration is targeted. At the same time,
CMOS supports complex system integration, allowing both the RF
sensor and the digital data processing to be integrated on the same chip.
In terms of cost scaling and miniaturization, CMOS clearly
outperforms cameras and other sensors. Therefore, CMOS Radar
development is one of the key activities in PRYSTINE. For
example, a novel package is developed that integrates not only a
CMOS transceiver but also MIMO antennas, thus forming an
innovative System-in-Package, see also Fig. 2.
From the Radar signal processing point of view, the efforts were
concentrated during the first year on distributed Radar sensors,
direction of arrival estimation methods and radio frequency
interference (RFI) detection and mitigation algorithms. For instance,
a single Radar RFI mitigation algorithm based on short time Fourier
transform and a linear combination of order statistics (L-statistics)
was designed (a general block diagram of the algorithm and
preliminary results on simulated data are shown in Fig. 3). In the
presence of a typical Radar-to-Radar interference, the algorithm
retains most of the target’s energy in the computed range profile,
which makes it suitable for an interference-operational Radar
sensor.
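To make the processing chain more concrete, the following minimal Python sketch (our own illustration, not UPB's actual implementation) shows the core idea of combining an STFT with order statistics: for every frequency bin the magnitudes are sorted over time and only the smallest fraction is averaged, so that short interference bursts are excluded from the range profile. The synthetic signal, parameters, and weighting are assumptions.

```python
import numpy as np
from scipy.signal import stft

def l_stat_range_profile(beat_signal, fs, keep_fraction=0.7, nperseg=128):
    """Robust range profile from an FMCW beat signal via STFT + L-statistics.

    Interference typically appears as short, strong bursts in time; for every
    STFT frequency bin we therefore sort the magnitudes over time and average
    only the smallest `keep_fraction` of them (a linear combination of order
    statistics with zero weight on the largest samples).
    """
    _, _, Z = stft(beat_signal, fs=fs, nperseg=nperseg, return_onesided=False)
    mag = np.abs(Z)                        # shape: (freq_bins, time_frames)
    sorted_mag = np.sort(mag, axis=1)      # order statistics per frequency bin
    k = max(1, int(keep_fraction * sorted_mag.shape[1]))
    return sorted_mag[:, :k].mean(axis=1)  # robust estimate of each range bin

# Synthetic example: one target beat tone plus a short chirp-like RFI burst.
fs, n = 1.0e6, 4096
t = np.arange(n) / fs
target = np.exp(2j * np.pi * 150e3 * t)                 # beat tone of a target
rfi = np.zeros(n, dtype=complex)
burst = slice(1000, 1200)
rfi[burst] = 5 * np.exp(2j * np.pi * (50e3 * t[burst] + 2e8 * t[burst] ** 2))
profile = l_stat_range_profile(target + rfi + 0.05 * np.random.randn(n), fs)
print("peak range bin:", int(np.argmax(profile)))
```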
In addition to the aspects mentioned above, IMEC targets their
Radar sensor as a building block for a fail-operational system. The
envisioned Radar sensor will offer scalability over its sensitivity
versus its power consumption. Having such scalability enables the
Radar sensor to maintain its operation on an emergency cell battery
in case of a central power breakdown. In that case, the remaining
battery power would be reserved for driving the power train, and the
reduced sensing sensitivity would be sufficient for the low vehicle
speed. On the other hand, the Radar sensor could be scaled to
maximal sensitivity (and power consumption) to take-over the
functionality of other failing sensors. In addition to scalability,
IMEC develops their Radar sensor for high robustness and
reliability to minimize false detections and interferences.
SC1 also invests significant effort in advancing current safety
controllers and health monitoring solutions well beyond the state of
the art. This is of particular importance because these safety
components form the safety backbone of current and future
vehicles. Fig. 4 depicts an innovative approach towards a vehicular
health monitoring solution. The vehicle’s hardware and software are
continuously monitored. In case a fault is detected, alarms are
triggered and certain safe states are entered that, e.g., continue to
operate the vehicle in a degraded mode.
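The sketch below is a purely illustrative complement to Fig. 4 and shows the monitor-alarm-safe-state pattern described above; the component names, alarm rules, and state set are assumptions and do not reflect the actual safety controller design.

```python
from enum import Enum, auto

class SafeState(Enum):
    NOMINAL = auto()
    DEGRADED = auto()       # e.g., limited speed, reduced sensor set
    SAFE_STOP = auto()      # bring the vehicle to a controlled stop

class HealthMonitor:
    """Toy vehicular health monitor: HW/SW checks raise alarms; alarms map
    to safe states so the vehicle can keep operating in a degraded mode."""

    def __init__(self):
        self.alarms = set()

    def report(self, component: str, healthy: bool) -> None:
        if healthy:
            self.alarms.discard(component)
        else:
            self.alarms.add(component)

    def safe_state(self) -> SafeState:
        if {"primary_controller", "brake_actuator"} & self.alarms:
            return SafeState.SAFE_STOP       # loss of a critical element
        if self.alarms:
            return SafeState.DEGRADED        # non-critical fault detected
        return SafeState.NOMINAL

monitor = HealthMonitor()
monitor.report("lidar_front", healthy=False)  # sensor fault -> degraded mode
print(monitor.safe_state())                   # SafeState.DEGRADED
```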
SC2: High performance embedded control and intelligence for
FUSION
SC2 is the second core technology enabler that addresses Objective
2: Dependable embedded control by co-integration of signal
processing and AI approaches for FUSION on the system level by
realizing highly-efficient dependable architectures. These novel
architectures should fulfill crucial functional safety requirements for
automated driving and ADAS functions based on sensor data fusion
and include two main R&D architecture directions:
1. Embedded Control architectures by combining COTS
components, e.g. number crunching processors, safety
microcontrollers, Deterministic Ethernet backbone for
data exchange, and
2. Embedded Intelligence architectures such as “drive-by-
wire” cars, AI algorithms for self- and context-awareness
for the road safety and system security.
The developments within SC2 are one of the most important
prerequisites to realize the fail-operational automated driving. The
main activities in SC2 include collection of requirements as well as
the design, development, and proof-of-concept demonstration of a
computing infrastructure. The main goal of these developments is to
support various control mechanisms that must remain fully/partially
functional in case of faults or impairments in the environment
perception. The future architectures will implement redundancy and
diversity within the electronic control units using highly-performant
and cost-efficient computational components. There are four sub-
objectives defined in SC2 to achieve O2:
O2.1: Fail-operational automated driving platform (SAE Level
3+) exploiting COTS components and Deterministic
Ethernet backbone network to enable safety-critical
application data exchange
O2.2: Design of architectures based on component diversification,
so that each system achieves independent functionality with
a certain level of robustness, and development of an
Automated Safety and Awareness Processing Stack
O2.3: Develop solutions for monitoring vehicle internal
communications for intrusions and for controlling vehicle
internal and external network boundaries; develop a trust
model for the reliability of internal data, sensors and system
state, as well as a driver monitoring subsystem to guarantee
safe and precise traffic movement.
O2.4: Development of an SAE Level 3+ equivalent autonomous
parking solution and prototype FUSION algorithms for
low speed autonomy based on Virtual Vehicle passenger
vehicle pre-demonstrator (described below) to be
integrated in Ford heavy duty truck demonstrator for low
speed autonomy.
The expertise of the SC2 partners fits the mentioned objectives well
and covers, for instance, next-generation hardware such as
networking and safety controllers, real-time software and
corresponding interfaces, AI approaches including heuristic search
and machine learning, built-in security mechanisms, etc. These
developments are implemented in five tasks and four demonstrators,
which correspond exactly to the sub-objectives defined above.
Fig. 3: UPB’s single Radar RFI mitigation algorithm (block diagram and preliminary results).
Fig. 4: Vehicular health monitoring solution, obtained from [1].
One of the demonstrators in this supply chain focuses on the low-speed autonomy and automated back-parking use case. This pre-demonstrator is based on a passenger vehicle and it also contributes
to the demonstrators in SC4 and SC5. The FUSION algorithms will
be developed in a co-simulation environment, which serves as a
basis to simulate the individual use-case scenarios and to verify the
designed FUSION algorithms virtually, see Fig. 5.
The environment simulations will be based on the open-source CARLA
platform, while the vehicle dynamics model will be simulated using
IPG CarMaker/TruckMaker. The ADAS control functions will
be implemented in Matlab/Python, whereas the sensor models will be
modelled in Python/CARLA.
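To illustrate how such a co-simulation could be wired together, the following Python sketch couples placeholder adapters for the environment simulation, the vehicle dynamics, and an ADAS control function in a fixed-step loop; the class and function names are hypothetical and do not reflect the real CARLA or CarMaker/TruckMaker interfaces.

```python
# Hypothetical co-simulation loop: the adapter classes stand in for the CARLA
# sensor models, the CarMaker/TruckMaker vehicle dynamics and a Python ADAS
# controller; none of them reflect the project's actual interfaces.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    lidar_points: list
    radar_targets: list

class EnvironmentSim:          # stands in for the CARLA sensor models
    def step(self, vehicle_state) -> SensorFrame:
        return SensorFrame(lidar_points=[], radar_targets=[])

class VehicleDynamicsSim:      # stands in for CarMaker/TruckMaker
    def __init__(self):
        self.state = {"x": 0.0, "v": 0.0}
    def step(self, throttle: float, dt: float):
        self.state["v"] += throttle * dt
        self.state["x"] += self.state["v"] * dt
        return self.state

def adas_controller(frame: SensorFrame, state) -> float:
    # placeholder FUSION/control logic: hold a constant target speed
    return 0.5 if state["v"] < 8.0 else 0.0

env, dyn = EnvironmentSim(), VehicleDynamicsSim()
state = dyn.state
for _ in range(100):                       # 100 co-simulation steps of 10 ms
    frame = env.step(state)
    throttle = adas_controller(frame, state)
    state = dyn.step(throttle, dt=0.01)
print(f"final speed: {state['v']:.1f} m/s")
```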
The test vehicle to be used for the demonstration is prepared and
provided by Virtual Vehicle with a range of sensors as seen in Fig.
6. In addition to the simulation environment development, a draft
architecture for the perception components as well as the HW/SW
interfaces and the E/E architecture is being developed in the scope of
the specific use cases that are shown in Fig. 7.
The development process currently focuses on the individual
perception layer components based on the co-simulation platform.
SC3: Fail-operational E/E architecture enabling FUSION
An automated vehicle must sense its environment, plan actions and
react accordingly; the E/E system covers these three aspects. As
technology enabler, SC3 delivers a fail-operational architecture for
such an E/E system. This E/E system connects the fail-operational
and optimized sensors, components, and embedded safety
controllers from other SCs using a dependable vehicular
electrical/electronic infrastructure and communication systems. The
results will be shown in three demonstrators considering recent
safety standardization activities.
The first demonstrator is an integration platform which supports the
development of a generic fail-operational E/E system. The platform
serves as basis to enhance and adapt architectures; and supports in
partitioning functionality and control strategies. The overall aim of
this platform is to enable an efficient integration of various
FUSION technologies with a strong focus on dependability, testing
and validation.
The second demonstrator is closely linked to the previous one. It is
a framework for simulation, development and validation of novel
AD functionalities that rely on distributed and highly interconnected
control functions. The provided framework allows a seamless shift
from a pure simulation environment towards a mixed physical and
simulated environment, and finally to pure physical development of
autonomous driving functionalities. It aims to interface between
various hardware and software components in a FUSION
environment and to enable their integration.
The third demonstrator is a dynamically shaped, reliable mobile
communication. It will be demonstrated on a passenger vehicle and also
via a server. Its overall aim is to deliver an optimal connection for a
vehicle via an optimal network. A degrading quality of service will
trigger the underlying algorithm to choose a different network. The
proposed process is shown in Fig. 9.
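The following minimal sketch illustrates the idea behind this network selection process (our own simplification of the process in Fig. 9); the QoS metrics, weights, hysteresis margin, and link values are assumptions.

```python
# Minimal sketch of QoS-driven network selection; all numbers are assumptions.
def score(link: dict) -> float:
    """Higher is better: reward bandwidth, penalize latency and packet loss."""
    return link["bandwidth_mbps"] - 0.5 * link["latency_ms"] - 100 * link["loss"]

def select_network(links: dict, current: str, hysteresis: float = 5.0) -> str:
    """Switch away from the current link only if another one is clearly better."""
    best = max(links, key=lambda name: score(links[name]))
    if best != current and score(links[best]) > score(links[current]) + hysteresis:
        return best
    return current

links = {
    "lte":  {"bandwidth_mbps": 30, "latency_ms": 60, "loss": 0.01},
    "wifi": {"bandwidth_mbps": 80, "latency_ms": 15, "loss": 0.00},
}
print(select_network(links, current="lte"))   # -> "wifi"
```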
Advances in the development of automated driving pose major
challenges to standardization processes and bodies. Various long-
established standards ensure a safe and secure E/E
architecture. The ISO 26262 “Road Vehicles – Functional Safety”
(revised in 2018) must be applied when an E/E system bears the risk
of potentially harming or killing a human being due to a
malfunction of the system, its sub-systems and components, or the
interactions within the system. To ensure safe operation, measures
need to be taken to reduce the risk to an acceptable minimum.
Fig. 5: Co-simulation environment structure.
Fig. 6: Demonstrator vehicle used in the low-speed autonomy use cases.
Fig. 7: E/E and HW/SW architecture of the pre-demonstrator.
Fig. 8: SC3 Demonstrators 1 & 2: E/E integration platform for E/E system development.
Fig. 9: Process for QoS-based network selection.
ISO 26262 addresses hazardous events in a safety-related E/E
system caused by malfunctions. For a highly automated vehicle,
however, a deviation from the intended functionality can happen
without any malfunctioning behavior of the system. Such vehicles
need to handle a level of uncertainty in a real road setting in order to
achieve proper situational awareness for safe operation.
The recently released ISO PAS 21448 “Safety of the Intended
Functionality” (SOTIF) fills a gap, as it addresses automated vehicles that
show an unexpected behavior, despite properly functioning
elements in the E/E system. An approach to tackle and to reduce
such a behavior is discussed in the ISO PAS using a classification
scheme of potential scenarios. The authors distinguish two different
characteristics of a scenario. They argue that there are known and
unknown as well as safe and unsafe scenarios; a permutation leads
to four areas, shown in Fig. 10 (left). The aim of a SOTIF process
is to maximize or maintain area 1, minimize area 2 with technical
measures, and minimize area 4.
During this project, the ISO PAS 21448 will be applied for test
purposes to suitable use cases and derived scenarios. Its applicability
will be evaluated and the results will be reported to the relevant
standardization body.
SC4: FUSION and decision making
SC4 is the fourth core technology enabler supply chain that
addresses the third and fourth objectives of PRYSTINE:
Objective 3 - Optimized E/E architecture enabling FUSION-
based automated vehicles: This objective is achieved by
incorporating PRYSTINE’s fail-operational and optimized sensors,
components, embedded controllers, processing systems with
dependable vehicular electrical/electronic infrastructure. Thus
PRYSTINE’s FUSION technology will be realized and a fail-
operational electrical/electronic reference architecture (consisting
also of control strategies and actuators) will be achieved.
Objective 4 - Fail-operational systems for urban and rural
environments based on FUSION: This objective is achieved by
implementing PRYSTINE’s fail-operational sensor fusion, content
analysis, object recognition, scenario assessment, and decision
making solutions on PRYSTINE’s fail-operational embedded
control and electrical/electronic architectures. Thus, fail-operational
automated driving functions for urban and rural environments will
be achieved, which will be showcased by PRYSTINE’s use-cases.
SC4 comprises robust perception of the environment around a
vehicle through the fusion of sensed data from a multitude of
sensors (Radar, LiDAR, cameras, etc.), thus realizing PRYSTINE’s
FUSION technology. SC4’s vision is to implement fail-operational
perception based on sensor fusion and content analysis, object
recognition, scenario assessment, motion estimation, decision
making, and see-through solutions.
SC4 has close ties with SC1, SC2 and SC5, SC6, SC7. SC4 aims to
integrate sensors developed in SC1 and the fusion box developed in
SC2 with robust environment perception and fusion algorithms
developed within itself. The final demonstrator of SC4 is a FUSION
Hardware In the Loop (HIL) demonstration (see Fig. 11), namely
the demonstration of successful results of SC4 in a HIL test setup.
Subsequently, the results validated in the HIL tests will be conveyed
to output enabler supply chains (SC5, SC6, and SC7) that will
consequently demonstrate FUSION outputs in a heavy duty truck,
passenger car and shared control environment, respectively. In
short, FUSION components such as sensors (LiDAR, Radar,
camera, etc.), safety controllers, and PRYSTINE’s reference
embedded intelligence will be integrated into a demonstration setup in
the lab environment.
First, simulators and emulators will be employed in the HIL setups.
Perception and data fusion algorithms will be validated through
simulated sensor data by being executed on emulated platforms.
These simulators and emulators will further be replaced by the real
hardware as the hardware arrives for integration. Thus, the sensor
data will be recorded simultaneously with the vehicle data, then
replayed for evaluation while the target platforms (hardware) are in
the loop. These requirements explicitly address the key performance
indicators of SC4: (i) synchronized recording of the multi-sensor
data and replay capability and (ii) fusion performance evaluation
using recorded data from the vehicle. The FUSION HIL
demonstrators will be divided into (i) interfacing, (ii) functionality,
(iii) performance and (iv) reliability testing sub-sections, and will be
capable of testing the main functionality before installation on the
heavy duty truck (SC5), passenger car (SC6) or shared control
demonstrator (SC7).
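As a purely illustrative example of the first KPI, the short Python sketch below records timestamped samples from several sensors into one common log and replays them in time order; the interfaces are assumptions and not the project's actual HIL tooling.

```python
import time

class SensorRecorder:
    """Records timestamped samples from several sensors into one common log."""
    def __init__(self):
        self.log = []
    def record(self, sensor: str, sample) -> None:
        self.log.append((time.monotonic(), sensor, sample))

def replay(log, speed: float = 1.0):
    """Yield samples in time order, preserving the original inter-sample gaps."""
    ordered = sorted(log, key=lambda entry: entry[0])
    t0, start = ordered[0][0], time.monotonic()
    for stamp, sensor, sample in ordered:
        wait = (stamp - t0) / speed - (time.monotonic() - start)
        if wait > 0:
            time.sleep(wait)
        yield sensor, sample

rec = SensorRecorder()
rec.record("radar", {"targets": 3})
rec.record("lidar", {"points": 120000})
for sensor, sample in replay(rec.log, speed=2.0):    # replay at double speed
    print(sensor, sample)
```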
SC4 involves 14 partners; Ford Otosan, AVL-Turkey, Virtual
Vehicles, Unimore, Innoluce, Delft University of Technology,
Imec, Dat.Mobility, Polito, Tampere University, Mattersoft, Nokia,
Tenneco and ITI. The solutions implemented by the partners during
the first year are given below:
a. LiDAR sensor data augmentation: LiDARs support very
good range resolutions and provide good capabilities to classify
objects. By fusing Radar, LiDAR, and camera, a well-balanced
environment perception is achieved and challenging environmental
conditions are mitigated. Innoluce supports the realization of the
SC4 solutions with expertise in the fields of LiDAR system and
application engineering in order to guarantee successful sensor
integration. Towards this end, in the scope of SC4, Innoluce
develops sensor data augmentation algorithms to increase the
quality of LiDAR sensor outputs (Fig. 12). The LiDAR solution is
utilized within SC4 by the partners Ford and Tenneco for LiDAR-based
perception solutions, particularly the “back maneuvering assist” and
“suspension control” solutions, respectively. The LiDAR efforts of
Innoluce will eventually be transferred to SC5 “Heavy Duty
Vehicle demonstration” and SC6 “Passenger Car demonstration”
after successful HIL tests finalizing the SC4 studies.
Fig. 10: Known and unknown scenarios.
Fig. 11: A HIL environment with sensors and vehicle data employed.
Fig. 12: Examples of LiDAR sensor outputs: non-optimal outputs (left),
optimal and augmented outputs (right).
b. Back Maneuvering Assist: Most autonomous developments
today concentrate on long-haul highway related topics for heavy-
vehicles. The need for high-precision maneuvers, combined with
the size of heavy-vehicles, raises a high fatality risk, threatening the
accident-free mobility scenarios envisaged for Europe. Within the
scope of SC4, back maneuvering assisting solutions are developed
by Ford, AVLTR and VIF for articulated heavy duty vehicles. In
the first year, perception algorithms are developed and objects are
detected from LiDAR and stereo camera sensors by Ford. Offline
open source test data are utilized for validation of the algorithms.
Meanwhile, simulated sensor data is being generated by AVLTR
using the Unreal Engine to be utilized for further perception testing
(Fig. 13). A data fusion algorithm is developed by AVLTR for fusing
objects detected from various sources (LiDAR, stereo camera,
Radar, etc.). A path planning algorithm is developed by Ford to
extract optimized paths for back maneuvering. These
efforts will eventually be transferred to SC5 “Heavy Duty Vehicle
demonstration” after successful HIL tests finalizing the SC4 studies.
Fig. 13: Simulated parking station environment for back maneuvering assist
developed with Unreal Engine.
c. Vulnerable Road User (VRU) Detection: In the EU, 22% of
road fatalities are pedestrians, while 8% are cyclists. Within
PRYSTINE FUSION technologies, two core technologies dedicated
to protecting these groups are developed. The first one is
“vulnerable road user detection”. Towards VRU detection solutions,
IMEC develops a platform for tracking and classification of VRUs
based on raw data fusion of Radar and vision, with algorithms
offering higher robustness and fail-operational features. In the first
year, the impact of Radar interference in a highly sensorized
environment and the means to mitigate it were investigated by IMEC,
and the findings were presented and published at the RadarConf and IET
conferences. IMEC’s goal is to finally develop a reliable VRU
detection/tracking/classification based on sensor fusion at a smart
intersection (Fig. 14).
Fig. 14: Reliable VRU detection at a smart intersection.
Additionally, ITI developed deep learning based solutions for
detection and recognition of VRUs using Single Shot Detector
(SSD) and You Only Look Once (YOLO) methods, after comparing
different machine learning (ML) algorithms in terms of robustness
and precision. Similarly, a subset of four databases (Caltech,
Daimler, KITTI, BDD100k) covering different scenarios was
selected from the available datasets. An architectural specification of a computer
vision framework is developed to generalize the common computer
vision tasks. Currently, ITI is working on the development of the
framework, and the first results are obtained by using it.
In particular, the framework is used to create, train and compare
object detection ML algorithms with the above-mentioned methods.
These efforts of ITI will eventually be transferred to SC6
“Passenger Car demonstration” after successful HIL tests finalizing
the SC4 studies.
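As an illustration of this kind of detector (not ITI's actual framework), the sketch below loads a pretrained SSD from torchvision, assuming a recent torchvision version, and keeps only the COCO classes corresponding to pedestrians and cyclists; the input image path is hypothetical.

```python
# Hedged sketch of deep-learning VRU detection with a pretrained SSD from
# torchvision; class indices follow the COCO convention used by torchvision
# detection models (1 = person, 2 = bicycle).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

VRU_CLASSES = {1: "person", 2: "bicycle"}

model = torchvision.models.detection.ssd300_vgg16(
    weights=torchvision.models.detection.SSD300_VGG16_Weights.DEFAULT)
model.eval()

def detect_vrus(image_path: str, score_threshold: float = 0.5):
    """Return class, score and bounding box of pedestrians/cyclists in an image."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]
    detections = []
    for box, label, score in zip(prediction["boxes"],
                                 prediction["labels"],
                                 prediction["scores"]):
        if score >= score_threshold and int(label) in VRU_CLASSES:
            detections.append((VRU_CLASSES[int(label)], score.item(), box.tolist()))
    return detections

# print(detect_vrus("street_scene.jpg"))   # hypothetical input image
```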
d. SuperSight (CiThruS): As mentioned above, 22% of road
fatalities are pedestrians and 8% are cyclists in the EU. Towards
protecting these two groups, in addition to VRU detection, a second
solution is developed in PRYSTINE: SuperSight, also known as
CiThruS, is created in collaboration between TAU, MTS and Nokia.
The goal of the SuperSight solution is to provide (i) blind
area removal, which is especially useful to spot vulnerable road users
before they enter the natural field of view. In addition, SuperSight
provides (ii) automatic safety alerts which reduce road accidents
and improve driver proactivity, and (iii) traffic analytics which
improves driving experience by helping the driver to bypass traffic
jams, roadworks, and other unwanted events in the traffic.
The SuperSight solution utilizes 360-degree video processing with
surround cameras attached to the vehicle. In the first year, TAU
focused on designing an augmented 360-degree spherical video view
for the CiThruS framework. The addressed components of the real-
time 360-degree video processing pipeline include video capture,
stitching, projection, HEVC (high efficiency video coding) video
encoding/decoding and playback. For video capture, a component
and interface definition for a 6-camera rig and related video capture
FPGA cards is developed. Currently, the designed functionality is
limited to stationary cameras located at a single point. For video
stitching, a software architecture for merging multiple video feeds
into a 360-degree video sphere is designed by TAU. The
implementation of the frontend software and hardware components of
the real-time 360-degree video processing pipeline has started. The
implemented components include a 3D-printed 6-camera rig for six
GoPro cameras and related HDMI video capture cards on Pynq
FPGAs to enable raw video capture from the GoPros. For video
stitching, the OpenCV stitching algorithm is modified to support 6
input images and optimized for real-time operation. Additionally, to
assist CiThruS development, a simulation environment for 360-
degree traffic imaging is implemented in order to simulate different
vehicular camera settings in a virtual city. The simulator is built on
top of the Unity game engine with assets from Windridge City and
other free sources. The vehicles drive autonomously, selecting
different routes from a predefined road network, and avoid collisions
using simplified collision detection. Pedestrians walk around the
city area in defined locations and cross the streets when they have
green lights. In addition to TAU’s efforts, Nokia, in the first year,
specified the CiThruS cloud acceleration platform requirements
and developed a good understanding of the platform architecture
from both the hardware and software perspectives. Also, general
purpose cloud acceleration platform descriptions are implemented
to further convey CiThruS results to the cloud environment, making
it possible to share them with other vehicles in the traffic (Fig. 15).
Fig. 15: CiThruS (SuperSight) solution.
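For orientation, the snippet below shows what a baseline multi-camera stitch looks like with OpenCV's stock Stitcher class; the project's modified, real-time six-camera pipeline differs from this, and the file names are hypothetical.

```python
# Minimal baseline illustration of multi-camera stitching with OpenCV's stock
# Stitcher (not the project's modified real-time pipeline).
import cv2

def stitch_rig(frame_paths):
    frames = [cv2.imread(p) for p in frame_paths]
    if any(f is None for f in frames):
        raise FileNotFoundError("could not read all camera frames")
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama

# panorama = stitch_rig([f"cam{i}.png" for i in range(6)])  # hypothetical 6-camera rig
# cv2.imwrite("sphere_equirect.png", panorama)
```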
e. Control: Sensor fusion capabilities, combined with motion
prediction, enable improved AI-based controllers that can decide on
and implement the required action(s) in specific situations.
Towards this end, a trajectory planning
and control algorithm is developed by Polito based on a Model
Predictive Control (MPC) approach to work in different road
scenarios. The algorithm allows accomplishment of (i) way-point
tracking, (ii) lane center tracking, (iii) obstacle avoidance (for fixed
and moving obstacles) and (iv) constraint satisfaction (e.g., road
boundaries, speed limits, etc.). First year activities of Polito are
mainly concerned with the design of a trajectory planning and
control algorithm. The MPC trajectory planning and control
algorithm is tested in simulation using the Matlab/Simulink
simulator. Different road scenarios are considered in these tests,
such as avoidance of a fixed obstacle, avoidance of a moving
obstacle, way point tracking, lane keeping, lane center tracking,
overtaking in a motorway scenario, emergency lane change and stop
in a motorway scenario. In these preliminary simulation tests, the
MPC algorithm (in all its variants) showed a satisfactory capability
of calculating optimal trajectories and provided effective control
actions. Two screenshots from two simulations are given in Fig. 16,
showing two maneuvers accomplished in collaboration with
CRF and Unimore, using the MPC algorithm for trajectory planning
and vehicle control.
Fig. 16: Two maneuvers accomplished on the simulator using the MPC
algorithm for trajectory planning and control.
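To give a flavor of such an MPC formulation, the following toy example (our own simplification, not Polito's actual algorithm) tracks the lane center with a double-integrator lateral model and a box constraint on the input, solved with cvxpy; the model, horizon, weights, and limits are assumptions.

```python
# Toy linear MPC for lane-center tracking: quadratic cost on lateral error and
# input effort, box constraint on the lateral acceleration command.
import cvxpy as cp
import numpy as np

dt, N = 0.1, 20                           # sample time [s], prediction horizon
A = np.array([[1.0, dt], [0.0, 1.0]])     # state: [lateral error, lateral rate]
B = np.array([[0.0], [dt]])               # input: lateral acceleration command

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = np.array([1.5, 0.0])                 # start 1.5 m off the lane center

cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += 10 * cp.square(x[0, k + 1]) + cp.square(x[1, k + 1]) \
            + 0.1 * cp.square(u[0, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[0, k]) <= 3.0]          # comfort/actuator limit

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", float(u.value[0, 0]))
```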
Towards suspension control, Tenneco evaluated different sensor
technologies for road state scanning in the first year. The evaluated
sensor technologies are laser triangulation, Radar and, to a lesser
extent, ultrasonic imaging. Laser triangulation is identified as
suitable for road scanning, and similar results are expected from LiDAR
sensors as soon as they become available from the
consortium partners (Innoluce). Radar based on 77 GHz technology
is also tested for road scanning; at this frequency Radar becomes
usable for road state scanning, while shorter-wavelength Radar is still
at an early design stage. In any case, the required Radar module is a
short-range/high bandwidth type to provide the necessary height
accuracy, since the specific case of road scanning requires fast
repetition rate short-range scanning due to the self-shadowing effect
of small objects, especially negative-height objects like potholes.
During the first-year studies, the smart damper control algorithm
is implemented in MATLAB/Simulink running on dSpace
hardware, where the vehicle controller receives the weighted signals
from the physical sensors. In the case of vision sensors, each sensor
has a dedicated signal pre-processing unit to distribute the
processing burden and to preserve bandwidth. For evaluating the
performance of the system, 3D street profile data are recorded using
a high-resolution laser triangulation system. On these data, different
sensor profile simulations can be run and the sensor performance
can be evaluated prior to its practical availability. This concerns
physical sensors such as short-range Radar and LiDAR as well as
vehicle-internal readings based on, e.g., accelerometers and rattle
sensors. The simulated sensor signals are then coupled to a vehicle
model in Simulink, where different vehicle suspension models are
available. With these models, the sensor performance at nominal
conditions and under the influence of environmental disturbances is tested.
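The following small sketch illustrates the described evaluation idea, under our own assumptions: a high-resolution recorded road profile is resampled to a coarser sensor grid and perturbed with noise, so that control logic can be tested before the physical sensor exists.

```python
# Illustrative only (not Tenneco's toolchain): emulate a lower-resolution
# preview sensor from a high-resolution recorded road height profile.
import numpy as np

def simulate_sensor(profile_m, dx_m, sensor_dx_m, height_noise_m):
    """Down-sample a road height profile to a sensor grid and add noise."""
    x_ref = np.arange(len(profile_m)) * dx_m
    x_sensor = np.arange(0.0, x_ref[-1], sensor_dx_m)
    heights = np.interp(x_sensor, x_ref, profile_m)
    return x_sensor, heights + np.random.normal(0.0, height_noise_m, heights.shape)

# 1 mm grid reference profile containing a 5 cm deep pothole
dx = 0.001
profile = np.zeros(5000)
profile[2000:2300] = -0.05
x_s, z_s = simulate_sensor(profile, dx, sensor_dx_m=0.05, height_noise_m=0.003)
print(f"pothole depth seen by the simulated sensor: {z_s.min() * 100:.1f} cm")
```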
f. Traffic Management: In the transition from AD Levels 2 and 3
to Level 4, vehicles need to deal with more complex traffic
conditions, road networks, etc. A better understanding of traffic
behavior will support the application of traffic measures more
adequately, specifically in complex environments such as cities.
These new capabilities will support developments of Traffic
Management as a Service. The traffic management solution
proposed by DAT.Mobility involves fusion of streaming traffic data
from traffic controllers, floating car data (FCD) and automatic
number-plate recognition (ANPR) cameras. In the first year, a
model framework with a network and traffic model is developed
(Fig. 17) and a section of Amsterdam city is employed in the model
for short term prediction. The efforts of DAT.Mobility will
eventually be transferred to SC7 “Shared Control demonstration”
(demo#3) after successful HIL tests finalizing the SC4 studies.
Fig. 17: Framework with network and traffic model for short term
prediction.
g. Compute architecture for multi-sensor data fusion:
Within the scope of SC4, a compute architecture for multi-sensor
data fusion is developed by TUD. In the first year, a simulation
framework is set up to enable architecture and design space exploration
of the compute platform. At the same time, Radar and LiDAR
sensor data are obtained to enable the exploration. As part of this
effort, creation of a benchmark data set is investigated. A baseline
architecture is established for the exploration. Design activities will
move towards determining encoding strategies for sensor data in a
low-level fusion setup, and subsequently towards system-level
design implementation in the second year (Fig. 18).
Fig. 18: Conceptual view of TUDelft’s accelerator for perception and
sensor fusion (left); and an architectural illustration of the massively
parallel accelerator (right).
SC5: FUSION application - heavy duty electric vehicle
SC5 is the first output enabler supply chain that addresses the fourth
objective of PRYSTINE:
Objective 4 - Fail-operational systems for urban and rural
environments based on FUSION: This objective is achieved by
implementing PRYSTINE’s fail-operational sensor fusion, content
analysis, object recognition, scenario assessment, and decision
making solutions on PRYSTINE’s fail-operational embedded
control and electrical/electronic architectures. Thus, fail-operational
automated driving functions for urban and rural environments will
be achieved, which will be showcased by PRYSTINE’s use-cases.
The vision of SC5 is to integrate the results of supply chains 1-4 in
order to achieve the heavy duty vehicle demonstrator employing
FUSION, therefore SC5 has close ties with all technology enabler
supply chains. The goal is to realize the heavy duty vehicle
demonstrator employing fail-operational autonomous driving
functions based on FUSION and its data fusion from a wide range
of sensors (Radar, LiDAR, camera, ultrasound, etc.) within an
integrated platform.
The demonstrator of SC5 will be a Ford heavy duty truck (semi-trailer
truck). The heavy duty vehicle demonstration will be performed
with a Ford F-MAX, introduced by Ford Otosan in 2018. In 2018,
the Ford F-MAX became available in 29 countries. The Ford F-MAX was
unveiled at the IAA 2018 in Hannover, Germany and was chosen as
the International Truck of the Year 2019.
The Ford F-MAX has a cabin width of 2.5 meters, a 12.7-liter, 500 PS
Ecotorq engine, a 600-liter fuel capacity and a 3.6-meter axle width. The
F-MAX truck is employed as the PRYSTINE heavy duty
demonstrator and is equipped with adaptive cruise control,
automatic emergency braking and electric steering systems that
enable the vehicle to be controlled in lateral and longitudinal axes.
In the context of heavy vehicles, PRYSTINE aims to advance state-
of-the-art by realizing an ambitious autonomous heavy-vehicle
demonstrator for urban scenarios. Most autonomous developments
today concentrate on long-haul highway related topics for heavy-
vehicles. The need for high-precision maneuvers, combined with
the size of heavy-vehicles, raises a high fatality risk, threatening the
accident-free mobility scenarios in urban environments. Towards
eliminating these risks, back-maneuvering-related use cases are
defined in the scope of SC5 of PRYSTINE. Specifically, two
distinct use cases for trucks with trailers are considered: (i) docking
in a docking station and (ii) backing in a construction site. For both
use cases, it is common that the driver needs several trials to bring
the trailer in the correct position, either to dock correctly to the
dedicated slot in the docking station, or to bring the truck in position
on the construction site. While the main concern when considering
the docking station use case is the time spent to position the trailer,
the problem with construction sites is also the surrounding traffic
and other road users, such as pedestrians.
These use cases explicitly address the two key performance
indicators of SC5; (i) automated back parking of the heavy duty
truck at docking station, (ii) automated back entrance of the heavy
duty truck into the construction zone.
There are six partners in SC5 who will contribute to the heavy duty
vehicle demonstration: Ford Otosan, AVL-Turkey, Virtual
Vehicle, Innoluce, AnyWi and ITI. These contributions relate to
both of the use cases defined above and are focused on two
scenarios:
a. Back Maneuvering: This scenario describes the
contributions focused on enabling a back maneuvering assist system
for heavy duty vehicles.
Ford Otosan will take part in both use cases and will first integrate
the sensors (LiDAR, Radar, stereo camera and mono camera)
into the demo truck. Secondly, Ford Otosan will integrate its
perception and fusion algorithms (developed in the scope of SC4)
on the target platform, an NVIDIA Drive PX2, which will also
eventually be deployed to the demo truck together with an HMI device.
AVL-Turkey will contribute to both of the use cases and integrate
its data fusion algorithms developed in the scope of SC4. AVL-
Turkey will also integrate its trailer-angle detection solution which
is crucial in data fusion for mapping local truck and trailer
coordinate systems to a reference coordinate system.
Virtual Vehicle will contribute to both of the use cases and integrate
its object recognition/classification solution developed in the scope
of SC4. Virtual Vehicle will also deploy its solution for fail-
operationality that will continuously check the status of the
algorithms integrated on the demo truck. Like Ford Otosan, Virtual
Vehicle will utilize the NVIDIA Drive PX2 as target platform.
Thus, the perception outputs of Ford Otosan and Virtual Vehicle
will be combined in a single processing unit.
Innoluce will contribute to both of the use cases and integrate up to
5 LiDAR demonstrators with sensor fusion algorithms developed
within the scope of SC1 and SC4.
Fig. 20: Use case 1 in SC5: automated back parking of the heavy duty truck
at docking station.
Fig. 19: Ford Otosan’s semi-trailer heavy duty vehicle.
b. Docking Station Management: Docking station management
scenario describes the contributions focused on enabling a facility
management system for tracking the status of the docking station by
truck drivers and facility managers.
AnyWi will contribute to the use case 1 and integrate its gateway
unit developed within the scope of SC3 (see also Fig. 22).
According to AnyWi’s demonstration scenario, the demo truck will
firstly detect docking slot occupancy status and compose docking
station suggestion data, which will then be sent to the facility
(docking station) management server using the AnyWi Gateway
Unit integrated on the truck. A facility management client
application (running on a tablet with internet connection) will
receive a docking station suggestion from the facility management
server and display the docking station suggestion on the HMI to the
facility manager. Thus, the facility manager will be able to remotely
track docking station occupancy status at the same time.
ITI will contribute to the use case 1 by integrating its on board unit
(OBU) to the demo truck and deploying two road side units (RSU)
to the docking station. The road side units will detect several
environmental conditions of the station such as temperature,
humidity with its embedded sensors and send these data to the demo
truck. The on board unit will receive the environmental conditions
of the docking station and relay these data to the demo truck, which
will eventually display the environmental conditions of the docking
station to the driver through the HMI.
SC6: FUSION application - passenger vehicle
In Supply Chain 6, the FUSION technology developed in the other
Supply Chains (SC1-SC3) is integrated in a passenger vehicle
demonstrator for test and validation by means of three different
use-cases. The demonstrator will be set up in the Modena
Automotive Smart Area (MASA), a 4km2 urban area in the city of
Modena specifically designed for this purpose, which enables the
demonstration of the technology in a real urban scenario.
“Fail operational”, “sensor fusion strategies”, “vehicle-to-
infrastructure communication” and “new advanced driver
assistance features” are the main aspects that will be theoretically
analyzed and then practically implemented and tested within this
project.
In the following, the three use-case scenarios of the advanced driver
assistance features that will be implemented are described.
UC6.1 – Traffic light time-to-green. Based on the received traffic
light phase schedule, the vehicle calculates the approaching speed at
which the vehicle reaches the traffic light, according to the actual
traffic condition (vehicles ahead). Fig. 23 depicts this use-case.
The information for the development of use-case 1 comes from the
following data fusion:
• Maximum allowed speed, based on the fused information coming from the front camera (traffic sign recognition), GPS (vehicle location) and the navigation map (speed limit information).
• Traffic ahead information coming from the Radar (Active Cruise Control).
• Traffic light schedule (time to green/red), via infrastructure-to-vehicle (I2V) connectivity.
As a result of the fusion of these input data, the in-vehicle output is
the optimal vehicle speed (slow down/speed up) to reach
the green wave (when possible).
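A minimal sketch of such a speed advisory computation is given below; the formula, thresholds, and example values are our own assumptions and not the project's implementation.

```python
# Given the distance to the traffic light and the time until it turns green,
# suggest a speed that reaches the light on green without exceeding the fused
# speed limit (all values are illustrative).
def green_wave_speed(distance_m, time_to_green_s, speed_limit_mps,
                     min_speed_mps=2.0):
    """Return an advisory speed in m/s, or None if no feasible speed exists."""
    if time_to_green_s <= 0:
        return speed_limit_mps                 # light is already green
    required = distance_m / time_to_green_s    # speed that arrives at green
    if required > speed_limit_mps:
        return None                            # cannot make it: prepare to stop
    return max(required, min_speed_mps)

advice = green_wave_speed(distance_m=180, time_to_green_s=12,
                          speed_limit_mps=50 / 3.6)
print(f"advisory speed: {advice * 3.6:.0f} km/h" if advice else "slow down / stop")
```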
UC 6.2 – Vulnerable Road Users (VRU’s) and trajectory
recognition. The ego-vehicle, with the support of the infrastructure
(I2V), recognizes the type of obstacles in the surroundings of the
vehicle and predicts the trajectory of the VRU’s (pedestrians and
cyclists) in order to avoid potential collisions (see Fig. 24).
The input data to carry out this use-case is the following:
• Obstacle detection and classification by on-board cameras.
• Distance and velocity of the obstacles by on-board Radar.
• Enhanced obstacle identification/classification by on-board LiDAR.
• Information about the type, position and velocity of the occluded obstacles by means of I2V communication.
The output of this use-case is the longitudinal control of the vehicle
to avoid potential collision.
Fig. 21: Use case 2 in SC5: automated back entrance of the heavy duty
truck into the construction zone.
Fig. 22: AnyWi demonstration layout in SC5.
Fig. 23: Description of use-case 6.1.
Fig. 24: Description of use-case 6.2.
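As a rough illustration of what such a longitudinal control output could look like, the sketch below maps a simple time-to-collision estimate to an action; the thresholds and the policy are assumptions.

```python
# Illustrative time-to-collision check for the UC6.2 longitudinal control
# output (thresholds and the braking policy are assumptions).
def longitudinal_command(gap_m, closing_speed_mps,
                         warn_ttc_s=4.0, brake_ttc_s=2.0):
    """Map the gap to a detected VRU and the closing speed to a simple action."""
    if closing_speed_mps <= 0:
        return "keep speed"                    # not closing in on the VRU
    ttc = gap_m / closing_speed_mps
    if ttc < brake_ttc_s:
        return "emergency brake"
    if ttc < warn_ttc_s:
        return "decelerate"
    return "keep speed"

print(longitudinal_command(gap_m=18.0, closing_speed_mps=6.0))   # decelerate
```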
UC 6.3 – Driver Monitoring System (DMS) and emergency
lateral lane stop. In this use-case a DMS will be implemented,
using FUSION components, to detect if the driver is not capable of
controlling the vehicle. In this case, a “take over request” recalls the
driver’s attention and, in case of no driver feedback, a safe stop
maneuver is actuated (see Fig. 25).
The input information to perform this use-case is the following:
• Driver status information (drowsiness, sickness, cognitive load, etc.) based on biometric devices and a dynamic vehicle algorithm that analyzes the driving style.
• Surrounding classification and analysis for a safe stop maneuver, by Radar, cameras, LiDAR and ultrasonic sensors.
The output action in this use-case is the longitudinal/lateral control
of the vehicle to avoid potential collision and perform a safe stop
maneuver (emergency lights activation and stop in the emergency
lane, if possible).
SC7: Shared control and arbitration applications using FUSION
According to the vision of SC7, the objective is to deploy
PRYSTINE’s FUSION technologies for the development of an
intelligent co-driver able to assist the driver in manual and
automated mode for various levels of automation (2, 3, and 4). The
proposed approach for research on AI for the arbitration & sharing
controller elements will include: (i) classification of driver's (and
maybe occupants') movement inside the vehicle cockpit; (ii)
scenario assessment; (iii) risk assessment; (iv) motion prediction;
(v) control sharing and (vi) decision making.
Three distinct demonstrators are developed covering decision and
arbitration processes for transitions between human driver and
automated system, and between different automation levels all the
way up to a fully automated vehicle. Fig. 26 depicts the common
framework for the three demonstrators.
Demonstrator 7.1, Driver in the Loop - Shared control and
arbitration (simulator), focuses on the study of the interaction
between driver and automated system. It implements HiL
applications for driver monitoring, recording several variables and
using different approaches to establish driver status using FUSION.
The simulated vehicle and surrounding variables are also processed
and, through FUSION with the driver status, risk assessment is
performed. An arbitration algorithm assigns the authority and
controls the transitions between the human driver and the automated
system. These transitions can be from manual to automated or vice
versa and can be initiated either by the driver or by the system.
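The sketch below illustrates the spirit of such an arbitration rule under our own assumptions; the driver-fitness and scene-risk scores stand in for the FUSION-based driver and risk assessments, and the thresholds are arbitrary.

```python
# Simplified arbitration rule (our own illustration, not the Demonstrator 7.1
# algorithm): fused driver status and scene risk decide who holds authority.
from dataclasses import dataclass

@dataclass
class Assessment:
    driver_fit: float      # 0 = incapacitated ... 1 = fully attentive
    scene_risk: float      # 0 = benign ... 1 = highly critical

def arbitrate(current_authority: str, a: Assessment) -> str:
    """Return 'driver', 'automation' or 'safe_stop' for the next control cycle."""
    if a.driver_fit < 0.2 and a.scene_risk > 0.8:
        return "safe_stop"                      # nobody can safely take over
    if a.driver_fit < 0.4:
        return "automation"                     # system-initiated takeover
    if current_authority == "automation" and a.driver_fit > 0.7 and a.scene_risk < 0.3:
        return "driver"                         # hand control back smoothly
    return current_authority

print(arbitrate("driver", Assessment(driver_fit=0.3, scene_risk=0.5)))  # automation
```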
Use cases are defined to evaluate the arbitration process (i.e. the
decision on whether the authority should be given to the driver or
the system, and on if/when a transition should take place). Also, the
transition itself is a focus of the demonstrator. A smooth dynamic
transition, without interruption of the DDT, guaranteeing stability and
comfort, is sought. This transition process is a key factor in the
successful implementation of SAE Levels 2 to 4, where back-and-
forth authority transitions between driver and machine are required.
Demonstrator 7.2, Passenger Vehicle and Bus – Layered Control,
will put together a fully automated passenger vehicle and an
instrumented and connected bus to showcase driver-machine
interaction and vehicle cooperation in an urban-like scenario. The
automated vehicle will introduce a layered control architecture able
to automatically switch between partial and high automation levels,
namely SAE Level 2 (supervised city control), SAE Level 3 (city
chauffeur) and SAE Level 4 (safe stop), following the driving scenario
specificities and sensor health. These functionalities will be tested,
validated and demonstrated in real conditions.
With the supervised city control operation, the decision architecture
will handle longitudinal and lateral control, and will trigger
overtaking when needed, always under the driver’s supervision. In the
city chauffeur mode, the system will take full control in specific
complex areas. If a failure is detected, the driver will be requested to
intervene, and if no response is obtained, a safe stop will allow
stopping the vehicle in a safe manner.
Scene assessment is based on perceiving the outside traffic, driving
conditions and the car status, along with monitoring in-car
events (the driver’s state and behavior). To that end, multiple
heterogeneous sensors (GNSS, LiDAR, video, inputs from the CAN
bus, etc.) will be used. V2X communications will be used to extend
the field of perception of each vehicle, so that complex driving
scenarios, such as roundabouts or crossroads with occlusions and/or
bad visibility, can be properly interpreted by the artificial decision
system, thus reducing the degree of human intervention. Specific
attention will be paid in the use cases to how control is traded between the
driver and the artificial system in all driving scenarios, and more
particularly in the transitions between modes. Additionally, the
status of the driver in terms of fatigue and distraction will be
continuously monitored (i.e., using cameras installed inside the
vehicle) to determine the driver’s readiness to resume control of
the vehicle, and to adapt the HMI communication channel and
mechanisms.
Functional validation will be one of the key pillars of this
demonstrator, because its complexity and the introduction of
learning and evolving decision systems will require exploring new
approaches for fail-operational decision-making systems.
Demonstrator 7.3, Passenger Vehicle – Fully Automated
Highway/Urban Decision Making, focuses on the use of AI based
algorithms for motion prediction and decision making. The aim is to
demonstrate and validate the capability of artificial intelligence
methodologies for decision making, in terms of safety, comfort, and
possibly throughput. Sensors like LiDAR and Radar will be
combined with information on the upcoming traffic state which is
communicated (V2I) to the car. The first use case will focus on
highway driving, which is more limited in complexity than the
second use case of urban scenarios. Some aspects which can be
considered for highway driving are lane markings, exits, etc. Urban
scenarios create much more complexity due to the variety in
infrastructure (four-leg crossings, T-shaped crossings, traffic lights), but
also, for example, the variety in road users (pedestrians, different
kinds of cyclists) or other things to take into account such as parked
cars, trees and the absence of lane markings.
Fig. 25: Description of use-case 6.3.
Fig. 26: Common framework of SC7 demonstrators.
The car is equipped with a platform for sensor fusion, world
modeling and decision making. Although the car will have a driver
inside (for safety reasons), the car is considered to be an
autonomous driving car (SAE level 4) so the decision-making
process in the car does not take the presence of a driver into
account. Demonstrator 7.3 will increase the knowledge about AI for
autonomous driving with respect to decision making, and will deliver a
framework for the implementation of AI algorithms, a method to
generate and optimize AI algorithms, and a first proof of concept.
SC8: Novel, competitive and fail-operational semiconductors
This supply chain will tackle and highlight explicitly the market
impact achieved by PRYSTINE’s breakthroughs in the field of
novel competitive semiconductor systems. Therefore, this supply
chain will act as a central platform to showcase novel market
figures impacted by PRYSTINE’s technology enabling supply
chains.
SC9: FUSION's impact on vehicle and road safety
By consistently enforcing the integration and use of PRYSTINE’s
novel FUSION technologies throughout the entire project,
PRYSTINE will offer an unmatched degree of reliability in the field
of autonomous driving. The purpose of this SC will be to
communicate and to showcase (by means of proper demonstrations
and methodologies) that road and vehicle safety is impacted in a
positive way thanks to PRYSTINE’s novel technological
advancements.
SC10: End user acceptance of automated driving functions
Supply Chain 10 (SC10) is the last one, where the FUSION
technology developed and integrated by the other Supply Chains
(SC1-SC4 and SC5-7, respectively) is evaluated from an end-user’s
perspective. In particular, with reference to the main goal of
PRYSTINE project, the SC10 addresses two specific objectives: O6
“Increased user acceptance of automated driving functions” and O5
“Competitive advantage for European industry”. The first includes
the following aspects:
• Increasing user acceptance of automated driving functions
through PRYSTINE’s groundbreaking technological
advancements.
• Pivotal impact on safety
• Social impact (pivotal impact for mobility of aging
population)
The second deals with the impact of PRYSTINE at EU level and is
about:
• The increasing market share and revenue of European
companies through PRYSTINE’s groundbreaking
technological advancements (O1-O4).
• The competitive advantage in the automotive industry (e.g.,
market image of car brands).
• The legislative and regulation impact.
In this paper, we focus specifically on O6, for the evaluation of the
demonstrators from the user’s acceptance point of view. In
particular, the following results are expected:
• Methods and metrics to evaluate the demonstrators as
developed in SC5, SC6 and SC7 empirically. This should
include a dedicated test-plan (with test cases), including
potential real end-users with different degrees of
experience with ADAS currently on the market.
• Human participation in these experimental phases (in
driving simulator and/or in real cars) to validate the SC5-
7 demonstrators, according to human factors (thus
focusing on scenarios where the adaptation between
human and machine agents is fundamental for safe and
effective operations).
• Measurements of users’ expectations and trust in ADFs as
developed by PRYSTINE will be provided (using, as
baseline, the current available status of automation
features).
• Assessment of the effectiveness of adaptation, drivers’
situation awareness and trust in automation.
The focus is to prove that the ADFs implemented in the
demonstrators are well accepted by users (e.g., the co-driver
supports the human driver in a human-like way and therefore
improves effectiveness and acceptance). Trust in automation before
and after driving sessions will be measured, through both
questionnaires and implicit measures, such as observation of
drivers’ behavior during non-driving periods. Possible hints of over-
reliance on automation and misuse/abuse of automation will also
be monitored.
This SC considers three demonstrators, namely the ones
developed by the other SCs (SC5, SC6 and SC7), but evaluated from the
end-user perspective. In particular, we will primarily use the driving
simulator with the participation of naïve subjects, for safety reasons.
This involves the demonstrator 7.1 “Hardware in the Loop
Simulator for Shared control and arbitration applications using
FUSION” (see the dedicated section for more details).
CONCLUSION
The automation of vehicles has been identified as one major enabler
to master the Grand Societal Challenges 'Individual Mobility' and
'Energy Efficiency'. Highly automated driving functions (ADF) are
one major step to be taken. However, at higher SAE levels, the driver
cannot be relied upon to intervene in a timely and appropriate
manner, and consequently, the automation must be capable of
handling safety-critical situations on its own. For this, fail-
operational behavior is essential in the sense, plan, and act stages of
the automation chain. PRYSTINE's target is to realize Fail-
operational Urban Surround perceptION (FUSION), which is based
on robust Radar and LiDAR sensor fusion, and control functions in
order to enable safe automated driving in urban and rural
environments.
This work highlights the visions of PRYSTINE’s supply chains and
summarizes the preliminary results achieved during PRYSTINE’s
first year. Furthermore, an outlook towards the next years’ results is
sketched.
ACKNOWLEDGMENT
The authors would like to thank all national funding authorities and
the ECSEL Joint Undertaking, which funded the PRYSTINE
project under the grant agreement n° 783190.
REFERENCES
[1] N. Druml, G. Macher, M. Stolz, E. Armengaud, D. Watzenig, C.
Steger, T. Herndl, A. Eckel, A. Ryabokon, A. Hoess et al., “PRYSTINE -
Programmable sYSTems for INtelligence in automobilEs,” in 2018 21st
Euromicro Conference on Digital System Design (DSD), 2018.
[2] Y. Fu, A. Terechko, T. Bijlsma, P. J. Cuijners, J. Redegeld, and A. O.
Örs, “A Retargetable Fault Injection Framework for Safety Validation of
Autonomous Vehicles,” in IEEE International Conference on Software
Architecture Companion (ICSA-C), pp. 69-76, 2019.