Conference Paper

Machine learning based motion planning approach for intelligent vehicles

References
Article
Autonomous vehicles must be able to react in a timely manner to typical and unpredictable situations in urban scenarios. In this connection, motion planning algorithms play a key role, as they are responsible for ensuring driving safety and comfort while producing human-like trajectories in a wide range of driving scenarios. Typical approaches to motion planning focus on trajectory optimization by applying computation-intensive algorithms, rather than finding a balance between optimality and computing time. However, for on-road automated driving at medium and high speeds, determinism is necessary at high sampling rates. This work presents a trajectory planning algorithm that is able to provide safe, human-like and comfortable trajectories through the cost-effective evaluation of primitives based on quintic Bézier curves. The proposed method is able to consider the kinodynamic constraints of the vehicle while reactively handling dynamic real environments in real time. The proposed motion planning strategy has been implemented on a real experimental platform and validated in different real operating environments, successfully overcoming typical urban traffic scenes where both static and dynamic objects are involved.
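As a rough illustration of the kind of primitive this planner evaluates, the sketch below samples a single quintic (degree-5) Bézier curve in Python. The control points are hypothetical and only demonstrate the evaluation step, not the paper's actual primitive generation or cost terms.

```python
# Minimal sketch of evaluating a quintic (degree-5) Bezier primitive, as used by
# interpolation-based trajectory planners. The control points below are hypothetical.
import numpy as np
from math import comb

def quintic_bezier(control_points, t):
    """Evaluate a degree-5 Bezier curve at parameter t in [0, 1].

    control_points: array-like of shape (6, 2) -- six 2D control points.
    """
    P = np.asarray(control_points, dtype=float)
    n = 5
    basis = np.array([comb(n, i) * (1.0 - t) ** (n - i) * t ** i for i in range(n + 1)])
    return basis @ P  # Bernstein-weighted sum of control points -> 2D point on the curve

# Hypothetical control points for a gentle lane-change-like primitive.
pts = [(0, 0), (2, 0), (4, 0.5), (6, 2.5), (8, 3), (10, 3)]
samples = np.array([quintic_bezier(pts, t) for t in np.linspace(0.0, 1.0, 50)])
```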
Article
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed by advances in the area of deep learning and artificial intelligence (AI). The objective of this paper is to survey the current state of the art in deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration, and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, and End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources, and computational hardware. The comparison presented in this survey helps the reader gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and assists with design choices.
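The End2End systems surveyed here map sensory input directly to steering commands; the following sketch shows what such a regressor can look like in PyTorch. The layer sizes, input resolution and pooling head are illustrative assumptions, not an architecture taken from the survey.

```python
# A minimal sketch of an End2End "camera frame -> steering angle" regressor.
# Layer sizes and the 66x200 input resolution are illustrative assumptions.
import torch
import torch.nn as nn

class End2EndSteering(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),               # make the head input-size agnostic
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, image):
        return self.head(self.features(image))          # predicted steering angle per frame

model = End2EndSteering()
dummy_frame = torch.randn(1, 3, 66, 200)                # one RGB frame (assumed resolution)
steering = model(dummy_frame)
```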
Article
Autonomous vehicles are controlled today either on the basis of sequences of decoupled perception-planning-action operations or by End2End and deep reinforcement learning (DRL) systems. Current deep learning solutions for autonomous driving are subject to several limitations (e.g., they estimate driving actions through a direct mapping of sensors to actuators, or require complex reward shaping methods). Although the cost function used for training can aggregate multiple weighted objectives, the gradient descent step is computed by the backpropagation algorithm using a single-objective loss. To address these issues, we introduce NeuroTrajectory, a multi-objective neuroevolutionary approach to local state trajectory learning for autonomous driving, where the desired state trajectory of the ego-vehicle is estimated over a finite prediction horizon by a perception-planning deep neural network. In comparison to DRL methods, which predict optimal actions for the upcoming sampling time, we estimate a sequence of optimal states that can be used for motion control. We propose an approach that uses genetic algorithms for training a population of deep neural networks, where each network individual is evaluated based on a multi-objective fitness vector, with the purpose of establishing a so-called Pareto front of optimal deep neural networks. The performance of an individual is given by a fitness vector composed of three elements, describing the vehicle's travel path, lateral velocity and longitudinal speed, respectively. The same network structure can be trained on synthetic as well as on real-world data sequences. We have benchmarked our system against a baseline Dynamic Window Approach (DWA), as well as against an End2End supervised learning method.
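The selection step of such a multi-objective neuroevolutionary loop hinges on Pareto dominance over fitness vectors. The sketch below shows a minimal version of that step, assuming all objectives are maximized and using made-up fitness values rather than anything produced by NeuroTrajectory.

```python
# Minimal Pareto-selection step: each network "individual" carries a fitness vector
# (e.g. travel path, lateral velocity, longitudinal speed), and the non-dominated set
# forms the Pareto front kept for the next generation. Fitness values are placeholders.
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if a is at least as good as b everywhere and strictly better somewhere
    (all objectives assumed to be maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population: List[dict]) -> List[dict]:
    """Keep individuals whose fitness vector is not dominated by any other individual."""
    return [p for p in population
            if not any(dominates(q["fitness"], p["fitness"]) for q in population if q is not p)]

# Hypothetical population of evaluated network individuals.
population = [
    {"id": 0, "fitness": (0.9, 0.4, 0.7)},
    {"id": 1, "fitness": (0.6, 0.8, 0.5)},
    {"id": 2, "fitness": (0.5, 0.3, 0.4)},   # dominated by individual 0
]
elite = pareto_front(population)             # -> individuals 0 and 1
```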
Article
Finding the boundaries of the drivable space is key to the development of any advanced driver assistance system with automated driving functions. A common approach found in the literature is to combine the information of digital maps with multiple on-board sensors to build a robust and accurate model of the environment from which to extract the navigable space. In this sense, the digital map is the crucial component for relating the location of the vehicle and identifying the different road features. This work presents an automatic procedure for generating driving corridors from OpenStreetMap. The proposed method expands the original map representation, replacing polylines by polynomial-based roads whose sections are defined using cubic Bézier curves. All curves are automatically adjusted from the original road description, thus generating an efficient and accurate road representation without human intervention. Finally, the driving corridors are generated as a concatenation of the modified road sections along a planned route. The proposed approach has been validated in a peri-urban environment, for which corridors were successfully generated in all cases.
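A minimal sketch of the underlying geometric step, replacing a polyline segment by a cubic Bézier section, is given below. The tangent-scaling heuristic, waypoints and headings are assumptions for illustration, not the paper's fitting procedure.

```python
# Sketch of turning a polyline road segment into a cubic Bezier section.
# Endpoints, heading tangents and the 0.33 scaling factor are assumed values.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Standard cubic Bezier point at parameter t in [0, 1]."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def fit_section(a, b, tangent_a, tangent_b, scale=0.33):
    """Build the two inner control points from the endpoints and their unit tangents."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(b - a)
    return a, a + scale * d * np.asarray(tangent_a), b - scale * d * np.asarray(tangent_b), b

# Hypothetical polyline endpoints and heading tangents taken from the map.
P0, P1, P2, P3 = fit_section((0, 0), (20, 5), (1.0, 0.0), (0.97, 0.26))
centerline = np.array([cubic_bezier(P0, P1, P2, P3, t) for t in np.linspace(0, 1, 30)])
```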
Article
Autonomous vehicles promise to improve traffic safety while at the same time increasing fuel efficiency and reducing congestion. They represent the main trend in future intelligent transportation systems. This paper concentrates on the planning problem of autonomous vehicles in traffic. We model the interaction between the autonomous vehicle and the environment as a stochastic Markov decision process (MDP) and consider the driving style of an expert driver as the target to be learned. The road geometry is taken into consideration in the MDP model in order to incorporate more diverse driving styles. The desired, expert-like driving behavior of the autonomous vehicle is obtained as follows: First, we design the reward function of the corresponding MDP and determine the optimal driving strategy for the autonomous vehicle using reinforcement learning techniques. Second, we collect a number of demonstrations from an expert driver and learn the optimal driving strategy from data using inverse reinforcement learning. The unknown reward function of the expert driver is approximated using a deep neural network (DNN). We clarify and validate the application of the maximum entropy principle (MEP) to learn the DNN reward function, and provide the necessary derivations for using the maximum entropy principle to learn a parameterized feature (reward) function. Simulation results demonstrate the desired driving behaviors of an autonomous vehicle using both the reinforcement learning and inverse reinforcement learning techniques.
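To make the inverse reinforcement learning part concrete, the sketch below performs the classic maximum entropy gradient update for a linear reward over hand-picked features. The paper approximates the reward with a deep network, so this is only a simplified stand-in, and the feature statistics are placeholder arrays.

```python
# Maximum-entropy IRL idea in miniature: push the reward parameters along the difference
# between the expert's empirical feature expectations and those induced by the current
# policy. Linear reward r(s) = theta . phi(s); the paper uses a DNN reward instead.
import numpy as np

def maxent_irl_step(theta, expert_features, policy_features, lr=0.05):
    """One gradient-ascent step on the MaxEnt IRL log-likelihood:
    gradient = E_expert[phi] - E_policy[phi]."""
    grad = expert_features.mean(axis=0) - policy_features.mean(axis=0)
    return theta + lr * grad

theta = np.zeros(4)                  # reward weights over 4 hand-picked features (assumed)
expert_phi = np.random.rand(100, 4)  # placeholder features along expert demonstrations
policy_phi = np.random.rand(100, 4)  # placeholder features along current-policy rollouts
for _ in range(50):
    theta = maxent_irl_step(theta, expert_phi, policy_phi)
```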
Article
To address the problem of model error and tracking dependence in the process of intelligent vehicle motion planning, an intelligent vehicle model-transfer trajectory planning method based on deep reinforcement learning is proposed, which is able to obtain an effective control action sequence directly. Firstly, an abstract model of the real environment is extracted. On this basis, a deep deterministic policy gradient (DDPG) algorithm and a vehicle dynamics model are adopted to jointly train a reinforcement learning model and to decide the optimal intelligent driving maneuver. Secondly, the actual scene is transferred to an equivalent virtual abstract scene using a transfer model, and the control action and trajectory sequences are calculated according to the trained deep reinforcement learning model. Thirdly, the optimal trajectory sequence is selected according to an evaluation function in the real environment. Finally, the results demonstrate that the proposed method can deal with the problem of intelligent vehicle trajectory planning for continuous input and continuous output. The model-transfer method improves the model's generalization performance. Compared with traditional trajectory planning, the proposed method outputs continuous rotation-angle control sequences, and the lateral control errors are also reduced.
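One core ingredient of DDPG is the slow "soft" update that keeps target copies of the actor and critic trailing the trained networks. The snippet below sketches that polyak-averaging step for PyTorch modules; the value of tau and the tiny stand-in actor are assumptions, not the paper's settings.

```python
# Soft (polyak) target update used by DDPG-style algorithms:
# target <- tau * online + (1 - tau) * target, parameter by parameter.
import copy
import torch

@torch.no_grad()
def soft_update(target_net, online_net, tau=0.005):
    for t_param, o_param in zip(target_net.parameters(), online_net.parameters()):
        t_param.mul_(1.0 - tau).add_(tau * o_param)

# Hypothetical usage with a tiny stand-in actor network.
actor = torch.nn.Linear(4, 2)          # placeholder for the trained actor
target_actor = copy.deepcopy(actor)    # target copy tracked by slow updates
soft_update(target_actor, actor)
```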
Article
Motion planning for on-road urban driving is usually stated as an optimization problem in a multi-dimensional space in which obtaining a globally optimal solution is highly complex. As a result, a great number of different approaches to solving this problem co-exist in the literature. However, to the best of our knowledge there is no prior work studying how to choose the best strategy in this multi-dimensional space. This work presents an in-depth analysis of interpolation curve planners based on continuous-curvature Bézier compositions. To that end, a comparison framework to benchmark different path-planning primitives for on-road urban driving is proposed, and the evaluation of different primitive configurations and optimization techniques for path planning is carried out. In addition, the results are openly published together with the corresponding analysis, based on a set of key performance indicators (KPIs) related to the aforementioned main features.
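Two examples of the kind of key performance indicators such a benchmark can compute for a sampled path, arc length and maximum curvature, are sketched below. The finite-difference curvature estimate and the test path are illustrative, not the paper's exact KPI definitions.

```python
# Simple path KPIs from sampled (x, y) points: total arc length and maximum curvature,
# the latter estimated from consecutive heading changes divided by arc length.
import numpy as np

def path_kpis(path):
    """path: (N, 2) array of sampled (x, y) points along a candidate primitive."""
    path = np.asarray(path, dtype=float)
    d = np.diff(path, axis=0)
    seg_len = np.linalg.norm(d, axis=1)
    heading = np.arctan2(d[:, 1], d[:, 0])
    dtheta = np.abs(np.diff(np.unwrap(heading)))
    curvature = dtheta / np.maximum(seg_len[1:], 1e-9)
    return {"length": seg_len.sum(), "max_curvature": curvature.max()}

# Example on a quarter-circle of radius 10 m (true curvature 0.1 1/m).
t = np.linspace(0, np.pi / 2, 200)
kpis = path_kpis(np.c_[10 * np.cos(t), 10 * np.sin(t)])
```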
Conference Paper
Research in the field of automated driving has produced promising results in recent years. Some research groups have demonstrated perception systems that are able to capture even complicated urban scenarios in great detail. What is often missing, however, are general-purpose path or trajectory planners that are not designed for one specific use case. In this paper we look at path and trajectory planning from an architectural point of view and show how model predictive frameworks can contribute to generalized path- and trajectory-generation approaches that produce safe trajectories even in cases of system failures.
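The sketch below illustrates the model-predictive pattern in its simplest form: optimize a short control sequence against a vehicle model and a cost, then apply only the first control. The kinematic bicycle model, horizon, weights and reference values are assumptions, not the architecture proposed in the paper.

```python
# Minimal receding-horizon sketch: a kinematic bicycle model, a quadratic cost on lane
# offset, speed error and control effort, and a generic optimizer over the horizon.
import numpy as np
from scipy.optimize import minimize

DT, L, HORIZON = 0.1, 2.7, 10          # step [s], wheelbase [m], steps in the horizon (assumed)
V_REF, Y_REF = 10.0, 0.0               # hold 10 m/s and the lane center y = 0 (assumed)

def rollout(state, controls):
    """Propagate [x, y, yaw, v] through the horizon with [accel, steer] per step."""
    x, y, yaw, v = state
    traj = []
    for a, delta in controls.reshape(HORIZON, 2):
        x += v * np.cos(yaw) * DT
        y += v * np.sin(yaw) * DT
        yaw += v / L * np.tan(delta) * DT
        v += a * DT
        traj.append((x, y, yaw, v))
    return np.array(traj)

def cost(controls, state):
    traj = rollout(state, controls)
    return (np.sum((traj[:, 1] - Y_REF) ** 2)            # stay on the lane center
            + np.sum((traj[:, 3] - V_REF) ** 2)          # hold the reference speed
            + 0.1 * np.sum(controls ** 2))               # penalize control effort

state0 = np.array([0.0, 1.5, 0.0, 8.0])                  # start 1.5 m off-center at 8 m/s
res = minimize(cost, np.zeros(HORIZON * 2), args=(state0,), method="SLSQP")
first_accel, first_steer = res.x[:2]                     # apply only the first control
```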
Article
Reinforcement learning is considered to be a strong AI paradigm that can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by the successful demonstrations of learning Atari games and Go by Google DeepMind, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance because it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment, including other vehicles, pedestrians and roadworks. As this is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates recurrent neural networks for information integration, enabling the car to handle partially observable scenarios. It also integrates recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in an open-source 3D car racing simulator called TORCS. Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interaction with other vehicles.
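Two standard building blocks of such a deep reinforcement learning setup, an experience-replay buffer and an epsilon-greedy exploration rule, are sketched below in plain Python. They are generic components rather than the paper's TORCS agent; its recurrent and attention modules are omitted.

```python
# Generic DRL plumbing: experience replay for decorrelated minibatches and
# epsilon-greedy exploration over a discrete set of Q-values.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.memory = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```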
Article
Currently, autonomous or self-driving vehicles are at the heart of academic and industrial research because of their multi-faceted advantages, which include improved safety, reduced congestion, lower emissions and greater mobility. Software is the key driving factor underpinning autonomy, within which planning algorithms that are responsible for mission-critical decision making hold a significant position. While transporting passengers or goods from a given origin to a given destination, motion planning methods incorporate searching for a path to follow, avoiding obstacles and generating the best trajectory that ensures safety, comfort and efficiency. A range of different planning approaches have been proposed in the literature. The purpose of this paper is to review existing approaches and then compare and contrast the different methods employed for the motion planning of autonomous on-road driving, which consists of (1) finding a path, (2) searching for the safest manoeuvre and (3) determining the most feasible trajectory. Methods developed by researchers at each of these three levels exhibit varying levels of complexity and performance accuracy. This paper presents a critical evaluation of each of these methods in terms of their advantages/disadvantages, inherent limitations, feasibility, optimality, handling of obstacles and testing operational environments.
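The three-level decomposition described in this review can be expressed as a minimal planner interface, sketched below. The class and method names are hypothetical and only illustrate the layering of path search, maneuver selection and trajectory generation.

```python
# Hypothetical interface mirroring the three planning levels: path, maneuver, trajectory.
from dataclasses import dataclass
from typing import List, Tuple

Waypoint = Tuple[float, float]

@dataclass
class Maneuver:
    name: str                      # e.g. "keep_lane", "overtake", "yield"
    target_lane: int

class MotionPlanner:
    def find_path(self, origin: Waypoint, destination: Waypoint) -> List[Waypoint]:
        """Level 1: route-level path on the road network."""
        raise NotImplementedError

    def select_maneuver(self, path: List[Waypoint], obstacles) -> Maneuver:
        """Level 2: choose the safest high-level manoeuvre for the current scene."""
        raise NotImplementedError

    def generate_trajectory(self, maneuver: Maneuver, path: List[Waypoint]):
        """Level 3: time-parameterized trajectory that is safe, comfortable and feasible."""
        raise NotImplementedError
```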
Article
In this review, we provide an overview of emerging trends and challenges in the field of intelligent and autonomous, or self-driving, vehicles. Recent advances in the fields of perception, planning, and decision-making for autonomous vehicles have led to great improvements in functional capabilities, with several prototypes already driving on our roads and streets. Yet challenges remain regarding guaranteed performance and safety under all driving circumstances. For instance, planning methods that provide safe and system-compliant performance in complex, cluttered environments while modeling the uncertain interaction with other traffic participants are required. Furthermore, new paradigms, such as interactive planning and end-to-end learning, open up questions regarding safety and reliability that need to be addressed. In this survey, we emphasize recent approaches for integrated perception and planning and for behavior-aware planning, many of which rely on machine learning. This raises the question of verification and safety, which we also touch upon. Finally, we discuss the state of the art and remaining challenges for managing fleets of autonomous vehicles.
Conference Paper
In this paper, we develop a motion planner for on-road autonomous swerve maneuvers that is capable of learning passengers' individual driving styles. It uses a hybrid planning approach that combines sampling-based graph search and vehicle model-based evaluation to obtain a smooth trajectory plan. To automate the parameter tuning process, as well as to reflect individual driving styles, we further adapt inverse reinforcement learning techniques to distill human driving patterns from maneuver demonstrations collected from different individuals. We found that the proposed swerve planner and its learning routine can approximate a good variety of maneuver demonstrations. However, due to the underlying stochastic nature of human driving, more data are needed in order to obtain a more generative swerve model.