Figure 2 - uploaded by Majid Moghadam
Hierarchical architecture of the general ADAS systems vs. end-to-end approaches


Source publication
Conference Paper
Full-text available
Tactical decision making is a critical feature for advanced driving systems, involving several challenges such as uncertainty in other drivers' behaviors and the trade-off between safety and agility. In this work, we develop a multi-modal architecture that includes environmental modeling of the ego surroundings and train a deep reinforcement lea...

Contexts in source publication

Context 1
... discussed above, in most studies in the literature, raw sensory inputs such as video frames from an on-board camera or RGB-D sensors on the vehicle have been used to train a neural network that estimates the action required to control the vehicle (see Fig. 2). Most of these studies validated their methods only in unrealistic simulations or video games. ...
Context 2
... approach is summarized in Fig. 1, and the ADAS architecture is provided in Fig. 2. Note that we use an occupancy grid as the environment model around the ego vehicle. ...
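The ego-centric occupancy grid mentioned in this context can be sketched in a few lines. The grid dimensions, cell size, and the `occupancy_grid` helper below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def occupancy_grid(ego, neighbors, lanes=3, cells=9, cell_len=10.0):
    """Build an ego-centric occupancy grid: rows = lanes, cols = longitudinal cells.

    ego:       (lane_index, longitudinal position) of the ego vehicle
    neighbors: list of (lane_index, longitudinal position) for nearby cars
    """
    grid = np.zeros((lanes, cells), dtype=np.int8)
    half = cells // 2  # ego sits in the middle column
    _, ego_s = ego
    for lane, s in neighbors:
        col = half + int(round((s - ego_s) / cell_len))
        if 0 <= lane < lanes and 0 <= col < cells:
            grid[lane, col] = 1
    return grid

# one car ahead-left, one ahead in the ego lane, one slightly behind-right
grid = occupancy_grid((1, 100.0), [(0, 110.0), (1, 120.0), (2, 95.0)])
```

A flattened version of such a grid could then serve as the state input to the decision-making agent.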
Context 3
... are planning to implement the multi-layer architecture in Fig. 2 and to tackle each layer separately using the same approach that we used in this study. ...
Context 4
... are also planning to use Unreal Engine-powered simulators such as CARLA (Dosovitskiy et al., 2017) or Microsoft AirSim (Shah et al., 2018) to generate the occupancy grid of the ego surroundings from the ground-truth information received from the simulator. This way, we may bypass the fusion layer in ADAS (Fig. 2) and study the performance of the presented hierarchical approach (Fig. 1) in more realistic and complex situations. ...

Similar publications

Preprint
Full-text available
Modeling stochastic traffic dynamics is critical to developing self-driving cars. Because it is difficult to develop first-principles models of cars driven by humans, there is great potential for data-driven approaches to developing traffic dynamical models. While there is extensive literature on this subject, previous works mainly address the...
Preprint
Full-text available
Decision-making for urban autonomous driving is challenging due to the stochastic nature of interactive traffic participants and the complexity of road structures. Although reinforcement learning (RL)-based decision-making schemes are promising for handling urban driving scenarios, they suffer from low sample efficiency and poor adaptability. In this pap...
Preprint
Full-text available
Tactical decision making is a critical feature for advanced driving systems, incorporating several challenges such as the complexity of the uncertain environment and the reliability of the autonomous system. In this work, we develop a multi-modal architecture that includes environmental modeling of the ego surroundings and train a deep reinforcement lea...
Article
Full-text available
In this letter, we introduce a deep reinforcement learning (RL) based multi-robot formation controller for the task of autonomous aerial human motion capture (MoCap). We focus on vision-based MoCap, where the objective is to estimate the trajectory of body pose and shape of a single moving person using multiple micro aerial vehicles. State-of-the-a...

Citations

... However, the A* algorithm still has some problems, such as inaccurate road-edge detection and inapplicability during vehicle turning. In reference [8], the MPC algorithm is used to solve the lane-change decision and control problem, and a decision method based on model predictive control is proposed. In this method, control of a vehicle driving on the expressway is divided into two parts, lane-change decision and lane-change control, each of which is solved by the MPC method. ...
Article
Full-text available
Intelligent decisions for autonomous lane-changing in vehicles have consistently been a focal point of research in the industry. Traditional lane-changing algorithms, which rely on predefined rules, are ill-suited for the complexities and variabilities of real-world road conditions. In this study, we propose an algorithm that leverages deep deterministic policy gradient (DDPG) reinforcement learning integrated with a long short-term memory (LSTM) trajectory prediction model, termed LSTM-DDPG. In the proposed LSTM-DDPG model, the LSTM state module transforms the observed values from the observation module into a state representation, which then serves as a direct input to the DDPG actor network. Meanwhile, the LSTM prediction module translates the historical trajectory coordinates of nearby vehicles into a word-embedding vector via a fully connected layer, thus providing predicted trajectory information for surrounding vehicles. This integrated LSTM approach considers the potential influence of nearby vehicles on the lane-changing decisions of the subject vehicle. Furthermore, our study emphasizes the safety, efficiency, and comfort of the lane-changing process. Accordingly, we designed a reward and penalty function for the LSTM-DDPG algorithm and determined the optimal network structure parameters. The algorithm was then tested on a simulation platform built with MATLAB/Simulink. Our findings indicate that the LSTM-DDPG model offers a more realistic representation of traffic scenarios involving vehicle interactions. Compared to the traditional DDPG algorithm, LSTM-DDPG achieved a 7.4% increase in average single-step rewards after normalization, underscoring its superior performance in enhancing lane-changing safety and efficiency. This research provides new ideas for advanced lane-changing decisions in autonomous vehicles.
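The data flow described in this abstract (observation history → LSTM state encoding → actor network producing bounded continuous actions) can be sketched minimally in NumPy. All shapes, initializations, and function names here are illustrative assumptions, not the authors' trained LSTM-DDPG model:

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate pre-activations are packed as [i, f, o, g]."""
    z = W @ x + U @ h + b
    H = h.size
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * H:(k + 1) * H])) for k in range(3))
    g = np.tanh(z[3 * H:])
    c = f * c + i * g            # cell state update
    h = o * np.tanh(c)           # hidden state (the "state representation")
    return h, c

def encode_history(obs_seq, H=8):
    """Fold a sequence of observations into a fixed-size state vector."""
    D = obs_seq.shape[1]
    W = rng.standard_normal((4 * H, D)) * 0.1
    U = rng.standard_normal((4 * H, H)) * 0.1
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    for x in obs_seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return h

def actor(state, n_actions=2):
    """Deterministic DDPG-style policy head: state -> actions in [-1, 1]."""
    Wa = rng.standard_normal((n_actions, state.size)) * 0.1
    return np.tanh(Wa @ state)

state = encode_history(np.ones((5, 4)))  # 5 timesteps, 4 observed features
action = actor(state)
```

In a real implementation the weights would of course be learned jointly with the DDPG critic rather than drawn at random.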
... Yang et al. [27] proposed a multi-task learning framework that predicts the steering angle and control speed simultaneously in an end-to-end manner, taking previous feedback speeds and visual recordings as inputs. Moghadam and Elkaim [17] proposed a multi-modal architecture that includes environmental modeling of the ego surroundings, trained a deep reinforcement learning (DRL) agent that yields consistent performance in stochastic highway driving scenarios, and obtained high-level sequential commands (i.e., lane changes) that are sent to lower-level controllers. ...
Chapter
The majority of road accidents occur because of human errors, including distraction, recklessness, and drunken driving. One of the effective ways to overcome this dangerous situation is by implementing self-driving technologies in vehicles. In this paper, we focus on building an efficient deep-learning model for self-driving cars. We propose a new and simple CNN model called ‘LaksNet’ consisting of four convolutional layers and two fully connected layers. We conducted extensive experiments using our LaksNet model with the training data generated from the Udacity simulator. Our model outperforms many existing pre-trained ImageNet and NVIDIA models in terms of the duration of the car for which it drives without going off the track on the simulator.
... Due to their advantages, researchers have also started to apply DRL theory to solve various autonomous driving tasks [16]. For example, in ref. [37] the authors propose an RL-based strategy to train an agent to learn automated lane-change behavior in order to improve collision avoidance in unforeseen scenarios, while in ref. [38] a hierarchical architecture is exploited to learn a sequential collision-free decision strategy for AVs. Again, a hierarchical architecture is suggested in ref. [17], where the decision-maker implements a kernel-based least-squares policy iteration algorithm, while the lower layer is designed via a dual heuristic programming algorithm to address the motion-planning problem. ...
Article
Full-text available
Autonomous vehicles in highway driving scenarios are expected to become a reality in the next few years. Decision-making and motion planning algorithms, which allow autonomous vehicles to predict and tackle unpredictable road traffic situations, play a crucial role. Indeed, finding the optimal driving decision in all the different driving scenarios is a challenging task due to the large and complex variability of highway traffic scenarios. In this context, the aim of this work is to design an effective hybrid two-layer path planning architecture that, by exploiting the powerful tools offered by the emerging Deep Reinforcement Learning (DRL) in combination with model-based approaches, lets the autonomous vehicles properly behave in different highway traffic conditions and, accordingly, to determine the lateral and longitudinal control commands. Specifically, the DRL-based high-level planner is responsible for training the vehicle to choose tactical behaviors according to the surrounding environment, while the low-level control converts these choices into the lateral and longitudinal vehicle control actions to be imposed through an optimization problem based on Nonlinear Model Predictive Control (NMPC) approach, thus enforcing continuous constraints. The effectiveness of the proposed hierarchical architecture is hence evaluated via an integrated vehicular platform that combines the MATLAB environment with the SUMO (Simulation of Urban MObility) traffic simulator. The exhaustive simulation analysis, carried out on different non-trivial highway traffic scenarios, confirms the capability of the proposed strategy in driving the autonomous vehicles in different traffic scenarios.
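The two-layer architecture this abstract describes (a DRL tactical planner whose discrete choice is converted into lateral/longitudinal references for a model-based tracker) can be caricatured in a few lines. The rule-based `high_level_policy` below merely stands in for a trained DRL policy, and the lane geometry and function names are assumptions for illustration, not the paper's NMPC design:

```python
KEEP, LEFT, RIGHT = 0, 1, 2

def high_level_policy(ahead_occupied, left_free, right_free):
    """Stand-in for the trained DRL planner: pick a tactical maneuver."""
    if ahead_occupied and left_free:
        return LEFT
    if ahead_occupied and right_free:
        return RIGHT
    return KEEP

def low_level_reference(maneuver, lane_width=3.5):
    """Map the tactical choice to a lateral offset reference that a
    model-based tracker (e.g. an NMPC controller) would then follow
    subject to continuous vehicle-dynamics constraints."""
    return {KEEP: 0.0, LEFT: +lane_width, RIGHT: -lane_width}[maneuver]

maneuver = high_level_policy(ahead_occupied=True, left_free=True, right_free=False)
lateral_ref = low_level_reference(maneuver)
```

The split keeps the learned component small (a discrete choice) while the constrained optimization layer guarantees dynamically feasible commands.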
... Planning and decision-making can occur at three of Michon's task hierarchies: strategic, tactical, and operational. Generally, planning and decision-making can be done using three approaches: sequential, end-to-end, and behavior-aware decision-making [212], [213], [214]. Sequential decisionmaking is the case where the agent makes a final decision after successive road observations. ...
Article
Full-text available
Driver behavior models have been used as input to self-coaching, accident prevention studies, and the development of driver-assistance systems. In recent years, driver behavior recognition has revolutionized autonomous vehicles (AVs) and traffic management studies. This comprehensive survey provides an up-to-date review of the different driver behavior models and modeling approaches. In heterogeneous streets where humans and autonomous vehicles operate simultaneously, predicting the intent and actions of human drivers is crucial for AVs, with the help of wireless communication and artificial intelligence (AI) technologies. Therefore, the review also summarizes the applications of driver behavior modeling (DBM) for effective behavior recognition and human-like AV driving. Moreover, the review covers the application of DBM in capturing the behaviors of complex dynamic driving tasks. In this review, we solely cover car-following (CF) and lane-changing (LC) maneuvers.
... Moreover, Moghadam et al. incorporated Deep Reinforcement Learning (DRL) into a hierarchical architecture. The tactical choice was then translated into low-level actions that controllers could use [50]. Behzadan and Munir provided a benchmark for evaluating the performance of autonomous vehicles in CA scenarios using an adversarial agent trained to drive an AV into safety hazards [51]. ...
Article
Human drivers are requested by the Automated Vehicle (AV) to perform takeover actions if needed. Existing research mainly focuses on predicting the takeover quality due to distraction using wearable sensor data. It is unrealistic, unnatural, and inapplicable to require human drivers to wear these sensors when driving an AV so that their situational awareness for takeover actions can be continuously monitored. Moreover, traffic conflicts can be observed even if drivers take over as requested. Current practice mainly develops conflict-actuated collision avoidance systems that alert drivers once the traffic conflict reaches a certain threshold. There is a research need to anticipate conflicts other than by measuring them. Besides, drivers are still responsible for responding to the alerts, which leaves the possibility of resulting in human error-related safety issues. This research aims at developing a Non-intrusive, Ultra-advanced Collision Avoidance System (NIUCAS) under automated driving. NIUCAS applies the brake pedal for drivers if it predicts the absence of takeover actions due to distraction or predicts traffic conflicts before they can be measured. The NIUCAS prototype was implemented in a driving simulator. An experiment was conducted by recruiting sixty participants to drive a vehicle under Level 3 automation, going through jaywalking scenarios, and being requested to take over. Participants’ demographics were collected to predict the takeover actions, while vehicle-related performance was collected to predict the traffic conflicts. Three machine learning-based modeling techniques were chosen as candidates for predictions. Additionally, an empirical equation is formulated to quantify the safety benefits of implementing NIUCAS.
... In [6][7][8], an end-to-end decision-making approach based on a deep neural network (DNN) was used to perceive input images from the environment directly and output decision-making behaviours to a control actuator. Sallab et al. [9,10] used a deep reinforcement learning (DRL) approach with a DNN representation policy, learned through interactions with the environment rather than from a sample set, applied to driving scenarios. ...
Article
Full-text available
To improve the application range of decision-making systems for connected automated vehicles, this paper proposes a cooperative decision-making approach for multiple driving scenarios based on the combination of multi-agent reinforcement learning with centralized planning. Specifically, the authors derived driving tasks from driving scenarios and computed the policy functions for different driving scenarios as linear combinations of policy functions for a set of specific driving tasks. Then, the authors classified vehicle coalitions according to the relationships between vehicles and used centralized planning methods to determine the optimal combination of actions for each coalition. Finally, the authors conducted tests in two driving scenarios considering different traffic densities to evaluate the performance of the developed approach. Simulation results demonstrate that the proposed approach exhibits good robustness in multiple driving scenarios while enabling cooperative decision making for connected automated vehicles, thereby ensuring safe and rational decision making.
... The lower-level planner implements the strategy determined by the upper-level planner using precise dynamics. Similarly, Moghadam and Elkaim [23] study hierarchical reasoning for decision making in highway driving. They construct a high-level planner using a trained reinforcement-learning policy to determine lane-changing plans that safely pass other drivers. ...
Preprint
Full-text available
We study the problem of autonomous racing amongst teams composed of cooperative agents subject to realistic safety and fairness rules. We develop a hierarchical controller to solve this problem consisting of two levels, extending prior work where bi-level hierarchical control is applied to head-to-head autonomous racing. A high-level planner constructs a discrete game that encodes the complex rules with simplified dynamics to produce a sequence of target waypoints. The low-level controller uses the resulting waypoints as a reference trajectory and computes high-resolution control inputs by solving a simplified racing game with a reduced set of rules. We consider two approaches for the low-level planner: training a multi-agent reinforcement learning (MARL) policy and solving a linear-quadratic Nash game (LQNG) approximation. We test our controllers against three baselines on a simple oval track and a complex track: an end-to-end MARL controller, a MARL controller tracking a fixed racing line, and an LQNG controller tracking a fixed racing line. Quantitative results show that our hierarchical methods outperform their respective baseline methods in terms of race wins, overall team performance, and abiding by the rules. Qualitatively, we observe the hierarchical controllers mimicking actions performed by expert human drivers such as coordinated overtaking moves, defending against multiple opponents, and long-term planning for delayed advantages. We show that hierarchical planning for game-theoretic reasoning produces both cooperative and competitive behavior even when challenged with complex rules and constraints.
... The existing methods for driving decision making can be mainly divided into three categories: motion planning based methods (Tu et al., 2019; Tahir et al., 2020; Lee and Kum, 2019; Wang et al., 2019), risk assessment based methods (Noh, 2019; Kim and Kum, 2018; Yu et al., 2018; Shin et al., 2019), and learning based methods, including both supervised learning (Codevilla et al., 2018; Xu et al., 2017) and reinforcement learning (Shi et al., 2019; Long et al., 2018; Moghadam and Elkaim, 2019). ...
... Following the major breakthroughs of deep reinforcement learning (DRL) in recent years (Mnih et al., 2015; Hasselt et al., 2015; Schaul et al., 2016; Duan et al., 2021), researchers have started to apply DRL to driving decision-making problems in autonomous driving (Shin et al., 2019; Long et al., 2018; Ye et al., 2019; Zhu et al., 2020). DRL-based methods can greatly decrease the heavy reliance on large amounts of data because they do not need labeled driving data for training (Zhu et al., 2018; Moghadam and Elkaim, 2019; Hoel et al., 2020). Instead, they learn and enhance their driving knowledge and skills via trial and error, which means that DRL-based methods can be used in crash or near-crash scenarios to help AVs avoid crashes (Kiran et al., 2021). ...
... Long et al. (2018) proposed a DRL-based system-level scheme for multi-agents to plan their own collision-free actions without observing other agents' states and intents. Moghadam and Elkaim (2019) introduced DRL into a hierarchical architecture to make sequential tactical decisions (e.g., lane changes) for AVs to avoid collisions; the tactical decision was then converted to low-level actions for vehicle control. Unlike supervised learning methods, DRL-based methods can compensate for the high cost of data collection in dangerous scenarios by training models in virtual simulation environments with affordable trial and error. ...
Article
Driving safety is the most important element to consider for autonomous vehicles (AVs). To ensure driving safety, we proposed a lane-change decision-making framework based on deep reinforcement learning that finds a risk-aware driving strategy with the minimum expected risk for autonomous driving. First, a probabilistic-model-based risk assessment method was proposed to assess driving risk using position uncertainty and distance-based safety metrics. Then, a risk-aware decision-making algorithm was proposed to find a strategy with the minimum expected risk using deep reinforcement learning. Finally, our proposed methods were evaluated in CARLA in two scenarios (one with static obstacles and one with dynamically moving vehicles). The results show that our proposed methods can generate robust, safe driving strategies and achieve better driving performance than previous methods.
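The "minimum expected risk" criterion in this abstract reduces to a simple expectation over outcome distributions per action. The sketch below illustrates that selection rule only; the action names and risk numbers are made-up assumptions, not values from the paper:

```python
def expected_risk(outcomes):
    """outcomes: list of (probability, risk) pairs for one candidate action."""
    return sum(p * r for p, r in outcomes)

def min_risk_action(risk_table):
    """Pick the action whose outcome distribution has minimum expected risk."""
    return min(risk_table, key=lambda a: expected_risk(risk_table[a]))

# hypothetical assessed risks for two lane-change options
risk_table = {
    "keep_lane":   [(0.9, 0.1), (0.1, 1.0)],  # small chance of a high-risk event
    "change_lane": [(1.0, 0.05)],
}
best = min_risk_action(risk_table)
```

In the paper this expectation is learned implicitly by the RL agent's value function rather than enumerated from a table.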
... Benefiting from the rapid development of environment perception, path planning, and motion control technologies, various CA applications have been applied in advanced driver assistance systems (ADASs) or AVs to help improve driving safety. The current CA methods can be generally divided into three categories including: (1) motion planning based methods (Tahir et al., 2020;Lee and Kum, 2019), (2) learning based methods (both supervised learning (Codevilla et al., 2018;Xu et al., 2017) and reinforcement learning (Long et al., 2018;Moghadam and Elkaim, 2019)), and (3) risk assessment based methods (Noh, 2019;Noh and An, 2018;Yu et al., 2019;Shin et al., 2019;Tang et al., 2018). ...
... After the breakthrough of deep reinforcement learning (DRL) methods in decision-making studies in the recent years (Mnih et al., 2015;Wang et al., 2018;Shi et al., 2019;Long et al., 2018;Moghadam and Elkaim, 2019), researchers started to apply DRL to address CA problems. Long et al. (2018) proposed a DRL based system-level scheme for multi-agents to plan their own collision-free actions without observing other agents' states and intents. ...
... Long et al. (2018) proposed a DRL-based system-level scheme for multi-agents to plan their own collision-free actions without observing other agents' states and intents. Moghadam and Elkaim (2019) introduced DRL into a hierarchical architecture to make sequential tactical decisions (e.g., lane changes) for AVs to avoid collisions, and the tactical decisions were then converted to low-level actions that could be applied in controllers. Behzadan and Munir (2018) offered a benchmark for assessing the performance of AVs in CA scenarios using an adversarial agent trained to drive AVs into unsafe states. ...
Article
In this paper, we proposed a new risk-assessment-based decision-making algorithm to guarantee collision avoidance in multiple scenarios for autonomous vehicles. A probabilistic-model-based situation assessment module using a conditional random field was proposed to assess the risk level of surrounding traffic participants. Based on the assessed risk from the situation assessment module, a collision avoidance strategy with driving-style preferences (e.g., aggressive or conservative) was proposed to meet the demands of different drivers or passengers. Finally, we conducted experiments in CARLA (Car Learning to Act) to evaluate our collision avoidance decision-making algorithm in different scenarios. The results show that our method was sufficiently reliable for autonomous vehicles to avoid collisions in multiple scenarios with different driving-style preferences. Our method, with adjustable driving-style preferences to meet the demands of different consumers, would improve drivers' acceptance of autonomous vehicles.
... Recent advances in Q-learning [13], [14], [15], [16], a model-free off-policy RL algorithm, together with the discrete nature of the action space and its promising performance in planning [17] and the control of autonomous systems [18], [19], motivated us to apply the deep version [20] of this algorithm to our problem. ...
Conference Paper
Full-text available
Autonomous lane changing is a critical feature for advanced autonomous driving systems that involves several challenges, such as uncertainty in other drivers' behaviors and the trade-off between safety and agility. In this work, we develop a novel simulation environment that emulates these challenges and train a deep reinforcement learning agent that yields consistent performance in a variety of dynamic and uncertain traffic scenarios. Results show that the proposed data-driven approach performs significantly better in noisy environments compared to methods that rely solely on heuristics.
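The model-free, off-policy Q-learning update at the core of the deep variant cited above can be shown in tabular form. The state and action labels below are hypothetical lane-change placeholders, not the authors' state space:

```python
ACTIONS = ("keep", "left", "right")

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Off-policy TD update: bootstrap from the greedy value of the next
    state, regardless of which action the behavior policy actually took.
    A deep Q-network replaces the dict Q with a neural approximator."""
    best_next = max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (r + gamma * best_next - q)
    return Q

Q = {}
q_update(Q, "blocked", "left", 1.0, "free")   # rewarded lane change
q_update(Q, "free", "keep", 0.0, "blocked")   # bootstraps from the entry above
```

Because the max over next actions is independent of the exploration policy, experience collected under any behavior (e.g. epsilon-greedy) can be replayed to train the greedy target policy.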