Chapter

Enabling Technologies for Vehicle Automation


Abstract

Vehicle automation relies heavily on technologies such as sensing, wireless communications, localization, mapping, human factors, and several others. Applications planned within the USDOT's automation research roadmap depend on understanding and applying these technologies. It is therefore important to be aware of the state of these technologies and, more importantly, to stay ahead of the curve. The value of this task lies not in accurately predicting the future of these technologies for USDOT's automation program, but in minimizing surprises. A four-step process was followed to better understand advances in positioning, navigation, and timing (PNT); mapping; communications; sensing; and human factors. The first step identified needs; the second tracked high-level trends; based on these findings, the third step identified gaps. Finally, these insights were used to develop potential next steps for USDOT consideration. This paper presents a high-level overview of the research process, the findings of the study, and insights on next steps.


... Automated vehicles require reliable Positioning, Navigation, and Timing (PNT) services in order to perform their autonomy functions (1). For example, autonomous vehicles (AVs) use PNT services to localize and navigate themselves along the roadway (2). ...
Preprint
Full-text available
The Global Navigation Satellite System (GNSS) provides Positioning, Navigation, and Timing (PNT) services for autonomous vehicles (AVs) using satellites and radio communications. Due to the lack of encryption, the open access of the coarse-acquisition (C/A) codes, and the low strength of the signal, GNSS is vulnerable to spoofing attacks that compromise the navigational capability of the AV. A spoofing attack is difficult to detect because a spoofer (an attacker who performs a spoofing attack) can mimic the GNSS signal and transmit inaccurate location coordinates to an AV. In this study, we have developed a prediction-based spoofing-attack detection strategy using the long short-term memory (LSTM) model, a recurrent neural network model. The LSTM model is used to predict the distance traveled between two consecutive locations of an autonomous vehicle. To develop the LSTM prediction model, we used the publicly available real-world comma2k19 driving dataset. The training dataset contains features (i.e., acceleration, steering-wheel angle, speed, and distance traveled between two consecutive locations) extracted from the controller area network (CAN), GNSS, and inertial measurement unit (IMU) sensors of AVs. Based on the predicted distance between the vehicle's current location and its immediate future location, a threshold value is established from the positioning error of the GNSS device and the prediction error (i.e., the maximum absolute error) of the predicted distance. Our analysis revealed that the prediction-based spoofing-attack detection strategy can successfully detect attacks in real time.
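The thresholding logic in the abstract above can be sketched in a few lines. Everything numeric here is an illustrative assumption (GNSS error budget, prediction error), and a simple dead-reckoning estimate stands in for the paper's trained LSTM so the sketch stays self-contained:

```python
import math

# Hypothetical numbers for illustration: GNSS positioning error (m) and the
# model's maximum absolute prediction error (m) observed on validation data.
GNSS_POS_ERROR_M = 2.5
MAX_PREDICTION_ERROR_M = 1.5
THRESHOLD_M = GNSS_POS_ERROR_M + MAX_PREDICTION_ERROR_M

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GNSS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def predicted_distance_m(speed_mps, dt_s):
    # Stand-in for the paper's LSTM: predict distance traveled from CAN/IMU
    # features. A dead-reckoning estimate keeps this sketch self-contained.
    return speed_mps * dt_s

def is_spoofed(prev_fix, curr_fix, speed_mps, dt_s):
    """Flag the fix if the GNSS-implied travel distance disagrees with the
    predicted distance by more than the combined error threshold."""
    gnss_dist = haversine_m(*prev_fix, *curr_fix)
    return abs(gnss_dist - predicted_distance_m(speed_mps, dt_s)) > THRESHOLD_M
```

For example, a fix that jumps roughly 111 m in one second while the CAN bus reports 15 m/s exceeds the threshold and is flagged, whereas a fix consistent with dead reckoning is not.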
Conference Paper
Full-text available
Much research is being carried out on autonomous driving of vehicles across various disciplines, but very few autonomously navigating vehicles have been developed to date. This paper presents a novel technique, with a practical example, for representing navigational information under real-time constraints. Machine learning algorithms can easily be trained on data sets developed from this representation; to verify this, an artificial neural network (ANN) is trained on a represented data set. The technique is independent of the vehicle's driving capabilities and provides simple directional instructions for navigation. Using this method, an autonomous vehicle can learn to navigate between locations in a known region. Moreover, the use of an ANN makes the system fully adaptive: any changes or modifications in the trained region can easily be updated in the knowledge base.
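The representation idea above, turning a known region into (location, destination) → direction training pairs that a network could learn, can be sketched with a toy road graph. The region, node names, and direction labels are all hypothetical, and a breadth-first search stands in for the trained ANN when generating the labels:

```python
from collections import deque

# Toy road graph for a hypothetical known region:
# intersection -> {direction: neighboring intersection}.
REGION = {
    "A": {"east": "B"},
    "B": {"west": "A", "east": "C", "north": "D"},
    "C": {"west": "B"},
    "D": {"south": "B"},
}

def first_move(region, start, goal):
    """BFS shortest path; return the first directional instruction to issue."""
    prev = {start: None}                 # node -> (parent, direction taken)
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for direction, nxt in region[node].items():
            if nxt not in prev:
                prev[nxt] = (node, direction)
                queue.append(nxt)
    # Walk back from the goal to the edge leaving `start`.
    node = goal
    while prev[node][0] != start:
        node = prev[node][0]
    return prev[node][1]

def build_dataset(region):
    """Enumerate (current, goal) -> first-direction training pairs."""
    return {(s, g): first_move(region, s, g)
            for s in region for g in region if s != g}
```

Each (current, goal) pair with its direction label is one training example; an ANN trained on such pairs can then emit the next instruction for any pair in the region.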
Article
Full-text available
Compared with daytime, a larger proportion of road accidents occurs at night. The reduced visibility for drivers partially explains this situation, and it worsens when dense fog is present. In this paper, we first define a standard night-visibility index, which specifies the type of fog that an advanced driver-assistance system should recognize. We then propose a methodology to detect the presence of night fog and characterize its density in images grabbed by an in-vehicle camera. The detection method relies on the visual effects of night fog. A first approach detects fog around the vehicle via the backscattered veil created by the headlamps: a correlation index is computed between the current image and a reference image in which the fog density is known. It works when the vehicle is alone on a highway without external light sources. A second approach detects fog via the halos around light sources ahead of the vehicle; it works with oncoming traffic and public lighting. Both approaches are illustrated with actual images of fog, and their complementarity makes it possible to envision a complete night-fog detection system. If fog is detected, it is characterized by fitting the correlation indexes with an empirical model. Experimental results show the efficiency of the proposed method. The main applications of such a system include automation or adaptation of vehicle lights, contextual speed computation, and improved reliability for camera-based systems.
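The backscattered-veil approach above boils down to a correlation index between the current camera frame and a reference frame of known fog density. A minimal sketch, using flattened grayscale pixel lists and an illustrative detection threshold (the paper's actual index and calibration may differ):

```python
import math

def correlation_index(image_a, image_b):
    """Pearson correlation between two equal-size grayscale images given as
    flattened pixel lists; 1.0 means identical structure."""
    n = len(image_a)
    ma = sum(image_a) / n
    mb = sum(image_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(image_a, image_b))
    var_a = sum((a - ma) ** 2 for a in image_a)
    var_b = sum((b - mb) ** 2 for b in image_b)
    return cov / math.sqrt(var_a * var_b)

# Illustrative threshold: if the current frame correlates strongly with a
# reference frame showing a known dense-fog veil, report fog present.
FOG_THRESHOLD = 0.9

def fog_present(current, fog_reference):
    return correlation_index(current, fog_reference) >= FOG_THRESHOLD
```

A frame whose intensity structure tracks the reference veil clears the threshold; an unrelated scene produces a low or negative index and is not flagged.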
Article
While many research projects on autonomous driving and advanced driver-support systems make heavy use of highly accurate maps covering large areas, there is relatively little work on methods for automatically generating such maps. These maps require accuracy in both the number of lanes and the position of every lane, which we call lane-level maps. Here, we present a method that combines coarse, inaccurate prior maps from OpenStreetMap (OSM) with local sensor information from 3D Lidar and a positioning system. We formulate a probabilistic model of lane structure using this information and develop a number of tractable inference algorithms. These algorithms leverage the coarse structural information present in OSM and integrate it with the highly accurate local sensor measurements. The resulting maps align extremely well with manually constructed baseline maps generated for autonomous driving experiments.
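The paper's probabilistic model is richer than this, but the core idea, weighting a coarse OSM prior against an accurate local measurement by their respective uncertainties, can be illustrated with a one-dimensional Gaussian fusion. All numbers below are made up for illustration:

```python
def fuse_gaussian(prior_mean, prior_var, meas_mean, meas_var):
    """Product of two 1-D Gaussians: inverse-variance weighted fusion of a
    coarse map prior with a precise local sensor measurement."""
    denom = prior_var + meas_var
    fused_mean = (prior_mean * meas_var + meas_mean * prior_var) / denom
    fused_var = prior_var * meas_var / denom
    return fused_mean, fused_var

# Illustrative numbers: OSM suggests the lane center is at a 3.0 m lateral
# offset (variance 4.0, i.e. std dev ~2 m); the Lidar pipeline measures
# 3.5 m (variance 0.04, i.e. std dev ~0.2 m).
mean, var = fuse_gaussian(3.0, 4.0, 3.5, 0.04)
# The fused estimate sits close to the precise sensor while retaining the
# prior's structural hint, and its variance is smaller than either input's.
```

This is the one-dimensional analogue of what a probabilistic lane-map estimator does at every point along the road: the coarse prior anchors the structure, while precise local measurements dominate the final position.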
Article
For robots operating in outdoor environments, factors such as weather, time of day, rough terrain, high speeds, and hardware limitations make vision-based simultaneous localization and mapping infeasible with current techniques, owing to image blur and underexposure, especially on smaller platforms and low-cost hardware. In this paper, we present novel visual place-recognition and odometry techniques that address the challenges posed by low lighting, perceptual change, and low-cost cameras. Our primary contribution is a novel two-step algorithm that combines fast low-resolution whole-image matching with a higher-resolution patch-verification step, together with image-saliency methods that simultaneously improve performance and decrease computing time. The algorithms are demonstrated using consumer cameras mounted on a small vehicle in a mixed urban and vegetated environment and on a car traversing highway and suburban streets, at different times of day and night and in various weather conditions. The algorithms achieve reliable mapping over the course of a day, both when incrementally incorporating new visual scenes from different times of day into an existing map and when using a static map comprising visual scenes captured at only one point in time. Using the two-step place-recognition process, we demonstrate for the first time single-image, error-free place recognition at recall rates above 50% across a day-night dataset without prior training or the use of image sequences. This place-recognition performance enables topologically correct mapping across day-night cycles.
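The two-step matcher described above can be sketched as a cheap low-resolution ranking pass followed by full-resolution verification of the shortlisted candidates. This toy version uses 1-D pixel lists, sum-of-absolute-differences scoring, and invented thresholds in place of the paper's whole-image and patch comparisons:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def downsample(img, factor):
    """Crude thumbnail: average non-overlapping blocks of a 1-D pixel row."""
    return [sum(img[i:i + factor]) / factor for i in range(0, len(img), factor)]

def recognize(query, database, factor=4, top_k=2, max_full_sad=50):
    """Two-step place match: fast low-res ranking over the whole database,
    then full-resolution verification of the top candidates.
    Returns the matched place key, or None if no candidate verifies."""
    q_small = downsample(query, factor)
    ranked = sorted(database,
                    key=lambda k: sad(q_small, downsample(database[k], factor)))
    best, best_cost = None, max_full_sad
    for key in ranked[:top_k]:
        cost = sad(query, database[key])  # "patch" verification at full res
        if cost < best_cost:
            best, best_cost = key, cost
    return best

# Hypothetical two-place database of stored scenes.
PLACES = {"depot": [10] * 8, "bridge": [100] * 8}
```

The low-resolution pass keeps the expensive full-resolution comparison confined to a handful of candidates, which is what makes the approach fast enough for whole-map matching; a query resembling no stored scene fails verification and returns None.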