Typical autonomous vehicle system.

Source publication
Article
Full-text available
This paper presents a systematic review of the perception systems and simulators for autonomous vehicles (AV). This work has been divided into three parts. In the first part, perception systems are categorized as environment perception systems and positioning estimation systems. The paper presents the physical fundamentals, principles of functioning, a...

Contexts in source publication

Context 1
... As is explained in [8], the first thing that must be taken into account is the type of characteristics or systems that can be tested through simulation (see Figure 1). In this work, we focus on the perception of autonomous vehicles, as it offers greater autonomy and complexity, emphasizing its subsystems [9]: environmental perception and localization. ...

Similar publications

Conference Paper
Full-text available
Semantic segmentation is a class of methods that classify each pixel in an image into semantic classes, which is critical for autonomous vehicles and surgery systems. Cross-entropy (CE) loss-based deep neural networks (DNNs) have achieved great success w.r.t. accuracy-based metrics, e.g., mean Intersection-over-Union. However, the CE loss has a limitat...
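The mean Intersection-over-Union metric mentioned in this abstract is straightforward to compute from label maps. As a minimal illustrative sketch (not code from the paper), assuming flattened integer class labels:

```python
def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union, averaged over classes present in either map."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # skip classes absent from both prediction and target
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

The limitation the abstract points at is visible here: mIoU is computed per class over whole label maps, so a per-pixel CE loss does not directly optimize it.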
Preprint
Full-text available
Generative adversarial imitation learning (GAIL) has shown promising results by taking advantage of generative adversarial nets, especially in the field of robot learning. However, the requirement of isolated single-modal demonstrations limits the scalability of the approach to real-world scenarios such as autonomous vehicles' demand for a proper u...
Article
Full-text available
This paper investigates the environmental trade-offs resulting from the adoption of autonomous vehicles (AVs) as a function of modal shifts and use phase. An empirical approach is taken to formulate a mode choice model informed by a stated preference (SP) survey conducted in Madison, Wisconsin. A life cycle analysis based on a well-to-wheel model i...
Article
Full-text available
The long-range and short-range navigation systems of an autonomous vehicle are among the most important components to develop, because they are highly complex and directly related to driving safety. The purpose of this research is to develop a short-range navigation system for autonomous vehicles focused on detecting and avoiding humans, which implemen...
Preprint
Full-text available
In this paper, a driver's intention prediction near a road intersection is proposed. Our approach uses a deep bidirectional Long Short-Term Memory (LSTM) with an attention mechanism model based on a hybrid-state system (HSS) framework. As intersections are considered one of the major sources of road accidents, predicting a driver's intention...

Citations

... A GNSS satellite emits a signal, and a receiver decodes this signal to extract the receiver's position, time, and speed. The Global Positioning System (GPS) is the most common GNSS, freely available in any part of the world [111]. In addition to GPS, other commonly available GNSSs are GLONASS, GALILEO, and BEIDOU. ...
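The position fix described in this excerpt rests on ranging: the receiver converts signal travel times into distances to several satellites and intersects them. A toy 2-D trilateration sketch (illustrative only; real receivers solve the 3-D problem with a fourth unknown, the receiver clock bias):

```python
def trilaterate(sats, ranges):
    """2-D position from three known anchor positions and measured ranges.

    Subtracting the first range equation from the other two linearizes the
    problem into a 2x2 system, solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = sats
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - a21 * b1) / det)
```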
Preprint
Full-text available
Recently, the advanced driver assistance system (ADAS) of autonomous vehicles (AVs) has offered substantial benefits to drivers. Improvement of passenger safety is one of the key factors for evolving AVs. The automated system provided by the ADAS is a salient feature for passenger safety in modern vehicles. With an increasing number of electronic control units and a combination of multiple sensors, there is now sufficient computing capability in the car to support ADAS deployment. An ADAS is composed of various sensors: radio detection and ranging (RADAR), cameras, ultrasonic sensors, and LiDAR. However, the continual use of multiple sensors and actuators of the ADAS can lead to the failure of AV sensors. Thus, the prognostic health management (PHM) of the ADAS is important for the smooth and continuous operation of AVs. The PHM of AVs has been introduced only recently and is still progressing. There is a lack of surveys in the literature related to sensor-based PHM of AVs. Therefore, the objective of the current study was to review sensor-based PHM, emphasizing different fault detection and isolation (FDI) techniques along with the challenges and gaps existing in this field.
... AVs are capable of sensing their environment and navigating by employing different sensors and technologies without human input [43]. These advanced vehicles are expected to revolutionize the traffic environment by reducing current externalities, especially accidents and congestion. ...
Article
Full-text available
Autonomous Vehicles (AVs), with their immaculate sensing and navigating capabilities, are expected to revolutionize urban mobility. Despite the expected benefits, this emerging technology has certain implications pertaining to its deployment in mixed traffic streams, owing to driving logics different from those of Human-driven Vehicles (HVs). Many researchers have been working to devise a sustainable urban transport system by considering the operational and safety aspects of mixed traffic during the transition phase. However, limited scholarly attention has been devoted to mapping an overview of this research area. This paper attempts to map the state of the art of scientific production on autonomous vehicles in mixed traffic conditions, using a bibliometric analysis of 374 documents extracted from the Scopus database from 1999 to 2021. The VOSviewer 1.1.18 and Biblioshiny 3.1 software were used to demonstrate the progress status of the publications concerned. The analysis revealed that the number of publications has continuously increased during the last five years. The text analysis showed that the author keywords “autonomous vehicles” and “mixed traffic” dominated the other author keywords because of their frequent occurrence. From the thematic analysis, three research stages associated with AVs were identified: pre-development (1999–2017), development (2017–2020) and deployment (2021). The study highlighted potential research areas, such as the involvement of autonomous vehicles in transportation planning, interaction between autonomous vehicles and human-driven vehicles, traffic and energy efficiencies associated with automated driving, penetration rates for autonomous vehicles in mixed traffic scenarios, and safe and efficient operation of autonomous vehicles in a mixed traffic environment.
Additionally, a discussion of three key aspects was conducted, covering the impacts of AVs, their driving characteristics, and strategies for their successful deployment in the context of mixed traffic. This paper provides ample future directions for people willing to work in this area of autonomous vehicles in mixed traffic conditions. The study also revealed current trends as well as potential future hotspots in the area of autonomous vehicles in mixed traffic.
... Reference [33] uses lidar and cameras to scan the real traffic scenarios and generates the reasonable traffic flows of vehicles and pedestrians from the acquired trajectory data, which can be used for test scenario simulation. Reference [34] gives a machine learning model to implement environmental perception to test the sensing devices. One of the reasons is that the definitions of test indexes changed. ...
Article
Full-text available
Currently, there are mature test methods for specific sensing devices or processing devices in the Internet of Vehicles (IoV). However, when a system is combined with these different types of devices and algorithms for real scenarios, the existing device-level test results cannot reflect the comprehensive functional or performance requirements of the IoV applications at the system level. Therefore, novel application-oriented system-level evaluation indexes and test methods are needed. To this end, we extract the data processing functional entities into specific and quantifiable evaluation indexes by considering the IoV application functions and performance requirements. Then, we build a roadside sensing and processing test system in a real test zone to collect and process these evaluation indexes into accurate multidimensional ground-truth. According to the actual test results of multiple manufacturers’ solutions, our proposed test method is verified to effectively evaluate the performance of the system-level solutions in real IoV application scenarios. The unprecedented evaluation indexes, system-level test method, and the actual test results in this paper can provide an advanced reference for academics and industry.
... When choosing a radar, we chose to prioritize durability and precision, since it will be mounted externally and used for localization. For the GNSS/IMU sensors, two Swift Navigation Duro sensors were chosen, seen in Figure 9. Two of these sensors were chosen (1) to provide better overall accuracy using Real-Time Kinematic (RTK) corrections, since GPS location is important, and (2) to calculate the vehicle's differential position and thus the orientation of the Ego vehicle [61,62]. RTK is a carrier-based ranging technology that offers ranges (and subsequently locations) that are orders of magnitude more precise than those provided by code-based positioning. ...
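The dual-antenna heading idea in this excerpt reduces to the baseline vector between the two RTK fixes. A minimal sketch, assuming the fixes have already been projected into a local east/north frame in metres (the function name and frame convention are ours, not the cited paper's):

```python
import math

def heading_from_antennas(front, rear):
    """Heading in degrees clockwise from north of the rear-to-front
    antenna baseline, given (east, north) coordinates in metres."""
    de = front[0] - rear[0]  # east component of the baseline
    dn = front[1] - rear[1]  # north component of the baseline
    return math.degrees(math.atan2(de, dn)) % 360.0
```

Note the swapped `atan2` arguments relative to the mathematical convention: heading is measured from north, not from the x-axis.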
Article
Commercialization of autonomous vehicle technology is a major goal of the automotive industry; thus, research in this space is rapidly expanding across the world. However, despite this high level of research activity, literature detailing a straightforward and cost-effective approach to the development of an AV research platform is sparse. To address this need, we present the methodology and results regarding the AV instrumentation and controls of a 2019 Kia Niro which was developed for a local AV pilot program. This platform includes a drive-by-wire actuation kit, Aptiv electronically scanning radar, stereo camera, MobilEye computer vision system, LiDAR, inertial measurement unit, two global positioning system receivers to provide heading information, and an in-vehicle computer for driving environment perception and path planning. Robot Operating System (ROS) software is used as the system middleware between the instruments and the autonomous application algorithms. After selection, installation, and integration of these components, our results show successful utilization of all sensors, drive-by-wire functionality, a total additional power consumption of 242.8 Watts (typical), and an overall cost of $118,189 USD, which is a significant saving compared to other commercially available systems with similar functionality. This vehicle continues to serve as our primary AV research and development platform.
... Ignatious et al. present an overview of different sensor systems for autonomous vehicles [3]. In addition, Rosique et al. introduce the different measurement principles of the different perception systems, quantify them by features, and show elements that are important in a model-based development [4]. ...
Article
Perception of the environment by sensor systems in variable environmental conditions is very complex due to the interference influences. In the field of autonomous machines or autonomous vehicles, environmental conditions play a decisive role in safe person detection. A uniform test and validation method can support the manufacturers of sensor systems during development and simultaneously provide proof of functionality. The authors have developed a concept of a novel test method, "REDA", for this purpose. In this article, the concept is applied and measurement data are presented. The results show the versatile potential of this test method, through the manifold interpretation options of the measurement data. Using this method, the strengths and weaknesses of sensor systems have been identified with an unprecedented level of detail, flexibility, and variance to test and compare the detection capability of sensor systems. The comparison was possible regardless of the measuring principle of the sensor system used. Sensor systems have been tested and compared with each other with regard to the influence of environmental conditions themselves. The first results presented highlight the potential of the new test method. For future applications, the test method offers possibilities to test and compare manifold sensing principles, sensor system parameters, or evaluation algorithms, including, e.g., artificial intelligence.
... These models are often referred to as idealised or geometric models and provide object lists as output data. As examples, we mention some products that are widely used in the automotive industry: in [62], TwT GmbH, TASS-PreScan, dSpace-ASM; in [57], TESIS Dyna4-Driver Assistance, MathWorks-ADAS Toolbox; in [63], CARLA, AirSim, DeepDrive, Udacity, Constellation, Helios, GLIDAR, RADSim, SIMSonic; and in [64], CarMaker from IPG Automotive GmbH, VIRES-VTD, CARLA, and AirSim can provide GT information. ...
Article
Full-text available
Radar sensors were among the first perceptual sensors used for automated driving. Although several other technologies such as lidar, camera, and ultrasonic sensors are available, radar sensors have maintained and will continue to maintain their importance due to their reliability in adverse weather conditions. Virtual methods are being developed for verification and validation of automated driving functions to reduce the time and cost of testing. Due to the complexity of modelling high-frequency wave propagation and signal processing and perception algorithms, sensor models that seek a high degree of accuracy are challenging to simulate. Therefore, a variety of different modelling approaches have been presented in the last two decades. This paper comprehensively summarises the heterogeneous state of the art in radar sensor modelling. Instead of a technology-oriented classification as introduced in previous review articles, we present a classification of how these models can be used in vehicle development by using the V-model originating from software development. Sensor models are divided into operational, functional, technical, and individual models. The application and usability of these models along the development process are summarised in a comprehensive tabular overview, which is intended to support future research and development at the vehicle level and will be continuously updated.
... Some breakthroughs have been made in research on autonomous vehicle multi-sensor fusion methods, including image fusion, point cloud fusion, and image–point cloud fusion [3], [16]–[18]. Although a multi-sensor redundant combination design can make up for the insufficiency of a single sensor in perception, reduce the uncertainty of target detection, and enhance the vehicle's effective perception of surrounding environmental information, the test results in real scenarios are not ideal. ...
Article
Full-text available
Cooperative perception, as a critical technology of intelligent connected vehicles, aims to use wireless communication technology to interact and fuse environmental information obtained by edge nodes with local perception information, which can improve vehicle perception accuracy, reduce latency, and eliminate perception blind spots. It has become a current research hotspot. Based on the analysis of the related literature on the Internet of vehicles (IoV), this paper summarizes the multi-sensor information fusion method, information sharing strategy, and communication technology of autonomous driving cooperative perception technology in the IoV environment. Firstly, cooperative perception information fusion methods, such as image fusion, point cloud fusion, and image–point cloud fusion, are summarized and compared according to the approaches of sensor information fusion. Secondly, recent research on communication technology and the sharing strategies of cooperative perception technology is summarized and analyzed in detail. Simultaneously, combined with the practical application of V2X, the influence of network communication performance on cooperative perception is analyzed, considering factors such as latency, packet loss rate, and channel congestion, and the existing research methods are discussed. Finally, based on the summary and analysis of the above studies, future research issues on cooperative perception are proposed, and the development trend of cooperative perception technology is forecast to help researchers in this field quickly understand the research status, hotspots, and prospects of cooperative perception technology.
... identification and localization [48]–[52]. The LIDAR V3 sensor has a range of up to 40 m while providing centimeter-level accuracy (i.e., less than ±10 cm). ...
... Additionally, a possible disadvantage of ultrasonic proximity sensors could be given by the fact that some pets (e.g., dogs and cats) are able to perceive ultrasounds and might be perturbed by these sensors. Additional details concerning the integration of ultrasonic sensors in automotive applications can be found in [48], where the performance of these sensors is compared to that of other localization sensors used in commercial vehicles. Due to their narrow FOV, the ultrasonic sensors have been chosen to determine the moment/point when the pedestrian is about to leave the sidewalk and step onto the crosswalk (see Figure 3c). ...
... The LIDAR sensor has been included in the test due to its high precision and long detection range [48]–[52]. Based on these unique features, LIDAR sensors are widely used in automotive applications, being one of the most advanced solutions used for object identification and localization [48]–[52]. ...
Article
Full-text available
In urban areas, pedestrians are the category of road users most exposed to road accident fatalities. In this context, the present article proposes a totally new architecture, which aims to increase the safety of pedestrians on the crosswalk. The first component of the design is a pedestrian detection system, which identifies the user’s presence in the region of the crosswalk and determines a future street-crossing action possibility or the presence of a pedestrian engaged in street crossing. The second component of the system is the visible light communications (VLC) part, which is used to transmit this information toward the approaching vehicles. The proposed architecture has been implemented at a regular scale and experimentally evaluated in outdoor conditions. The experimental results showed a 100% overall pedestrian detection rate. On the other hand, the VLC system achieved a communication distance between 5 and 40 m when using a standard LED light crosswalk sign as a VLC emitter, while maintaining a bit error ratio between 10⁻⁷ and 10⁻⁵. These results demonstrate that VLC technology is now able to be used in real applications, making the transition from a high-potential technology to a confirmed technology. As far as we know, this is the first article presenting such a pedestrian street-crossing assistance system.
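The bit error ratio figures quoted in this abstract (on the order of 10⁻⁷ to 10⁻⁵) are simply the fraction of corrupted bits in the received stream. As a trivial sketch:

```python
def bit_error_ratio(tx, rx):
    """Fraction of mismatched bits between transmitted and received streams."""
    assert len(tx) == len(rx), "streams must be the same length"
    return sum(t != r for t, r in zip(tx, rx)) / len(tx)
```

In practice such a measurement compares a known pseudo-random test sequence against the demodulated output, over enough bits to resolve rates that small.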
... Sparse point cloud data causes instance detection to be over-segmented into multiple objects, and the over-segmented parts make it more complex to classify object classes. Therefore, adopting a multi-modal system setup [13,14] is a versatile way to compensate for each sensor's weaknesses. Even though self-driving research targeting airfields does not exist, studies [13]–[20] on self-driving vehicles have shown various fusion methods in the public road environment. ...
... Therefore, adopting a multi-modal system setup [13,14] is a versatile way to compensate for each sensor's weaknesses. Even though self-driving research targeting airfields does not exist, studies [13]–[20] on self-driving vehicles have shown various fusion methods in the public road environment. These are categorised into data-driven and model-based approaches. ...
... However, the model-based method is limited when the model does not fit the designed conditions. Therefore, adopting a multi-modal system setup [13,14] is a versatile way to compensate for each sensor's weaknesses. Even though self-driving research targeting airfields does not exist, studies [13]–[20] on self-driving vehicles have shown various types of fusion methods in the public road environment. ...
Article
Full-text available
Self‐driving baggage tractors driving on airport ramps/aprons present a new trend that promotes better airport operation procedures and expands the aviation market. Airport ramps have unique mobility requirements when it comes to layout, population, demand, and patterns. Estimating aircraft movement is crucial because of safety reasons and airport operation rules. The movement of aircraft at the airport ramp is not dynamic but relatively static and slow; however, its attributes are more complicated to estimate. Even though an aircraft is parked and stationary, all operational vehicles should be cautious of aircraft pushing in or out of the parking stand. This has rarely been addressed, since aircraft detection at the airport ramp has not been considered in previous research. In this paper, a context‐aware fusion‐based method for aircraft intention detection at airport ramps is proposed. Parallel extraction of behavioural features from the aircraft and situational context detection from other adjacent objects are suggested. Using the proposed method, aircraft can be detected and the aircraft movement context can be estimated. The feasibility of this algorithm is demonstrated on a dataset obtained at Cincinnati Airport.
... The design of reliable perception systems is a key challenge in the development of safety-critical autonomous systems [1], [2]. Modern perception systems are often required to predict state information from complex, high-dimensional inputs such as images or LiDAR data [3]–[5]. ...
Preprint
Modern autonomous systems rely on perception modules to process complex sensor measurements into state estimates. These estimates are then passed to a controller, which uses them to make safety-critical decisions. It is therefore important that we design perception systems to minimize errors that reduce the overall safety of the system. We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system. We formulate a risk function to quantify the effect of a given perceptual error on overall safety, and show how we can use it to design safer perception systems by including a risk-dependent term in the loss function and generating training data in risk-sensitive regions. We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
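The risk-dependent loss term described in this abstract can be pictured as a per-sample weighting of an ordinary regression loss, where the weight grows with the risk that a perceptual error in that state poses to the closed-loop system. The weighting scheme and names below are our illustration, not the paper's actual formulation (which also shapes the training distribution toward risk-sensitive regions):

```python
def risk_weighted_loss(errors, risks, base_weight=1.0):
    """Mean squared perception error with each sample up-weighted by an
    application-specific risk score (assumed given) quantifying how much
    an error in that state would degrade closed-loop safety."""
    assert len(errors) == len(risks)
    total = sum((base_weight + r) * e * e for e, r in zip(errors, risks))
    return total / len(errors)
```

With all risks at zero this reduces to plain mean squared error; high-risk samples dominate the gradient otherwise.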