Figure 2 - uploaded by Sergio Rodriguez Florez
10: The stereo rectification routine can be summarized in four steps. In the undistortion step, image distortions induced by the camera lens are corrected. In the rectification step, the images are "deformed" so that the image planes become coplanar and row-aligned. Then, missing pixel values are interpolated. Finally, the rectified images are cropped to ensure a good view overlap.
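The interpolation step in the caption can be sketched in plain Python: once the undistortion/rectification maps are computed, each destination pixel samples the source image at a non-integer coordinate, and bilinear interpolation fills in the value. This is a minimal stdlib sketch under an assumed toy map (a half-pixel horizontal shift), not a real rectification map; in practice these maps come from the calibrated camera model.

```python
def bilinear_sample(img, x, y):
    """Bilinearly interpolate a grayscale image (list of rows) at float (x, y)."""
    h, w = len(img), len(img[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def remap(img, map_xy):
    """Apply a rectification map: dst[r][c] = src sampled at map_xy[r][c]."""
    return [[bilinear_sample(img, x, y) for (x, y) in row] for row in map_xy]

# Toy 2x2 image and a hypothetical map shifting by half a pixel in x.
src = [[0.0, 10.0], [20.0, 30.0]]
map_xy = [[(0.5, 0.0), (0.5, 0.0)], [(0.5, 1.0), (0.5, 1.0)]]
dst = remap(src, map_xy)
# dst[0][0] == 5.0, the midpoint of the two top-row pixels
```

The same sampling routine serves both the undistortion and the row-alignment warps, which is why real pipelines fuse them into a single remapping pass.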
Source publication
Advanced Driver Assistance Systems (ADAS) can improve road safety by supporting the driver through warnings in hazardous circumstances or triggering appropriate actions when facing imminent collision situations (e.g. airbags, emergency brake systems, etc). In this context, the knowledge of the location and the speed of the surrounding mobile object...
Similar publications
In recent years, more and more modern cars have been equipped with perception capabilities. One of the key applications of such perception systems is the estimation of a risk of collision. This is necessary for both Advanced Driver Assistance Systems and Autonomous Navigation. Most approaches for risk estimation propose to detect and track the dy...
The article is devoted to the development of Advanced Driver Assistance Systems (ADAS) for the GAZelle NEXT car. This project is aimed at developing a visual information system for the driver integrated into the windshield racks. The developed system implements the following functions: assistance in maneuvering and parking; recognition of road...
SEV (small electric vehicle) is one of the future solutions for urban mobility, since these cars show a small environmental footprint due to their lightweight design for optimizing the range. Due to the low number of SEVs in real accidents it is difficult to judge the collision severity of these vehicles, especially if such vehicles are equipped wi...
This study addressed end-users' knowledge of Advanced Driver Assistance Systems (ADAS) and the role of learning in relation to use of the systems. Therefore end-users' perspective on the subject of the systems' purpose, functions, potential risks and usefulness were explored, as well as motives behind choosing to use the systems. The study used qua...
Citations
... Perspective, dioptric and catadioptric images. Image from [FLÓREZ and Stiller, 2011]. ...
Self-driving cars have the potential to provoke a mobility transformation that will impact our everyday lives. They offer a novel mobility system that could provide more road safety, efficiency and accessibility to users. In order to reach this goal, the vehicles need to autonomously perform three main tasks: perception, planning and control. When it comes to urban environments, perception becomes a challenging task that needs to be reliable for the safety of the driver and others. It is extremely important to have a good understanding of the environment and its obstacles, along with a precise localization, so that the other tasks are performed well. This thesis explores approaches ranging from classical methods to Deep Learning techniques to perform mapping and localization for autonomous vehicles in urban environments. We focus on vehicles equipped with low-cost sensors, with the goal of maintaining a reasonable price for future autonomous vehicles. Accordingly, the proposed methods use sensors such as 2D laser scanners, cameras and standard IMUs. In the first part, we introduce model-based methods using evidential occupancy grid maps. First, we present an approach to perform sensor fusion between a stereo camera and a 2D laser scanner to improve the perception of the environment. Moreover, we add an extra layer to the grid maps to assign states to the detected obstacles. This state allows an obstacle to be tracked over time and determined to be static or dynamic. Subsequently, we propose a localization system that uses this new layer along with classic image registration techniques to localize the vehicle while simultaneously creating the map of the environment. In the second part, we focus on the use of Deep Learning techniques for the localization problem. First, we introduce a learning-based algorithm to provide odometry estimation using only 2D laser scanner data.
This method shows the potential of neural networks to analyse this type of data for the estimation of the vehicle's displacement. Subsequently, we extend the previous method by fusing the 2D laser scanner with a camera in an end-to-end learning system. The addition of camera images increases the accuracy of the odometry estimation and proves that we can perform sensor fusion without any sensor modelling using neural networks. Finally, we present a new hybrid algorithm to perform the localization of a vehicle inside a previously mapped region. This algorithm combines the advantages of evidential maps in dynamic environments with the ability of neural networks to process images. The results obtained in this thesis allowed us to better understand the challenges of vehicles equipped with low-cost sensors in dynamic environments. By adapting our methods to these sensors and fusing their information, we improved the general perception of the environment along with the localization of the vehicle. Moreover, our approaches enabled a comparison of the advantages and disadvantages of learning-based techniques against model-based ones. Finally, we proposed a way of combining these two types of approaches in a hybrid system that led to a more robust solution.
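The evidential occupancy-grid fusion described in this abstract can be illustrated per cell with Dempster's combination rule over a mass function on {Free, Occupied, Unknown}. The sketch below is a generic stdlib illustration of that rule, not the thesis's actual implementation, and the sensor mass values are made-up assumptions.

```python
def dempster(m1, m2):
    """Combine two evidential masses over {F(ree), O(ccupied), U(nknown)}.

    Dempster's rule: intersect focal elements, renormalize by 1 - conflict,
    where conflict is the mass assigned to contradictory pairs (F with O).
    """
    conflict = m1["F"] * m2["O"] + m1["O"] * m2["F"]
    k = 1.0 - conflict  # normalization factor
    return {
        "F": (m1["F"] * m2["F"] + m1["F"] * m2["U"] + m1["U"] * m2["F"]) / k,
        "O": (m1["O"] * m2["O"] + m1["O"] * m2["U"] + m1["U"] * m2["O"]) / k,
        "U": (m1["U"] * m2["U"]) / k,
    }

# Hypothetical masses for one grid cell seen by two sensors.
laser = {"F": 0.6, "O": 0.1, "U": 0.3}
stereo = {"F": 0.5, "O": 0.2, "U": 0.3}
cell = dempster(laser, stereo)
# Both sensors lean "free", so the fused mass on F grows while U shrinks.
```

Keeping an explicit Unknown mass is what distinguishes this from a Bayesian occupancy grid: cells never observed stay Unknown instead of defaulting to a 0.5 prior.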
... In an autonomous navigation context, the regular task for these systems is finding the navigable area, segmenting the road and detecting obstacles [18], [19], [20]. Camera data can also be used to improve other systems, such as the lidar-based ones [21]. ...
Advanced Driver Assistance Systems (ADAS) are electronic systems, present on modern vehicles, which enhance road safety by providing assistance with the driving task. These systems are usually associated with semi-autonomous and expensive vehicles. In contrast, affordable modern smartphones, endowed with an operating system, are capable of providing high computational power combined with multiple embedded sensors and wireless connection capabilities. Hence, these devices are a convenient and practical platform to use as a smart sensor and driving assistant for intelligent vehicles. This paper presents two of the broad range of practical applications of this approach: I) a vehicle data recorder and II) a road segmentation application. Furthermore, other applications are discussed.
... It is easy to understand that a large number of LIDAR points will be accumulated on the road boundaries, producing two peaks in the histogram (see upper row in Fig. 3). However, this method is prone to fail in steering and roundabout scenarios as stated in [19] because the histogram may fade out (see lower row in Fig. 3). Here, temporal filtering is used to overcome difficulties in such situations. ...
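The excerpt's idea, that road boundaries show up as two peaks in a histogram of lateral LIDAR offsets, with temporal filtering bridging frames where the peaks fade, can be sketched with the standard library. The bin width, smoothing factor and toy scan below are illustrative assumptions, not the cited method's parameters.

```python
from collections import Counter

def lateral_histogram(points_y, bin_width=0.5):
    """Histogram of lateral offsets; curbs accumulate points into two peaks."""
    return Counter(round(y / bin_width) for y in points_y)

def two_peaks(hist):
    """Return the two most populated bins (candidate left/right boundaries)."""
    return [b for b, _ in hist.most_common(2)]

def temporal_filter(prev, cur, alpha=0.7):
    """Exponential smoothing of a boundary estimate across frames, so the
    track survives frames where the histogram peak fades (e.g. roundabouts)."""
    return alpha * prev + (1 - alpha) * cur

# Toy scan: many returns near y = -3 m and y = +3 m (curbs), a few scattered.
ys = [-3.1, -2.9, -3.0, -3.05, 3.0, 2.95, 3.1, 0.2, 1.4]
peaks = sorted(two_peaks(lateral_histogram(ys)))
# With bin_width=0.5, the peak bins sit near -3 m and +3 m.
```

When a frame yields fewer than two clear peaks, the smoothed estimate from `temporal_filter` can be carried forward instead, which is the gist of the temporal filtering mentioned above.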
Reliable road detection is a key issue for modern Intelligent Vehicles, since it can help to identify the drivable area as well as boost other perception functions like object detection. However, real environments present several challenges like illumination changes and varying weather conditions. We propose a multi-modal road detection and segmentation method based on monocular images and HD multi-layer LIDAR data (3D point cloud). This algorithm consists of three stages: extraction of ground points from multi-layer LIDAR, transformation of color camera information to an illumination-invariant representation, and lastly the segmentation of the road area. For the first module, the core function is to extract the ground points from LIDAR data. To this end a road boundary detection is performed based on histogram analysis, then a plane estimation using RANSAC, and a ground point extraction according to the point-to-plane distance. In the second module, an image representation of illumination-invariant features is computed simultaneously. Ground points are projected to the image plane and then used to compute a road probability map using a Gaussian model. The combination of these modalities improves the robustness of the whole system and reduces the overall computational time, since the first two modules can run in parallel. Quantitative experiments carried out on the public KITTI dataset enhanced by road annotations confirmed the effectiveness of the proposed method.
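The ground-extraction stage described above (RANSAC plane fit followed by a point-to-plane distance test) can be sketched in plain Python. The toy point cloud, inlier threshold and iteration count are illustrative assumptions, not the paper's actual parameters.

```python
import random

def plane_from_points(p1, p2, p3):
    """Unit-normal plane (a, b, c, d), ax+by+cz+d=0, through three 3D points."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:
        return None  # degenerate (collinear) sample
    a, b, c = (comp / norm for comp in n)
    return (a, b, c, -(a*p1[0] + b*p1[1] + c*p1[2]))

def point_plane_dist(p, plane):
    a, b, c, d = plane
    return abs(a*p[0] + b*p[1] + c*p[2] + d)

def ransac_ground(points, iters=100, thresh=0.1, seed=0):
    """RANSAC: fit planes to random 3-point samples, keep the most-supported one."""
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        inliers = [p for p in points if point_plane_dist(p, plane) < thresh]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = plane, inliers
    return best_plane, best_inliers

# Toy cloud: a 5x5 grid of ground points at z = 0 plus two obstacle points.
pts = [(float(x), float(y), 0.0) for x in range(5) for y in range(5)]
pts += [(1.0, 1.0, 1.5), (2.0, 3.0, 2.0)]
plane, ground = ransac_ground(pts)
# The inlier set recovers the 25 ground points and excludes the obstacles.
```

The same point-to-plane test that selects inliers during fitting is then reused to label the full cloud as ground versus obstacle, which is the extraction step the abstract refers to.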
... (c) and (g) are the images binarized by Canny step 2. The red horizontal lines represent the horizon finding algorithm results. For military or civilian purposes, some of these applications include the Grand Challenge and Urban Challenge (Rojo, et al., 2007), (Bakker, et al., 2008), (Subramanian, et al., 2006). Finally, driven by the large number of vehicles worldwide, ADAS have emerged to assist drivers in the driving task (Gietelink, et al., 2006), (Rodríguez Flórez, 2010). ...
... On another front, aerial robots also offer great prospects in many applications: search and rescue, construction inspection, real-time monitoring in high-risk aerial missions, cartography, fire detection, or film shooting for cinema (Kim, et al., 2003), (Bonin-fonte, et al., 2008), (Gietelink, et al., 2006), (Rodríguez Flórez, 2010). ...
... Cameras are passive sensors that are increasingly used in ADAS, for applications such as blind spot monitoring, lane departure warning, speed limit sign recognition and pedestrian recognition. The large functional spectrum offered by this sensor makes it quite attractive, since it provides rich, complementary information, replacing on-board sensors or providing redundancy in safety applications (Rodríguez Flórez, 2010). Monocular camera systems are preferred to stereo camera systems because monocular systems have advantages in terms of reduced cost and the ease with which they can be fitted to vehicles (Yamaguchi, et al., 2008). ...
This thesis addresses the problem of obstacle avoidance for semi-autonomous and autonomous terrestrial platforms in dynamic and unknown environments. Based on monocular vision, it proposes a set of tools that continuously monitors the way forward, providing appropriate road information in real time. A horizon finding algorithm was developed for sky removal. This algorithm generates the region of interest from a dynamic threshold search method, allowing only a small portion of the image ahead of the vehicle to be investigated for road and obstacle detection. A free navigable area is then represented in a multimodal 2D drivability road image. This multimodal result allows a level of safety to be selected according to the environment and operational context. In order to reduce processing time, this thesis also proposes an automatic image discarding criterion. Taking into account the temporal coherence between consecutive frames, a new Dynamic Power Management methodology is proposed and applied to robotic visual machine perception, including a new environment observer method to optimize the energy consumed by a visual machine. This proposal was tested on different types of image texture (road surfaces), covering free-area detection, reactive navigation and time-to-collision estimation. A remarkable characteristic of these methodologies is their independence from the image acquisition system and from the robot itself. This real-time perception system has been evaluated on different test benches and also on real data obtained from two intelligent platforms. In semi-autonomous tasks, tests were conducted at speeds above 100 km/h. Autonomous displacements were also carried out successfully. The algorithms presented here showed considerable robustness.
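The dynamic-threshold horizon search mentioned in this abstract can be illustrated with a minimal stdlib sketch. This is not the thesis's actual algorithm, only the core idea: sky rows are bright, ground rows darker, and the threshold adapts to each frame rather than being fixed.

```python
def find_horizon(img):
    """Locate the sky/ground boundary row using a per-frame dynamic threshold.

    The threshold is the midpoint of the row-mean intensity range for this
    frame, so it adapts to overall scene brightness.
    """
    means = [sum(row) / len(row) for row in img]  # mean intensity per row
    thr = (max(means) + min(means)) / 2.0
    for r, m in enumerate(means):  # scan top to bottom
        if m < thr:
            return r  # first row darker than threshold: start of ground
    return len(img) - 1  # no dark row found; assume horizon at the bottom

# Toy 6-row grayscale frame: bright sky (200) over darker road (80).
frame = [[200] * 4] * 3 + [[80] * 4] * 3
h = find_horizon(frame)
# h == 3: rows 0-2 are treated as sky and removed from the region of interest.
```

Everything above the returned row can then be discarded, which is how sky removal shrinks the region of interest that the road and obstacle detectors must process.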
Camera-based estimation of drivable image areas is still evolving. These systems have been developed to improve safety and convenience, without the need to adapt to the environment. Machine vision is an important tool to identify the road region in images, and road detection is a major task in autonomous vehicle guidance. Accordingly, this work proposes a drivable region detection algorithm that generates the region of interest from a dynamic threshold search method and a drag process (DP). Applying the DP to the estimation of drivable image areas has not been done before, making the concept unique. Our system has been evaluated on real data obtained from intelligent platforms and tested on different types of image texture, including occlusion cases, obstacle detection and reactive navigation.