Fig 3 - available via license: CC BY
Consistency map M_Consistency: generating flow. https://doi.org/10.1371/journal.pone.0215159.g003

Source publication
Article
Full-text available
Road detection is a basic task in the automated driving field, in which 3D lidar data has recently come into common use. In this paper, we propose to rearrange 3D lidar data into a new organized form to construct direct spatial relationships within the point cloud, and put forward new features for real-time road detection tasks. Our model works based on two prerequi...

Contexts in source publication

Context 1
... the value of L is also limited by an empirical threshold to select candidate road parts from each scan line. After locating the candidate point sequences on each scan, we project the point cloud onto a grid map as shown in Fig 3a (candidate point sequences are colored green), and the endpoints of each sequence are taken as candidate road boundary points in the grid map. Due to the sparsity of lidar scans, we expand the endpoints on each scan along the forward direction, so that the endpoints of road regions become vertical line segments (Fig 3b). ...
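As an illustration of this projection-and-expansion step, here is a minimal numpy sketch; the grid resolution, extents, and the `expand_cells` expansion length are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical grid parameters (not specified in the paper): 20 cm cells
# covering 40 m ahead and +/-20 m laterally around the vehicle.
RES, X_MAX, Y_MAX = 0.2, 40.0, 20.0
H, W = int(X_MAX / RES), int(2 * Y_MAX / RES)

def to_grid(x, y):
    """Map metric (x forward, y left) coordinates to grid row/col."""
    return int(x / RES), int((y + Y_MAX) / RES)

def rasterize_candidates(scan_sequences, expand_cells=3):
    """Project candidate road point sequences onto a BEV grid and expand
    each sequence endpoint along the forward (row) direction, turning
    sparse endpoints into short vertical line segments (cf. Fig 3b)."""
    grid = np.zeros((H, W), dtype=np.uint8)      # 1 = candidate road cell
    boundary = np.zeros((H, W), dtype=np.uint8)  # 1 = candidate boundary
    for seq in scan_sequences:                   # seq: (N, 2) metric x, y
        for x, y in seq:
            r, c = to_grid(x, y)
            if 0 <= r < H and 0 <= c < W:
                grid[r, c] = 1
        for x, y in (seq[0], seq[-1]):           # two endpoints per scan
            r, c = to_grid(x, y)
            for dr in range(-expand_cells, expand_cells + 1):
                if 0 <= r + dr < H and 0 <= c < W:
                    boundary[r + dr, c] = 1      # vertical segment
    return grid, boundary
```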
Context 2
... locating the candidate point sequences on each scan, we project the point cloud onto a grid map as shown in Fig 3a (candidate point sequences are colored green), and the endpoints of each sequence are taken as candidate road boundary points in the grid map. Due to the sparsity of lidar scans, we expand the endpoints on each scan along the forward direction, so that the endpoints of road regions become vertical line segments (Fig 3b). However, when we encounter a line segment, it is not clear whether it belongs to the left boundary or the right one, because lidar scan lines appear as circles in the grid map. ...
Context 3
... in order to reduce computation, we choose 4 scans, instead of computing all 64 scans, to predict the road direction. The green line in Fig 3b is the road direction we predict, and the red squares are the 4 midpoints we choose. Given the road direction, we can easily decide the left boundary points (shown in yellow in Fig 3b) and the right ones (shown in blue in Fig 3b). ...
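A minimal sketch of this direction fit and left/right split, assuming the 4 midpoints and the candidate boundary points are given as 2D arrays; the SVD line fit and cross-product test are one plausible reading of the step, not the authors' exact implementation:

```python
import numpy as np

def classify_boundary_points(midpoints, endpoints):
    """Fit the road direction through scan midpoints and split the
    endpoint candidates into left/right by the 2D cross-product sign.

    midpoints: (4, 2) midpoints of candidate sequences on 4 chosen scans.
    endpoints: (M, 2) candidate boundary points (x forward, y left).
    """
    # Least-squares direction through the midpoints (green line, Fig 3b).
    centroid = midpoints.mean(axis=0)
    _, _, vt = np.linalg.svd(midpoints - centroid)
    direction = vt[0]                  # principal axis = road direction
    if direction[0] < 0:               # make it point forward (+x)
        direction = -direction

    rel = endpoints - centroid
    cross = direction[0] * rel[:, 1] - direction[1] * rel[:, 0]
    left = endpoints[cross > 0]        # e.g. the yellow points in Fig 3b
    right = endpoints[cross < 0]       # e.g. the blue points in Fig 3b
    return direction, left, right
```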
Context 4
... The green line in Fig 3b is the road direction we predict, and the red squares are the 4 midpoints we choose. Given the road direction, we can easily decide the left boundary points (shown in yellow in Fig 3b) and the right ones (shown in blue in Fig 3b). Then, a fast Hough transform is adopted to determine the best lines, i.e., those passing through the most left or right boundary points, as the left and right boundaries respectively (Fig 3c). ...
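The paper names a fast Hough transform; as a hedged stand-in, the sketch below uses OpenCV's standard `cv2.HoughLines` to pick the line with the most votes from one boundary mask. The `min_votes` parameter is illustrative:

```python
import cv2
import numpy as np

def fit_boundary(boundary_mask, min_votes=15):
    """Pick the line passing through the most candidate boundary points.

    boundary_mask: uint8 BEV image with the left (or right) boundary
    points set to 255. Returns (rho, theta) of the strongest line.
    """
    lines = cv2.HoughLines(boundary_mask, 1, np.pi / 180.0, min_votes)
    if lines is None:
        return None
    rho, theta = lines[0][0]   # OpenCV sorts lines by accumulator votes
    return rho, theta

# Usage: run once on the left-boundary mask and once on the right one,
# yielding the two road boundaries of Fig 3c.
```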
Context 5
... road direction, we can easily decide the left boundary points (shown in yellow in Fig 3b) and the right ones (shown in blue in Fig 3b). Then, a fast Hough transform is adopted to determine the best lines, i.e., those passing through the most left or right boundary points, as the left and right boundaries respectively (Fig 3c). Next, we expand all candidate road point sequences on each scan along the forward direction, as shown in gray in Fig 3d. ...
Context 6
... a fast Hough transform is adopted to determine the best lines, i.e., those passing through the most left or right boundary points, as the left and right boundaries respectively (Fig 3c). Next, we expand all candidate road point sequences on each scan along the forward direction, as shown in gray in Fig 3d. The two road boundaries and the road direction help to locate the true road regions among the candidates. ...
Context 7
... to the sparsity of the point cloud, we extend the farthest road region (the one with the largest ScanID) along the nearest boundary direction to the borders of the grid map, where there is no laser echo. Finally, we acquire an image of the road region by consistency measurement, called the consistency map M_Consistency here (Fig 3e). To allow transformation from BEV back to the perspective view, we store the corresponding height of each pixel in M_Consistency, taken from our array of structures. ...
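A minimal sketch of how M_Consistency and its per-pixel heights might be stored, assuming the accepted road cells and their heights are already known (`road_cells` and `heights` are hypothetical names):

```python
import numpy as np

def build_consistency_map(road_cells, heights, shape):
    """Assemble M_Consistency as a BEV road mask plus a parallel height
    channel, so each road pixel can later be re-projected from BEV back
    to the perspective view.

    road_cells: (K, 2) row/col indices of cells accepted as road.
    heights:    (K,)  height of the lidar point stored for each cell.
    """
    m_consistency = np.zeros(shape, dtype=np.uint8)
    height_map = np.full(shape, np.nan, dtype=np.float32)  # NaN = no echo
    rows, cols = road_cells[:, 0], road_cells[:, 1]
    m_consistency[rows, cols] = 1
    height_map[rows, cols] = heights
    return m_consistency, height_map
```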

Similar publications

Article
Full-text available
Automobile body deviation, one of the hidden dangers to road traffic safety, affects not only vehicle operating stability but also the accuracy of headlight detection results. A driving-direction analysis method based on binocular stereo vision is presented. The shape center of the front license plate is extracted as...

Citations

... This approach appears in models for which a significant number of 2D rendering extraction techniques have been developed. 2D point cloud slices [6,7], object views [8], elevation images [9,10] and spherical representations [11,12] are some of these, each with diverse versions. 2D view representations usually omit relevant 3D information, which can cause a performance drop. ...
Article
Three-dimensional perception applications have been growing since Light Detection and Ranging devices became more affordable. Among those applications, navigation and collision avoidance systems stand out for their importance in autonomous vehicles, which are drawing an appreciable amount of attention these days. The on-road object classification task on three-dimensional information is a solid base for an autonomous vehicle perception system, and several factors make the analysis of the captured information challenging. In these applications, objects are represented from only one side, their shapes are highly variable, and occlusions are common. But the highest challenge comes with low resolution, which leads to a significant performance drop in classification methods. While most classification architectures tend to grow bigger to obtain deeper features, we explore the opposite direction, contributing to the implementation of low-cost mobile platforms that could use low-resolution detection and ranging devices. In this paper, we propose an approach for on-road object classification under extremely low-resolution conditions. It feeds three-dimensional point clouds directly, as sequences, into a transformer-convolutional architecture that could be useful on embedded devices. Our proposal reaches an accuracy of 89.74%, tested on objects represented with only 16 points extracted from the Waymo, Lyft's Level 5 and KITTI datasets. It reaches a real-time implementation (22 Hz) on a single processor core at 2.3 GHz.
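As a rough sketch of the idea of treating a 16-point object as a token sequence, here is a tiny PyTorch transformer classifier; it omits the convolutional part of the paper's transformer-convolutional architecture, and all sizes (`d_model`, heads, layers, `num_classes`) are illustrative assumptions:

```python
import torch
import torch.nn as nn

class PointSequenceClassifier(nn.Module):
    """Treat the (up to) 16 points of a low-resolution object as a token
    sequence: embed each xyz point, run a small transformer encoder, and
    classify from the mean-pooled features."""
    def __init__(self, num_classes=4, d_model=64, nhead=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(3, d_model)            # xyz -> token
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, pts):                           # pts: (B, 16, 3)
        tokens = self.encoder(self.embed(pts))
        return self.head(tokens.mean(dim=1))          # (B, num_classes)

logits = PointSequenceClassifier()(torch.randn(2, 16, 3))
```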
... The lane region enclosed by a lane marking scheme is sufficient to keep the vehicle separated in a dynamic driving environment. The lane detection scheme works with various sensors [5][6][7][8][9][10] fused with a lane detection algorithm to get a clear view of the vehicle's surroundings. However, these sensors have restricted ability to grasp the scene completely due to varying sensor wavelengths and improper markings, and they have high operational costs. ...
... However, the purchase and computing costs of implementing both of these sensors are high. Similarly, Xu et al. obtained 3D data on road boundaries using a Velodyne HDL-64 S2 LiDAR sensor with 64 laser heads [6]. The LiDAR data were reorganized and, based on the mapping criteria, two maps, namely a consistency map and an obstacle map, were generated. ...
... In the referenced dataset, URBAN, which is the combination of all subcategories of the dataset, is considered here for evaluation purposes. The corresponding scores for UU, UM, and UMM are also presented in Table 7. Xu et al. [6] processed point cloud data from a LiDAR sensor and attained a MaxF score of 92.22% in the URBAN category, but their method suffers from intensity computation and dimensional data reduction. Similarly, Kim [8] applied a radar sensor with a metal reflector approach and attained a lane detection rate in the range of 86%-94%. ...
Article
Full-text available
Detection of the lane region within the road boundary is an imperative module for intelligent vehicle systems. Lane markings provide separate regions on the road for vehicles to reduce the possibility of accidents. Existing lane detection methods using sensor-based approaches such as Radar and LiDAR have limited performance and high operational costs. To achieve steady and optimal lane detection, the vision-based lane region detection scheme VLDNet is proposed, which utilizes an encoder-decoder network with a semantic segmentation architecture. In this direction, a hybrid model using UNet and ResNet has been adopted, where UNet is used as the segmentation model and ResNet-50 is used for down-sampling the image and identifying the required features. These features are then fed into UNet for up-sampling and decoding the segments of the images. The publicly available KITTI dataset has been used for experiments and validation of the proposed network. The method outperforms existing state-of-the-art methods in lane region detection, achieving better performance on standard evaluation measures: an accuracy of 98.87%, a precision of 98.24%, a recall of 96.55%, a frequency-weighted IoU of 97.78%, and a MaxF score of 97.77%.
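A minimal PyTorch sketch of such a hybrid, with a ResNet-50 encoder feeding skip connections into a UNet-style decoder; the channel widths and decoder depth are assumptions, not the VLDNet configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

def up_block(c_in, c_skip, c_out):
    """Upsample, concatenate the encoder skip, then fuse with a conv."""
    return nn.ModuleDict({
        "up": nn.ConvTranspose2d(c_in, c_out, kernel_size=2, stride=2),
        "fuse": nn.Sequential(
            nn.Conv2d(c_out + c_skip, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True)),
    })

class ResNetUNet(nn.Module):
    """UNet-style segmenter over ResNet-50 features (lane / background)."""
    def __init__(self, n_classes=2):
        super().__init__()
        r = resnet50(weights=None)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu)   # /2,    64 ch
        self.pool = r.maxpool                                # /4
        self.e1, self.e2 = r.layer1, r.layer2                # 256, 512 ch
        self.e3, self.e4 = r.layer3, r.layer4                # 1024, 2048 ch
        self.d3 = up_block(2048, 1024, 512)
        self.d2 = up_block(512, 512, 256)
        self.d1 = up_block(256, 256, 128)
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear",
                        align_corners=False),
            nn.Conv2d(128, n_classes, 1))

    def _dec(self, block, x, skip):
        x = block["up"](x)
        return block["fuse"](torch.cat([x, skip], dim=1))

    def forward(self, x):                  # x: (B, 3, H, W), H, W % 32 == 0
        s0 = self.stem(x)                  # /2
        s1 = self.e1(self.pool(s0))        # /4
        s2 = self.e2(s1)                   # /8
        s3 = self.e3(s2)                   # /16
        s4 = self.e4(s3)                   # /32
        x = self._dec(self.d3, s4, s3)
        x = self._dec(self.d2, x, s2)
        x = self._dec(self.d1, x, s1)
        return self.head(x)

logits = ResNetUNet()(torch.randn(1, 3, 256, 512))  # -> (1, 2, 256, 512)
```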
... In general, researchers choose a mobile laser scanning system [8], [9], a 64-beam LiDAR [15], [16], or at least a 32-beam LiDAR [17], [18] to identify lane markings from point clouds, owing to the precise 3D information. However, a 16-beam commercial LiDAR is used for this research, considering cost-effective application. ...
Article
Full-text available
The commercialization of automated driving vehicles promotes the development of safer and more efficient autonomous driving technologies, including lane marking detection, which is considered the most promising feature in environmental perception technology. To reduce the tradeoff between time consumption and detection precision, we propose a real-time lane marking detection method that uses LiDAR point clouds directly. A constrained RANSAC algorithm is applied to select the regions of interest and filter the background data. Further, a road curb detection method based on segment point density is proposed to classify road points and curb points. Finally, an adaptive threshold selection method is proposed to identify lane markings. In this investigation, five datasets are collected from different driving conditions, including straight roads, curved roads, and uphill sections, to test the proposed method. The proposed method is evaluated with different performance metrics such as Precision, Recall, Dice, and Jaccard, as well as the average detection distance and computation time, for the five datasets. The quantitative results show the efficiency and feasibility of the proposed method.
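A hedged numpy sketch of a constrained RANSAC in this spirit: a plane fit that rejects hypotheses tilted more than a few degrees from horizontal. The thresholds are illustrative, and the authors' actual constraints may differ:

```python
import numpy as np

def constrained_ransac_ground(points, n_iter=200, dist_thr=0.1,
                              max_tilt_deg=10.0, rng=None):
    """RANSAC plane fit constrained to near-horizontal planes, a simple
    stand-in for a constrained RANSAC ROI / background-filter step.

    points: (N, 3) lidar points. Returns ((normal, d), inlier_mask)."""
    if rng is None:
        rng = np.random.default_rng(0)
    cos_max = np.cos(np.deg2rad(max_tilt_deg))
    best, best_mask = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= norm
        if abs(n[2]) < cos_max:
            continue                      # constraint: reject tilted planes
        d = -n @ p0
        mask = np.abs(points @ n + d) < dist_thr
        if mask.sum() > best_mask.sum():
            best, best_mask = (n, d), mask
    return best, best_mask
```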
... To enable environment perception, radar, camera and light detection and ranging (LiDAR) sensors are used for safe navigation. Autonomous vehicles take actions based on the outputs acquired from environment perception and localization [3,4]. In real time, the road status is determined through communication between the autonomous vehicles [5]. ...
... where represents the layer number and represents the former layer's result. 4. In general, the fully connected layer is considered the final layer of the CNN. ...
Article
Full-text available
Connected autonomous vehicles (CAVs) currently promise cooperation between vehicles, providing abundant and real-time information through wireless communication technologies. In this paper, a two-level fusion of classifiers (TLFC) approach is proposed, using deep learning classifiers to perform accurate road detection (RD). The proposed TLFC-RD approach improves classification by considering four key strategies: cross-fold operation at the input and pre-processing using superpixel generation, adequate features, multi-classifier feature fusion, and a deep learning classifier. Specifically, the road is classified into drivable and non-drivable areas by designing the TLFC with deep learning classifiers, and the information detected using TLFC-RD is exchanged between the autonomous vehicles for ease of driving on the road. TLFC-RD is analyzed in terms of its accuracy, sensitivity or recall, specificity, precision, F1-measure and MaxF measure. The TLFC-RD method is also evaluated against three existing methods: U-Net with a Domain Adaptation Model (DAM), a Two-Scale Fully Convolutional Network (TFCN) and a cooperative machine learning approach (i.e., TAAUWN). Experimental results show that the accuracy of the TLFC-RD method on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset is 99.12%, higher than its competitors.
... An integral part of robotics [17][18][19][20], object recognition with LiDAR sensors commonly depends on feature extraction for the classification of objects. Accurate object detection and classification enable object tracking, road sign detection [21], scene understanding [22] and behaviour recognition [23]. 3D LiDAR traits based on local surfaces and key-points are amongst the main features extracted for object recognition [24]. ...
Article
Full-text available
Low-end LiDAR sensors provide an alternative for depth measurement and object recognition on lightweight devices. However, due to low computing capacity, complicated algorithms cannot be performed on the device, and sparse information further limits the features available for extraction. Therefore, a classification method is required that accepts sparse input while giving the classification process enough leverage to accurately differentiate objects within limited computing capability. To achieve reliable feature extraction from a sparse LiDAR point cloud, this paper proposes a novel Clustered Extraction and Centroid Based Clustered Extraction Method (CE-CBCE) for feature extraction, followed by a convolutional neural network (CNN) object classifier. The integration of the CE-CBCE and CNN methods enables us to utilize lightweight actuated LiDAR input and provides a low-compute means of classification while maintaining accurate detection. On genuine LiDAR data, the proposed method shows a reliable accuracy of 97%.
... However, the performance of this method degrades when solid obstacles beyond the road sides do not produce variations in the frequency domain of the Lidar measurements. In a similar approach [35], the authors applied the Hough transform to the road bounds in order to construct a consistency map and an obstacle map in Bird's Eye View. Their fusion outcome is the final road region, yet with performance limitations due to the sensitivity of Hough thresholding. ...
Chapter
Full-text available
Future civilian and professional autonomous vehicles to be released onto the market should apprehend and interpret the road in a manner similar to human drivers. In structured urban environments where signs, road lanes and markers are well defined and ordered, landmark-based road tracking and localisation has progressed significantly during the last decade, with many autonomous vehicles making their debut on the market. However, in semi-structured and rural environments where traffic infrastructure is deficient, autonomous driving is hindered by significant challenges. The paper at hand presents a Lidar-based method for road boundary detection suitable for service robot operation in rural and semi-structured environments. Organised Lidar data undergo a spatial distribution processing method to isolate road limits in a forward-looking horizon ahead of the robot. Stereo SLAM is performed to register subsequent road limits, and RANSAC is applied to identify edges that correspond to road segments. In addition, the robot's traversable path is estimated and progressively merged with Bézier curves to create a smooth trajectory that respects vehicle kinematics. Experiments have been conducted on data collected from our robot in a semi-structured urban environment, while the method has also been evaluated on the KITTI dataset, exhibiting remarkable performance.
... The candidate points are classified into groups based on their Z values (heights). This helps to differentiate between road segments and vegetation using the technique presented in Xu et al. (2019). Finally, for lane marking detection, LiDAR data points with high reflectivity are selected, as they correspond to bright surfaces such as lane markings, railings and other reflective surfaces. Since we are only interested in markings on the road, only points that have high reflectivity and lie on the road plane (based on the road detection obtained) are selected as final candidates. ...
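This candidate selection reduces to two boolean masks; a minimal numpy sketch, where the intensity scale and threshold are assumptions:

```python
import numpy as np

def lane_marking_candidates(points, intensity, road_mask,
                            intensity_thr=0.7):
    """Keep only points that are both highly reflective and on the
    detected road plane; these are the final lane-marking candidates.

    points:    (N, 3) lidar points.
    intensity: (N,) reflectivity normalized to [0, 1] (assumed scale).
    road_mask: (N,) bool, True where road detection accepted the point.
    """
    reflective = intensity > intensity_thr   # bright: markings, railings
    return points[reflective & road_mask]    # markings on the road only
```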
Thesis
Full-text available
Intelligent vehicles are a key component in humanity's vision for safer, more efficient, and more accessible transportation systems across the world. Due to the multitude of data sources and processes associated with intelligent vehicles, the reliability of the total system depends greatly on the possibility of errors or poor performance in its components. In our work, we focus on the critical task of localization of intelligent vehicles and address the challenges of monitoring the integrity of the data sources used in localization. The primary contribution of our research is the proposition of a novel protocol for integrity, combining integrity concepts from information systems with the existing integrity concepts in the field of Intelligent Transport Systems (ITS). An integrity monitoring framework based on the theorized integrity protocol that can handle multimodal localization problems is formalized. As a first step, a proof of concept for this framework is developed based on cross-consistency estimation of data sources using polynomial models. Based on the observations from the first step, a 'Feature Grid' data representation is proposed in the second step and a generalized prototype for the framework is implemented. The framework is tested on highways as well as in complex urban scenarios to demonstrate that it is capable of providing continuous integrity estimates of the multimodal data sources used in intelligent vehicle localization.
... However, these methods do not work well on false-positive curb points on large obstacles such as a wall or a fence. A new approach [20] utilizes road consistency and obstacle maps obtained by rearranging dense point cloud data. These maps provide the spatial relationship and height difference between road and off-road regions, which helps to segment the road region in unstructured rural scenes. ...
Preprint
Full-text available
Reliable curb detection is critical for safe autonomous driving in urban contexts. Curb detection and tracking are also useful in vehicle localization and path planning. Past work utilized a 3D LiDAR sensor to determine accurate distance information and the geometric attributes of curbs. However, such an approach requires dense point cloud data and is also vulnerable to false positives from obstacles present in both road and off-road areas. In this paper, we propose an approach to detect and track curbs by fusing data from multiple sensors: sparse LiDAR data, a mono camera, and low-cost ultrasonic sensors. The detection algorithm is based on a single 3D LiDAR and a mono camera used to detect candidate curb features, and it effectively removes false positives arising from surrounding static and moving obstacles. The accuracy of the tracking algorithm is boosted by Kalman filter-based prediction and fusion with lateral distance information from the low-cost ultrasonic sensors. We next propose a line-fitting algorithm that yields robust results for curb locations. Finally, we demonstrate the practical feasibility of our solution by testing in different road environments and evaluating our implementation in a real vehicle (demo video clips: https://www.youtube.com/watch?v=w5MwsdWhcy4, https://www.youtube.com/watch?v=Gd506RklfG8). Our algorithm maintains over 90% accuracy within 4.5-22 meters and 0-14 meters for the KITTI dataset and our dataset respectively, and its average processing time per frame is approximately 10 ms on an Intel i7 x86 processor and 100 ms on an NVIDIA Xavier board.
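A minimal sketch of the Kalman-filter fusion idea: a 1D constant-position filter over the curb's lateral distance, updated with measurements of different variances for the LiDAR/camera detections and the noisier ultrasonic ranges. The noise values are illustrative, not from the paper:

```python
import numpy as np

class LateralCurbKF:
    """1D constant-position Kalman filter over the curb's lateral
    distance, fusing curb detections with ultrasonic range readings."""
    def __init__(self, y0, p0=1.0, q=0.05):
        self.y, self.p, self.q = y0, p0, q   # state, variance, process noise

    def predict(self):
        self.p += self.q                      # constant-position model
        return self.y

    def update(self, z, r):
        """z: measured lateral distance, r: its variance (sensor-specific,
        e.g. larger for ultrasonic than for LiDAR)."""
        k = self.p / (self.p + r)             # Kalman gain
        self.y += k * (z - self.y)
        self.p *= (1.0 - k)
        return self.y

kf = LateralCurbKF(y0=1.5)
kf.predict(); kf.update(z=1.42, r=0.02)   # LiDAR/camera curb detection
kf.predict(); kf.update(z=1.55, r=0.10)   # noisier ultrasonic range
```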
... The ROI is divided into smaller patches in the XY plane, and the points belonging to each patch are examined for their Z values. This helps to differentiate between road segments, curbs, dividers, vegetation, etc., using the technique presented in [21]. Points with high reflectivity are also selected, as they correspond to bright surfaces such as lane markings and railings. ...
Article
Full-text available
Autonomous driving systems rely tightly on the quality of sensor data for tasks such as localization and navigation. In this work, we present an integrity monitoring framework that can assess the quality of multimodal data from exteroceptive sensors. The proposed multisource coherence-based integrity assessment framework is capable of handling highway as well as complex semi-urban and urban scenarios. To achieve such generalization and scalability, we employ a semantic-grid data representation, which can efficiently represent the surroundings of the vehicle. The proposed method is used to evaluate the integrity of sources in several scenarios, and the integrity markers generated are used to identify and quantify unreliable data. A particular focus is given to real-world complex scenarios obtained from publicly available datasets where localization integrity requirements are of high importance. Those scenarios are examined to evaluate the performance of the framework and to provide a proof of concept. We also establish the importance of the proposed integrity assessment framework for context-based localization applications in autonomous vehicles. The proposed method applies integrity assessment concepts from the field of aviation to ground vehicles and provides Protection Level markers (Horizontal, Lateral, Longitudinal) for perception systems used in vehicle localization.
... Intelligent transportation has aroused huge social concern in recent decades; research can be divided by application scenario, such as ground [1][2][3], underwater [4][5][6] and aerial [7] ones. Traversable road detection is a core task in the autonomous driving field. ...
Article
Full-text available
Road region detection is a hot research topic in the autonomous driving field. It requires giving consideration to accuracy and efficiency as well as cost. Accordingly, we choose millimeter-wave (MMW) radar to fulfill the road detection task, and put forward a novel MMW-based method which meets real-time requirements. In this paper, a dynamic and static obstacle distinction step is first conducted to estimate the interference of dynamic obstacles on boundary detection. Then, we generate an occupancy grid map using modified Bayesian prediction to construct a 2D driving environment model based on static obstacles, while a clustering procedure is carried out to describe dynamic obstacles. Next, a Modified Random Sample Consensus (Modified RANSAC) algorithm is presented to estimate candidate road boundaries from the static obstacle map. Results of our experiments are presented and discussed at the end. Note that all our experiments in this paper run in real time on an experimental UGV (unmanned ground vehicle) platform equipped with a Continental ARS 408-21 radar.
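A minimal numpy sketch of the kind of log-odds occupancy grid update that a Bayesian occupancy map builds on; the increments, clipping bounds, and cell-index inputs are illustrative assumptions, not the paper's modified Bayesian prediction:

```python
import numpy as np

# Log-odds increments for the Bayesian grid update (illustrative values).
L_OCC, L_FREE, L_MIN, L_MAX = 0.85, -0.4, -5.0, 5.0

def update_grid(log_odds, hit_cells, free_cells):
    """One Bayesian occupancy update from static-obstacle radar returns:
    cells with echoes become more occupied, traversed cells more free.

    log_odds:   (H, W) float32 grid of occupancy log-odds.
    hit_cells:  (K, 2) row/col indices of cells containing returns.
    free_cells: (M, 2) row/col indices crossed by rays before their hits.
    """
    log_odds[hit_cells[:, 0], hit_cells[:, 1]] += L_OCC
    log_odds[free_cells[:, 0], free_cells[:, 1]] += L_FREE
    np.clip(log_odds, L_MIN, L_MAX, out=log_odds)
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))   # occupancy probability
```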