Fig 3 - uploaded by Lingfei Ma
Contexts in source publication
Context 1
... we use 10 m as the metric to segment the scene. In the scanned MLS data in Fig. 3, ground points always occupy the largest part of the scene; however, the objects of interest stand on the ground. ...
Context 2
... set and a validation set are generated manually from real mobile laser scanning point cloud data, and a classifier is trained on the training set using an SVM with a radial basis function (RBF) kernel. Finally, each point cloud segment is classified as a traffic sign or not using the SVM with the optimal classification model, as shown by the red dots (Fig. 3). The clustered point clouds classified as traffic signs in mobile laser scanning point ...
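The SVM stage described above can be sketched as follows. This is a minimal scikit-learn sketch: the per-segment features (height, planarity, intensity), their value ranges, and the synthetic data are illustrative assumptions, not the paper's actual descriptors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical per-segment features: height above ground, planarity,
# mean reflectance intensity (all made up for illustration).
rng = np.random.default_rng(0)
signs = rng.normal([2.5, 0.9, 0.8], 0.1, size=(200, 3))  # pole-mounted, planar, retroreflective
other = rng.normal([1.0, 0.4, 0.3], 0.2, size=(200, 3))  # vegetation, facades, etc.
X = np.vstack([signs, other])
y = np.array([1] * 200 + [0] * 200)  # 1 = traffic sign, 0 = other

# Manual training/validation split, as in the paper's setup.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# SVM with an RBF kernel, fitted on the training set only.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print(f"validation accuracy: {clf.score(X_val, y_val):.2f}")
```

In practice the kernel parameters `C` and `gamma` would be tuned on the validation set to obtain the "optimal classification model" the excerpt mentions.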
Citations
... These factors include the resolution of the camera used when collecting road data; the captured image being too bright, too dim, or containing spotlights; the weather conditions at the time the image was taken; motion blur caused by vehicle movement; and objects that occlude traffic signs (TSs) in the environment where the image was taken. The main source of these difficulties is uncontrollable environmental conditions [5]. ...
Traffic sign recognition techniques aim to reduce the probability of traffic accidents by increasing road and vehicle safety. These systems play an essential role in the development of autonomous vehicles, a rapidly growing field. In this study, a new high-performance and robust deep convolutional neural network model is proposed for traffic sign recognition: a stacking ensemble model that combines models trained on input images processed with different improvement methods. First, preprocessing was applied to the data set to mitigate the effects of adverse weather conditions and capture errors, enabling more accurate recognition. In addition, data augmentation was applied to compensate for the uneven distribution of images across classes. During model training, the learning rate was adjusted to prevent overfitting. A new stacking ensemble model was then created by combining the models trained on the differently preprocessed input images. This ensemble model obtained 99.75% test accuracy on the German Traffic Sign Recognition Benchmark (GTSRB) dataset, higher than the accuracies reported by comparable studies in the literature. Additionally, several approaches were applied for model evaluation: Grad-CAM (gradient-weighted class activation mapping) was used to make the model explainable, an evidential deep learning approach was applied to measure classification uncertainty, and results for safe monitoring are shared with SafeML-II, which is based on measuring statistical distances. A migration test with the BTSC (Belgium Traffic Sign Classification) dataset was also applied to test the robustness of the model.
With the transfer learning method, the parameter weights of the feature extraction stage of the models trained on GTSRB are preserved, and training is carried out only for the classification stage. Accordingly, the stacking ensemble model obtained by combining the models trained with transfer learning achieves a high accuracy of 99.33% on the BTSC dataset. While a single model has 7.15 M parameters, the stacking ensemble model with additional layers has 14.34 M. However, because the parameters of the constituent models, each trained on a single preprocessed dataset, were frozen and transfer learning was performed, the number of trainable parameters in the ensemble model is only 39,643.
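The freezing scheme described above, keeping the feature-extraction weights fixed and training only the classification head, can be sketched in PyTorch. The layer sizes are illustrative assumptions, not the paper's architecture; the 62-class head matches the commonly cited BTSC class count.

```python
import torch.nn as nn

# Toy model standing in for a pretrained network (sizes are illustrative).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3), nn.ReLU(),   # "feature extraction" stage
    nn.Conv2d(16, 32, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 62),                # new classification head (BTSC: 62 classes)
)

# Freeze the pretrained feature-extraction stage; only the head remains trainable.
for layer in list(model)[:4]:
    for p in layer.parameters():
        p.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
print(f"trainable: {trainable}, frozen: {frozen}")
```

Counting parameters this way makes the abstract's point concrete: almost all weights are carried over frozen, and only the small head contributes trainable parameters.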
... There are several problems when using neural networks to recognize traffic signs: the need for high performance, low efficiency in real time, and a large amount of input data [14]. Sensitivity to uncontrolled environments is also a characteristic of neural-network-based traffic sign recognition systems [15]. The problem of reducing our dependence on neural networks is urgent. ...
This article proposes an algorithm for recognizing road signs based on determining their color and shape. It first searches for the edge segment of the road sign; the boundary curve of the sign is defined by the boundary of this edge segment, and approximating that boundary reveals the sign's shape. The hierarchical road sign recognition system forms classes by sign shape: there are six classes at the first level, two of which contain only one road sign. At the second level of the hierarchy, signs are classified by the color of the edge segment. At the third level, the image inside the edge segment is cut out, and the sign is identified by comparison against a template. A computer experiment was carried out on two collections of road signs. The proposed algorithm has a high operating speed and a low percentage of errors.
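The two upper levels of the hierarchy described above amount to a lookup from (shape, border colour) to a sign class, with template matching running only inside the selected class. A minimal sketch follows; the class names and shape/colour pairs are illustrative assumptions, not the paper's exact taxonomy.

```python
# Hypothetical level-1/level-2 taxonomy (not the paper's exact class list):
# shape selects the level-1 class, border colour refines it at level 2.
SIGN_CLASSES = {
    ("triangle", "red"): "warning",
    ("circle", "red"): "prohibition",
    ("circle", "blue"): "mandatory",
    ("octagon", "red"): "stop",          # a class containing a single sign
    ("diamond", "yellow"): "priority",   # a class containing a single sign
    ("rectangle", "blue"): "information",
}

def classify(shape: str, border_colour: str) -> str:
    """Levels 1-2 of the hierarchy; level 3 (template matching of the
    image inside the edge segment) would run only within this class."""
    return SIGN_CLASSES.get((shape, border_colour), "unknown")

print(classify("circle", "red"))   # → prohibition
print(classify("octagon", "red"))  # → stop
```

Restricting template matching to one class is what gives the hierarchical approach its speed: only a handful of templates are ever compared per detection.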
... In recent years, mobile laser scanning (MLS) technology has drawn widespread attention for collecting road information due to its high precision and 3-D character [9], [10], [11]. Many works detect road surface elements from MLS point clouds using traditional descriptors, such as grayscale [12], [13], color [14], and geometry [15], [16], [17]. Because traditional descriptors are insufficiently expressive in complex road scenes, some researchers, e.g., Nguyen et al. [18], adopt artificial neural networks to construct discriminative features of road surface elements. ...
... They further employ a maximum posterior probability method to obtain the optimal detection results. Li et al. [15] proposed a clustering method to detect traffic sign proposals from point clouds and used the support vector machine (SVM) algorithm to learn geometric statistical features of traffic signs, realizing traffic sign detection. Abedin et al. [14] segmented the road scene into local blocks with a threshold segmentation algorithm and then trained a speeded-up robust features (SURF) based recognizer to identify traffic signs in each local block. ...
Accurate and automatic detection of road surface elements (such as road markings or manhole covers) is the basis of, and key to, many applications. To efficiently obtain road surface element information, we propose a content-adaptive hierarchical deep learning model to detect arbitrary-oriented road surface elements from mobile laser scanning (MLS) point clouds. In the model, we design a densely connected feature integration module (DCFM) to connect and reorganize the feature maps of each stage in the backbone network. Besides, we propose a hierarchical prediction module (HPM) that innovatively uses the reorganized feature maps to recognize different types of road surface elements, so that the semantic information of road surface elements can be adaptively expressed on multi-level feature maps. We also add a cascade structure (CS) to the head of the model to detect targets efficiently; it learns the offset between the predicted minimum bounding box of a road surface element and the ground truth. In experiments, we show that the proposed method, with HPM as the main contributor, maintains robust detection performance even with unbalanced category numbers or overlapping road surface elements. The experiments also show that the proposed DCFM improves the recognition of small targets, and that the CS for predicting boundary offsets detects each target more accurately. We further integrate the designed modules into several rotation detectors, e.g., EAST and R3Det, and achieve state-of-the-art results in three road scenes with different categories and uneven distributions of road surface elements, which further demonstrates the effectiveness of the proposed method.
... The quality of the image can be affected by several reasons: (a) the camera quality in terms of image resolution, (b) the brightness of the image captured which can be too bright, too dim, or contains spotlights, (c) the weather condition at the time of image capture which can be snowy, rainy, or foggy, (d) the motion blurriness due to the photograph being taken at high speed in a way that the traffic sign is barely recognized, (e) the road situations that can obscure parts of the traffic sign during the image capture. The uncontrolled environment aspect of the traffic sign recognition problem is the main source of the above challenges [1]. ...
Traffic sign recognition plays a crucial role in the development of autonomous cars to reduce the accident rate and promote road safety. It has become necessary to address both the significant effect of the environment on traffic signs and the poor real-time performance of state-of-the-art deep learning algorithms. In this paper, we introduce Real-Time Image Enhanced CNN (RIECNN) for Traffic Sign Recognition. RIECNN is a real-time, novel approach that tackles multiple, diverse traffic sign datasets and outperforms the state-of-the-art architectures in terms of recognition rate and execution time. Experiments are conducted using the German Traffic Sign Benchmark (GTSRB), the Belgium Traffic Sign Classification (BTSC), and the Croatian Traffic Sign (rMASTIF) benchmark. Experimental results show that our approach achieved the highest recognition rate on all benchmarks: 99.75% for GTSRB, 99.25% for BTSC, and 99.55% for rMASTIF. In terms of latency and meeting the real-time constraint, the pre-processing time and inference time together do not exceed 1.3 ms per image. Not only has our proposed approach achieved remarkably high accuracy with real-time performance, it has also demonstrated robustness against traffic sign recognition challenges such as brightness and contrast variations in the environment.
... Several methodologies have increased traffic sign recognition accuracy, but algorithms still have advantages and disadvantages that are constrained by a variety of conditions. Interruptions such as severe weather, illumination changes, and signs fading, according to researchers, contribute to poor environmental adaptation and lower traffic sign detection accuracy [9,10,11]. To categorize recorded traffic signs and provide proper real-time input to smart vehicles, traffic sign identification employs an effective classification algorithm based on current dataset resources. ...
Automated tasks have simplified almost everything we do in today's environment. In their desire to focus only on driving, drivers regularly overlook signs placed on the side of the road, which can be harmful to themselves and others. To address this issue, the motorist should be informed in a way that does not require them to divert their concentration. Traffic Sign Detection and Recognition (TSDR) is critical here, since it alerts the motorist to approaching signs. Not only does this make roads safer, motorists also feel more at ease when driving unfamiliar or difficult routes. Another typical issue is the inability to read a sign; advanced driver assistance systems (ADAS) make it easier for motorists to read traffic signs with the help of this software. We provide a traffic sign detection and recognition system that employs image processing for sign detection and an ensemble of Convolutional Neural Networks (CNNs) for sign recognition. Because of their high recognition rate, CNNs can be used in a wide range of computer vision applications. TensorFlow is used to implement CNNTSR (Traffic Sign Recognition), a key component of current driving assistance systems that improves driver safety and comfort. This article examines a technology that assists drivers in recognizing traffic signs and avoiding road accidents. Two things determine the accuracy of TSR: the feature extractor and the classifier. Although there are a variety of approaches, most recent algorithms use a CNN to perform both feature extraction and classification. Using TensorFlow and a CNN, we build the traffic sign recognizer; the CNN is trained on a dataset of 43 distinct types of traffic signs, and the resulting accuracy is 95 percent. Keywords: Driver, TensorFlow, Data Sheet, Alert, CNNTSR, ADAS.
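The ensemble-of-CNNs idea above amounts to averaging the class distributions predicted by the individual networks and taking the argmax. A minimal NumPy sketch with three hypothetical models and three classes follows (GTSRB itself has 43 classes; the logit values are made up for illustration).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical raw outputs of three CNNs for one image, three classes.
logits = np.array([
    [0.2, 2.5, 0.1],   # model A: strongly prefers class 1
    [0.3, 1.9, 0.8],   # model B: prefers class 1
    [1.1, 1.0, 0.2],   # model C: marginally prefers class 0
])

probs = softmax(logits)        # shape (n_models, n_classes)
ensemble = probs.mean(axis=0)  # average the predicted distributions
print("predicted class:", int(ensemble.argmax()))  # → 1
```

Averaging probabilities rather than taking a majority vote lets a confident model outweigh a marginal one, which is one common way such ensembles gain accuracy over any single network.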
... Traffic sign detection and recognition are crucial in the development of intelligent vehicles, as they directly affect the implementation of driving behaviors. In the study of traffic sign detection technology, disturbances such as bad weather conditions, changes in lighting conditions, and fading of signage lead to an evident decline in the accuracy of traffic sign detection and poor environmental adaptability [26][27][28]. Moreover, recognition algorithms based on deep learning achieve high recognition rates, but problems such as high algorithmic complexity and long processing times remain. ...
Traffic sign detection and recognition are crucial in the development of intelligent vehicles. An improved traffic sign detection and recognition algorithm for intelligent vehicles is proposed to address problems such as how easily traditional traffic sign detection is affected by the environment, and the poor real-time performance of deep learning-based methodologies for traffic sign recognition. Firstly, the HSV color space is used for spatial threshold segmentation, and traffic signs are effectively detected based on shape features. Secondly, the classical LeNet-5 convolutional neural network model is considerably improved by using a Gabor kernel as the initial convolutional kernel, adding batch normalization after the pooling layer, and selecting the Adam method as the optimizer. Finally, traffic sign classification and recognition experiments are conducted on the German Traffic Sign Recognition Benchmark. Favorable prediction and accurate recognition of traffic signs are achieved through continuous training and testing of the network model. Experimental results show that the accurate recognition rate of traffic signs reaches 99.75%, and the average processing time per frame is 5.4 ms. Compared with other algorithms, the proposed algorithm has remarkable accuracy and real-time performance, strong generalization ability, and high training efficiency. The accurate recognition rate and average processing time are markedly improved. This improvement is of considerable importance for reducing the accident rate and enhancing road traffic safety, providing a strong technical guarantee for the steady development of intelligent vehicle driving assistance.
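The HSV threshold segmentation step above can be sketched per pixel with the standard library; the hue, saturation, and value thresholds are illustrative assumptions, not the paper's actual ranges.

```python
import colorsys

def is_sign_red(r: int, g: int, b: int) -> bool:
    """Rough HSV gate for the red border of prohibition/warning signs.
    Thresholds are illustrative, not taken from the paper."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    # Red hue wraps around 0, so accept either end of the hue circle;
    # also require enough saturation and brightness to exclude asphalt.
    return (h < 0.05 or h > 0.95) and s > 0.5 and v > 0.3

print(is_sign_red(200, 30, 30))    # saturated red → True
print(is_sign_red(120, 120, 120))  # grey asphalt  → False
```

In the full pipeline such a mask would be computed for the whole image, after which connected regions are tested against shape features (circle, triangle, etc.) to confirm a detection.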
Urban roads, as one of the essential transportation infrastructures, provide considerable motivation for rapid urban sprawl and bring notable economic and social benefits. Accurate and efficient extraction of road information plays a significant role in the development of autonomous vehicles (AVs) and high-definition (HD) maps. Mobile laser scanning (MLS) systems have been widely used for many transportation-related studies and applications in road inventory, including road object detection, pavement inspection, road marking segmentation and classification, and road boundary extraction, benefiting from their large-scale data coverage, high surveying flexibility, high measurement accuracy, and reduced weather sensitivity. Road information from MLS point clouds is significant for road infrastructure planning and maintenance, and has an important impact on transportation-related policymaking, driving behaviour regulation, and traffic efficiency enhancement.
Compared to the existing threshold-based and rule-based road information extraction methods, deep learning methods have demonstrated superior performance in 3D road object segmentation and classification tasks. However, three main challenges remain that impede deep learning methods for precisely and robustly extracting road information from MLS point clouds. (1) Point clouds obtained from MLS systems are always in large-volume and irregular formats, which has presented significant challenges for managing and processing such massive unstructured points. (2) Variations in point density and intensity are inevitable because of the profiling scanning mechanism of MLS systems. (3) Due to occlusions and the limited scanning range of onboard sensors, some road objects are incomplete, which considerably degrades the performance of threshold-based methods to extract road information.
To deal with these challenges, this doctoral thesis proposes several deep neural networks that encode inherent point cloud features and extract road information. These novel deep learning models have been tested by several datasets to deliver robust and accurate road information extraction results compared to state-of-the-art deep learning methods in complex urban environments. First, an end-to-end feature extraction framework for 3D point cloud segmentation is proposed using dynamic point-wise convolutional operations at multiple scales. This framework is less sensitive to data distribution and computational power. Second, a capsule-based deep learning framework to extract and classify road markings is developed to update road information and support HD maps. It demonstrates the practical application of combining capsule networks with hierarchical feature encodings of georeferenced feature images. Third, a novel deep learning framework for road boundary completion is developed using MLS point clouds and satellite imagery, based on the U-shaped network and the conditional deep convolutional generative adversarial network (c-DCGAN). Empirical evidence obtained from experiments compared with state-of-the-art methods demonstrates the superior performance of the proposed models in road object semantic segmentation, road marking extraction and classification, and road boundary completion tasks.