Long short-term memory (LSTM) architecture. Image source [96].

Source publication
Article
Full-text available
The progress brought by the deep learning technology over the last decade has inspired many research domains, such as radar signal processing, speech and audio recognition, etc., to apply it to their respective problems. Most of the prominent deep learning models exploit data representations acquired with either Lidar or camera sensors, leaving aut...

Contexts in source publication

Context 1
... adds a memory cell to the RNN network to store each neuron's state, thus mitigating the vanishing and exploding gradient problems. The LSTM architecture, shown in Figure 5, consists of three different gates that control the flow of data into and out of the memory cell. ...
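For reference, the three gates mentioned here are usually written as the following standard LSTM update (a textbook formulation, not taken from the source article), where $\sigma$ is the logistic sigmoid and $\odot$ is element-wise multiplication:

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)} \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)} \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)} \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate state)} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(memory cell update)} \\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state)}
\end{aligned}
```

The additive update of the memory cell $c_t$ is what lets gradients propagate across long sequences without vanishing.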
Context 2
... radar signal processing, especially human activity recognition, RNNs can exploit the radar signal's temporal and spatial correlation characteristics, which are vital in human activity recognition [38]. As shown in Figure 5, the LSTM has a chain-like structure of repeating neural network blocks (yellow rectangles), whose internal structure differs from that of a simple RNN. These neural network blocks are also called memory blocks or cells. ...
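A minimal NumPy sketch of this chain structure, assuming a single-layer LSTM unrolled over a sequence of radar feature vectors; the dimensions and the stacked-weight layout are illustrative choices, not from the source:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM block: gates control what the memory cell stores."""
    z = W @ x_t + U @ h_prev + b            # stacked pre-activations, shape (4h,)
    h = h_prev.shape[0]
    f = sigmoid(z[0:h])                     # forget gate
    i = sigmoid(z[h:2*h])                   # input gate
    o = sigmoid(z[2*h:3*h])                 # output gate
    g = np.tanh(z[3*h:4*h])                 # candidate cell state
    c = f * c_prev + i * g                  # additive memory update
    return o * np.tanh(c), c

# Unroll the chain over T time steps of d-dimensional radar features.
d, n_hidden, T = 16, 32, 100
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (4 * n_hidden, d))
U = rng.normal(0, 0.1, (4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)
h_t, c_t = np.zeros(n_hidden), np.zeros(n_hidden)
for x_t in rng.normal(size=(T, d)):         # e.g., one spectrogram column per step
    h_t, c_t = lstm_step(x_t, h_t, c_t, W, U, b)
```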

Similar publications

Preprint
Full-text available
In the past few years, we have witnessed rapid development of autonomous driving. However, achieving full autonomy remains a daunting task due to the complex and dynamic driving environment. As a result, self-driving cars are equipped with a suite of sensors to conduct robust and accurate environment perception. As the number and type of sensors ke...
Preprint
Full-text available
Event-based cameras have become increasingly popular for tracking fast-moving objects due to their high temporal resolution, low latency, and high dynamic range. In this paper, we propose a novel algorithm for tracking event blobs using raw events asynchronously in real time. We introduce the concept of an event blob as a spatio-temporal likelihood...
Conference Paper
Full-text available
Existing research on autonomous driving primarily focuses on urban driving, which is insufficient for characterising the complex driving behaviour underlying high-speed racing. At the same time, existing racing simulation frameworks struggle to capture realism with respect to visual rendering, vehicular dynamics, and task objectives, inhibiting...
Conference Paper
Full-text available
This paper mainly discusses the problem of next-scan view planning for autonomous scanning with 3D cameras in unknown environments. Based on the fact that the weights of voxels adjacent to boundary holes equal zero in the truncated signed distance function (TSDF) model, a boundary extraction algorithm based on the TSDF volume is first proposed. The boundary...
Article
Full-text available
The development of autonomous cars is accelerating, along with the need for autonomous cars that are safe and comfortable. Their development cannot be separated from the use of deep learning to determine the steering angle of an autonomous car according to the road conditions it faces. In the research, a Vision Transformer (ViT) mode...

Citations

... The primary difficulty lies in resolving the substantial Doppler shifts caused by the rapid motion of these objects, leading to inaccuracies in both detection and tracking. Traditional radar systems, which are typically ground-based and stationary, struggle to effectively capture the complex dynamics of fast-moving targets due to their limited frequency resolution and inability to account for the unique micro-Doppler signatures associated with these objects [4][5][6][7]. Prior works have also examined the detection and tracking of small UAVs, highlighting the challenges associated with low-RCS targets. These studies underscore the growing interest in developing advanced radar systems capable of detecting and characterizing high-speed, low-RCS targets for a wide range of non-military applications. ...
Article
Full-text available
In this study, we present a novel approach for the real-time detection of high-speed moving objects with rapidly changing velocities using a high-resolution millimeter-wave (MMW) radar operating at 94 GHz in the W-band. Our detection methodology leverages continuous wave transmission and heterodyning of the reflected signal from the moving target, enabling the extraction of motion-related attributes such as velocity, position, and physical characteristics of the object. The use of a 94 GHz carrier frequency allows for high-resolution velocity detection with a velocity resolution of 6.38 m/s, achieved using a short integration time of 0.25 ms. This high-frequency operation also results in minimal atmospheric absorption, further enhancing the efficiency and effectiveness of the detection process. The proposed system utilizes cost-effective and less complex equipment, including compact antennas, made possible by the low sampling rate required for processing the intermediate frequency signal. The experimental results demonstrate the successful detection and characterization of high-speed moving objects with high acceleration rates, highlighting the potential of this approach for various scientific, industrial, and safety applications, particularly those involving targets with rapidly changing velocities. The detailed analysis of the micro-Doppler signatures associated with these objects provides valuable insights into their unique motion dynamics, paving the way for improved tracking and classification algorithms in fields such as aerospace research, meteorology, and collision avoidance systems.
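The quoted velocity resolution follows directly from the standard Doppler relation for a coherent integration time $T_{\mathrm{int}}$ (assuming the conventional $\Delta v = \lambda/(2T_{\mathrm{int}})$ definition, which matches the abstract's numbers):

```latex
\Delta v = \frac{\lambda}{2\,T_{\mathrm{int}}}
         = \frac{c / f_c}{2\,T_{\mathrm{int}}}
         = \frac{(3\times10^{8}\,\mathrm{m/s}) / (94\,\mathrm{GHz})}{2 \times 0.25\,\mathrm{ms}}
         \approx \frac{3.19\,\mathrm{mm}}{0.5\,\mathrm{ms}}
         \approx 6.38\,\mathrm{m/s}
```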
... There has been a plethora of work carried out in the domain of machine learning/deep learning (ML/DL) using radar sensors for automotive applications. Surveys in [8] and [68,69] show the potential research directions and experimental research conducted. ML/DL are traditionally employed in object detection and classification within ADASs. ...
Article
Full-text available
Precise altitude data are indispensable for flight navigation, particularly during the autonomous landing of unmanned aerial systems (UASs). Conventional light and barometric sensors employed for altitude estimation are limited by poor visibility and temperature conditions, respectively, whilst global positioning system (GPS) receivers provide the altitude from the mean sea level (MSL) marred by a slow update rate. To cater to landing safety requirements, UASs necessitate precise altitude information above ground level (AGL) impervious to environmental conditions. Radar altimeters, a mainstay in commercial aviation for at least half a century, realize these requirements through minimum operational performance standards (MOPSs). More recently, the proliferation of 5G technology and interference with the universally allocated band for radar altimeters from 4.2 to 4.4 GHz underscores the necessity to explore novel avenues. Notably, there is no dedicated MOPS tailored for radar altimeters of UASs. To gauge the performance of a radar altimeter offering for UASs, existing MOPSs are the de facto choice. Historically, frequency-modulated continuous wave (FMCW) radars have been extensively used in a broad spectrum of ranging applications, including radar altimeters. Modern monolithic millimeter wave (mmWave) automotive radars, albeit designed for automotive applications, also employ FMCW for precise ranging with a cost-effective and compact footprint. Given the technology's maturation with excellent size, weight, and power (SWaP) metrics, there is a growing trend in industry and academia to explore its efficacy beyond the realm of the automotive industry. To this end, its feasibility for UAS altimetry remains largely untapped. While theoretical discourse is prevalent in the literature, a specific focus on mmWave radar altimetry is lacking. Moreover, clutter estimation with hardware specifications of a pure look-down mmWave radar is unreported. This article argues for adapting the MOPSs of commercial aviation to a UAS use case. The work is a tutorial based on a simplified mathematical and theoretical discussion of performance metrics and their inherent intricacies. A systems engineering approach for deriving waveform specifications from the operational requirements of a UAS is offered. Lastly, proposed future research directions and insights are included.
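For context, the FMCW ranging the abstract refers to reduces to a linear beat-frequency-to-range mapping. A minimal sketch follows; the 77 GHz automotive-style chirp parameters are assumed for illustration and are not taken from the article:

```python
C = 3.0e8                         # speed of light, m/s

def fmcw_range(f_beat_hz, bandwidth_hz, chirp_time_s):
    """Classic FMCW relation: R = c * f_b * T_c / (2 * B)."""
    return C * f_beat_hz * chirp_time_s / (2.0 * bandwidth_hz)

# Hypothetical chirp: 1 GHz sweep over 50 microseconds.
B, Tc = 1.0e9, 50e-6
print(fmcw_range(400e3, B, Tc))    # a 400 kHz beat maps to 3.0 m AGL
```

Because the beat frequency, not the carrier, must be sampled, a modest ADC rate suffices, which is what keeps such altimeter front-ends cost-effective.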
... Extensive literature on deep learning-based ROD has been published [19]. Notably, several publicly available mmWave radar point cloud datasets have been introduced, encompassing View-of-Delft (VoD) [50], K-Radar [51], TJ4DRadSet [52], and RadarScenes [54]. ...
Article
With the advancement of environmental perception technology, millimeter-wave (mmWave) radar is emerging as a predominant sensor. While deep learning has facilitated the development of mmWave radar object detection (ROD) techniques, mmWave ROD suffers from a shortage of datasets, because the annotation of mmWave data is inherently more complex. Motivated by masked image modeling (MIM), this paper proposes a novel pretraining method for ROD to address the limitations posed by datasets. The study conducts masking operations on mmWave radar images from both spatial and temporal perspectives, followed by a straightforward image-reconstruction proxy task. To the best of our knowledge, our method represents the inaugural application of the MIM self-supervision method to ROD tasks. Additionally, we designed a lightweight self-supervised ROD network (SS-RODNet). Numerous ablation experiments have demonstrated the effectiveness of the proposed method. The pretrained SS-RODNet attains results comparable to the state-of-the-art (SOTA) on the CRUW and CARRADA datasets with fewer parameters and floating-point operations (FLOPs).
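A minimal NumPy sketch of the kind of spatio-temporal masking described; the patch size and mask ratios below are assumptions for illustration and may differ from the paper's actual scheme:

```python
import numpy as np

def mask_radar_clip(clip, patch=8, p_space=0.5, p_time=0.25, seed=0):
    """Zero out random spatial patches and whole frames of a radar clip.

    clip: (T, H, W) stack of radar images; returns masked copy + masks."""
    rng = np.random.default_rng(seed)
    T, H, W = clip.shape
    out = clip.copy()
    # Temporal masking: drop whole frames at random.
    t_mask = rng.random(T) < p_time
    out[t_mask] = 0.0
    # Spatial masking: drop patch-aligned blocks within each frame.
    gh, gw = H // patch, W // patch
    s_mask = rng.random((T, gh, gw)) < p_space
    s_full = np.kron(s_mask.astype(np.uint8),
                     np.ones((1, patch, patch), dtype=np.uint8)).astype(bool)
    out[s_full] = 0.0
    return out, t_mask, s_full

clip = np.random.rand(8, 64, 64).astype(np.float32)
masked, t_mask, s_mask = mask_radar_clip(clip)
# A reconstruction network would then be trained to predict `clip` from `masked`.
```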
... Comparisons and discussions are conducted on the measured data. This paper [34] presents a review of various deep-learning approaches for radar to accomplish some important tasks in an autonomous driving application, such as detection and classification. The authors have classified the papers according to the different methods and techniques used. ...
Article
Full-text available
Frequency modulated continuous wave (FMCW) radar is increasingly used for various detection and classification applications in different fields, such as autonomous vehicles and mining. Our objective is to increase the classification accuracy of objects detected using millimeter-wave radar. The proposed solution combines an FMCW radar, a YOLOv7 model, and the Pix2Pix architecture; the latter is used to reduce noise in the heatmaps. We created a dataset of 4125 heatmaps annotated with five different object classes. To evaluate the proposed approach, 14 different models were trained using the annotated heatmap dataset. In the initial experiment, we compared the models using metrics such as mean average precision (mAP), precision, and recall. The results showed that the proposed YOLOv7 model (YOLOv7-PM) was the most efficient in terms of mAP_0.5, which reached 90.1%, and achieved a mAP_0.5:0.95 of 49.51%. In the second experiment, we compared the models on a cleaned dataset generated using the Pix2Pix architecture. As a result, we observed improved performance, with the Pix2Pix + YOLOv7-PM model achieving the best mAP_0.5, reaching 91.82%, and a mAP_0.5:0.95 of 52.59%.
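The described pipeline (denoise the radar heatmap, then detect) can be summarized as a short hedged sketch; `pix2pix_g` and `yolov7_pm` are hypothetical handles to trained models, not the authors' code:

```python
import torch

@torch.no_grad()
def detect_on_heatmap(heatmap, pix2pix_g, yolov7_pm):
    """heatmap: (1, C, H, W) radar heatmap tensor; models in eval mode."""
    clean = pix2pix_g(heatmap)    # Pix2Pix generator suppresses heatmap noise
    return yolov7_pm(clean)       # YOLOv7-style detector on the cleaned map
```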
... Classification of human activities using radar has recently experienced an influx of deep learning models due to their predictive power and ability to automatically learn relevant discriminant features from radar measurements [12][13][14][15][16][17][18]. In particular, convolutional neural networks (CNNs) are being extensively used for learning spatial hierarchies of features from micro-Doppler signatures of human activities [19][20][21][22][23][24][25][26]. ...
... where $X_{W,c_i} \in \mathbb{R}^{d \times m_i}$ denotes the activations corresponding to class $c_i$ after whitening with $W$, and $m_i$ is the number of samples for class $c_i$. The problem in (16) with the orthogonality constraint can be solved via gradient-based approaches on the Stiefel manifold [31,34]. ...
Article
Full-text available
Deep learning architectures are being increasingly adopted for human activity recognition using radar technology. A majority of these architectures are based on convolutional neural networks (CNNs) and accept radar micro-Doppler signatures as input. The state-of-the-art CNN-based models employ batch normalization (BN) to optimize network training and improve generalization. In this paper, we present whitening-aided CNN models for classifying human activities with radar sensors. We replace BN layers in a CNN model with whitening layers, which is shown to improve the model’s accuracy by not only centering and scaling activations, similar to BN, but also decorrelating them. We also exploit the rotational freedom afforded by whitening matrices to align the whitened activations in the latent space with the corresponding activity classes. Using real data measurements of six different activities, we show that whitening provides superior performance over BN in terms of classification accuracy for a CNN-based classifier. This demonstrates the potential of whitening-aided CNN models to provide enhanced human activity recognition with radar sensors.
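For intuition, whitening differs from batch normalization by also decorrelating the feature channels. A minimal ZCA whitening sketch over a batch of activations follows; it is illustrative only, not the paper's exact layer:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """Whiten X (d channels x m samples): zero mean, ~identity covariance."""
    Xc = X - X.mean(axis=1, keepdims=True)         # center (as BN does)
    cov = Xc @ Xc.T / Xc.shape[1]                  # d x d channel covariance
    eigval, eigvec = np.linalg.eigh(cov)
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return W @ Xc, W                               # decorrelated activations

X = np.random.rand(32, 256)                        # 32 channels, 256 samples
Xw, W = zca_whiten(X)
print(np.allclose(Xw @ Xw.T / Xw.shape[1], np.eye(32), atol=1e-3))  # ~identity
```

The rotational freedom the abstract mentions corresponds to replacing $W$ with $QW$ for any orthogonal $Q$: the result is still a valid whitening matrix, which is what allows the whitened activations to be rotated toward the activity classes.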
... It includes sensing techniques based on multiple wireless signals, such as WiFi, mmWave, and RFID, but its analysis of mmWave sensing is limited. Abdu et al. [13] wrote a survey of deep learning approaches for processing mmWave radar signals in autonomous driving applications, but their focus is on deep learning rather than on mmWave sensing. Busari et al. [8] summarized the research on mmWave massive MIMO communication and discussed research issues and future directions for mmWave massive MIMO. ...
... There are also some related surveys that provide an analysis of particular narrow scopes, such as application scenarios [10], [11] and sensing techniques [13], [14]. We summarize these surveys in Table I. ...
... Specifically, Wang et al. [15] provided a review of cross-sensing techniques for wearable applications and summarized the applied signal processing and machine learning algorithms. Abdu et al. [13] presented a survey of deep learning approaches that process radar signals to accomplish sensing tasks in autonomous driving applications. Shastri et al. [12] focused on indoor mmWave device-based localization and device-free sensing, and provided a review of indoor localization approaches, technologies, schemes, and algorithms. ...
Preprint
Full-text available
With the rapid development of the Internet of Things (IoT) and the rise of 5G communication networks and automatic driving, millimeter wave (mmWave) sensing is emerging and starting to impact our lives and workspaces. mmWave sensing can sense humans and objects in a contactless way, providing fine-grained sensing ability. In the past few years, many mmWave sensing techniques have been proposed and applied in various human sensing applications (e.g., human localization, gesture recognition, and vital sign monitoring). We identify the need for a comprehensive survey to summarize the technology, platforms, and applications of mmWave-based human sensing. In this survey, we first present the mmWave hardware platforms and some key techniques of mmWave sensing. We then provide a comprehensive review of existing mmWave-based human sensing works. Specifically, we divide existing works into four categories according to sensing granularity: human tracking and localization, motion recognition, biometric measurement, and human imaging. Finally, we discuss the potential research challenges and present future directions in this area.
... In addition, studies applying deep learning in the field of radar signal processing are being actively conducted [24]. In automotive radar signal processing, deep learning is mainly applied for target detection or classification [25][26][27][28][29]. ...
Article
Full-text available
The reliability and safety of advanced driver assistance systems and autonomous vehicles are highly dependent on the accuracy of automotive sensors such as radar, lidar, and camera. However, these sensors can become misaligned relative to their initial installation state due to external shocks, which can degrade their performance. In the case of the radar sensor, when the mounting angle is distorted and the sensor tilts toward the ground or sky, the sensing performance deteriorates significantly. Therefore, to guarantee stable detection performance of the sensors and driver safety, a method for determining the misalignment of these sensors is required. In this paper, we propose a method for estimating the vertical tilt angle of the radar sensor using a deep neural network (DNN) classifier. Using the proposed method, the mounting state of the radar can be easily estimated without physically removing the bumper. First, to identify the characteristics of the received signal according to the radar misalignment states, radar data are obtained at various tilt angles and distances. Then, we extract range profiles from the received signals and design a DNN-based estimator using the profiles as input. The proposed angle estimator determines the tilt angle of the radar sensor regardless of the measured distance. The average estimation accuracy of the proposed DNN-based classifier is over 99.08%. Therefore, through the proposed method of indirectly determining the radar misalignment, maintenance of the vehicle radar sensor can be easily performed.
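A minimal sketch of a DNN classifier that maps a range profile to a discrete tilt-angle class; the profile length, layer sizes, and class count are assumptions for illustration, not the paper's specification:

```python
import torch
import torch.nn as nn

N_BINS, N_ANGLES = 256, 7          # assumed range-profile length / tilt classes

class TiltClassifier(nn.Module):
    """Maps one range profile (FFT magnitudes) to a tilt-angle class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_BINS, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, N_ANGLES),
        )

    def forward(self, x):
        return self.net(x)

model = TiltClassifier()
profile = torch.rand(1, N_BINS)              # one chirp's range profile
pred = model(profile).argmax(dim=1)          # predicted tilt-angle class
```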
... Additionally, the use of deep learning and artificial intelligence has become widespread in various fields, including target classification research based on millimeter-wave radar [15,16]. While such methods are effective at short range, where millimeter-wave radar point clouds are dense [17][18][19], they are less effective on sparse, long-range point clouds. As a result, there is currently a lack of long-range, high-resolution millimeter-wave radars for rail transit applications, along with the associated algorithms for rail transit boundary fitting and boundary-based target detection. ...
Article
Full-text available
This article introduces a long-range sensing system based on millimeter-wave radar, which is used to detect roadside boundaries and track trains. Due to the high speed and long braking distance of trains, existing commercial vehicle sensing solutions cannot meet their needs for long-range target detection. To address this challenge, this study proposes a long-range perception system for detecting road boundaries and trains based on millimeter-wave radar. The system uses a high-resolution, long-range millimeter-wave radar customized for the strong scattering environment of rail transit. First, we established a multipath scattering theory for complex scenes such as track tunnels and fences and used the azimuth scattering characteristics to eliminate false detections. An accurate method for calculating the train's ego-velocity is proposed, which divides the radar detection point clouds into static and dynamic target point clouds based on the ego-velocity of the train. We then used the road boundary curvature, global geometric parallel information, and multi-frame information fusion to stably extract and fit the boundary from the static target points. Finally, we performed clustering and shape estimation on the radar track information to identify the train and judge the collision risk based on the position and speed of the detected train and the extracted boundary information. The paper makes a significant contribution by establishing a multipath scattering theory for complex rail transit scenes to eliminate radar false detections, and by proposing a train speed estimation strategy and a road boundary feature point extraction method adapted to the rail environment. A perception system was also built and installed on a train for verification; the main-line test results showed that the system can reliably detect the road boundary more than 400 m ahead of the train and can stably detect and track trains.
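The static/dynamic split described here rests on a standard geometric fact: for a platform moving forward at speed v_ego, a stationary reflector at azimuth theta returns a radial velocity of roughly -v_ego*cos(theta). A minimal least-squares sketch of ego-speed estimation under that assumption (the iteration scheme and threshold are illustrative, not the paper's algorithm):

```python
import numpy as np

def estimate_ego_velocity(azimuth_rad, radial_vel, thresh=0.5, iters=3):
    """Ego-speed from radar points, assuming static points satisfy
    v_r = -v_ego * cos(azimuth). Points violating the fit (dynamic
    targets, e.g. another train) are dropped on each iteration."""
    keep = np.ones_like(radial_vel, dtype=bool)
    v_ego = 0.0
    for _ in range(iters):
        a = -np.cos(azimuth_rad[keep])
        v_ego = (a @ radial_vel[keep]) / (a @ a)        # 1-D least squares
        keep = np.abs(radial_vel + v_ego * np.cos(azimuth_rad)) < thresh
    return v_ego, keep                                   # speed + static mask

az = np.deg2rad(np.random.uniform(-40, 40, 200))
vr = -30.0 * np.cos(az) + np.random.normal(0, 0.1, 200)  # 30 m/s ego-speed
print(estimate_ego_velocity(az, vr)[0])                  # ~30.0
```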
... AVs are often outfitted with numerous sensors that provide complementary information, which helps to attain the necessary accuracy when combined. Multisensor fusion combines data from numerous sensors to achieve higher object detection/classification accuracy and performance than a single-sensor system can provide [11]. Therefore, an essential subject for AVs is the combination of radar with other sensors, such as cameras. ...
... Machine vision in fog can fall as low as 1000 m in moderate fog and as low as 50 m in heavy fog [9,10]. Camera sensors are one of the significant sensors used for object detection because of their low cost and the large number of features they provide [11]. In fog, the camera's performance is limited due to visibility degradation. ...
... In fog, lidar undergoes reflectivity degradation and a reduction in measured distance. However, radars tend to perform better than cameras and lidars in adverse weather, since radars are largely unaffected by changes in environmental conditions [11,12]. Radars determine the distance and velocity of objects by monitoring reflected radio waves, using the round-trip delay for range and the Doppler effect for velocity. ...
Article
Full-text available
AVs suffer reduced maneuverability and performance due to the degradation of sensor performance in fog. Such degradation can cause significant object detection errors in AVs' safety-critical conditions. For instance, YOLOv5 performs well under favorable weather but is affected by mis-detections and false positives due to atmospheric scattering caused by fog particles. Existing deep object detection techniques often exhibit a high degree of accuracy, but their drawback is sluggish object detection in fog. Object detection methods with fast detection speeds have been obtained using deep learning at the expense of accuracy, and the lack of balance between detection speed and accuracy in fog persists. This paper presents an improved YOLOv5-based multi-sensor fusion network that combines radar object detection with a camera image bounding box. We transformed radar detections by mapping them into two-dimensional image coordinates and projected the resultant radar image onto the camera image. Using the attention mechanism, we emphasized and improved the important feature representation used for object detection while reducing high-level feature information loss. We trained and tested our multi-sensor fusion network on clear and multi-fog weather datasets obtained from the CARLA simulator. Our results show that the proposed method significantly enhances the detection of small and distant objects. Our small CR-YOLOnet model best strikes a balance between accuracy and speed, with an accuracy of 0.849 at 69 fps.
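The radar-to-camera mapping the authors describe amounts to a standard pinhole projection. A minimal sketch follows; the intrinsic matrix and radar-to-camera extrinsics below are assumed values for illustration, not calibration data from the paper:

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],    # assumed camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                           # assumed radar->camera rotation
t = np.array([0.0, 0.2, 0.0])           # assumed radar->camera translation, m

def radar_to_pixel(p_radar):
    """Project one 3-D radar detection (x, y, z in meters) to image (u, v)."""
    p_cam = R @ p_radar + t             # radar frame -> camera frame
    uvw = K @ p_cam                     # pinhole projection
    return uvw[:2] / uvw[2]             # normalize by depth

print(radar_to_pixel(np.array([1.0, 0.0, 20.0])))   # a point 20 m ahead
```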
... As a result, current radar models are often based on lidar and camera detection models [17]-[20]. Such models are easily implemented after formatting radar data to match the required input format, such as a pseudo-image [17]. Additionally, perception has conventionally followed a late-fusion-based multi-object tracking paradigm [21] that needs full object detection from each sensor model. ...
Preprint
Full-text available
Radar is a key component of the suite of perception sensors used for safe and reliable navigation of autonomous vehicles. Its unique capabilities include high-resolution velocity imaging, detection of agents in occlusion and over long ranges, and robust performance in adverse weather conditions. However, the usage of radar data presents some challenges: it is characterized by low resolution, sparsity, clutter, high uncertainty, and lack of good datasets. These challenges have limited radar deep learning research. As a result, current radar models are often influenced by lidar and vision models, which are focused on optical features that are relatively weak in radar data, thus resulting in under-utilization of radar's capabilities and diminishing its contribution to autonomous perception. This review seeks to encourage further deep learning research on autonomous radar data by 1) identifying key research themes, and 2) offering a comprehensive overview of current opportunities and challenges in the field. Topics covered include early and late fusion, occupancy flow estimation, uncertainty modeling, and multipath detection. The paper also discusses radar fundamentals and data representation, presents a curated list of recent radar datasets, and reviews state-of-the-art lidar and vision models relevant for radar research. For a summary of the paper and more results, visit the website: autonomous-radars.github.io.