Lulu Liu’s research while affiliated with Xi’an Jiaotong-Liverpool University and other places

Publications (4)


Mask-VRDet: A robust riverway panoptic perception model based on dual graph fusion of vision and 4D mmWave radar
  • Article

November 2023 · 61 Reads · 7 Citations · Robotics and Autonomous Systems

Lulu Liu · [...] · Yutao Yue

With the development of Unmanned Surface Vehicles (USVs), perception of inland waterways has become significant for autonomous navigation. RGB cameras capture images with rich semantic features, but they fail in adverse weather and at night. 4D millimeter-wave radar (4D mmWave radar), a perception sensor that has emerged in recent years, works in all weather and provides richer point-cloud features than ordinary radar, but it suffers severely from water-surface clutter. Furthermore, the shape and outline of the dense point clouds captured by 4D mmWave radar are irregular. CNN-based neural networks treat features as 2D rectangular grids, which excessively favors the image modality and is unfriendly to the radar modality. Therefore, we transform both image and radar features into non-Euclidean space as graph structures. In this paper, we focus on robust panoptic perception in inland waterways. Firstly, we propose the first Clutter-Point-Removal (CPR) algorithm for 4D mmWave radar, removing water-surface clutter and improving the recall of radar targets. Secondly, we propose a high-performance panoptic perception model based on graph neural networks, called Mask-VRDet, which fuses vision and radar features to perform object detection and semantic segmentation simultaneously. To the best of our knowledge, Mask-VRDet is the first riverway panoptic perception model based on vision-radar graphical fusion. It outperforms other single-modal and fusion models, achieving state-of-the-art performance on our collected dataset. We release our code at https://github.com/GuanRunwei/Mask-VRDet-Official.
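The abstract names a Clutter-Point-Removal (CPR) step but does not describe its mechanics. As a rough illustration only, the sketch below shows one plausible heuristic for filtering water-surface clutter from a 4D radar point cloud: gating on height and radar cross-section, then keeping only density-consistent clusters. The thresholds, the (x, y, z, doppler, rcs) column layout, and the use of DBSCAN are all assumptions, not the paper's method.

```python
# Hypothetical clutter-point-removal sketch for 4D mmWave radar point
# clouds. NOT the paper's CPR algorithm; an illustrative heuristic only.
import numpy as np
from sklearn.cluster import DBSCAN

def remove_water_clutter(points, z_min=-0.2, rcs_min=-10.0,
                         eps=1.5, min_samples=3):
    """points: (N, 5) array of [x, y, z, doppler, rcs] detections."""
    z, rcs = points[:, 2], points[:, 4]
    # Gate: water-surface clutter tends to sit at low height with weak RCS.
    candidates = points[~((z < z_min) & (rcs < rcs_min))]
    if len(candidates) == 0:
        return candidates
    # Density filter: isolated returns are likely residual clutter.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        candidates[:, :3])
    return candidates[labels != -1]

if __name__ == "__main__":
    pts = np.random.randn(200, 5)
    print(remove_water_clutter(pts).shape)
```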


Figures (from the preprint below):
Fig. 1. Illustration of the FMCW radar principle.
Fig. 4. A batch of three range-Doppler maps with interference at three temporally adjacent frames. Frame t_gt is the ground truth at frame t, i.e., the original range-Doppler map without interference.
Fig. 5. Comparison between the units of CAE and Radar-STDA.
Fig. 8. Three example images corresponding to radar signals from the RaDICaL dataset.
Fig. 10. Denoising results of Radar-STDA, visualized with OpenCV's imshow(). The first three columns show the range-Doppler maps at frames T, T−1 and T−2 with interference; the fourth column shows the ground-truth range-Doppler map without interference; the last column shows the maps denoised by Radar-STDA.
Radar-STDA: A High-Performance Spatial-Temporal Denoising Autoencoder for Interference Mitigation of FMCW Radars
  • Preprint
  • File available

July 2023 · 150 Reads
Compared with other traffic sensors, millimeter-wave radar offers small size, low cost and all-weather operation, and can accurately measure the distance, azimuth and radial velocity of a target. In practice, however, millimeter-wave radars are plagued by various interferences, leading to a drop in target detection accuracy or even failure to detect targets. This is undesirable in autonomous vehicles and traffic surveillance, as it is likely to threaten human life and cause property damage. Therefore, interference mitigation is of great significance for millimeter-wave radar-based target detection. Deep learning is developing rapidly, but existing deep learning-based interference mitigation models still have significant limitations in model size and inference speed. For these reasons, we propose Radar-STDA, a Radar Spatial-Temporal Denoising Autoencoder. Radar-STDA is an efficient nano-level denoising autoencoder that takes into account both spatial and temporal information of range-Doppler maps. Among the compared methods, it achieves the highest SINR of 17.08 dB with only 140,000 parameters. It reaches 207.6 FPS on an RTX A4000 GPU and 56.8 FPS on an NVIDIA Jetson AGX Xavier when denoising range-Doppler maps for three consecutive frames. Moreover, we release a synthetic dataset called Ra-inf for the task, which comprises 384,769 range-Doppler maps with various clutter from objects of no interest and receiver noise in realistic scenarios. To the best of our knowledge, Ra-inf is the first synthetic dataset of radar interference. To support the community, our research is open-source at https://github.com/GuanRunwei/rd_map_temporal_spatial_denoising_autoencoder.
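For intuition, here is a minimal PyTorch sketch of a spatio-temporal denoising autoencoder in the spirit described above: three consecutive range-Doppler frames are stacked as input channels and the network regresses the clean frame at time t. The layer widths and kernel sizes are illustrative assumptions, not the published Radar-STDA architecture.

```python
# Minimal spatio-temporal denoising-autoencoder sketch (assumed layout):
# input (B, 3, H, W) = frames t-2, t-1, t; output (B, 1, H, W) = clean t.
import torch
import torch.nn as nn

class STDenoisingAE(nn.Module):
    def __init__(self, in_frames=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_frames, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = STDenoisingAE()
frames = torch.randn(2, 3, 128, 128)        # interfered frames t-2, t-1, t
denoised = model(frames)                    # (2, 1, 128, 128) estimate of clean t
loss = nn.functional.mse_loss(denoised, torch.randn(2, 1, 128, 128))
```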



Towards Deep Radar Perception for Autonomous Driving: Datasets, Methods, and Challenges

May 2022 · 786 Reads · 126 Citations

With recent developments, the performance of automotive radar has improved significantly. The next generation of 4D radar can achieve imaging capability in the form of high-resolution point clouds. In this context, we believe that the era of deep learning for radar perception has arrived. However, studies on radar deep learning are spread across different tasks, and a holistic overview is lacking. This review paper attempts to provide a big picture of the deep radar perception stack, including signal processing, datasets, labelling, data augmentation, and downstream tasks such as depth and velocity estimation, object detection, and sensor fusion. For these tasks, we focus on explaining how the network structure is adapted to radar domain knowledge. In particular, we summarise three overlooked challenges in deep radar perception, including multi-path effects, uncertainty problems, and adverse weather effects, and present some attempts to solve them.
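As a small worked example of the signal-processing layer this review covers, the snippet below converts range-Doppler map indices into physical range and radial velocity using the standard FMCW resolution formulas (range resolution c/2B, velocity resolution λ/2NTc). The radar parameters are made-up illustrative values, not taken from the paper.

```python
# Standard FMCW range-Doppler bookkeeping; parameter values below are
# illustrative, not from any specific sensor in the review.
c = 3e8            # speed of light (m/s)
bandwidth = 1e9    # chirp sweep bandwidth B (Hz)
fc = 77e9          # carrier frequency (Hz)
n_chirps = 128     # chirps per frame N
t_chirp = 50e-6    # chirp duration Tc (s)

wavelength = c / fc
range_res = c / (2 * bandwidth)                  # dR = c / (2B)
vel_res = wavelength / (2 * n_chirps * t_chirp)  # dv = lambda / (2 N Tc)

def bin_to_physical(range_bin, doppler_bin):
    """Map range-Doppler indices to metres and m/s
    (Doppler bins assumed centred at n_chirps // 2)."""
    rng = range_bin * range_res
    vel = (doppler_bin - n_chirps // 2) * vel_res
    return rng, vel

print(bin_to_physical(40, 70))  # -> (6.0 m, ~1.83 m/s)
```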

Citations (3)


... Currently, multi-modal sensor fusion is recognized as a predominant perception method in autonomous driving [1] and intelligent transportation systems [2]. Among these methods, the fusion of 4D millimeter-wave radar (4D radar) and cameras offers a low-cost, complementary, and robust perception approach [3][4][5], especially for Unmanned Surface Vehicle (USV)-based waterway perception under adverse conditions where cameras may temporarily fail [6,7]. The challenges of USV-based real-time waterway monitoring are compounded by the irregularity of channel zones, variable lighting conditions, and erratic vessel movement [8]. ...

Reference:

NanoMVG: USV-Centric Low-Power Multi-Task Visual Grounding based on Prompt-Guided Camera and 4D mmWave Radar
Mask-VRDet: A robust riverway panoptic perception model based on dual graph fusion of vision and 4D mmWave radar
  • Citing Article
  • November 2023

Robotics and Autonomous Systems

... Luo et al. [19] proposed a sparse MIMO radar 3D ghost-target identification method based on the difference between signal departure and arrival directions. Liu et al. [20] proposed a novel network architecture based on PointNet++ for ghost detection. Feng et al. [21] proposed an indoor 3D point-cloud ghost recognition method based on the Hough transform, exploiting the linear relationship between targets and ghosts in the range-Doppler map, and in [22] proposed another method for joint ghost-target elimination and wall estimation for co-located MIMO radar based on robust multipath identification and mitigation. However, existing ghost-elimination methods may impact the imaging quality of other objects while removing ghosts, and they face challenges in effectively eliminating ghost targets generated by secondary or multiple reflections. ...

Clutter Detection in Automotive Radar Point Clouds Based on Deep Learning with Self-attention
  • Citing Conference Paper
  • July 2023

... The input to our scene flow branch consists of two point clouds P ∈ ℝ^(N×(3+2)) and Q ∈ ℝ^(M×(3+2)), with 5D initial features: x, y, z coordinates plus RRV (relative radial velocity) and RCS (radar cross-section) [42]. The multi-scale point encoder [30] is first applied to the two point clouds for point feature extraction. ...

Towards Deep Radar Perception for Autonomous Driving: Datasets, Methods, and Challenges
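To make the 5D input layout from the last snippet concrete, here is a tiny NumPy sketch that assembles per-point (x, y, z, RRV, RCS) feature matrices; the function name and array shapes are illustrative assumptions, not code from the cited work.

```python
# Assemble per-point 5D features: xyz positions plus RRV and RCS.
import numpy as np

def build_point_features(xyz, rrv, rcs):
    """xyz: (N, 3); rrv: (N,) relative radial velocity;
    rcs: (N,) radar cross-section -> (N, 5) feature matrix."""
    return np.concatenate([xyz, rrv[:, None], rcs[:, None]], axis=1)

# Two clouds of different sizes, matching P in R^(N x 5), Q in R^(M x 5).
P = build_point_features(np.random.randn(100, 3),
                         np.random.randn(100), np.random.randn(100))
Q = build_point_features(np.random.randn(80, 3),
                         np.random.randn(80), np.random.randn(80))
print(P.shape, Q.shape)  # (100, 5) (80, 5)
```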