Featured research (9)

Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) plays a pivotal role in civilian and military applications. However, the scarcity of labeled samples poses a significant challenge for deep learning-based SAR ATR. Few-shot learning (FSL) offers a potential solution, but models trained with limited samples may assign high confidence to incorrect predictions, misleading decision-makers. To address this, we introduce uncertainty estimation into SAR ATR and propose Prior knowledge-guided Evidential Deep Learning (Prior-EDL) to ensure reliable recognition in FSL. Inspired by Bayesian principles, Prior-EDL leverages prior knowledge to improve both predictions and uncertainty estimation. We use a deep learning model pre-trained on simulated SAR data to discover category correlations and represent them as label distributions. This knowledge is then embedded into the target model via a Prior-EDL loss function, which uses the prior knowledge of samples selectively to account for the distribution shift between simulated and real data. To unify the discovery and embedding of prior knowledge, we propose a framework based on a teacher-student network. Our approach improves the model’s evidence assignment, and thereby its uncertainty estimation performance and target recognition accuracy. Extensive experiments on the MSTAR dataset demonstrate the effectiveness of Prior-EDL, which achieves recognition accuracies of 70.19% and 92.97% in 4-way 1-shot and 4-way 20-shot scenarios, respectively. For out-of-distribution data, Prior-EDL outperforms other uncertainty estimation methods. The code is available at https://github.com/Xiaoyan-Zhou/Prior-EDL/
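The evidential mechanism the abstract relies on can be illustrated with a small sketch: in evidential deep learning, the network outputs non-negative per-class evidence that parameterizes a Dirichlet distribution, from which belief masses and an explicit uncertainty mass follow. The function below is a generic subjective-logic formulation for illustration, not the Prior-EDL loss itself.

```python
import numpy as np

def edl_uncertainty(evidence):
    """Map non-negative per-class evidence to belief masses and an
    uncertainty mass, in the subjective-logic style used by evidential
    deep learning (evidence parameterizes a Dirichlet distribution)."""
    evidence = np.asarray(evidence, dtype=float)
    k = evidence.size                  # number of classes
    alpha = evidence + 1.0             # Dirichlet concentration parameters
    s = alpha.sum()                    # Dirichlet strength
    belief = evidence / s              # per-class belief mass
    uncertainty = k / s                # residual uncertainty mass
    prob = alpha / s                   # expected class probabilities
    return belief, uncertainty, prob

# Strong evidence for class 0 -> low uncertainty mass.
b, u, p = edl_uncertainty([40.0, 1.0, 1.0, 1.0])
# No evidence at all -> uncertainty mass is 1 (maximally uncertain).
b0, u0, p0 = edl_uncertainty([0.0, 0.0, 0.0, 0.0])
```

By construction the belief masses and the uncertainty mass sum to one, which is what lets a few-shot model say "I don't know" instead of forcing a confident class probability.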
Synthetic aperture radar (SAR) automatic target recognition (ATR) is extensively applied in both military and civilian sectors. In the open world, however, the distributions of test and training data may differ, so SAR out-of-distribution (OOD) detection is important: it enhances the reliability and adaptability of SAR systems. Most OOD detection models are based on maximum likelihood estimation (MLE) and overlook data uncertainty, leading to overconfident outputs for both in-distribution (ID) and OOD data. To address this issue, we account for the effect of data uncertainty on prediction probabilities by treating these probabilities as random variables and modeling them with a Dirichlet distribution. Building on this, we propose an evidential uncertainty-aware mean squared error (UMSE) loss function that guides the model to produce highly distinguishable outputs for ID and OOD data. Furthermore, to evaluate OOD detection performance comprehensively, we compiled and organized publicly available data into a new SAR OOD detection dataset named SAR-OOD. Experimental results on SAR-OOD demonstrate that the UMSE approach achieves state-of-the-art (SOTA) performance. The code and data are available at: https://github.com/Xiaoyan-Zhou/UMSE-SAR-OOD-Detection .
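To make the Dirichlet treatment of prediction probabilities concrete, the sketch below computes the expected squared error when the class probabilities are random variables under a Dirichlet: the loss decomposes into a squared-bias term plus a variance term that shrinks as evidence accumulates. This is the classic evidential MSE formulation from the literature; the paper's UMSE loss is a related but distinct design.

```python
import numpy as np

def evidential_mse(evidence, onehot):
    """E[(y - p)^2] with p ~ Dirichlet(evidence + 1), which expands to
    (y_k - E[p_k])^2 + Var[p_k], where Var[p_k] = p_k(1 - p_k)/(S + 1)."""
    alpha = np.asarray(evidence, dtype=float) + 1.0
    s = alpha.sum()                      # Dirichlet strength
    p = alpha / s                        # expected class probabilities
    onehot = np.asarray(onehot, dtype=float)
    bias = (onehot - p) ** 2             # squared error of the mean
    var = p * (1.0 - p) / (s + 1.0)      # Dirichlet variance of each p_k
    return float((bias + var).sum())

# Evidence concentrated on the true class yields a far lower loss
# than the same amount of evidence placed on a wrong class.
loss_good = evidential_mse([10.0, 0.0, 0.0], [1.0, 0.0, 0.0])
loss_bad = evidential_mse([0.0, 10.0, 0.0], [1.0, 0.0, 0.0])
```

Because the variance term vanishes only as total evidence grows, the model is rewarded for withholding evidence on inputs it cannot explain, which is the behavior an OOD detector needs.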
Multimodal change detection (MCD) is a topic of increasing interest in remote sensing. Because of their different imaging mechanisms, multimodal images cannot be compared directly to detect changes. In this article, we explore the topological structure of multimodal images and construct links between the class relationships (same/different) and change labels (changed/unchanged) of pairwise superpixels, which are invariant to the imaging modality. With these links, we formulate the MCD problem within a mathematical framework termed the locality-preserving energy model (LPEM), which maintains the local consistency constraints embedded in the links: structure consistency based on feature similarity and label consistency based on spatial continuity. Because the foundation of LPEM, i.e., the links, is intuitively explainable and universal, the proposed method is very robust across different MCD situations. Notably, LPEM is built directly on the label of each superpixel, so it is a paradigm that outputs the change map (CM) directly, without generating an intermediate difference image (DI) as most previous algorithms do. Experiments on different real datasets demonstrate the effectiveness of the proposed method. Source code of the proposed method is made available at https://github.com/yulisun/LPEM.
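As a rough illustration of the energy-model idea (not the authors' exact formulation), the toy function below scores a binary change labeling over superpixels: pairs whose feature similarity disagrees across the two modalities are pushed toward different change labels (structure consistency), while spatially adjacent superpixels are pushed toward the same label (label consistency). The function name, penalty form, and weight `beta` are all illustrative assumptions.

```python
import numpy as np

def lpem_energy(labels, sim_x, sim_y, adjacency, beta=0.1):
    """Toy locality-preserving energy for multimodal change detection.

    labels    : length-N sequence of 0/1 change labels per superpixel
    sim_x/y   : (N, N) pairwise feature similarities in each modality
    adjacency : list of (i, j) spatially adjacent superpixel pairs
    beta      : weight of the spatial label-consistency term
    """
    labels = np.asarray(labels)
    n = len(labels)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # structure consistency: cross-modal similarity mismatch
            disagree = abs(sim_x[i][j] - sim_y[i][j])
            if labels[i] == labels[j]:
                energy += disagree        # same label, but structure changed
            else:
                energy += 1.0 - disagree  # labels differ, but structure agrees
    for i, j in adjacency:
        # label consistency: neighbors prefer the same change label
        if labels[i] != labels[j]:
            energy += beta
    return energy

# Two superpixels that look alike in modality X but unlike in modality Y:
# labeling them differently (a change boundary) costs less energy.
sim_x = [[1.0, 0.9], [0.9, 1.0]]
sim_y = [[1.0, 0.1], [0.1, 1.0]]
e_split = lpem_energy([0, 1], sim_x, sim_y, adjacency=[(0, 1)])
e_same = lpem_energy([0, 0], sim_x, sim_y, adjacency=[(0, 1)])
```

Minimizing such an energy over the labels yields the change map directly, with no intermediate difference image, which is the design point the abstract emphasizes.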
Highly variable polarimetric signatures caused by complex structures in built-up areas make the interpretation of scattering behaviors intractable for PolSAR remote sensing. This paper proposes a fine polarimetric decomposition method and derives several products that finely model the scattering mechanisms of urban buildings, supporting effective surveillance. First, by theoretically establishing the roll-invariant condition for a completely general scatterer, a roll-invariant cross polarization (RICP) scattering model is constructed that characterizes cross-polarization scattering in terms of planar structure distribution. Second, by designing a root-discriminant-based parameter inversion strategy, a fine seven-component decomposition is proposed that achieves a complete physical interpretation of the matrix elements and a reasonable inversion of the model parameters. Third, by analyzing the difference between external and internal scattering, derivative products, i.e., scattering contribution synthesizers, are obtained for built-up area monitoring. Experimental results on real PolSAR data confirm the superiority and effectiveness of the constructed descriptors, and explicitly demonstrate the extensibility of fine polarimetric decomposition to specific remote sensing applications.
Machine learning methods for synthetic aperture radar (SAR) image automatic target recognition (ATR) can be divided into two main types: traditional methods and deep learning methods. Deep learning methods learn high-dimensional features of the target directly and usually achieve high recognition accuracy, but they do not fully consider the inherent characteristics of SAR targets, resulting in poor generalization and interpretability. Traditional methods, by contrast, yield more interpretable and stable results from model-based features. To combine the strengths of both, we propose a target part attention network based on the attributed scattering center (ASC) model, integrating electromagnetic characteristics with the deep learning framework. First, considering the importance of scattering structure for SAR ATR, we design a target part model based on the ASC model. Then, we propose a novel part attention module based on the Scaled Dot-Product Attention mechanism, which directly associates the features of target parts with the classification results. Finally, we give a method for deriving the importance of each part, which is of great significance for practical application and for interpreting SAR ATR. Experiments on the MSTAR dataset demonstrate the effectiveness of the proposed part attention network; compared with existing studies, it achieves higher and more robust classification accuracy under different complex conditions. Furthermore, combining the part importances, we construct two effective interpretable analysis methods for the classification results of deep learning networks.
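The attention operation the abstract builds on can be sketched generically: scaled dot-product attention scores a query against part-feature keys and returns a weighted sum of values, with the softmax weights serving as a natural per-part importance readout. This is the standard Vaswani-style operation, not the paper's full part attention module.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """weights = softmax(Q K^T / sqrt(d)); output = weights @ V.
    If each row of k/v holds one target part's features, the weight
    matrix directly exposes how much each part drives the output."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                         # (nq, nk) logits
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w = w / w.sum(axis=-1, keepdims=True)                 # softmax over parts
    return w @ v, w

# A query aligned with the first part's key attends mostly to part 0.
q = np.array([[1.0, 0.0]])
k = np.array([[1.0, 0.0], [0.0, 1.0]])
v = np.array([[1.0, 2.0], [3.0, 4.0]])
out, w = scaled_dot_product_attention(q, k, v)
```

Reading off the rows of `w` is what makes such a module interpretable: the same numbers that weight the value vectors also rank the target parts by their contribution to the decision.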

Lab head

Gangyao Kuang
Department
  • Department of Electronic Science and Technology

Members (17)

Gui Gao
  • National University of Defense Technology
Boli Xiong
  • National University of Defense Technology
Yuli Sun
  • National University of Defense Technology
Sinong Quan
  • National University of Defense Technology
Canbin Hu
  • National University of Defense Technology
Xiao Li
  • National University of Defense Technology
Siqian Zhang
  • National University of Defense Technology
Ganggang Dong
  • Xidian University
Gangyao Kuang
  • Not confirmed yet
Li Liu
  • Not confirmed yet
Lingjun Zhao
  • Not confirmed yet
Xiangguang Leng
  • Not confirmed yet
Wanxia Deng
  • Not confirmed yet
Yan Zhao
  • Not confirmed yet
Ming Li
  • Not confirmed yet
Xiaoqiang Zhang
  • Not confirmed yet