Qiyun Xu’s scientific contributions

What is this page?

This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (2)


[Figures: simulation system architecture; system-visualized operating interface; (a) trend of A·T_require under a given P_d value, (b) trend of A·T_require under a given P_fa value; relationship curve between A·T and N; antenna array gain pattern; +5 more]

Adaptive Multi-Function Radar Temporal Behavior Analysis
  • Article
  • Full-text available

November 2024 · 15 Reads

Zhenjia Xu · Qingsong Zhou · [...] · Qiyun Xu

The performance of radar mode recognition has been significantly enhanced by the diverse architectures of deep learning networks. However, these approaches typically rely on supervised learning and are prone to overfitting a single dataset. As a transitional stage toward Cognitive Multi-Function Radar (CMFR), Adaptive Multi-Function Radar (AMFR) can emit identical waveform signals across different working modes and states, with waveform parameters that adjust dynamically based on scene information. From a reconnaissance perspective, the valid intercepted signals are sparse and localized in the time series. To address this challenge, we redefine the reconnaissance-side research priority for radar systems, shifting the emphasis from pattern recognition to behavior analysis. Building on a comprehensive digital simulation model of a radar system, we perform reconnaissance and analysis from the reconnaissance side, integrating the radar, reconnaissance, and environmental aspects into one simulation to study radar behavior under realistic scenarios. Within the system, the radar's waveform parameters vary according to unified rules, and its resource management and task scheduling switch according to operational mechanisms; the reconnaissance-side target maneuvers following authentic behavioral patterns, while the complexity of the electromagnetic environment is adjusted as required. The simulation results indicate that temporal annotations in the signal flow data play a crucial role in behavior analysis from the reconnaissance perspective, offering valuable insights for future radar behavior analysis that incorporates temporal correlations and sequential dependencies.
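To make the described setup concrete, the sketch below mimics the kind of radar/reconnaissance co-simulation the abstract outlines: an AMFR switches working modes under a simple rule and varies its waveform parameters within each mode, while the reconnaissance side records time-stamped pulse descriptor words (PDWs), i.e. the temporal annotations the abstract highlights. The mode names, parameter ranges, and switching rule are illustrative assumptions, not the paper's model.

```python
"""Minimal sketch of an AMFR-vs-reconnaissance simulation producing a
time-annotated pulse stream. All modes, ranges, and rules are assumed."""
import random
from dataclasses import dataclass

# Assumed working modes with per-mode waveform-parameter ranges
# (PRI and pulse width in microseconds); not the paper's values.
MODES = {
    "search":   {"pri": (800, 1200), "pw": (10, 20)},
    "track":    {"pri": (200, 400),  "pw": (2, 5)},
    "guidance": {"pri": (50, 100),   "pw": (0.5, 1.5)},
}

@dataclass
class PDW:
    toa_us: float   # time of arrival: the temporal annotation
    pri_us: float
    pw_us: float
    mode: str       # ground truth, available only inside the simulation

def simulate(n_pulses=20, seed=0):
    """Emit a pulse stream; the radar switches mode with a small
    probability each pulse (a stand-in for scene-driven rules)."""
    random.seed(seed)
    t, mode, stream = 0.0, "search", []
    for _ in range(n_pulses):
        pri = random.uniform(*MODES[mode]["pri"])
        pw = random.uniform(*MODES[mode]["pw"])
        t += pri
        stream.append(PDW(round(t, 1), round(pri, 1), round(pw, 2), mode))
        if random.random() < 0.2:          # assumed switching rule
            mode = random.choice(list(MODES))
    return stream

for pdw in simulate()[:6]:
    print(pdw)
```

Even this toy stream shows why time-of-arrival annotations matter: the mode changes are only recoverable from how PRI and pulse width evolve over time, not from any single intercepted pulse.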


Efficient Jamming Policy Generation Method Based on Multi-Timescale Ensemble Q-Learning

August 2024 · 32 Reads

With the advancement of radar technology toward multifunctionality and cognitive capability, traditional radar countermeasures are no longer sufficient to counter advanced multifunctional radar (MFR) systems. Rapid and accurate generation of an optimal jamming strategy is a key technology for completing radar countermeasures efficiently. To improve the efficiency and accuracy of jamming policy generation, this paper proposes an efficient jamming policy generation method based on multi-timescale ensemble Q-learning (MTEQL). First, the task of generating jamming strategies is framed as a Markov decision process (MDP) by constructing a countermeasure scenario between the jammer and the radar and analyzing the principles of radar operation mode transitions. Then, multiple structure-dependent Markov environments are created from the real-world adversarial interactions between jammers and radars; Q-learning algorithms are executed concurrently in these environments, and their results are merged through an adaptive weighting mechanism based on the Jensen–Shannon divergence (JSD). Ultimately, a low-complexity, near-optimal jamming policy is derived. Simulation results indicate that the proposed method outperforms the Q-learning algorithm in jamming policy generation, with shorter jamming decision-making time and a lower average strategy error rate.
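As a rough illustration of the MTEQL idea described above (not the authors' implementation), the sketch below runs tabular Q-learning in several related Markov environments, with different learning rates standing in for different timescales, and fuses the resulting Q-tables using weights derived from the Jensen–Shannon divergence between each learner's softmax policy and the ensemble mean. The environment construction, state/action sizes, and hyperparameters are all assumptions.

```python
"""Minimal sketch of multi-timescale ensemble Q-learning with JSD-based
fusion; the radar-mode MDP here is a random toy environment."""
import numpy as np

N_STATES, N_ACTIONS = 8, 4          # assumed sizes of the radar-mode MDP
rng = np.random.default_rng(0)

def q_learning(env_step, episodes, alpha, gamma=0.9, eps=0.1):
    """Tabular Q-learning in one environment; env_step(s, a) -> (s', r)."""
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s = rng.integers(N_STATES)
        for _ in range(50):                 # fixed-length episode
            a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
            s2, r = env_step(s, a)
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

def jsd(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + 1e-12) / (b + 1e-12)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def softmax_policy(Q, tau=1.0):
    """Per-state action distribution induced by a Q-table."""
    z = np.exp((Q - Q.max(axis=1, keepdims=True)) / tau)
    return z / z.sum(axis=1, keepdims=True)

def mteql(env_steps, episodes=200):
    """Learn in each environment, then fuse Q-tables: learners whose
    policies agree more with the ensemble mean get more weight."""
    alphas = np.linspace(0.05, 0.5, len(env_steps))  # assumed timescale spread
    Qs = [q_learning(f, episodes, a) for f, a in zip(env_steps, alphas)]
    pols = [softmax_policy(Q) for Q in Qs]
    mean_pol = np.mean(pols, axis=0)
    # average per-state JSD to the mean policy; lower divergence -> higher weight
    d = np.array([np.mean([jsd(p[s], mean_pol[s]) for s in range(N_STATES)])
                  for p in pols])
    w = np.exp(-d)
    w /= w.sum()
    return sum(wi * Qi for wi, Qi in zip(w, Qs))     # fused Q-table

def make_env(seed):
    """Toy MDP with random transitions and rewards, standing in for one
    structure-dependent jammer-radar Markov environment."""
    r = np.random.default_rng(seed)
    T = r.integers(N_STATES, size=(N_STATES, N_ACTIONS))
    R = r.random((N_STATES, N_ACTIONS))
    return lambda s, a: (int(T[s, a]), float(R[s, a]))

Q_fused = mteql([make_env(k) for k in range(3)])
print("greedy jamming action per radar state:", Q_fused.argmax(axis=1))
```

The JSD-based weighting is what gives the ensemble its claimed robustness: a learner whose policy diverges sharply from the consensus (e.g. because its environment or timescale is a poor match) is down-weighted in the fused Q-table rather than discarded outright.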