Da Ke’s research while affiliated with National University of Defense Technology and other places


Publications (6)


Fig. 2 System model in which a transmitted signal is intercepted by an intruder.
Fig. 4 Spectra of a 16QAM-modulated signal with an SNR of 20 dB under 7 different adversarial attacks. (The blue line shows the spectrum of the original signal; the orange line shows the spectrum of the adversarial perturbation.)
Fig. 5 Waveforms of a 16QAM-modulated signal with an SNR of 20 dB under 7 different adversarial attacks. (The blue line shows the waveform of the original signal; the orange line shows the waveform of the adversarial example.)
Fig. 6 Spectra of the adversarial perturbations and original signals under 6 different adversarial attack methods on an adversarially trained model. (The blue line shows the spectrum of the original signal; the orange line shows the spectrum of the adversarial perturbation.)
Frequency-Selective Adversarial Attack Against Deep Learning-Based Wireless Signal Classifiers
  • Article
  • Full-text available

January 2024 · 68 Reads · IEEE Transactions on Information Forensics and Security

Da Ke · [...] · Zhitao Huang

Although deep learning (DL) provides state-of-the-art results for most spectrum sensing tasks, it is vulnerable to adversarial examples. Motivated by this, we consider a non-cooperative communication scenario in which an intruder tries to recognize the modulation type of an intercepted signal. Specifically, this paper aims to minimize the intruder's accuracy while guaranteeing that the intended receiver can still recover the underlying message with the highest reliability. This is achieved by adding adversarial perturbations to the channel input symbols at the encoder. In image classification, the perturbation is kept imperceptible to a human observer by minimizing its ℓp norm; in this work, we broaden the notion of adversarial examples and propose that, for wireless signals, imperceptibility means imperceptibility to filters. Based on this perspective, we refine the adversarial-example model and constrain the adversarial perturbation to a narrow frequency band so that filters cannot remove it. We also define a new set of metrics to quantify the imperceptibility of wireless-signal adversarial examples. Simulation results demonstrate the viability of our approach in securing wireless communication against state-of-the-art DL-based intruders while minimizing the loss in communication performance.
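
The band-limiting idea in this abstract can be illustrated with a simple projection step: after each gradient update, the perturbation is transformed to the frequency domain, masked to the allowed band, and transformed back, so a receive filter matched to the signal band cannot remove it. The NumPy sketch below is illustrative only and is not the paper's algorithm; `grad_fn`, the band edges, and the power budget `eps` are all assumed names.

```python
import numpy as np

def project_to_band(delta, fs, f_lo, f_hi):
    """Project a perturbation onto a narrow frequency band via FFT masking."""
    spectrum = np.fft.fft(delta)
    freqs = np.fft.fftfreq(len(delta), d=1.0 / fs)
    mask = (np.abs(freqs) >= f_lo) & (np.abs(freqs) <= f_hi)
    return np.fft.ifft(spectrum * mask)

def band_limited_attack(x, grad_fn, fs, f_lo, f_hi, eps=0.01, steps=50, lr=0.005):
    """Iterative attack whose perturbation is confined to [f_lo, f_hi] (a sketch)."""
    delta = np.zeros_like(x, dtype=complex)
    for _ in range(steps):
        g = grad_fn(x + delta)        # gradient of the classifier loss w.r.t. the input
        delta = delta + lr * g        # ascend the loss to degrade the intruder's accuracy
        delta = project_to_band(delta, fs, f_lo, f_hi)  # keep the perturbation in-band
        # cap the perturbation power so the intended receiver is barely affected
        p = np.sqrt(np.mean(np.abs(delta) ** 2))
        if p > eps:
            delta *= eps / p
    return x + delta
```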


Schematic diagram of the minimal perturbation of a binary classifier
Minimal perturbation algorithm for a nonlinear decision function
Time-domain waveforms of 11 modulation types at high SNR
Time-domain waveform and spectrum of 64QAM-modulated signals. (a) The time-domain waveform of the 64QAM-modulated signal: the blue curve is the original signal and the red curve is the adversarial example; the vertical coordinates of the adversarial example are artificially offset to show the differences. (b) The spectrum of the modulated signal: the blue curve is the original signal and the red curve is the adversarial example.
Classification confusion matrices. The vertical axis is the ground-truth label and the horizontal axis is the predicted label: (a) confusion matrix for original signals; (b) confusion matrix for signals with the proposed perturbations.
Minimum Power Adversarial Attacks in Communication Signal Modulation Classification with Deep Learning

October 2022 · 261 Reads · 10 Citations · Cognitive Computation

Da Ke · [...] · Zhitao Huang

Integrating the cognitive radio (CR) technique with wireless networks is an effective way to relieve the increasingly crowded spectrum. Automatic modulation classification (AMC) plays an important role in CR: it significantly improves the intelligence of a CR system by classifying the modulation type and signal parameters of received communication signals, and it provides additional information for the system's decision making. AMC also helps the CR system dynamically adjust the modulation type and coding rate of the communication signal to adapt to different channel qualities, eliminating the cost of broadcasting the modulation type and coding rate. Deep learning (DL) has recently emerged as one of the most popular methods for AMC of communication signals. Despite their success, DL models have been shown to be vulnerable to adversarial attacks in pattern recognition and computer vision: they can easily be deceived if a small, carefully designed perturbation, called an adversarial attack, is imposed on the input, typically an image. Owing to the very different nature of communication signals, it is interesting and crucially important to study whether adversarial perturbations can also fool AMC. In this paper, we make a first attempt to design a special adversarial attack on AMC. We start from the assumption of a linear binary classifier, which is then extended to a multi-way classifier. We consider the minimum power consumption, a criterion that differs from existing adversarial perturbations but is more reasonable in the context of AMC. We then develop a novel adversarial perturbation generation method that achieves a high attack success rate on communication signals. Experimental results on real data show that the method successfully spoofs an 11-class modulation classification model at a minimum cost of about −21 dB. The visualization results show that the adversarial perturbation manifests in the time domain as imperceptible undulations of the signal, and in the frequency domain as small noise outside the signal band.
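
For the linear binary classifier mentioned in the abstract, the minimum-power perturbation has a closed form: for f(x) = wᵀx + b, the shortest move that crosses the decision boundary is the orthogonal projection onto it, r = −f(x) w / ‖w‖². The NumPy sketch below shows this standard construction; it is not necessarily the paper's exact algorithm, and all names are illustrative.

```python
import numpy as np

def minimal_perturbation_linear(x, w, b, overshoot=1e-4):
    """Minimum-power perturbation that flips a linear binary classifier.

    For f(x) = w @ x + b, the closest point on the decision boundary
    lies along w, at distance |f(x)| / ||w||.
    """
    fx = np.dot(w, x) + b
    r = -(fx / np.dot(w, w)) * w      # orthogonal projection onto the boundary
    return (1.0 + overshoot) * r      # tiny overshoot to actually cross it

# Example: flip a 2-D linear classifier's decision with minimum power
w = np.array([1.0, -2.0]); b = 0.5
x = np.array([3.0, 1.0])
r = minimal_perturbation_linear(x, w, b)
print(np.sign(w @ x + b), np.sign(w @ (x + r) + b))  # signs differ after the attack
print(np.mean(r ** 2))                               # perturbation power
```

The figure captions above suggest the nonlinear, multi-class case is handled by iterating this kind of projection against a local linearization of the decision function, in the spirit of minimal-perturbation attacks such as DeepFool.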


Robustness of Deep Learning-Based Specific Emitter Identification under Adversarial Attacks

October 2022 · 84 Reads · 17 Citations

The deep learning (DL)-based specific emitter identification (SEI) technique can automatically extract radio frequency (RF) fingerprint features from RF signals to distinguish between legal and illegal devices and enhance the security of wireless networks. However, deep neural networks (DNNs) can easily be fooled by adversarial examples, i.e., perturbations of the input data. If a malicious device emits signals containing specially designed adversarial samples, will DL-based SEI still work stably and correctly identify the malicious device? To the best of our knowledge, this question has not yet been studied, let alone the corresponding defense methods. This paper therefore designs attack and defense scenarios, and proposes corresponding implementation methods, to examine the robustness of DL-based SEI under adversarial attacks. Detailed experiments are carried out on real-world and simulated data. In the attack scenario, the malicious device adds a specially designed adversarial perturbation signal to the original signal, misleading the system into a misjudgment. Experiments with three different attack generation methods show that DL-based SEI is very vulnerable: even at very low perturbation intensities that do not affect the probability density distribution of the original signal, performance can be reduced to about 50%, and at −22 dB the system fails completely. In the defense scenario, adversarial training (AT) is added to DL-based SEI, which significantly improves performance under adversarial attacks, with a ≥60% improvement in recognition rate compared to the network without AT. Furthermore, AT is also more robust to white noise. This study fills the relevant gap and provides guidance for future research: the impact of adversarial attacks must be considered, and adversarial training should be added to the training process.
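
The attack-and-defense loop described here can be sketched with a one-step gradient-sign attack and the usual adversarial-training recipe of mixing clean and perturbed batches. The PyTorch sketch below is a generic illustration under those assumptions, not the paper's implementation; the I/Q tensor shape and the 50/50 loss mixing are choices made here.

```python
import torch

def fgsm_perturbation(model, x, y, eps):
    """One-step FGSM perturbation on an I/Q signal batch (shape [B, 2, N])."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return eps * x.grad.sign()  # signed gradient scaled to the perturbation budget

def adversarial_training_step(model, optimizer, x, y, eps=0.01):
    """Mix clean and adversarial examples in each update (a common AT recipe)."""
    model.eval()
    delta = fgsm_perturbation(model, x, y, eps)
    model.train()
    optimizer.zero_grad()
    loss = 0.5 * (torch.nn.functional.cross_entropy(model(x), y)
                  + torch.nn.functional.cross_entropy(model(x + delta), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```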




Blind Detection Techniques for Non-cooperative Communication Signals Based on Deep Learning

July 2019 · 781 Reads · 33 Citations · IEEE Access

The performance of existing signal detection methods depends heavily on the amount of prior information acquired by the sensor of interest. Therefore, to improve cognitive radio-based detection in low-SNR environments, we propose a deep learning-based passive signal detection method. A convolutional neural network (CNN) and a long short-term memory (LSTM) network are used to extract the frequency-domain and time-domain features of the signal, respectively. Our method can detect signals when little to no prior information exists. Simulation experiments verify the probability of detection of our method; the results show that it is about 4.5-5.5 dB better than a traditional blind detection algorithm under different SNR environments.
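
A plausible shape for the CNN-plus-LSTM detector described above: convolutional layers extract local features from the raw I/Q samples, an LSTM summarizes them over time, and a linear head decides signal-present versus noise-only. This PyTorch sketch is a hedged reconstruction; all layer sizes are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class CnnLstmDetector(nn.Module):
    """Binary signal-presence detector over I/Q samples, shape [B, 2, N]."""
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # logits: [noise-only, signal-present]

    def forward(self, x):
        feats = self.cnn(x)                # [B, 64, N/4] local feature maps
        feats = feats.transpose(1, 2)      # [B, N/4, 64] as a sequence for the LSTM
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])            # classify from the final hidden state

# Smoke test on a random batch of 1024-sample I/Q frames
logits = CnnLstmDetector()(torch.randn(8, 2, 1024))
print(logits.shape)  # torch.Size([8, 2])
```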

Citations (5)


... The robustness of neural networks [3,4,9,10] is an area of particular interest in modern deep learning research, especially because deep convolutional neural networks are very susceptible to attacks of various kinds. Adversarial attacks stand out [11]-[13], so various kinds of defense against them are a very important area of research. In particular, the phenomenon of transferability of universal adversarial perturbations [14] should be highlighted, which enables greater applicability of adversarial attacks to different types of neural networks. ...

Reference:

Extensions and Detailed Analysis of Synergy Between Traditional Classification and Classification Based on Negative Features in Deep Convolutional Neural Networks
Minimum Power Adversarial Attacks in Communication Signal Modulation Classification with Deep Learning

Cognitive Computation

... Research on RFF-based attacks is relatively scarce in the literature and is mostly qualitative. Although pioneering contributions consider RFF techniques based on Channel State Information (CSI) fingerprinting [10,11], a few contributions have explored the usage of Adversarial Machine Learning (AML) techniques to generate perturbations in the transmitted signals, thwarting RFF while keeping the communication quality acceptable [12]-[15]. However, such works either focus on pure data analysis without considering the technical challenges of data manipulation and wireless transmission, or use simplistic RFF models, far from real-world deployments. ...

Robustness of Deep Learning-Based Specific Emitter Identification under Adversarial Attacks

... FIGURE 9. Euler diagram of deinterleaving methods. The four main categories are represented by blue rectangles. ...

A Deinterleaving Method for Mechanical-Scanning Radar Signals Based on Deep Learning
  • Citing Conference Paper
  • April 2022

... In the task of signal modulation recognition, O'Shea et al. [1] first applied deep learning to this task in 2016, demonstrating that deep learning algorithms outperform expert features and higher-order moment methods in signal modulation recognition. Subsequently, Sadeghi et al. [13]-[15] conducted in-depth studies on adversarial attacks against signal modulation recognition networks and found that the networks are vulnerable to signal adversarial examples under various signal-to-noise ratios (SNRs). Moreover, Sadeghi and Flowers further noted that, compared to Gaussian noise perturbations of the same intensity, signal adversarial examples exhibit a more significant attack effect [15,16]. ...

Application of Adversarial Examples in Communication Modulation Classification
  • Citing Conference Paper
  • November 2019

... With accurate signal classification and modulation recognition, secondary users (SUs) and primary users (PUs) can efficiently share the spectrum, thereby significantly improving spectrum utilization. Given the potential of CR, the related technologies are attracting more and more attention from researchers [3]. As wireless channels, signal waveforms and modulation formats become increasingly complex and dynamic, accurate signal parameter estimation and modulation classification become more and more challenging. ...

Blind Detection Techniques for Non-cooperative Communication Signals Based on Deep Learning

IEEE Access