Yaopeng Wang’s research while affiliated with Fuzhou University and other places


Publications (4)


Robust Single-Step Adversarial Training with Regularizer
  • Chapter

October 2021 · 14 Reads · 1 Citation · Lecture Notes in Computer Science

Lehui Xie · Yaopeng Wang · …

The high training cost caused by multi-step adversarial example generation is a major challenge in adversarial training. Previous methods try to reduce this computational burden using single-step adversarial example generation schemes, which effectively improve efficiency but also introduce the problem of “catastrophic overfitting”, where the robust accuracy against the Fast Gradient Sign Method (FGSM) can reach nearly 100% while the robust accuracy against Projected Gradient Descent (PGD) suddenly drops to 0% within a single epoch. To address this issue, we focus on the single-step adversarial training scheme in this paper and propose a novel Fast Gradient Sign Method with PGD Regularization (FGSMPR) to boost the efficiency of adversarial training without catastrophic overfitting. Our core observation is that single-step adversarial training cannot simultaneously learn robust internal representations of FGSM and PGD adversarial examples. We therefore design a PGD regularization term that encourages similar embeddings for FGSM and PGD adversarial examples. The experiments demonstrate that the proposed method can train a robust deep network for L∞-perturbations with FGSM adversarial training and reduce the gap to multi-step adversarial training.
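The abstract does not spell out the exact form of the PGD regularization term, so the following is only a minimal PyTorch-style sketch of one plausible reading: the penalty is taken as an L2 distance between the logits of the FGSM and PGD adversarial examples. The helper names (`fgsm_example`, `pgd_example`, `fgsmpr_step`), the `reg_weight` hyperparameter, and the 2-step PGD used for the regularizer are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F


def fgsm_example(model, x, y, eps):
    """Single-step FGSM example inside the L-infinity ball of radius eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()


def pgd_example(model, x, y, eps, alpha, steps):
    """Few-step PGD example; used here only to form the regularization target."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0).detach()
    return x_adv


def fgsmpr_step(model, optimizer, x, y, eps=8 / 255, reg_weight=1.0):
    """One training step: FGSM adversarial loss plus an L2 penalty pulling the
    logits of the FGSM and PGD examples together (illustrative formulation)."""
    x_fgsm = fgsm_example(model, x, y, eps)
    x_pgd = pgd_example(model, x, y, eps, alpha=eps / 2, steps=2)
    logits_fgsm, logits_pgd = model(x_fgsm), model(x_pgd)
    loss = F.cross_entropy(logits_fgsm, y) + reg_weight * F.mse_loss(logits_fgsm, logits_pgd)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

For the single-step efficiency argument to hold, the extra PGD pass in a real implementation would have to stay cheap, e.g. very few steps or computed only intermittently.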



Robust Single-step Adversarial Training with Regularizer

February 2021 · 14 Reads

The high training cost caused by multi-step adversarial example generation is a major challenge in adversarial training. Previous methods try to reduce this computational burden using single-step adversarial example generation schemes, which effectively improve efficiency but also introduce the problem of catastrophic overfitting, where the robust accuracy against the Fast Gradient Sign Method (FGSM) can reach nearly 100% while the robust accuracy against Projected Gradient Descent (PGD) suddenly drops to 0% within a single epoch. To address this problem, we propose a novel Fast Gradient Sign Method with PGD Regularization (FGSMPR) to boost the efficiency of adversarial training without catastrophic overfitting. Our core idea is that single-step adversarial training cannot learn robust internal representations of FGSM and PGD adversarial examples. We therefore design a PGD regularization term that encourages similar embeddings for FGSM and PGD adversarial examples. The experiments demonstrate that the proposed method can train a robust deep network for L∞-perturbations with FGSM adversarial training and reduce the gap to multi-step adversarial training.


Figures from this article:
FIGURE 1. A demonstration of an adversarial sample [21]. The panda image is recognized as a gibbon with high confidence by the classifier after adding adversarial perturbations.
FIGURE 3. An overview of the model extraction attack and model inversion attack.
FIGURE 5. An overview of adversarial attacks.
FIGURE 7. An overview of the adversarial defense.
FIGURE 8. An overview of the Defense-GAN framework [124].
(+5 more figures not shown)

Privacy and Security Issues in Deep Learning: A Survey
  • Article
  • Full-text available

December 2020 · 7,247 Reads · 272 Citations · IEEE Access

Deep Learning (DL) algorithms based on artificial neural networks have achieved remarkable success and are being extensively applied in a variety of application domains, ranging from image classification, autonomous driving, and natural language processing to medical diagnosis, credit risk assessment, and intrusion detection. However, studies of the privacy and security of DL have revealed that a DL model can be stolen or reverse engineered, that sensitive training data can be inferred, and that even a recognizable face image of the victim can be recovered. In addition, recent works have found that DL models are vulnerable to adversarial examples perturbed by imperceptible noise, which can cause a DL model to make wrong predictions with high confidence. In this paper, we first briefly introduce the four types of attacks and the privacy-preserving techniques in DL. We then review and summarize the attack and defense methods associated with DL privacy and security in recent years. To demonstrate that these security threats really exist in the real world, we also review adversarial attacks under physical-world conditions. Finally, we discuss current challenges and open problems regarding privacy and security issues in DL.
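As an illustration of the model-stealing threat the survey reviews, here is a minimal sketch of query-based model extraction via distillation. The `victim_predict` black box, the `extract_surrogate` helper, and the KL-distillation objective are assumptions chosen for illustration, not methods described in the survey.

```python
import torch
import torch.nn.functional as F


def extract_surrogate(victim_predict, surrogate, optimizer, query_loader, epochs=10):
    """Toy query-based model extraction: label attacker-chosen inputs with the
    victim's soft predictions, then distill a surrogate model on those labels.
    `victim_predict` is a hypothetical black box returning probability vectors."""
    surrogate.train()
    for _ in range(epochs):
        for x in query_loader:                      # attacker-chosen query inputs
            with torch.no_grad():
                victim_probs = victim_predict(x)    # victim's output probabilities
            log_probs = F.log_softmax(surrogate(x), dim=1)
            # Distillation: match the surrogate's predictive distribution to the victim's.
            loss = F.kl_div(log_probs, victim_probs, reduction="batchmean")
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return surrogate
```

The same query-then-train loop underlies many of the extraction attacks surveyed; defenses typically limit query rates or perturb the returned probabilities.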


Citations (3)


... There are three methods for adding noise to create the adversarial sample: L0 [23], L2 [24], and L∞ [25]. In all three methods, the smaller the number, the more closely the adversarial example resembles the original sample. ...

Reference:

Restricted-Area Adversarial Example Attack for Image Captioning Model
Robust Single-Step Adversarial Training with Regularizer
  • Citing Chapter
  • October 2021

Lecture Notes in Computer Science
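The citing snippet above contrasts the L0, L2, and L∞ perturbation constraints; for concreteness, the small sketch below measures a perturbation under all three, showing why smaller values mean the adversarial example stays closer to the original. The function name and return format are illustrative only.

```python
import torch


def perturbation_norms(x, x_adv):
    """Measure an adversarial perturbation under the three common constraints:
    L0 (components changed), L2 (Euclidean size), L-infinity (largest change)."""
    delta = (x_adv - x).flatten()
    return {
        "L0": int((delta != 0).sum()),     # how many pixels were modified at all
        "L2": float(delta.norm(p=2)),      # overall Euclidean magnitude of the change
        "Linf": float(delta.abs().max()),  # worst-case change to any single pixel
    }
```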

... Zhang et al. proposed a credible federated learning framework (RobustFL) [27], which identifies malicious clients by constructing a logit-based predictive model on the server side. Wang et al. also proposed a method based on the logit distribution to determine the distribution of malicious samples [28]. ...

Model-Agnostic Adversarial Example Detection Through Logit Distribution Learning
  • Citing Conference Paper
  • September 2021

... Academic interest in AI ethics has surged in recent years, leading to numerous studies that discuss ethical principles, fairness, bias, security, and other critical concerns (Hu Yupeng et al. 2021; John-Mathews et al. 2022; Nilsson 2014; Ntoutsi et al., n.d.). However, existing literature reviews are usually constrained to particular subtopics, adopting qualitative methods and providing fragmented insights rather than an overarching perspective on the discipline as a whole (Hagendorff 2020; Jobin et al. 2019; Liu et al. 2021; Mehrabi et al. 2022; Ntoutsi et al., n.d.; Zhang et al. 2021). This piecemeal understanding makes it difficult to map out broad trends, to identify emergent themes, or to assess the collaboration networks within AI ethics research. ...

Privacy and Security Issues in Deep Learning: A Survey

IEEE Access