Lehui Xie’s research while affiliated with Fuzhou University and other places


Publications (6)


Privacy-Preserving Object Detection With Poisoning Recognition for Autonomous Vehicles
  • Article

January 2022 · 16 Reads · 2 Citations
IEEE Transactions on Network Science and Engineering

Lehui Xie · [...] · Jianping Cai
Object detection has achieved significant progress in attaining high-quality performance without leaking private information. However, traditional approaches cannot defend against poisoning attacks, which can render the predictive model unusable and quickly cause recognition errors or even traffic accidents. In this paper, we propose a privacy-preserving object detection with poisoning recognition (PR-PPOD) framework based on distributed training with CNN, ResNet18, and the classical SSD network. Specifically, we design a poisoning model recognition algorithm that removes uploaded local poisoned parameters to guarantee the trained model's availability within the given privacy-preserving process. More importantly, the PR-PPOD framework can effectively prevent the threat of differential attacks and avoid privacy leakage caused by reverse model reasoning. Moreover, the effectiveness, efficiency, and security of PR-PPOD are demonstrated via comprehensive theoretical analysis. Finally, we simulate the performance of local poisoning model recognition on the MNIST, CIFAR10, VOC2007, and VOC2012 datasets, achieving good performance compared with the case without poisoning recognition.
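The abstract describes a poisoning-recognition step that filters malicious local updates before aggregation, but does not spell out the algorithm. A minimal sketch of that general idea, filtering anomalous client updates before federated averaging, might look as follows; the median/MAD criterion, function names, and threshold are illustrative assumptions, not the paper's actual PR-PPOD procedure.

```python
# Hypothetical sketch: filter suspicious client updates before federated averaging.
# The distance-to-median criterion below is an illustrative assumption, not the
# actual PR-PPOD poisoning-recognition algorithm described in the paper.
import numpy as np

def aggregate_with_poisoning_filter(client_updates, z_threshold=2.5):
    """client_updates: list of 1-D numpy arrays (flattened local model deltas)."""
    updates = np.stack(client_updates)                 # shape: (n_clients, n_params)
    median = np.median(updates, axis=0)                # robust central estimate
    dists = np.linalg.norm(updates - median, axis=1)   # each client's distance to the median
    # Flag clients whose update is unusually far from the median (possible poisoning).
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    scores = np.abs(dists - np.median(dists)) / mad
    keep = scores < z_threshold
    # Average only the updates that pass the filter.
    return updates[keep].mean(axis=0), np.where(~keep)[0]

# Example: 4 benign clients plus 1 client submitting a large malicious delta.
rng = np.random.default_rng(0)
benign = [rng.normal(0, 0.01, 100) for _ in range(4)]
poisoned = rng.normal(5.0, 0.1, 100)
agg, rejected = aggregate_with_poisoning_filter(benign + [poisoned])
print("rejected client indices:", rejected)  # expected to contain index 4
```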


Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness

December 2021 · 18 Reads

In response to the threat of adversarial examples, adversarial training provides an attractive option for enhancing model robustness by training models on online-augmented adversarial examples. However, most existing adversarial training methods focus on improving robust accuracy by strengthening the adversarial examples while neglecting the increasing shift between natural data and adversarial examples, leading to a dramatic decrease in natural accuracy. To maintain the trade-off between natural and robust accuracy, we alleviate this shift from the perspective of feature adaptation and propose Feature Adaptive Adversarial Training (FAAT), which optimizes the class-conditional feature adaptation across natural data and adversarial examples. Specifically, we incorporate a class-conditional discriminator to encourage the features to become (1) class-discriminative and (2) invariant to the change of adversarial attacks. The FAAT framework maintains the trade-off between natural and robust accuracy by generating features with similar distributions across natural and adversarial data, and achieves higher overall robustness thanks to the class-discriminative feature characteristics. Experiments on various datasets demonstrate that FAAT produces more discriminative features and performs favorably against state-of-the-art methods. Code is available at https://github.com/VisionFlow/FAAT.
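As a rough illustration of the class-conditional feature adaptation described above, the sketch below pairs a classifier with a discriminator that receives features concatenated with one-hot labels and tries to tell natural from adversarial features, while the model is trained to fool it. The module names, loss weighting, and alternating schedule are assumptions; the authors' actual implementation is at https://github.com/VisionFlow/FAAT.

```python
# Illustrative sketch of class-conditional feature adaptation during adversarial
# training (assumed structure, not the authors' FAAT code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondDiscriminator(nn.Module):
    """Tells natural from adversarial features, conditioned on the class label
    via one-hot concatenation (hypothetical structure)."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(nn.Linear(feat_dim + num_classes, 256),
                                 nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, feat, labels):
        onehot = F.one_hot(labels, self.num_classes).float()
        return self.net(torch.cat([feat, onehot], dim=1)).squeeze(1)

def faat_style_step(model, disc, x_nat, x_adv, y, opt_model, opt_disc, lam=1.0):
    # model(x) is assumed to return (features, logits).
    feat_nat, logits_nat = model(x_nat)
    feat_adv, logits_adv = model(x_adv)

    # 1) Discriminator step: label natural features 0 and adversarial features 1.
    d_logits = disc(torch.cat([feat_nat.detach(), feat_adv.detach()]), torch.cat([y, y]))
    d_targets = torch.cat([torch.zeros_like(y, dtype=torch.float),
                           torch.ones_like(y, dtype=torch.float)])
    d_loss = F.binary_cross_entropy_with_logits(d_logits, d_targets)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Model step: classify both views correctly and fool the discriminator, so
    #    adversarial features match the natural, class-conditional feature distribution.
    cls_loss = F.cross_entropy(logits_nat, y) + F.cross_entropy(logits_adv, y)
    fool_loss = F.binary_cross_entropy_with_logits(
        disc(feat_adv, y), torch.zeros_like(y, dtype=torch.float))
    loss = cls_loss + lam * fool_loss
    opt_model.zero_grad(); loss.backward(); opt_model.step()
    return loss.item()
```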


Robust Single-Step Adversarial Training with Regularizer

October 2021 · 14 Reads · 1 Citation
Lecture Notes in Computer Science

The high training cost caused by multi-step adversarial example generation is a major challenge in adversarial training. Previous methods try to reduce the computational burden of adversarial training using single-step adversarial example generation schemes, which effectively improve efficiency but also introduce the problem of “catastrophic overfitting”, where the robust accuracy against the Fast Gradient Sign Method (FGSM) reaches nearly 100% whereas the robust accuracy against Projected Gradient Descent (PGD) suddenly drops to 0% over a single epoch. To address this issue, we focus on the single-step adversarial training scheme in this paper and propose a novel Fast Gradient Sign Method with PGD Regularization (FGSMPR) to boost the efficiency of adversarial training without catastrophic overfitting. Our core observation is that single-step adversarial training cannot simultaneously learn robust internal representations of FGSM and PGD adversarial examples. Therefore, we design a PGD regularization term to encourage similar embeddings of FGSM and PGD adversarial examples. The experiments demonstrate that our proposed method can train a robust deep network for L∞-perturbations with FGSM adversarial training and reduce the gap to multi-step adversarial training.
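A hedged sketch of the loss shape suggested by the abstract is given below: FGSM adversarial training plus a term that pulls the logits of FGSM and PGD adversarial examples together. The exact regularizer, the use of logits as the "embedding", and how often the PGD branch is evaluated (computing it every batch would erode the single-step speed advantage) are assumptions, not the paper's prescription.

```python
# Hypothetical sketch of the FGSMPR idea: FGSM adversarial training plus a
# regularizer tying FGSM and PGD embeddings together. Step sizes, loss forms,
# and the choice of logits as the embedding are assumptions.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd_example(model, x, y, eps, alpha, steps):
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

def fgsmpr_style_loss(model, x, y, eps=8/255, alpha=2/255, pgd_steps=2, lam=1.0):
    x_fgsm = fgsm_example(model, x, y, eps)
    x_pgd = pgd_example(model, x, y, eps, alpha, pgd_steps)  # few steps, as a sketch
    logits_fgsm = model(x_fgsm)
    with torch.no_grad():                 # PGD branch used only as a regularization target
        logits_pgd = model(x_pgd)
    ce = F.cross_entropy(logits_fgsm, y)              # standard FGSM adversarial training loss
    reg = F.mse_loss(logits_fgsm, logits_pgd)         # encourage similar FGSM/PGD embeddings
    return ce + lam * reg
```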



Robust Single-step Adversarial Training with Regularizer

February 2021 · 14 Reads

The high training cost caused by multi-step adversarial example generation is a major challenge in adversarial training. Previous methods try to reduce the computational burden of adversarial training using single-step adversarial example generation schemes, which effectively improve efficiency but also introduce the problem of catastrophic overfitting, where the robust accuracy against the Fast Gradient Sign Method (FGSM) reaches nearly 100% whereas the robust accuracy against Projected Gradient Descent (PGD) suddenly drops to 0% over a single epoch. To address this problem, we propose a novel Fast Gradient Sign Method with PGD Regularization (FGSMPR) to boost the efficiency of adversarial training without catastrophic overfitting. Our core idea is that single-step adversarial training cannot learn robust internal representations of FGSM and PGD adversarial examples. Therefore, we design a PGD regularization term to encourage similar embeddings of FGSM and PGD adversarial examples. The experiments demonstrate that our proposed method can train a robust deep network for L∞-perturbations with FGSM adversarial training and reduce the gap to multi-step adversarial training.


Figures (from the article below):
FIGURE 1. A demonstration of an adversarial sample [21]. The panda image is recognized as a gibbon with high confidence by the classifier after adding adversarial perturbations.
FIGURE 3. An overview of the model extraction attack and model inversion attack.
FIGURE 5. An overview of adversarial attacks.
FIGURE 7. An overview of the adversarial defense.
FIGURE 8. An overview of the Defense-GAN framework [124].

Privacy and Security Issues in Deep Learning: A Survey
  • Article
  • Full-text available

December 2020 · 7,238 Reads · 272 Citations
IEEE Access

Deep Learning (DL) algorithms based on artificial neural networks have achieved remarkable success and are being extensively applied in a variety of domains, ranging from image classification, automatic driving, and natural language processing to medical diagnosis, credit risk assessment, and intrusion detection. However, privacy and security issues of DL have been revealed: the DL model can be stolen or reverse engineered, sensitive training data can be inferred, and even a recognizable face image of the victim can be recovered. Moreover, recent works have found that DL models are vulnerable to adversarial examples perturbed by imperceptible noise, which can lead the model to predict wrongly with high confidence. In this paper, we first briefly introduce four types of attacks and privacy-preserving techniques in DL. We then review and summarize the attack and defense methods associated with DL privacy and security in recent years. To demonstrate that security threats really exist in the real world, we also review adversarial attacks under physical conditions. Finally, we discuss current challenges and open problems regarding privacy and security issues in DL.
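To make one of the surveyed threats concrete, the toy sketch below illustrates model extraction (cf. FIGURE 3 above): an attacker queries a black-box victim model and distills its soft outputs into a local surrogate. All models, data, and hyperparameters here are synthetic placeholders, not an attack taken from the survey.

```python
# Toy illustration of the model-extraction threat discussed in the survey:
# an attacker queries a black-box "victim" model and distills its soft labels
# into a local surrogate. Everything here is a synthetic placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
victim = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))     # black box to the attacker
surrogate = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))  # attacker's local copy
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(200):
    queries = torch.randn(64, 20)                          # attacker-chosen query inputs
    with torch.no_grad():
        victim_probs = F.softmax(victim(queries), dim=1)   # only the API output is observed
    loss = F.kl_div(F.log_softmax(surrogate(queries), dim=1),
                    victim_probs, reduction="batchmean")   # match the victim's behavior
    opt.zero_grad(); loss.backward(); opt.step()

# After enough queries the surrogate approximates the victim's decision function,
# which is why query auditing and output perturbation appear among the defenses surveyed.
```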


Citations (4)


... For this reason, the proposed method and existing methods are compared; the above-discussed methods attempt to deliver good performance in real-time autonomous vehicle detection but struggle to achieve it. Therefore, the method is developed and compared with methods such as A-RNN [21], LSTM [22], CNN [23], and the YOLOv4 object detection method [12] for evaluation of object detection on autonomous vehicles. ...

Reference:

Feature refinement with DBO: optimizing RFRC method for autonomous vehicle detection
Privacy-Preserving Object Detection With Poisoning Recognition for Autonomous Vehicles
  • Citing Article
  • January 2022

IEEE Transactions on Network Science and Engineering

... Zhang et al. proposed a credible federated learning framework (RobustFL) [27], which realizes the identification of malicious clients by constructing a predictive model based on logit on the server side. Wang et al. also proposed a method based on the distribution of logit to determine the distribution of malicious samples [28]. ...

Model-Agnostic Adversarial Example Detection Through Logit Distribution Learning
  • Citing Conference Paper
  • September 2021

... Academic interest in AI ethics has surged in recent years, leading to numerous studies that discuss ethical principles, fairness, bias, security, and other critical concerns (Hu Yupeng et al. 2021; John-Mathews et al. 2022; Nilsson 2014; Ntoutsi et al., n.d.). However, existing literature reviews are usually constrained to particular subtopics, adopting qualitative methods and providing fragmented insights rather than an overarching perspective on the discipline as a whole (Hagendorff 2020; Jobin et al. 2019; Liu et al. 2021; Mehrabi et al. 2022; Ntoutsi et al., n.d.; Zhang et al. 2021). This piecemeal understanding makes it difficult to map out broad trends, to identify emergent themes, or to assess the collaboration networks within AI ethics research. ...

Privacy and Security Issues in Deep Learning: A Survey

IEEE Access