Meiqi Wang’s research while affiliated with Tsinghua University and other places


Publications (4)


A Case for Application-Aware Space Radiation Tolerance in Orbital Computing

Preprint · July 2024 · 17 Reads

Meiqi Wang · Longnv Xu · [...]

We are witnessing a surge in the use of commercial off-the-shelf (COTS) hardware for cost-effective in-orbit computing, such as deep neural network (DNN) based on-satellite sensor data processing, Earth object detection, and task decision. However, once exposed to harsh space environments, COTS hardware is vulnerable to cosmic radiation and suffers from frequent single-event upsets (SEUs) and multiple-cell upsets (MCUs), both threatening the functionality and correctness of in-orbit computing. Existing hardware and system-software protections against radiation are expensive for resource-constrained COTS nanosatellites and overwhelming for upper-layer applications due to their requirement for heavy resource redundancy and frequent reboots. Instead, we make a case for cost-effective space radiation tolerance using application domain knowledge. Our solution for on-satellite DNN tasks, RedNet, exploits the uneven SEU/MCU sensitivity across DNN layers and the spatial correlation of MCUs for lightweight radiation-tolerant in-orbit AI computing. Our extensive experiments using Chaohu-1 SAR satellite payloads and a hardware-in-the-loop, real-data-driven space radiation emulator validate that RedNet can suppress the influence of radiation errors to ≈ 0 and accelerate on-satellite DNN inference speed by 8.4%-33.0% at negligible extra cost.
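The key idea in the abstract, protecting DNN layers unevenly according to their measured SEU/MCU sensitivity rather than replicating everything, can be pictured with a short sketch. The following is a minimal illustration, not the paper's actual RedNet implementation; the `sensitivity` scores, the `budget` fraction, and the function names are assumed for the example:

```python
import torch
import torch.nn as nn

def build_golden_copies(model: nn.Module, sensitivity: dict, budget: float = 0.2):
    """Keep golden copies of only the most radiation-sensitive parameters
    (the top `budget` fraction by a precomputed SEU/MCU sensitivity score),
    instead of the full redundancy that generic protections require."""
    ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
    protected = set(ranked[: max(1, int(len(ranked) * budget))])
    return {name: p.detach().clone()
            for name, p in model.named_parameters()
            if name in protected}

def scrub(model: nn.Module, golden: dict) -> int:
    """Periodically compare protected parameters against their golden copies
    and restore any tensor corrupted by an upset; returns the repair count."""
    repairs = 0
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in golden and not torch.equal(p, golden[name]):
                p.copy_(golden[name])
                repairs += 1
    return repairs
```

Scrubbing only the upset-sensitive layers keeps memory and time overhead proportional to the protected fraction, which is consistent with the negligible-extra-cost claim above.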


Mitigating Query-based Neural Network Fingerprinting via Data Augmentation

May 2023 · 7 Reads · 2 Citations · ACM Transactions on Sensor Networks

Protecting the intellectual property (IP) of deep neural network (DNN) models has become essential and urgent with the rapidly increasing cost of DNN training. Fingerprinting is one promising IP protection method: it queries suspicious models with specific fingerprint samples and infers and verifies IP by comparing the predictions with pre-defined labels. By exploiting unique features of target models, various DNN fingerprinting methods have been proposed to effectively verify model IP remotely with a meager false-positive ratio. In this paper, we propose a novel attack that mitigates query-based fingerprinting methods using data augmentation. We apply a randomized transformation to input samples to significantly mislead the fingerprint samples' predictions and compromise IP verification. Our attack can preserve model utility with an acceptable accuracy drop in the data-free scenario (i.e., without any samples) or achieve much higher precision in the data-limited scenario (i.e., with a small number of samples from the same distribution). An intensive evaluation on three well-known model structures and three well-known datasets shows that our attack can effectively mitigate five query-based DNN fingerprinting methods published at top-tier conferences.
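As a concrete illustration of the randomized input transformation the abstract describes, the sketch below perturbs every query with a fresh random rotation, crop, and light noise before inference. This is an assumed reconstruction; the paper's exact transformation pipeline and parameters are not given here:

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

# Fresh randomness per call: fingerprint samples, which typically sit close
# to decision boundaries, get pushed off the pre-defined labels the verifier
# expects, while benign inputs are barely affected.
randomized_transform = T.Compose([
    T.RandomRotation(degrees=5),
    T.RandomResizedCrop(size=32, scale=(0.9, 1.0)),
])

def defended_predict(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Apply a randomized transformation to each query batch, then predict."""
    x = randomized_transform(x)
    x = x + 0.01 * torch.randn_like(x)  # light Gaussian noise
    with torch.no_grad():
        return model(x).argmax(dim=1)
```

Note that this framing matches the weakness the citing paper points out below: a verifier that strips or bypasses the preprocessing module recovers the original fingerprint behavior.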


Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks

February 2023 · 38 Reads

Bit-flip attacks (BFAs) have attracted substantial attention recently; an adversary can tamper with a small number of model parameter bits to break the integrity of DNNs. To mitigate such threats, a batch of defense methods have been proposed, focusing on untargeted scenarios. Unfortunately, they either require extra trustworthy applications or make models more vulnerable to targeted BFAs. Countermeasures against targeted BFAs, which are stealthier and more purposeful by nature, are far from well established. In this work, we propose Aegis, a novel defense method to mitigate targeted BFAs. The core observation is that existing targeted attacks focus on flipping critical bits in certain important layers. Thus, we design a dynamic-exit mechanism that attaches extra internal classifiers (ICs) to hidden layers. This mechanism enables input samples to exit early from different layers, which effectively upsets the adversary's attack plans. Moreover, the dynamic-exit mechanism randomly selects ICs for predictions during each inference, significantly increasing the attack cost for adaptive attacks in which all defense mechanisms are transparent to the adversary. We further propose a robustness training strategy that adapts ICs to the attack scenarios by simulating BFAs during the IC training phase, increasing model robustness. Extensive evaluations over four well-known datasets and two popular DNN structures reveal that Aegis effectively mitigates different state-of-the-art targeted attacks, reducing the attack success rate by 5-10×, significantly outperforming existing defense methods.
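The dynamic-exit mechanism can be sketched in a few lines: internal classifiers hang off hidden layers, and each inference randomly picks one as the exit, so the adversary cannot predict which layers' bits will matter. This is an illustrative reconstruction under assumed block and channel shapes, not the authors' code:

```python
import random
import torch
import torch.nn as nn

class DynamicExitNet(nn.Module):
    def __init__(self, blocks: list, ic_channels: list, num_classes: int):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)  # backbone stages
        self.pool = nn.AdaptiveAvgPool2d(1)
        # One lightweight internal classifier (IC) per stage.
        self.ics = nn.ModuleList(nn.Linear(c, num_classes) for c in ic_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Random exit per inference: flipping bits in layers after the chosen
        # exit has no effect on this prediction, raising the attack cost.
        exit_idx = random.randrange(len(self.blocks))
        for i, block in enumerate(self.blocks):
            x = block(x)
            if i == exit_idx:
                return self.ics[i](self.pool(x).flatten(1))
```

The robustness-training step in the abstract would then train each IC on activations from models with simulated bit flips, so that early exits stay accurate even under attack.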


Mitigating Targeted Bit-Flip Attacks via Data Augmentation: An Empirical Study

July 2022 · 5 Reads · 1 Citation · Lecture Notes in Computer Science

As deep neural networks (DNNs) become more widely used in various safety-critical applications, protecting their security has become an urgent and important task. Recently, a critical security issue has been identified: DNN models are vulnerable to targeted bit-flip attacks. This kind of sophisticated attack injects backdoors into models by flipping only a few bits of carefully chosen model parameters. In this paper, we propose a gradient-obfuscation-based data augmentation method to mitigate these targeted bit-flip attacks as an empirical study. In particular, we mitigate such attacks by preprocessing only the input samples, breaking the link between the trigger features carried by input samples and the modified model parameters. Moreover, our method maintains acceptable accuracy on benign samples. We show that our method is effective against two targeted bit-flip attacks through experiments on two widely used structures (ResNet-20 and VGG-16) with one well-known dataset (CIFAR-10).
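The preprocessing-only defense described above can be illustrated with a single augmentation: randomly rescaling the input and padding it back, which spatially displaces a backdoor trigger so its features no longer line up with the flipped parameter bits. The function below is a hedged sketch of this idea; the paper's actual augmentation set and parameters may differ:

```python
import random
import torch
import torch.nn.functional as F

def resize_and_pad(x: torch.Tensor, out_size: int = 32,
                   low: float = 0.8, high: float = 1.0) -> torch.Tensor:
    """Randomly shrink a batch of images (N, C, H, W) and pad it back to
    `out_size` at a random offset, breaking trigger/parameter alignment
    while keeping benign content mostly intact."""
    scale = random.uniform(low, high)
    new = max(1, int(out_size * scale))
    x = F.interpolate(x, size=(new, new), mode="bilinear", align_corners=False)
    pad_left = random.randint(0, out_size - new)
    pad_top = random.randint(0, out_size - new)
    return F.pad(x, (pad_left, out_size - new - pad_left,
                     pad_top, out_size - new - pad_top))
```

Applied at inference time only, such a transform needs no retraining, which matches the empirical, preprocessing-only framing of the study.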

Citations (1)


... The robustness of state-of-the-art fingerprinting techniques has not been thoroughly verified because they were only evaluated against attacks that were not specifically designed to defeat fingerprint detection. A recent study by Wang et al. (2023) proposed applying preprocessing to mitigate DNN fingerprinting. However, their attack can be easily identified and made ineffective by removing the preprocessing module. ...

Reference: IPRemover: A Generative Model Inversion Attack against Deep Neural Network Fingerprinting and Watermarking

Citing article: Mitigating Query-based Neural Network Fingerprinting via Data Augmentation · May 2023 · ACM Transactions on Sensor Networks