Yijun Yang
The Chinese University of Hong Kong | CUHK · Department of Computer Science and Engineering
Doctor of Engineering
About
16 Publications · 1,016 Reads
146 Citations
Introduction
I am a third-year Ph.D. student in the CUhk REliable Laboratory (CURE), Department of Computer Science and Engineering, The Chinese University of Hong Kong, supervised by Prof. Qiang Xu. Before that, I received my M.Phil. in EE from Tsinghua University in 2019. My current research interests span AI security and deep learning, including adversarial example defense, out-of-distribution detection, deep generative models, and self-supervised learning.
Additional affiliations
August 2016 - August 2019
Publications (16)
In recent years, Text-to-Image (T2I) models have seen remarkable advancements, gaining widespread adoption. However, this progress has inadvertently opened avenues for potential misuse, particularly in generating inappropriate or Not-Safe-For-Work (NSFW) content. Our work introduces MMA-Diffusion, a framework that presents a significant and realist...
This paper proposes a novel out-of-distribution (OOD) detection framework named MoodCat for image classifiers. MoodCat masks a random portion of the input image and uses a generative model to synthesize the masked image to a new image conditioned on the classification result. It then calculates the semantic difference between the original image and...
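The mask-synthesize-compare pipeline the MoodCat abstract describes can be illustrated with a toy sketch. Everything below is a hypothetical stand-in: the classifier is a threshold rule, the class-conditional generator is replaced by fixed class templates, and the semantic difference is plain mean-squared error; the actual paper uses learned models throughout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Class-conditional "templates" standing in for a learned conditional
# generative model (hypothetical stand-in, not the paper's generator).
TEMPLATES = {0: np.zeros((8, 8)), 1: np.ones((8, 8))}

def classify(img):
    # Hypothetical classifier: predicts class 1 if mean intensity > 0.5.
    return int(img.mean() > 0.5)

def mask_random_patch(img, size=4):
    # Step 1: mask a random square portion of the input image.
    out = img.copy()
    y = rng.integers(0, img.shape[0] - size + 1)
    x = rng.integers(0, img.shape[1] - size + 1)
    out[y:y + size, x:x + size] = 0.5  # neutral fill value
    return out

def synthesize(masked, label):
    # Step 2: synthesize a new image conditioned on the predicted label.
    # Stand-in: just return the class template for that label.
    return TEMPLATES[label]

def ood_score(img):
    # Step 3: semantic difference between original and synthesis; a large
    # difference suggests the input is out-of-distribution for its label.
    label = classify(img)
    masked = mask_random_patch(img)
    recon = synthesize(masked, label)
    return float(np.mean((img - recon) ** 2))

in_dist = np.ones((8, 8))        # consistent with class 1
ood = rng.uniform(size=(8, 8))   # unstructured input
print(ood_score(in_dist) < ood_score(ood))  # in-distribution scores lower
```

The key design point the abstract hints at: because synthesis is conditioned on the classification result, an input whose content contradicts its predicted label reconstructs poorly, and the mismatch becomes the detection signal.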
Deep Neural Networks (DNNs) have achieved excellent performance in various fields. However, DNNs' vulnerability to Adversarial Examples (AE) hinders their deployment in safety-critical applications. This paper presents a novel AE detection framework, named BEYOND, for trustworthy predictions. BEYOND performs the detection by distinguishing the AE'...
Adversarial examples (AEs) pose severe threats to the applications of deep neural networks (DNNs) to safety-critical domains, e.g., autonomous driving. While there has been a vast body of AE defense solutions, to the best of our knowledge, they all suffer from some weaknesses, e.g., defending against only a subset of AEs or causing a relatively hig...
Machine learning with deep neural networks (DNNs) has become one of the foundation techniques in many safety-critical systems, such as autonomous vehicles and medical diagnosis systems. DNN-based systems, however, are known to be vulnerable to adversarial examples (AEs) that are maliciously perturbed variants of legitimate inputs. While there has b...
Lightweight ciphers have a wide range of applications, such as IoT, anti-counterfeiting labels, and passive RFID, and have drawn considerable attention in recent years. The most significant metric for lightweight cryptography is area. To implement a lightweight cipher with the smallest area, to the best of our knowledge, the bit-serial structure i...
Side-channel collision attacks are more powerful than traditional side-channel attacks, since they require neither knowing nor building a leakage model. Most previously proposed attack strategies need large numbers of power traces, incur high computational complexity, and are sensitive to errors, which seriously restricts attack efficiency. In this paper...