About
Publications: 13
Reads: 699
Citations: 24
Publications (13)
Crafting adversarial examples is crucial for evaluating and enhancing the robustness of Deep Neural Networks (DNNs), presenting a challenge equivalent to maximizing a non-differentiable 0-1 loss function. However, existing single-objective methods, namely adversarial attacks that focus on a surrogate loss function, do not fully harness the benefits of e...
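For context, the single-objective surrogate approach this abstract contrasts against looks roughly like the sketch below, assuming a PyTorch classifier `model`: the non-differentiable 0-1 loss is replaced by a differentiable cross-entropy surrogate and ascended with projected gradient steps. This is a generic PGD-style attack for illustration, not the paper's method.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Ascend a differentiable surrogate (cross-entropy) of the 0-1 loss
    within an L-infinity ball of radius eps around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)       # surrogate for the 0-1 loss
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # maximize the surrogate
        # project back into the eps-ball and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```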
The escalating threat of adversarial attacks on deep learning models, particularly in security-critical fields, has highlighted the need for robust deep learning systems. Conventional methods for evaluating their robustness rely on adversarial accuracy, which measures model performance under a specific perturbation intensity. However, this singu...
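A minimal sketch of the conventional evaluation criticized here, reusing the illustrative `pgd_attack` above: adversarial accuracy is a single number measured at one fixed perturbation budget `eps`, which is exactly why it says nothing about behavior at other intensities (assumes a PyTorch `model` and a `loader` yielding image/label batches).

```python
import torch

def adversarial_accuracy(model, loader, eps=8/255):
    """Accuracy on PGD adversarial examples at a single perturbation budget."""
    correct, total = 0, 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps)  # one fixed intensity only
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```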
Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence, capable of processing and understanding extensive human knowledge to enhance problem-solving across various domains. This paper explores the potential of LLMs to drive the discovery of symbolic solutions within scientific and engineering disciplines, where...
Algorithm Design (AD) is crucial for effective problem-solving across various domains. The advent of Large Language Models (LLMs) has notably enhanced the automation and innovation within this field, offering new perspectives and superior solutions. Over the past three years, the integration of LLMs into AD (LLM4AD) has progressed significantly, fi...
The escalating threat of adversarial attacks on deep learning models, particularly in security-critical fields, has underscored the need for robust deep learning systems. Conventional robustness evaluations have relied on adversarial accuracy, which measures a model's performance under a specific perturbation intensity. However, this singular metri...
In the rapidly evolving field of machine learning, adversarial attacks present a significant challenge to model robustness and security. Decision-based attacks, which only require feedback on the decision of a model rather than detailed probabilities or scores, are particularly insidious and difficult to defend against. This work introduces L-AutoD...
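To make the decision-based threat model concrete, here is a toy hard-label loop assuming only a hypothetical `predict_label(x) -> int` oracle: the attacker starts from an adversarial point and keeps random steps that stay misclassified while shrinking the perturbation. This is a boundary-attack-flavored illustration, not L-AutoDA itself.

```python
import torch

def decision_based_attack(predict_label, x, y_true, queries=1000, step=0.1, shrink=0.05):
    """Hard-label attack: only the model's decision is queried, never scores."""
    x_adv = torch.rand_like(x)  # assumed to be a misclassified starting point
    for _ in range(queries):
        # random exploration step, then contract toward the original image
        candidate = (x_adv + step * torch.randn_like(x)).clamp(0, 1)
        candidate = x + (1 - shrink) * (candidate - x)
        if predict_label(candidate) != y_true:          # decision feedback only
            if (candidate - x).norm() < (x_adv - x).norm():
                x_adv = candidate
    return x_adv
```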
Black-box query-based attacks constitute significant threats to Machine Learning as a Service (MLaaS) systems since they can generate adversarial examples without accessing the target model's architecture and parameters. Traditional defense mechanisms, such as adversarial training, gradient masking, and input transformations, either impose substa...
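As a reference point for the trade-off mentioned here, one of the traditional defenses named above (input transformation) can be sketched as a wrapper that randomly resizes each query before inference, at the cost of extra preprocessing and possible clean-accuracy loss. This is a generic illustration under a PyTorch assumption, not the defense proposed in the paper.

```python
import torch
import torch.nn.functional as F

class RandomResizeDefense(torch.nn.Module):
    """Input-transformation defense: randomize each query's resolution."""
    def __init__(self, model, min_scale=0.9):
        super().__init__()
        self.model = model
        self.min_scale = min_scale

    def forward(self, x):
        scale = self.min_scale + (1 - self.min_scale) * torch.rand(1).item()
        size = max(1, int(x.shape[-1] * scale))
        x_down = F.interpolate(x, size=(size, size), mode="bilinear", align_corners=False)
        x_up = F.interpolate(x_down, size=tuple(x.shape[-2:]), mode="bilinear", align_corners=False)
        return self.model(x_up)
```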
Hypervolume subset selection (HSS) has received significant attention since it has a strong connection with evolutionary multi-objective optimization (EMO), such as environmental selection and post-processing to identify representative solutions for decision-makers. The goal of HSS is to find the optimal subset that maximizes the hypervolume indicato...
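A common baseline for HSS is greedy insertion by hypervolume gain; a small self-contained sketch for a two-objective minimization problem follows (a generic baseline for illustration, not the algorithm proposed in the paper).

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of 2-D points (minimization) w.r.t. ref."""
    hv, min_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):                # ascending in the first objective
        if f1 < ref[0] and f2 < min_f2:
            hv += (ref[0] - f1) * (min_f2 - f2)  # newly dominated L-shaped slice
            min_f2 = f2
    return hv

def greedy_hss(points, k, ref):
    """Pick k points, each time adding the one with the largest hypervolume gain."""
    chosen = []
    for _ in range(k):
        candidates = [p for p in points if p not in chosen]
        chosen.append(max(candidates, key=lambda p: hypervolume_2d(chosen + [p], ref)))
    return chosen

# Example: from four nondominated points, keep the two covering the most volume.
print(greedy_hss([(1, 5), (2, 3), (4, 2), (5, 1)], k=2, ref=(6, 6)))
```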
In many real-world applications, the Pareto Set (PS) of a continuous multiobjective optimization problem can be a piecewise continuous manifold. A decision maker may want to find a solution set that approximates a small part of the PS and requires that the solutions in this set share some similarities. This paper makes a first attempt to address this is...
GPS has empowered billions of users and various critical infrastructures with its positioning and time services. However, GPS spoofing attacks have also become a growing threat to GPS-dependent systems. Existing detection methods either require expensive hardware modifications to current GPS devices or lack basic robustness against sophisticate...
Question (1)
Black-box query-based adversarial attacks pose threats to real-world deep learning models hosted in the cloud. While developing defense mechanisms against these attacks, I found that current libraries (Foolbox, ART) provide somewhat slow implementations of attack algorithms. This slowness may stem from the following factors:
- The inference speed of the victim model itself.
- The running efficiency of the implementation of each algorithm.
I am wondering whether there is a need for me to re-implement these algorithms and accelerate their major parts for the academic community. Could someone with a related background provide some comments?
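One way to tell the two factors apart before committing to a re-implementation, sketched under a PyTorch assumption; `run_attack` is a hypothetical placeholder for a Foolbox/ART attack invocation, not a real API from either library:

```python
import time
import torch

def profile_attack(model, x, y, run_attack, n_queries=1000):
    """Compare pure victim inference time against end-to-end attack time
    for the same query budget; the difference is implementation overhead."""
    model.eval()
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(n_queries):
            model(x)                               # victim inference cost alone
        inference_time = time.perf_counter() - start

    start = time.perf_counter()
    run_attack(model, x, y, n_queries=n_queries)   # hypothetical attack entry point
    attack_time = time.perf_counter() - start

    print(f"inference only: {inference_time:.2f}s  "
          f"full attack: {attack_time:.2f}s  "
          f"overhead: {attack_time - inference_time:.2f}s")
```

If the overhead term dominates, vectorizing the per-query bookkeeping and batching queries through the victim model is usually the first acceleration worth trying before a full rewrite.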