Phoenix Williams’s research while affiliated with University of Exeter and other places

Publications (7)


Evolutionary Art Attack For Black-Box Adversarial Example Generation
  • Article
  • January 2024 · 13 Reads · 2 Citations

IEEE Transactions on Evolutionary Computation

Phoenix Neale Williams · Ke Li · Geyong Min

Deep neural networks (DNNs) have achieved remarkable performance in various tasks, including image classification. However, recent research has revealed the susceptibility of trained DNNs to subtle perturbations introduced into input images. Addressing these vulnerabilities is pivotal, leading to a significant area of study focused on developing attack algorithms capable of generating potent adversarial images. In scenarios where access to gradient information is restricted (the black-box scenario), many existing methods introduce optimized perturbations to each individual pixel of an image to cause trained DNNs to misclassify. Due to the high-dimensional nature of this approach, however, current methods have inherent limitations. In contrast, our proposed approach constructs perturbations by concatenating a series of overlapping semi-transparent shapes. By optimizing these shapes' characteristics, we generate perturbations that result in the desired misclassification by the DNN. In a series of attacks on state-of-the-art DNNs trained on the CIFAR-10 and ImageNet datasets, our method consistently outperforms existing attack algorithms in terms of both query efficiency and success rate.
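The abstract describes the shape-based perturbation only at a high level. As a rough illustration, here is a minimal Python sketch of how a perturbation built from overlapping semi-transparent shapes might be rendered; the function name (render_shapes) and the seven-value shape encoding are assumptions made for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch: render a perturbation as overlapping semi-transparent
# circles, parameterised so an evolutionary algorithm can optimise them.
# The encoding and names are illustrative, not the paper's API.
import numpy as np

def render_shapes(image, shapes):
    """Alpha-blend a list of circles onto a copy of `image` (H x W x 3, values in [0, 1]).

    Each shape is (cx, cy, radius, r, g, b, alpha) with all values in [0, 1];
    coordinates and radius are scaled to the image size.
    """
    h, w, _ = image.shape
    out = image.copy()
    yy, xx = np.mgrid[0:h, 0:w]
    for cx, cy, rad, r, g, b, a in shapes:
        mask = (xx - cx * w) ** 2 + (yy - cy * h) ** 2 <= (rad * min(h, w)) ** 2
        colour = np.array([r, g, b])
        # Standard alpha compositing of the shape over the current image.
        out[mask] = (1 - a) * out[mask] + a * colour
    return np.clip(out, 0.0, 1.0)

# Example: ten random shapes, as an EA's initial candidate might look.
rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))                         # stand-in for a CIFAR-10 image
candidate = rng.random((10, 7)) * [1, 1, 0.2, 1, 1, 1, 0.3]  # small radii, low alpha
adversarial = render_shapes(image, candidate)
```

The key design point the abstract hints at: the search space is the low-dimensional shape parameters (here 10 × 7 values) rather than every pixel, which is what makes the black-box optimization tractable.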




Sparse Adversarial Attack via Bi-objective Optimization

  • March 2023 · 16 Reads · 2 Citations

Lecture Notes in Computer Science

Neural classifiers have achieved near human-level performance when applied to several real-world tasks. Despite their successes, recent works have demonstrated their vulnerability to adversarial attacks. In particular, image classifiers have been shown to be vulnerable to fine-tuned noise that perturbs a small number of pixels, known as sparse attacks. To generate such perturbations, current works either prioritise query efficiency by leaving the size of the perturbation unbounded, or minimize its size while allowing a large number of pixels to be perturbed. Addressing the drawbacks of both approaches, we propose a method of conducting query-efficient sparse adversarial attacks that minimizes the number of perturbed pixels by formulating the attack as a constrained bi-objective optimization problem. In the single-objective, unbounded, query-efficient scenario, our method outperforms state-of-the-art sparse attack algorithms in terms of success rate and query efficiency. When also minimizing the number of perturbed pixels in the bi-objective setting, the proposed method generates adversarial perturbations that affect fewer pixels than its state-of-the-art competitors.
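As a rough illustration of the bi-objective formulation the abstract describes, the sketch below evaluates the two competing objectives for a single candidate: an attack (margin) loss computed from queried class probabilities, and the number of perturbed pixels. The function name and the particular margin-loss form are assumptions for illustration, not the paper's formulation.

```python
# Hypothetical sketch of the two objectives in a bi-objective sparse attack:
# (1) an attack loss derived from the model's predicted probabilities and
# (2) the number of perturbed pixels (an L0-style sparsity count).
import numpy as np

def attack_objectives(model_probs, true_label, perturbation):
    """Return (attack_loss, n_perturbed_pixels) for one candidate.

    model_probs:   class probabilities the black-box model returned for the
                   perturbed image (obtained by querying, not by gradients).
    perturbation:  H x W x C array added to the clean image.
    """
    # Objective 1: margin loss; becomes negative once the image is misclassified.
    others = np.delete(model_probs, true_label)
    attack_loss = model_probs[true_label] - others.max()
    # Objective 2: count pixels changed in any channel (sparsity).
    n_pixels = int(np.any(perturbation != 0, axis=-1).sum())
    return attack_loss, n_pixels

# Example: probabilities from one query, plus a two-pixel perturbation.
probs = np.array([0.6, 0.3, 0.1])
pert = np.zeros((32, 32, 3))
pert[0, 0, 0] = pert[5, 5, 1] = 0.1
print(attack_objectives(probs, true_label=0, perturbation=pert))  # (0.3, 2)
```

A bi-objective optimizer would then search for candidates that trade off these two values, rather than fixing one as a hard budget as single-objective attacks do.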



Art-Attack: Black-Box Adversarial Attack via Evolutionary Art

  • March 2022 · 22 Reads

Deep neural networks (DNNs) have achieved state-of-the-art performance in many tasks but have shown extreme vulnerabilities to attacks generated by adversarial examples. Many works go with a white-box attack that assumes total access to the targeted model including its architecture and gradients. A more realistic assumption is the black-box scenario where an attacker only has access to the targeted model by querying some input and observing its predicted class probabilities. Different from most prevalent black-box attacks that make use of substitute models or gradient estimation, this paper proposes a gradient-free attack by using a concept of evolutionary art to generate adversarial examples that iteratively evolves a set of overlapping transparent shapes. To evaluate the effectiveness of our proposed method, we attack three state-of-the-art image classification models trained on the CIFAR-10 dataset in a targeted manner. We conduct a parameter study outlining the impact the number and type of shapes have on the proposed attack's performance. In comparison to state-of-the-art black-box attacks, our attack is more effective at generating adversarial examples and achieves a higher attack success rate on all three baseline models.
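To make the query-based, gradient-free loop concrete, here is a hedged (1+1)-style sketch: mutate the shape parameters, query the black-box model, and keep the child only if the target-class probability improves. `query_model` is a toy stand-in, and the names `evolve` and `sigma` are illustrative; the paper's actual evolutionary operators are more sophisticated. A renderer such as the `render_shapes` sketch above can be passed as `render`.

```python
# Hypothetical sketch of a gradient-free query loop for a targeted attack.
import numpy as np

rng = np.random.default_rng(1)

def query_model(image, target):
    # Placeholder black box: in practice this returns the classifier's
    # predicted probability for class `target` given `image`.
    return float(image.mean())  # toy score so the sketch runs end to end

def evolve(image, target, render, n_shapes=10, iters=200, sigma=0.05):
    best = rng.random((n_shapes, 7))                   # shape parameters in [0, 1]
    best_score = query_model(render(image, best), target)
    for _ in range(iters):
        # Gaussian mutation of all shape parameters, clipped to the valid range.
        child = np.clip(best + sigma * rng.standard_normal(best.shape), 0, 1)
        score = query_model(render(image, child), target)
        if score > best_score:                         # maximise target-class probability
            best, best_score = child, score
    return best, best_score

# Usage (with a renderer like the render_shapes sketch earlier):
# shapes, score = evolve(image, target=3, render=render_shapes)
```

Each iteration costs exactly one model query, which is why query efficiency is the natural metric for comparing such attacks.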


Citations (6)


... Open-box methods, such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), require access to the model's gradients [9], [30]. Conversely, we employ a black-box or closed-box approach using an evolutionary algorithm (EA), specifically NSGA-III [86], to generate adversarial perturbations without necessitating gradient calculations [87]. ...

Reference:

Preventing Adversarial AI Attacks Against Autonomous Situational Awareness: A Maritime Case Study
Evolutionary Art Attack For Black-Box Adversarial Example Generation
  • Citing Article
  • January 2024

IEEE Transactions on Evolutionary Computation

... Computational creativity and evolutionary optimization have been merrily married for quite some time now. Having their own conference exactly on the intersection (EvoMUSART within EvoSTAR), it hosts a variety of creative endeavours [3], [4], but is by no means the only conference that currently supports the topic [5], [6], and the more rigorous approaches have even reached journals nowadays [7], [8]. So, optimization in computational creativity is on the rise, like everything in artificial intelligence, and an often seen avenue is the various approximation methods on classical paintings by means of brush strokes, transparent polygons, or other geometric shapes [9], [10]. ...

A Surrogate Assisted Evolutionary Strategy for Image Approximation by Density-Ratio Estimation
  • Citing Conference Paper
  • July 2023

... The literature [53] used the adversarial contrastive learning method to attack the input data structure, gradient information, and other parts of the GNN and verified that the adversarial learning strategy helps improve the model's robustness. The other type is the black-box attack [54,55], where the attacker cannot obtain any model information when generating perturbations. For example, literature [54] proposes a population-based heuristic solution method for sparse adversarial attacks, which achieves a high attack success rate with only a small query budget. ...

Black-Box Sparse Adversarial Attack via Multi-Objective Optimisation
  • Citing Conference Paper
  • June 2023

CVPR Proceedings

... Multiobjective Optimization for Robustness: Recent research efforts have advanced the application of multiobjective optimization in crafting adversarial examples. These efforts have expanded the original single-objective paradigm to include additional aims, yielding more diverse and robust adversarial examples [47], [48]. Additionally, multiobjective optimization has been employed in training models, offering a defensive enhancement to adversarial robustness [49]. ...

Sparse Adversarial Attack via Bi-objective Optimization
  • Citing Chapter
  • March 2023

Lecture Notes in Computer Science

... Based on this problem formulation, we have four basic definitions. Evolutionary algorithms have been widely accepted as an effective method for multi-objective optimization problems, with applications in adversarial robustness [69][70][71][72][73][74][75], parameter control [76][77][78][79][80][81][82][83], software engineering [19,21,84,85], automated machine learning [86][87][88][89], smart grid [90][91][92][93], networking [13,14,[94][95][96][97], and large language model tuning [98]. ...

Black-box adversarial attack via overlapped shapes
  • Citing Conference Paper
  • July 2022

... This can be explained as the ineffectiveness of the surrogate modeling in a high-dimensional scenario with a strictly limited amount of training data. In this case, the local optima estimated from a less reliable model can be misleading [120][121][122][123]. ...

Large-Scale Evolutionary Optimization via Multi-Task Random Grouping
  • Citing Conference Paper
  • October 2021