Marla Kennedy’s scientific contributions


Publications (2)


Figure 1. Adversarial images can have manipulated explanations. Image A: adversarial image whose explanation reveals the target class. Image B: adversarial image whose manipulated explanation hides the target class. Manipulation based on Dombrowski et al. (2019).
Figure 2. A HITL labeling interface with explanation maps.
Overcoming Adversarial Attacks for Human-in-the-Loop Applications
  • Preprint

June 2023 · 20 Reads · Marla Kennedy · Platon Lukyanenko

Including human analysis has the potential to positively affect the robustness of Deep Neural Networks and is relatively unexplored in the Adversarial Machine Learning literature. Neural network visual explanation maps have been shown to be prone to adversarial attacks. Further research is needed to select robust visualizations of explanations for the image analyst to evaluate a given model. These factors greatly impact Human-In-The-Loop (HITL) evaluation tools due to their reliance on adversarial images, including explanation maps and measurements of robustness. We believe models of human visual attention may improve the interpretability and robustness of human-machine imagery analysis systems. Our challenge remains: how can HITL evaluation be made robust in this adversarial landscape?
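
As a rough illustration of the manipulation shown in Figure 1, the sketch below follows the general recipe of Dombrowski et al. (2019): perturb an image so that its gradient-based explanation approaches a chosen target map while the model's output stays close to the original. This is not the authors' code; the model, image size, helper names, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of explanation manipulation
# in the style of Dombrowski et al. (2019): optimize a perturbed image whose
# gradient-based explanation matches a target map while the logits stay put.
import torch
import torch.nn as nn

def gradient_explanation(model, x, class_idx):
    """Saliency map: gradient of the class score w.r.t. the input pixels."""
    x = x.requires_grad_(True)
    score = model(x)[0, class_idx]
    # create_graph=True lets us later differentiate through the explanation
    (grad,) = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs().sum(dim=1)  # collapse channels -> 1 x H x W map

def manipulate_explanation(model, x_clean, target_map, class_idx,
                           steps=200, lr=1e-3, gamma=1.0):
    """Find x_adv whose explanation resembles target_map but whose
    prediction matches the clean image (illustrative hyperparameters)."""
    with torch.no_grad():
        logits_clean = model(x_clean)
    x_adv = x_clean.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        expl = gradient_explanation(model, x_adv, class_idx)
        loss = ((expl - target_map) ** 2).mean() \
            + gamma * ((model(x_adv) - logits_clean) ** 2).mean()
        loss.backward()
        opt.step()
    return x_adv.detach()

# Toy stand-in model with smooth (softplus) activations, since Dombrowski
# et al. smooth ReLUs so the explanation itself is differentiable.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64),
                      nn.Softplus(), nn.Linear(64, 10))
x = torch.rand(1, 3, 32, 32)        # placeholder "clean" image
target = torch.zeros(1, 32, 32)     # e.g. a map that hides the target class
x_adv = manipulate_explanation(model, x, target, class_idx=3)
```

In a real experiment the toy network and random image above would be replaced by the trained classifier, the explanation method under study, and the target map one wants the analyst to see.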


Overcoming Adversarial Attacks for Human-in-the-Loop Applications

July 2022 · 50 Reads · 1 Citation

Including human analysis has the potential to positively affect the robustness of Deep Neural Networks and is relatively unexplored in the Adversarial Machine Learning literature. Neural network visual explanation maps have been shown to be prone to adversarial attacks. Further research is needed to select robust visualizations of explanations for the image analyst to evaluate a given model. These factors greatly impact Human-In-The-Loop (HITL) evaluation tools due to their reliance on adversarial images, including explanation maps and measurements of robustness. We believe models of human visual attention may improve the interpretability and robustness of human-machine imagery analysis systems. Our challenge remains: how can HITL evaluation be made robust in this adversarial landscape?
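
The "measurements of robustness" mentioned above are not defined on this page; one plausible proxy (an assumption, not the paper's metric) is how stable an explanation map remains under small input perturbations, for example the rank correlation between saliency maps of a clean image and its perturbed copies:

```python
# Assumed illustration: score explanation stability as the average Spearman
# rank correlation between the clean saliency map and maps of noisy copies.
import torch

def saliency_map(model, x, class_idx):
    """Gradient-of-score saliency map for a single-image batch."""
    x = x.clone().requires_grad_(True)
    model(x)[0, class_idx].backward()
    return x.grad.abs().sum(dim=1).flatten()

def spearman(a, b):
    """Spearman rank correlation between two flattened maps."""
    ra = a.argsort().argsort().float()
    rb = b.argsort().argsort().float()
    ra, rb = ra - ra.mean(), rb - rb.mean()
    return (ra @ rb / (ra.norm() * rb.norm())).item()

def explanation_robustness(model, x, class_idx, eps=0.01, trials=10):
    """Average correlation of explanations under random perturbations;
    values near 1.0 suggest a more stable explanation."""
    base = saliency_map(model, x, class_idx)
    scores = []
    for _ in range(trials):
        x_noisy = (x + eps * torch.randn_like(x)).clamp(0, 1)  # assumes [0, 1] images
        scores.append(spearman(base, saliency_map(model, x_noisy, class_idx)))
    return sum(scores) / len(scores)
```

Random Gaussian noise is only a stand-in here; the same score could be computed against adversarially perturbed images to probe the attack scenario the abstract describes.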