René Raab
Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) · Machine Learning and Data Analytics Lab

Master of Science

About

10 Publications · 1,367 Reads · 8 Citations
Publications (10)
Preprint
Full-text available
The reliability of neural networks is essential for their use in safety-critical applications. Existing approaches generally aim at improving the robustness of neural networks to either real-world distribution shifts (e.g., common corruptions and perturbations, spatial transformations, and natural adversarial examples) or worst-case distribution sh...
Article
Full-text available
The monitoring of animals under human care is a crucial tool for biologists and zookeepers to keep track of the animals’ physical and psychological health. Additionally, it enables the analysis of observed behavioral changes and helps to unravel underlying reasons. Enhancing our understanding of animals ensures and improves ex situ animal welfare a...
Conference Paper
Full-text available
The vulnerability of deep neural networks to small and even imperceptible perturbations has become a central topic in deep learning research. Although several sophisticated defense mechanisms have been introduced, most were later shown to be ineffective. However, a reliable evaluation of model robustness is mandatory for deployment in safety-critic...
Preprint
Full-text available
Progress in making neural networks more robust against adversarial attacks is mostly marginal, despite the great efforts of the research community. Moreover, the robustness evaluation is often imprecise, making it difficult to identify promising approaches. We analyze the classification decisions of 19 different state-of-the-art neural networks tra...
Chapter
Full-text available
Despite the large success of deep neural networks (DNNs) in recent years, most neural networks still lack mathematical guarantees in terms of stability. For instance, DNNs are vulnerable to small or even imperceptible input perturbations, so-called adversarial examples, that can cause false predictions. This instability can have severe consequences...
Preprint
Full-text available
Despite the large success of deep neural networks (DNNs) in recent years, most neural networks still lack mathematical guarantees in terms of stability. For instance, DNNs are vulnerable to small or even imperceptible input perturbations, so-called adversarial examples, that can cause false predictions. This instability can have severe consequences...
Preprint
Full-text available
The susceptibility of deep neural networks to untrustworthy predictions, including out-of-distribution (OOD) data and adversarial examples, still prevents their widespread use in safety-critical applications. Most existing methods either require re-training of a given model to achieve robust identification of adversarial attacks or are limited to...
Preprint
Full-text available
The vulnerability of deep neural networks to small and even imperceptible perturbations has become a central topic in deep learning research. Although several sophisticated defense mechanisms have been introduced, most were later shown to be ineffective. However, a reliable evaluation of model robustness is mandatory for deployment in safety-critic...
Preprint
Adversarial training is the most successful empirical method for increasing the robustness of neural networks against adversarial attacks. However, the most effective approaches, such as training with Projected Gradient Descent (PGD), are accompanied by high computational complexity. In this paper, we present two ideas that, in combination, enable adve...
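
The abstract above refers to adversarial training with Projected Gradient Descent (PGD) and its high computational cost. As a rough illustration only, and not the method proposed in the paper, a standard PGD adversarial training loop in PyTorch might look like the following sketch; the model, data, perturbation budget eps, step size alpha, and number of inner steps are all assumed placeholders.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Start from a random point inside the L-infinity ball of radius eps around x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # gradient ascent step on the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                 # stay in the valid input range
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    # One epoch of training on PGD adversarial examples instead of clean inputs.
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)               # inner maximization: worst-case inputs
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)       # outer minimization: train on them
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    # Tiny smoke test on random data (purely illustrative).
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    data = [(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))) for _ in range(2)]
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    adversarial_training_epoch(model, data, optimizer)

With steps=10, every training batch costs roughly eleven forward/backward passes instead of one, which is the computational overhead the abstract alludes to.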

Project (1)
In this project, we analyze mathematical properties of deep neural networks and graph-based learning techniques and propose new ideas to advance this field. We are interested in the robustness of trained neural networks under adversarial attacks and in sparse solutions for network training. Furthermore, we analyze robust methods for graph-based semi-supervised learning and their continuum limits.