Figure 6
Increase of conservatism with the robustness parameter ε. An agent (orange) following the CARRL policy avoids a dynamic, non-cooperative obstacle (blue) that is observed without noise. An increasing robustness parameter ε (left to right) increases the agent's conservatism, i.e., the agent avoids the obstacle with a greater safety distance.

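To make the trend in the figure concrete, below is a minimal, hypothetical sketch (not the paper's implementation) of how worst-case action selection becomes more conservative as the robustness parameter ε grows. The helpers nominal_q and q_lower_bound, and the penalty model inside them, are illustrative assumptions: the certified lower bound is modeled as the nominal Q-value minus a penalty that increases with ε and with proximity to the obstacle.

```python
import numpy as np

# Toy illustration (assumed model, not the authors' code): candidate actions
# are lateral safety distances around the obstacle. The nominal Q-value
# prefers the shortest detour, but close passes are modeled as more sensitive
# to observation error, so their certified lower bound degrades faster as the
# perturbation radius epsilon grows.

def nominal_q(offset):
    # Higher value for passing closer to the obstacle (shorter path).
    return 1.0 - 0.3 * offset

def q_lower_bound(offset, epsilon):
    # Assumed worst-case bound: nominal value minus an epsilon-scaled penalty
    # that is larger for small safety distances.
    sensitivity = 1.0 / (0.2 + offset)
    return nominal_q(offset) - epsilon * sensitivity

offsets = np.array([0.2, 0.5, 1.0, 1.5])  # candidate safety distances [m]
for eps in [0.0, 0.1, 0.3]:
    worst_case = [q_lower_bound(o, eps) for o in offsets]
    chosen = offsets[int(np.argmax(worst_case))]
    print(f"epsilon={eps:.1f} -> chosen safety distance {chosen:.1f} m")
```

Under this toy model, the selected safety distance grows with ε (0.2 m, then 0.5 m, then 1.0 m), mirroring the qualitative increase in conservatism shown left to right in the figure.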

Source publication
Preprint
Full-text available
Deep Neural Network-based systems are now the state of the art in many robotics tasks, but their application in safety-critical domains remains dangerous without formal guarantees on network robustness. Small perturbations to sensor inputs (from noise or adversarial examples) are often enough to change network-based decisions, which was already sho...
