Jonas Rauber
Publications
2.1 Decision-Based Adversarial Attacks
Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX
EagerPy: Writing code that works natively with PyTorch, TensorFlow, JAX, and NumPy
Adversarial vision challenge
Inducing a human-like shape bias leads to emergent human-level distortion robustness in CNNs
On evaluating adversarial robustness
Accurate, reliable and fast robustness evaluation
Towards the first adversarially robust neural network model on MNIST
Decision-based adversarial attacks: Reliable attacks against black-box machine learning models
Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models