2.1 Decision-Based Adversarial Attacks

Abstract

Results

We found that our decision-based adversarial attack is competitive with common gradient-based adversarial attacks in terms of perturbation size in both untargeted and targeted scenarios. We tested our attack on standard computer vision models for MNIST, CIFAR-10, and ImageNet and compared the median minimal L2 adversarial perturbation size with established gradient-based attacks such as DeepFool (Moosavi-Dezfooli et al. 2016) and the Carlini-Wagner attack (Carlini and Wagner 2017b). Our median perturbation size is always within a factor of two of the best attack and often smaller than that of at least one of DeepFool or Carlini-Wagner. Running our attack against a model trained with defensive distillation (Papernot, McDaniel, Wu, et al. 2016), a defense known to introduce gradient masking rather than to truly increase robustness (Carlini and Wagner 2016), confirmed our hypothesis that …
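The core idea of a decision-based attack is that it queries only the model's final label decision, never its gradients or confidence scores. The sketch below illustrates this with a simple accept/reject random walk that shrinks an adversarial example toward the original while staying misclassified; the function names, step sizes, and interface are hypothetical illustrations, not the paper's exact algorithm.

```python
import numpy as np

def decision_based_attack(predict, x_orig, x_adv, steps=200, step_size=0.1, seed=0):
    """Shrink an adversarial example toward the original image using only
    the model's top-label decisions (no gradients, no scores).

    `predict` maps an input to a class label; `x_adv` must already be
    misclassified. This is a hedged sketch of the decision-based idea.
    """
    rng = np.random.default_rng(seed)
    orig_label = predict(x_orig)
    best = x_adv.copy()
    for _ in range(steps):
        # Propose a random perturbation scaled to the current distance,
        # then contract the proposal slightly toward the original input.
        noise = rng.normal(size=x_orig.shape)
        noise *= step_size * np.linalg.norm(best - x_orig) / (np.linalg.norm(noise) + 1e-12)
        candidate = best + noise
        candidate += 0.05 * (x_orig - candidate)
        # Accept only if the candidate is still adversarial and strictly closer.
        if (predict(candidate) != orig_label
                and np.linalg.norm(candidate - x_orig) < np.linalg.norm(best - x_orig)):
            best = candidate
    return best

# Toy usage: a linear "model" whose label is the sign of the input sum.
predict = lambda x: int(x.sum() > 0)
x_orig = np.ones(5)          # classified as 1
x_adv = -np.ones(5)          # classified as 0, i.e. adversarial
result = decision_based_attack(predict, x_orig, x_adv)
```

Because proposals are accepted only when they remain adversarial and reduce the L2 distance, the distance to the original decreases monotonically, which is how the minimal-perturbation comparison in the results above is measured.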

Wieland Brendel
Principal Investigator (PI)

Wieland Brendel received his Diploma in physics from the University of Regensburg (2010) and his Ph.D. in computational neuroscience from the École normale supérieure in Paris (2014). He joined the University of Tübingen as a postdoctoral researcher in the group of Matthias Bethge, became a Principal Investigator and Team Lead in the Tübingen AI Center (2018) and an Emmy Noether Group Leader for Robust Machine Learning (2020). In May 2022, Wieland joined the Max Planck Institute for Intelligent Systems as an independent Group Leader and has been a Hector-endowed Fellow at the ELLIS Institute Tübingen since September 2023. He received the 2023 German Pattern Recognition Award for his substantial contributions to robust, generalisable and interpretable machine vision. Aside from his research, Wieland co-founded a nationwide school competition (bw-ki.de) and a machine learning startup focused on visual quality control.