Translational symmetry in convolutions with localized kernels causes an implicit bias toward high frequency adversarial examples

Abstract

Adversarial attacks remain a significant challenge for neural networks. Recent efforts have shown that adversarial perturbations typically contain high-frequency features, but the root cause of this phenomenon remains unknown. Inspired by theoretical work on linear convolutional models, we hypothesize that the translational symmetry of convolutional operations, together with localized kernels, implicitly biases learning toward high-frequency features, and that this is one of the main causes of high-frequency adversarial examples. To test this hypothesis, we analyzed how different choices of linear and non-linear architectures affect the implicit bias of the learned features and of the adversarial perturbations, in both the spatial and frequency domains. We find that, independently of the training dataset, convolutional operations produce higher-frequency adversarial attacks than other architectural parameterizations, and that this phenomenon is exacerbated by stronger locality of the kernel (smaller kernel size) and by greater model depth. The kernel-size dependence is explained by the Fourier uncertainty principle: a spatially limited filter (a local kernel in the spatial domain) cannot also be frequency-limited (local in the frequency domain). Using larger convolution kernels or avoiding convolutions altogether (e.g., with Vision Transformers or MLP-style architectures) significantly reduces this high-frequency bias. Looking forward, our work strongly suggests that understanding and controlling the implicit bias of architectures will be essential for achieving adversarial robustness.
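The uncertainty-principle argument can be illustrated numerically. The following minimal sketch (not code from the paper; the `spectral_spread` helper is a hypothetical measure chosen for illustration) zero-pads two box kernels of different spatial extents and compares the effective bandwidth of their power spectra: the narrower kernel spreads its energy over a wider range of frequencies.

```python
import numpy as np

def spectral_spread(kernel, n=256):
    """Effective bandwidth: std. of frequency weighted by the
    kernel's normalized power spectrum (zero-padded to length n)."""
    padded = np.zeros(n)
    padded[:len(kernel)] = kernel
    power = np.abs(np.fft.fft(padded)) ** 2
    power /= power.sum()
    freqs = np.fft.fftfreq(n)  # cycles per sample, in [-0.5, 0.5)
    return np.sqrt(np.sum(power * freqs ** 2))

# A strongly localized box kernel vs. a wide one, both unit-sum.
narrow = np.ones(3) / 3
wide = np.ones(31) / 31

# The spatially narrower kernel has the broader frequency support,
# consistent with the Fourier uncertainty principle.
print(spectral_spread(narrow) > spectral_spread(wide))
```

This is a 1D analogue for clarity; the same trade-off holds for 2D convolution kernels, which is why small kernel sizes cannot restrict a filter's response to low frequencies.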

Wieland Brendel
Principal Investigator (PI)

Wieland Brendel received his Diploma in physics from the University of Regensburg (2010) and his Ph.D. in computational neuroscience from the École normale supérieure in Paris (2014). He joined the University of Tübingen as a postdoctoral researcher in the group of Matthias Bethge, became a Principal Investigator and Team Lead in the Tübingen AI Center (2018) and an Emmy Noether Group Leader for Robust Machine Learning (2020). In May 2022, Wieland joined the Max Planck Institute for Intelligent Systems as an independent Group Leader and has been a Hector-endowed Fellow at the ELLIS Institute Tübingen since September 2023. He received the 2023 German Pattern Recognition Award for his substantial contributions to robust, generalisable and interpretable machine vision. Aside from his research, Wieland co-founded a nationwide school competition (bw-ki.de) and a machine learning startup focused on visual quality control.