Abstract
Artificial Neural Networks (ANNs) have been proposed as computational models of the primate ventral stream because their performance on tasks such as image classification rivals or exceeds human baselines. Useful models, however, should not only predict data well but also offer insight into the systems they represent, which remains a challenge for ANNs. Here we investigate a specific method that has been proposed to shed light on the representations learned by ANNs: Feature Visualizations (FVs), that is, synthetic images specifically designed to excite individual units ("neurons") of the target network. In theory, these images visualize the features a unit is sensitive to, much like receptive fields in neurophysiology. We conduct a psychophysical experiment to establish an upper bound on the interpretability afforded by FVs, in which participants need to match five sets of exemplars (natural images that …
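The core idea behind FVs is activation maximization: starting from a random input, follow the gradient of a unit's activation to synthesize a stimulus that excites it maximally. A minimal sketch of this principle, assuming a toy linear "neuron" (a weight vector) rather than a deep network; for a linear unit, the optimal stimulus converges to the weight direction itself, mirroring how a linear receptive field equals the preferred stimulus:

```python
import numpy as np

# Toy activation maximization: synthesize an input that maximally
# excites a single (hypothetical) linear unit with weights w.
rng = np.random.default_rng(0)
w = rng.normal(size=64)        # the unit's weight vector (assumed)
x = rng.normal(size=64)        # random starting "image"

lr = 0.1
for _ in range(200):
    # d(activation)/dx for activation = w @ x is simply w,
    # so gradient ascent pushes x toward the weight direction.
    x = x + lr * w
    x = x / np.linalg.norm(x)  # constrain the input to the unit sphere

# Cosine similarity between the synthesized input and the weights;
# it approaches 1 as x aligns with the unit's preferred direction.
cosine = x @ w / np.linalg.norm(w)
print(round(float(cosine), 3))
```

Real FV pipelines differ in that the activation sits deep inside a nonlinear network and the optimization adds regularizers (e.g. transformations and frequency penalties) to keep the image natural-looking, but the gradient-ascent core is the same.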

PhD candidate
I work on human-machine comparisons and interpretable vision models.

Principal Investigator (PI)
Wieland Brendel received his Diploma in physics from the University of Regensburg (2010) and his Ph.D. in computational neuroscience from the École normale supérieure in Paris (2014). He joined the University of Tübingen as a postdoctoral researcher in the group of Matthias Bethge, became a Principal Investigator and Team Lead in the Tübingen AI Center (2018) and an Emmy Noether Group Leader for Robust Machine Learning (2020). In May 2022, Wieland joined the Max Planck Institute for Intelligent Systems as an independent Group Leader and is now a Hector-endowed Fellow at the ELLIS Institute Tübingen (since September 2023). He received the 2023 German Pattern Recognition Award for his substantial contributions to robust, generalisable and interpretable machine vision. Aside from his research, Wieland co-founded a nationwide school competition (bw-ki.de) and a machine learning startup focused on visual quality control.