Matthias Bethge
Latest
LLMs on the Line: Data Determines Loss-to-Loss Scaling Laws
Pretraining Frequency Predicts Compositional Generalization of CLIP on Real-World Tasks
In search of forgotten domain generalization
Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?
Provable Compositional Generalization for Object-Centric Learning
Compositional Generalization from First Principles
Robust deep learning object recognition models rely on low frequency information in natural images
AI sciencepreneurship and startups
Human vs Machine Cognition
Multi-Modal Alignment and Reasoning
Foundations of Generalization
Jacobian-based Causal Discovery with Nonlinear ICA
Learning From Brains How to Regularize Machines (Supplementary Material)
The bittersweet lesson: data-rich models narrow the behavioural gap to human vision
If your data distribution shifts, use self-learning
ImageNet-D: A new challenging robustness dataset inspired by domain adaptation
Decision-Based Adversarial Attacks
Visual representation learning does not generalize strongly within the same domain
How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
Partial success in closing the gap between human and machine vision
Adapting ImageNet-scale models to complex distribution shifts with self-learning
Five points to check when comparing visual perception in humans and machines
Contrastive Learning Inverts the Data Generating Process
Exemplary Natural Images Explain CNN Activations Better than Feature Visualizations
Unintended cue learning: Lessons for deep learning from experimental psychology
On the surprising similarities between supervised and self-supervised models
Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX
EagerPy: Writing code that works natively with PyTorch, TensorFlow, JAX, and NumPy
Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding
Improving robustness against common corruptions by covariate shift adaptation
Unmasking the inductive biases of unsupervised object representations for video sequences
Shortcut Learning in Deep Neural Networks
Increasing the robustness of DNNs against image corruptions by playing the Game of Noise
A simple way to make neural networks robust against diverse image corruptions
Adversarial vision challenge
Generalized Invariant Risk Minimization: relating adaptation and invariant representation learning
Inducing a human-like shape bias leads to emergent human-level distortion robustness in CNNs
Benchmarking robustness in object detection: Autonomous driving when winter is coming
Reproducing Decision-Making With Constrained Networks to Understand Deep Neural Networks
Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet
Accurate, reliable and fast robustness evaluation
Learning from brains how to regularize machines
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
Comparing the ability of humans and DNNs to recognise closed contours in cluttered images
One-shot texture segmentation
Towards the first adversarially robust neural network model on MNIST
Trace your sources in large-scale data: one ring to find them all
Decision-based adversarial attacks: Reliable attacks against black-box machine learning models
Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models
Comment on "Biologically inspired protection of deep networks from adversarial attacks"
What does it take to generate natural textures?
Texture synthesis using shallow convolutional networks with random filters