Are Vision Transformers Robust to Spurious Correlations?

TMLR Paper 464 Authors

27 Sept 2022 (modified: 28 Feb 2023) · Rejected by TMLR
Abstract: Deep neural networks may be susceptible to learning spurious correlations that hold on average but not in atypical test samples. With the recent emergence of vision transformer (ViT) models, it remains unexplored how spurious correlations manifest in such architectures. In this paper, we systematically investigate the robustness of different transformer architectures to spurious correlations on three challenging benchmark datasets and compare their performance with popular CNNs. Our study reveals that, for transformers, larger models and more pre-training data significantly improve robustness to spurious correlations. Key to their success is the ability to generalize better from the examples where spurious correlations do not hold. Further, we perform extensive ablations and experiments to understand the role of the self-attention mechanism in providing robustness in spuriously correlated environments. We hope that our work will inspire future research on further understanding the robustness of ViT models to spurious correlations.
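For context, robustness to spurious correlations of the kind the abstract describes is typically measured by worst-group accuracy: test examples are partitioned into groups by (class label, spurious attribute), and a model is scored on its weakest group, which captures "the examples where spurious correlations do not hold." The sketch below illustrates that metric; it is not the paper's code, and the function name, toy data, and attribute encoding are assumptions made for illustration.

```python
# Minimal sketch (not the paper's code): worst-group accuracy, the standard
# metric for robustness to spurious correlations. Each test example has a
# class label y and a spurious attribute a (e.g., image background); groups
# are (y, a) pairs, and the atypical groups are those where the usual
# correlation between y and a is broken.
import numpy as np

def worst_group_accuracy(preds, labels, attrs):
    """Return (worst-group accuracy, per-group accuracies)."""
    groups = {}
    for p, y, a in zip(preds, labels, attrs):
        # Collect correctness indicators per (label, attribute) group.
        groups.setdefault((int(y), int(a)), []).append(p == y)
    per_group = {g: float(np.mean(hits)) for g, hits in groups.items()}
    return min(per_group.values()), per_group

# Toy usage with made-up predictions: the group (y=1, a=0) breaks the
# y == a correlation, so it is where spuriously-reliant models fail.
preds  = np.array([0, 0, 1, 1, 1, 0])
labels = np.array([0, 0, 1, 1, 1, 1])
attrs  = np.array([0, 0, 1, 1, 0, 0])
wga, per_group = worst_group_accuracy(preds, labels, attrs)
print(f"worst-group acc = {wga:.2f}, per group = {per_group}")
```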
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Ekin_Dogus_Cubuk1
Submission Number: 464