Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice

29 Sept 2021, 00:31 (edited 09 Mar 2022) · ICLR 2022 Poster
  • Keywords: Deep ViT, Spectral Analysis, Attention Collapse, Patch Diversity
  • Abstract: Vision Transformer (ViT) has recently demonstrated promise in computer vision problems. However, unlike Convolutional Neural Networks (CNN), the performance of ViT is known to saturate quickly as depth increases, due to the observed attention collapse or patch uniformity. Despite a couple of empirical solutions, a rigorous framework for studying this scalability issue remains elusive. In this paper, we first establish a rigorous theoretical framework to analyze ViT features in the Fourier spectrum domain. We show that the self-attention mechanism inherently amounts to a low-pass filter, which indicates that as ViT scales up its depth, excessive low-pass filtering will cause feature maps to preserve only their Direct-Current (DC) component. We then propose two straightforward yet effective techniques to mitigate this undesirable low-pass limitation. The first technique, termed AttnScale, decomposes a self-attention block into low-pass and high-pass components, then rescales and combines these two filters to produce an all-pass self-attention matrix. The second technique, termed FeatScale, re-weights feature maps on separate frequency bands to amplify the high-frequency signals. Both techniques are efficient and hyperparameter-free, while effectively overcoming relevant ViT training artifacts such as attention collapse and patch uniformity. By seamlessly plugging our techniques into multiple ViT variants, we demonstrate that they consistently help ViTs benefit from deeper architectures, bringing up to 1.1% performance gains "for free" (i.e., with little parameter overhead). We publicly release our code and pre-trained models at
  • One-sentence Summary: In this paper, we investigate the scalability issue with ViT via Fourier domain analysis and propose two practical solutions by scaling different frequency components of attention and feature maps.
  • Supplementary Material: zip
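The two techniques outlined in the abstract can be sketched as follows. This is an illustrative reading only, not the paper's exact formulation: the choice of a uniform 1/n averaging matrix as the low-pass (DC) component, the residual as the high-pass component, and the scalar weights `omega`, `lam_dc`, and `lam_hc` are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4))
# Row-wise softmax produces a row-stochastic attention matrix A.
A = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

def attn_scale(A, omega=2.0):
    """AttnScale sketch (assumed form): split the attention matrix into a
    DC (low-pass) averaging part and a residual (high-pass) part, then
    amplify the high-pass part to move toward an all-pass filter."""
    n = A.shape[-1]
    low_pass = np.full_like(A, 1.0 / n)          # uniform averaging = DC filter
    high_pass = A - low_pass                     # remaining high-frequency part
    return low_pass + (1.0 + omega) * high_pass  # equals A + omega * high_pass

def feat_scale(X, lam_dc=1.0, lam_hc=2.0):
    """FeatScale sketch (assumed form): re-weight the token-mean (DC) band
    and the residual (high-frequency) band of a feature map separately."""
    dc = X.mean(axis=0, keepdims=True)  # DC band: mean over tokens
    hc = X - dc                         # high-frequency band: per-token residual
    return lam_dc * dc + lam_hc * hc
```

Note that `attn_scale` preserves row-stochasticity: each row of the rescaled matrix still sums to 1, since the high-pass residual has zero row sums.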