Scratching Visual Transformer's Back with Uniform Attention

22 Sept 2022 (modified: 12 Mar 2024) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Vision Transformer, Self-attention, Attention, Dense Interactions, Image Classification
TL;DR: Vision Transformers may need even denser interactions. We supply them with a simple trick and observe improvements.
Abstract: The favorable performance of Vision Transformers (ViTs) is often attributed to multi-head self-attention ($\mathtt{MSA}$). $\mathtt{MSA}$ enables global interactions at each layer of a ViT model, in contrast to Convolutional Neural Networks (CNNs), which gradually increase the range of interaction across multiple layers. We study the role of attention density. Our preliminary analyses suggest that the spatial interactions in learned attention maps are closer to dense interactions than to sparse ones. This is a curious phenomenon, because dense attention maps are harder for the model to learn due to steeper softmax gradients around them. We interpret this as a strong preference of ViT models for dense interactions. We therefore manually insert uniform attention into each layer of ViT models to supply the much-needed dense interactions. We call this method Context Broadcasting, $\mathtt{CB}$. We observe that including $\mathtt{CB}$ reduces the degree of density in the original attention maps and increases both the capacity and generalizability of ViT models. $\mathtt{CB}$ incurs negligible costs: one line in your model code, no additional parameters, and minimal extra operations.
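
Based only on the abstract's description, a minimal PyTorch-style sketch of what such a uniform-attention ($\mathtt{CB}$) module might look like is given below. The module name `ContextBroadcasting`, the residual form, and the insertion point are assumptions for illustration; the abstract states only that uniform attention is added to each layer with no extra parameters.

```python
import torch
import torch.nn as nn


class ContextBroadcasting(nn.Module):
    """Sketch of uniform attention: add the average of all tokens to every token.

    Assumption-based reading of the abstract: uniform attention over N tokens
    assigns weight 1/N to each token, which is just the mean token, broadcast
    back to all positions as dense context. No learnable parameters are added.
    """

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim)
        # Mean over the token dimension, broadcast-added to every token.
        return x + x.mean(dim=1, keepdim=True)


# Hypothetical one-line use inside a ViT block's forward pass (the exact
# insertion point is not specified in this abstract):
#   x = x + x.mean(dim=1, keepdim=True)
```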
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2210.08457/code)