Local Attention Layers for Vision Transformers

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: Attention layers in transformer networks have contributed to state-of-the-art results on many vision tasks. Still, they leave room for improvement: relative position information is not learned, and locality constraints are typically not enforced. To mitigate both issues, we propose a convolution-style attention layer, the LA-layer, as a replacement for traditional attention layers. LA-layers implicitly learn position information in a convolutional manner. Given an input feature map, the keys in each kernel region deform within a designated constrained region, yielding a larger receptive field while preserving locality constraints. The query and keys are processed by a novel aggregation function that outputs attention weights for the values, and the final output combines these attention weights with the values. In our experiments, we replace ResNet's convolutional layers with LA-layers and address image recognition, object detection, and instance segmentation tasks. LA-layers consistently outperform the state of the art despite requiring fewer floating-point operations and trainable parameters. These results suggest that LA-layers extract features more effectively and efficiently, and that they can replace convolutional and attention layers across a range of networks.
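The abstract does not specify the deformation mechanism or the aggregation function, so the following is only a minimal PyTorch sketch of the general idea it describes: each query attends to the keys in a k×k window around its pixel, with a learned per-offset relative-position bias standing in for the implicitly learned position information. The module name `LocalAttention2d` and its hyperparameters are hypothetical, key deformation is omitted, and plain softmax attention is used in place of the paper's (unspecified) aggregation function.

```python
# Minimal sketch of a convolution-style local attention layer. This is an
# illustration of the general technique, NOT the paper's LA-layer: the
# deformable keys and the novel aggregation function are not reproduced.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalAttention2d(nn.Module):
    def __init__(self, dim, kernel_size=7):
        super().__init__()
        self.k = kernel_size
        self.scale = dim ** -0.5
        # 1x1 convolutions produce per-pixel queries, keys, and values.
        self.to_q = nn.Conv2d(dim, dim, 1)
        self.to_k = nn.Conv2d(dim, dim, 1)
        self.to_v = nn.Conv2d(dim, dim, 1)
        # Learned relative-position bias: one scalar per offset in the window.
        self.rel_bias = nn.Parameter(torch.zeros(kernel_size * kernel_size))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.to_q(x)
        # unfold gathers the k*k local window of keys/values for every pixel.
        pad = self.k // 2
        k = F.unfold(self.to_k(x), self.k, padding=pad)  # (b, c*k*k, h*w)
        v = F.unfold(self.to_v(x), self.k, padding=pad)
        k = k.view(b, c, self.k * self.k, h * w)
        v = v.view(b, c, self.k * self.k, h * w)
        q = q.view(b, c, 1, h * w)
        # Dot-product attention over the window, plus the position bias.
        attn = (q * k).sum(dim=1) * self.scale           # (b, k*k, h*w)
        attn = attn + self.rel_bias[None, :, None]
        attn = attn.softmax(dim=1)
        out = (attn.unsqueeze(1) * v).sum(dim=2)         # (b, c, h*w)
        return out.view(b, c, h, w)


# Usage: the layer is shape-preserving, so it can drop in for a 3x3 conv.
layer = LocalAttention2d(dim=64, kernel_size=7)
y = layer(torch.randn(2, 64, 32, 32))   # y.shape == (2, 64, 32, 32)
```

Because the output shape matches the input, a module like this can replace ResNet's spatial convolutions one-for-one, which is the substitution the experiments describe.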
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (e.g., speech processing, computer vision, NLP)