Separable Self-attention for Mobile Vision Transformers

Published: 24 Feb 2023, Last Modified: 17 Sept 2024. Accepted by TMLR.
Abstract: Mobile vision transformers (MobileViT) can achieve state-of-the-art performance across several mobile vision tasks, including classification and detection. Though these models have fewer parameters, they have high latency compared to convolutional neural network-based models. The main efficiency bottleneck in MobileViT is the multi-headed self-attention (MHA) in transformers, which requires $O(k^2)$ time complexity with respect to the number of tokens (or patches) $k$. Moreover, MHA requires costly operations (e.g., batch-wise matrix multiplication) for computing self-attention, impacting latency on resource-constrained devices. This paper introduces a separable self-attention method with linear complexity, i.e., $O(k)$. A simple yet effective characteristic of the proposed method is that it uses element-wise operations for computing self-attention, making it a good choice for resource-constrained devices. The improved model, MobileViTv2, is state-of-the-art on several mobile vision tasks, including ImageNet object classification and MS-COCO object detection. With about three million parameters, MobileViTv2 achieves a top-1 accuracy of 75.6% on the ImageNet dataset, outperforming MobileViT by about 1% while running $3.2\times$ faster on a mobile device. Our source code is available at: https://github.com/apple/ml-cvnets
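The abstract only outlines the idea, so below is a minimal PyTorch sketch of a separable self-attention block with $O(k)$ complexity: each token is projected to a scalar context score, a global context vector is formed as a score-weighted sum of keys, and that context is broadcast back to every token with element-wise multiplication. The module name `SeparableSelfAttention`, the fused `qkv_proj` layer, and the dimensions are illustrative assumptions based on the abstract's description, not the exact implementation in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparableSelfAttention(nn.Module):
    """Sketch of separable self-attention with linear complexity in the
    number of tokens k. Instead of the k x k attention matrix used by MHA,
    each token is scored against a single latent token, and the resulting
    context vector is broadcast back via element-wise multiplication."""

    def __init__(self, d_model: int):
        super().__init__()
        # Project each token to: 1 context score, d_model key, d_model value.
        self.qkv_proj = nn.Linear(d_model, 1 + 2 * d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        self.d_model = d_model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, k, d_model), where k is the number of tokens/patches.
        scores, key, value = torch.split(
            self.qkv_proj(x), [1, self.d_model, self.d_model], dim=-1
        )
        # Context scores: softmax over the k tokens (no k x k matrix).
        context_scores = F.softmax(scores, dim=1)                    # (batch, k, 1)
        # Global context vector: score-weighted sum of keys, element-wise ops only.
        context = (key * context_scores).sum(dim=1, keepdim=True)    # (batch, 1, d_model)
        # Broadcast the context to every token via element-wise multiplication.
        out = F.relu(value) * context                                # (batch, k, d_model)
        return self.out_proj(out)

# Example: a 3x3 patch grid treated as 9 tokens with 64 channels.
x = torch.randn(2, 9, 64)
attn = SeparableSelfAttention(d_model=64)
print(attn(x).shape)  # torch.Size([2, 9, 64])
```

Because the only interaction across tokens is the softmax over the $k$ context scores and a broadcast multiplication, compute and memory grow linearly in $k$, matching the complexity claim in the abstract.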
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: i) Added a link to the source code; ii) added references suggested by the reviewers; iii) updated Table 1 to add a latency comparison between MobileViT and MobileViTv2 on the CPU; iv) added Footnote 1 on page 6, which compares with locality- and grouping-based methods and links to Appendix E, where these approaches are discussed in detail; v) added Section 6, comparing the robustness properties of separable self-attention with transformer-based models; vi) added a brief discussion of how methods in EfficientFormer can be used to further improve the efficiency and accuracy of MobileViTv2; vii) added acknowledgements; viii) updated the Figure 1 caption to highlight the main efficiency bottleneck of self-attention in transformers.
Code: https://github.com/apple/ml-cvnets
Assigned Action Editor: ~Dumitru_Erhan1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 441