Keywords: Efficient inference, Edge computing, Vision Transformers, Foundation Models
TL;DR: We speed up fine-tuned vision transformers by replacing attention heads that exhibit convolution-like behavior with lightweight depthwise convolutions, preserving the benefits of large-scale pretrained weights.
Abstract: Pretrained vision foundation models deliver strong performance across tasks with limited fine-tuning. However, their Vision Transformer (ViT) backbones impose high inference costs, limiting deployment on resource-constrained devices. In this work, we accelerate large-scale pretrained ViTs while preserving their feature extraction capabilities by exploiting the intrinsic convolution-like behavior of some attention heads. Specifically, we introduce an efficient depthwise convolution-based layer that serves as a drop-in replacement for these heads. Additionally, we propose simple strategies to identify which heads can be replaced and introduce a fine-tuning procedure that recovers downstream task performance. Across both image classification and segmentation tasks, our method achieves a 17–20% inference speedup with minimal performance degradation. We validate the approach through detailed derivations, extensive experiments, and efficiency benchmarks on multiple low-power platforms. Our implementation will be released publicly.
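To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of a depthwise-convolution layer standing in for a single attention head that operates on a ViT patch grid; the module and argument names (`ConvHead`, `grid_size`, `head_dim`) are illustrative assumptions, and the actual replacement layer and head-selection strategy are described in the paper.

```python
# Minimal sketch, assuming patch tokens laid out on a known grid and no CLS token.
import torch
import torch.nn as nn


class ConvHead(nn.Module):
    """Depthwise-conv stand-in for one attention head on a ViT patch grid (illustrative)."""

    def __init__(self, head_dim: int, grid_size: tuple, kernel_size: int = 3):
        super().__init__()
        self.grid_size = grid_size
        # Depthwise convolution: one small 2D filter per channel of the head.
        self.dwconv = nn.Conv2d(
            head_dim, head_dim, kernel_size,
            padding=kernel_size // 2, groups=head_dim, bias=False,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, head_dim) -- patch tokens only.
        b, n, c = x.shape
        h, w = self.grid_size
        assert n == h * w, "token count must match the patch grid"
        x = x.transpose(1, 2).reshape(b, c, h, w)    # tokens -> spatial layout
        x = self.dwconv(x)                           # local mixing, linear in token count
        return x.reshape(b, c, n).transpose(1, 2)    # spatial -> token layout


# Usage example: a 14x14 patch grid (e.g. ViT-B/16 at 224 px) with 64-dim heads.
if __name__ == "__main__":
    tokens = torch.randn(2, 196, 64)
    head = ConvHead(head_dim=64, grid_size=(14, 14))
    print(head(tokens).shape)  # torch.Size([2, 196, 64])
```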
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 9611