Abstract: Vision Transformers (ViTs) have recently demonstrated significant potential in computer vision, but their high computational costs remain a challenge. To address this limitation, various methods have been proposed to compress ViTs. Most approaches rely on spatial-domain information and adapt pruning techniques from convolutional neural networks (CNNs) to reduce channels or tokens. However, differences between ViTs and CNNs in the frequency domain make these methods vulnerable to noise in the spatial domain, potentially resulting in erroneous channel or token removal and substantial performance drops. Recent studies suggest that high-frequency signals carry limited information for ViTs, and that the self-attention mechanism functions similarly to a low-pass filter. Inspired by these insights, this paper proposes a joint compression method that leverages the frequency-domain properties of ViTs. Specifically, a metric called Low-Frequency Sensitivity (LFS) is used to accurately identify and compress redundant channels, while a token-merging approach, assisted by Low-Frequency Energy (LFE), is introduced to reduce tokens. Through joint channel and token compression, the proposed method reduces the FLOPs of ViTs by over 50% with less than a 1% performance drop on ImageNet-1K and achieves approximately a 40% reduction in FLOPs for dense prediction tasks, including object detection and semantic segmentation.
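The abstract does not spell out how Low-Frequency Energy is computed, so the following is only a minimal NumPy sketch of one plausible reading: score each patch token by the fraction of its energy that survives a spatial low-pass filter applied over the token grid. The function name, the circular cutoff, and the cutoff_ratio value are illustrative assumptions, not the paper's definitions.

import numpy as np

def token_low_frequency_energy(tokens, grid_hw, cutoff_ratio=0.25):
    """Per-token fraction of energy retained after a spatial low-pass filter.

    tokens: (N, C) patch tokens (class token excluded), with N = H * W.
    grid_hw: (H, W) spatial layout of the patch tokens.
    cutoff_ratio: size of the retained low-frequency band relative to the
                  Nyquist frequency (an assumed, illustrative choice).
    """
    H, W = grid_hw
    x = tokens.reshape(H, W, -1)                          # (H, W, C) token grid

    # 2-D FFT over the token grid, one spectrum per channel.
    spec = np.fft.fft2(x, axes=(0, 1))

    # Keep only frequencies inside a circular low-frequency region.
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    low_mask = (np.sqrt(fy ** 2 + fx ** 2) <= cutoff_ratio * 0.5)[..., None]
    x_low = np.fft.ifft2(spec * low_mask, axes=(0, 1)).real

    # LFE score: energy of the low-passed token over its total energy.
    num = (x_low ** 2).sum(axis=-1).reshape(-1)
    den = (x ** 2).sum(axis=-1).reshape(-1) + 1e-12
    return num / den                                      # (N,) score per token

Under the abstract's premise that high-frequency signals carry little information for ViTs, tokens with scores close to 1 are dominated by low-frequency content and would be natural candidates for merging in an LFE-assisted token-reduction scheme.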