A unified pruning framework for vision transformers

Published: 01 Jan 2023, Last Modified: 02 Jun 2023. Sci. China Inf. Sci. 2023.
Abstract: In this study, we propose UP-ViTs, a novel method for pruning ViTs in a unified manner. Our framework can prune all components of a ViT and its variants, maintains the model's structure, and generalizes well to downstream tasks. UP-ViTs achieve state-of-the-art results when pruning various ViT backbones. Moreover, we study the transfer ability of the compressed models and find that our UP-ViTs also outperform the original ViTs. We further extend our method to NLP tasks and obtain more efficient transformer models. Please refer to the appendix for more details.
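The structured-pruning idea behind the abstract can be illustrated with a minimal, self-contained sketch. This is not the paper's UP-ViT algorithm; it is a generic example of channel-level pruning on a transformer-style FFN block, where whole hidden channels are removed by an assumed importance score (joint L2 norm of incoming and outgoing weights) so that the block's input/output dimension is preserved and the surrounding model structure stays intact:

```python
import numpy as np

def prune_ffn(w1, b1, w2, keep_ratio=0.5):
    """Structured pruning of a transformer-style FFN block.

    w1: (hidden, d_model) weights of the expansion layer
    b1: (hidden,) bias of the expansion layer
    w2: (d_model, hidden) weights of the projection layer
    Removes entire hidden channels, so the block's external
    interface (d_model in, d_model out) is unchanged.
    """
    hidden = w1.shape[0]
    k = max(1, int(hidden * keep_ratio))
    # Hypothetical importance score: product of the norms of each
    # channel's incoming and outgoing weight vectors.
    scores = np.linalg.norm(w1, axis=1) * np.linalg.norm(w2, axis=0)
    keep = np.sort(np.argsort(scores)[-k:])  # indices of channels to retain
    return w1[keep], b1[keep], w2[:, keep]

# Example: halve the width of a 4x-expansion FFN (d_model=8, hidden=32).
rng = np.random.default_rng(0)
w1, b1, w2 = rng.normal(size=(32, 8)), rng.normal(size=32), rng.normal(size=(8, 32))
pw1, pb1, pw2 = prune_ffn(w1, b1, w2, keep_ratio=0.5)
x = rng.normal(size=8)
y = pw2 @ np.maximum(pw1 @ x + pb1, 0)  # ReLU FFN; output is still d_model-sized
print(pw1.shape, pw2.shape, y.shape)  # (16, 8) (8, 16) (8,)
```

Because only internal channels are dropped, the pruned block is a drop-in replacement for the original, which is what allows a structure-preserving pruner to compress every block of a backbone without changing the architecture seen by downstream tasks.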