Abstract: The cost of deploying vision transformers increasingly represents a barrier to wider industrial adoption. Existing compression techniques either require additional end-to-end fine-tuning or incur a significant penalty in energy efficiency, making them ill-suited for online (real-time) inference, where a prediction is made on each new input as it arrives. We introduce the Visual-Word Tokenizer (VWT), a training-free method for reducing energy costs while retaining performance. The VWT groups frequently used visual subwords (image patches) into visual words, while infrequent ones remain intact. To do so, it leverages intra-image or inter-image statistics to identify similar visual concepts for sequence compression. Experimentally, we demonstrate a reduction in energy consumption of up to 47%. Comparable approaches such as 8-bit quantization and token merging can increase energy costs substantially, in some cases by 500% or more. Our results indicate that VWTs are well-suited for efficient online inference with only a marginal compromise on performance. The code for our experiments is publicly available.
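To make the grouping step concrete, below is a minimal sketch of inter-image sequence compression as the abstract describes it: patch tokens whose embeddings lie close to a precomputed codebook of frequent visual concepts are pooled into one visual-word token each, while infrequent patches pass through intact. The `compress_tokens` function, the codebook, and the threshold `tau` are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def compress_tokens(patches: torch.Tensor, codebook: torch.Tensor, tau: float = 0.9) -> torch.Tensor:
    """Hypothetical sketch of visual-word grouping.

    patches:  (N, D) patch embeddings for one image
    codebook: (K, D) centroids of frequent visual concepts
              (e.g., fit offline with k-means over many images; an assumption here)
    tau:      cosine-similarity threshold for assigning a patch to a visual word
    """
    p = F.normalize(patches, dim=-1)
    c = F.normalize(codebook, dim=-1)
    sim, idx = (p @ c.T).max(dim=-1)       # nearest visual word per patch
    grouped = sim >= tau                    # frequently used patches get grouped
    kept = patches[~grouped]                # infrequent patches remain intact

    words = []
    for k in idx[grouped].unique():         # one merged token per visual word
        members = patches[grouped & (idx == k)]
        words.append(members.mean(dim=0))   # pool grouped patches into a single token
    words = (torch.stack(words) if words
             else patches.new_zeros((0, patches.size(1))))

    # The compressed (shorter) sequence fed to the transformer; the real method
    # may additionally track positions, which this sketch omits.
    return torch.cat([kept, words], dim=0)
```

Because the codebook is fixed and no gradients are involved, this kind of grouping is training-free, consistent with the abstract's claim.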
Submission Type: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=TY4qi6dBnA
Changes Since Last Submission: The following revisions have been incorporated for the camera-ready version as requested:
(1) The power and runtime (latency) measurements have been included in Appendix B.1, specifically Table 9 (pages 18 & 20).
(2) The one-time cost of the pre-processing step has been included in Subsection 4.3, at the end of the second paragraph under "VWTs and Inference Efficiency" (pages 9-10).
(3) Further commentary on safeguards has been incorporated into the Conclusion (page 13).
Assigned Action Editor: ~Blake_Aaron_Richards1
Submission Number: 5761