Deep Compression of Pre-trained Transformer Models

Published: 31 Oct 2022, Last Modified: 12 Jan 2023
NeurIPS 2022 Accept
Keywords: Quantization, Sparsity, Pruning, Pre-trained, Transformer, Foundation Model, Inference, NLP, vision, speech, BERT, Wav2vec, ViT
TL;DR: We introduce methods to deeply compress pre-trained transformer models across three major application domains: NLP, speech, and vision.
Abstract: Pre-trained transformer models have achieved remarkable success in natural language processing (NLP) and have recently become competitive alternatives to Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) in vision and speech tasks, respectively. Due to their excellent computational efficiency and scalability, transformer models can be trained on exceedingly large amounts of data, but model sizes can grow tremendously as a result. As high-performance, large-scale, pre-trained transformer models become available for users to download and fine-tune for customized downstream tasks, deploying these models becomes challenging due to the vast number of operations and large memory footprint. To address this challenge, we introduce methods to deeply compress pre-trained transformer models across three major application domains: NLP, speech, and vision. Specifically, we quantize transformer backbones down to 4-bit and further achieve 50% fine-grained structural sparsity on pre-trained BERT, wav2vec 2.0, and Vision Transformer (ViT) models, yielding 16x compression while maintaining model accuracy. This is achieved by identifying the critical initialization for quantization/sparsity-aware fine-tuning, as well as novel techniques including quantizers with a zero-preserving format and scheduled dropout. These hardware-friendly techniques need only be applied in the fine-tuning phase for downstream tasks; hence, they are especially suitable for the acceleration and deployment of pre-trained transformer models.
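The abstract's 16x figure follows from combining the two techniques: 4-bit quantization reduces fp32 storage 8x, and 50% fine-grained (2:4) structural sparsity halves the number of stored values. The sketch below illustrates this arithmetic on a random weight matrix; it is a minimal illustration of the general idea, not the paper's implementation, and the function names and the symmetric int4 scheme are assumptions for the example.

```python
import numpy as np

def prune_2_4(w):
    """50% fine-grained structural (2:4) sparsity: in every group of 4
    consecutive weights, zero out the 2 with the smallest magnitude."""
    w = w.copy()
    groups = w.reshape(-1, 4)
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]  # 2 smallest per group
    np.put_along_axis(groups, drop, 0.0, axis=1)
    return groups.reshape(w.shape)

def quantize_4bit(w):
    """Symmetric 4-bit quantization; level 0 maps exactly to 0.0, so pruned
    weights remain zero after quantization (a zero-preserving format)."""
    scale = np.max(np.abs(w)) / 7.0  # symmetric int4 range [-7, 7]
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

w = np.random.randn(8, 8).astype(np.float32)
w_sparse = prune_2_4(w)
q, scale = quantize_4bit(w_sparse)

# exactly half the weights are zero, and quantization preserves those zeros
assert (w_sparse == 0).mean() == 0.5
assert (q[w_sparse == 0] == 0).all()

# storage: 32 bits/weight dense vs. 4 bits * 50% kept = 2 bits/weight
print(32 / (4 * 0.5))  # -> 16.0x compression
```

In practice the zero positions are stored as compact 2:4 metadata (2 bits per group of 4 on sparse tensor hardware), which adds a small overhead to the idealized 16x ratio above.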
Supplementary Material: pdf