Floating-Point Quantized Transformers

Anonymous

23 Jul 2023 · OpenReview Anonymous Preprint Blind Submission
Keywords: Model Compression, Model Quantization
TL;DR: Floating-Point Quantized Transformers
Abstract: Transformer-based models have revolutionized various fields with their remarkable performance, but deploying these models remains challenging due to their high computational and memory costs. Post-training quantization (PTQ), a practical compression technique, has been extensively studied for convolutional architectures, but limited research has focused on PTQ for transformers. Existing PTQ solutions are primarily integer-based and struggle with bit widths below 8 bits. Compared to integer quantization, floating-point (FP) quantization is more flexible, better handles long-tailed or bell-shaped distributions, and has emerged as a default choice on many hardware platforms. One characteristic of FP quantization is that its performance largely depends on the choice of exponent bits and scale parameters. In this regard, we construct a strong FP-PTQ baseline by integrating a Hessian-based loss into the quantization parameter search. Furthermore, we observe a consistent pattern in activation distributions across transformer models: high between-channel variance and low within-channel variance. To address this, we propose per-channel activation quantization and show that the additional scaling factors can be reparameterized as exponent biases of the weights, incurring negligible cost. Our method, for the first time, quantizes both weights and activations of BERT to only 4 bits and achieves an average GLUE score of 80.07, only 3.66 below the full-precision model, significantly outperforming the previous state-of-the-art method, which had a gap of 11.48. The source code will be released upon acceptance.
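The sketch below is a minimal illustration of the two ideas named in the abstract, assuming PyTorch and a simplified low-bit FP format (sign bit, `exp_bits` exponent bits with a fixed bias of 2^(exp_bits-1), `man_bits` mantissa bits, no inf/NaN codes). The function names `fp_quantize` and `fold_activation_scales`, the fixed exponent bias, and the scale heuristic in the usage lines are illustrative assumptions, not the paper's released implementation.

```python
import torch


def fp_quantize(x, exp_bits=2, man_bits=1, scale=1.0):
    """Simulated (fake) floating-point quantization.

    Divides x by `scale`, snaps the result onto a low-bit FP grid with
    `exp_bits` exponent bits and `man_bits` mantissa bits, then rescales.
    The exponent bias is fixed to 2**(exp_bits - 1) for simplicity.
    """
    x = x / scale
    bias = 2 ** (exp_bits - 1)                      # assumed fixed exponent bias
    max_exp = 2 ** exp_bits - 1 - bias
    min_exp = -bias
    max_val = (2 - 2 ** (-man_bits)) * 2 ** max_exp  # largest representable magnitude

    sign = torch.sign(x)
    mag = torch.clamp(x.abs(), max=max_val)

    # The spacing of representable values doubles with each power of two,
    # so round the mantissa within each per-element exponent bin.
    exp = torch.clamp(torch.floor(torch.log2(mag.clamp(min=1e-12))), min=min_exp)
    step = 2.0 ** (exp - man_bits)                  # quantization step inside the bin
    q = torch.round(mag / step) * step

    return sign * q * scale


def fold_activation_scales(weight, act_scales):
    """Fold per-channel activation scales into the weight matrix.

    For y = x @ W with per-input-channel scales s, quantizing x / s can be
    compensated by scaling the matching rows of W by s. If s is restricted
    to powers of two, this only shifts the weights' exponents (their
    exponent bias), which is why the extra per-channel factors are nearly
    free in an FP format.  weight: (in_features, out_features).
    """
    return weight * act_scales.view(-1, 1)


# Usage sketch: channels with very different ranges, quantized per channel.
x = torch.randn(4, 8) * torch.logspace(-2, 1, 8)    # 8 channels spanning several decades
s = x.abs().amax(dim=0) / 3.0                       # per-channel scales (3.0 = max of this E2M1 grid)
xq = fp_quantize(x, exp_bits=2, man_bits=1, scale=s)
```

A shared (per-tensor) scale would be dominated by the widest channel here, which is the high between-channel / low within-channel situation the abstract describes; the per-channel scales in `s` avoid that, and `fold_activation_scales` shows where such scales could be absorbed into the weights.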