Keywords: Hyper-scale, Compression, Quantization, Transformers, LLM
TL;DR: We propose a post-training quantization scheme for hyper-scale Transformer models that balances accuracy and efficiency.
Abstract: With the increasing complexity of generative AI models, post-training quantization (PTQ) has emerged as a promising solution for deploying hyper-scale models on edge devices such as mobile phones and TVs.
Existing PTQ schemes, however, consume considerable time and resources, which can become a bottleneck in practice, where frequent model updates and multiple rounds of hyperparameter tuning are required.
As a cost-effective alternative, learning-free PTQ schemes have been proposed.
However, their performance is somewhat limited because they cannot account for the inter-layer dependency within the attention module, a defining feature of Transformers.
In this paper, we thus propose a novel PTQ algorithm that balances accuracy and efficiency.
The key idea of the proposed algorithm, called aespa, is to perform quantization layer-wise for efficiency while targeting attention-wise reconstruction to account for the cross-layer dependency.
Through extensive experiments on various language models and complexity analysis, we demonstrate that aespa is accurate and efficient in quantizing Transformer models. The code will be available at https://github.com/SamsungLabs/aespa.
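Since the abstract describes the idea only at a high level, the toy sketch below illustrates what "layer-wise quantization with an attention-wise reconstruction objective" could look like: a single projection is quantized on its own (for efficiency), but its quantization step size is chosen to minimize the reconstruction error of the attention output rather than of the layer output. The module sizes, step-size grid, and 4-bit rounding are illustrative assumptions, not the authors' aespa implementation.

```python
# Hypothetical sketch (not the aespa code): quantize one projection layer-wise,
# but score candidate step sizes by the attention-wise reconstruction error.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n = 64, 32                      # hidden size, sequence length (toy values)
X = torch.randn(n, d)              # calibration activations
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))

def attention(x, wq, wk, wv):
    # Single-head scaled dot-product attention.
    q, k, v = x @ wq, x @ wk, x @ wv
    return F.softmax(q @ k.t() / d ** 0.5, dim=-1) @ v

def quantize(w, step, bits=4):
    # Uniform round-to-nearest quantization with the given step size.
    qmax = 2 ** (bits - 1) - 1
    return (w / step).round().clamp(-qmax - 1, qmax) * step

full = attention(X, Wq, Wk, Wv)    # full-precision attention output

# Quantize the query projection alone (layer-wise), but pick the step size
# that best reconstructs the attention output (attention-wise objective).
best_step, best_err = None, float("inf")
for scale in torch.linspace(0.8, 1.2, 9):
    step = scale * Wq.abs().max() / (2 ** 3 - 1)
    err = (attention(X, quantize(Wq, step), Wk, Wv) - full).pow(2).mean()
    if err < best_err:
        best_step, best_err = step, err

print(f"chosen step={best_step:.4f}, attention-wise MSE={best_err:.6f}")
```

In a purely layer-wise scheme, the step size would instead be chosen to minimize the error of the query projection output alone; the contrast above is only meant to visualize the cross-layer objective stated in the abstract.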
Primary Area: Natural language processing
Submission Number: 3318