Dual Grained Quantization: efficient fine-grained quantization for LLM

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Large Language Models (LLMs) demonstrate considerable potential across a range of tasks, but their extensive memory and compute requirements pose significant deployment challenges. Fine-grained quantization effectively preserves model performance under aggressive weight compression, yet its inefficiency on hardware platforms hinders its applicability in real-world production environments. To enhance hardware efficiency while preserving the accuracy of fine-grained quantization, we propose a novel quantization framework, Dual Grained Quantization (DGQ), employing a W4A8 configuration specifically tailored for LLMs. Through a dual-phase search strategy, DGQ minimizes quantization error without significantly extending quantization time. To further improve the accuracy of W4A8-configured LLMs, we introduce aggressive selective equalization, grounded in the observation that salient weights and activation outliers frequently coexist within the same channels. Comprehensive experiments with our W4A8 CUDA kernel highlight DGQ's strong performance, delivering speedups of 1.37$\times$ and 2.5$\times$ over standard INT8 and FP16 kernels, respectively, while retaining the accuracy benefits of fine-grained quantization.
Paper Type: long
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: Approaches to low-resource settings
Languages Studied: English
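To make the abstract's core idea concrete, here is a minimal NumPy sketch of dual-grained weight quantization. This is a hypothetical illustration, not the paper's exact algorithm: weights are quantized to INT4 with fine-grained per-group scales, and those scales are themselves quantized to UINT8 against one coarse FP32 scale per output channel, so a kernel can work with integer arithmetic within each row. All function names and the group size of 128 are assumptions for the example.

```python
import numpy as np

def dual_grain_quantize(w, group_size=128):
    """Hypothetical sketch of dual-grained W4 quantization: fine-grained
    symmetric INT4 per group, with the per-group scales quantized to UINT8
    against a single coarse FP32 scale per row (output channel)."""
    rows, cols = w.shape
    assert cols % group_size == 0
    g = w.reshape(rows, cols // group_size, group_size)

    # Fine grain: one scale per group, symmetric INT4 range [-8, 7].
    s_fine = np.abs(g).max(axis=-1, keepdims=True) / 7.0
    s_fine = np.maximum(s_fine, 1e-8)  # avoid division by zero
    q4 = np.clip(np.round(g / s_fine), -8, 7)

    # Coarse grain: quantize the group scales to UINT8 with one FP32
    # scale per row, keeping only integer multipliers inside each row.
    s_coarse = s_fine.max(axis=1, keepdims=True) / 255.0
    q_scale = np.clip(np.round(s_fine / s_coarse), 1, 255)

    return q4.astype(np.int8), q_scale.astype(np.uint8), s_coarse.astype(np.float32)

def dequantize(q4, q_scale, s_coarse):
    """Reconstruct the FP weight matrix from the two-level representation."""
    w_hat = q4 * (q_scale * s_coarse)
    return w_hat.reshape(q4.shape[0], -1)
```

In a real W4A8 kernel the INT4 weights and INT8 scale multipliers would stay in integer form through the GEMM, with the coarse FP scale applied once per row at the epilogue; the sketch above only shows the numerics of the two-level representation.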