Layer-Wise High-Impact Parameter Ratio Optimization in Post-Training Quantization for Large Language Models
Keywords: Post-training Quantization, LLMs
Abstract: Large language models (LLMs) have significantly advanced natural language processing, but their massive parameter count creates substantial computational and memory challenges during deployment. Post-training quantization (PTQ) has emerged as a promising approach to mitigate these challenges with minimal overhead. While existing PTQ methods can effectively quantize LLMs, they suffer substantial accuracy loss at extremely low bit-widths, primarily due to high-impact parameters that strongly influence quantization performance. Several approaches address this issue by identifying high-impact parameters and retaining them in FP16 format. However, they apply a fixed ratio of high-impact parameters across all layers, overlooking layer-wise variations in sensitivity. In this paper, we propose a quadratic optimization framework that determines layer-specific ratios of high-impact parameters while accounting for inter-layer dependencies. We quantize high-impact parameters to moderate bit-widths, at which quantized LLMs typically incur negligible performance degradation, while the remaining parameters are quantized to extremely low bit-widths. Under the same resource budget, this preserves more high-impact parameters than methods that retain only a small number of them in FP16 format. Additionally, the proposed framework allows us to apply an advanced quantization method, which typically requires many learnable parameters, solely to the high-impact parameters, while a computationally efficient method handles the rest. Our approach strikes an effective balance between computational efficiency and model accuracy, remaining competitive with state-of-the-art methods.
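As a rough illustration of the core idea only, the sketch below casts layer-wise ratio selection as a small quadratic program under a memory budget, with a quadratic term standing in for inter-layer dependencies. Every symbol here (the sensitivity matrix `H`, benefit vector `g`, cost vector `c`, budget `B`, and the per-layer cap) is a hypothetical placeholder rather than a quantity from the paper, and cvxpy is used purely for convenience.

```python
# Hypothetical sketch: choose per-layer high-impact parameter ratios by
# solving a budgeted quadratic program. Not the paper's actual formulation.
import numpy as np
import cvxpy as cp

num_layers = 32
rng = np.random.default_rng(0)

# Placeholder positive semidefinite "sensitivity" matrix; off-diagonal
# entries mimic inter-layer dependencies between the chosen ratios.
A = rng.standard_normal((num_layers, num_layers))
H = A.T @ A / num_layers + 1e-3 * np.eye(num_layers)

g = rng.random(num_layers)        # assumed per-layer benefit of keeping high-impact params
c = rng.random(num_layers) + 0.5  # assumed per-layer memory cost per unit of ratio
B = 0.05 * c.sum()                # assumed overall budget for moderate-bit storage

r = cp.Variable(num_layers)       # layer-wise high-impact parameter ratios
objective = cp.Minimize(0.5 * cp.quad_form(r, H) - g @ r)
constraints = [r >= 0, r <= 0.2, c @ r <= B]   # per-layer cap and global budget
cp.Problem(objective, constraints).solve()

print("layer-wise ratios:", np.round(r.value, 4))
```

Under these assumptions, layers whose benefit outweighs their cost receive larger ratios, while the budget constraint keeps the total overhead of moderate-bit storage bounded.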
Primary Area: foundation or frontier models, including LLMs
Submission Number: 10651