Searching Optimal Floating-Point Format for Sub-8-Bit Large Language Model Inference

Published: 01 Jan 2024 · Last Modified: 19 Apr 2024 · ICEIC 2024 · CC BY-SA 4.0
Abstract: Large Language Models (LLMs) have shown remarkable success in various natural language processing tasks. However, their extensive parameter count leads to significant memory and computational demands. To tackle these challenges, there is growing interest in employing post-training quantization (PTQ) with reduced-precision floating-point (FP) operations. Yet, the optimal FP configuration remains a topic of debate. Existing studies often overlook a thorough analysis of the diverse data distributions found in LLMs and a crucial design choice: denormal representation. In this paper, we conduct a comprehensive examination of the various data distributions within LLMs and the significance of denormal representation, presenting a mixed-format floating-point framework. Our proposed framework allows for sub-8-bit inference with minimal performance degradation in language modeling and reasoning tasks across a broad spectrum of LLMs.
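To make the denormal design choice mentioned in the abstract concrete, the sketch below decodes values of a generic low-bit floating-point format with E exponent bits and M mantissa bits. This is an illustrative example of standard FP semantics, not the paper's framework: the function name, the E4M3-style parameters, and the bias convention are assumptions chosen for illustration. It shows how denormal (exponent field zero) encodings represent values smaller than the smallest normal number.

```python
# Illustrative sketch (not the paper's implementation): decoding a
# sign/exponent/mantissa triple for a 1-E-M floating-point format.
def decode_fp(sign: int, exp: int, mant: int, E: int, M: int) -> float:
    """Decode one value of a custom format with E exponent and M mantissa bits."""
    bias = 2 ** (E - 1) - 1
    if exp == 0:
        # Denormal: implicit leading 0, exponent fixed at 1 - bias.
        # These encodings fill the gap between 0 and the smallest normal value.
        value = (mant / 2 ** M) * 2 ** (1 - bias)
    else:
        # Normal: implicit leading 1 in the significand.
        value = (1 + mant / 2 ** M) * 2 ** (exp - bias)
    return -value if sign else value

# E4M3-style example (4 exponent bits, 3 mantissa bits, bias 7):
smallest_normal = decode_fp(0, 1, 0, 4, 3)   # 2^-6  = 0.015625
smallest_denorm = decode_fp(0, 0, 1, 4, 3)   # 2^-9  = 0.001953125
```

Without denormal support, every value below `smallest_normal` would flush to zero, which is why the representation choice matters for tensors whose distributions concentrate near zero.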