Efficient Pretraining and Finetuning of Quantized LLMs with Low-Rank Structure

Published: 01 Jan 2024 · Last Modified: 28 Jan 2025 · ICDCS 2024 · CC BY-SA 4.0
Abstract: Large language models (LLMs) are computationally intensive. Their computation workload and memory footprint grow quadratically with the model dimension (layer width). Most of an LLM's parameters come from the linear layers of the transformer architecture and are highly redundant; these linear layers contribute more than 80% of the computation workload and 99% of the model size. Pretraining and finetuning LLMs efficiently requires addressing three major challenges: 1) reducing the redundancy of the linear layers; 2) reducing the GPU memory footprint; and 3) improving GPU utilization in distributed training. Prior methods, such as LoRA and QLoRA, use low-rank structure and quantization to reduce the number of trainable parameters and the model size, respectively. However, the resulting models still consume a large amount of GPU memory. In this paper, we present high-performance GPU-based methods for both pretraining and finetuning quantized LLMs with low-rank structure. We replace each single linear layer in the transformer with two narrower linear layers, reducing the number of parameters by several orders of magnitude. Quantizing the pretrained parameters to low precision (8-bit and 4-bit) further reduces the memory consumption of the resulting model. Compared with existing LLMs, our methods achieve a 1.3x speedup and a 2.64x model compression ratio for pretraining without an accuracy drop. For finetuning, our methods achieve average accuracy score increases of 6.3 and 24.0 on general and financial tasks, respectively, while reducing GPU memory consumption by 6.3x. Our models are smaller than 0.59 GB, allowing inference on a smartphone.
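For concreteness, below is a minimal PyTorch-style sketch of the two ideas described in the abstract: factoring one dense linear layer into two narrower linear layers, and quantizing pretrained weights to 8-bit. The class name LowRankLinear, the helper quantize_int8, the rank of 64, and the hidden width of 4096 are illustrative assumptions, not the paper's exact configuration or quantization scheme.

import torch
import torch.nn as nn


class LowRankLinear(nn.Module):
    """Two narrow linear layers standing in for one dense d_in x d_out layer (illustrative)."""

    def __init__(self, d_in: int, d_out: int, rank: int, bias: bool = True):
        super().__init__()
        # d_in * rank parameters instead of d_in * d_out
        self.down = nn.Linear(d_in, rank, bias=False)
        # rank * d_out (+ d_out bias) parameters
        self.up = nn.Linear(rank, d_out, bias=bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


def quantize_int8(w: torch.Tensor):
    """Symmetric per-tensor 8-bit quantization (a generic scheme, assumed for illustration)."""
    scale = w.abs().max().clamp(min=1e-8) / 127.0
    q = torch.round(w / scale).clamp(-128, 127).to(torch.int8)
    return q, scale  # dequantize with q.float() * scale


# Parameter count for a hidden width of 4096 and an assumed rank of 64:
#   dense layer:    4096 * 4096           ~ 16.8M parameters
#   low-rank pair:  4096 * 64 + 64 * 4096 ~  0.52M parameters (~32x fewer)
layer = LowRankLinear(4096, 4096, rank=64)
print(sum(p.numel() for p in layer.parameters()))  # 528384, including the output bias

The rank controls the trade-off: a smaller rank shrinks parameters and memory further but constrains the expressiveness of the factored layer.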