Track: tiny / short paper (up to 4 pages)
Keywords: Large Language Models, Post-Training Quantization, Finetuning
TL;DR: A novel approach for compression and memory-efficient adaptation of pre-trained language models that encompasses most of the post-training quantization and fine-tuning methods.
Abstract: We introduce ReALLM, a novel approach for compression and memory-efficient adaptation of pre-trained language models that encompasses most post-training quantization and fine-tuning methods for a budget of $<4$ bits. Pre-trained matrices are decomposed into a high-precision low-rank component and a vector-quantized latent representation (obtained with an autoencoder). During the fine-tuning step, only the low-rank components are updated. Our results show that pre-trained matrices exhibit different patterns, and ReALLM adapts the shape of the encoder (small/large embedding, high/low-bit VQ, etc.) to each matrix. ReALLM represents each matrix with a small embedding on $b$ bits and a neural decoder model $D_{\phi}$ whose weights are stored on $b_\phi$ bits. Decompressing a matrix requires only one embedding and a single forward pass with the decoder. Our weight-only quantization algorithm yields the best results on language modeling tasks (C4, WikiText-2) for a budget of $3$ bits *without* any training. With a budget of $2$ bits, ReALLM achieves state-of-the-art performance on understanding tasks (ARC, PiQA, Winogrande, MMLU) as well as generation tasks (TruthfulQA) after fine-tuning on a single partition of the C4 dataset. Additionally, ReALLM is practical in terms of inference latency and memory footprint.
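The abstract above describes each pre-trained matrix as a frozen decoded latent plus a trainable high-precision low-rank correction. Below is a minimal PyTorch sketch of that structure only, not the authors' implementation: the names (`ReALLMLinear`, `decoder`, `latent`) and toy shapes are assumptions, and the actual vector quantization of the latent, the $b_\phi$-bit storage of the decoder, and the per-matrix choice of encoder shape are omitted.

```python
# Hypothetical sketch of a ReALLM-style layer: frozen decoded base weight
# plus a trainable low-rank term. Not the authors' code.
import torch
import torch.nn as nn

class ReALLMLinear(nn.Module):
    """Weight = frozen decoder(latent) + trainable low-rank correction A @ B."""

    def __init__(self, decoder: nn.Module, latent: torch.Tensor,
                 out_features: int, in_features: int, rank: int = 16):
        super().__init__()
        self.decoder = decoder                  # D_phi; in ReALLM its weights are stored on b_phi bits
        for p in self.decoder.parameters():     # the decoder is frozen
            p.requires_grad_(False)
        self.register_buffer("latent", latent)  # small per-matrix embedding (b-bit VQ in ReALLM; float here)
        # Only the low-rank factors A and B are updated during fine-tuning.
        self.A = nn.Parameter(torch.zeros(out_features, rank))
        self.B = nn.Parameter(torch.randn(rank, in_features) * 0.01)

    def dequantize(self) -> torch.Tensor:
        # Decompression: one embedding and a single forward pass through the decoder.
        with torch.no_grad():
            base = self.decoder(self.latent.unsqueeze(0)).squeeze(0)
        return base + self.A @ self.B

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.dequantize().T

# Toy usage: a decoder mapping a 64-dim latent to a 256x256 weight matrix.
decoder = nn.Sequential(nn.Linear(64, 256 * 256), nn.Unflatten(-1, (256, 256)))
layer = ReALLMLinear(decoder, latent=torch.randn(64), out_features=256, in_features=256)
y = layer(torch.randn(8, 256))  # gradients flow only into layer.A and layer.B
```

In this sketch, only `A` and `B` receive gradients, matching the abstract's statement that just the low-rank components are updated during fine-tuning.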
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Presenter: ~Lisa_Bedin2
Format: Yes, the presenting author will definitely attend in person because they are attending ICLR for other complementary reasons.
Funding: No, the presenting author of this submission does *not* fall under ICLR’s funding aims, or has sufficient alternate funding.
Submission Number: 60