LoQT: Low Rank Adapters for Quantized Training

Published: 18 Jun 2024, Last Modified: 10 Jul 2024
Venue: WANT@ICML 2024 (Oral)
License: CC BY 4.0
Keywords: Quantization, Low-Rank Adaptation, Memory Efficient Training, Large Language Models
TL;DR: LoQT enables efficient quantized pre-training of LLMs with results close to those of full-rank, non-quantized models. It allows pre-training a 13B-parameter LLM on a 24GB GPU without model parallelism, checkpointing, or offloading strategies during training.
Abstract: Training large neural networks requires significant computational resources. Despite advances in low-rank adapters and quantization, pretraining models such as LLMs on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose LoQT, a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning, achieving performance similar to full training, which we demonstrate experimentally for language modeling and downstream task adaptation. We find that LoQT enables efficient training of models with up to 13B parameters on a consumer-grade 24GB GPU.
Submission Number: 25
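
The following is a minimal, illustrative PyTorch sketch of the general idea described in the abstract, not the authors' implementation: a frozen, quantized full-rank weight plus small trainable low-rank factors that are periodically merged back into the base weight and re-quantized. The names (`LoQTLinearSketch`, `fake_quantize`, `merge_and_requantize`), the simulated uniform 4-bit quantizer (standing in for a real quantization scheme), and the SVD-based projection initialization are assumptions introduced here for illustration only.

```python
import torch


def fake_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Simulated symmetric uniform quantization (placeholder for a real scheme such as NF4)."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp_min(1e-8) / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale


class LoQTLinearSketch(torch.nn.Module):
    """Conceptual sketch: quantized frozen weight W_q plus trainable low-rank update P @ B."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        w = torch.randn(out_features, in_features) / in_features ** 0.5
        self.register_buffer("w_q", fake_quantize(w))          # frozen, quantized full-rank weight
        self.register_buffer("P", torch.zeros(out_features, rank))  # fixed projection (not trained)
        self.B = torch.nn.Parameter(torch.zeros(rank, in_features))  # trainable low-rank factor
        self.rank = rank

    @torch.no_grad()
    def init_projection(self, grad: torch.Tensor) -> None:
        """Initialize P from the top singular vectors of a gradient (gradient-based factorization)."""
        U, _, _ = torch.linalg.svd(grad, full_matrices=False)
        self.P.copy_(U[:, : self.rank])
        self.B.zero_()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight is the quantized base plus the low-rank correction.
        return x @ (self.w_q + self.P @ self.B).t()

    @torch.no_grad()
    def merge_and_requantize(self, grad: torch.Tensor) -> None:
        """Periodically fold the low-rank update into the base weight and re-quantize it."""
        self.w_q.copy_(fake_quantize(self.w_q + self.P @ self.B))
        self.init_projection(grad)


# Hypothetical usage: only B receives gradients; merges happen at a fixed interval.
layer = LoQTLinearSketch(64, 32, rank=4)
layer.init_projection(torch.randn(32, 64))        # e.g. from an initial gradient estimate
x = torch.randn(8, 64)
loss = layer(x).pow(2).mean()
loss.backward()                                    # gradients flow to B only
layer.merge_and_requantize(torch.randn(32, 64))    # periodic merge + re-quantization step
```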