Track: long paper (up to 4 pages)
Keywords: LoRA, Quantization, QLoRA, Transferability
TL;DR: A Quantization-LoRA framework that leverages adapter transferability to improve the performance of trained QLoRA models or to accelerate LoRA training.
Abstract: In this study, we consider the transferability of LoRA adapters across quantized foundation models.
Specifically, we investigate whether a LoRA adapter trained on a low-bit-width foundation model still performs effectively when merged into a higher-bit-width version of the same foundation model.
By leveraging this transferability, QLoRA adapters trained under resource-constrained conditions can be used to construct models whose performance is comparable to that of conventional LoRA fine-tuning.
This approach not only improves the performance of already-trained QLoRA models without additional training but also accelerates LoRA fine-tuning.
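A minimal sketch of the cross-bit-width transfer described in the abstract, assuming Hugging Face transformers, bitsandbytes, and peft; this is not the authors' code, and the model name, adapter path, and precision settings are illustrative assumptions.

```python
# Sketch: train a LoRA adapter on a 4-bit (QLoRA) base model, then attach the
# same adapter to a higher-bit-width (bf16) copy of the base model.
# Model id and adapter path are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"        # assumed base model
adapter_dir = "qlora-adapter-checkpoint"    # assumed path to the QLoRA-trained adapter

# (1) Training side, shown for context: the adapter is trained on a 4-bit base.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
quantized_base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config)
# ... QLoRA fine-tuning with peft would happen here, saving LoRA weights to adapter_dir ...

# (2) Transfer side: load the *same* adapter onto a higher-bit-width base model.
bf16_base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
transferred = PeftModel.from_pretrained(bf16_base, adapter_dir)

# Optionally merge the adapter into the bf16 weights for deployment.
merged = transferred.merge_and_unload()
```

In this sketch the adapter weights are reused unchanged; only the precision of the base model they are merged into differs between training and deployment.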
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 81