Post-LoRA Restoration: Utilizing Transferability of Low-Rank Adapter in Quantized Foundation Models

Published: 05 Mar 2025, Last Modified: 25 Apr 2025
Venue: SLLM
License: CC BY 4.0
Track: long paper (up to 4 pages)
Keywords: LoRA, Quantization, QLoRA, Transferability
TL;DR: A novel Quantization-LoRA framework that leverages the transferability of adapters to improve the performance of QLoRA models or accelerate LoRA training.
Abstract: In this study, we investigate the transferability of LoRA adapters across quantized foundation models. Specifically, we examine whether LoRA adapters trained on a low-bit-width foundation model still perform effectively when merged into a higher-bit-width foundation model. By leveraging this transferability, it becomes possible to construct models whose performance is comparable to conventional LoRA, using QLoRA adapters trained under resource-constrained conditions. This approach not only improves the performance of trained QLoRA models without additional training but also accelerates LoRA fine-tuning.
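As a concrete illustration of the workflow described in the abstract, the sketch below shows training a LoRA adapter on a 4-bit quantized base model and then merging that adapter into a higher-precision copy of the same base model. It assumes the Hugging Face transformers, peft, and bitsandbytes stack; the model name, adapter path, and hyperparameters are placeholders rather than details taken from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, PeftModel, get_peft_model

BASE_MODEL = "meta-llama/Llama-2-7b-hf"   # placeholder base model
ADAPTER_DIR = "./qlora-adapter"           # placeholder path for the trained adapter

# --- Step 1: QLoRA fine-tuning on a low-bit (4-bit) base model ---
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
quantized_base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
lora_config = LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05, task_type="CAUSAL_LM",
)
qlora_model = get_peft_model(quantized_base, lora_config)
# ... run the usual supervised fine-tuning loop on qlora_model here ...
qlora_model.save_pretrained(ADAPTER_DIR)  # saves only the LoRA weights

# --- Step 2: transfer the adapter into a higher-bit-width base model ---
full_precision_base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)
transferred = PeftModel.from_pretrained(full_precision_base, ADAPTER_DIR)
restored_model = transferred.merge_and_unload()  # fold the LoRA weights into the 16-bit base
```

The key point is that the adapter trained against the quantized weights in Step 1 is applied unchanged to the higher-precision weights in Step 2; whether this preserves or improves quality is exactly the transferability question the paper studies.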
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 81