Toward the First Optimization Framework for Low-Rank Adaptation

Published: 22 Sept 2025 · Last Modified: 01 Dec 2025 · NeurIPS 2025 Workshop · CC BY 4.0
Keywords: LoRA, optimization, stochastic optimization, low-rank adaptation
TL;DR: We present RAC-LoRA, a low-rank optimization framework with provable guarantees of convergence to the same solution as full-parameter fine-tuning.
Abstract: Fine-tuning is a common approach for adapting large foundational models to downstream tasks. With growing model and dataset sizes, parameter-efficient techniques have become crucial. A widely used method is Low-Rank Adaptation (LoRA), which expresses updates as a product of two low-rank matrices. While effective, LoRA often lags behind full-parameter fine-tuning (FPFT), and its optimization theory remains underexplored. We show that LoRA and its extensions, Asymmetric LoRA and Chain of LoRA, face convergence issues. To address this, we propose Randomized Asymmetric Chain of LoRA (RAC-LoRA), a general framework for analyzing the convergence rates of LoRA-based methods. Our approach keeps the empirical benefits of LoRA while introducing algorithmic modifications that ensure provable convergence. The framework bridges FPFT and low-rank adaptation, guaranteeing convergence to the FPFT solution with explicit rates. We further provide an analysis for smooth non-convex losses under gradient descent, stochastic gradient descent, and federated learning, supported by experiments.
Submission Number: 143
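
The abstract describes LoRA as expressing the weight update as a product of two low-rank matrices, with only those factors being trained. The snippet below is a minimal sketch of that parameterization on a toy quadratic objective; it is not the paper's RAC-LoRA procedure, and all names (W, A, B, r, lr, the toy loss) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the LoRA parameterization: the update to a frozen weight W
# is the product B @ A of two low-rank factors, and only B and A are trained.
# Illustrative toy setup, not the paper's notation or its RAC-LoRA algorithm.

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 32, 4                       # layer shape, rank r << min(d_out, d_in)
W = rng.standard_normal((d_out, d_in))           # frozen pretrained weight
W_target = rng.standard_normal((d_out, d_in))    # toy "ideal" weight defining the loss

def grad_wrt_effective_weight(W_eff):
    """Gradient of the toy loss ||W_eff - W_target||_F^2 w.r.t. the effective weight."""
    return 2.0 * (W_eff - W_target)

# Common LoRA-style initialization: B = 0, so training starts from W itself.
B = np.zeros((d_out, r))
A = rng.standard_normal((r, d_in)) / np.sqrt(d_in)

lr, steps = 1e-3, 200
for _ in range(steps):
    G = grad_wrt_effective_weight(W + B @ A)     # gradient at the current effective weight
    grad_B, grad_A = G @ A.T, B.T @ G            # chain rule through the product B @ A
    B -= lr * grad_B                             # only the low-rank factors are updated;
    A -= lr * grad_A                             # W itself stays frozen

W_merged = W + B @ A                             # low-rank update merged back into W
print(float(np.linalg.norm(W_merged - W_target)))
```

Note that on this toy problem a rank-r update generally cannot reach the full-rank FPFT solution; this gap between plain low-rank training and FPFT is the kind of issue the abstract says the proposed framework addresses with provable convergence to the FPFT solution.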