Keywords: Parameter-Efficient Fine-Tuning, Large Language Models, Robust Adaptation, Sparse Plus Low-Rank Decomposition
Abstract: Parameter-Efficient Fine-Tuning (PEFT) methods, such as Low-Rank Adaptation (LoRA), are widely adopted for their efficiency. However, LoRA assumes model updates are inherently low-rank, which introduces a restrictive bias that results in underperformance compared to full fine-tuning. Hybrid approaches, such as Robust Adaptation (RoSA), improve expressiveness by combining low-rank and sparse components, but they rely on a manually tuned ratio to balance these components, leading to suboptimal parameter allocation across tasks. We introduce RA-SpaRC (Robust Adaptation with Sparse plus Low-Rank Compressors), a new initialization strategy that overcomes this limitation. The key idea is an adaptive allocation mechanism that automatically balances sparse and low-rank components within a given parameter budget. This approach removes the need for manual rank–sparsity tuning and supports arbitrary parameter budgets. This principled and automated design allows RA-SpaRC to consistently outperform LoRA, its variants, and RoSA in extensive experiments across multiple models, delivering more effective and flexible adaptation.
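To make the abstract's central idea concrete, below is a minimal, illustrative sketch of splitting a fixed parameter budget between a low-rank factor pair and a sparse residual. The rank-sweep heuristic, the function name `sparse_plus_low_rank_compress`, and the error criterion are assumptions for illustration only; they are not the actual RA-SpaRC allocation rule, which is defined in the paper itself.

```python
import numpy as np

def sparse_plus_low_rank_compress(W, budget):
    """Illustrative only: split a parameter budget between a rank-r factor pair
    and a sparse residual, choosing r to minimize the remaining error.
    (Hypothetical helper; not the RA-SpaRC allocation mechanism itself.)"""
    m, n = W.shape
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    best = None
    max_rank = min(len(S), budget // (m + n))   # each rank costs m + n parameters
    for r in range(max_rank + 1):
        low_rank = (U[:, :r] * S[:r]) @ Vt[:r] if r > 0 else np.zeros_like(W)
        residual = W - low_rank
        k = budget - r * (m + n)                # parameters left for sparse entries
        flat = np.abs(residual).ravel()
        keep = np.zeros_like(flat, dtype=bool)
        if k > 0:
            keep[np.argpartition(flat, -k)[-k:]] = True
        sparse = np.where(keep.reshape(m, n), residual, 0.0)
        err = np.linalg.norm(W - low_rank - sparse)
        if best is None or err < best[0]:
            best = (err, r, low_rank, sparse)
    return best  # (approximation error, chosen rank, low-rank part, sparse part)

# Example: compress a 64x64 update with a budget of roughly 10% of its entries.
rng = np.random.default_rng(0)
W = (rng.standard_normal((64, 16)) @ rng.standard_normal((16, 64))
     + 0.1 * rng.standard_normal((64, 64)))
err, r, L, S_sparse = sparse_plus_low_rank_compress(W, budget=410)
print(f"chosen rank: {r}, relative error: {err / np.linalg.norm(W):.3f}")
```

The point of the sketch is the trade-off it exposes: every unit of rank consumes m + n parameters that could otherwise be spent on individual sparse entries, so the split that minimizes residual error differs from matrix to matrix rather than following a fixed rank-to-sparsity ratio.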
Primary Area: optimization
Submission Number: 11340