DMLoRA: Dynamic Multi-Subspace Low-Rank Adaptation

Published: 01 Jan 2025 · Last Modified: 16 Oct 2025 · WWW (Companion Volume) 2025 · CC BY-SA 4.0
Abstract: As one of the most widely adopted parameter-efficient fine-tuning (PEFT) techniques, LoRA and its variants have garnered significant attention for avoiding additional inference costs. However, standard LoRA struggles to fully match the expressive capacity of fully fine-tuned models due to inherent approximation errors and the limited flexibility of its rank-level component weights. In this work, we propose Dynamic Multi-Subspace LoRA (DMLoRA), which partitions high-dimensional input features into multiple subspaces, each optimized with dynamic rank-level weighting. By employing dynamic weights within fine-grained subspaces, DMLoRA reduces the number of fine-tuned parameters while enhancing the flexibility of rank-level representations. Moreover, we present a rigorous theoretical analysis showing that the proposed subspace-induced dynamic LoRA achieves lower approximation error than static-weighted approaches. Extensive experiments on commonsense reasoning and natural language understanding benchmarks validate the superiority of DMLoRA over conventional LoRA and its variants, demonstrating its efficacy in both parameter efficiency and expressive capacity. The code for this paper is available at https://github.com/MobiusDai/DMLoRA.
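The abstract describes the mechanism only at a high level; the authors' implementation is at the GitHub link above. Below is a minimal PyTorch sketch of the idea as stated: input features are split into several subspaces, each subspace gets its own low-rank pair, and a small input-conditioned gate supplies dynamic per-rank weights. The class name `DMLoRALinear`, the softmax gate, and all hyperparameters are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn


class DMLoRALinear(nn.Module):
    """Hypothetical sketch of a dynamic multi-subspace LoRA layer.

    Input features are split into `n_sub` subspaces; each subspace i has
    its own low-rank pair (A_i, B_i), and a small gate produces dynamic
    per-rank weights from that subspace's input (assumed design).
    """

    def __init__(self, in_features, out_features, rank=4, n_sub=2, alpha=8.0):
        super().__init__()
        assert in_features % n_sub == 0, "in_features must split evenly across subspaces"
        self.n_sub = n_sub
        self.sub_dim = in_features // n_sub
        self.scaling = alpha / rank
        # Frozen base weight, standing in for the pretrained layer.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # One low-rank pair per subspace (B initialized to zero, as in LoRA).
        self.A = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, self.sub_dim) * 0.01) for _ in range(n_sub)]
        )
        self.B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features, rank)) for _ in range(n_sub)]
        )
        # Gate mapping each subspace's input to dynamic rank-level weights.
        self.gate = nn.ModuleList(
            [nn.Linear(self.sub_dim, rank) for _ in range(n_sub)]
        )

    def forward(self, x):
        out = self.base(x)
        chunks = x.chunk(self.n_sub, dim=-1)  # partition features into subspaces
        for i, x_i in enumerate(chunks):
            z = x_i @ self.A[i].T                        # (..., rank) projection
            w = torch.softmax(self.gate[i](x_i), dim=-1)  # dynamic rank weights
            out = out + self.scaling * ((w * z) @ self.B[i].T)
        return out


# Usage example (shapes only; all sizes are illustrative):
layer = DMLoRALinear(in_features=768, out_features=768, rank=4, n_sub=4)
y = layer(torch.randn(2, 768))  # -> shape (2, 768)
```

With static weights, the rank-level contributions are fixed after training; the gate above makes them a function of the input, which is the flexibility the abstract attributes to dynamic rank-level weighting.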