Dynamic Rank Reallocation for Module-Level LoRA

ICLR 2026 Conference Submission 17453 Authors

19 Sept 2025 (modified: 08 Oct 2025) · License: CC BY 4.0
Keywords: Parameter-Efficient Fine-Tuning, LLM
Abstract: Low-Rank Adaptation (LoRA) and its variants have been widely applied to parameter-efficient fine-tuning of large language models. However, most existing approaches assign the same rank to every module, which limits the representational capacity and adaptation flexibility of the fine-tuned model. Inspired by interference theory in human learning, which posits that forgetting outdated knowledge and devoting more resources to relevant new knowledge facilitates learning, we propose the Dynamic rank re-Allocation method (DaRA). In DaRA, each rank of a module corresponds to a direction in parameter space, analogous to a knowledge component. During a warm-up phase, DaRA allocates the same rank to all modules, allowing the model to form a coarse understanding of the task. Afterwards, DaRA reallocates ranks by discarding less useful directions from unimportant modules (outdated knowledge) and assigning additional rank to important modules (new knowledge). This dynamic adjustment mirrors the human learning process described by interference theory. Extensive experiments across diverse model architectures and downstream tasks demonstrate that DaRA consistently outperforms existing baselines.
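The abstract describes the reallocation mechanism only at a high level. Below is a minimal, hypothetical sketch of the general idea (not the authors' implementation): it assumes each rank-1 direction of a LoRA module is scored by the norm of its contribution, and that after warm-up a fixed total rank budget is redistributed across modules in proportion to their importance. All names (`LoRALinear`, `direction_scores`, `reallocate_ranks`) and the scoring rule are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update W + B @ A (illustrative)."""
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        self.A = nn.Parameter(0.01 * torch.randn(rank, in_features))
        self.B = nn.Parameter(torch.zeros(out_features, rank))  # standard LoRA zero-init

    def forward(self, x):
        return x @ (self.weight + self.B @ self.A).T

    def direction_scores(self):
        # Score each rank-1 direction i by ||B[:, i]|| * ||A[i, :]|| (assumed importance proxy).
        return self.B.norm(dim=0) * self.A.norm(dim=1)

    def resize_rank(self, new_rank):
        # Keep the top-scoring directions; pad with fresh zero-contribution directions if growing.
        scores = self.direction_scores()
        keep = scores.topk(min(new_rank, scores.numel())).indices
        A_new, B_new = self.A.data[keep], self.B.data[:, keep]
        if new_rank > keep.numel():
            extra = new_rank - keep.numel()
            A_new = torch.cat([A_new, 0.01 * torch.randn(extra, self.A.size(1))])
            B_new = torch.cat([B_new, torch.zeros(self.B.size(0), extra)], dim=1)
        self.A, self.B = nn.Parameter(A_new), nn.Parameter(B_new)

def reallocate_ranks(modules, total_rank):
    """Redistribute a fixed rank budget in proportion to per-module importance (sketch only;
    exact budget matching and scheduling details are omitted)."""
    importance = torch.stack([m.direction_scores().sum() for m in modules])
    shares = importance / importance.sum()
    new_ranks = torch.clamp((shares * total_rank).round().long(), min=1)
    for m, r in zip(modules, new_ranks.tolist()):
        m.resize_rank(r)

# Usage: equal ranks during warm-up, then one reallocation step.
layers = [LoRALinear(64, 64, rank=4) for _ in range(4)]
opt = torch.optim.SGD([p for l in layers for p in (l.A, l.B)], lr=0.1)
for _ in range(10):  # toy warm-up so B becomes non-zero and scores are informative
    x = torch.randn(8, 64)
    loss = sum((l(x) ** 2).mean() for l in layers)
    opt.zero_grad(); loss.backward(); opt.step()
reallocate_ranks(layers, total_rank=16)
print([l.A.size(0) for l in layers])  # per-module ranks after reallocation
```

After reallocation the optimizer would be rebuilt over the new `A`/`B` parameters and fine-tuning would continue; how DaRA schedules and scores this step is specified in the paper, not in this sketch.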
Primary Area: foundation or frontier models, including LLMs
Submission Number: 17453