PE-DyRA: Dynamic Rank Adaptation for Parameter-Efficient Fine-Tuning via Importance-Aware Pruning and Expansion

16 Sept 2025 (modified: 03 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: parameter-efficient fine-tuning, large language model, low-rank adaptation
TL;DR: A novel framework that dynamically allocates ranks through importance score-based pruning and expansion.
Abstract: As large language models grow in scale, full-parameter fine-tuning for downstream tasks incurs substantial computational and storage costs. Low-Rank Adaptation (LoRA) provides a parameter-efficient paradigm for model adaptation, but its fixed-rank allocation cannot adapt to the heterogeneous importance of different layers or the evolving requirements across training stages, resulting in either redundancy or insufficient capacity. In this paper, we introduce Dynamic Rank Adaptation via Importance-Aware Pruning and Expansion (PE-DyRA), a novel framework that dynamically allocates ranks through importance-score-based pruning and expansion. PE-DyRA introduces three key innovations: 1) A parameter importance measure that combines gradient information and input activations to yield a more stable importance ranking; 2) A bidirectional rank adjustment mechanism that dynamically prunes and expands ranks based on importance, enabling flexible allocation and improved parameter utilization; 3) A plug-and-play design that can be applied on top of existing methods such as DoRA, PiSSA, and QLoRA to further improve their results. Extensive experiments demonstrate the effectiveness of PE-DyRA, which surpasses baseline methods. Furthermore, theoretical analysis shows that PE-DyRA achieves better parameter efficiency.
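To make the abstract's pruning-and-expansion idea concrete, below is a minimal illustrative sketch of importance-score-based rank adjustment for a single LoRA layer. The class and function names (`LoRALinear`, `rank_importance`, `adjust_rank`) and the specific score (gradient-weight sensitivity scaled by a running input-activation norm) are assumptions for illustration; PE-DyRA's actual importance measure, schedule, and expansion rule are defined in the paper.

```python
# Illustrative sketch only: importance-score-based pruning/expansion of LoRA ranks.
# The score and update rule here are assumed stand-ins, not PE-DyRA's exact formulation.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer with a low-rank update W + B A of rank r."""

    def __init__(self, in_features, out_features, r):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)   # r x d_in
        self.B = nn.Parameter(torch.zeros(out_features, r))         # d_out x r
        self.register_buffer("act_norm", torch.zeros(1))            # running input-activation norm

    def forward(self, x):
        # Track a running estimate of the input activation magnitude.
        with torch.no_grad():
            self.act_norm.mul_(0.9).add_(0.1 * x.detach().norm())
        return x @ self.weight.T + (x @ self.A.T) @ self.B.T


def rank_importance(layer):
    """Per-rank importance: |w * dL/dw| summed over each rank component,
    scaled by the input-activation norm (assumed proxy for the paper's score)."""
    sa = (layer.A.grad * layer.A.data).abs().sum(dim=1)   # shape (r,)
    sb = (layer.B.grad * layer.B.data).abs().sum(dim=0)   # shape (r,)
    return (sa + sb) * layer.act_norm


def adjust_rank(layer, scores, prune_quantile=0.2, expand=1):
    """Drop the lowest-importance rank components, then append fresh ones."""
    keep = scores > torch.quantile(scores, prune_quantile)
    A, B = layer.A.data[keep], layer.B.data[:, keep]
    new_A = torch.randn(expand, A.shape[1], device=A.device) * 0.01
    new_B = torch.zeros(B.shape[0], expand, device=B.device)  # new ranks start with zero contribution
    layer.A = nn.Parameter(torch.cat([A, new_A], dim=0))
    layer.B = nn.Parameter(torch.cat([B, new_B], dim=1))


# Usage: one fine-tuning step followed by a rank adjustment.
layer = LoRALinear(64, 64, r=8)
x, y = torch.randn(4, 64), torch.randn(4, 64)
loss = ((layer(x) - y) ** 2).mean()
loss.backward()
adjust_rank(layer, rank_importance(layer))
print(layer.A.shape[0])  # effective rank after pruning + expansion
```

In a full training loop, the optimizer state would need to be rebuilt after each rank change, and the prune/expand budget would be coordinated across layers so that more important layers receive more rank; those details are omitted in this sketch.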
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 6795