Abstract: In the era of large language models, low-rank adaptation (LoRA) is an effective method for model fine-tuning, and rank reassignment can further improve its performance. However, existing rank-adjustment methods often generalize poorly and lack interpretability in their scoring mechanisms. We propose a new framework, I-LoRA, which addresses these limitations through two key innovations: first, we integrate interpretable integral gradients for robust parameter scoring; second, we streamline the workflow of traditional methods to improve fine-tuning performance. Extensive experiments on natural language understanding and generation tasks demonstrate the superior generalization ability of I-LoRA, and ablation studies confirm its effectiveness.
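To illustrate the kind of scoring the abstract describes, the sketch below shows an integral-gradients-style importance score for the rank components of a toy LoRA layer. This is a minimal, hypothetical example, not the authors' released code: the per-rank gate parameter, the straight-line integration path from a zero baseline, and the MSE stand-in for the task loss are all assumptions made for illustration.

```python
# Hypothetical sketch: integrated/integral-gradients-style scoring of LoRA rank
# components (assumptions: PyTorch, a toy linear LoRA module, MSE task loss).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a low-rank update W + B @ diag(gate) @ A."""
    def __init__(self, in_features, out_features, rank=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        # Per-rank gates let each rank direction be scored independently.
        self.gate = nn.Parameter(torch.ones(rank))

    def forward(self, x):
        delta = self.B @ torch.diag(self.gate) @ self.A
        return x @ (self.weight + delta).T

def integral_gradient_scores(layer, loss_fn, x, y, steps=20):
    """Approximate path-integrated gradients of the loss w.r.t. each rank gate,
    integrating along the straight path from gate = 0 (no update) to gate = 1."""
    baseline = torch.zeros_like(layer.gate.data)
    original = layer.gate.data.clone()
    total_grad = torch.zeros_like(original)
    for alpha in torch.linspace(0.0, 1.0, steps):
        layer.gate.data = baseline + alpha * (original - baseline)
        layer.zero_grad()
        loss = loss_fn(layer(x), y)
        loss.backward()
        total_grad += layer.gate.grad.detach()
    layer.gate.data = original  # restore the learned gates
    # Importance of each rank = |(gate - baseline) * average path gradient|.
    return ((original - baseline) * total_grad / steps).abs()

if __name__ == "__main__":
    torch.manual_seed(0)
    layer = LoRALinear(16, 8, rank=4)
    layer.B.data = torch.randn_like(layer.B) * 0.1  # non-trivial update
    x, y = torch.randn(32, 16), torch.randn(32, 8)
    scores = integral_gradient_scores(layer, nn.MSELoss(), x, y)
    print("per-rank importance:", scores)  # higher score -> keep this rank
```

In a rank-reassignment loop, scores like these would be compared across layers so that ranks with low attributed importance can be pruned and the freed budget reallocated; the exact reallocation rule used by I-LoRA is not specified here.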
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: parameter-efficient-training
Contribution Types: NLP engineering experiment, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 3558