Keywords: Parameter-Efficient Fine-Tuning
Abstract: Parameter-efficient fine-tuning methods such as Low-Rank Adaptation (LoRA) have become essential for deploying large language models, yet their static parameter allocation remains suboptimal for inputs of varying complexity. We present Flexi-LoRA, a novel framework that dynamically adjusts LoRA ranks based on input complexity during both training and inference. Through empirical analysis across question answering and mathematical reasoning tasks, we demonstrate that maintaining consistency between training and inference dynamics is critical for effective adaptation, particularly for sequential reasoning tasks. Our findings reveal that input-dependent parameter allocation achieves superior performance with fewer parameters by matching rank configurations to question complexity. Sensitivity to rank dynamics also varies by task, with mathematical reasoning exhibiting greater sensitivity than question answering. Successful adaptation manifests not only in correctness but also in reasoning quality and instruction adherence. Flexi-LoRA consistently outperforms static LoRA while using fewer parameters, with gains most pronounced on tasks requiring strict reasoning chains. Our approach realizes key benefits of mixture-of-experts frameworks through a more streamlined implementation, reducing parameter redundancy while enhancing model capability. We provide comprehensive empirical studies across diverse tasks, establishing a foundation for future work on input-adaptive, efficient fine-tuning.
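The core idea of input-dependent rank allocation can be sketched minimally: a LoRA update `B @ A @ x` can be truncated to its first `r` components per input, with `r` chosen by a complexity score. The complexity heuristic below (input norm) and all function names are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def lora_delta(x, A, B, rank):
    """Low-rank update using only the first `rank` components.

    A: (r_max, d_in), B: (d_out, r_max). Truncating both factors to
    `rank` yields an effective rank-`rank` update B[:, :rank] @ A[:rank] @ x,
    so fewer adapter parameters are active for simpler inputs.
    """
    return B[:, :rank] @ (A[:rank] @ x)

def complexity_rank(x, r_max, scale=1.0):
    """Hypothetical complexity heuristic: map the input's norm to a
    rank in [1, r_max]. The paper's actual complexity measure is not
    specified here; this stand-in only illustrates the mechanism."""
    score = np.linalg.norm(x) / scale
    return int(np.clip(np.ceil(score), 1, r_max))

rng = np.random.default_rng(0)
d_in, d_out, r_max = 8, 8, 4
W = rng.standard_normal((d_out, d_in))          # frozen base weight
A = rng.standard_normal((r_max, d_in)) * 0.1    # LoRA down-projection
B = rng.standard_normal((d_out, r_max)) * 0.1   # LoRA up-projection

x = rng.standard_normal(d_in)
r = complexity_rank(x, r_max)                    # per-input rank choice
y = W @ x + lora_delta(x, A, B, r)               # adapted forward pass
```

Using the full rank recovers the standard static LoRA update, so the dynamic variant strictly generalizes it.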
Submission Number: 39