Abstract: Despite Low-Rank Adaptation (LoRA)'s popularity for fine-tuning large models, it often exhibits a noticeable performance gap compared to full fine-tuning, particularly on complex tasks such as mathematical reasoning and code generation. Motivated by this discrepancy, we propose a novel fusion approach for LoRA fine-tuned models. Our key insight is that LoRA models trained on the same task with different random seeds often develop complementary strengths. In contrast to existing research, which typically fuses models trained on diverse tasks, we explore combining multiple LoRA models fine-tuned on the same task with different random seeds. This intra-task fusion leverages the strengths of the individual fine-tuned models to create a more robust and effective adaptation. To validate our approach, we conduct comprehensive experiments across three key areas: mathematical reasoning, code generation, and general instruction tuning. The results demonstrate that our fusion method significantly enhances LoRA's performance, outperforming both standalone LoRA models and existing fusion methods. Notably, this advancement substantially narrows the gap between LoRA and full fine-tuning, offering a more effective approach to model adaptation without the GPU memory burden of full-parameter fine-tuning.
Lay Summary: Despite its popularity for making large language models (LLMs) more efficient, a technique called Low-Rank Adaptation (LoRA) often falls short of full fine-tuning, especially for challenging tasks like solving math problems or generating code.
Our research, "SeedLoRA," introduces a new method to close this performance gap. We observed that LoRA models, even when trained on the same task, develop unique strengths if they start with different random initial settings (seeds). Unlike existing methods that merge models trained on different tasks, SeedLoRA specifically focuses on combining these "same-task, different-seed" LoRA models.
SeedLoRA uses a two-stage process: first, it identifies and preserves the strong, consistent knowledge shared across these models; then, it cleverly fuses the unique, complementary insights from each model in a shared digital space.
Our experiments demonstrate that SeedLoRA significantly improves performance across a range of challenging tasks, matching or even exceeding full fine-tuning while retaining LoRA's efficiency. This result highlights that by combining the diverse strengths learned by models starting from different random seeds, we can build more robust and effective large language models, suggesting a new path for optimizing their training.
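To make the two-stage idea concrete, here is a minimal, hypothetical sketch of fusing same-task, different-seed LoRA weight updates. It is not the actual SeedLoRA algorithm: the consistency test (majority-sign agreement across seeds), the threshold, and all function names are illustrative assumptions. Stage 1 keeps parameters whose update direction is consistent across seeds; stage 2 averages the agreeing values in the shared weight space.

```python
import numpy as np

def fuse_lora_deltas(deltas, consistency_threshold=0.6):
    """Illustrative two-stage fusion of same-task LoRA updates.

    deltas: list of weight-update matrices (e.g. B @ A from each seed).
    Stage 1: keep entries whose sign agrees across most seeds
             (the "strong, consistent knowledge").
    Stage 2: average the agreeing values in the shared space.
    NOTE: threshold and sign rule are hypothetical, not SeedLoRA's exact method.
    """
    stacked = np.stack(deltas)                 # (num_seeds, out_dim, in_dim)
    signs = np.sign(stacked)
    majority_sign = np.sign(signs.sum(axis=0))            # dominant direction
    agreement = (signs == majority_sign).mean(axis=0)     # fraction agreeing
    mask = agreement >= consistency_threshold             # stage 1: keep consistent
    aligned = np.where(signs == majority_sign, stacked, 0.0)
    counts = np.maximum((signs == majority_sign).sum(axis=0), 1)
    fused = aligned.sum(axis=0) / counts                  # stage 2: average
    return np.where(mask, fused, 0.0)

# Three "seeds" of a tiny 2x2 LoRA update
d1 = np.array([[0.5, -0.2], [0.1, 0.0]])
d2 = np.array([[0.7, -0.1], [-0.3, 0.0]])
d3 = np.array([[0.6, 0.3], [0.2, 0.0]])
fused = fuse_lora_deltas([d1, d2, d3])
```

In this toy example, the entry where all three seeds agree is a plain average, entries where two of three agree average only the agreeing seeds, and the rest are zeroed out, so complementary knowledge survives while noisy disagreements are suppressed.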
Link To Code: https://github.com/NUS-HPC-AI-Lab/SeedLoRA
Primary Area: Optimization
Keywords: Parameter Efficient Fine-Tuning, Model Merging, Large Language Model
Submission Number: 7167