Stability Matters: Combating Parameter Shifts in Low-Rank Adaptation for Continual Learning

ICLR 2026 Conference Submission 8661 Authors

17 Sept 2025 (modified: 23 Dec 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: continual learning
Abstract: Continual Learning (CL) has increasingly embraced Parameter-Efficient Fine-Tuning (PEFT) methods, particularly Low-Rank Adaptation (LoRA), to balance task adaptability with parameter efficiency. Existing LoRA-based approaches use low-rank matrices to capture task-specific parameter shifts, while mitigating interference between tasks through architectural design (e.g., Mixture-of-Experts) or optimization constraints (e.g., orthogonality). However, they largely overlook how these shifts evolve across tasks, i.e., the internal dynamics of the parameter space, a crucial yet underexplored factor in model forgetting. In this work, our analysis reveals a key insight: abrupt performance drops often coincide with drastic changes in the distribution of learned parameter shifts. Motivated by this, we propose a simple yet effective Parameter Stability Loss that regularizes both the sign and magnitude of parameter updates to mitigate forgetting. Beyond training-time regularization, we also introduce a post-training model-merging step that bridges earlier update directions with the current one, further counteracting the inevitable drift toward new tasks. Our method, Parameter Stable LoRA (PS-LoRA), achieves state-of-the-art results on multiple continual learning benchmarks, with performance improvements of up to 3%, and can be integrated with existing approaches.
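The abstract does not give the exact formulation, but a loss that "regularizes both the sign and magnitude of parameter updates" plus a merging step that "bridges earlier directions with the current one" could be sketched as below. This is a minimal illustrative sketch, not the paper's actual method: the function names, the hinge-style sign penalty, the squared magnitude penalty, and the weights `lam_sign`, `lam_mag`, and `alpha` are all assumptions.

```python
import torch

def parameter_stability_loss(delta_prev: torch.Tensor,
                             delta_curr: torch.Tensor,
                             lam_sign: float = 1.0,
                             lam_mag: float = 1.0) -> torch.Tensor:
    """Hypothetical stability penalty on consecutive LoRA parameter shifts.

    delta_prev / delta_curr: parameter shifts (e.g., flattened LoRA updates
    A @ B) learned on the previous and current task. Not the paper's exact loss.
    """
    # Sign term: nonzero only where the current shift flips sign relative
    # to the previous one (their product is negative).
    sign_term = torch.relu(-delta_prev * delta_curr).mean()
    # Magnitude term: penalizes drift in the absolute size of each shift.
    mag_term = (delta_curr.abs() - delta_prev.abs()).pow(2).mean()
    return lam_sign * sign_term + lam_mag * mag_term

def merge_shifts(delta_prev: torch.Tensor,
                 delta_curr: torch.Tensor,
                 alpha: float = 0.5) -> torch.Tensor:
    """Hypothetical post-training merge: interpolate the earlier shift with
    the current one to counter drift toward the newest task."""
    return alpha * delta_prev + (1.0 - alpha) * delta_curr
```

Under this sketch, identical consecutive shifts incur zero loss, while sign flips or magnitude drift are penalized; the merge step then pulls the final shift back toward earlier task directions.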
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 8661