Keywords: continual learning, large language model, PEFT
Abstract: Continual Learning (CL) for Large Language Models (LLMs) faces a fundamental $\textbf{Stability-Plasticity Dilemma}$: balancing the plasticity to acquire new capabilities against the stability to preserve prior knowledge. While Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA enable efficient adaptation, we identify a critical flaw in current approaches, termed $\textbf{Rank-Blindness}$: a single rank constraint is enforced across diverse tasks, entangling task-shared and task-specific knowledge and leading to catastrophic forgetting of earlier tasks and underfitting on complex new ones.
To address this, we propose SpaRTA, a novel rehearsal-free framework guided by a rank-spectrum perspective that explicitly disentangles knowledge into two orthogonal subspaces. Specifically, SpaRTA employs a low-rank branch to capture task-shared representations and a high-rank branch to model task-specific features. To integrate these complementary representations, we introduce a context-aware dynamic router that adaptively fuses the two branches based on input semantics, while an explicit orthogonality constraint minimizes interference between the shared and specific parameter subspaces.
This design effectively isolates task-specific updates from shared knowledge, preventing the overwriting of prior capabilities while preserving strong adaptation capacity.
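The dual-branch design described above can be sketched in a few lines of NumPy. This is only an illustrative toy, not the paper's implementation: the dimensions, the sigmoid linear router, and the Frobenius-norm cross-Gram penalty are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r_low, r_high = 16, 2, 8  # hypothetical dims; the paper does not fix these

# Two LoRA-style branches: low-rank (task-shared) and high-rank (task-specific).
A_low, B_low = rng.normal(size=(d, r_low)), rng.normal(size=(r_low, d))
A_high, B_high = rng.normal(size=(d, r_high)), rng.normal(size=(r_high, d))
w_router = rng.normal(size=d)  # hypothetical linear probe acting as the router

def forward(x, W0):
    # Context-aware gate in (0, 1) computed from the input; the gate blends
    # the shared (low-rank) and specific (high-rank) updates.
    g = 1.0 / (1.0 + np.exp(-x @ w_router))
    delta = (1.0 - g) * (x @ A_low @ B_low) + g * (x @ A_high @ B_high)
    return x @ W0 + delta

def orthogonality_penalty():
    # Encourage the two parameter subspaces to be orthogonal: penalize the
    # Frobenius norm of the cross-Gram matrix of the down-projections.
    return np.linalg.norm(A_low.T @ A_high, ord="fro") ** 2
```

In training, `orthogonality_penalty()` would be added to the task loss so that gradient updates to the high-rank branch stay out of the shared subspace.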
Extensive experiments demonstrate that SpaRTA achieves a superior stability-plasticity balance compared to single-rank baselines.
Notably, the proposed spectral disentanglement strategy substantially reduces inter-task interference and yields strong zero-shot generalization on unseen tasks.
Our code will be available at https://anonymous.4open.science/r/SpaRTA-CL.
Paper Type: Long
Research Area: Language Models
Research Area Keywords: Language Modeling, NLP Applications
Contribution Types: NLP engineering experiment, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 2197