Keywords: Continual Learning; Large Language Models; Pseudo Sample Rehearsal; Similarity Regularization
Abstract: Continual learning for large language models (LLMs) demands a precise balance between $\textbf{plasticity}$ - the ability to absorb new tasks - and $\textbf{stability}$ - the preservation of previously learned knowledge. Conventional rehearsal methods, which replay stored examples, are limited by long-term data inaccessibility; earlier pseudo-rehearsal methods require additional generation modules, while self-synthesis approaches often produce samples that poorly align with real tasks, suffer from unstable outputs, and ignore relationships between tasks. We present $\textbf{\textit{Self-Evolving Pseudo-Rehearsal for Catastrophic Forgetting with Task Similarity}}$ ($\textbf{SERS}$), a lightweight framework that 1) decouples pseudo-input synthesis from label creation, using semantic masking and template guidance to produce diverse, task-relevant prompts without extra modules; 2) applies label self-evolution, blending base-model priors with fine-tuned outputs to prevent over-specialization; and 3) introduces a dynamic regularizer driven by the Wasserstein distance between task distributions, automatically relaxing or strengthening constraints according to task similarity. Experiments across diverse tasks and LLM backbones show that SERS reduces forgetting by more than 2 percentage points over strong pseudo-rehearsal baselines, through more efficient use of pseudo data and similarity-aware knowledge transfer.
The code will be released at https://github.com/JerryWangJun/LLM_CL_SERS/.
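To make components 2) and 3) of the abstract more concrete, the following is a minimal Python sketch, not the authors' implementation: it assumes pooled feature samples as the task representation, a 1-D Wasserstein distance as the similarity proxy, and a simple exponential mapping from distance to regularization strength. The function names (`task_similarity_weight`, `blended_labels`, `rehearsal_loss`) and the mixing coefficient `alpha` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F
from scipy.stats import wasserstein_distance


def task_similarity_weight(feats_old: torch.Tensor,
                           feats_new: torch.Tensor,
                           tau: float = 1.0) -> float:
    """Map the distance between old- and new-task feature samples to a
    regularization strength in [0, 1). Illustrative direction: a larger
    distance (less similar tasks) yields a stronger constraint."""
    d = wasserstein_distance(feats_old.flatten().numpy(),
                             feats_new.flatten().numpy())
    return 1.0 - float(torch.exp(torch.tensor(-d / tau)))


def blended_labels(logits_base: torch.Tensor,
                   logits_finetuned: torch.Tensor,
                   alpha: float = 0.5) -> torch.Tensor:
    """Label self-evolution sketch: mix base-model priors with
    fine-tuned outputs into soft pseudo-labels (alpha is a
    hypothetical mixing knob)."""
    return (alpha * F.softmax(logits_base, dim=-1)
            + (1.0 - alpha) * F.softmax(logits_finetuned, dim=-1))


def rehearsal_loss(student_logits: torch.Tensor,
                   pseudo_labels: torch.Tensor,
                   lam: float) -> torch.Tensor:
    """KL-style regularizer on pseudo samples, scaled by the
    similarity-driven weight lam."""
    return lam * F.kl_div(F.log_softmax(student_logits, dim=-1),
                          pseudo_labels, reduction="batchmean")


if __name__ == "__main__":
    # Toy usage: random features/logits stand in for real task data.
    lam = task_similarity_weight(torch.randn(64), torch.randn(64) + 2.0)
    labels = blended_labels(torch.randn(4, 10), torch.randn(4, 10))
    print(rehearsal_loss(torch.randn(4, 10), labels, lam))
```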
Supplementary Material: zip
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 9296