Keywords: Continual learning, Class-incremental learning, Task-Imbalance
Abstract: Continual learning aims to acquire new knowledge without forgetting previously learned tasks.
However, most existing studies assume balanced tasks, which is rarely the case in practice, as real-world scenarios often exhibit severe task imbalance with long-tailed distributions. This task-imbalanced continual learning (TICL) setting entangles two fundamental challenges: the well-known \textit{stability–plasticity dilemma}, and the newly emerging \textit{head–tail learning dilemma}, where head classes dominate training while tail classes remain under-optimized. To address this compounded difficulty, we propose Decoupled Fast–Slow Adaptation (DFSA), which introduces two key components. First, a fast–slow dual adapter augments the image encoder with a fast-adapting branch for rapid task acquisition and a slow-consolidating branch for stable knowledge retention. A task-modulated weighting mechanism dynamically integrates these branches, effectively fusing “fast” and “slow” thinking to balance short-term plasticity with long-term stability, while also providing complementary perspectives that improve learning for underrepresented classes. Complementarily, DFSA employs a decoupled training strategy that first fine-tunes the text encoder as a semantic-aware classifier before updating image features, providing stable guidance that mitigates the negative impact of long-tailed distributions. Extensive experiments on TICL benchmarks show that our method significantly improves both few-sample task generalization and overall retention, outperforming existing continual learning baselines. The source code is temporarily available at https://anonymous.4open.science/r/DFSA-3aD6E.
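For illustration, below is a minimal, hypothetical PyTorch sketch of the fast–slow dual-adapter idea described in the abstract: two residual branches over (frozen) image-encoder features, fused by a task-modulated gate, then scored against a text-encoder classifier. This is not the released implementation (see the anonymous repository above); the module name, bottleneck size, and per-task scalar gate are illustrative assumptions.

```python
# Sketch only, assuming a CLIP-style frozen image encoder and a per-task gate;
# names and hyperparameters are hypothetical, not taken from the DFSA code.
import torch
import torch.nn as nn


class FastSlowDualAdapter(nn.Module):
    """Two residual adapter branches over encoder features: a fast-adapting
    branch for quick task acquisition and a slow-consolidating branch for
    stable retention, mixed by a task-conditioned gate."""

    def __init__(self, feat_dim: int, bottleneck: int = 64, num_tasks: int = 10):
        super().__init__()
        self.fast = nn.Sequential(
            nn.Linear(feat_dim, bottleneck), nn.ReLU(), nn.Linear(bottleneck, feat_dim)
        )
        self.slow = nn.Sequential(
            nn.Linear(feat_dim, bottleneck), nn.ReLU(), nn.Linear(bottleneck, feat_dim)
        )
        # One gate logit per task: sigmoid(gate) weights the fast branch.
        self.task_gate = nn.Embedding(num_tasks, 1)

    def forward(self, feats: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        alpha = torch.sigmoid(self.task_gate(task_id))          # (B, 1) in [0, 1]
        fused = alpha * self.fast(feats) + (1 - alpha) * self.slow(feats)
        return feats + fused                                     # residual connection


# Usage: adapt frozen image features, then score against text-classifier
# embeddings (assumed to have been fine-tuned in an earlier, decoupled stage).
if __name__ == "__main__":
    B, D, C = 4, 512, 100
    adapter = FastSlowDualAdapter(feat_dim=D, num_tasks=10)
    image_feats = torch.randn(B, D)             # stand-in for encoder output
    text_classifier = torch.randn(C, D)         # stand-in for text embeddings
    task_id = torch.full((B,), 3, dtype=torch.long)
    out = adapter(image_feats, task_id)
    logits = nn.functional.normalize(out, dim=-1) @ nn.functional.normalize(
        text_classifier, dim=-1
    ).T
    print(logits.shape)                         # torch.Size([4, 100])
```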
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 3269