Keywords: Continual Learning, Incremental Learning, Lifelong Learning, Catastrophic Forgetting
Abstract: Continual learning (CL) enables models to learn sequentially from a stream of tasks while retaining previously acquired knowledge, mitigating catastrophic forgetting (CF). Modular approaches to CL enhance flexibility by decomposing tasks into reusable modules. However, existing approaches suffer from a trade-off between accuracy and scalability, often resorting to parameter expansion to preserve accuracy at the cost of efficiency and long-term applicability. In addition, methods that evaluate task similarity via task accuracy incur high computational cost. To overcome these limitations, we propose the Feature-level Modular Network for Continual Learning (FMNIL), which measures task similarity at the feature level. FMNIL dynamically constructs task-specific subnetworks, reusing modules when tasks exhibit high feature similarity and expanding the network only when necessary. This approach yields significantly less parameter growth while maintaining higher accuracy than existing methods. Experiments on four benchmarks (CIFAR100-SC/RS/B0, ImageNet1000) show that FMNIL achieves up to 5.9% higher accuracy (2.8% on average) while reducing parameter growth by 35% compared to state-of-the-art methods. FMNIL thus provides an accuracy-preserving and parameter-efficient solution that breaks the accuracy–scalability trade-off in long-term continual learning.
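The abstract's reuse-or-expand decision based on feature-level task similarity can be illustrated with a minimal sketch. This is not the paper's actual FMNIL algorithm; the function names, the cosine-similarity choice, the mean-feature prototype, and the threshold value are all illustrative assumptions:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def route_task(task_features, module_prototypes, threshold=0.8):
    """Decide whether a new task reuses an existing module or expands the network.

    task_features: (n_samples, d) array of feature vectors for the new task.
    module_prototypes: list of (d,) mean-feature prototypes, one per existing module
                       (mutated in place when a new module is added).
    Returns ("reuse", module_index) or ("expand", new_module_index).
    Hypothetical routing rule, not the paper's method.
    """
    proto = task_features.mean(axis=0)  # summarize the task by its mean feature
    if module_prototypes:
        sims = [cosine_sim(proto, p) for p in module_prototypes]
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            # High feature similarity: reuse the closest existing module.
            return ("reuse", best)
    # No sufficiently similar module: expand with a new one.
    module_prototypes.append(proto)
    return ("expand", len(module_prototypes) - 1)
```

A task whose features cluster near an existing module's prototype is routed to that module; otherwise a new module (and its prototype) is added, so parameters grow only when reuse would hurt accuracy.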
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 9982