Keywords: Continual Learning, Neural Networks, Catastrophic Forgetting, Class Incremental Learning, Task Incremental Learning
TL;DR: The paper proposes a memory-free continual learning approach based on knowledge distillation.
Abstract: Continual Learning (CL) refers to a model's ability to sequentially acquire new knowledge across tasks while minimizing Catastrophic Forgetting (CF) of previously learned information. Many existing CL approaches face scalability challenges because they rely heavily on a memory or model buffer to maintain performance. To address this limitation, we propose "Less Forgetting Learning" (LFL), a memory-free CL framework for class-incremental and task-incremental classification.
LFL adopts a stepwise freezing and fine-tuning strategy: different components of the network are trained in separate stages, with selective freezing applied to preserve critical knowledge. The framework leverages knowledge distillation to balance stability and plasticity during learning. Building on this foundation, LFL+ incorporates an under-complete Auto-Encoder (AE) to preserve the most informative features and additionally corrects the bias toward new classes in the classification head. Extensive experiments on three benchmark datasets show that LFL achieves competitive performance while requiring only 2.53% of the model buffer used by state-of-the-art methods. We also propose a new metric designed to better assess the plasticity-stability trade-off in CL.
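For intuition, the sketch below illustrates the two ingredients the abstract describes: freezing part of the network while fine-tuning the rest, and a knowledge-distillation term that keeps outputs on old classes close to the previous model's predictions, plus a minimal under-complete auto-encoder of the kind LFL+ uses for feature preservation. This is a hedged illustration, not the authors' implementation; the `backbone` attribute, the temperature `T`, the weight `lam`, the head-expansion convention, and the AE dimensions are all assumptions.

```python
# Illustrative sketch only: a distillation-based, freeze-then-fine-tune update
# in the spirit of the abstract, NOT the paper's actual LFL/LFL+ code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def distillation_loss(new_logits, old_logits, T=2.0):
    """Soft-target KL term (standard Hinton-style distillation) that
    discourages drift from the previous model's predictions."""
    p_old = F.softmax(old_logits / T, dim=1)
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)


class UnderCompleteAE(nn.Module):
    """Bottlenecked auto-encoder (illustrative): reconstructs backbone
    features through a lower-dimensional code (code_dim < feat_dim)."""
    def __init__(self, feat_dim=512, code_dim=64):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, code_dim)
        self.decoder = nn.Linear(code_dim, feat_dim)

    def forward(self, feats):
        return self.decoder(torch.relu(self.encoder(feats)))


def train_new_task(model, old_model, num_old_classes, loader, lam=1.0, lr=1e-3):
    """One (assumed) training stage: the feature extractor is frozen and the
    expanded classification head is fine-tuned with cross-entropy on the new
    task plus a distillation term on the old-class logits."""
    old_model.eval()
    for p in old_model.parameters():
        p.requires_grad = False

    # Stepwise freezing (illustrative): freeze the shared backbone,
    # train only the remaining (unfrozen) parameters.
    for p in model.backbone.parameters():
        p.requires_grad = False

    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    for x, y in loader:
        logits = model(x)                       # logits over old + new classes
        with torch.no_grad():
            old_logits = old_model(x)           # previous model's predictions
        ce = F.cross_entropy(logits, y)         # plasticity: fit the new classes
        kd = distillation_loss(logits[:, :num_old_classes], old_logits)  # stability
        loss = ce + lam * kd
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

The key design point this sketch tries to convey is that stability comes from the frozen old model acting as a teacher rather than from stored exemplars, which is what makes the approach memory-free.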
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 17710