Multi-strategy continual learning for knowledge refinement and consolidation

Published: 01 Jan 2024 · Last Modified: 03 Nov 2024 · Appl. Intell. 2024 · License: CC BY-SA 4.0
Abstract: Saving part of the old-class data for replay is one of the most effective approaches to alleviating catastrophic forgetting in deep learning models when classes are updated incrementally, but it suffers from problems such as model overfitting and a serious imbalance between old- and new-class data. In this paper, we propose a multi-strategy continual learning model that contains three strategies: a significant sample retention strategy, a significant feature distillation strategy, and an old-task attention strategy. The significant sample retention strategy and the significant feature distillation strategy help the new model refine and consolidate old-task knowledge by acquiring significant samples and significant features through an uncertainty metric and an attention mechanism, respectively. The old-task attention strategy then captures inter-class semantic consistency across tasks to correct the model's imbalanced gradient propagation, alleviating the forgetting of old tasks caused by the imbalance between the retained significant samples and the new-task samples. The three strategies synergistically alleviate the catastrophic forgetting problem of replay-based continual learning from multiple perspectives, including sample storage, the stability-plasticity balance, and task classification bias. Our model outperforms state-of-the-art methods by 0.9% to 6.9% in average accuracy on representative benchmark datasets.
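
The abstract describes uncertainty-driven retention of old-task samples and feature-level distillation on those samples. As a rough illustration only (the paper's exact metrics, attention weighting, and architecture are not specified here), the following PyTorch-style sketch shows one plausible form of entropy-based exemplar selection and a plain feature-distillation penalty; all function names and choices below are assumptions made for demonstration.

# Illustrative sketch, not the paper's implementation: entropy-based exemplar
# selection for a replay buffer, plus a simple feature-distillation loss between
# a frozen old model and the current model.
import torch
import torch.nn.functional as F

def select_exemplars_by_uncertainty(model, loader, budget, device="cpu"):
    # Keep the `budget` samples with the highest predictive entropy, treating
    # high-uncertainty samples as the "significant" ones to retain (assumption).
    model.eval()
    scores, samples, labels = [], [], []
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device))
            p = F.softmax(logits, dim=1)
            entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1)  # per-sample uncertainty
            scores.append(entropy.cpu())
            samples.append(x)
            labels.append(y)
    scores = torch.cat(scores)
    samples = torch.cat(samples)
    labels = torch.cat(labels)
    top = scores.topk(min(budget, scores.numel())).indices
    return samples[top], labels[top]

def feature_distillation_loss(old_feats, new_feats):
    # Penalize drift of the new model's features from the frozen old model's
    # features on replayed samples; a stand-in for the attention-weighted
    # "significant feature" distillation described in the abstract.
    return F.mse_loss(new_feats, old_feats.detach())

In a replay-based training loop, the distillation term would be added to the classification loss on the mixed batch of retained and new-task samples; the abstract's third strategy additionally reweights gradients to counter the old/new class imbalance, which is not reproduced in this sketch.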