Autonomous Generative Feature Replay for Non-Exemplar Class-Incremental Learning

Published: 2024 · Last Modified: 12 Jun 2025 · ICASSP 2024 · CC BY-SA 4.0
Abstract: Deep neural networks have been successfully applied to many computer vision tasks. However, these models suffer catastrophic forgetting when learning new knowledge incrementally. To address the stability-plasticity dilemma, class-incremental learning (CIL) has been widely discussed recently. State-of-the-art CIL methods mainly rely on additional exemplar sets, which are memory-costly and may raise privacy issues. To that end, we propose an autonomous generative feature replay (AGFR) framework that requires no exemplar sets. It consists of three modules: a feature extractor module, a feature generator module, and a unified classification module. First, to stabilize features across tasks, robust feature extractors are learned in a self-supervised manner and thus generalize well to unseen data. Second, instead of storing exemplar sets or generating raw images, we propose an autonomous generative feature replay scheme that continually updates the unified classifier without saving any image data. This strategy avoids both overwhelming memory usage and the poor quality of generated raw images. Experiments demonstrate that our method achieves state-of-the-art performance in terms of average classification accuracy.
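The core idea of the abstract, replaying generated features for old classes instead of stored exemplar images, can be sketched in a few lines. The snippet below is a minimal illustration under assumptions, not the paper's actual AGFR implementation: a per-class Gaussian (mean and standard deviation of features) stands in for the learned feature generator, and a nearest-class-mean rule stands in for the unified classifier. The names `GaussianFeatureReplay` and `nearest_mean_classifier` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianFeatureReplay:
    """Hypothetical stand-in for a feature generator: stores per-class
    feature statistics instead of raw images, then samples pseudo-features."""
    def __init__(self):
        self.stats = {}  # class id -> (mean, std) of its feature vectors

    def remember(self, label, feats):
        # Summarize the class's features; the real data can then be discarded.
        self.stats[label] = (feats.mean(axis=0), feats.std(axis=0) + 1e-6)

    def sample(self, label, n):
        # Replay n pseudo-feature vectors for an old class.
        mean, std = self.stats[label]
        return rng.normal(mean, std, size=(n, mean.size))

def nearest_mean_classifier(feats_by_class):
    # Simplified "unified classifier": predict the class with the closest mean.
    means = {c: f.mean(axis=0) for c, f in feats_by_class.items()}
    def predict(x):
        return min(means, key=lambda c: np.linalg.norm(x - means[c]))
    return predict

# Task 1: features of class 0 are seen; only generator statistics are kept.
old_feats = rng.normal(0.0, 0.5, size=(100, 16))
replay = GaussianFeatureReplay()
replay.remember(0, old_feats)

# Task 2: class 1 arrives; mix its real features with replayed
# pseudo-features for class 0, then refit the classifier with no stored images.
new_feats = rng.normal(3.0, 0.5, size=(100, 16))
train_set = {0: replay.sample(0, 100), 1: new_feats}
predict = nearest_mean_classifier(train_set)
```

The point of the sketch is the data flow: once class statistics are recorded, the classifier can be retrained on every increment from generated features alone, which is what makes the approach non-exemplar.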