Abstract: In continual learning, new data often falls outside the distribution of previous data. Since the old model is trained solely on past tasks and has never encountered the new data, its learned representations lack the adaptability needed to accommodate these new inputs. This mismatch creates a conflict between the learned representations and the concepts of the new classes, resulting in a significant representation shift during model updates and exacerbating catastrophic forgetting. In this work, we present a two-stage training framework, dubbed ERA-EFCL, that dynamically enhances the adaptability of learned representations for exemplar-free continual learning (EFCL). The first stage synthesizes data for the old classes with DeepInversion and combines it with the new data to train an expanded module and a feature fusion network (FFN). The expanded module identifies critical features overlooked by the old model, while the FFN integrates the old representations with these newly discovered features. By incorporating additional information that distinguishes the old classes from the new ones, the FFN produces fused representations enriched with more transferable features for the old classes. This aligns the old representations with the new concepts, enhancing their adaptability and reducing the representation shift. Next, we introduce global task-wise knowledge distillation, applied alongside other losses, to ensure balanced knowledge transfer and thereby improve representation learning while training the new model. Finally, the new classifier is refined with a class-balanced training strategy. Extensive experiments demonstrate that ERA-EFCL achieves favorable results on four benchmark datasets. The code is available at https://github.com/CSTiger77/ExemplarFreeCL.
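To make the two components named in the abstract concrete, below is a minimal PyTorch sketch of (i) a feature fusion network that combines frozen old-model features with features from an expanded module, and (ii) a task-wise distillation loss averaged over per-task logit groups so each past task contributes equally. All names (`FeatureFusionNetwork`, `taskwise_distillation`), dimensions, and the exact loss form are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only; architecture details and loss weighting are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusionNetwork(nn.Module):
    """Fuses frozen old-model features with features from an expanded module."""
    def __init__(self, old_dim: int, new_dim: int, fused_dim: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(old_dim + new_dim, fused_dim),
            nn.ReLU(inplace=True),
            nn.Linear(fused_dim, fused_dim),
        )

    def forward(self, old_feat: torch.Tensor, new_feat: torch.Tensor) -> torch.Tensor:
        # Concatenate old and newly discovered features, then project to a fused space.
        return self.fuse(torch.cat([old_feat, new_feat], dim=1))

def taskwise_distillation(student_logits, teacher_logits, task_splits, T=2.0):
    """Average a KL-based distillation loss over per-task logit groups so that
    every past task contributes equally (a 'global task-wise' balance)."""
    losses = []
    for start, end in task_splits:  # (start, end) column range of each old task
        s = F.log_softmax(student_logits[:, start:end] / T, dim=1)
        t = F.softmax(teacher_logits[:, start:end] / T, dim=1)
        losses.append(F.kl_div(s, t, reduction="batchmean") * T * T)
    return torch.stack(losses).mean()

# Usage sketch: fuse representations and distill from the old model.
old_feat = torch.randn(8, 512)          # features from the frozen old model
new_feat = torch.randn(8, 256)          # features from the expanded module
ffn = FeatureFusionNetwork(512, 256, 512)
fused = ffn(old_feat, new_feat)

student_logits = torch.randn(8, 20, requires_grad=True)
teacher_logits = torch.randn(8, 20)
kd_loss = taskwise_distillation(student_logits, teacher_logits,
                                task_splits=[(0, 10), (10, 20)])
```

In this sketch, distilling each task's logit group separately (rather than over all old classes jointly) is one way to keep knowledge transfer balanced across tasks of different sizes; how ERA-EFCL weights these terms against the other losses is specified in the paper, not here.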