Abstract: In dynamic real-world scenarios, continually learning new knowledge without forgetting old knowledge is essential, particularly in environments with strict privacy protection or on resource-constrained edge devices where storing old exemplars is infeasible. Non-Exemplar Class-Incremental Learning (NECIL) has therefore garnered significant attention. Compared with the standard class-incremental setting, NECIL faces a more severe plasticity-stability dilemma and stronger classifier bias. To address these challenges, we propose a framework based on the vision transformer architecture, called the Continual Expansion and Absorption Transformer (CEAT), which consists of two core components. First, we propose the Continual Expansion and Absorption (CEA) method, which alleviates the trade-off between new and old classes by expanding a set of parameters (i.e., EF layers) in parallel with the backbone to learn new tasks, while freezing the backbone to retain old-task knowledge. Before inference, the EF layers can be seamlessly absorbed into the ViT backbone through parameter recombination, avoiding extra storage and computational overhead. Second, we propose a Dynamic Boundary-Aware (DBA) method that generates dynamic pseudo-features for classifier calibration, addressing the classifier bias. Extensive experiments demonstrate that our approach achieves state-of-the-art performance, notably with improvements of 4.82% and 5.92% on TinyImageNet and ImageNet-Subset, respectively.
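To make the expand-and-absorb idea concrete, below is a minimal PyTorch sketch of a frozen base layer with a parallel trainable branch that is later folded back into the backbone weights via parameter recombination. The class name `ExpandableLinear`, the low-rank parameterization, and the `absorb` method are illustrative assumptions, not the paper's exact EF-layer design.

```python
import torch
import torch.nn as nn

class ExpandableLinear(nn.Module):
    """Sketch of an expand-and-absorb layer (hypothetical, not the exact EF layer).

    A frozen base weight W is augmented with a parallel trainable branch
    (A @ B) that learns the new task. Because both paths are linear in the
    input, the branch can be folded back into W before inference, leaving a
    single plain linear layer with no extra storage or compute cost.
    """

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the backbone to retain old-task knowledge
        # Parallel expansion branch; A starts at zero so the branch
        # initially contributes nothing to the frozen backbone's output.
        self.A = nn.Parameter(torch.zeros(base.out_features, rank))
        self.B = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + x (A B)^T : frozen base path plus expanded path.
        return self.base(x) + x @ (self.A @ self.B).t()

    @torch.no_grad()
    def absorb(self) -> None:
        # Parameter recombination: merge the branch into the backbone weight
        # so the expanded layer collapses back to a standard linear layer.
        self.base.weight += self.A @ self.B
        self.A.zero_()
        self.B.zero_()
```

In the same spirit, here is a hedged sketch of pseudo-feature generation for classifier calibration, assuming old classes are summarized only by stored prototypes (class-mean features) with a per-class radius. The function name and the Gaussian sampling scheme are assumptions standing in for the paper's DBA method, whose dynamic boundary estimation is not reproduced here.

```python
import torch

def pseudo_features(prototypes, radii, n_per_class=16):
    """Sample pseudo-features around stored class prototypes.

    `prototypes` is a list of 1-D feature tensors (one per old class) and
    `radii` a matching list of scales. Gaussian noise scaled by the radius
    yields stand-in features for old classes, which can be mixed with real
    new-class features when calibrating the classifier.
    """
    feats, labels = [], []
    for c, (mu, r) in enumerate(zip(prototypes, radii)):
        noise = torch.randn(n_per_class, mu.numel()) * r
        feats.append(mu.unsqueeze(0) + noise)
        labels.append(torch.full((n_per_class,), c, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)
```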