Abstract: Class-incremental learning (CIL) aims to learn new classes while retaining knowledge of previous ones. Although pretrained model (PTM)-based approaches show strong performance, directly fine-tuning PTMs on incremental task streams often reintroduces catastrophic forgetting. This paper proposes a Dual-Prototype Network with Taskwise Adaptation (DPTA) for PTM-based CIL. For each incremental learning task, an adapter module is built to fine-tune the PTM, where a center-adapt loss encourages representations to cluster tightly around their class centers while remaining separable across classes.
separable. The dual prototype network improves the prediction process by enabling test-time adapter selection,
where the raw prototypes deduce several possible task indexes of test samples to select suitable adapter modules
for PTM, and the augmented prototypes that could separate confusable classes are utilized to determine the final
result. Experiments on multiple benchmarks show that DPTA consistently surpasses recent methods by 1–5 %.
Notably, on the VTAB dataset, it achieves an improvement of approximately 3% over state-of-the-art methods. The implementation is available at https://github.com/Yorkxzm/DPTA.
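The abstract does not spell out the center-adapt loss, but losses of this kind are typically a cross-entropy term (for class separability) combined with a penalty that pulls features toward per-class centers (for tight clustering). The following is a minimal PyTorch sketch under that assumption; the class name, the jointly learned centers, and the weighting scheme are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterAdaptLoss(nn.Module):
    """Hypothetical center-adapt loss: cross-entropy for class separability
    plus a pull toward learnable per-class centers for tighter clustering."""
    def __init__(self, num_classes: int, feat_dim: int, lam: float = 0.1):
        super().__init__()
        # Assumption: one learnable center per class, trained jointly with the adapter.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.lam = lam

    def forward(self, features: torch.Tensor, logits: torch.Tensor,
                labels: torch.Tensor) -> torch.Tensor:
        ce = F.cross_entropy(logits, labels)        # separability term
        centers = self.centers[labels]              # center of each sample's class
        pull = (features - centers).pow(2).sum(dim=1).mean()  # clustering term
        return ce + self.lam * pull
```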
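Likewise, the two-stage prediction described above could look roughly like the sketch below, assuming cosine similarity to prototypes and a top-k vote over candidate tasks; all helper names (raw_protos, aug_protos, adapters, and so on) are hypothetical stand-ins, not the released code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def dual_prototype_predict(x, backbone, adapters, raw_protos, raw_task_ids,
                           aug_protos, aug_labels, top_k=2):
    """Hypothetical sketch of DPTA-style test-time adapter selection.

    raw_protos:   (C, D) raw class prototypes from the frozen PTM
    raw_task_ids: (C,)   task index owning each raw prototype's class
    aug_protos:   dict task_id -> (C_t, D') augmented prototypes for that task
    aug_labels:   dict task_id -> (C_t,)    class labels of those prototypes
    """
    # Stage 1: raw prototypes propose candidate task indexes per sample.
    feat = F.normalize(backbone(x), dim=-1)                  # (B, D) frozen PTM features
    sims = feat @ F.normalize(raw_protos, dim=-1).T          # (B, C) cosine similarities
    best = sims.topk(top_k, dim=-1).indices                  # top-k closest classes
    cand_tasks = raw_task_ids[best]                          # (B, top_k) candidate tasks

    preds = []
    for b in range(x.size(0)):
        best_score, best_cls = -float("inf"), -1
        for t in cand_tasks[b].unique().tolist():
            # Stage 2: re-encode with the task-t adapted model and compare
            # against that task's augmented prototypes.
            f = F.normalize(adapters[t](x[b:b + 1]), dim=-1)  # (1, D')
            s = f @ F.normalize(aug_protos[t], dim=-1).T      # (1, C_t)
            score, idx = s.max(dim=-1)
            if score.item() > best_score:
                best_score = score.item()
                best_cls = aug_labels[t][idx].item()
        preds.append(best_cls)
    return torch.tensor(preds)
```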