Towards Adversarial-Robust Class-Incremental Learning via Progressively Volume-Up Perturbation Generation

Published: 01 Jan 2024 · Last Modified: 08 Apr 2025 · PRCV (2) 2024 · CC BY-SA 4.0
Abstract: Class-incremental learning (CIL) has been widely applied in the real world due to its flexibility and scalability, and recent advances in CIL have achieved outstanding performance. However, deep neural networks, including CIL models, struggle to resist adversarial attacks. At present, the majority of CIL research focuses on alleviating catastrophic forgetting and lacks a comprehensive exploration of adversarial robustness. To this end, we introduce a novel CIL framework called the Perturbation Volume-up Framework (PVF). This framework divides each epoch into multiple iterations, in each of which three main tasks are performed sequentially: intensifying adversarial data, extracting new knowledge, and reinforcing old knowledge. To intensify adversarial data, we propose the Fused Robustness Augmentation (FRA) approach, which incorporates more generalized knowledge into the adversarial data by randomly blending data and leveraging a finely tuned Jensen-Shannon (JS) divergence. For the remaining two tasks, we introduce a set of regularization techniques called Knowledge Inspiration Regularization (KIR), which employs novel classification and distillation losses to enhance the model’s generalization performance while preserving previously learned knowledge. Extensive experiments demonstrate the effectiveness of our method in enhancing the adversarial robustness of CIL models.
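The abstract mentions two ingredients of FRA: randomly blending data and a Jensen-Shannon divergence term. The paper's exact formulation is not given here, so the following is only a minimal sketch of those two generic building blocks: a mixup-style convex blend of inputs (a hypothetical stand-in for FRA's "random blending") and a standard JS divergence between two predicted class distributions, as might be used in a consistency loss.

```python
import numpy as np

def mix(x1, x2, lam):
    # Mixup-style convex blend of two inputs; `lam` in [0, 1].
    # This is an assumed form of "randomly blending data", not the paper's exact rule.
    return lam * x1 + (1.0 - lam) * x2

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two discrete probability distributions.
    # Symmetric, non-negative, and bounded above by ln 2 (natural log).
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))  # KL(a || b)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

In an adversarial-training loop, a term like `js_divergence(model(x_clean), model(x_adv))` would penalize disagreement between predictions on clean and perturbed inputs; how PVF weights or tunes this term is specified in the paper itself.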