From Forgetting to Robustness: Robust Class-Incremental Learning with CLIP

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Robust Class-Incremental Learning; Class-Incremental Learning; Adversarial Robustness
Abstract: Class-Incremental Learning (CIL) aims to enable a model to continuously recognize new categories without forgetting previously learned ones. While most existing methods focus on alleviating catastrophic forgetting, they largely overlook the vulnerability of CIL models to adversarial perturbations, which poses a critical threat to their reliability in real-world applications. Motivated by this oversight, we formalize a new problem setting, Robust Class-Incremental Learning (RCIL). To address the conflict between adversarial robustness and class-incremental learning, we propose Selective parameter optimization for Adversarial training with GEometric constraint (SAGE), which selectively updates critical parameters to protect knowledge learned from previous tasks. Beyond parameter efficiency, SAGE introduces a theoretically grounded geometric constraint together with a contrastive loss to preserve structural relationships among features. This design enables stable and robust learning across tasks under adversarial attacks. Extensive experiments demonstrate that SAGE effectively improves adversarial robustness while mitigating catastrophic forgetting, leading to more reliable and practical CIL models. The code is provided in the supplementary material.
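The abstract's central mechanism, adversarial training that updates only a selected subset of parameters while a geometric term discourages feature drift, can be illustrated with a toy sketch. Everything below is an assumption for illustration, not the authors' implementation: a linear model stands in for CLIP, FGSM stands in for the attack, the hand-picked mask stands in for the paper's criterion for "critical" parameters, and the helper names (`fgsm_perturb`, `geo_penalty`) are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss_and_grads(W, X, y):
    """Cross-entropy of a linear model logits = X @ W.
    Returns loss, gradient w.r.t. W, gradient w.r.t. X."""
    P = softmax(X @ W)
    n = len(y)
    ce = -np.log(P[np.arange(n), y] + 1e-12).mean()
    dZ = P.copy()
    dZ[np.arange(n), y] -= 1.0
    dZ /= n
    return ce, X.T @ dZ, dZ @ W.T

def fgsm_perturb(W, X, y, eps=0.05):
    # Single-step sign attack as a stand-in for the paper's adversary.
    _, _, gX = loss_and_grads(W, X, y)
    return X + eps * np.sign(gX)

def geo_penalty(F_old, F_new):
    # Toy "geometric constraint": how much pairwise distances between
    # samples change between two feature sets (lower = geometry preserved).
    d_old = np.linalg.norm(F_old[:, None] - F_old[None], axis=-1)
    d_new = np.linalg.norm(F_new[:, None] - F_new[None], axis=-1)
    return ((d_old - d_new) ** 2).mean()

# Synthetic 2-class task: label is the sign of the first feature.
X = rng.normal(size=(64, 5))
y = (X[:, 0] > 0).astype(int)
W = rng.normal(scale=0.1, size=(5, 2))
W0 = W.copy()

# Selective update: only the first two rows of W are "critical" and
# trainable; the rest are frozen to protect previously learned knowledge.
mask = np.zeros_like(W)
mask[:2] = 1.0

for _ in range(200):
    X_adv = fgsm_perturb(W, X, y)
    _, gW, _ = loss_and_grads(W, np.vstack([X, X_adv]),
                              np.concatenate([y, y]))
    W -= 0.5 * (mask * gW)  # frozen entries never move

acc = ((X @ W).argmax(axis=1) == y).mean()
geo = geo_penalty(X, fgsm_perturb(W, X, y))  # diagnostic only here
```

In this sketch the geometric term is only evaluated as a diagnostic; a faithful implementation would backpropagate through it (together with the contrastive loss the abstract mentions) alongside the adversarial cross-entropy.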
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 9016