From Memory to Reasoning: Generative Models Enable Explainable Continual Learning

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Class Incremental Learning, Interpretable Classification, Multi-modal, Generative Methods
Abstract: Class Incremental Learning (CIL) aims to enable models to acquire new knowledge over time without forgetting previous tasks. However, existing CIL approaches rely primarily on discriminative modeling, which leaves them prone to catastrophic forgetting caused by parameter expansion and opaque in their prediction process, limiting their reliability in real-world settings. In contrast, human continual learning is inherently generative and interpretable: humans integrate new concepts by focusing on salient visual details and linking them to prior semantic structures, forming traceable chains of reasoning. We therefore propose Generative Explainable Class-Incremental Learning (GECL), a generative and interpretable CIL framework designed to address these challenges. GECL employs soft-label-guided visual augmentation to focus attention on the most discriminative image regions and uses large language models (LLMs) to construct structured semantic attributes. This design eliminates the need to expand classification heads, preventing parameter overwriting and preserving previously acquired knowledge. The semantic attributes are aligned with visual features at a fine-grained level through entropy-regularized optimal distribution matching, in which a cost matrix explicitly quantifies each attribute-region contribution, yielding transparent attribute-region reasoning chains. Experiments on natural-scene and fine-grained datasets demonstrate that GECL balances high accuracy, low forgetting rates, and interpretability, marking a promising step toward safe and reliable continual learning.
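The entropy-regularized optimal distribution matching mentioned in the abstract is in the family of Sinkhorn-style optimal transport. Below is a minimal illustrative sketch of that technique, not the authors' implementation: all names, shapes, and hyperparameters (`eps`, uniform marginals, cosine cost) are assumptions for the example.

```python
import numpy as np

def sinkhorn_alignment(attr_feats, region_feats, eps=0.1, n_iters=200):
    """Entropy-regularized OT between attribute embeddings [A, d] and
    image-region features [R, d] (hypothetical shapes). Returns a
    transport plan P whose entry P[i, j] quantifies how much attribute i
    contributes to region j -- the basis of an attribute-region chain."""
    # Cost matrix: 1 - cosine similarity between attributes and regions.
    a = attr_feats / np.linalg.norm(attr_feats, axis=1, keepdims=True)
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    C = 1.0 - a @ r.T                    # [A, R]
    K = np.exp(-C / eps)                 # Gibbs kernel from the cost
    # Uniform marginals over attributes and regions (an assumption).
    mu = np.full(C.shape[0], 1.0 / C.shape[0])
    nu = np.full(C.shape[1], 1.0 / C.shape[1])
    u = np.ones_like(mu)
    for _ in range(n_iters):             # Sinkhorn scaling iterations
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]      # transport plan, shape [A, R]
    return P
```

Reading off the largest entries in each row of `P` gives, for every semantic attribute, the image regions it is matched to, which is one way a transparent reasoning chain can be extracted from such an alignment.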
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 9455