On the Discrimination and Consistency for Exemplar-Free Class Incremental Learning
Abstract: Exemplar-free class incremental learning (EF-CIL) is a nontrivial task that requires continuously enriching model capability with new classes while maintaining previously learned knowledge, without storing and replaying any old-class exemplars. An emerging theory-guided framework for CIL trains task-specific models within a shared network, shifting the pressure of forgetting to task-id prediction. In EF-CIL, task-id prediction is more challenging due to the lack of inter-task interaction (e.g., replay of exemplars). To address this issue, we conduct a theoretical analysis of the importance and feasibility of preserving a discriminative and consistent feature space, upon which we propose a novel method termed DCNet. Concretely, it progressively maps class representations into a hyperspherical space, in which different classes are orthogonally distributed to achieve ample inter-class separation. Meanwhile, it introduces compensatory training to adaptively adjust supervision intensity, thereby aligning the degree of intra-class aggregation. Extensive experiments and theoretical analysis verify the superiority of DCNet.
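
To make the hyperspherical design concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released DCNet code): class anchors are built as mutually orthogonal unit vectors, features are normalized onto the unit hypersphere, and a cosine-alignment loss pulls each sample toward its class anchor. The function names (`make_orthogonal_anchors`, `hyperspherical_loss`) and the exact loss form are illustrative assumptions; the compensatory adjustment of supervision intensity described in the abstract is omitted.

```python
# Hypothetical sketch of the hyperspherical idea described above, not the
# authors' DCNet implementation: features live on the unit hypersphere and
# are pulled toward fixed, pairwise-orthogonal class anchors.
import torch
import torch.nn.functional as F

def make_orthogonal_anchors(num_classes: int, dim: int) -> torch.Tensor:
    """Build one unit-norm anchor per class; anchors are pairwise orthogonal
    (requires dim >= num_classes). Uses QR decomposition of a random matrix."""
    assert dim >= num_classes, "need feature dim >= number of classes"
    q, _ = torch.linalg.qr(torch.randn(dim, num_classes))
    return q.T  # shape (num_classes, dim); rows are orthonormal

def hyperspherical_loss(features: torch.Tensor, labels: torch.Tensor,
                        anchors: torch.Tensor) -> torch.Tensor:
    """Cosine-alignment loss: normalize features onto the hypersphere and
    maximize similarity with the anchor of each sample's ground-truth class."""
    z = F.normalize(features, dim=-1)   # map features onto the unit sphere
    target = anchors[labels]            # (batch, dim) anchors of true classes
    return (1.0 - (z * target).sum(dim=-1)).mean()

# Usage: 10 classes embedded in a 64-dimensional feature space.
anchors = make_orthogonal_anchors(num_classes=10, dim=64)
feats = torch.randn(32, 64, requires_grad=True)  # stand-in for backbone output
labels = torch.randint(0, 10, (32,))
loss = hyperspherical_loss(feats, labels, anchors)
loss.backward()
```

Because the anchors are orthonormal, distinct classes sit at 90 degrees from one another on the sphere, which is one way to realize the ample inter-class separation the abstract refers to.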