Effective Decision Boundary Learning for Class Incremental Learning

16 May 2022 (modified: 05 May 2023) · NeurIPS 2022 Submission
Keywords: Class Incremental Learning, Catastrophic Forgetting, Long Tail, Mixup, Knowledge Distillation, Influence Balance
Abstract: Rehearsal approaches in class incremental learning (CIL) suffer from decision boundaries that overfit to new classes, which is caused by two factors: insufficient data from old classes for knowledge distillation (KD), and the imbalance between old- and new-class data that results from limited storage memory. In this work, we present a simple but effective approach that addresses both factors to optimize the decision boundary. First, we employ mixup knowledge distillation (MKD) and a re-sampling strategy to improve KD, which greatly alleviates the overfitting problem: mixup and re-sampling synthesize adequate data that are more consistent with the latent distribution spanning the learned and new classes. Second, inspired by the influence-balanced (IB) loss used for long-tailed data, we propose a novel incremental influence-balanced (IIB) method for CIL that addresses classification on imbalanced data by re-weighting samples according to their influence, creating a proper decision boundary. With these two improvements, we present the effective decision boundary learning (EDBL) algorithm, which improves KD and handles imbalanced-data classification simultaneously. Experiments show that the proposed EDBL achieves state-of-the-art performance on several CIL benchmarks.
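The abstract only names its two ingredients, so the sketches below are minimal PyTorch illustrations of the ideas as stated, not the authors' implementation. The first shows a mixup-based knowledge distillation term, assuming a frozen teacher (the model from the previous incremental step), Beta-distributed mixing as in standard mixup, and temperature-scaled KL distillation; `mkd_loss`, `alpha`, and `T` are illustrative names.

```python
import torch
import torch.nn.functional as F

def mkd_loss(student, teacher, x_old, x_new, alpha=0.2, T=2.0):
    """Distill the frozen teacher on mixup-synthesized inputs.

    x_old: re-sampled exemplars of previously learned classes
    x_new: a batch of new-class data
    """
    # Mixup synthesizes samples lying between the old- and
    # new-class distributions.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    n = min(x_old.size(0), x_new.size(0))
    x_mix = lam * x_old[:n] + (1.0 - lam) * x_new[:n]
    with torch.no_grad():
        t_logits = teacher(x_mix)
    s_logits = student(x_mix)
    # Temperature-scaled KD on the mixed samples.
    return F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
```

The second sketch shows the influence-balanced re-weighting that the proposed IIB loss builds on: each sample's cross-entropy is scaled by the inverse of an influence proxy (the L1 norm of the prediction error times the L1 norm of the penultimate feature, following Park et al.'s IB loss). How the authors adapt this to the incremental setting is not specified in the abstract, so only the base re-weighting is shown; `ib_weighted_ce` and `eps` are illustrative.

```python
def ib_weighted_ce(logits, features, targets, eps=1e-3):
    """Per-sample CE down-weighted by an influence proxy, so the
    samples that push the decision boundary hardest contribute less."""
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(targets, num_classes=logits.size(1)).float()
    # Influence proxy: prediction-error magnitude times feature magnitude.
    influence = (probs - onehot).abs().sum(dim=1) * features.abs().sum(dim=1)
    ce = F.cross_entropy(logits, targets, reduction="none")
    return (ce / (influence + eps)).mean()
```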