WHICH RESTRAINS FEW-SHOT CLASS-INCREMENTAL LEARNING, FORGETTING OR FEW-SHOT LEARNING?

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Few-shot class incremental learning, catastrophic forgetting
Abstract: Few-shot class-incremental learning (FSCIL) is a common yet challenging task in machine learning. FSCIL poses two main challenges: catastrophic forgetting of old classes during incremental sessions, and insufficient learning of new classes from only a few samples. Recent work focuses mainly on avoiding catastrophic forgetting by calibrating per-class prototypes, while largely overlooking the limited samples available for new classes. In this paper, we aim to improve FSCIL by supplementing knowledge of new classes with knowledge transferred from old ones. To this end, we propose an old-class-guided FSCIL method with two stages: a base session and incremental sessions. In the base session, we propose a prototype-centered loss that learns a compact feature distribution for the old classes. During the incremental sessions, we first augment each new class with additional samples drawn by Gaussian sampling, whose mean and covariance are calibrated using statistics of the old classes; we then update the model on these augmented samples with both prototype-based and replay-based learning. In addition, by analyzing per-session accuracy on old and new classes separately, we find that most existing works report a deceptively high overall accuracy biased toward old classes, since the test set of each session consists largely of old-class samples. Extensive experiments on three popular FSCIL datasets, mini-ImageNet, CIFAR100, and CUB200, demonstrate the superiority of our method over state-of-the-art approaches on both old and new classes.
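The abstract describes two concrete components: a prototype-centered loss in the base session and calibrated Gaussian sampling in the incremental sessions. Below is a minimal sketch of how such a pipeline could look, assuming features come from a frozen backbone and that per-class feature means and covariances are stored after the base session. The function names, the k-nearest-class calibration heuristic, and the eps regularizer are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def prototype_centered_loss(features, labels, prototypes):
    """Base-session regularizer (illustrative): pull each embedding toward
    its class prototype so old-class features form compact clusters."""
    # features: (B, d), labels: (B,), prototypes: (n_classes, d)
    return ((features - prototypes[labels]) ** 2).sum(dim=1).mean()

def calibrate_and_sample(new_feats, base_means, base_covs,
                         k=2, n_samples=100, eps=1e-3):
    """Incremental-session augmentation (illustrative): calibrate the
    Gaussian statistics of a new class using its k nearest base classes,
    then draw augmented feature samples for replay-style training."""
    # new_feats: (n_shot, d) few labelled new-class features
    # base_means: (n_base, d), base_covs: (n_base, d, d) from the base session
    mu_new = new_feats.mean(dim=0)
    # pick the k base classes whose means are closest to the new-class mean
    nearest = torch.cdist(mu_new[None], base_means)[0].topk(k, largest=False).indices
    # calibrated mean: blend the few-shot mean with the selected base means
    mu = (mu_new + base_means[nearest].sum(dim=0)) / (k + 1)
    # calibrated covariance: average of base covariances, kept positive definite
    cov = base_covs[nearest].mean(dim=0) + eps * torch.eye(mu.numel())
    mvn = torch.distributions.MultivariateNormal(mu, covariance_matrix=cov)
    return mvn.sample((n_samples,))  # (n_samples, d) augmented features
```

The augmented features returned by calibrate_and_sample would then be mixed with replayed old-class statistics when updating the classifier, matching the abstract's combination of prototype-based and replay-based learning.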
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7544