Overcoming bias towards base sessions in few-shot class-incremental learning (FSCIL)

20 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: pdf
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Continual learning, Few-shot learning, Few-shot class-incremental learning, Pretrained representations, Self-supervised learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Few-shot class-incremental learning (FSCIL) is a realistic and challenging problem setting in which a model must learn a sequence of novel object classes from a restricted number of training examples. In this process, it is crucial to strike a balance between retaining previously learned classes and avoiding overfitting to novel ones. However, conventional methods exhibit a significant performance bias towards the base session: their incremental-session performance is excessively low compared to their base-session performance. To tackle this, we propose a simple but effective pipeline that achieves a substantial performance margin for incremental sessions. Further, we devise and perform comprehensive experiments under diverse conditions, leveraging pretrained representations, various classification modules, and aggregation of predictions within our pipeline; our findings reveal essential insights for model design and future research directions. Additionally, we introduce a set of new evaluation metrics and benchmark datasets to address the limitations of the conventional metrics and benchmarks, which disguise the bias towards the base session. These newly introduced metrics and datasets allow the generalization of FSCIL models to be estimated. Furthermore, our study achieves new state-of-the-art performance by significant margins. The code of our study is available at GITHUB.
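For intuition on how conventional reporting can disguise the base-session bias mentioned in the abstract, below is a minimal illustrative sketch (not the metrics proposed in the paper, and the class counts and accuracies are hypothetical). It contrasts a class-weighted overall accuracy, which base classes dominate, with separate base/novel accuracies and their harmonic mean, which drops whenever either side collapses.

```python
# Illustrative sketch only: shows how averaging accuracy over all classes can
# hide a large gap between base and novel (incremental) classes in FSCIL.
# The class counts and accuracies below are hypothetical, not paper results.

def overall_accuracy(base_acc, novel_acc, n_base, n_novel):
    """Class-weighted average accuracy, as commonly reported per session."""
    return (base_acc * n_base + novel_acc * n_novel) / (n_base + n_novel)

def harmonic_mean(base_acc, novel_acc):
    """Harmonic mean of base and novel accuracy; low if either side is low."""
    if base_acc == 0 or novel_acc == 0:
        return 0.0
    return 2 * base_acc * novel_acc / (base_acc + novel_acc)

# Hypothetical CIFAR-100-like FSCIL split: 60 base classes plus 40 novel
# classes learned incrementally. A base-biased model keeps base accuracy
# high while novel accuracy stays low.
n_base, n_novel = 60, 40
base_acc, novel_acc = 0.80, 0.30

print(f"overall (class-weighted) accuracy: {overall_accuracy(base_acc, novel_acc, n_base, n_novel):.3f}")
print(f"base accuracy:                     {base_acc:.3f}")
print(f"novel accuracy:                    {novel_acc:.3f}")
print(f"harmonic mean (base vs. novel):    {harmonic_mean(base_acc, novel_acc):.3f}")
# The overall accuracy (0.600) looks moderate, while the harmonic mean
# (~0.436) exposes the imbalance between base and incremental performance.
```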
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2257