Keywords: continual learning, class-incremental learning, analysis
Abstract: A fundamental objective in class-incremental learning is to strike a balance between stability and plasticity, where models should be both stable enough to retain knowledge learnt from previously seen classes, and plastic enough to learn concepts from new classes. While previous works demonstrate strong performance on class-incremental benchmarks, it is not clear whether their success comes from the models being stable, plastic, or a mixture of both. In this paper we aim to shed light on how effectively recent class-incremental learning algorithms address the stability-plasticity trade-off. We establish analytical tools that help measure the stability and plasticity of feature representations, and employ such tools to investigate models trained with various class-incremental algorithms on large-scale class-incremental benchmarks. Surprisingly, we find that the majority of class-incremental algorithms heavily favor stability over plasticity, to the extent that the feature extractor of a model trained on the initial set of classes is no less effective than that of the final incremental model. Our observations not only inspire two simple algorithms that highlight the importance of analyzing feature representations, but also suggest that class-incremental research, in general, should strive for better feature representation learning.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip