Learning Forward Compatible Representation in Class Incremental Learning by Increasing Effective Rank

19 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: class incremental learning, continual learning, lifelong learning
Abstract: Class Incremental Learning (CIL) is a prominent subfield of continual learning that aims to enable models to learn new tasks incrementally while preserving the knowledge acquired from previous tasks. The main challenge of CIL is catastrophic forgetting, where a model naively fine-tuned on new tasks suffers a significant drop in performance on previous tasks. To address this challenge, previous studies have mostly focused on backward compatible approaches. Recently, a forward compatible approach was introduced that can be used concurrently with existing backward compatible methods. This forward compatible method, however, is limited in that it relies solely on class information. In this study, we propose an effective-Rank based Forward Compatible (RFC) representation regularization that is not confined to a specific type of information, such as class information. The proposed method increases the effective rank of the representation during the base session, thereby encouraging the encoding of more informative features pertinent to unseen novel tasks. To substantiate the effectiveness of our method, we establish a theoretical connection between the effective rank and the Shannon entropy of the representations. We then conduct comprehensive experiments by integrating RFC into ten well-known backward compatible CIL methods. The results demonstrate that our forward compatible approach improves performance on novel tasks while mitigating catastrophic forgetting. Furthermore, our method significantly improves the average incremental accuracy in all ten cases examined, underscoring its efficacy and general applicability.
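To make the abstract's core quantity concrete, the sketch below computes an effective-rank term in PyTorch, assuming the standard definition of effective rank as the exponential of the Shannon entropy of the normalized singular-value distribution (Roy & Vetterli, 2007), which is consistent with the entropy connection the abstract mentions. The function names, the feature centering, and the weighting coefficient `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def effective_rank(features: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Effective rank of a (batch x dim) representation matrix.

    Uses the Roy & Vetterli (2007) definition: exp(H(p)), where p is the
    distribution obtained by normalizing the singular values to sum to 1.
    """
    # Singular values of the (mean-centered) feature matrix.
    s = torch.linalg.svdvals(features - features.mean(dim=0, keepdim=True))
    p = s / (s.sum() + eps)                     # normalize to a distribution
    entropy = -(p * torch.log(p + eps)).sum()   # Shannon entropy of p
    return torch.exp(entropy)

def total_loss(ce_loss: torch.Tensor, features: torch.Tensor,
               lam: float = 0.1) -> torch.Tensor:
    # Hypothetical base-session objective: subtracting the scaled
    # effective-rank term means minimizing the total loss *increases*
    # the effective rank of the representations.
    return ce_loss - lam * effective_rank(features)
```

In this sketch, the regularizer would be applied during the base session only, alongside whichever backward compatible CIL method is in use; the choice of `lam` and whether to center the features are design decisions not specified by the abstract.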
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1608