DRL: DISCRIMINATIVE REPRESENTATION LEARNING FOR CLASS INCREMENTAL LEARNING

19 Sept 2024 (modified: 13 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: class incremental learning
TL;DR: We propose an efficient parallel network and a global anchor loss to improve the performance of class incremental learning.
Abstract: Non-rehearsal class incremental learning (CIL) is pivotal in real-world scenarios such as data-streaming applications and data security. Despite remarkable progress in CIL research, it remains an extremely challenging task due to three conundrums: increasingly large model complexity, non-smooth representation shift during incremental learning, and the inconsistency between stage-wise sub-problem optimization and global inference. In this work, we propose the Discriminative Representation Learning (\emph{DRL}) method to address these challenges. To conduct incremental learning effectively yet efficiently, \emph{DRL} is built upon a pre-trained large model with strong representation learning capability and incrementally augments it by learning a lightweight adapter with little parameter overhead at each incremental learning stage. While the adapter is responsible for adapting the model to the new classes introduced in the current stage, it inherits and propagates the representation capability of the current model through a parallel connection between them. This design guarantees a smooth representation shift across incremental learning stages. Furthermore, to alleviate the training-inference inconsistency induced by stage-wise sub-optimization, we design the Margin-CE loss, which imposes a hard margin between classification boundaries to push for more discriminative representations, thereby narrowing the gap between stage-wise local optimization over a subset of data and global inference over all classes. Extensive experiments on six benchmarks reveal that \emph{DRL} consistently outperforms other state-of-the-art methods throughout the entire CIL process while maintaining high efficiency in both training and inference.
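The abstract describes two components: a lightweight adapter connected in parallel with a frozen pre-trained backbone, and a Margin-CE loss that subtracts a hard margin at the decision boundary. The sketch below is a minimal illustration of one plausible reading of these ideas, not the authors' released code: the module names (`ParallelAdapterBlock`, `MarginCELoss`), the bottleneck size, and the choice to subtract the margin from the target-class logit (in the style of additive-margin softmax) are assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MarginCELoss(nn.Module):
    """Cross-entropy with a hard margin subtracted from the ground-truth logit,
    so a class must beat the others by at least `margin` (assumed formulation)."""

    def __init__(self, margin: float = 0.5):
        super().__init__()
        self.margin = margin

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Penalize only the target-class logit, widening the gap to rival classes.
        one_hot = F.one_hot(targets, num_classes=logits.size(1)).to(logits.dtype)
        return F.cross_entropy(logits - self.margin * one_hot, targets)


class ParallelAdapterBlock(nn.Module):
    """A frozen pre-trained block with a lightweight bottleneck adapter in parallel;
    summing the two branches preserves the inherited representation while the
    adapter absorbs the new classes of the current stage (hypothetical layout)."""

    def __init__(self, frozen_block: nn.Module, dim: int, bottleneck: int = 64):
        super().__init__()
        self.frozen_block = frozen_block
        for p in self.frozen_block.parameters():
            p.requires_grad = False  # pre-trained weights stay fixed across stages
        self.adapter = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.ReLU(), nn.Linear(bottleneck, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.frozen_block(x) + self.adapter(x)
```

As a quick usage check, `MarginCELoss(0.5)(torch.randn(8, 10), torch.randint(0, 10, (8,)))` returns a scalar loss, and wrapping any `nn.Linear(dim, dim)` in `ParallelAdapterBlock` leaves only the adapter parameters trainable.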
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1809