Incremental Unified Parameter Additional Tuning with Basic Memory Replaying

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: class incremental learning, parameter-additional-tuning, basic memory replaying
TL;DR: We propose a novel method for class incremental learning that tunes a unified additional parameter structure and replays basic memory.
Abstract: Class incremental learning (CIL) aims to develop an open intelligence system that can continuously learn new concepts from new tasks while retaining the ability to distinguish between new and old concepts. Recently, parameter-additional-tuning (PAT) methods have successfully alleviated catastrophic forgetting by starting from a well-pre-trained model and allowing only a few additional parameters to be trained. However, the contradiction between stability and plasticity and the lack of inter-task features still challenge PAT-based CIL methods. To address these issues, we propose unified PAT and basic memory replaying (BMR). On the one hand, unified PAT transfers the model to sequentially arriving downstream tasks on top of a fixed pre-trained vision transformer by unifying prompt-based and adapter-based methods, offering more diverse plastic structures that efficiently capture useful features without large-scale parameters. On the other hand, BMR synthesizes on-call virtual old samples from a fixed-size basic memory to create a global task that covers all the sub-tasks, making inter-task features learnable without a large memory budget. Extensive experiments demonstrate the effectiveness of our method.
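The core idea of unified PAT, as described in the abstract, is to freeze the pre-trained backbone and train only two kinds of lightweight additions: prompt tokens prepended to the input sequence, and a bottleneck adapter applied as a residual branch. The following is a minimal NumPy sketch of a single such block; all weight names and dimensions here are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D, T, P, R = 16, 4, 2, 4  # embed dim, input tokens, prompt tokens, adapter bottleneck

# Frozen "pre-trained" weights (a stand-in for one ViT layer; never updated)
W_frozen = rng.standard_normal((D, D)) * 0.1

# Trainable additions (hypothetical names, for illustration only):
prompts = rng.standard_normal((P, D)) * 0.1  # prompt-based: learnable tokens
W_down = rng.standard_normal((D, R)) * 0.1   # adapter-based: down-projection
W_up = rng.standard_normal((R, D)) * 0.1     # adapter-based: up-projection

def unified_pat_block(x):
    """One block combining prompt tuning and a residual adapter.

    x: (T, D) token embeddings. Returns (P + T, D).
    """
    h = np.concatenate([prompts, x], axis=0)      # prepend prompt tokens
    h = h @ W_frozen                              # frozen backbone computation
    adapter = np.maximum(h @ W_down, 0.0) @ W_up  # bottleneck MLP (ReLU)
    return h + adapter                            # residual adapter connection

x = rng.standard_normal((T, D))
out = unified_pat_block(x)
print(out.shape)  # (6, 16): P + T tokens, embed dim D
```

Only `prompts`, `W_down`, and `W_up` would receive gradients during training, which is what keeps the number of trainable parameters small relative to the frozen backbone.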
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning