Keywords: Continual Learning, Prompt Tuning, Prototype, Contrastive Learning
Abstract: Prototypes, as representations of class embeddings, have been explored for continual learning to reduce memory footprint or to avoid bias towards the latest task. However, prototype-based methods still suffer from performance deterioration due to semantic drift and prototype interference. In this work, we propose a simple and novel framework for rehearsal-free continual learning. We show that task-specific prompt tuning, when coupled with a contrastive loss design, can effectively address both issues and substantially improve the potency of prototypes. The proposed framework excels on three challenging benchmarks, yielding 3% to 6% absolute improvements over state-of-the-art methods without using a rehearsal buffer or a test-time oracle. Furthermore, the proposed framework largely closes the performance gap between incremental learning and offline joint learning, demonstrating a promising design schema for continual learning.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning