Leveraging Self-Supervised and Supervised Embeddings for Memory-Efficient Experience-Replay Continual Learning

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Life-Long and Continual Learning, Representation Learning, Self-Supervised Learning, Deep Learning, Representation Learning for Vision
TL;DR: This work presents a novel sample selection method for continual learning that integrates supervised and self-supervised embeddings via a graph-based approach, achieving SOTA results on CIFAR-100 and Tiny-ImageNet in low-memory settings.
Abstract: Catastrophic forgetting remains a key challenge in Continual Learning (CL). In replay-based CL with severe memory constraints, performance critically depends on the sample selection strategy, that is, which examples are stored for replay. Most existing approaches construct memory buffers using embeddings learned under supervised objectives. However, class-agnostic, self-supervised representations often encode rich, class-relevant semantics that these approaches overlook. We propose MERS (Multiple Embedding Replay Selection), a new method that replaces the buffer selection module with a graph-based approach integrating both supervised and self-supervised embeddings. Empirical results show consistent improvements over state-of-the-art selection strategies across a range of continual learning algorithms, with particularly strong gains in low-memory regimes. On CIFAR-100 and Tiny-ImageNet, MERS outperforms single-embedding baselines without adding model parameters or increasing replay volume, making it a practical, drop-in enhancement for replay-based continual learning.
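The abstract does not specify MERS's exact algorithm, so the following is only an illustrative sketch of the general idea it describes: fuse pairwise similarities from a supervised and a self-supervised embedding space into one graph, then greedily select buffer samples that cover that graph. All function names (`fused_knn_graph`, `select_buffer`), the fusion weight `alpha`, and the greedy coverage criterion are assumptions for illustration, not the paper's method.

```python
import numpy as np

def fused_knn_graph(z_sup, z_ssl, k=5, alpha=0.5):
    """Hypothetical fusion: weighted average of cosine similarities
    from the supervised (z_sup) and self-supervised (z_ssl) spaces,
    sparsified to a symmetric k-NN graph."""
    def cos_sim(z):
        z = z / np.linalg.norm(z, axis=1, keepdims=True)
        return z @ z.T

    s = alpha * cos_sim(z_sup) + (1 - alpha) * cos_sim(z_ssl)
    np.fill_diagonal(s, -np.inf)          # exclude self-edges
    adj = np.zeros_like(s)
    nbrs = np.argsort(-s, axis=1)[:, :k]  # top-k neighbors per node
    for i, idx in enumerate(nbrs):
        adj[i, idx] = s[i, idx]
    return np.maximum(adj, adj.T)         # symmetrize

def select_buffer(z_sup, z_ssl, m, k=5, alpha=0.5):
    """Greedy coverage selection: repeatedly pick the sample whose
    graph neighborhood covers the most not-yet-covered samples."""
    adj = fused_knn_graph(z_sup, z_ssl, k=k, alpha=alpha)
    n = adj.shape[0]
    covered = np.zeros(n, dtype=bool)
    chosen = []
    for _ in range(min(m, n)):
        gains = ((adj > 0) & ~covered[None, :]).sum(axis=1)
        gains[chosen] = -1                # never re-pick a sample
        pick = int(np.argmax(gains))
        chosen.append(pick)
        covered[pick] = True
        covered[adj[pick] > 0] = True     # neighbors are now covered
    return chosen

# Toy usage: select a 3-sample buffer from 20 items with 8-dim embeddings.
rng = np.random.default_rng(0)
buffer_idx = select_buffer(rng.normal(size=(20, 8)),
                           rng.normal(size=(20, 8)), m=3)
```

A coverage-style criterion is just one plausible way to exploit a fused graph; density- or centrality-based selection would drop into the same skeleton.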
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 8884