Keywords: Continual Learning, Geometric Deep Learning, Representation Learning, High Dimensional Learning, Neural Manifolds
TL;DR: Overcoming catastrophic forgetting in continual learning by preserving intra- and inter-task geometry of latent relative representations via metric-space isometries
Abstract: We introduce CAMELS, a continual learning framework that leverages metric space constraints in latent space to preserve stable representations over time. Rather than constraining parameters or matching global prototypes, CAMELS anchors the internal structure of past tasks by preserving pairwise cosine similarities among replay samples, maintaining relative geometry without freezing coordinates, and embedding different tasks in orthogonal subspaces. This formulation treats continual learning as the problem of preserving local isometries across evolving latent manifolds in high-dimensional embedding spaces. We provide theoretical guarantees that our approach bounds forgetting and classification risk by maintaining manifold consistency and prototype stability. Empirically, CAMELS outperforms or matches prior methods on standard benchmarks including Split, Rotated, and Permuted MNIST, as well as Split CIFAR-10. The resulting latent space is highly interpretable, revealing clear task and class structure that evolves throughout training. These results highlight the value of geometric structure preservation as a principled approach to learning stable, adaptable representations in sequential settings.
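A minimal sketch of the kind of pairwise cosine-similarity preservation term the abstract describes, written in PyTorch. The function name, the MSE penalty, and the way old embeddings are assumed to be cached alongside replay samples are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def relative_geometry_loss(old_z: torch.Tensor, new_z: torch.Tensor) -> torch.Tensor:
    """Penalize drift in the pairwise cosine-similarity matrix of replay embeddings.

    old_z: embeddings of replay samples cached when their task was learned, shape (n, d)
    new_z: embeddings of the same samples under the current encoder, shape (n, d)
    """
    # Pairwise cosine similarities (n x n) under the old and current encoders.
    old_sim = F.cosine_similarity(old_z.unsqueeze(1), old_z.unsqueeze(0), dim=-1)
    new_sim = F.cosine_similarity(new_z.unsqueeze(1), new_z.unsqueeze(0), dim=-1)
    # Matching the similarity matrices preserves relative geometry
    # without pinning the embeddings to their old coordinates.
    return F.mse_loss(new_sim, old_sim)

# Toy usage: the loss is zero when relative geometry is unchanged.
if __name__ == "__main__":
    z_old = torch.randn(8, 32)
    z_new = z_old @ torch.linalg.qr(torch.randn(32, 32)).Q  # an orthogonal rotation
    print(relative_geometry_loss(z_old, z_new).item())       # ~0: cosine structure preserved
    print(relative_geometry_loss(z_old, torch.randn(8, 32)).item())  # > 0
```

The toy usage illustrates why this is a relative (rather than absolute) constraint: an orthogonal rotation of the latent space leaves the loss near zero, since only the pairwise cosine structure is anchored.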
Submission Number: 13