Keywords: Continual Learning, Offline Brain States, Generative Latent Space Manipulation
Abstract: Continual learning struggles to balance plasticity and stability while mitigating catastrophic forgetting. Inspired by human sleep and dreaming mechanisms, we propose Dream2Learn (D2L), a generative approach that enables models trained in a continual learning setting to synthesize additional, structured training signals driven by their internal knowledge. Unlike prior methods that rely on real data to simulate the dreaming process, D2L autonomously constructs semantically distinct yet structurally coherent dreamed classes, conditioning a diffusion model via soft prompt optimization. These dynamically generated samples expand the classifier’s representation space, reinforcing past knowledge while structuring features in a way that facilitates adaptation to future tasks. By integrating dreamed classes into training, D2L enables the model to self-organize its latent space, improving generalization and adaptability to new data.
Experiments on Mini-ImageNet, FG-ImageNet, and ImageNet-R show that D2L surpasses existing methods across all evaluated metrics. Notably, it achieves positive forward transfer, confirming its ability to enhance adaptability by structuring representations for future tasks.
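The soft prompt optimization mentioned in the abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' code: the "frozen diffusion model" is replaced by a toy fixed linear map, and all names, dimensions, and the target embedding are hypothetical. The only point it demonstrates is the core idea that a small learnable prompt vector is tuned by gradient descent to steer a frozen generator toward a target ("dreamed") class embedding, while the generator's weights stay untouched.

```python
# Illustrative sketch of soft prompt optimization (assumptions: toy linear
# "generator", random target embedding; NOT the D2L implementation).
import random

random.seed(0)
DIM = 8

# Frozen generator stand-in: a fixed linear map from prompt to output embedding.
W = [[random.gauss(0, 0.3) for _ in range(DIM)] for _ in range(DIM)]

def generate(prompt):
    """Frozen 'diffusion model' surrogate: linear map of the soft prompt."""
    return [sum(W[i][j] * prompt[j] for j in range(DIM)) for i in range(DIM)]

# Hypothetical target embedding for a dreamed class (in D2L this would be
# derived from the classifier's internal knowledge; random here).
target = [random.gauss(0, 1) for _ in range(DIM)]

# Learnable soft prompt: the ONLY trainable parameters.
prompt = [0.0] * DIM
lr = 0.05
for step in range(500):
    out = generate(prompt)
    err = [out[i] - target[i] for i in range(DIM)]
    # Gradient of 0.5 * ||W p - t||^2 w.r.t. p is W^T err; W stays frozen.
    grad = [sum(W[i][j] * err[i] for i in range(DIM)) for j in range(DIM)]
    prompt = [p - lr * g for p, g in zip(prompt, grad)]

loss = sum((o - t) ** 2 for o, t in zip(generate(prompt), target))
print(f"final squared error: {loss:.4f}")
```

In an actual text-to-image setup, the same pattern applies with the prompt living in the text encoder's embedding space and the loss defined through the frozen diffusion model, but the division of roles is identical: gradients flow only into the prompt.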
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 12652