Understanding the Role of Rehearsal in Continual Learning under Varying Model Capacities

ICLR 2026 Conference Submission 14995 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: continual learning, catastrophic forgetting, rehearsal mechanism, theoretical analysis, deep neural networks
TL;DR: We present a closed-form analysis of rehearsal-based continual learning, revealing how key factors affect the model's errors across the underparameterized and overparameterized regimes.
Abstract: Continual learning, which aims to learn from dynamically changing data distributions, has garnered significant attention in recent years. However, most existing theoretical work focuses on regularization-based methods, while theoretical understanding of the rehearsal mechanism in continual learning remains limited. In this paper, we provide a closed-form analysis of the adaptation, memory, and generalization errors of rehearsal-based continual learning within a linear-Gaussian regression framework, covering both the underparameterized and overparameterized regimes. We derive explicit formulae linking factors such as rehearsal size to each error component and obtain several insightful findings. First, more rehearsal is not always better for memorability: when tasks are similar and noise levels are low, the memory error exhibits a decreasing floor. Second, rehearsal enhances adaptability under underparameterization but can be provably detrimental under overparameterization. Moreover, enlarging the rehearsal size can raise the peaks in generalization error in the slightly overparameterized regime, and may further degrade generalization when tasks are dissimilar or noise is high. Finally, numerical simulations validate these theoretical insights, and we further extend the analysis to neural networks on MNIST, CIFAR-10, CIFAR-100, and Tiny-ImageNet. The empirical curves closely follow the predicted trends, indicating that our linear analysis captures phenomena that persist in modern deep continual learning models.
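
As a rough illustration of the setting the abstract describes, the sketch below simulates rehearsal-based continual learning for two linear-Gaussian regression tasks: a model is fit to task 2 together with a buffer of rehearsed task-1 samples, and the memory and adaptation errors are measured as parameter-space distances to each task's ground truth. The two-task setup, the buffer sizes, the task-similarity parameterization, and the minimum-norm (pseudoinverse) solver are illustrative assumptions, not the paper's exact protocol.

Illustrative simulation (Python):

import numpy as np

rng = np.random.default_rng(0)

def make_task(w, n, d, noise):
    # Draw n Gaussian inputs in R^d with labels y = Xw + noise.
    X = rng.normal(size=(n, d))
    y = X @ w + noise * rng.normal(size=n)
    return X, y

# Dimensions, per-task sample count, label noise, and task similarity
# are illustrative choices. n > d gives the underparameterized regime;
# setting d larger than the total sample count probes overparameterization.
d, n, noise, sim = 30, 100, 0.1, 0.9
w1 = rng.normal(size=d)
w2 = sim * w1 + np.sqrt(1 - sim**2) * rng.normal(size=d)  # correlated tasks

X1, y1 = make_task(w1, n, d, noise)
X2, y2 = make_task(w2, n, d, noise)

for m in [0, 10, 50, 100]:  # rehearsal buffer sizes
    idx = rng.choice(n, size=m, replace=False) if m else np.array([], int)
    Xr = np.vstack([X2, X1[idx]])          # new-task data + rehearsed old data
    yr = np.concatenate([y2, y1[idx]])
    # pinv gives the least-squares solution here, and the minimum-norm
    # interpolator when the system is underdetermined (overparameterized).
    w_hat = np.linalg.pinv(Xr) @ yr
    # With isotropic Gaussian inputs, the expected excess risk on a task
    # equals the squared parameter error with respect to its ground truth.
    memory = np.sum((w_hat - w1) ** 2)     # error on the old task
    adapt = np.sum((w_hat - w2) ** 2)      # error on the new task
    print(f"rehearsal m={m:3d}  memory={memory:.4f}  adaptation={adapt:.4f}")

Varying d relative to the total sample count switches between the two regimes the analysis distinguishes: under overparameterization the fit interpolates the pooled data exactly, so additional rehearsed samples change which interpolator is selected rather than reducing training error, which is where rehearsal can trade off against adaptation.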
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 14995