Keywords: Continual learning, Lifelong learning, Regularization methods, Regularization strength
TL;DR: We provide a theoretical framework for isotropic regularization in continual learning that covers multiple-teacher settings while isolating the practical effects of regularization strength, yielding a scaling law for the optimal strength.
Abstract: We study generalization in an overparameterized continual linear regression setting, where a model is trained with L2 (isotropic) regularization across a sequence of tasks.
We derive a closed-form expression for the expected generalization loss in the high-dimensional regime that holds for arbitrary linear teachers.
We demonstrate that isotropic regularization mitigates label noise in both single-teacher and multiple i.i.d.-teacher settings, whereas prior work accommodating multiple teachers either omitted regularization or relied on memory-intensive methods.
Furthermore, we prove that the optimal fixed regularization strength scales nearly linearly with the number of tasks $T$, specifically as $T/\ln T$. To our knowledge, this is the first such result in theoretical continual learning.
Finally, we validate our theoretical findings through experiments on linear regression and neural networks, illustrating how this scaling law affects generalization and offering a practical recipe for the design of continual learning systems.
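The setting described above can be sketched numerically. In the sketch below, each task is solved in closed form with an isotropic penalty pulling the new solution toward the previous one, $w_t = \arg\min_w \|X_t w - y_t\|^2 + \lambda \|w - w_{t-1}\|^2$, and a sweep over $\lambda$ locates the empirically best fixed strength for several horizons $T$. This is a minimal illustration under assumed dimensions, noise level, and a single shared teacher, not the paper's exact experimental protocol; the function name and all constants are illustrative.

```python
import numpy as np

def continual_ridge(T=20, d=100, n=20, lam=1.0, noise=0.5, seed=0):
    """Sequentially fit T overparameterized regression tasks (n < d)
    from one shared linear teacher, anchoring each new solution to the
    previous one via an isotropic (L2) penalty of strength lam.
    Returns a generalization proxy: mean squared parameter error."""
    rng = np.random.default_rng(seed)
    w_star = rng.standard_normal(d) / np.sqrt(d)  # shared teacher
    w = np.zeros(d)
    for _ in range(T):
        X = rng.standard_normal((n, d))
        y = X @ w_star + noise * rng.standard_normal(n)
        # Closed-form minimizer of ||Xw - y||^2 + lam * ||w - w_prev||^2;
        # the 'w' on the right-hand side is still the previous solution.
        w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w)
    return float(np.mean((w - w_star) ** 2))

# Sweep a grid of fixed strengths for growing task horizons T; the
# theory predicts the best fixed lam grows roughly like T / ln(T).
for T in (5, 20, 80):
    lams = np.geomspace(0.1, 100, 13)
    losses = [continual_ridge(T=T, lam=l) for l in lams]
    print(T, lams[int(np.argmin(losses))])
```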
Submission Number: 162