Theoretical Foundations of Continual Learning via Drift-Plus-Penalty

09 Feb 2026 (modified: 22 Feb 2026) · Under review for TMLR · CC BY 4.0
Abstract: In many real-world settings, data streams are inherently nonstationary and arrive sequentially, requiring learning systems to adapt continuously without repeatedly retraining from scratch. Continual learning (CL) addresses this setting by seeking to incorporate new tasks while preventing catastrophic forgetting, whereby updates for recent data degrade performance on previously acquired knowledge. We introduce a control-theoretic perspective on CL that explicitly regulates the temporal evolution of forgetting, framing adaptation to new tasks as a controlled process subject to long-term stability constraints. We focus on replay-based CL settings in which a finite memory buffer preserves representative samples from prior tasks, allowing forgetting to be explicitly regulated. We propose COntinual Learning with Drift-Plus-Penalty (\texttt{COLD}), a novel CL framework grounded in the Drift-Plus-Penalty (DPP) principle from stochastic optimization. At each task, \texttt{COLD} minimizes the instantaneous penalty corresponding to the current task loss while maintaining a virtual queue that explicitly tracks deviations from long-term stability on previously learned tasks, thereby capturing the stability–plasticity trade-off as a regulated dynamical process. We establish stability and convergence guarantees that characterize this trade-off, as governed by a tunable control parameter. Empirical results on standard benchmark datasets show that the proposed framework consistently achieves superior accuracy compared to a wide range of state-of-the-art CL baselines, while exhibiting competitive and tunable forgetting behavior that reflects the explicit regulation of the stability–plasticity trade-off through virtual queues and the DPP objective.
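The DPP mechanism described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy rendering, not the paper's implementation: the names (`V`, `budget`, the loss values) and the scalar form of the queue update are illustrative assumptions based on standard Drift-Plus-Penalty formulations, in which a virtual queue accumulates constraint violations and a control parameter `V` weights the instantaneous penalty against queue-driven stability pressure.

```python
# Hypothetical sketch of a Drift-Plus-Penalty (DPP) update for replay-based
# continual learning. All names and values here are illustrative assumptions.

def dpp_step(Q, current_loss, replay_loss, budget, V):
    """One DPP objective evaluation and virtual-queue update.

    Q            : virtual queue tracking cumulative stability violations
    current_loss : instantaneous penalty (loss on the current task)
    replay_loss  : loss on buffered samples from previously learned tasks
    budget       : long-term tolerance on replay loss per task
    V            : control parameter trading plasticity (large V)
                   against stability (small V)
    """
    # DPP objective: V * penalty + Q * constraint term. A learner would
    # minimize this over model parameters at each task arrival.
    objective = V * current_loss + Q * replay_loss
    # Virtual-queue (drift) update: the queue grows when the stability
    # constraint is violated, drains otherwise, and never goes negative.
    Q_next = max(Q + replay_loss - budget, 0.0)
    return objective, Q_next

# Toy trace: the queue inflates while replay loss exceeds the budget,
# increasing the weight on stability in subsequent tasks.
Q = 0.0
for replay_loss in [0.8, 0.6, 0.3, 0.1]:
    obj, Q = dpp_step(Q, current_loss=1.0, replay_loss=replay_loss,
                      budget=0.4, V=10.0)
    print(round(Q, 2))  # queue length after each task
```

The key design point this sketch captures is that forgetting pressure is not a fixed regularization weight: the queue `Q` adapts over time, so tasks that follow periods of heavy forgetting are optimized under a stronger stability term.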
Submission Type: Long submission (more than 12 pages of main content)
Changes Since Last Submission: NA.
Assigned Action Editor: ~Stefano_Sarao_Mannelli1
Submission Number: 7423