PAC-Bayes bounds for cumulative loss in Continual Learning

ICLR 2026 Conference Submission12854 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Continual Learning, PAC-Bayes, Generalization bounds, Lifelong Learning
TL;DR: Upper bounds on the loss accumulated during continual learning (offline or online)
Abstract: In continual learning, knowledge must be preserved and re-used between tasks, requiring a balance between maintaining good transfer to future tasks and minimizing forgetting of previously learned ones. As several practical algorithms have been devised for the continual learning setting, the natural question of providing reliable risk certificates has also been raised. Although there are results on the behavior of memory stability for specific settings and algorithms, generally applicable upper bounds on learning plasticity are few and far between. In this work, we extend existing PAC-Bayes bounds for online learning and time-uniform offline learning to the continual learning setting. We derive general upper bounds on the cumulative generalization loss, applicable to any task distribution and learning algorithm, as well as oracle bounds for Gibbs posteriors, and compare their effectiveness for several different task distributions. We demonstrate empirically that our approach yields non-vacuous bounds for several continual learning problems in vision, as well as tight oracle bounds on linear regression tasks. To the best of our knowledge, this is the first general upper bound on learning plasticity for continual learning.
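As background for the kind of certificate the abstract refers to, here is a minimal sketch of a classical single-task PAC-Bayes bound (McAllester-style, assuming a loss bounded in [0,1]; the symbols $\pi$ (prior), $\rho$ (posterior), $n$ (sample size), and $\delta$ (confidence level) are generic and not taken from the submission):

\[
\mathbb{E}_{h\sim\rho}\big[L(h)\big] \;\le\; \mathbb{E}_{h\sim\rho}\big[\hat{L}_S(h)\big] \;+\; \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi) + \ln\!\frac{2\sqrt{n}}{\delta}}{2n}} \qquad \text{with probability at least } 1-\delta .
\]

A continual-learning analogue of the type announced here would instead certify the cumulative quantity $\sum_{t=1}^{T}\mathbb{E}_{h\sim\rho_t}\big[L_t(h)\big]$ over a sequence of tasks $t=1,\dots,T$; the precise form of the derived bounds is given in the submission itself.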
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 12854