Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent

12 Jun 2020 (modified: 22 Oct 2023), LifelongML@ICML2020
Student First Author: No
Keywords: Continual Learning, Deep Learning Theory, Deep Learning, Transfer Learning, Statistical Learning, Curriculum Learning
Abstract: In continual learning settings, deep neural networks are prone to catastrophic forgetting. Orthogonal Gradient Descent (OGD; Farajtabar et al., 2019) achieves state-of-the-art results in practice for continual learning, although no theoretical guarantees had been proven for it. We derive the first generalisation guarantees for OGD in continual learning with overparameterised neural networks. We find that OGD is provably robust to catastrophic forgetting across only a single task. We propose OGD+, prove that it is robust to catastrophic forgetting across an arbitrary number of tasks, and show that it satisfies tighter generalisation bounds. Experiments show that OGD+ outperforms OGD in settings with long-range memory dependencies, even when the models are not overparameterised. We also derive a closed-form expression for the models learned across tasks, as a recursive kernel regression relation, which captures the transferability of knowledge across tasks. Finally, we theoretically quantify the impact of task ordering on the generalisation error, which highlights the importance of the curriculum for lifelong learning.
TL;DR: We prove convergence and generalisation guarantees for Orthogonal Gradient Descent in continual learning and propose OGD+, which outperforms OGD in settings with long-range memory dependencies.
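The core mechanism behind OGD is a projection of each new gradient onto the orthogonal complement of gradient directions stored from previous tasks, so that (to first order) updates on the current task do not change predictions on past tasks. The snippet below is a minimal NumPy sketch of that projection step only, not the authors' implementation: the names `orthogonalise` and `OGDSketch`, and the flattened-parameter-vector interface, are hypothetical, and the sketch omits the choice of which directions to store, which is where OGD+ departs from OGD.

```python
import numpy as np


def orthogonalise(g, basis):
    """Project vector g onto the orthogonal complement of the stored directions.

    `basis` is a list of orthonormal vectors spanning the subspace of
    directions kept from previous tasks.
    """
    for u in basis:
        g = g - np.dot(g, u) * u
    return g


class OGDSketch:
    """Minimal sketch of the OGD update on a flattened parameter vector."""

    def __init__(self, lr=0.01):
        self.lr = lr
        self.basis = []  # orthonormal directions stored from past tasks

    def step(self, params, grad):
        # Move only within the orthogonal complement of the stored directions,
        # so the update leaves past-task predictions unchanged to first order.
        return params - self.lr * orthogonalise(grad, self.basis)

    def store_direction(self, direction):
        # After finishing a task, store orthonormalised directions; in the
        # paper these are gradients of the model outputs on that task's samples.
        v = orthogonalise(direction, self.basis)
        norm = np.linalg.norm(v)
        if norm > 1e-10:
            self.basis.append(v / norm)
```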
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2006.11942/code)