Improving Continual Learning by Accurate Gradient Reconstructions of the Past

Published: 02 Nov 2023, Last Modified: 02 Nov 2023, Accepted by TMLR
Abstract: Weight-regularization and experience replay are two popular continual-learning strategies with complementary strengths: weight-regularization requires less memory, while replay can more accurately mimic batch training. How can we combine them to get better methods? Despite the simplicity of the question, little is known about how to optimally combine these approaches. In this paper, we present such a method by using a recently proposed principle of adaptation that relies on a faithful reconstruction of the gradients of the past data. Using this principle, we design a prior which combines two types of replay methods with a quadratic weight-regularizer and achieves better gradient reconstructions. The combination improves performance on standard task-incremental continual learning benchmarks such as Split-CIFAR, Split-TinyImageNet, and ImageNet-1000, achieving $>\!80\%$ of the batch performance while using a memory of $<\!10\%$ of the past data. Our work shows that a good combination of the two strategies can be very effective in reducing forgetting.
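To make the idea of combining the two strategies concrete, the sketch below shows a generic continual-learning objective that adds a replay term and a quadratic weight-regularizer to the current-task loss. This is only an illustrative assumption of how such a combination can look, not the authors' implementation; all names (`replay_batch`, `prev_weights`, `fisher_diag`, `lambda_reg`) are hypothetical.

```python
# Minimal sketch (assumed, not the paper's method): current-task loss
# + experience-replay term + quadratic weight-regularizer.
import torch
import torch.nn.functional as F

def combined_loss(model, task_batch, replay_batch, prev_weights, fisher_diag, lambda_reg):
    """Loss on the new task, plus a replay term over a small memory of past
    examples, plus a quadratic penalty keeping weights near the previous solution."""
    x_new, y_new = task_batch
    loss = F.cross_entropy(model(x_new), y_new)

    # Replay term: revisiting stored past examples approximately reconstructs
    # their gradients while training on the new task.
    if replay_batch is not None:
        x_old, y_old = replay_batch
        loss = loss + F.cross_entropy(model(x_old), y_old)

    # Quadratic weight-regularizer (EWC-style, used here only as an example):
    # penalize movement away from the previous task's solution, weighted by a
    # diagonal curvature estimate.
    reg = 0.0
    for name, p in model.named_parameters():
        reg = reg + (fisher_diag[name] * (p - prev_weights[name]) ** 2).sum()
    return loss + 0.5 * lambda_reg * reg
```

The replay term and the quadratic penalty play complementary roles: the former uses a small stored memory, the latter summarizes the rest of the past data cheaply in weight space.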
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Incorporate reviewer feedback and general improvements to writing and figures.
Assigned Action Editor: ~Jakub_Mikolaj_Tomczak1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1390