Amphibian: A Meta-Learning Framework for Rehearsal-Free, Fast Online Continual Learning

TMLR Paper 3546 Authors

24 Oct 2024 (modified: 25 Oct 2024) · Under review for TMLR · CC BY 4.0
Abstract: Online continual learning is challenging because it requires fast adaptation over a stream of data in a non-stationary environment without forgetting the knowledge acquired in the past. To address this challenge, in this paper we introduce Amphibian, a gradient-based meta-learner that learns to scale the direction of gradient descent to achieve the desired balance between fast learning and continual learning. For this purpose, using only the current batch of data, Amphibian minimizes a meta-objective that encourages alignment of gradients among the given data samples along selected basis directions in the gradient space. From this objective, it learns a diagonal scale matrix in each layer that accumulates the history of such gradient alignments. Using these scale matrices, Amphibian updates the model online only in the directions with positive cumulative gradient alignment among the data observed so far. Through evaluation on standard continual image classification benchmarks, we show that such meta-learned scaled gradient descent in Amphibian achieves better accuracy in online continual learning than relevant baselines, while enabling fast learning from less data and few-shot knowledge transfer to new tasks. We also introduce Amphibian-$\beta$, a unified and principled framework for analyzing and understanding the fast-learning and continual-learning dynamics. Additionally, with loss landscape visualizations, we show that these gradient updates incur minimal loss on old tasks, enabling fast continual learning in Amphibian.
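To make the update rule described in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of a per-layer diagonal-scaled gradient step. The toy quadratic loss, the use of the canonical coordinate basis, and the `scale` accumulation rule are illustrative assumptions; the paper's actual meta-objective and basis selection may differ.

```python
import numpy as np

# Illustrative sketch of Amphibian-style scaled gradient descent for one layer.
# Assumptions (not from the paper): a toy quadratic loss, the canonical basis
# as the "selected basis directions", and a simple additive accumulation rule.

rng = np.random.default_rng(0)
dim = 8
theta = rng.normal(size=dim)   # flattened parameters of one layer
scale = np.zeros(dim)          # diagonal scale matrix, stored as a vector
lr_inner, lr_meta = 0.1, 0.05

def grad(theta, batch):
    """Gradient of a toy quadratic loss 0.5 * ||theta - mean(batch)||^2."""
    return theta - batch.mean(axis=0)

for step in range(100):
    # Two splits of the current batch stand in for the data samples whose
    # gradient alignment the meta-objective encourages.
    batch_a = rng.normal(loc=1.0, size=(4, dim))
    batch_b = rng.normal(loc=1.0, size=(4, dim))
    g_a, g_b = grad(theta, batch_a), grad(theta, batch_b)

    # Accumulate coordinate-wise gradient alignment along the basis directions;
    # this accumulated history plays the role of the learned diagonal scale.
    scale += lr_meta * g_a * g_b

    # Scaled update: move only along directions whose cumulative alignment is
    # positive (entries with negative cumulative alignment are clipped to zero).
    theta -= lr_inner * np.maximum(scale, 0.0) * 0.5 * (g_a + g_b)
```

Coordinates that repeatedly show conflicting gradients accumulate negative alignment and are frozen by the clipping, which is one way to read the abstract's claim that updates proceed only along directions with positive cumulative gradient alignment.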
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Seungjin_Choi1
Submission Number: 3546