Metalearning Continual Learning Algorithms

TMLR Paper3534 Authors

22 Oct 2024 (modified: 31 Oct 2024) · Under review for TMLR · CC BY 4.0
Abstract: General-purpose learning systems should improve themselves in an open-ended fashion in ever-changing environments. Conventional learning algorithms for neural networks, however, suffer from catastrophic forgetting (CF), whereby previously acquired skills are forgotten when a new task is learned. Instead of hand-crafting new algorithms to avoid CF, we propose Automated Continual Learning (ACL) to train self-referential neural networks to meta-learn their own in-context continual (meta-)learning algorithms. ACL encodes the continual learning desiderata (good performance on both old and new tasks) into its meta-learning objectives. Our experiments demonstrate that, in general, in-context learning algorithms also suffer from CF, but ACL effectively solves such "in-context catastrophic forgetting". Our ACL-learned algorithms outperform hand-crafted ones and popular meta-continual learning methods on the Split-MNIST benchmark in the replay-free setting, and enable continual learning of diverse tasks consisting of multiple few-shot and standard image classification datasets. Going further, we also highlight the current limitations of in-context continual learning by investigating the possibility of extending ACL to the realm of state-of-the-art CL methods that leverage pre-trained models. Our work provides several novel perspectives on the long-standing problem of continual learning.
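To illustrate the idea of encoding "perform well on both old and new tasks" into a meta-learning objective, here is a minimal sketch, not the authors' implementation: the sequence model `model`, the function name `acl_meta_loss`, and the `task_stream` format are all illustrative assumptions. After each task's support examples are appended to the in-context prompt, the meta-loss evaluates held-out queries from the new task and from all previously seen tasks, so that remembering is part of what gets meta-optimized.

```python
# Hypothetical ACL-style meta-objective sketch (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def acl_meta_loss(model, task_stream):
    """task_stream: list of (support_tokens, query_tokens, query_labels) per task.
    `model` is assumed to map a token sequence of shape (seq_len, d) to
    per-position logits of shape (seq_len, num_classes)."""
    context = []        # in-context "memory": support tokens seen so far
    seen_queries = []   # held-out queries of every task learned so far
    total_loss = 0.0
    for support, query, labels in task_stream:
        context.append(support)
        seen_queries.append((query, labels))
        ctx = torch.cat(context, dim=0)           # full context after this task
        for q, y in seen_queries:                 # evaluate old AND new tasks
            logits = model(torch.cat([ctx, q], dim=0))[-q.shape[0]:]
            total_loss = total_loss + F.cross_entropy(logits, y)
    return total_loss
```

In an outer loop, such a meta-loss would be backpropagated through the unrolled in-context computation to update the model's slow weights, so that the learned in-context learning algorithm itself avoids forgetting.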
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Alessandro_Sordoni1
Submission Number: 3534