Using Hindsight to Anchor Past Knowledge in Continual Learning

25 Sept 2019 (modified: 22 Oct 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
Keywords: Continual Learning, Lifelong Learning, Catastrophic Forgetting
TL;DR: A continual learning method that uses a replay buffer to construct anchor points by maximizing the forgetting of a task in hindsight, and then keeps the predictions on these anchors invariant through a meta-learning objective.
Abstract: In continual learning, the learner faces a stream of data whose distribution changes over time. Modern neural networks are known to suffer in this setting, as they quickly forget previously acquired knowledge. To address such catastrophic forgetting, state-of-the-art continual learning methods implement different types of experience replay, re-learning on past data stored in a small buffer known as an episodic memory. In this work, we complement experience replay with a meta-learning technique that we call anchoring: the learner updates its knowledge on the current task while keeping predictions on some anchor points of past tasks intact. These anchor points are learned using gradient-based optimization so as to maximize the forgetting of the current task, in hindsight, when the learner is fine-tuned on the episodic memory of past tasks. Experiments on several supervised continual learning benchmarks demonstrate that our approach improves the state of the art in terms of both accuracy and forgetting metrics, across various sizes of the episodic memory.
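The anchoring technique described above has two parts: a meta-learning update that penalizes drift in the predictions at past-task anchors, and an anchor-construction step that searches, by gradient ascent, for inputs of a task that are maximally forgotten once the model is fine-tuned on the episodic memory. Below is a minimal PyTorch sketch of this reading of the abstract; the function names, hyperparameters, and loss choices (`anchoring_step`, `learn_anchors`, `lam`, cross-entropy, a squared-error drift penalty) are illustrative assumptions, not the authors' released implementation (see the code link below).

```python
# Minimal PyTorch sketch of anchoring as described in the abstract.
# Everything here (names, hyperparameters, loss choices) is an
# illustrative assumption, not the authors' released code.
import copy

import torch
import torch.nn.functional as F


def anchoring_step(model, optimizer, batch, memory, anchors, lam=0.1, lr=0.03):
    """Learn the current task while keeping predictions on past-task
    anchor points (approximately) invariant via a one-step meta-objective."""
    x, y = batch          # minibatch from the current task
    xm, ym = memory       # samples replayed from the episodic memory
    xa = anchors          # anchor inputs of past tasks

    # Tentative one-step update on current data plus replayed memory.
    tentative = copy.deepcopy(model)
    loss = F.cross_entropy(tentative(torch.cat([x, xm])), torch.cat([y, ym]))
    grads = torch.autograd.grad(loss, list(tentative.parameters()))
    with torch.no_grad():
        for p, g in zip(tentative.parameters(), grads):
            p -= lr * g

    # Final update: experience-replay loss plus a penalty on how much the
    # tentative update would change the predictions at the anchors.
    optimizer.zero_grad()
    er_loss = F.cross_entropy(model(torch.cat([x, xm])), torch.cat([y, ym]))
    drift = (model(xa) - tentative(xa).detach()).pow(2).mean()
    (er_loss + lam * drift).backward()
    optimizer.step()


def learn_anchors(model, memory, xa, ya, steps=100, a_lr=0.1, mem_lr=0.03):
    """After finishing a task, learn its anchors: find inputs whose loss
    grows the most (i.e., that are forgotten) once the model is fine-tuned,
    in hindsight, on the episodic memory of past tasks."""
    xm, ym = memory

    # Simulate forgetting: fine-tune a copy of the model on the memory.
    finetuned = copy.deepcopy(model)
    opt = torch.optim.SGD(finetuned.parameters(), lr=mem_lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(finetuned(xm), ym).backward()
        opt.step()

    # Gradient ascent on the forgetting measure: loss after hindsight
    # fine-tuning minus loss before, evaluated at the candidate anchors.
    xa = xa.clone().requires_grad_(True)
    for _ in range(steps):
        forgetting = (F.cross_entropy(finetuned(xa), ya)
                      - F.cross_entropy(model(xa), ya))
        grad, = torch.autograd.grad(forgetting, xa)
        with torch.no_grad():
            xa += a_lr * grad  # ascend: maximize forgetting
    return xa.detach()
```

A plausible usage pattern under these assumptions would call `learn_anchors` at the end of each task, append the result to the anchor set, and call `anchoring_step` on every minibatch of the subsequent tasks.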
Code: https://bit.ly/2mw8bsE
Community Implementations: 2 code implementations (https://www.catalyzex.com/paper/arxiv:2002.08165/code)