Activity Regularization for Continual Learning

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission
Abstract: While deep neural networks have achieved remarkable successes, they suffer from the well-known catastrophic forgetting issue when switching from existing tasks to tackle a new one. In this paper, we study continual learning with deep neural networks that learn from tasks arriving sequentially. We first propose an approximated multi-task learning framework that unifies a family of popular regularization-based continual learning methods. We then analyze the weaknesses of existing approaches and propose a novel regularization method named “Activity Regularization” (AR), which alleviates forgetting while preserving the model’s plasticity to acquire new knowledge. Extensive experiments show that our method outperforms state-of-the-art methods and effectively overcomes catastrophic forgetting.
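The abstract does not spell out the exact form of the AR penalty, but the name suggests regularizing the network's activations rather than its weights. Below is a minimal illustrative sketch, assuming the regularizer penalizes drift of hidden activations away from those of a frozen copy of the previous-task model; the model class `MLP`, its `features()` method, the helper `ar_loss`, and the weight `lam` are all hypothetical names introduced here for illustration, not the paper's implementation.

```python
# Hedged sketch of an activity-style regularizer for continual learning.
# Assumption: forgetting is reduced by keeping hidden activations on new-task
# data close to those produced by a frozen snapshot of the previous-task model.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class MLP(nn.Module):
    """Small network exposing its hidden activations (hypothetical example)."""

    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def features(self, x):
        # The hidden activations the regularizer tries to keep stable.
        return F.relu(self.fc1(x))

    def forward(self, x):
        return self.fc2(self.features(x))


def ar_loss(model, old_model, x, y, lam=1.0):
    """New-task loss plus a penalty on activation drift (illustrative AR form)."""
    task_loss = F.cross_entropy(model(x), y)
    with torch.no_grad():  # the previous-task model stays frozen
        old_acts = old_model.features(x)
    reg = F.mse_loss(model.features(x), old_acts)
    return task_loss + lam * reg


# Usage sketch: snapshot the model after finishing a task, then train on the
# next task with the combined objective.
model = MLP()
old_model = copy.deepcopy(model).eval()
for p in old_model.parameters():
    p.requires_grad_(False)

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = ar_loss(model, old_model, x, y, lam=1.0)
loss.backward()
```

Regularizing activations instead of weights leaves the parameters free to move as long as the function computed on observed inputs stays stable, which is one plausible reading of the abstract's claim that AR alleviates forgetting while keeping plasticity.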
Keywords: continual learning, regularization
TL;DR: This paper develops a novel regularization method for continual learning
Data: [Permuted MNIST](https://paperswithcode.com/dataset/permuted-mnist)