Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning

21 May 2021, 20:46 (edited 20 Jan 2022), NeurIPS 2021 Poster
  • Keywords: continual learning, weight loss landscape, dynamic gradient projection memory, sharpness flattening
  • TL;DR: This paper proposes a method, Flattening Sharpness for Dynamic Gradient Projection Memory, to address the 'sensitivity-stability' dilemma for continual learning.
  • Abstract: Backpropagation networks are notably susceptible to catastrophic forgetting, where networks tend to forget previously learned skills upon learning new ones. To address this 'sensitivity-stability' dilemma, most previous efforts have been devoted to minimizing the empirical risk with different parameter regularization terms and episodic memory, but have rarely explored the use of the weight loss landscape. In this paper, we investigate the relationship between the weight loss landscape and sensitivity-stability in the continual learning scenario and, based on this, propose a novel method, Flattening Sharpness for Dynamic Gradient Projection Memory (FS-DGPM). In particular, we introduce a soft weight to represent the importance of each basis representing past tasks in GPM, which can be adaptively learned during the learning process, so that less important bases can be dynamically released to improve the sensitivity of new skill learning. We further introduce Flattening Sharpness (FS) to reduce the generalization gap by explicitly regulating the flatness of the weight loss landscape of all seen tasks. As demonstrated empirically, our proposed method consistently outperforms baselines with a superior ability to learn new skills while effectively alleviating forgetting.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/danruod/FS-DGPM
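The soft-weighted projection described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the authors' released implementation; the function name `soft_project` and all variable names are assumptions.

```python
import numpy as np

def soft_project(grad, M, lam):
    """Project past-task directions out of a new-task gradient,
    scaled by learned soft importance weights (FS-DGPM-style sketch).

    grad: (d,)   gradient on the new task
    M:    (d, k) orthonormal basis of the past-task gradient subspace (GPM)
    lam:  (k,)   soft weights in [0, 1]; lam[i] near 1 protects basis i,
                 lam[i] near 0 releases it for new-skill learning
    """
    coeffs = M.T @ grad                # component of grad along each stored basis
    return grad - M @ (lam * coeffs)   # suppress protected directions

# Toy usage: with one fully protected basis (lam = 1), the gradient
# component along that basis is removed entirely, as in vanilla GPM.
d = 4
M = np.eye(d)[:, :1]                   # single basis vector e1
g = np.array([1.0, 2.0, 0.0, 0.0])
print(soft_project(g, M, np.array([1.0])))   # e1 component removed
print(soft_project(g, M, np.array([0.5])))   # e1 component only halved
```

With `lam` fixed at 1 for every basis this reduces to the hard projection of GPM; learning `lam` is what lets less important bases be partially released.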