Minimizing Change in Classifier Likelihood to Mitigate Catastrophic Forgetting

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission · Readers: Everyone
TL;DR: Mitigating catastrophic forgetting by minimizing change in classifier likelihood
Abstract: Continual learning is a longstanding goal of artificial intelligence, but it is often confounded by catastrophic forgetting, which prevents neural networks from learning tasks sequentially. Previous continual learning methods have demonstrated how to mitigate catastrophic forgetting and learn new tasks while retaining performance on previous ones. We analyze catastrophic forgetting from the perspective of change in classifier likelihood and propose a simple L1 minimization criterion that can be adapted to different use cases. We further investigate two ways to minimize forgetting as quantified by this criterion and propose strategies for finer control over forgetting. Finally, we evaluate our strategies on three datasets of varying difficulty and demonstrate improvements over previously known L2 strategies for mitigating catastrophic forgetting.
Keywords: catastrophic forgetting, continual learning, classification, regularization
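The abstract does not spell out the exact form of the criterion, so the following is a minimal sketch, assuming the common regularization setup where forgetting is penalized via importance-weighted drift of the parameters from their previous-task values; it contrasts an L1 drift penalty with the quadratic (L2) penalty used by EWC-style methods. The names `drift_penalty`, `old_params`, and `importance` are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

def drift_penalty(model: nn.Module,
                  old_params: dict[str, torch.Tensor],
                  importance: dict[str, torch.Tensor],
                  norm: str = "l1") -> torch.Tensor:
    """Importance-weighted penalty on parameter drift from a previous task.

    old_params / importance: per-parameter tensors saved after training the
    previous task (importance could be, e.g., a Fisher-information estimate,
    as in EWC-style L2 methods). Both names are hypothetical.
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        diff = p - old_params[name]
        if norm == "l1":
            # L1: linear cost on drift, tends to concentrate change
            # in a few parameters and leave the rest untouched
            penalty = penalty + (importance[name] * diff.abs()).sum()
        else:
            # L2: quadratic cost on drift, as in EWC-style regularizers
            penalty = penalty + (importance[name] * diff.pow(2)).sum()
    return penalty
```

Training on a new task would then add the penalty to the task loss, e.g. `loss = task_loss + lam * drift_penalty(model, old_params, importance, norm="l1")`. Relative to the quadratic penalty, the L1 form trades small, diffuse parameter changes for sparse ones, which is one plausible reading of the finer control over forgetting that the abstract mentions.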