Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting

Published: 28 Sept 2020 (modified: 10 Feb 2022), ICLR 2021 Poster
Keywords: Continual Learning, Lifelong Learning, Catastrophic Forgetting, XAI, Explainability
Abstract: The goal of continual learning (CL) is to learn a sequence of tasks without suffering from catastrophic forgetting. Previous work has shown that leveraging memory in the form of a replay buffer can reduce performance degradation on prior tasks. We hypothesize that forgetting can be further reduced when the model is encouraged to remember the \textit{evidence} for previously made decisions. As a first step towards exploring this hypothesis, we propose a simple novel training paradigm, called Remembering for the Right Reasons (RRR), that additionally stores visual model explanations for each example in the buffer and ensures the model has ``the right reasons'' for its predictions by encouraging its explanations to remain consistent with those used to make decisions at training time. Without this constraint, explanations drift and forgetting increases as conventional continual learning algorithms learn new tasks. We demonstrate how RRR can be easily added to any memory- or regularization-based approach, reducing forgetting and, more importantly, improving model explanations. We evaluated our approach in the standard and few-shot settings and observed a consistent improvement across various CL approaches, architectures, and explanation techniques, demonstrating a promising connection between explainability and continual learning. Our code is available at \url{}.
One-sentence Summary: Introducing a connection between continual learning and model explainability by regularizing saliency maps to avoid forgetting and showing its effect on memory and regularization-based continual learning approaches.
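To make the idea concrete, here is a minimal PyTorch sketch of the kind of explanation-consistency regularizer the abstract describes: saliency maps for buffered examples are stored at training time, and a penalty keeps the model's current saliency maps close to the stored ones. The function names (`saliency_map`, `rrr_loss`), the input-gradient choice of explanation, the L1 distance, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def saliency_map(model, x):
    """Input-gradient saliency: |d(max logit)/d(input)| per example.

    create_graph=True keeps the computation differentiable, so the
    saliency-matching penalty below can itself be backpropagated.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.max(dim=1).values.sum()
    (grad,) = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs()


def rrr_loss(model, x_buf, saved_maps, lam=1.0):
    """Hypothetical RRR-style penalty: L1 distance between the model's
    current saliency maps on buffered examples and the maps stored when
    those examples were first learned."""
    current = saliency_map(model, x_buf)
    return lam * F.l1_loss(current, saved_maps)


# Toy usage: right after "training", stored and current maps agree,
# so the penalty is (near) zero; it grows as the model drifts.
model = nn.Sequential(nn.Flatten(), nn.Linear(8, 3))
x_buf = torch.randn(4, 8)
saved_maps = saliency_map(model, x_buf).detach()  # snapshot into the buffer
loss = rrr_loss(model, x_buf, saved_maps)
```

In a training loop this penalty would simply be added to the task loss on replayed examples; any differentiable explanation method could replace the input-gradient saliency used here.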
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: SaynaEbrahimi/Remembering-for-the-Right-Reasons (GitHub)
Data: CIFAR-100, CUB-200-2011