Probing Representation Forgetting in Continual Learning

09 Oct 2021, 14:49 (modified: 01 Dec 2021, 17:42) · NeurIPS 2021 Workshop DistShift Poster
Keywords: Continual Learning, Linear Probes, Catastrophic Forgetting
TL;DR: We employ linear probes to study representation forgetting in various continual learning settings and observe that naive finetuning approaches can yield representations whose usefulness does not severely degrade.
Abstract: Continual Learning methods typically focus on tackling the phenomenon of catastrophic forgetting in the context of neural networks. Catastrophic forgetting is associated with an abrupt loss of knowledge previously learned by a model. In supervised learning problems this forgetting is typically measured or observed by evaluating the decrease in task performance. However, a model’s representations can change without losing knowledge. In this work we consider the concept of representation forgetting, which is measured by the difference in performance of an optimal linear classifier trained on the model's representations before and after a new task is introduced. Using this tool we revisit a number of standard continual learning benchmarks and observe that, through this lens, model representations trained without any special control for forgetting often experience minimal representation forgetting. Furthermore, we find that many approaches to continual learning that aim to resolve the catastrophic forgetting problem do not improve upon naive finetuning in terms of the usefulness of the representation.
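The abstract's notion of representation forgetting (the drop in accuracy of a linear probe fit on frozen features before vs. after a new task) can be illustrated with a minimal sketch. This is not the paper's implementation: the "encoders" here are stand-in random projections of synthetic data, and the probe is a simple closed-form ridge-regression classifier rather than an optimized linear layer.

```python
import numpy as np

def probe_accuracy(features, labels, seed=0):
    """Fit a linear (ridge-regression) probe on frozen features and
    return held-out accuracy for binary labels in {0, 1}."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    idx = rng.permutation(n)
    split = int(0.7 * n)
    tr, te = idx[:split], idx[split:]
    X = np.hstack([features, np.ones((n, 1))])  # add a bias column
    y = 2.0 * labels - 1.0                       # regress on {-1, +1} targets
    A = X[tr].T @ X[tr] + 1e-3 * np.eye(X.shape[1])
    w = np.linalg.solve(A, X[tr].T @ y[tr])      # closed-form ridge solution
    preds = (X[te] @ w > 0).astype(int)
    return (preds == labels[te]).mean()

rng = np.random.default_rng(0)
n, d = 600, 32
# Synthetic "old task" data: two well-separated Gaussian classes.
labels = rng.integers(0, 2, size=n)
inputs = rng.normal(size=(n, d)) + 2.0 * labels[:, None]

# Stand-in encoders: the same random projection with a small perturbation,
# mimicking a representation that drifted after finetuning on a new task.
W_before = rng.normal(size=(d, 16))
W_after = W_before + 0.1 * rng.normal(size=(d, 16))

acc_before = probe_accuracy(inputs @ W_before, labels)
acc_after = probe_accuracy(inputs @ W_after, labels)
representation_forgetting = acc_before - acc_after
print(f"probe acc before: {acc_before:.3f}, after: {acc_after:.3f}")
print(f"representation forgetting: {representation_forgetting:+.3f}")
```

A small drift in the representation leaves the probe accuracy nearly unchanged, which mirrors the paper's observation that representations can change substantially while remaining useful to an optimal linear classifier.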