TL;DR: A behavioral study of catastrophic forgetting, revealing a link between how quickly an example is learned and how susceptible it is to being forgotten once the model is trained on new data.
Abstract: Catastrophic forgetting -- the tendency of neural networks to forget previously learned data when learning new information -- remains a central challenge in continual learning. In this work, we adopt a behavioral approach, observing a connection between learning speed and forgetting: examples learned more quickly are less prone to forgetting. Focusing on replay-based continual learning, we show that the composition of the replay buffer -- specifically, whether it contains quickly or slowly learned examples -- has a significant effect on forgetting. Motivated by this insight, we introduce Speed-Based Sampling (SBS), a simple yet general strategy that selects replay examples based on their learning speed. SBS integrates easily into existing buffer-based methods and improves performance across a wide range of competitive continual learning benchmarks, advancing state-of-the-art results. Our findings underscore the value of accounting for forgetting dynamics when designing continual learning algorithms.
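The abstract describes SBS only at a high level, so the sketch below illustrates one plausible reading: learning speed is measured as the mean per-example accuracy across training epochs, and the buffer preferentially samples slowly learned (at-risk) examples, as the lay summary suggests. This is a minimal sketch under those assumptions, not the paper's implementation; the names SpeedTracker, speed_based_sample, and favor_slow are hypothetical.

```python
import numpy as np

class SpeedTracker:
    """Tracks per-example learning speed as the running mean of
    correct-prediction indicators across training epochs. Examples
    classified correctly early and often receive a high score."""

    def __init__(self, num_examples):
        self.correct_sums = np.zeros(num_examples)
        self.epochs_seen = 0

    def update(self, correct_mask):
        # correct_mask: boolean array with one entry per training example,
        # True if the model classified that example correctly this epoch.
        self.correct_sums += correct_mask.astype(float)
        self.epochs_seen += 1

    def speeds(self):
        # Mean accuracy over epochs so far -- a simple proxy for how
        # quickly each example was learned.
        return self.correct_sums / max(self.epochs_seen, 1)


def speed_based_sample(speeds, buffer_size, favor_slow=True, rng=None):
    """Sample replay-buffer indices weighted by learning speed.

    With favor_slow=True, slowly learned (at-risk) examples are sampled
    more often; set favor_slow=False to favor quickly learned ones."""
    rng = rng or np.random.default_rng()
    eps = 1e-8  # keep every example selectable
    weights = (1.0 - speeds + eps) if favor_slow else (speeds + eps)
    probs = weights / weights.sum()
    return rng.choice(len(speeds), size=buffer_size, replace=False, p=probs)
```

In a task-incremental loop, one would call tracker.update after each epoch on the current task, then repopulate the buffer with speed_based_sample(tracker.speeds(), buffer_size) before moving to the next task.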
Lay Summary: When neural networks are taught to perform a series of tasks, they often forget how to do the earlier ones after learning new ones. This problem, called catastrophic forgetting, makes it hard to build AI systems that learn continuously, like humans do. In this study, we ask: Which examples are more likely to be forgotten? We find that examples the network learns quickly -- typically simpler ones -- are remembered better, while harder, slower-to-learn examples are more likely to be forgotten. Using this insight, we develop a method that identifies which examples are at risk of being forgotten and helps the network focus more on them during training. Our findings deepen the understanding of catastrophic forgetting and offer a step toward building AI that can learn over time without losing past knowledge.
Primary Area: General Machine Learning->Transfer, Multitask and Meta-learning
Keywords: continual learning, catastrophic forgetting, replay buffer, simplicity bias
Submission Number: 10377