Mitigating Forgetting in Continual Learning with Selective Gradient Projection

20 Sept 2025 (modified: 03 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Continual Learning, Catastrophic Forgetting, Stability–Plasticity Trade-off, Gradient-based Optimization, Orthogonal Gradient Descent, Gradient Projection Methods, Threshold-based Optimization, Task Ordering
TL;DR: A memory-efficient, tunable optimizer for mitigating forgetting in continual learning.
Abstract: As neural networks are increasingly deployed in dynamic environments, they face catastrophic forgetting: the tendency to overwrite previously learned knowledge when adapting to new tasks, which severely degrades performance on earlier tasks. We propose Selective Forgetting-Aware Optimization (SFAO), a dynamic method that regulates gradient directions via cosine similarity and per-layer gating, enabling controlled forgetting while balancing plasticity and stability. SFAO selectively projects, accepts, or discards updates via a tunable thresholding mechanism, made efficient with a Monte Carlo approximation. Experiments on standard continual learning benchmarks show that SFAO achieves competitive accuracy at a markedly lower memory cost (a 90\% reduction) and reduces forgetting on MNIST-based benchmarks, making it suitable for resource-constrained scenarios.
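The abstract describes a cosine-similarity-gated update rule with three outcomes per layer: accept, orthogonally project, or discard. Below is a minimal NumPy sketch of what such a gate could look like. The thresholds `tau_hi`/`tau_lo`, the reference gradient `r`, and all function names are illustrative assumptions, not the paper's implementation; the reference direction itself could be a Monte Carlo estimate over sampled past-task data, as the abstract suggests.

```python
import numpy as np

def cosine(u, v, eps=1e-12):
    """Cosine similarity between two flattened gradient vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def gated_update(g, r, tau_hi=0.0, tau_lo=-0.5):
    """Per-layer gate: accept, orthogonally project, or discard an update.

    g      -- current-task gradient for one layer (1-D array)
    r      -- reference gradient direction from previous tasks (1-D array),
              e.g. a Monte Carlo average over sampled past-task examples
    tau_hi -- above this cosine similarity the raw gradient is accepted
    tau_lo -- below this the update is discarded entirely
    Both thresholds are hypothetical; the paper's actual rule may differ.
    """
    c = cosine(g, r)
    if c >= tau_hi:                       # aligned with old tasks: keep as-is
        return g
    if c >= tau_lo:                       # mildly conflicting: remove the
        return g - (g @ r) / (r @ r) * r  # component along the old direction
    return np.zeros_like(g)               # strongly conflicting: skip update
```

Applied layer by layer, a gate of this form stores only a single reference vector per layer rather than a full projection basis, which is one plausible source of the memory savings the abstract reports.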
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 24164