Self-Normalized Resets for Plasticity in Continual Learning

Published: 28 Nov 2025, Last Modified: 30 Nov 2025, NeurIPS 2025 Workshop MLxOR, CC BY 4.0
Keywords: Continual Learning, Plasticity Loss, Lifelong Learning, Neuron Resets
Abstract: Plasticity loss is an increasingly important phenomenon: as a neural network is trained continually on a sequence of changing tasks, its ability to adapt to new tasks diminishes over time. We propose Self-Normalized Resets (SNR), an algorithm that resets a neuron's weights when evidence indicates its firing rate has collapsed. Across a battery of continual learning problems and network architectures, we demonstrate that SNR consistently outperforms competing algorithms. We establish the necessity of neuron resets for mitigating plasticity loss by analyzing the task of learning a single ReLU neuron with gradient descent: under an adversarial-target regime, an idealized SNR learns the target, while regularization-based schemes can fail to do so. SNR's reset threshold is motivated by a simple hypothesis test for detecting inactive neurons. Seen through the lens of this hypothesis test, competing reset proposals yield suboptimal error rates in detecting inactive neurons.
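The abstract describes SNR as resetting a neuron's weights once a simple hypothesis test indicates its firing rate has collapsed. The following is a minimal sketch of that idea, not the paper's exact method: the class name SNRLinear, the baseline rate p0, the significance level alpha, the per-batch silence-streak test, and the Kaiming reinitialization are all illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class SNRLinear(nn.Module):
    """Illustrative sketch (not the paper's exact algorithm) of a reset rule for a
    ReLU layer: reinitialize a neuron once it has been silent for so many
    consecutive batches that a simple hypothesis test rejects the hypothesis
    that it still fires at some baseline rate p0."""

    def __init__(self, in_features, out_features, p0=0.01, alpha=1e-3):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        # Reject H0 ("firing rate >= p0") once (1 - p0)**streak < alpha.
        self.reset_streak = int(math.log(alpha) / math.log(1.0 - p0))
        self.register_buffer("silent_steps", torch.zeros(out_features))

    def forward(self, x):
        h = torch.relu(self.fc(x))
        with torch.no_grad():
            fired = (h > 0).any(dim=0)  # did each neuron fire anywhere in the batch?
            self.silent_steps = torch.where(
                fired, torch.zeros_like(self.silent_steps), self.silent_steps + 1)
            dead = self.silent_steps >= self.reset_streak
            if dead.any():
                # Reinitialize the incoming weights and bias of collapsed neurons.
                fresh = torch.empty_like(self.fc.weight)
                nn.init.kaiming_uniform_(fresh, a=math.sqrt(5))
                self.fc.weight[dead] = fresh[dead]
                self.fc.bias[dead] = 0.0
                self.silent_steps[dead] = 0.0
        return h
```

With these illustrative defaults, a neuron is reset only after roughly log(alpha)/log(1 - p0) ≈ 690 consecutive silent batches, a streak that would be very unlikely if the neuron still fired at rate p0; the threshold and test used in the paper itself may differ.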
Submission Number: 107