Load Balancing Neurons: Controlling Firing Rates Improves Plasticity in Continual Learning

ICLR 2026 Conference Submission 19098 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Continual learning, Plasticity, Firing rate
Abstract: Neural networks in continual learning often lose plasticity: some neurons become inactive, while others fire almost constantly. This limits adaptation to shifting data and wastes capacity. Prior work mitigates this by periodically reinitializing low-utility units, but such resets can destroy previously learned features and do not proactively prevent low utility. We study a simple diagnostic measure: the firing rate of ReLU units, defined as the fraction of positive pre-activations. Low rates identify dead units, while very high rates indicate linearized, always-on units. Based on this view, we introduce a lightweight load-balancing mechanism that adjusts per-neuron thresholds to keep firing rates within a target range. Across Continual ImageNet and Class-incremental CIFAR-100, improvements in firing-rate distributions help explain differences in plasticity across approaches, including our load-balancing mechanism and well-known techniques, notably L2 regularization and non-affine normalization.
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 19098