Architectural Plasticity for Continual Learning

ICLR 2026 Conference Submission18047 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Reinforcement Learning, Plasticity Loss, Continual Learning, Regularization, Optimization
Abstract: Neural networks for continual reinforcement learning (CRL) often suffer from plasticity loss—a progressive decline in their ability to learn new tasks arising from increased churn and Neural Tangent Kernel (NTK) rank collapse. We propose InterpLayers, a drop-in architectural solution that blends a fixed, parameter-free reference pathway with a learnable projection pathway using input-dependent interpolation weights. Without requiring algorithmic adaptation, InterpLayers conserve gradient diversity and constrain output variability by integrating stable and adaptive computations. We provide theoretical guarantees for bounded churn and show that, under mild assumptions, InterpLayers prevent NTK rank collapse through a non-zero rank contribution from the interpolation weights. Across environments with distributional shifts, including permutation, windowing, and expansion, InterpLayer variants (conv-only, full) consistently mitigate performance degradation compared to parameter-matched baselines. Furthermore, lightweight modifications such as dropout improve performance, especially under gradual shifts. These results position InterpLayers as a simple, complementary solution for maintaining plasticity in CRL.
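To make the abstract's description concrete, the following PyTorch sketch illustrates one plausible reading of an InterpLayer: a fixed, parameter-free reference pathway and a trainable projection pathway are mixed by input-dependent, sigmoid-gated interpolation weights. The class name, the gating network, and the choice of a frozen random projection as the reference pathway are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InterpLayer(nn.Module):
    """Hypothetical sketch of an interpolation layer: blends a fixed reference
    pathway with a learnable projection via input-dependent weights."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # Learnable projection pathway (adaptive computation).
        self.proj = nn.Linear(in_features, out_features)
        # Fixed, parameter-free reference pathway. A frozen random projection
        # is assumed here; the paper's exact choice may differ.
        ref = torch.randn(out_features, in_features) / in_features ** 0.5
        self.register_buffer("ref", ref)
        # Gating network producing input-dependent interpolation weights.
        self.gate = nn.Linear(in_features, out_features)

    def forward(self, x):
        alpha = torch.sigmoid(self.gate(x))   # per-unit weights in (0, 1)
        fixed = F.linear(x, self.ref)         # stable, non-learned pathway
        learned = self.proj(x)                # adaptive, learned pathway
        return alpha * learned + (1.0 - alpha) * fixed


# Usage: drop-in replacement for a linear layer inside a policy or value network.
layer = InterpLayer(64, 128)
out = layer(torch.randn(32, 64))  # -> shape (32, 128)
```

Keeping the reference pathway free of trainable parameters is what would bound how far the layer's outputs can drift (churn), while the gated mix preserves a learnable component for adapting to new tasks.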
Primary Area: reinforcement learning
Submission Number: 18047