Keywords: loss of plasticity, continual learning, lifelong learning, continual reinforcement learning, activation functions
TL;DR: Activation design, guided by simple first-principles rules, yields drop-in choices that keep models plastic across task sequences and generalize better under distribution shift in both continual supervised learning and reinforcement learning.
Abstract: In independent, identically distributed (i.i.d.) training regimes, activation functions have been benchmarked extensively, and their differences often shrink once model size and optimization are tuned. In continual learning, however, the picture is different: beyond catastrophic forgetting, models can progressively lose the ability to adapt (loss of plasticity), and the role of the non-linearity in this failure mode remains underexplored. We show that activation choice is a primary, architecture-agnostic lever for mitigating plasticity loss. Building on a property-level analysis of negative-branch shape and saturation behavior, we introduce two drop-in nonlinearities, Smooth-Leaky and Randomized Smooth-Leaky, and evaluate them in two complementary settings: (i) supervised class-incremental benchmarks and (ii) reinforcement learning with non-stationary MuJoCo environments designed to induce controlled distribution and dynamics shifts. We also provide a simple stress protocol and diagnostics that link the shape of the activation to adaptation under change. The takeaway is straightforward: thoughtful activation design offers a lightweight, domain-general way to sustain plasticity in continual learning without extra capacity or task-specific tuning.
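The abstract names the two nonlinearities but does not define them, so the following is only an illustrative sketch: a minimal PyTorch implementation of what a smooth, non-saturating leaky activation and its randomized variant might look like, assuming the form f(x) = alpha * x + (1 - alpha) * softplus(x) with negative slope alpha. The functional form, the slope range, and the class names SmoothLeaky and RandomizedSmoothLeaky are assumptions made for illustration, not the paper's definitions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmoothLeaky(nn.Module):
    """Hypothetical smooth-leaky activation (form assumed, not from the paper).

    f(x) = alpha * x + (1 - alpha) * softplus(x)
    - as x -> -inf, f(x) -> alpha * x: a linear, non-saturating negative branch
    - as x -> +inf, f(x) -> x: an identity-like positive branch
    - f is smooth everywhere, unlike the kink in LeakyReLU
    """

    def __init__(self, alpha: float = 0.1):
        super().__init__()
        self.alpha = alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.alpha * x + (1.0 - self.alpha) * F.softplus(x)


class RandomizedSmoothLeaky(nn.Module):
    """Randomized variant (RReLU-style randomization, assumed here).

    During training, the negative slope alpha is drawn uniformly from
    [lo, hi] on each forward pass; at evaluation time the midpoint is used.
    """

    def __init__(self, lo: float = 0.05, hi: float = 0.25):
        super().__init__()
        self.lo, self.hi = lo, hi

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Draw a fresh scalar slope for this forward pass.
            alpha = float(torch.empty((), device=x.device).uniform_(self.lo, self.hi))
        else:
            alpha = 0.5 * (self.lo + self.hi)
        return alpha * x + (1.0 - alpha) * F.softplus(x)
```

Under these assumptions, both modules drop into any architecture in place of ReLU, e.g. nn.Sequential(nn.Linear(64, 128), SmoothLeaky(), nn.Linear(128, 10)), which is consistent with the "drop-in" framing in the abstract.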
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 6318