A SIMPLE BASELINE FOR STABLE AND PLASTIC NEURAL NETWORKS

Published: 12 Jun 2025 · Last Modified: 03 Aug 2025 · CoLLAs 2025 - Workshop Track · CC BY 4.0
Keywords: Continual Learning, Stability-Plasticity Dilemma, Computer Vision
TL;DR: A new activation function and a backpropagation adaptation that together provide a simple way of balancing plasticity and stability in a continual computer-vision setting.
Abstract: Continual learning in computer vision requires that models adapt to a continuous stream of tasks without forgetting prior knowledge, yet existing approaches often tip the balance heavily toward either plasticity or stability. We introduce RDBP, a simple, low-overhead baseline that unites two complementary mechanisms: ReLUDown, a lightweight activation modification that preserves feature sensitivity while preventing neuron dormancy, and Decreasing Backpropagation, a biologically inspired gradient-scheduling scheme that progressively shields early layers from catastrophic updates. Evaluated on the Continual ImageNet benchmark, RDBP matches or exceeds the plasticity and stability of state-of-the-art methods while reducing computational cost. RDBP thus provides both a practical solution for real-world continual learning and a clear benchmark against which future continual learning strategies can be measured.
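The abstract names the two mechanisms but this page does not give their exact formulations. The following is a speculative sketch only, based on the one-sentence descriptions above: `relu_down` guesses at a ReLU variant whose output ramps gently downward for strongly negative inputs (so those units keep receiving gradient and never go dormant), and `decreasing_bp_scales` guesses at a per-layer gradient-scaling schedule that shrinks updates toward early layers. The names, the dead-zone width `d`, the slope `s`, and the decay factor `gamma` are all illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def relu_down(x, d=1.0, s=0.1):
    """Illustrative guess at a ReLUDown-style activation (NOT the paper's
    exact form): identity for x >= 0, zero on [-d, 0), and a gentle
    downward ramp s*(x + d) below -d, so strongly negative pre-activations
    still produce a nonzero gradient instead of going dormant."""
    return np.where(x >= 0, x, np.where(x >= -d, 0.0, s * (x + d)))

def decreasing_bp_scales(num_layers, gamma=0.5):
    """Illustrative guess at a Decreasing Backpropagation schedule (NOT the
    paper's exact scheme): multiply each layer's gradient by a factor that
    decays geometrically toward the input, so early layers (index 0) are
    progressively shielded from large updates."""
    return [gamma ** (num_layers - 1 - layer) for layer in range(num_layers)]

# Example: positive inputs pass through, mildly negative inputs are zeroed,
# strongly negative inputs ramp downward instead of saturating at zero.
print(relu_down(np.array([2.0, -0.5, -2.0])))   # -> [ 2.   0.  -0.1]
# Earliest layer gets the smallest gradient multiplier.
print(decreasing_bp_scales(3))                  # -> [0.25, 0.5, 1.0]
```

In a training loop, the scales would be applied by multiplying each layer's parameter gradients by the corresponding factor before the optimizer step; how (or whether) the schedule changes across tasks is not specified on this page.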
Submission Number: 8