AltNet: Alternating Network Resets for Plasticity

19 Sept 2025 (modified: 01 Oct 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Plasticity, Stability, Reinforcement Learning
TL;DR: Alternating network resets maintain performance while restoring plasticity and stability.
Abstract: Deep learning methods have shown remarkable success in supervised learning when training on fixed datasets in stationary environments. However, when neural networks are trained sequentially on multiple tasks, their ability to learn progressively declines with each additional task, a phenomenon known as plasticity loss. Previous work has shown that periodically resetting a neural network's parameters, in whole or in part, often helps restore plasticity, but at the cost of a temporary drop in performance, which can be risky in real-world settings. We introduce AltNet, a reset-based approach that mitigates plasticity loss without degrading performance by leveraging alternating twin networks. The twin networks anchor performance during resets and prevent performance collapse by periodically switching roles: one network learns as it interacts with the environment, while the other learns off-policy from the active agent's interactions and a replay buffer. At fixed intervals, the active network is reset and the passive network, having learned from the agent's prior online and offline experiences, becomes the new active network. We demonstrate that AltNet improves plasticity and sample efficiency, enables fast adaptation, and improves safety by preventing performance drops. It outperforms baseline methods and various state-of-the-art reset-based techniques on challenging high-dimensional tasks from the DeepMind Control Suite.
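To make the role-switching mechanism concrete, below is a minimal Python sketch of the alternating-reset idea as described in the abstract. The class name `AltNetController`, the `swap_interval` parameter, and the toy MLP are illustrative assumptions, not the paper's actual implementation; training of the passive network from the replay buffer is only indicated in comments.

```python
import torch
import torch.nn as nn


class AltNetController:
    """Sketch of an alternating-reset controller (hypothetical API).

    Two twin networks share the agent's experience. The 'active' network
    selects actions in the environment; the 'passive' one trains
    off-policy on the same interactions via a replay buffer. Every
    `swap_interval` steps the roles swap and the previously active
    network is re-initialized, so the agent never acts with a freshly
    reset network.
    """

    def __init__(self, make_net, swap_interval=100_000):
        self.nets = [make_net(), make_net()]  # twin networks
        self.active = 0                       # index of the acting network
        self.swap_interval = swap_interval
        self.step = 0

    def act(self, obs):
        # Only the active network selects actions.
        with torch.no_grad():
            return self.nets[self.active](obs)

    def maybe_swap(self):
        # At fixed intervals: passive becomes active, old active is reset.
        # (Between swaps, both networks would be updated: the active one
        # online, the passive one off-policy from the replay buffer.)
        self.step += 1
        if self.step % self.swap_interval == 0:
            old_active = self.active
            self.active = 1 - self.active
            self._reset(self.nets[old_active])

    @staticmethod
    def _reset(net):
        # Re-initialize all parameters to restore plasticity.
        for module in net.modules():
            if hasattr(module, "reset_parameters"):
                module.reset_parameters()


# Usage with a toy MLP policy (observation/action sizes are placeholders).
make_net = lambda: nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
ctrl = AltNetController(make_net, swap_interval=1000)
action = ctrl.act(torch.zeros(1, 8))
ctrl.maybe_swap()
```

Under these assumptions, the swap-then-reset order is the key design choice: the network that takes over has already been trained on the agent's prior experience, which is what anchors performance through each reset.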
Primary Area: reinforcement learning
Submission Number: 21141