Keywords: Plasticity Loss, Continual Learning, Reinforcement Learning, Resetting
TL;DR: Alternating network resets preserve performance, plasticity, and stability.
Abstract: Modern deep learning systems struggle in continual learning settings due to plasticity loss—the gradual decline in the ability of a neural network learning system to change its output in response to new information. Although periodically resetting a neural network, in whole or in part, has been shown to restore plasticity, it causes performance to temporarily collapse, which can be dangerous in real-world settings. To address this, we propose a simple and general method: AltNet, a reset-based alternating-network approach. AltNet maintains two neural networks that periodically switch roles: one actively interacts with the environment, while the other learns off-policy from the active agent's interactions and a replay buffer. At fixed intervals, the active network is reset, and the passive network—having learned from recent experience—becomes the new active network. This alternating reset-and-role-switch strategy supports continual learning by preserving both plasticity and performance.
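The sketch below illustrates the alternating reset-and-role-switch loop described in the abstract, not the authors' implementation: the active network acts in the environment, the passive network trains off-policy from the shared replay buffer, and at a fixed interval the passive network takes over while the reset network becomes the new passive one. All names (`make_network`, `reset_interval`, the TD loss, the toy environment) and the training details are illustrative assumptions.

```python
import random
from collections import deque

import torch
import torch.nn as nn


def make_network(obs_dim: int = 4, n_actions: int = 2) -> nn.Module:
    # A freshly initialised network; calling this again acts as a "reset".
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))


def td_loss(net: nn.Module, batch, gamma: float = 0.99) -> torch.Tensor:
    # Simple one-step TD/Q-learning loss over a replay batch (assumed objective).
    obs = torch.stack([b[0] for b in batch])
    actions = torch.tensor([b[1] for b in batch])
    rewards = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    next_obs = torch.stack([b[3] for b in batch])
    q = net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * net(next_obs).max(dim=1).values
    return nn.functional.mse_loss(q, target)


def toy_env_step(obs: torch.Tensor, action: int):
    # Random-walk toy environment purely for illustration.
    next_obs = obs + 0.1 * torch.randn_like(obs)
    reward = float(action == 0)
    return next_obs, reward


def altnet_loop(env_step, n_steps: int = 10_000, reset_interval: int = 2_000):
    active, passive = make_network(), make_network()
    opt_active = torch.optim.Adam(active.parameters(), lr=1e-3)
    opt_passive = torch.optim.Adam(passive.parameters(), lr=1e-3)
    buffer: deque = deque(maxlen=50_000)

    obs = torch.zeros(4)  # placeholder initial observation
    for step in range(1, n_steps + 1):
        # The active network selects actions and interacts with the environment.
        with torch.no_grad():
            action = active(obs).argmax().item()
        next_obs, reward = env_step(obs, action)
        buffer.append((obs, action, reward, next_obs))
        obs = next_obs

        # Both networks train on replayed data; the passive network learns
        # purely off-policy from the active agent's interactions.
        if len(buffer) >= 32:
            batch = random.sample(list(buffer), 32)
            for net, opt in ((active, opt_active), (passive, opt_passive)):
                loss = td_loss(net, batch)
                opt.zero_grad()
                loss.backward()
                opt.step()

        # At a fixed interval, the passive network becomes the new active
        # network, and the old active network is reset to restore plasticity.
        if step % reset_interval == 0:
            active, passive = passive, make_network()
            opt_active = opt_passive
            opt_passive = torch.optim.Adam(passive.parameters(), lr=1e-3)

    return active


if __name__ == "__main__":
    altnet_loop(toy_env_step, n_steps=5_000, reset_interval=1_000)
```

Because the network that takes over has already trained on recent experience, the role switch avoids the temporary performance collapse that a naive reset of the acting network would cause.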
Submission Number: 12