Keywords: lifelong reinforcement learning, parameter-free optimization, continual reinforcement learning, loss of plasticity
TL;DR: We propose a parameter-free optimizer for lifelong reinforcement learning that mitigates loss of plasticity and rapidly adapts to new distribution shifts.
Abstract: A key challenge in lifelong reinforcement learning (RL) is the loss of plasticity, where previous learning progress hinders an agent's adaptation to new tasks. While regularization and resetting can help, they require precise hyperparameter selection at the outset and environment-dependent adjustments. Building on the principled theory of online convex optimization, we present a parameter-free optimizer for lifelong RL, called TRAC, which requires no tuning or prior knowledge about the distribution shifts. Extensive experiments on Procgen, Atari, and Gym Control environments show that TRAC works surprisingly well—mitigating loss of plasticity and rapidly adapting to challenging distribution shifts—despite the underlying optimization problem being nonconvex and nonstationary.
Supplementary Material: zip
Primary Area: Reinforcement learning
Submission Number: 7802