Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement Learning

Published: 07 Jun 2024 · Last Modified: 09 Aug 2024 · RLC 2024 ICBINB Poster · License: CC BY 4.0
Keywords: lifelong reinforcement learning, parameter-free optimization, continual reinforcement learning, loss of plasticity
TL;DR: We propose a parameter-free optimizer for lifelong reinforcement learning that mitigates loss of plasticity and rapidly adapts to new distribution shifts.
Abstract: A key challenge in lifelong reinforcement learning (RL) is the loss of plasticity, where previous learning progress hinders an agent's adaptation to new tasks. While regularization and resetting can help, they require precise hyperparameter selection at the outset and environment-dependent adjustments. Building on the principled theory of online convex optimization, we present a parameter-free optimizer for lifelong RL, called TRAC, which requires no tuning or prior knowledge about the distribution shifts. Experiments on Procgen and Gym Control environments show that TRAC works surprisingly well—mitigating loss of plasticity and rapidly adapting to challenging distribution shifts—despite the underlying optimization problem being nonconvex and nonstationary.
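To make the core idea concrete, here is a minimal, illustrative sketch of one way a parameter-free scheme of this flavor can work. It is not TRAC's actual algorithm (which builds on parameter-free online convex optimization theory); the class, its fields, and the tuning rule below are hypothetical simplifications. The sketch anchors the iterate at the initial parameters and learns a data-driven scale on the accumulated base-optimizer updates, so no learning rate for the scale needs tuning, and shrinking the scale toward zero snaps the iterate back toward its initialization, which is how such schemes can recover plasticity after a distribution shift.

```python
import numpy as np

class ParameterFreeScaleWrapper:
    """Toy sketch of a parameter-free scale tuner (hypothetical, not TRAC itself)."""

    def __init__(self, theta0, base_lr=0.01):
        self.theta0 = np.asarray(theta0, dtype=float)
        self.delta = np.zeros_like(self.theta0)  # accumulated base-optimizer updates
        self.base_lr = base_lr
        self.s = 0.0      # learned scale on the deviation from theta0
        self.h_sum = 0.0  # running sum of scalar meta-gradients
        self.h_sq = 1e-8  # running sum of squared meta-gradients

    def params(self):
        # Effective iterate: initialization plus scaled deviation.
        # With s -> 0 the iterate returns to theta0.
        return self.theta0 + self.s * self.delta

    def step(self, grad):
        grad = np.asarray(grad, dtype=float)
        # Scalar meta-gradient: sensitivity of the loss to the scale s.
        h = float(np.dot(grad, self.delta))
        self.h_sum += h
        self.h_sq += h * h
        # Normalized-gradient update for s, clipped at zero; no step size to tune.
        self.s = max(0.0, -self.h_sum / np.sqrt(self.h_sq))
        # The base optimizer (plain SGD here) accumulates its own updates.
        self.delta -= self.base_lr * grad

# Usage: minimize ||theta - target||^2 without hand-tuning the scale.
target = np.array([1.0, 1.0])
opt = ParameterFreeScaleWrapper(theta0=np.zeros(2))
losses = []
for _ in range(300):
    theta = opt.params()
    losses.append(float(np.sum((theta - target) ** 2)))
    opt.step(2.0 * (theta - target))  # gradient of the quadratic loss
```

The clipping at zero is the noteworthy design choice: under a sharp shift, the tuner can drive the scale to zero and restart learning from the initialization rather than fighting stale accumulated progress.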
Submission Number: 10
