Hierarchical Subspaces of Policies for Continual Offline Reinforcement Learning

Published: 06 Mar 2025, Last Modified: 06 Mar 2025 · MCDC @ ICLR 2025 · CC BY 4.0
Keywords: Continual Learning, Continual Offline Reinforcement Learning, Continual Reinforcement Learning, Hierarchical Policies, Mazes, Navigation, Offline Learning, Offline Reinforcement Learning, Reinforcement Learning, Reinforcement Learning for Navigation
TL;DR: Solving continual learning for navigation by growing a hierarchy of policy subspaces of neural networks
Abstract: We consider a continual reinforcement learning setup in which a learning agent must continuously adapt to new tasks while retaining previously acquired skills, with a focus on avoiding the forgetting of previously gathered knowledge and on ensuring scalability as the number of tasks grows. Such issues prevail in autonomous robotics and video game simulations, notably for navigation tasks prone to topological or kinematic changes. To address them, we introduce HiSPO, a novel hierarchical framework designed specifically for continual learning in navigation settings from offline data. Our method leverages distinct policy subspaces of neural networks to enable flexible and efficient adaptation to new tasks while preserving existing knowledge. Through a careful experimental study, we demonstrate the effectiveness of our method in both classical MuJoCo maze environments and complex video-game-like navigation simulations, showing competitive performance and satisfactory adaptability on standard continual learning metrics, in particular memory usage and efficiency.
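For readers unfamiliar with the "subspace of policies" construction the abstract builds on, the sketch below illustrates the general idea: a policy whose weights are a convex combination of several anchor networks, so that a new task can be absorbed by adjusting the mixture (or adding an anchor) without overwriting the anchors learned for earlier tasks. This is a minimal, hypothetical sketch of the generic technique, not HiSPO's actual implementation; all names (`SubspacePolicy`, `SubspaceLinear`, `n_anchors`, `mix_logits`) are illustrative assumptions.

```python
# Minimal sketch of a policy subspace: the effective weights are a convex
# combination of anchor weight sets, mixed by per-task coefficients.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubspaceLinear(nn.Module):
    """Linear layer whose weights interpolate between anchor weight sets."""

    def __init__(self, in_dim: int, out_dim: int, n_anchors: int = 2):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(n_anchors, out_dim, in_dim) * 0.05)
        self.biases = nn.Parameter(torch.zeros(n_anchors, out_dim))

    def forward(self, x: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
        # alpha: (n_anchors,) convex-combination coefficients (sums to 1).
        w = torch.einsum("a,aoi->oi", alpha, self.weights)
        b = torch.einsum("a,ao->o", alpha, self.biases)
        return F.linear(x, w, b)


class SubspacePolicy(nn.Module):
    """Tiny MLP policy living in the subspace spanned by its anchors."""

    def __init__(self, obs_dim: int, act_dim: int, n_anchors: int = 2):
        super().__init__()
        self.l1 = SubspaceLinear(obs_dim, 64, n_anchors)
        self.l2 = SubspaceLinear(64, act_dim, n_anchors)
        # Per-task mixture logits; the softmax keeps the combination convex.
        self.mix_logits = nn.Parameter(torch.zeros(n_anchors))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        alpha = torch.softmax(self.mix_logits, dim=0)
        h = torch.relu(self.l1(obs, alpha))
        return torch.tanh(self.l2(h, alpha))  # continuous action in [-1, 1]


policy = SubspacePolicy(obs_dim=8, act_dim=2)
action = policy(torch.randn(1, 8))
print(action.shape)  # torch.Size([1, 2])
```

Under this construction, continual learning can freeze the anchors of past tasks and, for each new task, train only the mixture coefficients (and optionally a newly added anchor), which keeps memory growth modest compared to storing a full network per task.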
Submission Number: 13