Keywords: hierarchical reinforcement learning, long horizon, scale
TL;DR: We develop a highly scalable hierarchical RL method that shows promising scaling trends on NetHack when trained on 30B samples, and also outperforms flat baselines on MiniHack and MuJoCo.
Abstract: Hierarchical reinforcement learning (RL) has the potential to enable effective decision-making over long timescales. Existing approaches, while promising, have yet to realize the benefits of large-scale training. In this work, we identify and solve several key challenges in scaling online hierarchical RL to high-throughput environments. We propose Scalable Option Learning (SOL), a highly scalable hierarchical RL algorithm that achieves ~35x higher throughput than existing hierarchical methods. To demonstrate SOL's performance and scalability, we train hierarchical agents on 30 billion frames of experience in the complex game of NetHack, significantly surpassing flat agents and demonstrating positive scaling trends. We also validate SOL on MiniHack and MuJoCo environments, showcasing its general applicability. Our code will be open sourced.
Primary Area: reinforcement learning
Submission Number: 12651