Inter-Level Cooperation in Hierarchical Reinforcement Learning

11 May 2021 (modified: 11 May 2021) · OpenReview Archive Direct Upload · Readers: Everyone
Abstract: Hierarchical models for deep reinforcement learning (RL) have emerged as powerful methods for generating meaningful control strategies in difficult long-time-horizon tasks. Training of these hierarchical models, however, continues to suffer from instabilities that limit their applicability. In this paper, we address instabilities that arise from the concurrent optimization of goal-assignment and goal-achievement policies. Drawing connections between this concurrent optimization scheme and communication and cooperation in multi-agent RL, we redefine the standard optimization procedure to explicitly promote cooperation between these disparate tasks. Our method is demonstrated to achieve results superior to existing techniques on a set of difficult long-time-horizon tasks, and serves to expand the scope of tasks solvable by hierarchical reinforcement learning. Videos of the results are available at: https://sites.google.com/berkeley.edu/cooperative-hrl.