Learning Functionally Decomposed Hierarchies for Continuous Navigation Tasks

Sep 25, 2019 (Blind Submission)
  • Abstract: Solving long-horizon sequential decision-making tasks in environments with sparse rewards is a longstanding problem in reinforcement learning (RL) research. Hierarchical Reinforcement Learning (HRL) holds the promise of enhancing the capabilities of RL agents by operating at different levels of temporal abstraction. Despite the success of recent works in dealing with inherent nonstationarity and sample complexity, it remains difficult to generalize to unseen environments and to transfer different layers of the policy to other agents. In this paper, we propose a novel HRL architecture, Hierarchical Decompositional Reinforcement Learning (HiDe), which decomposes the hierarchical layers into independent subtasks while still allowing all layers to be trained jointly in an end-to-end manner. The main insight is to combine a control policy on the lower level with an image-based planning policy on the higher level. We evaluate our method on various complex continuous control tasks for navigation, demonstrating that generalization across environments and transfer of higher-level policies can be achieved. Videos are available at https://sites.google.com/view/hide-rl
  • Code: https://sites.google.com/view/hide-rl
  • Keywords: Hierarchical reinforcement learning, planning, navigation
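The core idea the abstract describes, a high-level planner that proposes subgoals from a map-like observation and a low-level controller that executes them, can be illustrated with a toy sketch. This is a hypothetical, simplified stand-in, not the paper's actual HiDe algorithm: the "planner" is a greedy step on an occupancy grid (standing in for the learned image-based planning layer), and the "controller" simply jumps to the proposed subgoal; all function names and logic here are illustrative assumptions.

```python
import numpy as np

def high_level_plan(occupancy, position, goal):
    """Toy planner: pick the free neighboring cell closest to the goal.

    Stands in for the learned image-based planning policy; here it is
    just greedy Manhattan-distance descent over an occupancy grid.
    """
    h, w = occupancy.shape
    best, best_d = position, np.inf
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        y, x = position[0] + dy, position[1] + dx
        if 0 <= y < h and 0 <= w > x >= 0 and occupancy[y, x] == 0:
            d = abs(y - goal[0]) + abs(x - goal[1])
            if d < best_d:
                best, best_d = (y, x), d
    return best  # subgoal handed down to the low-level controller

def low_level_control(position, subgoal):
    """Toy controller: move directly to the subgoal (one grid step)."""
    return subgoal

def navigate(occupancy, start, goal, max_steps=50):
    """Run the two-level loop: plan a subgoal, then execute it."""
    pos = start
    for _ in range(max_steps):
        if pos == goal:
            break
        pos = low_level_control(pos, high_level_plan(occupancy, pos, goal))
    return pos

grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1  # a wall the planner must route around
print(navigate(grid, (0, 0), (4, 4)))  # → (4, 4)
```

The separation of concerns is the point of the sketch: the planner only reasons over the map, and the controller only reasons about reaching a nearby subgoal, which is what makes swapping either layer across environments or agents plausible.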