META LEARNING SHARED HIERARCHIES

15 Feb 2018 (modified: 15 Sept 2024) · ICLR 2018 Conference Blind Submission
Abstract: We develop a metalearning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives: policies that are executed for large numbers of timesteps. Specifically, a set of primitives is shared across a distribution of tasks and switched between by task-specific policies. We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks. We then present an algorithm that solves this problem end-to-end with any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting the task-specific policies. We successfully discover meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes. We also demonstrate the transferability of primitives for solving long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy.
TL;DR: learn hierarchical sub-policies through end-to-end training over a distribution of tasks
Keywords: hierarchical reinforcement learning, meta-learning
Code: [openai/mlsh](https://github.com/openai/mlsh) + [2 community implementations](https://paperswithcode.com/paper/?openreview=SyX0IeWAW)
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/meta-learning-shared-hierarchies/code)
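
To make the training scheme in the abstract concrete, here is a minimal sketch of an MLSH-style loop: shared sub-policies persist across tasks, a task-specific master policy picks a sub-policy every N timesteps, and each newly sampled task resets the master and trains it alone during a warmup phase before joint training. The toy one-dimensional environment, the tabular REINFORCE updates, and all names (`sub_logits`, `run_episode`, the `WARMUP`/`JOINT` schedule) are illustrative assumptions, not the paper's experimental setup or the openai/mlsh implementation.

```python
# Hypothetical MLSH-style sketch: a toy 1D chain where the goal direction
# varies per task. Not the paper's setup; update rules are illustrative.
import numpy as np

rng = np.random.default_rng(0)

K, N = 2, 10            # number of shared sub-policies; steps per master choice
H = 50                  # episode horizon
WARMUP, JOINT = 30, 60  # episodes of master-only training, then joint training
LR = 0.5

# Shared sub-policy parameters: logits over primitive actions {-1, +1}.
# These persist across sampled tasks; master logits are reset per task.
sub_logits = np.zeros((K, 2))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def run_episode(master_logits, goal, train_subs):
    """One episode: the master selects a sub-policy every N timesteps."""
    pos, ret, k = 0, 0.0, 0
    master_grads, sub_grads = [], []
    for t in range(H):
        if t % N == 0:  # master decision point
            mp = softmax(master_logits)
            k = rng.choice(K, p=mp)
            master_grads.append(np.eye(K)[k] - mp)  # REINFORCE score function
        p = softmax(sub_logits[k])
        a = rng.choice(2, p=p)
        pos += 1 if a == 1 else -1
        ret += 1.0 if pos == goal else 0.0
        if train_subs:
            sub_grads.append((k, np.eye(2)[a] - p))
    return ret, master_grads, sub_grads

for meta_iter in range(20):
    goal = rng.choice([-5, 5])   # sample a new task from the distribution
    master_logits = np.zeros(K)  # reset the task-specific master policy
    for ep in range(WARMUP + JOINT):
        train_subs = ep >= WARMUP  # warmup phase: adapt the master only
        ret, mg, sg = run_episode(master_logits, goal, train_subs)
        for g in mg:               # crude policy gradient (whole-episode
            master_logits += LR * ret * g  # return, no baseline)
        for k, g in sg:            # joint phase: also update the shared subs
            sub_logits[k] += LR * ret * g
    print(f"task goal={goal:+d}, last-episode return={ret:.1f}")
```

The MLSH ingredient reflected here is the split between what persists and what resets: `sub_logits` carries over between sampled tasks so the primitives are shaped by the whole task distribution, while `master_logits` is re-initialized on each new task and given a warmup period, which is what drives the sub-policies toward reusable behaviors.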
