Hierarchical and Interpretable Skill Acquisition in Multi-task Reinforcement Learning

15 Feb 2018 (modified: 27 Feb 2018) · ICLR 2018 Conference Blind Submission
Abstract: Learning policies for complex tasks that require multiple different skills is a major challenge in reinforcement learning (RL). It is also a prerequisite for deploying RL in real-world scenarios. This paper proposes a novel framework for efficient multi-task reinforcement learning. Our framework trains agents to employ hierarchical policies that decide when to use a previously learned policy and when to learn a new skill. This enables agents to continually acquire new skills during different stages of training. Each learned task corresponds to a human-language description. Because agents can access previously learned skills only through these descriptions, they can always provide a human-interpretable description of their choices. To help the agent learn the complex temporal dependencies necessary for the hierarchical policy, we provide it with a stochastic temporal grammar that modulates when to rely on previously learned skills and when to execute new skills. We validate our approach on Minecraft games designed to explicitly test the ability to reuse previously learned skills while simultaneously learning new skills.
TL;DR: A novel hierarchical policy network that can reuse previously learned skills alongside and as subcomponents of new skills by discovering the underlying relations between skills.
Keywords: Hierarchical Policy, Interpretable Policy, Deep Reinforcement Learning, Multi-task Reinforcement Learning, Skill Acquisition, Language Grounding
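To make the abstract's core mechanism concrete, here is a minimal sketch of the hierarchical switch idea: at each macro-step the agent either invokes a previously learned sub-policy, looked up by its human-language description, or falls back to the base policy that is learning the new skill. All names (`HierarchicalPolicy`, `reuse_prior`, the stub policies) are illustrative assumptions, not the paper's actual architecture or API; in the paper the switch and the reuse prior are learned components (the latter shaped by a stochastic temporal grammar), not a fixed lookup.

```python
import random

class HierarchicalPolicy:
    """Hypothetical sketch: choose between reusing a learned skill
    (indexed by its language description) and the base policy."""

    def __init__(self, base_policy, learned_skills, reuse_prior):
        self.base_policy = base_policy        # policy being trained for the new task
        self.learned_skills = learned_skills  # {"find wood": policy, ...}
        self.reuse_prior = reuse_prior        # stand-in for the learned switch /
                                              # stochastic temporal grammar prior

    def act(self, state, instruction):
        # The decision is interpretable by construction: a reused
        # sub-policy is always named by its language description.
        reuse_p = self.reuse_prior.get(instruction, 0.0)
        if instruction in self.learned_skills and random.random() < reuse_p:
            return self.learned_skills[instruction](state), instruction
        return self.base_policy(state), "<primitive action>"

# Toy usage with stub callables standing in for trained networks.
skills = {"find wood": lambda s: "walk-to-tree"}
policy = HierarchicalPolicy(
    base_policy=lambda s: "explore",
    learned_skills=skills,
    reuse_prior={"find wood": 0.9},
)
action, explanation = policy.act(state={}, instruction="find wood")
print(action, "via", explanation)
```

Because the agent can reach old skills only through their descriptions, the `explanation` returned alongside each action doubles as a human-readable trace of the hierarchy's choices.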