Modular Multitask Reinforcement Learning with Policy Sketches
Jacob Andreas, Dan Klein, Sergey Levine
Nov 04, 2016 (modified: Dec 07, 2016) · ICLR 2017 conference submission · Readers: everyone
Abstract: We describe a framework for multitask deep reinforcement learning guided by
policy sketches. Sketches annotate each task with a sequence of named subtasks,
providing high-level structural relationships among tasks, but not providing the
detailed guidance required by previous work on learning policy abstractions for
RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations).
Our approach associates every subtask with its own modular subpolicy,
and jointly optimizes over full task-specific policies by tying parameters across
shared subpolicies. This optimization is accomplished via a simple decoupled
actor–critic training objective that facilitates learning common behaviors from
dissimilar reward functions. We evaluate the effectiveness of our approach on a
maze navigation game and a 2-D Minecraft-inspired crafting game. Both games
feature extremely sparse rewards that can be obtained only after completing a
number of high-level subgoals (e.g. escaping from a sequence of locked rooms or
collecting and combining various ingredients in the proper order). Experiments
illustrate two main advantages of our approach. First, we outperform standard
baselines that learn task-specific or shared monolithic policies. Second, our
method naturally induces a library of primitive behaviors that can be recombined
to rapidly acquire policies for new tasks.
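The parameter-tying idea in the abstract can be illustrated with a minimal, hedged sketch (class and task names here are hypothetical stand-ins, not the paper's implementation): each distinct subtask name maps to one shared subpolicy module, so any task whose sketch mentions that name trains the same parameters.

```python
# Hypothetical sketch of parameter sharing via policy sketches.
# Subpolicy, SKETCHES, and the subtask names are illustrative only;
# update counts stand in for gradient steps on shared weights.

class Subpolicy:
    """One modular subpolicy; its parameters are shared by every task
    whose sketch names this subtask."""
    def __init__(self, name):
        self.name = name
        self.updates = 0  # proxy for training signal received

    def update(self):
        self.updates += 1

# Each task is annotated with a sketch: a sequence of named subtasks.
SKETCHES = {
    "make plank": ["get wood", "use workbench"],
    "make stick": ["get wood", "use toolshed"],
}

# One subpolicy per distinct subtask name, shared across all tasks.
subpolicies = {name: Subpolicy(name)
               for sketch in SKETCHES.values() for name in sketch}

def train_step(task):
    # Training on a task updates every subpolicy its sketch names,
    # so "get wood" accumulates experience from both tasks.
    for name in SKETCHES[task]:
        subpolicies[name].update()

train_step("make plank")
train_step("make stick")
print(subpolicies["get wood"].updates)      # shared by both tasks -> 2
print(subpolicies["use toolshed"].updates)  # used by one task -> 1
```

Because the library of subpolicies is indexed by name rather than by task, a new task can be attacked by writing a new sketch that recombines existing entries, which is the transfer behavior the abstract describes.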
TL;DR: Learning multitask deep hierarchical policies with guidance from symbolic policy sketches
Keywords: Reinforcement Learning, Transfer Learning