Learning Invariant Feature Spaces to Transfer Skills with Reinforcement Learning

Abhishek Gupta, Coline Devin, YuXuan Liu, Pieter Abbeel, Sergey Levine

Nov 05, 2016 (modified: Mar 03, 2017) ICLR 2017 conference submission readers: everyone
  • Abstract: People can learn a wide range of tasks from their own experience, but can also learn from observing other creatures. This can accelerate acquisition of new skills even when the observed agent differs substantially from the learning agent in terms of morphology. In this paper, we examine how reinforcement learning algorithms can transfer knowledge between morphologically different agents (e.g., different robots). We introduce a problem formulation where two agents are tasked with learning multiple skills by sharing information. Our method uses the skills that were learned by both agents to train invariant feature spaces that can then be used to transfer other skills from one agent to another. The process of learning these invariant feature spaces can be viewed as a kind of "analogy making," or implicit learning of partial correspondences between two distinct domains. We evaluate our transfer learning algorithm on two simulated robotic manipulation tasks, and show that we can transfer knowledge between simulated robotic arms with different numbers of links, as well as between simulated arms with different actuation mechanisms, where one robot is torque-driven while the other is tendon-driven.
  • TL;DR: Learning a common feature space between robots with different morphology or actuation to transfer skills.
  • Keywords: Deep learning, Reinforcement Learning, Transfer Learning
  • Conflicts: cs.berkeley.edu, eecs.berkeley.edu, berkeley.edu, google.com
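The core idea in the abstract can be sketched in a few lines: given time-aligned state trajectories from a skill that both agents have already learned, train one encoder per agent so that corresponding states map to nearby points in a shared feature space. This is a minimal sketch under assumed conditions, using linear encoders and plain gradient descent on an alignment loss (the paper uses deep networks and additional terms, e.g. reconstruction, to prevent the trivial collapsed solution); all array shapes and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical time-aligned trajectories from a skill both agents know:
# agent A has 4-D states, agent B has 6-D states (different morphologies).
T = 100
states_a = rng.normal(size=(T, 4))
states_b = states_a @ rng.normal(size=(4, 6))  # correlated by construction

# Linear encoders into a shared 3-D invariant feature space.
W_a = rng.normal(scale=0.1, size=(4, 3))
W_b = rng.normal(scale=0.1, size=(6, 3))

def alignment_loss(W_a, W_b):
    # Mean squared distance between paired states in the shared space.
    return np.mean(np.sum((states_a @ W_a - states_b @ W_b) ** 2, axis=1))

loss_init = alignment_loss(W_a, W_b)
lr = 0.01
for _ in range(500):
    diff = states_a @ W_a - states_b @ W_b   # per-timestep alignment error
    # Gradient steps on the alignment loss for each encoder.
    W_a -= lr * states_a.T @ diff / T
    W_b += lr * states_b.T @ diff / T

loss_final = alignment_loss(W_a, W_b)
```

After training, a skill known only to agent A could be transferred by rewarding agent B for tracking A's trajectory in the shared feature space; note that without a reconstruction or decoding term, the alignment loss alone admits the degenerate solution of mapping everything to zero.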