Know Thyself: Transferable Visual Control Policies Through Robot-Awareness

04 Mar 2022, 07:18 (edited 06 Apr 2022) · ICLR 2022 GPL Oral
  • Keywords: visual foresight, model-based reinforcement learning, dynamics models, visuomotor control, manipulation, transfer
  • TL;DR: We closely integrate readily available knowledge about the robot and world into a learned model and cost to facilitate transfer.
  • Abstract: Note: This submission is published in ICLR'22. Training visual control policies from scratch on a new robot typically requires generating large amounts of robot-specific data. Could we leverage data previously collected on another robot to reduce or even completely remove this need for robot-specific data? We propose a “robot-aware control” paradigm that achieves this by exploiting readily available knowledge about the robot. We then instantiate this in a robot-aware model-based RL policy by training modular dynamics models that couple a transferable, robot-agnostic world dynamics module with a robot-specific, potentially analytical, robot dynamics module. This also enables us to set up visual planning costs that separately consider the robot agent and the world. Our experiments on tabletop manipulation tasks with simulated and real robots demonstrate that these plug-in improvements dramatically boost the transferability of visual model-based RL policies, even permitting zero-shot transfer of visual manipulation skills onto new robots. Project website:
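The abstract describes factoring the learned dynamics into a transferable, robot-agnostic world module coupled with a robot-specific (potentially analytical) robot module, plus a planning cost that scores the world separately from the robot. The decomposition can be sketched as follows; all names (`robot_dynamics`, `world_dynamics`, `robot_aware_cost`) and the toy pushing rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def robot_dynamics(robot_state, action):
    """Robot-specific module: here a known analytical linear model
    (position plus commanded displacement). Swapping robots means
    swapping only this function."""
    return robot_state + action

def world_dynamics(world_state, robot_state, next_robot_state):
    """Transferable, robot-agnostic world module: predicts how the
    world reacts to the robot's motion. Toy rule: an object is pushed
    along if the robot moves onto it."""
    obj = world_state.copy()
    if np.allclose(next_robot_state, obj, atol=0.5):
        obj = obj + (next_robot_state - robot_state)
    return obj

def step(robot_state, world_state, action):
    """Full dynamics = robot module composed with the world module."""
    next_robot = robot_dynamics(robot_state, action)
    next_world = world_dynamics(world_state, robot_state, next_robot)
    return next_robot, next_world

def robot_aware_cost(world_state, world_goal,
                     robot_state=None, robot_goal=None, w_robot=0.0):
    """Planning cost with separate world and robot terms, so a new
    robot's appearance or embodiment does not corrupt the objective."""
    cost = np.linalg.norm(world_state - world_goal)
    if robot_goal is not None:
        cost += w_robot * np.linalg.norm(robot_state - robot_goal)
    return cost
```

In this sketch, transferring to a new robot replaces only `robot_dynamics` (and optionally the robot term of the cost), while `world_dynamics` and the world cost term carry over unchanged.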