Learning Category-Level Generalizable Object Manipulation Policy via Generative Adversarial Self-Imitation Learning from Demonstrations

04 Mar 2022, 07:18 (modified: 14 Apr 2022, 17:16) · ICLR 2022 GPL Poster
Keywords: Generalizable Policy Learning, Imitation Learning, GAIL, Curriculum learning
TL;DR: We propose Generative Adversarial Self-Imitation Learning from Demonstrations, along with several general and critical techniques that improve GAIL for category-level generalizable object manipulation policy learning.
Abstract: Generalizable object manipulation skills are critical for intelligent and multi-functional robots to work in real-world complex scenes. Despite the recent progress in reinforcement learning, it is still very challenging to learn a generalizable manipulation policy that can handle a category of geometrically diverse articulated objects. In this work, we tackle this category-level object manipulation policy learning problem via imitation learning in a task-agnostic manner, where we assume no handcrafted dense rewards but only a terminal reward. Given this novel and challenging generalizable policy learning problem, we identify several key issues that can fail the previous imitation learning algorithms and hinder generalization to unseen instances. We then propose several general but critical techniques, including generative adversarial self-imitation learning from demonstrations, progressive growing of the discriminator, and instance balancing of the expert buffer, that accurately pinpoint and tackle these issues and can benefit category-level manipulation policy learning regardless of the task. Our experiments on ManiSkill benchmarks demonstrate remarkable improvements on all tasks, and our ablation studies further validate the contribution of each proposed technique.
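The abstract does not detail how instance balancing of the expert buffer works; a minimal sketch of the general idea follows, assuming the common two-stage scheme of sampling an object instance uniformly first and then a trajectory within it. All class and variable names here (e.g. `InstanceBalancedExpertBuffer`, `cabinet_001`) are hypothetical and not taken from the paper.

```python
import random
from collections import defaultdict

class InstanceBalancedExpertBuffer:
    """Hypothetical sketch: stores expert trajectories keyed by object-instance
    id, and samples an instance uniformly before sampling a trajectory within
    it, so instances with many demonstrations cannot dominate training."""

    def __init__(self):
        self._by_instance = defaultdict(list)

    def add(self, instance_id, trajectory):
        self._by_instance[instance_id].append(trajectory)

    def sample(self, rng=random):
        # Uniform over instances first, then uniform within the chosen instance.
        instance_id = rng.choice(list(self._by_instance))
        return rng.choice(self._by_instance[instance_id])

# Demo: one instance with 1 trajectory, another with 100.
buf = InstanceBalancedExpertBuffer()
buf.add("cabinet_001", ["demo_a"])
for i in range(100):
    buf.add("cabinet_002", [f"demo_b{i}"])

# Despite the 100x imbalance in demo counts, each instance is drawn ~50% of the time.
counts = {"cabinet_001": 0, "cabinet_002": 0}
for _ in range(2000):
    traj = buf.sample()
    counts["cabinet_001" if traj == ["demo_a"] else "cabinet_002"] += 1
```

Sampling uniformly from a flat pool would instead pick `cabinet_002` about 99% of the time, biasing the discriminator and policy toward that one geometry.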