Motion Policy Networks

16 Jun 2022, 10:45 (modified: 03 Dec 2022, 00:30) · CoRL 2022 Poster
Student First Author: yes
Keywords: Motion Control, Imitation Learning, End-to-End Learning
TL;DR: Motion Policy Networks are trained on millions of example trajectories to generate collision-free, smooth motion from just a single depth camera image
Abstract: Collision-free motion generation in unknown environments is a core building block for robot manipulation. Generating such motions is challenging due to multiple objectives: not only must the solutions be optimal, but the motion generator itself must also be fast enough for real-time performance and reliable enough for practical deployment. A wide variety of methods have been proposed, ranging from local controllers to global planners, which are often combined to offset their shortcomings. We present an end-to-end neural model called Motion Policy Networks (M$\pi$Nets) to generate collision-free, smooth motion from just a single depth camera observation. M$\pi$Nets are trained on over 3 million motion planning problems in more than 500,000 environments. Our experiments show that M$\pi$Nets are significantly faster than global planners while exhibiting the reactivity needed to deal with dynamic scenes. They are 46% better than prior neural planners and more robust than local control policies. Despite being trained only in simulation, M$\pi$Nets transfer well to the real robot with noisy partial point clouds. Videos and code are available at
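The abstract describes a closed-loop policy: at each control step the network consumes the current observation and emits a motion command, which is applied and the loop repeats until the goal is reached. A minimal sketch of such a rollout loop, with a toy clipped proportional step standing in for the learned network (all names here are hypothetical, not the paper's API; the real model additionally encodes a depth point cloud):

```python
import numpy as np

def toy_policy(q, q_goal, max_step=0.05):
    """Stand-in for the learned policy: a step toward the goal configuration,
    clipped to max_step. The actual network would also take a point-cloud
    observation of the scene to avoid obstacles."""
    delta = q_goal - q
    dist = np.linalg.norm(delta)
    if dist < 1e-12:
        return np.zeros_like(q)
    return delta * min(1.0, max_step / dist)

def rollout(policy, q_start, q_goal, max_steps=500, tol=1e-4):
    """Apply the policy in closed loop until the goal is reached
    or the step budget is exhausted; return the joint-space trajectory."""
    q = np.asarray(q_start, dtype=float)
    q_goal = np.asarray(q_goal, dtype=float)
    trajectory = [q.copy()]
    for _ in range(max_steps):
        q = q + policy(q, q_goal)
        trajectory.append(q.copy())
        if np.linalg.norm(q - q_goal) < tol:
            break
    return np.array(trajectory)

traj = rollout(toy_policy, q_start=[0.0, 0.0, 0.0], q_goal=[0.3, -0.2, 0.1])
```

Reactivity in this framing comes for free: because the policy is re-queried every step from the latest observation, a moving obstacle simply changes the next command rather than invalidating a precomputed plan.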
Supplementary Material: zip