Keywords: Large-scale Learning, Procedural Scene Generation, Motion Planning
TL;DR: We propose a SOTA approach to fast, reactive real-world motion planning by distilling traditional motion planners into generalist visuomotor policies at scale in millions of procedurally generated scenes.
Abstract: The current paradigm for motion planning generates solutions from scratch for every new problem, which consumes significant time and computational resources. For complex, cluttered scenes, motion planners can take minutes to produce a solution, while humans are able to accurately and safely reach any goal in seconds by leveraging their prior experience. We seek to do the same by applying data-driven learning at scale to the problem of motion planning. Our approach builds a large number of complex scenes in simulation, collects expert data from a motion planner, and then distills it into a reactive generalist policy. We combine this policy with lightweight optimization to obtain a safe path for real-world deployment. We perform a thorough evaluation of our method on **64** motion planning tasks across four diverse environments with randomized poses, scenes, and obstacles in the real world, demonstrating improvements of **23%**, **17%**, and **79%** in motion planning success rate over state-of-the-art sampling-based, optimization-based, and learning-based planning methods, respectively. Videos, code, and models are available at [mihdalal.github.io/neuralmotionplanner](https://mihdalal.github.io/neuralmotionplanner)
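To make the pipeline concrete, below is a minimal sketch of the distill-then-optimize recipe the abstract describes: procedurally generate a scene, query an expert planner for a path, and train a reactive policy by imitation on the resulting (state, action) pairs. This is not the authors' implementation; every name here (`generate_scene`, `plan_expert_path`, `Policy`), the 7-DoF arm, and the point-cloud observation are hypothetical placeholders.

```python
# Hypothetical sketch of the distillation pipeline from the abstract:
# procedural scenes -> expert planner demonstrations -> behavior cloning.
import numpy as np
import torch
import torch.nn as nn

DOF = 7  # assumed: a 7-DoF manipulator

def generate_scene(rng):
    """Placeholder for procedural scene generation: a random point cloud
    stands in for a cluttered simulated scene."""
    return rng.standard_normal((1024, 3)).astype(np.float32)

def plan_expert_path(scene, rng):
    """Placeholder for the expert motion planner run offline; here it just
    returns a random joint-space path of 50 waypoints."""
    return rng.standard_normal((50, DOF)).astype(np.float32)

class Policy(nn.Module):
    """Reactive generalist policy: maps a scene encoding plus the current
    joint state to the next joint-space waypoint (imitation target)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
        self.head = nn.Sequential(nn.Linear(64 + DOF, 128), nn.ReLU(), nn.Linear(128, DOF))

    def forward(self, points, q):
        feat = self.encoder(points).max(dim=1).values  # global max-pool over points
        return self.head(torch.cat([feat, q], dim=-1))

rng = np.random.default_rng(0)
policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

for step in range(100):  # the real pipeline would scale to millions of scenes
    scene_np = generate_scene(rng)
    path = torch.from_numpy(plan_expert_path(scene_np, rng))
    scene = torch.from_numpy(scene_np).unsqueeze(0)
    q, q_next = path[:-1], path[1:]                   # (state, expert action) pairs
    pred = policy(scene.expand(len(q), -1, -1), q)
    loss = nn.functional.mse_loss(pred, q_next)       # distillation via imitation
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At deployment, the abstract pairs the learned policy with lightweight optimization (e.g., collision-checked local refinement of the predicted path) to obtain a safe trajectory; that step is omitted from this sketch.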
Submission Number: 5