NOD-TAMP: Multi-Step Manipulation Planning with Neural Object Descriptors

Published: 23 Oct 2023, Last Modified: 02 Nov 2023 · CoRL23-WS-LEAP Oral
Keywords: Task and Motion Planning, Learning from Demonstration, Neural Object Representations
TL;DR: A TAMP-based framework featuring neural object descriptors, capable of learning from only a handful of brief demonstrations yet exhibiting robust performance in long-horizon tasks involving diverse object shapes, poses, and goal configurations.
Abstract: Developing intelligent robots for complex manipulation tasks in household and factory settings remains challenging due to long task horizons, contact-rich manipulation, and the need to generalize across a wide variety of object shapes and scene layouts. While Task and Motion Planning (TAMP) offers a promising solution, its assumptions, such as access to accurate kinodynamic models, limit its applicability in novel contexts. Neural object descriptors (NODs) have shown promise in generalizing across objects and scenes but face limitations when addressing broader tasks. Our proposed TAMP-based framework, NOD-TAMP, extracts short manipulation trajectories from a handful of human demonstrations, adapts these trajectories using NOD features, and composes them to solve broad long-horizon tasks. Validated in a simulation environment, NOD-TAMP effectively tackles varied challenges and outperforms existing methods, establishing a cohesive framework for manipulation planning. For videos and other supplemental material, see the project website: https://sites.google.com/view/nod-tamp/.
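
Code sketch (illustrative): to make the pipeline described in the abstract concrete, below is a minimal, self-contained Python sketch of its three stages: extracting short demonstrated trajectories, adapting them to a new object instance via descriptor matching, and composing the adapted skills into a longer plan. All names here (toy_descriptor, adapt_waypoint, adapt_trajectory, compose_plan) are hypothetical stand-ins, not the authors' API: a distance-to-anchor-points feature replaces the learned NOD, random search replaces gradient-based adaptation, and a fixed skill ordering replaces the TAMP planner.

    import numpy as np

    def toy_descriptor(object_points, query):
        """Stand-in for a learned NOD: features of a query point expressed
        relative to the object's geometry (here, distances to a few anchor
        points of its point cloud)."""
        anchors = object_points[:4]
        return np.linalg.norm(anchors - query, axis=1)

    def adapt_waypoint(demo_points, demo_wp, new_points, n_samples=2000, seed=0):
        """Transfer one demonstrated end-effector waypoint to a new object
        instance by finding the point whose descriptor best matches the
        demo waypoint's descriptor (a crude substitute for optimization)."""
        rng = np.random.default_rng(seed)
        target = toy_descriptor(demo_points, demo_wp)
        lo = new_points.min(axis=0) - 0.1
        hi = new_points.max(axis=0) + 0.1
        candidates = rng.uniform(lo, hi, size=(n_samples, 3))
        errs = [np.linalg.norm(toy_descriptor(new_points, c) - target)
                for c in candidates]
        return candidates[int(np.argmin(errs))]

    def adapt_trajectory(demo_points, demo_traj, new_points):
        """Adapt a short demonstrated trajectory waypoint by waypoint."""
        return np.stack([adapt_waypoint(demo_points, wp, new_points)
                         for wp in demo_traj])

    def compose_plan(skills, skill_sequence):
        """Toy 'TAMP' layer: chain adapted skill trajectories in the order a
        symbolic planner would propose (here, the order is given directly)."""
        return np.concatenate([skills[name] for name in skill_sequence])

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        mug = rng.uniform(-0.05, 0.05, size=(64, 3))      # demo-time object
        new_mug = mug + np.array([0.30, 0.10, 0.0])       # same shape, new pose
        demo_grasp = np.linspace(mug[0] + 0.05, mug[0], num=5)  # toy approach
        demo_lift = np.linspace(mug[0], mug[0] + [0.0, 0.0, 0.1], num=5)
        skills = {
            "grasp": adapt_trajectory(mug, demo_grasp, new_mug),
            "lift": adapt_trajectory(mug, demo_lift, new_mug),
        }
        plan = compose_plan(skills, ["grasp", "lift"])
        print(plan.shape)  # (10, 3): two adapted skills chained end to end

The point this sketch tries to convey: because each waypoint is re-derived from object-relative descriptor features rather than copied in a fixed world frame, the same short demonstration transfers across object poses, and with a real learned NOD it can transfer across object shapes as well.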
Submission Number: 3