DiMSam: Diffusion Models as Samplers for Task and Motion Planning under Partial Observability

Published: 23 Oct 2023, Last Modified: 23 Oct 2023 · CoRL23-WS-LEAP Poster
Keywords: Task and Motion Planning, Diffusion models, Articulated object manipulation
TL;DR: We use diffusion models as samplers in TAMP to deal with partial observability and unseen objects.
Abstract: Task and Motion Planning (TAMP) approaches are effective at planning long-horizon autonomous robot manipulation. However, they can be difficult to apply to domains where the environment and its dynamics are not fully known. We propose to overcome these limitations by leveraging deep generative modeling, specifically diffusion models, to learn constraints and samplers that capture these difficult-to-engineer aspects of the planning model. These learned samplers are composed within a TAMP solver to jointly find action parameter values that satisfy the constraints along a plan. To tractably make predictions for unseen objects in the environment, we define these samplers on low-dimensional learned latent embeddings of changing object state. We evaluate our approach in an articulated object manipulation domain and show how the combination of classical TAMP, generative learning, and latent embeddings enables long-horizon constraint-based reasoning. We also apply the learned samplers in the real world. More details are available at https://sites.google.com/view/dimsam-tamp.
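To make the composition idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how diffusion-style samplers could be chained along a plan inside a TAMP-like loop: each sampler iteratively denoises a latent action parameter, and the solver rejection-checks the result against the step's constraint. The denoiser here is a hand-coded stand-in for a learned network, and all names (`diffusion_sample`, `make_denoiser`, `satisfies`) are illustrative.

```python
import random

random.seed(0)  # deterministic for illustration

def diffusion_sample(denoise_step, dim, num_steps=50):
    """Draw one sample by iteratively denoising Gaussian noise
    (a toy stand-in for a learned reverse-diffusion sampler)."""
    x = [random.gauss(0.0, 1.0) for _ in range(dim)]
    for t in range(num_steps, 0, -1):
        x = denoise_step(x, t)
    return x

def make_denoiser(target):
    """Hypothetical denoiser that pulls samples toward a target latent;
    in the paper's setting this would be a learned, conditioned model."""
    def step(x, t):
        return [xi + 0.1 * (ti - xi) for xi, ti in zip(x, target)]
    return step

def satisfies(constraint, sample, tol=0.2):
    """Toy constraint check: sample lies near the constraint's latent."""
    return all(abs(a - b) < tol for a, b in zip(sample, constraint))

# Compose samplers along a two-step plan: each step has its own
# constraint target, and the solver keeps only satisfying samples.
plan_targets = [[0.5, -0.5], [1.0, 0.0]]
latents = []
for target in plan_targets:
    s = diffusion_sample(make_denoiser(target), dim=2)
    if satisfies(target, s):  # rejection check against the constraint
        latents.append(s)
```

After the loop, `latents` holds one satisfying latent sample per plan step; a real solver would additionally condition each sampler on the previous step's latent state and backtrack when no satisfying sample is found.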
Submission Number: 17