Keywords: Semantic Fields, Category-Level Generalization, Imitation Learning, Diffusion Models
Abstract: Diffusion-based policies have shown remarkable capability in executing complex robotic manipulation tasks but lack explicit characterization of geometry and semantics, which often limits their ability to generalize to unseen objects and layouts. To enhance the generalization capabilities of Diffusion Policy, we introduce a novel framework that incorporates explicit spatial and semantic information via 3D semantic fields. We generate 3D descriptor fields from multi-view RGBD observations with large vision foundation models, then compare these descriptor fields against reference descriptors to obtain semantic fields. The proposed method explicitly considers geometry and semantics, enabling strong generalization in tasks that require category-level generalization, resolution of geometric ambiguities, and attention to subtle geometric details. We evaluate our method across eight tasks involving articulated objects and instances with varying shapes and textures from multiple object categories. Our method demonstrates its effectiveness by increasing Diffusion Policy's average success rate on unseen instances from 20% to 93%. Additionally, we provide a detailed analysis and visualizations to interpret the sources of the performance gain and explain how our method generalizes to novel instances. Project page: https://robopil.github.io/GenDP/
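To make the described pipeline concrete, here is a minimal sketch of the step that turns per-point descriptors into semantic fields. This is not the authors' implementation: the camera model, shapes, and reference names are illustrative assumptions, and random arrays stand in for per-pixel descriptors that a vision foundation model (e.g., DINO-style features) would produce from calibrated multi-view RGBD observations.

```python
# Minimal sketch, assuming fused per-point descriptors are already available.
# Placeholders (random data) replace the foundation-model features and real
# camera calibration used in the actual system.
import numpy as np

def backproject(depth, intrinsics):
    """Unproject a depth map (H, W) into camera-frame 3D points (H*W, 3)."""
    h, w = depth.shape
    fx, fy, cx, cy = intrinsics
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - cx) / fx * z
    y = (v.ravel() - cy) / fy * z
    return np.stack([x, y, z], axis=1)

def semantic_field(point_descriptors, reference_descriptors):
    """Cosine similarity of each point's descriptor against each reference
    descriptor, giving one semantic channel per reference: (N, K)."""
    p = point_descriptors / np.linalg.norm(point_descriptors, axis=1, keepdims=True)
    r = reference_descriptors / np.linalg.norm(reference_descriptors, axis=1, keepdims=True)
    return p @ r.T

rng = np.random.default_rng(0)
# Hypothetical single view: 48x64 depth map, (fx, fy, cx, cy) intrinsics.
points = backproject(rng.uniform(0.3, 1.0, (48, 64)), (500.0, 500.0, 32.0, 24.0))
descs = rng.normal(size=(points.shape[0], 384))  # stand-in per-point descriptors
refs = rng.normal(size=(3, 384))                 # e.g. "handle", "lid", "spout"
field = semantic_field(descs, refs)              # (N, 3) semantic field
print(points.shape, field.shape)
```

Under these assumptions, each reference descriptor yields one similarity channel over the point cloud, so the policy can consume compact, semantically labeled geometry rather than raw high-dimensional features.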
Supplementary Material: zip
Spotlight Video: mp4
Video: https://youtu.be/6jUGmUaAEOc
Website: https://robopil.github.io/GenDP/
Code: https://github.com/WangYixuan12/gendp
Publication Agreement: pdf
Student Paper: yes
Submission Number: 513