Equivariant Diffusion Policy

Published: 05 Sept 2024 · Last Modified: 15 Oct 2024 · CoRL 2024 · CC BY 4.0
Keywords: Equivariance, Diffusion Model, Robotic Manipulation
TL;DR: We propose Equivariant Diffusion Policy that leverages symmetries in diffusion policy learning to improve sample efficiency.
Abstract: Recent work has shown that diffusion models are an effective approach to learning the multimodal distributions arising from demonstration data in behavior cloning. However, a drawback of this approach is the need to learn a denoising function, which is significantly more complex than learning an explicit policy. In this work, we propose Equivariant Diffusion Policy, a novel diffusion policy learning method that leverages domain symmetries to obtain better sample efficiency and generalization in the denoising function. We theoretically analyze the $\mathrm{SO}(2)$ symmetry of full 6-DoF control and characterize when a diffusion model is $\mathrm{SO}(2)$-equivariant. We furthermore evaluate the method empirically on a set of 12 simulation tasks in MimicGen, and show that it obtains a success rate that is, on average, 21.9\% higher than the baseline Diffusion Policy. We also evaluate the method on a real-world system, showing that it learns effective policies with relatively few training samples, whereas the baseline Diffusion Policy cannot.
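To make the symmetry claim concrete, a minimal sketch of the equivariance condition on the denoising (noise-prediction) network follows; the notation (network $\epsilon_\theta$, observation $o$, noisy action $a^k$ at denoising step $k$, and group action $g \cdot$) is illustrative and not taken verbatim from the paper:

$$\epsilon_\theta(g \cdot o,\; g \cdot a^k,\; k) \;=\; g \cdot \epsilon_\theta(o,\; a^k,\; k), \qquad \forall\, g \in \mathrm{SO}(2).$$

In words, rotating the observation and the noisy action by the same planar rotation should rotate the predicted noise accordingly; this is the property whose conditions the paper characterizes for full 6-DoF control.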
Supplementary Material: zip
Video: https://www.youtube.com/watch?v=xIFSx_NVROU
Website: https://equidiff.github.io
Student Paper: yes
Submission Number: 334