DisDP: Robust Imitation Learning via Disentangled Diffusion Policies

Published: 09 May 2025 · Last Modified: 28 May 2025 · RLC 2025 · CC BY 4.0
Keywords: Imitation Learning, Diffusion Policy, Multi-View Disentanglement
TL;DR: This work introduces Disentangled Diffusion Policy (DisDP), an imitation learning method that enhances robustness by integrating multi-view disentanglement into diffusion-based policies, improving performance under sensor perturbations and modality dropout.
Abstract: This work introduces Disentangled Diffusion Policy (DisDP), an imitation learning method that enhances robustness by integrating multi-view disentanglement into diffusion-based policies. Robots operating in real-world environments rely on multiple sensory inputs to interact effectively with their surroundings. However, sensors are susceptible to noise, calibration errors, failures, and environmental perturbations, and existing imitation learning methods struggle to generalize under such conditions because they typically assume consistent, noise-free inputs. DisDP addresses this limitation by structuring sensory inputs into shared and private representations, preserving task-relevant global features while retaining distinct details from individual sensors. This structured representation improves resilience against sensor dropouts and perturbations. Evaluations on the RoboColosseum and Libero benchmarks demonstrate that DisDP matches baseline methods in overall performance while exhibiting greater robustness to sensor variations.
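To make the shared/private split concrete, below is a minimal sketch of multi-view disentanglement in the spirit the abstract describes: each sensory view is encoded into a shared embedding (aggregated across views) and a view-specific private embedding, and both condition the diffusion denoiser. All module names, dimensions, and the mean-pooling aggregation are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of multi-view disentanglement for a diffusion policy.
# All names, dimensions, and design choices here are illustrative assumptions.
import torch
import torch.nn as nn


class DisentangledEncoder(nn.Module):
    """Encodes each sensory view into a shared and a private embedding."""

    def __init__(self, num_views: int, obs_dim: int, latent_dim: int):
        super().__init__()
        # One encoder per view; each outputs [shared | private] halves.
        self.encoders = nn.ModuleList(
            nn.Sequential(
                nn.Linear(obs_dim, 256), nn.ReLU(),
                nn.Linear(256, 2 * latent_dim),
            )
            for _ in range(num_views)
        )
        self.latent_dim = latent_dim

    def forward(self, views: list[torch.Tensor]):
        shared, private = [], []
        for enc, view in zip(self.encoders, views):
            z = enc(view)
            shared.append(z[..., : self.latent_dim])
            private.append(z[..., self.latent_dim :])
        # Average the shared parts: task-relevant features every view should
        # agree on. If one sensor drops out, the mean over the remaining
        # views can still supply this global information.
        z_shared = torch.stack(shared).mean(dim=0)
        return z_shared, private


# Build the conditioning vector for the diffusion denoiser from the shared
# embedding plus the concatenated per-view private embeddings.
encoder = DisentangledEncoder(num_views=2, obs_dim=64, latent_dim=32)
views = [torch.randn(8, 64), torch.randn(8, 64)]  # e.g. two camera features
z_shared, z_private = encoder(views)
cond = torch.cat([z_shared, *z_private], dim=-1)  # input to the denoiser
```

Under this reading, robustness to a dropped modality comes from the shared pathway: the aggregated shared embedding degrades gracefully when one view is missing, while the private embeddings preserve sensor-specific detail when all views are present.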
Submission Number: 132