Keywords: Imitation Learning, Robustness, Diffusion Policy, Multi-View Disentanglement
TL;DR: This work introduces Disentangled Diffusion Policy (DisDP), an imitation learning method that enhances robustness by integrating multi-view disentanglement into diffusion-based policies, improving performance under sensor noise and modality dropout.
Abstract: This work introduces Disentangled Diffusion Policy (DisDP), an Imitation Learning (IL) method that enhances robustness. Robot policies must be robust against various perturbations, including sensor noise, complete sensor dropout, and environmental variations. Existing IL methods struggle to generalize under such conditions, as they typically assume consistent, noise-free inputs. To address this limitation, DisDP structures sensor inputs into shared and private representations, preserving global features while retaining details from individual sensors. Additionally, Disentangled Behavior Cloning (DisBC), a disentangled Behavior Cloning (BC) policy, is introduced to demonstrate the general applicability of disentanglement for IL. This structured representation improves resilience against sensor dropout and perturbations. Evaluations on The Colosseum and Libero benchmarks show that disentangled policies achieve better overall performance and greater robustness to perturbations than their baseline counterparts.
Submission Number: 4