Keywords: diffusion models, shortcut models, fast diffusion, clipping, DDPM, diffusion policy, robotic manipulation, embodied AI, Unet, metaheuristic, genetic algorithm, D4RL, robomimic
TL;DR: We show that diffusion policy can be made faster and more accurate without retraining and analyse why; we introduce a genetic algorithm that reduces inference to two denoising steps.
Abstract: Diffusion models, such as diffusion policy, have achieved state-of-the-art results in robotic manipulation by imitating expert demonstrations. While diffusion models were originally developed for vision tasks like image and video generation, many of their inference strategies have been directly transferred to control domains without adaptation. In this work, we show that by tailoring the denoising process to the specific characteristics of embodied AI tasks, particularly the structured, low-dimensional nature of action distributions, diffusion policies can operate effectively with as few as 5 neural function evaluations (NFE).
Building on this insight, we propose a population-based sampling strategy, genetic denoising, which enhances both performance and stability by selecting denoising trajectories with low out-of-distribution risk. Our method solves challenging tasks with only 2 NFE while matching or improving performance. We evaluate our approach across 14 robotic manipulation tasks from D4RL and Robomimic, spanning multiple action horizons and inference budgets. In over 2 million evaluations, our method consistently outperforms standard diffusion-based policies, achieving up to 20\% performance gains with significantly fewer inference steps.
Supplementary Material: zip
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 13343