Uncertainty Comes for Free: Human-in-the-Loop Policies with Diffusion Models

Published: 01 Jun 2026 · Last Modified: 03 Feb 2026 · ICRA 2026 · CC BY-NC 4.0
Abstract: Human-in-the-loop robot deployment has gained significant attention in both academia and industry as a semi-autonomous paradigm that enables human operators to intervene and adjust robot behaviors at deployment time, improving success rates. However, continuous human monitoring and intervention can be highly labor-intensive and impractical when deploying a large number of robots. To address this limitation, we propose a method that allows diffusion policies to actively seek human assistance only when necessary, reducing reliance on constant human oversight. To achieve this, we leverage the generative process of diffusion policies to compute an uncertainty metric, from which the autonomous agent can decide whether to request operator assistance at deployment time, without requiring any operator interaction during training. Additionally, we show that the same method enables efficient data collection for fine-tuning diffusion policies to improve their autonomous performance. Experimental results from simulated and real-world environments demonstrate that our approach enhances policy performance during deployment across a variety of scenarios.
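The abstract does not specify the exact uncertainty metric, so the sketch below illustrates one common proxy for the idea it describes: draw several action samples from the same stochastic (diffusion) policy and request operator assistance only when the samples disagree. The functions `sample_actions`, `uncertainty`, and `should_request_help`, the threshold value, and the toy policies are all hypothetical, not the paper's method.

```python
import numpy as np

def sample_actions(policy, obs, n_samples=8):
    """Draw several action trajectories from a stochastic policy."""
    return np.stack([policy(obs) for _ in range(n_samples)])  # (n, horizon, act_dim)

def uncertainty(action_samples):
    """Mean per-dimension std. dev. across samples: high spread -> high uncertainty."""
    return float(action_samples.std(axis=0).mean())

def should_request_help(policy, obs, threshold=0.5, n_samples=8):
    """Flag the state for human assistance only when the policy's samples disagree."""
    return uncertainty(sample_actions(policy, obs, n_samples)) > threshold

# Toy stand-ins for a diffusion policy: each call returns a 4-step, 2-D action plan.
rng = np.random.default_rng(0)
confident_policy = lambda obs: 0.01 * rng.standard_normal((4, 2))  # samples nearly agree
uncertain_policy = lambda obs: rng.standard_normal((4, 2))         # samples scatter widely

obs = None  # observation unused by the toy policies
print(should_request_help(confident_policy, obs))  # low spread  -> False
print(should_request_help(uncertain_policy, obs))  # high spread -> True
```

A threshold like this would typically be calibrated offline, e.g. from uncertainty values observed on held-out successful rollouts, rather than hand-picked.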