Abstract: Human-in-the-loop robot deployment has gained
significant attention in both academia and industry as a semiautonomous paradigm that enables human operators to intervene
and adjust robot behaviors at deployment time, improving
success rates. However, continuous human monitoring and
intervention can be highly labor-intensive and impractical when
deploying a large number of robots. To address this limitation,
we propose a method that allows diffusion policies to actively
seek human assistance only when necessary, reducing reliance
on constant human oversight. To achieve this, we leverage
the generative process of diffusion policies to compute an
uncertainty metric from which the autonomous agent can
decide, at deployment time, whether to request operator
assistance, without requiring any operator interaction during training.
Additionally, we show that the same method can be used for
efficient data collection for fine-tuning diffusion policies
to improve their autonomous performance. Experimental results
from simulated and real-world environments demonstrate that
our approach enhances policy performance during deployment
for a variety of scenarios.
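The abstract's core mechanism can be illustrated with a minimal sketch: sample several action trajectories from the policy's stochastic generative process, use their spread as an uncertainty proxy, and request operator assistance only when that score crosses a threshold. This is a hedged toy illustration, not the paper's actual metric; the function names (`uncertainty_score`, `should_request_help`, `make_policy`), the variance-based score, and the threshold value are all assumptions introduced here for clarity.

```python
import numpy as np

def uncertainty_score(sample_action, obs, n_samples=8, rng=None):
    # Draw several actions from the policy's stochastic generative
    # process and use their spread as a simple uncertainty proxy.
    # (Illustrative stand-in; the paper's metric may differ.)
    rng = rng if rng is not None else np.random.default_rng(0)
    samples = np.stack([sample_action(obs, rng) for _ in range(n_samples)])
    return float(samples.var(axis=0).mean())

def should_request_help(score, threshold=0.05):
    # Gate operator intervention: only ask for help when the
    # policy's samples disagree strongly. Threshold is arbitrary here.
    return score > threshold

# Toy stand-in for a diffusion policy: the noise scale mimics how
# consistent the denoised action samples are for a given observation.
def make_policy(noise_scale):
    def sample_action(obs, rng):
        return obs + rng.normal(0.0, noise_scale, size=obs.shape)
    return sample_action

obs = np.zeros(2)
confident_score = uncertainty_score(make_policy(0.01), obs)
uncertain_score = uncertainty_score(make_policy(1.0), obs)
```

Under this sketch, a policy whose samples agree closely stays autonomous, while one whose samples scatter widely defers to the operator, matching the abstract's goal of replacing constant monitoring with on-demand assistance.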