Abstract: Generative policies trained with human demonstrations can autonomously accomplish multimodal, long-horizon tasks. However, during inference, humans are often
removed from the policy execution loop, limiting the ability
to guide a pre-trained policy towards a specific sub-goal or
trajectory shape among multiple predictions. Naive human
intervention may inadvertently exacerbate distribution shift,
leading to constraint violations or execution failures. To better
align policy output with human intent without inducing out-of-distribution errors, we propose an Inference-Time Policy
Steering (ITPS) framework that leverages human interactions
to bias the generative sampling process, rather than fine-tuning the policy on interaction data. We evaluate ITPS
across three simulated and real-world benchmarks, testing
three forms of human interaction and associated alignment
distance metrics. Among six sampling strategies, our proposed
stochastic sampling with diffusion policy achieves the best
trade-off between alignment and distribution shift. Videos are
available at https://yanweiw.github.io/itps/.
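To make the steering idea concrete, below is a minimal, hypothetical sketch of biasing a reverse-diffusion sampling loop with a user-defined alignment cost. The denoiser, noise schedule, guidance scale, and point-goal cost are illustrative assumptions for this sketch, not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of inference-time steering: at each
# reverse-diffusion step, the gradient of a user-defined alignment cost
# (here: squared distance of the trajectory's last waypoint to a clicked goal)
# nudges the sample toward the human's intent. All components are placeholders.
import torch

T_STEPS = 50                # number of reverse-diffusion steps (assumed)
HORIZON, ACT_DIM = 16, 2    # trajectory length and action dimension (assumed)
GUIDE_SCALE = 1.0           # strength of the steering bias (assumed)

class DummyDenoiser(torch.nn.Module):
    """Stand-in epsilon-prediction network for a trained diffusion policy."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(HORIZON * ACT_DIM + 1, HORIZON * ACT_DIM)

    def forward(self, x, t):
        inp = torch.cat([x.flatten(1), t.float().view(-1, 1)], dim=1)
        return self.net(inp).view_as(x)

def alignment_cost(traj, goal):
    # Example alignment metric: distance of the final predicted waypoint to a
    # user-specified goal point (one of several interaction types one could use).
    return ((traj[:, -1, :] - goal) ** 2).sum(dim=-1).mean()

@torch.no_grad()
def steered_sample(denoiser, goal, betas):
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(1, HORIZON, ACT_DIM)      # start from pure noise
    for t in reversed(range(T_STEPS)):
        eps = denoiser(x, torch.full((1,), t))
        # Standard DDPM posterior mean estimate for the previous step.
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        # Steering: bias the mean along the negative gradient of the alignment cost.
        with torch.enable_grad():
            x_req = x.detach().requires_grad_(True)
            grad = torch.autograd.grad(alignment_cost(x_req, goal), x_req)[0]
        mean = mean - GUIDE_SCALE * grad
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

if __name__ == "__main__":
    betas = torch.linspace(1e-4, 0.02, T_STEPS)
    goal = torch.tensor([0.5, -0.3])          # e.g., a point the user clicked
    traj = steered_sample(DummyDenoiser(), goal, betas)
    print(traj.shape)                         # (1, 16, 2) steered action trajectory
```

The key design point is that the pre-trained denoiser is left untouched; only the sampling iterates are perturbed, which is what lets steering trade off alignment against distribution shift at inference time.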