Keywords: Flow Matching, Robot Learning, Imitation Learning, Robotics, Robotics Policy, Manipulation
TL;DR: We present the Vision-To-Action flow matching policy, a noise-free, conditioning-free framework that evolves latent visual representations into latent actions via flow matching for efficient visuomotor control.
Abstract: Conventional flow matching and diffusion-based policies sample through iterative denoising from standard noise distributions (e.g., Gaussian) and require conditioning mechanisms to incorporate visual information during the generative process, incurring substantial time and memory overhead. To reduce this complexity, we develop VITA~({\bf VI}sion-{\bf T}o-{\bf A}ction policy), a noise-free and conditioning-free policy learning framework that directly maps visual representations to latent actions using flow matching. VITA treats latent visual representations as the source of the flow, thus eliminating the need for conditioning. However, bridging vision and action is challenging: actions are lower-dimensional, less structured, and sparser than visual representations, and flow matching requires the source and target to have the same dimensionality. To overcome this, we introduce an action autoencoder that maps raw actions into a structured latent space aligned with visual latents, trained jointly with flow matching. To further prevent latent space collapse, we propose flow latent decoding, which anchors the latent generation process by backpropagating the action reconstruction loss through the flow matching ODE (ordinary differential equation) solving steps. We evaluate VITA on 8 simulation and 2 real-world tasks from ALOHA and Robomimic. VITA outperforms or matches state-of-the-art generative policies while achieving $1.5{\times}$-$2.3{\times}$ faster inference than conventional conditioned methods.
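To make the mechanism concrete, below is a minimal PyTorch sketch of the two training losses as the abstract describes them: flow matching with the visual latent as the flow source (rather than Gaussian noise, and with no conditioning branch), an action autoencoder that lifts raw action chunks into the same dimensionality as the visual latents, and flow latent decoding that backpropagates an action reconstruction loss through the Euler steps of the ODE solve. All module names, dimensions, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the VITA training objectives (assumptions, not the paper's code).
import torch
import torch.nn as nn

LATENT_DIM = 256   # assumed shared dimensionality of visual and action latents
ACTION_DIM = 14    # e.g., bimanual joint commands on ALOHA (assumption)
CHUNK = 16         # assumed action-chunk length

def mlp(inp, out, hidden=512):
    return nn.Sequential(nn.Linear(inp, hidden), nn.SiLU(),
                         nn.Linear(hidden, hidden), nn.SiLU(),
                         nn.Linear(hidden, out))

class ActionAutoencoder(nn.Module):
    """Maps raw action chunks into a latent space matched to the visual latents."""
    def __init__(self):
        super().__init__()
        self.enc = mlp(ACTION_DIM * CHUNK, LATENT_DIM)
        self.dec = mlp(LATENT_DIM, ACTION_DIM * CHUNK)

class VelocityField(nn.Module):
    """Unconditional velocity network: input is only (x_t, t). There is no
    separate vision-conditioning branch because vision enters as the flow source."""
    def __init__(self):
        super().__init__()
        self.net = mlp(LATENT_DIM + 1, LATENT_DIM)
    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def vita_losses(z_vis, actions, ae, vf, ode_steps=5):
    B = z_vis.shape[0]
    z_act = ae.enc(actions.reshape(B, -1))       # flow target: latent actions
    # Flow matching with the visual latent as source (no Gaussian noise):
    # straight-line interpolant and constant velocity target.
    t = torch.rand(B, 1)
    x_t = (1 - t) * z_vis + t * z_act
    v_target = z_act - z_vis
    fm_loss = ((vf(x_t, t) - v_target) ** 2).mean()
    # Flow latent decoding: Euler-solve the ODE starting from z_vis and
    # backpropagate the action reconstruction loss through every solver step.
    x, dt = z_vis, 1.0 / ode_steps
    for i in range(ode_steps):
        ti = torch.full((B, 1), i * dt)
        x = x + dt * vf(x, ti)                   # gradients flow through each step
    recon = ae.dec(x).reshape_as(actions)
    rec_loss = ((recon - actions) ** 2).mean()
    return fm_loss, rec_loss
```

A hypothetical training step, using random tensors as stand-ins for encoded camera features and demonstrated action chunks:

```python
ae, vf = ActionAutoencoder(), VelocityField()
opt = torch.optim.Adam([*ae.parameters(), *vf.parameters()], lr=1e-4)
z_vis = torch.randn(8, LATENT_DIM)              # stand-in for visual latents
actions = torch.randn(8, CHUNK, ACTION_DIM)     # stand-in for action chunks
fm, rec = vita_losses(z_vis, actions, ae, vf)
(fm + rec).backward()
opt.step()
```

At inference, only the ODE solve and the action decoder run; since the velocity network takes no conditioning input, each step is a single small forward pass, which is consistent with the reported speedup over conditioned baselines.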
Supplementary Material: zip
Primary Area: applications to robotics, autonomy, planning
Submission Number: 17070