Keywords: vision-language-action models, robotics, video prediction, imitation learning
TL;DR: We present ViPRA, a framework that turns video prediction models into robot policies by learning latent action priors from unlabeled videos and refining them with flow-matching to enable high-frequency control with minimal labeled data.
Abstract: Can we turn a video prediction model into a robot policy?
Videos, including those of humans or teleoperated robots, capture rich physical interactions. However, most of them lack labeled actions, which limits their use in robot learning. We present **ViPRA**, a simple pretraining-finetuning framework that learns continuous robot control from these actionless videos. Instead of directly predicting actions, we train a video-language model to predict *both future visual observations and motion-centric latent actions*, which serve as intermediate representations of scene dynamics. We train these latent actions using perceptual losses and optical flow consistency to ensure they reflect physically grounded behavior. For downstream control, we introduce a chunked *flow-matching decoder* that maps latent actions to robot-specific continuous action sequences, using only 100 to 200 teleoperated demonstrations. This approach avoids expensive action annotation, supports generalization across embodiments, and enables smooth, high-frequency continuous control via chunked action decoding. Unlike prior latent action works that treat pretraining as autoregressive policy learning, ViPRA explicitly models both what changes and how. Our method outperforms strong baselines, with a 16\% gain on the SIMPLER benchmark and a 14\% improvement across real world manipulation tasks. We will release models and code [here](https://vipra-robot.github.io).
Submission Number: 8