Pixel Motion as Universal Representation for Robot Control

Published: 11 Sept 2025 (modified: 01 Feb 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: vision-language-action, universal motion representations
TL;DR: Learning language-tied, universal, and interpretable motion features that can be mapped to real-world actions in robotics tasks
Abstract: We present LangToMo, a vision-language-action framework structured as a dual-system architecture that uses pixel motion forecasts as intermediate representations. Our high-level $\textit{System 2}$, an image diffusion model, generates text-conditioned pixel motion sequences from a single frame and past motion to guide robot control. Pixel motion—a universal, interpretable, and motion-centric representation—can be extracted from videos in a weakly-supervised manner, enabling diffusion model training on any video-caption data. Treating the generated pixel motion as a largely embodiment-agnostic $\textit{universal representation}$, our embodiment-aware $\textit{System 1}$ module translates it into robot actions via motion-to-action mapping functions, which can be either hand-crafted or learned with minimal supervision. System 2 operates as a high-level policy applied at sparse temporal intervals, while System 1 acts as a low-level policy at dense temporal intervals. This hierarchical decoupling enables flexible, scalable, and generalizable robot control under both unsupervised and supervised settings, bridging the gap between language, motion, and action. Visualizations at https://anonymous.4open.science/w/LangToMo.
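The hierarchical decoupling described in the abstract can be pictured with a minimal control-loop sketch. Everything below is a hypothetical illustration, not the paper's interface: the function names, the flow-like H×W×2 motion field, the `high_level_every` interval, and the 7-D action are all assumptions made for clarity.

```python
import numpy as np

def system2_generate_motion(frame, past_motion, instruction):
    """Placeholder for the high-level diffusion policy (System 2): returns a
    pixel-motion field (H x W x 2, flow-like) conditioned on language, the
    current frame, and past motion. Zeros stand in for a real model."""
    return np.zeros(frame.shape[:2] + (2,))

def motion_to_action(pixel_motion, robot_state):
    """Placeholder for the embodiment-aware System 1 mapping from pixel motion
    and robot state to a low-level action (assumed here: 6-DoF delta + gripper)."""
    return np.zeros(7)

def control_loop(get_frame, get_state, send_action, instruction,
                 high_level_every=10, steps=100):
    """System 2 runs at sparse intervals; System 1 runs at every step."""
    past_motion = None
    pixel_motion = None
    for t in range(steps):
        if t % high_level_every == 0:  # sparse high-level policy (System 2)
            frame = get_frame()
            pixel_motion = system2_generate_motion(frame, past_motion, instruction)
            past_motion = pixel_motion
        action = motion_to_action(pixel_motion, get_state())  # dense low-level policy (System 1)
        send_action(action)
```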
Primary Area: applications to robotics, autonomy, planning
Submission Number: 3947