Keywords: Vision-language-action Model, Unified Multimodal Model
Abstract: Vision-language-action (VLA) models aim to understand natural language instructions and visual observations and execute the corresponding actions as embodied agents. Recent advancements have integrated future images into the understanding-action loop, enabling foresight-driven policies that reduce abstract action prediction to a more tractable inverse kinematics problem. However, existing models either rely on external experts for modality unification or treat image generation and action prediction as separate processes, limiting the benefits of direct synergy between these tasks. In this work, we propose Unified Diffusion VLAs, which tightly couple understanding, generation, and action in a mutually reinforcing manner. Our method optimizes the generation of actions and images jointly through a synchronous denoising diffusion process, in which action tokens progressively attend to future image tokens. This iterative refinement lets actions evolve from their initialization under sufficient visual guidance, ensuring precise action execution. We introduce a hybrid attention mechanism and the Joint Discrete Denoising Diffusion Process (JD3P), which integrates multiple modalities into a unified denoising trajectory. We also propose a two-stage training pipeline and several inference-time techniques that improve performance and efficiency. Our approach achieves state-of-the-art performance on benchmarks such as CALVIN, LIBERO, and SimplerEnv, and we demonstrate its effectiveness through ablation studies and real-world evaluations.
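To make the synchronous denoising idea concrete, the sketch below illustrates one possible joint denoising loop in the spirit of the abstract's JD3P: future-image and action tokens start fully masked and are unmasked together over a fixed number of steps, with action predictions conditioned on the progressively refined image tokens. This is a minimal illustrative sketch, not the authors' implementation; all names (`model`, `MASK_ID`, `unmask_topk`, the two-logits interface) are assumptions.

```python
# Illustrative sketch of a synchronous discrete denoising loop (hypothetical names).
import torch

MASK_ID = 0      # hypothetical shared [MASK] token id for image and action vocabularies
NUM_STEPS = 8    # number of joint denoising steps (assumed)

def joint_denoise(model, obs_tokens, text_tokens, num_img, num_act):
    """Denoise future-image and action tokens together from a fully masked state."""
    img = torch.full((1, num_img), MASK_ID, dtype=torch.long)
    act = torch.full((1, num_act), MASK_ID, dtype=torch.long)

    for step in range(NUM_STEPS):
        # One unified trajectory: understanding (obs, text) + generation (img) + action (act).
        # A hybrid attention mask inside `model` would let action tokens attend to
        # the progressively denoised future-image tokens.
        img_logits, act_logits = model(obs_tokens, text_tokens, img, act)

        # Confidence-based unmasking: reveal the most confident predictions at this step,
        # keeping the remaining positions masked for later refinement.
        frac = (step + 1) / NUM_STEPS
        img = unmask_topk(img, img_logits, frac, MASK_ID)
        act = unmask_topk(act, act_logits, frac, MASK_ID)

    return act  # final action tokens, decoded downstream into robot commands

def unmask_topk(tokens, logits, frac, mask_id):
    """Reveal the `frac` most confident positions; leave the others masked."""
    probs, preds = logits.softmax(-1).max(-1)
    k = max(1, int(frac * tokens.shape[1]))
    idx = probs.topk(k, dim=-1).indices
    out = torch.full_like(tokens, mask_id)
    out.scatter_(1, idx, preds.gather(1, idx))
    return out
```

The key design choice the sketch highlights is that image and action tokens share one denoising schedule, so each refinement of the predicted future observation immediately informs the action prediction at the same step.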
Primary Area: applications to robotics, autonomy, planning
Submission Number: 12819