Keywords: vision-language-action models, world modeling, diffusion models, robot learning
TL;DR: We introduce dual-stream diffusion with independent noise schedules to jointly model actions and future states, improving VLA model performance.
Abstract: Recently, augmenting Vision-Language-Action (VLA) models with world modeling has shown promise in improving robotic policy learning. However, jointly predicting next-state observations and action sequences remains challenging because of the inherent differences between the two modalities. To address this, we propose DUal-STream diffusion (DUST), a world-model-augmented VLA framework that resolves this modality conflict and enhances the performance of VLA models across diverse tasks. Specifically, we propose a multimodal diffusion transformer architecture that explicitly maintains separate modality streams while still enabling cross-modal knowledge sharing.
In addition, we introduce independent noise perturbations for each modality and a decoupled flow-matching loss. This design enables the model to learn the joint distribution in a bidirectional manner while avoiding the need for a unified latent space. Based on the decoupling of modalities during training, we also introduce a joint sampling method that supports test-time scaling, where action and vision tokens evolve asynchronously at different rates. Through experiments on simulated benchmarks such as RoboCasa and GR-1, DUST achieves up to 6% gains over baseline methods, while our test-time scaling approach provides an additional 2–5% boost. On real-world tasks with the Franka Research 3, DUST improves success rates by 13%, confirming its effectiveness beyond simulation. Furthermore, pre-training on action-free videos from BridgeV2 yields significant transfer gains on RoboCasa, underscoring DUST's potential for large-scale VLA pretraining.
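To make the core training idea concrete, below is a minimal PyTorch-style sketch of a flow-matching step with independent per-modality noise levels and a decoupled loss. It is an illustration only, not the authors' implementation: the `model` signature, the names `action_tokens` / `obs_tokens`, and the linear-interpolation schedule are assumptions based on the abstract.

```python
# Minimal sketch (assumptions, not the paper's code): a dual-stream model that
# returns one velocity prediction per modality, trained with a decoupled
# flow-matching loss and independent noise levels for actions and observations.
import torch

def dust_training_step(model, action_tokens, obs_tokens):
    """action_tokens, obs_tokens: (B, T, D) latent token sequences (assumed shapes)."""
    B = action_tokens.shape[0]

    # Independent flow-matching times (noise levels) for each modality.
    t_act = torch.rand(B, device=action_tokens.device)
    t_obs = torch.rand(B, device=obs_tokens.device)

    noise_act = torch.randn_like(action_tokens)
    noise_obs = torch.randn_like(obs_tokens)

    # Linear interpolation between noise and data, per modality.
    ta = t_act.view(B, 1, 1)
    to = t_obs.view(B, 1, 1)
    x_act = (1 - ta) * noise_act + ta * action_tokens
    x_obs = (1 - to) * noise_obs + to * obs_tokens

    # Dual-stream transformer: separate streams with cross-modal attention inside,
    # conditioned on both noise levels (hypothetical signature).
    v_act, v_obs = model(x_act, x_obs, t_act, t_obs)

    # Decoupled flow-matching targets and losses, summed across modalities.
    target_act = action_tokens - noise_act
    target_obs = obs_tokens - noise_obs
    loss = ((v_act - target_act) ** 2).mean() + ((v_obs - target_obs) ** 2).mean()
    return loss
```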
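The asynchronous joint sampling described in the abstract could look roughly like the following sketch, where action tokens are integrated with more Euler steps than vision tokens. Step counts, the Euler integrator, and the update schedule are illustrative assumptions; the paper's actual sampler may differ.

```python
# Minimal sketch (assumptions): joint sampling where the two streams evolve at
# different rates, e.g. vision tokens update once every few action updates.
import torch

@torch.no_grad()
def dust_joint_sampling(model, action_shape, obs_shape,
                        n_act_steps=10, n_obs_steps=5, device="cpu"):
    assert n_act_steps % n_obs_steps == 0
    ratio = n_act_steps // n_obs_steps

    # Both streams start from pure noise.
    x_act = torch.randn(action_shape, device=device)
    x_obs = torch.randn(obs_shape, device=device)
    t_act, t_obs = 0.0, 0.0
    dt_act, dt_obs = 1.0 / n_act_steps, 1.0 / n_obs_steps
    B = action_shape[0]

    for i in range(n_act_steps):
        v_act, v_obs = model(
            x_act, x_obs,
            torch.full((B,), t_act, device=device),
            torch.full((B,), t_obs, device=device),
        )
        # Action tokens advance every iteration.
        x_act = x_act + dt_act * v_act
        t_act += dt_act
        # Vision tokens advance at a slower rate.
        if (i + 1) % ratio == 0:
            x_obs = x_obs + dt_obs * v_obs
            t_obs += dt_obs
    return x_act, x_obs
```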
Primary Area: applications to robotics, autonomy, planning
Submission Number: 19263