Keywords: Latent Action Model, World Model, Reinforcement Learning, Video Generation Model
Abstract: Adapting pre-trained video generation models into controllable world models via *latent actions* is a promising step towards creating generalist world models. The dominant paradigm adopts a two-stage approach that trains the latent action model (LAM) and the world model separately, resulting in redundant training and limiting their potential for co-adaptation. A conceptually simple and appealing idea is to directly replace the forward dynamics model in the LAM with a powerful world model and train them jointly, but this is non-trivial and prone to representational collapse. In this work, we propose **CoLA-World**, which for the first time successfully realizes this synergistic paradigm, resolving the core challenge of joint learning through a critical warm-up phase that effectively aligns the representations of the from-scratch LAM with the pre-trained world model. This unlocks a co-evolution cycle: the world model acts as a knowledgeable tutor, providing gradients to shape a high-quality LAM, while the LAM offers a more precise and adaptable control interface to the world model. Empirically, CoLA-World matches or outperforms prior two-stage methods in both video simulation quality and downstream visual planning, establishing a robust and efficient new paradigm for the field.
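The recipe the abstract describes, warming up a from-scratch LAM against a frozen pre-trained world model before training both jointly, can be illustrated with a minimal sketch. Everything below (the toy MLP architectures, the MSE reconstruction loss, the synthetic data, and all hyperparameters) is an illustrative assumption, not the paper's actual implementation.

```python
"""Toy sketch of the two-phase training recipe described in the abstract:
(1) warm-up: freeze the pre-trained world model and train only the LAM so
its latent actions land in a space the world model can already consume;
(2) joint training: unfreeze both so they co-evolve.
All module shapes, losses, and step counts are illustrative assumptions."""
import torch
import torch.nn as nn


class LatentActionModel(nn.Module):
    """Toy inverse-dynamics encoder: infers a latent action from (o_t, o_{t+1})."""
    def __init__(self, obs_dim=64, act_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2 * obs_dim, 128), nn.ReLU(), nn.Linear(128, act_dim)
        )

    def forward(self, obs_t, obs_next):
        return self.encoder(torch.cat([obs_t, obs_next], dim=-1))


class WorldModel(nn.Module):
    """Toy forward-dynamics model standing in for a pre-trained video generator."""
    def __init__(self, obs_dim=64, act_dim=8):
        super().__init__()
        self.dynamics = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 128), nn.ReLU(), nn.Linear(128, obs_dim)
        )

    def forward(self, obs_t, latent_action):
        return self.dynamics(torch.cat([obs_t, latent_action], dim=-1))


def batch(n=32, obs_dim=64):
    """Placeholder for real consecutive video-frame pairs (o_t, o_{t+1})."""
    return torch.randn(n, obs_dim), torch.randn(n, obs_dim)


lam = LatentActionModel()
world_model = WorldModel()  # in the paper, a pre-trained video generation model

# Phase 1: warm-up. The world model is frozen; gradients flow through it
# into the LAM, aligning the LAM's latent-action space with the dynamics
# the pre-trained model already knows and avoiding representational collapse.
world_model.requires_grad_(False)
opt_lam = torch.optim.Adam(lam.parameters(), lr=1e-4)
for step in range(200):
    obs_t, obs_next = batch()
    pred_next = world_model(obs_t, lam(obs_t, obs_next))
    loss = nn.functional.mse_loss(pred_next, obs_next)
    opt_lam.zero_grad(); loss.backward(); opt_lam.step()

# Phase 2: joint training ("co-evolution"). Both models update together:
# the world model's gradients keep shaping the LAM, while the LAM provides
# an increasingly precise control interface for the world model.
world_model.requires_grad_(True)
opt_joint = torch.optim.Adam(
    list(lam.parameters()) + list(world_model.parameters()), lr=1e-5
)
for step in range(200):
    obs_t, obs_next = batch()
    pred_next = world_model(obs_t, lam(obs_t, obs_next))
    loss = nn.functional.mse_loss(pred_next, obs_next)
    opt_joint.zero_grad(); loss.backward(); opt_joint.step()
```

The key design choice the sketch mirrors is that the warm-up optimizer touches only the LAM's parameters, so the pre-trained world model serves purely as a fixed teacher until the two representations are aligned.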
Primary Area: reinforcement learning
Submission Number: 20417