AdaWorld: Learning Adaptable World Models with Latent Actions

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: A world model pretrained on latent actions that enables efficient adaptation and effective planning with minimal interactions and finetuning.
Abstract: World models aim to learn action-controlled future prediction and have proven essential for the development of intelligent agents. However, most existing world models rely heavily on substantial action-labeled data and costly training, making it challenging to adapt to novel environments with heterogeneous actions through limited interactions, which hinders their applicability across broader domains. To overcome this limitation, we propose AdaWorld, a world model learning approach that enables efficient adaptation. The key idea is to incorporate action information during the pretraining of world models. This is achieved by extracting latent actions from videos in a self-supervised manner, capturing the most critical transitions between frames. We then develop an autoregressive world model that conditions on these latent actions. This learning paradigm yields highly adaptable world models, facilitating efficient transfer and learning of new actions even with limited interactions and finetuning. Our comprehensive experiments across multiple environments demonstrate that AdaWorld achieves superior performance in both simulation quality and visual planning.
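To make the pretraining idea concrete, below is a minimal PyTorch sketch of self-supervised latent action learning: an encoder infers a continuous latent action from a pair of consecutive frames, and a predictor must reconstruct the next frame from the current frame plus that latent action, so the latent is forced to capture the transition. All module names, layer sizes, and the simple conv/FiLM-style architecture here are illustrative assumptions, not the paper's actual implementation, which builds on an autoregressive diffusion world model (see the linked code).

```python
# Minimal sketch of self-supervised latent action learning (illustrative
# assumptions only; not AdaWorld's actual architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentActionEncoder(nn.Module):
    """Infers a continuous latent action from two consecutive frames."""
    def __init__(self, action_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            # The two RGB frames are stacked along the channel axis (3 + 3 = 6).
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, action_dim),
        )

    def forward(self, frame_t: torch.Tensor, frame_t1: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([frame_t, frame_t1], dim=1))

class ActionConditionedPredictor(nn.Module):
    """Predicts the next frame from the current frame and a latent action."""
    def __init__(self, action_dim: int = 32):
        super().__init__()
        self.enc = nn.Conv2d(3, 64, 3, padding=1)
        # Project the latent action to a per-channel shift (FiLM-like conditioning).
        self.cond = nn.Linear(action_dim, 64)
        self.dec = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, frame_t: torch.Tensor, latent_action: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.enc(frame_t))
        h = h + self.cond(latent_action)[:, :, None, None]
        return self.dec(h)

# Self-supervised objective: the latent action must carry exactly the
# information needed to explain the transition between the two frames,
# since the predictor only sees the current frame otherwise.
encoder, predictor = LatentActionEncoder(), ActionConditionedPredictor()
frame_t = torch.rand(8, 3, 64, 64)   # dummy current frames
frame_t1 = torch.rand(8, 3, 64, 64)  # dummy next frames
z = encoder(frame_t, frame_t1)
loss = F.mse_loss(predictor(frame_t, z), frame_t1)
loss.backward()
```

After pretraining on unlabeled videos this way, the latent action space can serve as the conditioning interface of a world model; adapting to a new environment then reduces to mapping its raw controls onto latent actions with limited interactions and finetuning.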
Lay Summary: How can we achieve human-like adaptability in unseen environments with new action controls? In this paper, we answer this question by pretraining AdaWorld with continuous latent actions extracted from videos of thousands of environments. This enables zero-shot action transfer, fast adaptation, and effective planning with minimal finetuning.
Link To Code: https://github.com/Little-Podi/AdaWorld
Primary Area: Applications
Keywords: World Model, Latent Action, Video Generation, Diffusion Model, Decision Making, Embodied AI
Submission Number: 1014