Reward-free World Models for Online Imitation Learning

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Imitation learning (IL) enables agents to acquire skills directly from expert demonstrations, providing a compelling alternative to reinforcement learning. However, prior online IL approaches struggle with complex tasks characterized by high-dimensional inputs and intricate dynamics. In this work, we propose a novel approach to online imitation learning that leverages reward-free world models. Our method learns environmental dynamics entirely in latent spaces without reconstruction, enabling efficient and accurate modeling. We adopt the inverse soft-Q learning objective, reformulating the optimization process in the Q-policy space to mitigate the instability associated with traditional optimization in the reward-policy space. By employing a learned latent dynamics model and planning for control, our approach consistently achieves stable, expert-level performance on tasks with high-dimensional observation or action spaces and intricate dynamics. We evaluate our method on a diverse set of benchmarks, including DMControl, MyoSuite, and ManiSkill2, demonstrating superior empirical performance compared to existing approaches.
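For readers unfamiliar with the inverse soft-Q formulation, the sketch below illustrates the general shape of an IQ-Learn-style objective over expert latent transitions. It is a minimal illustration under stated assumptions, not the paper's implementation: the `q_net(s, a)` critic, the `policy.sample(s)` interface (returning an action and its log-probability with shape `(batch, 1)`), and the `gamma`/`alpha` values are all hypothetical.

```python
import torch

def soft_value(q_net, policy, s):
    # Soft value V(s) = E_{a~pi}[Q(s, a) - log pi(a|s)],
    # estimated here with a single sampled action.
    a, log_pi = policy.sample(s)                       # assumed interface
    return q_net(s, a).squeeze(-1) - log_pi.squeeze(-1)

def inverse_soft_q_loss(q_net, policy, expert_batch, gamma=0.99, alpha=0.5):
    """IQ-Learn-style loss on a batch of expert latent transitions (s, a, s').

    Optimizes directly in Q-policy space: no explicit reward model is fit.
    Uses a chi^2-regularized objective and the common telescoping estimate
    of the initial-state value term.
    """
    s, a, s_next = expert_batch
    v = soft_value(q_net, policy, s)
    v_next = soft_value(q_net, policy, s_next)

    # Implicit reward recovered from the critic: r(s, a) = Q(s, a) - gamma * V(s').
    reward = q_net(s, a).squeeze(-1) - gamma * v_next

    # phi(x) = x - x^2 / (4 * alpha): concave regularizer from the chi^2 divergence.
    expert_term = (reward - reward.pow(2) / (4 * alpha)).mean()

    # (1 - gamma) * E[V(s0)] replaced by the telescoping estimate E[V(s) - gamma * V(s')].
    value_term = (v - gamma * v_next).mean()

    return -(expert_term - value_term)  # minimize the negated objective
```

Optimizing a single objective over Q and the policy this way sidesteps the adversarial min-max over explicit rewards, which is the instability in reward-policy space the abstract refers to.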
Lay Summary: Teaching robots and AI systems to perform complex tasks by simply watching human experts, known as imitation learning, is a powerful idea. However, many current methods struggle when tasks involve complicated environments or high-dimensional data, like video inputs or robotic control. To address this, we developed a new technique that lets AI systems learn from demonstrations without needing predefined rewards or full reconstructions of the environment. Instead, we train a model to understand the “rules” of the environment in a compressed, abstract form. Using this model, the system can plan and act intelligently, much like a human would. We also introduce a more stable training method that avoids common pitfalls in learning how to make decisions. Our approach closely matches expert performance on challenging benchmarks involving simulated robots and manipulation tasks. This makes it a promising step toward more reliable and scalable AI training methods, especially for real-world tasks where designing reward functions or collecting large datasets is difficult.
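To make "plan and act" concrete, the sketch below shows one common sampling-based planning loop (CEM-style model predictive control) over a learned latent dynamics model. Because the setting is reward-free, candidate action sequences are scored with the implicit reward recovered from the learned soft Q-function. This is a hedged illustration: the `dynamics(z, a)`, `q_net(z, a)`, and `policy.sample(z)` interfaces are hypothetical stand-ins, not the released code's API.

```python
import torch

@torch.no_grad()
def plan_action(dynamics, q_net, policy, z0, action_dim, gamma=0.99,
                horizon=5, n_samples=256, n_elites=32, n_iters=3):
    """CEM-style MPC in latent space, scored by the learned soft Q-function."""
    def soft_value(z):
        a, log_pi = policy.sample(z)                   # assumed interface
        return q_net(z, a).squeeze(-1) - log_pi.squeeze(-1)

    mean = torch.zeros(horizon, action_dim)
    std = torch.ones(horizon, action_dim)
    for _ in range(n_iters):
        # Sample candidate action sequences around the current distribution.
        actions = mean + std * torch.randn(n_samples, horizon, action_dim)
        z = z0.expand(n_samples, -1)
        scores = torch.zeros(n_samples)
        for t in range(horizon):
            a_t = actions[:, t]
            z_next = dynamics(z, a_t)                  # roll the latent model forward
            # Implicit reward recovered from Q: r(z, a) = Q(z, a) - gamma * V(z').
            r_t = q_net(z, a_t).squeeze(-1) - gamma * soft_value(z_next)
            scores += (gamma ** t) * r_t
            z = z_next
        scores += (gamma ** horizon) * soft_value(z)   # terminal value bootstrap
        # Refit the sampling distribution to the elite sequences.
        elites = actions[scores.topk(n_elites).indices]
        mean, std = elites.mean(0), elites.std(0) + 1e-6
    return mean[0]  # execute only the first action, then replan (MPC)
```

Replanning at every step keeps the open-loop horizon short, which limits how far errors in the learned dynamics model can compound.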
Link To Code: https://github.com/TobyLeelsz/iqmpc
Primary Area: Reinforcement Learning
Keywords: world models, imitation learning
Submission Number: 5832