Coupled Distributional Random Expert Distillation for World Model Online Imitation Learning

22 Mar 2026 (modified: 11 May 2026) · Withdrawn by Authors · CC BY 4.0
Abstract: Imitation Learning (IL) has achieved remarkable success across domains such as robotics, autonomous driving, and healthcare by enabling agents to learn complex behaviors from expert demonstrations. However, existing IL methods often suffer from instability, particularly when they rely on adversarial reward or value formulations within world model frameworks. In this work, we propose a novel approach to online imitation learning that addresses these limitations with a reward model based on random network distillation (RND) for density estimation. The reward model jointly estimates the expert and behavioral distributions in the latent space of the world model. We evaluate our method on diverse benchmarks, including DMControl, Meta-World, and ManiSkill2, and show that it delivers stable training and expert-level performance in both locomotion and manipulation tasks, improving on the stability of adversarial methods without sacrificing final performance.
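The core mechanism the abstract describes, using RND prediction error as a density signal, can be illustrated with a minimal sketch. This is not the authors' coupled method or their world-model latent space; it is a toy linear/NumPy version with hypothetical names (`target`, `rnd_error`), assuming the standard RND recipe: a fixed random target network, a predictor trained only on expert-like data, and a reward that is higher where the predictor's error is low (i.e., on states resembling the expert distribution).

```python
import numpy as np

rng = np.random.default_rng(0)
dim, feat = 2, 16

# Fixed, randomly initialized target network (never trained).
W_t = rng.normal(size=(dim, feat))

def target(x):
    return np.tanh(x @ W_t)

# Predictor trained to match the target on expert-like data only.
W_p = np.zeros((dim, feat))

def rnd_error(x):
    # Per-sample squared prediction error: low on familiar (expert-like)
    # states, high far from the training distribution.
    return ((x @ W_p - target(x)) ** 2).sum(axis=1)

# Stand-in for expert states (real method would use world-model latents).
expert = rng.normal(size=(2000, dim))
lr = 0.01
for _ in range(500):
    batch = expert[rng.integers(0, len(expert), 64)]
    residual = batch @ W_p - target(batch)
    W_p -= lr * batch.T @ residual / len(batch)  # gradient step on MSE

in_dist = rng.normal(size=(500, dim))           # near expert support
out_dist = rng.normal(size=(500, dim)) + 10.0   # far from expert support
reward_in = -rnd_error(in_dist).mean()
reward_out = -rnd_error(out_dist).mean()
print(reward_in > reward_out)  # expert-like states receive higher reward
```

The coupling in the paper goes further by also estimating the behavioral distribution, so the reward reflects both closeness to the expert and the agent's own visitation density; the sketch above covers only the expert-side RND term.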
Submission Type: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=9fsdvnMWsC
Changes Since Last Submission:
- We have included additional ablation studies for hyperparameters $\zeta$ and $\alpha$ in our manuscript.
- We have provided additional experimental results comparing gradient norms to support our claims regarding training stability.
- We have added comparisons with the **SAIL** and **BC** baselines.
- We have provided a toy example to reinforce the intuition behind our proposed method in the manuscript.
- We have added new ablation studies on world models, MPCs, and coupling, along with additional analysis of hard-exploration problems with coupling.
Assigned Action Editor: ~Yunbo_Wang1
Submission Number: 8036