Keywords: Model-based reinforcement learning, Offline learning, Online learning, Active learning, Exploration
TL;DR: In MBRL, offline agents suffer performance degradation relative to online agents due to out-of-distribution (OOD) states encountered at test time; we show that this can be mitigated through limited additional online interactions or exploration data that increase state space coverage.
Abstract: Data collection is crucial for learning robust world models in model-based reinforcement learning.
The most prevalent strategies are to actively collect trajectories by interacting with the environment during online training or to train on offline datasets.
At first glance, the nature of learning task-agnostic environment dynamics makes world models a good candidate for effective offline training. However, the effects of online vs. offline data on world models and thus on the resulting task performance have not been thoroughly studied in the literature. In this work, we investigate both paradigms in model-based settings, conducting experiments on 31 different environments.
First, we showcase that online agents outperform their offline counterparts.
We identify a key challenge behind the performance degradation of offline agents: encountering out-of-distribution (OOD) states at test time.
This issue arises because, without the self-correction mechanism available to online agents, offline datasets with limited state space coverage induce a mismatch between the agent's imagination and real rollouts, compromising policy training.
We demonstrate that this issue can be mitigated by allowing additional online interactions on a fixed or adaptive schedule, restoring the performance of online training with a limited amount of interaction data.
We also show that incorporating exploration data helps mitigate the performance degradation of offline agents. Based on these insights, we recommend including exploration data when collecting large datasets, as current efforts predominantly focus on expert data alone.
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11532