Exploiting Model Errors for Exploration in Model-Based Reinforcement Learning

Published: 17 Jul 2025, Last Modified: 06 Sep 2025 · EWRL 2025 Poster · CC BY 4.0
Keywords: Reinforcement Learning, Model-based Reinforcement Learning, Exploration
Abstract: We address the problem of exploration in model-based reinforcement learning (MBRL). We present Model-Corrective eXploration (MCX), a novel approach to exploration in MBRL that is agnostic to the model representation and scales to complex environments. MCX learns to generalise model prediction errors in order to hypothesise where else the model might be wrong, and uses these hypotheses during planning to drive exploration. We demonstrate the efficacy of our method on visual control tasks with the state-of-the-art MBRL algorithm DreamerV3.
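The abstract gives no implementation details, so the following is a purely illustrative sketch of the general idea it describes: fit a predictor to observed model errors so it generalises to unvisited states, then plan toward states where the predicted error is largest. All names, the toy 1-D dynamics, and the linear error regressor are assumptions for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: true 1-D dynamics and an imperfect learned model.
def f_true(s, a):
    return s + a + 0.5 * np.sin(3.0 * s)  # true transition

def f_model(s, a):
    return s + a                           # learned model, misses the sin term

# Simple feature map for a linear error regressor (an assumption; any
# function approximator could play this role).
def features(s):
    return np.stack([np.ones_like(s), s, np.sin(3.0 * s), np.cos(3.0 * s)], axis=-1)

# Fit the regressor to model errors observed at visited states, so it can
# "hypothesise" errors at states not yet visited.
states = rng.uniform(-1.0, 1.0, size=64)
actions = rng.uniform(-0.1, 0.1, size=64)
errors = f_true(states, actions) - f_model(states, actions)
w, *_ = np.linalg.lstsq(features(states), errors, rcond=None)

def predicted_error(s):
    return features(s) @ w

# Use the hypothesised error magnitude as a planning bonus: among candidate
# actions, pick the one whose imagined successor state (under the learned
# model) has the largest predicted model error.
def explore_action(s, candidates):
    successors = np.array([f_model(s, a) for a in candidates])
    return candidates[int(np.argmax(np.abs(predicted_error(successors))))]

best = explore_action(0.0, np.linspace(-0.1, 0.1, 5))
```

In this toy case the regressor recovers the missing `0.5 * sin(3s)` term exactly, so planning is steered toward states where the learned model is most wrong; a full MBRL agent would instead combine such a bonus with task reward inside its planner.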
Confirmation: I understand that authors of each paper submitted to EWRL may be asked to review 2-3 other submissions to EWRL.
Serve As Reviewer: ~Jared_Swift1
Track: Regular Track: unpublished work
Submission Number: 123