On Efficient Bayesian Exploration in Model-Based Reinforcement Learning

Published: 02 Jul 2025 · Last Modified: 02 Jul 2025 · Accepted by TMLR · CC BY 4.0
Abstract: In this work, we address the challenge of data-efficient exploration in reinforcement learning by examining existing principled, information-theoretic approaches to intrinsic motivation. Specifically, we focus on a class of exploration bonuses that targets epistemic uncertainty rather than the aleatoric noise inherent in the environment. We prove that these bonuses naturally signal epistemic information gains and converge to zero once the agent becomes sufficiently certain about the environment’s dynamics and rewards, thereby aligning exploration with genuine knowledge gaps. Our analysis provides formal guarantees for information-gain (IG) based approaches, which previously lacked theoretical grounding. To enable practical use, we also discuss tractable approximations via sparse variational Gaussian Processes, Deep Kernels, and Deep Ensemble models. We then outline a general framework, Predictive Trajectory Sampling with Bayesian Exploration (PTS-BE), which integrates model-based planning with information-theoretic bonuses to achieve sample-efficient deep exploration. We empirically demonstrate that PTS-BE substantially outperforms a range of baselines across a variety of environments characterized by sparse rewards and/or purely exploratory tasks.
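To make the abstract's idea concrete, the following is a minimal illustrative sketch (not the authors' code) of how an epistemic, disagreement-based bonus from a Deep Ensemble of dynamics models could be combined with extrinsic reward when scoring sampled trajectories during model-based planning. The `predict` interface, the `reward_model`, and the `beta` weight are hypothetical placeholders assumed for illustration.

```python
# Illustrative sketch: ensemble-disagreement bonus for trajectory scoring.
# All model interfaces below (predict, reward_model, beta) are assumptions,
# not the paper's actual implementation.
import numpy as np

def epistemic_bonus(ensemble, state, action):
    """Variance across ensemble mean-predictions approximates epistemic
    uncertainty; it shrinks toward zero once the models agree on the dynamics."""
    preds = np.stack([m.predict(state, action) for m in ensemble])  # (M, state_dim)
    return preds.var(axis=0).sum()

def score_trajectory(ensemble, reward_model, state, actions, beta=1.0):
    """Roll out a candidate action sequence and accumulate extrinsic reward
    plus the intrinsic exploration bonus at each step."""
    total = 0.0
    for a in actions:
        total += reward_model.predict(state, a) + beta * epistemic_bonus(ensemble, state, a)
        state = ensemble[0].predict(state, a)  # any member can drive the rollout
    return total
```

In a trajectory-sampling planner, many candidate action sequences would be scored this way and the highest-scoring one executed, so exploration is steered toward regions where the ensemble still disagrees.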
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: The camera-ready version includes adjustments suggested by the Action Editor, including some minor rephrasing in the abstract and Section 1, and a new set of experiments on the Ant Maze environment.
Supplementary Material: zip
Assigned Action Editor: ~Mirco_Mutti1
Submission Number: 4509