On Efficient Bayesian Exploration in Model-Based Reinforcement Learning

TMLR Paper 4509 Authors

18 Mar 2025 (modified: 05 Jun 2025) · Decision pending for TMLR · CC BY 4.0
Abstract: In this work, we address the challenge of data-efficient exploration in reinforcement learning by developing a principled, information-theoretic approach to intrinsic motivation. Specifically, we study a class of exploration bonuses that targets epistemic uncertainty rather than the aleatoric noise inherent in the environment. We prove that these bonuses naturally signal epistemic information gains and converge to zero once the agent becomes sufficiently certain about the environment's dynamics and rewards, thereby aligning exploration with genuine knowledge gaps. Our analysis provides formal guarantees for information-gain (IG) based approaches, which previously lacked theoretical grounding. To enable practical use, we also discuss tractable approximations via sparse variational Gaussian processes, deep kernels, and deep ensemble models. We then propose a general Predictive Trajectory Sampling with Bayesian Exploration (PTS-BE) framework, which combines model-based planning with the proposed information-theoretic bonuses to achieve sample-efficient deep exploration. Empirically, we demonstrate that PTS-BE substantially outperforms other baselines across a variety of environments characterized by sparse rewards and/or purely exploratory tasks.
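To make the idea of an epistemic (rather than aleatoric) exploration bonus concrete, here is a minimal Python sketch of one of the approximations mentioned above: an ensemble-disagreement bonus computed from a deep-ensemble-style dynamics model. This is an illustrative assumption, not the authors' implementation; the names `EnsembleMember` and `epistemic_bonus` are hypothetical.

```python
# Minimal sketch: ensemble-disagreement bonus targeting epistemic uncertainty.
# The bonus is the variance of the ensemble members' predicted means; it shrinks
# as the members agree (dynamics well identified), while each member's own
# predicted noise (aleatoric) is deliberately ignored.
import numpy as np

rng = np.random.default_rng(0)


class EnsembleMember:
    """Toy probabilistic dynamics model: predicts next-state mean and a
    fixed (aleatoric) noise scale from a random linear map."""

    def __init__(self, state_dim, action_dim):
        self.W = rng.normal(scale=0.5, size=(state_dim, state_dim + action_dim))
        self.log_noise = rng.normal(scale=0.1, size=state_dim)

    def predict(self, state, action):
        x = np.concatenate([state, action])
        mean = self.W @ x
        std = np.exp(self.log_noise)  # aleatoric noise, not used by the bonus
        return mean, std


def epistemic_bonus(ensemble, state, action):
    """Total disagreement of ensemble means at (state, action)."""
    means = np.stack([m.predict(state, action)[0] for m in ensemble])
    return means.var(axis=0).sum()


if __name__ == "__main__":
    ensemble = [EnsembleMember(state_dim=3, action_dim=1) for _ in range(5)]
    s, a = rng.normal(size=3), rng.normal(size=1)
    print("exploration bonus:", epistemic_bonus(ensemble, s, a))
```

In a planning loop such as the PTS-BE framework described above, a bonus of this kind would be added to the (possibly sparse) task reward along imagined trajectories, steering the planner toward transitions about which the model is still uncertain.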
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: The revised version of the manuscript includes several changes, highlighted in blue text, made in response to the reviewers' constructive feedback and suggestions.
Assigned Action Editor: ~Mirco_Mutti1
Submission Number: 4509