Instance-Dependent Fixed-Budget Pure Exploration in Reinforcement Learning

ICLR 2026 Conference Submission 13248 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Reinforcement Learning, MDP, pure exploration, fixed budget
TL;DR: We propose algorithms for fixed-budget pure exploration in reinforcement learning and provide a theoretical analysis of their performance.
Abstract: We study the problem of fixed-budget pure exploration in reinforcement learning. The goal is to identify a near-optimal policy given a fixed budget on the number of interactions with the environment. Unlike the standard PAC setting, we do not require the target error level $\epsilon$ or the failure rate $\delta$ as input. We propose novel algorithms and provide, to the best of our knowledge, the first instance-dependent $\epsilon$-uniform guarantee: a bound on the probability of $\epsilon$-correctness that holds simultaneously for all $\epsilon$ above a budget-dependent threshold. This guarantee characterizes the budget requirements in terms of the problem-specific hardness of exploration. As a core component of our analysis, we derive an $\epsilon$-uniform guarantee for the multiple bandit problem (solving multiple multi-armed bandit instances simultaneously), which may be of independent interest. To enable our analysis, we also develop tools for reward-free exploration in the fixed-budget setting, which we believe will be useful for future work.
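For concreteness, one natural way to formalize the $\epsilon$-uniform guarantee described above is the following sketch; the symbols $\hat{\pi}_T$, $\delta(\epsilon, T)$, and $\epsilon_{\min}(T)$ are illustrative notation, not taken from the paper:

\[
\mathbb{P}\left( V^{\star} - V^{\hat{\pi}_T} \le \epsilon \right) \;\ge\; 1 - \delta(\epsilon, T) \qquad \text{simultaneously for all } \epsilon \ge \epsilon_{\min}(T),
\]

where $\hat{\pi}_T$ is the policy returned after a budget of $T$ interactions, $\delta(\epsilon, T)$ is an instance-dependent error probability, and $\epsilon_{\min}(T)$ is the budget-dependent threshold. The contrast with the PAC setting is that a PAC algorithm takes a single target pair $(\epsilon, \delta)$ as input, whereas here one run of the algorithm yields a bound for every admissible $\epsilon$ at once.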
Supplementary Material: pdf
Primary Area: learning theory
Submission Number: 13248