Regret-Optimal Model-Free Reinforcement Learning for Discounted MDPs with Short Burn-In Time

Published: 21 Sept 2023, Last Modified: 04 Jan 2024 · NeurIPS 2023 poster
Keywords: reinforcement learning theory, regret minimization, minimax optimality
TL;DR: We propose the first model-free algorithm that achieves minimax regret optimality in the infinite-horizon discounted setting, with the additional benefit of a low burn-in time.
Abstract: A crucial problem in reinforcement learning is learning the optimal policy. We study this problem in tabular infinite-horizon discounted Markov decision processes under the online setting. Existing algorithms either fail to achieve regret optimality or incur high memory and computational costs. Moreover, all existing optimal algorithms require a long burn-in time to achieve optimal sample efficiency; that is, their optimality is not guaranteed unless the sample size surpasses a high threshold. We address both open problems by introducing a model-free algorithm that employs variance reduction together with a novel technique that switches the execution policy in a slow-yet-adaptive manner. This is the first regret-optimal model-free algorithm in the discounted setting, with the additional benefit of a low burn-in time.
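To make the two ingredients in the abstract concrete, here is a minimal sketch of variance-reduced Q-learning with infrequent policy switching on a tabular discounted MDP. This is not the authors' algorithm: the MDP, step sizes, reference-update schedule, and switching rule below are all illustrative assumptions, chosen only to show the general shape of the technique.

```python
# Sketch: variance-reduced Q-learning with slow policy switching.
# All hyperparameters and the switching rule are illustrative assumptions,
# not the algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9

# Random tabular MDP: transition kernel P[s, a] and rewards R[s, a].
P = rng.dirichlet(np.ones(S), size=(S, A))  # shape (S, A, S)
R = rng.uniform(size=(S, A))

Q = np.zeros((S, A))
Q_ref = Q.copy()              # reference point for variance reduction
T_ref = np.zeros((S, A))      # batch estimate of the Bellman operator at Q_ref
policy = rng.integers(A, size=S)  # executed policy, switched only occasionally
eps = 0.1                     # exploration rate (assumption)

def estimate_bellman(Q_in, n=200):
    """Estimate (T Q_in)(s, a) by averaging n sampled one-step targets."""
    est = np.zeros((S, A))
    for s in range(S):
        for a in range(A):
            targets = [R[s, a] + gamma * Q_in[rng.choice(S, p=P[s, a])].max()
                       for _ in range(n)]
            est[s, a] = np.mean(targets)
    return est

s = 0
for t in range(1, 20001):
    # Behavior: epsilon-greedy around the slowly switched policy.
    a = rng.integers(A) if rng.random() < eps else policy[s]
    r, s_next = R[s, a], rng.choice(S, p=P[s, a])
    eta = 1.0 / (1.0 + (1 - gamma) * t)  # illustrative step size

    # Variance-reduced update: the noisy one-step target is recentered by the
    # same target evaluated at Q_ref, plus the batch estimate T_ref.
    noisy = r + gamma * Q[s_next].max()
    noisy_ref = r + gamma * Q_ref[s_next].max()
    Q[s, a] += eta * (noisy - noisy_ref + T_ref[s, a] - Q[s, a])
    s = s_next

    # Periodically refresh the reference and only then consider switching the
    # executed policy -- a crude stand-in for "slow-yet-adaptive" switching.
    if t % 2000 == 0:
        Q_ref = Q.copy()
        T_ref = estimate_bellman(Q_ref)
        policy = Q.argmax(axis=1)

print("greedy policy:", Q.argmax(axis=1))
```

The recentering term `noisy - noisy_ref + T_ref[s, a]` is the standard variance-reduction device: it has the same expectation as the raw target but much lower variance once Q_ref is close to Q, while updating the executed policy only at reference refreshes keeps policy switches rare.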
Submission Number: 2623