Track: Research Track
Keywords: Online reinforcement learning, Bayesian-based exploration, model-free algorithms, sampling
Abstract: While Bayesian-based exploration often demonstrates superior empirical performance compared to bonus-based methods in model-based reinforcement learning (RL), its theoretical understanding remains limited in model-free settings. Existing provable algorithms either suffer from computational intractability or rely on stage-wise policy updates, which reduce responsiveness and slow down learning. In this paper, we propose a novel variant of the Q-learning algorithm, referred to as RandomizedQ, which integrates sampling-based exploration with agile, step-wise policy updates for episodic tabular RL. We establish a sublinear regret bound $\widetilde{O}(\sqrt{H^5SAT})$, where $S$ is the number of states, $A$ is the number of actions, $H$ is the episode length, and $T$ is the total number of episodes. In addition, we present a logarithmic regret bound $O(\frac{H^6SA}{\Delta_{\min}}\log^5(SAHT))$ when the optimal Q-function has a positive sub-optimality gap $\Delta_{\min}$. Empirically, RandomizedQ outperforms existing Q-learning variants with both bonus-based and Bayesian-based exploration on standard benchmarks.
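To illustrate the kind of method the abstract describes, below is a minimal sketch of tabular Q-learning with sampling-based (randomized) exploration and step-wise policy updates. The perturbation scheme, step-size choice, and names such as `RandomizedQAgent` and `noise_scale` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Minimal sketch: episodic tabular Q-learning with sampling-based exploration.
# The noise schedule and step sizes below are assumed for illustration only.

class RandomizedQAgent:
    def __init__(self, n_states, n_actions, horizon, noise_scale=1.0, seed=0):
        self.S, self.A, self.H = n_states, n_actions, horizon
        self.noise_scale = noise_scale          # exploration scale (assumed)
        self.rng = np.random.default_rng(seed)
        # Optimistic initialization: one Q-table per step h of the episode.
        self.Q = np.full((horizon, n_states, n_actions), float(horizon))
        self.visits = np.zeros((horizon, n_states, n_actions), dtype=int)

    def act(self, h, s):
        # Step-wise policy: greedy w.r.t. the current randomized Q-values,
        # so the policy changes immediately after every update.
        return int(np.argmax(self.Q[h, s]))

    def update(self, h, s, a, r, s_next):
        self.visits[h, s, a] += 1
        n = self.visits[h, s, a]
        alpha = (self.H + 1) / (self.H + n)      # standard Q-learning step size
        # Value at the next step (zero beyond the horizon).
        v_next = 0.0 if h + 1 == self.H else np.max(self.Q[h + 1, s_next])
        # Sampling-based exploration: perturb the target with Gaussian noise
        # whose scale shrinks with the visit count (assumed form).
        noise = self.rng.normal(0.0, self.noise_scale * self.H / np.sqrt(n))
        target = r + v_next + noise
        self.Q[h, s, a] = (1 - alpha) * self.Q[h, s, a] + alpha * target


if __name__ == "__main__":
    # Toy usage on a random episodic MDP with S=5 states, A=2 actions, H=3.
    S, A, H, T = 5, 2, 3, 2000
    rng = np.random.default_rng(1)
    P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel P[s, a, s']
    R = rng.uniform(size=(S, A))                 # mean rewards in [0, 1]
    agent = RandomizedQAgent(S, A, H)
    for _ in range(T):
        s = 0
        for h in range(H):
            a = agent.act(h, s)
            r = R[s, a]
            s_next = rng.choice(S, p=P[s, a])
            agent.update(h, s, a, r, s_next)
            s = s_next
```

The key contrast with stage-wise methods is visible in `act`: the greedy policy is recomputed from the freshest Q-values at every step rather than being frozen for a batch of episodes.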
Submission Number: 87