Bilinear Exponential Family of MDPs: Frequentist Regret Bound with Tractable Exploration $\&$ Planning

16 May 2022 (modified: 03 Jul 2024), NeurIPS 2022 Submission
Keywords: Reinforcement learning, bilinear MDP, frequentist regret, tractable optimism
TL;DR: We observe that a generic family of bilinear exponential family MDPs admits a linear value function without further assumptions. We propose a modification of RLSVI for this setting and prove a regret bound that improves over the previous literature.
Abstract: We study the problem of episodic reinforcement learning in continuous state-action spaces with unknown rewards and transitions. Specifically, we consider the setting where the rewards and transitions are modeled using parametric bilinear exponential families. We propose an algorithm, $\texttt{BEF-RLSVI}$, that a) uses penalized maximum likelihood estimators to learn the unknown parameters, b) injects calibrated Gaussian noise into the reward parameter to ensure exploration, and c) leverages linearity of the exponential family with respect to an underlying RKHS to perform tractable planning. We further provide a frequentist regret analysis of $\texttt{BEF-RLSVI}$ that yields an upper bound of $\tilde{\mathcal{O}}(\sqrt{d^3H^3K})$, where $d$ is the dimension of the parameters, $H$ is the episode length, and $K$ is the number of episodes. Our analysis improves the existing bounds for the bilinear exponential family of MDPs by $\sqrt{H}$ and removes the handcrafted clipping deployed in existing $\texttt{RLSVI}$-type algorithms. Our regret bound is order-optimal with respect to $H$ and $K$.
Supplementary Material: pdf
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/bilinear-exponential-family-of-mdps/code)
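The abstract describes the core randomized-value-function mechanism of $\texttt{BEF-RLSVI}$: perturb the penalized MLE of the reward parameter with calibrated Gaussian noise, then plan with the induced linear value function. Below is a minimal, schematic Python sketch of that perturbation step, assuming a feature map and a regularized inverse design matrix; the function names, variance scaling, and toy dimensions are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Schematic sketch of an RLSVI-style perturbation step (illustrative only).
# Assumed: theta_hat_r is a penalized-MLE estimate of the reward parameter,
# and cov_inv stands in for the regularized inverse design matrix.

def perturb_reward_parameter(theta_hat_r, cov_inv, sigma, rng):
    """Sample theta_tilde ~ N(theta_hat_r, sigma^2 * cov_inv).

    The injected Gaussian noise is what drives exploration in
    RLSVI-type algorithms.
    """
    d = theta_hat_r.shape[0]
    noise = rng.multivariate_normal(np.zeros(d), (sigma ** 2) * cov_inv)
    return theta_hat_r + noise

def randomized_value(theta_tilde, phi_sa):
    """Linear value estimate induced by the perturbed parameter and features."""
    return phi_sa @ theta_tilde

# Toy usage with made-up dimensions.
rng = np.random.default_rng(0)
d = 4
theta_hat = rng.normal(size=d)      # stand-in for the penalized MLE
cov_inv = np.eye(d) / 10.0          # stand-in for the inverse design matrix
theta_tilde = perturb_reward_parameter(theta_hat, cov_inv, sigma=1.0, rng=rng)
print(randomized_value(theta_tilde, rng.normal(size=d)))
```

In the paper's setting, the linearity of the exponential family with respect to the underlying RKHS is what makes planning with such a perturbed linear value function tractable; this sketch only illustrates the exploration-by-perturbation idea.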