## RL for Latent MDPs: Regret Guarantees and a Lower Bound

21 May 2021 (modified: 27 Oct 2021) · NeurIPS 2021 Spotlight
Keywords: Reinforcement Learning, Partially Observable, Latent Variable, Lower Bound, Upper Confidence Bound, Expectation-Maximization, Predictive State Representation
TL;DR: Fundamental Limits and Algorithms for Episodic Reinforcement Learning in MDPs with a Latent Context
Abstract: In this work, we consider the regret minimization problem for reinforcement learning in latent Markov Decision Processes (LMDPs). In an LMDP, an MDP is randomly drawn from a set of $M$ possible MDPs at the beginning of the interaction, but the identity of the chosen MDP is not revealed to the agent. We first show that a general instance of LMDPs requires at least $\Omega((SA)^M)$ episodes to even approximate the optimal policy. Then, we consider sufficient assumptions under which learning good policies requires only a polynomial number of episodes. We show that the key property is a notion of separation between the dynamics of the $M$ MDPs. Given sufficient separation, we provide an efficient algorithm with a local guarantee, *i.e.*, a sublinear regret guarantee when the algorithm is given a good initialization. Finally, under standard statistical sufficiency assumptions common in the Predictive State Representation (PSR) literature (e.g., Boots et al., 2011) and a reachability assumption, we show that the need for initialization can be removed.
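To make the setting concrete, below is a minimal Python sketch of the LMDP interaction protocol described in the abstract: at the start of each episode a latent MDP is drawn from $M$ candidates, and the agent acts without ever observing its identity. All names and toy dimensions (`M`, `S`, `A`, `H`, `w`, `P`, `R`) are illustrative assumptions, not taken from the paper or its code.

```python
import numpy as np

# Minimal sketch of an episodic LMDP: one of M tabular MDPs is drawn
# per episode, and its identity stays hidden from the agent.
rng = np.random.default_rng(0)

M, S, A, H = 2, 3, 2, 5                         # MDPs, states, actions, horizon (toy sizes)
w = np.full(M, 1.0 / M)                         # mixing weights over the M latent MDPs
P = rng.dirichlet(np.ones(S), size=(M, S, A))   # P[m, s, a] = next-state distribution
R = rng.random((M, S, A))                       # R[m, s, a] = mean reward in [0, 1]

def run_episode(policy):
    """One episode: a latent MDP is drawn but never revealed to the agent."""
    m = rng.choice(M, p=w)          # latent context, hidden from the agent
    s, total = 0, 0.0
    for h in range(H):
        a = policy(s, h)            # the agent conditions only on (s, h), never on m
        total += R[m, s, a]
        s = rng.choice(S, p=P[m, s, a])
    return total

# Usage: evaluate a uniformly random policy over many episodes.
avg = np.mean([run_episode(lambda s, h: rng.integers(A)) for _ in range(1000)])
print(f"average episodic return: {avg:.3f}")
```

Because the agent never sees `m`, rewards and transitions are effectively sampled from a mixture of MDPs, which is what makes the general problem as hard as the $\Omega((SA)^M)$ lower bound indicates.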