Efficient Reinforcement Learning in Factored MDPs with Application to Constrained RL

Sep 28, 2020 (edited Mar 10, 2021) · ICLR 2021 Poster
  • Keywords: reinforcement learning, factored MDP, constrained RL, learning theory
  • Abstract: We study reinforcement learning (RL) in episodic, factored Markov decision processes (FMDPs). We propose an algorithm called FMDP-BF, which leverages the factorization structure of FMDPs (this structure is sketched after the listing below). The regret of FMDP-BF is shown to be exponentially smaller than that of optimal algorithms designed for non-factored MDPs, and improves on the best previous result for FMDPs~\citep{osband2014near} by a factor of $\sqrt{nH|\mathcal{S}_i|}$, where $|\mathcal{S}_i|$ is the cardinality of the factored state subspace, $H$ is the planning horizon, and $n$ is the number of factored transition components. To show the optimality of our bounds, we also provide a lower bound for FMDPs, which indicates that our algorithm is near-optimal with respect to the number of timesteps $T$, the horizon $H$, and the factored state-action subspace cardinality. Finally, as an application, we study a new formulation of constrained RL, known as RL with knapsack constraints (RLwK), and provide the first sample-efficient algorithm for it, based on FMDP-BF.
  • One-sentence Summary: We propose an efficient algorithm with a near-optimal regret guarantee for factored MDPs, and apply the algorithm to a new formulation of constrained RL.
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
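
For context, here is a minimal sketch of the factored transition structure the abstract refers to, written in the standard FMDP convention (the scope sets $Z_i$ and the component notation are assumptions of this sketch, following the usual convention as in Osband & Van Roy (2014), not details taken from this page):

$$P\bigl(s' \mid s, a\bigr) \;=\; \prod_{i=1}^{n} P_i\bigl(s'_i \mid (s,a)[Z_i]\bigr), \qquad s = (s_1, \ldots, s_n),\ s_i \in \mathcal{S}_i,$$

where each factor $P_i$ is a distribution over the subspace $\mathcal{S}_i$ conditioned only on the scope $Z_i$ of state-action components. Under this assumed model, the number of transition parameters scales roughly with $\sum_i |\mathcal{S}_i|$ times the scope sizes rather than with the full product $|\mathcal{S}| = \prod_i |\mathcal{S}_i|$, which is the structure that regret bounds of the kind stated in the abstract exploit.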