Keywords: continuous mdps, smoothness, representation, regret
Abstract: Achieving the no-regret property for Reinforcement Learning (RL) problems in continuous state- and action-space environments is one of the major open problems in the field. Existing solutions either work under very specific assumptions or achieve bounds that are vacuous in some regimes. Furthermore, many structural assumptions are known to suffer from a provably unavoidable exponential dependence on the time horizon $H$ in the regret, which makes any possible solution infeasible in practice.
In this paper, we identify \textit{local linearity} as the feature that makes Markov Decision Processes (MDPs) both \textit{learnable} (sublinear regret) and \textit{feasible} (regret that is polynomial in $H$).
We define a novel MDP representation class, namely \textit{Locally Linearizable MDPs}, generalizing other representation classes like Linear MDPs and MDPs with low inherent Bellman error.
Then, i) we introduce \textsc{Cinderella}, a no-regret algorithm for this general representation class, and ii) we show that all known learnable and feasible MDP families are representable in this class.
We first show that all known feasible MDPs belong to a family that we call \textit{Mildly Smooth MDPs}. Then, we show how any mildly smooth MDP can be represented as a Locally Linearizable MDP by an appropriate choice of representation. This way, \textsc{Cinderella} is shown to achieve state-of-the-art regret bounds for all previously known (and some new) continuous MDPs for which RL is learnable and feasible.
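For context, the standard (global) Linear MDP condition that the abstract's \textit{Locally Linearizable} class generalizes is sketched below. The notation ($\phi$, $\mu_h$, $\theta_h$) is the usual one from the linear-MDP literature, not taken from this paper, and the remark on locality is only one plausible reading of the abstract.

```latex
% Standard (global) Linear MDP condition; notation is illustrative, not the paper's.
% An MDP is linear in a known feature map \phi : S \times A \to \mathbb{R}^d if, for
% every step h, there exist unknown signed measures \mu_h over S and a vector
% \theta_h \in \mathbb{R}^d such that
\[
  P_h(s' \mid s, a) = \langle \phi(s, a), \mu_h(s') \rangle,
  \qquad
  r_h(s, a) = \langle \phi(s, a), \theta_h \rangle .
\]
% "Local linearity," as invoked in the abstract, plausibly relaxes this requirement so
% that such a decomposition need only hold on local regions of the state-action space
% (under a suitable choice of representation), rather than globally with one feature map.
```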
Submission Number: 18