Implicit Finite-Horizon Approximation and Efficient Optimal Algorithms for Stochastic Shortest Path

21 May 2021, 20:46 (modified: 26 Oct 2021, 17:35) · NeurIPS 2021 Poster
Keywords: reinforcement learning, stochastic shortest path, regret minimization
TL;DR: Implicit Finite-Horizon Approximation of SSP
Abstract: We introduce a generic template for developing regret minimization algorithms in the Stochastic Shortest Path (SSP) model, which achieves minimax optimal regret as long as certain properties are ensured. The key to our analysis is a new technique called implicit finite-horizon approximation, which approximates the SSP model by a finite-horizon counterpart only in the analysis, without requiring explicit implementation. Using this template, we develop two new algorithms: the first is model-free (to our knowledge, the first in the literature) and minimax optimal under strictly positive costs; the second is model-based and minimax optimal even with zero-cost state-action pairs, matching the best existing result from [Tarbouriech et al., 2021b]. Importantly, both algorithms admit highly sparse updates, making them computationally more efficient than all existing algorithms. Moreover, both can be made completely parameter-free.
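The paper's own algorithms and analysis are not reproduced on this page. As a rough, hedged illustration of the model-free SSP setting the abstract refers to (learning a cost-to-go function by interaction, with the goal state's value pinned at zero and episodes ending only upon reaching the goal), here is a toy tabular Q-learning sketch on a hypothetical chain MDP. The environment, learning rate, and exploration scheme are all made-up assumptions for illustration; this is not the paper's minimax-optimal algorithm.

```python
import random

# Hypothetical toy SSP instance (not from the paper): states 0..3, goal
# state 3, two actions. Action 0 advances one state with prob 0.9 (else
# stays put); action 1 always stays put. Every step incurs cost 1, so
# costs are strictly positive, as in the paper's model-free setting.
N_STATES, GOAL, ACTIONS = 4, 3, (0, 1)

def step(s, a, rng):
    """Sample one transition; returns (cost, next_state)."""
    if a == 0 and rng.random() < 0.9:
        s = min(s + 1, GOAL)
    return 1.0, s

def q_learning_ssp(episodes=500, lr=0.2, eps=0.1, seed=0):
    """Plain undiscounted Q-learning for the toy SSP above.

    Q[s][a] estimates the expected cost-to-go; Q[GOAL] is never updated
    and stays 0, reflecting that the goal is absorbing and cost-free.
    """
    rng = random.Random(seed)
    Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != GOAL:  # an SSP episode ends only at the goal
            if rng.random() < eps:
                a = rng.choice(ACTIONS)           # explore
            else:
                a = min(ACTIONS, key=lambda x: Q[s][x])  # greedy on cost
            c, s2 = step(s, a, rng)
            target = c + min(Q[s2])  # Bellman target, no discounting
            Q[s][a] += lr * (target - Q[s][a])
            s = s2
    return Q
```

Under the assumed dynamics the optimal expected cost-to-go from state s is (GOAL - s)/0.9 steps, so learned estimates should decrease as the state nears the goal.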