Optimistic Planning by Regularized Dynamic Programming

Published: 20 Jul 2023, Last Modified: 29 Aug 2023, EWRL 16
Keywords: Discounted Markov Decision Processes, Optimistic Planning, Approximate Dynamic Programming, Online Mirror Descent
TL;DR: We propose an efficient method for optimistic planning in infinite-horizon discounted MDPs based on regularized dynamic programming.
Abstract: We propose a new method for optimistic planning in infinite-horizon discounted Markov decision processes based on the idea of adding regularization to the updates of an otherwise standard approximate value iteration procedure. This technique allows us to avoid contraction and monotonicity arguments typically required by existing analyses of approximate dynamic programming methods, and in particular to use approximate transition functions estimated via least-squares procedures in MDPs with linear function approximation. We use our method to recover known guarantees in tabular MDPs and to provide a computationally efficient algorithm for learning near-optimal policies in discounted linear mixture MDPs from a single stream of experience, and show it achieves near-optimal statistical guarantees.
Already Accepted Paper At Another Venue: already accepted somewhere else
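To make the idea in the abstract concrete, here is a minimal, hypothetical sketch (not the paper's algorithm) of regularized approximate value iteration in a tabular discounted MDP: each iteration performs a Bellman-style backup of the action values and then a KL-regularized (mirror-descent / multiplicative-weights) policy update instead of a hard greedy step. All names (`regularized_value_iteration`, `eta`, `n_iters`) and the exact update rule are illustrative assumptions.

```python
import numpy as np

def regularized_value_iteration(P, r, gamma=0.99, eta=1.0, n_iters=200):
    """Illustrative sketch of KL-regularized (mirror-descent-style) value iteration.

    P: transition tensor of shape (S, A, S); r: reward matrix of shape (S, A).
    The update rule here is an assumption, not the paper's exact procedure.
    """
    S, A = r.shape
    pi = np.full((S, A), 1.0 / A)   # start from the uniform policy
    q = np.zeros((S, A))
    for _ in range(n_iters):
        v = (pi * q).sum(axis=1)    # V(s) = sum_a pi(a|s) Q(s, a)
        q = r + gamma * (P @ v)     # Bellman-style backup of the action values
        # KL-regularized improvement step: pi_new(a|s) ∝ pi(a|s) * exp(eta * Q(s, a)),
        # i.e. an online-mirror-descent update rather than a greedy argmax.
        logits = np.log(pi) + eta * q
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        pi = np.exp(logits)
        pi /= pi.sum(axis=1, keepdims=True)
    return pi, q

# Tiny random MDP to exercise the sketch.
rng = np.random.default_rng(0)
S, A = 5, 3
P = rng.random((S, A, S))
P /= P.sum(axis=-1, keepdims=True)
r = rng.random((S, A))
pi, q = regularized_value_iteration(P, r)
print(pi.round(3))
```

The regularization toward the previous policy is what lets this kind of update avoid the contraction and monotonicity arguments mentioned in the abstract; with an estimated transition model in place of `P` (e.g. a least-squares estimate under linear function approximation), the same structure would apply.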