Keywords: Reinforcement Learning Theory, Distributionally Robust Reinforcement Learning, Finite-Time Convergence Guarantee, Two-Time-Scale Stochastic Approximation, Function Approximation
TL;DR: Finite-time convergence guarantees for model-free robust TD learning and Q-learning with linear function approximation, using single-trajectory data and commonly used uncertainty sets, without restrictive discount-factor assumptions.
Abstract: Distributionally robust reinforcement learning (DRRL) focuses on designing policies that achieve good performance under model uncertainties. In particular, we are interested in maximizing the worst-case long-term discounted reward, where the data available for learning comes from a nominal model, while the deployed environment may deviate from the nominal model within a prescribed uncertainty set. Existing convergence guarantees for robust temporal-difference (TD) learning for policy evaluation are limited to tabular MDPs or depend on restrictive discount-factor assumptions when function approximation is used. We present the first robust TD learning algorithm with linear function approximation, where robustness is measured with respect to a total-variation-distance uncertainty set. Moreover, our algorithm is model-free and does not require generative access to the MDP. It combines a two-time-scale stochastic-approximation update with an outer-loop target-network update. We establish an $\tilde{O}(1/\epsilon^{2})$ sample complexity for obtaining an $\epsilon$-accurate value estimate. Our results close a key gap between the empirical success of robust RL algorithms and the non-asymptotic guarantees enjoyed by their non-robust counterparts. The key ideas in the paper also extend in a relatively straightforward fashion to robust Q-learning with function approximation.
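To make the algorithmic structure in the abstract concrete, the sketch below illustrates one possible two-time-scale robust TD update with an outer-loop target network, driven by a single trajectory from the nominal model. It is a minimal illustration, not the paper's algorithm: the feature map `phi`, the radius `rho`, the step sizes `alpha` and `beta`, the loop counts `K` and `T`, and the simple contamination-style worst-case backup (mixing the nominal next value with a running low-value proxy) are all illustrative assumptions standing in for the exact total-variation worst-case computation.

```python
import numpy as np

def robust_td_sketch(trajectory, phi, d, rho=0.1, gamma=0.95,
                     alpha=0.5, beta=0.05, K=10, T=1000):
    """Hypothetical sketch of two-time-scale robust TD with a target network.

    trajectory: iterator of (s, r, s_next) tuples from the nominal MDP.
    phi: feature map s -> R^d.  rho: uncertainty radius (illustrative).
    """
    theta_bar = np.zeros(d)          # outer-loop target-network parameters
    for k in range(K):               # outer loop: refresh the target network
        theta = theta_bar.copy()     # slow time-scale iterate
        v_min = 0.0                  # fast time-scale low-value proxy (assumption)
        for t, (s, r, s_next) in zip(range(T), trajectory):
            v_next = phi(s_next) @ theta_bar
            # Fast time-scale: track a running lower envelope of the value,
            # used here as a crude stand-in for the worst-case computation.
            v_min += alpha * (min(v_min, v_next) - v_min)
            # Illustrative worst-case backup: shift rho mass toward the
            # low-value proxy (a contamination-style approximation, not the
            # exact TV dual used in the paper).
            robust_target = r + gamma * ((1 - rho) * v_next + rho * v_min)
            # Slow time-scale: semi-gradient TD step toward the robust target.
            theta += beta * (robust_target - phi(s) @ theta) * phi(s)
        theta_bar = theta            # outer-loop target-network update
    return theta_bar
```

The nesting mirrors the structure stated in the abstract: the inner updates run on two time scales against a frozen target network, and the outer loop periodically copies the slow iterate into the target parameters.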
Primary Area: reinforcement learning
Submission Number: 21786