Finite-Sample Analysis of Policy Evaluation for Robust Average Reward Reinforcement Learning

Published: 18 Sept 2025, Last Modified: 10 Dec 2025 · NeurIPS 2025 poster · CC BY-NC-SA 4.0
Keywords: Robust Reinforcement Learning, Policy Evaluation, Finite-Sample Analysis, Average Reward MDPs, Stochastic Approximation
TL;DR: We present the first finite-sample analysis for policy evaluation in robust average-reward reinforcement learning using semi-norm contractions.
Abstract: We present the first finite-sample analysis of policy evaluation in robust average-reward Markov Decision Processes (MDPs). Prior work in this setting has established only asymptotic convergence guarantees, leaving open the question of sample complexity. In this work, we address this gap by showing that the robust Bellman operator is a contraction under a carefully constructed semi-norm, and by developing a stochastic approximation framework with controlled bias. Our approach builds on Multi-Level Monte Carlo (MLMC) techniques to estimate the robust Bellman operator efficiently. To overcome the infinite expected sample complexity inherent in standard MLMC, we introduce a truncation mechanism based on a geometric distribution, ensuring a finite expected sample complexity while maintaining a bias that decays exponentially with the truncation level. Our method achieves the order-optimal sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-2})$ for robust policy evaluation and robust average reward estimation, marking a significant advancement in robust reinforcement learning theory.
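To make the truncation idea concrete, below is a minimal sketch (not the paper's actual algorithm) of a randomized MLMC estimator whose level is drawn from a geometric distribution truncated at a maximum level. The names `truncated_mlmc_estimate` and `estimate_fn` are hypothetical placeholders; `estimate_fn(k)` is assumed to return an empirical estimate of the target quantity (e.g., a robust Bellman backup at a fixed state) computed from $2^k$ i.i.d. samples.

```python
import numpy as np


def truncated_mlmc_estimate(estimate_fn, p=0.5, max_level=10, rng=None):
    """Sketch of a truncated multi-level Monte Carlo (MLMC) estimator.

    estimate_fn(k) is assumed to return an empirical estimate of the target
    quantity built from 2**k i.i.d. samples. The random level N is drawn from
    a geometric distribution truncated at max_level, so the expected number
    of samples is finite, while the truncation bias decays exponentially in
    max_level.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Draw N in {1, ..., max_level} by truncating a Geometric(p) draw.
    level = int(min(rng.geometric(p), max_level))

    # Probability mass of the drawn level under the truncated geometric.
    if level < max_level:
        prob = (1.0 - p) ** (level - 1) * p
    else:
        prob = (1.0 - p) ** (max_level - 1)  # tail mass collapsed onto max_level

    # Coarse estimate plus an importance-weighted telescoping correction.
    coarse = estimate_fn(0)
    correction = estimate_fn(level) - estimate_fn(level - 1)
    return coarse + correction / prob
```

Taking the expectation over the truncated level telescopes the corrections, so the estimator's mean equals the estimate at the truncation level; the remaining bias is the gap to the exact value, which shrinks exponentially in `max_level` when the per-level estimates converge geometrically, consistent with the abstract's claim.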
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 16670