A General-Purpose Theorem for High-Probability Bounds of Stochastic Approximation with Polyak Averaging

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Stochastic Approximation, Polyak–Ruppert Averaging, High-Probability Bounds
TL;DR: This paper introduces a unified method that converts high-probability bounds for individual Stochastic Approximation iterates into tight concentration bounds for their Polyak–Ruppert averages.
Abstract: Polyak–Ruppert averaging is a widely used technique for achieving the optimal asymptotic variance of stochastic approximation (SA) algorithms, yet its high-probability performance guarantees remain underexplored in general settings. In this paper, we present a general framework for establishing non-asymptotic concentration bounds on the error of averaged SA iterates. Our approach assumes access to individual concentration bounds for the unaveraged iterates and yields a sharp bound on the averaged iterates. We also construct an example showing that our result is tight up to constant multiplicative factors. As direct applications, we derive tight concentration bounds for contractive SA algorithms and for algorithms such as temporal difference learning and $Q$-learning with averaging, obtaining new bounds in settings where traditional analysis is challenging.
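To fix ideas, here is a minimal, illustrative sketch of the setup the abstract describes: an SA recursion driven by noisy evaluations of an operator, together with the running Polyak–Ruppert average of its iterates. Everything below (the linear toy problem, the step-size schedule $\alpha_k = 0.5\,k^{-0.7}$, and all variable names) is an assumption chosen for illustration and is not taken from the paper.

```python
import numpy as np

# Illustrative toy problem (not from the paper): linear stochastic approximation
#   x_{k+1} = x_k + alpha_k * (b - A x_k + noise_k),
# whose fixed point is x* = A^{-1} b. Polyak-Ruppert averaging simply tracks
# the running mean of the iterates x_1, ..., x_n.

rng = np.random.default_rng(0)

d = 5
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))
A = A @ A.T + np.eye(d)          # symmetrize and shift so A is positive definite
b = rng.standard_normal(d)
x_star = np.linalg.solve(A, b)   # target fixed point x* = A^{-1} b

n = 50_000
x = np.zeros(d)                  # unaveraged SA iterate
avg = np.zeros(d)                # Polyak-Ruppert average of the iterates
for k in range(1, n + 1):
    alpha = 0.5 / k**0.7              # slowly decaying step size (assumed schedule)
    noise = rng.standard_normal(d)    # zero-mean observation noise
    x = x + alpha * (b - A @ x + noise)   # one SA step with a noisy operator evaluation
    avg += (x - avg) / k                  # incremental update of the running average

print("last-iterate error:", np.linalg.norm(x - x_star))
print("averaged error:    ", np.linalg.norm(avg - x_star))
```

Running this sketch typically shows the averaged iterate concentrating much closer to $x^*$ than the last iterate, which is the qualitative phenomenon the paper's framework quantifies: it converts concentration bounds for the individual iterates $x_k$ into a tight concentration bound for the average.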
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 9514