A Statistical Online Inference Approach in Averaged Stochastic Approximation

Published: 31 Oct 2022, Last Modified: 01 Oct 2022
NeurIPS 2022 Accept
Readers: Everyone
Keywords: stochastic approximation, functional central limit theorem, statistical inference
TL;DR: We propose a general framework to perform statistical online inference in a class of constant step size stochastic approximation (SA) problems.
Abstract: In this paper we propose a general framework to perform statistical online inference in a class of constant step size stochastic approximation (SA) problems, including the well-known stochastic gradient descent (SGD) and Q-learning. Regarding a constant step size SA procedure as a time-homogeneous Markov chain, we establish a functional central limit theorem (FCLT) for it under conditions weaker than those in prior work, and then construct confidence intervals for parameters via random scaling. To leverage the FCLT results in the Markov chain setting, we establish an alternative condition that is more applicable to SA problems. We conduct experiments comparing inference via random scaling with other traditional inference methods, and find that the former is more accurate and robust.
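As a concrete (hypothetical) illustration of the kind of procedure the abstract describes, the sketch below runs constant step size SGD with Polyak-Ruppert iterate averaging on simulated linear regression data and builds coordinate-wise confidence intervals from a random-scaling (self-normalized) statistic. The simulated model, variable names, and the tabulated critical value are assumptions for illustration only; the paper's exact conditions, statistic, and critical values should be taken from the text.

```python
import numpy as np

# Illustrative only: constant step size SGD + Polyak-Ruppert averaging on a
# simulated linear regression, with a random-scaling style confidence interval.
rng = np.random.default_rng(0)
d, n, eta = 5, 100_000, 0.05           # dimension, iterations, constant step size
theta_star = rng.normal(size=d)        # ground-truth parameter (simulation only)

theta = np.zeros(d)                    # current SA/SGD iterate
theta_bar = np.zeros(d)                # running average of iterates (the estimator)
S1 = np.zeros(d)                       # sum_s s^2 * theta_bar_s^2  (per coordinate)
S2 = np.zeros(d)                       # sum_s s^2 * theta_bar_s
S3 = 0.0                               # sum_s s^2

for t in range(1, n + 1):
    x = rng.normal(size=d)             # stream a fresh observation
    y = x @ theta_star + rng.normal()
    grad = (x @ theta - y) * x         # stochastic gradient of the squared loss
    theta = theta - eta * grad         # constant step size update
    theta_bar = theta_bar + (theta - theta_bar) / t   # online iterate average
    S1 += (t ** 2) * theta_bar ** 2    # partial-sum statistics feeding the
    S2 += (t ** 2) * theta_bar         # random-scaling variance estimate
    S3 += t ** 2

# Random-scaling (self-normalizing) estimate, per coordinate:
# V_n = (1/n^2) * sum_{s<=n} s^2 * (theta_bar_s - theta_bar_n)^2
V = (S1 - 2.0 * theta_bar * S2 + theta_bar ** 2 * S3) / n ** 2

# t-type interval based on the asymptotically pivotal statistic
# sqrt(n) * (theta_bar_n - theta*) / sqrt(V_n); the critical value below is the
# commonly tabulated two-sided 95% quantile for this pivot (illustrative here).
q = 6.747
half_width = q * np.sqrt(V / n)
lower, upper = theta_bar - half_width, theta_bar + half_width
print("estimate:", np.round(theta_bar, 3))
print("95% CIs :", list(zip(np.round(lower, 3), np.round(upper, 3))))
```

Because the random-scaling statistic studentizes with a partial-sum process of the iterates themselves, no estimate of the asymptotic covariance (e.g., Hessian or noise covariance) is needed, which is what makes the construction fully online.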
Supplementary Material: pdf