How to Improve Sample Complexity of SGD over Highly Dependent Data?

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 · ICLR 2022 Submission
Keywords: Dependent data sampling, SGD, sample complexity
Abstract: Conventional machine learning applications typically assume that data samples are independently and identically distributed (i.i.d.). However, many practical scenarios naturally involve a data-generating process that produces highly dependent data samples, which are known to heavily bias the stochastic optimization process and slow down convergence. In this paper, we conduct a fundamental study of how to facilitate the convergence of SGD over highly dependent data using different popular update schemes. Specifically, with a $\phi$-mixing model that captures both exponential and polynomial decay of the data dependence over time, we show that SGD with periodic data-subsampling achieves an improved sample complexity over standard SGD across the full spectrum of $\phi$-mixing data dependence. Moreover, we show that by fully utilizing the data, mini-batch SGD can further substantially improve the sample complexity under highly dependent data. Numerical experiments validate our theory.
One-sentence Summary: This paper shows that both SGD with periodic data-subsampling and mini-batch SGD can improve the sample complexity of vanilla SGD under dependent data.
Supplementary Material: zip
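
The following is a minimal sketch (not taken from the paper or its supplementary material) of the periodic data-subsampling scheme described in the abstract: SGD that updates only on every `gap`-th sample of a dependent stream, so that consecutive updates use weakly dependent data. The AR(1) feature process, subsampling gap, step size, and problem dimensions are assumptions made purely for illustration and stand in for the paper's $\phi$-mixing setting.

```python
# Illustrative sketch of SGD with periodic data-subsampling on dependent data.
# All modeling choices below (AR(1) stream, gap, learning rate) are assumptions
# for illustration, not the paper's actual setup or results.
import numpy as np

def dependent_stream(n, dim, rho=0.9, seed=0):
    """Yield (x, y) pairs whose features follow an AR(1) process, a simple
    stand-in for mixing data: dependence between samples decays with the
    time gap between them."""
    rng = np.random.default_rng(seed)
    w_true = np.ones(dim)
    x = rng.normal(size=dim)
    for _ in range(n):
        x = rho * x + np.sqrt(1.0 - rho**2) * rng.normal(size=dim)
        y = x @ w_true + 0.1 * rng.normal()
        yield x, y

def sgd_with_subsampling(n_samples, dim, gap, lr=0.01):
    """Run SGD on a linear regression loss, updating only on every `gap`-th
    sample so that consecutive updates use weakly dependent samples
    (gap=1 recovers standard SGD)."""
    w = np.zeros(dim)
    for t, (x, y) in enumerate(dependent_stream(n_samples, dim)):
        if t % gap != 0:
            continue  # discard in-between samples to weaken data dependence
        grad = (x @ w - y) * x  # gradient of 0.5 * (x.w - y)^2
        w -= lr * grad
    return w

w_standard = sgd_with_subsampling(20000, dim=5, gap=1)     # standard SGD
w_subsampled = sgd_with_subsampling(20000, dim=5, gap=20)  # periodic subsampling
print("standard SGD error:   ", np.linalg.norm(w_standard - 1.0))
print("subsampled SGD error: ", np.linalg.norm(w_subsampled - 1.0))
```

The sketch only illustrates the update scheme itself; the paper's mini-batch variant instead averages gradients over blocks of consecutive samples rather than discarding them, which is how it further improves the sample complexity.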