Improved Sample Complexity for Stochastic Compositional Variance Reduced Gradient

ACC 2020
Abstract: Convex composition optimization is an emerging topic that covers a wide range of applications arising from stochastic optimal control, reinforcement learning, and multistage stochastic programming. Existing algorithms suffer from unsatisfactory sample complexity and practical issues because they ignore the convexity structure in the algorithmic design. In this paper, we develop a new stochastic compositional variance-reduced gradient algorithm with a sample complexity of O((m + n)log(1/ε) + 1/ε³), where m + n is the total number of samples. Our algorithm is near-optimal, as the dependence on m + n is optimal up to a logarithmic factor. Experimental results on real-world datasets demonstrate the effectiveness and efficiency of the new algorithm.
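
The abstract's setting is minimizing a composition f(g(x)) of two finite-sum functions with a variance-reduced gradient estimator. The snippet below is a minimal, generic compositional-SVRG-style sketch of that idea, not the paper's actual algorithm; the least-squares-through-a-linear-map problem instance, the mini-batch sizes, and the step size are all assumptions chosen purely for illustration.

```python
import numpy as np

# Sketch of a compositional SVRG-style update for
#   min_x f(g(x)),  g(x) = (1/n) sum_i g_i(x),  f(y) = (1/m) sum_j f_j(y).
# Illustrative only: the problem instance and hyperparameters below are
# assumptions, and this is NOT the exact algorithm analyzed in the paper.

rng = np.random.default_rng(0)
n, m, d, p = 50, 40, 10, 5           # inner samples, outer samples, dimensions
A = rng.normal(size=(n, p, d))       # inner mappings g_i(x) = A_i x
B = rng.normal(size=(m, p))          # outer terms f_j(y) = 0.5 * ||y - B_j||^2

def g(x, idx=None):                  # mini-batch mean of the inner function
    idx = range(n) if idx is None else idx
    return np.mean([A[i] @ x for i in idx], axis=0)

def g_jac(x, idx=None):              # mini-batch mean of the inner Jacobian
    idx = range(n) if idx is None else idx
    return np.mean([A[i] for i in idx], axis=0)

def f_grad(y, idx=None):             # mini-batch gradient of f w.r.t. y
    idx = range(m) if idx is None else idx
    return np.mean([y - B[j] for j in idx], axis=0)

x = np.zeros(d)
eta, epochs, inner_iters, batch = 0.05, 20, 25, 5

for _ in range(epochs):
    # Snapshot: full passes over the data give reference quantities
    # that anchor the variance-reduced estimates in the inner loop.
    x_ref = x.copy()
    g_ref, J_ref = g(x_ref), g_jac(x_ref)
    grad_ref = J_ref.T @ f_grad(g_ref)

    for _ in range(inner_iters):
        i = rng.choice(n, size=batch, replace=False)   # inner mini-batch
        j = rng.choice(m, size=batch, replace=False)   # outer mini-batch

        # Variance-reduced estimates of the inner value g(x) and Jacobian.
        g_est = g(x, i) - g(x_ref, i) + g_ref
        J_est = g_jac(x, i) - g_jac(x_ref, i) + J_ref

        # Variance-reduced estimate of the full gradient J(x)^T f'(g(x)).
        grad_est = (J_est.T @ f_grad(g_est, j)
                    - J_ref.T @ f_grad(g_ref, j)
                    + grad_ref)

        x -= eta * grad_est
```

The design point illustrated here is the one the abstract relies on: both the inner function value and the overall gradient are estimated with control variates built from a periodic full-data snapshot, which is what lets the per-iteration cost stay at a small mini-batch while the dependence on the total sample count m + n enters only through the occasional full passes.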