A Unified Convergence Theorem for Stochastic Optimization Methods

Published: 31 Oct 2022, Last Modified: 07 Jan 2023
NeurIPS 2022 Accept
Readers: Everyone
Keywords: stochastic optimization theory, almost sure convergence
TL;DR: We provide a unified convergence theorem and use it to recover existing and establish new convergence results for a set of classic stochastic optimization methods.
Abstract: In this work, we provide a fundamental unified convergence theorem for deriving expected and almost sure convergence results for a series of stochastic optimization methods. Our unified theorem only requires the verification of several representative conditions and is not tailored to any specific algorithm. As a direct application, we recover expected and almost sure convergence results of stochastic gradient descent (SGD) and random reshuffling (RR) under more general settings. Moreover, we establish new expected and almost sure convergence results for the stochastic proximal gradient method (prox-SGD) and stochastic model-based methods for nonsmooth nonconvex optimization problems. These applications reveal that our unified theorem provides a plugin-type convergence analysis and strong convergence guarantees for a wide class of stochastic optimization methods.
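As background only (the abstract does not state the theorem's conditions, and the sketch below is not the paper's unified theorem), a classical template for almost sure convergence arguments of this plugin type is the Robbins–Siegmund lemma: given nonnegative sequences $(Y_k)$, $(\beta_k)$, $(\zeta_k)$, $(\xi_k)$ adapted to a filtration $(\mathcal{F}_k)$ satisfying
\[
\mathbb{E}\!\left[Y_{k+1} \mid \mathcal{F}_k\right] \;\le\; (1+\beta_k)\,Y_k \;-\; \zeta_k \;+\; \xi_k \quad \text{a.s.},
\qquad \sum_{k} \beta_k < \infty, \quad \sum_{k} \xi_k < \infty \ \text{a.s.},
\]
it follows that $Y_k$ converges almost surely and $\sum_{k} \zeta_k < \infty$ almost surely. Verifying such a recursion for a given method (e.g., SGD with suitable step sizes) is what typically yields almost sure convergence; the paper's unified theorem presumably plays an analogous role for the methods listed above, with its precise conditions given in the full text.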
Supplementary Material: pdf