Dual Averaging Converges for Nonconvex Smooth Stochastic Optimization
Abstract: Dual averaging and gradient descent, together with their stochastic variants, are the two canonical templates for first-order optimization: every modern variant can be viewed as a descendant of one or the other. In the convex regime, these algorithms have been thoroughly studied, and the two families are known to be essentially equivalent in terms of theoretical guarantees. In the nonconvex setting, however, the situation is drastically different: while SGD provably minimizes the gradient norm of nonconvex smooth functions, no finite-time complexity guarantee was known for Stochastic Dual Averaging (SDA) in the same setting. In this paper, we close this gap via a reduction that views SDA as SGD applied to a sequence of implicitly regularized objectives. We show that a tuned SDA achieves a convergence rate of $\mathcal{O}(1/T + \sigma \log T / \sqrt{T})$, matching that of SGD under the same assumptions. To the best of our knowledge, this is the first complete convergence theory for dual averaging on nonconvex smooth stochastic problems without restrictive assumptions, resolving a long-standing open problem in the field. Beyond the base algorithm, we also discuss ADA-DA, a variant that combines SDA with AdaGrad-style step sizes and achieves the same rate without requiring knowledge of the noise variance.
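For intuition, the NumPy sketch below shows one standard unconstrained-Euclidean form of the SDA update (a prox step from the initial point against the running sum of stochastic gradients) alongside an AdaGrad-style scaling in the spirit of ADA-DA. The oracle `grad_oracle`, the schedule $\beta_t = \gamma\sqrt{t+1}$, and all names here are illustrative assumptions, not the exact tuning analyzed in the paper.

```python
import numpy as np

def sda(grad_oracle, x0, T, gamma=1.0, rng=None):
    """Stochastic Dual Averaging, unconstrained Euclidean case (a sketch).

    grad_oracle(x, rng) is a hypothetical stochastic gradient oracle;
    beta_t = gamma * sqrt(t + 1) is a standard increasing schedule,
    assumed here for illustration.
    """
    rng = rng or np.random.default_rng(0)
    x, z = x0.copy(), np.zeros_like(x0)
    for t in range(T):
        z += grad_oracle(x, rng)        # running sum of stochastic gradients
        beta = gamma * np.sqrt(t + 1)   # increasing regularization weight
        x = x0 - z / beta               # prox step anchored at the initial point
    return x

def ada_da(grad_oracle, x0, T, gamma=1.0, eps=1e-8, rng=None):
    """AdaGrad-style variant in the spirit of ADA-DA (a sketch): the dual
    step size scales with the accumulated squared gradient norm, so no
    knowledge of the noise variance sigma is needed."""
    rng = rng or np.random.default_rng(0)
    x, z, s = x0.copy(), np.zeros_like(x0), 0.0
    for t in range(T):
        g = grad_oracle(x, rng)
        z += g
        s += float(g @ g)               # cumulative squared gradient norm
        x = x0 - gamma * z / (np.sqrt(s) + eps)
    return x

# Example: noisy gradients of f(x) = 0.5 * ||x||^2 (a toy smooth objective).
oracle = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
x_out = sda(oracle, x0=np.ones(5), T=10_000)
```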
Submission Number: 1582