Randomized Smoothing Variance Reduction Method for Large-Scale Non-smooth Convex Optimization

Published: 01 Jan 2021, Last Modified: 10 May 2023 · Oper. Res. Forum 2021
Abstract: We consider a new method for minimizing the average of a large number of non-smooth convex functions. Such problems arise frequently in machine learning but are computationally challenging. We apply an implementable randomized smoothing method and propose a multistage scheme that progressively reduces the variance of the gradient estimator of the smoothed functions. Our algorithm achieves a linear convergence rate, and both its time complexity and its gradient complexity improve on standard algorithms for non-smooth minimization as well as on subgradient-based algorithms. Moreover, our algorithm does not require an error-bound condition on the minimizing sequence, nor the commonly imposed (but strong) smoothness and strong convexity conditions. The algorithm applies broadly in optimization and machine learning; as illustrative examples, we demonstrate experimentally that it performs well on large-scale ranking problems and risk-aware portfolio optimization problems.
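To make the two ingredients named in the abstract concrete, the Python sketch below shows one plausible instantiation: a Monte Carlo gradient estimator for a Gaussian-smoothed objective, wrapped in an SVRG-style variance-reduction loop over the finite sum. This is a minimal illustration under assumed design choices, not the paper's algorithm: the function names (`smoothed_grad`, `smoothed_svrg`), the fixed smoothing parameter `mu`, and the single-sample estimator stand in for the paper's multistage schedule, which progressively adjusts these quantities.

```python
import numpy as np

def smoothed_grad(subgrad_i, x, mu, rng, num_samples=1):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed
    function f_mu(x) = E_Z[f(x + mu * Z)], Z ~ N(0, I).
    `subgrad_i` returns a subgradient of one component f_i at a point."""
    g = np.zeros_like(x)
    for _ in range(num_samples):
        z = rng.standard_normal(x.shape)
        g += subgrad_i(x + mu * z)
    return g / num_samples

def smoothed_svrg(subgrads, x0, mu, step, epochs, inner_iters, rng):
    """SVRG-style variance reduction applied to the smoothed finite sum
    (1/n) * sum_i f_i_mu(x). Illustrative sketch only: mu is held fixed
    rather than adapted stage by stage as in a multistage scheme."""
    n = len(subgrads)
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        # Full smoothed gradient at the snapshot point (computed once per epoch).
        full_grad = np.mean(
            [smoothed_grad(g, snapshot, mu, rng) for g in subgrads], axis=0
        )
        for _ in range(inner_iters):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient of the smoothed objective:
            # the correction term cancels much of the sampling noise.
            v = (smoothed_grad(subgrads[i], x, mu, rng)
                 - smoothed_grad(subgrads[i], snapshot, mu, rng)
                 + full_grad)
            x -= step * v
    return x

# Example (hypothetical data): minimize (1/n) * sum_i |a_i @ x - b_i|,
# a non-smooth convex finite sum of the kind the abstract targets.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 10)), rng.standard_normal(100)
subgrads = [lambda x, a=A[i], bi=b[i]: np.sign(a @ x - bi) * a
            for i in range(100)]
x_star = smoothed_svrg(subgrads, np.zeros(10), mu=0.1, step=0.01,
                       epochs=5, inner_iters=100, rng=rng)
```

The SVRG-style correction is what lets the method use cheap single-component smoothed gradients while keeping the estimator's variance shrinking as the iterates approach the snapshot, which is the mechanism behind the variance reduction the abstract describes.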