Keywords: stochastic optimization, high probability, sample complexity, iteration complexity, proximal point method, variance reduction
TL;DR: We develop a stochastic proximal point method that achieves high-probability convergence guarantees under only bounded-variance noise, with low sample complexity and no reliance on mini-batching.
Abstract: High-probability guarantees in stochastic optimization are often obtained only under strong noise assumptions such as sub-Gaussian tails. We show that such guarantees can also be achieved under the weaker assumption of bounded variance by developing a stochastic proximal point method. This method combines a proximal subproblem solver, which inherently reduces variance, with a probability booster that amplifies per-iteration reliability into high-confidence results. The analysis demonstrates convergence with low sample complexity, without restrictive noise assumptions or reliance on mini-batching.
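To make the two-stage idea in the abstract concrete, below is a minimal NumPy sketch of one plausible instantiation, not the paper's actual algorithm. The proximal subproblem is strongly convex thanks to the quadratic term, so plain SGD on it already dampens noise; the "probability booster" is illustrated with a median-style candidate-selection trick, one standard way to amplify constant-probability success into high confidence. All names (`grad_oracle`, `lam`, `inner_steps`, `num_trials`) and the specific booster are illustrative assumptions.

```python
# Hypothetical sketch of a stochastic proximal point method with a
# probability booster; the paper's actual method may differ.
import numpy as np

def solve_prox_subproblem(grad_oracle, x, lam, inner_steps, rng):
    """Approximately solve min_y E[f(y; xi)] + ||y - x||^2 / (2*lam) with SGD.
    The quadratic term makes the subproblem (1/lam)-strongly convex, which
    is the inherent variance-reduction effect the abstract refers to."""
    y = x.copy()
    for k in range(1, inner_steps + 1):
        g = grad_oracle(y, rng) + (y - x) / lam  # stochastic subproblem gradient
        y = y - (lam / k) * g  # step size lam/k for a (1/lam)-strongly-convex objective
    return y

def boost(candidates):
    """Probability booster (assumed median trick): among independent candidate
    solutions, return the one closest to the coordinate-wise median. If each
    run succeeds with probability above 1/2, the failure probability of the
    selected candidate decays exponentially in the number of trials."""
    med = np.median(candidates, axis=0)
    dists = np.linalg.norm(candidates - med, axis=1)
    return candidates[np.argmin(dists)]

def stochastic_proximal_point(grad_oracle, x0, lam=1.0, outer_steps=50,
                              inner_steps=100, num_trials=9, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(outer_steps):
        trials = np.stack([
            solve_prox_subproblem(grad_oracle, x, lam, inner_steps, rng)
            for _ in range(num_trials)
        ])
        x = boost(trials)  # amplify per-iteration reliability into high confidence
    return x

# Usage on a toy quadratic with bounded-variance gradient noise:
if __name__ == "__main__":
    def grad_oracle(y, rng):
        return y + rng.standard_normal(y.shape)  # grad of ||y||^2/2 plus noise
    x_hat = stochastic_proximal_point(grad_oracle, x0=np.ones(5))
    print(x_hat)  # should be close to the minimizer at the origin
```

Selecting the candidate nearest the median, rather than the median itself, keeps the output among points the solver actually returned; only bounded variance of the gradient noise is assumed, matching the abstract's setting.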
Submission Number: 131