Unified Convergence Theory of Stochastic and Variance-Reduced Cubic Newton Methods

Published: 09 Sept 2024 · Last Modified: 17 Sept 2024 · Accepted by TMLR · CC BY 4.0
Abstract: We study stochastic Cubic Newton methods for solving general, possibly non-convex minimization problems. We propose a new framework, the helper framework, that provides a unified view of stochastic and variance-reduced second-order algorithms equipped with global complexity guarantees; it can also be applied to learning with auxiliary information. Our helper framework offers the algorithm designer high flexibility for constructing and analyzing stochastic Cubic Newton methods, allowing batches of arbitrary size, noisy and possibly biased estimates of the gradients and Hessians, and incorporating both variance reduction and lazy Hessian updates. We recover the best-known complexities for stochastic and variance-reduced Cubic Newton methods under weak assumptions on the noise. A direct consequence of our theory is a new lazy stochastic second-order method, which significantly improves the arithmetic complexity for large-dimensional problems. We also establish complexity bounds for the classes of gradient-dominated objectives, which include convex and strongly convex problems. For Auxiliary Learning, we show that using a helper (auxiliary function) can outperform training alone if a given similarity measure is small.
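For readers unfamiliar with cubic regularization, here is a minimal sketch of the basic stochastic step the abstract builds on: at each iteration, minimize the cubically regularized second-order model m(s) = ⟨g, s⟩ + ½⟨Hs, s⟩ + (M/6)‖s‖³ built from noisy gradient and Hessian estimates. This is an illustrative sketch of the standard Nesterov–Polyak-style step, not the authors' implementation; the symbols g, H, M, the toy problem, and the subproblem solver below are all assumptions for illustration.

```python
# A minimal, illustrative sketch of one stochastic Cubic Newton step.
# Given noisy estimates g ~ grad f(x) and H ~ Hess f(x), take
#     x_next = x + argmin_s  <g, s> + 0.5 <H s, s> + (M/6) ||s||^3,
# where M is an (assumed) bound related to the Hessian Lipschitz constant.
import numpy as np
from scipy.optimize import minimize

def stochastic_cubic_newton_step(x, g, H, M):
    """Return x plus an approximate minimizer of the cubic model."""
    def model(s):
        return g @ s + 0.5 * s @ (H @ s) + (M / 6.0) * np.linalg.norm(s) ** 3
    # L-BFGS-B with finite-difference gradients is a crude but simple way to
    # solve the cubic subproblem; dedicated solvers are used in practice.
    s_star = minimize(model, np.zeros_like(x), method="L-BFGS-B").x
    return x + s_star

# Toy usage: f(x) = 0.5 x^T A x - b^T x with noisy gradient/Hessian oracles.
rng = np.random.default_rng(0)
d = 5
A = np.diag(np.arange(1.0, d + 1))               # true Hessian
b = np.ones(d)
x = rng.normal(size=d)
for _ in range(20):
    g = A @ x - b + 0.1 * rng.normal(size=d)     # noisy gradient estimate
    H = A + 0.1 * rng.normal(size=(d, d))        # noisy Hessian estimate
    H = 0.5 * (H + H.T)                          # symmetrize the estimate
    x = stochastic_cubic_newton_step(x, g, H, M=10.0)
print(np.linalg.norm(A @ x - b))  # gradient norm shrinks toward the noise floor
```

In the paper's helper framework, the estimates g and H would come from arbitrary-size batches, possibly with bias, variance reduction, or lazy (infrequently updated) Hessians; the sketch above fixes a simple unbiased mini-batch-style oracle purely for concreteness.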
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=hvRXm5lIRJ
Changes Since Last Submission: Fixed the style file format (the previous submission had the title centred instead of left-aligned).
Code: https://github.com/elmahdichayti/Unified-Convergence-Theory-of-Cubic-Newton-s-method
Assigned Action Editor: ~Atsushi_Nitanda1
Submission Number: 2425