Adaptive Quasi-Newton and Anderson Acceleration Framework with Explicit Global (Accelerated) Convergence Rates

Published: 26 Oct 2023, Last Modified: 13 Dec 2023 · NeurIPS 2023 Workshop Poster
Keywords: Quasi-Newton, Accelerated, global convergence rate, Anderson, limited memory
TL;DR: This paper introduces a novel limited-memory quasi-Newton method with global convergence guarantees that interpolate between those of first- and second-order methods.
Abstract: Despite the impressive numerical performance of quasi-Newton and Anderson/nonlinear-acceleration methods, their global convergence rates have remained elusive for over 50 years. This paper addresses this long-standing question by introducing a framework that yields novel, adaptive quasi-Newton and nonlinear/Anderson acceleration schemes. Under mild assumptions, the proposed iterative methods exhibit explicit, non-asymptotic convergence rates that blend those of gradient descent and the cubic-regularized Newton method. The proposed approach also includes an accelerated version for convex functions. Notably, these rates are achieved adaptively, without prior knowledge of the function's smoothness parameter. The framework is generic, and algorithms such as Newton's method with random subspaces, finite differences, or lazy Hessian updates arise as special cases of the proposed algorithm. Numerical experiments demonstrate the efficiency of the proposed framework, including in comparison with the L-BFGS algorithm with Wolfe line search.
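For context on the class of methods the abstract refers to, the sketch below shows classic (type-II) limited-memory Anderson acceleration applied to a generic fixed-point map x = g(x). This is a minimal illustration of the standard scheme, not the paper's proposed adaptive algorithm; the function name `anderson_acceleration` and its parameters (`m`, `max_iter`, `tol`) are illustrative assumptions.

```python
import numpy as np


def anderson_acceleration(g, x0, m=5, max_iter=100, tol=1e-10):
    """Classic (type-II) Anderson acceleration for the fixed-point problem x = g(x).

    Stores at most the last `m` iterates; each step mixes the stored g-evaluations
    with coefficients chosen to minimize the norm of the mixed residual.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, G = [x], [g(x)]                                    # iterate and g-evaluation histories
    for _ in range(max_iter):
        F = [gi - xi for xi, gi in zip(X, G)]             # residuals f_i = g(x_i) - x_i
        if np.linalg.norm(F[-1]) < tol:
            return X[-1]
        if len(F) == 1:
            x_new = G[-1]                                 # plain fixed-point step
        else:
            # Solve min_gamma ||f_k - dF @ gamma||, the unconstrained reformulation of
            # min ||sum_i alpha_i f_i|| subject to sum_i alpha_i = 1.
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            x_new = G[-1] - dG @ gamma                    # mixed update from stored g-evaluations
        X.append(x_new)
        G.append(g(x_new))
        X, G = X[-(m + 1):], G[-(m + 1):]                 # limited memory: keep last m+1 points
    return X[-1]


# Example usage (hypothetical): accelerate gradient descent on a quadratic
# f(x) = 0.5 x^T A x - b^T x, whose fixed-point map is g(x) = x - eta * (A x - b).
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
g = lambda x: x - 0.01 * (A @ x - b)
x_star = anderson_acceleration(g, np.zeros(3), m=5, max_iter=500)
```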
Submission Number: 22