Keywords: Quasi-Newton Methods, Global Convergence Guarantees, Linesearch Procedures
TL;DR: A simple stepsize schedule makes Quasi-Newton methods globally convergent at rate $O(1/k)$, and even accelerated at $O(1/k^2)$ when the Hessian approximation is sufficiently precise. We extend our stepsize schedule to a robust, adaptive variant.
Abstract: Quasi-Newton methods are widely used for solving convex optimization problems due to their ease of implementation, practical efficiency, and strong local convergence guarantees. However, their global convergence is typically established only under specific line search strategies and the assumption of strong convexity. In this work, we extend the theoretical understanding of Quasi-Newton methods by introducing a simple stepsize schedule that guarantees a global convergence rate of $\mathcal{O}(1/k)$ for convex functions. Furthermore, we show that when the inexactness of the Hessian approximation is controlled within a prescribed relative accuracy, the method attains an accelerated convergence rate of $\mathcal{O}(1/k^2)$ -- matching the best-known rates of both Nesterov's accelerated gradient method and cubically regularized Newton methods. We validate our theoretical findings through empirical comparisons, demonstrating clear improvements over standard Quasi-Newton baselines. To further enhance robustness, we develop an adaptive variant that adjusts to the function's curvature while retaining the global convergence guarantees of the non-adaptive algorithm.
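To illustrate the general setting, here is a minimal sketch of a BFGS-type Quasi-Newton iteration in which the usual line search is replaced by a prescribed stepsize schedule. The function `quasi_newton_with_schedule` and the placeholder schedule `1/(k+2)` are assumptions for illustration only; they are not the paper's actual schedule or implementation.

```python
# Minimal sketch (not the paper's algorithm): BFGS-style Quasi-Newton step
# with a prescribed stepsize schedule alpha_k instead of a line search.
import numpy as np

def quasi_newton_with_schedule(grad, x0, num_iters=100,
                               schedule=lambda k: 1.0 / (k + 2)):
    """Run a BFGS-type method with a fixed stepsize schedule (illustrative)."""
    n = x0.size
    H = np.eye(n)                    # inverse Hessian approximation
    x = x0.copy()
    g = grad(x)
    for k in range(num_iters):
        d = -H @ g                   # Quasi-Newton search direction
        alpha = schedule(k)          # prescribed stepsize, no line search
        x_next = x + alpha * d
        g_next = grad(x_next)
        s, y = x_next - x, g_next - g
        sy = s @ y
        if sy > 1e-12:               # standard BFGS curvature safeguard
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse update
        x, g = x_next, g_next
    return x

# Example: minimize the convex quadratic f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A @ x - b
x_star = quasi_newton_with_schedule(grad, np.zeros(2))
```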
Primary Area: optimization
Submission Number: 2054