Greedy and Random Quasi-Newton Methods with Faster Explicit Superlinear Convergence

21 May 2021, 20:43 (edited 06 Nov 2021) · NeurIPS 2021 Poster · Readers: Everyone
  • Keywords: quasi-Newton methods, superlinear convergence, local convergence, convex optimization
  • TL;DR: We establish better explicit (local) superlinear convergence for random or greedy SR1 and BFGS quasi-Newton methods.
  • Abstract: In this paper, we follow the work of Rodomanov and Nesterov on quasi-Newton methods. We focus on the common SR1 and BFGS quasi-Newton methods and establish better explicit (local) superlinear convergence rates. First, for the greedy quasi-Newton update, which selects the direction that maximizes a certain measure of progress, we improve the convergence rate to a condition-number-free superlinear rate. Second, for the random quasi-Newton update, which selects the direction randomly from a spherically symmetric distribution, we establish the same condition-number-free superlinear rate. Our analysis covers the approximation of a given Hessian matrix, the unconstrained quadratic objective, and general strongly convex, smooth, and strongly self-concordant functions.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
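To make the abstract's greedy update concrete, here is a minimal sketch of a greedy SR1 update in the Hessian-approximation setting, in the style of Rodomanov and Nesterov's greedy quasi-Newton schemes. The coordinate-wise greedy rule below (pick the basis direction with the largest residual diagonal) is an illustrative stand-in, not necessarily the paper's exact progress measure; all function and variable names are ours.

```python
import numpy as np

def greedy_sr1_step(A, G, tol=1e-12):
    """One greedy SR1 step moving the estimate G toward the target Hessian A.

    Assumes G >= A (in the PSD order); the SR1 update preserves this.
    Greedy rule (illustrative): pick the coordinate direction e_i with the
    largest residual diagonal entry (G - A)_{ii}.
    """
    R = G - A                       # residual; PSD and shrinks under SR1
    i = int(np.argmax(np.diag(R)))
    if R[i, i] <= tol:
        return G                    # residual already (numerically) zero
    r = R[:, i]                     # SR1: G+ = G - (R e_i)(R e_i)^T / R_ii
    return G - np.outer(r, r) / R[i, i]

# Demo: on a fixed SPD "Hessian" A, starting from L*I with L an upper
# bound on the spectrum, n greedy SR1 steps recover A exactly.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + np.eye(5)             # SPD target Hessian
L = np.linalg.eigvalsh(A)[-1]
G = L * np.eye(5)                   # initial estimate with G >= A
for _ in range(5):
    G = greedy_sr1_step(A, G)
```

On a quadratic, each SR1 step zeroes the residual along the chosen direction and preserves previously matched directions, so at most n steps suffice; the paper's contribution concerns the explicit superlinear rate of this contraction, which the sketch does not measure.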