Newton’s Method is the most widespread high-order method, requiring only the gradient and the Hessian of the objective function. However, one of its main disadvantages is the lack of global convergence. There are various ways to address this: Cubic Regularization of Newton's Method, the Damped Newton Method, Quasi-Newton Methods, etc. Another disadvantage is the high per-iteration cost in modern high-dimensional problems. In this paper we derive convergence guarantees for a Regularized Newton Method with Bregman divergences, for both convex and non-convex functions, which can be applied with various inexact Hessians. While achieving global convergence, the per-iteration complexity stays the same up to a logarithmic factor. We validate our theory with multiple experiments that show competitive performance compared to other Newton-type methods.
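For context, a minimal sketch (not necessarily the paper's exact scheme) contrasting the classical Newton step with a generic Bregman-regularized variant; here $H_k$ denotes a possibly inexact Hessian, $\lambda_k > 0$ a regularization parameter, and $D_\phi$ the Bregman divergence of a reference function $\phi$:

$$
x_{k+1} = x_k - \left[\nabla^2 f(x_k)\right]^{-1} \nabla f(x_k)
\qquad \text{(classical Newton step)}
$$

$$
x_{k+1} = \arg\min_{x} \left\{ \langle \nabla f(x_k), x - x_k \rangle
+ \tfrac{1}{2} \langle H_k (x - x_k), x - x_k \rangle
+ \lambda_k D_\phi(x, x_k) \right\},
$$

$$
D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y), x - y \rangle .
$$

Choosing $\phi(x) = \tfrac{1}{2}\|x\|^2$ recovers the familiar quadratic (Euclidean) regularization; other choices of $\phi$ adapt the regularizer to the geometry of the problem.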