Novel Iteratively Preconditioned Gradient-Descent Algorithm via Successive Over-Relaxation Formulation

Published: 01 Jan 2024, Last Modified: 15 May 2025 · IEEE Control Systems Letters, 2024 · CC BY-SA 4.0
Abstract: We devise a novel quasi-Newton algorithm for solving unconstrained convex optimization problems. The proposed algorithm builds on our earlier iteratively preconditioned gradient-descent (IPG) framework, which used Richardson iteration to update a preconditioner matrix that approximates the inverse of the Hessian. In this letter, we replace the Richardson iteration with a successive over-relaxation (SOR) formulation. We present a convergence guarantee for the proposed algorithm and its theoretical improvement over vanilla IPG. For numerical validation, the algorithm is applied to a mobile robot position estimation problem posed as a moving horizon estimation (MHE) formulation. Compared with IPG, the results demonstrate improved performance of the proposed algorithm in terms of computational time and the number of iterations needed for convergence, matching our theoretical results.
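To make the idea concrete, below is a minimal Python sketch of the scheme the abstract describes: a gradient-descent loop whose preconditioner K (an approximation of the inverse Hessian) is refreshed each iteration by one SOR sweep on the linear system H K = I, in place of the Richardson update K ← K + α(I − H K) used by vanilla IPG. This is an illustration under stated assumptions, not the paper's exact algorithm: the quadratic test problem, the relaxation factor ω, the step size η, the zero initialization of K, and all function names here are illustrative choices.

```python
# Sketch of IPG with an SOR-based preconditioner update (illustrative only).
import numpy as np

def sor_sweep(H, K, omega=1.2):
    """One SOR sweep on H K = I, refining the preconditioner K.

    Uses the splitting H = D + L + U (diagonal / strictly lower / strictly
    upper) and the standard SOR iteration applied column-wise:
        (D + omega*L) K_new = omega*I + ((1 - omega)*D - omega*U) K.
    """
    n = H.shape[0]
    D = np.diag(np.diag(H))
    L = np.tril(H, k=-1)
    U = np.triu(H, k=1)
    M = D + omega * L                  # lower triangular, cheap to invert
    rhs = omega * np.eye(n) + ((1 - omega) * D - omega * U) @ K
    return np.linalg.solve(M, rhs)    # forward substitution would suffice

def richardson_sweep(H, K, alpha=0.1):
    """One Richardson update on H K = I, as in vanilla IPG (for comparison)."""
    return K + alpha * (np.eye(H.shape[0]) - H @ K)

def ipg_sor(grad, hess, x0, steps=50, eta=1.0, omega=1.2):
    """Iteratively preconditioned gradient descent with SOR-updated K."""
    x = x0.copy()
    K = np.zeros((len(x0), len(x0)))   # initial preconditioner guess
    for _ in range(steps):
        H = hess(x)
        K = sor_sweep(H, K, omega)     # one sweep pushes K toward H^{-1}
        x = x - eta * K @ grad(x)      # preconditioned gradient step
    return x

if __name__ == "__main__":
    # Strictly convex quadratic test problem: f(x) = 0.5 x^T A x - b^T x.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 5))
    A = A @ A.T + 5.0 * np.eye(5)      # symmetric positive definite Hessian
    b = rng.normal(size=5)
    x = ipg_sor(grad=lambda x: A @ x - b, hess=lambda x: A, x0=np.zeros(5))
    print("residual:", np.linalg.norm(A @ x - b))
```

For a symmetric positive definite Hessian, the SOR sweep converges for any 0 < ω < 2, and a well-chosen ω can shrink the preconditioner error faster per sweep than the Richardson update, which is the intuition behind the improvement over vanilla IPG claimed in the abstract.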