Inexact Riemannian Gradient Descent Method for Nonconvex Optimization with Strong Convergence

Published: 01 Jan 2025, Last Modified: 24 Sept 2025 · J. Sci. Comput. 2025 · CC BY-SA 4.0
Abstract: Gradient descent methods are fundamental first-order optimization algorithms in both Euclidean spaces and on Riemannian manifolds. In many scenarios, however, the exact gradient is either unavailable or computationally expensive to obtain; examples include high-dimensional optimization, non-differentiable functions, and black-box functions. This paper proposes a unified inexact Riemannian gradient descent algorithm for nonconvex optimization problems, accompanied by strong convergence guarantees. Specifically, the inexact gradient is characterized by two key assumptions on the approximation error. Our method establishes convergence results for both the gradient sequence and the function values. Global convergence of the iterate sequence, with constructive convergence rates, is ensured under the Riemannian Kurdyka-Łojasiewicz property. Furthermore, our algorithm encompasses two specific applications, Riemannian sharpness-aware minimization and the Riemannian extragradient algorithm, both of which inherit the global convergence properties of the inexact gradient method. Numerical experiments on low-rank matrix completion, principal component analysis, and Procrustes problems validate the efficiency and practical relevance of the proposed approaches.
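To illustrate the kind of iteration the abstract refers to, the following is a minimal sketch of one inexact Riemannian gradient descent step on the unit sphere. It is not the paper's algorithm: the objective (a Rayleigh-quotient problem), the noisy gradient oracle, the normalization retraction, and the fixed step size are all illustrative assumptions standing in for the paper's manifold, inexactness conditions, and step-size rule.

```python
import numpy as np

def f(x, A):
    # Illustrative objective: f(x) = -x^T A x on the unit sphere
    # (maximizing the Rayleigh quotient, i.e. leading eigenvector).
    return -x @ A @ x

def inexact_grad(x, A, noise=1e-3, rng=None):
    # Euclidean gradient corrupted by bounded random noise (a stand-in for
    # any inexact oracle), then projected onto the tangent space at x to
    # yield an inexact Riemannian gradient.
    rng = np.random.default_rng() if rng is None else rng
    g = -2.0 * A @ x + noise * rng.standard_normal(x.shape)
    return g - (x @ g) * x  # tangent-space projection on the sphere

def retract(x, v):
    # Retraction on the sphere: move along the tangent vector, renormalize.
    y = x + v
    return y / np.linalg.norm(y)

def inexact_rgd(A, x0, alpha=0.1, iters=200):
    # Inexact Riemannian gradient descent: x_{k+1} = R_{x_k}(-alpha * g_k),
    # where g_k only approximates grad f(x_k).
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = inexact_grad(x, A)
        x = retract(x, -alpha * g)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((20, 20))
    A = M @ M.T  # symmetric positive semidefinite test matrix
    x = inexact_rgd(A, rng.standard_normal(20))
    print("objective value:", f(x, A))
```

The paper's analysis concerns how such approximation errors can be controlled so that the iterates retain global convergence; the sketch above only shows the structure of the retraction-based update with an inexact gradient.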