A general framework of Riemannian adaptive optimization methods with a convergence analysis

Published: 15 Jan 2025, Last Modified: 15 Jan 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: This paper proposes a general framework of Riemannian adaptive optimization methods. The framework encapsulates several stochastic optimization algorithms on Riemannian manifolds and incorporates the mini-batch strategy often used in deep learning. Within this framework, we also propose AMSGrad on embedded submanifolds of Euclidean space. Moreover, we give convergence analyses valid for both constant and diminishing step sizes. Our analyses also reveal the relationship between the convergence rate and the mini-batch size. In numerical experiments, we applied the proposed algorithm to principal component analysis and the low-rank matrix completion problem, both of which can be formulated as Riemannian optimization problems. Python implementations of the methods used in the numerical experiments are available at https://github.com/iiduka-researches/202408-adaptive.
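To make the abstract's ingredients concrete, here is a minimal sketch of one AMSGrad-style step on the unit sphere, a simple embedded submanifold of Euclidean space. This is an illustration only, not the paper's exact algorithm: it assumes the Riemannian gradient is obtained by tangent-space projection, uses normalization as the retraction, and transports the momentum by re-projection; the function names and hyperparameters are hypothetical.

```python
import numpy as np

def sphere_proj(x, v):
    """Project v onto the tangent space of the unit sphere at x."""
    return v - np.dot(x, v) * x

def amsgrad_sphere_step(x, egrad, m, v, vhat,
                        lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One illustrative AMSGrad step on the unit sphere.

    x     : current point, ||x|| = 1
    egrad : Euclidean gradient of the objective at x
    m, v, vhat : first moment, second moment, running max of v
    """
    g = sphere_proj(x, egrad)          # Riemannian gradient via projection
    m = b1 * m + (1 - b1) * g          # first-moment estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment estimate (elementwise)
    vhat = np.maximum(vhat, v)         # AMSGrad's non-decreasing max step
    step = -lr * m / (np.sqrt(vhat) + eps)
    y = x + sphere_proj(x, step)       # elementwise scaling leaves the tangent
                                       # space, so re-project before moving
    x_new = y / np.linalg.norm(y)      # retraction: renormalize onto the sphere
    m = sphere_proj(x_new, m)          # crude vector transport of the momentum
    return x_new, m, v, vhat
```

For example, running this step on the Rayleigh quotient x^T A x (Euclidean gradient 2Ax) drives x toward the eigenvector of A with the smallest eigenvalue while keeping every iterate exactly on the sphere.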
Certifications: Reproducibility Certification
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Stephen_Becker1
Submission Number: 3274