A general framework of Riemannian adaptive optimization methods with a convergence analysis

TMLR Paper3274 Authors

01 Sept 2024 (modified: 02 Nov 2024) · Under review for TMLR · CC BY 4.0
Abstract: This paper proposes a general framework of Riemannian adaptive optimization methods. The framework encapsulates several stochastic optimization algorithms on Riemannian manifolds and incorporates the mini-batch strategy often used in deep learning. Within this framework, we also propose AMSGrad on embedded submanifolds of Euclidean space. Moreover, we provide convergence analyses valid for both constant and diminishing step sizes. Our analyses also reveal the relationship between the convergence rate and the mini-batch size. In numerical experiments, we apply the proposed algorithm to principal component analysis and the low-rank matrix completion problem, both of which can be formulated as Riemannian optimization problems. Python implementations of the methods used in the numerical experiments are available at https://anonymous.4open.science/r/202408-adaptive-0BA6/README.md.
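To illustrate the kind of method the abstract describes, the sketch below shows a generic AMSGrad-style step adapted to an embedded submanifold (here the unit sphere): the Euclidean gradient is projected onto the tangent space, the usual first- and second-moment updates are applied, and the step is mapped back to the manifold by a retraction. This is a minimal illustrative sketch under standard Riemannian-optimization conventions, not the paper's exact algorithm; all function names (`sphere_project`, `sphere_retract`, `riemannian_amsgrad_step`) are hypothetical.

```python
import numpy as np

def sphere_project(x, g):
    # Project a Euclidean gradient g onto the tangent space of the
    # unit sphere at x: remove the component along x.
    return g - np.dot(x, g) * x

def sphere_retract(x, v):
    # Normalization retraction: map x + v back onto the unit sphere.
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_amsgrad_step(x, grad, m, v, vhat, lr=0.05,
                            beta1=0.9, beta2=0.999, eps=1e-8):
    # One AMSGrad-style step on the sphere (illustrative sketch).
    rg = sphere_project(x, grad)            # Riemannian gradient
    m = beta1 * m + (1 - beta1) * rg        # first moment
    v = beta2 * v + (1 - beta2) * rg * rg   # second moment
    vhat = np.maximum(vhat, v)              # AMSGrad max rule
    step = -lr * m / (np.sqrt(vhat) + eps)
    x_new = sphere_retract(x, sphere_project(x, step))
    return x_new, m, v, vhat

# Toy usage: minimize f(x) = x^T A x over the unit sphere, whose
# minimizer is an eigenvector for the smallest eigenvalue of A.
# This is a small instance of the PCA-type problems the abstract mentions.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A + A.T                                  # symmetric objective matrix
x = rng.standard_normal(5)
x /= np.linalg.norm(x)
f0 = x @ A @ x                               # initial objective value
m = np.zeros(5); v = np.zeros(5); vhat = np.zeros(5)
for _ in range(500):
    grad = 2 * A @ x                         # Euclidean gradient of x^T A x
    x, m, v, vhat = riemannian_amsgrad_step(x, grad, m, v, vhat)
f_final = x @ A @ x
```

The retraction keeps every iterate feasible (unit norm), while the coordinate-wise `vhat` scaling mirrors Euclidean AMSGrad; the paper's framework generalizes this pattern to other manifolds and to mini-batch stochastic gradients.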
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Stephen_Becker1
Submission Number: 3274