Adaptive gradient descent on Riemannian manifolds and its applications to Gaussian variational inference

ICLR 2026 Conference Submission 21715 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Adaptive method, Riemannian optimization, Variational Inference
TL;DR: We propose RAdaGD, a novel family of adaptive gradient descent methods on general Riemannian manifolds, and its applications to Gaussian variational inference.
Abstract: We propose RAdaGD, a novel family of adaptive gradient descent methods on general Riemannian manifolds. RAdaGD adapts the step-size parameter without line search and includes instances that achieve a non-ergodic convergence guarantee, $f(x_k) - f(x_\star) \le \mathcal{O}(1/k)$, under local geodesic smoothness and generalized geodesic convexity. A core application of RAdaGD is Gaussian variational inference, where, under additional technical assumptions, our method provides the first convergence guarantee without requiring $L$-smoothness of the target log-density. We also investigate the empirical performance of RAdaGD in numerical simulations and demonstrate its competitiveness with existing algorithms.
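The abstract does not spell out the step-size rule, so the following is only a rough sketch of what a line-search-free, adaptive Riemannian gradient descent loop can look like, using the unit sphere as a concrete manifold. The adaptation rule (a local smoothness estimate from successive gradients), the `exp_map`/`adaptive_rgd` helpers, and the eigenvector example are illustrative assumptions, not the RAdaGD update from the paper.

```python
# Illustrative sketch only: adaptive Riemannian gradient descent on the unit sphere.
# The step-size rule below is a generic local-curvature estimate, NOT the RAdaGD rule.
import numpy as np

def riemannian_grad(x, euclid_grad):
    """Project the Euclidean gradient onto the tangent space of the sphere at x."""
    return euclid_grad - np.dot(x, euclid_grad) * x

def exp_map(x, v):
    """Exponential map on the unit sphere: move from x along tangent vector v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def adaptive_rgd(f_grad, x0, n_iters=200, lam0=1e-2):
    """Minimize a function on the sphere with an adaptive (line-search-free) step size."""
    x_prev, lam = x0 / np.linalg.norm(x0), lam0
    g_prev = riemannian_grad(x_prev, f_grad(x_prev))
    x = exp_map(x_prev, -lam * g_prev)
    for _ in range(n_iters):
        g = riemannian_grad(x, f_grad(x))
        # Local smoothness estimate: gradient change divided by geodesic distance.
        # The old gradient is projected to the new tangent space (a crude stand-in
        # for parallel transport, assumed here purely for illustration).
        g_prev_t = riemannian_grad(x, g_prev)
        dist = np.arccos(np.clip(np.dot(x_prev, x), -1.0, 1.0))
        L_hat = np.linalg.norm(g - g_prev_t) / max(dist, 1e-12)
        # Grow the step cautiously, but never exceed 1 / (2 * estimated smoothness).
        lam = min(2.0 * lam, 0.5 / max(L_hat, 1e-12))
        x_prev, g_prev = x, g
        x = exp_map(x, -lam * g)
    return x

# Example: leading eigenvector of a symmetric matrix via f(x) = -x^T A x on the sphere.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
x_star = adaptive_rgd(lambda x: -2 * A @ x, rng.standard_normal(5))
```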
Supplementary Material: zip
Primary Area: optimization
Submission Number: 21715