Towards Characterizing the Complexity of Riemannian Online Convex Optimization

Published: 22 Sept 2025, Last Modified: 01 Dec 2025, NeurIPS 2025 Workshop, CC BY 4.0
Keywords: Online convex optimization, Riemannian manifold, geodesically convex function
TL;DR: For Online Convex Optimization over Riemannian manifolds, we establish a tight curvature-dependent regret bound for Riemannian Online Gradient Descent, and introduce a new algorithm that achieves curvature-free regret.
Abstract: Online Convex Optimization (OCO) over Riemannian manifolds raises fundamental questions about how geometry affects algorithmic performance. While Riemannian Online Gradient Descent (R-OGD) has been shown to achieve a regret upper bound of $O(DL\sqrt{\zeta T})$, where $D$ is the diameter of the feasible set, $L$ is the Lipschitz constant of the losses, and $\zeta$ depends on the manifold's curvature, the tightness of this bound has remained unclear. We first establish a matching lower bound of $\Omega(DL\sqrt{\zeta T})$ for R-OGD, valid for any fixed step-size schedule. This shows that the worst-case regret of R-OGD is $\Theta(DL\sqrt{\zeta T})$, and that manifold curvature enters the regret as a multiplicative factor of $\sqrt{\zeta}$. In contrast to the Euclidean setting, where OGD is minimax optimal and regret bounds are independent of the feedback model, this result reveals that geometry can substantially degrade the performance of first-order algorithms. Our second contribution shows that this degradation is avoidable. In the full-information setting, we propose a new algorithm, R-FTRL, based on a Riemannian extension of Follow-the-Regularized-Leader. R-FTRL achieves a regret bound of $O(DL\sqrt{T})$, independent of the curvature. This establishes that curvature-dependent regret is an artifact of limited feedback, not of the problem itself. Our findings highlight a fundamental separation between first-order and full-information models in non-Euclidean settings, and uncover subtle interactions between feedback structure, algorithm design, and geometry.
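For context, the display below records the standard R-OGD geodesic gradient step together with one common form of the curvature constant $\zeta$ from the geodesically convex optimization literature, followed by a natural Riemannian instantiation of the FTRL template. This is a minimal sketch under assumed notation ($\mathrm{Exp}_x$ the exponential map at $x$, $\operatorname{grad} f_t$ the Riemannian gradient, $K$ a geodesically convex feasible set of diameter $D$, $\kappa_{\min} \le 0$ an assumed lower bound on the sectional curvature, and $d(\cdot,\cdot)$ the geodesic distance), not necessarily the paper's exact construction:

$$x_{t+1} = \mathrm{Exp}_{x_t}\!\big({-\eta_t}\,\operatorname{grad} f_t(x_t)\big), \qquad \zeta = \frac{\sqrt{|\kappa_{\min}|}\, D}{\tanh\!\big(\sqrt{|\kappa_{\min}|}\, D\big)} \ge 1,$$

$$x_{t+1} = \operatorname*{arg\,min}_{x \in K}\; \sum_{s=1}^{t} f_s(x) + \frac{1}{\eta}\, d(x, x_1)^2 \quad \text{(illustrative only; the squared-distance regularizer is an assumption).}$$

As $\kappa_{\min} \to 0$ (the flat limit), $\zeta \to 1$ and the R-OGD bound reduces to the Euclidean $O(DL\sqrt{T})$ rate, which is the rate R-FTRL attains regardless of curvature.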
Submission Number: 108