Towards Characterizing the Complexity of Riemannian Online Convex Optimization
Abstract: Online Convex Optimization (OCO) over Riemannian manifolds raises fundamental questions about how geometry affects algorithmic performance.
While Riemannian Online Gradient Descent (R-OGD) has been shown to achieve a regret upper bound of $O(DL\sqrt{\zeta T})$,
where $\zeta$ depends on the manifold’s curvature,
the tightness of this bound remained unclear.
We first establish a matching lower bound of $\Omega(DL\sqrt{\zeta T})$ for R-OGD,
valid for any predetermined step-size schedule and for certain classes of adaptive step-size schedules.
This shows that the worst-case regret of R-OGD is $\Theta(DL\sqrt{\zeta T})$,
and that the effect of manifold curvature appears as a multiplicative factor of $\sqrt{\zeta}$ in the regret.
In contrast to the Euclidean setting, where OGD is minimax optimal and regret bounds do not depend on the feedback model, this result reveals that geometry can substantially degrade the performance of first-order algorithms.
We also analyze a Riemannian extension of Follow-the-Regularized-Leader, which we term R-FTRL, in the full-information setting.
R-FTRL achieves a regret bound of $O(DL\sqrt{T})$,
independent of the curvature.
This complements recent curvature-independent guarantees for full-information methods obtained by different algorithmic approaches.
Together with our lower bound for R-OGD,
our results support a separation between first-order feedback and full-information feedback in non-Euclidean settings,
and highlight the subtle interactions between feedback structure, algorithm design, and geometry.
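As a concrete illustration (not taken from the submission), the R-OGD update $x_{t+1} = \mathrm{Exp}_{x_t}(-\eta_t\, \mathrm{grad}\, f_t(x_t))$ can be sketched on the unit sphere, where the Riemannian gradient is the tangent-space projection of the Euclidean gradient and the exponential map has a closed form. The function names, the choice of manifold, and the step-size schedule below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def riemannian_grad(x, euclid_grad):
    # On the unit sphere, the Riemannian gradient is the projection of the
    # Euclidean gradient onto the tangent space at x (orthogonal to x).
    return euclid_grad - np.dot(euclid_grad, x) * x

def exp_map(x, v):
    # Exponential map on the unit sphere: follow the geodesic (great circle)
    # starting at x with initial tangent velocity v.
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return x
    return np.cos(norm_v) * x + np.sin(norm_v) * (v / norm_v)

def r_ogd(grad_fns, x0, etas):
    # Riemannian Online Gradient Descent: at each round, receive the loss
    # gradient at the current iterate and take a geodesic step against it.
    x = x0 / np.linalg.norm(x0)
    iterates = [x]
    for grad_fn, eta in zip(grad_fns, etas):
        g = riemannian_grad(x, grad_fn(x))
        x = exp_map(x, -eta * g)
        iterates.append(x)
    return iterates
```

For example, with linear losses $f_t(x) = \langle c_t, x \rangle$ the Euclidean gradient at every point is simply $c_t$, so `grad_fns` can be a list of constant functions; every iterate remains on the sphere because the exponential map moves along the manifold itself rather than through the ambient space.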
Submission Number: 1021