Keywords: Online learning, Riemannian optimization, projection-free optimization
TL;DR: We study projection-free online optimization on Riemannian manifolds and get several sublinear regret guarantees.
Abstract: The projection operation is a critical component in a wide range of optimization algorithms, such as online gradient descent (OGD),
for enforcing constraints and achieving optimal regret bounds. However, it can be computationally prohibitive in high-dimensional settings or
when dealing with ill-conditioned constraint sets. Projection-free algorithms address this issue by replacing the projection oracle with more efficient optimization
subroutines. To date, however, these methods have been developed primarily in the Euclidean setting, and while there is growing interest in optimization on
Riemannian manifolds, essentially no work has attempted to apply projection-free tools in this setting. One apparent obstacle is that non-trivial affine functions
are generally non-convex on such domains. In this paper, we present methods for obtaining sublinear regret guarantees in online geodesically convex optimization
on curved spaces for two scenarios: when we have access to (a) a separation oracle or (b) a linear optimization oracle. For geodesically convex losses, and
when a separation oracle is available, our algorithms achieve $O(T^{\frac{1}{2}})$, $O(T^{\frac{3}{4}})$, and $O(T^{\frac{1}{2}})$ adaptive regret guarantees in the full
information setting, the bandit setting with one-point feedback, and the bandit setting with two-point feedback, respectively. When a linear optimization oracle is
available, we obtain regret rates of $O(T^{\frac{3}{4}})$ for geodesically convex losses
and $O(T^{\frac{2}{3}}\log T)$ for strongly geodesically convex losses.
Supplementary Material: pdf
Submission Number: 9229