Abstract: This work studies and develops projection-free algorithms for online learning with convex loss functions, using linear optimization oracles (a.k.a.~Frank--Wolfe) to handle the constraint set.
More precisely, this work
(i) shows how to exploit semidefinite programming to jointly design and analyze online Frank--Wolfe-type algorithms numerically in a variety of settings,
(ii) leverages those design techniques to propose an improved (optimized) variant of an online Frank--Wolfe algorithm along with its conceptually simple potential-based proof,
and (iii) extends this proof to an anytime version of the algorithm, which enjoys a similar $O(T^{3/4})$ regret rate without requiring knowledge of the time horizon $T$ in advance.
We are not aware of other direct regret guarantees for an anytime version of online Frank--Wolfe
without using the classical doubling trick.
Based on the semidefinite technique, we conclude with strong numerical evidence suggesting that (i) no pure online Frank--Wolfe algorithm within our model class can have a regret guarantee better than $O(T^{3/4})$ without additional assumptions, (ii) current algorithms do not have optimal constants, and (iii) multiple linear optimization rounds per iteration do not generally help to obtain better regret bounds.
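To make the algorithmic template concrete, the following is a minimal sketch of a standard online Frank--Wolfe update (in the Hazan--Kale style), not the optimized variant proposed in this work: each round makes a single linear optimization (LMO) call on the running gradient sum and mixes the answer in with a decaying step size, the schedule behind $O(T^{3/4})$-type regret. The $\ell_1$-ball feasible set, the $t^{-1/2}$ step size, and the function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lmo_l1(g):
    """Linear minimization oracle over the unit l1 ball:
    argmin_{||v||_1 <= 1} <g, v> is a signed coordinate vector."""
    i = np.argmax(np.abs(g))
    v = np.zeros_like(g)
    v[i] = -np.sign(g[i])
    return v

def online_frank_wolfe(grad_fn, d, T):
    """Illustrative online Frank-Wolfe loop (not the paper's variant).

    grad_fn(t, x) returns the gradient of the round-t loss at x.
    Each round uses one LMO call; the iterate stays feasible because
    the update is a convex combination of two feasible points."""
    x = np.zeros(d)            # assumed feasible starting point
    g_sum = np.zeros(d)        # running sum of observed gradients
    iterates = []
    for t in range(1, T + 1):
        iterates.append(x.copy())
        g_sum += grad_fn(t, x)
        v = lmo_l1(g_sum)                # single LMO call per round
        sigma = min(1.0, t ** -0.5)      # decaying step size
        x = x + sigma * (v - x)
    return iterates
```

For instance, running it on quadratic losses $f_t(x) = \|x - c\|^2$ for a fixed target $c$ keeps every iterate inside the $\ell_1$ ball while the gradient sum steers the play toward the loss minimizer.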
Code Dataset Promise: Yes
Code Dataset Url: https://github.com/JulienWeibel/Optimized-projection-free-algorithms-for-online-learning-construction-and-worst-case-analysis
Signed Copyright Form: pdf
Format Confirmation: I agree that I have read and followed the formatting instructions for the camera ready version.
Submission Number: 2323