Online Frank-Wolfe with Arbitrary Delays

Published: 31 Oct 2022, 18:00; Last Modified: 05 Oct 2022, 07:29
NeurIPS 2022 Accept
Keywords: Online Convex Optimization, Online Frank-Wolfe, Arbitrary Delays
TL;DR: We propose a variant of online Frank-Wolfe for online learning with arbitrary delays, and show that it is robust to a relatively large amount of delay.
Abstract: The online Frank-Wolfe (OFW) method has gained much popularity for online convex optimization due to its projection-free property. Previous studies show that OFW can attain an $O(T^{3/4})$ regret bound for convex losses and an $O(T^{2/3})$ regret bound for strongly convex losses. However, they assume that each gradient queried by OFW is revealed immediately, which may not hold in practice and limits the application of OFW. To address this limitation, we propose a delayed variant of OFW, which allows gradients to be delayed by arbitrary rounds. The main idea is to perform an update similar to OFW after receiving any delayed gradient, and play the latest decision for each round. Despite its simplicity, we prove that our delayed variant of OFW is able to achieve an $O(T^{3/4}+dT^{1/4})$ regret bound for convex losses and an $O(T^{2/3}+d\log T)$ regret bound for strongly convex losses, where $d$ is the maximum delay. This is quite surprising since under a relatively large amount of delay (e.g., $d=O(\sqrt{T})$ for convex losses and $d=O(T^{2/3}/\log T)$ for strongly convex losses), the delayed variant of OFW enjoys the same regret bound as that of the original OFW.
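The main idea stated in the abstract, performing an OFW-style update whenever a delayed gradient arrives and playing the latest decision in each round, can be sketched as follows. This is a hypothetical simplification, not the paper's exact algorithm: it uses the probability simplex as the feasible set (whose linear minimization oracle is trivial) and a fixed step size `eta`, whereas the paper's analysis relies on round-dependent step sizes. The function names `lmo_simplex` and `delayed_ofw` are illustrative.

```python
import numpy as np

def lmo_simplex(g):
    """Linear minimization oracle over the probability simplex:
    argmin_{v in simplex} <g, v> is the vertex at the smallest entry of g."""
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v

def delayed_ofw(grad_fns, delays, eta=0.1):
    """Sketch of a delayed online Frank-Wolfe loop (illustrative only).

    grad_fns[t](x) returns the gradient of the round-t loss at x;
    delays[t] is the number of rounds before that gradient is revealed.
    """
    T = len(grad_fns)
    n = 3                       # decision dimension (illustrative)
    x = np.full(n, 1.0 / n)     # start at the simplex center
    pending = {}                # arrival round -> [(query point, grad_fn), ...]
    decisions = []
    for t in range(T):
        decisions.append(x.copy())          # play the latest decision
        pending.setdefault(t + delays[t], []).append((x.copy(), grad_fns[t]))
        # perform an OFW-style step for every gradient arriving this round
        for x_q, gf in pending.pop(t, []):
            g = gf(x_q)
            v = lmo_simplex(g)
            x = (1 - eta) * x + eta * v     # convex combination stays feasible
    return decisions
```

Because each update is a convex combination of the current decision and a vertex returned by the linear minimization oracle, the iterates remain feasible without any projection, which is the property that makes OFW (and this delayed variant) attractive for sets where projection is expensive.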