TL;DR: We provide improved regret bounds for delayed online convex optimization with curved losses.
Abstract: In this work, we study the online convex optimization problem with curved losses and delayed feedback.
When losses are strongly convex, existing approaches obtain regret bounds of order $d_{\max} \ln T$, where $d_{\max}$ is the maximum delay and $T$ is the time horizon. However, in many cases this guarantee can be much worse than the $\sqrt{d_{\mathrm{tot}}}$ bound obtained by a delayed version of online gradient descent, where $d_{\mathrm{tot}}$ is the total delay.
We bridge this gap by proposing a variant of follow-the-regularized-leader that obtains regret of order $\min\{\sigma_{\max}\ln T, \sqrt{d_{\mathrm{tot}}}\}$, where $\sigma_{\max}$ is the maximum number of missing observations.
We then consider exp-concave losses and extend the Online Newton Step algorithm to handle delays via adaptive learning-rate tuning, achieving regret $\min\{d_{\max} n\ln T, \sqrt{d_{\mathrm{tot}}}\}$, where $n$ is the dimension.
To our knowledge, this is the first algorithm to achieve such a regret bound for exp-concave losses.
We further consider the problem of unconstrained online linear regression and achieve a similar guarantee by designing a variant of the Vovk-Azoury-Warmuth forecaster with a clipping trick.
Finally, we implement our algorithms and conduct experiments under various types of delays and losses, showing improved performance over existing methods.
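To make the setting concrete, below is a minimal Python sketch of the delayed-feedback protocol together with the online-gradient-descent baseline the abstract compares against (the one whose regret scales with $\sqrt{d_{\mathrm{tot}}}$ under a suitable step size). The names `grad_oracle`, `delays`, and `radius`, as well as the exact step-size schedule, are illustrative assumptions, not the paper's API.

```python
import numpy as np

def delayed_ogd(T, dim, grad_oracle, delays, radius=1.0):
    """Minimal sketch of online gradient descent under delayed feedback.

    `grad_oracle(t, x)` stands in for the environment and returns the
    gradient of the round-t loss at the point x; `delays[t]` is the
    feedback delay of round t. These names are illustrative only.
    """
    x = np.zeros(dim)
    pending = {}   # arrival round -> list of (round, played point)
    d_seen = 0     # running total of delay accumulated so far
    plays = []
    for t in range(T):
        plays.append(x.copy())
        # the gradient of round t only arrives at round t + delays[t]
        pending.setdefault(t + delays[t], []).append((t, x.copy()))
        d_seen += delays[t]
        # a step size of order radius / sqrt(T + d_tot) yields the
        # sqrt(d_tot)-type guarantee; the running total is used as a proxy
        eta = radius / np.sqrt(t + 1 + d_seen)
        for s, x_s in pending.pop(t, []):
            x = x - eta * grad_oracle(s, x_s)
        # Euclidean projection back onto the feasible ball
        nrm = np.linalg.norm(x)
        if nrm > radius:
            x *= radius / nrm
    return plays
```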
Lay Summary: Sequential decision-making often involves delays before outcomes become known: imagine scenarios like financial markets or recommendation systems, where decisions are made continuously but feedback arrives late. Importantly, these scenarios usually involve special loss structures, such as strong convexity or exponential concavity. Our goal is to leverage this "curvature" information to significantly improve decision-making performance under delayed feedback.
Specifically, for strongly convex losses, we design an algorithm that adapts to both the maximum number of missing observations and the total accumulated delay, achieving better theoretical guarantees than previous methods. For exp-concave losses, we extend the Online Newton Step algorithm by incorporating an adaptive learning rate, allowing it to adapt effectively to different delay regimes. We further extend the Vovk-Azoury-Warmuth forecaster with a carefully designed clipping scheme and learning rate choice to achieve similar curvature-adaptive guarantees for unconstrained online linear regression. Extensive experiments across various settings further validate the superior performance of our algorithms compared to existing ones.
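For the exp-concave case, the following is an illustrative sketch of an Online Newton Step update driven by delayed gradients. It is not the paper's exact adaptive-learning-rate variant: the tuning constant `gamma`, the buffering scheme, and the plain Euclidean projection (in place of ONS's generalized projection in the $A$-norm) are simplifying assumptions.

```python
import numpy as np

def delayed_ons(T, dim, grad_oracle, delays, gamma=0.5, eps=1.0, radius=1.0):
    """Illustrative Online Newton Step with delayed feedback
    (a simplified stand-in for the paper's adaptive variant)."""
    x = np.zeros(dim)
    A = eps * np.eye(dim)   # curvature matrix, regularized for invertibility
    pending = {}            # arrival round -> list of (round, played point)
    plays = []
    for t in range(T):
        plays.append(x.copy())
        pending.setdefault(t + delays[t], []).append((t, x.copy()))
        for s, x_s in pending.pop(t, []):
            g = grad_oracle(s, x_s)
            A += np.outer(g, g)                        # rank-one curvature update
            x = x - (1.0 / gamma) * np.linalg.solve(A, g)
        # ONS calls for a projection in the A-norm; a Euclidean
        # projection is used here purely to keep the sketch short
        nrm = np.linalg.norm(x)
        if nrm > radius:
            x *= radius / nrm
    return plays
```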
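And for the unconstrained linear-regression part, here is a sketch of a Vovk-Azoury-Warmuth-style forecaster in which features are observed immediately but labels arrive late, and predictions are clipped. The threshold `clip`, the regularization `lam`, and the label buffer are illustrative choices, not the paper's exact clipping scheme or learning-rate choice.

```python
import numpy as np

def delayed_vaw(features, labels, delays, lam=1.0, clip=1.0):
    """Sketch of a VAW-style forecaster with delayed labels and clipping.

    The label of round t only becomes available at round t + delays[t];
    `clip` is a placeholder threshold on the predictions.
    """
    T, n = features.shape
    A = lam * np.eye(n)     # regularized second-moment matrix of features
    b = np.zeros(n)         # sum of label-weighted features received so far
    pending = {}            # arrival round -> list of rounds whose labels arrive
    preds = np.empty(T)
    for t in range(T):
        x = features[t]
        A += np.outer(x, x)  # VAW folds the current feature into A before predicting
        preds[t] = np.clip(np.linalg.solve(A, b) @ x, -clip, clip)
        pending.setdefault(t + delays[t], []).append(t)
        for s in pending.pop(t, []):   # labels arriving this round
            b += labels[s] * features[s]
    return preds
```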
Primary Area: Theory->Online Learning and Bandits
Keywords: online learning, delayed feedback, curved losses
Submission Number: 8076