Keywords: Online Learning, Transductive Online Learning, Algorithms with Predictions
TL;DR: We show that having access to good predictions about future examples can lead to better regret bounds in online regression.
Abstract: Motivated by the predictability of real-life data streams, we study online regression when the online learner has access to predictions about future examples. In the extreme case, called transductive online learning, the sequence of examples is revealed to the learner before the game begins. Here, we fully characterize the expected regret by the fat-shattering dimension, establishing a separation between transductive online regression and online regression, akin to that between online and transductive online classification. Then, we generalize this setting by allowing for noisy or \emph{imperfect} predictions about future examples. Using our results for the transductive online setting, we develop an online learner whose expected regret matches the worst-case regret, improves smoothly with prediction quality, and significantly outperforms the worst-case regret when future example predictions are precise, achieving performance similar to the transductive online learner. This enables learnability for previously unlearnable classes under predictable examples, aligning with the broader learning-augmented model paradigm.
Supplementary Material: zip
Primary Area: learning theory
Submission Number: 14668