Online Prediction of Stochastic Sequences with High Probability Regret Bounds

Published: 26 Jan 2026, Last Modified: 03 May 2026, ICLR 2026 Poster, CC BY 4.0
Keywords: online prediction, learning theory, high-probability bound, regret, stochastic sequences
TL;DR: We propose high-probability regret bounds for online prediction of stochastic sequences.
Abstract: We revisit the classical problem of universal prediction of stochastic sequences with a finite time horizon $T$ known to the learner. We investigate whether it is possible to derive vanishing regret bounds that hold with high probability, complementing existing bounds from the literature that hold only in expectation. We propose such high-probability bounds, which take a form very similar to the prior in-expectation bounds. For universal prediction of a stochastic process over a countable alphabet, our bound gives a convergence rate of $\mathcal{O}(T^{-1/2} \delta^{-1/2})$ with probability at least $1-\delta$, compared to prior in-expectation bounds of order $\mathcal{O}(T^{-1/2})$. We also prove an impossibility result showing that the exponent of $\delta$ in a bound of this form cannot be improved without additional assumptions.
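As a side note on the rate stated in the abstract: one standard route to a $\delta^{-1/2}$ dependence (illustrative only; the paper's actual proof technique is not described here) is Markov's inequality applied to a second moment. If the regret $R_T \geq 0$ satisfies a second-moment bound $\mathbb{E}[R_T^2] \leq C^2/T$ for some constant $C$, then:

$$
\mathbb{P}\left(R_T > \frac{C}{\sqrt{T\delta}}\right)
= \mathbb{P}\left(R_T^2 > \frac{C^2}{T\delta}\right)
\leq \frac{\mathbb{E}[R_T^2]\, T\delta}{C^2}
\leq \delta,
$$

so $R_T = \mathcal{O}(T^{-1/2}\delta^{-1/2})$ with probability at least $1-\delta$, matching the form of the bound above. By contrast, applying Markov directly to a first-moment bound $\mathbb{E}[R_T] \leq C/\sqrt{T}$ only yields the weaker $\delta^{-1}$ dependence.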
Primary Area: learning theory
Submission Number: 727