Precise Regret Bounds for Log-loss via a Truncated Bayesian Algorithm

Published: 31 Oct 2022, Last Modified: 14 Dec 2022
Venue: NeurIPS 2022 (Accept)
Readers: Everyone
Keywords: Sequential probability assignment, Online Regression, Logarithmic Loss, Bayesian Algorithm, Shtarkov Sum
TL;DR: We give optimal lower and upper bounds on the regret of online regression under logarithmic loss via a novel smooth truncated Bayesian algorithm.
Abstract: We study sequential general online regression, also known as sequential probability assignment, under logarithmic loss when compared against a broad class of experts. We obtain tight, often matching, lower and upper bounds for the sequential minimax regret, defined as the excess loss incurred by the predictor over the best expert in the class. After proving a general upper bound, we consider specific classes of experts, from the Lipschitz class to the bounded-Hessian class, and derive matching lower and upper bounds with provably optimal constants. Our bounds hold for a wide range of values of the data dimension and the number of rounds. To derive the lower bounds, we use tools from information theory (e.g., the Shtarkov sum), and for the upper bounds, we resort to a new "smooth truncated covering" of the class of experts. This allows us to obtain constructive proofs by applying a simple and novel truncated Bayesian algorithm. Our proofs are substantially simpler than the existing ones and yet provide tighter (and often optimal) bounds.
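
For concreteness, the following is a minimal sketch of the standard definitions the abstract refers to, in notation chosen for illustration only (the outcomes $y_t$, side information $\mathbf{x}_t$, and expert class $\mathcal{H}$ are assumptions, not necessarily the paper's exact setup):
\[
  \mathrm{Regret}_T(\mathcal{H})
  \;=\; \sum_{t=1}^{T} -\log p_t\bigl(y_t \mid y^{t-1}, \mathbf{x}^{t}\bigr)
  \;-\; \inf_{h \in \mathcal{H}} \sum_{t=1}^{T} -\log h\bigl(y_t \mid \mathbf{x}_t\bigr),
\]
i.e., the excess log-loss of the learner's sequential probability assignments $p_t$ over the best expert in hindsight. In the classical fixed-design setting with known horizon $T$, the maximal minimax regret is characterized by the logarithm of the Shtarkov sum,
\[
  S_T(\mathcal{H}) \;=\; \sum_{y^T} \sup_{h \in \mathcal{H}} \prod_{t=1}^{T} h\bigl(y_t \mid \mathbf{x}_t\bigr),
  \qquad
  \inf_{p}\,\sup_{y^T}\, \mathrm{Regret}_T(\mathcal{H}) \;=\; \log S_T(\mathcal{H}),
\]
which is the information-theoretic quantity the lower bounds build on.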
Supplementary Material: pdf
