Keywords: Online Learning, Transductive Online Learning, Offline Learning, Mistake Bound
TL;DR: A qualitative trichotomy and some quantitative mistake bounds relating the transductive online learning setting to known combinatorial dimensions.
Abstract: We present new upper and lower bounds on the number of learner mistakes in the `transductive' online learning setting of Ben-David, Kushilevitz and Mansour (1997).
This setting is similar to standard online learning, except that the adversary fixes a sequence of instances $x_1,\dots,x_n$ to be labeled at the start of the game, and this sequence is known to the learner.
Qualitatively, we prove a \emph{trichotomy}, stating that the minimal number of mistakes made by the learner as $n$ grows can take exactly one of three possible forms: $n$, $\Theta\left(\log (n)\right)$, or $\Theta(1)$.
Furthermore, which of the three cases occurs is determined by whether the VC dimension and the Littlestone dimension of the hypothesis class are finite.
Quantitatively, we show a variety of bounds relating the number of mistakes to well-known combinatorial dimensions.
In particular, we improve the known lower bound on the constant in the $\Theta(1)$ case from $\Omega\left(\sqrt{\log(d)}\right)$ to $\Omega(\log(d))$, where $d$ is the Littlestone dimension.
Finally, we extend our results to cover multiclass classification and the agnostic setting.
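To make the protocol concrete, here is a minimal sketch of the transductive game, assuming a finite hypothesis class and a realizable adversary; the learner here runs the classic Halving strategy, which makes at most $\log_2 |H|$ mistakes. This illustrates the setting only, not the paper's algorithms, and the names (`transductive_game`, `H`, `target`) are hypothetical.

```python
# Illustrative sketch of the transductive online learning protocol (not from
# the paper). The adversary fixes the instance sequence x_1..x_n up front, and
# the learner receives the whole sequence before the game begins.

from typing import Callable, Sequence

def transductive_game(
    instances: Sequence[int],                    # x_1..x_n, known to the learner in advance
    hypotheses: Sequence[Callable[[int], int]],  # finite class H of {0,1}-valued functions
    target: Callable[[int], int],                # realizability: labels come from some h* in H
) -> int:
    """Run the game and return the learner's total number of mistakes."""
    version_space = list(hypotheses)  # hypotheses consistent with the labels seen so far
    mistakes = 0
    for x in instances:
        # Halving strategy: predict the majority vote of the current version space.
        # Each mistake removes at least half of it, giving at most log2(|H|) mistakes.
        votes = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes >= len(version_space) else 0
        true_label = target(x)  # adversary reveals the true label
        if prediction != true_label:
            mistakes += 1
        # Discard hypotheses contradicted by the revealed label.
        version_space = [h for h in version_space if h(x) == true_label]
    return mistakes

# Example: threshold functions on {0,...,9}; the learner makes O(log |H|) mistakes.
H = [lambda x, t=t: int(x >= t) for t in range(11)]
print(transductive_game(range(10), H, target=H[4]))  # prints 1
```

Note that this particular learner never exploits its advance knowledge of $x_1,\dots,x_n$; the paper's bounds concern learners that may.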
Supplementary Material: pdf
Submission Number: 14914