Keywords: bandit, meta-learning
TL;DR: We design and analyze algorithms for a sequence of multi-armed bandit problems in which only a small subset of the arms can be optimal in any of the problems.
Abstract: We study a sequential decision problem where the learner faces a sequence of $K$-armed bandit tasks.
The task boundaries may be known (the bandit meta-learning setting) or unknown (the non-stationary bandit setting). For a given integer $M \le K$, the learner aims to compete with the best subset of arms of size $M$.
We design an algorithm based on a reduction to bandit submodular maximization, and show that, for $T$ time steps consisting of $N$ tasks, in the regime of large $N$ and a small number of optimal arms $M$, its regret in both settings is smaller than the simple $\tilde{O}(\sqrt{KNT})$ baseline achievable by standard algorithms designed for non-stationary bandit problems. For the bandit meta-learning problem with fixed task length $\tau$, we show that the regret of the algorithm is bounded as $\tilde{O}(NM\sqrt{M\tau} + (M^4 K N^2)^{1/3}\tau)$. Under additional assumptions on the identifiability of the optimal arms in each task, we give a bandit meta-learning algorithm with an improved $\tilde{O}(N\sqrt{M\tau} + (NMK^{1/2})^{1/2}\tau^{3/4})$ regret, where the order of the leading (first) term is optimal up to logarithmic factors; moreover, the algorithm does not require knowledge of $M$, $N$, or $T$ in advance.
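The following is a minimal toy simulation of the setting, not the paper's algorithm (which goes through bandit submodular maximization and does not assume the optimal subset is known). It assumes Bernoulli rewards, known task boundaries (the meta-learning setting), and an oracle that reveals the small set of $M$ potentially optimal arms; the contrast between running fresh UCB over all $K$ arms per task versus over only the $M$ relevant arms illustrates why exploiting the small optimal subset can beat the naive restart baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

K, M, N, tau = 50, 3, 200, 500               # arms, optimal-subset size, tasks, task length
star = rng.choice(K, size=M, replace=False)  # hypothetical oracle set of potentially optimal arms

def ucb_run(means, arms, tau):
    """Run UCB1 restricted to the candidate set `arms` for tau steps; return pseudo-regret."""
    counts = np.zeros(len(arms))
    sums = np.zeros(len(arms))
    best = means.max()
    regret = 0.0
    for t in range(tau):
        if t < len(arms):
            i = t                            # pull each candidate arm once to initialize
        else:
            ucb = sums / counts + np.sqrt(2.0 * np.log(t + 1) / counts)
            i = int(np.argmax(ucb))
        reward = rng.binomial(1, means[arms[i]])
        counts[i] += 1
        sums[i] += reward
        regret += best - means[arms[i]]      # pseudo-regret against the best arm's mean
    return regret

baseline, oracle = 0.0, 0.0
for _ in range(N):
    means = rng.uniform(0.0, 0.5, size=K)
    means[rng.choice(star)] = 0.9            # each task's best arm lies in the small set
    baseline += ucb_run(means, np.arange(K), tau)  # fresh UCB over all K arms per task
    oracle += ucb_run(means, star, tau)            # UCB restricted to the M relevant arms

print(f"total regret, UCB over K={K} arms per task : {baseline:.0f}")
print(f"total regret, UCB over M={M} arms per task : {oracle:.0f}")
```

Per task, UCB over $K$ arms incurs $\tilde{O}(\sqrt{K\tau})$ regret while the restricted run incurs $\tilde{O}(\sqrt{M\tau})$, which is the per-task gap the paper's leading terms reflect; the actual algorithms must instead learn the relevant subset across tasks rather than receive it from an oracle.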
Submission Number: 358