From Contextual Combinatorial Semi-Bandits to Bandit List Classification: Improved Sample Complexity with Sparse Rewards
Keywords: multiclass, classification, bandit, combinatorial, online, pac, exploration
TL;DR: We design a PAC-learner for contextual combinatorial semi-bandits with sparse rewards, with a sample complexity bound that primarily scales with the sparsity parameter rather than the number of arms.
Abstract: We study the problem of contextual combinatorial semi-bandits, where input contexts are mapped to subsets of size $m$ of a collection of $K$ possible actions. In each round of the interaction, the learner observes feedback consisting of the realized rewards of the predicted actions. Motivated by prototypical applications of contextual bandits, we focus on the $s$-sparse regime, in which we assume that the sum of rewards is bounded by some value $s \ll K$. For example, in recommendation systems, the number of products purchased by any customer is significantly smaller than the total number of available products. Our main result is for the $(\varepsilon,\delta)$-PAC variant of the problem, for which we design an algorithm that returns an $\varepsilon$-optimal policy with high probability using a sample complexity of $\widetilde{O}\big( (\mathrm{poly}(K/m) + sm / \varepsilon^2) \log (|\Pi|/\delta) \big)$, where $\Pi$ is the underlying (finite) class and $s$ is the sparsity parameter. This bound improves upon known bounds for combinatorial semi-bandits whenever $s \ll K$, and in the regime where $s = O(1)$, the leading terms in our bound match the corresponding full-information rates, implying that bandit feedback essentially comes at no cost. Our PAC learning algorithm is also computationally efficient given access to an ERM oracle for $\Pi$. Our framework generalizes the list multiclass classification problem with bandit feedback, which can be seen as a special case with binary reward vectors. In the special case of single-label classification, corresponding to $s=m=1$, we prove an $O \big((K^7 + 1/\varepsilon^2)\log (|\mathcal{H}|/\delta)\big)$ sample complexity bound for a finite hypothesis class $\mathcal{H}$, which improves upon recent results in this scenario. Additionally, we consider the regret minimization setting, where data can be generated adversarially, and establish a regret bound of $\widetilde O(|\Pi| + \sqrt{smT \log |\Pi|})$, extending the result of Erez et al. ('24), who consider the simpler single-label classification setting.
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 20179
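To make the interaction protocol described in the abstract concrete, below is a minimal simulation sketch of the $s$-sparse contextual combinatorial semi-bandit loop. Everything here (the context distribution, the reward sampler, and the uniform-exploration placeholder learner) is an illustrative assumption and not the paper's algorithm, which instead relies on an ERM oracle over the policy class $\Pi$.

```python
import numpy as np

# Illustrative parameters: K arms, slates of size m, reward vectors with sparsity s << K.
K, m, s = 20, 3, 2
rng = np.random.default_rng(0)

def sample_context():
    # Contexts are abstract in the paper; here we draw a small feature vector for illustration.
    return rng.normal(size=5)

def sample_sparse_rewards(context):
    # Reward vector in [0, 1]^K with at most s nonzero entries, so the total reward is bounded by s.
    r = np.zeros(K)
    support = rng.choice(K, size=s, replace=False)
    r[support] = rng.uniform(0.0, 1.0, size=s)
    return r

class UniformExplorationLearner:
    """Placeholder learner that predicts a uniformly random slate of m arms.
    The paper's PAC algorithm would instead be built around an ERM oracle for the class Pi."""
    def predict(self, context):
        return rng.choice(K, size=m, replace=False)

    def update(self, context, slate, feedback):
        pass  # a real learner would incorporate the observed feedback here

learner = UniformExplorationLearner()
for t in range(1000):
    x = sample_context()
    rewards = sample_sparse_rewards(x)   # hidden from the learner
    slate = learner.predict(x)           # subset of m out of K actions
    feedback = rewards[slate]            # semi-bandit feedback: only the chosen arms' rewards
    learner.update(x, slate, feedback)
```

In the $(\varepsilon,\delta)$-PAC variant studied in the paper, the goal after such an interaction is to output a policy whose expected reward is within $\varepsilon$ of the best policy in $\Pi$, with probability at least $1-\delta$.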