Keywords: Stochastic multi-armed bandits, best-arm identification, sequential learning, ranking and selection
Abstract: Top-2 methods have become popular for solving the best arm identification (BAI) problem. The best arm, i.e., the arm with the largest mean among finitely many, is identified through an algorithm that at each sequential step pulls the empirical best arm with a fixed probability $\beta$, and pulls the best challenger arm otherwise. The probability of incorrect selection is guaranteed to lie below a specified $\delta>0$. Information-theoretic lower bounds on sample complexity are well known for the BAI problem and are matched asymptotically as $\delta\to 0$ by computationally demanding plug-in methods. The above top-2 algorithm, for any $\beta\in(0, 1)$, has sample complexity within a constant factor of the lower bound. However, determining the optimal $\beta$ that matches the lower bound has proven difficult. In this paper, we address this and propose an optimal top-2-type algorithm. We consider a function of the allocations anchored at a threshold: if it exceeds the threshold, the algorithm samples the empirical best arm; otherwise, it samples the challenger arm. We show that the proposed algorithm is optimal as $\delta\to 0$. Our analysis relies on identifying limiting fluid dynamics of the allocations, which satisfy a series of ordinary differential equations (ODEs) pasted together and which describe the asymptotic path followed by our algorithm. We rely on the implicit function theorem to show existence and uniqueness of these fluid ODEs and to show that the proposed algorithm remains close to the ODE solution.
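For readers unfamiliar with the fixed-$\beta$ baseline the abstract refers to, the following is a minimal sketch of one sampling step. It assumes unit-variance Gaussian rewards and the standard transportation-cost (generalized likelihood ratio) definition of the best challenger; the paper's own thresholded allocation rule is not specified in the abstract, so it is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def beta_top2_step(means_hat, counts, beta=0.5, rng=None):
    """One step of a fixed-beta top-2 sampling rule (illustrative sketch).

    With probability beta, pull the empirical best arm; otherwise pull the
    best challenger, here taken to be the arm minimizing the unit-variance
    Gaussian transportation cost against the empirical best arm.
    """
    rng = rng or np.random.default_rng()
    best = int(np.argmax(means_hat))
    if rng.random() < beta:
        return best
    # Challenger: arm j != best minimizing
    # (N_best * N_j / (N_best + N_j)) * (mu_best - mu_j)^2 / 2.
    costs = np.full(len(means_hat), np.inf)
    for j in range(len(means_hat)):
        if j == best:
            continue
        nb, nj = counts[best], counts[j]
        gap = means_hat[best] - means_hat[j]
        costs[j] = nb * nj / (nb + nj) * gap**2 / 2.0
    return int(np.argmin(costs))
```

The proposed algorithm replaces the coin flip with a deterministic choice: it tracks a function of the current allocations and samples the empirical best arm or the challenger according to whether that function exceeds a threshold.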
Supplementary Material: zip
Primary Area: Bandits
Submission Number: 16503