TL;DR: We propose a new theoretical criterion, the 'exponential stopping tail', which is strictly stronger than existing criteria. We give two algorithms that achieve such a tail bound and prove that some popular existing algorithms cannot satisfy it.
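To make the criterion concrete, here is one natural reading of an 'exponential stopping tail'. This is a hedged sketch: the characteristic time H and the constants c, C below are our own illustrative notation, not necessarily the paper's exact statement.

```latex
% One hedged formalization (the notation H, c, C is illustrative):
% an algorithm has an exponential stopping tail if its stopping
% time \tau satisfies, for some instance-dependent characteristic
% time H and constants c, C > 0,
\[
  \Pr(\tau > t) \;\le\; \exp\!\bigl(-c\, t / H\bigr)
  \qquad \text{for all } t \ge C H .
\]
% By contrast, a high-probability bound of the form
% \Pr\bigl(\tau > C H \log(1/\delta)\bigr) \le \delta for a fixed
% \delta controls only a single quantile and leaves the tail beyond
% it, including the event \{\tau = \infty\}, unconstrained.
```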
Abstract: The best arm identification problem requires identifying the best alternative (i.e., arm) in active experimentation using the smallest number of experiments (i.e., arm pulls), which is crucial for cost-efficient and timely decision-making. In the fixed-confidence setting, an algorithm must stop data-dependently and return the estimated best arm with a correctness guarantee. Since this stopping time is random, we would like its distribution to have light tails. Unfortunately, many existing studies focus on high-probability or in-expectation bounds on the stopping time, which allow heavy tails and, in the case of high-probability bounds, even a positive probability of never stopping. We first prove that this never-stopping event can indeed occur for some popular algorithms. Motivated by this, we propose algorithms that provably enjoy an exponential-tailed stopping time, improving upon the polynomial tail bound reported by Kalyanakrishnan et al. (2012). The first algorithm is based on a fixed-budget algorithm called Sequential Halving, combined with a doubling trick. The second is a meta algorithm that takes any fixed-confidence algorithm with a high-probability stopping guarantee and turns it into one that enjoys an exponential-tailed stopping time. Our results imply that there is much more to be desired from contemporary fixed-confidence algorithms.
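As a rough illustration of the shape of the two constructions described above, here is a minimal Python sketch. It assumes a `pull(arm)` function returning a stochastic reward; the names `sequential_halving`, `doubling_sh`, and `certify` are hypothetical, and `certify` stands in for a data-dependent verification rule, which we do not reproduce from the paper.

```python
import math
import random

def sequential_halving(pull, n_arms, budget):
    """Fixed-budget Sequential Halving (Karnin et al., 2013):
    split the budget across ceil(log2(n)) rounds; in each round,
    pull every surviving arm equally often, then keep the
    better-performing half."""
    survivors = list(range(n_arms))
    n_rounds = max(1, math.ceil(math.log2(n_arms)))
    for _ in range(n_rounds):
        if len(survivors) == 1:
            break
        per_arm = max(1, budget // (n_rounds * len(survivors)))
        means = {a: sum(pull(a) for _ in range(per_arm)) / per_arm
                 for a in survivors}
        survivors.sort(key=means.__getitem__, reverse=True)
        survivors = survivors[:math.ceil(len(survivors) / 2)]
    return survivors[0]

def doubling_sh(pull, n_arms, certify, initial_budget=64):
    """Doubling trick: rerun Sequential Halving with budgets
    T, 2T, 4T, ... and stop once `certify` accepts the candidate.
    `certify` is a placeholder for the verification step; the
    paper's actual rule may differ."""
    budget = initial_budget
    while True:
        candidate = sequential_halving(pull, n_arms, budget)
        if certify(candidate, budget):
            return candidate
        budget *= 2

# Toy usage with Bernoulli arms; the naive certify below accepts
# once the budget is large, purely to make the demo terminate.
means = [0.5, 0.6, 0.7, 0.8]
pull = lambda a: 1.0 if random.random() < means[a] else 0.0
best = doubling_sh(pull, len(means), certify=lambda a, T: T >= 4096)
```

The meta algorithm might be approximated by a generic restart scheme, shown below. We stress this is our own assumption about one standard way to convert a high-probability stopping guarantee into an exponential tail, not necessarily the paper's actual construction.

```python
def restart_wrapper(run_base, deadline):
    """Generic restart scheme (an assumption, not necessarily the
    paper's meta algorithm): run_base(budget) runs the base
    fixed-confidence algorithm for at most `budget` pulls and
    returns its answer, or None if it did not stop in time.
    If each attempt stops in time with probability >= 1 - delta,
    the number of attempts is geometric, so the total stopping
    time has an exponential tail."""
    while True:
        answer = run_base(deadline)
        if answer is not None:
            return answer
```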
Lay Summary: Imagine needing to find the best option among many choices, like the most effective ad or medical treatment, through experiments. You want to be sure you've picked the winner using the fewest trials.
The problem is that some current methods can take an unpredictably long time, or might never give you a final answer at all. We prove that this "never stopping" risk is real for some popular methods.
Our research offers new ways to experiment that guarantee a much faster and more predictable stopping time. One method refines a tournament-style approach, and another acts as an upgrade for existing techniques. In short, our methods ensure that the search for the best option does not drag on, so you get reliable answers efficiently.
Link To Code: https://github.com/Kapilan-Balagopalan/Brakebooster
Primary Area: Theory->Online Learning and Bandits
Keywords: Multi-Armed Bandits, Pure Exploration, Learning Theory
Submission Number: 9026