Locally Optimal Fixed-Budget Best Arm Identification in Two-Armed Gaussian Bandits with Unknown Variances

TMLR Paper 2439 Authors

29 Mar 2024 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: We address the problem of best arm identification (BAI) with a fixed budget for two-armed Gaussian bandits. In BAI, given multiple arms, we aim to find the best arm, the arm with the highest expected reward, through an adaptive experiment. \citet{Kaufman2016complexity} develop a lower bound for the probability of misidentifying the best arm. They also propose a strategy, assuming that the variances of rewards are known, and show that it is asymptotically optimal in the sense that its probability of misidentification matches the lower bound as the budget approaches infinity. However, an asymptotically optimal strategy is unknown when the variances are unknown. To address this open issue, we propose a strategy that estimates the variances during the adaptive experiment and draws arms with a ratio of the estimated standard deviations. We refer to this strategy as the \emph{Neyman Allocation (NA)-Augmented Inverse Probability weighting (AIPW)} strategy. We then demonstrate that this strategy is asymptotically optimal by showing that its probability of misidentification matches the lower bound when the budget approaches infinity and the gap between the expected rewards of the two arms approaches zero (\emph{small-gap regime}). Our results suggest that under the worst-case scenario characterized by the small-gap regime, our strategy, which employs estimated variances, is asymptotically optimal even when the variances are unknown.
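As an illustration of the sampling rule described in the abstract, the following is a minimal Python sketch of a two-armed NA-AIPW run. The uniform forced-exploration phase, the default variance estimate before an arm has enough observations, the plug-in means built only from past rounds, and the helper names `na_aipw` and `rewards_fn` are our own assumptions for illustration, not the authors' exact specification; in particular, the truncation constants $C_{\mu}$ and $C_{\sigma}$ from the paper are not modeled here.

```python
# A minimal sketch of an NA-AIPW-style strategy (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)


def na_aipw(rewards_fn, T, n_init=5):
    """Run a two-armed NA-AIPW-style strategy for a budget of T rounds.

    rewards_fn(a) draws one Gaussian reward from arm a in {0, 1}.
    Returns the index of the recommended (estimated best) arm.
    """
    # Running statistics per arm.
    counts = np.zeros(2)
    sums = np.zeros(2)
    sqsums = np.zeros(2)

    # Per-round records needed for the AIPW estimator.
    arms, rewards, probs, plug_means = [], [], [], []

    for t in range(T):
        # Plug-in estimates from data observed *before* round t.
        means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
        variances = np.where(
            counts > 1,
            sqsums / np.maximum(counts, 1) - means**2,
            1.0,  # assumed default before enough observations
        )
        stds = np.sqrt(np.maximum(variances, 1e-12))

        if t < 2 * n_init:
            # Uniform forced exploration in the first rounds (an assumption).
            p1 = 0.5
        else:
            # Neyman allocation: draw arm 1 with probability proportional
            # to its estimated standard deviation.
            p1 = stds[1] / (stds[0] + stds[1])

        a = int(rng.random() < p1)  # sampled arm
        y = rewards_fn(a)

        arms.append(a)
        rewards.append(y)
        probs.append(p1 if a == 1 else 1.0 - p1)  # prob. of the drawn arm
        plug_means.append(means.copy())

        counts[a] += 1
        sums[a] += y
        sqsums[a] += y**2

    # AIPW estimate of each arm's mean:
    # (1/T) * sum_t [ 1{A_t=a}/w_{a,t} * (Y_t - mu_hat_{a,t}) + mu_hat_{a,t} ]
    aipw = np.zeros(2)
    for a in range(2):
        for t in range(T):
            correction = 0.0
            if arms[t] == a:
                correction = (rewards[t] - plug_means[t][a]) / probs[t]
            aipw[a] += correction + plug_means[t][a]
    aipw /= T

    return int(np.argmax(aipw))


# Example: arm 1 is best (mean 0.1 vs 0.0), with unequal variances.
def rewards_fn(a):
    return rng.normal([0.0, 0.1][a], [1.0, 2.0][a])


print("recommended arm:", na_aipw(rewards_fn, T=2000))
```

Drawing each arm with probability proportional to its estimated standard deviation targets the Neyman allocation, and constructing the plug-in mean in the AIPW correction only from past rounds keeps the correction term a martingale difference sequence, which is what makes the estimator well behaved under adaptive sampling.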
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=5FE4z0yJXb
Changes Since Last Submission: We have made the following revisions.

**Request 1:** Avoid using little-$o$ notation and clarify the dependence on constants.
**Change 1:** We noted that little-$o$ notation is an asymptotic notation whose meaning does not depend on particular constants. The definition used here is:
> for every positive constant $\epsilon > 0$, there exists a constant $\tilde{\Delta} > 0$ such that for all $0 < \Delta \leq \tilde{\Delta}$, $f(\Delta) \leq \epsilon\Delta^2$.

To make this clear to readers, we avoided little-$o$ notation in the theorem and stated the result according to this definition. For completeness, we also provided an equivalent mathematical expression using little-$o$ notation.

**Request 2:** Define the upper and lower bounds so that their ratio approaches $1$ as $\Delta \to 0$, since the upper bound otherwise becomes meaningless when $\Delta \to 0$.
**Change 2:** We showed that the ratio of the upper and lower bounds approaches $1$ as $\Delta \to 0$. We also explained that, by its definition, the upper bound does not lose its meaning in the regime where $\Delta \to 0$.

**Request 3:** Relax the conditions on the parameters used in the algorithm.
**Change 3:** We stated that the conditions on the parameters are needed only for the proofs and are essentially trivial to satisfy in the actual algorithm; for example, it is acceptable to set $C_{\mu}$ and $C_{\sigma}$ to a value as large as $10^{100000000000}$. We have made this explicit.
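For reference, the equivalence between the quoted definition and the little-$o$ statement mentioned in Change 1 can be written compactly as follows, where $f$ stands for whatever remainder term the theorem controls (the symbol $f$ is a placeholder, not notation fixed by the paper):

$$
f(\Delta) = o(\Delta^2)\ \text{as } \Delta \to 0
\quad\Longleftrightarrow\quad
\forall \epsilon > 0,\ \exists \tilde{\Delta} > 0:\ \forall\, 0 < \Delta \leq \tilde{\Delta},\ \ f(\Delta) \leq \epsilon \Delta^2 .
$$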
Assigned Action Editor: ~Sivan_Sabato1
Submission Number: 2439