Abstract: We study best-arm identification (BAI) in the fixed-budget setting. Adaptive allocations based on upper confidence bounds (UCBs), such as UCBE, are known to work well for BAI. However, the theoretical optimality of such allocations is instance-dependent, which we show to be an artifact in many fixed-budget BAI problems. In this paper we propose a UCB exploration algorithm that is both theoretically and empirically efficient for the fixed-budget BAI problem under a Bayesian setting. The key idea is to learn prior information, which can enhance the performance of UCB-based BAI algorithms, as it has in the cumulative regret minimization problem. We establish bounds on the failure probability and the simple regret for the Bayesian BAI problem, providing upper bounds of order $\tilde{O}(\sqrt{K/n})$ up to logarithmic factors, where $n$ denotes the budget and $K$ the number of arms. Furthermore, we demonstrate through empirical results that our approach consistently outperforms state-of-the-art baselines.
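As background for the abstract's reference to UCB-based allocation in fixed-budget BAI, the following is a minimal sketch of a UCBE-style procedure on Bernoulli arms: pull the arm maximizing an empirical mean plus an exploration bonus, then recommend the empirically best arm when the budget is spent. This is an illustrative simplification, not the paper's prior-learning Bayesian algorithm; the function name and the exploration parameter `a` are hypothetical.

```python
import numpy as np

def ucbe_style_bai(means, n, a=1.0, seed=0):
    """Sketch of a UCBE-style fixed-budget BAI run on Bernoulli arms.

    means : true success probabilities (used only to simulate rewards)
    n     : total sampling budget
    a     : exploration parameter (instance tuning is the issue the
            abstract alludes to; a=1.0 here is an arbitrary choice)
    """
    rng = rng = np.random.default_rng(seed)
    K = len(means)
    counts = np.zeros(K)
    sums = np.zeros(K)
    # Initialize by pulling each arm once.
    for i in range(K):
        sums[i] += float(rng.random() < means[i])
        counts[i] += 1
    # Spend the remaining budget on the arm with the largest UCB index.
    for _ in range(n - K):
        ucb = sums / counts + np.sqrt(a / counts)
        i = int(np.argmax(ucb))
        sums[i] += float(rng.random() < means[i])
        counts[i] += 1
    # Recommend the empirically best arm.
    return int(np.argmax(sums / counts))
```

With a generous budget relative to the gaps, e.g. `ucbe_style_bai([0.2, 0.5, 0.9], n=300)`, the recommendation is the true best arm with high probability; the paper's contribution concerns making such guarantees hold without instance-dependent tuning, by learning prior information.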
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We would like to thank the Assigned Action Editor and the three reviewers for their positive feedback on our previous revision.
In this camera-ready revision, we have made several updates, including correcting mathematical notation and refining the definition of the supremum in the probability expression $e_*(\alpha)$.
Additionally, we have adjusted formatting issues in the appendix.
Code: https://github.com/rong-zhu/UCBExploration-BayesianBAI
Assigned Action Editor: ~Marcello_Restelli1
Submission Number: 2483