Rethinking Exploration In Asynchronous Bayesian Optimization: Standard Acquisition Is All You Need

Published: 12 Jun 2025, Last Modified: 02 Jul 2025 · EXAIT@ICML 2025 Poster · CC BY 4.0
Track: AI for Science
Keywords: Bayesian Optimization, Gradientless Optimization, Black-Box Optimization, Parallel Optimization
Abstract: Asynchronous Bayesian optimization is widely used for gradient-free optimization in domains with independent parallel experiments and varying evaluation times. Previous works posit that standard acquisition functions lead to under-exploration of the space via redundant queries. We show that this is not the case: standard acquisition functions avoid redundant queries thanks to intermediate posterior updates. We show theoretically that $\textit{penalization}$-based methods are approximations to the Kriging Believer, a method with known shortcomings. By analysing the distance to busy locations, we also show that enforcing diversity causes incumbent methods to over-explore and under-exploit in asynchronous settings, reducing their performance. In contrast, our extensive experiments demonstrate that simple standard acquisition functions, like the Upper Confidence Bound, match or outperform purpose-built asynchronous methods across synthetic and real-world tasks.
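The abstract's central claim is that plain UCB, combined with intermediate posterior updates as results arrive, naturally steers away from already-observed points. A minimal sketch of that mechanism (this is illustrative code, not the paper's implementation; the kernel, length-scale, and $\beta$ value are assumptions):

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """RBF kernel matrix between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    """Posterior mean and std of a zero-mean GP at x_query."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def ucb_next(x_train, y_train, grid, beta=2.0):
    """Pick the next query by maximizing mu + beta * sigma over a grid."""
    mu, sd = gp_posterior(x_train, y_train, grid)
    return grid[np.argmax(mu + beta * sd)]

# Toy objective: once two evaluations have completed and entered the
# posterior, the UCB maximizer sits in a high-uncertainty region away
# from both observed points -- no explicit penalization of busy
# locations is needed.
f = lambda x: np.sin(3 * x)
grid = np.linspace(0.0, 2.0, 201)
xs = np.array([0.5, 1.5])
ys = f(xs)
nxt = ucb_next(xs, ys, grid)
```

In an asynchronous loop, each worker that finishes simply appends its result to `(xs, ys)` before the next `ucb_next` call; that posterior refresh is the "intermediate update" the abstract credits with preventing redundant queries.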
Serve As Reviewer: ~James_A_C_Odgers1
Submission Number: 19