Keywords: Bayesian optimization, domain knowledge, approximate inference
TL;DR: We introduce a Bayesian optimization framework that skips the usual surrogate modeling and updates only the distribution of the maximizer, and show that it achieves low regret, faster runtime, and the ability to incorporate priors over the optimum.
Abstract: In many real-world black-box optimization problems, practitioners know that the maximizer lies in a rather small subset of the search space, yet most common Bayesian Optimization (BO) frameworks do not let them incorporate this prior knowledge over the maximizer. In addition, although the goal of BO is only to find the optimizer, BO surrogate models typically model the distribution of the whole latent function, which may introduce a computational burden. Motivated by these observations, we propose \textsc{LeonArDBO}, a novel approach to BO in which, in the surrogate modeling step, we directly update only the distribution of the argmax given each new observation, using a neural network trained to perform such updates. This not only enables custom priors over the optimum, but also yields $\mathcal{O}(n)$-time updates in the number of samples, in contrast to exact Gaussian Process (GP) updates, which require $\mathcal{O}(n^3)$ time. We analyze our method's performance empirically on synthetic functions as well as a real scientific problem where large language models (LLMs) can provide useful priors.
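To make the idea of maintaining and cheaply updating a distribution over the maximizer concrete, here is a minimal, hypothetical sketch (not the paper's actual method or code): it keeps a discrete distribution over candidate locations, seeds it with an informative prior, and applies an O(n)-per-observation reweighting. The `update_rule` below is a hand-crafted stand-in for the learned neural update described in the abstract, and all names and parameters are illustrative assumptions.

```python
# Hypothetical illustration: a discrete distribution over candidate maximizer
# locations, updated in O(n) per new observation. A trained network would
# replace `update_rule`; here a simple reweighting heuristic stands in for it.
import numpy as np

rng = np.random.default_rng(0)

# Candidate grid over the search space and an informative prior over the
# maximizer's location (e.g. supplied by a practitioner or an LLM).
candidates = np.linspace(0.0, 1.0, 200)
p_argmax = np.exp(-0.5 * ((candidates - 0.7) / 0.2) ** 2)
p_argmax /= p_argmax.sum()

def objective(x):
    # Black-box function being optimized (unknown to the optimizer).
    return np.sin(6 * x) + 0.1 * rng.normal()

def update_rule(p, candidates, x_new, y_new, best_y):
    # Stand-in for the learned update: up-weight candidates near x_new when
    # the new observation improves on the incumbent, down-weight otherwise.
    kernel = np.exp(-0.5 * ((candidates - x_new) / 0.1) ** 2)
    factor = np.exp(np.clip(y_new - best_y, -5.0, 5.0) * kernel)
    p = p * factor
    return p / p.sum()

best_y = -np.inf
for step in range(20):
    # Sample the next query from the current argmax distribution
    # (Thompson-sampling style), evaluate it, then do an O(n) update.
    x_new = rng.choice(candidates, p=p_argmax)
    y_new = objective(x_new)
    p_argmax = update_rule(p_argmax, candidates, x_new, y_new, best_y)
    best_y = max(best_y, y_new)

print("posterior mode of the maximizer:", candidates[np.argmax(p_argmax)])
```

Each update touches every candidate once, which is where the linear cost in the sketch comes from; by contrast, an exact GP posterior update over all observations scales cubically in the number of samples.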
Submission Number: 98