Keywords: Bayesian optimization; generative models; black-box optimization
TL;DR: We propose a general framework to turn generative models into solution samplers for black-box optimization problems
Abstract: We present a general strategy for turning generative models into candidate-solution samplers for batch Bayesian optimization (BO). Using generative models for BO enables large-batch scaling via generative sampling, optimization over non-continuous design spaces, and high-dimensional and combinatorial design through generative priors over feasible regions. Inspired by the success of direct preference optimization (DPO) and its variants, we show that its approach of training generative models directly on preference rewards, without an intermediate reward model, extends to the BO setting. Moreover, the framework generalizes beyond preference-based feedback to arbitrary reward signals and loss functions. In essence, one can train a generative model directly on noisy, simple utility values computed from observations, yielding proposal distributions whose densities are proportional to the expected utility, i.e., BO's acquisition function values. This perspective unifies recent progress in using generative models for black-box optimization and connects it with batch Bayesian optimization under a general framework.
Theoretically, we show that the generative models produced during the BO process approximately follow a sequence of distributions that, under certain conditions, asymptotically concentrates at the global optima.
We also demonstrate this behavior empirically on challenging optimization problems involving large batches in high dimensions.
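The core loop described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration only, not the paper's method: a one-dimensional Gaussian stands in for the generative model, and a closed-form weighted maximum-likelihood update stands in for the DPO-style direct fine-tuning step. The objective `black_box`, the exponential utility weighting, and all parameter names are assumptions made for this toy example.

```python
# Toy sketch of "generative model as BO solution sampler": sample a large
# batch, score it with the black-box utility, then refit the sampler on
# utility-weighted samples so its density tracks exp(utility), i.e. an
# acquisition-proportional proposal distribution.
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    # Illustrative objective: maximize -(x - 2)^2, optimum at x = 2.
    return -(x - 2.0) ** 2

mu, sigma = 0.0, 2.0                 # parameters of the 1-D Gaussian "generative model"
for _ in range(30):                  # BO rounds
    batch = rng.normal(mu, sigma, size=256)   # large-batch generative sampling
    utility = black_box(batch)                # utility values from observations
    w = np.exp(utility - utility.max())       # target density proportional to exp(utility)
    w /= w.sum()
    # Weighted maximum-likelihood refit of the sampler (closed form for a
    # Gaussian); this plays the role of the direct fine-tuning step.
    mu = float(np.sum(w * batch))
    sigma = max(float(np.sqrt(np.sum(w * (batch - mu) ** 2))), 1e-3)

print(mu)  # concentrates near the optimum x = 2
```

Each round the sampler's density is pulled toward regions of high utility, which is the concentration-at-the-optima behavior the theoretical results formalize for general generative models.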
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 22459