Practical Bayesian Optimization of Objectives with Conditioning Variables

21 May 2021 (modified: 05 May 2023), NeurIPS 2021 Submission
Keywords: Gaussian processes, optimization
TL;DR: We consider a variation of multi-task BO where one aims to find the peak of each task. We propose a novel, theoretically optimal algorithm that outperforms state-of-the-art methods in optimizing control simulators and DNN hyperparameters.
Abstract: Bayesian optimization is a class of data-efficient, model-based algorithms typically focused on global optimization. We consider the more general case where a user faces multiple problems that each need to be optimized conditional on a state variable. For example, given a range of cities with different patient distributions, we optimize ambulance locations conditioned on each city's patient distribution; given partitions of CIFAR-10, we optimize CNN hyperparameters for each partition. Similarity across objectives boosts optimization of each objective in two ways: in modelling, by sharing data across objectives, and in acquisition, by quantifying how evaluating a single point on one objective benefits all objectives. To this end, we propose ConBO, a framework for conditional optimization. ConBO can be built on top of a range of acquisition functions, and we propose a new Hybrid Knowledge Gradient acquisition function. The resulting method is intuitive and theoretically grounded, performs similarly to or significantly better than recently published methods on a range of problems, and is easily parallelized to collect a batch of points.
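The following is a minimal illustrative sketch of the conditional setting described in the abstract: a single Gaussian process fit over joint (state, x) inputs so that observations from one state inform predictions at another. It is not the authors' ConBO implementation; a simple per-state expected-improvement rule stands in for the paper's Hybrid Knowledge Gradient, and the toy objective, variable names, and settings are all hypothetical.

```python
# Illustrative sketch only: a shared GP over joint (state, x) inputs with a
# per-state expected-improvement acquisition. This is NOT the paper's ConBO /
# Hybrid Knowledge Gradient; the toy objective and all names are hypothetical.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def toy_objective(state, x):
    """Hypothetical family of objectives indexed by a scalar state."""
    return -np.sin(3 * x) - (x - state) ** 2 + 0.05 * rng.standard_normal()

states = np.linspace(0.0, 1.0, 5)          # conditioning variables ("tasks")
X_init = rng.uniform(0, 2, size=(10, 1))   # initial design points in x
S_init = rng.choice(states, size=(10, 1))  # state at which each x was evaluated
Z = np.hstack([S_init, X_init])            # joint (state, x) inputs share one GP
y = np.array([toy_objective(s, x) for s, x in Z])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4, normalize_y=True)

for _ in range(20):
    gp.fit(Z, y)
    # Greedily pick the (state, x) candidate with the largest expected
    # improvement for its own state. This is a crude simplification: the
    # paper's acquisition instead quantifies how one evaluation benefits
    # the peaks of all objectives at once.
    x_cand = np.linspace(0, 2, 100)
    best_gain, best_point = -np.inf, None
    for s in states:
        Zc = np.column_stack([np.full_like(x_cand, s), x_cand])
        mu, sd = gp.predict(Zc, return_std=True)
        sd = np.maximum(sd, 1e-9)
        y_best = y[Z[:, 0] == s].max() if np.any(Z[:, 0] == s) else y.max()
        imp = mu - y_best
        ei = imp * norm.cdf(imp / sd) + sd * norm.pdf(imp / sd)
        i = int(np.argmax(ei))
        if ei[i] > best_gain:
            best_gain, best_point = ei[i], Zc[i]
    y_new = toy_objective(best_point[0], best_point[1])
    Z = np.vstack([Z, best_point])
    y = np.append(y, y_new)

# After the loop, each state's optimum can be read off the GP posterior mean.
```

The single kernel over the concatenated (state, x) input is what realizes the "data sharing across objectives" described above: an evaluation taken in one city (or data partition) shifts the posterior, and hence the recommended optimum, for all the others.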
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: pdf