Keywords: Black-box optimization, Bayesian optimization, Few-shot Learning
TL;DR: We introduce a new black-box optimization setting closer to real-world design problems, where a trial provides auxiliary information beyond reward and a task history is available; we propose a novel few-shot prediction approach and benchmark task.
Abstract: Many real-world design problems involve optimizing an expensive black-box function $f(x)$, such as hardware design or drug discovery. Bayesian Optimization has emerged as a sample-efficient framework for this problem. However, the basic setting considered by these methods is simplified compared to real-world experimental setups, where experiments often generate a wealth of useful information. We introduce a new setting where an experiment generates high-dimensional auxiliary information $h(x)$ along with the performance measure $f(x)$; moreover, a history of previously solved tasks from the same task family is available for accelerating optimization. A key challenge of our setting is learning how to represent and utilize $h(x)$ for efficiently solving new optimization tasks beyond the task history. We develop a novel approach for this setting based on a neural model which predicts $f(x)$ for unseen designs given a few-shot context containing observations of $h(x)$. To evaluate our method, we develop a new benchmark task involving designing customized robotic grippers for stably grasping objects. On this task, our approach which incorporates $h(x)$ significantly outperforms a baseline which only uses reward information, demonstrating improved few-shot prediction capability and more efficient optimization.
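The few-shot prediction idea in the abstract can be illustrated with a small sketch. This is not the authors' model; it is a minimal, hypothetical conditional-neural-process-style predictor in which context observations $(x_i, h(x_i))$ are embedded, mean-pooled into a task representation, and decoded into a prediction of $f(x)$ for an unseen design. All layer shapes, the single-layer encoder/decoder, and the random weights are illustrative assumptions.

```python
# Hypothetical sketch of a few-shot reward predictor conditioned on
# auxiliary observations h(x). Context pairs (x_i, h(x_i)) are embedded,
# mean-pooled into a task vector z, and a decoder predicts f(x_query)
# from the concatenation [x_query, z]. Dimensions and weights are
# illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
D_X, D_H, D_Z = 4, 16, 8  # design dim, auxiliary-info dim, task-embedding dim

# Random weights standing in for learned encoder/decoder parameters.
W_enc = rng.standard_normal((D_X + D_H, D_Z)) * 0.1
W_dec = rng.standard_normal((D_X + D_Z, 1)) * 0.1

def encode_context(xs, hs):
    """Mean-pool embeddings of (x, h(x)) pairs into a task vector z."""
    pairs = np.concatenate([xs, hs], axis=1)   # (N, D_X + D_H)
    return np.tanh(pairs @ W_enc).mean(axis=0) # (D_Z,)

def predict_f(x_query, z):
    """Predict the scalar reward f(x_query) given the task embedding z."""
    inp = np.concatenate([x_query, z])         # (D_X + D_Z,)
    return float(inp @ W_dec)

# Usage: a 5-shot context from one task, then a query on a new design.
xs = rng.standard_normal((5, D_X))
hs = rng.standard_normal((5, D_H))
z = encode_context(xs, hs)
f_hat = predict_f(rng.standard_normal(D_X), z)
```

The pooling over context pairs makes the prediction invariant to the order of the few-shot observations, which is one common design choice for such conditioned predictors; a reward-only baseline would simply drop the `hs` input from the context.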
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 2431