Keywords: Bayesian optimization, knowledge elicitation, preference learning, multi-task learning
TL;DR: We tackle the problem of incorporating preference-based expert knowledge into Bayesian optimization (BO) by designing a multi-task learning architecture that allows actively elicited expert knowledge to be transferred into the BO task.
Abstract: Bayesian optimization (BO) is a well-established method for optimizing black-box functions whose direct evaluations are costly. In this paper, we tackle the problem of incorporating expert knowledge into BO, which has received little attention so far, with the goal of further accelerating the optimization. We design a multi-task learning architecture that jointly elicits the expert knowledge and minimizes the objective function; in particular, this allows the elicited expert knowledge to be transferred into the BO task. We introduce a specific architecture based on Siamese neural networks to handle knowledge elicitation from pairwise queries. Experiments on several benchmark functions show that the proposed method significantly speeds up BO, even when the expert knowledge is biased.
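To illustrate the Siamese component described in the abstract, below is a minimal sketch (assuming a PyTorch implementation) of preference learning from pairwise expert queries: a shared scoring network rates both points of a query, and a Bradley-Terry-style likelihood on the score difference models the probability that the expert prefers the first point. All names (PreferenceNet, fit_preferences) and architectural details are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of Siamese pairwise-preference learning (assumed PyTorch).
# The scoring network is shared across both inputs; the logit of
# P(expert prefers x1 over x2) is score(x1) - score(x2).
import torch
import torch.nn as nn


class PreferenceNet(nn.Module):
    """Siamese scorer: the same network rates both points of a pairwise query."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        # Score difference = logit of the preference probability.
        return self.score(x1) - self.score(x2)


def fit_preferences(model, x1, x2, prefers_x1, epochs=200, lr=1e-2):
    """Fit the shared scorer on pairwise expert answers (label 1 if x1 is preferred)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x1, x2).squeeze(-1), prefers_x1)
        loss.backward()
        opt.step()
    return model


# Toy usage: 2-D candidates; a synthetic "expert" prefers points with a
# smaller first coordinate.
x1 = torch.rand(32, 2)
x2 = torch.rand(32, 2)
labels = (x1[:, 0] < x2[:, 0]).float()
model = fit_preferences(PreferenceNet(dim=2), x1, x2, labels)
```

In the multi-task setting described above, such a preference head would share representations with the model of the objective function, so that what is learned from the expert's pairwise answers can inform the BO task.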
Submission Number: 22