How Useful is Intermittent, Asynchronous Expert Feedback for Bayesian Optimization?

Published: 27 May 2024, Last Modified: 06 Jun 2024
Venue: AABI 2024
License: CC BY 4.0
Keywords: Bayesian optimization, expert feedback, preference learning, Bayesian neural network, Laplace approximation
Abstract: Bayesian optimization (BO) is an integral part of automated scientific discovery---the so-called self-driving lab---where human input is ideally minimal or at least non-blocking. However, scientists often have strong intuition, and thus human feedback remains useful. Nevertheless, prior works that enhance BO with expert feedback, e.g.\ by incorporating it offline or online but in a blocking manner (feedback must arrive at each BO iteration), are incompatible with the spirit of self-driving labs. In this work, we study whether a small amount of randomly arriving expert feedback, incorporated in a non-blocking manner, can improve a BO campaign. To this end, we run an additional, independent computing thread on top of the BO loop to handle the feedback-gathering process. The gathered feedback is used to learn a Bayesian preference model that can readily be incorporated into the BO thread to steer its exploration-exploitation process. Experiments suggest that even a few intermittent, asynchronous pieces of expert feedback can be useful for improving or constraining BO. This has useful implications for self-driving labs, e.g.\ making them more data-efficient and less costly.
Submission Number: 7
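The following is a minimal, illustrative sketch (not the authors' implementation) of the non-blocking feedback pattern described in the abstract: a background thread intermittently collects expert preference feedback into a queue, and the main BO loop drains that queue whenever feedback happens to be available, using it to bias candidate selection. The queue-based hand-off, the toy objective, and the simple preference score standing in for the paper's Bayesian preference model are all assumptions made for illustration.

```python
import queue
import random
import threading
import time

# Shared, thread-safe channel for expert feedback (pairs: preferred point, other point).
feedback_queue: "queue.Queue[tuple[float, float]]" = queue.Queue()


def expert_feedback_thread(stop: threading.Event) -> None:
    """Simulate an expert who occasionally submits a preference pair (x_preferred, x_other)."""
    while not stop.is_set():
        time.sleep(random.uniform(0.5, 2.0))   # feedback arrives intermittently
        x_pref, x_other = random.random(), random.random()
        feedback_queue.put((x_pref, x_other))  # non-blocking hand-off to the BO thread


def preference_score(x: float, prefs: list[tuple[float, float]]) -> float:
    """Toy stand-in for a preference model: fraction of pairs for which x is closer to the preferred point."""
    if not prefs:
        return 0.5
    return sum(abs(x - p) < abs(x - o) for p, o in prefs) / len(prefs)


def bo_loop(n_iters: int = 20) -> None:
    """Main optimization loop; never blocks waiting for expert feedback."""
    prefs: list[tuple[float, float]] = []
    objective = lambda x: -(x - 0.3) ** 2  # toy black-box objective to maximize
    for t in range(n_iters):
        # Drain whatever feedback has arrived so far, without waiting for more.
        while not feedback_queue.empty():
            prefs.append(feedback_queue.get_nowait())
        # Random candidates stand in for an acquisition-function optimizer;
        # the preference score nudges exploration toward expert-preferred regions.
        candidates = [random.random() for _ in range(64)]
        x_next = max(candidates, key=lambda x: objective(x) + 0.1 * preference_score(x, prefs))
        print(f"iter {t:02d}: x={x_next:.3f}, feedback pairs so far={len(prefs)}")
        time.sleep(0.3)  # stand-in for running an expensive experiment


if __name__ == "__main__":
    stop = threading.Event()
    worker = threading.Thread(target=expert_feedback_thread, args=(stop,), daemon=True)
    worker.start()
    bo_loop()
    stop.set()
```

The key design point mirrored from the abstract is that the BO thread only consumes feedback that is already available; a slow or silent expert never stalls the optimization campaign.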