Fast Bayesian Optimization of Function Networks with Partial Evaluations

Published: 03 Jun 2025, Last Modified: 03 Jun 2025, AutoML 2025 Methods Track, CC BY 4.0
Confirmation: our paper adheres to reproducibility best practices. In particular, we confirm that all important details required to reproduce results are described in the paper, the authors agree to the paper being made available online through OpenReview under a CC-BY 4.0 license (https://creativecommons.org/licenses/by/4.0/), and the authors have read and commit to adhering to the AutoML 2025 Code of Conduct (https://2025.automl.cc/code-of-conduct/).
Reproducibility: zip
Abstract: Bayesian optimization of function networks (BOFN) is a framework for optimizing expensive-to-evaluate objective functions structured as networks, where some nodes’ outputs serve as inputs to others. Many real-world applications, such as manufacturing and drug discovery, involve function networks with additional properties: nodes can be evaluated independently and incur varying costs. A recent BOFN variant, p-KGFN, leverages this structure and enables cost-aware partial evaluations, selectively querying only a subset of nodes at each iteration. Despite its effectiveness, however, p-KGFN is computationally inefficient: its formulation requires solving nested optimizations with a Monte Carlo-based objective, and the number of acquisition function optimizations grows with the network size. To address this, we propose an accelerated p-KGFN algorithm that reduces computational overhead by solving a single acquisition function problem per iteration. This approach first generates candidate inputs for the entire network and then leverages simulated intermediate outputs to form node-specific candidates. Experiments on benchmark problems show that our method maintains competitive performance while achieving up to a $16\times$ speedup over existing BOFN methods.
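To make the notion of cost-aware partial evaluation concrete, here is a minimal sketch of a two-node function network in which nodes can be queried independently at different costs. The node functions, names, and costs are illustrative assumptions for exposition, not taken from the paper or its code.

```python
# Hypothetical two-node function network: the downstream node f2
# consumes the upstream node f1's output. All functions and costs
# here are illustrative assumptions, not the paper's benchmarks.
import math

def f1(x):
    # Upstream node: cheap to evaluate.
    return math.sin(3 * x) + x

def f2(z, x):
    # Downstream node: expensive; takes f1's output z as input.
    return -(z - 1.0) ** 2 + 0.1 * x

NODE_COSTS = {"f1": 1.0, "f2": 10.0}

def partial_evaluate(x, nodes, cache=None):
    """Evaluate only the requested subset of nodes, reusing cached
    upstream outputs, and return (outputs, cost incurred)."""
    cache = dict(cache or {})
    cost = 0.0
    if "f1" in nodes:
        cache["f1"] = f1(x)
        cost += NODE_COSTS["f1"]
    if "f2" in nodes:
        if "f1" not in cache:
            raise ValueError("f2 requires f1's output; evaluate f1 first")
        cache["f2"] = f2(cache["f1"], x)
        cost += NODE_COSTS["f2"]
    return cache, cost

# Query only the cheap upstream node first...
out, c1 = partial_evaluate(0.5, {"f1"})
# ...then decide later whether the expensive node is worth its cost.
out, c2 = partial_evaluate(0.5, {"f2"}, cache=out)
```

A cost-aware acquisition function such as p-KGFN chooses, at each iteration, both the input `x` and the subset of nodes to query, trading off information gain against the per-node evaluation cost.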
Submission Number: 40