Cost-aware Stopping for Bayesian Optimization

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Bayesian optimization, stopping rules, cost-aware, hyperparameter optimization, Pandora's box, Gittins index
TL;DR: We propose a cost-aware stopping rule for Bayesian optimization that is theoretically grounded, free of heuristic tuning, and consistently achieves competitive cost-adjusted simple regret on hyperparameter optimization tasks.
Abstract: In automated machine learning, scientific discovery, and other applications of Bayesian optimization, deciding when to stop evaluating expensive black-box functions in a cost-aware manner is an important but underexplored practical consideration. A natural performance metric for this purpose is the cost-adjusted simple regret, which captures the trade-off between solution quality and cumulative evaluation cost. While several heuristic or adaptive stopping rules have been proposed, they lack guarantees that they will stop before incurring excessive function-evaluation cost. We propose a principled cost-aware stopping rule for Bayesian optimization that adapts to varying evaluation costs without heuristic tuning. Our rule is grounded in a theoretical connection to state-of-the-art cost-aware acquisition functions, namely the Pandora's Box Gittins Index (PBGI) and log expected improvement per cost (LogEIPC). We prove a theoretical guarantee bounding the expected cost-adjusted simple regret incurred by our stopping rule when paired with either acquisition function. Across synthetic and empirical tasks, including hyperparameter optimization and neural architecture size search, pairing our stopping rule with PBGI or LogEIPC usually matches or outperforms other acquisition-function and stopping-rule pairs in terms of cost-adjusted simple regret.
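As a rough illustration of the quantities the abstract refers to, the short Python sketch below shows one plausible form of the cost-adjusted simple regret and a generic "stop once the expected benefit of the next evaluation no longer exceeds its cost" check. The function names, the exact metric definition, and the stopping threshold are illustrative assumptions in the spirit of expected-improvement-per-cost and Gittins-index comparisons, not the paper's formal definitions.

def cost_adjusted_simple_regret(best_value, optimum_value, evaluation_costs):
    # Assumed form: simple regret (maximization convention) plus the
    # cumulative cost of all evaluations performed so far.
    return (optimum_value - best_value) + sum(evaluation_costs)

def should_stop(best_acquisition_value, next_evaluation_cost):
    # Assumed rule: stop once the acquisition value of the best remaining
    # candidate no longer exceeds the cost of evaluating it.
    return best_acquisition_value <= next_evaluation_cost

# Example usage: three evaluations costing 1.0 each, best observed value 0.9,
# true optimum 1.0, so the cost-adjusted simple regret is 0.1 + 3.0 = 3.1.
print(cost_adjusted_simple_regret(0.9, 1.0, [1.0, 1.0, 1.0]))
print(should_stop(best_acquisition_value=0.05, next_evaluation_cost=0.1))  # True

Under these assumptions, the trade-off the abstract describes is explicit: continuing to evaluate only pays off when the expected gain in solution quality exceeds the additional evaluation cost.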
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 12971