BAOSL: Benchmarking Active Optimization for Self-driving Laboratories

TMLR Paper 4760 Authors

29 Apr 2025 (modified: 14 Jul 2025) · Rejected by TMLR · CC BY 4.0
Abstract: The discovery of novel materials and antibiotics can be posed as an optimization problem: identifying candidate formulations that maximize one or more desired properties. In practice, however, the enormous dimensionality of the design space and the high cost of each experimental evaluation make exhaustive search strategies infeasible. Active learning methods, which iteratively identify informative data points, offer a promising way to tackle these challenges by significantly reducing the data-labeling effort and resource requirements. Integrating active learning into optimization workflows, hereafter termed active optimization, accelerates the discovery of optimal candidates while substantially cutting the number of required evaluations. Despite these advances, the absence of standardized benchmarks impedes objective comparison of methodologies, slowing progress in self-driving scientific discovery. To address this, we introduce BAOSL, a comprehensive benchmark designed to systematically evaluate active optimization in self-driving laboratories. BAOSL provides a standardized evaluation protocol and reference implementations to facilitate efficient and reproducible benchmarking. It includes both synthetic benchmarks and real-world tasks across various fields, designed to address the unique challenges of self-driving laboratories, particularly limited data availability.
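The active-optimization loop described in the abstract can be sketched in miniature. The code below is an illustrative toy, not the benchmark's reference implementation: the 1-D `objective` stands in for a costly experiment, the inverse-distance surrogate and distance-based exploration bonus are simple stand-ins for the learned surrogates and acquisition functions that real methods (e.g. Bayesian optimization) would use, and all names and parameter values are hypothetical.

```python
import math

def objective(x):
    # Hypothetical "experiment": a smooth 1-D property landscape peaking at x = 0.7.
    return math.exp(-(x - 0.7) ** 2 / 0.05)

def active_optimize(objective, budget=15, grid=41, beta=0.5):
    """Greedy active optimization: at each step, run the candidate that
    maximizes a surrogate mean plus a distance-based exploration bonus."""
    candidates = [i / (grid - 1) for i in range(grid)]
    # Two seed experiments at the boundary of the design space.
    observed = {0.0: objective(0.0), 1.0: objective(1.0)}
    for _ in range(budget):
        def acquisition(x):
            # Surrogate mean: inverse-distance-weighted average of observations.
            weights = [(1.0 / (abs(x - xi) + 1e-9), yi) for xi, yi in observed.items()]
            total = sum(w for w, _ in weights)
            mean = sum(w * yi for w, yi in weights) / total
            # Exploration bonus: distance to the nearest evaluated point.
            bonus = min(abs(x - xi) for xi in observed)
            return mean + beta * bonus
        x_next = max((c for c in candidates if c not in observed), key=acquisition)
        observed[x_next] = objective(x_next)  # run the (simulated) experiment
    best_x = max(observed, key=observed.get)
    return best_x, observed[best_x]

best_x, best_y = active_optimize(objective)
```

With a budget of 15 evaluations on a 41-point grid, this loop homes in on the optimum near x = 0.7 rather than sweeping all candidates, which is the evaluation-saving behavior the benchmark is meant to measure in its full-scale tasks.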
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Roman_Garnett1
Submission Number: 4760