What Actually Matters for Materials Discovery: Pitfalls and Recommendations in Bayesian Optimization

Published: 19 Mar 2025, Last Modified: 25 Apr 2025 · AABI 2025 Workshop Track · CC BY 4.0
Keywords: Materials discovery, Bayesian optimization, Gaussian process, Bayesian neural network, feature fine-tuning
TL;DR: We systematically investigate the impact of key design choices in Bayesian optimization for materials discovery and identify pitfalls of current methodologies and opportunities for improvements
Abstract: Materials discovery underpins innovation in many fields, such as energy storage and therapeutic delivery, but it requires time- and resource-intensive chemical synthesis and testing. Bayesian optimization (BO) with Gaussian processes (GPs) or Bayesian neural networks (BNNs) offers a promising way to accelerate materials discovery, but the full landscape of features and surrogate models remains poorly understood. In this work, we systematically investigate the impact of key design choices and identify pitfalls of current methodologies as well as opportunities for improvement. These include: (1) GPs with the default initialization scheme perform hardly any better than random policies; (2) BNNs are highly sensitive to hyperparameters; (3) expert-designed molecular features underperform compared to learned and even simple, generic features; and (4) simple feature fine-tuning significantly enhances performance, contrary to the conventional practice of using fixed molecular features or costly Bayesian fine-tuning schemes. We identify the design choices that actually matter for BO in materials discovery, namely a simple but well-initialized surrogate model combined with feature fine-tuning. Our work provides recommendations for practitioners and highlights future research directions toward more cost-effective BO for materials discovery.
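To make the setting concrete, below is a minimal, illustrative sketch of pool-based Bayesian optimization with a GP surrogate, the kind of loop the abstract refers to. This is not the paper's implementation: the candidate pool, feature dimensionality, hidden objective, and kernel choice are all hypothetical stand-ins, and the expected-improvement acquisition is just one common choice.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical candidate pool: 200 "materials", each a 5-dim feature vector,
# with a hidden objective standing in for a measured material property.
X_pool = rng.normal(size=(200, 5))
true_scores = -np.sum((X_pool - 0.5) ** 2, axis=1)

def expected_improvement(mu, sigma, best, xi=0.01):
    """Standard EI acquisition for maximization."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Start from a few random "measurements", then run BO iterations.
observed = list(rng.choice(len(X_pool), size=5, replace=False))
for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_pool[observed], true_scores[observed])
    mu, sigma = gp.predict(X_pool, return_std=True)
    ei = expected_improvement(mu, sigma, max(true_scores[observed]))
    ei[observed] = -np.inf               # do not re-query known candidates
    observed.append(int(np.argmax(ei)))  # "synthesize and test" the next pick

best_found = max(true_scores[observed])
print(f"best objective found: {best_found:.3f} "
      f"(pool optimum: {true_scores.max():.3f})")
```

The paper's findings concern exactly the knobs this sketch glosses over: how the GP hyperparameters are initialized, what the feature vectors are (expert-designed vs. learned vs. generic), and whether those features are fine-tuned during the loop rather than held fixed.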
Submission Number: 21
