Overconfident Oracles: Limitations of In Silico Sequence Design Benchmarking

Published: 17 Jun 2024 · Last Modified: 17 Jul 2024 · ICML 2024 AI4Science Spotlight · CC BY 4.0
Keywords: Biological sequence design, Evaluation, Benchmarks
TL;DR: We highlight critical limitations of current in silico protein and DNA sequence design benchmarks and introduce additional biophysical measures to improve their robustness and reliability.
Abstract: Machine learning methods can automate the in silico design of biological sequences, aiming to reduce costs and accelerate medical research. Given the limited access to wet labs, in silico design methods commonly use an oracle model to evaluate de novo generated sequences. However, the use of different oracle models across methods makes it challenging to compare them reliably, motivating the question: are in silico sequence design benchmarks reliable? In this work, we examine 12 sequence design methods that utilise ML oracles common in the literature and find significant challenges with their cross-consistency and reproducibility. Indeed, oracles differing by architecture, or even just training seed, are shown to yield conflicting relative performance, with our analysis suggesting poor out-of-distribution generalisation as a key issue. To address these challenges, we propose supplementing the evaluation with a suite of biophysical measures to assess the viability of generated sequences and limit the out-of-distribution sequences the oracle is required to score, thereby improving the robustness of the design procedure. Our work aims to highlight potential pitfalls in the current evaluation process and contribute to the development of robust benchmarks, ultimately driving the improvement of in silico design methods.
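To make the proposed evaluation concrete, here is a minimal sketch of the idea of filtering candidates with biophysical viability checks before passing them to the oracle. The helper names (`is_valid_protein`, `hydrophobic_fraction`, `evaluate`) and the specific checks are illustrative assumptions, not the paper's actual suite of measures; the point is only that out-of-distribution candidates are rejected before the oracle is asked to score them.

```python
# Illustrative sketch (assumed, not the paper's implementation):
# gate oracle scoring behind simple biophysical viability filters.
from typing import Callable, Sequence

AMINO_ACIDS = set("ACDEFGHIKLMNPQRSTVWY")

def is_valid_protein(seq: str) -> bool:
    """Reject sequences containing characters outside the 20 canonical amino acids."""
    return bool(seq) and set(seq) <= AMINO_ACIDS

def hydrophobic_fraction(seq: str) -> float:
    """Fraction of strongly hydrophobic residues; a crude proxy for aggregation risk."""
    hydrophobic = set("AVILMFWY")
    return sum(aa in hydrophobic for aa in seq) / len(seq)

def evaluate(
    candidates: Sequence[str],
    oracle: Callable[[str], float],
    max_hydrophobic: float = 0.5,  # assumed threshold, for illustration only
) -> list[tuple[float, str]]:
    """Score only candidates passing the viability filters, limiting the
    out-of-distribution sequences the oracle is required to score."""
    viable = [
        s for s in candidates
        if is_valid_protein(s) and hydrophobic_fraction(s) <= max_hydrophobic
    ]
    return sorted(((oracle(s), s) for s in viable), reverse=True)
```

Under this scheme, design methods are still ranked by oracle score, but sequences failing basic biophysical plausibility never reach the oracle, so its known weakness out of distribution cannot inflate their apparent performance.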
Submission Number: 139