Generalization Beyond Benchmarks: Evaluating Learnable Protein-Ligand Scoring Functions on Unseen Targets
Track: Track 1: Original Research/Position/Education/Attention Track
Keywords: protein-ligand scoring, protein-ligand binding, out-of-distribution evaluation, information leakage, machine learning for drug discovery, data leakage
TL;DR: Protein–ligand scoring benchmarks leak protein-level information, inflating performance estimates. We show generalization to novel targets is harder, but self-supervised pretraining and limited test-target data can help build more reliable models.
Abstract: As machine learning becomes increasingly central to molecular design, it is vital to ensure the reliability of learnable protein–ligand scoring functions on novel protein targets. While many scoring functions perform well on standard benchmarks, their ability to generalize beyond training data remains a significant challenge. In this work, we evaluate the generalization capability of state-of-the-art scoring functions on dataset splits that simulate evaluation on targets with a limited number of known structures and experimental affinity measurements. Our analysis reveals that commonly used benchmarks do not reflect the true challenge of generalizing to novel targets. We also investigate whether large-scale self-supervised pretraining can bridge this generalization gap, and we provide preliminary evidence of its potential. Furthermore, we probe the efficacy of simple methods that leverage limited test-target data to improve scoring function performance. Our findings underscore the need for more rigorous evaluation protocols and offer practical guidance for designing scoring functions with predictive power extending to novel protein targets.
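The protein-level leakage described above arises when complexes sharing a target protein land on both sides of a random split. A minimal sketch of the grouped-split idea, using hypothetical record and field names (the paper's actual split protocol is not specified here):

```python
# Hedged sketch: a protein-level (grouped) train/test split, so that every
# target protein appears in exactly one partition. This prevents the
# protein-information leakage that inflates benchmark performance.
# The `records` structure and field names are illustrative assumptions.
import random

def protein_level_split(records, test_fraction=0.2, seed=0):
    """Split protein-ligand complexes by target protein, not by complex."""
    proteins = sorted({r["protein_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(proteins)
    n_test = max(1, int(len(proteins) * test_fraction))
    test_proteins = set(proteins[:n_test])
    train = [r for r in records if r["protein_id"] not in test_proteins]
    test = [r for r in records if r["protein_id"] in test_proteins]
    return train, test

# Toy example: 20 complexes over 5 targets.
complexes = [
    {"protein_id": f"P{i % 5}", "ligand_id": f"L{i}", "affinity": float(i)}
    for i in range(20)
]
train, test = protein_level_split(complexes)
# No target is shared across the split.
assert {r["protein_id"] for r in train}.isdisjoint(
    {r["protein_id"] for r in test}
)
```

In practice, grouping by protein identity alone is a lower bound on rigor; sequence- or structure-similarity clustering before splitting guards against near-duplicate targets as well.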
Submission Number: 341