Keywords: Biology, Scaling Laws, Data, Protein
TL;DR: Have we saturated biological data enough to see scaling laws emerge?
Abstract: Data in biology is redundant, noisy, and sparse. How do the type and scale of available data affect model performance? In this work, we investigate how protein language models (pLMs) scale with increasing pretraining data. We measure protein function prediction performance across a suite of pLMs pretrained on yearly snapshots of UniRef100 from 2011 to 2024. We find no evidence of model saturation on this task: performance improves, though not monotonically, with added data, and this trend differs between unsupervised and supervised experiments. Using a well-characterized Beta-Lactamase protein from E. coli, we find that unsupervised model predictions improve year over year, though they do not yet consistently outperform the supervised baseline. Our results underscore the need for targeted data acquisition and for deeper study of data scaling in protein modeling.
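The abstract does not specify the unsupervised scoring protocol, but a common zero-shot approach for variant-effect prediction with a masked pLM is masked-marginal scoring. The sketch below illustrates the idea under stated assumptions: the checkpoint name is a public stand-in (the paper trains its own yearly pLMs), and `masked_marginal_score` is a hypothetical helper, not a function from the paper.

```python
# Hypothetical sketch: zero-shot variant-effect scoring with a masked pLM.
# Assumptions: the checkpoint below is a public stand-in for the paper's
# yearly-snapshot models, and the masked-marginal rule is one common choice
# of unsupervised scoring protocol, not necessarily the paper's.
import torch
from transformers import AutoTokenizer, EsmForMaskedLM

MODEL_NAME = "facebook/esm2_t6_8M_UR50D"  # stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = EsmForMaskedLM.from_pretrained(MODEL_NAME).eval()

def masked_marginal_score(wt_seq: str, pos: int, mut_aa: str) -> float:
    """Score one substitution as log p(mutant aa) - log p(wild-type aa)
    at the masked site. `pos` is 0-indexed into the protein sequence."""
    tokens = tokenizer(wt_seq, return_tensors="pt")
    input_ids = tokens["input_ids"].clone()
    tok_pos = pos + 1  # offset for the BOS/CLS token the tokenizer prepends
    input_ids[0, tok_pos] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(input_ids=input_ids,
                       attention_mask=tokens["attention_mask"]).logits
    log_probs = torch.log_softmax(logits[0, tok_pos], dim=-1)
    wt_id = tokenizer.convert_tokens_to_ids(wt_seq[pos])
    mut_id = tokenizer.convert_tokens_to_ids(mut_aa)
    return (log_probs[mut_id] - log_probs[wt_id]).item()

# Example: score a hypothetical A42G substitution in a toy sequence.
# score = masked_marginal_score(wt_seq, 41, "G")
```

In a setup like the paper's, one would repeat this with a checkpoint pretrained on each yearly UniRef100 snapshot and compare the scores against measured Beta-Lactamase fitness (e.g., by rank correlation), alongside a supervised baseline.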
Submission Number: 46