Training Compute-Optimal Protein Language Models

Published: 17 Jun 2024 · Last Modified: 20 Jul 2024 · AccMLBio Spotlight · CC BY 4.0
Keywords: Protein Language Models, Scaling Law
TL;DR: We explore the optimal training of protein language models, an area crucial in biological research where guidance is limited.
Abstract: We explore the optimal training of protein language models, an area crucial in biological research where guidance is limited. Most models are trained with extensive compute resources, prioritizing increases in model size over efficient compute usage. Our research uses a large dataset of 939 million protein sequences, training over 300 models ranging from 3.5 million to 10.7 billion parameters on 5 to 200 billion tokens to examine the relationships between model size, token count, and training objective. Initial findings show diminishing returns for Causal Language Models (CLM) and overfitting tendencies in Masked Language Models (MLM) when training on the UniRef database. To address this, we incorporated metagenomic sequences to diversify the training set and mitigate plateauing and overfitting. We derived scaling laws for CLM and MLM on Transformers, tailored to the characteristics of protein sequence data. To validate these scaling laws, we compared large-scale ESM-2 and PROGEN2 models in downstream tasks, including protein generation and structure- and function-related evaluations, within comparable pre-training compute budgets.
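To make the scaling-law analysis concrete, below is a minimal sketch (not the authors' code) of how a Chinchilla-style law of the form L(N, D) = E + A/N^α + B/D^β could be fit to (parameters, tokens, loss) observations; the functional form, variable names, and synthetic data are assumptions for illustration only.

```python
# Hypothetical sketch: fitting a Chinchilla-style scaling law to synthetic
# (model size N, training tokens D, final loss) observations.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(X, E, A, B, alpha, beta):
    """Predicted loss given model parameters N and training tokens D."""
    N, D = X
    return E + A / N**alpha + B / D**beta

# Synthetic observations generated from an assumed "true" law plus noise,
# spanning roughly the parameter/token ranges mentioned in the abstract.
rng = np.random.default_rng(0)
N = np.array([3.5e6, 3.5e7, 1.1e8, 1.2e9, 3.0e9, 1.07e10])
D = np.array([5e9, 1e10, 2e10, 5e10, 1e11, 2e11])
true = dict(E=1.7, A=120.0, B=520.0, alpha=0.34, beta=0.28)
loss = scaling_law((N, D), **true) + rng.normal(0.0, 0.01, N.size)

# Fit the law; initial guesses keep the optimizer in a sensible regime.
popt, _ = curve_fit(scaling_law, (N, D), loss,
                    p0=[2.0, 100.0, 400.0, 0.3, 0.3], maxfev=20000)
E, A, B, alpha, beta = popt
print(f"E={E:.3f} A={A:.3g} B={B:.3g} alpha={alpha:.3f} beta={beta:.3f}")

# Given fitted exponents, a compute-optimal allocation under a budget
# C ≈ 6*N*D follows by minimizing the fitted loss subject to that constraint.
```

The same fitting procedure would be run separately for CLM and MLM objectives, since the abstract notes that the two objectives yield different scaling behavior on protein data.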
Submission Number: 15