Exploring Molecular Pretraining Model at Scale

Published: 25 Sept 2024 · Last Modified: 06 Nov 2024 · NeurIPS 2024 poster · CC BY-NC-ND 4.0
Keywords: Molecular Pretraining, Scaling Law, Molecular Property Prediction
TL;DR: We systematically investigate scaling laws for molecular pretraining and scale the model to 1.1 billion parameters by pretraining on 0.8 billion conformations, making it the largest molecular pretraining model to date.
Abstract: In recent years, pretraining models have made significant advances in natural language processing (NLP), computer vision (CV), and the life sciences. The advances in NLP and CV are predominantly driven by expanding model parameters and data size, a phenomenon now recognized as the scaling laws. However, scaling laws for molecular pretraining models remain unexplored. In this work, we present an innovative molecular pretraining model that leverages a two-track transformer to effectively integrate features at the atomic, graph, and geometric structure levels. Along with this, we systematically investigate the scaling law within molecular pretraining models, examining the power-law correlations between validation loss and model size, dataset size, and computational resources. Consequently, we successfully scale the model to 1.1 billion parameters through pretraining on 800 million conformations, making it the largest molecular pretraining model to date. Extensive experiments show consistent improvements on downstream tasks as the model size grows. The 1.1-billion-parameter model also outperforms existing methods, achieving an average improvement of 27\% on QM9 and 14\% on the COMPAS-1D dataset.
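For reference, the power-law correlations described in the abstract are conventionally fit in the form below. This is the standard parameterization from the scaling-law literature; the paper's exact constants and exponents may differ.

\[
  L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
  L(D) = \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
  L(C) = \left(\frac{C_c}{C}\right)^{\alpha_C}
\]

where $L$ is validation loss, $N$ is model size (parameters), $D$ is dataset size (conformations), $C$ is compute, and $N_c$, $D_c$, $C_c$, $\alpha_N$, $\alpha_D$, $\alpha_C$ are fitted constants.

The sketch below illustrates one way a "two-track" transformer block can couple atom-level features with graph/geometry-level pair features: pairwise representations bias the atom-track attention, and the attention pattern updates the pair track. All names and dimensions are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical two-track transformer block: an atom track (per-atom features) and a
# pair track (per-atom-pair features, e.g. encoded distances / graph edges) that
# exchange information each layer. Illustrative only; not the paper's implementation.
import torch
import torch.nn as nn

class TwoTrackBlock(nn.Module):
    def __init__(self, d_atom=256, d_pair=64, n_heads=8):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_atom // n_heads
        self.qkv = nn.Linear(d_atom, 3 * d_atom)
        self.out = nn.Linear(d_atom, d_atom)
        self.pair_to_bias = nn.Linear(d_pair, n_heads)  # pair track -> attention bias
        self.attn_to_pair = nn.Linear(n_heads, d_pair)  # attention map -> pair update
        self.ffn = nn.Sequential(nn.Linear(d_atom, 4 * d_atom), nn.GELU(),
                                 nn.Linear(4 * d_atom, d_atom))
        self.norm_atom, self.norm_pair = nn.LayerNorm(d_atom), nn.LayerNorm(d_pair)

    def forward(self, x, p):
        # x: [B, N, d_atom] atom-level features; p: [B, N, N, d_pair] pair-level features
        B, N, _ = x.shape
        q, k, v = self.qkv(self.norm_atom(x)).chunk(3, dim=-1)
        q = q.view(B, N, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(B, N, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(B, N, self.n_heads, self.d_head).transpose(1, 2)
        bias = self.pair_to_bias(self.norm_pair(p)).permute(0, 3, 1, 2)      # [B, H, N, N]
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5 + bias, dim=-1)
        x = x + self.out((attn @ v).transpose(1, 2).reshape(B, N, -1))       # atom update
        x = x + self.ffn(x)
        p = p + self.attn_to_pair(attn.permute(0, 2, 3, 1))                  # pair update
        return x, p

# usage: a batch of 4 molecules with 32 atoms each
x, p = torch.randn(4, 32, 256), torch.randn(4, 32, 32, 64)
x, p = TwoTrackBlock()(x, p)
```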
Supplementary Material: zip
Primary Area: Machine learning for healthcare
Submission Number: 15320