PLMFit: Benchmarking Transfer Learning with Protein Language Models for Protein Engineering

Published: 13 Oct 2024, Last Modified: 01 Dec 2024 · AIDrugX Poster · CC BY 4.0
Keywords: Parameter efficient fine-tuning, Low rank adaptation, Protein language models, Feature extraction, Benchmarking, Protein engineering
TL;DR: PLMFit is a comparative analysis aimed at identifying the most effective strategies for transferring knowledge from protein language models, benchmarking fine-tuning techniques on a range of protein engineering tasks.
Abstract: Protein language models (PLMs) have emerged as a useful resource for protein engineering applications. Transfer learning (TL) leverages pre-trained parameters either to extract features for training machine learning models or to adjust the weights of PLMs for novel tasks via fine-tuning through back-propagation. TL methods have shown potential for enhancing protein prediction performance when paired with PLMs; however, there is a notable lack of comparative analyses that benchmark TL methods applied to state-of-the-art PLMs, identify optimal strategies for transferring knowledge, and determine the most suitable approach for specific tasks. Here, we report PLMFit, a benchmarking study that combines three state-of-the-art PLMs (ESM2, ProGen2, ProteinBERT) with three TL methods (feature extraction, low-rank adaptation, bottleneck adapters) across five protein engineering datasets. We conducted over 2,900 experiments, varying PLM sizes and layers, TL hyperparameters, and training procedures. Our experiments reveal three key findings: (i) utilizing a fraction of a PLM for transfer learning does not detrimentally impact performance, (ii) the choice between feature extraction and fine-tuning is primarily dictated by the amount and diversity of data, and (iii) fine-tuning is most effective when generalization is necessary and only limited data are available. We provide PLMFit as an open-source software package, a valuable resource for the scientific community to facilitate the feature extraction and fine-tuning of PLMs for various applications.
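The low-rank adaptation (LoRA) method benchmarked in the abstract can be illustrated with a minimal NumPy sketch. This is not PLMFit's actual implementation; the dimensions, variable names, and the `lora_forward` helper are hypothetical, chosen only to show the core idea: a frozen pre-trained weight matrix W is augmented with a trainable low-rank update ΔW = (α/r)·B·A, so only the small factors A and B are trained.

```python
import numpy as np

# Hypothetical dimensions for illustration; a real PLM layer is far larger.
d_out, d_in, r, alpha = 8, 8, 2, 4

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight (not updated)
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-initialized so the update starts at 0

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass with LoRA: y = (W + (alpha / r) * B @ A) @ x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B = 0, the adapted model reproduces the frozen model's output exactly.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), W @ x)
```

Training then back-propagates only into A and B (r·(d_in + d_out) parameters instead of d_in·d_out), which is what makes fine-tuning large PLMs tractable; feature extraction, by contrast, skips weight updates entirely and trains a downstream model on the frozen embeddings.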
Submission Number: 129