Leveraging Protein Language Model Embeddings for Catalytic Turnover Prediction of Adenylate Kinase Orthologs in a Low-Data Regime

ICLR 2025 Workshop LMRL Submission 11 Authors

02 Feb 2025 (modified: 18 Apr 2025) · Submitted to ICLR 2025 Workshop LMRL · CC BY 4.0
Track: Full Paper Track
Keywords: representation learning, protein language models, sequence to function modeling, enzymology
TL;DR: We assess the use of PLMs for sequence-to-kcat prediction with a dataset of diverse enzyme sequences.
Abstract: Accurate prediction of enzymatic activity from amino acid sequences could drastically accelerate enzyme engineering for applications such as bioremediation and therapeutics development. In recent years, Protein Language Model (PLM) embeddings have been increasingly leveraged as inputs to sequence-to-function models. Here, we use consistently collected catalytic turnover observations for 175 orthologs of the enzyme Adenylate Kinase (ADK) as a test case to assess the use of PLMs and their embeddings in enzyme kinetic prediction tasks. In this study, we show that nonlinear probing of PLM embeddings outperforms baseline embeddings (one-hot encoding) and the specialized $k_{cat}$ (catalytic turnover number) prediction models DLKcat and CatPred. We also compared fixed and learnable aggregation of PLM embeddings for $k_{cat}$ prediction and found that transformer-based learnable aggregation of amino-acid PLM embeddings is generally the most performant. Additionally, we found that ESMC 600M embeddings marginally outperform other PLM embeddings for $k_{cat}$ prediction. We explored Low-Rank Adaptation (LoRA) masked language model fine-tuning and direct fine-tuning for sequence-to-$k_{cat}$ mapping, finding, respectively, no difference and a drop in performance compared to zero-shot embeddings. Finally, we investigated the distinct hidden representations in PLM encoders and found that earlier-layer embeddings perform comparably to or worse than those of the final layer. Overall, this study assesses the state of the field for leveraging PLMs for sequence-to-$k_{cat}$ prediction on a set of diverse ADK orthologs.
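The probing setup described above can be sketched as follows. This is an illustrative example, not the authors' released code: a small nonlinear probe that maps frozen per-residue PLM embeddings to a scalar $k_{cat}$ prediction via fixed mean aggregation. The embedding dimension 1152 matches ESMC 600M's hidden size, but the probe width, random weights, and pooling choice here are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical nonlinear probe over frozen PLM embeddings (sketch, not the
# paper's implementation). A real pipeline would use precomputed ESMC 600M
# embeddings and train the weights against measured log(kcat) values.
EMBED_DIM, HIDDEN_DIM = 1152, 256  # 1152 = ESMC 600M hidden size
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.02, size=(EMBED_DIM, HIDDEN_DIM))
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.normal(scale=0.02, size=(HIDDEN_DIM, 1))
b2 = np.zeros(1)

def nonlinear_probe(residue_embeddings: np.ndarray) -> np.ndarray:
    """Map (batch, seq_len, EMBED_DIM) residue embeddings to (batch,) predictions."""
    pooled = residue_embeddings.mean(axis=1)       # fixed mean aggregation over residues
    hidden = np.maximum(pooled @ W1 + b1, 0.0)     # one ReLU hidden layer (the nonlinearity)
    return (hidden @ W2 + b2).squeeze(-1)          # predicted log kcat per sequence

# Dummy batch standing in for embeddings of 4 ADK orthologs
# (E. coli ADK is ~214 residues long).
emb = rng.normal(size=(4, 214, EMBED_DIM))
preds = nonlinear_probe(emb)
print(preds.shape)  # (4,)
```

Swapping the mean pooling for a small trainable transformer over the residue axis would give the "learnable aggregation" variant the abstract compares against.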
Submission Number: 11
