Curriculum Learning for Biological Sequence Prediction: The Case of De Novo Peptide Sequencing

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We introduce a curriculum learning approach and a self-refining inference module to improve non-autoregressive peptide sequencing, achieving state-of-the-art accuracy across nine species.
Abstract: Peptide sequencing, the process of identifying amino acid sequences from mass spectrometry data, is a fundamental task in proteomics. Non-Autoregressive Transformers (NATs) have proven highly effective for this task, outperforming traditional methods. Unlike autoregressive models, which generate tokens sequentially, NATs predict all positions simultaneously, leveraging bidirectional context through unmasked self-attention. However, existing NAT approaches often rely on Connectionist Temporal Classification (CTC) loss, which poses significant optimization challenges due to its complexity and increases the risk of training failures. To address these issues, we propose an improved non-autoregressive peptide sequencing model that incorporates a structured curriculum learning strategy over protein sequences. This strategy adjusts the learning difficulty of each sequence according to the model's estimated generation capability, assessed through a sampling process, so that the model progressively learns peptide generation from simple to complex sequences. Additionally, we introduce a self-refining inference-time module that iteratively improves predictions using learned NAT token embeddings, increasing sequence accuracy at a fine-grained level. Our curriculum learning strategy reduces the frequency of NAT training failures by more than 90%, as measured by sampled training runs over various data distributions. Evaluations on nine benchmark species show that our approach outperforms all previous methods across multiple metrics and species. Model and source code are available at https://github.com/BEAM-Labs/denovo.
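The abstract only summarizes the curriculum at a high level. As a rough, hypothetical illustration of the sampling idea (not the paper's actual implementation), the Python sketch below scores peptides with a toy length-based difficulty proxy and skews the sampling distribution toward harder sequences as a scalar `capability` grows; in the paper, that capability signal would come from sampling the model's own generations during training. All function names and the difficulty proxy here are assumptions for illustration.

```python
import random

# Toy corpus of peptide strings (single-letter amino acid codes).
corpus = [
    "KR",
    "PEPTIDE",
    "ACDEFGHIK",
    "ACDEFGHIKLMNPQRSTVWY",
    "LMNPQRSTVWYACDEFGHIKLMNPQRST",
]

def estimate_difficulty(peptide: str) -> float:
    """Toy difficulty proxy: longer peptides assumed harder; scaled to [0, 1]."""
    return min(len(peptide) / 30.0, 1.0)

def curriculum_weights(peptides, capability: float):
    """Weight each peptide by how well it matches the current capability.

    Peptides at or below the capability level get full weight; harder ones
    are down-weighted smoothly (not excluded), so the curriculum shifts from
    simple to complex sequences as capability increases.
    """
    weights = []
    for p in peptides:
        gap = estimate_difficulty(p) - capability
        weights.append(1.0 if gap <= 0 else max(0.05, 1.0 - gap))
    return weights

def sample_batch(peptides, capability: float, k: int = 4):
    """Draw a training batch under the current curriculum distribution."""
    return random.choices(peptides,
                          weights=curriculum_weights(peptides, capability),
                          k=k)

# Early in training (low capability) batches skew short and easy;
# later (high capability) longer, harder peptides enter the mix.
for capability in (0.1, 0.5, 0.9):
    print(capability, sample_batch(corpus, capability))
```

Run as-is, early calls favor the short peptides and later calls admit the long ones; a real training loop would re-estimate capability continuously from sampled generations rather than sweep it over fixed values.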
Lay Summary: Proteins are essential to life, and understanding their structure is critical for advances in medicine, biology, and drug discovery. One common technique to study proteins is peptide sequencing, which tries to figure out the building blocks (amino acids) of a protein using data from mass spectrometry, a tool that measures molecules by their mass. However, current methods often struggle with accuracy or efficiency, especially when trying to predict entire sequences all at once. In this work, we improve a type of AI model called a non-autoregressive Transformer, which predicts all parts of a sequence in parallel instead of one by one. These models are faster but hard to train. To solve this, we introduce a new curriculum learning approach, inspired by how humans learn — starting with simpler sequences and gradually tackling harder ones. We also add a second step where the model refines its own guesses to make them more accurate. Our method makes the model much more stable during training and significantly more accurate when predicting protein sequences. It outperforms all existing methods across multiple species, making it a promising tool for scientific research and future medical breakthroughs.
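The "second step" mentioned above, the self-refining inference module, can be pictured as a generic confidence-based refinement loop in the style of mask-predict decoding. The sketch below is a toy stand-in, not the paper's method: `toy_predict` substitutes random tokens and confidences for the real NAT decoder and its learned token embeddings, and all names and thresholds are assumptions.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_predict(sequence, fixed_positions):
    """Stand-in for the NAT decoder: re-predicts every non-fixed position.

    In the real model the tokens and confidences would come from the
    transformer and its learned token embeddings; here they are random.
    """
    out, conf = [], []
    for i, tok in enumerate(sequence):
        if i in fixed_positions:
            out.append(tok)
            conf.append(1.0)
        else:
            out.append(random.choice(AMINO_ACIDS))
            conf.append(random.random())
    return "".join(out), conf

def self_refine(initial, n_iters=3, keep_threshold=0.8):
    """Iteratively keep high-confidence positions and re-predict the rest."""
    seq, conf = initial, [0.0] * len(initial)
    fixed = set()
    for _ in range(n_iters):
        fixed |= {i for i, c in enumerate(conf) if c >= keep_threshold}
        seq, conf = toy_predict(seq, fixed)
    return seq

print(self_refine("PEPTIDE"))
```

Each pass freezes the positions the model is already confident about and re-predicts only the uncertain ones, which is the fine-grained, iterative improvement the summary alludes to.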
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/BEAM-Labs/denovo
Primary Area: Applications->Everything Else
Keywords: De Novo, protein language modelling, sequence modelling
Submission Number: 2985