Converting diffusions to flows accelerates sampling and suggests over-conditioning of co-folding models on sequence

Published: 04 Mar 2026, Last Modified: 01 Apr 2026
ICLR 2026 Workshop LMRL Poster
License: CC BY 4.0
Confirmation: I have read and agree with the workshop's policy on behalf of myself and my co-authors.
Track: tiny / short paper (2-4 pages excluding references; extended abstract format)
Keywords: diffusion, flow models, co-folding, protein structure prediction, AlphaFold
Abstract: Deep generative models can predict protein structures from sequence with high accuracy; however, sampling from these models remains computationally burdensome, with current protocols requiring hundreds of iterations through the trained model to obtain a final predicted structure. To accelerate sampling and improve the interpretability of prediction trajectories, we convert the stochastic diffusion sampling process into a deterministic flow process. We show that converting pre-trained, diffusion-based structure prediction models to probability-flow ODEs yields equivalent performance on the FoldBench benchmark alongside a 20x sampling speed-up. Furthermore, we demonstrate the effects on prediction diversity and use the intermediate predictions made along the denoising trajectory to show that deep generative structure prediction methods are strongly conditioned on the sequence and MSA embeddings, appearing to make predictions with weak sensitivity to the noise initialisation. Finally, we discuss the implications of strong sequence conditioning for generative protein structure prediction and protein design, and point to future experiments that build on our initial results.
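The abstract's central move, replacing stochastic reverse diffusion with a deterministic probability-flow ODE that shares the same marginals, can be illustrated on a toy Gaussian, where the score is analytic. This is a minimal sketch assuming a constant-coefficient VP diffusion with illustrative values (BETA, SIGMA_D, T, and the Euler step count are all hypothetical choices, not taken from the paper or its models):

```python
import numpy as np

# Toy setup: data ~ N(0, SIGMA_D^2). Under the VP diffusion
#   dx = -0.5*BETA*x dt + sqrt(BETA) dw   (constant BETA assumed),
# the marginal at time t is N(0, var(t)) with
#   var(t) = exp(-BETA*t) * SIGMA_D**2 + (1 - exp(-BETA*t)),
# so the score is analytic: score(x, t) = -x / var(t).
# The probability-flow ODE replaces the SDE with the deterministic
#   dx/dt = -0.5*BETA * (x + score(x, t)),
# which has the same marginals but no injected noise.

BETA = 1.0
SIGMA_D = 0.25   # std of the toy "data" distribution
T = 5.0          # integrate from t=T (noise) back to t=0 (data)

def var(t):
    a = np.exp(-BETA * t)
    return a * SIGMA_D**2 + (1.0 - a)

def score(x, t):
    return -x / var(t)

def flow_sample(x_T, n_steps=1000):
    """Deterministic probability-flow ODE sampler (Euler, t: T -> 0)."""
    dt = T / n_steps
    x = x_T
    for i in range(n_steps):
        t = T - i * dt
        # step backwards in time: x(t - dt) ~ x(t) - dt * dx/dt
        x = x - dt * (-0.5 * BETA * (x + score(x, t)))
    return x

# Unlike stochastic diffusion sampling, the ODE maps each noise
# initialisation to exactly one sample, so repeated runs agree.
x_T = 1.3
x0_a = flow_sample(x_T)
x0_b = flow_sample(x_T)
# For this linear toy, the exact flow map is x_T * sqrt(var(0) / var(T)).
x0_exact = x_T * np.sqrt(var(0.0) / var(T))
```

Determinism is what enables the paper's interpretability analysis: with no injected noise, the intermediate states along the trajectory form a reproducible path from the initialisation to the final structure.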
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 36