ProteinZero: Self-Improving Protein Generation via Online Reinforcement Learning

ICLR 2026 Conference Submission 20926 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Protein Design, Reinforcement Learning, Inverse Folding, Generative Models, Sequence Diversity, Online Learning, Large Language Model
TL;DR: ProteinZero improves protein generative models via online reinforcement learning with computationally efficient feedback, eliminating the need for labeled datasets while producing proteins with superior designability, stability, and diversity.
Abstract: Protein generative models have shown remarkable promise in protein design, yet their success rates remain constrained by reliance on curated sequence-structure datasets and by misalignment between supervised objectives and real design goals. We present ProteinZero, an online reinforcement learning framework for inverse folding models that enables scalable, automated, and continuous self-improvement with computationally efficient feedback. ProteinZero employs a reward pipeline that combines structural guidance from ESMFold with a novel self-derived ddG predictor, providing stable multi-objective signals while avoiding the prohibitive cost of physics-based methods. To ensure robustness in online RL, we further introduce an embedding-level diversity regularizer that mitigates mode collapse and promotes functionally meaningful sequence variation. Within a general RL formulation balancing multi-reward optimization, KL divergence from a reference model, and diversity regularization, ProteinZero achieves robust improvements across designability, stability, recovery, and diversity. On the CATH-4.3 benchmark, it consistently outperforms state-of-the-art baselines including ProteinMPNN, ESM-IF, and InstructPLM, reducing design failure rates by 36-48\% and achieving success rates above 90\% across diverse folds. Importantly, a complete RL run can be executed on a single 8$\times$ GPU node within three days, including reward computation and data generation. These results indicate that efficient online RL fine-tuning can complement supervised pretraining by allowing protein generative models to evolve continuously from their own outputs and optimize multiple design objectives without labeled data, opening new possibilities for exploring the vast protein design space.
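For readers skimming the abstract, a minimal sketch of the kind of objective it describes is given below, assuming illustrative reward weights $w_k$ for $K$ reward terms, a KL coefficient $\beta$, and a diversity coefficient $\lambda$; these symbols and the exact form are not taken from the paper:

$$\max_{\theta}\;\; \mathbb{E}_{x \sim \mathcal{D},\; s \sim \pi_\theta(\cdot \mid x)}\Big[\textstyle\sum_{k=1}^{K} w_k\, r_k(s, x)\Big] \;-\; \beta\, \mathrm{KL}\big(\pi_\theta(\cdot \mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot \mid x)\big) \;+\; \lambda\, R_{\mathrm{div}}(s)$$

Here $x$ is a target backbone structure, $s$ a generated sequence, the $r_k$ stand in for the multi-objective rewards (e.g., ESMFold-based structural agreement and the self-derived ddG prediction), $\pi_{\mathrm{ref}}$ is the frozen reference (pretrained) inverse folding model, and $R_{\mathrm{div}}$ denotes the embedding-level diversity regularizer mentioned in the abstract.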
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 20926