Learning Diffusion Models with Flexible Representation Guidance

ICML 2025 Workshop FM4LS Submission 26 Authors

Published: 12 Jul 2025 · Last Modified: 12 Jul 2025 · FM4LS 2025 · CC BY 4.0
Keywords: Diffusion Models, Representation Learning, Biomolecule Generation
TL;DR: We introduce a systematic approach to flexibly incorporating representation guidance into diffusion models, resulting in both accelerated training and better performance across protein and molecule generation tasks.
Abstract: Diffusion models can be improved with additional guidance towards more effective representations of the input. Indeed, prior empirical work has already shown that aligning internal representations of the diffusion model with those of pre-trained models improves generation quality. In this paper, we present a systematic framework for incorporating representation guidance into diffusion models. We provide alternative decompositions of denoising models along with their associated training criteria, where the decompositions determine when and how the auxiliary representations are incorporated. Guided by our theoretical insights, we introduce two new strategies for enhancing representation alignment in diffusion models. First, we pair examples with target representations either derived from the examples themselves or arising from different synthetic modalities, and subsequently learn a joint model over the multimodal pairs. Second, we design an optimal training curriculum that balances representation learning and data generation. Our experiments across protein sequence and molecule generation tasks demonstrate superior performance as well as accelerated training. The code is available at https://github.com/ChenyuWang-Monica/REED.
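The core recipe the abstract describes — a standard denoising objective augmented with an auxiliary term that aligns the denoiser's internal features with representations from a frozen pre-trained encoder, traded off by a curriculum weight — can be sketched in a few lines. The sketch below is illustrative only: `ToyDenoiser`, `training_loss`, the linear noising interpolation, the cosine alignment term, and the scalar `align_weight` are assumptions for exposition, not the paper's actual architecture, schedule, or objective (see the linked repository for the real implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDenoiser(nn.Module):
    """Minimal denoiser that also exposes a projected internal
    representation (hypothetical stand-in for the paper's models)."""
    def __init__(self, dim=64, rep_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU())
        self.eps_head = nn.Linear(128, dim)      # noise prediction
        self.rep_head = nn.Linear(128, rep_dim)  # internal features to align

    def forward(self, x_t, t):
        h = self.backbone(torch.cat([x_t, t[:, None]], dim=-1))
        return self.eps_head(h), self.rep_head(h)

def training_loss(denoiser, frozen_encoder, x0, align_weight=0.5):
    """Denoising loss plus a representation-alignment term.

    `frozen_encoder` maps clean data to target representations
    (e.g., from a pre-trained model). `align_weight` plays the role
    of a curriculum knob balancing representation learning against
    data generation; it would be annealed over training.
    """
    b = x0.shape[0]
    t = torch.rand(b)                                  # uniform time in [0, 1]
    noise = torch.randn_like(x0)
    x_t = (1 - t)[:, None] * x0 + t[:, None] * noise   # simple linear noising
    eps_pred, rep_pred = denoiser(x_t, t)
    with torch.no_grad():
        rep_target = frozen_encoder(x0)                # target representation
    denoise_loss = F.mse_loss(eps_pred, noise)
    align_loss = 1 - F.cosine_similarity(rep_pred, rep_target, dim=-1).mean()
    return denoise_loss + align_weight * align_loss

# Usage: a frozen linear map stands in for a pre-trained encoder.
denoiser = ToyDenoiser()
encoder = nn.Linear(64, 32).requires_grad_(False)
loss = training_loss(denoiser, encoder, torch.randn(8, 64))
loss.backward()
```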
Submission Number: 26