Importance-Weighted Training of Diffusion Samplers

Published: 11 Jun 2025, Last Modified: 18 Jul 2025 · GenBio 2025 Poster · CC BY 4.0
Keywords: Amortized inference, Monte Carlo methods, Diffusion models, GFlowNets
TL;DR: We propose an importance-weighted training framework for diffusion samplers to improve training efficiency and mode coverage with off-policy training.
Abstract: We propose an importance-weighted training framework for diffusion samplers — diffusion models trained to sample from a Boltzmann distribution — that leverages Monte Carlo methods with off-policy training to improve training efficiency and mode coverage. Building upon past attempts to use experience replay to guide the training of denoising models as policies, we derive a way to combine historical samples with adaptive importance weights so as to make the training samples better approximate the desired distribution even when the sampler is far from converged. On synthetic multi-modal targets and the Boltzmann distribution of alanine dipeptide conformations, we demonstrate improvements in distribution approximation and training stability compared to existing baselines. Our results are a step towards combining the strengths of amortized (RL- and control-based) approaches to training diffusion samplers with those of Monte Carlo methods.
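As a rough illustration of the idea described in the abstract, the sketch below shows one way historical samples from a replay buffer could be reweighted with self-normalized importance weights before being used as training data, so that the batch approximates the target Boltzmann distribution even when the sampler has not converged. All names, shapes, and the function signature are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def reweighted_replay_batch(buffer_x, buffer_logq, log_target, batch_size, rng=None):
    """Draw a training batch from a replay buffer via self-normalized
    importance weights (illustrative sketch; names are hypothetical).

    buffer_x    : (N, d) array of samples previously produced by the sampler
    buffer_logq : (N,) log-probabilities of those samples under the sampler
                  at the time they were generated
    log_target  : callable mapping (N, d) samples to unnormalized target
                  log-densities (e.g. -energy / temperature for a Boltzmann target)
    """
    rng = np.random.default_rng() if rng is None else rng

    # Importance weights w_i proportional to p_target(x_i) / q_sampler(x_i), in log space.
    log_w = log_target(buffer_x) - buffer_logq
    log_w -= log_w.max()          # subtract max for numerical stability
    w = np.exp(log_w)
    w /= w.sum()                  # self-normalize

    # Resample buffer entries in proportion to their weights; the resulting
    # batch is approximately distributed like the target rather than like
    # the current, possibly mode-collapsed, sampler.
    idx = rng.choice(len(buffer_x), size=batch_size, replace=True, p=w)
    return buffer_x[idx]
```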
Submission Number: 166