Solving Bayesian inverse problems with diffusion priors and off-policy RL

Published: 06 Mar 2025, Last Modified: 14 Mar 2025
ICLR 2025 DeLTa Workshop Poster
License: CC BY 4.0
Track: long paper (up to 8 pages)
Keywords: diffusion, bayesian, mcmc, inverse problems
TL;DR: We apply Relative Trajectory Balance (RTB) to solve Bayesian inverse problems in vision and science, extend the work to training conditional diffusion posteriors from unconditional priors, and empirically show limitations of current methods.
Abstract: This paper presents a practical application of Relative Trajectory Balance (RTB), a recently introduced off-policy reinforcement learning (RL) objective that can asymptotically solve Bayesian inverse problems optimally. We extend the original work by using RTB to train conditional diffusion model posteriors from pretrained unconditional priors for challenging linear and non-linear inverse problems in vision and science. We use the objective alongside techniques such as off-policy backtracking exploration to improve training. Importantly, our results show that existing training-free diffusion posterior methods struggle to perform effective posterior inference in latent space due to inherent biases.
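For context, the RTB objective referenced above can be sketched as follows (notation is ours and may differ from the paper's): for a denoising trajectory \(\tau = (x_T, \ldots, x_0)\), a pretrained prior \(p^{\mathrm{prior}}\), a likelihood-derived reward \(r(x_0)\), and a learned scalar \(Z_\theta\), the posterior sampler \(p^{\mathrm{post}}_\theta\) is trained by minimizing

\[
\mathcal{L}_{\mathrm{RTB}}(\tau;\theta) = \left( \log \frac{Z_\theta \prod_{t=1}^{T} p^{\mathrm{post}}_\theta(x_{t-1}\mid x_t)}{r(x_0)\,\prod_{t=1}^{T} p^{\mathrm{prior}}(x_{t-1}\mid x_t)} \right)^{2},
\]

which, at its optimum over all trajectories, makes the trained sampler match the reward-tilted prior, i.e. the Bayesian posterior.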
Submission Number: 17