Align & Invert: Solving Inverse Problems with Diffusion and Flow-based Models via Representational Alignment

19 Sept 2025 (modified: 28 Sept 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Diffusion Models, Flow-Based Models, Inverse Problems, Representational Alignment
Abstract: Enforcing the alignment of internal representations in diffusion or flow-based models with those from pretrained self-supervised models during training has recently been shown to provide a powerful inductive bias, significantly improving both convergence speed and the quality of generated images. In this paper, we move beyond training and instead focus on inverse problems, where pretrained diffusion or flow-based models are used as {\it priors}. We propose applying {\it representational alignment} of diffusion/flow-based models with a {\it pretrained self-supervised visual encoder}, such as \textsc{Dinov2}, for guidance at inference time. Although the ground truth signal is unavailable in inverse problems, we show that representational alignment based on approximations of the ground truth can yield significant gains, steering the reverse process of diffusion/flow-based models toward higher-quality reconstructed images. We provide theoretical insights by uncovering a connection between representational alignment and perceptual metrics. Under mild assumptions, we further show that our approach can improve the perception–distortion trade-off frontier, i.e., enhance perceptual quality with negligible degradation in distortion. Finally, we demonstrate its versatility by integrating it into multiple state-of-the-art solvers. Extensive experiments on super-resolution, box inpainting, Gaussian deblurring, and motion deblurring show that our proposed method consistently enhances reconstruction quality.
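The guidance idea in the abstract can be illustrated with a minimal toy sketch: at each reverse step, nudge the current denoised estimate so that its features under a frozen encoder move closer to the features of an approximation of the ground truth. Everything below is a hypothetical stand-in (a random linear-plus-tanh map in place of DINOv2, a numerical gradient, illustrative step sizes), not the paper's actual solver.

```python
# Toy sketch of representation-alignment guidance for a reverse diffusion
# step. The "encoder" is a hypothetical stand-in for a pretrained
# self-supervised model such as DINOv2; all sizes and rates are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # stand-in encoder weights


def encode(x):
    # hypothetical feature extractor playing the role of DINOv2
    return np.tanh(W @ x)


def alignment_loss(x, ref_feats):
    # 1 - cosine similarity: small when representations are aligned
    f = encode(x)
    denom = np.linalg.norm(f) * np.linalg.norm(ref_feats) + 1e-8
    return 1.0 - float(f @ ref_feats) / denom


def guided_step(x0_hat, ref_feats, lr=0.1, eps=1e-5):
    # numerical gradient of the alignment loss w.r.t. the current estimate;
    # a real implementation would backpropagate through the encoder instead
    grad = np.zeros_like(x0_hat)
    base = alignment_loss(x0_hat, ref_feats)
    for i in range(x0_hat.size):
        xp = x0_hat.copy()
        xp[i] += eps
        grad[i] = (alignment_loss(xp, ref_feats) - base) / eps
    return x0_hat - lr * grad  # steer the estimate toward aligned features


# Ground truth is unavailable in inverse problems, so the reference features
# come from a noisy approximation of it, as the abstract describes.
x_true = rng.standard_normal(8)
ref = encode(x_true + 0.05 * rng.standard_normal(8))

x = rng.standard_normal(8)  # current denoised estimate x0_hat
before = alignment_loss(x, ref)
for _ in range(50):
    x = guided_step(x, ref)
after = alignment_loss(x, ref)
print(before, after)  # alignment loss should shrink under the guidance
```

In the paper's actual method this correction is folded into the reverse process of a pretrained diffusion/flow prior inside an inverse-problem solver; the sketch only shows the core mechanism of descending an alignment loss on encoder features.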
Primary Area: generative models
Submission Number: 18913