Keywords: super resolution, diffusion, posterior learning, KL divergence, finite difference, diffusion prior, inverse problem, numerical error
TL;DR: A numerical error correction term from posterior learning is repurposed as image-based classifier guidance to enhance the fidelity or perceptual quality of an image in a plug-and-play manner.
Abstract: Diffusion-based super-resolution (SR) has shown remarkable progress, mainly through prior-guided approaches that require explicit degradation models or semantic priors. While posterior diffusion SR avoids these assumptions by directly learning from LR–HR pairs, it still suffers from numerical errors during sampling and lacks plug-and-play mechanisms for quality control.
We provide a numerical analysis showing that discretization errors are a key bottleneck in posterior SR. In principle, these errors can be corrected to improve fidelity when reference supervision is available, offering a new theoretical understanding of posterior diffusion trajectories. In real-world SR, however, where such references are absent, the scope for fidelity enhancement is limited. To address this, we demonstrate that reversing the correction term effectively enhances edge contrast, providing a practical way to improve perceptual quality without retraining.
Experiments confirm that our method consistently improves perceptual quality, while also validating the theoretical link between numerical errors and fidelity in posterior SR. Our findings establish the first plug-and-play framework for quality control in posterior diffusion SR, bridging theoretical insight with practical applicability.
Supplementary Material: pdf
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 14514