Abstract: Image super-resolution (SR) has attracted increasing attention due to its widespread applications. However, current SR methods generally suffer from over-smoothing and artifacts, and most of them work only with fixed magnifications. To address these problems, this paper introduces an Implicit Diffusion Model (IDM) for high-fidelity continuous image super-resolution. IDM integrates an implicit neural representation and a denoising diffusion model in a unified end-to-end framework, where the implicit neural representation is adopted in the decoding process to learn a continuous-resolution representation. Moreover, we design a scale-adaptive conditioning mechanism that consists of a low-resolution (LR) conditioning network and a scaling factor. The LR conditioning network adopts a parallel architecture to provide multi-resolution LR conditions for the denoising model. The scaling factor further regulates the resolution and accordingly modulates the proportion of LR information and generated features in the final output, which enables the model to accommodate the continuous-resolution requirement. Furthermore, we accelerate the inference process by adjusting the denoising equation and employing post-training quantization to compress the learned denoising network in a training-free manner. Extensive experiments on six benchmark datasets validate the effectiveness of our IDM and demonstrate its superior performance over prior art. The source code is available at https://github.com/Ree1s/IDM.
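The scale-adaptive conditioning described above can be illustrated with a minimal sketch: the scaling factor controls how strongly the fused features lean on the LR condition versus the generated (denoised) features before decoding at an arbitrary output resolution. This is not the authors' implementation; the function names (`blend_weight`, `scale_adaptive_decode`) and the particular blending rule are hypothetical, assuming a PyTorch-style feature fusion.

```python
# Minimal sketch (not the released IDM code): scale-dependent mixing of
# upsampled LR-condition features and generated features at a continuous
# target resolution. All names and the blending rule are illustrative.
import torch
import torch.nn.functional as F

def blend_weight(scale: float, max_scale: float = 30.0) -> float:
    """Hypothetical monotone mapping from magnification to a mixing weight:
    larger magnifications rely more on the generated features."""
    return min(scale / max_scale, 1.0)

def scale_adaptive_decode(lr_feat: torch.Tensor,
                          gen_feat: torch.Tensor,
                          out_size: tuple,
                          scale: float) -> torch.Tensor:
    """Resample both feature maps to the target (possibly non-integer-scale)
    resolution and mix them with a scale-dependent weight; the result would
    then be fed to an implicit-neural-representation decoder."""
    lr_up = F.interpolate(lr_feat, size=out_size, mode='bilinear', align_corners=False)
    gen_up = F.interpolate(gen_feat, size=out_size, mode='bilinear', align_corners=False)
    w = blend_weight(scale)
    return (1.0 - w) * lr_up + w * gen_up

# Usage: arbitrary (non-integer) magnification of 48x48 feature maps
lr_feat = torch.randn(1, 64, 48, 48)
gen_feat = torch.randn(1, 64, 48, 48)
fused = scale_adaptive_decode(lr_feat, gen_feat, out_size=(130, 130), scale=2.7)
print(fused.shape)  # torch.Size([1, 64, 130, 130])
```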