Self-Adaptive Reality-Guided Diffusion for Artifact-Free Super-Resolution

Published: 01 Jan 2024 · Last Modified: 16 Sept 2025 · CVPR 2024 · CC BY-SA 4.0
Abstract: Artifact-free super-resolution (SR) aims to translate low-resolution images into their high-resolution counterparts while strictly preserving the original content, eliminating any distortions or synthetic details. While diffusion-based SR techniques have demonstrated remarkable abilities to enhance image detail, they are prone to introducing artifacts during their iterative procedures. Such artifacts, ranging from trivial noise to inauthentic textures, deviate from the true structure of the source image and thus challenge the integrity of the super-resolution process. In this work, we propose Self-Adaptive Reality-Guided Diffusion (SARGD), a training-free method that delves into the latent space to effectively identify and mitigate the propagation of artifacts. SARGD begins by using an artifact detector to identify implausible pixels, creating a binary mask that highlights artifacts. The Reality Guidance Refinement (RGR) process then refines artifacts by integrating this mask with realistic latent representations, improving alignment with the original image. However, initial realistic latent representations derived from lower-quality images lead to over-smoothing in the final output. To address this, we introduce a Self-Adaptive Guidance (SAG) mechanism that dynamically computes a reality score to enhance the sharpness of the realistic latent. These alternating mechanisms collectively achieve artifact-free super-resolution. Extensive experiments demonstrate the superiority of our method, delivering detailed, artifact-free high-resolution images while reducing sampling steps by 2×. We release our code at https://github.com/ProAirVerse/Self-Adaptive-Guidance-Diffusion.git.
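The abstract describes an alternating loop of artifact detection, Reality Guidance Refinement (RGR), and Self-Adaptive Guidance (SAG). Below is a minimal conceptual sketch of one such round in PyTorch. It is not the authors' implementation (see the linked repository for that); the deviation-threshold artifact criterion, the `threshold` value, the reality score defined as the fraction of artifact-free latent pixels, and the reference-update rule are all illustrative assumptions used to make the abstract's data flow concrete.

```python
import torch


def detect_artifacts(latent: torch.Tensor, reference: torch.Tensor,
                     threshold: float = 0.5) -> torch.Tensor:
    """Flag implausible latent pixels (hypothetical criterion: a pixel is an
    artifact when its absolute deviation from the realistic reference latent
    exceeds `threshold`). Returns a binary mask with 1 = artifact."""
    return ((latent - reference).abs() > threshold).float()


def sargd_refine(latent: torch.Tensor, reference: torch.Tensor,
                 threshold: float = 0.5) -> tuple[torch.Tensor, torch.Tensor]:
    """One sketched RGR + SAG round.

    RGR: overwrite masked (artifact) regions with the realistic reference,
    keeping plausible content from the current latent.
    SAG: compute a reality score -- here, simply the fraction of
    artifact-free pixels (an assumption) -- and use it to pull the reference
    toward the refined latent, sharpening the guidance so it does not
    over-smooth the final output.
    """
    mask = detect_artifacts(latent, reference, threshold)

    # RGR: blend the realistic reference into the flagged regions.
    refined = (1.0 - mask) * latent + mask * reference

    # SAG: reality score in [0, 1]; higher means the latent is more trusted.
    reality_score = 1.0 - mask.mean()

    # Guessed update rule: inject detail from the refined latent into the
    # reference in proportion to the reality score.
    new_reference = reference + reality_score * (refined - reference)
    return refined, new_reference


if __name__ == "__main__":
    torch.manual_seed(0)
    z = torch.randn(1, 4, 64, 64)             # current diffusion latent
    z_ref = 0.1 * torch.randn(1, 4, 64, 64)   # stand-in realistic latent
    z, z_ref = sargd_refine(z, z_ref)
    print(z.shape, float(z_ref.abs().mean()))
```

In the paper's pipeline these two updates alternate across the diffusion sampling steps, which is how the method both suppresses artifact propagation and progressively sharpens the guiding latent; the sketch above shows only a single round.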