SD-URM: Stable-Diffusion Based Zero-Shot Universal Restoration Model

Dae Yeol Lee, Shiv Gehlot, Lijun Zhang, Guan-Ming Su

Published: 2025 · Last Modified: 23 Apr 2026 · ICIPW 2025 · CC BY-SA 4.0
Abstract: Pretrained text-to-image diffusion models have strong priors on natural image attributes. These learned representations can be leveraged for various image-to-image translation tasks during the sampling process, enabling zero-shot approaches to image restoration. In this work, we propose the Stable-Diffusion-based zero-shot universal restoration model (SD-URM), which incorporates a learnable degradation model that disentangles distortions from clean signals and uses this information to guide the sampling process of Stable Diffusion. This enables efficient and accurate restoration, even under complex, combined distortions. SD-URM demonstrates strong performance across diverse degradations, including blur, low resolution, grayscale, missing pixels, and low-light conditions, and outperforms existing frameworks, especially in handling combinations of these degradations.
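The core idea — using a degradation model to steer a generative sampler toward consistency with the degraded observation — can be illustrated with a toy 1-D sketch. This is not the authors' implementation: it replaces Stable Diffusion with a plain gradient-descent loop and the learnable degradation model with a fixed, known blur kernel, purely to show the guidance mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a degradation operator: a fixed 1-D blur kernel.
# (SD-URM *learns* its degradation model; this known kernel is an
# illustrative assumption, not the paper's method.)
def degrade(x):
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(x, kernel, mode="same")

clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
observed = degrade(clean)  # the degraded measurement we restore from

# Degradation-guided iteration: start from noise and repeatedly take a
# data-fidelity gradient step pulling the estimate toward consistency
# with the observation, mimicking how a degradation model can guide a
# diffusion sampler's intermediate estimates.
x = rng.standard_normal(clean.size)
mse_start = float(np.mean((x - clean) ** 2))
step = 0.5
for _ in range(200):
    residual = degrade(x) - observed
    # gradient of 0.5 * ||degrade(x) - observed||^2; the symmetric
    # kernel makes the adjoint of `degrade` equal to `degrade` itself
    x = x - step * degrade(residual)

mse_final = float(np.mean((x - clean) ** 2))
print(f"MSE vs clean: start {mse_start:.3f} -> final {mse_final:.3f}")
```

In the full method, this fidelity pull is interleaved with the diffusion prior's denoising steps, so the prior fills in frequencies the degradation operator suppresses (which plain gradient descent, as above, cannot recover).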