Learning to Blur is Learning to Deblur: Realistic Synthetic UHD Blurred Image via Diffusion

Published: 06 May 2025, Last Modified: 06 May 2025 · SynData4CV · CC BY 4.0
Keywords: Deblur, UHD, Diffusion
Abstract: Generating large-scale, diverse, and realistic paired data for ultra-high-definition (UHD) image deblurring is challenging due to the complex textures and information contained in UHD images. Existing synthetic methods often fail to replicate the complex, spatially-varying blurs present in real-world 4K imagery, limiting model performance. To address this gap, we introduce two diffusion-centric contributions: First, \textbf{UHD-RealBlur}, a large-scale 4K dataset produced by our novel PhysicsGuided-BlurSynth framework. PhysicsGuided-BlurSynth leverages a pre-trained Stable Diffusion model controlled using both content guidance from a clean input image and explicit conditioning on \textbf{real-world camera settings} (ISO, aperture, shutter speed, focus mode, etc.). Futhermore, we collected a set of real-world blurred images (with 4K resolution) and adopted unpaired training to fine-tune the distribution of generated blurred images to make it closer to real-world distributions. Second, we develope a FreqDiff, which incorporates essential frequency information from blurred inputs into the diffusion process and is specifically engineered for UHD image deblurring. Extensive experiments demonstrate that FreqDiff trained solely on UHD-RealBlur exhibits outstanding performance on real-world 4K blurred images.
Submission Number: 31