Keywords: Nighttime flare removal, Image restoration
Abstract: Nighttime photography with intense light sources frequently produces significant flare artifacts that obscure the background, resulting in diminished image quality. Existing encoder–decoder methods can remove flare, but when trained on limited datasets they still suffer from residual artifacts and color shifts. Diffusion-based methods can mitigate these problems to some extent, but the multi-step diffusion process leads to error accumulation, which in turn causes background distortion. To address these issues, we propose FGDNet, a novel single-step diffusion framework for nighttime flare removal guided by Laplacian pyramid frequency priors. Specifically, our method leverages stable diffusion combined with frequency-prior guidance to achieve high-fidelity flare removal without requiring flare annotations. The framework consists of three key components: (1) a Latent Diffusion-based Deflare Module (LDDM) that performs flare removal and preliminary background reconstruction through single-step diffusion with LoRA fine-tuning; (2) a Multi-scale Frequency Injection Module (MFIM) that extracts high-frequency details through Laplacian pyramid decomposition, aligns authentic textures, and injects them into the VAE decoder to restore fine details; (3) a Multi-band Frequency Fusion Module (MFFM) that employs multi-reference attention to adaptively fuse preliminary results with high- and low-frequency information from the input image, further enhancing structural and color restoration. Experiments on Flare7K and Flare7K++ show superior performance in PSNR, SSIM, LPIPS, and no-reference metrics (MUSIQ, MANIQA), reducing artifacts while enhancing background detail and color fidelity in complex nighttime scenes.
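The Laplacian pyramid decomposition that MFIM relies on can be sketched as below. This is a minimal illustrative implementation, not the paper's code: the 5-tap binomial kernel, the reflect padding, and the helper names (`_blur`, `laplacian_pyramid`) are assumptions. Each level stores the high-frequency detail lost by blur-and-downsample, leaving a low-frequency residual; summing the upsampled residual with the detail bands reconstructs the image exactly.

```python
import numpy as np

def _blur(img):
    # Separable 5-tap binomial filter (Gaussian-like), reflect padding (illustrative choice).
    k = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0
    padded = np.pad(img, 2, mode="reflect")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def _upsample(img, shape):
    # Nearest-neighbour 2x upsampling followed by blur, cropped to the target shape.
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[: shape[0], : shape[1]]
    return _blur(up)

def laplacian_pyramid(img, levels=3):
    """Split img into `levels` high-frequency detail bands plus a low-frequency residual."""
    bands = []
    cur = img.astype(np.float64)
    for _ in range(levels):
        down = _blur(cur)[::2, ::2]            # blur then decimate by 2
        bands.append(cur - _upsample(down, cur.shape))  # detail lost at this scale
        cur = down
    return bands, cur                          # (detail bands, coarse residual)

def reconstruct(bands, low):
    """Invert the decomposition: add each detail band back at its scale."""
    cur = low
    for band in reversed(bands):
        cur = _upsample(cur, band.shape) + band
    return cur
```

In a restoration pipeline such as the one the abstract describes, the `bands` (high-frequency texture) and `low` (structure and color) components would be routed to different modules; here the decomposition itself is the only part being sketched.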
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 8943