DCGSD: Low-Light Image Enhancement With Dual-Conditional Guidance Sparse Diffusion Model

Haiyan Jin, Jing Wang, Fengyuan Zuo, Haonan Su, Zhaolin Xiao, Bin Wang, Yuanlin Zhang

Published: 01 Aug 2025, Last Modified: 04 Nov 2025
IEEE Transactions on Circuits and Systems for Video Technology
License: CC BY-SA 4.0
Abstract: When restoring low-light images, most methods largely overlook the ambiguity caused by dark-region noise and lack discrimination between region and shape representations, resulting in ineffective feature enhancement. In this work, we propose a physically explainable, prior-guided model for low-light image enhancement, termed Dual-Conditional Guidance Sparse Diffusion (DCGSD). Specifically, we introduce an elaborately designed Luminance Structure Guidance Head, which can be easily plugged into existing diffusion models to emphasize luminance and structural representations. Furthermore, for reliable noise analysis, we provide a novel Sparse Attention Enhancement Module that adaptively exploits the most useful region-to-region dependencies. This dynamic selection turns the diffusion process from dense to sparse, improving the efficiency of inferring noise distributions. To avoid noise amplification, we further present a Skip Calibration Module, which refines local neighborhoods containing both noisy and structural information. Extensive experiments verify the superiority of the proposed method. DCGSD shows that leveraging dual-conditional guidance helps the diffusion model produce sharper and more realistic results.
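The "dense to sparse" selection of region-to-region dependencies described above can be illustrated with a top-k sparse attention sketch. This is a minimal NumPy illustration, not the paper's actual module: the shapes, the top-k selection rule, and the function name `topk_sparse_attention` are all assumptions for exposition.

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Attend each query only to its top_k highest-scoring keys.

    Hedged sketch of dense-to-sparse selection; the paper's Sparse
    Attention Enhancement Module may differ in its selection rule.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)  # (Nq, Nk) dense similarity matrix
    # Keep only the top_k scores per query; mask the rest to -inf
    # so they receive zero attention weight after the softmax.
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, idx, 0.0, axis=-1)
    sparse = scores + mask
    # Softmax over the surviving (sparse) entries only.
    w = np.exp(sparse - sparse.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))    # 4 query regions
k = rng.standard_normal((16, 8))   # 16 candidate regions
v = rng.standard_normal((16, 8))
out = topk_sparse_attention(q, k, v, top_k=4)
print(out.shape)  # (4, 8)
```

Each query aggregates information from only 4 of the 16 candidate regions, so the cost of the weighted sum scales with `top_k` rather than with the full region count, which is the efficiency gain the sparsification targets.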