SVD-Guided Diffusion for Training-Free Low-Light Image Enhancement

Published: 01 Jan 2025 · Last Modified: 04 Nov 2025 · IEEE Signal Process. Lett. 2025 · CC BY-SA 4.0
Abstract: Low-light image enhancement aims to improve the visibility and contrast of images captured under poor lighting conditions while preserving contextual details. Most previous methods rely on paired training data, which often leads to overfitting to specific data distributions. Although recent approaches adopt the generative priors of diffusion models to avoid such learning bias, the stochastic nature of the diffusion process hinders precise control over luminance-related features. To address these challenges, we propose a novel, training-free method that integrates Singular Value Decomposition (SVD) with a pretrained diffusion model. Based on our observation that SVD tends to separate an image into luminance and structural components, we leverage the decomposition capability of SVD and the generative prior of the diffusion model simultaneously. Specifically, our approach guides the restoration of lighting conditions by adaptively combining the singular values of the intermediate estimate obtained at each denoising step with those of the low-light input. For this combination, we define a semantic-aware scaling scheme based on a vision-language model. Experimental results on benchmark datasets demonstrate that the proposed method improves low-light image enhancement performance compared with other training-free methods.
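The core per-step operation described in the abstract, blending singular values of the intermediate denoised estimate with those of the low-light input while keeping the estimate's singular vectors, can be sketched as below. This is a minimal illustrative example under stated assumptions, not the authors' implementation: the scalar blending weight `alpha` stands in for the paper's semantic-aware, vision-language-model-based scaling scheme (which is not specified in the abstract), and the single-channel treatment is an assumption; a real pipeline would likely apply this per channel or on a luminance channel.

```python
import numpy as np

def svd_guided_combine(x0_hat: np.ndarray, y_low: np.ndarray, alpha: float) -> np.ndarray:
    """Blend the singular values of the denoiser's intermediate estimate with
    those of the low-light input, reusing the estimate's singular vectors.

    x0_hat : intermediate clean-image estimate at the current denoising step (H x W)
    y_low  : low-light input image (H x W)
    alpha  : blending weight in [0, 1]; a placeholder for the paper's
             semantic-aware, VLM-based scaling scheme.
    """
    # Decompose both images. Per the paper's observation, singular values
    # carry luminance-like energy while singular vectors carry structure.
    U, s_hat, Vt = np.linalg.svd(x0_hat, full_matrices=False)
    _, s_low, _ = np.linalg.svd(y_low, full_matrices=False)

    # Adaptive combination of singular values (illustrative linear blend).
    s_mix = alpha * s_hat + (1.0 - alpha) * s_low

    # Reassemble: the estimate's structure with the blended singular values.
    return (U * s_mix) @ Vt
```

In a diffusion sampling loop, this function would be applied to the predicted clean image at each denoising step before continuing the reverse process, steering luminance toward the generative prior while anchoring structure to the observed input.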