Abstract: Low-light image enhancement (LLIE) is the process of improving the quality of images taken in low-light conditions while striking a balance between enhancing illumination and maintaining a natural appearance. This involves reducing noise, enhancing details, and correcting colors, all while avoiding artifacts such as halo effects or color distortions. We propose LIVENet, a novel deep neural network that jointly performs noise reduction on low-light images and enhances illumination and texture details. LIVENet has two stages: an image enhancement stage and a refinement stage. For the image enhancement stage, we propose a Latent Subspace Denoising Block (LSDB) that uses a low-rank representation of low-light features to suppress noise and predict a noise-free grayscale image. We enhance an RGB image by eliminating noise: the image is converted into the YCbCr color space, and the noisy luminance (Y) channel is replaced with the predicted noise-free grayscale image. The image enhancement stage also predicts the transmission map and atmospheric light, which are fed to an atmospheric scattering model to produce an enhanced image with rich color and illumination. In the refinement stage, texture information from the grayscale image is incorporated into the enhanced image using a Spatial Feature Transform (SFT) layer. Experiments on different datasets demonstrate that LIVENet’s enhanced images consistently outperform those of previous techniques across various quality metrics. The source code can be obtained from https://github.com/CandleLabAI/LiveNet.
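The following is a minimal illustrative sketch, not the authors' implementation, of two operations the abstract describes: swapping the noisy Y channel of a low-light RGB image for a predicted noise-free grayscale image, and recovering an enhanced image from a predicted transmission map and atmospheric light via the standard atmospheric scattering model I = J·t + A·(1 − t), inverted as J = (I − A)/t + A. The function names and the `denoised_gray`, `t`, and `A` inputs are hypothetical stand-ins for LIVENet's network outputs.

```python
import cv2
import numpy as np

def replace_luminance(rgb: np.ndarray, denoised_gray: np.ndarray) -> np.ndarray:
    """Replace the noisy luminance channel of an RGB image (uint8, HxWx3)
    with a noise-free grayscale prediction (uint8, HxW)."""
    # OpenCV's YCrCb conversion keeps luminance in channel 0.
    ycrcb = cv2.cvtColor(rgb, cv2.COLOR_RGB2YCrCb)
    ycrcb[..., 0] = denoised_gray  # Y channel <- predicted noise-free grayscale
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2RGB)

def scattering_model_enhance(img: np.ndarray, t: np.ndarray, A: np.ndarray,
                             eps: float = 1e-3) -> np.ndarray:
    """Invert I = J*t + A*(1 - t) to obtain the enhanced image J,
    given a per-pixel transmission map t (HxW) and atmospheric light A."""
    t = np.clip(t, eps, 1.0)[..., None]  # avoid division by zero; broadcast over channels
    J = (img.astype(np.float32) - A) / t + A
    return np.clip(J, 0.0, 255.0).astype(np.uint8)
```

In this sketch, `replace_luminance` corresponds to the color-space step of the enhancement stage, while `scattering_model_enhance` shows how predicted transmission and atmospheric light could be combined into the enhanced output; the paper's actual formulation may differ in normalization and in how A is parameterized.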