PFLLTNet: enhancing low-light images with PixelShuffle upsampling and feature fusion

Published: 01 Jan 2025, Last Modified: 19 May 2025. Signal, Image and Video Processing, 2025. License: CC BY-SA 4.0.
Abstract: This paper presents a novel low-light image enhancement model named PFLLTNet, specifically designed to address detail loss and global structure distortion in low-light conditions. The model exploits the separate processing of luminance (Y) and chrominance (UV) in the YUV color space, combined with multi-head self-attention (MHSA), feature fusion paths, residual connections, and a PixelShuffle upsampling strategy, significantly improving detail restoration and the fidelity of global structure. In addition, we optimize the composition and weighting of the loss function, introducing a light-consistency loss, and refine the learning rate schedule with cosine annealing with warm restarts, ensuring stability and robustness over extended training. Experimental results demonstrate that PFLLTNet achieves state-of-the-art (SOTA) performance on key metrics such as PSNR and SSIM while maintaining relatively low computational complexity. Owing to its computational efficiency and low resource demands, PFLLTNet holds significant potential for deployment on mobile devices and in real-time video processing and intelligent surveillance systems, particularly in environments that require rapid processing under constrained computational resources. The source code and pre-trained models are available at https://github.com/Huang408746862/PFLLTNet.
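To illustrate the upsampling strategy named in the title, the sketch below implements the standard PixelShuffle (sub-pixel) rearrangement in pure Python. This is a minimal, framework-free illustration of the generic operation, not PFLLTNet's actual implementation; the function name and nested-list tensor layout are our own choices, and a real model would use a deep-learning framework's built-in op.

```python
def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) tensor into (C, H*r, W*r).

    x is a nested list indexed as x[channel][row][col]. Each block of
    r*r input channels is interleaved into an r-by-r spatial
    neighborhood of one output channel, so spatial resolution grows by
    a factor of r without explicit interpolation.
    """
    cr2, h, w = len(x), len(x[0]), len(x[0][0])
    assert cr2 % (r * r) == 0, "channel count must be divisible by r^2"
    c_out = cr2 // (r * r)
    out = [[[0.0] * (w * r) for _ in range(h * r)] for _ in range(c_out)]
    for c in range(c_out):
        for i in range(r):          # vertical sub-pixel offset
            for j in range(r):      # horizontal sub-pixel offset
                ch = c * r * r + i * r + j
                for y in range(h):
                    for z in range(w):
                        out[c][y * r + i][z * r + j] = x[ch][y][z]
    return out

# Four 1x1 feature maps become a single 2x2 map at 2x resolution.
demo = pixel_shuffle([[[1.0]], [[2.0]], [[3.0]], [[4.0]]], r=2)
```

Because the upsampled pixels are learned as channels by the preceding convolutions rather than interpolated, this rearrangement tends to preserve fine detail, which matches the paper's stated goal of detail restoration.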