RGB-Net: transformer-based lightweight low-light image enhancement network via RGB channel separation

Published: 01 Jan 2025, Last Modified: 05 Jun 2025. Multim. Syst. 2025. License: CC BY-SA 4.0.
Abstract: In real-life scenarios, captured images often suffer from insufficient brightness, significant noise, and color distortion due to varying lighting conditions. We therefore propose a novel lightweight network for low-light image enhancement named RGB-Net. Firstly, unlike traditional Retinex-based models, our approach enhances the input image by separating its RGB color channels. Each RGB channel is independently enhanced for brightness and color information by a U-shaped channel optimization module (UCOM). Additionally, we use a transformer to capture long-range dependencies by incorporating a multi-head self-attention module within the UCOM, thereby improving feature extraction. Secondly, we design a multi-channel fusion module (MCFM) that integrates mixed dense convolutions and fully connected layers, employing a residual network to fuse the enhancement results from the different color channels and improve image reconstruction. Thirdly, we construct a new hybrid loss function by exploring various loss terms, which significantly improves the representational ability of our network. Extensive experiments on five publicly available real-world datasets show that our method significantly enhances image details with only 0.71M parameters and 5.81G floating-point operations, outperforming existing low-light image enhancement algorithms in both quantitative and qualitative evaluations.
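The abstract describes a three-stage pipeline: split the input into R, G, and B channels, enhance each channel with a UCOM that embeds multi-head self-attention, and fuse the three results with a residual MCFM. The PyTorch sketch below illustrates that data flow only; the module names UCOM and MCFM come from the abstract, but all layer widths, depths, and the exact placement of the attention block are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of the RGB-Net pipeline described in the abstract.
# Internal architecture details (channel counts, depths, attention placement)
# are assumptions for illustration, not the paper's actual design.
import torch
import torch.nn as nn


class UCOM(nn.Module):
    """U-shaped channel optimization module for a single color channel (sketch)."""

    def __init__(self, base_ch: int = 16, heads: int = 4):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, base_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.Conv2d(base_ch, base_ch * 2, 3, stride=2, padding=1)
        # Multi-head self-attention at the bottleneck to capture long-range dependencies.
        self.attn = nn.MultiheadAttention(base_ch * 2, heads, batch_first=True)
        self.up = nn.ConvTranspose2d(base_ch * 2, base_ch, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(base_ch * 2, base_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.out = nn.Conv2d(base_ch, 1, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)                          # (B, C, H, W)
        b = self.down(e1)                          # (B, 2C, H/2, W/2)
        B, C, H, W = b.shape
        tokens = b.flatten(2).transpose(1, 2)      # (B, H*W, 2C) token sequence
        tokens, _ = self.attn(tokens, tokens, tokens)
        b = tokens.transpose(1, 2).reshape(B, C, H, W)
        d1 = self.up(b)
        d1 = self.dec1(torch.cat([d1, e1], dim=1))  # skip connection (U-shape)
        return self.out(d1) + x                    # residual: predict an enhancement offset


class MCFM(nn.Module):
    """Multi-channel fusion module: fuses the three enhanced channels (sketch)."""

    def __init__(self, ch: int = 32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.fuse(x) + x                    # residual fusion of the three channels


class RGBNet(nn.Module):
    """RGB-Net sketch: per-channel UCOM enhancement followed by MCFM fusion."""

    def __init__(self):
        super().__init__()
        self.ucom_r, self.ucom_g, self.ucom_b = UCOM(), UCOM(), UCOM()
        self.mcfm = MCFM()

    def forward(self, rgb):
        r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]  # split RGB channels
        enhanced = torch.cat([self.ucom_r(r), self.ucom_g(g), self.ucom_b(b)], dim=1)
        return self.mcfm(enhanced)


if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64)                   # dummy low-light image
    print(RGBNet()(x).shape)                       # torch.Size([1, 3, 64, 64])
```

The hybrid loss function mentioned in the abstract is not specified beyond "various loss terms", so it is omitted from the sketch rather than guessed at.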