Signal-to-noise ratio guided noise adaptive network via Dual-domain collaboration for low-light image enhancement
Abstract: Low-light image enhancement is crucial for accurate perception and decision-making, particularly in fields like autonomous driving. However, noise is inherently present in low-light images, especially in extremely dark regions, which complicates the reconstruction of clear images. Transformers, which typically compute self-attention scores across all available tokens, often struggle with the interference caused by this noise. To address this challenge, we propose the Signal-to-Noise Ratio (SNR) guided Noise Adaptive Network (SNA-Net), a novel approach that leverages the strengths of both Convolutional Neural Networks (CNNs) and Transformers to adapt to the noise distribution across different regions for low-light image enhancement. SNA-Net introduces two key components within the transformer block: Noise Adaptive Self-Attention (NASA) and a Dual-domain Refinement Feed-forward Network (DRFN). Specifically, NASA adaptively computes attention scores using both dense and sparse branches. The sparse branch filters out negative token interactions in low-SNR regions, while the dense branch preserves essential image information. In parallel, DRFN reduces feature redundancy in both the spatial and frequency domains, thereby improving the recovery of the underlying clear image. Additionally, to facilitate better integration between CNN and Transformer features, we design an SNR-guided Feature Fusion Module (SGFF). We validate the superior performance of SNA-Net on six datasets through extensive experiments. Our code is available at https://github.com/Wyyff993/SNA-NET.
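The dense/sparse attention idea in NASA can be illustrated with a minimal NumPy sketch. Everything here is an illustrative assumption, not the paper's actual formulation: the SNR threshold, the hard per-key mask, and the per-query gating between branches are all hypothetical stand-ins for how low-SNR tokens might be excluded from attention while a dense branch keeps full context.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def snr_gated_attention(tokens, snr, thresh=1.0):
    """Toy SNR-gated self-attention over a flat token sequence.

    tokens: (N, d) token features; snr: (N,) per-token SNR score.
    The sparse branch masks out low-SNR keys; the dense branch
    attends to all tokens. A hard gate (an assumption) picks the
    sparse output for high-SNR queries and the dense output otherwise.
    """
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)        # raw attention scores
    dense = softmax(scores) @ tokens               # dense branch: all tokens

    keep = snr[None, :] >= thresh                  # sparse branch: drop low-SNR keys
    sparse_scores = np.where(keep, scores, -1e9)
    sparse = softmax(sparse_scores) @ tokens

    gate = (snr >= thresh)[:, None].astype(float)  # per-query fusion by SNR
    return gate * sparse + (1.0 - gate) * dense

rng = np.random.default_rng(0)
toks = rng.normal(size=(4, 8))
snr = np.array([0.5, 2.0, 1.5, 0.2])               # two low-SNR, two high-SNR tokens
out = snr_gated_attention(toks, snr)
print(out.shape)                                   # (4, 8)
```

In a real model the gate would typically be soft and learned, and the SNR map would be estimated from the image (e.g., local mean over local noise magnitude) rather than given directly.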
External IDs: dblp:journals/eaai/WangSSDW25