ILVS\(^2\)Net: Illumination-Driven Non-Local Visual State Space Unfolding Network for Low-Light Enhancement

ICLR 2026 Conference Submission2877 Authors

08 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Low-Light Image Enhancement, Group-Sparse Prior, Non-Local Visual State Space, Illumination Smoothing Operator, Retinex Unfolding
TL;DR: ILVS\(^2\)Net is a Retinex-inspired deep-unfolding model with NLVSS and ISP proximal priors that preserve illumination and reflectance structure, achieving state-of-the-art LLIE and boosting low-light detection, even under unsupervised training.
Abstract: In low-light image enhancement (LLIE), deep unfolding methods have achieved remarkable success by bridging physical models with learnable modules. However, existing approaches often overlook the structured sparsity of illumination, which leads to oversmoothing and unstable recovery. To address this, we propose ILVS\(^2\)Net, a deep Retinex unfolding network that explicitly integrates a group-sparse prior into each iteration. Specifically, we design two learnable proximal operator networks: a Non-Local Visual State Space (NLVSS) module that translates the grouping and shrinkage principle of group sparsity into a neural operator, effectively capturing long-range structural dependencies; and an Illumination Smoothing Operator (ISP) that enforces edge-preserving piecewise smoothness for coherent illumination estimation. By embedding these proximal operator networks into the unfolding process, our model achieves a stable closed-form update while dynamically adapting to complex illumination variations. Extensive experiments on five public benchmarks demonstrate that ILVS\(^2\)Net consistently outperforms state-of-the-art methods in both quantitative metrics and perceptual quality. The code and pretrained models will be released.
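To make the unfolding scheme described above concrete, the sketch below shows one hypothetical Retinex deep-unfolding stage that alternates a data-fidelity gradient step on \(y \approx L \odot R\) with learnable proximal updates for the reflectance and illumination components. The actual NLVSS and ISP architectures are not specified in this abstract, so small residual ConvNets stand in for them, and all names (`ProxNet`, `UnfoldingStage`, the learnable step size) are illustrative assumptions rather than the authors' implementation.

```python
# Minimal, hypothetical sketch of a Retinex deep-unfolding stage with
# learnable proximal operators, based only on the abstract's description.
# ProxNet is a placeholder for the NLVSS / ISP modules, which are not
# detailed here.
import torch
import torch.nn as nn


class ProxNet(nn.Module):
    """Placeholder learnable proximal operator (stand-in for NLVSS / ISP)."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual refinement of the current estimate


class UnfoldingStage(nn.Module):
    """One iteration: data-fidelity gradient steps + learned proximal updates."""
    def __init__(self):
        super().__init__()
        self.prox_R = ProxNet()  # stand-in for NLVSS (reflectance prior)
        self.prox_L = ProxNet()  # stand-in for ISP (illumination prior)
        self.step = nn.Parameter(torch.tensor(0.1))  # learnable step size

    def forward(self, y, L, R):
        # Gradient step on the Retinex fidelity term ||y - L * R||^2
        residual = L * R - y
        R = R - self.step * residual * L
        L = L - self.step * residual * R
        # Learned proximal operators enforce the structured priors
        R = self.prox_R(R)
        L = self.prox_L(L).clamp(min=1e-3)  # keep illumination positive
        return L, R


if __name__ == "__main__":
    y = torch.rand(1, 3, 64, 64)           # low-light observation
    L, R = y.clone(), torch.ones_like(y)    # simple initialization
    stages = nn.ModuleList(UnfoldingStage() for _ in range(3))
    for stage in stages:
        L, R = stage(y, L, R)
    enhanced = R                            # reflectance as the enhanced image
    print(enhanced.shape)
```

Under these assumptions, each stage keeps the closed-form flavor of the data term while delegating the group-sparse and smoothness priors to the learned proximal networks, which is the general pattern the abstract attributes to ILVS\(^2\)Net.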
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 2877