WienerFlow: Wiener-Adaptive Flow Matching for Perception and Fidelity Trade-off in Low-light Image Enhancement
Keywords: low-light image enhancement, flow matching
Abstract: Low-light image enhancement (LLIE) aims to restore visibility and faithful detail from severely under-exposed photographs. Existing learning-based approaches largely polarize into two camps: fidelity-driven models, optimized for distortion metrics (e.g., PSNR, SSIM), tend to produce over-smoothed results and lose detail in extreme darkness, whereas perception-driven generative models synthesize visually appealing textures at the risk of hallucination. We bridge this dichotomy with \textbf{WienerFlow}, a continuous-time flow-matching framework that unifies both objectives within a single linear transport path. Leveraging the theory of neural ordinary differential equations, we show that (i) a noise-free linear path originating from the low-light image corresponds to a fidelity-oriented trajectory, while (ii) a linear path initialized from Gaussian noise inherently favors perceptual richness. Under mild regularity assumptions, we prove that convex combinations of these two vector fields yield another valid linear flow, and we derive an optimal weight that maximizes perceptual realism subject to a fidelity budget. Extensive experiments on four LLIE benchmarks demonstrate that WienerFlow achieves state-of-the-art PSNR/SSIM scores while substantially improving perceptual quality, as measured by LPIPS and NIQE, without introducing spurious textures. Our findings provide both a theoretical lens and a practical recipe for balancing perception and distortion in low-light enhancement.
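As a reading aid, the following is a minimal sketch of the kind of argument the convex-combination claim suggests, assuming straight-line (rectified-flow) conditional paths with a shared target endpoint; the symbols $y$ (low-light input), $x$ (normal-light target), $\epsilon$ (Gaussian noise), and $\lambda \in [0,1]$ (mixing weight) are our notation for illustration, not necessarily the paper's:
% Sketch only: straight-line conditional paths and their constant target velocities.
\begin{align*}
  x_t^{\mathrm{fid}}  &= (1-t)\,y + t\,x,        & v^{\mathrm{fid}}  &= x - y,\\
  x_t^{\mathrm{per}}  &= (1-t)\,\epsilon + t\,x, & v^{\mathrm{per}}  &= x - \epsilon,\\
  \lambda\, v^{\mathrm{fid}} + (1-\lambda)\, v^{\mathrm{per}}
      &= x - \bigl(\lambda\, y + (1-\lambda)\,\epsilon\bigr).
\end{align*}
The mixed velocity is again constant along a straight path, now starting from the convex mixture $\lambda y + (1-\lambda)\epsilon$, so it still defines a valid linear flow toward $x$; tuning $\lambda$ then trades fidelity (larger $\lambda$) against perceptual richness (smaller $\lambda$).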
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 11746