Abstract: This paper proposes a Retinex-driven reinforced diffusion model for low-light image enhancement, termed Diff-Retinex++, to address the diverse degradations caused by low light. Our approach integrates a diffusion model with Retinex-driven restoration to achieve physically inspired generative enhancement, making it a pioneering effort in this direction. In detail, Diff-Retinex++ consists of two stages: the Denoising Diffusion Model (DDM) and the Retinex-Driven Mixture of Experts Model (RMoE). First, DDM treats low-light image enhancement as an image generation task, benefiting from the powerful generative capability of diffusion models. Second, we build Retinex theory into a plug-and-play supervised attention module, which leverages the latent features of the backbone together with knowledge distillation to learn Retinex rules, and further regulates these latent features through an attention mechanism. In this way, it couples Retinex decomposition and image enhancement from a new perspective, achieving mutual improvement. In addition, the Low-Light Mixture of Experts preserves, to the greatest extent, the vividness of the diffusion model and the fidelity of the Retinex-driven restoration. Ultimately, the iteration of DDM and RMoE realizes the Retinex-driven reinforced diffusion model. Extensive experiments on real-world low-light datasets qualitatively and quantitatively demonstrate the effectiveness, superiority, and generalization of the proposed method.
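Retinex theory, which the abstract builds on, models an observed image as the element-wise product of reflectance and illumination, I = R ⊙ L. A minimal, purely illustrative sketch of this classical decomposition follows; it is not the paper's learned, attention-based version, and the box-blur illumination estimate is our own simplifying assumption:

```python
import numpy as np

def box_blur(img, k=5):
    """Crude box blur (edge-replicated padding); stands in for a smooth
    illumination estimator in this toy example."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def retinex_decompose(image, k=5):
    """Toy Retinex split I = R * L: illumination L is a local mean,
    reflectance R = I / L (element-wise)."""
    image = image.astype(np.float64)
    L = box_blur(image, k) + 1e-6   # avoid division by zero
    R = image / L
    return R, L
```

By construction the split is exactly invertible: multiplying R and L element-wise recovers the input image. Learned methods such as the one described above replace the hand-crafted smoothing with a network that is trained to produce a physically plausible decomposition.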
External IDs: dblp:journals/pami/YiXZTM25