Abstract: Low-light image enhancement is a vital task that seeks to improve the visibility and clarity of images captured under suboptimal lighting conditions. Traditional methods rely on adjusting illumination levels but struggle with local lighting variations and complex scenes, often producing artifacts, color distortions, or unnatural appearances. Recent deep learning-based methods have improved performance through robust feature extraction but usually rely on single-domain datasets and therefore fail to generalize to the full range of lighting conditions encountered in real-world scenes. To address these generalization challenges, we propose MDANet, a Multi-Stage Domain Adaptation framework that sequentially performs lighting, focus, and contextual domain adaptation. First, MDANet employs an Adaptive Lighting Enhancement Module that combines a novel U-Net architecture with cross-domain contrastive learning to achieve lighting invariance. Second, the framework leverages large language models (LLMs) for context-aware adjustments and incorporates depth-map constraints within a Region-Focused Enhancement Module that selectively targets and processes key focal regions. Third, a Contextual Adaptation Module employs multitask meta-learning to ensure robustness and strong performance across a range of contextual scenarios. Experiments show that MDANet consistently outperforms both traditional and state-of-the-art methods, delivering improved image quality and better generalization across diverse low-light scenarios; it achieves state-of-the-art results on multiple datasets, including MUSIQ scores of 38.234 on LOL, 39.811 on SID, 39.391 on SICE, and 39.971 on RELLISUR, surpassing all compared methods in both perceptual and quantitative metrics.