Abstract: Low-light image enhancement (LLIE) is a critical low-level image processing task aimed at improving the visual quality of dimly lit images, thereby enhancing user experience and supporting downstream applications. Recent research has explored incorporating semantic information into LLIE models. However, these methods often suffer from high training costs due to the optimization of semantic-related loss functions, or from inadequate utilization of semantic features. This paper presents a novel model, Low-light Image Enhancement with Semantic Awareness (LIESA), designed to address these limitations. LIESA focuses on adjusting the illumination component of an image and employs a triple-branch encoder-decoder architecture. To effectively integrate semantic and reflectance features into the illumination feature hierarchy, we introduce multiple Lighting Optimization Modules (LOMs) at different stages of the illumination encoder-decoder branch. Within each LOM, two sub-modules, the Semantic Prior Fusion Module (SFM) and the Illumination Attention Module (IAM), fuse information from the illumination, reflectance, and semantic branches. This design incorporates high-level and low-level features across different resolutions, leading to enhancement that improves visibility while preserving visual naturalness. Experimental evaluations on ten public datasets demonstrate LIESA's superior performance, in terms of both visual quality and quantitative metrics, over state-of-the-art LLIE methods that incorporate semantic information. Ablation studies further corroborate the effectiveness of LIESA's key components.
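The LOM fusion described above can be sketched in a minimal, purely illustrative form. The actual SFM and IAM designs are defined in the paper; the 1x1-projection fusion, sigmoid attention, and residual refinement used here are assumptions standing in for learned layers:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lighting_optimization_module(illum, refl, sem, rng):
    """Hypothetical sketch of an LOM-style fusion step.

    illum, refl, sem: feature maps of shape (C, H, W) from the
    illumination, reflectance, and semantic branches respectively.
    SFM-like step: semantic priors are fused into the illumination
    features via concatenation and a 1x1 projection.
    IAM-like step: reflectance features produce a sigmoid attention
    map that gates the fused features before a residual update.
    """
    c = illum.shape[0]
    # SFM-like: concatenate semantic priors with illumination features,
    # then project back to C channels (stand-in for a learned 1x1 conv).
    fused_in = np.concatenate([illum, sem], axis=0)        # (2C, H, W)
    w_fuse = rng.standard_normal((c, 2 * c)) * 0.1
    fused = np.einsum('oc,chw->ohw', w_fuse, fused_in)     # (C, H, W)
    # IAM-like: derive per-pixel attention weights from reflectance.
    w_att = rng.standard_normal((c, c)) * 0.1
    att = sigmoid(np.einsum('oc,chw->ohw', w_att, refl))   # in (0, 1)
    # Residual refinement of the illumination features.
    return illum + att * fused

rng = np.random.default_rng(0)
illum = rng.standard_normal((8, 16, 16))
refl = rng.standard_normal((8, 16, 16))
sem = rng.standard_normal((8, 16, 16))
out = lighting_optimization_module(illum, refl, sem, rng)
print(out.shape)  # (8, 16, 16)
```

In a trained model the random projections would be learned convolutions, and one such module would sit at each resolution stage of the illumination encoder-decoder branch, so that semantic guidance reaches both coarse and fine features.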