Semantic-Aware Global and Local Fusion Model for Image Enhancement

Published: 01 Jan 2024, Last Modified: 13 Feb 2025. PRCV (9) 2024. License: CC BY-SA 4.0
Abstract: Existing image enhancement methods do not consider the importance of semantic information and ignore local feature consistency within semantic regions. In this paper, we incorporate semantic-aware information into lookup tables (LUTs) to enhance images in a targeted manner. We propose a novel semantic-aware global and local fusion enhancement model (SGLFM). First, a semantic-aware module (SAM) is proposed to obtain a feature representation of each semantic region in the image, approximating the manual local adjustments that would otherwise be required. Then, we propose a semantic feature fusion module (SFFM) to effectively fuse local semantic information with global features extracted from the original image, providing high-level semantic guidance. Finally, we learn LUTs that perform global, local, and semantic-dependent image transformations, ensuring smooth and natural enhancement within semantic regions via global and local consistency constraints. In contrast to other image enhancement methods, our method not only leverages consistency within the same semantic region but also offers a solution for global and local enhancement, aligned with user experience from a semantic standpoint. Extensive experiments demonstrate that the proposed method outperforms existing state-of-the-art methods both quantitatively and qualitatively.
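The abstract builds on LUT-based image transformation. As background, the sketch below shows how a learned 3D color LUT is typically applied to an image with trilinear interpolation; the function name, grid size, and NumPy implementation are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def apply_3d_lut(image, lut):
    """Apply a 3D color LUT to an RGB image via trilinear interpolation.

    image: float array in [0, 1], shape (H, W, 3)
    lut:   float array, shape (N, N, N, 3), mapping quantized RGB -> RGB
    (Illustrative sketch; not the paper's implementation.)
    """
    n = lut.shape[0]
    # Scale pixel values onto the LUT lattice coordinates.
    coords = image * (n - 1)
    lo = np.floor(coords).astype(int)
    hi = np.clip(lo + 1, 0, n - 1)
    frac = coords - lo

    out = np.zeros_like(image)
    # Trilinear interpolation: weighted sum over the 8 surrounding lattice points.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                r = hi[..., 0] if dr else lo[..., 0]
                g = hi[..., 1] if dg else lo[..., 1]
                b = hi[..., 2] if db else lo[..., 2]
                w = ((frac[..., 0] if dr else 1 - frac[..., 0])
                     * (frac[..., 1] if dg else 1 - frac[..., 1])
                     * (frac[..., 2] if db else 1 - frac[..., 2]))
                out += w[..., None] * lut[r, g, b]
    return out
```

A learned enhancement model would predict or blend the `lut` entries; an identity LUT (where entry `(i, j, k)` maps back to its own grid color) leaves the image unchanged, which is a useful sanity check.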