Semantic-Region Specific Lookup Tables for Image Enhancement Via Unpaired Learning

Published: 01 Jan 2024 · Last Modified: 15 May 2025 · ICIP 2024 · CC BY-SA 4.0
Abstract: This paper proposes a novel unpaired learning approach that enhances images by applying tailored strategies to different semantic regions. Leveraging the generative adversarial network (GAN) framework for unpaired learning, our method uses a cascaded 1D and 3D lookup table (LUT) structure as the generator. First, context-aware 1D LUTs redistribute the input image's tonal values so that it globally approaches the target. Then, category-specific 3D LUTs are fused according to the semantic category probability assigned to each pixel, and the fused 3D LUTs transform individual pixels to produce visually pleasing results. Furthermore, we introduce a semantic-attended multi-discriminator that provides more precise supervision during training. To train and evaluate our method, we curate a semantically categorized dataset. User studies and qualitative comparisons demonstrate that our model outperforms existing methods and aligns better with human aesthetics.
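As a rough illustration of the fusion step described in the abstract, the sketch below (assumed tensor shapes and helper names, not the authors' implementation) applies K category-specific 3D LUTs to an image with PyTorch's grid_sample and blends the K outputs per pixel using the semantic category probabilities. Because LUT application is linear in the LUT entries, blending the per-category outputs is equivalent to applying the probability-fused LUT at each pixel.

```python
import torch
import torch.nn.functional as F


def apply_3d_lut(img: torch.Tensor, lut: torch.Tensor) -> torch.Tensor:
    """Apply one 3D LUT to an RGB image via trilinear sampling.

    img: (B, 3, H, W) RGB values in [0, 1]
    lut: (3, D, D, D) indexed as lut[c_out, r, g, b]
    returns: (B, 3, H, W)
    """
    B, _, H, W = img.shape
    r, g, b = img[:, 0], img[:, 1], img[:, 2]            # each (B, H, W)
    # grid_sample expects normalized coords in [-1, 1], ordered (x, y, z) = (b, g, r)
    grid = torch.stack([b, g, r], dim=-1) * 2.0 - 1.0     # (B, H, W, 3)
    grid = grid.unsqueeze(1)                              # (B, 1, H, W, 3): D_out = 1
    lut = lut.unsqueeze(0).expand(B, -1, -1, -1, -1)      # (B, 3, D, D, D)
    out = F.grid_sample(lut, grid, mode="bilinear", align_corners=True)
    return out.squeeze(2)                                 # (B, 3, H, W)


def fuse_and_apply(img: torch.Tensor, luts: torch.Tensor, probs: torch.Tensor) -> torch.Tensor:
    """Blend per-category LUT outputs with per-pixel semantic probabilities.

    img:   (B, 3, H, W) image, assumed already remapped by the global 1D LUTs
    luts:  (K, 3, D, D, D) category-specific 3D LUTs
    probs: (B, K, H, W) semantic category probabilities (softmax over K)
    """
    outputs = torch.stack(
        [apply_3d_lut(img, luts[k]) for k in range(luts.shape[0])], dim=1
    )                                                     # (B, K, 3, H, W)
    # Weight each category's result by its pixel-wise probability and sum over K.
    return (probs.unsqueeze(2) * outputs).sum(dim=1)


if __name__ == "__main__":
    # Toy usage with random tensors (K = 4 categories, 17-point LUTs).
    img = torch.rand(2, 3, 64, 64)
    luts = torch.rand(4, 3, 17, 17, 17)
    probs = torch.softmax(torch.rand(2, 4, 64, 64), dim=1)
    print(fuse_and_apply(img, luts, probs).shape)         # torch.Size([2, 3, 64, 64])
```

In this sketch the 1D stage is assumed to have been applied beforehand; the paper's actual LUT parameterization, interpolation kernel, and fusion order may differ.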