Abstract: We introduce the Self-Exemplar Illumination Equalization Network, designed for effective portrait shadow removal. The core idea of our method is that a partially shadowed portrait can find ideal exemplars within its own non-shadowed facial regions. Rather than directly fusing two distinct classes of facial features, our approach uses the non-shadowed regions as an illumination indicator to equalize the shadowed regions, producing deshadowed results free of boundary-merging artifacts. Our network comprises cascaded Self-Exemplar Illumination Equalization Blocks (SExmBlocks), each containing two modules: a self-exemplar feature matching module and a feature-level illumination rectification module. The former identifies internal illumination exemplars and applies them to the shadowed areas, producing illumination-corrected features; the latter rectifies the shadow illumination by reapplying the illumination factors derived from these features to the input face. Applying this series of SExmBlocks to a shadowed portrait incrementally removes the shadows while preserving sharp, accurate facial details. We demonstrate the effectiveness of our method on two public portrait shadow datasets, where it surpasses existing state-of-the-art methods in both qualitative and quantitative evaluations.
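To make the cascaded-block design concrete, the following is a minimal PyTorch sketch of how a stack of SExmBlocks could be wired together: a self-exemplar matching step that lets shadowed pixels attend to non-shadowed pixels, followed by a rectification step that predicts illumination factors and reapplies them to the block's input features. All module internals, layer choices, names, and hyperparameters here (attention-based matching, gain/bias factors, 64 channels, 4 blocks) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfExemplarMatching(nn.Module):
    """Assumed matching module: shadowed-region queries attend to
    non-shadowed (exemplar) keys/values to borrow their illumination."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, feat, shadow_mask):
        # shadow_mask: (B, 1, H, W), 1 = shadowed pixel, 0 = lit pixel
        b, c, h, w = feat.shape
        q = self.query(feat).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.key(feat).flatten(2)                     # (B, C, HW)
        v = self.value(feat).flatten(2).transpose(1, 2)   # (B, HW, C)

        attn = torch.bmm(q, k) / (c ** 0.5)               # (B, HW, HW)
        # Restrict exemplars to non-shadowed positions by masking shadowed keys.
        lit = (1.0 - shadow_mask).flatten(2)               # (B, 1, HW)
        attn = attn.masked_fill(lit < 0.5, -1e9)
        attn = attn.softmax(dim=-1)
        matched = torch.bmm(attn, v).transpose(1, 2).reshape(b, c, h, w)
        # Replace features only inside the shadow; keep lit regions as-is.
        return feat * (1.0 - shadow_mask) + matched * shadow_mask


class IlluminationRectification(nn.Module):
    """Assumed rectification module: predicts per-pixel illumination
    gain/bias from the matched features and reapplies it to the input."""

    def __init__(self, channels):
        super().__init__()
        self.to_factors = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2 * channels, 3, padding=1),
        )

    def forward(self, matched_feat, input_feat):
        gain, bias = self.to_factors(matched_feat).chunk(2, dim=1)
        return input_feat * torch.sigmoid(gain) * 2.0 + bias


class SExmBlock(nn.Module):
    """One block: self-exemplar matching followed by illumination rectification."""

    def __init__(self, channels):
        super().__init__()
        self.matching = SelfExemplarMatching(channels)
        self.rectify = IlluminationRectification(channels)

    def forward(self, feat, shadow_mask):
        matched = self.matching(feat, shadow_mask)
        return self.rectify(matched, feat)


class SelfExemplarNet(nn.Module):
    """Cascade of SExmBlocks between a shallow encoder and decoder."""

    def __init__(self, channels=64, num_blocks=4):
        super().__init__()
        self.encode = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList(SExmBlock(channels) for _ in range(num_blocks))
        self.decode = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, image, shadow_mask):
        feat = self.encode(image)
        for block in self.blocks:
            # Resize the mask in case feature resolution differs from the input.
            mask = F.interpolate(shadow_mask, size=feat.shape[-2:], mode='nearest')
            feat = block(feat, mask)
        return self.decode(feat)


if __name__ == "__main__":
    net = SelfExemplarNet()
    img = torch.rand(1, 3, 64, 64)
    mask = (torch.rand(1, 1, 64, 64) > 0.7).float()
    print(net(img, mask).shape)  # torch.Size([1, 3, 64, 64])
```

In this sketch the shadows are removed progressively: each block borrows illumination statistics from lit pixels and rescales the shadowed features, so repeated application moves the shadowed regions toward the surrounding illumination without an explicit boundary-blending step.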
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Experience] Multimedia Applications
Relevance To Conference: This work is highly relevant to the conference's emphasis on theoretical and algorithmic solutions to problems across multimedia and related application fields. By introducing a novel method for removing shadows from portrait images, our research addresses a common challenge in photography and video, improving visual quality in real-time multimedia applications such as virtual meetings and mobile photography. Our work also aligns closely with the conference's "Multimedia in the Generative AI Era" theme, and we believe our findings will be of great interest to the multimedia community, providing actionable insights that advance current practice in the field.
Supplementary Material: zip
Submission Number: 2152