Reference-conditional Makeup-aware Discrimination for Face Image Beautification

Published: 01 Jan 2024, Last Modified: 28 Jan 2025 · ICME 2024 · CC BY-SA 4.0
Abstract: Facial makeup transfer aims to replicate a reference makeup style on a target face; existing methods mainly rely on a generic adversarial training process. In this work, we design a Reference-conditional Makeup-aware Discrimination approach (RcMD) to facilitate makeup transfer. Specifically, we perform region-wise semantic feature extraction from a reference makeup image and a makeup-free source image. A generator learns to capture and render the reference makeup by modulating the region-wise intermediate features. To ensure precise makeup on the target face, we incorporate a reference-conditional discrimination network that learns to measure the regional makeup consistency between the reference and synthesized images. To account for the discrepancy between the reference and target faces, an alignment module is trained to fuse the extracted features, conditioned on the reference style. Based on the resulting feature statistics, we perform regional real-versus-synthesized makeup discrimination to ensure precise makeup rendering. Extensive experiments demonstrate the effectiveness of our designed modules and the superior performance of RcMD in transferring diverse real-world facial makeup.
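The abstract does not specify the architecture of the reference-conditional, region-wise discriminator, so the sketch below is only an illustration of the general idea: compute per-region feature statistics for the reference and a candidate (real or synthesized) image, then score their makeup consistency region by region, conditioned on the reference. The module names, tensor shapes, and the choice of channel-wise mean/std statistics are assumptions for illustration and do not reproduce the authors' implementation.

```python
# A minimal PyTorch sketch of a reference-conditional, region-wise
# discriminator. All names, shapes, and the mean/std statistics are
# illustrative assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn


def regional_stats(features, masks, eps=1e-5):
    """Per-region, channel-wise mean and std of a feature map.

    features: (B, C, H, W) feature map
    masks:    (B, R, H, W) soft region masks (e.g. lips, eyes, skin)
    returns:  (B, R, 2*C) concatenated [mean, std] per region
    """
    f = features.unsqueeze(1)                 # (B, 1, C, H, W)
    m = masks.unsqueeze(2)                    # (B, R, 1, H, W)
    area = m.sum(dim=(-1, -2)) + eps          # (B, R, 1)
    mean = (f * m).sum(dim=(-1, -2)) / area   # (B, R, C)
    var = ((f - mean[..., None, None]) ** 2 * m).sum(dim=(-1, -2)) / area
    std = (var + eps).sqrt()
    return torch.cat([mean, std], dim=-1)     # (B, R, 2C)


class RefConditionalRegionDiscriminator(nn.Module):
    """Scores whether a candidate face carries the reference makeup,
    region by region, by comparing regional feature statistics."""

    def __init__(self, feat_channels=64, hidden=128):
        super().__init__()
        # 2*C stats from the reference region + 2*C from the candidate region
        self.mlp = nn.Sequential(
            nn.Linear(4 * feat_channels, hidden),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),             # real/fake logit per region
        )

    def forward(self, ref_feats, ref_masks, cand_feats, cand_masks):
        ref_stats = regional_stats(ref_feats, ref_masks)      # (B, R, 2C)
        cand_stats = regional_stats(cand_feats, cand_masks)   # (B, R, 2C)
        pair = torch.cat([ref_stats, cand_stats], dim=-1)     # (B, R, 4C)
        return self.mlp(pair).squeeze(-1)                     # (B, R) logits


if __name__ == "__main__":
    B, C, H, W, R = 2, 64, 32, 32, 3
    disc = RefConditionalRegionDiscriminator(feat_channels=C)
    ref_f, cand_f = torch.randn(B, C, H, W), torch.randn(B, C, H, W)
    ref_m = torch.softmax(torch.randn(B, R, H, W), dim=1)
    cand_m = torch.softmax(torch.randn(B, R, H, W), dim=1)
    print(disc(ref_f, ref_m, cand_f, cand_m).shape)           # torch.Size([2, 3])
```

In an adversarial setup, such a discriminator would receive (reference, reference-applied-to-target) pairs labeled real and (reference, synthesized) pairs labeled fake, so the generator is pushed toward regionally consistent makeup rather than merely photorealistic output; the exact pairing and loss used in RcMD are described in the paper itself.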