Feature Space Disentangling Based on Spatial Attention for Makeup Transfer

Published: 01 Jan 2022, Last Modified: 15 May 2023, ICIP 2022
Abstract: Makeup transfer aims at rendering the makeup style of a given reference image onto a source image. Most existing works have achieved promising progress through disentangled representation. However, these methods do not consider the spatial distribution of the makeup style, which inevitably changes makeup-irrelevant regions. To solve this problem, we introduce a novel feature space disentangling framework based on a spatial attention mechanism for makeup transfer. In particular, we first utilize a single encoder to extract all the features of the image. Then we propose a learnable spatial semantic classifier to classify the extracted features into makeup-specific and makeup-irrelevant features. Finally, we complete makeup transfer by swapping the classified features. Experiments demonstrate that the makeup-specific features precisely signify the spatial distribution of the makeup style. The superiority of our approach is further demonstrated by experiments showing that it produces promising visual results while keeping makeup-irrelevant regions unchanged.
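
The abstract describes a three-step pipeline: a shared encoder, a learnable spatial semantic classifier that splits features into makeup-specific and makeup-irrelevant parts, and a swap of the classified features before decoding. Below is a minimal PyTorch sketch of that idea; the module names, layer choices, and the soft-mask formulation are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class SpatialSemanticClassifier(nn.Module):
    """Predicts a per-location soft mask that separates encoder features into
    makeup-specific and makeup-irrelevant parts (assumed 1x1-conv head)."""

    def __init__(self, channels: int):
        super().__init__()
        self.head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Sigmoid gives a spatial attention mask in [0, 1].
        return torch.sigmoid(self.head(feat))


class MakeupTransferSketch(nn.Module):
    """Hypothetical end-to-end sketch: encode both images, classify features,
    swap the makeup-specific part, then decode."""

    def __init__(self, channels: int = 256):
        super().__init__()
        # Single shared encoder and a toy decoder (stand-ins for the real networks).
        self.encoder = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        self.classifier = SpatialSemanticClassifier(channels)
        self.decoder = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, source: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        f_src, f_ref = self.encoder(source), self.encoder(reference)
        m_src, m_ref = self.classifier(f_src), self.classifier(f_ref)
        # Swap: keep the source's makeup-irrelevant features and take the
        # makeup-specific features from the reference.
        f_mix = (1 - m_src) * f_src + m_ref * f_ref
        return self.decoder(f_mix)


# Shape check with random tensors standing in for source and reference images.
src, ref = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
out = MakeupTransferSketch()(src, ref)
print(out.shape)  # torch.Size([1, 3, 256, 256])
```

Because the swap is gated by spatial masks rather than applied globally, regions the classifier marks as makeup-irrelevant pass through from the source largely untouched, which matches the stated goal of leaving those regions unchanged.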