KDDGAN: Knowledge-Guided Explicit Feature Disentanglement for Facial Attribute Editing

Published: 01 Jan 2024 · Last Modified: 19 Feb 2025 · IEEE Trans. Consumer Electron. 2024 · CC BY-SA 4.0
Abstract: Facial attribute editing is a popular direction in face generation that aims to modify target attributes in a face image while keeping unedited attributes unchanged. However, generative models tend to affect the unedited attributes when editing multiple facial attributes simultaneously. Current approaches that concatenate prior knowledge with hidden features remain data-driven. Because features are coupled in data-driven models, they produce highly entangled implicit semantics that are incomprehensible to humans. Moreover, the multi-attribute boundaries of these implicit semantics are ambiguous, making the editing process difficult to control effectively. In this paper, we propose a knowledge-guided explicit feature disentanglement network that is compatible with human cognition, leveraging a classification method with prior knowledge to encode features. Specifically, we select 13 facial attribute labels for a comprehensive and explicit representation of this task and design a knowledge-guided feature disentanglement module that transforms implicit feature representations into explicit feature semantics. We also construct a semantic space in which facial attributes can be manipulated independently. In addition, our proposed model can be combined with existing facial attribute editing models to obtain multiple variant models. Extensive experiments validate the proposed model, and the variant models outperform the benchmark models on facial attribute editing.
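The abstract's core idea — an explicit semantic space over 13 attribute labels in which each attribute can be toggled independently while the rest stay fixed — can be sketched at the interface level. The sketch below is a minimal illustration, not the authors' code: the attribute names are an assumed CelebA-style subset (the paper does not list its 13 labels here), and the difference vector stands in for the explicit edit signal a knowledge-guided generator would consume.

```python
# Hypothetical sketch of explicit, independently controllable attribute editing.
# ATTRS is an ASSUMED CelebA-style subset of 13 labels; the paper's actual
# label set is not given in the abstract.
ATTRS = ["Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair",
         "Bushy_Eyebrows", "Eyeglasses", "Male", "Mouth_Slightly_Open",
         "Mustache", "No_Beard", "Pale_Skin", "Young"]

def edit_attributes(src_labels, edits):
    """Apply only the requested attribute changes; every other attribute is
    left untouched, mirroring the disentanglement goal: edits in the explicit
    semantic space must not leak into unedited attributes."""
    tgt = dict(src_labels)
    for name, value in edits.items():
        if name not in tgt:
            raise KeyError(f"unknown attribute: {name}")
        tgt[name] = value
    # Per-attribute difference vector: nonzero only where an edit was requested.
    # A knowledge-guided generator could take this as its explicit edit signal.
    diff = {a: tgt[a] - src_labels[a] for a in ATTRS}
    return tgt, diff

# Example: add eyeglasses to a young face; "Young" must remain unchanged.
src = {a: 0 for a in ATTRS}
src["Young"] = 1
tgt, diff = edit_attributes(src, {"Eyeglasses": 1})
```

In this toy interface the independence property is trivial by construction; the paper's contribution is making the learned generator respect the same property, so that the nonzero entries of the edit vector are the only semantics that change in the output image.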