Abstract: Face frontalization, which aims to synthesize a frontal view from a face image in arbitrary pose, plays a crucial role in enhancing downstream tasks such as face recognition, expression analysis, and lip reading. However, previous methods often rely on paired or annotated datasets, which are costly and impractical to obtain, particularly in unconstrained real-world scenarios. To overcome this limitation, we propose a novel framework that leverages pretrained generative and encoder-based projection models for efficient frontal face synthesis. Our method extracts identity-aware segmentation embeddings and manipulates the corresponding segmentation masks to generate multiple realistic frontal views, selecting the optimal output based on an identity loss. The proposed approach demonstrates strong robustness even in extreme cases with severely limited facial input, while requiring only minimal fine-tuning, thereby offering both effectiveness and computational efficiency.
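As a rough illustration of the candidate-selection step described in the abstract, the sketch below generates several frontal candidates by perturbing a segmentation mask and keeps the one with the lowest identity loss (here assumed to be cosine distance between face embeddings). All names (`generator`, `id_encoder`, the mask jitter) are hypothetical placeholders for the paper's pretrained components, not the authors' actual code.

```python
import torch
import torch.nn.functional as F

def select_frontal_view(source_img, frontal_mask, generator, id_encoder,
                        n_candidates=8, jitter=0.05):
    """Pick the frontal candidate whose identity embedding is closest
    to that of the source image.

    generator  -- hypothetical pretrained generator: mask -> frontal image
    id_encoder -- hypothetical pretrained identity encoder: image -> embedding
    """
    with torch.no_grad():
        src_id = F.normalize(id_encoder(source_img), dim=-1)

        best_img, best_loss = None, float("inf")
        for _ in range(n_candidates):
            # Hypothetical mask manipulation: jitter the frontalized
            # segmentation mask to diversify the generated candidates.
            noisy_mask = frontal_mask + jitter * torch.randn_like(frontal_mask)
            candidate = generator(noisy_mask)

            cand_id = F.normalize(id_encoder(candidate), dim=-1)
            # Identity loss as cosine distance between embeddings.
            id_loss = 1.0 - (src_id * cand_id).sum(dim=-1).mean().item()

            if id_loss < best_loss:
                best_img, best_loss = candidate, id_loss

    return best_img, best_loss
```

In this reading, the generator is frozen and only the mask input varies, which matches the abstract's claim that the approach needs minimal fine-tuning.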
External IDs: dblp:conf/avss/LeeLLCC25