Multi-attributed Face Synthesis for One-Shot Deep Face Recognition

Published: 01 Jan 2023, Last Modified: 05 Apr 2025, IW-FCV 2023, CC BY-SA 4.0
Abstract: Few traits are as unique and central to an individual's identity as the face. With rapid growth in computational power and memory, and recent advances in deep learning models, images have become more important than ever for pattern recognition. Several deep face recognition models train deep networks on very large public datasets such as MS-Celeb-1M [8] and VGGFace2 [5], achieving state-of-the-art performance on mainstream applications. However, it remains particularly challenging to gather a dataset that offers strict control over desired attributes such as hair color, skin tone, makeup, and age. As a solution, we devise a one-shot face recognition system that uses synthetic data to recognize a face even when its facial attributes are altered. This work proposes and investigates the feasibility of creating a multi-attributed synthetic face dataset from a single one-shot image to train a deep face recognition model. It demonstrates how the image synthesis capability of deep learning methods can construct a face dataset with multiple critical attributes to enable efficient face recognition. In this study, deep learning features are combined with a conventional one-shot learning framework. We conduct experiments on LFW and on the multi-attributed synthetic data; these experiments highlight insights that can guide future work on one-shot face recognition.
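The pipeline implied by the abstract (synthesize attribute-varied faces from one enrollment image, embed all faces with a deep network, then match a query by similarity to the synthetic gallery) can be sketched as below. This is a minimal, hypothetical illustration, not the paper's implementation: `synthesize_variants` and `embed` are placeholder stubs standing in for the attribute-editing generator and the deep face-recognition embedder, and all names and shapes are assumptions.

```python
import numpy as np

EMBED_DIM = 128
ATTRIBUTES = ["hair_color", "skin_tone", "makeup", "age"]  # assumed attribute set


def synthesize_variants(image: np.ndarray, rng: np.random.Generator) -> list:
    """Placeholder for an attribute-editing generator: returns one altered
    copy of the input per controlled attribute (here, just perturbed pixels)."""
    return [image + 0.01 * rng.standard_normal(image.shape) for _ in ATTRIBUTES]


def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for a deep face embedding network; returns a unit vector.
    A real system would use a pretrained recognition CNN instead."""
    seed = abs(hash(image.tobytes())) % (2**32)
    v = np.random.default_rng(seed).standard_normal(EMBED_DIM)
    return v / np.linalg.norm(v)


def enroll(one_shot_image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Build the synthetic gallery for one identity from a single image."""
    faces = [one_shot_image] + synthesize_variants(one_shot_image, rng)
    return np.stack([embed(f) for f in faces])  # shape: (num_faces, EMBED_DIM)


def identify(query: np.ndarray, galleries: dict) -> str:
    """Return the enrolled identity whose gallery best matches the query
    (maximum cosine similarity over the synthetic gallery)."""
    q = embed(query)
    scores = {name: float((g @ q).max()) for name, g in galleries.items()}
    return max(scores, key=scores.get)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    galleries = {f"person_{i}": enroll(rng.random((112, 112, 3)), rng) for i in range(3)}
    print(identify(rng.random((112, 112, 3)), galleries))
```

Under these assumptions, the one-shot aspect is that each identity is enrolled from a single image, with the synthetic attribute variants serving as the extra gallery samples a real training set would normally provide.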
