Abstract: We propose a method for modelling groups of face images from the same identity. The model is trained to infer a distribution over a latent identity space given a small set of “training data”. One can then sample from that latent representation to produce new images of the same identity. We demonstrate that the model learns disentangled representations, separating identity factors from image-specific factors. We also perform generative classification over identities to assess the model's feasibility for few-shot face recognition.
TL;DR: Generating new face images from an identity inferred from a small set of images.
Keywords: variational autoencoders, few-shot learning, meta-learning, generative models, MSCeleb
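The core idea in the abstract, inferring a single identity distribution from a set of images and then sampling from it, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-image features, linear maps (`W_mu`, `W_logvar`), and dimensions are all hypothetical stand-ins for a learned encoder network.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_identity(images, W_mu, W_logvar):
    """Infer a Gaussian over the identity latent from a set of images.

    Mean-pooling across the set gives a permutation-invariant summary,
    which hypothetical linear maps turn into the Gaussian's parameters.
    """
    pooled = images.mean(axis=0)       # (feat_dim,) set-level summary
    mu = W_mu @ pooled                 # (latent_dim,) posterior mean
    logvar = W_logvar @ pooled         # (latent_dim,) posterior log-variance
    return mu, logvar

def sample_latent(mu, logvar, rng):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

feat_dim, latent_dim = 8, 4
W_mu = rng.standard_normal((latent_dim, feat_dim)) * 0.1
W_logvar = rng.standard_normal((latent_dim, feat_dim)) * 0.1

# Five "images" of one identity; random vectors stand in for CNN features.
images = rng.standard_normal((5, feat_dim))
mu, logvar = encode_identity(images, W_mu, W_logvar)
z_identity = sample_latent(mu, logvar, rng)
print(z_identity.shape)  # (4,)
```

A decoder conditioned on `z_identity` plus a per-image latent would then generate distinct images sharing the inferred identity; that decoder is omitted here.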