FeedFace: Efficient Inference-based Face Personalization via Diffusion Models

Published: 19 Mar 2024, Last Modified: 07 May 2024
Venue: Tiny Papers @ ICLR 2024
License: CC BY 4.0
Keywords: Personalization, Diffusion Model, Face Generation
TL;DR: Our work extends text-to-image diffusion models to support high-quality face-conditioned generation; we demonstrate the efficacy and efficiency of our approach.
Abstract: We introduce FeedFace, a novel inference-based method that augments text-to-image diffusion models with face-conditioned generation. Trained on a carefully curated and annotated dataset of diverse human faces, FeedFace requires no additional training for new facial conditions at generation time. Within seconds, our method creates images that are not only faithful to the textual description but also exhibit strong facial fidelity. Our model supports multiple faces as input conditions, leveraging the extra facial information to improve facial consistency. A key strength of our method is its efficiency: our experiments show that FeedFace produces face-conditioned samples of comparable quality to leading industry methods while using only 0.4% of their data volume and fewer than 5% of the training samples seen by those methods.
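The abstract does not describe FeedFace's architecture, so the following is only an illustrative sketch of how inference-time face conditioning is commonly implemented (in the style of adapter-based image conditioning): embeddings from a pretrained face encoder are projected into extra conditioning tokens for the diffusion model's cross-attention, and multiple reference faces are pooled by averaging. All names (`FaceConditioner`, `face_dim`, `num_tokens`), dimensions, and the averaging strategy are assumptions for illustration, not the paper's actual method.

```python
import torch
import torch.nn as nn

class FaceConditioner(nn.Module):
    """Hypothetical module: projects one or more face embeddings into
    conditioning tokens appended to a diffusion model's text tokens."""

    def __init__(self, face_dim: int = 512, cond_dim: int = 768, num_tokens: int = 4):
        super().__init__()
        self.num_tokens = num_tokens
        self.cond_dim = cond_dim
        # One face embedding is expanded into a few cross-attention tokens.
        self.proj = nn.Linear(face_dim, cond_dim * num_tokens)

    def forward(self, face_embeds: torch.Tensor) -> torch.Tensor:
        # face_embeds: (batch, num_faces, face_dim). Averaging over several
        # reference faces is one simple way to pool the "extra facial
        # information" the abstract mentions; FeedFace's pooling is unspecified.
        pooled = face_embeds.mean(dim=1)                        # (batch, face_dim)
        tokens = self.proj(pooled)                              # (batch, cond_dim * num_tokens)
        return tokens.view(-1, self.num_tokens, self.cond_dim)  # (batch, num_tokens, cond_dim)

# Usage: append the face tokens to the text-encoder output, then pass the
# combined sequence to the denoiser's cross-attention. No per-identity
# fine-tuning is needed at generation time, matching the inference-based setup.
conditioner = FaceConditioner()
face_embeds = torch.randn(1, 3, 512)   # e.g. three reference crops of one identity
text_tokens = torch.randn(1, 77, 768)  # e.g. CLIP text-encoder output
cond = torch.cat([text_tokens, conditioner(face_embeds)], dim=1)  # (1, 81, 768)
```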
Supplementary Material: zip
Submission Number: 118