Generating Realistic Facial Expressions through Conditional Cycle-Consistent Generative Adversarial Networks (CCycleGAN)

04 Jul 2019 (modified: 05 May 2023) · RIIAA 2019 Conference Submission
TL;DR: A novel approach (CCycleGAN) for conditional image-to-image translation in the absence of paired examples.
Keywords: Generative Adversarial Networks, CCycleGAN, Facial Expressions Synthesis
Abstract: Generative adversarial networks have been widely explored for generating photorealistic images, but their capabilities for multimodal image-to-image translation in a conditional generative setting remain largely unexplored. Moreover, applying GANs to facial expression generation conditioned on the emotion of the expression, and in the absence of paired examples, is to our knowledge almost a green field. The novelty of this study thus lies in experimenting with the synthesis of conditional facial expressions: we present a novel approach (CCycleGAN) for learning to translate an image from a domain (e.g. the face images of a person) conditioned on a given emotion (e.g. joy) to the same domain conditioned on a different emotion (e.g. surprise), in the absence of paired examples. Our goal is to learn a mapping such that the distribution of generated images is indistinguishable from the distribution of real images, using an adversarial loss and a cycle consistency loss. Qualitative results are presented where paired training data does not exist, together with a quantitative justification of the optimal hyperparameters. The code for our model is available at https://github.com/gtesei/ccyclegan.
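To make the conditional cycle-consistency idea from the abstract concrete, the sketch below shows a toy emotion-conditioned generator and the cycle reconstruction term (translate to the target emotion, then back to the source emotion). This is a minimal illustration, not the authors' implementation (see the linked repository): the network architecture, the number of emotion classes, the lambda weight, and the omission of the adversarial and identity terms are all assumptions made here for brevity.

```python
# Minimal sketch of an emotion-conditioned cycle-consistency loss (illustrative only;
# not the ccyclegan repository code). Architectures and constants are placeholders.
import torch
import torch.nn as nn

NUM_EMOTIONS = 7   # assumed number of facial-expression classes
IMG_CHANNELS = 3

class CondGenerator(nn.Module):
    """Toy generator G(x, e): maps an image x to the target emotion e."""
    def __init__(self):
        super().__init__()
        # The one-hot emotion label is broadcast to a spatial map and
        # concatenated to the image channels before convolution.
        self.net = nn.Sequential(
            nn.Conv2d(IMG_CHANNELS + NUM_EMOTIONS, 32, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, IMG_CHANNELS, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x, emotion_onehot):
        b, _, h, w = x.shape
        cond = emotion_onehot.view(b, NUM_EMOTIONS, 1, 1).expand(b, NUM_EMOTIONS, h, w)
        return self.net(torch.cat([x, cond], dim=1))

def cycle_loss(G, x, src_emotion, tgt_emotion, lambda_cyc=10.0):
    """Cycle-consistency term only (the adversarial term is omitted here):
    x -> G(x, tgt) -> G(G(x, tgt), src) should reconstruct x."""
    fake = G(x, tgt_emotion)
    recon = G(fake, src_emotion)
    return lambda_cyc * nn.functional.l1_loss(recon, x)

# Usage example with random tensors standing in for face images and emotion labels.
if __name__ == "__main__":
    G = CondGenerator()
    x = torch.randn(4, IMG_CHANNELS, 64, 64)
    src = nn.functional.one_hot(torch.randint(0, NUM_EMOTIONS, (4,)), NUM_EMOTIONS).float()
    tgt = nn.functional.one_hot(torch.randint(0, NUM_EMOTIONS, (4,)), NUM_EMOTIONS).float()
    loss = cycle_loss(G, x, src, tgt)
    loss.backward()
    print(loss.item())
```

In a full training loop this reconstruction term would be combined with an adversarial loss from an emotion-aware discriminator, so that generated faces are both realistic and consistent with the requested emotion.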