XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings

15 Feb 2018 (modified: 21 Apr 2024) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Style transfer usually refers to the task of applying color and texture information from a specific style image to a given content image while preserving the structure of the latter. Here we tackle the more generic problem of semantic style transfer: given two unpaired collections of images, we aim to learn a mapping between the corpus-level style of each collection, while preserving semantic content shared across the two domains. We introduce XGAN ("Cross-GAN"), a dual adversarial autoencoder, which captures a shared representation of the common domain semantic content in an unsupervised way, while jointly learning the domain-to-domain image translations in both directions. We exploit ideas from the domain adaptation literature and define a semantic consistency loss which encourages the model to preserve semantics in the learned embedding space. We report promising qualitative results for the task of face-to-cartoon translation. The cartoon dataset we collected for this purpose will also be released as a new benchmark for semantic style transfer.
TL;DR: XGAN is an unsupervised model for feature-level image-to-image translation applied to semantic style transfer problems such as the face-to-cartoon task, for which we introduce a new dataset.
Keywords: unsupervised, gan, domain adaptation, style transfer, semantic, image translation, dataset
Code: [4 community implementations (Papers with Code)](https://paperswithcode.com/paper/?openreview=rkWN3g-AZ)
Data: [VGG Face](https://paperswithcode.com/dataset/vgg-face-1)
Community Implementations: [8 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:1711.05139/code)
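
The core mechanism named in the abstract is the semantic consistency loss: an image is embedded into the shared representation, decoded into the other domain, re-embedded, and the two embeddings are encouraged to agree. The snippet below is a minimal PyTorch sketch of that idea; the `Encoder`/`Decoder` modules, shapes, and the MSE penalty are illustrative assumptions, not the authors' implementation, and the full model additionally trains reconstruction, adversarial, and domain-adversarial objectives not shown here.

```python
# Minimal sketch of a semantic-consistency loss for unpaired image translation.
# All module definitions and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy convolutional encoder mapping an image to a shared embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Toy decoder mapping an embedding back to a 32x32 image."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.fc = nn.Linear(embed_dim, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, z):
        h = self.fc(z).view(-1, 64, 8, 8)
        return self.net(h)

# Domain-specific encoders/decoders sharing one embedding space (A: faces, B: cartoons).
enc_A, enc_B = Encoder(), Encoder()
dec_A, dec_B = Decoder(), Decoder()

def semantic_consistency_loss(x_A):
    """Embed a domain-A image, translate it to domain B, re-embed the
    translation, and penalize drift in the shared embedding space.
    (In practice the symmetric B->A term would be added as well.)"""
    z_A = enc_A(x_A)          # shared semantic embedding of the source image
    x_AB = dec_B(z_A)         # translated image in domain B
    z_AB = enc_B(x_AB)        # embedding of the translation
    return nn.functional.mse_loss(z_AB, z_A)

# Dummy batch of domain-A images, just to show the loss is differentiable end to end.
x_A = torch.randn(4, 3, 32, 32)
loss = semantic_consistency_loss(x_A)
loss.backward()
```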