ComboGAN: Unrestricted Scalability for Image Domain Translation

09 Feb 2018 (modified: 09 Feb 2018) ICLR 2018 Workshop Submission
Abstract: This past year alone has seen unprecedented leaps in the area of learning-based image translation, most notably the unsupervised CycleGAN model of Zhu et al. But experiments so far have been tailored to merely two domains at a time, and scaling beyond that would require a quadratic number of models to be trained. With two-domain models taking days to train on current hardware, the number of domains quickly becomes limited by training time. In this paper, we propose a multi-component image translation model and training scheme which scales linearly, both in resource consumption and time required, with the number of domains.
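The scaling claim in the abstract can be illustrated with a back-of-the-envelope count. A minimal sketch, assuming the pairwise baseline trains one two-domain model per unordered pair of domains, while the proposed approach trains one per-domain component (the function names below are illustrative, not from the paper):

```python
def pairwise_model_count(n: int) -> int:
    # CycleGAN-style baseline: one two-domain model per unordered
    # pair of domains -> n choose 2, i.e. quadratic growth.
    return n * (n - 1) // 2

def linear_component_count(n: int) -> int:
    # Multi-component scheme as described: one trainable
    # component per domain -> linear growth.
    return n

# Compare how the two schemes grow with the number of domains.
for n in (2, 5, 10, 20):
    print(f"{n} domains: pairwise={pairwise_model_count(n)}, "
          f"linear={linear_component_count(n)}")
```

At 2 domains the two schemes coincide (one model either way); at 10 domains the pairwise baseline already needs 45 two-domain models, while the linear scheme needs 10 components.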
TL;DR: We devise an image-translation model like CycleGAN, but one that scales linearly in cost and resources beyond two domains.
Keywords: computer vision, generative, adversarial, image translation, style