ComboGAN: Unrestricted Scalability for Image Domain Translation
Asha Anoosheh, Eirikur Agustsson, Radu Timofte
Feb 09, 2018 · ICLR 2018 Workshop Submission
Abstract: This past year alone has seen unprecedented leaps in the area of learning-based image translation, namely the unsupervised model CycleGAN by Zhu et al. But experiments so far have been tailored to merely two domains at a time, and scaling them to more would require a quadratic number of models to be trained. With two-domain models taking days to train on current hardware, the number of domains quickly becomes limited by training time. In this paper, we propose a multi-component image translation model and training scheme which scales linearly - both in resource consumption and time required - with the number of domains.
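The quadratic-versus-linear scaling claim can be made concrete with a quick count. The sketch below is illustrative only: it assumes pairwise CycleGAN-style training requires one model per unordered domain pair, while a ComboGAN-style design keeps one encoder and one decoder per domain (the exact component layout is the paper's contribution, not shown here).

```python
# Illustrative model counts for translating among n image domains.
# Assumption: pairwise training needs one model per unordered domain pair
# (quadratic), while a per-domain encoder/decoder design grows linearly.

def pairwise_models(n: int) -> int:
    """CycleGAN-style: one model per unordered pair of domains."""
    return n * (n - 1) // 2

def per_domain_components(n: int) -> int:
    """ComboGAN-style: one encoder plus one decoder per domain."""
    return 2 * n

for n in (2, 5, 10):
    print(f"{n} domains: {pairwise_models(n)} pairwise models "
          f"vs {per_domain_components(n)} per-domain components")
```

At 10 domains the pairwise scheme already needs 45 separately trained models, while the per-domain scheme needs only 20 components, which is the gap the abstract refers to.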
TL;DR: We devise an image-translation model like CycleGAN, but one that scales linearly in cost and resources beyond two domains.