Unsupervised Multi-Domain Image Translation with Domain-Specific Encoders/Decoders

Published: 01 Jan 2018, Last Modified: 16 May 2025 · ICPR 2018 · CC BY-SA 4.0
Abstract: Unsupervised image-to-image translation has seen spectacular progress recently. However, most approaches train one model for a single pair of domains, which incurs a heavy burden in training time and model parameters when $n\ (n > 2)$ domains must be freely translated into one another in a general setting. To address this problem, we propose a novel and unified framework named Domain-Bank, which consists of a globally shared auto-encoder and $n$ domain-specific encoders/decoders, under the assumption that all domains can be projected into a universal shared latent space. Thus, we not only reduce the number of model parameters but also achieve a substantial reduction in training time. Beyond this efficiency, we show comparable (or even better) image translation results than state-of-the-art methods on various challenging unsupervised image translation tasks, including face image translation and painting style translation. We also apply the proposed framework to the domain adaptation task and achieve state-of-the-art performance on digit benchmark datasets.
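The parameter saving described in the abstract comes from sharing one latent space: a pairwise approach needs $O(n^2)$ translators, while Domain-Bank needs only $n$ encoder/decoder pairs. The following minimal sketch (an illustrative assumption, not the paper's actual networks; the class name `DomainBank` and the linear maps are hypothetical stand-ins for the convolutional encoders/decoders) shows how any source domain can be translated to any target domain by routing through the shared latent space:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, IMG_DIM = 8, 16  # toy dimensions for illustration

class DomainBank:
    """Hypothetical sketch of the Domain-Bank idea: n domain-specific
    encoder/decoder pairs (here random linear maps standing in for
    neural networks) around one universal shared latent space."""

    def __init__(self, n_domains):
        # One encoder and one decoder per domain: O(n) components,
        # yet all n*(n-1) translation directions are covered.
        self.encoders = [rng.normal(size=(LATENT_DIM, IMG_DIM))
                         for _ in range(n_domains)]
        self.decoders = [rng.normal(size=(IMG_DIM, LATENT_DIM))
                         for _ in range(n_domains)]

    def translate(self, x, src, dst):
        z = self.encoders[src] @ x   # project source image into shared latent space
        return self.decoders[dst] @ z  # decode with the target domain's decoder

bank = DomainBank(n_domains=3)
x = rng.normal(size=IMG_DIM)       # a "image" from domain 0
y = bank.translate(x, src=0, dst=2)  # translated into domain 2
```

With $n = 3$ domains this sketch holds 3 encoders and 3 decoders, whereas a model per ordered domain pair would need 6 separate translators; the gap widens quadratically as $n$ grows.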