Abstract: Unsupervised image translation aims to learn the transformation from a source domain to a target domain given unpaired training data. Several state-of-the-art works have produced impressive results in GAN-based unsupervised image-to-image translation. However, these methods fail to capture strong geometric changes between domains, or produce unsatisfactory results on complex scenes, compared with local texture-mapping tasks such as style transfer. Recently, SAGAN [35] showed that a self-attention network produces better results than a convolution-based GAN. However, the effectiveness of the self-attention network in unsupervised image-to-image translation tasks has not been verified. In this paper, we propose unsupervised image-to-image translation with self-attention networks, in which long-range dependency helps not only to capture strong geometric changes but also to generate details using cues from all feature locations. In experiments, we qualitatively and quantitatively show the superiority of the proposed method over existing state-of-the-art methods for unsupervised image-to-image translation. The source code and our results are online: https://github.com/itsss/img2img_sa and http://itsc.kr/2019/01/24/2019_img2img_sa
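To make the mechanism concrete, the following is a minimal sketch (not the authors' released code; see the linked repository for that) of a SAGAN-style self-attention block, assuming PyTorch. Each spatial location computes attention weights over all other locations, which is what lets the generator draw on cues from the entire feature map rather than a local convolutional neighborhood; the channel-reduction factor of 8 follows SAGAN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over a (B, C, H, W) feature map."""

    def __init__(self, in_channels):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces.
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learned residual weight, initialized to 0 so training starts
        # from the purely convolutional behavior.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)  # (B, N, C//8)
        k = self.key(x).view(b, -1, n)                     # (B, C//8, N)
        # (B, N, N): row i holds location i's attention over all N locations.
        attn = F.softmax(torch.bmm(q, k), dim=-1)
        v = self.value(x).view(b, -1, n)                   # (B, C, N)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x  # residual connection

# Usage: drop the block into a generator or discriminator between layers.
feat = torch.randn(2, 64, 32, 32)
out = SelfAttention(64)(feat)  # same shape as the input, (2, 64, 32, 32)
```

The zero-initialized gamma is the standard SAGAN design choice: the network first relies on local convolutional features and gradually learns to weight the non-local attention output.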