GMM-UNIT: Unsupervised Multi-Domain and Multi-Modal Image-to-Image Translation via Attribute Gaussian Mixture Modelling
TL;DR: GMM-UNIT is an image-to-image translation model that maps an image to multiple domains in a stochastic fashion.
Abstract: Unsupervised image-to-image translation aims to learn a mapping between several visual domains from unpaired training data. Recent studies have shown remarkable success in multi-domain image-to-image translation, but they suffer from two main limitations: they are either built from several two-domain mappings that must be learned independently, and/or they generate low-diversity results, a phenomenon known as mode collapse. To overcome these limitations, we propose GMM-UNIT, a method based on a content-attribute disentangled representation where the attribute space is fitted with a Gaussian mixture model (GMM). Each GMM component represents a domain, and this simple assumption has two prominent advantages. First, the dimension of the attribute space does not grow linearly with the number of domains, as is the case in the literature. Second, the continuous domain encoding allows for interpolation between domains and for extrapolation to unseen domains. Additionally, we show how GMM-UNIT can be constrained to recover several existing methods in the literature, meaning that GMM-UNIT is a unifying framework for unsupervised image-to-image translation.
Keywords: GANs, image-to-image translation, multi-domain image translation, multi-modal image translation, Gaussian Mixture Model
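To make the attribute-GMM idea from the abstract concrete, here is a minimal sketch (not the authors' implementation): each domain is one Gaussian component in a shared attribute space, sampling from a component yields stochastic, domain-specific attribute codes (the multi-modal aspect), and blending component means yields codes for interpolated or unseen domains (the continuous domain encoding). All dimensions, parameter values, and function names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

num_domains = 3   # K GMM components, one per domain (assumed value)
attr_dim = 8      # attribute-code dimension; fixed, does not grow with K

# Per-domain component means; in GMM-UNIT these would be learned.
mus = rng.normal(size=(num_domains, attr_dim))
sigma = 0.5       # shared isotropic std (simplifying assumption)

def sample_attribute(domain: int) -> np.ndarray:
    """Draw a stochastic attribute code from a domain's Gaussian component."""
    return mus[domain] + sigma * rng.normal(size=attr_dim)

def interpolate_domains(a: int, b: int, t: float) -> np.ndarray:
    """Attribute code for a continuous blend between two domains."""
    mu = (1.0 - t) * mus[a] + t * mus[b]
    return mu + sigma * rng.normal(size=attr_dim)

z_cat = sample_attribute(0)             # resampling here varies the output style
z_mix = interpolate_domains(0, 1, 0.5)  # code "halfway" between domains 0 and 1
print(z_cat.shape, z_mix.shape)
```

In the full model, such an attribute code would be combined with a domain-invariant content code by the generator; the sketch only illustrates why the attribute space stays low-dimensional and supports interpolation as the number of domains grows.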