DNA-GAN: Learning Disentangled Representations from Multi-Attribute Images

15 Feb 2018 (modified: 10 Feb 2022) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Disentangling factors of variation has long been a challenging problem in representation learning. Existing algorithms suffer from many limitations, such as unpredictable disentangling factors, poor quality of images generated from encodings, and loss of identity information. In this paper, we propose a supervised algorithm called DNA-GAN that disentangles different attributes of images. The latent representations of images are DNA-like, in which each individual piece represents an independent factor of variation. By annihilating the recessive piece and swapping a certain piece of two latent representations, we obtain two new representations that can be decoded into images. To obtain realistic images as well as disentangled representations, we introduce a discriminator for adversarial training. Experiments on the Multi-PIE and CelebA datasets demonstrate the effectiveness of our method and its advantage in overcoming the limitations of existing methods.
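
To make the crossover described in the abstract concrete, here is a minimal sketch, not the authors' implementation, of the DNA-like operation on two latent codes: the recessive attribute piece of one code is annihilated (zeroed out), and the corresponding piece is swapped between the two codes to produce two new representations for decoding. The function name, piece layout, and sizes below are illustrative assumptions.

```python
import numpy as np

def dna_crossover(z_dominant, z_recessive, i, n_pieces):
    """Illustrative DNA-GAN-style crossover on two latent codes.

    z_dominant  : latent code of an image that has attribute i (dominant piece).
    z_recessive : latent code of an image that lacks attribute i (recessive piece).
    i           : index of the attribute piece to annihilate and swap.
    n_pieces    : number of equal-length pieces each latent code is split into.
    """
    a = list(np.split(z_dominant.copy(), n_pieces))
    b = list(np.split(z_recessive.copy(), n_pieces))

    # Annihilate the recessive piece: replace it with zeros so the decoder
    # cannot rely on information hidden in the unused attribute slot.
    b[i] = np.zeros_like(b[i])

    # Swap the i-th piece: the zeroed piece goes into the dominant code
    # (attribute removed), the dominant piece goes into the recessive code
    # (attribute added). Both new codes are then decoded into images.
    a_new, b_new = a.copy(), b.copy()
    a_new[i], b_new[i] = b[i], a[i]

    return np.concatenate(a_new), np.concatenate(b_new)

# Example with 4 attribute pieces of length 8 each (hypothetical sizes).
z_a = np.random.randn(32)
z_b = np.random.randn(32)
z_a_swapped, z_b_swapped = dna_crossover(z_a, z_b, i=2, n_pieces=4)
```

In the full method, the two swapped codes are passed through the decoder, and a discriminator provides the adversarial signal that keeps the generated images realistic while the pieces stay disentangled.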
TL;DR: We propose a supervised algorithm, DNA-GAN, that disentangles multiple attributes of images.
Keywords: disentangled representations, multi-attribute images, generative adversarial networks
Code: [![github](/images/github_icon.svg) Prinsphield/DNA-GAN](https://github.com/Prinsphield/DNA-GAN)
Data: [CelebA](https://paperswithcode.com/dataset/celeba), [Multi-PIE](https://paperswithcode.com/dataset/multi-pie)
