SCGAN: Disentangled Representation Learning by Adding Similarity Constraint on Generative Adversarial Nets

Published: 01 Jan 2019, Last Modified: 11 Nov 2024. IEEE Access, 2019. License: CC BY-SA 4.0.
Abstract: We propose a novel generative adversarial network called the similarity constraint generative adversarial network (SCGAN), which is capable of learning disentangled representations in a completely unsupervised manner. Inspired by the smoothness assumption and by our assumption about the relationship between the content of images and their representations, we design an effective similarity constraint. SCGAN disentangles interpretable representations by imposing this similarity constraint between the conditions and the synthetic images. In effect, the similarity constraint acts as a tutor, instructing the generator network to understand how representations differ according to their conditions. SCGAN successfully distinguishes different representations on a number of datasets. Specifically, SCGAN captures digit type on MNIST, clothing type on Fashion-MNIST, lighting on SVHN, and object size on CIFAR-10. On the CelebA dataset, SCGAN captures more semantic representations, e.g., pose, emotion, and hair style. Experiments show that SCGAN is comparable with InfoGAN (another generative adversarial network that disentangles interpretable representations on these datasets in an unsupervised manner) on disentangled representation learning. Code is available at https://github.com/gauss-clb/SCGAN.
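The abstract does not state the exact form of the similarity constraint; for intuition only, here is a minimal NumPy sketch of one plausible pairwise formulation, in which the similarity structure of the condition codes is encouraged to match the similarity structure of the corresponding synthetic images. The function names and the mean-squared-error form are illustrative assumptions, not the authors' definition.

```python
import numpy as np

def pairwise_cosine(x):
    # Cosine-similarity matrix for a batch of row vectors.
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

def similarity_constraint_loss(codes, images):
    """Hypothetical similarity constraint (illustrative, not the
    paper's formula): penalize mismatch between the pairwise
    similarity of the conditions and that of the generated images."""
    s_codes = pairwise_cosine(codes)
    s_images = pairwise_cosine(images.reshape(len(images), -1))
    return float(np.mean((s_codes - s_images) ** 2))
```

Under this sketch, the loss is zero when images generated from orthogonal conditions are themselves mutually orthogonal, and grows as the two similarity structures diverge; a term of this kind would be added to the generator's objective during training.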