Challenges in Disentangling Independent Factors of Variation

Anonymous

Nov 03, 2017 (modified: Dec 13, 2017) · ICLR 2018 Conference Blind Submission
  • Abstract: We study the problem of building models that disentangle independent factors of variation. Such models encode features that can efficiently be used for classification and to transfer attributes between different images in image synthesis. As training data we use a weakly labeled set, where labels indicate which single factor has changed between two data samples, though the relative value of the change is unknown. This labeling is of particular interest as it may be readily available without annotation costs. We introduce an autoencoder model and train it through constraints on image pairs and triplets. We analyze the role of feature dimensionality and adversarial training both theoretically and experimentally. We formally prove the existence of the reference ambiguity, which is inherently present in the disentangling task when weakly labeled data is used. The numerical value of a factor has a different meaning in different reference frames. When the reference depends on other factors, transferring that factor becomes ambiguous. We demonstrate experimentally that the proposed model can successfully transfer attributes on several datasets, but also show cases where the reference ambiguity occurs.
  • TL;DR: A mostly theoretical paper that describes the challenges in disentangling factors of variation, using autoencoders and GANs.
  • Keywords: disentangling, factors, attribute, transfer, autoencoder, GAN
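To make the attribute-transfer idea in the abstract concrete, here is a minimal sketch of the feature-swapping mechanism such an autoencoder would use at inference time. All names, dimensions, and the linear encoder/decoder are illustrative assumptions, not the paper's actual architecture; the point is only that the latent code is partitioned per factor, and transferring an attribute means swapping the corresponding chunk between two encoded images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, chosen for illustration only).
D, H = 8, 6          # input dimension, latent dimension
H1 = H // 2          # latent chunk assumed to encode factor 1; the rest, factor 2

# Linear maps stand in for the paper's encoder/decoder networks.
W_enc = rng.standard_normal((H, D)) * 0.1
W_dec = rng.standard_normal((D, H)) * 0.1

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

def swap_factor1(x_a, x_b):
    """Transfer factor 1 from x_b into x_a.

    With the weak labels described in the abstract we only know *which*
    factor differs within a pair; training (not shown) would have to
    force the first H1 latent dimensions to capture that factor, so
    swapping them transfers the attribute.
    """
    z_a, z_b = encode(x_a), encode(x_b)
    z_mix = np.concatenate([z_b[:H1], z_a[H1:]])
    return decode(z_mix)

x_a, x_b = rng.standard_normal(D), rng.standard_normal(D)
y = swap_factor1(x_a, x_b)
print(y.shape)  # (8,)
```

Swapping an image with itself leaves the code unchanged, so `swap_factor1(x, x)` reduces to the plain reconstruction `decode(encode(x))`. The reference ambiguity the paper proves is exactly the failure mode this sketch ignores: nothing here guarantees the first `H1` dimensions mean the same thing for every input.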
