Challenges in Disentangling Independent Factors of Variation

09 Feb 2018 (modified: 21 Apr 2024) · ICLR 2018 Workshop Submission · Readers: Everyone
Keywords: disentangling, factors, attribute, transfer, autoencoder, GAN
TL;DR: A mostly theoretical paper describing the challenges in disentangling factors of variation using autoencoders and GANs.
Abstract: We study the problem of building models that disentangle independent factors of variation. Such models encode features that can be used efficiently for classification and for transferring attributes between images in image synthesis. As data we use a weakly labeled training set, where labels indicate which single factor has changed between two data samples, although the relative value of the change is unknown. This labeling is of particular interest as it may be readily available without annotation costs. We introduce an autoencoder model and train it through constraints on image pairs and triplets. We show the role of feature dimensionality and adversarial training theoretically and experimentally. We formally prove the existence of the reference ambiguity, which is inherently present in the disentangling task when weakly labeled data is used: the numerical value of a factor has a different meaning in different reference frames, and when the reference depends on other factors, transferring that factor becomes ambiguous. We demonstrate experimentally that the proposed model can successfully transfer attributes on several datasets, but also show cases where the reference ambiguity occurs.
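
The abstract only summarizes the training setup, so below is a minimal sketch (not the authors' released code) of how this kind of weak supervision can be used: an encoder splits each image's features into a chunk for the varying factor and a chunk for the shared content, and a pair that differs in a single factor constrains the shared chunk via feature swapping. All module names, image sizes, and feature dimensions are illustrative assumptions, and the adversarial (GAN) constraint on triplets mentioned in the abstract is omitted for brevity.

```python
# Sketch of a disentangling autoencoder trained on weakly labeled image pairs.
# Assumption: 3x64x64 images; x1 and x2 differ in exactly one factor of variation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Maps an image to a (varying, common) feature pair."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.feat_dim = feat_dim
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, feat_dim),
        )

    def forward(self, x):
        z = self.net(x)
        half = self.feat_dim // 2
        # first half: the factor that changes within a pair; second half: everything else
        return z[:, :half], z[:, half:]


class Decoder(nn.Module):
    """Reconstructs an image from a concatenated (varying, common) feature pair."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z_var, z_common):
        return self.net(torch.cat([z_var, z_common], dim=1))


def pair_losses(enc, dec, x1, x2):
    """x1, x2: a weakly labeled pair in which only one factor differs.

    Because the pair shares all remaining factors, swapping the common chunk
    between the two images should still reconstruct each original image.
    """
    v1, c1 = enc(x1)
    v2, c2 = enc(x2)
    rec = F.mse_loss(dec(v1, c1), x1) + F.mse_loss(dec(v2, c2), x2)
    swap = F.mse_loss(dec(v1, c2), x1) + F.mse_loss(dec(v2, c1), x2)
    return rec, swap


if __name__ == "__main__":
    enc, dec = Encoder(), Decoder()
    x1, x2 = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
    rec, swap = pair_losses(enc, dec, x1, x2)
    (rec + swap).backward()  # one joint step over encoder and decoder parameters
```

In the full method described in the abstract, a discriminator trained adversarially on triplets would additionally force composites built from features of unrelated images to look realistic, and the dimensionality of the two feature chunks is one of the quantities whose role the paper analyzes.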
Code: [2 community implementations](https://paperswithcode.com/paper/?openreview=SkmiegW0b)
Data: [MNIST](https://paperswithcode.com/dataset/mnist), [ShapeNet](https://paperswithcode.com/dataset/shapenet), [Sprites](https://paperswithcode.com/dataset/sprites)
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:1711.02245/code)