Disentangling Factors of Variations Using Few Labels

Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Mar 14, 2019 · ICLR 2019 Workshop LLD · Blind Submission · Readers: everyone
  • Abstract: Learning disentangled representations is considered a promising research direction in representation learning. Recently, Locatello et al. (2018) demonstrated that unsupervised learning of disentangled representations is theoretically impossible and that state-of-the-art methods, while nominally unsupervised, require access to annotated examples to select good model runs. Yet, if we assume access to labels for model selection, it is not clear why we should not use them directly for training. In this paper, we first show that model selection using few labels is feasible. Then, as a proof of concept, we consider a simple semi-supervised method that directly uses the labels for training. We train more than 7000 models and empirically validate that collecting a handful of potentially noisy labels is sufficient to learn disentangled representations.
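The semi-supervised idea described in the abstract can be sketched in a few lines. The snippet below is a hedged illustration, not the paper's exact objective: it augments an unsupervised loss (e.g., a beta-VAE term) with a supervised penalty computed only on the few labeled examples; the function name, the squared-error regularizer, and the weight `gamma` are all illustrative assumptions.

```python
# Hedged sketch of a semi-supervised disentanglement objective:
# an unsupervised loss plus a supervised term on a handful of
# labeled examples. Names and the exact regularizer are assumptions.

def semi_supervised_loss(unsup_loss, labeled_latents, labeled_factors, gamma=1.0):
    """Combine an unsupervised objective with a supervised penalty.

    unsup_loss: scalar unsupervised loss (e.g. a beta-VAE objective).
    labeled_latents: latent codes for the few labeled examples.
    labeled_factors: ground-truth factor values for those examples.
    gamma: weight of the supervised term (hypothetical hyperparameter).
    """
    # Mean squared error aligning latent dimensions with factor labels;
    # this is one possible choice of supervised regularizer.
    sup = sum(
        (z - y) ** 2
        for zs, ys in zip(labeled_latents, labeled_factors)
        for z, y in zip(zs, ys)
    ) / max(len(labeled_latents), 1)
    return unsup_loss + gamma * sup

# Example: with no labeled examples mismatched, only the unsupervised
# loss remains; a few noisy labels add a small alignment pressure.
loss = semi_supervised_loss(1.0, [[0.5, 0.0]], [[0.5, 0.1]], gamma=1.0)
```

In a real training loop, the supervised term would be evaluated on the small labeled subset each step (or epoch), while the unsupervised term uses the full unlabeled dataset.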