Overcoming the Disentanglement vs Reconstruction Trade-off via Jacobian Supervision

Published: 21 Dec 2018, Last Modified: 05 May 2023. ICLR 2019 Conference Blind Submission.
Abstract: A major challenge in learning image representations is disentangling the factors of variation underlying the image formation. This is typically achieved with an autoencoder architecture where a subset of the latent variables is constrained to correspond to specific factors, and the rest are treated as nuisance variables. This approach has an important drawback: as the dimension of the nuisance variables is increased, image reconstruction improves, but the decoder gains the flexibility to ignore the specified factors, losing the ability to condition the output on them. In this work, we propose to overcome this trade-off by progressively growing the dimension of the latent code, while constraining the Jacobian of the output image with respect to the disentangled variables to remain the same. As a result, the obtained models are effective at both disentangling and reconstruction. We demonstrate the applicability of this method in both unsupervised and supervised scenarios for learning disentangled representations. In a facial attribute manipulation task, we obtain high-quality image generation while smoothly controlling dozens of attributes with a single model. This is an order of magnitude more disentangled factors than state-of-the-art methods handle, while obtaining visually similar or superior results and avoiding adversarial training.
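The Jacobian constraint described in the abstract can be illustrated with a small numerical sketch. This is a hypothetical toy example, not the paper's implementation: a linear "teacher" decoder with only the disentangled latent dimensions, a "student" decoder whose latent code has been grown with extra nuisance dimensions, and a finite-difference penalty that ties the student's output sensitivity to the disentangled variables to the teacher's. All names (`decoder`, `jacobian_wrt_disentangled`, the dimensions) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder(latent, W):
    # Toy linear decoder: "image" = W @ latent.
    return W @ latent

def jacobian_wrt_disentangled(W, latent, k, eps=1e-5):
    # Finite-difference Jacobian of the decoder output with respect
    # to the first k (disentangled) latent variables only.
    base = decoder(latent, W)
    cols = []
    for i in range(k):
        pert = latent.copy()
        pert[i] += eps
        cols.append((decoder(pert, W) - base) / eps)
    return np.stack(cols, axis=1)  # shape (out_dim, k)

# Teacher: latent code holds only the k disentangled dimensions.
k, extra, out_dim = 3, 5, 8
W_teacher = rng.normal(size=(out_dim, k))
# Student: latent code grown by `extra` nuisance dimensions.
W_student = rng.normal(size=(out_dim, k + extra))

z = rng.normal(size=k + extra)
J_teacher = jacobian_wrt_disentangled(W_teacher, z[:k], k)
J_student = jacobian_wrt_disentangled(W_student, z, k)

# Jacobian supervision term: penalize any drift of the student's
# sensitivity to the disentangled variables away from the teacher's.
jacobian_loss = np.sum((J_student - J_teacher) ** 2)
```

In training, a term like `jacobian_loss` would be added to the reconstruction loss of the grown model, so that extra nuisance capacity improves reconstruction without letting the decoder ignore the disentangled factors.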
Keywords: disentangling, autoencoders, jacobian, face manipulation
TL;DR: A method for learning image representations that are good for both disentangling factors of variation and obtaining faithful reconstructions.
Code: [![github](/images/github_icon.svg) jlezama/disentangling-jacobian](https://github.com/jlezama/disentangling-jacobian)
Data: [CelebA](https://paperswithcode.com/dataset/celeba)