Keywords: Disentanglement learning, variational auto-encoder, curriculum learning, generative adversarial networks
TL;DR: We propose an adversarial variational auto-encoder that alleviates the issue of hyperparameter selection for disentanglement learning, and we introduce a new unsupervised disentanglement metric.
Abstract: The use of well-disentangled representations offers many advantages for downstream tasks, e.g., increased sample efficiency or better interpretability.
However, the quality of disentangled representations is often highly dependent on the choice of dataset-specific hyperparameters, in particular the regularization strength.
To address this issue, we introduce DAVA, a novel training procedure for variational auto-encoders that completely alleviates the problem of hyperparameter selection.
We compare DAVA to models trained with optimal hyperparameters and find that, without any hyperparameter tuning, DAVA is competitive on a diverse range of commonly used datasets.
Underlying DAVA, we discover a necessary condition for unsupervised disentanglement, which we call PIPE.
We demonstrate the ability of PIPE to positively predict the performance of downstream models in abstract reasoning.
We also thoroughly investigate PIPE's correlations with existing supervised and unsupervised metrics. The code is available at https://github.com/besterma/dava.
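The abstract does not spell out the training procedure, so the following is only a minimal PyTorch sketch of the general idea of replacing a hand-tuned regularization strength with an adversarial signal. The architecture sizes, losses, discriminator, and the rule adapting the KL weight `beta` are illustrative assumptions, not DAVA's actual algorithm; see the linked repository for the real implementation.

```python
# Illustrative sketch only: a VAE whose KL regularization strength is adapted
# from an adversarial signal instead of being hand-tuned. All details below
# (dimensions, losses, the beta update rule) are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=10, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

vae = VAE()
disc = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))
opt_v = torch.optim.Adam(vae.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
beta = 1.0  # regularization strength, adapted below instead of hand-tuned

for step in range(1000):
    x = torch.rand(64, 784)  # placeholder batch; replace with real data
    x_hat, mu, logvar = vae(x)

    # Discriminator step: distinguish real inputs from reconstructions.
    d_real, d_fake = disc(x), disc(x_hat.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # VAE step: reconstruction + beta-weighted KL + fooling the discriminator.
    rec = F.mse_loss(x_hat, x, reduction='sum') / x.size(0)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
    d_out = disc(x_hat)
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    loss_v = rec + beta * kl + adv
    opt_v.zero_grad(); loss_v.backward(); opt_v.step()

    # Assumed adaptation rule: raise beta while reconstructions still fool the
    # discriminator, lower it when reconstruction quality visibly lags.
    with torch.no_grad():
        fooled = torch.sigmoid(disc(x_hat)).mean().item()
        beta *= 1.01 if fooled > 0.5 else 0.99
```

The point of the sketch is only the control loop at the bottom: the discriminator's judgment of reconstruction quality drives the regularization strength, so no dataset-specific value of `beta` needs to be chosen up front.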
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Unsupervised and Self-supervised learning
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/dava-disentangling-adversarial-variational/code)