Keywords: disentanglement, hierarchical representation learning, representation learning, hierarchical dataset, hierarchical variational autoencoders
TL;DR: Hierarchical VAE models disentangle hierarchical factors of variation better than current single-layer VAE models.
Abstract: Disentanglement is hypothesized to be beneficial for a number of downstream tasks. However, a common assumption in learning disentangled representations is that the data generative factors are statistically independent. As current methods are almost solely evaluated on toy datasets where this ideal assumption holds, we investigate their performance in hierarchical settings, a relevant characteristic of real-world data. In this work, we introduce Boxhead, a dataset with hierarchically structured ground-truth generative factors. We use this novel dataset to evaluate the performance of state-of-the-art autoencoder-based disentanglement models and observe that hierarchical models generally outperform single-layer VAEs in terms of disentanglement of hierarchically arranged factors.
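To make the contrast with single-layer VAEs concrete, below is a minimal sketch (not the paper's code) of a two-level hierarchical VAE, where a coarse latent z2 conditions a fine latent z1, mirroring the kind of hierarchically structured factors the abstract describes. All layer sizes, the Bernoulli pixel likelihood, and the class name are illustrative assumptions.

```python
# Minimal illustrative sketch of a two-level hierarchical VAE (assumed architecture,
# not the models evaluated in the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalVAE(nn.Module):
    def __init__(self, x_dim=64 * 64 * 3, z1_dim=10, z2_dim=4, h_dim=256):
        super().__init__()
        # Inference path: q(z2 | x) and q(z1 | x, z2)
        self.enc_x = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.q_z2 = nn.Linear(h_dim, 2 * z2_dim)           # mean and log-variance of z2
        self.q_z1 = nn.Linear(h_dim + z2_dim, 2 * z1_dim)  # mean and log-variance of z1
        # Generative path: p(z2) = N(0, I), p(z1 | z2), p(x | z1, z2)
        self.p_z1 = nn.Linear(z2_dim, 2 * z1_dim)
        self.dec = nn.Sequential(nn.Linear(z1_dim + z2_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    @staticmethod
    def reparameterize(mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        h = self.enc_x(x)
        mu2, logvar2 = self.q_z2(h).chunk(2, dim=-1)
        z2 = self.reparameterize(mu2, logvar2)
        mu1, logvar1 = self.q_z1(torch.cat([h, z2], dim=-1)).chunk(2, dim=-1)
        z1 = self.reparameterize(mu1, logvar1)
        x_logits = self.dec(torch.cat([z1, z2], dim=-1))

        # Reconstruction term (Bernoulli likelihood on pixels in [0, 1]; an assumption)
        recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
        # KL(q(z2 | x) || N(0, I))
        kl_z2 = -0.5 * torch.sum(1 + logvar2 - mu2.pow(2) - logvar2.exp())
        # KL(q(z1 | x, z2) || p(z1 | z2)) between two diagonal Gaussians
        p_mu1, p_logvar1 = self.p_z1(z2).chunk(2, dim=-1)
        kl_z1 = 0.5 * torch.sum(
            p_logvar1 - logvar1
            + (logvar1.exp() + (mu1 - p_mu1).pow(2)) / p_logvar1.exp()
            - 1.0
        )
        return recon + kl_z1 + kl_z2  # negative ELBO to minimize
```

A single-layer VAE corresponds to dropping z2 and the p(z1 | z2) prior, which is exactly the structural difference the TL;DR refers to.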
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/boxhead-a-dataset-for-learning-hierarchical/code)