Structure by Architecture: Disentangled Representations without Regularization

Published: 09 Jul 2022 · Last Modified: 05 May 2023 · CRL@UAI 2022 Poster
Keywords: Hierarchical Representation Learning, Disentanglement, Structured Representation Learning, Deep Autoencoders
TL;DR: We improve the performance and interpretability of unsupervised representations by structuring the model's architecture to resemble a structural causal model and by improving the sampling procedure.
Abstract: We study the problem of self-supervised structured representation learning using autoencoders for downstream tasks such as generative modeling. Unlike most methods, which rely on matching an arbitrary, relatively unstructured prior distribution for sampling, we propose a sampling technique that relies solely on the independence of latent variables, thereby avoiding the trade-off between reconstruction quality and generative performance inherent to VAEs. We design a novel autoencoder architecture capable of learning a structured representation without the need for aggressive regularization. Our structural decoders learn a hierarchy of latent variables, akin to structural causal models, thereby ordering the information without any additional regularization. We demonstrate how these models learn a representation that improves results on a variety of downstream tasks, including generation, disentanglement, and extrapolation, across several challenging natural image datasets.
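To make the two ideas in the abstract concrete, here is a minimal sketch of (1) a decoder that injects each latent variable at a different depth, imposing an ordered hierarchy loosely reminiscent of a structural causal model, and (2) a sampler that relies only on the independence of the latents by resampling each coordinate from its empirical marginal rather than matching a fixed prior. All names (`StructuralDecoder`, `sample_independent`) and design details are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class StructuralDecoder(nn.Module):
    """Hypothetical sketch of a hierarchical ("structural") decoder.

    Latent z_i is injected at depth i, so earlier latents can influence
    everything decoded after them; this ordering is an assumed reading
    of the paper's architecture, not its exact design.
    """

    def __init__(self, latent_dim: int = 8, hidden_dim: int = 64, out_dim: int = 784):
        super().__init__()
        self.latent_dim = latent_dim
        # One small stage per latent; stage i consumes the running
        # hidden state concatenated with the scalar latent z_i.
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_dim + 1, hidden_dim), nn.ReLU())
            for _ in range(latent_dim)
        )
        self.readout = nn.Linear(hidden_dim, out_dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Start from a zero hidden state of width hidden_dim.
        h = z.new_zeros(z.size(0), self.stages[0][0].in_features - 1)
        for i, stage in enumerate(self.stages):
            # Inject latent i at depth i, ordering the information
            # each variable can carry without extra regularization.
            h = stage(torch.cat([h, z[:, i : i + 1]], dim=1))
        return self.readout(h)


def sample_independent(z_train: torch.Tensor, n: int) -> torch.Tensor:
    """Sampling that uses only the independence of the latents:
    each coordinate is resampled from its empirical marginal over the
    training codes, with no fixed prior (an assumed reading of the
    paper's sampling technique)."""
    idx = torch.randint(z_train.size(0), (n, z_train.size(1)))
    return torch.gather(z_train, 0, idx)


# Usage: decode fresh samples drawn from the per-coordinate marginals.
dec = StructuralDecoder()
z_train = torch.randn(1000, 8)  # stand-in for encoded training latents
x_new = dec(sample_independent(z_train, 16))  # (16, 784)
```

Under this reading, generation quality no longer hinges on the aggregate posterior matching an arbitrary prior: as long as the learned latents are independent, recombining their empirical marginals yields valid codes.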