Keywords: VAE, variational autoencoder, beta-VAE, disentanglement
TL;DR: Disentanglement in VAEs is close to being understood: prior work shows that the training objective promotes orthogonality between columns of the decoder's Jacobian. We show how that linear independence in turn leads to identifying statistically independent factors of the data.
Abstract: Disentanglement, or identifying statistically independent salient factors of the data, is of interest in many areas of machine learning and statistics, with the potential to improve the generation of synthetic data with controlled properties, robust classification of features, parsimonious encoding, and understanding of the generative process behind the data. Disentanglement arises in various generative paradigms, including Variational Autoencoders (VAEs), GANs, and diffusion models, and particular progress has recently been made in understanding the former. That line of research shows that the choice of diagonal posterior covariance matrices in a VAE promotes mutual orthogonality between columns of the decoder's Jacobian. We continue this thread to show how such *linear* independence translates to *statistical* independence, completing the chain in understanding how the VAE objective leads to the identification of independent components of the data, i.e. disentanglement.
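As a minimal illustration of the property the abstract refers to (not code from the paper), the sketch below numerically checks mutual orthogonality between columns of a decoder's Jacobian at a latent point: the columns are orthogonal exactly when $J^\top J$ is diagonal. The `decoder` here is a hypothetical stand-in; in practice it would be a trained VAE decoder.

```python
# Sketch: measuring orthogonality of a decoder's Jacobian columns.
# The decoder architecture and dimensions below are illustrative only.
import torch
from torch.autograd.functional import jacobian

latent_dim, data_dim = 8, 784

# Stand-in decoder; replace with a trained VAE decoder in practice.
decoder = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 256),
    torch.nn.Tanh(),
    torch.nn.Linear(256, data_dim),
)

z = torch.randn(latent_dim)                  # a latent code
J = jacobian(lambda z_: decoder(z_), z)      # shape: (data_dim, latent_dim)

# Columns J[:, i] are mutually orthogonal iff J^T J is diagonal;
# report the relative magnitude of its off-diagonal entries.
G = J.T @ J
off_diag = G - torch.diag(torch.diag(G))
score = (off_diag.norm() / G.norm()).item()
print(f"off-diagonal fraction of J^T J: {score:.3f}")  # ~0 => orthogonal columns
```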
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7355