Embrace the Gap: VAEs Perform Independent Mechanism Analysis

Published: 26 Jul 2022, Last Modified: 22 Oct 2023, TPM 2022
Keywords: variational autoencoder, ELBO, representation learning, independent mechanism analysis, variational inference
TL;DR: The gap between ELBO and log-likelihood helps variational autoencoders with near-deterministic decoders learn useful representations by performing independent mechanism analysis.
Abstract: Despite the widespread use of variational autoencoders (VAEs), the consequences of optimizing the evidence lower bound (ELBO) as opposed to the exact log-likelihood remain poorly understood. We shed light on this matter by studying nonlinear VAEs in the limit of near-deterministic decoders. We first prove that, in this regime, the optimal encoder approximately inverts the decoder---a commonly used but unproven conjecture---which we call self-consistency. Leveraging self-consistency, we show that the ELBO converges to a regularized log-likelihood rather than to the exact one. The regularization term allows VAEs to perform what has been termed independent mechanism analysis (IMA): it adds an inductive bias towards decoders with column-orthogonal Jacobians. This connection to IMA allows us to precisely characterize the gap w.r.t. the log-likelihood in near-deterministic VAEs. Furthermore, it elucidates an unanticipated benefit of ELBO optimization for nonlinear representation learning as, unlike the unregularized log-likelihood, the IMA-regularized objective promotes identification of the ground-truth latent factors.
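The inductive bias towards column-orthogonal Jacobians can be made concrete via the IMA contrast from the independent mechanism analysis literature: the gap between the sum of log column norms of a Jacobian and half the log-determinant of its Gram matrix, which is non-negative (by Hadamard's inequality) and zero exactly when the columns are orthogonal. The sketch below is an illustration of that quantity, not code from the paper; the function name `ima_contrast` is ours.

```python
import numpy as np

def ima_contrast(J):
    """IMA contrast of a decoder Jacobian J (columns = latent directions).

    C_IMA(J) = sum_i log ||J[:, i]|| - (1/2) log det(J^T J).
    Non-negative by Hadamard's inequality; zero iff the columns of J
    are mutually orthogonal.
    """
    col_norms = np.linalg.norm(J, axis=0)
    _, logdet_gram = np.linalg.slogdet(J.T @ J)  # Gram matrix is PSD
    return np.sum(np.log(col_norms)) - 0.5 * logdet_gram

rng = np.random.default_rng(0)
J = rng.standard_normal((5, 3))  # generic Jacobian: strictly positive contrast
Q, _ = np.linalg.qr(J)           # orthonormal columns: contrast vanishes
print(ima_contrast(J), ima_contrast(Q))
```

Under the paper's result, minimizing the ELBO in the near-deterministic regime implicitly penalizes this quantity, favoring decoders whose Jacobian columns are orthogonal.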
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2206.02416/code)