Identifiability of deep generative models under mixture priors without auxiliary information

Published: 09 Jul 2022, Last Modified: 05 May 2023. CRL@UAI 2022 Poster.
Keywords: identifiable representation learning, latent variable models, variational autoencoders, deep generative models, statistical theory
TL;DR: We prove identifiability of deep generative models that are universal approximators and are the decoders of VAEs used in practice.
Abstract: We prove identifiability of a broad class of deep latent variable models that (a) have universal approximation capabilities and (b) are the decoders of variational autoencoders commonly used in practice. These models are well known to have universal approximation capabilities and, moreover, have been used extensively in practice to learn representations of data. Unlike existing work, our analysis does not require weak supervision, auxiliary information, or conditioning in the latent space. The models we consider are tightly connected with autoencoder architectures used in practice that leverage mixture priors in the latent space and ReLU/leaky-ReLU activations in the encoder. Our main result is an identifiability hierarchy that significantly generalizes previous work and exposes how different assumptions lead to different "strengths" of identifiability. For example, our weakest result establishes (unsupervised) identifiability up to an affine transformation, which already improves on existing work.
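To make the model class concrete, the following is a minimal sketch (not the paper's actual construction) of the kind of generative model the abstract describes: latents drawn from a mixture-of-Gaussians prior, pushed through a piecewise-linear (leaky-ReLU) decoder. All dimensions, weights, and function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions chosen for illustration only.
latent_dim, data_dim, n_components = 2, 5, 3

# Mixture-of-Gaussians prior over the latent space:
# mixture weights, component means, and a small isotropic variance.
weights = np.array([0.5, 0.3, 0.2])
means = rng.normal(size=(n_components, latent_dim))

def sample_prior(n):
    """Draw latents z from the Gaussian mixture prior."""
    comps = rng.choice(n_components, size=n, p=weights)
    return means[comps] + 0.1 * rng.normal(size=(n, latent_dim))

# A single-layer leaky-ReLU decoder: affine map + piecewise-linear activation.
W = rng.normal(size=(latent_dim, data_dim))
b = rng.normal(size=data_dim)

def decoder(z, alpha=0.2):
    """f(z) = leaky_relu(zW + b); the piecewise-linear maps the paper studies."""
    h = z @ W + b
    return np.where(h > 0, h, alpha * h)

z = sample_prior(4)   # latents from the mixture prior
x = decoder(z)        # observations from the deep generative model
print(x.shape)        # (4, 5)
```

Identifiability here means that two such (prior, decoder) pairs inducing the same distribution over `x` must agree up to a simple transformation of the latent space, e.g. an affine map in the paper's weakest result.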