Learning Sparse Latent Representations with the Deep Copula Information Bottleneck

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) · ICLR 2018 Conference Blind Submission
  • Abstract: Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep variational information bottleneck model, identify its shortcomings and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on that, we show how to enforce sparsity in the latent space of the new model. We evaluate our method on artificial and real data.
  • TL;DR: We apply the copula transformation to the Deep Information Bottleneck which leads to restored invariance properties and an interpretable latent space.
  • Keywords: Information Bottleneck, Variational Autoencoder, Sparsity, Disentanglement, Interpretability, Copula
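
The copula transformation referred to in the abstract acts on the marginals of the data. As a minimal, illustrative sketch (not the authors' code), one common way to realize such a transform is the normal-scores map: each feature is passed through its empirical CDF and then through the standard normal quantile function, which makes the resulting representation invariant to strictly monotone transformations of the marginals. The function name `copula_transform` below is a hypothetical helper chosen for illustration.

```python
# Minimal sketch of a marginal copula (normal-scores) transform.
# Assumption: this mirrors the kind of transformation the abstract describes;
# it is not the paper's reference implementation.
import numpy as np
from scipy.stats import norm, rankdata


def copula_transform(X: np.ndarray) -> np.ndarray:
    """Map each column of X to standard-normal marginals via its empirical CDF.

    X is an (n_samples, n_features) array; the output has the same shape.
    """
    n = X.shape[0]
    # Empirical CDF per feature, rescaled into (0, 1) so the quantile stays finite.
    U = rankdata(X, axis=0) / (n + 1)
    # Standard normal quantile of the probability integral transform.
    return norm.ppf(U)


# Usage example: a strictly increasing distortion of a feature leaves its
# transform unchanged, illustrating the restored invariance property.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
X_distorted = X.copy()
X_distorted[:, 0] = np.exp(X_distorted[:, 0])  # strictly increasing map
assert np.allclose(copula_transform(X), copula_transform(X_distorted))
```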
