Neural Disentanglement using Mixture Latent Space with Continuous and Discrete Variables

Anonymous

04 Sept 2019 (modified: 05 May 2023)
NeurIPS 2019 Workshop DC S1 Blind Submission
Keywords: deep learning, neural disentanglement, unsupervised representation learning, variational auto-encoder
TL;DR: Mixture Model for Neural Disentanglement
Abstract: Recent advances in deep learning have shown that deep neural networks are effective at extracting the features required to perform the task at hand. However, the learned features are helpful only for that initial task: because they are highly task specific, they do not capture the most general, task-agnostic properties of the input. Humans, by contrast, appear to learn by disentangling features that are task agnostic. This suggests learning task-agnostic representations by disentangling only the most informative features of the input data. Variational Auto-Encoders (VAEs) have recently been shown to be the de facto models for capturing latent variables in a generative setting. Since these latent features can be continuous and/or discrete, this motivates a VAE whose latent space is a mixture of continuous and discrete variables. We achieve this by performing our experiments with a modified version of JointVAE to learn the disentangled features.
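As a rough illustration of the mixture latent space the abstract describes, below is a minimal PyTorch sketch of a VAE with Gaussian continuous latents and Gumbel-Softmax-relaxed discrete latents, in the spirit of JointVAE. All class names, layer sizes, and hyperparameters (e.g. the temperature `temp=0.67`) are illustrative assumptions, not the authors' actual architecture.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointLatentVAE(nn.Module):
    """Sketch of a VAE with a mixture latent space: a Gaussian
    continuous part plus a Gumbel-Softmax (Concrete) discrete part."""

    def __init__(self, x_dim=784, h_dim=256, z_cont=10, z_disc=10, temp=0.67):
        super().__init__()
        self.temp = temp  # Gumbel-Softmax relaxation temperature (assumed value)
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.fc_mu = nn.Linear(h_dim, z_cont)      # mean of q(z_cont | x)
        self.fc_logvar = nn.Linear(h_dim, z_cont)  # log-variance of q(z_cont | x)
        self.fc_logits = nn.Linear(h_dim, z_disc)  # logits of q(z_disc | x)
        self.dec = nn.Sequential(nn.Linear(z_cont + z_disc, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Continuous reparameterization: z = mu + sigma * eps
        z_cont = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Discrete reparameterization via the Gumbel-Softmax relaxation
        logits = self.fc_logits(h)
        z_disc = F.gumbel_softmax(logits, tau=self.temp, hard=False)
        # Decode from the concatenated mixture latent space
        x_recon = self.dec(torch.cat([z_cont, z_disc], dim=-1))
        return x_recon, mu, logvar, logits

def elbo_loss(x, x_recon, mu, logvar, logits):
    """Negative ELBO: reconstruction term plus the two KL terms."""
    recon = F.binary_cross_entropy_with_logits(x_recon, x, reduction='sum')
    # KL(q(z_cont|x) || N(0, I)) in closed form
    kl_cont = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # KL(q(z_disc|x) || Uniform(K)) = sum q log q + log K
    q = F.softmax(logits, dim=-1)
    kl_disc = torch.sum(q * (torch.log(q + 1e-12) + math.log(logits.size(-1))))
    return recon + kl_cont + kl_disc
```

Note that JointVAE as published additionally anneals a capacity term on each KL to encourage disentanglement; this sketch omits that and shows only the plain joint continuous/discrete objective.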