Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders
Nat Dilokthanakul, Pedro A. M. Mediano, Marta Garnelo, Matthew C.H. Lee, Hugh Salimbeni, Kai Arulkumaran, Murray Shanahan
Nov 03, 2016 (modified: Jan 13, 2017) · ICLR 2017 conference submission · readers: everyone
Abstract: We study a variant of the variational autoencoder (VAE) with a Gaussian mixture as a prior distribution, with the goal of performing unsupervised clustering through deep generative models. We observe that the known problem of over-regularisation that has been shown to arise in regular VAEs also manifests itself in our model and leads to cluster degeneracy. We show that a heuristic called the minimum information constraint, which has been shown to mitigate this effect in VAEs, can also be applied to improve unsupervised clustering performance with our model. Furthermore, we analyse the effect of this heuristic and provide an intuition for the various processes with the help of visualisations. Finally, we demonstrate the performance of our model on synthetic data, MNIST and SVHN, showing that the obtained clusters are distinct and interpretable, and that our model achieves unsupervised clustering performance competitive with the state of the art.
TL;DR: We study a variant of the variational autoencoder model with a Gaussian mixture as prior distribution and discuss its optimisation difficulties and capabilities for unsupervised clustering.
Keywords: Unsupervised Learning, Deep Learning
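The core idea of the abstract is to replace the standard-normal prior of a plain VAE with a Gaussian mixture, so that each mixture component can act as a cluster in latent space. The sketch below is not the authors' code; it is a minimal numpy illustration of such a prior, with an illustrative choice of 3 components in a 2-D latent space, showing ancestral sampling (pick a cluster, then sample the latent) and the mixture log-density that the KL term of the ELBO would be computed against.

```python
# Minimal sketch (not the authors' implementation): a diagonal Gaussian-mixture
# prior p(z) = sum_k w_k N(z; mu_k, sigma_k^2), replacing the N(0, I) prior of
# a standard VAE. K, D and all parameter values are illustrative.
import numpy as np

def gmm_log_prob(z, means, log_vars, weights):
    """Log-density of z under a diagonal Gaussian mixture."""
    z = np.atleast_2d(z)                       # (N, D)
    diff = z[:, None, :] - means[None, :, :]   # (N, K, D)
    var = np.exp(log_vars)                     # (K, D)
    # per-component diagonal Gaussian log-density, summed over dimensions
    log_comp = -0.5 * np.sum(diff**2 / var + log_vars + np.log(2 * np.pi),
                             axis=-1)          # (N, K)
    a = log_comp + np.log(weights)             # add log mixture weights
    # log-sum-exp over components for numerical stability
    m = a.max(axis=-1, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=-1, keepdims=True))).squeeze(-1)

rng = np.random.default_rng(0)
K, D = 3, 2                                    # 3 clusters, 2-D latent space
means = rng.normal(size=(K, D)) * 3.0          # well-separated component means
log_vars = np.zeros((K, D))                    # unit variance per component
weights = np.full(K, 1.0 / K)                  # uniform cluster prior p(k)

# Ancestral sampling from the prior: k ~ p(k), then z ~ N(mu_k, sigma_k^2 I).
k = rng.choice(K, p=weights)
z = means[k] + rng.normal(size=D) * np.exp(0.5 * log_vars[k])
print(gmm_log_prob(z, means, log_vars, weights))
```

A point near a component mean receives a much higher prior log-density than a distant one, which is what lets each component claim a region of latent space and act as a cluster.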