Fitting large mixture models using stochastic component selection

21 May 2021 (modified: 22 Oct 2023) · NeurIPS 2021 submission
Keywords: Mixture models, expectation-maximization, sum-product networks, Metropolis-Hastings, unsupervised learning
TL;DR: We propose a method to speed up the fitting of mixture models by replacing the evaluation of all component likelihoods with Metropolis-Hastings sampling of components. The method applies to a wide class of models, including sum-product networks and their extensions.
Abstract: Traditional methods for unsupervised learning of finite mixture models require evaluating the likelihood of all components of the mixture. This becomes computationally prohibitive when the number of components is large, as it is in sum-product (transform) networks. Therefore, we propose an approach combining the expectation-maximization and Metropolis-Hastings algorithms to evaluate only a small number of stochastically sampled components, thus substantially reducing the computational cost. We put emphasis on the generality of our method, equipping it with the ability to train both shallow and deep mixture models which involve complex, and possibly nonlinear, transformations. The performance of our method is illustrated in a variety of synthetic and real-data contexts, considering deep models, such as mixtures of normalizing flows and sum-product (transform) networks.
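
The following is a minimal sketch of the core idea described in the abstract, not the authors' implementation: in the E-step of EM, the posterior over component indices is approximated with a short Metropolis-Hastings chain that evaluates only a few components per data point, instead of all of them. It assumes a 1-D Gaussian mixture for illustration (the paper applies the same idea to deep models such as sum-product networks and mixtures of normalizing flows); all names and hyperparameters here are illustrative.

```python
# Sketch: EM for a Gaussian mixture with a stochastic (Metropolis-Hastings) E-step.
import numpy as np
from scipy.stats import norm

def mh_e_step(x, weights, means, stds, n_steps=10, rng=None):
    """Approximate responsibilities p(k | x) by MH sampling over component index k."""
    rng = np.random.default_rng() if rng is None else rng
    K = len(weights)
    counts = np.zeros(K)
    k = rng.integers(K)  # initial component index
    log_p = np.log(weights[k]) + norm.logpdf(x, means[k], stds[k])
    for _ in range(n_steps):
        k_new = rng.integers(K)  # uniform proposal over components
        log_p_new = np.log(weights[k_new]) + norm.logpdf(x, means[k_new], stds[k_new])
        if np.log(rng.random()) < log_p_new - log_p:  # MH acceptance step
            k, log_p = k_new, log_p_new
        counts[k] += 1
    # Monte Carlo estimate of the responsibilities; only ~n_steps components
    # were evaluated instead of all K.
    return counts / n_steps

def em_step(X, weights, means, stds, rng=None):
    """One EM iteration using the stochastic E-step above."""
    R = np.stack([mh_e_step(x, weights, means, stds, rng=rng) for x in X])
    Nk = R.sum(axis=0) + 1e-12
    weights = Nk / len(X)
    means = (R * X[:, None]).sum(axis=0) / Nk
    stds = np.sqrt((R * (X[:, None] - means) ** 2).sum(axis=0) / Nk) + 1e-6
    return weights, means, stds

# Toy usage: two well-separated 1-D Gaussians.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
w, m, s = np.ones(2) / 2, np.array([-1.0, 1.0]), np.ones(2)
for _ in range(20):
    w, m, s = em_step(X, w, m, s, rng=rng)
print("weights", w, "means", m)
```

With a uniform proposal and a short chain, each data point touches at most a handful of components per iteration, which is where the computational savings over the exact E-step come from when K is large.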
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: pdf
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2110.04776/code)