Reducing the Cost of Fitting Mixture Models via Stochastic Sampling

Published: 26 Jul 2022, Last Modified: 17 May 2023, TPM 2022
Keywords: Mixture models, expectation-maximization, sum-product networks, Metropolis-Hastings, unsupervised learning
Abstract: Traditional methods for unsupervised learning of finite mixture models require evaluating the likelihood of all components of the mixture. This quickly becomes prohibitive when the components are abundant or expensive to compute. We therefore propose combining the expectation-maximization and Metropolis-Hastings algorithms to evaluate only a small number of stochastically sampled components, substantially reducing the computational cost. The Markov chain of component assignments is generated sequentially across the algorithm's iterations, with a non-stationary target distribution whose parameters vary via a gradient-descent scheme. We emphasize the generality of our method, equipping it with the ability to train mixture models that involve complex, possibly nonlinear, transformations. The performance of our method is illustrated on mixtures of normalizing flows.
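The abstract alone does not fix the implementation details, but the core idea of a Metropolis-Hastings step inside EM can be sketched in a few lines. The following is a minimal illustration under stated assumptions: it uses univariate unit-variance Gaussian components instead of the normalizing flows the paper targets, a uniform independence proposal over components, and a simple gradient-style parameter update; all names (`log_comp`, `lr`, etc.) are hypothetical. Note that each point evaluates only its current and proposed component, not all K of them, which is the advertised cost saving.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumption: unit-variance Gaussian components; the paper
# itself uses normalizing flows, which would replace log_comp below).
K, N, iters = 10, 500, 200
X = np.concatenate([rng.normal(loc=5 * k, size=N // K) for k in range(K)])

mu = rng.normal(scale=10.0, size=K)      # component means (learned)
log_pi = np.full(K, -np.log(K))          # log mixture weights
z = rng.integers(K, size=N)              # current component assignments
lr = 0.05                                # gradient step size (hypothetical)

def log_comp(x, k):
    """Log-density of component(s) k at x (unit-variance Gaussian)."""
    return -0.5 * (x - mu[k]) ** 2 - 0.5 * np.log(2 * np.pi)

for _ in range(iters):
    # Metropolis-Hastings step over assignments: propose a component
    # uniformly and accept/reject, so each point touches only 2 of the
    # K components instead of evaluating the full mixture.
    z_prop = rng.integers(K, size=N)
    log_alpha = (log_pi[z_prop] + log_comp(X, z_prop)
                 - log_pi[z] - log_comp(X, z))
    accept = np.log(rng.uniform(size=N)) < log_alpha
    z = np.where(accept, z_prop, z)

    # Gradient-style M-step on the sampled assignments; because the
    # parameters move every iteration, the chain's target distribution
    # is non-stationary, as the abstract describes.
    for k in range(K):
        mask = z == k
        if mask.any():
            mu[k] += lr * (X[mask].mean() - mu[k])
    counts = np.bincount(z, minlength=K) + 1e-3
    log_pi = np.log(counts / counts.sum())
```

With a uniform proposal the acceptance ratio reduces to the ratio of joint densities pi_k p_k(x), i.e. the ratio of responsibilities, so the chain targets the same posterior over assignments that exact EM would compute in its E-step.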