Robust Model Selection and Nearly-Proper Learning for GMMs

Published: 31 Oct 2022, Last Modified: 13 Oct 2022, NeurIPS 2022 Accept
Keywords: Mixtures of Gaussians, model selection, proper learning, density estimation
TL;DR: We give efficient algorithms for robust model selection and nearly-proper learning of Gaussian mixture models.
Abstract: In learning theory, a standard assumption is that the data is generated from a finite mixture model. But what happens when the number of components is not known in advance? The problem of estimating the number of components, also called model selection, is important in its own right, but there are essentially no known efficient algorithms with provable guarantees. In this work, we study the problem of model selection for univariate Gaussian mixture models (GMMs). Given $\textsf{poly}(k/\epsilon)$ samples from a distribution that is $\epsilon$-close in TV distance to a GMM with $k$ components, we can construct a GMM with $\widetilde{O}(k)$ components that approximates the distribution to within $\widetilde{O}(\epsilon)$ in $\textsf{poly}(k/\epsilon)$ time. Thus we are able to approximately determine the minimum number of components needed to fit the distribution, up to a logarithmic factor. Moreover, by adapting our techniques, we obtain similar results for reconstructing Fourier-sparse signals. Prior to our work, the only known algorithms for learning arbitrary univariate GMMs either output significantly more than $k$ components (e.g., $k/\epsilon^2$ components for kernel density estimates) or run in time exponential in $k$.
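
To make the model-selection task in the abstract concrete, below is a minimal, hypothetical Python sketch (not the paper's algorithm): it draws samples from a univariate 3-component GMM and searches for the smallest number of components whose fitted mixture is numerically close to the true density in TV distance. The scikit-learn `GaussianMixture` fitter, the tolerance `eps`, and the grid-based TV approximation are illustrative choices only; the paper's guarantees concern an efficient algorithm achieving $\widetilde{O}(k)$ components and $\widetilde{O}(\epsilon)$ error, which this brute-force loop does not provide.

```python
# Naive illustration of the model-selection task (NOT the paper's algorithm):
# find the smallest number of mixture components that fits the data well.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Unknown target: a 3-component univariate GMM (illustrative parameters).
true_means, true_stds, true_weights = [-4.0, 0.0, 5.0], [1.0, 0.5, 1.5], [0.3, 0.4, 0.3]
comp = rng.choice(3, size=20_000, p=true_weights)
X = rng.normal(np.take(true_means, comp), np.take(true_stds, comp)).reshape(-1, 1)

def tv_distance(gmm, grid):
    """Numerically approximate the TV distance between the fitted mixture and the truth."""
    fitted = np.exp(gmm.score_samples(grid.reshape(-1, 1)))
    truth = sum(w * np.exp(-(grid - m) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))
                for m, s, w in zip(true_means, true_stds, true_weights))
    return 0.5 * np.trapz(np.abs(fitted - truth), grid)

grid = np.linspace(-12, 12, 4000)
eps = 0.05  # illustrative tolerance, standing in for the epsilon in the abstract
for k in range(1, 8):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    if tv_distance(gmm, grid) <= eps:
        print(f"smallest number of components within tolerance: {k}")
        break
```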