Learning Gaussian Mixtures with Generalized Linear Models: Precise Asymptotics in High-dimensions

21 May 2021, 20:46 (edited 21 Jan 2022) — NeurIPS 2021 Spotlight
  • Keywords: Statistical Physics, Replica method, High-dimensional statistics, Approximate Message Passing, Gaussian Mixture Models
  • TL;DR: We give a rigorous formula for the error of generalised linear models fitting mixtures of Gaussians in high dimensions.
  • Abstract: Generalised linear models for multi-class classification problems are one of the fundamental building blocks of modern machine learning tasks. In this manuscript, we characterise the learning of a mixture of $K$ Gaussians with generic means and covariances via empirical risk minimisation (ERM) with any convex loss and regularisation. In particular, we prove exact asymptotics characterising the ERM estimator in high dimensions, extending several previous results on Gaussian mixture classification in the literature. We exemplify our result in two tasks of interest in statistical learning: a) classification for a mixture with sparse means, where we study the efficiency of the $\ell_1$ penalty with respect to $\ell_2$; b) max-margin multi-class classification, where we characterise the phase transition in the existence of the multi-class logistic maximum likelihood estimator for $K>2$. Finally, we discuss how our theory can be applied beyond the scope of synthetic data, showing that in several cases Gaussian mixtures closely capture the learning curve of classification tasks on real data sets.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/IdePHICS/GaussMixtureProject
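The setting studied in the abstract — ERM with a convex loss and regularisation on a Gaussian mixture — can be illustrated with a minimal simulation. The sketch below is not the authors' code (see the linked repository for that); it uses an assumed two-cluster mixture $x = y\,\mu + z$ with square loss and $\ell_2$ regularisation, and all parameters ($d$, $n$, $\lambda$) are illustrative choices:

```python
# Minimal sketch (illustrative, not the paper's implementation):
# ERM on a two-Gaussian mixture with square loss + l2 regularisation.
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 400                        # dimension and sample size (assumed)
mu = rng.normal(size=d) / np.sqrt(d)   # cluster mean, ||mu|| ~ 1

# Training data: label y = +/-1, input x = y * mu + standard Gaussian noise
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu[None, :] + rng.normal(size=(n, d))

lam = 0.1  # l2 regularisation strength (assumed)
# Ridge (square-loss ERM) estimator: w = (X^T X + lam * n * I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

# Classification error on fresh samples from the same mixture
y_test = rng.choice([-1.0, 1.0], size=2000)
X_test = y_test[:, None] * mu[None, :] + rng.normal(size=(2000, d))
err = np.mean(np.sign(X_test @ w) != y_test)
print(f"test classification error: {err:.3f}")
```

Averaging this error over many draws at fixed ratio $n/d$ is the kind of learning curve whose high-dimensional limit the paper characterises exactly; swapping the solve for an $\ell_1$-penalised solver would correspond to the sparse-means task a) in the abstract.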