Explainable Mixture Models through Differentiable Rule Learning

Published: 26 Jan 2026, Last Modified: 01 Mar 2026 | ICLR 2026 Poster | CC BY 4.0
Keywords: Mixture modeling, Interpretability, Conditional Density Estimation
TL;DR: We introduce explainable mixture models, a framework that pairs each mixture component with a human-interpretable rule over descriptive features.
Abstract: Mixture models excel at decomposing complex, multi-modal distributions into simpler probabilistic components, but provide no insight into the conditions under which these components arise. We introduce explainable mixture models (XMM), a framework that pairs each mixture component with a human-interpretable rule over descriptive features. This enables mixtures that are not only statistically expressive but also transparently grounded in the underlying data. We formalize the problem and examine conditions under which an XMM exactly captures a target distribution. We then propose a scalable, differentiable learning procedure for discovering sets of rules. Experiments on synthetic and real-world datasets demonstrate that our method discovers interesting sub-populations in both univariate and multivariate settings, offering interpretable insights into the structure of complex distributions.
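The paper's learning procedure is not detailed in this abstract; as an illustrative sketch only (the function names, the sigmoid rule relaxation, and the signed-threshold rule form are all assumptions, not the authors' method), the core idea of pairing each mixture component with a differentiable rule over descriptive features could look like gating Gaussian components with soft threshold predicates:

```python
import numpy as np

def soft_rule(z, thresholds, signs, sharpness=10.0):
    """Soft conjunction of threshold predicates on descriptive features z.

    Each predicate 'sign * (z_j - t_j) > 0' is relaxed to a sigmoid so the
    rule is differentiable; the product over features approximates a
    logical AND. (Illustrative relaxation, not the paper's exact scheme.)
    """
    sig = 1.0 / (1.0 + np.exp(-sharpness * signs * (z - thresholds)))
    return np.prod(sig, axis=-1)

def xmm_density(x, z, components, rules):
    """Density of a toy explainable mixture: rule activations over
    descriptive features z give the (normalized) mixture weights for
    1-D Gaussian components (mu, sd) evaluated at x."""
    weights = np.stack([soft_rule(z, t, s) for t, s in rules], axis=-1)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    dens = np.stack([
        np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        for mu, sd in components
    ], axis=-1)
    return (weights * dens).sum(axis=-1)

# Two sub-populations: component 0 applies when z < 0, component 1 when z > 0.
components = [(0.0, 1.0), (5.0, 1.0)]
rules = [(np.array([0.0]), -1.0), (np.array([0.0]), +1.0)]
x = np.array([0.0, 5.0])
z = np.array([[-1.0], [1.0]])
density = xmm_density(x, z, components, rules)
```

Because every operation is smooth in the thresholds, the rules could in principle be fit jointly with the component parameters by gradient descent, which is the kind of scalability the abstract's "differentiable learning procedure" refers to.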
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 22384