Explainable Mixture Models through Differentiable Rule Learning

ICLR 2026 Conference Submission 22384 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Mixture modeling, Interpretability, Conditional Density Estimation
TL;DR: We introduce explainable mixture models, a framework that pairs each mixture component with a human-interpretable rule over descriptive features.
Abstract: Mixture models excel at decomposing complex, multi-modal distributions into simpler probabilistic components, but provide no insight into the conditions under which these components arise. We introduce explainable mixture models (EMM), a framework that pairs each mixture component with a human-interpretable rule over descriptive features. This enables mixtures that are not only statistically expressive but also transparently grounded in the underlying data. We formally examine the conditions under which an EMM exactly captures a target distribution and propose a scalable, differentiable learning procedure for discovering sets of rules. Experiments on synthetic and real-world datasets demonstrate that our method discovers interesting sub-populations in both univariate and multivariate settings, offering interpretable insights into the structure of complex distributions.
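The submission does not include code here, but the abstract's core idea, pairing each mixture component with a differentiable rule over descriptive features, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' implementation: it assumes rules are conjunctions of soft threshold tests sigmoid(s * (z_j - t)) over descriptive features z, gating one-dimensional Gaussian components over the target x, trained by maximizing the conditional log-likelihood. All names (ExplainableMixture, rule_scores, etc.) and modeling choices are illustrative assumptions.

```python
# Minimal sketch of an "explainable mixture": each Gaussian component k is
# gated by a soft, differentiable rule over descriptive features z.
# Hard predicates like "z_j > t" are relaxed to sigmoid(s * (z_j - t)) so
# thresholds t and slopes s can be learned by gradient descent.
import torch
import torch.nn as nn

class ExplainableMixture(nn.Module):
    def __init__(self, n_components: int, z_dim: int):
        super().__init__()
        # Per-component rule parameters: one threshold and slope per z feature.
        self.thresholds = nn.Parameter(torch.randn(n_components, z_dim))
        self.slopes = nn.Parameter(torch.ones(n_components, z_dim))
        # Per-component 1-D Gaussian over the target variable x.
        self.means = nn.Parameter(torch.randn(n_components))
        self.log_stds = nn.Parameter(torch.zeros(n_components))

    def rule_scores(self, z: torch.Tensor) -> torch.Tensor:
        # Soft conjunction: product of per-feature soft tests -> shape (B, K).
        tests = torch.sigmoid(self.slopes * (z.unsqueeze(1) - self.thresholds))
        return tests.prod(dim=-1)

    def log_prob(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # Rule activations act as (normalized) mixture weights conditioned on z.
        weights = self.rule_scores(z)                        # (B, K)
        weights = weights / (weights.sum(-1, keepdim=True) + 1e-8)
        comp = torch.distributions.Normal(self.means, self.log_stds.exp())
        log_px = comp.log_prob(x.unsqueeze(-1))              # (B, K)
        return torch.logsumexp((weights + 1e-8).log() + log_px, dim=-1)

# Training loop: maximize the conditional log-likelihood of x given z.
model = ExplainableMixture(n_components=3, z_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
z, x = torch.randn(256, 2), torch.randn(256)  # toy data
for _ in range(100):
    loss = -model.log_prob(x, z).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a sketch like this, sharpening the slopes after training (or thresholding the soft tests) would recover hard, human-readable conjunctions such as "z_1 > 0.3 AND z_2 <= 1.2" as each component's rule; the paper's actual learning procedure and rule language may differ.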
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 22384