- Keywords: Kernel methods, Statistical Learning Theory, Positive Definite Models, Probabilistic Inference, Bayesian Inference, Decision Theory, Density Estimation, Probability Representation
- TL;DR: We show that PSD models enjoy all key properties for probability representation and inference: the sum and product rules apply efficiently and in closed form, and the model is concise, i.e. few units suffice to approximate a wide family of probabilities.
- Abstract: Finding a good way to model probability densities is key to probabilistic inference. An ideal model should be able to concisely approximate any probability while also being compatible with two main operations: multiplication of two models (product rule) and marginalization with respect to a subset of the random variables (sum rule). In this work, we show that a recently proposed class of positive semi-definite (PSD) models for non-negative functions is particularly suited to this end. In particular, we characterize both the approximation and generalization capabilities of PSD models, showing that they enjoy strong theoretical guarantees. Moreover, we show that both the sum and product rules can be performed efficiently and in closed form via matrix operations, enjoying the same versatility as mixture models. Our results open the way to applications of PSD models to density estimation, decision theory, and inference.
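The abstract's central object, a PSD model for non-negative functions, can be sketched as a quadratic form f(x) = k(x)ᵀ A k(x), where k(x) collects kernel evaluations at a set of base points and A is a positive semi-definite matrix, so f(x) ≥ 0 everywhere by construction. The centers, bandwidth, and the way A is built below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

# Minimal sketch of a PSD model: f(x) = k(x)^T A k(x), with
# k(x)_i = exp(-||x - c_i||^2 / (2 * sigma^2)) a Gaussian kernel feature
# and A positive semi-definite, which guarantees f(x) >= 0 for all x.
# Centers, sigma, and the random A are hypothetical choices for illustration.

rng = np.random.default_rng(0)
centers = rng.standard_normal((5, 2))  # 5 base points in R^2
sigma = 1.0

B = rng.standard_normal((5, 5))
A = B @ B.T                            # A = B B^T is PSD by construction

def gaussian_features(x):
    d2 = ((x - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma**2))

def psd_model(x):
    v = gaussian_features(x)
    return float(v @ A @ v)            # quadratic form in a PSD matrix

# Non-negativity holds at arbitrary query points.
xs = rng.standard_normal((100, 2))
assert all(psd_model(x) >= 0.0 for x in xs)
```

Because the model is a quadratic form in kernel features, operations such as multiplying two models or marginalizing variables reduce to matrix manipulations, which is the closed-form tractability the abstract refers to.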
- Supplementary Material: pdf
- Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.