Keywords: Operator learning, conditional density estimation, statistical inference, deep learning, theoretical guarantees
TL;DR: A novel operator approach for learning conditional probability distributions with deep learning
Abstract: We introduce Neural Conditional Probability (NCP), an operator-theoretic approach to learning conditional distributions with
a focus on statistical inference tasks. NCP can be used to build conditional confidence regions and extract key statistics such as
conditional quantiles, mean, and covariance. It offers streamlined learning via a single unconditional training phase, enabling
efficient inference without retraining, even when the conditioning variable changes. By leveraging the approximation
capabilities of neural networks, NCP efficiently handles a wide variety of complex probability distributions.
We provide theoretical guarantees that ensure both optimization consistency and statistical accuracy.
In experiments, we show that NCP with a 2-hidden-layer network matches or outperforms leading methods.
This demonstrates that a minimalistic architecture with a theoretically grounded loss can achieve
competitive results, even against more complex architectures.
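As a rough illustration only (this is not the authors' code), the "2-hidden-layer network" mentioned above could be realized in PyTorch as below. The hidden width, output dimension, and ReLU activation are illustrative assumptions, and the NCP training loss itself is not specified in this abstract.

```python
import torch
import torch.nn as nn

class TwoHiddenLayerNet(nn.Module):
    """2-hidden-layer MLP producing a d-dimensional feature embedding.

    A minimal sketch under assumed hyperparameters; the actual NCP
    architecture and loss are described in the paper, not here.
    """
    def __init__(self, in_dim: int, hidden: int = 128, d: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),   # hidden layer 1
            nn.Linear(hidden, hidden), nn.ReLU(),   # hidden layer 2
            nn.Linear(hidden, d),                   # embedding head
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Toy usage: embed a batch of 32 ten-dimensional inputs.
model = TwoHiddenLayerNet(in_dim=10)
features = model(torch.randn(32, 10))   # shape: (32, 64)
```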
Primary Area: Learning theory
Submission Number: 18932