A Neural Network Approach for Efficiently Answering Most Probable Explanation Queries in Probabilistic Models
Keywords: Most Probable Explanation, Probabilistic Graphical Models, Probabilistic Circuits, Neural Autoregressive Models, Self-Supervised Learning, Tractable Loss Functions
TL;DR: A novel neural-based method for efficiently answering arbitrary Most Probable Explanation (any-MPE) queries in large probabilistic models.
Abstract: We propose a novel neural network-based approach to efficiently answer arbitrary Most Probable Explanation (MPE) queries in large probabilistic models, such as Bayesian and Markov networks, probabilistic circuits, and neural autoregressive models. These MPE queries are not restricted to a predefined partition of the variables into evidence and non-evidence groups. Our key idea is to distill all MPE queries into a neural network, eliminating the need to run time-consuming inference algorithms on the probabilistic model itself. We enhance this method with inference-time optimization, using a self-supervised loss to iteratively improve the solutions. Additionally, we use a teacher-student framework to provide a better initial network, reducing the number of inference-time optimization steps required. The teacher network, optimized with a self-supervised loss function, seeks the exact MPE solution, while the student network learns from the teacher's near-optimal outputs via a supervised loss. We demonstrate the practicality, efficacy, and scalability of our approach across various datasets and probabilistic models.
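The inference-time refinement described in the abstract can be sketched as gradient-based optimization of a relaxed assignment under the model's log-probability. Below is a minimal illustration, not the authors' implementation: `mpe_net` (a network mapping evidence values and an evidence mask to logits over the variables) and `model_log_prob` (a differentiable log-probability of the underlying probabilistic model, e.g., a probabilistic circuit) are hypothetical names assumed for this sketch, and binary variables are assumed for simplicity.

```python
import torch

def refine_mpe(mpe_net, model_log_prob, evidence, mask, steps=50, lr=0.1):
    """Inference-time self-supervised refinement (illustrative sketch).

    evidence: tensor of observed values (entries outside the evidence set ignored)
    mask:     1.0 where a variable is evidence, 0.0 where it is queried
    """
    # Initialize query-variable logits from the distilled network's prediction.
    logits = mpe_net(evidence, mask).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        q = torch.sigmoid(logits)              # relaxed (soft) binary assignment
        x = mask * evidence + (1 - mask) * q   # clamp evidence variables to observed values
        loss = -model_log_prob(x).mean()       # self-supervised loss: maximize log p(x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Round the relaxed assignment to a discrete MPE candidate.
    with torch.no_grad():
        hard = (torch.sigmoid(logits) > 0.5).float()
        return mask * evidence + (1 - mask) * hard
```

Because the loss is the model's own (negative) log-probability, no labeled MPE solutions are needed at this stage; the distilled network merely supplies a strong initialization so that few optimization steps suffice.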
Submission Number: 11