Neural Network Approximators for Marginal MAP in Probabilistic Circuits

Published: 22 Jun 2024, Last Modified: 05 Aug 2024 · TPM 2024 · CC BY 4.0
Keywords: Probabilistic Circuits, Marginal MAP Inference, Self-Supervised Learning
TL;DR: We propose to train neural networks in a self-supervised manner to efficiently compute high-quality MMAP solutions in probabilistic circuits.
Abstract: Probabilistic circuits (PCs) such as sum-product networks efficiently represent large multivariate probability distributions. They are preferred in practice over other probabilistic representations, such as Bayesian and Markov networks, because PCs can solve marginal inference (MAR) tasks in time that scales linearly in the size of the network. Unfortunately, the most probable explanation (MPE) task and its generalization, the marginal maximum-a-posteriori (MMAP) inference task, remain NP-hard in these models. Inspired by recent work on using neural networks to generate near-optimal solutions to optimization problems such as integer linear programming, we propose an approach that uses neural networks to approximate MMAP inference in PCs. The key idea in our approach is to approximate the cost of an assignment to the query variables using a continuous multilinear function and then use the latter as a loss function. The two main benefits of our new method are that it is self-supervised and that, once the neural network is trained, it requires only linear time to output a solution. We evaluate our new approach on several benchmark datasets and show that it outperforms three competing linear-time approximations: max-product inference, max-marginal inference, and sequential estimation, which are used in practice to solve MMAP tasks in PCs.
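To make the key idea concrete, here is a minimal sketch (not the authors' code) of the self-supervised scheme the abstract describes: relax each binary query variable to a soft value q in [0,1], evaluate the PC with soft leaf indicators (the PC output is then a continuous multilinear function of q), and train a neural network to map evidence to q by maximizing the log of that value. The two-component mixture PC, the variable roles, and all hyperparameters below are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Assumed variable roles over four binary variables X1..X4:
QUERY, EVID = [0, 1], [2]   # X4 (index 3) is marginalized out

# Toy PC: a mixture of two fully factorized Bernoulli products
# (a valid, if tiny, sum-product network). theta[k, i] = P_k(X_i = 1).
w = torch.tensor([0.6, 0.4])
theta = torch.tensor([[0.9, 0.2, 0.7, 0.5],
                      [0.1, 0.8, 0.3, 0.5]])

def pc_value(lam1, lam0):
    # Evaluate the PC with (possibly soft) leaf indicators: leaf [X_i = 1]
    # receives lam1[:, i], leaf [X_i = 0] receives lam0[:, i]. The output
    # is multilinear in the relaxed query indicators.
    leaves = theta[None] * lam1[:, None, :] + (1 - theta[None]) * lam0[:, None, :]
    return (w * leaves.prod(dim=-1)).sum(dim=-1)   # shape (batch,)

# Neural approximator: evidence in, relaxed query assignment q in [0,1] out.
net = nn.Sequential(nn.Linear(len(EVID), 16), nn.ReLU(),
                    nn.Linear(16, len(QUERY)), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    e = torch.randint(0, 2, (64, len(EVID))).float()  # sampled evidence; no MMAP labels needed
    q = net(e)
    lam1 = torch.ones(64, 4)                          # marginalized vars: both indicators = 1
    lam0 = torch.ones(64, 4)
    lam1[:, EVID], lam0[:, EVID] = e, 1 - e           # clamp evidence to its observed value
    lam1[:, QUERY], lam0[:, QUERY] = q, 1 - q         # soft query indicators
    loss = -torch.log(pc_value(lam1, lam0)).mean()    # continuous surrogate of the MMAP cost
    opt.zero_grad(); loss.backward(); opt.step()

# Inference: one forward pass plus thresholding (linear time).
print((net(torch.tensor([[1.0]])) > 0.5).long())      # MMAP guess for X1, X2 given X3 = 1
```

Thresholding q at inference time is a natural final step here: because the relaxed objective is multilinear in q, its maximum over [0,1]^|Q| is attained at a vertex of the hypercube, i.e., at an integral assignment to the query variables.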
Submission Number: 9