Sparse Probabilistic Graph Circuits

Published: 17 Jun 2025, Last Modified: 20 Jun 2025 · TPM 2025 · CC BY 4.0
Keywords: Probabilistic circuits, sum-product networks, graphs, sparse graph representation, graph neural networks, permutation invariance, tractable inference.
Abstract: Deep generative models (DGMs) for graphs achieve high expressive power thanks to efficient and scalable neural networks. However, these networks contain non-linearities that prevent analytical computation of many standard probabilistic inference queries, i.e., these DGMs are considered intractable. While recently proposed Probabilistic Graph Circuits (PGCs) address this issue by enabling tractable probabilistic inference, they operate on dense graph representations with $\mathcal{O}(n^2)$ complexity for graphs with $n$ nodes and $m$ edges. To address this scalability issue, we introduce Sparse PGCs (SPGCs), a new class of tractable generative models that operate directly on sparse graph representations, reducing the complexity to $\mathcal{O}(n + m)$, which is particularly beneficial when $m \ll n^2$. In the context of de novo drug design, we empirically demonstrate that SPGCs retain exact inference capabilities, improve memory efficiency and inference speed, and match the performance of intractable DGMs on key metrics.
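The complexity gap between the two representations can be illustrated with a toy size comparison (illustrative only; the paper's actual graph encoding and circuit structure are more involved, and the numbers below are hypothetical, not taken from the paper):

```python
def dense_entries(n: int) -> int:
    # Dense adjacency matrix: one entry per ordered node pair -> O(n^2)
    return n * n

def sparse_entries(n: int, m: int) -> int:
    # Sparse representation: one record per node plus one per edge -> O(n + m)
    return n + m

# Hypothetical molecule-sized graph: ~40 atoms, ~44 bonds (m << n^2)
n, m = 40, 44
print(dense_entries(n))      # 1600
print(sparse_entries(n, m))  # 84
```

For molecular graphs, where the number of bonds grows roughly linearly with the number of atoms, this is the regime $m \ll n^2$ in which the sparse representation pays off.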
Submission Number: 22