Tractable Uncertainty for Structure Learning

Published: 26 Jul 2022, Last Modified: 22 Oct 2023. Venue: TPM 2022
Keywords: spn, causality, structure learning
TL;DR: We present a new representation for uncertainty in structure learning (of Bayesian networks) based on sum-product networks.
Abstract: Bayesian structure learning allows one to capture uncertainty over the causal directed acyclic graph (DAG) responsible for generating given data. In this work, we present Tractable Uncertainty for STructure learning (TRUST), a framework for approximate posterior inference that relies on probabilistic circuits as the representation of our posterior belief. In contrast to sample-based posterior approximations, our representation can capture a much richer space of DAGs, while also being able to tractably reason about the uncertainty through a range of useful inference queries. We empirically show how probabilistic circuits can be used as an augmented representation for structure learning methods, leading to improvement in both the quality of inferred structures and posterior uncertainty. Experimental results on conditional query answering further demonstrate the practical utility of the representational capacity of TRUST.
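To make the core idea concrete, below is a minimal sketch, not the paper's implementation, of how a small sum-product network over binary edge-indicator variables could encode a posterior belief over DAG structures and answer marginal and conditional queries in a single bottom-up pass. The fixed-topological-order assumption, the variable names (e12, e13, e23), and all weights and leaf probabilities are illustrative only.

```python
# Illustrative sketch: a tiny probabilistic circuit (sum-product network) over
# edge-indicator variables of a 3-node graph. Assumes a fixed topological
# order X1 < X2 < X3, so any subset of {e12, e13, e23} forms a valid DAG.
# All parameters below are made up for illustration.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Leaf:
    """Bernoulli leaf over one binary edge-indicator variable."""
    var: str
    p: float  # probability that the edge is present

    def value(self, evidence: Dict[str, int]) -> float:
        if self.var not in evidence:      # unobserved -> marginalised out
            return 1.0
        return self.p if evidence[self.var] == 1 else 1.0 - self.p


@dataclass
class Product:
    """Product node: factorises over disjoint sets of variables."""
    children: List[object]

    def value(self, evidence: Dict[str, int]) -> float:
        out = 1.0
        for child in self.children:
            out *= child.value(evidence)
        return out


@dataclass
class Sum:
    """Sum node: weighted mixture over children defined on the same variables."""
    weights: List[float]
    children: List[object]

    def value(self, evidence: Dict[str, int]) -> float:
        return sum(w * c.value(evidence) for w, c in zip(self.weights, self.children))


# Two mixture components over the edge indicators (weights/probabilities illustrative).
component_a = Product([Leaf("e12", 0.9), Leaf("e13", 0.2), Leaf("e23", 0.7)])
component_b = Product([Leaf("e12", 0.1), Leaf("e13", 0.8), Leaf("e23", 0.3)])
posterior = Sum(weights=[0.6, 0.4], children=[component_a, component_b])

# Marginal query: P(e12 = 1) -- one bottom-up evaluation of the circuit.
p_e12 = posterior.value({"e12": 1})

# Conditional query: P(e13 = 1 | e12 = 1) = P(e13 = 1, e12 = 1) / P(e12 = 1).
p_joint = posterior.value({"e13": 1, "e12": 1})
print(f"P(e12=1)          = {p_e12:.3f}")
print(f"P(e13=1 | e12=1) = {p_joint / p_e12:.3f}")
```

Because every query reduces to evaluating the circuit with some variables marginalised out, both queries above cost time linear in the circuit size, which is the tractability property the abstract refers to; in contrast, a sample-based posterior would only cover the DAGs that happen to appear in the sample.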
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2204.14170/code)