Keywords: graphical models, belief propagation, uncertainty quantification
TL;DR: We propose LinUProp, a novel approach that offers accuracy, interpretability, scalability, and guaranteed convergence for uncertainty quantification in graphical model inference.
Abstract: Uncertainty Quantification (UQ) is vital for decision-makers, as it offers insights into the reliability of the data and the model, enabling more informed and risk-aware decision-making.
Graphical models, capable of representing data with complex dependencies, are widely used across domains.
Existing sampling-based UQ methods are unbiased but cannot guarantee convergence and are time-consuming on large-scale graphs.
Fast UQ methods for graphical models with closed-form solutions and convergence guarantees exist, but they underestimate uncertainty.
We propose *LinUProp*, a UQ method built on a novel linear propagation of uncertainty that models the uncertainty among related nodes additively rather than multiplicatively, offering linear scalability, guaranteed convergence, and closed-form solutions without underestimating uncertainty.
Theoretically, we decompose the expected prediction error of the graphical model and prove that the uncertainty computed by *LinUProp* is the *generalized variance component* of the decomposition.
Experimentally, we demonstrate that *LinUProp* produces uncertainty estimates consistent with the sampling-based method while offering linear scalability and fast convergence.
Moreover, *LinUProp* outperforms competitors in uncertainty-based active learning on four real-world graph datasets, achieving higher accuracy with a lower labeling budget.
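The additive propagation idea described above can be illustrated with a small sketch. The snippet below is not the authors' implementation; the propagation weights `W` and local uncertainties `c` are hypothetical, and it only shows the fixed-point form u = c + W u, which admits a closed-form solution and converges whenever the spectral radius of W is below 1.

```python
import numpy as np

# Hypothetical propagation weights between related nodes (spectral radius < 1)
W = np.array([[0.0, 0.3, 0.0],
              [0.3, 0.0, 0.2],
              [0.0, 0.2, 0.0]])
# Hypothetical per-node local uncertainty
c = np.array([0.5, 0.1, 0.4])

# Additive propagation: each node's uncertainty is its own local uncertainty
# plus a weighted sum of its neighbors' uncertainties (additive, not multiplicative).
u = c.copy()
for _ in range(100):
    u = c + W @ u

# Closed-form solution of the same fixed point: u = (I - W)^{-1} c
u_closed = np.linalg.solve(np.eye(len(c)) - W, c)
print(u, u_closed)  # the iteration converges to the closed-form solution
```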
Primary Area: Probabilistic methods (for example: variational inference, Gaussian processes)
Submission Number: 8583