Approximate Message Passing for Bayesian Neural Networks

ICLR 2026 Conference Submission 17993 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Bayesian Neural Networks, Message Passing, Uncertainty Quantification, Bayesian Inference
TL;DR: Our message-passing framework is the first to support convolutional neural networks for Bayesian learning.
Abstract: Bayesian methods for learning predictive models can account for both sources of uncertainty (i.e., data and model uncertainty) within a single framework and thereby provide a powerful tool for decision-making. Bayesian neural networks (BNNs) hold great potential for training-data efficiency thanks to full uncertainty quantification, making them promising candidates for more data-efficient AI in data-constrained settings such as reinforcement learning in the physical world. However, current computational approaches for learning BNNs often suffer from overconfidence, sensitivity to hyperparameters, and posterior collapse, highlighting the need for alternative computational approaches. In this paper, we introduce a novel method that leverages approximate message passing (MP) on a fully factorized neural network model with mixed approximations to overcome these problems while maintaining data efficiency. Our framework supports convolutional neural networks and addresses the double-counting of training data, which has been a key source of overconfidence in prior work. We demonstrate the data efficiency of our method on multiple benchmark datasets in comparison to state-of-the-art methods for learning neural networks.
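As a rough illustration of the general idea behind Gaussian message passing, and of why dividing a factor's old message out of the posterior prevents double-counting an observation, the following minimal Python sketch applies expectation-propagation-style updates to a scalar Bayesian linear model rather than a neural network. It is a hypothetical example under simplifying assumptions (Gaussian prior and likelihood, a single weight), not the submission's algorithm; all variable names are illustrative only.

# Minimal sketch: Gaussian message passing for a scalar Bayesian linear model
# y_i = w * x_i + noise. Each data point sends a Gaussian "site" message to w;
# the cavity step removes that point's old message before re-absorbing it,
# which is what avoids counting the same observation twice.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a known weight.
true_w, noise_std = 1.5, 0.5
x = rng.normal(size=20)
y = true_w * x + noise_std * rng.normal(size=20)

# Prior on w: N(0, 1), stored as natural parameters (precision, precision * mean).
prior_prec, prior_pm = 1.0, 0.0

# One Gaussian site message per data point, initialised to be uninformative.
site_prec = np.zeros_like(x)
site_pm = np.zeros_like(x)

for _ in range(10):                      # a few passes over the data
    for i in range(len(x)):
        # Current posterior in natural parameters.
        post_prec = prior_prec + site_prec.sum()
        post_pm = prior_pm + site_pm.sum()

        # Cavity: divide this point's old message out of the posterior so the
        # observation is not double-counted when it is re-absorbed.
        cav_prec = post_prec - site_prec[i]
        cav_pm = post_pm - site_pm[i]

        # Tilted distribution = cavity * exact Gaussian likelihood of y_i.
        # (Here the likelihood is already Gaussian; non-Gaussian factors would
        # be projected back to a Gaussian by moment matching at this step.)
        tilt_prec = cav_prec + x[i] ** 2 / noise_std ** 2
        tilt_pm = cav_pm + x[i] * y[i] / noise_std ** 2

        # New site message = tilted distribution divided by the cavity.
        site_prec[i] = tilt_prec - cav_prec
        site_pm[i] = tilt_pm - cav_pm

post_prec = prior_prec + site_prec.sum()
post_mean = (prior_pm + site_pm.sum()) / post_prec
print(f"posterior mean {post_mean:.3f}, std {post_prec ** -0.5:.3f}")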
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 17993