Evaluating and Improving Subspace Inference in Bayesian Deep Learning

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Subspace inference, Bayesian neural networks, Uncertainty quantification
TL;DR: We enhance subspace inference in Bayesian deep learning with better subspace construction, evaluation metrics, and efficient inference techniques, improving both accuracy and computational efficiency.
Abstract: Bayesian neural networks perform Bayesian inference over model weights to account for uncertainty in weight estimates and predictions. Since full Bayesian inference is computationally expensive and scales poorly with the high dimensionality of modern networks, subspace inference has emerged as an appealing class of approximate inference methods, in which inference is restricted to a lower-dimensional weight subspace. Despite their benefits, existing subspace inference methods have notable pitfalls in subspace construction, subspace evaluation, and inference efficiency. In this work, we conduct a comprehensive analysis of current subspace inference techniques and address all of these issues. First, we propose a block-averaging construction strategy that improves subspace quality by more closely resembling subspaces built from the full stochastic gradient descent trajectory. Second, to evaluate subspace quality directly, we propose novel metrics based on the Bayes factor and the prior predictive, targeting both goodness of fit and generalization ability. Finally, we make inference within the subspace more efficient by leveraging importance sampling and quasi-Monte Carlo methods, significantly reducing computational overhead. Our experiments demonstrate that the proposed methods not only improve computational efficiency but also achieve better accuracy and uncertainty quantification than existing subspace inference methods on the CIFAR and UCI datasets.
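To make the pipeline concrete, below is a minimal, self-contained sketch (not the authors' code) of two of the computational ingredients the abstract names: constructing a low-dimensional subspace from block-averaged SGD snapshots via PCA around an SWA-style mean, and running self-normalized importance sampling with a quasi-Monte Carlo (Sobol) proposal inside that subspace. The block size, subspace dimension, and toy log-posterior are illustrative assumptions, not details taken from the paper.

```python
# Sketch only: subspace construction from block-averaged SGD snapshots + PCA,
# then self-normalized importance sampling with a scrambled-Sobol QMC proposal
# restricted to the subspace. All hyperparameters and the toy log-posterior
# below are illustrative assumptions.
import numpy as np
from scipy.stats import norm, qmc
from scipy.special import logsumexp

rng = np.random.default_rng(0)
d, n_snapshots, block_size, k = 100, 40, 4, 5  # weight dim, #snapshots, block size, subspace dim

# Stand-in for SGD trajectory snapshots w_1, ..., w_T (one row per snapshot).
snapshots = rng.normal(size=(n_snapshots, d))

# Block averaging: average consecutive snapshots (n_snapshots must be a
# multiple of block_size) so the blocks better resemble the full trajectory.
blocks = snapshots.reshape(n_snapshots // block_size, block_size, d).mean(axis=1)

# Subspace: SWA-style mean plus the top-k principal directions of the deviations.
w_mean = blocks.mean(axis=0)
_, _, Vt = np.linalg.svd(blocks - w_mean, full_matrices=False)
P = Vt[:k].T  # d x k matrix mapping subspace coordinates to weight space

def log_posterior(w):
    """Toy stand-in for the (unnormalized) log posterior over weights."""
    return -0.5 * np.sum(w ** 2, axis=-1)

# QMC proposal: scrambled Sobol points in [0,1)^k mapped to N(0, I_k).
n = 1024  # power of 2, as Sobol sequences prefer
z = norm.ppf(qmc.Sobol(k, scramble=True, seed=0).random(n))  # n x k
w = w_mean + z @ P.T  # lift subspace samples to full weight space

# Self-normalized importance sampling: target is the posterior restricted to
# the subspace, proposal is N(0, I_k) over the subspace coordinates.
log_w = log_posterior(w) - norm.logpdf(z).sum(axis=1)
weights = np.exp(log_w - logsumexp(log_w))

# Example use: posterior-mean weights under the subspace approximation.
w_post_mean = weights @ w
```

In practice the proposal and its density would be chosen to match the restricted posterior more closely, and the weighted samples would be pushed through the network to average predictive distributions rather than weights.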
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8467