Scaling Up Bayesian Neural Networks with Neural Networks

TMLR Paper 1989 Authors

01 Jan 2024 (modified: 11 Mar 2024) · Rejected by TMLR
Abstract: Bayesian Neural Networks (BNNs) offer a principled and natural framework for proper uncertainty quantification in the context of deep learning. They address typical challenges of conventional deep learning methods, such as their insatiable demand for data, their ad hoc nature, and their susceptibility to overfitting. However, their implementation typically relies on Markov chain Monte Carlo (MCMC) methods, which are computationally intensive and inefficient in high-dimensional spaces. To address this issue, we propose a novel Calibration-Emulation-Sampling (CES) strategy to significantly enhance the computational efficiency of BNNs. In this CES framework, during the initial calibration stage, we collect a small set of samples from the parameter space. These samples serve as training data for the emulator. Here, we employ a Deep Neural Network (DNN) emulator to approximate the forward mapping, i.e., the process by which input data pass through the various layers to generate predictions. The trained emulator is then used to sample from the posterior distribution at a substantially higher speed than the original BNN. Using simulated and real data, we demonstrate that our proposed method improves the computational efficiency of BNNs while maintaining similar performance in terms of prediction accuracy and uncertainty quantification.
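To make the three CES stages concrete, below is a minimal sketch of the pipeline the abstract describes. Everything here is an illustrative assumption rather than the paper's actual design: `forward_map` stands in for the expensive BNN forward pass, the calibration draws come from a standard normal prior, the emulator is an arbitrary small MLP, and the sampler is plain random-walk Metropolis with a Gaussian likelihood.

```python
import torch
import torch.nn as nn

# --- Stage 1: Calibration ---
# Collect a small set of parameter samples theta_i and the corresponding
# forward-map outputs G(theta_i). `forward_map` is a placeholder for the
# expensive pass of inputs through the BNN at parameters theta.
def calibrate(forward_map, dim_theta, n_samples=200):
    thetas = torch.randn(n_samples, dim_theta)  # crude prior draws (assumed)
    outputs = torch.stack([forward_map(t) for t in thetas])
    return thetas, outputs

# --- Stage 2: Emulation ---
# Fit a small DNN emulator to approximate theta -> G(theta);
# the architecture here is arbitrary.
def train_emulator(thetas, outputs, epochs=500, lr=1e-3):
    emulator = nn.Sequential(
        nn.Linear(thetas.shape[1], 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, outputs.shape[1]),
    )
    opt = torch.optim.Adam(emulator.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(emulator(thetas), outputs)
        loss.backward()
        opt.step()
    return emulator

# --- Stage 3: Sampling ---
# Random-walk Metropolis on the posterior, with the cheap emulator replacing
# the expensive forward map inside an (assumed) Gaussian log-likelihood.
def sample(emulator, y_obs, dim_theta, n_iter=5000, step=0.1, noise_sd=0.1):
    def log_post(theta):
        resid = emulator(theta) - y_obs
        # Gaussian likelihood plus standard normal prior (both assumptions)
        return -0.5 * (resid / noise_sd).pow(2).sum() - 0.5 * theta.pow(2).sum()

    theta = torch.zeros(dim_theta)
    chain = []
    with torch.no_grad():
        lp = log_post(theta)
        for _ in range(n_iter):
            prop = theta + step * torch.randn(dim_theta)
            lp_prop = log_post(prop)
            if torch.rand(()).log() < lp_prop - lp:  # Metropolis accept step
                theta, lp = prop, lp_prop
            chain.append(theta.clone())
    return torch.stack(chain)
```

The speedup claimed in the abstract comes from Stage 3: each MCMC iteration evaluates the trained emulator instead of the original forward map, so the per-iteration cost no longer depends on the cost of the BNN forward pass.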
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=x2AfG72iDv&noteId=x2AfG72iDv
Changes Since Last Submission: We uploaded a revised version of our submission, with major changes highlighted in blue.
Assigned Action Editor: ~Shinichi_Nakajima2
Submission Number: 1989