Abstract: While deep learning models have achieved significant success across various domains, their black-box nature and lack of interpretability undermine their reliability in safety-critical applications such as medical diagnostics and autonomous vehicles. Bayesian neural networks (BNNs) offer a promising way to address these limitations by incorporating uncertainty estimation into model predictions, enhancing transparency and decision-making. However, BNN development has primarily focused on efficient, high-fidelity approximate inference and convergence guarantees in asymptotic settings, priorities that are ill-suited to modern high-dimensional, multi-modal, and non-asymptotic deep learning applications and that undermine BNNs' theoretical advantages. To bridge this gap, this paper provides an in-depth review of how approximate Bayesian inference can leverage deep learning optimization to achieve high efficiency and fidelity in high-dimensional spaces and multi-modal loss landscapes. It also reconciles Bayesian consistency with generalization objectives in non-asymptotic settings and investigates the generalization capabilities of BNNs. Additionally, this survey examines the often-overlooked expressiveness of BNNs, emphasizing how weight uncertainty and the absence of in-between uncertainty affect their performance. This survey aims to inspire BNN practitioners to adopt a deep learning perspective and to offer valuable insights that propel further advances in the field.
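To make the uncertainty-estimation idea concrete, the following is a minimal, hypothetical sketch (not from the paper) of how a BNN turns a distribution over weights into predictive uncertainty: the weights of a toy network are drawn from an assumed factorized Gaussian approximate posterior, and the spread of the Monte Carlo predictions serves as an epistemic uncertainty estimate. All names, shapes, and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network; the approximate posterior over each weight
# is a factorized Gaussian N(mu, sigma^2) (hypothetical values for illustration).
mu_w1, sigma_w1 = rng.normal(size=(1, 16)), 0.1
mu_w2, sigma_w2 = rng.normal(size=(16, 1)), 0.1

def forward(x, w1, w2):
    # Deterministic forward pass for one sampled weight configuration.
    return np.tanh(x @ w1) @ w2

x = np.linspace(-3, 3, 50).reshape(-1, 1)

# Monte Carlo predictive distribution: average predictions over S weight samples.
S = 200
preds = np.stack([
    forward(
        x,
        mu_w1 + sigma_w1 * rng.normal(size=mu_w1.shape),
        mu_w2 + sigma_w2 * rng.normal(size=mu_w2.shape),
    )
    for _ in range(S)
])

mean = preds.mean(axis=0)  # predictive mean
std = preds.std(axis=0)    # epistemic uncertainty induced by weight uncertainty
print(mean[:3].ravel(), std[:3].ravel())
```

The key contrast with a standard (point-estimate) network is that predictions are averaged over many plausible weight settings, so the model can report not just an answer but how much its answer varies under weight uncertainty.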