Stochastic Weight Sharing for Bayesian Neural Networks
TL;DR: We extend the applicability of Bayesian Neural Networks (BNNs) to large-scale models such as Vision Transformers (ViT) while requiring only a small number of training parameters.
Abstract: While Bayesian Neural Networks (BNNs) offer a principled framework for uncertainty quantification in deep learning, their adoption is still constrained by increased computational requirements and by convergence difficulties when training very deep, state-of-the-art architectures. In this work, we reinterpret weight-sharing quantization techniques from a stochastic perspective in the context of training and inference with BNNs. Specifically, we leverage 2D-adaptive Gaussian distributions, Wasserstein distance estimations, and alpha-blending to encode the stochastic behavior of a BNN in a lower-dimensional, soft Gaussian representation. Through extensive empirical investigation, we demonstrate that our approach reduces the computational overhead inherent in Bayesian learning by several orders of magnitude, enabling efficient Bayesian training of large-scale models such as ResNet-101 and the Vision Transformer (ViT). On various computer vision benchmarks—including CIFAR-10, CIFAR-100, and ImageNet1k—our approach compresses model parameters by approximately 50× and reduces model size by 75%, while achieving accuracy and uncertainty estimations comparable to the state of the art.
Submission Number: 1721
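To make the idea of stochastic weight sharing concrete, the sketch below illustrates one plausible reading of the abstract: per-weight Gaussian posteriors are mapped onto a small codebook of shared Gaussians using the closed-form 2-Wasserstein distance between Gaussians, and each weight's distribution is alpha-blended with its assigned shared Gaussian. This is a minimal illustration under stated assumptions (1D Gaussians, hard nearest-cluster assignment, a fixed `alpha`), not the paper's exact 2D-adaptive procedure.

```python
# Hedged sketch of Gaussian weight sharing with a closed-form 2-Wasserstein
# distance and alpha-blending. The cluster count K, the blending factor
# `alpha`, and the hard nearest-cluster assignment are illustrative
# assumptions, not the paper's exact method.
import torch

def wasserstein2_gaussian(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between 1D Gaussians (closed form)."""
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2

def share_weights(mu, sigma, cluster_mu, cluster_sigma, alpha=0.5):
    """Map per-weight Gaussians onto K shared Gaussians.

    mu, sigma:                 (N,) per-weight posterior parameters
    cluster_mu, cluster_sigma: (K,) shared "codebook" Gaussians
    alpha:                     blend between each weight's own Gaussian
                               and its assigned shared Gaussian
    """
    # Pairwise W2^2 distances between each weight and each cluster: (N, K)
    d = wasserstein2_gaussian(mu[:, None], sigma[:, None],
                              cluster_mu[None, :], cluster_sigma[None, :])
    idx = d.argmin(dim=1)  # nearest shared Gaussian per weight
    shared_mu = alpha * mu + (1 - alpha) * cluster_mu[idx]
    shared_sigma = alpha * sigma + (1 - alpha) * cluster_sigma[idx]
    return shared_mu, shared_sigma, idx

# Example: 10,000 weight posteriors compressed onto 16 shared Gaussians.
mu = torch.randn(10_000)
sigma = torch.rand(10_000) * 0.1
cluster_mu = torch.linspace(-2, 2, 16)
cluster_sigma = torch.full((16,), 0.05)
new_mu, new_sigma, assignment = share_weights(mu, sigma, cluster_mu, cluster_sigma)
```

Storing only the cluster indices plus the K shared Gaussian parameters, rather than a full mean and variance per weight, is what would yield the kind of parameter and model-size reductions the abstract reports.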