Sampling-based inference for large linear models with application to linearised Laplace

Published: 20 Jun 2023, Last Modified: 18 Jul 2023
Venue: AABI 2023 - Fast Track
Keywords: Sampling, Bayesian inference, expectation maximisation, Linearised Laplace, Bayesian Neural Networks
TL;DR: We draw posterior samples and optimise hyperparameters for linear models with millions of observations and parameters, and then apply the method to NNs.
Abstract: Large-scale linear models are ubiquitous throughout machine learning, with contemporary application as surrogate models for neural network uncertainty quantification; that is, the linearised Laplace method. Alas, the computational cost associated with Bayesian linear models constrains this method’s application to small networks, small output spaces and small datasets. We address this limitation by introducing a scalable sample-based Bayesian inference method for conjugate Gaussian multi-output linear models, together with a matching method for hyper-parameter (regularisation strength) selection. Furthermore, we use a classic feature normalisation method, the g-prior, to resolve a previously highlighted pathology of the linearised Laplace method. Together, these contributions allow us to perform linearised neural network inference with ResNet-18 on CIFAR100 (11M parameters, 100 outputs × 50k datapoints), with ResNet-50 on Imagenet (25M parameters, 1000 outputs × 1.2M datapoints) and with a U-Net on a high-resolution tomographic reconstruction task (2M parameters, 251k output dimensions).
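The abstract names two core ingredients: drawing posterior samples for a conjugate Gaussian linear model by solving randomised optimisation problems, and selecting the regularisation strength (prior precision) with an expectation-maximisation update. The following is a minimal NumPy sketch of these two ideas at toy scale, assuming standard Bayesian linear regression y = Φw + ε with prior w ~ N(0, α⁻¹I); the function names (`sample_posterior`, `em_update_alpha`) and the closed-form solve are illustrative stand-ins, not the paper's large-scale iterative implementation.

```python
import numpy as np

def sample_posterior(Phi, y, alpha, sigma2, rng):
    """Draw one exact posterior sample of w via perturb-and-optimise:
    the minimiser of a regularised least-squares problem with randomised
    targets is distributed according to the Gaussian posterior."""
    n, d = Phi.shape
    w0 = rng.normal(size=d) / np.sqrt(alpha)    # prior draw, w0 ~ N(0, alpha^-1 I)
    eps = rng.normal(size=n) * np.sqrt(sigma2)  # noise draw, eps ~ N(0, sigma2 I)
    # Minimiser of (1/sigma2)||Phi w - (y + eps)||^2 + alpha ||w - w0||^2.
    # Solved in closed form here; at scale this would be an iterative solver.
    A = Phi.T @ Phi / sigma2 + alpha * np.eye(d)
    b = Phi.T @ (y + eps) / sigma2 + alpha * w0
    return np.linalg.solve(A, b)

def em_update_alpha(samples):
    """EM update for the prior precision: alpha_new = d / E[||w||^2],
    with the posterior expectation estimated by Monte Carlo."""
    d = samples.shape[1]
    return d / np.mean(np.sum(samples ** 2, axis=1))

# Toy problem (sizes are illustrative).
rng = np.random.default_rng(0)
n, d, sigma2 = 200, 50, 0.1
Phi = rng.normal(size=(n, d))
y = Phi @ rng.normal(size=d) + rng.normal(size=n) * np.sqrt(sigma2)

alpha = 1.0
for _ in range(20):  # alternate posterior sampling with EM updates of alpha
    samples = np.stack([sample_posterior(Phi, y, alpha, sigma2, rng)
                        for _ in range(64)])
    alpha = em_update_alpha(samples)
print(f"selected prior precision alpha ~= {alpha:.3f}")
```

Alternating the sampler with the EM update selects the regularisation strength without ever forming the posterior covariance or evaluating the evidence, which is what makes a sample-based scheme of this kind attractive at the network sizes quoted in the abstract.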
Publication Venue: ICLR 2023
Submission Number: 5