Sampling-based inference for large linear models, with application to linearised Laplace

02 Jan 2023 · OpenReview Archive Direct Upload
Abstract: Large-scale linear models are ubiquitous throughout machine learning, with contemporary application as surrogate models for neural network uncertainty quantification; that is, the linearised Laplace method. Alas, the computational cost associated with Bayesian linear models constrains this method's application to small networks, small output spaces and small datasets. We address this limitation by introducing a scalable sample-based Bayesian inference method for conjugate Gaussian multi-output linear models, together with a matching method for hyperparameter (regularisation) selection. Furthermore, we use a classic feature normalisation method (the g-prior) to resolve a previously highlighted pathology of the linearised Laplace method. Together, these contributions allow us to perform linearised neural network inference with ResNet-18 on CIFAR100 (11M parameters, 100 output dimensions × 50k datapoints) and with a U-Net on a high-resolution tomographic reconstruction task (2M parameters, 251k output dimensions).
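To illustrate the kind of sample-based inference the abstract refers to, the sketch below draws posterior samples from a conjugate Gaussian linear model via perturbed least-squares ("sample-then-optimize"): each sample is the solution of a regularised least-squares problem with noised targets and a noised prior mean, so large-scale variants can replace the direct solve with an iterative solver. This is an illustrative sketch of the general technique, not the paper's exact algorithm; the function name and parameters are our own.

```python
import numpy as np

def sample_posterior(Phi, y, alpha, beta, n_samples=10, rng=None):
    """Draw samples from the posterior of the conjugate Gaussian model
        y ~ N(Phi w, beta^{-1} I),  w ~ N(0, alpha^{-1} I).
    Each sample solves
        argmin_w  beta * ||Phi w - y_tilde||^2 + alpha * ||w - w0||^2,
    where y_tilde perturbs the targets with observation noise and w0 is
    a prior draw. The minimisers are exact posterior samples.
    (Hypothetical sketch; not the paper's implementation.)"""
    rng = np.random.default_rng(rng)
    n, d = Phi.shape
    samples = []
    for _ in range(n_samples):
        y_tilde = y + rng.normal(scale=beta**-0.5, size=n)  # perturbed targets
        w0 = rng.normal(scale=alpha**-0.5, size=d)          # prior sample
        A = beta * Phi.T @ Phi + alpha * np.eye(d)          # posterior precision
        b = beta * Phi.T @ y_tilde + alpha * w0
        samples.append(np.linalg.solve(A, b))
    return np.stack(samples)
```

One can verify the mean and covariance of the minimisers match the exact posterior N(mu, Sigma) with Sigma = (beta Phi^T Phi + alpha I)^{-1} and mu = beta Sigma Phi^T y, which is what makes the trick attractive at scale: each sample costs one linear solve rather than a full covariance factorisation.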

