Sampling from Gaussian Process Posteriors using Stochastic Gradient Descent

Published: 21 Sept 2023, Last Modified: 12 Jan 2024, NeurIPS 2023 oral
Keywords: Gaussian processes, scalable learning, posterior sampling, Bayesian optimization
TL;DR: We sample from GP posteriors using SGD and develop a spectral characterization for why it works, even in cases of non-convergence.
Abstract: Gaussian processes are a powerful framework for quantifying uncertainty and for sequential decision-making but are limited by the requirement of solving linear systems. In general, this has a cubic cost in dataset size and is sensitive to conditioning. We explore stochastic gradient algorithms as a computationally efficient method of approximately solving these linear systems: we develop low-variance optimization objectives for sampling from the posterior and extend these to inducing points. Counterintuitively, stochastic gradient descent often produces accurate predictions, even in cases where it does not converge quickly to the optimum. We explain this through a spectral characterization of the implicit bias from non-convergence. We show that stochastic gradient descent produces predictive distributions close to the true posterior both in regions with sufficient data coverage, and in regions sufficiently far away from the data. Experimentally, stochastic gradient descent achieves state-of-the-art performance on sufficiently large-scale or ill-conditioned regression tasks. Its uncertainty estimates match the performance of significantly more expensive baselines on a large-scale Bayesian optimization task.
Submission Number: 8394
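
The core computational idea the abstract describes, replacing an exact solve of the GP linear system (K + σ²I)v = y with stochastic gradient steps on a quadratic objective whose minimizer is the solution, can be sketched as follows. Everything here (the toy 1-D data, RBF kernel, single-coordinate gradient estimator, and step sizes) is an illustrative assumption; the paper develops lower-variance objectives and an inducing-point extension, neither of which this sketch reproduces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (illustrative setup, not from the paper).
n = 100
X = np.sort(rng.uniform(-3.0, 3.0, size=n))
y = np.sin(2.0 * X) + 0.5 * rng.standard_normal(n)

def rbf(a, b, lengthscale=0.5):
    """Squared-exponential kernel matrix between 1-D input arrays a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale**2)

noise = 0.25                       # observation noise variance sigma^2
A = rbf(X, X) + noise * np.eye(n)  # system matrix K + sigma^2 I

# Stochastic gradient descent on the quadratic objective
#   f(v) = 0.5 v^T A v - y^T v,
# whose unique minimizer solves A v = y. The stochastic gradient here
# is a single randomly chosen coordinate of the full gradient A v - y,
# scaled by the corresponding diagonal entry -- a simple estimator for
# illustration only.
v = np.zeros(n)
for _ in range(100_000):
    i = rng.integers(n)
    grad_i = A[i] @ v - y[i]       # i-th coordinate of the gradient
    v[i] -= 0.9 / A[i, i] * grad_i

# GP posterior mean at test inputs: m(x*) = k(x*, X) (K + sigma^2 I)^{-1} y.
Xs = np.linspace(-3.0, 3.0, 50)
mean_sgd = rbf(Xs, X) @ v
mean_exact = rbf(Xs, X) @ np.linalg.solve(A, y)
print("max abs deviation from exact solve:", np.max(np.abs(mean_sgd - mean_exact)))
```

The same solver applied to a noise-perturbed target gives pathwise posterior samples rather than just the mean, which is the setting the paper focuses on; the abstract's point is that even when such iterations have not converged in the small-eigenvalue subspace, the resulting predictive distribution remains accurate where it matters.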