Variational Gaussian Process Models without Matrix Inverses

AABI 2019, 16 Oct 2019 (modified: 05 May 2023)
Keywords: Gaussian processes, variational inference, Bayesian deep learning
TL;DR: We present a variational lower bound for GP models that can be optimised without computing expensive matrix operations like inverses, while providing the same guarantees as existing variational approximations.
Abstract: Large matrix inversions have often been cited as a major impediment to scaling Gaussian process (GP) models. With GPs used as building blocks for ever more sophisticated Bayesian deep learning models, removing these impediments is a necessary step for achieving large-scale results. We present a variational approximation for a wide range of GP models that does not require a matrix inverse to be computed at each optimisation step. Instead, our bound directly parameterises a free matrix as an additional variational parameter; at local maxima of the bound, this matrix equals the matrix inverse. We prove that our bound gives the same guarantees as earlier variational approximations. We demonstrate some beneficial properties of the bound experimentally, although realising significant wall-clock speed improvements will require further advances in optimisation and implementation.
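The core idea can be illustrated numerically. The sketch below is an illustration of the underlying identity, not code from the paper: for any symmetric positive-definite matrix T, tr(TK) - log|T| - M >= log|K|, with equality exactly at T = K^{-1}. Optimising a free matrix T therefore recovers the inverse at the optimum while touching K only through matrix products; no inverse, solve, or decomposition of K is taken during the updates.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
K = A @ A.T + 5.0 * np.eye(5)   # stand-in for a kernel matrix K_uu
M = K.shape[0]

# Free SPD parameter T = L @ L.T, stored via its Cholesky factor L.
# Surrogate: tr(T K) - log|T| - M >= log|K|, tight iff T = K^{-1}.
L = np.eye(M) * np.sqrt(M / np.trace(K))   # crude SPD initialisation

lr = 1e-3
for _ in range(20000):
    # Gradient of the surrogate w.r.t. L: 2 K L - 2 diag(1 / L_ii).
    # Only matrix products with K appear; K is never inverted or factorised.
    grad = 2.0 * K @ L - 2.0 * np.diag(1.0 / np.diag(L))
    L = L - lr * np.tril(grad)             # keep L lower triangular

T = L @ L.T
surrogate = np.trace(T @ K) - 2.0 * np.log(np.diag(L)).sum() - M
print("surrogate at optimum:", surrogate)               # ~= log|K|
print("log|K| (reference)  :", np.linalg.slogdet(K)[1])
print("max |T - inv(K)|    :", np.abs(T - np.linalg.inv(K)).max())  # ~= 0
# np.linalg.inv is used here only to verify the result of the sketch.
```

Parameterising T via its Cholesky factor keeps it positive definite and turns log|T| into a cheap sum over the log-diagonal, so each update costs only matrix multiplications with K, operations that parallelise well on modern hardware.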