Inverse-Free Sparse Variational Gaussian Processes

Published: 10 Oct 2024 · Last Modified: 07 Dec 2024 · NeurIPS BDU Workshop 2024 Poster · CC BY 4.0
Keywords: Gaussian Processes, Variational Inference, Natural Gradients
TL;DR: We propose new techniques to improve and optimise an existing inverse-free bound for sparse variational GPs, enabling effective training on realistic datasets without computing any matrix decompositions.
Abstract: Gaussian processes (GPs) are a powerful prior over functions, but performing inference with them requires inverting or decomposing the kernel matrix, making them poorly suited to modern hardware. To address this, variational bounds that require only matrix multiplications, obtained by introducing an additional variational parameter $\mathbf{T} \in \mathbb{R}^{M\times M}$, have been proposed. In practice, however, optimising $\mathbf{T}$ with typical deep learning optimisers is challenging, which limits the practical utility of these bounds. In this work, we address this by introducing a preconditioner for a variational parameter in the bound, a tailored update for $\mathbf{T}$ based on natural gradients, and a stopping criterion that determines the number of updates. The result is an inverse-free method that is on par with existing approaches on a per-iteration basis, with low-precision computation and wall-clock speedups as the next step.
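To make the idea of a matmul-only bound concrete, below is a minimal sketch of how an auxiliary matrix $\mathbf{T}$ can stand in for the kernel-matrix inverse. It assumes the classical inequalities $\log|\mathbf{K}| \le \operatorname{tr}(\mathbf{T}\mathbf{K}) - \log|\mathbf{T}| - M$ and $\mathbf{K}^{-1} \succeq 2\mathbf{T} - \mathbf{T}\mathbf{K}\mathbf{T}$ (both tight at $\mathbf{T} = \mathbf{K}^{-1}$), which underlie inverse-free bounds of this kind; the function names and the triangular-factor parameterisation of $\mathbf{T}$ are illustrative assumptions, not taken from the paper.

```python
import torch

def matmul_only_surrogates(K, L):
    """Illustrative matmul-only surrogates for log|K| and K^{-1}.

    L is a free lower-triangular parameter with positive diagonal, and
    T = L L^T plays the role of an approximation to K^{-1} (the extra
    variational parameter from the abstract). Both bounds are tight at
    T = K^{-1}; no decomposition of K is ever computed.
    """
    M = K.shape[-1]
    T = L @ L.T
    # log|T| is cheap under this parameterisation: 2 * sum(log diag(L)).
    logdet_T = 2.0 * torch.log(torch.diagonal(L)).sum()
    # Concavity of log-det:  log|K| <= tr(T K) - log|T| - M.
    logdet_K_upper = torch.einsum('ij,ji->', T, K) - logdet_T - M
    # PSD inequality (TK - I) K^{-1} (KT - I) >= 0  =>  K^{-1} >= 2T - T K T.
    Kinv_lower = 2.0 * T - T @ K @ T
    return logdet_K_upper, Kinv_lower
```

Plugging such surrogates into the sparse variational ELBO gives an objective evaluable with matrix multiplications only; the difficulty the paper targets is making the joint optimisation of $\mathbf{T}$ (here the factor L) behave well, which motivates the preconditioner, natural-gradient update, and stopping criterion described above.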
Submission Number: 125