Keywords: Gaussian Process, Variational Inference
Abstract: We introduce a new interpretation of sparse variational approximations for Gaussian processes using inducing points, which can lead to more scalable algorithms than previous methods. It is based on decomposing a GP as a sum of two independent processes: one in the subspace spanned by the inducing basis and the other in the orthogonal complement. We show that this formulation recovers existing methods and at the same time allows us to obtain tighter lower bounds on the marginal likelihood and new stochastic variational inference algorithms. We demonstrate the efficiency of these algorithms in several GP models ranging from standard regression to multi-class classification using (deep) convolutional GPs, and report state-of-the-art results on CIFAR-10 for purely GP-based models.
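The decomposition described in the abstract can be illustrated in finite-dimensional form. A minimal NumPy sketch, under assumed details not given here (an RBF kernel, random training and inducing inputs): the covariance of the component lying in the span of the inducing basis is the Nyström matrix Q, and the residual K − Q is the covariance of the independent orthogonal-complement process.

```python
import numpy as np

def rbf(X, Y, lengthscale=1.0):
    # Squared-exponential (RBF) kernel matrix between input sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))  # training inputs (illustrative)
Z = rng.uniform(-3, 3, size=(7, 1))   # inducing inputs (hypothetical placement)

Knn = rbf(X, X)   # prior covariance at the training inputs
Kmm = rbf(Z, Z)   # covariance among inducing points
Knm = rbf(X, Z)   # cross-covariance

# Covariance of the GP component in the subspace spanned by the
# inducing basis (the Nystrom approximation) ...
Q = Knm @ np.linalg.solve(Kmm, Knm.T)
# ... and of the independent component in the orthogonal complement.
R = Knn - Q

# R is positive semi-definite, so Knn = Q + R is a valid decomposition
# into covariances of two independent GP components.
min_eig = np.linalg.eigvalsh(R).min()
print(min_eig)
```

Standard sparse methods keep only the Q part at the inducing points; the interpretation above makes the discarded orthogonal-complement process explicit, which is what the paper exploits.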
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/sparse-orthogonal-variational-inference-for/code)