Sequential Learning in GPs with Memory and Bayesian Leverage Score

Published: 18 Nov 2022, Last Modified: 05 May 2023, CLL@ACML2022
Keywords: Sequential learning, sparse Gaussian process, online learning
TL;DR: Hyperparameter learning for sequential sparse GPs.
Abstract: Limited access to previous data is challenging when using Gaussian process (GP) models for sequential learning. It leads to inaccuracies in the posterior, the hyperparameter estimates, and the inducing variables. The recently proposed ‘dual’ sparse GP model enables inference of the variational parameters in such a setup. In this paper, building on the dual GP, we tackle the problem that the lack of access to previous data poses for estimating the hyperparameters of a sparse Gaussian process. We propose utilizing the concept of ‘memory’. To select a representative memory, we develop the ‘Bayesian leverage score’, built on the ridge leverage score. We experiment and perform an ablation study on a sequential learning data set, split MNIST, to showcase the usefulness of the proposed method.