Scalable Multi-Output Gaussian Processes with Stochastic Variational Inference

TMLR Paper 4447 Authors

11 Mar 2025 (modified: 18 Apr 2025) · Under review for TMLR · CC BY 4.0
Abstract: The Multi-Output Gaussian Process (MOGP) is a popular tool for modelling data from multiple sources. A typical choice for building a covariance function for an MOGP is the Linear Model of Coregionalisation (LMC), which models the covariance between outputs parametrically. The Latent Variable MOGP (LV-MOGP) generalises this idea by modelling the covariance between outputs with a kernel applied to latent variables, one per output, yielding a flexible MOGP that generalises efficiently to new outputs with few data points. However, the computational complexity of the LV-MOGP grows linearly with the number of outputs, making it unsuitable for problems with many outputs. In this paper, we propose a stochastic variational inference approach for the LV-MOGP that allows mini-batching over both inputs and outputs, making the computational complexity per training iteration independent of the number of outputs. We demonstrate the performance of the model by benchmarking against other MOGP models on several real-world datasets, including spatio-temporal climate modelling and spatial transcriptomics.
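To make the covariance structure described in the abstract concrete, below is a minimal sketch of the LV-MOGP idea and of mini-batching over both inputs and outputs. This is not the authors' implementation: the RBF kernel, the Kronecker construction, the dimensions, and all variable names are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: P outputs, N inputs, Q-dimensional latent
# variable per output, D-dimensional inputs.
P, N, Q, D = 50, 200, 2, 1

X = rng.uniform(-3, 3, size=(N, D))   # shared inputs
H = rng.normal(size=(P, Q))           # one latent variable per output

def rbf(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

# LV-MOGP-style covariance: a kernel over latent variables models the
# correlation between outputs, and the joint covariance factorises as
# cov(f_p(x), f_q(x')) = k_H(h_p, h_q) * k_X(x, x').
K_H = rbf(H, H)   # (P, P) covariance between outputs
K_X = rbf(X, X)   # (N, N) covariance between inputs

# Mini-batching over BOTH axes: each stochastic step touches only a
# subset of outputs and a subset of inputs, so the per-iteration cost
# depends on the batch sizes, not on P or N.
batch_p = rng.choice(P, size=5, replace=False)
batch_n = rng.choice(N, size=32, replace=False)
K_batch = np.kron(K_H[np.ix_(batch_p, batch_p)],
                  K_X[np.ix_(batch_n, batch_n)])
print(K_batch.shape)   # (160, 160), i.e. (5 * 32, 5 * 32)

The point of the sketch is the last step: once output correlations come from a kernel on latent variables, a training iteration can subsample outputs exactly as it subsamples inputs, which is what makes the per-iteration complexity independent of the number of outputs.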
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:

We added one more row, corresponding to independent SVGPs with Poisson likelihoods, to the NYC experiment table.

Assigned Action Editor: Vincent Fortuin
Submission Number: 4447
