Scalable Multi-Output Gaussian Processes with Stochastic Variational Inference

TMLR Paper 4447 Authors

11 Mar 2025 (modified: 19 May 2025) · Decision pending for TMLR · CC BY 4.0
Abstract: The Multi-Output Gaussian Process (MOGP) is a popular tool for modelling data from multiple sources. A typical choice of covariance function for a MOGP is the Linear Model of Coregionalisation (LMC), which models the covariance between outputs parametrically. The Latent Variable MOGP (LV-MOGP) generalises this idea by modelling the covariance between outputs with a kernel applied to latent variables, one per output, yielding a flexible MOGP that generalises efficiently to new outputs with few data points. However, the computational complexity of the LV-MOGP grows linearly with the number of outputs, making it unsuitable for problems with a large number of outputs. In this paper, we propose a stochastic variational inference approach for the LV-MOGP that allows mini-batching over both inputs and outputs, making the computational complexity per training iteration independent of the number of outputs. We demonstrate the performance of the model by benchmarking it against other MOGP models on several real-world datasets, including spatio-temporal climate modelling and spatial transcriptomics.
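To make the doubly mini-batched objective concrete, the following is a minimal Python sketch of the training-loop structure the abstract describes: sampling a mini-batch of inputs and a mini-batch of outputs each iteration, then rescaling so the stochastic term is an unbiased estimate of the full-data likelihood contribution. All names, dimensions, and the placeholder `elbo_minibatch` function are hypothetical illustrations, not the paper's actual implementation; the real per-point terms would be expectations under the variational posterior over inducing variables and the per-output latent variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: N shared inputs, P outputs.
N, P = 500, 1000
X = rng.normal(size=(N, 2))   # shared input locations
Y = rng.normal(size=(N, P))   # observations, one column per output

def elbo_minibatch(x_idx, p_idx):
    """Placeholder for the per-point expected log-likelihood terms.

    In the actual LV-MOGP these would involve the variational posterior
    over inducing variables and latent variables (one per output); here
    a dummy value stands in so the loop runs end to end.
    """
    return float(np.sum(Y[np.ix_(x_idx, p_idx)] ** 2))

n_batch, p_batch = 32, 64
for step in range(100):
    x_idx = rng.choice(N, size=n_batch, replace=False)  # mini-batch of inputs
    p_idx = rng.choice(P, size=p_batch, replace=False)  # mini-batch of outputs
    # Rescaling makes the doubly mini-batched sum an unbiased estimator
    # of the sum over all N * P input-output pairs.
    scale = (N / n_batch) * (P / p_batch)
    elbo_estimate = scale * elbo_minibatch(x_idx, p_idx)
    # ... a gradient step on variational and kernel parameters would go here
```

Because both batch sizes are fixed hyperparameters, the per-iteration cost of this loop does not grow with P, which is the property the abstract highlights.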
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:

In this version, we add a clarification of our contributions to the final paragraph of the introduction (highlighted in blue).

Assigned Action Editor: Vincent Fortuin
Submission Number: 4447