Graph Random Features for Scalable Gaussian Processes

Published: 23 Sept 2025, Last Modified: 23 Dec 2025 · SPIGM @ NeurIPS · CC BY 4.0
Keywords: Gaussian Processes, Bayesian Inference, Graph Machine Learning, Bayesian Optimisation, Scalable Machine Learning, Random Features
TL;DR: We enable graph Gaussian process inference in O(N^{3/2}) time using graph random features (GRFs), making Bayesian optimisation feasible on million-node graphs on a single computer chip.
Abstract: We study the application of graph random features (GRFs), a recently introduced stochastic estimator of graph node kernels, to scalable Gaussian processes on discrete input spaces. We prove that (under mild assumptions) Bayesian inference with GRFs enjoys $\mathcal{O}(N^{3/2})$ time complexity with respect to the number of nodes $N$, with probabilistic accuracy guarantees. In contrast, exact kernels generally incur $\mathcal{O}(N^3)$. Substantial wall-clock speedups and memory savings unlock Bayesian optimisation on graphs with over $10^6$ nodes on a single computer chip, whilst preserving competitive performance.
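The abstract's central object, a stochastic estimator of a graph node kernel built from random walks, can be illustrated with a minimal sketch. The example below is a hypothetical illustration, not the authors' implementation: it estimates entries of the resolvent kernel $K = (I - \alpha P)^{-1}$ (with $P$ the row-normalised adjacency matrix) by running walks that survive each step with probability $\alpha$, so the expected visit count to node $j$ equals $\sum_k \alpha^k (P^k)_{ij}$. All names and parameters here are assumptions for illustration.

```python
import random

# Hypothetical sketch of the random-walk idea behind graph random
# features (GRFs); NOT the paper's implementation. We estimate row i of
# the resolvent kernel K = (I - alpha*P)^{-1}, where P is the
# row-normalised adjacency matrix of an unweighted graph.

def grf_kernel_row(adj, start, alpha=0.5, n_walks=20000, rng=None):
    """Unbiased Monte Carlo estimate of row `start` of (I - alpha*P)^{-1}.

    Each walk continues with probability alpha per step (otherwise it
    terminates), so the expected number of visits to node j is
    sum_k alpha^k (P^k)[start, j], i.e. exactly the resolvent entry.
    """
    rng = rng or random.Random(0)
    n = len(adj)
    est = [0.0] * n
    for _ in range(n_walks):
        node = start
        while True:
            est[node] += 1.0 / n_walks  # count this visit
            if rng.random() >= alpha:   # terminate with prob 1 - alpha
                break
            node = rng.choice(adj[node])  # uniform step to a neighbour
    return est

# Usage on a 4-cycle: neighbours of node i are (i - 1) % 4 and (i + 1) % 4.
adj = [[(i - 1) % 4, (i + 1) % 4] for i in range(4)]
row = grf_kernel_row(adj, start=0)
```

The cost per node scales with the number and length of walks rather than with $N^3$, which is the mechanism that the paper's subquadratic complexity result exploits.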
Submission Number: 38