Graph Transformer Neural Processes

ICLR 2026 Conference Submission 16671 Authors

19 Sept 2025 (modified: 08 Oct 2025) · License: CC BY 4.0
Keywords: neural processes, meta-learning, graph neural networks, uncertainty quantification
Abstract: Neural Processes (NPs) are a powerful class of models for forming predictive distributions. Rather than deriving uncertainties from an assumed prior over functions, as Gaussian Processes do, NPs meta-learn uncertainties for unseen tasks; in practice, however, this meta-learning may require a great deal of data. To address this, we propose representing the inputs to the model as a graph and labelling the edges of the graph with similarities or differences between points in the context and target sets, allowing for invariant representations. We propose an architecture that can operate over such a graph and experimentally show that it achieves strong performance even when limited data is available. We then apply our model to three real-world regression tasks to demonstrate the advantages of representing the data as a graph.
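To make the core idea concrete: context and target points become nodes of a graph, and edges are labelled with pairwise relations between inputs. The sketch below is a minimal illustration of that construction, not the authors' architecture; the class name `GraphEdgeNP`, the single round of message passing, and the use of raw input differences as edge labels are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class GraphEdgeNP(nn.Module):
    """Minimal sketch: message passing over a fully connected graph whose
    nodes are context/target points and whose edges carry pairwise input
    differences (a translation-invariant edge labelling)."""

    def __init__(self, x_dim: int, y_dim: int, hidden: int = 64):
        super().__init__()
        self.node_embed = nn.Linear(y_dim + 1, hidden)   # observed value + context mask
        self.edge_embed = nn.Linear(x_dim, hidden)       # pairwise input differences
        self.message = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU())
        self.readout = nn.Linear(hidden, 2 * y_dim)      # predictive mean and log-variance

    def forward(self, xc, yc, xt):
        # Stack context and target points into one node set.
        x = torch.cat([xc, xt], dim=0)                   # [N, x_dim]
        n_ctx, n_tgt = xc.shape[0], xt.shape[0]
        y = torch.cat([yc, torch.zeros(n_tgt, yc.shape[1])], dim=0)
        mask = torch.cat([torch.ones(n_ctx, 1), torch.zeros(n_tgt, 1)], dim=0)
        h = self.node_embed(torch.cat([y, mask], dim=-1))  # [N, hidden]

        # Edge features: differences x_i - x_j between every pair of inputs.
        e = self.edge_embed(x[:, None, :] - x[None, :, :])  # [N, N, hidden]

        # One round of mean-aggregated message passing over all edges.
        n = h.shape[0]
        msgs = self.message(torch.cat(
            [h[:, None, :].expand(-1, n, -1),
             h[None, :, :].expand(n, -1, -1), e], dim=-1))
        h = h + msgs.mean(dim=1)

        # Read out a Gaussian predictive distribution at the target nodes.
        mean, log_var = self.readout(h[n_ctx:]).chunk(2, dim=-1)
        return mean, log_var

# Usage: 10 context points and 5 target points in a 2-D input space.
xc, yc, xt = torch.randn(10, 2), torch.randn(10, 1), torch.randn(5, 2)
mean, log_var = GraphEdgeNP(x_dim=2, y_dim=1)(xc, yc, xt)  # each [5, 1]
```

Labelling edges with input differences rather than absolute positions is one way to obtain the invariant representations the abstract refers to: shifting all inputs by a constant leaves every edge label, and hence the prediction, unchanged.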
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 16671