Message Passing Neural Processes

Published: 25 Mar 2022, Last Modified: 22 Oct 2023. GTRL 2022 Poster.
Keywords: message passing, neural processes, graph representation learning, node classification, uncertainty, few-shot learning, active learning, transductive, inductive, cellular automata, shapenet, biochemical
TL;DR: We equip Neural Processes with relational inductive biases, showing significant gains in structured settings, including existing geometric and biochemical datasets as well as newly-proposed citation network and cellular automata tasks.
Abstract: Neural Processes (NPs) are powerful and flexible models able to incorporate uncertainty when representing stochastic processes, while maintaining linear time complexity. However, NPs produce a latent description by aggregating independent representations of context points, and so lack the ability to exploit relational information present in many datasets. This renders NPs ineffective in settings where the stochastic process is primarily governed by neighbourhood rules, such as cellular automata (CA), and limits performance for any task where relational information remains unused. We address this shortcoming by introducing Message Passing Neural Processes (MPNPs), the first class of NPs that explicitly makes use of relational structure within the model. Our evaluation shows that MPNPs thrive at lower sampling rates, both on existing benchmarks and on newly-proposed CA and Cora-Branched tasks. We further report strong generalisation over density-based CA rule-sets and significant gains in challenging arbitrary-labelling and few-shot learning setups.
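To make the contrast with plain NPs concrete, below is a minimal PyTorch sketch of the idea the abstract describes: context-point representations are refined by message passing over the graph before being aggregated into a global latent, from which per-node predictions are decoded. All names (`MPNPSketch`), layer sizes, and the mean-neighbourhood aggregation are illustrative assumptions for exposition, not the authors' exact architecture.

```python
# Minimal sketch of a Message Passing Neural Process, as described in the
# abstract: message passing over context/target nodes, then aggregation of
# context representations into a latent. Illustrative assumptions throughout;
# not the paper's exact architecture.
import torch
import torch.nn as nn


class MPNPSketch(nn.Module):
    def __init__(self, in_dim, label_dim, hidden=64, z_dim=32, mp_steps=2):
        super().__init__()
        # Context nodes carry their label; target nodes get a zero placeholder.
        self.embed = nn.Linear(in_dim + label_dim, hidden)
        # One weight matrix per message-passing step (mean aggregation).
        self.mp = nn.ModuleList(
            nn.Linear(2 * hidden, hidden) for _ in range(mp_steps)
        )
        self.to_latent = nn.Linear(hidden, 2 * z_dim)  # mean and log-variance
        self.decoder = nn.Sequential(
            nn.Linear(hidden + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, label_dim),
        )

    def forward(self, x, y, adj, context_mask):
        # x: [N, in_dim] node features; y: [N, label_dim] one-hot labels;
        # adj: [N, N] dense adjacency; context_mask: [N] bool.
        y_in = torch.where(context_mask.unsqueeze(-1), y, torch.zeros_like(y))
        h = torch.relu(self.embed(torch.cat([x, y_in], dim=-1)))
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        for layer in self.mp:
            neigh = adj @ h / deg  # mean over neighbours
            h = torch.relu(layer(torch.cat([h, neigh], dim=-1)))
        # Aggregate (now relationally-informed) context reps into the latent.
        ctx = h[context_mask].mean(dim=0)
        mu, logvar = self.to_latent(ctx).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # Decode a label distribution for every node, conditioned on z.
        z_rep = z.expand(h.size(0), -1)
        return self.decoder(torch.cat([h, z_rep], dim=-1))


# Toy usage: 6 nodes, 3 observed as context, 3-class node classification.
torch.manual_seed(0)
x = torch.randn(6, 5)
y = torch.eye(3)[torch.tensor([0, 1, 2, 0, 1, 2])]
adj = (torch.rand(6, 6) > 0.5).float()
adj = ((adj + adj.T) > 0).float()  # symmetrise
mask = torch.tensor([True, True, True, False, False, False])
logits = MPNPSketch(in_dim=5, label_dim=3)(x, y, adj, mask)
print(logits.shape)  # torch.Size([6, 3])
```

A plain NP would skip the message-passing loop and embed each context point independently; the sketch's neighbourhood aggregation is what lets predictions depend on relational structure, e.g. the neighbourhood rules governing cellular automata.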
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:2009.13895/code)