Attentive Neural Processes

Published: 21 Dec 2018, Last Modified: 22 Oct 2023, ICLR 2019 Conference Blind Submission
Abstract: Neural Processes (NPs) (Garnelo et al., 2018) approach regression by learning to map a context set of observed input-output pairs to a distribution over regression functions. Each function models the distribution of the output given an input, conditioned on the context. NPs have the benefit of fitting observed data efficiently with linear complexity in the number of context input-output pairs, and can learn a wide family of conditional distributions; they learn predictive distributions conditioned on context sets of arbitrary size. Nonetheless, we show that NPs suffer a fundamental drawback of underfitting, giving inaccurate predictions at the inputs of the observed data they condition on. We address this issue by incorporating attention into NPs, allowing each input location to attend to the relevant context points for the prediction. We show that this greatly improves the accuracy of predictions, results in noticeably faster training, and expands the range of functions that can be modelled.
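To make the attention mechanism described above concrete, the sketch below shows the cross-attention step in plain NumPy: each target input acts as a query over the observed context inputs, replacing the NP's uniform mean-pooling of context representations with a query-specific weighted aggregation. This is a minimal illustration, not the released implementation; the paper also employs multi-head and self-attention variants, and the function name `cross_attention`, the toy shapes, and the use of simple scaled dot-product attention are assumptions made here for clarity.

```python
# Minimal sketch (not the authors' code) of cross-attention over context points:
# queries are target inputs, keys are context inputs, values are per-context
# representations r_i of each (x_i, y_i) pair.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(target_x, context_x, context_r):
    """Scaled dot-product attention.

    target_x:  [m, d_x]  target input locations (queries)
    context_x: [n, d_x]  observed context input locations (keys)
    context_r: [n, d_r]  representation of each context pair (values)
    returns:   [m, d_r]  one aggregated representation per target input
    """
    scale = np.sqrt(target_x.shape[-1])
    weights = softmax(target_x @ context_x.T / scale, axis=-1)  # [m, n]
    return weights @ context_r                                  # [m, d_r]

# Toy usage: 5 context points, 3 target locations, 1-D inputs, 8-D representations.
rng = np.random.default_rng(0)
ctx_x, ctx_r = rng.normal(size=(5, 1)), rng.normal(size=(5, 8))
tgt_x = rng.normal(size=(3, 1))
print(cross_attention(tgt_x, ctx_x, ctx_r).shape)  # (3, 8)
```

Because the weights depend on the target input, context points near a query dominate its representation, which is how the model avoids the underfitting seen with a single averaged context vector.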
Keywords: Neural Processes, Conditional Neural Processes, Stochastic Processes, Regression, Attention
TL;DR: A model for regression that learns conditional distributions of a stochastic process, by incorporating attention into Neural Processes.
Code: [deepmind/neural-processes](https://github.com/deepmind/neural-processes) + [5 community implementations](https://paperswithcode.com/paper/?openreview=SkE6PjC9KX)
Data: [CelebA](https://paperswithcode.com/dataset/celeba), [MNIST](https://paperswithcode.com/dataset/mnist)
Community Implementations: [6 code implementations](https://www.catalyzex.com/paper/arxiv:1901.05761/code)
