Exploiting Inferential Structure in Neural Processes

Published: 26 Jul 2022, Last Modified: 20 Apr 2025, TPM 2022
Keywords: neural processes, structured inference
TL;DR: We incorporate structured inference networks into neural processes.
Abstract: Neural processes (NPs) can be extremely fast at test time, but their training requires a wide range of context sets to generalize well. We propose to address this issue by incorporating the structure of graphical models into NPs. This leads to aggregation strategies in which context points are appropriately weighted, generalizing a recent proposal by Volpp et al. [2020]. The weighting further reveals an interpretation of each point, which we refer to as the neural sufficient statistics. We expect that exploiting the information in structured priors can alleviate the data inefficiency of NPs.
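
The abstract refers to aggregation strategies in which each context point receives a weight, generalizing the Bayesian aggregation of Volpp et al. [2020]. Below is a minimal sketch of that kind of weighted aggregation, not the authors' implementation: each context point is encoded into a per-point latent observation with a learned precision, and the points are combined by a Gaussian product-of-experts update, so points with higher precision carry more weight. All module names, shapes, and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WeightedContextAggregator(nn.Module):
    """Illustrative weighted aggregation for an NP encoder (hypothetical names/sizes)."""

    def __init__(self, x_dim, y_dim, latent_dim, hidden_dim=128):
        super().__init__()
        # Per-point encoder: maps (x_i, y_i) to a latent "observation" r_i
        # and a log-variance that determines the point's weight.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2 * latent_dim),
        )
        # Gaussian prior over the latent summary z ~ N(mu_0, diag(var_0)).
        self.mu_0 = nn.Parameter(torch.zeros(latent_dim))
        self.log_var_0 = nn.Parameter(torch.zeros(latent_dim))

    def forward(self, x_ctx, y_ctx):
        # x_ctx: (batch, num_ctx, x_dim), y_ctx: (batch, num_ctx, y_dim)
        h = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1))
        r, log_var = h.chunk(2, dim=-1)      # per-point observation and noise level
        prec = torch.exp(-log_var)           # per-point precision acts as the weight

        prior_prec = torch.exp(-self.log_var_0)
        # Conjugate Gaussian update: precisions add, means are precision-weighted.
        post_prec = prior_prec + prec.sum(dim=1)
        post_mu = (prior_prec * self.mu_0 + (prec * r).sum(dim=1)) / post_prec
        return post_mu, 1.0 / post_prec      # mean and variance of q(z | context)
```

Setting all per-point precisions equal would recover plain mean aggregation, which is one way to see how a structured, graphical-model-style update generalizes the standard NP aggregator.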
Community Implementations: [3 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/exploiting-inferential-structure-in-neural/code)