Rényi Neural Processes

Published: 01 May 2025, Last Modified: 18 Jun 2025, ICML 2025 Oral, CC BY 4.0
TL;DR: Using the Rényi divergence for robust inference in neural processes
Abstract: Neural Processes (NPs) are deep probabilistic models that represent stochastic processes by conditioning their prior distributions on a set of context points. Despite their advantages in uncertainty estimation for complex distributions, NPs enforce parameterization coupling between the conditional prior model and the posterior model. We show that this coupling amounts to prior misspecification and revisit the NP objective to address this issue. More specifically, we propose Rényi Neural Processes (RNP), a method that replaces the standard KL divergence with the Rényi divergence, dampening the effects of the misspecified prior during posterior updates. We validate our approach across multiple benchmarks, including regression and image inpainting tasks, and show significant performance improvements from RNPs on real-world problems. Our extensive experiments show consistently better log-likelihoods than state-of-the-art NP models.
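To make the objective change concrete, below is a minimal sketch of how the KL term in a Gaussian-latent NP objective could be swapped for a closed-form Rényi divergence, as the abstract describes. This is not the authors' implementation (see the linked repository for that); the function name, the tensor names (`q_mu`, `q_var`, `p_mu`, `p_var`, `recon_log_lik`), and the default `alpha=0.7` are illustrative assumptions.

```python
import torch

def renyi_divergence_diag_gaussians(mu_q, var_q, mu_p, var_p, alpha=0.7):
    """Closed-form Rényi divergence D_alpha(q || p) between two diagonal
    Gaussians q = N(mu_q, var_q) and p = N(mu_p, var_p).

    For alpha in (0, 1) the interpolated variance var_a below is always
    positive, so the expression is well defined. alpha must not equal 1;
    that limit recovers the standard KL divergence.
    """
    var_a = alpha * var_p + (1.0 - alpha) * var_q  # interpolated variance
    # Mahalanobis-style quadratic term in the mean difference
    quad = 0.5 * alpha * ((mu_q - mu_p) ** 2 / var_a).sum(-1)
    # Log-determinant ratio, computed dimension-wise for diagonal covariances
    log_det = (torch.log(var_a)
               - (1.0 - alpha) * torch.log(var_q)
               - alpha * torch.log(var_p)).sum(-1)
    return quad - log_det / (2.0 * (alpha - 1.0))


# Hypothetical training step: replace the KL regularizer of the usual NP
# objective with the Rényi term. q_* come from the posterior encoder over
# context + target points, p_* from the conditional prior encoder over
# context points only, and recon_log_lik is the decoder's log-likelihood
# of the target outputs.
# loss = -(recon_log_lik - renyi_divergence_diag_gaussians(
#     q_mu, q_var, p_mu, p_var, alpha=0.7)).mean()
```

Since the Rényi divergence reduces to the KL divergence as alpha approaches 1, the standard NP objective is the alpha = 1 special case; per the abstract, choosing alpha away from 1 is what dampens the misspecified prior's influence during posterior updates.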
Lay Summary: How can we predict a probability distribution over functions? We set out to answer this question with a deep learning method called Rényi neural processes. The general idea is to encode a function with a prior model conditioned on some observations, then decode that information to make predictions. We identified a prior-misspecification problem in existing neural processes: the prior model can become overconfident because of a parameter-coupling mechanism. In light of this, we proposed a new objective based on the Rényi divergence that mitigates the effects of a misspecified prior. We show consistent likelihood improvements on tasks including time series regression and image inpainting. Our method has implications for predictive distributions over functions, which cover a wide range of tasks in vision and language, such as super-resolution generation and missing-feature imputation. Our findings may also benefit transfer learning, such as fine-tuning tasks where the prior knowledge might be overconfident.
Link To Code: https://github.com/csiro-funml/renyineuralprocesses
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: neural processes, Rényi divergence
Submission Number: 2712