Abstract: Neural processes (NPs) are parametric stochastic processes that can be trained
from a dataset consisting of sets of input-output pairs. At test time, given
a context set of input-output pairs and a set of target inputs, they allow us to
approximate the posterior predictive of the target outputs. NPs have shown promise
in applications such as image super-resolution, conditional image generation, and
scalable Bayesian optimization. It is, however, unclear which objective and model
specification should be used to train NPs. This abstract empirically evaluates the
performance of NPs for different objectives and model specifications. Given that
some objectives and model specifications clearly outperform others, our analysis
can be useful in guiding future research and applications of NPs.
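To make the prediction setup concrete, the following is a minimal sketch of a conditional NP forward pass: encode each context (x, y) pair, aggregate the encodings by a mean so the representation is permutation-invariant in the context set, then decode each target input together with that representation into a predictive mean and standard deviation. All function names and layer sizes here are illustrative assumptions, and the weights are untrained random values, not a specific model from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    # Two-layer MLP with a tanh hidden activation.
    (W1, b1), (W2, b2) = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

def init(din, dhid, dout, rng):
    # Random (untrained) weights, purely for illustration.
    return [(rng.normal(scale=0.1, size=(din, dhid)), np.zeros(dhid)),
            (rng.normal(scale=0.1, size=(dhid, dout)), np.zeros(dout))]

def np_predict(enc, dec, xc, yc, xt):
    # Encode each context (x, y) pair, then mean-aggregate so the
    # representation is invariant to the order of the context set.
    r = mlp(enc, np.concatenate([xc, yc], axis=-1)).mean(axis=0)
    # Decode each target input alongside the context representation
    # into a predictive mean and a softplus-positive standard deviation.
    h = mlp(dec, np.concatenate([xt, np.tile(r, (len(xt), 1))], axis=-1))
    mu, sigma = h[:, :1], np.log1p(np.exp(h[:, 1:]))
    return mu, sigma

enc = init(2, 16, 8, rng)   # (x, y) pair -> 8-dim representation
dec = init(9, 16, 2, rng)   # (x, r) -> (mean, pre-softplus std)

xc, yc = rng.normal(size=(5, 1)), rng.normal(size=(5, 1))  # context set
xt = np.linspace(-1, 1, 7)[:, None]                        # target inputs
mu, sigma = np_predict(enc, dec, xc, yc, xt)
```

In a latent-variable NP, the aggregated representation would instead parameterize a distribution over a latent variable that is sampled before decoding; the choice between these specifications is part of what the evaluation in this abstract addresses.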