Learning to Infer

31 Jan 2018 (modified: 10 Feb 2022) · ICLR 2018 Workshop Submission
Keywords: Bayesian Deep Learning, Amortized Inference, Variational Auto-Encoders, Learning to Learn
TL;DR: We propose a new class of inference models that iteratively encode gradients to estimate approximate posterior distributions.
Abstract: Inference models, which replace an optimization-based inference procedure with a learned model, have been fundamental in advancing Bayesian deep learning, the most notable example being variational auto-encoders (VAEs). In this paper, we propose iterative inference models, which learn to optimize a variational lower bound by repeatedly encoding gradients. Our approach generalizes VAEs under certain conditions, and by viewing VAEs in the context of iterative inference, we provide further insight into several recent empirical findings. We demonstrate the inference optimization capabilities of iterative inference models, explore unique aspects of these models, and show that they outperform standard inference models on typical benchmark data sets.
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10)
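
The TL;DR and abstract describe the core mechanism: refining the parameters of the approximate posterior by repeatedly encoding the gradients of the variational lower bound. Below is a minimal sketch of that refinement loop in PyTorch, assuming a Gaussian approximate posterior, a toy Gaussian decoder, and an additive update rule whose inputs (current parameters concatenated with their gradients), dimensions, and step count are all illustrative assumptions rather than the paper's reported configuration.

```python
import torch
import torch.nn as nn

z_dim, x_dim, h_dim = 8, 32, 64

# Toy generative model: p(x | z) is a unit-variance Gaussian whose mean
# is produced by a small decoder network (an illustrative stand-in).
decoder = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(), nn.Linear(h_dim, x_dim))

# Iterative inference model: encodes the current variational parameters
# together with their gradients and outputs additive parameter updates.
update_net = nn.Sequential(nn.Linear(4 * z_dim, h_dim), nn.Tanh(), nn.Linear(h_dim, 2 * z_dim))

def neg_elbo(x, mu, log_var):
    """Single-sample Monte Carlo estimate of the negative ELBO."""
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()       # reparameterization
    log_px = -0.5 * ((x - decoder(z)) ** 2).sum(-1)             # Gaussian log-likelihood (up to a constant)
    kl = 0.5 * (mu ** 2 + log_var.exp() - 1 - log_var).sum(-1)  # KL(q || N(0, I)), closed form
    return (kl - log_px).mean()

def infer(x, n_steps=5):
    """Refine (mu, log_var) by repeatedly encoding ELBO gradients."""
    mu = torch.zeros(x.shape[0], z_dim)
    log_var = torch.zeros(x.shape[0], z_dim)
    for _ in range(n_steps):
        # Detaching truncates backpropagation across refinement steps;
        # training the update network end to end would instead require
        # keeping the graph (e.g. create_graph=True).
        mu = mu.detach().requires_grad_()
        log_var = log_var.detach().requires_grad_()
        loss = neg_elbo(x, mu, log_var)
        g_mu, g_lv = torch.autograd.grad(loss, (mu, log_var))
        delta = update_net(torch.cat([mu, log_var, g_mu, g_lv], dim=-1))
        mu = mu + delta[:, :z_dim]
        log_var = log_var + delta[:, z_dim:]
    return mu, log_var

x = torch.randn(16, x_dim)  # stand-in data batch
mu, log_var = infer(x)
print(float(neg_elbo(x, mu, log_var)))
```

Because the update network sees gradients of the objective itself, inference behaves like a learned optimizer of the lower bound; per the abstract, the standard one-shot VAE encoder is recovered as a special case under certain conditions.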