Keywords: variational inference, variational autoencoders, generative models, energy-based models
TL;DR: We replace the normalized prior in VAEs with an unnormalized energy-based prior, yielding more expressive priors, improved likelihoods, and training of energy-based models without costly Markov chain sampling.
Abstract: Variational inference typically assumes normalized priors, limiting the expressiveness of generative models such as Variational Autoencoders (VAEs). In this work, we propose a novel approach that replaces the prior p(z) with an unnormalized energy-based distribution exp(-E(z))/Z, where E(z) is the energy function and Z is the partition function. This leads to a variational lower bound that allows for two key innovations: (1) the incorporation of more powerful, flexible priors into the VAE framework, resulting in improved likelihood estimates and enhanced generative performance, and (2) the ability to train energy-based models (EBMs) without computationally expensive Markov chain sampling, requiring only a small number K > 1 of importance samples from the posterior distribution. Our approach bridges VAEs and EBMs, providing a scalable and efficient framework for leveraging unnormalized priors in probabilistic models.
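To make the bound concrete, below is a minimal sketch, not the authors' code: one IWAE-style instantiation of a lower bound under an EBM prior exp(-E(z))/Z, estimated with K importance samples from a Gaussian posterior. All names (`encoder`, `decoder`, `energy`) and the Bernoulli decoder likelihood are illustrative assumptions; the intractable log Z term is omitted here, and the paper's exact bound and its treatment of Z may differ.

```python
import math
import torch
import torch.nn.functional as F

def ebm_prior_iwae_bound(x, encoder, decoder, energy, K=8):
    """Importance-weighted bound with an unnormalized prior exp(-E(z))/Z.

    Returns the bound up to the constant -log Z, whose handling during
    EBM training is not shown here (an assumption of this sketch).
    """
    mu, log_var = encoder(x)                 # q(z|x) = N(mu, diag(exp(log_var)))
    std = torch.exp(0.5 * log_var)
    eps = torch.randn(K, *mu.shape, device=x.device)
    z = mu + std * eps                       # K posterior samples: (K, B, D)
    # log q(z_k|x), summed over latent dimensions -> (K, B)
    log_q = (-0.5 * (eps**2 + log_var + math.log(2 * math.pi))).sum(-1)
    # Unnormalized log-prior: log exp(-E(z)) = -E(z) -> (K, B)
    log_prior = -energy(z.flatten(0, 1)).view(K, -1)
    # Bernoulli decoder likelihood log p(x|z_k) -> (K, B)
    logits = decoder(z.flatten(0, 1))
    targets = x.unsqueeze(0).expand(K, *x.shape).flatten(0, 1)
    log_px_z = -F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none"
    ).view(K, x.shape[0], -1).sum(-1)
    # log (1/K) sum_k p(x|z_k) exp(-E(z_k)) / q(z_k|x)
    log_w = log_px_z + log_prior - log_q
    return (torch.logsumexp(log_w, 0) - math.log(K)).mean()
```

Maximizing this quantity jointly updates the encoder, decoder, and energy function using only the K posterior samples, with no Markov chain sampling in the inner loop; note that gradients through -log Z with respect to the energy parameters would still need to be accounted for.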
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authorsβ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 937