Variational Learning of Fractional Posteriors

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: A one-parameter variational objective that lower bounds the data evidence and enables the estimation of approximate fractional posteriors.
Abstract: We introduce a novel one-parameter variational objective that lower bounds the data evidence and enables the estimation of approximate fractional posteriors. We extend this framework to hierarchical constructions and to Bayes posteriors, offering a versatile tool for probabilistic modelling. We demonstrate two cases where gradients can be obtained analytically, and a simulation study on mixture models shows that our fractional posteriors achieve better calibration than posteriors from the conventional variational bound. When applied to variational autoencoders (VAEs), our approach attains higher evidence bounds and enables learning of high-performing approximate Bayes posteriors jointly with fractional posteriors. We show that VAEs trained with fractional posteriors produce decoders that are better suited to generation from the prior.
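
To make the idea concrete: one standard tempered construction consistent with the abstract's description is, for a fraction beta in (0, 1], to maximise E_q[log p(x|z)] - (1/beta) KL(q || p) over the approximate posterior q. Jensen's inequality shows this objective lower bounds log p(x), and its maximiser is the fractional posterior q(z) proportional to p(z) p(x|z)^beta. The sketch below is our illustration of this construction, not necessarily the paper's exact objective; the function name fractional_elbo and the Gaussian modelling choices are assumptions, and the authors' actual implementation lives in the linked repository.

```python
# Minimal sketch of a beta-fractional evidence lower bound (illustrative,
# not the paper's verbatim objective). For 0 < beta <= 1 the bound satisfies
# L_beta <= log p(x), and its maximiser over q is the fractional posterior
# q(z) proportional to p(z) * p(x|z)**beta. beta = 1 recovers the usual ELBO.
import torch

def fractional_elbo(log_lik, mu, log_var, beta=0.5):
    """Monte Carlo estimate of L_beta = E_q[log p(x|z)] - (1/beta) KL(q || p).

    log_lik : callable z -> log p(x|z) (summed over data dimensions)
    mu, log_var : parameters of the Gaussian approximation q(z|x)
    beta : fraction in (0, 1]
    """
    # Reparameterised sample z ~ q(z|x) = N(mu, diag(exp(log_var)))
    std = torch.exp(0.5 * log_var)
    z = mu + std * torch.randn_like(std)

    # Closed-form KL(q || p) against a standard normal prior p(z) = N(0, I)
    kl = 0.5 * torch.sum(mu**2 + log_var.exp() - log_var - 1.0, dim=-1)

    # The 1/beta factor up-weights the KL term, tempering the likelihood's
    # influence, so the optimum is the beta-fractional posterior
    return log_lik(z) - kl / beta

# Toy usage: x ~ N(z, 1) with a scalar latent and q initialised at the prior
x = torch.tensor([0.8])
mu = torch.zeros(1, requires_grad=True)
log_var = torch.zeros(1, requires_grad=True)
loss = -fractional_elbo(
    lambda z: (-0.5 * (x - z) ** 2).sum() - 0.919,  # log N(x; z, 1)
    mu, log_var, beta=0.5,
)
loss.backward()  # gradients flow through the reparameterised sample
```

Since beta = 1 recovers the standard ELBO, the single parameter interpolates between the conventional variational bound and progressively more likelihood-tempered posteriors, which is the flexibility the simulation study exploits for calibration.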
Lay Summary: Machine learning and statistical inference need more tools for approximating solutions to problems that are too hard to solve exactly. This paper expands the space of available approximate solutions to include a particular class that existing methods do not cover, giving practitioners more flexibility.
Link To Code: https://github.com/csiro-funml/Variational-learning-of-Fractional-Posteriors/
Primary Area: Probabilistic Methods->Variational Inference
Keywords: Variational Inference, Fractional Posterior
Submission Number: 11396