Amortized Structural Variational Inference

Published: 03 Feb 2026, Last Modified: 03 Feb 2026, AISTATS 2026 Poster, CC BY 4.0
Abstract: Variational inference (VI) is a popular method for approximate Bayesian inference and plays a key role in deep generative models such as variational autoencoders (VAEs). However, traditional VI can scale poorly with sample size and typically requires re-optimization for each new data point. Amortized variational inference (AVI) addresses this by learning a global inference map from observed data to variational parameters. Standard mean-field AVI, however, suffers from a large variational gap caused by its restrictive independence assumptions, which can lead to inconsistent parameter estimation and may also enlarge the accompanying amortization gap. We propose amortized structural variational inference (ASVI), a framework that improves AVI by incorporating structural dependencies among latent variables. ASVI employs deep neural networks for the inference map, using architectures that explicitly encode local neighborhood structure to better capture posterior dependencies. We provide theoretical guarantees showing that ASVI significantly reduces both the variational and amortization gaps while retaining the scalability of amortized inference. Experiments on synthetic and real-world datasets demonstrate that ASVI consistently outperforms AVI in predictive accuracy and posterior fidelity, while matching the performance of fully optimized structured VI with substantially improved scalability.
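The structured inference map can be pictured concretely. Below is a minimal, hypothetical sketch, not the paper's actual architecture (which this page does not specify): an encoder network maps an observation x to the mean and a banded Cholesky factor of a Gaussian q(z|x) over latents assumed to lie on a 1-D chain, so only neighboring latents are directly coupled, in contrast to the diagonal factor a mean-field encoder would output. All names, layer sizes, and dimensions (StructuredEncoder, D, hidden, x_dim) are illustrative assumptions.

```python
# Hypothetical sketch of an amortized *structured* inference network.
# Assumption: D latent variables on a 1-D chain; local dependencies are
# encoded via one sub-diagonal of the Cholesky factor of q's covariance.
import torch
import torch.nn as nn

D = 16  # assumed number of latent variables on the chain

class StructuredEncoder(nn.Module):
    """Maps x to q(z|x) = N(mu, L L^T) with a lower-bidiagonal L, so the
    covariance couples only neighboring latents z_i, z_{i+1}."""
    def __init__(self, x_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mean = nn.Linear(hidden, D)
        self.log_diag = nn.Linear(hidden, D)      # log of L's diagonal
        self.off_diag = nn.Linear(hidden, D - 1)  # one sub-diagonal: local deps

    def forward(self, x):
        h = self.body(x)
        mu = self.mean(h)
        L = torch.diag_embed(torch.exp(self.log_diag(h)))      # positive diagonal
        L = L + torch.diag_embed(self.off_diag(h), offset=-1)  # neighbor coupling
        return mu, L

def sample_q(mu, L):
    """Reparameterized sample z = mu + L @ eps, with eps ~ N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + (L @ eps.unsqueeze(-1)).squeeze(-1)

# Usage: one amortized forward pass yields structured variational parameters
# for a whole batch, with no per-example re-optimization.
enc = StructuredEncoder(x_dim=32)
x = torch.randn(8, 32)   # batch of observations
mu, L = enc(x)
z = sample_q(mu, L)      # (8, D) samples from the structured posterior
```

Under this assumed parameterization the Gaussian entropy term of the ELBO stays cheap, since log det(L L^T) is twice the sum of the log-diagonal entries of L, and dropping off_diag recovers the standard mean-field AVI encoder.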
Submission Number: 1586