Keywords: deep generative models, variational inference, discrete latent variable models, discrete representations, error-correcting codes
TL;DR: This paper presents the first proof-of-concept demonstration that safeguarding latent information with Error-Correcting Codes within generative models can enhance variational inference.
Abstract: Despite advances in deep probabilistic models, learning discrete latent representations remains challenging. This work introduces a novel method for improving inference in discrete Variational Autoencoders by reframing the inference problem from a generative perspective. We conceptualize the model as a communication system and propose leveraging Error-Correcting Codes (ECCs) to introduce redundancy into the latent representations, allowing the variational posterior to produce more accurate estimates and reduce the variational gap. We present a proof of concept using a Discrete Variational Autoencoder with binary latent variables and low-complexity repetition codes, and extend it to a hierarchical structure for disentangling global and local data features. Our approach significantly improves generation quality, data reconstruction, and uncertainty calibration, outperforming uncoded models even when those are trained with tighter bounds such as the Importance Weighted Autoencoder objective. We also outline the properties that ECCs should possess to be effective for improving discrete variational inference.
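The repetition-code idea described in the abstract can be illustrated in a few lines. The following is a minimal NumPy sketch, not the paper's implementation (for that, see the linked codedVAE repository): the function names, the rate-1/3 code, and the soft majority-vote decoder are illustrative assumptions showing how repeating each latent bit adds redundancy that a noisy posterior can exploit.

```python
import numpy as np

def repetition_encode(bits, r=3):
    """Repeat each information bit r times (a rate-1/r repetition code)."""
    return np.repeat(bits, r, axis=-1)

def repetition_decode(bit_probs, r=3):
    """Soft-decode posterior bit probabilities: average each group of r
    repeated bits (a soft majority vote), then threshold to hard bits."""
    m = bit_probs.shape[-1] // r
    grouped = bit_probs.reshape(*bit_probs.shape[:-1], m, r)
    soft = grouped.mean(axis=-1)
    return soft, (soft > 0.5).astype(int)

# Illustrative toy example (not from the paper): 2 information bits,
# rate-1/3 repetition code, with a simulated noisy variational posterior.
rng = np.random.default_rng(0)
info_bits = np.array([1, 0])
coded = repetition_encode(info_bits)  # -> [1 1 1 0 0 0]
noisy_posterior = np.clip(coded + 0.25 * rng.standard_normal(coded.size), 0.0, 1.0)
soft, hard = repetition_decode(noisy_posterior)
print("recovered info bits:", hard)   # redundancy lets decoding correct the noise
```

Averaging the posterior probabilities within each repetition group is the soft-domain analogue of a majority vote: individual bit estimates may be noisy, but their mean concentrates around the true bit, which is the intuition behind the reduced variational gap claimed in the abstract.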
LaTeX Source Code: zip
Code Link: https://github.com/mariamartinezgarcia/codedVAE
Signed PMLR Licence Agreement: pdf
Readers: auai.org/UAI/2025/Conference, auai.org/UAI/2025/Conference/Area_Chairs, auai.org/UAI/2025/Conference/Reviewers, auai.org/UAI/2025/Conference/Submission495/Authors, auai.org/UAI/2025/Conference/Submission495/Reproducibility_Reviewers
Submission Number: 495