Integrating Bayesian Network Structure into Residual Flows and Variational Autoencoders

Published: 11 Apr 2023, Last Modified: 11 Apr 2023. Accepted by TMLR.
Abstract: Deep generative models have become more popular in recent years due to their scalability and representation capacity. Unlike probabilistic graphical models, they typically do not incorporate specific domain knowledge. As such, this work explores incorporating arbitrary dependency structures, as specified by Bayesian networks, into variational autoencoders (VAEs). This is achieved by developing a new type of graphical normalizing flow, which extends residual flows by encoding conditional independence through masking of the flow’s residual block weight matrices, and using these to extend both the prior and inference network of the VAE. We show that the proposed graphical VAE provides a more interpretable model that generalizes better in data-sparse settings, when practitioners know or can hypothesize about certain latent factors in their domain. Furthermore, we show that graphical residual flows provide not only density estimation and inference performance competitive with existing graphical flows, but also more stable and accurate inversion in practice as a byproduct of the flow’s Lipschitz bounds.
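To make the masking idea in the abstract concrete, below is a minimal sketch, not the authors' implementation: a residual step y = x + g(x) in which g's weight matrices are elementwise masked according to a Bayesian network's adjacency matrix, so that each output coordinate depends only on its own variable and that variable's parents. The names `GraphicalResidualBlock` and `MaskedLinear`, the hidden-units-per-variable mask construction, and the adjacency convention are illustrative assumptions; the Lipschitz constraint on g that the paper relies on for invertibility is noted in comments but not enforced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Linear):
    """Linear layer whose weight is elementwise multiplied by a fixed
    binary mask (shape: out_features x in_features) on every forward pass."""

    def __init__(self, mask: torch.Tensor):
        out_features, in_features = mask.shape
        super().__init__(in_features, out_features)
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.mask * self.weight, self.bias)


class GraphicalResidualBlock(nn.Module):
    """One residual flow step y = x + g(x), with g masked so that
    g_i depends only on x_i and the parents of variable i in the DAG."""

    def __init__(self, adjacency: torch.Tensor, hidden_per_var: int = 8):
        super().__init__()
        d = adjacency.shape[0]
        # Hidden group i may see variable i and its parents; output i may
        # see only hidden group i, so the composed network respects the DAG.
        m_in = (adjacency + torch.eye(d)).repeat_interleave(hidden_per_var, dim=0)
        m_out = torch.eye(d).repeat_interleave(hidden_per_var, dim=1)
        self.g = nn.Sequential(MaskedLinear(m_in), nn.Tanh(), MaskedLinear(m_out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Invertible when Lip(g) < 1; the paper bounds this (e.g. via
        # spectral constraints on the weights), which is omitted here.
        return x + self.g(x)

    def inverse(self, y: torch.Tensor, n_iters: int = 100) -> torch.Tensor:
        # Banach fixed-point iteration x <- y - g(x), which converges
        # whenever Lip(g) < 1 -- the stability property the abstract cites.
        x = y.clone()
        for _ in range(n_iters):
            x = y - self.g(x)
        return x


# Example: chain DAG x0 -> x1 -> x2, with A[i, j] = 1 iff x_j is a parent of x_i.
A = torch.tensor([[0., 0., 0.],
                  [1., 0., 0.],
                  [0., 1., 0.]])
block = GraphicalResidualBlock(A)
y = block(torch.randn(4, 3))
```

Under this convention, the Jacobian of the block is nonzero only on the diagonal and on parent entries of the adjacency matrix, which is one way such a flow can encode the network's conditional independencies; stacking several blocks (and using them for the VAE's prior and inference network) would follow the same pattern.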
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: This is the third revised version, based on requests made by the reviewers. It adds the additional VAE+MAF baseline experiments.
Code: https://gitlab.com/pleased/grf-and-siren-vae
Assigned Action Editor: George Papamakarios
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 789