Big Learning Variational Auto-Encoders

20 Sept 2023 (modified: 11 Feb 2024), submitted to ICLR 2024
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Variational Auto-Encoders, big learning, foundation models, incomplete data, conditional sampling, in-painting
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We upgrade the VAE with versatile conditional sampling capabilities.
Abstract: As a representative latent variable model, the Variational Auto-Encoder (VAE) is powerful in modeling high-dimensional signals like images and texts. However, practical applications often require versatile data capabilities, such as conditional generation/completion, inference with incomplete/marginal data, etc., which are challenging to harvest from a conventional/joint VAE. To satisfy those requirements, we leverage the recently proposed big learning to upgrade the joint VAE to its big-learning variant, termed BigLearn-VAE, which simultaneously delivers joint, marginal, and conditional generation/completion, inference, and reconstruction capabilities. In addition, we reveal that the BigLearn-VAE can be constructed atop a single foundation model, manifesting as one universal model that possesses many versatile capabilities. Code will be released.
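The abstract does not spell out implementation details, so the following is a minimal, hypothetical sketch (not the authors' code): it assumes the big-learning upgrade is realized by sampling a random observed/target mask per example and training one shared encoder/decoder across all the resulting joint/marginal/conditional tasks. All class and variable names below are illustrative assumptions.

```python
# Hypothetical sketch of a big-learning VAE training step, assuming the
# big-learning objective is a masked, conditional ELBO: an all-zeros mask
# recovers the ordinary joint VAE, while other masks yield the marginal and
# conditional generation/completion tasks the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BigLearnVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        # The encoder sees the observed (source) values plus the mask, so one
        # shared network can serve every joint/marginal/conditional task.
        self.enc = nn.Sequential(
            nn.Linear(2 * x_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, 2 * z_dim))
        # The decoder is conditioned on z, the observed values, and the mask.
        self.dec = nn.Sequential(
            nn.Linear(z_dim + 2 * x_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim))

    def elbo(self, x, mask):
        # mask: 1 = observed/source dims, 0 = target dims to be generated.
        x_src = x * mask
        mu, logvar = self.enc(torch.cat([x_src, mask], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        logits = self.dec(torch.cat([z, x_src, mask], -1))
        # Model only the target dims; observed dims are given, not generated.
        rec = -(F.binary_cross_entropy_with_logits(
            logits, x, reduction='none') * (1 - mask)).sum(-1)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1)
        return (rec - kl).mean()

model = BigLearnVAE()
x = torch.rand(8, 784).bernoulli()        # toy binarized "images"
mask = torch.rand(8, 784).bernoulli()     # a random conditional-completion task
loss = -model.elbo(x, mask)               # mask of all zeros => plain joint VAE
loss.backward()
```

Under this reading, sampling with a partially observed mask in-paints the masked dimensions given the observed ones, which corresponds to the conditional generation/completion capability highlighted in the TL;DR.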
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2221