Keywords: autoencoders, diffusion, generative models
TL;DR: We replace the GAN loss with a diffusion loss when training autoencoders and show that the resulting autoencoder produces less distortion while being better suited for generation.
Abstract: For learned image representations, basic autoencoders often produce blurry results. Reconstruction quality can be improved by incorporating additional penalties such as adversarial (GAN) and perceptual losses. Arguably, these approaches lack a principled interpretation. Concurrently, in generative settings diffusion has demonstrated a remarkable ability to create crisp, high-quality results and has solid theoretical underpinnings (from variational inference to direct study as the Fisher divergence). Our work combines autoencoder representation learning with diffusion and is, to our knowledge, the first to demonstrate jointly learning a continuous encoder and decoder under a diffusion-based loss, and to show that this can lead to higher compression and better generation.
We demonstrate that this approach yields better reconstruction quality than GAN-based autoencoders while being easier to tune.
We also show that the resulting representation is easier to model with a latent diffusion model than the representation obtained with a state-of-the-art GAN-based loss.
Since our decoder is stochastic, it can generate details not encoded in the otherwise deterministic latent representation; we therefore name our approach "Sample what you can't compress", or SWYCC for short.
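The objective described in the abstract can be sketched in a few lines: a deterministic encoder compresses the input, and the decoder is trained with a standard denoising-diffusion loss conditioned on the latent, in place of a GAN or perceptual penalty. The linear encoder, cosine noise schedule, and linear noise predictor below are illustrative placeholders chosen for brevity, not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_enc):
    # Deterministic encoder: compress input x to a low-dimensional latent z.
    # (Placeholder: a linear map stands in for the paper's encoder network.)
    return x @ W_enc

def diffusion_decoder_loss(x, z, W_dec, t):
    # Simplified diffusion decoder loss: noise the clean input, then ask the
    # decoder to predict the noise, conditioned on the latent z.
    eps = rng.standard_normal(x.shape)
    alpha = np.cos(0.5 * np.pi * t)   # illustrative cosine schedule
    sigma = np.sin(0.5 * np.pi * t)
    x_t = alpha * x + sigma * eps     # noised input at level t
    # Placeholder noise predictor: a linear map over (x_t, z); in practice
    # this would be a conditional denoising network.
    eps_hat = np.concatenate([x_t, z], axis=-1) @ W_dec
    return np.mean((eps_hat - eps) ** 2)

# Toy dimensions: 16-dim "images", 4-dim latents, batch of 8.
x = rng.standard_normal((8, 16))
W_enc = 0.1 * rng.standard_normal((16, 4))
W_dec = 0.1 * rng.standard_normal((20, 16))

z = encode(x, W_enc)
loss = diffusion_decoder_loss(x, z, W_dec, t=0.5)
```

Because the loss is a denoising objective rather than an adversarial game, both the encoder and decoder parameters can be updated by ordinary gradient descent on this single scalar, which is consistent with the abstract's claim that the method is easier to tune than GAN-based autoencoders.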
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 806