Invertible Hierarchical Generative Model for Images

Published: 16 Nov 2023, Last Modified: 16 Nov 2023. Accepted by TMLR.
Abstract: Normalizing flows (NFs) as generative models enjoy desirable properties such as exact invertibility and exact likelihood evaluation, while being efficient to sample from. These properties, however, come at the cost of heavy restrictions on the architecture. Due to these limitations, modeling multi-modal probability distributions can yield poor results even with low-dimensional data. Additionally, typical flow architectures employed on real image datasets produce samples with visible aliasing artifacts and limited variation. The latent decomposition of flow models also falls short of that of competing methods, with latent dimensions contributing unevenly to the decoded image. In this work we build an invertible generative model using conditional normalizing flows in a hierarchical fashion to circumvent the aforementioned limitations. We show that we can achieve superior sample quality among flow-based models with fewer parameters than the state of the art. We also demonstrate the ability to control individual levels of detail via the latent decomposition of our model.
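
To illustrate the exact-invertibility and exact log-likelihood properties the abstract refers to, the sketch below implements a single conditional affine coupling step in plain NumPy. This is a generic, minimal example and not the architecture proposed in the paper; the mlp, coupling_forward, and coupling_inverse helpers and the context argument (standing in for conditioning information from a coarser level of a hierarchy) are hypothetical names introduced only for illustration.

import numpy as np

# Minimal sketch of a conditional affine coupling step, a common building
# block of normalizing flows. NOT the authors' architecture; it only shows
# why the map is exactly invertible and why the log-likelihood term
# (log|det Jacobian|) is cheap to compute.

def mlp(x, w1, b1, w2, b2):
    """Tiny two-layer network producing concatenated (shift, log_scale)."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def coupling_forward(x, context, params):
    """Transform x -> z; returns z and the exact log|det Jacobian|."""
    x1, x2 = np.split(x, 2, axis=-1)
    inp = np.concatenate([x1, context], axis=-1)   # condition on x1 and context
    shift, log_scale = np.split(mlp(inp, *params), 2, axis=-1)
    z2 = x2 * np.exp(log_scale) + shift            # elementwise affine map
    log_det = log_scale.sum(axis=-1)               # triangular Jacobian
    return np.concatenate([x1, z2], axis=-1), log_det

def coupling_inverse(z, context, params):
    """Exact inverse of coupling_forward."""
    z1, z2 = np.split(z, 2, axis=-1)
    inp = np.concatenate([z1, context], axis=-1)
    shift, log_scale = np.split(mlp(inp, *params), 2, axis=-1)
    x2 = (z2 - shift) * np.exp(-log_scale)
    return np.concatenate([z1, x2], axis=-1)

# Usage: random parameters, a 4-d input, and a 2-d conditioning vector.
rng = np.random.default_rng(0)
d, c, h = 4, 2, 8
params = (rng.normal(size=(d // 2 + c, h)), np.zeros(h),
          rng.normal(size=(h, d)), np.zeros(d))
x = rng.normal(size=(1, d))
ctx = rng.normal(size=(1, c))
z, log_det = coupling_forward(x, ctx, params)
x_rec = coupling_inverse(z, ctx, params)
assert np.allclose(x, x_rec)                       # exact invertibility

Because only half of the dimensions are transformed and the scale/shift depend solely on the untouched half plus the conditioning vector, the Jacobian is triangular and its log-determinant is simply the sum of the predicted log-scales, which is what makes exact likelihood evaluation inexpensive in flow models.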
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera ready. We have added a link to the source code repository. We also added the sampling time measurement suggested by reviewer xzyC.
Code: https://github.com/timoneh/hflow
Supplementary Material: zip
Assigned Action Editor: ~Tom_Rainforth1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1344