UNCONDITIONAL IMAGE-TEXT PAIR GENERATION WITH MULTIMODAL CROSS QUANTIZER

Published: 29 Mar 2022, Last Modified: 22 Oct 2023. ICLR 2022 DGM4HSD Workshop Poster.
Keywords: Multimodal, Unconditional generation, Vector quantization, Joint representation
TL;DR: Unconditional image-text pair generation with a vector quantization method that learns a joint quantized representation space.
Abstract: Though deep generative models have gained considerable attention, most existing works are designed for unimodal generation tasks. In this paper, we explore a new method for unconditional image-text pair generation. We propose MXQ-VAE, a vector quantization method for multimodal image-text representation. MXQ-VAE takes a paired image and text as input and learns a joint quantized representation space, so that the image-text pair can be converted to a sequence of unified indices. We can then use autoregressive generative models to model the joint image-text representation and even perform unconditional image-text pair generation. Extensive experimental results demonstrate that our approach effectively generates semantically consistent image-text pairs and also enhances meaningful alignment between image and text.
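The abstract describes quantizing a fused image-text representation into a single sequence of codebook indices that an autoregressive model can learn. Below is a minimal sketch (not the authors' code) of such a joint quantizer in the spirit of MXQ-VAE; the fusion by concatenation, feature dimensions, and codebook size are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a joint image-text vector quantizer (illustrative, not the authors' code).
import torch
import torch.nn as nn

class JointQuantizer(nn.Module):
    """Maps fused image-text features to indices in a shared codebook."""
    def __init__(self, num_codes=1024, code_dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z):                                     # z: (batch, seq_len, code_dim)
        b, n, d = z.shape
        flat = z.reshape(-1, d)                               # (batch * seq_len, code_dim)
        dists = torch.cdist(flat, self.codebook.weight)       # distance to every code
        indices = dists.argmin(dim=-1).reshape(b, n)          # unified discrete indices
        z_q = self.codebook(indices)                          # quantized features
        z_q = z + (z_q - z).detach()                          # straight-through estimator
        return z_q, indices

# Usage: fuse (hypothetical) image and text features into one sequence, then quantize.
image_feats = torch.randn(2, 64, 256)   # e.g. flattened CNN grid features
text_feats = torch.randn(2, 16, 256)    # e.g. text token embeddings
fused = torch.cat([image_feats, text_feats], dim=1)
z_q, indices = JointQuantizer()(fused)
# `indices` (shape 2 x 80) is the kind of unified sequence an autoregressive model could learn,
# enabling unconditional generation of image-text pairs by sampling indices and decoding both modalities.
```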
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:2204.07537/code)