TL;DR: Tokenizers with a very high compression ratio possess an expressive latent space suitable for image generation via direct latent-space manipulation.
Abstract: Commonly used image tokenizers produce a 2D grid of spatially arranged tokens. In contrast, so-called *1D* image tokenizers represent images as highly compressed one-dimensional sequences of as few as 32 discrete tokens. We find that the high degree of compression achieved by a 1D tokenizer with vector quantization makes its tokens amenable to heuristic manipulation: even very crude operations, such as copying and replacing tokens between the latent representations of two images, enable fine-grained editing that transfers appearance and semantic attributes. Motivated by the expressivity of the 1D tokenizer's latent space, we construct an image generation pipeline that leverages gradient-based test-time optimization of tokens with plug-and-play loss functions such as reconstruction error or CLIP similarity. We demonstrate our approach on inpainting and text-guided image editing, and show that it generates diverse and realistic samples without training any generative model.
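To make the copy-and-replace idea concrete, here is a minimal sketch in PyTorch. The `tokenizer` object with `encode`/`decode` methods and the `(1, 32)` token shape are assumptions for illustration; the actual interface in the linked token-opt repository may differ.

```python
# Sketch of token-transfer editing between two images.
# `tokenizer` is a hypothetical 1D VQ tokenizer exposing `encode`/`decode`;
# names and shapes are assumptions, not the paper's actual API.
import torch

def transfer_tokens(tokenizer, img_src, img_tgt, positions):
    """Copy the tokens at `positions` from a source image into a target image."""
    tok_src = tokenizer.encode(img_src)   # (1, 32) integer token ids
    tok_tgt = tokenizer.encode(img_tgt)   # (1, 32)
    tok_edit = tok_tgt.clone()
    tok_edit[:, positions] = tok_src[:, positions]  # crude copy-and-replace
    return tokenizer.decode(tok_edit)     # edited image inherits source attributes
```

Despite its crudeness, an edit like `transfer_tokens(tokenizer, src, tgt, positions=[5, 6, 7])` is the kind of heuristic manipulation the abstract describes as transferring appearance and semantic attributes.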
Lay Summary: Generating realistic images with AI is difficult because images contain hundreds of thousands of pixels with complex relationships. To make this easier, the image generation task is typically split into two steps: first "compress" the image into a smaller set of meaningful pieces called "tokens," then learn how these tokens relate to each other.
Recent advances have produced extremely efficient compression methods that can represent an entire image using just 32 small integers. We discovered that these compressed representations capture surprisingly rich, human-interpretable information about the image's content.
More importantly, we found that images can be edited by directly manipulating these 32 tokens, with no complex AI training required. Furthermore, users can define any custom goal or "objective function" for how they want their image to look, and our system can achieve it in just a few seconds without training new models. Our examples demonstrate this approach on tasks such as text-guided editing, filling in missing parts (inpainting), and generating new images from text descriptions.
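As an illustration of the "custom objective" idea, below is a minimal sketch of gradient-based test-time optimization over continuous token embeddings. Here `decode_fn` (embeddings to image) and `loss_fn` (image to scalar loss, e.g. negative CLIP similarity to a text prompt) are assumed callables supplied by the user; how the paper handles vector quantization during optimization is not shown.

```python
# Sketch of plug-and-play test-time optimization over token embeddings.
# `decode_fn` and `loss_fn` are stand-ins for a differentiable decoder and
# a user-chosen objective; they are assumptions, not the paper's actual code.
import torch

def optimize_tokens(init_emb, decode_fn, loss_fn, steps=200, lr=0.1):
    z = init_emb.clone().requires_grad_(True)   # (1, 32, dim) continuous tokens
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(decode_fn(z))            # any differentiable objective
        loss.backward()
        opt.step()
    return decode_fn(z.detach())                # final optimized image
```

Because the loss function is a plug-in argument, the same loop covers inpainting (masked reconstruction loss) and text-guided editing (CLIP similarity loss) without retraining anything.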
Link To Code: https://github.com/lukaslaobeyer/token-opt
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: image tokenizer, 1D tokenizer, autoencoder, generative model, text-to-image generation, image editing, training-free
Submission Number: 8965