Abstract: In generative modeling, tokenization simplifies complex data into compact, structured representations, creating a more efficient, learnable space. For high-dimensional visual data, it reduces redundancy and emphasizes key features for high-quality generation. Current visual tokenization methods rely on a traditional autoencoder framework, where the encoder compresses data into latent representations and the decoder reconstructs the original input. In this work, we offer a new perspective by proposing denoising as decoding, shifting from single-step reconstruction to iterative refinement. Specifically, we replace the decoder with a diffusion process that iteratively refines noise to recover the original image, guided by the latents provided by the encoder. We evaluate our approach by assessing both reconstruction quality (rFID) and generation quality (FID), comparing it to state-of-the-art autoencoding approaches. By adopting iterative reconstruction through diffusion, our autoencoder, Epsilon-VAE, achieves high reconstruction quality, which in turn improves downstream generation quality by 22% at the same compression rate or provides a 2.3x inference speedup by increasing the compression rate. We hope this work offers new insights into integrating iterative generation and autoencoding for improved compression and generation.
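The control flow described above can be sketched in a toy form. The snippet below is a minimal illustration, not the paper's implementation: the encoder and the denoising step are hypothetical stand-ins (a random linear projection and a simple contraction toward the latent-consistent target), chosen only to show how decoding proceeds by iteratively refining noise under guidance from the encoder's latent.

```python
import numpy as np

rng = np.random.default_rng(0)

D, K = 16, 4                        # data dim, latent dim (illustrative sizes)
W = rng.standard_normal((K, D))     # toy stand-in for the learned encoder

def encode(x):
    # Compress an "image" x into a compact latent z.
    return W @ x

def iterative_decode(z, steps=8):
    # Decoding as iterative refinement: start from pure noise and
    # repeatedly apply a "denoising" update conditioned on z.
    target = np.linalg.pinv(W) @ z  # the state the latent pins down
    x = rng.standard_normal(D)      # initial noise
    for _ in range(steps):
        x = x + 0.5 * (target - x)  # one refinement step toward consistency with z
    return x

x = rng.standard_normal(D)
z = encode(x)
x_hat = iterative_decode(z)
# After a few steps, the reconstruction agrees with x in the
# subspace the latent preserves:
err = np.linalg.norm(W @ x_hat - z)
```

In the actual method, the refinement step is a learned diffusion denoiser rather than this fixed contraction, but the structure is the same: a few conditioned denoising steps replace the single decoder pass.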
Lay Summary: Creating high-quality digital images with AI often starts by simplifying complex visual information into compact representations, a process called visual tokenization. Current methods usually reconstruct images from these simplified forms in a single step, which can limit the final quality. This paper introduces Epsilon-VAE, a new approach that reimagines this reconstruction. Instead of a one-shot process, Epsilon-VAE treats decoding as an iterative denoising task: it starts from an initial noisy state and progressively refines it over a few steps to recover the detailed image, guided by the compact representation from an encoder. This iterative refinement leads to significantly better image reconstruction quality, especially when information is highly compressed. As a result, AI systems using Epsilon-VAE can generate entirely new images with up to 22% improved visual quality, or achieve more than a twofold speed-up in generation by using more compressed data without sacrificing quality.
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: Diffusion Model, VAE, Image Tokenizer, Rectified Flow
Submission Number: 217