Keywords: Image Tokenizer, Image Generative Models, Representation Learning
TL;DR: We find that training tokenizers with latent denoising objectives significantly improves their generative performance across diverse generative models.
Abstract: Despite their fundamental role, it remains unclear what properties make tokenizers more effective for generative modeling. We observe that modern generative models share a conceptually similar training objective---reconstructing clean signals from corrupted inputs, such as signals degraded by Gaussian noise or masking---a process we term \emph{denoising}. Motivated by this insight, we propose aligning tokenizer embeddings directly with the downstream denoising objective, encouraging latent embeddings that remain reconstructable even under significant corruption. To achieve this, we introduce the Latent Denoising Tokenizer (\method), a simple yet highly effective tokenizer trained to reconstruct clean images from latent embeddings corrupted via interpolative noise or random masking. Extensive experiments on class-conditioned (ImageNet $256\times256$ and $512\times512$) and text-conditioned (MSCOCO) image generation benchmarks demonstrate that our \method consistently improves generation quality across \textit{six} representative generative models compared to prior tokenizers. Our findings highlight denoising as a fundamental design principle for tokenizer development, and we hope it motivates new perspectives on future tokenizer design. Code is available at: https://github.com/Jiawei-Yang/DeTok
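The two corruption modes named in the abstract---interpolative noise and random masking of latent embeddings---can be sketched as below. This is a minimal illustrative sketch, not code from the DeTok repository; the function name `corrupt_latents` and the parameters `gamma` (noise interpolation weight) and `mask_ratio` are hypothetical, and masked tokens are zeroed here as a stand-in for a learnable mask embedding:

```python
import numpy as np

def corrupt_latents(z, gamma=0.7, mask_ratio=0.25, rng=None):
    """Corrupt tokenizer latents so the decoder must denoise them.

    z          : (num_tokens, dim) latent embeddings from the encoder.
    gamma      : interpolation weight toward Gaussian noise (hypothetical name).
    mask_ratio : fraction of tokens replaced by a mask stand-in (zeros here).
    """
    rng = np.random.default_rng(rng)
    # Interpolative noise: blend each latent toward a Gaussian noise sample.
    noise = rng.standard_normal(z.shape)
    z_corrupt = (1.0 - gamma) * z + gamma * noise
    # Random masking: drop a random subset of token embeddings entirely.
    num_tokens = z.shape[0]
    num_masked = int(mask_ratio * num_tokens)
    masked_idx = rng.choice(num_tokens, size=num_masked, replace=False)
    z_corrupt[masked_idx] = 0.0
    return z_corrupt, masked_idx

# Usage: the decoder would be trained to reconstruct the clean image
# from z_corrupt, encouraging robust, denoisable latents.
z = np.ones((16, 8))
z_corrupt, masked_idx = corrupt_latents(z, gamma=0.7, mask_ratio=0.25, rng=0)
```

The sketch applies both corruptions jointly for brevity; a training loop could equally sample one mode per batch.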
Supplementary Material: zip
Primary Area: generative models
Submission Number: 13568