CAT: Content-Adaptive Image Tokenization

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Image tokenization
TL;DR: We develop an adaptive image tokenizer that compresses images into variable-sized latent features based on their content complexity.
Abstract: Most existing image tokenizers encode images into a fixed number of tokens or patches, overlooking the inherent variability in image complexity and introducing unnecessary computational overhead for simpler images. To address this, we propose the Content-Adaptive Tokenizer (CAT), which dynamically adjusts representation capacity based on image content and encodes simpler images into fewer tokens. We design (1) a caption-based evaluation system that leverages LLMs to predict content complexity and determine the optimal compression ratio for an image, and (2) a novel nested VAE architecture that performs variable-rate compression in a single model. Trained on images of varying complexity, CAT achieves an average 15% reduction in rFID across seven detail-rich datasets containing text, humans, and complex textures. On natural image datasets such as ImageNet and COCO, it reduces token usage by 18% while maintaining high-fidelity reconstructions. We further evaluate CAT on two downstream tasks. For image classification, CAT consistently improves top-1 accuracy across five datasets spanning diverse domains. For image generation, it boosts training throughput by 23% on ImageNet, leading to more efficient learning and improved FIDs over fixed-token baselines.
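To make the two-stage idea in the abstract concrete, below is a minimal sketch (not the authors' code) of content-adaptive tokenization: a hypothetical LLM-derived complexity score in [0, 1] selects one of three compression ratios, and a single encoder/decoder pair reuses shared deeper layers to realize the extra downsampling, loosely approximating the nested-VAE behavior. All names, thresholds, and hyperparameters here are illustrative assumptions.

```python
# Illustrative sketch of CAT-style adaptive tokenization (assumptions labeled).
import torch
import torch.nn as nn
import torch.nn.functional as F

def complexity_to_ratio(score: float) -> int:
    """Map an (assumed) LLM complexity score in [0, 1] to a downsampling factor."""
    if score < 0.33:
        return 32   # simple image -> coarser latent grid, fewer tokens
    if score < 0.66:
        return 16
    return 8        # detail-rich image -> finer latent grid, more tokens

class ToyAdaptiveVAE(nn.Module):
    """One model that emits latents at 8x, 16x, or 32x spatial downsampling."""
    def __init__(self, latent_dim: int = 4):
        super().__init__()
        self.stem = nn.Sequential(  # shared 8x-downsampling backbone
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(128, 128, 4, stride=2, padding=1), nn.SiLU(),
        )
        # Extra shared stages applied 0, 1, or 2 times depending on the ratio.
        self.down = nn.Conv2d(128, 128, 4, stride=2, padding=1)
        self.to_latent = nn.Conv2d(128, latent_dim, 1)
        self.from_latent = nn.Conv2d(latent_dim, 128, 1)
        self.up = nn.ConvTranspose2d(128, 128, 4, stride=2, padding=1)
        self.head = nn.Sequential(  # shared 8x-upsampling decoder
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor, ratio: int) -> torch.Tensor:
        extra = {8: 0, 16: 1, 32: 2}[ratio]
        h = self.stem(x)
        for _ in range(extra):          # deepen compression for simple images
            h = F.silu(self.down(h))
        z = self.to_latent(h)           # variable-size latent grid
        h = self.from_latent(z)
        for _ in range(extra):          # mirror the extra stages on decode
            h = F.silu(self.up(h))
        return self.head(h)

# Usage: a low-complexity image vs. a detail-rich one (scores are pretend LLM outputs).
model = ToyAdaptiveVAE()
img = torch.randn(1, 3, 256, 256)
for score in (0.1, 0.9):
    r = complexity_to_ratio(score)
    recon = model(img, ratio=r)
    print(r, (256 // r) ** 2, recon.shape)  # token count shrinks for simple images
```

In this toy version the token count for a 256x256 image drops from 1024 (8x) to 64 (32x); the paper's actual complexity scoring, nesting scheme, and training objective are more involved than this sketch.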
Supplementary Material: zip
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 19174