Keywords: VQVAE, tokenizer, autoencoder, efficiency
Abstract: Visual tokenizers are pivotal in multimodal large models, acting as bridges between continuous inputs and discrete tokens. Nevertheless, training high-compression-rate VQ-VAEs remains computationally demanding, often necessitating thousands of GPU hours. This work demonstrates that a pre-trained VAE can be efficiently transformed into a VQ-VAE by controlling quantization noise within the VAE's tolerance threshold. We present \textbf{Quantize-then-Rectify (ReVQ)}, a framework leveraging pre-trained VAEs to enable rapid VQ-VAE training with minimal computational overhead. By integrating \textbf{channel split quantization} to enhance codebook capacity and a \textbf{post rectifier} to mitigate quantization errors, ReVQ compresses ImageNet images into at most $512$ tokens while maintaining competitive reconstruction quality (rFID = $0.82$). Notably, ReVQ reduces training costs by over two orders of magnitude relative to state-of-the-art approaches: ReVQ completes full training on a single NVIDIA 4090 in approximately 22 hours, whereas comparable methods require 4.5 days on 32 A100 GPUs. Experimental results show that ReVQ achieves superior efficiency-reconstruction trade-offs.
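The abstract names two components, channel split quantization and a post rectifier, applied on top of a frozen pre-trained VAE. The following is a minimal PyTorch sketch of how such components could look; it is not the authors' released code, and the group count, codebook size, and rectifier architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelSplitQuantizer(nn.Module):
    """Split latent channels into groups and quantize each group independently,
    multiplying effective codebook capacity without enlarging any single codebook."""
    def __init__(self, latent_dim: int, num_groups: int = 4, codebook_size: int = 1024):
        super().__init__()
        assert latent_dim % num_groups == 0
        self.num_groups = num_groups
        self.group_dim = latent_dim // num_groups
        # One codebook per channel group (assumed design choice).
        self.codebooks = nn.ParameterList(
            [nn.Parameter(torch.randn(codebook_size, self.group_dim))
             for _ in range(num_groups)]
        )

    def forward(self, z: torch.Tensor):
        # z: (B, N, latent_dim) -- flattened latent tokens from a frozen VAE encoder.
        groups = z.chunk(self.num_groups, dim=-1)
        quantized, indices = [], []
        for g, codebook in zip(groups, self.codebooks):
            # Nearest-neighbour lookup via squared Euclidean distance.
            d = (g.unsqueeze(-2) - codebook).pow(2).sum(-1)   # (B, N, codebook_size)
            idx = d.argmin(dim=-1)                            # (B, N)
            q = codebook[idx]                                 # (B, N, group_dim)
            # Straight-through estimator so gradients pass through the quantizer.
            quantized.append(g + (q - g).detach())
            indices.append(idx)
        return torch.cat(quantized, dim=-1), torch.stack(indices, dim=-1)

class PostRectifier(nn.Module):
    """Small residual network that corrects the quantized latent, pulling it
    back within the frozen VAE decoder's noise tolerance."""
    def __init__(self, latent_dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z_q: torch.Tensor) -> torch.Tensor:
        return z_q + self.net(z_q)  # residual rectification of quantization error
```

In this reading, only the quantizer codebooks and the rectifier are trained, while the VAE encoder and decoder stay frozen, which is consistent with the low training cost reported in the abstract.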
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 2556