DiVeQ: Differentiable Vector Quantization Using the Reparameterization Trick

Published: 26 Jan 2026, Last Modified: 11 Apr 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Vector Quantization, Differentiability, Backpropagation, Differentiable Vector Quantization, Gradient Collapse, Codebook Learning
TL;DR: We propose DiVeQ and SF-DiVeQ, two differentiable vector quantization techniques that enable end-to-end training by keeping hard assignments in the forward pass while allowing meaningful gradient flow in the backward pass.
Abstract: Vector quantization is common in deep models, yet its hard assignments block gradients and hinder end-to-end training. We propose DiVeQ, which treats quantization as adding an error vector that mimics the quantization distortion, keeping the forward pass hard while letting gradients flow. We also present a space-filling variant (SF-DiVeQ) that assigns latents to a curve constructed from the line segments connecting codewords, which reduces quantization error and ensures full codebook usage. Both methods train end-to-end without requiring auxiliary losses or temperature schedules. On VQ-VAE image compression, VQGAN image generation, and DAC speech coding across various datasets, our proposed methods improve reconstruction and sample quality over alternative quantization approaches.
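To make the abstract's core idea concrete, here is a minimal PyTorch sketch of "quantization as adding an error vector": the forward pass returns the hard nearest codeword, while the backward pass receives gradients through a reparameterized error term reaching both the encoder and the codebook. The function name `diveq_quantize` and the exact magnitude/direction split below are illustrative assumptions based only on the abstract, not the paper's verified formulation.

```python
import torch

def diveq_quantize(z, codebook, eps=1e-12):
    """Sketch of DiVeQ-style quantization (assumed formulation).

    z: (B, D) encoder latents; codebook: (K, D) codewords.
    Returns hard-quantized latents with gradient flow to z and codebook.
    """
    # Hard nearest-codeword assignment (argmin itself carries no gradient).
    dists = torch.cdist(z, codebook)           # (B, K) pairwise distances
    idx = dists.argmin(dim=1)                  # (B,) selected codeword indices
    q = codebook[idx]                          # (B, D), differentiable w.r.t. codebook

    # Quantization error vector from latent to its codeword.
    e = q - z                                  # (B, D)
    norm = e.norm(dim=1, keepdim=True).clamp_min(eps)

    # Reparameterize: detach the magnitude but keep the direction differentiable.
    # Forward value equals q exactly (z + ||e|| * e/||e|| = q), so the
    # assignment stays hard, while gradients flow through z and q.
    z_q = z + norm.detach() * (e / norm)
    return z_q, idx
```

In this sketch, no auxiliary commitment loss or temperature schedule is needed: the codebook receives gradients directly through the reparameterized error term, consistent with the abstract's claim of end-to-end training.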
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 7500