VQToken: Neural Discrete Token Representation Learning for Extreme Token Reduction in Video Large Language Models

Published: 18 Sept 2025, Last Modified: 29 Oct 2025, NeurIPS 2025 poster, CC BY-NC 4.0
Keywords: Video LLMs; Token Reduction; Discrete Token Representation; Extreme Token Reduction; Token Merge; Token Clustering
TL;DR: We introduce an extreme token-reduction task and a discrete representation (VQToken) that adaptively compresses video token sequences to 0.07% of their original length (a 99.93% reduction) with only a 0.66% accuracy drop.
Abstract: Token-based video representation has emerged as a promising approach for enabling large language models (LLMs) to interpret video content. However, existing token reduction techniques, such as pruning and merging, often disrupt essential positional embeddings and rely on continuous visual tokens sampled from nearby pixels with similar spatial–temporal locations. Because they remove only a small fraction of tokens, these methods still produce relatively long continuous sequences, falling short of the extreme compression required to balance computational efficiency and token count in video LLMs. In this paper, we introduce the novel task of **Extreme Short Token Reduction**, which aims to represent entire videos with a minimal set of discrete tokens. We propose **VQToken**, a neural discrete token representation framework that (i) applies adaptive vector quantization to continuous ViT embeddings to learn a compact codebook and (ii) preserves spatial–temporal positions via a token hash function that assigns each grid-level token to its nearest codebook entry. On the Extreme Short Token Reduction task, VQToken compresses sequences to just **0.07\%** of their original length while incurring only a **0.66\%** accuracy drop on the NextQA-MC benchmark. It also achieves comparable performance on ActNet-QA, Long Video Bench, and VideoMME. We further introduce the **Token Information Density** (**TokDense**) metric and formalize fixed-length and adaptive-length subtasks, achieving state-of-the-art results in both settings. Our approach dramatically lowers theoretical complexity, increases information density, drastically reduces token counts, and enables efficient video large language models in resource-constrained environments.
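The abstract describes a two-step mechanism: vector-quantize continuous ViT embeddings against a learned codebook, then keep spatial–temporal positions by hashing each grid-level token to its nearest codebook entry. Below is a minimal PyTorch sketch of that idea, assuming a plain L2 nearest-neighbour assignment; all names (e.g. `VQTokenSketch`, `codebook_size`) are illustrative and not the authors' implementation.

```python
import torch
import torch.nn as nn

class VQTokenSketch(nn.Module):
    """Illustrative sketch: quantize continuous ViT patch embeddings against a
    learned codebook and keep, for each (frame, row, col) grid position, only the
    index of its nearest codebook entry."""

    def __init__(self, codebook_size: int = 64, dim: int = 768):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)  # compact learned codebook

    def forward(self, vit_tokens: torch.Tensor):
        # vit_tokens: (T, H, W, D) continuous ViT embeddings for T frames on an HxW grid
        T, H, W, D = vit_tokens.shape
        flat = vit_tokens.reshape(-1, D)                    # (T*H*W, D)
        # Nearest-codebook assignment (the "token hash" role in the abstract):
        # each grid-level token maps to the id of its closest codebook entry.
        dists = torch.cdist(flat, self.codebook.weight)     # (T*H*W, K)
        ids = dists.argmin(dim=-1).reshape(T, H, W)         # discrete ids retain (t, h, w) positions
        # A compact token set for the LLM could then be the few codebook entries
        # actually used, far shorter than the original T*H*W continuous tokens.
        compact = self.codebook(ids.unique())               # (<= K, D)
        return ids, compact
```

Under this reading, the sequence handed to the LLM scales with the number of distinct codebook entries used rather than with frames times grid size, which is what makes the 0.07\% sequence-length figure plausible; the exact adaptive quantization and hashing scheme is specified in the paper itself.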
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 3333