Less Is More: Vision Representation Compression for Efficient Video Generation with Large Language Models
Abstract: Video generation using Large Language Models (LLMs) has shown promising potential, effectively leveraging the extensive LLM infrastructure to provide a unified framework for multimodal understanding and content generation. However, these methods face critical challenges, namely token redundancy and inefficiencies arising from long sequences, which constrain their performance and efficiency compared to diffusion-based approaches. In this study, we investigate the impact of token redundancy in LLM-based video generation and propose Vision Representation Compression (VRC), a novel framework designed to achieve more in both performance and efficiency with fewer video token representations. VRC introduces a learnable representation compressor and decompressor to compress video token representations, enabling autoregressive next-sequence prediction in a compact latent space. The proposed approach eliminates redundancy, reduces token sequence length, and enhances the model's ability to capture underlying video structures. Our experiments demonstrate that VRC reduces token sequence length by a factor of 4, achieving more than 9$\times$ acceleration in inference while maintaining performance comparable to state-of-the-art video generation models. In addition, VRC not only accelerates inference but also significantly reduces memory requirements during both training and inference.
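The abstract describes compressing video token sequences before autoregressive modeling and decompressing afterward. The sketch below is a minimal, hypothetical illustration of that idea: a learned compressor merges every 4 consecutive token embeddings into one latent token, a plain Transformer layer stands in for the LLM backbone, and a decompressor expands the latents back. All module names, shapes, and the backbone choice are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the compress -> model -> decompress pipeline
# suggested by the abstract; not the paper's actual architecture.
import torch
import torch.nn as nn


class TokenCompressor(nn.Module):
    """Merges every `ratio` consecutive token embeddings into one latent token."""
    def __init__(self, dim: int, ratio: int = 4):
        super().__init__()
        self.ratio = ratio
        self.proj = nn.Linear(dim * ratio, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape                          # (batch, tokens, dim); n divisible by ratio
        x = x.view(b, n // self.ratio, d * self.ratio)
        return self.proj(x)                        # (batch, n / ratio, dim)


class TokenDecompressor(nn.Module):
    """Expands each latent token back into `ratio` token embeddings."""
    def __init__(self, dim: int, ratio: int = 4):
        super().__init__()
        self.ratio = ratio
        self.proj = nn.Linear(dim, dim * ratio)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        b, m, d = z.shape
        return self.proj(z).view(b, m * self.ratio, d)


if __name__ == "__main__":
    # Toy usage: a 1024-token clip becomes a 256-token sequence for the
    # autoregressive backbone, so the quadratic attention term shrinks accordingly.
    dim, ratio = 512, 4
    compressor = TokenCompressor(dim, ratio)
    decompressor = TokenDecompressor(dim, ratio)
    backbone = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)  # stand-in for an LLM

    tokens = torch.randn(2, 1024, dim)
    latent = compressor(tokens)                    # (2, 256, 512)
    latent = backbone(latent)                      # next-sequence prediction would operate here
    recon = decompressor(latent)                   # (2, 1024, 512)
    print(latent.shape, recon.shape)
```

Under these assumptions, the 4x sequence reduction matches the compression factor reported in the abstract; the reported 9x inference speedup would additionally depend on the backbone and decoding setup.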