TL;DR: A new perspective on language modeling built around punctuation (separator) tokens
Abstract: Large Language Models (LLMs) have exhibited exceptional performance across a spectrum of natural language processing tasks. However, their substantial sizes pose considerable challenges, particularly in computational demands and inference speed, due to the quadratic complexity of self-attention. In this work, we identify a key pattern: certain seemingly meaningless separator tokens (i.e., punctuation marks) contribute disproportionately to attention scores compared to semantically meaningful tokens. This observation suggests that the information of the segments between these separator tokens can be effectively condensed into the separator tokens themselves without significant information loss. Guided by this insight, we introduce SepLLM, a plug-and-play framework that accelerates inference by compressing these segments and eliminating redundant tokens. Additionally, we implement efficient kernels for training acceleration. Experimental results across training-free, training-from-scratch, and post-training settings demonstrate SepLLM's effectiveness. Notably, with the Llama-3-8B backbone, SepLLM achieves more than a 50% reduction in KV cache size on the GSM8K-CoT benchmark while maintaining comparable performance. Furthermore, in streaming settings, SepLLM effectively processes sequences of 4 million tokens or more while maintaining consistent language modeling capabilities.
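To make the sparse pattern described above concrete, here is a minimal PyTorch sketch of a SepLLM-style attention mask in which each token attends only to the initial tokens, all preceding separator tokens, and its most recent neighbors. The function name `sepllm_attention_mask` and the parameters `sep_ids`, `n_init`, and `n_local` are illustrative assumptions, not the released implementation (see the code link below).

```python
# Illustrative sketch (not the authors' released code) of a SepLLM-style
# sparse attention mask: each query may attend only to (i) the first
# n_init tokens, (ii) separator tokens, and (iii) its n_local most
# recent neighbors, all under the usual causal constraint.
import torch

def sepllm_attention_mask(input_ids: torch.Tensor,   # 1-D tensor of token ids
                          sep_ids: set,              # token ids treated as separators
                          n_init: int = 4,
                          n_local: int = 64) -> torch.Tensor:
    """Return a [seq_len, seq_len] boolean mask (True = may attend)."""
    seq_len = input_ids.shape[-1]
    is_sep = torch.tensor([t in sep_ids for t in input_ids.tolist()], dtype=torch.bool)

    # Causal mask: a token only attends to itself and earlier positions.
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

    # Columns that stay visible to every later query: initial tokens and separators.
    keep_cols = torch.zeros(seq_len, dtype=torch.bool)
    keep_cols[:n_init] = True
    keep_cols |= is_sep

    # Local window: each query also sees its n_local most recent tokens.
    rows = torch.arange(seq_len).unsqueeze(1)
    cols = torch.arange(seq_len).unsqueeze(0)
    local = (rows - cols) < n_local

    return causal & (keep_cols.unsqueeze(0) | local)
```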
Lay Summary: SepLLM introduces a novel perspective in language modeling, proposing that separator tokens in LLMs naturally serve as division and summarization points for the segments they divide. Leveraging this insight, SepLLM enables sparse modeling of natural language by intentionally compressing segment information into separator tokens during pretraining. This reduces attention computation, minimizes KV cache size, and improves training and inference efficiency.
Essentially, SepLLM is a native sparse attention mechanism that aligns closely with the inherent semantic structure of natural language. Since separators act as natural boundaries within language, the segments they divide are inherently coherent, self-contained, and semantically unified units. Thus, separators naturally become the ideal summarization and compression points for such semantic units. After training, SepLLM can also function as a KV cache compression approach to further reduce inference overhead.
In summary, **SepLLM can be regarded as a native sparse attention mechanism inherent to the structure of natural language, and it is highly suitable to serve as a fundamental baseline model for sparse attention mechanisms in LLMs.**
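A hedged sketch of the inference-time use mentioned above: once generation has moved past a segment, the keys and values of ordinary tokens inside that closed segment can be evicted from the KV cache, keeping only the initial tokens, separators, and a recent window. The names here (`compress_kv_cache`, `keys`, `values`, `token_ids`) are hypothetical and do not reflect the project's actual API.

```python
# Hypothetical sketch of training-free KV cache compression in the SepLLM
# spirit: retain cache entries only for initial tokens, separator tokens,
# and the most recent n_local tokens; drop everything else.
import torch

def compress_kv_cache(keys: torch.Tensor,       # [seq_len, head_dim]
                      values: torch.Tensor,     # [seq_len, head_dim]
                      token_ids: torch.Tensor,  # [seq_len]
                      sep_ids: set,
                      n_init: int = 4,
                      n_local: int = 64):
    seq_len = token_ids.shape[0]
    keep = torch.zeros(seq_len, dtype=torch.bool)
    keep[:n_init] = True        # initial ("attention sink") tokens
    keep[-n_local:] = True      # recent neighbors
    keep |= torch.tensor([t in sep_ids for t in token_ids.tolist()], dtype=torch.bool)
    return keys[keep], values[keep], token_ids[keep]
```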
Link To Code: https://sepllm.github.io
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models, Language Modeling, Attention Mechanisms, LLM Architecture, Sparse Attention, KV Cache Compression
Submission Number: 3786