Pre-trained Models Perform the Best When Token Distributions Follow Zipf’s Law

ACL ARR 2025 May Submission 7460 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Tokenization is a fundamental step in natural language processing (NLP) and other sequence modeling domains, where the choice of vocabulary size significantly impacts model performance. Despite its importance, selecting an optimal vocabulary size remains underexplored, typically relying on heuristics or dataset-specific choices. In this work, we propose a principled method for determining the vocabulary size by analyzing token frequency distributions through Zipf’s law. We show that downstream task performance correlates with how closely token distributions follow power-law behavior, and that aligning with Zipfian scaling improves both model efficiency and effectiveness. Extensive experiments across NLP, genomics, and chemistry demonstrate that models consistently achieve peak performance when the token distribution closely adheres to Zipf’s law, establishing Zipfian alignment as a robust and generalizable criterion for vocabulary size selection.
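The abstract does not specify how "Zipfian alignment" is quantified. A minimal sketch of one plausible measure, assuming alignment is scored as the goodness of a linear fit to the log-log rank-frequency curve: the function name `zipf_alignment` and the variable `counts_by_vocab` are illustrative, not from the paper.

```python
import numpy as np

def zipf_alignment(token_counts):
    """Score how closely a token frequency distribution follows Zipf's law.

    Fits a line to the log-log rank-frequency curve and returns the
    coefficient of determination (R^2) together with the fitted exponent.
    Higher R^2 means the distribution is closer to a pure power law.
    (Illustrative metric; the paper's exact criterion may differ.)
    """
    freqs = np.sort(np.asarray(token_counts, dtype=float))[::-1]
    freqs = freqs[freqs > 0]                        # drop unused vocabulary entries
    ranks = np.arange(1, len(freqs) + 1)

    log_r, log_f = np.log(ranks), np.log(freqs)
    slope, intercept = np.polyfit(log_r, log_f, 1)  # least-squares line in log-log space

    predicted = slope * log_r + intercept
    ss_res = np.sum((log_f - predicted) ** 2)
    ss_tot = np.sum((log_f - log_f.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    return r_squared, -slope                        # exponent s in f(r) ~ r^(-s)

# Hypothetical usage: counts_by_vocab maps a candidate vocabulary size to the
# token counts produced by a tokenizer trained with that size; the vocabulary
# whose distribution is most Zipfian would then be selected.
# best_vocab = max(counts_by_vocab,
#                  key=lambda v: zipf_alignment(counts_by_vocab[v])[0])
```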
Paper Type: Long
Research Area: Phonology, Morphology and Word Segmentation
Research Area Keywords: morphological segmentation, subword representations
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English, German, French, Chinese
Submission Number: 7460