How Much is Enough? The Diminishing Returns of Tokenization Training Data

ICML 2025 Workshop TokShop Submission 35

Published: 10 Jun 2025, Last Modified: 13 Jun 2025
License: CC BY 4.0
Archiving Submission: Yes (archival)
Keywords: Tokenization
TL;DR: We investigate the impact of scaling tokenizer training data on tokenization characteristics, revealing diminishing returns and a practical limit due to pre-tokenization constraints.
Abstract: Tokenization, a crucial initial step in natural language processing, is governed by several key parameters, such as the tokenization algorithm, vocabulary size, pre-tokenization strategy, inference strategy, and training data corpus. This paper investigates the impact of an often-overlooked hyperparameter: the size of the tokenizer training data. We train BPE, UnigramLM, and WordPiece tokenizers across various vocabulary sizes using English training data ranging from 1GB to 900GB. Our findings reveal diminishing returns as training data size increases beyond roughly 150GB, suggesting a practical limit to the improvements in tokenization quality achievable through additional data. We analyze this phenomenon and attribute the saturation effect to constraints introduced by the pre-tokenization stage. We then examine how well these findings generalize by experimenting on Russian, a language typologically distant from English. For Russian text, we observe diminishing returns after training on 200GB of data, approximately 33% more than for English. These results provide valuable insights for optimizing the tokenization process by reducing the compute required for training on large corpora, and they suggest promising directions for future research in tokenization algorithms.
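For concreteness, the following is a minimal sketch (not the authors' exact pipeline) of how such a study could be set up with the Hugging Face `tokenizers` library: training BPE, UnigramLM, and WordPiece tokenizers on corpus samples of increasing size and measuring a simple compression proxy, the average number of tokens per whitespace-separated word. The file paths, the single vocabulary size, the whitespace pre-tokenizer, and the held-out sentence are illustrative assumptions, not details taken from the paper.

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE, Unigram, WordPiece
from tokenizers.trainers import BpeTrainer, UnigramTrainer, WordPieceTrainer
from tokenizers.pre_tokenizers import Whitespace

# Hypothetical corpus samples of increasing size (paths are placeholders).
CORPUS_SAMPLES = {
    "1GB": ["data/en_1gb.txt"],
    "10GB": ["data/en_10gb.txt"],
    "150GB": ["data/en_150gb.txt"],
}
VOCAB_SIZE = 32_000  # one point in a vocabulary-size sweep


def build_tokenizer(algorithm: str) -> tuple[Tokenizer, object]:
    """Return an untrained tokenizer and its matching trainer."""
    if algorithm == "bpe":
        tok = Tokenizer(BPE(unk_token="[UNK]"))
        trainer = BpeTrainer(vocab_size=VOCAB_SIZE, special_tokens=["[UNK]"])
    elif algorithm == "unigram":
        tok = Tokenizer(Unigram())
        trainer = UnigramTrainer(
            vocab_size=VOCAB_SIZE, unk_token="[UNK]", special_tokens=["[UNK]"]
        )
    elif algorithm == "wordpiece":
        tok = Tokenizer(WordPiece(unk_token="[UNK]"))
        trainer = WordPieceTrainer(vocab_size=VOCAB_SIZE, special_tokens=["[UNK]"])
    else:
        raise ValueError(f"unknown algorithm: {algorithm}")
    # Pre-tokenization restricts subwords to within whitespace-separated words,
    # the stage the paper identifies as the source of the saturation effect.
    tok.pre_tokenizer = Whitespace()
    return tok, trainer


def avg_tokens_per_word(tokenizer: Tokenizer, text: str) -> float:
    """Compression proxy: tokens emitted per whitespace-separated word."""
    words = text.split()
    encoding = tokenizer.encode(text)
    return len(encoding.ids) / max(len(words), 1)


if __name__ == "__main__":
    held_out = "Tokenization quality saturates once the corpus covers most frequent word forms."
    for size_label, files in CORPUS_SAMPLES.items():
        for algorithm in ("bpe", "unigram", "wordpiece"):
            tokenizer, trainer = build_tokenizer(algorithm)
            tokenizer.train(files, trainer)  # train directly from raw text files
            score = avg_tokens_per_word(tokenizer, held_out)
            print(f"{size_label:>6} {algorithm:>10}: {score:.3f} tokens/word")
```

Under this setup, flattening of the tokens-per-word curve as the corpus size grows would correspond to the diminishing returns the abstract describes; in practice one would evaluate on a large held-out corpus rather than a single sentence.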
Submission Number: 35