Archiving Submission: No (non-archival)
Previous Venue If Non Archival: ACL 2025
Keywords: Tokenization, bias, pre-training, subword representations, vocabulary learning, language modeling, causality
TL;DR: We estimate tokenisation bias: the causal effect of having a subword in a model's vocabulary on the model's ability to predict its characters.
Abstract: Modern language models are typically trained over subword sequences, but ultimately define probabilities over character-strings. Ideally, the choice of the tokeniser---which maps character-strings to subwords---should not affect the probability assigned to the underlying character-string; in practice, it does. We define this mismatch as **tokenisation bias**. In this work, we quantify one particular type of tokenisation bias: the effect of including a subword (e.g., $\langle$ hello $\rangle$) in a tokeniser's vocabulary, or not, on the probability a trained model assigns to the corresponding characters (i.e., "hello"). Estimating this effect is challenging because each model is trained with only one tokeniser. We address this by framing tokenisation bias as a causal effect and estimating it using the regression discontinuity design. Specifically, we exploit the fact that tokenisers rank subwords and add the first $K$ subwords to their vocabularies, where $K$ is an arbitrary cutoff point. As such, we can estimate a causal effect by comparing similar subwords around this cutoff. Experimentally, we find that tokenisation consistently affects models' outputs across scales, vocabularies, and tokenisers. Notably, a subword's presence in a small model's vocabulary may increase its characters' probability by up to 17 times, or 2.88 more nats on average, highlighting tokenisation as a key design choice in language modelling.
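A minimal sketch of the regression-discontinuity idea from the abstract, assuming hypothetical data: `ranks` holds each subword's position in the tokeniser's ordering, `logprobs` the log-probability a trained model assigns to that subword's characters, and `K` the vocabulary cutoff. The function name, bandwidth, and synthetic data below are illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np

def rdd_estimate(ranks, logprobs, K, bandwidth=500):
    """Sharp regression-discontinuity estimate of tokenisation bias.

    Subwords with rank < K are in the vocabulary (treated); those with
    rank >= K are not (control). We fit a separate local linear trend on
    each side of the cutoff and take the jump at K as the causal effect.
    """
    # Keep only subwords within `bandwidth` ranks of the cutoff.
    mask = np.abs(ranks - K) < bandwidth
    r, y = ranks[mask] - K, logprobs[mask]  # centre the running variable at 0

    left, right = r < 0, r >= 0
    # Local linear fit on each side of the cutoff.
    b_left = np.polyfit(r[left], y[left], 1)
    b_right = np.polyfit(r[right], y[right], 1)
    # Effect of inclusion = in-vocabulary intercept minus out-of-vocabulary
    # intercept, both evaluated at the cutoff (r = 0).
    return np.polyval(b_left, 0) - np.polyval(b_right, 0)

# Illustrative synthetic data: included subwords get a +2.9 nat boost.
rng = np.random.default_rng(0)
ranks = np.arange(30_000)
logprobs = (-8.0 - 1e-4 * ranks + 2.9 * (ranks < 20_000)
            + rng.normal(0, 0.5, 30_000))
print(rdd_estimate(ranks, logprobs, K=20_000))  # ~2.9
```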
Submission Number: 52