Archiving Submission: No (non-archival)
Keywords: Tokenization, tools and code
TL;DR: Byte-level subword tokenization creates vocabularies whose tokens aren't well-formed UTF-8, requiring workarounds to interpret them as code points.
Abstract: Subword tokenization segments input text according to a pre-defined vocabulary to feed it into a language model; the language model, in turn, generates sequences drawn from this same vocabulary. The members of the vocabulary can be built from code points or from bytes. Using code points means that every member of the vocabulary is a valid UTF-8 string. However, it also requires thousands of initial members to achieve acceptable coverage of inputs, and more than a million to avoid out-of-vocabulary errors entirely. Starting from bytes, by contrast, avoids out-of-vocabulary errors with only 256 initial vocabulary members, but neither the members of the vocabulary nor sequences of them are guaranteed to be valid UTF-8. Sequences that are not valid UTF-8 break code that assumes its input is valid UTF-8. Applications of language models that operate under this assumption must account for the breakage thereby introduced.
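To make the byte-level failure mode concrete, here is a minimal Python sketch (ours, not taken from the paper): a single multi-byte character is split into byte-level tokens, none of which is valid UTF-8 on its own.

```python
text = "€"                   # U+20AC, a three-byte character in UTF-8
data = text.encode("utf-8")  # b'\xe2\x82\xac'

# A byte-level tokenizer can place each byte in its own vocabulary entry,
# so a model may emit any one of these bytes as a standalone token.
tokens = [bytes([b]) for b in data]

for tok in tokens:
    try:
        tok.decode("utf-8")
        print(tok, "-> valid UTF-8")
    except UnicodeDecodeError:
        print(tok, "-> not valid UTF-8 on its own")

# Only the full concatenation decodes cleanly:
print(b"".join(tokens).decode("utf-8"))  # €
```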
In this paper, we formalize tokenization using monoid theory and prove that byte-level tokenizers with vocabularies smaller than the full Unicode space must either face out-of-vocabulary errors or generate invalid UTF-8 sequences. We demonstrate formally that incrementally converting tokens back to a string and interpreting the partial results as UTF-8 can give different results than converting the whole sequence of tokens at once. This mismatch manifests as real-world bugs; in some cases we discovered bugs whose existence the theoretical result predicted. We evaluate mitigations for the identified problem and provide case studies of major foundation models, serving engines, and constrained generation systems.
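The incremental-decoding mismatch can likewise be reproduced in a few lines of Python (an illustrative sketch, not the paper's formalization). The fix shown at the end, a stateful incremental decoder from the standard-library codecs module, is one common mitigation and not necessarily the one the paper evaluates.

```python
import codecs

tokens = [b"caf", b"\xc3", b"\xa9"]  # "café" split so "é" straddles two tokens

# Naive streaming: decode each token separately, replacing invalid bytes.
streamed = "".join(t.decode("utf-8", errors="replace") for t in tokens)

# Batch: concatenate all token bytes first, then decode once.
batch = b"".join(tokens).decode("utf-8", errors="replace")

print(streamed)  # 'caf' followed by two U+FFFD replacement characters
print(batch)     # 'café'
assert streamed != batch

# Mitigation: an incremental decoder buffers incomplete multi-byte
# sequences across token boundaries instead of replacing them eagerly.
dec = codecs.getincrementaldecoder("utf-8")(errors="replace")
fixed = "".join(dec.decode(t) for t in tokens) + dec.decode(b"", final=True)
assert fixed == batch  # 'café'
```

The incremental decoder holds the dangling `\xc3` lead byte until its continuation byte arrives, which is why its output matches the batch decode.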
Submission Number: 8