Sampling from Your Language Model One Byte at a Time

ICML 2025 Workshop TokShop · Submission 28

Published: 10 Jun 2025, Last Modified: 11 Jun 2025 · License: CC BY 4.0
Archiving Submission: No (non-archival)
TL;DR: We present an algorithm to sample text from off-the-shelf language models, conditioning on a sequence of bytes (instead of tokens).
Keywords: language models, tokenization, byte-level
Abstract: Tokenization is used almost universally by modern language models, enabling efficient text representation using multi-byte or multi-character tokens. These models are typically invoked to autoregressively complete a text prompt by tokenizing the prompt, sampling more tokens to continue the tokenized prompt, and detokenizing the result. However, prior work has shown that this process can introduce distortion into the model's sampling distribution, leading to unexpected or undesirable generations. For example, users are often advised not to end their prompts with a space because it prevents the model from including the space as part of the next token. While this heuristic is effective in English, the underlying problem continues to affect languages such as Chinese as well as code generation, settings where word and syntactic boundaries may not line up with token boundaries. We present an optimal method to solve this "Prompt Boundary Problem," which is based on an efficient online algorithm for Byte-Pair Encoding (BPE). This allows one to compute the next-byte distribution conditioned on an arbitrary byte prefix, given only logit access to the original tokenizer-based model. This procedure can be applied iteratively to convert any autoregressive LM with a BPE tokenizer into a character-level or byte-level LM, _without changing the generative distribution at the text level_. We show that this significantly improves next-character prediction accuracy when computed on arbitrary prefixes. Moreover, our method is able to unify the vocabularies of language models with different tokenizers, allowing one to ensemble LMs with different tokenizers at inference time as well as transfer the post-training from one model to another using proxy-tuning. We demonstrate in experiments that the ensemble and proxy-tuned models outperform their constituents on downstream evals.
Submission Number: 28
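
To make the byte-level conditioning in the abstract concrete, here is a minimal, self-contained sketch of the naive one-step idea: given a next-token distribution from a tokenizer-based LM and a "dangling" byte prefix (e.g., a prompt ending in a space), marginalize probability mass over the tokens whose byte expansions extend that prefix. This is only an illustration under invented data, not the paper's efficient online BPE algorithm; the vocabulary, probabilities, and function names below are hypothetical, and multi-token completions of the prefix are ignored.

```python
# Naive, illustrative sketch of next-byte conditioning: given a token-level
# next-token distribution, marginalize over tokens whose byte expansion is
# consistent with the remaining (un-tokenized) byte prefix. This one-step
# heuristic ignores completions that span multiple tokens and is NOT the
# paper's efficient online BPE algorithm.
from collections import defaultdict

# Hypothetical next-token probabilities a tokenizer-based LM might assign
# after some token prefix (byte expansions of the tokens as dict keys).
next_token_probs = {
    b" world": 0.50,
    b" word":  0.10,
    b" w":     0.05,
    b"!":      0.20,
    b" there": 0.15,
}

def next_byte_distribution(next_token_probs, remaining_bytes):
    """P(next byte | byte prefix) for bytes not yet covered by whole tokens.

    `remaining_bytes` is the suffix of the prompt that falls inside the next
    token (e.g. b" " when the user's prompt ends with a space).
    """
    byte_mass = defaultdict(float)
    total = 0.0
    for token, p in next_token_probs.items():
        # Keep only tokens whose bytes strictly extend the dangling prefix.
        if token.startswith(remaining_bytes) and len(token) > len(remaining_bytes):
            byte_mass[token[len(remaining_bytes)]] += p
            total += p
    # Renormalize over the surviving tokens.
    return {bytes([b]): m / total for b, m in byte_mass.items()} if total else {}

# Prompt ends with a space: condition on the dangling byte b" ".
print(next_byte_distribution(next_token_probs, b" "))
# -> mass concentrated on b'w' (" world", " word", " w") and b't' (" there")
```

In this toy example the trailing space is folded back into the next token rather than being left as a boundary, which is the behavior the Prompt Boundary Problem heuristic ("don't end prompts with a space") tries to approximate; the paper's method achieves this exactly and efficiently for arbitrary byte prefixes.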