Archiving Submission: Yes (archival)
Keywords: tokenization, machine translation, language modeling, natural language processing
TL;DR: We introduce a novel approach that extends unigram tokenization by conditioning target token probabilities on source-language tokens from parallel data.
Abstract: We introduce conditional unigram tokenization, a novel approach that extends unigram tokenization by conditioning target token probabilities on source-language tokens from parallel data.
Given a fixed source tokenizer, our method learns a target tokenizer that maximizes cross-lingual semantic alignment.
We evaluate our tokenizer on four language pairs across different families and resource levels, examining intrinsic properties and downstream performance on machine translation and language modeling.
While our conditional tokenizer exhibits statistical properties comparable to standard unigram tokenizers, results are mixed: we observe no improvements in machine translation quality, but find consistent perplexity reductions in language modeling.
We hypothesize that the quadratic scaling of conditional probability estimation with vocabulary size creates a data-efficiency bottleneck.
Our findings suggest that alternative parameterizations may be necessary for practical cross-lingual tokenization.
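As a rough illustration of the scaling argument above (our notation, not the paper's): let V_s and V_t denote the source and target vocabularies, and let s_{a(j)} stand for whichever source token a target token t_j is conditioned on; the exact conditioning scheme is an assumption here.

  p_{\text{uni}}(t_1 \dots t_m) = \prod_{j=1}^{m} p(t_j)  \qquad\text{(} |V_t| \text{ parameters)}
  p_{\text{cond}}(t_1 \dots t_m \mid x) = \prod_{j=1}^{m} p(t_j \mid s_{a(j)})  \qquad\text{(} |V_s| \cdot |V_t| \text{ parameters)}

With |V_s| \approx |V_t| = V, the conditional table grows as O(V^2), which is the data-efficiency bottleneck hypothesized in the abstract.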
Submission Number: 15