Archiving Submission: Yes (archival)
Keywords: tokenization, morphology, multilingual NLP
TL;DR: We create an evaluation of morphological alignment for tokenizers in 70 languages and test its relationship to downstream language model performance.
Abstract: While tokenization is a key step in language modeling, with effects on model training and performance, it remains unclear how to effectively evaluate tokenizer quality. One proposed dimension of tokenizer quality is the extent to which tokenizers preserve linguistically meaningful subwords, aligning token boundaries with morphological boundaries within a word. Here, we expand on previous work and develop datasets for 86 languages, which can be used to study tokenizer quality crosslinguistically. We also develop a new evaluation framework, addressing limitations of previous evaluations and providing flexible evaluation for 71 of those languages. We then correlate our alignment scores with downstream task performance for five pre-trained language models on seven tasks, with at least one task for each language in our sample. We find that morphological alignment explains little of the variance in model performance, suggesting that morphological alignment alone does not capture the dimensions of tokenization quality relevant to model performance.
Submission Number: 32