Keywords: tokenization, multilingual, Indian languages, subword tokenization
Abstract: Tokenization plays a pivotal role in NLP and is fundamental to training language models. However, existing tokenizers are often skewed towards high-resource languages, limiting their effectiveness for linguistically diverse and morphologically rich languages such as those of the Indian subcontinent. In this work, we present a comprehensive empirical study of multilingual tokenization across 17 Indic languages spanning 11 scripts and two language families. We systematically evaluate the effects of (i)~widely used subword algorithms (BPE \cite{sennrich-etal-2016-neural} and Unigram LM \cite{kudo-2018-subword}), (ii)~script- and orthography-aware normalization, (iii)~vocabulary size, and (iv)~multilingual vocabulary construction strategies. Combining intrinsic and extrinsic evaluations, we observe that: (i)~script-specific normalization improves tokenization quality, (ii)~Unigram LM preserves morphological boundaries better than BPE, and (iii)~cluster-based vocabulary construction improves downstream task performance over joint vocabulary construction.
Our findings highlight the importance of linguistically informed design choices in multilingual tokenization and offer practical guidance for building effective tokenizers for low-resource and morphologically complex languages.
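The BPE algorithm \cite{sennrich-etal-2016-neural} evaluated in the abstract can be illustrated with a minimal, self-contained sketch: starting from character-level word representations, it repeatedly merges the most frequent adjacent symbol pair. This is an illustrative toy implementation (the toy corpus and helper names are our own, not from the paper), not the training setup used in the study.

```python
import re
from collections import Counter

def get_pair_stats(vocab):
    # Count adjacent symbol pairs, weighted by word frequency.
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    # Replace each standalone occurrence of the pair with its concatenation.
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    joined = "".join(pair)
    return {pattern.sub(joined, word): freq for word, freq in vocab.items()}

def train_bpe(word_freqs, num_merges):
    # word_freqs: iterable of (word, frequency); symbols start as characters.
    vocab = {" ".join(word): freq for word, freq in word_freqs}
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges, vocab

# Toy English corpus (the classic example from the BPE literature).
corpus = [("low", 5), ("lower", 2), ("newest", 6), ("widest", 3)]
merges, vocab = train_bpe(corpus, num_merges=2)
print(merges)  # first two learned merges: [('e', 's'), ('es', 't')]
```

A Unigram LM tokenizer, by contrast, starts from a large candidate vocabulary and prunes it to maximize corpus likelihood, which is one reason it can align better with morpheme boundaries.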
Paper Type: Long
Research Area: Phonology, Morphology and Word Segmentation
Research Area Keywords: tokenization, multilingual, Indian languages, subword tokenization
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings, Data analysis
Languages Studied: Hindi, Marathi, Kannada, Malayalam, Telugu, Gujarati, Konkani, Urdu, Oriya, Bengali, Assamese, Sanskrit, Tamil, Sindhi, Punjabi, Maithili, Nepali
Submission Number: 10637