Archiving Submission: Yes (archival)
Keywords: Kurdish language processing, morphological segmentation, tokenization strategies, word embeddings, BPE, low-resource languages, BiLSTM-CRF, evaluation bias, neural morphology, subword units
TL;DR: Comparing tokenization strategies for Kurdish word embeddings reveals that morphological approaches provide more balanced performance than statistical methods like BPE when fairly evaluated.
Abstract: We investigate tokenization strategies for Kurdish word embeddings by comparing word-level, morpheme-based, and BPE approaches on morphological similarity preservation tasks. We develop a BiLSTM-CRF morphological segmenter using bootstrapped training from minimal manual annotation and evaluate Word2Vec embeddings across comprehensive metrics including similarity preservation, clustering quality, and semantic organization. Our analysis reveals critical evaluation biases in tokenization comparison. While BPE initially appears superior in morphological similarity, it covers only 28.6% of test cases compared to 68.7% for the morpheme model, creating artificial performance inflation. When assessed comprehensively, morpheme-based tokenization demonstrates superior embedding space organization, better semantic neighborhood structure, and more balanced coverage across morphological complexity levels. These findings highlight the importance of coverage-aware evaluation in low-resource language processing and offer guidance for selecting tokenization methods in low-resource settings.
Submission Number: 31