Keywords: tokenization, unigram
Abstract: The Unigram tokenization algorithm offers a probabilistic alternative to the greedy heuristics of Byte-Pair Encoding.
Despite its theoretical elegance, the algorithm is complex to implement in practice, which has limited its adoption to the SentencePiece package and adapters thereof.
We bridge this gap between theory and practice by providing a clear guide to implementation and parameter choices.
We also identify a simpler algorithm that accepts slightly higher training loss in exchange for improved compression.
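At inference time, the core of the Unigram algorithm is a Viterbi search for the maximum-probability segmentation of a word under a unigram language model over subword pieces. The sketch below illustrates that search on a toy, hand-picked vocabulary; the probabilities and the `viterbi_segment` helper are illustrative assumptions, not the paper's implementation (in a real tokenizer the probabilities are learned by EM over a corpus).

```python
import math

# Toy vocabulary with made-up unigram probabilities (assumption for
# illustration only; real values are estimated by EM during training).
VOCAB = {
    "un": 0.1, "i": 0.05, "gram": 0.08,
    "u": 0.02, "n": 0.02, "unigram": 0.2,
    "g": 0.01, "r": 0.01, "a": 0.02, "m": 0.02,
}

def viterbi_segment(text, vocab):
    """Return the max-probability segmentation of `text` under a unigram LM."""
    n = len(text)
    best = [-math.inf] * (n + 1)   # best[i] = best log-prob of text[:i]
    best[0] = 0.0
    back = [0] * (n + 1)           # back[i] = start of last token on best path
    for i in range(1, n + 1):
        for j in range(i):
            piece = text[j:i]
            if piece in vocab:
                score = best[j] + math.log(vocab[piece])
                if score > best[i]:
                    best[i] = score
                    back[i] = j
    # Walk back-pointers to recover the token sequence.
    tokens, i = [], n
    while i > 0:
        tokens.append(text[back[i]:i])
        i = back[i]
    return tokens[::-1]

print(viterbi_segment("unigram", VOCAB))  # → ['unigram']
```

Here the whole-word piece wins because log(0.2) exceeds the summed log-probabilities of any multi-piece split, e.g. log(0.1) + log(0.05) + log(0.08) for "un" + "i" + "gram".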
Paper Type: Short
Research Area: Phonology, Morphology and Word Segmentation
Research Area Keywords: subword representations, morphological segmentation
Contribution Types: NLP engineering experiment, Approaches to low-compute settings (efficiency), Publicly available software and/or pre-trained models
Languages Studied: English, German, Korean, Chinese, Arabic, Hindi
Submission Number: 73