Improving Tokenisation by Alternative Treatment of Spaces

Anonymous

17 Dec 2021 (modified: 05 May 2023) · ACL ARR 2021 December Blind Submission
Abstract: Tokenisation is the first step in almost all NLP tasks, and state-of-the-art transformer-based language models all use subword tokenisation algorithms to process input text. Existing algorithms have problems, often producing tokenisations of limited linguistic validity and representing equivalent strings differently depending on their position within a word. We hypothesise that these problems hinder the ability of transformer-based models to handle complex words, and suggest that they arise from allowing tokens to include spaces. We thus experiment with an alternative tokenisation approach in which spaces are always treated as individual tokens, and find that it alleviates these problems and improves model performance. Concretely, we apply a modification to the BPE and Unigram algorithms which implements this approach, and find it gives more morphologically correct tokenisations, in particular when handling prefixes. In addition, we show that the modified algorithms give improved performance on downstream NLP tasks that involve handling complex words, whilst having no detrimental effect on general natural language understanding tasks. Given the results of our experiments, we advocate for always treating spaces as individual tokens as a superior tokenisation method.
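To make the core idea concrete, the following is a minimal sketch (not the authors' implementation; the baseline behaviour shown is the common GPT-2/BPE-style convention of attaching a leading space to the following word) contrasting conventional pre-tokenisation with the alternative where every space is emitted as its own token. A subword model such as BPE or Unigram would then be trained over the resulting space-free word pre-tokens, so a word receives the same tokenisation regardless of its position.

```python
import re

def space_attached_pretokenise(text):
    # Conventional convention (assumed baseline): a leading space is attached
    # to the word that follows it, so "world" and " world" are distinct
    # pre-tokens and may receive different subword tokenisations.
    return re.findall(r" ?\S+", text)

def space_as_token_pretokenise(text):
    # Alternative treatment sketched here: each space becomes an individual
    # token, so words are pre-tokenised identically whether they occur at the
    # start of the text or after a space.
    return [tok for tok in re.split(r"( )", text) if tok]

text = "unhappy people are unhappy"
print(space_attached_pretokenise(text))
# ['unhappy', ' people', ' are', ' unhappy']
print(space_as_token_pretokenise(text))
# ['unhappy', ' ', 'people', ' ', 'are', ' ', 'unhappy']
```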
Paper Type: long
Consent To Share Data: yes