Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship?

Published: 23 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Findings
Submission Type: Regular Short Paper
Submission Track: Linguistic Theories, Cognitive Modeling, and Psycholinguistics
Keywords: psycholinguistics, sentence processing, tokenization
TL;DR: We evaluate the surprisal of n-gram language models trained under orthographic, morphological, and BPE tokenizations against human reading time data.
Abstract: An important assumption that comes with using LLMs on psycholinguistic data has gone unverified: LLM-based predictions rest on subword tokenization, not on the decomposition of words into morphemes. Does that matter? We test this carefully by comparing surprisal estimates under orthographic, morphological, and BPE tokenization against reading time data. Our results replicate previous findings and provide evidence that, *in the aggregate*, predictions using BPE tokenization do not suffer relative to morphological and orthographic segmentation. However, a finer-grained analysis points to potential issues with relying on BPE-based tokenization, provides promising results involving morphologically-aware surprisal estimates, and suggests a new method for evaluating morphological prediction.
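A minimal sketch of the idea being tested: a word's surprisal is the sum of its tokens' surprisals, so the same word can receive different estimates depending on whether it is segmented orthographically, morphologically, or by BPE. The bigram model, add-k smoothing, and example segmentations below are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def train_bigram(token_sents, k=0.1):
    """Train an add-k smoothed bigram model over pre-tokenized sentences.

    token_sents: iterable of token lists, already segmented under one of the
    tokenization schemes being compared (orthographic, morphological, BPE).
    Returns a function giving log2 P(tok | prev).
    """
    unigrams, bigrams = Counter(), Counter()
    for sent in token_sents:
        toks = ["<s>"] + list(sent)
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    vocab_size = len(unigrams)

    def logp(prev, tok):
        return math.log2((bigrams[(prev, tok)] + k) /
                         (unigrams[prev] + k * vocab_size))
    return logp

def word_surprisal(logp, context_tokens, word_tokens):
    """Surprisal of a word (in bits) = sum of surprisals of its tokens."""
    total = 0.0
    prev = context_tokens[-1] if context_tokens else "<s>"
    for tok in word_tokens:
        total += -logp(prev, tok)
        prev = tok
    return total

# Hypothetical segmentations of the same word under the three schemes:
#   orthographic:  ["unhappiness"]
#   morphological: ["un", "happi", "ness"]
#   BPE-style:     ["unh", "app", "iness"]
```

The word-level surprisals produced this way (one set per tokenization scheme) can then be regressed against per-word reading times to ask whether the choice of segmentation affects psycholinguistic fit.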
Submission Number: 4340