“Is Bigger Always Better?”: Comparing The Surprisals Of LLMs Against Humans For Sentence Comprehension

Anonymous

17 Apr 2023 · ACL ARR 2023 April Blind Submission · Readers: Everyone
Abstract: This paper investigates the similarities and differences between human and machine language processing by comparing human and machine surprisals from two self-paced-reading corpora. The study examines how the frequency distribution of surprisals changes with increasing context length and presents evidence that, with greater context, both humans and machine language models predict upcoming words more accurately, resulting in a narrower distribution of surprisal values. The study also analyzes how machine surprisals diverge from human surprisals across part-of-speech tags, and shows that increasing the context size yields a stronger correlation between machine surprisal and human processing effort. The findings further suggest that, as model complexity increases, machine language models may capture a wider range of cognitive and neural processes, potentially providing a more accurate representation of human language processing.
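To make the comparison concrete, below is a minimal sketch of the kind of surprisal computation the abstract describes: the surprisal of word w_i is -log2 P(w_i | context), computed once with a truncated left context and once with the full left context, then correlated with per-word reading times. The abstract does not name the models or corpora used, so the `gpt2` checkpoint, the truncation window of 2 tokens, and the reading times here are all illustrative assumptions, not the paper's actual setup.

```python
import math
import random

import torch
from scipy.stats import spearmanr
from transformers import GPT2LMHeadModel, GPT2TokenizerFast


def token_surprisals(text, model, tokenizer, max_context=None):
    """Per-token surprisal, -log2 p(w_i | context), with the left
    context optionally truncated to `max_context` tokens."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    surprisals = []
    for i in range(1, len(ids)):
        start = 0 if max_context is None else max(0, i - max_context)
        with torch.no_grad():
            logits = model(ids[start:i].unsqueeze(0)).logits[0, -1]
        logprob = torch.log_softmax(logits, dim=-1)[ids[i]].item()
        surprisals.append(-logprob / math.log(2))  # convert nats to bits
    return surprisals


model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

sentence = "The horse raced past the barn fell."
short = token_surprisals(sentence, model, tokenizer, max_context=2)
full = token_surprisals(sentence, model, tokenizer)

# Placeholder reading times; in the paper's setting these would come from
# the self-paced-reading corpora, aligned word-by-word with the surprisals.
random.seed(0)
reading_times = [random.gauss(320, 30) for _ in full]

rho_short, _ = spearmanr(short, reading_times)
rho_full, _ = spearmanr(full, reading_times)
print(f"short-context rho = {rho_short:.3f}, full-context rho = {rho_full:.3f}")
```

With real reading-time data in place of the random placeholders, the abstract's claim predicts that `rho_full` exceeds `rho_short`, i.e., longer contexts produce surprisals that track human processing effort more closely.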
Paper Type: long
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics