Transformers Can Model Human Hyperprediction in Buzzer Quiz

Anonymous

16 Feb 2024, ACL ARR 2024 February Blind Submission, Readers: Everyone
Abstract: Humans are thought to predict upcoming words during sentence comprehension, and under certain circumstances they can predict longer, coherent word sequences. This study investigates whether language models can model such hyperprediction in human sentence processing, specifically in the context of buzzer quizzes. We conducted eye-tracking experiments in which participants read the first half of buzzer quiz questions and predicted the second half, and we modeled their reading times using language models. The results show that a pre-trained language model can partially capture human hyperprediction. When the language model was fine-tuned on quiz questions, its perplexity decreased, and lower perplexity corresponded to higher psychometric predictive power; however, when excessive data was used for fine-tuning, perplexity continued to decrease while the fine-tuned model's psychometric predictive power dropped.
Paper Type: long
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics
Contribution Types: NLP engineering experiment
Languages Studied: Japanese
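
Below is a minimal sketch, not taken from the submission, of the kind of pipeline the abstract describes: obtaining per-token surprisal and sentence perplexity from a pre-trained causal language model, so that surprisal can later be regressed against eye-tracking reading times (psychometric predictive power). The checkpoint name, the example quiz sentence, and all function names are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Japanese GPT-2 checkpoint; the submission does not specify a model here.
MODEL_NAME = "rinna/japanese-gpt2-medium"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def surprisals_and_perplexity(text: str):
    """Return per-token surprisals (in bits) and sentence perplexity for `text`."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits
    # Token t is predicted from tokens < t: align logits[:-1] with targets[1:].
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = input_ids[0, 1:]
    nll_nats = -log_probs[torch.arange(targets.size(0)), targets]
    surprisal_bits = (nll_nats / torch.log(torch.tensor(2.0))).tolist()
    perplexity = torch.exp(nll_nats.mean()).item()  # PPL = exp(mean NLL)
    tokens = tokenizer.convert_ids_to_tokens(targets.tolist())
    return list(zip(tokens, surprisal_bits)), perplexity


if __name__ == "__main__":
    # Hypothetical first half of a buzzer-quiz question (illustrative only).
    pairs, ppl = surprisals_and_perplexity("日本で一番高い山は何でしょう?")
    for tok, s in pairs:
        print(f"{tok}\t{s:.2f} bits")
    print(f"perplexity: {ppl:.1f}")
    # In a reading-time analysis, these surprisals would enter a regression
    # against eye-tracking measures (e.g., gaze duration) to estimate
    # psychometric predictive power.
```

Fine-tuning on quiz questions, as the abstract reports, would lower the perplexity computed this way; the paper's finding is that this does not monotonically improve the fit to human reading times.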