EyeBench: Predictive Modeling from Eye Movements in Reading

Published: 18 Sept 2025 · Last Modified: 30 Oct 2025 · NeurIPS 2025 Datasets and Benchmarks Track poster · CC BY 4.0
Keywords: eye tracking, eye movements in reading, multimodal learning, cognitive modeling, human language processing, benchmark
Abstract: We present EyeBench, the first benchmark designed to evaluate machine learning models that decode cognitive and linguistic information from eye movements during reading. EyeBench offers an accessible entry point to the challenging and underexplored domain of modeling eye tracking data paired with text, aiming to foster innovation at the intersection of multimodal AI and cognitive science. The benchmark provides a standardized evaluation framework for predictive models, covering a diverse set of datasets and tasks, ranging from assessment of reading comprehension to detection of developmental dyslexia. Progress on the EyeBench challenge will pave the way for both practical real-world applications, such as adaptive user interfaces and personalized education, and scientific advances in understanding human language processing. The benchmark is released as an open-source software package that includes data downloading and harmonization scripts, baselines and state-of-the-art models, as well as evaluation code, publicly available at https://github.com/EyeBench/eyebench.
Code URL: https://github.com/EyeBench/eyebench
Primary Area: Data and Benchmarking scenarios in Neuroscience and cognitive science (e.g., neural coding, brain-computer interfaces)
Flagged For Ethics Review: true
Submission Number: 1980
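To make the input modality described in the abstract concrete, below is a minimal, self-contained Python sketch, not code from the EyeBench repository, that computes standard per-word reading measures (total fixation time and first-pass gaze duration) from a scanpath whose fixations are already aligned to word indices. Features of this kind are typical inputs to predictive models of reading; the function name and input format here are illustrative assumptions.

```python
from collections import defaultdict

def reading_measures(fixations):
    """Compute per-word reading measures from a scanpath.

    `fixations` is a list of (word_index, duration_ms) pairs in temporal
    order, i.e., fixations already aligned to the words of the stimulus text.
    Returns, per word index, total fixation time and first-pass gaze
    duration (time spent on the word before the eyes first leave it).
    """
    total = defaultdict(float)
    first_pass = defaultdict(float)
    first_pass_done = set()

    prev_word = None
    for word, dur in fixations:
        total[word] += dur
        if word not in first_pass_done:
            first_pass[word] += dur
        # Once the eyes move away from a word, its first pass is over;
        # later regressions to it count only toward total fixation time.
        if prev_word is not None and word != prev_word:
            first_pass_done.add(prev_word)
        prev_word = word

    return {w: {"total_fix_time": total[w], "first_pass": first_pass[w]}
            for w in total}

# Example scanpath: the reader fixates word 0, moves to word 1,
# regresses to word 0, then reads on to word 2.
scanpath = [(0, 210.0), (1, 180.0), (0, 150.0), (2, 240.0)]
print(reading_measures(scanpath))
```

In this example, word 0 receives a total fixation time of 360 ms but a first-pass duration of only 210 ms, since the 150 ms regression occurs after the word was first left.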