Abstract: Human gaze data provide cognitive information that reflects human language comprehension and have been effectively integrated into a variety of natural language processing (NLP) tasks, demonstrating improved performance over corresponding plain-text-based models. In this work, we propose to integrate a gaze module into pre-trained language models (PLMs) at the fine-tuning stage to improve their ability to learn representations that are grounded in human language processing. This is done by extending the conventional purely text-based fine-tuning objective with an auxiliary loss that exploits cognitive signals. The gaze module is included only during training, retaining compatibility with existing PLM-based pipelines. We evaluate the proposed approach with two distinct PLMs on the GLUE benchmark and observe that it improves performance over both standard fine-tuning and traditional text-augmentation baselines. All code is available at \url{anonymous_git}.
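The following is a minimal sketch of the auxiliary-loss fine-tuning idea described in the abstract: a standard PLM classifier is extended with a training-only gaze head, and the task loss is combined with an auxiliary loss on gaze targets. All names (PLMWithGazeModule, gaze_head, lambda_gaze, etc.) are hypothetical; the paper's actual gaze module, loss, and hyperparameters may differ.

```python
# Hypothetical sketch of fine-tuning a PLM with an auxiliary gaze loss.
import torch
import torch.nn as nn
from transformers import AutoModel

class PLMWithGazeModule(nn.Module):
    def __init__(self, plm_name="bert-base-uncased", num_labels=2, num_gaze_features=1):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm_name)
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(hidden, num_labels)          # kept at inference
        self.gaze_head = nn.Linear(hidden, num_gaze_features)    # used only during training

    def forward(self, input_ids, attention_mask, labels=None,
                gaze_targets=None, lambda_gaze=0.1):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]                     # [CLS] representation
        logits = self.classifier(pooled)
        loss = None
        if labels is not None:
            loss = nn.functional.cross_entropy(logits, labels)
            if gaze_targets is not None:
                # Auxiliary regression loss on token-level gaze features,
                # masked so padding tokens do not contribute.
                gaze_pred = self.gaze_head(out.last_hidden_state).squeeze(-1)
                mask = attention_mask.float()
                gaze_loss = nn.functional.mse_loss(gaze_pred * mask, gaze_targets * mask)
                loss = loss + lambda_gaze * gaze_loss
        return logits, loss
```

At inference the gaze head is simply dropped and the model behaves as an ordinary fine-tuned PLM classifier, which is consistent with the abstract's claim that the gaze module is included only during training.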
Paper Type: short
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics
Languages Studied: English
Preprint Status: There is no non-anonymous preprint and we do not intend to release one.
A1: yes
A1 Elaboration For Yes Or No: Section Limitations
A2: yes
A2 Elaboration For Yes Or No: Section Ethics Statement
A3: yes
B: yes
B1: yes
B6: yes
C: yes
C1: yes
C2: yes
C3: yes
C4: yes
D: no
E: yes
E1: n/a
E1 Elaboration For Yes Or No: AI assistants were used only to polish the language, not to suggest new content.