Keywords: Gaze-VLM, eye-tracking, human gaze, fixations, scanpaths, multimodal large language models, MLLM, vision-language models, VLM, visual question answering, VQA, referential ambiguity, disambiguation, grounding, referent grounding, gaze-informed VQA, human-in-the-loop, inference-time steering, training-free method, ambiguity resolution, attention alignment, visual attention, saliency, speech-aligned gaze, temporal dynamics, fixation filtering, real-time protocol, interactive VQA, benchmark dataset, evaluation suite, embedding-based similarity, open-ended generation, ambiguous questions, unambiguous questions, multimodal reasoning, instruction following, zero-shot generalization, AR/VR eye tracking, cognitive signals for AI, human attention signals, gaze augmentation, dataset release
TL;DR: We show that in real-time VQA, gaze fixations recorded just before a question is spoken are the most informative signal for resolving referential ambiguity at inference time across diverse VLMs, more than doubling accuracy on ambiguous questions without any retraining.
Abstract: Vision-Language Models (VLMs) have demonstrated remarkable capabilities in visual question answering (VQA), yet they often struggle with referential ambiguity when multiple objects in an image could satisfy a given query. To address this challenge, we present Gaze-VLM, a novel training-free approach that uses real-time eye-tracking data as an external alignment signal to resolve ambiguity in open-ended VQA. Through a comprehensive user study with 500 unique image-question pairs, we demonstrate that the fixations closest to the moment participants begin verbally asking their questions are the most informative for disambiguation in Multimodal Large Language Models (MLLMs), more than doubling response accuracy on ambiguous questions (from 35.2\% to 77.2\%) while maintaining performance on unambiguous queries. We evaluate our approach across state-of-the-art VLMs, showing consistent improvements when gaze data is incorporated for ambiguous image-question pairs, regardless of architectural differences. To facilitate future research in gaze-informed VQA, we release a new benchmark dataset pairing eye-movement data with disambiguated VQA queries, a novel real-time interactive protocol, and an evaluation suite. Our findings demonstrate that human visual attention signals can effectively guide VLMs toward intended referents in ambiguous contexts without requiring model retraining or architectural changes.
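To make the inference-time idea concrete, below is a minimal sketch of how speech-aligned fixation filtering and prompt augmentation could look. This is not the authors' implementation: the dataclass fields, the 2-second window, and the prompt wording are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's actual pipeline): keep the fixations
# recorded just before the speaker starts the question, then pass them to a
# VLM as a plain-text hint alongside the image and question.
from dataclasses import dataclass
from typing import List


@dataclass
class Fixation:
    t: float  # fixation onset time in seconds
    x: float  # normalized horizontal position in [0, 1]
    y: float  # normalized vertical position in [0, 1]


def fixations_before_speech(fixations: List[Fixation],
                            speech_onset: float,
                            window_s: float = 2.0) -> List[Fixation]:
    """Keep fixations that occurred within `window_s` seconds before speech onset."""
    return [f for f in fixations
            if speech_onset - window_s <= f.t <= speech_onset]


def gaze_augmented_prompt(question: str, fixations: List[Fixation]) -> str:
    """Append the selected fixation coordinates to the question as a text hint."""
    if not fixations:
        return question
    coords = ", ".join(f"({f.x:.2f}, {f.y:.2f})" for f in fixations)
    return (f"{question}\n"
            f"The speaker was looking at image locations (normalized x, y): {coords}. "
            f"Answer about the object they were attending to.")


# Example usage with made-up values:
fixs = [Fixation(t=3.1, x=0.62, y=0.40), Fixation(t=3.8, x=0.65, y=0.42)]
prompt = gaze_augmented_prompt("What color is it?",
                               fixations_before_speech(fixs, speech_onset=4.0))
print(prompt)
```

The resulting prompt can then be sent, together with the original image, to any off-the-shelf VLM, which is what makes a training-free scheme like this model-agnostic.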
Primary Area: applications to neuroscience & cognitive science
Submission Number: 21367