GazeSummary: Exploring Gaze as an Implicit Prompt for Personalization in Text-based LLM Tasks

Jiexin Ding, Yizhuo Zhang, Xinyun Liu, Ke Chen, Yuntao Wang, Shwetak N. Patel, Akshay Gadre

Published: 2026 · Last Modified: 08 Apr 2026 · HotMobile 2026 · CC BY-SA 4.0
Abstract: Smart glasses are accelerating progress toward seamless, personalized LLM-based assistance by integrating multimodal inputs. Yet these systems still rely on obtrusive, explicit prompts. The advent of gaze tracking on smart devices offers a unique opportunity to extract implicit user intent for personalization. This paper investigates whether LLMs can interpret user gaze for text-based tasks. We evaluate different gaze representations for personalization and validate their effectiveness in realistic reading tasks. Results show that LLMs can leverage gaze to generate high-quality personalized summaries and support users in downstream tasks, highlighting the feasibility and value of gaze-driven personalization for future mobile and wearable LLM applications.