Keywords: Human-Centered AI, Code Intelligence, Eye Tracking, Large Language Models, Attention Mechanisms, Code Generation, Program Comprehension
Abstract: Code Language Models (CodeLLMs) traditionally learn attention based solely on statistical input-output token correlations (“machine attention”). In contrast, human developers rely on intuition, selectively fixating on semantically salient tokens during program comprehension. We present EyeMulator, a model-agnostic technique to align CodeLLM attention with human visual attention without architectural changes. By extracting scan paths from eye-tracking data, we derive token-level attention weights used to augment the loss function during fine-tuning. This induces the model to mimic human focus. Our evaluation across StarCoder, Llama-3.2, and DeepSeek-Coder shows that EyeMulator significantly outperforms baselines, achieving gains of over 30 CodeBLEU points in translation and up to 22 BERTScore points in summarization. Ablation studies confirm that these gains stem directly from replicating human attention dynamics. Artifacts are available at https://zenodo.org/records/16134801.
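The abstract describes augmenting the fine-tuning loss with token-level weights derived from eye-tracking scan paths. A minimal sketch of what such an augmentation might look like is below; it is not the authors' implementation. The function name `eyemulator_loss`, the MSE alignment term, and the trade-off weight `lam` are illustrative assumptions — the paper's actual loss formulation may differ.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def eyemulator_loss(base_loss, model_attn_logits, human_weights, lam=0.5):
    """Sketch of a human-attention-aligned training objective.

    base_loss          -- scalar task loss (e.g. cross-entropy) from fine-tuning
    model_attn_logits  -- unnormalized per-token attention scores from the model
    human_weights      -- token-level weights derived from eye-tracking scan
                          paths (e.g. fixation counts), not yet normalized
    lam                -- hypothetical trade-off coefficient (assumption)
    """
    p = softmax(np.asarray(model_attn_logits, dtype=float))
    q = np.asarray(human_weights, dtype=float)
    q = q / q.sum()  # normalize human fixations into a distribution
    # Alignment term: mean-squared error between the model's attention
    # distribution and the human-derived one.
    align = float(np.mean((p - q) ** 2))
    return base_loss + lam * align

# Example: 4 tokens, human fixations concentrated on token index 2.
loss = eyemulator_loss(
    base_loss=1.2,
    model_attn_logits=[0.1, 0.3, 2.0, 0.2],
    human_weights=[1, 2, 10, 1],  # fixation counts per token
)
```

Because the model's attention already peaks on the same token the humans fixated, the alignment penalty here is small; a mismatched distribution would raise the combined loss, nudging the model toward human focus during fine-tuning.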
Paper Type: Long
Research Area: Code Models
Research Area Keywords: Code language models
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: Java, C#
Submission Number: 8230